Experts alarmed over AI in military as Gaza turns into “testing ground” for US-made war robots

Experts are alarmed by the military's increasing use of artificial intelligence, especially in autonomous weapons that can select and engage targets without human oversight. Such systems could undermine accountability and risk targeting civilians in violation of international law.

While the Pentagon’s directive on AI aims to minimize unintended bias, it allows for the development of autonomous weapons and their deployment in urgent situations, and it does not apply to all U.S. agencies. Critics say it fails to adequately address the legal and ethical risks.

Military contractors are developing increasingly autonomous weapons, such as drones and robotic dogs, with Gaza serving as a testing ground for some U.S.-made systems. Critics warn these technologies could spread and prolong conflicts while distancing forces from the risks of combat.

Proponents argue that new technologies enhance precision, but critics note that drone strikes have still caused civilian deaths. Autonomous targeting also relies on data and algorithms that may incorporate biases.

Ultimately, experts say the drive for new weapons technologies should not distract from human responsibility for decisions made in conflict, and that war inherently exceeds ethical bounds, with or without new machines. More work is needed on policy and oversight of autonomous military AI.

Source: Salon
