The Future of War: Autonomous AI and the Threat of ‘Killer Robots’ – Report

Until now, the decision to use lethal weapons in combat against an enemy has always rested with a human – but changes are coming.

Advances in artificial intelligence (AI) – including cutting-edge robotics and machine image recognition – have pushed the once-fantastic idea of ‘killer robots’ onto the global stage. Autonomous war machines firing live ammunition could soon seek out and destroy battlefield combatants, leading many to wonder whether there is an ‘off’ switch.

China, the US and other nations are working to advance artificial intelligence, machine image recognition and semi-autonomous robotics for use in combination with sensors and targeting computers, according to a New York Post report published Thursday.

Britain and Israel are currently using missiles and drones with autonomous features; such weapons can attack enemy radar, vehicles or ships without human commands.

Technology that lets weapon systems autonomously identify and destroy targets has existed for several decades. In the 1980s and 90s, US war planners developed the Harpoon and Tomahawk missiles, which could identify targets autonomously. In 2003, the US Army developed the Counter-Rocket, Artillery and Mortar (C-RAM) system, a set of systems that detect incoming rockets, artillery and mortar rounds while they are still airborne and alert a human operator. The operator can then, with the press of a button, destroy the incoming threat with ammunition that self-destructs in the air to minimize injury to civilians on the ground.
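
To make the distinction concrete, here is a minimal sketch in Python of the human-in-the-loop pattern described above. It is purely illustrative – all names are hypothetical and it is not based on any actual C-RAM software: the system detects and tracks the threat automatically, but nothing fires until a human gives the order.

```python
from dataclasses import dataclass


@dataclass
class Track:
    """A radar track for one incoming round (illustrative only)."""
    track_id: int
    threat_type: str          # e.g. "rocket", "artillery", "mortar"
    seconds_to_impact: float


def operator_confirms(track: Track) -> bool:
    """Stand-in for the human decision; on a real console this would be
    a physical button press, not a text prompt."""
    answer = input(f"Engage {track.threat_type} track {track.track_id} "
                   f"({track.seconds_to_impact:.1f}s to impact)? [y/N] ")
    return answer.strip().lower() == "y"


def handle_detection(track: Track) -> None:
    # The system only alerts the operator; it never fires on its own.
    print(f"ALERT: incoming {track.threat_type}, track {track.track_id}")
    if operator_confirms(track):
        print("Firing self-destructing interceptor rounds.")
    else:
        print("Holding fire: operator did not authorize engagement.")


if __name__ == "__main__":
    handle_detection(Track(track_id=7, threat_type="mortar", seconds_to_impact=12.0))
```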

During the 1947-1991 Cold War between the Soviet Union and the US, the US Navy deployed its Phalanx weapon system for defense against anti-ship missiles and helicopters; the system uses its own radar to single out airborne threats and engage them with a rapid-fire gun, all without the intervention of human operators.
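
A fully autonomous defensive system like Phalanx closes the same loop with no human in it. A hedged sketch of that difference, again with invented names and toy logic rather than the Navy’s actual software:

```python
import random


def sensor_sweep():
    """Toy sensor feed of (object_id, looks_hostile) pairs. In a real
    system this classification step is the hard, safety-critical part."""
    return [(i, random.random() > 0.7) for i in range(3)]


def autonomous_defense_loop(sweeps: int = 5) -> None:
    # Detect, classify and engage with no operator in the loop --
    # the property that distinguishes this from the C-RAM sketch above.
    for sweep in range(sweeps):
        for object_id, hostile in sensor_sweep():
            if hostile:
                print(f"sweep {sweep}: engaging track {object_id} automatically")


if __name__ == "__main__":
    autonomous_defense_loop()
```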

Robert Work, a senior fellow at the Center for a New American Security in Washington, recently told the New York Post that a true robot killing machine could be defined as a lethal autonomous weapon that decides – based on its programming – who and what to destroy.

“There’s a type of fire-and-forget weapon where the weapon itself decides, ‘Okay, this is what I see happening in the battlefield, and I think I’m going to have to take out this particular tank because I think this tank is the command tank,'” Work noted, describing his definition of a true independent weapons platform.

Currently, the US Defense Advanced Research Projects Agency (DARPA) has a Collaborative Operations in Denied Environment (CODE) program focused on developing software that allows groups of drones to work in teams.

According to the DARPA website, “DARPA’s CODE program aims to overcome these limitations with new algorithms and software for existing unmanned aircraft that would extend mission capabilities and improve US forces’ ability to conduct operations in denied or contested airspace. CODE researchers seek to create a modular software architecture beyond the current state of the art that is resilient to bandwidth limitations and communications disruptions yet compatible with existing standards and amenable to affordable retrofit into existing platforms.”

According to Paul Scharre, author of “Army of None: Autonomous Weapons and the Future of War,” the purpose of CODE is not to develop autonomous weapons, but rather to adapt to “a world where we’ll have groups of robots operating collaboratively together under one person in supervisory control. The program manager has compared it to wolves hunting in coordinated packs,” the New York Post reported.

The role of human operators in the CODE program would simply be to monitor the drones.
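
What supervisory control might look like in code: one person oversees the whole team while the drones divide the work among themselves. This is a hypothetical sketch of the concept as Scharre describes it – the class names, and the idea that every detection is escalated to the supervisor, are assumptions, not the actual CODE architecture:

```python
from dataclasses import dataclass, field


@dataclass
class Drone:
    name: str
    detections: list = field(default_factory=list)

    def search(self, sector: str) -> None:
        # Placeholder for onboard sensing; a real platform would fuse
        # radar and imagery data here.
        self.detections.append(f"possible vehicle in {sector}")


@dataclass
class Supervisor:
    """The one human overseeing the team: drones coordinate among
    themselves, but detections are escalated here for a decision."""
    log: list = field(default_factory=list)

    def review(self, drone: Drone) -> None:
        for detection in drone.detections:
            print(f"{drone.name} reports {detection} -> awaiting supervisor decision")
            self.log.append((drone.name, detection))


if __name__ == "__main__":
    team = [Drone("wolf-1"), Drone("wolf-2"), Drone("wolf-3")]
    for drone, sector in zip(team, ["sector A", "sector B", "sector C"]):
        drone.search(sector)      # the pack divides the search area
    supervisor = Supervisor()
    for drone in team:
        supervisor.review(drone)  # one person stays in supervisory control
```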

“The idea here is that CODE is going after mobile or rapidly relocatable targets, so the target locations cannot be specified precisely in advance by humans,” Scharre said.

“It’s not like a Tomahawk cruise missile, where you just program in the coordinates and then the missile goes and strikes it. The drones have to be able to search an area and find targets on the move,” he added, as cited by NYTimes.com.
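
Scharre’s distinction can be restated as two targeting modes. The sketch below is illustrative only – the coordinates and grid contents are made up – but it shows why a mobile target forces the platform itself to search and decide what it has found:

```python
def strike_fixed(coords: tuple) -> str:
    """Tomahawk-style: a human supplies exact coordinates in advance."""
    return f"flying to preprogrammed point {coords}"


def search_area(grid: list, looks_like_target) -> str:
    """Drone-style: the target moves, so the platform must search the
    area and classify what it finds -- the autonomy under debate."""
    for cell, contents in grid:
        if looks_like_target(contents):
            return f"mobile target found in cell {cell}: {contents}"
    return "no target found; continuing search"


if __name__ == "__main__":
    print(strike_fixed((34.5, 69.2)))
    grid = [("A1", "empty"), ("A2", "vehicle convoy"), ("A3", "empty")]
    print(search_area(grid, lambda contents: contents != "empty"))
```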

Companies in the US have also been developing loitering munitions – essentially drones that can independently identify and destroy enemy radar systems.

According to specialists like Tony Cerri, who recently oversaw data science, models and simulations at the US Army Training and Doctrine Command, autonomous weapons are necessary.

“Imagine that we are fighting in a city and we have a foe that is using human life indiscriminately as a human shield,” Cerri told the New York Post. “It’s constantly in your face as you’re out walking in the street. You can’t deal with every situation. You are going to make a mistake.”

“A robot, operating with milliseconds, looking at data that you can’t even begin to conceive, is going to say this is the right time to use this kind of weapon to limit collateral damage,” he added, arguing that robot killing machines are less likely to make mistakes.

Activists, however, argue that the use of autonomous killer robots is morally wrong.

“We’ve focused on two things that we want to see remain under meaningful, or appropriate, or adequate, or necessary human control,” Mary Wareham, the global coordinator for the Campaign to Stop Killer Robots, told the New York Post.

“That’s the identification and selection of targets and then the use of force against them, lethal or otherwise,” she noted. These, she argued, are the key decision points where only human judgment – capable of discriminating between enemy and bystander – can keep the response proportionate and be held accountable under the conventions of war.

In June, Google published a statement asserting that neither it nor its parent company Alphabet, Inc. will use artificial intelligence to develop “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

Source: sputniknews.com
