Pentagon hires British scientist to help build robot soldier that 'won't commit war crimes'
The US Army and Navy have both hired experts in the ethics of building machines to prevent the creation of an amoral Terminator-style killing machine that murders indiscriminately.
By 2010 the US will have invested $4 billion in a research programme into "autonomous systems", the military jargon for robots, on the basis that they would not succumb to fear or the desire for vengeance that afflicts frontline soldiers.
A British robotics expert has been recruited by the US Navy to advise them on building robots that do not violate the Geneva Conventions.
Colin Allen, a scientific philosopher at Indiana University, has just published a book summarising his views, entitled Moral Machines: Teaching Robots Right From Wrong.
He told The Daily Telegraph: "The question they want answered is whether we can build automated weapons that would conform to the laws of war. Can we use ethical theory to help design these machines?"
Pentagon chiefs are concerned by studies of combat stress in Iraq that show high proportions of frontline troops supporting torture and retribution against enemy combatants.
Ronald Arkin, a computer scientist at Georgia Tech who is working on software for the US Army, has written a report concluding that robots, while not "perfectly ethical in the battlefield", can "perform more ethically than human soldiers".
He says that robots "do not need to protect themselves" and "they can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events".
Airborne drones are already used in Iraq and Afghanistan to launch air strikes against militant targets and robotic vehicles are used to disable roadside bombs and other improvised explosive devices.
Last month the US Army took delivery of a new robot built by an American subsidiary of the British defence company QinetiQ, which can fire everything from bean bags and pepper spray to high-explosive grenades and a 7.62mm machine gun.
But this generation of robots is still remotely operated by humans. Researchers are now working on "soldier bots" that could identify targets and weapons, and distinguish between enemy forces, such as tanks or armed men, and soft targets, such as ambulances or civilians.
Their software would be embedded with rules of engagement conforming to the Geneva Conventions, telling the robot when it may open fire.
Dr Allen applauded the decision to tackle the ethical dilemmas at an early stage. "It's time we started thinking about the issues of how to take ethical theory and build it into the software that will ensure robots act correctly rather than wait until it's too late," he said.
"We already have computers out there that are making decisions that affect people's lives but they do it in an ethically blind way. Computers decide on credit card approvals without any human involvement and we're seeing it in some situations regarding medical care for the elderly," a reference to hospitals in the US that use computer programmes to help decide which patients should not be resuscitated if they fall unconscious.
Dr Allen said the US military wants fully autonomous robots because they currently use highly trained manpower to operate them. "The really expensive robots are under the most human control because they can't afford to lose them," he said.
"It takes six people to operate a Predator drone round the clock. I know the Air Force has developed software, which they claim is to train Predator operators. But if the computer can train the human it could also ultimately fly the drone itself."
Some are concerned that it will be impossible to devise robots that avoid mistakes, conjuring up visions of machines killing indiscriminately when they malfunction, like the rogue ED-209 robot in the film RoboCop.
Noel Sharkey, a computer scientist at Sheffield University, best known for his involvement with the cult television show Robot Wars, is the leading critic of the US plans.
He says: "It sends a cold shiver down my spine. I have worked in artificial intelligence for decades, and the idea of a robot making decisions about human termination is terrifying."
Regardless of the imperfections of human troops, the concept of an automated robot that makes decisions about who to kill raises many ethical and legal concerns, not to mention the fact that a software glitch could prove devastating. Nevertheless, it seems certain that automated or semi-automated robots will play a growing role in global conflicts.
The odd technological catastrophe aside, these robots will follow the rules of engagement as defined by their masters. People in combat zones should therefore probably be more concerned about which conventions of war the relevant commander-in-chief believes are applicable than about individual decisions made by the troops (robotic or otherwise) on the ground.