Moral Battle Robots for War

Paging Philip K. Dick, Horselover Fat, come in... "Second Variety"? Moral?
http://www.nytimes.com/2008/11/25/scien ... ?_r=2&8dpc
ATLANTA — In the heat of battle, their minds clouded by fear, anger or vengefulness, even the best-trained soldiers can act in ways that violate the Geneva Conventions or battlefield rules of engagement. Now some researchers suggest that robots could do better.
“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” said Ronald C. Arkin, a computer scientist at Georgia Tech, who is designing software for battlefield robots under contract with the Army. “That’s the case I make.”
Robot drones, mine detectors and sensing devices are already common on the battlefield but are controlled by humans. Many of the drones in Iraq and Afghanistan are operated from a command post in Nevada. Dr. Arkin is talking about true robots operating autonomously, on their own.
He and others say that the technology to make lethal autonomous robots is inexpensive and proliferating, and that the advent of these robots on the battlefield is only a matter of time. That means, they say, it is time for people to start talking about whether this technology is something they want to embrace. “The important thing is not to be blind to it,” Dr. Arkin said. Noel Sharkey, a computer scientist at the University of Sheffield in Britain, wrote last year in the journal Innovative Technology for Computer Professionals that “this is not a ‘Terminator’-style science fiction but grim reality.”
He said South Korea and Israel were among countries already deploying armed robot border guards. In an interview, he said there was “a headlong rush” to develop battlefield robots that make their own decisions about when to attack.
“We don’t want to get to the point where we should have had this discussion 20 years ago,” said Colin Allen, a philosopher at Indiana University and a co-author of “Moral Machines: Teaching Robots Right From Wrong,” published this month by Oxford University Press.
Randy Zachery, who directs the Information Science Directorate of the Army Research Office, which is financing Dr. Arkin’s work, said the Army hoped this “basic science” would show how human soldiers might use and interact with autonomous systems and how software might be developed to “allow autonomous systems to operate within the bounds imposed by the warfighter.”
“It doesn’t have a particular product or application in mind,” said Dr. Zachery, an electrical engineer. “It is basically to answer questions that can stimulate further research or illuminate things we did not know about before.”
And Lt. Col. Martin Downie, a spokesman for the Army, noted that whatever emerged from the work “is ultimately in the hands of the commander in chief, and he’s obviously answerable to the American people, just like we are.”
In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.
Rest at link.
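For what it's worth, Arkin's published work on this (his "Governing Lethal Behavior" report for the Army) centers on what he calls an "ethical governor": a component that vets each proposed lethal action against encoded constraints before it is allowed to execute. Here is a minimal toy sketch of that idea; the rule names, fields, and thresholds are my own invention for illustration, not his actual system:

```python
# Toy sketch of an "ethical governor": a filter that vets a proposed
# lethal action against encoded constraints before permitting it.
# The constraint set below is invented for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    lethal: bool
    target_is_combatant: bool   # discrimination (principle of distinction)
    collateral_estimate: float  # expected non-combatant harm, 0..1
    military_necessity: float   # expected military value, 0..1

def governor_permits(action: Action) -> bool:
    """Return True only if every encoded constraint is satisfied."""
    if not action.lethal:
        return True                       # non-lethal actions pass through
    if not action.target_is_combatant:
        return False                      # principle of distinction
    if action.collateral_estimate > action.military_necessity:
        return False                      # crude proportionality test
    return True

# Usage: the robot proposes, the governor disposes.
proposed = Action(lethal=True, target_is_combatant=True,
                  collateral_estimate=0.1, military_necessity=0.8)
print(governor_permits(proposed))  # True under these invented thresholds
```

The point of the design is that the veto logic sits outside whatever targeting system proposes the action, so the hard ethical judgments reduce to whether the constraints themselves are adequate.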
I think there is already a good code for robots: the one penned by Asimov.
His First Law reads: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
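Worth noting, though, that the two halves of that law are computationally very different beasts: vetoing the robot's own harmful actions is a simple guard, while the "through inaction" clause demands foreseeing every harm the robot could have prevented. A toy illustration of the asymmetry, with all names and structures hypothetical:

```python
# Toy illustration of Asimov's First Law, showing the asymmetry
# between its two clauses. Everything here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Harm:
    preventable_by_robot: bool

@dataclass
class WorldModel:
    foreseeable_harms: list = field(default_factory=list)

def first_clause_ok(action_harms_human: bool) -> bool:
    # Clause 1 is an easy veto: inspect the robot's own proposed action.
    return not action_harms_human

def second_clause_ok(world: WorldModel) -> bool:
    # Clause 2 ("through inaction") is the hard part: the robot must
    # foresee harms it did not cause and act on any it could prevent.
    return not any(h.preventable_by_robot for h in world.foreseeable_harms)

world = WorldModel(foreseeable_harms=[Harm(preventable_by_robot=True)])
print(first_clause_ok(False))   # True: the robot's own action is harmless
print(second_clause_ok(world))  # False: standing by would violate the law
```

The second check is open-ended in any realistic setting, which is exactly why a battlefield robot built to Asimov's code would be a much taller order than one built to a fixed list of rules of engagement.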