The stark warning — which includes discussion of a "Terminator"-style scenario in which robots turn on their human masters — is part of a hefty report funded by and prepared for the U.S. Navy's high-tech and secretive Office of Naval Research.
The report, the first serious work of its kind on military robot ethics, envisages a fast-approaching era where robots are smart enough to make battlefield decisions that are at present the preserve of humans.
Eventually, it notes, robots could come to display significant cognitive advantages over Homo sapiens soldiers.
"There is a common misconception that robots will do only what we have programmed them to do," Patrick Lin, the chief compiler of the report, said. "Unfortunately, such a belief is sorely outdated, harking back to a time when ... programs could be written and understood by a single person."
The reality, Dr. Lin said, was that modern programs included millions of lines of code and were written by teams of programmers, none of whom knew the entire program.
Accordingly, no individual could accurately predict how the various portions of large programs would interact without extensive testing in the field — an option that may either be unavailable or deliberately sidestepped by the designers of fighting robots.
A simple ethical code along the lines of the “Three Laws of Robotics” postulated in 1942 by Isaac Asimov, the science fiction writer, will not be sufficient to ensure the ethical behaviour of autonomous military machines.
Isaac Asimov’s three laws of robotics:
Introduced in his 1942 short story “Runaround”
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.