An anonymous reader shares a report: Prolific science and science fiction writer Isaac Asimov (1920-1992) developed the Three Laws of Robotics in the hope of guarding against potentially dangerous artificial intelligence. They first appeared in his 1942 short story “Runaround”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov fans tell us that the laws were implicit in his earlier stories. A zeroth law was added in Robots and Empire (1985): “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

[…] Chris Stokes, a philosopher at Wuhan University in China, says, “Many computer engineers use the three laws as a tool for how they think about programming.” But the trouble is, they don’t work. He explains in an open-access paper (PDF): The First Law fails because of ambiguity in language, and because of ethical problems that are too complex to have a simple yes or no answer.
The Second Law fails because of the unethical nature of having a law that requires sentient beings to remain as slaves.
The Third Law fails because it results in a permanent social stratification, with vast potential for exploitation built into this system of laws.
The ‘Zeroth’ Law, like the First, fails because of ambiguous ideology. All of the Laws also fail because it is easy to circumvent the spirit of the law while still remaining bound by its letter.

Source: Slashdot