The Defense Innovation Board, a panel of 16 prominent technologists advising the Pentagon, today voted to approve AI ethics principles for the Department of Defense. From a news article:

The report includes 12 recommendations for how the U.S. military can apply ethics in the future for both combat and non-combat AI systems, organized around five main principles: responsible, equitable, traceable, reliable, and governable. The principles state that humans should remain responsible for "developments, deployments, use and outcomes," and that AI systems used by the military should be free of bias that can lead to unintended human harm. AI deployed by the DoD should also be reliable, governable, and use "transparent and auditable methodologies, data sources, and design procedure and documentation."

"You may see resonances of the word fairness in here [AI ethics principle document]. I will caution you that in many cases the Department of Defense should not be fair," DIB board member and Carnegie Mellon University VP of research Michael McQuade said today. "It should be a firm principle that ours is to not have unintended bias in our systems."

Applied Inventions cofounder and computer theorist Danny Hillis and other board members agreed to amend the draft document so that the governable principle includes "avoid unintended harm and disruption and for human disengagement of deployed systems." The report, Hillis said, should be explicit and unambiguous that AI systems used by the military should come with an off switch a human can press in case things go wrong.


Source: Slashdot