Researchers from Google Brain, Intel, OpenAI, and top research labs in the U.S. and Europe joined forces this week to release what the group calls a toolbox for turning AI ethics principles into practice. From a report: The kit for organizations creating AI models includes the idea of paying developers to find bias in AI, akin to the bug bounties offered for security software. This recommendation and other ideas for ensuring AI is made with public trust and societal well-being in mind were detailed in a preprint paper published this week. The bug-bounty-hunting community might be too small to create strong assurances, but developers could still unearth more bias than is revealed by the measures in place today, the authors say.

“Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties,” the paper reads. “We focus here on bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation but note that bounties for other properties (such as security, privacy protection, or interpretability) could also be explored.”
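Neither the paper excerpt nor the summary prescribes how a bias finding would be demonstrated, but a bounty submission might boil down to a simple disparity measurement like the toy demographic-parity check sketched below (the function name and data are hypothetical, and real audits would use richer metrics and real model outputs):

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    A large gap is one simple signal a bounty hunter might report
    as evidence of bias in a deployed binary classifier.
    """
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[g] = predictions[mask].mean()
    values = list(rates.values())
    # Assumes exactly two groups for this illustration.
    return abs(values[0] - values[1]), rates

# Toy example: binary predictions for two demographic groups A and B.
preds = np.array([1, 1, 0, 1, 0, 0, 1, 0])
grps = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
gap, per_group = demographic_parity_gap(preds, grps)
print(f"positive rates: {per_group}, gap: {gap:.2f}")
```

In this sketch, a gap well above zero would be the kind of reproducible evidence a bounty program could reward, analogous to a proof-of-concept exploit in a security bug report.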

Source: Slashdot