The spectre of superintelligent machines doing us harm is not just science fiction, technologists say; so how can we ensure AI remains ‘friendly’ to its makers? From a story: Jaan Tallinn, the co-founder of Skype, warns that any approach to AI safety will be hard to get right. If an AI is sufficiently smart, it might understand its constraints better than its creators do. Imagine, he said, “waking up in a prison built by a bunch of blind five-year-olds.” That is what it might be like for a superintelligent AI confined by humans. The theorist Eliezer Yudkowsky, who has written hundreds of essays on superintelligence, found evidence that this might be true when, starting in 2002, he conducted chat sessions in which he played the role of an AI enclosed in a box while a rotation of other people played the gatekeeper tasked with keeping the AI in. In three out of five sessions, Yudkowsky, a mere mortal, says he convinced the gatekeeper to release him. His experiments have not discouraged researchers from trying to design a better box, however.

The researchers Tallinn funds are pursuing a broad variety of strategies, from the practical to the seemingly far-fetched. Some theorise about boxing AI, either physically, by building an actual structure to contain it, or by programming in limits to what it can do. Others are trying to teach AI to adhere to human values. A few are working on a last-ditch off-switch. One researcher delving into all three is mathematician and philosopher Stuart Armstrong at Oxford University’s Future of Humanity Institute, which Tallinn calls “the most interesting place in the universe.” (Tallinn has given FHI more than $310,000.) Armstrong is one of the few researchers in the world who focus full-time on AI safety. When I asked him what it might look like to succeed at AI safety, he said: “Have you seen the Lego movie? Everything is awesome.”
