A group of researchers claims to have built a prototype for an “online polygraph” that uses machine learning to detect deception from text alone. But as several machine learning academics point out, what these researchers have actually demonstrated is the inherent danger of overblown machine learning claims. From a report: When Wired showed the study to a few academics and machine learning experts, they responded with deep skepticism. Not only does the study fail to establish the basis for any kind of reliable truth-telling algorithm, it makes potentially dangerous claims: a text-based “online polygraph” that’s faulty, they warn, could have far worse social and ethical implications if adopted than leaving those determinations up to human judgment.
“It’s an eye-catching result. But when we’re dealing with humans, we have to be extra careful, especially when the implications of whether someone’s lying could lead to conviction, censorship, the loss of a job,” says Jevin West, a professor at the Information School at the University of Washington and a noted critic of machine learning hype. “When people think the technology has these abilities, the implications are bigger than a study.”