An anonymous reader writes:
Can you get into trouble under anti-hacking laws for tricking a machine-learning system? A new paper by security researchers and legal experts asks whether fooling a driverless car into seeing a stop sign as a speed-limit sign, for instance, is legally the same as hacking into it.
The original submission asks another question: “Do you have inadequate security if your product is too easy to trick?” But the paper also explores bad actors who deliberately build a secret blind spot into a learning system, or who reconstruct the private data that was used for training. One of the paper’s authors even coded DNA that, when sequenced, corrupts gene-sequencing software and takes control of the underlying computer, and the researchers ultimately warn about the dangers of “missing or skewed security incentives” in the status quo.
“Our aim is to introduce the law and policy community within and beyond academia to the ways adversarial machine learning alter[s] the nature of [cracking] and with it the cybersecurity landscape.”
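
For context, the stop-sign scenario refers to adversarial examples: small input perturbations, often imperceptible to a human, that change a model's prediction. Below is a minimal illustrative sketch of one common technique, the fast gradient sign method (FGSM), assuming a PyTorch classifier; the model, image, and label here are placeholders for illustration, not anything from the paper.

    # Hedged sketch of FGSM: nudge the input along the sign of the loss
    # gradient so the model's prediction degrades. Illustrative only.
    import torch
    import torch.nn as nn

    def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                     epsilon: float = 0.03) -> torch.Tensor:
        """Return x perturbed in the direction that increases the model's loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), label)
        loss.backward()
        # A tiny, sign-only step per pixel: visually negligible, yet often
        # enough to flip the predicted class on a vulnerable model.
        return (x + epsilon * x.grad.sign()).detach()

    # Usage with a stand-in classifier (hypothetical; any differentiable model works).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    image = torch.rand(1, 3, 32, 32)   # placeholder "stop sign" image
    true_label = torch.tensor([0])     # placeholder class index
    adversarial = fgsm_perturb(model, image, true_label)
    print(model(image).argmax(1), model(adversarial).argmax(1))

The point of the sketch is that the attacker never touches the model's code or weights, only the input it is shown, which is exactly what makes the legal question of whether this counts as “hacking” so murky.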

Source: Slashdot