Nerval’s Lobster writes “The U.S. Army Research Laboratory has awarded as much as $48 million to researchers trying to build computer-security systems that can identify even the most subtle human-exploit attacks and respond without human intervention. The more difficult part of the research will be to develop models of human behavior that allow security systems to decide, accurately and on their own, whether actions by humans are part of an attack (whether the humans involved realize it or not). On Oct. 8, the Army Research Lab (ARL) announced a $23.2 million grant to fund a five-year cooperative effort among a team of researchers at Penn State University; the University of California, Davis; the University of California, Riverside; and Indiana University. The five-year program comes with the option to extend it to 10 years with the addition of another $25 million in funding. As part of the project, researchers will need to systematize the criteria and tools used for security analysis, making sure the code detects malicious intrusions rather than legitimate access, all while preserving enough data about any breach for later forensic analysis, according to Alexander Kott, associate director for science and technology at the U.S. Army Research Laboratory. Identifying whether human behavior is malicious is difficult even for other humans, especially when it’s not clear whether users who opened a door to attackers knew what they were doing or, conversely, whether the ‘attackers’ are perfectly legitimate and it’s the security monitoring staff who are overreacting. Twenty-nine percent of attacks tracked in the April 23, 2013 Verizon Data Breach Investigations Report could be traced to social-engineering or phishing tactics whose goal is to manipulate humans into giving attackers access to secured systems.”
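The story doesn’t describe how such behavior models would be built, but the core decision it describes — scoring a user’s actions against a learned baseline, deciding without human intervention whether to flag them, and keeping the raw events for later forensics — can be illustrated with a toy sketch. Everything below (function names, action labels, the 0.85 threshold) is invented for illustration and is not from the ARL project:

    from collections import Counter

    # Hypothetical illustration only: score a user's session against a
    # per-user baseline of past actions, flag it if it looks anomalous,
    # and preserve the raw events for later forensic analysis.

    def build_baseline(past_actions):
        """Relative frequency of each action type in a user's history."""
        counts = Counter(past_actions)
        total = sum(counts.values())
        return {action: n / total for action, n in counts.items()}

    def anomaly_score(baseline, session):
        """Mean 'surprise' of a session: rare or unseen actions score high."""
        floor = 1e-6  # probability assigned to never-before-seen actions
        scores = [1.0 - baseline.get(action, floor) for action in session]
        return sum(scores) / len(scores)

    def evaluate_session(baseline, session, threshold=0.85):
        """Decide malicious vs. legitimate, retaining evidence either way."""
        score = anomaly_score(baseline, session)
        verdict = "suspected-attack" if score > threshold else "legitimate"
        # Keep the full event sequence regardless of the verdict, so a
        # breach that slips through can still be reconstructed later.
        return {"verdict": verdict, "score": score, "evidence": list(session)}

    history = ["login", "read_mail", "read_mail", "open_doc", "logout"] * 50
    session = ["login", "dump_credentials", "exfiltrate", "logout"]
    print(evaluate_session(build_baseline(history), session))

The hard research problem the article points to is exactly what this toy glosses over: a phished employee’s actions can look statistically normal, while an unusual-but-legitimate session can look like an attack.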
