Symantec has been working on ways to better detect and mitigate malware for nearly two decades with the goal of helping customers stay ahead of threats to their data.
There was a time, around the year 2000, when Symantec teams saw only one or two localized malware threats a week. They could analyze each piece of malware as it came in and address the specific threat it posed. That seems like a halcyon era compared with the present, when malware pours through global networks by the millions per day.
Leyla Bilge, a member of the Symantec Research Labs (SRL) team, received her PhD in 2012 with a focus on system-attacking malware. During her doctoral studies, she recalled, deploying artificial intelligence (AI) to block attacks began gaining popularity. But at the same time, many still questioned how far the technology could be trusted.
The first question was always: how easy is AI to evade?
“We can use AI to solve a problem, but then the question of how easy it is to compromise the AI comes up,” Bilge said.
The strength of AI lies in its ability to monitor a large pool of data for patterns. But what if someone taints the data in that pool? The classic example of how easily an AI can be fooled is asking it to distinguish pictures of cats from pictures of dogs. Both species come in a multitude of variants, and it has been shown that altering just a few pixels in a picture can deceive an AI into classifying a dog as a cat.
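The pixel-perturbation idea above can be made concrete with a toy sketch. This is purely illustrative (not Symantec's system, nor a real image model): a hypothetical linear classifier scores a four-"pixel" image as dog or cat, and a small, targeted nudge to each pixel, against the sign of its weight, pushes the score across the decision boundary in the style of a fast-gradient attack.

```python
def classify(pixels, weights, bias):
    """Return 'dog' if the weighted score is positive, else 'cat'."""
    score = sum(p * w for p, w in zip(pixels, weights)) + bias
    return "dog" if score > 0 else "cat"

# Hypothetical learned weights and a dog image the model classifies correctly.
weights = [0.6, -0.2, 0.8, 0.1]
bias = -0.5
dog_image = [0.9, 0.1, 0.7, 0.4]

print(classify(dog_image, weights, bias))  # → dog

# Targeted perturbation: move each pixel a small step against the sign
# of its weight, lowering the score until the decision flips.
epsilon = 0.4
adversarial = [p - epsilon * (1 if w > 0 else -1)
               for p, w in zip(dog_image, weights)]

print(classify(adversarial, weights, bias))  # → cat
```

A real attack works the same way in principle, but computes the perturbation from the gradients of a trained neural network rather than from hand-picked linear weights.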
Meanwhile, adversarial malware – code crafted to taint the pool of data the AI monitors for patterns, thereby masking the malware itself – has grown in volume and complexity for this very purpose.
What’s clear is that the threat is too much for humans to combat alone. So four years ago, SRL began to investigate ways to protect the internal systems of organizations, rather than just trying to get ahead of the external threat posed by malware. This is where SRL believes AI can make a difference.
SRL has since been applying machine learning to analyze all of the data running into or out of an organization to determine where the vulnerabilities of those systems lie. In 2017, SRL published research demonstrating that security measures on internal systems could be improved by 400% through the use of artificial intelligence, a risk-prediction result that demonstrated the proactive promise of AI.
To do risk prediction accurately, an AI examines the behavior of the threats, the attackers, and the system users.
- Examining what attackers consider valuable will help determine which internal systems in a company are most vulnerable.
- Using what is known of malware, coupled with what continues to be learned on a daily basis, AI algorithms can identify where threats target a system.
- Evaluating user behavior in applications like email can reveal to an AI vulnerabilities born of user activity that were previously undetected.
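The three signal types above can be sketched as inputs to a simple risk ranking. This is our illustration under assumed names and weights, not SRL's model: each machine gets normalized scores for asset value to attackers, observed threat targeting, and risky user behavior, and a weighted sum ranks the machines so that scarce protections can go to the riskiest ones first.

```python
# Hypothetical per-machine risk signals, each normalized to [0, 1].
machines = {
    "file-server": {"asset_value": 0.9, "threat_hits": 0.4, "user_risk": 0.2},
    "dev-laptop":  {"asset_value": 0.3, "threat_hits": 0.7, "user_risk": 0.8},
    "kiosk":       {"asset_value": 0.1, "threat_hits": 0.2, "user_risk": 0.5},
}

# Assumed weights; a deployed system would learn these from incident data.
WEIGHTS = {"asset_value": 0.5, "threat_hits": 0.3, "user_risk": 0.2}

def risk_score(features):
    """Weighted sum of a machine's normalized risk signals."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def riskiest(machines, top_k=2):
    """Return the top_k machine names, ranked by predicted risk."""
    ranked = sorted(machines, key=lambda m: risk_score(machines[m]), reverse=True)
    return ranked[:top_k]

print(riskiest(machines))  # → ['file-server', 'dev-laptop']
```

Ranking rather than just scoring matters here: it turns a prediction into a deployment decision about which nodes get the most expensive protections.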
Better security is just part of the benefit of using AI to predict risks inside a system; cost is another. Bilge noted that deploying multiple layers of security can get very expensive in a hurry.
“You might not be able to deploy every advanced technology on every node in your enterprise. You want to choose the riskier ones to protect,” she said. “Using prediction in organization systems helps with the questions of what to secure and what to insure. Cyber insurance is a trend on the rise. Prediction aids in the pricing and underwriting of what should be insured in a system.”
While the SRL cyber risk prediction solution is still considered relatively new, it’s off to a promising start. On November 1, 2017, Bilge presented the SRL research and its findings to industry peers in a lecture titled, “Predicting the Risk of Cyber Incidents.”
Their work was well received, suggesting that more organizations are going to consider similar ways to arm themselves from the inside against mounting threats from the outside.