In our last post, we discussed how recent advances in AI and ML can be used for both good and ill. In this installment, I’ll explain why it’s so easy for security vulnerabilities to make their way into the wild, and what steps we can take to protect ourselves.
The security tradeoff
Large computer systems are a combination of many elements that need protection. Login and credential systems, cryptographic algorithms that protect data, digital rights management in a media player – these are all important. But security comes at a price.
Security always involves tradeoffs. For armored cars, security might mean added weight and decreased mobility; for computer systems, it might mean reduced performance or even usability.
Security engineers will make sure that the important stuff gets at least one layer of protection but, to quote professional light-heavyweight boxer Willie Pastrano, “It’s the punch you don’t see coming that knocks you out.”
Why this matters for ML
Machine learning excels at the punch you didn’t see coming. In many domains this is a problem, because people want (and expect) results that make sense from a human perspective. Machines aren’t human, and therefore don’t suffer (or benefit) from our preconceived notions.
Instead, an ML-based approach to problem solving often means trying things most people would never consider. For security analysts, this can make ML a particularly effective tool.
Protecting yourself
The Achilles heel of all ML systems is that they need ready access to data; there has to be something that generates the data for the algorithms to process. In this case, that something is the system being attacked – or at least a simulation of it, such as a digital twin.
In the movie Star Wars, the protagonist Rebels were able to find a critical vulnerability in the Empire’s brand-new battle station – the Death Star. This zero-day flaw was exposed because the Rebels had access to the full plans for the Death Star. While a work of fiction, this isn’t that different from how an AI was able to discover a bug in the 1980s game Q-Bert (allowing it to shatter all previous high scores). In both cases, an obscure flaw was revealed through careful analysis of sensitive intellectual property – in Q-Bert’s case, the game ROM and a faithful emulation of the game hardware.
Restricting and preventing access to the inner workings of a system is a critical step in protecting systems from ML-assisted adversaries.
Limiting access
A 2016 paper titled ‘Stealing Machine Learning Models via Prediction APIs’ demonstrated how easily ML can be used to replicate ML-based services. A year later, researchers at the Georgia Institute of Technology demonstrated how ML can be used to replicate a video game just by watching it being played.
Machine-learning-assisted ‘lifting’ works because automated systems act like a person who has been injected with truth serum, happily giving away all their secrets by answering hundreds of thousands of questions. With access to these question-and-answer pairs, it’s not overly difficult to use ML to replicate the system that produces them, even if that system is a complex simulation or an industrial process.
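To make this concrete, here is a minimal sketch of the extraction idea in Python with scikit-learn. Everything here is invented for illustration: in a real attack, query_service would be a remote prediction API or other automated system, not a local function.

```python
# Minimal model-extraction sketch: interrogate a black box with many
# probe inputs, record the question/answer pairs, and train a replica.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def query_service(x):
    # Stand-in for the black box. In a real extraction attack this
    # would be a remote prediction API; here it is a hidden rule we
    # pretend not to know.
    return int(x[0] + 2 * x[1] > 1.0)

# 1. Ask the system tens of thousands of questions.
probes = rng.uniform(0, 1, size=(50_000, 2))
answers = np.array([query_service(x) for x in probes])

# 2. Fit a surrogate model on the question/answer pairs.
replica = DecisionTreeClassifier(max_depth=8).fit(probes, answers)

# 3. The replica now mimics the service without seeing its internals.
test = rng.uniform(0, 1, size=(1_000, 2))
truth = np.array([query_service(x) for x in test])
print("replica agreement:", (replica.predict(test) == truth).mean())
```

With enough queries, the replica’s agreement with the real system climbs toward 100% – which is exactly why limiting and monitoring access matters.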
In addition to traditional forms of access control, which can help limit how much data an adversary can collect, more advanced methods like anomaly detection can help spot unusual usage patterns. A simple example of an unusual pattern would be a bot making an unusually high number of requests while trying to learn what makes your service tick. In this way, AI can help distinguish between someone legitimately using your system and someone who is interrogating it.
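As a toy illustration – not Irdeto’s actual detection pipeline – even a simple robust statistic can flag a client whose request rate is wildly out of line with everyone else’s:

```python
# Toy anomaly detector: flag clients whose request rate sits far above
# what is typical. Real deployments use richer features and models;
# this only illustrates the idea of spotting "interrogation" traffic.
import numpy as np

# Hypothetical hourly request counts per client (the last value is a
# bot hammering the service to learn what makes it tick).
requests = np.array([52, 47, 61, 55, 49, 58, 50, 53, 46, 9_800])

median = np.median(requests)
mad = np.median(np.abs(requests - median))  # robust spread estimate

# Flag anything more than ~5 robust deviations above the median.
scores = (requests - median) / (1.4826 * mad)
for client, score in enumerate(scores):
    if score > 5:
        print(f"client {client}: {requests[client]} requests/hour "
              f"(score {score:.0f}) - possible scraping/extraction")
```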
Irdeto uses this sort of technology to monitor internet traffic, detect piracy, and even spot cheaters in video games. Last July, Irdeto and Microsoft teamed up to accurately detect anomalies in data that contains repetitive cycles, such as access requests and load volumes, with considerable success.
When designed and deployed properly, advanced monitoring technologies like anomaly detection can stop data exfiltration attempts in their early stages, before critical IP has been lifted.
Protecting Intellectual Property
All the access control in the world isn’t going to be of much use once someone has lifted your IP and reverse engineered its inner workings. For this reason, one of the best methods to protect a system from AI-powered attacks is to ensure that it is hardened against reverse engineering.
Take the Q-Bert example above: had the Q-Bert ROM not been so easy to obtain, and an emulator of the hardware not trivial to implement, it wouldn’t have been practical for researchers to create the simulation environment needed for ML analysis. While Q-Bert was unprotected, the Sega Saturn resisted hackers for 20 years thanks to its protections – evidence that solid security engineering works.
Developers can use many different tricks to make their code difficult to reverse engineer. Products like Irdeto’s Cloakware Software Protection provide organizations with the tools they need to keep code private and secure. Nothing is foolproof, but the difference between protected and unprotected code is the difference between an exploit that takes days (the Q-Bert ROM) and one that takes 20 years (the Sega Saturn).
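To give a flavor of the simplest end of this spectrum (an invented toy, not how commercial tools like Cloakware work internally), one classic trick is to keep sensitive string literals from showing up in a casual scan of the program by storing them encoded and decoding them only at the moment of use:

```python
# Toy obfuscation trick: XOR-encode sensitive strings so they don't
# appear verbatim in the shipped program. Illustration only - this is
# not how commercial protection products work internally.
KEY = 0x5A

def encode(s: str) -> bytes:
    return bytes(b ^ KEY for b in s.encode())

def decode(blob: bytes) -> str:
    return bytes(b ^ KEY for b in blob).decode()

# In practice the encoded bytes would be produced by a build-time
# tool; we encode inline here only to keep the sketch self-contained.
_LICENSE_ENDPOINT = encode("https://example.com/license-check")

if __name__ == "__main__":
    # Decoded only at the moment of use.
    print(decode(_LICENSE_ENDPOINT))
```

Real protection stacks layer many such transformations on top of one another, each one adding friction for a reverse engineer.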
Using AI to protect software systems
One of the problems with security is that it’s complicated. Good security combines different branches of mathematics with a high degree of engineering acumen. There are also practical challenges: projects can involve millions of lines of code, and paying someone to review (and possibly secure) each and every line is costly. Fortunately, we can use ML to help automate this process.
Recently, Irdeto developed models that can read computer code in the same way that a person would by using a technique called ‘natural language processing.’ These models assess how important a block of computer code is from a security standpoint by attempting to mimic what senior security engineers might say. While these models are far from perfect, they can sort through millions of lines of code in just a few minutes and can quickly tell us what is and isn’t important at a high level.
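As a rough sketch of the ‘read code like text’ idea – with snippets, labels, and model all invented for this post; Irdeto’s production models are far more sophisticated – standard NLP tooling can already rank code blocks by likely security relevance:

```python
# Toy version of "reading code like natural language": treat code as
# text and train a classifier to predict whether a block is
# security-relevant. Snippets and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "aes_encrypt(key, iv, plaintext)",
    "verify_license_signature(cert, token)",
    "draw_button(x, y, width, height)",
    "format_date(timestamp, locale)",
    "check_password_hash(user, salt, digest)",
    "set_window_title(title)",
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = security-critical, 0 = not

model = make_pipeline(
    # Split identifiers into word-like tokens so "session_key" and
    # "aes_encrypt" share vocabulary with the training snippets.
    TfidfVectorizer(token_pattern=r"[A-Za-z]+"),
    LogisticRegression(),
)
model.fit(snippets, labels)

# Rank unseen code blocks by how much protection they likely need.
new_code = ["derive_session_key(master, nonce)", "resize_icon(img)"]
for code, p in zip(new_code, model.predict_proba(new_code)[:, 1]):
    print(f"{p:.2f}  {code}")
```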
In addition to estimating security requirements, Irdeto has built ML models that can estimate the impact of applying some of our most advanced (and computationally expensive) security features. These models are accurate enough that we can generally optimize the application of code protection to fit within a given budget.
By knowing what needs to be secured and the cost of applying different security treatments, the problem of protecting software becomes more like a traditional (but highly complex) optimization problem.
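Framed that way, a simple version of the problem looks like a 0/1 knapsack: each code block has an estimated importance and an estimated cost of protecting it, and we pick the set of blocks that maximizes protected importance within a budget. Here’s a minimal sketch with invented numbers:

```python
# Minimal sketch of protection-as-optimization: a 0/1 knapsack.
# Each block has an importance score (what the "what matters" model
# estimates) and a cost (what the overhead model estimates); choose
# which blocks to protect so total importance is maximized within a
# fixed budget. All numbers here are invented for illustration.

def plan_protection(blocks, budget):
    """blocks: list of (name, importance, cost); budget: int cost cap."""
    # dp[c] = (best importance achievable at cost <= c, chosen names)
    dp = [(0.0, [])] * (budget + 1)
    for name, importance, cost in blocks:
        # Iterate costs downward so each block is used at most once.
        for c in range(budget, cost - 1, -1):
            cand = (dp[c - cost][0] + importance,
                    dp[c - cost][1] + [name])
            if cand[0] > dp[c][0]:
                dp[c] = cand
    return dp[budget]

blocks = [
    ("key_derivation", 9.0, 4),
    ("license_check", 7.5, 3),
    ("ui_rendering", 0.5, 2),
    ("crypto_core", 9.5, 5),
    ("logging", 0.2, 1),
]
best, chosen = plan_protection(blocks, budget=8)
print(f"protect {chosen} (importance {best:.1f})")
```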
These AI-powered features are what will give industry a cost-effective way of building high levels of security into products from day one. That means attackers can’t simply ‘lift’ portions of code and analyze them with the latest-and-greatest algorithms. It also means a safer environment for all of us.
Conclusion
The cyber arms race will continue just like it has for the last two decades, and no amount of technology will change that.
It is conceivable that organizations will utilize AI-powered adversaries to probe and attack their own systems before they’re deployed, helping to reduce the threat of the unknown. The chess match between security engineer and adversarial hacker will take place virtually in simulated environments, and the edge will go to whoever has access to the most powerful ML environments.
Thinking back to the movie Star Wars, had the Empire run its own ML-based analysis of the Death Star, there is a good chance that a tireless algorithm would have detected the critical flaw (a single exhaust port). At the same time, had the Empire used IP protection technology to secure the plans to the Death Star, the Rebels would never have discovered the vulnerability in the first place. Or, instead of humans directing the defense, an AI like the one used to play StarCraft II could have coordinated drones to guard a vulnerability that was known ahead of time.
It’s fun to think about the future.