A lot has changed since I first started applying machine learning (ML) in the 1990s. Back then, using something like a neural network meant implementing everything, including the learning algorithm, from scratch. And even if your code worked, there were no good guidelines on learning rates, data requirements, or what outcomes to expect. Simply put, using these technologies was risky.
Not only has practical knowledge about ML algorithms advanced considerably, but the amount of data and computing power now at our disposal is staggering. This combination has enabled some very impressive achievements over the past few years.
Amazing achievements
In January 2019, AlphaStar defeated a professional competitive StarCraft II player. By October, the same technology could reportedly defeat 99.8% of all active players. StarCraft II is one of the most popular real-time strategy games ever created, with players from around the world competing in organized, highly competitive tournaments. The result is significant because StarCraft II approximates the way humans manage real-world problems, such as logistical networks and battlefield environments, far more closely than games like chess or Go. As in real life, there are no turns, game pieces move and behave almost with a mind of their own, and everything happens very quickly.

To accomplish this feat, the AlphaStar team didn’t need analysis from human experts or long, complex hand-written rules. Instead, AlphaStar learned to master StarCraft II using deep reinforcement learning, playing match after match against itself until it could beat top-tier human opponents. In other words, all it needed to master the game was access to a virtual monitor, keyboard, and mouse. Here is a link to an article about the accomplishment, if you are interested and haven’t read about it yet.
It’s not hard to imagine how this approach might be applied to other problems, such as ‘fly the plane in the safest way possible’ or ‘plan the most ecologically friendly route for delivering packages.’ All that is needed is a simulation of the original problem so that the computer can ‘play’ it repeatedly.
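If you want a feel for what learning-by-playing looks like in code, below is a minimal sketch: tabular Q-learning on a toy simulated environment, using the open-source Gymnasium library. To be clear, this is nothing like AlphaStar’s actual system, which used deep networks and large-scale self-play; the environment, exploration rate, and hyperparameters here are illustrative assumptions only.

```python
# A toy version of 'learning by playing a simulation repeatedly':
# tabular Q-learning on Gymnasium's FrozenLake environment.
# Environment choice and hyperparameters are illustrative assumptions.
import gymnasium as gym
import numpy as np

env = gym.make("FrozenLake-v1", is_slippery=False)
q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.2  # learning rate, discount, exploration

for episode in range(5_000):  # play the simulation over and over
    state, _ = env.reset()
    done = False
    while not done:
        # Explore occasionally; otherwise exploit the best known action.
        if np.random.random() < epsilon:
            action = int(env.action_space.sample())
        else:
            action = int(np.argmax(q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Nudge the value estimate toward reward plus discounted future value.
        q[state, action] += alpha * (
            reward + gamma * np.max(q[next_state]) - q[state, action]
        )
        state = next_state
```

The point is the loop, not the algorithm’s details: the agent plays thousands of games, and the only feedback it ever receives is the reward signal, much as AlphaStar’s only real feedback was winning or losing.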
And speaking of simulations, in late 2018 Nvidia released an extremely impressive implementation of a Generative Adversarial Network (GAN). Nvidia’s GAN can create photorealistic faces, and it even lets end users tune aspects of these digital creations, altering hair style, gender, and lighting, among other things. Not only can the technology generate pictures of people, it can also create lifelike animals, cars, and even bedrooms. What makes GANs so interesting as ML tools is that no human needs to manually label the computer’s creations as ‘good’ or ‘bad.’ Instead, GANs assess their own output and improve over time, to the point that their creations become indistinguishable from the real thing. In other words, GANs learn how to build simulations (albeit simple ones for now).
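To make the ‘no human labeling’ point concrete, here is a minimal PyTorch sketch of the adversarial loop at the heart of every GAN. It learns to mimic a simple one-dimensional Gaussian rather than faces, and the tiny networks and hyperparameters are my own illustrative choices, not anything from Nvidia’s implementation.

```python
# A minimal GAN: a generator learns to mimic a data distribution while a
# discriminator learns to tell real samples from fakes. No human labels
# anything; the two networks grade each other. All sizes and learning
# rates are illustrative assumptions.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2_000):
    real = torch.randn(64, 1) * 2.0 + 3.0  # 'real' data drawn from N(3, 2)
    fake = generator(torch.randn(64, 8))   # the generator's attempts

    # Discriminator: score real samples as 1, fakes as 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: make fakes that the discriminator scores as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Notice that no human judgment ever enters the loop: the discriminator grades the generator, and the generator’s only goal is to fool the discriminator.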
As with DeepMind’s AlphaStar, the same method could, in theory, be applied to just about anything – instead of synthesizing pictures, maybe synthesize a logistical network or the layout of a chip. The possibilities are nearly endless.
Risks
In the same way a hammer doesn’t care if it’s hitting a nail or a finger, even the most sophisticated tools lack a conscience. The same process that can learn the best way to win a game could just as easily be turned around to find the best way to lose. An ML algorithm isn’t dissuaded by the complexity of an industrial control system, and an adversary (using AI to attack such a system) doesn’t even need to understand which system is being controlled – all the attacker needs to do is specify a goal and the algorithms do the rest.
In an AI-powered arms race, advanced protections will be more important than ever before.
In 2018, a trio of researchers at the University of Freiburg in Germany used a sophisticated form of reinforcement learning to teach a computer how to play the popular 1980s arcade game Q*bert. Their algorithm achieved what looked like an impossibly high score. The AI, having no preconceptions about what is and isn’t good play, had discovered a bug that eluded millions of human players for nearly 40 years. It was later verified that the bug exists on real hardware and can be exploited by human players as well as AIs.
In 2017, researchers from the University of Washington used adversarial ML to discover flaws in the way a self-driving car identifies road signs, in their case causing a stop sign to be read as a 45-mph speed limit sign. According to security researcher Yoshi Kohno, all an ML-savvy hacker would need is access to the machine vision classifier used by the car and a picture of a stop sign.
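For a sense of the mechanics, here is a minimal sketch of one classic adversarial-ML technique, the fast gradient sign method (FGSM). This is not the UW team’s exact attack, which used physical stickers and a more robust optimization; the stand-in classifier, image, target class, and perturbation budget below are all illustrative assumptions.

```python
# Targeted FGSM sketch: given a classifier and an image, the gradient of the
# loss with respect to the *pixels* tells us how to nudge the image toward
# being read as a chosen class. The model and image here are stand-ins.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in model
image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in 'stop sign' photo
target = torch.tensor([7])  # the class we want the image misread as

# One forward and one backward pass give us the pixel gradients.
loss = nn.functional.cross_entropy(classifier(image), target)
loss.backward()

epsilon = 0.03  # perturbation budget: keep the change nearly invisible
adversarial = (image - epsilon * image.grad.sign()).detach().clamp(0.0, 1.0)
print(classifier(adversarial).argmax(dim=1))  # logits now lean toward `target`
```

The unsettling part is how little is required: one forward pass, one backward pass, and a small, nearly invisible change to the pixels. Real attacks iterate this step, but the principle is the same.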
While seemingly inconsequential, these examples illustrate how easy it is for critical flaws to slip into production unnoticed, even after decades of field testing, and how AI and ML can be used to discover and exploit unknown vulnerabilities with access to nothing more than an emulator (or, in the case of the self-driving car, a bit of code).
Conclusion
As we enter the IoT age, where more and more devices are becoming connected, one must contemplate how AI and ML will be used for both good and nefarious purposes.
While ML will certainly help reduce the cost of delivering goods and services, improve our lives by giving doctors rich and sophisticated diagnostic tools, and help keep our complex logistical networks running, it isn’t difficult to imagine a scenario in which these same technologies are turned against us by an adversary armed with little more than some Python and sufficient computing time.
Stay tuned – in my next blog post, I will discuss next-level defenses: how ML can be used to protect ML.