The Future of AI: Coding AIs

Today’s programming world belongs to humans, but tomorrow could be a very different story, with Artificial Intelligence (AI) learning to code better than we can, and with fewer bugs to worry about.

In a chat we had with F-Secure’s Mikko Hypponen, the man behind Hypponen’s law, which says that if something is described as smart, it is vulnerable, he mentioned that we will eventually have programs written by other programs. “They are so good – a million times better than we are – they will not write bugs, there will be no vulnerabilities. Or, if there is a bug, it’s so goddamn complicated we will never be able to exploit it,” he told us during the interview.

Well, that future isn’t that far off. Google introduced AutoML in May of this year. In essence, AutoML is machine-learning software that designs other machine-learning models. The problem Google encountered in its efforts to broaden its AI horizons was that there weren’t enough researchers to do the job it wanted done, and AutoML is meant to close that gap.

In fact, AutoML has learned so much in the past few months that, ironically, it has already become better at building machine-learning systems than the researchers who made it.

“In our approach, a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task,” Google said upon launching AutoML.
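The loop Google describes is easier to picture in code. Below is a minimal sketch in Python: a controller proposes a “child” architecture, the child is trained and scored, and the score feeds back into the controller. Every name here is invented for illustration, and random sampling stands in for the recurrent, reinforcement-learning controller Google actually uses.

```python
import random

# Toy search space; a real one covers layer types, connections, and more.
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "filters": [32, 64, 128],
    "kernel_size": [3, 5],
}

def propose_architecture(history):
    # Stand-in for the controller neural net: a real controller learns
    # from `history`; here we just sample the search space at random.
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def train_and_evaluate(architecture):
    # Stand-in for building the child model and training it on the task;
    # returns a validation accuracy in [0, 1].
    return random.random()

def update_controller(history, architecture, reward):
    # A real implementation nudges the controller so that high-reward
    # architectures become more likely to be proposed again.
    history.append((architecture, reward))
    return history

history, best = [], (None, 0.0)
for step in range(100):
    child = propose_architecture(history)
    accuracy = train_and_evaluate(child)
    history = update_controller(history, child, accuracy)
    if accuracy > best[1]:
        best = (child, accuracy)

print("Best architecture found:", best)
```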

One of these “child models” is NASNet, which was given the task of recognizing objects, humans, and so on in images. On an image recognition task it reached a record-high 82% accuracy, picking out the humans and kites in one picture, for instance. Some elements were recognized with a better score than others, of course, but it’s still a marvelous result.
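For readers who want to try it, pretrained NASNet variants are now bundled with the Keras library. Here is a minimal sketch, assuming TensorFlow is installed and that a local image file named photo.jpg exists; both are our assumptions, not details from Google’s announcement.

```python
import numpy as np
from tensorflow.keras.applications.nasnet import (
    NASNetMobile, preprocess_input, decode_predictions,
)
from tensorflow.keras.preprocessing import image

# Downloads ImageNet-pretrained weights on first use.
model = NASNetMobile(weights="imagenet")

# "photo.jpg" is a placeholder; NASNetMobile expects 224x224 input.
img = image.load_img("photo.jpg", target_size=(224, 224))
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the top-3 ImageNet labels with their confidence scores.
for _, label, score in decode_predictions(model.predict(batch), top=3)[0]:
    print(f"{label}: {score:.2%}")
```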

This is a remarkable achievement for Google, or rather for Google’s AutoML, since the work, in a way, belongs to it.

The future of AI

The future may very well bring more of these tools, able to write whatever code we need on their own. A machine that creates code by itself may be more successful than humans can ever be, simply due to the way it learns to perform a task.

Hypponen mentioned another interesting element that we should take into account for the future – creating programming programs, as he called them, might very well lead to an arms race where “good AIs” write programs with no vulnerabilities and “bad AIs” try to find vulnerabilities in those programs. While these bugs may be invisible to the human eye, an AI may be good enough to find them.

We are already using Artificial Intelligence to discover new cyber threats, and to learn how malware works and how to detect it, so it won’t be long before AI is also used for concerted attacks. Thankfully, for now, AI is still a bit out of reach of regular hackers. There’s no telling, however, what could happen when the world’s governments start working on their own AI soldiers.
