Feature Story

The future of machine learning is here

by Jason Deign

Machines are getting better at learning for themselves, in many cases to the point where they can outdo humans.

Are our machines turning into gods? So it seemed to 19-year-old Ke Jie in May. Ke, the world's best player of the world's oldest board game, Go, had just met his match… in the form of a program called AlphaGo.

In the space of a year, the program had become "almost like the god of Go," Ke said after losing to AlphaGo. He had been playing the game, long viewed as too hard for machines to excel at, since he was 10. AlphaGo, by contrast, was created only in 2014, by DeepMind, now part of Google's parent company, Alphabet.

See also: What machine learning and AI can do for human health

But it had machine learning on its side. Machine learning is where systems learn from data rather than following explicitly programmed rules. Closely linked to artificial intelligence (AI), it is helping machines do many things that used to be in the human domain alone.
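To see what "learning from data rather than rules" means in miniature, consider a one-nearest-neighbour classifier, one of the simplest machine learning methods. Nobody writes a rule that says what a "cat" measurement looks like; the program simply copies the label of the closest example it has seen. (The data and labels below are invented purely for illustration.)

```python
# Learning from examples instead of hand-written rules:
# a minimal 1-nearest-neighbour classifier. Training data are made up.
training = [(1.0, "cat"), (1.2, "cat"), (3.0, "dog"), (3.3, "dog")]

def predict(x):
    """Label a new point with the label of its closest training example."""
    return min(training, key=lambda item: abs(item[0] - x))[1]

print(predict(1.1))  # -> cat
print(predict(3.1))  # -> dog
```

Add more labelled examples and the classifier gets better with no change to its code, which is the essence of what distinguishes machine learning from conventional programming.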

In June, for instance, a machine passed China's annual university math exam. It did so more quickly than a human student "but with a below-average grade," Agence France-Presse reported.

As with AlphaGo, though, machine learning is increasingly helping machines do things better than human experts can. Such as healing people.

At Imperial College London in the United Kingdom, Daniel Rueckert, professor of visual information processing, leads a team using not one but two levels of machine learning to help detect lesions in brain scans.

"We use artificial intelligence and machine learning to try to teach computers how to interpret images," Rueckert explains.

Along with other types of data, this can "help them make a diagnosis or predict the outcome of what will happen to a patient," he says.

This is a task that "humans are normally quite good at," he points out, and one which cannot easily be broken down into a set of rules. If machines are to learn the process, they must have access to a lot of training data.

See also: Using big data to track corporate reputations

But getting doctors to provide large volumes of scans and notes is slow and costly. So Rueckert and his team don't just use machine learning to teach their IT systems to spot lesions. They also create lifelike synthetic scans as a way of teaching machines how to spot fake ones.

For the latter task, the team has started using a technique called adversarial learning. This involves pitting two systems against each other. In the Imperial College case, one system tries to make fake scans that are so good the other system thinks they are real.

In a nutshell, each machine learns from the other: the forger improves whenever it is caught, and the detector improves whenever it is fooled.
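The adversarial idea can be sketched in a few lines of plain Python. This is a deliberately toy version, not the Imperial College system: the "real" data are just numbers near 1.0, the "discriminator" is a fixed threshold, and the "generator" is a single mean that shifts whenever its fakes are caught. All names and numbers are illustrative.

```python
import random

# Toy adversarial loop (illustrative only). "Real" samples sit above the
# discriminator's boundary; the generator starts far below it and nudges
# its mean every time one of its fakes is caught.
random.seed(0)

def discriminator(x, boundary):
    """Classify a sample as real if it lies above the decision boundary."""
    return x > boundary

gen_mean = 0.0   # generator's current guess at what "real" data look like
boundary = 0.5   # discriminator's fixed decision boundary
step = 0.05

for _ in range(200):
    fake = random.gauss(gen_mean, 0.1)
    if not discriminator(fake, boundary):
        # The fake was caught: move the generator toward the real data.
        gen_mean += step

print(round(gen_mean, 2))  # generator now produces "real-looking" samples
```

In a real adversarial setup both networks are trained, each on the other's output, which is what lets the generated scans become convincing enough to serve as extra training material.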

The process is even now helping machines outdo humans in some clinical tasks, such as working out what stage of Alzheimer's disease a patient has, Rueckert says. "You're enabling a new, more quantitative, more objective assessment which is currently not done," he claims.

Machine learning is taking off in many other clinical areas, and is even being used by high school students to help fight cancer. This does not mean a robot is about to replace your doctor, though.

Instead, doctors benefit from computer-aided decision support, which gives them more clues with which to decide on a course of treatment. And while reading scans is a clear fit for machine learning, it is far from the only task getting a helping hand from AI.

Firms such as GE, for instance, are using AI to improve the way power plants work. They are building "digital twins" of actual plants, which serve as models for predicting where failures may occur or where output could be increased.
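At its simplest, a digital twin is a model of expected behaviour that real sensor readings are checked against; readings that drift too far from the model flag a possible fault. The sketch below shows only that core idea, with invented numbers; real twins are far richer physics and machine learning models.

```python
# A toy "digital twin": expected plant output as a function of load,
# used to flag sensor readings that deviate from the model's prediction.
# All figures are invented for illustration.

def expected_output(load):
    """Idealised twin: output scales linearly with load (illustrative)."""
    return 0.9 * load

def flag_anomalies(readings, tolerance=5.0):
    """Return the (load, actual) pairs that stray from the twin's prediction."""
    return [(load, actual) for load, actual in readings
            if abs(actual - expected_output(load)) > tolerance]

readings = [(100, 91.0), (100, 89.5), (100, 80.0)]  # third reading is suspect
print(flag_anomalies(readings))  # -> [(100, 80.0)]
```

Catching such deviations early is what lets a twin warn of failures before they happen in the physical plant.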

Meanwhile Apple is rolling out machine learning platforms that can be used by the people who make its mobile apps.

In fact, if you own an Amazon Echo smart speaker then you may have some machine learning going on in your living room: the latest version of the product features AI that can be used to give you fashion tips.

The pace at which machine learning is being pressed into new tasks is accelerating, matched only by the rate at which the machines themselves can learn. But this fast-growing field has been around for longer than you might think, says Rueckert.

"It has its origins in game theory," he says.

That being the case, perhaps Ke Jie shouldn't have been so surprised at being beaten at Go by a computer.


The contents or opinions in this feature are independent and may not necessarily represent the views of Cisco. They are offered in an effort to encourage continuing conversations on a broad range of innovative technology subjects. We welcome your comments and engagement.

We welcome the re-use, republication, and distribution of "The Network" content. Please credit us with the following information: Used with the permission of http://thenetwork.cisco.com/.