On March 27, three computer scientists – Yoshua Bengio, Geoffrey Hinton, and Yann LeCun – received the Turing Award, their field's equivalent of a Nobel Prize. The award, little known to the general public, recognizes their work in artificial intelligence, a field that has undergone a genuine revolution over the last decade.
But let's take this story back to its beginning. The term "artificial intelligence" was coined in 1956 at the Dartmouth workshop in New Hampshire. Sitting at the crossroads of computer science, mathematics, and psychology, the field has since fueled many fantasies, ranging from the most utopian (Chile's Project Cybersyn) to the most dystopian (the Terminator films).
In the 1980s, the three aforementioned researchers developed a new learning technique for artificial intelligence, based on artificial neural networks. The principle is simple but remarkably effective: gather a huge dataset for the problem to be solved and train a blank neural network on that data.
A typical example is a set of images that may or may not contain a cat. Each image is fed to the artificial brain one by one, and it must guess whether a cat is present. At the very beginning, the neural network is wrong most of the time. But with each mistake, it adjusts the weights of the connections between its neurons to improve its predictions.
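The training loop just described can be sketched for a single artificial neuron. This is a toy illustration in plain Python, not anything from the laureates' actual systems: the two-number "images", learning rate, and epoch count are made-up values chosen only to show the error-driven weight adjustment.

```python
import math

def train_neuron(data, epochs=20, lr=0.1):
    """Train one artificial neuron by nudging its weights a little
    after every prediction error -- the adjustment described above,
    on a toy scale."""
    n = len(data[0][0])
    weights = [0.0] * n   # the "blank" network: no knowledge yet
    bias = 0.0
    for _ in range(epochs):
        for features, label in data:
            # weighted sum of inputs, squashed into a 0..1 "cat score"
            z = sum(w * x for w, x in zip(weights, features)) + bias
            pred = 1 / (1 + math.exp(-z))
            error = label - pred
            # strengthen or weaken each connection according to
            # how much it contributed to the mistake
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if z > 0 else 0

# Synthetic stand-in for labeled images: two numbers per example,
# label 1 = "cat", 0 = "no cat". Real inputs would be pixel values.
data = [([2.0, 1.0], 1), ([1.5, 2.0], 1),
        ([-1.0, -1.5], 0), ([-2.0, -0.5], 0)]
w, b = train_neuron(data)
print(predict(w, b, [1.8, 1.2]))    # prints 1: classified as "cat"
print(predict(w, b, [-1.7, -1.0]))  # prints 0: classified as "no cat"
```

The network starts out guessing (a score of 0.5 on every example) and gradually strengthens the connections that point toward the right answer.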
While the underlying mathematics is fairly complex, the principle itself is relatively simple and in fact resembles ordinary human learning. A novice driver, for example, initially struggles to associate an external stimulus, such as a red light, with the corresponding action: pressing the brake pedal. With a little practice, the association becomes a reflex; the brain has learned to connect stimulus and action. Artificial neural networks operate on the same principle.
The main drawback of this innovative technique was the very long training time, which ruled out most practical applications, and the discovery gradually fell into oblivion. That changed in the late 2000s, when the computing power of computers surged – thanks in particular to graphics cards, which excel at precisely the kind of mathematical operations needed to train these artificial brains.
Neural networks suddenly returned to the forefront. The computing power now available made it possible to build much larger – and therefore more effective – networks of neurons: what we call deep learning.
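To give a sense of what adding layers buys, here is a minimal sketch (plain Python, not any of the laureates' actual code) of a network with one hidden layer trained by backpropagation on the classic XOR problem, which a single neuron can never solve. The hidden-layer size, learning rate, and epoch count are arbitrary illustrative choices.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# XOR: no single neuron can separate these four cases, but a network
# with one hidden layer can -- extra layers mean extra expressive power.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H = 4  # hidden-layer size (an arbitrary choice for this sketch)
wh = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
bh = [0.0] * H
wo = [random.uniform(-1, 1) for _ in range(H)]
bo = 0.0

def forward(x):
    """Input -> hidden layer -> output, returning both activations."""
    h = [sigmoid(wh[j][0] * x[0] + wh[j][1] * x[1] + bh[j]) for j in range(H)]
    out = sigmoid(sum(wo[j] * h[j] for j in range(H)) + bo)
    return h, out

lr = 0.5
for _ in range(10000):
    for x, y in data:
        h, out = forward(x)
        # backpropagation: push the output error back through both layers
        d_out = (out - y) * out * (1 - out)
        for j in range(H):
            d_h = d_out * wo[j] * h[j] * (1 - h[j])
            wo[j] -= lr * d_out * h[j]
            wh[j][0] -= lr * d_h * x[0]
            wh[j][1] -= lr * d_h * x[1]
            bh[j] -= lr * d_h
        bo -= lr * d_out

mse = sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)
print(f"mean squared error after training: {mse:.4f}")
```

After training, the network typically reproduces the XOR table almost exactly. The same error-driven weight adjustment, stacked over many more layers and vastly more data, is what powers modern deep learning systems.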
Around 2012, researchers using these neural networks began winning artificial intelligence competitions. Large tech companies, along with many startups, rushed toward this new El Dorado: the potential applications are huge, ranging from medical data analysis to driverless cars.
The impact on the US job market was soon felt. Between 2015 and 2018, the number of new machine learning engineer jobs grew by 344%, with an average salary above $140,000 per year. Indirectly affected fields also saw steady growth: full-stack developer postings rose 206%. While artificial intelligence is not the only factor, it has contributed to the dramatic drop in the US unemployment rate, from 10% in October 2010 to 3.8% in February 2019.