For most of its existence, artificial intelligence has been fairly dumb. We’ve designed AIs that are extremely good at one very specific thing, like playing chess or sorting widgets, but they are completely oblivious to everything else. And they are only good at their assigned activity because we told them absolutely everything they needed to know. That’s beginning to change.
Advances in machine learning are beginning to create AIs that can teach themselves sophisticated behaviors, and one place where this is making waves in technology and society is self-driving cars. In this article, our collaborator, Ross Pamphilon, a Portfolio Manager at ECM Asset Management, discusses autonomous vehicles and how improvements in AI are making them possible.
Modeling the Driven World
A basic requirement for an autonomous vehicle is the ability to discern where it is, what’s around it, and where it should go. AIs are now capable of generating detailed models of the physical world, and machine learning is helping them work out what those models are populated with.
While each manufacturer’s design is different, the basic science is the same. Self-driving cars use a sophisticated set of onboard sensors, which can include radar, laser scanners (lidar), and ultra-high-resolution cameras, to build a detailed, three-dimensional map of the landscape surrounding the vehicle.
GPS helps the car understand where it and its internal map are located in space, and certain hard-coded instructions help it understand basic traffic rules, but it’s up to the AI to guide the car to its destination.
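To make the map-building idea concrete, here is a minimal sketch in Python of how range readings (from lidar, say) might be fused into a simple two-dimensional occupancy grid. The grid size, sensor model, and function names are illustrative assumptions rather than any manufacturer’s actual pipeline; real systems work in three dimensions and use far more sophisticated probabilistic fusion.

```python
import numpy as np

# Illustrative assumptions: a 100 m x 100 m area, 0.5 m grid cells,
# and range/bearing readings from a single laser scanner on the car.
GRID_SIZE_M = 100.0
CELL_M = 0.5
N = int(GRID_SIZE_M / CELL_M)

def update_occupancy_grid(grid, car_xy, ranges, bearings):
    """Mark grid cells that sensor returns say are occupied.

    grid     -- 2D numpy array of occupancy counts
    car_xy   -- (x, y) position of the car in metres (e.g. from GPS)
    ranges   -- distances to detected obstacles, in metres
    bearings -- angles of each reading, in radians
    """
    for r, theta in zip(ranges, bearings):
        # Convert the polar reading to a point in the world frame.
        hit_x = car_xy[0] + r * np.cos(theta)
        hit_y = car_xy[1] + r * np.sin(theta)
        # Convert that point to grid indices (grid centred on the map origin).
        i = int((hit_x + GRID_SIZE_M / 2) / CELL_M)
        j = int((hit_y + GRID_SIZE_M / 2) / CELL_M)
        if 0 <= i < N and 0 <= j < N:
            grid[i, j] += 1  # more hits = more confidence the cell is occupied
    return grid

# Example: one scan with three obstacle returns.
grid = np.zeros((N, N))
grid = update_occupancy_grid(grid,
                             car_xy=(0.0, 0.0),
                             ranges=[12.0, 7.5, 20.0],
                             bearings=[0.0, np.pi / 4, -np.pi / 6])
print(int(grid.sum()), "occupied cells recorded")
```

Even this toy version shows the division of labor described above: GPS supplies the car’s position, the sensors supply the obstacle readings, and software turns them into a map the AI can plan against.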
Learning to Drive
In the past, teaching a car to drive would have meant breaking the process down into millions of separate instructions, in an attempt to tell the car what to do in every possible scenario it might encounter and to define every single object it might come into contact with. This method works for chess, where the rules are simple and the possible outcomes are predictable. However, it breaks down quickly when trying to teach something as complicated as driving a car.
Machine learning takes a different approach. AI researchers use computer programs called neural networks, which work in a way loosely inspired by the human brain, to replace the need to describe objects and processes in excruciating detail.
As an example, autonomous vehicles need to be able to recognize a bicyclist. Neural networks can teach themselves to spot bicyclists on the road by poring over hundreds of thousands of bicycle pictures.
All we have to do is tell them when they identify a bicycle correctly and when they don’t. They look for patterns in the data and eventually get very good at identifying specific objects, no matter what angle they’re viewed from, without ever being told explicitly what those objects look like.
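As a rough illustration of that training process, here is a minimal sketch in Python using the PyTorch library. A tiny convolutional network is shown labeled images, and the only feedback it gets, through the loss value, is how wrong its guesses are. The network architecture, image sizes, and random stand-in data are assumptions made purely for illustration; production perception models are vastly larger and learn from enormous sets of real, human-labeled photos.

```python
import torch
import torch.nn as nn

# A tiny convolutional network: the layer sizes are illustrative, not a real
# perception model. It outputs one score per class: "bicycle" vs "not bicycle".
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),           # two classes: bicycle / no bicycle
)

loss_fn = nn.CrossEntropyLoss()           # penalises wrong guesses
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: random "images" and labels. In practice these would be
# hundreds of thousands of real road photos labeled by humans.
images = torch.randn(64, 3, 64, 64)
labels = torch.randint(0, 2, (64,))

for epoch in range(5):
    optimizer.zero_grad()
    scores = model(images)                # the network's current guesses
    loss = loss_fn(scores, labels)        # how wrong those guesses are
    loss.backward()                       # trace the error back through the net
    optimizer.step()                      # nudge the weights to do better
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The important point is what’s missing: nowhere do we describe a wheel, a handlebar, or a rider. The network discovers those patterns on its own, simply by being told when it’s right and when it’s wrong.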
Autonomous vehicle AIs apply this concept across the entire driving experience, and the more they do it, the better they get. Supplemented with the instructions we supply, these AIs are already better drivers than most humans under normal driving conditions.
Human Drivers Are Still Important, But Not for Long
Level 5 autonomous vehicles, those that can drive without human intervention under any road and weather conditions, aren’t yet a reality. But they’re hurtling out of the future and barreling down on us quickly.
Every year computing power goes up, and AIs get better. It’s only a matter of time before human drivers are rendered unnecessary. And yet, as amazing as this artificial intelligence will be at driving a car, it’ll still be a simpleton for other tasks. The dream of strong AI, an intelligence like ours that can work out solutions to any problem on its own, is still a long way off.