Removing the human driver from behind the steering wheel may require removing the human being from the development lab.
Helm.ai, a startup based in Menlo Park, Calif., that recently came out of stealth, seeks to dramatically reduce the bottlenecks in autonomous vehicle development cycles with an AI training method known as unsupervised learning. Like other advanced learning tools such as active learning, unsupervised learning takes an intelligent approach to training to lessen the burden on human annotators.
An autonomous vehicle must learn from a massive amount of data — measured in the petabytes, or millions of gigabytes — to safely drive without a human at the wheel. Even lower levels of autonomy, such as level 2+ and level 3 AI-assisted driving, require intensive training to operate effectively.
Embedded in this massive amount of data are millions of frames containing pedestrians, cars, signs and other objects. For supervised learning, every one of these must be labeled so that deep neural networks can learn to recognize them on their own. It’s an incredibly expensive, time- and labor-intensive process, making it difficult to develop and iterate on new capabilities quickly.
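To make that labeling burden concrete, here is a minimal, hypothetical sketch of a supervised training step (not Helm.ai’s code): the network cannot compute its loss, and so cannot learn, until a human has supplied a label for every frame.

```python
import torch
from torch import nn

# Hypothetical example: a tiny classifier trained with supervised learning.
# Every frame needs a human-provided label (e.g., 0 = pedestrian, 1 = car,
# 2 = sign) before it can contribute to training.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

frames = torch.randn(32, 3, 64, 64)   # stand-in for camera frames
labels = torch.randint(0, 3, (32,))   # the costly part: human annotations

logits = model(frames)
loss = loss_fn(logits, labels)        # the loss is undefined without labels
loss.backward()
optimizer.step()
```

Multiply that per-frame annotation requirement across millions of frames and the scale of the problem becomes clear.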
Helm.ai runs its unsupervised learning techniques on high-performance NVIDIA data center GPUs to train its self-driving algorithms. The startup is also relying on NVIDIA inside the vehicle, running its self-driving software on NVIDIA DRIVE AGX Xavier.
Removing the Training Wheels
Unsupervised learning is a method of training neural networks without labeled data. It’s an open area of AI research and comes in a variety of flavors, with some previous approaches aiming to identify new patterns in data or to pre-process a pool of data.
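As an illustration of one common flavor, a sketch of an autoencoder training step is shown below. This is a generic, hypothetical example rather than Helm.ai’s method: the network learns structure from raw frames by reconstructing its own input, and no human annotations appear anywhere in the loop.

```python
import torch
from torch import nn

# Hypothetical example of unsupervised learning: an autoencoder that learns
# from unlabeled frames by reconstructing its own input. No labels are used.
autoencoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256), nn.ReLU(),   # encoder: compress the frame
    nn.Linear(256, 3 * 64 * 64),              # decoder: reconstruct it
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

frames = torch.randn(32, 3, 64, 64)           # unlabeled camera frames
reconstruction = autoencoder(frames)
loss = nn.functional.mse_loss(reconstruction, frames.flatten(1))
loss.backward()
optimizer.step()
```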
Helm.ai is working to expand the scope of unsupervised learning as well as mathematical modeling to efficiently scale autonomous vehicle training.
Rather than pursue traditional approaches, the startup is looking to master unsupervised learning to remove the need for large-scale fleet data and armies of human annotators. The result is scalable AI software that can achieve autonomous driving capabilities on an improved timeline and budget.
“We identified some key challenges that we felt weren’t being addressed with the traditional approaches, in particular regarding the scalability and accuracy of AI software,” said Vladislav Voroninski, founder and CEO of Helm.ai. “We built some prototypes early on that made us believe that we can actually take this all the way.”
Innovating with High-Performance Compute
To achieve these breakthroughs in AI training, Helm.ai is relying on industry-leading computing in the data center and in the vehicle.
NVIDIA V100 Tensor Core GPUs in the data center make it possible to process petabytes of data without experiencing costly roadblocks, enabling advanced learning techniques like Helm.ai’s.
Once trained, the startup’s level 2+ self-driving software can be tested in the vehicle using the NVIDIA DRIVE AGX Xavier AI compute platform. DRIVE AGX Xavier delivers 30 trillion operations per second for level 2+ and level 3 automated driving.
At its core is the Xavier system-on-a-chip, the first-ever production auto-grade SoC, which incorporates six different types of processors for high-performance, energy-efficient AI compute.
With more accurate and cost-effective AI training, Helm.ai and NVIDIA are enabling the industry to safely deploy transportation technologies that will transform the way people and goods move.