NVIDIA Partners With Mercedes to Pursue Self-Driving Cars

AI is the talk of CES 2024 (Consumer Electronics Show) in Las Vegas as top developers and manufacturers unveil what their large language models can achieve, especially on the road. Yahoo Finance’s Dan Howley sits down with Nvidia Vice President of Automotive Danny Shapiro to discuss the chipmaker’s autonomous driving projects.

Nvidia (NVDA) announced a partnership with Mercedes-Benz to hone self-driving capabilities. Starting with an assisted driving feature, the technology developed through this partnership will, Shapiro believes, “add greater and greater levels of autonomy” until full self-driving is achieved.

“Safety has to be the top priority,” Shapiro insists, suggesting that a timeline for self-driving cars is hard to gauge.

Going beyond in-car technology, Shapiro says the company’s generative AI is “leveraging the exact same data that’s used to build the car” to power an automotive configurator that lets consumers experience a vehicle virtually before buying it.


Editor’s note: This article was written by Eyek Ntekim.

Video Transcript

DAN HOWLEY: CES 2024 is in full swing. And right now, we’re speaking with NVIDIA VP of Automotive Danny Shapiro. Danny, thank you so much for joining us. I guess, you know, obviously, the big theme at CES 2024 is AI. So how does that fit into NVIDIA’s automotive strategy?

DANNY SHAPIRO: That’s a great question. We’re super excited to be here in the Mercedes-Benz booth. And what we’re showing behind me actually is the new CLA concept, which is going to be the first vehicle from Mercedes-Benz with NVIDIA DRIVE inside.

So that’s our AI platform for automated driving, driver assistance, all kinds of convenience features. So we’re basically bringing the type of AI from the cloud that we’re used to seeing right into the car, processing the sensor data, and making the vehicle much safer to be in.

DAN HOWLEY: Yeah, I think that’s one of the interesting things to point out, right, is we all talk about generative AI in general. But self-driving cars– or self-driving technology only exists because of AI.

DANNY SHAPIRO: That’s absolutely right. There’s a massive amount of data that’s being generated from all the cameras on the car, the radar, now LiDAR on this vehicle. And that has to be processed in real time. So that’s where NVIDIA comes in, providing the horsepower to take all that data, make sense of it, and understand exactly where the lanes are and where the potential hazards are, to be able to read signs and detect the lights.

And so we’re bringing that out now to make these vehicles safer, to be an assistance feature for them. But they’re software updatable vehicles. So over time, we’re developing the software with Mercedes and all of our auto customers to be able to then add greater and greater levels of autonomy. And eventually, we’ll get to self-driving.

DAN HOWLEY: I guess when it comes to self-driving, is there a thought on when that might come? I know it’s always the, you know, I guess, billion dollar question that everybody’s kind of bandying about. And the early prognosticators had said, oh, it’ll be here in no time. But, obviously, it’s taking a little bit longer than that, probably.

DANNY SHAPIRO: Absolutely. You know, so this is a challenge we’ve been working on for well over a decade. It’s something where I think the entire industry underestimated the complexity. And the reality is safety has to be the top priority. It is for us. It is for so many of our partners. And we need to make sure we get it right.

So while these estimates were put out there initially, we realized we underestimated the complexity. And so we’re focused on making sure that before we put anything out on the road, that it’s tested and validated for every possible scenario.

So this is where another aspect of NVIDIA comes in. We can use simulation technology for the creation of the AI but also for testing and validating that AI, making sure it works in all types of lighting conditions, all kinds of weather conditions, and many different scenarios, including the kinds of things that don’t happen very often and are hard to train for. So we can use AI to create synthetic data to understand what possibly could happen and make sure the car will react appropriately.

DAN HOWLEY: Is that something similar to the Omniverse, the digital twin kind of idea that NVIDIA has been working on where, yeah, you can build these digital–

DANNY SHAPIRO: Absolutely.

DAN HOWLEY: –versions of objects, factories, things like that?

DANNY SHAPIRO: You’re absolutely right. So we really are able to apply this across the entire workflow within the auto industry from the designers that can use Omniverse and create essentially digital versions of the vehicle, right? That’s part of the design process. But then using Omniverse, that exact same model becomes part of what goes into the engineering team, then can go into the manufacturing side where we can create a virtual factory, a digital twin of the factory.

It’s modeling every aspect of the factory– the robots, the conveyors, the employees working inside the factory. We can model all that, optimize it, and make sure it all works before the factory is even built. And then we can even extend that model beyond into the retail side, using all that same data to create a virtual retail or showroom experience. People can customize their car; choose different materials, the interior trims, different wheels; and even take it on a virtual test drive.

So all of that simulation then is a very valuable tool throughout that whole workflow. So in addition to testing and validating the AVs, it really applies to all the mechanical, physical, and even sort of retail and service extensions of that whole workflow.

DAN HOWLEY: So I want to ask you about generative AI. Obviously, it was the huge theme of 2023. Still going to be a big theme into 2024. And I want to get your thoughts on how that kind of fits into the automotive side of things for NVIDIA as well.

DANNY SHAPIRO: Yeah, so with generative AI, we’ve just started. And I think the important thing to recognize is that it’s not just about text in and text out. That’s, you know, what ChatGPT kind of started. It has amazing capabilities, but there’s a lot of room for improvement. Of course, something that’s trained on just a vast array of data, some of it real, some of it not, means the results aren’t always going to be accurate.

So what we’re doing is putting tools in place to be able to curate data and make sure that if you’re going to talk to your Mercedes, it has accurate information. So Mercedes can train this large language model with the history of Mercedes vehicles, with all the information about the CLA concept, the manual, the service manual, whatever it is, so that when you have a dialogue with that vehicle, it comes back with the right answer.

But beyond that, we’re using generative AI for other data streams, where we can put text in and get imagery out, text in and video out. It could be video in and text out. So there are so many different ways that generative AI can be used.

Imagine we have an automated vehicle. The front-facing camera is taking in 30 frames a second of video. We can then use a large language model to convert the pixels in that video into an explanation of what’s happening in the scene.

So basically, the car can explain to you why it’s making certain driving decisions, or tell you what’s going on in the scene to improve your trust and confidence in the system, or provide alerts that mean something more than just a beep. So there are so many different ways that generative AI is really helping the auto industry, from a designer that may do a sketch, and the generative AI will create a 3D model and different permutations of that. It becomes a copilot for them, an assistant that makes their job easier and their productivity much better.

So it’s not going to take their job away. But it’s going to make them more productive and create higher quality results. And in the case of all the safety systems, all these tools are going to increase the safety inside the vehicle.

DAN HOWLEY: You know, on the automotive side and almost kind of the automotive consumer side, I know NVIDIA also introduced an automotive configurator. It’s kind of the idea of being able to build your car on a company’s website but in a more advanced way. So can you just kind of explain that to us?

DANNY SHAPIRO: Sure. So we’re leveraging the exact same data that’s used to build the car and putting those models into more of a marketing role as opposed to an engineering role. We can photorealistically render it.

People can choose all different aspects of their car, kind of create their dream car. Maybe they’re doing it on their PC, maybe even in a VR headset experience, where they can look all around and see what that vehicle is going to be like to drive. And maybe even using our simulation technology to take it on a virtual test drive in a digital twin of the city that they live in or see what it looks like parked in their driveway.

So generative AI is going to help create all these different kinds of scenes and be able to help the automakers increase, you know, the types of options that are added to the vehicle. Because if people can kind of see it and maybe experience it, we could simulate different features and functions in the car actually working. And people could add that to their cart as they’re ordering their vehicle.

DAN HOWLEY: And, you know, one of the things that NVIDIA is highlighting is just the breadth of different automakers that you work with. I guess, how do you see those relationships continuing to grow over time?

DANNY SHAPIRO: Well, you know, we’re working with hundreds of automakers, truck makers, robotaxi companies, shuttle companies, and then the whole ecosystem: the tier one suppliers, the sensor companies, the mapping companies, a lot of software developers. They’re all building on the NVIDIA DRIVE platform. So we’ve created this open system such that it’s not a fixed function but rather a supercomputer that delivers the computing horsepower to run the software that’s required today, but also with headroom so that it can continue to evolve and develop in the future.

So a year from now, we might be sitting here at CES talking about all kinds of new AI technology that no one’s thought of yet. But we’ll be able to update the software in the car to add those new features and capabilities.

DAN HOWLEY: Yeah, unfortunately, you can’t do that for my ’07 Mustang. That’s not really very well connected. But Danny Shapiro, VP of automotive at NVIDIA, thank you so much for joining us.

DANNY SHAPIRO: Thanks, Dan. It’s great to be with you.
