Nvidia (NVDA) announced on Monday that it will be expanding its partnerships with several Chinese EV makers, including BYD (BYDDY) and XPeng (XPEV). These companies will use Nvidia’s next generation of in-vehicle chips, called DRIVE Thor. The announcement signals a further push to integrate AI into everyday life.
Nvidia Automotive VP Danny Shapiro sits down with Yahoo Finance Tech Editor Dan Howley to discuss Nvidia’s expanded partnerships and the implementation of chips in vehicles.
Generative AI can be used to assist simulation for testing and validating autonomous vehicles, but it can also be applied inside vehicles, Shapiro says: “We’ve been focused on in-vehicle experiences for more than a decade and we pioneered bringing consumer electronics experience into the cockpit: touch screens and digital instrument clusters and rear seat entertainment. The last several years now we’ve been bringing artificial intelligence to that as well, so having a concierge that you could interact with. Convenience and safety features all built around AI.”
For more expert insight and the latest market action, click here to watch this full episode of Yahoo Finance Live.
Editor’s note: This article was written by Nicholas Jacobino
Video Transcript
DAN HOWLEY: We’re here with Nvidia VP of automotive Danny Shapiro. Danny, thank you so much for joining us here at GTC 2024, kind of a huge event. This particular one, first time in five years that you guys are live but also because of all of the AI hype going around. I guess, you know, just to get things started has the AI hype that we’ve seen around Nvidia trickled into the different businesses? How has it impacted the automotive side of things?
DANNY SHAPIRO: Absolutely. We’re seeing this amazing development going on throughout all aspects of the auto industry. One of the things we’re focused on is AI supercomputing in the car, so creating an AI brain for automated and autonomous driving.
So we just announced a new platform called Blackwell at the keynote yesterday. This is going to really help in the form of generative AI, the kinds of things that ChatGPT is doing, being able to take data streams in and generate other data streams, whether it’s text-to-text, text-to-image, text-to-video, or video-to-text. I mean, so it doesn’t really matter what it is.
And so we’re seeing that kind of technology helping transform what will go inside vehicles, whether it’s cars or trucks or robo taxis, being able to have a natural conversation with your vehicle. But also, it’s really helping the development of autonomous vehicles as well. So generative AI can be used to assist simulation for testing and validating autonomous vehicles.
DAN HOWLEY: So it’s not just ChatGPT or Sora, which is the video generative AI capability that OpenAI has, it’s something that’s going to actually, I guess, manifest in some physical way down the line.
DANNY SHAPIRO: So imagine an automated car and there’s cameras on it that are understanding the environment. Well, what we can do is generative AI can take that video feed and understand it and communicate it to you in real time and alert you to the fact that maybe somebody is jaywalking, or there’s a baby stroller being pushed across the street here, or a motorcycle that’s coming in your blind spot. So the cameras won’t just create a beep but actually could tell you what’s going on.
DAN HOWLEY: So I guess, how far away is that? We’re still– I mean, ChatGPT was, what, 2022, so it seems still pretty early, right? How far away is that? And how long has Nvidia even been thinking about that kind of technology in the car?
DANNY SHAPIRO: So we’ve been focused on in-vehicle experiences for more than a decade, and we really pioneered bringing kind of consumer electronics experience into the cockpit– touchscreens, digital instrument clusters, rear seat entertainment. And the last several years now, we’ve been bringing artificial intelligence to that as well, so having a concierge that you could interact with, convenience and safety features all built around AI. So being able to understand what’s going on outside as well as inside, who’s in the vehicle, what’s their agenda, where are they going, what do they want to do, and merging all of that together.
So what’s nice now is that these large language models are really enabling new experiences. And instead of having to learn specific commands for your car, you could speak to your car in natural language.
DAN HOWLEY: So I guess this is something that is now kind of coming to fruition. And during the keynote, Jensen had mentioned a number of new deals with automakers. I guess, can you just give us a sense of what those mean, some of them, BYD as well as a few others? And where do you see those kinds of deals going?
DANNY SHAPIRO: So BYD is really exciting. They are the world’s largest EV maker. And so they’re going to be using DRIVE Thor. That’s our new AI supercomputer for the car. It’s automotive grade, meaning it’s designed to work at all temperatures. Unlike your phone, which won’t work in the cold or the heat, our DRIVE computer will work under all conditions. And so that becomes the AI brain for their future fleets of cars.
But BYD is also using Nvidia AI for training the self-driving– how do you teach the self-driving cars– so their data centers have Nvidia inside. They’re using Nvidia for simulation, and for their factories, to be able to plan the factories using Nvidia Omniverse. So that’s our digital twin technology. Nvidia Isaac Sim, so this is a robotics stack as well. So how are these robots going to build the cars? Well, they’re using Nvidia artificial intelligence as well in the factory.
And even retail. So we can use Omniverse, which again is the digital twin technology. We take a model of the car that the designers use. And with generative AI, we can put that model anywhere: we can envision it on the highway, on a country road, by the beach, in your driveway. So the retail experience and dynamic car configurators will be something that will transform going to the dealership or just buying a car online or on your mobile device.
DAN HOWLEY: And just one last question, where do you see this kind of technology going for Nvidia in the year ahead, two years from now?
DANNY SHAPIRO: Yeah. The explosive growth of AI is quite astounding. But we’re just getting started. That’s the exciting thing. These vehicles that we’re making, it’s not like it’s a fixed vehicle that just ships; it’s a living entity now. There’s a supercomputer on board. And we can update the software on the car, so it can get better and better over time.
So when you buy the car, it’ll be at its most basic level the day that you drive it home. And it will just get better with new software updates over time. So we’ll be able to add new features, new capabilities, new autonomous driving modes. And it’ll just be a real joy. Every time you get an update, there’s something new, essentially a new app for the vehicle.