Tesla announces it’s moving away from ultrasonic sensors in favor of ‘Tesla Vision’

Tesla announced today that it is moving away from using ultrasonic sensors in its suite of Autopilot sensors in favor of its camera-only “Tesla Vision” system.

Last year, Tesla announced it would transition to its “Tesla Vision” Autopilot without radar and start producing vehicles without a front-facing radar.

Originally, the suite of Autopilot sensors – which Tesla claimed would include everything needed to eventually achieve full self-driving capability – consisted of eight cameras, a front-facing radar, and 12 ultrasonic sensors placed around the vehicle.

The transition to Tesla Vision means relying solely on camera-based computer vision in the Autopilot system instead of on inputs from both cameras and radar.

You would think that more data would be better, but Tesla’s reasoning is that roads are designed for humans, who navigate them using a vision-based system – the natural neural nets in their brains. The automaker believes it is best to replicate that approach purely with cameras and artificial neural nets rather than letting radar data pollute the system.

This shift resulted in some Autopilot features being limited in vehicles without radar. For example, Tesla limited the Autosteer speed of Tesla Vision vehicles to only 75 mph until May of this year.

Now Tesla is announcing that it is going a step further, removing the ultrasonic sensors and replacing them with its Tesla Vision technology:

Today, we are taking the next step in Tesla Vision by removing ultrasonic sensors (USS) from Model 3 and Model Y. We will continue this rollout with Model 3 and Model Y, globally, over the next few months, followed by Model S and Model X in 2023.

The ultrasonic sensors were primarily used for short-range object detection in applications like auto-park and collision warnings.

Tesla explains how its vision neural nets are replacing the USS:

Along with the removal of USS, we have simultaneously launched our vision-based occupancy network – currently used in Full Self-Driving (FSD) Beta – to replace the inputs generated by USS. With today’s software, this approach gives Autopilot high-definition spatial positioning, longer range visibility and ability to identify and differentiate between objects. As with many Tesla features, our occupancy network will continue to improve rapidly over time.
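
Tesla has not published the details of its occupancy network, but the core idea is a volumetric map of which space around the car is occupied. As a rough illustration only – not Tesla’s implementation – here is a minimal sketch of the occupancy-grid concept in Python, where camera-derived obstacle points are rasterized into a grid around the car and queried for the nearest obstacle distance, the kind of output a park-assist feature would consume. The grid size, cell resolution, and function names here are all illustrative assumptions:

```python
# Toy illustration of the occupancy-grid idea (NOT Tesla's implementation):
# a 2D grid centered on the car where cells are marked occupied from
# camera-derived obstacle points, then queried for the nearest obstacle
# distance -- the kind of reading a park-assist feature displays.
import numpy as np

GRID_SIZE = 100     # 100 x 100 cells (illustrative assumption)
CELL_M = 0.1        # each cell covers 10 cm, so the grid spans 10 m x 10 m
CAR_CELL = (50, 50) # the car sits at the center of the grid

def build_grid(detections):
    """detections: list of (x_m, y_m) obstacle points in car coordinates."""
    grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=bool)
    for x_m, y_m in detections:
        # Convert meters to grid indices relative to the car's cell.
        i = int(round(x_m / CELL_M)) + CAR_CELL[0]
        j = int(round(y_m / CELL_M)) + CAR_CELL[1]
        if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
            grid[i, j] = True
    return grid

def nearest_obstacle_m(grid):
    """Distance from the car to the closest occupied cell, in meters."""
    occupied = np.argwhere(grid)
    if occupied.size == 0:
        return None
    deltas = (occupied - np.array(CAR_CELL)) * CELL_M
    return float(np.min(np.linalg.norm(deltas, axis=1)))

# Example: two obstacle points, e.g. from a depth-estimation network.
grid = build_grid([(1.2, 0.3), (-2.5, 1.0)])
print(f"Nearest obstacle: {nearest_obstacle_m(grid):.2f} m")
```

A production system would of course be 3D, probabilistic, and fed by neural networks running on live camera footage rather than hand-placed points, but the basic input/output contract – space in, occupied-or-free out – is the same.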

Tesla confirmed that moving toward a camera-only approach will again result in some feature limitations:

Feature                                Equipped with USS    Not equipped with USS
Forward Collision Warning              Yes                  Yes
Automatic Emergency Braking            Yes                  Yes
Lane Departure Warning / Avoidance     Yes                  Yes
Emergency Lane Departure Avoidance     Yes                  Yes
Pedal Misapplication Mitigation        Yes                  Yes
Auto High Beam                         Yes                  Yes
Autowiper                              Yes                  Yes
Blind Spot Collision Warning Chime     Yes                  Yes
Autosteer                              Yes                  Yes
Auto Lane Changes                      Yes                  Yes
Navigate on Autopilot                  Yes                  Yes
Traffic Light and Stop Sign Control    Yes                  Yes
Park Assist                            Yes                  Coming soon
Autopark                               Yes                  Coming soon
Summon                                 Yes                  Coming soon
Smart Summon                           Yes                  Coming soon

Over time, Tesla will push new software updates to improve existing features and to release Park Assist, Autopark, Summon, and Smart Summon on vehicles without USS using its occupancy network.

Electrek’s Take

While this shift might be perceived as another cost-cutting effort by Tesla – the automaker no longer has to embed ultrasonic sensors in its body panels – it truly believes that its vision system is a better approach.

Unfortunately, Tesla has again decided to roll out the change before its vision system is ready to replace all the affected features.

It’s going to be interesting to track the progress of these features on Tesla vehicles without USS over the next few months.
