CleanTechnica: Big Changes In This Next Tesla FSD Upgrade

By Jim Ringold
The Tesla Full Self-Driving (FSD) suite originally included Tesla's fairly basic highway program. It worked well enough for the simpler demands of limited-access, divided-highway driving. Then the city street program was added; it is much more complicated because it has to handle the more complex "off-highway" environment, and it is an entirely different software program, language and all.
Until now, each Tesla carried both as separate programs, and the car chose between them depending on where you were driving. With this update, the two are being combined into a single, more capable program that runs at all times and in all locations. That is a necessary step for future FSD development, and I suspect it also frees up computing resources in FSD hardware version 3.
Most of the requirements of the recent, widely publicized Tesla FSD "recall" are most likely implemented in this software version, even before the recall letter goes out. The federal government needs a better name for over-the-air updates to vehicle software instead of lumping them in with physical recalls that require you to take the car to a dealer.
Just a reminder: all Tesla software updates can be accomplished in one's home garage (or within range of your Wi-Fi) in the middle of the night, with no expenditure of time, physical effort, or money by the owner. Updates roll out promptly to the whole fleet of FSD Teslas, with no procrastination over finding the time, making an appointment, and taking the car to a dealer.
FSD will come to pass at Tesla, and it will be done essentially with video camera images as the only input. Every Tesla built since 2020 has the cameras and computer hardware needed to implement FSD. You can turn FSD on from your smartphone, in the Tesla app, simply by paying the substantial FSD fee.
The time it has taken to move FSD out of "beta" is a clear indication of the project's complexity.
With these AI software upgrades and the real-time improvement feedback from the 350,000 FSD Teslas on the road, removal of the “beta” designation is ever closer. Can’t wait!

Below is a description of these AI improvements and features. If you have not experienced FSD, I don't expect the descriptions to make a lot of sense, but you can still grasp the complexity, direction, and depth of the project.
FSD Beta 11.3 Release Notes:

Enabled FSD Beta on highway. This unifies the vision and planning stack on and off highway and replaces the legacy highway stack, which is over four years old. The legacy highway stack still relies on several single-camera and single-frame networks, and was set up to handle simple lane-specific maneuvers. FSD Beta's multi-camera video networks and next-gen planner, which allow for more complex agent interactions with less reliance on lanes, make way for adding more intelligent behaviors, smoother control, and better decision making.
Added voice drive-notes. After an intervention, you can now send Tesla an anonymous voice message describing your experience to help improve Autopilot.
Expanded Automatic Emergency Braking (AEB) to handle vehicles that cross ego’s path. This includes cases where other vehicles run their red light or turn across ego’s path, stealing the right-of-way. Replay of previous collisions of this type suggests that 49% of the events would be mitigated by the new behavior. This improvement is now active in both manual driving and Autopilot operation. [See sketch 1 after these notes.]
Improved Autopilot reaction time to red light runners and stop sign runners by 500ms, by increased reliance on objects’ instantaneous kinematics along with trajectory estimates. [See sketch 2 after these notes.]
Added a long-range highway lanes network to enable earlier response to blocked lanes and high curvature.
Reduced goal pose prediction error for candidate trajectory neural network by 40% and reduced runtime by 3×. This was achieved by improving the dataset using heavier and more robust offline optimization, increasing the size of this improved dataset by 4×, and implementing a better architecture and feature space.
Improved occupancy network detections by oversampling on 180K challenging videos, including rain reflections, road debris, and high curvature. [See sketch 3 after these notes.]
Improved recall for close-by cut-in cases by 20% by adding 40K autolabeled fleet clips of this scenario to the dataset. Also improved handling of cut-in cases by improved modeling of their motion into ego’s lane, leveraging the same for smoother lateral and longitudinal control for cut-in objects. [See sketch 4 after these notes.]
Added “lane guidance” module and perceptual loss to the Road Edges and Lines network, improving the absolute recall of lines by 6% and the absolute recall of road edges by 7%.
Improved overall geometry and stability of lane predictions by updating the “lane guidance” module representation with information relevant to predicting crossing and oncoming lanes.
Improved handling through high-speed and high-curvature scenarios by offsetting towards inner lane lines. [See sketch 5 after these notes.]
Improved lane changes, including: earlier detection and handling of simultaneous lane changes, better gap selection when approaching deadlines, better integration between speed-based and nav-based lane change decisions, and more differentiation between the FSD driving profiles with respect to speed-based lane changes.
Improved longitudinal control response smoothness when following lead vehicles by better modeling the possible effect of lead vehicles’ brake lights on their future speed profiles. [See sketch 6 after these notes.]
Improved detection of rare objects by 18% and reduced the depth error to large trucks by 9%, primarily from migrating to more densely supervised autolabeled datasets.
Improved semantic detections for school buses by 12% and vehicles transitioning from stationary-to-driving by 15%. This was achieved by improving dataset label accuracy and increasing dataset size by 5%.
Improved decision making at crosswalks by leveraging neural network based ego trajectory estimation in place of approximated kinematic models.
Improved reliability and smoothness of merge control, by deprecating legacy merge region tasks in favor of merge topologies derived from vector lanes.
Unlocked longer fleet telemetry clips (by up to 26%) by balancing compressed IPC buffers and optimized write scheduling across twin SOCs.
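
For the technically curious, here are a few toy sketches of what some of these items might look like in code. To be clear, these are my own illustrative guesses, not Tesla's implementation; every function name, threshold, and number in them is an assumption. Sketch 1 covers the AEB note: decide whether another car's path will conflict with ego's within a short horizon. Tesla's system uses neural-network predictions; this toy uses plain constant-velocity extrapolation.

```python
# Sketch 1 (hypothetical): deciding whether a crossing vehicle warrants an
# AEB trigger. Constant-velocity extrapolation in 2D; all names and
# thresholds are illustrative assumptions, not Tesla's.
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # position in metres (ego frame)
    y: float
    vx: float  # velocity in m/s
    vy: float

def crossing_aeb_trigger(ego: Track, other: Track,
                         horizon_s: float = 3.0,
                         dt: float = 0.1,
                         conflict_radius_m: float = 2.0) -> bool:
    """True if the two constant-velocity paths come within
    conflict_radius_m of each other inside the look-ahead horizon."""
    t = 0.0
    while t <= horizon_s:
        ex, ey = ego.x + ego.vx * t, ego.y + ego.vy * t
        ox, oy = other.x + other.vx * t, other.y + other.vy * t
        if ((ex - ox) ** 2 + (ey - oy) ** 2) ** 0.5 < conflict_radius_m:
            return True
        t += dt
    return False

# Ego driving straight at 15 m/s; another car crossing from the right.
print(crossing_aeb_trigger(Track(0, 0, 15, 0), Track(30, -20, 0, 10)))  # True
```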
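Sketch 2 covers the red-light-runner note: from instantaneous kinematics alone, if a car would need harder-than-normal braking to stop at the line, assume it will run the light and yield early. The 3.5 m/s² comfort threshold is my assumption.

```python
# Sketch 2 (hypothetical): flagging a likely red-light runner from
# instantaneous kinematics, using v^2 = 2*a*d to get the deceleration
# required to stop at the stop line. Threshold is an assumption.
def likely_runner(speed_mps: float, dist_to_line_m: float,
                  comfortable_decel_mps2: float = 3.5) -> bool:
    """True if stopping at the line would require harder-than-normal braking."""
    if dist_to_line_m <= 0:   # already past the line and still moving
        return speed_mps > 0.5
    required_decel = speed_mps ** 2 / (2 * dist_to_line_m)
    return required_decel > comfortable_decel_mps2

print(likely_runner(speed_mps=15.0, dist_to_line_m=20.0))  # 5.6 m/s^2 -> True
print(likely_runner(speed_mps=10.0, dist_to_line_m=30.0))  # 1.7 m/s^2 -> False
```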
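Sketch 3 covers the occupancy-network note: oversampling challenging clips (rain reflections, road debris, high curvature) when assembling a training batch. The tags and the 5x weight are invented for illustration.

```python
# Sketch 3 (hypothetical): weighted oversampling of "hard" training clips.
import random

clips = [
    {"id": 1, "tags": []},
    {"id": 2, "tags": ["rain_reflection"]},
    {"id": 3, "tags": ["road_debris"]},
    {"id": 4, "tags": []},
    {"id": 5, "tags": ["high_curvature"]},
]
HARD = {"rain_reflection", "road_debris", "high_curvature"}

# Hard clips are 5x more likely to be drawn into a training batch.
weights = [5.0 if HARD & set(c["tags"]) else 1.0 for c in clips]
batch = random.choices(clips, weights=weights, k=8)
print([c["id"] for c in batch])
```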
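Sketch 4 covers the cut-in note: treat a neighboring car as cutting in if its lateral motion will carry it across ego's lane boundary within a couple of seconds. The lane half-width and horizon values are assumptions.

```python
# Sketch 4 (hypothetical): a minimal cut-in check based on lateral
# position and lateral velocity. Values are illustrative.
def is_cutting_in(lat_offset_m: float, lat_vel_mps: float,
                  lane_half_width_m: float = 1.8,
                  horizon_s: float = 2.0) -> bool:
    """lat_offset_m: signed distance from ego lane centre (car to the left > 0).
    lat_vel_mps: lateral velocity; negative means moving toward lane centre."""
    predicted = lat_offset_m + lat_vel_mps * horizon_s
    # Starts outside ego's lane, ends up inside it within the horizon.
    return (abs(lat_offset_m) > lane_half_width_m
            and abs(predicted) < lane_half_width_m)

print(is_cutting_in(lat_offset_m=2.5, lat_vel_mps=-0.8))  # 0.9 m in 2 s -> True
print(is_cutting_in(lat_offset_m=2.5, lat_vel_mps=-0.1))  # barely drifting -> False
```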
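Sketch 5 covers the high-curvature note: bias the lateral target toward the inside of the curve as curvature and speed rise, clamped to a small maximum offset. The gain and clamp values are invented.

```python
# Sketch 5 (hypothetical): lateral offset toward the inner lane line,
# growing with curvature and speed, clamped for safety. Values are invented.
def lane_offset_m(curvature_1pm: float, speed_mps: float,
                  gain: float = 0.15, max_offset_m: float = 0.35) -> float:
    """Positive curvature = left-hand curve -> offset toward the left (inner) line."""
    offset = gain * curvature_1pm * speed_mps ** 2
    return max(-max_offset_m, min(max_offset_m, offset))

print(lane_offset_m(curvature_1pm=0.01, speed_mps=25.0))   # 0.35 (clamped)
print(lane_offset_m(curvature_1pm=0.002, speed_mps=20.0))  # 0.12
```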
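Sketch 6 covers the brake-light note: fold a lead car's brake lights into its predicted speed profile so the car behind eases off early rather than braking late. The assumed 2 m/s² deceleration is illustrative only.

```python
# Sketch 6 (hypothetical): brake-light-aware speed prediction for a lead
# vehicle. Constant speed if no brake lights; otherwise a gentle ramp down.
def predict_lead_speeds(current_speed_mps: float, brake_lights_on: bool,
                        horizon_s: float = 3.0, dt: float = 0.5,
                        assumed_decel_mps2: float = 2.0) -> list[float]:
    """Predicted lead speeds sampled every dt seconds over the horizon."""
    decel = assumed_decel_mps2 if brake_lights_on else 0.0
    speeds, t = [], 0.0
    while t <= horizon_s:
        speeds.append(max(0.0, current_speed_mps - decel * t))
        t += dt
    return speeds

print(predict_lead_speeds(20.0, brake_lights_on=False))  # flat 20 m/s
print(predict_lead_speeds(20.0, brake_lights_on=True))   # 20, 19, 18, ...
```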

Any thoughts?
