CleanTechnica: Tesla FSD Training — Garbage In, Garbage Out

In recent weeks, there have been a number of articles about the lack of progress in Tesla Full Self-Driving (FSD). Those stories have been from the Beta test drivers’ point of view. This article is from a system architect’s and system integrator’s point of view.
Before the system architect can get to work, there is a preliminary step. The system analyst must describe the system to be built and the environment in which it will operate. Based on the data provided by the analyst, the architect can decide on the subsystems and how they interact.
With the Full Self-Driving system already well into development, and Tesla having discovered many of the constraints and requirements by trial and error, I can skip to the current state of the system.
It seems from the outside that progress has stalled as it did a few times before. Elon Musk has described those instances as reaching a “local maximum.” It is a nice way of saying that it is the best that can be done with the technology used. The solution each time was better, newer technology.
I have no doubt that that is what Tesla is looking for now. But the irony of the current local maximum is that it is not about the technology. It is about methodology, and possibly architectural shortcomings.
The current system is a neural net–based artificial intelligence system. The first attempts to build AI systems were rule-based and repository-based “Knowledge Systems.” And here is the big irony — the best driving instructors train their pupils to behave like rule-based automatons.
Rules are to be broken, in the view of many drivers. And they are right. But there are rules for breaking the rules safely.
The FSD system should not try to drive like a human, but like a rule-based automaton. You cannot train this behavior by mimicking the driving of a million individuals all following their own rules. The examples the training software gets should be scored according to the rules of a well-behaved automaton.
Tesla has billions of miles of driving data. For nearly every conceivable situation, there is enough data to train its AI. What is lacking is a clear qualification of which examples are good and which are garbage. That is clear from the many instances where the FSD system chooses a wrong solution, making the kinds of driving mistakes that are very common among human drivers.
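Tesla has not published how, or whether, it scores training clips against driving rules, so the following Python sketch is purely illustrative. Every field name, threshold, and function in it is invented for this article; it only shows what "qualifying good examples versus garbage" with a rule-based filter could look like in principle.

```python
# Illustrative only: a toy rule-based filter for qualifying driving clips
# as training examples. The clip fields and rule thresholds are invented
# for this sketch and do not reflect Tesla's actual data pipeline.

from dataclasses import dataclass
from typing import List

@dataclass
class Clip:
    """A hypothetical summary of one recorded driving segment."""
    speed_limit_kph: float
    max_speed_kph: float
    stop_signs: int              # stop signs encountered in the clip
    full_stops: int              # full stops actually made
    lane_changes: int
    signaled_lane_changes: int

def rule_violations(clip: Clip) -> List[str]:
    """Return the list of rules the human driver broke in this clip."""
    violations = []
    if clip.max_speed_kph > clip.speed_limit_kph + 3:   # small tolerance
        violations.append("exceeded speed limit")
    if clip.full_stops < clip.stop_signs:
        violations.append("rolled through a stop sign")
    if clip.signaled_lane_changes < clip.lane_changes:
        violations.append("changed lanes without signaling")
    return violations

def is_good_example(clip: Clip) -> bool:
    """Keep only clips in which the driver behaved like the rule-following
    automaton we want the network to imitate; everything else is garbage."""
    return not rule_violations(clip)

# Example: one compliant clip and one that speeds and rolls a stop sign.
clips = [
    Clip(50, 49, stop_signs=2, full_stops=2, lane_changes=1, signaled_lane_changes=1),
    Clip(50, 62, stop_signs=1, full_stops=0, lane_changes=2, signaled_lane_changes=1),
]
training_set = [c for c in clips if is_good_example(c)]
print(f"kept {len(training_set)} of {len(clips)} clips")
```

The point of the sketch is not the particular rules, but that the qualification step happens before training: only clips that pass the rule check would be allowed to shape the network's behavior.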
I think the basic mistake Tesla has made is trying to train the FSD system to drive well based on examples of bad driving. American drivers are among the worst in the developed world.
In some third world countries, a driver’s license is granted after showing that you can drive a car for 30 feet in a parking lot — just start, drive, and stop. In some parts of the USA, the requirements are not much different.
In many European countries, the candidate must show the ability to drive through rush-hour traffic, on highways, and in city centers. There are always a number of tricky situations along the route. A single intervention by the examiner means failure, and an intervention does not require a dangerous situation; it happens simply because the candidate is not driving according to the rules. The result is that those European countries have fewer than half the traffic accidents and casualties. [Editor’s note: There are other factors that influence accident rates as well, including city design and transportation infrastructure planning. —Zach Shahan]
We need FSD. Without a system that really knows the rules, however, we will never get it. The FSD neural net has to learn to drive like a rule-based automaton.
