Human drivers confront and handle an incredible variety of situations and scenarios (terrain, roadway types, traffic conditions, weather conditions) that autonomous vehicle technology must navigate both safely and efficiently. Many of these are edge cases, and they occur with surprising frequency. To achieve advanced levels of autonomy or breakthrough ADAS features, these edge cases must be addressed. In this series, we explore common, real-world scenarios that are difficult for today’s conventional perception solutions to handle reliably. We then describe how AEye’s software-definable iDAR™ (Intelligent Detection and Ranging) successfully perceives and responds to these challenges, improving overall safety.
Challenge: A Balloon Floating Across The Road
A vehicle equipped with an advanced driver assistance system (ADAS) is traveling down a residential block on a sunny afternoon when the air is relatively still. A balloon from a child’s birthday party comes floating across the road, drifts down, and ends up suspended almost motionless in the lane ahead. If the driver isn’t paying attention, this is a dangerous situation: the vehicle’s perception system must make a series of quick assessments to avoid causing an accident. Not only must it detect the object in front of it, it must also classify the object to determine whether it’s a threat. The vehicle’s domain controller can then decide that the balloon is not a threat and drive through it.
How Current Solutions Fall Short
Today’s ADAS solutions will have great difficulty detecting the balloon or classifying it quickly enough to react in the safest way possible. Typically, ADAS vehicle sensors are trained not to activate the brakes for every anomaly on the road because it is assumed that a human driver is paying attention. As a result, in many cases, they will allow the car to drive into such objects. In contrast, Level 4 or 5 self-driving vehicles are biased toward avoiding collisions. In this scenario, they’ll either undertake evasive maneuvers or slam on the brakes, creating an unnecessary incident or causing an accident.
Camera. It is extremely difficult for a camera to distinguish between soft and hard objects; everything is just pixels. In this case, perception training is practically impossible because, in the real world, soft objects can appear in an almost infinite variety of shapes, forms, and colors, possibly even taking on human-like shapes in poor lighting conditions. Camera detection performance is completely dependent on proper training for all possible permutations of a soft target’s appearance in combination with the prevailing conditions. Sun glare, shade, or nighttime operation will further degrade performance.
Radar. An object’s material is of vital significance to radar. A soft object that contains no metal and has little reflectivity reflects radio waves poorly, so radar will likely miss the balloon altogether. Additionally, radar is typically trained to disregard stationary objects; otherwise, it would detect thousands of objects as the vehicle advances through the environment. So, even if the balloon is made from reflective metallic plastic, because it’s floating nearly still in the air, there might not be enough movement for the radar to detect it. Therefore, radar will provide little, if any, value in correctly classifying the balloon and assessing it as a potential threat.
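To make that stationary-object bias concrete, here is a minimal sketch, with hypothetical speeds and thresholds, of the kind of moving-target filter that causes a hovering balloon to be discarded:

```python
# A minimal sketch (not radar firmware): moving-target filters keep only
# returns whose ground-relative velocity is non-zero after ego-motion
# compensation. All names and thresholds here are hypothetical.

EGO_SPEED_MPS = 12.0       # vehicle speed
MIN_GROUND_SPEED = 0.5     # returns slower than this are treated as clutter

def keep_radar_return(range_rate_mps: float, cos_bearing: float) -> bool:
    """range_rate_mps: measured Doppler range rate (negative = approaching).
    cos_bearing: cosine of the angle between the return and boresight."""
    # A stationary object appears to approach at ego_speed * cos(bearing),
    # so adding that term back recovers its ground-relative speed.
    ground_speed = range_rate_mps + EGO_SPEED_MPS * cos_bearing
    return abs(ground_speed) > MIN_GROUND_SPEED

print(keep_radar_return(-12.0, 1.0))   # hovering balloon ahead -> False (dropped)
print(keep_radar_return(-13.5, 1.0))   # oncoming pedestrian    -> True  (kept)
```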
Camera + Radar. Together, camera and radar would still be unable to assess this scenario and react correctly every time. The camera would try to detect the balloon, but in many scenarios it would identify the balloon incorrectly, or not at all, depending on lighting and perception training. The camera will frequently be confused: it might identify the balloon as a pedestrian or something else for which the vehicle needs to brake. And radar will be unable to resolve the camera’s confusion because it typically won’t detect the balloon at all.
LiDAR. Unlike radar and camera, LiDAR is much more resilient to lighting conditions and an object’s material. LiDAR can precisely determine the balloon’s 3D position in space to centimeter-level accuracy. However, conventional low-density scanning LiDAR falls short when it comes to providing sufficient data fast enough for classification and path planning. Typically, LiDAR detection algorithms require many laser points on an object over several frames before it registers as a valid object. A low-density LiDAR that passively scans the surroundings horizontally can struggle to achieve the required number of detections on soft, shape-shifting objects like balloons.
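The sketch below illustrates that gating rule with hypothetical thresholds; real pipelines vary, but the effect on a sparse, shape-shifting target is the same:

```python
# A minimal sketch (hypothetical thresholds): a cluster must collect enough
# points over consecutive frames before a conventional LiDAR pipeline
# reports it as a valid object.

MIN_POINTS_PER_FRAME = 5
MIN_CONSECUTIVE_FRAMES = 3

def is_valid_object(points_per_frame: list[int]) -> bool:
    """points_per_frame: LiDAR returns landing on the cluster each frame."""
    streak = 0
    for count in points_per_frame:
        streak = streak + 1 if count >= MIN_POINTS_PER_FRAME else 0
        if streak >= MIN_CONSECUTIVE_FRAMES:
            return True
    return False

# A small balloon may catch only a few sparse returns per sweep and never
# clear the gate, while a car-sized object does so easily:
print(is_valid_object([2, 1, 3, 2]))       # False -> balloon never registers
print(is_valid_object([9, 11, 10, 12]))    # True  -> reported as an object
```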
Successfully Resolving the Challenge with iDAR
In this scenario, iDAR excels because it can gather sufficient data at the sensor level for classifying the balloon and determining its distance, shape, and velocity before any data is sent to the domain controller. This is possible because as soon as there’s a single LiDAR detection of the balloon, iDAR will immediately flag it with a Dynamic Region of Interest (ROI). At that point, the LiDAR will generate a dense pattern of laser pulses in the area, interrogating the balloon for additional information. All this takes place while iDAR also continues to track the background environment to ensure it never misses new objects.
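A simplified sketch of that scheduling idea, with hypothetical shot geometry rather than AEye’s actual API, might look like this:

```python
# A simplified sketch of the Dynamic ROI behavior described above: the first
# detection triggers a dense raster over a small window around the target,
# appended to, never replacing, the background scan pattern.

def schedule_frame(background_shots: list[tuple[float, float]],
                   detection: tuple[float, float] | None,
                   half_width_deg: float = 2.0,
                   step_deg: float = 0.2) -> list[tuple[float, float]]:
    """Return the (azimuth, elevation) shot list for the next frame."""
    shots = list(background_shots)        # the background scan never stops
    if detection is not None:
        az0, el0 = detection
        steps = int(2 * half_width_deg / step_deg)
        # Dense raster centered on the detection: the Dynamic ROI.
        for i in range(steps + 1):
            for j in range(steps + 1):
                shots.append((az0 - half_width_deg + i * step_deg,
                              el0 - half_width_deg + j * step_deg))
    return shots

sparse = [(az / 2, 0.0) for az in range(-120, 121)]      # coarse sweep
frame = schedule_frame(sparse, detection=(3.5, -1.0))
print(len(sparse), len(frame))   # ROI adds ~441 extra shots on the balloon
```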
Software Components and Data Types
Computer Vision. iDAR is designed with computer vision that creates a smarter, more focused LiDAR point cloud. In order to effectively “see” the balloon, iDAR combines the camera’s 2D pixels with the LiDAR’s 3D voxels to create Dynamic Vixels. This combination helps iDAR refine the LiDAR point cloud on the balloon, effectively eliminating all the irrelevant points.
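As a rough illustration of the pixel-plus-voxel idea, here is a minimal pinhole-projection sketch; the data layout is hypothetical, not AEye’s Dynamic Vixel format:

```python
# A rough sketch of pixel/voxel fusion in the spirit of Dynamic Vixels:
# each LiDAR return is projected into the camera image with a pinhole
# model and tagged with the RGB value it lands on.

import numpy as np

def fuse_point(point_xyz: np.ndarray, image: np.ndarray, K: np.ndarray):
    """point_xyz: 3D point in the camera frame (x right, y down, z forward).
    image: HxWx3 RGB array. K: 3x3 pinhole intrinsic matrix.
    Returns (x, y, z, r, g, b), or None if the point is not visible."""
    x, y, z = point_xyz
    if z <= 0:
        return None                              # behind the camera
    u, v, w = K @ point_xyz
    col, row = int(u / w), int(v / w)
    if not (0 <= row < image.shape[0] and 0 <= col < image.shape[1]):
        return None                              # outside the image
    r, g, b = image[row, col]
    return (x, y, z, int(r), int(g), int(b))

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
image = np.zeros((480, 640, 3), dtype=np.uint8)
image[:, :, 0] = 255                             # an all-red scene
print(fuse_point(np.array([0.5, 0.0, 10.0]), image, K))
```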
Cueing. For safety purposes, it’s essential to classify soft targets at range because their identities determine the vehicle’s specific and immediate response. To generate a dataset rich enough to apply perception algorithms for classification, as soon as the LiDAR detects an object, it cues the camera for deeper information about the object’s color, size, and shape. The perception system then reviews the pixels, running algorithms to define the object’s possible identities. To gain additional insight, the camera cues the LiDAR in turn, and the LiDAR allocates more shots to the target.
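The decision at the heart of that cue chain can be sketched as follows, with hypothetical labels and a hypothetical confidence threshold:

```python
# A hypothetical sketch of the cue-chain decision (these names are not
# AEye APIs): the camera crop is scored against candidate identities, and
# an ambiguous result cues the LiDAR for additional shots on the target.

CONFIDENCE_THRESHOLD = 0.8   # hypothetical

def next_action(hypotheses: dict[str, float]) -> tuple[str, str | None]:
    """hypotheses: classifier scores for the camera crop, by label.
    Returns ("classified", label) when one identity is confident enough,
    or ("cue_lidar", None) to request more LiDAR shots on the target."""
    label, score = max(hypotheses.items(), key=lambda kv: kv[1])
    if score >= CONFIDENCE_THRESHOLD:
        return ("classified", label)
    return ("cue_lidar", None)

print(next_action({"balloon": 0.92, "plastic bag": 0.05}))  # confident
print(next_action({"balloon": 0.48, "pedestrian": 0.41}))   # ambiguous -> more data
```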
Feedback Loops. Intelligent iDAR sensors can cue each other for additional data, and they can also cue themselves. If the camera lacks data (due to lighting conditions, for example), the LiDAR generates a feedback loop that tells the sensor to “paint” the balloon with a dense pattern of laser pulses. This enables the LiDAR to gather enough data about the target’s size, speed, and direction to help the perception system classify the object without the benefit of camera data.
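To illustrate what the dense pattern makes possible, here is a small sketch, with made-up sample points, of estimating a target’s size and velocity from LiDAR geometry alone:

```python
# A sketch of what the dense "paint" buys when camera data is missing
# (hypothetical inputs, not AEye code): with enough returns on the target
# across two frames, size and velocity fall out of the geometry alone.

import numpy as np

def size_and_velocity(points_t0: np.ndarray, points_t1: np.ndarray,
                      dt: float) -> tuple[np.ndarray, np.ndarray]:
    """points_t0, points_t1: Nx3 arrays of returns on the target in two
    consecutive frames; dt: time between frames in seconds."""
    extent = points_t1.max(axis=0) - points_t1.min(axis=0)  # rough bounding box
    velocity = (points_t1.mean(axis=0) - points_t0.mean(axis=0)) / dt
    return extent, velocity

# A ~0.3 m object whose centroid barely moves between frames is consistent
# with a balloon hovering in the lane, not a pedestrian or a vehicle:
t0 = np.array([[10.0, 0.0, 1.2], [10.3, 0.1, 1.4], [10.1, -0.1, 1.3]])
t1 = t0 + np.array([0.0, 0.0, -0.01])            # drifting down very slowly
extent, velocity = size_and_velocity(t0, t1, dt=0.1)
print(extent, velocity)                          # small extent, ~0.1 m/s downward
```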
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are very different from those that passively collect data. When iDAR registers a single detection of a soft target in the road, its priority is classification. To avoid false positives, iDAR schedules a series of LiDAR shots in that area to determine whether the object is a balloon or something else, such as a cement bag, a tumbleweed, or a pedestrian. iDAR can flexibly adjust point cloud density on and around objects of interest and then run classification algorithms at the edge of the network. This ensures that only the most important data is sent to the domain controller for optimal path planning.
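As a final illustration, edge filtering of this kind can be sketched as below; the threat table and confidence threshold are hypothetical stand-ins for a real perception stack:

```python
# A minimal sketch of edge filtering (the threat table and thresholds are
# hypothetical, not AEye's logic): classification runs at the sensor, and
# only detections that matter for path planning reach the domain controller.

THREAT_TABLE = {"pedestrian": True, "cement bag": True,
                "tumbleweed": False, "balloon": False}

def forward_to_domain_controller(label: str, confidence: float,
                                 min_confidence: float = 0.8) -> bool:
    """Forward a detection unless the edge classifier confidently rules
    it out as a non-threat; unresolved objects are never silently dropped."""
    if confidence < min_confidence:
        return True
    return THREAT_TABLE.get(label, True)   # unknown labels treated as threats

print(forward_to_domain_controller("balloon", 0.95))     # False: drive through
print(forward_to_domain_controller("pedestrian", 0.97))  # True: plan around it
```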