Human drivers confront and handle an incredible variety of situations and scenarios (terrain, roadway types, traffic conditions, weather conditions) that autonomous vehicle technology must navigate both safely and efficiently. Many of these situations are edge cases, and they occur with surprising frequency. In order to achieve advanced levels of autonomy or breakthrough ADAS features, these edge cases must be addressed. In this series, we explore common, real-world scenarios that are difficult for today’s conventional perception solutions to handle reliably. We then describe how AEye’s software-definable iDAR™ (Intelligent Detection and Ranging) successfully perceives and responds to these challenges, improving overall safety.
Download AEye Edge Case: Obstacle Avoidance [pdf]
Challenge: Black Trash Can on Roadway
A vehicle equipped with an advanced driver assistance system (ADAS) is cruising down a city street at 35 mph. Its driver is somewhat distracted and following the vehicle ahead too closely. Suddenly, the vehicle ahead swerves out of the lane, narrowly avoiding a black trash can that has fallen off a garbage truck. To avoid a collision, the ADAS must make a quick series of assessments: it must not only detect the trash can but also classify it and gauge its size and threat level. Only then can it decide whether to brake quickly or plan a safe path around the can while avoiding a collision with parallel traffic.
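To make that assessment sequence concrete, here is a minimal decision sketch. The `Detection` type, the `plan_response` function, and the 0.8 g braking figure are all hypothetical illustrations, not AEye's actual logic:

```python
# Illustrative sketch of the assessment sequence described above. All
# names and thresholds are hypothetical, not AEye's actual logic.
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float   # range to the object
    label: str          # classification result, e.g. "trash_can"

def plan_response(det: Detection, speed_mps: float, lane_clear: bool) -> str:
    """Choose between braking in-lane and planning a path around the object."""
    # Rough stopping distance on dry asphalt, assuming ~0.8 g deceleration.
    stopping_m = speed_mps ** 2 / (2 * 0.8 * 9.81)
    if det.distance_m > stopping_m:
        return "brake"        # enough room to stop in-lane
    if lane_clear:
        return "swerve"       # plan a safe path around the can
    return "brake_hard"       # no escape path: mitigate as much as possible

# 35 mph is ~15.6 m/s; the can is spotted 25 m ahead with traffic alongside.
print(plan_response(Detection(25.0, "trash_can"), 15.6, lane_clear=False))
```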
How Current Solutions Fall Short
Today’s advanced driver assistance systems will have great difficulty detecting the trash can and/or classifying it fast enough to react in the safest way possible. Typically, ADAS vehicle systems are trained not to activate the brakes for every anomaly on the road; as a result, in many cases they will simply drive into objects. In contrast, Level 4 or 5 self-driving vehicles are biased toward avoiding collisions. In this scenario, they will either undertake evasive maneuvers or slam on the brakes, either of which could create a nuisance or cause an accident.
Camera. A perception system must be comprehensively trained to interpret every pixel of an image. To solve this edge case, the perception system would need to be trained on every possible permutation of objects lying in the road under every possible lighting condition. Achieving this is particularly difficult because objects can appear in an almost infinite array of shapes, forms, and colors. Moreover, a black trash can on black asphalt further challenges the camera, especially at night and in low-visibility or glare conditions.
Radar. Radar performance is poor when objects are made of plastic, rubber, and other non-metallic materials. As such, a black plastic trash can is difficult for radar to detect.
Camera + Radar. In many cases, a system using camera and radar would be unable to detect the black trash can at all. Moreover, a vehicle that constantly brakes for every road anomaly creates a nuisance and can cause a rear-end accident. So an ADAS equipped with camera plus radar would typically be trained to ignore the trash can in an effort to avoid false positives when encountering objects like speed bumps and small debris.
LiDAR. LiDAR would detect the trash can regardless of perception training, lighting conditions, or its position on the road. At issue here is the low resolution of today’s LiDAR systems. A four-channel LiDAR completes a scan of the surroundings every 100 milliseconds. At this rate, the LiDAR would not be able to achieve the required number of shots on the trash can to register a valid detection: it would take 0.5 seconds before the trash can was even considered an object of interest. Even a 16-channel LiDAR would struggle to accumulate five points fast enough.
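The arithmetic behind that 0.5-second figure is straightforward, assuming roughly one return on the can per 10 Hz scan and the five points cited above:

```python
# Back-of-the-envelope latency math for a conventional scanning LiDAR.
frame_period_s = 0.100           # one full scan every 100 ms (10 Hz)
points_needed = 5                # returns required to confirm a detection
speed_mps = 35 * 0.44704         # 35 mph converted to m/s (~15.6 m/s)

latency_s = points_needed * frame_period_s    # 0.5 s, as noted above
distance_m = speed_mps * latency_s            # ground covered in the meantime

print(f"confirmation latency: {latency_s:.1f} s")
print(f"vehicle travels {distance_m:.1f} m before the can is an object of interest")
```

At 35 mph, the vehicle covers roughly 7.8 meters before a conventional system even flags the can as an object of interest.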
Successfully Resolving the Challenge with iDAR
As soon as the trash can appears in the road ahead, iDAR’s first priority is classification. One of iDAR’s biggest advantages is its agility: it can adjust laser scan patterns in real time, selectively targeting specific objects in the environment and dynamically changing scan density to learn more about them. This ability to instantaneously increase resolution is critical to classifying the trash can quickly. Throughout this process, iDAR simultaneously keeps tabs on everything else. Once the trash can is classified, the domain controller uses what it already knows about the surrounding environment to respond in the safest way possible.
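A rough sketch of what such agile scheduling might look like in software; the `ScanPlan` interface below is invented for illustration, since iDAR's real API is not public:

```python
# Hypothetical sketch of software-defined scan scheduling: a steady
# background scan plus extra revisit rates on regions of interest.
from dataclasses import dataclass, field

@dataclass
class ScanPlan:
    background_hz: float = 10.0                # full-frame revisit rate
    rois: dict = field(default_factory=dict)   # region -> boosted revisit rate

    def boost(self, region: tuple, revisit_hz: float) -> None:
        """Dynamically increase scan density on one region of interest."""
        self.rois[region] = revisit_hz

plan = ScanPlan()
# The trash can appears: interrogate that patch of road far more often,
# while the background scan keeps tabs on everything else.
plan.boost(region=(-1.0, 1.0, 20.0, 30.0), revisit_hz=100.0)
print(plan.rois)
```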
Software Components
Computer Vision. iDAR is designed with computer vision that creates a smarter, more focused LiDAR point cloud. To effectively “see” the trash can, iDAR combines the camera’s 2D pixels with the LiDAR’s 3D voxels to create Dynamic Vixels. This combination helps the AI refine the LiDAR point cloud around the trash can, effectively eliminating irrelevant points and leaving only the can’s edges.
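Conceptually, a Dynamic Vixel pairs each relevant camera pixel with the LiDAR return beneath it. The sketch below shows that pairing, assuming the LiDAR point has already been projected into the image; the real Dynamic Vixel format is proprietary, so every field here is illustrative:

```python
# Conceptual fusion of a 2D pixel with a 3D voxel. Field names are
# illustrative; the actual Dynamic Vixel structure is proprietary.
from dataclasses import dataclass

@dataclass
class DynamicVixel:
    x: float; y: float; z: float   # 3D position from the LiDAR voxel
    intensity: float               # laser return intensity
    r: int; g: int; b: int         # color from the co-registered camera pixel

def fuse(point, pixel) -> DynamicVixel:
    """Combine a projected LiDAR return with the camera pixel under it."""
    (x, y, z, intensity), (r, g, b) = point, pixel
    return DynamicVixel(x, y, z, intensity, r, g, b)

# One return off the trash can paired with a dark, black-plastic pixel:
print(fuse((0.4, 25.0, 0.3, 0.62), (18, 17, 19)))
```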
Cueing. For safety purposes, it’s essential to classify objects at range, because their identities determine the vehicle’s specific and immediate response. To generate a dataset rich enough for classification algorithms, the LiDAR cues the camera for deeper real-time analysis of the trash can’s color, size, and shape as soon as it detects the can. The camera then reviews the pixels, running algorithms to define the object’s possible identities. If it needs more information, the camera may in turn cue the LiDAR to allocate additional shots.
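The cueing handshake might look something like the sketch below; the `Camera` and `Lidar` interfaces are hypothetical stand-ins, since the production interfaces are not public:

```python
# A sketch of cross-sensor cueing; the camera and LiDAR interfaces below
# are hypothetical stand-ins for the real (non-public) ones.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    label: str
    confidence: float

class Camera:
    def classify_region(self, bbox):
        # Stub: the pixels suggest a trash can, but not decisively.
        return [Hypothesis("trash_can", 0.7), Hypothesis("tire", 0.2)]

class Lidar:
    def schedule_shots(self, bbox, density):
        print(f"extra {density}-density shots scheduled on {bbox}")

def on_lidar_detection(bbox, camera, lidar, confidence_floor=0.9):
    """A LiDAR detection cues the camera; the camera may cue the LiDAR back."""
    hypotheses = camera.classify_region(bbox)        # color, size, shape
    if max(h.confidence for h in hypotheses) < confidence_floor:
        # Still ambiguous: cue the LiDAR to allocate additional shots here.
        lidar.schedule_shots(bbox, density="high")
    return hypotheses

print(on_lidar_detection((-0.5, 0.5, 24.0, 26.0), Camera(), Lidar()))
```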
Feedback Loops. Intelligent iDAR sensors are capable of cueing themselves. If the camera lacks data, the LiDAR generates a feedback loop that tells itself to “paint” the trash can with a dense pattern of laser pulses. This lets the LiDAR gather enough information to run algorithms that effectively infer what the object is. At the same time, it also collects information about the intensity of the laser light reflecting back. Because a plastic trash can is more reflective than the road, the laser light bouncing off of it will be more intense, letting the perception system better distinguish the can from the asphalt.
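As a simple illustration of the intensity cue, assume normalized return intensities for asphalt and plastic; the specific values and the threshold below are invented for the example:

```python
# Illustrative separation of the plastic can from dark asphalt by return
# intensity; all intensity values and the threshold are invented.
import statistics

road_returns = [0.08, 0.10, 0.09, 0.11]     # dark asphalt reflects weakly
can_returns = [0.31, 0.35, 0.29, 0.33]      # plastic reflects more strongly

threshold = 0.2   # assumed split between road and object returns
obstacle = [i for i in road_returns + can_returns if i > threshold]
print(f"{len(obstacle)} returns above threshold: distinct object on the road")
print(f"mean object intensity: {statistics.mean(obstacle):.2f}")
```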
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are very different from those that passively collect data. When iDAR registers a single detection of an object in the road, its priority is to determine the object’s size and identity. iDAR schedules a series of LiDAR shots in that area and combines that data with camera pixels. It can flexibly adjust point cloud density around objects, running classification algorithms at the edge of the network before anything is sent to the domain controller. This greatly reduces latency and ensures that only the most important data is used to determine whether the vehicle should brake or swerve.
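One way to picture classification at the edge: the sensor collapses a dense point cluster into a compact object report before anything reaches the domain controller. Everything below, message fields included, is a hypothetical illustration:

```python
# Hypothetical sketch of edge processing: send the domain controller a
# small object report instead of the raw point cloud.
from dataclasses import dataclass

@dataclass
class ObjectReport:
    label: str
    distance_m: float   # nearest forward range to the object
    width_m: float      # lateral extent
    action: str         # suggested response, e.g. "brake" vs "ignore"

def summarize(points, label: str) -> ObjectReport:
    """Reduce an object's dense point cluster to a few bytes of report."""
    lateral = [p[0] for p in points]
    forward = [p[1] for p in points]
    action = "brake" if label == "trash_can" else "ignore"
    return ObjectReport(label, min(forward), max(lateral) - min(lateral), action)

# A cluster of (lateral, forward) returns off the can, in meters:
cluster = [(-0.3, 25.1), (0.0, 25.0), (0.3, 25.2), (0.1, 24.9)]
print(summarize(cluster, "trash_can"))
```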