Human drivers confront and handle an incredible variety of situations and scenarios—terrain, roadway types, traffic conditions, weather—which autonomous vehicle technology must navigate both safely and efficiently. These are edge cases, and they occur with surprising frequency. In order to achieve advanced levels of autonomy or breakthrough ADAS features, these edge cases must be addressed. In this series, we explore common, real-world scenarios that are difficult for today’s conventional perception solutions to handle reliably. We’ll then describe how AEye’s software-definable iDAR™ (Intelligent Detection and Ranging) successfully perceives and responds to these challenges, improving overall safety.
Download AEye Edge Case: Cargo Protruding From Vehicle [pdf]
Challenge: Cargo Protruding from Vehicle
A vehicle equipped with an advanced driver assistance system (ADAS) is driving down a road at 20 mph. Directly ahead, a large pick-up truck stops abruptly. Its bed is filled with lumber, much of which is jutting out the back and into the lane. If the driver of the ADAS vehicle isn’t paying attention, this is a potentially fatal scenario. As the distance between the two vehicles quickly shrinks, the ADAS vehicle’s domain controller must make a series of critical assessments to identify the object and avoid a collision. However, this depends on its perception system’s ability to detect the lumber. Numerous factors can negatively impact whether a detection takes place, including adverse lighting, weather, and road conditions.
How Current Solutions Fall Short
Today’s advanced driver assistance systems will have great difficulty recognizing this threat and reacting appropriately. Depending on their sensor configuration and perception training, many will fail to register the cargo before it’s too late.
Camera. In scenarios where depth perception is important, cameras run into challenges. By their nature, camera images are two-dimensional. To an untrained camera, cargo sticking out of a truck bed will look like small, elongated rectangles floating above the roadway. In order to interpret this 2D image in 3D, the perception system must be trained—something that is difficult to do given the innumerable permutations of cargo shapes. The scenario becomes even more challenging depending on the time of day. In the afternoon, sunlight reflecting off the truck bed or directly into the camera can create blind spots, obscuring the cargo. At night, there may not be enough dynamic range in the camera image for the perception system to successfully analyze the scene. And if the vehicle’s headlights are in low-beam mode, most of the light will pass underneath the lumber.
Radar. Radar detection is quite limited in scenarios where objects are small and stationary. Typically, radar perception systems disregard stationary objects because otherwise, there would be too many objects for the radar to track. In a scenario featuring narrow, non-reflective objects that are surrounded by reflections from the metal truck bed and parked cars, the radar would have great difficulty detecting the lumber at all.
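To illustrate why this happens, here is a minimal sketch of the kind of stationary-object filter many radar trackers apply. The class, function names, and thresholds are assumptions made for illustration, not any vendor’s actual code: a stopped, weakly reflective object like the lumber is typically dropped before tracking even begins.

```python
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float      # distance to the reflector
    doppler_mps: float  # radial velocity toward the sensor (negative = approaching)
    rcs_dbsm: float     # radar cross-section, a measure of reflectivity

def keep_return(ret: RadarReturn, ego_speed_mps: float,
                min_rcs_dbsm: float = -5.0,
                moving_threshold_mps: float = 0.5) -> bool:
    """Return True if a typical tracker would keep this detection."""
    # A stopped object closes at roughly ego speed, so its velocity over
    # ground is near zero and it looks like clutter to the tracker.
    ground_speed = abs(ret.doppler_mps + ego_speed_mps)
    is_moving = ground_speed > moving_threshold_mps
    is_reflective = ret.rcs_dbsm > min_rcs_dbsm
    return is_moving and is_reflective

# Hypothetical return from the protruding lumber at ~20 mph (8.9 m/s):
# stationary and weakly reflective, so it never reaches the tracker.
lumber = RadarReturn(range_m=25.0, doppler_mps=-8.9, rcs_dbsm=-12.0)
print(keep_return(lumber, ego_speed_mps=8.9))  # False
```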
Camera + Radar. Due to the deficiencies explained above, in most cases a system that combines radar with a camera would be unable to detect the lumber or react quickly. The perception system would need to be trained on an almost infinite variety of small stationary objects associated with all manner of vehicles in all possible light conditions. For radar, many objects are simply less capable of reflecting radio waves. As a result, radar will likely miss or disregard small, non-reflective stationary objects. In addition, radar would be incapable of compensating for the camera’s lack of depth perception.
LiDAR. Conventional LiDAR doesn’t struggle with depth perception. And its performance isn’t significantly impacted by light conditions, nor by an object’s material and reflectivity. However, conventional LiDAR systems are limited because their scan patterns are fixed, as are their Field-of-View, sampling density, and laser shot schedule. In this scenario, as the LiDAR passively scans the environment, its laser points will only hit the small ends of the lumber a few times. Typically, LiDAR perception systems require a minimum of five detections to register an object. Today’s 4-, 16-, and 32-channel systems would likely not collect enough detections early enough to determine that the object was present and a threat.
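As a rough illustration of the minimum-detection point, the sketch below shows the kind of point-count gate a conventional clustering pipeline might apply before reporting an object. The counts and names are invented for illustration; the key idea is that a fixed scan pattern only grazes the narrow ends of the boards, so they never cross the threshold.

```python
def registered_objects(points_per_candidate: dict, min_points: int = 5) -> list:
    """Report only candidates that collected at least `min_points` LiDAR returns."""
    return [name for name, count in points_per_candidate.items() if count >= min_points]

# Hypothetical per-frame point counts from a fixed scan pattern: the truck and
# a parked car collect plenty of returns, the lumber ends only a couple.
frame = {"truck_tailgate": 42, "lumber_ends": 2, "parked_car": 57}
print(registered_objects(frame))  # ['truck_tailgate', 'parked_car'] -- the lumber never registers
```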
Successfully Resolving the Challenge with iDAR
Accurately measuring distance is crucial to solving this challenge. A single LiDAR detection will cause iDAR to immediately flag the cargo as a potential threat. At that point, a quick series of LiDAR shots will be scheduled, directly targeting the cargo and the area around it. By dynamically changing both the LiDAR’s temporal and spatial sampling density, iDAR can comprehensively interrogate the cargo to gain critical information, such as its position in space and distance ahead. Only the most useful and actionable data is sent to the domain controller for planning the safest response.
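A minimal sketch of that scheduling idea, under assumed names and parameters (this is not AEye’s API): once a first detection arrives, queue a dense grid of extra laser shots around it rather than waiting for the next full passive frame.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    azimuth_deg: float
    elevation_deg: float
    range_m: float

def schedule_roi_shots(det: Detection, pad_deg: float = 1.0,
                       step_deg: float = 0.1) -> List[Tuple[float, float]]:
    """Build a dense grid of (azimuth, elevation) shots centered on a detection."""
    shots = []
    az = det.azimuth_deg - pad_deg
    while az <= det.azimuth_deg + pad_deg:
        el = det.elevation_deg - pad_deg
        while el <= det.elevation_deg + pad_deg:
            shots.append((round(az, 2), round(el, 2)))
            el += step_deg
        az += step_deg
    return shots

# Hypothetical first hit on the lumber ~22 m ahead.
first_hit = Detection(azimuth_deg=0.4, elevation_deg=-0.2, range_m=22.0)
roi = schedule_roi_shots(first_hit)
print(f"{len(roi)} extra shots queued around the cargo")
```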
Software Components
Computer Vision. iDAR combines 2D camera pixels with 3D LiDAR voxels to create Dynamic Vixels. This data type helps the system’s AI refine the LiDAR point cloud on and around the cargo, effectively eliminating all the irrelevant points and creating information from discrete data.
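A hedged sketch of what pairing a pixel with a voxel might look like as a data structure; the field names and the simple appearance test are assumptions made for illustration, not the actual Dynamic Vixel format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Voxel:
    x: float
    y: float
    z: float
    intensity: float

@dataclass
class Pixel:
    u: int
    v: int
    rgb: Tuple[int, int, int]

@dataclass
class DynamicVixel:
    voxel: Voxel  # where the LiDAR return sits in space
    pixel: Pixel  # what that point looks like to the camera

    def is_candidate_cargo(self, road_rgb: Tuple[int, int, int], tol: int = 40) -> bool:
        """Keep points whose appearance differs clearly from the road surface."""
        return any(abs(c - r) > tol for c, r in zip(self.pixel.rgb, road_rgb))

# A LiDAR return 22 m ahead whose camera pixel is wood-colored rather than asphalt.
vix = DynamicVixel(Voxel(22.0, 0.3, 1.1, 0.6), Pixel(640, 310, (190, 160, 110)))
print(vix.is_candidate_cargo(road_rgb=(90, 90, 95)))  # True -> keep and interrogate
```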
Cueing. As soon as iDAR registers a single detection of the cargo, the sensor flags the region where the cargo appears and cues the camera for deeper real-time analysis of its color, shape, and other attributes. If light conditions are favorable, the camera’s AI reviews the pixels to see if there are distinct differences in that region. If there are, it will send detailed data back to the LiDAR, cueing the LiDAR to focus a Dynamic Region of Interest (ROI) on the cargo. If the camera lacks data, the LiDAR will cue itself to increase the point density on and around the detected object, creating an ROI.
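The decision logic described in this step might be sketched as follows; the thresholds and return values are illustrative assumptions, not AEye’s implementation, but they capture the branch between the camera cueing the LiDAR and the LiDAR cueing itself.

```python
from dataclasses import dataclass

@dataclass
class Region:
    center_az_deg: float
    center_el_deg: float
    half_width_deg: float

def cue_after_first_hit(hit_az: float, hit_el: float,
                        camera_mean_luma: float,
                        camera_contrast: float) -> dict:
    """Decide who cues whom, returning the ROI the LiDAR should densify."""
    region = Region(hit_az, hit_el, half_width_deg=1.0)
    camera_usable = camera_mean_luma > 0.15    # enough light to analyze the pixels
    camera_distinct = camera_contrast > 0.25   # region differs from its background
    if camera_usable and camera_distinct:
        # Camera cues the LiDAR with a tighter region around the visible edges.
        region.half_width_deg = 0.5
        return {"cued_by": "camera", "roi": region}
    # Camera lacks data (e.g. at night): the LiDAR cues itself.
    return {"cued_by": "lidar", "roi": region}

print(cue_after_first_hit(0.4, -0.2, camera_mean_luma=0.05, camera_contrast=0.0))
```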
Feedback Loops. A feedback loop is triggered when an algorithm needs additional data from sensors. In this scenario, a feedback loop will be triggered between the camera and the LiDAR. The camera can cue the LiDAR, and the LiDAR can cue additional interrogation points, or a Dynamic Region of Interest, to determine the cargo’s location, size, and true velocity. Once enough data has been gathered, it will be sent to the domain controller so that it can decide whether to apply the brakes or swerve to avoid a collision.
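One way to picture when such a loop terminates is the sketch below: keep requesting interrogation points until enough returns have been collected, then summarize for the domain controller. The round structure, point threshold, and measurement values are all invented for illustration.

```python
def interrogate_until_confident(measure, max_rounds: int = 5,
                                needed_points: int = 15) -> dict:
    """Run feedback rounds until enough ROI returns exist, then summarize."""
    points = []
    for round_idx in range(max_rounds):
        points.extend(measure(round_idx))  # each round adds returns from the ROI
        if len(points) >= needed_points:
            break                          # enough data: stop the loop
    mean_distance = sum(points) / len(points)
    return {"points": len(points), "mean_distance_m": round(mean_distance, 1)}

# Fake measurement source: each feedback round yields a few range readings
# near 22 m (values invented for illustration).
fake_rounds = [[22.1, 21.9, 22.0, 22.2],
               [21.8, 22.0, 22.1, 21.9, 22.0],
               [22.0, 22.1, 21.9, 22.0, 22.2, 21.8, 22.0, 22.1]]
print(interrogate_until_confident(lambda i: fake_rounds[i]))
```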
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are very different from those that passively collect data. As soon as the perception system registers a single valid LiDAR detection of an object extending into the road, iDAR responds intelligently. The LiDAR instantly modifies its scan pattern, increasing laser shots to cover the cargo in a dense pattern of laser pulses. Camera data is used to refine this information. Once the cargo has been classified, and its position in space and distance ahead determined, the domain controller can understand that the cargo poses a threat. At that point, it plans the safest response.