Human drivers confront and handle an incredible variety of situations and scenarios: terrain, roadway types, traffic conditions, and weather, all of which autonomous vehicle technology must navigate both safely and efficiently. Many of these situations are edge cases, and they occur with surprising frequency. To achieve advanced levels of autonomy or breakthrough ADAS features, these edge cases must be addressed. In this series, we explore common, real-world scenarios that are difficult for today’s conventional perception solutions to handle reliably. We then describe how AEye’s software-definable iDAR™ (Intelligent Detection and Ranging) successfully perceives and responds to these challenges, improving overall safety.
Download AEye Edge Case: A Pedestrian in Headlights [pdf]
Challenge: A Pedestrian in Headlights
A vehicle equipped with an advanced driver assistance system (ADAS) is on the road at night, traveling down a busy city block filled with pedestrians and vehicles. Its driver is distracted by a text message. As the vehicle approaches an intersection, the headlights of an oncoming car point directly into the lens of its perception system’s camera, just as a pedestrian steps off the curb. To react correctly, the system must not only register the pedestrian but also send detailed data about her to the domain controller. This data must enable the controller to classify the pedestrian and determine the direction she’s headed and how fast she’s moving, so that the controller can decide whether to brake or swerve.
How Current Solutions Fall Short
Today’s advanced driver assistance systems have great difficulty recognizing these threats and reacting appropriately. They either fail to detect the pedestrian before it’s too late or, if biased toward braking, constantly slam on the brakes whenever an unclassified object, such as a reflection or soft target, enters the vehicle’s path. Such behavior either creates a nuisance or causes accidents.
Camera. A camera’s performance is conditional on the environment. In this scenario, the problem is that the camera’s limited dynamic range may not be able to handle the sharp contrast between the ambient low light and the glare from oncoming headlights. The large difference in light intensity between the surroundings and what’s shining into the camera lens saturates some of the image sensor’s pixels, and the excess charge spills into neighboring pixels, an effect called blooming. As a result, the camera has little or no information to send to the perception system, and obstacles, or pedestrians, may be hiding in that blind spot.
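As a rough illustration of how a perception stack might flag this failure mode, the sketch below scans a frame for saturated pixels and declares the frame compromised when too many sit at the sensor’s ceiling. This is our own simplified example, not AEye’s implementation; the thresholds are illustrative.

```python
# Minimal sketch (not AEye's implementation): flag bloomed regions in a camera
# frame so downstream logic knows where image data is unreliable.
import numpy as np

SATURATION_LEVEL = 250       # near the 8-bit ceiling of 255; tunable assumption
MIN_BLOOM_FRACTION = 0.02    # flag the frame if >2% of pixels are saturated

def find_bloom_mask(frame: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels at or above the saturation level."""
    return frame >= SATURATION_LEVEL

def frame_is_compromised(frame: np.ndarray) -> bool:
    """True when enough pixels are bloomed that detections may be hidden."""
    return find_bloom_mask(frame).mean() >= MIN_BLOOM_FRACTION

# Example: a dark night scene with a bright headlight patch
frame = np.full((480, 640), 20, dtype=np.uint8)   # low ambient light
frame[200:280, 300:420] = 255                     # saturated headlight glare
print(frame_is_compromised(frame))                # True: camera data is suspect
```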
Radar. Radar is not adversely affected by light conditions, so oncoming headlights have no impact on its ability to see the pedestrian. However, the radio waves it uses to detect objects offer only limited resolution, so radar contributes little to resolving the problem: everything it detects appears as an amorphous shape. Moreover, radar’s ability to detect objects depends on their materials. Metallic objects, like vehicles, produce strong radar returns; soft objects, like pedestrians, produce weak ones.
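The gap between those returns can be made concrete with the standard radar range equation. The sketch below uses rough textbook radar cross-section (RCS) figures, not measurements of any specific sensor, to show how much weaker a pedestrian’s echo is than a car’s at the same range.

```python
# Back-of-the-envelope sketch of why radar sees cars far better than
# pedestrians, using the monostatic radar range equation:
#   Pr = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)
# RCS values are rough textbook figures (assumptions, not measured data).
import math

def received_power(pt_w: float, gain: float, wavelength_m: float,
                   rcs_m2: float, range_m: float) -> float:
    """Received echo power in watts for a monostatic radar."""
    return (pt_w * gain**2 * wavelength_m**2 * rcs_m2) / ((4 * math.pi)**3 * range_m**4)

WAVELENGTH_77GHZ = 3e8 / 77e9   # ~3.9 mm automotive radar

car = received_power(pt_w=1.0, gain=100.0, wavelength_m=WAVELENGTH_77GHZ,
                     rcs_m2=10.0, range_m=30.0)   # metallic car, ~10 m^2 RCS
ped = received_power(pt_w=1.0, gain=100.0, wavelength_m=WAVELENGTH_77GHZ,
                     rcs_m2=0.5, range_m=30.0)    # pedestrian, ~0.5 m^2 RCS

print(f"car/pedestrian return ratio: {car / ped:.0f}x")  # ~20x weaker echo
```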
Camera + Radar. While combining a camera with radar can improve detectability, a system that relies on this pairing will be unable to assess this situation accurately. When the camera fails to detect the pedestrian, the perception system must rely entirely on the radar to send data about the environment to the domain controller. Surrounding vehicles will register clearly, but soft objects like pedestrians, especially those close to vehicles, will be hard to distinguish at all, and certainly not well enough for classification.
LiDAR. LiDAR relies on directed laser light to determine an object’s 3D position in space to centimeter-level accuracy, so it also does not struggle with light saturation. Where conventional LiDAR falls short is that it collects its scans passively: it scans the environment uniformly, giving the same attention to irrelevant objects (parked vehicles, buildings, trees) as to objects in motion (pedestrians, moving vehicles). In this scenario, a low-density, fixed-scanning LiDAR would be hard-pressed to prioritize and track the pedestrian. As a result, the system would likely be unable to gather sufficient data about her location, velocity, and trajectory fast enough for the vehicle’s controller to respond in time.
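Some back-of-the-envelope arithmetic shows why a uniform scan yields so little data on the pedestrian. The angular resolutions below are illustrative values we chose for a typical fixed-pattern sensor, not AEye specifications.

```python
# Rough arithmetic (our illustration, not AEye data): how few returns a
# uniform fixed-pattern LiDAR puts on a pedestrian-sized target per frame.
import math

H_RES_DEG = 0.2      # horizontal angular spacing of a typical fixed scan (assumed)
V_RES_DEG = 1.0      # vertical spacing between scan lines (assumed)

def points_on_target(width_m: float, height_m: float, range_m: float) -> int:
    """Approximate returns per frame from a uniform angular grid."""
    ang_w = math.degrees(math.atan2(width_m, range_m))
    ang_h = math.degrees(math.atan2(height_m, range_m))
    return int(ang_w / H_RES_DEG) * int(ang_h / V_RES_DEG)

# A 0.5 m wide, 1.7 m tall pedestrian at 30 m yields only ~12 points per frame,
# too sparse to classify her or estimate velocity within a frame or two.
print(points_on_target(width_m=0.5, height_m=1.7, range_m=30.0))
```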
Successfully Resolving the Challenge with iDAR
The moment the camera experiences a loss of data, iDAR dynamically changes the LiDAR’s temporal and spatial sampling density, selectively foveating on moving objects, much like the human eye, and comprehensively “painting” them with a dense pattern of laser pulses. At the same time, it keeps tabs on stationary background objects (parked cars, buildings, trees). By selectively allocating additional shots to the most important objects in a scene, like pedestrians, iDAR is able to gather comprehensive data without overloading system resources. This data can then be used to extract additional information about moving objects, such as their identity, direction, and velocity.
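The core idea of dividing a fixed per-frame shot budget by priority can be sketched in a few lines. The weights and object model below are our own assumptions for illustration; they are not AEye’s scheduler.

```python
# Minimal sketch of priority-weighted shot allocation (our assumptions, not
# AEye's scheduler): moving objects get dense coverage, static background
# gets a light "keep-alive" revisit.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    name: str
    moving: bool

PRIORITY = {True: 10.0, False: 1.0}   # moving objects weighted 10x (assumed)

def allocate_shots(objects: list[TrackedObject], budget: int) -> dict[str, int]:
    """Split the per-frame shot budget in proportion to object priority."""
    weights = {o.name: PRIORITY[o.moving] for o in objects}
    total = sum(weights.values())
    return {name: round(budget * w / total) for name, w in weights.items()}

scene = [TrackedObject("pedestrian", True),
         TrackedObject("oncoming_car", True),
         TrackedObject("parked_car", False),
         TrackedObject("building", False)]

print(allocate_shots(scene, budget=1000))
# ~455 shots each for the pedestrian and oncoming car, ~45 each for background
```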
Software Components and Data Types
Cueing + Feedback Loops. In difficult or low-light conditions, iDAR’s intelligent perception system relies on LiDAR to collect data about stationary and moving objects. When the camera’s pixels are saturated and it returns little or no data, the system immediately generates a feedback loop that tells the LiDAR to increase shots in the area of the blooming to search for potential threats.
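One way to picture such a feedback loop is below: a saturated pixel region is converted to an angular region of interest, and the LiDAR is asked to densify its scan there. This is our simplified model with assumed field-of-view numbers, not AEye’s API.

```python
# Sketch of a cue-and-feedback step (our simplified model, not AEye's API):
# map a bloomed camera region to sensor angles and boost LiDAR density there.
from dataclasses import dataclass

@dataclass
class RegionOfInterest:
    az_min: float   # azimuth bounds, degrees
    az_max: float
    el_min: float   # elevation bounds, degrees
    el_max: float

def pixel_box_to_roi(x0, x1, y0, y1, width, height,
                     hfov_deg=60.0, vfov_deg=40.0) -> RegionOfInterest:
    """Map a saturated pixel box to angles via a pinhole-style approximation."""
    to_az = lambda x: (x / width - 0.5) * hfov_deg
    to_el = lambda y: (0.5 - y / height) * vfov_deg
    return RegionOfInterest(to_az(x0), to_az(x1), to_el(y1), to_el(y0))

def on_camera_blooming(bloom_box, lidar_schedule):
    """Feedback step: raise LiDAR revisit rate inside the camera's blind spot."""
    roi = pixel_box_to_roi(*bloom_box, width=640, height=480)
    lidar_schedule.append({"roi": roi, "revisit_hz": 30, "density": "high"})

schedule = []
on_camera_blooming((300, 420, 200, 280), schedule)  # the glare patch from above
print(schedule[0]["roi"])
```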
True Velocity. Scanning the pedestrian at a much higher rate than the rest of the environment enables iDAR to gather the most useful information, including her motion vector and true velocity. This data is crucial for the domain controller, which needs to determine how fast the pedestrian is moving and in which direction she’s headed.
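In its simplest form, a velocity vector falls out of two timestamped positions of the same object; dense revisits shorten the interval and make the estimate usable in time. This is an illustrative computation, not AEye’s pipeline.

```python
# Illustrative velocity estimate (not AEye's pipeline) from two timestamped
# 3D detections of the same tracked object.
import numpy as np

def velocity_vector(p0: np.ndarray, t0: float,
                    p1: np.ndarray, t1: float) -> np.ndarray:
    """Finite-difference velocity estimate in m/s from two positions."""
    return (p1 - p0) / (t1 - t0)

# Pedestrian stepping off the curb: two detections 50 ms apart
p0 = np.array([12.00, 3.00, 0.0])   # x forward, y left, z up (meters)
p1 = np.array([11.95, 2.93, 0.0])
v = velocity_vector(p0, 0.00, p1, 0.05)
print(f"velocity {v} m/s, speed {np.linalg.norm(v):.2f} m/s")
# ~1.7 m/s, crossing toward the vehicle's lane
```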
Intensity. iDAR collects data about the intensity of laser light reflecting back to the LiDAR and uses it to make crucial decisions. Pedestrians are inherently less reflective than metallic objects, like vehicles, so laser light bouncing off of them is less intense. In many situations, intensity data can help iDAR’s perception system better distinguish soft objects from the surrounding environment.
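A toy version of that discrimination is sketched below. Real reflectivity estimation must compensate for range, since return intensity falls off with distance; the falloff model and cutoff here are purely illustrative assumptions.

```python
# Hedged sketch of separating soft targets from metallic ones by return
# intensity. The 1/R^2 falloff model and the cutoff value are assumptions
# for illustration, not AEye parameters.
def estimated_reflectivity(intensity: float, range_m: float,
                           ref_range_m: float = 10.0) -> float:
    """Range-compensated reflectivity proxy, assuming ~1/R^2 intensity falloff."""
    return intensity * (range_m / ref_range_m) ** 2

SOFT_TARGET_MAX = 0.35   # assumed cutoff between clothing and metal/retroreflectors

def classify_return(intensity: float, range_m: float) -> str:
    """Label a LiDAR return as a soft or hard target."""
    return "soft" if estimated_reflectivity(intensity, range_m) < SOFT_TARGET_MAX else "hard"

print(classify_return(intensity=0.05, range_m=20.0))  # 'soft'  (pedestrian-like)
print(classify_return(intensity=0.30, range_m=20.0))  # 'hard'  (vehicle-like)
```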
The Value of AEye’s iDAR
Intelligent LiDAR sensors embedded with AI for perception are very different from those that passively collect data. When a vehicle’s perception system loses the benefit of camera data, iDAR selectively allocates additional LiDAR shots to generate a dense pattern of laser pulses around every object in motion. Using this information, the LiDAR can classify objects and extract important information, such as direction and velocity. This unprecedented ability to calculate valuable attributes enables the vehicle to react more rapidly to immediate threats and track them through time and space more accurately.