A Pedestrian in Headlights
Human drivers confront and handle an incredible variety of situations and scenarios—terrain, roadway types, traffic conditions, weather conditions—all of which autonomous vehicle technology must navigate both safely and efficiently. These are edge cases, and they occur with surprising frequency. To achieve advanced levels of autonomy or breakthrough ADAS features, these edge cases must be addressed. In this series, we explore common, real-world scenarios that are difficult for today’s conventional perception solutions to handle reliably. We then describe how AEye’s software-definable iDAR™ (Intelligent Detection and Ranging) successfully perceives and responds to these challenges, improving overall safety.
Download AEye Edge Case: A Pedestrian in Headlights [pdf]
Challenge: A Pedestrian in Headlights
A vehicle equipped with an advanced driver assistance system (ADAS) is on the road at night, traveling down a busy city block filled with pedestrians and vehicles. Its driver is distracted by a text message. As the vehicle approaches an intersection, the headlights of an oncoming car point directly into the lens of its perception system’s camera—just as a pedestrian steps off the curb. To react correctly, the system must not only register the pedestrian, but also send detailed data about her to the domain controller. This data must enable the controller to classify the pedestrian and determine the direction she’s headed and how fast she’s moving, so that it can decide whether to brake or swerve.
How Current Solutions Fall Short
Today’s advanced driver assistance systems (ADAS) will experience great difficulty recognizing these threats or reacting appropriately. They will either fail to detect the pedestrian before it’s too late or, if biased towards braking, constantly slam on the brakes whenever an unclassified object, like a reflection or a soft target, enters the vehicle’s path. Either behavior creates a nuisance or causes accidents.
Camera. A camera’s performance is conditional on the environment. In this scenario, the problem is that the camera’s limited dynamic range may not be able to handle the sharp contrast between the ambient low light and the glare from oncoming headlights. The large difference in light intensity between the surroundings and what’s shining into the camera lens causes some of the image sensor pixels to be saturated—an effect called blooming. As a result, there is little-to-no information from the camera to send to the perception system. And there is potential for obstacles—or pedestrians—to be hiding in that blind spot.
Radar. Radar is not adversely affected by light conditions, so oncoming headlights have no impact on its ability to see the pedestrian. However, the way radar detects objects—via radio waves—does little to resolve the problem because of its limited resolution. Radar provides only low-resolution detections, which means that everything it detects appears as an amorphous shape. Moreover, radar’s ability to detect objects depends on their materials: metallic objects, like vehicles, produce strong radar returns; soft objects, like pedestrians, produce weak ones.
Camera + Radar. While combining a camera with radar might improve detectability, a system that relies on the two together will still be unable to assess this situation accurately. When the camera fails to detect the pedestrian, the perception system will rely entirely on the radar to send data about the environment to the domain controller. While surrounding vehicles will register clearly, other soft objects like pedestrians, especially if they are close to vehicles, will be hard to distinguish at all, and certainly not well enough for classification.
LiDAR. LiDAR relies on directed laser light to precisely determine an object’s 3D position in space to centimeter-level accuracy. As such, LiDAR also does not struggle with issues of light saturation. Where conventional LiDAR falls short is that its scans are collected via a passive process. LiDAR scans the environment uniformly, giving the same attention to irrelevant objects (parked vehicles, buildings, trees) as to objects in motion (pedestrians, moving vehicles). In this scenario, low density fixed scanning LiDAR would be challenged to prioritize and track the pedestrian. As a result, the system would likely be unable to gather sufficient data about her location, velocity, and trajectory fast enough for the vehicle’s controller to respond in time.
Successfully Resolving the Challenge with iDAR
The moment the camera experiences a loss of data, iDAR dynamically changes the LiDAR’s temporal and spatial sampling density, selectively foveating on every moving object—much like the human eye—and comprehensively “painting” them with a dense pattern of laser pulses. At the same time, it keeps tabs on stationary background objects (parked cars, buildings, trees). By selectively allocating additional shots to the most important objects in a scene, like pedestrians, iDAR is able to gather comprehensive data without overloading system resources. This data can then be used to extract additional information about moving objects, such as their identity, direction, and velocity.
Software Components and Data Types
Cueing + Feedback Loops. During difficult or low light conditions, iDAR’s intelligent perception system relies on LiDAR to collect data about stationary and moving objects. When the pixels are saturated and the camera returns little or no data, the system will immediately generate a feedback loop that tells the LiDAR to increase shots in the area of the blooming to search for potential threats.
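As a rough illustration of this kind of camera-to-LiDAR feedback loop, the sketch below flags image tiles that are mostly saturated and asks a shot scheduler to densify LiDAR sampling there. The interfaces (ShotScheduler, request_shots) and the thresholds are hypothetical stand-ins, not AEye’s API.

```python
import numpy as np

SATURATION_LEVEL = 250      # assumed 8-bit pixel value treated as "bloomed"
BLOOM_FRACTION = 0.6        # assumed fraction of saturated pixels that triggers a cue

def find_bloomed_regions(image, tile=32):
    """Return (row, col) tile offsets whose pixels are mostly saturated."""
    h, w = image.shape
    bloomed = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patch = image[r:r + tile, c:c + tile]
            if np.mean(patch >= SATURATION_LEVEL) >= BLOOM_FRACTION:
                bloomed.append((r, c))
    return bloomed

class ShotScheduler:
    """Stand-in for a LiDAR shot scheduler; real systems expose a vendor-specific API."""
    def __init__(self):
        self.requests = []
    def request_shots(self, region, density):
        self.requests.append((region, density))

def cue_lidar(scheduler, image, base_density=1, boost=8):
    """Feedback loop: wherever the camera is blinded, request denser LiDAR sampling."""
    for region in find_bloomed_regions(image):
        scheduler.request_shots(region, density=base_density * boost)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    frame[100:200, 300:400] = 255          # simulate headlight glare blooming a patch
    sched = ShotScheduler()
    cue_lidar(sched, frame)
    print(f"{len(sched.requests)} high-density shot requests issued")
```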
True Velocity. Scanning the pedestrian at a much higher rate than the rest of the environment enables iDAR to gather all useful information, including vector and true velocity. These data types are crucial information for the domain controller, which needs to determine how fast the pedestrian is moving and in which direction she’s headed.
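Conceptually, true velocity falls out of two closely spaced, timestamped detections of the same object. The minimal sketch below assumes straight-line motion over a short revisit interval; the positions, timestamps, and function name are illustrative.

```python
import numpy as np

def true_velocity(p1, t1, p2, t2):
    """Estimate a 3D velocity vector and speed from two timestamped detections.

    p1, p2 are (x, y, z) positions in metres; t1, t2 are timestamps in seconds.
    A short revisit interval keeps the straight-line assumption reasonable.
    """
    dt = t2 - t1
    if dt <= 0:
        raise ValueError("detections must be time-ordered")
    v = (np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)) / dt
    return v, float(np.linalg.norm(v))

# Pedestrian revisited 10 ms apart, moving laterally across the lane.
v, speed = true_velocity((12.0, 3.2, 0.0), 0.000, (12.0, 3.185, 0.0), 0.010)
print(f"velocity vector {v} m/s, speed {speed:.2f} m/s")  # ~1.5 m/s, a walking pace
```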
Intensity. iDAR collects data about the intensity of laser light reflecting back to the LiDAR and uses it to make crucial decisions. Pedestrians are inherently less reflective than metallic objects, like vehicles, so laser light bouncing off of them is less intense. In many situations, intensity data can help iDAR’s perception system better distinguish soft objects from the surrounding environment.
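A simple way to picture how intensity can help is a range-normalized threshold on return strength, as sketched below. The 1/R^2-style normalization and the cutoff value are assumptions for illustration; production systems calibrate per sensor and per material.

```python
def likely_soft_target(return_intensity, range_m, reference_intensity=1.0):
    """Crude heuristic: normalize return intensity for range falloff, then threshold.

    The normalization and the 0.3 cutoff are illustrative, not calibrated values.
    """
    normalized = return_intensity * (range_m ** 2) / reference_intensity
    return normalized < 0.3   # assumed cutoff separating low-reflectivity targets

# A weak return at 20 m is flagged as a possible pedestrian rather than a vehicle.
print(likely_soft_target(return_intensity=5e-4, range_m=20.0))   # True
print(likely_soft_target(return_intensity=5e-3, range_m=20.0))   # False
```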
The Value of AEye’s iDAR
Intelligent LiDAR sensors embedded with AI for perception are very different from those that passively collect data. When a vehicle’s perception system loses the benefit of camera data, iDAR selectively allocates additional LiDAR shots to generate a dense pattern of laser pulses around every object that’s in motion. Using this information, the LiDAR can classify objects and extract important information, such as direction and velocity. This unprecedented ability to calculate valuable attributes enables the vehicle to respond more rapidly to immediate threats and track them through time and space more accurately.
Flatbed Trailer Across Roadway
Download AEye Edge Case: Flatbed Trailer Across Roadway [pdf]
Challenge: Flatbed Trailer Across Roadway
A vehicle equipped with an advanced driver assistance system (ADAS) is traveling 45 mph down a four-lane road that passes through a sparsely populated town. Relying on the vehicle to navigate, the driver has largely stopped paying attention. Ahead, a semi-truck towing a flatbed trailer slowly crosses the road. As the distance between the vehicle and the trailer shrinks rapidly, it’s up to the perception system to detect and classify the trailer, as well as measure its velocity and distance. At SAE Level 3 and beyond, where the car is assumed to be in control, the vehicle’s path planning software must make a critical decision about whether to swerve or slam on the brakes before it’s too late.
How Current Solutions Fall Short
Today’s advanced driver assistance systems (ADAS) will experience great difficulty recognizing this threat or reacting appropriately. Depending on its sensor configuration and perception training, the system may fail to register the trailer due to its very thin profile.
Camera. A perception system based on camera sensors will be prone to either misinterpret the threat, register a false positive, or miss the threat entirely. In the distance, the trailer will appear as little more than a two-dimensional line across the roadway. If the vehicle is turning, those same pixels could also be interpreted as a guardrail. In order to be accurate in all scenarios, the perception system must be trained in every possible light condition in combination with all color and size permutations. This poses an immense challenge, as there will be instances that haven’t been foreseen, creating a potentially deadly combination for perception systems that primarily depend on camera data.
Radar. Approached from the side, the profile of a flatbed trailer is very thin. With no better than a few degrees of angular resolution, radars are ill-equipped to detect such narrow horizontal objects. In this case, a majority of the radar’s radio waves will miss the slim profile of the trailer.
Camera + Radar. A perception system that relies only on camera and radar would likely be unable to detect the flatbed trailer and react in time. The camera data would be insufficiently detailed to classify the trailer and would likely lead the perception system to mistake it for one of several common roadway features. Since radar would also be unlikely to detect the full length of the trailer accurately, it too would mislead the perception system. In this instance, the combination of camera and radar does little to improve the odds of accurately classifying the trailer.
LiDAR. Today’s conventional LiDAR produces very dense horizontal scan lines coupled with very poor vertical density. This scan pattern creates a challenge for detection when objects are horizontal, thin, and narrow—it’s easy for LiDAR’s laser shots to miss them entirely. Some LiDAR shots will hit the trailer. However, it takes time to gather the requisite number of detections to register any object. Depending on the vehicle’s speed, this process may take too much time to prevent a collision.
Successfully Resolving the Challenge with iDAR
A vehicle that enters a scene laterally is very difficult to track. iDAR overcomes this difficulty with its ability to selectively allocate LiDAR shots to Regions of Interest (ROIs). As soon as the LiDAR registers a single detection of the trailer, iDAR dynamically changes both the LiDAR’s temporal and spatial sampling density to comprehensively interrogate the trailer, thus gaining critical information like its size and distance ahead.
iDAR can schedule LiDAR shots to revisit Regions of Interest in a matter of microseconds to milliseconds. This means that iDAR can interrogate an object up to 3000x faster than conventional LiDAR systems, which typically require hundreds of milliseconds to revisit an object. As a result, iDAR has an unprecedented ability to calculate valuable attributes, including object distance and velocity (both lateral and radial), faster than any other system.
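To put those revisit intervals in perspective, the arithmetic below shows how much ground a 45 mph vehicle covers between consecutive looks at the trailer, using the article’s figures of roughly 100 ms for a conventional frame versus on the order of a millisecond for an iDAR ROI revisit.

```python
MPH_TO_MS = 0.44704  # miles per hour to metres per second

def gap_closed_between_revisits(speed_mph, revisit_s):
    """Distance the ego vehicle closes on the trailer between two object revisits."""
    return speed_mph * MPH_TO_MS * revisit_s

for label, revisit_s in [("conventional frame (100 ms)", 0.100), ("iDAR ROI revisit (1 ms)", 0.001)]:
    print(f"{label}: {gap_closed_between_revisits(45, revisit_s):.3f} m closed")
# conventional frame (100 ms): 2.012 m closed
# iDAR ROI revisit (1 ms): 0.020 m closed
```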
Software Components
Computer Vision. iDAR combines 2D camera pixels with 3D LiDAR voxels to create Dynamic Vixels. This data type helps the system’s AI refine the LiDAR point cloud around the trailer edges, effectively eliminating all the irrelevant points. As a result, iDAR is able to clearly distinguish the trailer from other roadway features, like guardrails and signage.
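The sketch below shows the generic idea behind pairing camera pixels with LiDAR points: project each 3D point through a pinhole camera model and attach the pixel it lands on. This is an illustrative fusion example, not AEye’s proprietary Dynamic Vixels implementation; the intrinsic matrix, points, and record layout are made up.

```python
import numpy as np

def pair_pixels_with_voxels(points_xyz, intensities, image, K):
    """Project 3D LiDAR points into a camera image and attach the pixel value to each point.

    Assumes the points are already expressed in the camera frame and K is the
    3x3 pinhole intrinsic matrix.
    """
    h, w = image.shape[:2]
    fused = []
    for (x, y, z), inten in zip(points_xyz, intensities):
        if z <= 0:                       # behind the camera plane
            continue
        u = int(round(K[0, 0] * x / z + K[0, 2]))
        v = int(round(K[1, 1] * y / z + K[1, 2]))
        if 0 <= u < w and 0 <= v < h:
            fused.append({"xyz": (x, y, z), "intensity": inten, "pixel": image[v, u]})
    return fused

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
image = np.zeros((480, 640), dtype=np.uint8)
points = [(0.5, 0.1, 10.0), (-2.0, 0.0, 15.0)]
print(len(pair_pixels_with_voxels(points, [0.7, 0.4], image, K)))  # both land in view
```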
Cueing. For safety purposes, it’s essential to classify threats at range because their identities determine the vehicle’s specific and immediate response. To generate a dataset that is rich enough to apply perception algorithms for classification, as soon as LiDAR detects an object, it will cue the AI camera for deeper real-time analysis about its color, size, and shape. The camera will then review the pixels, running algorithms to define the object’s possible identities. To gain additional insights, the camera cues the LiDAR for additional data, which allocates more shots.
Feedback Loops. A feedback loop is triggered when an algorithm needs additional data from sensors. In this scenario, a feedback loop will be triggered between the camera and the LiDAR. The camera can cue the LiDAR, and the LiDAR can cue additional interrogation points, or a Dynamic Region of Interest, to determine the trailer’s true velocity. This information is sent to the domain controller so that it can decide whether to apply the brakes or swerve to avoid a collision.
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are very different than those that passively collect data. As soon as iDAR registers a single detection of the flatbed trailer, it dynamically modifies the LiDAR scan pattern, scheduling a rapid series of shots to cover the trailer with a dense pattern of laser pulses to extract information about its distance and velocity. Flexible shot allocation vastly reduces the required number of shots per frame to extract the most valuable information in every scenario. This not only enables the vehicle’s perception system to more accurately track objects through time and space, it also makes autonomous driving much safer because it eliminates ambiguity, accelerates the perception process, and allows for more efficient use of processing resources.
Obstacle Avoidance
Download AEye Edge Case: Obstacle Avoidance [pdf]
Challenge: Black Trash Can on Roadway
A vehicle equipped with an advanced driver assistance system (ADAS) is cruising down a city street at 35 mph. Its driver is somewhat distracted and also driving too close to the vehicle ahead. Suddenly, the vehicle ahead swerves out of the lane, narrowly avoiding a black trash can that has fallen off a garbage truck. To avoid collision, the ADAS system must make a quick series of assessments. It must not only detect the trash can, it must also classify it and gauge its size and threat level. Then, it can decide whether to brake quickly or plan a safe path around it while avoiding a collision with parallel traffic.
How Current Solutions Fall Short
Today’s advanced driver assistance systems (ADAS) will experience great difficulty detecting the trash can and/or classifying it fast enough to react in the safest way possible. Typically, ADAS vehicle systems are trained to avoid activating the brakes for every anomaly on the road. As a result, in many cases they will simply drive into objects. In contrast, Level 4 or 5 self-driving vehicles are biased toward avoiding collisions. In this scenario, they’ll either undertake evasive maneuvers or slam on the brakes, which could create a nuisance or cause an accident.
Camera. A perception system must be comprehensively trained to interpret all pixels of an image. In order to solve this edge case, the perception system would need to be trained on every possible permutation of objects lying in the road under every possible lighting condition. Achieving this goal is particularly difficult because objects can appear in an almost infinite array of shapes, forms, and colors. Moreover, the black trash can on black asphalt will further challenge the camera, especially at night and during low visibility and glare conditions.
Radar. Radar performance is poor when objects are made of plastic, rubber, and other non-metallic materials. As such, a black plastic trash can is difficult for radar to detect.
Camera + Radar. In many cases, a system using camera and radar would be unable to detect the black trash can at all. Moreover, a vehicle that constantly brakes for every road anomaly creates a nuisance and can cause a rear end accident. So, an ADAS system equipped with camera plus radar would typically be trained to ignore the trash can in an effort to avoid false positives when encountering objects like speed bumps and small debris.
LiDAR. LiDAR would detect the trash can regardless of perception training, lighting conditions, or its position on the road. At issue here is the low resolution of today’s LiDAR systems. A four-channel LiDAR completes a scan of the surroundings every 100 milliseconds. At this rate, LiDAR would not be able to achieve the required number of shots on the trash can to register a valid detection. It would take 0.5 seconds before the trash can was even considered an object of interest. Even a 16-channel LiDAR would struggle to get five points fast enough.
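The back-of-the-envelope calculation below follows the article’s numbers: if a 10 Hz scan yields roughly one return on the trash can per frame and five detections are needed to confirm an object, confirmation takes about half a second, during which a 35 mph vehicle covers nearly eight meters. The one-return-per-frame assumption is illustrative.

```python
MPH_TO_MS = 0.44704
FRAME_PERIOD_S = 0.100        # 10 Hz conventional LiDAR scan, as in the text
DETECTIONS_NEEDED = 5         # typical minimum returns to register a valid object
SPEED_MPH = 35

# Assume (illustratively) that only about one return lands on the can per frame.
latency_s = DETECTIONS_NEEDED * FRAME_PERIOD_S
distance_m = SPEED_MPH * MPH_TO_MS * latency_s
print(f"{latency_s:.1f} s to confirm the object; {distance_m:.1f} m traveled at {SPEED_MPH} mph")
# 0.5 s to confirm the object; 7.8 m traveled at 35 mph
```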
Successfully Resolving the Challenge with iDAR
As soon as the trash can appears in the road ahead, iDAR’s first priority is classification. One of iDAR’s biggest advantages is that it is agile in nature: it can adjust laser scan patterns in real time, selectively targeting specific objects in the environment and dynamically changing scan density to learn more about them. This ability to instantaneously increase resolution is critical, enabling iDAR to classify the trash can quickly. During this process, iDAR simultaneously keeps tabs on everything else. Once the trash can is classified, the domain controller uses what it already knows about the surrounding environment to respond in the safest way possible.
Software Components
Computer Vision. iDAR is designed with computer vision that creates a smarter, more focused LiDAR point cloud. In order to effectively “see” the trash can, iDAR combines the camera’s 2D pixels with the LiDAR’s 3D voxels to create Dynamic Vixels. This combination helps the AI refine the LiDAR point clouds around the trash can, effectively eliminating all the irrelevant points and leaving only its edges.
Cueing. For safety purposes, it’s essential to classify objects at range because their identities determine the vehicle’s specific and immediate response. To generate a dataset that is rich enough to apply perception algorithms for classification, as soon as LiDAR detects the trash can, it will cue the camera for deeper real-time analysis about its color, size, and shape. The camera will then review the pixels, running algorithms to define its possible identities. If it needs more information, the camera may then cue the LiDAR to allocate additional shots.
Feedback Loops. Intelligent iDAR sensors are capable of cueing themselves. If the camera lacks data, the LiDAR will generate a feedback loop that tells itself to “paint” the trash can with a dense pattern of laser pulses. This enables the LiDAR to gather enough information to run algorithms and make an informed determination of what the object is. At the same time, it can also collect information about the intensity of the laser light reflecting back. Because a plastic trash can is more reflective than the road surface, the laser light bouncing off of it will be more intense, helping the perception system distinguish it from its surroundings.
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are very different than those that passively collect data. When iDAR registers a single detection of an object in the road, its priority is to determine its size and identify it. iDAR will schedule a series of LiDAR shots in that area and combine that data with camera pixels. iDAR can flexibly adjust point cloud density around objects, using classification algorithms at the edge of the network before anything is sent to the domain controller. This ensures that there’s greatly reduced latency and that only the most important data is used to determine whether the vehicle should brake or swerve.
Abrupt Stop Detection
Download AEye Edge Case: Abrupt Stop Detection [pdf]
Challenge: A Child Runs into the Street Chasing a Ball
A vehicle equipped with an advanced driver assistance system (ADAS) is cruising down a leafy residential street at 25 mph on a sunny day, with a second vehicle following behind. Its driver is distracted by the radio. Suddenly, a small object enters the road laterally. At that moment, the vehicle’s perception system must make several assessments before the vehicle path controls can react. What is the object, and is it a threat? Is it a ball or something else? More importantly, is a child in pursuit? Each of these scenarios requires a unique response. It’s imperative to brake or swerve for the child. However, engaging the vehicle’s brakes for a lone ball is unnecessary and even dangerous.
How Current Solutions Fall Short
According to a recent study done by AAA, today’s advanced driver assistance systems (ADAS) will experience great difficulty recognizing these threats or reacting appropriately. Depending on road conditions, their passive sensors may fail to detect the ball and won’t register a child until it’s too late. Alternatively, vehicles equipped with systems that are biased towards braking will constantly slam on the brakes for every soft target in the street, creating a nuisance or even causing accidents.
Camera. Camera performance depends on a combination of image quality, Field-of-View, and perception training. While all three are important, perception training is especially relevant here. Cameras are limited when it comes to interpreting unique environments because everything is just a light value. To understand any combination of pixels, AI is required. And AI can’t invent what it hasn’t seen. In order for the perception system to correctly identify a child chasing a ball, it must be trained on every possible permutation of this scenario, including balls of varying colors, materials, and sizes, as well as children of different sizes in various clothing. Moreover, it would need to be trained on children in all possible variations—some approaching the vehicle from behind a parked car, some with just an arm protruding, and so on. Street conditions would need to be accounted for, too, like those with and without shade, and sun glare at different angles. Perception training for every possible scenario may be possible. However, it’s an incredibly costly and time-consuming process.
Radar. Radar’s basic flaw is its coarse angular resolution of no better than a few degrees. When radar picks up an object, it provides only a few detection points to the perception system, enough to distinguish a general blob in the area. Moreover, an object’s size, shape, and material will influence its detectability. Radar can’t distinguish soft objects well, so the signature of a rubber or leather ball would be close to nothing. While radar would detect the child, there would simply not be enough data or time for the system to detect, classify, and react.
Camera + Radar. A system that combines radar with a camera would have difficulty assessing this situation quickly enough to respond correctly. Too many factors have the potential to negatively impact their performance. The perception system would need to be trained for the precise scenario to classify exactly what it was “seeing.” And the radar would need to detect the child early enough, at a wide angle, and possibly from behind parked vehicles (strong surrounding radar reflections), predict its path, and act. In addition, radar may not have sufficient resolution to distinguish between the child and the ball.
LiDAR. Conventional LiDAR’s greatest value in this scenario is that it brings automatic depth measurement for the ball and the child. It can determine, to within a few centimeters, exactly how far away each is in relation to the vehicle. However, today’s LiDAR systems are unable to ensure vehicle safety because they don’t gather important information—such as shape, velocity, and trajectory—fast enough. This is because conventional LiDAR systems are passive sensors that scan everything uniformly in a fixed pattern and assign every detection equal priority. Therefore, they are unable to prioritize and track moving objects, like a child and a ball, over the background environment, like parked cars, the sky, and trees.
Successfully Resolving the Challenge with iDAR
AEye’s iDAR solves this challenge successfully because it can prioritize how it gathers information and thereby understand an object’s context. As soon as an object moves into the road, a single LiDAR detection will set the perception system into action. First, iDAR will cue the camera to learn about its shape and color. In addition, iDAR will define a dense Dynamic Region of Interest (ROI) on the ball. The LiDAR will then interrogate the object, scheduling a rapid series of shots to generate a dense pixel grid of the ROI. This dataset is rich enough to start applying perception algorithms for classification, which will inform and cue further interrogations.
Having classified the ball, the system’s intelligent sensors are trained with algorithms that instruct them to anticipate something in pursuit. At that point, the LiDAR will then schedule another rapid series of shots on the path behind the ball, generating another pixel grid to search for a child. iDAR has a unique ability to intelligently survey the environment, focus on objects, identify them, and make rapid decisions based on their context.
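One way to picture the anticipatory step is to place a search ROI along the ball’s back-trajectory, toward wherever it came from. The geometry below is purely illustrative (constant-velocity assumption, made-up coordinates and ROI size); it is not AEye’s algorithm.

```python
import numpy as np

def roi_behind(ball_xy, velocity_xy, lookback_s=1.5, half_width_m=1.5):
    """Center a square search ROI on the point the ball occupied ~lookback_s ago.

    Constant-velocity assumption; coordinates are in a flat road plane, in metres.
    """
    ball = np.asarray(ball_xy, dtype=float)
    vel = np.asarray(velocity_xy, dtype=float)
    center = ball - vel * lookback_s

    def box(corner):
        return tuple(float(v) for v in corner)

    return {"center": box(center),
            "min": box(center - half_width_m),
            "max": box(center + half_width_m)}

# Ball detected 15 m ahead and 1 m into the lane, moving 3 m/s in from the right-hand curb.
print(roi_behind(ball_xy=(15.0, 1.0), velocity_xy=(0.0, 3.0)))
# ROI lands 4.5 m back along the ball's path, toward the curb a child would follow it from.
```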
Software Components
Computer Vision. iDAR is designed with computer vision, creating a smarter, more focused LiDAR point cloud that mimics the way humans perceive the environment. In order to effectively “see” the ball and the child, iDAR combines the camera’s 2D pixels with the LiDAR’s 3D voxels to create Dynamic Vixels. This combination helps the AI refine the LiDAR point clouds around the ball and the child, effectively eliminating all the irrelevant points and leaving only their edges.
Cueing. A single LiDAR detection of the ball sets the first cue into motion. Immediately, the sensor flags the region where the ball appears, cueing the LiDAR to focus a Dynamic ROI on the ball. Cueing generates a dataset that is rich enough to apply perception algorithms for classification. If the camera lacks data (due to light conditions, etc.), the LiDAR will cue itself to increase the point density around the ROI. This enables it to gather enough data to classify an object and determine whether it’s relevant.
Feedback Loops. Once the ball is detected, a feedback loop is generated by an algorithm that triggers the sensors to focus another ROI immediately behind the ball and to the side of the road to capture anything in pursuit, initiating faster and more accurate classification. This starts another cue. With that data, the system can classify whatever is behind the ball and determine its true velocity so that it can decide whether to apply the brakes or swerve to avoid a collision.
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are vastly different than those that passively collect data. After detecting and classifying the ball, iDAR will immediately foveate in the direction where the child will most likely enter the frame. This ability to intelligently understand the context of a scene enables iDAR to detect the child quickly, calculate the child’s speed of approach, and apply the brakes or swerve to avoid collision. To speed reaction times, each sensor’s data is processed intelligently at the edge of the network. Only the most salient data is then sent to the domain controller for advanced analysis and path planning, ensuring optimal safety.
False Positive
Download AEye Edge Case: False Positive [pdf]
Challenge: A Balloon Floating Across the Road
A vehicle equipped with an advanced driver assistance system (ADAS) is traveling down a residential block on a sunny afternoon when the air is relatively still. A balloon from a child’s birthday party comes floating across the road. It drifts down and ends up suspended almost motionless in the lane ahead. If the driver of an ADAS vehicle isn’t paying attention, this is a dangerous situation. Its perception system must make a series of quick assessments to avoid causing an accident. Not only must it detect the object in front of it, it must also classify it to determine whether it’s a threat. The vehicle’s domain controller can then decide that the balloon is not a threat and drive through it.
How Current Solutions Fall Short
Today’s advanced driver assistance systems (ADAS) will experience great difficulty detecting the balloon or classifying it fast enough to react in the safest way possible. Typically, ADAS vehicle sensors are trained to avoid activating the brakes for every anomaly on the road because it is assumed that a human driver is paying attention. As a result, in many cases, they will simply allow the car to drive into such objects. In contrast, Level 4 or 5 self-driving vehicles are biased toward avoiding collisions. In this scenario, they’ll either undertake evasive maneuvers or slam on the brakes, creating an unnecessary incident or causing an accident.
Camera. It is extremely difficult for a camera to distinguish between soft and hard objects; everything is just pixels. In this case, perception training is practically impossible because in the real world, soft objects can appear in an almost infinite variety of shapes, forms, and colors—possibly even taking on human-like shapes in poor lighting conditions. Camera detection performance is completely dependent on proper training of all possible permutations of a soft target’s appearance in combination with the right conditions. Sun glare, shade, or night time operation will negatively impact performance.
Radar. An object’s material is of vital significance to radar. A soft object containing no metal or having no reflectivity is unable to reflect radio waves, so radar will miss the balloon altogether. Additionally, radar is typically trained to disregard stationary objects because otherwise it would be detecting thousands of objects as the vehicle advances through the environment. So, even if the balloon is made from reflective metallic plastic, because it’s floating in the air, there might not be enough movement for the radar to detect it. Therefore, radar will provide little, if any, value in correctly classifying the balloon and assessing it as a potential threat.
Camera + Radar. Together, camera and radar would be unable to assess the scenario and react correctly every time. The camera would try to detect the balloon. However, there would be many scenarios where the camera will identify it incorrectly or not at all depending on lighting and perception training. The camera will frequently be confused—it might identify the balloon as a pedestrian or something else for which the vehicle needs to brake. And radar will be unable to eliminate the camera confusion because it typically won’t detect the balloon at all.
LiDAR. Unlike radar and camera, LiDAR is much more resilient to lighting conditions, or an object’s material. LiDAR would be able to precisely determine the balloon’s 3D position in space to centimeter-level accuracy. However, conventional low density scanning LiDAR falls short when it comes to providing sufficient data fast enough for classification and path planning. Typically, LiDAR detection algorithms require many laser points on an object over several frames to register as a valid object. A low density LiDAR that passively scans the surroundings horizontally can experience challenges achieving the required number of detects when it comes to soft, shape-shifting objects like balloons.
Successfully Resolving the Challenge with iDAR
In this scenario, iDAR excels because it can gather sufficient data at the sensor level for classifying the balloon and determining its distance, shape, and velocity before any data is sent to the domain controller. This is possible because as soon as there’s a single LiDAR detection of the balloon, iDAR will immediately flag it with a Dynamic Region of Interest (ROI). At that point, the LiDAR will generate a dense pattern of laser pulses in the area, interrogating the balloon for additional information. All this takes place while iDAR also continues to track the background environment to ensure it never misses new objects.
Software Components and Data Types
Computer Vision. iDAR is designed with computer vision that creates a smarter, more focused LiDAR point cloud. In order to effectively “see” the balloon, iDAR combines the camera’s 2D pixels with the LiDAR’s 3D voxels to create Dynamic Vixels. This combination helps iDAR refine the LiDAR point cloud on the balloon, effectively eliminating all the irrelevant points.
Cueing. For safety purposes, it’s essential to classify soft targets at range because their identities determine the vehicle’s specific and immediate response. To generate a dataset that is rich enough to apply perception algorithms for classification, as soon as LiDAR detects an object, it will cue the camera for deeper information about its color, size, and shape. The perception system will then review the pixels, running algorithms to define the object’s possible identities. To gain additional insights, the camera cues the LiDAR for additional data, which allocates more shots.
Feedback Loops. Intelligent iDAR sensors are capable of cueing each other for additional data, and they are also capable of cueing themselves. If the camera lacks data (due to light conditions, etc.), the LiDAR will generate a feedback loop that tells the sensor to “paint” the balloon with a dense pattern of laser pulses. This enables the LiDAR to gather enough data about the target’s size, speed, and direction to effectively aid the perception system in classifying the object without the benefit of camera data.
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are very different than those that passively collect data. When iDAR registers a single detection of a soft target in the road, its priority is classification. To avoid false positives, iDAR will schedule a series of LiDAR shots in that area to determine whether it’s a balloon or something else, like a cement bag, a tumbleweed, or a pedestrian. iDAR can flexibly adjust point cloud density on and around objects of interest and then use classification algorithms at the edge of the network. This ensures only the most important data is sent to the domain controller for optimal path planning.
Cargo Protruding from Vehicle
Download AEye Edge Case: Cargo Protruding From Vehicle [pdf]
Challenge: Cargo Protruding from Vehicle
A vehicle equipped with an advanced driver assistance system (ADAS) is driving down a road at 20 mph. Directly ahead, a large pick-up truck stops abruptly. Its bed is filled with lumber, much of which is jutting out the back and into the lane. If the driver of an ADAS vehicle isn’t paying attention, this is a potentially fatal scenario. As the distance between the two vehicles quickly shrinks, the ADAS vehicle’s domain controller must make a series of critical assessments to identify the object and avoid a collision. However, this is dependent on its perception system’s ability to detect the lumber. Numerous factors can negatively impact whether or not a detection takes place, including adverse lighting, weather, and road conditions.
How Current Solutions Fall Short
Today’s advanced driver assistance systems (ADAS) will experience great difficulty recognizing this threat or reacting appropriately. Depending on their sensor configuration and perception training, many will fail to register the cargo before it’s too late.
Camera. In scenarios where depth perception is important, cameras run into challenges. By their nature, camera images are two dimensional. To an untrained camera, cargo sticking out of a truck bed will look like small, elongated rectangles floating above the roadway. In order to interpret this 2D image in 3D, the perception system must be trained—something that is difficult to do given the innumerable permutations of cargo shapes. The scenario becomes even more challenging depending on time of day. In the afternoon, sunlight reflecting off the truck bed or directly into the camera can create blind spots, obscuring the cargo. At night, there may not be enough dynamic range in the camera image for the perception system to successfully analyze the scene. If the vehicle’s headlights are in low beam mode, most of the light will pass underneath the lumber.
Radar. Radar detection is quite limited in scenarios where objects are small and stationary. Typically, radar perception systems disregard stationary objects because otherwise, there would be too many objects for the radar to track. In a scenario featuring narrow, non-reflective objects that are surrounded by reflections from the metal truck bed and parked cars, the radar would have great difficulty detecting the lumber at all.
Camera + Radar. Due to the deficiencies explained above, in most cases a system that combines radar with a camera would be unable to detect the lumber or react quickly. The perception system would need to be trained on an almost infinite variety of small stationary objects associated with all manner of vehicles in all possible light conditions. For radar, many objects are simply less capable of reflecting radio waves. As a result, radar will likely miss or disregard small, non-reflective stationary objects. In addition, radar would be incapable of compensating for the camera’s lack of depth perception.
LiDAR. Conventional LiDAR doesn’t struggle with depth perception. And its performance isn’t significantly impacted by light conditions, nor by an object’s material and reflectivity. However, conventional LiDAR systems are limited because their scan patterns are fixed, as are their Field-of-View, sampling density, and laser shot schedule. In this scenario, as the LiDAR passively scans the environment, its laser points will only hit the small ends of the lumber a few times. Typically, LiDAR perception systems require a minimum of five detections to register an object. Today’s 4-, 16-, and 32-channel systems would likely not collect enough detections early enough to determine that the object was present and a threat.
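A quick geometric sanity check, sketched below, shows why a fixed vertical scan pattern struggles with thin horizontal objects: with even a couple of degrees between channels, the spacing between scan lines at typical ranges is far larger than the end of a piece of lumber. The resolutions and ranges are illustrative, not specific to any sensor.

```python
import math

def scan_lines_on_target(target_height_m, range_m, vertical_res_deg):
    """Count how many evenly spaced scan lines can intersect a thin horizontal object.

    Illustrative only: real channel spacing varies by sensor and is often non-uniform.
    """
    line_spacing_m = range_m * math.tan(math.radians(vertical_res_deg))
    return int(target_height_m // line_spacing_m) + 1

# A ~10 cm-tall lumber end at 30 m, with 2 degrees between channels:
print(scan_lines_on_target(0.10, 30.0, 2.0))   # 1 -- a single line at best
# The same end at 10 m with 0.5-degree channel spacing:
print(scan_lines_on_target(0.10, 10.0, 0.5))   # 2
```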
Successfully Resolving the Challenge with iDAR
Accurately measuring distance is crucial to solving this challenge. A single LiDAR detection will cause iDAR to immediately flag the cargo as a potential threat. At that point, a quick series of LiDAR shots will be scheduled directly targeting the cargo and the area around it. Dynamically changing both LiDAR’s temporal and spatial sampling density, iDAR can comprehensively interrogate the cargo to gain critical information, such as its position in space and distance ahead. Only the most useful and actionable data is sent to the domain controller for planning the safest response.
Software Components
Computer Vision. iDAR combines 2D camera pixels with 3D LiDAR voxels to create Dynamic Vixels. This data type helps the system’s AI refine the LiDAR point cloud on and around the cargo, effectively eliminating all the irrelevant points and creating information from discrete data.
Cueing. As soon as iDAR registers a single detection of the cargo, the sensor flags the region where the cargo appears and cues the camera for deeper real-time analysis about its color, shape, etc. If light conditions are favorable, the camera’s AI reviews the pixels to see if there are distinct differences in that region. If there are, it will send detailed data back to the LiDAR. This will cue the LiDAR to focus a Dynamic Region of Interest (ROI) on the cargo. If the camera lacks data, the LiDAR will cue itself to increase the point density on and around the detected object, creating an ROI.
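The handshake described above might be sketched roughly as follows, with stand-in camera and LiDAR objects (the class names and calls are hypothetical, not a real API): a detection cues the camera, a confident camera response shapes the ROI, and a camera failure makes the LiDAR cue itself with an even denser interrogation.

```python
class CameraStub:
    """Stand-in for a camera region-analysis call; real interfaces are vendor-specific."""
    def analyze_region(self, bearing_deg, well_lit=True):
        # Return a crude confidence that the flagged region contains distinct structure.
        return 0.8 if well_lit else 0.0

class LidarStub:
    """Stand-in for the LiDAR shot scheduler."""
    def __init__(self):
        self.roi_requests = []
    def focus_roi(self, bearing_deg, density):
        self.roi_requests.append((bearing_deg, density))

def handle_detection(bearing_deg, camera, lidar, well_lit=True, confirm_threshold=0.5):
    """A single LiDAR detection cues the camera; a confident camera response yields a
    high-density ROI, while a camera failure makes the LiDAR cue itself even more densely."""
    confidence = camera.analyze_region(bearing_deg, well_lit=well_lit)
    density = "high" if confidence >= confirm_threshold else "very_high"
    lidar.focus_roi(bearing_deg, density)

camera, lidar = CameraStub(), LidarStub()
handle_detection(bearing_deg=5.0, camera=camera, lidar=lidar, well_lit=False)
print(lidar.roi_requests)   # [(5.0, 'very_high')] -- LiDAR self-cued, denser interrogation
```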
Feedback Loops. A feedback loop is triggered when an algorithm needs additional data from sensors. In this scenario, a feedback loop will be triggered between the camera and the LiDAR. The camera can cue the LiDAR, and the LiDAR can cue additional interrogation points, or a Dynamic Region of Interest, to determine the cargo’s location, size, and true velocity. Once enough data has been gathered, it will be sent to the domain controller so that it can decide whether to apply the brakes or swerve to avoid a collision.
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are very different than those that passively collect data. As soon as the perception system registers a single valid LiDAR detection of an object extending into the road, iDAR responds intelligently. The LiDAR instantly modifies its scan pattern, increasing laser shots to cover the cargo in a dense pattern of laser pulses. Camera data is used to refine this information. Once the cargo has been classified, and its position in space and distance ahead determined, the domain controller can understand that the cargo poses a threat. At that point, it plans the safest response.
AEye Team Profile: Ove Salomonsson
On October 30th, AEye’s Sr. Director, LiDAR Product Architect, Ove Salomonsson, will speak at two sessions during SAE Innovations in Mobility in Novi, Michigan: Using Intelligent Sensing to Achieve Accurate, Fast Perception and Bringing Intelligence to the Edge.
Ove Salomonsson has 30+ years of experience in engineering of automotive safety electronics. He came to AEye from Lucid, where he was director of Autonomous Driving and ADAS. Before that, he led long range ADAS system development at Magna Electronics and directed technology development at Autoliv Electronics, where he was also General Manager for the Night Vision camera division. Salomonsson was also VP of Traffic Systems at Saab Systems and in DSRC (V2V) at Saab Combitech. He began his career at Volvo, where he managed safety technology projects. Salomonsson holds a BSc in Innovation Engineering from the University of Halmstad.

We sat down with Ove to learn about how ADAS, self-driving technologies, and perception sensors have evolved over the years, and what he misses most about Sweden.
Q: You’ve been working in ADAS and autonomy for quite some time – how have you seen these technologies evolve over the years?

Radar has been around for quite some time and continues to evolve into smaller, better, and less costly sensors. In fact, cost has come down so significantly that they are now on close to every new car delivered – either providing standalone applications like cross traffic alert or blindspot detection, or as part of a larger ADAS system. However, radar is still quite limited because of its relatively low resolution and multi-path problems.
It was certainly a big deal when cameras became qualified for automotive use and low cost enough to make it onto vehicles in greater volume, such as backup cameras and forward facing cameras for collision alerts. There is so much more information captured in a camera image than by radar, and resolution nowadays has increased to as much as 8 megapixels. However, the costly and energy-consuming AI compute portion of ADAS and AV systems will still need time to catch up to that amount of information rushing into the AI algorithms.
Based on recent test information from AAA, performance is still flawed in automatic emergency braking (AEB) systems, which highlights the importance of LiDAR. LiDAR is the final sensor modality that is needed to make ADAS systems (and eventually full autonomy) work effectively in all conditions. LiDAR is more deterministic by nature, as it can detect and measure the distance to all objects. And with an agile LiDAR, such as AEye’s iDAR, this can be done incredibly fast with the added ability to classify objects and determine their velocity.
Q: How have perception sensors for AVs evolved during this time?

I have seen this evolution take place from both the OEM and Tier 1 point of view. However, the most important part is end-customer and societal benefits, such as a reduction in automotive accidents. The lowering of costs and increased capability in terms of resolution and Field-of-View has meant that new applications have been created, expanded upon, and deployed in the market. The list of driver assistance systems is growing: from lane departure warning, forward collision warning, and automatic high beam assist systems to newer features like front and rear automatic emergency braking (AEB) and adaptive cruise control (ACC) with lane following.
With improved sensors and perception algorithms, the focus has now shifted to allowing “hands off” the wheel and, more recently, “eyes off” the road under certain conditions. This happens to be the first (and most challenging) step towards true autonomy since responsibility is transferred to the vehicle for at least some amount of time. Any time we allow the driver to hand over driving tasks to the vehicle, it also becomes important to constantly monitor the driver’s awareness in case there is a need to transfer control back. For example, driver monitoring cameras have been introduced in certain circumstances.
However, perception sensors still need to achieve enough redundancy for autonomous driving systems to be able to “fail operationally.” For example, in a truly autonomous vehicle, the passenger may be sleeping, and the vehicle will have to be able to continue to drive by itself, say, if the camera loses power or a bird hits the windshield right in front of the camera. The vehicle still needs to be able to operate for a certain time period (or until it reaches a safe place to stop) using LiDAR and radar together.
The last and most important piece of the puzzle needed to provide enough redundancy for these systems to “fail operationally” (and also cover additional edge cases) is indeed LiDAR. LiDAR’s deterministic range measurements, high resolution, and low light capability make it a great complement to both radar and camera.
Q: You grew up in Sweden! What Swedish traditions (holidays, foods, activities) do you miss most here in the States?

I moved to the US almost 25 years ago, but I still go back to Sweden at least once a year to celebrate Midsummer.
What I miss the most is Swedish chocolate and fresh seafood, including Sweden’s wide variety of marinated herring. And, believe it or not, Sweden is home to an exquisite kebab pizza (not a Viking tradition, but rather, a new delicacy).
By the way, did you know that one of Sweden’s biggest exports is music? ABBA, of course, ruled the seventies; Roxette the eighties, Robyn and the Cardigans the nineties; and, more recently, Tove Lo and Zara Larsson. Even Spotify is Swedish!
—–
Connect with AEye at SAE Innovations in Mobility.
Forbes AI 50 – America’s Most Promising Artificial Intelligence Companies
AI 50: America’s Most Promising Artificial Intelligence Companies of 2019
AutoSens Brussels – Most Innovative Autonomous Driving Platform
AutoSens Brussels – Most Innovative Autonomous Driving Platform of 2019
AEye: Developing Artificial Perception Technologies That Exceed Human Perception
Nothing can take in more information and process it faster and more accurately than the human visual cortex…until now. Humans classify complex objects at speeds up to 27Hz, with the brain processing 580 megapixels of data in as little as 13 milliseconds. While conventional LiDAR sensors on autonomous vehicles average around a 10Hz frame rate and revisit rate, iDAR sensors can achieve a frame rate in excess of 100Hz (>3x human vision), and an object revisit rate of >500Hz.
So what?
A single interrogation point rarely delivers sufficient confidence – it is only suggestive. That’s why LiDAR systems must capture multiple detects of the same object to fully comprehend it, making the speed of subsequent interrogations/detects (the object revisit rate) significantly more critical to autonomous vehicle safety than frame rate alone. For conventional LiDAR sensors, the object revisit rate is the frame rate.
AEye’s iDAR not only has the ability to revisit an object within a single frame, it can revisit multiple points/objects of interest. The achievable object revisit rate of AEye’s iDAR system for objects of interest is microseconds to a few milliseconds – which can be up to 3000x faster, compared to conventional LiDAR systems that typically require hundreds of milliseconds between revisits.
Reducing the time between object detections within the same frame is critical, as shorter object revisit times keep processing times low for advanced algorithms that correlate multiple moving objects in a scene.
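As a toy illustration of how an object revisit rate can exceed the frame rate, the scheduler sketch below interleaves ROI revisit shots into a coarse background raster, so the object of interest is sampled many times within a single frame. The interleaving rule and shot counts are arbitrary choices for the example, not a description of iDAR’s actual scheduler.

```python
def interleave_shots(background_shots, roi_shots, revisit_every=10):
    """Insert one ROI revisit shot after every `revisit_every` background shots.

    With this rule, the ROI revisit rate is roughly
    frame_rate * len(background_shots) / revisit_every.
    """
    schedule = []
    for i, shot in enumerate(background_shots, start=1):
        schedule.append(shot)
        if i % revisit_every == 0:
            schedule.append(roi_shots[(i // revisit_every - 1) % len(roi_shots)])
    return schedule

background = [("bg", i) for i in range(100)]   # a coarse full-scene raster for one frame
roi = [("roi", "object_of_interest")]          # the flagged object
plan = interleave_shots(background, roi)
print(sum(1 for s in plan if s[0] == "roi"), "ROI revisits within one frame")  # 10
```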
“iDAR takes the guesswork out of artificial perception and replaces it with actionable data.” – Dr. Allan Steinhardt, AEye Chief Scientist