Human drivers confront and handle an incredible variety of situations and scenarios—terrain, roadway types, traffic conditions, weather conditions—which autonomous vehicle technology must navigate both safely and efficiently. These are edge cases, and they occur with surprising frequency. In order to achieve advanced levels of autonomy or breakthrough ADAS features, these edge cases must be addressed. In this series, we explore common, real-world scenarios that are difficult for today’s conventional perception solutions to handle reliably. We’ll then describe how AEye’s software-definable iDAR™ (Intelligent Detection and Ranging) successfully perceives and responds to these challenges, improving overall safety.
Download AEye Edge Case: Obstacle Avoidance [pdf]
Challenge: Black Trash Can on Roadway
A vehicle equipped with an advanced driver assistance system (ADAS) is cruising down a city street at 35 mph. Its driver is somewhat distracted and is also driving too close to the vehicle ahead. Suddenly, the vehicle ahead swerves out of the lane, narrowly avoiding a black trash can that has fallen off a garbage truck. To avoid a collision, the ADAS must make a quick series of assessments. It must not only detect the trash can but also classify it and gauge its size and threat level. Then it can decide whether to brake quickly or plan a safe path around the can while avoiding a collision with parallel traffic.
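To put the time budget in perspective, here is a back-of-the-envelope stopping-distance calculation. The delay and deceleration figures are our own illustrative assumptions, not AEye specifications.

```python
# Rough stopping-distance budget for the 35 mph scenario.
# Assumed values (illustrative only): 1.2 s combined perception + actuation delay,
# 0.7 g of braking deceleration on dry pavement.
MPH_TO_MPS = 0.44704
G = 9.81

def stopping_distance_m(speed_mph: float, delay_s: float = 1.2, decel_g: float = 0.7) -> float:
    """Distance covered during the system delay plus the braking distance."""
    v = speed_mph * MPH_TO_MPS
    return v * delay_s + v ** 2 / (2 * decel_g * G)

print(f"{stopping_distance_m(35):.0f} m needed to stop from 35 mph")  # roughly 37 m
```

In other words, by the time the lead vehicle swerves, the ADAS may have only a few tens of meters in which to detect, classify, and respond.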
How Current Solutions Fall Short
Today’s advanced driver assistance systems have great difficulty detecting the trash can and/or classifying it fast enough to react in the safest way possible. Typically, ADAS vehicles are trained not to activate the brakes for every anomaly on the road; as a result, in many cases they will simply drive into objects. In contrast, Level 4 or 5 self-driving vehicles are biased toward avoiding collisions. In this scenario, they will either undertake evasive maneuvers or slam on the brakes, which could create a nuisance or cause an accident.
Camera. A perception system must be comprehensively trained to interpret all pixels of an image. To solve this edge case, the perception system would need to be trained on every possible permutation of objects lying in the road under every possible lighting condition. Achieving this goal is particularly difficult because objects can appear in an almost infinite array of shapes, forms, and colors. Moreover, a black trash can on black asphalt further challenges the camera, especially at night and in low-visibility or glare conditions.
Radar. Radar performance is poor when objects are made of plastic, rubber, and other non-metallic materials. As such, a black plastic trash can is difficult for radar to detect.
Camera + Radar. In many cases, a system using camera and radar would be unable to detect the black trash can at all. Moreover, a vehicle that constantly brakes for every road anomaly creates a nuisance and can cause a rear-end accident. So an ADAS equipped with camera plus radar would typically be trained to ignore the trash can in an effort to avoid false positives on objects like speed bumps and small debris.
LiDAR. LiDAR would detect the trash can regardless of perception training, lighting conditions, or its position on the road. At issue here is the low resolution of today’s LiDAR systems. A four-channel LiDAR completes a scan of the surroundings every 100 milliseconds. At this rate, LiDAR would not be able to achieve the required number of shots on the trash can to register a valid detection; it would take 0.5 seconds before the trash can was even considered an object of interest. Even a 16-channel LiDAR would struggle to get five points fast enough.
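The arithmetic behind that claim can be sketched roughly as follows. The field of view, target size, range, and the "five returns" rule are illustrative assumptions used to show why a sparse fixed pattern accumulates detections slowly.

```python
import math

def scan_lines_on_target(num_channels: int, vfov_deg: float,
                         target_height_m: float, range_m: float) -> int:
    """How many evenly spaced vertical scan lines intersect a target of the given height."""
    angular_height_deg = math.degrees(2 * math.atan(target_height_m / (2 * range_m)))
    line_spacing_deg = vfov_deg / (num_channels - 1)
    return int(angular_height_deg // line_spacing_deg) + 1

# Assumed scenario (illustrative only): a ~0.9 m tall trash can at 40 m, a 25 degree
# vertical field of view, 10 Hz frames, and a rule of thumb that five returns from
# distinct frames/lines are needed before the can counts as an object of interest.
for channels in (4, 16):
    lines = scan_lines_on_target(channels, 25.0, 0.9, 40.0)
    frames = math.ceil(5 / lines)
    print(f"{channels} channels: ~{lines} line(s) on target per frame -> "
          f"~{frames} frames (~{frames * 0.1:.1f} s) to accumulate 5 returns")
```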
Successfully Resolving the Challenge with iDAR
As soon as the trash can appears in the road ahead, iDAR’s first priority is classification. One of iDAR’s biggest advantages is its agility: it can adjust laser scan patterns in real time, selectively targeting specific objects in the environment and dynamically changing scan density to learn more about them. This ability to instantaneously increase resolution is critical to classifying the trash can quickly. During this process, iDAR simultaneously keeps tabs on everything else. Once the trash can is classified, the domain controller uses what it already knows about the surrounding environment to respond in the safest way possible.
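A minimal sketch of the scheduling idea, assuming a scheduler that simply appends a denser grid of shot angles for each flagged region on top of the normal background raster. The interface and numbers below are invented for illustration and are not AEye's API.

```python
from dataclasses import dataclass

@dataclass
class ROI:
    az_deg: float        # center azimuth of the region of interest
    el_deg: float        # center elevation
    width_deg: float
    height_deg: float
    density: int         # extra shots per degree requested for this region

def schedule_frame(background_grid, rois):
    """Toy shot scheduler: a sparse background raster plus dense revisits on each ROI.
    Real scheduling is constrained by mirror dynamics, eye-safety budgets, etc."""
    shots = list(background_grid)
    for r in rois:
        n_az = max(2, int(r.width_deg * r.density))
        n_el = max(2, int(r.height_deg * r.density))
        for i in range(n_az):
            for j in range(n_el):
                az = r.az_deg - r.width_deg / 2 + i * r.width_deg / (n_az - 1)
                el = r.el_deg - r.height_deg / 2 + j * r.height_deg / (n_el - 1)
                shots.append((az, el))
    return shots

# Example: a sparse sweep plus a dense patch around a suspected obstacle.
background = [(az, 0.0) for az in range(-60, 61, 2)]
frame = schedule_frame(background, [ROI(2.0, -1.0, 4.0, 3.0, density=4)])
print(len(background), "background shots ->", len(frame), "shots this frame")
```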
Software Components
Computer Vision. iDAR is designed with computer vision that creates a smarter, more focused LiDAR point cloud. In order to effectively “see” the trash can, iDAR combines the camera’s 2D pixels with the LiDAR’s 3D voxels to create Dynamic Vixels. This combination helps the AI refine the LiDAR point clouds around the trash can, effectively eliminating all the irrelevant points and leaving only its edges.
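The pixel/voxel fusion step can be approximated with a standard pinhole projection: each LiDAR point is transformed into the camera frame, projected into the image, and tagged with the color it lands on. This is a generic sketch of the concept, not AEye's Dynamic Vixel implementation; `K` and `T_cam_from_lidar` are assumed calibration inputs.

```python
import numpy as np

def fuse_points_with_pixels(points_xyz, image, K, T_cam_from_lidar):
    """Attach an RGB value to each LiDAR point by projecting it into the camera image.
    points_xyz: (N, 3) array, image: (H, W, 3) array, K: 3x3 intrinsics,
    T_cam_from_lidar: 4x4 extrinsic transform (assumed calibration inputs)."""
    n = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((n, 1))])            # N x 4 homogeneous points
    cam = (T_cam_from_lidar @ homog.T).T[:, :3]                 # points in camera frame
    in_front = cam[:, 2] > 0.1                                  # keep points ahead of the lens
    uvw = (K @ cam.T).T
    uv = (uvw[:, :2] / np.maximum(uvw[:, 2:3], 1e-6)).astype(int)
    h, w = image.shape[:2]
    valid = in_front & (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    # Each fused element pairs a 3D point with the pixel color it lands on.
    return [(tuple(p), tuple(image[v, u])) for p, (u, v), ok in zip(points_xyz, uv, valid) if ok]
```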
Cueing. For safety purposes, it’s essential to classify objects at range because their identities determine the vehicle’s specific and immediate response. To generate a dataset that is rich enough to apply perception algorithms for classification, as soon as the LiDAR detects the trash can, it cues the camera for deeper real-time analysis of the object’s color, size, and shape. The camera then reviews the pixels, running algorithms to narrow down its possible identities. If it needs more information, the camera may in turn cue the LiDAR to allocate additional shots.
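In rough Python, the cueing chain might look like this; the `camera`, `lidar`, and `classifier` interfaces are hypothetical placeholders, not a real API.

```python
def handle_lidar_detection(detection, camera, lidar, classifier, conf_threshold=0.8):
    """Hypothetical cueing flow; the camera/lidar/classifier interfaces are placeholders."""
    # 1. The LiDAR detection cues the camera: crop the corresponding image region.
    crop = camera.crop(detection.bounding_box_px)
    # 2. Run perception algorithms on the crop to propose an identity.
    label, confidence = classifier.predict(crop)
    # 3. If the camera needs more information, it cues the LiDAR for additional shots.
    if confidence < conf_threshold:
        lidar.request_roi(detection.position, density="high")
    return label, confidence
```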
Feedback Loops. Intelligent iDAR sensors are capable of cueing themselves. If the camera lacks data, the LiDAR generates a feedback loop that tells itself to “paint” the trash can with a dense pattern of laser pulses. This gathers enough information for the LiDAR to run algorithms that estimate what the object is. At the same time, it can also collect information about the intensity of the laser light reflecting back. Because a plastic trash can is more reflective than the road, the laser light bouncing off of it will be more intense, helping the perception system distinguish it from the road surface.
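A hedged sketch of the two ideas in this paragraph: the self-cue when camera data is missing, and a simple relative-intensity check. The 0.5 confidence cutoff and 1.3 intensity margin are invented thresholds, and `lidar.request_roi` is a hypothetical interface.

```python
def self_cue_and_check_intensity(lidar, roi, camera_confidence,
                                 roi_intensities, road_intensities,
                                 conf_floor=0.5, margin=1.3):
    """If the camera can't help, densify the LiDAR pattern on the ROI ("paint" it),
    then use relative return intensity as an extra discriminator, since a plastic
    can typically reflects more strongly than asphalt."""
    if camera_confidence is None or camera_confidence < conf_floor:
        lidar.request_roi(roi, density="high")                 # hypothetical self-cue
    roi_median = sorted(roi_intensities)[len(roi_intensities) // 2]
    road_median = sorted(road_intensities)[len(road_intensities) // 2]
    return roi_median > margin * road_median                   # True: stands out from the road
```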
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are very different from those that passively collect data. When iDAR registers a single detection of an object in the road, its priority is to determine the object’s size and identify it. iDAR schedules a series of LiDAR shots in that area and combines that data with camera pixels. iDAR can flexibly adjust point cloud density around objects, running classification algorithms at the edge of the network before anything is sent to the domain controller. This greatly reduces latency and ensures that only the most important data is used to determine whether the vehicle should brake or swerve.
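One way to picture the edge-processing step: classify each detection locally and forward only the threatening ones to the domain controller. The threat threshold and record format below are assumptions for illustration, and `classify` is a placeholder.

```python
def triage_at_edge(detections, classify, min_threat=0.5):
    """Classify each raw detection at the sensor edge and keep only objects whose
    threat score clears the threshold; only this reduced list goes downstream."""
    salient = []
    for det in detections:
        label, threat = classify(det)                # placeholder classifier
        if threat >= min_threat:
            salient.append({"label": label, "threat": threat, "detection": det})
    return salient                                   # only this subset reaches the domain controller
```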
Abrupt Stop Detection
Download AEye Edge Case: Abrupt Stop Detection [pdf]
Challenge: A Child Runs into the Street Chasing a Ball
A vehicle equipped with an advanced driver assistance system (ADAS) is cruising down a leafy residential street at 25 mph on a sunny day, with a second vehicle following behind. Its driver is distracted by the radio. Suddenly, a small object enters the road laterally. At that moment, the vehicle’s perception system must make several assessments before the vehicle path controls can react. What is the object, and is it a threat? Is it a ball or something else? More importantly, is a child in pursuit? Each of these scenarios requires a unique response. It’s imperative to brake or swerve for the child. However, engaging the vehicle’s brakes for a lone ball is unnecessary and even dangerous.
How Current Solutions Fall Short
According to a recent study by AAA, today’s advanced driver assistance systems have great difficulty recognizing these threats and reacting appropriately. Depending on road conditions, their passive sensors may fail to detect the ball and won’t register a child until it’s too late. Alternatively, vehicles equipped with systems that are biased toward braking will constantly slam on the brakes for every soft target in the street, creating a nuisance or even causing accidents.
Camera. Camera performance depends on a combination of image quality, field of view, and perception training. While all three are important, perception training is especially relevant here. Cameras are limited when it comes to interpreting unique environments because everything is just a light value. To understand any combination of pixels, AI is required, and AI can’t invent what it hasn’t seen. For the perception system to correctly identify a child chasing a ball, it must be trained on every possible permutation of this scenario, including balls of varying colors, materials, and sizes, as well as children of different sizes in various clothing. Moreover, the system would need to be trained on children appearing in all possible variations—some emerging from behind a parked car, some with just an arm protruding, and so on. Street conditions would need to be accounted for, too, such as those with and without shade, and sun glare at different angles. Perception training for every possible scenario may be achievable, but it is an incredibly costly and time-consuming process.
Radar. Radar’s basic limitation is its coarse angular resolution, typically several degrees. When radar picks up an object, it provides only a few detection points to the perception system, enough to distinguish a general blob in the area. Moreover, an object’s size, shape, and material influence its detectability. Radar can’t distinguish soft objects from other objects, so the signature of a rubber or leather ball would be close to nothing. While radar would detect the child, there would simply not be enough data or time for the system to detect, classify, and react.
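To see what a few degrees of angular resolution means in practice, here is a small-angle estimate of the cross-range span of one radar beam. The beamwidth and range figures are illustrative assumptions.

```python
import math

def cross_range_m(range_m: float, beamwidth_deg: float) -> float:
    """Approximate cross-range span of one radar beamwidth at a given range
    (small-angle approximation)."""
    return range_m * math.radians(beamwidth_deg)

# Illustrative numbers: a ~4 degree azimuth beam at 30 m spans roughly 2 m, so a ball
# and a child a meter apart can fall inside a single resolution cell.
print(f"{cross_range_m(30, 4):.1f} m")
```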
Camera + Radar. A system that combines radar with a camera would have difficulty assessing this situation quickly enough to respond correctly. Too many factors have the potential to degrade their performance. The perception system would need to be trained on this precise scenario to classify exactly what it was “seeing.” The radar would need to detect the child early enough, at a wide angle, possibly from behind parked vehicles (which produce strong surrounding radar reflections), then predict the child’s path and act. In addition, radar may not have sufficient resolution to distinguish between the child and the ball.
LiDAR. Conventional LiDAR’s greatest value in this scenario is that it brings automatic depth measurement for the ball and the child. It can determine, to within a few centimeters, how far away each is from the vehicle. However, today’s LiDAR systems are unable to ensure vehicle safety because they don’t gather important information—such as shape, velocity, and trajectory—fast enough. This is because conventional LiDAR systems are passive sensors that scan everything uniformly in a fixed pattern and assign every detection an equal priority. Therefore, they are unable to prioritize and track moving objects, like a child and a ball, over the background environment, like parked cars, the sky, and trees.
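To make the latency point concrete, here is a rough sketch of what a fixed 10 Hz revisit rate implies: a velocity estimate needs at least two frames and a trajectory at least three, during which a moving child covers an appreciable distance. The frame rate and the child's speed are assumed figures for illustration.

```python
# Illustrative latency math for a fixed-pattern scanner revisiting every point at 10 Hz:
# estimating velocity needs at least two frames, a trajectory at least three.
FRAME_S = 0.1
CHILD_SPEED_MPS = 2.0   # assumed running pace for a small child

for label, n_frames in [("velocity estimate", 2), ("trajectory estimate", 3)]:
    elapsed = n_frames * FRAME_S
    print(f"{label}: ~{elapsed:.1f} s, during which the child covers "
          f"~{CHILD_SPEED_MPS * elapsed:.1f} m")
```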
Successfully Resolving the Challenge with iDAR
AEye’s iDAR solves this challenge successfully because it can prioritize how it gathers information and thereby understand an object’s context. As soon as an object moves into the road, a single LiDAR detection sets the perception system into action. First, iDAR cues the camera to learn about the object’s shape and color. In addition, iDAR defines a dense Dynamic Region of Interest (ROI) on the ball. The LiDAR then interrogates the object, scheduling a rapid series of shots to generate a dense pixel grid within the ROI. This dataset is rich enough to start applying perception algorithms for classification, which in turn inform and cue further interrogations.
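As a rough sketch, the interrogation can be pictured as a loop that keeps densifying the ROI until classification confidence is high enough. `lidar.shoot_grid` and `classify` are placeholder interfaces, and the round and confidence limits are assumptions.

```python
def interrogate_until_classified(lidar, roi, classify, max_rounds=3, conf_threshold=0.8):
    """Keep adding dense shot grids to the ROI until the accumulated points classify
    with enough confidence, or give up and report the object as unknown."""
    points = []
    for _ in range(max_rounds):
        points.extend(lidar.shoot_grid(roi))      # rapid series of shots on the ROI
        label, confidence = classify(points)
        if confidence >= conf_threshold:
            return label, confidence
    return "unknown", 0.0
```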
Having classified the ball, the system’s intelligent sensors are trained with algorithms that instruct them to anticipate something in pursuit. The LiDAR then schedules another rapid series of shots on the path behind the ball, generating another pixel grid to search for a child. iDAR has a unique ability to intelligently survey the environment, focus on objects, identify them, and make rapid decisions based on their context.
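Geometrically, that follow-up search region can be placed back along the ball's direction of travel, toward the roadside it came from. The sketch below is purely illustrative; the region sizes are assumptions, not tuned values.

```python
def pursuit_search_roi(ball_position, ball_velocity, lookback_m=4.0, width_m=3.0):
    """Place a search region behind a detected ball, back along its direction of travel
    toward the roadside it came from. Purely geometric; the sizes are assumptions."""
    px, py = ball_position
    vx, vy = ball_velocity
    speed = (vx ** 2 + vy ** 2) ** 0.5 or 1.0
    bx, by = -vx / speed, -vy / speed              # unit vector opposite the ball's motion
    center = (px + bx * lookback_m / 2, py + by * lookback_m / 2)
    return {"center": center, "length_m": lookback_m, "width_m": width_m}
```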
Software Components
Computer Vision. iDAR is designed with computer vision, creating a smarter, more focused LiDAR point cloud that mimics the way humans perceive the environment. In order to effectively “see” the ball and the child, iDAR combines the camera’s 2D pixels with the LiDAR’s 3D voxels to create Dynamic Vixels. This combination helps the AI refine the LiDAR point clouds around the ball and the child, effectively eliminating all the irrelevant points and leaving only their edges.
Cueing. A single LiDAR detection of the ball sets the first cue into motion. Immediately, the sensor flags the region where the ball appears, cueing the LiDAR to focus a Dynamic ROI on the ball. Cueing generates a dataset that is rich enough to apply perception algorithms for classification. If the camera lacks data (due to lighting conditions, etc.), the LiDAR cues itself to increase the point density around the ROI. This enables it to gather enough data to classify an object and determine whether it’s relevant.
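One crude way to decide that "the camera lacks data" is to check whether the image crop has any usable contrast before relying on it; the threshold below is an illustrative assumption, and `lidar.request_roi` is a hypothetical interface.

```python
import numpy as np

def camera_roi_usable(crop: np.ndarray, min_contrast: float = 12.0) -> bool:
    """Treat the camera data as insufficient if the crop has almost no contrast
    (darkness, glare, washout). The threshold is an illustrative assumption."""
    gray = crop.mean(axis=2) if crop.ndim == 3 else crop
    return float(gray.std()) >= min_contrast

# If the crop is unusable, the sensor cues itself, e.g.:
#   if not camera_roi_usable(crop):
#       lidar.request_roi(roi, density="high")   # hypothetical self-cue
```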
Feedback Loops. Once the ball is detected, an algorithm generates a feedback loop that triggers the sensors to focus another ROI immediately behind the ball, toward the side of the road, to capture anything in pursuit. This starts another cue and enables faster, more accurate classification. With that data, the system can classify whatever is behind the ball and determine its true velocity, so that it can decide whether to apply the brakes or swerve to avoid a collision.
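A minimal sketch of that last step, assuming the ROI yields object centroids in a ground-plane frame where +y points away from the ego lane. The thresholds and the brake/swerve rule are illustrative, not a real planning policy.

```python
def track_and_decide(pos_t0, pos_t1, dt_s, lane_clear):
    """Estimate the pursuer's velocity from two ROI centroids and pick a response.
    Assumed ground-plane convention: +y points away from the ego lane."""
    vx = (pos_t1[0] - pos_t0[0]) / dt_s
    vy = (pos_t1[1] - pos_t0[1]) / dt_s
    heading_into_lane = vy < -0.5                  # moving toward our lane (assumed threshold)
    if not heading_into_lane:
        return (vx, vy), "monitor"
    return (vx, vy), ("swerve" if lane_clear else "brake")
```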
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are vastly different from those that passively collect data. After detecting and classifying the ball, iDAR immediately foveates in the direction where the child will most likely enter the frame. This ability to intelligently understand the context of a scene enables iDAR to detect the child quickly, calculate the child’s speed of approach, and apply the brakes or swerve to avoid a collision. To speed reaction times, each sensor’s data is processed intelligently at the edge of the network. Only the most salient data is then sent to the domain controller for advanced analysis and path planning, ensuring optimal safety.