Mercedes prices its all-electric EQC SUV at $67,900

The Mercedes-Benz EQC 400 4MATIC, the German automaker’s first all-electric vehicle under its new EQ brand, will start at $67,900 when it arrives in the U.S. early next year. Mercedes-Benz announced the price of the EQC 400 on Wednesday at the LA Auto Show. The price, which doesn’t account for the $7,500 federal tax credit, is…

Flatbed Trailer Across Roadway

Human drivers confront and handle an incredible variety of situations and scenarios—terrain, roadway types, traffic conditions, weather conditions—through which autonomous vehicle technology must navigate both safely and efficiently. These are edge cases, and they occur with surprising frequency. To achieve advanced levels of autonomy or breakthrough ADAS features, these edge cases must be addressed. In this series, we explore common, real-world scenarios that are difficult for today’s conventional perception solutions to handle reliably. We then describe how AEye’s software-definable iDAR™ (Intelligent Detection and Ranging) successfully perceives and responds to these challenges, improving overall safety.
Download AEye Edge Case: Flatbed Trailer Across Roadway [pdf]
Challenge: Flatbed Trailer Across Roadway
A vehicle equipped with an advanced driver assistance system (ADAS) is traveling at 45 mph down a four-lane road that passes through a sparsely populated town. Relying on the vehicle to navigate, the driver has largely stopped paying attention. Ahead, a semi-truck towing a flatbed trailer slowly crosses the road. As the distance between the vehicle and the trailer shrinks rapidly, it’s up to the perception system to detect and classify the trailer, as well as measure its velocity and distance. At SAE Level 3 and beyond, where the car is assumed to be in control, the vehicle’s path planning software must make a critical decision about whether to swerve or slam on the brakes before it’s too late.
How Current Solutions Fall Short
Today’s advanced driver assistance systems will have great difficulty recognizing this threat and reacting appropriately. Depending on its sensor configuration and perception training, a system may fail to register the trailer at all because of its very thin profile.
Camera. A perception system based on camera sensors will be prone to misinterpreting the threat, registering a false positive, or missing the threat entirely. In the distance, the trailer will appear as little more than a two-dimensional line across the roadway. If the vehicle is turning, those same pixels could also be interpreted as a guardrail. To be accurate in all scenarios, the perception system must be trained on every possible lighting condition in combination with all color and size permutations. This poses an immense challenge, as there will always be instances that haven’t been foreseen, creating a potentially deadly gap for perception systems that depend primarily on camera data.
Radar. Approached from the side, a flatbed trailer presents a very thin profile. With no better than a few degrees of angular resolution, radars are ill-equipped to detect such narrow horizontal objects. In this case, the majority of the radar’s radio waves will miss the slim profile of the trailer entirely.
Camera + Radar. A perception system that relies only on camera and radar would likely be unable to detect the flatbed trailer and react in time. The camera data would be insufficiently detailed to classify the trailer and would likely lead the perception system to mistake it for one of several common roadway features. Because radar would also be unlikely to detect the full length of the trailer accurately, it too would mislead the perception system. In this instance, combining camera and radar does little to improve the odds of accurately classifying the trailer.
LiDAR. Today’s conventional LiDAR produces very dense horizontal scan lines coupled with very poor vertical density. This scan pattern creates a challenge for detection when objects are horizontal, thin, and narrow—it’s easy for LiDAR’s laser shots to miss them entirely. Some LiDAR shots will hit the trailer. However, it takes time to gather the requisite number of detections to register any object. Depending on the vehicle’s speed, this process may take too much time to prevent a collision.
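To see why sparse vertical sampling is so punishing here, consider a rough geometry check. The sensor spacing, range, and trailer profile below are assumed round numbers for illustration, not any particular product’s specs:

```python
import math

# Rough geometry check: why sparse vertical resolution misses a thin,
# horizontal target. All numbers below are assumed for illustration.

range_m = 100.0               # distance to the trailer (assumed)
vertical_spacing_deg = 2.0    # gap between adjacent scan lines (assumed)
profile_m = 0.3               # side-on height of the flatbed deck (assumed)

gap_m = range_m * math.tan(math.radians(vertical_spacing_deg))
hit_chance = min(1.0, profile_m / gap_m)
print(f"scan lines sit ~{gap_m:.1f} m apart at {range_m:.0f} m; "
      f"a {profile_m} m profile is sampled ~{hit_chance:.0%} of the time")
```

Under these assumptions, the scan lines are roughly 3.5 meters apart at range, so only about one line in ten even has a chance of landing on the deck, and several full frames can pass before enough detections accumulate.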
Successfully Resolving the Challenge with iDAR
A vehicle that enters a scene laterally is very difficult to track. iDAR overcomes this difficulty with its ability to selectively allocate LiDAR shots to Regions of Interest (ROIs). As soon as the LiDAR registers a single detection of the trailer, iDAR dynamically changes both the LiDAR’s temporal and spatial sampling density to comprehensively interrogate the trailer, thus gaining critical information like its size and distance ahead.
iDAR can schedule LiDAR shots to revisit Regions of Interest in a matter of microseconds to milliseconds. This means that iDAR can interrogate an object up to 3000x faster than conventional LiDAR systems, which typically require hundreds of milliseconds to revisit an object. As a result, iDAR has an unprecedented ability to calculate valuable attributes, including object distance and velocity (both lateral and radial), faster than any other system.
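As a rough illustration of the scheduling idea described above, the sketch below interleaves a background raster with a much more frequent ROI revisit using a priority queue. Every class name, method, and timing constant here is hypothetical; this is not AEye’s API, just a minimal model of preemptive shot allocation:

```python
import heapq
import itertools
import time

# Hypothetical sketch: a priority queue that lets ROI revisits preempt the
# background raster scan. Names and timings are illustrative only.

BACKGROUND_PERIOD_S = 0.100   # conventional full-frame revisit (~100 ms)
ROI_PERIOD_S = 0.000100       # targeted ROI revisit (microsecond-to-ms scale)

class ShotScheduler:
    def __init__(self):
        self._queue = []                  # entries: (due_time, seq, region, period)
        self._seq = itertools.count()     # tie-breaker for equal due times

    def add_region(self, region, period_s):
        heapq.heappush(self._queue, (time.monotonic() + period_s,
                                     next(self._seq), region, period_s))

    def next_shot(self):
        """Pop the most urgent region, fire a shot at it, and reschedule it."""
        due, _, region, period = heapq.heappop(self._queue)
        self.add_region(region, period)   # revisit again after its period
        return region

scheduler = ShotScheduler()
scheduler.add_region("background_raster", BACKGROUND_PERIOD_S)
# After a single detection of the trailer, interrogate that region roughly
# 1000x more often than the background pattern:
scheduler.add_region("trailer_roi", ROI_PERIOD_S)
```

Because the queue is ordered by due time, the trailer ROI naturally wins almost every scheduling decision until its dense interrogation is complete, while the background raster still progresses in between.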
Software Components
Computer Vision. iDAR combines 2D camera pixels with 3D LiDAR voxels to create Dynamic Vixels. This data type helps the system’s AI refine the LiDAR point cloud around the trailer edges, effectively eliminating all the irrelevant points. As a result, iDAR is able to clearly distinguish the trailer from other roadway features, like guardrails and signage.
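Pixel-to-point fusion of this general kind can be sketched with a standard pinhole projection: each LiDAR return is projected into the camera frame and tagged with the color of the pixel it lands on. The function below is a generic illustration under assumed calibrated intrinsics K and extrinsics (R, t); it is not AEye’s Dynamic Vixel format:

```python
import numpy as np

# Illustrative camera/LiDAR fusion: attach to each LiDAR return the color of
# the camera pixel it projects onto. K (3x3 intrinsics), R (3x3 rotation),
# and t (3-vector translation) are assumed to come from calibration.

def fuse_points_with_pixels(points_xyz, image, K, R, t):
    """points_xyz: (N, 3) LiDAR points; image: (H, W, 3) RGB frame."""
    cam = points_xyz @ R.T + t            # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0              # keep points ahead of the camera
    cam = cam[in_front]
    uv = cam @ K.T                        # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide
    h, w = image.shape[:2]
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)
    colors = image[v, u]                  # RGB for each surviving point
    # Each output row: x, y, z plus the pixel color it landed on
    return np.hstack([points_xyz[in_front], colors.astype(float)])
```

With color attached to each 3D point, returns whose color matches the surrounding road or sky can be discounted, which is one simple way the point cloud around the trailer’s edges could be refined.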
Cueing. For safety purposes, it’s essential to classify threats at range because their identities determine the vehicle’s specific and immediate response. To generate a dataset rich enough to apply perception algorithms for classification, as soon as the LiDAR detects an object, it cues the AI camera for deeper real-time analysis of the object’s color, size, and shape. The camera then reviews the pixels, running algorithms to define the object’s possible identities. To gain additional insights, the camera can in turn cue the LiDAR to allocate more shots.
Feedback Loops. A feedback loop is triggered when an algorithm needs additional data from sensors. In this scenario, a feedback loop will be triggered between the camera and the LiDAR. The camera can cue the LiDAR, and the LiDAR can cue additional interrogation points, or a Dynamic Region of Interest, to determine the trailer’s true velocity. This information is sent to the domain controller so that it can decide whether to apply the brakes or swerve to avoid a collision.
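A minimal sketch of where this cueing-and-feedback cycle ends up: once two densified ROI revisits of the trailer are only milliseconds apart, a velocity estimate falls out of simple differencing of the ROI centroids. The interfaces and numbers below are invented for illustration:

```python
import numpy as np

# Hypothetical end stage of the camera<->LiDAR feedback loop: two closely
# spaced ROI revisits yield lateral and radial velocity for the domain
# controller. All names and values are made up for this sketch.

def estimate_velocity(centroid_t0, centroid_t1, dt_s):
    """Lateral and radial velocity from two ROI centroids (x, y, z in m)."""
    delta = np.asarray(centroid_t1) - np.asarray(centroid_t0)
    velocity = delta / dt_s                       # m/s in the sensor frame
    radial = velocity[2]                          # along the sensor's z axis
    lateral = np.hypot(velocity[0], velocity[1])  # across the field of view
    return lateral, radial

# Two revisits of the trailer ROI, 5 ms apart: the trailer crosses the road
# at ~2 m/s laterally while the gap closes at ~20 m/s (negative radial).
lateral, radial = estimate_velocity((1.00, 0.0, 60.0), (1.01, 0.0, 59.9), 0.005)
print(f"lateral {lateral:.1f} m/s, radial {radial:.1f} m/s")
```

The point of the millisecond-scale revisit is visible in the numbers: with conventional 100 ms frames, the same differencing would lag the trailer’s motion by an order of magnitude more.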
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are very different from those that passively collect data. As soon as iDAR registers a single detection of the flatbed trailer, it dynamically modifies the LiDAR scan pattern, scheduling a rapid series of shots to cover the trailer with a dense pattern of laser pulses and extract information about its distance and velocity. Flexible shot allocation vastly reduces the number of shots per frame required to extract the most valuable information in every scenario. This not only enables the vehicle’s perception system to track objects through time and space more accurately, it also makes autonomous driving much safer, because it eliminates ambiguity, accelerates the perception process, and allows for more efficient use of processing resources.

Obstacle Avoidance

Human drivers confront and handle an incredible variety of situations and scenarios—terrain, roadway types, traffic conditions, weather conditions—through which autonomous vehicle technology must navigate both safely and efficiently. These are edge cases, and they occur with surprising frequency. To achieve advanced levels of autonomy or breakthrough ADAS features, these edge cases must be addressed. In this series, we explore common, real-world scenarios that are difficult for today’s conventional perception solutions to handle reliably. We then describe how AEye’s software-definable iDAR™ (Intelligent Detection and Ranging) successfully perceives and responds to these challenges, improving overall safety.
Download AEye Edge Case: Obstacle Avoidance [pdf]
Challenge: Black Trash Can on Roadway
A vehicle equipped with an advanced driver assistance system (ADAS) is cruising down a city street at 35 mph. Its driver is somewhat distracted and is also following the vehicle ahead too closely. Suddenly, the vehicle ahead swerves out of the lane, narrowly avoiding a black trash can that has fallen off a garbage truck. To avoid a collision, the ADAS must make a quick series of assessments. It must not only detect the trash can but also classify it and gauge its size and threat level. Then it can decide whether to brake quickly or plan a safe path around the can while avoiding parallel traffic.
How Current Solutions Fall Short
Today’s advanced driver assistance systems will have great difficulty detecting the trash can and classifying it fast enough to react in the safest way possible. Typically, ADAS vehicles are trained to avoid activating the brakes for every anomaly on the road; as a result, in many cases they will simply drive into objects. In contrast, Level 4 or 5 self-driving vehicles are biased toward avoiding collisions. In this scenario, they’ll either undertake evasive maneuvers or slam on the brakes, which can create a nuisance or cause an accident.
Camera. A perception system must be comprehensively trained to interpret all pixels of an image. In order to solve this edge case, the perception system would need to be trained on every possible permutation of objects lying in the road under every possible lighting condition. Achieving this goal is particularly difficult because objects can appear in an almost infinite array of shapes, forms, and colors. Moreover, the black trash can on black asphalt will further challenge the camera, especially at night and during low visibility and glare conditions.
Radar. Radar performance is poor when objects are made of plastic, rubber, and other non-metallic materials. As such, a black plastic trash can is difficult for radar to detect.
Camera + Radar. In many cases, a system using camera and radar would be unable to detect the black trash can at all. Moreover, a vehicle that constantly brakes for every road anomaly creates a nuisance and can cause a rear-end accident. So an ADAS equipped with camera plus radar would typically be trained to ignore the trash can in an effort to avoid false positives when encountering objects like speed bumps and small debris.
LiDAR. LiDAR would detect the trash can regardless of perception training, lighting conditions, or its position on the road. At issue here is the low resolution of today’s LiDAR systems. A four-channel LiDAR completes a scan of the surroundings every 100 milliseconds. At this rate, the LiDAR would not be able to achieve the required number of shots on the trash can to register a valid detection: at roughly one detection per frame, it would take 0.5 seconds before the trash can was even considered an object of interest. Even a 16-channel LiDAR would struggle to get five points fast enough.
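A quick back-of-the-envelope check shows why that latency matters. The five-detection threshold and 100 ms frame period come from the text above; the rest is straightforward arithmetic:

```python
# How far the vehicle travels while a conventional LiDAR accumulates enough
# detections to confirm the trash can, using the figures cited above.

MPH_TO_MPS = 0.44704
speed_mps = 35 * MPH_TO_MPS     # ~15.6 m/s at 35 mph
frames_needed = 5               # detections required to confirm an object
frame_period_s = 0.100          # 100 ms per full LiDAR scan
latency_s = frames_needed * frame_period_s
print(f"distance covered while confirming: {speed_mps * latency_s:.1f} m")
# -> roughly 7.8 m traveled before the can is even an object of interest
```

At city speeds, nearly eight meters of travel pass before the perception system has anything to hand to path planning, which leaves very little margin to brake or steer.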
Successfully Resolving the Challenge with iDAR
As soon as the trash can appears in the road ahead, iDAR’s first priority is classification. One of iDAR’s biggest advantages is its agile nature: it can adjust laser scan patterns in real time, selectively targeting specific objects in the environment and dynamically changing scan density to learn more about them. This ability to instantaneously increase resolution is what enables it to classify the trash can quickly. Throughout this process, iDAR simultaneously keeps tabs on everything else. Once the trash can is classified, the domain controller uses what it already knows about the surrounding environment to respond in the safest way possible.
Software Components
Computer Vision. iDAR is designed with computer vision that creates a smarter, more focused LiDAR point cloud. To effectively “see” the trash can, iDAR combines the camera’s 2D pixels with the LiDAR’s 3D voxels to create Dynamic Vixels. This combination helps the AI refine the LiDAR point cloud around the trash can, effectively eliminating all the irrelevant points and leaving only its edges.
Cueing. For safety purposes, it’s essential to classify objects at range because their identities determine the vehicle’s specific and immediate response. To generate a dataset rich enough to apply perception algorithms for classification, as soon as the LiDAR detects the trash can, it cues the camera for deeper real-time analysis of the can’s color, size, and shape. The camera then reviews the pixels, running algorithms to define its possible identities. If it needs more information, the camera may then cue the LiDAR to allocate additional shots.
Feedback Loops. Intelligent iDAR sensors are capable of cueing themselves. If the camera lacks data, the LiDAR generates a feedback loop that tells itself to “paint” the trash can with a dense pattern of laser pulses. This gathers enough information for the LiDAR to run algorithms that effectively infer what the object is. At the same time, it also collects information about the intensity of the laser light reflecting back. Because a plastic trash can is more reflective than the road, the laser light bouncing off of it will be more intense, so the perception system can better distinguish it.
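The intensity cue can be sketched as a simple threshold against the road’s baseline return. The threshold, margin, and sample returns below are made up for illustration; a real system would calibrate these per sensor and conditions:

```python
import numpy as np

# Illustrative use of return intensity: plastic reflects the laser more
# strongly than asphalt, so thresholding intensity within the ROI helps
# separate trash-can returns from road returns. All values are assumed.

def split_by_intensity(points, intensities, road_level=0.15, margin=2.0):
    """Label returns whose intensity well exceeds the road's baseline."""
    is_object = np.asarray(intensities) > road_level * margin
    points = np.asarray(points)
    return points[is_object], points[~is_object]

# Five returns inside the ROI: three strong (plastic) and two weak (asphalt).
pts = [[10.0, 0.1, 0.3], [10.1, 0.0, 0.4], [10.0, -0.1, 0.2],
       [9.5, 0.5, 0.0], [10.6, -0.5, 0.0]]
inten = [0.62, 0.58, 0.65, 0.12, 0.14]
can_pts, road_pts = split_by_intensity(pts, inten)
print(len(can_pts), "likely object returns,", len(road_pts), "road returns")
```

Even this crude separation shows how an extra channel of information, here reflectivity, lets the perception system keep working when color contrast alone (black can on black asphalt) fails.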
The Value of AEye’s iDAR
LiDAR sensors embedded with AI for intelligent perception are very different from those that passively collect data. When iDAR registers a single detection of an object in the road, its priority is to determine the object’s size and identify it. iDAR schedules a series of LiDAR shots in that area and combines that data with camera pixels. iDAR can flexibly adjust point cloud density around objects, running classification algorithms at the edge of the network before anything is sent to the domain controller. This greatly reduces latency and ensures that only the most important data is used to determine whether the vehicle should brake or swerve.

BMW to install 4,100 new charging points in Germany

BMW plans to install 4,100 charging points at its German locations by 2021. The charging points will include AC and DC fast chargers, and will be powered by renewable energy. Most of the stations will be installed around Munich – other sites include Berlin, Leipzig, Regensburg, Landshut, Wackersdorf, and Dingolfing. BMW anticipates that, by 2021,…

Faraday Future Reveals Its New Concept of the Third Internet Living Space

Revolutionary user experience designed to create a mobile, connected and luxury third internet living space
Significant product innovations, including an all-in-one car with smart mobility and advanced artificial intelligence
Integrated internet and AI applications including voice controls, predictive interfaces and autonomous driving capabilities
LOS ANGELES, Nov. 19, 2019 (GLOBE NEWSWIRE) — Faraday Future (FF), a California-based global shared…

Groupe PSA: The Trémery Plant in France’s Grand Est Region Is at the Forefront of Groupe PSA’s Energy Transition

RUEIL-MALMAISON, France–(BUSINESS WIRE)–Regulatory News: Yann Vincent, Executive Vice President, Manufacturing & Supply Chain for Groupe PSA (Paris:UG) said, “Years ago we made the decision to invest in the energy transition and make our plants more flexible, as illustrated by the Trémery plant. We are very proud of all our plant employees in the Grand Est…

Lyft: Lyft Supports Shopping Small in New York City

Small businesses are the heart of New York City, anchoring communities and employing nearly half of the city’s workforce. To help support these businesses on Small Business Saturday® (Nov. 30), we’ve teamed up with neighborhoods throughout the city to offer shoppers 20% off 2 rides to or from their favorite businesses. Choose your neighborhood below…

Uber: A Note on Transparency: Government Requests for Data

By Uttara Sivaram, Global Privacy and Security Public Policy at Uber
Today, Uber is updating our Transparency Report on government requests for user data, which encompasses requests for the full year of 2018. Because we know that transparency is a crucial part of the trust our users place in us, we are continually looking for…

Daimler CEO says electric cars need incentives to become mainstream

FRANKFURT (Reuters) – Daimler Chief Executive Ola Kaellenius said on Wednesday that the drop in sales of electric and hybrid vehicles in China after incentives were curbed shows that they are…

CloudFactory Raises $65M in Growth Equity Funding

CloudFactory, a Reading, England, UK-based provider of managed workforce solutions for artificial intelligence (AI), secured $65m in growth equity funding. The round was led by FTV Capital with participation from Weatherford Capital. As part of the transaction, FTV Capital partner Alex Mason and principal Abhay Puskoor and Weatherford Capital partner Sam Weatherford will join the…