Mar 11, 2021. Nearly 6,000-mile trip begins in NYC on Saturday, March 13. VW ID.4 Pro S and 1st Edition models boast an EPA-estimated range of 250 miles on a single charge. All-electric compact SUV deliveries begin this month. Herndon, VA — Volkswagen of America is embarking on a drive across the U.S. this weekend… Continue reading @VW Group: All-electric Volkswagen ID.4 EV prepares for cross-country drive
Tag: Mobility
Uber: Leading Rideshare Companies Launch Industry Sharing Safety Program in the U.S.
SAN FRANCISCO–(BUSINESS WIRE)–Uber and Lyft today announced the Industry Sharing Safety Program, a first-of-its-kind effort to share information about the drivers and delivery people deactivated from each company’s platform for the most serious safety incidents, including sexual assault and physical assaults resulting in a fatality. The goal of the Program is to further enhance the… Continue reading Uber: Leading Rideshare Companies Launch Industry Sharing Safety Program in the U.S.
@Groupe PSA: PR ECO-MOBILITY ACCORDING TO PEUGEOT: THE “POWER OF CHOICE” STRATEGY
By 2040, almost 5 billion men and women will live in cities. Making the urban space “sustainable” is one of the main challenges of the 21st century, and the aspiration of creating “sustainable cities” is shared among all city dwellers. Factors such as air quality, the growth of e-commerce and home delivery, as well as… Continue reading @Groupe PSA: PR ECO-MOBILITY ACCORDING TO PEUGEOT: THE “POWER OF CHOICE” STRATEGY
@FCA: 2022 Wagoneer and Grand Wagoneer Reborn as the New Standard of Sophistication, Authenticity and Modern Mobility
March 11, 2021 , Auburn Hills, Mich. – The all-new 2022 Wagoneer and Grand Wagoneer mark the rebirth of a premium American icon, with legendary capability courtesy of three available 4×4 systems, exceptional driving dynamics, powerful performance, including best-in-class towing capability of up to 10,000 lbs., advanced technology, safety and a new level of comfort for… Continue reading @FCA: 2022 Wagoneer and Grand Wagoneer Reborn as the New Standard of Sophistication, Authenticity and Modern Mobility
Lidar range at one kilometer
The start-up AEye has made a breakthrough in the lidar segment: a range of 1,000 meters. Some companies consider the lidar sensor indispensable; others shy away from its high cost. But the sensor is getting cheaper and, as AEye now demonstrates, more powerful as well. The sensor is considered the main perception option… Continue reading Lidar range at one kilometer
Rethinking the Four “Rs” of LiDAR: Rate, Resolution, Returns and Range
Extending Conventional LiDAR Metrics to Better Evaluate Advanced Sensor Systems
By Blair LaCorte, Luis Dussan, Allan Steinhardt, and Barry Behnken
Executive Summary
As the autonomous vehicle market matures, sensor and perception engineers have become increasingly sophisticated in how they evaluate system efficiency, reliability, and performance. Many industry leaders have recognized that conventional metrics for LiDAR data collection (such as frame rate, full frame resolution, points per second, and detection range) no longer adequately measure the effectiveness of sensors to solve real-world use cases that underlie autonomous driving.
First generation LiDAR sensors passively search a scene and detect objects using background patterns that are fixed in both time (no ability to enhance with a faster revisit) and in space (no ability to apply extra resolution to high interest areas like the road surface or pedestrians). A new class of solid-state, high-performance, active LiDAR sensors enable intelligent information capture that expands their capabilities — moving from “passive search” or detection of objects, to “active search,” and in many cases, to the actual acquisition of classification attributes of objects in real time.
Because early generation LiDARs use passive fixed raster scans, the industry adopted very simplistic performance metrics that don’t capture all the nuances of the sensor requirements needed to enable AVs. In response, AEye is proposing the consideration of four new corresponding metrics for extending LiDAR evaluation. Specifically: extending the metric of frame rate to include object revisit rate; extending the metric of resolution to capture instantaneous resolution; extending points per second to signify the overall more useful quality returns per second; and extending detection range to reflect the more critically important object classification range.
We are proposing that these new metrics be used in conjunction with existing measurements of basic camera, radar, and passive LiDAR performance. These extended metrics measure a sensor’s ability to intelligently enhance perception and create a more complete evaluation of a sensor system’s efficacy in improving the safety and performance of autonomous vehicles in real-world scenarios.
Download “Rethinking the Four “Rs” of LiDAR: Rate, Resolution, Returns and Range” [pdf]
Introduction
Our industry has leveraged proven frameworks from advanced robotic vision research and applied them to LiDAR-specific product architectures. One framework, “Search, Acquire [or classify], and Act,” has proven to be both versatile and instructive relative to object identification.
Search is the ability to detect any and all objects without the risk of missing anything.

Acquire is defined as the ability to take a search detection and enhance the understanding of an object’s attributes to accelerate classification and determine possible intent (this could be done by classifying object type or by calculating velocity).

Act defines an appropriate sensor response as trained, or as recommended, by the vehicle’s perception system or domain controller. Responses largely fall into four categories:

1. Continue scan for new objects with no enhanced information required;
2. Continue scan and interrogate the object further, gathering more information on an acquired object’s attributes to enable classification;
3. Continue scan and track an object classified as non-threatening;
4. Continue scan and instruct the control system to take evasive action.

Within this framework, performance specifications and system effectiveness need to be assessed with an “eye” firmly on the ultimate objective: completely safe operation of the vehicle. However, as most LiDAR systems today are passive, they are only capable of basic search. Therefore, conventional metrics used for evaluating these systems’ performance relate to basic object detection capabilities: frame rate, resolution, points per second, and detection range. If safety is the ultimate goal, then search needs to be more intelligent, and acquisition (and classification) done more quickly and accurately so that the sensor or the vehicle can determine how to act immediately.
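The four response categories above can be sketched as a simple dispatch rule. This is purely an illustrative toy, not AEye's actual perception API; every name and the decision logic below are hypothetical assumptions for the sketch:

```python
from enum import Enum, auto

class Response(Enum):
    """The four response categories from the 'Search, Acquire, and Act' framework."""
    CONTINUE_SCAN = auto()        # 1. no enhanced information required
    INTERROGATE_FURTHER = auto()  # 2. gather attributes to enable classification
    TRACK_NON_THREAT = auto()     # 3. track an object classified as non-threatening
    EVASIVE_ACTION = auto()       # 4. instruct the control system to take evasive action

def choose_response(detected: bool, classified: bool, threatening: bool) -> Response:
    """Toy decision rule mapping an object's state to a response category.

    Hypothetical logic: undetected scenes keep scanning; detected but
    unclassified objects are interrogated; classified objects are either
    tracked or trigger evasive action depending on threat status.
    """
    if not detected:
        return Response.CONTINUE_SCAN
    if not classified:
        return Response.INTERROGATE_FURTHER
    return Response.EVASIVE_ACTION if threatening else Response.TRACK_NON_THREAT
```

In a real perception stack this decision would be made by the domain controller from far richer state, but the branching structure mirrors the four categories listed in the text.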
Rethinking the Metrics
Makers of automotive LiDAR systems are frequently asked about their frame rate, and whether or not their technology has the ability to detect objects with 10% reflectivity at some range (often 230 meters). We believe these benchmarks are required, but insufficient, as they don’t capture critical details such as the size of the target, the speed at which it needs to be detected and recognized, or the cost of collecting that information.
We believe it would be productive for the industry to adopt a more holistic approach when it comes to assessing LiDAR systems for automotive use. We argue that we must look at metrics as they relate to a perception system in general, rather than as an individual point sensor, and ask ourselves: “What information would enable a perception system to make better, faster decisions?” In this white paper, we outline the four conventional LiDAR metrics with recommendations on how to extend them.
Conventional Metric #1: Frame Rate of 10Hz – 20Hz
Extended Metric: Object Revisit Rate
The time between two shots at the same point or set of points

Defining single point detection range alone is insufficient because a single interrogation point (shot) rarely delivers sufficient confidence – it is only suggestive. Therefore, passive LiDAR systems need either multiple interrogations/detects at the same location or multiple interrogations/detects on the same object to validate an object or scene. In passive LiDAR systems, the time it takes to detect an object is dependent on many variables, such as distance, interrogation pattern, resolution, reflectivity, the shape of the object, and the scan rate.
A key factor missing from the conventional metric is a finer definition of time. Thus, we propose that object revisit rate become a new, more refined metric for automotive LiDAR because a high-performance, active LiDAR, such as AEye’s iDAR™, has the ability to revisit an object within the same frame. The time between the first and second measurement of an object is critical, as shorter object revisit times keep processing times low for advanced algorithms that correlate multiple moving objects in a scene. The best algorithms used to associate/correlate multiple moving objects can be confused when time elapsed between samples is high. This lengthy combined processing time, or latency, is a primary issue for the industry.
The active iDAR platform accelerates revisit rate by allowing for intelligent shot scheduling within a frame. Not only can iDAR interrogate a position or object multiple times within a conventional frame, it can maintain a background search pattern while simultaneously overlaying additional intelligent shots. For example, an iDAR sensor can schedule two repeated shots on an object of interest in quick succession (30μsec). These multiple interrogations can be contextually integrated with the needs of the user (either human or computer) to increase confidence, reduce latency, or extend ranging performance.
These additional interrogations can also be data dependent. For example, an object can be revisited if a low confidence detection occurs, and it is desirable to quickly validate or reject it, enabled with secondary data and measurement, as seen in Figure 1. A typical frame rate for conventional passive sensors is 10Hz. For conventional passive sensors, this is the object revisit rate. With AEye’s active iDAR technology, the object revisit rate is now different from the frame rate, and it can be as low as tens of microseconds between revisits to key points/objects – easily 100x to 1000x faster than conventional passive sensors.
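The magnitude of that speedup follows from simple arithmetic on the figures in the text: a passive sensor's revisit time is one frame period (100 ms at 10Hz), while the repeated-shot example above revisits in 30μsec. The helper below is an illustrative sketch of that calculation, not part of any real sensor API:

```python
def revisit_speedup(frame_rate_hz: float, revisit_interval_s: float) -> float:
    """Ratio of a passive sensor's revisit time (one frame period)
    to an active sensor's revisit interval."""
    passive_revisit_s = 1.0 / frame_rate_hz  # passive revisit = frame period
    return passive_revisit_s / revisit_interval_s

# Figures from the text: 10 Hz conventional frame rate, 30 microsecond
# repeated-shot interval on an object of interest.
speedup = revisit_speedup(10.0, 30e-6)
print(f"{speedup:.0f}x")  # on the order of thousands of times faster
```

With the stated numbers the ratio works out to roughly 3,300x, comfortably within the "100x to 1000x faster" claim, and larger still for the shortest revisit intervals.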
What this means is that a perception engineering team using dynamic object revisit capabilities can create a perception system that is at least an order of magnitude faster than what can be delivered by conventional passive LiDAR without disrupting the background scan patterns. We believe this capability is invaluable for delivering level 4/5 autonomy as the vehicle will need to handle complex edge cases, such as identifying a pedestrian in front of oncoming headlights or a flatbed semi-trailer laterally crossing the path of the vehicle.
Figure 1. Advanced active LiDAR sensors utilize intelligent scan patterns that enable an Object Revisit Interval, such as the random scan pattern of AEye’s iDAR (B). This is compared to the Revisit Interval on a passive, fixed pattern LiDAR (A). For example, in this instance, iDAR is able to get eight detects on a vehicle, while passive, fixed pattern LiDAR can only achieve one.
Within the “Search, Acquire, and Act” framework, an accelerated object revisit rate, therefore, allows for faster acquisition because it can identify and automatically revisit an object, painting a more complete picture of it within the context of the scene. Ultimately, this allows for collection of object classification attributes in the sensor, as well as efficient and effective interrogation and tracking of a potential threat.
Real-World Applications
Use Case: Head-On Detection
When you’re driving, the world can change dramatically in a tenth of a second. In fact, two cars traveling towards each other at 100 kph are 5.5 meters closer after 0.1 seconds. By having an accelerated revisit rate, we increase the likelihood of hitting the same target with a subsequent shot due to the decreased likelihood that the target has moved significantly in the time between shots. This helps the user solve the “Correspondence Problem”: determining which parts of one “snapshot” of a dynamic scene correspond to which parts of another snapshot of the same scene. It does this while simultaneously enabling the user to quickly build statistical measures of confidence and generate aggregate information that downstream proce…
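The head-on figure above is straightforward kinematics: two vehicles approaching each other combine their speeds, so the gap closes at twice each car's speed. A minimal sketch of that arithmetic (the function name and parameters are illustrative, not from any real library):

```python
def closing_distance_m(speed_each_kph: float, interval_s: float) -> float:
    """Distance (m) by which two head-on vehicles, each at speed_each_kph,
    close during an interval of interval_s seconds."""
    closing_speed_mps = 2 * speed_each_kph * 1000.0 / 3600.0  # kph -> m/s, doubled
    return closing_speed_mps * interval_s

# At a 10 Hz frame rate (0.1 s between looks), two cars at 100 kph close:
print(round(closing_distance_m(100, 0.1), 2))  # 5.56 m, the ~5.5 m in the text

# At a 30 microsecond revisit interval, the scene is nearly frozen:
print(closing_distance_m(100, 30e-6))  # under 2 mm of closure between shots
```

The contrast between the two intervals is the practical payoff of a fast object revisit rate: successive shots land on an essentially unmoved target, which is what makes the correspondence problem tractable.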
@BMW: Strong second half-year 2020 driven by high demand and well-coordinated management
Munich. The BMW Group’s profitable performance in the second half of the financial year 2020 provided a good tailwind going into 2021. Despite the global pandemic, the premium automobile manufacturer recorded an impressive pre-tax profit for the final six months of the year amounting to € 4,724 million, 9.8% up on the previous year’s already high… Continue reading @BMW: Strong second half-year 2020 driven by high demand and well-coordinated management
ThunderSoft and Human Horizons announced a joint venture to develop SOA framework for connected vehicles
ThunderSoft, the operating system products and technologies provider, recently reached an agreement with Human Horizons to establish a new joint venture, which will focus on the development of an SOA framework for next-generation connected vehicles. The new joint venture will be operated by ThunderSoft. It will leverage ThunderSoft’s advanced operating system technologies plus Human Horizons’ outstanding… Continue reading ThunderSoft and Human Horizons announced a joint venture to develop SOA framework for connected vehicles
@Groupe PSA: PR Distribution of Faurecia shares and cash has become unconditional
Amsterdam – Stellantis N.V. (NYSE / MTA / Euronext Paris: STLA) (“Stellantis”) announced today that the previously announced conditional distribution (the “Distribution”), pursuant to a capital reduction, by Stellantis to the holders of its common shares of up to 54,297,006 ordinary shares of Faurecia S.E. (“Faurecia”) and up to €308 million in cash, being the… Continue reading @Groupe PSA: PR Distribution of Faurecia shares and cash has become unconditional
Geely taps Foxconn’s playbook to build EVs for other carmakers
BEIJING — Geely, China’s largest private-sector automaker, has taken the first step toward becoming the Foxconn of electric vehicles. Geely and Foxconn, the Taiwanese Apple assembler, are establishing a 50-50 joint venture that will manufacture whole electric vehicles for clients as well as parts. Just like Foxconn did with iPhones, Geely will pursue an economy… Continue reading Geely taps Foxconn’s playbook to build EVs for other carmakers