@Ferrari: Ferrari share repurchase program

Maranello (Italy), March 11, 2021 – Ferrari N.V. (NYSE/MTA: RACE) (“Ferrari” or the “Company”) communicates its intention to restart its multi-year share repurchase program announced on December 28, 2018 (the “Program”) with a fourth tranche of up to Euro 150 million to start on March 12, 2021 (“Fourth Tranche”) and to end no later than…

Uber: Leading Rideshare Companies Launch Industry Sharing Safety Program in the U.S.

SAN FRANCISCO–(BUSINESS WIRE)–Uber and Lyft today announced the Industry Sharing Safety Program, a first-of-its-kind effort to share information about the drivers and delivery people deactivated from each company’s platform for the most serious safety incidents, including sexual assault and physical assaults resulting in a fatality. The goal of the Program is to further enhance the…

@FCA: 2022 Wagoneer and Grand Wagoneer Reborn as the New Standard of Sophistication, Authenticity and Modern Mobility

March 11, 2021, Auburn Hills, Mich. – The all-new 2022 Wagoneer and Grand Wagoneer mark the rebirth of a premium American icon, with legendary capability courtesy of three available 4×4 systems, exceptional driving dynamics, powerful performance, including best-in-class towing capability of up to 10,000 lbs., advanced technology, safety and a new level of comfort for…

GROUPE RENAULT: SALE OF ITS ENTIRE STAKE IN DAIMLER BY RENAULT

Not for distribution, directly or indirectly, in Canada, Australia or Japan. PARIS, March 11, 2021 – Renault S.A. (“Renault”) announces today that it intends to sell its entire stake in Daimler AG (“Daimler”) (i.e. 16,448,378 shares, representing 1.54% of the share capital of Daimler) through a placement…

The Return of an Icon: All-new 2022 Wagoneer and Grand Wagoneer Now Available to Order; Pricing Announced

For the first time in 30 years, customers searching for a true premium SUV experience can now place an order for the all-new 2022 model year Wagoneer and Grand Wagoneer. The return of Wagoneer as a premium extension of the Jeep® brand has a starting U.S. manufacturer’s suggested retail price (MSRP) of $57,995; Grand Wagoneer…

First-Ever Automotive Reference System from McIntosh® Featured in 2022 Grand Wagoneer

BINGHAMTON, N.Y., March 11, 2021 /PRNewswire/ — McIntosh Laboratory is world-famous for its unparalleled luxury home audio systems. Today, McIntosh is proud to announce that two extraordinary McIntosh entertainment systems, the MX1375 Reference Entertainment System and the MX950 Entertainment System, will hit the road in the upcoming 2022 Wagoneer and Grand Wagoneer. Both vehicles will…

Alpha Motor Corporation Releases The Striking Pure Electric WOLF™ Utility Truck

IRVINE, Calif., March 11, 2021 /PRNewswire/ — Alpha Motor Corporation has unveiled WOLF™, the automotive company’s pure electric pickup truck built on a shared platform with the Alpha JAX™. The Alpha WOLF™ Electric Truck launch can be viewed at https://youtu.be/BQNW-eiXRR4. WOLF™, which represents balance, endurance, and friendship, is positioned as a fun utility truck…

Haldex appoints new CEO, Helene Svahn hands over to Jean-Luc Desire

STOCKHOLM, March 11, 2021 /PRNewswire/ — Haldex’s CEO, Helene Svahn, will hand over the CEO position to Jean-Luc Desire by July 1, 2021 at the latest. Jean-Luc Desire is currently with Tenneco Automotive in Belgium and will relocate to Sweden when he assumes his new role. Helene Svahn has led Haldex through tough cost saving programs…

Rethinking the Four “Rs” of LiDAR: Rate, Resolution, Returns and Range

Extending Conventional LiDAR Metrics to Better Evaluate Advanced Sensor Systems
By Blair LaCorte, Luis Dussan, Allan Steinhardt, and Barry Behnken
Executive Summary

As the autonomous vehicle market matures, sensor and perception engineers have become increasingly sophisticated in how they evaluate system efficiency, reliability, and performance. Many industry leaders have recognized that conventional metrics for LiDAR data collection (such as frame rate, full frame resolution, points per second, and detection range) no longer adequately measure a sensor’s effectiveness at solving the real-world use cases that underlie autonomous driving.
First generation LiDAR sensors passively search a scene and detect objects using background patterns that are fixed in both time (no ability to enhance with a faster revisit) and space (no ability to apply extra resolution to high-interest areas like the road surface or pedestrians). A new class of solid-state, high-performance, active LiDAR sensors enables intelligent information capture that expands their capabilities, moving from “passive search” or detection of objects to “active search” and, in many cases, to the actual acquisition of classification attributes of objects in real time.
Because early generation LiDARs use passive fixed raster scans, the industry adopted very simplistic performance metrics that don’t capture all the nuances of the sensor requirements needed to enable AVs. In response, AEye is proposing four corresponding extended metrics for LiDAR evaluation: extending frame rate to object revisit rate; extending resolution to instantaneous resolution; extending points per second to the more useful quality returns per second; and extending detection range to the more critically important object classification range.
We are proposing that these new metrics be used in conjunction with existing measurements of basic camera, radar, and passive LiDAR performance. These extended metrics measure a sensor’s ability to intelligently enhance perception and create a more complete evaluation of a sensor system’s efficacy in improving the safety and performance of autonomous vehicles in real-world scenarios.
Introduction

Our industry has leveraged proven frameworks from advanced robotic vision research and applied them to LiDAR-specific product architectures. One framework, “Search, Acquire [or classify], and Act,” has proven to be both versatile and instructive relative to object identification.
Search is the ability to detect any and all objects without the risk of missing anything.

Acquire is the ability to take a search detection and enhance the understanding of an object’s attributes to accelerate classification and determine possible intent (for example, by classifying object type or by calculating velocity).

Act defines an appropriate sensor response as trained, or as recommended, by the vehicle’s perception system or domain controller. Responses largely fall into four categories:

1. Continue scan for new objects, with no enhanced information required;
2. Continue scan and interrogate the object further, gathering more information on an acquired object’s attributes to enable classification;
3. Continue scan and track an object classified as non-threatening;
4. Continue scan and instruct the control system to take evasive action.

Within this framework, performance specifications and system effectiveness need to be assessed with an “eye” firmly on the ultimate objective: completely safe operation of the vehicle. However, because most LiDAR systems today are passive, they are capable only of basic search. The conventional metrics used to evaluate these systems therefore relate to basic object detection capabilities: frame rate, resolution, points per second, and detection range. If safety is the ultimate goal, then search needs to be more intelligent, and acquisition (and classification) must happen more quickly and accurately, so that the sensor or the vehicle can determine how to act immediately.
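As a concrete illustration of the “Act” step, here is a minimal Python sketch of the four response categories. The `Response` enum, `Track` record, and `choose_response` logic (including the 0.9 confidence threshold) are hypothetical simplifications of ours, not AEye’s implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Response(Enum):
    """The four 'Act' categories described above."""
    CONTINUE_SCAN = auto()        # 1: no enhanced information required
    INTERROGATE_FURTHER = auto()  # 2: gather attributes to enable classification
    TRACK_NON_THREAT = auto()     # 3: keep tracking a benign, classified object
    EVASIVE_ACTION = auto()       # 4: instruct the control system to react

@dataclass
class Track:
    """Hypothetical per-object state a perception system might carry."""
    classified: bool
    threatening: bool
    confidence: float  # 0.0 .. 1.0 classification confidence

def choose_response(track: Optional[Track]) -> Response:
    """Map the state of an acquired object to one of the four responses."""
    if track is None:  # nothing acquired yet: keep searching
        return Response.CONTINUE_SCAN
    if not track.classified or track.confidence < 0.9:
        return Response.INTERROGATE_FURTHER
    if not track.threatening:
        return Response.TRACK_NON_THREAT
    return Response.EVASIVE_ACTION
```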
Rethinking the Metrics

Makers of automotive LiDAR systems are frequently asked about their frame rate, and whether their technology can detect objects with 10% reflectivity at some range (often 230 meters). We believe these benchmarks are required but insufficient, as they don’t capture critical details such as the size of the target, the speed at which it needs to be detected and recognized, or the cost of collecting that information.
We believe it would be productive for the industry to adopt a more holistic approach to assessing LiDAR systems for automotive use. We argue that metrics must be considered as they relate to the perception system as a whole, rather than to an individual point sensor, and that we should ask: “What information would enable a perception system to make better, faster decisions?” In this white paper, we outline the four conventional LiDAR metrics, with recommendations on how to extend them.
Conventional Metric #1: Frame Rate of 10Hz – 20Hz
Extended Metric: Object Revisit Rate
The time between two shots at the same point or set of points.

Defining single point detection range alone is insufficient because a single interrogation point (shot) rarely delivers sufficient confidence – it is only suggestive. Therefore, passive LiDAR systems need either multiple interrogations/detects at the same location or multiple interrogations/detects on the same object to validate an object or scene. In passive LiDAR systems, the time it takes to detect an object depends on many variables, such as distance, interrogation pattern, resolution, reflectivity, the shape of the object, and the scan rate.
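A quick worked example of why a single shot is “only suggestive”: if each shot independently detects a target with probability p, the cumulative detection probability after n shots is 1 − (1 − p)^n. The per-shot probability below is an assumed figure for illustration, not an AEye specification.

```python
# Cumulative detection probability over n independent shots,
# each with per-shot detection probability p.
def cumulative_detection(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for n in (1, 2, 4, 8):
    print(f"{n} shot(s): {cumulative_detection(0.5, n):.3f}")
# 1 shot(s): 0.500 -- a coin flip, only suggestive
# 2 shot(s): 0.750
# 4 shot(s): 0.938
# 8 shot(s): 0.996 -- repeated interrogation builds confidence
```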
A key factor missing from the conventional metric is a finer definition of time. Thus, we propose that object revisit rate become a new, more refined metric for automotive LiDAR because a high-performance, active LiDAR, such as AEye’s iDAR™, has the ability to revisit an object within the same frame. The time between the first and second measurement of an object is critical, as shorter object revisit times keep processing times low for advanced algorithms that correlate multiple moving objects in a scene. The best algorithms used to associate/correlate multiple moving objects can be confused when the time elapsed between samples is high. This lengthy combined processing time, or latency, is a primary issue for the industry.
The active iDAR platform accelerates revisit rate by allowing for intelligent shot scheduling within a frame. Not only can iDAR interrogate a position or object multiple times within a conventional frame, but it can also maintain a background search pattern while simultaneously overlaying additional intelligent shots. For example, an iDAR sensor can schedule two repeated shots on an object of interest in quick succession (30μsec). These multiple interrogations can be contextually integrated with the needs of the user (either human or computer) to increase confidence, reduce latency, or extend ranging performance.
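As an illustration of what intelligent shot scheduling of this kind might look like, the sketch below interleaves a fixed background raster with time-stamped priority revisits. The queue structure, the 30 μs default, and all names are hypothetical simplifications of ours, not the iDAR scheduler.

```python
import heapq
from itertools import cycle

# Background search: a fixed raster pattern visited round-robin,
# expressed as (azimuth, elevation) grid indices.
background = cycle([(az, el) for el in range(4) for az in range(8)])

# Priority revisits: (due_time_us, (az, el)) pairs, earliest due first.
revisits: list = []

def schedule_revisit(now_us: int, point, delay_us: int = 30) -> None:
    """Queue a repeat shot on an object of interest ~30 microseconds out."""
    heapq.heappush(revisits, (now_us + delay_us, point))

def next_shot(now_us: int):
    """Serve a due revisit if one exists; otherwise continue the raster."""
    if revisits and revisits[0][0] <= now_us:
        return heapq.heappop(revisits)[1]
    return next(background)

# Example: a low-confidence detect at (3, 1) triggers a quick second look.
schedule_revisit(now_us=0, point=(3, 1))
print(next_shot(now_us=50))  # (3, 1) -- the revisit preempts the raster
print(next_shot(now_us=51))  # (0, 0) -- background scan resumes
```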
These additional interrogations can also be data dependent. For example, if a low-confidence detection occurs and it is desirable to quickly validate or reject it, the object can be revisited for a secondary measurement, as seen in Figure 1. A typical frame rate for conventional passive sensors is 10Hz, and for these sensors the frame rate is also the object revisit rate. With AEye’s active iDAR technology, the object revisit rate is decoupled from the frame rate, and it can be as low as tens of microseconds between revisits to key points/objects – easily 100x to 1000x faster than conventional passive sensors.
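The arithmetic behind that speedup claim, as a quick sanity check (the 10Hz frame rate and the microsecond-scale revisit intervals come from the text; the comparison script itself is ours):

```python
frame_rate_hz = 10
passive_revisit_us = 1_000_000 / frame_rate_hz  # 100,000 us between frames

# "Tens of microseconds" intra-frame revisits vs. the 100 ms frame interval.
for active_revisit_us in (30, 100, 1_000):
    speedup = passive_revisit_us / active_revisit_us
    print(f"{active_revisit_us:>5} us revisit -> {speedup:>7,.0f}x faster")
# 30 us -> 3,333x; 100 us -> 1,000x; 1,000 us -> 100x,
# consistent with the "easily 100x to 1000x" figure above.
```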
What this means is that a perception engineering team using dynamic object revisit capabilities can create a perception system that is at least an order of magnitude faster than what can be delivered by conventional passive LiDAR without disrupting the background scan patterns. We believe this capability is invaluable for delivering level 4/5 autonomy as the vehicle will need to handle complex edge cases, such as identifying a pedestrian in front of oncoming headlights or a flatbed semi-trailer laterally crossing the path of the vehicle.

Figure 1. Advanced active LiDAR sensors utilize intelligent scan patterns that enable an Object Revisit Interval, such as the random scan pattern of AEye’s iDAR (B). This is compared to the Revisit Interval on a passive, fixed pattern LiDAR (A). For example, in this instance, iDAR is able to get eight detects on a vehicle, while passive, fixed pattern LiDAR can only achieve one.
Within the “Search, Acquire, and Act” framework, an accelerated object revisit rate, therefore, allows for faster acquisition because it can identify and automatically revisit an object, painting a more complete picture of it within the context of the scene. Ultimately, this allows for collection of object classification attributes in the sensor, as well as efficient and effective interrogation and tracking of a potential threat.
Real-World Applications

Use Case: Head-On Detection

When you’re driving, the world can change dramatically in a tenth of a second. In fact, two cars traveling towards each other at 100 kph are 5.5 meters closer after 0.1 seconds. By having an accelerated revisit rate, we increase the likelihood of hitting the same target with a subsequent shot, because the target is less likely to have moved significantly in the time between shots. This helps the user solve the “Correspondence Problem”: determining which parts of one “snapshot” of a dynamic scene correspond to which parts of another snapshot of the same scene. It does this while simultaneously enabling the user to quickly build statistical measures of confidence and generate aggregate information that downstream proce…
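For the record, the closing-distance arithmetic in that example checks out (a restatement of the paper’s own numbers, not new data):

```python
# Two cars approaching head-on at 100 kph each close at 200 kph.
closing_speed_mps = 200 * 1000 / 3600   # ~55.6 m/s
gap_closed_m = closing_speed_mps * 0.1  # distance closed in 0.1 s
print(f"{gap_closed_m:.2f} m")          # ~5.56 m, the ~5.5 m quoted above
```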