We’ll have to wait before autonomous vehicles are mainstream, CEO of SoftBank’s Arm says

It's going to be “a while” before self-driving cars become mainstream, the CEO of Arm Holdings told CNBC Tuesday.
“It is a phenomenally hard problem to anticipate what a car could do under absolutely any set of circumstances,” said Simon Segars, who was speaking to CNBC's Karen Tso at the Mobile World Congress in Barcelona, Spain.
“I think you're going to start to see early services, in quite a constrained way, quite soon over the next couple of years,” he added, explaining that there was “some way to come” before the technology was “completely mainstream.”
Over the last few years, advances in the technology have led to several trial runs of autonomous vehicles.

In August 2018, for example, Hyundai Motor announced that the first journey by an autonomous truck on a South Korean highway had taken place. The firm's Xcient truck, which has a maximum load capacity of 40 tons, drove around 40 kilometers between Uiwang and Incheon.
The vehicle used an autonomous driving system that allowed it to accelerate, decelerate, steer and maneuver through traffic without needing input from a human, although one was on board to take control as and when required.
Back in Barcelona, Arm Holdings' Segars gave an insight into the technology required for autonomous vehicles.
“Self-driving cars have … racks of servers in them, that's great for prototyping, but if you want to make millions of them then you've got to shrink it down, so there has to be a pathway to get all of that technology into very low cost, very power efficient chips.”

Deconstructing Two Conventional LiDAR Metrics

Executive Summary
Conventional metrics (such as frame rate and resolution) used for evaluating LiDAR systems don’t adequately or explicitly address real-world problems facing autonomous driving. Therefore, AEye, the developer of iDAR™ technology, proposes two new corresponding metrics for evaluation: object revisit rate and instantaneous resolution. These additional metrics are necessary to better describe the safety and performance of more advanced LiDAR sensors in real-world scenarios.

Download “Deconstructing Two Conventional LiDAR Metrics” [pdf]

Introduction
How is the effectiveness of an intelligent detection system measured? Conventional metrics used for evaluating LiDAR systems rely on frame rate and resolution (as well as range, which we will discuss at a later time) as the touchstones of success. However, AEye believes that these measurements are inadequate for evaluating the effectiveness of more advanced LiDAR systems for autonomous vehicles. In this white paper, we discuss why object revisit rate and instantaneous resolution are more meaningful metrics to assess the capabilities of our iDAR system, and why these metrics are ultimately more advantageous for autonomous vehicle development.

Deconstructing the Metrics
Makers of automotive LiDAR systems are frequently asked about their frame rate, and whether or not their technology has the ability to detect objects with 10 percent reflectivity at some range and at some frame rate with some arbitrary resolution. While most manufacturers can readily answer these questions, we believe that this description is insufficient and that the industry must adopt a more holistic approach when it comes to assessing LiDAR systems for automotive use. Additionally, we must think of LiDAR sensors as components of a broader perception system, rather than as individual point sensors. Below, we have outlined two conventional LiDAR metrics and AEye's additional metrics.

Conventional Metric #1: Frame rate of xx Hz

AEye’s Metric
Object revisit rate (the time between two shots at the same point or set of points)
Defining single-point detection range alone is insufficient because a single interrogation point (shot) rarely delivers enough confidence; it is only suggestive. Therefore, we need multiple interrogations/detections at the same point, or multiple interrogations/detections on the same object, to validate or comprehend an object or scene. The time it takes to detect an object depends on many variables, such as distance, interrogation pattern and resolution, reflectivity, and the shape of the object being interrogated, and can “traditionally” take several full frames to achieve. What is missing from the conventional metric, therefore, is a finer definition of time. Thus, AEye proposes that object revisit rate become a new, more critical metric for automotive LiDAR, because an agile LiDAR such as AEye's iDAR can achieve an object revisit rate vastly faster than its classic frame rate.
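To make the difference concrete, here is a minimal sketch, with illustrative numbers of our own rather than AEye specifications, comparing the time needed to accumulate enough interrogations on one point when revisits are frame-locked versus scheduled within the frame:

# Minimal sketch: time to reach a detection-confidence threshold when an
# object must be interrogated k times. All numbers are illustrative
# assumptions, not AEye specifications.

def time_to_confirm(revisit_interval_s: float, shots_needed: int) -> float:
    """Time from the first shot until `shots_needed` interrogations have
    landed on the same point, given a fixed interval between revisits."""
    return (shots_needed - 1) * revisit_interval_s

FRAME_PERIOD = 0.100    # conventional LiDAR: one revisit per 10 Hz frame
AGILE_REVISIT = 50e-6   # agile LiDAR: assumed 50 us between scheduled revisits
SHOTS_NEEDED = 3        # assumed shots required to validate a detection

print(f"frame-locked: {time_to_confirm(FRAME_PERIOD, SHOTS_NEEDED)*1e3:.1f} ms")
print(f"agile:        {time_to_confirm(AGILE_REVISIT, SHOTS_NEEDED)*1e3:.3f} ms")
# frame-locked: 200.0 ms vs agile: 0.100 ms -- roughly three orders of magnitude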

The time between the first measurement of an object and the second is critical, as shorter object revisit times help keep processing times low for advanced algorithms that must correlate multiple moving objects in a scene. Additionally, at high closing velocities, too long an object revisit time could be the difference between detecting an object in time and a loss of life, since even the best algorithms for associating/correlating multiple moving objects can be confused when many objects are in the scene and the time elapsed between samples is high.

The agile AEye platform accelerates revisit rate by allowing intelligent shot scheduling within a frame, including the capability to interrogate a target position or object multiple times before the classic frame is completed. For example, an iDAR sensor can schedule two repeated shots on a point or points of interest in quick succession. These multiple interrogations can then be used according to the scene context and the needs of the user (either human or another computer) to increase confidence (or even extend ranging performance).
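A minimal sketch of what such intra-frame scheduling might look like follows; the interface and numbers are hypothetical, since AEye's actual scheduler is not public. A background raster fills the frame while high-priority points of interest are interleaved so they are revisited before the frame ends:

import heapq

def schedule_shots(raster_points, points_of_interest, revisits=2):
    """Yield shot targets, interleaving extra revisits of points of
    interest into the background raster. Lower priority fires first."""
    queue = []
    for i, p in enumerate(raster_points):
        heapq.heappush(queue, (i, ('raster', p)))
    # Assumption: each point of interest is revisited `revisits` times,
    # spread across the frame instead of waiting for the next frame.
    stride = max(1, len(raster_points) // (revisits + 1))
    for p in points_of_interest:
        for r in range(1, revisits + 1):
            heapq.heappush(queue, (r * stride - 0.5, ('revisit', p)))
    while queue:
        _, shot = heapq.heappop(queue)
        yield shot

frame = [(x, 0) for x in range(10)]            # toy 10-shot raster
for shot in schedule_shots(frame, [(4, 0)]):   # one point of interest
    print(shot)                                # revisits appear mid-frame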

These interrogations can also be data dependent. For example, an object can be revisited if a low-confidence detection occurs and it is desirable to quickly validate, or reject, that detection with a secondary measurement, as seen in Figure 1. A typical full frame rate for conventional sensors is approximately 10 Hz, or one frame every 100 ms; for such sensors, this is also the “object revisit rate.” With AEye's flexible iDAR technology, the object revisit rate is decoupled from the frame rate and can be as short as tens of microseconds between revisits to key points/objects, as the user/host requires, easily 3 to 4 orders of magnitude faster than alternative fixed-scan sensors.
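A data-dependent revisit policy of this kind can be sketched in a few lines; the thresholds below are illustrative assumptions, not iDAR parameters:

# Minimal sketch of a data-dependent revisit policy: a low-confidence
# return triggers an immediate follow-up shot instead of waiting a full
# frame to confirm or reject the detection.

CONFIRM = 0.9   # assumed confidence above which a detection stands alone
REJECT  = 0.2   # assumed confidence below which a return is treated as noise

def handle_return(point, confidence, schedule_revisit):
    """Decide what to do with a single LiDAR return."""
    if confidence >= CONFIRM:
        return 'accept'
    if confidence <= REJECT:
        return 'discard'
    # Ambiguous return: schedule a secondary measurement within
    # microseconds rather than ~100 ms later in the next full frame.
    schedule_revisit(point)
    return 'pending'

pending = []
print(handle_return((12.0, 0.3), 0.55, pending.append))  # -> 'pending'
print(pending)                                           # [(12.0, 0.3)]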

Figure 1. “Foveation in Time”: the intra-frame revisit interval and random scan pattern of iDAR (B), compared to the revisit interval of a typical fixed-pattern LiDAR (A)

What this means is that an effective perception engineering team using dynamic object revisit capabilities can create a perception system that is at least an order of magnitude faster than what conventional LiDAR can deliver. We believe this capability is invaluable for delivering Level 4/5 autonomy, as the vehicle will need to handle significantly more complex corner cases.

Real-World Application: When you’re driving, the world can change dramatically in a tenth of a second. In fact, two cars closing at a mutual speed of 200 km/h are 18 feet (about 5.6 meters) closer after 0.1 seconds. By having an accelerated revisit rate, we increase the likelihood of hitting the same target with a subsequent shot, because the target is less likely to have moved significantly in the time between shots. This helps the user solve the “Correspondence Problem” (determining which parts of one “snapshot” of a dynamic scene correspond to which parts of another snapshot of the same scene), while simultaneously enabling the user to quickly build statistical measures of confidence and generate aggregate information that downstream processors might require (such as object velocity and acceleration). While the “Correspondence Problem” will always be a challenge for autonomous systems, the ability to selectively increase revisit rate on points of interest can significantly aid higher-level inferencing algorithms, allowing them to more quickly determine correct solutions.
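One way to see the payoff is the “gating” radius an association algorithm must search when matching a detection to its predecessor: a target moving at most v_max can only have travelled v_max x dt between two looks, so a shorter revisit interval shrinks the search region dramatically. A minimal sketch, reusing the 200 km/h closing speed from above:

# Gating radius for the Correspondence Problem as a function of revisit
# interval. Revisit intervals are illustrative assumptions.

def gating_radius(v_max_mps: float, dt_s: float) -> float:
    """Maximum distance a target can travel between two looks."""
    return v_max_mps * dt_s

V_MAX = 55.6  # m/s, i.e., the 200 km/h mutual closing speed from the text

for label, dt in [('10 Hz frame', 0.1), ('1 ms revisit', 1e-3), ('50 us revisit', 50e-6)]:
    print(f"{label:>14}: match within {gating_radius(V_MAX, dt):.4f} m")
# 10 Hz frame:    5.5600 m -- many candidate objects may fall inside
#  1 ms revisit:  0.0556 m -- correspondence is nearly unambiguous
# 50 us revisit:  0.0028 m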

Furthermore, allocating shots to extract velocity and acceleration only when detections have occurred (part of the acquisition chain), rather than allocating repeat shots everywhere in the frame, vastly reduces the required number of shots per frame. For example, even in dense traffic, only 1% of the occupancy grid may contain detections. Adding a second detection, via iDAR, to build a velocity estimate on each detection increases the overall number of shots by only 1%, whereas obtaining velocity everywhere, as mandated by fixed-scan systems, doubles the required shots (100%, i.e., a 2x increase). This speed and shot saliency ultimately make autonomous driving much safer, because they eliminate ambiguity and allow for more efficient use of downstream processing resources. Solving other “Correspondence Problems” (think: camera/LiDAR) with iDAR is the subject of a future paper.
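The arithmetic behind that claim is simple enough to check directly; the frame size below is an illustrative assumption, while the 1% occupancy figure comes from the text above:

# Back-of-the-envelope check of the shot-budget claim.

SHOTS_PER_FRAME = 100_000   # assumed shots in one full frame
OCCUPANCY = 0.01            # 1% of the occupancy grid contains detections

# Fixed-scan approach: revisit *everywhere* to get velocity -> 2x shots.
fixed_scan_total = 2 * SHOTS_PER_FRAME

# Agile approach: add one follow-up shot only where a detection occurred.
agile_total = SHOTS_PER_FRAME + int(OCCUPANCY * SHOTS_PER_FRAME)

print(f"fixed scan: {fixed_scan_total} shots (+100%)")
print(f"agile:      {agile_total} shots (+{OCCUPANCY:.0%})")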

The AEye Advantage: Whereas other LiDAR systems are limited by the physics of fixed laser pulse energy, fixed dwell time, and fixed scan patterns, AEye's iDAR technology is a software-definable system that allows downstream processors to tailor their data collection strategy to best suit their information processing needs at design time and/or run time. Physics, of course, remains the ultimate arbiter, the primary constraints being the photon budget (laser average power) and the round-trip flight time imposed by the speed of light, but AEye's software agility allows us to approach the limits of physics in a tailored (as opposed to global) fashion. The achievable object revisit rate of AEye's iDAR system for points of interest (not just the exact point just visited) is microseconds to a few milliseconds, compared to conventional LiDAR systems that require many tens or hundreds of milliseconds between revisits and therefore suffer a high degree of object correspondence ambiguity. This gives iDAR the unprecedented ability to calculate quantities like object velocity in any direction faster than any other system.
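For reference, the speed-of-light constraint mentioned above can be quantified: a revisit to the same point cannot arrive before the previous shot's round trip completes. A quick sketch:

# Hard physical floor on revisit interval: the round-trip time of flight.

C = 299_792_458.0  # speed of light, m/s

def round_trip_time(range_m: float) -> float:
    """Time for a laser pulse to reach a target and return."""
    return 2.0 * range_m / C

for r in (50, 150, 300):
    print(f"{r:>4} m target: {round_trip_time(r)*1e6:.2f} us round trip")
# Even at 300 m the round trip is ~2 us, so revisit intervals of tens of
# microseconds sit comfortably above the hard physical limit.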

The ability to define this new metric, object revisit rate, decoupled from the traditional “frame rate,” is also important for the next metric we introduce. This second metric helps to separate the basic idea of “search” algorithms from “acquisition” algorithms: two algorithm types that should never be confused. Separating these two basic types of algorithms provides insight into the heart of iDAR, which is the principle of information quality as opposed to data quantity. Or, in other words: “more information, less data.”

Conventional Metric #2: Fixed (angular) resolution over a fixed Field-of-View

AEye’s Metric
Instantaneous (angular) resolution
The use of resolution as a conventional metric assumes that the Field-of-View will be scanned with a constant pattern. This makes perfect sense for less intelligent traditional sensors that have limited or no ability to adapt their collection capabilities. Additionally, the conventional metric assumes that the salient information within the scene is uniform in space and time, which we know is not true. Because of these assumptions, conventional LiDAR systems indiscriminately collect gigabytes of data from a vehicle's surroundings, sending those inputs to the CPU for decimation and interpretation (wherein an estimated 70 to 90 percent of this data is found to be useless or redundant, and thrown out). It's an incredibly inefficient process. Note this is doubly inefficient: the active collection of…
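To put rough numbers on this inefficiency, all figures below are illustrative assumptions of our own, apart from the 70 to 90 percent discard estimate quoted above:

# Rough sketch: if most uniformly collected points are discarded
# downstream, a foveated scan that spends shots only where salience is
# expected can deliver the same kept information with far less raw data.

UNIFORM_POINTS = 1_000_000   # assumed points per second, uniform scan
DISCARD_RATE = 0.8           # midpoint of the 70-90% estimate above

useful = UNIFORM_POINTS * (1 - DISCARD_RATE)
print(f"uniform scan: {UNIFORM_POINTS:,} collected, {useful:,.0f} kept")

# A foveated scan covering the same useful regions plus a sparse
# background raster (assumed at 10% density) collects far fewer points.
foveated = useful + 0.1 * UNIFORM_POINTS
print(f"foveated:     {foveated:,.0f} collected, {useful:,.0f} kept")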

AEye Adds VP of AI and Software to Executive Team

Abhijit Thatte to Lead Software Product Development as AEye Continues to Expand and Enhance iDAR Artificial Perception Platform

Pleasanton, CA – February 26, 2019 – Artificial perception pioneer AEye today announced the addition of Abhijit Thatte as VP of AI and Software to its executive team. Thatte is an accomplished leader with more than 20 years of software product development experience across industries from robotics to industrials. At AEye, he is charged with leveraging the company's core artificial intelligence capabilities to deliver better perception data, faster, to autonomous vehicle perception systems.

“Abhijit brings both the big picture vision of what AEye can do with our unique architecture and software stack, and the skillset to execute and bring software products to the market,” said AEye co-founder and CEO Luis Dussan. “This is a critical role as we look to extend the capabilities of our software products, enable seamless integration and interoperability, and deliver smarter data that drives actionable information faster to vehicle path-planning systems, for improved safety, efficiency, and performance.”

As the head of software product engineering at AEye, Thatte will be responsible for the development of the entire software product suite, including 3D perception, visualization, device drivers, embedded software, and the SDK. Prior to AEye, he led Artificial Intelligence at Aricent, Data Science at GE, and Software Engineering at Varian Medical Systems. Thatte is a member of the Forbes Technology Council and a sought-after speaker on artificial intelligence, deep learning, and machine learning. He holds a bachelor's degree in Electrical Engineering and a Master of Information and Data Science (Artificial Intelligence) from UC Berkeley.

“AEye is the only LiDAR company that has built its company with intelligent data as the guiding principle,” said Thatte. “This gives me a great framework to build from, as I look to enhance the existing software product suite, while delivering a powerful platform for perception innovation via a versatile and powerful toolset for our engineering development customers. It’s exciting to be at the helm of such transformational technology, and I’m thrilled to join AEye, a leader in revolutionizing transportation.”

AEye’s iDAR is a new form of intelligent data collection that fuses 1550 nanometer (nm), solid-state agile LiDAR with a low-light HD camera and embedded AI to intelligently capture data at the sensor level. The only software-definable intelligent agile LiDAR, AEye’s iDAR artificial perception system leads the industry in range and scan rate performance for automotive-grade LiDAR.

About AEye
AEye is an artificial perception pioneer and creator of iDAR™, a perception system that acts as the eyes and visual cortex of autonomous vehicles. Since demonstrating its solid-state LiDAR scanner in 2013, AEye has pioneered breakthroughs in intelligent sensing. The company is based in the San Francisco Bay Area, and backed by world-renowned investors including Kleiner Perkins Caufield & Byers, Taiwania Capital, Intel Capital and Airbus Ventures.

Media Contact:

AEye, Inc.
Jennifer Deitsch
[email protected]

925-400-4366

