AEye’s New AE110 iDAR System Integrated into HELLA Vehicle at IAA in Frankfurt

Companies Partner to Bring Best-in-Class ADAS Solutions to Global Automotive OEMs
Pleasanton, CA – September 10, 2019 – AEye, a world leader in solid state LiDAR-based artificial perception systems and the developer of iDAR™, today announced that its AE110 system will be an integrated component in the HELLA demonstration vehicle at IAA, the world’s largest motor show taking place this month in Frankfurt, Germany.
HELLA and AEye announced in January that the two companies had entered a joint development and manufacturing agreement to bring best-in-class ADAS solutions to global automotive OEMs. The physical co-location of AEye’s AE110 solid state LiDAR-based artificial perception system with HELLA’s camera software and radar is the first step to a much tighter integration resulting from this collaboration.
“We are very proud of the progress of our ongoing collaboration with HELLA and are excited to demonstrate our latest products to the IAA community,” said Luis Dussan, founder and CEO of AEye. “Select visitors to HELLA’s booth will be able to experience the AE110 in action and see what’s possible when two companies with a shared vision work together to deliver an exceptional product.”
“This is an important step in an already successful partnership with AEye,” said Frank Petznick, member of the Electronics executive board and head of the global product center Automated Driving at HELLA. “Together, we are developing game-changing products that can be tailored to meet the specific requirements of global automotive OEMs.”
iDAR Delivers Better Data, Faster
The AE110 features the industry’s only software-definable, solid-state agile LiDAR, with embedded artificial intelligence and industry-leading performance. The AE110 sensor system delivers four to eight times the information of conventional, first-generation LiDAR sensors, using a fraction of the time and energy.
At IAA, HELLA will be demonstrating some of the unique capabilities of the AE110, including instantaneous resolution in the form of defining multiple Regions of Interest within a scene; LiDAR-first perception; a fully software-definable sensor system with a comprehensive Software Development Kit (SDK) and full ROS support; and data management tools to import and integrate heterogeneous sensor data.
Design for Manufacturability
AEye’s strategy is to design for cost, supply chain efficiency and quality, and to partner with leading Tier 1s to manufacture automotive-grade products at scale. As such, HELLA will develop and manufacture sensing and perception solutions based on AEye’s iDAR technology, customized to individual OEM ADAS requirements.
AEye has recently published a white paper on LiDAR metrics, “Rethinking the Three “Rs” of LiDAR: Rate, Resolution and Range,” which considers second-generation “smart” systems and their ability to use intelligent scanning and redundant sensors to enhance safety and deliver unprecedented power savings.
For more information on the AE110, including a detailed spec sheet, visit AEye’s Product page.
Please visit the HELLA booth at IAA in the New Mobility World, Hall 5 Stand B06.
About AEye
AEye is an artificial perception pioneer and creator of iDAR™, a perception system that acts as the eyes and visual cortex of autonomous vehicles. Since its demonstration of its solid-state LiDAR scanner in 2013, AEye has pioneered breakthroughs in intelligent sensing. The company is based in the San Francisco Bay Area, and backed by world-renowned investors including Kleiner Perkins Caufield & Byers, Taiwania Capital, Hella Ventures, LG Electronics, Subaru-SBI, Aisin, Intel Capital, Airbus Ventures, and others.
Media Contact:
AEye, Inc.
Jennifer Deitsch
[email protected]
925-400-4366

Autonomous Vehicle Technology Publishes Aravind Ratnam’s New Metrics for LiDAR Evaluation

In the August 2019 issue of Autonomous Vehicle Technology magazine, AEye’s VP of Product Management, Aravind Ratnam, rethinks the three Rs of LiDAR — rate, resolution, and range — and proposes extending automotive LiDAR evaluation metrics to meet the capabilities of today’s technology.
Download the Article

AEye Redefines the Three “Rs” of LiDAR – Rate, Resolution, and Range

Company’s Intelligent Sensor Extends Metrics for Evaluating Automotive LiDAR System Performance

Pleasanton, CA – July 18, 2019 – In a new white paper released today, artificial perception pioneer AEye proposes newly extended metrics for evaluating advanced LiDAR system performance. Industry leaders recognize that the conventional metrics of frame rate, angular resolution, and detection range currently used for evaluating LiDAR performance no longer, on their own, adequately measure how effectively sensors solve the real-world use cases that underlie autonomous driving. In response, AEye has proposed three new extended metrics for LiDAR evaluation: intra-frame object revisit rate, instantaneous enhanced resolution, and object classification range. The AEye white paper describes these capabilities and why they matter within the context of real-world automotive applications.

“Current metrics used for evaluating LiDAR systems designed for autonomous driving often fail to adequately address how a system will perform in real-world conditions,” said AEye co-founder and CEO, Luis Dussan. “These extended metrics are more apropos to measuring advanced LiDAR performance, and are key to evaluating systems that will solve the most challenging use cases.”

First generation LiDAR sensors passively search a scene and detect objects using scan patterns that are fixed in both time and space, with no ability to enhance performance with a faster revisit or to apply extra resolution to high-interest areas like the road surface or intersections. A new class of advanced solid-state LiDAR sensors enables intelligent information capture that expands the capabilities of LiDAR, moving from passive “search” or detection of objects to active search and, in many cases, to the real-time acquisition of objects’ classification attributes, making perception and path planning software safer and more effective.

Extended Metric #1: From Frame Rate to Object Revisit Rate
It is universally accepted that a single interrogation point, or shot, does not deliver enough confidence to verify a hazard. Therefore, passive LiDAR systems need multiple interrogations/detects on the same object or position over multiple frames to validate an object. New, intelligent LiDAR systems, such as AEye’s iDAR™, can revisit an object within the same frame. These agile systems can accelerate the revisit rate by allowing for intelligent shot scheduling within a frame, with the ability to interrogate an object or position multiple times within a conventional frame.

In addition, existing LiDAR systems are limited by the physics of fixed laser pulse energy, fixed dwell time, and fixed scan patterns. Next-generation systems such as iDAR are software-definable by perception, path, and motion planning modules, so they can dynamically adjust their data collection approach to best fit their needs. Therefore, Object Revisit Rate, or the time between two shots at the same point or set of points, is a more important and relevant metric than Frame Rate alone.
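
To make the distinction concrete, the following is a minimal, hypothetical sketch of intra-frame shot scheduling. The scheduler structure and the numbers are illustrative assumptions only, not AEye’s implementation or API.

# Minimal, hypothetical sketch of intra-frame shot scheduling.
# A background raster is generated for the frame, and extra shots on a
# region of interest (ROI) are interleaved so the ROI is revisited several
# times before the frame completes. Toy model only; not AEye's iDAR API.

def schedule_frame(raster_points, roi_point, roi_revisits):
    """Return an ordered shot list: the background raster with ROI shots
    interleaved at roughly regular intervals."""
    shots = []
    interval = max(1, len(raster_points) // (roi_revisits + 1))
    remaining = roi_revisits
    for i, point in enumerate(raster_points):
        shots.append(("background", point))
        if remaining and (i + 1) % interval == 0:
            shots.append(("roi", roi_point))
            remaining -= 1
    return shots

# With a 10 Hz frame (100 ms), three extra ROI shots inside one frame cut
# the revisit interval on that point to roughly 25 ms instead of 100 ms.
frame = schedule_frame([(x, 0) for x in range(20)], roi_point=(5, 0), roi_revisits=3)
print([kind for kind, _ in frame].count("roi"))  # -> 3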

Extended Metric #2: From Angular Resolution to Instantaneous (Angular) Resolution
The assumption behind the use of resolution as a conventional LiDAR metric is that the entire field of view will be scanned with a constant pattern and uniform power. However, AEye’s iDAR technology, based on advanced robotic vision paradigms like those utilized in missile defense systems, was developed to break this assumption. Agile LiDAR systems enable a dynamic change in both temporal and spatial sampling density within a region of interest, creating instantaneous resolution. These regions of interest can be fixed at design time, triggered by specific conditions, or dynamically generated at run-time.
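
As a rough illustration of how concentrating a fixed shot budget creates instantaneous resolution, here is a toy calculation; the field of view, shot budget, and ROI figures are assumptions for illustration and are not AE110 specifications.

# Toy calculation of "instantaneous resolution": with a fixed shot budget,
# concentrating shots on a narrow region of interest yields far finer angular
# spacing there than a uniform scan. All figures are illustrative assumptions.

FOV_DEG = 60.0      # assumed horizontal field of view
SHOT_BUDGET = 600   # assumed shots available per horizontal scan line

def uniform_spacing_deg():
    """Angular spacing if shots are spread evenly across the field of view."""
    return FOV_DEG / SHOT_BUDGET

def roi_spacing_deg(roi_width_deg=5.0, roi_budget_fraction=0.5):
    """Angular spacing inside and outside the ROI when half the shot budget
    is concentrated on a 5-degree-wide region of interest."""
    roi_shots = SHOT_BUDGET * roi_budget_fraction
    background_shots = SHOT_BUDGET - roi_shots
    inside = roi_width_deg / roi_shots
    outside = (FOV_DEG - roi_width_deg) / background_shots
    return inside, outside

print(uniform_spacing_deg())   # 0.1 deg everywhere
print(roi_spacing_deg())       # ~0.017 deg in the ROI, ~0.18 deg elsewhere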

“Laser power is a valuable commodity. LiDAR systems need to be able to focus their defined laser power on objects that matter,” said Allan Steinhardt, Chief Scientist at AEye. “Therefore, it is beneficial to measure how much more resolution can be applied on demand to key objects in addition to merely measuring static angular resolution over a fixed pattern. If you are not intelligently scanning, you are either oversampling or undersampling the majority of a scene, wasting precious power with no gain in information value.”

Extended Metric #3: From Detection Range to Classification Range
The traditional metric of detection range may work for simple applications, but for autonomy the more critical performance measurement is classification range. While it has been generally assumed that LiDAR manufacturers need not know or care about how the domain controller classifies or how long it takes, this assumption can ultimately add latency and leave the vehicle vulnerable to dangerous situations. The more a sensor can provide classification attributes, the faster the perception system can confirm and classify. Measuring classification range, in addition to detection range, provides a better assessment of an automotive LiDAR’s capabilities, since it eliminates the unknowns in the perception stack, pinpointing salient information faster.
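
A simplified geometric sketch shows why classification range is the tighter constraint: if a classifier needs several returns on a target while detection needs only one, the range at which that point density is reached is much shorter. The target size, resolutions, and return counts below are assumptions for illustration only, not AEye figures.

# Simplified geometry for detection range vs. classification range.
# Assumption: a classifier needs several vertical returns on a target,
# while detection may need only one; small-angle approximation is used.

import math

def max_range_m(target_height_m, angular_res_deg, returns_needed):
    """Farthest range at which a target of the given height still spans
    returns_needed vertical samples at the given angular resolution."""
    return target_height_m / (returns_needed * math.radians(angular_res_deg))

# A 1.8 m pedestrian at 0.2 deg uniform vertical resolution:
print(round(max_range_m(1.8, 0.2, 1)))    # ~516 m for a single return (detection)
print(round(max_range_m(1.8, 0.2, 10)))   # ~52 m for ten returns (classification)

# Boosting an ROI to 0.05 deg pushes the same ten-return threshold out:
print(round(max_range_m(1.8, 0.05, 10)))  # ~206 m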

Unlike first generation LiDAR sensors, AEye’s iDAR is an integrated, responsive perception system that mimics the way the human visual cortex focuses on and evaluates potential driving hazards. Using a distributed architecture and edge processing, iDAR dynamically tracks objects of interest, while always critically assessing general surroundings. Its software-configurable hardware enables vehicle control system software to selectively customize data collection in real-time, while edge processing reduces control loop latency. By combining software-definability, artificial intelligence, and feedback loops with smart, agile sensors, iDAR is able to capture more intelligent information with less data, faster, for optimal performance and safety.

AEye’s iDAR system is uniquely architected to scale from modular ADAS solutions to fully integrated mobility/robot-taxi implementations. In order to deliver automotive-grade ADAS solutions at scale, AEye has partnered with top Tier 1 global automotive suppliers such as Hella, LG Electronics, and Aisin to design and manufacture best-in-class ADAS systems for global automakers. In addition, the company is engaged in pilots with more than a dozen non-disclosed OEMs and mobility companies.

“To create an effective sensing system, two things matter most – the quality of the data and the speed at which you can make it actionable,” said AEye Co-Founder and SVP of Engineering, Barry Behnken. “Performance metrics matter because they determine how designers and engineers approach problem-solving. These extended metrics help the industry focus on what matters most.”

About AEye
AEye is an artificial perception pioneer and creator of iDAR™, a perception system that acts as the eyes and visual cortex of autonomous vehicles. Since its demonstration of its solid-state LiDAR scanner in 2013, AEye has pioneered breakthroughs in intelligent sensing. The company is based in the San Francisco Bay Area, and backed by world-renowned investors including Kleiner Perkins Caufield & Byers, Taiwania Capital, Hella Ventures, LG Electronics, Subaru-SBI, Aisin, Intel Capital, Airbus Ventures, and others.

Media Contact:

AEye, Inc.
Jennifer Deitsch
[email protected]

925-400-4366


iDAR Sees Only What Matters

AEye’s iDAR™ (Intelligent Detection and Ranging) is the world’s most intelligent artificial perception system for autonomous vehicles.

Only iDAR brings intelligence to the sensor layer. By fusing 1550 nanometer, solid-state agile LiDAR with a low-light HD camera, CV algorithms, and embedded AI – at the sensor – iDAR singles out the most important aspects of a given scene. With iDAR, the vehicle’s perception system can target only the salient 5% of data it needs to safely navigate.

Watch how.

Fixed grid scan pattern with High Density Elevation coloring where blue indicates the highest elevation and red indicates the lowest.
Moving grid scan pattern with iDAR edge detection enabled, also with High Density Elevation coloring.
Moving grid scan pattern with iDAR edge detection enabled. The scene is colored by point type, where the “passive” points (part of the fixed background scan) are orange. The iDAR edge detection points – which pass only the scene’s relevant information to the perception system – are green. The computer vision edge detection algorithm analyzes the camera stream, which cues the LiDAR to optimally capture only the important aspects of the scene. Ensuring that iDAR never misses a thing!
Moving grid scan pattern with edges only.
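
The camera-cues-LiDAR loop described above can be sketched roughly as follows: an edge detector runs on the camera stream, and coarse grid cells containing edges are flagged as candidate LiDAR regions of interest for denser sampling. This uses OpenCV’s Canny detector; the grid size, threshold, and ROI format are assumptions for illustration, not iDAR’s actual interface.

# Rough sketch of camera edge detection cueing LiDAR regions of interest.
# Hypothetical pipeline; not iDAR's actual software.

import cv2
import numpy as np

def edge_rois_from_camera(frame_bgr, grid=8, min_edge_pixels=20):
    """Return (row, col) grid cells that contain enough edge pixels to be
    worth extra LiDAR shots."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    h, w = edges.shape
    rois = []
    for r in range(grid):
        for c in range(grid):
            cell = edges[r * h // grid:(r + 1) * h // grid,
                         c * w // grid:(c + 1) * w // grid]
            if cv2.countNonZero(cell) > min_edge_pixels:
                rois.append((r, c))
    return rois

# Synthetic example: a bright square produces edges, so only the grid cells
# around it would receive denser LiDAR sampling.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[200:280, 300:380] = 255
print(edge_rois_from_camera(frame))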

iDAR enables self-driving cars to see only what matters.


Rethinking the Three “Rs” of LiDAR: Rate, Resolution and Range

Extending Conventional LiDAR Metrics to Better Evaluate Advanced Sensor Systems
Executive Summary
As the autonomous vehicle market matures, sensor and perception engineers have become increasingly sophisticated in how they evaluate system efficiency, reliability and performance. Many industry leaders have recognized that the conventional metrics for LiDAR data collection (such as frame rate, full frame resolution, and detection range) currently used for evaluating performance no longer adequately measure how effectively sensors solve the real-world use cases that underlie autonomous driving.

First generation LiDAR sensors passively search a scene and detect objects using background patterns that are fixed in both time (no ability to enhance with a faster revisit) and in space (no ability to apply extra resolution to high interest areas like the road surface or intersections).

A new class of advanced solid-state LiDAR sensors enables intelligent information capture that expands their capabilities, moving from passive “search” or detection of objects to active search and, in many cases, to the real-time acquisition of objects’ classification attributes.

Because early generation LiDARs used fixed raster scans, the industry was forced to adopt overly simplistic performance metrics that did not capture all the nuances of the sensor requirements needed to enable AVs. In response, AEye, the developer of iDAR technology (which includes agile LiDAR), is proposing three new corresponding metrics for extending LiDAR evaluation. Specifically: extending the metric of frame rate to include intra-frame object revisit rate; expanding resolution to capture instantaneous enhanced resolution; and enhancing detection range to reflect the more critically important object classification range.

We are proposing that these new metrics be used in conjunction with existing measurements of basic camera and passive LiDAR performance, since they measure a sensor’s ability to intelligently enhance perception and thus create a more complete evaluation of a sensor system’s efficacy in improving safety and performance in real-world scenarios.

Download “Rethinking the Three “Rs” of LiDAR: Rate, Resolution and Range” [pdf]

Introduction
We have often found it useful to leverage proven frameworks from advanced robotic vision research and apply them to LiDAR-specific product architecture. One that has proven to be both versatile and instructive has been work around object identification that connects search, acquisition (or classification) and action.

Search is the ability to detect any and all objects without the risk of missing anything.
Acquire is defined as the ability to take a search detection and enhance the understanding of an object’s attributes to accelerate classification and determine possible intent (this could be by calculating velocity or by classifying object type).
Act defines an appropriate sensor response as trained or as recommended by the vehicle’s perception system or domain controller. Responses can largely fall into four categories:

Continue scan for new objects (no enhanced information needed)
Continue scan but also interrogate the object further and gather more information on an acquired object’s attributes to enable classification
Continue scan but also continue to track an object classified as currently non-threatening
Continue scan but the control system is going to take evasive action.
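
As a minimal sketch, the four responses above can be expressed as a simple selector over an object’s state (detected / classified / threatening). This is illustrative structure under those assumptions, not AEye’s perception software.

# Minimal sketch of the search / acquire / act responses listed above.
# Hypothetical structure; the state flags are simplifying assumptions.

from enum import Enum, auto

class Response(Enum):
    CONTINUE_SEARCH = auto()      # no enhanced information needed
    INTERROGATE_FURTHER = auto()  # gather attributes to enable classification
    TRACK = auto()                # keep watching a non-threatening object
    EVASIVE_ACTION = auto()       # hand off to the control system

def choose_response(detected, classified, threatening):
    """Map an object's current state to one of the four responses."""
    if not detected:
        return Response.CONTINUE_SEARCH
    if not classified:
        return Response.INTERROGATE_FURTHER
    if threatening:
        return Response.EVASIVE_ACTION
    return Response.TRACK

print(choose_response(detected=True, classified=False, threatening=False))
# Response.INTERROGATE_FURTHER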

Within this framework, performance specifications and system effectiveness need to be assessed with an “eye” firmly on the ultimate objective: completely safe operation of the vehicle. However, as most LiDAR systems today are passive, they are only capable of basic search. Therefore, conventional metrics used for evaluating these systems’ performance relate to basic object detection capabilities – frame rate, resolution, and detection range. If safety is the ultimate goal, then search needs to be more intelligent and acquisition (and classification) done more quickly and accurately so that the sensor or the vehicle can determine how to act immediately.

Rethinking the Metrics
Makers of automotive LiDAR systems are frequently asked about their frame rate, and whether or not their technology has the ability to detect objects with 10 percent reflectivity at some range (often 230 meters). We believe these benchmarks are required, but insufficient as they don’t capture critical details such as the size of the target, speed at which it needs to be detected and recognized, or the cost of collecting that information. We believe it would be productive for the industry to adopt a more holistic approach when it comes to assessing LiDAR systems for automotive use. Additionally, we make the argument that we must look at metrics as they relate to a perception system in general – rather than as an individual point sensor and ask ourselves: “What information would enable a perception system to make better, faster decisions?” Below, we have outlined the three conventional LiDAR metrics and a recommendation on how to extend these metrics.

Conventional Metric #1: Frame rate of 10Hz – 20Hz

New Metric: Object Revisit Rate
(The time between two shots at the same point or set of points)
Defining single point detection range alone is insufficient because a single interrogation point (shot) rarely delivers sufficient confidence – it is only suggestive. Therefore, passive LiDAR systems need multiple interrogations/detects at the same point, or multiple interrogations/detects on the same object, to validate an object or scene. The time it takes to detect an object depends on many variables, such as distance, interrogation pattern and resolution, reflectivity, or the shape of the object being interrogated, and can “traditionally” take several full frames to achieve.

A key factor that is missing from the conventional metric is a finer definition of time. Thus, we propose that Object Revisit Rate becomes a new, more refined metric for automotive LiDAR because an agile LiDAR, such as AEye’s iDAR, can revisit an object within the same frame. The time between the first measurement of an object and the second is critical, as shorter object revisit times can help keep processing times low for advanced algorithms that need to correlate between multiple moving objects in a scene. The best algorithms used to associate/correlate multiple moving objects can be confused when many objects are in the scene and time elapsed between samples is high. This lengthy combined processing time is a primary issue for the industry.

The agile AEye iDAR platform accelerates revisit rate by allowing for intelligent shot scheduling within a frame. Not only can iDAR interrogate a position or object multiple times within a conventional frame, it can maintain a background search pattern while overlaying additional intelligent shots within the same frame. For example, an iDAR sensor can schedule two repeated shots on a point of interest in quick succession (30ms). These multiple interrogations can then be contextually integrated with the needs of the user (either human or computer) to increase confidence, reduce latency, or extend ranging performance.

These interrogations can also be data dependent. For example, an object can be revisited if a (low confidence) detection occurs and it is desirable to quickly validate or reject it with a secondary measurement, as seen in Figure 1. A typical full frame rate for conventional sensors is approximately 10Hz, or 100 msec; for such conventional sensors, this is also equivalent to the “object revisit rate.” With AEye’s flexible iDAR technology, the object revisit rate is decoupled from the frame rate and can be as low as 10s of microseconds between revisits to key points/objects, as the user/host requires – easily 100x to 1000x faster than alternative fixed-scan sensors.
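
A back-of-the-envelope comparison using the figures above; the 100-microsecond agile revisit interval is an illustrative value at the slower end of the “10s of microseconds” range quoted.

# Back-of-the-envelope revisit-interval comparison (illustrative numbers).

conventional_revisit_s = 1.0 / 10   # 10 Hz frame rate -> 100 ms between looks
agile_revisit_s = 100e-6            # ~100 microseconds between intra-frame revisits

speedup = conventional_revisit_s / agile_revisit_s
print(f"{conventional_revisit_s * 1e3:.0f} ms vs "
      f"{agile_revisit_s * 1e6:.0f} us -> {speedup:.0f}x faster")
# 100 ms vs 100 us -> 1000x faster, consistent with the 100x to 1000x range above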

Figure 1. Advanced Agile LiDAR Sensors enable intelligent scan patterns such as the “Foveation in Time” Intra-Frame Revisit Interval and random scan pattern of iDAR (B) compared to Revisit Interval on a typical fixed pattern LiDAR (A)

What this means is that a perception engineering team using dynamic object revisit capabilities can create a perception system that is at least an order of magnitude faster than what can be delivered by conventional LiDAR without disrupting the background scan patterns. We believe this capability is invaluable in delivering level 4/5 autonomy as the vehicle will need to handle significantly complex corner cases, such as identifying a pedestrian next to oncoming headlights or a semi-trailer laterally crossing the path of the vehicle.

Within the “Search, Acquire, and Act” framework, an accelerated object revisit rate, therefore, allows for faster acquisition because it can identify and automatically revisit an object, painting a more complete picture of it within the context of the scene. Ultimately, this allows for collection of object classification attributes in the sensor, as well as efficient and effective interrogation and tracking of a potential threat.

Real-World Applications
Use Case: Head-On Detection
When you’re driving, the world can change dramatically in a tenth of a second. In fact, two cars traveling towards each other at 100 kph are 5.5 meters closer to each other after 0.1 seconds. By having an accelerated revisit rate, we increase the likelihood of hitting the same target with a subsequent shot due to the decreased likelihood that the target has moved significantly in the time between shots. This helps the user solve the “Correspondence Problem” (determining which parts of one “snapshot” of a dynamic scene correspond to which parts of another snapshot of the same scene), while simultaneously enabling the user to quickly build statistical measures of confidence and generate aggregate information that downstream processors might ..
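
For reference, the closing-speed arithmetic behind the figure above works out as follows (two vehicles, 100 kph each, over a tenth of a second).

# Closing-speed check: two vehicles each at 100 kph close at 200 kph,
# so the gap shrinks by roughly five and a half metres in 0.1 s.

speed_each_kph = 100
closing_speed_mps = 2 * speed_each_kph * 1000 / 3600   # ~55.6 m/s
gap_closed_m = closing_speed_mps * 0.1                  # distance closed in 0.1 s
print(round(gap_closed_m, 2))   # ~5.56 m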

AEye Team Profile: Vivek Thotla

On June 26th, AEye Staff Engineer, Vivek Thotla, will be speaking on a panel called “Should We Take CV To The Edge?” at IoT Forum on Computer Vision @ Sensors Expo.
Vivek is a staff engineer at AEye, where he leads product verification and validation, and is responsible for LiDAR simulation and data strategy in producing automotive grade products. Previously, he was a Component Owner / Functional Delivery Owner for point cloud algorithms at Continental, where he was responsible for planning, requirements, design and development of embedded platform-based algorithms for a Hi-Res 3D Flash LiDAR, in addition to enforcing ADAS process stages to meet ASPICE levels and functional safety. He has also held engineering roles at Tribis, AmpliSine Labs, Missouri S&T and Enigma Portal. Vivek holds an MBA in Information Technology Project Management and a PhD and Masters in Electrical, Electronics and Communications Engineering.

We sat down with Vivek to learn more about the advantages of integrating computer vision at the sensor, building automotive grade LiDAR products, and why he decided to move to the Bay Area.

Q: How much of an autonomous vehicle’s computer vision should be done at the sensor, as opposed to a central processor?
The amount of data produced today by a perception system is enormous. And incorporating all the data from the different kinds of sensors used (like radar, camera, and LiDAR) makes it very difficult and expensive to process and store. In a typical perception system, roughly 80% of the data produced by the sensors is thrown out.

However, intelligent sensors – like what we develop at AEye – are software definable. Meaning, you can adjust their settings to get high resolution data from an object and sparse data in the background, cutting down the overall amount of data processed by more than 80%. This makes computer vision algorithms at the central processor faster and more efficient because once you preprocess data, latency becomes less of an issue. Currently, AV companies are spending a tremendous amount of money storing useless data. Preprocessing saves both time and money.

Q: What is the largest challenge in producing automotive grade LiDAR products?
Industry wide, the greatest challenge is maintaining the quality, reliability, and consistency needed in all the components and software that go into a LiDAR sensor, across 100,000 samples or more and over the sensor’s lifetime. Another major challenge for bringing LiDAR products to the automotive market is designing the sensor to fit in different regions of the car. There are a lot of constraints based on where the sensor is placed on the vehicle and certain issues that arise from each placement. For example, a sensor placed behind a windshield might need a completely different design than a sensor that’s placed in the front bumper.

There are many interesting LiDAR architectures out there that work really well at smaller sample sizes and in the lab. But the moment the product needs to scale and deal with all the quality and environmental requirements of being an automotive grade product, they fail. AEye is mitigating these challenges by partnering directly with Tier 1s who know the process of making large-scale, automotive grade products. In my own experience, I’ve found that once a Tier 1 partners with you, they are extremely supportive, because they believe in you and in your ability to deliver.

In addition to our partners who help us push the sensor to automotive grade, we have a great functional safety team here at AEye. I came to AEye from a Tier 1, so I know what goes into developing an automotive grade sensor, and the AEye team is made up of people from all over the automotive industry that have great, diverse insight into how to bring a product to market.

Q: You moved to the Bay Area from Santa Barbara. What was it about Silicon Valley that drew you here?
It has always been my dream to come to Silicon Valley – you hear about it so much as the epicenter of technology and innovation. And it’s true: Silicon Valley is at the heart of the autonomous driving industry. All the innovative and novel work happening today in the LiDAR industry is happening here and I did not want to miss my chance to help develop the tools for true autonomy.

Connect with AEye at Sensors Expo! Learn more here.


A Decade of Autonomous Vehicle Investments

Investment in new technologies to enable autonomous vehicles has taken off in the last decade.
Need proof?
According to our calculations, total cumulative funding for AV technology had reached $37 billion in May 2019, and is poised to set a new record by the end of the year!

So, what does this say about the future of our industry?
“Clearly, the train has left the station. When you combine Silicon Valley’s investment and innovation with Detroit’s productization discipline, you’re going to see greater transformation in the automotive industry in the next decade than any time in its history.”

– Jim Robnett, VP of Automotive Business Development, AEye

Source: Crunchbase


AEye Advisory Board Profile: Adrian Kaehler

We sat down with each of our Advisory Board Members to ask them why they’re excited about working with AEye…
Dr. Adrian Kaehler is an independent scientist, adviser, and start-up founder. He is the founder and CEO of Giant.AI. His current research includes topics in deep learning, machine learning more generally, statistical modeling, and computer vision. Adrian received his Ph.D. in Theoretical Physics from Columbia University in 1998. Adrian has since held positions at Intel Corporation and the Stanford University AI Lab, is an Applied Invention Fellow, and was a member of the winning Stanley race team in the DARPA Grand Challenge. Dr. Kaehler was Vice President of Special Projects at Magic Leap, Inc., a startup company that raised over $1.4B in venture funding from 2014 to 2016. He is a co-founder of the Silicon Valley Deep Learning Group and has a wide variety of published papers and patents in physics, electrical engineering, computer science, and robotics.

Q: What in your past experience ultimately drew you to the autonomous vehicle arena?
In 2004, the first DARPA Grand Challenge took place. The goal for the challenge was to have an autonomous vehicle (AV) successfully navigate a 142-mile course through the desert of Nevada. By doing so, DARPA aimed to accelerate the development of autonomous vehicle technology for US Military use. Although no teams completed the course and no winner was announced, it set the stage for a thrilling second round the following year.

I was part of the microprocessor design team at Intel in the early 2000s. At that time, computer vision and machine learning were merging (and we were coming to understand that problems facing computer vision could be solved with machine learning).

For the second Grand Challenge, Stanford University teamed up with Intel and VW to build the winning car: Stanley. I got involved in the project and, ultimately, in AVs, because I felt that the Grand Challenge was cool and interesting, and a clever and effective way to develop AV technology. Not only was it a great use of time and resources, but it was a lot of fun working out in the desert with such a passionate and skilled team. It was also quite exciting to become a minor celebrity and to receive a $2 million prize!

Q: Why AEye?
I have a rich working history with Luis. When he reached out to me to help empower the computer vision side of the iDAR system, I did so immediately because I believed in him, the core team he was assembling, and the technology.

AEye is solving important problems in the AV world by being thoughtful, economical and, what I call, “delightfully tricky.” Developing impactful solutions is tricky and relies on critical insight, which ultimately opens the door for products to be much cheaper and smaller than the competition. Luis figured out that there is a smarter way to approach the problem at hand. iDAR has the characteristics of a winning perception solution. You win by being smart. AEye and its iDAR system are most definitely smart.

Q: Where do you see ADAS solutions, autonomous vehicles, and/or artificial perception, heading within the next few years? The next decade? Beyond? How do you see AEye playing a pivotal role in this vision?
I am a firm believer in contextual autonomy, like a car maneuvering itself to find a parking space. This is because, in that sort of environment, the task is simple and humans can easily intervene, if necessary. Therefore, I envision many circumstances in which full autonomy can be achieved, but they will need to be well-defined by automakers and the government.

Lawmakers and regulators must take a more active role in creating context for autonomy, drafting and passing legislation to create an infrastructure that is friendly to autonomous driving. They will need to upgrade HOV lanes and transform them into autonomous HOV lanes. Advancements such as adding transponders or visually distinct targets in tunnels will guarantee that ADAS systems work in these designated lanes.

In the next 5 years, we will see more ADAS systems being implemented into cars, not just luxury models. These will be seen as standard features an average consumer will have in their car, like a rear-facing camera is now. And like the rear-facing camera, these ADAS solutions will have a positive impact on safety. However, cost is going to be very important for consumers. In the world of feature-based ADAS systems, cost needs to be much lower. In the foreseeable future, automakers will focus on producing features that consumers love, but are cheaper for the mainstream market.

This predicament plays into AEye’s strengths because AEye produces solutions that are not only smarter than what’s currently on the market, but also less expensive, with a clear roadmap in place to bring price down even further. As consumers become more excited by ADAS features, there will be a growing demand for them to be widely available at reasonable prices. Consequently, AEye’s iDAR system fits very well into the world that is emerging. The real, serious problems standing between us and mainstream, elegant ADAS features can be solved by iDAR. Now’s a great time to partner with AEye.


AEye Team Profile: Jim Robnett

On June 5th, AEye’s VP of Automotive Business Development, Jim Robnett, will give a Keynote Address entitled “Brains vs. Brawn: The Quest for Artificial Perception” at TU-Automotive Detroit.
A 25-year automotive veteran, Jim Robnett is charged with building AEye’s partnerships with leading OEMs and Tier 1s. Robnett is a proven executive and technology leader with a strong track record of driving product innovation, development, and revenue across automotive and industrial markets. Prior to joining AEye, he was VP of Strategic Partnerships for Luminar. He has also held executive leadership positions at NNG, Fiat Chrysler Automobiles, HERE, SiriusXM, and Denso. Robnett earned his Bachelor’s Degree in Mechanical Engineering at the University of Michigan, and his MBA from Michigan State University.

We sat down with Jim to learn more about what sparked his interest in autonomous vehicles, the burgeoning relationship between Detroit and Silicon Valley, and his all time favorite musician.

Q: You have extensive experience in the automotive industry. What was it about autonomous vehicles (AVs) that shifted your interest and drew you into this space?
For the last 10-15 years, I worked in the infotainment sector of the automotive industry. This included anything from maps and navigation, telematics, connected services, traffic, etc. Going back 5-10 years ago, there was a lot of innovation in that space. The innovation continues today, but at a much slower rate, and that’s because now, the main source of infotainment in the car comes from the cell phone. So, the main source of infotainment innovation in the automotive industry is focused on incorporating the cell phone into the vehicle. This will change with better embedded connectivity in the vehicle, but this is the current trend.

At the same time that the infotainment sector was slowing down, ADAS, advanced safety and autonomous vehicle innovation was picking up. Having grown up in Detroit, witnessing my dad’s 30 year career at GM, I wanted to continue to be a part of the incredible legacy of innovation in the automotive industry. Advanced ADAS solutions and, eventually, fully autonomous vehicles, will be the most important transportation technology innovation event in my lifetime – and I knew that I needed to be a part of it.

Q: How have you seen the Detroit automotive culture interact with Silicon Valley technology culture? Do you view it as more of a collision or a co-mingling?
It’s interesting to see the mix of cultures, because I spend half of my time in each. There is a definite merging and the two co-exist, but there is a sense of friction, still.

I consider myself, and my role, as a bridge between the two cultures, especially since I grew up in the automotive industry, but feel very comfortable in the emerging technology space.

AEye is the perfect example of a “disruptor” to the industry. Making cars is very difficult. To be successful, we have to combine the best aspects of the history and experience of the Detroit culture with the innovation and velocity of change of Silicon Valley. The companies that combine these two cultures the best will not only benefit from both worlds, but will also emerge as the industry leaders.

Q: You sing and play guitar in a band – what is your favorite music genre to play? To listen to? Who is your favorite musician?
I didn’t start playing guitar until after college, but once I started, I almost immediately formed a band with some buddies. That band (in different versions) has been going strong for about 25 years. We write a lot of our own songs – I’d describe it as kind of punk/rock and roll. In terms of my own taste in music, I listen to almost everything, but I especially love the Rolling Stones. And that’s because they’ve survived after so many decades and are still rocking and innovating. Their longevity and creativity is what interests me the most. As to my favorite musician? Keith Richards is my hero.

Connect with AEye at TU-Automotive Detroit! Learn more here.


SAE’s Autonomous Vehicle Engineering on New LiDAR Performance Metrics

In the May 2019 issue of SAE’s Autonomous Vehicle Engineering (AVE) magazine, AEye’s technical product manager, Indu Vijayan, argues that conventional metrics used for evaluating the unique capabilities of more advanced lidar systems are inadequate, failing to address real-world driving problems facing autonomous vehicles. In ‘New Performance Metrics for Lidar,’ Indu proposes that new measurements, such as object revisit rate and instantaneous (angular) resolution, are more advantageous for autonomous vehicle development.

Downloadable PDF