Amazon is taking a slice of Europe’s food delivery market after the U.S. e-commerce giant led a $575 million investment in Deliveroo. First reported by Sky yesterday, the Series G round was confirmed in an early UK morning announcement from Deliveroo, which said that existing backers including T. Rowe Price, Fidelity Management and Research Company,… Continue reading Amazon leads $575M investment in Deliveroo
Tag: Mobility
Deconstructing Two Conventional LiDAR Metrics, Part 2
Executive Summary
Conventional metrics for evaluating LiDAR systems designed for autonomous driving are problematic because they often fail to adequately or explicitly address real-world scenarios. Therefore, AEye, the developer of iDAR™ technology, proposes a number of new metrics to better assess the safety and performance of advanced automotive LiDAR sensors.
In Part 1 of this series, two metrics (frame rate and fixed [angular] resolution over a fixed Field-of-View) were discussed in relation to the more meaningful metrics of object revisit rate and instantaneous (angular) resolution. Now in Part 2, we’ll explore the metrics of detection range and velocity, and propose two new corresponding metrics for consideration: object classification range and time to true velocity.
Download “Deconstructing Two Conventional LiDAR Metrics, Part 2” [pdf]
Introduction
How is the effectiveness of an autonomous vehicle’s perception system measured? Performance metrics matter because they ultimately determine how designers and engineers approach problem-solving. Defining problems accurately makes them easier to solve, saving time, money, and resources.
When it comes to measuring how well automotive LiDAR systems perceive the space around them, manufacturers commonly agree that it’s valuable to determine their detection range. To optimize safety, the on-board computer system should detect obstacles as far ahead as possible. The speed with which they can do so theoretically determines whether control systems can plan and perform timely, evasive maneuvers. However, AEye believes that detection range is not the most important measurement in this scenario. Ultimately, it’s the control system’s ability to classify an object (here we refer to low level classification [e.g., blob plus dimensionality]) that enables it to decide on a basic course of action.
What matters most, then, is how quickly an object can be identified and classified, and how quickly a decision can be made about it so an appropriate response can be calculated. In other words, it is not enough to quantify the distance at which a potential object can be detected at the sensor. One must also quantify the latency from the actual event to the sensor detection, plus the latency from the sensor detection to the CPU decision.
Similarly, the conventional metric of velocity has limitations. Today, some lab-prototype frequency modulated continuous wave (FMCW) LiDAR systems can determine the radial velocity of nearby objects by interrogating them continuously for a dwell time long enough to yield a discernible measurement. However, this has two disadvantages: 1) the beam must remain locked in a fixed position for a certain period of time, and 2) only velocity in the radial direction can be discerned. Lateral velocity must still be calculated with the standard update-in-position method. Exploring these disadvantages will illustrate why, to achieve the highest degree of safety, time to true velocity is a much more useful metric. In other words, how long does it take a system to determine the velocity, in any direction, of a newly identified or appearing object?
Both object classification range and time to true velocity are more relevant metrics for assessing what a LiDAR system can and should achieve in tomorrow’s autonomous vehicles. In this white paper, we examine how these new metrics better measure and define the problems solved by more advanced LiDAR systems, such as AEye’s iDAR (Intelligent Detection and Ranging).
Conventional Metric #1: Detection Range
A single point detection — where the LiDAR registers one detect on a new object or person entering the scene — is indistinguishable from noise. Therefore, we will use a common industry definition for detection which involves persistence in adjacent shots per frame and/or across frames. For example, we might require 5 detects on an object per frame (5 points at the same range) and/or from frame-to-frame (1 single related point in 5 consecutive frames) to declare that a detection is a valid object.
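The N-of-M persistence rule described above can be sketched in code. This is a hypothetical illustration assuming a simple range tolerance for "points at the same range"; the function names, threshold values, and clustering logic are illustrative assumptions, not AEye's implementation.

```python
# Hypothetical sketch of detection persistence: declare a valid object only
# after N detects at (approximately) the same range within one frame, or
# after N consecutive frames each re-detect the candidate.

def valid_by_intra_frame(ranges, n=5, range_tol=0.5):
    """ranges: list of range measurements (meters) for candidate returns in one frame.
    Returns True if at least n detects agree within range_tol meters."""
    ranges = sorted(ranges)
    best = 0
    for i, r in enumerate(ranges):
        # size of the cluster of detects within range_tol of this one
        count = sum(1 for d in ranges[i:] if d - r <= range_tol)
        best = max(best, count)
    return best >= n

def valid_by_inter_frame(frame_hits, n=5):
    """frame_hits: booleans, True if the candidate was re-detected in that frame.
    Returns True once n consecutive frames each contain a related detect."""
    streak = 0
    for hit in frame_hits:
        streak = streak + 1 if hit else 0
        if streak >= n:
            return True
    return False
```

Either test suppresses single-point noise: an isolated return never satisfies the intra-frame cluster or the inter-frame streak, which is exactly why a fast-approaching "head-on" object pays a multi-frame latency penalty before it counts as detected.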
It is widely held that a detection range of 200+ meters is required for vehicles traveling at highway speeds to react effectively to changing road conditions and surroundings. Conventional LiDAR sensors scan and collect data about the occupancy grid in a uniform pattern, without discretion. This forms part of a constant stream of gigabytes of data sent to the vehicle’s on-board controller in order to detect objects, a design that puts a massive strain on resources. Anywhere from 70 to 90+ percent of the data is redundant or useless, and is therefore discarded.
Under these conditions, even a system that operates at a 10-30 Hz frame rate will struggle to deliver low latency while supporting high frame rates and high performance. And if the latency for newly appearing objects is even 0.25 seconds, the frame rate hardly matters: by the time the data reaches the central compute platform, in some circumstances it is practically worthless. On the road, driving conditions can change dramatically in a tenth of a second. After 0.1 seconds, two cars closing at a mutual speed of 200 km/hour are roughly 18 feet closer. While predictive algorithms work well to counter this latency in structured, well-behaved environments, there are cases where they do not. One such case is a small object approaching fast and head-on. Such an object first registers as a single LiDAR point, and N consecutive single-point detects are required before it can be classified as an object. In this example, it is easy to see that detection range and object classification range are two vastly different things.
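The closing-distance arithmetic above can be checked directly: at a mutual closing speed of 200 km/h, 0.1 seconds of latency corresponds to roughly 5.6 meters, or about 18 feet.

```python
# Back-of-the-envelope check of the closing-distance claim above.
closing_speed_ms = 200 * 1000 / 3600   # 200 km/h ≈ 55.6 m/s
distance_m = closing_speed_ms * 0.1    # ground covered in 0.1 s ≈ 5.56 m
distance_ft = distance_m / 0.3048      # ≈ 18.2 ft
```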
With a variety of factors influencing the domain controller’s processing speed, measuring a system’s efficacy by its detection range is problematic. Without knowledge of latency and other pertinent factors, unwarranted trust is placed in the controller’s ability to manage competing priorities. While it is generally assumed that LiDAR manufacturers need not know or care how the domain controller classifies objects (or how long classification takes), we propose that this assumption ultimately leaves designers vulnerable to very dangerous situations.
AEye’s Metric
Object Classification Range
Currently, classification takes place somewhere in the domain controller. It is at this point that objects are labeled as such and, eventually, more clearly identified. At some level of identification, this data is used to predict known behavior patterns or trajectories. Because this step is so important, AEye argues that a better measurement for assessing an automotive LiDAR’s capability is its object classification range. This metric reduces the unknowns, such as the latency associated with noise suppression (e.g., N of M detections), early in the perception stack, pinpointing the salient information about whether a LiDAR system is capable of operating at optimal safety.
Because automotive LiDAR is a relatively new field, there is not yet an agreed definition of how much data is necessary for classification. AEye therefore proposes adopting the perception standards used in video classification as a valuable provisional definition. Under video standards, classification begins with a 3×3 pixel grid on an object. By this definition, an automotive LiDAR system might be assessed by how fast it can generate a high-quality, high-resolution 3×3 point cloud that enables the domain controller to comprehend objects and people in a scene.
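Under the 3×3 criterion, a maximum classification range follows directly from the sensor's angular resolution and the object's size: three samples must span the object in each axis. The sketch below uses a small-angle approximation, and the parameter values (a 0.5 m object, 0.1° resolution) are illustrative assumptions, not figures from any particular sensor.

```python
import math

# Hypothetical sketch: the farthest range at which a sensor with a uniform
# angular resolution can still land a 3x3 grid of points on an object.
def max_classification_range(object_size_m, resolution_deg):
    """Range at which 3 angular samples still span the object in each axis
    (small-angle approximation: object subtends object_size / range radians)."""
    res_rad = math.radians(resolution_deg)
    return object_size_m / (3 * res_rad)

# e.g., a 0.5 m-wide object at 0.1 deg uniform resolution: ~95 m
r = max_classification_range(0.5, 0.1)
```

The point of the sketch is the scaling: halving the angular sample spacing on the object doubles the range at which the 3×3 grid (and hence low-level classification) is possible, which is what an agile, higher-instantaneous-resolution scan buys.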
Generating a 3×3 point cloud is a struggle for conventional LiDAR systems. While many tout the ability to produce point clouds of half a million or more points per second, these images lack uniformity. Point clouds created by most LiDAR systems exhibit high-density horizontal lines coupled with very sparse vertical spacing, or simply low overall density. Regardless, these fixed angular sampling patterns can be difficult for classification routines because the domain controller must grapple with half a million points per second that are, in many cases, out of balance with the resolution required for critical sampling of the object in question. Such an askew “mish-mash” of points forces additional interpretation, putting extra strain on CPU resources.
A much more efficient approach is to gather about 10 percent of this data, focusing on special Regions of Interest (e.g., moving vehicles and pedestrians) while keeping tabs on the background scene (trees, parked cars, buildings, etc.). Collecting only the salient data in the scene significantly speeds up classification. AEye’s agile iDAR is a LiDAR system integrated with AI that can intelligently accelerate shots only in a Region of Interest (ROI). This comes from its ability to selectively revisit points twice within tens of microseconds, an improvement of three orders of magnitude over conventional 64-line systems that can only hit an object once per frame (every 100 milliseconds). Future white papers will discuss various methods of using iDAR to ensure that important background information is not discounted, by correctly employing the concepts of Search, Acquisition, and Tracking, much as human perception does.
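The idea of spending most of a fixed per-frame shot budget on ROIs while still sampling the background can be sketched as a simple scheduler. The 90/10 split, the cell names, and the scheduler shape are assumptions for illustration only, not AEye's actual shot-scheduling logic.

```python
# Illustrative shot scheduler: revisit each ROI many times per frame,
# while the background cells each get a light "keeping tabs" sampling.
def schedule_shots(rois, background_cells, budget=1000, roi_fraction=0.9):
    """Return a list of (cell, revisit_count) pairs for one frame."""
    roi_budget = int(budget * roi_fraction)
    bg_budget = budget - roi_budget
    plan = []
    if rois:
        per_roi = roi_budget // len(rois)      # heavy revisits on salient objects
        plan += [(cell, per_roi) for cell in rois]
    if background_cells:
        per_bg = max(1, bg_budget // len(background_cells))
        plan += [(cell, per_bg) for cell in background_cells]
    return plan
```

With a 1,000-shot budget, two ROIs, and two background cells, each ROI receives 450 revisits per frame versus 50 for each background cell, which is the kind of asymmetry the paragraph above describes.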
In summary, one can move low-level object detection to the sensor level by employing, as an example, a dense 3×3 voxel grid every time a significant detection occurs more or less in real-time. This happens before the data is sent to the central controller, allowing for higher instantaneous resolution than a fixed pattern system can offer and, ultimately, better object classification ranges when using video detection range analogies.
Real-World Applications: Imagine that an autonomous vehicle is driving on a desolate highway. Ahead, the road appears empty. Suddenly, the sensor per..
Scooter mania spreads through Seattle region as Lime launches in Everett
Lime is launching in Everett. (GeekWire Photos / Kurt Schlosser) This Friday, Lime scooters will roll out on the streets of Everett, Wash., a town 35 miles north of Seattle. Everett is launching a three-month scooter-share pilot starting with 100 Lime scooters. Two other cities in Washington — Tacoma and Spokane — are also piloting… Continue reading Scooter mania spreads through Seattle region as Lime launches in Everett
Daimler and BMW-backed Kapten rides into London with anti-Uber ad campaign
Kapten, the French ride-hailing app backed by Daimler and BMW, has today launched in London, coupled with a feisty ad campaign taking a swipe at Uber’s tax arrangements. It follows Kapten (formerly called “Chauffeur Prive”) obtaining a license from TfL, London’s transport regulator, to operate its private-hire vehicle (PHV) service in the U.K. capital city.… Continue reading Daimler and BMW-backed Kapten rides into London with anti-Uber ad campaign
Mercedes-Benz lays out plan for carbon-neutral future
2020 Mercedes-Benz EQC Edition 1886
Mercedes-Benz's CEO-apparent Ola Källenius laid out a plan Monday, just after the Norway launch of the EQC electric SUV, that outlines the development of an entire “carbon-neutral” passenger-car fleet.
Källenius, currently the head of product development at Mercedes, has been named the future chairman of the board of management of Daimler, the parent company of Mercedes-Benz, effective later this year.
Part of the plan, called Vision 2039, calls for half the company's models to be plug-ins by 2030, either plug-in hybrids or all-electric vehicles.
READ THIS: First Mercedes-Benz EQC rolls off assembly line in Germany
According to a transcript of his speech, Källenius said, “Let’s be clear what this means for us: a fundamental transformation of our company within less than three product cycles. That’s not much time when you consider that fossil fuels have dominated our business since the invention of the car by Carl Benz and Gottlieb Daimler some 130 years ago. But as a company founded by engineers, we believe technology can also help to engineer a better future.”
He laid out plans to electrify the company's vans, trucks, and buses as well as its cars, and said that Mercedes is focusing first on building better electric cars by bringing “EV performance up and costs down.”
Källenius also doubled down on Daimler's commitment to developing hydrogen fuel-cell vehicles. “There’s also room and need to continue to work on other solutions,” he said, “for example, the fuel cell or eFuels…. Today, no one knows for sure which drivetrain mix will best serve our customers’ needs 20 years from now. That’s why we encourage policy makers to pave the way for tech neutrality: Let’s fix the target, but not the means to achieve it.”
He noted that in addition to electric buses, the company will also build city buses with fuel cells.
READ MORE: Mercedes joins forces with BMW to build an electric ecosystem
Källenius pointed to the company's car-sharing efforts, such as Car2Go, as a means to help customers reduce their carbon footprint, and said the company will make new efforts to encourage customers who buy EVs to charge them using renewable energy. In March, in conjunction with BMW's Reach Now, the company launched an effort to provide clean power to chargers and smart home chargers, similar to programs at Tesla and Volkswagen.
Of course, electric cars are only as carbon-neutral as the factories that build them. The company plans to convert all of its European factories to renewable energy by 2022, starting with an extension of its main plant in Sindelfingen, Germany, and including the factory in Bremen that builds the EQC and the Kamenz factory in Saxony where the EQC's battery is built.
By 2039, the year that gives the plan its name, the company intends to propagate the changes throughout its worldwide factories as well as its suppliers, using incentives as motivation.
Via and SamTrans Launch On-Demand Public Transportation in Pacifica, California
Published May 6, 2019 7:00 am, Via NYC
The new partnership with SamTrans brings another shared, on-demand service powered by Via to the San Francisco Bay area.
May 6, 2019 (Pacifica, CA) — Via, the world’s leading provider and developer of on-demand shared mobility solutions, today announced a new microtransit deployment in the San Francisco Bay Area, operated in partnership with public transportation leader SamTrans. The new service brings corner-to-corner shuttle service to Pacifica, California.
Starting May 6, SamTrans will offer on-demand shared transportation within the coverage area for the standard SamTrans fare. The new on-demand service replaces the fixed-route FLX Pacifica bus service, offering customers a more convenient public transportation solution.
SamTrans OnDemand is powered by Via’s advanced algorithm, which enables multiple riders to seamlessly share a single vehicle. The powerful technology directs passengers to a nearby corner — a virtual bus stop — for pick up and drop off, and dynamically routes the vehicle in real-time, allowing for quick and efficient shared trips without lengthy detours, or inconvenient fixed routes and schedules.
“Via’s powerful technology is redefining mobility across the globe, seamlessly integrating with public transit infrastructure to provide on-demand dynamic transportation solutions,” said Daniel Ramot, co-founder and CEO of Via. “We are delighted to be partnering with SamTrans to help Pacifica lead the way in mobility innovation through the launch of this convenient, affordable, and eco-friendly service.”
SamTrans operates 70 routes throughout San Mateo County. Funded in part by a half-cent sales tax, the San Mateo County Transit District also provides administrative support for Caltrain and the San Mateo County Transportation Authority.
“SamTrans OnDemand is an innovative service that will better serve Pacifica residents by providing a flexible and affordable transportation option,” said SamTrans Chief Planning Officer April Chan. “If this way of delivering service works well, we hope to extend this to other neighborhoods in San Mateo County that can take advantage of this service.”
“Pacifica has unique challenges regarding transportation that result in congested roads and needless stress,” said Pacifica Mayor Sue Vaterlauss. “SamTrans OnDemand offers an alternative that will appeal to students, seniors and people that want options beyond owning a car.”
SamTrans OnDemand allows riders to request a trip through an iOS or Android app operated by Via, or through the SamTrans Customer Service Call Center. Once a trip is booked, a shuttle is routed to the pickup location and can be tracked through the app in real-time.
Via has been tapped by cities and transportation players around the world to help re-engineer public transit from a regulated system of rigid routes and schedules to a fully dynamic, on-demand network. Via now has more than 70 launched and pending deployments in more than 15 countries. To learn more about Via, visit www.platform.ridewithvia.com.
About Via
Via is re-engineering public transit, from a regulated system of rigid routes and schedules to a fully dynamic, on-demand network. Via’s mobile app connects multiple passengers who are headed the same way, allowing riders to seamlessly share a premium vehicle. First launched in New York City in September 2013, the Via platform operates in the United States and in Europe through its joint venture with Mercedes-Benz Vans, ViaVan. Via’s technology is also deployed worldwide through dozens of partner projects with public transportation agencies, private transit operators, taxi fleets, private companies, and universities, seamlessly integrating with public transit infrastructure to power cutting-edge on-demand mobility. For more information, visit www.platform.ridewithvia.com.
Read more
Geely’s new R&D center in Germany to promote next-gen mobility tech
Lilium’s latest flying taxi prototype can at least hover
There are a number of flying taxi startups out there, but Lilium has stood out with its unique airplane-like design and serious aeronautical cred. Now, the company has unveiled an all-new prototype and flaunted the first successful tests with the craft. In a video, it shows the craft taking off vertically, hovering… Continue reading Lilium’s latest flying taxi prototype can at least hover
Lilium launches city travel electric air taxi
[MUSIC PLAYING] DANIEL WIEGAND: We’re creating a new means of transportation. Lilium is not a company that develops airplanes. Lilium is a mobility service. It’s an airborne mobility service, but it’s a service company. The product we finally sell to you as a customer is a service.
China’s Tesla wannabe Xpeng starts ride-hailing service
There are a lot of synergies between electric vehicles and ride-hailing. Drivers save more driving an EV than a gas vehicle. Environmentally conscious consumers will choose to hire an electric car. And EVs offer better compatibility with autonomous driving, which is expected to hit public roads in the coming… Continue reading China’s Tesla wannabe Xpeng starts ride-hailing service