SAN FRANCISCO, July 27, 2020 (GLOBE NEWSWIRE) — Lyft, Inc. (Nasdaq:LYFT) (the “Company” or “Lyft”), today announced that it will release financial results for its second fiscal quarter ended June 30, 2020 after the close of the market on Wednesday, August 12, 2020. On the same day, Lyft will host a conference call at 1:30… Continue reading Lyft: Lyft To Announce Second Quarter 2020 Financial Results
AEye Unveils 4Sight™, a Breakthrough LiDAR Sensor That Delivers Automotive Solid-State Reliability and Record-Breaking Performance
Available in July 2020, 4Sight provides automotive-grade ruggedness combined with a software-definable range of up to 1,000 meters and 0.025° resolution, at a fraction of the price of current LiDAR sensors.
Dublin, CA – June 9, 2020 – AEye, Inc., an artificial perception pioneer, today announced 4Sight™, a groundbreaking new sensor family built on its unique iDAR™ platform. 4Sight redefines LiDAR performance while establishing a benchmark for the next generation of LiDAR sensors and intelligent robotic vision systems. Debunking the previous assumption that high-performing, long-range 1550nm LiDAR could not also achieve solid-state reliability and lower cost, 4Sight delivers on all three: performance, reliability, and price.
The first 4Sight sensor to be released in July is the 4Sight M, designed to meet the diverse range of performance and functional requirements needed to power autonomous and partially automated applications. The 4Sight family of sensors has been developed and tested over the last 18 months in conjunction with a wide range of customers and integrators in the automotive, trucking, transit, construction, rail, intelligent traffic systems (ITS), aerospace, and defense markets. 4Sight leverages the complete iDAR software platform, which incorporates an upgraded visualizer (for modeling various shot patterns) and a comprehensive SDK, so it is fully extensible and customizable.
“The primary issue that has delayed broad adoption of LiDAR has been the industry’s inability to produce a high-performance deterministic sensor with solid state reliability at a reasonable cost,” said Blair LaCorte, President of AEye. “We created a more intelligent, agile sensor that is software definable to meet the unique needs of any application – the result is 4Sight.”
Some of the unique features of the 4Sight M are:
LiDAR Performance
- Software-definable range optimization of up to 1,000 meters (eye- and camera-safe)
- Up to 4 million points per second, with horizontal and vertical resolution of less than 0.1°
- Instantaneous addressable resolution of 0.025°

Integrated Intelligence

- Library of functionally safe, deterministic scan patterns that can be customized and fixed, or triggered to adjust to changing environments (highway, urban, weather, etc.)
- Integrated automotive camera, boresight-aligned with AEye’s agile LiDAR, instantaneously generating true-color point clouds; a parallel camera-only feed can provide a cost-effective redundant camera sensor
- Enhanced ground-plane detection to determine topology at extended ranges

Advanced Vision Capabilities

- Detection and classification of objects with advanced perception features such as intraframe radial and lateral velocity
- Detection through foliage and adverse weather conditions such as rain and fog, through the use of dynamic-range, full-waveform processing of multiple returns
- Detection of pedestrians at over 200 meters
- Detection of small, low-reflectivity objects such as tire fragments, bricks, or other road debris (10x50cm at 10% reflectivity) at ranges of over 120 meters

Reliability

- Shock and vibration: designed and tested for solid-state performance and reliability. In third-party testing, 4Sight has sustained mechanical shock of over 50G, random vibration over 12Grms (5-2000Hz), and sustained vibration of over 3G on each axis.
- Automotive-grade production:
  - Automotive-qualified supply chain utilizing standard production processes and overseen by global manufacturing partners
  - Designed for manufacturability, using a simple solid-state architecture consisting of only 1 scanner, 1 laser, 1 receiver, and 1 SoC
  - Common hardware architecture and software/data structures across all applications, from fully autonomous to partially automated (ADAS), leveraging R&D and economies of scale

Price

- 4Sight can be configured for high-volume Mobility applications with SOP 2021 at an estimated 2x-5x lower price than any other high-performance LiDAR; for ADAS applications with SOP 2023, 4Sight is designed to be priced 1.5x-3x lower than any other long- or medium-range LiDAR.
- 4Sight series production packaging options include roof, grill, and behind the windshield, with software optimized for the placement.

“In my work with automotive OEMs, the value delivered by the 4Sight sensor surpasses anything I have seen in the automotive industry,” said Sebastian Bihari, Managing Director at Vektor Partners. “By starting with understanding the data a perception system needs, AEye has developed a simpler, more responsive design for efficiently capturing and processing perception data. In doing this, 4Sight establishes standards for the industry that will take years for others to achieve.”
In addition to setting new standards for performance, reliability, and price, the 4Sight M is also extremely power efficient. 4Sight’s groundbreaking system attributes, such as time-to-detect, agile scanning, shot energy optimization, power optimization of returns, and boresight camera/LiDAR data generation, greatly reduce the processing and bandwidth typically required for full perception stacks. When implemented in a vehicle, AEye expects 4Sight sensors to be nearly power-neutral relative to the vehicle’s perception stack power budget.
Given the current constraints on travel, AEye is also announcing another industry-first innovation with the launch of Raptor, a unique high-performance, web-based remote demo platform. Raptor enables participants to engage in a real-time interactive test drive with an AEye engineer. From the comfort of their own home or office, AEye’s customers and partners will be able to see what a truly software-defined sensor can do, witness the record-breaking performance of the 4Sight M in real time, and customize the demo to meet their specific use cases. Please contact [email protected] to schedule a demo.
About AEye
AEye is an artificial perception pioneer and creator of iDAR™, a perception system that acts as the eyes and visual cortex of autonomous vehicles. Since the demonstration of its solid-state LiDAR scanner in 2013, AEye has pioneered breakthroughs in intelligent sensing. The company is based in the San Francisco Bay Area, and backed by world-renowned investors including Kleiner Perkins Caufield & Byers, Taiwania Capital, Hella Ventures, LG Electronics, Subaru-SBI, Aisin, Intel Capital, Airbus Ventures, and others.
Media Contact:
AEye, Inc.
Jennifer Deitsch
[email protected]
925-596-3945
Time of Flight vs. FMCW LiDAR: A Side-by-Side Comparison
Introduction
Recent papers [1–5] have presented a number of marketing claims about the benefits of Frequency Modulated Continuous Wave (FMCW) LiDAR systems. As might be expected, there is more to the story than the headlines claim. This white paper examines these claims and offers a technical comparison of Time of Flight (TOF) vs. FMCW LiDAR for each of them. We hope this serves to outline some of the difficult system trade-offs a successful practitioner must overcome, thereby stimulating robust, informed discussion, competition, and ultimately, improvement of both TOF and FMCW offerings to advance perception for autonomy.
Competitive Claims
Below is a summary of our views and a side-by-side comparison of TOF and FMCW LiDAR claims.
Download “Time of Flight vs. FMCW LiDAR: A Side-by-Side Comparison” [pdf]
Claim #1: FMCW is a (new) revolutionary technology
This is untrue
Contrary to recent news articles, FMCW LiDAR has been around for a very long time, with its beginnings stemming from work done at MIT Lincoln Laboratory in the 1960s [8], only seven years after the laser itself was invented [9]. Many of the lessons learned about FMCW over the years, while unclassified and in the public domain, have unfortunately been long forgotten. What has changed in recent years is the greater availability of long-coherence-length lasers. While this has justifiably rejuvenated interest in the established technology, as it can theoretically provide extremely high signal gain, there are still several limitations, identified long ago, that must be addressed to make this LiDAR viable for autonomous vehicles. If they are not addressed, the claim that “new” FMCW will cost-effectively solve the automotive industry’s challenges with both scalable data collection and long-range, small-object detection will prove untrue.
Claim #2: FMCW detects/tracks objects farther, faster
This is unproven
TOF LiDAR systems can offer very fast laser shot rates (several million shots per second in the AEye system), agile scanning, increased return salience, and the ability to apply high-density Regions of Interest (ROIs), yielding a factor of 2x–4x better information from returns versus other systems. By comparison, many low-complexity FMCW systems are capable of shot rates of only tens to hundreds of thousands of shots per second (~50x slower). So, in essence, we are comparing nanosecond dwell times and high repetition rates with tens-of-microsecond dwell times and low repetition rates (per laser/receiver pair).
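To make the dwell-time comparison concrete, the short sketch below converts example shot rates into per-shot time budgets. The 4 million and 80,000 shots-per-second figures are illustrative values consistent with the ranges quoted above, not measured specifications of any particular system.

```python
# Illustrative per-shot time budgets implied by example shot rates.
# The specific rates are assumptions consistent with the text, not product specs.
shot_rates = {
    "agile TOF (AEye-class)": 4_000_000,   # several million shots per second
    "low-complexity FMCW":    80_000,      # tens to hundreds of thousands per second
}

for name, rate in shot_rates.items():
    budget_us = 1e6 / rate                 # time available per shot, in microseconds
    print(f"{name:24s} {rate:>10,} shots/s -> {budget_us:8.3f} us per shot")

ratio = shot_rates["agile TOF (AEye-class)"] / shot_rates["low-complexity FMCW"]
print(f"shot-rate ratio: ~{ratio:.0f}x")
```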
Detection, acquisition (classification), and tracking of objects at long range are all heavily influenced by laser shot rate, because higher laser shot density (in space and/or time) provides more information, which allows for faster detection times and better noise filtering. AEye has demonstrated a system capable of multi-point detections of low-reflectivity targets: small objects and pedestrians at over 200m, vehicles at 300m, and a Class 3 truck at 1km range. This speaks to the ranging capability of TOF technology. Indeed, virtually all laser rangefinders use TOF, not FMCW, for distance ranging (e.g., the Voxtel rangefinder products [10], some with a 10+km detection range). Although recent articles claim that FMCW has superior range, we have not seen an FMCW system that can match the range of an advanced TOF system.
Claim #3: FMCW measures velocity and range more accurately and efficiently
This is misleading
TOF systems, including AEye’s LiDAR, do require multiple laser shots to determine target velocity. This might seem like extra overhead compared to the claims of single-shot FMCW. Much more important is the understanding that not all velocity measurements are equal. While radial velocity between two cars moving head-on is urgent (one of the reasons longer detection range is so desirable), so too is lateral velocity, as it is a factor in over 90% of the most dangerous edge cases. Cars running a red light, swerving vehicles, and pedestrians stepping into a street all require lateral velocity for evasive decision making. FMCW cannot measure lateral velocity simultaneously, in one shot, and has no benefit whatsoever over TOF systems in finding lateral velocity.
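As a rough illustration of the multi-shot approach (a minimal sketch, not AEye’s implementation; the scenario numbers are hypothetical), two successive position fixes are enough to recover the full velocity vector, including the lateral component that a single-shot Doppler measurement cannot see:

```python
import numpy as np

def velocity_from_two_shots(p1, p2, dt):
    """Full 2-D velocity from two position fixes taken dt seconds apart,
    split into the radial part (what Doppler sees) and the lateral part."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    v = (p2 - p1) / dt                      # full velocity vector
    r_hat = p2 / np.linalg.norm(p2)         # line-of-sight unit vector at the second fix
    v_radial = np.dot(v, r_hat) * r_hat     # component along the line of sight
    v_lateral = v - v_radial                # component invisible to single-shot Doppler
    return v, v_radial, v_lateral

# Hypothetical crossing target: 50 m ahead, moving laterally at 15 m/s,
# revisited 500 microseconds after the first shot.
dt = 500e-6
v, vr, vl = velocity_from_two_shots((0.0, 50.0), (15.0 * dt, 50.0), dt)
print("full:", v, "| radial:", vr, "| lateral:", vl)
# The radial component is ~0 m/s, yet the lateral component is the full 15 m/s.
```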
Consider a car moving at between 30 and 40 meters/second (~67 to 89 mph) detected by a laser shot. If a second laser shot is taken a short time later, say 50µs after the first, the target will only have moved ~1.75mm during that interval. To establish a velocity that is statistically significant, the target should have moved at least 2cm, which takes about 500µs (while requiring sufficient SNR to interpolate range samples). With that second measurement, a statistically significant range and velocity can be established within a time frame that is negligible compared to a frame period. With an agile scanner, such as the one AEye has developed, the 500µs is not solely dedicated, or “captive,” to velocity estimation. Instead, many other shots can be fired at other targets in the interim. We can use the time wisely to look at other areas and targets before returning to the original target for a high-confidence velocity measurement, whereas an FMCW system is captive for its entire dwell time.
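The displacement arithmetic above can be checked directly. The sketch below assumes a 35 m/s target (the middle of the 30 to 40 m/s band) and the 2cm threshold cited in the text:

```python
# Back-of-the-envelope check of the revisit-interval arithmetic above.
# Assumes a 35 m/s target and a 2 cm displacement threshold, per the text.
speed = 35.0                     # m/s, mid-point of the 30-40 m/s example
for gap_us in (50, 500):
    displacement_mm = speed * gap_us * 1e-6 * 1e3
    print(f"revisit after {gap_us:4d} us -> target moved {displacement_mm:5.2f} mm")

threshold = 0.02                 # 2 cm, minimum statistically useful displacement
print(f"gap needed for 2 cm of motion: {threshold / speed * 1e6:.0f} us")
```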
Compounding the captive time is the fact that FMCW often requires a minimum of two laser frequency sweeps (up and down) to form an unambiguous detection, with the down sweep providing the information needed to resolve the ambiguity that arises from range and Doppler shift mixing into a single beat frequency. This doubles the dwell time required per shot, above and beyond that already described in the previous paragraph. The motion of a target over 10µs is typically only about 0.5mm; this level of displacement enters the regime where it is difficult to separate vibration from real, lineal motion. Again, in the case of lateral velocity, no FMCW system will instantly detect lateral speed at all without multi-position estimates such as those used by TOF systems, but with the additional baggage of long FMCW dwell times.
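For readers unfamiliar with why the second sweep is needed, the sketch below works through the standard up/down-chirp beat-frequency algebra. The wavelength and chirp slope are assumed illustrative values, not parameters of any specific FMCW product.

```python
# Why FMCW typically needs an up sweep and a down sweep: a single beat frequency
# mixes range and Doppler; the pair separates them. Illustrative parameters only.
c = 3e8                        # speed of light, m/s
lam = 1550e-9                  # assumed laser wavelength, m
S = 50e12                      # assumed chirp slope, Hz/s (e.g., 500 MHz over 10 us)

def simulate_beats(R, v_r):
    """Beat frequencies on the up and down sweeps for range R (m) and
    closing radial velocity v_r (m/s)."""
    f_range = 2 * R * S / c            # range-proportional beat component
    f_dopp = 2 * v_r / lam             # Doppler component (approaching target)
    return f_range - f_dopp, f_range + f_dopp   # (up-sweep beat, down-sweep beat)

def solve(f_up, f_down):
    """Invert the beat pair back into range and radial velocity."""
    f_range = (f_up + f_down) / 2
    f_dopp = (f_down - f_up) / 2
    return f_range * c / (2 * S), f_dopp * lam / 2

f_up, f_down = simulate_beats(R=200.0, v_r=10.0)
print(solve(f_up, f_down))   # -> (200.0, 10.0); neither sweep alone pins both down
```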
Lastly, in an extreme TOF example, the AEye system has demonstrated detection of objects at 1km. Even if it required two consecutive shots to get velocity on a target at 1km, it is easy to see how that would be superior to a single shot at 100m, given a common frame rate of 20Hz and typical vehicle speeds.
Claim #4: FMCW has less interference
Quite the opposite, actually!
Spurious reflections arise in both TOF and FMCW systems. These can include retroreflector anomalies like “halos” and “shells,” first-surface reflections (even worse behind windshields), off-axis spatial sidelobes, as well as multipath and clutter. The key to any good LiDAR is to suppress sidelobes in both the spatial domain (with good optics) and the temporal/waveform domain. TOF and FMCW are comparable in spatial behavior, but where FMCW truly suffers is in the time/waveform domain when high-contrast targets are present.
Clutter
FMCW relies on window-based sidelobe rejection to address self-interference (clutter), which is far less robust than TOF, which has no such sidelobes to begin with. To provide context, a 10µs FMCW pulse spreads light radially across 1.5km of range. Any objects within this range extent will be caught in the FFT (time) sidelobes. Even a shorter 1µs FMCW pulse can be corrupted by high-intensity clutter 150m away. The first sidelobe of a rectangular-window FFT is well known to be -13dB, far above the levels needed for a consistently good point cloud (unless no object in the shot differs in intensity from any other range point in the shot by more than about 13dB, something that is unlikely in operational road conditions).
Of course, a deeper sidelobe taper can be applied, but at the cost of pulse broadening. Furthermore, nonlinearities in the receiver front end (the so-called spurious-free dynamic range) will limit the effective overall system sidelobe levels achievable, due to compression and ADC spurs (third-order intercepts), phase noise [6], atmospheric phase modulation, and so on, which no amount of window taper can mitigate. Aerospace and defense systems can and do overcome such limitations, but we are unaware of any low-cost, automotive-grade systems capable of the time-instantaneous >100dB dynamic range required to sort long-range small objects from near-range retroreflectors, as arises in FMCW.
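The window numbers quoted above are straightforward to reproduce. The sketch below is a generic signal-processing illustration (not a model of any particular FMCW product): it computes the range extent swept during a chirp of a given duration, then the peak FFT sidelobe and -3dB main-lobe width for a rectangular window versus a Hann taper, showing the sidelobe-versus-broadening trade-off.

```python
import numpy as np

c = 3e8  # speed of light, m/s

# Range extent illuminated during one chirp: light covers c*tau round-trip,
# so objects across c*tau/2 of range fall inside the same measurement.
for tau in (10e-6, 1e-6):
    print(f"{tau*1e6:4.0f} us chirp -> {c * tau / 2:8.0f} m of range in one shot")

# Peak FFT sidelobe and -3 dB main-lobe width for two windows.
N, pad = 4096, 64                                # window length, zero-padding factor
def window_stats(win, name):
    spec = np.abs(np.fft.rfft(win, n=N * pad))
    db = 20 * np.log10(spec / spec.max() + 1e-12)
    # Peak sidelobe = highest local maximum outside the main lobe.
    peak_side = max(db[i] for i in range(1, len(db) - 1)
                    if db[i] > db[i - 1] and db[i] > db[i + 1])
    # -3 dB main-lobe width in (unpadded) FFT bins: a proxy for resolution loss.
    width_bins = 2 * np.argmax(db < -3.0) / pad
    print(f"{name:12s} peak sidelobe {peak_side:6.1f} dB, -3 dB width {width_bins:4.2f} bins")

window_stats(np.ones(N), "rectangular")          # ~ -13.3 dB, ~0.9 bins
window_stats(np.hanning(N), "Hann taper")        # ~ -31.5 dB, ~1.4 bins (broader lobe)
```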
In contrast, a typical Gaussian TOF system with a 2ns pulse duration has no time-based sidelobes whatsoever beyond the few tens of centimeters spanned by the pulse itself. No amount of dynamic range between small and large offset returns has any effect on the light incident on the photodetector when the small target’s return is captured. We invite anyone evaluating LiDAR systems to carefully inspect the point cloud quality of TOF vs. FMCW under various driving conditions for themselves. The multitude of potential sidelobes in FMCW leads to artifacts that impact not just local range samples, but the entire returned waveform for a given pulse!
First surface (e.g., FMCW behind a windshield or other first surface)
A potentially stronger interference source is a reflection caused by a windshield or another first surface placed in front of the LiDAR system. Because the transmit beam is on nearly continuously, these reflections will also be continuous, and very strong relative to distant objects, representing a similar kind of low-frequency component that creates undesirable FFT sidelobes in the transformed data. The result can also be a significant reduction of usable dynamic range. Furthermore, windshields, being multilayer glass under mechanical stress, have complex, inhomogeneous polarization. This randomizes the electric field of the signal return on the photodetector surface, complicating (decohering) the optical mixing.
Lastly, due to the nature of time-domain versus frequency-domain processing, the handling of multiple echoes, even with high dynamic range, is a straightforward process in TOF systems, whereas it requires significant disambiguation in FMCW systems. Multi-echo processing is..
The Station: Winners and losers in Paris, Rivian sets a delivery date, Waymo and FCA deepen ties
The Station is a weekly newsletter dedicated to all things transportation. Sign up here — just click The Station — to receive it every Saturday in your inbox. Hello and welcome back to The Station, a newsletter dedicated to all the present and future ways people and packages move from Point A to Point B. It’s… Continue reading The Station: Winners and losers in Paris, Rivian sets a delivery date, Waymo and FCA deepen ties
Gojek appoints Amazon, Microsoft veteran as its new chief technology officer
Indonesia-based ride-hailing company and “super app” Gojek said today that it has named a new chief technology officer. Severan Rault, who previously held leadership positions at Amazon and Microsoft, takes over the role from Ajey Gore, who announced last month he was leaving for personal reasons. In a statement, the company said Rault will oversee… Continue reading Gojek appoints Amazon, Microsoft veteran as its new chief technology officer
Uber: Uber ATG issues enhancements to Safety Case Framework
By: Nat Beuse, Head of Safety at Uber Advanced Technologies Group In July 2019, the team at Uber ATG released our first, open-sourced Safety Case Framework, which clearly organizes the goals, claims, and evidence necessary to substantiate that our self-driving vehicles (SDVs) are acceptably safe to operate on public roads. The framework charted an outline… Continue reading Uber: Uber ATG issues enhancements to Safety Case Framework
Liberbank and Next Electric Motors sign an alliance for the development of electric mobility
MADRID, Jul 27 (EUROPA PRESS) – Liberbank and the Valencian motorcycle manufacturer Next Electric Motors have signed a collaboration agreement to promote sustainable mobility and offer new options for electric mobility by making Next’s electric motorcycle models available on special terms and with financing. To help raise awareness of the shift toward clean energy,… Continue reading Liberbank and Next Electric Motors sign an alliance for the development of electric mobility
Piaggio’s first-half profit falls 73% to 9.1 million euros
PONTEDERA (ITALY), Jul 27 (EUROPA PRESS) – The Italian group Piaggio posted a net profit of 9.1 million euros in the first half of this year, a decrease of 73% from the 34.6 million euros it earned in the first six months of 2019. According to data from the company that… Continue reading Piaggio’s first-half profit falls 73% to 9.1 million euros
In a robot taxi through Shanghai
While waiting for the robot, old taxi stories come to mind: the driver on the way home from the “Bar Rouge” who demanded five times the fare and then wanted a fist fight, or the drive from the airport in Pudong when the driver told of the three apartments he had bought in central Shanghai… Continue reading In a robot taxi through Shanghai
ABB breaks ground on new EV charger factory to meet global demand
Since entering the EV charging market a decade ago, ABB has sold over 14,000 DC fast chargers in more than 80 countries. However, the Swiss-based electronics giant believes this is just the beginning. It has begun construction on a new production facility in San Giovanni Valdarno, Italy. The company plans to invest $30 million in… Continue reading ABB breaks ground on new EV charger factory to meet global demand