Continental Accelerates AI Development Using Supercomputer
In operation since early 2020: a networked computer system with NVIDIA technology reduces development time from weeks to hours. Areas of application include artificial intelligence, deep learning, and virtual simulation. The computer cluster is ranked as the most powerful computer in the automotive industry on the TOP500 list of supercomputers. Software powerhouse Continental has more than 20,000…
Category: Suppliers
Continental Fastest Computer in the Industry
In operation in Frankfurt since early 2020: a networked computer system with NVIDIA technology reduces development time from a few weeks to a few hours. Areas of application include, in particular, artificial intelligence, deep learning, and virtual simulation. The computer cluster is now ranked on the TOP500 list of supercomputers as the most powerful computer in the…
Cummins Reports Second Quarter 2020 Results
COLUMBUS, Ind.–(BUSINESS WIRE)–Cummins Inc. (NYSE: CMI) today reported results for the second quarter of 2020. Second quarter revenues of $3.9 billion decreased 38 percent from the same quarter in 2019. COVID-19 related customer shutdowns and weak economic activity led to lower demand in most end markets and regions except China. Sales in North America declined…
GOODYEAR LAUNCHES EXCLUSIVE WATCH COLLECTION WITH B.R.M CHRONOGRAPHES
AKRON, Ohio, July 28, 2020 – Goodyear has combined a global licensing collaboration with a motorsport agreement to launch an exclusive watch collection with French luxury watch specialist B.R.M Chronographes. This follows the recent announcement that Goodyear has joined forces with the Algarve Pro Racing team, champions of the European Le Mans Series, who will be showcasing…
Adient to discuss Q2 fiscal 2020 financial results on May 5, 2020
Intel: Intel Makes Changes to Technology Organization
SANTA CLARA, Calif., July 27, 2020 – Today, Intel CEO Bob Swan announced changes to the company’s technology organization and executive team to accelerate product leadership and improve focus and accountability in process technology execution. Effective immediately, the Technology, Systems Architecture and Client Group (TSCG) will be separated into the following teams, whose leaders will…
Visteon Career Profile: Kris Doyle, Director of Investor Relations and Strategic Planning
By: Elaine Zhu
What is the coolest thing about working at Visteon? The automotive industry is changing rapidly, with vehicles becoming more connected and digital. This has created a great opportunity for Visteon, a pure-play cockpit electronics company developing innovative products aligned with these secular trends. I think it makes working at Visteon extremely exciting.…
AEye Unveils 4Sight™, a Breakthrough LiDAR Sensor That Delivers Automotive Solid-State Reliability and Record-Breaking Performance
Available in July 2020, 4Sight provides automotive-grade ruggedness combined with a software-definable range of up to 1,000 meters and 0.025° resolution at a fraction of the price of current LiDAR sensors.
Dublin, CA – June 9, 2020 – AEye, Inc., an artificial perception pioneer, today announced 4Sight™, a groundbreaking new sensor family built on its unique iDAR™ platform. 4Sight redefines LiDAR performance while establishing a benchmark for the next generation of LiDAR sensors and intelligent robotic vision systems. Debunking previous assumptions that high-performing, long-range 1550nm LiDAR could not achieve both solid-state reliability and lower cost, 4Sight delivers on all three – performance, reliability and price.
The first 4Sight sensor to be released in July is the 4Sight M. The 4Sight M is designed to meet the diverse range of performance and functional requirements needed to power autonomous and partially automated applications. The 4Sight family of sensors has been developed and tested over the last 18 months in conjunction with a wide range of customers and integrators in the automotive, trucking, transit, construction, rail, intelligent traffic systems (ITS), aerospace, and defense markets. 4Sight leverages the complete iDAR software platform, which incorporates an upgraded visualizer (which allows users to model various shot patterns) and a comprehensive SDK, so it is fully extensible and customizable.
“The primary issue that has delayed broad adoption of LiDAR has been the industry’s inability to produce a high-performance deterministic sensor with solid state reliability at a reasonable cost,” said Blair LaCorte, President of AEye. “We created a more intelligent, agile sensor that is software definable to meet the unique needs of any application – the result is 4Sight.”
Some of the unique features of the 4Sight M are:
LiDAR Performance
• Software-definable range optimization of up to 1,000 meters (eye- and camera-safe)
• Up to 4 million points per second with horizontal and vertical resolution of less than 0.1°
• Instantaneous addressable resolution of 0.025°
Integrated Intelligence
• Library of functionally safe, deterministic scan patterns that can be customized and fixed, or triggered to adjust to changing environments (highway, urban, weather, etc.)
• Integrated automotive camera, boresight-aligned with AEye’s agile LiDAR, instantaneously generating true-color point clouds. A parallel camera-only feed can provide a cost-effective redundant camera sensor.
• Enhanced ground-plane detection to determine topology at extended ranges
Advanced Vision Capabilities
• Detection and classification of objects with advanced perception features such as intraframe radial and lateral velocity
• Detection through foliage and adverse weather conditions such as rain and fog, through the use of dynamic-range, full-waveform processing of multiple returns
• Detection of pedestrians at over 200 meters
• Detection of small, low-reflectivity objects such as tire fragments, bricks, or other road debris (10x50cm at 10% reflectivity) at ranges of over 120 meters
Reliability
• Shock and vibration: designed and tested for solid-state performance and reliability. In third-party testing, 4Sight has sustained mechanical shock of over 50G, random vibration of over 12Grms (5-2000Hz), and sustained vibration of over 3G on each axis.
• Automotive-grade production:
  - Automotive-qualified supply chain utilizing standard production processes and overseen by global manufacturing partners
  - Designed for manufacturability using a simple solid-state architecture consisting of only 1 scanner, 1 laser, 1 receiver, and 1 SoC
  - Common hardware architecture and software/data structures across all fully autonomous to partially automated (ADAS) applications, leveraging R&D and economies of scale
Price
• 4Sight can be configured for high-volume Mobility applications with SOP 2021 at an estimated 2x-5x lower price than any other high-performance LiDAR. For ADAS applications with SOP 2023, 4Sight is designed to be priced 1.5x-3x lower than any other long- or medium-range LiDAR.
• 4Sight series production packaging options include roof, grill, and behind the windshield, with software optimized depending on placement.
“In my work with automotive OEMs, the value delivered by the 4Sight sensor surpasses anything I have seen in the automotive industry,” said Sebastian Bihari, Managing Director at Vektor Partners. “By starting with understanding the data a perception system needs, AEye has developed a simpler, more responsive design for efficiently capturing and processing perception data. In doing this, 4Sight establishes standards for the industry that will take years for others to achieve.”
In addition to setting new standards for performance, reliability and price, the 4Sight M is also extremely power efficient. 4Sight’s groundbreaking system attributes, such as time-to-detect, agile scanning, shot energy optimization, power optimization of returns, and boresight camera/LiDAR data generation, greatly reduce the processing and bandwidth typically required for full perception stacks. When implemented in a vehicle, AEye expects 4Sight sensors to be nearly power neutral in relation to a vehicle’s perception stack power budget.
Given the current constraints on travel, AEye is also announcing another industry-first innovation with the launch of Raptor, a unique high-performance web-based remote demo platform. Raptor will enable participants to engage in a real-time interactive test drive with an AEye engineer. From the comfort of their own home or office, AEye’s customers and partners will have the ability to see what a truly software-defined sensor can do, witness the record-breaking 4Sight M performance in real time, and customize the demo to meet their specific use cases. Please contact [email protected] to schedule a demo.
About AEye
AEye is an artificial perception pioneer and creator of iDAR™, a perception system that acts as the eyes and visual cortex of autonomous vehicles. Since the demonstration of its solid-state LiDAR scanner in 2013, AEye has pioneered breakthroughs in intelligent sensing. The company is based in the San Francisco Bay Area, and backed by world-renowned investors including Kleiner Perkins Caufield & Byers, Taiwania Capital, Hella Ventures, LG Electronics, Subaru-SBI, Aisin, Intel Capital, Airbus Ventures, and others.
Media Contact:
AEye, Inc.
Jennifer Deitsch
[email protected]
925-596-3945
Time of Flight vs. FMCW LiDAR: A Side-by-Side Comparison
Introduction
Recent papers [1-5] have presented a number of marketing claims about the benefits of Frequency Modulated Continuous Wave (FMCW) LiDAR systems. As might be expected, there is more to the story than the headlines claim. This white paper examines these claims and offers a technical comparison of Time of Flight (TOF) vs. FMCW LiDAR for each of them. We hope this serves to outline some of the difficult system trade-offs a successful practitioner must overcome, thereby stimulating robust informed discussion, competition, and ultimately, improvement of both TOF and FMCW offerings to advance perception for autonomy.
Competitive Claims
Below is a summary of our views and a side-by-side comparison of TOF vs. FMCW LiDAR claims.
Download “Time of Flight vs. FMCW LiDAR: A Side-by-Side Comparison” [pdf]
Claim #1: FMCW is a (new) revolutionary technology
This is untrue.
Contrary to recent news articles, FMCW LiDAR has been around for a very long time, with its beginnings stemming from work done at MIT Lincoln Laboratory in the 1960s [8], only seven years after the laser itself was invented [9]. Many of the lessons learned about FMCW over the years, while unclassified and in the public domain, have unfortunately been long forgotten. What has changed in recent years is the greater availability of long-coherence-length lasers. While this has justifiably rejuvenated interest in the established technology, as it can theoretically provide extremely high signal gain, there are still several long-identified limitations that must be addressed to make this LiDAR viable for autonomous vehicles. If they are not addressed, the claim that “new” FMCW will cost-effectively solve the automotive industry’s challenges with both scalable data collection and long-range, small-object detection will prove untrue.
Claim #2: FMCW detects/tracks objects farther, faster
This is unproven.
TOF LiDAR systems can offer very fast laser shot rates (several million shots per second in the AEye system), agile scanning, increased return salience, and the ability to apply high-density Regions of Interest (ROIs), yielding a factor of 2x-4x better information from returns versus other systems. By comparison, many low-complexity FMCW systems are only capable of shot rates in the tens to hundreds of thousands of shots per second (~50x slower). So, in essence, we are comparing nanosecond dwell times and high repetition rates with tens-of-microsecond dwell times and low repetition rates (per laser/receiver pair).
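The "~50x slower" figure above follows directly from the quoted shot rates. A minimal sketch of that arithmetic, using illustrative rates consistent with the text (the specific numbers below are examples, not measurements of any product):

```python
# Back-of-the-envelope comparison of per-shot time budgets implied by
# the shot rates quoted in the text. Values are illustrative.

def dwell_budget_us(shots_per_second: float) -> float:
    """Average time budget available per laser shot, in microseconds."""
    return 1e6 / shots_per_second

tof_rate = 4_000_000   # TOF: several million shots per second
fmcw_rate = 80_000     # FMCW: tens to hundreds of thousands of shots per second

tof_budget = dwell_budget_us(tof_rate)     # 0.25 us per shot
fmcw_budget = dwell_budget_us(fmcw_rate)   # 12.5 us per shot

print(f"TOF budget:  {tof_budget:.2f} us/shot")
print(f"FMCW budget: {fmcw_budget:.2f} us/shot")
print(f"Ratio: {fmcw_budget / tof_budget:.0f}x more time per shot for FMCW")
```

At these assumed rates the per-shot budget differs by a factor of 50, which is the dwell-time gap the comparison above turns on.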
Detection, acquisition (classification), and tracking of objects at long range are all heavily influenced by laser shot rate, because higher laser shot density (in space and/or time) provides more information that allows for faster detection times and better noise filtering. AEye has demonstrated a system capable of multi-point detections of low-reflectivity targets: small objects and pedestrians at over 200m, vehicles at 300m, and a Class 3 truck at 1km range. This speaks to the ranging capability of TOF technology. Indeed, virtually all laser rangefinders use TOF, not FMCW, for distance ranging (e.g., the Voxtel rangefinder products [10], some with a 10+km detection range). Although recent articles claim that FMCW has superior range, we have not seen an FMCW system that can match the range of an advanced TOF system.
Claim #3: FMCW measures velocity and range more accurately and efficiently
This is misleading.
TOF systems, including AEye’s LiDAR, do require multiple laser shots to determine target velocity. This might seem like extra overhead compared to FMCW’s single-shot claims. Much more important is the understanding that not all velocity measurements are equal. While radial velocity is urgent for two cars moving head-on (one of the reasons a longer detection range is so desired), so too is lateral velocity, as it figures in over 90% of the most dangerous edge cases. Cars running a red light, swerving vehicles, and pedestrians stepping into a street all require lateral velocity for evasive decision making. FMCW cannot measure lateral velocity in a single shot either, and has no benefit whatsoever over TOF systems in finding lateral velocity.
Consider a car moving between 30 and 40 meters/second (~67 to 89 mph) detected by a laser shot. If a second laser shot is taken a short period later, say 50µs after the first, the target will only have moved ~1.75mm during that interval. To establish a velocity that is statistically significant, the target should have moved at least 2cm, which takes about 500µs (while requiring sufficient SNR to interpolate range samples). With that second measurement, a statistically significant range and velocity can be established within a time frame that is negligible compared to a frame rate. With an agile scanner, such as the one AEye has developed, the 500µs is not solely dedicated or “captive” to velocity estimation. Instead, many other shots can be fired at targets in the interim. We can use the time wisely to look at other areas/targets before returning to the original target for a high-confidence velocity measurement, whereas an FMCW system is captive for its entire dwell time.
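The displacement figures in the paragraph above can be sketched in a few lines. The ~2cm significance threshold and the vehicle speeds come from the text; everything else is straightforward kinematics:

```python
# Sketch of the two-shot velocity arithmetic from the text (illustrative).

MIN_DISPLACEMENT_M = 0.02  # ~2 cm needed for a statistically significant velocity

def displacement_m(speed_mps: float, interval_s: float) -> float:
    """How far a target moves between two shots fired interval_s apart."""
    return speed_mps * interval_s

def time_to_displacement_s(speed_mps: float, displacement: float) -> float:
    """How long until the target has moved far enough to resolve velocity."""
    return displacement / speed_mps

# Shots 50 us apart on a car at 35 m/s (~78 mph): only ~1.75 mm of motion.
d = displacement_m(35.0, 50e-6)
# At 40 m/s, reaching the 2 cm threshold takes ~500 us.
t = time_to_displacement_s(40.0, MIN_DISPLACEMENT_M)
print(f"moved {d * 1000:.2f} mm in 50 us; ~{t * 1e6:.0f} us to move 2 cm")
```

Either way, the 500µs window is small compared to a typical 50ms (20Hz) frame, which is why the text argues interleaved shots cost little at the frame level.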
Compounding the captivity time is the additional fact that FMCW often requires a minimum of two laser frequency sweeps (up and down) to form an unambiguous detection, with the down sweep providing the information needed to overcome the ambiguity arising from the mixing of range and Doppler shift. This doubles the dwell time required per shot above and beyond that already described in the previous paragraph. The motion of a target in 10µs is typically only 0.5mm. This level of displacement enters the regime where it is difficult to separate vibration from real, linear motion. Again, in the case of lateral velocity, no FMCW system will instantly detect lateral speed at all without multi-position estimates such as those used by TOF systems, but with the additional baggage of long FMCW dwell times.
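The range/Doppler ambiguity mentioned above is why the second sweep is needed: a single beat frequency mixes the delay-induced term with the Doppler term, and only the up/down pair separates them. A minimal sketch under a common sign convention (conventions vary by implementation, and the chirp slope below is an assumed example value):

```python
# Why FMCW needs up + down chirps: one beat frequency entangles range and
# Doppler; the pair disambiguates them. Sign convention assumed here:
#   f_up = f_range - f_doppler,  f_down = f_range + f_doppler.

C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # 1550 nm laser (as discussed in the text)
SLOPE = 1.0e14        # chirp slope in Hz/s (e.g., 1 GHz over 10 us; assumed)

def beat_frequencies(range_m: float, radial_velocity_mps: float):
    """Forward model: beat frequencies for the up and down sweeps."""
    f_range = 2 * range_m * SLOPE / C            # delay-induced beat
    f_doppler = 2 * radial_velocity_mps / WAVELENGTH
    return f_range - f_doppler, f_range + f_doppler

def disambiguate(f_up: float, f_down: float):
    """Invert the beat pair back into range and radial velocity."""
    f_range = (f_up + f_down) / 2
    f_doppler = (f_down - f_up) / 2
    return f_range * C / (2 * SLOPE), f_doppler * WAVELENGTH / 2

f_up, f_down = beat_frequencies(100.0, 20.0)  # 100 m target closing at 20 m/s
r, v = disambiguate(f_up, f_down)
print(f"recovered range = {r:.1f} m, radial velocity = {v:.1f} m/s")
```

Note that only the radial component falls out of this algebra; lateral velocity never appears, which is the point the paragraph above makes.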
Lastly, in an extreme TOF example, the AEye system has demonstrated detection of objects at 1km. Even if it required two consecutive shots to get velocity on a target at 1km, it is easy to see how that would be superior to a single shot at 100m, given a common frame rate of 20Hz and typical vehicle speeds.
Claim #4: FMCW has less interference
Quite the opposite, actually!
Spurious reflections arise in both TOF and FMCW systems. These can include retroreflector anomalies like “halos” and “shells,” first-surface reflections (even worse behind windshields), off-axis spatial sidelobes, multipath, and clutter. The key to any good LiDAR is to suppress sidelobes in both the spatial domain (with good optics) and the temporal/waveform domain. TOF and FMCW are comparable in spatial behavior, but where FMCW truly suffers is in the time/waveform domain when high-contrast targets are present.
Clutter
FMCW relies on window-based sidelobe rejection to address self-interference (clutter), which is far less robust than TOF, which has no such sidelobes to begin with. To provide context, a 10µs FMCW sweep spreads light radially across a 1.5km range extent. Any objects within this extent will be caught in the FFT (time) sidelobes. Even a shorter 1µs sweep can be corrupted by high-intensity clutter 150m away. The first sidelobe of a rectangular-window FFT is well known to be -13dB, far above the levels needed for a consistently good point cloud (unless no object in the shot differs in intensity from any other range point in the shot by more than about 13dB, something that is unlikely in operational road conditions).
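Both numbers quoted above (the 1.5km range extent and the -13dB rectangular-window sidelobe) can be checked numerically. A short sketch, with the window length chosen arbitrarily for illustration:

```python
# Verifying the two figures from the text: the range extent illuminated
# during a long FMCW sweep, and the first sidelobe of a rectangular window.
import numpy as np

C = 3.0e8  # speed of light, m/s

def range_extent_m(sweep_duration_s: float) -> float:
    """Round-trip range span covered by light emitted over the sweep."""
    return C * sweep_duration_s / 2

print(f"10 us sweep -> {range_extent_m(10e-6):.0f} m range extent")  # 1500 m
print(f" 1 us sweep -> {range_extent_m(1e-6):.0f} m range extent")   # 150 m

# First sidelobe of a rectangular window, via a heavily zero-padded FFT:
N = 64  # arbitrary window length for illustration
spectrum = np.abs(np.fft.rfft(np.ones(N), n=65536))
spectrum_db = 20 * np.log10(spectrum / spectrum.max())
# The mainlobe's first null sits near bin 65536/N; the peak after it is
# the first sidelobe, asymptotically -13.26 dB for a rectangular window.
first_null = 65536 // N
sidelobe_db = spectrum_db[first_null:2 * first_null].max()
print(f"rectangular-window first sidelobe: {sidelobe_db:.1f} dB")
```

The sidelobe figure is a property of the window alone, which is why the text notes that heavier tapers trade it against pulse broadening.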
Of course, deeper sidelobe taper can be applied, but at the cost of pulse broadening. Furthermore, nonlinearities in the receiver front end (so-called spurious-free dynamic range) will limit the effective overall system sidelobe levels achievable, due to compression and ADC spurs (third-order intercepts), phase noise [6], atmospheric phase modulation, etc., which no amount of window taper can mitigate. Aerospace and defense systems can and do overcome such limitations, but we are unaware of any low-cost automotive-grade systems capable of the time-instantaneous >100dB dynamic range required to sort out long-range small objects from near-range retroreflectors, as arises in FMCW.
In contrast, a typical Gaussian TOF system with a 2ns pulse duration has no time-based sidelobes whatsoever beyond the few tens of centimeters spanned by the pulse itself. No amount of dynamic range between small and large offset returns has any effect on the light incident on the photodetector when the small target return is captured. We invite anyone evaluating LiDAR systems to carefully inspect the point cloud quality of TOF vs. FMCW under various driving conditions for themselves. The multitude of potential sidelobes in FMCW leads to artifacts that impact not just local range samples, but the entire returned waveform for a given pulse!
First surface (e.g., FMCW behind a windshield or other first surface)
A potentially stronger interference source is a reflection from a windshield or other first surface placed in front of the LiDAR system. Because the transmit beam is on nearly continuously, these reflections are continuous, and very strong relative to distant objects, representing a similar kind of low-frequency component that creates undesirable FFT sidelobes in the transformed data. The result can be a significant reduction of usable dynamic range. Furthermore, windshields, being multilayer glass under mechanical stress, have complex, inhomogeneous polarization. This randomizes the electric field of the signal return on the photodetector surface, complicating (decohering) optical mixing.
Lastly, due to the nature of time-domain vs. frequency-domain processing, the handling of multiple echoes, even with high dynamic range, is a straightforward process in TOF systems, whereas it requires significant disambiguation in FMCW systems. Multi-echo processing is…
Grammer invites shareholders to virtual Annual General Meeting
06/09/2020
Grammer invites shareholders to virtual Annual General Meeting
-Annual General Meeting on July 08, 2020 in virtual form for the first time
-The regular period of office of the six shareholder representatives on the Supervisory Board expires at the end of the Annual General Meeting
-Four new shareholder representatives nominated
Ursensollen, June 09, 2020 – Grammer AG published the invitation to its Annual General Meeting (AGM) 2020 on June 08, 2020. In view of the exceptional circumstances caused by the COVID-19 pandemic, this Annual General Meeting will be fundamentally different from those of previous years. In the interests of all shareholders, employees and other parties involved, Grammer AG will be holding its Annual General Meeting in solely virtual form for the first time.
The period of office of the six members of Grammer AG’s Supervisory Board elected by the AGM ends at the end of the Annual General Meeting on July 08, 2020. Four of the six current shareholder representatives – including the current Chairman of the Supervisory Board, Dr. Klaus Probst – are not standing for re-election.
“After 15 years on the Supervisory Board, including 10 years as Chairman, I have decided not to stand for re-election to the Supervisory Board in order to pave the way for a new generation,” says Dr. Klaus Probst, Chairman of the Supervisory Board, commenting on the planned changes. “Thanks to the great commitment of all the employees of the Grammer Group, the Company has developed into a global and innovative partner to its customers in the automotive and commercial vehicle industries and is excellently positioned to meet the challenges it faces in the future – even in times of the COVID-19 pandemic.”
Experienced candidates nominated for election to the future Supervisory Board
Grammer AG’s Supervisory Board will be asking the shareholders to elect the following persons to the Supervisory Board as shareholder representatives on the basis of the recommendations of the Nomination Committee and taking into account the objectives adopted by the Supervisory Board for its own composition:
• Dr.-Ing. Ping He, Wenzenbach-Irlbach, development engineer in the Powertrain Division of Continental AG
• Mr. Jürgen Kostanjevec, Cologne, independent consultant
• Ms. Gabriele Sons, Berlin, attorney at the law firm Sons
• Mr. Alfred Weber, Stuttgart, former chief executive officer of MANN+HUMMEL GmbH
Dr. Peter Merten and Prof. Dr.-Ing. Birgit Vogel-Heuser will be standing for re-election to the Supervisory Board of GRAMMER AG, thus ensuring continuity in the Board’s work following their election.
“I would like to take this opportunity to thank the members who will soon be leaving the Supervisory Board, Ms. Ingrid Hunger, Dr. Bernhard Wankerl and Mr. Wolfram Hatz, for their dedication and commitment over the past years,” Probst adds. “The proposed nominees are internationally experienced, independent industry and technical experts who will contribute to the Supervisory Board and the Company’s future development.”
About Grammer AG
Located in Amberg, Germany, Grammer AG specializes in the development and production of components and systems for automotive interiors as well as suspended driver and passenger seats for on-road and off-road vehicles.
In the Automotive Division, Grammer supplies headrests, armrests, center console systems, high-quality interior components, operating systems and innovative thermoplastic solutions to premium automakers and automotive system suppliers. The Commercial Vehicles Division comprises seats for the truck and off-road seat segments (tractors, construction machinery, and forklifts) as well as train and bus seats.
With over 15,500 employees, Grammer operates in 20 countries around the world. Grammer shares are listed in the Prime Standard and traded on the Frankfurt and Munich stock exchanges via the electronic trading system Xetra.
Download Press Information
Back