Close your left eye as you look at this screen. Now close your right eye and open your left — you’ll notice that your field of vision shifts depending on which eye you’re using. That’s because each eye captures a slightly different two-dimensional image, and the images from your two retinas are combined to provide depth and produce… Continue reading 2D or Not 2D: NVIDIA Researchers Bring Images to Life with AI
Category: Suppliers
CAPITAL REDUCTION – Cancellation of 1,345,821 treasury shares
CAPITAL REDUCTION – Cancellation of 1,345,821 treasury shares Pursuant to the decision of the Chief Executive Officer on November 29, 2019 and the fifth and the fourteenth resolutions of the May 17, 2019 Shareholders Meeting and the fifth and the twenty-second resolutions of the May 18, 2018 Shareholders Meeting, Compagnie Générale des Etablissements Michelin has… Continue reading CAPITAL REDUCTION – Cancellation of 1,345,821 treasury shares
Intel: Intel Research to Solve Real-World Challenges
What’s New: This week at the annual Neural Information Processing Systems (NeurIPS) conference in Vancouver, British Columbia, Intel is contributing almost three dozen conference, workshop and spotlight papers covering deep equilibrium models, imitation learning, machine programming and more. “Intel continues to push the frontiers in fundamental and applied research as we work to infuse AI everywhere, from… Continue reading Intel: Intel Research to Solve Real-World Challenges
Vitesco Technologies Cuts Costs for Plug-In Hybrid Powertrain
Vitesco Technologies presents cost-effective hybrid transmission with integrated electric motors. Expanded role for electric motors results in vastly simplified transmission architecture and reduced costs. Solution enables energy-saving high-voltage hybrid vehicles to be tailored to the mass market. Regensburg, December 9, 2019. Vitesco Technologies, the Powertrain business area of Continental, presented at the CTI… Continue reading Vitesco Technologies Cuts Costs for Plug-In Hybrid Powertrain
Edited Transcript of ETN earnings conference call or presentation 29-Oct-19 3:00pm GMT
Q3 2019 Eaton Corporation PLC Earnings Call DUBLIN Dec 8, 2019 (Thomson StreetEvents) — Edited Transcript of Eaton Corporation PLC earnings conference call or presentation Tuesday, October 29, 2019 at 3:00:00pm GMT TEXT version of Transcript ================================================================================ Corporate Participants ================================================================================ * Craig Arnold Eaton Corporation plc – Chairman & CEO * Richard H. Fearon Eaton… Continue reading Edited Transcript of ETN earnings conference call or presentation 29-Oct-19 3:00pm GMT
NIO: Punishing Dilution Inbound
This week, Piper Jaffray became the latest Wall Street shop to chronicle the strange travails of the struggling company. Despite initiating coverage with a “Neutral” rating, Piper’s analysts could find little cause for optimism. The intrinsic… Continue reading NIO: Punishing Dilution Inbound
AEye Team Profile: Umar Piracha
AEye’s very own Umar Piracha will be chairing the Optical Technologies for Autonomous Cars and Mobility symposium at CLEO 2020 in San Jose this spring.
Umar Piracha is a Staff LiDAR Systems Engineer at AEye. He has a Masters from the University of Southern California and a PhD from the College of Optics (CREOL) at the University of Central Florida, where he developed a LiDAR system using a mode-locked laser for high-resolution ranging at distances of tens of kilometers. He has experience working for large and small companies, including Intel, Imec, and Luminar Technologies, and has successfully co-founded a fiber sensing startup. He is a Senior Member of the IEEE, an Associate Editor for SPIE’s Optical Engineering journal, and serves as a reviewer for NSF’s SBIR program. Dr. Piracha has 32 conference and journal publications and 3 patents.
We sat down with Umar to discuss chairing a session at CLEO 2020, why LiDAR is critical for autonomous vehicle perception, and why he’s known around town as “Dr. Laser”.
Q: Congratulations on being named Chair of Optical Technologies for Autonomous Cars and Mobility at CLEO 2020! Can you tell us a bit about what this role entails?
Thank you! It will be a very rewarding and fun experience. As Chair, I’ll review the latest results from research groups in academia and industry around the world and select the ones making the most impact in the field of self-driving cars. These groups will then have the opportunity to present their results to an audience of technical leaders during my session at CLEO.
Q: Why is LiDAR so imperative to the overall safety and reliability of artificial perception systems for self-driving cars?
LiDAR is enabling self-driving cars to become a reality by ensuring that they drive with the least possible risk to other drivers and to those around them. Humans are emotional, can be easily distracted, and are prone to mistakes. At the same time, the human visual cortex is the most advanced perception engine on the planet. Since the processing and perceptive power of the human visual cortex and brain is far beyond that of the fastest supercomputer in the world, it is not an easy task to safely replace human drivers with automation and artificial intelligence. The inclusion of LiDAR is therefore necessary for autonomous vehicles because it reduces the burden of real-time perception and prediction, which is not possible using AI and stereo cameras alone. The reality is: multiple sensors, including LiDAR, radar, and cameras, coupled with advanced data processing and AI, will be required to make self-driving cars safe and reliable.
Q: You’re known around town as “Dr. Laser.” Care to elaborate on that nickname?
I used to teach at CREOL (The University of Central Florida) as an Adjunct Instructor, where my students would call me Dr. Umar. But since I love lasers, when I purchased my new car I wanted to get a fun license plate to go with it, so I got one that says “Dr. Laser”. I think it’s very appropriate, since I love cars – and lasers too.
Q: You used to do stand-up comedy! How would you describe your act? Who are a few of your favorite comedians?
After getting a double Masters degree, a PhD in lasers, and the nickname “Dr. Laser”, I wanted to involve myself in something totally opposite to all of that! So I took improv and stand-up comedy classes and performed a few times in Orlando. Unlike most stand-up comics, I always made sure my humor was suitable for all audiences. Personally, I think Maz Jobrani is an extremely hilarious comic!
Lidar Technology Making Smart Cities a Reality
December 6, 2019 Cities worldwide are looking to improve the lives of their citizens through technology. A smart city uses information and communications technologies to enhance its livability, workability and sustainability, according to the Smart Cities Council. The starting point is collecting data through sensors and systems in order to measure and monitor conditions throughout… Continue reading Lidar Technology Making Smart Cities a Reality
Visual Effects Studio Pioneers RTX Servers to Boost Final-Frame Rendering
Creating cinematic-quality visuals has traditionally been a time-consuming, costly process. But that’s changing, with UNIT, a London-based visual effects company, pioneering the use of NVIDIA RTX Servers to dramatically accelerate the rendering workflow for its work on post-production, design, grade, audio, and computer graphics for film, TV and commercials. In searching for the best… Continue reading Visual Effects Studio Pioneers RTX Servers to Boost Final-Frame Rendering
December 5th, 2019
Perception Blog: Technology Comparison – Flash and Scanning LiDAR
A blog post by
David Brodie BSc (Eng), Sr. Project Manager of Perception and AI, LeddarTech®
My name is David Brodie. With this occasional blog, I hope to create a platform to exchange ideas on technology and solutions related to perception. I hope to discuss a selection of topics, some at a high level and some in more technical detail.
To start with, let’s consider a high-level analysis of one of the classic problems that every autonomous vehicle faces: detecting debris or other small objects.
First, take a classic scanning lidar that surveys its environment using several scan lines, as shown in the following figure.
Figure 1: a scanning lidar’s view of the world
Theoretically, all four small objects could be detected. Objects 1 and 3 are detected because they lie directly in the path of a scan line. Although object 2 is nearer than object 3, it is not detected because it falls between scan lines. Similarly, object 4 is not detected, though in range, as it too falls between scan lines. As a scanning lidar approaches a small object, the object will be detected and lost with increasing frequency until it is close enough to be hit by a scan line in every frame.
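This scan-line geometry is easy to sketch in a few lines of code. The sketch below is a simplified elevation-only model: the 16-line beam layout, sensor height, and debris size are hypothetical values chosen for illustration, not the parameters of any particular lidar.

```python
import math

def is_hit(scan_lines_deg, obj_range_m, obj_bottom_m, obj_height_m, sensor_height_m=1.5):
    """True if any scan line's elevation angle falls within the object's angular span."""
    # Elevation angles (degrees) of the object's bottom and top edges, seen from the sensor
    lo = math.degrees(math.atan2(obj_bottom_m - sensor_height_m, obj_range_m))
    hi = math.degrees(math.atan2(obj_bottom_m + obj_height_m - sensor_height_m, obj_range_m))
    return any(lo <= line <= hi for line in scan_lines_deg)

# Hypothetical 16-line layout spanning -15 to +1 degrees of elevation
lines = [-15 + i * (16 / 15) for i in range(16)]

# A 20 cm piece of road debris, checked at decreasing range
for r in (60, 40, 20, 10):
    print(f"{r} m: {'hit' if is_hit(lines, r, 0.0, 0.2) else 'missed (between scan lines)'}")
```

With these numbers the debris falls between scan lines at longer ranges and is only reliably struck once its angular height exceeds the line spacing, which is exactly the intermittent detect-and-lose behavior described above.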
The following figure shows the same scene surveyed by a flash lidar like LeddarTech’s Pixell.
Figure 2: a flash lidar’s view of the world
In this case, the entire field of view of the lidar is illuminated. Rather than detecting point reflections, the flash lidar detects reflections from a segment. Because there are no gaps in the illumination, once an object becomes detectable it is reliably detected from then on. There are challenges here too. Object 4 fills only a small section of the relevant area, or “segment”. Object 3 is larger, but it is split across multiple segments and so also fills only a small portion of each. Such objects may not be easy to detect, but with good beam steering and signal processing it is possible.
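The segment-fill issue for objects 3 and 4 can be illustrated with the same kind of sketch. The field-of-view width and 8-segment split below are purely illustrative assumptions, not the actual segmentation of Pixell or any other flash lidar.

```python
def segment_fill_fractions(fov_deg, n_segments, obj_center_deg, obj_width_deg):
    """Fraction of each segment's horizontal angular extent covered by the object."""
    seg_w = fov_deg / n_segments
    obj_lo = obj_center_deg - obj_width_deg / 2
    obj_hi = obj_center_deg + obj_width_deg / 2
    fractions = []
    for i in range(n_segments):
        seg_lo = -fov_deg / 2 + i * seg_w
        seg_hi = seg_lo + seg_w
        # Angular overlap between the object and this segment
        overlap = max(0.0, min(obj_hi, seg_hi) - max(obj_lo, seg_lo))
        fractions.append(overlap / seg_w)
    return fractions

# A 10-degree-wide object straddling a segment boundary (like object 3):
# it covers two segments but fills only a small fraction of each.
print(segment_fill_fractions(180.0, 8, 0.0, 10.0))
```

An object sitting on a boundary contributes a weak return to two segments instead of a strong return to one, which is why such objects need good signal processing to pull out, even though they are never lost entirely between frames.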
Have a question about Leddar technology or Leddar sensors? An expert from LeddarTech will be happy to discuss it with you.