Lidar, which helps the Waymo Driver perceive the world around it in 3D, has become more common than ever before. Car manufacturers are integrating it into driver-assist systems, phone manufacturers are using it to enhance augmented reality capabilities, and robot vacuums even rely on it to better map the layout of your home. At Waymo, lidar is core to our sensor suite, with its ability to perceive objects and other road users with industry-leading resolution, up to hundreds of meters away, across a range of conditions.
Today, Waymo software engineer Mark Shand released a new optical model he developed that helps researchers advance and test laser capabilities in different lighting conditions. The optical model, notable for its portability, accessibility, and simplicity, offers new insight into how lasers interact with a broad range of human eyes across a range of viewing conditions.
Ahead of the fall meeting of the IEC’s Technical Committee (TC) on laser equipment and optical safety, TC 76, where Mark will present this work, we sat down with him to hear more about his project and how his new model could help researchers unlock new capabilities from lidar systems in the automotive industry and beyond. Read on for the conversation.
A conversation with Mark Shand, Software Engineer, Lidar
The Waymo Team: Good to chat, Mark! We have some really talented researchers at Waymo, working on everything from data-driven lidar range image compression to large-scale, image-based scene reconstruction. Can you tell us more about your research?
Mark Shand: Essentially, lidar and other laser-based systems have gotten much more effective over recent decades, and the latest biomedical research shows we could take even greater advantage of these developments without having any effect on their safety classification (for example, still achieving the safety level required to be a Class-1 laser). But firsthand engagement from researchers is needed to ensure that the standards governing the use of such lasers reflect the latest findings.
One way standards could be updated is to consider how lasers can be safely used in different lighting conditions. Imagine shining the same laser pointer, with a relatively wide beam, in a dark room and then outside on a sunny day. While the laser emits the same energy in both scenarios, the eye captures different amounts of light, because the pupil constricts in bright conditions and dilates in dark ones.
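To put rough numbers on that intuition, here is a back-of-the-envelope sketch. It assumes a beam wider than the pupil, so the power entering the eye simply scales with pupil area, and the 2 mm and 7 mm pupil diameters are illustrative values only, not figures from any standard or from Waymo’s systems.

```python
import math

def pupil_area_mm2(diameter_mm: float) -> float:
    """Area of a circular pupil, in square millimeters."""
    return math.pi * (diameter_mm / 2) ** 2

# Illustrative pupil diameters only; real values vary from person to person:
# roughly 2 mm in bright sunlight, roughly 7 mm in a dark room.
bright_d_mm, dark_d_mm = 2.0, 7.0

# For a beam much wider than the pupil, the power entering the eye scales
# with pupil area, so the ratio of areas is the ratio of captured power.
ratio = pupil_area_mm2(dark_d_mm) / pupil_area_mm2(bright_d_mm)
print(f"A dark-adapted pupil captures ~{ratio:.0f}x more of the same wide beam")
# With these example diameters, the ratio works out to about 12x.
```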
But current technical standards don’t offer a framework for adjusting the power of laser systems to account for this difference—even for safe, low-power lasers like Waymo’s lidar systems. The Waymo Driver’s lidar, for example, is a Class-1 laser product within the FDA certification system—the same classification as the low-power lasers in common household appliances including CD and DVD players.
My research and the accompanying model aim to explore the feasibility of a new framework by providing a way of virtually testing the effects of different laser beam profiles. By using the power of today’s computers and the latest understanding of the human eye, we can quickly and easily test some of the core assumptions that have underpinned laser standards for decades.
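The released model is more sophisticated than this, drawing on measured ocular-aberration data, but the core idea can be sketched in a few lines. The snippet below is a minimal illustration rather than the model itself: it treats the pupil as a simple circular aperture, assumes a centered Gaussian beam, and asks how much of the beam’s power actually enters the eye; the function names and numbers are invented for the example.

```python
import math

def captured_fraction(pupil_diameter_mm: float, beam_radius_mm: float) -> float:
    """Fraction of a centered Gaussian beam's power (1/e^2 radius
    beam_radius_mm) that passes through a circular pupil aperture."""
    pupil_radius = pupil_diameter_mm / 2
    return 1.0 - math.exp(-2.0 * pupil_radius**2 / beam_radius_mm**2)

def captured_power_mw(total_power_mw: float, pupil_diameter_mm: float,
                      beam_radius_mm: float) -> float:
    """Power entering the eye for a given beam profile and pupil size."""
    return total_power_mw * captured_fraction(pupil_diameter_mm, beam_radius_mm)

# Compare the same 1 mW beam against a daytime (~2 mm) and a dark-adapted
# (~7 mm) pupil, for a narrow and a wide beam; all numbers are illustrative.
for beam_radius_mm in (1.0, 5.0):
    day = captured_power_mw(1.0, 2.0, beam_radius_mm)
    night = captured_power_mw(1.0, 7.0, beam_radius_mm)
    print(f"beam radius {beam_radius_mm} mm: "
          f"day {day:.3f} mW, night {night:.3f} mW")
```

Swapping in a different beam profile, or a different assumption about pupil size, only means changing those two functions, which is the kind of quick, virtual experiment the model is meant to make easy.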
WT: An exciting aspect of working in the autonomous driving space is seeing the effects the technology has on other industries. Can you tell me a bit more about the broader implications of your work?
MS: The research will be part of a body of work that’s informing updated laser standards. That’s important, because modern laser standards could unlock safer and more innovative laser technology. For lidar, new standards could encourage the development of sensors that can adjust the strength of the laser for certain environmental conditions—increasing in bright light and decreasing in darker conditions—all while maintaining bystander safety.
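As a purely hypothetical illustration of that idea (none of these thresholds or power budgets come from an actual standard or from Waymo’s systems), an emitter could pick a conservative pupil-size assumption from the ambient light level and cap its output so that the power reaching a bystander’s eye stays within a fixed budget:

```python
import math

EYE_POWER_BUDGET_MW = 0.5  # illustrative per-eye budget, not a real limit

def assumed_pupil_diameter_mm(ambient_lux: float) -> float:
    """Conservative (large) pupil-size assumption for an ambient light level."""
    if ambient_lux > 10_000:   # bright daylight
        return 3.0
    if ambient_lux > 100:      # indoors or dusk
        return 5.0
    return 7.0                 # darkness: assume a fully dilated pupil

def max_emitted_power_mw(ambient_lux: float, beam_area_mm2: float) -> float:
    """Largest emitted power that keeps captured power within the budget,
    assuming a uniform beam larger than the pupil."""
    pupil_area = math.pi * (assumed_pupil_diameter_mm(ambient_lux) / 2) ** 2
    fraction_captured = min(1.0, pupil_area / beam_area_mm2)
    return EYE_POWER_BUDGET_MW / fraction_captured

print(max_emitted_power_mw(50_000, beam_area_mm2=100.0))  # bright: higher cap
print(max_emitted_power_mw(1, beam_area_mm2=100.0))       # dark: lower cap
```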
You and I both know how critical high-performance lidar is. There’s been a worrying increase in pedestrian deaths on our roads recently, and the vast majority of them happen in low-light conditions. The fact that lidar can spot pedestrians in the roadway hundreds of meters away day and night is a big reason the developers of advanced driving technologies are becoming more interested in this type of sensor. More powerful lidars can see farther and could make our roads even safer.
WT: What motivated you to join IEC Technical Committee 76 and pursue this research?
MS: At the encouragement of Waymo’s lead optical engineer, Hamilton Shepard, I went back to school for a master’s degree in optical sciences, 30 years after getting my PhD. What was most brilliant about the whole experience was being able to go back to university at the same time as my eldest child. The Shand family weekends were all about homework for some time!
The Shand family at their youngest daughter’s graduation earlier this summer
My enrollment was also prompted by my work with the IEC. I was invited to join IEC TC 76 in a personal capacity in 2017 because of my experience working with lidar at Waymo, bringing expertise in an area that wasn’t well understood or otherwise accounted for among the industry experts participating at the time. There weren’t many of us at IEC TC 76 working on automotive lidar then, just me and one other engineer, but our numbers have grown as interest in the implications of better laser standards has!
WT: It feels like these potential updates to the safety standards are really ramping up as the auto industry invests in the technology. How are you helping the IEC think about standards for vehicle-based lidar?
MS: IEC TC 76 has three working groups considering vehicle-based lidar—the High Ambient Illumination project, the Automatic Emission Control project, and the Moving Platforms project. The aim of each is to create standards for a series of well-defined scenarios in the operation of a vehicle, like daytime driving, that affect the performance of laser-based systems. This has big implications beyond autonomous driving. A gantry crane in a shipyard, for example, is another lidar-capable moving platform that operates in bright-light conditions—so having better lidar standards could encourage more innovation in shipping and logistics, too.
That’s why I’m so excited to share this research. By open-sourcing the model, we’re helping fill a gap in lidar studies so researchers can advance lidar standards and guidelines for the safe deployment of lasers even further, without needing to build these models from scratch.
WT: Thanks for taking the time, Mark! Any final thoughts?
MS: This work wouldn’t have been possible without Larry Thibos¹, professor emeritus at Indiana University, Raymond Applegate² ³ at the University of Houston, and their colleagues, whose seminal papers on ocular aberrations provided critical data that helped form the basis of my model. Just as I was able to build my work on the foundation these researchers created, I look forward to seeing what the broader community discovers with my work as we all continue innovating and developing the safest and most capable technology.
²Applegate, Donnelly, Marsack, Koenig, & Pesudovs (2007). Three-dimensional relationship between high-order root-mean-square wavefront error, pupil diameter, and aging. Journal of the Optical Society of America A, 24(3), 578–87.
³Hastings, Marsack, Thibos, & Applegate (2018). Normative best-corrected values of the visual image quality metric VSX as a function of age and pupil size. Journal of the Optical Society of America A, 35(5), 732–9.