Luminar and Volvo Show Off High-Res, Long-Range Lidar (27 Nov)


Ford Statement on Business Transformation

About Ford Motor Company: Ford Motor Company is a global company based in Dearborn, Michigan. The company designs, manufactures, markets and services a full line of Ford cars, trucks, SUVs, electrified vehicles and Lincoln luxury vehicles, provides financial services through Ford Motor Credit Company and is pursuing leadership positions in electrification, autonomous vehicles and mobility solutions. …

VW turned its electric cargo van concept into a race support vehicle

VW Group has put a new spin on its all-electric I.D. Buzz Cargo van concept. This time, the German automaker has reimagined the concept as a support vehicle for the Volkswagen I.D. R, the electric vehicle prototype that had a record-breaking run this year in the Pikes Peak International Hill Climb competition. VW Group’s commercial vehicles …

Profitability of auto companies: Toyota ahead of BMW, Daimler and VW – but German carmakers invest more in the future

Global car sales are declining for the first time since the financial crisis, and the profit margin of the 16 leading auto companies has fallen to its lowest level since then. Toyota and Suzuki operate more profitably than the German carmakers. But the shrinking profit margin at VW, Daimler and BMW is mainly …

Driverless cars will need cities covered in sensors, China’s Didi Chuxing says


Chinese ride-hailing giant Didi Chuxing wants to become one of the front-runners in developing self-driving cars, the company's chief scientist for smart transportation initiatives said on Tuesday.

Didi has been working to develop autonomous vehicle technologies for three years, and has teams based in the United States and China, Henry Liu told CNBC during a fireside chat at the East Tech West conference held in the Nansha district of Guangzhou, China.

“We already have autonomous vehicles being equipped with our sensors and we have licenses in both Mountain View, California, as well as in Beijing, China,” he said. “We'll be one of the front-runners in terms of the autonomous vehicle technology development.”

Automakers and internet companies around the world are investing millions of dollars and rolling out long-term plans for self-driving vehicles. Many analysts expect widespread adoption of these vehicles to begin picking up in 2021 or 2022.


For its part, Didi is developing autonomous vehicles on two fronts, said Liu.

First, by installing sensors in vehicles so they can sense the road environment, detect objects, plan travel routes and, ultimately, control the cars. The second front is what he described as “cooperative vehicle-highway systems,” which rely more on the environment: sensors installed on roads, buildings, lamp posts and the surrounding areas provide relevant information to self-driving cars.

“The main difference is that we not only have the vehicle sensing capability, we're also going to have a roadside sensing capability, so we will be able to provide the autonomous vehicles with environment information, from the infrastructure side,” he said.
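To make that two-channel idea concrete, here is a minimal sketch of how detections from onboard sensors and roadside infrastructure might be merged into a single picture of the scene. The message format, field names, and matching rule below are all hypothetical illustrations, not Didi's actual system.

```python
# Hypothetical sketch of "cooperative vehicle-highway" sensing: merge object
# detections from the car's own sensors with detections reported by roadside
# infrastructure. All names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    obj_id: str       # e.g. "pedestrian-17"
    x: float          # position in a shared map frame, meters
    y: float
    source: str       # "vehicle" or "roadside"
    confidence: float

def fuse(vehicle_dets: list[Detection],
         roadside_dets: list[Detection],
         match_radius_m: float = 2.0) -> list[Detection]:
    """Merge the two lists, keeping the higher-confidence report
    when both sources see (roughly) the same object."""
    fused = list(vehicle_dets)
    for rd in roadside_dets:
        dup = next((vd for vd in fused
                    if (vd.x - rd.x) ** 2 + (vd.y - rd.y) ** 2
                    <= match_radius_m ** 2), None)
        if dup is None:
            fused.append(rd)   # occluded from the car, seen by infrastructure
        elif rd.confidence > dup.confidence:
            fused[fused.index(dup)] = rd
    return fused
```

The payoff of the roadside channel shows up in the `dup is None` branch: the infrastructure can report objects the car cannot yet see.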

But such a development will require very high-speed mobile internet connections to be readily available, Liu added. China is developing that technology aggressively and has outspent the United States since 2015.

Didi's advantage: 550 million users

One of Didi's major advantages when it comes to developing self-driving, smart cars is that it has a massive transportation network, according to Liu.

Didi has about 550 million users taking an average of 30 million rides every day across more than 400 cities. That allows the Chinese firm to collect plenty of data about its users, from their travel habits to traffic conditions in various cities. Generally, artificial intelligence systems require large volumes of so-called training data to learn patterns and behaviors.

“We collect a hundred terabytes of vehicle trajectory data per day,” Liu said, adding that Didi processes nearly five times as much information daily to better estimate travel routes, prices and demand for vehicles at any given time. The data also helps cities plan their traffic networks better to avoid congestion.
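Those figures support a quick back-of-envelope check. Assuming decimal terabytes and that the trajectory data comes from the roughly 30 million daily rides (both assumptions, not stated in the article), each ride contributes only a few megabytes:

```python
# Back-of-envelope check on the quoted figures (assumptions: decimal TB,
# and that all trajectory data comes from the ~30 million daily rides).
rides_per_day = 30_000_000
trajectory_tb_per_day = 100
processed_tb_per_day = trajectory_tb_per_day * 5   # "nearly five times as much"

mb_per_ride = trajectory_tb_per_day * 1_000_000 / rides_per_day
print(f"~{mb_per_ride:.1f} MB of trajectory data per ride")   # ~3.3 MB
print(f"~{processed_tb_per_day} TB processed per day")        # ~500 TB
```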

Earlier this year, Didi launched a so-called “Smart Transportation Brain” service with Chinese traffic management authorities. Using vast amounts of data from Didi, local governments and other businesses, the service uses artificial intelligence to recommend improvements to existing transport systems that can reduce travel times for commuters.

“We can also predict in terms of what's going to happen in the next 15 to 30 minutes, in terms of traffic flows,” Liu added.
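Didi has not published how its forecasting works, so the sketch below shows only the shape of the problem: turn a recent history of per-segment vehicle counts into a 15-to-30-minute outlook. A simple exponential-smoothing baseline stands in for whatever model Didi actually uses, and the counts are invented.

```python
# Toy short-horizon traffic-flow forecast over hypothetical 5-minute
# vehicle counts on one road segment; 6 steps ahead = 30 minutes.
def forecast_flow(counts_5min: list[float], horizon_steps: int = 6,
                  alpha: float = 0.3) -> list[float]:
    """Exponentially smooth the history, then hold the smoothed
    level flat for the next `horizon_steps` intervals."""
    level = counts_5min[0]
    for c in counts_5min[1:]:
        level = alpha * c + (1 - alpha) * level
    return [level] * horizon_steps

recent = [120, 135, 150, 170, 160, 180]   # vehicles per 5 minutes, made up
print(forecast_flow(recent))              # naive 30-minute outlook
```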

Didi remains one of China's most valuable start-ups, backed by major names including Apple, Alibaba and SoftBank, and it has a valuation of $56 billion, according to CB Insights. Two years ago, it acquired Uber's China business to establish its dominant position in the Chinese ride-hailing market.


Daimler and Bosch Will Launch a Pilot Robotaxi Program in San Jose in 2019 (8 Nov)


AEye Advisory Board Profile: Elliot Garbus

We sat down with each of our Advisory Board Members to ask them why they’re excited about working with AEye…
Elliot Garbus is a strategy and management consultant working with startups, established companies, and venture capital firms. Elliot retired from Intel in May 2017, where he was the Vice President and General Manager of the Transportation Solutions Division, responsible for delivering Intel’s vision for connected cars, autonomous driving, and intelligent transportation systems. Having worked directly with technology companies, automotive manufacturers, and automotive suppliers, Mr. Garbus has a unique perspective on the coming disruptive transformation that will occur as self-driving vehicles become reality.

Q: What in your past experience ultimately drew you to the autonomous vehicle arena?
I was formerly the Vice President and General Manager of the Transportation Solutions Division at Intel. In that role, I had a front-row seat as autonomous driving went from research to a race to commercialization.

The opportunity for autonomous vehicles excites me, mainly because of the positive social impact. Today, about 1.3 million people die every year in traffic accidents globally, and an additional 50 million are injured. Over 94% of collisions are caused by human error. I believe we have an opportunity to largely eliminate these fatalities and injuries, making the world a much better and much safer place.

Q: Why AEye?
AEye has a fantastic set of technologies that they’ve combined in a new way to deliver breakthroughs in perception. I’m also very impressed with the unique history of the leadership team. They have a tremendous amount of experience with LiDAR from their work in aerospace. It is unusual to find a startup in the United States with this kind of experience, and a team that has worked with LiDAR for decades.

Q: Where do you see ADAS solutions, autonomous vehicles, and/or artificial perception, heading within the next few years? The next decade? Beyond? How do you see AEye playing a pivotal role in this vision?
In biological evolution, when sight emerged, it led to a stunning acceleration of biological diversity. It’s my expectation that the integration of visual perception and computing, in the way that AEye is pioneering, will lead to a similar explosion of innovation across many industries. Ultimately, autonomous driving is going to change the way our cities are put together. It will change the way products are delivered. It will address congestion, air pollution, and will have a dramatic impact to the insurance and healthcare industries — all while making the future a better and brighter place.


The Future of Autonomous Vehicles: Part II – Blind Technology without Compassion Is Ruthless

By James R. Doty, MD

The automotive industry is on a quest to create fully functioning, safe, and responsive autonomous vehicles enhanced by artificial intelligence (AI). Today’s AI is a function of machine learning and neural networks that rely almost completely on training and repetition. This makes sense for repeatable tasks, but what about for higher-order skills, such as driving, that require more than mere mechanized logic? Humans do not drive purely from an autonomic response to input. We add emotional intelligence, intuition, and morality as decisive factors while we’re behind the wheel.

At an early age, I was introduced to four mindfulness techniques: relaxation, taming the mind, opening the heart, and clarifying the intent. Over the years, I’ve spoken often about the importance of the third lesson, which I didn’t quite grasp until later in my life: opening the heart. What I learned is that nurturing compassion and its connectedness to all of life clarifies the last practice of setting intention. For when the heart is open, we are able to think more about longer lasting and purpose-driven goals, such as cultivating relationships, helping others, and growing more aware of the intricate beauty in our shared experiences. By opening the heart, we are capable of thinking beyond cost benefit analysis, beyond selfish desires, and into support for others and the greater good.

Today, artificial intelligence faces the same challenge. As a neurosurgeon and founder of the Center for Compassion and Altruism Research and Education at Stanford University, I am fascinated by the possibility that one day AI may not only mimic human sensory perception, but also human compassion. Currently, AI has been trained to relax, focus and clarify — but it hasn’t been trained to open its heart. Together with AEye — a company that mimics human perception to create an artificial perception platform for autonomous vehicles — we are leading the discussion to change that. But we need your help. It is our collective responsibility, as compassionate humans, to initiate a dialogue with those at the helm of this technology, so we may consider the “behaviors” that will ultimately be written into self-driving software programs for potentially dangerous situations. In other words: what will be the car’s guiding ethics? To make truly safe, autonomous vehicles, will we need to “teach” them empathy and compassion?

Can we train AI to have compassion?
As I outline in Part I of this article, Think Like a Robot, Perceive Like a Human, we have been able to give autonomous vehicles human-like perception. But can we give them a heart? And should we? This is where the debate begins. Self-driving vehicles are achieving the ability to identify relevant information, process it, and respond accordingly, as a human would. Now, we must consider how to incorporate human-like empathy and compassion in their decision-making, for blind technology without compassion is ruthless. If we want computers to have compassion, we must allow space and time to build the software appropriately, to prevent processes that blind us from our humanity.

The first way to train computers to perceive with compassion is to give them enough time to infer the intent of movement. Is that a child standing on the sidewalk with potentially unpredictable behavior? Is there a ball in the road and a person following it? Is a blind person crossing the intersection, unaware of the approaching vehicle? As humans, we process these evaluations first, allowing us to see not only an object, but a person. We are compassionate in our understanding that a child may run into the street, or that an individual may be unaware of a vehicle approaching the intersection. This ultimately allows us to drive with intention, taking on responsibility for the safety of other people, in addition to ourselves. This is more advanced than conventional AI, which is programmed and trained to track objects (or “blobs”) in a cold, repetitive way.
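As a thought experiment only, the heuristic below sketches how "intent of movement" might change the caution a planner applies to the very same tracked position. Every class name and threshold is invented for illustration; nothing about AEye's or anyone else's software is implied.

```python
# Invented sketch: the same tracked object earns a different caution level
# depending on what it is and what it is near, not just where it is.
def caution_level(obj_class: str, distance_to_road_m: float,
                  speed_mps: float) -> str:
    """Rank how defensively to drive around a tracked object."""
    if obj_class == "child" and distance_to_road_m < 3.0:
        return "high"    # unpredictable; may dart into the street
    if obj_class == "ball":
        return "high"    # a person often follows a ball into the road
    if obj_class == "pedestrian" and speed_mps > 0 and distance_to_road_m < 1.0:
        return "medium"  # moving toward the roadway
    return "low"

print(caution_level("child", 2.0, 0.0))   # "high", even though it is stationary
```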

The second way to give computers compassion is to develop AI with situational awareness. Situational awareness means that a driver understands the need to approach an intersection with caution, as people may be crossing the street. Conventional AI in autonomous vehicles lacks this type of perception. However, innovative companies like AEye build sensors to have situational awareness, allowing autonomous vehicles to not only have capabilities that we take for granted in human perception (like differentiating between how we maneuver our vehicles through an urban area versus driving along a country road), but to have intuition, compassion and understanding of possible intent. For example, if the system’s LiDAR sensor identifies a large object in front of the vehicle, the camera and computer vision algorithms work in unison to more fully investigate the scene to identify whether it is a truck, an ambulance, or a school bus and, therefore, initiate an appropriate response (such as slowing down in anticipation of the school bus stopping). Building situational awareness into self-driving systems inherently builds in social morals.
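Here is a hedged sketch of that LiDAR-plus-camera flow: a LiDAR return flags a large object, a hypothetical vision classifier (`classify_roi`, assumed here, not a real API) labels the region, and the label, not just the object's size, selects the response.

```python
# Sketch of the described flow. `classify_roi` stands in for a real
# computer-vision model; the labels and responses are illustrative.
def respond_to_large_object(lidar_bbox, camera_image, classify_roi) -> str:
    label = classify_roi(camera_image, lidar_bbox)  # e.g. "truck", "school_bus"
    if label == "school_bus":
        return "slow_down"            # anticipate the bus stopping
    if label == "ambulance":
        return "yield_right_of_way"
    return "maintain_speed_monitor"

# Example with a stub classifier:
print(respond_to_large_object(None, None, lambda img, box: "school_bus"))
```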

Third, if we look at our own behavior (as a species and as individuals), we see that we continually act upon our assumptions (rules of thumb or shared knowledge), which are guided by feedback and effects witnessed from past behavior. Choosing the most compassionate decision is determined by the context of a single moment, and this context is determined by our unique ability to efficiently assess our environment. Therefore, in the situation of driving a vehicle, for our own survival and for the empathetic survival of others, we make calculated risks. As examples: turning left at an intersection, estimating the required speed to merge with oncoming traffic, predicting the best moment to change lanes on the highway, even simply driving in one city versus another. Each of these scenarios requires knowledge of different contexts (situational awareness) and rules guided by previous experiences (assumptions). Our perspective of the world is built by our unique history, which in turn leads to better, more compassionate perception.

An AI ‘Trolley Problem’
The AI algorithm that coordinates self-driving car responses must make decisions which, at times, may not be easy even for a human. Consider a scenario where a cat runs into the path of an autonomous vehicle on a crowded city street. What is the vehicle programmed to do? Option 1) it hits the cat. This decision doesn’t impact the safety of the humans in the vehicle, so the AI optimizes for the safety of humans overall (sorry, Kitty). Option 2) it brakes hard or swerves to avoid hitting the cat. Although this decision would spare the cat’s life, it could potentially cause a serious accident, which would harm humans and cause traffic delays. Option 3) it runs a sophisticated algorithm that calculates the potential risk of stopping or swerving for the cat and determines the optimal outcome before deciding. But in this scenario, how many dimensions can be considered simultaneously? My choice would be Option 3, as I would opt (if I can) to save the cat. But this poses another ethical conundrum: who determines the programmed decision the vehicle would make?
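Read literally, Option 3 is an expected-harm minimization. The toy calculation below makes the point: the outcome depends entirely on the placeholder probabilities and weights, and deciding those weights is precisely the ethical question posed here. All numbers are invented.

```python
# Invented numbers: score each maneuver by estimated expected harm.
options = {
    # maneuver: (P(harm to humans), P(harm to animal), P(traffic disruption))
    "continue":   (0.00, 0.95, 0.00),
    "brake_hard": (0.10, 0.05, 0.60),
    "swerve":     (0.25, 0.02, 0.40),
}
weights = (10.0, 1.0, 0.2)   # relative cost of each outcome; who sets these?

def expected_harm(p):
    return sum(w * pi for w, pi in zip(weights, p))

best = min(options, key=lambda m: expected_harm(options[m]))
print(best)   # "continue" under these placeholder numbers
```

Under these particular weights the car hits the cat; raise the animal weight and it brakes instead. The code settles nothing; it merely relocates the ethics into the weight vector.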

As hundreds of thousands of autonomous vehicles enter our streets, will we need a standard definition of compassion shared by all vehicles so they can predict behavior based on situational awareness? Or will compassion be a feature that is offered to differentiate one service from another? Should the vehicle owner, the car company, or an independent entity define a vehicle’s ethics? How about its level of empathy and compassion? These are all questions that have yet to be answered.

‘A man without ethics is a wild beast loosed upon this world.’
The danger of our machinery mimicking human perception without compassion is that, if AI works correctly, it eventually won’t require human tinkering. Thus, if we don’t know how to open AI’s heart and do not prioritize certain contextual data now, we will create a world in which we get from Point A to Point B by ruthless calculations, which could potentially result in immense destruction along the way. Therefore, I amend the Camus quote, above: “A machine without ethics is a wild beast loosed upon this world.”

Unlike machines, we humans use our minds and our hearts to make decisions, cultivated by knowledge and perspective based on personal experience. The AI technology being developed today must be sensitive to that, and we must consider setting clear intentions when writing these programs from the beginning. While typical software programs have been calibrated to identify a cost-benefit to many decisions, we will soon face challenges that pose new moral issues where the cost may be a life.

Artificial intelligence will only be as sensitive, compassionate and aware as we design it to be. Humans are caring and kind, and we must incorporate the best parts of humanity (our compassion and empathy) into our artificial intelligence. Otherwise, we risk blindly building a future full of cold, calculating, and ruthless technology. Now is the time to recognize this need for the lesson I learned late in life: to open the heart and make compassion the priority — to make compassion our culture. It’s our responsibility to design our computers in this same light. Therefore, we must move beyond artificial intelligence. We must create artificial humanity.

_____

James R. Doty, MD, is a clinical professor in the Department of Neurosurgery at Stanford University School of Medicine. He is also the founder and director of the Center for Compassion and Altruism Research and Education at Stanford University, of which His Holiness the Dalai Lama is the founding benefactor. He works with scientists from a number of disciplines examining the neural bases for compassion and altruism. He holds multiple patents and is the former CEO of Accuray. Dr. D…

Self-Driving Trucks Will Transport Limestone in a Norwegian Mine

Rock Carriers: An often-overlooked use for autonomous driving technology is in industrial applications, where raw materials have to be shipped from point A to point B. In what automaker Volvo calls its “first commercial autonomous solution transporting limestone from an open pit mine to a nearby port,” six existing autonomous trucks will be upgraded with sophisticated…