Europcar Mobility Group Makes Leadership Changes

Europcar Mobility Group announced changes to its leadership team. As of January 1, the Group management board will be composed of: Caroline Parot, Group CEO; Fabrizio Ruggiero, Group deputy CEO, head of business units (cars, vans & trucks, low cost, new mobility and international coverage); Olivier Baldassari,…

Daimler and Bosch Will Launch a Pilot Robotaxi Program in San Jose in 2019 (8 Nov)


AEye Advisory Board Profile: Elliot Garbus

We sat down with each of our Advisory Board Members to ask them why they’re excited about working with AEye…
Elliot Garbus is a strategy & management consultant working with startups, established companies, and venture capital firms. Elliot retired from Intel in May of 2017, where he was the Vice President and General Manager of the Transportation Solutions Division, responsible for delivering Intel’s vision for connected cars, autonomous driving, and intelligent transportation systems. Having worked directly with technology companies, automotive manufacturers, and automotive suppliers, Mr. Garbus has a unique perspective on the coming disruptive transformation that will occur as self-driving vehicles become reality.

Q: What in your past experience ultimately drew you to the autonomous vehicle arena?
I was formerly the Vice President and General Manager of the Transportation Solutions Division at Intel. In that role, I had a front-row seat as autonomous driving went from research to a race to commercialization.

The opportunity for autonomous vehicles excites me. The main reason is the positive social impact. Today, about 1.3 million people die every year from traffic accidents globally, and an additional 50 million are injured. Over 94% of collisions are caused by human error. I believe we have an opportunity to largely eliminate these fatalities and injuries, making the world a much better and much safer place.

Q: Why AEye?
AEye has a fantastic set of technologies that they’ve combined in a new way to deliver breakthroughs in perception. I’m also very impressed with the unique history of the leadership team. They have a tremendous amount of experience with LiDAR from their work in aerospace. It is unusual to find a startup in the United States with this kind of experience and a team that has worked with LiDAR for decades.

Q: Where do you see ADAS solutions, autonomous vehicles, and/or artificial perception heading within the next few years? The next decade? Beyond? How do you see AEye playing a pivotal role in this vision?
In biological evolution, when sight emerged, it led to a stunning acceleration of biological diversity. It’s my expectation that the integration of visual perception and computing, in the way that AEye is pioneering, will lead to a similar explosion of innovation across many industries. Ultimately, autonomous driving is going to change the way our cities are put together. It will change the way products are delivered. It will address congestion and air pollution, and will have a dramatic impact on the insurance and healthcare industries — all while making the future a better and brighter place.


The Future of Autonomous Vehicles: Part II – Blind Technology without Compassion Is Ruthless

By James R. Doty, MD

The automotive industry is on a quest to create fully functioning, safe, and responsive autonomous vehicles enhanced by artificial intelligence (AI). Today’s AI is a function of machine learning and neural networks that rely almost completely on training and repetition. This makes sense for repeatable tasks, but what about for higher-order skills, such as driving, that require more than mere mechanized logic? Humans do not drive purely from an autonomic response to input. We add emotional intelligence, intuition, and morality as decisive factors while we’re behind the wheel.

At an early age, I was introduced to four mindfulness techniques: relaxation, taming the mind, opening the heart, and clarifying the intent. Over the years, I’ve spoken often about the importance of the third lesson, which I didn’t quite grasp until later in my life: opening the heart. What I learned is that nurturing compassion and its connectedness to all of life clarifies the last practice of setting intention. For when the heart is open, we are able to think more about longer-lasting and purpose-driven goals, such as cultivating relationships, helping others, and growing more aware of the intricate beauty in our shared experiences. By opening the heart, we are capable of thinking beyond cost-benefit analysis, beyond selfish desires, and into support for others and the greater good.

Today, artificial intelligence faces the same challenge. As a neurosurgeon and founder of the Center for Compassion and Altruism Research and Education at Stanford University, I am fascinated by the possibility that one day AI may not only mimic human sensory perception, but also human compassion. Currently, AI has been trained to relax, focus and clarify — but it hasn’t been trained to open its heart. Together with AEye — a company that mimics human perception to create an artificial perception platform for autonomous vehicles — we are leading the discussion to change that. But we need your help. It is our collective responsibility, as compassionate humans, to initiate a dialogue with those at the helm of this technology, so we may consider the “behaviors” that will ultimately be written into self-driving software programs for potentially dangerous situations. In other words: what will be the car’s guiding ethics? To make truly safe, autonomous vehicles, will we need to “teach” them empathy and compassion?

Can we train AI to have compassion?
As I outline in Part I of this article, Think Like a Robot, Perceive Like a Human, we have been able to give autonomous vehicles human-like perception. But can we give them a heart? And should we? This is where the debate begins. Self-driving vehicles are achieving the ability to identify relevant information, process it, and respond accordingly, as a human would. Now, we must consider how to incorporate human-like empathy and compassion in their decision-making — for blind technology without compassion is ruthless. If we want computers to have compassion, we must allow space and time to build the software appropriately, to prevent processes that blind us from our humanity.

The first way to train computers to perceive with compassion is to give them enough time to infer the intent of movement. Is that a child standing on the sidewalk with potentially unpredictable behavior? Is there a ball in the road and a person following it? Is a blind person crossing the intersection, unaware of the approaching vehicle? As humans, we process these evaluations first, allowing us to see not only an object, but a person. We are compassionate in our understanding that a child may run into the street, or an individual may be unaware of a vehicle approaching the intersection. This ultimately allows us to drive with intention, taking on responsibility for the safety of other people, in addition to ourselves. This is more advanced than conventional AI, which is programmed and trained to track objects (or “blobs”), in a cold, repetitive way.

The second way to give computers compassion is to develop AI with situational awareness. Situational awareness means that a driver understands the need to approach an intersection with caution, as people may be crossing the street. Conventional AI in autonomous vehicles lacks this type of perception. However, innovative companies like AEye build sensors to have situational awareness, allowing autonomous vehicles to not only have capabilities that we take for granted in human perception (like differentiating between how we maneuver our vehicles through an urban area versus driving along a country road), but to have intuition, compassion and understanding of possible intent. For example, if the system’s LiDAR sensor identifies a large object in front of the vehicle, the camera and computer vision algorithms work in unison to more fully investigate the scene to identify whether it is a truck, an ambulance, or a school bus and, therefore, initiate an appropriate response (such as slowing down in anticipation of the school bus stopping). Building situational awareness into self-driving systems inherently builds in social morals.
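As a rough illustration of this LiDAR-then-camera escalation, consider the following sketch. It is purely hypothetical: the type names, the size threshold, and the labels are invented for illustration, and this is not AEye's actual iDAR implementation.

```python
# Hypothetical sketch of escalating a large LiDAR return to camera
# classification, then mapping the label to a cautious response.
# All names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class LidarBlob:
    cross_section_m2: float  # apparent size of the detected object
    bounding_box: tuple      # image-space region to hand to the camera

LARGE_OBJECT_M2 = 4.0  # assumed "large object" threshold

def respond(blob: LidarBlob, classify) -> str:
    """Escalate large LiDAR returns to camera classification, then act."""
    if blob.cross_section_m2 < LARGE_OBJECT_M2:
        return "keep tracking"           # small object: no escalation needed
    label = classify(blob.bounding_box)  # camera + CV take a closer look
    if label == "school bus":
        return "slow down, anticipate a stop"  # bus may stop for children
    if label == "ambulance":
        return "yield right of way"
    return "maintain safe distance"

# Stub classifier standing in for a real computer-vision model.
print(respond(LidarBlob(6.5, (0, 0, 120, 80)), lambda box: "school bus"))
```

The design point is that the more expensive camera classification is invoked only when the LiDAR return warrants it, and that the label, not just the blob, determines the vehicle's response.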

Third, if we look at our own behavior (as a species and as individuals), we see that we continually act upon our assumptions (rules of thumb or shared knowledge), which are guided by feedback and effects witnessed from past behavior. Choosing the most compassionate decision is determined by the context of a single moment, and this context is determined by our unique ability to efficiently assess our environment. Therefore, in the situation of driving a vehicle, for our own survival and for the empathetic survival of others, we make calculated risks. As examples: turning left at an intersection, estimating the required speed to merge with oncoming traffic, predicting the best moment to change lanes on the highway, even simply driving in one city versus another. Each of these scenarios requires knowledge of different contexts (situational awareness) and rules guided by previous experiences (assumptions). Our perspective of the world is built by our unique history, which in turn leads to better, more compassionate perception.

An AI ‘Trolley Problem’
The AI algorithm that coordinates self-driving car responses must make decisions which, at times, may not be easy even for a human. Consider a scenario where a cat runs into the path of an autonomous vehicle on a crowded city street. What is the vehicle programmed to do? Option 1) it hits the cat. This decision doesn’t impact the safety of the humans in the vehicle, thus the AI optimizes for the safety of humans overall (sorry, Kitty). Option 2) it brakes hard or swerves to avoid hitting the cat. Although this decision would spare the cat’s life, it could potentially cause a serious accident, which would harm humans and cause traffic delays. Option 3) it applies a sophisticated algorithm that calculates the potential risk of stopping/swerving for the cat and determines the optimal outcome before deciding. But in this scenario, how many dimensions can be considered simultaneously? My choice would be Option 3, as I would opt (if I can) to save the cat. But this poses another ethical conundrum: who determines the programmed decision the vehicle would make?
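To make Option 3 concrete, here is a minimal, purely hypothetical sketch of a risk-weighted decision rule. The option names, probabilities, and cost weights are all invented for illustration; a production system would weigh far more dimensions in real time.

```python
# Toy risk-weighted chooser for the scenario above. The options, weights,
# and probabilities are hypothetical assumptions, not anything shipping
# in a real autonomous-driving stack.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_human_harm: float   # estimated probability of harming a person
    p_animal_harm: float  # estimated probability of harming the animal
    p_collision: float    # estimated probability of a secondary collision

# These weights encode an ethical stance; who sets them is the open question.
W_HUMAN, W_ANIMAL, W_COLLISION = 1000.0, 10.0, 100.0

def expected_cost(opt: Option) -> float:
    return (W_HUMAN * opt.p_human_harm
            + W_ANIMAL * opt.p_animal_harm
            + W_COLLISION * opt.p_collision)

options = [
    Option("hit the cat",      p_human_harm=0.00, p_animal_harm=1.00, p_collision=0.01),
    Option("brake hard",       p_human_harm=0.05, p_animal_harm=0.10, p_collision=0.20),
    Option("swerve around it", p_human_harm=0.10, p_animal_harm=0.05, p_collision=0.30),
]

best = min(options, key=expected_cost)
print(f"chosen: {best.name} (expected cost {expected_cost(best):.1f})")
```

With the weights assumed here, the sketch sacrifices the cat; increase W_ANIMAL enough and it brakes instead. That sensitivity is precisely the conundrum: whoever sets the weights decides the outcome.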

As hundreds of thousands of autonomous vehicles enter our streets, will we need a standard definition of compassion shared by all vehicles so they can predict behavior based on situational awareness? Or will compassion be a feature that is offered to differentiate one service from another? Should the vehicle owner, the car company, or an independent entity define a vehicle’s ethics? How about its level of empathy and compassion? These are all questions that have yet to be answered.

‘A man without ethics is a wild beast loosed upon this world.’ (Albert Camus)
The danger of our machinery mimicking human perception without compassion is that, if AI works correctly, it eventually won’t require human tinkering. Thus, if we don’t know how to open AI’s heart and do not prioritize certain contextual data now, we will create a world in which we get from Point A to Point B by ruthless calculations, which could potentially result in immense destruction along the way. Therefore, I amend the Camus quote, above: “A machine without ethics is a wild beast loosed upon this world.”

Unlike machines, we humans use our minds and our hearts to make decisions, cultivated by knowledge and perspective based on personal experience. The AI technology being developed today must be sensitive to that, and we must consider setting clear intentions when writing these programs from the beginning. While typical software programs have been calibrated to weigh the costs and benefits of many decisions, we will soon face challenges that pose new moral issues where the cost may be a life.

Artificial intelligence will only be as sensitive, compassionate and aware as we design it to be. Humans are caring and kind, and we must incorporate the best parts of humanity (our compassion and empathy) into our artificial intelligence. Otherwise, we risk blindly building a future full of cold, calculating, and ruthless technology. Now is the time to recognize this need for the lesson I learned late in life: to open the heart and make compassion the priority — to make compassion our culture. It’s our responsibility to design our computers in this same light. Therefore, we must move beyond artificial intelligence. We must create artificial humanity.

_____

James R. Doty, MD, is a clinical professor in the Department of Neurosurgery at Stanford University School of Medicine. He is also the founder and director of the Center for Compassion and Altruism Research and Education at Stanford University, of which His Holiness the Dalai Lama is the founding benefactor. He works with scientists from a number of disciplines examining the neural bases of compassion and altruism. He holds multiple patents and is the former CEO of Accuray.

SEAT named leader in digital transformation by Financial Times study

The report by the Financial Times singles out 100 leading European companies in the field of digitalisation
The selected companies stand out for their ability to adapt to new technologies in an innovative way
SEAT, recognised for tackling digital challenges in manufacturing

MARTORELL, 23-Nov-2018 — /EuropaWire/ — SEAT has been acknowledged as a leader in digital transformation by a study carried out by the prestigious British daily the Financial Times. This publication, together with Google, Nesta and The Innovation Foundation, selected 100 organisations, people and companies from 4,000 entries that are spearheading digital transformation in Europe, a key factor in economic growth, job creation and entering new markets. In recognising SEAT, the Financial Times cited the Company’s open innovation programme at its flagship plant in Martorell, which helps tackle digital challenges in manufacturing.

SEAT President Luca de Meo stated that “digitalisation is a strategic priority. SEAT is one of the companies that invests the most in R&D in Spain, and has concentrated its efforts on developing new technologies in order to boost productivity and diversify its business. We are working on becoming a benchmark in future mobility. Having been selected by the Financial Times as one of the 100 leading European companies in digital transformation is a significant recognition of the efforts made by the company and the entire team.”

Industry 4.0, key to SEAT’s digitalisation

SEAT is promoting an ambitious transformation process whereby all of its production activities are being adapted to the digital environment with the most disruptive technologies on the market. The Spanish carmaker is developing and applying digital tools and solutions aimed at vehicle production that enable the company to gain in efficiency, flexibility and agility. Examples include implementing artificial intelligence, collaborative robots, virtual reality and big data in the Martorell factory to revolutionise vehicle design and production.

Furthermore, SEAT has a biomechanical laboratory which stands out for its contribution to developing more ergonomic workstations. This one-of-a-kind facility in Spain features more than 20 cameras that process workers’ musculoskeletal characteristics in 3D with the aim of preventing pathologies resulting from the production process as well as improving rehabilitation in the event of injuries. In addition, SEAT has implemented training programmes using an innovative method that explains industrial transformation in a way that is easy, interactive and digital. Since the programme began, more than 2,500 employees have attended the courses.

Designing future mobility

In the framework of the Easy Mobility strategy, SEAT’s goal is to build a portfolio of products and services to offer customers new urban mobility solutions. To this end, SEAT created Metropolis:Lab Barcelona in 2017, a centre of excellence dedicated to researching and developing new urban mobility solutions that is integrated into the Volkswagen Group’s IT Lab network. This year the company launched XMOBA to test and commercialise mobility services. In addition, SEAT acquired Respiro, a pioneering hourly car sharing service in Spain.

Last week at the Smart City Expo World Congress, SEAT presented its latest developments, which will contribute to boosting the global transition towards smarter, more sustainable mobility. The initiatives developed by the SEAT Metropolis:Lab include ride-sharing and Bus on Demand, which XMOBA will roll out as a pilot test in 2019. Other highlights included the evolution of the SEAT Cristobal concept car, now with 5G technology; the new socially responsible navigation project together with Waze and the Barcelona City Council; and the brand’s first vehicle in its urban micromobility strategy, the SEAT eXS powered by Segway.

SEAT Communications

Cristina Vall-Losada
Head of Corporate Communications
T / +34 93 708 53 78
M/ +34 646 295 296
cristina.vall-llosada@seat.es

Aurora Vidal
International Corporate Communications
T / +34 93 708 40 05
M/ +34 608 483 266
aurora.vidal@seat.es

SOURCE: SEAT, S.A.


Mahindra’s Alturas G4 to Redefine the High-End SUV Segment

Key highlights

To be equipped with Hi-Tech & Safety features like 8-way Powered Driver Seat with Memory Profile, Dual Zone FATC, 9 Airbags, 3D 360 degree Around View Camera, etc.
To be available through a separate high-end showroom within the Mahindra ‘World of SUVs’ with exclusive Relationship Managers
Launch of Purple Club+, a first-in-category loyalty program

Mumbai, November 16, 2018: Mahindra & Mahindra Ltd. (M&M), a part of the US $20.7 billion Mahindra Group, today revealed the key features of its high-end SUV, the Alturas G4.

The Alturas G4 will be equipped with significant Hi-Tech & Safety features including 8-way Powered Driver Seat with Memory Profile, Dual Zone FATC, 9 Airbags, 3D 360 degree Around View Camera, Ventilated Seats, Active Roll-over Protection, etc. It will compete with players that operate in the Rs.30+ Lakhs (ex-showroom) price range.

Veejay Ram Nakra, Chief of Sales & Marketing, M&M Ltd. said, “The Alturas G4 is our most luxurious offering till date and will come with a host of technology & safety features, many of which are not available in vehicles in a similar price range. We have always been a pioneer when it comes to creating industry benchmarks and the Alturas G4 will be no different. We are certain that with the Alturas G4, we would redefine the high-end SUV segment.”

The Alturas G4 would be exclusively available through a separate high-end showroom within the existing Mahindra ‘World of SUVs’ dealerships. These outlets will be equipped with ultra-modern digital technology to provide an enhanced and immersive high-end experience for customers. All Alturas G4 customers would have access to exclusive Relationship Managers to cater to their requirements, another segment first.

With the Alturas G4, Mahindra would also be introducing a new premium loyalty program, called Purple Club+. The Purple Club+ program will be a first-in-category loyalty program that would enable customers to earn and redeem points based on engagements with the brand.

Brand website at www.alturasg4.com

Please use the following hashtag for social media updates

#AlturasG4

About Mahindra

The Mahindra Group is a USD 20.7 billion federation of companies that enables people to rise through innovative mobility solutions, driving rural prosperity, enhancing urban living, nurturing new businesses and fostering communities. It enjoys a leadership position in utility vehicles, information technology, financial services and vacation ownership in India and is the world’s largest tractor company, by volume. It also enjoys a strong presence in agribusiness, aerospace, commercial vehicles, components, defense, logistics, real estate, renewable energy, speedboats and steel, amongst other businesses. Headquartered in India, Mahindra employs over 2,40,000 people across 100 countries.

Learn more about Mahindra on www.mahindra.com / Twitter and Facebook: @MahindraRise

Media contact information

Mohan Nair
Vice President (Communications)
Mahindra & Mahindra Ltd.
Landline – +91 22 28468510
Email – nair.mohan@mahindra.com

Self-Driving Trucks Will Transport Limestone in a Norwegian Mine

Rock Carriers: An often-overlooked use for autonomous driving technology is in industrial applications, where raw materials have to be shipped from point A to point B. In what automaker Volvo calls its “first commercial autonomous solution transporting limestone from an open pit mine to a nearby port,” six existing autonomous trucks will be upgraded with sophisticated…

Salaried-worker layoffs will cut deep at GM

General Motors Co. will likely have to lay off nearly 6,000 salaried workers after roughly 2,250 employees requested to take the buyout the Detroit automaker offered to North American salaried employees and global executives last month.
The number of employees who asked to take the buyout was outlined in a portion of a memo to employees from CEO Mary Barra, obtained by The Detroit News. GM said Monday it was targeting 8,000 jobs with the buyouts, a benchmark the company will now have to meet with about 5,750 layoffs.
Managers from each department were given cost-cutting goals to meet by the end of the year, which could be met by addressing discretionary spending or leveraging buyouts. The managers still have to approve the buyout requests from their employees before GM knows exactly how many employees it needs to lay off.
The automaker offered buyouts to 18,000 salaried workers on Halloween, and the deadline to accept the offer was last week.
Under GM's buyout offer, eligible employees could receive six months' pay and six months' health care coverage starting in February, though on a case-by-case basis some employees could leave before the end of the year to effectively get eight months' compensation.
The expected layoffs come as GM is also planning to stop production at five plants next year, including Detroit-Hamtramck Assembly and Warren Transmission, affecting about 14,300 jobs across the company.
The buyouts and layoffs among GM's salaried workers are part of what the automaker has called a transformation of its workforce. At the same time GM executes some 6,000 layoffs among salaried workers, it is hiring aggressively in emerging automotive disciplines like software development, battery and fuel cell technology and autonomous vehicle development.
GM Cruise LLC, the automaker's self-driving vehicle development arm based in San Francisco, recently surpassed 1,000 workers. A new office in Seattle opening early next year will also add up to 200 new workers.
“We are going to continue to hire,” Barra told reporters Monday. She said GM is focusing harder on the “skillsets of the future.”
“You will see us having new employees join the company as others are leaving,” she said. “We still need many technical resources across the company.”
Staff Writer Ian Thibodeau contributed to this report.
nnaughton@detroitnews.com
Twitter: @NoraNaughton

Elon Musk says Tesla was ‘single-digit weeks’ away from death during Model 3 production

Tesla CEO Elon Musk revealed how close the company came to folding in an Axios interview broadcast on HBO Sunday night. “Tesla really faced a severe threat of death due to the Model 3 production ramp,” said Musk. He added that the company was “bleeding money like crazy,” and the haemorrhaging of cash got to… Continue reading Elon Musk says Tesla was ‘single-digit weeks’ away from death during Model 3 production