Ford Motor is laying off 7,000 salaried employees as part of CEO Jim Hackett’s restructuring plan to reduce bureaucracy, cut costs and turn the automaker into a more agile company prepared for a future that extends beyond its traditional business of producing and selling cars and trucks. The cuts represent about 10 percent of the… Continue reading Ford will slash 7,000 salaried jobs by August
Jaguar Land Rover prepares tomorrow’s engineers for the self-driving future through its Land Rover 4×4 in Schools programme
WHITLEY, 16-Apr-2019 — /EuropaWire/ — Jaguar Land Rover is looking to inspire the next generation of software coders with its Land Rover 4×4 In Schools programme, which gives tomorrow’s engineers the chance to learn to code the self-driving vehicles of the future today.
Launched in the UK in 2006, the Land Rover 4×4 in Schools Technology Challenge went global in 2015 and is now active in 20 countries. Since its inception, more than 15,000 young people have joined the programme. The challenge has inspired many students to pursue STEM careers in the automotive industry, including former participants who later joined Jaguar Land Rover as undergraduates and apprentices.
The Land Rover 4×4 in Schools programme has helped the company reach more than four million young people since 2000. Some 110 students from 14 countries qualified for the 2019 Land Rover 4×4 In Schools Technology Challenge world finals, held at the University of Warwick, where the NewGen Motors team from Greece won the trophy after two intensive days of competition.
According to Arm, an estimated one billion lines of computer code are needed for self-driving cars, compared with the roughly 145,000 lines NASA required to land Apollo 11 on the moon in 1969 (the lines of code of the Apollo Guidance Computer (AGC)).
The Land Rover 4×4 In Schools programme aims to inspire the next generation of software engineers and help meet the growing need for coders to deliver these future autonomous and connected vehicles.
During this year’s world finals, talented teenagers wrote 200 lines of code in just 30 minutes to successfully navigate a scale-model Range Rover Evoque around a 5.7-metre circuit.
According to David Lakin, head of education at the IET, the UK alone will require more than 1 million software engineers to meet the growing demand for skills such as coding, software engineering and electronics. Here is what he said:
“We’re in the midst of a digital skills shortage – the UK alone requires more than 1 million software engineers to fill the growing demand for roles requiring a knowledge of coding, software engineering or electronics. Digital skills are vital to the economy, which is why the IET is proud to support initiatives like the Land Rover 4×4 In Schools Technology Challenge to ensure we inspire, inform and develop future engineers and encourage diversity across STEM subjects from a young age. If we are to safeguard jobs for the next generation, we must equip the workforce of the future with the skills they will need to engineer a better world.”
According to Evans Data Corporation, there were 23 million software developers worldwide as of 2018, a number expected to grow to 27.7 million by 2023. Furthermore, research from the World Economic Forum predicts that some 65% of today’s students will end up working in jobs that don’t currently exist.
That is why Jaguar Land Rover plans to launch a new Digital Skills Apprenticeship programme this year, aimed at attracting the best computer engineers to help code its next-generation electric, connected and autonomous vehicles and support the factories of the future.
According to Nick Rogers, Executive Director, Product Engineering at Jaguar Land Rover, computer engineering and software skills are more important than ever in the rapidly changing automotive industry. He commented further:
“Computer engineering and software skills are more important than ever in the rapidly changing automotive industry, and that will only increase as we see more autonomous, connected and electric vehicles on the roads. The UK will need 1.2 million more people with specialist digital skills by 2022, and as a technology company, it’s our job to help inspire and develop the next generation of technically curious and pioneering digital engineers. The Land Rover 4×4 In Schools Technology Challenge is just one of the ways we are doing this, as well as our new Digital Skills Apprenticeship programme we are launching this year.”
Mark Wemyss-Holden, former Teacher and Curriculum Content Developer:
“Coding is high on the agenda across industry and teachers do a fantastic job delivering the curriculum, but schools have competing priorities and are hamstrung by limited budgets and time. The private sector, and programmes like Land Rover 4×4 In Schools, have a real opportunity to bridge the gap between what learners enjoy studying and how that translates into a future career.”
Jaguar Land Rover is investing heavily in Global Pioneering Hubs around the world, including centres already active in Shannon, Republic of Ireland; Budapest, Hungary; and Portland in the United States, in order to maintain its leadership in Autonomous, Connected, Electric and Shared (ACES) mobility services.
John Cormican, General Manager for Vehicle Engineering, Shannon:
“Shannon has an important role to play in realising the company’s vision for autonomous and connected vehicles, but we cannot deliver this future without the very best minds – individuals who could write the next chapter for Jaguar Land Rover. It’s fantastic to see the company taking such an innovative approach towards investing in the next generation.”
SOURCE: Jaguar Land Rover
F1 drivers beware, AI super cars are coming for you
Artificial Intelligence (AI) autonomous car track tests at Stanford University suggest that it is only a matter of time before driverless cars can compete with Formula 1 racing superstars. The latest self-driving electric race car prototypes from the likes of Roborace can handle straights, chicanes, and hairpins as well as some of the world’s leading racing… Continue reading F1 drivers beware, AI super cars are coming for you
Press: ams, Ibeo and ZF partner to deliver industry-first solid-state LiDAR systems for the automotive industry
ams’ unique laser array brings industry-first solid-state LiDAR system to Ibeo and ZF. Premstaetten, Austria (20 May, 2019) — ams (SIX: AMS), a leading worldwide supplier of high performance sensor solutions, announces today that it has signed an agreement to team with Ibeo Automotive Systems GmbH, the German specialist for automotive LiDAR sensor technology, and ZF… Continue reading Press: ams, Ibeo and ZF partner to deliver industry-first solid-state LiDAR systems for the automotive industry
Published on May 6th, 2019 by Kyle Field
Tesla Begins Shipping Parts To Nearest Service Center When Vehicle Fault Is Detected
Tesla’s Vehicles Self-Diagnose Then Self-Medicate When Faults Are Detected
Tesla will now start shipping any required parts to the nearest Service Center when the vehicle detects a fault, according to a tweet from the company earlier today.
Tesla has already defied the norms and pains of many service center visits with its over-the-air diagnostic capabilities, but this takes things to the next level by eliminating the next barrier to getting an issue fixed. When the in-car diagnostic system detects an issue, a popup message appears in the car notifying the owner and asking them to make an appointment at the nearest Tesla Service Center.
An unexpected condition has been detected with the Power Conversion System on your Model 3 & replacement part has been pre-shipped to your preferred Tesla Service Center. Please use your Tesla Mobile App or your Tesla Account to schedule a service visit appointment now.
Improving service for owners continues to be a focal point for Tesla CEO Elon Musk as the number of Teslas roaming the roads of the world grows at an exponential rate. Enabling vehicles to both self-diagnose and order any required parts gives Tesla that much more of a head start in finding and shipping the parts needed for a repair.
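Tesla has not published how this works internally, but the flow the article describes (a diagnostic fault mapped to a replacement part, pre-shipped to the owner's preferred Service Center, followed by a notification prompting an appointment) can be sketched roughly as follows. All names, types, and part numbers here are hypothetical:

```python
# Hypothetical sketch of a fault-to-pre-shipment flow; Tesla's actual
# internal APIs are not public, so every identifier here is illustrative.
from dataclasses import dataclass

@dataclass
class VehicleFault:
    vin: str
    component: str        # e.g. "Power Conversion System"
    part_number: str      # replacement part the diagnostic maps to

def handle_fault(fault: VehicleFault, preferred_center: str) -> dict:
    """Pre-ship the mapped part and queue an owner notification."""
    shipment = {
        "part": fault.part_number,
        "destination": preferred_center,
        "status": "pre-shipped",
    }
    notification = (
        f"An unexpected condition has been detected with the "
        f"{fault.component}. A replacement part has been pre-shipped "
        f"to {preferred_center}. Please schedule a service visit."
    )
    return {"shipment": shipment, "notification": notification}

result = handle_fault(
    VehicleFault(vin="TEST-VIN-001",
                 component="Power Conversion System",
                 part_number="PCS-1092"),
    preferred_center="your preferred Tesla Service Center",
)
print(result["shipment"]["status"])  # pre-shipped
```

The key design point is that the part order is triggered by the diagnostic event itself, so shipping and scheduling proceed in parallel rather than waiting for the owner to book a visit first.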
Parts supply is an area the company has historically struggled with and continues to struggle with today. Stories of owners waiting not just weeks but, in some cases, upwards of six months for the required parts to arrive litter the internet. A story here and a story there make it hard to put the scale of the problem into context, but there is enough evidence to make it clear that parts supply remains an opportunity for the company.
The new functionality is an impressive step towards the future that will require vehicles to be more self-aware as Tesla ramps up its preparations for its fully autonomous Tesla Network of robotaxis. These vehicles will need to know when they need service and be able to not only get the parts ordered, but also schedule their own service appointments without owner intervention.
In related news, Tesla’s Twitter account has been given new life in the last few days, with a new bite of personality and responsiveness that has many wondering if the Securities and Exchange Commission’s restrictions on Tesla CEO Elon Musk’s tweets simply pushed him to use the company’s official Twitter account for his snide, snarky, insightful, playful tweets and replies. Oh, SEC, when will you learn? Don’t mess with the genius in the corner. Just leave him alone with his toys so he can get back to transforming the world for the better.
If you want to take advantage of my Tesla referral link to get 5,000 miles of free Supercharging on a Tesla Model S, Model X, or Model 3, here’s the link: http://ts.la/kyle623(if someone else helped you, please use their code instead of mine). You can also use it to get a new Tesla Solar system for your home.
About the Author
Kyle Field: I'm a tech geek passionately in search of actionable ways to reduce the negative impact my life has on the planet, save money and reduce stress. Live intentionally, make conscious decisions, love more, act responsibly, play. The more you know, the less you need. TSLA investor.
PlanetM Awards New Grants to Advance Mobility Pilots in Michigan
PlanetM, the state of Michigan-backed business development organization, has announced a new batch of grants that aim to entice mobility startups and corporations to pilot their innovations in Michigan, or test and validate their technology at one of Michigan’s proving grounds. PlanetM has awarded a total of $440,000 in grants to five companies. The startups… Continue reading PlanetM Awards New Grants to Advance Mobility Pilots in Michigan
Toyota’s Statement Re: WH Proclamation on 232
May 17, 2019 Today’s Executive Proclamation is a major setback for American consumers, workers and the auto industry. Toyota has been deeply ingrained in the U.S. for over 60 years. Between our R&D centers, 10 manufacturing plants, 1,500-strong dealer network, extensive supply chain and other operations, we directly and indirectly employ over 475,000 in the U.S.,… Continue reading Toyota’s Statement Re: WH Proclamation on 232
MG ZS EV To Be Equipped With 44.5 kWh Battery
With a 44.5 kWh battery, the MG ZS EV should be able to go more than 250 km (155 miles). MG (part of the Shanghai-based SAIC Motor) announced further details about the MG ZS EV, currently presented at the London Motor Show. The electric crossover will be equipped with a 44.5 kWh liquid-cooled battery, which places it almost… Continue reading MG ZS EV To Be Equipped With 44.5 kWh Battery
Deconstructing Two Conventional LiDAR Metrics, Part 2
Executive Summary
Conventional metrics for evaluating LiDAR systems designed for autonomous driving are problematic because they often fail to adequately or explicitly address real-world scenarios. Therefore, AEye, the developer of iDAR™ technology, proposes a number of new metrics to better assess the safety and performance of advanced automotive LiDAR sensors.
In Part 1 of this series, two metrics (frame rate and fixed [angular] resolution over a fixed Field-of-View) were discussed in relation to the more meaningful metrics of object revisit rate and instantaneous (angular) resolution. Now in Part 2, we’ll explore the metrics of detection range and velocity, and propose two new corresponding metrics for consideration: object classification range and time to true velocity.
Download “Deconstructing Two Conventional LiDAR Metrics, Part 2” [pdf]
Introduction
How is the effectiveness of an autonomous vehicle’s perception system measured? Performance metrics matter because they ultimately determine how designers and engineers approach problem-solving. Defining problems accurately makes them easier to solve, saving time, money, and resources.
When it comes to measuring how well automotive LiDAR systems perceive the space around them, manufacturers commonly agree that it’s valuable to determine their detection range. To optimize safety, the on-board computer system should detect obstacles as far ahead as possible. The speed with which they can do so theoretically determines whether control systems can plan and perform timely, evasive maneuvers. However, AEye believes that detection range is not the most important measurement in this scenario. Ultimately, it’s the control system’s ability to classify an object (here we refer to low level classification [e.g., blob plus dimensionality]) that enables it to decide on a basic course of action.
What matters most, then, is how quickly an object can be identified and classified, and how quickly a decision can be made about it so an appropriate response can be calculated. In other words, it is not enough simply to quantify the distance at which a potential object can be detected at the sensor. One must also quantify the latency from the actual event to the sensor detection — plus the latency from the sensor detection to the CPU decision.
Similarly, the conventional metric of velocity has limitations. Today, some lab prototype frequency modulated continuous wave (FMCW) LiDAR systems can determine the radial velocity of nearby objects by interrogating them continuously for a period of time sufficient to observe a discernible change in position. However, this has two disadvantages: 1) the beam must remain locked on a fixed position for a certain period of time, and 2) only velocity in the radial direction can be discerned. Lateral velocity must be calculated with the standard update-in-position method. Exploring these disadvantages illustrates why, to achieve the highest degree of safety, time to true velocity is a much more useful metric. In other words, how long does it take a system to determine the velocity — in any direction — of a newly identified or appearing object?
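The gap between the two measurements can be illustrated with a small sketch (hypothetical numbers): a radial-only measurement barely registers an object crossing the field of view laterally, while two position fixes recover the full velocity vector.

```python
import math

def radial_velocity(p0, p1, dt):
    """Rate of change of range to a sensor at the origin: the only
    component a beam dwelling on a fixed position observes directly."""
    return (math.hypot(*p1) - math.hypot(*p0)) / dt

def true_velocity(p0, p1, dt):
    """Full 2D velocity from two position fixes (update-in-position method)."""
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

# A car 50 m ahead crossing laterally at 20 m/s, sampled 0.1 s apart:
p0, p1, dt = (0.0, 50.0), (2.0, 50.0), 0.1
vx, vy = true_velocity(p0, p1, dt)   # (20.0, 0.0) m/s
vr = radial_velocity(p0, p1, dt)     # ~0.4 m/s: lateral motion nearly invisible
```

The sooner a sensor can produce the second fix on a newly appearing object, the shorter its time to true velocity, which is exactly what the proposed metric captures.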
Both object classification range and time to true velocity are more relevant metrics for assessing what a LiDAR system can and should achieve in tomorrow’s autonomous vehicles. In this white paper, we examine how these new metrics better measure and define the problems solved by more advanced LiDAR systems, such as AEye’s iDAR (Intelligent Detection and Ranging).
Conventional Metric #1: Detection Range
A single point detection — where the LiDAR registers one detect on a new object or person entering the scene — is indistinguishable from noise. Therefore, we will use a common industry definition for detection which involves persistence in adjacent shots per frame and/or across frames. For example, we might require 5 detects on an object per frame (5 points at the same range) and/or from frame-to-frame (1 single related point in 5 consecutive frames) to declare that a detection is a valid object.
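The frame-to-frame variant of this rule (1 related point in 5 consecutive frames) can be sketched as a simple streak counter; this is a minimal illustration of the persistence idea, not any vendor's implementation:

```python
def persistent_detection(frames, n_required=5):
    """Declare a valid object once a related point has been seen in
    n_required consecutive frames; isolated returns count as noise."""
    streak = 0
    for frame_has_point in frames:
        streak = streak + 1 if frame_has_point else 0
        if streak >= n_required:
            return True
    return False

print(persistent_detection([True, False, True, False, True]))  # False (noise)
print(persistent_detection([True] * 5))                        # True (object)
```

Note that the persistence requirement is also where detection latency comes from: the object must survive several frames before it officially exists.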
It is a widely held belief that a detection range of 200+ meters at highway speeds is the required range for vehicles to effectively react to changing road conditions and surroundings. Conventional LiDAR sensors scan and collect data about the occupancy grid in a uniform pattern without discretion. This forms part of a constant stream of gigabytes of data sent to the vehicle’s on-board controller in order to detect objects. This design puts a massive strain on resources. Anywhere from 70 to 90+ percent of data is redundant or useless, which means it’s discarded.
Under these conditions, even a system able to operate at a 10-30 Hz frame rate will struggle to deliver low latency while supporting high frame rates and high performance. And if latency for newly appearing objects is even 0.25 seconds, the frame rate hardly matters — by the time the data is made available to the central compute platform, in some circumstances it is practically worthless. On the road, driving conditions can change dramatically in a tenth of a second. After 0.1 seconds, two cars closing at a combined speed of 200 km/h are 18 feet closer. While predictive algorithms work well to counter this latency in structured, well-behaved environments, there are several examples where they don’t. One such example is the fast, “head-on” approaching small object: a newly appearing object presents head-on as a single LiDAR point, and it takes N consecutive single-point detects before it can be classified as an object. In this example, it’s easy to see that detection range and object classification range are two vastly different things.
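The closing-distance arithmetic above is easy to verify directly:

```python
def closing_distance_m(combined_speed_kmh, latency_s):
    """Distance two vehicles close during a given sensing latency."""
    return combined_speed_kmh / 3.6 * latency_s

d = closing_distance_m(200, 0.1)
print(round(d, 2), "m =", round(d * 3.28084, 1), "ft")  # 5.56 m = 18.2 ft
```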
With a variety of factors influencing the domain controller’s processing speed, measuring the efficacy of a system by its detection range is problematic. Without knowledge of latency or other pertinent factors, unwarranted trust is put on the controller’s ability to manage competing priorities. While it is generally assumed that LiDAR manufacturers are not supposed to know or care about how the domain controller classifies (or how long classification takes), we propose that ultimately, this leaves designers vulnerable to very dangerous situations.
AEye’s Metric
Object Classification Range
Currently, classification takes place somewhere in the domain controller. It’s at this point that objects are labeled as such and eventually, more clearly identified. At some level of identification, this data is used to predict known behavior patterns or trajectories. It is obviously extremely important and therefore, AEye argues that a better measurement for assessing an automotive LiDAR’s capability is its object classification range. This metric reduces the unknowns — such as latency associated with noise suppression (e.g., N of M detections) — early in the perception stack, pinpointing the salient information about whether a LiDAR system is capable of operating at optimal safety.
Automotive LiDAR is a relatively new field, and how much data is necessary for classification has not yet been standardized. Thus, AEye proposes that adopting the perception standards used in video classification provides a valuable provisional definition. According to video standards, enabling classification begins with a 3×3 pixel grid on an object. Under this definition, an automotive LiDAR system might be assessed by how fast it is able to generate a high-quality, high-resolution 3×3 point cloud that enables the domain controller to comprehend objects and people in a scene.
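Under this provisional 3×3 definition, a rough upper bound on object classification range follows from angular resolution alone. The sketch below uses the small-angle approximation and hypothetical numbers, and ignores real-world limits such as signal-to-noise and atmospheric loss:

```python
import math

def classification_range_m(object_size_m, angular_res_deg, grid=3):
    """Max range at which `grid` sample points span an object of the
    given size, assuming uniform angular sampling (small-angle approx.)."""
    return object_size_m / (grid * math.radians(angular_res_deg))

# A 0.5 m-wide obstacle seen by a sensor with 0.1 degree angular resolution:
r = classification_range_m(0.5, 0.1)
print(round(r))  # 95 -> a 3x3 patch on this object is only possible inside ~95 m
```

The same formula shows why instantaneous resolution matters: halving the angular step in a region of interest doubles the range at which that region yields a classifiable 3×3 patch.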
Generating a 3×3 point cloud is a struggle for conventional LiDAR systems. While many tout an ability to produce point clouds of half a million or more points per second, these images lack uniformity. Point clouds created by most LiDAR systems show dense horizontal lines coupled with very sparse vertical spacing, or low overall density in general. Regardless, these fixed angular sampling patterns can be difficult for classification routines because the domain controller has to grapple with half a million points per second that are, in many cases, out of balance with the resolution required for critical sampling of the object in question. Such an askew “mish-mash” of points requires additional interpretation, putting extra strain on CPU resources.
A much more efficient approach would be to gather about 10 percent of this data, focusing solely on Special Regions of Interest (e.g., moving vehicles and pedestrians) while keeping tabs on the background scene (trees, parked cars, buildings, etc.). Collecting only the salient data in the scene significantly speeds up classification. AEye’s agile iDAR is a LiDAR system integrated with AI that can intelligently accelerate shots within a Region of Interest (ROI). This comes from its ability to selectively revisit points twice within tens of microseconds — an improvement of three orders of magnitude over conventional 64-line systems, which can only hit an object once per frame (every 100 milliseconds). Future white papers will discuss various methods of using iDAR to ensure important background information is not discounted, by correctly employing the concepts of Search, Acquisition, and Tracking. This is similar to how humans perceive.
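The "salient 10 percent" idea can be sketched as a weighted shot budget: most shots go to Regions of Interest while the background keeps a steady trickle. This is an illustrative allocation with made-up weights, not AEye's actual scheduler:

```python
def allocate_shots(regions, total_shots):
    """Split a fixed per-frame shot budget across scene regions by weight."""
    total_w = sum(w for _, w in regions)
    return {name: round(total_shots * w / total_w) for name, w in regions}

plan = allocate_shots(
    [("pedestrian ROI", 45), ("vehicle ROI", 45), ("background", 10)],
    total_shots=1000,
)
print(plan)  # {'pedestrian ROI': 450, 'vehicle ROI': 450, 'background': 100}
```

In a real agile sensor the weights would be updated every frame as objects appear, are classified, and leave the scene, but the budgeting principle is the same.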
In summary, one can move low-level object detection to the sensor level by generating, for example, a dense 3×3 voxel grid in near real-time every time a significant detection occurs. This happens before the data is sent to the central controller, allowing for higher instantaneous resolution than a fixed-pattern system can offer and, ultimately, better object classification ranges when using the video detection-range analogy.
Real-World Applications: Imagine that an autonomous vehicle is driving on a desolate highway. Ahead, the road appears empty. Suddenly, the sensor per..