In the past decade, the automotive industry has seen significant investments in the R&D of Advanced Driver Assistance Systems (ADAS). Government authorities and regulatory bodies worldwide have likewise begun investing to build the infrastructural framework needed for driving ADAS-equipped vehicles on the road.
Still, Indian road accident statistics remain alarming, and many of these accidents could have been prevented by adopting ADAS technology in the vehicle. With OEMs and Tier-1 suppliers at the forefront of innovation, the integration of ADAS features into vehicles is on the rise.
ADAS as a safety assistance system
ADAS features such as Forward Collision Warning (FCW), Automated Emergency Braking (AEB), Lane Departure Warning (LDW), Driver Monitoring System (DMS), Traffic Sign Recognition (TSR) and Blind Spot Detection (BSD) assist the driver and thereby increase driving safety.
Typically, the camera sensor captures raw data in a known format at a defined frame rate, providing images that must first be preprocessed. Noise reduction, distortion correction and other preprocessing techniques are applied so that the required image data set is available for subsequent processing.
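As a minimal illustration of such a preprocessing step, the sketch below applies distortion correction followed by denoising using OpenCV. The camera matrix and distortion coefficients shown are placeholder values; in practice they come from an offline camera calibration.

```python
import cv2
import numpy as np

# Placeholder intrinsics; real values come from an offline camera calibration.
CAMERA_MATRIX = np.array([[1000.0, 0.0, 640.0],
                          [0.0, 1000.0, 360.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def preprocess_frame(raw_frame: np.ndarray) -> np.ndarray:
    """Correct lens distortion, then reduce sensor noise."""
    undistorted = cv2.undistort(raw_frame, CAMERA_MATRIX, DIST_COEFFS)
    # Non-local means denoising preserves edges (lane markings, sign borders).
    return cv2.fastNlMeansDenoisingColored(undistorted, None, 10, 10, 7, 21)
```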
Perception algorithms, which include detection, classification, depth estimation and tracking alongside other key road asset information, provide information on static and dynamic objects, lanes, traffic signs/lights, and the motion and trajectory of vehicles in the surroundings of the ego vehicle (the vehicle carrying the sensors that perceive the environment around it).
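The kind of output such a perception stack produces can be pictured as a list of tracked objects. The hypothetical data structure below (field names are illustrative, not a specific FEV interface) captures the attributes named above:

```python
from dataclasses import dataclass
from enum import Enum

class ObjectClass(Enum):
    VEHICLE = "vehicle"
    PEDESTRIAN = "pedestrian"
    TRAFFIC_SIGN = "traffic_sign"
    TRAFFIC_LIGHT = "traffic_light"

@dataclass
class PerceivedObject:
    track_id: int            # stable ID maintained by the tracker
    obj_class: ObjectClass   # output of the classifier
    distance_m: float        # depth estimate from the ego vehicle
    velocity_mps: float      # relative velocity along the ego axis
    is_dynamic: bool         # static vs. dynamic object flag
```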
Sensors such as RADAR, LIDAR and Ultrasonic (US) are now used to improve accuracy in the respective use cases. Sensor fusion algorithms combine the inputs of the multiple sensors for the scenario under consideration and send the resulting assistance information as visual, haptic and acoustic alerts and warnings to the HMI.
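One simple way to combine two sensors' estimates of the same quantity is inverse-variance weighting, sketched below for a camera and a RADAR range estimate. This is a generic illustration under assumed noise values, not a description of any specific production fusion stack:

```python
def fuse_range(camera_range_m, camera_var, radar_range_m, radar_var):
    """Inverse-variance weighting: the less noisy sensor dominates."""
    w_cam = 1.0 / camera_var
    w_rad = 1.0 / radar_var
    fused = (w_cam * camera_range_m + w_rad * radar_range_m) / (w_cam + w_rad)
    fused_var = 1.0 / (w_cam + w_rad)  # fused estimate is less noisy than either input
    return fused, fused_var

# RADAR is typically more accurate in range, so it gets the larger weight.
fused, var = fuse_range(camera_range_m=42.0, camera_var=4.0,
                        radar_range_m=40.5, radar_var=0.25)
```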
ADAS/Autonomous Driving: Complex use case handling
FEV has been focusing on scalable solutions and redundancy optimization for Autonomous Driving, leveraging its expertise from various global customer projects. Level 3 automation and above requires a combination of sensors, including camera, RADAR, LIDAR, US, GPS and IMU, along with HD maps from navigation units. Perception software provides the static and dynamic object list, range and lane information, data on Traffic Signs/Traffic Lights (TS/TL), information on approaching and departing vehicles, and information on environmental conditions.
Consider a use case involving vehicle control on a curved road with a large radius of curvature. It requires robust trajectory estimation along with target vehicle identification, tracking and driver behavioral analysis, all looping back to the control functions.
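A small building block of such trajectory estimation is recovering the local radius of curvature from consecutive lane or map waypoints. The sketch below uses the circumradius of three points, a standard geometric construction offered here purely as an illustration:

```python
import numpy as np

def radius_of_curvature(p1, p2, p3):
    """Radius of the circle through three consecutive 2D waypoints
    (circumradius); a larger radius means a gentler curve."""
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p3 - p2)
    c = np.linalg.norm(p3 - p1)
    # Twice the triangle area via the 2D cross product.
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    area = abs(cross) / 2.0
    if area < 1e-9:
        return float("inf")  # collinear points: straight road
    return (a * b * c) / (4.0 * area)

waypoints = np.array([[0.0, 0.0], [25.0, 1.0], [50.0, 4.0]])
r = radius_of_curvature(*waypoints)  # large r -> gentle curve
```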
Many sub use cases can be derived from this example to test the ego vehicle's performance in terms of its perception algorithms' accuracy, its control algorithms' efficiency, its use case functions, and the validation methods and reports that confirm the defined performance KPIs are met.
Consider, for instance, an ego vehicle on an uphill road facing sun glare: the images captured by the camera will be washed out, resulting in poor detection, classification and tracking of the target vehicles and the surrounding environment. This is where sensors such as RADAR, LIDAR, IMU and US come into play, fusing their results and initiating appropriate actions for the use case function active under automated driving conditions.
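One way such a fallback can be arbitrated is to weight each sensor by a self-reported confidence and drop degraded ones. The logic below is a hypothetical sketch, assuming each pipeline supplies its own confidence score (e.g. from image contrast checks or SNR estimates):

```python
def arbitrate_detection(camera_conf, radar_conf, lidar_conf, threshold=0.5):
    """Down-weight or drop a glare-degraded camera and fall back to
    RADAR/LIDAR; confidence values are assumed to come from each
    sensor pipeline's own self-diagnosis."""
    sources = {"camera": camera_conf, "radar": radar_conf, "lidar": lidar_conf}
    usable = {k: v for k, v in sources.items() if v >= threshold}
    if not usable:
        return None  # no trustworthy sensor: escalate to a takeover demand
    total = sum(usable.values())
    return {k: v / total for k, v in usable.items()}  # normalized fusion weights

weights = arbitrate_detection(camera_conf=0.2, radar_conf=0.9, lidar_conf=0.8)
# -> camera excluded under glare; RADAR and LIDAR share the fusion weight
```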
Technology helps, but sometimes…
RADARs can also malfunction: interference and background noise produce false detections. Adding this to the scenario above leaves the ego vehicle with an even more complex self-driving task, since the models and components within the algorithms must be prioritized to determine the next course of action.
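One common mitigation for noise-driven false detections, offered here as a textbook illustration rather than a specific FEV method, is Cell-Averaging Constant False Alarm Rate (CA-CFAR) thresholding, which adapts the detection threshold to the locally estimated noise floor:

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Cell-Averaging CFAR: flag a range bin as a detection only if its
    power exceeds the noise floor estimated from neighboring training
    cells, keeping the false alarm rate roughly constant as noise varies."""
    power = np.asarray(power, dtype=float)
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        lead = power[i - train - guard : i - guard]          # cells before
        lag = power[i + guard + 1 : i + guard + 1 + train]   # cells after
        noise_floor = np.mean(np.concatenate([lead, lag]))
        detections[i] = power[i] > scale * noise_floor
    return detections
```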
What's the way out?
The HD map provides waypoints describing road elements in granular detail, as structured data covering the known Operational Design Domain (ODD) along the route to the destination. If it lacks real-time corrections in accident-prone areas such as the use case above, re-planning the vehicle's trajectory/path becomes difficult. Localization algorithms based on raw IMU data calculate the vehicle's position and algorithmically compensate for errors in displacement and other positioning parameters.
If all sensors malfunction, the HD map develops rerouting issues in autonomous driving mode, and the driver fails to respond to takeover demands, IMU data combined with the last known GPS data can still locate the ego vehicle. The efficiency and accuracy of the localization algorithms are measured by their ability to calculate and compensate for the Bias Instability (BI) error so that the yaw value can be determined accurately, positioning the ego vehicle even when other inputs such as GPS/GNSS are erroneous or absent. Left uncompensated, even a minor Bias Instability error can put the vehicle off course.
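A minimal sketch of why this matters: heading is obtained by integrating the z-axis gyro rate, so any residual bias integrates into drift. The bias value below is assumed to come from a standstill calibration; the numbers are illustrative.

```python
import numpy as np

def integrate_yaw(gyro_z_dps, dt_s, bias_dps):
    """Dead-reckon heading from the z-axis gyro after removing the
    estimated bias; an uncorrected bias grows linearly into heading drift."""
    corrected = np.asarray(gyro_z_dps) - bias_dps
    return np.cumsum(corrected * dt_s)  # yaw angle in degrees

# Bias estimated while stationary (e.g. averaged over a standstill window).
rates = [0.02, 0.03, 0.02, 0.01]        # raw gyro readings, deg/s
yaw = integrate_yaw(rates, dt_s=0.01, bias_dps=0.02)
# Without the bias term, a constant 0.02 deg/s offset alone would add
# about 72 degrees of heading error per hour: enough to leave the lane.
```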
AI can help
This is where embedding AI-based intelligence and redundancies, along with features such as DMS modules, can help bring the vehicle to a safe halt. Building such AI-based models helps the system understand the user's attention, driving task and driving behaviour, based on learning capabilities accumulated over time, smoothing the driving experience even before such incidents are predicted to happen.
The Data Storage System for Automated Driving (DSSAD) acts as a black box; when used solely for R&D purposes in the vehicle, it can capture vehicle data covering user inputs and the behavior of the autonomous driving functions. This data can be used offline during research to understand, enhance and fine-tune the AI learning algorithms with respect to a known user's driving behavior and interaction with the AD driving task within the ODD.
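To make the idea concrete, a single log entry might pair driver inputs with the AD function's state at the same instant. The record schema below is entirely hypothetical (field names are illustrative, not a DSSAD standard format):

```python
import json
import time

def make_dssad_record(ad_mode, user_inputs, function_state):
    """One hypothetical DSSAD log entry pairing driver inputs with the
    AD function's state, for offline replay and model fine-tuning."""
    return {
        "timestamp_s": time.time(),
        "ad_mode": ad_mode,                # e.g. "L3_ACTIVE", "TAKEOVER"
        "user_inputs": user_inputs,        # steering, pedals, buttons
        "function_state": function_state,  # active feature and its outputs
    }

record = make_dssad_record(
    ad_mode="L3_ACTIVE",
    user_inputs={"steering_deg": -1.5, "brake_pct": 0.0},
    function_state={"feature": "highway_pilot", "target_speed_kph": 100},
)
print(json.dumps(record))
```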
ADAS: Defining the future of mobility
FEV understands these technical complexities and applies its capabilities to solve the challenges posed by critical and corner-case scenarios. Given the complex ADAS/AD use case scenarios discussed above, FEV's expertise in automated data labeling and annotation of various Indian road assets, such as cars, trucks, buses and pedestrians, within a short period of time greatly reduces the manual effort customers would otherwise have to spend. The generated ground truth data sets can be used for ADAS algorithm training, performance fine-tuning and validation. FEV's digital scene generation capability, using various simulators, has been helping customers with rapid prototyping and virtual validation of ADAS/AD features. Model-Based Design, Classic or Adaptive AUTOSAR, and the integration of ADAS functions onto a Software-Defined Vehicle (SDV) platform or Service-Oriented Architecture (SOA) can further be leveraged by Indian OEMs for rapid prototyping and realization of ADAS/AD features.
Autonomous driving
Apart from supporting various customers with ADAS and AD solutions, FEV is continuously developing SAE Level 3 and 4 autonomous driving vehicle functions as part of a smart vehicle program capability demonstrator, which controls both lateral and longitudinal use cases on highway and urban roads. These solutions have been demonstrated at various events and recognized forums. FEV draws on the domain-specific niche skills and capabilities it has acquired executing various R&D and production programs to serve its esteemed customers globally.