Adaptive Computing Is Fueling Evolution of Automotive Automation & Safety

With the rise of electric vehicles and of innovative, often AI-driven Advanced Driver Assistance Systems (ADAS), safety, and infotainment systems, the automotive industry is experiencing massive change. In India, OEMs, Tier-1s, and startups are accelerating efforts to integrate technology and data-driven intelligence into automobiles, while managing challenges such as space, power efficiency, cost, and road diversity.

Moreover, as the number of sensors and processors on vehicles continues to increase, demand will grow for embedded technologies with adaptive and versatile architectures. Such solutions must be capable of powering high-reliability systems; processing and networking large amounts of data at low latency; operating at low power; supporting increasingly complex algorithms; and meeting evolving automotive technology standards and functional safety requirements.

Taking a step back, the current possibilities of the connected car can be separated into three submarkets: Automated Driving, ADAS, and In-Vehicle Experience (IVX). There are six levels of driving automation, ranging from Level 0 (L0), which describes an entirely manually controlled vehicle, up to L5, a fully automated vehicle requiring no driver engagement.

The key distinction comes when you move from L2, where automated driving features are present but ultimate responsibility for controlling the vehicle remains with the driver, to L3, where the automated driving system becomes responsible for any faults while it is engaged.
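The six levels and the L2-to-L3 responsibility boundary can be sketched as a small enumeration. This is purely illustrative; the class and function names below are our own, not any standard's API:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six driving-automation levels, L0 through L5."""
    L0 = 0  # no automation: the driver does everything
    L1 = 1  # driver assistance: steering OR speed support
    L2 = 2  # partial automation: steering AND speed, driver supervises
    L3 = 3  # conditional automation: system drives within its domain
    L4 = 4  # high automation: no driver fallback needed in-domain
    L5 = 5  # full automation: no driver engagement anywhere

def system_is_responsible(level: SAELevel) -> bool:
    """The L2 -> L3 boundary: at L3 and above, the automated
    driving system (not the driver) is responsible while engaged."""
    return level >= SAELevel.L3

print(system_is_responsible(SAELevel.L2))  # False
print(system_is_responsible(SAELevel.L3))  # True
```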

Unlike the U.S., where robotaxis and long-haul autonomous trucks dominate conversations, India’s mobility landscape has unique challenges. Dense traffic, two-wheelers weaving between lanes, inconsistent road infrastructure, and varied weather conditions make it essential for ADAS to be tuned for local realities.

Due to the distinction between L2 and L3, a lot of innovation is taking place at L2 – with more advanced features now being described as L2++ or L2.99. These vehicles incorporate automated driving features that can control steering, acceleration, and braking, but require constant driver supervision and intervention.

Furthermore, they will include ADAS, a broad term covering features that support the driver by using advanced sensors to alert them to certain scenarios, and extending to features that assist the driver through temporary system control. These features include Adaptive Cruise Control (ACC), Lane Keep Assist (LKA), and Blind Spot Detection (BSD), which are increasingly being piloted by OEMs to improve safety on Indian roads.

More vehicles feature ADAS to increase the safety of drivers and other road users, using information collated by on-vehicle sensors to alert drivers to potential hazards; detect distraction or fatigue via Driver Monitoring Systems and In-Cabin Monitoring Systems; assist in scenarios such as parking and traffic jams; and even override the driver to avoid collisions.

For Indian roads, this can mean recognizing pedestrians on poorly lit streets, handling unpredictable vehicle behavior, or alerting drivers distracted by mobile phone use. This is enabled by technologies including enhanced camera systems, radar-based imaging sensors, and LiDAR (Light Detection and Ranging) sensors. 

The third ‘subset’ of the connected car is In-Vehicle Experience (IVX). Increased vehicle connectivity drives improvements in infotainment systems. As well as guarding against fatigue and providing mental stimulation, intuitively and seamlessly sharing information such as navigation, vehicle maintenance, and live traffic updates can enhance the driver experience as well as improve safety.

Electric vehicles offer a huge inflection point for the In-Cabin experience – increasing demand for more elaborate digital cockpits which can keep the driver productive or entertained whilst the car charges. In India, the demand for multilingual, voice-enabled systems is especially strong. Adaptive computing platforms, CPUs, and GPUs are enabling generative AI-driven interfaces that can handle regional languages and conversational interactions, moving beyond rigid command-based infotainment.

Safety Embedded 

The most important aspect of automated driving and ADAS features is ensuring driver and passenger safety. All stakeholders – manufacturers, suppliers, road and commercial vehicle users – expect the highest level of safety and reliability in vehicles with autonomous systems.

However, Data Aggregation, Pre-Processing and Distribution (DAPD), the process of a vehicle collecting, transporting and processing sensor data to inform relevant actions, brings complex technical challenges around bandwidth availability, power efficiency, sensor performance and reliability.

Automated emergency braking, lane keeping, and driver monitoring must function reliably even with variable bandwidth, limited connectivity, or inconsistent road signage. An example of adaptive computing in action is the collaboration with Mercedes-Benz Research and Development India (MBRDI) on the MBUX Interior Assistant. 

Safety testing and certification must be rigorous to ensure such systems never fail, have adequate connectivity and power supply at all times, and are secure against interference from malicious and non-malicious cyber threats. This means technology inside automotive systems must comply with quality testing (AEC-Q100) and functional safety specifications (ISO 26262).

AEC-Q100 is an industry standard that outlines testing requirements for electronics products for automotive applications. ISO 26262 is an international functional safety standard for road vehicles defined by the International Organization for Standardization (ISO). There are four levels of ISO 26262 ASIL certification, with ASIL A representing the lowest degree and ASIL D the highest degree of automotive hazard. Embedded technologies as well as systems must meet these standards.
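The A-through-D ordering implies a simple rule: a component is suitable for a system requirement only if its certified level is at least as stringent. A minimal sketch of that ordering, with our own illustrative names (real ISO 26262 compliance involves far more, including decomposition rules not modeled here):

```python
from enum import IntEnum

class ASIL(IntEnum):
    """ISO 26262 Automotive Safety Integrity Levels, A (lowest) to D (highest)."""
    A = 1
    B = 2
    C = 3
    D = 4

def meets_requirement(achieved: ASIL, required: ASIL) -> bool:
    """A component certified at `achieved` can satisfy a system
    requirement of `required` only if it is at least as stringent."""
    return achieved >= required

print(meets_requirement(ASIL.D, ASIL.B))  # True
print(meets_requirement(ASIL.A, ASIL.C))  # False
```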

These technical and safety requirements make adaptive computing a critical component in maintaining the integrity of automated driving features as standards rapidly evolve and become increasingly complex. Adaptive computing hardware – based on programmable logic (PL) – can be programmed and reprogrammed repeatedly after deployment in the field to fulfill a diverse range of functions and evolve with its environment.

Their parallel processing capabilities mean they can compute multiple tasks and data streams quickly and efficiently. Therefore, PL-based devices are a good fit for vehicles with automated features, which require low-latency, low-power, high-reliability adaptive silicon to aggregate, process, and distribute sensor data. Furthermore, adaptive SoCs and cost-optimized FPGA platforms, already being evaluated by Indian OEMs and Tier-1s, are designed to support these safety-critical requirements.

Automation and AI

As the market moves toward highly automated and fully autonomous driving, vehicles will become more reliant on advanced sensors and domain controllers equipped with Machine Learning. Furthermore, AI processing performance and heterogeneous computing architectures will be critical for AI-guided, real-time decision making and increased vehicle autonomy. PL devices like FPGAs will play a central role in enabling adaptive computing and onboard vehicle intelligence.

The overall demand for high-performance processing, graphics and adaptive computing to enable next-generation Automated Driving, ADAS and IVX capabilities is expected to skyrocket in the coming years. In the immediate future, forecasts anticipate a 2X increase in performance needs every two to three years for infotainment alone, across CPUs, GPUs, and graphics displays. Meeting these growing performance requirements will require expanding the available processing headroom to accommodate additional workloads deployed to the vehicle through its lifecycle. 
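The compounding effect of that forecast is worth making concrete. Assuming the article's 2X-every-two-to-three-years rate (taking 2.5 years as a midpoint, an assumption of ours), the required performance multiple over a vehicle's life can be computed directly:

```python
def required_performance_multiple(years: float, doubling_period: float = 2.5) -> float:
    """Relative compute needed after `years`, assuming performance
    requirements double every `doubling_period` years (midpoint of
    the forecast's two-to-three-year doubling interval)."""
    return 2.0 ** (years / doubling_period)

# Over a roughly 10-year vehicle lifecycle at the midpoint rate,
# infotainment performance needs would grow about 16-fold.
print(round(required_performance_multiple(10)))  # 16
```

That order-of-magnitude gap is why the article stresses building in processing headroom at design time rather than sizing hardware for launch-day workloads alone.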

Size, cost, and power constraints, along with the desire to embrace the era of Software Defined Vehicles, are driving automotive design to use more centralized computing architectures to consolidate Automated Driving, ADAS, and IVX functions and reduce complexity. Rather than having many intelligent subsystems, automotive OEMs are moving to designs where intelligence is split between the edge and domain/zonal controllers.

For example, rather than having a microcontroller unit (MCU) for each sensor, centralizing the compute in a domain, or zonal controller, can consolidate sensor processing. This shift can reduce wiring complexity, system cost, and power consumption, critical considerations for India’s cost-sensitive market.  As the enabling technologies become more affordable over time, premium safety and advanced AI-enabled features, such as parking assist and automated highway driving, will eventually reach the mass market. For the AI-enabled cars of tomorrow, these will become standard features required in all vehicles.

When this happens, automobile manufacturers will face even more of a compute and power crunch, requiring high-performance, ultra-low latency, and low-power devices that meet advanced functional safety standards. Rather than deploying multiple computing solutions – compounding issues such as space and power usage – heterogeneous architectures can provide a single-chip solution to tackle all phases of an automated driving system – sense, perceive, plan, and act.

This is where adaptive SoC devices come to the forefront. Emerging adaptive SoCs that integrate AI engines with programmable logic are delivering the processing power and bandwidth needed for sensor fusion. Sensor inputs such as vision, radar, and LiDAR are ingested via programmable I/O blocks and fed directly to the programmable logic for low-latency sensor-specific processing.

This offers a flexible means of implementing innovative sensor fusion algorithms prior to perception/inference processing in the AI Engines, together with the scalability to support various L2/L2+ system requirements as well as L3 and L4 systems, where redundancy is critical.
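The sense, perceive, plan, act flow described above can be sketched schematically. All stage and function names here are illustrative placeholders, not a real vendor API; on an adaptive SoC the stages would map to programmable logic, AI engines, and scalar cores respectively:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    camera: list  # raw pixel data (placeholder)
    radar: list   # range returns in meters (placeholder)

def sense(frame: Frame) -> dict:
    """Aggregate raw sensor streams (programmable logic in the article's flow)."""
    return {"camera": frame.camera, "radar": frame.radar}

def perceive(fused: dict) -> list:
    """Detect nearby obstacles from fused data (AI-engine stage)."""
    return [r for r in fused["radar"] if r < 30.0]  # objects closer than 30 m

def plan(obstacles: list) -> str:
    """Decide a maneuver (scalar CPU stage)."""
    return "brake" if obstacles else "cruise"

def act(command: str) -> str:
    """Issue the actuation command to the vehicle network."""
    return f"actuate:{command}"

frame = Frame(camera=[0] * 4, radar=[55.0, 12.5])
print(act(plan(perceive(sense(frame)))))  # actuate:brake
```

The point of the single-chip argument is that all four stages run on one heterogeneous device rather than on separate processors, avoiding the space and power overheads the preceding paragraph describes.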

Looking beyond the role of adaptive computing, the future is bright for embedded automotive technologies in general, as the speed of innovation shows no sign of slowing. Whether it’s CPUs, GPUs, FPGAs, Adaptive SoCs, or Embedded APUs, automotive designers and carmakers require architectures offering dedicated scalar, graphics, AI, and programmable logic compute subsystems to enable flexible, adaptable system designs that can be updated throughout the life of a vehicle.

Collaborations with R&D hubs like MBRDI, as well as engagements with Tier-1 suppliers and startups, show how adaptive technologies are already addressing India’s unique mobility challenges. As OEMs in India scale toward more automated and software-defined architectures, adaptive computing will ensure that vehicles remain resilient to local infrastructure conditions while staying aligned with global safety and performance standards. 

Rohith Gopalakrishnan is Country Sales Manager, Embedded Group in AMD India. Views expressed are the author’s personal. 
