DUBLIN, March 30, 2023 /PRNewswire/ — The “Global and China Automotive Vision Algorithm Industry Research Report, 2023” report has been added to ResearchAndMarkets.com’s offering.
What is BEV?
BEV (Bird’s Eye View), also known as God’s Eye View, is an end-to-end technology in which a neural network converts image information from image space into BEV space.
Compared with conventional image-space perception, BEV perception feeds the data collected by multiple sensors into a unified space for processing, which effectively avoids the superposition of errors and also makes temporal fusion into a 4D space easier.
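A minimal sketch of this unified space, assuming hypothetical sensor mounting poses (the yaw angles and offsets below are illustrative, not from the report): each sensor's observation is transformed into a shared ego-centric BEV frame, so the same object reported by two different sensors lands at one coordinate rather than being fused across separate image spaces.

```python
import math

def to_bev(point_sensor, yaw, tx, ty):
    """Transform a 2D point from a sensor's local frame into the shared
    ego BEV frame using the sensor's mounting pose (yaw + translation)."""
    x, y = point_sensor
    c, s = math.cos(yaw), math.sin(yaw)
    # rotate by the sensor's yaw, then translate by its mounting offset
    return (c * x - s * y + tx, s * x + c * y + ty)

# Hypothetical mounting poses: a forward camera and a rear-facing lidar
front_cam = {"yaw": 0.0, "tx": 1.5, "ty": 0.0}
rear_lidar = {"yaw": math.pi, "tx": -1.0, "ty": 0.0}

# Each sensor reports the same pedestrian in its own local frame;
# after the transform, both observations land at (10.0, 2.0) in BEV.
p_cam = to_bev((8.5, 2.0), **front_cam)
p_lidar = to_bev((-11.0, -2.0), **rear_lidar)
```

Once every observation lives in this one frame, downstream fusion and tracking operate on a single consistent coordinate system instead of reconciling per-camera results after the fact.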
BEV is not a new technology. In 2016, Baidu began to apply point cloud perception in BEV space; in 2021, Tesla’s introduction of BEV drew widespread attention in the industry. There are BEV perception algorithms for different sensor input layers, basic tasks, and scenarios; examples include the vision-only BEVFormer algorithm and the BEVFusion algorithm based on a multi-modal fusion strategy.
Three Technology Routes of BEV Perception Algorithm
In terms of implementing BEV technology, the technology architectures of the players are roughly the same, but the technical solutions they adopt differ.
So far, there have been three major technology routes:
- Vision-only BEV perception route in which the typical company is Tesla
- BEV fused perception route in which the typical company is Haomo.ai
- Vehicle-road integrated BEV perception route in which the typical company is Baidu
Vision-only BEV perception technology route: Tesla is the representative company of this route. In 2021, it was the first to use a pre-fusion BEV algorithm that feeds the images perceived by cameras directly into the AI algorithm, generates a 3D bird’s-eye-view space, and outputs perception results in that space.
This space incorporates dynamic information such as vehicles and pedestrians, and static information like lane lines, traffic signs, traffic lights and buildings, as well as the coordinate position, direction angle, distance, speed, and acceleration of each element.
Tesla uses a backbone network to extract features from each camera, and adopts Transformer technology to convert the multi-camera data from image space into BEV space. Transformer, a deep learning model based on the attention mechanism, can handle massive data-level learning tasks and accurately perceive and predict the depth of objects.
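The image-to-BEV conversion that the report attributes to Transformer can be sketched as plain dot-product cross-attention: learned BEV grid-cell queries attend over flattened multi-camera features and pull back a weighted sum. The toy queries and features below are made-up stand-ins, not Tesla's actual network:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(bev_queries, image_features):
    """Each BEV query attends over all camera features (scaled
    dot-product attention) -- the image-space-to-BEV-space lift."""
    bev_out = []
    for q in bev_queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q))
                  for k in image_features]
        weights = softmax(scores)
        # weighted sum of the image features (values == keys in this sketch)
        bev_out.append([sum(w * k[d] for w, k in zip(weights, image_features))
                        for d in range(len(q))])
    return bev_out

# Toy data: 2 BEV grid-cell queries over 3 camera feature patches, dim 4
queries = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
feats = [[2.0, 0.0, 0.0, 0.0], [0.0, 2.0, 0.0, 0.0], [0.0, 0.0, 2.0, 0.0]]
bev = cross_attention(queries, feats)
```

Each BEV cell's output is dominated by the camera features its query aligns with, which is how attention lets the network learn the view transform instead of relying on a hand-coded geometric projection.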
BEV fused perception technology route: Haomo.ai is an autonomous driving company under Great Wall Motor. In 2022, it announced an urban NOH solution that emphasizes perception and reduces reliance on maps. The core technology comes from MANA (Snow Lake).
In the MANA perception architecture, Haomo.ai adopts BEV fused perception (visual Camera + LiDAR) technology. Using the self-developed Transformer algorithm, MANA not only completes the transformation of vision-only information into BEV, but also finishes the fusion of Camera and LiDAR feature data, that is, the fusion of cross-modal raw data.
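The report does not detail MANA's exact fusion operator; a common, minimal way to fuse cross-modal features once the camera and LiDAR maps share the same BEV grid is channel-wise concatenation per grid cell, sketched here with toy values:

```python
def fuse_bev_features(cam_bev, lidar_bev):
    """Channel-wise concatenation of camera and lidar BEV feature maps --
    one simple fusion scheme once both modalities live on the same grid
    (a generic sketch, not MANA's actual operator)."""
    assert len(cam_bev) == len(lidar_bev)  # grids must match in size
    fused = []
    for cam_row, lidar_row in zip(cam_bev, lidar_bev):
        # list concatenation per cell == channel concatenation
        fused.append([cam_cell + lidar_cell
                      for cam_cell, lidar_cell in zip(cam_row, lidar_row)])
    return fused

# A 2x2 BEV grid: camera cells carry 2 channels, lidar cells carry 1
cam = [[[0.1, 0.2], [0.3, 0.4]], [[0.5, 0.6], [0.7, 0.8]]]
lidar = [[[1.0], [2.0]], [[3.0], [4.0]]]
fused = fuse_bev_features(cam, lidar)  # each cell now has 3 channels
```

In a production network the concatenated map would then pass through further convolution or attention layers, so detection heads see both modalities jointly at every grid cell.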
Since its launch in late 2021, MANA has kept evolving. With Transformer-based perception algorithms, it has solved multiple road perception problems, such as lane line detection, obstacle detection, drivable area segmentation, traffic light detection & recognition, and traffic sign recognition.
In January 2023, MANA was further upgraded with five major models that enable a generational upgrade of the vehicle perception architecture and complete such tasks as common obstacle recognition, local road network construction, and behavior prediction.
The five models are:
- Visual self-supervision model (automatic annotation of 4D clips);
- 3D reconstruction model (a low-cost solution to data distribution problems);
- Multi-modal mutual supervision model (common obstacle recognition);
- Dynamic environment model (perception-focused technology for lower dependence on HD maps);
- Human-driving self-supervised cognition model (a driving policy that is more human-like, safe, and smooth).
Vehicle-road integrated BEV perception technology route: in January 2023, Baidu introduced UniBEV, the industry’s first end-to-end vehicle-road integrated perception solution.
Features:
- Fusion of all vehicle and roadside data, covering online mapping with multiple vehicle cameras and sensors, dynamic obstacle perception, and multi-intersection multi-sensor fusion from the roadside perspective;
- Self-developed intrinsic and extrinsic parameter decoupling algorithm, enabling UniBEV to project the sensors into a unified BEV space regardless of how they are positioned on the vehicle or at the roadside;
- In the unified BEV space, it is easier for UniBEV to realize multi-modal, multi-view, and multi-temporal fusion of spatial-temporal features;
- The big data + big model + miniaturization technology closed-loop remains superior in dynamic and static perception tasks at the vehicle side and roadside.
Baidu’s UniBEV solution will be applied to ANP3.0, its advanced intelligent driving product planned to be mass-produced and delivered in 2023. Currently, Baidu has started ANP3.0 generalization tests in Beijing, Shanghai, Guangzhou and Shenzhen.
Baidu ANP3.0 adopts a “vision-only + LiDAR” dual-redundancy solution. In the R&D and testing phase, with its “BEV Surround View 3D Perception” technology, ANP3.0 has become an intelligent driving solution that handles multiple urban scenarios relying solely on vision. In the mass production stage, ANP3.0 will introduce LiDAR for multi-sensor fused perception to deal with more complex urban scenarios.
BEV Perception Algorithm Favors Application of Urban NOA
As vision algorithms evolve, BEV perception algorithms are becoming the core technology with which OEMs and autonomous driving companies such as Tesla, Xpeng, Great Wall Motor, ARCFOX, QCraft and Pony.ai tackle urban scenarios.
Xpeng Motors: its new-generation perception architecture XNet fuses multi-frame data collected by the cameras over time, and outputs 4D dynamic information (e.g., vehicle speed and motion prediction) and 3D static information (e.g., lane line position) in BEV space.
Pony.ai: in January 2023, it announced its intelligent driving solution, Pony Shitu. The self-developed BEV perception algorithm, the key feature of the solution, can recognize various types of obstacles, lane lines and drivable areas, minimize computing power requirements, and enable highway and urban NOA using only navigation maps.
Key Topics Covered:
1 Overview of Vision Algorithm
1.1 Vehicle Perception System Architecture
1.2 Vehicle Visual Sensors and Solutions
1.3 Vehicle Visual Perception Tasks
1.4 Computing Architecture and Algorithms of Exterior Visual Perception Systems
1.4.1 Mono Camera Algorithm
1.4.2 Stereo Camera Algorithm
1.4.3 Surround View Camera Algorithm
1.5 Architecture and Algorithms of In-vehicle Visual DMS
1.5.1 Visual DMS Solution
1.5.2 Visual OMS Solution
1.6 BEV Perception Algorithm
2 Foreign Vision Algorithm Companies
2.1 Mobileye
2.1.1 Profile
2.1.2 Main Technologies
2.1.3 Visual Solutions
2.1.4 Major Customers
2.2 Continental
2.3 Bosch
2.4 StradVision
2.5 NVIDIA
2.6 Qualcomm
2.7 Valeo
2.8 Seeing Machines
2.9 Smart Eyes
2.10 Cipia
2.11 XPERI
2.12 Tesla
3 Chinese Vision Algorithm Companies
3.1 Momenta
3.1.1 Profile
3.1.2 Visual Perception Algorithm
3.1.3 Mass-produced Autonomous Driving Solutions
3.1.4 Fully Intelligent Driving Solution
3.1.5 Dynamics in Autonomous Driving
3.2 Haomo.ai
3.3 Nullmax
3.4 Motovis
3.5 MINIEYE
3.6 JIMU Intelligent
3.7 Smarter Eye
3.8 SenseTime
3.9 ArcSoft
3.10 Baidu Apollo
3.11 UISEE
3.12 Horizon Robotics
3.13 Juefx
3.14 ZongMu Technology
3.15 ThunderSoft
3.16 iVICAR
4 Summary and Trends
4.1 Summary on Companies
4.1.1 List of Foreign Vision Algorithm Companies
4.1.2 List of Chinese Vision Algorithm Companies
4.2 Development Trends
For more information about this report visit https://www.researchandmarkets.com/r/lr6btu
About ResearchAndMarkets.com
ResearchAndMarkets.com is the world’s leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.
Media Contact:
Research and Markets
Laura Wood, Senior Manager
[email protected]
For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900
U.S. Fax: 646-607-1907
Fax (outside U.S.): +353-1-481-1716
Logo: https://mma.prnewswire.com/media/539438/Research_and_Markets_Logo.jpg
SOURCE Research and Markets