Comprehensive Analysis: Understanding Mainstream Autonomous Driving Chips and Platform Architectures

Introduction:

This article was written by Abao1990 and is published with the author’s authorization.

Declining component costs and intensified competition in the mid-to-low-end vehicle market have rapidly increased ADAS penetration in the Chinese market, significantly boosting ADAS installation in domestic brands. Let’s take a look at the mainstream chips and the platform architectures of the major manufacturers.

Five years ago, ADAS features were available only in some high-end models. Since 2015, the cost of electronic components has fallen continuously, and consumers increasingly choose vehicles with higher safety performance and intelligent driving-assistance features. The installation rate of ADAS in mid-to-low-end models, especially domestic brands, is now rising, particularly for features such as FCW (Forward Collision Warning), AEB (Automatic Emergency Braking), ACC (Adaptive Cruise Control), LDW (Lane Departure Warning), and DMS (Driver Monitoring System).
The rapid increase in ADAS penetration comes from several driving forces:
1) The hardware costs associated with ADAS have rapidly decreased in recent years; for example, the price of 77GHz millimeter-wave radar has dropped by more than 50% compared to five years ago;
2) C-NCAP has included some basic ADAS features, such as AEB, in its evaluation system, which has objectively promoted the adoption of these features;
3) Intensified competition in the mid-to-low-end vehicle market has led to a higher installation rate of ADAS features in mainstream joint-venture and domestic brands, even surpassing some high-end models sold in China.
It is expected that the penetration rate of intelligent driving-assistance features in the Chinese market will continue to rise rapidly, and the number of such features in mid-to-low-end vehicles will gradually increase. According to Strategy Analytics, the penetration rate of ADAS features in passenger cars in China is expected to rise from less than 20% in 2019 to over 70%. The current penetration rate of automatic parking is relatively low, indicating significant potential for future growth: according to Autohome Big Data, its penetration in models priced below RMB 300,000 is far less than 20%, but it is expected to reach around 50% by 2025.

1

Introduction to Autonomous Driving Components and Key Technologies

Perception Layer: mainly composed of LiDAR, cameras, high-precision maps, and IMU/GPS; responsible for collecting information about the vehicle’s surroundings;
Decision Layer: based on the perceived information, uses high-computing-power processing to derive optimized driving decisions;
Execution Layer: based on the decisions issued by the decision layer, sends commands to the braking system, steering, engine, etc.; responsible for driving execution.
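The three-layer split above can be made concrete with a minimal, purely illustrative sketch. All names (`SensorFrame`, `perceive`, `decide`, `execute`) and thresholds are hypothetical, not from any real stack:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    camera: list      # objects detected by the camera
    lidar: list       # obstacles extracted from the LiDAR point cloud
    speed_mps: float  # ego speed from IMU/GPS

def perceive(frame: SensorFrame) -> list:
    """Perception layer: merge obstacle lists from all sensors."""
    return frame.camera + frame.lidar

def decide(obstacles: list, speed_mps: float) -> str:
    """Decision layer: pick a driving action from the perceived data."""
    return "BRAKE" if obstacles and speed_mps > 0 else "CRUISE"

def execute(action: str) -> dict:
    """Execution layer: translate the decision into actuator commands."""
    if action == "BRAKE":
        return {"brake": 1.0, "throttle": 0.0}
    return {"brake": 0.0, "throttle": 0.3}

frame = SensorFrame(camera=["car"], lidar=["pedestrian"], speed_mps=12.0)
print(execute(decide(perceive(frame), frame.speed_mps)))  # {'brake': 1.0, 'throttle': 0.0}
```

The point of the sketch is the data flow, not the logic: each layer consumes only the previous layer’s output, which is why the three layers can be developed and supplied separately.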
Autonomous Driving Industry Chain:
As with smart cockpits, the industry chain divides into Tier 1 and Tier 2 suppliers, and the autonomous driving technology chain is partitioned even more clearly.
Perception layer visual systems include manufacturers such as Sunny Optical, Largan Precision, and OFILM;
Millimeter-wave radar systems include Continental, Bosch, Desay SV, and Huayu Automotive;
LiDAR manufacturers include ibeo, Bosch, Velodyne, Quanergy, Innoviz, RoboSense, Hesai Technology, Beike Tianhui, and Soaring Technology;
Ultrasonic radar systems include Denso, Panasonic, and Murata;
Data service providers/map manufacturers include Baidu, NavInfo, and AutoNavi;
Decision layer includes Mobileye, NVIDIA, Aptiv, Neusoft, NavInfo, and Zhongke Chuangda;
Chip suppliers include NVIDIA, Intel, Qualcomm, Huawei, and Horizon Robotics;
Vehicle networking service platforms include China Unicom Smart Network, China Mobile Smart Travel, Jiuwu Smart Driving, and NavInfo Smart Link;
Execution layer control solutions include Aptiv, Denso, and Bosch;
From the perspective of the various R&D stages of autonomous driving, the work mainly involves software engineering and hardware engineering:
1) Software Engineering:
Operating systems
Basic software (basic libraries, distributed systems, core services)
Algorithm design (perception, decision-making, planning)
Engineering implementation (FCW, LDW, etc.)
Cloud services (simulation, high-precision maps)
2) Hardware Engineering:
Domain control design (hardware architecture, computing units, functional safety)
Sensors (LiDAR, millimeter-wave radar, ultrasonic radar, cameras, GPS, IMU, etc.)
System integration and drive-by-wire retrofitting.
Upstream of the Supply Chain: Chips

The semiconductor and energy revolutions are driving this wave of automotive intelligence and electrification, and the automotive chip market increasingly mirrors the structure of the semiconductor industry chain.

Cabin chips: Qualcomm offers high computing power, high integration, and strong cost-performance, and its market share is rising noticeably.
Autonomous driving chips: open ecosystems are winning out over closed ones.
L3 and above: NVIDIA > Qualcomm > Huawei.
Below L3: Mobileye has the highest market share, but its black-box delivery model is increasingly unpopular with car manufacturers, and open models will be more welcome in the future; domestic manufacturers such as Horizon Robotics and Black Sesame have an opportunity here.
Current changes in intelligent-vehicle chips are concentrated in two domain controllers: the cockpit domain and the assisted/autonomous driving domain.
Intelligent cockpit chips evolved from center-console display chips; today’s participants include traditional automotive chip suppliers and new entrants from consumer electronics. Domestic manufacturers, including NavInfo (Jiefa Technology) and Allwinner Technology, are moving from the aftermarket into the OEM market.
Autonomous driving domain controllers are a new computing platform arising from changes in electronic and electrical architecture. The dominant players are currently Intel’s Mobileye and NVIDIA; Qualcomm and Huawei are investing heavily here, and start-ups such as Horizon Robotics and Core Technology are also participating.

2

Introduction to Autonomous Driving Chip Performance

The industry chain of the autonomous driving era divides into three levels: hardware companies at the bottom; software layers providing intelligence, connectivity, and management above them; and consumer-facing service layers at the top;
High-performance, high-computing-power chips: compared with traditional vehicles, the data volume of intelligent vehicles has increased dramatically, making high-performance chips a necessity, such as the popular Qualcomm SA8155;
Algorithm upgrades: hardware modules currently evolve relatively slowly, while algorithms iterate rapidly; continuously optimizing algorithms helps reduce cost and provides more safety redundancy.
In terms of mass production, recently mass-produced models are concentrated at the L2+ to L3 level;
In terms of hardware configuration, these models mainly carry vehicle-mounted cameras, millimeter-wave radar, ultrasonic radar, and high-performance chips; LiDAR is not yet fitted. Among sensor chips, Mobileye products are the most common, while Tesla uses its self-developed FSD;
As for applicable scenarios, autonomous driving on closed road sections generally requires high-precision maps, whereas on open roads the usable scope of such maps is relatively small.
Requirements for Computing Power in Autonomous Driving
Intelligent driving vehicles involve processes such as sensor environmental perception, high-precision maps/GPS accurate positioning, V2X information communication, multi-data fusion, decision-making and planning algorithm calculations, and electronic control and execution of calculation results. This process requires a powerful computing platform to analyze and process massive amounts of data in real-time and perform complex logical operations, resulting in very high computing power requirements.
According to Horizon Robotics data disclosure, for each level increase in autonomous driving, the required chip computing power will increase exponentially. The computing power demand for L2 level autonomous driving is only 2-2.5 TOPS, but the computing power demand for L3 level autonomous driving requires 20-30 TOPS, while L4 level requires more than 200 TOPS, and L5 level computing power demand exceeds 2000 TOPS.
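The Horizon Robotics figures quoted above can be checked with simple arithmetic; taking the upper end of each range, each level step multiplies the demand by roughly an order of magnitude (the dictionary below just encodes the numbers from the text):

```python
# Computing-power demand per autonomy level, in TOPS (upper end of the
# ranges quoted from Horizon Robotics in the text above).
COMPUTE_DEMAND_TOPS = {"L2": 2.5, "L3": 30, "L4": 200, "L5": 2000}

def growth_factor(frm: str, to: str) -> float:
    """How many times more compute the higher level needs."""
    return COMPUTE_DEMAND_TOPS[to] / COMPUTE_DEMAND_TOPS[frm]

print(growth_factor("L2", "L3"))  # 12.0
print(growth_factor("L3", "L4"))  # ~6.7
print(growth_factor("L4", "L5"))  # 10.0
```

The factors of roughly 7x to 12x per level are what the text summarizes as "an order of magnitude" per step.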
Each step up in autonomous driving level brings an order-of-magnitude increase in computing-power demand. According to Intel’s estimates, in the era of fully autonomous driving each vehicle will generate up to 4,000 GB of data daily. To achieve better intelligent-driving performance, the computing platform has become a key focus of vehicle design, rapidly increasing the value of automotive semiconductors and sparking an arms race in computing power. Take industry leader Tesla: recent media reports indicate that Tesla is collaborating with Broadcom on a new HW 4.0 autonomous driving chip, expected to enter mass production in the fourth quarter of next year on a 7nm process. HW 4.0’s computing power is expected to exceed 432 TOPS, roughly three times that of HW 3.0, and it will handle computation in four major areas: ADAS, electric powertrain, in-vehicle entertainment, and vehicle electronics. Let’s look at the computing power of mainstream autonomous driving chips.
Here is a comparison of the computing power of mass-produced autonomous driving chips. NVIDIA’s latest Orin chip leads on paper, but it has not yet reached mass production; among chips already in mass production, Tesla’s FSD chip has the highest single-chip computing power, at 72 TOPS.
Perception algorithms include SLAM and environment-perception algorithms; decision algorithms include planning and decision-making algorithms; execution algorithms mainly refer to control algorithms.
The operating systems are mainly Linux-based, and programming languages include C/C++/Python/MATLAB, etc.
Sensor Fusion Technology:
A single type of sensor cannot overcome inherent shortcomings; we need to combine information from different types of sensors together, integrating data and information obtained from multiple sensors for comprehensive analysis to more accurately and reliably describe the external environment, improving the correctness of system decisions, such as the typical combination of LiDAR + camera + IMU + high-precision map.
Early (pre-) fusion algorithms: data from different sensors is fused at the raw level. The fused stream acts like one super sensor that can see infrared and RGB camera data and also perceive 3D information from LiDAR, like a pair of super eyes. On top of these super eyes we develop the perception algorithms, ultimately outputting an object-level result.
Late (post-) fusion algorithms: each sensor independently processes its data into target objects, and once all sensors have produced their target data, the main processor fuses the results.
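The late-fusion idea above can be sketched in a few lines. This is a deliberately simplified, hypothetical example (1-D positions, a fixed matching gate, names like `fuse_tracks` invented here), not a production association algorithm:

```python
# Late (post-) fusion sketch: camera and radar each deliver their own object
# lists; the main processor merges tracks that appear to be the same object.
def fuse_tracks(camera_tracks, radar_tracks, gate_m=2.0):
    """Merge detections whose positions agree within gate_m metres."""
    fused = []
    unmatched_radar = list(radar_tracks)
    for cam in camera_tracks:
        match = next((r for r in unmatched_radar
                      if abs(r["x"] - cam["x"]) < gate_m), None)
        if match:
            unmatched_radar.remove(match)
            # Camera contributes the class label; radar contributes the more
            # accurate range and radial velocity.
            fused.append({"label": cam["label"], "x": match["x"], "v": match["v"]})
        else:
            fused.append({"label": cam["label"], "x": cam["x"], "v": None})
    # Radar-only tracks survive without a class label.
    return fused + [{"label": "unknown", **r} for r in unmatched_radar]

cams = [{"label": "car", "x": 10.2}]
radars = [{"x": 10.0, "v": -3.1}]
print(fuse_tracks(cams, radars))  # [{'label': 'car', 'x': 10.0, 'v': -3.1}]
```

Real systems do this association in 3-D with track histories and uncertainty gating, but the division of labor is the same: each sensor finishes its own detection pipeline first, and fusion happens only at the object level.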
Road/cloud: Can be used for data storage, simulation, high-precision map drawing, and deep learning model training, serving to provide offline computing and storage capabilities for autonomous vehicles. Through the cloud platform, we can test new algorithms, update high-precision maps, and train more effective recognition, tracking, and decision-making models. It also supports global information storage and sharing, interconnectivity of business flows, and path optimization for autonomous vehicles.
In the Era of Intelligent Driving, the Data Processing Volume of Vehicles has Increased Significantly, with Higher Requirements for Chip Performance, AI Chips as the Mainstream
Hardware architecture upgrades drive the demand for chip computing power to show an exponential growth trend. Vehicles need to process a large amount of images, videos, and other unstructured data, while the processor also needs to integrate radar, video, and other multi-source data. All of these place higher demands on the parallel computing efficiency of the vehicle-mounted processor, making AI-capable main control chips the mainstream.
Data, computing power, and algorithms are the three essential elements of AI. The mode of CPU combined with acceleration chips has become a typical AI deployment solution, where the CPU provides computing power, and the acceleration chip enhances computing power and promotes the generation of algorithms. Common AI acceleration chips include GPU, FPGA, and ASIC.
GPUs use a single-instruction, multiple-data (SIMD) model, with a large number of computing units and very deep pipelines, and mainly accelerate computation in the image domain. A GPU cannot work alone, however; it must be controlled and invoked by a CPU. The CPU can work independently and handle complex logic and varied data types, but when a large amount of uniformly processed data is needed, it calls on the GPU for parallel computing.
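The SIMD idea can be illustrated without a GPU. The pure-Python analogy below (all of it invented for illustration) contrasts the CPU style, where each element may take a different branch, with the SIMD style, where one uniform operation is applied to the whole array:

```python
# CPU-style scalar loop versus a SIMD-style uniform operation (analogy only).
data = list(range(8))

# "CPU" path: per-element control flow is allowed.
scalar_out = []
for x in data:
    scalar_out.append(x * 2 if x % 2 == 0 else x)

# "GPU" path: the same single instruction (square) applied to every element.
simd_out = list(map(lambda x: x * x, data))

print(scalar_out)  # [0, 1, 4, 3, 8, 5, 12, 7]
print(simd_out)    # [0, 1, 4, 9, 16, 25, 36, 49]
```

This is why the text says the CPU "calls" the GPU: branch-heavy logic stays on the CPU, and only the uniform bulk work is handed off for parallel execution.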
FPGAs are suited to multiple-instruction, single-data-stream processing, the opposite of GPUs, and are therefore commonly used in the inference (prediction) phase, for example in the cloud. An FPGA implements a software algorithm directly in hardware, so complex algorithms are difficult to implement, and the cost is relatively high. Comparing FPGA with GPU, the former avoids the memory-access and control overhead, so it is faster, but its raw computation throughput is lower. A heterogeneous architecture that combines the advantages of CPU and GPU is one common solution.
ASIC is a dedicated AI chip customized for specific requirements. Apart from its lack of extensibility after fabrication, it has advantages in power consumption, reliability, and size, and excels in high-performance, low-power mobile applications.
Brain-like (neuromorphic) chip architecture is a new chip architecture that mimics the functions of the human brain to support perception, behavior, and thinking. In simple terms, it aims to replicate the human brain.
Different Application Scenarios Demand Different AI Chip Performance and Specific Indicators
There are two locations for AI chip deployment: cloud and terminal. Cloud AI applications are mainly used in data centers, requiring enormous amounts of data and computation during the deep learning training phase, making it most cost-effective to implement the training phase in the cloud or data center, while a single terminal chip cannot independently complete a large number of training tasks.
Terminal AI chips are used in devices that perform edge computing, such as smartphones, security cameras, vehicles, smart home devices, and various IoT devices. The characteristics of terminal AI chips are small size, low power consumption, and generally do not require particularly powerful performance, usually only needing to support one or two AI capabilities.
From a functional perspective, current AI chips mainly focus on two areas: one is the training of AI systems (mainly the pre-training of deep neural networks), and the other is inference after model training deployment.
In theory, training and inference have similar characteristics, but currently, due to significant differences in computational volume, accuracy requirements, energy consumption conditions, and algorithms, training and inference remain separate.
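One concrete reason the two stay separate is numeric precision: training keeps full floating-point weights, while deployed inference often quantizes them to int8 to cut memory and energy. The sketch below shows symmetric linear quantization on illustrative values (the weights and helper names are invented for this example):

```python
# Why inference chips can use lower precision: weights survive int8
# quantization with error bounded by one quantization step.
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to the int8 range."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.52, -1.27, 0.003, 0.9]        # illustrative float32 weights
q, s = quantize_int8(w)
approx = dequantize(q, s)
print(q)  # [52, -127, 0, 90]
print(max(abs(a - b) for a, b in zip(w, approx)) < s)  # True: error < one step
```

Training cannot tolerate this rounding because gradient updates are far smaller than one quantization step, which is why training chips target high precision and flexibility while inference chips target throughput per watt.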
In the training field, massive parameters need to be iteratively trained, so the chip design direction is generally super high performance, high flexibility, and high precision. Chips aimed at training are generally deployed in the cloud or data centers, with high costs and energy consumption. Currently, in the training field, Nvidia’s GPU dominates the market, and the majority of deep neural networks and project implementations adopt Nvidia’s GPU acceleration solutions. The explosion of the deep learning acceleration market has also attracted competitors to enter the fray.
Google released its first-generation TPU in 2015, and in May 2017 launched the ASIC-based TPU 2.0, which adopted systolic-array technology and achieved peak computational capability of 45 TFLOPS. The second generation also addressed the first generation’s limitation of supporting only inference, not training. According to Google, on natural-language deep learning networks, one-eighth of a TPU Pod (Google’s self-built processing unit composed of 64 TPU 2.0 chips) can complete in six hours a training task that would occupy 32 top-end GPUs.
In addition to Google, AMD has also released an accelerator solution based on Radeon Instinct, while Intel has launched the Xeon Phi + Nervana solution. In the training field, substantial investments are required, and the R&D costs are high. Currently, the main competitors are Nvidia’s GPU, Google TPU, and new entrants like AMD Radeon Instinct (based on GPU) and Intel Xeon Phi + Nervana (based on ASIC). Currently, whether it is Google’s TPU + TensorFlow or other giants’ new solutions, it is very challenging to shake Nvidia’s position in the training market.

17

Comparison of Autonomous Driving Platforms and Selection Considerations

Comparison of Autonomous Driving Platforms
By another estimate, L2 requires 2 TOPS of computing power, L3 requires 24 TOPS, L4 requires 320 TOPS, and L5 requires 4000+ TOPS.
The mainstream autonomous driving computing platforms today generally offer more than 200 TOPS. Tesla is the exception at under 200 TOPS: it does not use LiDAR, which significantly reduces the required data-processing capability.
Interestingly, a single Xavier offers only 30 TOPS, but by linking chips through interconnects such as PCIe, platform-level computing power scales up significantly: NVIDIA’s DRIVE PX Pegasus platform reaches 320 TOPS, surpassing even Tesla’s HW 3.0.
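The platform arithmetic above is simple to verify. Pegasus pairs two Xavier SoCs with two discrete GPU accelerators; the 260 TOPS attributed to the accelerators below is derived by subtraction from the 320 TOPS platform figure in the text, not an independently sourced spec:

```python
# Back-of-envelope check of the platform figures quoted above.
XAVIER_TOPS = 30

def platform_tops(n_socs, soc_tops, accel_tops=0):
    """Total platform compute = SoC contribution plus attached accelerators."""
    return n_socs * soc_tops + accel_tops

# DRIVE PX Pegasus: 2x Xavier plus discrete GPUs making up the remainder.
print(platform_tops(2, XAVIER_TOPS, accel_tops=260))  # 320
```

The same additive logic is why platform-level TOPS can race ahead of single-chip TOPS: vendors scale out over PCIe-class interconnects rather than waiting for a bigger die.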
Customer Expansion Progress of Autonomous Driving Chips
NVIDIA has signed up a large share of car manufacturers for autonomous driving: the Xavier platform is used in the Xiaopeng P7 and by SAIC, Mercedes-Benz, and FAW, while Orin, the chip with the strongest single-chip computing power, has been chosen by Li Auto and NIO. Those models have not yet launched, but it is always the new car-making forces that are first to try.
Qualcomm’s Snapdragon Ride platform has partnerships with General Motors, Great Wall, Weima, and GAC;
Huawei’s MDC platform collaborates with Great Wall, Changan, and BAIC; the fastest progress so far is BAIC BluePark’s ARCFOX.
Mobileye, due to its system’s closed nature, currently only collaborates with Geely and BMW.
In the autonomous driving chip-platform battle, the clear leader is NVIDIA, with first-class performance. As for price, as Jensen Huang put it, “the more you buy, the more you save.” The emerging car-making enterprises pursue high performance, and several of them, such as NIO, Li Auto, and Xiaopeng, hold ample cash reserves, so NVIDIA is highly favored.
With the Orin platform in particular, if the chip proves stable, NVIDIA could lead for at least 5-8 years.

18

Conclusion

To summarize, the article provides a comprehensive analysis of mainstream autonomous driving chips and platform architectures, highlighting the rapid advancements and competitive landscape in the autonomous driving industry.
