Intelligent technology is a key technical path toward making the car a "third living space." Automotive intelligence is currently developing along two major directions: driving automation and cabin intelligence. The mission of assisted and autonomous driving (ADAS/AD) is to free humans from the driving task: the feet (longitudinal control), hands (lateral control), eyes (perception), and brain (decision-making). Once human attention is freed, demand grows for in-car office work, leisure, and entertainment, which in turn drives the digitalization and informatization of the cabin and the rapid development of new human-machine interaction technologies, that is, the "intelligent cockpit."

Autonomous driving technology has entered a phase of rapid development worldwide. As vehicles equipped with L1/L2 ADAS functions reach large-scale mass production, the market penetration of these functions is rising quickly, while L3/L4 autonomous driving systems remain in small-scale prototype testing. The Chinese market is unquestionably the main force in today's autonomous driving industry: expected L2 installations in China this year will exceed 800,000, with Chinese brands holding the vast majority of the market share.

The rapid rise in ADAS penetration comes from several driving forces:

1. ADAS software and hardware technologies are maturing and stabilizing while costs fall; the cost of millimeter-wave radar, for example, has dropped by more than 50% compared with five years ago.
2. Basic ADAS functions such as Automatic Emergency Braking (AEB) have been incorporated into national vehicle test protocols (e.g., C-NCAP), greatly promoting their adoption.
3. Competition among mid- and low-end vehicles is intensifying, and ADAS functions effectively enhance a brand's sense of technology and driving experience, prompting mainstream joint-venture and domestic brands to equip key models with ADAS configurations that even surpass those of some high-end international brands.

In the future, ADAS penetration in the Chinese market will continue to climb, and ADAS functions will gradually spread to mid- and low-end vehicles. According to an iResearch report, ADAS penetration in the passenger-car market is expected to reach roughly 65% by 2025. Model penetration of L3 highway pilot (HWP) and L4 automated valet parking (AVP) functions is still low today, but there is significant room for growth.

Figure 3-1 Forecast of ADAS Function Market Penetration Rate
1.1 Summary of ADAS/AD Functions
Currently, industry ADAS systems implement many assisted driving functions. By purpose, these functions fall into several categories: active safety functions, comfort-oriented driving assistance functions, parking assistance functions, and supervised/unmanned autonomous driving functions.

Typically, L0-L2 level automation is denoted ADAS; L2+ level automation is denoted ADAS/AD to mark the transition; and L3-L4 level automation is denoted AD.
1.2 ADAS/AD System Architecture
The intelligent driving system essentially answers three questions: 1) Where am I? 2) Where am I going? 3) How do I get there? Based on this model, a typical intelligent or autonomous driving system usually consists of three parts:

1. Environmental perception: The perception system relies on various sensors (cameras, millimeter-wave radar, ultrasonic radar, laser radar, high-precision maps/IMU/GPS, etc.) to gather information about the vehicle's environment, including surrounding vehicles, pedestrians, traffic lights, and road signs, providing the data foundation for the vehicle's decisions and solving the core question "Where am I?"

2. Decision and planning: This stage fuses the results of environmental perception with high-precision map data to select the appropriate operating mode and decide on a trajectory plan. The aim is to replace human driving decisions and merge the intelligent vehicle into the traffic flow in a human-like manner, solving the core question "Where am I going?"

3. Control and execution: This stage executes the concrete decision and planning results. In the vehicle this is typically realized through control theory and algorithms acting on the drive, brake, and steering systems, so that the vehicle accurately performs evasive maneuvers, deceleration, distance keeping, steering, and other actions according to the plan, solving the core question "How do I get there?"

Figure 3-2 System Model of a Typical Autonomous Driving System
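As a concrete illustration of this three-part model, the loop below sketches the perceive/plan/control cycle in Python. All names, thresholds, and gains are hypothetical stubs for illustration, not a real ADAS architecture:

```python
from dataclasses import dataclass

@dataclass
class EnvironmentModel:          # "Where am I?"
    ego_lane_offset_m: float     # lateral offset from lane center
    lead_vehicle_gap_m: float    # distance to the vehicle ahead

@dataclass
class Plan:                      # "Where am I going?"
    target_speed_mps: float
    target_lane_offset_m: float

def perceive(sensor_frames) -> EnvironmentModel:
    # Fuse camera/radar/lidar frames into an environment model (stubbed here).
    return EnvironmentModel(ego_lane_offset_m=0.2, lead_vehicle_gap_m=45.0)

def plan(env: EnvironmentModel) -> Plan:
    # Slow down if the gap to the lead vehicle is short; recenter in lane.
    speed = 20.0 if env.lead_vehicle_gap_m > 30.0 else 10.0
    return Plan(target_speed_mps=speed, target_lane_offset_m=0.0)

def control(env: EnvironmentModel, p: Plan) -> dict:
    # Map the plan onto drive/brake/steer actuators ("How do I get there?").
    steer = -0.1 * (env.ego_lane_offset_m - p.target_lane_offset_m)
    return {"throttle_speed_mps": p.target_speed_mps, "steer_rad": steer}

env = perceive(sensor_frames=[])
cmd = control(env, plan(env))
```

In a real system each stage runs at its own rate on different compute resources; the single sequential loop here only shows the data dependency among the three stages.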
2 ADAS/AD Perception System
The perception system of autonomous driving actually includes several modules: environmental perception, vehicle self-status perception, and vehicle positioning. Sensors are important means for the vehicle’s perception system to collect environmental information, vehicle self-status information, and location information. The sensors equipped on autonomous driving vehicles can be divided into three categories:
Vehicle self-status perception sensors (self-perception sensors for short): self-perception uses body-sensing sensors to measure the vehicle's current state, including speed, acceleration, yaw rate, and steering angle. Body sensing typically relies on pre-installed measurement units such as odometers, inertial measurement units (IMU), and gyroscopes, as well as signals from the Controller Area Network (CAN) bus.
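As an illustration of reading self-status signals, the sketch below decodes a vehicle-state CAN frame. The signal layout (offsets, scales, units) is entirely hypothetical, invented for this example rather than taken from any real OEM DBC definition:

```python
import struct

# Hypothetical CAN frame layout (not a real OEM signal database):
# vehicle speed as an unsigned 16-bit little-endian value in units of
# 0.01 km/h at byte offset 0, and yaw rate as a signed 16-bit value
# in units of 0.01 deg/s at byte offset 2.
def decode_vehicle_state(data: bytes) -> dict:
    speed_raw, yaw_raw = struct.unpack_from("<Hh", data, 0)
    return {
        "speed_kph": speed_raw * 0.01,
        "yaw_rate_dps": yaw_raw * 0.01,
    }

# Build a sample 8-byte frame: 80.50 km/h, -1.50 deg/s, 4 padding bytes.
frame = struct.pack("<Hh4x", 8050, -150)
state = decode_vehicle_state(frame)
```

In practice the raw frames arrive over a CAN interface and the scaling factors come from the vehicle's signal database; only the decode step is shown here.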
Localization sensors: Localization sensors use external sensors (Exteroceptive Sensor) such as GPS or inertial measurement unit readings for positioning, determining the vehicle’s global and local positions. High-precision vehicle positioning usually relies on a combination of information from multiple sensors, such as GPS, IMU, odometers, and cameras. Data fusion from multiple sensors can minimize the limitations and drawbacks of individual sensors, improving positioning accuracy and reliability.
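The idea of combining high-rate dead reckoning (IMU/odometry) with a low-rate absolute fix (GPS) can be sketched in one dimension. Production systems use a Kalman filter; the constant correction gain here is a deliberate simplification to keep the illustration short:

```python
# 1-D fusion of dead reckoning with an occasional absolute position fix.
# A real localizer would use a Kalman filter with a modeled covariance;
# the fixed gain of 0.2 is an illustrative stand-in.
def fuse(position, velocity, dt, gps_fix=None, gain=0.2):
    position += velocity * dt          # predict by dead reckoning
    if gps_fix is not None:            # correct toward the absolute fix
        position += gain * (gps_fix - position)
    return position

pos = 0.0
for step in range(10):                 # 10 steps at 10 m/s, dt = 0.1 s
    fix = 10.5 if step == 9 else None  # a single GPS fix on the last step
    pos = fuse(pos, velocity=10.0, dt=0.1, gps_fix=fix)
```

Dead reckoning alone drifts without bound; the occasional absolute fix pulls the estimate back, which is exactly why the text stresses multi-sensor fusion for high-precision positioning.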
Environmental perception sensors: Environmental perception sensors mainly include cameras, ultrasonic radar, millimeter-wave radar, and laser radar.
The environmental perception system relies on these environmental perception sensors to collect data about the vehicle's surroundings and, through a series of computations, builds an accurate model of them: the environmental model. The so-called environmental model is a digital representation of the physical world outside the vehicle, covering roads, objects to be avoided (such as other vehicles and vulnerable road users), and drivable "freespace".

Different Sensor Characteristics

Different sensors have different characteristics owing to their working principles. To ensure the redundancy and robustness of the ADAS perception system, manufacturers usually adopt a configuration scheme that fuses multiple sensors. The table below summarizes the characteristics of the sensors commonly used in ADAS systems:
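A minimal environment-model data structure might look like the sketch below. The field names, the ego-corridor query, and all numbers are illustrative assumptions, not a standard interface:

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    kind: str            # "vehicle", "pedestrian", "cyclist", ...
    x_m: float           # longitudinal position in the ego frame
    y_m: float           # lateral position in the ego frame
    speed_mps: float

@dataclass
class EnvironmentModel:
    lane_markings: list = field(default_factory=list)   # e.g. polyline coefficients
    objects: list = field(default_factory=list)         # TrackedObject entries
    freespace: dict = field(default_factory=dict)       # bearing (deg) -> free range (m)

    def nearest_in_path(self, half_width_m=1.5):
        """Closest object ahead within the ego corridor, or None."""
        in_path = [o for o in self.objects
                   if o.x_m > 0 and abs(o.y_m) <= half_width_m]
        return min(in_path, key=lambda o: o.x_m, default=None)

env = EnvironmentModel(objects=[
    TrackedObject("vehicle", 40.0, 0.3, 22.0),
    TrackedObject("pedestrian", 15.0, 4.0, 1.2),   # outside the ego corridor
])
lead = env.nearest_in_path()
```

Queries like `nearest_in_path` are what downstream functions (ACC, AEB) consume; the perception stack's job is to keep such a model populated and consistent across sensors.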
2.1 ADAS Sensor Layout Scheme
Mainstream ADAS systems have evolved from L0 to today's L2+, and the technical solutions have changed significantly, from early distributed smart-sensor solutions to today's centralized solutions built around ADAS domain controllers. The sensor layout has changed accordingly.

Below are some common terms used in ADAS sensor layouts:
FCM: Front Camera Module, the front-facing camera assembly, available in four forms: mono, stereo, bi-focal, and tri-focal.
FCR: Front Central Radar, front radar module, available in two forms: MRR (Mid-Range Radar) and LRR (Long-Range Radar). Generally, the 1R1V scheme (which will be explained later) commonly chooses MRR as the front radar module, while the 5R1V scheme often selects LRR as the front radar.
SRRs: Side-Rear Radars, the side-rear radar modules (left and right; generally left master, right slave), available in two forms: SRR (Short-Range Radar) and MRR (Mid-Range Radar). SRR is often a 24 GHz millimeter-wave radar, while MRR is typically 77-79 GHz. Note that the abbreviation SRR is overloaded: it can mean either the side-rear radar module or the short-range millimeter-wave radar, hence the trailing 's' used to distinguish the side-rear radar modules (SRRs).
USS: Ultra Sonic Sensor, ultrasonic radar sensor.
Early L0-L2 ADAS systems were composed of several independent subsystems, each implementing its own independent ADAS functions; they are therefore also referred to as distributed ADAS system solutions.

Figure 3-3 Early L0-L2 Level ADAS System Implementation Scheme
Forward ADAS system: Generally consists of a single FCR or a single FCM; the current mainstream configuration is the 1R1V scheme (FCR+FCM), which can support single-lane L2 driving assistance such as TJA/ICA. As visual detection capability improves, models positioned at L0-L2 show a trend toward a camera-only (single FCM) setup: the perception information required for lateral control, such as lane markings, can only come from vision, and eliminating the radar reduces system cost.
Side-rear ADAS system: Generally consists of two SRRs at the side rear, implementing most side-rear ADAS functions.
Automatic parking system: The APA (Automatic Parking Assist) system, composed of a parking controller and 12 ultrasonic sensors (USS); its main functions are APA, FAPA, and the like.
Surround view system: Composed of a surround-view controller (now rarely seen as a separate part, having been absorbed into other control nodes, mainly the head unit, the parking controller, or the domain controller) and four fisheye cameras. Implements the AVM (Around View Monitoring) function.
The first two systems are often referred to collectively as the driving ADAS system (Driving ADAS System), and this driving solution is sometimes called the 3R1V scheme (3 radars, 1 vision sensor); the latter two are often referred to as the parking ADAS system (Parking ADAS System).

As ADAS systems progress to L2+, the integration and performance of ADAS domain controllers consolidate the previously dispersed subsystems, allowing sensor data that was once exclusive to a single subsystem to be reused by multiple ADAS functions.

L2+ ADAS systems mainly fall into two categories: 1) multi-radar centralized schemes, chiefly 5-radar layouts such as 5R1V, 5R2V, and 5R5V; 2) multi-vision centralized schemes, which evolve from the multi-radar schemes into a vision-primary perception system with radar redundancy, such as the 5R12V scheme.

The following diagram shows the maximal sensor architecture for L2+ ADAS/AD systems: the 5R-12V-12USS scheme. The defining upper limit of this layout is that it uses no laser radar; once a laser radar (generally forward-facing) is added, the architecture crosses into L3-level AD sensor territory.

Figure 3-4 Ultimate Sensor Layout Architecture of L2+ Level ADAS Systems
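The "xRyV" shorthand used above simply counts radar and vision sensors. A small helper can derive the label from a sensor suite; the sensor names and label format below are illustrative:

```python
# Derive the "xRyV" shorthand (e.g. 1R1V, 5R1V, 5R12V) from a sensor
# suite described as a dict of sensor-name -> count. Naming is ad hoc:
# any key containing "radar" counts toward R, any containing "camera"
# counts toward V, and "uss" holds the ultrasonic sensor count.
def suite_label(suite: dict) -> str:
    radars = sum(n for s, n in suite.items() if "radar" in s)
    cams = sum(n for s, n in suite.items() if "camera" in s)
    uss = suite.get("uss", 0)
    label = f"{radars}R{cams}V"
    return label + (f"-{uss}USS" if uss else "")

# The maximal L2+ layout described in the text: 5 radars, 12 cameras, 12 USS.
l2plus_max = {
    "front_radar": 1, "corner_radar": 4,
    "front_camera": 3, "side_camera": 4, "rear_camera": 1, "fisheye_camera": 4,
    "uss": 12,
}
```

Running `suite_label(l2plus_max)` reproduces the 5R12V-12USS configuration, while a minimal forward suite of one radar and one camera yields 1R1V.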
Front main camera (Main Camera, x1): Corresponds to the FCM assembly of the L0-L2 stage, i.e., the single forward-camera scheme. In the L2+ domain-controller scheme it acts as a "dumb" camera with no local processing, connected to the domain controller via LVDS. Common HFOV (Horizontal Field Of View) design values are 30°, 50°, 60°, 100°, and 120°; these are rounded figures, and actual engineering values vary with the optical lens, e.g., 48°/52° (design value 50°) or 28° (design value 30°). The color filter array (pattern) is usually RCCB or RCCC, with a trend toward RYYCy; RYYCy has no clear (C) pixel, so color information is not lost and color reproduction is preserved. Detection distance is 150-170 meters.
Front narrow-angle camera (Narrow Camera, x1): Approximately 30° front-facing camera used to observe key targets like traffic lights/vehicles/pedestrians. Generally uses the same image sensor as the main front camera (e.g., both 1.3MP, or both 2MP, or even both 8MP image sensors), with a reduced FOV that increases pixel density, allowing for a longer detection distance relative to the Main Camera; the Pattern is often RCCB or RCCC. Detection distance is 250 meters.
Front wide-angle camera (Wide Camera, x1): HFOV about 140°, similar to the wide-angle camera in Tesla’s tri-focal camera. Once 8MP cameras are added, the FOV of the Main Camera can reach 120°, making the Wide Camera potentially unnecessary.
Side front (left and right cameras) (Corner Camera, x2): HFOV about 70°-80°, which may be upgraded to about 100° later; similar to Tesla’s B-pillar cameras, looking towards the side front, mainly focusing on nearby vehicle cut-ins and the vehicle’s lane change needs. The Pattern is often RCCB or RCCC.
Side rear (left and right cameras) (Wing Camera, x2): HFOV about 80°-90°, which may eventually be standardized to 100°. The Pattern is often RCCB or RCCC; focusing on side and rear targets to meet lane change needs.
Rear view camera (Rear Camera): Same as the front main camera, used for rear target detection.
These cameras are often referred to as Driving Cameras (mainly used for driving functions).
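The range benefit of the narrow camera over the main camera can be estimated from pixel density: with the same image sensor, angular resolution scales inversely with HFOV. The sketch below makes that estimate under an assumed 20-pixel recognition threshold; all numbers are illustrative, not vendor specifications:

```python
import math

# Rough detection-range comparison for two cameras sharing one image
# sensor. Pixels per radian = sensor_width_px / HFOV (in radians); the
# range at which a target of given width still spans `min_px` pixels
# grows linearly with that density (small-angle approximation).
def detection_range_m(target_width_m, hfov_deg, sensor_width_px, min_px=20):
    px_per_rad = sensor_width_px / math.radians(hfov_deg)
    return target_width_m * px_per_rad / min_px

main_cam = detection_range_m(1.8, 120, 3840)    # 8MP-class sensor, 120° HFOV
narrow_cam = detection_range_m(1.8, 30, 3840)   # same sensor, 30° HFOV
# Narrowing the FOV by 4x extends the estimated range by roughly 4x.
```

This is why the narrow camera reaches farther than the main camera despite using the same sensor, and why moving to 8MP sensors lets a single wide-FOV camera cover roles that previously needed a separate narrow unit.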
Front fisheye camera (Front Fisheye Camera): One of the fisheye surround-view cameras, used both for the surround-view display (human viewing) and for the visual detection of parking functions (machine perception and target detection). The common color filter array is RGGB because of the need for faithful color reproduction; if an 8MP camera is used with pixel binning down to 2MP output, RYYCy can be selected.
Left fisheye camera (Left Fisheye Camera): Same as above.
Right fisheye camera (Right Fisheye Camera): Same as above.
Rear fisheye camera (Rear Fisheye Camera): Same as above.
The four fisheye cameras are also commonly referred to as Parking Cameras (mainly used for parking functions). Of course, in the L2+ stage, as sensors continue to merge, the boundary between Driving Cameras and Parking Cameras has gradually blurred: parking functions often use the front camera for memory parking (MPA, Memory Park Assist), and driving functions often use the side fisheye cameras to detect lane markings for safe stops.

In addition to the above visual sensors, there are several active sensors:
Front millimeter-wave radar (Front Central Radar): Generally LRR (Long-Range Radar), responsible for detecting targets ahead, with good distance and speed measurement performance, and is not easily obstructed;
Side-front corner radars (Side-Front Radar, SFR x2) and side-rear corner radars (Side-Rear Radar, SRR x2): Located at the four corners of the vehicle, generally SRR (Short-Range Radar) or MRR (Mid-Range Radar). They provide dual-mode detection with a Long Range Mode and a Short Range Mode: the long-range mode has a small FOV and a long detection distance, while the short-range mode has a large FOV and a short detection distance. In the domain-controller scheme the radars are not distinguished as master and slave; in the distributed scheme, the left-side radar is generally the master and the right-side radar the slave.
Ultrasonic sensors (Ultra Sonic Sensor, USS): 12 units, with 4 long-range on the side and 8 short-range in the front and rear.
In addition to the environmental perception sensors mentioned above, L2+ and above ADAS/AD systems also require GNSS positioning, IMU (generally the signal comes from the airbag controller or ESP system), high-precision maps, and other different perception data sources.
3 ADAS/AD Domain Control Chips and Solutions
(1) ADAS Solutions for L0-L2 Levels

As previously mentioned, most early L0-L2 ADAS systems are based on a distributed controller architecture: the entire system consists of 4-5 ADAS subsystems, each typically an integrated unit (effectively a smart sensor) that exclusively controls its own sensors and generally operates independently of the others.

Taking the intelligent front camera module (FCM) as an example, the subsystem's ECU mainboard contains two chips: a safety core (Safety Core) and a performance core (Performance Core). The safety core is typically an MCU such as the Infineon TC297/397, which carries the control tasks and therefore requires a higher functional-safety level; the performance core is usually a multi-core heterogeneous MPU with higher compute, which carries the bulk of the computational tasks.

Below is a summary of the L0-L2 level solutions:
L0 level solution: Implements various ADAS warning functions, such as FCW, LDW, BSW, LCA, etc. Distributed architecture, generally composed of major hardware modules such as FCM, FCR, SRRs, AVS, APA, etc.
L1 level solution: Completes various ADAS single longitudinal core and single lateral control functions, such as ACC, AEB, LKA, etc. Also a distributed architecture, with hardware modules similar to those in the L0 level solution.
L2 level solution: Completes ADAS longitudinal + lateral combined control functions. For example, based on the FCM+FCR fusion system, it integrates forward visual perception and front radar target perception information to achieve TJA/ICA functions; or based on the AVS+APA fusion system to achieve automatic parking functions.
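As a toy illustration of the longitudinal half of such a combined control function, the sketch below implements a time-gap ACC policy: hold the driver's set speed, but never close below a safe time gap to the lead vehicle. The gains and margins are invented for illustration, not a production controller:

```python
# Toy ACC longitudinal policy. desired_gap keeps a 1.8 s time gap plus a
# 5 m standstill margin; inside that gap the target speed tracks the lead
# vehicle and bleeds off the gap error. All constants are illustrative.
def acc_target_speed(set_speed_mps, gap_m, lead_speed_mps, time_gap_s=1.8):
    desired_gap = lead_speed_mps * time_gap_s + 5.0
    if gap_m < desired_gap:
        # Too close: follow the lead, correcting toward the desired gap.
        return max(0.0, lead_speed_mps + 0.5 * (gap_m - desired_gap) / time_gap_s)
    return set_speed_mps

# Lead at 25 m/s, 60 m ahead: the gap is comfortable, so cruise at 33.3 m/s.
v = acc_target_speed(set_speed_mps=33.3, gap_m=60.0, lead_speed_mps=25.0)
```

In a 1R1V fusion system, `gap_m` and `lead_speed_mps` would come from fused camera and front-radar tracks; pairing this longitudinal policy with a lane-centering lateral controller is what yields TJA/ICA-style single-lane L2 assistance.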
(2) ADAS Solutions for L2+ Levels and Above

The distributed ADAS architecture has two fatal flaws: 1) each subsystem operates independently, making deep fusion across multiple sensors impossible; 2) each subsystem exclusively controls its own sensors, making it impossible to implement complex functions that span sensors from different subsystems.

As the vehicle E/E architecture evolves toward domain-centralized EEA, ADAS domain controllers gain more integrated, higher-performance computing platforms, enabling more complex sensor-fusion algorithms and higher-level ADAS functions such as HWP and AVP.

Centralized ADAS domain-controller solutions have transitioned from the earliest four-chip designs to three-chip designs, and now to the current mainstream two-chip designs, as shown in Figure 3-5.

Figure 3-5 Evolution History of ADAS Domain Controller Solutions

Figure 3-6 below is a schematic of a typical vehicle ADAS-domain functional structure. However the hardware solutions change, the functional structure each scheme requires is similar.

Figure 3-6 Schematic Diagram of a Typical Vehicle-mounted ADAS Domain Functional Structure
3.1 Mobileye EyeQ Series Chip Solutions
Mobileye was founded in 1999 and is a global leader in ADAS/AD solutions based on visual algorithms and data processing. By the end of 2021, cumulative shipments of its EyeQ series chips approached 100 million units. Although it has been squeezed by NVIDIA and Qualcomm in the L3/L4 field, it remains dominant in the mainstream L2 ADAS market, with a share as high as 75% and 2021 shipments of 28.1 million units.

Mobileye has always sold a tightly bundled hardware-software solution of "sensor + chip + algorithm." The advantage of this "black box" model is a short development cycle: customers can launch products quickly, which appeals to traditional OEMs and Tier 1 suppliers that are late to transform or weak in software and algorithms. The downside is reduced development flexibility, making differentiated, customized products hard to deliver. A growing number of OEMs instead want a more open platform that separates chips from algorithms, using programmable chips to iterate and upgrade algorithms continuously via OTA; this is the idea behind software-defined vehicles.

The basics of the EyeQ4/5/6 generations are as follows.

(1) EyeQ4 Chip Platform

The EyeQ4 is equipped with 4 MIPS CPU cores (4 hardware threads each), 6 vector microcode processors (VMP), and 2 programmable macro arrays (PMA). Total compute is 2.5 TOPS, and it can process video from 8 cameras at 36 frames per second.
Overall performance is 8 times that of EyeQ3. EyeQ4 also introduced the Road Experience Management (REM) system, which compresses road signs, lane markings, and similar data into compact packets that are aggregated into a "RoadBook," providing more accurate positioning for autonomous vehicles.

Below is the functional module diagram of the EyeQ4.

Figure 3-7 EyeQ4 Chip Functional Module Diagram

(2) EyeQ5 Chip Platform

EyeQ5 has 4 main modules: the CPU cores, computer vision processors (CVP), a deep learning accelerator (DLA), and a multithreaded accelerator (MA). The CPU and CVP are the principal components.

EyeQ5 chose Imagination's MIPS I6500 as the CPU core, each core with 2 hardware threads; 8 CPU cores in total provide up to 52,000 DMIPS.

EyeQ5 is configured with 18 CVP cores. The CVP is a new-generation vision processor designed for Mobileye's many traditional computer-vision algorithms; Mobileye has been known for these CV algorithms since its founding, and for running them at extremely low power on dedicated ASICs.

EyeQ5 uses a 7nm process, delivering up to 24 TOPS at a TDP of around 10W, an excellent performance-per-watt ratio. It supports up to 20 external sensors (cameras, radar, or laser radar); this computing headroom allows deep sensor fusion on EyeQ5 for more complex L2+/L3 ADAS functions.

Below is the chip functional module diagram of EyeQ5:

Figure 3-8 EyeQ5 Block Diagram

(3) EyeQ6 Chip Platform

EyeQ6H differs from previous Mobileye chips mainly in adding two small GPU cores, one being an ARM Mali GPU with 64 GFLOPS of compute, expected to be used for ADAS AR image-overlay output.
The other is Imagination's BXS 1024 MC-2, with 1000 GFLOPS of compute, expected to serve as an OpenCL acceleration engine.

The CPU remains EyeQ5's MIPS I6500-F architecture, but the thread count per core rises from 2 to 4, for 8 cores and 32 threads in total.

EyeQ6H is claimed to deliver more than 3 times the computing performance of EyeQ5 at only 25% more power.

Figure 3-9 EyeQ6 ADAS Domain Control Processor

The main advantages of the Mobileye platform are low product cost, a short development cycle, and very low development cost, with most functions already production-proven and low-risk. The downside is that the system is very closed: it is hard to build distinctive features, iteration is difficult, and when problems arise they are hard to fix or upgrade. For traditional automakers Mobileye is basically the default choice, while emerging automakers that want to stand out may find it limiting. Such automakers are still few, however, and Mobileye's dominant position is expected to remain stable for at least five years.
3.2 TI Jacinto 7 Chip Platform
At CES in early 2020, TI released its latest series of automotive chips based on the Jacinto 7 architecture. The previous-generation Jacinto 6 architecture focused mainly on in-vehicle infotainment: flashier UIs and more display screens. With the new Jacinto 7 chips it is clear that TI has largely abandoned the intelligent-cockpit and IVI markets, shifting its focus to ADAS domain control and the automotive gateway domain.

The Jacinto 7 series includes two automotive-grade chips: (1) the TDA4VM for advanced driver assistance (ADAS) systems; (2) the DRA829V processor for gateway systems. Both integrate dedicated accelerators for data-intensive computing tasks (such as computer vision and deep learning) as well as MCU cores supporting ISO 26262 functional safety, so a single chip can simultaneously run ASIL-D functional-safety control tasks and compute-intensive sensor-processing tasks.
3.2.1 TDA4VM ADAS Chip
The TDA4VM processor, based on the Jacinto™ 7 architecture, is designed for L2+ and above centralized ADAS domain-controller platforms. It integrates various accelerators, a deep-learning processor, and on-chip memory, providing powerful data analysis and processing capability as a fully functional, programmable, highly integrated ADAS domain-control platform.

This multi-level processing capability lets the TDA4VM serve various central roles in the ADAS domain. For example, it supports an 8MP (8-megapixel) high-resolution front camera, which can see farther and thus enable stronger driving-assistance functions. Users can also run 4 to 6 3MP cameras simultaneously and process data from millimeter-wave radars, ultrasonic sensors, and laser radars on the same chip. The TDA4VM can likewise serve as the central processor of an automatic parking system, delivering 360-degree surround perception for a better-experience full-view parking system.

The left side of the block diagram below shows a typical current ADAS system, where the main data processing is done by a GPU or NPU and the application processor must be flanked by external MCUs, external ISPs, an Ethernet switch, and a PCIe switch. The right side shows the same system built on the TDA4VM, which pulls those formerly external modules on-chip: general-purpose CPUs, a real-time MCU, a functional-safety MCU, the C7x DSP, the MMA deep-learning accelerator, the VPAC/DMPAC vision accelerators, an internal ISP, an Ethernet switch, and a PCIe switch.
Clearly, using the TDA4VM can greatly simplify the hardware of an ADAS system.

Figure 3-10 TDA4x Domain Control Processor Simplifying ADAS System Architecture

The key features of the TDA4VM processor include:
Two 64-bit Arm® Cortex®-A72 microprocessor subsystems, with a working frequency of up to 1.8GHz, 22K DMIPS;
Each Cortex®-A72 core integrates 32KB L1 D-Cache and 48KB L1 I-Cache.
Each dual-core Cortex-A72 Cluster shares a 1MB L2 Cache.
Six Arm® Cortex®-R5F MCUs, with a working frequency of up to 1.0GHz, 12K DMIPS;
Each core has 64K L2 RAM
The MCU subsystem in the isolated safety island has two Arm® Cortex®-R5F MCUs
The general computing section has four Arm® Cortex®-R5F MCUs
Two C66x floating-point DSPs, with a working frequency of up to 1.35 GHz, 40 GFLOPS, 160 GOPS;
C7x floating-point, vector DSP, up to 1.0 GHz, 80 GFLOPS, 256 GOPS;
Matrix multiplication accelerator (MMA), up to 1.0GHz, 8 TOPS (INT8);
Vision processing accelerator (VPAC) and image signal processor (ISP) and multiple perspective auxiliary accelerators;
Depth and motion processing accelerator (DMPAC);
Figure 3-11 TI TDA4x Processor Functional Module Diagram

The C7x is TI's next-generation DSP, combining TI's industry-leading DSP and EVE cores into a single higher-performance core with added floating-point vector capability, ensuring backward compatibility with legacy code while simplifying software programming. The new MMA deep-learning accelerator achieves up to 8 TOPS within the industry's lowest power envelope, and dedicated ADAS/AV hardware accelerators provide vision preprocessing as well as depth and motion processing.

The TDA4VM also meets system-level power requirements: it delivers this domain-controller-class compute at only 5 to 20W, so no active cooling is needed. For example, at CES 2020 Momenta demonstrated a system on which visitors could touch the TDA4 package and find it ran without any heatsink, indicating very low power consumption.

The functional-safety design of the TDA4VM has two parts: 1) an isolated safety island integrating two Cortex-R5F cores that support dual-core lock-step mode, achieving ASIL-D functional safety; 2) the remaining main processing portion, which can achieve ASIL-B.
The isolated functional safety island integrates two Cortex-R5F cores that support dual-core lock-step mode, achieving ASIL-D level functional safety;
The two ARM Cortex-R5F cores in the functional safety island are equipped with floating-point coprocessors and support dual-core lock-step operation mode.
512 bytes of Scratchpad RAM memory
Up to 1MB on-chip SRAM with ECC support
Dedicated voltage and clock domains within the safety island (independent of the main processor)
Dedicated memory and interface design within the safety island (independent of the main processor)
The remaining part of the main processor can achieve ASIL-B functional safety:
On-chip memory and interconnects are ECC protected
Built-in self-test mechanism (BIST)
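The dual-core lock-step idea behind the safety island, running the same computation redundantly and comparing results every cycle, can be sketched in software. This is a conceptual analogy only; real lock-step is implemented in hardware at instruction granularity with delayed, checked execution:

```python
# Software analogy of dual-core lock-step: evaluate the same step function
# on two redundant states and compare the outputs. A mismatch raises a
# fault (enter safe state) instead of silently emitting a corrupted
# actuator command.
def lockstep(step_fn, state_a, state_b, inputs):
    out_a = step_fn(state_a, inputs)
    out_b = step_fn(state_b, inputs)
    if out_a != out_b:
        raise RuntimeError("lock-step mismatch: enter safe state")
    return out_a

# Both "cores" agree, so the brake command is released normally.
brake_cmd = lockstep(lambda s, i: s + i, 0.0, 0.0, 0.3)
```

The point of the pattern is fault detection, not fault tolerance: a single corrupted core cannot outvote the comparison, which is what allows the island to claim the higher ASIL-D diagnostic coverage.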
3.2.2 DRA829V Gateway Chip
Traditional vehicles have long used low-speed networks (such as CAN/LIN) for communication, so upgrading the software of every ECU in the vehicle is very slow (left part of the diagram below). As modern vehicles evolve to a domain-centralized EEA, such as the common three-domain E/E architecture (ADAS domain, cockpit domain, vehicle-control domain), communication between domains requires a very high-speed bus (right part of the diagram), along with the central gateway's network protocol conversion and packet forwarding capabilities. The DRA829V processor is designed for this scenario.

Figure 3-12 DRA829V as a Communication Gateway Between Automotive Domains

The DRA829V is the industry's first processor to integrate an on-chip PCIe switch; it also integrates an Ethernet switch supporting 8 ports of gigabit TSN, enabling faster high-performance computing and in-vehicle communication.

The left side of the diagram below shows the whole-vehicle computing platform framework as TI understands it, where external PCIe switches, Ethernet switches, an external security module (eHSM), and external MCUs must be attached to the application processor. The right side shows the DRA829V integrating all of those previously external IP blocks, greatly reducing hardware complexity and improving reliability. The crux of TI's automotive gateway processor is high-performance processing at very low power.

Figure 3-13 TI DRA8x Series Gateway Chip Simplifying Gateway System Architecture

The DRA829V SoC addresses the challenges of the new vehicle-computing architecture by providing compute resources, moving data efficiently within the vehicle computing platform, and enabling communication across the entire vehicle network.
In short, the DRA829V mainly handles data exchange and security.

Compared with NXP's S32G2/S32G3, although both chips target automotive central-gateway scenarios, their design emphases differ.

Figure 3-14 NXP S32G Central Gateway Processor

NXP's S32G is designed as a mature network processor; its defining feature is a network offload engine that accelerates Layer 3 protocol conversion and packet forwarding. It is tailored entirely to the central-gateway role in a domain-centralized E/E architecture, efficiently handling tasks such as OTA upgrades for the various controllers, data-gateway interactions, and secure information transfer.

The DRA829V, by contrast, focuses on the aggregation and forwarding of high-speed signals within the vehicle. These capabilities make it better suited to acting as an aggregation and forwarding node for high-speed signals within a domain (note: this differs from the central-gateway role of the NXP S32G, and can be regarded as the gateway function required by a domain master controller). The DRA829V can of course also act as a central gateway, but that is not its strength, since it lacks a packet-forwarding engine comparable to the one in the S32G.
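To make the gateway's protocol-conversion role concrete, here is a minimal sketch of wrapping a classic CAN frame into a length-prefixed payload that could be tunneled over the in-vehicle Ethernet backbone. The field layout is invented for illustration; it is not the encapsulation any real gateway firmware uses (production gateways follow standardized mappings and often do this work in hardware offload engines).

```python
import struct

def encapsulate_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame (ID + up to 8 data bytes) into a
    length-prefixed payload suitable for tunneling over TCP/UDP."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    # 4-byte big-endian CAN ID followed by a 1-byte DLC, then the data
    return struct.pack(">IB", can_id, len(data)) + data

def decapsulate(payload: bytes) -> tuple[int, bytes]:
    """Inverse of encapsulate_can_frame."""
    can_id, dlc = struct.unpack(">IB", payload[:5])
    return can_id, payload[5:5 + dlc]

frame = encapsulate_can_frame(0x123, b"\x01\x02\x03")
assert decapsulate(frame) == (0x123, b"\x01\x02\x03")
```

The S32G's packet-forwarding engine exists precisely so that the CPU cores never have to touch per-frame work like this; on the DRA829V, such central-gateway forwarding would run in software.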
3.3 NVIDIA Xavier/Orin Solutions
NVIDIA is the world's largest intelligent-computing platform company. It initially focused on PC graphics, then leveraged its GPU architecture's suitability for large-scale parallel computing to expand into cloud AI acceleration, HPC, AR/VR, and other fields. Beyond excellent hardware architecture and performance, NVIDIA also holds a significant advantage in software and ecosystem: the CUDA development platform built on its GPU architecture is the de facto industry-standard framework for heterogeneous computing. On top of CUDA, NVIDIA has developed DNN acceleration libraries, compilers, development and debugging tools, and the TensorRT inference engine.

In 2015 NVIDIA launched the Tegra X1, an intelligent processor for mobile, robotics, and autonomous-driving applications that integrated NVIDIA's then most advanced Maxwell-architecture GPU cores. The release of this SoC also marked the beginning of the AI computing era in the embedded field.

Leveraging the CUDA + TensorRT ecosystem accumulated in the cloud, NVIDIA provides the autonomous-driving field with an end-to-end "chip + full autonomous-driving software stack" solution, including the DRIVE AV software platform, the DRIVE IX software platform, DRIVE Sim, and other complete software ecosystems.
3.3.1 Xavier Autonomous Driving Computing Platform
NVIDIA launched the Xavier platform at CES 2018 as an evolution of the Drive PX2, claiming it to be the "world's most powerful SoC (System on Chip)." Xavier processes autonomous-driving perception data from radars, cameras, lidars, and ultrasonic sensors, achieving a better energy-efficiency ratio than comparable products while being smaller in size. "NVIDIA® Jetson AGX Xavier™ sets a new benchmark for computing density, energy efficiency, and AI inference capability for edge devices."

In April 2020, the Xpeng P7 became the first mass-production vehicle equipped with the NVIDIA DRIVE AGX Xavier platform, carrying 13 cameras, 5 millimeter-wave radars, and 12 ultrasonic radars, integrated with the open NVIDIA DRIVE OS operating system.

The Xavier SoC is fabricated in TSMC's 12 nm FinFET process, integrates 9 billion transistors, and has a die area of 350 mm². The CPU is NVIDIA's in-house 8-core ARM64 design (code-named Carmel). The integrated Volta-architecture GPU (512 CUDA cores) supports FP32/FP16/INT8, delivering 1.3 TFLOPS of single-precision floating-point performance at 20 W and 20 TOPS of Tensor Core performance, unlocking 30 TOPS when the power budget rises to 30 W.

Xavier is a highly heterogeneous SoC, integrating up to eight different processing cores or hardware acceleration units, allowing it to run dozens of algorithms simultaneously and in real time for sensor processing, ranging, localization and mapping, vision and perception, and path planning:
Eight-core CPU: Eight-core “Carmel” CPU based on ARMv8 ISA
Volta GPU: 512 CUDA cores | 20 TOPS (INT8) | 1.3 TFLOPS (FP32)
Vision processor: 1.6 TOPS
Stereo and optical flow engine (SOFE): 6 TOPS
Image signal processor (ISP): 1.5 Giga Pixels/s
Video encoder: 1.2 GPix/s
Video decoder: 1.8 GPix/s
Xavier's main processor meets ASIL-B functional-safety requirements.

Below is the block diagram of the Xavier SoC:

Figure 3-15 NVIDIA Xavier Processor Functional Module Diagram

In addition to powerful computing resources, the Xavier SoC has rich I/O interface resources. Xavier comes in two system-on-module (SoM) versions, the Jetson AGX Xavier 8GB and the Jetson AGX Xavier:
Jetson AGX Xavier 8GB: A cost-effective low-power version of Jetson AGX Xavier, fully compatible with the existing Jetson AGX Xavier in terms of software and hardware. The entire module consumes a maximum of 20W while providing up to 20 TOPS of AI performance.
Jetson AGX Xavier: The world's first intelligent computing platform designed for autonomous robots, providing high computing performance at low power consumption. The Jetson AGX Xavier offers three preset power modes (10 W, 15 W, and 30 W), while the Jetson AGX Xavier Industrial version offers two (20 W and 40 W).
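Using the module-level figures above, a back-of-envelope comparison of the two variants' energy efficiency (a rough sketch only; real efficiency depends on precision, utilization, and workload):

```python
# Module-level figures quoted above: (peak INT8 AI performance, power budget).
variants = {
    "Jetson AGX Xavier 8GB": (20, 20),   # 20 TOPS at 20 W
    "Jetson AGX Xavier":     (30, 30),   # 30 TOPS in its 30 W mode
}
for name, (tops, watts) in variants.items():
    print(f"{name}: {tops / watts:.1f} TOPS/W")
# Both variants land at roughly 1 TOPS/W at the module level.
```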
Below is a comparison of the performance parameters of the different Jetson AGX Xavier system-on-module versions:

ADAS DCU Solutions Based on Xavier

Ecotron (ecotron.ai/) is a US manufacturer focused on ADAS domain control units (DCUs). In September 2019 it launched the EAXVA03 ADAS domain controller, a high-performance central computing platform for L3/L4 autonomous driving built around an NVIDIA Xavier SoC and an Infineon TC297 MCU. In this design, the Xavier processor handles environmental perception, image fusion, path planning, and similar workloads, while the TC297 MCU serves as the safety core, meeting ISO 26262 functional-safety requirements (ASIL-C/D) in control scenarios such as safety monitoring, redundant control, gateway communication, and vehicle control.

The latest models have evolved to the EAXVA04 and EAXVA05. The EAXVA04 is an upgraded EAXVA03, still pairing one Xavier with one TC297, while the EAXVA05 adopts a dual-Xavier + TC297 scheme for greater computing power. Below is the structural diagram of the EAXVA04 ADAS domain controller:

Figure 3-16 Structure Diagram of EAXVA04

Below is the structure diagram of the EAXVA05 ADAS domain controller with dual Xavier + TC297 MCU:

Figure 3-17 Structure Diagram of EAXVA05 ADAS Domain Controller
3.3.2 Orin Autonomous Driving Computing Platform
In December 2019, NVIDIA announced Orin, its next-generation SoC and computing platform for autonomous driving and robotics, featuring ARM Hercules CPU cores and NVIDIA's next-generation GPU architecture. The Orin SoC contains 17 billion transistors, nearly double the Xavier SoC, with 12 ARM Hercules cores and an integrated next-generation Ampere-architecture GPU delivering 200 TOPS at INT8, nearly 7 times the Xavier SoC. Orin SoC samples were planned for 2021, with mass production for automakers expected in 2022.

In May 2020 at GTC, NVIDIA introduced the new-generation DRIVE AGX Orin platform, which can carry two Orin SoCs and two NVIDIA Ampere GPUs, scaling from entry-level ADAS solutions up to L5 robotaxi systems with a maximum computing power of 2,000 TOPS. Future L4/L5 systems will require more complex and powerful autonomous-driving software frameworks and algorithms; with its strong compute, the Orin platform can run multiple autonomous-driving applications and deep-neural-network models concurrently.

As an onboard computing platform designed specifically for autonomous driving, Orin meets the ISO 26262 ASIL-D functional-safety standard. Thanks to an advanced 7 nm process, Orin also achieves excellent power efficiency: alongside its 200 TOPS of compute, its TDP is only 50 W.

Below is the block diagram of the Orin SoC:

Figure 3-18 NVIDIA Orin Processor Functional Module Diagram

The following table presents the performance parameters of the Jetson AGX Orin system on module:
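The quoted ratios can be sanity-checked with a little arithmetic (assuming Xavier at its 30 W / 30 TOPS operating point and the platform totals as announced at GTC):

```python
# Figures quoted above for Xavier and Orin (INT8 peak, watts).
xavier_tops = 30      # Xavier SoC at its 30 W operating point
orin_tops = 200       # Orin SoC
orin_tdp_w = 50

speedup = orin_tops / xavier_tops            # ~6.7x, i.e. "nearly 7 times"
soc_efficiency = orin_tops / orin_tdp_w      # 4.0 TOPS/W at the SoC level

# Drive AGX Orin platform: 2x Orin SoC + 2x Ampere GPU, 2000 TOPS total,
# so the two discrete GPUs supply the bulk of the platform-level compute.
platform_tops = 2000
gpu_contribution = platform_tops - 2 * orin_tops   # 1600 TOPS
```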
On December 30, 2020, Great Wall Motor held a press conference announcing an upgrade to its smart-driving strategy, officially launching the Coffee Intelligent Driving "331 Strategy." At the conference, Great Wall also announced a strategic partnership with Qualcomm: its production vehicles will adopt the Qualcomm Snapdragon Ride autonomous-driving platform. Great Wall plans to launch the world's first L4-capable mass-production vehicle based on Snapdragon Ride in 2022, equipped with IBEO's 4D all-semiconductor solid-state lidar with a maximum effective range of up to 300 meters.

In hardware, the Snapdragon Ride platform consists of two chips: 1) the SA8540 host processor, serving as the main ADAS-domain application processor and meeting system-level safety requirements; and 2) the SA9000B accelerator, providing the computing power the autonomous-driving system requires. Both meet ASIL-D and support L1 through L5 autonomy. A single board delivers 360 TOPS (INT8) of AI compute at an overall power consumption of 65 W, a compute-efficiency ratio of roughly 5.5 TOPS/W. Via a PCIe switch, up to four such computing platforms can be combined, for a total AI compute of 1,440 TOPS from four accelerators.
L1/L2 level ADAS: Targeting vehicles equipped with driving assistance functions like AEB, TSR, and LKA. Hardware support: 1 ADAS application processor, providing 30 TOPS of computing power.
L2+ level ADAS: Targeting vehicles equipped with HWA (highway assistance), automatic parking APA, and TJA (traffic jam assistance) functions. Required hardware support: 2 or more ADAS application processors, with expected computing power requirements of 60-125 TOPS.
L4/L5 level autonomous driving: Targeting autonomous passenger vehicles, robot taxis, and robotic logistics vehicles in urban traffic environments. Required hardware support: 2 ADAS application processors + 2 autonomous driving accelerators (ASIC), providing 700 TOPS of computing power, with a power consumption of 130W.
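The tiers above can be summarized as a small lookup table (figures as quoted in the text; the dictionary layout and helper function are illustrative, not part of any Qualcomm SDK):

```python
# Snapdragon Ride tiers as quoted above; "tops" holds the quoted
# (minimum, maximum) compute budget for each tier.
RIDE_TIERS = {
    "L1/L2": {"app_processors": 1, "accelerators": 0, "tops": (30, 30)},
    "L2+":   {"app_processors": 2, "accelerators": 0, "tops": (60, 125)},
    "L4/L5": {"app_processors": 2, "accelerators": 2, "tops": (700, 700),
              "power_w": 130},
}

def min_tops(level: str) -> int:
    """Lower bound of the compute budget quoted for a tier."""
    return RIDE_TIERS[level]["tops"][0]

assert min_tops("L2+") == 60
```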
To date, Qualcomm has not disclosed details of the SA8540P and SA9000B. Given that Qualcomm is unlikely to develop entirely new silicon specifically for L3/L4 autonomous driving, we can speculate from its other related chip products. In the first half of 2021, Qualcomm officially commercialized an AI 100 edge-computing kit that pairs a Snapdragon 865 application processor with the Cloud AI 100 accelerator, delivering 70 TOPS in the M.2 form factor and up to 400 TOPS over the PCIe interface with 16 cores.

Judging from Great Wall's promotional material, the 8540 and 9000 are both 7 nm parts, as are the AI 100 and Snapdragon 865, and a PCIe interface is also visible in the images. To meet automotive qualification, some performance must be sacrificed by lowering clock frequency to reduce power, which would bring performance down to 360 TOPS. The Snapdragon 865 is Qualcomm's highest-end 7 nm chip (the 870 runs at a higher frequency and thus higher power), so the SA8540 is most likely an automotive-grade Snapdragon 865, and the SA9000B an automotive-grade AI 100.

Figure 3-19 Qualcomm Cloud AI 100 Accelerator Processor

Qualcomm's AI 100 follows a DSP-style architecture, with up to 16 AI cores per chip. Each AI core has 9 MB of SRAM, for 144 MB across 16 cores, more than double the 64 MB of Tesla's FSD chip. The Qualcomm kit uses 12 GB of LPDDR5, whereas Tesla's FSD supports only LPDDR4.

Qualcomm provides not only hardware but a complete hardware-and-software solution, including SDKs, tools, and simulation.

Figure 3-20 Qualcomm Snapdragon Ride Autonomous Driving Software Stack Architecture

Qualcomm's strategy is to offer a complete end-to-end hardware-and-software solution while actively building out an ecosystem of upstream and downstream partners.
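Two of the figures quoted in this section can be checked with quick arithmetic (the on-chip SRAM totals and the single-board efficiency ratio):

```python
# On-chip SRAM: 16 AI cores x 9 MB each, versus Tesla FSD's 64 MB.
ai100_cores = 16
sram_per_core_mb = 9
ai100_sram_mb = ai100_cores * sram_per_core_mb   # 144 MB total
fsd_sram_mb = 64
print(ai100_sram_mb / fsd_sram_mb)               # 2.25x, i.e. more than double

# Single-board efficiency: 360 TOPS at 65 W overall power.
board_tops, board_power_w = 360, 65
print(round(board_tops / board_power_w, 1))      # ~5.5 TOPS/W, as quoted
```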