Mainstream Autonomous Driving Chips and Platform Architecture: Low-Performance Platforms

Source | Abao1990
As the level of autonomous driving increases, the required chip computing power increases exponentially. The computing power requirement for L2 level autonomous driving is only 2-2.5 TOPS, but for L3 level it requires 20-30 TOPS, and L4 level requires over 200 TOPS, while L5 level requires over 2000 TOPS.
Previously, we noted that Tesla's chip delivers 72 TOPS. The platforms below are all low-performance platforms under 200 TOPS, mainly targeting L2-and-above autonomous driving; Mobileye, for example, excels in vision-based ADAS applications. The low-performance platform chips include Mobileye, Renesas, TI, and Horizon, which are introduced one by one in this issue.


Mobileye Solution Introduction
Mobileye was established in 1999 and is a global pioneer in developing advanced driver assistance systems and autonomous driving solutions based on vision system analysis and data processing, providing integrated “chip + algorithm” ADAS visual solutions for 27 OEMs and Tier 1 manufacturers worldwide.
As of the end of 2019, the EyeQ series chips have shipped 54 million units, ensuring driving safety for over 50 million vehicles worldwide, and currently, the global ADAS market share is about 70%. Since its inception, the company has been committed to providing driver assistance technologies such as pedestrian detection, lane keeping, and adaptive cruise control using monocular vision. From 1999 to 2001, Mobileye’s prototype products iterated once a year. In 2001, Mobileye solidified its self-developed algorithms onto chips and integrated them into vehicles, starting the development of EyeQ chips.
In April 2004, EyeQ1 went into production; the company subsequently raised multiple rounds of financing, shifted its business focus to automotive safety, and signed cooperation agreements with top global component suppliers such as Continental, STMicroelectronics, Magna, Denso, and Delphi. In 2007, BMW, GM, and Volvo became the first automakers to adopt Mobileye chips, and Mobileye products were officially commercialized. In 2008, Mobileye released EyeQ2, and the company entered a period of stable development. In 2013, Mobileye's cumulative product sales exceeded 1 million units, and shipments grew explosively. In March 2017, Mobileye was acquired by chip giant Intel for $15.3 billion.
From 2014 to 2019, the company’s revenue compound growth rate reached 44%, with 2019 revenue of $879 million and a net profit margin of 27.9%. The shipment volume of the EyeQ series chips reached 17.4 million units in 2019. EyeQ1 to EyeQ4 chip models have already been mass-produced, and EyeQ5 is expected to be launched next year. EyeQ4 is mainly used to support semi-automated driving technologies, with a maximum support of L3 level, while EyeQ5 is primarily positioned for Level 4/5 autonomous driving applications.
By the end of 2019, Mobileye EyeQ chips had cumulatively shipped over 54 million units worldwide.
In September 2020, Mobileye revealed that the global shipment of EyeQ chips exceeded 60 million units.
This 60 million units include EyeQ2, EyeQ3, and EyeQ4, with the newly added part in 2020 mainly being EyeQ4.
Mobileye currently sells a tightly bundled sensor + chip + algorithm solution, which reduces customers' development flexibility. In the short term this helps grow market share and is welcomed by OEMs that transitioned late or have invested little in AI; in the long term, however, it leaves customers unable to build differentiated products. New car makers that need rapid product iteration, and OEMs pursuing fast transformation, therefore find Mobileye's "black box" approach hard to accept.
For example, the Chinese car manufacturer Xiaopeng Motors briefly tested Mobileye’s chips before deciding to switch to NVIDIA’s Xavier on the P7, mainly because Xiaopeng wanted to “decouple the chips and algorithms, use programmable chips, and conduct algorithm development and customization on the chips, integrating with scenarios,” thus choosing the more open NVIDIA.
EyeQ4 carries 4 CPU cores and 6 vector microcode processors (VMP), with four hardware threads per CPU core. The chip introduces a new category of accelerators: two multi-threaded processing cluster (MPC) cores and two programmable macro array (PMA) cores. It is fabricated in a 28nm FD-SOI process. Functionally, compared with EyeQ3, EyeQ4 adds features such as REM road network collection management, driving decision-making, arbitrary-angle vehicle recognition, and drivable-area detection.
The upcoming EyeQ5 will be equipped with 8 multi-threaded CPU cores and will also feature 18 next-generation Mobileye vision processors.
EyeQ5 is functionally more complex and will use a 7nm process. It supports up to 20 external sensors (cameras, radar, or LiDAR), with "sensor fusion" as its main purpose. Its computing performance reaches 12 TOPS at under 5 W of power, giving it 2.4 times the energy efficiency of the competing NVIDIA Drive Xavier. To achieve L4/L5 autonomous driving, Intel's autonomous driving system will adopt a camera-first design, pairing two EyeQ5 system chips with one Intel Atom chip and Mobileye software. EyeQ5 is expected to follow an "open" strategy, allowing Tier 1 and OEM partners to write their own code on an "open architecture", including sensor fusion and driving decision-making.
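The 2.4x figure is easy to check from the published numbers, assuming NVIDIA's roughly 30 TOPS at 30 W rating for Drive Xavier (a figure not given in this article):

$$\frac{12\ \text{TOPS}}{5\ \text{W}} = 2.4\ \text{TOPS/W}, \qquad \frac{30\ \text{TOPS}}{30\ \text{W}} = 1.0\ \text{TOPS/W}, \qquad \frac{2.4\ \text{TOPS/W}}{1.0\ \text{TOPS/W}} = 2.4$$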
EyeQ5 Mobileye’s SuperVision is about to be mass-produced
At the recent Zeekr 001 launch event, it was announced that this model's driving-assistance system will adopt Mobileye's SuperVision.
SuperVision is a 360° pure-vision intelligent driving system developed by Mobileye. "Pure vision" simply means that, like Tesla's FSD, it relies on cameras to deliver L2-and-above assisted driving capabilities.
The driver-assistance system in the Zeekr 001, Copilot, integrates 2 Mobileye EyeQ5 chips with the SuperVision visual perception algorithm, forming an L2+ level system.
The two EyeQ5H chips (24 TOPS at 10 W each) provide computational redundancy: the main chip runs the complete technology stack, while the other serves as a redundant backup that takes over if the main system fails.
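As a minimal sketch of this hot-standby pattern (class and method names are invented for illustration; Mobileye's actual failover interfaces are not public):

```python
class EyeQ5Channel:
    """Conceptual stand-in for one EyeQ5H running the driving stack.
    Names here are hypothetical, not Mobileye's actual API."""

    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def heartbeat(self) -> bool:
        # In real hardware this would be a watchdog / liveness check.
        return self.healthy

    def compute_trajectory(self, frame: int) -> str:
        return f"{self.name}: trajectory for frame {frame}"


def arbitrate(primary: EyeQ5Channel, backup: EyeQ5Channel, frame: int) -> str:
    """Hot-standby arbitration: use the primary chip's output unless its
    heartbeat fails, in which case the redundant chip takes over."""
    if primary.heartbeat():
        return primary.compute_trajectory(frame)
    return backup.compute_trajectory(frame)


main_chip = EyeQ5Channel("EyeQ5-main")
backup_chip = EyeQ5Channel("EyeQ5-backup")
print(arbitrate(main_chip, backup_chip, frame=1))  # served by main chip
main_chip.healthy = False                          # simulated main failure
print(arbitrate(main_chip, backup_chip, frame=2))  # backup takes over
```

In the real system the switchover must of course happen within a bounded latency in hardware; the sketch only illustrates the main/backup arbitration pattern.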
Sensor Configuration of Zeekr 001:
  • 15 cameras throughout the vehicle;
  • 2 EyeQ5H high-performance chips, built on TSMC's 7nm FinFET process, each delivering 24 TOPS, close to ten times the EyeQ4;
  • 1 long-range (250 m) millimeter-wave radar (LRR);
  • 12 ultrasonic radars.
Functions that can be achieved include:
  • Hands-free high-speed autonomous driving: including automatic lane change, navigation between different highways, automatic on/off ramps, and urban road assistance;
  • Automatic parking;
  • Standard ADAS functions: including AEB, ACC, and LKA, etc.;
  • DMS driver monitoring system.
Based on this platform, Zeekr 001 will achieve L2+ level autonomous driving in 2021, similar to Tesla's current assisted-driving capability, and will gradually achieve highway NoA and urban NoA by 2023.
Mobileye’s Subsequent Product Roadmap
The computing power EyeQ5 provides tops out at 24 TOPS, which is less competitive than rival offerings.
EyeQ6 is where Mobileye makes its real push into the high-performance, high-end segment.
EyeQ6 is expected to be mass-produced in 2024/2025, divided into high, medium, and low versions.
Mobileye began designing EyeQ5 in 2016, selecting MIPS’s I6500 architecture.
MIPS launched the I6500-F, based on the I6500, specifically for automotive applications, but its subsequent I7200 targets the wireless market.
Mobileye therefore abandoned the MIPS architecture for its next chip generation and decided to adopt Intel's Atom cores.
Atom is a long-running line in Intel's processor portfolio, and its typical automotive platform is Apollo Lake.
Launched in June 2016 on the Goldmont microarchitecture, Apollo Lake has since been widely used in the vehicle systems of Tesla, BMW, Cadillac, Hongqi, Hyundai, Volvo, and Chery.
EyeQ6 will not reach mass production until 2024, which looks somewhat late amid the competition among manufacturers.


Renesas Autonomous Driving Platform Solution Introduction
Renesas is the second largest automotive semiconductor manufacturer in the world, the largest automotive MCU manufacturer, and the largest semiconductor manufacturer in Japan, excluding Sony (Sony’s main business is image sensors).
Renesas covers both cockpit chips (including LCD instrument clusters and central-control navigation) and autonomous driving, with each product series offering entry-level and high-end versions. For example, its mid-range cockpit tier is the M level, and the M3 series is used in Volkswagen's Magotan and Passat, positioned as mid-range cockpits.
In high-performance vehicle computing, Renesas's top product is the R-CAR H3, used mainly in the cockpit field; Great Wall's latest H6 on the Lemon platform uses it.
The roadmap above shows that Renesas has been slow to launch ADAS chips, with R-Car Gen3-based ADAS parts rolling out from 2018 to 2020. R-Car Gen3 is built on Arm® Cortex®-A57/A53 cores using the 64-bit Arm CPU architecture. It provides the capability to process large volumes of data from the sensors around the vehicle, and lets developers trade off graphics against computer-vision resources when building entry-level or high-end systems.
The chip launched in 2018 is the R-CAR V3M, an SoC mainly for front camera applications. The challenge for front cameras is delivering high computer-vision performance while maintaining low power consumption and a high level of functional safety. Because the front camera sits close to the windshield, the heat from the component itself and the temperature rise from direct sunlight must both be considered, making the low-power requirement particularly stringent. R-Car V3M addresses this challenge and improves the efficiency of camera system development.
In 2019 came the second vision SoC, R-CAR V3H, which combines high-performance vision processing and AI processing with industry-leading low power consumption. Its target application is the front camera in L3 and L4 autonomous driving. The new-generation R-Car V3H is optimized for stereo front camera applications, improving vision processing performance five-fold over R-Car V3M.
• Four CPU cores: ARM® Cortex®-A53 (1000MHz)
• Supports dual Lockstep ARM Cortex-R7 (800MHz) CPU
• Single-channel 32bit memory controller LPDDR4-3200
• Supports image recognition engine (IMP-X5-V3H)
• Dedicated CNN hardware accelerator, dense optical flow processing, dense stereo vision disparity processing, and object classification algorithms
• Dual image signal processing (ISP)
• Video output (4 lanes × 1 channel LVDS, 1 channel digital)
• Video input (4 lanes × 2 channels MIPI-CSI2, 2 channels digital)
• Supports two CAN-FD interfaces
• One FlexRay interface
• Supports one Gigabit Ethernet and AVB Ethernet
• One PCI Express interface
The chip's AI computing power is 4 TOPS, making it well suited to image processing and to front-fusion processing of sensor data.
Front-fusion algorithms merge data at the raw layer; the merged data behaves like a super sensor that can simultaneously see infrared, camera RGB, and three-dimensional LiDAR data, like a pair of super eyes. On top of these super eyes, one can develop one's own perception algorithms and finally output an object-level result layer.
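To make the "super sensor" idea concrete, here is a minimal raw-level fusion sketch in Python: LiDAR points are projected into the camera image so that each pixel carries RGB plus depth. The intrinsics, extrinsics, and array sizes are invented placeholders, not values from any real V3H-based system.

```python
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],   # camera intrinsic matrix (illustrative)
              [0.0, 1000.0, 360.0],
              [0.0,    0.0,   1.0]])
T = np.eye(4)                          # LiDAR -> camera extrinsic transform

def fuse(image_rgb, lidar_xyz):
    """Return an H x W x 4 tensor: RGB channels plus a sparse depth channel."""
    h, w, _ = image_rgb.shape
    fused = np.concatenate([image_rgb, np.zeros((h, w, 1))], axis=-1)
    pts = np.hstack([lidar_xyz, np.ones((len(lidar_xyz), 1))]) @ T.T
    pts = pts[pts[:, 2] > 0]                    # keep points in front of camera
    uv = pts[:, :3] @ K.T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)   # perspective projection
    ok = (0 <= uv[:, 0]) & (uv[:, 0] < w) & (0 <= uv[:, 1]) & (uv[:, 1] < h)
    fused[uv[ok, 1], uv[ok, 0], 3] = pts[ok, 2]  # write depth into channel 3
    return fused

image = np.random.rand(720, 1280, 3)             # dummy camera frame
points = np.random.rand(5000, 3) * [20, 10, 50]  # dummy LiDAR point cloud
print(fuse(image, points).shape)                 # (720, 1280, 4)
```

A perception network trained on this fused RGB+depth tensor is what the text calls developing "perception algorithms on the super eyes".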
Renesas's V3H aims to build this super sensor for front fusion, and Bosch's next-generation vision system integrates V3H. But V3H has an inherent limitation: it cannot fuse radar data, which is why Renesas developed the more capable V3U chip.
V3U is based on the Renesas R-Car Gen 4 architecture, providing scalability from entry-level applications to highly automated driving systems. The part can be used for advanced driver assistance systems (ADAS) with air-cooled electronic control units (ECUs), giving it advantages in weight and cost.
V3U can process camera and radar sensor data simultaneously on a single chip while using AI for autonomous driving control and learning, meeting the highest ASIL D requirements of the automotive safety standard ISO 26262 to ensure system simplicity and safety.
The three major advantages of the R-Car V3U SoC are:
1. High energy efficiency and high-performance convolutional neural network (CNN) hardware accelerator
As the number of sensors used in new-generation ADAS and AD systems continues to increase, CNN processing performance needs to be continuously enhanced. By reducing the heat generated by power consumption, air-cooled electronic control units (ECU) can be installed, thereby reducing weight and cost.
Renesas has developed CNN hardware accelerator cores with excellent deep learning performance and has configured three of them in the R-Car V3U, each with 2MB of dedicated memory for a total of 6MB. This cuts data transfers between external DRAM and the CNN accelerators by over 90%.
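A back-of-envelope sketch of why the dedicated SRAM cuts DRAM traffic: if a layer's intermediate activations fit in the accelerator's 2 MB local memory, they never round-trip through external DRAM. The layer sizes below are invented for illustration; the actual >90% figure depends on the networks Renesas profiled.

```python
# Invented intermediate-activation sizes (bytes) for a small CNN.
layers_bytes = [1.5e6, 1.8e6, 0.9e6, 2.5e6, 1.2e6]
LOCAL_SRAM = 2e6                                 # 2 MB per CNN core

dram_without = sum(2 * b for b in layers_bytes)  # write out + read back each layer
dram_with = sum(2 * b for b in layers_bytes if b > LOCAL_SRAM)  # only oversized spill

print(f"DRAM traffic without local SRAM: {dram_without/1e6:.1f} MB")
print(f"DRAM traffic with 2 MB SRAM:     {dram_with/1e6:.1f} MB")
print(f"reduction: {100 * (1 - dram_with/dram_without):.0f}%")
```

The more of a network's layers fit in the 2 MB buffers, the closer the reduction gets to 100%, which is the mechanism behind the quoted figure.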
R-Car V3U also provides various programmable engines, including DSP for radar processing, multi-threaded computer vision engines for traditional computer vision algorithms, image signal processing for improving image quality, and other hardware accelerators for key algorithms such as dense optical flow, stereo disparity, and object classification.
Renesas has long worked in automotive electronics, so low power consumption is its forte: V3U achieves a striking energy efficiency of 13.8 TOPS/W, roughly six times that of the EyeQ5H (24 TOPS at 10 W, i.e. 2.4 TOPS/W).
R-Car V3U provides highly flexible DNN deep neural network and AI machine learning capabilities. Its flexible architecture can run all cutting-edge neural networks used for automotive obstacle detection and classification tasks, providing high performance of 60.4 TOPS while achieving best-in-class power efficiency of 13.8 TOPS/W.
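Dividing the two published figures gives the implied power draw at peak CNN throughput:

$$P \approx \frac{60.4\ \text{TOPS}}{13.8\ \text{TOPS/W}} \approx 4.4\ \text{W}$$

which is consistent with the air-cooled ECU claim above.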
2. ASIL D system safety mechanism with self-diagnostic capability
The ISO 26262 automotive functional safety standard sets quantitative targets for each functional safety level. The highest level, ASIL D, requires a single-point fault metric (SPFM) above 99% and a latent fault metric (LFM) above 90%, and thus a very high detection rate for random hardware faults. As new-generation ADAS and AD systems continue to evolve, automotive-grade SoCs are increasingly expected to meet ASIL D across their overall functionality.
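For reference, ISO 26262-5 defines the two metrics over the failure rates $\lambda$ of safety-related hardware elements, where $\lambda_{\text{SPF}}$, $\lambda_{\text{RF}}$, and $\lambda_{\text{MPF,latent}}$ are the single-point, residual, and latent multiple-point fault rates:

$$\text{SPFM} = 1 - \frac{\sum(\lambda_{\text{SPF}} + \lambda_{\text{RF}})}{\sum\lambda_{\text{safety-related}}} \geq 99\%, \qquad \text{LFM} = 1 - \frac{\sum\lambda_{\text{MPF,latent}}}{\sum(\lambda_{\text{safety-related}} - \lambda_{\text{SPF}} - \lambda_{\text{RF}})} \geq 90\%$$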
The internal structure of V3U is shown above: it adopts an 8-core Cortex-A76 design. Rather than stacking 12 A72 cores as Tesla did, Renesas used Arm's CoreLink CCI-500, a cache-coherent interconnect, chosen to help meet ASIL D.
Renesas has also developed safety mechanisms for quickly detecting and responding to random hardware faults occurring within the SoC as a whole. By combining safety mechanisms suitable for specific target functions, both power consumption can be reduced and fault detection rates can be improved. After integrating the above mechanisms into R-Car V3U, most of the signal processing of the SoC can meet ASIL D standards, and it can have self-diagnostic capabilities, reducing the complexity of fault-tolerant design in AD systems.
3. Support mechanism for freedom from interference (FFI) between software tasks
Support for freedom from interference (FFI) between software tasks is an important factor in meeting functional safety standards. When software components of different safety levels coexist in one system, it is crucial to prevent lower-level tasks from interfering with higher-level tasks and causing failures; likewise, FFI must be ensured inside the SoC when tasks access hardware modules and control registers in shared memory. Renesas therefore developed an FFI support mechanism that monitors all data flowing through the SoC's interconnect and blocks unauthorized access between tasks. This lets every task executed on the SoC achieve FFI, meeting ASIL D requirements, so that object identification, sensor and radar/LiDAR fusion, route planning, and control-command output can all be managed on a single chip.
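Conceptually, the mechanism behaves like a firewall on the interconnect. The sketch below shows the idea in Python; the safety levels, protected regions, and addresses are invented for illustration and say nothing about Renesas's actual hardware implementation.

```python
# Conceptual FFI firewall: every bus master carries a safety level, and
# writes to protected address ranges from lower-level tasks are blocked.
ASIL_QM, ASIL_B, ASIL_D = 0, 1, 2

PROTECTED_REGIONS = [
    # (start, end, minimum safety level required to write) -- invented values
    (0x4000_0000, 0x4000_FFFF, ASIL_D),  # e.g. actuation-control registers
    (0x5000_0000, 0x5FFF_FFFF, ASIL_B),  # e.g. shared sensor buffers
]

def check_access(master_level: int, addr: int, write: bool) -> bool:
    """Return True if the transaction may proceed, False if blocked.
    Reads are allowed here for simplicity; real hardware can gate those too."""
    if not write:
        return True
    for start, end, required in PROTECTED_REGIONS:
        if start <= addr <= end and master_level < required:
            return False  # a lower-ASIL task must not corrupt higher-ASIL state
    return True

assert check_access(ASIL_D, 0x4000_0010, write=True)       # allowed
assert not check_access(ASIL_QM, 0x4000_0010, write=True)  # blocked
```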
V3U is also a series of products that can provide multiple versions to meet the needs of different levels of autonomous driving, aimed at further increasing shipment volume and lowering costs.
The V3U product series adopts a modular design: the A76 cluster can be configured with 2, 4, or 8 cores, the GPU can be omitted, and peripherals can easily be added or removed, giving strong flexibility.
What V3U lacks in computing power, it makes up for in safety:
In raw technology terms, R-Car V3U is not especially advanced: NVIDIA's next-generation Orin series for autonomous driving, announced in May 2020, spans CNN computing power from 10 to 2,000 TOPS. V3U is fabricated by TSMC on a 12nm process, even though TSMC has already begun supplying 5 to 7nm chips.
Samples of R-Car V3U began shipping on December 17, 2020, when 12nm was still advanced for automotive chips; however, Renesas does not plan to mass-produce R-Car V3U until April-June 2023, which will be somewhat late.
Among the four major autonomous driving chip manufacturers, Mobileye, Renesas, NVIDIA, and Qualcomm, only Renesas has automotive semiconductors as its main business. Although its computing power lags slightly, it has the deepest understanding of the automotive industry and places the greatest weight on automotive standards. V3U is the only chip among them that meets ASIL-D, and with the backing of Japanese automakers, Renesas's prospects are strong.


Texas Instruments TI Autonomous Driving Platform Solution Introduction
TI is a traditional automotive chip vendor; together with NXP and Renesas, it is one of the three major traditional cockpit chip manufacturers.
TI's processors follow two product lines: the Jacinto series and the TDA series.
The Jacinto series focuses on digital processors for automotive applications, mainly for in-car infotainment systems.
From Jacinto 6 onward, however, in-car infotainment and ADAS functions converge. The chip includes two ARM Cortex-A15 cores, two ARM M4 cores, two C66x floating-point DSPs, several Imagination 3D/2D graphics processors, and two EVE accelerators. Besides processing entertainment and audio, it can use interior and exterior cameras to deliver functions such as object and pedestrian detection, augmented-reality navigation, and driver identification.
The TDA series has always focused on ADAS functions and shows strong compatibility across generations; the TDA2xV series hardware, for example, handles surround-view and rear-view image processing.
The TDA3x series supports a variety of ADAS algorithms, including lane-keeping assistance, adaptive cruise control, traffic sign recognition, pedestrian and object detection, forward collision warning, and rear collision warning, covering front camera, full-vehicle surround view, sensor fusion, radar, and intelligent rear camera applications.
Hardware and software across the TDA series are backward compatible, differing only in computing power and target applications, which makes migration very convenient.
Autonomous Driving Jacinto 7 Series Architecture Chips
The Jacinto 7 architecture includes two automotive-grade chips: the TDA4VM processor for ADAS and the DRA829V processor for gateway systems, both with dedicated accelerators for data-intensive tasks such as computer vision and deep learning. Both processors also integrate microcontrollers (MCUs) that support functional safety, allowing automakers (OEMs) and Tier 1 suppliers to support ASIL-D-grade safety-critical tasks and functions with a single chip.
Many people believe the release of the Jacinto 7 platform effectively announced TI's exit from cockpit domain-controller chips in favor of ADAS and gateway systems. Accordingly, most automakers have stopped selecting TI's Jacinto 6, as TI essentially no longer develops cockpit domain-control chips.
Brief Introduction to the DRA829V Processor:
Traditional vehicles use low-speed interfaces such as CAN and LIN in the gateway. As electronic control units have been upgraded at different rates, modern vehicles have evolved into a domain architecture, including a power domain and an ADAS domain, all of which require high-speed bus interfaces.
As vehicles become connected, more computing resources are needed to manage more data: PCIe and Ethernet are required for high-bandwidth ECU-to-ECU communication, and an eHSM is needed for network security alongside basic functionality and high-level functional safety.
The DRA829V processor is the industry’s first processor to integrate an on-chip PCIe switch, and it also integrates an Ethernet switch supporting 8 ports of Gigabit Ethernet with TSN support, enabling faster high-performance computing and vehicle-wide communication.
As the image shows, DRA829V is highly integrated, bringing the traditional safety MCU, the eHSM, and the Ethernet switch onto a single chip and reducing system design complexity. It also emphasizes isolation, maintaining stable performance even when functions of different safety levels are mixed.
The DRA829V SoC solves the challenges brought by new vehicle computing architectures by providing computing resources, efficiently moving data within vehicle computing platforms, and enabling communication throughout the vehicle network.
Many people confuse this chip with NXP's S32G; although both serve as gateways, their design goals differ.
NXP's S32G is designed as a mature network processor for tasks such as OTA upgrades of the various controllers, data gateway interactions, and secure information transfer; it does not focus on forwarding high-speed signals over PCIe.
DRA829V, by contrast, focuses on collecting and forwarding high-speed signals within the vehicle while also offering gateway control, though gateway control is only a supplementary function rather than its main role.
TDA4VM Autonomous Driving Chip
Since the vehicle models using this chip have not been exposed, let’s first take a look at the specifications.
1. Processor cores:
• C7x floating point, vector DSP, up to 1.0 GHz, 80 GFLOPS, 256 GOPS
• Deep-learning matrix multiply accelerator (MMA), up to 8 TOPS (8b) at 1.0 GHz
• Vision Processing Accelerators (VPAC) with Image Signal Processor (ISP) and multiple vision assist accelerators
• Depth and Motion Processing Accelerators (DMPAC)
• Dual 64-bit Arm® Cortex®-A72 microprocessor subsystem at up to 1.8 GHz, 22K DMIPS
– 1MB shared L2 cache per dual-core Cortex®-A72 cluster
– 32KB L1 DCache and 48KB L1 ICache per Cortex®-A72 core
• Six Arm® Cortex®-R5F MCUs at up to 1.0 GHz, 12K DMIPS
– 64K L2 RAM per core
– Two Arm® Cortex®-R5F MCUs in isolated MCU subsystem
– Four Arm® Cortex®-R5F MCUs in general compute partition
• Two C66x floating point DSP, up to 1.35 GHz, 40 GFLOPS, 160 GOPS
• Custom-designed interconnect fabric supporting near max processing entitlement
Memory subsystem:
• Up to 8MB of on-chip L3 RAM with ECC and coherency
– ECC error protection
– Shared coherent cache
– Supports internal DMA engine
• External Memory Interface (EMIF) module with ECC
– Supports LPDDR4 memory types
– Supports speeds up to 3733 MT/s
– 32-bit data bus with inline ECC up to 14.9GB/s
• General-Purpose Memory Controller (GPMC)
• 512KB on-chip SRAM in MAIN domain, protected by ECC
Safety: targeted to meet ASIL-D for MCU island and ASIL-B for main processor
• Integrated MCU island subsystem of Dual Arm® Cortex®-R5F cores with floating point coprocessor and optional lockstep operation, targeted to meet ASIL-D safety requirements/certification
– 512B Scratchpad RAM memory
– Up to 1MB on-chip RAM with ECC dedicated for R5F
– Integrated Cortex®-R5F MCU island isolated on separate voltage and clock domains
– Dedicated memory and interfaces capable of being isolated from the larger SoC
• The TDA4VM main processor is targeted to meet ASIL-B safety requirements/certification
– Widespread ECC protection of on-chip memory and interconnect
– Built-in self-test (BIST)
Specifications are usually published in English, so here is a brief explanation of the key performance parameters.
The TDA4VM processor core adopts C7x floating point, vector DSP, up to 1.0 GHz, 80 GFLOPS, 256 GOPS;
The deep learning matrix multiplication accelerator (MMA) can achieve up to 8 TOPS (8b) at 1.0 GHz;
Vision processing accelerators (VPAC) and image signal processors (ISP) and multiple vision assist accelerators;
Depth and motion processing accelerators (DMPAC);
The TDA4VM processor only uses 5 to 20W of power to perform high-performance ADAS calculations without the need for active cooling.
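A quick back-of-envelope read of these numbers (an inference, not TI's stated microarchitecture): at 1.0 GHz,

$$\frac{8\times10^{12}\ \text{ops/s}}{1.0\times10^{9}\ \text{cycles/s}} = 8000\ \text{ops/cycle} \approx 4096\ \text{MACs/cycle}$$

counting each multiply-accumulate as two operations; and the quoted 5 to 20 W envelope puts deep-learning efficiency at roughly 0.4 to 1.6 TOPS/W.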
Overview of High-Performance Cores:
The “C7x” next-generation DSP integrates TI’s industry-leading DSP and EVE cores into a single higher-performance core and adds floating-point vector computation capabilities, achieving backward compatibility with legacy code while simplifying software programming. The new “MMA” deep learning accelerator can achieve up to 8 TOPS performance within the industry’s lowest power envelope when operating at a maximum temperature of 125°C in typical automotive worst-case scenarios. Dedicated ADAS/AV hardware accelerators can provide visual preprocessing as well as distance and motion processing without affecting system performance.
TI’s TDA4VM processor series is based on the Jacinto™ 7 architecture, aimed at driving assistance systems (ADAS) and autonomous vehicles (AV). The TDA4VM processor has strong on-chip data analysis capabilities and, combined with visual preprocessing accelerators, makes the system performance more efficient. Automotive manufacturers and Tier 1 suppliers can use it to develop front camera applications, using high-resolution 8-megapixel cameras to help vehicles see further and add more driving assistance features.
In addition, the TDA4VM processor can operate 4 to 6 3-megapixel cameras simultaneously, while also fusing various perception processing such as radar, LiDAR, and ultrasonic waves on a single chip. This multi-level processing capability allows TDA4VM to serve as the centralized processing unit for ADAS, enabling key functions in applications such as automatic parking (e.g., surround view and image rendering display) while enhancing vehicle perception capabilities to achieve 360-degree recognition and perception.
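To put that camera load in perspective, assuming a typical 30 fps frame rate (not specified in the source):

$$6\ \text{cameras} \times 3\ \text{Mpixels} \times 30\ \text{fps} = 540\ \text{Mpixels/s}$$

all of which must pass through the ISP while radar, LiDAR, and ultrasonic streams are fused on the same die.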
Judged by overall chip performance and functionality within an autonomous driving system architecture, TI's ADAS chips are actually very similar to Renesas's V3H: both focus on fusing image and radar data, put low power consumption first, rely on highly optimized algorithms to stretch the chip's processing capability, and pass the processed signals on to a control chip.
TDA4VM Advantages:
Improving vehicle perception capabilities with lower power consumption
By taking in data from cameras, radar, and LiDAR, ADAS technology helps vehicles see and adapt to the surrounding world. The flood of information into the vehicle means processors or systems-on-chip must manage multi-level data processing quickly and effectively in real time while staying within the system's power budget. TI's new processor performs high-performance ADAS computation at just 5 to 20W, with no need for active cooling.
The TDA4VM processor delivers high-performance computing for both traditional and deep-learning algorithms at an industry-leading power/performance ratio, achieving high system integration that enables scalability and lower costs for advanced automotive platforms, whether built around centralized ECUs or independent sensors across multiple sensing modes.
Key cores include the next-generation DSP with scalar and vector cores, dedicated deep learning and traditional algorithm accelerators, the latest Arm and GPU processors for general computing, integrated next-generation imaging subsystems (ISP), video codecs, Ethernet hubs, and isolated MCU islands, all protected by automotive-grade safety hardware accelerators.


Horizon Autonomous Driving Platform Solution Introduction
Horizon has leading artificial-intelligence algorithm and chip design capabilities, and develops high-performance, low-cost, low-power edge AI chips and solutions through hardware-software co-design, targeting intelligent driving and AIoT. Horizon offers edge AI chips with an exceptional cost-performance ratio, extreme power efficiency, an open toolchain, rich algorithm model examples, and comprehensive enablement services.
Relying on industry-leading hardware-software co-designed products, Horizon provides customers with a complete "chip + algorithm + toolchain" solution. In intelligent driving, Horizon's business ties with the four major automotive markets (the United States, Germany, Japan, and China) keep deepening; the partners it has enabled include top Tier 1s and OEMs such as Audi, Bosch, Changan, BYD, SAIC, and GAC.
In the AIoT field, Horizon has worked with partners to enable several national-level development zones, leading domestic manufacturers, modern shopping centers, and well-known brand stores. Based on its innovative AI-specific computing architecture, the BPU (Brain Processing Unit), Horizon has mass-produced China's first edge AI processors: the "Journey" series for intelligent driving and the "Sunrise" series for AIoT, both already widely commercialized.
On the road to automotive-grade chips, the company has shown strong patience and long-term strategic planning. The launch of the Changan UNI-T in June 2020 made the second-generation Journey China's first commercially mass-produced vehicle-mounted AI chip, a significant milestone. Whereas other AI chip newcomers first entered consumer scenarios such as smartphones and cameras to grow revenue quickly, Horizon chose the hardest path, aiming to scale the Everest of the AI industry, automotive-grade AI chips, and to compete with the traditional chip giants.
Founded in 2015, Horizon reached mass production of automotive-grade AI chips in just 5 years, opening the first year of mass production for domestic automotive-grade AI chips. The company holds multiple OEM project orders and expects explosive revenue growth from 2020 to 2023. Considering the time needed for sample tape-out, automotive-grade certification, and model introduction, reaching mass production in 5 years puts Horizon in a leading position in automotive electronics. By comparison, Mobileye took 8 years from R&D to official commercialization of its automotive-grade chips, and NVIDIA, the global leader in general-purpose AI chips, took 9 years after releasing CUDA before its K1 chip appeared in the Audi A8's vehicle systems.
The Journey series chips can simultaneously support AI applications for intelligent vehicle cabins and autonomous driving applications, applied to both intelligent cabin and autonomous driving domains, ultimately becoming the main control chip for centralized computing platforms. Currently, the second-generation Journey can support L2 autonomous driving applications, and the next-generation chip will support L3/L4 autonomous driving applications.
Future intelligent cabins will upgrade their interaction methods: using in-car vision (optical) and voice (acoustic) together with chassis and body data from the steering wheel, brake pedal, accelerator, gear shifter, and seat belts, and applying biometric technology (mainly in-cabin face and voice recognition), the vehicle can comprehensively assess the driver's (or passengers') physiological state (facial features and the like) and behavioral state (driving behavior, speech, body movement), so that the vehicle truly "understands" its occupants and the cabin evolves into an all-round "personal assistant."
Therefore, the second-generation Journey chip released by Horizon last year possesses strong multi-modal perception algorithm support capabilities for intelligent cabins. It was officially commercialized in the Changan SUV model UNI-T launched in April 2020, and the intelligent cabin functions such as gaze-activated display, distraction reminders, fatigue monitoring, and intelligent voice photography have all reached mature and stable high standards of user experience.
Currently, the second-generation Journey can perform real-time detection and accurate recognition of multiple target types, providing high-precision and low-latency perception output to meet the needs of various visual applications for L2 level intelligent driving, such as visual perception, visual mapping and positioning, and visual ADAS, as well as intelligent human-computer interaction functions such as voice recognition, eye tracking, and gesture recognition.
It can run more than 60 classification tasks simultaneously and recognize over 2,000 targets per second, fully meeting the various visual application needs of L2-level intelligent driving. Mass-produced vehicles equipped with Journey series chips delivering ADAS functions are expected between 2020 and 2021.
In January 2020, Horizon announced the launch of the new generation autonomous driving computing platform—Matrix 2.0, equipped with Horizon’s second-generation automotive-grade chips, capable of meeting L2 to L4 autonomous driving requirements. In terms of perception, Matrix 2.0 can support multi-sensor perception and fusion, including cameras and LiDAR, achieving up to 23 categories of semantic segmentation and six categories of object detection. The perception algorithms can also cope with complex environments and support stable perception results even in special scenarios or extreme weather conditions.
In the Robotaxi field, Horizon has reached cooperation with several top autonomous driving operation companies. Currently, Matrix is applied to nearly a thousand test vehicles and has commenced commercial operational services. In the complete vehicle manufacturer field, Horizon has long collaborated with Audi in advanced autonomous driving technology R&D and productization, assisting Audi in obtaining L4 road test licenses in Wuxi, and Audi China conducted the first L4 autonomous driving and vehicle-road collaboration demonstration in actual highway scenarios using the Matrix computing platform.
The new product roadmap is clear: the next-generation chips are all in R&D or tape-out, with single-chip computing power expected to approach 100 TOPS and support for up to 16 video inputs. The successful commercialization of the second-generation Journey marks a new milestone for the company, which has already secured multiple front-fitted (OEM) project orders from customers in several countries. Follow-on product upgrades and planning are advancing rapidly, with the strong commercial performance resting on continuous forward-looking technology exploration and fast iteration of its AI chips.
On May 9, domestic automotive AI chip maker Horizon officially announced its third-generation automotive-grade product, a high-computing-power chip aimed at L4 high-level autonomous driving: the Journey 5 series completed tape-out ahead of schedule and powered on successfully.
Billed as the industry's first fully integrated intelligent central computing chip for autonomous driving and intelligent interaction, the Journey 5 series is developed under an SGS TÜV Saar-certified automotive functional safety (ISO 26262) product development process, delivers up to 128 TOPS of AI computing power per chip, and supports perception computing on 16 camera inputs. On top of the Journey 5 series, Horizon will launch a line of intelligent driving central computers spanning 200 to 1000 TOPS of AI computing power, claiming the industry's highest FPS (frames per second) and lowest power consumption.
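Taking the 128 TOPS single-chip figure at face value, and assuming near-linear multi-chip scaling (which the announcement does not state), those computer tiers imply roughly:

$$\frac{200\ \text{TOPS}}{128\ \text{TOPS/chip}} \approx 2\ \text{chips}, \qquad \frac{1000\ \text{TOPS}}{128\ \text{TOPS/chip}} \approx 8\ \text{chips}$$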
Before J5, Horizon had successively launched automotive-grade chips J2 and J3. Currently, J2 and J3 have achieved mass production in models from several automotive manufacturers such as Changan and Chery, with subsequent orders from several domestic brands such as Great Wall, Dongfeng Lantu, GAC, Jianghuai, Li Xiang, and SAIC (listed in alphabetical order) for multiple flagship models expected to be delivered in the coming 1-2 years.
J5 will be Horizon’s first high-performance chip aimed at high-level autonomous driving, set to be officially released within this year. According to previously disclosed information, vehicles based on J5 are expected to enter mass production in 2022.

