Advancing ADAS Mass Production: Competing in Multi-Sensor Fusion
According to Gaogong Intelligent Automotive, as China's ADAS market enters a new 2.0 cycle, installation rates and functional benchmarks for new vehicles are rising rapidly. Current L2-level ADAS solutions rely primarily on multi-sensor fusion.

For OEMs, large-scale mass production means balancing high cost-performance with differentiated functional experiences. It also tests a Tier 1 supplier's engineering capability end to end, from technical implementation to flexible response, and multi-sensor fusion remains a threshold and challenge for many manufacturers. At present, very few Tier 1 suppliers in China truly possess mass-production capability for multi-sensor fusion.

Much as the human brain synthesizes information from multiple senses, the basic principle of multi-sensor data fusion is to exploit multiple sensor resources.
By rationally allocating and using the information each sensor monitors, fusion combines redundant or complementary data from multiple sensors across space and time to obtain a consistent interpretation or description of the measured object. In short, sensor fusion acquires data from multiple sensors and consolidates it for comprehensive analysis, yielding a more accurate and reliable description of the external environment and improving the correctness of system decisions and execution.

Implementing multi-sensor fusion at the hardware level is not difficult; the main challenges and technical barriers are concentrated in the algorithms, which account for a significant share of the value chain. Because the volume of data collected by multiple sensors is enormous, and the information may even be contradictory, the fusion algorithms must be sufficiently optimized.

How to Optimize?
According to Liu Xi, a perception-fusion algorithm and software expert at Furitek, optimization is not the same as sophistication. "The key to fusion is not how sophisticated the algorithm is, but fault tolerance, complementarity, and cost-effectiveness."

Because multiple sensors are redundant, when an individual sensor fails or perceives incorrectly, functionality can degrade gracefully rather than fail completely: accurate information can still be recovered through fusion. The ultimate goal of fusion is to concentrate the strengths of the sensors, so that the system processes data quickly, filters out useless and erroneous information, and still makes timely, correct decisions.

"The purpose of sensor fusion is to provide accurate environmental perception while the user is driving and to make correct control decisions from it. Fusion technology must therefore be oriented toward mass production and withstand testing," Liu Xi emphasized.

Whether fusion is done well is not just a matter of stacking algorithms; it depends on accumulated mass-production experience and continuous correction and improvement based on customer feedback. Furitek, for example, accelerates the screening of problem scenarios through an integrated ground-truth system, letting data circulate quickly and focusing engineers' effort on categorizing and iteratively solving problems. Automation tools plus focused engineering analysis improve efficiency, drive innovation, and secure the delivery of mass-production projects.

The biggest challenge of fusion is to anticipate all possible failure scenarios before mass production and prepare countermeasures in advance.
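The fault-tolerance principle described here (degrade rather than fail when one sensor drops out) can be sketched as a simple inverse-variance weighted fusion of redundant measurements. This is a minimal illustration under invented names and values, not Furitek's actual algorithm:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Measurement:
    """One sensor's estimate of a target's longitudinal position (m)."""
    value: float
    variance: float        # sensor noise; larger = less trusted
    valid: bool = True     # False when the sensor has failed or dropped out

def fuse(measurements: list[Measurement]) -> Optional[float]:
    """Inverse-variance weighted fusion of redundant measurements.

    A failed sensor is simply excluded: the output degrades (higher
    uncertainty) but does not disappear unless every sensor is down.
    """
    live = [m for m in measurements if m.valid]
    if not live:
        return None                      # total failure: no estimate
    weights = [1.0 / m.variance for m in live]
    return sum(w * m.value for w, m in zip(weights, live)) / sum(weights)

# Camera and radar roughly agree; radar is more precise at range.
camera = Measurement(value=50.8, variance=4.0)
radar = Measurement(value=50.2, variance=1.0)
print(fuse([camera, radar]))             # weighted toward the radar reading
camera.valid = False                     # e.g. camera blinded by glare
print(fuse([camera, radar]))             # degrades to radar-only: 50.2
```

The weighting direction is the design point: the noisier sensor still contributes, but proportionally less, which is one way the "complementarity" Liu Xi mentions can be expressed numerically.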
Accumulating real road data is therefore crucial. "Although it is impossible to predict which scenarios will definitely cause functional failures," Liu Xi says, "we can keep working with OEMs to accumulate real-world data and to extract and solve the potential corner cases that could lead to functional failures." Through extensive data analysis and investigation, the failure patterns of sensors can be identified.

For instance, a visual tracker may occasionally lose information about a target; fusing in radar information and predicting from the target's overall motion trajectory can bridge the gap. Occasional jitter in the output position of visual targets at night can be smoothed by filtering inside the fusion algorithm, which references the historical path during tracking to make the output position and the predicted trajectory more stable.

Liu Xi points out that today's widely used fusion technology concentrates on integrating the target information each sensor outputs (position, speed, and so on), yet in real scenarios target information alone cannot solve certain extreme cases. A near-field cut-in target may be partially occluded, so the camera cannot confirm it quickly while the radar struggles to determine its motion trend. In such scenarios, auxiliary information such as image segmentation can aid the decision. Segmenting the image yields partial physical characteristics of the target, such as edges and orientation, much as a human driver quickly judges whether a target is intruding into the driving area.
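The coasting-through-dropouts and jitter-smoothing ideas above can be sketched with a constant-velocity alpha-beta filter, one of the simplest track filters. It is an illustrative stand-in for whatever filtering Furitek actually uses; all gains and numbers are invented:

```python
from typing import Optional

class AlphaBetaTracker:
    """Constant-velocity alpha-beta filter: smooths jittery detections and
    coasts through short dropouts by predicting from the track history."""

    def __init__(self, x0: float, v0: float = 0.0,
                 alpha: float = 0.5, beta: float = 0.1, dt: float = 0.05):
        self.x, self.v = x0, v0                  # position (m), velocity (m/s)
        self.alpha, self.beta, self.dt = alpha, beta, dt

    def update(self, z: Optional[float]) -> float:
        """Feed one detection, or None if the target was lost this frame."""
        x_pred = self.x + self.v * self.dt       # predict from track state
        if z is None:                            # dropout: coast on prediction
            self.x = x_pred
            return self.x
        r = z - x_pred                           # innovation (residual)
        self.x = x_pred + self.alpha * r         # partial trust in measurement
        self.v = self.v + (self.beta / self.dt) * r
        return self.x

tracker = AlphaBetaTracker(x0=30.0, v0=-2.0)
for z in [29.9, 29.7, None, None, 29.4]:         # two frames of lost vision
    print(round(tracker.update(z), 2))           # coasts through the gaps
```

The `alpha` gain trades jitter rejection against responsiveness, which is the same trade-off the nighttime-jitter example describes: lower gains lean harder on the historical trajectory.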
The motion trends of such features can help assess the target's danger level.

How to Fuse?

From requirements decomposition and algorithm development to software implementation and system integration, Furitek offers complete system development capability to meet OEMs' diverse ADAS requirements across different consumer profiles. While pursuing differentiation, keeping the fusion algorithms platformized and modular is a key factor in guaranteeing software delivery time and quality.

Reusable modular development is the foundation of the software design. Each module is encapsulated behind a fixed interface with relatively fixed functionality, so it can be reused across projects, satisfying both reusability and flexibility while keeping development workload under control. During platformization of the fusion algorithms, flexible configuration parameters adapt the same software to the sensor setups of different vehicle models. "We set different configuration parameters for each customer's sensor layout and import them during the EOL phase, so the fusion algorithm automatically matches the vehicle model," Liu Xi said.
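The EOL-imported configuration approach Liu Xi describes might look something like the following sketch. Every field name and value here is hypothetical, not Furitek's schema:

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorLayout:
    """Per-vehicle-model sensor setup, written to the ECU once at EOL.
    All fields are illustrative placeholders."""
    camera_mount_height_m: float     # camera height above ground
    camera_pitch_deg: float          # camera mounting pitch
    radar_offset_x_m: float          # front radar position in vehicle frame
    radar_count: int                 # 1R, 3R, 5R variants

def load_layout(eol_blob: str) -> SensorLayout:
    """Parse the configuration imported during the EOL phase, so the same
    fusion binary adapts itself to the vehicle model it is installed in."""
    return SensorLayout(**json.loads(eol_blob))

# Two vehicle models, one fusion software stack:
sedan = load_layout('{"camera_mount_height_m": 1.35, "camera_pitch_deg": 1.2,'
                    ' "radar_offset_x_m": 3.6, "radar_count": 1}')
suv = load_layout('{"camera_mount_height_m": 1.55, "camera_pitch_deg": 0.8,'
                  ' "radar_offset_x_m": 3.9, "radar_count": 3}')
```

The design point is that the algorithm code never branches on vehicle model; only the parameter blob differs, which is what makes platformized delivery across customers tractable.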
Another trend is fusing data at multiple levels. Sensor output is commonly categorized as target-level, feature-level, or raw-level.

Target-level fusion means each sensor independently detects and tracks targets, and the fusion algorithm selects the most suitable attributes to output from the different target-level inputs. The mainstream L2 sensor set is a camera plus millimeter-wave radar; given the bandwidth limits of the transmission bus, refined target-level information serves as the primary fusion input. Many mature algorithms exist for this strategy, but it is constrained by its inputs and susceptible to interference from erroneous target information in complex conditions.

Feature-level fusion means each sensor detects obstacle information without tracking it, outputting obstacle features, while the fusion algorithm performs the clustering and tracking. Raw-level fusion determines target attributes from each sensor's raw output, such as raw images and point clouds.

In Liu Xi's view, raw-level fusion will see more use in high-level autonomous driving. Achieving high-level autonomy means many more sensors, which demands more computing power from the central controller; at the same time, to support faster and safer data flow, OEMs are redefining electronic and electrical architectures to enable massive data interconnectivity. Cameras, for example, output image information while millimeter-wave radars provide raw measurements, and the central domain controller processes the data. Raw-level data may also include lidar point clouds, high-definition maps, and IMU and positioning information.
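One way to make the three levels concrete is to list what each one requires to cross the vehicle bus, which is where the bandwidth constraint mentioned above bites. The mapping below is purely illustrative; the signal names are invented:

```python
from enum import Enum, auto

class FusionLevel(Enum):
    TARGET = auto()    # each sensor tracks its own targets; fuse track lists
    FEATURE = auto()   # sensors emit untracked detections; fusion clusters+tracks
    RAW = auto()       # fusion consumes raw images / point clouds directly

def required_inputs(level: FusionLevel) -> list[str]:
    """Illustrative mapping from fusion level to bus payload.
    Bandwidth cost rises sharply from target-level to raw-level."""
    return {
        FusionLevel.TARGET:  ["camera_track_list", "radar_track_list"],
        FusionLevel.FEATURE: ["camera_detections", "radar_detections"],
        FusionLevel.RAW:     ["camera_frames", "radar_raw_measurements",
                              "lidar_point_cloud", "hd_map", "imu_pose"],
    }[level]
```

Reading the raw-level row makes the article's architectural point visible: that payload cannot fit a classic CAN topology, which is why raw-level fusion is tied to redefined electrical architectures and central domain controllers.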
Perception, fusion, decision-making, and other algorithms will move into the active-safety control domain for unified processing. "Richer information gives us more options for tackling complex scenarios," Liu Xi explains. Relying on image recognition alone to identify a small obstacle on the road often leads to misjudgment, whereas confirmation through lidar significantly reduces misjudgment and provides more accurate obstacle information.

Furitek's current products cover three mainstream solutions: 1V1R, 1V3R, and 1V5R1D, of which 1V1R has already achieved large-scale L2 pre-installation in China.

Verification Is Essential

Verification methods are crucial to making the system robust. When road testing uncovers a defect in the fusion strategy, the scenario is also reproduced in simulation and made harder, exposing more shortcomings so engineers can consider solutions more comprehensively. In Furitek's verification toolchain, VTD computes and renders the simulated scene and feeds the perception system, which issues control requests via the decision algorithms; dSPACE calls the veDYNA vehicle-dynamics model in real time; and a PC runs ControlDesk for monitoring and AutomationDesk for automated testing.

This toolchain supports rapid iteration of mass-production software. The simulation tools can reproduce different lighting, road conditions, and weather, letting the camera image virtual scenes in real time and output detection results, while integrated vehicle models accelerate verification. Data feedback is another key verification method: Furitek has accumulated millions of kilometers of road data.
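A data-replay regression pass over that accumulated road data might look like the following sketch. The harness, frame format, and tolerance are invented for illustration and are not Furitek's toolchain:

```python
from typing import Callable, Iterable

def replay_regression(frames: Iterable[dict],
                      old_fusion: Callable[[dict], float],
                      new_fusion: Callable[[dict], float],
                      tolerance: float = 0.5) -> list[int]:
    """Replay recorded road frames through both software versions and
    flag the frame indices where the new output diverges beyond tolerance,
    so engineers can triage exactly those scenarios."""
    regressions = []
    for i, frame in enumerate(frames):
        if abs(new_fusion(frame) - old_fusion(frame)) > tolerance:
            regressions.append(i)
    return regressions

# Toy example: the new version shifts its range estimate at close range.
frames = [{"range_m": r} for r in (40.0, 35.0, 30.0)]
old = lambda f: f["range_m"]
new = lambda f: f["range_m"] + (2.0 if f["range_m"] < 32 else 0.0)
print(replay_regression(frames, old, new))   # flags the close-range frame: [2]
```

The value of replay over live testing is reproducibility: the same millions of kilometers can be rerun against every candidate build, so a divergence is attributable to the software change rather than to traffic.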
Every new software version is replayed against this real-world data before delivery to customers, to ensure all indicators meet requirements. "We also have a closed proving ground with long straights and curves, allowing real-vehicle testing of functional behavior and collision tests with balloon cars and standard dummies," Liu Xi mentioned. Simulation testing, bench testing, and real-vehicle testing together give products more verification opportunities before mass production, ensuring product quality.

From development through testing and verification, Furitek's technology has approached the international first tier, continuously supporting mass-production development and deployment of the new generation of ADAS and autonomous driving, advancing OEMs' pursuit of safety and convenience in the era of automotive intelligence, and giving end users a more relaxed and safer driving experience.