
The penetration rate of new energy vehicles in China's automotive market has exceeded 30%, and there is no longer a clear boundary between traditional automakers and new energy vehicle makers; the crossover between them keeps deepening. In cockpit integration in particular, the two can no longer exist and develop in parallel as independent entities.
After AMD completed its combination with Xilinx in 2022, the trend toward cockpit integration, and its value, came into sharper focus. Immersive performance has long been AMD's strength, whether in 3D reconstruction for the driving domain or gaming in the entertainment domain, while the former Xilinx brings deep advantages in safety for driving-domain control, from early ECUs to today's ADAS domain controllers. The combined AMD can fully integrate these two platforms over the PCIe bus, creating an ideal, fully scalable, and secure platform for cockpit integration.
You are welcome to join the keynote on June 15, presented by Mr. Feng Yi, Director of Market and Business Development at AMD AECG, to discuss the innovation path of China's automotive industry.
Keynote Speech: Empowering Innovation in China’s Automotive Industry

June 15 | 10:30 – 11:30
Innovation Forum – Main Venue
Feng Yi (Bob Feng)
Director of Market and Business Development, AMD AECG
In the race toward autonomous driving, sensor fusion has become a prerequisite. It brings together data from cameras, millimeter-wave radar, LiDAR, and ultrasonic sensors to collect raw measurements, recognize color, and measure distance. Automakers still debate which sensor combinations and fusion methods are truly suited to autonomous driving, and two approaches are currently popular: an intelligent sensor that fuses LiDAR and vision, and a central computing domain controller that fuses data of every type.
Can these two solutions be realized through the same platform?
Challenge 1: The central computing approach must stream multiple channels of raw sensor data to the central domain controller in real time for neural network processing; the throughput bottleneck created by this volume of concurrent data leads to inefficiency (see the rough estimate after these two challenges).
Challenge 2: LiDAR and vision fusion must handle algorithm tuning as well as data calibration and matching during the fusion process.
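To make Challenge 1 concrete, here is a rough, illustrative estimate; the figures are assumptions for the sake of arithmetic, not data from AMD or any automaker. A single 8-megapixel camera streaming 12-bit raw frames at 30 fps produces roughly 8,000,000 pixels × 12 bits × 30 fps ≈ 2.9 Gb/s. A vehicle carrying eight such cameras plus LiDAR and radar therefore has to move on the order of 20–25 Gb/s of concurrent raw data into the domain controller, which is why throughput, and not only compute, becomes the bottleneck.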
The AMD adaptive platform offers high precision, high throughput, low latency, customizability, and flexibility. On the central computing side, it decouples the hardware and reduces processing complexity through a Sensor Hub approach. On the LiDAR and vision fusion side, it performs hardware-level front-end image fusion on the FPGA, with hardware-level spatial alignment and time synchronization, which also reduces calibration cost and error. Domestic LiDAR manufacturer Tanway has used this technology to provide more complete environmental information and improve data robustness, and has reached volume production by economically and effectively combining the strengths of visual and radar data in a single sensor.
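For readers unfamiliar with what "spatial alignment" involves in practice, the sketch below shows the core geometric step that any LiDAR-camera fusion must perform: projecting each LiDAR point into the camera image using calibrated extrinsics and intrinsics. It is a minimal, illustrative Python sketch with hypothetical names, assuming a pinhole camera model without distortion; it is not AMD's or Tanway's implementation, which carries out this alignment in FPGA hardware.

import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    # points_lidar: (N, 3) LiDAR points; R (3x3), t (3,): LiDAR-to-camera extrinsics;
    # K (3x3): camera intrinsic matrix (pinhole model, no distortion).
    points_cam = points_lidar @ R.T + t          # transform into the camera frame
    in_front = points_cam[:, 2] > 0              # keep points in front of the camera
    points_cam = points_cam[in_front]
    pixels_h = points_cam @ K.T                  # perspective projection (homogeneous)
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]  # normalize to pixel coordinates
    return pixels, points_cam[:, 2]              # pixel positions and point depths

Once every LiDAR point has a pixel position and a depth, its range can be attached to the corresponding image region; hardware-level time synchronization keeps this per-frame association consistent.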
You are welcome to register for the June 14 session, where AMD and Tanway will jointly explain how AMD adaptive computing addresses these challenges.
Special Topic Sharing: AMD Adaptive Computing Empowering
Fusion from Radar Sensors to Domain Controllers

June 14 | 15:50 – 16:10
Innovation Forum – Subforum 5 (C003)
Mao Guanghui (Garfield Mao)
Senior Market Manager, AMD Greater China
Automotive Business System Architect

Zheng Ruitong
CTO of Tanway Technology
The growing demand for advanced medical imaging systems calls for more powerful computing solutions that can deliver high-quality, real-time imaging across many channels. Working with Dr. Joergen Jensen of the Technical University of Denmark, AMD has developed an ultra-fast beamformer for high-end ultrasound imaging systems.
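As background for readers outside the ultrasound field, the heart of a receive beamformer is delay-and-sum: echoes received on each channel are delayed according to their acoustic path length and summed coherently for every image point. The following is a deliberately simplified, illustrative Python sketch under assumed parameters (speed of sound, sampling rate, a plane-wave-like transmit approximation); it is not the beamformer discussed in this session, which processes far more channels at much higher rates in programmable hardware.

import numpy as np

def delay_and_sum(rf_data, element_x, focus, c=1540.0, fs=40e6):
    # rf_data: (n_channels, n_samples) received echoes; element_x: (n_channels,)
    # lateral element positions [m]; focus: (x, z) image point [m];
    # c: speed of sound [m/s]; fs: sampling rate [Hz].
    fx, fz = focus
    rx_dist = np.sqrt((element_x - fx) ** 2 + fz ** 2)   # image point back to each element
    tx_dist = fz                                         # plane-wave-like transmit path
    delays = (tx_dist + rx_dist) / c                     # round-trip time per channel [s]
    idx = np.clip(np.round(delays * fs).astype(int), 0, rf_data.shape[1] - 1)
    samples = rf_data[np.arange(rf_data.shape[0]), idx]  # delayed sample per channel
    return samples.sum()                                 # coherent sum = beamformed value

Repeating this for every pixel of a frame, over a hundred or more channels at tens of frames per second, is what makes real-time beamforming so computationally demanding.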
Today, artificial intelligence is widely used in intelligent surgical systems. Surgical robots built on FPGA technology provide precise control of multi-joint robotic arms, high-resolution and ultra-low-latency 3D display, and real-time AI inference, advancing the development of intelligent, precise surgical robots and making surgery safer and more efficient. Colleagues from the medical industry are welcome to join this special session to discuss the applications of medical imaging, surgical robots, and artificial intelligence in surgery and diagnosis, and to learn about AMD's 7nm heterogeneous SoC architecture and solutions.
Special Topic Sharing: How to Build High-Performance
Medical Imaging and Intelligent Surgical Robot Systems

June 14 | 10:30 – 11:20
Weng Yuxiang
AMD Industrial and Visual System Architect
Beyond the keynote and technical sessions above, AMD sincerely invites you to visit our booth (Hall 3, A180). We will demonstrate how our adaptive and embedded computing solutions integrate seamlessly with partner technologies in artificial intelligence, immersive intelligent cockpits, automotive perception systems, electronic rearview mirrors, KVM, and industrial and energy-storage applications, helping you quickly build cost-effective products and solutions for the intelligent edge.

Scan the QR code below to register for the 2023 Shanghai International Embedded Exhibition

Note: After scanning, you will be redirected to the official registration page for the 2023 Shanghai International Embedded Exhibition. Please follow the registration requirements of the exhibition to complete your registration.
AMD’s booth is located at Hall 3 A180. We look forward to seeing you at the exhibition.