Introduction to Hardware and Software Technologies for Autonomous Vehicles

This article is reprinted from Zhihu.

As everyone knows, an intelligent vehicle is a comprehensive system that integrates environmental perception, planning and decision-making, and multi-level driver-assistance functions. It draws on computer science, modern sensing, information fusion, communications, artificial intelligence, and automatic control, making it a typical high-tech complex.

The key technologies of autonomous driving can be divided into four major parts: environmental perception, behavior decision-making, path planning, and motion control.

The theory of autonomous driving sounds simple: just four key technologies. So how is it actually implemented? Google has been working on autonomous driving since 2009, and after eight years the technology is still not ready for mass production, which shows that autonomous driving is anything but simple. It is a large, intricate project involving a great many technologies and an enormous amount of detail. I will discuss the technologies involved in autonomous vehicles from both the hardware and the software perspective.

Hardware

It would be unreasonable to talk about autonomous driving without discussing hardware. First, let’s look at a diagram that basically includes all the hardware needed for autonomous driving research.

[Figure: overview of the hardware used in autonomous driving research]

However, not all of these sensors will necessarily appear on a single vehicle. Which sensors are present depends on the tasks the vehicle needs to accomplish. For example, if the vehicle only needs to drive autonomously on highways, like Tesla’s AutoPilot function, there is no need for laser sensors; if it needs to drive autonomously in urban areas, it is very difficult to manage with vision alone and no laser sensors.

Autonomous driving system engineers must select hardware and control costs based on the tasks at hand. This is somewhat similar to assembling a computer; you provide me with the requirements, and I will provide you with a configuration list.

Vehicles

Since we are doing autonomous driving, a vehicle is of course essential. Judging from SAIC’s experience developing autonomous driving, it is best not to choose a pure gasoline vehicle if it can be avoided. On one hand, the power consumption of the entire autonomous driving system is enormous, and hybrid and pure electric vehicles have an obvious advantage here. On the other hand, the low-level control algorithms for an engine are far more complex than those for an electric motor; rather than spending large amounts of time calibrating and debugging the low-level systems, it is better to choose an electric vehicle outright and research the higher-level algorithms.

Some media in China have also looked specifically into the choice of test vehicles, asking questions such as “Why did Google and Apple both choose the Lexus RX450h (a hybrid)?” and “What do technology companies consider when selecting test vehicles for their autonomous driving technology?” They concluded that “electricity” and “space” are crucial when converting an unmanned vehicle, and that technical “familiarity with the vehicle” is another factor. If the vehicle manufacturer will not cooperate with the modifications, certain control systems have to be “hacked.”

Controllers

During the early algorithm research phase, the most direct controller solution is an Industrial PC (IPC), because IPCs are more stable and reliable than embedded devices and enjoy richer community support and accompanying software. Baidu’s open-source Apollo recommends an IPC model that includes a GPU, the Nuvo-5095GC, shown below.

[Figure: the Nuvo-5095GC IPC recommended by Apollo (source: GitHub, ApolloAuto)]

When the algorithm research matures, embedded systems can be used as controllers, such as the zFAS jointly developed by Audi and TTTech, which is already applied in the latest Audi A8 production vehicles.

[Figure: the zFAS controller jointly developed by Audi and TTTech]

CAN Card

The IPC talks to the vehicle chassis through a specialized language: CAN. To obtain the current vehicle speed and steering wheel angle, the data the chassis sends onto the CAN bus must be parsed; conversely, once the IPC has computed the desired steering wheel angle and vehicle speed from the sensor information, the CAN card must convert those commands into signals the chassis can recognize so that it responds accordingly.
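To make this concrete, here is a hedged sketch of reading and parsing chassis frames over Linux SocketCAN. The interface name "can0", the frame ID 0x123, and the byte layout of the speed signal are all hypothetical; real IDs and layouts come from the OEM's DBC file, and a commercial CAN card such as Apollo's ESD board ships with its own driver API instead.

```cpp
#include <cstdio>
#include <cstring>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>
#include <unistd.h>

int main() {
    // Open a raw CAN socket on the (assumed) interface "can0".
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) return 1;

    struct ifreq ifr;
    std::strcpy(ifr.ifr_name, "can0");
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, reinterpret_cast<struct sockaddr*>(&addr), sizeof(addr));

    struct can_frame frame;
    while (read(s, &frame, sizeof(frame)) == sizeof(frame)) {
        if (frame.can_id == 0x123) {  // hypothetical "chassis speed" frame ID
            // Hypothetical layout: speed in bytes 0-1, little-endian, 0.01 km/h per bit.
            double speed_kmh = ((frame.data[1] << 8) | frame.data[0]) * 0.01;
            std::printf("vehicle speed: %.2f km/h\n", speed_kmh);
        }
    }
    close(s);
    return 0;
}
```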

The CAN card can be directly installed in the IPC and then connected to the CAN bus through an external interface. The CAN card used by Apollo is the ESD CAN-PCIe/402, as shown in the picture below.

[Figure: the ESD CAN-PCIe/402 CAN card]

Global Positioning System (GPS) + Inertial Measurement Unit (IMU)

To drive from point A to point B, a human needs to know the map from A to B as well as their current location, in order to decide whether to turn right or go straight at the next intersection. The same goes for an autonomous driving system: it relies on GPS + IMU to determine its position (latitude and longitude) and its heading. The IMU can, of course, also provide richer information such as yaw rate and angular acceleration, which helps with the positioning and decision-making control of the autonomous vehicle.

Apollo’s GPS model is NovAtel GPS-703-GGG-HV, and the IMU model is NovAtel SPAN-IGM-A1.

[Figure: the NovAtel GPS-703-GGG-HV antenna and SPAN-IGM-A1 IMU]

Perception Sensors

I’m sure everyone is familiar with onboard sensors. Perception sensors come in many types: visual sensors, laser sensors, radar sensors, and so on. Visual sensors are cameras, either monocular or binocular (stereo). Well-known visual sensor providers include Israel’s Mobileye, Canada’s Point Grey, and Germany’s Pike.

Laser sensors range from single-line up to 64-line models. The cost increases by 10,000 RMB for each additional line, but detection performance improves accordingly. Well-known laser sensor providers include Velodyne and Quanergy from the USA, Ibeo from Germany, and SUTENG (RoboSense) from China.

Radar sensors are a strength of the Tier 1 automotive suppliers, since radar has long been widely used in vehicles. Well-known suppliers include Bosch, Delphi, and Denso.

Summary of Hardware

Assembling an autonomous driving system that can perform a given function requires extensive experience and a thorough understanding of each sensor’s performance boundaries and the controller’s computational capability. Excellent systems engineers keep costs to a minimum while meeting the functional requirements, raising the likelihood of mass production and deployment.

Software

Software consists of four layers: perception, fusion, decision-making, and control. Each layer requires coding to transform information, and a more detailed breakdown follows.

First, let me share a PPT released by a startup.

[Figure: architecture slide from a startup’s presentation]

Implementing an intelligent driving system involves several levels, and each level requires coding to transform information.

The most basic levels fall into the following categories: data collection and preprocessing, coordinate transformation, and information fusion.

[Figure: data collection, preprocessing, coordinate transformation, and information fusion]

Collection

Sensors communicate with our PC or embedded module over different transports. For example, images collected from cameras may arrive over a Gigabit Ethernet card or directly through a video cable, while some millimeter-wave radars send their data to downstream devices over the CAN bus, so we must write code to parse the CAN messages.

Different transmission media require different protocols to parse the information; this is the “driver layer” mentioned earlier. In simple terms, it gathers everything the sensors produce and encodes it into data the team can use.

Preprocessing

Once the sensor information is obtained, not all of it is useful.

The sensor layer sends data frame by frame at a fixed frequency to the downstream modules, but the downstream cannot feed every single frame into decision-making or fusion. Why not?

Because a sensor’s output is not 100% reliable. If we judged whether there is an obstacle ahead from a single frame (which might be a false detection), it would be grossly irresponsible toward downstream decision-making. The upstream modules therefore need to preprocess the information to confirm that an obstacle in front of the vehicle persists over time, rather than flashing by for a single frame.

This is where Kalman filtering, an algorithm used constantly in intelligent driving, comes into play.
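As a flavor of how such filtering works, below is a minimal one-dimensional Kalman filter sketch that smooths a noisy obstacle distance across frames. The process and measurement noise values are illustrative assumptions, not tuned parameters from any real system.

```cpp
#include <cstdio>

struct Kalman1D {
    double x = 0.0;    // state estimate (e.g. obstacle distance, m)
    double p = 1.0;    // estimate variance
    double q = 0.01;   // process noise: how fast the true state may drift
    double r = 0.25;   // measurement noise: how much we distrust each frame

    double update(double z) {
        p += q;                  // predict: uncertainty grows between frames
        double k = p / (p + r);  // Kalman gain: weight given to the new frame
        x += k * (z - x);        // correct the estimate with measurement z
        p *= (1.0 - k);          // uncertainty shrinks after the correction
        return x;
    }
};

int main() {
    Kalman1D kf;
    kf.x = 50.0;  // initialize near the first reading
    const double meas[] = {50.2, 49.7, 50.1, 55.0, 50.0};  // 55.0: outlier frame
    for (double z : meas)
        std::printf("raw %.1f -> filtered %.2f\n", z, kf.update(z));
    return 0;
}
```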

Coordinate Transformation

Coordinate transformation is very important in intelligent driving. Sensors are installed in different places; the millimeter-wave radar, for example, is mounted at the front of the vehicle. When it reports an obstacle 50 meters away, do we treat the obstacle as being 50 meters from the vehicle?

No! The decision and control layer plans the vehicle’s motion in the vehicle coordinate system (whose origin O is usually the center of the rear axle), so the 50 meters detected by the millimeter-wave radar must have the distance from the sensor to the rear axle added to it.

Ultimately, all sensor information must be transformed into the vehicle coordinate system so that it can be used uniformly for planning and decision-making.

Similarly, cameras are usually installed below the windshield, and the data obtained is also based on the camera coordinate system, so the data provided to downstream also needs to be transformed into the vehicle coordinate system.

[Figure: sensor and vehicle coordinate systems]

Vehicle coordinate system: hold up your right hand and assign X, Y, Z in the order thumb → index finger → middle finger, then shape your hand as follows:

[Figure: the right-hand rule for the vehicle coordinate system]

Place the intersection of the three axes (the base of the index finger) at the center of the vehicle’s rear axle, with the Z-axis pointing towards the roof of the vehicle and the X-axis pointing in the direction of vehicle movement.

Each team may define the direction of the coordinate system differently, but as long as there is internal consistency within the development team, it is fine.
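To make the idea concrete, here is a minimal sketch of a 2-D rigid transform from a sensor frame into the vehicle (rear-axle) frame. The mounting offsets and yaw are hypothetical calibration values; a real system would use a calibrated 3-D transform.

```cpp
#include <cmath>
#include <cstdio>

struct Point2D { double x, y; };

// Rigid 2-D transform: rotate by the sensor's mounting yaw, then
// translate by the sensor's position expressed in the vehicle frame.
Point2D sensorToVehicle(const Point2D& p, double mountX, double mountY, double mountYaw) {
    Point2D out;
    out.x = std::cos(mountYaw) * p.x - std::sin(mountYaw) * p.y + mountX;
    out.y = std::sin(mountYaw) * p.x + std::cos(mountYaw) * p.y + mountY;
    return out;
}

int main() {
    // Radar mounted 3.5 m ahead of the rear axle, facing straight forward.
    Point2D radarDet = {50.0, 0.0};  // obstacle 50 m away in the radar frame
    Point2D veh = sensorToVehicle(radarDet, 3.5, 0.0, 0.0);
    std::printf("obstacle at %.1f m from the rear axle\n", veh.x);  // 53.5
    return 0;
}
```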

Information Fusion

Information fusion means merging several pieces of information that describe the same attribute into one. For example, suppose the camera detects an obstacle directly ahead of the vehicle, and the millimeter-wave radar and the lidar also detect an obstacle ahead, while in reality there is only one obstacle. We then need to fuse the information from the multiple sensors and tell the downstream modules that there is one obstacle ahead, not three.

[Figure: fusing multiple detections of the same obstacle]
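Below is a deliberately simplified sketch of this idea: detections from different sensors that fall within a gating distance of each other are treated as the same obstacle and merged by averaging. Production systems use probabilistic data association and track management, but the gating logic conveys the intent.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Detection { double x, y; };  // position in the vehicle frame (m)

std::vector<Detection> fuse(const std::vector<Detection>& dets, double gate) {
    std::vector<Detection> fused;
    std::vector<int> counts;
    for (const Detection& d : dets) {
        bool merged = false;
        for (size_t i = 0; i < fused.size(); ++i) {
            if (std::hypot(d.x - fused[i].x, d.y - fused[i].y) < gate) {
                // Running average of all detections assigned to this obstacle.
                fused[i].x = (fused[i].x * counts[i] + d.x) / (counts[i] + 1);
                fused[i].y = (fused[i].y * counts[i] + d.y) / (counts[i] + 1);
                counts[i]++;
                merged = true;
                break;
            }
        }
        if (!merged) { fused.push_back(d); counts.push_back(1); }
    }
    return fused;
}

int main() {
    // Camera, radar, and lidar each report (roughly) the same obstacle ahead.
    std::vector<Detection> dets = {{50.1, 0.2}, {49.8, -0.1}, {50.0, 0.0}};
    std::vector<Detection> obstacles = fuse(dets, 2.0);
    std::printf("%zu obstacle(s) ahead\n", obstacles.size());  // 1
    return 0;
}
```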

Decision Planning

This level is mainly concerned with planning correctly once the fused data is available. Planning covers longitudinal control and lateral control. Longitudinal control is speed control: when to accelerate and when to brake. Lateral control is behavior control: when to change lanes, when to overtake, and so on.

I am not very familiar with this area, so I dare not make reckless comments.

What does software look like?

Some of the software in an autonomous driving system looks like the screenshot below.

[Figure: a typical autonomous driving software interface]

The name of each program reflects its actual function.

However, in reality, developers will also write other software for their debugging work, such as tools for recording and replaying data.

[Figure: a tool for recording and replaying data]

There are also visualization programs for displaying sensor information, similar to the effects shown in the image below.

[Figure: visualization of sensor information]

Having grasped the ideas behind the software, let’s look at what preparations you need to make.

Preparation

Operating System Installation

Since we are developing software, we first need an operating system. Common options include Windows, Linux, and macOS (I have not used anything else). Considering community support and development efficiency, Linux is the recommended operating system for autonomous driving research.

Most teams working on autonomous driving use Linux, and following the trend can save a lot of trouble.

Linux comes in many distributions; the most commonly used and most widely adopted is the Ubuntu series. Although Ubuntu has already reached version 17.04, for stability it is recommended to install 14.04.

It is recommended to install Linux on a separate SSD or inside a virtual machine; dual-booting is not recommended (it can be unstable). Here is the Ubuntu 14.04 installation package together with a virtual machine installation guide. (Link: http://pan.baidu.com/s/1jIJNIPg Password: 147y.)

Basic Linux Commands

The command line is the heart of Linux: it is not only a great help in development but also a powerful installation tool. One particular benefit is that “apt-get install” can quickly install many software packages, with no need to hunt for compatible installers online as one would on Windows. Linux commands are numerous and varied, and using them well takes practice and study.

Development Environment Installation

The development environment involves many libraries that are used in practice, and different programmers may solve the same problem with different libraries. Below I introduce some libraries I use frequently in my work and studies, to help developers get started.

Required installation packages for setting up the environment:

[Figure: the installation packages required for setting up the environment]
(Link: http://pan.baidu.com/s/1sllta5v Password: eyc8)

Appendix: Introduction to Development Environment

Integrated Development Environment (IDE)

Earlier I installed Qt Creator, the IDE for the open-source Qt framework, which holds a position on Linux similar to Visual Studio’s on Windows. Unless you are a highly skilled developer who works without an IDE, most teams developing under Linux choose Qt. Its main strength is building interactive interfaces, for example displaying the various kinds of sensor information collected by the vehicle; an interactive interface greatly speeds up debugging and parameter calibration.
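As a minimal illustration of such an interface, the sketch below shows a Qt 5 widget that displays a speed value and refreshes it on a timer; the periodic update is a stand-in for live sensor input.

```cpp
#include <QApplication>
#include <QLabel>
#include <QTimer>

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    QLabel label("speed: -- m/s");
    label.resize(240, 60);
    label.show();

    // Hypothetical periodic update, standing in for real sensor input.
    QTimer timer;
    double speed = 0.0;
    QObject::connect(&timer, &QTimer::timeout, [&]() {
        speed += 0.1;
        label.setText(QString("speed: %1 m/s").arg(speed, 0, 'f', 1));
    });
    timer.start(100);  // refresh every 100 ms
    return app.exec();
}
```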

OpenCV

OpenCV is a powerful library that encapsulates a large number of functions applicable to autonomous driving research, including various filtering algorithms, feature point extraction, matrix operations, projection coordinate transformations, machine learning algorithms, etc.

Of course, most importantly, it is highly influential in the field of computer vision, providing convenient interfaces for camera calibration and for target detection, recognition, and tracking. Using the OpenCV library, one can achieve effects like those shown in the figure.

[Figure: detection results produced with OpenCV]
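As a small taste, the sketch below loads an image, extracts feature points with goodFeaturesToTrack, and draws them. It assumes OpenCV 3.x and a local file road.png, which is a hypothetical input.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("road.png");  // hypothetical input image
    if (img.empty()) return 1;

    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

    // Feature point extraction: up to 200 corners, quality 0.01, 10 px apart.
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners, 200, 0.01, 10);

    for (size_t i = 0; i < corners.size(); ++i)
        cv::circle(img, corners[i], 3, cv::Scalar(0, 255, 0), -1);

    cv::imshow("features", img);
    cv::waitKey(0);
    return 0;
}
```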

Tips:

Here is a detailed electronic reference; I recommend printing it one chapter at a time and working through it step by step.

(Link: http://pan.baidu.com/s/1dE5eom9 Password: n2dn)

libQGLViewer

libQGLViewer is a well-known library that adapts OpenGL to Qt; its programming interface and usage are very close to OpenGL’s. The environment perception visualizations that appear in various autonomous driving companies’ promotional materials can be reproduced entirely with QGL.

[Figure: point-cloud style visualization rendered with libQGLViewer]

Tips:

Learning libQGLViewer does not require buying any textbook; the official website and the examples shipped in the source package are the best teachers. Follow the tutorials on the official website and implement each example, and you will basically be up to speed.

Official website link: libQGLViewer Home Page
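For reference, here is a minimal viewer modeled on the official simpleViewer example, assuming libQGLViewer and Qt are installed; the ring of points is a hypothetical stand-in for a point cloud.

```cpp
#include <QGLViewer/qglviewer.h>
#include <QApplication>
#include <cmath>

class Viewer : public QGLViewer {
protected:
    virtual void init() {
        setSceneRadius(10.0);  // scene bounds used for camera navigation
    }
    virtual void draw() {
        // Draw a hypothetical "point cloud": a ring of points in the XY plane.
        glBegin(GL_POINTS);
        for (int i = 0; i < 360; ++i) {
            float a = i * 3.14159f / 180.0f;
            glVertex3f(5.0f * std::cos(a), 5.0f * std::sin(a), 0.0f);
        }
        glEnd();
    }
};

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    Viewer viewer;
    viewer.show();
    return app.exec();
}
```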

Boost

The Boost library is known as the “quasi-standard library” of C++. It contains a large number of ready-made “wheels” that C++ developers can call directly, avoiding the need to reinvent them.

Tips:

Boost is built on standard C++, and its implementation uses sophisticated techniques. Do not rush to spend time studying it; instead, find a book on Boost (electronic or paper) and read through the table of contents to get a general idea of what it offers, then dig into specific pieces when you actually need them.
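As one tiny example of such a “wheel,” the sketch below splits a comma-separated record with boost::split instead of hand-rolling a tokenizer; the record format is made up for illustration.

```cpp
#include <boost/algorithm/string.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::string line = "obstacle,12.5,-3.2,car";  // hypothetical sensor record
    std::vector<std::string> fields;
    boost::split(fields, line, boost::is_any_of(","));  // tokenize on commas
    for (const std::string& f : fields)
        std::cout << f << std::endl;
    return 0;
}
```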

QCustomPlot

Besides the libQGLViewer mentioned above, we can also display onboard sensor information as 2-D plots. Since Qt itself only provides basic drawing primitives such as straight lines and circles, which are not very convenient, QCustomPlot was created. Simply call its API and pass in the data you want to display, and you can draw excellent plots like the ones below, with easy dragging and zooming built in.

Below is some sensor information I displayed with QCustomPlot during actual development.

Tips:

The official website provides the library’s source code; you only need to import the .cpp and .h files into your project. Follow the tutorials on the official website to get started quickly, and by consulting the examples folder you can quickly turn your own data into visual plots.
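Here is a minimal sketch of that workflow, assuming qcustomplot.h and qcustomplot.cpp have been added to the project; the speed-over-time data is fabricated for illustration, and the setInteractions call enables the dragging and zooming mentioned above.

```cpp
#include <QApplication>
#include <QVector>
#include <QtMath>
#include "qcustomplot.h"

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    QCustomPlot plot;

    // Fabricated speed-over-time data for illustration.
    QVector<double> t(101), v(101);
    for (int i = 0; i <= 100; ++i) {
        t[i] = i * 0.1;                  // seconds
        v[i] = 10.0 + 2.0 * qSin(t[i]);  // m/s
    }
    plot.addGraph();
    plot.graph(0)->setData(t, v);
    plot.xAxis->setLabel("time (s)");
    plot.yAxis->setLabel("speed (m/s)");
    plot.rescaleAxes();
    // Enable the drag and zoom interactions mentioned above.
    plot.setInteractions(QCP::iRangeDrag | QCP::iRangeZoom);
    plot.resize(480, 320);
    plot.show();
    return app.exec();
}
```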

LCM (Lightweight Communications and Marshalling)

When a team develops software, communication between processes (multi-process IPC) is unavoidable. There are many interprocess communication mechanisms, each with its own advantages and disadvantages, and the choice is largely a matter of preference. In December 2014, MIT released LCM, the message passing mechanism they had used in the DARPA Robotics Challenge in the USA (source: MIT releases LCM driver for MultiSense SL).

LCM supports multiple languages, including Java and C++, and is designed specifically for sending messages and marshalling data in real-time systems with high bandwidth and low latency. It provides a publish/subscribe messaging model and tools that generate marshalling/unmarshalling code in several languages. The model is very similar to the way nodes communicate in ROS.

Tips:

The demo for communication between two processes using LCM is available on the official website, and by following the tutorial on the website, you can quickly establish your own LCM communication mechanism.

Official website: LCM Project
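For flavor, here is a minimal publisher following the pattern of the official C++ tutorial. It assumes the tutorial’s exlcm/example_t type has been generated with lcm-gen; the channel name and field values are illustrative.

```cpp
#include <lcm/lcm-cpp.hpp>
#include "exlcm/example_t.hpp"  // generated by lcm-gen from example_t.lcm

int main() {
    lcm::LCM lcm;
    if (!lcm.good()) return 1;

    // Fill in the fields defined by the tutorial's example_t type.
    exlcm::example_t msg;
    msg.timestamp = 0;
    for (int i = 0; i < 3; ++i) msg.position[i] = 0.0;
    for (int i = 0; i < 4; ++i) msg.orientation[i] = 0.0;
    msg.num_ranges = 0;   // variable-length array left empty
    msg.name = "sketch";
    msg.enabled = true;

    // Any process subscribed to the "EXAMPLE" channel receives this message.
    lcm.publish("EXAMPLE", &msg);
    return 0;
}
```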

Git & Github

Git is an indispensable version control tool for team development. Think of writing a paper: everyone saves a new version every day, and if you never note what changed in each one, you forget over time. The same goes for writing code.

Using Git can greatly improve the efficiency of multi-person development, and version management is standardized, making it very convenient to trace back to the code.

GitHub is a household name in software development; when you need a certain piece of code, you can simply search for it there.

Tips:

Many of the current books about Git are hard to read and dig too deeply into the details, making it difficult to get started quickly.

I therefore strongly recommend Liao Xuefeng’s Git tutorial as an introduction: simple and easy to understand, with accompanying images and videos. An excellent resource.

This concludes the basic introduction; mastering these things will make you an experienced driver in the field of autonomous driving.
