From April 25 to May 4, 2024, the Beijing International Auto Show is being held, with renowned domestic and international automotive brands showcasing their new products at a high standard. The show has set up a dedicated “Smart Driving Future Exhibition Area”, where the word “smart” appears everywhere; autonomous driving and intelligent connectivity have become the most eye-catching elements of this auto show.
On May 1, the “Regulations on the Testing and Application of Intelligent Connected Vehicles in Hangzhou” officially came into effect. Hangzhou thus became the first city in the country, apart from the special economic zones, to define through local legislation the specific process for putting autonomous vehicles on the road, and the first city in the country to legislate for low-speed unmanned vehicles. Starting that day, Hangzhou designated all eight of its urban districts plus Tonglu County as testing and application areas for intelligent connected vehicles, covering a population of more than 10 million.
A senior executive at a domestic technology company recently stated, “2024 is the first year of large-scale commercial use of intelligent driving.” Tesla CEO Elon Musk, after arriving in Beijing on April 28, expressed his delight at the progress of electric vehicles in China, stating, “In the future, all cars will be electric.” Musk has also said recently, “In the future, almost all cars will be autonomous,” and “People will enter cars like they enter elevators, without even thinking about it.”
In fact, in recent years, many domestic and foreign automotive manufacturers and technology companies have been investing heavily in autonomous driving, accelerating the process of intelligentization in the automotive industry. The prospects for autonomous driving are full of imagination.
However, for many people, “autonomous driving” is still a term full of futurism and technology, evoking both aspiration and concern: How hard is autonomous driving to achieve? What happens if the system has a bug? Is it safer than a human driver?
Is Autonomous Driving Difficult to Achieve?
To answer this question, we can start with the levels of autonomous driving.
Currently, according to the standards of SAE International (the Society of Automotive Engineers), automotive driving-automation technology is divided into six levels, from L0 to L5: L0 is no automation; L1 and L2 are driver-assistance technologies; L3 is the dividing line, as L3 and above count as autonomous driving, though at L3 a human must still take over driving when the system requests it; L4 allows autonomous driving without human intervention in most scenarios; and L5 is fully autonomous driving in any scenario.
△Source: SAE International
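For readers who think in code, the level taxonomy above can be boiled down to a small lookup table. The Python sketch below is purely illustrative, based only on the definitions in this article rather than any official SAE artifact:

```python
# Illustrative only: the SAE driving-automation levels as described in this
# article, encoded as a simple lookup table.
SAE_LEVELS = {
    0: ("No Automation", "Human does all the driving."),
    1: ("Driver Assistance", "System assists with steering OR speed."),
    2: ("Partial Automation", "System handles steering AND speed; human supervises."),
    3: ("Conditional Automation", "System drives; human must take over when requested."),
    4: ("High Automation", "No human intervention needed in most scenarios."),
    5: ("Full Automation", "Autonomous driving in any scenario."),
}

def is_autonomous(level: int) -> bool:
    """Per the article, L3 and above count as autonomous; L1-L2 are assistance."""
    return level >= 3

for lvl, (name, desc) in SAE_LEVELS.items():
    tag = "autonomous" if is_autonomous(lvl) else "assistance"
    print(f"L{lvl} {name} [{tag}]: {desc}")
```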
In fact, research on autonomous driving technology can be traced back to the 1950s, and in recent years it has become a focal point of competition among automotive manufacturers. However, to this day, most automotive manufacturers are still at the L2 level, the assisted-driving stage. The mainstream view in the industry is that the technology will progress gradually from L2, iterating continuously until it reaches L4 advanced autonomous driving; for a long time, however, L2 may coexist with L4.
△Reference image
It is evident that achieving autonomous driving is not easy. It is an extremely complex systems-engineering task spanning disciplines such as sensors, computing, artificial intelligence, communications, navigation, machine vision, and intelligent control; both hardware and software, as well as the policy environment, have a significant impact on its development.
What Are The Paths To Implementation?
There is not just one way to achieve autonomous driving.
He Xiang, a data scientist at Haima Intelligent, explained that there are currently two main technical paths. One is the fusion perception route centered on LiDAR, which combines LiDAR, cameras, and millimeter-wave radar into a fused perception system, represented by companies such as Huawei, Xiaomi, and NIO. The other is the pure visual perception route, represented by Tesla. The pure visual route places high demands on both vehicle-side and cloud-side computing power, which is why Tesla has developed its own AI chips for in-vehicle computing and cloud training.
Whichever of these two technical paths is chosen, the system must complete three stages: perception, decision-making, and execution. How to understand this? An autonomous driving system operates like a human driver, relying on its “eyes and ears”, “brain”, and “hands and feet” to take on the roles of perception, decision-making, and execution, respectively.
△Reference image
Its “eyes” and “ears” can be high-definition cameras, LiDAR, millimeter-wave radar, ultrasonic radar, and other perception hardware that rapidly and accurately identify the environment, including vehicles, pedestrians, traffic lights, and obstacles, so that the system “sees things clearly.” The two technical routes above are distinguished mainly by how they perform environmental perception, which is the foundation for the two subsequent stages.
The “brain” of autonomous driving consists of the chips and algorithms used for decision-making: How to drive safely? Which route is more efficient? This is “understanding the road.” The control system executes the actions, acting as the “hands” and “feet.”
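To make the three-stage division concrete, here is a minimal Python sketch of the perception, decision-making, and execution loop. Every class and function name here is a hypothetical illustration, not any real vehicle’s API:

```python
# Hypothetical sketch of the perception -> decision -> execution loop
# described above. Names and values are illustrative assumptions.
import time

class PerceptionSystem:          # the "eyes and ears"
    def sense(self) -> dict:
        # A real system would fuse cameras, LiDAR, and radar into a world model.
        return {"obstacles": [], "traffic_light": "green", "lane": "center"}

class DecisionSystem:            # the "brain"
    def decide(self, world: dict) -> dict:
        # A real system runs planning and prediction on dedicated chips.
        if world["traffic_light"] == "red" or world["obstacles"]:
            return {"throttle": 0.0, "brake": 1.0, "steer": 0.0}
        return {"throttle": 0.3, "brake": 0.0, "steer": 0.0}

class ControlSystem:             # the "hands and feet"
    def act(self, command: dict) -> None:
        # A real system actuates steering, throttle, and brakes.
        print(f"executing {command}")

perception, decision, control = PerceptionSystem(), DecisionSystem(), ControlSystem()
for _ in range(3):               # in practice this loop runs many times per second
    control.act(decision.decide(perception.sense()))
    time.sleep(0.1)
```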
Let’s first take a look at the fusion perception route, which is currently the main way for domestic automotive manufacturers to achieve autonomous driving.
In simple terms, this involves equipping the vehicle with high-definition cameras, LiDAR, millimeter-wave radar, and other sensors to collect information around the vehicle, and then making decisions based on high-precision maps.
This route has very clear advantages: the sensors complement one another, each playing to its strengths, so the diverse data sources make the final perception results more stable and reliable, and the resulting decisions more accurate. For example, cameras provide rich visual detail, while LiDAR provides precise distance and speed data and maintains recognition performance in poor conditions such as nighttime, fog, rain, and snow, compensating for the cameras’ shortcomings.
The biggest disadvantage of this technical route is its cost: high-precision maps and LiDAR both require significant investment.
△Reference image
He Xiang explained that high-precision maps serve a similar function to navigation maps, but provide richer road information, including lane positions, road signs, and traffic signals. This information helps vehicles accurately locate and make navigation decisions. “However, one of its application difficulties is that the time and economic costs required for mapping are very high.”
Industry insiders report that the sensors carried by map-collection vehicles are quite expensive, and collection costs average about 1,000 yuan per kilometer. Huawei is reported to have spent two to three years collecting high-precision map data in Shanghai, covering only 22,000 kilometers, while covering the entire city would require 36,000 kilometers.
Moreover, maps need continuous maintenance to stay “fresh.” “Urban roads change every day; data collected today may already need revising tomorrow,” industry insiders explained. Meanwhile, for national-security reasons, the government only allows manufacturers to refresh maps every few months, and each refresh must go through a lengthy approval process, making high-precision maps difficult to use at scale. In practice, when actual road conditions do not match the map, the system requires the driver to take over the vehicle, and frequent takeovers directly hurt the consumer experience.
△Reference image
At this Beijing Auto Show, many models have already adopted map-free intelligent driving solutions. By discarding high-precision maps, autonomous driving can adapt to any environment, whether it has been mapped or not. “High-precision maps are costly and slow to update, and are gradually being abandoned by urban intelligent driving solutions,” He Xiang said. To move away from high-precision maps, some companies have begun using cameras to reconstruct the vehicle’s surroundings in multiple dimensions in real time; this environmental data can also serve as the basis for simulation training (constructing rare accident scenarios in a virtual environment to train the system’s responses).
In addition, although the price of a LiDAR unit has dropped from around $70,000-$80,000 in 2017 to a few thousand yuan today, that is still not cheap compared with cameras costing a few hundred yuan. “For a vehicle priced around 100,000 yuan, such a radar still creates significant cost pressure. Furthermore, LiDAR has only recently been integrated into vehicles, and its durability has not yet been proven by long-term market use,” He Xiang said.
The multi-sensor fusion solution not only has high costs but also presents certain technical challenges. Industry insiders pointed out that when two types of sensor results conflict, which one should the system trust? “If we always trust the camera, then what’s the point of LiDAR?”
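One common engineering answer to that question, offered here as an illustrative assumption rather than any particular company’s method, is not to pick a single winner but to weight each sensor’s estimate by a confidence score that reflects current conditions:

```python
# Illustrative sketch of confidence-weighted sensor fusion. All sensor
# names, distances, and confidence values below are made up for the example.
def fuse_distance(readings: list[tuple[str, float, float]]) -> float:
    """readings: (sensor_name, distance_in_meters, confidence in [0, 1])."""
    total_weight = sum(conf for _, _, conf in readings)
    if total_weight == 0:
        raise ValueError("no trustworthy sensor data; trigger fallback")
    return sum(dist * conf for _, dist, conf in readings) / total_weight

readings = [
    ("camera", 48.0, 0.3),   # poor lighting -> low confidence
    ("lidar",  52.5, 0.9),   # unaffected by darkness
    ("radar",  51.8, 0.8),
]
print(f"fused distance: {fuse_distance(readings):.1f} m")  # ~51.5 m
```

At night the camera’s confidence drops, so the LiDAR and radar readings dominate the fused estimate; in clear daylight the weights would shift back toward the camera.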
△Reference image
The pure visual route lives up to the word “pure”: it uses only cameras to perceive the environment. This means all environmental information, from recognizing objects to measuring distance and speed, must be obtained through cameras. Choosing this route is not easy, because the system must process enormous amounts of image and video data; both vehicle-side and cloud-side computing power and algorithms are crucial, hence the emphasis on a “smart brain.” (The vehicle must process large volumes of complex data from multiple high-definition cameras, and all processing must happen in real time on board to guarantee response speed; the cloud is responsible for large-scale data analysis and model training to continuously improve the vehicle’s perception and decision-making.)
He Xiang explained that these algorithms must handle lighting changes, weather effects, occlusion, and other visual noise, converting two-dimensional images into three-dimensional objects in order to perceive speed and distance, problems that LiDAR handles easily. This solution mimics human driving habits: as long as something is within view of the eyes (cameras), the system can recognize it and make the correct judgment.
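As a rough illustration of how a camera-only system can recover three-dimensional information from a flat image, consider the classic pinhole-camera relationship. Production systems rely on learned neural depth estimation, so the sketch below, with made-up numbers, shows only the geometric intuition:

```python
# Pinhole-camera geometry: if an object class has a roughly known real-world
# height, its apparent height in pixels implies its distance. All values
# here are illustrative assumptions, not calibration data from any vehicle.
def estimate_distance(focal_length_px: float,
                      real_height_m: float,
                      pixel_height: float) -> float:
    """distance = focal_length * real_height / apparent_pixel_height."""
    return focal_length_px * real_height_m / pixel_height

# A car assumed ~1.5 m tall, appearing 45 px tall through a lens with a
# 1200 px focal length, would be about 40 m away.
print(f"{estimate_distance(1200, 1.5, 45):.1f} m")  # 40.0 m
```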
To achieve more accurate recognition, the autonomous driving system must continuously learn and optimize its driving decisions; massive amounts of data are crucial to this process.
Tesla has achieved this by utilizing actual driving data from millions of vehicles to train algorithm models and improve the system’s response capabilities, significantly enhancing the performance of its autonomous driving software. This “fleet learning” is one of the key factors that led to Tesla’s intelligent driving reaching its “ChatGPT moment” (the release of Tesla FSD V12) and is also one of the reasons it does not require high-precision maps.
△Reference image
Compared with the fusion solution, the pure visual route’s biggest advantage is its lower hardware cost, which favors widespread market adoption. However, achieving high safety and reliability means the hidden costs of the pure visual route are not low, and are arguably higher: it takes far more resources to develop and optimize the visual-processing algorithms and computing platforms. Musk stated in early April this year that Tesla’s total investment in autonomous driving will reach $10 billion in 2024.
The shortcomings of the pure visual route are also quite apparent: since the raw data collected by cameras are two-dimensional images, lacking distance and speed information, relying solely on this information can easily lead to misjudgment. He Xiang explained that in the fusion perception route, even if one sensor fails or is interfered with, other sensors can still provide critical data, whereas the pure visual perception route lacks equivalent backup options when the camera fails or is obstructed, potentially affecting the overall system’s reliability.
In May 2016, a Tesla accident triggered a public-relations crisis: the system’s camera misidentified the large white body of a truck as a “cloud,” and the driver, who was not watching the road, crashed into the turning truck.
△Reference image
He Xiang stated that as AI algorithms rapidly evolve, computer vision can gradually compensate for the shortcomings of cameras through extensive data training.
“Both routes have their advantages, and the industry’s ultimate choice will be the outcome of a contest among technological, engineering, and business factors.” Still, He Xiang believes autonomous driving will ultimately move toward the pure visual route, “because it most resembles human driving; the information captured by human eyes is the most comprehensive, and only humans truly understand semantics. As the pure visual route’s algorithms keep iterating and deepening their learning, they will increasingly understand the world the way humans see it, making autonomous driving truly intelligent.”
Will Autonomous Driving Have Bugs?
“Yes, it will. As long as it’s a machine, there will be bugs.” He Xiang said these bugs can arise from many factors, including software errors, hardware failures, sensor misreadings, and data-processing errors.
Unlike a phone or computer, which can simply be restarted without major consequences, if a car’s system freezes while driving at high speed, how can passenger safety be guaranteed?
“The industry already has solutions in place, and safety must be the top priority in autonomous driving,” He Xiang said. Autonomous vehicles should all have a fallback mechanism: if the hardware or software reports an error, the car automatically executes a safe stop or lets the human driver take over.
△Reference image
Currently, in order to address potential bugs, the industry has adopted many strategies, including:
Continuous testing and upgrades. Just like phone software needs to be updated, autonomous driving systems also need continuous testing and updates to ensure they can handle various complex traffic situations.
Using backup systems. Installing multiple systems in the vehicle, so if the main system fails, the backup system can immediately take over, just like having multiple engines on an airplane.
Continuously improving algorithms. Learning and optimizing through actual road data to enable the system to handle various situations more intelligently.
These methods aim to continuously reduce the likelihood of bugs in autonomous driving systems, ensuring that the car becomes smarter and easier to drive over time.
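A minimal sketch of the fallback and backup-system ideas above might look like the following watchdog loop. It is a hypothetical illustration, not any vendor’s implementation:

```python
# Hypothetical sketch: a watchdog tries the primary system each cycle,
# falls back to a redundant backup on failure, and degrades to a
# minimal-risk stop as a last resort instead of "freezing".
class Watchdog:
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup

    def step(self, world):
        try:
            return self.primary(world)            # normal operation
        except Exception:
            try:
                return self.backup(world)         # backup system takes over
            except Exception:
                return {"action": "minimal_risk_stop",
                        "alert_driver": True}     # last resort: pull over safely

def primary_planner(world):
    raise RuntimeError("software error")          # simulate a bug in the main system

def backup_planner(world):
    return {"action": "continue", "alert_driver": False}

wd = Watchdog(primary_planner, backup_planner)
print(wd.step({}))  # backup takes over: {'action': 'continue', 'alert_driver': False}
```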
Who is More Reliable: Autonomous Driving or Human Drivers?
“The safety of autonomous driving must surpass that of human drivers; this is one of the core goals for which the technology was invented: eliminating human error.” He Xiang noted that most traffic accidents are caused by human errors such as speeding, drunk driving, fatigue, distraction, and road rage. Autonomous driving systems can operate continuously without distraction, fatigue, or emotional influence, theoretically reducing these common human errors.
“The reaction speed and accuracy of autonomous driving can theoretically exceed a human’s,” said Gu Weihai, CEO of Haima Intelligent. Autonomous driving systems can monitor the surrounding environment continuously and react to events in real time; moreover, they can communicate with other autonomous vehicles to coordinate and improve overall traffic safety.
△Reference image
However, owing to technical limitations, autonomous driving systems still face safety challenges, especially in complex traffic scenarios or rare events. If a system has never encountered a relevant scenario, it may make absurd decisions. For example, a vehicle with intelligent driving recently crashed into an overturned truck ahead because it failed to recognize a vehicle with its wheels in the air and mistakenly judged the road to be clear. Gu Weihai said that as autonomous driving systems continue to learn and optimize, their safety is expected to gradually reach, and ultimately exceed by as much as tenfold, that of human driving.
How Will It Affect Our Lives?
“The development of autonomous driving technology will bring many positive impacts to various aspects of our social life, including improving traffic safety, increasing traffic efficiency, and enhancing quality of life.” Gu Weihai discussed the immense value of autonomous driving technology, stating that it will first reduce traffic accidents caused by human error and can optimize driving routes and speeds to minimize unnecessary stops and abrupt braking, thereby improving traffic flow efficiency. This could effectively alleviate congestion during peak hours on Beijing’s East Third Ring Road.
△Reference image
Additionally, autonomous driving technology can give the elderly, the disabled, and others who cannot drive more opportunities for independent travel, enhancing their well-being; it can also free people from the driving task, turning commuting time into rest and entertainment time. Furthermore, “as the number of autonomous vehicles increases, urban planning and transportation infrastructure will gradually adapt to this new technology. For example, demand for parking lots and spaces may fall, since autonomous vehicles can find parking on their own or return home when not needed.”
These changes brought about by autonomous driving will have a profound impact on our lives, business operations, and social structure, propelling us into a safer, more efficient, and environmentally friendly future.
What Are The Current Development Dilemmas?
Despite the bright prospects of autonomous driving, the reality is stark: before achieving comprehensive commercialization and widespread application, autonomous driving still faces dilemmas in cost, technology, legal regulations, and social acceptance.
Before any product can truly reach commercial deployment, it must confront a core issue: cost. The high cost of autonomous driving is one factor hindering its adoption. Software and hardware development, data collection, and technology testing and validation all require substantial investments of money and time.
In terms of technology, Gu Weihai stated that although autonomous driving technology has made significant progress, issues such as the reliability of perception systems in complex environments and the robustness of AI decision-making systems still need to be resolved. For example, how to improve performance in harsh weather or complex road conditions, and how to enhance the intuition and decision-making ability that mimics human drivers.
Regarding policies and regulations, Gu Weihai mentioned that the legal responsibilities, safety standards, and privacy protections related to autonomous driving are still unclear, creating difficulties in regulating testing, usage, and liability for autonomous vehicles. For instance, if a vehicle driven by a human collides with one operated by an autonomous system, how should liability be determined? Is it the driver’s responsibility or the manufacturer’s? This needs to be defined in legal regulations. Additionally, the inconsistency of laws and standards across different countries and regions poses certain obstacles to the international promotion of autonomous vehicles.
△Reference image
Moreover, encouraging consumers to be willing to pay for related features is also a common dilemma for the industry.
Currently, many automotive manufacturers and autonomous driving companies have launched parking and highway assistance functions, but urban scenarios are where consumers most directly experience the technology, because city driving is a high-frequency part of daily life. If autonomous driving cannot be achieved in these high-frequency, essential scenarios, all commercial plans are castles in the air.
“Solving urban roads will earn autonomous driving greater trust, especially in China, where urban traffic conditions are extremely complex. During Beijing’s morning rush hour, for example, any vehicle’s autonomous driving system will run into chaotic conditions,” said a safety officer in the autonomous driving demonstration area in Yizhuang, Beijing.
Currently, some leading domestic automotive manufacturers and technology companies have begun deploying assisted driving in urban scenarios, and these deployments have improved consumer trust in autonomous driving.
Gu Weihai stated that resolving these dilemmas requires efforts in technological innovation, legal reforms, ethical discussions, and social education, “Only then can autonomous driving technology smoothly integrate into our lives and realize its potential immense value.”
How Far Are We From Complete Autonomous Driving?
In recent years, China has successively introduced a series of policies supporting the development of autonomous driving, such as the “Notice on the Pilot of Intelligent Connected Vehicles’ Access and Road Use” and the “Guidelines for the Safe Transport Services of Autonomous Vehicles (Trial).” In addition, both central and local governments have placed significant emphasis on smart roads and vehicle-road collaboration, with many smart roads already built and in trial operation.
Numerous domestic automotive manufacturers and technology companies are also launching mass-produced high-level intelligent driving products, showcasing the commercial and mass production potential of China’s autonomous driving technology. The recent launch of Tesla’s FSD V12 version marks a new milestone in autonomous driving technology.
Currently, autonomous driving demonstration areas have been established in cities such as Beijing, Shanghai, Guangzhou, Shenzhen, Chongqing, and Wuhan, and even in Yangquan, the smallest prefecture-level city in Shanxi. L4-level (highly autonomous) vehicles such as driverless taxis, patrol cars, and sanitation vehicles can be seen in these areas.
△This year, a regular autonomous shuttle service from Yizhuang to Daxing International Airport was the first to open.
However, most autonomous driving today is still at the L2 assisted-driving level, with the most advanced systems at the L3 pilot stage. He Xiang believes we are still some distance from true mass production of fully autonomous driving.
“A qualitative leap is needed in between,” Gu Weihai said. For the technology to truly enter everyone’s life, overcoming the current development dilemmas is key; advances in technology, cost, legal regulation, and public acceptance are all necessary. “Only then can true fully autonomous driving (L5) enter our lives.”
After finishing this article, I feel much more at ease about autonomous driving, so I decided to try it out in Yizhuang and see what the ride is like. Unexpectedly, I came away with a bonus; check out the video below:
The following video is sourced from CCTV News
▌This article is sourced from: CCTV News WeChat Official Account (ID: cctvnewscenter)
Editor-in-chief / Li Zhe Chief editor / Wang Xingdong
Written by / Shi Meng Proofread by / Gao Shaozhu
Some images sourced from / Vision China