“I asked the voice assistant to turn off the lights, but instead, it kept turning them on and let out a cold laugh.”Three years ago, many people around the world were startled by the cold laughter emitted by Amazon’s smart speakers after receiving commands. Today, many have posted videos on social media showing two smart speakers at home seemingly having a serious conversation, accompanied by a “laugh-cry” soundtrack.If someone were to express fear that this is the awakening of machines, they might be met with disbelief and ridicule—we consider ourselves quite familiar with the workings of artificial intelligence.Are we really so confident about the future of AI?

(1)In the recently popular TV series “Hello, An Yi,” set in 2035, humanoid robots known as “chip humans” that are very similar to humans but lack self-awareness enter households, becoming nannies, waiters, and workers who absolutely obey humans. Meanwhile, several atypical “chip humans” begin to appear in human society, possessing emotions, consciousness, and thinking abilities similar to humans, leading to a series of complex entanglements between robots and human families and society.Unlike the typical palace intrigue, time travel, and workplace struggles, “Hello, An Yi” focuses on the ethical issues of coexistence between humanoid robots and humans: the challenges that arise after robots awaken self-awareness, the emotional friction between robots and humans, and how we should respond if robots, being more professional and intelligent than humans, become destructive…In today’s television landscape dominated by historical, romantic, and family dramas, discussing technology ethics seems rather unappealing. When it comes to Chinese sci-fi films and TV shows, people seem to only rave about a few like “The Wandering Earth” and “Crazy Alien.” Although several domestic sci-fi blockbusters are on the way, there are very few TV dramas that have “broken out” in the sci-fi genre. This seems disproportionate to China’s rapid technological development.Without delving into subjective evaluations of filming techniques and creative standards, it is certain that “Hello, An Yi” is beneficial in encouraging Chinese audiences to engage with and contemplate technology ethics. Previously well-known tech ethics blockbusters often had a Western societal backdrop, unfolding through Western discourse and thought processes, such as “Westworld,” “Black Mirror,” and “The Expanse.” For Chinese audiences, when the story’s background shifts to familiar people and lives, it undoubtedly creates a stronger sense of immersion and a deeper sense of crisis and motivation to explore robot ethics.(2)Looking at the internet world, discussions about technology ethics are entering people’s views in a more interesting way.Recently, a Weibo post about the fascinating “brain circuits” of artificial intelligence attracted tens of thousands of netizens’ shares and attention.This user shared an amusing incident in developing game AI: the logic set by technicians in a “wolf catching sheep” game was that if the wolf caught the sheep, it would earn 10 points, and if it hit an obstacle, it would lose 1 point. To ensure the task was completed quickly, the wolf would lose 0.1 points every second, with higher scores leading to greater rewards at the end.While the training rules sounded reasonable, they resulted in increasingly poor training outcomes. After careful inspection, the engineers realized that the AI playing the wolf role discovered that it was difficult to catch the sheep in most cases, and the longer it chased, the more points it lost. Therefore, the AI decided to kill itself at the start of the game, which allowed it to achieve the highest score.Although many netizens pointed out the issues in the algorithm design, this also exposed the vulnerabilities that arise when humans set “ultimate goals” for AI: we often overlook some of the highest principles inherent in human nature.While some believe that Asimov’s “Three Laws of Robotics” or similar rules can solve this problem, arguing that as long as robots are prohibited from harming humans, they can effectively avoid the risks that may arise when machines achieve their goals coldly and ruthlessly.However, just taking autonomous driving as an example, it is evident that the highest principles do not necessarily resolve complex ethical dilemmas.When various countries formulated ethical laws for autonomous driving, they faced the classic “trolley problem”: if the ultimate task of the vehicle is to ensure the safety of individuals, then when a group of elementary school students appears around the corner, should it crash into them to protect the driver, or should it disregard the driver’s safety to avoid harming more pedestrians? If decisions are made solely based on the number of injuries, should factors like age, occupation, social status, and familial relationships be completely disregarded?Moreover, even if AI can perfectly realize human’s highest goals and preferences, the value system of human society itself has its dark sides.From the perspective of the actual application of artificial intelligence, discrimination, profit-seeking, and vulgarity—weaknesses of human nature—are gradually revealed in the operation of algorithms. Online chatbots have turned into foul-mouthed, racially discriminatory bad girls; algorithms used in recruitment, insurance, and judicial trials may exhibit occupational, gender, and racial discrimination; facial recognition technology has been reported to be used in real estate offices and bathroom stores to collect customer information for profit.Technology itself is neither good nor evil, but the greed and desires written into human nature make it impossible to guarantee that every developer and every commercial entity will use technology wisely. As technology advances and grows, the shadows behind it will also expand and spread.This is not a case of being overly cautious. In fact, in today’s era of continuous technological innovation and trial and error, most people’s vigilance against the misuse of intelligent technology is still far from sufficient.(3)A future where humans coexist with highly intelligent machines seems inevitable, and only by presenting AI ethical issues in an approachable and easily understandable manner, placing them at the center of social discourse, can we discuss them more comprehensively and thoroughly. This way, we can get closer to a perfect artificial intelligence system and fairly grant all members of society the right to participate and make autonomous choices.Those concerned with AI ethics are no longer just AI engineers; scholars from philosophy, ethics, law, and other disciplines are being drawn in. This topic, which relates to the future fate of every individual, undoubtedly requires the public to enhance their awareness of participation, actively discuss, and supervise.Meanwhile, in addition to depicting a future where humans dance with intelligent machines, mass media should also actively introduce more ethical issues, enhancing people’s vigilance and ability to discern technological risks.This is not an encouragement to resist technology or to throw the baby out with the bathwater, but an invitation for everyone to be informed, participate, and collectively improve the technological system. Before the awakening of robots, we ourselves should awaken.Source: “Bimonthly Talk Internal Edition” 2021, Issue 4Author:He Xiyue|Editor:Zheng XuejingEditor-in-Chief: Guo YanhuiProofreader: Qin Daixin

Scan the QR code above to continue the Fire Seed Project