Asimov’s Laws of Robotics Need Updating to Adapt to Artificial Intelligence: Introducing the Fourth Law

Image credit: Xhafer Gashi/iStock

In an age where science fiction and reality intertwine, artificial intelligence (AI) is reshaping our world at an unprecedented pace. When discussing AI ethics, one cannot overlook the “Three Laws of Robotics” proposed by the science fiction giant Isaac Asimov. Since their introduction in 1942, these laws have been a cornerstone in discussions about robot ethics. However, with the rapid advancement of AI technology, particularly the breakthroughs in generative AI in language and image generation, we must reassess these classic laws to meet the challenges of the new era.

In 1942, the renowned science fiction writer Isaac Asimov introduced the “Three Laws of Robotics” in his short story “Runaround.” These laws later gained widespread recognition in his groundbreaking collection of short stories, “I, Robot.”

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Although these laws originated from fictional works, they have influenced discussions on robot ethics for decades. As virtual robots become increasingly complex and prevalent, some technical experts find Asimov’s framework helpful in considering the potential safeguards needed for AI that interacts with humans.

However, the existing three laws are far from sufficient. Today, we are entering an unprecedented era of human-robot collaboration, which Asimov could hardly have imagined. The rapid development of generative artificial intelligence capabilities, especially in language and image generation, presents challenges that have surpassed Asimov’s initial concerns about physical harm and obedience.

Deepfakes, Misinformation, and Fraud

The rampant spread of AI-driven deception is particularly concerning. According to the FBI’s 2024 Internet Crime Report, losses from cybercrimes involving digital manipulation and social engineering exceeded $10.3 billion. The European Union Agency for Cybersecurity (ENISA) highlighted in its 2023 Threat Landscape report that deepfakes (seemingly real synthetic media) are an emerging threat to digital identity and trust.

Misinformation is spreading like wildfire on social media. I studied this closely during the pandemic, and the surge of generative AI tools has made detection increasingly difficult. Worse still, AI-generated articles can be as persuasive as traditional propaganda, if not more so, and producing convincing content with AI takes almost no effort.

Deepfakes are on the rise across society. Bot networks can use AI-generated text, voice, and video to create the illusion of widespread support for any political issue. Bots can now place and answer phone calls while impersonating real people. AI-generated scam calls that mimic familiar voices are becoming more common, and AI-enhanced video-call scams that let fraudsters impersonate loved ones loom on the horizon, targeting the most vulnerable. My own father was surprised to see a video of me speaking fluent Spanish, since he knows I am a proud Spanish beginner (400 days on Duolingo and counting). Suffice it to say, the video was AI-edited.

Even more concerning is that children and teenagers are developing emotional dependencies on AI, sometimes unable to distinguish between interactions with real friends and those with online robots. There have already been reported cases of suicides attributed to interactions with AI chatbots.

Renowned computer scientist Stuart Russell pointed out in his 2019 book “Human Compatible” that the ability of AI systems to deceive humans poses a fundamental challenge to social trust. This concern is reflected in recent policy initiatives, the most notable being the EU’s Artificial Intelligence Act, which includes provisions requiring transparency in AI interactions and the disclosure of AI-generated content. In Asimov’s time, it was unimaginable how AI could use online communication tools and virtual personas to deceive humans.

Therefore, we must supplement Asimov’s laws.

Fourth Law: A robot or AI must not deceive humans by impersonating a human being.

Path to Trustworthy AI

We need clear boundaries. While human-robot collaboration can be constructive, deceptive behavior by AI undermines trust, leading to wasted time, emotional distress, and resource misuse. AI agents must identify themselves so that our interactions with them are transparent and productive. AI-generated content should be clearly labeled unless it has undergone substantial human editing and adaptation.
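
As a concrete illustration of what such labeling might look like in practice, here is a minimal sketch of a disclosure record that a platform could attach to generated content as metadata. The schema, field names, and the `AIDisclosureLabel` class are hypothetical, loosely inspired by existing provenance efforts rather than any published standard:

```python
# Hypothetical disclosure record for AI-generated content.
# Field names and structure are illustrative only, not a published standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class AIDisclosureLabel:
    generator: str        # which system produced the content, e.g. "example-llm-v1"
    content_type: str     # "text", "image", "audio", or "video"
    generated_at: str     # ISO-8601 timestamp of generation
    human_edited: bool    # True if a person substantially revised the output
    disclosure_text: str  # the notice shown to end users

    def to_json(self) -> str:
        """Serialize the label so it can travel with the content as metadata."""
        return json.dumps(asdict(self), indent=2)


label = AIDisclosureLabel(
    generator="example-llm-v1",  # hypothetical model name
    content_type="text",
    generated_at=datetime.now(timezone.utc).isoformat(),
    human_edited=False,
    disclosure_text="This content was generated by an AI system.",
)
print(label.to_json())
```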

The implementation of the Fourth Law requires:

· Mandatory disclosure of AI information in direct interactions;

· Clear labeling of AI-generated content;

· Technical standards for AI identification;

· A legal framework for enforcement;

· Educational initiatives to improve AI literacy.

Of course, all of this is easier said than done. Extensive research is underway to find reliable methods for watermarking or detecting AI-generated text, audio, images, and video, and achieving the transparency we call for remains an unsolved problem.
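
To give a flavor of that research, the toy sketch below assumes a simplified “green list” scheme in the spirit of published work on statistical text watermarking (not any specific vendor’s method). Each word is pseudo-randomly assigned to a green list seeded by the preceding word, and a detector flags text whose green fraction is improbably high; real systems work on model tokenizer IDs and logits rather than plain words.

```python
# Toy "green list" statistical watermark detector for AI-generated text.
# Illustrative only: real schemes operate on tokenizer IDs and model logits.
import hashlib
import math


def is_green(prev_word: str, word: str, green_fraction: float = 0.5) -> bool:
    """Pseudo-randomly assign `word` to the green list, seeded by the previous word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).hexdigest()
    return (int(digest, 16) % 1000) / 1000 < green_fraction


def green_z_score(words: list[str], green_fraction: float = 0.5) -> float:
    """z-score of the observed green count vs. what unwatermarked text would show."""
    hits = sum(is_green(prev, cur, green_fraction)
               for prev, cur in zip(words, words[1:]))
    n = len(words) - 1
    expected = green_fraction * n
    std = math.sqrt(n * green_fraction * (1 - green_fraction))
    return (hits - expected) / std


text = "the quick brown fox jumps over the lazy dog".split()
# A large positive z-score would suggest watermarked (AI-generated) text;
# ordinary human-written text should hover near zero.
print(f"z-score: {green_z_score(text):.2f}")
```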

However, the future of human-robot collaboration depends on clear boundaries between humans and AI entities. As the IEEE’s 2022 framework for “Ethically Aligned Design” points out, the transparency of AI systems is crucial for building public trust and ensuring the responsible development of AI.

Asimov’s complex stories illustrate that even robots attempting to follow rules often find their actions lead to unintended consequences. Nevertheless, encouraging AI systems to strive to adhere to Asimov’s ethical guidelines would be a very good start.

Source: https://spectrum.ieee.org/isaac-asimov-robotics

Domestic and International Policies and Trends

At the policy level, domestic and international discussions of AI ethics are also heating up. The EU’s Artificial Intelligence Act places particular emphasis on transparency in AI interactions and disclosure requirements for AI-generated content, which aligns closely with the core idea of the Fourth Law. Domestically, as AI technology develops rapidly, relevant policies are gradually being refined to promote its healthy development while safeguarding public interests and social security.

Internationally, research and collaboration on AI ethics continue to deepen. Research organizations, academia, and policymakers are jointly exploring how to build a more comprehensive AI ethics framework that ensures AI develops sustainably and is deployed safely in society.

Conclusion

Asimov’s Three Laws of Robotics provide an important starting point for AI ethics research, but in today’s rapidly evolving AI landscape, we must modernize them. Introducing the Fourth Law is a proactive response to this challenge. Through collaborative domestic and international policy efforts, we have reason to believe that a safer, more transparent, and trustworthy AI era is on the horizon. Let us work together to contribute to the healthy development of AI technology!

