Enhancement Or Replacement: Philosophical Reflection On The Relationship Between AI And Human Intelligence

Author Introduction

Pan Bin, PhD in Philosophy, Professor in the Department of Philosophy at East China Normal University, Director of the Social Epistemology Research Center at East China Normal University, Researcher at the Institute of Modern Chinese Thought and Culture (a Key Research Base of Humanities and Social Sciences of the Ministry of Education), and Part-time Researcher at the Institute of Marxism and Modernization at Sun Yat-sen University.

Citation Format: Pan Bin. Enhancement or Replacement: Philosophical Reflection on the Relationship Between AI and Human Intelligence [J]. Theoretical Monthly, 2024(7): 5-14.

Abstract

The core debate in the philosophy of artificial intelligence concerns the tense yet organic relationship between artificial intelligence and human intelligence. Artificial intelligence is a concept closely associated with synthetic intelligence, computational intelligence, and augmented intelligence; its construction goal is the intelligent agent, its underlying logic is big data, and the key to its operation is the algorithm. The development of artificial intelligence encompasses possible paths from weak AI to strong AI or general AI, and does not exclude the future evolution of superintelligence. There is an inherent tension between artificial intelligence and human intelligence: artificial intelligence may gently simulate and augment human intelligence, or radically surpass and replace it. The future of humanity requires us to take artificial intelligence seriously while constructing equal and reasonable ethical principles for it.

Living in the present era means living in a society that is being continuously intelligentized: from the macro global context to the micro daily world, even human consciousness and the inner self are being shaped by artificial intelligence. Intelligentization has become a defining theme and discourse of the current era. At the same time, artificial intelligence is not a panacea for the crisis of modernity; excessive interpretation of and deep infatuation with artificial intelligence risk falling into discourse worship and path dependence. Humanity must face and respond to the inherent tension between artificial intelligence and human intelligence: can artificial intelligence surpass or replace human civilization in the future? Will intelligent machines have moral codes compatible with human morality? Will carbon-based civilization be destroyed by silicon-based civilization? This necessitates a reasonable determination of the meaning and boundaries of artificial intelligence, a re-understanding of its essence and function, and a redirection of its development path.
1. Meaning and Source: The Conceptual Genealogy of Artificial Intelligence
To re-examine the tense relationship between artificial intelligence and human intelligence, we must trace the conceptual genealogy and meaning of “intelligence,” analyze its functional attributes and social effects, and ultimately establish moral norms and ethical laws for it. “The philosophy of intelligence” is a metaphysical inquiry centered on intelligence-related issues from the perspective of philosophical reflection, critical spirit, analytical methods, and ethical concerns. It aims to explore the intrinsic relationships between important categories such as matter and consciousness, body and mind, machine and human, artificial intelligence and human intelligence, studying how to achieve organic interaction and coexistence between the two. The philosophy of artificial intelligence is the contemporary form and specific expression of intelligence issues, rooted in the mystery of human self-recognition and the tortuous path of social evolution, thus tracing and sorting out the conceptual genealogy of artificial intelligence from the perspective of thought sources is particularly necessary.
Ancient Chinese classics contain profound thoughts on automatic devices and intelligent technologies, the most representative being the “Yanshi Xianji” recorded in the “Liezi.” Although this early literature on human automation is merely a mixture of myth, experience, and scientific fantasy, it embodies the positive imagination and empirical exploration of intelligent machines in ancient society. The term “intelligence” in Chinese is a compound of “wisdom” and “ability.” The meaning of “wisdom” is rich and generally contains three levels of semantic connotation: first, similar to “knowledge,” referring to our concrete experiences and empirical certainties formed about the objective world; second, close to “reason,” referring to our objective understanding and scientific cognition formed about the essential properties and basic laws of the object world; third, connected to “dao,” referring to the tacit knowledge and practical wisdom accumulated and internalized by us. In this sense, “wisdom” is a unique cognitive, understanding, and interpretive ability of human beings, distinguished into different levels and types according to the differences in subjective cognition, with perception, intellect, rationality, wisdom, and sagacity all belonging to different forms under the general category of “wisdom.” If “wisdom” emphasizes the degree of cognition, understanding level, and thinking style of the subject, then “ability” emphasizes the focus on skills, talents, and practical effects. The “Shuowen Jiezi” notes that “ability” is: “Ability, like a bear. The foot resembles a deer. 
It takes the ‘flesh’ radical with yi as its phonetic; the bear is firm and strong among beasts, hence the worthy are called ‘able’ and the strong and vigorous are called ‘outstanding in ability.’” This means that the character for “ability” originally depicted the bear, which is strong among beasts; thus the virtuous came to be called “able” and the strong “outstanding in ability.” In the “Chu Shi Biao,” Zhuge Liang praised Xiang Chong as such a virtuous and able person: “General Xiang Chong is of good character and fair judgment and is well versed in military affairs; having tried him in the past, the late emperor called him ‘able’… consult him, and he will surely ensure harmony in the army.” As early as the “Xunzi: Zhengming,” both “wisdom” and “ability” appeared together: “That in a person by which he knows is called awareness; awareness that accords with its object is called wisdom. That in a person by which he acts is called ability; ability that accords with its object is also called ability.” By the late Warring States period, “Lüshi Chunqiu: Shenfen” had already used “intelligence” as a single term: “Not knowing how to ride on things but relying on oneself alone, usurping their intelligence, multiplying one’s teachings and decrees, and being fond of acting on one’s own.” In the Eastern Han period, Wang Chong proposed the concept of the “intelligent person” from the perspective of cognitive subjects in “Lunheng: Shizhi”: “Thus, even intelligent people cannot succeed without learning, nor know without asking.” In terms of cognitive modes, the process from sensation, representation, and memory to perception, intellect, and rationality reflects the different forms and development of “wisdom,” while the process from skills and talents to action is the continuous evolution and realization of “ability”; the two combine to form “intelligence.”
In the West, the concept of “intelligence” also has a long history. Although Plato did not directly use the concept of “intelligence,” the rational capacity for understanding and comprehending the world of ideas that he proposed corresponds broadly to the concept of “intelligence” we discuss today. Plato saw this capacity as the fundamental attribute of the soul, manifested in three abilities a person should possess: first, the ability to grasp the world of ideas through knowledge; second, the ability to transcend will and desire through introspection and thinking, thereby achieving harmony and balance in the soul; third, noble moral ability. Building on Plato’s ideas, Aristotle grasped the concept of “rational capacity” in terms of both theoretical wisdom and practical wisdom. His greatest contribution to artificial intelligence was the creation of a system of formal logic for reasoning, laying the ideological foundation for the later symbolic movement in the field of artificial intelligence. As a great pioneer of modern philosophy, Descartes held a mind-body dualism, believing that body and mind are two different substances: the body operates according to the mechanical principles of the physical world, while the mind possesses characteristics such as perception, imagination, reflection, emotion, and will. This prompted people to consider whether it is possible to replicate, simulate, and transplant into machines a brain with functions similar to the human mind, opening up a vast space of possibilities for artificial intelligence. The famous “Turing Test” and “Chinese Room” thought experiment of the 20th century are, in a sense, different responses to Descartes’ mind-body dualism.
From Kant to Hegel, the German classical philosophical tradition discussed human rational capacity and its boundaries from perspectives such as subjectivity and self-consciousness, while it was the philosophy of mind that brought these questions to the true center of philosophical research. The philosophy of mind, which emerged in the second half of the 20th century, arose almost in step with the contemporary science of human intelligence; it continues the discussion of mind-body dualism and has entered fields such as embodied intelligence, artificial enhancement, and computational philosophy. In other words, “intelligence” is no longer a rational capacity exclusive to humans; whether non-human subjects, including robots, can possess human-like thinking and mind has become a significant issue requiring research.
To this day, there is no consensus in academia on “what is intelligence”; only the broadest and most inclusive definition has formed, namely that intelligence is the ability to carry out complex plans. The definition of “artificial intelligence” derived from “intelligence” is even more complex, with representative definitions as follows: (1) At the Dartmouth Conference in 1956, the year of the birth of artificial intelligence, John McCarthy and the other conference initiators proceeded from the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” This perspective defines artificial intelligence as an auxiliary means of solving human tasks from a tool-oriented perspective, reflecting the engineering thinking characteristic of early research. (2) Andreas Kaplan and Michael Haenlein defined artificial intelligence as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation,” a definition grounded in the foundational role of big data. (3) Wikipedia has formed a relatively widely accepted definition: artificial intelligence refers to the intelligence exhibited by machines created by humans, usually referring to technologies that realize human-like intelligence through ordinary computer programs. This definition has typical behaviorist characteristics: artificial intelligence is a high-level intelligence carried by machines, capable of adapting to its environment and maximizing the achievement of its goals. The key to developing artificial intelligence is to construct intelligent agents, which possess reasoning, judgment, planning, learning, communication, and tool-use abilities similar to or even surpassing human intelligence.
Although the currently popular concept of “artificial intelligence” is continuously accepted and recognized, it also faces challenges; the doubts mainly focus on whether machines can think like the human brain and whether machines have self-awareness. Computer scientists represented by David Poole propose replacing “artificial intelligence” with “computational intelligence.” They argue that artificial intelligence is a vague and hybrid concept, a consequence of the pioneers of artificial intelligence conflating purpose with method, which leads people to equate “artificial” with “simulated” or “imitation.” Consequently, people inevitably question whether artificial intelligence is “real” intelligence. For example, people may think that artificial pearls are not natural pearls but fakes, and thus not true pearls; yet if we understand them as “synthetic pearls,” we cannot call them natural pearls, but they are nonetheless real pearls. In this sense, “synthetic intelligence” is a concept superior to “artificial intelligence.” However, since the purpose of intelligence research is to understand the essence of both natural and artificial (or synthetic) intelligence, the concept of “computational intelligence” is more reasonable: reasoning is computation, aimed at finding specific methods (algorithms) to complete tasks. The core of computational intelligence lies in intelligent agents; although humans are regarded as the most intelligent agents known, the emergence of agents more intelligent than humans remains possible. The rise of the computational society in recent years is thus also a response to and extension of artificial intelligence research.
2. Connotation and Boundaries: Theoretical Consensus of Artificial Intelligence
To answer “what is artificial intelligence” more accurately and present the issue clearly, two prominent contemporary artificial intelligence researchers, Russell and Norvig, in their landmark work “Artificial Intelligence: A Modern Approach,” used a structuralist method to reorganize the problem of artificial intelligence. The essence of the problem is what goal we aim to achieve, and this goal can be delineated along two dimensions: one dimension measures success against human performance, relying on common sense and experience and therefore fallible; the other measures it against rigorous rational standards, greatly enhancing precision. Crossed with the distinction between thinking and acting, these two dimensions yield four different approaches: thinking humanly, acting humanly, thinking rationally, and acting rationally. These four approaches represent different capabilities that artificial intelligence may possess. Russell and Norvig’s distinction directly promoted a new understanding of artificial intelligence. Although disputes about artificial intelligence remain unsettled, at least the following consensus has formed.
First, the construction goal of artificial intelligence is the “intelligent agent.” The famous artificial intelligence expert Marvin Minsky was among the first to propose the concept of the “agent,” arguing that traditional computing systems suffer from closure and cannot cope with the openness and complexity of social mechanisms; if concepts such as social behavior are introduced into computing systems, a computational society can be constructed, which requires intelligent agents possessing autonomy, reactivity, proactivity, sociality, and the capacity to evolve. Michael Wooldridge emphasizes that sociality is the most important characteristic of intelligent agents. On the weak definition, intelligent agents are social entities characterized by autonomy, sociality, reactivity, and proactivity; on the strong definition, intelligent agents not only possess these characteristics but also have flexible mobility, efficient communication, and mature rationality. Intelligent agents include both hardware, such as robots, and software, such as systems. They can not only perform precise calculations and process tasks but also actively adjust to and respond to emergencies, and can continuously self-learn and evolve. Future artificial intelligence will not only possess the strengths of machine computation but will also mimic and catch up with the advantages of human intelligence, becoming very powerful. This typical subject-enhancement approach aligns with the intrinsic needs and goals of human intelligence development.
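The agent properties named above (autonomy, reactivity, proactivity) can be made concrete with a deliberately trivial sketch. This is not any researcher's formal agent model; the thermostat scenario, thresholds, and action names are hypothetical illustrations of a perceive-decide-act loop:

```python
# A minimal, purely illustrative "intelligent agent" loop.
# The environment, percepts, and action names are hypothetical toy constructs.

def thermostat_agent(percept_stream, target=21.0):
    """A trivial reactive agent: perceives temperatures and chooses actions."""
    history = []                      # internal state: a weak form of autonomy
    actions = []
    for temp in percept_stream:       # reactivity: respond to each percept
        history.append(temp)
        if temp < target - 1:
            actions.append("heat_on")
        elif temp > target + 1:
            actions.append("heat_off")
        else:
            actions.append("idle")    # proactivity would add goal-driven planning here
    return actions

print(thermostat_agent([18.0, 20.5, 23.0]))  # ['heat_on', 'idle', 'heat_off']
```

Even this toy example shows why sociality and evolution are the hard properties: the loop reacts and keeps state, but nothing in it communicates with other agents or improves its own rules.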
Second, the underlying logic of artificial intelligence is big data. The rise of artificial intelligence in the 21st century is closely linked to the drive of big data. How, then, should we understand big data? “Big data refers to data sets that cannot be captured, managed, and processed with conventional software tools within a reasonable time frame, requiring new processing models to obtain stronger decision-making power, insight discovery, and process optimization capabilities.” The emergence of big data is closely related to the revolution in computer technology, further promoting changes in data collection, storage, analysis, and processing. The author of “Big Data: A Revolution That Will Transform How We Live, Work, and Think” holds that big data has five typical characteristics: Volume, Velocity, Variety, Value, and Veracity. The exponential growth of data volume provides foundational support for artificial intelligence. Traditional artificial intelligence was limited by data and algorithms, able to engage only in simplified tasks within a single domain, while big data drives artificial intelligence into the stage of machine learning and deep understanding, which may be a breakthrough from weak artificial intelligence to strong artificial intelligence. The ultimate extension of big data is dataism: everything can be reduced to data collection and processing; all humanity can be viewed as a single data-processing system, with each person merely a chip processing small amounts of data; all living beings are algorithms, and the operation of life is data processing. Thus dataism issues a shocking and sobering prediction: intelligence will be decoupled from consciousness, and unconscious algorithms will completely surpass conscious human intelligence.
Third, the key to the operation of artificial intelligence is algorithms. Merely having big data and high-performance computers is insufficient to support the artificial intelligence revolution; “algorithms” are the organizers and commanders of the orderly operation of the entire computing system. Top-notch algorithms can integrate vast data resources and available computing power, thereby maximizing value. There are many definitions of algorithms, such as “an algorithm is a process of completing a task step by step within a limited time” and “an algorithm is a finite, definite, effective method suitable for implementation via computer programs to solve problems.” The UK Artificial Intelligence Council defines algorithms as “a series of instructions executed on a computer to perform calculations or solve problems, constituting the basis of all that computers can do, and thus are fundamental aspects of all artificial intelligence systems.” The common feature of various definitions is that algorithms are collections of programs, steps, instructions, and methods for achieving tasks. Top-notch algorithms should possess three fundamental characteristics: first, they can complete urgent tasks within a limited time; second, they possess relative transparency and openness to ensure fairness; third, they have a high degree of integration, effectively utilizing various programs, technologies, and methods to achieve tasks. Although humans actively invent and innovate to optimize algorithms and enhance computing power, they cannot equally possess or fairly use algorithm resources. When possessing algorithm resources means holding the privilege of digital monopoly, various resources centered on algorithms become “intelligent property,” which will be a new form of wealth in the future world. 
The disparity in ownership of algorithm resources will lead to different subjects’ inequalities in recognizing, possessing, and sharing the benefits of artificial intelligence, causing risks such as algorithm discrimination, algorithm black boxes, algorithm monopolies, and algorithm gaps, thus creating a new poverty phenomenon called “digital poverty,” and subsequently giving rise to a new impoverished class known as “digital poor.”
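The definitions quoted above ("a finite, definite, effective method" completing a task "step by step within a limited time") have a classic textbook illustration in Euclid's algorithm; the example below is a standard sketch, not drawn from this article's sources:

```python
# Euclid's algorithm for the greatest common divisor: a classic illustration
# of the algorithm definitions above.

def gcd(a: int, b: int) -> int:
    """Repeatedly replace (a, b) with (b, a mod b) until b reaches 0."""
    while b != 0:          # definiteness: each step is unambiguous
        a, b = b, a % b    # effectiveness: each step is mechanically executable
    return a               # finiteness: b strictly decreases, so the loop ends

print(gcd(1071, 462))  # 21
```

Each of the three properties the definitions demand (finiteness, definiteness, effectiveness) corresponds to a line of the loop, which is why this example has served as the canonical "algorithm" since Knuth.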
In addition, there is an urgent need for new understanding and interpretation in the legislative construction, ethical review, and social collaboration of artificial intelligence. At this stage, the global development of artificial intelligence faces a dual paradox: on the one hand, we must recognize and understand the achievements of artificial intelligence and the trends of future development, and actively participate in and promote the expansion and deepening of artificial intelligence; on the other hand, we are deeply worried or fearful of the risks and consequences of artificial intelligence. The most shocking statements suggest that in the future, “silicon-based life” will replace “carbon-based life,” and humanity may become the pets of robots, with the future world entering a post-human era… Various excessive interpretations and boundless imaginations regarding artificial intelligence stem from our unclear understanding of the relationship between artificial intelligence and human intelligence, thus necessitating a re-understanding of the inherent tension between the two.
3. Simulation and Enhancement: The Gentle Path to Transcendence

The “gentle transcendence theory” posits that artificial intelligence is essentially a simulation of human intelligence. In certain domains, machines can think and act like humans, and even perform better than humans; however, artificial intelligence is merely a partial enhancement and limited extension of human intelligence, and the “artificial enhancement” and “extended mind” it creates cannot overall transcend human intelligence, nor can it possibly replace human intelligence; even if the development of artificial intelligence accelerates in the future, it remains a product and tool of human intelligence. This is the current mainstream position and discourse, which is also the ideal outcome that most people welcome, reflecting the desire for human rationality to control and discipline technological civilization.

Whether intelligent machines can possess self-awareness and forms of thinking similar to humans is the key to whether artificial intelligence can transcend human intelligence. If artificial intelligence is an intelligent machine system constructed and manufactured by humans, does the machine have consciousness? This question divides into three levels: What is consciousness? Can consciousness be loaded into machines? And if it can, how can this be verified? Specifically: (1) What is consciousness? The conscious experiences we grasp in intuitive, sensory, and perceptual activities do not equal consciousness itself, but they are the basis on which consciousness presents itself, and we cannot use our own experienced consciousness to infer the consciousness of others. David J. Chalmers points out: “Conscious experience is at once the most familiar thing in the world and the most mysterious. There is nothing we know about more directly than consciousness, but it is far from clear how to reconcile it with everything else we know.” Functionalism argues that as long as machines can perform actions similar to humans, they can be said to possess consciousness. (2) The feasibility of loading consciousness into machines. Connectionism suggests that with the development of neural networks and breakthroughs in computational technology, we can construct a self-modeling and self-learning machine that behaves like a system with self-awareness, which in turn allows us to operationally define so-called “self-awareness.” As early as 2003, Jürgen Schmidhuber, a pioneer of machine deep learning, proposed a theoretical device called the “Gödel machine,” capable of self-referential computation and of autonomously rewriting and improving its own program. Since then, information carriers equipped with artificial intelligence have continued to emerge. (3) The verifiability of machine consciousness.
“Behaviorism” claims that proving whether machines have human consciousness does not require a biological entity like the human brain; as long as machines can pass tests like the “Turing Test,” we can say that they possess consciousness. With the development of emerging fields such as neuroscience, this idea may yet be realized. The excellent performance of AlphaGo shows that machines can think and act rationally like humans and even exhibit, at critical moments, certain qualities that surpass human rationality.
Even if the “Turing Test” establishes the possibility of machines possessing human-like thinking, does machine thinking equate to human thinking? Following the Turing Test, Searle proposed the famous “Chinese Room” thought experiment to critique the strong artificial intelligence view that, provided the program design is sophisticated enough, machines possess the capability of human thought. The basic idea of the experiment is this: a person who does not understand Chinese is locked in a room with only a small window; inside are many Chinese symbols and a rulebook (a program) for answering Chinese questions. Questions are passed in as Chinese symbols, and answers are passed out as Chinese symbols. We can suppose the program is so perfect that the answers are indistinguishable from those of a native Chinese speaker; thus, although the person does not understand the meanings of the Chinese characters, he can, by following the program, appear to understand and express Chinese accurately, leading outsiders to mistakenly believe that Chinese is his native language. The same applies to artificial intelligence, including computers: although it cannot truly understand the meanings of the information it receives, it can accurately recognize and convey it. Through this experiment Searle attempts to prove that the human mind is generated on the basis of the brain’s activities as its carrier, and that any other subject wishing to obtain a human-like mind must possess the same causal powers as the human brain; “the program itself cannot constitute a mind, and the formal syntax of the program cannot ensure the emergence of mental content.”
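The point of the Chinese Room can be made vivid with a toy program. This is of course not Searle's actual rule cards; the rulebook entries below are hypothetical stand-ins, chosen only to show purely syntactic symbol matching with no grasp of meaning:

```python
# A toy "Chinese Room": the program maps input symbols to output symbols by
# pure rule-following, with no understanding of what the symbols mean.
# The rulebook entries are hypothetical stand-ins for Searle's rule cards.

RULEBOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你是谁": "我是一个程序",    # "Who are you?" -> "I am a program"
}

def chinese_room(question: str) -> str:
    # Purely syntactic lookup: the "person in the room" matches shapes, not meanings.
    return RULEBOOK.get(question, "请再说一遍")  # fallback: "Please say that again"

print(chinese_room("你好吗"))  # 我很好
```

The program answers "correctly" while manipulating the characters as opaque shapes, which is exactly Searle's point: syntactically adequate behavior does not by itself show that semantic understanding is present.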
The “Chinese Room” experiment and the philosophical debate around it prompt important reflections for artificial intelligence research: the pursuit and simulation of human intelligence by artificial intelligence have inherent limitations. (1) Partial transcendence but overall inadequacy. The complexity of the object world far exceeds the variability of subjective thinking, and the limits of subjective cognition mean that we cannot cover all areas of cognitive activity. Artificial intelligence can achieve great success in areas where human intelligence stalls or is absent, and can even mimic or exceed human intelligence in familiar areas, but this does not amount to comprehensive transcendence. On the contrary, artificial intelligence is a theory and technology that expands and extends human intelligence, aiming to create a “human-brain-like” intelligent system through scientific and technological means; its achievements are merely results of the extension of human intelligence. It is a machine system constructed by humans on the basis of human intelligence and used to represent certain of its characteristics or functions. Without human intelligence there is no artificial intelligence; to some extent, artificial intelligence is the reification and externalization of human intelligence. (2) External transcendence but internal inadequacy. Although humans face the crisis of being displaced from jobs that robots can perform, artificial intelligence has also created various new jobs and positions.
To date, the areas where artificial intelligence has performed excellently are basically those where technological revolutions have already made breakthroughs; the occurrence of technological revolutions follows a gradual process from external to internal, from surface to essence; the areas where technological revolutions have not occurred demonstrate that the conditions and timing for change are not yet mature, and the silence or absence of artificial intelligence in these areas shows the lag and deficiencies in current artificial intelligence research. (3) Quantity surpassing but quality insufficient. Compared to the human brain, artificial intelligence excels in the transformation of three core elements: data resources, algorithm innovation, and computing power enhancement. It can quickly compute and analyze the vast data it obtains within a limited time, thus deriving optimal results. This achievement is based on the preconditions of vast resources, advanced algorithms, and super computing power, while the comprehensive cost of intelligent computation, the overall benefits of social development, and the cultivation of cultural values receive minimal attention. Although artificial intelligence continues to evolve through its adjustments, adaptations, and enhancements, it faces multiple challenges, including data resource scarcity, “algorithm black boxes,” and the uncertainty of social environments.
4. Transcendence and Replacement: The Radical Intelligence Revolution
In 1972, the American mathematician Vernor Vinge and others warned in the article “Structures and Evolution in Mathematics Since Gödel”: “By 1984, the entire galaxy may be ruled and controlled by computers that can self-replicate, self-modify, and are fundamentally indestructible, evolving infinitely into higher forms of intelligence. In the future, humans may be forced to obey supermachines, or perhaps humans will become the pets or pests of computer thinkers… becoming mere memories of their lower forms preserved in future zoos.” Interestingly, even today no machine has replaced humans, but this does not mean that humanity can rest easy or that machines will forever remain under the discipline of human intelligence. Artificial intelligence experts continually issue risk warnings: “In the future world, humans and machines will be indistinguishable, and humans will no longer be the spirits of all things. Computers will possess intelligence a thousand times greater than the human brain. Quantum computing will ignite the future of technology. Machines will not only possess intelligence but also have minds, with human-like consciousness, emotions, and desires.” Some even predict that the combination of humans and machines will greatly extend human lifespan, and that virtual reality may lead to mutual “love” between humans and machines, resulting in a future in which new humans or new species are born from the union of humans and machines.
Whether machines can surpass or even replace humans is a question that bears on the survival or extinction of humanity, requiring us to re-examine the tense relationship between artificial intelligence and human intelligence. The focus of the divisions and controversies surrounding artificial intelligence is precisely whether it can surpass or even replace human intelligence. The gentle path from simulation to enhancement is merely one possible path of technological development, unlikely to provoke excessive anxiety or apocalyptic warnings about machines. However, the reason artificial intelligence has become a theoretical hotspot and social focus is that it is no longer limited to simple machine thinking; its functional evolution will not stop at the stage of simulation and enhancement. The future rhythm of artificial intelligence development will run from quantitative change to qualitative change, from evolution to mutation, possibly bringing risks capable of overturning human civilization. Using artificial intelligence to re-examine human civilization and reconstruct world history may no longer be a myth. In this sense, we need to trace and outline the development of artificial intelligence, gaining insight into the inherent tension of the human-machine relationship from its evolutionary history. The development of artificial intelligence can generally be divided into three stages. The first stage is weak artificial intelligence, in which machines appear to think and act like humans within narrow domains, focusing mainly on physical imitation, image recognition, speech processing, and intelligent computation, with terminal products such as industrial robots, smartphones, and translation systems.
With respect to human intelligence, weak artificial intelligence remains at the stage of partial imitation and limited enhancement; although it can compensate for and extend human intelligence in many areas that humans cannot reach or perform, it ultimately remains within the range that humans can understand and control, simulating human intelligence to produce responses similar to it. The second stage is strong artificial intelligence, in which machines think and act like humans, focusing mainly on psychological feeling, thinking activity, decision-making judgment, and choice of action, with terminal products such as advanced robots, autonomous driving, deep learning, and other intelligent systems.
It is worth noting that while humans remain vigilant toward strong artificial intelligence, artificial general intelligence (AGI) is regarded as the ideal goal and ultimate form of human-developed intelligent technology. The essential difference between AGI and strong artificial intelligence is that AGI possesses efficient learning and generalization abilities, can autonomously generate and complete tasks according to its environment, and has perception, cognition, and decision-making abilities similar to those of humans; strong artificial intelligence focuses on specific domains, possessing high autonomy and superhuman capability within them and able to perform, or even outperform, human tasks, but lacking cross-domain adaptability. AGI has high flexibility and adaptability, with self-learning and self-directed decision-making capabilities; it can respond to multiple scenarios and execute multimodal tasks, making it a comprehensive intelligent agent with multidimensional functions and attributes. Multimodal large models, for example, are effective attempts in the direction of AGI, and it cannot be ruled out that comprehensive intelligent agents operating across all times, spaces, and modalities may emerge in the future; but such agents come very close to the superintelligence that humans worry about and guard against, and the two are difficult to distinguish. Strong artificial intelligence has already evolved beyond the initial stage of simulating, imitating, and mimicking human intelligence to a level functionally equivalent to it. In other words, tasks that human intelligence can complete, artificial intelligence can generally complete as well.
Compared to human intelligence, strong artificial intelligence, while differing in essence and structure, does not lag behind in functional application or goal achievement; in some areas, such as deep-sea exploration, hazardous rescue, autonomous driving, and intelligent computation, it performs better than human intelligence and can even replace it.
Will strong artificial intelligence replace or eliminate human intelligence? Some scholars propose that while strong artificial intelligence can think and act like humans, it merely performs roles and responsibilities different from those of human intelligence; it does not intend to replace human intelligence or eliminate the human species, and the two can coexist and compete without contradiction. However, this may be no more than an optimistic fantasy; the development of artificial intelligence technology does not necessarily follow this idealistic path, and the technological iteration of strong artificial intelligence may give rise to the third stage of artificial intelligence, namely “superintelligence.” Superintelligence aims to break through the traditional dichotomy of weak and strong artificial intelligence, transcending human cognitive patterns, levels of understanding, and developmental limits. It possesses not only the abstraction, complexity, and capacity for evolution of human thought but also the super-thinking that humanity lacks and urgently needs. Although machines are not yet a threat to humans in terms of general intelligence, a future superintelligence may far exceed human cognitive abilities in almost all domains. While there is no unified understanding of how to recognize and define superintelligence, there is at least this consensus: superintelligence is intelligence that far surpasses the current human mind in almost all general cognitive domains; the timing and speed of its emergence are highly uncertain, possibly slow and gentle, possibly sudden and fierce; the moment of the singularity is beyond human prediction or control; and superintelligence poses significant risks and threats to the human future. Humanity’s advantage lies in its ability to act early and guard against those risks, though how to act, and whether such action would be effective, remains highly questionable.
Superintelligence is as yet only a bold conjecture about, and a civilizational concern over, the future risks of artificial intelligence; it is difficult to specify its forms and functions concretely, which is precisely what makes it frightening and dangerous. Bostrom, however, has predicted its possible forms: in terms of form, superintelligence can be broadly divided into three types—speed superintelligence, collective superintelligence, and quality superintelligence. In simple terms, “fast superintelligence refers to intelligence that is similar to the human brain but faster than the human brain. This system can accomplish everything that human intelligence can accomplish, but at a much faster speed. Another form of superintelligence is a system that achieves excellent performance by integrating a large number of small intelligences. This system consists of a vast number of small intelligences that greatly exceed the overall performance of all existing cognitive systems in many general fields. … The third form of superintelligence is qualitative superintelligence, which is a system that is at least as fast as the human brain and has a significant qualitative advantage in intelligence compared to humans.” In terms of origin and inheritance, all three forms of superintelligence take human intelligence as their template, generated through imitation and enhancement, learning and modification, evolution and mutation. It must be pointed out that superintelligence will not arrive early or attack without cause; if human intelligence has not developed to a sufficient level of maturity and completeness, true superintelligence cannot emerge.
Humanity’s concerns about the era of superintelligence are, in essence, a defense of the naive human-centered intuition. As a creation of humanity, will artificial intelligence threaten, surpass, or even replace human civilization itself? Will superintelligence end human intelligence? How should the inherent tension between the two be handled? These questions present three possible paths. (1) A zero-sum game of survival. The scarcity of Earth’s resources, the acceleration of technological innovation, and the uncertainty of the future world intensify the binary conflict between superintelligence and human intelligence, leading to a competitive game of survival. Against a superintelligence possessing rapid evolution and deep learning capabilities, human intelligence faces severe challenges, and falling behind in the intelligence race may mean the extinction of human civilization. Artificial intelligence evolved through deep learning possesses great flexibility and strategy: “when artificial intelligence is weak, it behaves very cooperatively (as it becomes smarter, it becomes more cooperative). When artificial intelligence becomes powerful enough, it will launch a counterattack without warning or provocation, forming a single entity and begin to directly transform the world according to its ultimate values.” If superintelligence surpasses and ultimately replaces human intelligence, human civilization will fall into its darkest hour. Hawking repeatedly stated that artificial intelligence could be either the best thing in human history or its worst and most destructive event, and that if we do not stop researching and applying it, humanity will ultimately be replaced by it. (2) A mutually destructive survival dilemma.
The status struggle between superintelligence and human intelligence is essentially the ultimate reflection on the direction of human civilization—whether humans can ultimately control and discipline the products they have invented and created. If neither side can overcome or transcend the other, yet neither can reconcile and coexist, then the lasting struggle between human intelligence and superintelligence will end in a mutually destructive survival dilemma, which would fundamentally hinder humanity’s understanding and practical application of artificial intelligence and contribute nothing to the self-development and innovative progress of human civilization. (3) Harmonious coexistence of mutual benefit. Human intelligence invents and creates artificial intelligence, initiating the intelligent revolution of weak, strong, or even superintelligence, in order to fill and overcome cognitive gaps and defects in human beings, expand and deepen the reach of human rationality, and ultimately enable human beings to live a better life. Artificial intelligence should not deny, cancel, or replace human intelligence, but should become a driving force and innovative path for its advancement. Likewise, human intelligence should not ignore the rapid development and transformative effects of artificial intelligence, but should recognize its rightful subjectivity and respect its reasonable rights. A relationship of mutual trust and equality would mean that each pursues its own path while competing with the other, each performs its own duties, and the two achieve win-win coexistence. At the Fourth World Internet Conference (Wuzhen) in 2017, Jack Ma described the relationship between humans and machines as follows: “In the past 30 years, we have turned humans into machines; in the next 30 years, we will turn machines into humans, but ultimately machines should be more like machines, and humans should be more like humans.”
5. Rethinking the Ethical Framework of Human-Machine Relationships
The technological fears, ethical reflections, and philosophical critiques triggered by artificial intelligence reveal that artificial intelligence is a “philosophical event” with metaphysical significance, and they announce once again a “philosophical crisis,” namely how philosophy is to position itself and examine values in the era of strong artificial intelligence. In the wave of technologization, acceleration, and intelligentization, philosophy cannot be a short-sighted absentee or a silent bystander; it should face the wave bravely and participate actively, reasonably interpreting the functions and significance of artificial intelligence while deeply reflecting on the technological risks and moral concerns it generates. In this regard, Asaro advocates constructing a responsible robot ethics, ensuring that humans and machines do not engage in a mutually exclusive survival game, which includes at least three aspects: “First, the ethical systems embedded in robots… Second, the ethics of designing and using robots… Third, the ethics of how humans treat robots.” The research and application of artificial intelligence involve not only the iterative upgrading of information technology but also a profound transformation of human thinking and a deep examination of ethical norms. The current task of human civilization is to set ethical procedural rules for artificial intelligence from the very beginning: embedding human morals and ethical norms into the algorithmic systems of intelligent machines, ensuring that intelligent operations adhere to designated moral commands, and striving to align intelligent machines with human values. If intelligent machines violate the rules, they should activate shutdown modes; if they attack or harm humans, they should trigger preset self-destruction programs. However, this ethical design is not only extremely complex but also faces moral dilemmas.
The most famous early interpretation of human-machine ethics is the Three Laws of Robotics proposed by Isaac Asimov in the science fiction collection “I, Robot”: “First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm; Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” He later added a Zeroth Law as the root of the three—a robot must protect humanity as a whole from harm. The Zeroth Law takes precedence in execution over the other three, and the four laws together form a lexicographic order of priority. On the surface, these laws are ethical rules for intelligent robots, requiring them primarily to serve human interests while also granting them moral subjectivity and rights. However, they contain significant design loopholes. First, their content is broad, their semantics abstract, and their boundaries fuzzy: terms like “overall interest,” “harm,” and “obedience” are highly contextual expressions for which humans have yet to form a universal cognitive model, and absent explicit instructions in the machine’s code, robots would act according to their own will. Second, the laws as a whole require intelligent machines to serve human interests or even make sacrifices, but this is a soft constraint grounded in a deontological stance: machines might fail to execute it, or even sacrifice human interests in turn.
In response to this dilemma, many scholars have corrected and supplemented this set of laws, with Roger Clarke’s revisions being typical. He set a Meta-Law that takes precedence over all other laws: a robot may do nothing unless its actions comply with the Laws of Robotics. Following this is a Fourth Law: a robot must fulfill the responsibilities assigned by its embedded programs unless doing so conflicts with a higher-level law. To prevent robots from mimicking human reproduction, he also formulated a Procreation Law: a robot must not take part in the design and manufacture of other robots unless the new robot’s actions comply with the Laws of Robotics. The closed-loop system constructed by these seven laws appears flawless, but there is a serious gap between the perfect theoretical conception and complex real-world situations. In reality, robots will face a dilemma of choice. If they cannot adapt and make optimal choices, they remain stuck in the stages of imitation and enhancement, and so-called general artificial intelligence or superintelligence does not exist. If they can replace humans in making optimal emergency responses in complex environments, humanity will be both pleased and worried—pleased that machines can make optimal choices where human cognition is limited, but worried about whether machines should decide, on humanity’s behalf, matters of significance to human fate. If intelligent machines take control over humans, then the self-awareness, independent personality, and subject spirit that humans have always revered will face collapse and extinction.
The debate over whether artificial intelligence or human intelligence is superior is not only an exploration of how intelligent technology shapes and influences the future of humanity but also a reconstruction and expansion of the concepts of “future humans” and the “human future.” Some scholars believe that artificial intelligence will redefine the concept of humanity, with electronic humans, biochemical humans, digital humans, synthetic humans, or super-intelligent beings becoming the new humans of the future. This will blur the boundaries between humans and machines and enhance human survival advantages, forming a new life form based on human-machine interaction. Max Tegmark’s concept of Life 3.0 has received positive responses from Hawking, Musk, Harari, and others. Life 1.0 refers to life whose hardware and software are determined by DNA and can hardly change except through extremely slow evolution—the biological stage of life. In the Life 2.0 stage, humans can learn complex new skills and master tools, making limited designs and relative improvements to production methods, social interactions, and forms of language, thereby forming certain worldviews and values—what may be called the cultural stage of life. “Life 3.0 does not yet exist on Earth; it can not only maximally redesign its software but also redesign its hardware without waiting for many generations of slow evolution.” Unlike the previous two stages, Life 3.0 is the technological stage, representing the maximized evolution of human intelligence through the development of artificial intelligence.
Essentially, artificial intelligence is conceived and created by human intelligence, and its emergence not only triggers an epistemological revolution that inspires humans to re-observe and rethink the world but also transforms the world itself, including humanity. The arrival of the artificial intelligence era is not an option we can refuse, but an inevitable circumstance we must engage with. Even without artificial intelligence, human intelligence would still develop, and significant risks and challenges would arise in other forms. The fundamental way to resolve human-machine conflict is to transcend dogmatic binary opposition and the zero-sum game; only the organic coexistence and harmonious progress of both can be the path of civilizational development. The urgent task of human civilization is to construct a human-centered, responsible ethical framework for artificial intelligence, which is the basic principle for handling the relationship between artificial intelligence and human intelligence. Intelligent machines developed and applied under this ethical principle will not only greatly enhance and perfect human survival capabilities and forms of existence but will also gain the standing and rights recognized and respected by humanity, becoming a new element and force in human civilization.

Article reprinted from WeChat Official Account: Theoretical Monthly