A popular definition of artificial intelligence, and one of the earliest in the field, was proposed by John McCarthy, then at Dartmouth College, at the Dartmouth Conference in 1956 (though the attribution is disputed): artificial intelligence is about making machines behave like intelligent human beings. However, this definition seems to overlook the possibility of strong artificial intelligence. Another definition holds that artificial intelligence is the intelligence exhibited by artificial machines. Overall, current definitions of artificial intelligence fall into four categories: machines that “think like humans,” “act like humans,” “think rationally,” and “act rationally.” Here, “act” should be understood broadly as taking action or making decisions about actions, rather than merely physical movement.
Strong AI
The strong AI position holds that it is possible to create “truly” intelligent machines that can reason and solve problems, and that such machines would possess perception and self-awareness. Strong AI can be divided into two categories:
- Human-like artificial intelligence, where the machine’s thinking and reasoning resemble human thought.
- Non-human artificial intelligence, where the machine has perception and consciousness entirely different from humans’ and uses reasoning methods completely unlike those of humans.
Weak AI
The weak AI perspective holds that it is “impossible” to create machines that can “truly” reason and solve problems; these machines merely “appear” intelligent but do not genuinely possess intelligence or self-awareness.
Weak AI arose as a contrast to strong AI: AI research had stagnated for a period, until the powerful computational capabilities of neural networks were applied to simulation and brought significant advances. Not all AI researchers endorse weak AI, however, and many neither care about nor engage with the distinction between strong and weak AI, so the debate over definitions continues.
In the current field of AI research, researchers have produced a large number of machines that “appear” intelligent, achieving substantial theoretical and practical results. For example, in 2009, Professor Hod Lipson of Cornell University and his doctoral student Michael Schmidt developed the Eureqa computer program, which, given raw data, was able to rediscover Newton’s laws of mechanics within a few hours, formulas that took Newton years of research to arrive at. The program can also be applied to many other scientific problems. These so-called weak AIs have advanced significantly with the development of neural networks, but how they might be integrated into strong AI remains an open question. A toy sketch of the idea behind Eureqa follows.
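Eureqa’s underlying technique is symbolic regression: searching a space of candidate formulas for one that fits the data. Eureqa itself uses evolutionary search; the sketch below only conveys the idea, brute-forcing a tiny, hand-picked formula family over invented data, and is not Eureqa’s actual algorithm.

```python
# Toy illustration of symbolic regression: search candidate formulas for
# one that fits the data. All data and parameter ranges are invented.
import itertools

# Invented measurements secretly generated by y = 2*x + 3.
data = [(x, 2 * x + 3) for x in range(-5, 6)]

# Candidate formulas: y = a*x + b and y = a*x^2 + b for small a, b.
candidates = []
for a, b in itertools.product(range(-5, 6), repeat=2):
    candidates.append((f"y = {a}*x + {b}", lambda x, a=a, b=b: a * x + b))
    candidates.append((f"y = {a}*x^2 + {b}", lambda x, a=a, b=b: a * x * x + b))

def squared_error(f):
    """Total squared error of formula f over the data set."""
    return sum((f(x) - y) ** 2 for x, y in data)

name, _ = min(candidates, key=lambda c: squared_error(c[1]))
print(name)  # y = 2*x + 3
```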
Philosophical Debates on Strong AI
The term “strong AI” was originally coined by John Rogers Searle in connection with computers and other information-processing machines. He defined it as follows:
“The strong AI perspective believes that computers are not merely tools for studying human thought; rather, provided they run the appropriate programs, computers themselves can think.” (J. Searle in Minds, Brains, and Programs. The Behavioral and Brain Sciences, vol. 3, 1980)
The debates surrounding strong AI differ from the broader discussions of monism and dualism. The crux of the debate is: if a machine’s sole function is to convert encoded data, does that machine think? Searle argues that it does not. He uses the example of the Chinese Room to illustrate that if a machine merely converts data, and that data is an encoded representation of something, then without understanding the relationship between that encoding and the actual thing, the machine cannot genuinely comprehend the data it processes. Based on this argument, Searle contends that even if a machine passes the Turing Test, it does not necessarily indicate that the machine truly possesses self-awareness and free will like a human.
Other philosophers hold different views. Daniel Dennett, in his book “Consciousness Explained,” argues that humans are merely machines with souls, questioning why we believe that “humans can be intelligent while ordinary machines cannot.” He posits that machines capable of data conversion may indeed possess thought and consciousness.
Some philosophers argue that if weak AI is achievable, then strong AI must be achievable as well. For instance, Simon Blackburn notes in his introductory philosophy textbook, “Think,” that a person’s seemingly “intelligent” behavior does not prove that the person is truly intelligent: one can never know whether another person is genuinely intelligent or merely appears to be. By this argument, since weak AI holds that machines can appear intelligent, it cannot be entirely ruled out that such machines genuinely possess intelligence. Blackburn considers this a matter of subjective judgment.
It is worth noting that weak AI is not the complete opposite of strong AI: even if strong AI is possible, weak AI remains meaningful. At the very least, tasks that today’s computers perform easily, such as arithmetic, were considered to require intelligence a little over a hundred years ago. Moreover, even if strong AI is proven to be possible, it does not follow that strong AI will actually be built.
Research Methods
Currently, there is no unified principle or paradigm guiding artificial intelligence research. Researchers often debate many issues.
Several long-standing questions remain unresolved: should artificial intelligence be simulated at the psychological level or at the neurological level? Or, just as bird biology is of limited relevance to aeronautical engineering, is human biology irrelevant to AI research? Can intelligent behavior be described by simple principles (such as logic or optimization), or does it require solving a large number of entirely unrelated problems?
Can intelligence be expressed using high-level symbols, such as words and thoughts? Or does it require “sub-symbolic” processing? John Haugeland proposed the concept of GOFAI (Good Old-Fashioned Artificial Intelligence) and suggested that AI should be classified as synthetic intelligence, a concept later adopted by some non-GOFAI researchers.
Cybernetics and Brain Simulation
From the 1940s to the 1950s, a number of researchers explored the connections between neurology, information theory, and cybernetics. Some of them built machines displaying rudimentary intelligence using electronic networks, such as W. Grey Walter’s turtles and the Johns Hopkins Beast.
These researchers often gathered for technical society meetings at Princeton University and at the Ratio Club in the UK. By the 1960s, most had abandoned this approach, although its principles were revisited in the 1980s.
Symbolic Processing
When digital computers were successfully built in the 1950s, researchers began to explore whether human intelligence could be reduced to symbol manipulation. Research was concentrated mainly at Carnegie Mellon University, Stanford University, and MIT, each with its own independent research style. John Haugeland called these methods GOFAI (Good Old-Fashioned Artificial Intelligence). In the 1960s, symbolic methods achieved notable successes in simulating high-level thinking in small demonstration programs, and methods based on cybernetics or neural networks were pushed into the background. Researchers of the 1960s and 1970s were convinced that symbolic methods would eventually succeed in creating a machine with strong artificial intelligence, which was also their goal.
- Cognitive Simulation: Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them; their work laid the foundations of the field of AI, as well as of cognitive science, operations research, and management science. Their research team used the results of psychological experiments to develop programs that simulated human problem-solving methods. This tradition was carried on at Carnegie Mellon University and culminated in Soar in the 1980s.
- Logic-Based: Unlike Newell and Simon, John McCarthy believed that machines did not need to simulate human thought; instead, they should seek the essence of abstract reasoning and problem-solving, regardless of whether people use the same algorithms. His lab at Stanford University focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning, and machine learning. The University of Edinburgh also concentrated on logical methods, and, together with work elsewhere in Europe, this led to the development of the Prolog programming language and the science of logic programming.
- Anti-Logic: Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving hard problems in computer vision and natural language processing required specialized, ad hoc solutions: they argued that no simple, universal principle (such as logic) could capture all intelligent behavior. Roger Schank described their “anti-logic” approach as “scruffy.” Knowledge bases (such as Douglas Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.
- Knowledge-Based: When computers with large memories became available around 1970, researchers from all three of the above traditions began building knowledge into AI applications. This “knowledge revolution” led to the development of expert systems, the first truly successful form of AI software. The knowledge revolution also made it clear that even many simple AI applications would require enormous amounts of knowledge. A minimal sketch of this rule-based, knowledge-driven style appears after this list.
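To make the symbolic, knowledge-based style concrete, below is a minimal sketch of forward-chaining inference, the basic mechanism behind many early rule-based expert systems. The rules and facts are invented purely for illustration; real systems such as Cyc encode vastly larger and more nuanced knowledge.

```python
# A minimal forward-chaining inference engine. Each rule is a pair:
# (set of premise facts, concluded fact). Rules and facts are invented.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts):
    """Repeatedly fire rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
# Derives 'possible_flu' first, then 'see_doctor' from the chained rule.
```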
Sub-Symbolic Methods
In the 1980s, symbolic AI stagnated, with many believing that symbolic systems could never fully mimic all human cognitive processes, particularly perception, robotics, machine learning, and pattern recognition. Many researchers began focusing on sub-symbolic methods to solve specific AI problems.
- Bottom-Up, Interface Agents, Embedded Environments (Robotics), Behaviorism, New AI: Researchers in robotics, such as Rodney Brooks, rejected symbolic AI and focused on fundamental engineering problems such as robot mobility and survival. Their work revived the viewpoint of the early cybernetics researchers and proposed using control theory in AI. This aligns with the embodied cognition thesis in cognitive science: higher intelligence requires individual, embodied representations (such as movement, perception, and imagery).
- Computational Intelligence: In the mid-1980s, David Rumelhart and others reintroduced neural networks and connectionism. This, along with other sub-symbolic methods such as fuzzy control and evolutionary computation, falls under the umbrella of computational intelligence research. A minimal connectionist example follows this list.
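As a taste of what “sub-symbolic” means in practice, here is a minimal perceptron, one of the simplest connectionist models: its behavior lives entirely in numeric connection weights rather than explicit symbols or rules. The task (learning logical AND) and all constants are chosen purely for illustration.

```python
# A single perceptron learning logical AND. Training pairs: (inputs, target).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights
b = 0.0         # bias
lr = 0.1        # learning rate

for epoch in range(20):
    for (x1, x2), target in data:
        output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - output
        # Perceptron learning rule: nudge weights toward the target.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

for (x1, x2), target in data:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print(f"AND({x1}, {x2}) = {pred} (expected {target})")
```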
Statistical Methods
In the 1990s, AI research developed sophisticated mathematical tools to tackle specific subfields. These tools represent genuinely scientific methods, in that their results are measurable and verifiable, and they are one reason for AI’s recent successes. A shared mathematical language also enables collaboration with established disciplines such as mathematics, economics, and operations research. Stuart J. Russell and Peter Norvig describe this movement as nothing less than a “revolution” and “the victory of the neats.” Critics counter that these techniques are too focused on specific problems and neglect the long-term goal of strong AI. A small example of this statistical style follows.
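As one illustration of the statistical style, below is a minimal naive Bayes spam scorer, a typical probabilistic tool of the period. All word counts and priors are invented for the example.

```python
# A minimal naive Bayes text classifier with Laplace smoothing.
# Word frequencies observed per class (toy numbers, invented).
import math

counts = {
    "spam": {"free": 30, "win": 20, "meeting": 1},
    "ham":  {"free": 2,  "win": 1,  "meeting": 40},
}
priors = {"spam": 0.5, "ham": 0.5}
vocab = {w for c in counts.values() for w in c}

def log_posterior(words, label):
    """log P(label) + sum of log P(word | label), with Laplace smoothing."""
    total = sum(counts[label].values())
    score = math.log(priors[label])
    for w in words:
        score += math.log((counts[label].get(w, 0) + 1) / (total + len(vocab)))
    return score

message = ["free", "win"]
print(max(priors, key=lambda label: log_posterior(message, label)))  # spam
```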
Integrative Methods
- Intelligent Agent Paradigm: An intelligent agent is a system that perceives its environment and takes actions to achieve its goals. The simplest intelligent agents are programs that solve specific problems; more complex agents include humans and organizations of humans (such as companies). The paradigm lets researchers study individual problems and find solutions that are both useful and verifiable, without committing to a single approach. An agent solving a specific problem can use any method that works: some agents are symbolic and logical, others use sub-symbolic neural networks or other new methods. The paradigm also gives researchers a common language for communicating with other fields, such as decision theory and economics, which likewise use the concept of abstract agents. The intelligent agent paradigm became widely accepted in the 1990s. (A minimal sketch of the perceive-decide-act loop appears after this list.)
- Agent Architectures and Cognitive Architectures: Researchers have designed systems to handle the interactions between intelligent agents in multi-agent systems. A system containing both symbolic and sub-symbolic components is called a hybrid intelligent system, and the study of such systems is known as AI systems integration. Hierarchical control systems provide a bridge between sub-symbolic AI at their lowest, reactive levels and traditional symbolic AI at their highest levels, while relaxing the time constraints on planning and world modeling.
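The agent paradigm described above is easy to state in code. Below is a minimal sketch of the perceive-decide-act loop: the one-dimensional environment and the trivial policy are invented for illustration, and any method (symbolic rules, a neural network, or anything else) could sit behind the decide function.

```python
# A minimal intelligent agent loop: perceive, decide, act, repeat.

class Environment:
    """A toy world: the agent walks along a line toward a goal position."""
    def __init__(self, goal=5):
        self.position = 0
        self.goal = goal

    def percept(self):
        return self.goal - self.position  # signed distance to the goal

    def apply(self, action):
        self.position += action

def decide(percept):
    """A trivial policy: step toward the goal."""
    return 1 if percept > 0 else -1 if percept < 0 else 0

env = Environment()
for step in range(20):
    p = env.percept()
    if p == 0:
        print(f"goal reached after {step} steps")
        break
    env.apply(decide(p))
```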
Basic Applications
Basic applications of artificial intelligence can be divided into four main areas:
Perception
Refers to the human ability to perceive stimuli from the environment through the senses. Simply put, it covers abilities such as seeing, hearing, speaking, reading, and writing. Replicating human perceptual abilities is one of AI’s current major focuses, including:
- “Seeing”: Computer Vision, Image Recognition, Face Recognition, Object Detection.
- “Hearing”: Sound Recognition.
- “Reading”: Natural Language Processing (NLP), Speech-to-Text.
- “Writing”: Machine Translation.
- “Speaking”: Sound Generation, Text-to-Speech.
Cognition
Refers to the process and ability by which humans understand information and acquire knowledge through learning, judgment, analysis, and other mental activities. Imitating and learning human cognition is the second major focus area of AI, mainly including:
- Analytical Recognition Ability: For example, medical image analysis, product recommendation, spam detection, legal case analysis, crime detection, credit risk analysis, and consumer behavior analysis.
- Predictive Ability: For example, AI-driven predictive maintenance and intelligent natural disaster prediction and prevention.
- Judgment Ability: For example, AI playing Go, autonomous driving, healthcare fraud detection, and cancer diagnosis.
- Learning Ability: For example, machine learning, deep learning, reinforcement learning, and other learning methods; a minimal reinforcement learning example follows this list.
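As a concrete instance of the learning ability mentioned above, here is a minimal tabular Q-learning sketch, one of the simplest reinforcement learning algorithms. The five-state chain environment and all hyperparameters are invented for illustration.

```python
# Minimal tabular Q-learning on a toy five-state chain: the agent learns
# to step right to reach a reward in the last state.
import random

N_STATES = 5          # states 0..4; reward upon reaching state 4
ACTIONS = [-1, +1]    # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: move Q(s, a) toward reward + discounted best future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy should step right (+1) from every non-goal state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```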
Creativity
Refers to the human ability to generate new ideas, discoveries, methods, theories, and designs, and to create new things. It arises from the interplay of many factors, such as knowledge, intellect, ability, personality, and the subconscious. In this area humans still far outpace AI, but AI is trying to catch up, mainly in fields such as AI composition, AI poetry, AI novels, AI painting, and AI design.
Wisdom
Refers to a profound understanding of the truth of people, events, and things, including the abilities to seek truth, discern right from wrong, and guide humans toward meaningful lives. This area involves human self-awareness, self-knowledge, and values; it is a part that AI has not yet touched and is also one of the most challenging parts of human intelligence to replicate.