What Is Artificial Intelligence?

A few days ago, a "dog" (AlphaGo, whose Chinese nickname is "Alpha Dog") became famous worldwide through a game of Go. So, at the request of a few students, I would like to share with everyone what is called artificial intelligence.

The definition of artificial intelligence can be divided into two parts: "artificial" and "intelligence." The term "artificial" is relatively easy to understand and not very controversial, although one can still ask what can be made by human effort, or whether human intelligence is itself high enough to create artificial intelligence. Generally speaking, however, an "artificial system" is simply a system made by humans, in the ordinary sense of the phrase.

As for what "intelligence" is, many more questions arise, since it touches on issues such as consciousness, the self, and the mind (including the unconscious mind). It is widely held that the only intelligence humans understand is their own. Yet our understanding of our own intelligence is very limited, and we know little about the essential elements that constitute it, which makes it difficult to define what "artificially" created "intelligence" would be. Research in artificial intelligence therefore often involves studying human intelligence itself. Other forms of intelligence, such as that of animals or of other artificial systems, are also generally treated as related research topics in artificial intelligence.

Artificial intelligence has received increasing attention in the field of computer science. It has been applied in robotics, economic and political decision-making, control systems, and simulation systems.

Professor Nils Nilsson of the Stanford University Artificial Intelligence Research Center defined artificial intelligence as "the discipline concerned with knowledge: how to represent knowledge, how to acquire knowledge, and how to use knowledge." Professor Patrick Winston of the Massachusetts Institute of Technology holds that "artificial intelligence is the study of how to make computers perform tasks that only humans could do in the past." These statements reflect the basic ideas and scope of the field. That is, artificial intelligence studies the laws of human intelligent activity, constructs artificial systems with a certain degree of intelligence, and investigates how to make computers complete tasks that previously required human intelligence; in other words, it studies how computer hardware and software can be used to simulate certain intelligent human behaviors, along with the underlying theories, methods, and techniques.

Artificial intelligence is a branch of computer science. Since the 1970s it has been counted among the world's three leading cutting-edge technologies (space technology, energy technology, and artificial intelligence), and it is likewise regarded as one of the three cutting-edge technologies of the 21st century (genetic engineering, nanoscience, and artificial intelligence). This is because it has developed rapidly over the past thirty years, has been widely applied in many fields, and has achieved fruitful results. Artificial intelligence has gradually become an independent branch, with a system of its own in both theory and practice.

Artificial intelligence is the study of how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning). It mainly includes the principles by which computers can achieve intelligence, the building of computers that resemble the intelligence of the human brain, and the effort to bring computers to higher-level applications. Artificial intelligence involves disciplines such as computer science, psychology, philosophy, and linguistics; its scope extends well beyond computer science and touches almost all of the natural and social sciences. Artificial intelligence and cognitive science stand in a relation of practice to theory: artificial intelligence sits at the technical, applied level of cognitive science and is an application branch of it. From a cognitive viewpoint, artificial intelligence is not limited to logical thinking; imaginal thinking and inspirational thinking must also be considered if breakthroughs are to be made. Mathematics, often regarded as the foundational science of many disciplines, has also entered the fields of language and thought, and artificial intelligence must likewise borrow mathematical tools. Mathematics contributes not only standard logic and fuzzy mathematics; as it enters artificial intelligence, the two fields will promote each other's faster development.

Research Value


For example, heavy scientific and engineering calculations that once had to be done by human brains can now be completed by computers, faster and more accurately than the brain can manage, so contemporary people no longer regard such calculations as "complex tasks that require human intelligence." Evidently the definition of complex work changes as times and technology advance, and the concrete goals of the science of artificial intelligence evolve with them: the field keeps making new progress while moving toward more meaningful and more challenging goals.

Typically, the mathematical foundations of "machine learning" are statistics, information theory, and control theory, together with other, non-mathematical disciplines. Machine learning of this kind relies heavily on "experience." Like an ordinary person, the computer must continually acquire knowledge and learning strategies from the experience of solving similar problems, apply that experiential knowledge to new problems, and accumulate new experience. We can call this way of learning "continuous learning." Humans, however, can also create, a capability known as "jump learning," which in certain situations is called "inspiration" or "insight." Insight has always been the hardest thing for computers to learn. More precisely, computers struggle with qualitative changes that do not rest on quantitative changes; they find it hard to jump directly from one "quality" to another, or from one "concept" to another. For this reason, "practice" here is not the same as human practice, which includes both experience and creation.
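The "continuous learning" described above, accumulating solved cases and reusing them on similar problems, can be sketched in a few lines. The class and all names below are purely illustrative, not taken from any library:

```python
# Toy sketch of "continuous learning": a learner that stores solved
# cases as experience and answers new problems by recalling the most
# similar past case. All names here are invented for illustration.

class ExperienceLearner:
    def __init__(self):
        self.cases = []  # accumulated (problem, solution) experience

    def learn(self, problem, solution):
        """Store a solved case for later reuse."""
        self.cases.append((problem, solution))

    def solve(self, problem):
        """Reuse the solution of the most similar past problem."""
        if not self.cases:
            return None  # no experience yet
        # "similarity" here is just numeric distance
        nearest = min(self.cases, key=lambda c: abs(c[0] - problem))
        return nearest[1]

learner = ExperienceLearner()
learner.learn(2, "small")
learner.learn(100, "large")
print(learner.solve(3))   # nearest past case is 2  -> "small"
print(learner.solve(90))  # nearest past case is 100 -> "large"
```

Each call to `learn` adds experience, so answers improve as cases accumulate; what the sketch deliberately cannot do is invent a solution unlike any past case, which is exactly the "jump learning" the text says computers find hard.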

This is what intelligence researchers dream of.

In 2013, data researcher S.C WANG, working at the Dijing Data Research Center, developed a new data-analysis method that yielded a new way of studying the properties of functions. The author found that this method gives computers a way to learn "creation": in essence, it offers a fairly effective path for modeling human "creativity." That path is given by mathematics, which ordinary people do not command but computers can. As a result, computers are not only good at calculation but, through that very proficiency, can become good at creation as well. Computer scientists should decisively strip computers of any overly comprehensive operational capability in this "excellence at creation"; otherwise computers may one day "turn against" humanity.

When reflecting on the derivation process and mathematics of the new method, the author expanded their understanding of thinking and mathematics. Mathematics is concise, clear, reliable, and highly structured. Throughout the history of mathematics, the brilliance of mathematical masters’ creativity shines. This creativity is presented in various mathematical theorems or conclusions, and the greatest characteristic of mathematical theorems is that they are built on some basic concepts and axioms, expressed in a structured language that contains rich information and logical structure. It can be said that mathematics is the discipline that most purely and straightforwardly reflects (at least one type of) creativity model.

Scientific Introduction

Practical Applications

Machine vision, fingerprint recognition, facial recognition, retinal recognition, iris recognition, palm print recognition, expert systems, automatic planning, intelligent search, theorem proving, games, automatic program design, intelligent control, robotics, language and image understanding, genetic programming, etc.

Disciplinary Scope

Artificial intelligence is an interdisciplinary field, lying at the intersection of the natural sciences and the social sciences.

Involved Disciplines

Philosophy and cognitive science, mathematics, neurophysiology, psychology, computer science, information theory, control theory, indeterminacy theory.

Research Scope

Natural language processing, knowledge representation, intelligent search, reasoning, planning, machine learning, knowledge acquisition, combinatorial scheduling problems, perception problems, pattern recognition, logic programming, soft computing, imprecise and uncertain management, artificial life, neural networks, complex systems, genetic algorithms.

Consciousness and Artificial Intelligence

Artificial intelligence, by its nature, is the simulation of the information processes of human thinking.

Simulating human thought can be approached in two ways: one is structural simulation, mimicking the structural mechanisms of the human brain to create “brain-like” machines; the other is functional simulation, temporarily setting aside the internal structure of the brain and simulating based on functional processes. The emergence of modern electronic computers is a simulation of the functional thinking processes of the human brain.

Weak artificial intelligence is rapidly developing, especially after the 2008 economic crisis, when the U.S., Japan, and Europe hoped to achieve re-industrialization through robots. Industrial robots are developing faster than ever before, further driving breakthroughs in weak artificial intelligence and related fields. Many tasks that previously required human involvement can now be accomplished by robots.

In contrast, strong artificial intelligence is currently at a bottleneck and awaits further effort from scientists and from humanity as a whole.

Development Stages

In the summer of 1956, a group of visionary young scientists, including McCarthy, Minsky, Rochester, and Shannon, gathered to study and discuss a series of issues related to simulating intelligence with machines, and they first proposed the term “artificial intelligence,” marking the official birth of this emerging discipline. IBM’s “Deep Blue” computer defeating the human world chess champion was a perfect demonstration of artificial intelligence technology.

Since the formal proposal of the artificial intelligence discipline in 1956, significant progress has been made over more than fifty years, evolving into a broad interdisciplinary and cutting-edge science. Generally speaking, the goal of artificial intelligence is to enable machines to think like humans. If we hope to create a machine that can think, we must understand what thinking is, and further, what wisdom is. What kind of machine can be considered wise? Scientists have already created cars, trains, airplanes, radios, etc., which mimic the functions of our bodily organs, but can we mimic the functions of the human brain? So far, we only know that the thing inside our skull is an organ composed of billions of nerve cells, and we know very little about it; mimicking it may be the most challenging task in the world.

When computers emerged, humanity finally had a tool with which to simulate human thinking, and countless scientists have since worked toward that goal. Today, artificial intelligence is no longer the preserve of a few scientists; almost every computer science department in universities worldwide has researchers studying it, and computer science students must take courses in it. Thanks to everyone's relentless efforts, computers now seem quite intelligent: in May 1997, for example, IBM's Deep Blue defeated the chess champion Garry Kasparov. People may not notice that in some areas computers already help humans perform tasks once reserved for humans, and do so with a speed and accuracy that benefit humanity. Artificial intelligence remains a frontier discipline of computer science, and many programming languages and other pieces of computer software owe their existence to its progress.

Technical Research

The primary material foundation for studying artificial intelligence, and the platform on which artificial intelligence technology can be realized, is the computer, and the history of artificial intelligence is closely bound up with the history of computer science and technology. Beyond computer science, artificial intelligence also draws on information theory, control theory, automation, bionics, biology, mathematical logic, linguistics, medicine, and philosophy. The main topics of artificial intelligence research include knowledge representation, automated reasoning and search methods, machine learning and knowledge acquisition, knowledge-processing systems, natural language understanding, computer vision, intelligent robotics, automatic program design, and more.

Research Methods

Currently, there is no unified principle or paradigm guiding artificial intelligence research. Many researchers have disagreements on several issues. Some long-standing questions that remain unresolved are: Should artificial intelligence be simulated from psychological or neurological aspects? Or, similar to how ornithology relates to aerospace engineering, is human biology irrelevant to artificial intelligence research? Can intelligent behavior be described using simple principles (such as logic or optimization), or must a large number of completely unrelated problems be solved?

Can intelligence be expressed with high-level symbols, such as words and ideas, or does it require "sub-symbolic" processing? John Haugeland, who proposed the term GOFAI (Good Old-Fashioned Artificial Intelligence), also suggested that artificial intelligence should rather be called synthetic intelligence, a term later adopted by some non-GOFAI researchers.

Brain Simulation

Main entry: Cybernetics and Computational Neuroscience

From the 1940s to the 1950s, many researchers explored the connections between neurology, information theory, and cybernetics. Some of them built preliminary intelligences with electronic networks, such as W. Grey Walter's turtles and the Johns Hopkins Beast. These researchers often gathered at technical meetings at Princeton University and at the Ratio Club in the UK. By 1960 most had abandoned this approach, although its principles were revisited in the 1980s.

Symbol Processing

Main entry: GOFAI

When digital computers were successfully developed in the 1950s, researchers began to explore whether human intelligence could be reduced to symbol processing. Research was concentrated at Carnegie Mellon University, Stanford University, and the Massachusetts Institute of Technology, each with its own independent research style. John Haugeland referred to these methods as GOFAI (Good Old-Fashioned Artificial Intelligence). In the 1960s, symbolic methods achieved great success at simulating high-level thinking in small demonstration programs, and approaches based on cybernetics or neural networks were relegated to a secondary position. Researchers of the 1960s and 1970s were convinced that symbolic methods would eventually succeed in creating machines with strong artificial intelligence, which was also their goal.

Cognitive simulation: Herbert Simon and Allen Newell studied human problem-solving abilities and attempted to formalize them, laying foundations for artificial intelligence as well as for cognitive science, operations research, and management science. Their research team used the results of psychological experiments to develop programs that simulated human problem-solving methods. This tradition, continued at Carnegie Mellon University, peaked in the 1980s with Soar. Unlike Allen Newell and Herbert Simon, John McCarthy believed that machines do not need to simulate human thought; instead they should seek the essence of abstract reasoning and problem-solving, regardless of whether people use the same algorithms. His laboratory at Stanford worked on solving various problems with formal logic, including knowledge representation, intelligent planning, and machine learning. Researchers devoted to logical methods also emerged at the University of Edinburgh, which drove the development of the programming language Prolog and the science of logic programming elsewhere in Europe. The "anti-logic" researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving the hard problems of computer vision and natural language processing required specialized approaches; they argued that no simple, universal principle (such as logic) could capture all intelligent behavior. Roger Schank described their "anti-logic" approach as "scruffy." Common-sense knowledge bases (such as Doug Lenat's Cyc) are examples of "scruffy" AI, since they must be built by hand, one complex concept at a time. Around 1970, with the advent of large-memory computers, researchers from these three traditions began to build knowledge into application software.
This "knowledge revolution" led to the development of expert systems, the first truly successful form of artificial intelligence software. The knowledge revolution also made people realize that even many simple artificial intelligence programs may require substantial amounts of knowledge.
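The expert systems born of this knowledge revolution rest on a simple mechanism: knowledge stored as if-then rules, plus forward chaining to derive new facts. A minimal sketch follows; the rule base is invented purely for illustration:

```python
# Minimal forward chaining, the core inference loop of rule-based
# expert systems: repeatedly fire any rule whose premises are all
# known facts, until no new facts can be derived.

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)  # rule fires: new fact derived
                changed = True
    return facts

# An invented toy rule base.
rules = [
    (["has_feathers"], "is_bird"),
    (["is_bird", "can_fly"], "can_migrate"),
]
derived = forward_chain(["has_feathers", "can_fly"], rules)
print(sorted(derived))
# -> ['can_fly', 'can_migrate', 'has_feathers', 'is_bird']
```

Real expert systems add much more (certainty factors, conflict resolution, explanation facilities), but the knowledge-as-rules idea is the same.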

Sub-symbolic Methods

In the 1980s, symbolic artificial intelligence stagnated, and many believed that symbolic systems could never mimic all human cognitive processes, especially perception, robotics, machine learning, and pattern recognition. Many researchers began to focus on sub-symbolic methods to solve specific artificial intelligence problems.

Researchers working bottom-up, on interface agents, on embedded environments (robotics), on behaviorism, and in the new AI and robotics fields, such as Rodney Brooks, rejected symbolic artificial intelligence and concentrated on basic engineering problems such as robot movement and survival. Their work revived the viewpoints of the early cybernetics researchers and reintroduced control theory into artificial intelligence. This parallels the representational arguments of cognitive science: higher intelligence requires individual representations (such as movement, perception, and imagery). In the 1980s, David Rumelhart and others reintroduced neural networks and connectionism, which, along with other sub-symbolic methods such as fuzzy control and evolutionary computation, belong to the research domain of computational intelligence.
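The connectionist line of work can be illustrated with its simplest unit, a single perceptron trained with the classic error-correction rule. This is a textbook toy, not a reconstruction of any particular historical system:

```python
# A single perceptron, the basic connectionist unit: weighted inputs,
# a threshold, and an error-driven weight update (the perceptron
# learning rule). Trained here on logical AND as a textbook example.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # perceptron rule: nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # -> [0, 0, 0, 1]
```

Knowledge here lives in the numeric weights rather than in explicit symbols, which is precisely the sub-symbolic contrast the paragraph draws.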

Statistical Methods

In the 1990s, artificial intelligence research developed sophisticated mathematical tools for solving specific sub-problems. These tools are genuinely scientific, in the sense that their results are measurable and verifiable, and they are responsible for much of artificial intelligence's success. The shared mathematical language also permits collaboration with established disciplines (such as mathematics, economics, or operations research). Stuart J. Russell and Peter Norvig describe these advances as nothing less than a "revolution" and "the victory of the neats." Some critics argue that these techniques focus too narrowly on specific problems and neglect the long-term goal of strong artificial intelligence.

Integrated Methods

An intelligent agent is a system that perceives its environment and acts to achieve its goals. The simplest intelligent agents are programs that solve specific problems; more complex agents include humans and human organizations (such as companies). The paradigm lets researchers study individual problems and find useful, verifiable solutions without committing to a single method. An agent solving a specific problem can use any method that works: some agents use symbolic and logical methods, others use sub-symbolic neural networks or other new approaches. The paradigm also gives researchers a common language for communicating with other fields that likewise use the concept of an abstract agent, such as decision theory and economics. In the 1990s, the intelligent-agent paradigm became widely accepted. Researchers on agent architectures and cognitive architectures design systems to handle the interactions among intelligent agents in multi-agent systems. A system containing both symbolic and sub-symbolic components is called a hybrid intelligent system, and the study of such systems is part of the integration of artificial intelligence systems. Hierarchical control systems provide a bridge between reactive-level sub-symbolic AI and higher-level traditional symbolic AI, while relaxing the time constraints on planning and world modeling. Rodney Brooks's subsumption architecture was an early hierarchical design.
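The perceive-act loop of the agent paradigm reduces to very little code. The following reflex agent, which drives a state value toward a goal, uses invented names and is only a sketch of the idea:

```python
# Minimal sketch of the intelligent-agent paradigm: the agent
# perceives its environment and chooses actions toward its goal.
# This toy reflex agent drives a number toward a target value;
# all names are invented for illustration.

class GoalAgent:
    def __init__(self, goal):
        self.goal = goal

    def act(self, percept):
        """Map the current percept to an action moving toward the goal."""
        if percept < self.goal:
            return +1
        if percept > self.goal:
            return -1
        return 0  # goal reached

def run(agent, state, steps=100):
    """The agent loop: perceive the state, act, repeat."""
    for _ in range(steps):
        action = agent.act(state)
        if action == 0:
            break
        state += action
    return state

print(run(GoalAgent(goal=5), state=0))  # -> 5
```

Note that nothing in the loop cares how `act` is implemented; it could equally be a logical rule base or a neural network, which is exactly why the paradigm accommodates both symbolic and sub-symbolic methods.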

Intelligent Simulation

Simulating machine vision, hearing, touch, sensation, and thinking: fingerprint recognition, facial recognition, retinal recognition, iris recognition, palm print recognition, expert systems, intelligent search, theorem proving, logical reasoning, games, information sensing, and dialectical processing.

Disciplinary Scope

Artificial intelligence is an interdisciplinary field, lying at the intersection of the natural sciences, the social sciences, and the technological sciences.

Involved Disciplines

Philosophy and cognitive science, mathematics, neurophysiology, psychology, computer science, information theory, control theory, indeterminacy theory, bionics, social structure science, and scientific development perspectives.

Research Scope

Language learning and processing, knowledge representation, intelligent search, reasoning, planning, machine learning, knowledge acquisition, combinatorial scheduling problems, perception problems, pattern recognition, logic programming, soft computing, imprecise and uncertain management, artificial life, neural networks, complex systems, genetic algorithms. The most critical challenge remains the shaping and enhancement of machines’ autonomous creative thinking abilities.

Application Fields

Machine translation, intelligent control, expert systems, robotics, language and image understanding, genetic programming, robotic factories, automatic program design, aerospace applications, massive information processing, storage and management, executing tasks that chemical life forms cannot perform, or complex or large-scale tasks, etc.

It is worth mentioning that machine translation is an important branch of artificial intelligence and one of its earliest application fields. Judging by existing results, however, the quality of the translations produced by machine translation systems is still far from the ultimate goal, and translation quality is the key to the success or failure of such systems. The Chinese mathematician and linguist Professor Zhou Haizhong pointed out in his paper "Fifty Years of Machine Translation" that to improve the quality of machine translation, the first problem to solve lies in language itself rather than in program design; relying on a few programs alone to build a machine translation system will certainly not raise translation quality. Moreover, as long as we do not clearly understand how the human brain carries out fuzzy recognition and logical judgment in language, machine translation cannot reach the level of "faithfulness, expressiveness, and elegance."

Safety Issues

Artificial intelligence is still under research, but some scholars believe that granting computers intelligence is very dangerous, as they may rebel against humanity. This risk has been depicted in various films, with the main concern being whether to allow machines to possess autonomous consciousness. Granting machines autonomous consciousness would imply that they have creativity, self-preservation awareness, emotions, and spontaneous behavior similar to or equal to those of humans.

Implementation Methods

When implementing artificial intelligence on a computer, there are two different approaches. One is to use traditional programming techniques to make the system exhibit intelligent effects, without regard to whether the method matches what humans or animals actually do. This is called the engineering approach, and it has produced results in fields such as character recognition and computer chess. The other is the modeling approach, which cares not only about the effect but also requires that the method be similar or identical to the one used by humans or other organisms. Genetic algorithms (GA) and artificial neural networks (ANN) both belong to this type: genetic algorithms simulate the genetic and evolutionary mechanisms of organisms, while artificial neural networks simulate the activity of nerve cells in human or animal brains. For a given intelligent effect, both approaches can usually be used.

Consider game programming. With the engineering approach, the programmer must spell out the program logic for each character in detail. If the game is simple, this is manageable; but as the game grows complex, with more characters and larger activity spaces, the corresponding logic quickly becomes very complicated (growing exponentially), and manual programming becomes burdensome and error-prone. Once an error occurs, the original program must be modified, recompiled, and debugged, and a new version or patch must be shipped to users, which is very troublesome. With the modeling approach, the programmer instead designs an intelligent system (a module) to control each character. The system starts out knowing nothing, like a newborn, but it can learn, gradually adapting to its environment and handling complex situations. It usually makes mistakes at first, but it learns from them and can correct itself in later runs, or at least avoid repeating the same mistake indefinitely, so there is no need to release a new version or patch. Implementing artificial intelligence this way requires the programmer to think in a biological way, which can be harder to begin with; once familiar, however, it can be applied widely. Because this method does not require the characters' activity rules to be specified in detail, it usually saves labor on complex problems.
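As an illustration of the modeling approach, here is a bare-bones genetic algorithm evolving 8-bit strings toward all ones (the "one-max" toy problem). The population size, mutation rate, and generation count are arbitrary choices for the sketch, not recommendations:

```python
import random

# Bare-bones genetic algorithm on the "one-max" toy problem: evolve
# 8-bit strings toward all ones via selection, crossover and mutation.

random.seed(0)  # deterministic run for the example
N_BITS, POP, GENS = 8, 20, 60

def fitness(bits):
    return sum(bits)  # number of ones; 8 is optimal

def mate(a, b):
    cut = random.randrange(1, N_BITS)   # one-point crossover
    child = a[:cut] + b[cut:]
    i = random.randrange(N_BITS)        # mutate one random position
    child[i] ^= random.random() < 0.1   # ...with probability 0.1
    return child

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]            # truncation selection (elitist)
    pop = parents + [mate(random.choice(parents), random.choice(parents))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
print(fitness(best))  # best fitness found (8 = all ones)
```

No rule ever tells the program which bits to set; good solutions emerge from variation and selection, mirroring the evolutionary mechanism the text describes.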

Professional Institutions

United States

1. Massachusetts Institute of Technology

2. Stanford University

3. Carnegie Mellon University

4. University of California, Berkeley

5. University of Washington

6. University of Texas at Austin

7. University of Pennsylvania

8. University of Illinois Urbana-Champaign

9. University of Maryland, College Park

10. Cornell University

11. University of Massachusetts Amherst

12. Georgia Institute of Technology

13. University of Michigan, Ann Arbor

14. University of Southern California

15. Columbia University

16. University of California, Los Angeles

17. Brown University

18. Yale University

19. University of California, San Diego

20. University of Wisconsin-Madison

China

1. Institute of Automation, Chinese Academy of Sciences

2. Tsinghua University

3. Peking University

4. Nanjing University of Science and Technology

5. University of Science and Technology Beijing

6. University of Science and Technology of China

7. Jilin University

8. Harbin Institute of Technology

9. Beijing University of Posts and Telecommunications

10. Beijing Institute of Technology

11. Xiamen University Artificial Intelligence Research Institute

12. Xi’an Jiaotong University Intelligent Vehicle Research Institute

13. Central South University Intelligent Systems and Software Research Institute

14. Xi’an University of Electronic Science and Technology Intelligent Institute

15. Huazhong University of Science and Technology Institute of Image and Artificial Intelligence

16. Chongqing University of Posts and Telecommunications

17. Wuhan University of Engineering

Main Achievements

Human-Machine Chess

From February 10 to 17, 1996, Garry Kasparov defeated "Deep Blue" 4:2.

From May 3 to 11, 1997, Garry Kasparov lost 2.5:3.5 to an improved "Deep Blue."

In February 2003, Garry Kasparov drew "Deep Junior" 3:3.

In November 2003, Garry Kasparov drew "X3D Fritz" 2:2.

Pattern Recognition

Pattern-recognition engines branch into 2D recognition engines, 3D recognition engines, standing-wave recognition engines, and multidimensional recognition engines.

2D recognition engines have yielded fingerprint recognition, portrait recognition, character recognition, image recognition, and license-plate recognition; standing-wave recognition engines have yielded speech recognition; 3D recognition engines have yielded fingerprint recognition.

Automation Engineering

Automatic driving (OSO system)

Printing factory (¥ assembly line)

Falcon system (YOD drawing)

Knowledge Engineering

Researching how to use artificial intelligence and software technology to design, construct, and maintain knowledge systems, with knowledge itself as the processing object.

Expert systems

Intelligent search engines

Computer vision and image processing

Machine translation and natural language understanding

Data mining and knowledge discovery
