Understanding The Nature And Limits Of Artificial Intelligence

Abstract

One of the core theoretical issues in current artificial intelligence research is the problem of “understanding” in AI. For most computational simulations, human beings cannot attain transparency in the traditional epistemological sense: from the standpoint of technical implementation in computational science, the sheer scale and complexity of the computation make it impossible to review every computational step, leaving a blind spot of opacity in human understanding. Building on the distinction between cognitive opacity and epistemic opacity, this article argues that the opacity of machine cognition can result in a lack of “understanding.” Analyzing the extent of, and the requirements for, human understanding of machines from the perspective of epistemic transparency allows a relatively precise and in-depth discussion of the nature and limits of machine understanding. It clarifies the characteristics of machine epistemology and offers an illuminating path toward grasping the essential connections and differences between artificial intelligence and human intelligence.

Author: Dong Chunyu, Professor at the Center for Value and Cultural Studies, Beijing Normal University

Excerpt from: “Social Sciences in China” Issue 5, 2023

This article is included in “Social Science Digest,” Issue 4, 2024


Few topics are more prominent in current academic discussion, at home and abroad, than big data and artificial intelligence (AI), including the ethic of goodness that people “should” embed in algorithms. But how can such a requirement be implemented? If we lack an accurate judgment of the nature and limits of AI, such aspirations can only remain superficial. Understanding the essence of AI in depth is therefore a fundamental task. The question of the essence of AI is the question of the boundary between humans and machines, and drawing that boundary ultimately depends on the standard of demarcation. The academic community currently tends to locate that standard chiefly in the problem of “understanding.” This article analyzes the extent of, and the requirements for, human understanding of machines from the perspective of epistemic transparency, and on that basis offers a more in-depth discussion of the nature and limits of machine understanding.

Cognitive Transparency and Epistemic Opacity

Since antiquity, philosophers have presumed, regarding human cognition, that there is a realm of phenomena in which nothing is hidden. Later, the philosophy of cognition systematically extended this view to the realm of the mind, making transparency (luminosity) a core concept of epistemology: the various states of the mind are self-evident to us. Put simply, “I know that I know.” Transparency in the philosophy of cognition thus concerns the relation between the cognizing subject (“we humans”) and the subject’s own cognitive abilities and mental states; it emphasizes the “transparent” relation between the human mind and human cognition. On the whole, people have implicitly treated cognitive transparency as a precondition of all cognitive activity, a reflection on our cognitive abilities and the intelligent mind itself. It is this implicit presupposition that is now being shaken, and the fundamental change stems from the rise of “machine intelligence.”

It is well known that the electronic computer is in essence an extension of human mental powers, and its invention has prompted people to view the relationship between humans and machines, AI in particular, with greater caution. A fundamental question is whether machine cognition faces a transparency problem analogous to the one faced by human cognition. We therefore first need to clarify the distinction and connection between the concept of transparency in the philosophy of cognition and the concept of epistemic transparency in computational science.


In the early 21st century, Paul Humphreys used the term epistemic opacity to characterize computer simulation in his book “Extending Ourselves.” He argued that “if the cognizing subject X does not understand all the epistemic elements related to this process at time t, then the process is epistemically opaque to subject X at time t.” It is not difficult to see that epistemic opacity, so defined, is no longer confined to reflection on the nature of the cognizing subject’s own mind, as cognitive opacity is; it concerns the cognitive process as a whole, taking in the subject and object of cognition, the process of cognition, and the methods involved.

Humphreys went on to explore the nature and classification of epistemic opacity. First, one must consider the role of the cognizing subject X in the cognitive process. This part overlaps considerably with discussions in the philosophy of cognitive science and refers mainly to the reflective dimension of the human “mind.” Under the conditions of big data and AI, however, machine cognition intervenes deeply in the cognitive process, so the cognizing subject X is in fact a composite “agent” that includes humans as well as computers and AI; this article focuses on the part played by computers and AI. Second, another source of opacity closely related to epistemic opacity is the so-called patent problem (opacity arising from the proprietary protection of algorithms), but this opacity, caused by human factors, lies at a non-epistemological level. Finally, much of current AI technology and machine learning takes the form of particular programming languages, algorithm designs, and the like, which means we must distinguish experts from the technically untrained; moreover, even experts, as professionals, find it difficult to avoid data biases.

To sum up, epistemic opacity arising from the cognizing subject’s lack of knowledge is not the main concern of this article. The article discusses epistemic opacity primarily from the side of digital objects of cognition such as computational technology and algorithms, including the differences between AI and human cognition, a question closely bound up with traditional philosophical issues such as scientific representation, computationalism, and reductionism. Below, this problem is analyzed from the ontological and epistemological perspectives at the level of theory, and from the operational perspective at the level of technical implementation.

The Opacity of Machine Cognition: Principles and Technical Implementation

(1) Theoretical Opacity

Theoretical opacity is closely related to the problem of representation in cognitive science. Machine intelligence developed along the logic-based pathway reduces a machine’s capacity for understanding to the symbolic representation and formalization of the real world. At bottom this rests on the nature of computation, that is, on the specialization and symbolization inherent in the ancient project of “mathematization,” which ties the transparency of cognition closely to so-called “scientific representation.” Many today, holding to reductionism, advance a strong program of computationalism, asserting that the entire world is completely governed by algorithms; anti-computationalists, however, are deeply skeptical of this view.

Generally speaking, computation is discrete: concrete operations are always confined to a certain range of data. The continuity hypothesis of computation, by contrast, holds that a computational process should cover every value within a region, with no gaps between any two data points. If computation is continuous, then one must in principle grant the possibility of a complete grasp of the entire computational process, and the theoretical opacity of understanding becomes a pseudo-problem. If computation is discrete, however, the finite accuracy of measured data and the problems of expansion or overflow pose a metaphysical challenge to the continuity and consistency of the theory, and the computability of the machine, as a special kind of existence in the physical world, is likewise affected by this argument. Once the computational nature of machines is affected in this way, the subject’s understanding of them may no longer possess theoretical transparency.
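By way of illustration, the following minimal Python sketch shows how discreteness, limited accuracy, and overflow appear in ordinary machine arithmetic; the particular values are chosen only for the example.

```python
import sys

# Finite precision: the reals are approximated by a discrete grid of floats.
print(0.1 + 0.2 == 0.3)            # False: the sum is stored as 0.30000000000000004
print(sys.float_info.epsilon)      # gap between 1.0 and the next float, about 2.22e-16

# Overflow: fixed-width representations have a hard boundary.
print(1e308 * 10)                  # inf: the result spills over the representable range

# Order-dependence of rounding: ideal, continuous arithmetic would give 1.0 both times.
print((1.0 + 1e16) - 1e16)         # 0.0 (the 1.0 is absorbed and lost)
print(1.0 + (1e16 - 1e16))         # 1.0
```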

Beyond issues related to computation itself, symbolic representation also involves a broader topic concerning the relationship between models and reality.

On the traditional view, models are the bridge connecting us to the world, yet the construction of theoretical models relies on simplification or idealization, which means “generalizing from the particular.” What, then, is the relation between theoretical models and the concrete world? The most important relation emphasized here is “isomorphism.” In its mathematical sense, an isomorphism is a one-to-one correspondence between the elements of one set and the elements of another. Do theoretical models and real entities stand in an “isomorphic” relation? Do human feelings and perceptions? These questions involve not only the self-mind, other minds, and intersubjectivity, but also the knowability or unknowability of the perceived physical world. Such an isomorphic relation opens various pathways by which the human mind can reach the world, yet the relation is also limited.
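As a toy illustration of the mathematical sense just given, the Python sketch below checks whether a mapping between two finite sets is a one-to-one correspondence; the sets and the mapping are arbitrary examples, and a genuine structural isomorphism would further require that relations among elements be preserved.

```python
def is_bijection(mapping, domain, codomain):
    """Return True if `mapping` pairs every element of `domain`
    with exactly one element of `codomain`, and vice versa."""
    if set(mapping) != set(domain):      # every element of the domain must be mapped
        return False
    images = list(mapping.values())
    return len(images) == len(set(images)) and set(images) == set(codomain)

# A toy "model" of three weather states and a toy "world" of three symbols.
model = {"sunny", "cloudy", "rainy"}
world = {"S", "C", "R"}

print(is_bijection({"sunny": "S", "cloudy": "C", "rainy": "R"}, model, world))  # True
print(is_bijection({"sunny": "S", "cloudy": "S", "rainy": "R"}, model, world))  # False: two states collapse
```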

In summary, the isomorphic relation between theoretical models and the world is a necessary epistemological precondition of scientific representation, and as such it is undoubtedly significant for machine learning and AI built on computational models and algorithms.

The discussion above of discreteness and continuity, simplification and abstraction, and isomorphism in scientific (model) representation helps clarify the prospects for developing AI based on computational models and algorithms: in terms of theoretical principle, rational symbolic representation leaves ample room for raising AI’s level of intelligence, yet its limitations are hard to overcome at a fundamental level.

First, judging from the current processes and mechanisms of machine learning, improvements in AI depend on the support of big data technology and are typically the result of training on large quantities of “fed” data.

Second, the patterns machines extract from big data are statistical correlations rather than causal relations. Causality, moreover, should be understood in terms of both causal effects and causal mechanisms: even if the causal effect between A and B is empirically verifiable, the intermediate links, the mechanisms C and D inserted between them, may remain unclear, in which case one can hardly claim a full understanding of the whole course of events. As an “ontological” account of causal effects, the causal mechanism, which reveals through structural models the process by which causes produce their results, raises a further requirement for the transparency of understanding discussed in this article: if the earlier definition of epistemic transparency concerns grasping the state of things at a given moment, the problem of causal mechanisms poses a deeper, diachronic requirement on the transparency of understanding. In short, if machines cannot effectively distinguish correlation from causation, and cannot grasp the specific mechanisms of causation, they will never achieve a qualitative leap in “understanding” the world.
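A small simulated example may make the gap between correlation and mechanism vivid; the variables and numbers below are hypothetical, chosen only to show that a strong statistical association can arise where neither variable acts on the other.

```python
import random

random.seed(0)

# Hypothetical confounder Z causes both A and B; A does not cause B.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
a = [zi + random.gauss(0, 0.3) for zi in z]
b = [zi + random.gauss(0, 0.3) for zi in z]

def corr(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

# A and B are strongly correlated (roughly 0.9 here), yet the data alone
# do not reveal the mechanism: intervening on A would not change B.
print(round(corr(a, b), 2))
```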

Furthermore, humans possess indeterminate qualities such as “free will,” which also pose major obstacles to the development of machine intelligence: a technological pathway based on symbolic representation can hardly escape the confines of computationalism, and trying to reach the non-rational shore by rationalist routes seems paradoxical.

Finally, the cognitive starting point of human beings may accord less with Locke’s “blank slate” than with Kant’s “synthetic a priori judgments”: an innate capacity for understanding the world, carried in our DNA, that distinguishes us from other species. Machine cognition, by contrast, requires both data feeding and algorithmic support, and the comparison of carbon-based with silicon-based life makes the difference in cognitive premises all the more evident.

The discussion so far has concerned the working principles of machine intelligence and their theoretical limitations; we turn next to the problem of opacity at the level of technical implementation.

(2) Epistemic Opacity in Technical Implementation

Opacity at the level of technical implementation relates first of all to the data themselves. Transparency of understanding plainly implies a comprehensive grasp of the object of understanding; yet even with today’s relatively mature big data methods, obtaining so-called “global data” about things is simply not achievable. From the algorithmic perspective, the problems of epistemic opacity are even more pronounced.

1. The Opacity Problem Caused by Complex Hierarchical Relationships in Algorithms

The challenge in understanding machine algorithms is how to comprehend results produced by a large number of simple modules working in concert. First, the internal decision logic of current algorithms changes as the system undergoes “deep learning” on training data. Second, the multi-layer neural networks that play such a large role in machine learning are themselves divided into many levels, and the relations between these levels pass from quantitative change to qualitative change, a process that cannot be presented in a visualized or idealized form. This means that, from the standpoint of practical operation and measurement accuracy, the epistemic opacity imposed by multi-layer neural network computation is difficult to overcome.
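A rough sense of the scale involved can be gained by counting the parameters of even a modest fully connected network; the layer sizes below are arbitrary assumptions, but the arithmetic shows why tracing each unit’s contribution to an output quickly becomes impractical.

```python
# Parameter count of a fully connected network: a layer of size n_out reading
# a layer of size n_in contributes n_in * n_out weights plus n_out biases.
def parameter_count(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# An arbitrary, modest architecture: 784 inputs, three hidden layers, 10 outputs.
print(parameter_count([784, 512, 512, 256, 10]))        # 798474 individually trained numbers

# Doubling the hidden widths multiplies the dominant terms several times over.
print(parameter_count([784, 1024, 1024, 512, 512, 10]))
```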

2. The Impact of Time Complexity in Computation

In computer science, many algorithms and computations must take time into account. When the cost of a computation grows non-linearly with the size of the problem, the time required can become so great that the computation is in practice “inoperable.” The limits that time complexity places on computation are therefore another source of epistemic opacity.
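Simple arithmetic illustrates the point; the assumed step time (one nanosecond) and the problem sizes below are chosen only to show how quickly exponential growth outruns any feasible review.

```python
# Assume, for illustration, one elementary step per nanosecond.
STEP_SECONDS = 1e-9
SECONDS_PER_YEAR = 3.156e7

def years_needed(steps):
    """Convert a number of elementary steps into years of computation."""
    return steps * STEP_SECONDS / SECONDS_PER_YEAR

for n in (30, 60, 90):
    print(f"n={n:3d}  n^2: {years_needed(n**2):.1e} years   2^n: {years_needed(2**n):.1e} years")

# At n=30 the exponential case finishes in about a second; at n=90 it needs
# roughly 3.9e10 years, longer than the age of the universe, while the
# quadratic case remains negligible throughout.
```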

Limits and Significance

Whether machine intelligence will truly emerge remains one of the great unresolved questions of our time. At present, no single research approach (formal logic, functionalism, naturalism, or the neural-hierarchy approach) appears theoretically complete on its own, which has prompted scientists to explore a “composite path.”

First, cognitive scientists have discussed machine consciousness in depth by analogy with the human brain, and their general consensus is that “understanding” is the foundation of intelligence. As argued above, however, the growth of data and the irreducibility of ever more complex hierarchical and procedural computation make it nearly impossible to comprehend machines completely. Hence, “in systems where computational irreducibility occurs, the most effective program for predicting the future state of the computational system is to let the system evolve itself.” This implies that machines have their own rules of evolution; we cannot demand that humans understand every influencing factor and every computational detail in computer simulation and machine cognition. The opacity of understanding may be one of the fundamental characteristics of computational science for human subjects.


Second, beyond the dilemma of “understanding,” we must also reckon with the “explanatory gap” discussed in the philosophy of mind: no bridging law seems available to connect our knowledge of mental states with our knowledge of the physical world, suggesting that the knowledge we use to describe the external world is ill-suited to describing mental states. For the construction of artificial consciousness, does this dilemma likewise imply that an artificial intelligence homologous with human intelligence cannot be built?

Finally, the novelty of the computational methods that computers have brought is gradually displacing humanity from its central position in epistemology and complicating the meaning of “understanding.” First, as regards the party in whom the problem of understanding arises, attention initially focused on the individual’s “self-awareness,” the “transparency problem in cognitive science” discussed earlier in this article. Second, understanding between individuals raises the problem of “other minds,” that is, how one person can infer another’s intentions, which presupposes some understanding of those intentions; in contemporary philosophy this has developed into the problem of intersubjectivity. Unlike the traditional Western notion of subjectivity, which starts from the isolated individual, intersubjectivity asks how different subjects, once traditional objectivity has been set aside, can achieve “unity” or “co-presence” through linguistic communication. Different subjects engaged in the same cognitive process can form equal and symbiotic relationships through continuous communication and mutual understanding, ultimately reaching consensus in their interpretive engagement with the world, and this interpretive unity among subjects remains open and dynamically evolving. Following this line of thought, we can extend the problem of epistemic transparency to the relationship between humans and machines.

First, since the cognizing subject has entered the era of the human-machine hybrid “intelligent agent X,” the relation of understanding is no longer confined to interactions between humans; it must also encompass the dimensions between humans and machines and between machines and machines. Second, borrowing from research on intersubjectivity, we must consider the more complex scenarios among humans and machines: just as humans achieve publicness in interaction through linguistic dialogue, human-machine and machine-machine relations also require explicit symbolic representation if mutual understanding is to be achieved. Because of the explanatory gap, tacit knowledge, and related problems, however, it is extremely difficult for machines fully to understand human intentions, and even unity between machines is hard to reach. The character of knowledge, including tacit knowledge, and the public nature of mutual understanding are embedded in the sociality of understanding and its ever-changing possibilities. Even among machines, therefore, there will be the uncertainty and diversification inherent in emergence, together with the need for the social, collaborative evolution of machines. This opens up another important field of AI research: the future challenge of the social development of artificial intelligence formed through cooperation among individual intelligences, in essence the question of the possible socialization of artificial intelligence.

Conclusion

This article’s discussion of the boundary between humans and intelligent machines can help us reflect on the future development of artificial intelligence. First, the essence of artificial intelligence has always been the simulation of intelligent human behavior, yet the differences remain substantial; non-rational factors such as free will, for instance, pose significant obstacles. Second, in considering the complex intelligent networks formed by human-machine relations, particular attention must be paid to the openness, flexibility, and diversity produced by the “co-evolution” of machine groups, for this uncertainty is also a challenge we must face.

In sum, if the future development of artificial intelligence must be measured against the intentionality, purposefulness, and autonomy exhibited by human intelligence, then the power of rationality may well have been overestimated, and the dream of a pure machine mind remains distant. We shall have to wait and see.
