Understanding the Nature and Limits of Artificial Intelligence Through Machine Opacity

One of the topics at the forefront of academic discussion, both at home and abroad, is undoubtedly the exploration of issues surrounding big data and artificial intelligence (AI): neural networks, deep learning, algorithmic bias, machine ethics, robot emotions, and so on all receive significant attention. The related research is in full swing, and understanding in these areas has made substantial progress. Yet however broad these research topics are, if we lack an accurate judgment about the essence of AI and its limits, the discussion can only skim the surface. For example, the currently popular view that ‘humans are the measure of algorithms’ reflects an emerging consensus: algorithms should not merely pursue efficiency, accuracy, and precision as if they were heartless calculating devices; they must also be benevolent, embodying a humanistic concern, which means that algorithms must benefit people rather than harm them. People have become aware that ethics and a human-centered logic should be embedded in algorithms, but the fundamental question remains: how can such a noble wish be realized? And can it be realized even in principle? The former question belongs to science, the latter to philosophy. In other words, how to satisfy the requirement of treating humans as humans in AI research is predicated on a profound understanding of the essence of AI and its future developmental trends.
In fact, discussions about the essence of AI began in the 1950s, when the related research commenced, and have now continued for more than half a century. Notable debates include the ‘Turing Test’ and the later ‘Chinese Room’ argument, and three research paths have taken shape: symbolism, behaviorism, and connectionism. To this day, two major schools of thought have emerged in academia: the so-called weak AI school, which holds that machine intelligence can never surpass human intelligence, and the so-called strong AI school, which holds that machine intelligence will eventually surpass human intelligence; Kurzweil has even given a specific date for this moment, 2045, calling it the ‘singularity’. Opinions on the future development of artificial intelligence thus vary widely. The core issue in sorting out these viewpoints is how to view the boundary between humans and machines, and drawing that boundary ultimately depends on a standard of demarcation. Currently, academia tends to locate that standard mainly in the issue of ‘understanding’. Of course, the study of ‘understanding’ itself has a long tradition in the history of Western philosophy, originating in ancient Greek reflections on knowledge; in the modern era, figures such as Locke and Leibniz wrote works on the theme of understanding to elaborate human cognitive abilities. By the early twentieth century, as attention turned to the analysis of language and issues such as the ‘meaning’ and ‘context’ of propositions or theories came to prominence, research on understanding as a cognitive mode distinct from knowledge gradually expanded from a specialized linguistic-analysis perspective into the domain of practical epistemology, integrating the three dimensions of language, mind, and action to explore the relationships among mind, language, reality, and action. In this regard, the hermeneutic viewpoints of Heidegger and Gadamer are particularly noteworthy; they emphasize that understanding is an interpretative activity concerning the meanings of various human experiences, an exchange and convergence of the ‘minds’ of the understander and the understood, and a practical path by which the cognizing subject plans its possible modes of existence and direction of life. This shift in the research perspective on understanding, from the internal dimension of the mind to the external dimension of practice, has significant implications for discussing human-machine and machine-machine relationships. In particular, the philosophy of science, which takes understanding as a core goal, has extended the relatively narrow and specialized issue of scientific understanding into one of the most important research areas in the field, with many representative figures, from the early Carnap and Hempel to more recent writers such as Stephen Grimm and Henk W. De Regt, discussing the conflicts between the subjectivity and objectivity of scientific understanding, the complex relationships between semantics and pragmatics, and so on. Scientific understanding also involves still more complex situations, such as tacit knowledge and related issues.
The study of ‘understanding’ has already formed an extensive and well-developed discourse. This article cannot comprehensively survey all the intellectual resources related to the issue of understanding, nor can it, for the moment, address issues closely tied to computer and artificial intelligence technologies such as natural language processing and semantic analysis. The author mainly analyzes how humans may understand and grasp machines from the perspective of cognitive transparency, using this specific entry point to offer some relatively precise and in-depth discussion of the essence and limits of machine understanding capabilities, with the aim of providing an illuminating path toward further understanding the connections and distinctions between AI and human intelligence. The concept of understanding used later in this article is therefore closely tied to the issue of cognitive opacity and is used in a relatively narrow sense. Its basic meaning is: (1) the understanding of a knowing subject (a human or a machine, i.e., an AI) regarding the various states of the object of knowledge (a human or a machine, i.e., an AI) at a given moment t; (2) the grasp of the specific mechanisms at various stages of the object's development over time, i.e., the understanding of causal mechanisms; (3) with the help of the concept of ‘intersubjectivity’, the extension of the discussion of humans' understanding of humans to the bilateral understanding between humans and machines, as well as the understanding between machines, which in turn raises questions about the collective sociality and evolutionary character of AI itself.

1

Cognitive Transparency and Cognitive Opacity

Since ancient times, there has been a presumption in the philosophical community regarding human cognition: that there exists a realm of phenomena where nothing is hidden. Later, cognitive philosophy systematically extended this viewpoint to the realm of the mind, making transparency (luminosity) a core concept in epistemology. It holds that various states of the mind are self-evident to us; that is, a knowing subject knows its own knowing state—if a cognitive subject knows p, then they know they know p. The various states of our mind are transparent to us, and there exists a special channel through which we can be aware of these states. Once the mind possesses content, we can be aware of that content. Thus, the transparency in cognitive philosophy emphasizes the ‘transparent’ relationship between the human mind and human cognition, and ‘the most appropriate understanding of thought is to view it as a representational structure in the mind and the computational processes operating on these structures’. This fundamental understanding of the human mind is also referred to as the Computational-Representational Understanding of Mind (CRUM), which provides a basic reference and premise for our understanding of transparency in thought and cognition.
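The transparency thesis described here can be stated compactly in the notation of epistemic logic; the following rendering is a standard textbook formalization (the ‘KK principle’, or positive introspection), offered only as a sketch of the idea rather than a formula taken from the literature discussed in this article, with $K_X$ read as ‘the subject X knows that’:

$$ K_X\,p \;\rightarrow\; K_X K_X\,p $$

Cognitive transparency, in this narrow sense, asserts that whenever a mental state carries the content $p$, the subject is in a position to know that it does.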
Of course, not all philosophers assume that human cognition is transparent to the mind. Timothy Williamson argues that not all of our thoughts are like this; that is, not every thought is fully accessible to awareness. Many elements or forms of thought lie outside our familiar cognitive domain and yet play indispensable roles in different contexts. In contrast to transparency, Williamson proposed the concept of ‘opacity’. Its manifestations are not difficult to imagine: in terms of the forms of thought, human thinking contains many ‘mysterious’ phenomena, which we sometimes call intuition, sudden insight, or inspiration. Academic circles tend to classify these modes of thought as ‘non-logical methods’, for which ‘I know that I know’ essentially cannot be achieved. On the one hand, the issue of tacit knowledge mentioned earlier theoretically deepens our grasp of the deep structure of human cognition, namely whether fundamental mechanisms operate beneath our rational reasoning. We may be aware that, in addition to explicit knowledge, tacit knowledge exists; we also care about the level of consciousness at which tacit knowledge primarily occurs, whether pre-conscious or even unconscious. On the other hand, the elusive and highly personal character of tacit knowledge raises the question of what it signifies: can it be regarded as evidence that humans obtain only partial transparency when understanding the world? This layered and demarcated research on cognition has become one of the focal issues requiring further in-depth study.
In cognitive philosophy, the thesis of cognitive transparency is tacitly assumed as a premise of all cognitive activity, and it has complex connections with opacity, truth, knowledge, positive and negative introspection, and so on. Views on it vary widely, as reflected in the intense debates in cognitive science over the past few decades. Overall, if we consider cognition to be transparent, then knowledge can be acquired through reflection of the mind; if we consider cognition to be opaque, then knowledge, as Williamson argues, is limited. However great the differences between these claims, the underlying concern is the same: in the sense of philosophical epistemology, under the premise of the subject-object dichotomy, we cannot overlook our own abilities, that is, the nature of the intelligent mind itself. This is in fact the inevitable result of the human-centered epistemological cage: only intelligent beings like humans are held to be capable of knowing the (objective) world. Thus, the development of philosophical epistemology over the past two thousand years has always revolved around human cognitive abilities, and contemporary philosophy of mind or cognitive philosophy is the latest product of this consistent path. Past discussions of the transparency of cognition were therefore primarily limited to reflections on the human mind itself, that is, to the cognitive transparency described above. A fundamental change in this situation originates from the rise of ‘machine intelligence’, whose emergence expands the connotation and extension of cognitive transparency while complicating the issues.
As is well known, at the beginning of the modern scientific revolution, machines first extended the human sense organs in the form of the telescope. Since the 1940s, with the invention and widespread use of electronic computers, their powerful computational capabilities have given rise to a new science, computational science, which is essentially an extrapolation and enhancement of human intellectual capacity. In the past two decades in particular, with the emergence of big data and machine learning technologies, the expansion of machine cognition beyond human cognitive abilities has shifted from quantitative to qualitative change. Genuine artificial intelligence technology has begun to mature, the birth of AlphaGo being a typical example. This subtle change in circumstances has prompted people to view the relationship between humans and machines, primarily AI, more seriously and cautiously. How to view the essence of AI has thus become the key issue. A fundamental question therefore stands before us: does the cognitive process of machine intelligence involve a problem similar to that of human cognition, namely the transparency of machine cognition?
Thus, we first need to distinguish between the concept of transparency in cognitive philosophy and the concept of cognitive transparency in computational science. The research on machine cognitive transparency is far less historically deep, systematic, and thorough than the research on transparency related to the human brain in cognitive philosophy.
Around the turn of the twenty-first century, the American philosopher of science Paul Humphreys used the term epistemic opacity to discuss the characteristics of computer simulations. He argued that because we are unable to directly examine and verify most steps in the computer simulation process, the ‘dynamic relationship between the core initial state and final state of the simulation is epistemically opaque’. Later, Humphreys provided a stricter definition of epistemic opacity: ‘If the knowing subject X does not understand all the epistemological elements related to this process at time t, then this process is epistemically opaque to subject X at that moment t.’ In other words, ‘the implication of epistemic opacity is that the subject does not understand or cannot know the contents related to the provable specific computational steps’.
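Humphreys' stricter definition can also be given a schematic formal paraphrase; the notation below (the set $\mathrm{Rel}(P)$ of epistemically relevant elements of a process $P$, and $K_{X,t}$ for ‘X knows at time t’) is introduced here purely for illustration and is not Humphreys' own symbolism:

$$ P \text{ is epistemically opaque to } X \text{ at } t \;\iff\; \exists\, e \in \mathrm{Rel}(P) \;:\; \neg K_{X,t}(e) $$

That is, a single epistemically relevant element of the process that X fails to know at t suffices to make the whole process opaque to X at that moment.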
From the definitions above, it is not difficult to see that epistemic opacity is no longer limited, as cognitive transparency was, to reflections on the nature of the knowing subject's mind; rather, it concerns the entire process of cognition, bringing the knowing subject, the object of knowledge, and the process and methods of knowing into consideration. Although Humphreys offered only a brief outlook on the development of computer simulation as a scientific method, he already saw that for most computational simulations people cannot achieve traditional epistemological transparency: from the perspective of the technical implementation of computational science, facing a massive volume of calculation, humans cannot review the entire computational process, and a blind spot of opacity therefore arises in human cognition.
Computer simulation and computational science have by now formed a set of distinctively meaningful new scientific methods, and these methods introduce a new and unavoidable issue into the philosophy of science, namely the problem of epistemic opacity closely tied to the ‘computational method’. Humphreys therefore went on to explore the nature and classification of epistemic opacity in many subsequent contexts.
The first issue to consider is the role of the knowing subject X and its cognitive capabilities in the cognitive process, and how far those capabilities can reach. This part overlaps significantly with earlier discussions in the philosophy of cognitive science, referring primarily to reflection on the general abilities of the human ‘mind’; the study of mental cognition and its mechanisms has long been a prominent trend in cognitive science and philosophy. However, because machine cognition is deeply involved in the cognitive process under big data and AI conditions, the knowing subject X here actually refers to a composite ‘intelligent agent’ that includes humans as well as computers and AI. Our focus of discussion is the content related to computers and AI.
Secondly, another aspect closely related to epistemic opacity is the proprietary secrecy produced by the ‘external environment of cognition’: certain institutions, in their own interest, keep key technical details such as AI code secret in order to maintain competitive advantage or to avoid malicious attacks from other companies. To some extent, this proactive self-protection aimed at preserving business secrets and competitive advantage produces opacity; it is essentially a familiar patent-style technical barrier, with the result that competitors and the public receive opaque information or are even misled by false information. This type of opacity, caused by human factors, is non-epistemic in nature and will not be discussed further in this paper.
Finally, with regard to current machine learning and AI technologies, a large part of their content is expressed in programming languages, which differ in important respects from human languages, so most people cannot use them directly. Writing and reading code and designing algorithms are, at present, specialized skills. Just as mathematicians are better at dealing with numbers than artists, an artist who wishes to rival a mathematician in understanding numbers can do so only by continuously increasing their mathematical knowledge until they have learned a series of special rules and become another mathematician. This means we need to distinguish between experts and technical laypeople. Furthermore, even professionals find it difficult to avoid so-called data bias: because humans are involved in data collection and selection, various biases often permeate the data, and training algorithms on biased data then produces deeper problems of algorithmic bias and discrimination. In short, whether it is proprietary barriers or the opacity caused by lay unfamiliarity with code, these stem from the insufficiency or even error of the knowing subject's knowledge, that is, from the subjective knowledge state of human knowers, and they are not the focus of this paper. The opacity of cognition discussed here mainly concerns digital cognitive objects such as computational technology and algorithms, including the differences between AI and human cognition; of course, it is also closely related to traditional philosophical issues such as scientific representation, computationalism, and reductionism. Below, we analyze this issue both at the level of theoretical principle, from ontological and epistemological perspectives, and at the level of operability in technical implementation.

2

The Opacity of Machine Cognition: Principles and Technical Implementation

(1)

Theoretical Opacity

Theoretical opacity is closely related to the problem of representation in cognitive science. The development of machine intelligence along logic-based routes reduces the machine's understanding to the symbolic representation and formalization of reality, and this must take the essential problem of computation as its theoretical starting point. Regarding the nature of computation, philosophers, mathematicians, and computer scientists hold differing views; with the development of cognitive science, however, computationalism and the representation problem have evolved from earlier theories of psychological representation to today's 4E and 5E cognitive theories, which incorporate discussions of embodiment and behavior, and the relevant literature is extremely abundant. In short, from the ancient Greek Pythagorean view that ‘everything is number’, to Galileo's description of the universe as a great book written in the language of mathematics, to the great success of Newtonian mechanics, understanding the world in terms of number has always been the most fundamental methodology of scientific understanding. In contemporary times, Wheeler has proposed the striking thesis that ‘everything originates from bits’ (‘it from bit’), and with developments in computational science and network technology a new foundational worldview has been forged: material particles are viewed as information patterns and physical laws as algorithms, everything can in principle be realized algorithmically, and the universe can be likened to a computable quantum computer, naturally forming an information-theoretic scientific paradigm. Since ancient times, the pursuit of a transparent understanding of the world has been inseparable from this core element of number. Because ‘mathematization’ is specialized and symbolic, it has naturally become the primary form in which people express scientific thoughts and theories. We can also call this process and its results ‘scientific representation’, and the transparency of understanding is thus closely bound up with scientific representation: we represent the states of systems, and the rules or laws governing transitions between those states, in a manner that can be clearly examined, analyzed, and interpreted by humans, where the theoretical model and every part of it can be explicitly represented. Accordingly, some cognitive scientists, artificial intelligence researchers, and philosophers are optimistic about the development of artificial intelligence, insisting on a strong program grounded in reductionism and believing that everything from the physical world and life processes to human minds is computable, and even that the entire universe is governed by algorithms. Anti-computationalism, however, is skeptical of these epistemological claims: as regards the computability of the physical world, even with the most precise instruments we still cannot identify many physical processes, and the accuracy of observation of physical processes is limited, so we cannot conclude that the physical world is computable. This rebuttal is bound up with the question of whether computation is essentially continuous or discrete.
Generally, from a phenomenological perspective, computation is discrete, because we always perform specific operations within a given range of data. The continuity hypothesis of computation, by contrast, holds that although infinitely many other values may lie between any two data points, and data are in the end always numeric, the computational process should in principle be able to cover every value within a region, leaving no gray areas between data; on this view computation differs only in quantity, not in kind. In other words, for the countless data within a range, computation can always be carried through continuously. If we take computation to be continuous, then in principle we must acknowledge the possibility of fully understanding the entire computational process, and the theoretical problem of opacity in understanding is essentially a pseudo-problem. Conversely, if we take computation to be discrete, then the finite accuracy of measured data and the problems of expansion or overflow pose a metaphysical challenge to the continuity and consistency of the theory. And if the computability of the physical world is denied, then the computability of machines, as a special kind of physical existence, is affected by the same argument. Moreover, it has long been pointed out that computational science cannot treat data as the endpoint of epistemology; when we trace the information carried by data, we again encounter opacity: ‘Information cannot always be expressed as binary data; thus, there exists a concept of information that computers cannot touch.’ This indicates that even if, at the first level of understanding computation, we approach transparency arbitrarily closely, we still struggle to escape deeper dilemmas at the second level, that of acquiring information. Once the computational essence of machines is thus affected, the subject's understanding of machines inevitably loses the possibility of theoretical transparency, and the difficulties encountered by the strong program of computationalism at the theoretical level become transparency difficulties in machine epistemology.
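The discreteness and finiteness of machine computation mentioned above, including the problems of accuracy and overflow, can be seen even in elementary arithmetic. The following minimal Python sketch assumes standard IEEE-754 double-precision floating point (and Python 3.9 or later for math.nextafter); it illustrates the general point and is not an analysis of any particular system.

```python
# Machine arithmetic is discrete and finite: two everyday sources of the
# "accuracy and overflow" worries raised above.
import sys
import math

# 1. Rounding: 0.1 cannot be represented exactly in binary floating point.
print(0.1 + 0.2 == 0.3)          # False
print(0.1 + 0.2)                 # 0.30000000000000004

# 2. Overflow: the representable range is finite.
print(sys.float_info.max)        # largest representable double (~1.8e308)
print(sys.float_info.max * 2)    # inf -- the result escapes the representable range

# 3. Discreteness: between two adjacent floats nothing can be represented,
#    although infinitely many real numbers lie between them.
print(math.nextafter(1.0, 2.0) - 1.0)   # the gap just above 1.0 (~2.2e-16)
```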
In addition to issues related to computation itself, symbolic representation also involves a broader topic, namely the relationship between models and reality.
According to traditional views, the world itself is an objective existence external to humans; when we seek to understand it, we are using scientific theories to explain and comprehend this world. However, any theory we possess can essentially only explain and predict various phenomena through establishing (a set of) models—what we call theoretical models, which are general abstract descriptions of a phenomenon through simplification or idealization methods. For instance, the point mass in early mechanics and the DNA structure in contemporary biology are both such examples. Thus, models serve as the bridge connecting us to the world. The nature and role of models are an unavoidable topic in discussing the theoretical principles of opacity in understanding because both concepts and algorithms fall within the category of models.
The construction of models relies on simplification or idealization methods. To successfully understand the world, we can usually only focus on the individual and specific aspects of things. If we were to consider all the infinite details related to a phenomenon based on the view of universal connections among all things, understanding would essentially be impossible. Therefore, sometimes we deliberately enhance or weaken certain aspects of a phenomenon, aiming to retain and reveal the essential characteristics of things while neglecting specific, secondary, and incidental attributes in the phenomenal world. This methodology has shone brightly in the successful application of modern scientific experiments and mechanical isolation analysis methods, and it is also the essence of the model method. Simplification or idealization means ‘generalizing from the particular’, and scientific models cannot be a complete description of the real world but can only provide a ‘partial description’ of the phenomenal world.
Thus, we cannot demand that the models of scientific theories fully reflect and align with all the attributes and laws of the objects they aim to represent. What then is the relationship between abstract theoretical models and the concrete observed world? In fact, this question has been extensively explored in the philosophy of science, with answers from various branches of Anglo-American and continental philosophy ultimately converging on one point: ‘isomorphism’.
What is referred to as isomorphism, a notion sometimes loosely associated with related concepts such as homomorphism, originally signifies similarity of form. In contemporary mathematics it is strictly defined as a structure-preserving correspondence between the elements of one set and those of another. If such a correspondence holds between two entities, we can say that a function, or mapping, exists between them; the strongest form of this relationship is the one-to-one correspondence, in which not only does each input value correspond to exactly one output value, but each output value also corresponds to exactly one input value.
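In the mathematical idiom just invoked, the notions can be stated as follows; this is the standard definition, added here only to fix the terminology used in what follows:

$$ f : A \to B \text{ is a bijection} \iff \big(\forall a_1, a_2 \in A:\ f(a_1) = f(a_2) \Rightarrow a_1 = a_2\big) \ \wedge\ \big(\forall b \in B\ \exists a \in A:\ f(a) = b\big) $$

An isomorphism is then a bijection that also preserves structure, for example $f(a_1 \circ a_2) = f(a_1) \ast f(a_2)$ for the operations $\circ$ and $\ast$ defined on $A$ and $B$ respectively.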
Does an ‘isomorphic’ relationship exist between theoretical models and real entities? The answer helps us address, from ontological and epistemological perspectives, the essence and limits of scientific representation, and in turn to understand thoroughly the issue of opacity in understanding. For example, whether an isomorphic relationship exists between people's sensations and perceptions bears not only on the problems of self-knowledge, other minds, and intersubjectivity, but also on what is knowable and unknowable in the relation between the mind and the perceived physical world. Some hold that ‘the structure of mental imagery corresponds to the structure of the actual perceptual entity’, and such an isomorphic relationship opens various pathways for the human mind to access the world; yet the relationship is also limited, in that ‘the mind’ can reflect at most only a part of the whole of reality. How should we view this? As is well known, the ancient Greek philosopher Protagoras famously said that ‘man is the measure of all things.’ Past interpretations of this dictum were subjectivist and idealist; contemporary philosophy instead emphasizes the significance of ‘relational reality’ from the perspective of human subjectivity, highlighting the limited nature of human observational abilities: although the limits of human perception of the world can often be pushed back with the aid of scientific instruments, where the ultimate boundaries lie remains an open question.
Thus, the role of theoretical models in the process of perceiving the world becomes clear: the isomorphic relationship between theoretical models and the perceived entities serves as a necessary epistemological premise—people recognize things through structural equivalence, i.e., the structure of the objective world can be compared to the structure of the theoretical models constructed by people, making the isomorphic relationship the essence of scientific representation, which is not surprising. This applies equally to machine learning or AI based on computational models and algorithms.
In any case, scientific models and scientific representations are important methods on which we rely to understand the world. The discussion above of the discreteness and continuity, the simplification and abstraction, and the isomorphism involved in scientific (model) representation provides some clarification for further exploring the technological prospects of AI based on computational models and algorithms: in theory, rational symbolic representation can provide ample room for improving the level of AI intelligence. For instance, the specialized Go-playing system AlphaGo has already surpassed humans at the game. Starting from its grasp of the rules of Go, an AI can improve its playing strength through ‘self-play’, ultimately exceeding humans not only in speed of thought but also in its potential for autonomous innovation in thinking.
Perhaps, if AI is defined solely by its external practical functionality, the usability and effectiveness of machines seem to have no ceiling. This viewpoint, however, has long been questioned. The crux of the controversy lies in the problem of demarcating the boundary between humans and machines: what is the essential difference between human intelligence and machine intelligence? First, from the very beginning people have engaged in extensive discussion of issues such as the Turing Test, yet to this day no definitive conclusion has emerged. Judged by the current processes and mechanisms of machine learning, improvements in AI depend on the support of big data technology and are typically the result of training on vast amounts of data, hence the saying ‘data is food’.
Secondly, the patterns machines extract from big data are essentially ‘statistical correlations’, not causal relationships in which one thing temporally precedes and determines another. Although efforts are being made to enable machines to identify causal relationships from correlations among things, many challenges remain. Moreover, causal relationships should be understood both in terms of causal effects and in terms of causal mechanisms. Even if the causal effect between A and B can be empirically verified, if the intermediate links C, D, E, and so on that may be inserted between them remain unclear, it is difficult for people to claim that they understand the entire chain of events. Thus the causal mechanism, as an ‘ontological’ explanation of causal effects, has as one of its main functions the revelation, through structural models, of the process by which causes lead to effects, and this introduces a further dimension of the opacity of understanding with which this paper is concerned. If the earlier definition of cognitive transparency refers to the grasp of the state of things at a moment t, then the issue of causal mechanisms raises a deeper requirement for transparency of understanding from the perspective of the temporal process, and it is closely related to the continuity and universality of data discussed earlier. In summary, if machines cannot effectively distinguish correlation from causation, and cannot grasp the specific mechanisms of causation, then machines will never achieve a qualitative leap in ‘understanding’ the world and will forever be unable to surpass humans.
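The gap between statistical correlation and causal structure can be made concrete with a small simulation; the following Python sketch is purely illustrative, with an invented data-generating process in which a hidden common cause C drives both A and B, so that A and B correlate strongly even though neither causes the other.

```python
# Sketch: strong statistical correlation without any direct causal link.
# A hidden confounder C drives both A and B; intervening on A ("do(A)")
# breaks the association, which purely observational learning cannot reveal.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

C = rng.normal(size=n)                          # unobserved common cause
A = 2.0 * C + rng.normal(scale=0.5, size=n)     # A depends on C
B = -1.5 * C + rng.normal(scale=0.5, size=n)    # B depends on C, not on A

print(np.corrcoef(A, B)[0, 1])     # strong negative correlation (about -0.9)

# Simulate an intervention: set A independently of C and observe B unchanged.
A_do = rng.normal(size=n)
print(np.corrcoef(A_do, B)[0, 1])  # near zero: A has no causal effect on B
```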
Furthermore, humans also possess indeterminate qualities such as ‘free will’, which will be another significant obstacle for the development of machine intelligence, because a technology path based on symbolic representation fundamentally struggles to escape the confines of computationalism: attempting to reach a non-rational shore by way of rationalism seems paradoxical. The existence of this paradox also reminds us that the reductionist approach, which tries to understand the workings of the human brain by mapping back the structure and working mechanisms of machine intelligence, has achieved remarkable results in neuroscience and brain-computer interfaces, yet the heights it can ultimately reach seem limited.
Lastly, the cognitive foundation of humans may not fit Locke's ‘blank slate’ theory but rather lean toward Kant's synthetic a priori judgments: humans are born with certain abilities to recognize and respond to the world, and this endowment is especially evident in comparison with other biological species. We now generally summarize this difference as part of the subjectivity of cognition, whose ontological foundation is in fact embedded in innate differences in DNA; machine cognition, by contrast, requires data feeding and algorithmic support, and before cognitive activity begins it can fairly be described as a blank slate. This difference in cognitive premises may be understood more thoroughly through the comparison between carbon-based and silicon-based life.
In summary, we have primarily discussed the working principles of machine intelligence and its various limitations, analyzing some manifestations of the opacity of machine cognition at the theoretical principle level. However, this is only one aspect of the issue; below we will shift to discussing the opacity issue from the perspective of technical implementation.

(2)

Epistemic Opacity in Technical Implementation

As noted above, it is generally assumed that scientists' pursuit of transparency in computational science amounts to understanding, at the level of theoretical principle, what the machine is attempting to do. In computational science, when we say that A cannot be computed, we usually mean that completing A is intractable at current hardware and software levels, not that A has been proven impossible in principle. The scientific phrase ‘computing A’ has, however, been misread in philosophical usage. Winsberg therefore remarks that ‘philosophers of science have missed the opportunity to contribute to this explosive field of modern science precisely because they tend to focus on principled possibilities rather than the achievements we can obtain in practice.’ We thus need to pay more attention to practical difficulties than to solutions in principle, and opacity in technical implementation should be the main focus of the current opacity of machine cognition.
Opacity in technical implementation first requires consideration of issues concerning the data themselves. Logically, transparency of cognition implies a comprehensive grasp of the object of cognition, what might be called a ‘God's eye view’. Big data methods are now relatively mature, and their key feature is the ability to characterize things and phenomena along multiple dimensions, with the data in each dimension required to approach the full sample; multiplicity and completeness are thus among big data's requirements. Obtaining truly ‘global data’ about things, however, is practically impossible: the earlier point that ‘man is the measure of all things’ reflects this limitation on data to some extent. In addition, the accuracy of measurable data is evidently related to cognitive transparency. Although we touched on this when discussing the continuity and discreteness of data, the actually attainable measurement accuracy has, since the birth of modern science, undeniably constrained our grasp of the essence of things. And this is before considering the inherent randomness characterized by chaos and fractals, as well as quantum randomness. In short, the opacity in understanding caused by this practical ‘information gap’, that is, insufficient measurement data, is entirely different from the familiar and subjective ‘data bias’.
From the perspective of algorithms, the problems encountered in terms of epistemic opacity seem to be even more pronounced.
1. The Opacity Problem Caused by the Complex Hierarchical Relationships of Algorithms
Although in principle we could understand a machine's algorithms simply by understanding their operational logic, in practice this simplification is of little help. The greater challenge facing humanity is how to understand the results produced by the collaboration of numerous simple modules. In the era of big data, a single study often requires analyzing billions or even trillions of data examples along with tens of thousands of data attributes, and the internal decision logic of an algorithm changes as it ‘learns’ from training data. Processing large amounts of highly heterogeneous data increases the complexity of the code and requires techniques and devices embedded in the code to manage it, invisibly increasing the opacity of the computational process.
When dealing with big data scenarios, people generally adopt flat data structures: cloud data, for example, is often held in distributed storage so that numerous simple computations can be executed, and logical rules can be designed to form algorithms that are individually easy to understand. However, prolonged interaction between code and data gives rise to numerous unpredictable processes, and this process cannot be fully presented in forms humans can understand, such as visualizations or idealizations; it is irreducible. For example, the multilayer neural network methods that currently play a major role in machine learning divide the algorithm into many distinct layers, and the relationships between layers exhibit abrupt, emergent transitions from quantitative to qualitative change. According to studies of the integrity, hierarchy, and emergent processes and properties of systems from the perspective of complexity, new qualities emerge at each level of a system through abrupt change. The mathematical analysis of this process can draw on specialized tools such as ‘catastrophe theory’, while the physical mechanism can be understood through the ‘sensitivity to initial conditions’ characteristic of the fractal structure of chaotic attractors, most popularly expressed as the ‘butterfly effect’. Such abrupt or ‘chaotic’ behavior does not contradict causal analysis. In the eyes of mathematicians, chaos involves no genuine indeterminism: if God wished, He could know every detail of chaotic behavior; from God's perspective, chaos still obeys strictly deterministic laws. Humans, of course, can never become God, so in the eyes of physicists the uncertainty of chaos at the level of technical implementation is inescapable. This means that, from the standpoint of practice and of measurement accuracy, the epistemic opacity arising from multilayer neural network computation is difficult to overcome. Another impact of multilayer neural network algorithms on epistemic opacity lies in the uncertainty of the new qualities generated during computation; in this sense, the algorithm is no different from a ‘black box’ of understanding. In 2014 the Turing Award winner Geoffrey Hinton used the term ‘dark knowledge’ for this new type of knowledge, supplied autonomously by algorithmic systems, that humans cannot articulate or comprehend. In short, dark knowledge is a result produced by the autonomous operation of multilayer deep learning processes whose mechanisms and elements remain unknown to us. Because its rationale is unknown to us, it poses a challenge to traditional epistemology and prompts us to take seriously the epistemological value and significance of such opaque knowledge.
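The ‘sensitivity to initial conditions’ invoked above can be illustrated with the logistic map, a standard toy model of chaos; the Python sketch below illustrates the general phenomenon and is not a model of any neural network.

```python
# Sketch: deterministic chaos and the "butterfly effect" in the logistic map
# x_{t+1} = r * x_t * (1 - x_t) with r = 4 (the chaotic regime).
# Two trajectories starting 1e-10 apart diverge to order 1 within ~50 steps,
# so any finite measurement precision makes long-run prediction impossible
# even though every step follows a strictly deterministic rule.

def trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.3)
b = trajectory(0.3 + 1e-10)

for t in (0, 10, 30, 50):
    print(t, abs(a[t] - b[t]))   # the gap grows from 1e-10 toward order 1
```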
In summary, viewed from the perspective of algorithmic hierarchy, deeper-level coordination in machine learning does not necessarily yield the statistically best results; and, as noted above, such statistical correlation still falls well short of causality, so that in many cases people cannot rely on intuition alone to understand these implicit processes. The unavoidable complexity of algorithms and models in machine understanding thus indicates that machine cognition is not a simple overlay or extension of the epistemology of models. The mismatch between this complexity and human understanding capabilities, which are suited to simple collaborative abilities and processes, is one of the important sources of epistemic opacity. Opacity arises not only from the barriers to understanding and the irreducibility brought about by the high-dimensional character of machine computation; the factor of time also plays an important role, indeed, from a practical perspective, a decisive one.
2. Time Complexity in Computation and Its Impact
In computer science, many algorithms and computations require consideration of the time factor. Time complexity is a function that quantitatively describes the running time of an algorithm. When considering the computational capabilities of a system, therefore, we cannot focus solely on the theoretical amount of computation but must also consider its efficiency. In everyday terms, several kinds of time are typically needed to complete a task: constant time (c), which does not increase with the size of the task; linear time (n), which increases linearly with the size of the task; polynomial time (n^c), which grows as a power of the size; and exponential time (c^n), where the running time grows exponentially as the size increases. When most operations in a program scale linearly with time, there is little need to worry about computational complexity, since improved computing power will significantly shorten the time required. But when the computation grows super-polynomially, in particular exponentially, the time consumed becomes excessive, even approaching infinity for practical purposes, which we cannot accept in practice. In computational science, therefore, a good algorithm seeks to minimize time complexity. If the time required to solve a problem grows exponentially with the size of its instances, the problem is termed ‘intractable’. Although undecidability and incomputability are important for understanding computation, intractability in practice has more significant implications. Such practical intractability is one of the sources of opacity in understanding: large-scale instances cannot be resolved by computation within a reasonable timeframe. Even if the input and the initial algorithm are simple, the content that may need to be understood at different stages can also grow exponentially. This poses an insurmountable practical problem for human cognitive abilities.
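The growth classes just listed can be compared with a short calculation; the step counts in the following Python sketch are abstract illustrations, not timings of any real program.

```python
# Sketch: abstract step counts for the growth classes named above.
# Constant and linear growth stay manageable; exponential growth quickly
# exceeds what any realistic increase in computing power can absorb.

def step_counts(n):
    return {
        "constant    O(1)":   1,
        "linear      O(n)":   n,
        "polynomial  O(n^3)": n ** 3,
        "exponential O(2^n)": 2 ** n,
    }

for n in (10, 20, 40, 80):
    print(f"n = {n}")
    for name, count in step_counts(n).items():
        print(f"  {name}: {count:,}")

# At n = 80 the exponential class already exceeds 10^24 steps, which is why
# such problems are called 'intractable' in practice.
```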
Thus the opacity issue is no longer merely a matter of computing power and human cognitive ability; it also concerns the nature of time and space. Even if we assume that people can analyze uncertain representations using probability theory, for example Bayes' theorem, and transform the problem of uncertain reasoning into elements of a symbolic system, such methods still encounter barriers to understanding in practice. Opacity in technical implementation is therefore the insurmountable barrier to human understanding of machines; it stems in essence from the complex interaction of time and computing power, which leaves the object of cognition, the machine itself, unable to reach a state of being completely understood. If the indefinite extension of running time caused by time complexity leaves a computation incomplete, how can humans understand a computational process that has no endpoint and hence no answer? Evidently, the question of time is one of the premises we must fully take into account when weighing the relation between computation and understanding.
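For completeness, the probabilistic tool mentioned here, Bayes' theorem, can be stated in one line; for a hypothesis H and evidence E:

$$ P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} $$

Even with this compact rule, the point of the paragraph stands: in practice the required probabilities must themselves be estimated from data, and computing over many hypotheses and pieces of evidence runs into the same practical complexity barriers discussed above.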

3

Limits and Significance

Whether machine intelligence will truly emerge remains an important unresolved issue. So-called ‘artificial consciousness’ currently exists only in science fiction films and other works of fiction. The difficulty lies not only at the level of concrete technical implementation but also at the level of choosing the right theoretical path. In other words, if we cannot clearly answer the first-order question ‘what is machine consciousness’, how can we explore the second-order question of how to achieve it? We therefore cannot simply equate ‘machine intelligence’ with human consciousness. Even as the bearers of consciousness, we still cannot say precisely why it arises, how it functions, or where it is headed. The construction of artificial consciousness would have to follow strict causal threads, retracing the path of research on human consciousness back to the birthplace of artificial intelligence. Does this also mean that our discussion of machine cognitive transparency must be founded on an understanding of human cognitive transparency? As pointed out at the beginning of this article, there has been much research in the philosophy of cognitive science on human cognitive transparency and models of self-knowledge, but these are not the focus of this paper; they merely serve as the starting point and larger theoretical background for its topic, and will not be elaborated here. In fact, as the first part of this article sought to do, we have on the one hand briefly reviewed the basic connotations of human cognitive transparency, and on the other hand focused on the connections and distinctions between (machine) cognitive opacity and human cognitive opacity in cognitive science, emphasizing that the former is an extension and elaboration of the latter. Moreover, as matters stand, the theoretical paths concerning human self-knowledge, formal logic, functionalism, naturalism, and neural hierarchies are not yet well developed, prompting scientists to explore ‘composite paths’. Given this situation, the question of when machine intelligence will surpass human intelligence is in effect merely a futurist's optimistic outlook on technological development. The reasons for this judgment stem from several further considerations.
First, cognitive scientists have discussed the problem of machine consciousness in depth through analogies with the human brain, and a general consensus has emerged that ‘understanding’ is the foundation of intelligence. As noted earlier, however, machine understanding becomes ever harder to achieve as the volume of data grows and as more complex hierarchies, processes, and irreducibility become involved. Hence, ‘in systems where computational irreducibility occurs, the most effective program for the future state of the computational system is to let the system evolve itself.’ Just as nature has its own evolutionary rules, machines have theirs. We cannot demand that humans understand the influence of every element and every computational step in computer simulation and machine cognition; for human subjects, the opacity of understanding in computational science is one of its fundamental characteristics. The issue can also be viewed from another angle: if humans themselves do not fully understand the origins of their own consciousness, since, as noted above, we cannot find a home for our own understanding, while machine intelligence follows its own independent developmental path, has the very basis for comparing human and machine intelligence been dissolved?
Secondly, beyond the ‘understanding’ dilemma, some researchers hold that, from the perspective of computational science, the absence of artificial consciousness, or even of a direction worth attempting, stems from what has been termed the ‘computational explanatory gap’, defined as ‘the lack of understanding of how to map high-level cognitive information processing to low-level neural computations.’ ‘High-level cognitive information processing’ refers to the various aspects of cognition, such as goal-oriented problem-solving, reasoning, decision-making and execution, cognitive control, planning, language comprehension, and metacognition: cognitive processes that can be consciously accessed. ‘Low-level neural computation’ refers to the kinds of computation realizable in artificial neural networks. In fact, the computational explanatory gap remains an unexplained problem caused by the partial unrepresentability of human consciousness, and it belongs essentially to the ‘explanatory gap’ long discussed in the philosophy of mind: it has been noted that there seems to be no bridge law connecting knowledge of subjective mental states with knowledge of the objective physical world, indicating an insurmountable epistemological gap between the two kinds of knowledge. In the philosophy of mind this gap is called ‘the explanatory gap’, and its existence implies that the knowledge we use to describe the external world seems inapplicable to the description of mental states. Since it was first raised, this issue has remained a core concern of the philosophy of mind, and it is still very much alive today. For the construction of artificial consciousness, if this dilemma is unavoidable, does it imply that the development of an artificial intelligence homologous to human intelligence is likewise impossible?
Finally, in traditional epistemology, since humans are the active or dominant party in cognitive activities, this characteristic is referred to as ‘human-centered epistemology’. However, with the rise of computers and artificial intelligence technologies, the position of humans in cognitive activities has profoundly changed. The novelty of the computational methods brought about by the use of computers is gradually causing humanity to lose its central position in epistemology. We have further clarified the dilemmas faced by human-centered epistemology based on the irreplaceability of machine cognition under big data conditions, proposing that the key to solving this problem is to construct a non-human-centered epistemology that recognizes the value of machines in epistemology, which, to some extent, can dissolve the opposition between humans and machines.
The shift in epistemological stance from human-centered to non-human-centered makes the meaning and tasks of ‘understanding’ more complex. Undoubtedly, the essence of ‘understanding’ is primarily a ‘relationship’, which implies that it can only occur between at least two parties or more; secondly, its core demand is ‘truth’ or ‘correctness’. In terms of its specific manifestations, there are two aspects to consider.
First, from the perspective of the active party that generates the understanding problem, it initially primarily concerns the ‘self-awareness’ and ‘self-consciousness’ of the individual, which is precisely the ‘transparency issue in cognitive science’ discussed at the beginning of this article, and will not be elaborated further here.
Second, with regard to understanding among multiple human cognitive subjects, its most important manifestation is the so-called problem of ‘other minds’. This issue entered the field of inquiry long ago; in ancient Chinese philosophy, for example, it appears in the classic exchange between Zhuangzi and Huizi on the bridge over the Hao River, where Huizi challenges Zhuangzi: ‘You are not a fish; how do you know the joy of fish?’ The essence of the problem is one person's speculation about and judgment of another person's intentions, the premise of which is, of course, one person's cognition and understanding of the other's intentions. In contemporary philosophy it has evolved into the problem of intersubjectivity.
The significance of the concept of intersubjectivity is very complex, but its most basic meaning differs from the subjectivity concept based on the isolated, atomistic individual notion in traditional Western philosophy, referring to the collective issues among different subjects, especially concerning the unity or ‘co-being’ issues between subjects. Its manifestation in ontology and epistemology essentially pursues the universality of knowledge, or how individuals in cognition can escape the dilemmas of self to possess universal knowledge, thus achieving consensus and effective and correct communication among different subjects. This epistemological significance of intersubjectivity initially remains embedded in the traditional subject-object dichotomy framework of ontology, but it brings significant shifts in epistemology, namely, people are shifting their focus from the status of subjectivity in the ‘subject-object’ relationship in cognition to the symbiotic, egalitarian, and communicative relationships among different subjects within the same cognitive process. Therefore, Heidegger and others emphasize the relationship of communication, understanding, and interpretation between subjects, stressing the unity of humans with the world in the interpretative activity, thereby allowing intersubjectivity to enter the realm of ontology in the form of hermeneutics and existential theory. The ontological significance of intersubjectivity mainly involves questions about how freedom and understanding are possible, aiming to fundamentally explain the relationship between humans and the world and address human existential issues.
It is evident that the in-depth discussion of the intersubjectivity problem actually gradually guides the initial focus on the objectivity, universality, and unity of cognition towards attention to how different human subjects achieve equality and symbiosis through language communication, and its hermeneutic significance leans more towards the openness of the relationship between humans and the world and the public nature of relationships among humans. This provides a potential space for subjective construction of these relationships to play a role, and it also implies that such relationships will possess collective characteristics and dynamic evolutionary features. It should be noted that contemporary Western philosophy, including hermeneutics, has significantly diverged from traditional rationalist positions, and their focus on discussing issues is primarily limited to the realm of humanism. However, the discussions regarding the intersubjectivity of humans can provide a new, enlightening theoretical perspective for expanding our understanding of human-machine relationships and the issue of cognitive transparency.
First, given the fact that machine cognition now intervenes in the cognitive process, we have entered the era of the ‘intelligent agent X’, in which knowing subjects have become human-machine composites. This makes the problem of understanding even more perplexing. Classified by type, the relationships of understanding are no longer limited to interpersonal relationships but appear in at least three dimensions: understanding between humans and humans, between humans and machines, and between machines and machines. Secondly, drawing on the basic ideas of the intersubjectivity research outlined above, we must consider the more complex situations in the relationships between humans and machines and between machines and machines. By analogy with the demand for publicness in human communication achieved through linguistic dialogue, emphasized by figures such as Heidegger and Gadamer, explicit symbolic representation is also needed for mutual understanding between humans and machines and between machines. However, because of problems such as the explanatory gap and tacit knowledge discussed earlier, machines may encounter insurmountable difficulties in fully understanding or grasping human intentions, and even machines among themselves may struggle to reach unity. Because the characteristics of knowledge, including tacit knowledge, and the public nature of mutual understanding imply sociality and its constantly changing possibilities, machines too will encounter the uncertainty and diversity presented by their own emergence, as well as the demand for social, collaborative evolution. This, in fact, opens up another important area of artificial intelligence research, in which the objects of concern are no longer limited to individual artificial intelligences. In the near future we will also need to face the problem of ‘collective intelligence’ formed by individual entities through cooperation; related research is already under way, and it bears in essence on the potential social development of artificial intelligence. In this respect, the growth of machine intelligence, like that of human intelligence, will gradually be realized in interaction with complex social situations, which can also be viewed as an extension or elaboration of the social behavior formed through the self-organization of artificial intelligence. This further complicates the forms of understanding subjects and their interrelationships.
Beyond these theoretical principles, the discussions above of understanding between humans, and between humans and machines, take on new angles when considered alongside recent advances in brain-computer interface technology, which gives the transparency problem a concrete technological setting.
Brain-Computer Interface (BCI) technology detects and collects signals related to brain activity, such as brain waves, through sensors; intelligent algorithms running on computers connected to those sensors then identify or decode the meaning of the signals and encode them into control commands that external devices such as prosthetics, wheelchairs, or robots can understand and execute. This allows external devices to be moved by ‘brain control’ even when the body is immobile, achieving the effect of ‘thought control’ or ‘mind over matter’. At present the technology can help people with disabilities regain movement or perception through various brain-computer interfaces; in the future it may enable healthy individuals to acquire extraordinary abilities of movement and perception. This means that brain-computer interface technology will profoundly change the ways humans perceive and act, fundamentally altering the cognitive activities of humans as knowing subjects, and may even endow them with capabilities they previously lacked, leading to a kind of ‘new evolution’.
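To make the pipeline just described more concrete, the following minimal Python sketch mirrors its three stages: signal acquisition, decoding, and encoding into a device command. Every name and the threshold ‘decoder’ are hypothetical placeholders invented for illustration; no real BCI hardware or library is assumed.

```python
# Minimal sketch of the BCI pipeline described above: sense -> decode -> command.
# All names and the threshold "decoder" are hypothetical, not a real BCI API.
import random
from typing import List


def acquire_signal(n_samples: int = 256) -> List[float]:
    """Stand-in for a sensor read-out (e.g. one window of an EEG channel)."""
    return [random.gauss(0.0, 1.0) for _ in range(n_samples)]


def decode_intent(signal: List[float]) -> str:
    """Toy 'decoder': reduce the window to one feature and threshold it.
    Real systems would use trained classifiers over many channels/features."""
    mean_power = sum(x * x for x in signal) / len(signal)
    return "MOVE" if mean_power > 1.0 else "REST"


def encode_command(intent: str) -> dict:
    """Translate the decoded intent into a command an external device
    (prosthetic, wheelchair, robot) could execute."""
    mapping = {"MOVE": {"actuator": "wheelchair", "action": "forward"},
               "REST": {"actuator": "wheelchair", "action": "stop"}}
    return mapping[intent]


if __name__ == "__main__":
    window = acquire_signal()
    intent = decode_intent(window)
    print(intent, encode_command(intent))
```

The point of the sketch is only the division of labor among the three stages; in an actual system the decoding step is where the philosophical difficulties discussed below concentrate.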
Specifically, some current wearable devices serve as assistive tools: they help students with attention deficits improve their concentration and academic performance, and help drivers fatigued by long hours overcome momentary lapses of attention and reduce traffic accidents. In some cases, brain-computer interface devices convert infrared and ultrasonic signals into electrical stimulation of specific areas of the cerebral cortex, allowing the brain to perceive and process them, so that individuals can ‘see’ infrared and ultraviolet light or ‘hear’ infrasound and ultrasound, endowing the knowing subject with extraordinary perceptual and cognitive abilities. Unlike traditional cognitive technologies such as handwriting, computer-aided design, and mobile communication, which remain spatially separate from the human body and especially from the brain, brain-computer interfaces are integrated with the subject’s cognitive organ, allowing direct interaction between brain and machine and thereby naturally raising the problem of mutual ‘understanding’ between the two. If this understanding is to be accurate, that is, transparent, it would require a one-to-one correspondence between brain states and machine states, which involves at least two difficulties. First, the operation of machines rests on algorithms and logical processes, so their meanings are comparatively simple, even allowing for emergent possibilities; by contrast, the states of the human brain are far more complex, shaped in particular by irrational factors such as emotion and free will. It is not hard to imagine that a one-to-one correspondence from brain state to machine state will face many unforeseen problems. Secondly, from a philosophical perspective, the key to solving these problems lies in the applicability and effectiveness of the reductionist methodological principle; yet in contemporary science and philosophy the limitations of reductionist thinking and methods are evident. The degree to which brain-computer interface technology can satisfy the requirement of cognitive transparency will therefore be limited.
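The worry about one-to-one correspondence can be made vivid with a deliberately toy calculation: if a decoder compresses many distinct brain states into a handful of machine commands, the mapping is many-to-one and no transparent inverse exists. The ‘brain states’ and decoder below are invented for illustration only and model no real device.

```python
# Hypothetical illustration of the many-to-one problem discussed above:
# a decoder that compresses rich "brain states" into a few machine commands
# discards information, so the machine state cannot transparently recover
# the brain state that produced it.
from itertools import product

# Toy "brain states": every combination of three binary features
# (e.g. attention high/low, emotion positive/negative, intention A/B).
brain_states = list(product([0, 1], repeat=3))   # 8 distinct states

def decode(state):
    """Hypothetical decoder: only the 'intention' bit reaches the machine."""
    return "MOVE" if state[2] == 1 else "REST"

commands = {decode(s) for s in brain_states}
print(len(brain_states), "brain states ->", len(commands), "machine states")
# 8 brain states -> 2 machine states: the mapping is many-to-one,
# so no inverse (machine -> brain) correspondence can be transparent.
```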
Strictly speaking, topics such as ‘memory erasure’, ‘digital cloning’, and ‘consciousness uploading’ still belong to the realm of science fiction and other literary and cinematic works. Even so, the development of brain-computer interface technology has given the prospect of ‘uploading the brain’ a new allure. Such uploading is usually divided into three types: uploading a person’s complete ‘mind’; uploading partial memories, especially important ones that are not easily retrieved; and uploading information. From the perspective of brain science and some artificial intelligence research, the third type is closest to the current reality of brain-computer interfaces. If we were to clone every person’s brain into machines or the cloud, we would also have to allow the resulting ‘digital person’ to evolve as information changes and data are updated, thereby achieving the development and immortality of digital persons, which confronts us with extremely daunting challenges.
Although much of what is said about ‘brain-computer interfaces’ today remains speculative, the issues they raise are thought-provoking.
First, brain-computer interface technology implies that external symbolic systems and other cognitive tools can be embedded deeply within the human cognitive system, forming a complete, organic ‘coupled system’ in which the human body and technological elements constitute an indivisible whole that mutually influences and adjusts itself. The non-human technological elements of the interface thereby become part of the cognitive process or cognitive system, raising new questions about how the cognitive subject extends its mind. The intelligent enhancement achieved through brain-machine collaboration places the knowing subject in a new state of existence and development: when brain and external devices complete cognitive tasks together, some tasks can no longer be accomplished without those devices. Can the relationship between the external device and the knowing subject then still be separated? A further question follows: although current brain-computer interface technology realizes only partial cognitive functions, the deeper metaphysical question of transparency between humans and machines will fundamentally limit how far the knowing subject can be extended. Is there necessarily an insurmountable ontological distinction between the external and the internal? And when the external begins to substitute for the internal, does it not thereby attain the same ontological status as the internal?
Secondly, judging from the current trend of brain-computer interface technology, the role played by machine factors will become increasingly significant. Will its ultimate state be the birth of a ‘machine mind’? That is, once machines not only increase computing power and speed but also gradually handle certain cognitive problems autonomously and flexibly, they will no longer be mere tools but will, to some extent, become machines with a ‘mind’, possessing a certain degree of ‘subjectivity’. At that point, can such a machine be regarded metaphysically as a conscious entity? This is a question worthy of deep contemplation.
Finally, if the essence of brain-computer interface technology lies in the integration of humans and machines, then the idea of ‘human-machine symbiosis’ implies that the machine part cannot possess fully independent significance: it remains controlled by the human brain and serves human ends. The ultimate limit of this developmental trend, by contrast, is the theme of this paper, namely a pure AI possessing complete cognitive abilities and subjectivity. Of course, the journey from attaining the status of an independent knowing subject to ultimately surpassing human intelligence may still be long and tortuous; but at least with respect to cognitive transparency, it leads us from considering human-machine relationships to thinking about machine-machine relationships.

Conclusion

The discussion in this article regarding the boundary between humans and intelligent machines can help us reflect on the future development of artificial intelligence.
First, the essence of artificial intelligence has always been the simulation of human intelligence. If we strictly follow the understanding of transparency in epistemology that I have consistently emphasized, namely that only when a cognitive subject knows A can it be said to know that it knows A, then extending this to machine epistemology yields the reasonable form of machine cognitive transparency: only when the machine itself, as a knowing subject, knows A can it be said to know that it knows A. Here, ‘knowing’ is an imitation of human ‘understanding’. It is crucial to clarify this point: through the discussion in this article we can clarify what the essence of machine consciousness should be. In other words, although genuine machine consciousness has not yet emerged, distinguishing human consciousness from machine consciousness and defining the role of machine consciousness within machine epistemology helps clarify its potential and the direction of its future development.
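Read schematically, this transparency condition can be compared with KK-style principles in epistemic logic. The following LaTeX fragment is offered only as an interpretive sketch, not as notation used in this article, with K standing for the knowledge of the human subject s or of the machine m respectively (the fragment assumes amsmath for \text).

```latex
% Interpretive sketch only: cognitive transparency as a KK-style schema.
% K_{s}A: the human subject s knows A;  K_{m}A: the machine m, as knowing subject, knows A.
\[
  K_{s}A \;\rightarrow\; K_{s}K_{s}A
  \qquad\text{and, for a machine } m, \qquad
  K_{m}A \;\rightarrow\; K_{m}K_{m}A
\]
```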
Secondly, in our earlier exploration of the human-machine relationship and its manifestations in the transparency problem, we pointed out that one of the fundamental difficulties is perhaps the expression and realization of irrational factors, such as free will, inspiration, and the emotions unique to humans, which are among the most pressing challenges artificial intelligence research must overcome. Some even attribute this gap to irreducible differences between carbon-based and silicon-based entities. To some extent, understanding within machine-machine relationships seems simpler in this respect, since their operating mechanisms and methods are fundamentally homogeneous and belong to the realm of computation and logical method; communication and connection between them are relatively easy and face fewer obstacles, especially under current conditions in which artificial intelligence systems generally adopt relatively simple parallel structures for data storage and retrieval and run the same algorithms. However, when we consider the complex intelligent networks formed by machine-machine relationships, we must be aware not only of the peculiar mutations that occur within individual artificial intelligences, such as the emergence of dark knowledge, which produces opacity, but also, following the earlier discussion of intersubjectivity, of the ‘co-evolution’ effects that arise once machines appear in self-organizing forms. A society composed purely of artificial intelligences would possess openness, flexibility, and diversity, and what cognitive transparency means under such uncertainty remains an open question; it may be another difficulty we must confront.
Furthermore, from the perspective of the often-quoted notion that ‘humans are the sum of social relations’, besides the difficulty of quantitatively representing uncertain irrational factors such as free will, emotion, and artistic creativity, the ethical and diverse complexities of social situations are likewise non-analytical and hard to replicate. Human intelligence is the long-term, dynamic outcome of evolution, including cultural influence; it involves subjective purposes and motivations closely tied to changing objective conditions within concrete contexts, and it is not governed by fixed rules. Given the cognitive characteristics of individual intelligent machines analyzed here, even allowing for the change and progress made possible by deep learning, the focus remains primarily on formalized computation, logic, and the rational dimension, and thus does not fully equate with the intentionality, purposefulness, and autonomy exhibited by humans. Even granting the potential sociality and autonomous, open evolution of future machine collectives, we can still say that we have perhaps overestimated the power of rationality: although logical representation and related issues are among the prerequisites of AI development, the dream of a pure, ultimate machine mind remains distant. Let us wait and see.
