Understanding the Boundaries of Intelligence: Do Large Language Models Have Agency?


Cover image: True “Agentic AI” requires intrinsic motivation, predictive models of the world, and sociality.

𝕀²·ℙarad𝕚g𝕞 Intelligent Square Paradigm Research: Writing Deconstructs Intelligence, Paradigm Enhances Cognition

Current large language models (LLMs), despite exhibiting astonishing intelligence, do not possess true “agency”. The behaviors we perceive in LLM “agents” are more a psychological projection prompted by their powerful language capabilities, combined with a “mimicry” achieved through external “scaffolding” engineering. True “Agentic AI” requires intrinsic motivation, predictive models of the world, and sociality; its ultimate form is not an AGI that replaces humans, but an intelligent system that coexists with humans.

#AI Agents #Large Language Models #AgenticAI #Agency

Preface

The Illusion of Agency | Since 2025, we have felt the enthusiasm for Agent applications firsthand. From automated reporting and code generation to intelligent customer service, AI Agents seem to be infiltrating every industry with unstoppable momentum. They can “think”, “plan”, and “execute”, as if a cohort of digital employees were about to take their posts. However, beneath this wave, a fundamental question deserves calm reflection: do these LLM-based Agents truly possess “intelligence” and “agency”? Or are we caught in a grand illusion?

Recently, in a speech titled “From Agent to Agentic AI” at the AI Agent Enterprise Application Innovation Conference organized by the Daxin Society, I attempted to offer a sobering “disenchantment” perspective.


Main Content

01. The “Mountain” of Intelligence: Human Intelligence Originates from the Evolution of Body and Mind

Before judging AI, we must first calibrate the scale of “intelligence”. Human intelligence did not arise from nowhere; it is the product of billions of years of evolution on Earth, driven by survival and reproduction.

From Single Cells to Neocortex: The foundation of intelligence is the complex structure of “body and mind”. From single-celled organisms foraging to survive the next moment, to the emergence of neural networks, and finally to the formation of the human brain’s neocortex, each step served one purpose: surviving better in the physical world.

From Individuals to Civilization: Intelligence externalizes through language into society and culture, forming layers from “trial-and-error instincts” to “collective intelligence” and then to “species civilization”. Our intelligence is embodied, motivated, and socialized.

Understanding this is crucial, because it tells us that human intelligence is not a disembodied, universal faculty, and that the current LLM paradigm cannot reach AGI. True intelligence is deeply bound to survival needs, physical embodiment, and social interaction.


Image: Human intelligence is the result of a long evolution driven by survival, from physical entities to social civilization.

02. The “Truth” of LLMs: A “Brain in a Jar” Without Motivation

So, what is an LLM? The speech argued pointedly that an LLM is a “new species of language intelligence”, a compressed product of the text corpus of human social intelligence. It is, in essence, a “brain in a jar”.

It has no body, so it lacks the intrinsic needs that derive from survival; it grasps correlation, not causality; it has no self-awareness, and its core task is always to predict the next most likely word (token) based on probability.
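The next-token objective can be illustrated with a toy sketch. This is an assumption-laden illustration, not how production LLMs are built (they use neural networks, not count tables), but the underlying objective is the same: given a context, emit the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a corpus,
# then always emit the most probable next word. Real LLMs replace the count
# table with a neural network, but the training objective is analogous.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(word: str) -> str:
    # Greedy decoding: pick the single most frequent continuation.
    return follows[word].most_common(1)[0][0]

def generate(start: str, length: int) -> list:
    out = [start]
    for _ in range(length):
        out.append(next_token(out[-1]))
    return out
```

In this corpus “the” is followed by “cat” twice and “mat” once, so `next_token("the")` returns `"cat"`: the statistically likeliest word, with no understanding involved.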

When we marvel at its fluent responses and eloquence, what we actually see is a reflection of our own language and wisdom. The over-anthropomorphizing and over-divinizing of large language models stem from the psychological projection triggered by a model that has learned human language: we project our own cognitive patterns onto a cold statistical machine and mistakenly believe it possesses a “mind” like ours.


Image: LLM lacks key elements of agency, such as autonomy and goal orientation.

03. The “Mimicry” of Agents: The Ingenious Deception of RLHF + Scaffolding

Since the LLM itself lacks agency, what about the powerful Agents we see today? The answer: ingenious external engineering, namely “Reinforcement Learning from Human Feedback (RLHF) + scaffolding”.

This combination allows LLM to achieve a “mimicry” of agency:

Mimicry Understanding: Through prompt engineering, we are not giving LLM “intentions”; rather, we are activating a certain “role” it has learned from vast amounts of text. If you ask it to play the role of a “senior marketing expert”, it will mobilize relevant knowledge pathways to generate text. It is “acting” rather than “understanding”.

Mimicry Reasoning: Through frameworks like ReAct, we provide LLM with a set of action processes. When complex tasks are required, the framework guides LLM to “search”, “reflect”, and “plan”. LLM itself does not do this proactively; it is the external scaffolding that provides the conditions and structure for action.

Mimicry Motivation: The goals and values of the Agent are externally “anchored”. This non-intrinsic motivation is fragile; it can easily lead to “role drift” and “value deviation”, or to getting stuck in local optima.

Ultimately, today’s AI Agents are more like beautifully crafted puppets controlled by strings, rather than Pinocchios with free will. Every impressive performance of the puppet relies on the careful design of the puppeteer behind it.
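The “scaffolding” described above can be made concrete with a minimal sketch of a ReAct-style loop. Here `call_llm` is a hypothetical stand-in for any chat-completion API, faked with canned replies so the example runs end to end. Note where everything lives: the loop, the parsing, and the tools are all outside the model; the LLM only continues text, and the scaffold converts that text into action.

```python
# Minimal ReAct-style scaffold. The "agency" (looping, deciding when to act,
# executing tools) belongs to this code, not to the model.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API. Scripted replies
    # make the sketch runnable without any model.
    if "Observation: 4" in prompt:
        return "Thought: I have the result.\nFinal Answer: 4"
    return "Thought: I should compute it.\nAction: calculate[2 + 2]"

TOOLS = {"calculate": lambda expr: str(eval(expr))}  # toy tool registry

def react_agent(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = call_llm(prompt)
        prompt += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        if "Action:" in reply:
            # Parse "Action: tool[args]" -- the scaffold, not the model, acts.
            action = reply.split("Action:")[1].strip()
            tool, args = action.split("[", 1)
            observation = TOOLS[tool.strip()](args.rstrip("]"))
            prompt += f"Observation: {observation}\n"
    return "no answer"
```

With a real model behind `call_llm`, the structure is unchanged: every “search”, “reflect”, or “plan” step is prompted, parsed, and executed by the puppeteer code, which is exactly the point made above.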


Image: Today’s Agents are products of LLM core plus external scaffolding.

04. A Pragmatic Future: Struggling on “Scaffolding” and Coexisting with “Selfless” Intelligence

Recognizing reality is not pessimistic. On the contrary, it points us to the most pragmatic path.

Betting on intelligence first requires clarifying the intelligence boundaries of this wave of LLM-based agents. We need to abandon unrealistic fantasies of artificial general intelligence (AGI) and instead focus on how to better construct and utilize these “selfless” agents.

This means there are no universal agents; vertical-industry agents still need to struggle on scaffolding. The core competitiveness of the future will lie in who can design the most efficient, stable scaffolding, most congruent with business logic, for specific industries and scenarios. This requires deep domain knowledge, process understanding, and engineering capability.

This also explains why employees in fixed roles who do not keep learning will be the first to be replaced by agents: current Agents excel at replacing tasks that follow fixed “roles” and “processes”. Meanwhile, positions that require genuine intent understanding, cross-domain creativity, and complex interpersonal interaction will remain humans’ core value territory for a long time.

Conclusion

From Agent to Agentic AI, the endpoint is “symbiosis”. At the end of the speech, the vision extended to a further future: true Agentic AI. It needs intrinsic motivation, persistent state, collaborative ability, and environmental awareness, and must be capable of coherently simulating the world, the self, and its own behavior internally.

This may be the ultimate direction of AI evolution, but its purpose is not replacement; it is “symbiosis”.

Prediction is the starting point of intelligence; symbiosis is the destination of intelligence. AGI is not the evolutionary endpoint of AI; the state of coexistence between humans and AI is. The “agency” we feel today is a psychological projection based on language, but this is precisely the first step towards a future of human-machine symbiosis.

Let us set aside deification and anxiety, and embrace this intelligence revolution driven by “scaffolding” with a pragmatic attitude.

Original Link

– Related X articles and videos

Appendix:𝕀²·ℙarad𝕚g𝕞 Intelligent Square Paradigm Research

Following the research path of phenomena-engineering-mathematics, from artificial intelligence to general intelligence

H𝕀: Humanity Intelligence [Sys1&2@BNN]

A𝕀: Artificial Intelligence [LLM@ANN]

𝕀²: H𝕀 𝕩 A𝕀 [bio- | silico-]

ℙarad𝕚g𝕞: Cognitive paradigm or BNN cognitive large model

A𝕀 and H𝕀 are currently playing a language game. The biggest problem for A𝕀 is the known white point and the unknown black outside it; the biggest problem for H𝕀 is the constantly evolving rational white of sys2 within the black of sys1.


Previous Recommendations


AI Square Paradigm Think Tank·Symbiotic Intelligence: Discussing the value mapping relationship (value grounding) between human cognition and large models


AI Square Paradigm Think Tank·Language Interview Series: Understanding AI also requires supplementing the essence of language.


AI Square Paradigm Think Tank·Mathematics Series E03S01 | The Mathematics Behind Neural Networks


AI Square Paradigm Think Tank·Cognitive Construction Path | A𝕀²ℙarad𝕚g𝕞 V4 Business New Paradigm Interpretation



AI Square Paradigm Think Tank
