Whenever AI Agent frameworks and standards come up, many people likely feel a mix of clarity and confusion. That is because the ceiling for building frameworks is very high: a project can quickly scale to 300M, but if the framework fails to deliver, the consensus can collapse and it is very likely to fall into the abyss. So why have AI Agent framework standards become a battleground? And how do we judge whether a given framework standard is worth investing in? Below is my personal understanding, for reference:
1) AI Agents are products born purely out of the web2 internet: Large Language Models (LLMs) trained on vast amounts of data eventually produced interactive AIGC applications such as ChatGPT, Claude, and DeepSeek.
The focus throughout is on “application”-layer logic; what is fundamentally missing is an answer to how Agents communicate with one another, how to establish a unified data-exchange protocol, and how to build verifiable computation mechanisms (a hypothetical sketch of such a message format follows below).
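To make that gap concrete, here is a minimal sketch of what a unified agent-to-agent data-exchange envelope with a simple verifiability hook could look like. It assumes a hypothetical TypeScript message format; none of the field or function names below come from any existing framework’s spec.

```typescript
// Illustrative sketch only: one possible shape for an agent-to-agent message
// envelope with a verifiability hook. All names here are hypothetical and are
// not taken from any existing framework.

import { createHash } from "crypto";

// Hypothetical envelope: who sent it, what it carries, and a digest that a
// third party (or an on-chain verifier) could recompute and check.
interface AgentMessage {
  from: string;        // sender agent identifier (e.g. a DID or an address)
  to: string;          // recipient agent identifier
  intent: string;      // e.g. "task.request" or "task.result"
  payload: unknown;    // arbitrary structured data being exchanged
  payloadHash: string; // sha256 of the serialized payload
  timestamp: number;   // unix ms, for ordering and replay checks
}

// Hash the serialized payload so any party can recompute the digest.
// (A real protocol would need canonical serialization; JSON.stringify
// is used here only to keep the sketch short.)
function hashPayload(payload: unknown): string {
  return createHash("sha256").update(JSON.stringify(payload)).digest("hex");
}

function buildMessage(
  from: string,
  to: string,
  intent: string,
  payload: unknown
): AgentMessage {
  return {
    from,
    to,
    intent,
    payload,
    payloadHash: hashPayload(payload),
    timestamp: Date.now(),
  };
}

// A receiving agent can at least verify that the payload was not altered.
function verifyMessage(msg: AgentMessage): boolean {
  return hashPayload(msg.payload) === msg.payloadHash;
}

const msg = buildMessage("agent:alice", "agent:bob", "task.request", {
  query: "summarize today's governance proposals",
});
console.log(verifyMessage(msg)); // true
```

The point is only that nothing like this is standardized across Agent frameworks today; each project defines its own ad-hoc format.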
In essence, expanding AI Agent frameworks and standards means evolving from centralized servers to decentralized collaborative networks, from closed ecosystems to open, unified standard protocols, and from standalone AI Agent applications to complex, interconnected ecosystems built on web3 distributed architecture.
The core logic is simple: AI Agents must look for commercial prospects within web3’s modular, chain-based ideas. If the starting goal is “framework standards,” then a distributed architecture consistent with web3 must be built first; otherwise the result is merely a web2 application market competing on compute and user experience.
Thus, AI Agent frameworks and standards have become a key battleground in this wave of AI + Crypto narrative, with unimaginable potential.
2) AI Agent frameworks and standards are still at a very early stage. It is no exaggeration to say that listening to various developers describe their technical visions and implementation roadmaps today feels like listening to Vitalik Buterin pitch for funding in China ten years ago.
Imagine if Vitalik stood in front of you ten years ago; how would you judge him?
1. Look at the founder’s charisma, which matches the “bet on people” logic of most first-round angel investments. For example, when shawmakesmagic was criticized as a loudmouth, seeing his sincere engagement with the community might make you want to place the same bet as a16z; likewise, the way Kye Gomez of Swarms kept up consistent technical discussion through waves of scam FUD might win you over.
2. Assess the technical quality. A facade can be dressed up, but dressing it up has a cost. A project with solid technical quality is worth the FOMO, worth investing in with a “donation” mindset, and worth the effort of following and researching. Look at, for instance: the quality of the GitHub code, the project’s reputation in the open-source developer community, whether the technical architecture is logically coherent, whether the framework has already been applied in practice, and the rigor of the technical white paper.
3. Evaluate the narrative logic. The AI Agent track is gradually drifting toward a “chain-based” narrative, and you will notice more and more established chains embracing AI Agent stories. Of course, original frameworks like ElizaOS, arc, Swarms, and REI are also exploring the possibility of going “chain-based”; Focai, for instance, is a project that grew out of the community’s exploration of building the ElizaOS framework “on-chain.” A good narrative carries inherent momentum because it embodies the expectations of the entire Crypto market. Conversely, if a project shows up claiming it will solve, in the short term, AI problems that even web2 cannot crack, would you believe it?
4. Assess ecosystem implementation. Framework standards sit very far upstream; in most cases it is better to abstract a framework standard out of an already-working standalone AI Agent. ZerePy, for instance, was launched after Zerebro: a framework that empowers an existing standalone Agent is naturally stronger than a new framework token launched on its own, which only splits consensus. However grandly a framework and standard are presented, they must ultimately be judged by the actual delivery of the AI Agent project (the team’s execution capability and iteration speed) and by whether an ecosystem actually lands on them, because that is the lifeblood of sustainable growth.
In short, the current fight over frameworks and standards is about who becomes the EVM of the AI Agent narrative, and who becomes the higher-performance SVM that outdoes the EVM. And if, along the way, an equivalent of Cosmos IBC emerges, then a Move-style new DeFi paradigm, then a parallel EVM and a real-time, large-scale concurrent layer2, just think about how long this road still is.
Frameworks and standards will keep emerging, each stronger than the last, which makes choosing between them difficult.
I only look at developer activity and the projects’ actual delivery. Without delivered results, a short-term surge is merely an illusion. If you can see “certainty,” it is never too late to invest: the valuation ceiling for AI Agents can reach “public chain” level, with opportunities above 10B possible, so there is no need to rush.
3) The boundaries of AI Agent frameworks and standards are still quite blurred. The ElizaOS framework standard, for instance, can for now only be described as a spiritual totem of the developer community prior to platformization, and its value spillover still depends on a16z backing it. The Game framework standard, meanwhile, is still playing inside a closed-source virtual model, which looks somewhat unorthodox next to mainstream open-source, composable architectures.
Moreover, while ElizaOS is indeed the current star framework, there is also an independent ELIZA whose relationship to it remains unclear. The arc RIG framework has solid fundamentals, but using Rust to chase performance in the AI Agent field feels ahead of its time. Swarms’ technical quality is decent, yet such a turbulent, FUD-ridden, panic-inducing start was unexpected. The compatibility problem REI wants to solve, between blockchain determinism and the probabilistic nature of Agent execution, is very interesting, but its technical direction also seems rather ahead of the curve.
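For readers unfamiliar with that tension: a chain needs every node to compute the same result, while an LLM-driven Agent is probabilistic and will not. One common workaround, shown here purely as a generic illustration and not as REI’s actual design, is to put only a commitment to the Agent’s output on-chain and verify any revealed output deterministically against it:

```typescript
// Generic illustration, not REI's actual design: a deterministic chain cannot
// re-run a probabilistic LLM call and get the same answer, so one common
// pattern is to record only a commitment to the Agent's output on-chain and
// verify any revealed output against that commitment. Names are hypothetical.

import { createHash } from "crypto";

const sha256 = (s: string): string =>
  createHash("sha256").update(s).digest("hex");

// Off-chain: the Agent produces a non-deterministic answer; we commit to it.
// The returned digest is the only thing that would need to live on-chain.
function commitToOutput(agentOutput: string, salt: string): string {
  return sha256(salt + agentOutput);
}

// Later, anyone holding the revealed output and salt can check it
// deterministically; every verifier reaches the same yes/no answer.
function verifyReveal(
  commitment: string,
  agentOutput: string,
  salt: string
): boolean {
  return sha256(salt + agentOutput) === commitment;
}

const output = "Recommended action: rebalance 20% into stables"; // probabilistic LLM result
const salt = "f3a1c9d2";                                         // hypothetical random nonce
const commitment = commitToOutput(output, salt);

console.log(verifyReveal(commitment, output, salt));            // true
console.log(verifyReveal(commitment, "tampered output", salt)); // false
```

Whether a given framework tackles this with commitments, consensus over model outputs, or something else entirely is exactly the kind of technical detail worth checking in its documentation.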
The above are only the frameworks and standards the market recognizes as having “technical quality.” There are many more, such as Nexus, LangGraph, Haystack, and AgentFlow, and countless projects claim to be framework standards, whether they focus on low-code easy deployment, native multi-chain compatibility, enterprise-level customization potential, or even the AI Metaverse. This all points to the current “no standard” state of framework standards. It is much like Vitalik proposing to scale Ethereum: it produced exploratory directions such as Plasma, Rollup, Validium, and Parallel, yet in the end only Rollup became mainstream.