Why AI Agents Have Opportunities in B2B

Authors | Gao Jia, Wang Yi
In just a year since the birth of Agents, they have become a battleground fiercely contested by tech giants and startups alike.
However, most Agents on the market do not seem to strictly meet the expectations of the business community. Even OpenAI’s GPTs are essentially just chatbots for specific knowledge bases or data. These intelligent agents, which draw on contextual information for data analysis and code debugging, are merely lightweight personal assistants.
Besides the widely discussed security issues such as soft pornography, fake official accounts, and rampant fake orders, the lack of necessary user demand and deep scenario integration has prevented the emergence of disruptive killer applications in the B2C space. Many GPTs have become mere “toys” for the masses. Moreover, there is still much room for improvement in the areas of program linkage and automated workflows for GPTs.
In the early stages of large models, what kind of Agent is truly needed for commercialization? In what scenarios can Agents provide their maximum value?
When we shift our focus from B2C to B2B, a more promising answer seems to emerge.

01.

B2B: The True Battlefield for Agents

At the 2024 Sequoia Capital Artificial Intelligence Summit, Andrew Ng delivered a speech on Agents, proposing four main capabilities of Agents: Reflection, Tool Use, Planning, and Multi-agent Collaboration. He emphasized the importance of AI Agent workflows, predicting that they will become a significant trend.
Entrepreneur and platform economy researcher Sangeet Paul Choudary also mentioned in a March article this year that Agents create a possibility for re-integrating scenarios, allowing AI players in vertical domains to coordinate across multiple workflows for horizontal development, which will reshape the B2B value chain.
Compared to scattered individual users, enterprise users typically face more complex business requirements, with clearer business scenarios, business logic, and a wealth of industry data and knowledge accumulation. This makes the B2B field an excellent stage for Agents to showcase their autonomy, perception, understanding of the environment, decision-making, execution, interaction, and tool usage.
In our previous article, “Who Will Become the ‘App Store’ for B2B AI Applications?”, we proposed that in the era of mobile internet, the App Store is arguably the most powerful ecological platform in history; similarly, the era of large models also requires such a vibrant ecological platform to close business loops and accelerate industries. In other words, we need an “Agent Store in the B2B field” to empower enterprises and reduce costs while increasing efficiency.
So, what kind of company can successfully create this “Agent Store”?
Andrew Ng and Sangeet provide a near-standard answer: companies that can intervene in enterprise clients’ workflows and those that have accumulated vertical industry data, ideally with their own large models for adaptation and empowerment. LLMs are the backbone of Agents.
All of this seems to point towards collaborative office platforms.
Collaborative office platforms represented by DingTalk, Feishu, and WeChat Work are not only a combination of PaaS and SaaS but also possess good API interfaces and plugin systems. They can be firmly embedded in enterprise workflows through various forms of products such as instant messaging, video conferencing, scheduling, task management, and collaborative documents. Moreover, through years of cultivation, they have accumulated enterprise data assets across multiple industries and tracks. With application scenarios, industry data, and their own large models, they are a natural growth platform for an “Agent Store”.
Before entering the B2B battlefield of Agents, let’s first look at how Agents have evolved since their inception a year ago.

02.

From Copilot to Agent: The Advancing AI Assistant

Today’s Agents are the product of an evolution “from Copilot to Agent”.
In the past year, the Agent field has developed rapidly, backed by large models. Although there is still a significant distance to true autonomous intelligence, the explosive trend of Agents in the industrial sector has become quite apparent. Platforms building around the Agent ecosystem have begun to take shape, attracting developers from various industries. We see that Agents have gradually evolved from the early Copilot mode to a form with more autonomous intelligence.
Copilot is a low-level assistant, while an Agent is a high-level agent; “high-level” here means the Agent is an autonomous AI entity. In other words, a Copilot is human-led with AI assistance, while an Agent is AI-led with human supervision.
If we compare this to the levels of autonomous driving, L2 level assisted driving belongs to Copilot, while L4 level driving belongs to the Agent, with L3 being the transitional stage from Copilot to Agent.
In the evolution from Copilot to Agent, several key advancements in the underlying large models empower Agents:
1. The application of RAG (Retrieval-Augmented Generation) allows Agents to utilize external knowledge and timely information to supplement their shortcomings;
2. With the rapid advancement of long context in large models, Agents’ capabilities in handling complex scenarios and multi-turn dialogues have greatly improved. This advancement has broken the previous bottleneck of Agents’ insufficient memory capabilities, and now Agents can reason within long contexts, directly describing complex process logic and its conditional branches in the window;
3. Connection to a growing number of external tools, such as plugins and APIs. Equipped with these tools, intelligent assistants are accelerating their evolution from copilot to true intelligent entity;
4. High-level capabilities unique to Agents, such as autonomous planning, environmental interaction, and error reflection, are still in the exploratory stage but have recently made significant progress, especially reflected in the establishment and promotion of “Agent platforms”. Agent platforms provide developers with a natural language prompt engineering development environment, allowing iterative optimization of Agents through human-machine dialogue in context windows. Developers can “train” Agents for specific tasks and release them through the platform, helping to form the Agent ecosystem, with the release of GPTs and the GPT Store being a typical example.
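The RAG mechanism in point 1 above can be sketched in a few lines. This is a minimal illustration, not any platform’s actual implementation: the keyword-overlap retriever stands in for a real vector store, and `llm` is a placeholder for an actual model call.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query
    (a stand-in for real embedding-based retrieval)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def llm(prompt: str) -> str:
    """Placeholder for a real large-model call."""
    return f"[model answer grounded in prompt of {len(prompt)} chars]"

def rag_answer(query: str, documents: list[str]) -> str:
    # Augment the prompt with retrieved external knowledge before generation.
    context = "\n".join(retrieve(query, documents))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return llm(prompt)

docs = [
    "DingTalk opened an AI assistant market in 2024.",
    "RAG supplements a model with retrieved external knowledge.",
    "Long context lets agents reason over complex multi-turn dialogue.",
]
print(rag_answer("What does RAG supplement a model with?", docs))
```

The point of the pattern is the middle step: the model’s shortcomings in timely or private knowledge are supplemented by retrieval before generation, exactly as described above.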
The biggest difference from Copilot to Agent lies in the capabilities of “autonomous planning” and “environmental interaction”. While Copilot relies on human prompts to assist users, the Agent empowered by large models possesses fully automated capabilities for autonomous memory, reasoning, planning, and execution for its target tasks. In principle, it only requires the user’s initial instruction and feedback on the results, with no need for human intervention during the process.
As shown in the figure below, the Agent is the model’s autonomous behavior, operating “unmanned”; the involvement of humans and external tools serves as the interaction between the environment and the Agent.
[Figure: the Agent operates autonomously (“unmanned”); humans and external tools interact with it as the environment]
Specifically, in current mainstream Agent implementations, “autonomous planning” shows up in how developers build Agents, which differs from traditional software engineering: traditional software engineering requires implementing a specific, machine-executable algorithm in a programming language. To build an Agent, developers no longer need to supply a specific algorithm, nor use a programming language, not even pseudocode; they only need to define the task (its inputs and outputs) in natural language to trigger the Agent’s autonomous planning, which executes the task and yields the initial version of the Agent.
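The contrast with traditional software engineering can be made concrete with a small sketch. Everything here is hypothetical: `plan` stands in for a large model that decomposes a natural-language task into steps, and the tool registry is invented for illustration. The point is that the developer supplies only the task, not the algorithm.

```python
def plan(task: str) -> list[str]:
    """Placeholder planner: a real Agent would ask an LLM to decompose
    the task; here a canned decomposition keeps the sketch runnable."""
    return ["search", "summarize", "report"]

# Hypothetical tool registry; each tool transforms the working state.
TOOLS = {
    "search": lambda state: state + ["raw results"],
    "summarize": lambda state: state + ["summary"],
    "report": lambda state: state + ["final report"],
}

def run_agent(task: str) -> list[str]:
    state: list[str] = [task]
    for step in plan(task):         # the Agent, not the developer, chooses the steps
        state = TOOLS[step](state)  # each step is executed via a tool call
    return state

print(run_agent("Write a market report on B2B agents"))
```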
The “environmental interaction” capability shows up in the two types of output the Agent produces as sample inputs drive it from the initial version toward a “product” that can be listed on the platform:
One is error messages, indicating that the planning path of the Agent has issues, similar to syntax errors in traditional programming;
The other is unsatisfactory output results, akin to logic errors in traditional programming. At this point, developers can provide specific feedback, indicating what the expected output corresponding to the sample input should be.
Both types of information can be directly fed back to the Agent on the development platform; as part of the interaction between the Agent and the environment, the Agent will reflect on the errors fed back by the environment and try to correct them in the next iteration. This cycle allows for the creation of a usable Agent that can be listed as a product on the platform. This is the “internal iteration” of the Agent and the environment.
After the Agent is released, the environmental feedback during actual user usage constitutes the “external iteration” of the interaction between the Agent and the environment. Like “internal iteration”, “external iteration” can also be directly fed back to the Agent, allowing the Agent to self-improve and align with user preferences, iterating new online versions. The process of external iteration marks the establishment of the environmental data flywheel.
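The internal-iteration loop described above can be sketched as follows. The `agent` stub is a hypothetical stand-in for an LLM-driven Agent; a real platform would feed corrections back through the context window rather than a Python list.

```python
from typing import Optional

def agent(prompt: str, corrections: list[str]) -> str:
    """Stub Agent: a real one would call a model. It returns a draft
    first, then echoes the latest correction so the loop is observable."""
    return corrections[-1] if corrections else "first draft"

def environment_feedback(output: str, expected: str) -> Optional[str]:
    """Return feedback when the output is unsatisfactory, None when accepted."""
    return None if output == expected else f"expected '{expected}'"

def iterate(prompt: str, expected: str, max_rounds: int = 3) -> str:
    corrections: list[str] = []
    output = agent(prompt, corrections)
    for _ in range(max_rounds):
        feedback = environment_feedback(output, expected)
        if feedback is None:          # environment accepts: Agent is releasable
            return output
        corrections.append(expected)  # feed the expected output back in
        output = agent(prompt, corrections)
    return output

print(iterate("summarize the repair order", "summary text"))
```

The same loop, driven by real user feedback after release instead of developer-supplied samples, is the “external iteration” described below it.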
From a technical evolution perspective, we have witnessed OpenAI’s transition from the opening of plugin functionalities to the ecological construction of the GPTs platform, as well as Microsoft’s leap from GitHub Copilot to Microsoft 365 Copilot. The traditional process-oriented application development in the industry is beginning to evolve towards an end-to-end development paradigm like Agents, empowered by large models.
From a product form evolution perspective, we see a transition from single-function coding assistants (like GitHub Copilot) to Agent platforms like AutoGPT, and then to the release of multi-Agent frameworks like MetaGPT and AutoGen, as well as OpenAI’s release of the Assistant API for Agent development. The development tools and platforms for Agents are becoming increasingly user-friendly, and the capabilities of Agents are likewise enhanced.
Among domestic platforms, especially collaborative office platforms, we find that over the past year DingTalk’s development has tracked the trend of large models and Agents almost step by step, combining Agent technology with real enterprise scenarios to rapidly establish a development platform and ecosystem for AI assistants.
Over the past year, DingTalk has taken the lead in the industry by transforming its products with large models, AI-ifying 20 product lines, and achieving good applications in intelligent Q&A and data inquiries within enterprises. Furthermore, DingTalk’s own large model, Tongyi Qianwen, has also been rapidly evolving, for example, with long text and multi-modal capabilities, solidifying the model foundation for Agent evolution. At the same time, relying on DingTalk’s advantages as a collaborative platform and its engineering capabilities in workflows and AI PaaS, its Agents have gradually achieved integration with business processes and data.
DingTalk’s exploration of Agent technology has always focused on actual enterprise needs, and its differentiated advantage lies in attracting a large number of B2B users through the office needs of various industries (the “greatest common divisor” of enterprises), accumulating a massive amount of applications and data under its unified platform framework. Recently, DingTalk launched its own “Agent Store” (named “AI Assistant Market”), which already has over 200 AI assistants.
This customer stickiness and the accumulation of massive user data give DingTalk a natural advantage in the practical application of Agents.

03.

Who Has the Best Chance of Succeeding in Agent Development?

Why is a vast user base the foundation for developing Agents?
One important indicator to test the effectiveness of Agent operation is its “information retrieval” capability, which is also why RAG technology is highly valued. It allows Agents to utilize external knowledge and timely information to provide users with more precise and relevant answers and services.
This requires Agents to grow on a platform with massive data, ideally with enough plugins and API tools for the Agent to call upon, maximizing the retrieval and understanding capabilities of Agents to enhance their action capabilities.
In other words, the volume of user data almost determines the “product ceiling”.
This is precisely the cornerstone of DingTalk’s enormous advantage—based on a strong ecosystem and user data, allowing for more optimization space for products.

Since DingTalk entered the AI space a year ago, 2.2 million enterprises have enabled DingTalk AI, covering numerous industries such as K12, manufacturing, retail, real estate, services, and the internet. All of these have accumulated rich data for DingTalk’s AI platform, and the “AI Assistant Market” within DingTalk features templates derived from various scenarios, allowing users to copy them as starting points for new scenarios, enhancing the “universality” of Agents born on the DingTalk platform.

The second factor for developing Agents is large models, as product Agents cannot be separated from the empowerment of large models, making “product-model integration” inherently advantageous.
As mentioned earlier, Agents represent an end-to-end large model product development paradigm. Traditional AI products generally adopt a process-oriented pipeline system architecture, with modules relying on and linking to each other, resulting in many intermediate results between the Input and Output ends; while the ideal large model product is end-to-end, with product iteration improvements automatically enhanced through backflow data during the process.
The end-to-end development poses a significant challenge for many “product-model separation” companies, while a few “product-model integration” companies provide possibilities for end-to-end training:

On one hand, products continuously collect user-consented event-tracking (“buried point”) feedback data, which flows back into the integrated large model’s training for user alignment, improving the quality of model data;

On the other hand, the continuously iterating model enhances product experience optimization, attracting a larger user base with products aligned with user expectations, leading to more data backflow. This data barrier and user stickiness prevent being crushed by upgrades of other general large models.

DingTalk itself is a true “product-model integrated” company. It has its own large model and is developing its own Agent products.
“Product-model integration” is crucial for AI companies. In our article, “Why Is ‘Product-Model Integration’ a Better AI Company Model?”, we mentioned that companies with both products and models are more likely to form a “data flywheel”, enhancing their core competitiveness.
Products play a critical “guiding” or “lighthouse” role for models: first, product demand can guide the direction of product optimization; second, products help verify the actual performance of models.
For DingTalk, the “AI Assistant Market” based on massive data serves as that guiding lighthouse, focusing its model training objectives.
The third factor for successfully developing Agents is the engineering capabilities of the platform.
When DingTalk launched the “AI Assistant Market”, its Agents’ capabilities had already undergone significant upgrades. For instance, in terms of action systems, the AI assistant’s “human-like operation” capabilities were greatly enhanced. The AI assistant can automate page operations after observing the user’s operation path, improving the efficiency of high-frequency business actions, such as allowing DingTalk AI assistants to automatically input customer information and submit repair orders with a single command, and also supporting jump-linking to external web applications like Fliggy to autonomously complete tasks like booking flights and hotels.

Similarly, regarding workflows, to enable the AI assistant to handle more complex tasks, DingTalk has incorporated workflows into the assistant creation process. Users can break down tasks and orchestrate execution actions for the AI assistant to complete, making the task completion results more accurate and controllable. Human-like operations, workflows, and connections to external APIs and systems are all advanced capabilities of Agents, further expanding their action capabilities.
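The workflow idea can be sketched as a simple step runner. The step names and handlers here are hypothetical, loosely modeled on the repair-order example above; a real assistant platform would dispatch to actual APIs rather than local functions.

```python
from typing import Callable

# A workflow is an ordered list of (name, handler) pairs; each handler
# reads and extends a shared context dict.
Workflow = list[tuple[str, Callable[[dict], dict]]]

def fill_customer_info(ctx: dict) -> dict:
    # Hypothetical parsing: take the customer name after " for ".
    ctx["customer"] = ctx["command"].split(" for ")[-1]
    return ctx

def submit_repair_order(ctx: dict) -> dict:
    # Stand-in for a real order-submission API call.
    ctx["order_id"] = f"RO-{len(ctx['customer'])}"
    return ctx

def run_workflow(command: str, workflow: Workflow) -> dict:
    ctx = {"command": command}
    for name, step in workflow:  # orchestrated, deterministic step order
        ctx = step(ctx)
    return ctx

repair_flow: Workflow = [
    ("fill customer info", fill_customer_info),
    ("submit repair order", submit_repair_order),
]
print(run_workflow("submit repair order for Alice", repair_flow))
```

Because the user, not the model, fixes the decomposition and ordering, results stay accurate and controllable, which is exactly the trade-off workflows buy over free-form autonomous planning.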

The “universality” of collaborative office platforms, the “usability” of powerful large models, and the “certainty” of extensive engineering capabilities are all the foundations that give DingTalk an advantage in developing AI assistants.

04.

Vertical Depth or Horizontal Development?
Based on large AI models, the product forms that can be derived include open MaaS platforms and intermediate-layer products represented by AI Infra; within this blue-ocean field there is also a branch of vertical deep development. So why does DingTalk choose to promote the Agent ecosystem and build an Agent market that covers industries horizontally?
One insight may answer this question: in the long run, one of the winning methods for vertical solutions is horizontal development.
Deep vertical fields remain a blue-ocean market, likely to be divided between two camps: one enters horizontally, while the other goes deep vertically, building industry-specific large models on top of general models and then developing Agents for industry scenarios.
It is hard to say the latter will inevitably be crushed by the former: players who enter horizontally usually cannot build a dedicated industry model for every vertical field; at best they can temporarily enhance with scenario data, through fine-tuning and in-context learning, without significantly changing the foundational model.
Recently, entrepreneur and platform economist Sangeet Paul Choudary proposed a viewpoint in his blog, stating that Agents create a possibility for re-integrating scenarios, enabling AI players in vertical domains to coordinate across multiple workflows for horizontal development, which will reshape the B2B value chain.
Similarly, drawing a parallel with SaaS, the rise of vertical SaaS previously followed two logics:
First, capturing core scenarios to achieve rapid development; Second, extending scenarios around the core.
For example, Square started with payment SaaS and gradually expanded into a dual ecosystem of B2B and B2C, developing different product lines such as developers, virtual terminals, sales, e-commerce, customer management, invoicing, stock investment, installment payments, and virtual currency, covering various industries like catering, retail, finance, and e-commerce, becoming a comprehensive SaaS solution provider.
Another example is Toast, which expanded from providing POS machines to restaurants as a single-point solution to a comprehensive restaurant SaaS platform that includes software (restaurant management, channels, ordering, delivery, payroll management, marketing, scan-to-order), hardware (fixed terminals, handheld terminals, contactless terminals), and supporting services (after-sales, micro-loans).
It is evident that SaaS giants like Square and Toast have followed a development strategy of expanding from vertical to horizontal.
Sangeet believes that most “disruptions” to the status quo (which can be understood as innovation) occur through deep exploration of segmented scenarios, but most venture capital returns are realized through “integration”.
Fragmentation alone does not yield sustainable value. For example, many VCs may initially focus on innovators in segmented scenarios, but ultimately most of the gains are captured by the “integrators” that build ecosystems.
To obtain value at scale, software companies need to continuously extend scenarios—ultimately, all vertical games seek horizontal development.
This principle may also apply to Agents. Although Agents possess excellent perception, reasoning, and action capabilities, and applying them in vertical fields can quickly and effectively solve pain points, this alone does not constitute a moat. The true moat lies in the interaction and cooperation between Agents once the underlying data is integrated, that is, Agents re-integrating workflows across APIs, ultimately enhancing the quality and efficiency of the entire system.
The AI Agent Store, or AI Assistant Market, is precisely the embodiment of this “integration” and “unification”. This is also the strategic layout behind DingTalk’s launch of AI assistants—DingTalk aims to transform the entire B2B ecosystem with the AI assistant market built on its “Hub”, maximizing quality and efficiency in the B2B field based on existing industry and data accumulation.
Over the past year, from the intelligent transformation of various product lines to opening up AI PaaS to ecological partners and customers, from AI Copilot to AI Agent, and then to AI Agent Store, DingTalk has paved a way for the scalable implementation of AI. In the current context where various industries are eager to find scenarios for large models, DingTalk provides a model for AI application implementation.
We believe that the application of Agents in the B2B field reflects the use of AI to accelerate the digital transformation of enterprises. The core issue addressed by the capabilities of Agents is “reducing costs and increasing efficiency”, a characteristic that also determines that AI assistants represented by DingTalk can have greater space for scalable application promotion in the B2B blue ocean.
As the autonomy of AI Agents strengthens further, Agents will evolve into more specialized agents, taking over much professional work and many skills. From a trend perspective, it is not far-fetched to say that large-model Agents may replace 90% of professional work, while in the remaining 10% Copilots will still assist human professionals.
In the further future, Agents may evolve into “universal intelligent agents”, completely replacing human jobs and integrating with more hardware products (not limited to embodied intelligence and humanoid robots). What kind of relationship will human civilization and AI Agents have at that time?
Everything is starting from the current battlefield of B2B Agents.
And who will be the biggest beneficiary of this technological wave?
Related Links:
In the second half of large models, several questions about Agents
Who Will Become the “App Store” for B2B AI Applications?
Is “Product-Model Integration” a Better Path for AI Companies?
Looking at Multi-modal, Agent, 3D Video Generation, and Autonomous Driving from the Robot Model RT-2

