Author | Azuma, Odaily
The AI Agent sector accelerated its pullback today. Apart from a handful of meme coins tied to the DeepSeek narrative, most of the leading Framework, Launchpad, and Agent tokens, VIRTUAL, AI16Z, AIXBT, ARC, and many smaller names among them, recorded double-digit declines.
While the decline is certainly related to broader market weakness, some investors also attribute the AI Agent sector's collective downturn to the explosive popularity of DeepSeek, arguing that as a far more economical new-generation LLM, DeepSeek will disrupt or even reshuffle the current Crypto AI landscape.
I personally do not support this view.
The core question being raised is whether the iterative development of LLMs will disrupt existing Agents. Setting aside the debate over whether DeepSeek's reasoning quality has actually surpassed that of ChatGPT and Claude, and assuming DeepSeek has stronger iteration potential thanks to its economic sustainability, will an improved DeepSeek spawn a new batch of Agents and turn the existing ones into yesterday's news?
To answer that, we first need to understand the relationship between an LLM and an Agent. Let's put the question to DeepSeek itself and to another homegrown standout, Kimi (frankly, Kimi's answer seems better on this point).
DeepSeek:
The relationship between an AI Agent and an LLM is that the LLM can serve as one of the Agent's core components, especially for natural language understanding and generation tasks. An AI Agent is an intelligent system that can autonomously execute tasks and interact with its environment, while the LLM provides the Agent with powerful language processing capabilities, enabling it to better understand and generate human language and thus play a role in scenarios such as dialogue, recommendation, and analysis. In short, the LLM is an important technical foundation for an AI Agent's intelligent language interaction.
Kimi:
Agents typically rely on LLM to understand and generate natural language. LLM provides powerful language processing capabilities, enabling Agents to engage in natural language interaction with users, understand user needs, and generate corresponding responses.
Agents also rely on capabilities beyond just LLM's language processing, such as decision-making, problem-solving, interaction with external environments, and action execution. LLM provides the foundation for Agents to understand and generate language, while Agents then plan and execute tasks based on this foundation.
Agents can also integrate with other tools and APIs to expand their functionality. For example, an Agent can call search engines, database query tools, or file read/write tools to complete more complex tasks. The LLM supplies the language understanding and generation capabilities in this process, allowing the Agent to interact with these tools more effectively.
In summary, Agents need an LLM to operate, but their output and interaction quality do not depend on the LLM alone. What truly differentiates one Agent from another are the capabilities layered on top of the LLM.
For example, the reason aixbt can outshine similar Agents in its output is that it has done a better job on prompt design, post-processing mechanisms, context management, fine-tuning strategy, randomness control, external tool integration, and user feedback loops. Call it first-mover advantage or a moat; this is aixbt's current edge.
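To make that division of labor concrete, here is a minimal, purely illustrative sketch (in Python, using the openai client library) of how an Agent layers prompt design, context management, randomness control, and post-processing on top of a swappable LLM backend. None of this reflects aixbt's actual code; every class and function name here is hypothetical.

```python
# Hypothetical sketch of the "beyond-LLM" layers that differentiate Agents.
from openai import OpenAI  # any OpenAI-compatible endpoint can sit behind this client


class Agent:
    def __init__(self, client: OpenAI, model: str, system_prompt: str):
        self.client = client                # swappable LLM backend
        self.model = model
        self.system_prompt = system_prompt  # prompt design
        self.history: list[dict] = []       # context management

    def ask(self, user_input: str) -> str:
        # Assemble context: system prompt plus a rolling window of conversation history.
        messages = (
            [{"role": "system", "content": self.system_prompt}]
            + self.history[-10:]
            + [{"role": "user", "content": user_input}]
        )
        response = self.client.chat.completions.create(
            model=self.model,
            messages=messages,
            temperature=0.3,  # randomness control
        )
        answer = self.post_process(response.choices[0].message.content)
        self.history += [
            {"role": "user", "content": user_input},
            {"role": "assistant", "content": answer},
        ]
        return answer

    def post_process(self, text: str) -> str:
        # Placeholder for filtering, formatting, fact checks, tool calls, etc.
        return text.strip()
```

The point of the sketch is that the LLM call is a single, replaceable line inside a much larger pipeline; everything around it is where an Agent's differentiation lives.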
After understanding this logical relationship, let's now answer the core question raised earlier: "Will the iterative development of LLM disrupt the existing Agents?"
The answer is no, because Agents can readily inherit the capabilities of a new generation of LLMs through API integration, improving interaction quality, boosting efficiency, and expanding their application scenarios, especially since DeepSeek itself exposes an API format compatible with OpenAI's.
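As a concrete illustration of what "inheriting a new LLM through API integration" can look like: because DeepSeek exposes an OpenAI-compatible API, an Agent built on the openai client library can, in principle, switch backends by changing little more than the base URL and model name. The sketch below is a generic example, not code from any particular Agent; the endpoint and model name follow DeepSeek's published documentation at the time of writing and should be verified before use.

```python
import os
from openai import OpenAI

# Before: an Agent calling OpenAI directly.
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# After: the same code path pointed at DeepSeek's OpenAI-compatible endpoint.
deepseek_client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = deepseek_client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Summarize today's AI Agent market action."}],
)
print(response.choices[0].message.content)
```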
In fact, the quicker-moving Agents have already integrated DeepSeek. Shaw, the founder of ai16z, noted this morning that Eliza, the AI Agent construction framework developed by the ai16z DAO, added support for DeepSeek two weeks ago.
Given this trend, it is reasonable to expect that, following ai16z's Eliza, the other major frameworks and Agents will quickly complete their own DeepSeek integrations. Even if a new crop of DeepSeek-powered Agents causes some short-term disruption, over the long run competition between Agents will still come down to the capabilities beyond the LLM discussed above, and the development groundwork accumulated through first-mover advantage will shine again.
Finally, here are some comments on DeepSeek from prominent voices, to shore up the conviction of those still defending the AI Agent sector.
Frank, the founder of DeGods, said yesterday: "The idea people have about this (DeepSeek disrupting the existing market) is wrong. Current AI projects will benefit from new models like DeepSeek; they just need to swap the OpenAI API call for DeepSeek, and their output will improve overnight. New models will not disrupt Agents, they will accelerate their development."
Daniele, a trader focused on the AI sector, said: "If you're selling AI tokens because DeepSeek's models are low-cost and open-source, you should realize that DeepSeek actually makes it much easier to bring AI applications to millions of users at low prices. This may be the best thing to happen to the industry so far."
Shaw also published a lengthy post this morning responding to the DeepSeek impact, opening with: "Stronger models are always good for Agents. For years, the various AI labs have been leapfrogging one another. Sometimes Google is ahead, sometimes it's OpenAI, sometimes it's Claude, and today it's DeepSeek..."