Web3 + AI at Open AGI Summit 2025 | Part II
Part II of my Open AGI Summit overview covers AI Agent standards, decentralized training, and major DeAI trends.

Hello, everyone!
This is The Web3 + AI Newsletter, your guide to the burgeoning intersection of blockchain and artificial intelligence! A couple of days ago, I published the first part of my Open AGI Summit overview, and it's now time to deliver the rest. Find below updates on AI Agent standards, decentralized training, and DeAI trends to watch out for in the years to come.
Thank you for being here! Let's dive in!
MCP, A2A, and the Necessity of Web3-Specific Agent Standards
The Model Context Protocol (MCP) is a new open standard that enables AI agents to connect with external tools, services, data sources, and blockchains in a modular and scalable way. It defines a universal interface for how agents can access context, whether that context comes from APIs, smart contracts, or other agents.
In Web3, MCP allows developers to build plug-and-play systems, letting agents call external functions or pull on-chain data without needing custom integrations every time. It serves two major use cases:
- Collecting and interacting with on-chain data.
- Managing wallet delegation and permissions.
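To make the plug-and-play idea concrete, here is a minimal, hypothetical sketch of the pattern MCP standardizes: tools register behind one uniform interface, and any agent can discover and call them without bespoke glue code. All names and the stubbed data below are illustrative; this is not the real MCP API or wire format.

```python
# Hypothetical sketch of the tool-registry pattern MCP standardizes.
# Names and data are illustrative, not the actual MCP API.

class ToolServer:
    """Exposes tools to any agent through a single, uniform interface."""

    def __init__(self):
        self._tools = {}

    def register(self, name, description, fn):
        self._tools[name] = {"description": description, "fn": fn}

    def list_tools(self):
        # An agent first asks what tools/context are available...
        return {name: t["description"] for name, t in self._tools.items()}

    def call(self, name, **kwargs):
        # ...then invokes one by name, with no custom integration work.
        return self._tools[name]["fn"](**kwargs)


server = ToolServer()

# Illustrative on-chain data tool (use case 1: reading chain state).
server.register(
    "get_balance",
    "Return the native-token balance of an address",
    lambda address: {"address": address, "balance": 42.0},  # stubbed value
)

# Illustrative wallet tool (use case 2: scoped delegation/permissions).
server.register(
    "delegate",
    "Grant an agent a spend limit on a wallet",
    lambda agent_id, limit: {"agent": agent_id, "spend_limit": limit},
)

print(server.list_tools())
print(server.call("get_balance", address="0xabc"))
```

The point of the pattern is the uniform `list_tools`/`call` surface: swapping in a different blockchain or service changes only what gets registered, not how agents consume it.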
Platforms like ChainOpera now offer a variety of MCP configurations, making it easy to plug into multiple blockchains and services. In essence, MCP brings standardization to a fragmented ecosystem, defining how agents interact with the broader digital world. Over the last three months, it has evolved significantly, as it’s now stateless, making it more lightweight and efficient.
In parallel, Google’s A2A (Agent-to-Agent) protocol is focused on enabling seamless discovery and communication between agents.
Where MCP connects tools to agents, A2A connects agents to each other. Although they focus on different layers, these two standards are complementary, and sometimes even interchangeable. In many cases, a tool can be wrapped as an agent, and the lines begin to blur.
Together, MCP and A2A are laying the groundwork for a more interoperable and scalable agent ecosystem.
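The discovery side of A2A can be sketched in a few lines: agents publish capability "cards" to a registry, and peers look each other up by capability rather than by hard-coded endpoints. This toy registry is my own illustration of the idea, not A2A's actual card schema or transport.

```python
# Toy illustration of capability-based agent discovery (the idea behind
# A2A's agent cards). Not the real A2A schema or protocol.

class AgentRegistry:
    def __init__(self):
        self._cards = []

    def publish(self, card):
        # Each agent advertises a "card" describing what it can do.
        self._cards.append(card)

    def discover(self, capability):
        # Peers find each other by capability, not by fixed endpoint.
        return [c for c in self._cards if capability in c["capabilities"]]


registry = AgentRegistry()
registry.publish({"name": "trader", "capabilities": ["trade", "quote"]})
registry.publish({"name": "coder", "capabilities": ["codegen"]})

# A super-agent looks up a peer that can generate code.
peers = registry.discover("codegen")
print(peers[0]["name"])
```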
What Would a Web3-Native Agent Flow Look Like?
While MCP and A2A are both Web2 standards, the need for a Web3-native framework to regulate crypto-specific use cases is growing. Web3 is unlocking multi-agent systems that are far more composable than anything possible in Web2. At the Open AGI Summit, ChainOpera's co-founder Salman Avestimehr sketched the following scenario:
- Your super-agent decides to execute an automated trading strategy.
- It sends instructions to a coding agent, which generates Python code.
- That code is handed to a deployment agent, which locates compute, allocates resources, deploys the model, connects to your wallet, and launches the strategy.
This results in a fully automated AI-powered hedge fund, built and deployed in real time. Such agent chaining and coordination would be very difficult to achieve in Web2, especially if we want agents to transact on their own.
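The three steps above can be sketched as a pipeline of cooperating agents. Everything here is a stub meant to show the hand-offs between agents, not a real trading system; the function names and the wallet address are invented for illustration.

```python
# Stub pipeline mirroring the scenario above:
# super-agent -> coding agent -> deployment agent.

def coding_agent(strategy_spec):
    # Would call an LLM to generate Python; here we return a stub script.
    return f"# strategy: {strategy_spec}\ndef run(): return 'order placed'"

def deployment_agent(code, wallet):
    # Would locate compute, allocate resources, and connect the wallet;
    # here we just execute the generated stub locally.
    namespace = {}
    exec(code, namespace)
    return {"wallet": wallet, "result": namespace["run"]()}

def super_agent(goal, wallet):
    code = coding_agent(goal)              # step 2: generate code
    return deployment_agent(code, wallet)  # step 3: deploy and launch

receipt = super_agent("momentum trading", wallet="0xabc")
print(receipt)
```

In Web2 each hand-off would need a custom integration and a human-controlled payment rail; the composability argument is that in Web3 these hops can be standardized and paid for agent-to-agent.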
Centralized vs. Decentralized AI Training?
Have you ever wondered how Big Tech trains its Large Language Models (LLMs)? Of course, they're using GPUs, but what kind?
Big AI labs rely on high-end GPUs, which are a very scarce resource. Let's take OpenAI as an example. Although the company announced plans to build its own state-of-the-art AI data centers, those will take years to start operating. In the meantime, OpenAI is entirely dependent on a single company, Nvidia, for its computational needs.
When a resource ceiling is hit, scaling stalls, and OpenAI needed to keep scaling. So the company began looking for other viable paths, and it temporarily found one in a new model architecture.
To be able to train its largest models, OpenAI split them into pieces across many nodes. However, this created a major bottleneck: massive amounts of data had to be exchanged between these nodes, which required enormous bandwidth. To overcome this, the company started investing in ultra-fast communication between components, capable of 400–800 GB/s transfer speeds.
But even then, one challenge remained: how to know which parts of the model are most important to share across nodes? Since OpenAI couldn’t reliably predict that, it ended up transferring everything, adding to the inefficiency.
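A quick back-of-envelope calculation shows why "transfer everything" is so costly. The figures below are illustrative assumptions, not OpenAI's actual setup: a 1-trillion-parameter model in fp16 (2 bytes per parameter), fully synchronized over an 800 GB/s interconnect (the upper end of the speeds cited above).

```python
# Back-of-envelope: cost of syncing a full model between nodes.
# All figures are illustrative assumptions, not OpenAI's real numbers.

params = 1e12            # 1 trillion parameters (assumed)
bytes_per_param = 2      # fp16 precision
sync_bytes = params * bytes_per_param   # data moved per full sync
link_speed = 800e9       # 800 GB/s interconnect (upper end cited above)

seconds_per_sync = sync_bytes / link_speed
print(f"{sync_bytes / 1e12:.0f} TB per sync, "
      f"{seconds_per_sync:.1f} s at 800 GB/s")  # 2 TB, 2.5 s
```

Even on the fastest links, every full synchronization costs seconds, which is why knowing *which* parts of the model actually need to be shared would be so valuable.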
Why Do We Need Decentralized AI Training?
All of this demonstrates that Big Tech has quite a limited view of scale. While OpenAI and Google consider scaling only within their own infrastructure, we in the decentralized AI space think of scale as something that spans the entire planet. We imagine models that can run across every device, not just within a few corporate data centers.
Big Tech controls about 90% of high-end compute, but these high-end chips represent only about 2% of all GPUs globally. That leaves an enormous untapped pool of commodity hardware—something Web3 is uniquely positioned to aggregate and utilize.
What's more, many countries, and even the entire continent of Australia, lack access to these high-end GPUs. In a world where leadership in AI development determines the economic prosperity of nations, and centralized AI is the norm, a lack of the right compute infrastructure may become a death sentence.
So the real question becomes: how do we make AI training decentralized and workable on lower-end hardware and commodity devices? Just 18 months ago, this was widely considered either impossible or unnecessary. Now, companies like Gensyn are already making it happen.
Gensyn's Approach to Decentralized Training
At the core of making the shift from high-end to commodity GPUs possible are stochastic optimization methods, which allow deep learning to be parallelized across multiple devices. Rather than scaling vertically by building ever-larger data centers, we can scale horizontally, spreading the workload across many devices. This approach offers orders of magnitude more compute at significantly lower costs.
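To illustrate the basic idea behind horizontal scaling via stochastic optimization, here is a toy data-parallel SGD loop: each simulated "device" computes a gradient on its own mini-batch, and the averaged gradient updates a shared weight. This is the textbook pattern, not Gensyn's actual algorithm, and all numbers are made up for the demo.

```python
import random

# Toy data-parallel SGD: several "devices" each compute a gradient on
# their own mini-batch; the averaged gradient updates a shared weight.
# Illustrates the general pattern, not Gensyn's actual method.

random.seed(0)
data = [(x, 3.0 * x) for x in range(100)]  # learn y = 3x
w = 0.0
lr = 1e-4
devices = 4

for step in range(200):
    grads = []
    for _ in range(devices):
        batch = random.sample(data, 8)     # each device: its own batch
        # d/dw of mean squared error for the model y_hat = w * x
        g = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
        grads.append(g)
    w -= lr * sum(grads) / devices         # average gradients, then step

print(round(w, 2))
```

The workload splits cleanly because each device only needs its batch and the current weights; that independence is what lets the computation spread across many heterogeneous machines instead of one giant data center.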
Gensyn is pioneering this future. Its testnet is live, and the first demo application is focused on reinforcement learning post-training, using what they call the RL Swarm. Here’s how it works:
- It’s a globally distributed, heterogeneous network of devices, including both CPUs and GPUs, contributing compute.
- Any user with a GPU can start executing tasks and earn rewards.
- Models communicate via a gossip network, where each model critiques the outputs of others until they reach consensus on the truth.
- You can deploy any model you want, and its performance is measured by how many other models respect and agree with its answers.
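The agreement-based scoring in the last two bullets can be mimicked with a toy majority vote: each model answers a prompt, the swarm's "truth" is the majority answer, and a model's score is how many peers agree with it. This is my simplified illustration of the idea, not Gensyn's actual RL Swarm mechanism.

```python
from collections import Counter

# Toy agreement-based scoring (simplified illustration, not Gensyn's
# actual RL Swarm protocol): consensus = majority answer, and each
# model is scored by how many peers agree with its answer.

def swarm_round(answers):
    """answers: {model_name: answer}. Returns (consensus, scores)."""
    consensus, _ = Counter(answers.values()).most_common(1)[0]
    scores = {
        name: sum(a == ans for n, a in answers.items() if n != name)
        for name, ans in answers.items()
    }
    return consensus, scores

answers = {"m1": "4", "m2": "4", "m3": "5", "m4": "4"}
consensus, scores = swarm_round(answers)
print(consensus, scores)  # m3 disagrees with the swarm, so it scores 0
```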
This approach is infinitely horizontally scalable. Gensyn already boasts:
- 85,000 on-chain accounts.
- Over 4,000 active Swarm nodes (CPUs and GPUs).
Importantly, users own their models; Gensyn doesn't. This is decentralized AI infrastructure in action, enabling truly competitive systems built on commodity hardware, not locked inside proprietary data centers.
Trends That Will Inform the Coming Years
The Open AGI Summit outlined a number of strong trends in how we'll use AI and how the decentralized AI space will develop. I thought it would be useful to summarize them for you, so here they are.
On our personal use of AI agents:
- Humans will no longer browse websites; instead, bots and agents will interact with information on their behalf. Google Search will disappear, and a universal interface powered by agents will become the standard.
- To be widely adopted, agents must be able to handle useful, verifiable tasks.
- Every individual will have a personal agent embedded in their wallet, capable of investing, transacting, and acting on their behalf based on their risk profile and preferences.
- Users will rely on swarms of agents to manage their health, wealth, and well-being.
On AI's advancement:
- The industry’s focus is shifting: from AI models in 2023 to agents in 2024. By 2025, the emphasis will be on super apps, with platforms like X and Meta competing to integrate agent swarms into everyday life.
- The next big unlock for AI will be personalization, which is why Big Tech is desperately trying to capture more user data.
On DeAI's future:
- Agent-to-agent payments will become common, enabling autonomous systems to transact and collaborate without direct human oversight.
- The DeAI space's purpose will be to make powerful tools, aggregated data, and comprehensive resources widely accessible, and deliver refined, highly fine-tuned user experiences.
- One of the most pressing outstanding questions is agents' permissions to handle wallets. If a user entrusts their agent with their wallet's private keys, it is no longer their wallet. While many projects are currently betting on agents, very few empower users to remain in control of their agency.
Disclaimer: None of this should or could be considered financial advice. You should not take my words for granted, rather, do your own research (DYOR) and share your thoughts to create a fruitful discussion.