Web3 + AI at Open AGI Summit 2025 | Part I

Discover the key takeaways from the Open AGI Summit at EthCC 2025, including Sentient’s vision for open-source AGI, and the future of AI agents and privacy-first AI.


Hello, everyone!

This is The Web3 + AI Newsletter, your guide to the burgeoning intersection of blockchain and artificial intelligence! Last week, the crypto world headed to Cannes for the Ethereum Community Conference (EthCC), Europe's largest Web3 event. As per tradition, I followed the DeAI chatter there and summarized the main highlights for you. Join me in exploring the narratives and trends shared at the Open AGI Summit, one of EthCC's largest side events.

Thank you for being here! Let's dive in!


Open AGI Summit

The Open AGI Summit gathered leaders and builders of open, community-aligned AI for the second year in a row. This time, it was sponsored by Amazon's AWS, Sentient, and Polygon, with support from Giza, Olas, Fraction AI, Gensyn, and more.

I attended the first Open AGI Summit in Brussels in 2024. As I watched this year's recording (see below), I reflected on how much the field had changed in just one year. Compute DePINs were all the rage back then, with decentralized inference widely considered both near-impossible and unnecessary. The DeAI space was niche and extravagant, and AI Agents hadn't yet taken center stage.

In 2025, things are different. DeAI is more mature, numerous novel use cases have emerged, AI Agents are everywhere, and tools and infrastructure have advanced immensely. Our goal is clear:

Verifiable, interoperable, community-driven, private AI.

However, we're not as far along as we'd hoped to be by now. The 2025 summit kicked off with a reminder that, over the last several months, the concentration of power in the top two AI players, OpenAI and Google, has intensified significantly. With it, the need for an open alternative keeps growing, but it's not clear how many of us have realized it yet.


Sentient's Vision for Open AGI

Sentient’s mission is to build open-source AGI, or essentially, an open, sovereign, and community-driven alternative to what OpenAI is doing. In other words:

Sentient is developing the foundational infrastructure—“the rails”—onto which the global ecosystem of agents and AI tools can plug in. Their aim is to create a competitive advantage over closed systems like OpenAI and Anthropic by fostering openness, interoperability, and innovation.

In its early phase, Sentient focused on mapping the Web3 landscape and identifying three core pillars of the ecosystem: data, compute, and agent/model builders.

  • Data: Sentient is aggregating as many data sources as possible to ensure that model builders have seamless access to diverse datasets directly through the platform.
  • Compute: The goal is to unify various compute providers, encouraging price competition, so developers can find the most affordable and efficient solutions.
  • Agents: The “holy grail” is the creation of a universal crypto assistant. Many teams are working on highly specialized agents focused on niche tasks or tokens. By integrating these into a unified interface, Sentient aims to deliver a vastly improved user experience.

In recent months, Sentient has expanded its focus into the Web2 world, positioning itself as the go-to platform for all AI needs. This includes AI assistants, specialized agents, and tooling, essentially curating and combining the best from both Web2 and Web3 ecosystems.

Himanshu Tyagi, Sentient's co-founder, emphasized that a shared economy and community incentives are critical to the company’s mission. He acknowledged Bittensor as a pioneer in this space, but also pointed out its limitations: namely, its one-way value flow. While Bittensor enables contribution, it doesn’t clearly benefit from the success of ecosystem projects or AI usage.

Sentient plans to address this with its own upcoming tokenomics model, designed to ensure that the AI built on its platform not only gets used, but also generates real demand and sustainable revenue.

One major challenge Sentient is tackling head-on is AI saturation and poor product discovery. There are too many tools and too little guidance. The upcoming Sentient chat interface will serve as a meeting ground between agent builders and users. It will incentivize both sides: builders can showcase their agents, while users can experiment and find the tools that meet their needs. The more popular your agent becomes, the more rewards you earn. Users, meanwhile, will pay either per query or through a subscription model.


The Future of AI Agents

The consensus among speakers from the Ethereum Foundation, EigenLabs, and Parry: the agent stack is still incomplete. When asked which primitive—code, spec, or hardware—is most essential for agents to go mainstream, the general sentiment was that time is the key ingredient. The tech is progressing, but it's not quite there yet.

By the end of the year, agents may be capable of handling tasks like booking a holiday (something Fetch.ai was working on a while ago), but they won’t yet be ready to manage million-dollar portfolios. The community is only now beginning to train agents using reinforcement learning (RL) for long-horizon tasks, a crucial step toward more capable and autonomous systems.

A major unlock still needed is the evolution of the crypto wallet, enabling it to integrate seamlessly with AI. This would allow agents to interact meaningfully with on-chain systems and financial primitives.

A growing trend in the space is the ambition to build genuinely useful agents, as opposed to the ubiquitous X reply-guy agents, i.e., automated responders with limited utility. But what makes an agent truly useful? The answer: verifiability. A useful agent is one that can:

  • Determine the truthfulness of a statement;
  • Predict whether a wallet is real or Sybil;
  • Complete other cryptographically provable tasks.
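One simple flavor of the "cryptographically provable" idea above is a commit-reveal scheme: an agent publishes a hash commitment to its claim up front, then reveals the claim later, so anyone can check it wasn't changed after the fact. The sketch below is illustrative only; the talks didn't specify a particular scheme, and the function names and example claim are my own.

```python
import hashlib
import secrets

def commit(claim: str) -> tuple[str, str]:
    """Commit to a claim: publish the digest now, keep the nonce secret."""
    nonce = secrets.token_hex(16)  # random salt so the claim can't be brute-forced
    digest = hashlib.sha256((nonce + claim).encode()).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: str, claim: str) -> bool:
    """Anyone can recompute the hash and check the revealed claim matches."""
    return hashlib.sha256((nonce + claim).encode()).hexdigest() == digest

# The agent commits before the outcome is known, reveals afterwards.
digest, nonce = commit("wallet 0xabc is Sybil")
assert verify(digest, nonce, "wallet 0xabc is Sybil")       # honest reveal passes
assert not verify(digest, nonce, "wallet 0xabc is real")    # a swapped claim fails
```

Real on-chain verifiability goes further (zero-knowledge proofs, attested execution), but the core property is the same: the claim is checkable by a third party, not merely asserted.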

As AI agents become more capable, questions about on-chain guardrails become more pressing. With agents managing private keys, new attack vectors are emerging. Solutions like Silence Laboratories, an AVS on EigenLayer that splits key access across multiple operators using MPC (multi-party computation), are being developed to mitigate risk and improve system resilience.
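The key-splitting idea can be illustrated with toy additive secret sharing: the key is split into shares so that no single operator ever holds it, and only all shares together recover it. This is not Silence Laboratories' actual protocol (production MPC signing, e.g. threshold ECDSA, never reconstructs the key at all); it's just a minimal sketch of the concept.

```python
import secrets

PRIME = 2**255 - 19  # a large prime field, as used by Curve25519

def split_key(secret: int, n: int) -> list[int]:
    """Split `secret` into n additive shares mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    # The last share is chosen so that all shares sum back to the secret.
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def recombine(shares: list[int]) -> int:
    """Recover the secret by summing every share mod PRIME."""
    return sum(shares) % PRIME

key = secrets.randbelow(PRIME)
shares = split_key(key, 3)           # one share per operator
assert recombine(shares) == key      # all three operators together recover it
assert recombine(shares[:2]) != key  # any strict subset learns nothing useful
```

The security property is exactly what the guardrail discussion calls for: compromising an agent (or any single operator) yields one random-looking share, not the key.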


Data Privacy and Sovereignty in AI Model Building

What's the level of privacy users have now with dominant models and apps, like ChatGPT? Zero.

As Venice's Teana Baker-Taylor put it, it's not the LLM that determines what's private; it's the platform it runs on.

Unlike ChatGPT, Venice stores neither your prompts nor the responses. Nothing is held on the GPUs; requests are simply relayed back and forth through a reverse proxy. Because Venice never has your data, it has nothing to protect or secure. Users can connect with a wallet and pay in crypto, so Venice doesn't even know who they are. You can even run DeepSeek on the Venice platform, and your data won't go back to China.
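The architectural difference is easy to see in miniature: a typical chat platform keeps every prompt in server-side history, while a stateless relay forwards the request and retains nothing. The sketch below is a toy contrast under my own naming, not Venice's actual implementation; `model` stands in for the GPU inference backend.

```python
def model(prompt: str) -> str:
    """Stand-in for the inference backend running on GPUs."""
    return f"reply to: {prompt}"

class StatefulChat:
    """What a typical platform does: every prompt is retained server-side."""
    def __init__(self) -> None:
        self.history: list[str] = []

    def ask(self, prompt: str) -> str:
        self.history.append(prompt)  # stored, and must now be protected
        return model(prompt)

def stateless_relay(prompt: str) -> str:
    """Forward and return; prompt and reply exist only for this call.
    No logging, no user ID, no conversation store."""
    return model(prompt)

chat = StatefulChat()
chat.ask("my private question")
assert chat.history == ["my private question"]  # the platform holds your data
assert stateless_relay("my private question") == "reply to: my private question"
# After the relay returns, there is no object, log, or store left to leak.
```

Data you never hold is data you can never breach, subpoena, or resell; the trade-off is that conversation memory, if wanted, must live on the client side.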

A bigger challenge, though, is that most people don't understand why they need privacy in the first place.


If you have the time, watch the full recording below. Part II of my summary of the summit's discussions is coming tomorrow. Make sure to subscribe to be among the first to get it.

Disclaimer: None of this should be considered financial advice. Don't take my words for granted; rather, do your own research (DYOR) and share your thoughts to create a fruitful discussion.