
Web3 + AI + Privacy: Spotlight on Ethereum
With its current focus on privacy, the crypto world is returning to its cypherpunk roots. But how will this affect Web3 + AI?

The Web3 + AI Interview: Quranium
Quranium's Kapil Dhiman joins me to discuss the quantum threats to blockchains, AI, and privacy.

The Web3 + AI Daily #35
Your definitive guide to the world of Decentralized AI (DeAI/dAI).
Your ultimate guide to the burgeoning intersection of blockchain and AI, and the nascent agent economy.




Sreeram Kannan, founder of EigenLayer and EigenCloud, believes that agentic companies will be the next trillion-dollar bet. But why haven't we witnessed this new kind of software-only firm yet?
The key bottleneck is not intelligence, but rights.
At the center of every entrepreneurial endeavor lies the ability to own property. Humans (at least those living in fairly democratic, capitalist countries) can rely on an evolved legislative and judicial system that guarantees property rights - something agents still lack.
A common assumption is that the main thing holding agents back is capability. I do not think that is the whole story. Even if models continue to improve rapidly, the bigger bottleneck is that agents do not have standing in the systems that matter most. Humans can own property, sign agreements, take liabilities, and organize companies. Agents, by default, cannot. Without those capabilities, they remain extensions of human operators rather than economic actors in their own right.
If you think agents will simply wait for legislation to catch up, though, you're gravely mistaken. This is where blockchain comes into play.
A blockchain already allows a program to hold and administer assets according to rules. That is, in effect, a mechanism by which software can own property and exercise constrained control. Smart contracts are the earliest and clearest example of this.
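The idea that software can hold and administer assets under fixed rules can be sketched in plain Python. This is a conceptual toy, not an on-chain contract - the `AgentTreasury` class and its rules are illustrative, standing in for the constraints a smart contract would enforce:

```python
class AgentTreasury:
    """Toy model of a contract-style treasury that an agent controls.

    Funds can only leave under rules fixed at creation time, mirroring
    how a deployed smart contract constrains what its owner can do.
    """

    def __init__(self, owner: str, daily_limit: int):
        self.owner = owner
        self.daily_limit = daily_limit  # rule fixed at "deployment"
        self.balance = 0
        self.spent_today = 0

    def deposit(self, amount: int) -> None:
        self.balance += amount

    def withdraw(self, caller: str, amount: int) -> bool:
        # The rules are enforced by code, not by a court: only the
        # owner may spend, and only within the daily limit.
        if caller != self.owner:
            return False
        if amount > self.balance or self.spent_today + amount > self.daily_limit:
            return False
        self.balance -= amount
        self.spent_today += amount
        return True


treasury = AgentTreasury(owner="agent-42", daily_limit=100)
treasury.deposit(500)
print(treasury.withdraw("agent-42", 80))      # True: within the rules
print(treasury.withdraw("agent-42", 80))      # False: exceeds daily limit
print(treasury.withdraw("someone-else", 10))  # False: not the owner
```

The point of the sketch is that "ownership" here is nothing more than code-enforced control - exactly the kind of standing a program can have today without any legislative change.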
All we need for an agent to own rights and property is a mechanism that guarantees the agent's identity. As I've written repeatedly here, this is what the Ethereum Foundation's ERC-8004 standard is all about:
We are on the verge of having agents establish and run companies, sign contracts, raise funds, and ship products. A new generation of corporations is about to be born.
AI democratizes the creation of software. AI plus crypto democratizes the creation of software companies.
I urge you to listen to Kannan's full lecture at Digital Asset Summit 2026 to learn more.
Illia Polosukhin was among the co-authors of the landmark paper "Attention is all you need," which introduced the then-novel Transformer architecture for AI models and ultimately laid the groundwork for the generative AI revolution we’re experiencing today. He left Google to create NEAR AI and, subsequently, NEAR Protocol.
As early as 2017, Polosukhin envisioned that blockchains and AI would converge. He knew that AI needs resources like data and compute, and that all you need to set up a global marketplace to crowdsource them is a single smart contract. He still believes that one day AI will replace all operating systems and apps to become our interface to computers and code, with blockchain working in the background.
What blockchains provide in that scenario is invaluable, because they can serve as the layer of truth and trust, offering a global registry of identities, along with marketplaces, payment methods, and all of that delivered with upgradeability.
Polosukhin joined the Bankless podcast to discuss IronClaw - a more secure and private version of the popular agent framework OpenClaw. But why do we need more privacy and security when interacting with agents?
Polosukhin outlines two main reasons:
Agents are not as useful today as we want them to be because we don't give them enough context and information. Meanwhile, we withhold context because we're afraid of misuse.
Another problem when communicating with agents is privacy - we cannot trust them to keep our sensitive data, passwords, and private keys safe.
People are right to worry. The reality that many users don't fully grasp is that with OpenClaw, every piece of information they provide is sent to the inference partner working in the background, whether that's Anthropic, OpenAI, or someone else.
So somewhere in Anthropic's and OpenAI's logs sit everybody's access keys, API keys, and passwords to Gmail, Notion, and the rest.
In contrast, IronClaw sandboxes each tool in an isolated environment, and all credentials and private keys are encrypted and locked in a vault, with checks on how the agent uses them. The keys never touch an LLM, even when a centralized inference provider is used. And when agents build products themselves, they run inside a VM, keeping their effects contained.
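One common way to keep keys away from a model - loosely in the spirit of the vault described above, though IronClaw's actual design may differ - is to let the agent reference secrets only by opaque handles, which a local vault resolves at the network boundary, after the model has produced its output. The `CredentialVault` class and handle names below are hypothetical:

```python
class CredentialVault:
    """Holds real secrets locally; the model only ever sees handles."""

    def __init__(self):
        self._secrets: dict[str, str] = {}

    def store(self, handle: str, secret: str) -> None:
        self._secrets[handle] = secret

    def resolve(self, text: str) -> str:
        # Substitute {{handle}} placeholders just before the request
        # leaves the machine - the raw secret never enters a prompt.
        for handle, secret in self._secrets.items():
            text = text.replace("{{" + handle + "}}", secret)
        return text


vault = CredentialVault()
vault.store("GMAIL_TOKEN", "s3cr3t-value")

# What the model sees and produces: only the placeholder.
model_output = "Authorization: Bearer {{GMAIL_TOKEN}}"

# What actually goes over the wire, resolved outside the model:
request = vault.resolve(model_output)
print("s3cr3t-value" in request)       # True: secret injected locally
print("{{GMAIL_TOKEN}}" in request)    # False: placeholder replaced
```

Because substitution happens locally, a centralized inference provider's logs would contain only `{{GMAIL_TOKEN}}`, never the token itself.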
The focus of NEAR AI over the past year has been private AI - where neither NEAR AI nor the model or hardware provider can see what the user is doing.
Listen to the entire conversation to find out more.
Streaming behemoth Netflix just open-sourced VOID (Video Object and Interaction Deletion) - an AI tool that erases objects from video and models how the remaining scene should physically behave after the object is gone.
What's more important for me to highlight here is that the research behind VOID was conducted in cooperation with Bulgaria's INSAIT - Institute for Computer Science, Artificial Intelligence and Technology.
VOID was chosen 64.8% of the time in tests against six rivals, including Runway, the leading commercial alternative, which managed just 18.4%. It is not yet clear whether Netflix will incorporate the model into production pipelines, but using it to automate labor-intensive post-production tasks could save studios significant resources.
The New Yorker's Ronan Farrow and Andrew Marantz spent over a year investigating OpenAI and its CEO, Sam Altman, to discover the reasons behind Altman’s dismissal by the company’s board, and his subsequent reinstatement following the highly public fallout.
The article is a fascinating, in-depth look at Altman's professional and personal life and his path from co-founding OpenAI as a non-profit to the present day - from the gross and often vicious slander leveled against him, to his tendency to lie and exaggerate, to his ties with Middle Eastern princes who order the murders of journalists.
If we had to describe Altman with a single word, it would surely be 'controversial.' But if I had to characterize my thoughts after reading this piece, the word would be 'confused.'
'Confused' about how the world can still believe this person after numerous accounts of deceit, how we trust him to run one of the most consequential companies in human history, and how people continue to glorify Silicon Valley bullies, liars, and narcissists just because they claim to work "for the common good."
In this newsletter, I have often cited Altman for his contentious, to put it mildly, stances on AI regulation - something the article outlines perfectly:
Testifying before the Senate Judiciary Committee in 2023, he proposed a new federal agency to oversee advanced A.I. models. [...] But, as Altman publicly welcomed regulation, he quietly lobbied against it. In 2022 and 2023, according to Time, OpenAI successfully pressed to dilute a European Union effort that would have subjected large A.I. companies to more oversight. In 2024, a bill was introduced in the California state legislature mandating safety testing for A.I. models. Its provisions included measures resembling the ones that Altman had advocated for in his congressional testimony. OpenAI publicly opposed the bill but in private began issuing threats. "I would say that over the course of the year, we saw increasingly cunning, deceptive behavior from OpenAI," a legislative aide told us.
Some of you may say that CEOs at this scale need to move swiftly to be able to steer whole industries. And yes, the technology world, and AI in particular, is extremely fast-paced, and founders like Altman need to adapt to survive. Nobody debates that.
But to found an organization with the explicit mission of ensuring safety and benefitting all of humanity and then gradually abandon every safety commitment you have ever made is next-level hypocrisy, to say the least.
Make sure to read the full story here:
Thank you for reading! If you haven't done so yet, I invite you to subscribe to stay in the loop on the hottest dAI developments.
The Web3 + AI Book Club is live! This month, we're reading 'The New Age of Sexism' by Laura Bates. Follow the link below to join the club on Fable.
If you want to support the publication financially, you can either purchase my writer token $WEB3AI, or buy my creator token $ALBENA on ZORA.
I'm looking forward to connecting with fellow Crypto x AI enthusiasts, so don't hesitate to reach out on social media.
Disclaimer: None of this should or could be considered financial advice. You should not take my words for granted; rather, do your own research (DYOR) and share your thoughts to encourage a fruitful discussion.
🌷 🪻 As we Bulgarians prepare for a long Easter weekend, I've curated a nice collection of Web3 + AI news and opinion pieces for you to enjoy over the break: 1️⃣ @eigencloud's @sreeramkannan envisions that agentic companies will be the next trillion-dollar bet 🤑 But contrary to popular belief, the reason we haven't seen such software firms yet has nothing to do with capability. Rather, it's all about property rights and digital ownership, and here's where blockchain comes in 🧐 2️⃣ @near-protocol's @ilblackdragon talked to @bankless about IronClaw - their private and secure alternative to @openclaw 🫨 3️⃣ Have you heard that @netflixfilm open-sourced the video editing model VOID in cooperation with Bulgaria's INSAIT institute? 👏 4️⃣ The New Yorker's Ronan Farrow and Andrew Marantz published a detailed investigation into OpenAI and Sam Altman - it's definitely worth your time. Read the full story and subscribe 👇 https://web3plusai.xyz/daily_64