
Covenant AI / Templar, subnet 3 of the Opentensor Foundation / Bittensor decentralized AI ecosystem, has recently released the largest foundation model pre-trained in a completely decentralized and permissionless manner.
Covenant-72B is a 72-billion-parameter large language model (LLM) trained across 70+ contributors using standard hardware on open internet infrastructure.
The model achieved a 67.1 MMLU score (popular benchmark for evaluating LLMs), thus ranking in the same performance range as Meta’s LLaMA 2 70B.
Decentralized AI training at scale is possible: A couple of years ago, training an LLM in a decentralized fashion over a blockchain-coordinated network of nodes was deemed not just exorbitantly expensive and energy-intensive, but simply impossible. Now, Templar proves it's doable.
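The core idea behind this kind of training can be sketched in a few lines. This is a deliberately simplified toy, not Templar's actual protocol (which adds incentives, validator checks, and gradient compression): each contributor computes gradients on its own data shard, and the network averages those contributions into one shared model update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn y = 2x with a single weight, data sharded across "miners".
def local_gradient(w, xs, ys):
    # Gradient of mean squared error (w*x - y)^2 on this node's shard.
    return np.mean(2 * (w * xs - ys) * xs)

x = rng.uniform(-1, 1, size=300)
y = 2.0 * x
shards = np.array_split(np.arange(300), 3)  # three independent contributors

w, lr = 0.0, 0.5
for _ in range(50):
    # Each node computes a gradient on its own shard (in parallel, in reality).
    grads = [local_gradient(w, x[s], y[s]) for s in shards]
    # A coordinator averages the contributions and updates the shared model.
    w -= lr * np.mean(grads)

print(round(w, 3))  # converges toward the true weight 2.0
```

The hard parts Templar actually solves live outside this sketch: doing the averaging over the open internet, tolerating slow or malicious nodes, and paying contributors for honest work.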
Decentralized AI training is a viable alternative to OpenAI, Google, Anthropic, Meta: Currently, the conversation around AI revolves predominantly around the ability of major labs to borrow billions of dollars to build data centers, while they haven't yet generated any profit from the technology they're building. It turns out there's an alternative nobody is talking about.
Covenant-72B was trained by a network of nodes using commodity hardware. No billion-dollar data centers and no tech behemoths in sight. As 0xSammy wrote:
It's no wonder that Jack Clark, co-founder of Anthropic, has stated that Covenant-72B is challenging the political economy of AI:
Distributed training is a technique that can change the political economy of AI by shifting the people at the frontier from monolithic ‘compute singletons’ (like labs such as Anthropic and OpenAI, and clouds like Google) to a larger federated collective.
[Decentralized training] is an important technology to track, and I could imagine a world where on-device AI features a lot of models developed via distributed training techniques, while on-cloud AI mostly runs on proprietary models trained on huge amounts of compute.
Building AI is not just for the tech giants: Recently, the AI race has made it increasingly difficult for independent researchers and smaller labs to access the computing power they need. Cloud services and specialized AI data centers have become so expensive that only billion-dollar corporations can realistically afford them. The result? Progress in AI research and development is now concentrated in the hands of just a handful of companies.
The only meaningful efforts to democratize AI development are currently emerging from the crypto space. Covenant-72B was trained in a completely permissionless and transparent way, where anyone interested could have joined:
Participation generates income: Think of Templar, and Bittensor as a whole, as a crowdsourcing platform, but instead of micro-donations, it collects computing resources. It allows anyone possessing heavy-duty graphics cards (GPUs) to contribute their computing power to help train a shared, global AI model. Yet, here, contributors gain direct monetary remuneration for a job well done.
That means that anyone with a gaming computer, anywhere on Earth, can lend their idle computing resources and earn passive income. An income that, in many parts of the world, has the potential to be truly life-changing.
Performance doesn't have to suffer: Covenant-72B delivers performance competitive with models trained in centralized data centers, with a 67.1 MMLU score, close to that of Meta’s LLaMA 2 70B.
And yes, I should emphasize that Covenant-72B is a bit outdated compared to the models major labs are releasing these days.
However, what's particularly impressive and exciting about Covenant-72B is the fact that a distributed network of peers, each running 8×B200 GPUs, has trained a model that performs similarly to the one trained by the seventh* richest company in the world (*at the time of writing).
Covenant-72B is completely open-source: Instead of locking the model behind a paid API wall, Templar released all the model weights and checkpoints under an open-source Apache 2.0 license for anyone to use.
That's just another piece of evidence that the Web3 space offers fertile ground for AI experimentation.
QVAC, the AI-focused arm of stablecoin issuer Tether.io, launched a new version of its QVAC Fabric, marking a shift from centralized, cloud-based AI to local, on-device AI development. Instead of relying on expensive data centers, the framework allows developers to run, train, and fine-tune large language models directly on everyday hardware like laptops, consumer GPUs, and even smartphones.
QVAC Fabric LLM challenges a long-standing assumption at the heart of modern artificial intelligence: that training and customizing powerful models must be confined to large, centralized data centers. QVAC Fabric LLM is built around a local-first, privacy-first philosophy, enabling individuals and organizations to fine-tune large language models directly on their own devices, using the hardware they already trust and control.
How was that possible? The key breakthrough is bringing fine-tuning, especially via efficient methods like LoRA, to edge devices, making AI personalization far more accessible. Tasks that once required high-end infrastructure can now be done locally, often with just a few commands.
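To make the LoRA idea concrete, here is a minimal numerical sketch (generic LoRA, not QVAC's actual implementation, whose API I haven't seen): the large pretrained weight matrix stays frozen, and only a tiny low-rank adapter is trained, which is why fine-tuning fits on edge hardware.

```python
import numpy as np

rng = np.random.default_rng(1)

# A frozen pretrained weight matrix (d_out x d_in). In a real LLM this is one
# of thousands of large layers that stay untouched during fine-tuning.
d_out, d_in, rank = 512, 512, 8
W = rng.normal(size=(d_out, d_in))

# LoRA: learn only the low-rank update W + (alpha/rank) * B @ A.
A = rng.normal(scale=0.01, size=(rank, d_in))  # trainable
B = np.zeros((d_out, rank))                    # trainable, zero-init
alpha = 16

def forward(x):
    # Frozen base path plus the low-rank adapter path; W itself never changes.
    return W @ x + (alpha / rank) * (B @ (A @ x))

full_params = W.size              # what full fine-tuning would update
lora_params = A.size + B.size     # what LoRA actually updates
print(f"trainable fraction: {lora_params / full_params:.3%}")
```

For this single layer, the adapter holds about 3% of the parameters; across a full 70B-scale model the trainable fraction is typically well under 1%, which is what shrinks memory and compute enough for laptops and phones.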
This approach works across all major platforms, mobile and desktop, eliminating cloud dependency and enabling a truly cross-platform, decentralized AI ecosystem.
QVAC Fabric LLM represents a turning point by democratizing AI development and personalization, shifting power from big tech and data centers to individuals and their own devices. And most importantly, this progress doesn’t come at the expense of performance:
Importantly, broader hardware access does not come at the cost of model quality. Models trained using QVAC Fabric LLM were evaluated against industry-standard benchmarks. Across multiple benchmarks, including biomedical accuracy tasks, performance was on par with the industry standard (PyTorch) and, in some cases, marginally better. In other evaluations, results were effectively equivalent, demonstrating that hardware-agnostic, on-device fine-tuning can match established training standards.
Thank you for reading! If you haven't done so yet, I invite you to subscribe to stay in the loop on the hottest dAI developments.
The Web3 + AI Book Club is live! This month, we're reading 'The New Age of Sexism' by Laura Bates. Follow the link below to join the club on Fable.
If you want to support the publication financially, you can either purchase my writer token $WEB3AI, or buy my creator token $ALBENA on ZORA.
I'm looking forward to connecting with fellow Crypto x AI enthusiasts, so don't hesitate to reach out on social media.
Disclaimer: None of this should or could be considered financial advice. You shouldn't take my word for it; rather, do your own research (DYOR) and share your thoughts to encourage a fruitful discussion.


Web3 + AI + Privacy: Spotlight on Ethereum
With its current focus on privacy, the crypto world returns to its cypherpunk roots. But how would it affect Web3 + AI?

The Web3 + AI Daily #35
Your definitive guide to the world of Decentralized AI (DeAI/dAI).

The Web3 + AI Daily #51
Daily insights into the fascinating convergence of Crypto and AI.

Your ultimate guide to the burgeoning intersection of blockchain and AI, and the nascent agent economy.
