The Trust Problem with AI
Artificial intelligence (AI) has reached a point where its power often exceeds our ability to explain it. As deep learning models grow deeper and generative AI (GenAI) systems become more complex, transparency has become one of the biggest challenges facing the businesses that deploy them.
As Dr Scott Zoldi, Chief Analytics Officer at FICO, told a packed room at the London Blockchain Conference 2025: “The black box of AI is getting darker, and the lack of transparency is increasing.”
That statement captures the heart of the problem. AI now influences decisions that shape people’s lives – credit approvals, hiring, healthcare, even national security. Yet, many organisations struggle to prove how their systems make those decisions.
The solution, according to Zoldi and a growing number of experts, lies in combining AI and blockchain – a technology built on transparency, traceability, and trust.
Blockchain: The Enforcer of AI Integrity
“Blockchain will govern AI,” Zoldi declared. It’s a bold statement, but one rooted in logic.
At its core, blockchain provides an immutable record of actions – a ledger that can verify every step in an AI model’s development, from training data to deployment and monitoring. This makes it possible to prove how a model was built, tested, and used.
Instead of treating compliance as an afterthought, blockchain can make it automatic. Every stage of model development – data collection, algorithm tuning, and output validation – can be logged on-chain, ensuring an auditable trail.
That matters, Zoldi argued, because “the people monitoring a model are often not the people who built it.” By recording monitoring data immutably, organisations can demonstrate accountability long after the original team has moved on.
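To make that concrete, here is a minimal sketch of the kind of audit trail being described – not FICO's implementation, and the stage names, fields, and the idea of anchoring only the final hash on-chain are illustrative assumptions. Each lifecycle event is hashed together with its predecessor, so the log behaves like a ledger: change any past entry and the chain no longer matches the anchored value.

```python
import hashlib
import json
import time

def _digest(payload: dict, prev_hash: str) -> str:
    """Hash an event together with the previous hash to form a chain."""
    body = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(body.encode()).hexdigest()

class ModelAuditTrail:
    """Append-only log of model-lifecycle events, hash-chained like a ledger."""

    GENESIS = "0" * 64

    def __init__(self, model_id: str):
        self.model_id = model_id
        self.events: list[dict] = []

    def record(self, stage: str, details: dict) -> dict:
        """Append an event (data collection, tuning, validation, monitoring...)."""
        prev_hash = self.events[-1]["hash"] if self.events else self.GENESIS
        payload = {
            "model_id": self.model_id,
            "stage": stage,
            "details": details,
            "timestamp": time.time(),
        }
        event = {"payload": payload, "prev_hash": prev_hash,
                 "hash": _digest(payload, prev_hash)}
        self.events.append(event)
        return event

    def head(self) -> str:
        """Hash of the latest event – the value you would anchor on-chain."""
        return self.events[-1]["hash"] if self.events else self.GENESIS

# Example: log the stages Zoldi lists, then anchor the head hash on a chain.
trail = ModelAuditTrail("credit-risk-v7")
trail.record("data_collection", {"dataset": "transactions_2024Q4", "rows": 1_200_000})
trail.record("algorithm_tuning", {"learning_rate": 0.01, "epochs": 20})
trail.record("output_validation", {"auc": 0.91, "bias_check": "passed"})
print("anchor this on-chain:", trail.head())
```

Keeping only the head hash on-chain is one way to preserve confidentiality: the sensitive training details stay off-chain, but an auditor can still prove the recorded history was never rewritten.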
The result? A new kind of AI – auditable, explainable, and responsible.
Focused AI: Precision Over Power
But trust isn’t just about transparency – it’s about relevance.
Zoldi warned that the industry’s obsession with massive, general-purpose AI models is misplaced. “Generative AI can provide a lot of value,” he said, “but it is not a perfect tool… they know as much about how to make mushroom soup as financial services.”
In other words, general intelligence is impressive, but it’s not necessarily useful in business.
FICO’s answer is Focused Language Models (FLMs) – smaller, domain-specific systems that outperform generalist AI in accuracy, compliance, and efficiency. These models are trained only on curated, relevant datasets – say, banking transactions or credit histories – rather than the entire internet.
The payoff is huge:
- 1,000× lower computing requirements, according to FICO.
- Higher-quality results in regulated environments.
- Transparent, explainable decision-making processes.
And once again, blockchain plays a supporting role. It records the data sources, the training parameters, and even examples of “good” versus “bad” outputs. That means any decision made by the AI can be traced back to its origins – no more black boxes, no more guesswork.
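One rough illustration of that traceability – again a sketch with hypothetical field names rather than FICO's schema – is to fingerprint a training manifest covering data sources, parameters, and labelled examples of good and bad outputs, then stamp that fingerprint onto every decision the model later makes:

```python
import hashlib
import json

def manifest_hash(manifest: dict) -> str:
    """Deterministic fingerprint of everything the model was trained on."""
    return hashlib.sha256(json.dumps(manifest, sort_keys=True).encode()).hexdigest()

# Hypothetical training manifest for a focused, domain-specific model.
training_manifest = {
    "data_sources": ["card_transactions_2023", "credit_histories_2023"],
    "training_params": {"model": "flm-fraud-small", "epochs": 12, "seed": 7},
    "exemplars": {
        "good": ["flag txn: card present in two countries within 5 minutes"],
        "bad": ["flag txn: cardholder surname looks unusual"],
    },
}
MODEL_FINGERPRINT = manifest_hash(training_manifest)  # record this on-chain

def make_decision(txn_id: str, score: float) -> dict:
    """Every decision carries the fingerprint of the model that produced it."""
    return {"txn_id": txn_id, "score": score, "model_fingerprint": MODEL_FINGERPRINT}

decision = make_decision("txn-001", 0.87)
# An auditor can re-hash the archived manifest and compare it with the
# fingerprint stored alongside the decision to trace it back to its origins.
assert decision["model_fingerprint"] == manifest_hash(training_manifest)
```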
Trust You Can Prove
Trust has become the ultimate currency in digital transformation. As AI systems gain autonomy, from self-executing contracts to agentic assistants, proving trustworthiness is no longer optional – it’s existential.
Google X Founder Sebastian Thrun framed it succinctly:
“We can no longer live in a world where things can be understood – they can only be proven correct.”
This “proof” is precisely what blockchain enables. It’s a neutral referee that ensures no model, no algorithm, and no human can rewrite history.
On the Visionaries Stage, Thrun and Zoldi shared the view that blockchain could form the backbone of the next AI revolution, particularly in the rise of Agentic AI – autonomous systems that act on behalf of users.
The danger, Thrun warned, is that “the average consumer will trust a ChatGPT more than a scientific expert.” The only defence against misinformation is verifiable provenance: a record of what an AI system saw, learned, and decided.
Blockchain doesn’t just help us trust AI – it helps us verify that trust.
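The verification side of that claim is mechanically simple. The sketch below assumes the same hash-chained event format as the earlier audit-trail example: given the off-chain log and the head hash anchored on the ledger, anyone can recompute the chain and confirm that nothing was altered, reordered, or removed.

```python
import hashlib
import json

def _digest(payload: dict, prev_hash: str) -> str:
    body = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(body.encode()).hexdigest()

def verify_trail(events: list[dict], anchored_head: str) -> bool:
    """Recompute the hash chain and compare its head with the on-chain anchor.

    Returns False if any event was altered, reordered, or removed after the
    anchor was written – i.e. if someone tried to rewrite history.
    """
    prev_hash = "0" * 64  # genesis value used when the trail was created
    for event in events:
        expected = _digest(event["payload"], prev_hash)
        if event["prev_hash"] != prev_hash or event["hash"] != expected:
            return False
        prev_hash = event["hash"]
    return prev_hash == anchored_head

# verify_trail(trail.events, head_recorded_on_chain) -> True means "history intact"
```

Because the anchor sits on a ledger that no single party controls, a passing check is evidence an outside reviewer can rely on, not just an internal assertion.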
AI + Blockchain in the Real World
This convergence isn’t theoretical. It’s already happening across industries.
IBM, for example, has demonstrated how blockchain can provide a “digital record” of AI training data, ensuring the provenance and integrity of information used in models. In life sciences and pharmaceuticals, combining blockchain and AI has made clinical trials more reliable by securing patient data, improving consent management, and automating reporting.
Web3 infrastructure firms like Quicknode are taking the idea further, arguing that decentralised AI will outpace centralised systems in both resilience and fairness. By spreading computation and knowledge across blockchain networks, they claim, AI can operate without dependence on any single company or government – a model for autonomous trust.
Meanwhile, financial institutions are adopting blockchain-based audit trails to meet regulatory requirements around AI-based credit scoring and risk assessment.
In short, this isn’t about hype. It’s about building systems of accountability that scale.
Why Businesses Should Care
For business leaders, the combination of AI and blockchain represents both a challenge and an opportunity.
The challenge lies in recognising that AI is no longer a “set-and-forget” technology. Regulation, ethics, and explainability are now part of the operating model. The opportunity lies in turning that accountability into a competitive advantage.
Imagine a future where every AI-powered decision – whether approving a loan, adjusting a supply chain, or diagnosing a patient – comes with a verifiable proof of integrity. Customers won’t just trust the outcome; they’ll trust the process.
That’s not just compliance. That’s brand equity.
From Transparency to Autonomy
As AI agents gain independence – executing trades, managing logistics, or negotiating contracts – the need for verifiable autonomy becomes critical. Without blockchain, we risk creating unaccountable machines. With it, we gain digital actors whose decisions are traceable, testable, and aligned with human intent.
Tatiana Kalganova of Brunel University captured it well during her London Blockchain Conference 2025 panel:
“People must stop treating AI as a magic box. It’s based on data and predictions – nothing more.”
Her point: AI doesn’t need mystique; it needs method. And blockchain provides exactly that.
The Road Ahead: Proving the Proven
The convergence of AI and blockchain isn’t just a trend – it’s the next phase of digital infrastructure.
Zoldi closed his keynote with a statement that summarised the entire debate:
“Blockchain is the key to responsible AI.”
For forward-thinking enterprises, the roadmap is clear:
- Record everything that matters. From data sources to decisions, provenance is protection.
- Train for focus, not fame. Smaller, domain-specific models deliver measurable ROI.
- Make auditability automatic. Let blockchain turn compliance into code, as the sketch below illustrates.
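What "compliance as code" could look like in practice is sketched below – purely illustrative, reusing the hypothetical audit-log idea from earlier rather than any specific vendor API. A thin wrapper records every decision the wrapped model makes, so logging is no longer something a team can forget to do:

```python
import functools
import hashlib
import json
import time

AUDIT_LOG: list[dict] = []  # stand-in for the hash-chained, anchored trail above

def audited(model_id: str):
    """Decorator: automatically record every decision the wrapped model makes."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            record = {
                "model_id": model_id,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": result,
                "timestamp": time.time(),
            }
            record["hash"] = hashlib.sha256(
                json.dumps(record, sort_keys=True, default=str).encode()
            ).hexdigest()
            AUDIT_LOG.append(record)  # in production: append to the ledger
            return result
        return inner
    return wrap

@audited("loan-approval-flm")
def approve_loan(applicant_id: str, score: float) -> bool:
    return score >= 0.7  # toy decision rule

approve_loan("applicant-42", 0.81)
print(len(AUDIT_LOG), "decision(s) recorded automatically")
```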
As AI becomes more autonomous, businesses will need systems that don’t just think – they must also prove what they have done.
And that’s where blockchain doesn’t just support AI – it makes it trustworthy.
Dive deeper into these transformative discussions and watch the on-demand sessions from the London Blockchain Conference 2025.
And don’t stop there – register your interest today for our upcoming webinar, “From Chaos to Clarity: Understanding the Global Crypto Regulation Wave and Why It Matters for Business,” happening on December 10, 2025.
It’s your chance to stay ahead of the curve as the rules of the digital economy continue to evolve.