David Silver, the mind behind the groundbreaking AlphaGo, believes the current trajectory of artificial intelligence is fundamentally flawed. While the tech world is currently obsessed with Large Language Models (LLMs), Silver argues that relying on human-generated data is a dead end for achieving true superintelligence.
Through his new venture, Ineffable Intelligence, Silver is attempting to pivot the industry away from “mimicry” and toward a model of autonomous, self-sustaining learning.
The “Fossil Fuel” Problem of LLMs
The current AI boom is largely driven by LLMs—systems trained on massive datasets of human text, code, and books. Silver views this method as inherently limited. He describes human data as a “kind of fossil fuel”: an incredible shortcut that provides an initial boost but is ultimately finite and non-renewable.
The core issue is that LLMs learn from what humans have already done. They are reflections of human intelligence rather than independent engines of discovery. Silver posits that if an AI is confined to human data, it can never surpass the collective knowledge of its creators.
“You can think of systems that learn for themselves as a renewable fuel—something that can just learn and learn and learn forever, without limit,” Silver explains.
To illustrate this, Silver uses a thought experiment: If you released a powerful LLM in a world where everyone believed the Earth was flat, the AI would become an expert “flat-earther.” Without the ability to interact with reality or conduct its own experiments, it remains trapped within the biases and limitations of its training data.
The Path to Superintelligence: Reinforcement Learning
Instead of feeding AI more text, Silver is doubling down on reinforcement learning (RL)—the process by which an AI learns through trial and error, interacting with an environment to achieve specific goals. This was the mechanism that allowed AlphaGo to master the game of Go—not by reading books on strategy, but by playing millions of games against itself.
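The trial-and-error loop described above can be sketched with tabular Q-learning on a toy environment. This is a minimal illustration of the general RL idea, not code from AlphaGo or Ineffable Intelligence; the corridor environment and all names here are invented for the example:

```python
import random

# Toy corridor: states 0..4, start at 0, reward only on reaching state 4.
# The agent learns purely from interaction -- no human-generated data.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or right
EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9  # explore rate, learning rate, discount

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: clamp to the corridor, reward at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        target = r + (0.0 if done else GAMMA * max(Q[(nxt, b)] for b in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = nxt

# The learned greedy policy moves right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The point of the sketch is that the agent starts with zero knowledge and discovers the optimal behavior entirely from its own experience—the same property, at vastly greater scale, that let AlphaGo surpass human play without studying human games.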
Silver’s vision for Ineffable Intelligence is to move this concept from the “confined worlds” of games like Go into the immense complexity of the real world. His strategy involves:
- Simulated Environments: Placing AI agents inside highly sophisticated simulations where they can interact, collaborate, and test hypotheses.
- Autonomous Discovery: Creating “superlearners” that don’t just process information, but actively discover new scientific, economic, or technological principles.
- Scaling Intelligence: Building systems that can scale their intelligence without being tethered to “human priors” (the preconceived notions and biases inherent in human data).
Safety and the Alignment Challenge
A significant concern in the race for superintelligence is AI alignment: ensuring that a machine smarter than humans remains beneficial to humanity.
Critics worry that an AI learning through pure trial and error might discover “optimal” solutions that are efficient but morally catastrophic. However, Silver and his backers, including Lightspeed Ventures, argue that his approach may actually be safer.
By developing these agents within controlled simulations, researchers can observe emergent behaviors in real-time. They can see how an agent treats “lesser intelligences” or handles conflicting goals before the technology is ever deployed in the real world. This allows for a proactive approach to safety, rather than a reactive one.
A High-Stakes Mission
The scale of Silver’s ambition is reflected in the financial backing of Ineffable Intelligence. The startup has already secured $1.1 billion in seed funding, reaching a valuation of $5.1 billion. This is an extraordinary figure for a European-based AI company and underscores the industry’s belief in Silver’s “purity of vision.”
Despite the massive wealth at stake, Silver maintains a philanthropic stance. He has committed to donating all equity proceeds from Ineffable Intelligence to high-impact charities, viewing the pursuit of superintelligence as a profound responsibility to the future of humanity.
Conclusion
While the current AI landscape is dominated by models that parrot human knowledge, David Silver is betting that the next leap in intelligence will come from machines that learn to navigate and understand the world entirely on their own.