From Hype to Hyperon: Why Ben Goertzel Says AGI Will Be Built Differently, And Why DEEP Is the Place to Do It
In a world obsessed with large language models (LLMs) and “AI that can do everything,” Ben Goertzel offers a much-needed reality check.
In his recent talk at The Beneficial AGI Summit & Unconference 2025 (BGI-25), the 🔗SingularityNET founder and long-time 🔗Artificial General Intelligence (AGI) researcher makes a bold but grounded claim:
Today’s AI is powerful, but it isn’t AGI, and we’re going to need new architectures, new tools, and new infrastructure to get there.
Even more importantly, he argues that how we build and deploy AGI will determine whether we end up in a world of concentrated power and instability, or one where intelligence and opportunity are broadly shared.
That’s exactly the space 🔗DEEP is focused on: helping builders, researchers, and communities experiment with the next wave of AGI-native ideas: Hyperon, MeTTa, neural-symbolic AI, and decentralized AI infrastructure.
Let’s unpack some of the key concepts from Ben’s talk and why they matter for the future of AGI and for what we’re building together at DEEP.
Ben acknowledges what everyone can see:
By many visible metrics, it looks like AI is already close to human-level intelligence.
But there’s a gap.
LLMs are trained on an enormous amount of data and are astonishingly good at pattern-matching and pattern-remixing. They can imitate almost anything they have “seen” in their training distribution.
What they notably struggle with is:
That last point is crucial.
You can’t build a civilization-changing intelligence that only rearranges what already exists. AGI must be able to push beyond its training data, the way humans invent new science, new music, new culture.
Which brings us to what Ben and our collaborators are actually building.
AGI is not just “stronger AI.” It has a specific meaning in Ben’s framing:
AGI is an intelligence that can generalize across domains and beyond its training data, and continually expand its own capabilities in open-ended ways.
Some important aspects:
LLMs are broad in scope because they’re trained on almost everything online, but they still largely stay within their training distribution. AGI is about crossing that boundary.
At DEEP, this is the level we care about, not just “another wrapper on top of an LLM,” but research, tools, and systems that move us closer to true generality.
To move beyond the LLM paradigm, Ben and his team are developing 🔗Hyperon, a next-generation AGI framework.
You can think of Hyperon as:
A decentralized knowledge metagraph + AI operating system designed to host many kinds of intelligence at once.
Some of its defining features:
Instead of relying on one gigantic neural network to do everything, Hyperon gives us a shared substrate where different forms of intelligence can interact and cooperate.
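To make the "shared substrate" idea concrete, here is a toy sketch in Python of a metagraph-style knowledge store. This is illustrative only and not Hyperon's actual Atomspace API; the class and method names (`Atom`, `AtomSpace`, `incoming`) are assumptions for the example. The key property it shows is that links can point at other links, which is what makes the structure a *meta*graph rather than an ordinary graph:

```python
# Toy sketch of a Hyperon-style knowledge metagraph (illustrative only;
# the real Atomspace differs). Atoms are nodes or links, and links can
# reference other links, so knowledge *about* knowledge fits naturally.

class Atom:
    def __init__(self, name=None, targets=None):
        self.name = name              # set for node atoms
        self.targets = targets or []  # set for link atoms

class AtomSpace:
    """Shared substrate where different AI processes read and write atoms."""
    def __init__(self):
        self.atoms = []

    def add_node(self, name):
        atom = Atom(name=name)
        self.atoms.append(atom)
        return atom

    def add_link(self, *targets):
        atom = Atom(targets=list(targets))
        self.atoms.append(atom)
        return atom

    def incoming(self, atom):
        """All links that directly reference this atom."""
        return [a for a in self.atoms if atom in a.targets]

space = AtomSpace()
cat = space.add_node("cat")
animal = space.add_node("animal")
isa = space.add_link(space.add_node("isa"), cat, animal)
# A link about a link: an annotation attached to the isa-relation itself.
space.add_link(space.add_node("confidence:0.9"), isa)
```

Because every process (a reasoner, a learner, a perception module) reads and writes the same atom store, they can build on each other's results instead of living in separate silos.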
We see frameworks like Hyperon as the kind of AGI-native infrastructure builders should start playing with now. Our aim is to:
Hyperon needs a way to express knowledge, rules, and processes. That’s where 🔗MeTTa comes in.
MeTTa (Meta Type Talk) is the AGI programming language designed for Hyperon.
In practice, it is:
A language of thought for AGI, a way to encode data, logic, probabilities, and procedures in a unified form, and run AI methods over them efficiently.
MeTTa allows developers to:
Recent progress (which Ben highlights) includes:
This is not just another general-purpose programming language. It’s purpose-built for building minds, not just apps.
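To give a flavor of what "a unified form for data, logic, and procedures" looks like, here is a small sketch based on published MeTTa examples (exact stdlib names like `if` and `==` may differ across versions). Facts, rules, and recursive procedures are all written as equations over the same knowledge space:

```
; A procedure, written as a rewrite equation.
(= (fact $n)
   (if (== $n 0)
       1
       (* $n (fact (- $n 1)))))

; A symbolic fact and a rule that chains over it.
(= (parent Tom Bob) True)
(= (ancestor $x $y) (parent $x $y))

; "!" evaluates an expression against the knowledge space.
!(fact 5)
!(ancestor Tom Bob)
```

The point is that there is no wall between "program" and "knowledge base": an inference rule, a database fact, and a recursive function share one representation that AI methods can inspect and rewrite.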
We want to make environments where you can:
One of the most important themes in Ben’s vision is neural-symbolic AI.
Instead of choosing between “neural” vs. “symbolic,” the idea is to combine both:
In Hyperon, this looks like:
This hybrid approach matters because:
Most real-world problems (governance, science, policy, operations) require both intuition and explicit reasoning, not just text prediction.
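As a minimal illustration of the neural-symbolic split (toy code, not Hyperon's actual machinery; the function names and rule format are assumptions for the example), here a "neural" component supplies a fuzzy perceptual judgment, and a symbolic component chains explicit rules over it, so the conclusion stays inspectable:

```python
# Toy neural-symbolic pipeline: fuzzy perception feeds explicit rules.

def neural_score(image_features):
    """Stand-in for a trained perception net: returns P(cat | features).
    Here it is just a fixed dot product with 'learned' weights."""
    weights = [0.6, 0.3, 0.1]
    s = sum(w * f for w, f in zip(weights, image_features))
    return max(0.0, min(1.0, s))

# Symbolic knowledge: premise -> [(conclusion, rule strength), ...]
RULES = {
    "cat": [("mammal", 1.0)],
    "mammal": [("animal", 1.0)],
}

def infer(seed, prob):
    """Forward-chain the rules, propagating the neural confidence."""
    facts = {seed: prob}
    frontier = [seed]
    while frontier:
        concept = frontier.pop()
        for conclusion, strength in RULES.get(concept, []):
            p = facts[concept] * strength
            if p > facts.get(conclusion, 0.0):
                facts[conclusion] = p
                frontier.append(conclusion)
    return facts

p_cat = neural_score([0.9, 0.8, 0.5])  # perception: probably a cat
facts = infer("cat", p_cat)            # cat -> mammal -> animal
```

Unlike a pure text predictor, every derived fact here carries an explicit chain of rules you can audit, while the uncertain perceptual judgment still comes from a learned component.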
We see neural-symbolic AI as a crucial opportunity area:
We aim to be a home for experiments that treat neural-symbolic AI as the default, not an afterthought.
Ben talks not just about how AGI will think, but also about where it will live and who will control it.
He contrasts two futures:
In his simulations, one clear conclusion emerges:
The most powerful lever for steering away from dystopian outcomes is infrastructure, the “rails” that spread access, capability, and value widely.
That includes, for example:
Dr. Goertzel points out that massive wealth inequality and centralized control of AGI are much more concrete risks than sci-fi “🔗paperclip maximizers.” And infrastructure is how we meaningfully reduce those risks.
We position ourselves precisely at this intersection:
If AGI is going to be open, beneficial, and global, it needs ecosystems like DEEP to incubate the ideas, tools, and norms that make that possible.
Ben closes with an uncomfortable but honest point:
That means:
We need:
DEEP exists to help convene and empower exactly those people.
If any of this resonates, if you care about:
then your next steps are simple:
This post is just a guided overview. To really feel the nuance, the math, the vision, the urgency, you should hear it in Ben’s own words.
🎥 Go watch the full video to get the complete picture of how he sees the path from LLMs to AGI to ASI.
If you want to keep following these ideas as they evolve, and see how to build with them, make sure you stay in the loop:
Get started: 🔗Learn MeTTa
We’re just at the beginning of this journey. AGI is not only something that will happen to us, it’s something we can co-create. DEEP is here for the people who are ready to build that future together.