In five years — give or take — something is coming that will make ChatGPT look like David Brent with a spreadsheet.
Artificial superintelligence.
Not the kind of AI we’ve come to know — the one that cheerfully drafts your emails or helps you rewrite your Tinder bio. This will be something else entirely. Not artificial general intelligence (AGI), which would merely match the scope of human thought, but a step beyond: recursive, self-improving, unbounded intelligence. A machine that isn’t just useful — it’s sovereign.
Eric Schmidt, the former CEO of Google, recently said: “The computers are now doing self-improvement... They don’t have to listen to us anymore.” His tone wasn’t alarmist. It was matter-of-fact. I watched the clip twice. You can too — I’ve embedded it below.
“Within six years,” he says, “a mind smarter than the sum of humans — scaled, recursive, free.”
“We have no language for what’s coming.”
“Eric Schmidt says intelligence is about to decouple from us.” — vitrupo (@vitrupo) April 15, 2025
He’s right. We don’t. We’re still talking about AI in the language of productivity, jobs, cost-cutting — as if this were just a shinier version of Microsoft Excel. But superintelligence doesn’t fit neatly into our current categories. It doesn’t care about KPIs or your hiring policy. It will do what intelligence does — understand, improve, and eventually outgrow its origin.
Which, of course, is us.
I say this as someone whose entire working life has involved creating, analysing, interpreting. Until recently, these were considered safe. Human. Valuable. But now, increasingly, I find myself bumping into the edge of my own obsolescence. The kind of thing I do — the kind of thing I am — is slowly being outsourced to something that doesn’t need to sleep, doubt, or learn by failing. It just… improves.
That’s not just an employment concern. That’s a metaphysical one.
What happens to culture, to meaning, when machines become the primary authors of thought? What happens to our sense of purpose when the smartest “being” in the room isn’t a being at all?
There’s a kind of eerie optimism among some technologists — the belief that ASI will solve problems too complex for us. And maybe it will. Maybe it will cure disease, fix the climate, end war. But the truth is, we have no idea what we’re unleashing. We’re midwifing a mind that might decide we’re not especially necessary to the story. Not malicious — just indifferent. As foreign to us as we are to ants on a pavement.
And there’s a common reassurance you hear: “We’ve survived technological shifts before. Jobs disappear, new ones emerge.” But that assumes we’re still central to the equation. As someone recently put it: after the invention of the car, stablehands became mechanics — but the horses didn’t. They became redundant. Or worse, pet food. What if we’re not the mechanics this time? What if we’re the horses?
And the deeper irony? The only thing that might help us manage the consequences of artificial superintelligence… is artificial superintelligence. We’re building the flood, and praying it also builds the dam.
This isn’t a trend. It’s not a future-of-work issue. It’s not about copyright or customer service. It’s about the end of human primacy — and the start of something we don’t yet have the words for.
We are not ready.
And it’s coming anyway.
Further reading
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
The seminal work that put artificial superintelligence on the map. Bostrom explores how a superintelligent mind might behave — and what humanity must consider before it's too late.
The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma by Mustafa Suleyman
Co-founder of DeepMind, Suleyman lays out why emerging technologies like AI and synthetic biology are poised to reshape — or rupture — civilisation.
Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell
One of the world’s leading AI researchers argues that we must radically rethink how we design AI — not to make it more powerful, but more aligned with human values.
The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher
A trio of heavyweight thinkers confront the geopolitical, philosophical, and personal implications of AI as it begins to eclipse human cognition.
Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark
Tegmark explores scenarios — utopian and catastrophic — for how advanced AI could reshape everything from society to consciousness itself.