GZERO WORLD with Ian Bremmer
The Race to Superintelligence
5/15/2025 | 26m 46s
AI systems that rival human intelligence will transform our world. Is society ready?
Tech experts are ringing alarm bells that powerful AI systems that rival human intelligence are being developed faster than regulation, or even our understanding, can keep up. What happens when the line between man and machine disappears altogether? Former OpenAI whistleblower and author of the new 'AI 2027' report, Daniel Kokotajlo, discusses the risks of artificial superintelligence.
GZERO WORLD with Ian Bremmer is a local public television program presented by THIRTEEN PBS
The lead sponsor of GZERO WORLD with Ian Bremmer is Prologis. Additional funding is provided by Cox Enterprises, Jerre & Mary Joy Stead, Carnegie Corporation of New York and Susan S. and Kenneth L. Wallach Foundation.
- Humanity, in general, mostly fixes problems after they happen.
The problem of losing control of your army of superintelligences is a problem that we can't afford to wait and see how it goes and then fix it afterwards.
(lively music) - Hello, and welcome to "GZERO World."
I'm Ian Bremmer.
And today on the show: How much could the world change by 2027?
In the last few years, powerful new AI tools, like ChatGPT and DeepSeek, have transformed how we think about work and creativity, even intelligence itself.
How different will our relationship with technology be just two years from now?
Tech experts and policymakers are ringing alarm bells that powerful AI systems are coming down the pike faster than regulation or even our understanding can keep up with.
Soon, they warn, the line between man and machine may disappear altogether.
My guest today, Daniel Kokotajlo, is one of those people.
He's a former OpenAI employee and leader of the team behind "AI 2027," a report that envisions a not-so-distant future where the US and China are locked in an AI arms race, they ignore safety concerns and the software goes rogue.
Sounds like science fiction, but it's written by experts with direct knowledge of the research pipelines, and that is exactly why it is so concerning.
How worried should we be?
What happens when machines can outthink us?
Is this the next great leap forward or the next geopolitical arms race?
We'll talk about all that and more.
Don't worry, I've also got your "Puppet Regime."
- US-Iran talks are back on, but nobody knows who's gonna make the first move.
- But first, a word from the folks who help us keep the lights on.
- [Announcer] Funding for "GZERO World" is provided by our lead sponsor: Prologis.
- [Announcer] Every day, all over the world, Prologis helps businesses of all sizes lower their carbon footprint (bright music) and scale their supply chains with a portfolio of logistics and real estate and an end-to-end solutions platform addressing the critical initiatives of global logistics today.
Learn more at prologis.com (gentle music) - [Announcer] And by: Cox Enterprises is proud to support GZERO.
Cox is working to create an impact in areas like sustainable agriculture, clean tech, healthcare, and more.
Cox, a family of businesses.
Additional funding provided by Jerre and Mary Joy Stead, Carnegie Corporation of New York, and... (lively music) (graphic whooshes) (lively music) - If you've been listening to AI experts lately, one phrase comes up over and over and over.
- AGI, artificial general intelligence.
- AGI.
- AGI, whatever you want to call it.
- Superintelligence and AGI.
- Our goal is to make AGI.
- When AGI arrives, you know, I think it's gonna change pretty much everything about the way we do things.
- AGI, artificial general intelligence, the technology holy grail: machines that don't just mimic human thinking, but match and then far surpass it.
But ask those same experts their thoughts on the what and when of AGI, and you get a dozen different answers.
Some say it's when a computer can do any intellectual task a human can.
Others say it's about transfer learning, applying knowledge to different situations.
Some say it's 10 years away, others two.
Today's AI tools can feel very human, at times brilliant, but large language models aren't yet intelligent.
They're highly skilled but narrow.
They're good at mimicking and matching patterns, trained on tasks like writing code or generating images and text.
Artificial general intelligence is very different.
These are machines that understand and learn and adapt like or better than humans.
Many LLMs now arguably pass the Turing test, so experts have proposed new goalposts to understand when machines reach human-level cognition.
Microsoft AI's Mustafa Suleyman says that AGI is when you can hand an AI $100,000 and it turns it into a million on its own.
Apple co-founder Steve Wozniak says AGI is when a robot can make coffee in a stranger's kitchen.
To understand AGI's potential, imagine you have an apple and you wanna make a meal.
Google gives you recipe links, ChatGPT writes a recipe in the style of Shakespeare or Taylor Swift, but AGI knows your tastes, orders ingredients, hires a cook, and has dinner ready for you when you get home.
Or let's say you ask, "How do I turn Uzbekistan into a top 10 global economy?"
AGI laughs a bit and then reads everything ever written about economic development.
It builds models, it runs simulations, it factors in demographics, climate, politics, and crafts a strategy and starts emailing ministers in Uzbekistan, in Uzbek, with a reform plan.
That's the difference.
AGI doesn't just tell you the answers, it figures them out like a human strategist, but one who never sleeps, never eats, doesn't waste time scrolling Instagram, and then gets it done.
What happens next?
Best case scenario, AGI helps us solve climate change, cure cancer, boost productivity, and enter a new golden age where innovation isn't limited by us.
Worst case, AGI decides that human beings are inefficient carbon-based error machines and logically concludes the easiest way to save the planet is to eliminate us.
We are not yet on the brink of Skynet.
We aren't an algorithm away from the Terminator.
Computers still struggle with fundamental human concepts like spatial recognition and sensory perception.
And AGI is also limited by the enormous energy needed to power it.
But dismissing AGI as science fiction is a mistake because virtually every AI expert out there says that AGI is coming fast and society is not ready.
The timeline may be fuzzy, but what is certain is that AGI is a technology that will redefine intelligence, the rules of power, and with it our understanding of what it means to be human.
(dramatic music) Joining me to discuss a new set of predictions for how human-level artificial intelligence could transform our world and how we avoid the worst case scenario is Daniel Kokotajlo, executive director of the AI Futures Project and co-author of "AI 2027."
Daniel Kokotajlo, thanks so much for joining us on "GZERO World."
- Thank you for having me.
- Okay, I read this report, I thought it was fantastic, so I'm a little biased.
But I wanna start with the definition of artificial general intelligence.
How will we know it when we see it?
- So, there are different definitions.
The basic idea is an AI system that can do everything, or every cognitive task at least.
So, once we get to AGI and beyond, then there will be fully autonomous artificial agents that, you know, are better than the best human professionals at basically every field.
If they're still limited in serious ways, then it's not AGI.
- And from the report, I take it that you are not just reasonably confident that this is coming soon to a theater near you, like 2027, but you're completely convinced that this is going to happen soon.
Let's not even talk about exactly when, but there's no doubt in your mind that AGI of some form is gonna be developed soon.
- There's some doubt, right?
I would say something like 80% in the next, you know, five or six years, something like that.
- So, like, in the next 10 or 20 it gets to, like, 99% or is- - No, there's a long tail.
I would say there's- - There's a long tail.
- Maybe it gets up to, like, 90% by the next 20 years or so.
But there's still, like, some chance that this whole thing fizzles out, you know, some crazy event happens that halts AI progress or something like that.
There's still some chance on those outcomes, but that's not at all what I expect, I would say.
- So let's first tell everyone what this report is, "AI 2027."
Explain the contents of the report briefly and why you decided to write it.
- Sure.
So you may have heard, or maybe you haven't heard, that some of these AI companies think they're going to build superintelligence before this decade is out.
What is superintelligence?
It's AI that's better than the best humans at everything while also being faster and cheaper.
This is a big deal.
Not enough people are thinking about it.
Not enough people are, like, reasoning through the implications of what if one of these companies succeeds at what they say they're going to do.
"AI 2027" is an answer to all those questions.
It's an attempt to game out what we think the future is going to look like.
And spoiler, we do think that probably one of these companies will succeed in making superintelligence before this decade is out.
So "AI 2027" is a scenario that depicts what we think that would look like.
"AI 2027" depicts AIs automating AI research over the course of 2027 and the pace of AI research accelerating dramatically.
At that point, we branch.
There's a sort of choose-your-own-adventure element, where you can choose two different continuations to the scenario.
In one of them, the AIs end up continuing to be misaligned, so the humans never truly figure out how to control them once they become smarter than humans.
And the end result is a world, a couple of years down the line, that's totally run by superintelligent AIs that actually don't care about humanity at all, and that results in, you know, catastrophe for humanity.
And then the other branch describes what happens if they do manage to align the AIs and they manage to figure out how to control them even as they become smarter than humans.
And in that world, it's a utopia of sorts.
It's a utopia with a lot of power concentration, where the people who control the AIs effectively run society.
- It's far more detailed about the near future than anything else I've read.
But your views are not way out of whack with all of the AI experts that I know in all sorts of different companies and university settings, right?
I mean, at this point it is, I would say, commonly accepted, even conventional wisdom, that AGI is coming comparatively soon among people that are experts in AI.
Is that fair to say?
- I think that's fair to say.
I mean, it's still controversial, like almost everything in AI.
But especially over the last five years, there's been this general shift from, you know, "AGI, what even is that?"
to, "Oh, wow, it could happen in our lifetimes," to, "Oh, wow, things seem to be moving faster than we predicted.
Maybe it's actually on the horizon, you know, maybe five years away or something like that, maybe 10 years," right?
Different people have different guesses.
One core thing to look out for is when AI progress itself becomes automated, you know: autonomous AI agents are doing all or the vast majority of the actual research to design the next generation AIs.
This is something that is, in fact, the plan.
It's what these companies are attempting to do, and they think they'll be doing it in a few years.
The reason why this matters so much is that we're used to an already... We're not even used to the already fast pace of AI progress that exists today, right?
The AI systems of today are noticeably better than the AI systems of last year and so forth.
But I and others expect the pace of progress to accelerate quite dramatically beyond that once the AIs are able to automate all the research.
And that means that you get to what you could call true superintelligence fairly quickly, not just sort of an AI that can hold a conversation for half an hour and seem like a human, but rather AI systems that are just qualitatively better than the best humans at everything while also being much faster and much cheaper.
You know, this has been described as, like, the country of geniuses in the data center by the CEO of Anthropic.
I prefer the term army of geniuses.
I would say that they're going to automate the AI research first, then they're going to get superintelligence, and then the world is going to transform quite abruptly and plausibly much for the worse depending on who controls the superintelligences and if anybody controls the superintelligences.
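The feedback loop Kokotajlo describes, where AI systems do the AI research and each improvement makes the next round of research faster, can be made concrete with a toy model. The Python sketch below is purely illustrative and is not from the interview or the "AI 2027" report; the single "capability" number, the feedback coefficient, and the automation year are made-up assumptions, not forecasts.

```python
# Toy illustration only (not from the interview or the "AI 2027" report):
# a minimal model of why automating AI research could make progress compound.
# All numbers here are made up for the example.

def toy_research_progress(years=6, human_speed=1.0, feedback=0.5, automation_year=2):
    """Return cumulative 'capability' per simulated year.

    Before automation_year, capability grows at a fixed human-driven speed.
    Afterwards, research speed also scales with current capability, because
    the AIs themselves are assumed to be doing the research.
    """
    capability = 1.0
    trajectory = []
    for year in range(years):
        speed = human_speed
        if year >= automation_year:
            speed += feedback * capability  # AI researchers compound progress
        capability += speed
        trajectory.append((year, round(capability, 2)))
    return trajectory

if __name__ == "__main__":
    for year, capability in toy_research_progress():
        print(f"year {year}: capability index {capability}")
```

Even with these arbitrary numbers, the point is qualitative: once research speed depends on current capability, cumulative progress stops being roughly linear and starts compounding, which is why the interview treats the automation of AI research itself as the key threshold to watch.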
- I wanna take one little step back.
Before we get to self-improving systems, we're now at a place, it seems, where a large amount of the coding is already happening through AI.
Is this the first, I mean, let's say, large-scale job that people should no longer be interested in going into because within a matter of, let's say, six months, a year, you're just not gonna need people to do any coding anymore?
- My guess is it'll be more than six months to a year.
So in "AI 2027," which at the time that we started writing was my median forecast, now I think it's a little bit too aggressive.
I think if I could write it again, I would have the exciting events happen in 2028 instead of in 2027.
But, yeah, I think that one of the first major professions to be fully automated will actually be programming because that's what the companies are trying hardest to achieve, because they realize that that will help them to accelerate their own research and compete with each other.
- Yeah, and make the most money in their field doing things they know how to do, and they're the ones at the cutting edge of AI.
So if you were a major university in the United States or elsewhere, would you simply get rid of your faculties, your departments to teach coding?
- Yeah, potentially.
I mean, it feels kind of strange to be giving career advice or schooling advice in the times that we live in right now.
It's sort of like imagine that I came to you with evidence that a fleet of alien spaceships was heading towards earth and it was probably going to land sometime in the next few years and your response to me was, you know, "What does this mean for the university?
Should they retool what types of, you know, engineering degrees they're giving out?"
or something.
And I'm like, "Yeah, maybe, I guess."
But, like, I think you should mostly be thinking about bigger things, basically.
- The aliens.
- You know, like, what is this fleet of aliens gonna look like and how can we make it go well for humanity?
Right?
- Maybe let me ask first this.
You left OpenAI because you felt like those people that are, they have the resources, they're driving the business models, were acting irresponsibly, or at least not acting responsibly, taking into account these things that you're concerned about.
Explain a little bit about that decision, what went into it, and then we'll talk about where we're heading.
- The short answer is it doesn't seem like OpenAI or any other company is at all ready for what's coming, and they don't seem inclined to be getting ready anytime soon.
They're not on track, and they don't seem like they're going to be on track.
So, to elaborate on that a little bit, there's this important technical question of AI alignment, which is: How do we...
In a word, it's: How do we actually make sure that we continue to control these AIs after they become fully autonomous and smarter than we are?
And this is an unsolved technical problem.
It's an open secret that we don't actually have a good plan for how we're going to do this.
There are many people working on it, but not as many as there should be, and they're not as well-resourced as they should be.
And if you go talk to them, they mostly think they're not on track to have solved this problem in the next couple of years.
So there's a very substantial chance that if things continue in the way that we go, if things continue in the current path, that we will end up with something like what is depicted in "AI 2027," where the army of geniuses on the data center is merely pretending to be, you know, compliant and aligned and controlled but they're not actually.
That's one very important problem.
And then there's another one, which is the concentration of power and sort of who do we align the AIs to problem.
Who gets to control the army of superintelligences on the data centers?
Currently the answer is, well, I guess maybe the CEO of the company or maybe the president if he intervenes.
I think both of those answers are unacceptable from a democratic perspective.
We need to have, you know, checks and balances.
We need to make sure that the control over the army of superintelligences is not something that one man or one tiny group of people get to have.
OpenAI, and also perhaps other companies to varying extents, are just not at all really giving these issues the investment that they need, and I think they're instead mostly focused on beating each other to that, winning the race basically.
They're focused on getting to the point where they can fully automate the AI research so that they can have superintelligences.
I think this is going to predictably lead to terrible outcomes, and I don't trust these companies to make the right decisions along the way.
- And you think these companies do not want the public to be aware of the trajectory that the researchers in their own companies believe is coming?
- Yeah, basically.
I think that the public messaging of the companies is, well, focused on what's in their short-term interest to message about.
So they're not doing nearly enough to lay out explicitly what these futures look like and especially not to talk about these risks or the ways things could go wrong.
- But I kinda I get this when you're talking about, like, Exxon in the '70s, right?
Because their long-term is generational.
But, I mean, here the long-term you're talking about is short-term.
I mean, the people that are making decisions and that are profiting, they're the same people that are gonna have to deal with these problems when they come in, like, just a matter of a couple of years.
So I'm having a harder time processing that.
- Well, they each think that it's best if they're the ones in power when all this stuff happens.
So, part of the founding story for DeepMind was, "Wow, AGI, incredibly powerful.
If it's misaligned, you know, it could possibly end the human race, also, you know, someone could use it to become dictator.
Therefore we should build it first and we should make sure that we build it safely and responsibly."
Part of the founding story for OpenAI was exactly that.
They are really focused on winning and beating each other.
Each of these CEOs thinks that the best person to be in charge of the first company to get to superintelligence is themselves.
- I mean, the thing that was most disturbing about your piece, in many ways, is the fact that for the next two, three years, the baseline scenario is that these companies are going to be right before they're wrong.
They're going to become far, far wealthier and more powerful than they presently are, and therefore they are going to continue to want to, to be incented to reject your thesis right up until it's too late.
Do you think that's right?
- Yeah, basically.
I mean, one of the unfortunate situations that we're in as a species right now is that humanity, in general, mostly solves, mostly fixes problems after they happen.
Like, mostly we watch the catastrophe unfold, we watch people die in car accidents, et cetera, for a while, and then as a result of that cold, hard experience, we learn how to effectively fix those problems both on the, like, governance regulatory side with regulations and then also just on the technical engineering side.
We didn't invent seat belts until after many people had died in car crashes and so forth.
Unfortunately, the problem of losing control of your army of superintelligences is a problem that we can't afford to wait and see how it goes and then fix it afterwards.
We have to get it right without it having gone wrong at all, basically.
- Okay, so given that, and I know you're not a policymaker, you are sort of an AI whiz, but, you know, you did write this paper, you are hoping to see action on the back of it, what are a couple of things that, if they were to occur in the next year, you would say, "I actually feel a little better.
My doomsday scenario is less likely to come to pass"?
- Loads of things.
Right now, the main thing I say when people ask me these questions is transparency.
So I think that what we should do is be trying to set ourselves up to make better decisions in the future when things start getting really intense.
And so I think that information about what's going on inside these companies in general needs to flow faster and in more detail to the public and to the government.
Setting aside the safety concerns, it's important for the public to know what goals, what principles, what values the company is trying to train the AIs to have so that the public can be assured that there aren't any secret agendas or biases that the company is putting into their AIs.
And this is something that everyone should care about, even if you're not worried about loss of control.
But it also helps with the loss of control angle, because if you have the company write up a model spec that says, "Here are intended, here's what we're aiming for"- - And they're not doing that, then you know that there's a gap, obviously.
Yeah.
- Yeah, exactly.
Then you can compare to how the AIs actually behave and you can see the ways in which the training techniques are not working.
Similarly, there should be safety cases, where the company says, "Here is our plan for getting the AIs to follow the spec.
Here's the type of training we're going to do, blah, blah, blah, blah, blah."
And then that plan can be critiqued, you know, and academics can say, "This plan rests on the following assumptions.
We think these assumptions are false for these reasons," right?
So the scientific community can get engaged into actually making progress on the technical problem that I mentioned.
Ultimately, we need to end this race dynamic, otherwise we're not going to have solved the problems in time and some sort of catastrophe is going to happen along the lines of what's described in "AI 2027."
But in the meantime, I think more transparency is good because it gives the public and the government the information they need to realize what's happening and then hopefully end the race.
- Absolutely.
Well, a lot for everyone to think about.
Read the piece: "AI 2027."
You can find it online.
Daniel Kokotajlo, thanks so much for joining us.
- Yeah, thank you.
(light music) (graphic whooshes) - From the AI arms race to the nuclear one, Iran and the United States continued disarmament talks.
And we got an exclusive look at the surprisingly upbeat negotiations.
It's time for "Puppet Regime."
- US-Iran talks are back on, but nobody knows who's gonna make the first move.
(lively music) ♪ After you ♪ ♪ No, after you ♪ ♪ Won't you lift off a sanction or two ♪ ♪ No, after you ♪ ♪ No, after you ♪ ♪ First, turn off that centrifuge ♪ ♪ Come on, Ali, let's get down to business ♪ ♪ Do it deal quick or your regime is finished ♪ ♪ Plus, don't forget, you got a big league problem ♪ ♪ Bibi wants to bomb you and I might not stop it ♪ ♪ Now, look, great Satan, I have to be honest ♪ ♪ It's hard to believe an American promise ♪ ♪ We once had a deal, it was sealed, no drama ♪ ♪ But then you ripped it up just to spite Obama ♪ ♪ So after you ♪ ♪ No, after you ♪ ♪ Another Middle East war and you will be screwed ♪ ♪ Not as bad as you ♪ ♪ Sad but true ♪ ♪ Please take me to your centrifuge ♪ ♪ To stop an Iranian bomb ♪ ♪ It's going to be a whole song and dance ♪ ♪ But there's a chance to avoid the worst ♪ ♪ But only if you go first ♪ ♪ No, you ♪ ♪ After you ♪ ♪ Ah, after you ♪ ♪ No, after you.
♪ ♪ Very crafty, you ♪ ♪ To stop an Iranian bomb ♪ ♪ It's going to be a whole song and dance ♪ ♪ But there's a chance to avoid the worst ♪ ♪ If only ♪ ♪ You go ♪ ♪ First ♪ ♪ No, you ♪ ♪ Puppet Regime ♪ - That's our show this week.
Come back next week.
And if you like what you see, even if you don't, but you think you might have superhuman intelligence and you wanna prove it, why don't you check us out at gzeromedia.com?
(lively music) (lively music continues) (lively music continues) (lively music continues) (gentle music) - [Announcer] Funding for "GZERO World" is provided by our lead sponsor: Prologis.
- [Announcer] Every day, all over the world, Prologis helps businesses of all sizes lower their carbon footprint (bright music) and scale their supply chains with a portfolio of logistics and real estate and an end-to-end solutions platform addressing the critical initiatives of global logistics today.
Learn more at prologis.com - [Announcer] And by: Cox Enterprises is proud to support GZERO.
Cox is working to create an impact in areas like sustainable agriculture, clean tech, healthcare, and more.
Cox, a family of businesses.
Additional funding provided by Jerre and Mary Joy Stead, Carnegie Corporation of New York, and.
(lively music) (triumphant music)