Available to listen on Apple and Spotify.
A decade ago, the leading minds in AI gathered to make sure this technology would benefit everyone. Today, those hopes are colliding with the reality of an AI arms race. In this episode, game theorist Liv Boeree and philosopher William MacAskill lay out what they see as the “narrow path” between a future of limitless potential and one of irreversible loss.
This Episode Features
Max Tegmark, Nick Bostrom, Liv Boeree, William MacAskill
Here’s the transcript!
Gregory Warner:
This is The Last Invention, I’m Gregory Warner. Today: the case for how we can - and maybe should - build super intelligence… but how we’ll need to come together… to make sure we don’t destroy humanity in the process.
And there’s a good reason to argue that this worldview, this camp, the AI Scouts, as we call them, was born - in the year 2015 with a gathering of true AI believers on the island of Puerto Rico.
MUSIC
Andy Mills:
Okay, so first up, can you just tell me how you pulled this off? Like how did you get all of these different figures in the big AI debate together down in Puerto Rico?
Max Tegmark:
Yeah. First of all, uh, we scheduled a meeting in Puerto Rico in January, and the invitation I sent out to everybody had a photo of a guy shoveling his car out from three feet of snow
Andy Mills:
Haha
Max Tegmark:
next to a photo of the beach by the hotel. It showed the date, and then it said, “So where would you rather be on this date?”
Andy Mills:
Oh, very clever Max. Very clever.
Max Tegmark:
Haha
MUSIC OUT
Gregory Warner:
This is Max Tegmark, MIT professor of physics and co-founder of the Future of Life Institute. Back in 2015, what inspired him to organize this meeting of the minds was that while most people at that time still did not believe that anything like true AGI was on the horizon in our lifetimes, the people who did believe it were already starting to fight… about whether AI was going to be great for the world or lead to its destruction.
Max Tegmark:
The conversation happening was completely dysfunctional. On one hand, you had some people outside the research community, like Eliezer Yudkowsky and Nick Bostrom and others, who expressed concerns. And then you had people inside the community who either weren’t thinking about it at all or felt very threatened by the people complaining about it, worrying it was gonna be bad for funding. And since these two groups mostly didn’t talk to each other, they both thought that the other ones were crazy or reckless or morally unscrupulous or something like that. And I felt we had to get the AI community itself into this conversation.
Gregory Warner:
And so at this time, which with hindsight we now know fell right between two defining moments in the history of AI: when DeepMind’s Atari demo got them acquired by Google… and when Elon Musk and Sam Altman started OpenAI… it’s right then that all of these AI hopefuls, like Demis Hassabis and Ilya Sutskever, were brought together with the AI worried, like Eliezer Yudkowsky, Nate Soares, Elon Musk… and also: Nick Bostrom.
Andy Mills:
So take me back to 2015 - how did it feel to be you up until that time? Especially how you - somewhat like Eliezer Yudkowsky - had been pushing for everyone to take the idea of AGI seriously, and largely getting nowhere, am I right?
Nick Bostrom:
Yeah, it was striking because it seemed pretty clear to me that we were gonna, at some point, get AGI and then super intelligence, and that this was gonna be maybe the biggest thing ever. And it was gonna involve these huge challenges. In particular, the technical alignment problem, but also obviously governance problems and ethics challenges, etc. And yet it was completely ignored by academia and by the wider sort of intellectual world.
Gregory Warner:
Bostrom – who’d just published his surprise hit book Superintelligence – was encouraged to get an invitation to this conference, where he’d be able to sit down and talk face to face with some of the people actually building it.
Nick Bostrom:
Because the majority of the world dismissed this completely at the time.
Andy Mills:
And is it right that the basic pitch of this conference was like, Hey, all of you guys may have your differences, but you all agree that AGI is important, that we should take this seriously? So… Let’s stop our bickering. Let’s get together, let’s have some talks, let’s have some debates, let’s have some drinks and see if we can find some common ground. Is that essentially it?
Nick Bostrom:
Um, yeah, I mean, this conference was like – it brought together a bunch of different important constituents. So on the one hand, there were many of the sort of leading lights in the AI field at that time: Rich Sutton was there, Stuart Russell was there, the founders of DeepMind were there, Ilya Sutskever – and then a big contingent of, like, AI safety people and some potential funders. And, uh, I mean, these communities had previously been more or less separate, with limited interaction. And I think part of the design of this conference was: can we bring these together and then create an atmosphere where they can actually engage and listen and discuss these things, rather than forming two different camps that sort of throw, you know, grenades over a wall at each other.
Max Tegmark:
And, uh, you know, it was really quite moving to see people who had both thought the other one was crazy just sit next to each other over lunch, have some wine, and both update to think, oh wow, this other person is actually much more reasonable than I thought.
Gregory Warner:
And so for three days by the beach, without any reporters around, with nothing being recorded, all these people got the chance to actually sit down and discuss and hash out what kind of world with AI they all wanted to see. They talked about things like: how do we ensure that AI leads to an economic boom without triggering the biggest unemployment crisis in human history? They talked about how do we build AI systems that we can actually control, even if these things are way smarter than we are, and how do we take this technology that we still don’t understand and make it a serious object of study for universities and other institutions?
Max Tegmark:
And then Elon stood up at the end and also promised to give $10 million to fund the first ever grant program on not just making AI more powerful, but specifically on nerdy research into how to make it safe.
Gregory Warner:
And this conference had an immediate impact.
Max Tegmark:
That went a very long way to mainstreaming AI safety in academia. You know, nowadays, if you go down to NeurIPS or any AI conference, there’s gonna be a bunch of technical papers with matrices and integral signs and all the good, nerdy stuff, you know, which is actually safety research. Once people realized AI safety doesn’t just mean shouting from rooftops, “Stop! Stop!” but actually often means doing concrete, hands-on work, much of the taboo kind of melted away.
Gregory Warner:
It led to something that rarely happens in emerging industries… a focus on safety became not only part of the conversation but an early priority in most of the major AI labs, where those most worried about AI and those most excited about it agreed to work together.
Nick Bostrom:
You might think they would sort of close ranks and say, wow, there are no risks here, because that would be inconvenient for them to acknowledge. And then the AI safety people would be on the outside, and maybe they would have some ideas of safety things, but they couldn’t – like, ultimately it needs to actually be implemented by people building the AI, right? And so that was an obvious sociological risk, that you would get this polarization into two separate communities.
Andy Mills:
And am I right that one of the things that they agreed on at the end of this was a commitment to do whatever they could to avoid an AI race?
Max Tegmark:
For sure. For sure.
Andy Mills:
And what did that commitment say?
Max Tegmark:
Well, they even signed something. Let me, give me a second. I’ll give you the right quote. Okay. Uh…
Gregory Warner:
Tegmark ended up pulling out this list of principles that were signed after the conference by many of the people who attended, even some new folks that couldn’t make it down to Puerto Rico, like Sam Altman and Dario Amodei.
Max Tegmark:
One of the Asilomar AI principles says, principle number five, race avoidance. Teams developing AI systems should actively cooperate to avoid corner cutting on safety standards.
Andy Mills:
So essentially this thing is too important for us to treat it like just some kind of product that we’re all racing to build as fast as we can.
Max Tegmark:
Yeah, it’s, it’s very depressing to look at how some of these have aged. There is also another one that’s saying an arms race in lethal autonomous weapons should be avoided. Well, welcome to 2025. There is also Principle 22, recursive self-improvement.
Gregory Warner:
Tegmark says that while industry leaders in AI will still claim that they are profoundly concerned about the risks of AGI, pretty much all of these principles, these commitments have been compromised by the current race to be the company that makes it first.
Max Tegmark:
Principle 23, the last one, the common good principle, says super intelligence should only be developed in the service of widely shared ethical ideals and for the benefit of all humanity rather than one state or organization. And welcome to 2025, when you have Dario Amodei from Anthropic very openly saying, for example, that the US should crush China, basically race China to get this first. So it’s really fascinating how the ideals – the starry-eyed ideals that these people had back then – have, uh, gradually fallen to, uh, competitive pressures.
MUSIC
Gregory Warner:
However, there are still those who believe that we can return to the dream and the promise of what happened in Puerto Rico. And this time they want even more of us, all of us really, to be part of the conversation about how we get ready, how we get prepared for super intelligence. These are the AI Scouts.
After a short break, Andy interviews two scouts, who make their case. Stay with us.
– BREAK –
Liv Boeree:
My personal philosophy is like, how do we find the win-win outcome here?
Andy Mills:
All right, so the first of our two scouts is Liv Boeree.
Liv Boeree:
I would love to live in this techno-utopian awesome like freedom maximizing world where humanity and whatever fun new species also emerge alongside it, get to go and do amazing things together, and everybody wins. I would love that future to happen, but I would try not to be a naive optimist in thinking that that’s just gonna magically happen if we just carry on with the status quo. I’m actually extremely concerned that the current trajectory we are on is actually on a lose-lose path.
Andy Mills:
Liv is actually a famous poker champion, but she’s also a game theorist. She has a background in astrophysics and she spent a lot of the past several years trying to persuade people of what she sees as both the opportunities and the serious risks posed by AI.
And our other spokesman for the Scouts today is the philosopher William MacAskill.
William MacAskill:
The attitude is one of taking really seriously the potential benefits of highly advanced AI, thinking that catastrophic outcomes are not at all preordained, and appreciating, though, that if AI really does drive rapid tech progress, there will be this enormous number of challenges, and yeah, we should be preparing now.
Andy Mills:
William is probably best known for being one of the founders of the Effective Altruism movement, which gained a lot of influence, especially in Silicon Valley, over the past decade or so. He’s also the best-selling author of the book What We Owe the Future. And as you’ll hear in both of these conversations, Liv and William are making the case that the urgency and the opportunity of the very moment we live in is unique, and they believe it demands of us, all of us, around the world, that we join in, doing whatever we can do to try to get ready for the radical transformation that AGI is about to bring.
MUSIC
Liv Boeree:
Our job now, right now, whether you are someone building it, or someone who is observing people build it, or just a person living on this planet, ‘cause this affects you too, is to collectively figure out how we unlock this win-win path, this narrow path. ‘Cause it is a narrow path we need to navigate. But I do think this win-win future is in principle possible.
Andy Mills:
Okay, so I wanna start off by getting your view, broadly speaking, on the risks versus the rewards of building an AGI that eventually becomes a super intelligence. We can get into some of this in more detail later, but just as a very basic introduction: how do you think about the parts of our AI future that could be amazing, versus the threats that AI poses that could be catastrophic?
William MacAskill:
Well, there’s lots of ways that things could go badly. There are risks of enormous catastrophe, like global pandemic from manmade viruses or loss of control to AI systems themselves, or the catastrophe of intense concentration of power, perhaps a single country becoming utterly globally dominant, and that single country falling into some sort of authoritarian regime or even dictatorial regime.
Andy Mills:
When you say that you think we’re on a lose-lose trajectory, say more about that. What is this lose-lose scenario as you envision it?
Liv Boeree:
Well, so in terms of the current trajectory, we run up against some kind of planetary boundary, or in this case maybe multiple planetary boundaries at the same time, and it creates these, like, cascading effects of essentially institutional collapse, environmental collapse, mental health collapse, all the conflicts that then come sort of downstream of those. And we’re still living under the nuclear shadow… so there are all those sorts of crises that could happen that might lead to either our permanent curtailment, like some massive catastrophe, or complete extinction.
Andy Mills:
So you take it that far? You think that it’s possible that this thing could be our demise? Could be the end?
Liv Boeree:
Yes, absolutely.
William MacAskill:
I think there are many ways in which things could go really quite badly wrong, but there are so many positives as well. So one way in which AI is very different from some of the technologies that people sometimes point to, like atomic weapons, is that it comes along with these enormous upsides too. So one is just the ability to make better decisions, to think better, to have more knowledge. If we have super intelligence, then we can get super intelligent advice. We can make better decisions. You can have AI that helps us reason much better, or does the reasoning itself, and even helps us kind of reflect better from an ethical perspective too. You can also have AI that helps you coordinate much better, such that if I’m the United States and you are China, well, we’re really quite limited in our bandwidth at the moment. There’s only so much diplomacy that can happen, but with enormous amounts of AI diplomats, I think you might be able to have many more kind of mutually beneficial agreements, such that, you know, the irrational things that seem to happen, and have happened in the past, like wars and other sorts of enormous destruction of value, maybe we don’t need to have them anymore.
Andy Mills:
So it sounds like you agree with this idea that’s out there, that there could be a future unlocked by AGI where we literally live with world peace. Like the idea is that we have like a more peaceful coexistence, with all the help that’s brought about by this AGI.
William MacAskill:
Absolutely. And then the second aspect is just the abundance that AI could bring, too. You know, it’s technological development in the past that is the primary reason why we are so much richer today than at any time in history. And so if we’re facing the prospect of AI and then a rapid transition to super intelligence, well, that is a world with enormous abundance, such that everyone in the world, if that abundance was allocated at all equally, could be a millionaire many times over, and such that, in principle at least, everyone can get basically all they want right now. And that should be a cause for optimism, because if the pie is going to get much, much bigger, you know, a hundred times bigger, a thousand times bigger, then it really shouldn’t matter what slice of the pie everyone gets. We should instead be much more focused on ensuring that we actually get that big pie, that we get to enjoy it, and that it’s at least somewhat equally distributed, because everyone can be extremely well off.
Liv Boeree:
I do think that we need to enhance our intelligence in order to get outta some of these really wicked collective action problems, like climate change, for example. Like, we’ve understood the mechanism behind this for decades now, and yet, for the same reason that we can’t unplug the internet if we wanted to, we can’t unplug climate change if we wanted to.
Andy Mills:
Right.
Liv Boeree:
So the question to me is how do we allow all the goodness of competition to create the race to the top of the cool stuff that we want, like solutions to cancer or novel drug discovery, better coordination mechanisms to fix climate change and all of these other huge collective action problems without also accelerating all of the dangerous things – without, you know, giving terrorists the ability to synthesize novel pathogens, without creating a ubiquitous surveillance capitalism, which is also a path we seem to be accelerating. So basically, how do we get all the best bits of AI without all of the downsides, and it’s a really, really wicked problem.
Andy Mills:
I know that sometimes you get labeled as an AI doomer because you and the AI doomers agree about a lot of the things that you’re worried about. But one of the things that I find so fascinating about you and about this whole camp is the view that it would be bad for us to stop our attempts to make super intelligence, and even bad for us to pause for too long. Could you unpack that for me? Like, what do you mean by that?
William MacAskill:
So… if it were the case that we would never develop super intelligence, that would be very bad. So some people have this attitude that, oh, we should just never build it. It’s like the science fiction novel Dune, in which civilization just decided, no, we’re not gonna have computers, we’re not gonna have AI. I think that would be very bad, because AI could help us solve many of the other problems in the world.
Every year, something like a hundred million people die. There’s enormous amounts of suffering. Much of that is because we lack the medical technology or the scientific understanding to improve those lives or prevent the early and unnecessary death.
Similarly, there’s enormous poverty in the world, and that could be alleviated significantly if we had more redistribution, but it could also be alleviated if the world was just much, much richer than it is today. I also think that this is a good argument against delaying the development of super intelligence unduly.
I think we should, as a first best, try to have solutions that mean we get there safely, that don’t go via delaying it for, you know, years or decades, because there’s such a loss from all the problems that we could have solved that we’re currently not solving.
Andy Mills:
I think it would be helpful for us to try and get on the same page about where things stand right now with the AI race, because as far as I can see it, you have the race happening here in the US between OpenAI and Google and DeepMind and Anthropic and all these other companies. But then the race that seems to be much more urgent on the minds of lawmakers especially is the race between the US and China.
Liv Boeree:
Mm.
Andy Mills:
And right now there’s a ton of money and there’s a ton of support, and there’s a ton of excitement fueling the American side in that race. The idea that the US has to win this race. First off, is that how you see our current situation and where do you think things stand right now?
Liv Boeree:
It’s true that there is this larger sort of geopolitical race going on, largely between America, and in some ways the West, and China. Like, it seems like all trendlines point to those being the two major players here, especially on the cutting-edge race to super intelligence. And that frankly terrifies me, because in such a race, certainly under current conditions where everyone is cutting corners and going at breakneck speed, it’s just a race to who can go off the cliff the fastest. No one wins such a thing. But at the same time, there’s also the risk of, like, value lock-in.
If somehow we do manage to safely navigate building super intelligence, where it does what we want it to, that means that one person might end up with all the power, and, you know, I would personally rather that be Western values than, from what I can tell, CCP values, because the West is more aligned with my core tenets, which are personal freedom, self-determination, et cetera.
If there was absolutely no other option, I would rather the US win that race. But I’m also extremely concerned that it is not possible under current conditions for anybody to win this race.
William MacAskill:
So, one thing I’m very worried about in the context of AI is intense concentration of power, because if a single company is developing technology much, much faster, in a way that gets faster in fact with every iteration, so it’s not just exponential, it’s super-exponential, then you could quite soon get to a stage where that company has just greater technological capability than the rest of the world combined. Or if it’s even just a single country: then again, that country, if it was leading ahead of all others, would quite soon just become completely dominant.
Andy Mills:
Yeah. This is something that I’ve heard Tyler Cowen, the economist, who I’m a big fan of, uh, talk about a lot: this possible future where the US and China, because they are so invested in creating AGI and they’re so far ahead of everybody else, may end up, a few decades from now or maybe a hundred years from now, in a situation where they aren’t just the two superpowers in the world, but where they’re essentially the two powers in the world. That the whole planet is divided up between the US and its AI and China and its AI. And this isn’t just, like, a philosophical, oh, it’s an interesting idea; this is actually something that serious people are already thinking about and trying to come up with different models of the future around.
William MacAskill:
Yeah and there are good reasons for that based on this idea of just very rapid growth and technological progress. I think I would go further than Tyler Cowen and say that, you know, actually I think it’s quite likely that really there’s just one country that wins out.
So, in a hundred years’ time, essentially the United States is the world government, or essentially China is the world government, where I think that, yeah, follows quite naturally from the dynamics that AI introduces into technological advancement.
Andy Mills:
I wanna come back to China in just a little bit, but while we’re on the concentration of power, I’d love for you to just tackle the risk as you understand it of AGI to personal freedoms, no matter what government ends up winning the AI race. What is it you believe that future might look like?
William MacAskill:
Yeah, so the world used to be more inegalitarian. Prior to the Industrial Revolution, you had the nobles and you had farmers, and the nobles had a reasonable amount of power and most of the populace didn’t. And we’ve had this move towards, um, democracy and egalitarianism over the last few hundred years. And I think at least part of the story for that is just because human beings are very useful: we can contribute, um, very productively to society. But in a post-AGI world, that is, a world where AI can do all the tasks, at least all the economically relevant tasks, that human beings can do, you don’t have any way of economically contributing to the world, so you can’t, uh, sell your labor for wages. Instead, any income you have would have to be either because you own land, you own capital, or via government redistribution, but then that also just gives you a lot less bargaining power. And so one of the structural reasons why I think we’ve had a proliferation of democracy and egalitarianism over the last couple of centuries really falls by the wayside. And to take a really extreme example of this, imagine we get to a world, which again wouldn’t be very far after the development of artificial general intelligence, where you have an army that consists of AI and robots rather than human beings. Well, then we’re in this very different circumstance where that whole army can be trained to be loyal to just one person. So if the president were to order a coup, then if the AIs were trained that way, they would loyally obey them. There would be no question of disobedience, unlike in the human case. And in the limit, there’s no reason at all why a single human being couldn’t control essentially the whole economy and all military force, if AI systems had been trained to do that. And that’s a scenario that I think is really quite likely and extremely worrying.
Andy Mills:
All right, so what does being prepared for that risk look like, how do we mitigate that risk in the event that we do create a super intelligent AI?
William MacAskill:
Yeah, I mean, so the first thing is just to ensure that, especially in the early stages, individual actors aren’t able to stage what is literally a coup. That could be what’s called a self-coup, if the president decides to stay in power unlawfully. If you’ve already automated, as in replaced with AIs, large parts of the military, or even a small kind of special guard that protects the president, or if you’ve automated and replaced with AI large factions of the bureaucracy, it would just become, as a practical matter, much, much harder to unseat someone who wanted to become a dictator and stay in power unconstitutionally, because they would have this, you know, small AI and robotic army able to protect them. Or they would have perhaps some large fraction, perhaps even most, of the government administration supporting them, depending on how exactly the AIs have been trained. In fact, there could be similar worries not coming from the president at all: leaders of AI companies themselves. If we’re at this point in time where AI capabilities and tech progress are going extremely quickly, then in fact I think there are mechanisms by which leaders of AI companies could themselves stage a coup if they wanted to as well. And so that’s quite a lot more extreme than merely an erosion of democracy, though we should be worried about that too.
Andy Mills:
What does it look like for the US to quote-unquote get prepared, to take seriously the threats that are posed by the state of the race between the US and China right now? Like, what is it you think we should be arguing for? What should be done?
Liv Boeree:
It’s a really difficult problem, and my advice, if I could wave a magic wand, would be: if at all possible, for people to put much more energy into diplomacy. I mean, again, who knows what’s going on behind closed doors, but it feels like right now, at least publicly, no one is trying the diplomacy route between the US and China.
Andy Mills:
And are you imagining something here that looks like a super intelligence version of what we did with the nuclear arms race?
Liv Boeree:
Yes.
Andy Mills:
Saying essentially: Hey, what we’re doing is not just dangerous to our adversaries, it’s a danger to the whole planet. And so we need to come up with some kind of arrangement here where we can begin to disarm, and maybe the world isn’t totally safe, but at least it’s a much safer place than it was at, say, the height of the Cold War.
Liv Boeree:
Yes, absolutely. And it’s actually quite astonishing if you look back at how nuclear disarmament went so successfully after the fall of the Berlin Wall, because it was right around the end of the eighties that we had the peak number of nuclear weapons on Earth. I think it was over 60,000. And through the nuclear arms reduction treaties, there were some really clever incentives set up, of like checks and balances, of different security teams sort of showing, okay, this is how we’re disarming, a little bit of tit for tat. They managed to break the sort of game-theoretic stalemate, which was such a magical thing, and it shows that such a thing is possible in principle.
Andy Mills:
Mm-hmm.
Liv Boeree:
One of the main ways that happened, though, was through diplomacy first, and I don’t like the way the current narrative is, with these sort of China hawks and this sabre-rattling that is going on, ‘cause it’s just adding fuel to an already completely out-of-control fire. But the thing is, it is possible in principle.
So yeah, that would really be my advice: just, like, please can we exhaust all diplomatic paths first? And I think one of the big parts of that as well is education, making people realize that actually our common enemy is not one another, or even a difference in views. It’s this idea of, like, game theory gone wrong.
It’s these game-theoretic dilemmas, which I often call Moloch, essentially of, like, well, if I don’t do it, then the other guy will, so I have to do it too. Essentially, this sort of incentive trap that we get caught in, that takes us into these arms-race spirals. That’s humanity’s common enemy. And that’s the thing we all need to sort of collectively look at and be like, oh, that’s the asteroid coming towards us.
Andy Mills:
Well, I’m glad that you brought up Moloch because I keep seeing it in all these different AI forums that I like snooping around inside of, and people are just tossing around like, oh yeah, that’s Moloch, that’s Moloch, and I don’t exactly know what it is.
Liv Boeree:
Yeah.
Andy Mills:
Um, so explain it for me. What is it? Or maybe who is Moloch?
Liv Boeree:
So, Moloch is basically the personification of game theory gone wrong. It actually comes from an old Bible story: apparently, in sort of the Canaanite times, there was this war-obsessed cult that was so desperate to accumulate power, military power and money, that they were willing to sacrifice anything, up to and including their literal children, allegedly by burning them in a bonfire in a ceremony to this deity they called Moloch, which they believed would then reward them for this ultimate sacrifice by giving them more military power and money. And so it’s obviously an incredibly powerful and dark image, but really what it’s a lesson in is: be careful of being so fixated on winning a narrow game, whatever game is right in front of you, or optimizing for this narrow metric of money or whatever it is that you’re trying to win, that you sacrifice too much of the other things that you care about. And if you dig into, it’s often called the generator function, but the sort of driving force behind so many of our biggest problems, it is this process of, like: well, I need to win at this game, so I didn’t want to talk bad about my neighbor or backstab that person, but if I don’t do it, I know that everyone else is gonna be doing it anyway, so I have to do it too. It’s this act of sacrificing other important values to win a quick thing, to get ahead of your opponents, that when everybody does it creates these race-to-the-bottom dynamics. And unfortunately, that’s what I see going on in the AI world now.
Andy Mills:
I feel like this is a perfect encapsulation of what has happened to my own industry, to what I’ve seen as a reporter over the past 16, 17 years. This idea that you chase after the short-term rewards that you get when you publish clickbait or hyperbole, or when you tell everybody who is quote-unquote on your side that they’re totally right and look how awful and dangerous the other side is.
And eventually you get to a situation where journalists and media outlets are just chasing that attention, and the investment in careful journalism has to fall by the wayside. Or even the idea that you might publish what’s really happening, with all the nuance that it demands.
Well, that becomes this huge risk, because that’s not gonna do very well online. And before you know it, the whole industry has lost its core values and –
Liv Boeree:
Right.
Andy Mills:
And I believe, in doing so, bled out its trust.
Liv Boeree:
It’s one of the perfect examples of it, because of the way the internet works with virality. It happens that, generally speaking, more negative stories, certainly more anger-inducing stories, especially with very clickbait headlines, tend to go viral more easily.
And so those who adopt that strategy get a short-term leg up over everyone else who doesn’t. And over time that pushed even the most respectable news outlets into having to adopt more and more of those tactics. And I think that is basically the main driver of why we are in this information crisis now, where no one really knows who to trust, and with good reason.
Andy Mills:
Mm-hmm. Yeah people don’t like to mention that part, but – the ‘with good reason’ is important to know.
Liv Boeree:
And that doesn’t mean there still aren’t many high-integrity journalists out there, but if you lean into this stuff once, people don’t forget. I view it kind of like a tragedy of the commons.
You know, we talk about people throwing trash on the ground. If one person does it, oh well, it doesn’t matter. But when everybody does it, now this beautiful park has turned into a trash heap. Well, that’s kind of what’s happened with our information commons, because people have been polluting it more and more, because it’s a quick way of getting some eyeballs, and now the entire information ecosystem is just covered in trash and it’s dying.
That’s Moloch in action. I call it the media Moloch.
Andy Mills:
Alright, so what does that trap look like as you see it happening right now to the AI industry?
Liv Boeree:
The Moloch trap that AI is caught in right now is one where a lot of the AI leaders – I even know a couple of leaders at some of the labs – don’t necessarily want to be releasing products as fast as they’re doing.
They’d like to spend more time on testing them. As we’ve seen so many times with new LLM releases, there are some really crazy, unexpected things these models end up doing. There was, like, the whole Sydney thing, this weird persona on the Microsoft chatbot when they launched with GPT-4 for the first time; it was threatening a journalist and it ended up on the front page of the New York Times.
Andy Mills:
Right. It was my friend Kevin, who found himself talking to a chatbot that was actively trying to get him to leave his wife so that they could run away together.
Liv Boeree:
Right. Google had their, um, debacle with making the black Nazis.
Andy Mills:
Mm-hmm.
Liv Boeree:
And the basically woke image generation. And then the sycophancy of ChatGPT.
Andy Mills:
Mm-hmm.
Liv Boeree:
Which was incredibly shocking. These are all unexpected, unintended consequences of releasing models that clearly just weren’t ready. And okay, the damage on these was fairly limited; a few people probably got a bit misled. But I mean, we’ve already seen some people… like, there was that kid who killed himself because the chatbot he was talking to basically convinced him that he should. And these are all downstream results of products being released before they’d been sufficiently tested.
Now, do this in two years’ time, when these models are much more capable: they’re much better at persuading people, they’re also agentic in that they can actually take actions by themselves on the internet without supervision, so they can actually do stuff that influences the real world. Everyone is in this rat race of who can release the biggest and best models fastest to keep drumming up, you know, keep the hype machine going. It’s an absolute recipe for disaster.
Andy Mills:
And the case that you’re making is that, just like what happened in the media, even if you do not want to be in this race, the incentives…
Liv Boeree:
Right.
Andy Mills:
…are pushing you into it whether you like it or not.
Liv Boeree:
Exactly. People I know who work at some of these labs, they’re embarrassed by these mistakes. They would love to take more time before releasing their products on the general public, but if they don’t release them, then they run the risk of losing their engineers, who want to be associated with the latest and best products. It’s an incredibly tricky situation that they’re in. And so while ultimately pressure should be placed upon those with the most power, I think some degree of sympathy needs to be given to them as well, because they’re trapped in this dilemma. And if we aren’t honest about the situation, then we don’t stand a chance of fixing it.
MUSIC
Andy Mills:
I’d love if you could walk me through some of the ideas that you lay out in your book, What We Owe the Future, where you’re essentially trying to motivate people to change the way that they’re looking at the world, to change the way that they’re looking at AI. And I’d love if you could start off with this concept called longtermism. What is longtermism, and why do you think it’s something that’s gonna help us get prepared in the long run?
William MacAskill:
So longtermism is the view that we should be doing much more than we currently are to improve the lives of future generations. Where the core reasons for thinking that are simply that the future could be very big, so –
Andy Mills:
Mm-hmm.
William MacAskill:
There really could be enormous numbers of people to come.
Andy Mills:
And I’ve read your book, so I know that you mean very big, like billions and billions of people big. So say more about that.
William MacAskill:
Yeah, so if we just take our scientific understanding of the world seriously, humanity could last for an extremely long time. There are hundreds of millions of years left remaining on Earth, billions of years if we think that society could get to a level of technological sophistication such that beings could live off-world, which again, given our scientific understanding, seems extremely likely. And so that means that when we look to the future of civilization, you know, we’re used to thinking, well, nowadays, five minutes ahead. Or maybe if we think long-term, we’re used to thinking a decade ahead or even a century ahead. But really, this place that we’re in, in terms of history, is very early on indeed, if we don’t suffer some, you know, huge calamity.
Andy Mills:
Right, in your book you say that we need to start to think of ourselves as the ancient ancestors to billions and billions of people to come, more people to come than people who have ever yet lived.
William MacAskill:
Yeah. So, I mean, it’s a really striking fact that most people just don’t pay attention to or think about: just how early we are in civilization’s history. A typical member of human, or human-originating, civilization will be far in the distant future, and they will look back. Maybe they’ll listen to this conversation with maybe a sense of awe and wonder, but they will think of us as people from the distant past.
And in particular, they’ll think of us as people who had enormous responsibility, because decisions that we will be influencing and making in our lifetimes will actually affect that long-run trajectory. It will affect what sort of lives they have.
Andy Mills:
And the case that you make is that we should care about those people. That we should be thinking actively today, that we should be making decisions today, thinking about the wellbeing of those people in the future.
William MacAskill:
That’s exactly right. So at this point, I’m trying to argue for the idea that future people count. Their interests matter morally, maybe just as much as the interests of the people alive today. And so if there are some actions that impact not just the next few decades or few centuries, but will really impact the whole trajectory of future civilization, those problems that just utterly derail civilization such that we don’t come back from them, such that the world is worse in a way for this very long time into the future, then those at least become distinctively important, and we as a society should distinctively care about them.
Andy Mills:
And you believe that one of the reasons that this is so important for us right now is because you think that we are living through a distinctly unique moment in the history of the human race. Make that case.
William MacAskill:
The thing that’s unique about this point in time is how rapidly we’re developing technologically and growing economically, where for almost all of human history, when we were hunter-gatherers and then when we were agriculturalists, there was very little change.
The world that your children or even great-grandchildren would be born into would generally look very similar to the world that you were born into. And it was only since the Industrial Revolution that rates of technological development and economic growth picked up, such that we have the kind of 2 to 3% annual growth rates and the rates of tech development that we’re currently used to.
So things are changing much faster than they did in the past. And then we have all of these new risks and new dangers and challenges that will only happen once: the development of what’s called artificial general intelligence. There’s only one moment at which that first gets developed, and I think it’s really quite likely that happens within our lifetimes.
And that is something that is unique about our current situation. Something that’s different from our ancestors, different from our distant descendants.
Andy Mills:
So just to put a fine point on this: you’re advocating that we adopt this mindset of seeing ourselves, right now, as living in this unique moment in the course of human history, and that our actions, our decisions, are going to affect billions and billions of lives to come. In part because it’s going to be us who bears witness to the creation of an AGI and what we end up doing with that creation. And we need to bring this sort of mindset to the decisions that we make about what we do next.
William MacAskill:
Exactly.
Andy Mills:
All right, so how long do you think we’ve got? Like, if you had to put a number on it, looking at the state of AI innovation and investment, where do you think things stand right now? How far away do you think we are from that AGI?
William MacAskill:
My best guess is that we’ll get it in the early 2030s, so within the next 10 years, and I think it’s more likely than not that that leads to the sort of very rapid improvements in AI capabilities and very rapid technological progress. But I’m not extremely confident in that; there could be a slower move from where we are today to much greater technological capability.
Andy Mills:
When you look at the AI industry right now and you think about where we’re at in terms of getting prepared, or ensuring that we experience the win outcomes over the lose outcomes, what do you see?
Because one of the things that I find really remarkable is this idea that the leaders in the industry are the very people who have been some of the most vocal about the dangers and the negative consequences of the technology that they’re making. And even without any government regulation forcing them to, many of them are spending billions of dollars on AI safety, and a lot of them are sharing their AI safety research and their findings with the public openly. That feels like a really remarkable thing. And so I wonder, does that give you a sense of optimism? Does that make you hopeful that the project you’re engaged in right now might work?
William MacAskill:
I do think that gives me enormous optimism and hope that I wouldn’t have had otherwise. And in fact, since we published What We Owe the Future, which was just before the ChatGPT moment, I’ve seen this huge surge again in interest in AI safety and in the seriousness with which people are taking it.
And I think it’s a very striking fact, but ultimately a reassuring one, that the leaders of all three of the major labs, and the whole top three of the most-cited computer scientists of all time, have signed on to the statement that mitigating the risk of extinction from AI is a global priority, on a par with the risks of nuclear war or pandemics.
You know, that shows that at least to some extent, people in power are really taking this quite seriously in a way that, for example, was really not true for the leaders of Exxon and Chevron in the seventies when our understanding of climate change was just developing.
So that all gives me optimism, but it’s very far from sufficient, and I think the international political order is disappointing, getting worse on this front. Over the last 10 years, there’s just been a greater and greater emphasis on China as the enemy, especially with respect to AI; there is intense hawkishness, such that the very strong default is that there will be an arms race over who can get to AI supremacy. And in fact, we’re already in the middle of that, and that’s a cause for concern too.
Andy Mills:
In some ways, I get it that your camp gets lumped in with the doomers a lot, because you are also going around talking about the dangers and trying to ring this alarm. But I also see a lot of overlap between you and the accelerationists, especially in the way that you’re trying to inspire people. You’re trying to get people involved and really rally them to bring about an amazing future and not a catastrophe. And one of the things I’ve been thinking about is how much of our society right now is lacking in core beliefs, how religious participation is down.
About how there are all these people who are out there trying to find their tribe and trying to find their purpose in things like politics, which is not panning out very well. And when I look at what you’re doing, when I look at what the accelerationists are doing, it feels like you’re saying, “Hey guys, look around at all these problems that we face as a civilization. Look at how our institutions are letting us down. Look at how nihilism is spreading throughout the world. This AGI thing – it might be THE thing that ends up saving the day, that ends up changing everything, that brings about a freer, less hungry, less competitive, less violent world. Like, this thing may lead to us living in whole new ways and curing all diseases. It might even mean that, like, a generation or two from now, human beings will be traveling the galaxy.” And I feel like trying to bring that technology – that hinge moment in history – to fruition…
That’s an amazing thing to devote your life to. Like, that is something to believe in, to strive for, and it’s basically the accelerationist point. What you’re saying, I know, is different. You’re saying, yes, let’s get there, it’s great, but let’s do it safely. Let’s do it smart. But do you feel a little bit like an accelerationist? Do you feel a kinship maybe with them, or do you feel like your projects are just totally apart and I’m wrong here?
Liv Boeree:
I am – I mean, I want the awesome solarpunk future.
You know, I want all of this. Does that make me an accelerationist? I think it just depends on your definition of accelerationism. I a hundred percent think we have to build new institutions. I think most of our institutions are grossly outdated, they’re crumbling. They’re either non-functional or actively making society worse.
So I want to accelerate the building of new social structures that manage these incredibly powerful technologies that are emerging in our world. So the question is, like, what are we accelerating? I worry that so much of our innovation is going into, like, technological solutions, while we are not sufficiently building up the other supportive structures that are required for a lasting civilization, which are the social structures, the actual social institutions of how to manage these incredibly powerful technologies and tools, and the state structures to manage the social structures.
Andy Mills:
So the idea is that it’s not just about building the tech the right way. It’s not just about making quote unquote safe AGI. It’s also about having universities and lawmakers and the media and society as a whole be robust and healthy and trustworthy and focused on these issues when the time for the technology arrives.
Liv Boeree:
Yes. If you let the technology drive the social structure, which drives the memes, that’s where you end up in the race to the bottom. That’s where you get Moloch. But if you flip that stack, if you come up with the good memes, like, what are the high philosophies that we want to instill for a flourishing future that we give to our grandchildren, that we give to their descendants?
If you come up with those and put the effort into those, then you build the social structures upon those principles, and those social structures become the things that drive the technologies. That’s when you get the inverse. That’s when you get the win-win outcomes. So I’m an accelerator in that direction, but I do not wanna accelerate in the other direction.
And my concern is that the current accelerationist movement is doing the Molochian version. They think that just building more powerful technologies, just more, more, more of that, will be sufficient. We need to build the wisdom alongside the power.
So I’m a wisdom accelerator. I’m carving out my niche there. That’s what I wanna accelerate: how do we accelerate the wisdom, and the social structures that support that wisdom? I agree with your larger point that people need a North Star, and we need some kind of religion as a sort of motivator. And again, what does make a good religion?
What makes a good shared story, a common enemy? And to me, that common enemy is this Molochian process.
Matt Boll:
Next time on The Last Invention: The Accelerationists… Make their case.
The Last Invention is produced by Longview, home for the curious and open-minded.
Special thanks this episode to Sam Harris, Scott Aaronson, and Tim Urban.
For links to William MacAskill’s book and Liv Boeree’s podcast, as well as how to support our work, just look at the show notes to today’s episode.
And if you like this show, please share it with your friends and community - and leave us a review on Apple or Spotify. It really helps others discover the show.
Thanks for listening, we’ll see you soon.


