A tip alleging a Silicon Valley conspiracy leads to a much bigger story: the race to build artificial general intelligence — within the next few years — and the factions vying to accelerate it, to stop it, or to prepare for its arrival.
FEATURING:
Mike Brock, Kevin Roose, Geoffrey Hinton, Connor Leahy, William MacAskill, Liv Boeree, Sam Harris, and Yoshua Bengio
Available to listen on Apple and Spotify
Here’s the transcript!
Gregory Warner:
This is The Last Invention. I’m Gregory Warner. And our story begins with a conspiracy theory.
Andy Mills:
So, Greg, last spring, I got this tip via the encrypted messaging app Signal.
Gregory Warner:
This is reporter Andy Mills.
Andy Mills:
From a former tech executive. And he was making some pretty wild claims. And I wanted to talk to him on the phone, but he thought his phone was being tapped. But the next time I was out in California, I went to meet with him.
Mike Brock:
I’m really kind of contending with, like, who I am in this moment. Up until a few months ago, I was an executive in Silicon Valley and yet here I am, sitting in a living room with you guys, talking about what I think is one of the most important things that needs to be discussed in the whole world, right? Which is the - the nature in which power is decided in our society.
Andy Mills:
And he told me the story that a faction of people within Silicon Valley had a plot to take over the United States government, and that the Department of Government Efficiency, DOGE, under Elon Musk, was really phase one of this plan, which was to fire human workers in the government and replace them with artificial intelligence. And that over time, the plan was to replace all of the government and have artificial intelligence make all the important decisions in America.
Mike Brock:
I have seen both the nature of the threat from inside the belly of the beast, as it were, in Silicon Valley, and seen the nature of what’s at stake.
Andy Mills:
Now this guy, his name is Mike Brock, and he had formerly been an executive in Silicon Valley. He had worked alongside some big-name guys like Jack Dorsey, but he’d recently started a Substack. And he told me that after he published some of these accusations, he had become convinced that people were after him.
Mike Brock:
I have reason to believe that I’ve been followed by private investigators. Um, for that and other reasons, I traveled with private security when I went to DC and New York City last week.
Andy Mills:
He told me that he had just come back from Washington, DC, where he had met with a number of lawmakers, including Maxine Waters, and debriefed them about this threat to American democracy.
Mike Brock:
We are in a democratic crisis. This is a coup. This is a slow motion, soft coup.
Gregory Warner:
And so this faction, who is in this faction? What is this, like the Masons or something? Or is it like a secret cult?
Andy Mills:
Well, he named several names, people who are recognizable figures in Silicon Valley. And he claimed that this, quote unquote, conspiracy went all the way up to J.D. Vance, the vice president. And he called the people who were behind this coup…
Mike Brock:
The accelerationists.
Andy Mills:
The accelerationists.
It was a wild story. Yeah. But, you know, some conspiracies turn out to be true. And it was also an interesting story. So I started making some phone calls. I started looking into it, and some of his claims I could not confirm; other claims fell apart. And of course, eventually DOGE itself somewhat fell apart. Elon Musk ended up leaving the Trump administration. And for a while it felt like, you know, one of those tips that just doesn’t go anywhere. But in the course of all these conversations I was having with people close to artificial intelligence, I realized that there was an aspect of his story that wasn’t just true; in some ways, it didn’t go quite far enough. Because there is indeed a faction of people in Silicon Valley who don’t just want to replace government bureaucrats, but want to replace pretty much everyone who has a job with artificial intelligence. And they don’t just think that the AI that they’re making is going to upend American democracy. They think it is going to upend the entire world order.
Mo Gawdat:
The world as you know it is over. It’s not about to be over. It’s over.
Kai-Fu Lee:
I believe it’s going to change the world more than anything in the history of mankind. More than electricity.
Andy Mills:
But here’s the thing. They’re not doing this in secret. This group of people includes some of the biggest names in technology. You know, Bill Gates, Sam Altman, Mark Zuckerberg. Most of the leaders in the field of artificial intelligence.
Dario Amodei:
AI is going to be better than almost all humans at almost all things.
Sam Altman:
A kid born today will never be smarter than they are.
Bill Gates:
It’s the first technology that has no limit.
Gregory Warner:
So wait. So you get a tip about, like a slow motion coup against the government, and then you realize, no, no, no, this is not just about the government. This is pretty much every human institution.
Andy Mills:
Well, yes and no. Many of these accelerationists think that this AI that they’re building is going to lead to the end of what we have come to think of as jobs, the end of what we traditionally thought of as schools. Some would even say this could usher in the end of the nation state, but they do not see this as some sort of coup. They think that this may end up literally being the best thing to ever happen to humanity.
Demis Hassabis:
I’ve always believed that it’s going to be the most important invention that humanity will ever make.
Mustafa Suleyman:
Imagine that everybody will now, in the future, have access to the very best doctor in the world. The very best educator.
Bill Gates:
The world will be richer and work less and have more.
Elon Musk:
This really will be a world of abundance.
Andy Mills:
They predict that their AI systems are going to be the thing that helps us to solve the most pressing problems that humanity faces.
Jack Clark:
Energy breakthroughs, medical breakthroughs.
Demis Hassabis:
Maybe we can cure all diseases with the help of AI.
Andy Mills:
They think it’s going to be this hinge moment in human history, where soon we may be living to 200 years old, or maybe we’ll be visiting other planets, where we will look back in history and think, oh my God, how did people live before this technology?
Demis Hassabis:
It should be a kind of era of maximum human flourishing, where we travel to the stars and colonize the galaxy.
Jack Clark:
I think a world of abundance really is a reality. I don’t think it’s utopian, given what I’ve seen the technology is capable of.
Gregory Warner:
So this is a lot of bold promises. Why do they think that the AI that they are building is going to be so transformative?
Andy Mills:
Well, the reason that they’re making such grandiose statements and these bold predictions about, you know, the near future comes down to what it is they think that they’re making when they say they’re making AI.
Gregory Warner:
Okay.
Andy Mills:
This is something that I recently called up my old colleague Kevin Roose to talk about. Kevin, how is it that you describe what it is that the AI companies are making? Am I right to say that they’re essentially building like a super mind, like a digital super brain?
Kevin Roose:
Yes. That is correct.
Andy Mills:
He’s a very well-sourced tech reporter and a columnist at the New York Times.
Gregory Warner:
Also co-host of the podcast Hard Fork.
Andy Mills:
And he says that the first thing to know is that this is a far more ambitious project than just building something like chatbots.
Kevin Roose:
Essentially, many of these people believe that the human brain is just a kind of biological computer, that there is nothing, you know, special or supernatural about human intelligence, that we are just a bunch of neurons firing and learning patterns in the data that we encounter, and that if you could just build a computer that sort of simulated that, you could essentially create a new kind of intelligent being.
Andy Mills:
Right. I’ve heard some people say that we should think of it less like a piece of software or a piece of hardware, and more like a new intelligent species.
Kevin Roose:
Yes. It wouldn’t be a computer program exactly. It wouldn’t be a human exactly. It would be this sort of digital super mind that could do anything a human could and more.
Andy Mills:
The goal, the benchmark that the AI industry is working towards right now, is something that they call AGI: artificial general intelligence. The “general” is the key part, because a general intelligence isn’t just really good at 1 or 2 or 20 or 100 things but, like a very smart person, can learn new things, can be trained to do almost anything.
Gregory Warner:
I guess this is where people get worried about jobs getting replaced, because suddenly you have a worker, like a lawyer or a secretary, and you can tell the AI to learn everything about that job.
Andy Mills:
Exactly. I mean, that is what they’re making, and that’s why there’s a lot of concerns about what this could do to the economy. I mean, a true AGI could learn how to do any human job. Factory worker, CEO, doctor.
Gregory Warner:
That’s insane.
Andy Mills:
And as ambitious as that sounds, it has been, like, the stated, on-paper goal of the AI industry for a very long time. But when I was talking to Kevin Roose, he was saying that even just a decade ago, the idea that we would actually see it within our lifetimes was something that even in Silicon Valley was seen as, like, a pie-in-the-sky dream.
Kevin Roose:
People would get laughed at inside the biggest technology companies for even talking about AGI. It seemed like trying to plan for, you know, building a hotel chain on Mars or something. It was that far off in people’s imagination. And now, if you say you don’t think AGI is going to arrive until 2040, you are seen as, like, a hyper-conservative, basically a Luddite, in Silicon Valley.
Andy Mills:
I know that you are regularly talking to people at OpenAI and Anthropic and DeepMind and all these companies. What is their timeline at this point? When do they think they might hit this benchmark of AGI?
Kevin Roose:
I think the overwhelming majority view among the people who are closest to this technology, both on the record and off the record, is that it would be surprising to them if it took more than about three years for AI systems to become better than humans at, at least, almost all cognitive tasks. Some people say physical tasks, robotics, that’s going to take longer. But the majority view of the people that I talked to is that something like AGI will arrive in the next 2 or 3 years, or certainly within the next five.
Gregory Warner:
I mean, holy shit.
Andy Mills:
Holy shit.
Gregory Warner:
That is really soon.
Andy Mills:
This is why there has been such insane amounts of money invested in artificial intelligence in recent years. This is why the AI race has been heating up.
Gregory Warner:
Right. This is to accelerate the path to AGI.
Andy Mills:
Mmhmm. But this has also really brought more attention to this other group of people in technology, people who I personally have been following for over a decade at this point, who have dedicated themselves to trying everything they can to stop these accelerationists.
Eliezer Yudkowsky:
The basic description I would give to the current scenario is: if anyone builds it, everyone dies.
Andy Mills:
Many of these people, like Eliezer Yudkowsky, are former accelerationists who used to be thrilled about the AI revolution and who for years now have been trying to warn the world about what’s coming.
Eliezer Yudkowsky:
I am worried about the AI that is smarter than us. I’m worried about the AI that builds the AI that is smarter than us and kills everyone.
Andy Mills:
There’s also the philosopher Nick Bostrom. He published a book back in 2014 called Superintelligence.
Nick Bostrom:
Now, a superintelligence would be extremely powerful. We would then have a future that would be shaped by the preferences of this AI.
Andy Mills:
Not long after, Elon Musk started going around sounding this alarm.
Elon Musk:
I have exposure to the most cutting edge AI, and I think people should be really concerned about it.
Andy Mills:
He went to MIT.
Elon Musk:
I mean, with artificial intelligence we are summoning the demon.
Andy Mills:
Told them that creating an AI would be summoning a demon.
Elon Musk:
AI is a fundamental risk to the existence of human civilization.
Andy Mills:
Musk went as far as to go to the White House and personally lobby President Barack Obama, trying to get him to regulate the AI industry and take the existential risk of AI seriously. But he, like most of these guys at the time, just didn’t really get anywhere. However, in recent years, that has started to change.
ABC News:
The man dubbed the godfather of artificial intelligence has left his position at Google, and now he wants to warn the world about the dangers of the very product that he was instrumental in creating.
Andy Mills:
Over the past year or so, there have been several high-profile AI researchers, in some cases very decorated AI researchers…
ABC News:
This morning, as companies race to integrate artificial intelligence into our everyday lives. One man behind that technology has resigned from Google after more than a decade.
Andy Mills:
…who have been quitting their high-paying jobs, going out to the press, and telling them that this thing that they helped to create poses an existential risk to all of us.
Geoffrey Hinton:
It really is an existential threat. Some people say this is just science fiction. And until fairly recently, I believed it was a long way off.
Andy Mills:
One of the biggest voices out there doing this has been this guy, Geoffrey Hinton. He’s like a really big deal in the industry, and it meant a lot for him to quit his job, especially because he’s a Nobel Prize winner for his work in AI.
Geoffrey Hinton:
The risk I’ve been warning about the most (because most people think it’s just science fiction, but I want to explain to people it’s not, it’s very real) is the risk that we will develop an AI that’s much smarter than us, and it will just take over.
Andy Mills:
And it’s interesting: when he’s talking to journalists, trying to sound this alarm, they’re often saying, yes, we know that AI poses a risk if it leads to fake news, or, like, what if someone like Vladimir Putin gets ahold of AI?
MSNBC:
It’ll inevitably, if it’s out there, kind of fall into the hands of people who maybe don’t have the same values, the same…
Andy Mills:
And he’s telling them, no, no, no, no, no, this isn’t just about it falling into the wrong hands. This is a threat from the technology itself.
Geoffrey Hinton:
What I’m talking about is the existential threat of this kind of digital intelligence taking over from biological intelligence. And for that threat, all of us are in the same boat. The Chinese, the Americans, the Russians. We’re all in the same boat. We do not want digital intelligence to take over from biological intelligence.
Gregory Warner:
Okay. So what exactly is he worried about when he’s talking about a takeover? Because usually when I hear about the fears related to AI, it’s either about the fears of disinformation, like fake news and deepfakes, or about jobs. But what is he talking about when he says it’s an existential threat?
Andy Mills:
Well, the simplest way to understand it is that Hinton and people like him think that one of the first jobs that’s going to get taken, after the industry hits their benchmark of AGI, will be the job of AI researcher. And then the AGI will be working 24/7 on building another AI that’s even more intelligent and more powerful.
Gregory Warner:
So you’re saying AI would invent a better AI, and then that AI would invent an even better AI?
Andy Mills:
That is one way of saying it. Yes, exactly. That AGI now becomes the AI inventor, and each AI is more intelligent than the AI before it, all the way up until you get from AGI, artificial general intelligence, to ASI: artificial superintelligence.
Connor Leahy:
The way I define it is: this is a system that is single-handedly more intelligent, more competent at all tasks, than all of humanity put together.
Andy Mills:
I’ve now spoken to a number of different people who are trying to stop the AI industry from taking this step. People like Connor Leahy. He’s both an activist and a computer scientist.
Connor Leahy:
So it can do anything the entire humanity working together could do. So, for example, you and me are generally intelligent humans, but we couldn’t build semiconductors by ourselves. But humanity put together can build a whole semiconductor supply chain. A superintelligence could do that by itself.
Gregory Warner:
So it’s kind of like this: If AGI is as smart as Einstein or way smarter than Einstein, I guess.
Andy Mills:
An Einstein that doesn’t sleep, that doesn’t take bathroom breaks, right.
Gregory Warner:
And lives forever and has memory for everything.
Andy Mills:
Exactly.
Gregory Warner:
An ASI that is smarter than a civilization.
Andy Mills:
A civilization of Einsteins, that’s how the theory goes, right? Like, you have the ability now to do in hours or minutes things that would take a whole country, or maybe even the whole world, a century to do. And some people believe that if we were to create and release a technology like that, there’d be no coming back. Humans would no longer be the most intelligent species on Earth, and we wouldn’t be able to control this thing.
Connor Leahy:
By default, these systems will be more powerful than us, more capable of gaining resources, power, control, etc., and unless they have a very good reason for keeping humans around, I expect that by default they will simply not do so, and the future will belong to the machines, not to us.
Andy Mills:
And they think that we have one shot, essentially.
Gregory Warner:
One shot, like one shot, meaning we can’t update the app once we release it.
Andy Mills:
Once this cat is out of the bag. Once this genie is out of the bottle. Whatever metaphor…
Gregory Warner:
Once this program is out of the lab, as it were.
Andy Mills:
Exactly. Unless it is 100% aligned with what humans value, unless it is somehow placed under our control, they believe it will eventually lead to our demise.
Gregory Warner:
I guess I’m scared to ask this, but, like, how would this look? Like a global disaster? Or are we talking about it getting control of CRISPR and releasing a global pandemic?
Andy Mills:
Yes, there are those fears for sure. I want to get more into all the different scenarios that they foresee in a future episode, but I think the simplest one to grasp is just this idea that a superior intelligence is rarely, if ever, controlled by an inferior intelligence. And we don’t need to imagine a future where these ASI systems hate us, or they break bad or something. The way that they’ll often describe it is that these ASI systems, as they get further and further out from human-level intelligence, after they evolve beyond us, might just not think that we’re very interesting.
Gregory Warner:
I mean, in some ways hatred would be flattering, like if they saw us as the enemy and we were in some battle between humanity and the AI, which we’ve seen in so many movies. But what you’re describing is just… like, indifference.
Andy Mills:
Right. I mean, one of the ways that people will describe it is that, like, if you’re going to build a new house, of all the concerns you might have in the construction of that house, you’re not going to be concerned about the ants that live on that land that you’ve purchased. And they think that one day the ASIs may come to see us the way that we currently see ants.
William MacAskill:
You know, it’s not like we hate ants. Some people really love ants, but humanity as a whole has interests. And if ants get in the way of our interests, then we will fairly happily kind of destroy them.
Andy Mills:
This is something I was talking to William MacAskill about. He is a philosopher and also a co-founder of the movement called effective altruism.
William MacAskill:
And the thought here is, if you think of the AI we’re developing as like this new species, that species, as its capabilities keep increasing, so the argument goes, will just be more competitive than the human species. And so we should expect it to end up with all the power. That doesn’t immediately lead to human extinction, but at least it means that our survival might be as contingent on the goodwill of those AIs as the survival of ants is on the goodwill of human beings.
Gregory Warner:
If the future is closer than we think, and if one day soon there is at least a reasonable probability that superintelligent machines will treat us like we treat bugs, then what do the folks worried about this say that we should do?
Andy Mills:
Well, there are essentially two different approaches to the perceived threat. Some people who are worried about this simply say that we need to stop the AI industry from going any further, and we need to stop them right now.
Connor Leahy:
We should not build ASI. Just don’t do it. We’re not ready for it, and it shouldn’t be done. Further than that, I’m not just trying to convince people to not do it out of the goodness of their hearts. I think it should be illegal. It should be literally illegal for people and private corporations to even attempt to build systems that could kill everybody.
Gregory Warner:
What would that mean to make it illegal? Like, how do you enforce that?
Andy Mills:
Yeah, I mean, some accelerationists joke, like, what are you going to do, outlaw algebra?
Gregory Warner:
Right. You don’t need uranium in a secret center. You can just build it with code.
Andy Mills:
Right, but you do need data centers. And you could, you know, put in laws and restrictions that stop these AI companies from building any more data centers, along with a number of other laws. There are some people, though, who go even further and say that nuclear-armed states like the US should be willing to threaten to attack these data centers if AI companies like OpenAI are on the verge of releasing an AGI to the world.
Gregory Warner:
Wait, so even bombing data centers that are in Virginia or in, uh, Massachusetts?... I mean, like-
Andy Mills:
They see it as that great of a threat. They believe that on the current path we’re on, there is only one outcome, and that outcome is the end of humanity.
Gregory Warner:
If we build it, then we die.
Andy Mills:
Exactly. And this is why many people have come to call this faction the AI Doomers.
Liv Boeree:
“Doomer” is what the accelerationists like to call us. That was a kind of pejorative coined by them, and very successfully, I must say.
Connor Leahy:
I disavow the “Doomer” label because I don’t see myself that way.
Andy Mills:
Some of them have embraced the name “Doomer”. Others of them dislike the name “Doomer”. They often will call themselves “The Realists”, but in my reporting, everyone calls themselves “The Realists”, so I didn’t think that would work.
Connor Leahy:
I consider it to be realistic, to be calibrated.
Andy Mills:
And one of the reasons that they balk at the name is that they feel like it makes them come off as a bunch of anti-technology Luddites, when in fact many of them work in technology, many of them love technology. People like Connor Leahy, I mean, they even like AI as it is right now. I mean, he uses ChatGPT. He just tells me that, from everything that he sees about where it’s headed, where it’s going, we have no choice but to stop them.
Connor Leahy:
If it turns out tomorrow there’s new evidence that, actually, all these problems I’m worried about are less of a problem than I think they are, I’d be the most happy person in the world. Like, this would be ideal.
Gregory Warner:
All right, so one approach is we stop AI in its tracks: it’s illegal to proceed down this road we’re on. But that seems challenging to do, given how much is already invested in AI and, frankly, how much potential value there is in the progress of this technology. So what’s the alternative?
Andy Mills:
Well, there’s another group of people who are pretty much equally worried about the potentially catastrophic effects of making an AGI and it leading to an ASI. But they agree with you that we probably can’t stop it. And some of them would go as far as to say we probably shouldn’t stop it, because there really are a lot of potential benefits in AGI. So what they’re advocating for is that our entire society, essentially our entire civilization, needs to get together and try in every way possible to get prepared for what’s coming.
Liv Boeree:
How do we find the win-win outcome here?
Andy Mills:
One of the advocates for this approach that I talked to is Liv Boeree. She is a professional poker player and also a game theorist.
Liv Boeree:
Our job now, right now, whether you’re, you know, someone building it, or someone who is observing people build it, or just a person living on this planet, because this affects you too, is to collectively figure out how we unlock this narrow path. Because it is a narrow path we need to navigate.
William MacAskill:
We should be really focusing a lot right now on trying to understand as concretely as possible, what are all the obstacles we need to face along the way, and what can we be doing now to ensure that that transition goes well?
Andy Mills:
This faction, which includes figures like William MacAskill, what they want to see is the thinking institutions of the world, you know, the universities, the research labs, the media, join together to try and solve all of the issues that we’re going to face over the next few years as AGI approaches.
Gregory Warner:
So you mean not just leave this up to the tech companies?
Andy Mills:
Exactly. They want to see, you know, politicians brainstorming ways to help their constituents in the event that the bottom falls out of the job market… Right?
Gregory Warner:
Right. Or prepare communities to have no jobs I guess.
Andy Mills:
And some of them go that far, right? Like universal basic income, all that kind of stuff. And they also want to see governments around the world, especially in the US, start to regulate this industry. What are the concrete steps we could take in the next year to get ready?
Geoffrey Hinton:
So we’d like regulations that say when a big company produces a new very powerful thing, they run tests on it and they tell us what the tests were.
Andy Mills:
Geoffrey Hinton, after he quit Google, he converted to this approach, and he was talking to me about the kinds of regulations that he wants to see.
Geoffrey Hinton:
And we’d like things like whistleblower protection. So if someone in one of these big companies discovers the company is about to release something awful, which hasn’t been tested properly, they get whistleblower protections. Those are to deal, though, with more short-term threats.
Andy Mills:
Okay, but what about the long-term threats? What about this idea that AI poses this existential threat? What is it that we could do to prevent that?
Geoffrey Hinton:
Okay, so I can tell you what we should do about AI itself taking over. There’s one good piece of news about this, which is that no government wants that. So governments will be able to collaborate on how to deal with that.
Andy Mills:
So you’re saying that China doesn’t want AI to take over their power and authority. The U.S. doesn’t want some technology to take over their power and authority. And so you see a world where the two of them can work together to make sure that we keep it under control.
Geoffrey Hinton:
Yes. In fact, China doesn’t want an AGI to take over the US government, because they know it will pretty soon spread to China. So we could have a system where there are research institutes in different countries focused on: how are we going to make it so that it doesn’t want to take over from people? It will be able to if it wants to, so we have to make it not want to. And the techniques you need for making it not want to take over are different from the techniques you need for making it more intelligent. So even though the countries won’t share how to make it more intelligent, they will want to share research on how do you make it not want to take over.
Andy Mills:
And over time, I’ve come to call the people who are a part of this approach “The Scouts”.
Gregory Warner:
Like the Boy Scouts, be prepared.
Andy Mills:
Like the Boy Scouts, yes, exactly. And it turned out that, after I ran this name by William MacAskill… So, what if I called your camp “The Scouts”?
William MacAskill:
So a little fun fact about myself is I was a Boy Scout for 15 years.
Andy Mills:
He actually was a Boy Scout. And so I thought, okay, “The Scouts”.
William MacAskill:
Maybe that’s why I’ve got this approach.
Andy Mills:
But the key thing about the Scouts approach, if it’s going to work, is they believe that we cannot wait, that we have to start getting prepared, and we have to start right now. This is something that I was talking about with Sam Harris.
Sam Harris:
The reasons to be excited and to want to go, go, go are all too obvious, except for the fact that we’re running all of these other risks and we haven’t figured out how to mitigate them.
Andy Mills:
Sam is a philosopher. He’s an author. He hosts the podcast Making Sense, and he’s probably the most impassioned scout that I know personally.
Sam Harris:
There’s every reason to think that we have something like a tightrope walk to perform successfully now, like in this generation, right? Not 100 years from now. And we’re edging out onto the tightrope with a style of movement that is not careful. If you knew you had to walk a tightrope, and you got one chance to do it, and you’ve never done this before, what is the attitude of that first step and that second step, right? We’re, like, racing out there in the most chaotic way, you know.
Andy Mills:
Flailing our arms: “Ahhh!”
Sam Harris:
Yeah. And just like we’re off balance already, we’re looking over our shoulder fighting with the last asshole we met online, and we’re leaping out there.
Andy Mills:
All right. And you’ve been on this for a long time. In 2016, I remember you did this big TED talk.
Sam Harris:
Yeah.
Andy Mills:
I watched it at the time. It had millions of views, and you were essentially saying the same thing. You were trying to get people to realize that we have a tightrope to walk, and we have to walk it right now.
Sam Harris:
Well, I wanted to help sound the alarm about the inevitability of this collision. Whatever the time frame, we know we’re very bad predictors as to how quickly certain breakthroughs can happen. So Stuart Russell’s point, which I also cite in that talk, which I think is a quite brilliant changing of frame: he says, okay, let’s just admit it is, you know, probably 50 years out, right? So let’s just change the concepts here. Imagine we received a communication from elsewhere in the galaxy, from an alien civilization that was obviously much more advanced than we are, because they’re talking to us now, and the communication reads thus: “People of Earth, we will arrive on your lowly planet in 50 years. Get ready.” Just think of how galvanizing that moment would be. That is what we’re building: that collision and that new relationship.
Matt Boll:
Coming up on The Last Invention.
Stop AI Protesters:
Stop AI or we’re all going to die! Stop AI or we’re all going to die!
Peter Thiel:
Why is all the worry about the technology going badly wrong? And why are people not worried enough about it not happening?
Matt Boll:
The accelerationists respond to these concerns.
Reid Hoffman:
Existential risk for humanity is a portfolio. We have nuclear war. We have pandemic. We have asteroids, we have climate change. We have a whole stack of things that could actually, in fact, have this existential risk.
Andy Mills:
So you’re saying that it’s going to decrease our overall existential risk, even as it itself may pose to some degree, an existential risk?
Reid Hoffman:
Yes.
Matt Boll:
Researchers tell us what they saw that changed their minds.
Yoshua Bengio:
I was a person selling AI as a great thing for decades. I convinced my own government to invest hundreds of millions of dollars in AI. All my self-worth was built on the idea that it would be positive for society. And I was wrong, I was wrong.
Matt Boll:
And we go back to where the technology fueling this debate began.
Kevin Roose:
Basically, this is the holy grail of the last 75 years of computer science.
Connor Leahy:
It is the genesis, the, err, like, philosopher’s stone of the field of computer science.
Matt Boll:
The Last Invention is produced by Longview, home for the curious and open-minded. To learn more about us and our work, go to Longviewinvestigations.com. Special thanks this episode to Tim Urban. Thanks for listening. We’ll see you soon.



