Is the only way to stop a bad guy with an AGI… a good guy with an AGI? In a twist of technological irony, the very people who warned most loudly about the existential dangers of artificial superintelligence—Elon Musk, Sam Altman, and Dario Amodei among them—became the ones racing to build it first. Each believed they alone could create it safely before their competitors unleashed something dangerous. This episode traces how their shared fear of an “AI dictatorship” ignited a breakneck competition that ultimately led to the release of ChatGPT.
This Episode Features:
Karen Hao, Keach Hagey, Jasmine Sun, Yoshua Bengio, Kevin Roose, Connor Leahy
Available to listen on Apple and Spotify
Here’s the transcript!
GOV ASSOC:
The progressive development of man is vitally dependent on invention. It is the most important product of his creative brain. Its ultimate purpose is the complete mastery of mind over the material world, the harnessing of the forces of nature to human needs. This is the difficult task of the inventor who is often misunderstood and unrewarded.
GOV ASSOC:
Does anybody know who wrote that passage? Nikola Tesla.
Gregory Warner:
This is The Last Invention, I’m Gregory Warner.
GOV ASSOC:
So now for the main event. There are some like Tesla, Edison, the Wright Brothers, Ford, Jobs, Bell…
Gregory Warner:
At the 2017 meeting of the National Governors Association.
GOV ASSOC:
Rare entrepreneurs who make the impossible possible.
Gregory Warner:
All these governors, from red states and blue, came together in a room to find out what they could do to prepare for the future… from a man who seemed to be ushering in the future.
GOV ASSOC:
You know, I’m really thrilled to introduce a man who’s arguably the personification of technological innovation. Please join me in welcoming Elon Musk.
ARCHIVE ELON MUSK:
Hey, good to see you
Gregory Warner:
And the governors are eager to ask him about his plans for Tesla, about electric car infrastructure, about how to get ready for autonomous vehicles and even SpaceX flights. They want to know: what does Elon see as the next big tech on the horizon?
ARCHIVE GOV ASSOC:
What would you want things to look like in 5 to 10 years with autonomous vehicles, electric vehicles?
Elon Musk:
Well, I think things are gonna grow exponentially, so there’s a big difference between 5 and 10 years.
Gregory Warner:
But no one seems prepared for where Elon wants to take this conversation.
Elon Musk:
I have exposure to the very most cutting-edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but, you know, until people see, like, robots going down the street killing people, they don't know how to react.
Gregory Warner:
He tells them the best thing that lawmakers can do to prepare for the future is make sure humanity has a future.
ARCHIVE ELON MUSK:
AI is a fundamental risk to the existence of human civilization.
Elon Musk:
AI is a rare case where I think we need to be proactive in regulation instead of reactive, because I think by the time we’re reactive in AI regulation, it’s too late.
Elon Musk:
Because what’s gonna happen is the robots will be able to do everything better than us. I mean, all of us, you know? Um, yeah, I’m not sure exactly what to do about this.
Gregory Warner:
And at first it seems like the governors maybe think he's pulling their leg, but he just keeps going.
Elon Musk:
But when I say everything, like the robots will be able to do everything… bar nothing.
GOV ASSOC:
Let’s move back to… you’re rolling out the Model 3 this year, right? Mm-hmm. Uh, and how many orders…
Gregory Warner:
And after a moment of uncomfortable silence… the lawmakers eagerly move on… they never return to the subject. Now, it would not be long after this very public warning that Elon Musk was accelerating to build the very technology he seemed so alarmed about, and he wasn’t alone. Today: how some of the very people most concerned about artificial superintelligence came to decide, one after another, that the best way to protect the world from this technology was for them to build it first. And build it fast.
Connor Leahy:
The AI race was started by the people who warned about it. It was started by the exact people – Sam Altman, Elon Musk, Demis Hassabis, Dario Amodei – who said, at least nominally, that they were the most concerned, and they wanted to prevent this from happening. They are the exact people who actually brought us into the situation we are in now. And they’re still doing it.
Gregory Warner:
So Andy, walk me through how we get from Elon Musk warning about AI to him trying to build AI, like going from doomer to accelerationist.
Andy Mills:
All right, this all started with a meeting between Elon Musk and Demis Hassabis.
Gregory Warner:
The Demis Hassabis of DeepMind, the child prodigy.
Andy Mills:
Right. The gamer of gamers, the child genius. Back in 2012, Peter Thiel set up this meeting between these two men… and in the years since, that meeting has become like a Silicon Valley folk tale. Like, I heard about it from dozens of people I spoke to for this series… and some of them were saying that if AI becomes even half as powerful as people think that it’s going to, the future will look back at this meeting and see it as some kind of turning point.
Peter Thiel:
You know, it was the Demis meeting with Elon that we sort of brokered.
Andy Mills:
Peter Thiel himself recently told a version of this story to my old colleague Ross Douthat.
Peter Thiel:
The rough conversation was, you know, Demis tells Elon, I’m working on the most important project in the world, I’m building a superhuman AI. And Elon responds to Demis: Well, I’m working on the most important project in the world. I am turning us into an interplanetary species.
Andy Mills:
As the story goes, Musk says, I’m sending us to Mars so that if anything terrible happens here on planet Earth, you know, nuclear war, some kind of civilization ending pandemic, we’ve got this escape valve. We can actually travel to other planets, our species can survive.
Peter Thiel:
And then Demis said, you know, my AI will be able to follow you to Mars. And uh, and then Elon sort of went quiet.
Karen Hao:
And this is a huge trigger for Musk, where he is like, who is this guy? What is he trying to do? Why is he telling me that he is ultimately doing something that might kill us all?
Andy Mills:
Karen Hao - the author of Empire of AI - she says that Musk quickly decides he wants to keep an eye on Demis.
Karen Hao:
And so he invests in DeepMind to keep tabs on the company.
Andy Mills:
So his reaction is to be concerned and maybe a little freaked out and then to say here’s some money so I know exactly what you’re up to?
Karen Hao:
Yeah, exactly.
Andy Mills:
So Musk, he became an early investor in DeepMind, and then a few years later, after that impressive Atari demo…
Gregory Warner:
The AI that mastered Space Invaders?
Andy Mills:
Right, yes, without being taught how to play.
Gregory Warner:
Yes.
Andy Mills:
Google quickly jumps in, wanting to acquire DeepMind. Elon actually tries to get in the way of that and buy DeepMind himself. But it doesn’t work. And when Google acquires DeepMind, that’s the moment where suddenly, Elon starts going out in public and really sounding the alarm about what he believes are the existential dangers of AI.
Elon Musk:
I don’t think most people understand just how quickly machine intelligence is advancing.
Elon Musk:
Mark my words, AI is far more dangerous than nukes… I think that’s the single biggest existential crisis that we face.
Andy Mills:
Keach Hagey, from the Wall Street Journal, she says that this is what inspired Elon to start going out and trying to lobby lawmakers, and President Barack Obama.
Keach Hagey:
Elon even has a meeting with Obama and it was kind of interesting because there was a sense that yes, Obama understood the risks and yes, he also understood how important AI was gonna be for the economic development of the country.
Barack Obama:
It promises to create a vastly more productive and efficient economy.
Keach Hagey:
He gave an interview to Wired around this time, you know, saying:
Barack Obama:
If properly harnessed, [it] can generate enormous prosperity for people, opportunity for people, could cure diseases we’ve not seen before. But it could increase inequality, it can suppress wages, and so we are going to have to develop new social constructs in order to embrace fully…
Keach Hagey:
And yet Elon left that meeting with the sense that Obama wasn’t really gonna do anything about the existential risk piece.
Andy Mills:
Elon, he doesn’t just strike out with President Obama. He’s also talking to Vanity Fair, he’s talking at colleges, he’s speaking at conferences. Like, remember, this is the Tony Stark-era Elon Musk we’re talking about here!
Gregory Warner:
Right, this is a time when Elon Musk had pretty broad market appeal to lots of audiences. Especially on the subject of technology.
Andy Mills:
This is Elon at his peak celebrity, pre the controversies that would follow. But even for him, he feels like no one is taking him seriously… And so he starts to host these dinners, where he would invite other tech leaders, sometimes other billionaires, and they would get together and try to brainstorm a way that they could stop Demis Hassabis and Google from making some sort of civilization-ending AI.
Gregory Warner:
Okay, so first his strategy is go to the leader of the free world… warn him, then uhh, then it’s themed dinners? And the theme is: how do we save the world from superintelligence?
Andy Mills:
That’s pretty much the story, yes. And one of these dinner guests was none other than Sam Altman.
Sam Altman:
It is my belief that in the next few decades, someone will build a software system that is smarter and more capable than humans in every way, and that it will quickly go from being a little more capable to, like, a billion times more capable.
Andy Mills:
Who is Sam Altman at this time?
Keach Hagey:
Sam Altman was the president of Y Combinator, which basically meant he was like the king of Silicon Valley.
Andy Mills:
Keach Hagey actually wrote a biography of Sam Altman called The Optimist. Which is really good. I recommend people check it out. She says that by 2015 Altman was already almost this mythical figure in Silicon Valley. He had helped to turn companies like DoorDash, Instacart, and Airbnb into household names.
Andy Mills:
What’s Sam Altman’s superpower? How would you sum up what he’s so good at?
Keach Hagey:
Sam Altman is a once-in-a-generation fundraising talent. He’s an incredible storyteller. He can convince people that he can see the future. He can sort of summon companies into being, um, just by persuasion. He… is also kind of a fixer, with lots of relationships all around Silicon Valley and people who owe him favors; he could make anything happen in Silicon Valley that anyone wanted.
Andy Mills:
It turns out that since Altman was young, he had always been enamored with this idea of making a true AI, a thinking machine. But by the time he’s having this meeting with Elon Musk, he had read the book Superintelligence by Nick Bostrom, and he had come to believe that if AI was made irresponsibly, it could possibly lead to the end of the human race.
Keach Hagey:
And he even began blogging about this idea that if AI happens, it could be the most consequential thing that ever happened to humanity, but it could also be dangerous.
Andy Mills:
So when he goes to this meeting with Musk…
Karen Hao:
Altman starts talking to him about this idea that he is also very, very worried about AI potentially going wrong and becoming an existential threat to humanity.
Andy Mills:
He pitches him on this idea that if you want to stop a dangerous AI, if you want to stop Demis Hassabis if you wanna stop Google - then what we need to do is make a safe AI before they make a dangerous one.
Karen Hao:
What do you think about the idea of us creating a lab that counters Google?
Keach Hagey:
Why don’t we make a lab that would create the same technology, the same AGI technology, as a counterweight to Google, except it would be non-profit. It would be open source, and it would be for the benefit of humanity.
Keach Hagey:
And Elon says, great, let’s do that. Sam basically convinces Elon to bankroll this thing, and by the end of the year they have created OpenAI.
Archive Altman:
We started a group called OpenAI, it is a non-profit, the goal is to build general super AI for the benefit of humanity.
ARCHIVE ELON:
OpenAI is structured as a 501(c)(3) nonprofit to help spread out AI technology so it doesn’t get concentrated in the hands of a few…
Interviewer:
What is going to be your sort of biggest differentiator then? Like OpenAI versus like the mega corps?
Archive Altman:
I hope that our biggest differentiator is number 1: we do the best research in the world, and number 2: we care the most about how it gets deployed.
Gregory Warner:
Okay, so OpenAI, the company behind ChatGPT, came out of this plan to stop Google by beating Google at their own game, but doing it in a totally different way than Google was doing it, because it’s going to be a non-profit?
Andy Mills:
Right, the idea is almost to create like an anti-Google.
Gregory Warner:
Mmhmm
Andy Mills:
They called it a non-profit research lab, they didn’t even call it a tech company. And at the core of that lab is this mission that not only are they going to make the super mind, the AGI, but that they are going to ensure that this thing is good for the entire planet.
Gregory Warner:
Right.
Andy Mills:
OK so, when they start OpenAI, what’s it like at first?
Karen Hao:
The first thing that’s quite interesting is… in order to pull off what they wanted to do, they needed to recruit talent, so they needed to break up Google’s monopoly on AI research talent. And so they used their non-profit ethos and this mission-driven idea… to very effectively poach… and bring in a bunch of new PhD grads into the founding team.
Andy Mills:
And I remember, even with Google purchasing DeepMind, most people in technology still didn’t really buy into the idea that AGI was coming any time soon.
Karen Hao:
And so the people that primarily ended up joining OpenAI were self-selected, so-called AGI believers, people that were there for the crazy quest to try and recreate human intelligence. And they were in one of two camps: there were the people who were AGI believers but doomers, really focused on the AI safety orientation of, we’re ultimately trying to recreate this thing in order to prevent existential risk. And there were the accelerationists, who were like, we believe in this thing because we think it’s gonna bring us to utopia.
Andy Mills:
They were both there together in this one lab, working on this project?
Keach Hagey:
They were both there together in that one lab, and at the time they philosophically did not seem that different, because compared to the rest of the field, which just did not think that this idea of creating AGI was really something that held water, the doomers and the accelerationists were just two sides of the same coin. They both believed in AGI, and there were only so many of them. So they were all kind of banded together on that shared belief and excitement and fear around doing this journey together.
Andy Mills:
They send out this signal to the world of technology, and in response they end up actually bringing together a really fascinating mix of people. They are able to poach Ilya Sutskever, one of the guys behind the ImageNet win with Hinton, from his job at Google. They get Greg Brockman to join them from Stripe, and eventually bring in this guy Dario Amodei, who had also worked at Google. These were people leaving big-paying, very stable jobs in the world of technology to come work at this new research lab because, as they said it, they truly believed in this mission and how important it was.
Gregory Warner:
Okay. So they walk away from these big paychecks, these stable jobs, and they build what? Like, what do they begin to make at OpenAI?
Andy Mills:
Well, at first Elon is very excited about the idea that they should make an AI that’s going to go head to head in some kind of game against Demis. Legendarily, Elon Musk is a gamer. And so he is pushing them down that path. BUT, there’s also just this kind of looseness; they’re a research lab, so there’s this sense of, like, “let a thousand flowers bloom.” Like, what path might lead to AGI? We don’t know. Let’s try this one, let’s try that one. But after months and months of this, without any real meaningful progress… suddenly, Demis Hassabis strikes again.
GO ANNOUNCER ARCHIVE:
Now the wait is almost over. In less than one hour from now, man will face off against machine in an epic game of Go. The competitors are grandmaster Lee Sedol of Korea, and he’s taking on the artificial intelligence supercomputer called AlphaGo. Now the first game of…
Andy Mills:
In 2016, Demis and the team at DeepMind thunder back onto the public stage, this time to play the game Go.
Archive:
The game of Go is the holy grail of artificial intelligence. For many years, people have looked at this game and they’ve thought: Wow! This is just too hard. Everything we’ve ever tried in AI just falls over when you try the game of Go. And so that’s why it feels like a real litmus test of progress. If we can crack Go, then we’ll know we’ve done something special.
Andy Mills:
Are you familiar with Go?
Gregory Warner:
I was obsessed with Go as a kid, I remember, because I didn’t really get into chess, although I got into it briefly. But Go… the stones were very beautiful. The pieces are just these black and white stones.
Andy Mills:
Mhmm. What I think is cool about it is that it is an ancient Chinese game, and I was looking it up, and it appears as though we don’t even know how old it is. Like, there are records of people playing Go 2,000 years ago.
Gregory Warner:
It’s older than chess, though, right?
Andy Mills:
Way older than chess.
Gregory Warner:
What’s also crazy about Go is that you can be playing for a while and not even know who’s winning; it’s that complex.
Andy Mills:
Mhmm. Jasmine Sun, who is one of the tech writers I spoke to about this, she told me: you could play Go every day and never play the same game twice.
Jasmine Sun:
The game of Go is like an ancient Chinese game that is known for having more possible board positions than the number of atoms in the universe.
Andy Mills:
You’re saying you can’t even calculate the positions.
Demis Hassabis:
The number of configurations on the board is more than the number of atoms in the universe.
Andy Mills:
That just doesn’t even seem possible.
Jasmine Sun:
It’s unfathomable. It’s literally unfathomable, right? Like there’s no way, through some sort of brute-force search, that you can just search every possible move and compare them all against each other, right? Like you can’t do that. You can’t memorize the strategy. This was a game that NO expert system could ever have hoped to truly master. Which is exactly WHY Demis Hassabis wanted to create an AI that could.
Demis Hassabis:
So even if you took all of the computers in the world. And ran them for a million years, that wouldn’t be enough compute power to calculate all the possible variations.
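A quick back-of-the-envelope check on those numbers (our arithmetic, not the show’s): each of the board’s 19 × 19 = 361 points can be empty, black, or white, so 3^361 is a simple upper bound on the number of board configurations. A few lines of Python show how far past the roughly 10^80 atoms of the observable universe that lands:

```python
# Upper bound on Go board configurations: every one of the 361 points is
# empty, black, or white. (Only about 1% of these are legal positions,
# which barely dents the exponent.)
upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80  # common rough estimate for the observable universe

print(f"3^361 has {len(str(upper_bound))} digits")  # 173 digits, i.e. ~1.7 x 10^172
print(f"that is ~10^{len(str(upper_bound // atoms_in_universe)) - 1} times the atom count")
```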
Andy Mills:
But another way that it’s really different from chess: because there is no way for Go players to really calculate their way to victory, the ones who become masters of this game are often described as having some sort of deep INSTINCT. Or often they use the word “intuition.”
Jasmine Sun:
Champion Go players are known for being really… intuitive, I suppose, for having some sort of, like, deep feel for strategy and the board that you cannot learn by memorizing any sort of rule book.
Demis Hassabis:
If you ask a great Go player why they played a particular move, sometimes they’ll just tell you it felt right. So the one way you can think of it is that Go is a much more intuitive game, uh, whereas chess is a much more logic-based game.
Jasmine Sun:
A lot of game players call Go a very, quote unquote, human game.
Andy Mills:
So Demis and his DeepMind team, they created this AI system called AlphaGo, and the way that they train it sounds like sci-fi. And it actually plays into ONE of the big fears that the doomers have, about how AGI might one day turn into ASI and, you know, replace us all. And it starts off like this: at first they just load it up with a whole bunch of data of human beings playing Go, so that it can find its own patterns and see: oh, that’s working for this person, and that’s working for that person.
Jasmine Sun:
But again, like, Go has more possible board positions than the number of atoms in the universe. There are so many moves and positions that no one has ever thought of yet.
Andy Mills:
But then what they do is they make an identical COPY of the AI system… so that the AI can play against itself, millions and millions of times, each time learning new strategies and gathering more data, and learning new strategies and gathering more data.
Jasmine Sun:
Self-play is what they call it. This is, like, the thing that really makes a system not just quite good but superhuman at playing Go, because it’s able to put in so many reps, like an amount of practice that no ordinary player ever could.
Gregory Warner:
Interesting. So that makes me think of that idea from Malcolm Gladwell, the 10,000 hours thing: like, it takes 10,000 hours to become an expert in something, but this thing can basically log the equivalent of 10,000 hours of practice in like a month or something.
Andy Mills:
Oh no, it’s even crazier than that: AlphaGo can play itself so quickly that in the span of a week, it could play more than a human could in centuries. Like, hundreds of years’ worth of non-stop playing in a week.
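To make the “identical copy” idea concrete, here is a toy sketch of a self-play loop (our illustration; AlphaGo’s real pipeline combined deep neural networks, supervised pre-training on human games, and Monte Carlo tree search, none of which is shown here). One shared value table drives BOTH players of tic-tac-toe, and every finished game nudges that table toward the moves that won:

```python
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if all(board) else None  # None = game still in progress

value = defaultdict(float)  # (board state, move) -> learned preference

def choose(board, eps=0.1):
    moves = [i for i, cell in enumerate(board) if not cell]
    if random.random() < eps:  # explore occasionally so new lines get tried
        return random.choice(moves)
    return max(moves, key=lambda m: value[(tuple(board), m)])

for _ in range(20_000):  # the self-play loop: the policy plays its own copy
    board, history, player = [""] * 9, [], "X"
    while (result := winner(board)) is None:
        move = choose(board)
        history.append((tuple(board), move, player))
        board[move] = player
        player = "O" if player == "X" else "X"
    for state, move, mover in history:  # learn from the game's outcome
        reward = 0 if result == "draw" else (1 if mover == result else -1)
        value[(state, move)] += 0.1 * (reward - value[(state, move)])

print("distinct positions explored:", len({s for s, _ in value}))
```

Every game it plays against itself generates fresh training data, which is why the rep count compounds so quickly.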
ANNOUNCER:
Hello and welcome to the DeepMind Challenge Game One Round One live from the Four Seasons here in Seoul, Korea.
Andy Mills:
In March of 2016, they set up this showdown against the player Lee Sedol, who is often described as the greatest of his generation. It garners all this attention, more than 100 million viewers.
Jasmine Sun:
All these journalists are there. Everyone in Korea is watching; the game is huge there.
ARCHIVE:
Everyone here is very excited for the match of the century, and you can feel the tension in the air. As the competitors get ready to face off, journalists from Asia and around the world, about 350 members of the press, are here to see if artificial intelligence can really beat human intelligence.
Andy Mills:
Now, I should say, just to level-set here: pretty much everyone thinks that Lee is going to win. Even Google believes that their system is very likely to lose. Demis himself later said that the team was given just a 5 percent chance of pulling this off.
Gregory Warner:
Which is interesting, because even though chess had fallen to AI, and Jeopardy! had fallen to AI, Go was just seen as too hard. Or maybe too human?
Andy Mills:
Yeah, it’s too something. Lee is too good, the game is too complex for an AI, and maybe it’s just not ready yet; maybe one day it could happen, but not today. And then in the middle of one of the matches, on what came to be called Move 37, AlphaGo ended up doing this weird thing that seemed to prove the doubters may have been right all along.
ANNOUNCER:
Interesting. Uh, AlphaGo played this move, which I want to hear more about in a second. But, uh, Lee has left the room.
Jasmine Sun:
It makes a move on the board that, to all of the spectators, to Lee Sedol, looks almost like a mistake.
ANNOUNCER:
That’s a very surprising move. It’s a very surprising move. Umm I wasn’t expecting that. Uhhh I don’t really know if it’s a good or bad move at this point.
Jasmine Sun:
Like, the DeepMind AlphaGo team, they were watching it, and even they thought it had made a mistake, because it seemed weird. It seemed bad. It was something that no human player would ever do, had ever done, in a position like that.
Andy Mills:
So while the cameras are rolling and the world is watching, the DeepMind team was like, damn, I think it just malfunctioned?
Gregory Warner:
Like it glitched?
Andy Mills:
Yeah, like a glitch.
David Silver in AlphaGo Documentary:
The professional commentators almost unanimously said that no human player would have chosen move 37.
Fan Hui:
I can’t believe what I see right now…
Andy Mills:
And yet in the end…
Announcer:
Oh, I think he resigned.
Announcer:
Oh my gosh…
Andy Mills:
AlphaGO wins.
Announcer:
Yeah Lee has, I’m getting word that Lee has resigned…
ARCHIVE NEWS:
In the battle between man and machine, a computer just came out the victor.
ARCHIVE NEWS:
DeepMind put its computer program to the test against one of the brightest minds in the world and won.
ARCHIVE NEWS:
AlphaGo beat a professional player who has 18 Go world championships under his belt.
Andy Mills:
And suddenly people go back and re-examine that move … Move 37.
David Silver in AlphaGo Documentary:
It went beyond its human guide and did something new and creative and different.
Andy Mills:
and they realize that it was the turning point in that match.
Jasmine Sun:
That move was what got everyone in the Go community and the broader community, even Lee himself, to say: oh, this machine can be creative. It can be intuitive. It can sort of master this thing that I always thought was a human task.
Fan Hui:
The more I see this move, I feel something changed. Maybe it can show humans something we never discovered… maybe it’s beautiful.
Gregory Warner:
So the AI played a move that no human had ever made in any recorded Go game. That means it discovered its own original strategy?
Andy Mills:
Some people go as far as to say it had something like an original thought. An original idea.
Gregory Warner:
And what do we know about how it did that? How did Demis Hassabis and his team at Deepmind explain Move 37?
Andy Mills:
Well, they really wanted to know, and so they spent time digging through the code and looking inside the guts of the system. They wrote a paper about it, and while they were able to, you know, glean some information… because it’s one of those connectionist, neural-net, “AI toddler” styles of AI…
Gregory Warner:
Right.
Andy Mills:
Remember the trade-off that Yoshua Bengio was talking to us about? To get this kind of impressive performance, this level of intelligence, you just have to accept that you’re not going to get satisfying answers to these kinds of questions.
Gregory Warner:
You have to accept some level of mystery.
Andy Mills:
This is the black box…
Andy Mills:
And when I asked Yoshua Bengio about this AlphaGo moment…
Yoshua Bengio:
With AlphaGo I thought: ooh, now we are getting close to something important.
Andy Mills:
He said this is when he realized that AI was now entering a whole new era. And he wasn’t the only one. All across Silicon Valley, across the world of technology, people were singing Demis’s praises, people were abuzz about AlphaGo. And of course… yet again… Elon Musk doesn’t like this one bit.
Keach Hagey:
DeepMind’s show of force with AlphaGo freaked Elon out a lot. And this sort of ambling approach that OpenAI had at the beginning, you know, let a thousand flowers bloom, all the researchers kind of pursuing their own different areas… it made him have little tolerance for that.
Andy Mills:
We now know in detail some of what happened inside of OpenAI during this time, because a number of their internal emails were revealed as part of a lawsuit. And so you can see in these emails Elon Musk telling Altman and the leaders at OpenAI just how frustrated he is that they’re still losing to Demis. In one of them, he says, “OpenAI is on a path of certain failure relative to Google. There obviously needs to be immediate and dramatic action, or else everyone except for Google will be consigned to irrelevance.”
Gregory Warner:
What does immediate and dramatic action mean?
Andy Mills:
Well, according to Keach Hagey, this is when Musk was saying, where is our game player?
Keach Hagey:
Elon wanted to drive, like, fighting tit for tat with DeepMind, and he really wanted to respond to it by showing an even cooler and harder game that AI could beat.
Andy Mills:
We need to challenge DeepMind in some kind of public display of our dominance in a game, with an AI game player. And Altman and the team at OpenAI, they were saying to Musk, well, we do have a way that we think we could beat DeepMind. We’ve got this strategy. But for us to implement that strategy, we’re gonna need way more compute power, and we’re gonna need more money. So they pitch him on this idea that the nonprofit OpenAI could have a for-profit arm, and that way Altman could go out and do the thing that he’s best at, right? He could go get investor money that they can use to up their compute power.
Karen Hao:
They start discussing how they’re gonna convert the nonprofit into a for-profit. And all of a sudden, Musk and Altman start butting heads. Because when they go to form the for-profit, the question becomes: who will be the official CEO of the for-profit? And both of them want to be CEO, and they cannot agree.
Andy Mills:
And Musk comes back and basically says, no way, I gave you all this money to start a nonprofit. If you’re going to turn that nonprofit into a for-profit, then I should be the head of it. He even thought about folding it into Tesla and just making it an arm of a for-profit company he was already running.
Karen Hao:
He wanted to be CEO and have controlling voting power.
Gregory Warner:
So Elon says, if it’s gonna be for-profit, then he’s gonna be in charge.
Andy Mills:
Yes. Or he’s gonna walk. And so now OpenAI has a decision to make: lose Elon Musk and his celebrity and his money and his tech prowess, or change the structure of their company that they designed specifically not to have one person be the ultimate controller of this technology that they think is going to be so powerful that no one man should wield it, right?
Gregory Warner:
Right.
Andy Mills:
One OpenAI co-founder, Ilya Sutskever, wrote to Elon in this email, and he says: “The goal of OpenAI is to make the future good and avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So are we. So it is a bad idea to create a structure where you could become a dictator if you choose to.”
Gregory Warner:
And so what does Elon do?
Andy Mills:
Well, he responds to this email: “Guys, I’ve had enough. This is the final straw.”
Karen Hao:
And then Musk decides, in a huff: if this is not gonna stay a non-profit, and it’s converting to a for-profit where I am not in total control, I’m leaving.
Andy Mills:
And not long after, he quits OpenAI. But within months… he starts going around on a very different kind of campaign, essentially telling people that he has also quit trying to sound the alarm about AGI and what it’s going to give rise to…
ARCHIVE JOE ROGAN:
4, 3, 2, 1. Boom. Thank you. Thanks for doing this, man. Really appreciate it.
ARCHIVE ELON MUSK:
You’re welcome.
ARCHIVE JOE ROGAN:
It’s very good to meet you.
ARCHIVE ELON MUSK:
Nice to meet you too.
ARCHIVE JOE ROGAN:
And thanks for not lighting this place on fire.
ARCHIVE ELON MUSK:
You’re welcome.
Andy Mills:
And it’s at this time, that Elon Musk makes his first appearance on the Joe Rogan Experience.
Gregory Warner:
This is the infamous episode where Musk smoked pot on camera.
Andy Mills:
Yes.
ARCHIVE ELON MUSK:
I mean, this is legal right?
ARCHIVE JOE ROGAN:
Totally legal.
ARCHIVE ELON MUSK:
Okay.
ARCHIVE JOE ROGAN:
Just tobacco and marijuana in there. How does that work, do people get upset at you if you do certain things?
Andy Mills:
And as you remember: this became like a whole big thing.
NEWS ARCHIVE:
The stock value of electric car manufacturer Tesla tumbled 9% Friday morning.
NEWS ARCHIVE:
Billionaire, Tesla head, Elon Musk. What is he up to?
NEWS ARCHIVE:
Shares in Tesla took a hit today, shortly after video was posted of CEO Elon Musk apparently smoking pot.
Andy Mills:
But of all the spectacle in this moment, there was this other part of the podcast that even I didn’t really notice until I went back recently and listened to it.
ARCHIVE JOE ROGAN:
You scare the shit outta me when you talk about AI, between you and Sam Harris. I realize, like, oh, well, this is a genie that once it’s outta the bottle, you’re never getting it back in.
Elon Musk:
That’s true.
ARCHIVE JOE ROGAN:
Are you honestly legitimately concerned about this? Are you -- Is, like, AI one of your main worries in regards to the future?
ARCHIVE ELON MUSK:
It’s less of a worry than it used to be, mostly due to taking more of a fatalistic attitude.
Andy Mills:
Musk tells Rogan: hey, I did my best to warn people.
ARCHIVE ELON MUSK:
I tried to convince people to slow down, slow down AI to regulate AI. This was futile. I tried for years. Nobody listened.
ARCHIVE JOE ROGAN:
This is like a scene in a movie.
ARCHIVE ELON MUSK:
Nobody listened.
ARCHIVE JOE ROGAN:
Where the robots are gonna fucking take over and you’re freaking me out. Nobody listened?
ARCHIVE ELON MUSK:
Nobody listened.
ARCHIVE JOE ROGAN:
No one.
ARCHIVE ELON MUSK:
I even met with Obama, and just for one reason.
ARCHIVE JOE ROGAN:
Just to talk about AI?
ARCHIVE ELON MUSK:
Yes. I met with Congress. I was at a meeting of all 50 governors and talked about just AI danger, and I talked to everyone I could, and no one seemed to realize where this was going…
Gregory Warner:
After a short break: OpenAI, now without Musk, stumbles into a breakthrough that will transform the industry and yet again make the people most concerned about building AI safely decide that they need to build it even faster. Stay with us.
Gregory Warner:
Okay. So once Elon Musk walks out, what happens at OpenAI?
Andy Mills:
Well, it turns out that even though this was a nightmare for everyone at OpenAI and they were worried that this might spell the end of the company, it kind of turned out to be a blessing in disguise.
Keach Hagey:
I talked to, um, Andrej Karpathy, who at different points worked for both Elon and OpenAI, and he said, you know, in the beginning OpenAI was trying to copy-paste DeepMind, and in the end it turned out that DeepMind had to copy-paste OpenAI.
Andy Mills:
As Keach Hagey was saying to me, pretty much the whole time that Elon was at OpenAI, he was trying to push the AI game-player strategy.
Keach Hagey:
And after Elon left in 2018, over in the corner, a completely different researcher had a breakthrough with a completely different technology: a language model.
Andy Mills:
And it was only after he was gone that instead, they eventually focused on LANGUAGE. And THAT is how, by 2022, they flip everything and have Demis and Google chasing after THEM instead of the other way around.
News Archive:
The next generation of AI is here. It’s called ChatGPT.
Andy Mills:
BUT… before that would happen… there was ANOTHER SPLIT inside this company. And it’s a split that some people in Silicon Valley think may end up being even more consequential than Elon Musk’s. And this is the paradox of Dario Amodei.
Andy Mills:
Who is Dario Amodei? Why does he end up at OpenAI, and what is it that he really contributes to the team there?
Jasmine Sun:
Dario, like Demis actually, has a neuroscience background, which is one interesting thing. He is also very interested in the brain, which sort of ends up informing a lot of his theories for AI systems and how they should work.
Kevin Roose:
What he was really known for at OpenAI was his emphasis on safety.
Andy Mills:
Jasmine Sun and Kevin Roose - they’re both working on a book about AI right now and Dario is one of its central characters.
Kevin Roose:
He is, by his own admission, kind of a nervous person, and he was really a pioneer, not just in developing AI systems, but in worrying about them and how they might go wrong.
Jasmine Sun:
He does end up being extremely concerned about AI risk and the potential for systems much smarter than us to develop their own goals, become unaligned with, opposed to, or just sort of not caring about human goals, and then to sort of end up taking over and screwing humans over.
Gregory Warner:
So Dario was one of the people who came to OpenAI as someone whose motivation was more about stopping a dangerous AI.
Andy Mills:
Yes. He had an association with this group called the Effective Altruists, and he came to OpenAI in large part BECAUSE of their altruistic, safety-focused mission.
Andy Mills:
And how do you sum up what it means for someone like Dario to study AI safety? Like, what exactly is AI safety?
Kevin Roose:
So, AI safety is a big field. It contains a bunch of different subfields. One of them, that Dario and his colleagues have been very instrumental in, is called mechanistic interpretability. Uh, that’s a very long name, I’ve told them they should rebrand it to something people can actually pronounce, but they didn’t listen to me, but basically mechanistic interpretability is the science of figuring out how AI models make decisions, why they behave like they do. What is going on inside the guts of the system.
Andy Mills:
So he’s studying the mysterious AI black box.
Kevin Roose:
Yes, making that interpretable to humans.
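One common technique in this field is the “linear probe”: if a simple classifier can read a concept straight out of a model’s internal activations, the model plausibly represents that concept somewhere inside. Here is a toy sketch (our illustration, with synthetic data standing in for real model activations; this is not Anthropic’s actual tooling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden activations: 1,000 samples, 64 dimensions.
# We secretly plant a "concept direction" so the probe has something to find.
concept = rng.normal(size=64)
labels = rng.integers(0, 2, size=1000)  # 1 = concept present in the input
activations = rng.normal(size=(1000, 64)) + np.outer(labels, concept)

# Fit the probe: plain logistic regression trained by gradient descent.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(activations @ w + b)))  # predicted probability
    w -= 0.5 * (activations.T @ (p - labels)) / len(labels)
    b -= 0.5 * float(np.mean(p - labels))

accuracy = np.mean((activations @ w + b > 0) == labels)
print(f"probe accuracy: {accuracy:.0%}")  # near 100% -> concept is linearly readable
```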
Gregory Warner:
Okay, so he’s trying to probe the AI to figure out why it’s doing what it’s doing. And: is it going to do anything we don’t want it to?
Andy Mills:
Yes, that’s one part of AI safety; there are other parts of it too. The other thing that Dario is really into is size.
Kevin Roose:
Early on in his career, Dario Amodei had worked on a project at Baidu, the Chinese internet conglomerate, that dealt with these so-called scaling laws. And this was a theory at that time, an unproven theory, that basically the key to making an AI system more intelligent was just making it bigger and training it on more data. This was sort of countercultural in AI research at the time. Lots of people were theorizing that you needed some clever new algorithm or some very different architecture to make these models smarter. But Dario and his colleagues sort of had this idea that you could actually just make them bigger and the systems would get smarter.
Andy Mills:
And so the idea here is just that if we take an already promising neural network AI system and we make it bigger, then, maybe like the human brain, which is bigger than the bird brain or the cat brain, and smarter, this thing will also get smarter and smarter, and maybe one day will even become a general intelligence.
Kevin Roose:
Yes, and essentially this is sort of his best guess at how companies like OpenAI are going to get more intelligent systems. It’s not by, uh, training them on, you know, more specialized data. It’s not by coming up with clever efficiency hacks. They are just going to make the models bigger and that is going to take care of a lot of the problems.
Andy Mills:
So his theory is that you take a promising AI model (for OpenAI, that became their language model, a neural net that looks for patterns in text and language), and then what you need is a MASSIVE amount of text and data to pump into it, as well as a MASSIVE number of GPU computer chips. Which are, of course, very expensive.
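The bet Kevin describes was later written down as empirical “scaling laws”: test loss falls as a smooth power law in model size. As a rough illustration (the constants below are approximately the fits published by Kaplan et al. in 2020, a paper the episode doesn’t cite; treat them as indicative, not exact):

```python
def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Loss as a power law in parameter count: L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11):  # 100M -> 100B parameters
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.2f}")
```

The exact numbers matter less than the shape: every tenfold increase in size buys a steady, predictable drop in loss, which is what made “just make it bigger” a plan rather than a hope.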
Gregory Warner:
And this is why they are pretty sad to lose Elon Musk and his money.
Andy Mills:
Yes, this is one of the reasons it was sad to see Elon walk out the door. But not long after he does, Sam Altman goes out and does the thing that he’s so good at. He knows that they need lots of money and LOTS of GPUs… so he goes out and strikes up a partnership with one of the biggest companies of all time: Microsoft.
Karen Hao:
So, Microsoft ends up fulfilling both of these things. They become the largest investor in OpenAI, and they partner to build the supercomputers that OpenAI needs.
Andy Mills:
And Greg, this is a detail of the story that’s especially wild to me. When Dario and his team at OpenAI get access to these Microsoft supercomputers… they decide to take the scaling theory as far as they can…
Karen Hao:
Dario Amodei was really pushing for the idea of, no, we really go big or go home.
Andy Mills:
So, for example: at DeepMind, when they did that Atari demo, they were using just one GPU.
Karen Hao:
For the richest universities in the world, like MIT and Stanford, it would be a big deal to have a few dozen chips. And in places like India, you would have grad students, multiple grad students, sharing one computer chip. So they’re trying to do their research on fractional amounts of GPUs.
Andy Mills:
But with this Microsoft partnership, OpenAI now has access to these supercomputers with thousands and thousands of GPUs. And so, as the story goes, Dario approaches the leadership team at OpenAI and says: “Guys, what if next time we take this model and we crank it up to 10,000 GPUs?”
Karen Hao:
So 10,000 computer chips, I mean, no one had ever thought of that before, like, that’s bananas… actually, within OpenAI this was a contentious decision, because some people were like: “Is that even possible? That just seems improbable.” Other people were like: “No one has ever done this before, and if we think that AI could go badly, maybe we should scale it more gradually, not just do this dramatic step change.” But his philosophy was, we need to accelerate the development of this technology so that we can then retain hold of it and figure out how to perfect it in the lead time that we have over other potentially bad actors getting a hold of it. And Sam Altman also really liked the idea, because his entire career has been adding zeros to things. So he was like, let’s do it. And Ilya Sutskever philosophically was always more in the camp of: scaling will potentially bring wondrous and potentially terrifying things, but we should not be afraid to go in that direction. And so the main people that were running OpenAI all converged on, yeah, let’s, let’s give it a go.
Andy Mills:
However, if you are going to massively scale up your GPUs, your compute, you also have to massively scale up the data that it’s searching for patterns inside of. Because just think of it: no matter how smart and powerful it is, if it only has access to a limited amount of data, it’s never going to truly become like an artificial general intelligence.
Gregory Warner:
Or it’s almost like if Einstein, as smart as he was, if he’d only ever read one book…
Andy Mills:
He would know that book really well.
Gregory Warner:
Right, but he wouldn’t be that smart.
Andy Mills:
Yes, and so they’re going to need a massive amount of data to match the massive amount of compute. But here’s the problem: there are only so many open-source, free databases on the internet. And so here’s where OpenAI does something that right now has a lot of people, a lot of different corporations, suing them, including the New York Times. Because it appears, and some former OpenAI employees have leaked some details about this, that they just started dumping big chunks of the internet into their AI. I feel like this is going to be THE PINNACLE SCENE in the inevitable Hollywood depiction of this story one day, where they are just ramping up all of the stuff they are throwing into their system. Like, oh, here’s a free database, let’s put that in. Oh look, let’s put some Reddit in there?
Keach Hagey:
Yeah. And then, hey, how about Wikipedia? And then, hey man, these researchers over at the University of Toronto just scraped all these books off the internet; I’m sure they wouldn’t care if we just took all of them copyrighted books and fed that to the LLM. No problem… right?
Andy Mills:
What else we got?
Keach Hagey:
Uh yeah, that’s pretty much what happened.
Andy Mills:
Allegedly they start throwing in scientific journals, news articles, blogs, transcripts from YouTube videos; they just keep going and going.
Gregory Warner:
And how far does this go? Do they feed it the whole internet?
Andy Mills:
Well, like I said, there is a lawsuit that’s happening right now, so we’re going to learn more details, I think, as information comes out from those suits. But it’s been reported that basically, if a website or some text online didn’t explicitly have a label on it saying “Do not use this to train your AI”… they adopted a stance of better to ask forgiveness than permission.
Gregory Warner:
So all we know is that they just dumped a lot of the internet into the system; we don’t know how much or what exactly they put in there.
Andy Mills:
Yes, and we know that this is eventually the strategy that would give birth to what we now call ChatGPT.
Andy Mills:
And so Dario is both the guy who is saying, let’s scale this thing further and faster than anyone has before, let’s crank this up to 10,000 GPUs, but he’s also the safety guy? Isn’t there a tension between those two? Between “let’s crank the knob up to 11” and “oh, we really need to make sure this is safe”?
Kevin Roose:
Yeah, this is sort of the classic, uh, paradox of Dario Amodei and of AI safety in general is that they on one hand fear the effects and implications of these very large, very powerful models, and they’re trying to build them and stay on the cutting edge of AI capabilities. And I’ve asked Dario about this before, and he says, in order to be able to study the safety challenges of very powerful AI systems, you have to have very powerful AI systems to use as your testing grounds. You can’t sort of learn about safety on a Formula One car by practicing on like a, you know, jalopy of a, you know, 10-year-old Honda Civic. It just won’t teach you that much about what kinds of risks are going to take place when AI is, is very powerful.
Andy Mills:
And so the argument here is that to make a powerful AI that is safe, that is good for humanity, you’re going to need to learn about powerful AI systems and test powerful AI systems and so therefore you’re going to have to make one.
Kevin Roose:
Yes. If you want to do cutting edge AI safety research on very powerful systems, you need to actually build those powerful systems.
Jasmine Sun:
I think the other thing that, like, a Dario Amodei would say is: whichever system is the best is going to be embedded in every part of society. Like, we’re going to use it to make decisions about who to give a loan to, we’re going to use it to plan out cities, we’re going to use this superintelligence to even, like, figure out our military strategy. And so, like, the only way to have the impact that we want to have in the world, to ensure that we have superintelligence and that the superintelligence does things like curing cancer instead of, like, screwing us all over and self-sabotaging ourselves, is by having both the safest and the best model. Because if ours is the safest but it’s not actually a very good model, then, like, the unsafe model is going to be the one that’s widely deployed, and that’s a much worse world.
Kevin Roose:
And the last one I heard him make is: The only way to stop a bad guy with a powerful AI is a good guy with a powerful AI. Mm-hmm. Essentially, this is the argument that this technology, it’s so powerful and so lucrative, that someone is going to build it and in Dario’s mind that someone could be an authoritarian government. It could be a rival AI company that doesn’t care as much about safety. It could be a terrorist group. And so the ethical thing to do in his mind, if you are concerned about the power of these AI systems, is for you to be the one who builds it and keeps it safe and kind of sets the high bar of safety that the rest of the industry will have to follow.
Andy Mills:
And it was this mindset, this idea that to make AGI safe, you need to make it fast, that was part of what drew Dario to OpenAI in the first place; this is a part of the mission that he loved. However… one day, late in 2020… he just up and quit, along with several members of the AI safety team, and they went out and pretty much immediately started a RIVAL AI company called Anthropic.
Andy Mills:
Alright, so, Kevin: Dario has yet to accept my interview request. Although the people that he works with are very nice, and uhh, I had a nice meeting with them, and maybe one day, uhh, he will come on the show. But in the meantime, I know you’ve spoken to him. What do you understand is the reason he leaves OpenAI? What does he see there? What is it he doesn’t like?
Kevin Roose:
So the official story of why Dario and his colleagues left OpenAI is that they had philosophical differences about AI safety approaches and priorities, that Dario and his team wanted the company to put more emphasis on safety, and that others at the company were less interested in that. I think the real story is, um, more complicated and involves a lot of, um, not only philosophical differences, but also, like, real personal differences and beefs. Um, lots of disagreements I’ve heard about in reporting, about specific decisions they were making, whether they were taking safety seriously enough, whether they were becoming too commercial. I mean, you have to remember that when Dario joined OpenAI, it was a research nonprofit. It was, um, specifically set up not to, uh, be and act like a normal AI company. And by the time he left, it had started this for-profit subsidiary. It had struck this deal with Microsoft. It was starting to look more and more like a kind of normal tech startup. And I think that made him and his colleagues very uncomfortable. And there are some other juicier stories that I’m gonna save for my book.
Gregory Warner:
So we don’t know what Dario saw that scared him?
Andy Mills:
No. We don’t know. I mean, maybe Kevin knows something and we’ll just have to wait for his book. All we know is that he quits, he says that it’s connected in some way with AI safety, and that he opens a competitor, claiming that now he’s going to be the one to make AI truly safe.
Gregory Warner:
And so now we have more competitors in the race?
Andy Mills:
Yes, and this pushes everyone to work even faster. And really, where you see that most dramatically is around ChatGPT. Because OpenAI had already decided that eventually they wanted to release a version of ChatGPT to the public, but they didn’t think it was quite ready. It had gone from GPT-3 to GPT-3.5, but it was still buggy; it still regularly had these hallucinations that they didn’t understand. So they were trying to hit their benchmark of GPT-4 before going public with their chatbot.
Karen Hao:
But suddenly this rumor starts to spread within the company: that Anthropic also has a chatbot, and they might release it soon.
Andy Mills:
Allegedly, this rumor starts to go around OpenAI that Dario Amodei and Anthropic are planning to release their own chatbot, and to do it before OpenAI can.
Karen Hao:
And so OpenAI executives make a decision: we are not gonna wait for the GPT-4 launch, because the model’s just not ready, but we have the chat interface and we have GPT-3.5.
Andy Mills:
And so they’re nervous that if Anthropic BEATS them to market with their chatbot, then OpenAI is going to seem like they’re behind the ball; they’re going to come off like a copycat…
Karen Hao:
They’re operating under, you know, a very Silicon Valley belief of winner takes most. So you need to be the number one. You need to be the one that has the name recognition, the one who invented this kind of chat bot.
Andy Mills:
And so they just decide, you know, let’s do a low-key, you know, no press release, no advertising, no social media blitz… release of ChatGPT, running GPT-3.5… and Keach Hagey and Karen Hao, they were telling me that supposedly the team at OpenAI… they didn’t think that this was going to be that big a deal outside of Silicon Valley. Like, they didn’t think that this was going to make much of a public splash.
Gregory Warner:
And so why release it, if they didn’t think it was going to be a hit?
Andy Mills:
Well, in some ways it was like insider signaling, to just say to the world of technology: we were here first. It doesn’t matter if the public uses it or not. It mattered that the world of technology didn’t think they were just copying off of their rival, Anthropic.
Keach Hagey:
Inside the company, a “low-key research preview” is how they described it. Let’s just release this model with this new interface that is just like a chatbot, and see what people think.
Karen Hao:
The night before, they were, like, making bets on how many people would actually start using the model, I think in the first weekend, and the highest bet was a hundred thousand. So that’s how many users they provisioned their servers for.
Andy Mills:
And so, on November 30th, Sam Altman goes onto Twitter and he just writes: “Today we launched ChatGPT. Try talking with it here,” and he pastes a link.
ARCHIVE NEWS:
The next generation of artificial intelligence is here.
ARCHIVE NEWS:
The future is now.
ARCHIVE NEWS:
The Internet’s going crazy over new artificial intelligence called ChatGPT.
ARCHIVE NEWS:
A new artificial intelligence chatbot.
ARCHIVE NEWS:
ChatGPT is like a Google you can ask to do things.
ARCHIVE NEWS:
It can answer essay questions, write songs, give you a more complete travel itinerary.
ARCHIVE NEWS:
Who knows what companies and ethical issues that could launch.
ARCHIVE NEWS:
It already has more than a million users after debuting just a few weeks ago.
ARCHIVE NEWS:
Very creepy. A new artificial intelligence tool is going viral for cranking out entire essays in a matter of seconds…
Matt Boll:
Next time on The Last Invention.
Andy Mills:
I’d love it if you could just take me back to this time period in your life where, after years of being on the fringes, as you’ve described it, being rejected, you and Hinton and your fellow connectionist AI researchers, your contrarian views are proven right, and then all of these actual AI systems, these promising new technologies, are born out of you guys’s, you know, determination to chase after this idea despite all the naysaying… How did that feel? Like, I imagine it felt really good?
Yoshua Bengio:
Oh, yeah. I mean, it was, it was great. Um, it was, um, let me share something emotional. So, um, shortly after AlphaGo, uh, I don’t know, maybe, uh, 2018 or something, uh, oh, I guess that’s when I, uh, I got the Turing Award with, uh, Geoff and Yann. I thought, I’ve achieved the greatest prize that a computer scientist can expect in their life. And I’ve accomplished so much, and you know, my career has been so rewarding and successful, what else is there to do? I felt like if I die tomorrow, I’ll go with, you know, serenity.
Andy Mills:
Hmm. You did it.
Yoshua Bengio:
But wait, but there’s a but…
Andy Mills:
Hmm.
Yoshua Bengio:
Uh, November ’22, ChatGPT. It dawned on me: yes, but, like, look, this has been a really big step. How far are we from human level? Maybe just a few years? Maybe a decade? Maybe two? And then what? Like, what’s going to happen with this kind of technology? Aren’t we going to build machines that we don’t control, and could potentially destroy us? How do we make sure this doesn’t happen?… And I didn’t have an answer.
Matt Boll:
The Last Invention is produced by Longview, home to the curious and open-minded. We are an independent outlet focused on giving people the backstory to the debates shaping our future… To support our work, click on the link in our show notes or visit us at LongviewInvestigations.com and become a subscriber. And as always, it really helps us if you leave a rating and review on Apple or Spotify or wherever you listen to your podcasts. One last thing to mention: audio from the documentary AlphaGo was used in this episode. Thank you for listening, and we’ll see you soon.


