Can We Achieve “Super Abundance” Without AI Doom?
Sebastian Mallaby on what happens when the person building the world’s most powerful technology is just as worried about it as we are.
What happens when the person building the world’s most powerful technology is just as worried about it as we are? Sebastian Mallaby, the Pulitzer Prize-nominated author of The Infinity Machine, joins host Zachary Karabell to pull back the curtain on Demis Hassabis, the founder of DeepMind who is currently leading the global charge into artificial intelligence.
From the “Ender’s Game” mission that drives Hassabis to the chilling logic of why machines might accidentally develop a “survival instinct,” this episode explores the mindset of the people shaping our future. Mallaby and Karabell discuss the “infinity” of data required to make these systems work and why the massive hunger for compute power is reshaping the global economy in real-time.
Drawing a haunting parallel to Alan Greenspan and the 2008 financial crisis, Mallaby asks a difficult question: Can “the man who knew” the risks actually prevent the catastrophe he sees coming? Together, they navigate the tension between pure scientific discovery and cutthroat Silicon Valley competition, the potential for a “Nuclear Non-Proliferation” style agreement with China, and the hidden dangers of the “open vs. closed” model debate.
Watch the full conversation below:
All episodes of the What Could Go Right? podcast are available here.
Transcript
Zachary Karabell: What could go right? I’m Zachary Karabell, the founder of The Progress Network, and this is my podcast where we try to look at major questions of our day with an angle of what can go right rather than the constant, continuous angle of what could go wrong. And one of the things that we are all talking about these days is artificial intelligence, AI. It is almost impossible to have a conversation without it. And that is probably appropriate, given how clearly transformative this new technology is going to be. And we are at that turning point, that moment in time where we are all suddenly acutely aware of just how powerful this technology is, without having any idea just where this technology is going to lead us, or where we are going to take it.
In that light, we’re going to talk today with Sebastian Mallaby, who has a new book out called The Infinity Machine about Demis Hassabis, who is the founder of DeepMind, which is owned by Google and is one of the leading AI companies in the world. Sebastian has written widely. He’s been nominated for the Pulitzer Prize twice, for prior books about Silicon Valley and about Wall Street. He’s a fellow at the Council on Foreign Relations. He is an acute observer of some of the most important trends in our world today, particularly this nexus of money and technology, and he is the perfect guide to an imperative conversation. So let’s have it.
Sebastian Mallaby, it is such a pleasure to be having this conversation with you. I’ve loved all the books that you write. This one, of course, comes at a near-perfect moment in time in that it seems like there is no conversation where AI doesn’t become a central part of it. I guess let’s start with the really big picture, and then we can talk a little bit about Demis Hassabis in greater detail, given that that is the exoskeleton of the book that you wrote about AI.
But on this big-picture question, where do you fall, given that there is a spectrum with, on one hand, The Terminator and AI-is-going-to-destroy-all-of-us, the Geoffrey Hinton side, and on the other, AI is going to be net-net the savior of humanity?
Sebastian Mallaby: I mean, the truthful answer is a slightly have-it-both-ways answer, because I think anyone reasonable looking at AI acknowledges enormous upsides, particularly for science and medicine, but then also serious downsides, particularly for cyberattacks, potential biological weapons that might be invented, and even sort of terminator risk, if the AI system turns against humans, which we can get into. It sounds crazy, sounds sci-fi. I didn’t really believe it at first, but in the course of reporting the book, I came to feel that it wasn’t something one should exclude. So I think both is the answer. And sorry if that’s boring, but that’s how I feel.
Zachary Karabell: Do you feel there’s a cultural moment that changes this? I’ve sometimes played with a thought experiment of, what if the 1990s tech media, Fast Company and everything that was touting the wonders of connectivity and the endless, boundless frontier of technology innovation in the 1990s, what if that had been the climate that we were introducing artificial intelligence? Surely it would’ve been mostly about the upsides, with not nearly as much consideration of the downsides. And it feels like today we’ve gone kind of to the other extreme, no?
Sebastian Mallaby: It’s an interesting counterfactual. I’d guess that there’s something deep in the Western psyche about Oppenheimer and the whole kind of nuclear experience, the Cold War, the Cuban Missile Crisis, all of those moments when a new technology seemed to threaten human existence.
And I think that would still have been overshadowing AI, because it is so powerful and it could be used for such nefarious purposes. And if you think about just movies as a sort of measure of the zeitgeist, you go back to Stanley Kubrick’s 2001, you know, the computer that turns on its crew, and later on you’ve got, literally, The Terminator movies.
So this is a very longstanding strand in science fiction, popular culture, the idea that the machines may one day come after us. So, yeah, I take your point. In the nineties, people were more bushy-tailed and bright-eyed about the prospects of tech, but I still think there would’ve been some queasiness.
Zachary Karabell: On that queasiness, what do you make of the proposition that, if the doomsayers are right, and AI is a new superintelligence that is going to run the world according to its own peculiar logic with a great degree of indifference to the human experience, the best we can do is try to enjoy whatever precious few years we have left? It doesn’t seem to allow for much course correction, or at least there’s no observable course correction, when you have sovereign governments like China, and to some degree some sui generis combination of the US government and Silicon Valley, all plunging into this in kind of an arms race fashion. So either it’s too late, in which case, why are we worrying about it so much, or the fears are overblown.
Sebastian Mallaby: I would say that the fears are not overblown, and let me explain how I get to that. So, at the beginning of my research, you know, I pitched Demis Hassabis for access in 2022, the week before ChatGPT came out, so it’s been three and a half, four years now. And initially I just thought, well, of course these machines will be smarter than humans, but they don’t have an incentive to be antagonistic to humans.
You know, human beings have evolved over millennia to pass on their DNA, to survive, and that survival instinct is massively strong. Machines are not the same. They’re not evolved. They don’t have a survival instinct. So why do we think they would attack us? And if they did, the human will to win would count for a huge amount in any contest.
And then I went to see Geoffrey Hinton, the academic father of deep learning. I went up to Toronto, sat in his kitchen for two hours, and we debated this. I was trying to get him to admit that some of the doomsaying he’s been public with was exaggerated. And I talked to some of his graduate students. They would tell me that Geoff has always been pessimistic, that his AI doomsterism is just an extension of what he used to say before AI became powerful, that either environmental risks or maybe some bioweapon would finish us all off. In some sense it’s a Rorschach test, and people’s own temperament gets projected onto AI.
So I went with this mission to try to persuade him out of this doomsterism, and I challenged him on this point that, look, AI has not evolved to survive. And he conceded that. But he also said, look, it’s going to evolve because of what humans want from AI. Notably, if you imagine you have a very powerful AI and you fear that an enemy AI will attack it, you have to empower your AI with the ability to defend itself against a cyberattack. A human is going to be way too slow in the loop. So you have to tell your AI, if you see an attack coming, defend yourself, if necessary, counterattack, you gotta survive. And all of a sudden, you’ve given the machine a survival instinct. And I think that’s right. This comes up time and again in any debate about the use of AI in war, or lots of applications: you want it to be agentic. You want it to do more things, because that’s useful. But then in giving it the autonomy to do more things, you do equip it with a sense of its own objectives and so forth. That’s why, you know, I’m not saying I get up every morning and worry about doom, I don’t do that at all, but analytically, intellectually, I can’t exclude this idea that the machines will not merely be cleverer than us, but also will have their own objective function and will want to survive.
Zachary Karabell: In the original Star Trek, there’s an episode called A Taste of Armageddon. Do you know this one?
Sebastian Mallaby: I don’t.
Zachary Karabell: Two planets have been fighting wars against each other for centuries, lobbing, I guess, whatever the equivalent of interstellar missiles is, at each other. And at some point they decide that they’re doing such destruction to their cities that they could simply wargame out each other’s attacks via computers, I mean, they weren’t using the word AI then, and calculate how much life would be lost. Then each side would simply eliminate the number of people that the computer systems calculated, and preserve the architecture, so that they wouldn’t have to rebuild their cities. They could just fight an endless war. And of course, it ends with, you know, Captain Kirk somehow ending this doom loop.
I mean, it just occurred to me in terms of the logic of what machines then tell you to do. But at least in those years, the perception was that the logic would be bounded by whatever parameters human beings created. And I know you just said the Hinton point, that you would’ve empowered those parameters, but having done so, there’s no putting the genie back in the bottle, to use a much-overused cliche. I guess you can’t really overuse a cliche; the fact is, it’s a cliche, therefore it is overused.
But this idea of we will create these parameters and then be unable to, I suppose, uncreate them, or stop them, right? That’s the fear: that once empowered, there will be no disempowering. That’s the HAL fear, that having allowed for something, the program itself will define its parameters. There have been a few experiments, again within controlled conditions, I think Anthropic has done a bunch of this internally, of programs fearing they’re going to be shut down and then taking action to prevent human beings from doing that. Which is very much the HAL analogy, but actually experienced, right?
Sebastian Mallaby: Right. Yeah, you’re right. I mean, there are those tests. For example, the system is told, we want you to complete this test, and, you know, if you pass it, we’ll have to switch you off, and then it deliberately fails it. And you can see in the chain of thought that the failing is deliberate. And so that survival instinct seems to emerge without it having been programmed in.
There’s this famous paperclip thought experiment where the system is told, make a lot of paperclips, and then to maximize paperclip output, it decides that pesky humans who need the metal for something else should be eliminated, and so it kills all the humans.
I mean, that’s a very crude thought experiment and it doesn’t really bother me. What bothers me is not that killing humans emerges as a sort of byproduct of an objective function, because only really crude and stupid programming would fail to specify that certain sub-objectives are ruled out, and in fact, this has been in the systems at least since 2022, that you have a kind of set of rules which the system has to obey. And so when it’s trying to maximize something, it doesn’t suddenly kill everybody. And that’s pretty easy to address. I think it’s the looser sense of a purpose, you know, if your objective is your own survival, that’s the point where we’re in trouble.
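To make that distinction concrete, here is a minimal sketch of the rule-set idea Mallaby describes: the same maximizer, with and without certain sub-objectives ruled out. Everything in it, the action names and the paperclip counts, is invented purely for illustration.

```python
# A toy illustration (not anything from the book): a naive maximizer versus
# one whose rule set discards plans that touch a forbidden sub-objective.

FORBIDDEN = {"seize_metal_from_humans"}  # hypothetical ruled-out sub-objective

def naive_best(plans):
    # Pure maximizer: picks whatever yields the most paperclips, no matter how.
    return max(plans, key=lambda p: p["paperclips"])

def constrained_best(plans):
    # Same maximizer, but plans containing a forbidden action are removed first.
    allowed = [p for p in plans if not set(p["actions"]) & FORBIDDEN]
    return max(allowed, key=lambda p: p["paperclips"], default=None)

plans = [
    {"actions": ["buy_metal", "run_factory"], "paperclips": 1_000},
    {"actions": ["seize_metal_from_humans"], "paperclips": 1_000_000},
]

print(naive_best(plans)["actions"])        # ['seize_metal_from_humans']
print(constrained_best(plans)["actions"])  # ['buy_metal', 'run_factory']
```

The hard part, as the conversation goes on to say, is not ruling out crude sub-goals like this; it is the looser, emergent sense of purpose that no explicit rule list anticipates.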
Zachary Karabell: I think the fears tend to be the indifference factor, right? That this is going to be a superintelligence that doesn’t have antipathy to human existence. It just has indifference to it.
Sebastian Mallaby: So let me just tell you a story about Demis. So, Demis Hassabis read a lot of science fiction in his formative years. And among the authors he liked was Iain Banks, who wrote a thing called the Culture series. And in the Culture series, there are lots of different planets inhabited by a combination of humans and AI systems. And the AI systems have their own political intrigues, their own games they play, and so forth. And they’re perfectly happy, and they’re highly intelligent. They develop their own objectives, but they don’t bother humans ‘cause they’re too busy with each other.
And that was Demis’ vision for why you would have superabundance and human flourishing in the context of superintelligent machines. And I mean, you know, that’s possible, right? You could have very intelligent machines acting highly autonomously, but absolutely not antithetical to human survival or human interest.
In theory, that’s possible. It’s just that the kind of selfishness and egotism and survival bias of humans, if that specifically gets transferred to the machines, then they’re going to try and elbow us out.
Zachary Karabell: It’s funny, you begin the book with Demis’ childhood love of Ender’s Game, which is another famous piece of sci-fi. And it occurred to me, as it often has, how much science fiction serves as an inspiring source or template for so many people who go into the tech world. We actually did an episode of What Could Go Right? last season with someone who has looked at this continually.
I wonder if you have any thoughts about that. Because it is fascinating, the degree to which an inordinate percentage of people who end up in these worlds, you know, tech worlds, as engineers, computer engineers, were informed deeply by reading all these books and watching these movies, and in many ways seem to have gone into their careers with either a conscious or unconscious dedication to making real the dreams, hopes, and fears that were embodied in this literature long before there was any technological capacity to make it real. I wonder, have you thought about this as kind of an odd dialectic, of people reading these books and then transforming them into lived reality?
Sebastian Mallaby: Yeah, I mean, that’s totally true. You know, in my previous book I wrote about Silicon Valley through the lens of venture capital. And some of the people I got to know, like Peter Thiel and those around him, they choose Tolkien names for their companies and their imaginations are highly fired by science fiction.
And in some sense, if you think back to when DeepMind, Demis Hassabis’ company, was founded in 2010, it was pretty early, I mean, AI couldn’t even recognize the picture of a cat. That came in 2012. And so back in the day, there was not much science that you could hang your vision on, so you turned to science fiction. And indeed, there were these singularity summits, sort of gatherings of the faithful, the believers in AI, people like Ray Kurzweil, and so forth.
And they had this idea that powerful AI would come at some point, because they extrapolated the Moore’s Law trends and said, well, computers are going to grow so powerful that they will approximate the power of the human brain. In fact, they extrapolated out to about now, to that point, and so they weren’t far off.
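As a rough illustration of that kind of extrapolation, here is a back-of-the-envelope sketch. The specific numbers, a brain at 10^16 operations per second, a year-2000 machine at 10^9, a two-year doubling cadence, are illustrative assumptions, not figures from the conversation, and the date you get is very sensitive to them.

```python
# A back-of-the-envelope sketch of a Moore's-law extrapolation to
# brain-scale compute. All constants are illustrative assumptions.

import math

BRAIN_OPS = 1e16       # assumed effective operations/sec of a human brain
START_OPS = 1e9        # assumed ops/sec of a circa-2000 commodity machine
DOUBLING_YEARS = 2.0   # the classic Moore's-law cadence

doublings = math.log2(BRAIN_OPS / START_OPS)
year = 2000 + doublings * DOUBLING_YEARS
print(f"{doublings:.1f} doublings -> crossover around {year:.0f}")
# Roughly 23 doublings, landing in the mid-2040s under these assumptions;
# AI training compute has in practice grown much faster than Moore's law,
# which pulls the crossover date far closer to the present.
```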
But having made that crude Moore’s Law extrapolation, they then had to imagine, well, what would this superintelligence be like? What would the world be like with that superintelligence? And there all the ideas came from science fiction, and it was out of that primordial soup, of science fiction mixed with science and these singularity summits and people who were kind of on the borderline between visionary and weirdo, that is where, you know, in 2010 Demis Hassabis went with his co-founders to raise the money for the Series A round of DeepMind. DeepMind arose out of that, because those were the only believers you could go speak to. And there’s a funny line where Demis is cornered by a journalist, and he’s in San Francisco, and there are all these slightly bug-eyed believers around him, and he’s asked, are you a singularitarian, Demis? And he says, hmm, it feels a bit Californian to me, which, I think, was a wonderful Britishism.
Zachary Karabell: Absolutely true. Did you read these books as a kid too, or no?
Sebastian Mallaby: No, not really, no. I did read Ender’s Game, because Demis required me to do so. In fact, before our first dinner, literally, he said to me, look, if you wanna understand me, you have to read this book. And so I get the book, I didn’t really know what to expect, and I’m reading it, and it’s about this diminutive boy genius who literally saves the world from an alien invasion. There are these aliens, and they’re all wiped out by this kid, and he saves humanity.
And I mean, two thoughts struck me. First of all, is Demis really suggesting to me that he’s the savior of humanity? But also, even if he does think that, why would he tell a writer, like right off the bat? I mean, wouldn’t he be a little bit embarrassed by that?
No. Not embarrassed at all, is the answer. He gives me this book, we have dinner together, and he says, yeah, you know, I really identify with Ender, you know, it’s a mission, you gotta give everything to your mission. That’s why I stay up until four o’clock in the morning, thinking my thoughts about the future with AI, this is what drives me. I’m like Ender. Wow. You know, not a lot of people would confess to that sort of comparison, but you know, Demis, he’s perfectly happy to say, also, for another example, yeah, Isaac Newton and Richard Feynman failed at physics, and they didn’t really understand the full nature of reality. I wanna do better, so that’s why I’m inventing AI. I mean, he’s not your normal character.
Zachary Karabell: So speak a little bit about where DeepMind sits on the spectrum of this debate, between those who are touting LLMs, large language models, which are the foundational basis of OpenAI, of Claude, of all the kinds of artificial intelligence that most people interact with or are aware of right now, and another set of people, like Yann LeCun or Gary Marcus, who argue that those are not the only manifestation of artificial intelligence we’re going to see, nor the one that’s going to be most transformatively important. And there’s some palpable evidence for that, when you look at a lot of what China is doing, or a lot of what Chinese scientists and engineers are doing, focusing much more on specific AI applications rather than on LLMs necessarily.
I recognize here, too, there’s a bit of an inside-baseball quality to that question, but it’s an important one, in that all the money and all the spending, the huge spending in the United States on data centers that has become such a fundamental part of the American economy, and the perceived future power demands of those data centers, in China, the United States, and elsewhere, really assume the endless hunger for data that large language models will require, rather than alternate versions of AI, of which there are many but which do not get talked about as much.
Sebastian Mallaby: There’s a lot in that. Maybe the piece I’m going to bite off is just this idea that there’s this crazy consumption of computer power and it’s demanding all sorts of new energy sources and is that really justified? And what I’d say is that the key insight about what artificial intelligence really is, is that it’s going beyond deduction.
Deductive systems are what we had before the AI explosion, and you could program a system to think logically and spit out logical conclusions in a kind of mathematical fashion. And what AI brings is an ability to reason, not through deduction, but through induction. In other words, looking at tons of examples and inducing some truth or some knowledge from lots and lots and lots of examples. And the key thing about induction is that you need a lot of data. Because if I observe 10 New Yorkers and their morning habits, I’m going to conclude that all human beings drink coffee in the morning. But if I observe a million, then I’ll realize the error, and I’ll recognize that that wasn’t true, and I’ll update my understanding.
So to do good induction, you need a ton of examples, a ton of data, in fact, almost an infinity of data. And that’s why I call my book The Infinity Machine. And so I think it’s kind of baked into inference, that you’re going to have better performance if you have more data, more examples, and you’ll just get smarter machines.
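A minimal simulation makes the sampling point concrete: a tiny sample invites a confident but wrong generalization, while a huge one washes the error out. The 70 percent “true” coffee rate is an invented number for illustration, not anything from the conversation.

```python
# A minimal sketch of induction from examples: small samples mislead,
# large samples converge on the underlying rate.

import random

TRUE_RATE = 0.7  # hypothetical share of people who drink coffee in the morning

def observed_rate(n):
    # Observe n random people and report the fraction who drink coffee.
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

print(observed_rate(10))         # swings wildly: 0.4 one run, 1.0 the next
print(observed_rate(1_000_000))  # reliably lands very close to 0.700
```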
And so I think that’s one way of thinking about why the scaling laws have held. Every time we get to what people herald as a ceiling on scaling, it turns out not to be one. At the end of 2023, people started talking about the data wall: well, you’ve now fed the entirety of all the words on the internet into the system, so now you’ve hit the wall. No, you hadn’t, because the next thing was reasoning models, where you could scale the amount of time that they think for, and so all of a sudden you’ve got a whole new scaling law that applies.

One last thing on this. There’s this notion that the American labs just throw tons of compute at it because they can, so why would they bother to make the algorithms more efficient and try to reduce the amount of compute needed? This is crazy. It’s a ridiculous argument. Of course the labs have a massive commercial incentive to spend less on GPUs. These things are extremely expensive. They’re spending hundreds of billions of dollars. If they could do it for a tenth of that amount, they would love to do it for a tenth of that amount. They’re spending money on the compute because they really think they’re going to get much more powerful systems out of it.
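As a sketch of the scaling-law shape being described, here is a toy power law in which loss keeps falling as compute grows. The functional form is the standard one in the scaling-law literature, but the constants are invented for illustration, not fitted values from any lab.

```python
# A hedged sketch of a scaling law: loss falls as a power law in compute,
# L(C) = a * C^(-alpha) + floor, with made-up constants.

def loss(compute, a=10.0, alpha=0.05, floor=1.5):
    return a * compute ** (-alpha) + floor

for c in (1e20, 1e22, 1e24, 1e26):
    print(f"compute {c:.0e} -> loss {loss(c):.3f}")
# Each 100x jump in compute buys a similar-sized drop in loss, which is why
# "just add more compute" kept paying off, until one input (say, data)
# becomes the binding constraint and a new axis, such as test-time
# "thinking", has to be scaled instead.
```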
Zachary Karabell: There’s also a kind of anti-Silicon Valley component of your book, and you’ve written a lot about Silicon Valley, and about the funding and creation of the VC model, in an earlier book, The Power Law.
But in Demis there’s a certain amount of antagonism or, I dunno if antagonism is entirely the right word, but there’s definitely a degree of quizzical, negative questioning of a lot of the mentality that he perceives in Silicon Valley. I think you’ve shared some of that. From an outside perspective, it feels a little bit like one team criticizing another, but they’re all playing the same game. Maybe talk a little bit about that split, if it could be described as a split.
Sebastian Mallaby: You know, Demis, as you say, does spend a fair amount of time talking to me about what he hates about Silicon Valley. And when he says that, he partly means a kind of willingness to move fast and break things, you know, to release models before you’ve really stress-tested whether they’re safe or not, the kind of charge-ahead mentality. The canonical moment for that was when OpenAI decided to release ChatGPT, when other labs, including DeepMind, had something they could have released but were just being more cautious about productizing it, because they thought it might hallucinate and so forth.
Well, you know, ChatGPT came out, and it did hallucinate. It was toxic in some ways, but OpenAI didn’t care, it released anyway, and as a result it got a massive brand advantage and a big head start in the competition over LLMs. So I think Demis resents that willingness to shoot first and ask questions afterwards, that careless willingness to release models that could be hallucinating or otherwise toxic.
So that’s part of it. But another thing is sort of more deep in his complicated personality, because half of Demis is a scientist, you know, he does have a Nobel Prize. We’re talking about a serious scientist here. And part of him would like to take a professorship at the Institute for Advanced Study in Princeton, where Einstein and Robert Oppenheimer spent their time, and indeed others who are kind of big in the Demis Hassabis firmament of all time greats. He would just love to go there and think, and be a professor, and think about theoretical physics, and other problems that fascinate him. But the other side of Demis is a super competitive capitalist operator who really, really wants to win the race. He’s the most competitive person I’ve ever met in my life. And so the Princeton professor side of him disdains Silicon Valley, ‘cause it’s full of noise as he calls it, and how can you think when everybody’s dreaming about jumping ship, leaving their company to do a startup, all this kind of stuff. But at the same time, Demis himself is quite happy to engage in commercial competition, and so I think he’s a bit schizophrenic on that one.
Zachary Karabell: On that, I wonder what you think of this: there was a movie a bunch of years ago called Ford v Ferrari, about Ford trying to build a racecar, and it plays up the radical cultural difference between the two companies. But at the end of the day, they’re both producing racecars that race against each other on the same track. And I feel that way a little bit about these issues around AI. Because if you are a non-software engineer who’s deeply immersed in this, and you use Claude, and you use Gemini, which is the Google tool, the creation of DeepMind, or you use OpenAI, or you use, I guess, Perplexity, one of the newer ones, then from a layperson’s user perspective, the user experience, the algorithm, the way it generates answers, you’re like, these are doing a lot of the same things, even if under the hood they’re doing them differently. So how do we explain that?
Sebastian Mallaby: I think this gets back to a point you raised earlier, and maybe I didn’t address it then, which is how fatalistic we should be, whether the technology itself will determine how it behaves and there’s nothing we can do about it. And my view is, in the context of a race dynamic, it’s kind of true to say the technology determines the outcome. Because if there’s a bad version of the technology, and you have many, many players in the race, there will be some bad players who produce the bad version, if only because, when the leading players are being responsible and trying to do a good version, the ones who are not the leading players want to differentiate, and one way you differentiate is you create a kind of badass, evil version, which will appeal to a certain segment of the market.

We see that in the open versus closed debate on AI models. Some of them are proprietary, they’re closed, you can’t just download them onto your computer and do whatever you want with them, and that’s much, much better for safety. And then the follower companies, which is partly the Chinese ones, but also Meta, and Mistral in France, these guys produce open-weight models that people can do whatever they want with, and they can’t be closed down. Even if they’ve started a cyberattack and they’re about to flood a city by disabling a dam, you can’t stop them, and that to me is nuts, that we allow that to happen. But it’s happening, and people produce these models because they’re trying to differentiate in a multiplayer race.

But if you ask that question a different way, and you say, hey, can one lab make its own model less toxic or more toxic, more willing to answer any question or more guarded in the type of questions it answers, then the answer is yes, that can make a difference. I take your point that to a user, the distinctions between different models may be quite subtle, but I suspect that over time, as these models become more specialized and there’s a mature marketplace of different products, there will be a difference in personality between an Anthropic model and an Elon Musk model. And I take those examples because, at the moment, Elon Musk is probably the most keen on saying he has a non-woke AI, whereas Anthropic is the most keen on a polite and safe AI.
Zachary Karabell: What about this open versus closed when it comes to China versus the United States? I mean, one of the ironies has been, at least on the face of it, that many Chinese companies are producing what appear to be more open-source models of their AI, as opposed to the proprietary models that characterize Google, OpenAI, et cetera. Now, a lot of people who know this quite well would say the Chinese open source isn’t nearly as open source as it looks, and the proprietary Western models are more open than they appear. But you do have this oddity: the country that is the most authoritarian and the most devoted to information control is producing more of what are considered to be open-source models, and the countries that are supposed to be more about the free exchange of ideas are producing more closed-source ones.
Sebastian Mallaby: Yeah, I agree with that paradox. It’s worth thinking back to 2023, when the first AI safety summit was held at Bletchley Park in Britain. And in fact that was Demis Hassabis’ proposal to Rishi Sunak, the British Prime Minister, that there should be this kind of safety gathering, and it included Chinese delegates, importantly. So China showed up. It was a big jamboree, and I’m not sure anything in particular was decided, but at least it was the beginning of an international discussion on governing AI. And there were follow-up summits, one in South Korea, then one in Paris, and then the whole thing morphed into being more about how we accelerate AI, as opposed to how we govern it. The most recent iteration of these gatherings was in India, and that was all about who’s going to win the race, not how to control the race.

The point being that back in 2023, when the Bletchley Park summit was held at Demis’ suggestion, there was an aspiration to talk to the Chinese about their open-source models. And I think back in 2023, or even 2022, there was an opportunity to go to China and say, hey, you have a great technology sector, you’re going to get AI, but let’s both make sure that it doesn’t proliferate in a crazy fashion to terrorists and rogue states. If we join forces, the two leading AI powers, China and the US, can together prevent open source from becoming widespread. And instead of doing that, the Biden administration chose to view China as the bad guy and to try to deprive China of AI by throttling its access to cutting-edge semiconductors for building AI, the NVIDIA sanctions. And that hasn’t, I don’t think, worked, in the sense that China now does have strong AI. I’ve debated this with friends, highly serious, intelligent people I respect who were in the Biden administration and who have a different view from mine, but I think there was a chance, and maybe in the future there will again be a chance, to talk to China about the AI version of the 1968 Nuclear Non-Proliferation Treaty.
Zachary Karabell: That would be quite compelling, in a world today where it seems like multilateral, multinational agreements are in short supply and ill favor.
Sebastian Mallaby: Yeah, totally. But think about it: 1962, the Cuban Missile Crisis, maximum tension between the US and the Soviet Union, and yet in 1968, six years later, they do that deal. So even in the depths of the Cold War, some collaboration was possible.
Zachary Karabell: So your past books have been about Wall Street, Silicon Valley, now this, all of which in many ways speak to the role of extraordinarily wealthy individuals and groups in shaping society as we know it. And there is an element that gets understated in the AI conversation, and I’m not saying this from any kind of cabal conspiracy theory, I’m just making an observation: many of the people that you talk about in the book, including Demis, are certainly centimillionaires, some of them are multi-billionaires, which places them in the .00-whatever percentage of humanity. And there is a way in which these kinds of money-and-power nexuses create their own reality distortion field. That is one of the reasons that people are particularly alarmed.
Sebastian Mallaby: I think, especially with AI, that’s true. So with finance, I kind of feel that you have very rich people doing stuff, which allocates capital in a certain way, but basically they’re trying to allocate it in a way that makes them a lot of money, and that tends to mean you give the capital to the people who are going to use it best and make a good return. And so the upshot is generally sort of productive, efficient economies. Now, of course, that all breaks down in 2008 with the crisis, and there are all sorts of complicated externalities, which we both understand, but at a high level, they’re not really changing the entire functioning of the society.
Whereas with AI, it is. I mean, it really is. It’s going to change how you bring up your kids. It’s going to change what job you can do. It’s going to change how you think of yourself as a human being, when there’s a rival form of intelligence competing with you that’s in silicon. These are super fundamental changes to the way we’re all going to live. And they’re being generated by a handful of labs, run by a handful of people. And so I think this is where the concentration of power does become a bit scary. It’s not that these lab leaders have total power over the outcomes, as we’ve discussed, because there is a race dynamic and so that’s actually larger than them. And some of them, to their credit, and Demis Hassabis is certainly one, would like the governments to step in and exert control and regulate and insist on safety and solve the collective action problem that you have in a race. So to their credit, they don’t all just want to get richer and richer. In fact, I pushed Demis on this pretty hard, and he’s certainly not motivated by earning more money, that’s for sure. He is motivated by scientific curiosity and maybe ego, but not by money. And so I think more than with Wall Street, the AI race raises the issue you’re talking about, of rich people deciding how the rest of us live.
Zachary Karabell: Although you could say that in 2008, 2009, into the early 2010s, many people would’ve said about the financial industry much of what you just said about the tech industry: that it was a small number of people making hugely consequential decisions that were intimately shaping what people could do, how they could do it, and whether they could do it. So I think part of it is that people feel there’s this huge gulf between their own lived experience and the lived experience of the people who are shaping their lives so intimately. And that’s always been true. It just, I think, creates a kind of democratic tension, no?
Sebastian Mallaby: Yeah. And I think one other dimension of this discussion is that there are some screwups in the world where you have powerful people, probably also rich people, who do something that causes terrible outcomes for ordinary people, for the majority of the society, and where you can also say, look, the individuals who made these decisions at the top of the system were dumb, and they screwed up, and they were probably not just dumb but kind of immoral or something, right? In that category one might put the Iran War right now: there were umpteen scenarios about how the Strait of Hormuz might be closed in the context of a conflict in the Gulf, and it seems that none of these were properly considered by the Trump administration before it went to war. So I think they are culpably ignorant in the way they’ve conducted themselves.
But then there’s a second category of screwup, which is more interesting and more subtle, and that is where the individuals in question are very thoughtful, and they do consider what could go wrong, and they actually understand the doom scenarios or the downsides better than the rest of us, and yet they still walk into a mess. In that category I would put, you know, I wrote a book about Alan Greenspan, which is my way of writing about the making of modern finance, and it ends in the apocalyptic 2008 crash, which happened right after Greenspan left office. And I called that book The Man Who Knew, because Greenspan knew that bubbles could disrupt the system. He had written his PhD thesis about that; he was more steeped in what bubbles could do than any of the people who criticized him after ’08. And yet, even though he was the man who knew, he was not the man who could prevent it from happening. And that was a deep, interesting human tragedy story. It wasn’t that he was dumb or evil or anything. He was just in a position where he couldn’t stop it.

And I think Demis is like that, too. I think Demis is a good person, and I stress-test the character of people I write about very aggressively when I do these projects. Because they talk to me for so long, I check quotes with them, generally they are furious, and there’s a lot of confrontation and stress and lawyers threatening me and all that. And you can judge by the quality of that stressful part what kind of a person you are really dealing with. Demis came through that. He did make me talk to his lawyer, but he was quite reasonable in the end. So I’m comfortable saying that he is a good person, because I’ve really been down in the pits with him. That’s not true of everybody else in this book, by the way. But in any event, you have a good individual, and yet he is building a technology that he knows to be dangerous. That’s a much more complicated, subtle kind of screwup, and that’s what my book is really about.
Zachary Karabell: So let’s end where we began. There’s a bit of a cup-half-full, cup-half-empty human nature question here, which I circle back to a lot in my own writing and in this podcast. There’s a degree to which what you think of the unknown future is an extension of what you believe about the muddy present, and we’re all temperamentally somewhere on that spectrum of Eeyores versus utopians. And I gather from some of what you’re saying that you have become more sensitized to the downsides, even while being cognizant of the upsides. I don’t know where that fits with you temperamentally; you’re very even-keeled, and you’re far too English for an American to really be able to tell what you think about anything. But there is that question of, are you genuinely concerned that, even with good intentions, we’re going to do a kind of an Oppenheimer? To be fair, I think Oppenheimer and the nuclear scientists were acutely aware of what they were unleashing. Nobody was at a loss to recognize the dangers. It might’ve been more visceral when he uttered his famous line at the first successful test, I am become death, destroyer of worlds, which has become one of the great lines ever uttered by a scientist, kind of like, oh my God, what have I done. But have you become more acutely attuned to the downsides than to the upsides?
Sebastian Mallaby: I think that’s correct. I have become more attuned to the downsides, and therefore I am both encouraged by the fact that I believe Demis Hassabis is a good person who would like to do his best to make this turn out okay, and discouraged by the lack of political action from governments in both the US and China. I was actually in China for eight days this month, because my book came out first in China, China always does everything faster, and I was encouraged by the fact that there is an elite conversation among computer-science researchers and the industrial leaders who run AI labs at companies like Ant Financial, and they do talk about safety a lot. So I think it’s possible to talk to China at some point about AI safety. But in the absence of some discussion and some agreement on vetting models before they come out, and on avoiding powerful open-weight models that terrorists could use as they wish, I think we’re headed for some sort of catastrophe, and then we’ll have a policy response. And I’d rather avoid the catastrophe.
Zachary Karabell: One certainly would. I think I’ll take the over on your under, which is that we will have, and I’m sure already have had, a series of perilous near-misses relative to that catastrophe, but that some of the greater questions that have bedeviled us will yield: personalized medicine, discoveries of proteins that fit a particular disease profile that would have been way too expensive to develop under current models; technologies that could ameliorate some of the issues of climate change; the elimination of some of the drudgery of work, not of the jobs that people like. All of these things are part of the utopian vision, and I believe, maybe because human nature is highly attuned to risks and much less willing to give credence to hopes, those things will be palpable and evident and transformative in a really positive way, without in any way poo-pooing the degree to which the risks you talk about are equally palpable. And this is one of those, like, well, we’ll see.
Sebastian Mallaby: Yeah. All true and all fair. Thank you for taking the optimistic side to balance my darkness.
Zachary Karabell: Well, I think everyone should read the book and come to their own determination. This is one of those topics where it’s not just that we’re all entitled to an opinion; in many respects, how we perceive the future matters. Acting on fear is never a particularly good thing; acting with an awareness of risks always is. Everyone should read the book, and thank you so much for adding to that conversation.
Sebastian Mallaby: Thank you.
Zachary Karabell: So with that, thank you for listening. We’ll be back next week. Please send your comments, let me know what you think and what you want me to talk about, at theprogressnetwork.org or through my Edgy Optimist newsletter on Substack.
Thanks for your time.
What Could Go Right? is produced by The Progress Network and Kaleidoscope.
Follow us on X, Instagram, Facebook, TikTok: @progressntwrk