DeepMind cofounder is tired of ‘knee-jerk bad takes’ about AI



For the past month, Mustafa Suleyman has been making the rounds promoting his recent book The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma.

Suleyman, the DeepMind cofounder who is now cofounder and CEO of Inflection AI (which set off fireworks in June with its $1.3 billion funding round), may reasonably be all-talked-out after a slew of interviews on his warnings about ‘unprecedented’ AI risks and how they can be contained. Still, he recently answered a batch of questions from VentureBeat about everything from what he really worries about when it comes to AI to his favorite AI tools. Notably, he criticized what he considers “knee-jerk bad takes around AI” and the “hyperventilating press release” vibe of AI Twitter/X.

This interview has been edited and condensed for clarity.

VentureBeat: You talk a great deal about potential AI risks, including those that could be catastrophic. But what are the silliest scenarios that you’ve heard people come up with around AI risks? Ones that you just don’t think are concerning or that are just bogus or unlikely? 


Mustafa Suleyman: AI is genuinely transformative, a historic technology that is moving so fast, one with such wide-ranging implications that it naturally breeds a certain level of speculation, especially in some of the darker scenarios around “superintelligence.” As soon as you start talking in those terms you are getting into some inherently extreme and uncertain areas. While I don’t think these are the most pressing worries, and they can be way over the top, I’d hesitate to call anyone silly when so much is so unknown. Some of these risks might be distant, they might be small, maybe even unlikely, but it’s better to treat powerful and still only partially understood technologies with a degree of precaution than to dismiss their risks outright. My approach is to be careful about buying into any narratives about AI, but also to constantly keep an open mind.

VentureBeat: On the flip side, what is the biggest AI risk that you think people underestimate? And why? 

Suleyman: Plenty of people are thinking about those far-out risks you mentioned above, and plenty are addressing present-day harms like algorithmic bias. What’s missing is a whole middle layer of risk coming over the next few years. Everyone has missed this, and yet it’s absolutely critical. Think of it like this. AI is probably the greatest force amplifier in history. It will help anyone and everyone achieve their goals. For the most part this will be great; whether you are launching a business or just trying to get on top of your inbox, doing so will be much, much easier. The downside is that this extends to bad actors… Because AI will proliferate everywhere, they too will be empowered, able to achieve whatever they want. It doesn’t take too much imagination to see how that could go wrong. Stopping this from happening, containing AI, is one of the major challenges of the technology.

VentureBeat: Do you think if you didn’t live in Palo Alto, in the midst of so many in Silicon Valley concerned about the same things, that you would be just as worried about AI risks as you are now?  

Suleyman: Yes, absolutely. I was worrying about these things in London nearly 15 years ago when they were at best fringe topics for a small group of academics! 

VentureBeat: You famously co-founded DeepMind in 2010. What were your thoughts back then about the risks of AI as well as the exciting possibilities? 

Suleyman: For me the risks and the opportunities have always existed side by side, right from the start of my work in AI. Seeing one aspect without seeing the other means having a flawed perspective. Understanding technology means grappling with its contradictory impacts. Throughout history, technologies have always come with positives and negatives, and it’s narrow and myopic to emphasize just one or the other. Although in aggregate I think they have been a net positive for humanity, there were always downsides, from job losses in the wake of the industrial revolution to the wars of religion in the wake of the printing press. Technologies are tools and weapons. We’ve probably got a lot better, as a society, at thinking about those downsides over the last ten years or so. Technology is no longer seen as this automatic path to a bright, shiny future, and that’s right. The flipside is that we might be losing sight of the benefits, focusing so much on those harms that we miss how much this could help us. Overall I’m a huge believer in being cautious and prioritizing safety, and hence welcome a more rounded, critical view. But it’s definitely vital to keep both in mind.

VentureBeat: There has been seemingly endless hype around generative AI since ChatGPT launched in November 2022. If there is one hype-y concept that you would be happy never to hear again, what would it be? 

Suleyman: I won’t miss a lot of the knee-jerk bad takes around AI. One of the downsides of all the hype is that people then assume it is only hype, that there’s no substance underneath. Spend all day on Twitter/X and the world looks like a hyperventilating press release. The endless froth obscures what’s actually happening, however significant it is. Once we get past the hype phase I think the true revolutionary character of this technology will be more apparent, not less.

VentureBeat: We’re all captivated by the conversations happening on Capitol Hill around AI. What is it really like to discuss these topics with lawmakers? Who do you find the most well-informed? How do you bridge the gap between policy makers and tech folks? 

Suleyman: Over time it’s become much, much easier. Whereas a few years ago getting lawmakers to take this seriously was a tall order, now they are moving fast to get involved. It’s become so apparent to them, like everyone else, that this is happening: AI is inevitable, it’s moving fast and there are yawning regulatory gaps. In DC and elsewhere there is a real appetite for learning about AI, for getting stuck in and trying to make it work. So in general the regulatory conversation is far more advanced than it has ever been in the past. The gap always comes from the mismatch in timescales. AI is improving at a rate never seen before with any previous technology. Models today are nine orders of magnitude bigger than those of a decade ago – that’s beyond even Moore’s Law. Politics necessarily grinds away at the same old pace, subject as always to the broken incentives of the media cycle. It’s impossible for legislation in generally slow-moving institutions to keep up, and to date no one has managed to effectively get around this. I’m hugely interested in ways or institutions that might bridge this. Watch this space!

VentureBeat: Besides Pi, what is your favorite AI tool right now? Do you use any of the image generators? 

Suleyman: I use pretty much all the popular AI tools out there, not least for research… What I would highlight are not necessarily individual consumer products, but the AI you don’t see, the way AI is embedding itself everywhere: in scanning medical images, routing power more efficiently in data centers and on grids, in organizing warehouses and myriad other uses that work under the hood. AI is about more than just image generators and chatbots, as extraordinary as they can be.

VentureBeat: You talk about the Coming Wave, but have you ever been surfing? 

Suleyman: I have! Not that I would claim to be any good… I’m more of a metaphorical surfer!

VentureBeat: You have been active in AI policy for years and obviously spend a great deal of time thinking about how companies and governments can ride the Coming Wave. But obviously for all of us it comes with some anxiety. What are your personal strategies for handling AI or tech-related stress and anxiety regarding the future? 

Suleyman: It’s a really good question, and an important point. It can seem completely overwhelming, paralyzing even. There are two things I’d say to someone here. The first is that although AI may cause problems, it will also help solve a whole load of them as well. Climate change, stalling life expectancy, slowing economic growth, the pressures of a demographic slowdown… The 21st century has its fair share of epochal challenges, and we need new tools to meet them. I would never say AI alone can do this. It is only as effective as its context and use. But I also think meeting those challenges without something like AI is much, much harder. Again, let’s remember both sides here, the worries but also the benefits.

Secondly, too many people are inclined to what I call pessimism aversion, the dominant reaction of elites to scenarios like AI. They take the downsides on board, but then quickly ignore them, look away from where it might lead and carry on as if everything is fine. It’s not doomerism, but a kind of willful ignorance or dream world. This is a terrible foundation for the future! We do need to confront hard questions. Anxiety might be an important signal here. The only way we make all this work is by following the implications wherever they lead. It’s not an easy place to be, but it's better to see clearly and have a chance of making a difference than to look the other way. I find the best cure for that is working to actively build contained technology rather than standing on the sidelines.
