Better Call Geoff
Why We Need a Manhattan Project For AI Safety
This week, Professor Geoff Hinton resigned from Google because he no longer wants to work towards the destruction of the human race. My précis is only mildly hyperbolic. Hinton notified the New York Times of his decision, then proceeded to give several interviews, including one to the BBC, in which he was explicit about his chief concern: the risk of “things more intelligent than us, taking control”. In his English, slightly diffident manner, Hinton is raising the alarm. He can’t be dismissed as an alarmist.
Geoffrey Everest Hinton is the most venerable AI scientist in the world, the man who ushered in the modern era of machine learning. Born in London in 1947, he studied experimental psychology at Cambridge, then joined a graduate programme in artificial intelligence at the University of Edinburgh. He specialised in neural networks: an approach to AI which is very loosely modelled on the human brain. The basic idea is that machine intelligence arises from passing data through successive layers of simple processing nodes, or “neurons”. The more layers you add, the smarter it gets.
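That "layers of neurons" idea can be sketched in a few lines of Python. Everything below (the weights, the layer sizes, the sigmoid activation) is purely illustrative, not a trained model or anyone's actual code:

```python
import math

def neuron(inputs, weights, bias):
    """One processing node: a weighted sum of its inputs, squashed to (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

def layer(inputs, weight_rows, biases):
    """A layer is just many neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Data flows through successive layers: each layer's output feeds the next.
# The weights here are arbitrary numbers chosen for illustration.
x = [0.5, -1.2]                                       # raw input data
h = layer(x, [[0.4, 0.9], [-0.7, 0.2]], [0.1, -0.3])  # hidden layer of 2 neurons
y = layer(h, [[1.5, -2.0]], [0.0])                    # single output neuron
```

In a real network the weights are learned from data rather than written by hand, and there may be dozens of layers with millions of neurons each; the stacking is what lets the network represent increasingly complex functions of its input.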
Neural networks are now the leading method of artificial intelligence, responsible for the stunning advances we’ve seen over the last ten years. ChatGPT runs on a neural network, as does Google’s image recognition. Scientific innovation doesn’t always go in a straight line, however. Twenty years or so after the emergence of neural networks as an idea, a consensus formed that they were a technological blind alley. That coincided almost precisely with Hinton’s entry into the field, at the start of the 1970s. He must have felt like Cinderella arriving at the ball only to discover that everyone was leaving for a better party.
Hinton carried on regardless, pursuing his unfashionable line of research at British and then American universities, while struggling to get funding. Computer science, and artificial intelligence in particular, were seen as highly theoretical fields without practical benefits or commercial application, and Hinton, whose politics are left-wing, wasn’t comfortable accepting grants from the US Department of Defence. But he believed in the fundamental idea of neural networks and continued to be inspired by analogies with the human brain, reflecting his original academic interest.
Eventually, his doggedness paid off, in a big way. In the late 1980s he made a series of crucial breakthroughs which re-ignited interest in the possibilities of neural networks (sometimes known as “deep learning” AI). Around the same time, the internet came online, sprouting vast datasets, and computing power set off on its exponential upward curve. Neural networks became the future of AI and of everything, and money flooded into the field. In 2012, at the age of 64, Hinton set up a company with two of his graduate students at the University of Toronto which was promptly acquired by Google for $44m.
Even if it ended in riches, Hinton’s journey affirms our romantic idea of scientific research as an endeavour pursued for its own sake, without glamour or glory. He has this tradition in his blood. His father, Howard, was an eminent entomologist who spent a lifetime studying beetles. Hinton is descended from George Boole, the founder of mathematical logic, and Mary Boole, also a mathematician. His great-grandfather, Charles Hinton, coined the word “tesseract” and studied four-dimensional space. (Geoffrey is also related to a Victorian geographer after whom a Himalayan mountain was named; hence his middle name.)
Hinton’s career, including his long years in the wilderness, means that he speaks with a special authority on the question of AI risk. It’s not as if he doesn’t understand the technical questions; he has spent a lifetime bringing to fruition the technology he now warns will endanger human civilisation. Having dwelled happily enough in the backwaters of an obscure academic field for decades, he does not seem the type who craves attention, and he is clearly a man of integrity. He’s not doing this for money - he is rich already and could earn millions more, if he chose, by continuing to work on AI.
Some people have been inclined to wave away the chorus of AI doomsters on the basis that it’s all hype generated by companies which are hungry for investment. This has never made sense to me, since the higher public concern rises, the more likely it is that these companies will face heavy regulation.1 Either way, Hinton is nobody’s idea of a hype merchant, and if you think he’s wrong, you need an actual argument. In short, Hinton has spoken up, and we should listen. Even Snoop Dogg recognises this.
I don’t know if Hinton is right or not. It’s far from a given that experts in a technical field will be good judges of social or economic impact. At some point soon I’ll lay out some of the best counter-arguments to the idea that AI poses a serious existential threat. Frankly I find both sides of the debate to be pretty persuasive, and on this question I think most of us should try to remain in what Keats called negative capability, rather than assuming a firm position which we then feel obliged to defend. There is too much to learn, and too little to be certain about, to behave otherwise.
That’s fine for someone like me, but when it comes to serious societal risks, governments can’t stay in negative capability. They have to act. If we wake up one day to find that an AI has perpetrated even a relatively minor catastrophe - say, it unlocks the security settings of a major bank, causing financial chaos for millions of people - then whoever is in power will suffer the most almighty blowback, comparable to the anger unleashed at politicians after the 2008 financial crisis.
After all, to most voters, the idea that AI is unregulated is insane. I don’t mean voters are worrying about it now, but rather when their attention is drawn to it. A friend of mine recently ran some focus groups on this topic in the US and UK. He found that people view AI very negatively - they worry about the impact of deepfakes on the news and in scams. When they’re told that AI companies have the explicit aim of creating a machine that's more intelligent than humans - and that their engineers don’t understand how AIs do what they do, and can’t control how they behave2 - they are flabbergasted that we haven’t shut this whole thing down already.
What should governments do? If they over-regulate they risk killing off the potential benefits of AI and leaving the field to states with no scruples (although, having said that, the “China will do it if we don’t” story is wearing thin. It’s far from clear that China is keen to plough ahead; AI is inherently disruptive of authoritarian rule). Given the speed of progress, I don’t think governments necessarily have time to work out what perfect regulation looks like before acting. A few costless measures suggest themselves: get the CEOs of the major AI funders like Google and Microsoft to account for themselves in public. Start organising international co-operation. Beyond that, legislators will have to think like entrepreneurs: do some stuff, accept the possibility of mistakes, change course when necessary.
Perhaps the most important thing that they can do is fund research into AI safety. In the last few years, this has become a field of study in its own right - a career option for AI researchers. There are safety researchers working in academia and non-profits, and in industry - OpenAI and Google’s DeepMind employ people whose job is to stop the tech that their companies are developing from doing terrible stuff. But overall, there is a tiny number of people working on these problems - around 400. At the corporate level, there are powerful incentives not to take safety seriously. The big tech companies - Google, Microsoft, Meta - are locked into a race to achieve AGI, and in a competitive race, you tend to cut corners if you can. If you’re a talented AI technician, why would you work on safety rather than building the next iteration of ChatGPT or cracking AGI or working at a health start-up? That’s where the money is or will be, and the cachet too.
This is why governments need to step in. They should treat this like a Manhattan Project. It is different in some ways: we’re not facing an enemy with the explicit aim of destroying us. This is a technology which might solve some of humanity’s biggest problems.3 But we are in a situation of great uncertainty in which the potential harms are high and possibly catastrophic, the avoidance of which will require the application of rare scientific expertise. Governments have the funds to pay the best talent handsomely, and even if they can't or won't match Silicon Valley's compensation, they can offer something else: status. It would be quite exciting to be called upon by the world's governments to save humanity. I imagine you might take a pay cut for that. But such a project would only have legitimacy if it was run by a Robert Oppenheimer - by someone whom those in the field respect and are happy to work for. Well, now we have our Oppenheimer.
This is a public post so feel free to share. If you haven’t yet signed up as a paid subscriber, please do - that’s what enables me to keep this thing going (at least, up until the point when VCs recognise that The Ruffian is the future of everything and the funds pour in). Plus it’s worth it. As a paid subscriber you will, for instance, get access to the whole of this excellent interview with Tom Chivers on AI risk (particularly worth reading if you’re coming to this question anew). You’ll also get access to the heart of the Ruffian experience…what I’ve been reading:
After the jump, a bumper crop of delights including:
- Is self-driving hype now under-hyped?
- Why France is always doing better than you think
- A superb interview on free speech
- The best Reddit AMA ever, maybe.
- A visualisation of a genius pop bassist doing his thing, don’t miss this one.
- Plus, poetry by James Baldwin and some other stuff that has made me happy this week.