FLASHPOINTS #12: Are machines becoming more intelligent than us - and what does that even mean?
A conversation with Professor David Krakauer
In the FLASHPOINTS series, I interview experts about controversial topics of the moment. Recently I spoke with Tom Chivers about AI risk, and Tom did a superb job of calmly explaining why we should take the existential threat from AI seriously. Since then, the profile of this issue has continued to rise, especially after Geoff Hinton’s intervention.
I remain agnostic overall but I’ve been increasingly sceptical about the looseness with which the term “intelligence” is wielded by people from AI and tech. Hinton, for instance, leans very heavily on the simple idea that these machines will soon be “more intelligent” than us, at which point, game over: “I don’t know any examples of more intelligent things being controlled by less intelligent things.”
Previous FLASHPOINTS conversations have included: Teresa Bejan on free speech; Martin Bromiley on healthcare errors; Sahar Akhtar on the ethics of immigration; Olympia Campbell on the gender gap in mental health.
But what does that actually mean - to be more intelligent than us? We can barely grasp what intelligence is in terms of human endeavour, let alone for machines. In my piece on the Seven Varieties of Stupidity I tried to show how intelligence isn’t at the opposite end from stupidity, but mingled with it. It certainly doesn’t map neatly to wisdom or judgement or creativity. Very clever people can hold very stupid beliefs and do very stupid things. Maybe it’s more complicated than “higher intelligences control lower intelligences”.
There’s nobody with whom I’d rather discuss these questions than David Krakauer, Professor of Complex Systems and President of the Santa Fe Institute. I cited David in Seven Varieties; he’s been a big influence on the way I think about intelligence. David has a wide-ranging, polymathic mind and he’s a witty and lucid explainer of scientific topics. As you’ll see below, he has a style of reasoning which comes from a very British tradition of empiricism (he grew up in England and studied at Oxford), of starting with precedent, of distrusting the overly abstract or theoretical, and of valuing elegance and economy of thought. Although he doesn’t attempt a definition of intelligence here, I like his idea that it is - or should be - about doing “more with less” (as opposed to doing more with more, which is currently how machine learning/AI operates).
This is one FLASHPOINTS where I slightly wish it were a video series, because David is such an entertaining talker with a mischievous sense of humour. In lieu of that, I’d like you to imagine him smiling as you read this lightly edited transcript of our conversation.
Hi David. Maybe we could start with what Geoff Hinton has been saying. He’s obviously a serious man, someone we should listen to. He says we've created this thing which is vastly more intelligent than any human being, and he doesn’t know of any example in nature of a less intelligent thing being able to control a more intelligent thing. Therefore we're screwed. I mean, it’s a little more detailed than that, but that is the gist of it. I'm just wondering how you respond.
I find it so confused, in so many ways, that it's hard to know where to start. He's much too smart to make that kind of statement. Let's deal with the ethical confusion first, and then come back and talk about the scientific confusions.
Here's the problem. Everyone's talking about ‘existential threat’ when we face more urgent, immediate, practical problems. How easy is it to get a loan? How likely are you to be racially profiled? How likely are you to be detained without trial, or to be denied bail? Those are the real issues that algorithms are already being used to help us solve. To me all this talk about existential threat is a weird Freudian, Zizekian, perhaps unconscious pitch for technology, because people get secretly excited about being devoured by robots.
If we’re going to talk about the risk side, then let's talk about real risk, which is that we exacerbate economic inequality. For me, as a pragmatist, you look to where a tool is being deployed already, and then ask to what extent these incremental or revolutionary improvements in the tool will make life better or worse. That's the ethical foundation for me. I don't know what existential risk even means. If you're telling me that this thing's going to turn around and say, ‘Foolish human being, you can't perform the following integral using calculus, therefore, you're not worthy of living’, I think it's absurd. To Geoff Hinton's credit, when he's actually forced to explain what he means, he says it's going to be used to do things that we already do better or worse, as in the examples I gave. That's what his real fear is, I think. This other side, about it being more intelligent - well, since he's not defined his terms it's very difficult to know what he means. Most of my colleagues don't accept for a second that this technology is more intelligent than us. It's a category error. It's like saying a car is a faster runner than you. It's not a faster runner, it’s faster on an even surface, but I can put some stairs in front of your Formula One car and it comes to an abrupt stop.
Let me push back and put the case from the other side for a minute. I think they would say, yes, we have these immediate problems to address, bias and so on. But if there is a risk to the existence of humanity, that is surely something we should worry about as well - we don't have to kind of swap one for the other. And we should start planning for that now. Second, it’s hard to imagine that OpenAI and so on would want to exaggerate the risk. I can't think of another industry where they say, you know, ‘Come and regulate us, government’.
Well, that's an interesting point because of course, by inviting regulation, there's a huge incumbent advantage. That's a move we've seen many times. When you regulate markets, you can give huge quasi-monopoly power to the incumbents. So one has to be very thoughtful and balanced. But even if I accepted your point, there's no evidence for existential risk, and it's still not clear to me where their concerns are coming from.
As I understand it, it’s not that machines are going to just decide to be evil and destroy us. It's more like they’re going to be programmed with a task and get so good at doing that task, at meeting the objective, that they will do anything they can to do so, even if that involves wiping out humans. And we can say we'll program them not to do harm, but it's actually very, very hard to program an AI to be useful and not to do any harm.
OK I understand, but I think we should learn from precedent. One of the things that’s a problem in this debate is it often gets talked about as if it was sui generis. Let's talk about firearms, nuclear weapons, land mines - they're much more tangible and much more concerning. They're tools, like this is a tool. This just happens to be an analytical tool. It calculates. It’s a lot safer than nuclear weapons, but the additional ingredient we’re throwing into the mix here is autonomy. A nuclear weapon that has a mind of its own. Now, that's a very strong claim. Because thus far what it can do is predict the next word in a sentence.
There are things LLMs can do well and things where they give absurd results. So let's just be honest about their capabilities and then really analyse this question of to what extent they are autonomous, to what extent do they have destructive potential. But we are accustomed to a world of tools that can be misappropriated, and those are the appropriate analogies to think with - firearms and so on. If these machines are as dangerous as firearms that's a huge problem but thus far, they’re as dangerous as junk mail. That’s not to say they won't be more dangerous. But I just think we need to put this in perspective and triangulate the debate according to things that we do understand, instead of presenting it as a completely unique new world of super-sentient alien intelligence.
Yes, it can feel a bit like medieval theological debates. I go through all the various links in the more alarmist arguments and it all kind of makes sense - and then I find myself in a place where it’s like, ‘OK, but really…?’
I'm only interested in styles of argument where there’s strong precedent. Otherwise, it's metaphysics. I mean, there's nothing wrong with metaphysics, but I think the debate has become metaphysical much too quickly. We should be talking about loans. That’s where the questions of risk and existential quality of life are very real. I don't understand these strange, as you say, theological arguments, in which no-one defines their terms. I don't even know what they mean by “It's smarter than us.”
No-one says that a calculator is better at maths than Poincaré. It certainly can multiply faster than he could but mathematics is a complicated set of cognitive capabilities, as is intelligence generally. So to say that a technology which can guesstimate the next word is ‘intelligent’ is an extraordinary simplification of what we mean by intelligence. I see behind you a picture of the Beatles. Musical composition, artistic judgement - there are so many components of intelligence.
In other words, the ‘doomsters’ are using a pretty narrow, IQ-based definition of intelligence and extrapolating wildly into this general quality of smartness, which is actually made up of lots of different things. It’s something we barely grasp ourselves.
Exactly. Right now, this technology is Wikipedia plus. It's a really good search engine. It gives you answers the way a really good librarian would, by looking at all the books in the bookshelves. Interestingly, humans are inconsistent on this topic. If you were given an exam when you were at school and you looked up the answers you'd be called a cheat. If we give a question to a large language model that's looked at the answer, and gives it to us, we say it's intelligent. It has every library in the world to look at.
When I went to school we always made a distinction between the people we called smart, who were the people who worked things out without knowing everything beforehand, and the people who had read all the right papers and books, who had the answers because they'd been told the answers. These algorithms are giant libraries; know-it-alls. They’re not smart in the sense that we think of smart. We think of smart as not knowing much and still getting to the right answer. It’s a really important distinction to make.
The history of the field of IQ started at the beginning of the 20th century with Binet and others asking things like, ‘Who is the second Prime Minister of Britain? How many carriages can you get down The Strand?’ The questions in the early days of IQ tests were ludicrously factual. Then over the course of time they realised that they were just testing knowledge. They were not measuring analytical reasoning, which by the way, is still undefined. So they started substituting factual knowledge with geometric tests or mathematical tests, Raven’s Matrices, and so on. The idea was to divorce intelligence from knowledge and cultural contingencies. If you’re from Iran it's not fair to ask you about the seventh president of the United States and take your answer as a measure of IQ.
So we've made this transition from IQ as management of knowledge, to IQ as a measure of problem-solving minus knowledge. The problem now is that these machine learning models are like IQ tests from circa 1920. They know everything. But what would it mean to restrict their knowledge, in order to really test their intelligence? So we’re being incredibly inconsistent. For Hinton to say LLMs are very smart is like saying the IQ tests of the 1920s were fair.
For me, intelligence is doing the most with the least, not doing the most with the most. This gets to the very core of what science is about. Think about the progress of science, from natural history to more fundamental sciences, biology and physics and so on. What we've done is to look for rule systems that can stand in for descriptions. Instead of saying ‘I can write down catalogues of the positions of the planets in every minute of every day of every year for the last 1000 years’, we say, ‘I can write down Newton's laws of motion on one page and calculate those positions for all time, past and future.’ That’s what science is. It’s about taking a really cumbersome, data-rich world and replacing it with an elegant, minimal-data world.
Darwinian evolution being an obvious example.
Yes - natural selection. You don’t need a new theory for every species - they all get the same one. Science tries to replace shelves and shelves of description with a minimal rule system that can do as well or better. This looks nothing like LLMs.
This begs the question: is it a different kind of intelligence? Maybe it’s a kind of super well-educated savant that has no elegance in its reasoning. It can't explain to you how it came to an answer, because it doesn't know, it just cogitates masses of data. That's interesting, although that's the kind of intelligence that I always found rather loathsome, actually: “It is true because I’ve read all these books. But I can't tell you why.”
Ha ha. That reminds me of very worthy but dull history books, 1000 pages long, crammed with information on every page, with nothing to say.
Exactly. So let’s try and be a little more consistent, and talk about intelligence in the way that we've always recognised it, outside of AI.
We evolved solutions to problems in the world - to our motor and sensory problems - before we solved math problems, right. There’s a reason mathematics is so hard. It's because evolution doesn't give a shit about mathematics. And there’s a reason why walking and running is so easy, because it does.
If you look at the number of neurons we dedicate to vision or motor control, it’s massive compared to the number of neurons we have for mathematics. You don't have a mathematical brain, it's like a thin little layer of cells under your forehead, whereas most of your brain cares about things that matter to evolution. But we have this weird, anthropomorphic bias to equate difficulty with intelligence. If it’s hard, then it must require more.
But if we measure intelligence in terms of the number of neurons required to compute a solution, then everything inverts. You just sense that maths is difficult because you have one cell to play with so to speak. We talk about computers in terms of CPU speed, size of memory - and that's how powerful they are. If you think that way about the brain, then vision is the intelligent thing. But we don't, we think about intelligence in terms of things we find difficult. Which is okay, and interesting in other ways, but I think it’s another one of those inconsistencies that we're just very unclear about.
Do you think machine learning programs just don't know what they don’t know - that they’re kind of overconfident?
Well, they sort of do know what they don’t know. These autoregressive transformers, LLMs, the way they work is that you have a sequence in the past, and they generate the probability of the next word appearing. They assign a probability to a list of words and they pick the most probable one. So they do actually have an internal model of their own uncertainty, in a more explicit way than we do, weirdly enough.
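To make that mechanism concrete, here is a minimal sketch in Python. It is purely illustrative - the four-word vocabulary and the scores are invented, not taken from any real model - but it shows what it means to assign a probability to a list of candidate next words and then pick the most probable one.

```python
# Toy illustration of next-word prediction (invented vocabulary and scores, not a real model).
import math

# Hypothetical raw scores ("logits") for the word that follows "It's cloudy, it might..."
logits = {"rain": 2.1, "snow": 0.4, "shine": -0.3, "integrate": -2.0}

# Softmax: convert raw scores into a probability distribution that sums to 1.
total = sum(math.exp(score) for score in logits.values())
probs = {word: math.exp(score) / total for word, score in logits.items()}

# The whole distribution is the model's explicit picture of its own uncertainty.
for word, p in sorted(probs.items(), key=lambda item: -item[1]):
    print(f"{word}: {p:.2f}")

# Greedy decoding: pick the single most probable next word.
print("next word:", max(probs, key=probs.get))
```

In a real LLM the scores come from a network conditioned on the whole preceding sequence, and in practice the next word is often sampled from the distribution rather than always taking the top one, but the shape of the step is the same.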
What they don't have, and you could argue we don't either, actually, is a sense of the plausibility of a rule. We acquire rules in two ways, right? One is through experience. We go ‘Look, it's cloudy, it might rain.’ You don’t have to be taught that. But you also learn it through principles. You learn about how cloud formation works, how a raindrop is seeded by a dust particle, super-condensed fluid and all that kind of stuff. Now, those two together are really important, because then you can calibrate your opinions based on both your own experience and principles that you've been taught.
These models are weird because they have this huge experiential database that comes out of data - data that’s been stolen from the rest of us, basically. What they don't have are rules and principles by which to calibrate their experiences. They don't say, hang on, is this in violation of the second law of thermodynamics? They don't have that minimal elegant reasoning thing that we are taught at school (I mean, hopefully we are).
Human action is this peculiar sum of what you have experienced and what you know to be right. That's true with ethics, also. If you see someone drop £100 in the street, you might think, I’ll nab that, no-one will know. That's your experience. Then you have this ethical system, that society and history has given you, which you can choose to ignore or use. I think AI lacks that second component, what we might call a compressed rule system to modulate its own experience.
That’s so interesting because it inverts the usual way that we think about computers, which is that they're all rationality and no intuition. This is the other way around.
It is completely the other way around. That's why there's this debate between the machine learning statistical approach to AI versus conventional symbolic AI. And even though that dichotomy is perhaps a little too simple, it captures an important distinction. These current models are statistical intuition engines. They are experiential intelligences, based on the experiences of others. Parasitic experience. They don’t calibrate against what we would consider true knowledge - rule-based knowledge as well as factual knowledge. They’re reflex engines.
Do you think they’ll find it impossible to be truly innovative or creative?
Sign up for the rest of this conversation, which includes why we should think of machine learning as an “algorithmic telescope”, why David thinks AI is the future of science, and how he hopes it will be used. You’ll also gain access to the whole wondrous world of the Ruffian.
Paid subscribers can also read my take on what might be the most fundamental trade-off of the brain and any learning system; a trade-off we confront in our daily lives, at home and at work, every day: Exploit vs Explore. Plus all the other goodies stacked up behind the paywall. Paid subs are where the action is; they’re also what make the Ruffian possible in the first place.