Why Giving Up On The Humanities Is Self-Destructive
The Economic Case For the Liberal Arts In the Age of AI
Catch-up service:
The Orangutan Theory of Intelligence
Introducing ‘John & Paul’
Why Britain’s Elites Shun Highbrow Culture
How the World Failed Greta
Is Culture Dying?
A drumbeat of doom accompanies the rapid progress of AI. It’s going to steal our jobs, mash our brains, subvert our democracy, defile our womenfolk - and that’s if it goes well. Believe me, I do take these threats seriously, but the future is stubbornly unpredictable, and when we focus on the problems ahead we sometimes take our eyes off the ones in front of us. Insofar as I have a razor for the whole discourse, it’s this: everything we worry AI is going to do to us is already happening, because we’re doing it to ourselves.
We worry that AI is going to flood the world with mediocre content; I worry that we won’t notice the difference (does the latest data-driven Marvel blockbuster really bear the inimitable stamp of human genius?). We worry that AI-powered machines will overpower human intelligence; I notice everyone, including me, is mesmerised by their phones. We worry about being ruled by robots, when we are already ruled by politicians who act like automatons (with a few exceptions - some admirable, others less so). Recent reports that students no longer read books add to a pile of evidence that we’ve given up on the struggle to be human.
There is a familiar story in consumer categories: an established, market-leading brand comes under attack from cheaper, lower-quality competitors. In response, rather than adapting and bolstering the qualities that made it successful in the first place, the leader cravenly attempts to imitate its challengers - and ends up being swallowed by them. This is the strategy our species seems to be pursuing in response to machines that can provide a “good enough” emulation of our most valued product attributes, like the ability to use language and solve problems. (In this case, the brand is the sponsor of its own competition, but let’s not stare at this analogy for too long).
This self-abnegating approach is embodied by the sharp decline in the study of the humanities, or liberal arts. From 2012 to 2020, the annual number of humanities bachelor's degrees awarded in the US fell by almost 16%. The share of such degrees is now at less than 10% of all bachelor's degrees awarded, the lowest level ever; English and history fell by a full third over that period. In the UK as well as the US, universities and schools are organising themselves around the primacy of STEM, cutting programmes in classics, history, music, arts and drama.
We’re abandoning the humanities. The clue is in the name; I mean it could hardly be more on the nose, could it? We’re giving up our USP in order to meet the machines on their turf. Meanwhile we’re training humans to think and act algorithmically, following rules and checking boxes. Here’s one prediction I will risk: the machines are going to be better at imitating humans than humans ever will be at imitating machines. We do not have the comparative advantage here. We should be leaning into, not away from, our humanness.
I believe people should read great books and listen to great music for their own sake rather than to make themselves better employment prospects. The humanities help us think about how to be, not just what to do. But even if we’re being utilitarian about it, ditching the humanities is a mistake; a well-rounded liberal education makes more economic and commercial sense now than it ever did. Only if we use AI to support us in a quest to be more human will we reap the rewards of the coming revolution.
If governments, universities and employers are starving the humanities of resources, that’s partly because they are under the spell of ‘human capital analysis’, pioneered by the late Gary Becker, winner of the 1992 Nobel Prize for economics. It was Becker who first made a systematic argument that education and training are investments in human capital, in the same way that businesses invest in machines or buildings.
On this basis, the intuitive inference is that if the global economy is to be dominated by AI, countries and companies should be allocating the maximum amount of human capital to these technologies. We simply don’t need graduates who are experts in Greek civilisation, nineteenth century novels or twentieth century philosophy, even if it’s nice to have them around.
The inventor of the theory took a different view, however, as I discovered via an excellent post by the economist Peter Isztin. This is from an interview with Becker:
Becker: …What people should look for then as they invest in their human capital is more flexibility. Instead of having human capital that would be particularly useful for one company or even one occupation narrowly defined, you should try to recognize that the future may involve working at another company or in a somewhat different occupation. So look for flexibility.
Interviewer: What kind of education affords such flexibility?
Becker: A liberal arts education. I wrote about this 40 years ago, but I think it’s become even more important today. In an uncertain world, where you don’t know what the economic situation will be like 20 years from now, you want an education based on general principles rather than on specific skills.
This makes sense for a few reasons. A person who is educated in the liberal arts or humanities - not necessarily instead of maths and science and engineering - is acquainted with a range of different fields and ways of thinking. That makes them better able to adapt to an economy that moves in unpredictable ways.
All this talk of “twenty-first century skills” misses a salient point about twenty-first century economies: we can’t be sure what skills will be most valuable. It’s suddenly unclear, for instance, which programming skills will be valuable over the next ten years now that LLMs have proven to have a knack for coding. AI disrupts all technology, including itself; you might spend three years studying software that becomes obsolete three years later. A more general education creates more versatile workers, which might be one reason that liberal arts educations tend to pay off over the long term.
Given unpredictable employment demands, it makes sense to adopt what you might call a value investing strategy for education: focus on the fundamentals and don’t take current trends too seriously. For instance, a good humanities education inculcates precision of verbal expression in writing and speech. As long as humans use language to communicate, those skills will remain valuable, even or especially in tech companies. That students are finding it harder to read books or write essays suggests that this most vital of human skills is being dangerously neglected.
Reading a book is a drag because the information goes in so slowly, but learning to think well entails the sacrifice of speed for depth. Isztin, borrowing from the economist John List, defines critical thinking as the habit of “thinking slower”; of being wary of our instincts and intuitions and able to analyse them (which is not the same as ignoring them). There is no better discipline for doing that than philosophy. Socrates, the greatest innovator in Western thought, made it his business to stop smart people leaping to conclusions. A surprising number of Silicon Valley’s most successful entrepreneurs and investors are philosophy grads, Reid Hoffman being a prominent example.
Literature and history are good ways to learn about the complexity, potential, and frailty of human beings. No matter how tech-dominated our workplaces become, the biggest decisions that leaders make will always concern people, with their messy feelings and maddening, glorious irrationality. It requires something more than technical competence to get those calls right. Reading widely is no guarantee of wisdom, of course, but it does indicate a lively mind. People laugh at Elon Musk’s enthusiasm for Homer, but it’s not a coincidence that many of the most successful tech leaders are voracious readers. Success in an unpredictable world correlates with intense curiosity about all human endeavour.
A knowledge of the humanities also makes life, and work, more interesting, and in a world where the top companies are in fierce competition for the smartest minds, interestingness is valuable. This point is made by Nabeel Qureshi in a recent post on his time at Palantir, the software company founded by Peter Thiel and Alex Karp. Nabeel, as those of you who have heard him on podcasts will know, is himself the model of a twenty-first century renaissance man: a top software engineer who is at ease discussing Empson’s Seven Types of Ambiguity or comparing recordings of the Goldberg Variations (without for a moment sounding like a show-off).
Although no longer at Palantir, he writes affectionately of a company staffed by brilliant, driven weirdos who love talking about Wittgenstein. Palantir’s founders, both philosophy graduates, met at law school and bonded over a love of arguing about socialism and capitalism, Christianity and atheism, Heidegger and Girard. The company they created in 2003, after Karp had completed his PhD in neoclassical social theory at Goethe University Frankfurt, retains this intellectual intensity. Here is Nabeel on why that matters:
The combo of intellectual grandiosity and intense competitiveness was a perfect fit for me. It’s still hard to find today, in fact - many people have copied the ‘hardcore’ working culture and the ‘this is the Marines’ vibe, but few have the intellectual atmosphere, the sense of being involved in a rich set of ideas…The main companies that come to mind which have nailed this combination today are OpenAI and Anthropic. It’s no surprise they’re talent magnets.
The blend of technical expertise and broad intellectual curiosity exemplified by such companies points to a future where human and machine intelligence complement each other, rather than compete or converge. In a recent interview about AI, the mathematician Terence Tao said, “I’m not super interested in duplicating the things that humans are already good at. It seems inefficient. I think at the frontier, we will always need humans and AI. They have complementary strengths.” I wholeheartedly agree, and would only add the converse: humans should not be duplicating the things that AI is already good at. If that’s all we do, we’ll all be poorer for it, in every sense.
After the jump: a mini-rant about the state of our politics; some counter-intuitive news about attention spans; whether you should do what you love; Da Vinci vs Michelangelo; how to get the most out of AI chatbots, and MUCH MUCH MORE.
If you don’t have a paid subscription yet, please consider taking one. I rely on them: it’s what makes the Ruffian possible; it’s what made ‘John & Paul’ possible. It’s also what gets you access to the best stuff (and the best stuff will only be getting better in months to come).