33 Things I Heard At Foo Camp 2026
A Report From the Knowledge Frontier

Catch-up service:
A Deep Dive Into ‘I Feel Fine’
Pitfalls of AI Journalism
Bodies Behaving Badly
Centrism’s Anger Problem
All Hail the Putter-Togetherers
The Stamina Gap
John & Paul is at number one in the Sunday Times bestseller list for a second week running!
Today I report on what I heard or overheard at Social Science Foo Camp, an annual convention of techies, academics, and assorted oddballs, this year held at the Microsoft campus in Mountain View outside San Francisco. (Also in attendance: Helen Lewis, who writes about it here.) These snippets are taken from my notes, which are incomplete, messy, and frankly unreliable. Some are from presentations, others from canteen chats. To stay on the safe side, I haven’t attributed them, except for one or two.
This is intended to be more impressionistic than rigorous - I want to give you a feel for the kinds of conversations that were going on. There were conversations about culture and art as well as technology, although inevitably the conference was dominated by AI. I haven’t used quotation marks, but where a snippet is in the first person you should assume someone other than me is speaking, unless I use my initials.
Best to click on the title above and read this online.
Tech exec: I’ve been in conversations with the top people at all the major companies developing frontier models. The common theme is that none of them have a clue how this will play out.
We should have a strong industry norm against making AI conscious. Anthropic’s ‘constitution’ for Claude is endowing (and even enacting) an AI with consciousness, and therefore ‘rights’. This will lead nowhere good.
We don’t even know what it would mean for an AI to be conscious - we don’t really know what it means to say humans are conscious. There are 22 competing theories of human consciousness in the scientific literature: six major theories and a host of niche ones. The lack of consensus is striking - few if any major areas of scientific research are like this.
The chimp-pig hypothesis, proposed by Eugene McCarthy (not present at Foo), argues, contra Darwin, that humans are the result of interbreeding between those two species. (See also this report in Pig Progress.) (IL: The speaker who presented McCarthy’s theory was not arguing for it, merely suggesting that it’s interesting and provocative, as Darwin’s theory was in 1859, and that this is therefore an opportunity for us to feel what it feels like to encounter a strange idea.)
AI “reasoning models” (models like DeepSeek and OpenAI o3 which show extended chains of thought before answering) aren’t just reasoning for longer; they’re engaging in internal debate. This study finds that they simulate multiple different internal voices or perspectives, with distinct personalities and areas of expertise. These voices argue with each other, question each other, and eventually reconcile differences. The researchers call this a "society of thought." That is, AI seems to be spontaneously reproducing the mechanisms of human intelligence, including debate and productive disagreement.
Another study used machine learning to map the intellectual perspectives of millions of scientists, inventors, screenplay writers, entrepreneurs, and Wikipedia contributors against each other. It found that collaborations between diverse thinkers were more successful - but only a specific kind of diversity mattered. Background diversity — having lived different lives or trained in different fields — was actually detrimental to creative achievement. What counted was perspective diversity: the most successful collaborations happened when partners had a common language but divergent approaches. (IL: I’m going to call this the John & Paul theory of creative collaboration.)
There has been a long-running dispute between Wikipedia editors over whether an entry titled “Longcat” is actually about a long cat, or about an internet meme depicting a long cat. This actually goes quite deep.
When an AI performs poorly or messes up a task you give it, make a note and try the same prompt in six months’ time. This is a good way to get a feel for their progress.
The vibe-coding apps are not, in fact, easy to use. You need technical domain knowledge and intuitions rooted in software engineering - and a lot of LLM usage - to get the best out of them. They don’t replace expertise; they amplify it.
When we give scientists an AI and a scientific problem to solve they feel better about what they’ve done but haven’t always done better. When we give them a cloud of AIs with different viewpoints to work with, they feel better and they do better. They’ve understood the problem at a deeper level.
Analyst at major investment bank: My team’s written reports have become more complex and in-depth since LLMs. It wasn’t a conscious shift, just a response to the fact that it’s now so easy to get instant analyses of, say, the economic impacts of an American attack on Iran. So we have to offer something better.
One of the questions we should ask when designing or using AI systems is: do we want only the ‘probably right’ answer, or do we want the user to have understood the answer and why it might be wrong?
On the one hand, AIs increase individual agency because people will be able to do X (something they didn’t have the capability to do before). But they may also reduce agency because people will feel that everyone else can also do X.
Culture is “the unconscious patterning of human behaviour”. (Original source: Edward Sapir).
The whole history of America and the modern world is embodied in the banjo, an instrument that originated in Africa, entered America through blackface minstrel shows, migrated into the upper class of American and British society as a fashionable parlour instrument before shedding that identity entirely and becoming a hillbilly/bluegrass instrument. Earl Scruggs, who created the three-finger roll technique in 1945, is one of the most influential musical innovators of the last century. (IL: hat tip to the wonderful Alison Brown).
Time-locked LLMs - LLMs trained on data that only goes up to, say, 1492 or 1913 - are still at an early stage but will be a fascinating tool for exploring historical perspectives.
Will human-made things need the stamp of human craft, William Morris-style? Or will norms simply evolve, as they did in photojournalism? When digital photography emerged some in the photojournalist community believed they might have to stick to analogue film to prove veracity. In fact ethical norms have sufficed to regulate the field.
We can train people to use AI for writing in a way that doesn’t harm the quality or individuality of their writing. But those people have already had years of experience writing without it. How will we do it for people who have grown up using LLMs to write?
Arnold Schoenberg, notwithstanding his reputation for austere music, was a warm and affectionate father and much loved teacher. He has three living children.
AI safety only needs one rule: the allocation of electricity to each agent is proportional to the wealth (or well-being) of the poorest person - so agents are starved of power if people are impoverished. The physicality of AI is the only way to force alignment. All abstract principles can be gamed, but energy will remain the fundamental constraint, and we should use it.
Cognitive resonance - when the AI feeds back to you what you already think. (The opposite of cognitive dissonance.)
I had a toxic person in my life that I’d been emailing back and forth for years. I had this nagging feeling that I needed to meet them face-to-face to resolve our differences. But then I found a better way: I gave the AI all of our emails and then did a role-play with this person - after that I realised I didn’t need to meet with or talk to them ever again.
After the jump, more snippets from Foo including some fascinating insights into how human memory works. Plus: my thoughts on the BAFTA brouhaha, plus the Gorton by-election, plus a Rattle Bag of juicy links and stories. If you haven’t already, seize the day and take out a paid subscription. It’s easy to do and excellent value, though I say so myself. The Ruffian is 100% human-crafted.



