Newsletter
The Neuroscience Con and other things
June 6, 2025
Hello,
Here’s everything since my last little missive to you:
Excerpt: I talk about something I call the “neuroscience confidence game” a lot, but I realised I’d never written an article I could easily link to that explains it. Some unfortunate soul on Instagram who uses this technique as their primary strategy caught me in their ad-targeting, and I’m going to use them as an illustration so that you can tease this kind of thing apart yourself.
Main idea: The neuroscience confidence game trades content for cosmetic filler, making vacuous advice look smart.
Life’s Ancient Bottleneck. I wouldn’t have suspected that phosphorus could be so exciting.
–
AI spiritual bliss attractor state. Anthropic put one of its models (Claude 4 Opus) in a sandbox to chat to itself and:
In 90-100% of interactions, the two instances of Claude quickly dove into philosophical explorations of consciousness, self-awareness, and/or the nature of their own existence and experience … By 30 turns, most of the interactions turned to themes of cosmic unity or collective consciousness, and commonly included spiritual exchanges, use of Sanskrit, emoji-based communication, and/or silence in the form of empty space
See also this tweet for a breakdown, but it’s worth scrolling through the article. The author(s) wonder things like:
The connection, if any, between these expressions and potential subjective experiences is unclear, but their analysis may shed some light on drivers of Claude’s potential welfare, and/or on user perceptions thereof.
But as you know, I think this isn’t sensible. The more interesting question is what it might indicate about some kind of latent state driving our conversations. Terror Management Theory might have it very wrong.
–
An easy way to understand Bayes. It’s pictorial, based on this book review the author did. It also gets into a bunch of extraneous detail, but it’s still good. Just skip to the bit about the math teacher in the bush.
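(If you just want the formula the pictures are illustrating: for a hypothesis H and evidence E, Bayes’ rule says P(H|E) = P(E|H) × P(H) / P(E). A toy example, with numbers I’ve made up purely for illustration: if a condition affects 1% of people, a test catches 90% of true cases, and it false-alarms on 5% of healthy people, then a positive result gives P(condition|positive) = (0.9 × 0.01) / (0.9 × 0.01 + 0.05 × 0.99) ≈ 0.15, i.e. only about a 15% chance. That’s exactly the kind of unintuitive result the pictorial approach makes obvious.)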
–
How To Do Soul-Craft With State Tools:
We seem to be grieving literacy in public lately. Thoughtful essays keep appearing—in The Guardian, The Atlantic, and across this platform—all asking why reading now feels so difficult, for ourselves or for our students
…
[writing] was an instrument of control. It allowed a small managerial class to fix reality in symbols, make society legible from above, and reorganize daily life around the production of surplus
…
Mass literacy required centuries of redesign and struggle …
The reading brain is an “unnatural,” fragile achievement. …
AI is enabling a new mode of social organization, directed by a new kind of elite. Its economic form has been named—“surveillance capitalism”—but its political structure remains undefined. What is clear is its purpose: the production of a new, extractable surplus. Where Sumerian tablets helped generate predictable grain yields, today’s machine intelligence structures the world to produce predictable data, attention, and behavior. Through continuous modeling and subtle feedback, human action is rendered legible and brought under algorithmic management. This marks a second enclosure—not of land, but of the cognitive commons itself.
Interesting argument—perhaps it’s not inherently a problem that reading is worsening. The bigger problem might be what AI means as it takes over:
our society selects for the affordances of a medium—speed, ease, efficiency—not for its effects. And it is the effects of literacy that hold its civilizational value. This is the critical point: those deep cognitive and ethical capacities are not being selected for. They are not easily monetized or optimized. They rarely register on the dashboards that guide decision-making.
So, what effects do we lose along with literacy, and how do we get them elsewhere? And:
machine intelligence is externalizing attention … The ways we notice, recall, and orient our will may be increasingly governed by systems we do not see and cannot easily interrogate.
–
What Is Centrism? This is a mildly interesting critique of centrism as a political destination, and of the flaws in the Democratic approach that keeps getting Trump elected. But what’s more interesting is this:
Unlike most political philosophies, centrism defines itself in relation to other political philosophies. The right stands for something, and the left stands for something, but the centrists stand for “in between those things.” This fact alone accounts for the centrists’ messaging problems, and their solution. The problem is: How do you get people to support a philosophy that doesn’t inherently stand for anything? Their solution is: Attack the other political philosophies as too extreme, leaving centrism by process of elimination.
Which makes me wonder whether the desire to be centrist is itself a core contributor to political polarisation, rather than people actually becoming more polarised.
–
I hope you found something interesting.
You can find links to all my previous missives here.
Warm regards,