Newsletter

Beyond System 1 and System 2 and other things

June 13, 2025

Hello,

Here’s everything since my last little missive to you:

New Articles:

Beyond System 1 and System 2

Excerpt: Kahneman’s System 1 and System 2—our fast, intuitive autopilot versus slow, deliberative override—have become a shorthand for human thought. But thinkers from Evans and Sloman to Stanovich and Minsky remind us that cognition isn’t just a two-lane road. It’s a bustling coalition of specialised processes—heuristics, conflict-detectors, symbolic reasoners—all running in parallel or in nested hierarchies. Fast versus slow will do as a starting point, but the real story lies in the many flavours and layers of mind at work behind the scenes.

Main idea: System 1 vs System 2 is a useful shorthand, but our minds aren’t two-speed engines—they’re multi-process coalitions of specialised agents working in parallel and in series.

New Marginalia:

Graduates have multiplied faster than the room at the top:

The result is a stock of nearly-men and women whose relationship with their own class sours from peripheral membership to vicious resentment. If this coincides with a bad time for the general standard of living, there is an alliance to be formed between these snubbed insiders and the more legitimately aggrieved masses.

Turchin notes that this marginalisation of certain segments of the elite class has a heavy hand in many of our modern problems, from Brexit to far-right populism to the most problematic aspects of ‘woke culture’. It seems increasingly prescient as the years since its publication pass. You’ll need a paywall avoider.

Link

“only in very recent years that some people have begun to undermine the absolute prohibition on zoosexuality. Are their arguments dangerous, perverted, or simply wrongheaded? … Do they have a ‘paraphilia’ … Or are they just normal people who happen to have a minority sexual orientation? Given the fraught debates about consent in human-on-human sexual encounters, it is worth asking whether nonhuman animals can ever consent to libidinal relations with humans”

Interesting, but more interesting are Bourke’s comments about the strangeness of our moral opposition to this. Sex with animals is very untrendy, yet we’re happy in the main to tacitly endorse factory farming and the conditions that come with it. You could rationalise this with some variation of “we need to eat, we don’t need to have sex with animals”. But I think this is a moral dumbfounding thing.

Joanna Bourke - Loving Animals

Christopher Alexander and his patterns. I talk about Christopher Alexander quite a few times in my content. His concept of a ‘centre’ in particular:

for example … a fishpond … Obviously the water is part of the fishpond. What about the concrete it is made of? … the air which is just above the pond? … the pipes bringing in the water? These are uncomfortable questions … The pond does exist. Our trouble is that we don’t know how to define it exactly … When I call a pond a center, the situation changes … the fuzziness of edges becomes less problematic. The reason is that the pond, as an entity, is focused towards its center. It creates a field of centeredness … The same is true for window, door, walls, or arch. None of them can be exactly bounded. They are all entities which have a fuzzy edge, and whose existence lies in the fact that they exist as centers in the portion of the world which they inhabit.

Christopher Alexander, The Nature of Order: Book 1

This is how the world often works for us—how the brain sees it.

Anyway, here is a tech guy explaining why tech people also like Christopher Alexander. I think he does a good job.

Link

AGI is far away. I only really skimmed this—it’s about why productivity gains have been slow despite what seem to be enormous bursts of growth in AI capability.

Some of this is people hiding the fact that AI is doing their work for them. You could tell your boss that you finished all your work early because of AI and that you need more work, or you could do other, more fun things instead.

Some of this is because, even though the AI can produce surprising results very quickly, it still takes human triage time to account for errors and hallucinations and whatnot, and so the time is just traded from doing the work to supervising the work.

A lot of this is because much knowledge work requires context, tacit knowledge, or interpersonal judgment that you need to feed to the models. So you either spend time feeding them, or you just do the work yourself.

The author says something very interesting about this last point:

the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human’s. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box. You can keep messing around with the system prompt. In practice this just doesn’t produce anything even close to the kind of learning and improvement that human employees experience.

The reason humans are so useful is not mainly their raw intelligence. It’s their ability to build up context, interrogate their own failures, and pick up small improvements and efficiencies as they practice a task.

Feels truthy.

Link

AI personality extraction from faces:

we extract the Big 5 personality traits from facial images of 96,000 MBA graduates, and demonstrate that this novel “Photo Big 5” predicts school rank, compensation, job seniority, industry choice, job transitions, and career advancement. Using administrative records from top-tier MBA programs, we find that the Photo Big 5 exhibits only modest correlations with cognitive measures like GPA and standardized test scores, yet offers comparable incremental predictive power for labor outcomes.

Given that personality traits (including the Big 5) aren’t really thought to predict performance very well, I wonder why this ‘Photo Big 5’ does. Is it something about the quality of the photo? I’m going to have to actually read this article.

Link

The speed of AI take-off. Videocast with Tyler Cowen and Azeem Azhar. It’s just good quality AI speculation. 50 mins. Worth a listen.

Link

Preferring skilled migration appears to make us less welcoming of migrants more broadly. This is an email about the Australian situation—we opened our doors to migrants via tertiary education and now we hate their guts for raising the cost of housing. But the housing problem has nothing to do with them.

Link

I hope you found something interesting.

You can find links to all my previous missives here.

Warm regards,

Dorian | btrmt.