btrmt. | Betterment

ideologies worth choosing

About

betterment

noun

making or becoming better;

ideology

noun

rituals of thought, feeling, and action;
the science of ideas;

Humans are animals first. At our core, we are creatures like any other—responding adaptively to the environment around us. We see this in our habits, our routines, and our rituals. Automatic patterns of behaviour that gracefully handle the predictable shapes of everyday life. But rituals of behaviour are preceded by rituals of thought. This is what brains do. And unexamined, such things are karstic: pretty landscapes that obscure sinkholes, caves, and rivers beneath. I thought, better to look where you tread. Hence, btrmt. A place to discover ideologies worth choosing.


Projects

Analects

analects

I have a terrible memory. Everything I learn I have to leave somewhere I can find it later. This is where I put it all. Analects are a collection of ideas, extracts, or teachings. These are mine, to myself, and anyone else who might find them interesting. With a background in brain science and the sciences of mind, I explore how ideas become ideologies become action, for better or worse. Here, you’ll find links to all the content I produce for any of the btrmt. projects.

Animals First

animals first

You might have read about me, but now, let me introduce you to btrmt. Animals First walks you through this little website of mine. The philosophy, and all the major threads and minor projects that make it up. Let's see if you can't find something worth your time.

Karstica

karstica

Karstica is the landing page I send people to when they want to pay me to help them. Coaching, consulting, keynote speaking, that kind of thing. I also do pro bono mentorship on a case-by-case basis. Have a look if you want to see my approach to coaching and consulting.

Content


Featured

article


On Emotion

Article

Emotion is an impossible term to define. Seems important though, so let’s try anyway.


Latest Content

Latest

article

System 1 vs System 2 is a useful shorthand, but our minds aren’t two-speed engines—they’re multi-process coalitions of specialised agents working in parallel and in series.

Beyond System 1 and System 2

Article

Kahneman’s System 1 and System 2—our fast, intuitive autopilot versus slow, deliberative override—have become a shorthand for human thought. But thinkers from Evans and Sloman to Stanovich and Minsky remind us that cognition isn’t just a two-lane road. It’s a bustling coalition of specialised processes—heuristics, conflict-detectors, symbolic reasoners—all running in parallel or in nested hierarchies. Fast versus slow will do as a starting point, but the real story lies in the many flavours and layers of mind at work behind the scenes.


article

The neuroscience confidence game trades content for cosmetic filler, making vacuous advice look smart.

The Neuroscience Con

Article

I talk about something I call the “neuroscience confidence game” a lot, but I realised I’d never written an article I could easily link to explaining it. Some unfortunate soul on Instagram, using this technique as their primary strategy, had me fall into their ad-targeting, so I’m going to use them to illustrate, and you can learn to tease this kind of thing apart yourself.


article

This might be the most comprehensive example of the neuroscience confidence game I’ve ever written about. That and a heavy dose of self-indulgence. Neuroscientific self-help, not so much.

Positive Intelligence pt.III

Article

A lot of people were upset with me for teasing the ‘neuroscience-based’ coaching programme ‘Positive Intelligence’, so I thought I’d do a little autopsy. This is part three, on the brain science… Such as it is.


marginalium


Marginalium

My commentary on something from elsewhere on the web.

Christopher Alexander and his patterns. I talk about him quite a few times in my content. His concept of a ‘centre’ in particular:

for example … a fishpond … Obviously the water is part of the fishpond. What about the concrete it is made of? … the air which is just above the pond? … the pipes bringing in the water? These are uncomfortable questions … The pond does exist. Our trouble is that we don’t know how to define it exactly … When I call a pond a center, the situation changes … the fuzziness of edges becomes less problematic. The reason is that the pond, as an entity, is focused towards its center. It creates a field of centeredness … The same is true for window, door, walls, or arch. None of them can be exactly bounded. They are all entities which have a fuzzy edge, and whose existence lies in the fact that they exist as centers in the portion of the world which they inhabit.

Christopher Alexander, The Nature of Order: Book 1

This is how the world often works for us—how the brain sees it.

Anyway, here is a tech guy explaining why tech people also like Christopher Alexander. I think he does a good job.



marginalium


Marginalium

My commentary on something from elsewhere on the web.

AGI is far away. I only really skimmed this. It’s about the slow productivity gains despite what seem to be enormous bursts of growth in AI capability.

Some of this is people hiding the fact that AI is doing their work for them. You could tell your boss that you finished all your work early because of AI and you need more work, or you could do other more fun things instead.

Some of this is because even though the AI can produce surprising results very quickly, it still takes human triage time to account for errors and hallucinations and whatnot, and so the time is just traded from doing the work to supervising the work.

A lot of this is because a lot of knowledge-work requires context, tacit knowledge, or interpersonal judgment that you need to feed the models. So you either spend time feeding them, or you just do the stuff yourself.

The author says something very interesting about this last point:

the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human’s. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box. You can keep messing around with the system prompt. In practice this just doesn’t produce anything even close to the kind of learning and improvement that human employees experience.

The reason humans are so useful is not mainly their raw intelligence. It’s their ability to build up context, interrogate their own failures, and pick up small improvements and efficiencies as they practice a task.

Feels truthy.



Recent Missives


June 13, 2025

February 14, 2025

Last Changelog

Last week I was supposed to write this week’s article, but got distracted by a cool feature of the study of language regions of the brain. Anyway, I updated last week’s article to stand alone, and this week’s article is what it should have been. If you read last week’s, you can skip the intro to this week’s and just dive right in.

Join over 2000 of us. Get the newsletter.