btrmt.

Analects

Noetik: stuff on thinking well in a noisy world

Stupid Questions

article

There are a few questions which, on the surface, seem hugely important but, on closer inspection, turn out to be more or less irrelevant. I need a place to write about them, so I thought I’d make this a sort of always-evolving article. So far, I talk about how useless the nature-vs-nurture debate is, and how boring the questions of whether free will is real and what consciousness might be turn out to be.
Nature is just nurture over time, and nurture is far more obviously in charge; nothing changes if free will isn’t real; and the same is true of consciousness. They’re just complicated debates with no real outcomes.

AI Hallucination is just Man-Guessing

article

One time I was out drinking with some Swedish folks and they told me about the word killgissa. It means something like ‘man-guessing’, referring to when you sound like you know what you’re talking about but you’re actually just guessing. I reckon AI hallucination is just man-guessing, but on your behalf. To explain, I first have to convince you that human reason isn’t actually that reasonable. With any luck it’ll make you better at managing your own processes of reason and your AIs. Let’s see.
Human reasoning isn’t flawed, it’s a social tool we use in the wrong places. It’s about sharing and evaluating intuitive claims, not generating rational ones. AI is fundamentally this but crippled: without the grounded intuitions and social friction that makes it work.

Moral Blindspots

article

Most discussions about ethics centre on catastrophic scenarios: situations where it’d be very difficult to avoid unethical behaviour. These scenarios aren’t really very interesting to me. What the average person probably wants to know is how to avoid the tamer moral lapses we encounter every day; how to avoid that single decision that might haunt them. So let’s explore a more practical ethics. This is the second in the series: avoiding the moral blindspot.
Most people think better ethical decision-making is just a matter of stopping to think before acting. But many moral judgements are intuitive, and then we rationalise them to ourselves. We have to train both intuition and reasoning, not rely on one to correct the other.

Beyond System 1 and System 2

article

Kahneman’s System 1 and System 2—our fast, intuitive autopilot versus slow, deliberative override—have become a shorthand for human thought. But thinkers from Evans and Sloman to Stanovich and Minsky remind us that cognition isn’t just a two-lane road. It’s a bustling coalition of specialised processes—heuristics, conflict-detectors, symbolic reasoners—all running in parallel or in nested hierarchies. Fast versus slow will do as a starting point, but the real story lies in the many flavours and layers of mind at work behind the scenes.
System 1 vs System 2 is a useful shorthand, but our minds aren’t two-speed engines—they’re multi-process coalitions of specialised agents working in parallel and in series.

The Neuroscience Con

article

I talk about something I call the “neuroscience confidence game” a lot, but I realised I hadn’t ever written an article I could easily link to to explain it. Some unfortunate soul on Instagram, using this technique as their primary strategy, caught me in their ad-targeting, and I’m going to use them to illustrate it, so that you can tease this kind of thing apart yourself.
The neuroscience confidence game trades content for cosmetic filler, making vacuous advice look smart.
