Uncertainty vs Risk
May 9, 2025
Excerpt: I’ve been talking about how we’re all quite scared of bias, but actually bias is quite handy. It’s a preference for precision—you can ignore a noisy world because you have some expectations about how things are going to play out. But you don’t always know when to be biased, or when to open yourself up to the noisy world. So, sometimes you’re biased when you shouldn’t be, and sometimes you’re paralysed by indecision when you should have just gone with your gut. This article explores the lever that sits under that process—uncertainty.
Our brains track two kinds of uncertainty. Expected uncertainty makes us trust our model of the world more and exploit familiar patterns (be biased). Unexpected uncertainty makes us explore and update our model (prefer noise). Correctly diagnosing the uncertainty is the key.
Article Status: Complete (for now).
In the first part of this series, I tried to convince you that bias was a good thing, then I promised that I’d show you how that plays out in human behaviour. So let me recap the main idea to save you reading that first article again, and then I’ll get into showing you how. In this article it’ll be by telling you about the lever that seems to drive your inclination to be biased or not: uncertainty.
Find the rest of the series collected here.
If you already read the first part, then you can skip to the showing.
Recap
Behavioural economists have the world terrified of biases. Groupthink, confirmation bias, negativity bias, optimism bias, etc, etc. You’ve probably read about at least one of them. You see, they:
work off the assumption that humans are ‘rational actors’. They call it the ‘rational-actor model’. The idea, more-or-less, is that if you give a human a decision to make, they will decide by optimising for their preferences, weighing up the costs and benefits. You do stuff that does the most good for you, and the least bad. On this model, errors should only happen when you don’t have the right information to make the ‘rational’ decision. … ‘biases’ … are times when we deviate from this model. When we make choices that are not optimised to our preferences, maximising benefit and minimising cost, despite having accurate information.
But statisticians don’t see bias this way, and the brain is much closer to a statistician than an economist. For them, bias is one half of a trade-off.
Bias is the opposite of noise, or variance. A biased measure is systematically skewed, but more precise; a noisy measure is more variable. Both are orthogonal to accuracy—biased and noisy measurements alike can be accurate or inaccurate.
The idea is that, sometimes, you want to use your expectations and assumptions to ignore the noise, and see the picture more clearly. The trade-off is that, sometimes, the noise is useful or your expectations are off.
This is, more-or-less, what the brain does:
the brain, and nervous system more broadly, has to map all the noise out there in the world to produce the right response. Not only that, but it has to coordinate all the noise inside your body to do it. Nerves innervating, muscles activating, hormones sloshing around in glands.
And most of the time, this is predictable … the brain maps the predictable structure of the world and your actions within it. By paying attention to the predictable stuff, and biasing your actions as a response, it can ignore all the irrelevant noise that might lead you to make an error. This frees it up to do more complicated processing when it doesn’t know what to expect—when it needs to pay more attention to the noise.
So, that’s the first article. Let’s look at an example.
Uncertainty promotes bias
As I’ve been writing this series, a core feature of this bias-vs-noise trade-off has been jumping off the page. There’s something very important about uncertainty.
In the article on stress, I pointed out that when things are uncertain we want low-stress environments, so that we’re free to engage in exploration behaviour (i.e. embrace noise). But what often happens is that uncertainty creates high-stress environments, which make us think and behave more stereotypically (i.e. prefer bias/precision).
In the article on social identity, I wrote that we engage in group-related biases, rather than embrace our individual noisiness, when we’re uncertain about what the right way to behave is.
And in the one on cognitive dissonance, I pointed out that one of the times our mind will work hardest to resolve cognitive conflict (i.e. bias us towards pretending we’re coherent) is when we experience uncertainty about “who we are”.
And all of these observations seem at odds with the intuition that, when things are uncertain, our biases are more likely to lead us astray. If biases are about using our expectations to help us be more precise, then when things are uncertain, we’d want to open ourselves up to the noise, so we don’t miss something we’re not expecting.
And then I realised that I’d just jimmied myself with the language problem, because there’s more than one kind of uncertainty.
Being ‘sure’: Uncertainty vs Risk
There is a question around certainty that the nervous system is asking itself. Something like, “how sure am I that my current model of the world is good enough?”
If the answer is “pretty sure”, then we’re going to exploit that model—we’re going to lean into the biases that it encourages and ignore the noise.
If we’re not sure, then we’re going to loosen our grip on that model and start sampling for new information—we’re going to explore the noise.
The nervous system is going to work that out by estimating two different kinds of uncertainty, and tuning neuromodulatory systems that push our thinking either toward bias or toward noise:
- There’s a kind of expected uncertainty that fills our world: when there’s lots of noise, but we’re still pretty sure we know what’s going on. You could be driving in fog, but you aren’t going to change how you drive very much, especially if you’re on a familiar route. If anything, you’re going to lean into your biases around driving. You don’t want to start trying out new road tricks now. Someone might hit you.
- Then there’s unexpected uncertainty, where it seems like our old model of the world isn’t working very well. If you’re driving, and you hit a patch of black ice you’ve never hit before, or black ice in a spot you wouldn’t expect it, then you’re going to change how you drive dramatically. You’re not going to keep driving like normal, because that led to you sliding all over the road. You’re going to slow to a crawl, hotwire every sense, and take in every piece of perceptual information that you can, until you work out what’s going on.
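To make the distinction concrete, here’s a minimal toy learner in Python. It’s my own sketch, not anyone’s published model, and the names and thresholds are invented. The learner keeps a running sense of how big its prediction errors usually are (expected uncertainty), and treats an error far outside that range (unexpected uncertainty) as a signal to spike its learning rate and explore:

```python
import random

def toy_learner(observations, base_lr=0.1, surprise_threshold=4.0):
    """Track an estimate of the world plus the typical size of prediction
    errors (expected uncertainty). An error far outside that typical range
    signals unexpected uncertainty: spike the learning rate and re-learn."""
    estimate, typical_error = 0.0, 1.0
    for obs in observations:
        error = obs - estimate
        surprise = abs(error) / typical_error    # error relative to expectations
        if surprise > surprise_threshold:
            lr = 0.9                             # black ice: break the model, explore
        else:
            lr = base_lr                         # fog: stay biased, exploit
        estimate += lr * error
        typical_error += 0.1 * (abs(error) - typical_error)  # learn the usual noise
        yield estimate, lr

random.seed(1)
fog = [random.gauss(0, 1) for _ in range(40)]    # noisy, but stable and familiar
ice = [random.gauss(10, 1) for _ in range(10)]   # a sudden regime change
rates = [lr for _, lr in toy_learner(fog + ice)]
# the learning rate should sit at 0.1 through the fog and spike where the ice begins
```

Driving in fog is the first regime: lots of noise, but the errors are the size the learner expects, so it keeps trusting its model. The black ice is the second: an error so far outside expectations that the only sensible move is to throw the model open.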
These two go by different names depending on the discipline: irreducible and volatile uncertainty; stationary variance and volatility; risk and ambiguity… The list goes on.
But more interesting than cataloguing the names is looking at psychological models that reflect them.
One of my favourite psychological models is the interruption theory of emotion: we feel emotions when our automatic patterns of behaviour have been interrupted, and the emotion motivates us to do something different. Or, put another way, we are jolted into a state of openness to noise when our biased patterns of behaviour aren’t working anymore.
Or, for something more mathsey, you could look at predictive coding/Bayesian brain accounts[^1], where the brain is posed as a layered statistical machine. It wants to eliminate the gap between what it expects and what it actually sees. So, at each level of processing, it sends a top-down prediction (prior) about what’s happening at the level below. But equally, signals are being sent from below, bottom-up, about how wrong the predictions were. In a world we know well, the sensory information is judged unreliable. The brain cranks up the priors and lets them steer behaviour. But when there are lots of error signals coming back up, the priors lose credibility. The brain down-weights them, and the prediction errors seize control, making us re-sample the information and update our priors (openness to noise).[^2]
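To see the precision-weighting in the simplest possible case, here’s a single-level Gaussian version. Real predictive coding models stack many such levels, and the numbers here are arbitrary:

```python
def fuse(prior_mean, prior_precision, observation, sensory_precision):
    """Precision-weighted average of a top-down prior and a bottom-up
    observation (precision = 1 / variance)."""
    total_precision = prior_precision + sensory_precision
    return (prior_precision * prior_mean
            + sensory_precision * observation) / total_precision

# A world we know well: the prior is trusted, a sensory sample barely moves us.
print(fuse(prior_mean=0.0, prior_precision=10.0,
           observation=5.0, sensory_precision=1.0))   # ~0.45: stay biased

# Priors discredited by accumulated error: prediction errors seize control.
print(fuse(prior_mean=0.0, prior_precision=1.0,
           observation=5.0, sensory_precision=10.0))  # ~4.55: re-sample and update
```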
And physiologists have started looking at the locus coeruleus for this in animal models of behaviour. You have this very nice pattern where the locus coeruleus fires, a wave of noradrenaline washes through the cortex, the pupils dilate, and frontal-midline theta brainwaves flicker exactly when an animal flips from exploitation behaviour to exploration behaviour.
A neurotransmitter scam
In fact, if you’ll allow me to commit the thing I regularly call a con, there’s some nice neuromodulatory correlates.
There’s that noradrenergic ‘alarm/interruption’ signal I just wrote about. A phasic burst seems to be about unexpected uncertainty, signalling the animal to break the current model and explore. A more sustained (i.e. tonic) rise is more what I’ve been referring to when talking about the stress curve—a stable rise in noradrenaline is associated with more stereotypical thinking and behaving. This almost certainly goes hand-in-hand with cortisol and the HPA axis more broadly: sustained levels of cortisol are associated with narrowing attention and reduced working memory—we’re forced to pivot to our less noisy routines.
Acetylcholine, on the other hand, is thought to play some kind of role in signalling expected uncertainty, and is a key candidate in predictive coding models of the mind. More ACh seems to be associated with times we weigh sensory evidence less and place more emphasis on our priors/expectations (i.e. bias).
And we can’t forget the addict’s neurotransmitter, dopamine. High dopamine is super-characteristic of times when we’re anticipating a known outcome and behaving correspondingly (i.e. bias). Low dopamine might similarly act as a signal to open ourselves to noise.
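If you want to tie this chemistry back to the toy learner from earlier, you could caricature each signal as a knob on that model. This is strictly an analogy of my own, not a biophysical claim:

```python
# Purely illustrative: the neuromodulatory story re-labelled as knobs on
# the toy learner above. An analogy, not biology.
def modulated_learning_rate(surprise, tonic_stress, threshold=4.0):
    # Phasic noradrenaline analogue: a surprise burst breaks the model
    # and spikes the learning rate (unexpected uncertainty -> explore).
    if surprise > threshold:
        return 0.9
    # Tonic noradrenaline/cortisol analogue: sustained stress clamps the
    # rate down, narrowing us onto stereotyped routines (exploit).
    return 0.1 / (1.0 + tonic_stress)

# Acetylcholine analogue: the typical_error term, an estimate of how much
# known noise to expect (expected uncertainty -> lean on the model).
# Dopamine analogue: anticipating a known outcome, happily exploiting the
# current estimate until the error signals say otherwise.
```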
Little chemical traces of our different kinds of uncertainty dialling up our preference for precision or noise.
Outro
Of all the articles, this one highlights that it’s very hard to optimise for bias or noise directly. Stress might influence you one way or the other, and maybe you have some control over that. But your influence over how attracted you are to groups, and how you behave around them, is much more about the group than it is about you. Similarly, cognitive dissonance isn’t something you control—you experience it, and if you knew when it was coming then you wouldn’t let it happen. It wouldn’t be a thing.
But you can diagnose what kind of uncertainty you’re experiencing. And lots of things try to train exactly this—labelling the volatility. Think of mindfulness and all the variations on cognitive behavioural therapy. Or, for something less psych-ey, think of Agile retros and the military OODA loop. These all train you to notice when you’re inclined to explore (be noisy) or exploit (be biased), and when you actually should be.
Uncertainty produces bias and it produces noise. Misdiagnosing the uncertainty means you’re going to pick the wrong strategy. If you want more control over your biases, you want control over that diagnosis. Lucky for you, I have an article for that.