Mechanical Ethics
October 1, 2025
Excerpt: Dennis Vincent’s S-CALM model elegantly identifies the factors that lead good people to do bad things. But identifying what goes wrong isn’t quite the same as understanding how to fix it. Here, I show how mechanistic thinking—illustrated by the ETHIC stack—can help us understand the causal plumbing beneath Vincent’s model, turning it from a diagnostic tool into an intervention toolkit.
Vincent’s S-CALM model describes the situational and cognitive factors that undermine ethical behaviour. Mechanistic thinking helps explain how those factors might operate, and thus, where we might intervene on them.
Article Status: Complete (for now).
I teach ethical leadership at Sandhurst. We use Dennis Vincent’s S-CALM model, which is probably one of the better ethical decision-making frameworks I’ve come across. Vincent does something most models don’t: he explicitly engages with the behavioural science literature that tries to explain poor decision-making, and uses it to think about ethics.
The S-CALM acronym stands for:
- Situational influencers: situational factors that lead to unethical acts, specifically hostile environments, normalised violence, weak leadership, lack of resources, enhanced emotional states;
- Common behaviours: the cognitive biases that lead to unethical acts (conformity, deindividuation, obedience, cognitive dissonance, bystander effect, groupthink, dehumanisation, moral disengagement, social identity, and a few others);
- Accountability: which speaks to understanding different forms of responsibility;
- Leadership: which speaks to the role of leaders in preventing ethical drift; and
- Moral compass: which speaks to the values and standards that should guide behaviour.
Altogether, fairly comprehensive. Vincent draws from the classic social psychology experiments—Milgram’s shock studies, Zimbardo’s prison experiment, Asch’s conformity tests—and more importantly, from actual military disasters like Deepcut, Abu Ghraib, and Sergeant Blackman’s killing of a wounded Taliban fighter in Afghanistan. He doesn’t just give you abstract principles; he gives you case studies with names and faces and real consequences.
But here’s the thing. The S-CALM model is fundamentally diagnostic. It tells you what goes wrong—which situational factors corrupt judgement, which cognitive biases lead people astray. It’s excellent at helping you recognise when you’re in dangerous territory. What it doesn’t quite do is tell you how to fix it. It tells you what the problems are, but not the mechanisms by which those problems arise, and thus, not quite how to intervene.
For that, you need to think mechanistically. Luckily, I have a model that does just that: the ETHIC stack is one illustration of what that might look like.
The mechanistic turn in ethics
I’ve written before about the three problems of ethical education. The first is that the catastrophic examples we use—Abu Ghraib, the Stanford Prison Experiment—are terrible illustrations because they’re both rare and extraordinarily complex. The second is that “thinking it through” is almost never the answer, because our faculties of reason are particularly vulnerable to rationalising our existing intuitions rather than challenging them. The third is that most models tell us what to do, not how to do it.
Vincent’s model addresses the first two problems quite well. He doesn’t pretend ethical disasters are simple, and he explicitly recognises that deliberative processes are often slaves to intuition rather than masters of it. But the third problem—the how problem—remains.
The difference between telling people what they should do and how they can do it is the difference between prescription and mechanism. A mechanism requires three things:
- Entities: the (relatively) stable ingredients or parts of the system
- Activities: the things those parts do
- Organisation: the way those things are linked together
Think of a radio. The antenna (entity) captures radio waves (activity). The tuner (entity) filters the wave to a specific frequency (activity). The heterodyning circuit (entity) drops that frequency into the audible range (activity). The amplifier (entity) boosts the signal (activity). The speaker (entity) converts it to sound (activity). The organisation is the way these steps chain together: radio waves become intelligible sound.
Why this incessant detail? Two reasons. First, without it, you’re not much closer to turning radio waves into something you can listen to. You know what you should have, audible sound, but not how to get it. Second, by understanding the relation between activities and entities, you can see where to “wiggle” something to change the outcome. Swap a short antenna for a longer one, and you’ll capture more waves.
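To make that concrete, here’s a minimal sketch in code (all names and numbers are mine, invented purely for illustration): each entity is a function, its activity is what the function does, and the organisation is the order they’re chained in. Wiggling one entity, the antenna length, changes the outcome.

```python
# Toy sketch of entities/activities/organisation; not a model of a real radio.
# Each "entity" is a function, its "activity" is what it does, and the
# "organisation" is the order they're chained in.

def antenna(environment, length=1.0):
    """Capture radio waves; a longer antenna captures more signal."""
    return environment["radio_waves"] * length

def tuner(signal, station=98.5):
    """Filter the captured signal down to one frequency."""
    return {"frequency": station, "strength": signal}

def amplifier(wave, gain=10):
    """Boost the tuned signal."""
    return {**wave, "strength": wave["strength"] * gain}

def speaker(wave):
    """Convert the signal to audible sound."""
    return f"sound at {wave['frequency']} MHz, volume {wave['strength']:.1f}"

def radio(environment, antenna_length=1.0):
    # The organisation: waves -> tuned signal -> amplified signal -> sound.
    return speaker(amplifier(tuner(antenna(environment, antenna_length))))

env = {"radio_waves": 0.3}
print(radio(env))                      # the baseline outcome
print(radio(env, antenna_length=2.0))  # "wiggle" one entity; the outcome changes
```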
The ETHIC stack is my attempt to sketch one possible account of the entities, activities, and organisations that make up ethical behaviour. It’s a five-level mechanism, or rather, a sketch of what a five-level mechanism might look like:
- E: Early emotional circuitry—the ‘pattern detector’
- T: Thought-level schemas—the ‘lazy controller’
- H: Habitat (the situation)—an ‘affordance auction’1
- I: In-group dynamics—the ‘prestige engine’
- C: Cultural scaffolding—the ‘top-down bias’
Now, these levels aren’t the only levels. In the publication I wrote this up for, I make clear that you could easily make a case for collapsing or extending them. The point wasn’t to be exhaustive, but illustrative.
I illustrate a single mechanism at each level: a lever, if you like, for changing behaviour, drawn from theories in cognitive neuroscience and social psychology that seem plausible. Should you dislike my layers, or my candidate theories, then by all means choose your own. The point isn’t that these are the mechanisms, but that thinking in terms of mechanisms (entities, activities, organisations) gives you something prescriptive models don’t: potential intervention points.
With that in mind, Vincent’s S-CALM model and mechanistic thinking (illustrated here by ETHIC) aren’t competing approaches. They’re complementary. Vincent identifies what goes wrong; mechanistic thinking helps explain how it might go wrong, and thus, where you might intervene. Let me show you what I mean.
Situational influencers through a mechanistic lens
Vincent identifies five key situational influencers that increase pressure on soldiers to act unethically:
- Hostile environment: constant threat perception
- Normalised violence: lowered threshold for inflicting harm
- Weak leadership/lack of supervision: absence of moral reference points
- Lack of resources: time, personnel, equipment, sleep
- Enhanced emotional state: anger, rage, frustration, disgust
These are excellent. They’re concrete, recognisable, and drawn from real disasters. But why do they work? What might the causal plumbing look like? Let me illustrate using the ETHIC stack, though other mechanistic accounts would work just as well.
Hostile environments and normalised violence: a situational mechanism
One way to think about how situations influence behaviour is through something like the H-level in the ETHIC stack—what I’ve called an “affordance auction”. The basic idea, drawn from Gibson’s theory of affordances and the Affordance Competition Hypothesis in cognitive neuroscience, is that environments present opportunities for action, and your brain weighs them based on their salience (how obvious they are) and utility (how useful they are, minus their costs).
A hostile environment cranks up the salience of threat-related affordances. When you’re constantly under threat, defensive and aggressive responses become the most obvious actions available. Every person becomes a potential enemy; every sound becomes a potential attack. The cost of inaction—getting killed—makes aggressive responses seem high-utility.
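Here’s a minimal sketch of that auction, with invented numbers: each candidate action bids its salience plus its utility minus its cost, and the highest bid wins. Raising the salience of threat-related actions, and the cost of restraint, is enough to flip the winner.

```python
# Toy "affordance auction" with made-up numbers; an illustration of the idea,
# not a model of the Affordance Competition Hypothesis itself.

def auction(affordances):
    def bid(a):
        return a["salience"] + (a["utility"] - a["cost"])
    return max(affordances, key=bid)

peacetime = [
    {"action": "challenge verbally", "salience": 0.7, "utility": 0.6, "cost": 0.1},
    {"action": "use force",          "salience": 0.2, "utility": 0.5, "cost": 0.9},
]

# A hostile environment makes threat-related affordances more salient and
# restraint more costly, so the same brain runs a different auction.
hostile = [
    {"action": "challenge verbally", "salience": 0.4, "utility": 0.6, "cost": 0.6},
    {"action": "use force",          "salience": 0.9, "utility": 0.8, "cost": 0.3},
]

print(auction(peacetime)["action"])  # challenge verbally
print(auction(hostile)["action"])    # use force
```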
Normalised violence does something even more insidious. It recalibrates what counts as “normal” behaviour, which changes the range of affordances you even consider. If violence is everywhere, violent responses don’t trigger the kind of surprise that would make you stop and think. The “pattern” of acceptable behaviour shifts to include things that would have shocked you six months ago. As Vincent notes, quoting the Centre for Military Ethics, “the range of acceptable behaviour becomes so wide and there is no clear moral reference point for right and wrong.”
Basically, violence doesn’t just become acceptable; it becomes the right thing to do. Your intuitions tell you to be violent by default. And here’s the kicker: once you’re in a hostile environment with normalised violence, it’s often too late. The situation is doing its work on you whether you like it or not. The intervention has to come earlier: in training, in setting norms before deployment, in creating regular check-ins that help you recognise when your sense of “normal” has drifted.
Lack of resources: a cognitive control mechanism
Here’s another mechanistic account, this time drawing on cognitive neuroscience models of executive control—what I’ve called the T-level “lazy controller” in ETHIC. The basic idea is that deliberative thought is expensive. Your brain has something like a conflict monitor that detects when intuitions clash, and something like an executive network that resolves those conflicts. But that resolution process only happens when (a) there’s a conflict big enough to notice, and (b) you have the resources to resolve it.
Lack of sleep, time pressure, inadequate personnel—these all deplete the resources that deliberative processes need to function. When you’re exhausted, you default to intuition and habit. Affective intuitions pass straight through into behaviour because you don’t have the cognitive resources to question them.
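A minimal sketch of that idea, again with invented numbers and thresholds: deliberation only overrides intuition when the clash is big enough to notice and there are enough resources left to resolve it.

```python
# Toy "lazy controller": deliberation engages only when conflict is detected
# AND cognitive resources can cover its cost. Thresholds are invented.

def act(intuition, reflective_judgement, conflict, resources,
        conflict_threshold=0.3, resource_cost=0.5):
    """Return the behaviour the system produces under these conditions."""
    conflict_detected = conflict > conflict_threshold
    can_afford_deliberation = resources > resource_cost
    if conflict_detected and can_afford_deliberation:
        return reflective_judgement        # expensive, controlled route
    return intuition                       # cheap, default route

# Rested: the clash gets noticed and resolved deliberately.
print(act("retaliate", "detain and report", conflict=0.8, resources=0.9))

# Exhausted and under time pressure: same clash, no resources to resolve it,
# so the intuition passes straight through into behaviour.
print(act("retaliate", "detain and report", conflict=0.8, resources=0.2))
```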
Vincent notes that “the human body requires six and a half hours sleep a night to regenerate and that prolonged periods of less than this reduces cognitive ability. Leaders who lack sleep make poor cognitive decisions.” Now, I’m not sure about that specific claim. Matt Walker’s Why We Sleep might have been a good book, but it appears to have very little factual basis. But the general point remains. This isn’t just about being tired. On something like the mechanism I’ve sketched, deliberative moral reasoning is one of the first things to go when you’re resource-depleted, because it’s so expensive to run.
The intervention here is obvious but often ignored: you must resource your people adequately. This isn’t soft. It’s operational necessity. An under-resourced soldier isn’t just less effective; they’re more likely to commit atrocities.
Enhanced emotional states: an affective mechanism
Let me sketch one more mechanism, this time at the level of emotional processing: what I’ve called the E-level “pattern detector” in ETHIC, drawing on Lisa Feldman Barrett’s theory of constructed emotion (which builds on active inference) and George Mandler’s Interruption Theory. The idea is that your nervous system constantly tags stimuli with valence (goodness or badness) and injects arousal to urge you to approach or avoid. When you’re in an enhanced emotional state (anger, rage, frustration, disgust), that tagging system is already pumping out high-arousal signals.
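Here’s a minimal sketch of the tagging idea, with made-up numbers (it isn’t Barrett’s or Mandler’s actual model): a pre-existing state of anger shifts both the valence and the arousal assigned to an otherwise ambiguous stimulus, and with it the pattern that gets recognised.

```python
# Toy affective "pattern detector": a stimulus gets tagged with valence and
# arousal, and a baseline emotional state biases both. Numbers are invented.

def tag(stimulus_valence, stimulus_arousal, baseline_anger=0.0):
    valence = stimulus_valence - baseline_anger             # anger pushes tags negative
    arousal = min(1.0, stimulus_arousal + baseline_anger)   # and cranks up urgency
    if valence < -0.5 and arousal > 0.7:
        return "justified target for aggression"
    if valence < 0:
        return "threat: stay alert"
    return "neutral"

ambiguous_person = {"stimulus_valence": -0.2, "stimulus_arousal": 0.5}

print(tag(**ambiguous_person))                      # calm: "threat: stay alert"
print(tag(**ambiguous_person, baseline_anger=0.6))  # enraged: "justified target..."
```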
Vincent quotes research showing that “emotional reaction at the moment [of stress] is more influential in determining choice than the rational evaluation of options that may have been conducted beforehand.” On a multi-level mechanistic account like ETHIC, when your affective circuitry is pumping out high-arousal intuitions, your deliberative processes are swamped. The signal is too strong; the intuition wins.
Moreover, enhanced emotional states recalibrate pattern detection. When you’re angry, patterns that would normally be tagged as “threatening” or “unfair” get tagged as “justified target for aggression”. The system isn’t malfunctioning; it’s doing its job, but the emotional context has changed what patterns it recognises.
The intervention here is emotional regulation training before deployment, and structures during deployment that help soldiers recognise and manage enhanced emotional states. This isn’t about suppressing emotion—that doesn’t work. It’s about training pattern detection to recognise “I’m in a state where my judgement is compromised” as itself a pattern worth flagging.
Weak leadership: social and cultural mechanisms
Weak leadership and lack of supervision seem to operate through social and cultural mechanisms—what I’ve called the I-level (in-group dynamics) and C-level (cultural scaffolding) in ETHIC.
At the social level, something like a “prestige engine” seems to operate—drawing on Social Identity theory and social capital theory. Groups generate social norms that drive behaviour through several components:
- Social norms: the behaviours groups promote
- Group representatives: the people who embody those norms
- Social capital: the status/belonging/resources you gain from conforming
Leaders are the primary group representatives. They set the norms. If a leader is weak, or absent, or—worse—modelling unethical behaviour, the norms shift. Suddenly, the behaviours that earn you social capital in the group are the wrong ones.
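Here’s a minimal sketch of that prestige engine, with invented names and numbers: the social capital a behaviour earns depends on the status of the representatives who endorse it, so an absent or weak leader changes which behaviour pays.

```python
# Toy "prestige engine": social capital for a behaviour is the status of the
# group representatives who endorse it. Names and numbers are invented.

def social_capital(behaviour, representatives):
    """Sum of each representative's status, weighted by their endorsement."""
    return sum(rep["status"] * rep["endorses"].get(behaviour, 0.0)
               for rep in representatives)

def what_pays(behaviours, representatives):
    return max(behaviours, key=lambda b: social_capital(b, representatives))

behaviours = ["report the abuse", "join in the 'banter'"]

strong_leader = [
    {"status": 0.9, "endorses": {"report the abuse": 1.0}},      # visible leader
    {"status": 0.4, "endorses": {"join in the 'banter'": 1.0}},  # a loud peer
]
absent_leader = [
    {"status": 0.0, "endorses": {"report the abuse": 1.0}},      # leader invisible
    {"status": 0.4, "endorses": {"join in the 'banter'": 1.0}},
]

print(what_pays(behaviours, strong_leader))  # report the abuse
print(what_pays(behaviours, absent_leader))  # join in the 'banter'
```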
Vincent’s personal insight about the “Zoo Club” is perfect here. As a new officer, he was pressured to get a tattoo to fit in. That was the norm that would earn him social capital in his peer group. By refusing, he didn’t just make a personal decision; he changed the norm. On something like the mechanism I’ve sketched, once a high-status person (an officer) refused to conform, the norm lost its power.
At the cultural level, weak leadership makes values less visible. The British Army has its Values and Standards—Courage, Discipline, Respect for Others, Integrity, Loyalty, Selfless Commitment. On a mechanistic account, these cultural values act like a “top-down bias” that should amplify certain patterns, certain conflict resolutions, certain affordances, and certain norms at other levels. But if leaders don’t embody these values, they become invisible. They’re just words on a laminated card clipped to your flak jacket.
The intervention is straightforward but demanding: leaders must be visible exemplars of the values. This isn’t about charisma or dominance. It’s about consistency. As Lieutenant General David Morrison said, “the standard that you walk past is the standard that you are prepared to accept.”
Common behaviours through a mechanistic lens
Vincent identifies ten common cognitive behaviours that lead to unethical actions in stressful situations. He splits them into five individual behaviours (Social Comparison/Conformity, Deindividuation, Obedience, Cognitive Dissonance, Bystander Effect) and five group behaviours (Groupthink, Moral Disengagement, Dehumanisation, Social Identity, and others).
If I’m being honest with you, I’m not at all excited about cognitive biases. I’ve written often about how useless they are in understanding behaviour, and what we should do instead, or what other people think we should do instead.
My main complaint is that, with some arbitrary list of biases drawn from the 200+ that exist in the literature, you’re never going to be able to prevent poor decision-making. You’re never going to be able to pan through all the possible biases and choose which ones might apply.
But the ETHIC stack might be able to help us with this problem. If we work out how these biases become problematic, we could probably get in front of similar problems in future. So let’s have a go.
Conformity and obedience: social mechanisms
Conformity and obedience seem to be primarily social phenomena—operating through something like the “prestige engine” I sketched. You conform to gain social capital; you obey authority figures who represent legitimate group norms.
Vincent describes Asch’s conformity experiments. In these experiments, participants conformed to the group ~35% of the time, even when the group was obviously wrong. On this kind of account, social capital was at stake. Going against the group would cost status of some kind, so people conformed.
Milgram’s obedience studies work similarly, though Milgram thought authority was the key: the experimenter’s white coat and Yale affiliation made him a high-status representative of a legitimate group (“scientists who know what they’re doing”). Participants conformed to his norms, even when those norms involved delivering what appeared to be lethal shocks.
On this kind of mechanistic account, the intervention points become clear. You can:
- Change the representatives: Make sure high-status people in your group model ethical behaviour;
- Introduce comparison groups: Highlight groups whose norms you do want to emulate;
- Redefine social capital: Make ethical behaviour the thing that earns status, not unethical behaviour.
Vincent’s August Landmesser example (the man refusing to give the Nazi salute in Hamburg, 1936) shows someone explicitly rejecting the group norm despite massive pressure. This only works if you’ve got strong alternative groups to identify with, or a rock-solid personal identity that doesn’t depend on the local group’s social capital.
Deindividuation: social identity overwhelming personal identity
Deindividuation seems to be what happens when social mechanisms run so strong that your personal identity disappears entirely. You become the group. Your behaviour is the group’s behaviour. Responsibility diffuses.
Zimbardo’s guards in the Stanford Prison Experiment wore uniforms and reflective sunglasses—masks that let them adopt a “Mr Correction Officer” persona. They weren’t doing the abuse; the persona was. The group representatives (the active abusers) set the norms, and everyone else lost themselves in the group.
The intervention is to maintain personal identity even while fostering group cohesion. Vincent’s insight about being “in the platoon but not of the platoon” as a platoon commander is crucial here. You need enough distance to recognise when group norms are drifting into dangerous territory. You need enough personal identity to pull yourself out of the deindividuation spiral.
Cognitive dissonance: motivated reasoning
Cognitive dissonance is what happens when deliberative processes fail to challenge intuitions or behaviours. You hold two conflicting beliefs, or your beliefs and behaviour don’t match, and your mind has to resolve the conflict. The problem is that resolution often takes the path of least resistance, which means rationalising the behaviour rather than changing it.
Vincent’s personal insight from the UN mission in the DRC is perfect. The brigade commander knew he should protect Congolese civilians—that was the mandate, and he even reported doing it. But he didn’t want to risk his soldiers, so he didn’t deploy them. To resolve the dissonance, he rationalised: “I will not lose one of my soldiers for the Congo.” The belief shifted to match the behaviour.
The intervention is to make moral engagement easier than moral disengagement. This means:
- Training scenarios: Practice recognising cognitive dissonance and resolving it ethically;
- Accountability structures: Make it harder to rationalise unethical behaviour by ensuring someone will call you on it;
- Direct language: Train people to spot and eliminate euphemisms and rationalisations (Bandura’s moral disengagement mechanisms).
Bystander effect: diffusion of responsibility
The bystander effect is what happens when social norms don’t clearly assign responsibility. The more people present, the less any individual feels responsible, because surely someone else will act.
On a mechanistic account, this is a social-level failure. The group isn’t providing clear norms about who should act. Responsibility diffuses across the group, and so no one acts.
The intervention is bystander intervention training: explicitly assign responsibility, and train high-status individuals to break the bystander effect by acting. Vincent notes that in the Stanford Prison Experiment, it was Christina Maslach—Zimbardo’s girlfriend, an outsider—who convinced him to stop. She had the distance to see what the group couldn’t.
In a military context, this is the platoon commander’s job. Vincent’s story about retaining social distance from his platoon—they didn’t even know his first name—shows why this matters. He was the designated intervener, the person not swept up in the group norms, who could act when the group failed.
Groupthink, moral disengagement, dehumanisation: multi-level disasters
These three behaviours don’t sit at just one mechanistic level; they seem to cascade through multiple systems.
Groupthink appears to be primarily social (group norms suppress dissent), but it’s reinforced by cognitive processes (deliberation doesn’t engage because there’s no visible conflict) and cultural factors (values that should provide alternative perspectives become invisible).
Moral disengagement seems to be primarily cognitive (deliberation rationalises unethical behaviour using euphemisms, advantageous comparison, displacement of responsibility), but it’s enabled by social factors (group norms that make disengagement easier) and cultural factors (values that aren’t visible enough to challenge the disengagement).
Dehumanisation is perhaps the most insidious because it seems to operate at every level of a multi-level mechanism:
- Affective: Pattern-detection stops recognising the victim as “person with moral value” and starts recognising them as “object” or “threat”
- Cognitive: Deliberation doesn’t detect conflict because “I’m harming a person” no longer applies
- Situational: The environment affords violence against the victim more readily because the costs (moral harm) seem lower
- Social: Group norms reinforce dehumanising language and behaviour
- Cultural: Values may even explicitly support dehumanisation (e.g., enemy combatants as “targets” not “people”)
Vincent notes that in Milgram’s experiments, when the teacher had to physically put the learner’s hand on the shock pad, compliance dropped from 65% to 10%. When the teacher had to order someone else to do it (adding distance), compliance rose to 90%. Dehumanisation increases with physical and psychological distance.
On a multi-level mechanistic account, the intervention would need to target every level:
- Affective: Train pattern recognition to maintain awareness of others’ humanity (moral case deliberation with realistic scenarios)
- Cognitive: Train moral engagement skills that force confrontation with the human costs of behaviour
- Situational: Reduce distance—physical and psychological—between actors and victims where possible
- Social: Establish group norms that explicitly value the humanity of outsiders
- Cultural: Make values visible that emphasise human dignity (like the Law of Armed Conflict)
Vincent’s examples, mechanistic explanations
The power of Vincent’s S-CALM model is its examples. Let me show how two of them—Deepcut and Sergeant Blackman—look through a mechanistic lens.
Deepcut: a multi-level systems failure
Between 1995 and 2002, four trainees at the Deepcut barracks died in suspicious circumstances. The Blake Review identified four situational influencers:
- Poor leadership: Weak leaders posted to the new RLC regiment
- Lack of supervision: Corporals responsible for 200 trainees; unsupervised access to weapons
- Low morale and boredom: Trainees waiting up to 18 months for training to continue
- Bullying and ‘tribal’ mentality: Unsupervised peer abuse at night
On a multi-level mechanistic account:
- Cultural failure: The system (formation of the RLC, contracting out Phase 2 training) created conditions where cultural values couldn’t cascade down
- Social disaster: With 200 trainees per corporal, no group representative could set healthy norms. The “tribal mentality” was social dynamics running wild, with status earned through bullying
- Situational catastrophe: Unsupervised access to weapons and ammunition made self-harm and violence highly salient opportunities
- Cognitive collapse: Low morale and boredom depleted resources, meaning deliberative processes weren’t engaging
- Affective damage: 18 months of boredom and bullying recalibrated intuitive responses to accept abuse as normal
The intervention needed to happen at every level: better systems, better supervision, better resources, better training to maintain moral intuitions despite adversity.
Sergeant Blackman: the situational factors stack up
In 2011, Royal Marine Sergeant Alexander Blackman shot a wounded Taliban fighter in Helmand Province. Vincent uses this as his extended case study, and the Telemeter Report identified multiple situational influencers at play:
- Hostile environment (constant threat)
- Normalised violence (intense combat deployment)
- Weak leadership (command distance, lack of clear guidance)
- Lack of resources (undermanned, exhausted)
- Enhanced emotional state (anger, frustration at IED casualties)
Through a mechanistic lens:
- Affective: Anger and frustration led to the wounded Taliban fighter being tagged as “justified target” rather than “protected person under LOAC”
- Cognitive: Exhaustion meant deliberative processes didn’t engage to resolve the obvious conflict between “enemy” and “hors de combat protected person”
- Situational: The situation afforded quick violence; the wounded fighter was right there, the weapon was in hand, the costs seemed low (no witnesses except squad members)
- Social: The squad’s norms were shaped by constant IED attacks and lack of command presence. Status came from solidarity against the enemy, not adherence to LOAC
- Cultural: Values (LOAC, the Army Values) were invisible in that moment, despite everyone having been briefed and carrying laminated cards
The tragedy is that Blackman knew he’d done wrong—he said “obviously this doesn’t go anywhere, fellas” after the shooting. His deliberative processes came online after the act, but by then it was too late.
On something like the mechanistic account I’ve sketched, the intervention would need to be prophylactic: training that makes LOAC intuitive, structures that keep deliberation engaged even under exhaustion (via better resources), immediate command presence to provide visible reminders of values.
Teaching S-CALM with mechanistic thinking
So how does this work in practice, when I’m standing in front of a lecture hall at Sandhurst?
Vincent’s S-CALM model is a thoughtful attempt to characterise ethical behaviour. His situational influencers are concrete and recognisable. His common behaviours are grounded in real psychology and real disasters. His emphasis on accountability, leadership, and the moral compass is exactly right.
But I have to go further, and spend time teaching mechanistic thinking, illustrated by something like the ETHIC stack, not as a replacement, but as an explanation. When Vincent says “hostile environment,” I sketch one possible mechanism for why it works: situations change what opportunities for action seem obvious and useful. When he says “cognitive dissonance,” I sketch why deliberation might take the path of least resistance. When he says “dehumanisation,” I show how it might cascade through multiple levels of processing.
Mechanistic thinking—illustrated here by ETHIC—turns an ethical decision-making tool like S-CALM from a diagnostic into an intervention toolkit. It answers the how question. It shows you where you might wiggle the system to change the outcome.
And crucially, mechanistic thinking fills in the gap that Vincent’s model has: examples of successful intervention. Vincent has brilliant examples of failure—Deepcut, Blackman, Abu Ghraib, the Stanford Prison Experiment. What he doesn’t have much of are examples of success.2 Once you understand possible mechanisms, you can start designing interventions:
- Want to improve affective pattern detection? Run moral case deliberation scenarios regularly
- Want to keep deliberative processes engaged? Resource your people adequately (sleep, time, personnel)
- Want to change situational affordances? Alter the environment before deployment, and install regular check-ins to catch drift
- Want to harness social dynamics? Put your best leaders in visible positions, make ethical behaviour the thing that earns status, provide comparison groups that embody your values
- Want to make cultural values visible? Have leaders embody them constantly, not just talk about them
It’s like teaching someone to diagnose a sick radio (that’s S-CALM) versus teaching them how radios might work so they can attempt repairs (that’s mechanistic thinking). You need both. The diagnosis tells you where the problem is; the mechanism sketch suggests where you might intervene.
Outro
And that, I think, is what’s been missing from military ethics education more broadly. We’ve got excellent diagnostic tools. We can recognise when things are going wrong. What we haven’t had is mechanistic thinking that suggests how to intervene.
Vincent’s S-CALM model doesn’t flinch from the uncomfortable truths of moral psychology. It doesn’t pretend that deliberation is king, or that knowing the rules is enough, or that good people don’t do bad things. It looks squarely at the fact that Milgram’s teachers kept shocking learners, Zimbardo’s guards turned sadistic, and Sergeant Blackman shot a wounded prisoner, and it asks: what were the conditions that made this possible?
Mechanistic thinking asks a different but complementary question: how might those conditions have operated, and how might we manipulate them?
Together, they give us something rare in military ethics: a framework that’s both realistic about human moral failure and cautiously optimistic about our ability to intervene. Because once you understand possible mechanisms—even provisional sketches of entities, their activities, and their organisation—you can start to identify potential intervention points.
You might train affective processes, resource deliberative processes, manipulate situational factors, influence social dynamics, and make cultural values visible. You can take Vincent’s examples of failure and use them to generate hypotheses about success.
That’s the promise of mechanistic thinking in ethics. Not that we have the definitive account of moral behaviour—we don’t. Not that we can eliminate moral failure—we can’t. But that we can sketch plausible mechanisms well enough to suggest interventions, test them, refine them, and gradually make ethical disasters rarer.
And that understanding, however provisional, is actionable.
Which is, after all, the whole point of teaching this stuff in the first place.