Practical Ethics

by Dorian Minors

July 4, 2025


Excerpt: Most discussions about ethics centre on catastrophic scenarios. Situations where it’d be very difficult to avoid unethical behaviour. These scenarios aren’t really very interesting to me. What the average person probably wants to know is how to avoid the tamer moral lapses we encounter every day. What the average person wants is to know how to avoid that single decision that might haunt them. So let’s explore a more practical ethics. This is the last in the series—the three hooks for a practical ethic.

To avoid rationalising poor ethical intuitions, we can use three tools to develop our ethical muscles: sensitising ourselves to the small number of basic ethical motivations and to the mechanisms which allow us to ignore them, before asking what a good person would do. It gets us most of the way there.


Article Status: Complete (for now).

If you already read the intro in another part, then you can skip to the content. See the whole series here.

I manage the ‘ethical leadership’ module at Sandhurst, among other things. The ethics of extreme violence is something that really bothers military leaders. Not just because they want to reassure themselves that they’re good people, even if they’re doing bad things. But also because people in the profession of arms are particularly prone to moral injury—the distress we experience from perpetrating, through action or inaction, acts which violate our moral beliefs and values. And also, frankly, because we keep doing atrocities, and this is very bad for PR.

When I arrived at Sandhurst, the lecture series on ethical leadership was very much focused on “how not to do atrocities”. You can see me grappling with the lecture content in this series of articles. I wasn’t very impressed. It’s classic “look how easy it is to do atrocities” fare, with the standard misunderstanding of a handful of experiments from the ’60s and ’70s.

But it’s not easy to do atrocities. As I summarise in that series:

You have to train your authority figures to be cruel. You have to stop people from being able to leave or cease participating. You have to screen out dissenters. You have to do it over long periods of time and with an intensive level of engagement. And you have to constantly intervene to stop people’s better judgement getting in the way of the terrible goings on.

It’s actually really hard to be a catastrophic leader.

And many atrocities are exactly this. It’s not normal people slipping into terrible acts. It’s circumstances in which it would be very hard to avoid terrible acts.

These fatalistic kinds of events aren’t really that interesting to me, nor are they interesting to the average soldier, who’d prefer not to know how bad things can get. What the average person, and certainly the average soldier, wants to know is how to avoid that single decision that might haunt them.

This is one of the difficult problems that a philosophy of ethics tries to deal with. But at Sandhurst, we don’t have time to do a philosophy of ethics course. It’s a 44-week course, mostly occupied by learning how to do warfighting. In that span, I get three lessons to help the officers do moral decision-making more effectively.

So we don’t want a philosophy of ethics. We want practical ethics that helps us very quickly get a sense of the moral terrain we’re facing. As such, in the process of making the module a bit more fit for purpose, I thought I may as well share it with you too.

What not to do

If you’re coming here for the first time, then you’ve missed two introductory articles. The first tries to demonstrate just how complicated ethics is, without getting too deep into the weeds—the moral terrain, if you will:

The upshot of the article is that there are many different ways to think about what ethics is, and they don’t always align:

  1. You have principles of right and wrong, like laws and obligations—this is called deontology and is one way of thinking about what’s right;
  2. You have the human desire to do the least harm and the most good—consequentialism and another way of thinking about things;
  3. You have the idea of a ‘good person’, and you might ask yourself what they would do—which is called virtue ethics;
  4. You have the fact that none of this matters when someone you care about needs help—an ethics of care; and
  5. You have your responsibilities to the community, which also aren’t always aligned to these other things.

An ethical decision, then, isn’t going to be an easy decision, because there’s often more than one ethical dimension at play. I walk through a torturous hypothetical in my last two articles demonstrating how each could contribute to a single ethical decision. But ethicist Rushworth Kidder makes the point in a pithier way. He talks about “right-vs-right” and “right-vs-wrong” choices. Moral dilemmas versus moral temptations.

Moral temptations are easier. You do something wrong, but you know what you should be doing. You might justify it to yourself, but you’re not confused about the ethics of the situation:

Only those living in a moral vacuum will be able to say, “On the one hand is the good, the right, the true, and noble. On the other hand is the awful, the wicked, the false, and the base. And here I stand, equally attracted to each.” If you’ve already defined one side as a flatout, unmitigated “wrong,” you don’t usually consider it seriously. Faced with the alternatives of arguing it out with your boss or gunning him down in the parking lot, you don’t see the latter as an option. To be sure, we may be tempted to do wrong—but only because the wrong appears, if only in some small way and perhaps momentarily, to be right. For most people, some sober reflection is all that’s required to recognize a wolflike moral temptation masquerading in the lamb’s clothing of a seeming ethical dilemma. The really tough choices, then, don’t center upon right versus wrong. They involve right versus right. They are genuine dilemmas precisely because each side is firmly rooted in one of our basic, core values.

Weighing up your principles against the desire to do least harm when they come into conflict is an ethical dilemma. Weighing up what a good person would do against what’s right for your sick child, when one isn’t the same as the other, is an ethical dilemma.

Weighing up competing ethical dimensions produces ethical dilemmas. Kidder proposes four common ones—truth versus loyalty, individual versus community, short- versus long-term, and justice versus mercy. If you can’t think of times when you’ve weighed these competing ethical lenses, then you haven’t been alive very long.

Whether it’s Kidder’s four, or some other model of competing ethical lures, this general problem leads us to our second article in our series—one that talks about how this problem leads to the development of moral blindspots. As Kidder points out:

For most people, some sober reflection is all that’s required to recognize a wolflike moral temptation masquerading in the lamb’s clothing of a seeming ethical dilemma.

And if moral temptations require some reflection, then ethical dilemmas must require lots of reflection, right?

That’s what most models of ethical decision-making seem to assume. Poor ethical decisions are made more quickly and intuitively. So, as the behavioural economists would have you believe, one should solve problems of intuitive thinking by using our more ‘rational’, deliberative thought processes.

But there is a very strong philosophical tradition, increasingly supported by the behavioural sciences, that suggests this is exactly the wrong way around. As I say elsewhere:

David Hume … [told us] “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them” … [and] you find people discovering this over and over again. Other examples I’ve come across include Nietzsche’s Genealogy of Morality (pdf), or Sperber and Mercier’s Enigma of Reason, or a very compelling recent article that reckons all ‘biases’ are just variations on confirmation bias.

We believe something, and we rationalise that thing to ourselves—not the other way around. And when it comes to moral judgement:

Popularised by Jonathan Haidt … an extremely rich body of literature demonstrates that, in the case of moral judgement, [intuitive thought] very frequently drives [rational or deliberative thought] rather than getting corrected by it.

If a family, who cherish their dog, eat it after it dies, you—assuming you’re socialised into Anglo culture, and not trying to be clever—are morally triggered. But by what? It’s not disrespectful to the dog. It’s dead, and the family took great care of it while it lived. It’s not unhealthy, assuming the dog wasn’t left to rot and didn’t die of something catching. It’s not setting a bad example—lots of cultures eat dogs and have no adverse social effects.1

We are socialised to have some moral intuitions, and they then drive our process of reasoning. It’s motivated reasoning at its finest.

So stopping for some of Kidder’s “sober reflection” isn’t going to help us very much, right?

What to do instead

All of this implies a new starting point for ethical decision-making. It implies that we have to assume that our decision-making in the moment will be deterministic. That the situation will select our response, by way of our intuitions.

This doesn’t mean we need to throw out deliberative thinking. But it does tell us when we might think about deploying it. We want to think about it before we get into the situation. You can think of this as a sort of choice architecture:

Who is a choice architect? Everyone in this room is a choice architect. Anyone who designs the environment in which you choose is a choice architect. If you go to a restaurant, there is a menu. Somebody thought about how to structure that menu. In many restaurants you have appetizers, then main courses. In some restaurants the main courses are divided into meat, fish and pasta. In others they are all mixed up. Sometimes they are arranged in order of price. Sometimes there is no apparent order. Everything we know about psychology tells us that all of those things matter. Everything matters.2

When it comes to our moral decision-making, we want to think about making decisions before it comes time to make them. If you decide that you’re not going to eat meat any more because it’s unethical, then you don’t have to think about buying meat when you’re shopping.

So we can try to create an architecture around our choices that helps pre-determine an ethical path. But we can also use our deliberative thinking to help develop our moral intuitions. As Albert Musschenga, a critic of Haidt, points out, people don’t typically have strong intuitions. Often they’ll adjust their intuitive judgment “if there are too many reasons pleading against it”. As I’ve written about, most of our beliefs are weakly held ones, and one of our greatest forms of entertainment comes from violating those weak beliefs.

We can just take Haidt’s favourite demonstration to prove it. The family eats the dog, and how quickly was I able to push you away from your intuitive rationalisations? Moral case deliberation is an effective tool for developing an ethical sense. Exposing ourselves to moral content, both in reality and in more intellectual domains, allows us to develop greater sensitivity to it.

But we need more than Socratic dialogue to do this. We need concrete tools, particularly because, as Kidder points out, many ethical problems are dilemmas—“right” vs “right” situations.

In these situations, our ethical predispositions are in conflict. It might seem fortunate, then, that cognitive conflict is one of the most obvious times at which our more deliberative thinking comes into play. But sharp readers will be suspicious of that conclusion—if our more deliberative reasoning is more active during conflict, then it could, in fact, be helping us make worse decisions. Remember, deliberative thought processes more often rationalise intuitive decisions. Maybe its activation is actually making us more stubborn in our moral attachments.

We want, then, to use these moments of conflict to encourage keener deliberation, and to do that we need the right hooks.

Hooks for a practical ethics

For the first, we can return to Kidder’s four basic ethical dilemmas:

Four such dilemmas are so common to our experience that they stand as models, patterns, or paradigms. They are: … Truth versus loyalty … Individual versus community … Short-term versus long-term … Justice versus mercy … The names for these patterns are less important than the ideas they reflect: Whether you call it law versus love, or equity versus compassion, or fairness versus affection, you’re talking about some form of justice versus mercy.

Or, if we are feeling more expansive, we could take W.D. Ross’ “prima facie duties”. Ross suspected that, on the face of it (prima facie), there are a handful of very basic ethical motivations that drive us, which can’t easily be reduced any further:

  • Fidelity, or a duty to keep our promises;
  • Reparation, or a duty to make amends;
  • Gratitude, or a duty to return favours;
  • Beneficence, or a duty to improve the lot of others;
  • Self-improvement, or a duty to improve our own lot;
  • Non-maleficence, or a duty to not harm others; and
  • Justice, or a duty to distribute happiness according to merit.3

For Ross, these are the duties that tug against each other in decisions where more than one “right” is in play.

Whether you take Ross or Kidder or any other reductionist, this is our first tool: a set of moral motivations that we can pull out and examine. You then need to work out what drove you to choose the “right” that you chose, and to ignore the other “rights” that might have been on the table.

It’s not always easy to determine why we picked something in the face of others, particularly when it comes to intuitive decisions. Subthreshold, automatic and unconscious patterns of thought are my main bugbear around here.

But sometimes it’s made clearer by working out what made us ignore the other “rights”. Maj Ben Ordiway, who put together this particular route of moral decision-making, wrote that we can examine the moral disengagement mechanisms that Albert Bandura spent much of his career working on:

  1. Moral justification (rationalization)—construing a reprehensible action as serving socially worthy or moral purposes.
    Example: A Special Forces Captain kills an unarmed, suspected bomb-maker—an apparent war crime. The officer declared that doing so prevented future casualties.

  2. Euphemistic labeling—using sanitizing language to reduce personal responsibility or reframe a reprehensible act.
    Example: “This is what it means to ‘operate in the gray.’”

  3. Advantageous comparison—contrasting a known reprehensible action against a worse alternative.
    Example: “A bit of rough treatment is fine; it’s not like we’re torturing them.”

  4. Distortion of consequences—misrepresenting the harm of a reprehensible act by ignoring or minimizing its effects.
    Example: “So I took a few of the team’s meds; it’s not a big deal; it’s just this one time.”

  5. Displacement of responsibility—pinning responsibility for one’s actions on an authority figure or a mandate.
    Example: “When the First Sergeant told us to ‘handle things at our level,’ this is what he meant.”

  6. Diffusion of responsibility—spreading the responsibility for an action among a group.
    Example: “We all agree to do this, right?”

  7. Dehumanization—denying a person or group human attributes.
    Example: A former Navy SEAL declares enemy combatants are “monsters.” The same individual claims he used a wounded non-combatant as a practice dummy “to do medical scenarios on him until he died.”

  8. Attribution of blame—placing the responsibility for one’s actions on the target of the action.
    Example: “It’s not our fault; they brought this on themselves.”

When we identify what helped us ignore the other ethical factors that could have driven our behaviour, we become more sensitive to those factors.

And then the final, practical, hook would be to borrow the deliberative slogan of virtue ethics:

What would a good person do?

Most social organisations have values. Explicit or implicit, most social groups will have a clear idea of how a good member should behave. Linda Zagzebski has written at length about the value of developing moral exemplars to shape moral thought, and while it doesn’t take us the entire distance the project of virtue ethics would have us travel, it gets us much, much closer.4

Outro

Three hooks, then, for a practical ethic. In my first article, I spoke of the ethics of pragmatics:

Pragmatic ethicists think that morals improve as we develop as people, both individually and collectively … That it’s our responsibility to test our ethical principles and revise them in the real world among real people. To reflect on our actions—the moral principles we used to justify our behaviour in the moment, and ask ourselves if what we did made sense.

And with this, we are much closer. We abandon the idea that ethical decision-making is simply a matter of hard thinking. In fact, we accept that, largely, it’s not. Rather, it’s the product, some substantial proportion of the time, of our intuition. So we can become choice architects, and pre-determine our ethical paths. And we can deliberate later about our and other people’s moral faculties in situations we and they have been exposed to. But each of these requires practical hooks, and we now have three.

We can sensitise ourselves to the basic ethical motivations—Kidder’s four or Ross’ five,5 and figure out which ones might have been tugging at us. Then we can ask ourselves what mechanisms of moral disengagement allowed us to choose one over the others. And lastly, we can help orient ourselves towards better decision-making by asking what someone we think is ‘good’ might have done.

Each of these helps us improve our moral intuitions and release our deliberative faculties of thought from their slavery to those moral intuitions.

As a practical ethic goes, it’s not bad.


  1. Another one that works well is someone cutting up a national flag for rags. No one sees them do it, and it was their flag. It’s hard to say why this feels wrong, assuming you’re nationalistic at all. But the population is so split over nationalistic stuff like this these days that I think I’ll relegate it to the footnotes rather than make it my main example. Haidt and his colleagues love incest stuff. We hope it’s just because it’s convenient, and not because they’re doing some subliminal kink priming. But, you know, they’ll say that a brother and sister fuck, and it’s a beautiful experience for the two of them. Again, no one finds out, and they’re careful—no babies. Why is it wrong? Most arguments are easily refutable, and the more complex ones are so abstract they aren’t grounded in anything particularly objective. Dumbfounding. 

  2. It’s probably worth pointing out that this model is actually the premise for the biggest critique of choice architecture. Thaler and his co-author Sunstein endorse ‘libertarian paternalism’ for this reason. If everything matters when it comes to the way people make choices, they think we have an obligation to help people make the ‘right’ choices—the ones we feel are in their best interest. To quote Thaler again: “If somebody asks you, what are the directions to get to the museum and you point them in the right direction, you are a paternalist according to us. We’re not saying you should go to that museum, but if you want to go to the museum, it’s over there, it’s not over here.” But of course, as I wrote elsewhere, “no one is asking and who is to say his directions are the right ones”.

  3. Ross reckoned that two of these—self-improvement and justice—might just be part of Beneficence, but I think it’s not immediately obvious why, so I will break them out, as many others do. But essentially, for Ross, there are four kinds of good we try to do out of beneficence—pleasure, virtue, knowledge, and justice. So, apportioning good to merit is one of the forms of good. And apportioning good to yourself is not so different from doing it for others, if you think about doing the most general good.  

  4. Virtue ethics isn’t just about picking the right act, but becoming the right kind of person. So we cultivate traits in ourselves, so we don’t have to rely on what other people might do. By just asking ‘what would a good person do’ I will probably have virtue ethicists complaining that using a quick decision rule like this isn’t going to help in developing phronēsis (practical wisdom). And, virtue ethics isn’t so much about conforming to some social organisations’ ethical code, but developing moral excellence within yourself for yourself—eudaimonia (flourishing). But I think, since most people reading this aren’t going to spend much more time on moral development than reading this, then we can safely do away with that level of depth. 

  5. Again, I wrote seven out, but Ross really reckoned five was closer to target. See footnote 3. 

