Moral Terrain
June 20, 2025
Excerpt: Most discussions about ethics centre on catastrophic scenarios. Situations where it’d be very difficult to avoid unethical behaviour. These scenarios aren’t really very interesting to me. What the average person probably wants to know is how to avoid the tamer moral lapses we encounter every day—how to avoid that single decision that might haunt them. So let’s explore a more practical ethics. This is the first in the series—getting a sense of the moral terrain.
You could try to make ethical decisions by reasoning them through. You want to do good, so you work out what good means. Then you work out what you should do to achieve the good. Or, you could do what most people do and wing it. Just make sure you reflect on what you’re doing.
Article Status: Complete (for now).
If you already read the intro in another part, then you can skip to the content. See the whole series here.
I manage the ‘ethical leadership’ module at Sandhurst, among other things. The ethics of extreme violence is something that really bothers military leaders. Not just because they want to reassure themselves that they’re good people, even if they’re doing bad things. But also because people in the profession of arms are particularly prone to moral injury—the distress we experience from perpetrating, through action or inaction, acts which violate our moral beliefs and values. And also, frankly, because we keep doing atrocities, and this is very bad for PR.
When I arrived at Sandhurst, the lecture series on ethical leadership was very much focused on “how not to do atrocities”. You can see me grappling with the lecture content in this series of articles. I wasn’t very impressed. It’s classic “look how easy it is to do atrocities” fare, with the standard misunderstanding of a handful of experiments from the ’60s and ’70s.
But it’s not easy to do atrocities. As I summarise in that series:
You have to train your authority figures to be cruel. You have to stop people from being able to leave or cease participating. You have to screen out dissenters. You have to do it over long periods of time and with an intensive level of engagement. And you have to constantly intervene to stop people’s better judgement getting in the way of the terrible goings on.
It’s actually really hard to be a catastrophic leader.
And many atrocities are exactly this. It’s not normal people slipping into terrible acts. It’s circumstances in which it would be very hard to avoid terrible acts.
These fatalistic circumstances aren’t really that interesting to me, nor are they interesting to the average soldier, who’d prefer not to know how bad things can get. What the average person, and certainly the average soldier, wants to know is how to avoid that single decision that might haunt them.
This is one of the difficult problems that a philosophy of ethics tries to deal with. But at Sandhurst, we don’t have time to do a philosophy of ethics course. It’s a 44 week course, mostly occupied by learning how to do warfighting. In that span, I get three lessons to help the officers do moral decision-making more effectively.
So we don’t want a philosophy of ethics. We want practical ethics that helps us very quickly get a sense of the moral terrain we’re facing. As such, in the process of making the module a bit more fit for purpose, I thought I may as well share it with you too.
The Moral Terrain (or: ‘ethics, a primer’)
We need some background in ethics, of course, before we get into the practicalities, if only to understand why getting to practical ethics requires more than one article to explain.
What even is ‘good’?
This is probably best illustrated by the mere existence of the discipline of meta-ethics.1 Most of the time, when we’re talking about morals and ethics, we’re talking about how and why people should behave. It’s much less common to ask questions about what morals and ethics are. Are moral facts objective or subjective? What does the ‘good’ in ‘good behaviour’ even mean? Is it all in our heads, as the only (some presume) thinking beings? This is meta-ethics.
I’ll show you what I mean with a real example. Most people have an instinctive notion that we want to sort of avoid causing harm to others. So if someone is feeling pain, probably this is bad. Probably we shouldn’t keep doing whatever is causing this person pain. That’s a moral statement—the clue is the ‘shouldn’t’. But why shouldn’t we? What makes pain bad, rather than good? We don’t usually ask that question—usually we presuppose what ‘the good’ thing is.
Now, there are some classic answers to this broad class of questions. The first is the idea that, perhaps, morals track natural facts—real features of the world. Facts about flourishing, harm, or cooperation. A good example of this is the idea that wellbeing is a property of the mind—how capable something is of perceiving good. So you treat a rock differently to a dog and a dog differently to a little girl, all based on their capacity to experience wellbeing. This perspective is called realism or naturalism.
But there are a lot of people that suspect that morals don’t track facts, even if they pretend to. That morals actually track attitudes and emotions. When people say “stealing is wrong”, they aren’t usually engaging in some kind of philosophical enquiry about the various contexts in which stealing is or isn’t justified. It’s an attitude of disapproval they’re venting, not a fact about the world. This is called non-cognitivism. Similar positions think that we try to capture real facts, but can’t for various reasons.
And there are moral facts that seem to be about social contracts more than anything else—our obligations to others. Survivors of a shipwreck might agree to ration food equally, and now saying ‘stealing rations is wrong’ isn’t about an attitude anymore. But it’s also still not some mind-independent fact about wrongness. In this case, it’s very specifically about the fact that you all agreed to ration the food equally and breaking that contract would be bad. This is called constructivism.
All of these different definitions of what ‘the good’ is lead to different interpretations of how we should behave—how they should guide our actions. Every sentence that contains ‘ought’ or ‘should’ sneaks a concept of ‘the good’ in there somewhere.2
What ‘should’ we do?
A practical ethics, though, isn’t going to spend a lot of time thinking about what “good” means, probably. It’s going to skip past that, as we ordinarily do, into how we should act.
There are a handful of broad categories of normative ethics—ways of thinking about how people should behave:
- Principle- or duty-based ethics (i.e. deontology)3 tell us that we should act with regard to principles or obligations. So if you believe that a god or gods laid down moral commandments we should follow, these would be deontic ethics. Or if you believe that the law is important to follow in-principle—not merely because you’re scared of getting caught—this would also be deontic. Or, if you think there are basic principles of ethics like we should do no harm, and be nice to each other, and say sorry when we mess up, and tell no lies, then you are also drifting into deontic territory.
- Consequence-based ethics (i.e. consequentialism)4 tell us that we should act with regard to the consequences of our actions. So if you think we should do things that produce the most good for the most people, or the least bad, then you’re being consequentialist.
- Virtue-based ethics (i.e. virtue ethics)5 shift the question from “what should I do?” to “what kind of person should I be?” The idea here is that understanding what the right ‘principles’ might be, or the extent of the consequences of our actions, is hard. We’re not likely to get it right all the time. So perhaps it’s better to try to become good people instead. We like good people, and we don’t mind when they make ethical errors because we know “their hearts are in the right place”. We think they’re much more likely to do good than bad. So perhaps it’s better to try and be one of these good people, rather than try to figure out what each of our actions should be, because then we’re more likely to do good than bad. So, virtue ethicists focus on cultivating “virtues”—good character traits like courage, honesty, temperance, compassion—in the hope that this will facilitate good behaviour.
- Care-based ethics (i.e. ethics of care)6 is a reaction to the last three ethical styles that points out how fragile these other ethical styles are when someone we care about needs us. When our grandparent or child asks us for help, we don’t worry about the principles, the consequences, or the virtues involved, we just attend to the needs of the person we care about. It seems prudent, then, to factor this into our ethical considerations.
- Community-based ethics (i.e. communitarianism)7 is similar to a care-based ethical approach, but isn’t so much about the tight relationships we have. It’s more about the larger social and communal context we live in. So much of what we think is “good” or “right” to do is bound up intrinsically with our sense of the communities we are a part of, and it doesn’t make sense to think of principles or virtues or consequences except as extensions of that.
And if you’ve made it this far, you’ve probably already spotted the problem. None of them are really very distinct from the others. When you’re making an ethical decision, you’re going to be operating through many, if not all, of these ethical lenses.
Pretend you’re walking down the street, late at night, and you watch a car swerve into a telegraph pole, then skid into the middle of the road, and shudder to a halt. You look at the car, grey-haired figure waving weakly for help behind a cracked windscreen, petrol streaming from beneath, smoke curling from under the hood, cars whipping past in the cross-street. You suspect you should do something. But what? You know, in principle, you shouldn’t enter an accident scene unless you’re trained (deontic). You also know that this principle is probably designed to stop you messing the situation up more—hurting the elderly figure in the car, or causing more ruckus for the cars on the cross-street. But equally, wouldn’t it be so terrible if the elderly figure could be saved if you acted, and died because you didn’t (consequential)? So you’re paralysed. You say, “maybe I’m just being a scaredy-cat; I should be courageous” (virtuous), and so you start to move slowly toward the car, waving frantically for the traffic to stop. But as you get closer, you see through the cracked windscreen that the elderly figure is your granma. You start sprinting to the car—all thoughts of prudent traffic coordination lost (care). But, you catch yourself. You have a responsibility to those people out there in their cars (communitarian). You need to make sure they don’t also get caught up in the accident, and so you start waving again—this time moving much faster towards granma.
You didn’t pick one ethical perspective. You picked them all.
Outro
I said at the start of all this that:
So we don’t want a philosophy of ethics. We want practical ethics that helps us very quickly get a sense of the moral terrain we’re facing.
Not just because we’re pressed for time, but because it’s quite troublesome to narrow down exactly what it is that constitutes ‘good’, and what that means for how we should behave.
But that doesn’t mean we should divorce ourselves from the philosophy of ethics. We just want to make it easier to understand how to act. In this, we’ll be doing something akin to pragmatic ethics8. Pragmatic ethicists think that morals improve as we develop as people, both individually and collectively. That we might adopt any of the different approaches above, but that they’re incomplete. That it’s our responsibility to test our ethical principles and revise them in the real world among real people. To reflect on our actions—the moral principles we used to justify our behaviour in the moment, and ask ourselves if what we did made sense.
It’s not nearly as rigorous as the pragmatists would like it to be, but it’ll do.
The next article describes the fatal flaw in most models of ethical decision-making, and the one after will finally get us to our practical ethics.
See also the Stanford Encyclopedia of Philosophy, really for anything in this series. It’s harder to read, but much more thoughtful than Wikipedia. I’ll add links in the footnotes where I remember. ↩
Although, there is plenty of mind-bending debate around this too. ↩
See also here for communal ethics, and here for something a bit less western. ↩
See also here for pragmatic ethics. You might also check out moral particularism at Wikipedia and the Stanford Encyclopaedia. ↩
Ideologies worth choosing at btrmt.