When groups go bad
January 3, 2025
Excerpt: There’s this cluster of classic social psychology experiments from the 50’s through the 70’s that you’ll be presented with in documentaries and whatnot whenever groups of people are behaving crazily. You’ve probably heard of some of them. Milgram’s ‘shock’ experiments, or Zimbardo’s prison experiment, or Asch’s conformity tests, and so on. These things gloss over just how hard it is to get people to do atrocities on a large scale. Luckily, you have me to tell you how they really happen.
Without more tasteful social behaviours to sample from, we’re liable to attach very strongly to the behaviours of our group. Add a hostile environment, normalised physical and emotional violence, and a lack of mental and physical resources, and you have the ingredients for atrocity.
Article Status: Complete (for now).
There’s this cluster of classic social psychology experiments from the 50’s through the 70’s that you’ll be presented with in documentaries and historical novels whenever groups of people are behaving crazily. You’ve probably heard of some of them. Milgram’s ‘shock’ experiments, or Zimbardo’s prison experiment, or Asch’s conformity tests, and so on.
I talk about them in detail in part one of this series, but the basic idea is that humans will cheerfully engage in the most obscene behaviour if either:
- Everyone else is; and/or
- Someone charismatic/in authority tells them to.
But there are a few issues with this little narrative. I mean, just intuitively, you know that most of the groups around you aren’t just teetering there on the brink of some atrocity. This shit always catches everyone by surprise. You also probably know of a good handful of actual atrocities—genocides, war crimes, mass killings, particularly untidy riots. And if so, you might also be familiar with the fact that by the second Wikipedia section these things go from a simple story about a group of terrible people doing terrible things to a nihilism-inducing tangle of complexity that stops you from reading any further.1
Now, I’ll recap shortly, so this article stands alone, but as part one of this series details, you really have to work quite hard to get some group-level atrocities going. Which makes sense, because as part two and part three point out, the same dynamics that underpin bad group behaviour are actually just the normal biases and predispositions that underpin all social behaviour. And you’ll notice that most social behaviour is quite pleasant and uncontroversial, with very few groups going around doing atrocities.
But these things do happen. So, while the previous articles have explored how in-group/out-group biases are normal, healthy and productive, this article will tell you exactly just how hard you have to work to get people doing heinous acts on a large scale.
What science says about bias says more about science than bias
Social science had a looser feel in the 60’s. For example, you could pretend that shocking someone to death was a legitimate experiment your participants were doing, like in Milgram’s “obedience” studies. Or you could literally trap undergraduate students in the basement of your office building and make them pretend to be in a jail, like Zimbardo in his “Stanford Prison Experiment”.
Today, you aren’t really allowed to do this kind of stuff because, sometime around the late 70’s, and not unrelated to Zimbardo’s nonsense, universities re-discovered the very beneficial relationship between functional ethics boards and less expensive liability claims. So perhaps we will never really know exactly how easy it is to induce these kinds of dynamics.
But we did learn some things. The first of which we have alluded to already—getting groups to act terribly is quite an involved process. To save you reading the article, I’ll very quickly convince you of this before we get into just what is needed to get groups doing this stuff.
What we can’t take away from these studies
Milgram invited people to participate in his experiment as a ‘teacher’, whose job was to apply an electric shock to another participant—the ‘learner’—every time the learner got something they were supposed to be learning wrong.
The shock got more intense with every incorrect answer, and where the study started at shock levels labelled ‘Mild Shock’ in normal black text, it ended with shocks labelled ‘Danger: Severe Shock’ and ‘XXX’, coloured red for added effect.
What people will tell you about this experiment is that we should be very surprised and concerned that about two-thirds of people went all the way to ‘XXX’ so long as an experimenter in a lab coat urged them to. This despite the fact that the ‘learner’—in actual fact, an actor in on the experiment—would begin screaming about their heart condition before falling silent and failing to answer any more questions. Now, I have literally never been in a lecture in which Milgram was taught where the phrase ‘just because a person in a lab coat told them to do it…’ wasn’t used,2 so I’d say that’s what we’re supposed to take away from this.
Zimbardo’s experiment was less complicated, but more egregious. He put a couple dozen undergrads in a basement and told them to simulate a prison. The experiment was supposed to last two weeks, but had to be called off on day six because of the “guards’” escalating abuses and the deterioration of “prisoner” conditions. Naked and sleeping on the concrete floor, cleaning toilets with their hands, this kind of thing.
Again, we’re supposed to be worried about and titillated by the fact that normal people will end up doing this kind of wild shit with basically no provocation at all. In this case, just assigning people to the guard condition made them monsters, and simply assigning people to the prisoner condition made them extraordinarily compliant—Zimbardo made a big fuss about how the prisoners could have left at any time if they chose to, and his guards were given basically no instructions.
Unfortunately, Zimbardo was telling some porky-pies. In the archival material for the experiment, it became clear that Zimbardo’s team explicitly instructed his guards in the intricacies of prison-based abuse, regularly intervened to escalate the tempo of the abuses, and planned for their prisoners to “be led to believe that they cannot leave, except for emergency reasons. Medical staff will be available to assess any request to terminate participation … Prison subjects will be discouraged from quitting”.
Milgram’s results are a little less obviously biased. But Milgram himself points out that not only was the experiment:
sponsored by and takes place on the grounds of an institution of unimpeachable reputation, Yale University. It may be reasonably presumed that the personnel are competent and reputable.
But also:
The subjects are assured that the shocks administered to the subject are “painful but not dangerous.”
And more importantly:
There is a vagueness of expectation concerning what a psychologist may require of his subject, and when he is overstepping acceptable limits. Moreover, the experiment occurs in a closed setting, and thus provides no opportunity for the subject to remove these ambiguities by discussion with others. There are few standards that seem directly applicable to the situation
Essentially, Milgram is wondering why his participants would behave differently—what evidence did they have that they would know better than the experimenter? He also spends a great deal of time itemising all the ways the participants emotionally came apart during the experiment. Enthusiastic, they were not.
What we can take away
So, it’s not really that people stepping into a group find themselves stumbling upon some shallowly buried predisposition toward complicity and moral decay.
It’s more that, under certain fairly complicated circumstances, absolutely normal people can be pressured into an abnormal level of complicity and moral decay.
Most lectures will bring up the philosopher Hannah Arendt’s concept of ‘the banality of evil’ with these experiments. Arendt says of Nazi logistician Adolf Eichmann, “the trouble with Eichmann was precisely that so many were like him, and that the many were neither perverted nor sadistic, that they were, and still are, terrifyingly normal”. More generally she says:
The sad truth is that most evil is done by people who never make up their minds to be good or evil.
But, what’s left out is that she also notes:
[U]nder conditions of terror most people will comply but some people will not, just as the lesson of the countries to which the Final Solution was proposed is that “it could happen” in most places but it did not happen everywhere
So. The question becomes, what does make this stuff happen?
How group behaviour goes bad
We can always think about human behaviour in two directions. We can think about how we act, and we can think about how the environment acts on us.
How we act
We can take our quote from Milgram as a starting point:
There is a vagueness of expectation concerning what a psychologist may require of his subject, and when he is overstepping acceptable limits. Moreover, the experiment occurs in a closed setting, and thus provides no opportunity for the subject to remove these ambiguities by discussion with others. There are few standards that seem directly applicable to the situation
Milgram is gesturing here at the core conclusion of the social sciences when it comes to what attracts people to groups:
we sort ourselves in and out of these [social] categories based on how we would like to distinguish ourselves from others … Like, in the UK, I am Dorian the Australian. I place myself in that category to distinguish myself from the British and other ex-pats roaming this soggy landscape. But in Australia, I place myself in the Sydney-sider category to distinguish myself from those pesky Melbournites.
We are attracted to one group over another when it’s distinct from other groups in a way that appeals to our personal identity or sense of self. Once we have a group that’s distinct in this way, we engage in this huge list of behaviours designed to make us more like the group:
And when I want to be identified with a group, I’m going to accentuate those distinctions. I’m going to make very clear all the ways the group I’m identifying with is different from other groups, and how similar the people in the group are to each other, including lil old me.
- I obviously need to make clear who is us, and who is them, so I’ll do the ‘othering’ cluster of behaviours.
- I need to make sure I’m behaving like my group, so I’ll do a lot of conforming, and I’ll be evaluating how close my behaviour is to the behaviour of the group.
- Thinking like myself is actually not super productive, because my personal identity doesn’t always match my social identity. So I’ll do a lot of outsourcing my thinking to the group, leaving my personal identity behind.
- And because I’m outsourcing all this thinking, I’m probably going to do stuff I wouldn’t otherwise do. Not just because there’s safety in numbers, but because my thinking is those numbers. I don’t have responsibility for my actions anymore, because taking responsibility might make me act differently to the group and I want to be part of the group.
This little attempt of mine to summarise all of social psychology elides 200-odd social biases floating around in the literature. But the point is that these opportunities for positive distinction are absolutely crucial features of our social behaviour. And as Milgram points out, the only distinction available to his participants was to identify with the ostensible expert in the room—the experimenter—or to defy them, creating a new group with unpredictable outcomes.
When you introduce more groups you get a very different result. So, in follow-up experiments, Milgram introduced ‘participants’ who would object to all this electrocution, which essentially stopped our real participant from doing any shocking past the point the ‘learner’ started to get upset. As soon as an alternative group was available, our participants quickly jumped on board.
In Zimbardo’s experiment, you started off with two groups—the ‘guards’ and the ‘prisoners’. But quite quickly, and even with Zimbardo’s ongoing attempts to encourage greater abuses, you ended up with a group of willingly abusive guards and another group, double the size, who would try to avoid humiliating and punishing the prisoners. You even had a split among the prisoners—a group who rioted in response to the worsening conditions and a group who preferred to endure their torments without aggravating the guards. Zimbardo himself identifies that, by the end, the groupings had splintered even further.
Had Zimbardo permitted less abusive behaviour in his guards, or made escape more easily available to his prisoners, there seems very little chance his experiment would have lasted even the short while that it did.
Critically though, we want to be able to be a part of groups with positive distinctions. This raises the question of what happens when no such positive distinctions exist. In Milgram’s first experiment, and Zimbardo’s entire experiment, there were very few chances to identify positively with a group other than the problematic ones. Indeed, in many of the examples of atrocity we see little chance for members of the guilty party to identify with another group. What happens then?
I suspect that the major feature of human cognition that comes into play here would be Festinger’s cognitive dissonance. When we hold two conflicting beliefs, or our beliefs and behaviour are at a mismatch, we do some really weird shit to smooth over the conflict. You can read the Wikipedia page or some of my older articles for some examples, but I think it’d be more interesting to make a quick tangent into BDSM, since I’m writing an upcoming article on the topic.
There is a peculiar phenomenon that appears quite regularly in BDSM content. Essentially, a submissive can ‘forget’ that they can slow things down or stop them. You have a lot of people advising that a dominant should regularly remind their sub of their safe words and protocols and whatnot. The literature on this is sparse enough that I haven’t found anything relevant, because BDSM is not trendy to fund, but you can’t simply explain this away by appealing to the human stress response. This seems like a fairly clear example of a mind trying to resolve the various and extreme tensions at play here—they don’t like what’s happening, but also want it to continue, so something here is subtly eliminating the idea that they can stop what’s happening to help resolve the conflict.
I use this example because it reminds me of Zimbardo’s strangely compliant participants. Obviously they were actively hampered in their attempts to leave. But, one participant is famous for faking a mental breakdown. Early, too—within 36 hours. He was promptly removed. It’s curious to me that few of the participants resorted to methods like this rather than, you know, cleaning toilets with their hands. It’s also notable that groups exacerbate these kinds of effects—helping decrease the importance of information that conflicts with group beliefs.
But, this isn’t all we need to rely on to explain bad behaviour. We also have the option to make the other group seem even worse, so our group appears to have more positive distinctions by comparison. So, we have Kelman’s 1973 concept of Dehumanisation, an effect where we talk about and treat an out-group in a way that strips them of their humanity. Or, we have Faure’s 2008 description of Demonisation, where we engage in behaviour that helps us to perceive the out-group as inherently evil.
By doing this kind of thing, we don’t have to fuss around managing our cognitive dissonance with all these weird and subtle mechanisms. We can just point to the obvious inferiority of the other group to justify our group membership, even in the light of all the outrages we’re now a part of.
How the environment acts on us
It’s not enough to just have the capacity to do atrocities though. Something also has to bring it about. This is where the environment comes in. Helpfully, since the fruitful cross-pollination of TV and the atrocities of the Vietnam War, various militaries have shown quite some enthusiasm for averting this kind of thing, so we’ll dip into that literature for this. And the old head of the leadership department at Sandhurst, Dennis Vincent, does a nice job of summarising the environmental factors at play.3 He calls them ‘situational influencers’. The first two will need a bit of translation to make them less war-ey:
- Hostile Environment: ‘A hostile environment is one in which a person feels under threat or uncomfortable due to a perception of danger.’
- Normalised Violence: ‘if violence is seen as an immutable part of our daily life, it is soon accepted as normal and the threshold of violence is lowered’.
Vincent is reflecting specifically on military operations, but we can easily extend this beyond gunfights in developing countries. Zimbardo’s prison experiment had these in spades. Specifically designing the two groups to be in opposition to one another generated a hostile environment, made most obvious by the rioting that started on day three. And the ‘guards’ began the experiment by inflicting violence on the prisoners, so the threshold for violence had nowhere to go but down.
Importantly, the only form of violence Zimbardo’s ‘guards’ were instructed to avoid was physical violence. Together, these factors paint a picture in which in-groups and out-groups who (a) feel threatened by one another and (b) can attack each other in any way, are at risk for bad behaviour.
Obviously under these conditions, the individual behaviour we’ve explored above is more likely to trend towards the less pleasant. Identifying with a group that’s a threat to us doesn’t really provide a lot of opportunity for positivity. So we might be forced to suppress those pesky questions floating around in the back of our minds asking if we should really be doing these things, or we might be encouraged to notice just how much worse the other groups are. And in doing so, as we take our behavioural cues from our group, watching them behave in this place of normalised violence, the:
range of acceptable behaviour becomes so wide and there is no clear moral reference point for right and wrong
King’s College London, Centre for Military Ethics, Armouring Against Atrocity
Vincent adds to this two more considerations:
- Lack of resources: ‘maybe in the amount of materiel [i.e. physical resources] … [which] can lead to a degree of hopelessness … However, in many case studies the two key resources lacking were time and manpower’
- Enhanced emotional state: ‘emotional reaction at the moment [of stress] is more influential in determining choice than the rational evaluation of options that may have been conducted beforehand’
Together, these two point to capacity limits in our decision-making. When we’ve run out of physical, cognitive, or emotional resources, we have to prioritise our actions. As I’ve pointed out many times before, this is going to have us acting on more passionate, urgent motivations rather than our more future-oriented ‘thinky’ motivations.
For this, I might point to the actions of destructive cults. Most human collectives are cults, and there’s every chance you belong to several. But some of these go quite spectacularly awry—Heaven’s Gate, Jim Jones, you know the ones. And in destructive cults, you often find that this kind of resource deficiency is at play. Sleep deprivation is most common, but food, time and even emotional or social deprivation are almost equally frequent.
So, in environments where identifying positively with other groups accentuates a threat—hostile environments—we have almost no choice but to accentuate our own group membership. If attacks between groups are a regular part of our interactions, then we’ll lose track of the range of moral behaviour we’re accustomed to in favour of the new range we’ve been exposed to. And where these things happen in tandem with a lack of physical, mental, or emotional resources, we’re much less likely to engage the more thinky aspects of our behaviour, and more likely to indulge in our more passionate systems of decision-making.
Which leads us to Vincent’s final consideration. Vincent calls this ‘weak leadership or lack of supervision’, but goes on to use examples where the leaders were actually very dominant and capable, but were just terrible people. Sharper students will always notice this and derail things by asking questions. So I’ll provisionally call this ‘lack of opportunities for moral identification’ or something instead.
You see, all of this—the individual biases in behaviour, and the weight of the environmental impacts—hinge on the fact that people have no better social behaviour to identify with. When there are more tasteful behaviours to sample from—in the form of role-models, or responsible managers, or similar but less awful groups for example—then these kinds of terrible effects social scientists of the 60’s and 70’s were so interested in more-or-less disappear.
Outro
It took us four articles to get there, but get there we did. So now, if someone is invoking these old, psychotic experiments from half a century ago to explain some catastrophic collapse of social behaviour, you can take a step back. It’s really hard to make people do this stuff. Which is perfectly sensible because these biases aren’t just responsible for bad behaviour, but all social behaviour. And indeed, the most productive groups are the ones that harness these biases best.
And, while catastrophic in-group/out-group dynamics often have very complicated explanations, at the core is a lack of opportunity to identify with more tasteful, less destructive behaviour. This is particularly so if our environment is hostile to us and our group, and attacks between groups are common. We won’t just be motivated to accentuate our group affiliation and interpret other groups more negatively, we’ll also lose our sense of what range of behaviour is good and bad. Worse still if we’re poorly resourced in mind and matter, because we’re going to be much more responsive to our less ‘thinky’ motivations. And these things together mean we’re not going to be able to compare what we’re doing with people we might prefer to be behaving like. So, we’ll lean into the bad behaviour of our group.
So there you go. All equipped to avert genocides now. Thank me later.
Kind of like this sentence. ↩
Admittedly, including my own lectures, since I say exactly this sentence. ↩
I’m largely going to draw from Dennis Vincent’s review of ethical leadership in [Sandhurst Occasional Paper #36](https://www.army.mod.uk/who-we-are/our-schools-and-colleges/rma-sandhurst/faculty-for-the-study-of-leadership-security-and-warfare/). ↩