The Scientific Ritual

a lecture by Dorian Minors

May 2, 2026


Excerpt: Science feels like the most reliable thing we have. The opposite of belief. But it’s a belief system itself—a ritual, with all the failure modes that rituals have. And the receipts are right there in the replication crisis.

Ideology

The scientific method is just another belief system, a ritual subject to errors of application like any other. And the consequence is a machine for generating exaggerations.

Show Notes

References

p-values, Bayes factors, and software

  • Wikipedia: p-value, Bayes factor
  • Ronald A. Fisher (1925), Statistical Methods for Research Workers — where the 5% threshold appears as an illustrative example
  • Harold Jeffreys (1939), Theory of Probability — where the Bayes-factor thresholds (BF > 3 substantial, BF > 10 strong) come from
  • JASP — the open-source Bayesian statistics software with default priors

Below is a lightly edited transcript. For the article that inspired this one, see The Scientific Ritual.

Welcome to the btrmt. Lectures. My name is Dr Dorian Minors, and if there’s one thing I’ve learned as a brain scientist, it’s that there’s no instruction manual for this device in our head. But there are patterns. Patterns of thought, patterns of feeling, patterns of action. Because that’s what brains do: create the patterns that gracefully handle the predictable shapes of everyday life. So let me teach you about them. One pattern, one podcast, and you see if it works for you.

Now, in the last lecture, I spent a lot of time trying to show you why the narrative that social media is bad for you is a false one. One that looks like it’s supported by the evidence, but as you dig into it, sort of turns out to be quite shallow. And in fact, what seems to be happening is that sad people are using social media differently. Not so much that social media is making people sad in the first place.

And indeed, in most of my lectures I’m doing this kind of zooming out thing, showing you how claims about neuroscience and psychology and other social sciences get kind of weaponised to trick you into doing stuff that isn’t going to necessarily help you, or to buy programmes that are kind of silly. The idea, I hope at least, is to show you how you can dig into these kinds of claims and find out what the science actually says, so that we can get to what you can actually do about whatever problem has been hijacked under the guise of helping you.

So that’s what I normally try and do. But there is this sort of problem with appealing to, you know, in quotes, “the science.” And I want to spend this little lecture talking about what that even means.

Reverence and reaction

I think it’s important because we live in this ambivalent kind of culture that can’t quite work out how it feels about science.

There’s this huge chunk of our cultural impulses organised around the fact that we should revere science. You think of reels that kick off with “the science says,” or “here’s what experts do.” Or you think of newspapers and online media talking about how, you know, a new study indicates this kind of headline, almost as if they’re settling questions. Or you have friends correcting each other at dinner with “well, actually, studies show that X and Y thing is really the case.” That sort of trump card. That thing that you appeal to when belief gets out of hand—when somebody’s getting too spiritual or too political or too credulous—you bring out the science, and the conversation’s over faster than if you used a platitude.

And indeed, it’s gone so far in that direction that there’s now a reaction against it. The same friends that might have once corrected you with “studies show” are now sending you podcasts about how the experts have been lying to you the whole time, or “do your own research” has become this sort of slogan in many corners of TikTok. And in a more modern framework—I’ve talked about this in other episodes—entire political movements are built on this idea that we shouldn’t trust the experts. Think of the debates around COVID, or ivermectin, or raw milk. The idea that if credentialed people are saying it, then the smart move is to assume the opposite. This is like 60% of the Joe Rogan podcast now, right?

Because it’s not always false. It’s not always the case that if credentialed people say it, then it’s true. And indeed, these aren’t different groups of people. The same people who will appeal to the studies will tell you to do your own research, because credentialed people aren’t always correct. I often say not to trust apparent experts, even as I spend minutes crawling through a literature on something, as I did in the last lecture.

And I think this kind of pained dual attitude that we have towards science and the experts is actually pretty easily explained, if you know this sort of odd little fact about—maybe not a fact, but an insight into—what science actually is. Understanding this little insight means you can actually understand when to trust the studies and when to do your own research, when to trust the experts and when to be a little sceptical.

And the insight is pretty straightforward. It’s that science isn’t different to beliefs. It’s a particular kind of belief. And like all beliefs, it has a particular kind of failure mode. So let me tell you about it. Let’s talk today about the scientific ritual.

The Dawkins move

I think the place to start is the surface-level assumption of our modern secular culture: this assumption that the much-vaunted scientific method is the most reliable way that we have of finding out whether something is true or not. Now, obviously, if you ask people to reflect on this, most people are going to have some reservations. But I think if I show you an example, we’ll agree that this is the sort of crest of the wave that our cultural narratives ride on.

So let’s take Richard Dawkins as an easy example. He has this nice TED talk where he says something like:

Religion teaches people to be satisfied with trivial, supernatural, non-explanations and blinds them to the wonderful real explanations that we have within our grasp. It teaches them to accept authority, revelation and faith instead of always insisting on evidence.

Right, so that’s the pitch. Faith is problematic. And not just from religion—I mean, Dawkins is upset about religion, but in the culture, it’s any kind of blind belief. I have an entire series of articles and lectures that highlight how terrified we are of cognitive biases. So we don’t want that. We don’t want that kind of blind belief, that faith. We want evidence. That’s the thing. And science is what gets you the evidence. So science is a good thing.

And even if you don’t personally agree with the narrative I’ve just set up for you, you can tell that this basic premise is driving the cultural narrative, because the reaction to it—to oppose the experts—is framed as the edgy, sexy thing to do. Even though it’s pretty mainstream now, and anybody doing a reel about how nobody is talking about whatever thing is one of thousands, it still feels controversial, because it’s not the assumption that sits at the cultural surface. It’s the second-order reaction.

And this idea that we should trust evidence, that we should trust science, is a sensible surface assumption. Science gave us hand washing and antibiotics and aeroplanes and so on.

But I want you to pay attention to the way that it’s framed, and we’ll go back to the quote to show you how it looks. So again:

It teaches them to accept authority, revelation and faith instead of always insisting on evidence.

Here, science isn’t being framed as one way of knowing. In the way that we normally talk about it, science gets framed as the only legitimate way of knowing. And the quiet corollary is that anything that sits outside of the scientific milieu—anything that hasn’t been measured, anything that hasn’t been peer-reviewed, that doesn’t have double-blind studies, isn’t replicated, isn’t p-tested—all of that stuff isn’t really knowledge. It’s just feelings.

That’s the assumption.

And what I want to do today is try and take this assumption apart. Because the scientific method, when you actually look at how it works under the bonnet, is itself a kind of belief system. It’s itself a ritual, and it’s subject to all the same lazy applications and errors of judgment that any other belief system is.

And in fact, it’s so subject to them that there’s a crisis. It’s called the replication crisis, and it’s going on inside the academy right now, where scientists are openly admitting that an enormous chunk of what they’ve published over the last 50 years probably isn’t true. It’s the public admission of the same kind of thing that’s leading to this reaction against the scientists. Except the difference is that few outside the academy have recognised this for what it is. So you end up with this strange ambivalence around science.

So let me walk you through it today, and see if I can illustrate why this replication crisis is happening, what it is, why science can be trustworthy and untrustworthy, and what we should probably do instead.

What a scientific fact actually is

So imagine this. You’re at dinner with some friends, and one of them says something like “honestly, ever since I started meditating, I just feel sharper. Better focus, better mood, everything.” And you say—oh, yeah, studies show that meditation does all sorts of stuff. There’s research on it.

Now, you probably haven’t read the studies. You’ve probably just seen this floating around on Instagram, TikTok, whatever. Probably nobody at the table has read a study about meditation. We’ve heard people mention research before. We have a vague sense that the literature is broadly favourable. And so we’ve reached for it to reply to our friend, because it ends the conversation. It’s basically a platitude.

Now, what I want to do is pause here for a second, because what just happened here is actually a misunderstanding of how science works. And I think it’s a reasonable misunderstanding, and one that works given the circumstance. And that’s why I think it’s a good place to illustrate the problem.

When somebody runs a study—a scientific study on meditation—they don’t start with the question “does meditation work?” What they’re supposed to do is start with the opposite question. What you’re supposed to do is start with the assumption that meditation doesn’t do anything at all. That should be the default position. And scientists call this the null hypothesis. It’s the boring answer that you’re supposed to assume is true unless you’ve got really good evidence to suggest the opposite, that meditation does something.

So what you do is you run your study, you take, what, a hundred people or whatever, half of them are going to meditate, half of them aren’t, and then you’re going to measure something—focus, mood, cortisol, whatever you want. And then what the study tells you isn’t whether meditation works. What it tells you is: if meditation did nothing at all, how unlikely would these results be? If it’s unlikely enough, then you’re supposed to abandon the boring answer. You’re supposed to abandon the idea that meditation is doing nothing. You’re supposed to say—okay, probably meditation is doing something.
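To make that concrete, here is a minimal sketch of the hypothetical meditation study in Python. Everything here is simulated: the group sizes, the effect, and the scores are all invented for illustration, but it shows exactly what question the statistical test is answering.

```python
# A toy version of the meditation study described above.
# All numbers are invented; the point is what the p-value means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
meditators = rng.normal(loc=0.3, scale=1.0, size=50)  # focus scores, arbitrary units
controls   = rng.normal(loc=0.0, scale=1.0, size=50)

t, p = stats.ttest_ind(meditators, controls)

# The p-value answers: "if meditation did nothing at all, how
# unlikely would results at least this extreme be?"
print(f"t = {t:.2f}, p = {p:.3f}")
if p < 0.05:  # the conventional threshold we'll get to shortly
    print("Unlikely enough: abandon the boring answer (reject the null).")
else:
    print("Not unlikely enough: keep assuming meditation does nothing.")
```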

Now, it seems like a strange distinction, but it’s a really important one, because what’s missing from this picture is the idea of proof. The scientific method does not, ever, by design, prove that something is true. The whole purpose of science is to fail to disprove something.

Stephen Jay Gould puts it really nicely. He says in his book:

Science is a procedure for testing and rejecting hypotheses, not a compendium of certain knowledge. Claims that can be proved incorrect lie within its domain.

So again, he’s saying that in science we do not prove things to be true. We prove things to be false.

So when you’re at the dinner table saying “studies show that meditation works,” what you’re actually appealing to, technically, is that some researchers tried to disprove the claim that meditation does nothing, and they didn’t manage to do that. So what we do instead is we provisionally behave as though meditation does something. And we’re going to keep believing that until somebody proves that it’s false, and we have to adopt a new belief.

That’s what a scientific fact is. It’s not the discovery of a truth. It’s a belief that we haven’t knocked over yet. It’s something that we haven’t proved false, not something that we have proved.

Whether I explained the null hypothesis thing well or not, that’s the takeaway. Science, as Stephen Jay Gould put it, isn’t a compendium of what’s known to be true. It’s a procedure for testing and rejecting hypotheses. It’s a destructive thing. It proves what is not true. And so we then act as if what hasn’t yet been proved untrue is true.

And once you put it that way, you understand how science is a belief system—same as any other belief system—because a belief that we haven’t knocked over yet, a belief that we haven’t proved untrue yet, is just a belief. It’s something that we’re holding on to for now. (And this is probably why meditation is such a hotspot of argumentation at the moment, with people now saying that it works for some people and not for others. I’ll leave something on this in the show notes rather than get into the details, because these are provisional beliefs.)

Now, I should point out that I’m sort of framing this process as a negative, this destructive belief-disproving process of science as a negative. But I think it’s actually very positive, because the point of this structure is to make sure that we don’t hold on to false beliefs forever. There’s a structure for testing our beliefs, testing which beliefs are stronger than other beliefs. And there’s a culture around the whole thing, and a set of rules around the thing, and a community of people around this thing, all of whose job is to try and strengthen the value of those beliefs for the way that we live our lives.

But once you concede that what’s happening here is belief management rather than truth discovery, then you can start to ask yourself a question: can this become sclerotic like any other belief? Can it become ritualised behaviour like the ritualised behaviour we have around any other belief?

And the answer is obviously yes, or I wouldn’t be doing this lecture and it wouldn’t be called “the scientific ritual.”

The Fisher accident

Let me give you my favourite example of this, and maybe the cleanest illustration of what I mean when I call science the scientific ritual. And again, we’ve got to get a little technical for this, but I’ll try and make it as easy as possible.

Almost all scientific research, at least up until the last 10 years or so, would have talked about a result being statistically significant. And if a result is statistically significant, then what we’re talking about here are p-values. You might have heard this before, you might have read about it before in science journalism, or if you’ve braved a scientific article yourself. And what we look for to determine statistical significance is a p-value that’s below a certain threshold, usually below 0.05. If your p-value is below 0.05, your result ends up being statistically significant.

Now, that’s not always true, but it’s true in the main, and it’s almost exclusively true in psychology studies up until the last 10 years or so. (Biology uses a stricter threshold; I think it’s 0.01 or something like this. But that’s the main thing.)

And the reason that I’m telling you about this—I’m not going to try and tell you how a p-value works, but we should talk about where this value comes from. Why does a value under 0.05, or whatever, mean that something’s statistically significant?

The reason is because there’s this bloke called Ronald Fisher. 1925. He’s a statistician. He is working, I believe, in agricultural research. And he’s writing the textbook that essentially invents modern statistics. And somewhere in the middle of this textbook he’s explaining the concept of statistical significance to other researchers. And he uses 5% as a worked example. It’s like a convenient threshold for a hypothetical gambling problem that he’s walking through to illustrate how statistical significance works to other researchers—because at this point, nobody is really doing this statistical-significance thing yet.

So he says 5%, not because there’s anything magic about 5%, but because it’s a nice round number that helps him illustrate the maths. And Fisher, to be clear, explicitly said in his later writing that researchers should, and I’ll quote, “give his mind to each particular case in the light of his evidence and his ideas.” Right? That’s Fisher saying, you know, like, obviously you’d have to pick something that made sense to you rather than my 5% example here.

And, perhaps just as obviously, researchers ignored him. They just took the example. (It’s a little bit more complicated than this, and I’ll link to an article that explains that in more detail.) But that’s the essential case, and the way that this 5%, or 0.05 p-value, became the threshold in psychology and whatnot. And again, other disciplines have stricter requirements—I mentioned biology, I think they do 0.01—but they’re not doing things on a case-by-case basis like Fisher wanted. It’s still rule-based. A static rule. Because in 1925 Fisher picked his example, and maybe a couple of people had a crack at changing it, but eventually everybody just copied whatever the last person did.

And that’s a ritual. Using this example as a static rule, despite Fisher’s exhortations to view things on a case-by-case basis, is the exact thing the word “ritual” was invented for. An action of prescribed form, with no understanding of why that form has come about, repeated because that’s what everybody does.

Statisticians have an entire literature on why this threshold doesn’t really make sense for most of the contexts it gets applied in. And again, I’m not going to drag you through the technical details, but the short version is that 0.05 was pegged to a particular kind of decision in a particular kind of context that doesn’t match how most modern scientists are using it.

And there’s this guy called Gerd Gigerenzer who has a name for this. He calls it the “null ritual”—from the null hypothesis that I mentioned before, the assumption that the boring thing is true, that nothing has changed. And the line that I love from him is that researchers do this—they pick this threshold without thinking about it, they run a test, and then they treat that output of the test as gospel. This process is like compulsive hand washing. It’s a tic. The form of the science is intact, but there’s no judgment there. You’re just doing it because you feel like you have to, not because your hands are dirty.

Okay, so that’s one seam. That’s the place where, if you look closely enough, you see that we’re treating objective measurement actually as something closer to a liturgy. Most people—most researchers who are running statistical tests—wouldn’t be able to tell you where that value of 0.05 came from. And most of them wouldn’t be able to defend the threshold if you asked them. They use it because that’s what gets used. So this is a belief, not an investigation.

And once you know to look for it, you find it everywhere.

The replication crisis

Now, you might say: okay, fine, so researchers are a bit lazy with their thresholds. So what? A lot of professions are a bit lazy. It probably doesn’t mean that the whole thing is shit, or we wouldn’t have been doing modern research all this time.

And that’s where the replication crisis comes in, that I mentioned before. The crisis is around the fact that this whole thing was indeed a bit shit for the last 50-odd years.

The replication process is probably worthy of its own lecture or article or something, but I’ll give you the short story here. Around 2015, a group of researchers got together. This was a project called the Open Science Collaboration. They picked 100 famous psychology studies—the kind of studies you’d see cited in textbooks, or in TED talks, or self-help books, or corporate training decks—and they tried to redo them. Just redo them. Same method, fresh sample of people. See if you get the same results as the famous ones that prompted all the media attention.

And importantly, science is organised around this concept of replication. Partly this is due to the history of the p-value—I don’t want to get into the details of that, but it’s a part of the ritual now, this idea that we design our experiments so that they can be replicated, even if in practice few ever do get replicated. So we make sure that we detail the methodology, we detail the participants, we detail everything that went into the study, so that in theory somebody could pick it up and run the study again. And that’s what these guys did. They called everybody’s bluff and they ran the replications.

And only a third—something like a third—came out with the same result. Roughly two-thirds didn’t. Two-thirds of the studies that they tried to replicate, according to the instructions for replicating those studies, didn’t replicate.

And that was in psychology. And everybody had a little panic and started trying to do the same thing in other disciplines. They did it in medicine and found the same problem. They did it in cancer biology, same problem. They did it in economics and ecology. Every field that uses this kind of statistics, wherever they checked, they found the same thing: studies didn’t replicate.

Now, the famous casualties are worth noting. And rather than getting into the more depressing ones of medicine, I’ll just stick to psychology.

So power posing. This idea that standing in a certain pose for two minutes before the big meeting lowers your cortisol, makes you more confident, et cetera, et cetera. I think this is one of the most-watched TED talks of all time. Didn’t replicate. And the lead author of the study eventually wrote that she no longer believed her own findings.

Ego depletion. You might not know it by that name, but you definitely know it. The idea that willpower is a finite resource that gets used up over the day. The reason that you’re not supposed to make big decisions when you’re tired. So you’re thinking books like Eat That Frog, The Willpower Instinct, every productivity guru’s pitch about decision-making energy. Massive multi-lab replication study from dozens of universities came back and couldn’t reproduce the effect.

Priming didn’t survive the replication effort. This idea that reading words about old age made people literally walk out of the lab more slowly. Very cute. Made it into the most popular psychology book of all time, Daniel Kahneman’s Thinking, Fast and Slow, and created entire subfields. In fact, doesn’t replicate.

And what I’m trying to show here is that these aren’t fringe results. These are the primary results that have advanced psychology in the modern age. I mean, one of the biggest casualties is the entire discipline of positive psychology, starting in the 2000s or thereabouts. The idea that maybe we shouldn’t concentrate psychology on pathologies—not on people with problems and fixing their problems—but instead we should concentrate on making healthy people better. How can we push people towards flourishing?

So think of Angela Duckworth’s concept of grit, so influential that I myself wrote an article about it back in the day. Doesn’t replicate. The idea of a growth mindset, or of flow states, or gratitude journalling, or the 3-to-1 positivity ratio, the happiness advantage, all of these things—all of positive psychology, almost none of it replicates. And so on, and so on, and so on. If I were to list them all, we’d be here forever. Suffice to say that any psychological finding that you’re familiar with probably didn’t survive the replication crisis. Moving forward, that would be my default assumption.

How the ritual fails

And this is the scientific ritual. None of this is happening because people are doing fraud—or at least, very little of it is fraud. Psychologists, for the most part, weren’t lying. The cancer biologists weren’t lying. They probably genuinely believed what they published. But they were suffering from structural problems. This ritual at scale.

And I’ll walk you through the failure modes of this particular system of beliefs.

The first one is what we now call the file-drawer problem. Journals only want to publish positive findings, studies where something happened. Nobody wants to publish a study where nothing happened, nothing changed as a result, where the null hypothesis was true and meditation didn’t have an effect.

So if you run an experiment and find nothing, you’re not going to be able to publish it. The paper goes into your proverbial file drawer and nobody hears about it. Maybe 100 eager young postdocs do this. They run the study. Only one of them gets lucky and finds the effect. The rest put the failed study in the file drawer.

Now, you would conclude that this probably isn’t a real effect, because 99% of them didn’t find it. But because the 99 put it in the file drawer, and what gets published is the false positive, we conclude that the false positive is the actual effect. So multiply that across thousands of researchers, hundreds of journals, 50 years—the literature is systematically biased towards that lucky run. The unlucky stuff all sits in drawers, unread.
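If you want to see how little it takes, here is a toy simulation of that file drawer: 100 labs chasing an effect that, by construction, doesn’t exist. The numbers are invented, and it assumes numpy and scipy are available.

```python
# The file-drawer problem in miniature: 100 labs study a treatment
# with NO real effect; only "significant" results get published.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
published = []
for lab in range(100):
    treated = rng.normal(0.0, 1.0, 30)  # the true effect is exactly zero
    control = rng.normal(0.0, 1.0, 30)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05:                        # journals only want positive findings
        published.append(p)

# Around 5 of the 100 null studies clear the threshold by luck alone,
# and those are the only ones anyone ever reads.
print(f"{len(published)} 'discoveries' published; "
      f"{100 - len(published)} studies in the file drawer")
```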

And that alone is enough to torch most of psychology.

But there is more. There’s what’s known as p-hacking, or what’s also called the “garden of forking paths.” This is the idea that when you run a study, you’re constantly making analytical choices—we call these researcher degrees of freedom. So what subjects am I supposed to do this test on? What variables should I control for? What test should I use? Should I log-transform the data when I’m done? Every one of these choices is defensible on its own. But there are enough choices that, by sheer combinatorics, you can almost always find some analysis path that gets you below that magic threshold of 0.05. And so you publish that path. You don’t mention the other paths that you tried.
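You can simulate this one too. In the sketch below there is no effect at all, but the researcher gets to pick among a handful of individually defensible analysis paths and report whichever one “worked”, and the false-positive rate climbs well above the nominal 5%. Again, all invented data:

```python
# The garden of forking paths in miniature: one null dataset,
# several defensible analysis choices, report the best-looking one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, n = 5000, 40
false_positives = 0

for _ in range(n_sims):
    a = rng.normal(0, 1, n)
    b = rng.normal(0, 1, n)                  # the null is true
    a_trim = a[np.abs(a) < 2]                # "outlier removal"
    b_trim = b[np.abs(b) < 2]
    paths = [
        stats.ttest_ind(a, b).pvalue,                      # all subjects
        stats.ttest_ind(a[: n // 2], b[: n // 2]).pvalue,  # "first cohort only"
        stats.mannwhitneyu(a, b, alternative="two-sided").pvalue,  # different test
        stats.ttest_ind(a_trim, b_trim).pvalue,            # after exclusions
    ]
    if min(paths) < 0.05:                    # publish whichever path "worked"
        false_positives += 1

print(f"false-positive rate: {false_positives / n_sims:.1%} (nominal: 5.0%)")
```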

And you know this dynamic. You’ve probably done a human version of it. You spend a couple of hours on something at work, you try it one way and it fails, you try it another way and it fails, you try it a third way and it works—but not in the way that you expected. You’re not going to tell the boss about all these times that you failed, right? You’re going to tell her the story that presents the thing that you did in a weird way as the thing that you were always planning to do. Researchers just do this with data.

This in particular is called HARKing: Hypothesising After Results are Known. So you set out to find effect A and you don’t find it, but you do notice effect B in the data. So you rewrite your paper as though you were looking for result B all along, and now you’ve got a clean confirmatory result that you can publish. Except, of course, that’s not what happened.

Now, why am I going into all this incessant detail here? Well, it’s because the upshot is that a single study should never settle a question for you. A single study is just one data point in a belief-formation process, and as such it’s only a piece of evidence. If you read a headline that says “some new study shows X”—if you get an Instagram reel telling you that some new study shows X—the appropriate reaction isn’t to update your worldview. It’s probably to say “let’s see if that survives,” because more often than not, it doesn’t.

Instead, what we should be looking for is bodies of evidence—literatures—where you’ve got dozens or hundreds of studies run by different groups of people, in different places, by people with different incentives, all triangulating towards the same kind of conclusion. That’s something that you can lean on. But you have to do the work, figuring out whether the literature you’re looking at actually has that shape, or whether it’s a hundred papers all running the same flawed ritual on the same flawed finding. (And a good example of this is my last lecture on social media.)
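For the curious, here is roughly what leaning on a literature looks like in numbers: a toy fixed-effect meta-analysis, where each study’s estimate is weighted by its precision. The effect sizes and standard errors are invented for illustration.

```python
# A minimal fixed-effect meta-analysis: pool study estimates with
# inverse-variance weights. All figures are invented.
import numpy as np

effects = np.array([0.42, 0.10, 0.05, 0.35, 0.08])  # per-study effect sizes
ses     = np.array([0.20, 0.08, 0.06, 0.25, 0.07])  # per-study standard errors

w = 1 / ses**2                            # precise studies count for more
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = 1 / np.sqrt(np.sum(w))

print(f"pooled effect = {pooled:.2f} +/- {pooled_se:.2f}")
# The flashy but imprecise studies (0.42, 0.35) barely move the pooled
# estimate; what matters is whether precise, independent studies agree.
```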

Look, I’ll write all this up properly at some point, because I know I’ve just hand-waved through a lot of technical stuff. There’s a reading list in the show notes on the p-value mess, on the idea of the null hypothesis and “no evidence”, and so on and so forth. So if you want the more technical stuff, those will get you there.

The Bayesian sequel

So this is the replication crisis, this new thing we’ve been grappling with for the last decade or so. And you see some reform. You see things like people pre-registering studies, so you can’t change what you said you were going to do after you find some weird result you weren’t expecting; or using open data, so everybody can see your workings; or multi-lab studies.

But the ritualism sticks. And it sticks for a reason. Because the ritual is doing work. It’s giving institutions and people a way to make decisions that look defensible, that seem like they’re pointing towards a technical, methodological answer. And it’s giving researchers a way to publish that doesn’t require them to defend judgment calls that they make. And it’s giving readers—you and me, the journalists writing the headlines—a shortcut to understanding the world a little bit better. Because we don’t like to weigh evidence. We don’t like to do all this judgment work. It’s hard. We just want a yes or no. And the scientific ritual launders the ambiguity of the world into a yes or no. And as it turns out, people will do a lot of work to get to that point of not having to think.

You can see this even in the reform to the scientific ritual that I’ve just described. So I’ve spent this whole lecture talking about how problematic p-values are. And p-values are the product of what’s known as frequentist statistics. But there are other ways of doing stats. There’s one called Bayesian statistics. It’s very popular now. It’s more complicated and more philosophically rigorous, which is probably why people didn’t use it in the first place—even though it’s been around for at least as long.

So in response to this crisis, a few academics built an open-source software package to do this new, apparently better kind of statistics. And I say new, but again, Bayesian statistics isn’t new—it’s just newly appreciated. And to provide some defaults in the software, they imported a set of thresholds proposed by a statistician in the ‘30s. Again, I won’t get into the detail, but essentially this guy Jeffreys says that a Bayes factor of over 3 means there’s substantial evidence for our hypothesis, and over 10 means there’s strong evidence.

So the authors of the statistics package called JASP baked these into the defaults. They also picked another default called the prior. And the entire premise of Bayesian statistics is that you need to pick a prior distribution, which is sort of like an expectation of what you think might be true. It’s not really important that you understand what a prior distribution is, because researchers typically don’t. What is important is that this too was picked as a default in JASP. They provided a default prior and then default Bayes factor cut-offs of 3 and 10, and so forth.
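For the curious, here is roughly what one of those Bayes factors looks like in practice. A caveat: this sketch uses the simple BIC approximation to the Bayes factor rather than JASP’s actual default prior, and the data are invented, but the ritual endpoint, comparing the number against Jeffreys’ cut-offs of 3 and 10, is the same.

```python
# A rough Bayes factor for a two-group comparison, via the BIC
# approximation (Wagenmakers, 2007), not JASP's default JZS prior.
import numpy as np

def gaussian_bic(ss_resid, n, k):
    # BIC for a Gaussian model with k parameters and MLE variance
    sigma2 = ss_resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return k * np.log(n) - 2 * loglik

rng = np.random.default_rng(1)
meditators = rng.normal(0.4, 1.0, 50)   # invented data again
controls   = rng.normal(0.0, 1.0, 50)
pooled = np.concatenate([meditators, controls])
n = pooled.size

ss_null = np.sum((pooled - pooled.mean()) ** 2)         # one common mean
ss_alt = (np.sum((meditators - meditators.mean()) ** 2)
          + np.sum((controls - controls.mean()) ** 2))  # one mean per group

bf10 = np.exp((gaussian_bic(ss_null, n, 2) - gaussian_bic(ss_alt, n, 3)) / 2)
print(f"BF10 = {bf10:.1f}")
# Jeffreys' conventions, now software defaults: BF10 > 3 "substantial",
# BF10 > 10 "strong" evidence for the alternative. Thresholds, by ritual.
```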

So now everybody is going around—all these papers using Bayesian statistics are using these default priors and these default Bayes factors. The identical problem that we had with p-values. The same compulsive hand washing, but with different soap.

Where does that leave you

So I guess the question is, where does that leave you? Let me talk a little bit about that, and then I’ll let you go.

I was reading this textbook once. It was a permaculture textbook, actually, by Bill Mollison. But he said something about the scientific method that I keep coming back to. He said:

We can predict only those things we set up to be predictable, not what we encounter in the real world of living and reactive processes.

And what he’s saying there is that the scientific method works beautifully on the things we can constrain enough to make legible to it—falling apples, lab rats in mazes, drug doses in carefully selected populations. But the minute that you take it out into the messy, living, social, breathing world—human beings making decisions in their actual lives—then it starts to creak. Not because the method is bad, but because the world is more than the method can see. And we forget. We forget that we set the conditions up to be predictable, and we just look at the predictable result and call it the truth.

So what can we do about it? I think the answer is pretty straightforward.

Firstly, when you read a single study, or when people quote single studies, the default reaction shouldn’t be “oh, well, the science says.” The default reaction should be: “that’s interesting, let’s see if it survives.” Not in a cynical way, but in the way that understands that one study is the start of a conversation, not the end of one. If it matters to you—if you’re trying to find out what the science says—you don’t want a single study. You want to know what the literature looks like. Has anybody else found this? Have they tried to break it? When they tried, did it hold up? That’s the first thing.

The second thing I think is the bigger one, the more abstract point, which is that the scientific method is just one way of knowing. And it’s a good one, but it’s not the only one. Think of lived experience, of tradition, the intuitions of people who’ve spent decades doing the thing. I mean, you actually know about this last one. It’s why you go to the doctor for a medical issue and not the academic papers. This is, in fact, why we distinguish between medical doctors and academic doctors—MDs from PhDs. I’ll link an article about this in the show notes, but experience and expertise are ways of knowing too. None of these are reducible to a p-value or a Bayes factor. And the fact that science can’t see them doesn’t mean they aren’t there.

There’s this line from the book Life of Pi that I think about a lot. The main character is this kid in India who decides to be Christian and Hindu and Muslim simultaneously, and everybody gets upset. And at one point he says, about people who are dogmatically atheist:

They are brothers and sisters of a different faith. They go as far as the legs of reason will carry them, and then they leap.

And what he’s saying there is that we all have to leap, eventually. The world is just too complicated to know everything. Spiritual or secular, praying or applying the defaults in your statistical software—you go as far as you can, and then you leap.

And that’s why we’re so ambivalent about science. Because at the end of the day, this is what we’re seeing. A bunch of people trying really hard to work things out, but getting it wrong sometimes, because it’s a belief system like any other. And any belief system has failure modes.

So now that you know about them, you can decide when to make the leap. And I reckon that’s a pretty good place to stop.

Until next time.

