How does the brain 'think'? Pt. II

by Dorian Minors

July 19, 2024

Analects  |  Newsletter

Excerpt: In part two of a series explaining my PhD, I talk about one example of the kind of thinking that really does incontrovertibly appear to be higher-order, non-routine thought. If you have the word 'blue', but the word is coloured red, and I ask you to name the colour, not read the colour-word, you're going to have trouble. You've been told to name colours, but you automatically want to read the words. You have a *conflict*. Much of my PhD asked how the brain might solve this kind of conflict.

The brain probably enhances colour information, inhibits word information, or some combination to solve the Stroop task. I found inhibition, but really, this is more of a catalogue of how hard brain science really is.


Article Status: Complete (for now).

This is part two of a series trying to explain my PhD, because people keep asking. I want to point out that my initial response to this is always “I promise that you won’t care”, but no one ever believes me. So, as a punishment, I’m going to make you read it. For three articles. You did this to yourselves.

In part one1 I pointed out that most of what humans do we probably wouldn’t consider ‘thinking’. Mostly we respond completely automatically to the world around us. Driving is a good example of this. A very complex, and dangerous, task. But many people have experienced the strange sensation of arriving home and having no memory of driving there—we were on autopilot the entire route. Even something that might seem superficially ‘thinky’, like having a conversation, is often automatic. You rehash opinions you’ve already formed, using words that you’ve used before and sentences that come naturally in patterns that you’ve learned. It’s actually hard to find examples of ‘thinking’ that really do incontrovertibly appear to be higher-order, non-routine thought.

The chapters of my PhD that I’m about to write about explore one of these examples. But, fair warning, you’ll learn much less about the brain reading this than you will about how hard it is to get at something the brain is doing.

The Stroop task: a perfect example and a horror show

The Stroop task is a wonderful example of what most people would take to be the kind of ‘thinking’ that we’d be interested in getting at. Something non-automatic, where something higher-order arbitrates over lower-order stuff. So you have:

blue

green

red

And I ask you to ignore the words, but name the colours. This is a bit hard. You read words all day, every day. You almost never name colours. So here, you have to overcome your automatic impulse to read the words, so you can name the colours. It’s an effortful experience, particularly when you’re doing a whole bunch of them. And you’re going to be slower and more error-prone than if you were to do either of the tasks when you didn’t have the other task competing for your attention (naming colour patches, or reading the words in black ink).

Now, at a high level, the Stroop effect is easy to explain. You have one mouth. You can’t name colours and read words at the same time. You’ve been told to name colours, but you automatically want to read the words. You have a conflict.

But as you dig into the Stroop task, you start to realise that, actually, it’s not easy to explain at all. It’s not really that your mouth is trying to form the colour and the colour-word at the same time. The conflict is happening somewhere else. But where?

  • Is it in the bits of the brain responsible for making the mouth do sound? Both colour-naming and word-reading need to make mouth sounds, and while you’re trying to consciously do this for the colour naming, maybe the automatic word reading is coming into the mouth-brain bits and making a mess of the process.
  • Or, maybe there’s some interference coming from the parts of the brain that deal with the meaning of colours—you have colour words, and colours being translated into words,2 and somewhere in there the two are getting mixed up.
  • Or maybe it’s happening even earlier—your eyes are drawn to the words, because you’re used to reading, but then you have to sort of muscle your attention to the colour to gather information about it.

It’s worth pointing out that the Stroop effect was first brought into the academic literature in 1935 and almost 100 years later, we’re still trying to work out what the hell is going on. It’s a mess.

Each of these things has some kind of bearing on how the task might be solved: how the brain might overcome the interference from the word-reading process and do the colour-naming one instead. If it’s some kind of early interference, then maybe the brain is strengthening the attention you place on the colour and/or reducing the attention you place on the word. If it’s in the middle, then maybe it’s boosting the semantic or lexical processing of the colour->word transformation, or trying to filter out the lexical processing of the word itself. If it’s late, then maybe it’s trying to force through the colour mouth sounds, or trying to hold back the word mouth sounds. All of these imply the involvement of different cognitive resources and different brain bits.3

Fortunately I skipped all of that stuff

The good thing in all this is that most strategies for solving the Stroop task come down to either enhancing the colour-stuff, inhibiting the word stuff, or some combination of the two. And that’s what I wanted to work out. Is the brain enhancing colour information? Is it inhibiting word information? Is it doing both?

So all I had to do was take the Stroop task, and compare it to some other tasks. One where word information wouldn’t need to be inhibited, and one where the colour information wouldn’t need to be enhanced.

[Image: the words red, blue, and green, with word-like shapes that resemble them below.]

My stimuli. You have colour words, and you have 'falsefonts': word-like shapes that resemble the colour words, but aren't readable.

In my Stroop task, people would name the colours of colour words. Sometimes the colour-word and the colour would be the same, but mostly they’d be different. The word information and colour information would be in conflict. Something would need to be enhanced and/or inhibited.

In the ‘colour-baseline’, I had people name the colour of ‘falsefonts’: word-like shapes that resemble the colour words, but aren’t readable. Here, the colour information isn’t getting so much conflict from the word-like shapes—it’s much easier to name the colour of one of these ‘falsefonts’ than it is a colour-word. As such, if the brain solves the Stroop task by enhancing colour information—by making it stronger than the word information—then we’d expect the brain to be doing more colour stuff in the Stroop task than in the colour-baseline.

I also had the physical size of the stimuli change. Sometimes they’d be big, sometimes normal, and sometimes small. So in my last condition, I had people name the size of the Stroop stimuli. Here, the colour-words are still there, and so are the changing colours of those words. But because the colour-words don’t overlap with the size words people are trying to say, they don’t interfere with each other as much. This was the ‘word-baseline’. Here, if the brain needed to inhibit word information—make it less strong than the colour information—to solve the Stroop task, then we’d expect the brain to be doing less word stuff in the Stroop task than in the word-baseline.

Feel like you’ve kept up so far? Let me ruin that for you.

Brain scanning sucks

I won’t spend really any time talking about how much work went into the design of those stimuli. Matching the words and the falsefonts for pixels. Trying to decide how word-like the falsefonts should be to ‘match’ the words, but not be so word-like that we have essentially the same problem as the Stroop task. Making sure the colours were evenly spaced apart on the colour wheel. Also, which colour wheel? There are like five. What sizes should I pick? If I made them too big, people started making a different kind of error. If I made them too small, then the task became too easy. And so on, and so forth.

I also won’t spend much time talking about picking my brain regions. Only that I wanted a colour-processing region, and a word-processing one. But, did you know that if you ask two different researchers where the colour region of the brain is, they’ll give you different answers? Did you know that the ‘visual word form area’—where the representations of word forms are supposed to happen—might not even exist? For either word-region or colour-region, should I look in the entire region? Or in one of the subdivisions? How do I know if the regions are even in the same place for each participant? It’s not uncommon to have brain regions move around from person to person—the brain is a dynamic system and heads come in all different shapes and sizes besides.

All of that and we haven’t even gotten into the main problem. When most people talk about changes in brain activity, they’re talking about univariate changes—how big the brain activity was in one condition over another. So in my experiment, we might expect to see more activity in the colour region of the brain during the Stroop task than the colour-baseline if the brain was enhancing colour information to solve the problem. Similarly, we might expect to see less activity in the word region of the brain during the Stroop than the word-baseline if the brain is inhibiting word information to solve the problem. But magnitude of change isn’t always that informative. There’s no particular reason why inhibiting something would be less work than enhancing something. As one paper put it, “all activities come at a cost and the brain eats [what] it needs!”4

So I wanted to actually look at the information directly, not just brain activity. To do that, I used a method called multivariate pattern analysis (MVPA). Instead of just looking at average brain activity, MVPA examines the patterns of activity across the brain.5 The primary method of doing this is known as pattern classification or ‘decoding’. Here, we use a machine learning algorithm to analyse the neuroimaging data gathered during an experimental task. So in fMRI, I’d look at the blood usage (BOLD signal) across voxels (little cubes of brain—I had 100 voxels in each brain region). To make this clear, imagine a task where participants differentiate between faces and houses. In my 100-voxel region of the brain, we might have a different pattern of activity across the voxels during trials where faces are shown than during trials where houses are shown. We’d take some trials and give them to the algorithm, telling it which trials were faces, and which were houses. The algorithm’s job is to learn to recognise the difference in activity patterns across voxels that distinguish the two. After training, we test the algorithm with trials it’s never seen before to see how accurately it can predict whether the data is from a face or house trial. The accuracy of the algorithm tells us how much information about faces or houses is present in the brain data. If it’s above chance levels, then we know there is some kind of information in the pattern of brain activity that is different between faces and houses in our brain region.6
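To make the decoding idea concrete, here’s a toy sketch in Python. Everything in it is made up for illustration: synthetic ‘voxel’ patterns for two conditions, and a simple nearest-centroid classifier standing in for the machine learning algorithm (real analyses typically use fancier classifiers and proper cross-validation):

```python
import random

random.seed(0)
N_VOXELS = 100  # matching the 100-voxel regions described above

# Hypothetical "true" activity patterns for the two conditions (faces vs houses)
face_pattern = [random.gauss(0, 1) for _ in range(N_VOXELS)]
house_pattern = [random.gauss(0, 1) for _ in range(N_VOXELS)]

def simulate_trial(pattern, noise=1.0):
    # one noisy measurement of the underlying pattern (a fake fMRI trial)
    return [v + random.gauss(0, noise) for v in pattern]

# training trials (labels known) and held-out test trials
train = [(simulate_trial(face_pattern), "face") for _ in range(20)] + \
        [(simulate_trial(house_pattern), "house") for _ in range(20)]
test = [(simulate_trial(face_pattern), "face") for _ in range(10)] + \
       [(simulate_trial(house_pattern), "house") for _ in range(10)]

def centroid(trials):
    # average activity in each voxel across trials
    return [sum(vals) / len(trials) for vals in zip(*trials)]

# "training": learn the average pattern for each condition
face_centroid = centroid([t for t, lab in train if lab == "face"])
house_centroid = centroid([t for t, lab in train if lab == "house"])

def classify(trial):
    # nearest-centroid: assign whichever learned pattern is closer
    d_face = sum((a - b) ** 2 for a, b in zip(trial, face_centroid))
    d_house = sum((a - b) ** 2 for a, b in zip(trial, house_centroid))
    return "face" if d_face < d_house else "house"

accuracy = sum(classify(t) == lab for t, lab in test) / len(test)
print(accuracy)  # above 0.5 (chance) = the region carries face/house information
```

Because the two underlying patterns differ, the classifier generalises to trials it has never seen and scores above chance, and that above-chance accuracy is the thing a real decoding analysis reports.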

What did I find?

So, I had three conditions: Stroop, colour-baseline, and word-baseline. Across these three conditions I looked at two regions—a word region, and a colour region. And essentially I looked to see whether there was more colour information in the colour region during the Stroop task than the colour-baseline (where the colour wouldn’t need so much enhancing), and/or whether there was less word information in the word region during the Stroop task than the word-baseline (where the word wouldn’t need so much inhibiting).

I found weak evidence that the word information was inhibited.

I bet you’re glad you read all that.

Outro

The real contribution of these chapters of my PhD didn’t come from the findings. It came from the establishment of the baseline conditions, and the methods of exploring them. If you recall, we don’t really have a good idea where the word-stuff and colour-stuff are fighting. It’s perfectly possible I was looking in the wrong spot. And in fact, I looked in two colour regions, and two word regions. I only found evidence for inhibition in one of the word regions, and that evidence went away when I analysed it a bit differently. I wouldn’t be that surprised if it turned out I was looking in the wrong place.

Fortunately, with these baselines and these methods, in theory, you could look all over the brain and see where the brain might be enhancing and/or inhibiting. For example, I also looked in a set of regions that we think are pretty important in the kind of ‘thinking’ I talked about earlier—executive/controlly type thinking. I found that it always had information about whatever was relevant to the task. In the colour-baseline and the Stroop, the colour information was higher. In the word-baseline (where participants named sizes) there was size information. I never saw word information. This is a common finding in these regions, but it’s interesting that I saw the (possibly) controlly brain bits preferring colour information and the word region (maybe) inhibiting word information during the Stroop task. It could imply that the brain is doing more than one thing when it tries to solve this problem.7

So it wasn’t the most exciting finding. But I still had plenty to write about. And for a PhD, that’s all that really matters.

You can find part three here, and part one here.


  1. Part three is here

  2. For fun, this is contentious wording. Do colours need to be ‘translated’ into words? Would that process overlap with straightforward automatic word reading? Who knows! Not I. What I do know is that I spent several thousand words complaining about the Stroop task in my introduction to my PhD, the Stroop chapters, and the discussion of my PhD, and every time my supervisors told me to tone it down because it was a buzzkill. 

  3. It goes without saying that this is a non-exclusive list of what could be happening. 

  4. Lauritzen, M., Mathiesen, C., et al. (2012), Neuronal Inhibition and Excitation, and the Dichotomic Control of Brain Hemodynamic and Oxygen Responses. 

  5. For the technically minded masochist, you could check out a pretty good tutorial in Grootswagers, T., Wardle, S. G., et al. (2017), Decoding Dynamic Brain Patterns from Evoked Responses: A Tutorial on Multivariate Pattern Analysis Applied to Time Series Neuroimaging Data. 

  6. There is another method for MVPA called Representational (dis)Similarity Analysis (RSA). I actually used this method, but I’m not going to try and explain in text. It was hard enough in my PhD. The technicalities aren’t really that relevant, and it essentially produces the same thing—a metric of how much information about a stimulus is present in the neural activity of a given brain region. For this article, I’m just going to refer to it as decoding (even though that’s not technically correct). 

  7. This is kind of silly to say. Of course it’s doing more than one thing. But here we might have two distinct mechanisms going on—one which enhances colour and one which inhibits word—or a complex single mechanism that maintains relevant information in order to inhibit irrelevant information elsewhere. 

