Alert - IQ scores are meaningless

by Dorian Minors

January 11, 2019


Excerpt: The Intelligence Quotient, or IQ, is commonly considered a representation of your raw intelligence. At least, that’s the folk wisdom. But the folk wisdom is wrong.


IQ scores are meaningless.

Those aren’t my words. They come from Dr. Roger Highfield, the director of external affairs at the Science Museum in London and the first man to bounce a neutron off a soap bubble.

Highfield was commenting on one of the largest online intelligence surveys ever conducted, part of a study he co-authored, now published in the journal Neuron. The Intelligence Quotient, or IQ, is commonly considered a representation of your raw intelligence.

At least, that’s the folk wisdom. But the folk wisdom is wrong. As Highfield puts it:

It has always seemed to be odd that we like to call the human brain the most complex known object in the Universe, yet many of us are still prepared to accept that we can measure brain function by doing a few so-called IQ tests.

The most common IQ test in the Western world is the Stanford-Binet. It was developed around the start of World War I, when military and research institutions shared an interest in assessing the information-processing capabilities of potential recruits to better match them to the various military roles. The researchers who developed the test combined the test items in such a way that half of the people taking the test would score above 100 and the other half would score below. In this way, one could easily compare individuals against the median result: 100.
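This centring of scores is just standardisation: modern tests scale raw results so that the sample mean sits at 100, typically with a standard deviation of 15. A minimal sketch in Python, using made-up raw scores rather than any real test norms:

```python
import statistics

def iq_scores(raw_scores):
    """Convert raw test scores to IQ-style standard scores
    (mean 100, standard deviation 15) relative to this sample."""
    mean = statistics.mean(raw_scores)
    sd = statistics.pstdev(raw_scores)
    return [round(100 + 15 * (s - mean) / sd) for s in raw_scores]

# A hypothetical norming sample of raw scores:
raw = [12, 18, 25, 31, 44]
print(iq_scores(raw))  # the middle of the sample lands near 100
```

Note that the 100 is entirely relative to the norming sample: the score tells you where you sit in a distribution, not what your ‘raw intelligence’ is.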

The test became particularly popular because the score one got tended to be remarkably stable over time. All of a sudden, we believed that we had finally got it: a straightforward measurement of someone’s base level intelligence. Why? Because the score didn’t change!

Because this was 1916 (not that things would necessarily be much different now), the IQ test was then delivered globally, and conclusions were drawn based on gender, social class and race, thereby feeding justifications for a history of discrimination. Indeed, the almost equally popular Wechsler Adult Intelligence Scale and its child-targeted variants are based on similar fundamental concepts and have been used to draw similarly problematic conclusions.

However, the average IQ score appears to increase from generation to generation; the measurement gives Quentin Tarantino and Sharon Stone IQs similar to Albert Einstein’s; higher IQs increase the probability of financial difficulty; and higher IQs are a risk factor for psychological and physiological disorders. All of this suggests that the measurement is not illuminating the essential qualities of a person’s potential.

Indeed, Stanford psychologist Catharine Cox published a curious study after the widespread adoption of IQ testing that illuminates this difficulty. She collected the biographical details of 301 eminent persons of history—from poets and artists to political and religious figures to scientists, philosophers, and soldiers. Documenting her findings over eight hundred pages (a curious study indeed), one of her key motivations was to explore the IQ differences between these extraordinarily accomplished individuals and the general population.

Cox found that while, as a group, these individuals were smarter than the average person, and while the highest IQs ranged up towards 190, the lowest fell in a range between 100 and 110. That is, the least intelligent of the bunch were of merely average intelligence.

More importantly, she found that the relationship between eminence and intelligence was trivial—of no predictive value at all. Cox simply couldn’t use IQ to predict the achievement of these individuals. John Stuart Mill, with an estimated IQ of 190, could not be distinguished from Samuel Taylor Coleridge, whose IQ was at the very bottom of the spread. IQ could tell us nothing about why these individuals contributed so substantially to the world. Instead, IQ as a measure appeared to be rather arbitrary.

We don’t precisely know what IQ is measuring

In 1904, Charles Spearman was messing around with statistics. Spearman, for those who aren’t familiar, invented the Spearman rank correlation coefficient—one of the most commonly reported measures of correlation—as well as the statistical method of factor analysis, another widely used tool in psychological research. Spearman applied his factor analysis to a series of cognitive tests he had conducted on some participants, and noticed that one central factor seemed to influence scores across all of the tests. Spearman called it ‘g’, the general factor.
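For the curious, Spearman’s coefficient is simply a correlation computed on ranks rather than raw values. A minimal sketch, with hypothetical test scores and no handling of tied ranks:

```python
def ranks(xs):
    """Rank values from 1..n (no tie handling, for simplicity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman's rho via the classic formula:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between paired ranks."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical scores for five people on two cognitive tests:
memory = [7, 4, 9, 6, 5]
reasoning = [30, 18, 35, 26, 20]
print(spearman(memory, reasoning))  # 1.0: the two tests order people identically
```

The intuition behind ‘g’ follows from exactly this kind of pattern: when every pair of cognitive tests correlates positively, a single underlying factor can be extracted that accounts for much of the shared variance.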

Spearman also proposed that there was an ‘s’, or specific factor, relating to abilities that varied from test to test. Others took issue with Spearman’s ‘g’ and tried to break the ‘s’ into multiple components that better explained intelligence. These components, notably those put forward by Louis L. Thurstone, became the components that IQ tests such as the Stanford-Binet try to assess.

Highfield took three of these components—short-term memory, reasoning, and verbal agility—to explore what they looked like in the brain. His study found that the three were entirely handled by three distinct nerve “circuits” in the brain. This is a fairly crucial finding. Each circuit has its own individual capacity, which varies from person to person and over the lifespan. As such, no solitary measure could possibly capture intelligence. Adding the circuits together yields a number that is fundamentally meaningless; a number that tells you nothing about an individual’s ability and, as Highfield’s study showed, cannot account for the variation between people and between tests.

Like the heroes in Shakespeare’s tragedies, the IQ is fundamentally flawed.

So what can we infer from IQ testing?

So what does IQ measure? Well, IQ tests measure developed skills: general knowledge, comprehension, vocabulary. Abstract reasoning and problem solving. Awareness of the world. None of these things necessarily relates to one’s baseline intelligence; it is Spearman’s ‘g’ that would explain that. In fact, if you introduce the right stimulus to someone, the scores on the individual components can change dramatically over time—something we knew as early as 1941, according to a seminal analysis on the subject.

But what about this famous stability, the very reason for the Stanford-Binet’s success? Well, a test that is stable over a short period of time may simply reflect the circumstances that characterise that period. In a time of economic depression, and with the immobility that comes with a wartime and post-war society, people aren’t spending a lot of time learning about the world. In other times, this doesn’t necessarily hold true. Indeed, the Flynn effect—the generational improvement in average IQ scores—may simply reflect people’s capacity to access more and different kinds of information as technology improves and as society becomes more globally mobile.

More to the point, people do not typically change their circumstances. Summing up about 50 years of research by a slew of famous scientists, psychologist Stephen Ceci wrote:

IQ scores can change quite dramatically as a result of changes in family environment, work environment, historical environment, styles of parenting, and, most especially, shifts in level of schooling.

People can’t change their history, have little say over their parenting, rarely change their family environment, and rarely change their level of schooling. It’s almost as if, rather than measuring intelligence, IQ is merely a representation of the opportunities one has been afforded (or that they’ve had the nerve to take). So how can we change our IQ? Or more importantly, how can we reveal more of our actual potential?

Well, we have to give ourselves opportunity:

  • Our children must feel safe and secure in their home and family environment, so they aren’t concentrating on satisfying their needs during the critical periods of growth, but are instead exploring, playing and so developing their brains.
  • Firm and structured but permissive parenting leads to the greatest outcomes in terms of children’s intellectual performance.

But, perhaps most relevant for those of us well past our childhood:

  • The historical environment speaks to current events. If you’re in the midst of a war, your IQ might not quite reflect your talent. Even the stress of, say, threats of terrorism might be impacting your true potential. Similarly, if there’s stress at work, you’re equally likely to have a lower IQ score. Less stress means fewer distractions, and a mind liberated thus can be an extraordinarily powerful thing.
  • Finally, Ceci spoke to our schooling. If our education is lacking, we lack those fundamental building blocks that lead to higher levels of learning and performing. Learning must be continuous and of high quality for one to reach their full potential.

What does this mean? Well, it means that if we had a crappy childhood, we might be set back, but we’re not stuck. By working to reduce the stress in our lives, in our homes, in our families, at our places of work and from the world around us, we can vastly increase our intellectual performance. As for learning, you’re on the right track (you’re reading this!), but the more learning you engage in and the more education you receive, the more your underlying potential will reveal itself. It’s not your IQ, it’s how you get there—so get there!

An older version of this article was published by me on Elephant Journal.

