The New Scientist cover caught my eye: "Stupidity: Why are humans so varied in their mental abilities?" Finally, I thought, a popular treatment of an important question. It is not entirely fair to expect a popular science magazine to discuss the topic of intelligence in any depth. It is aimed at a general audience, and the best it can do is act as an indicator of what the magazine thinks will play to its readers' world view and capture their attention. The word "Stupidity" certainly did that, with all its negative, disparaging connotations.
So, purely as an indicator of popular views about intelligence, here are a few quotations:
It turns out that our usual measures of intelligence – particularly IQ – have very little to do with the kind of irrational, illogical behaviours that so enraged Flaubert. You really can be highly intelligent, and at the same time very stupid.

Modern attempts to study variations in human ability tended to focus on IQ tests that put a single number on someone's mental capacity. They are perhaps best recognised as a measure of abstract reasoning, says psychologist Richard Nisbett…

Possibly a third of the variation in our intelligence is down to the environment… Genes meanwhile contribute more than 40% of the differences between two people.

"I would probably soundly fail an intelligence test devised by an 18th century Sioux Indian," says Nisbett.
Comment:
Intelligence does not guarantee good decision-making in all circumstances, simply better decision-making in more circumstances than a duller person would manage. Some forms of problem are inherently difficult and ambiguous. For example, it is easier to understand natural frequencies than percentages expressed to decimal places. Apart from intelligence, social pressures and emotional attachments also influence decisions.
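To make the natural-frequency point concrete, here is a small Python sketch using made-up screening-test figures (the 1% base rate, 90% sensitivity and 9% false-positive rate below are purely illustrative). The percentage version needs Bayes' rule with decimals; the frequency version is just counting people, which most of us find far easier to follow.

    # Illustrative (invented) figures for a screening problem.
    base_rate = 0.01        # 1% of people have the condition
    sensitivity = 0.90      # 90% of those with it test positive
    false_positive = 0.09   # 9% of those without it also test positive

    # Percentage version: Bayes' rule with decimal probabilities.
    p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
    p_condition_given_positive = base_rate * sensitivity / p_positive

    # Natural-frequency version: the same sum done with whole people.
    population = 1000
    with_condition = base_rate * population                            # 10 people
    true_positives = sensitivity * with_condition                      # 9 of them test positive
    false_positives = false_positive * (population - with_condition)   # about 89 of the rest do too
    frequency_answer = true_positives / (true_positives + false_positives)

    print(f"Bayes with percentages: {p_condition_given_positive:.1%}")
    print(f"Counting people:        {frequency_answer:.1%}")   # same answer, much easier to see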
Modern IQ tests give one overall figure, and also figures for three or four component indices, usually verbal comprehension, perceptual organisation, working memory and processing speed. The single figure is usually the best predictor, but the others have their place in specific circumstances. The fact that one single number is the best predictor of human achievements is testimony to its power.
40% is the heritability estimate for children, but it rises to 60% or more for adults. A 70/30 split between genes and environment is not a bad estimate for wealthy countries, 50/50 for very poor ones.
Sioux Indians, for all their other skills, did not leave a written record of how they estimated intelligence. The point is misleading, and a poor match with cross-cultural test results. People from profoundly different cultures make the same sorts of errors on culture-reduced tests, and the pattern suggests a largely universal problem-solving capacity. The predictive power of intelligence is similar in culturally different countries.
And just one more thing: if you want to find out about intelligence in a UK publication, why not talk to Ian Deary, who is doing much of the research and has written an excellent short introduction to the topic? If you want an American, why not Earl Hunt, who has given a balanced view in a larger and more up-to-date volume? If you are interested primarily in the importance of intelligence for everyday life, why not talk to Linda Gottfredson?
Anyway, the rest of the article is about Keith Stanovich, who is "working on a rationality quotient". This has yet to be released, and yet to be evaluated against intelligence tests. We do not know what it will add in the way of predictive accuracy to that already achieved by intelligence tests. Similarly, we lack proper large-scale comparative studies of multiple intelligences, emotional intelligence and practical intelligence. If the goal is wide open, why can't one of these pioneers get the ball in the net? All they have to do is develop a test, administer it to a representative sample (at least as good as a psychometric standardisation sample) alongside a validated intelligence test, and then compare the results when predicting some real-life variables. After that, they can market the damn thing. Why the perpetual delay?
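For what it is worth, the comparison being asked for is not hard to sketch. The Python snippet below uses simulated data as a stand-in for a standardisation sample, an invented "rationality quotient", and an invented real-life criterion; every effect size is made up. The only question it poses is the right one: does adding the new test to a validated IQ score buy any extra predictive accuracy?

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000  # a standardisation-sized sample

    # Simulated scores: an established IQ test and a hypothetical new "rationality quotient".
    z_iq = rng.standard_normal(n)
    z_rq = 0.7 * z_iq + 0.3 * rng.standard_normal(n)   # the new test correlates with IQ

    # A "real life" criterion driven mostly by the same ability, plus plenty of noise.
    outcome = 0.5 * z_iq + 0.1 * z_rq + rng.standard_normal(n)

    def r_squared(predictors, y):
        """R-squared from an ordinary least-squares fit with an intercept."""
        X = np.column_stack([np.ones(len(y))] + list(predictors))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals = y - X @ beta
        return 1 - residuals.var() / y.var()

    print("IQ alone:      R2 =", round(r_squared([z_iq], outcome), 3))
    print("IQ + new test: R2 =", round(r_squared([z_iq, z_rq], outcome), 3))
    # The only question is whether the second number beats the first by anything worth having.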
Interestingly, the one thing which shows up in this article is the difficulty people have in understanding that a strong correlation is not a perfect correlation. It is possible for an IQ test to be the best predictor, and yet for it to fall far short of a perfect predictor. The other difficulty which people have is distinguishing between variance that has been accounted for and variance which cannot yet be accounted for. The unexplained variance is not owned by the next person with a fanciful hypothesis: it is merely up for grabs for anyone who can demonstrate additional predictive power. As anyone who has fooled around with multiple regressions will know (after looking at 50 or more of them), getting a high R-squared in behavioural science research is very difficult. Once you have a couple of good predictors it is hard to shift the R-squared up further, even when you add many putative predictors. Often, a couple of independent variables hoover up all the predictive variance.
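A few lines of simulation make both points at once, and again the numbers are invented: a correlation of about 0.5, strong by behavioural-science standards, explains only a quarter of the variance, and piling ten pure-noise "predictors" into the regression barely shifts the R-squared.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000

    ability = rng.standard_normal(n)
    criterion = 0.5 * ability + np.sqrt(0.75) * rng.standard_normal(n)  # built so r is about 0.5

    r = np.corrcoef(ability, criterion)[0, 1]
    print(f"correlation about {r:.2f}, variance explained about {r**2:.0%}")  # strong, far from perfect

    # Add ten "fanciful" predictors that are pure noise and watch R-squared barely move.
    noise_predictors = rng.standard_normal((n, 10))
    X1 = np.column_stack([np.ones(n), ability])
    X2 = np.column_stack([X1, noise_predictors])

    def r_squared(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()

    print("ability alone:       ", round(r_squared(X1, criterion), 3))
    print("plus ten noise terms:", round(r_squared(X2, criterion), 3))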
In his Tractatus, Wittgenstein
intoned: "Whereof one cannot speak, thereof one must be silent." With
more verve and vernacular charm his friend Frank Ramsey quipped:
"What we can't say we can't say, and we can't whistle it either."
If
you can’t explain the variance, you can’t dog-whistle it either.
I've done an IQ test only three times that I can remember (ages 10, 11, 21) but I have read a bit about them (mainly in Eysenck's old paperbacks). It seems to me that they wouldn't really show up perhaps the most glaring deficiency of dim, or even of ordinary, intellects, namely incapacity at abstract thinking. Of course, lots of people who reckon themselves clever indulge in bloody stupid abstract thinking - I cite communists as an example, m'Lud - but the way that most people shut down altogether when faced with abstraction is rather striking. By contrast, fiddling with little puzzles about numbers, shapes and words can seem pretty concrete.
There must, I suppose, be a literature on this?
Yes, there is a profound impact when one moves from the concrete to the symbolic. Any symbolic notation imposes a restriction, because an extra level of complexity has to be mastered. The problem is even more profound, because we do not have objective measures of psychological complexity. I will try to write about this in more detail soon. Meanwhile, the whole of Daniel Kahneman's work could be seen as focusing on that issue, notably his self-diagnosed problem of knowing that psychological experiments always used samples that were too small, yet still continuing to use small samples himself. Most of his example problems require abstract thinking. Gigerenzer is also good on this.
P.S. Is the mis-spelling in your headline intended to be ironic or is it just God's idea of a joke?
Mere stupidy on my part, but a neologism is born.
Ever since I started reading blogs I've been entertained by how many typos make fine jokes. I suppose that there might be some mental mechanism whereby silly typos get caught and corrected but the brilliant ones escape detection?
I suppose it is a boundary issue: if the initial form and final ending (primacy and recency effects) survive rough error checking, then problems in the middle (like not knowing when to stop writing bananana) are not fatal errors. A few of those make jokes. Similarly, if the contextual cues are strong, severe damage to the words can be coped with fairly easily. It is a "Cna yuo raed tihs?" phenomenon.