Monday 16 June 2014

Nature stumbles again

Something has gone wrong at Nature, the former science publication. THE science publication, as was. Perhaps they just don’t like the topic of intelligence, and are on the lookout for knocking copy, publishing anything critical of tests and examinations.

Only a few days ago I posted some proper work on the GRE, showing that although it is the best available predictor, minorities (and to a lesser extent women) are being admitted to US graduate schools despite having lower scores.

An eagle-eyed reader forwards this gem from Nature, in which it is claimed that the Graduate Record Examination is no good and should be replaced by an interview. The piece, by Casey Miller and Keivan Stassun, is entitled “A test that fails: A standard test for admission to graduate school misses potential winners”. After a provocative title like that one expects a reasoned argument as to why the test fails, and a simple exposition of which tests or procedures succeed. As a rule of thumb, given that we have GRE data going back to 1982, if not earlier, living up to the title would require a proper set of alternative test results going back five or ten years. Multiple intelligence tests, or emotional intelligence tests, or gastro-intestinal intelligence tests. Procedures of which Robert Sternberg approves. Things like that. Anything.

We might also expect some data on over- and under-prediction. Yes, both of those. All tests miss some potential winners and pass some duffers. See R. L. Thorndike, The Concepts of Over- and Underachievement, Teachers College, Columbia University, 1963.

Here is an example of the quality of their argument: “According to data from the Educational Testing Service (ETS), women score 80 points lower on average in the physical sciences than do men, and African Americans score 200 points below white people. In simple terms, the GRE is a better indicator of sex and skin colour than of ability and ultimate success.”

At this stage you might wish to turn to other matters, but charitably the authors might conceivably go on to show data confirming that the GRE is a poorer predictor for African Americans than for White Americans. As Jensen pointed out in 1980, tests are not biased because they show lower scores for some groups, but only if they lead to poorer predictions for those groups. (That is my short summary of his Bias in Mental Testing.) Instead, when these authors talk about correlations, they mean only that lower scores are associated with some groups of test takers. They present no data on poorer predictions. I suppose one might say that they perform a public service by showing the results for different genetic groups, clearly showing that Asians are ahead, but they cast it as a case of bias, without evidence. To be consistent they should say that the test is biased in favour of Asians.
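Jensen's criterion can be illustrated with a toy simulation (my own sketch, with made-up numbers, not ETS data): two groups differ in mean test score, yet one shared rule links score to outcome, so a regression fitted within each group recovers essentially the same line and the test predicts equally accurately for both despite the gap in means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of Jensen's criterion (made-up numbers, not ETS data):
# two groups with different mean test scores, but one shared rule linking
# score to outcome, i.e. the test predicts equally well for both groups.
def simulate_group(mean_score, n=10_000):
    score = rng.normal(mean_score, 15, n)
    outcome = 0.5 * score + rng.normal(0, 10, n)  # same rule for everyone
    return score, outcome

score_a, outcome_a = simulate_group(100)  # higher-scoring group
score_b, outcome_b = simulate_group(85)   # lower-scoring group

slope_a, intercept_a = np.polyfit(score_a, outcome_a, 1)
slope_b, intercept_b = np.polyfit(score_b, outcome_b, 1)

print(f"Group A regression: outcome = {slope_a:.2f} * score + {intercept_a:.2f}")
print(f"Group B regression: outcome = {slope_b:.2f} * score + {intercept_b:.2f}")
# The fitted lines nearly coincide: equal scores imply equal predicted
# outcomes, so by Jensen's definition this simulated test is unbiased,
# even though the two groups differ in mean score.
```

On Jensen's view a biased test would instead show different regression lines for the two groups, with the same score predicting different outcomes depending on group membership; that is the evidence the Nature authors would need, and do not present.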

Perhaps (this is a somewhat psychodynamic hypothesis, but I am writing this in a sunny cafe in Totnes, which has a slightly hippy feel to it) the authors are clever sillies, wanting to look good in public while at the same time holding up the examination results for everyone to see. Innocents of some kind. Like those who are opposed to pornography, but who among their many protests and calls for censorship keep providing detailed links to the websites they find most worthy of condemnation.

Have a look at the paper, just in case I have missed something.

Why does Nature publish stuff like this?


  1. "Why does Nature publish stuff like this?"

    Because, since at least 2007, Nature cares more about political correctness and promoting Leftist causes than they do about science.

    A LOT more.

  2. Like I posted at Information Processing, they seem to confuse a correlation of 0.5 with "just as good as the flip of a coin".

    Furthermore, 700+ on the quantitative section really isn't asking a lot. I have a lukewarm IQ and always got a perfect or near-perfect score when taking practice tests. If you think I am bragging, please see the sample questions published by ETS: they should be a breeze for anyone considering graduate studies in maths, physics or computer science. Even bright high-schoolers should be able to do them.

    Btw, the reader comments Nature publishes about intelligence are just as inane.

  3. To my surprise, without any knowledge of Maths (beyond that badly taught to a 16 year old) I was able to answer some of the questions. So, perhaps with some instruction (say 10,000 hours)..........

  4. Yes, the general consensus is that the GRE quantitative section is not particularly harder than the high school SAT math. As makes sense, it also correlates strongly with the SAT, up to the GRE's rather low ceiling.

    I would say the authors of this piece do present one of the most braindead arguments I have seen in some time. I truly don't understand how this glaring flaw would get past any editor.

    The authors clearly chart their data based on the percentage of accepted/enrolled graduate students in the relevant disciplines: not the percentage of GRE test takers, nor even the percentage of applicants.

    In other words, they show that most men have to score above 700 on the GRE to get admitted to these various fields. Minorities and women do not; the report shows they are still being admitted with lower scores in the first place. The article's own statistics also roughly reflect the proportion of undergraduates who obtain bachelor's degrees in the relevant fields, so minorities and women are not being underrepresented in the bachelor's-to-PhD transition.

    Relaxing GRE score requirements would mean, if anything, allowing many more (white and Asian) men with low scores to also get admitted to graduate school in place of the distribution of candidates currently admitted.

    1. It seems to have gotten a little harder since I worked on it, since now a top score means being in the top two percent of test takers, while according to the 2006-2009 statistics six percent of test takers got a perfect score.

      Erm, if I am reading these stats right (pages 17-19), one in twenty of those intending to do graduate studies in home economics got a perfect GRE quant. For communications majors, one in ten got a perfect quant and one in five scored 700+. And about one in six of those hoping to go to graduate school in home economics got a 700+ quant. Aren't these considered easy subjects?

      For those taking the test in a field where math was much used, about half scored 700+, so that is not a very strict requirement. I would be very suspicious of claims that students scoring below that are cut out for graduate school.

  5. Nature panders -- & thereby denies nature itself.

    there is a substantial & consistent body of research in the field - tests predict equally accurately for all groups - a low score on the test is predictive of a poor outcome
    (& correlations between test & outcomes would be much higher if we allowed low-ability folks to take the test AND to attend grad school).

    sad that Nature pretends that over a century of intelligence research does not exist in order to pander & to appear concerned & enlightened.

  6. I dared submit some of my crap to Nature and Science -- resoundingly rejected (likely rightfully so). Could be sour grapes, but the quality of the social science they publish there sucks (e.g., Science's Oswald & Wu article on US state well-being is embarrassing). Perhaps they should stick to hard science.

    Btw, GRE/GMAT scores correlate nicely with elementary cognitive tasks...

  7. James, you focus here on Nature, but the problem of course is endemic throughout the academic, business and government sectors. A truly depressing conclusion.
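The point in comment 2, about confusing a correlation of 0.5 with the flip of a coin, can be checked with a quick simulation (my own sketch, assuming bivariate normal scores and outcomes): at r = 0.5 the higher scorer of two randomly chosen candidates turns out to be the better performer about two thirds of the time, not half.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: how often does the higher scorer of two random candidates also
# have the better outcome, when test and outcome correlate at r = 0.5?
r, n_pairs = 0.5, 200_000
cov = [[1.0, r], [r, 1.0]]

# Draw (score, outcome) for two independent candidates per trial.
draws = rng.multivariate_normal([0.0, 0.0], cov, size=(n_pairs, 2))
cand1, cand2 = draws[:, 0, :], draws[:, 1, :]

same_order = (cand1[:, 0] > cand2[:, 0]) == (cand1[:, 1] > cand2[:, 1])
p_concordant = same_order.mean()
print(f"P(higher scorer performs better) ~ {p_concordant:.3f}")
# For bivariate normals this probability is 1/2 + arcsin(r)/pi,
# which equals exactly 2/3 at r = 0.5 -- far better than a coin flip.
```

A coin flip would give 0.5; a correlation of 0.5 moves the odds of picking the better candidate to two in three, which over thousands of admissions decisions is anything but useless.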