Something has gone wrong at Nature, the former science publication. THE science publication, as was. Perhaps they just don’t like the topic of intelligence, and are hunting for knocking copy, publishing anything critical of tests and examinations.
Only a few days ago I posted some proper work on the GRE, showing that although it is the best predictor, minorities (and to a lesser extent women) are being admitted to US graduate schools despite having lower scores.
An eagle-eyed reader forwards this gem from Nature, in which it is claimed that the Graduate Record Examination is no good and should be replaced by an interview. The piece, by Casey Miller and Keivan Stassun, is entitled “A test that fails: A standard test for admission to graduate school misses potential winners”. After a provocative title like that, one expects a reasoned argument as to why the test fails, and a simple exposition of which tests or procedures succeed. As a rule of thumb, given that we have GRE data going back to 1982, if not earlier, living up to the title would require a proper set of alternative test results going back five or ten years. Multiple intelligence tests, or emotional intelligence tests, or gastro-intestinal intelligence tests. Procedures of which Robert Sternberg approves. Things like that. Anything.
We might also expect some data on over- and under-prediction. Yes, both of those. All tests miss some potential winners and pass some duffers. See R. L. Thorndike, The Concepts of Over- and Underachievement, Columbia University, 1963.
Here is an example of the quality of their argument: “According to data from Educational Testing Service (ETS), women score 80 points lower on average in the physical sciences than do men, and African Americans score 200 points below white people. In simple terms, the GRE is a better indicator of sex and skin colour than of ability and ultimate success.”
At this stage you might wish to turn to other matters, but charitably one might hope the authors would go on to show data confirming that the GRE is a poorer predictor for African Americans than for White Americans. As Jensen pointed out in 1980, tests are not bad because they show lower scores for some groups; they are bad if they lead to poorer predictions for those groups. (That is my short summary of his Bias in Mental Testing.) Instead, when these authors talk about correlations, they mean only that lower scores are associated with some groups of test-takers. They present no data on poorer predictions. I suppose one might say that they perform a public service by showing the results for different genetic groups, clearly showing that Asians are ahead, but they cast it as a case of bias, without evidence. To be consistent they should say that the test is biased in favour of Asians.
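Jensen’s distinction can be made concrete with a toy simulation (entirely hypothetical numbers, not real GRE or ETS data). Two groups can differ by 200 points in mean score while the score predicts later performance by exactly the same rule for both; in that case the mean gap, however large, is not bias in Jensen’s sense, because the regression line relating test score to outcome is the same for each group.

```python
# Toy illustration of Jensen's point: a mean difference between groups is not
# the same thing as predictive bias. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(mean_score, n=10_000):
    """Simulate test scores and a later outcome generated by the SAME rule
    for every test-taker, regardless of group."""
    score = rng.normal(mean_score, 100, n)            # test score
    outcome = 0.5 * score + rng.normal(0, 50, n)      # later performance
    return score, outcome

# Group A has a lower mean score than group B (a 200-point gap).
score_a, outcome_a = simulate_group(500)
score_b, outcome_b = simulate_group(700)

# Fit a separate prediction line for each group.
slope_a, intercept_a = np.polyfit(score_a, outcome_a, 1)
slope_b, intercept_b = np.polyfit(score_b, outcome_b, 1)

print("mean gap:", round(score_b.mean() - score_a.mean()))
print("slopes:", round(slope_a, 2), round(slope_b, 2))
```

Despite the 200-point gap in means, both groups get essentially the same slope (about 0.5), so a given score predicts the same outcome whichever group it comes from. Bias, on Jensen’s definition, would show up as different regression lines for the two groups, and that is the evidence the Nature piece never presents.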
Perhaps (this is a somewhat psychodynamic hypothesis, but I am writing this in a sunny cafe in Totnes, which has a slightly hippy feel to it) the authors are clever sillies, wanting to look good in public while at the same time holding up the examination results for everyone to see. Innocents of some kind. Like those who are opposed to pornography but who, among their many protests and calls for censorship, keep providing detailed links to the websites they find most worthy of condemnation.
Have a look at the paper, just in case I have missed something.
Why does Nature publish stuff like this?