Monday 18 November 2013

Can The Economist be trusted on intelligence?

 

I used to trust The Economist, though I recognised that they were reluctant to discuss human intelligence. It does not fit their world picture, so they generally ignore it. Their line is that human skills are entirely mediated by environmental causes, and they follow this view with relentless enthusiasm.

Their most recent offering is true to form: “Ethnic-minority pupils in England are storming ahead. Every ethnic-minority group that trails white Britons in GCSE exams, normally taken at age 16, is catching up. Bangladeshis used to perform worse than whites; now they do better. Indians have maintained a huge lead. All this despite the fact that ethnic minorities are poorer than average. Control for that, by looking at pupils who are entitled to free school meals, and all ethnic-minority groups now do well.”

http://www.economist.com/blogs/graphicdetail/2013/11/daily-chart-8?fsrc=scn/tw/te/dc/racetothetop

They draw a chart to make the point crystal clear:

[Chart from The Economist: “Percentage point difference from the mean” in GCSE results, by ethnic group]

First, this chart is designed to make any changes look as dramatic as possible. It strongly suggests that white British students have made no progress, and have either been overtaken by some minorities or are about to be overtaken by others. There is an explanatory legend, “Percentage point difference from the mean”, at the top of the graph, but most readers will just look at the lines, which appear to tell their own story.

“Control for…free school meals.” Where have we heard that before? This is the sociologists’ fallacy: the assumption that differences in wealth are entirely due to external causes and are not in any way the consequences of prior differences in intelligence and personality. It is assumed that differences in wealth are outside your agency.

The gap between the free school meals students and the rest depends on the exams taken and the grading of those exams, but an Economist graphic which shows the “gap closing” after a correction for free school meals cannot show that. Also obscured is a highly relevant factor: if these results are to be believed, there has been a massive increase in scholastic ability between 2007 and 2011. Can’t see that? Of course not: we need to go further back to the raw data, or more precisely to the slightly less cooked data from which the graphic was derived.

The Economist’s reference to the Department for Education data is rather vague, and it has taken me some time to track down, but here are the actual GCSE results, with the 2007 results in light blue and the 2011 results in purple.

https://www.gov.uk/government/publications/gcse-and-equivalent-attainment-by-pupil-characteristics-in-england-2010-to-2011

 

[Chart: GCSE results by ethnic group, 2007 in light blue and 2011 in purple (Department for Education data)]

 

The old “white British” level for 2007 is shown by a solid line, the new “white British” level by a dotted line. As you can see, either scholastic ability has zoomed up from a 45% pass rate to a 58% pass rate in 4 years (perhaps as a result of surreptitious genetic manipulation of the student population, a transformation in teachers’ charisma and ability, or something special in the water supply) or the test has been watered down over that period, particularly for lower achievers. Notice that every single ethnic group has gone up, particularly those who were doing badly before. Not the Chinese, however. (By the way, The Economist drops the Chinese entirely, in a case of ethnic ignoral.) What has gone wrong with the Chinese? Smoking dope and playing too many computer games? Bad role models? Rap videos? Or does the new test not have enough scope for already bright scholars to show their high-level skills? We are talking pass rates, after all, not actual marks. Getting 90% on a test and getting 50% count as the same on this graphic: your pass was C or better.
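To make that ceiling problem concrete, here is a rough sketch in Python with invented numbers (not the actual GCSE marks): two groups with very different average marks can post nearly identical pass rates once everything at grade C or above counts the same.

```python
# A rough sketch with invented numbers (not the actual GCSE marks) showing how
# a pass-rate measure throws away information at the top of the range.
import numpy as np

rng = np.random.default_rng(0)
pass_mark = 50  # hypothetical mark needed for a grade C or better

very_bright = rng.normal(85, 8, 10_000)      # assumed marks for a high-scoring group
solidly_average = rng.normal(65, 8, 10_000)  # assumed marks for a middling group

for name, marks in [("very bright", very_bright), ("solidly average", solidly_average)]:
    print(f"{name}: mean mark {marks.mean():.0f}, pass rate {(marks >= pass_mark).mean():.0%}")

# Both groups report pass rates close to 100%, even though their average marks
# differ by about 20 points: on a pass-rate chart they look the same.
```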

Now consider the following possibility: if you suddenly decide to give everyone an additional mark for holding a pencil, you will get a higher pass rate. The pencil bonus will have least effect on the brightest students, who are already getting high real scores. However, it will make everyone else look good, including the teachers. A mark for holding a pencil would be too blatant, but a few more marks for giving very simple answers will have the same effect. The simplest tricks are the best. The Economist article and the Department for Education report are predicated on these being real gains.

The strong suggestion that the exams have got weaker is sustained by the findings on free school meals. If needing a free school meal is an indicator of low ability as well as bad luck, then one would expect smaller differences between “high ability/good luck” students and “low ability/bad luck” students on easy subjects (all will tend to get prizes) but larger and continuing differences on harder subjects. The data bear out this interpretation.
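To show the mechanism, here is a crude simulation, again with invented distributions rather than real GCSE marks: add a flat bonus to every script (the “pencil bonus” above, i.e. an easier exam) and the pass-rate gap between a higher-scoring and a lower-scoring group shrinks, even though the underlying marks have not moved at all.

```python
# A crude simulation of the "pencil bonus": invented mark distributions, a
# hypothetical pass mark, and the same flat bonus added to every script.
import numpy as np

rng = np.random.default_rng(0)
pass_mark = 50

higher_group = rng.normal(60, 15, 100_000)  # assumed "high ability / good luck" marks
lower_group = rng.normal(45, 15, 100_000)   # assumed "low ability / bad luck" marks

def pass_rate_gap(bonus):
    """Percentage-point gap in pass rates after every script gets the same bonus."""
    hi = np.clip(higher_group + bonus, 0, 100)
    lo = np.clip(lower_group + bonus, 0, 100)
    return 100 * ((hi >= pass_mark).mean() - (lo >= pass_mark).mean())

for bonus in (0, 5, 10, 15):
    print(f"bonus of {bonus:>2} marks -> pass-rate gap of {pass_rate_gap(bonus):.1f} points")

# The gap "closes" steadily as the exam gets easier, although nothing about
# the students' real attainment has changed.
```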

“The attainment gap between the proportion achieving 5 or more A*-C grades at GCSE or equivalent including English and mathematics GCSEs is 27.4 percentage points – 34.6 per cent of pupils known to be eligible for FSM achieved this indicator compared with 62.0 per cent of all other pupils. There has been a very gradual narrowing of the attainment gap from 27.9 percentage points in 2006/07.” (My note: this narrowing is probably just measurement error.)

“However the attainment gap between the proportion achieving 5 or more A*-C grades at GCSE or equivalent has narrowed faster by 8.7 percentage points between 2006/07 and 2010/11, with 64.6 per cent of pupils eligible for FSM achieving this indicator in 2010/11, compared with 83.0 per cent of all other pupils.”

It’s a bit hard to work out, isn’t it? The last paragraph shows that if you measure achievement by easy subjects, 83% of the abler students pass and even 64.6% of the less able students pass, so the “gap” is 18.4 points and over 4 years it has narrowed by 8.7 points. If you make the criterion just a little harder, by requiring that the exams include English and Mathematics (which in normal educational systems would be considered the purpose of sending children to school in the first place), then 62% of the abler students pass but only 34.6% of the less able students, a gap of 27.4 points, and that gap stays solid.
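For anyone who wants to check the arithmetic, here are the two quoted gaps worked through:

```python
# Gaps computed from the figures quoted above (2010/11, percentage points).
fsm_incl_eng_maths, others_incl_eng_maths = 34.6, 62.0  # 5+ A*-C including English & Maths
fsm_any_subjects, others_any_subjects = 64.6, 83.0      # 5+ A*-C on the broader measure

print(f"Gap including English & Maths: {others_incl_eng_maths - fsm_incl_eng_maths:.1f} points")  # 27.4, barely down from 27.9 in 2006/07
print(f"Gap on the broader measure:    {others_any_subjects - fsm_any_subjects:.1f} points")      # 18.4, and narrowing much faster
```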

So, the true finding is as follows: no closing of the gap if you require students to read, write and count, but less of a gap and more apparent progress if they are tested on less demanding skills! What does the Department for Education do next? It “corrects” for the real gap revealed by the English and Maths exams by making an adjustment for school meals, on the assumption that this measures poverty and not any ability differences. (I hope you are still with me. I may have lost the will to live while going through these results, so I do not blame you. I think they have been written so that no-one will actually read them, but will have to rely on newspaper summaries.)

How do they do the adjustment? I cannot find it described in the technical appendix, but since they fall for the sociologists’ fallacy they will have done the following: they will have assumed that poor kids would have done as well as rich kids if they had just happened to be rich, so they bump up the results for each ethnic group in proportion to the number of kids in each ethnic group who are poor. So, if most of the children in a particular ethnic group are poor, they get a big boost from this adjustment. Of course, both parents and children may be poor because they have low ability, and cannot easily work in a technological society and/or do not save whatever they can earn, but that possibility is not considered. The “adjustment” or “correction” is made, and the desired argument is “proved”.
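For concreteness, here is the kind of thing I am guessing they do. This is a hypothetical sketch of a proportional bump, not the Department for Education’s documented procedure, and every figure in it is invented.

```python
# Hypothetical sketch of the sort of adjustment described above; my guess only,
# not the Department for Education's published method. All figures are invented.
def adjusted_pass_rate(raw_pass_rate, fsm_share, assumed_fsm_penalty=10.0):
    """Bump a group's pass rate in proportion to its share of free-school-meal
    pupils, assuming the whole FSM shortfall reflects poverty and nothing else."""
    return raw_pass_rate + fsm_share * assumed_fsm_penalty

# A group where most pupils are poor gets a big boost from the adjustment...
print(adjusted_pass_rate(raw_pass_rate=50.0, fsm_share=0.6))  # 56.0
# ...while a group with few poor pupils is left almost where it started.
print(adjusted_pass_rate(raw_pass_rate=58.0, fsm_share=0.1))  # 59.0
```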

My other observation is that the only groups which really stand out are the Irish Travellers and the Gypsy/Roma. Numbers are very small, but it is clear that schooling is not their thing. There is intelligence test data on these groups, which would suggest low ability, but that is for another time. Drop those few students, and the significant finding is that Chinese, Indian, and White and Asian students are leading the pack. Of course, the “proportion of students achieving a particular level” measure is crude. The actual results would be more informative, as would the subjects studied. Some are easy: A-level Psychology and Sociology, for example.

In summary, if you look at the detail, some of The Economist story is borne out, but the detail also reveals that these exams give us only a partial glimpse of what is happening with ethnic minorities, because the pass rates for all students have been rising so fast. By making the White British score an apparently unchanging line they have obscured a fundamental problem with using GCSE scores to track scholastic progress: PISA and TIMSS scores do not show commensurate improvements. Furthermore, the “correction” for free school meals is based on highly questionable assumptions. I am not denying that Chinese children are bright, or that the selection of Indian children of commercial classes who made it to Britain is also bright. The sad truth is that if you make exams too easy, you may well fail to notice when some students make real gains: the test has become too soggy to measure real achievement.

Disclaimer: If you can find further and better particulars about these “adjustments”, I am happy to hear from you. That includes the Department for Education.

5 comments:

  1. Thank you for that Dr T: most revealing. They are all honourable men, no doubt.

    I have spent more time than is sensible over the years trying to explain that tests where the bright bulbs all score more than 90% are no bloody use to an admissions tutor who has to distinguish the very bright from the superbright. The distinction would be hard to make with any reliability under the most favourable circumstances; under modern British conditions one is forced back onto semi-surreptitious entrance exams and questionable interviewing methods. That's "questionable" not in the sense of doubtful morality, but in the sense of doubtful effectiveness.
    My own solution has been to recommend more reliance on arbitrary rules and on games of chance. Specifically, I once suggested that if two candidates were tied in the contest for admission, we should choose as follows.
    (i) If they are both male, admit the younger.
    (ii) If both female, admit the prettier.
    (iii) If one of each, spin a coin.

  2. I don't think it makes any sense to adjust the data in any way at all; what we're interested in is the actual exam results (whilst acknowledging that even they are flawed, for the reasons given).

  3. What percentage of 16-year-old children sit the exam, and does this vary by race? I could not find this in the GCSE reports.

  4. A standardised exam such as the GCSE is highly correlated with IQ. Regarding the relatively mild Chinese performance (still at the top of the pack, I think? The Economist is just being PC, because the Indians are the most favoured immigrants in the eyes of the elites) and the stellar performance of the other groups, the most likely explanation is the dumbing down of the exam:

    1. The harder the exam is, the wider the gaps are, and the more authentic the rankings become.

    2. A much easier exam today not only narrows the gaps but also sends false information about ranking most of the time. E.g. when a question is as easy as “what is 21 + 3?”, almost everyone can get it right (gap narrowing), yet the best student could easily make a slip and end up second or last (false information on ranking). We are witnessing both effects these days.

    Easy exams in essence penalise the best while rewarding the mediocre.

    Indians' performance is no surprise under today's easy GCSE scheme. On average, Indians study much more (for cultural reasons) than the Whites and other groups, the Chinese aside. Therefore, it is entirely possible for Indians to score better than the Whites, for instance, even though they have a much lower average IQ than the latter.

    To prove the above explanation easily, one could select random samples from all these groups, give them all an equivalent amount of training, and then have them sit the Chinese Gaokao exam or an equivalent (by design these are very tough exams, built to rank tens of millions of Chinese students).

    The result is predictable, with clear gaps: the Chinese on top, the Whites second, Indians third... you bet.



  5. @dr james thompson

    I promptly made the two comments above after a Google search led me here, without fully realising that this is a scholarly blog. So my apologies if I was rude in making a mess above instead of writing several concise lines.
