Friday, 11 October 2013

How illiterate is the OECD?

 

The OECD has conducted a study of adult skills in some of the wealthy countries of the world, and the UK papers are aghast at the results. The UK has done badly, with many heavily educated British youths knowing less than their more lightly educated elders. Cue outrage, hurt feelings, and political posturing. If you look at the actual publication, you will find that the key OECD results have been written up in a corporate format: suitably uplifting photos of students staring intently at their homework lurk in the background as the Secretary General makes his opening remarks:

“If there is one central message emerging from this new survey, it is that what people know and what they do with what they know has a major impact on their life chances. The median hourly wage of workers who can make complex inferences and evaluate subtle truth claims or arguments in written texts is more than 60% higher than for workers who can, at best, read relatively short texts to locate a single piece of information. Those with low literacy skills are also more than twice as likely to be unemployed.”

http://skills.oecd.org/SkillsOutlook_2013_KeyFindings.pdf

Well, blow me down. Some people are brighter than others. This is the finding which has emerged from intelligence research over the past century. Read Linda Gottfredson’s “Why g Matters: The Complexity of Everyday Life” (1997), if only page 117, for an explanation of the relationship between literacy, learning and intelligence.

http://www.udel.edu/educ/gottfredson/reprints/1997whygmatters.pdf

Then, for an explanation of the relationship between intelligence at 11 and scholastic attainment at 16, read Ian Deary:

http://emilkirkegaard.dk/en/wp-content/uploads/Intelligence-and-educational-achievement.pdf

Finally, for an explanation of the relationship between intelligence and the time taken to learn a skill to an adequate standard, look at this summary from Gottfredson: “Men of low ability (10th to 30th percentiles) took about 12 to 24 months to catch up with men of higher ability (above the 30th percentile) who had only 3 months’ experience on the job.” (Schmidt & Hunter (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274; Vineberg & Taylor (1972). Performance in four army jobs by men at different aptitude (AFQT) levels, pp. 55-57.) For this reason the US Army has always been allowed to use “ability” (intelligence) tests to select its recruits, and is allowed to reject low-ability applicants simply because it would take too long to teach them the necessary skills.

The military are sitting on a treasure trove of data on how long it takes to train people at different levels of intelligence, and on how far recruits at various levels of intelligence can think for themselves beyond their training. They go for the brightest recruits every time. The higher the rate of unemployment, the better the class of recruits who present themselves to try to get into a well paid, if sometimes dangerous, job. The Army do not care what genetic group recruits come from. In that sense the military are race integrated because they are intelligence integrated: if you can make the grade you get the job. The US armed forces do not talk much about intelligence because they fear they might be prevented from using their best weapon: IQ tests to get good quality people. They have government permission to use intelligence tests, and to reject those who are not intelligent enough, and they do not want to lose that privilege, as many other public service employers have done. So they keep their procedures very complicated and their results obscure.

Anyway, what else has been left out of the report, apart from human intelligence? The OECD view is that the problems arise from people not having the “skills”. Give them the skills and all will be well. That is true, but only given lots of time, patience and resources. As Gottfredson has pointed out, training skills in people of low ability is a very long drawn-out process, and the skills do not generalise easily to other tasks. Her work on the Wonderlic personnel selection test (designed by researchers who did not believe in general intelligence) is, paradoxically, one of the best proofs of general intelligence. Training someone of low ability to do a particular task does not generalise very much at all to other tasks. You are better off getting someone who learns everything at a faster rate.

On the OECD results the UK is placed slightly below average in literacy. National results do not mean very much unless you analyse immigrants separately. PISA does this, and the scores show that immigrants score lower than locals even in the second generation, although the second generation usually does better than the first. When the rate of immigration is high, national “skill” levels drop (except in countries with low intelligence levels which import brighter foreigners to run things).
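
To make the composition point concrete: a national average is just a weighted mean of subgroup averages, so the headline figure can move when the population shares move, even if nobody’s individual score changes. A minimal sketch, with invented subgroup means and shares (not figures from the report):

    # A national mean is a weighted average of subgroup means, so changing
    # the subgroup shares shifts the headline figure even if no individual
    # score changes. All numbers here are invented for illustration.

    def national_mean(groups):
        """groups: list of (population_share, mean_score) pairs."""
        assert abs(sum(share for share, _ in groups) - 1.0) < 1e-9
        return sum(share * mean for share, mean in groups)

    before = [(0.95, 272.0), (0.05, 250.0)]   # small lower-scoring subgroup
    after  = [(0.85, 272.0), (0.15, 250.0)]   # larger share of that subgroup

    print(round(national_mean(before), 1))    # 270.9
    print(round(national_mean(after), 1))     # 268.7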

The much repeated finding that UK youngsters are no better than their elders turns out to be a bit misleading. Both young and old in the UK are within measurement error of each other. It is simply that British youngsters have not shown the gains shown by Korean youths. Not surprising. The British 1870 Education Act ensured access to education long ago. Korea achieved it recently.

I turned to the full report. This shows that the samples in each country were initially assessed regardless of nationality, which includes immigrants. Later in the report immigrants are studied separately, but without separating recent arrivals, and without identifying the immigrant groups in question despite their differences in ability, sometimes above but usually below the locals. As far as I can see, the authors regard immigrants as a fungible commodity. There are later analyses somewhere in which personal and parental education are mentioned, but finding the real data in this publication is difficult and time-consuming.

The following are excerpts from the summary which may surprise the reader with their obtuseness:

“Most of the variation in skills proficiency is observed within, not between, countries.”  Bell curve? Mean differences always smaller than individual differences?

“In all but one participating country, at least one in ten adults is proficient only at or below Level 1 in literacy or numeracy. In other words, significant numbers of adults do not possess the most basic information-processing skills considered necessary to succeed in today’s world.” Bell curve? Every distribution has a lower range?
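
The first of these quips can be checked with a few lines of simulation. With invented country means a few points apart and a common within-country standard deviation (not the survey’s actual figures), nearly all of the variance sits within countries:

    # Minimal sketch: when country means differ by only a fraction of a
    # standard deviation, almost all variance lies within countries.
    # Means and SD are invented, not taken from the PIAAC data.
    import numpy as np

    rng = np.random.default_rng(0)
    country_means = [265, 270, 275, 280]   # hypothetical country means
    sd_within = 45                         # hypothetical within-country SD
    n = 5000                               # sampled per country

    samples = [rng.normal(m, sd_within, n) for m in country_means]
    scores = np.concatenate(samples)
    grand_mean = scores.mean()

    between = sum(len(s) * (s.mean() - grand_mean) ** 2 for s in samples) / len(scores)
    within = sum(((s - s.mean()) ** 2).sum() for s in samples) / len(scores)
    print(f"between-country share of variance: {between / (between + within):.1%}")
    # comes out at roughly 1-2%: mean differences are small beside individual differences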

The authors seem incapable of understanding the normal distribution of human abilities. I decided to skip their carefully crafted presentation and have a look at the methods section. Here are some scattered findings picked up on the way: the Russian sample omits Moscow; some countries have over-sampled minorities; sample sizes range from 4,500 to 27,000, so the authors are right to say we need to pay close attention to the standard errors of the estimates. In fact, I found out much later, in the Reader’s Companion, that most countries were at the 4,000 to 6,000 sample size, which is OK but not great, and only Canada managed 27,000. Deary’s work on IQ and scholastic attainment included 70,000 children, and that was for just one academic paper. Basically, these researchers do not appear to have used entirely proper epidemiological samples, though they certainly used national registers. It is hard to find a single table which compares sample characteristics with population characteristics, let alone a chi-square to identify discrepancies.
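
The missing table is easy to describe: achieved sample counts by, say, age band, set against the counts expected from census proportions, with a chi-square goodness-of-fit test to flag discrepancies. A hedged sketch, with made-up bands, counts and census shares:

    # Sketch of the missing check: does the achieved sample match the census?
    # Age bands, counts and census shares below are invented for illustration.
    import numpy as np
    from scipy.stats import chisquare

    observed = np.array([900, 1100, 1300, 1200, 500])        # sample counts by age band
    census_share = np.array([0.18, 0.22, 0.24, 0.24, 0.12])  # population proportions
    expected = census_share * observed.sum()

    stat, p = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {stat:.1f}, p = {p:.4f}")
    # a small p-value flags that the sample composition drifts from the census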

The correlation between proficiency in literacy and numeracy at the individual level for the entire sample is 0.87 (see Figure 2.9). This strongly suggests a common factor, but that is not discussed. Why not show a correlation matrix of the main cognitive variables and do a principal components analysis? Numeracy, they say, has a stronger relationship to wages than does literacy. Yes, because it is a better measure of intelligence: it is more demanding. These authors tend to list their results rather than try to understand them, and all the important matters are strung out in a series of addenda.
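
The analysis I was hoping for is not hard to sketch. Take the correlation matrix of the main cognitive measures and look at the first principal component. The report gives only the literacy-numeracy correlation (0.87); the problem-solving correlations below are assumed for illustration:

    # Sketch of the missing analysis: principal components of the correlations
    # between the cognitive measures. Only the 0.87 literacy-numeracy figure is
    # reported; the problem-solving correlations are assumed.
    import numpy as np

    R = np.array([
        [1.00, 0.87, 0.75],   # literacy
        [0.87, 1.00, 0.75],   # numeracy
        [0.75, 0.75, 1.00],   # problem solving (assumed correlations)
    ])

    eigvals = np.linalg.eigvalsh(R)[::-1]     # largest first
    print(eigvals / eigvals.sum())
    # the first component carries about 86% of the variance: one general factor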

There are no mentions of “intelligence” in the text, but the word can be found in two of the references. Presumably the censor missed those, or simply had to concede that some researchers use the term. No mention of “genetics”. 76 uses of “ability”. 68 uses of “cognitive”. Note these code words for your corporate survival. You may have ability, you may even have cognitive ability, but woe betide you if you have intelligence.

“Across the countries involved in the study, between 4.9% and 27.7% of adults are proficient at the lowest levels in literacy and 8.1% to 31.7% are proficient at the lowest levels in numeracy. At these levels, adults can regularly complete tasks that involve very few steps, limited amounts of information presented in familiar contexts with little distracting information present, and that involve basic cognitive operations, such as locating a single piece of information in a text or performing basic arithmetic operations, but have difficulty with more complex tasks.”

That, in a nutshell, is the problem of the normal distribution of skills. You can shift the distribution downwards or upwards. The shape will change somewhat depending on what sort of factors are holding people back (disease, malnutrition, social restrictions). You cannot get rid of variation. However you define the levels, and wherever you set the cut-offs, you will find a distribution of abilities. How you deal with such disparities is a social issue. You cannot educate people out of showing individual differences, not if you are honest about displaying the results. So, if you do not want “any child left behind” you will have to prevent all children from working at their own pace; the slowest pace will have to be imposed upon all. Finally, although the authors do not intend it, the description they give in the paragraph above is a good explanation of what it means to say that one person is more intelligent than another.
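
The arithmetic of cut-offs is worth a couple of lines. Raise the mean of a normal distribution and the proportion below any fixed threshold falls, but a lower tail always remains wherever the boundary is set. The mean, SD and cut-off below are purely illustrative:

    # Illustration: raising the mean shrinks, but never removes, the lower tail.
    # Mean, SD and cut-off score are invented for illustration.
    from scipy.stats import norm

    sd = 45         # hypothetical SD of proficiency scores
    cutoff = 225    # hypothetical "Level 1 or below" boundary

    for mean in (260, 275, 290):
        below = norm.cdf(cutoff, loc=mean, scale=sd)
        print(f"mean {mean}: {below:.1%} below the cut-off")
    # about 22%, 13% and 7% respectively: smaller each time, never zero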

More gems: “Foreign-language immigrants with low levels of education tend to have low skills proficiency. Immigrants with a foreign-language background have significantly lower proficiency in literacy, numeracy and problem solving in technology-rich environments than native-born adults, whose first or second language learned as a child was the same as that of the assessment, even after other factors are taken into account. In some countries, the time elapsed since arrival in the receiving country appears to make little difference to the proficiency of immigrants, suggesting either that the incentives to learn the language of the receiving country are not strong or that policies that encourage learning the language of the receiving country are of limited effectiveness.” Note that the skill differences are attributed to language alone, and that failure to learn the language is put down to weak incentives or ineffective policies. Language may be part of the picture, but the authors do not consider that ability levels may vary between immigrants and locals regardless of language.

Dissatisfied, I turned as a last resort to the Reader’s Companion. http://www.oecd.org/site/piaac/Skills%20(vol%202)-Reader%20companion--full%20v6%20eBook%20(Press%20quality)-27%2009%200213.pdf 

“Read this one first”, I thought, but it was a disappointment. Finally, I realised I needed to read the Technical Report. At this stage I gave up, fearing, dear reader, that you would have lost interest long ago. For all I know, there are secret messages in the repetitive slabs of tabulated data. I could not find a humble table of correlation coefficients between the main measures, let alone a factor analysis. There are some regression lines for country data, which are most welcome. Otherwise, it is death by a thousand tabulations.

Frankly, this is less well described than the average social science paper, and that is saying something. The whole thing is back to front: policy implications and conclusions are proclaimed first, then more conclusions are trumpeted, then some findings are picked out, and finally, way in the background, they reveal some of the things you need to know to figure out whether they have got it right, or even vaguely right. Perhaps our standard sequence in academic papers makes sense after all: explain the problem, describe the subjects and the methods, report the results, then discuss them, including why they may be wrong. And avoid having anything to do with the production of corporate brochures.

This lump of a report is not all bad. One can compare one country with another, which is the sort of thing governments like doing. I really believe that somewhere in the mass of the extended report there may be good things. On a broader matter, I am in favour of people measuring skills. That makes sense, because employers need a skilled workforce, and economies prosper if skilled people are moved to where they can contribute most. The report contains descriptions of skill levels, and that is a good thing. Skills make sense: if you say that someone has the skill to drive a car, but not the skill to service a car (another Gottfredson quip), that immediately makes sense to most people. We can distinguish between a driver and a mechanic. We can also understand that someone who can only handle one concept at a time should not be given the task of integrating disparate conceptual inputs. That cuts out being in the control room of most industrial processes. It does not preclude employment as a university teacher, where managing one concept may lead to a successful career.

The problem with the “skills only” approach is that it strongly implies that it is only a matter of getting the right teacher, and the right attitude, and you can master all tasks. If only the OECD could help me with structural equation modelling! Even a small grant would make all the difference.

If you want to say anything useful at all about why people don’t have the required skills, you have to have a measure of their ability on the one hand, and a measure of the effectiveness of teaching on the other. (In that way you can judge to what extent, and for which pupils, teaching makes a difference.) Absent either of those measures, you have an interpretive problem. Absent both, you have a muddle.
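
To see the interpretive problem in miniature: if skill depends on both ability and teaching quality, and the two are correlated, a model that omits the ability measure hands part of its effect to the teaching coefficient. The toy simulation below uses invented weights, not anything estimated from the PIAAC data:

    # Toy model: skill = 0.7*ability + 0.3*teaching + noise, with teaching
    # mildly correlated with ability. All weights are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    ability = rng.normal(0, 1, n)
    teaching = 0.4 * ability + rng.normal(0, 1, n)   # brighter pupils get slightly better teaching
    skill = 0.7 * ability + 0.3 * teaching + rng.normal(0, 1, n)

    # full model: skill ~ ability + teaching (recovers roughly 0.7 and 0.3)
    X_full = np.column_stack([np.ones(n), ability, teaching])
    print(np.linalg.lstsq(X_full, skill, rcond=None)[0][1:])

    # ability omitted: skill ~ teaching only (coefficient inflates to about 0.54)
    X_red = np.column_stack([np.ones(n), teaching])
    print(np.linalg.lstsq(X_red, skill, rcond=None)[0][1:])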

6 comments:

  1. Good post, thanks. In the sentence

    "Her work the Wonderlic personnel selection test (designed by researchers who did not believe in general intelligence) paradoxically is one of the best proofs of general intelligence."

    is there a word missing at the beginning?

    Replies
    1. Thank you. Her work on the Wonderlic..... I have corrected it.

  2. I sometimes think that the only meaningful measure of good teaching is your own experience of it. At school I knew, I just knew, that two of my teachers were excellent. Decades of reflection haven't taught me much more than that they were indeed. At university I think I encountered just one excellent teacher: I'm not entirely sure because it seemed to matter less, so there's more chance that I've overlooked someone.

    On the other hand it's really easy to identify lousy teachers, and in my experience almost all of the taught would agree on who those are.

  3. I agree, but we need a better measure than having you sit through the education system of all OECD countries. Perhaps looking at a good quality video of the first 5 minutes of each lesson would suffice.

  4. "we need a better measure": I wonder - probably that depends on what we intend to do with it.

  5. Thanks for this stimulating and evaluative comment!
    There are different paradigms: intelligence vs. student achievement, or student competence, or literacy, or student accomplishment, or knowledge, or skills, or whatever term comes next!
    Comments like yours will help to show that they are all intelligence (in the broader sense), or intelligence and knowledge.
