I had written out a reply to comments left by Nick Hassey on a previous post
http://drjamesthompson.blogspot.co.uk/2014/11/immigrants-scholastic-ability-and.html#comment-form
but I feel that the comments and my reply raise general points, so they should be given as an additional post. If you look back at the original post, and all the comments, that will give you the context. On the other hand, you might want to skip all that and just read the general points.
Thanks for your comments. I have gone through what I see as your main points. I think we are in more agreement than disagreement, but there are still differences which I feel I need to explain further.
“Progress in education measures a child's absolute attainment conditional upon their previous score.”
No, that was the whole point of my essay. The main use of “progress” in education is to say how far along the path of learning a child has got. If you make progress in maths you know more maths, and you have progressed towards numeracy, full stop.
Defining progress as the residuals on a regression line is a different matter, though it is also interesting. If you use previous scholastic results to do the calculation, you get one particular regression line. If you use intelligence test results you will get another, better, regression line. (For example, you can get reasonable estimates of intelligence before 4 years of age, and thus before much serious schooling has taken place, so it would be a better variable to use. Assessments at 11 years of age are more precise, but you can claim that education has had some influence on those later estimates). If you add some other variables such as poverty into the regression mix you will get other regression lines, and each of those modulating variables will carry assumptions and change the picture somewhat.
The residuals from all those lines, however, are of mixed origin, and will contain error terms as a consequence of measurement error (low reliabilities). They may be due to unmeasured variables like motivation, but they could also be due to presumed acculturation effects. As I recall it, the data set did not allow “years in UK” to be entered directly into the regression (Prof Burgess will be commenting on all this shortly); had it been possible, that might have given us a better understanding of the likely causes of the residuals. I think the “motivation” explanation is plausible but unsupported at the moment, and the acculturation explanation is even more plausible, but not measured directly in this paper.
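To make that concrete, here is a minimal simulated sketch (my own illustration, with made-up numbers, not the Burgess data or model) of how “progress” defined as a regression residual shifts when you change the baseline predictor:

```python
# Hypothetical illustration: "progress" as the residual from a regression of
# later attainment on an earlier measure. Two different baselines give two
# different sets of residuals, i.e. two different definitions of progress.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

ability = rng.normal(100, 15, n)                 # early cognitive ability (pre-school)
age_11 = 0.7 * ability + rng.normal(0, 10, n)    # age-11 attainment, partly ability-driven
age_16 = 0.8 * ability + rng.normal(0, 10, n)    # age-16 attainment, partly ability-driven

def residual_progress(y, x):
    """Residuals from an ordinary least squares fit of y on x (with intercept)."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

progress_vs_age_11 = residual_progress(age_16, age_11)    # progress conditional on prior attainment
progress_vs_ability = residual_progress(age_16, ability)  # progress conditional on early ability

# The two "progress" measures only partly agree, and both still contain
# measurement error plus any unmeasured influences (motivation, acculturation).
print(np.corrcoef(progress_vs_age_11, progress_vs_ability)[0, 1])
```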
Different measures of progress lead to somewhat different conclusions. A pupil who arrives from overseas and makes progress from primary school (little knowledge of England) to secondary school (more knowledge of England) has not thereby really boosted education in England. That could only be argued if they ended up far better than the locals. That is the case for some immigrant groups, like the Chinese, but not for others. The best results would be achieved by careful selection, not random selection.
“In the example you gave of the child being in the top 80% at 11 and again at 16 making ‘no progress’ isn’t quite right.”
You go on to cover the argument about predicting progress on the basis of prior attainments. In fact, I was at pains to point out that that particular conclusion would have been foolish. I was simply drawing attention to a feature of residuals on a regression line. If a pupil progresses up the schooling system they learn more and more, but if they progress as expected (exactly as the regression line predicts) then there will be no residuals. For this reason the regression approach to “educational progress” can sometimes mislead. I think it led most of the journalists covering the story to think that immigrant “progress” of itself boosts final achievement, whereas the national end result is increased by some groups and reduced by others. In fact, in PISA national comparisons most researchers pay attention to immigrants, and often measure their progress separately. Rindermann and I have completed a big international study on this, but we are holding it for the ISIR December conference #IQ2014. Hope to blog from there in mid-December.
It might be helpful to illustrate this general point with findings from the economic domain. Some poor sub-Saharan countries have shown faster economic growth in the last decade, often because they have been exporting raw materials to China. Their growth rate is much higher than that of “stagnant” Japan, which officially went into recession today. However, Japan is much richer than any sub-Saharan country. “Rate of improvement” does not equate to having the highest level of actual wealth, nor do measures of adjusted educational progress necessarily imply better scholastic achievement in all immigrant groups.
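A toy calculation (hypothetical numbers, nothing measured) makes the growth-versus-level distinction obvious:

```python
# Hypothetical scores: the group with the larger gain still finishes at the lower level.
groups = {
    "Group A": {"age_11": 40.0, "age_16": 55.0},  # big gain, low finish
    "Group B": {"age_11": 70.0, "age_16": 78.0},  # small gain, high finish
}
for name, s in groups.items():
    gain = s["age_16"] - s["age_11"]
    print(f"{name}: gain = {gain:+.0f}, final level = {s['age_16']:.0f}")
```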
“The reason expected progress is used here (and in education) rather than pure attainment is to try and isolate as much as possible the impact a particular school (or area) is having on the attainment of its pupils.”
Of course. Burgess is using a familiar “value added” measure of schools, and this is one way of seeing whether some schools are better than others. The question, however, is how best to calculate progress. Pre-school cognitive ability is probably the least contaminated measure for judging school progress. Once we have a full genome for each child, that might become the gold standard. We are not there yet.
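For readers who want to see the arithmetic, here is a rough sketch of a value-added calculation, assuming it is done as the school-level mean of pupil residuals after regressing outcomes on an intake measure. This is my own simplified illustration, not the specific model in the Burgess paper:

```python
# Simulated pupils in simulated schools; all numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_pupils, n_schools = 2000, 40

school = rng.integers(0, n_schools, n_pupils)   # which school each pupil attends
intake = rng.normal(100, 15, n_pupils)          # intake measure (prior attainment or ability)
true_quality = rng.normal(0, 3, n_schools)      # the "real" school effect we hope to recover
outcome = 0.8 * intake + true_quality[school] + rng.normal(0, 8, n_pupils)

# Pupil-level residuals from regressing outcomes on the intake measure
X = np.column_stack([np.ones(n_pupils), intake])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
residual = outcome - X @ beta

# "Value added" per school = mean residual of its pupils
value_added = np.array([residual[school == s].mean() for s in range(n_schools)])
print(np.corrcoef(value_added, true_quality)[0, 1])  # how well the estimate tracks the true effect
```

The point of the sketch is that the answer depends on what goes in as the intake measure; swap prior attainment for early cognitive ability and the school rankings will move.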
You then go on to discuss the “London effect”. I would not start from London, because it makes much more sense to look at the full sample before the particular towns, which have individual immigration histories. The full sample in this case is England, and that is why I quoted the Deary et al. (2007) paper. We know what causes scholastic attainment: in the main, at a correlation of 0.81, it is prior cognitive ability. The advantage of cognitive measures is that they are less influenced by school teaching effects than curriculum-based assessments.
So, I would not get into any arguments about London or Birmingham effects until I had some cognitive measures to look at. Absent those measures, we would be fighting over scraps of variance.
You then go on to explain what Burgess is trying to do: showing that the presumed London effect is due to race and immigration status, not fancy teaching. I think this is probably right. I can only say “probably” because cognitive measures are missing, and are being inferred from previous performance, which may be partly modulated by lack of English in new immigrants.
“As you can see children in London (and Birmingham) score above average, so their pure scholastic ability is high.”
Again, Burgess’s argument is that if you control for racial composition, those apparent effects vanish. Better to look at the whole picture first, individual area variations later, and only if they depart from the general pattern. Looking at your argument, I think we are entirely in agreement on this!
Now, as to the “equivalents”: I didn’t bother with these, but should have, and should have made the adjustments discussed in the paper. My major point was that, even within real GCSEs, we do not have full equivalence. If you look at the Deary results, there are many GCSE subjects which don’t require much intellect, but count just as much in most of the statistics. Schools are perfectly able to “game” the system, and many of them do. All the ways in which this can be done (selecting which pupils take which exams, which exams to take generally, etc.) are worth a separate paper. I had suggested having a single examination in adulthood to evaluate achievement, as was done in the OECD study I mentioned. This would give us a socially interesting result, in that it would allow us to understand occupational histories and later wealth. If you look at the links, you will see that I am a bit exasperated that the OECD don’t mention intelligence, but keep finding it again and again in their results.
http://drjamesthompson.blogspot.co.uk/2013/10/how-illiterate-is-oecd.html
http://drjamesthompson.blogspot.co.uk/2013/12/oecd-children-become-oecd-adults.html
Also, spending money on education does not always give results, certainly not above a reasonably low threshold. Andrew Sabisky showed that US schools were not providing good value for money relative to international expenditures.
http://drjamesthompson.blogspot.co.uk/2014/01/pisa-goes-to-us-finds-little-bang-for.html
Finally, as regards your arguments about Chinese and Indian students, here again we are in agreement, so I don’t think we have to agree at length!
However, at the end of your comments, I think we drift apart again:
“Even controlling for ethnicity, gender, month of birth, economic background and the level children started at – London schools still do a better job of getting more pupils to the highest grades than we would expect.”
You use the phrase “than we would expect”. Expectation is an elastic concept. I have strong reservations whenever any researcher makes the traditional “corrections” for economic background or socio-economic status. Burgess does it, but so do most educational researchers. I thought I had posted about this many times, but my explanations have evidently not been succinct enough, so it is good to try to set them down again in better form now.
Jensen called it the “sociologist’s fallacy”. If you “control” for socio-economic circumstances you assume that low intelligence or low application played no part in producing that person’s economic circumstances. That is, you are saying that every poor person is poor because of an external force, and ought to be compensated for it in the statistical treatment, despite the fact that low ability and lack of application are frequent causes of poverty.
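The statistical consequence can be shown in a small simulation. In this made-up example (my own illustration, not drawn from any of the papers discussed) socio-economic status has no direct causal effect on attainment at all, yet it still “predicts” attainment because it is downstream of ability, and “correcting” for it removes part of the genuine ability signal:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

parent_ability = rng.normal(0, 1, n)
ses = 0.6 * parent_ability + rng.normal(0, 0.8, n)         # SES partly a product of parental ability
child_ability = 0.5 * parent_ability + rng.normal(0, 0.87, n)
attainment = 0.7 * child_ability + rng.normal(0, 0.7, n)   # only ability is causal in this toy world

def slope_and_residuals(y, x):
    """OLS slope of y on x, and the 'corrected' y (the residuals)."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1], y - X @ beta

ses_slope, ses_adjusted = slope_and_residuals(attainment, ses)
print("Apparent SES 'effect':", round(float(ses_slope), 2))   # clearly non-zero despite no causal role
print("Ability-attainment r, raw:         ",
      round(float(np.corrcoef(child_ability, attainment)[0, 1]), 2))
print("Ability-attainment r, SES-adjusted:",
      round(float(np.corrcoef(child_ability, ses_adjusted)[0, 1]), 2))
```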
Here are some comments I made about immigrant results on 5 December 2013:
PISA have fallen for the sociologist’s fallacy that socio-economic status is entirely imposed externally. That is, that you are poor because the system is stacked against you, rather than that the system responds to how much you work and how much you save. PISA have “corrected” for this. Some immigrants are poor because they have low skills and low ability. Some immigrants are poor because they have low skills and higher ability but haven’t been allowed to enter an open economy in their home country. Some immigrants have high skills and high ability and are rich. We need better calculations here. Plotting out the immigrant results by years of residence would make the effects easier to understand, as would identifying where these immigrants come from.
Furthermore, intelligence is a better predictor of social class of attainment than is social class of origin. A parent’s social class accounts for only 3% of the social class mobility of their children. The ability of the individual child accounts for 13%. Simply talking about the apparent effects of class does not speak to the question of ability. Both need to be measured in the same samples, and then compared for predictive power.
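The comparison called for here is simple enough to sketch: measure both predictors in the same sample and compare variance explained. The simulated effect sizes below are arbitrary (they are not Nettle’s figures); only the method of comparison is the point:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

class_of_origin = rng.normal(0, 1, n)
child_ability = 0.3 * class_of_origin + rng.normal(0, 0.95, n)   # modestly correlated with origin
attained_class = 0.08 * class_of_origin + 0.4 * child_ability + rng.normal(0, 0.9, n)

def variance_explained(y, x):
    """R^2 for a single predictor: the squared correlation with the outcome."""
    return np.corrcoef(x, y)[0, 1] ** 2

print("R^2 from class of origin:", round(variance_explained(attained_class, class_of_origin), 3))
print("R^2 from child ability:  ", round(variance_explained(attained_class, child_ability), 3))
```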
Daniel Nettle covered this in an interesting 2003 paper, which I then used to calculate the social class composition of university entrants, according to how demanding the universities were in their entrance standards.
This was one of my earliest posts, which reminds me that I am almost at the blog’s second birthday.
http://drjamesthompson.blogspot.co.uk/2012/11/social-class-and-university-entrance_28.html