Saturday, 31 August 2013

Poverty and Class, and a mostly missing variable

 

Courtesy of the British Psychological Society, I was alerted to the recent publication of the Journal of Educational Psychology “Virtual Issue: Research from Educational and Developmental Psychology on Poverty and Class”, edited by Harriet Tenenbaum.

The editor writes in her introduction: “The first half of the selected articles in this virtual issue focus on relations between income and education and the second set on children’s views of economic inequality. The first set of selected articles address why children from low-income families may do less well in education, but also address interventions designed to improve the education of children from low-income families.” “The studies reported in these articles were conducted in many different countries confirming that the influence of poverty on education is important in countries as egalitarian as Sweden.” “In general, the first set of these articles indicate that income serves as a risk factor for educational attainment (Cassidy and Lynn, 1991) partially because of cultural capital or class reproduction (Myrberg & Rosen, 2009). Other factors, in addition to income, influence school dropout (Fan & Wolters, 2012) and achievement (Cassidy & Lynn, 1991).”

There are 10 articles in all. The first mentions intelligence by name, and shows that it has an influence. It also measures personality. One other (Dockrell, Stuart and King) includes some British Ability Scales subtests, and another (Sylva et al.) includes the British Picture Vocabulary Scale as a measure of literacy and general ability.

So, at the very best, out of 10 key papers, chosen to be of above average quality and impact on the theme of poverty and class, 3 mention ability, and only one of those three does so explicitly. That first paper provides a good model of what every educational paper should include: a measure of intelligence (here the Differential Aptitude Test); a measure of personality (here the Junior Eysenck Personality Questionnaire); measures of social class, parental education and occupation, family size, type of school attended, an index of possessions in the home, a home crowding index, and a measure of parental encouragement of education; and then the target variable, a 20-item measure of achievement motivation. As you can see, all putative causes get a chance to contribute. This paper by Tony Cassidy and Richard Lynn uses path analysis (in 1991) and concludes:

“This study replicates the findings by Lynn et al. in that school-type, IQ, and home background are important predictors of educational attainment. School-type is the single most important predictor, accounting for 19.6 per cent of the variance. IQ accounts for 13.1 per cent and the combined home background variables of crowding and parental encouragement account for 18.1 per cent of the variance. The other important direct predictors of educational attainment are the achievement motivation dimensions of acquisitiveness, dominance and work ethic. Among them these variables account for 36.2 per cent of the variance. The total variance accounted for by the variables in this study is 87 per cent.”
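As a quick arithmetic check on the quoted figures, the four components do indeed sum to the 87 per cent total. A minimal sketch in Python (the percentages and the grouping of predictors are taken from the quotation above):

# Arithmetic check on the variance shares quoted from Cassidy & Lynn (1991).
# The percentages and the grouping of predictors follow the quotation above.
variance_shares = {
    "school type": 19.6,
    "IQ": 13.1,
    "home background (crowding + parental encouragement)": 18.1,
    "achievement motivation (acquisitiveness, dominance, work ethic)": 36.2,
}

for predictor, share in variance_shares.items():
    print(f"{predictor:60s} {share:5.1f}%")
print(f"{'total variance accounted for':60s} {sum(variance_shares.values()):5.1f}%")  # 87.0%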

However, more important than the results of a particular study done 22 years ago is that so few of the other, usually far more recent, papers show the same even-handed range of variables. Since intelligence precedes schooling, and schooling precedes getting a job, and a job precedes accumulating wealth, one might have expected at least basic measures of intelligence to be found in all these educational papers. Their absence allows the authors to draw conclusions about class, poverty, and educability without intelligence having a chance to be tested as a contributing factor. Most seriously, their absence often leads to thoroughly misleading conclusions. For example, Fan and Wolters measure school dropout without taking any measures of intelligence, a very significant omission. Dockrell, Stuart and King have obtained some measures of intellectual ability, but do not display the relationship between intelligence and learning in their intervention study, which would have been instructive.

The papers in the virtual issue are very probably a fair selection of educational psychology publications. Most of the sample sizes are small, particularly in relation to the effects being studied, and few are population based. My main gripe is that I do not think that educational psychology papers give a fair place to intelligence assessments (nor to personality measures). It is a pity, and significantly reduces our ability to understand children’s educational progress.

Friday, 30 August 2013

A new textbook of psychology: contributions invited

 

Having had my tea, I decided it was time to write a new textbook of psychology. I decided that, to make the task more manageable, I would cut out: all historical introductions; all hypotheses, however engaging, which were unsupported by relevant confirmatory findings; all individual studies unless they illustrated a general point which had been replicated many times; most individual studies unless they were on substantial and highly representative samples; and all studies where the hypotheses, the methods or the analysis of results were not clearly specified, and had not been replicated at least three times.

This is what I have got in my draft outline so far:

Chapter One: Psychological Theories which are non-trivial and well supported by results.

 

 

 

Any suggestions as to what I should include?

Tuesday, 27 August 2013

Breast feeding, intelligence, and confounded researchers

 

In a decade or two, people may look back on us as having lived in the dark ages, blind to the obvious, and bound up with delusions, angels and devils. Our behavioural science research designs may look hopelessly simple, and wide open to confounding effects. Full genomic analyses may go a long way to resolving some of those confounders, but good research designs will always be required.

Here is a simple question which dates back to 1929: does breastfeeding a child, as opposed to giving them formula milk, boost their intelligence? Set aside for a moment the potential benefits of bonding with the child, protecting them from illnesses, and thoroughly irritating passers-by who hate the sight of humanity, and just concentrate on that question.

Walfisch, Sermer, Cressman and Koren have done just that in a BMJ Open paper: “Breast milk and cognitive development—the role of confounders: a systematic review”. Their abstract immediately hits the nail on the head: “The association between breastfeeding and child cognitive development is conflicted by studies reporting positive and null effects. This relationship may be confounded by factors associated with breastfeeding, specifically maternal socioeconomic class and IQ.”

They conducted a systematic review of the literature and found that 84 studies met their inclusion criteria (34 rated as high quality, 26 moderate and 24 low quality).

They explain: “Well-established confounders in breastfeeding research include demographic and IQ differences between mothers who breastfeed and those who choose not to. Parents who score high on a range of cognitive abilities have children with above average IQ scores. In parallel, advantage in mother's IQ more than doubles the odds of breastfeeding. Thus, some of the published data demonstrates the disappearance of the breastfeeding effect on child's cognition after correction for maternal IQ.”

“Given that more tight control of confounders resulted in greater likelihood of disappearance of breastfeeding effect, it can be argued that the remaining positive effect reflects residual uncontrolled bias, as shown by Der et al in their large study. In that study, before adjustment, breastfeeding was associated with an increase of around 4 points in mental ability. Post hoc analysis revealed that adjustment for maternal intelligence accounted for most of this effect—where full adjustment for a range of relevant confounders yielded a small (0.52) and non-significant effect size (95% CI −0.19 to 1.23).”

They conclude: “Much of the reported effect of breastfeeding on child neurodevelopment is due to confounding. It is unlikely that additional work will change the current synthesis. Future studies should attempt to rigorously control for all important confounders. Alternatively, study designs using sibling cohorts discordant for breastfeeding may yield more robust conclusions.”

My conclusion: Breast feeding is probably a good thing, but don’t do it with the sole purpose of boosting the intelligence of your child. In all probability, the only way to boost the IQ of your child is to make a careful choice of mate 9 months before. So there’s a headline: “Mate selection more important for your child than breast feeding”. Worth a tweet at least.

 

BMJ Open 2013;3:e003259 doi:10.1136/bmjopen-2013-003259

Sunday, 25 August 2013

The truth about Princess Diana: new bullet found

 

Piaget found that if you told young children a story about a boy who had accidentally broken seven cups, they regarded that boy as far naughtier than a boy who had accidentally broken only one cup. From a cognitive point of view one can explain this by saying that the children in this experiment do not yet fully comprehend the notion of an accident. If something is truly accidental, then the resultant damage was not intended, and the concept of naughtiness is inappropriate.

Of course, it is hard to shake off the feeling that a child who breaks seven cups by mistake is paying less attention than a child who breaks only one by mistake, and is therefore more culpable, because there is a tendency to think that things must be proportional, simply because they often are.

Princess Diana was one of the most photographed people on the planet. She was avidly followed by millions. Her face on a magazine always boosted sales. Her story was a drama with which many could identify. When she died in a car crash there was an understandable wish to find out exactly what had happened, and an underlying assumption that the causes of her death would be proportional to her fame. A woman of her renown could have been tracked, followed, stalked and then dispatched by persons unknown, for reasons unknown, the complexity of all these arrangements (someone waiting by a Parisian tunnel with a laser or a sniper’s rifle) being almost a homage to her, albeit of a malevolent kind. The notion that a woman of stratospheric status could be brought down by an inebriated chauffeur is hard to stomach, and frankly somewhat insulting to her. Rather than the Just World Hypothesis, this is the Unjust World Hypothesis: if something manifestly unfair and shocking takes place, forces of unfairness and shock must have been assembled with commensurate malevolence so as to break the protective carapace of Fame. It would be good, in the twisted sense of that word, to find a bullet somewhere.

Why has there never been a conspiracy theory about J.D. Tippit? (I wrote his name from memory, and when I checked I found I had got it slightly wrong, which strengthens my following argument). He was the police officer who was shot by Lee Harvey Oswald 45 minutes after someone shot J.F. Kennedy. Both Tippit and Kennedy had served their country. Both saw active service, Tippit being in action on the Rhine and getting decorated for it, Kennedy having served in the US Navy. Tippit had been a good cop for 11 years, Kennedy an inspirational President of the United States for 3 years. The fact that Tippit was shot by a disturbed loner makes sense. In the popular mind, it matches. Cops sometimes get shot by no-good guys with guns, and Tippit was an unknown cop. The same disturbed loner being able to kill a nation’s handsome president makes far less sense. Once again, it is pretty insulting. Surely more assassins are required to bring down a very high status person? At least one other armed man must have been somewhere, on a grassy knoll of popular indignation.

So, every anniversary a new bullet is found, some scrap of doubt which is twisted into the projectile of a disturbing fact by some people’s longing for proportionality.

Gigerenzer and psychological theorizing

 

“Several years ago, I spent a day and a night in a library reading through issues of the Journal of Experimental Psychology from the 1920s and 1930s. This was professionally a most depressing experience. Not because these articles were methodologically mediocre. On the contrary, many of them make today’s research pale in comparison to their diversity of methods and statistics, their detailed reporting of single-case data rather than mere averages, and their careful selection of trained subjects. And many topics—such as the influence of the gender of the experimenter on the performance of the participants—were of interest then as now. What depressed me was that almost all of this work is forgotten; it does not seem to have left a trace in the collective memory of our profession. It struck me that most of it involved collecting data without substantive theory. Data without theory are like a baby without a parent: their life expectancy is low.”

http://www.mpib-berlin.mpg.de/volltexte/institut/dok/full/gg/GG_Surr_1998.pdf

It can rarely be said of a psychologist that everything they write is worth reading. Gigerenzer is one such psychologist. He writes in plain English (presumably his second language) and understands his material so thoroughly that he can explain it simply, the sign of an intelligent and honest teacher. This straightforward approach means that you can follow this heuristic to make you smart: if you cannot understand him first time round, it is worth reading him several times until you do. With lesser writers, if you cannot understand them first time, turn elsewhere.

Gigerenzer's lament is about the paucity of solid theories in psychology. He laments that there are only surrogates: one-word explanations (vague, unspecified references to something like “similarity” offered without any definition or metrics, even when such things are available); redescriptions (such as opium making you sleepy because of its dormitive properties); muddy dichotomies (pointless battles between overlapping categories) and data fitting (exquisite mathematical models which re-describe the findings, but cannot explain them).

In defence of the beleaguered dabblers in mental philosophy, it could be argued one has to start somewhere. Data fitting may at least show you where the main currents are in the stream, even if one has no theory of fluid dynamics to explain why there should be currents in the first place.

So, where next? No good calling for more pointless theories and grand delusional castles in the sky. Perhaps we should concentrate on some basic problems.

For example, what makes problems difficult? To make that a little easier, what makes one simple problem slightly more difficult than another simple problem? Other than it having the quality of being slightly more difficult?

No time limit.

Saturday, 24 August 2013

Feminism and rape: intelligent?

 

It may or may not be a coincidence that the considerable increase in the use of the word “rape” coincides with the very rapid rise in the use of “feminism” in the 1970s. Whole books have been written on the basis of such slender associations, though admittedly mostly in the behavioural sciences.

As previously discussed, behavioural scientists are under no obligation to choose their variables according to established principles, after sustained perusal of a century of published literature, or Delphic consultation with knowledgeable colleagues. The best researchers attempt one or all of those preliminary steps, but they are not compulsory.

So, to examine what may be a chance correlation of frequency patterns it is wise to try other possible words. A reader has kindly suggested “virile” but as my reply (below) shows, this does not work. Neither do the following words, chosen by a most distinguished person, to whom I had described the problem over breakfast. “Nuclear”, “culture”, “ethnic”, “political”, “economic”, “black power” and “terrorist” all fail to show the same pattern, although they undoubtedly were frequent words at the time (much more frequent than our words in question).

One word shows a clear association, though as a mirror image: “intelligent”.  As it goes down and down, rape goes up, until the two words, the formerly common “intelligent” and the formerly rare “rape” touch each other in 1997. The conclusion is clear and irrefutable (I am using behavioural science talk ironically): as we ceased to regard people as being intelligent there was a rise in rape, and feminism. Perhaps we were all too aware of dysgenic trends and, looking at people’s behaviour, the word did not spring to mind so easily.

Spooky.

 

[Chart: word frequencies of “intelligent” and “rape” over time]

Friday, 23 August 2013

Men and women, feminism and rape

 

Can anything be deduced from books? Reading them is pretty tedious. It is hard to detect anything about social processes from the scribbles of a single person. Books have plots, story lines and other nonsense. However, if one were to take all books and conduct a meta-analysis at the level of the words alone, one might detect a bigger picture.

Consider the simple words: “men” and “women”. The relative frequency of those words may reveal the importance of these items in the popular mind, or at least the minds of readers, probably the slightly brighter, richer and supposedly more refined of the citizenry.

Sure enough, in the 1800s men (0.0753) were discussed nearly ten times more frequently than women (0.0079). Not until 1983 did they achieve parity of mention at 0.032, and after a brief surge in which women were ahead they are at a rough parity again. Seekers of refinement should be gratified to learn that “lady” and “gentleman” showed no such disparity. Although more common two centuries ago, those genteel words are paired together, with lady (0.004) ahead of gentleman (0.002), as they should be.

The somewhat colder terms “female” (0.0048) and “male” (0.0028) are much less frequent than “men” and “women”, and despite small differences can effectively be ignored.

Finally we get down to the very small numbers for infrequently used words like “rape”, which was rarely used in the 1800s (0.00027), and then became slightly rarer until the 1970s. Feminism first appears in 1910 or thereabouts, and in the 1970s there is a sharp rise in both these words in unison, peaking in 1997 and falling thereafter. Compared with stalwart words like men and women, both these words have always been rare in books.

At this point in social science reporting everything pauses, as in the Hitchcock murder movie “Psycho”, where a psychologist is called in to give The Explanation. I can only advance a few cautionary words: I do not think that rape was low throughout the last two centuries, rising only in the 1970s for two decades and then falling back. It is notoriously difficult to collect data on rape in any era, but this looks like a consequence of openness of reporting and discussion. The fall since 1997 is less easy to explain. If openness has been achieved one would expect sustained use of the word.

I think this little analysis illustrates a larger point, which formed part of the critical evaluation of “The Spirit Level: Why More Equal Societies Almost Always Do Better”. By choosing what to measure one can frame and potentially distort the search for causes. For example, the word “dominance” rises in a similar way, but the match is not so good with rape. “Masculine” also rises, but is not so good a fit. I have tried a bunch of others which might conceivably be informative: “manly, seductive, powerful, arrogant, forceful, brutal, entitled” and they are all a poor fit. The match between feminism and rape remains strong.

At the moment it seems that, as is often the case in social science musings, “more research is needed”.

 

[Chart: word frequencies of “men”, “women”, “feminism” and “rape” over time]

What should teachers know about human intelligence?

 

Imagine you had been asked to write a paper about children’s intelligence, so that teachers could understand it, and increase it (as far as it is possible to do so). As recompense for taking on this task, you are guaranteed that your paper will be distributed to most of the world’s teachers.

What would you include in this booklet, and what formats, graphs, pictures and tables would you employ to make sure that teachers understood and implemented your evaluations of the research literature?

This is not an empty question, because your answers might yet influence a UNESCO booklet which is being prepared at the moment.

Can you send me a brief outline, or links to good source material, and links to the sorts of formats appropriate for this task?

Thursday, 22 August 2013

g (Wechsler example, illustrating general principle)

[Figure: the hierarchical structure of intelligence, illustrated with the Wechsler subtests]

This is a snapshot of the hierarchical structure of intelligence, in this case on the Wechsler tests, but very much as found in 400 extensive databases. Leaving aside the detail of individual tests, this shows a “positive manifold” in that all tests are positively correlated with each other. Rather than one skill taking up brain space so that less is left for other skills, it looks as if the brain is a central processor, able to turn its power onto a wide range of mental tasks.
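For readers who want to see what a positive manifold looks like in numbers, here is a minimal sketch. The correlation matrix is made up for illustration (it is not the published Wechsler data): every subtest correlates positively with every other, and the first unrotated factor, g, loads positively on all of them.

# Illustrative positive manifold: an all-positive correlation matrix and the
# general factor (g) extracted from it. The correlations are invented for
# illustration and are NOT the published Wechsler values.
import numpy as np

subtests = ["Vocabulary", "Similarities", "Block Design", "Matrix Reasoning", "Digit Span"]
R = np.array([
    [1.00, 0.70, 0.45, 0.50, 0.40],
    [0.70, 1.00, 0.48, 0.52, 0.42],
    [0.45, 0.48, 1.00, 0.60, 0.38],
    [0.50, 0.52, 0.60, 1.00, 0.41],
    [0.40, 0.42, 0.38, 0.41, 1.00],
])

eigenvalues, eigenvectors = np.linalg.eigh(R)                        # eigenvalues in ascending order
first = eigenvectors[:, -1]                                          # eigenvector of the largest eigenvalue
loadings = np.sign(first.sum()) * first * np.sqrt(eigenvalues[-1])   # unrotated first-factor loadings

for name, loading in zip(subtests, loadings):
    print(f"{name:17s} g loading = {loading:.2f}")                   # all positive: the positive manifold
print(f"Share of total variance carried by g = {eigenvalues[-1] / len(subtests):.0%}")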

Heredity grows on you (lecture snapshot)

[Slide: genetic and environmental influences on intelligence across age, from Ian Deary’s lecture]

As people age, the influence of genetics on their intelligence increases, and the effect of shared family environment disappears. The remaining unique environment is created by individuals’ preferences, many of which are partly driven by intelligence, like deciding to work at a university or research lab.

(from Ian Deary’s lecture on intelligence).
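A back-of-the-envelope way to see the pattern is the classical ACE decomposition from twin correlations (Falconer's formulas). The correlations below are rough illustrative values, not figures from Deary's lecture; the point is only that the same arithmetic yields rising heritability and a shrinking shared-environment component.

# Illustrative ACE decomposition via Falconer's formulas. The twin correlations
# are rough illustrative values, not figures from Deary's lecture; they are chosen
# to show the typical pattern of rising A and shrinking C with age.
def falconer_ace(r_mz, r_dz):
    """Return (A, C, E) variance components from MZ and DZ twin correlations."""
    a = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
    c = r_mz - a            # shared (family) environment
    e = 1 - r_mz            # unique environment plus measurement error
    return a, c, e

for age_group, r_mz, r_dz in [("childhood", 0.75, 0.55), ("adulthood", 0.80, 0.40)]:
    a, c, e = falconer_ace(r_mz, r_dz)
    print(f"{age_group:10s} A = {a:.2f}   C = {c:.2f}   E = {e:.2f}")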

Wednesday, 21 August 2013

The dying have it easy

 

The dying have it easy in only one regard: they have lost the illusion of immortality. Their prompt death is guaranteed, more promptly than had ever been the case in their living days. They have crossed a psychological barrier from the land of the living to the land of the dying, facing a certain end rather than what they had always hitherto known: an uncertain future.

Bronnie Ware’s “The Top Five Regrets of the Dying: A Life Transformed by the Dearly Departing” has had an understandable impact. When I first heard of it I immediately wanted to know what those regrets were. As one of my clients once said to me: “I’m not too concerned about making a perfect decision: I just want to minimise my regrets”.

This eschatological curiosity drove me to the list of the regrets, rather than the book itself, which I haven’t read. I wanted to identify the regrets now, while I still had a chance to correct my errors. Indeed, the focus of talking about the top 5 regrets is to use those regrets to plan a better life. Unlike the Tibetan Book of the Dead, this is not so much about understanding or accepting death, but about learning how to live in a better and more fulfilled manner.

Top 5 Regrets of the Dying:

  1. I wish I’d had the courage to live a life true to myself, not the life others expected of me.
  2. I wish I hadn’t worked so hard.
  3. I wish I’d had the courage to express my feelings.
  4. I wish I had stayed in touch with my friends.
  5. I wish that I had let myself be happier.

Fine. Now imagine that you take these regrets as the basis for a life plan. You live true to yourself, cut back on your work, express your feelings, stay in touch with your friends, and let yourself be happier. Sounds good.

However, it probably cuts out working as a nurse. In that profession, rather than being true to yourself,  you have to be true to your patients and meet their expectations, and certainly you have to meet the ward sister’s expectations. You have to show up at the hospital on time, and then have to work hard all your shift. You have to keep your feelings to yourself most of the time, and concentrate on the feelings of others. You have to build up friendly relationships with whomsoever you are working with in a team, as well as being friendly to the dying. You have to do your duties, and put those duties before your own happiness.

In summary, Bronnie would have had difficulty doing her job in palliative care if she had really recast her life in order to avoid the 5 Regrets she heard from the dying. Work itself is a source of satisfaction for most people, particularly because it drives them to do things they would not have achieved in idleness. Loss of work is a source of unhappiness and sometimes even depression. It is better to be in work, with a sense of challenge and achievement, as well as some harassments, than to be without work.

Bluntly, the dying are no wiser than the living.  They are simply having to deal with the most terrible and absolute reversal of priorities. Of course they give different weights to different things, but that does not of itself give them insights into the problems of living. Impending death does not boost intelligence. The dying are not the people to go to for career advice. In point of fact, the Top 5 Regrets look very similar to the Top 5 Resolutions after a good summer holiday, perhaps with the additional wish of being able to speak a foreign language. Within an hour of getting back to the office, these resolutions all evaporate.

To make the point even clearer, here is James Thompson’s “Top 5 Regrets of the living”. It is just a list, without a book wrapped round it. It is based on the sorts of things that people worry about when they believe, quite rightly, that they have a long life ahead of them, and are trying to make their way in the world.

Top 5 regrets of the living

  1. I wish I could fit in better, and find out what people expect of me.
  2. I wish I could work harder, and be successful.
  3. I wish I didn’t let my feelings get the better of me.
  4. I wish I could make new friends and be popular.
  5. I wish I could make myself happier.

 

Back to work, everybody.

Tuesday, 20 August 2013

ORIGINAL PAPER: Teaching intelligence

 

Professor Ian Deary is the UK’s leading intelligence researcher. He has written an entertaining personal account about how he teaches intelligence to various audiences. He makes a point of giving public lectures, because  “I think it is important that people outside the psychology student body learn about intelligence differences. This might be because it annoys me how much poor information is out there about my topic; and in part, it might be because I like the positive feedback I get from having a good set of intelligence research stories to tell. Good stories include Carroll's massive data collation exercise, the discovery of the Flynn effect, the re-testing of the participants in the Scottish Mental Surveys after several decades, and the work on separated twins.”

On the question as to which general introduction to intelligence is best, Deary hedges his bets: “I note the large difference between Mackintosh's (2011) good book on intelligence and Hunt's (2011) good book on intelligence: the latter has more technical psychometrics than the former though both managed to do a good job on teaching intelligence. My opinion about which is better is rather a fudge; there needs to be enough psychometrics so that the minority in the audience who would be drawn to this aspect can see that there is statistical rigour behind the data and findings, and there needs not to be so much that one alienates those who are less inclined to the multivariate statistics.”

My own view is that part of the reason some educated people reject the concept of intelligence is that they have read supposed refutations based on questionable statistics, so it is necessary to have some technical input. For that reason I personally found Hunt’s book particularly helpful.

Deary concludes:  “Lastly, I should mention the fact that intelligence is often seen as controversial. I must say that, in all my time teaching intelligence, I have not presented it in that way. I have had the fortune to teach intelligence to groups who have come with little prejudice about it, and they have mostly gone away, I hope, with similarly little prejudice. It is controversial if one wants it to be and if one approaches it in that way. However, it can be taught simply as an interesting topic with some great data and with the assurance that, if people take the time to know something about these data and think about what they mean, they will be the better off for it.”

Read the entire paper here: http://www.sciencedirect.com.libproxy.ucl.ac.uk/science/article/pii/S0160289613000950

Watch him give a lecture (talking to slides) here: http://www.youtube.com/watch?v=MGnCYdr7dYE&list=UUvXjmARhUOdnV5hQ1JBPjcA&index=3

Monday, 19 August 2013

ORIGINAL PAPER: Are cognitive differences between countries diminishing?

 

When countries are compared on intellectual measures, significant differences are found. Leaving aside why these differences exist, this has considerable implications for the economies of those nations, and also for the way in which their institutions function. Although those intellectual differences have been established, the pressing question is whether they are amenable to change. If the gaps can be closed by whatever means then the effects will be diminished and eventually annulled.

It is with that background in mind that it is particularly interesting to have a preview of the publication in the special issue of Intelligence on the Flynn Effect.

Are cognitive differences between countries diminishing? Evidence from TIMSS and PISA. Gerhard Meisenberg and Michael A. Woodley.

http://www.sciencedirect.com.libproxy.ucl.ac.uk/science/article/pii/S0160289613000305

“Cognitive ability differences between countries can be large, with average IQs ranging from approximately 70 in sub-Saharan Africa to 105 in the countries of north-east Asia. A likely reason for the great magnitude of these differences is the Flynn effect, which massively raised average IQs in economically advanced countries during the 20th century. The present study tests the prediction that international IQ differences are diminishing again because substantial Flynn effects are now under way in the less developed “low-IQ countries” while intelligence is stagnating in the economically advanced “high-IQ countries.” The hypothesis is examined with two periodically administered scholastic assessment programs. TIMSS has tested 8th-grade students periodically between 1995 and 2011 in mathematics and science, and PISA has administered tests of mathematics, science and reading between 2000 and 2009. In both TIMSS and PISA, low-scoring countries tend to show a rising trend relative to higher-scoring countries. Despite the short time series of only 9 and 16 years, the results indicate that differences between high-scoring and low-scoring countries are diminishing on these scholastic achievement tests. The results support the prediction that through a combination of substantial Flynn effects in low-scoring countries and diminished (or even negative) Flynn effects in high-scoring countries, cognitive differences between countries are getting smaller on a worldwide scale.”

In the discussion they add:

“The magnitude of test score convergence is less certain. In PISA, continuation of the current trends is calculated to erase the differences between high-scoring and low-scoring countries in only 40 years, with a 95% confidence interval of 27 to 77 years. In TIMSS, complete convergence would result after 341 years, with a 95% confidence interval of 70 years to never. These calculations are based on the prediction of the trend measured by the averaged performance on the first and last assessments. We do not know whether test score convergence will ever be complete. It might not if, as is frequently assumed (e.g., Jensen, 1998), biological limits for the development of high intelligence are different in different countries. One possible outcome is partial convergence leading to smaller but persistent gaps, similar to test score convergence between racial groups in the United States during the last three decades of the 20th century (National Center for Education Statistics, 2009).”

This is an interesting observation. PISA and TIMSS are similar, though the latter has a greater emphasis on maths and science, so could conceivably be seen as the harder of the two.

As I discussed in the previous post on the Rindermann and Thompson paper about narrowing ethnic gaps in the US, what has happened is a partial convergence, which then seems to have stopped. One response to this finding is the “one more push” policy, which is to keep administering the special programs in the not unreasonable hope that they will eventually be effective “all the way”. This might happen. Another possibility is that, even if the compensatory education policies are redoubled in their scope and intensity, not much new convergence is achieved. One mildly absurd possibility is that such programs overcome the environmental half of the gap, but cannot touch the genetic component. Another 15 years of data might help us evaluate that hypothesis.
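For orientation, the “years to convergence” figures quoted above come from simple linear extrapolation: the remaining gap divided by the rate at which it is narrowing. A minimal sketch with made-up inputs (not the paper's data):

# Sketch of the linear extrapolation behind a "years to complete convergence"
# estimate: remaining gap divided by the rate of narrowing. The inputs below are
# made up for illustration; they are not the values used by Meisenberg and Woodley.
def years_to_convergence(current_gap, narrowing_per_decade):
    """Years until the gap reaches zero if the current linear trend simply continues."""
    if narrowing_per_decade <= 0:
        return float("inf")          # gap stable or widening: convergence never happens
    return 10 * current_gap / narrowing_per_decade

# e.g. a 30-point scholastic gap closing by 7.5 points per decade -> 40 years
print(years_to_convergence(current_gap=30, narrowing_per_decade=7.5))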

Thursday, 15 August 2013

Here the researchers be

[Map: number of researchers per million inhabitants, by country]

 

This map from ChartsBin.com is pretty scary. Scandinavia is the most research-prone zone of the world. I had an inkling that research was a Norwegian disorder, but there it is, glowing in its purple dominance. Iceland is part of the Nordic affliction, together with the Anglo-Saxons at home and in their largest colony, and also in their southern colonies of Australia and New Zealand. Japan is another island race determined to get to the truth, and Europe and Russia follow suit. The rest of the world is spared this affliction, though China is beginning to catch the bug.

Naturally, it might seem that research is simply a luxury of wealthy countries, and has nothing to do with the way people think. In the English case it is possible to argue that societies for intellectual curiosity preceded the Industrial Revolution and were major contributors to it. Problem solving comes before solution harvesting. In that light, China is probably regaining its researchers, not creating them for the first time.

Be that as it may,  when you hear the phrase “Research has shown….” this map helps you understand the parts of the world where the work was most likely to have been done, and where the declared finding resides in all its bounded limitations. Unless, of course, psychology can find the eternal truths which apply anywhere, even in India.

Brains in scale, and the enchanted loom again

 

By common convention biggish things are compared with a human’s height and smallish things with a strand of human hair, which at 10^-3.6 m is almost at the limit of what you can see with the naked eye, and coincidentally almost exactly the size of a human egg. That is a point of reference, in the best anthropomorphic sense.

Now further help is at hand from Cary Huang  http://htwins.net/scale2/

Down at 10^-5.1 m you can see white and red blood cells, the cell nucleus, the X chromosome and E. coli.

You have to go further down to 10^-6.2 m to see the largest virus and the smallest thing visible to a light microscope. This is the level of scale called a micrometer, because it is a millionth of a meter (6 zeros).

Deeper still are the HIV virus and the transistor gate, both at 10^-7.1 m. It is only at this point that we can start talking about computational scale. If the binary digit has a size, it is roughly 25 nanometers at the moment, but may get smaller.

Drill down further, and at 10^-8.2 m we get to biological code, DNA. This must be a key reference dimension. All of you is there, in compact informational form. This is stuff which can truly be called “causal”.

Or you could set it precisely at a nanometer, which is a billionth of a meter (9 zeros), at which scale you can see a molecule of glucose, about which we hear so much in neurology. It is our basic unit of energy, the simplest sugar which keeps us going. For the more technological, carbon nanotubes are at this scale.

Previously, in “The Enchanted Loom” I was talking about the scale we need to use if we are to understand how the brain functions. I think that we are going to have to work at somewhere between 10^-8.2 m and 10^-9 m before we really get to the bottom of biological things. After that, all we have to do is work out how everything interacts, and can be scaled up into explaining how you read and understood this sentence.
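For anyone who wants the scale points above as ordinary lengths, here is a small conversion sketch (the exponents are the ones quoted above; the labels are shorthand):

# Convert the log10 scale values quoted above into metres and nanometres.
scale_points = {
    "human hair width / naked-eye limit":     -3.6,
    "blood cells, nucleus, E. coli":          -5.1,
    "largest virus, light-microscope limit":  -6.2,
    "HIV virus, transistor gate":             -7.1,
    "DNA":                                    -8.2,
    "glucose molecule (one nanometre)":       -9.0,
}

for item, exponent in scale_points.items():
    metres = 10 ** exponent
    print(f"{item:40s} 10^{exponent:+.1f} m = {metres:.2e} m = {metres * 1e9:12.1f} nm")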

Attention: Humans driving

 

Reaction times correlate with lifespan and with intelligence. In evolutionary terms this makes sense: quick reactions allow you to spot dangers, avoid predators and live another day.

In contemporary life, which is now very safe, one of the few causes of premature death is traffic accidents. Motorbikes are the worst. Without a protective cage to absorb impact, riders cannot survive. Helmets don’t help much because necks break as easily as skulls. Pushbikes don’t mix well with motor traffic. All cyclist deaths are premature and preventable, if only by taking public transport, which is safe. Cars are now safe for their occupants. Car braking systems very rarely fail. Human drivers fail every so often. Young over-confident incompetents are the worst, as are drunks of all ages, but careless idiots still cause injury, and each of us will be a careless idiot at some time in our driving lives.

Here is a reaction time test which is a better measure of attention while driving. Unlike the “BBC poisoned arrow sheep” and the various traffic light tests it has face validity, because it lasts for 5 minutes, which simulates the real life requirement of sustained attention. Of course, 5 minutes is not very long, and half an hour would be more realistic. The simulation is also unrealistic in that the hazards happen frequently, whereas in real life they happen rarely, with long boring bits in between. Furthermore, in this test one can begin to anticipate when it is time for the next hazard to show up. The results give the average time of response, but merely record the number of crashes and “false starts” without any further comment. The site constructors have a particular emphasis on fatigue caused by lack of sleep, which is certainly a factor when lorry drivers are sleep deprived, but probably not such a big issue with ordinary car drivers.

My own results “need to be seen in context” (were bad). On the first trial I hadn’t properly understood the instructions. After each hazard the reaction time comes up, which is distracting, but is probably a good example of the many distractions while driving. My average reaction time was 0.31 seconds, with one accident and 4 false starts. In other words, death. After a brief pause I tried to redeem myself. Second trial: average reaction time 0.32 seconds, with 0 accidents and 0 false starts. Clear evidence of learning in a dead person.

Let me know how you get on.

http://healthysleep.med.harvard.edu/need-sleep/whats-in-it-for-you/how-awake-are-you
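If you would rather tinker than click, here is a rough console approximation of such a sustained-attention test: random waits, a prompt, average reaction time, and a count of false starts. It is only a sketch (and assumes a Unix-style terminal); it is not the Harvard test linked above, and console timing is much less precise.

# Rough console sketch of a sustained-attention reaction-time test: random
# inter-stimulus intervals, a "BRAKE!" prompt, average reaction time, and false
# starts (pressing Enter before the prompt). Assumes a Unix-style terminal;
# this is not the Harvard test linked above.
import random
import select
import sys
import time

TRIALS = 5
reaction_times, false_starts = [], 0

print("When you see BRAKE!, press Enter as fast as you can.")
for _ in range(TRIALS):
    delay = random.uniform(2.0, 6.0)                          # unpredictable wait
    ready, _, _ = select.select([sys.stdin], [], [], delay)   # Enter during the wait...
    if ready:
        sys.stdin.readline()
        false_starts += 1                                     # ...counts as a false start
        continue
    print("BRAKE!")
    start = time.monotonic()
    sys.stdin.readline()                                      # wait for the response
    reaction_times.append(time.monotonic() - start)

if reaction_times:
    print(f"Average reaction time: {sum(reaction_times) / len(reaction_times):.3f} s")
print(f"False starts: {false_starts}")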

Wednesday, 14 August 2013

Religion, belief, probability and intelligence

 

A recent meta-analysis has shown that there is an inverse relationship between religious belief and intelligence. This is hardly news, since the papers reporting this link go back many years. Brighter people are less likely to be religious believers. Some commentators on the meta-analysis have pointed out that most of the studies have been conducted in “the West”, mostly on Americans and Christians, so the observation only holds true of that group. This may be true. Indeed, most psychology is conducted in Western countries, and mostly on Westerners, usually young American college kids. We cannot be sure how much the findings apply to the more populous Rest of the World, for both cultural and genetic reasons.

Nonetheless, it would be strange if psychology could not, at the very least, find some things which all people have in common. For example, the basic emotions are recognised around the world. All children learn a language, and the languages with the largest numbers of speakers include Mandarin, English and Spanish. Every variety of human seems to have mastered the art of looking at television, understanding soap operas, watching games like football, and knowing how to drive a car, even if they don’t already own one. There are 6 billion mobile phones (measured by SIM cards) and 2 billion people on the internet, so many people who are neither American nor Christian are logged in. Cultures are partly converging, genetic groups more slowly so.

Physics, of course, is in a better position. Any discipline is right to envy it. Physicists do not restrict their observations to our sun, but pronounce upon all suns. They declare that the laws of physics apply throughout the universe.

Do the laws of probability apply throughout the universe? The notion of probability is certainly a social construct. A thing may be a social construct and yet be as real as gravity. What is more, for those who worry about such things, probability is the product of an elite, and what an elite! Blaise Pascal and Pierre de Fermat puzzled over a gambling problem posed by the Chevalier de Méré in 1654, about how to divide the stakes of an unfinished game. (It has it all, doesn’t it? It even riles those opposed to gambling). Although people had gambled for aeons, there were no extant theories of probability (though Cardan had made a start in 1550). So, the theory of chance is an invention, and it is also in part a discovery, which was sitting under the noses of all gamblers.
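The puzzle itself is easy to state and, with hindsight, easy to solve: split the stakes in proportion to each player's chance of going on to win the unfinished game. A short recursive sketch of the Pascal and Fermat answer for two equally skilled players:

# The Pascal-Fermat "problem of points": two players stop an unfinished game; the
# fair split of the stakes is proportional to each player's probability of going
# on to win. A short recursive sketch for players of equal skill.
from functools import lru_cache

@lru_cache(maxsize=None)
def p_win(a_needs, b_needs, p=0.5):
    """Probability that player A wins, needing a_needs more points against an
    opponent needing b_needs, with probability p of winning each round."""
    if a_needs == 0:
        return 1.0
    if b_needs == 0:
        return 0.0
    return p * p_win(a_needs - 1, b_needs, p) + (1 - p) * p_win(a_needs, b_needs - 1, p)

# If A needs 2 more points and B needs 3, A's fair share of the stakes is 11/16.
share_a = p_win(2, 3)
print(f"A's share: {share_a:.4f}   B's share: {1 - share_a:.4f}")   # 0.6875 vs 0.3125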

Anyway, can a gambler nowadays understand the workings of probability? This was tested in a very interesting paper nearly 30 years ago. I mention it precisely because it offers us a way of studying the underlying question in different cultures, without running up against difficulties in judging religious belief. It would allow us to answer a basic question: do groups differ in their capacity to understand chance?

I will be answering this question by means of a contrast analysis: contrasting two papers to make one point. This is less comprehensive than a meta-analysis, but can sometimes be more informative. By using old papers I can reiterate the point made in a previous post that some old papers deal with important issues we ought to factor into contemporary debates.

Blackmore, S. and Troscianko, T. (1985). Belief in the paranormal: Probability judgements, illusory control, and the 'chance baseline shift'. British Journal of Psychology, 81, 455-468.

http://www.susanblackmore.co.uk/Articles/BJP%201985.htm

Blackmore and Troscianko found a relationship between defective probability judgments and paranormal beliefs, the former determined from studying subjects who played a game of chance. The authors argued that a specific ability, the capacity to make accurate judgments about chance, reduced the likelihood that subjects would make the error of interpreting chance events as being due to paranormal forces. This is a testable finding in a cross-cultural sense, in that subjects can be asked about a whole set of beliefs about religion and superstition, and can then at a later stage play a game or set of games and give their informed judgments about how much of the resultant scores were due to skill (their agency) and how much were due to chance.

Musch, J and Ehrenberg, K (2002) Probability misjudgment, cognitive ability, and belief in the paranormal. British Journal of Psychology, 93, 169–177.

http://www.uni-graz.at/dips/neubauer/lehre/fm_lll/musch_ehrenberg.pdf

The additional contribution of Musch and Ehrenberg was to show that when you factored in general intelligence, as determined by grades at school completion, then there was no specific ability in estimating chance, but rather a general overall ability which explained probability misjudgment. Brighter students called the odds correctly, and as a consequence were less likely to accept paranormal explanations. A scientific approach to life requires the capacity to compute coincidence and to propose and evaluate alternative explanations for phenomena.

So, here is a research project for someone: replicate these findings in different cultures and groups and see if it is a solid finding in the non-Christian world. I think it will be. I am not ready to argue that it must be, but it will certainly damage our current conceptions of probability and intelligence if it turns out not to be. Testable prediction.

Finally, how does understanding chance relate to holding a religious belief? Religions often make claims which fly in the face of everyday evidence. Rising from the dead, Virgin Birth, divine revelation and so on would be examples. In such a case belief or disbelief may have several components, but being able to judge the probability of such a thing being possible will be a key component.

In summary, the overall argument is that superstition, belief in the paranormal, religious belief and a non-scientific world outlook are all related points on the lower part of the intelligence spectrum. They have in common an inability to calculate probability and to evaluate other possible causes, many of them mundane.

In a phrase: The mundanity of inanity.

Tuesday, 13 August 2013

Three mild suggestions for psychology

 

Is Psychology a science? It tries to be. It can certainly go through the motions of empirical enquiry, and good work abounds: well-thought out studies, interesting hypotheses and an accumulation of valuable results. Sure, there is a long tail end of not-so-good publications, but that is to be expected in any discipline.

Are there some things we ought to do better? Here are three suggestions:

1 Agree upon some basic measures. It would be alarming in any other discipline if, after a century of enquiry, we still had no agreement about what psychological measures we should apply as a matter of course. I am not talking about basic demographics on people, which any organisation collects, but the agreed basics of psychological description. How about an agreed brief measure of personality and an agreed brief measure of intelligence? What else? Simple reaction times? We ought to be able to specify what sort of people we are studying in terms of psychological characteristics. We can then show some psychological linkage between different studies.

2 Agree to pay attention to previous research, and relevant research in related fields. Psychology has an alarming tendency to ignore previous work, particularly when terminology changes, which happens frequently. It often also ignores problems with measuring techniques, as if it were optional to attend to these matters. Few psychology researchers want to stop their work so as to repair instruments and recalibrate them properly. The experimentalists are much better at this than clinicians, but the former tend to have the smaller and less representative samples. They are interested in effects between different conditions, rather than whether young psychology students are a fair sample of humanity.

3 Agree to collaborate, and move from sole practitioners in cottage industries to more systematic large-scale research projects. Academic advancement is based on “making a name for oneself”, which encourages apprentice-piece publications, repetition of papers each dealing with sub-sections of a data set, and anything which brings attention to a person. Engineering projects encourage far more team work, with a team or company name being more important than individuals. Could functional specialisms develop within research projects, rather than expecting every researcher to be a generalist? Would Psychology have a better future if it had more large projects? Imagine if the smallest publishable sample size was 500 persons: might that drive up the representativeness and reproducibility of results?

Please plagiarise this note.

Saturday, 10 August 2013

Jason Richwine and some Hispanic data

 

It was in a Woody Allen film that our earnest hero was waiting in line to watch a movie while having to listen to a poseur in front of him talking nonsense about the concept of “the medium is the message”. Frustrated at hearing this idea being misrepresented, Woody turns behind him, and gets the progenitor of the idea, Marshall McLuhan, to step forward and give an authoritative explanation of what he really meant, thus putting the braggart in his place. Woody turns to the camera and says “If only real life were like this”.

It just so happens I can help correct some of the views being expressed by random commentators on the Jason Richwine affair. Heiner Rindermann and I have written a paper on this very topic of scholastic achievement which is online at Intelligence. The paper version will be out in December, but here is a preview.

http://www.sciencedirect.com.libproxy.ucl.ac.uk/science/article/pii/S0160289613000895

Ability rise in NAEP and narrowing ethnic gaps? Heiner Rindermann and James Thompson.

Abstract

US National Assessment of Educational Progress (NAEP) results from 1971 to 2008 enable four different effects to be distinguished: cohort rise effects, gap-narrowing between ethnic groups, and trends due to demographic changes in ethnic groups listed or not listed by NAEP. NAEP means and percentiles in reading and mathematics were transformed to conventional IQs and SDs. The total increase from 1971 to 2008 was in the scale of 4.34 IQ points (dec = 1.17 IQ per decade). The ability distribution became more homogenous (down from SD = 15.00 in 1971 to SD = 13.56 in 2008). Increases were larger for younger students (9-year olds: 2.02 IQ per decade; 13-year olds: 1.20; 17-year olds: 0.30); larger at the lower ability level (10th percentile dec = 1.79 vs. 90th percentile dec = 1.03). The largest increase was for Blacks (Whites dec = 1.29 IQ, Hispanics 2.27, Blacks 3.04). White–Hispanic differences were reduced from 11.59 to 8.46 IQ, White–Black from 16.33 to 9.94 IQ. If the racial composition of the population had not changed, the mean gain for the 17-year-old group would have been 2.47 IQ points higher. Had the gap between Whites and the two other groups not narrowed, the mean gain would have been 1.70 IQ points lower. Demographic change has accounted for a loss of 2.47 IQ points and, according to cognitive human capital theory, $2001 GDP per capita per year, but total ethnic gap-narrowing has provided a gain of $1377.

Keywords

  • Intelligence;
  • Flynn effect;
  • White–Black-differences;
  • Human capital

 

[Figure 1 from the paper: ethnic gaps in NAEP over time]

 

The snapshot shows that the gap between the blue line for European Americans and the red line for Hispanic Americans narrowed in the early 1980s, but remains substantial. Same picture for African Americans.

In summary, if one uses scholastic data as a measure of ability, then there was some significant narrowing of the White/Hispanic gap in the early 1980s, but the reduced gap has persisted thereafter. This finding discomforts those who predicted that the gap would never change, and those who said it was closing fast and would shortly disappear. Both are partly wrong, and partly right. Funny thing, facts. The same happened to African-American scholastic achievement, which had been the focus of much attention, Hispanic achievement somewhat less so.
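For readers unfamiliar with the metric, expressing a scholastic gap “in IQ points” is just the standard rescaling to a mean of 100 and a standard deviation of 15. A minimal sketch of the idea, with made-up NAEP-style numbers (this is not the exact procedure used in the paper):

# Minimal sketch of re-expressing test-score differences on the conventional IQ
# metric (mean 100, SD 15). The scores below are invented for illustration; this
# shows the general idea, not the exact procedure used in the Rindermann & Thompson paper.
def to_iq(score, reference_mean, reference_sd):
    """Convert a raw score to the IQ metric, relative to a reference group."""
    z = (score - reference_mean) / reference_sd
    return 100 + 15 * z

white_mean, sd = 266.0, 36.0      # hypothetical reference mean and SD
hispanic_mean = 241.0             # hypothetical comparison mean
gap_iq = to_iq(white_mean, white_mean, sd) - to_iq(hispanic_mean, white_mean, sd)
print(f"Gap expressed in IQ points: {gap_iq:.1f}")   # (266 - 241) / 36 * 15 = 10.4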

Friday, 9 August 2013

Jason Richwine, and an unclaimed bottle of rich wine

 

Unaccountably, no-one has taken up my challenge issued on 10th May of this year.

“So here is the challenge: a bottle of fine French wine sent to the first person who can show that Hispanic/Latino American intelligence and scholastic ability is on the same level as European American intelligence and scholastic ability. Data please.”

http://drjamesthompson.blogspot.co.uk/2013/05/jason-richwine-and-bottle-of-rich-wine.html

Perhaps this blog has too select an audience, such that my readers already know the literature, and need take the matter no further. Accordingly, I must ask you to spread the word to those who are not drawn to musings about psychometric research, in the hope of getting some data-based responses.

I was reminded of this four month silence by yesterday’s thoughtful article by Jason Richwine, “Why can’t we talk about IQ?” in which he concludes:

“What causes so many in the media to react emotionally when it comes to IQ? Snyderman and Rothman believe it is a naturally uncomfortable topic in modern liberal democracies. The possibility of intractable differences among people does not fit easily into the worldview of journalists and other members of the intellectual class who have an aversion to inequality. The unfortunate — but all too human — reaction is to avoid seriously grappling with inconvenient truths. And I suspect the people who lash out in anger are the ones who are most internally conflicted.

But I see little value in speculating further about causes. Change is what’s needed. And the first thing for reporters, commentators, and non-experts to do is to stop demonizing public discussion of IQ differences. Stop calling names. Stop trying to get people fired. Most of all, stop making pronouncements about research without first reading the literature or consulting people who have.

This is not just about academic freedom or any one scholar’s reputation. Cognitive differences can inform our understanding of a number of policy issues — everything from education, to military recruitment, to employment discrimination to, yes, immigration. Start treating the science of mental ability seriously, and both political discourse and public policy will be better for it.”

Read more: http://www.politico.com/story/2013/08/opinion-jason-richwine-95353.html

I think he has asked a good question, and suggested some good answers. What do you think?

Wednesday, 7 August 2013

Science is not your enemy (but it is a competitor)


Steven Pinker has written “Science is not your enemy: An impassioned plea to neglected novelists, embattled professors, and tenure-less historians” in the New Republic.

http://www.newrepublic.com/article/114127/science-not-enemy-humanities#

Like any good neighbour trying to defuse an argument between angry householders about boundaries,  he is friendly, conciliatory, diplomatic and seeking a way forwards.

He begins with flattery: “The great thinkers of the Age of Reason and the Enlightenment were scientists.” This is not quite right. He then goes on to argue that thinkers like Descartes, Spinoza, Hobbes, Locke, Hume, Rousseau, Leibniz, Kant, Smith “are all the more remarkable for having crafted their ideas in the absence of formal theory and empirical data.” Describing these thinkers in the modern idioms of “cognitive neuroscientists”, “evolutionary psychologists” and “social psychologists”, even with helpful explanations appended, slightly compounds the problem. These powerful thinkers were able to span different domains of knowledge, as powerful thinkers still do today, by being more interested in questions than in subject boundaries. The named thinkers did not know they were “scientists” because the word was not coined until William Whewell did so, satirically, in 1833 (to no general acclaim, because it sounded too much like “atheist”). He invented it without enthusiasm, because he was actually complaining about the fact that knowledge at that time was subject to “an increasing proclivity of separation and dismemberment”.

Dismemberment of knowledge is what we have been left with. A pity. Balkanisation leads to conflict, or recognises that there are conflicts which cannot be resolved without separating the contending parties. C.P. Snow and S. Pinker are right to bemoan it.

In partial defence of specialisation, sometimes knowledge requires advance parties, who rush ahead looking for interesting things, while the rest follow far behind. So long as the explorers are willing to wait for others to catch up, recounting their findings to the laggards, all is well in the house of knowledge.

Herein lies the problem which Pinker did not address. In polite and courteous company it is not seemly to allude to the fact that some people think faster and wider and more powerfully than others. Apart from doing science, scientists also read novels, history books, biographies, and watch cinema. They also write novels and poems and ride motorcycles. Non-scientists also read science, but that traffic is probably not as frequent, and is usually restricted to popular science with the maths left out,  whereas everyone reads novels and watches films together.

The separation comes about because of difficulty. There is no way round the fact that STEM subjects (Science, Technology, Engineering and Maths) are harder. School children know that maths and science are more difficult to learn. There is always a large appreciative audience if a child says that they find maths difficult. There is less sympathy if a child admits to finding geography difficult.

The difference in choice of subjects according to levels of intellect has been laid bare by Lubinski and Benbow. I will keep repeating this slide.
[Slide: Lubinski and Benbow — average mathematical, spatial and verbal ability profiles by field of study]
People in Engineering, Physical Science and Maths are brighter overall than those in the Humanities, Biological and Social Sciences and the Arts. This is particularly true of mathematical skills and spatial skills. Biological and Social Scientists and Humanities scholars aren’t stupid, but they are relatively low on spatial visualisation (which seems to be very helpful to scientists) and they are generally outclassed in mathematics. (Lord Kelvin: Do not imagine that mathematics is harsh and crabbed, and repulsive to common sense. It is merely the etherealisation of common sense. Mathematics is the only true metaphysics.) The groups are far closer together on verbal skills, which means they can talk and argue with vigour, but the scientists will often have their spatial and mathematical hands tied behind their backs if there is to be any conversation on serious subjects. It is permissible to complain: “Don’t blind me with science” but less common to complain of being blinded by ordinary arguments.

Bluntly, without some background in scientific methods, all sorts of mistakes can be repeated ad nauseam. (They argue without proofs, and errors are the result.) Without the ability to count, judgments are as impressionistic as mediaeval accounts of the size of armies. Science offers a way to correct some errors of understanding: it is simply a way of avoiding some common mistakes in making sense of the world. Among the rules of thumb: not basing everything on one’s individual viewpoint; asking for hypotheses to be specific, testable, and stated in advance; requiring that measurements  can be checked by others; and all this to be carried out in an ethos of open-mindedness and sharing of results and methods, so that errors are corrected quickly. Of course, it is not always like that, because it is being carried out by humans, even if those humans have set themselves tasks formerly assigned to the gods.

Pinker’s friendly effort reminds me of an excursion into the public understanding of science which the Royal Society initiated about a year ago. RS President Paul Nurse was roped in to convince people that scientists were human, that science was fun, and that more people ought to try it. It was a bit like having a priest organise a junior Church dance while promising boogie-woogie rock and roll. A painful episode of “Dad-dancing”, it seemed to me. They would have done better to just keep showing Sir Paul on a motorbike (no image available). Any child knows that anything a parent describes as fun will be boring, because if it really was fun parents would try to talk you out of it. “Don’t ride a big motorbike like scientists do” should be a public health warning (and a recruiting strategy).

Anyway, back to making friends between arts and science folk, and bridging the two-cultures divide. Why bother? To many Arts/Humanities practitioners, Science is certainly the Enemy. It shows them up, makes them feel stupid and, what’s worse, scientists get to play with bigger and better toys. Scientists get to talk about discoveries, because they are our explorers. The arty crowd are right to feel that the lab workers and nerds are winning the competition. Quite properly, governments need science, and understand they have to pay for it. Artsy people will keep doing their stuff anyway, even when there is neither pay nor market for their scribblings. Science pushes ahead precisely because it is much harder to prove that something has been discovered, so whatever survives sustained attack is accepted, temporarily, as probably true. “Probably true” is as good as it gets. Those “probably true” findings are propelling us forwards.

In summary, accept that the two cultures are in competition, and science is currently ahead in funding.
Disclosure: Keele University offered a one-year Foundation Course, which gave equal weight to science and arts subjects, and in the subsequent three years required all students to take an Arts subject if they majored in Science, and a Science subject if they majored in Arts. So, in my case, joint Honours in Psychology and Philosophy, with minor subjects in Physics and English. None of my teachers are responsible for my opinions, but they probably contributed quite a lot to them.

Tuesday, 6 August 2013

Warning: Distressing content which may damage you

 

What possessed you to read this item? Do you really think you can handle it? It might cause you permanent distress and unhappiness, even illness. For example, the sort of problems you might get if you exposed yourself to media coverage of dreadful events, like terrorist atrocities and wars. You know what I mean. Stuff you see on TV, without switching it off. Stuff you continue to watch, even when the announcer says that some of the content is distressing or “might disturb some viewers”. By the way, it is not too late to give up reading this item.

http://pss.sagepub.com.libproxy.ucl.ac.uk/content/early/2013/08/01/0956797612460406.full.pdf+html

Mental- and Physical-Health Effects of Acute Exposure to Media Images of the September 11, 2001, Attacks and the Iraq War. Silver, Holman, Andersen, Poulin, McIntosh, and Gil-Rivas. Psychological Science, published online 1 August 2013, doi:10.1177/0956797612460406

Silver et al. have done a study on a perfectly reasonable sample, and here is their abstract:

Millions of people witnessed early, repeated television coverage of the September 11 (9/11), 2001, terrorist attacks and were subsequently exposed to graphic media images of the Iraq War. In the present study, we examined psychological and physical-health impacts of exposure to these collective traumas. A U.S. national sample (N = 2,189) completed Web-based surveys 1 to 3 weeks after 9/11; a subsample (n = 1,322) also completed surveys at the initiation of the Iraq War. These surveys measured media exposure and acute stress responses. Posttraumatic stress symptoms related to 9/11 and physician-diagnosed health ailments were assessed annually for 3 years. Early 9/11- and Iraq War–related television exposure and frequency of exposure to war images predicted increased posttraumatic stress symptoms 2 to 3 years after 9/11. Exposure to 4 or more hr daily of early 9/11-related television and cumulative acute stress predicted increased incidence of health ailments 2 to 3 years later. These findings suggest that exposure to graphic media images may result in physical and psychological effects previously assumed to require direct trauma exposure.

In fact, there is a previous clinical literature showing that TV can trigger disturbances which meet the diagnostic criteria for PTSD, and the authors list some of the other papers showing a “PTSD by TV” effect. That aside, they have a sizeable and nationally representative sample, so the usual problems of self-selected clinical samples do not apply.

What strikes you most about the abstract? To my eyes the key phrase is “Exposure to 4 or more hr daily of early 9/11-related television and cumulative acute stress predicted increased incidence of health ailments 2 to 3 years later”. Four or more hours daily of television about a dreadful event is a lot of television viewing. One day might be understandable. Who would do such a thing for several days? One should distinguish between an early acute phase, in which most citizens take in the event itself and the new hazard it represents, and a longer phase, in which most citizens have stopped watching disaster coverage and returned to their lives, still with the additional threat in mind to some degree. Finding out about bad things is part of reality, and assists in planning avoidance and coping reactions. Repeated exposure to tragedy is unlikely to help planning of any sort.

Looking at the paper itself, it is immediately apparent that the stress estimates are based on self-report. These need not be wholly inaccurate, but they are hardly conclusive; studying a sub-sample with a telephone or face-to-face interview would have been more convincing. On a more general methodological point, the more anxious, “vigilant”, danger-signal-seeking persons (the sort who might watch lots of frightening news coverage) might be more likely to have anxiety responses, worry about their health, and consult a physician. The authors controlled for pre-exposure mental-health issues, so they have a partial corrective, but they do not have, or do not use, personality data which could be highly relevant in this context. The odds ratio of distress for those with pre-9/11 mental-health ailments was 1.37 (95% CI [1.20, 1.56], p < .001). The odds ratio for those watching more than 4 hours per day of 9/11-related TV in the week after 9/11 was 1.57 (95% CI [1.15, 2.14], p = .004). The odds ratio for sex was 1.49; as usual, women were more vulnerable.
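For readers who want to see where such figures come from, here is a minimal sketch of how an odds ratio and its 95% confidence interval are computed from a 2×2 table. The cell counts below are purely hypothetical, chosen only for illustration; they are not taken from Silver et al.

```python
import math

# Hypothetical 2x2 table (NOT the study's data):
# rows = exposed (4+ hours of coverage) / unexposed; columns = ailment / no ailment
a, b = 40, 160    # exposed:   ailment, no ailment
c, d = 55, 345    # unexposed: ailment, no ailment

odds_ratio = (a * d) / (b * c)

# 95% CI via the standard error of the log odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

The interval is built on the log scale, because the log odds ratio is approximately normally distributed, and then exponentiated back; an interval that excludes 1 corresponds to a conventionally “significant” result.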

I think that the simplest explanation is that pre-event mental health problems made people more vulnerable, and extensive self-exposure to TV about the event made them even more vulnerable. Perhaps the abstract should have read:

These findings suggest that prior vulnerabilities, plus self-exposure to graphic media images, may result in self-reported physical and psychological effects, more so in women.

Not quite so stirring a finding when you put in these explanations, is it?

I think that when research relies on well-standardised samples of respondents, the analysis of causal factors would be strengthened if those respondents were also tested for personality and intelligence, both well known to be important factors in PTSD vulnerability and other illness behaviours.
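To make that suggestion concrete, here is a rough sketch, using entirely synthetic data and the statsmodels library, of how personality and intelligence scores could be entered as covariates alongside media exposure when predicting later ailments. All variable names and numbers are invented for illustration and bear no relation to the Silver et al. dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Entirely synthetic illustration -- none of these numbers come from Silver et al.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "tv_hours":    rng.poisson(2, n),           # daily hours of event-related coverage
    "prior_mh":    rng.binomial(1, 0.15, n),    # pre-event mental-health ailment (0/1)
    "neuroticism": rng.normal(0, 1, n),         # personality covariate (hypothetical)
    "iq":          rng.normal(100, 15, n),      # intelligence covariate (hypothetical)
    "female":      rng.binomial(1, 0.5, n),
})

# Hypothetical outcome: later physician-diagnosed ailment, generated so that
# exposure, prior vulnerability, personality and sex all contribute
logit_p = -2 + 0.15*df.tv_hours + 0.5*df.prior_mh + 0.4*df.neuroticism + 0.3*df.female
df["ailment"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression with the extra covariates included; exponentiated
# coefficients are adjusted odds ratios
model = smf.logit("ailment ~ tv_hours + prior_mh + neuroticism + iq + female",
                  data=df).fit()
print(np.exp(model.params))
```

If heavy viewing still predicted later ailments once personality and intelligence were entered, the authors’ causal claim would stand on firmer ground; if it did not, the “vigilant viewer” explanation would gain support.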

Meanwhile, if you find that your health deteriorates after reading this little item, you have only yourself to blame. You ignored a clear warning. You are on your own.