Thursday, 15 May 2014

Woodley leads with an abstract

 

Handing over his long Edwardian frock coat and gold pince-nez to Heitor Fernandes and Aurelio José Figueredo, his Latin seconds, Woodley bounded into the ring and opened the first round in traditional fashion, with a well-managed and elegant abstract, which I replay for you below.

 

 

The Victorians were Still Cleverer than us: Expanding the Dysgenic Nexus

Michael A. Woodley [1], Heitor B. F. Fernandes [2], & Aurelio José Figueredo [3]

[1] Center Leo Apostel for Interdisciplinary Research, VUB, Belgium.

[2] Departments of Psychology and Genetics, Federal University of Rio Grande do Sul, Brazil.

[3] Department of Psychology, University of Arizona, USA.

It is theorized that dysgenic effects and the Flynn effect co-occur, with the former concentrated on highly heritable g (Woodley & Meisenberg, 2013) and the latter on less-heritable non-g sources of IQ variance (te Nijenhuis & van der Flier, 2013). Evidence for this comes from the observation that 19th century populations were more intellectually productive (Huebner, 2005; Murray, 2003; Woodley, 2012; Woodley & Figueredo, 2013), and also exhibited faster reaction times than modern ones (Woodley et al., 2013), suggesting that g has declined independently of any subsequent improvements that may have occurred with respect to narrower cognitive abilities. We conduct a new test of this model by examining historical changes in the frequencies of the utilization of words from the highly g-loaded WORDSUM test across 5.9 million texts spanning from 1850 to 2005. We find, consistent with predictions, that the item-level difficulties (δ parameters derived from Item Response Theory analysis) of these words predict the degree to which the words decline in use over time even when word obsolescence and temporal autocorrelation are explicitly controlled using Multi-Level Modelling (the interaction of word difficulty with time negatively predicted word frequencies – b = -.09; semipartial r = -.09; the time variable was log-transformed). When considered independently, predicted year-on-year word frequency trends furthermore revealed that the four high-difficulty (and presumably more g loaded) words trended negatively across time whereas the six low-difficulty words exhibit no systematic changes in utilization with time. Given that the populations from which WORDSUM participant birth cohorts are sampled are known to have been in persistent dysgenic fertility since 1900 (Lynn & van Court, 2004; van Court & Bean, 1985), we interpret these trends as evidence that contributors to texts and their target audiences might have experienced dysgenic declines in general intelligence since the mid-19th century. These new findings increase the breadth of the nomological net of historiometric, psychometric and evolutionary biological findings indicating persistent dysgenic declines in g amongst Western populations since the mid-19th century.
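
For readers who want to see the shape of the analysis, here is a minimal sketch of the kind of multi-level model described in the abstract, written in Python with statsmodels. It is an illustration only, not the authors' code: the data file, column names and variable codings are assumptions, and the published analysis may differ in detail.

```python
# A rough sketch (not the authors' code) of the model structure the abstract
# describes: WORDSUM word frequencies regressed on the interaction of IRT item
# difficulty with (log) time, with a word-obsolescence proxy as a covariate and
# random intercepts/slopes per word. All file and column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per WORDSUM word per year (1850-2005).
#   freq       - word frequency in the text corpus for that year
#   difficulty - the word's IRT delta (item difficulty) parameter
#   first_use  - year the word entered the language (obsolescence proxy)
df = pd.read_csv("wordsum_frequencies.csv")  # hypothetical input file

df["log_time"] = np.log(df["year"] - 1849)   # log-transformed time, as in the abstract

# Fixed effects: difficulty, log time, their interaction, and obsolescence;
# random intercept and random slope on log time for each word.
model = smf.mixedlm(
    "freq ~ difficulty * log_time + first_use",
    data=df,
    groups=df["word"],
    re_formula="~log_time",
)
result = model.fit()
print(result.summary())  # the difficulty:log_time term is the key test
```

The key quantity is the sign of the difficulty-by-time interaction, which the abstract reports as negative (b = -.09): more difficult words declined more steeply in use.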

 

In round 2 Woodley will lay out his case in detail, beginning after the one-minute rest interval (which in publishing time, rather than boxing time, means about a week or two).

20 comments:

  1. How does he account for the possibility that increased access (higher literacy rates, say) explains these results?

    1. He claims that literacy rates have not changed between 1850 and the present (!), which is nonsense.

      He also does NOT control for word obsolescence. Well, he tries to (by controlling for the years that the words were first used), but of course that's not going to be terribly well related to the rate at which words become obsolete.

      Despite its flaws, I regard this as an interesting paper, and it does support relatively low Flynn effects on crystallized intelligence (especially vocabulary).

    2. Michael A. Woodley, 16 May 2014 at 15:36

      [First comment]

      Nobody is claiming that literacy rates are the same today as in 1850. What I (and others) have claimed, however, is that high-level literacy rates have declined.

      This is evident from comparing the consumption patterns of historical and contemporary populations in terms of literature requiring high-level prose literacy for comprehension.

      Mahajan (2011) found that Thomas Paine's 1776 work "Common Sense" sold over 500,000 copies within a single year of publication. The Colonial American population was approximately 2.5 million, which means that the work was owned by over 20% of this population. 20% of the US population in 2011 equates to 60 million – modern books which have surpassed this number in terms of sales have generally only done so over a much longer period of time, i.e. eight years in the case of international bestseller "The Da Vinci Code". Mahajan concludes with the observation that the complexity of the text in "Common Sense" required that at least 20% of the Colonial American population would have attained the highest level of prose literacy, which the National Assessment of Adult Literacy defines as “reading lengthy, complex, abstract prose texts as well as synthesizing information and making complex inferences.” By contrast only 13% of the 2003 US population rank equivalently, suggesting a 35% decline in high-level literacy since 1776.

      Another example is Charles Dickens' 1859 "A Tale of Two Cities", which was serialized in cheap weekly installments and sold over 100,000 copies a week (five million within a year), corresponding to an ownership rate of around 25% of the British population in 1860 (20 million people). As with "Common Sense", "A Tale of Two Cities" contains considerable amounts of complex prose, and would similarly require high-level literacy for comprehension. If we assume that the 2003 US estimate of top-tier prose literacy (i.e. 13%) is roughly equivalent to the value for the UK, then this suggests that the proportion of high-level literates in the UK has dropped by around 50% since 1859.
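
      The arithmetic behind these two comparisons can be set out explicitly; the following is only a back-of-the-envelope check using the figures quoted above.

      ```python
      # Back-of-the-envelope check of the two literacy comparisons, using only
      # the figures quoted above (an illustration, not a formal analysis).
      common_sense_copies  = 500_000      # "Common Sense" copies sold within a year
      colonial_population  = 2_500_000    # approximate Colonial American population
      tale_copies_per_year = 5_000_000    # "A Tale of Two Cities", first-year sales
      british_population   = 20_000_000   # Britain, c. 1860
      modern_top_literacy  = 0.13         # top-tier prose literacy, US NAAL 2003

      ownership_1776 = common_sense_copies / colonial_population    # 0.20
      ownership_1859 = tale_copies_per_year / british_population    # 0.25

      decline_since_1776 = 1 - modern_top_literacy / ownership_1776  # ~0.35
      decline_since_1859 = 1 - modern_top_literacy / ownership_1859  # ~0.48, "around 50%"

      print(f"Implied decline since 1776: {decline_since_1776:.0%}")
      print(f"Implied decline since 1859: {decline_since_1859:.0%}")
      ```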

      There are numerous other examples. Basic literacy rates therefore matter little, as historically people unable to read could, and indeed did, routinely avail themselves of literate family members or friends, and also of public readings of popular works - an immensely popular activity in the 19th century.

      What matters therefore in terms of inferring the capabilities of the population is the quality of what is being written and widely consumed, which appears to have declined consistent with explicit predictions derived from my thesis.

      Reference

      Mahajan, S. (2011). Were colonial Americans more literate than Americans today? Freakonomics: The Hidden Side of Everything. http://freakonomics.com/2011/09/01/were-colonial-americans-more-literate-than-americans-today/

    3. Michael A. Woodley, 16 May 2014 at 16:11

      [Second comment]

      As to the charge that my co-authors and I improperly controlled for word obsolescence, if the commenter has a better measure in mind then we would be interested in hearing about it.

      As it happens, our control for obsolescence seems to do the job of measuring the parameter admirably. It is logical to assume that older words will be more sensitive to the forces of semantic drift, and will therefore run the risk of simply disappearing from everyday usage. Thus older words might be 'difficult' only because they are presently more obscure and thus restricted to the technical specialist vocabularies of, say, Chaucer scholars.

      That our operationalization of this parameter is at least somewhat valid would seem to be evidenced by the results of our Multi-Level Model (MLM), which indicated that it predicted changing WORDSUM word frequencies independently of both their difficulties and the residual intercepts and temporal slopes of the words, which capture everything influencing frequency counts that relates to neither difficulty nor obsolescence.

      The same MLM revealed that word difficulties predicted frequency changes (in the theoretically expected direction, i.e. more-difficult words decline to a greater extent than less-difficult ones) independently of both obsolescence and the 'everything else' variance.

      Even if we had improperly operationalized word obsolescence, it would not have mattered, as the missing residual variance would simply have been captured by the 'everything else' variance which our modelling procedure allows us to explicitly control.

    4. Michael,

      1. Of course basic literacy rates are important. In 1840, half of British women and one-third of British men could not write their own names. Clearly the literature of those times, therefore, was geared towards a relative elite. While public readings may have been popular sources of entertainment, I am certain that much of the working class did not consume literature in any form at all.

      2. It is entirely possible to consume complex media without fully understanding it: merely to enjoy it on a "surface" level. In the modern day, television shows aimed at very general audiences often have quite complex plots. Undoubtedly, the average person would be unable to summarize these plots in very much detail (or to understand the more sophisticated aspects of the show's character development, symbolism, and so forth). But s/he could still enjoy it on a surface level, despite not fully comprehending it. I believe that this is the same relation the British working classes had to the novels of, e.g., Charles Dickens. This is the case with many historical texts that were aimed at popular audiences: can you believe that Elizabethan peasants fully understood the works of Shakespeare? Moreover, one can read works that contain difficult words without understanding the words involved. For example, when I was six I greatly enjoyed Poe, but if I had been given a vocabulary test drawn from Poe's works at that age, I would probably have done quite poorly.

      3. Actual measured vocabulary (as adjudged by the Army Alpha, the WAIS and WISC, WORDSUM itself, etc.) has risen (along with everything else). Moreover, conclusions about trends in vocabulary based on an N of 4 words are obviously quite suspect.

      4. I think that you (speaking directly to Michael here) might well seize on any test or measurement that had decreased, and treat it as a "purer" measure of g. What if Coding (from the Wechsler) had decreased? Well, Coding must be a "pure" measure of g!

      Let's test the "co-occurrence model" as follows:

      Below is a list of ten g-related variables. Some have shown Flynn effects, some have shown anti-Flynn effects, and some are not known (or have contradictory evidence). Please predict, for each one, WHETHER it has shown a Flynn or anti-Flynn effect, and WHY.

      I. Learning speed of military recruits
      II. Prevalence of superstitious beliefs
      III. Sophistication of Congressional speeches
      IV. Accident rates in jobs
      V. General level of reading comprehension
      VI. Standards of universal (primary) education
      VII. Fine motor/psychomotor skills
      VIII. Piagetian staging
      IX. Sensory discrimination ability
      X. Long-term memory

    5. @Elijah

      I don't know whether you read this at the time?

      http://charltonteaching.blogspot.co.uk/2013/05/extraordinary-claims-require.html

      I don't see any benefit from continued methodological quibbling. What I am still waiting for is counter-evidence that would tend to refute the claim of a significant and rapid reduction of intelligence in England.

      As I said "If a claim is both 'extraordinary' and also wrong, it must be trivially easy to refute."

    6. Yes, I read that article. I didn't like it much. Extraordinary claims are referred to as such because they are massively anomalous, i.e., they postulate wholly new phenomena that are hard to integrate into the remainder of science (ESP, God, objective state reduction in QM), or because they are extremely prima facie implausible (general relativity, 1 SD decline in g). These objections are only probabilistic and CAN be overcome, but they are often NOT overcome.

      My objections are not "methodological quibbling"; they represent basic standards of scientific rigor. As it stands, this paper represents little more than an interesting hypothesis. A blog post.

      I have also provided Michael with an opportunity to verify his model, by listing ten possible tests of it above.

      I agree with you and MAW that general intelligence, the biological substratum that causes the positive manifold between cognitive abilities, has decreased (though probably not 1 SD). However, I strongly doubt that the ability to learn, reason and re-apply information has decreased. Perhaps "raw" learning ability. The most direct evidence against this notion is, of course, the immense rise in actual IQ, and the moderate rise in academic achievement.

    7. @Elijah - "the biological substratum that causes the positive manifold between cognitive abilities, has decreased (though probably not 1 SD). However, I strongly doubt that the ability to learn, reason and re-apply information has decreased. "

      Do not get hung up on 1 SD - that is just a measurement, an estimate of effect size. Science is not probabilistic - truth has no distribution. Statistics are a misleading metaphor for science.


      But with regard to "the ability to learn, reason and re-apply information", I find it hard to imagine any way in which a slowing of simple reaction times from an average of about 180 ms (perhaps 200 ms) to 250 ms, or even 350 ms in some modern populations, could fail to correspond to a very significantly lower ability to learn, reason and apply information.

      The Victorian measurements seem to be close to the physiological minimum reaction time (about 150 ms, and probably not much less than that) - while most modern people are around twice as slow.

      This magnitude of change in sRT over about 150 years is stunningly large! The associated change in general intelligence must surely be of the same order - but of course its precise size is not measurable due to limitations in our conceptual understanding of g.
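
      Using only the figures quoted above, the size of the slowing can be expressed both in raw terms and relative to the physiological floor; this is a rough illustration, not a formal estimate.

      ```python
      # Proportional slowing implied by the reaction-time figures quoted above
      # (illustration only; the numbers are those cited in the comment).
      victorian_srt = 180          # ms, approximate Victorian mean simple RT
      modern_srts   = (250, 350)   # ms, range cited for modern populations
      floor_srt     = 150          # ms, approximate physiological minimum

      for modern in modern_srts:
          raw_ratio   = modern / victorian_srt
          above_floor = (modern - floor_srt) / (victorian_srt - floor_srt)
          print(f"{modern} ms: {raw_ratio:.1f}x slower overall, "
                f"{above_floor:.1f}x slower in the margin above the physiological floor")
      # 250 ms: 1.4x overall, 3.3x above the floor
      # 350 ms: 1.9x overall, 6.7x above the floor
      ```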

    8. Interesting then that IQ has increased by 2 SDs over the same period... (And yes, I know the increase is not g-loaded, and that much of it is due to test-wiseness, etc. It's still a damn big increase in real abilities. There are next to NO cognitive parameters where the dysgenic effect overwhelms the Flynn effect.)

    9. Excellent points, Elijah!

    10. Thanks. :)

  2. It is a bit difficult to understand what Woodley et al. did from just the above abstract - but having heard the details from the horse's mouth, I regard this as an absolutely *brilliant* piece of work - a really inspired bit of science.

    This is the key: "the four high-difficulty (and presumably more g loaded) words trended negatively across time whereas the six low-difficulty words exhibit no systematic changes in utilization with time."

  3. Cool. Again: get a few hundred people off the street to take a given reaction-time task, and one can get another few hundred people off the street to perform 20 of their standard deviations above that level, under different conditions that you control.

    You should probably email Woodley with a "let me Google that for you" link on the word "reliability."

  4. I am also very skeptical of the claim that literacy rates have not risen since 1850; I found this to be the oddest claim. It is certainly not true for southern European countries, where the majority of the population was illiterate in the 19th century. Maybe literacy rates were higher in England, but I doubt they were as high as today.

  5. I also agree that this is basically nonsense. The fact that people long ago used more fancy words than they do now says nothing about the overall intelligence level of the population, either then or now. And while I do agree that reaction time is a valid measure of intelligence, I still think it's impossible to compare reaction times of the "Victorians" (when the UK was almost exclusively a white British country) with reaction times today, with the UK being maybe 15-20% non-native British. Plus I can't imagine the testing devices from back then were very accurate. Though I would guess that Woodley has already addressed these criticisms of the "Victorians" study, so I know I'm a bit behind the curve there.

    1. Yes, he has replied thoroughly.

  6. Please look at previous postings here and published papers on this issue.

  7. I wonder how much of this study might plausibly be accounted for by a wider secular decline in deference towards intellectual as much as social elites. In a more demotic age we no longer have quite the taste for styles of language that risk excluding a part of our potential audience, and we no longer aspire to distinguish ourselves from those beneath us in quite the way we used to. The game we play has changed.

  8. Highly literate people back in the day were more literate than their peers today, yes, and, what is more telling, they wrote far better. But making the lower percentage of highly literate people in modern populations into a proxy for comparing intelligence between two ages is unintelligent. Any significant increase in literacy is bound to lower the percentage - not the numbers - of the highly literate, because it will mainly raise the illiterate to the semi-literate, just as exponentially raising the number of college students has only succeeded in lowering the highest standard of literacy.
    Common Sense was less a book than an anthem in support of a revolution, and the Boston mob was no friend of literacy.

  9. Here's a thought - the SAT exam, taken by college-bound seniors in the US (not exclusively, but by almost all of the top candidates), was "renormed" in 1995 by the College Board due to dropping scores. The test had previously been made somewhat easier in 1974, but I'm not sure there is any quantification of that. The process involved adding enough points to everyone at a combined 1490 or above on the old test to make their scores 1600 (the top score on each of the Math and Verbal sections is 800, max total 1600). In the middle of the table, scores were raised about 70 points. I have generally seen conversion factors of about 15 points to 1 IQ pt in the upper ranges. Scores currently are no higher than they were in the 1960s, which suggests that the top end of students has lost roughly 6 IQ points on average. Very crude, but…
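
    As a rough check on that conversion (using only the figures quoted above; the 1974 easing is left unquantified):

    ```python
    # Rough conversion of the 1995 SAT recentering into IQ points, using only
    # the figures given in the comment above (illustrative, not a formal estimate).
    recentering_points   = 70   # approx. combined SAT points added mid-table in 1995
    sat_points_per_iq_pt = 15   # cited conversion factor in the upper ranges

    iq_equivalent = recentering_points / sat_points_per_iq_pt
    print(f"1995 recentering alone: about {iq_equivalent:.1f} IQ points")
    # ~4.7 IQ points from the recentering alone; adding an allowance for the
    # unquantified 1974 easing gives the 'roughly 6 points' figure suggested above.
    ```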
