Thursday, 22 August 2013

g (Wechsler example, illustrating general principle)


This is a snapshot of the hierarchical structure of intelligence, in this case on the Wechsler tests, but the same pattern is found in over 400 extensive databases. Leaving aside the detail of individual tests, this shows a “positive manifold”: all tests are positively correlated with each other. Rather than one skill taking up brain space so that less is left for other skills, it looks as if the brain is a central processor, able to turn its power onto a wide range of mental tasks.
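The positive manifold and a rough g extraction can be sketched numerically. The correlation matrix below is purely illustrative (it is not taken from actual Wechsler norms); the loadings on the first principal component serve as a crude stand-in for g loadings:

```python
import numpy as np

# Hypothetical correlations among four subtest domains
# (verbal, perceptual, working memory, processing speed).
# Values are illustrative only, not Wechsler data.
R = np.array([
    [1.00, 0.60, 0.55, 0.45],
    [0.60, 1.00, 0.50, 0.40],
    [0.55, 0.50, 1.00, 0.45],
    [0.45, 0.40, 0.45, 1.00],
])

# Positive manifold: every off-diagonal correlation is positive.
assert (R[~np.eye(4, dtype=bool)] > 0).all()

# Crude g estimate: loadings on the first principal component
# (eigenvector scaled by the square root of its eigenvalue).
vals, vecs = np.linalg.eigh(R)           # eigenvalues ascending
g_loadings = np.abs(vecs[:, -1]) * np.sqrt(vals[-1])
print(np.round(g_loadings, 2))
```

In a proper analysis one would fit a hierarchical or bifactor model rather than take raw principal components, but even this crude extraction shows every subtest loading substantially on a single common factor.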


  1. those g loadings on (auditory) working memory & processing speed look suspiciously high. with other tests (& depending what tests are in the mix!) we've often gotten better model fits by breaking the high-g stuff down into verbal, 2D nonverbal (matrix type tasks) & 3D Spatial (hands-on). then the (relatively) independent, less cognitively complex stuff, like "processing speed" (& other mixed bags). btw, I just read a wonderfully succinct & humorous comment of yours over at harpending-cochran's site - well-played, sir! (re: swedes, depression, running & ethiopians:)

  2. The Wechsler loadings are simply one example of the general trend. Would be interested to see your analysis of other tests. Glad you enjoyed my comment at West Hunter.

  3. I'm a "g" man all the way. tho some of the lesser factors are "factors of opportunity" - e.g., back when the ol' wechsler had "freedom from distractibility" (FFD: arith, dig span & coding), we knew from other tests that when they added "symbol search" it would run off with coding to form its own factor (PSI - meh) breaking up the ol' FFD (which was never real to begin with:) i prefer composites like the wechsler's GAI which leave out the lower g tasks (lower g tasks often regress to the mean from the higher g tasks), but they do all positively correlate (tho for individuals, it's often better to separate out the high g skills from their lower g "processing" skills to interpret their pattern - test companies are s/w oblivious, being mainly concerned with making $ (& in the race to be PC in the early 90s they stopped releasing their dang data broken down by race - bad for bid'ness:)

  4. Wechsler tests have certainly gone somewhat odd. They keep changing their structure from one edition to the next, very probably because it boosts business. A new generation of users seems to think that they can pick subtests at their whim, and that three or four of their choice are as good as doing the full 10 core subtests. I did not know they had withdrawn their race differences data, though now that I think about it, it is no longer in the manual. Of more concern is that some of the subtests are rather "lumpy": there are fewer items at various points in the difficulty range, and some subtests in the memory scales are far too difficult to give in clinical settings. Nonetheless, the overall result obtained from the full 10 core subtests is still a very good estimate of g.

  5. amen, that's the thing about "g" - it gets collected no matter the test & no matter how badly spaced item difficulty level intervals are! I like the DAS-II (for having relevant g-loaded stuff & a few useful/relevant low g tasks) b/c some subtests of other tests (e.g., wechsler's symbol search, etc.) don't always measure relevant interpretable things - the WJ-III has some wacky ones - but i end up using them all at some point, b/c we're stuck with the tests we have & not the ones we want:) we used to say measuring ability is like measuring someone's height by having them walk next to a fence while we flog them with sticks at different height levels, then by the time they're at the end of the fence we have a reasonably good estimate of their height!
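The claim above, that g "gets collected" even when item difficulties are badly spaced, can be checked with a toy simulation. Everything here is made up for illustration: a Rasch-style response model, abilities drawn from a standard normal, and difficulties deliberately bunched at the easy end with big gaps elsewhere. The total score still correlates strongly with the latent ability:

```python
import numpy as np

rng = np.random.default_rng(0)

# 2000 simulated test-takers with latent ability ~ N(0, 1).
ability = rng.normal(size=2000)

# Badly spaced item difficulties: 15 items bunched at the easy
# end, then a few lonely hard items. Values are illustrative.
difficulty = np.concatenate([np.full(15, -1.0),
                             np.array([0.5, 2.0, 2.5])])

# Rasch-style model: P(correct) = logistic(ability - difficulty).
p = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = rng.random(p.shape) < p

# Despite the poor item spacing, the raw total score still
# tracks latent ability strongly.
total = responses.sum(axis=1)
r = np.corrcoef(total, ability)[0, 1]
print(round(r, 2))
```

Poor spacing costs measurement precision at particular ability levels, but as long as the items all tap the same latent trait, the sum score remains a serviceable estimate, which is the fence-and-sticks point in miniature.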

    1. Thanks for your observations. WJ seems to be gathering quite a following at the moment. I need to find out about it. I like the analogy of finding someone's height by the stick method, though it is probably closer to the mark to say it is done by seeing which of the twigs have been dislodged after the person has walked by. You may know the paper by Hernández-Orallo & Dowe (2010), "Measuring universal intelligence: Towards an anytime intelligence test", Artificial Intelligence, 174, 1508–1539.
      I found it interesting and helpful as a way of understanding the challenges we face in testing intelligence.