Wednesday, 26 December 2012

How to boost your IQ



Intelligence is sometimes disparaged in refined circles, but there’s always a market for anything which boosts IQ. Some of the most popular offers are effort-free: pills which contain fish oil or stimulants, tricks which boost memory, and simple ways of reading much faster. These draw in gullible crowds, who soon lose interest and move on to other pastimes. Of course, getting a free lunch is an understandable strategy. In Thomas Hardy’s Jude the Obscure, Jude is extremely disappointed to find out that books that teach you how to speak a foreign language do no such thing. He imagined they would reveal a set of tricks, and was dismayed to see they required large amounts of learning and repetition. Like Jude, most of us would like to find the shortcut to brilliance.

Of greater interest are the approaches which demand a lot of effort. For example, most people can remember 7 digits, plus or minus 2, as Miller’s famous paper put it. A good digit span test, with plenty of trials, is a very efficient, quick and dirty IQ test. Anyone who cannot remember more than 3 digits forwards very probably has severe learning difficulties. At the other end of the spectrum, high digits-forwards scores do not predict high intelligence very well: some duller respondents simply have larger short-term memory stores. (Digits-backwards, however, is a much better predictor of ability. Holding digits in memory while repeating them back in reverse order is more taxing, and more demanding of general intelligence.) Can the apparently fixed ability to deal with forwards digits be boosted?
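For the curious, a toy console version of the forwards test can be knocked up in a few lines of Python. It is only a sketch: a proper test presents spoken digits at one per second and gives several trials at each length, which this does not attempt.

```python
import random

def digit_span_trial(length):
    """Show a random digit string, then ask for it back; True if recalled exactly."""
    digits = "".join(random.choice("0123456789") for _ in range(length))
    input(f"Remember: {digits}   (press Enter when ready)")
    print("\n" * 40)                       # crude way to scroll the digits off screen
    return input("Type the digits back: ").strip() == digits

def forward_span(start=3, max_length=12):
    """Lengthen the string by one digit after each success; span = last length recalled."""
    span = 0
    for length in range(start, max_length + 1):
        if not digit_span_trial(length):
            break
        span = length
    return span

if __name__ == "__main__":
    print(f"Forward digit span on this run: {forward_span()}")
```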

There is a technique, though it takes an enormous effort to learn it. Digits can be learned in small groups and given a code. Thus, digit data get “chunked” and those chunks are remembered, and once unpacked they are converted back into the sequence of digits, so the end result is a higher effective digit span. Few people bother to do this (it takes about 20 months to perfect) and even those that succeed with digits get no advantage when tested with words. With great effort they have mastered a chunking task, but they have only advanced on a narrow front, and haven’t really increased their short term memory. Learning how to chunk notes is essential for music performance, chunking spatial configurations essential for chess, chunking apparently disparate symptoms essential for medical diagnosis, and chunking clues essential for forensic detection. Becoming an expert in any subject requires that you understand the underlying form by over-training in pattern recognition.
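As a purely illustrative sketch of the chunking idea, the few lines below recode a twelve-digit string into three labelled chunks. The “running time” code book is hypothetical, loosely in the spirit of the classic digit-span training studies; the point is only that three labels are easier to hold than twelve digits.

```python
def chunk(digits, size=4):
    """Split a digit string into fixed-size groups (the chunks)."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

def encode(group):
    """Give a 4-digit group a verbal label, e.g. '3492' -> '3:49.2 (a running time)'."""
    return f"{group[0]}:{group[1]}{group[2]}.{group[3]} (a running time)"

sequence = "349287561034"            # 12 digits: well beyond a normal span
chunks = chunk(sequence)             # ['3492', '8756', '1034'] -- only 3 items to hold
labels = [encode(g) for g in chunks]

print(chunks)
print(labels)
# Unpacking the 3 labels regenerates all 12 digits, so the *effective* span rises
# even though raw short-term memory capacity is unchanged.
```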

Nonetheless, if there was a way of training memory or attention and thus becoming permanently more intelligent it would be an attractive prospect. This high effort condition attracts hardier individuals, probably believers in what used to be called Protestant Self-Improvement (the sort of people who improve their French vocabulary while driving to work).

Does it work? Can one find a training result which generalises from a specific skill to general intelligence?
There is one technique which has come up with promising results. Under the name of N-back it combines plausibility with enormous effort, the key requirements of fanaticism. As an aside, any person who volunteers for a demanding and pointless task has to deal with cognitive dissonance: either “the task was useless and I am an idiot for having persisted with it” or “the task seemed useless but I derived great benefit from it”. In Social Psychology this dilemma is called effort justification. Make the entry ritual to a group absolutely disgusting (humiliation, abasement and ingestion of foul substances) and the poor victims value membership more highly than if entry is a formality. Useless psychological therapies have their committed adherents, particularly when the premises are silly and the sessions are expensive. Even training women in a pointless monitoring task helps them lose weight, presumably because they have to justify their efforts, and might as well do so by sticking to the recommended diets.

Back to N-back. Training starts with 2-back. A string of letters is presented continuously and all the subject has to do is indicate which of those letters they saw 2 back in the sequence. Call it remembering what happened two verbal items ago. The visual version of 2-back is to remember in which quadrant of the visual field the target item appeared two items ago. So far, so gentle. Next, both tasks are done simultaneously, so you have to remember letters two back whilst you are also remembering visual positions two back. This is very much harder. Susanne Jaeggi (University of Maryland) proposed this variant in 2003 and has been publishing since then on results which seem to show a positive boost to fluid intelligence. Now comes the sadistic part. None of this is of any use to man or beast till you can get the poor subjects trained up to 4-back, and preferably 5-back. Keeping subjects motivated is a challenge. Roberto Colom (University of Madrid) managed to train 28 Spanish women up to 5-back, but these improvements in their working memory did not boost their fluid intelligence.
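For readers who think better in code, here is a minimal sketch of the dual 2-back bookkeeping. The letter set, the four quadrants and the scoring rule are illustrative assumptions rather than Jaeggi’s exact procedure.

```python
import random

def dual_n_back_stream(n=2, trials=20, letters="CHKLQRST", positions=4):
    """Generate a random dual n-back stream of (letter, quadrant) pairs."""
    return [(random.choice(letters), random.randrange(positions))
            for _ in range(trials)]

def score_responses(stream, responses, n=2):
    """responses[i] = (claims a letter match, claims a position match).
    Returns the proportion of correct judgements from trial n onwards."""
    correct = total = 0
    for i in range(n, len(stream)):
        letter_match = stream[i][0] == stream[i - n][0]
        position_match = stream[i][1] == stream[i - n][1]
        said_letter, said_position = responses[i]
        correct += (said_letter == letter_match) + (said_position == position_match)
        total += 2
    return correct / total

stream = dual_n_back_stream(n=2, trials=20)
# A subject who never reports a match is right whenever there is in fact no match:
lazy_responses = [(False, False)] * len(stream)
print(f"Score for a 'never respond' strategy: {score_responses(stream, lazy_responses):.2f}")
```

Raising n from 2 to 4 or 5 in this sketch is a one-line change; keeping a human subject willing to endure it is the hard part.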

As Doug Detterman, a veteran of intelligence research observed: "We want IQ boosting to work, but we have been disappointed so often that we are cautious of any claimed result".

By the way, before entirely closing the door on IQ boosting as a pastime, it may simply be too late to do anything with adults. Craig Ramey (University of North Carolina at Chapel Hill) has shown in the Abecedarian project that massive early childhood intervention before age 5 seems to boost intelligence. Carrying on till age 8 adds nothing to the effects. The effect of training fades quickly over the years, but the residual effect lasts, and confers advantage. Despite this approach being the best validated of the child IQ boosting trials, contemporary replications are few. Ramey describes his own success as being due to his making it clear to his teachers that his aim was not to provide them with a job, but to require them to achieve a specified result. Most recently, he said he could do this for $11,000 per child. This is something to return to, but what avenues remain for adults who want to boost their intelligence?

Why not accept that the power of your central processor has some natural limits? In which skills should you invest your mental energies, so that you can stand on the shoulders of giants? If you want to invest 24 sessions of hard intellectual work, I suggest you do a course on statistics. In 24 sessions you should be able to cover basic descriptive statistics, the analysis of variance, path analysis, factor analysis and Bayes’ theorem. Then the world is your playground. The world cheats those who cannot count.

Perhaps we should all make it our new year’s resolution.

Happy New Year!

Monday, 24 December 2012

Beliefs and knowledge



As is our custom, we walked to our 13th Century Knights Hospitaller Church for 12 readings and Carols.

In timbre and resonance alone the older men were the best readers, searching for meaning where the youngsters mainly lurched from clause to clause. The texts were another matter. First up, in all its barbarity, was the Genesis story. As this tale has it, a man eats an apple from the tree of knowledge, faces divine castigation, blames it on the wife, who in turn blames it on the serpent. The snake is cursed with reptilian motion, the woman with painful childbirth, the man with agricultural toil. The fable is an assault upon intellectual curiosity. How can such a foul creed be given any credence, let alone be repeated with such reverence? From whence cometh this obtuse arrogance?

Yes, every sacred text has always drawn a swarm of subsequent mollifying commentary, trying to soften the rough edges of a vengeful god. We are urged to understand the ancient use of analogy, the limitations of translation, the deeper meanings of the underlying message. Perhaps it is a spoof, an ironic comment on the management of guilt. But the bald message remains: “As to Knowledge, you may go thus far and no further”. Religions are always a cobbling together of disparate elements, and a dash of absurdity may serve an essential purpose. It sounds daft to even the most devout believer, so to convince themselves they set out to make converts, or so theories of self-justification would have us believe. The sillier the premise, the more urgent the need for new believers. And belief is a vault of faith over the merely factual.

The gem of the service was a recital given from memory, from a distinguished senior with white hair, his health not good this past year, but his voice firm and strong, his hands clenching as he intoned the thunderous words of Isaiah 9: 6 “For to us a child is born, to us a son is given, and the government will be on his shoulders. And he will be called Wonderful Counselor, Mighty God, Everlasting Father, Prince of Peace. Of the greatness of his government and peace there will be no end. He will reign on David’s throne and over his kingdom, establishing and upholding it with justice and righteousness from that time on and forever. The zeal of the Lord Almighty will accomplish this”.

You know where you are with such language. Someone is in charge.

I commended him afterwards. An ex-Navy man, he looked at me with mild surprise, and then muttered by way of explaining his achievement "Well, it was short". His wife confided to me later: "I had no idea he was going to do that. He doesn't like relying on his glasses".

Merry Christmas.

Tuesday, 18 December 2012

The Nose Cone versus the Tree

Years ago, when I was on a mini lecture tour of Californian universities, my host at Stanford took me out to dinner, and we got talking about his wife’s work as a space scientist.  She would compete to get room in the payload of the next space rocket launch, trying to win a small space allocation for her experiments. She would then have to pack as much instrumentation as possible into a small size and weight allowance. This involved juggling many parameters so as to maximise the scientific outcomes. All the other winning scientists would be doing the same with their own independent stand-alone experiments. Collaboration was pretty restricted. In the end the rocket blasted into space with a compact collection of scientific instruments. This is a good analogy for the theory that human brains are made from separate components, refined earlier in evolution, and packed into the human skull.

Another approach to space experimentation is to have a shared general processor which reduces duplication. The Pioneer 10 spacecraft used this sort of approach, being built as one coordinated system though carrying individual sensors. It remains our one and only, and very successful, export from the solar system. Basic processes were done in common, and only specialised instruments required separate modules. Conceptually, coordinated structures look like a tree: a large trunk of common functions, two or three big branches of semi-specialised functions, and then little twigs of highly specialised instruments.

Human intelligence is more like a tree than a nose cone. We know this because if you pack separate modules in the confined space then each highly developed and larger module leaves less space for other components: aptitudes will be negatively correlated. A strong visual ability might compel language ability to be less developed, through lack of space, and so on. Each brilliant skill would lead to a scarcity of competence in other intellectual areas.

On the contrary, human intellectual skills always show a positive manifold, a matrix of positive correlations, such that there is a common factor which accounts for 50% of the variance. People who are good at one intellectual task tend to be above average at all other intellectual tasks.  It would appear that Mother Nature has got round the limitations of the Nose cone problem (skulls cannot get infinitely large) and has gone for a common central processor.

In humans this common factor is virtually always found, and by convention is referred to by a lowercase g. This is an abstraction, but represents the factor that all intellectual tasks have in common. Despite the accumulation of evidence, some still argue for a modular approach to skills, and think of g as an abstraction created by intelligence tests. So, can one show that g exists in other species?
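For those who want to see what “a common factor accounting for about half the variance” amounts to, here is a minimal numerical sketch. The correlation matrix is invented, and the first principal component is used as a rough stand-in for a formal factor analysis.

```python
import numpy as np

# Invented correlation matrix for four mental tests (a "positive manifold":
# every test correlates positively with every other test).
R = np.array([
    [1.00, 0.55, 0.45, 0.50],
    [0.55, 1.00, 0.40, 0.45],
    [0.45, 0.40, 1.00, 0.35],
    [0.50, 0.45, 0.35, 1.00],
])

eigenvalues, eigenvectors = np.linalg.eigh(R)     # R is symmetric, so eigh is appropriate
top = np.argmax(eigenvalues)                      # index of the largest eigenvalue
g_share = eigenvalues[top] / eigenvalues.sum()    # variance explained by the first component
g_loadings = np.abs(eigenvectors[:, top]) * np.sqrt(eigenvalues[top])

print(f"First component explains {g_share:.0%} of the variance")
print("Loading of each test on it:", np.round(g_loadings, 2))
```

With made-up correlations of around 0.4 to 0.5, the first component soaks up roughly half the variance, which is the statistical sense in which g is said to account for 50% of performance on intellectual tasks.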

Looking back at the records of 60 rhesus macaques on learning, spatial memory, object memory and set shifting, Rosalind Arden (Middlesex University) found that one factor accounts for 47% of their success at these mental tasks. In headline terms, this is a case of Monkey IQ.  Curiously, success was also associated with total cholesterol, an association which has also been found in humans.

Looking at the success of laboratory mice across 10 different tests of learning, reasoning, and attention, Louis Matzel (Rutgers University) found that a common factor accounted for 30-50% of the variance, a case of Mouse IQ. A large percentage of Mouse g was accounted for by a small number of dopaminergic genes in the medial and dorsolateral prefrontal cortex. Mouse mental skills were improved by working memory training.

It would appear that there is an evolutionary advantage to having a central processor in the skull, which can bear the brunt of problem solving. This frees up space for other essential functions like bodily coordination, visual processing and some more specialised functions. Given that there are physical problems about making brains bigger (every neurone requires connections, and the whole array is very energy consuming) it is a relief that Nature has used its intelligence.



  



Sunday, 16 December 2012

Conference news

Big advances in scanning techniques have allowed researchers to look at single neurones firing as subjects complete mental tasks. The pattern of neurones lighting up may reveal slight differences between medium and higher ability subjects, all of whom have been assessed for general intelligence. Eventually this might allow a version of an IQ test based on neuronal signals alone. Aside from giving yet further support to the concept of intelligence, it could have utility with locked-in syndrome patients.

Tuesday, 11 December 2012

Fear of Flying and Safety of Gruyere Cheese


Tomorrow I’ll fly to San Antonio, Texas. How big a risk will that be?

Flying is the safest means of transport on a mile per mile basis. My chance of dying is a reassuringly low one in 8 million, perhaps even less than that. Travelling by Underground and by bus to the airport will be very safe, travelling by car to the hotel less so, yet still tolerable.

Why then do I feel any alarm about flying, joining the third of travellers who admit to some concerns? Several reasons have been advanced: the relative rarity of flying as an experience such that fears of the unfamiliar do not habituate, the lack of visible support, and claustrophobic confinement in a fuselage. However, much of the anxiety is engendered by turbulence, which triggers a startle response. It raises the prospect of falling out of the sky, while the same sort of disturbance in a car journey is of no emotional consequence.

At this point commentators often branch into an examination of human irrationality. Fear of flying makes people drive cars, which are more dangerous. On the contrary, I would argue that statistics have to be understood better before coming to a judgement.

Minute by minute, I will be at the same level of risk I submit myself to when driving. Driving is low risk, but tolerable, though with some scary moments. I will be in control, or so I imagine, and I imagine myself to be a safe driver. Sadly, most drivers over-estimate their own skills, and tend to discount being driven into by drivers of even lower skills. Nonetheless, many car crashes are survivable, and one may hope to have light injuries, and no more.

Flying is low risk, tolerable, but with some very scary moments. The accident profile tends to be all or nothing. Sometimes an entire plane is lost with everyone on board. That happens infrequently, but it is widely reported when it happens, and thus is easily called to mind. As a final insult to my ego, I will be provided with a seat, but not with my own set of controls. I cannot influence the outcome, so I am helpless. Apart from worrying about the rivets in the wings, I will also be worrying about pilot error.

Pilots are rigorously selected, highly trained, and regularly monitored. Despite that, they make mistakes. Many of those errors they confess anonymously on specialist websites, for the education of other pilots. This is a good system, which ought to be offered to politicians.

Sometimes the error is so severe that the pilots die, and their mistakes have to be determined from the famous Black Box, which is in fact a bright orange ball or cylinder full of recording equipment.

In Human Error (1990) and The Human Contribution (2008) James Reason has looked at the psychological foundations of errors. He argued that we defend against accidents by creating defensive walls of Gruyere cheese. We accept that there are holes in the cheese, but believe that once we get to 2 or 3 defensive layers very few fatal errors should get through.

But consider the loss of Air France 447 which went down with all 228 souls on a flight from Rio to Paris in June 2009.  When the black box was finally recovered (using Bayesian statistics on flight paths, local conditions and past search data) the story could be put together. Far out over the Atlantic the plane flew into the normal equatorial thunderstorms. At this point the most senior pilot decided to take his rest break. This may have been very French, but it is slightly puzzling. The two less experienced but still very well trained pilots were left to fly the plane. The electric discharge known as St Elmo’s fire caused a blue glow in the cockpit, scaring one of the pilots. Unknown to them the air speed indicators then iced up, giving the autopilot such conflicting readings that it disconnected, passing control over to the pilots. All that was required at that stage was for the pilots to fly the plane till the weather calmed down. What in fact happened is that the junior pilot pulled back the nose (made the plane point upwards) and increased engine speed. This put the plane into a stall. In order to fly through the air a plane must have a well-judged angle of attack so as to create the required lift. Pointed too far downwards it dives, too far upwards it stalls. The stall warning alarms went off. They sent for the senior pilot.

At this point we ought to recall the snappy definition of intelligence: what you need when you don’t know what to do. The pilots were intelligent, but they were faced with a bewildering IQ problem with a severe time limit. The plane was falling out of the sky. The stall was so severe that the alarm stopped sounding because the inputs were so extreme as to seem invalid. That meant that later when they tried to level the plane by putting the nose down, which would have saved them from stalling, they moved from a Very Severe Stall (in which the alarm was switched off) to Severe Stall (which made the alarm switch on again). Paradoxically, when they tried to correct their angle of attack the warning system appeared to tell them off.  If they had persisted through the alarm zone the alarm would have finally turned off when they were back to level flight.

The senior pilot arrived back and tried to make sense of what was happening. They ran through all the remaining indicators, and the juniors didn’t have time to explain the whole sequence of events in detail. Another piece of the jigsaw that has to be explained in this highly simplified account is that their plane had a side-stick, which does not visibly show its inputs in the way a traditional control yoke does. As a consequence, the senior pilot could not see at a glance that his junior had put the nose up so far that they were in a major stall. Baffled, they looked at the instruments and did not realise their absurd angle of attack until the junior pilot said he had been pulling back on the side-stick all the time. They realised they were going to crash shortly before the impact, when the record ends.

In summary, this showed the limits of human intelligence, the capacity to rapidly understand an unusual, fast-changing situation and make sense of it. Experience is of little value unless it is active experience. Flying on autopilot is mostly an experience of boredom, interspersed with routine system-management chores. The fearful, almost panic-stricken reaction of the pilot pulling back on the side-stick generated a fatal stall. The use of a side-stick which does not show its relative position, rather than a traditional and familiar yoke where you can see at a glance what the position is, may have been a contributing factor. However, the precipitating factor was a lack of reliable airspeed indicators, and those have been replaced with better instruments. Another little hole in the Gruyere cheese has been blocked up.

So, if you have read this far the story will be easily accessible to you, and you will realise why, frequency statistics notwithstanding, it is understandable that passengers should fear a machine whose mechanics they do not understand, supported by aerodynamic lifting forces they cannot see, guided by invisible pilots whose announcements are inaudible, and who are the best of humans, but humans nonetheless.

Thursday, 6 December 2012

Icebergs and Onions



It is a mantra of medical and social science reports that when cases of a new disorder are found, they are described as being “only the tip of the iceberg”.  Autism, alcoholism, child sexual abuse, drug abuse, diabetes, dyslexia, high cholesterol, high blood pressure, hyperactivity, and post-traumatic stress disorder are examples of conditions which are frequently described in this way. There may be several hundred others. According to the iceberg analogy there is a lot of illness and misery out there. This search for the afflicted (the iceberg beneath water level) may be motivated by altruistic concern for others or it might be a cynical ploy to drum up business for drug companies and purveyors of therapy.

Some under-counting is understandable. Many conditions are not nice to admit to, and are best denied. Some are illegal. Few people willingly admit to a stain on their character, a permanent flaw in their fundamental nature. Further, many citizens have been living their lives under the misapprehension that they were normal, and do not take kindly to being labelled in any way. They do not wish to be accused of having spoken prose all their lives.

Some over-counting is understandable. If the new diagnosis is about a transitory and treatable difficulty due to outside causes, it can become fashionable, with celebrity sufferers recruiting fellow victims and forming lobbying groups. As Lady Bracknell observed in The Importance of Being Earnest: “None of us are perfect. I myself am peculiarly susceptible to draughts”. Confessing to a minor diagnosis can be cool. To say “I am illiterate” is not a career-boosting move. The attribution is inner and permanent. To ask “How good are your facilities for registered dyslexics?” has a much better ring to it. The narrative has moved from a personal failing to a social requirement to provide for a specific and legitimate handicap.

How can we obtain accurate numbers?

We cannot find out until we ask the question. There are so many questions to be asked in psychology and medicine that a selection must be made, and some will be left out. This makes sense because some conditions are much commoner than others. Special interest groups argue for the inclusion of particular lines of enquiry and this usually leads to a higher rate of reported cases, which confirms the “tip of the iceberg” theory. If a doctor or psychologist has been on a training course, they will begin to diagnose more of those sorts of cases, sometimes correctly, often incorrectly. So, we cannot find cases until we look, yet once we are on the lookout we may see cases where none exist.

Autism is a good example. This severe condition exists. Accurate diagnosis depends on training and experience, and there are well-established indicators and measures of severity. Unfortunately, what was once a narrow diagnostic category has entered popular culture as meaning anyone who is not particularly social and is a little too interested in technical matters.  For those who fail to get the autism diagnosis, Asperger’s syndrome may be seen as a consolation prize. By the way, these syndromes are not laughing matters, but the search for diagnostic labels is a two-edged sword.  The sufferer may get the reassuring legitimacy provided by a diagnostic authority and gain resources from government agencies, but might possibly be blocked from being treated normally and thus exacerbate and prolong their difficulties.

There are powerful forces moving us toward the proliferation of diagnosed disorders. One of the few growth stories in our moribund economy is provided by The Diagnostic and Statistical Manual of Mental Disorders. When launched in 1968 it contained 182 disorders. By 1980 it had reached 265, by 1994 there were 297, and the next revision, out soon, will very probably raise that number. To the cynical eye, DSM is a child of the US health insurance industry: no patient can be repaid their medical bills unless the doctor writes down a diagnostic number. No number, no cash. So, all the difficulties of life must be numbered, and the greater the number of diagnoses the greater the opportunities for therapists of all types. The iceberg tendency is in the ascendant.

Layers of an onion

If you peel away all the layers of the onion, it ceases to exist. None of the layers is the onion itself, yet no onion is left without them. So it is with the endless reclassifications of normal reactions as disorders. The person is slowly reduced by a set of dissociative classifications. They become “person-with”: person-with diabetes, person-with memory problems, person-with anger-management issues. Given a large enough armamentarium of diagnoses, normality ceases to exist.

One can do some rough calculations based on the prevalence of diagnosed disorders. The World Health Organization reported in 2001 that one in four people meet criteria for some form of mental disorder or brain condition at some point in their life. Believe that if you will, though of course lifetime estimates could include one short episode in an otherwise untroubled life. Here are the ranges given for the prevalence of each condition in 14 countries: anxiety 2.4 to 18.2%, mood disorders 0.8 to 9.6%, substance abuse 0.1 to 6.4%, impulse-control disorders 0 to 6.8%. The authors are of the iceberg tendency, and believe that their figures are under-estimates. Clearly, some countries have not come into line with the putative Central Classification System, and don’t realise quite how miserable and disordered people are.

If we repeat the procedure for physical health, we have to discount those who are too fat, too thin, all those with chronic conditions, and perhaps those who are being medicated because they may be thought to be at risk: for example, giving statins to those who are healthy but have high cholesterol (of the bad sort), which is at best an indicator, and not a disease in itself. It would be easy to show that at least 25% of the population were unhealthy at some time in their lives and probably chronically unhealthy for the last third of their lives.

Putting together the mentally disordered and the physically unwell results in a small core of citizens being classified as “well”, and even then, perhaps a more detailed enquiry could turn up hidden problems. Peeling away the onion transforms normality into layers of syndromes for which invoices can be issued. The classificatory project has colonial ambitions, and a whole industry behind it.

Can order be brought to this chaos?

Contemporary psychiatry and psychology believe they have the antidote to hand. They restrict diagnostic categories to a set of well-defined indicators which have to achieve set levels of severity and duration. In sober hands, such defined disorders can be diagnosed in a responsible fashion.

Questionnaires can be a help. Patients are more likely to admit to drinking too much and to having served time in prison when confessing to a piece of paper rather than to a psychiatrist. It is a commonplace of clinical psychology interviews that if one wants to ask about drug abuse it is easier to hand the patient a list of drugs and ask casually “Which of these have you used?” The list begins with pain-killers and antibiotics, and goes on to harder stuff. A bit of distance aids confession.

However, humans are tricky. They deny bad characteristics, complain loudly about aches and pains when sympathy or compensation are offered, and contradict themselves when the mood takes them. They look up diagnoses on the internet, and learn the answers to interview questions.

Why not treat them like fish in the sea, and net and tag them? Put a net into the sea, pull out the fish and tag each one with a shiny Time 1 tag. Then, a few weeks later, visit the same area and repeat the process, tagging each fish with a Time 2 tag. While doing this you will catch a few fish with a Time 1 tag already on them. Note the number of such fish. If you really want to be accurate, repeat the process a third time.

Charming as it might be for psychologists to become fishers of men, members of the public are likely to object to being tagged in the name of science. We cannot use nets or tranquiliser darts. However, a name is a tag. Most places we go, we leave a name. Checking names does the trick. For example, how many drug users are there in North London? The Police have one estimate, based on a list of arrests. The Courts have another list of names. The General Practitioners have their own lists of registered addicts. Each of those lists is a net, which tags every person.

Capture-recapture methodologies come to the rescue here. The Lincoln–Petersen method, shown here in its simplest form for only two nettings, is:

N = (M x C) / R
Where:
N = estimate of total population size
M = total number of animals captured and marked on the first visit
C = total number of animals captured on the second visit
R = number of animals captured on the first visit that were then recaptured on the second visit

So, let us estimate the number of hard drug users in a defined neighbourhood.  The Police have a list of names of people they have arrested, and 10 of those live in the neighbourhood. The local drug clinic has a list of names of users and 15 of those live in the neighbourhood. Five of those 15 are also on the Police list, so they have been “re-captured”.

N = (M x C) / R = (10 x 15) / 5 = 30

In this example, the Lincoln–Petersen method estimates that there are 30 drug users in the neighbourhood.
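For anyone who wants to play with the arithmetic, here is a minimal sketch of the estimator, using the numbers from the drug-user example above. Chapman’s small-sample correction is included only as an optional refinement.

```python
def lincoln_petersen(marked, caught, recaptured):
    """Two-sample capture-recapture estimate: N = (M x C) / R."""
    if recaptured == 0:
        raise ValueError("No overlap between the two lists, so no estimate is possible.")
    return (marked * caught) / recaptured

def chapman(marked, caught, recaptured):
    """Chapman's correction, less biased when the samples are small."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# The example from the text: 10 names on the Police list, 15 on the clinic list,
# and 5 names appearing on both.
print(lincoln_petersen(10, 15, 5))   # 30.0
print(chapman(10, 15, 5))            # about 28.3
```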

Perhaps we have to turn away from self-reported claims of disorder and get out our nets. Using nametags instead of nets and physical tags, we have to check for the overlap of names in different lists, and then do our calculations. Sure, we can give citizens questionnaires about their sexual behaviour, but it might be better to check names on Sexually Transmitted Disease Clinic lists to get more reliable estimates, and to calibrate the questionnaire replies. Given access to the data, by tracing every purchase and every location visited, every bill paid or ignored, every credit card transaction and every health clinic attendance, every TV programme watched, we will finally find out, with considerable reliability, who we are, and what mental state we are in.

So, when the next new mental disorder is described in the media, always ask yourself: is this the tip of the iceberg or the skin of an onion? 

Tuesday, 4 December 2012

DNA, Nametags, burglars and rapists


 The Night Stalker revisited

Serial offenders sometimes catch the public interest, but only when their crimes are particularly gruesome. Many criminals are serial offenders, and this includes many burglars. Criminals establish a routine, a modus operandi, and stick to it. It may be a lack of imagination, or simple pragmatism. If a way of committing a crime gets them what they want without detection, they persist in it. Far from being anything special, serial offenders are simply criminals with a habit of offending.  It’s a bit like the Olduvai Tool Set, a collection of stone implements our ancestors made, with minimal variation, for 600,000 years.  In that era we were very conservative or not very bright, most probably both.

Anyway, back to South London. Burglary has a significant psychological impact, particularly on the elderly, who are helpless. The implied sanctity of the home is violated, with a consequent pervasive fear and loss of security on the part of the victims.  However, Delroy Easton Grant was more than a burglar. He specialised in raping elderly women and very occasionally elderly men. He did so with such violence that one woman’s life was threatened by a perforated bladder. He probably committed over 200 offences (other estimates are much higher) starting in 1990, and was not caught till May 2009.

Why did it take so long? Contrary to popular TV detective series, it is hard to catch an accomplished criminal who strikes at random, with long dormant periods, and who leaves very few forensic clues. This is no criticism of the Police. One cannot protect every old lady in South London for years on end simply because a particular violent rapist may strike again.

One possible reason for their failure is that Police encountered difficulties because of the assailant’s race. Victims described him as a black male aged between 25 and 40, about 5'9" to 5'11" tall, of slim athletic build. Police at one stage issued a confusing photo-fit picture, in which the balaclava-clad assailant seemed to have a white face. That distraction aside, race was an issue because an advanced DNA analysis of his sperm showed that he was very probably from the Windward Islands (St Lucia, Barbados, St Vincent, the Grenadines, Tobago or Trinidad). Police identified around 21,000 possible suspects who fitted such a profile. So, the solution seemed simple: DNA-test as many of those suspects as necessary until a match was found.

Despite a promise that DNA profiles which did not match the assailant would be destroyed, a number of putative suspects declined to give a DNA sample. To some it seemed that the procedure was stigmatising an entire community, and 125 flatly refused to provide samples. This misunderstanding of the process, or distrust of the Police, perpetuated the horror. Police were left with 1,000 potential suspects.
However, consider the only other forensic clue the rapist left: a size ten footprint of a particular brand of trainers. Would it have been better to trace that? It might have been. A DNA profile is unique, while anyone can buy a publicly available brand of trainers, though the rarer brands such as the one found in this instance might be traceable. It was a positive indicator, but hard to track down.
So, looking at the task from a Police perspective, they were trying to find a black rapist without the full support of the relevant target population of suspects, and had a DNA signature which they simply had to match against a name.

Is it easier to catch a burglar than an episodic gerontophile rapist? Frankly, the Police have more experience of burglars. Every criminal has several signatures. Almost as good as DNA, footprints, fingerprints and the like is the MO, the modus operandi. What Police knew was that the assailant seemed to have an unerring skill in tracking down the homes of the elderly.  He never broke into a house occupied by anyone other than a lone elderly occupant. He once targeted three houses in a single street. He picked detached or semi-detached houses and bungalows but never flats. Therefore he must have spent much time reconnoitring. To do this almost randomly over a wide geographic area he needed a car or motorbike, or both.

The Police had hit a blank wall with their DNA enquiries. They could only wait till those who had refused to proffer their DNA got arrested for other crimes. Unknown to them, they were up against an even bigger problem. Delroy Easton Grant had been incorrectly listed as having been eliminated from suspicion. There are 63,000 Grants in the United Kingdom. Delroy as a first name is disproportionately found among black men, so in this case the nametag was a distractor, not an identifier. A young policeman made a clerical error and accessed the wrong Delroy Grant, who was already on the Police database but whose DNA did not match that of the rapist. One Delroy covered for another. As a consequence the local police did not follow up the car licence plate number that would have brought them to the real burglar Delroy Easton Grant, whose DNA would have shown him to be the Night Stalker rapist. While teams were trying to do the fancy stuff, and push the DNA analysis to its outer limit, getting geneticists to come up with a possible estimate of the assailant’s appearance, an elementary error was made on the tag which constitutes a name. So, through carelessness, the unique identifier of the genetic code was mistakenly thought not to match the supposedly unique code of a given name.

Into this impasse came a new team. They decided to try to catch a burglar who spent a lot of time reconnoitring, and about whom they had some geographic leads. This was a very staff-intensive procedure (the total cost of catching Grant was £10 million) and involved staking out likely suspects and likely places. Car licence plates provided the other set of unique identifiers. In a way, they looked where the light was brightest in terms of their procedural capacities. This is often a sensible procedure.
At this point it might be interesting to take a Bayesian approach, in which the evidence about the true state of the world is expressed in terms of degrees of belief.


In The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant (2012) Sharon Bertsch McGrayne described, amongst many other things, the use of Bayesian statistics to catch submarines. The initial indicator would be a radio location based on a transmission from the submarine. Sending a plane straight to the location might have seemed sensible. However, between the transmission and the plane arriving on site, many hours would have elapsed. It would be better to guess where the submarine was heading. One could make some educated guesses and express those as prior probabilities.


How best to look? Operational research showed that if a spotter on a plane saw anything, it was only in the first 15 minutes of their watch. Looking down at the sea is deadly boring. So, they made everyone change windows every 15 minutes. Research showed that if a spotter spotted anything it was on the horizon, or just below the horizon, and the sunlight had to be in the right place. So, they made the spotters look at that zone and nowhere else. In this way, coupled with a well thought out search routine (using Bayesian estimates of the most likely areas of sea given the time since the original intercepted transmission) they increased their operational effectiveness.
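The logic of such a search can be sketched in a few lines. The areas, prior probabilities and detection probabilities below are invented for illustration, not drawn from any real operation.

```python
# A minimal sketch of Bayesian search with made-up numbers. Each sea area has a
# prior probability of holding the submarine, and a probability of spotting it
# if we search there. After a failed search the beliefs are re-weighted by
# Bayes' rule and the next search goes to the area with the best expected payoff.

priors = {"area_A": 0.5, "area_B": 0.3, "area_C": 0.2}     # assumed priors
p_detect = {"area_A": 0.7, "area_B": 0.4, "area_C": 0.6}   # chance of spotting it if it is there

def update_after_failure(beliefs, searched):
    """Bayes' rule after searching `searched` and finding nothing."""
    unnorm = {a: p * ((1 - p_detect[a]) if a == searched else 1.0)
              for a, p in beliefs.items()}
    total = sum(unnorm.values())
    return {a: v / total for a, v in unnorm.items()}

beliefs = dict(priors)
for step in range(3):
    target = max(beliefs, key=lambda a: beliefs[a] * p_detect[a])  # best expected payoff
    print(f"Search {step + 1}: look in {target}, current beliefs {beliefs}")
    beliefs = update_after_failure(beliefs, target)   # assume each search comes up empty
```

Each fruitless search drains probability from the area just searched and shifts it elsewhere, which is exactly the bookkeeping used to find the AF447 black box and, here, to decide where the next patrol should fly.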

Operational research can be applied to policing. Given that names are so important as tags on human beings, making triple sure that the names are right should be a priority. Criminals often change their names, or give false names, or use different variants on their names in different circumstances. They also give false home addresses, and conduct their businesses from yet other addresses, and use cars registered to other people at yet other addresses.

Delroy Easton Grant, a Jehovah's Witness and father of eight, was a carer for his disabled wife but also living with another woman. He turned out to have been born in Jamaica, and was dark-skinned. He had a very long criminal career, beginning with petty theft and going on to armed robbery and vicious attacks on his partners. Neighbours found him warm, charming and always friendly. He followed cricket, liked to fish and enjoyed community barbeques where he would share jokes and reminisce about his childhood in Jamaica.

Some of the errors:
1. Mixing up the names. The nametag must be firmly attached.
2. Being more specific about ancestral DNA than was warranted.
3. Issuing a misleading photo-fit suggesting he was white or light-skinned.
4. Searching for a specific rapist rather than an accomplished burglar with a characteristic MO.
5. Under-financing the operation in the early stages.

One moral of this story is that the psychological pull of DNA as a search tool may be entrancing the police, to the detriment of systematic data collection and normal, standard police work. It may be far better as a confirmatory technique in any setting in which a substantial minority will not cooperate with testing.
Another moral is that the search for ancestral origins identified the wrong Caribbean island, and added dangerous noise to the already well known fact that he was a black man.

Another possible moral is that psychological profiling can be a distraction, since it offers apparent specificity without sufficient reliability. It was another step too far into sophistication.

It may be best to look where the light is brightest, in this case for any person out late at night apparently checking out houses but never flats. Grant was least at risk when he was raping his victims, because he always disconnected electricity and phone lines. It was the elaborate reconnoitring which put him most at risk.  So, the task was not to catch him, but to catch burglars like him, in the areas where his rapes had been reported, in the hope that one of the burglars would be him.

Caution: This note was written with the benefit of hindsight, and based on publicly available accounts of the investigation, not on any internal documents. Almost every post hoc review of persistent criminal operators shows that they might have been caught earlier. The positive predictive value of criminal indicators is low, which is why so much police work is boring and routine. Even a good indicator may not be specific enough. Being a part-time cab driver, as Grant was, is a good cover for burglary, but most part-time cab drivers are not burglars.

Catching a wily criminal is hard. Searching extensively where the task is easiest (because a high frequency behaviour leads in a few cases to a very low frequency behaviour) may seem paradoxical, yet it has its advantages.

Friday, 30 November 2012

The Credit Crunch and the normal distribution of intelligence


15 points of separation

Three months ago the press decided that we had reached the 5th anniversary of the credit crunch, thought to have started in August 2007. Most of their explanations concentrated on bankers and their creation of derivatives based on mortgage debt. Absent from this account was any clear admission that citizens differ in their ability to understand numbers.

Looking at the credit crunch through the lens of general intelligence provides another perspective. This brief note, written to another intelligence researcher two years ago, looks at the issue in terms of standard deviations of intelligence, so each sigma is 15 IQ points. A 2 sigma is IQ 130, a 3 sigma IQ 145.
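For a sense of how thinly populated the upper bands are, a few lines of code (assuming the conventional mean of 100 and standard deviation of 15) give the approximate rarities.

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)      # IQ convention: mean 100, SD 15

for sigmas in (1, 2, 3):
    cutoff = 100 + sigmas * 15
    share = 1 - iq.cdf(cutoff)         # proportion of the population above the cutoff
    print(f"{sigmas} sigma = IQ {cutoff}: roughly 1 person in {round(1 / share):,}")
```

Roughly 1 in 6 people clear 1 sigma, 1 in 44 clear 2 sigma, and only about 1 in 740 clear 3 sigma, which is why the mathematicians were scarce and expensive.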

How did the international bankers screw up?

The managers were 2 sigmas, and they hired too many 3 sigma mathematicians whom they couldn’t really understand. The 3 sigmas were so happy with their bonuses that they kept cranking out complicated derivatives.

The managers liked the answers they got from the mathematicians, which seemed to make risk disappear by distributing it very widely.

They cranked up a sales campaign run by 1 sigmas, who cruelly exploited the –1 sigmas and –2 sigmas, all encouraged by vote buying politicians (2 sigmas?) who wanted to give everyone a house even if they couldn’t pay for it. 

A few 2 sigma bank managers could have stopped the rot, but most of them had been fired because they were too old, and couldn’t adapt to selling complicated derivatives.

People can only easily communicate within 1 sigma bands, so the situation was ripe for confusion.

Hence John Paulson and a few other 3 sigmas made immense fortunes by understanding all this and then working out a way of shorting mortgage protection insurance securities. Paulson made a personal gain of 2.5 billion dollars.

I hope that explains it all.


Epilogue: Damn the bankers

Looking at the recent press coverage, the “damn the bankers” narrative downplayed the part played by governments who were only too glad to encourage any borrowing that made their voters happy. Far from just being lax in their regulation of banks, Governments often encouraged the relaxation of lending criteria which brought in new grateful homeowners. Citizens lapped up the credit, correctly gambling that if enough people got into debt they would never have to pay it back. So, the irony is that citizens who got into debt probably played their cards better than those who built up their savings.

In fact, the general intelligence perspective on the credit crunch places it in the general context of how citizens of different ability levels deal with each other. This goes far wider than just the management of credit. More of that later.



Wednesday, 28 November 2012

Social class and university entrance


How many children from each social class will enter university?

In Britain today, social class no longer determines our chances in life. A parent’s social class accounts for only 3% of the social class mobility of their children.  The ability of the individual child accounts for 13%. For all we know, the rest of the difference may be due to personality or perhaps even physical attractiveness, but it is not social class.  Without quite realising it, we have achieved considerable social mobility between generations, with far more of that change being due to ability than to social class itself.

One surprising effect of this meritocracy is that social classes still differ in intelligence, simply because the most able have been given a chance to rise into more demanding jobs, and the less able have been left in less prestigious occupations.  Opportunity has allowed people to spread out more, and has brought the best brains to bear on the hardest problems, regardless of their social background.  Separately, there has also been “social class inflation”, with more people doing managerial work, and far fewer in unskilled manual jobs. These manual classes have been stripped of many of their brighter people, who have moved upwards as opportunities opened up.  

If entry to university were based solely on intelligence, how many children from each social class would enter university? Making that calculation depends on some assumptions. First, that people marry partners of roughly the same intelligence. This seems to be true, in that married couples are even more concordant for intelligence than they are for height.  Secondly, that some parental intelligence is passed on to children by genes, and recent heritability estimates of 66% have been established, on samples of 11,000 children (Haworth et al. 2009).

Calculating the estimated intelligence of university applicants according to the social class of their parents is pretty straightforward. Using data analysed by Daniel Nettle on the 1958 generation, the distance of each social class’s average intelligence score from the population mean is multiplied by 66% to get an estimate of the average intelligence of their children. Since the other 34% of intelligence differences are not due to parental intelligence, the class averages will all converge towards the population average, a phenomenon known as regression to the mean. The children of professional parents will fall back somewhat towards the population average, though they will remain above it. The children of unskilled manual workers will rise back somewhat towards the average, though they will remain below it. In this way ability is gradually reshuffled each generation, though not totally. At the end of this generational process there will be some differences in the average IQs for each social class. However, these small average differences translate into substantial differences at the upper reaches of the intelligence distribution. This is simply because most people’s abilities pile up in the centre of the intelligence range and there are fewer people at the edges. A small average group difference leads to big differences in the numbers of individuals at rarefied levels of IQ.

As regards university entrance, society can set any cutoff point it likes.  The Table shows what could be expected at various levels of participation. The top 50% was the stated national aspiration, and incidentally corresponds currently to the percentage of the school population who get 5 or more A to C grades at GCSE. The top 40% is close to our current participation level. The top 15% corresponds very roughly to the old universities, and the top 2% to the most intellectually demanding courses at the most highly ranked universities.  The Table shows that social class differences are greatest when the cutoff point is set very high, simply as a consequence of the normal distribution of intelligence.
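For the curious, here is a rough sketch of the kind of calculation that lies behind the Table. It assumes a within-class spread of 15 IQ points, so its output will only approximate the figures shown below, which rest on more careful assumptions about the spread within each class.

```python
from statistics import NormalDist

population = NormalDist(mu=100, sigma=15)

# Regressed mean IQ of children in each class (the "Student IQ" column of the Table)
child_mean = {"Professional": 104, "Managerial": 102, "Middle": 98,
              "Semi-skilled": 96, "Unskilled": 95}

for top_share in (0.50, 0.40, 0.15, 0.02):
    cutoff = population.inv_cdf(1 - top_share)   # IQ needed to be in the national top X%
    row = {cls: round(100 * (1 - NormalDist(m, 15).cdf(cutoff)))
           for cls, m in child_mean.items()}
    print(f"Top {top_share:.0%} (IQ >= {cutoff:.0f}):", row)
```

The pattern, rather than the exact numbers, is the point: small differences in class means turn into large differences in representation once the cutoff is set high.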

The point of this exercise is not to say that entry to university should be based on IQ tests. Universities base entry on scholastic tests, particularly those that identify the very brightest candidates. Nor do these calculations lead to setting any particular cutoff point for university entrance as a whole.  That is a social decision.

The real point is to explain that different rates of entry to university according to social class are a direct consequence of a meritocratic society. If people are allowed to rise to the jobs which they merit (true of Britain from 1958 to 2000 and most probably beyond), then there will be a slight but significant difference in the average intelligence of their children. These differences become quite marked at the outer reaches of the intelligence distribution, leading to actual university entrance figures being legitimately different from simple expectations. One should not expect every social class to have university entrance rates directly proportional to their numbers in the population, because people are selected into jobs by ability.

Oddly enough, when we hear that proportionately more middle class children are going to university we should reply “So they ought to be, if their parents were correctly selected for their jobs”.


Percentage of each social class who will be admitted if the university takes the top 50, 40, 15 or 2% of the student population

Social class     Student IQ   Top 50%   Top 40%   Top 15%            Top 2%
                                                  (Russell Group)    (Oxbridge)
Professional        104          64        54         26                5.0
Managerial          102          56        46         20                3.3
Middle               98          45        35         13                1.7
Semi-skilled         96          39        30         10                1.2
Unskilled            95          34        26          8                0.9





References

Nettle, D. (2003) Intelligence and class mobility in the British population. British Journal of Psychology, 94, 551-561.

Haworth, C. M. A., Wright, M. J., Luciano, M., Martin, N. G., de Geus, E. J. C., van Beijsterveldt, C. E. M., Bartels, M., Posthuma, D., Boomsma, D. I., Davis, O. S. P., Kovas, Y., Corley, R. P., DeFries, J. C., Hewitt, J. K., Olson, R. K., Rhea, S-A., Wadsworth, S. J., Iacono, W. G., McGue, M., Thompson, L. A., Hart, S. A., Petrill, S. A., Lubinski, D. and Plomin, R. (2009) The heritability of general cognitive ability increases linearly from childhood to young adulthood. Molecular Psychiatry, 1–9.


Time's face



The Depiction of Time

Psychologists have been more concerned with estimations of time than with its depiction. Humans are poor timekeepers. Living on a fast-rotating, slightly tilted planet, our ancestors had no need to count the hours. Diurnal variation did the job nicely. Dawn to dusk, our nearby star guided us with dependable regularity. Carving daylight into measured segments made no sense. The question of time was answered by the sun’s position in the sky, and the night was for sleeping.

Calculating planting seasons away from the equator was more difficult, but just counting the days was enough for a first approximation, and the moon gave monthly help, though its cycle was not exactly in step with the solar estimates. Calendars became necessary, and intellectual elites grew up to do the calculations. Eventually it became necessary to measure the passage of time more precisely, even if only because the eternal panoply of the stars seemed to spin round at night, and it began to be important to time sightings of planets.

The whole history of timekeeping is fascinating, if only for its intellectual challenges. The devices were clunky: water filling cups, candles burning down, sand falling through the narrow aperture of an hour glass. Not till Christiaan Huygens invented it in 1656 did the pendulum bring its harmonic oscillating order to chronometry, and it held supreme until the 1930s. Not a bad run for one man’s mechanical device, though Galileo had done the conceptual groundwork in 1637. In 1927 the oscillations of quartz provided the Holy Grail: no-one has needed better day-to-day precision ever since.

Once time could be measured down to fractions of a second it became very apparent that humans did not think like stopwatches. Filled “engaged” time passes quickly, dull “empty” time slowly, terrified time not at all. Time stands still when we are about to die. Even aside from threats of imminent death, patients recounting their traumas in an unlimited therapy session in a quiet and peaceful consulting room (in which a whole afternoon and evening are set aside for them) totally lose track of time, and will often estimate that the 4 or 5 hour session took about an hour, or an hour and a half. Anyway, Einstein’s comment to his secretary Helen Dukas as to how she should answer lay enquiries about the meaning of relativity catches the main findings perfectly: “An hour with a pretty girl on a park bench passes like a minute, but a minute sitting on a hot stove seems like an hour”.

This is not to say that we completely lack any internal clock. Left in a dark cave away from all zeitgebers (external cues to the time of day) human rhythms follow a 25 hour cycle.  It is not clear why we are one hour too generous, but it seems that rough approximations are good enough.

A consequence of the pendulum-controlled, rotating drive shaft was that time was depicted as a dial, a clock face with equal segments, noon at the top, where the sun should be, but absurdly telling only half the story, since in this configuration the hour hand must rotate twice every day. We have gained precision, but 12-hour dial time has lost us our connection with real time. Clocks have become a device for dark places, the anonymous non-world, windowless airport rooms. We have become coordinated with each other, and not with the heavens.

Some pioneers have moved into this conceptual gap, making 24-hour wristwatches. For example, Bjorn Kartomten’s solar-lunar timepieces, in a weighty chunk of a chronometer, reveal daylight and night, and moon rise, set and phase, for every point on the planet. Emerald Sequoia invents imaginary timepiece apps, many of astronomical time, providing grand complication watches for a fraction of the cost, though on iPhones and not yet wearable on a wrist. Perhaps as all these new watches gain popularity we will stop living by the fast, insistent, atomic-clock-coordinated seconds hand, and ignore even the insolent minute hand, but glance every now and then at the single hour hand that rotates slowly through light and shade and connects us again to the sun and moon, from whence time began.