Friday, 8 August 2014

Another way to fiddle exam marks


Years ago I was appointed External Examiner in Psychology at another medical school. I looked at all the papers of students with grade discrepancies between the two internal examiners, and gave my opinion as to what their final marks should be. I also looked at another 20% of the papers to compare them with the list of students with disputed marks, just to calibrate the marking, then looked at the top-scoring students in detail to agree the prize winners, and looked at the bottom scorers in even more detail to see who should spend the summer re-sitting the Behavioural Science exam. Then I looked at the distribution of the marks, which clustered fairly tightly round the magic 55% figure, which ensured that most people passed, and thus did not spoil the teachers’ summer by requiring extra teaching to pass the re-sit exam. I think I did a plot of the scores, and planned to discuss widening the range of scores with the internal examiners in Psychology.
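As an aside, the sort of plot I had in mind is trivial to produce today. Here is a minimal sketch in Python, purely for illustration: the marks are simulated, and the 50% pass mark shown is an assumption of mine, not the school’s actual threshold.

    import numpy as np
    import matplotlib.pyplot as plt

    # Simulated marks for 120 students, clustered round the "magic" 55%.
    rng = np.random.default_rng(seed=2014)
    marks = rng.normal(loc=55, scale=6, size=120).clip(0, 100)

    # A tight distribution round 55% means few failures and few distinctions.
    print(f"mean = {marks.mean():.1f}, sd = {marks.std(ddof=1):.1f}")
    print(f"failures (below 50%): {(marks < 50).sum()} of {marks.size}")

    plt.hist(marks, bins=range(30, 85, 5), edgecolor="black")
    plt.axvline(55, linestyle="--", label="the 'magic' 55%")
    plt.axvline(50, linestyle=":", label="assumed pass mark (50%)")
    plt.xlabel("Exam mark (%)")
    plt.ylabel("Number of students")
    plt.legend()
    plt.show()

A histogram like this makes the point at a glance: widen the spread of marks and you learn more about the students, but you also fail more of them.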

All pretty straightforward, but time-consuming. Examining is not a well-paid occupation, more of a chore than an honour, with some aspects of religious ritual. I felt I had completed my task competently. It was a medical school at which I had taught part of a psychology course for many years, and I knew the teachers to be dedicated and enthusiastic. I knew I would have to attend an Examiners’ meeting at which I would make a few suggestions for improvements and commend the course, and would then quickly get back to work at my own medical school.

At this point, having announced that I had completed my task, my colleagues warned me that they and their Psychology course were under attack from other departments, because the Psychology failure rate was seen as unacceptably high. The other traditional courses had failed fewer students, and the occasional severe failure to meet the Psychology standard might lead to the loss of a student who had passed Anatomy and Physiology. By way of background, Psychology and Sociology had been forced on Medical Schools by a Government enquiry, the Todd Report, which sought to ensure that doctors were more patient-focussed, and more aware of psychological issues. This was resented by the older traditional subjects, who hated having lost teaching time to these upstart and probably Marxist intruders. Yes, dear reader, I was part of a revolutionary cadre, overturning the ancient cultures of conceit: the Che Guevara of communicating with patients.

Now look at my first paragraph, and spot my most significant omission. As External Examiner I knew the Psychology syllabus at most medical schools. They were all independent, but all covered the same sorts of key issues, with some slight differences: patient communication, the psychology of pain, the placebo response, anxiety, depression and so on. I knew at a glance that the exam questions (which I had moderated before they were set) were a fair representation of the course as taught. I knew that the questions were a sub-sample of all the possible questions that could have been set, so it was necessary to revise much of the course in order to be sure of getting questions you could answer. Still wondering what was missing before I could judge the Psychology scores against the other subjects? Look at the sequence in a logical order: assume the Todd Report correctly defined the national standard for Behavioural Science, of which Psychology was part; assume that the syllabus was a fair representation of the national standard; assume that the exam was a fair representation of the course as taught (not every subject taught will be examined in any particular year); and then assume that the exam had been properly marked, with two internal examiners working independently, then consulting each other afterwards on their marks, then turning to the External Examiner to resolve differences. Perfect?

If you present experts with a fault tree they tend to believe that it has covered all possible problems (even if about a third of it is missing). Since experts are rational people, and mostly good-natured, they tend to have difficulty believing that some people will commit stupid, dishonest and malevolent acts.

My Psychology colleagues explained to me that the reason the Physiology exam never had failures, or only very few, was that the Physiologists always ran a “revision” teaching session at the end of term, always very well attended, at which they discussed the sorts of topics “which might come up in the exam”. Even the dullest rugger-bugger medic could pick up the clues. Passes all round.

Psychology did not do this, either out of honesty or plain innocence, imagining that exams ought to be a test of what students actually knew.

The official Examiners’ meeting was chaired by the Dean of the Medical School. This being London, he was already dressed in his legal robes as a Queen’s Counsel, since he was about to go off to the High Court on another, presumably far more important, matter. As predicted, the Physiologists made their attack: “Psychology, a new subject, is being far too harsh, and is failing students whom we know to be perfectly good future medics, who have done well in our Physiology exam”.

The Dean turned to me slowly, raised an eyebrow, and with infinite politeness said: “Dr Thompson?” I smiled in a manner which I hoped could be described as understanding, even sweet, and replied that the overall results of examinations depended largely on the extent to which the questions could be predicted by students, and that perhaps, just possibly, the Physiology questions were habitually more predictable than the Psychology questions.

The Dean accepted our marks without further discussion, and some students had a summer in which they learned some Psychology. Whether they became better doctors is hard to say.
