You can find my most recent commentaries on the Junior Doctors’ crisis as it unfolds in 2008 by following this link.
You may have heard about the “computer problems” with the junior doctors job system. Calling the problems “computer problems” is like blaming the guillotine for the French Revolution.
(Briefly, as background, you should know that Patricia Hewitt and Tony Blair have decided that 20% of our specialist doctors are surplus to requirements. As of August they will be out on their ears. Most people simply don’t believe this statement, but it is true. To turn the wickedness of this into lunatic farce, doctors cannot apply to specific hospitals or NHS trusts. Instead a young specialist may end up virtually anywhere despite the fact that she probably lives within a handful of miles of a hospital training her peers in just her speciality. This is such a stupid situation that most people don’t believe that either.)
Right. Can’t avoid it any more. Let’s look at MTAS up close and personal.
The doctors have been selected for interview based on a Self-Assessment Competency Questionnaire. They are asked a dozen or so questions and allowed 150 word answers to each. Now I have to say that the questions are sharp and to the point, and do in fact cover stuff that is not explicitly available from CVs or even from references. Most doctors disagree with me and consider them to be a bullshitters’ charter, and an invitation to waffle. Paradoxically, they are right too as we shall see.
I suspect that the civil servants just don’t understand why the doctors see the questions as irrelevant requests for creative writing. This is probably the nub of the problem. I come from an environment where this sort of thing is usual and have considerable respect for the format because I am used to the soul-searching and hand-crafting that’s involved in answering this kind of question. So do the minions in the Department of Health. But the fact that civil servants like it doesn’t make it a good way to evaluate doctors.
You see, if you have not done these things before they are the very devil to answer. I am going to give you a couple of examples, those of Dr Smith and Mr Jones, both from the inimitable blog of Dr Crippen. I’ll restrict myself to comparing their answers to just one question.
Dr Smith’s answer:

B6 What experience of delivering teaching do you have? (150)

I have a broad experience of teaching House officers, SHO’s and medical students. I teach theory, practical skills and etiquette. I thoroughly enjoy teaching and find the challenge a constant spur to my own learning. I am an associate lecturer at the University of Toronto.

Mr Jones’s answer:

B6 What experience of delivering teaching do you have? (150)

I have a keen interest in medical education, and take every opportunity to teach in the clinical environment. Formal medical education training has enabled me to develop the skills, attitudes and practices of a competent educator. I bring innovation and passion to teaching, and have a recognition award for Clinical Teaching beyond the call of duty.

I’ve delivered lecture-based and clinical teaching to [specialty] colleagues, A&E colleagues, and nurse practitioners. I’ve taken responsibility for organising MRCS clinical examinations. As an accredited Basic Surgical Skills tutor, I’ve received excellent feedback from delegates and convenors. I tutor medical students in the Surgical Skills laboratory, and supervise medical students spending electives in [specialty].

I’ve been involved in regular surgical events for schoolchildren since 2002. I find great satisfaction in teaching. My experiences have helped develop my teaching, technical and leadership skills. I hope to continue developing and utilising these skills.
Dr Smith may be no worse at teaching than Mr Jones. He may be no less experienced. But you can see quite clearly that Mr Jones’s answer is by far the better of the two. Dr Smith assumed that his qualifications would speak for themselves. Before I leave them, I am going to link you to their own reflections on their applications. Dr Smith may not have been eloquent on his MTAS form, but he’s most certainly a gent and – from his CV – an excellent doctor. Mr Jones, too, seems a thoroughly decent bloke. I cannot believe we have lost one and may yet lose the other.
The medical profession has put some effort into researching how best to predict which applicants will turn out to be the best doctors. Among the papers the BMJ has published on the subject is one by Fiona Patterson and colleagues. The paper advocates a three-tier assessment of competencies based on an application form, a day spent at an assessment centre and structured references. It concludes (along with most of the literature on the subject) that for professions such as doctoring, a well-planned assessment centre is the way to go. I only mention this specific paper because it is rumoured to be the basis for MTAS, the Medical Training Application Service, though if it is, there has been a long and deviant path from paper to process.
The three main problems with the paper are:
- It is based on General Practice competencies. Interestingly enough, GPs are being assessed by exam and assessment centre, this time round.
- Application forms included 6 self-assessed competency questions and were “rated independently by three assessors (experienced trainers or course organisers). Each assessor attended a four hour training session to enhance reliability”. As opposed to being specialist registrars recruited with no training on how to mark the forms, or – in the most surreal rumour I’ve come across – Police Cadets.
- The research questions focused on the outcomes of performance at an assessment centre, not on the effectiveness of the application forms in pre-selecting those to be assessed.
In other words, if the MTAS system was in fact based on this paper, then it’s missed the bloody point.
On top of that, it presupposes several things, none of which have been the case:
- The applicants trust the system
- The system is managed competently and fairly
- The assessors know what the fiddle-dee-dee they are doing
- Assessment at the assessment centre lasts a whole day, and the assessments comprise “a series of work-related simulation exercises, each lasting between 20-40 minutes” none of them involving origami.
This is an expensive approach. At some point someone decided to put more emphasis on the Self-Assessment Competency Questionnaire and to cut the day at the assessment centre down to a 30 minute interview slot.
Is this the same?
The problem is that the circumstances which render an MTAS-style questionnaire invalid are precisely the circumstances in which it has been introduced. It is seductive for civil servants because it is easy to administer, simple to score, and – when introduced properly – effective at working out which candidates are suitable. (I am sorry, but it is.) But it really has not been introduced properly.
Brace yourself for some academic jargon.
“A meta-analysis … found predictive validity to be relatively low (r=.29), but that it could be substantially improved (r=.64) by incorporating certain measurement conditions including instructions emphasizing comparison with others, previous experience of self-evaluation, guaranteed anonymity of [candidates] and the expectation that [the applications] will be validated or checked against other measures.”
In other words, this system is rendered ineffective if the candidates don’t trust it or haven’t used a system like it before.
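To put those correlation figures in perspective: squaring r gives the proportion of variance in later performance that the score accounts for, which is the standard rule of thumb for reading predictive validity. A quick sketch using the two figures quoted from the meta-analysis (the r values are from the quote above; everything else here is just arithmetic):

```python
# Share of variance explained (r squared) for the two predictive-validity
# figures quoted from the meta-analysis above.
def variance_explained(r: float) -> float:
    """Return r squared, the proportion of outcome variance the score explains."""
    return r * r

for label, r in [("as typically administered", 0.29),
                 ("under ideal measurement conditions", 0.64)]:
    print(f"r = {r} ({label}): explains about "
          f"{variance_explained(r):.0%} of performance variance")
```

In other words, as typically administered this kind of questionnaire accounts for roughly 8% of the variance in later performance; only under those careful, trusted, familiar conditions does it climb to roughly 41%.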
Is this ringing any warning bells? In what conceivable world is it acceptable to use a system which is that fragile to work out who are the “weakest” 20%-25% of candidates in order to kick them out of their profession? But you are not a politician. If you were playing those sorts of games with people’s lives and with the NHS you’d want to make sure you’d kept the cream of the crop, and at least played fair by the rest.
Another problem with this whole Competency-Based Questionnaire debacle is that although such questionnaires – if implemented properly – may be effective at identifying good doctors, it is most definitely not proven that they are a good way of selecting out the bad ones. The people who fill the forms out well are (presumably) good, but I have not found evidence that everyone who is good will fill the form out well (remember, this is under “ideal” conditions). To test for this, you would have to appoint doctors you believed were going to be bad, and then check after a while to find out whether this was in fact the case. Don’t you just love ethics?
The bias is subtle and invidious.
“Harris (1994) … proposed that motivated [candidates] are likely to use thorough and analytic processing strategies (deliberate processing), leading to the retrieval of more information from memory. Those with low motivation are likely to use a quick heuristic-based approach (non-deliberate processing), associated with simplistic decision-making and integration processes (Chaiken, 1980; Petty & Caccioppo, 1986).”
This means that MTAS is actually biased against candidates who consider it to be a flawed system. The difference between “deliberate” and “non-deliberate processing” originates below the level of conscious choice. We have seen a startling example of this with Dr Smith and Mr Jones. If it were a matter of conscious choice on the candidates’ part, then it would be a way to select out those who are wilfully obstructive, but the Dr Smiths are not wilfully obstructive. They simply see no point in answering these questions when their CVs and academic and professional qualifications should speak for themselves, and so they answer the questions quickly and without much analysis. (I had to look “heuristic” up in this context. The Compact OED defines it as “… proceeding to a solution by trial and error or by rules that are only loosely defined”.)

The thing is, this really is completely irrelevant to their abilities as doctors, so we really are not getting the cream of the crop. We are getting the ones that bought in, I guess, but there is nothing to say that they are better or worse as doctors. This feature of the system is particularly nasty, because it gives the government the opportunity to claim that it is the candidates’ fault for not taking it seriously.
What about the things that show a doctor knows her stuff, you ask. What about their qualifications and Membership exams? What about the things that say if they are a good doctor to work with? What about knowing if they’ve worked in similar hospitals and with similar teams? What about their references and their CVs?
What indeed. The MTAS score has a weighting of 85% in the decision of whether or not a candidate should be interviewed. Everything else – CV, Honours at Med School, research, published papers, higher degrees, doctorates, marks in Membership exams, portfolios, references – contributes a maximum of 15%. This might be acceptable if MTAS were flawless, and it wouldn’t matter a damn if there were a shortage of doctors, but given what we know about the weaknesses of Self-Assessment Competency Questionnaires it is sheer professional negligence on the part of those implementing the system. Isn’t it?
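To see just how lopsided that weighting is, here is a minimal sketch. The 85/15 split is the figure reported above; the 0–100 scales, the candidate names and the specific scores are illustrative assumptions of mine, not the actual MTAS marking scheme:

```python
# Illustrative only: shows how an 85% questionnaire weighting swamps
# everything else, given hypothetical scores on a 0-100 scale.
def shortlist_score(questionnaire: float, everything_else: float) -> float:
    """Combine the two components using the reported 85/15 weighting."""
    return 0.85 * questionnaire + 0.15 * everything_else

# A Dr Smith: outstanding CV, references and exams, flat questionnaire answers.
smith = shortlist_score(questionnaire=40, everything_else=100)

# A candidate with polished questionnaire answers and a middling record.
polished = shortlist_score(questionnaire=90, everything_else=50)

print(f"Dr Smith: {smith:.1f}  Polished candidate: {polished:.1f}")
```

On these made-up numbers a perfect score on everything else still leaves Dr Smith dozens of points behind: the questionnaire alone decides who gets an interview.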
I had at least retained some faith in the common sense of the interview process until I read this, which made me sick to my stomach all over again:
The second station [at my interview] involved ‘Communication Skills’. It was awful. First, I had to fold a piece of paper according to verbal instructions. It did not make a crane – perhaps I did it wrong? Then I was given a random series of shapes on a piece of paper and had to describe them to another Consultant for her to draw them. Hmmph. Goodness only knows how I did on this station. I felt stupid and I know that I didn’t show how well I can actually communicate about real things. What I don’t understand is how this is supposed to supply them with reasonable doctors. If I did it all wrong, am I a bad Doctor? If I did it right, should you fast track me to a Consultant’s post?
Bloody weird. – Junior Docspot – Origami Anyone?
The worrying thing is that the much-vaunted review of MTAS which has now been promised by the government is going to spend its time checking the ropes of the guillotine, making sure that next time the blade is sharp and the mechanism is properly oiled and running smoothly. The fact that the site kept on crashing is, to be honest, irrelevant. The fact that the papers have been marked in such haste that the scoring is effectively arbitrary is marginally more relevant. The two things that the review will ignore are:
- Are Self Assessment Competency Questionnaires an effective way to sort out good doctors from less good ones?
And the really important one:
- Why the f**k are we losing 20-25% of our specialist trainees?
A note on my sources. I did a fairly quick literature search the other night and much of what I read summarised meta-analyses and other published research studies. Recruitment and candidate assessment isn’t my patch but I do live and work in the neighbouring metaphorical postcode. The paper which was most relevant and which I took my quotes from was:
Ivan T. Robertson and Mike Smith. (2001). Personnel selection. Journal of Occupational and Organizational Psychology 74, 441–472.
The paper in the BMJ was:
Fiona Patterson, Eamonn Ferguson, Tim Norfolk, Pat Lane (2005). A new selection system to recruit general practice registrars: preliminary findings from a validation study. British Medical Journal 330, 711–714.
Stupidly, I have lost the references for the other papers I read that evening, but interestingly none of them discussed Self-Assessment Competency Questionnaires. Studies, one of them published last year in the BMJ, demonstrate time and again that the most effective predictor of candidate performance is the assessment centre. Ones involving simulated situations, not ones involving origami.
If you do want a list of references then I can find them again. Drop me a comment and I’ll skip through my browser history and dig them out.
Parts 1, 2 and 4 of A patients’ guide to Modernising Medical Careers consider other aspects of this madness.