InSight+ Issue 30 / 6 August 2012

Doctors spend a lot of time learning facts but don’t get much formal education on how to think.

Ideally, this skill is learnt ― or “picked up” ― during the years of clinical teaching at the bedside, in the clinic and in the operating theatre.

Errors in clinical interpretation and reasoning can occur at any point during patient care. These are often due not to a lack of knowledge or competence, but to the decision-making processes of humans in situations that are clinically complex, uncertain, and pressured by time and emotion.

In the latest issue of the MJA, an interesting study looks at the clinical reasoning skills of junior and senior emergency medicine staff by testing their ability to accurately interpret electrocardiograms (ECGs) when provided with either no clinical history, a history with a positive bias towards the correct diagnosis, or a history with a negative bias towards an alternative diagnosis.

Overall, doctors made the correct diagnosis about half of the time. Worryingly, this 52% correct rate may be the “best-case scenario”, as less than half of the doctors approached agreed to participate in the study. One might surmise that these results therefore reflect the doctors who perceive themselves as most competent at interpreting ECGs.

Accuracy was affected by knowing the clinical history and by the seniority of the clinician — results that were to be expected. In reality, however, the pressures associated with the modern emergency department, including overcrowding and the 4-hour rule, make it likely that the ECG is taken before the patient is seen by a doctor and then interpreted by a junior doctor without the benefit of the history to inform the decision.

The authors did not determine the impact of incorrect diagnoses. If ventricular tachycardia is mistaken for supraventricular tachycardia, the consequences could be severe. Likewise, if a diagnosis of myocardial infarction is made in a patient with pericarditis, it is possible that the patient would undergo invasive, potentially dangerous and unnecessary investigations.

In reality, if a doctor is unsure of the diagnosis, one hopes that he or she would be able to consult and collaborate with colleagues about the diagnosis, with knowledge of the patient history and clinical examination.

This study also invites consideration of the risks and costs associated with the inappropriate use of investigations to screen for illness and to guide management, particularly when the test has a low specificity, as the ECG does. For example, many a clinical dilemma has arisen when the result of a D-dimer test, done as part of a workup, is positive for a patient in whom a pulmonary embolism or deep vein thrombosis was not even part of the differential diagnosis.
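
To see why, consider a rough, illustrative calculation (the figures are assumptions chosen for the example, not data from the study). Suppose a D-dimer assay is about 95% sensitive and 50% specific, and the pre-test probability of pulmonary embolism in a patient in whom the diagnosis was never seriously entertained is around 2%. Applying Bayes’ theorem, the post-test probability after a positive result is (0.02 × 0.95) ÷ (0.02 × 0.95 + 0.98 × 0.50), or roughly 4%. The positive result barely shifts the likelihood of disease, yet once it is on the chart it is hard to ignore, and a cascade of further investigations often follows.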

The statistics about errors in medical reasoning are sobering. The correct diagnosis is missed or delayed in up to 14% of acute admissions. Even when the diagnosis is correct, up to 43% of patients do not receive recommended care, and about $800 billion — nearly one-third of all health care spending — is wasted on unnecessary diagnostic tests, procedures and extra days in hospital.

Wilson and colleagues’ landmark analysis of the causes of adverse events in the Australian health care system found that almost half of reported adverse events involved errors of reasoning.

Clearly, we overestimate our ability to correctly deploy tests, interpret test results, and act appropriately on the results of clinical interactions and subsequent investigations.

The study involving ECG interpretations confirms the need to actively critique our methods of clinical reasoning, and to teach these skills.

Formal courses can provide a theoretical background, but these skills still need to be taught as part of the curriculum at the “bedside”. We all make mistakes, but if we don’t understand why they occurred, we are likely to repeat them.

Dr Annette Katelaris is the editor of the MJA.
This article is reproduced from the MJA with permission.


5 thoughts on “Annette Katelaris: What influences clinical decision-making?”

  1. Sue Ieraci says:

    Yes, safety is a commodity, and that’s why the airline industry is not a good model for health. Airline travel is optional – it is not considered to be a universal publicly-funded right. What happens when a flight cannot be fully staffed, or the crew is too junior, or someone is off sick? They can cancel the flight. Health care is more like traffic – chaotic but with a degree of predictability, involving a huge variety of vehicles and drivers, operating 24 hours, 7 days a week.

  2. ex doctor says:

    To Donald, and to all who advocate the airline safety model for medical care: I strongly recommend reading Richard de Crespigny’s book “QF32”. This is a detailed account of the Qantas A380 engine failure incident out of Singapore. A careful reading will give much food for thought for doctors and indeed the whole medical industry. The inescapable message, however, is that safety is a commodity and the community can have as much safety as it is prepared to pay for.

  3. Sue Ieraci says:

    “ex doctor” is right – there is a flaw in clinical reasoning when one defines a diagnosis as “correct” on the basis of a single test, while ignoring the history. Perhaps a better structure might have been to provide the ECGs with, and without, the benefit of the correct history. If the “distractor” history made the condition highly unlikely, then that should affect the rate of false positives in the test – according to Bayesian logic. In that sense, the “misreading” probably reflects appropriate reading of the patient in toto.

  4. ex doctor says:

    Why was the study constructed this way? Surely the human interpretations should have been compared with a computer-generated diagnosis as a baseline. Is it now clinical practice to make a diagnosis and implement therapy on the basis of a blind analysis of an ECG without the benefit of a clinical history and physical examination? If it is, then God help us.

  5. Donald says:

    A very sobering essay, Dr. Katelaris. Thank you. Maybe it is time for the airline safety model to be applied to medical care? Using the statistics from this study I am not sure if I want to fly with a pilot and copilot that have only a 52% combined chance of understanding or even reaching the destination! At best!
