Although much of medicine has advanced beyond recognition in the past 50 years, the taking of a patient history remains a bastion of traditional care; artificial intelligence could change all this, writes Dr Jack Marjot.

Medical students the world over have been told some variant of the adage “80% of clinical diagnosis is in the history”. This wisdom has been reiterated since at least 1975, when Hampton and colleagues reported that “a diagnosis that agreed with the one finally accepted was made [by physicians] after reading the referral letter and taking the history in 66 out of 80 new patients” (here). Tiny sample size aside, a good history remains the cornerstone of patient assessment and diagnosis. However, although much of medicine has advanced unrecognisably since 1975, the process of history taking has remained a stoic bastion of traditional (perhaps even paternalistic) care – formulaic, clinician-directed and, for the most part, face-to-face.

Since 1975, the realities of health care have changed. In 2022–2023, 35% of patients attending emergency departments in Australia’s public hospitals waited longer than the recommended time for their triage category before treatment began – often entailing hours in a waiting room. This is mirrored in general practice, where in 2021–2022, 39% of people who saw a general practitioner for urgent medical care waited 24 hours or more.

What if we were to repurpose those hours and days into productive time, in which the patient provides their history and so begins taking steps towards their diagnosis before they ever meet the clinician? From the moment that face-to-face interaction arrives, the clinician has a wealth of information at their disposal.

AI-assisted history taking could empower patients to provide their own history, the author writes (Prostock-studio / Shutterstock).

The role of artificial intelligence (AI) in health care is already beginning to crystallise, as we see it applied to medical image analysis, radiology reporting, personalisation of treatment options, and early disease detection. History taking should not be seen as too basic, too fundamental, or indeed too human to be included in AI’s reimagining of health care.

As doctors, we should disabuse ourselves of the notion that history taking is an art form predicated on years of experience. Instead, we should embrace the potential for AI to enhance history taking and, in particular, to empower patients to provide their own history asynchronously from a clinician. AI has the potential to ask patients questions that are personalised and refined according to their previous answers, gathering the relevant information to reach the diagnosis, all before they see a clinician.

For example, while “John” sits in an emergency department waiting room, the clock marking his third hour waiting with chest pain, he could begin contributing to that fabled 80% of his diagnosis. Via his smartphone, an intelligent and adaptive series of questions could prompt him to describe the nature of his pain, his cardiac risk factors, previous treatments, comorbid conditions, and lifestyle factors. Just as the experienced clinician knows not to immediately accept John’s answer of “no” to the question “do you have any past medical history?”, such an AI-assisted history system would probe a little deeper, asking directly whether John is known to a cardiologist, is on any cardiac medications, or has ever had an electrocardiogram, echocardiogram, or angiogram. When John reports he gets short of breath, AI could help him characterise whether the dyspnoea occurs at rest or on exertion, and ask whether he is also short of breath lying flat.
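
To make the idea concrete, here is a minimal, hypothetical sketch in Python of the branching follow-up logic described above. It is an illustration only: a real system would draw on a language model and a clinically validated question bank rather than a hand-coded tree, and every question and answer below is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """One history-taking prompt, with answer-dependent follow-ups."""
    prompt: str
    # Maps a normalised answer (e.g. "no") to further questions worth probing.
    follow_ups: dict = field(default_factory=dict)

def take_history(questions, answers):
    """Walk the question list, branching on each answer.

    `answers` stands in for the patient's smartphone responses; a real
    system would gather them interactively rather than from a dict.
    """
    for q in questions:
        answer = answers.get(q.prompt, "").strip().lower()
        print(f"Q: {q.prompt}\nA: {answer or '(not answered)'}")
        take_history(q.follow_ups.get(answer, []), answers)

# Don't take "no past medical history" at face value: probe for cardiac
# contact points, just as an experienced clinician would.
past_history = Question(
    "Do you have any past medical history?",
    follow_ups={"no": [
        Question("Have you ever seen a cardiologist?"),
        Question("Are you on any heart medications?"),
        Question("Have you ever had an ECG, echocardiogram or angiogram?"),
    ]},
)

take_history([past_history], {
    "Do you have any past medical history?": "No",
    "Are you on any heart medications?": "Yes",
})
```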

In the 2021 census, a language other than English was used in almost 25% of Australian households; if English were not John’s first language, the system could instead use a more appropriate language to improve the accuracy and speed of history taking.

If John were to present to a regional or remote hospital, there is potential for such a system to assist junior clinicians in asking the right questions to gather all the essential information, especially if there is a scarcity of on-site senior support.

What’s next? From John’s history, AI could then build a differential diagnosis list – prioritising the most likely diagnosis while reminding the clinician to consider others, guarding against anchoring bias. Ultimately, it may be able to integrate the history with John’s triage vital signs and basic demographic data and feed them into a validated clinical scoring system, giving the likely severity of that diagnosis.
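
To illustrate the scoring step: the HEART score is one validated chest pain tool, and three of its five components (history, age, risk factors) can be scored from the history alone, before a single test result is back. A minimal sketch in Python, with John’s details invented purely for the example:

```python
def heart_history_points(age: int, n_risk_factors: int, history_suspicion: int) -> int:
    """Sum the three HEART score components available from history alone.

    history_suspicion: 0 = slightly, 1 = moderately, 2 = highly suspicious
    (in practice this grading would come from the AI's reading of the history).
    The ECG and troponin components are omitted because they arrive later.
    Note: known atherosclerotic disease also scores 2 risk-factor points,
    which this simplified sketch ignores.
    """
    age_points = 0 if age < 45 else (1 if age < 65 else 2)
    risk_points = 0 if n_risk_factors == 0 else (1 if n_risk_factors <= 2 else 2)
    return history_suspicion + age_points + risk_points

# A hypothetical John: 58 years old, a smoker with hypertension
# (2 risk factors), telling a moderately suspicious story.
partial = heart_history_points(age=58, n_risk_factors=2, history_suspicion=1)
print(f"HEART score so far: {partial}/6 (ECG and troponin pending)")
```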

It could then refine it all into a succinct, relevant and digestible summary for the clinical decision maker and for the electronic records.

In resource-constrained health care systems, imagine if we could use this AI-assisted history to more accurately triage John’s need for urgent clinical review. It may even be able to predict, in the first minutes of presentation, the likelihood of John needing admission and thus help to navigate bed pressures and access block.

All this could occur before John has had a single blood test, scan, or indeed has even left his waiting room chair.

Despite the opportunities, it’s important to discuss the risks and limitations. There will always be times when the nuanced communication skills of a doctor–patient interaction are required to elicit the key details of a patient’s presentation. There will also always be a need for human empathy in the patient assessment and a need for a clinician to overlay the lens of hard-learned experience to an AI-generated history. And, of course, nothing replaces a doctor’s instinct.

In addition, a patient-initiated AI-assisted history system should never substitute for clinical review and should never go unchecked. The development of any AI-assisted history-taking tool will need to be met with rigorous testing and regulation, and to conform to data privacy standards. Nonetheless, in an under-resourced health care system where so much of a patient’s time is spent waiting, we as clinicians and diagnosticians should recognise the power of AI to transform history taking and allow the patient to take the lead – even if we have always considered it one of our most fundamental and human skills as doctors.

Dr Jack Marjot is an advanced trainee in emergency medicine at the Prince of Wales Hospital in Sydney.

The statements or opinions expressed in this article reflect the views of the authors and do not necessarily represent the official policy of the AMA, the MJA or InSight+ unless so stated. 

9 thoughts on “How artificial intelligence could transform taking a medical history”

  1. D Astles says:

    Thanks for sharing a thought-provoking article Dan, which has generated valuable discussion. 100% agree there are likely many applications that AI can support in the ED; the important point is to link in with partnered universities to build meaningful research questions that can be robustly tested to build our body of knowledge. At NSLHD we have recently set up our AI Healthcare Governance Committee, to support the review and oversight of AI tools entering the organisation.

    As a side-note, I’d recommend you reach out to Dr Oscar Perez-Concha, Centre for Big Data Research in Health, UNSW Sydney NSW 2052 Australia.

  2. Anonymous says:

    Sue Ieraci’s comments about how questions are asked nicely illustrate the issue of taking answers at face value, particularly the example about smoking.
    There are several inherent issues when people answer a question.
    When a person answers “no” when asked “Do you smoke cigarettes?”, (s)he can mean any of the following:
    1. No, not at this moment as I am talking to you
    2. No, although I do smoke cigars/cannabis or I vape
    3. No, as I stopped doing so 1 hour/1 day ago with a plan to abstain
    4. No, although I am still actively smoking every day

    Answers 1, 2 and 3 can be factually correct and may or may not involve an intent to deceive, whereas the last answer is deceptive.

    AI will not necessarily question the accuracy of the answer provided whereas a trained clinician can smell the cigarette breath and note the cigarette stain on the fingers.

    The AI could perhaps be programmed to disregard an answer it decides doesn’t fit the scheme, but it would take a courageous programmer to make that call.

  3. Antony Sara says:

    As I understand it, asking mechanistic questions via a computer program has been tried many times in the past, and has basically failed. One of the issues was that covering all the possibilities becomes close to impossible. AI cannot think, and cannot deduce, so I struggle to understand how AI in its current form can do what is desired of it. Sue, Nell and David are correct, IMHO.

  4. Anonymous says:

    I feel that some commenters don’t understand that large language model AI programs are not mechanistic but are able to learn and analyse answers, utilise vast databases of information and adjust their responses accordingly. There is merit to this idea as a way of further triaging patients. Patients don’t have to understand medical terms. I envisage a system that would take patients through very simply worded questions requiring yes or no input to refine their answers. An AI bot could even be designed to vary the complexity of the questions depending on the patient’s details, such as educational level, previous responses etc. Also, there are already online mental health programs that are very successful, and they are not even AI. But it would take a lot of work to design such a system for every type of presenting symptom and in different languages.

  5. Rahul Barmanray says:

    Good summary of the latent potential here. One point you raise deserves further consideration: that “as doctors, we should disabuse ourselves of the notion that history taking is an art form predicated on years of experience”. Those of us in clinical education are unfortunately all too cognisant of the many years it takes human students (ex-students very much included!) to become expert history-takers. But this, and presumably the point you intended, doesn’t mean that it takes AI tools years to reach similar levels of expertise. In fact, the LLM experience shows this to clearly not be the case, and OpenAI tools in particular probably allow a medical student-level AI tool to be created in days.

  6. Sue Ieraci says:

    The real issue with any AI tool is whether the tool answers the right question. If the AI merely automates a poor cognitive process, it will exaggerate that problem.

    Since this article mentions ED care, I’ll use a common example: chest pain. Bayesian thinking – often sorely lacking – tells us that, if the patient has a low pre-test probability, the tests will yield too many false positives. Most so-called “rule out ACS (acute coronary syndrome)” pathways are designed to over-diagnose (because the aim is to minimise “missing anything”).
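
    To make that concrete with a quick back-of-the-envelope calculation (the sensitivity, specificity and prevalence figures below are invented purely for illustration):

    ```python
    # Positive predictive value under low pre-test probability.
    # Illustrative figures only: a test with 90% sensitivity and 90% specificity,
    # applied to a waiting room where only 2% of patients truly have ACS.
    sensitivity, specificity, prevalence = 0.90, 0.90, 0.02

    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    ppv = true_positives / (true_positives + false_positives)

    print(f"PPV = {ppv:.0%}")  # ~16%: roughly five in six positives are false alarms
    ```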

    Also we all know that the way we ask the question can shape the answer.

    Q: “Do you smoke cigarettes?”
    A: “No”
    (Q: when did you give up?
    A: last week, when I got this cough)

    I would implore the author to pour his talent and insights into clinical documentation. We need a system that makes it easy to rapidly and accurately record the results of our history-taking, logically and legibly, which is efficient for clinicians – not just for data collection.

  7. Lynden Roberts says:

    Nice article Jack. AI can’t replace the ‘art’ of human history taking, but if it can help save clinician time and/or potentially enhance clinician performance, we really must check it out. Our current system is far from optimal for our patients, and the potential here feels worthy of exploration. Methodical and comprehensive clinical evaluation of any clinical AI will be key; note the lengths we go to in training and credentialling our clinicians – we can’t skip over this step for AI.

  8. Dr Nell de Graaf says:

    Sounds great, Jack.
    However, as a rural doctor working in a Queensland ED, I find that even our experienced triage nurses have difficulty getting a history out of a lot of our patients.
    Lots of our older, deafer, uneducated patients don’t understand any medical terms.
    Lots of our patients don’t have smartphones, or can only use them to answer a call.
    Lots of our patients who speak English as a first language can barely read or write beyond secondary entry level. Can your AI ask them the questions and listen to their answers?
    Lots of our mental health patients can’t or won’t answer questions…
    Lots of challenges lie ahead as our patients get older, more complex, more medicated and less supported in the community.

  9. David Ringelblum says:

    Do body language, tone of voice, hesitancy in answering, contribution/interruption/interaction between patient’s family members not contribute to the history and the diagnosis and the ultimate management?
    AI will take a great mechanistic history. But in doing so may muddy the pitch considerably, both for later human history taking and in treatment. The interaction between physician and patient is part of the therapeutic process.
