The consultation room has a new participant. It arrives when patients pull up ChatGPT-generated symptom analyses. It appears when we use GenAI to check drug interactions or draft letters. And it’s always listening when ambient AI scribes document our conversations. Whether acknowledged or not, GenAI is fundamentally changing what happens in the consultation room.
This article draws on a new BMJ series examining how generative AI is reshaping doctor-patient relationships. The series introduces the concept of "triadic care": the recognition that today's medical encounters often involve not just patient and GP, but also AI systems.
AI usage is exploding on both sides
Since ChatGPT’s release in late 2022, patient use of GenAI chatbots for health information has grown exponentially. In Australia, about one in ten adults now seeks health advice from ChatGPT. Patients use it to research symptoms, clarify medical jargon, create care plans when specialist support is delayed, and seek second opinions.
Some examples are striking. A US mother used ChatGPT to help diagnose her child’s tethered cord syndrome after traditional consultations missed it. But there’s also harm: AI has misclassified neurological symptoms, delaying stroke treatment.
On the clinician side, adoption is equally rapid. Two-thirds of US physicians now use AI tools, up from 38% a year earlier. The fastest-growing application is AI-powered medical scribes, ambient systems that automatically generate clinical notes, promising to reduce documentation burden.
In Australia, we're seeing similar patterns, but robust local data are lacking. My team at Macquarie University is conducting the first national survey of Australian GPs using digital scribes. If you're using these tools, please share your experience at https://gpscribesurvey.getcds.net/.

Why this is different from “Dr Google”
Unlike a Google search that returns links you can check, AI chatbots deliver synthesised reasoning without a verifiable trail. When ChatGPT generates a differential diagnosis, neither you nor your patient can trace how it reached that conclusion. Medical knowledge becomes something to interpret rather than verify.
The statistics are concerning: only 19% of users cross-check chatbot outputs, yet people trust AI responses as much as doctors' advice, even when inaccurate. This trust-without-verification fundamentally alters consultation dynamics. Yet the capabilities AI opens up are undeniable, and practising with AI assistance may soon seem as inevitable as working with the internet: possible without it, but increasingly impractical.
The risks and opportunities
AI can empower patients to prepare better questions and participate more actively in decisions. But three key risks emerge:
Trust erosion: Research shows patient satisfaction drops when AI authorship is revealed after the fact. In one study, patients rated AI-drafted messages as more empathetic than human ones, until they learned AI wrote them. Transparency needs collaborative review, not post-hoc disclosure.
Digital divides: This poses particular risks for culturally and linguistically diverse (CALD) populations and people whose first language is not English. Most AI systems are trained predominantly on English data and perform worse in other languages, potentially widening health inequities.
Accountability gaps: Triadic care means doctor and patient retain responsibility for decisions through shared understanding. AI provides input, but accountability cannot be delegated to algorithms making autonomous recommendations.
What doctors can do now
The first step is simple: make AI use visible. For patient AI use, try: “Have you used AI or looked anything up about this? Let’s review it together.” This normalises disclosure and creates opportunities for collaborative interpretation.
For your own AI use, brief transparency builds trust. “I’ve checked this interaction with an AI tool, and here’s what it suggests”, paired with joint review, increases patient trust in both you and the decision.
Documentation matters too. Health systems should add simple "AI involvement" fields to electronic records. Recording why you accepted, modified, or rejected an AI suggestion creates patterns for safety learning and audit trails. For AI scribes, obtaining patient consent before recording and transcribing consultations is essential.
Generalist doctors are well-suited to navigate this shift. Our core task has always been dialogue and interpretation with patients, working with uncertainty and helping people weigh options drawn from both general and specialised advice. The consultation is becoming less about exchanging facts and more about co-creating understanding, now with a third voice that needs critical interpretation.
Moving forward together
Australian healthcare is making progress on AI governance. The Department of Health and Aged Care has conducted consultations on safe and responsible AI in healthcare, the RACGP has released guidance on conversational AI and scribes, and professional bodies are developing frameworks. But integrating AI into everyday consultations requires practical action at the frontline.
GPs can start now by making AI use visible in our practices, asking patients about their AI use, documenting our own, and insisting on transparent systems rather than black-box automation. These grassroots practices will inform the broader policy work happening at state and federal levels.
The second and third papers in the BMJ series, published in coming weeks, examine patient perspectives and the clinical competencies needed to use AI transparently and effectively. Together, these papers argue that triadic care is inevitable but manageable, if we make it visible and maintain human judgment at the centre.
The consultation room has changed. There’s now a third voice in the dialogue. Australian GPs can shape how that voice is integrated, not as replacement for clinical expertise, but as a tool we interpret together, critically and collaboratively, in service of better care.
The first step is transparency. The second is dialogue. That conversation needs to start now.
Dr David Fraile Navarro is a trained GP and a Research Fellow in Generative AI at the Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney. He is lead author of “Generative AI and the changing dynamics of clinical consultations,” published in the BMJ this week as part of a three-paper series on AI in clinical encounters.
If you’re an Australian GP using AI-powered medical scribes, please contribute to the national survey at https://gpscribesurvey.getcds.net/
The statements or opinions expressed in this article reflect the views of the authors and do not necessarily represent the official policy of the AMA, the MJA or InSight+ unless so stated.
