Public trust in doctors and health services is shifting slowly, but in the wrong direction, in both Australia and the US. At the Australian Ethical Health Alliance (AEHA) symposium in May 2025, the message was clear: as health professionals, we need to take swift action. As panellists speaking on the rise of misinformation and disinformation, we explored how trust has always been the ethical currency of medicine; once it is spent, everything else, including vaccination, screening, shared decision-making and even discharge planning to aged care, becomes harder.
What’s changed isn’t just the volume of poor information in the system, but the way people now form beliefs. Professor Michael Kidd discussed the current health megatrends at the symposium, including the escalating health imperative and how we undertake accurate health promotion. GPs often see social media, AI-generated “health” content and inconsistent messaging from authorities creating competing information hierarchies that sit alongside, and sometimes above, clinical advice. Patients now arrive with WhatsApp screenshots, AI summaries and TikTok anecdotes, and expect those to be weighed equally against National Health and Medical Research Council (NHMRC) guidelines and decades of clinical practice. When clinicians push back, the encounter can quickly become adversarial in this age of the infodemic.
The clinical dilemma: when evidence meets AI-inflated doubt
Take the familiar scenario put to Professor Stacy Carter at the symposium: a parent requests an exemption from routine immunisation, armed with AI-curated claims about adverse events and demands for tests that lack an evidence base. Doctors are obliged to protect public health, follow Australian Immunisation Register (AIR) requirements and order Medicare Benefits Schedule (MBS) tests appropriately, but also to preserve a therapeutic relationship focused on harm minimisation. Saying “that’s wrong” is often ethically correct, but practically ineffective. The AEHA discussion centred on how misinformation has become a significant public health harm, not just a communications challenge. We cannot educate our patients with facts alone when we are competing with large language models; rather, we need to guide and support patients in fact-checking AI output and seeking reliable information from trusted sources.
Several speakers linked misinformation to epistemic injustice, the well-described phenomenon whereby certain patients (First Nations people, culturally and linguistically diverse communities) are less likely to be believed. If those same groups are also the targets of online misinformation, then a straightforward “trust me, I’m the expert” response is not only unlikely to work, it may reinforce historical mistrust. Moving from an information hierarchy (“I tell, you listen”) to an information partnership (“let’s look at your source and mine together”) was proposed as a more constructive stance.

AI as an amplifier or ally?
Participants were clear that AI is already part of the misinformation ecosystem: generating plausible but inaccurate health advice, surfacing non-peer-reviewed content, and sometimes divorcing information from clinical context. As Professor Chris Bain noted at the symposium, digital tools introduced without strong clinical input can “contribute to misinformation … especially when divorced from expert health input”. That is not a reason to reject AI; it is a reason to govern it. Clinicians need to steward these systems: set rules, insist on diverse training data, require auditability and consider an ethicist in the loop. Otherwise, we simply automate existing biases and spread them faster.
This is where technical governance matters. Australian health services already know how to run privacy impact assessments and cyber risk registers; they now need parallel processes for AI-assisted content and patient-facing tools, clearly linked to clinical governance. That means accountable committees that include clinicians and consumers, transparent model update processes, and rapid correction pathways for when misinformation is detected in a tool, whether that requires machine unlearning or correction of content. The alternative is undetected misinformation, or deliberate disinformation, spreading through AI-generated content.
When evidence isn’t enough
One example raised at the AEHA symposium was the absence of behaviour change after measles-related child deaths overseas, the kind of event we would expect to trigger a spike in MMR uptake, but which didn’t. The reality is that communities often know the facts yet don’t trust the source, or prioritise identity, community or personal story over population data and decades of evidence that the childhood vaccination program saves lives. The panel advocated for greater use of trusted community leaders, narrative approaches in primary care, and support for clinicians who undertake this emotionally demanding work in 15-minute appointments.
Listening as an ethical act
A strong message from the symposium was that being believed when providing health advice is critical. Nina Roxburgh put it plainly: we need to move from credibility conferred only on institutional voices to credibility shared with patients, especially those with lived experience. In practice, this looks like trauma-informed communication, offering interpreters, co-designing materials with affected communities, and documenting respectfully, even when the patient’s online source is limited or inaccurate. These are small acts, but collectively they rebuild institutional trust.
What clinicians can do now
AEHA participants weren’t naive about how busy general practice, EDs and outpatient clinics are. The proposed actions were deliberately pragmatic:
- Name misinformation explicitly but without ridicule;
- Invite patients to walk through sources with you;
- Use consistent, organisation-backed messages so clinicians aren’t left improvising;
- Escalate recurrent misinformation to health service leadership so it can be addressed at the system level, through websites and community channels, rather than one consultation at a time;
- Advocate for AI and digital tools in your service to have clinical governance that monitors for misinformation, not just digital or IT sign-off.
Above all, the symposium reminded us that trust must now be demonstrated: we need to show transparency, acknowledge vulnerability in the face of uncertainty and show patients how we reach our decisions. If we don’t, the vacuum will be filled, quickly, by AI-generated ‘certainty’.
Adrian Cosenza is Chief Executive Officer of the Australian Orthopaedic Association and AOA Research Foundation, and Chair of the Australian Ethical Health Alliance, a consensus framework body representing over 76 signatories and more than one million healthcare professionals. He brings extensive executive and board experience in medical education, technology, and finance across four countries.
A/Prof Emily Kirkpatrick (FRACGP, FRACMA) is a non-executive director across government and private health and education entities, whose work spans clinical governance, crisis management, digital health, and artificial intelligence. She is a senior clinical lecturer at the Australian Institute for Machine Learning at Adelaide University and Managing Director of EKology Group, focused on addressing complexity in the health ecosystem.
The statements or opinions expressed in this article reflect the views of the authors and do not necessarily represent the official policy of the AMA, the MJA or InSight+ unless so stated.
Subscribe to the free InSight+ weekly newsletter here. It is available to all readers, not just registered medical practitioners.
If you would like to submit an article for consideration, send a Word version to mjainsight-editor@ampco.com.au.
