InSight+ Issue 10 / 16 March 2026

Concerns are rising over patients’ use of AI tools such as ChatGPT for health information, often with little awareness of the risks.

Although ChatGPT has only been available since 2022, people have quickly taken to the platform for answers to their health-related questions, despite concerns about its accuracy. ChatGPT Health was launched by OpenAI earlier this year but isn’t yet available in Australia.

Dr Julie Ayre, a postdoctoral fellow at the Sydney Health Literacy Lab, was involved in a 2024 study which found that health queries to AI tools are increasing rapidly, with 46% of Australians having recently used an AI tool for health advice.

“We found that one in 10 Australians had been using ChatGPT for health advice, and that is actually a significant proportion of people. I think it was 39% who had not used ChatGPT for health advice yet, but were really open to doing that,” Dr Ayre says.

“Keep in mind that was in 2024, before the Google AI overviews became available, which are everywhere now, they’re hard to ignore. And we were just focused on ChatGPT. So of course now in 2026 we would expect that that number has really grown.”

The inherent risks of AI health advice

The February edition of Nature Medicine published the first independent safety evaluation of ChatGPT Health, which found the tool gave inaccurate triage advice in more than half of the cases it assessed.

The Guardian reported that this study found the AI often did not properly assess the urgency of the medical care needed, and also failed to detect suicidal ideation.

In just over half of the cases it assessed, ChatGPT Health did not advise people to go to hospital when their case was an emergency, which the study authors said could give people a false sense of security that could cost them their lives.

While increasing use of AI by patients before and after consultations is inevitable as the technology spreads, Dr Ayre says people need to be aware that using AI in this way carries inherent risks.

“We have lots of research coming through that shows, yes, a lot of the time tools like ChatGPT will be accurate, but it’s not all the time. And it can have really serious consequences even if all of the information is there for it to make a correct decision,” she says.

“It doesn’t necessarily always give the correct advice. And in fact you can give it the same information multiple times and sometimes it will be wrong and sometimes it will be correct.”

“There’s something about these AI tools that’s really empowering for people and I think we need to recognise that, but still understand that this is a new technology, people are learning how to use it, they’re still learning about the limitations and how to use it safely.”

“What we found was that quite a notable number of people were asking higher risk questions. So this could be things like asking what do my symptoms mean, asking about treatment advice, or interpreting test results.”

“All of those kinds of questions, there’s a higher risk that comes with them because they typically would require clinical expertise to answer them safely.”

A study showed that ChatGPT Health frequently failed to advise people to go to hospital when their case was an emergency (Nwz / Shutterstock).

Dr ChatGPT in the consultation room

Dr Shaddy Hanna works as a general practitioner in Greater Western Sydney and researches the impact of AI and digital scribes.

He says he’s noticed increasing numbers of people coming to their consultations having looked at ChatGPT first, and that it’s happening faster than many clinicians realise.

“We used to talk about Dr Google, now it’s Dr ChatGPT,” Dr Hanna says.

He’s fascinated by how quickly patient behaviour has shifted, and how that’s changed the way he practises as well.

“Our patients are using these tools to understand their symptoms, interpret test results, explore treatment options or even just to make sense of a health concern late at night when they can’t access their GP,” he says.

“Almost overnight patients began arriving to consultations more informed and with more specific questions about their symptoms, their medical history and treatment options.”

But while this shift has been helpful in many consultations, supporting more focused conversations and better shared decision-making, one of his greatest concerns is that people don’t understand AI’s limitations.

“Like any probabilistic system, AI tools can sometimes produce very confident answers that may not actually be clinically appropriate or reliable — nor disclose this,” he says.

“These days I try to routinely ask patients during consultations whether they’ve gone to Dr Google or Dr ChatGPT to ask any specific questions before they came in to see me that day.”

“I think that creating space for these conversations with patients is important, because it allows us, as trained clinicians, to help explore, contextualise and critically analyse the advice they have been given.”

“Especially when the advice may seem so coherent and confident that it can mislead patients into trusting it, almost as a replacement for consultations with trained medical practitioners.”

He’s found that AI’s sometimes misleading clinical advice can have serious consequences, and he also has concerns about privacy and how patient data is used.

“When patients share symptoms, test results or identifiable health information with an AI platform, it’s not always clear where the data goes or how it’s stored,” he says.

“As clinicians, we can take these opportunities to help our patients think critically about their data stewardship, before they blindly entrust their data to commercial AI platforms.”

Dr Ayre agrees that stronger governance over AI platforms is key, with the Therapeutic Goods Administration (TGA) having an important role to play.

“With the TGA, it also takes into account off-label use. So if a manufacturer or a software developer becomes aware that people are using it outside of its intended use, they need to respond to that as well,” she says.

“So that could be either to disable the ability to complete those actions that would make it a medical device or to undergo that regulatory approach so that it can be classed as a medical device.”

Nance Haxton was a journalist at the ABC for nearly 20 years. She’s also worked as an Advocate at the Disability Royal Commission, helping people with disabilities tell their stories, and as a senior reporter for the National Indigenous Radio Service.

In that time she’s won a range of Australian and international honours, including two Walkley Awards and three New York Festivals Radio Awards trophies.

Now freelancing as The Wandering Journo, Nance is independently producing podcasts including her personal audio slice of Australia “Streets of Your Town”.

