Barriers abound in implementing artificial intelligence across Australia’s health care systems, with Queensland researchers calling for more government funding to take advantage of this emerging technology.

Australia’s health care system has been described as “impervious” to the allure of artificial intelligence (AI), with a lack of clinician trust and concerns over data privacy cited as the main barriers to adopting the technology in clinical settings.

Other concerns preventing a wider rollout of AI-related technology in Australian clinical settings include the risk of worsening health inequity due to possible biases in underlying data, and insufficient government regulation, according to a Perspective published in the Medical Journal of Australia.

“Across a network of clinicians in a national AI working group, only one hospital was known to have an AI trial underway,” Dr Anton Van Der Vegt and his colleagues wrote.

“As far as we are aware, there is no clinical AI implemented across Queensland Health despite having Australia’s largest centralised [electronic medical record (EMR)] system, which could make large-scale AI feasible.

“In stark contrast to the number of implemented AI systems, AI research abounds, with nearly 10 000 journal articles published each year across the world.”

Dr Van Der Vegt, a mechanical engineer by background, is an Advance Queensland Industry Research Fellow with the Centre for Health Services Research at the University of Queensland Faculty of Medicine.


The WHO urges caution

The World Health Organization (WHO) last year called for caution in the use of AI in medicine.

“[The] growing experimental use [of AI] for health-related purposes is generating significant excitement around the potential to support people’s health needs,” the WHO said.

“It is imperative that the risks be examined carefully when using [large language model tools (LLMs)] to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people’s health and reduce inequity.”

Data used to train AI may be biased, “generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness,” it said.

“LLMs may be trained on data for which consent may not have been previously provided for such use, and LLMs may not protect sensitive data (including health data) that a user provides to an application to generate a response.”

Government response

There is currently no specific AI legislation in place in Australia.

The Australian Government last year consulted on the potential risks of AI and how they can be mitigated, releasing a discussion paper on safe and responsible AI.

The Department of Industry, Science and Resources, which ran the consultation, is using the 150 responses from industry and the community to inform the “appropriate regulatory and policy responses”.

“Patchwork” of regulations

The Australian Human Rights Commission has described Australia’s AI regulations as a “patchwork”, saying that if the technology is not developed and deployed safely, it can threaten human rights.

“AI operates in a regulatory environment that is patchwork at best,” the Commission said.

“This has allowed AI to proliferate in a landscape that has not protected people from human rights harms.

“The Commission is especially concerned about emerging harms such as privacy, algorithmic discrimination, automation bias, and misinformation and disinformation.”

Algorithms “developed separately”

AI algorithms are typically developed and evaluated on datasets different from those at the hospital sites where they are eventually deployed, Dr Van Der Vegt and colleagues wrote.

This means that differences in clinical workflows, presenting patient conditions, data quality levels and patient demographic distributions can significantly affect algorithm performance.

“For example, AI algorithms developed on large city populations may perform poorly for hospitals in rural and remote areas, further perpetuating poor health outcomes for underserved and marginalised patient cohorts,” they wrote.

“Without this evaluation checkpoint, the AI remains untested. We argue this is one of the major reasons for the slow or absent uptake of AI within Australian hospitals today.”
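This local evaluation checkpoint can be made concrete. The sketch below, in Python, shows one minimal way a hospital might score an externally developed model against its own retrospective data before any go-live decision; the file names, metric and tolerance threshold are illustrative assumptions, not details from the Perspective.

```python
# Minimal sketch of a local validation checkpoint for a clinical AI model.
# All file names, figures and thresholds below are illustrative assumptions.
import joblib
import pandas as pd
from sklearn.metrics import roc_auc_score

REPORTED_DEV_AUROC = 0.85   # performance reported on the developer's own data
MAX_ACCEPTABLE_DROP = 0.05  # site-specific tolerance before escalating for review

# Model trained on an external population (eg, large metropolitan hospitals).
model = joblib.load("vendor_model.joblib")

# Retrospective extract from the local EMR: the model's feature columns
# plus the observed outcome label for each encounter.
local = pd.read_csv("local_emr_extract.csv")
X, y = local.drop(columns=["outcome"]), local["outcome"]

# Score the model on local patients and compare with the reported figure.
local_auroc = roc_auc_score(y, model.predict_proba(X)[:, 1])
drop = REPORTED_DEV_AUROC - local_auroc

print(f"Local AUROC: {local_auroc:.3f} (reported: {REPORTED_DEV_AUROC:.3f})")
if drop > MAX_ACCEPTABLE_DROP:
    print("Shortfall exceeds tolerance: do not deploy without further review.")
else:
    print("Local performance is within tolerance for a prospective trial.")
```

In this pattern, a model that performed well on metropolitan development data but loses discrimination on the local cohort is flagged before it reaches clinicians, which is the failure mode the authors describe for rural and remote sites.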

AI may assist with diagnosis and treatment

Australian Medical Association President Professor Steve Robson said artificial intelligence has the potential to transform medicine.

“This will be just as big a culture shock for doctors as it will be for their patients,” Professor Robson said.

“The most advanced AI that most doctors use at the moment is often Siri, or their Netflix preference guides.”

The first doctors to embrace the potential of AI have been radiologists, Professor Robson said.

“For several years now, AI software applications have been introduced to assist with image recognition and, increasingly, with decision-support,” he said.

“Interpreting medical scans can be challenging for even the most experienced specialists, and the stakes are high. Missing an important diagnosis, such as an early cancer or a subtle bone fracture, can have serious consequences for patients.

“The use of AI to assist radiologists as they work to read multiple images has been shown to enhance accuracy and improve outcomes for patients.

“AI is so powerful in its capabilities that it may detect subtle changes in human tissues that elude the human eye.”

The technology also has the potential to change pathology services, such as the diagnosis of cancer, he said.

“At a time when the pathology workforce is under great pressure, the introduction of AI technologies that act as a co-pilot and assist the pathologist in dealing with high workloads will be attractive to health services,” Professor Robson said.

Call for AI funding

Public funding beyond individual health care organisations’ budgets is required to develop such a shared evaluation infrastructure, Dr Van Der Vegt and his colleagues argue.

“A standardised prospective evaluation infrastructure should plug into a range of [electronic medical records] (eg, Cerner, Epic) … and be able to be deployed within each health care organisation’s firewalls,” Dr Van Der Vegt and colleagues wrote.

“Such an infrastructure, coupled with a standardised AI implementation framework, could provide health care organisations with the tools they need to comprehensively evaluate the AI and with the confidence they need to move beyond retrospective studies and implement well tested AI into clinical practice.”
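As a rough illustration of what “plugging into a range of EMRs” could look like in software, here is a hedged Python sketch of a standardised evaluation layer running behind an organisation’s firewall; the interface, connector classes and record schema are hypothetical, since the Perspective describes the goal rather than a design.

```python
# Hedged sketch of a standardised evaluation layer that plugs into different
# EMR systems from behind a health care organisation's firewall.
# The interface, connectors and record schema are hypothetical.
from abc import ABC, abstractmethod
from typing import Any, Dict, Iterator


class EMRConnector(ABC):
    """One adapter per EMR product; each maps local data to a shared schema."""

    @abstractmethod
    def stream_encounters(self) -> Iterator[Dict[str, Any]]:
        """Yield de-identified encounter records in the standard schema."""


class CernerConnector(EMRConnector):
    def stream_encounters(self) -> Iterator[Dict[str, Any]]:
        ...  # map Cerner extracts into the standard schema


class EpicConnector(EMRConnector):
    def stream_encounters(self) -> Iterator[Dict[str, Any]]:
        ...  # map Epic extracts into the standard schema


def prospective_evaluation(connector: EMRConnector, model: Any,
                           outcome_key: str) -> Iterator[Dict[str, Any]]:
    """Run one model over encounters, pairing predictions with observed outcomes.

    The harness depends only on the EMRConnector interface, so the same
    evaluation code runs unchanged at any site, whichever EMR it uses.
    """
    for encounter in connector.stream_encounters():
        prediction = model.predict(encounter["features"])
        yield {"prediction": prediction, "observed": encounter[outcome_key]}
```

The design choice here is that the evaluation harness depends only on the connector interface, so the same prospective evaluation code could run unchanged at a Cerner site or an Epic site.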

Read the Perspective in the Medical Journal of Australia.

