Artificial intelligence (AI) is moving rapidly from promise to practice in Australian health care. While the potential of AI in medicine has been recognised for many years, technologies such as AI scribes have now become ubiquitous. This transformation into AI-augmented practice extends well beyond medicine, with nursing care and allied health applications emerging on the near horizon too. 

As the potential of AI-based technologies to assist doctors in delivering care to their patients has become clear, Avant Mutual has been proactive in guiding the profession in their adoption. Successful integration into medical practice requires that doctors understand what AI is, what benefits it may offer, and what risks its use may carry. All three are necessary to help Australia’s doctors decide on the safest use of AI in their practice.  

There is enormous potential for AI to improve health care delivery, but any significant change in clinical practice also brings risks. The challenge facing the profession, and those who support it, lies in navigating this evolution safely so that it neither rushes forward blindly nor is left behind. 

While AI has the potential to improve health care delivery, there are also risks that need to be considered (khunkornStudio / Shutterstock).

Risk assessment in health care naturally focusses on the things that could go wrong. With a major technological paradigm shift, such as integration of AI into routine care, we will increasingly need to consider risk from two perspectives: the risks of using AI inappropriately and the risks of not using AI. Indeed, as AI tools prove their reliability and become more widely adopted in medical practice, choosing not to use them could also lead to issues. 

AI is likely to play a key role soon in two critical areas: clinical decision support and workflow management. In diagnostic fields such as radiology, pathology, retinal photography and dermatology, AI systems are already making an impact, demonstrating their ability to enhance interpretation accuracy and identify patterns. With continuing development, AI will almost certainly become available in many other forms of clinical decision support. 

Beyond diagnostics and clinical decision support, another area ripe for innovation is the use of AI-powered chatbots to “help reduce the workload on health care providers, allowing them to focus on more complicated cases that require their expertise.” The UK NHS has tested smartphone apps providing AI chatbots as alternatives to non-emergency phone lines. Similarly, AI-based ‘practice receptionists’ are showing promise in improving patient experience, patient flow, and the automation of routine administrative tasks.

Despite these advances, there is no prospect of AI displacing human doctors, nurses and other health care providers. AI will remain another tool in the clinical toolbox, because safe and effective health care still requires human oversight and judgement. When used appropriately, however, its benefits are clear: AI can improve safety in health care through better diagnostic accuracy, streamlined record-keeping, and the analysis of large datasets that may enable new breakthroughs. Where exactly AI will take health care remains uncertain, but the potential for positive transformation is significant. 

Alongside this optimism must sit an understanding of the potential pitfalls and risks of rapid AI adoption. Caution should allow the considered introduction of the technology rather than put the brakes on progress. Avant has consistently provided guidance on safe AI use, emphasising the need to understand each tool’s limitations, maintain strong practitioner oversight, and ensure that patients consent to and understand the role of AI in their care.  

As the technologies mature, though, another risk consideration emerges that is missing from current discourse. As AI evolves and proves its reliability, could there be risks in choosing not to use it? While some practitioners may remain sceptical of AI, resisting digital evolution entirely will carry its own hazards as the technology becomes routine. 

Addressing this challenge will require the medical community to identify the inflection point at which AI becomes a consistently accepted tool for managing specific clinical conditions or situations.

This point is most imminent in specialties that rely on pattern and image recognition, such as radiology and dermatology, where AI trained on vast datasets can identify clinical abnormalities earlier than human detection allows. Despite this, human clinical judgement remains an essential step to consider the patient’s overall condition, weigh additional clinical factors, and decide whether to act on AI recommendations immediately or take a more conservative approach. 

AI scribes can also enhance documentation and administrative management. Although concerns remain about the sources of data, AI can analyse vast amounts of information to support clinical decision-making beyond individual capacity, comparing data points against countless clinical records without the human fallibilities of tiredness, distraction or bias. Errors nonetheless remain common, so clinicians must review AI outputs before accepting them into the medical record.

Could AI-assisted decision support become the real-time clinical standard? Will static guidelines yield to dynamic protocols updated as new evidence emerges? This seems increasingly likely as patients expect responsive, informed care that is available around the clock. The path forward for AI in medicine and health care seems exciting and full of potential, but its successful and safe introduction as a tool in routine care will require thoughtful consideration. When considering adopting AI, we must do our due diligence: ask questions, demand evidence, and pilot and monitor carefully.

For the foreseeable future, AI should be considered another available and developing tool: an assistant, not a replacement, and one that needs to be tested. A balance must be struck between the risks of a new and developing technology and the risks of failing to embrace it. As with all medical advances, caution is required, but we should never close our minds to the possibilities offered by innovation. 

Disclosure: Avant is the developer of AI scribe VoiceBox.

Professor Steve Robson is chief medical officer at Avant Mutual. He is also chair of the National Doctors Health & Wellbeing Leadership Alliance, professor of obstetrics and gynaecology at the Australian National University, and a council member of the National Health and Medical Research Council.

Tracy has more than 26 years’ experience in medico-legal complaint management and litigation and is a senior lawyer in Avant’s Advocacy, Education and Research team. She is a key member of the Avant team exploring the medico-legal risks associated with AI in health care.  

Dr Mark Woodrow has worked as an emergency physician for over 30 years, predominantly in a private metropolitan hospital.  He is General Manager – Medical Advisory Services in the medical indemnity division and Deputy Chief Medical Officer at Avant, a council member of ACLM and President of the Medico-Legal Society of Queensland.

The statements or opinions expressed in this article reflect the views of the authors and do not necessarily represent the official policy of the AMA, the MJA or InSight+ unless so stated. 


If you would like to submit an article for consideration, send a Word version to mjainsight-editor@ampco.com.au. 

One thought on “Are we at the point where not using AI in medicine poses risk?”

  1. Ediriweera Desapriya says:

    We must resist the false binary of “AI or no AI.” The real imperative is thoughtful integration. That means:
    • Mandating human oversight for all AI-assisted decisions
    • Training clinicians in AI literacy and critical appraisal
    • Establishing adverse event reporting systems for AI-related harms
    • Requiring transparency and explainability in clinical AI tools
    • Ensuring equity in training data and deployment contexts
    In short, AI is becoming indispensable, but it must remain a tool, not a substitute for clinical judgment. The risk of not using AI is real. But the risk of using it without understanding is greater. Our path forward must be guided by evidence, ethics, and humility.
