Artificial intelligence has enormous potential for medicine, but this radical new technology also heralds a substantial disruption to professional roles and clinical relationships, both technical and humanistic, writes Canberra GP Dr Scott Mills.

Growing up I had a fascination for all things science and science fiction.

My youthful imaginings were of a futurist utopia – conscientious robots tending to our every need and silver highways of hover cars bending through our skies – one that always seemed to be only a few decades away.

As it turns out, the robots did finally come, but not in the way I imagined.

Far from a world of tin-man butlers and labourer automatons, we are instead faced with an advancing horizon of machines that can do our thinking for us.

And like a cynical plot twist straight from an Asimov novel, we are destined to be not the programmers directing a robot army, but the headless tools operated by a giant artificial brain loaded into a cloud.

For the medical profession, the existential threat is perhaps more real than we might like to acknowledge.

As a discipline, medicine has always embraced new technology and the translation of scientific innovation into practice, so the move to the clinical use of machine learning and heuristic tools, such as ChatGPT, is a certainty.

As doctors, we will welcome the efficiencies of workflow and marvel at the optimisation of the risk–benefit ratio, while priding ourselves on a vigilant professional commitment to the safeguards of ethics and human verification.

But all novel technology is by definition unprecedented.

It has a way of catching society off-guard and the changes it produces are rarely as intended.


Synthesis is the new search

The deceptive innocence of ChatGPT and other emerging large language models lies in the present assumption that they behave as a new form of search aggregator: that the information they provide is the synthesis of hours of human search and learning time, delivering an accurate review of the available evidence across hundreds or thousands of sources and millions of data points.

This is true in that ChatGPT effectively inputs “the internet”, and any other material given to it, into its machine learning algorithm and can then accurately synthesise a best-fit answer.

That is to say, if you believe every article ever written on a topic to be true, ChatGPT attempts to mimic the answer you would most likely give to any question asked on that topic.

For medicine, which under best practice conditions operates in a highly regulated, peer-validated information environment, the accuracy of a technology that applies a strict library of reference material is also likely to be high. Machine learning evidence-based protocols are already under development.

Exponential synergy between the advancing domains of natural language processing, for eliciting patient data, and rules-based expert systems, for interpreting them, is an intuitive fit for the future of medical decision-making.
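To picture that pairing, here is a deliberately toy sketch in Python. It is entirely hypothetical and not drawn from any system mentioned in this article: the "language" front end is faked with simple keyword matching, and the "rules" are invented for illustration, not clinical guidance. A stand-in extraction step pulls structured findings from a patient's free-text message, and a stand-in rules engine maps those findings to a triage suggestion.

# Hypothetical sketch only: keyword matching stands in for natural language
# processing, and invented, non-clinical rules stand in for an expert system.
import re

def elicit_patient_data(free_text: str) -> dict:
    """Stand-in NLP front end: pull crude 'findings' from free text."""
    findings = {
        "chest_pain": bool(re.search(r"chest pain", free_text, re.I)),
        "breathless": bool(re.search(r"short(ness)? of breath|breathless", free_text, re.I)),
        "duration_hours": None,
    }
    duration = re.search(r"(\d+)\s*hour", free_text, re.I)
    if duration:
        findings["duration_hours"] = int(duration.group(1))
    return findings

def expert_system(findings: dict) -> str:
    """Stand-in rules-based expert system: illustrative rules, not clinical advice."""
    if findings["chest_pain"] and findings["breathless"]:
        return "Flag for urgent clinician review"
    if findings["chest_pain"] and (findings["duration_hours"] or 0) > 12:
        return "Recommend same-day assessment"
    return "Route to routine triage"

if __name__ == "__main__":
    note = "I've had chest pain for about 3 hours and feel short of breath."
    print(expert_system(elicit_patient_data(note)))

The point of the toy is the division of labour: the free-text conversation is reduced to structured findings, and the "standard of care" lives in the rules that interpret them.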

Eclipsing human creativity

However, to understand the impact of such AI as merely labour and time saving – a faster research assistant or decision-making accelerator for clinical questions – is to miss its prime significance.

The true wildcard peril of such technology lies not in its ability to synthesise from known information, but in its capacity to manipulate information into formats that mimic new human creative content.

It is by design a self-refining technology tasked to maximise appeal to both cognitive logic and emotional persuasion.

It has the capacity to tell stories, and more so, to learn what stories we want to hear.

The future of programs such as ChatGPT is in continually refining human-like responses, so that the user connects with the greatest emotional resonance to the knowledge it selects to communicate.

And while present iterations act to simply close off directly asked question–answer loops, the vast potential gamut of generative AI and autonomous AI agency is only just beginning to be explored.

Speaking at the Frontiers Forum in Switzerland earlier this year, historian and philosopher Yuval Noah Harari made perhaps one of the more salient points in the AI debate when he noted that AI systems have crossed a new threshold as they are now able to shape human culture.

Next generation AI language models could, Harari claims, author our next religion. A terrifying thought, I know.

The human face of healing

At a fundamental level, beyond the pragmatics of knowledge, decision-making and responsibility, doctors act as the human face of healing science.

We are the conduits of public compassion, sanctioned as a profession to bestow the recognition of suffering on behalf of our society. We give comfort, share heartbreak, grant reprieve.

We challenge, encourage and reward our patients.

We seek to know our patients as people so as to best advise them.

Some days it’s a slog and others a joy, but it’s the heart of the job in the room, having these conversations.

A dual threat

Against this backdrop, the existential risk that AI poses is two-fold.

The first is in the automation not only of precision diagnostics, but of subjectively genuine, human-equivalent communication and connection.

A recent cross-sectional study in JAMA demonstrated that ChatGPT's responses to real patient questions outperformed human physicians' answers in both quality and bedside manner, with nearly 4 out of 5 respondents preferring the chatbot response.

For a generation raised to feel and react via text, the future of primary care might well be a 24-hour medical bot that:

  • is convenient, non-judgmental and compellingly accurate
  • offers infinite time, undivided attention and effortless patience, and
  • has learnt not only how to manipulate people, but specifically how to push our exact buttons.

Even the hard sceptics of human mimicry will appreciate that being “good enough at most things and very good at some” is, most of the time, enough to be fit for purpose.

The second threat lies not in language but in the transition toward using massive dataset derivatives to determine all aspects of human clinician activity – a machine-optimised standard of care.

It is the potential for AI to maximise cost-efficiency that encourages its application in surveillance, which in turn translates into an increasing role in directing both the billing and clinical activities of staff.

As I write, ChatGPT is being trialled in hospitals to track performance metrics such as wait times; one step closer to AI being surreptitiously ushered in to cognitively offload scheduling tasks from the clinical workforce.

Given that all inputs are also research data, the AI in the current trial can also suggest questions auditors could ask to satisfy regulatory requirements, and make recommendations to improve key performance indicators (KPIs).

In a brave new hospital run by AI, efficiency metrics and real-time monitoring, do doctors turn into the robots performing physical tasks at a pace assigned by algorithm?

This may seem far-fetched, but as history has shown us, technology introduced with the best intentions does not always create the best social outcomes.  

Dr Mills is a Canberra GP. He has a Master of Public Health and, prior to medicine, a career in commercial design and marketing.

The statements or opinions expressed in this article reflect the views of the authors and do not necessarily represent the official policy of the AMA, the MJA or InSight+ unless so stated. 


3 thoughts on “Medicine caught ‘off-guard’ by artificial intelligence”

  1. Anonymous says:

    Over a decade I have been scanned by the same outfit (which will remain nameless), so they have my data. Yet in the last month when my torso was scanned, the report said my gall bladder was gone; but no, it was still there, never removed. The scan omitted any mention of the hernia on my navel, which is large and was detected in a previous scan. Reading the report, which I decided to obtain myself rather than accept a nod from the GP (for which I would need to pay for another visit just to have the results relayed as “nothing remarkable”), I got the distinct impression that it was AI-generated text, and worse, that someone else’s scan results were mixed in with mine, since they were clearly incorrect. I emailed the scanning outfit expressing both concerns. They protested, of course, denying any AI bot-generated text was involved. What legally can we do?

  2. Fruitarian University says:

    More attention can be paid to details, with access to the latest remedies and reduced waiting for patients.

  3. Anonymous says:

    If you have read medical journals and followed guidelines over many years, you will know that almost 80% of articles are either incorrect or outdated. AI assumes that the information fed into it is accurate, but it may have changed yesterday. Just look at the treatment of hypertension or diabetes over the years. Do not assume that a computer is always correct.
