An increasing number of people are using artificial intelligence (AI) chatbots such as ChatGPT for mental health support. There are both benefits and risks associated with the use of AI for mental health. However, there are no policies or guidelines to inform the safe use of AI chatbots for mental health purposes.

Over 4.3 million people (or 1 in 5 Australians) experienced a mental health condition in the previous 12 months. Access to mental health care is hindered by high costs, long wait times, and a shortage of trained specialists.

An increasing number of people are turning to generative artificial intelligence (AI) tools such as chatbots for mental health support, advice and companionship. Examples of general-purpose AI chatbots — which were not designed for mental health purposes but are increasingly being used in this way — include ChatGPT, Claude, and Replika. While health care practitioners are aware of “Doctor Google”, they may not be aware that their patients are turning to AI chatbots for mental health information, clinical advice, and real-time support.

General purpose AI chatbots, such as ChatGPT, were not designed to provide mental health advice and support (Diego Thomazini / Shutterstock).

The potential benefits of AI chatbots

Unlike purpose-built AI chatbots for mental health (eg, Wysa, Woebot, Therabot), which draw on evidence-based psychological therapies and include in-built safeguards, general purpose AI chatbots, such as ChatGPT, were not designed to provide mental health advice and support.

Nevertheless, the general purpose AI chatbots being used for mental health support in Australia may offer advantages. They are easily accessible and provide free, immediate, 24/7 access to information and mental health support. These tools can help individuals learn about mental health conditions, identify new coping strategies, build self-awareness, manage stress, and discover fresh perspectives on challenging situations. Because of their conversational tone and ability to simulate empathetic and supportive responses, users report feeling heard and understood, and people may find themselves asking AI chatbots questions they feel too embarrassed to ask a health professional.

Potential risks associated with AI chatbots

While there are benefits, there are also real, serious, and potentially devastating risks for people who use unregulated AI chatbots for mental health support. Chatbots are known to “hallucinate” and may provide information that is incorrect, misleading, or biased. This information may sound plausible, but the chatbot has no understanding of context, may omit relevant details, and does not check its facts.

Without appropriate oversight by a trained professional, a person in distress may be unable to critically evaluate the information a chatbot provides. This is particularly true when the information sounds convincing and reinforces what the person wants to hear.

We do not know how many people have been exposed to harmful advice from chatbots because adverse events are not routinely monitored or reported. Even AI tools designed for mental health purposes may have the potential to cause harm: for example, the chatbot “Tessa” was removed by the US National Eating Disorders Association (NEDA) after it was found to give “harmful advice” to people with eating disorders.

General purpose chatbots such as ChatGPT are widely used for multiple purposes and tend to be enthusiastic, effusive, and agreeable. They generally provide answers without critique (unless prompted). There is a high risk that someone experiencing delusional beliefs who turns to an AI chatbot may receive information that reinforces rather than challenges their views. Likewise, an AI chatbot may endorse harmful behaviours related to suicidality or drug use. This is because AI chatbots rely on the input provided by a user but do not have the ability to gauge whether this information is reliable or true.

What has been termed “ChatGPT-induced psychosis” can send people spiralling into psychological breakdowns, exposing them to further risks. It may be comforting for a person to receive empathetic feedback from a chatbot, but what they may need is careful and thorough assessment by a qualified health professional, appropriate guidance, and access to evidence-based treatment. Reliance on AI for mental health support may delay people from seeking professional help and accessing effective treatments that lead to recovery.

Of specific concern is the lack of appropriate oversight or safeguards for people who may be at risk of harm to themselves or others. AI chatbots were not designed to manage crisis situations and may provide inappropriate responses. Chatbots are not subject to the same rigorous professional standards (eg, ethical guidelines) as health professionals. Mental health data contains information that is private, personal, and potentially stigmatising, and there are privacy risks associated with the use of chatbots. Most of these tools were not developed in Australia, they are not regulated, and there is no accountability if things go wrong.

Looking forward

Given the balance of risks and benefits with the use of AI chatbots for mental health, we encourage cautious optimism and more rigorous oversight. These tools have the potential to facilitate access to mental health information, provide practical advice, and encourage people to access mental health treatment. However, we need this information and advice to be reliable and valid, and we need this access to be secure.

Conclusion

The AI landscape is rapidly evolving, with chatbots becoming increasingly sophisticated and human-like. There is an urgent need for education (AI literacy) for medical professionals and the wider community, as well as for policy, regulation, guidelines, and safeguards to limit the likelihood of harm. Medical professionals should be aware that their patients may be using AI chatbots to support their mental health. At present, however, AI chatbots cannot replace trained mental health professionals, and they do have the potential to cause serious harm.

Dr Anthony Joffe is a clinical psychologist and a post-doctoral research fellow at the Black Dog Institute.

Professor Jill Newby is a professor, NHMRC emerging leader, and clinical psychologist at the Black Dog Institute and UNSW Sydney. She is also co-director of the NHMRC Centre of Research Excellence in Depression Treatment Precision.

Dr Kylie Maidment is a post-doctoral fellow in policy research and translation at the Black Dog Institute and UNSW Sydney.

The statements or opinions expressed in this article reflect the views of the authors and do not necessarily represent the official policy of the AMA, the MJA or InSight+ unless so stated. 
