SUICIDE and mental illness pose a significant burden in Australia and worldwide. Prime Minister Scott Morrison has made reducing the toll of suicide a key priority, announcing his aim to drive the suicide toll towards zero. While it may be impossible to completely eliminate suicide, it should be possible to improve prediction and prevention through better analytical tools.

Yet prediction of suicide risk continues to present a challenge for doctors and traditional epidemiological studies. This is due to the complex factors that underpin suicide and the difficulty of identifying a small number of at-risk individuals within a large group who share similar risk factors. A landmark meta-analysis by Franklin and colleagues, spanning 365 studies over 50 years, found that prediction of suicide was only slightly better than chance for all outcomes, and that this predictive ability has not improved across 50 years of research.

However, there is an emerging body of evidence suggesting that artificial intelligence (AI) and data science may be effective tools in predicting and preventing suicide. Two potential uses have been suggested: medical suicide prediction and social suicide prediction. Medical suicide prediction involves AI being deployed as a real-time decision support tool to assist clinicians in identifying patients at risk of suicide. Social suicide prediction involves analysis of behaviour on social media, smartphone applications and other online sources to identify those at risk of suicide.

With the proliferation of electronic medical records, there is now a wealth of health data available. When linked with other data sources, analysis of these complex sets of information (known colloquially as “big data”) can provide a snapshot of the biological, social and psychological state of a person at a given point in time. By processing big data through layered mathematical models (AI algorithms), machines can learn to detect patterns that are indecipherable using traditional forms of biostatistics. As such, AI is well positioned to address the challenge of navigating big data for suicide prevention.
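To make the idea of a “layered mathematical model” concrete, the sketch below (in Python, using the open-source scikit-learn library) trains a small multilayer neural network on entirely synthetic data. It is illustrative only: the features, outcome and model settings are invented for this example and bear no relation to any clinical tool.

    # Minimal, illustrative sketch: a small layered model ("multilayer perceptron")
    # trained on synthetic, randomly generated features. It shows the general idea of
    # learning patterns from linked tabular data; the data and outcome are entirely
    # hypothetical and this is not a clinical tool.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 10))  # stand-in for linked demographic and clinical features
    # Invented outcome that depends on a couple of the synthetic features plus noise
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=n) > 1.5).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    model = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
    )
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))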

Results across multiple studies indicate that AI consistently outperforms doctors at predicting suicide deaths and suicide attempts, suggesting a promising clinical application for AI-based medical suicide prediction. In 2018, a UK study by Del Pozo-Banos and colleagues used artificial neural networks (a type of machine learning technique) to analyse routinely collected information in electronic medical records to assess suicide risk in patients attending health services for any reason. Using only electronic medical record and hospital data from the 5 years prior to a patient’s death by suicide, the model distinguished suicide cases from matched controls (ie, whether or not a patient had died by suicide) with an accuracy of over 73%.
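As a purely illustrative aside (and not a description of the actual pipeline used by Del Pozo-Banos and colleagues), the sketch below shows how routinely collected record events from a fixed time window might be aggregated into per-patient count features that a machine learning model could then learn from. All patient identifiers and event codes here are invented.

    # Illustrative only: turning routinely collected record events from a time window
    # into per-patient count features. The records and codes are invented; this is not
    # the method of any published study.
    import pandas as pd

    events = pd.DataFrame({
        "patient_id": [1, 1, 1, 2, 2, 3],
        "event_code": ["ED_attendance", "depression_dx", "ED_attendance",
                       "self_harm_presentation", "GP_visit", "GP_visit"],
    })

    # One row per patient, one column per event type, values = counts over the window
    features = pd.crosstab(events["patient_id"], events["event_code"])
    print(features)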

In addition, a growing number of researchers and technology companies are using AI to monitor suicide risk through online activity. This builds on emerging evidence that language patterns on social media and patterns of smartphone use can indicate psychiatric issues. Numerous studies have demonstrated the utility of social media data for predicting suicide risk. The prevalence of suicide, in conjunction with the difficulty of identifying those in need of support, has led to the development of social suicide prediction efforts by companies that accumulate user data.
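For illustration only, the sketch below shows the general shape of a text-based approach: a simple classifier trained on a handful of invented posts with invented labels. Real research systems and platform tools are far more sophisticated and are trained and validated on very different data.

    # Minimal, illustrative sketch of a text classifier over invented posts.
    # Labels are invented for illustration (1 = concerning language, 0 = not).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    posts = [
        "had a great weekend with friends",
        "feeling hopeless and can't see a way out",
        "excited about the new job",
        "i can't cope anymore, nothing helps",
    ]
    labels = [0, 1, 0, 1]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(posts, labels)
    print(clf.predict(["everything feels hopeless lately"]))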

For example, Facebook has one of the most public social suicide prediction programs. Various types of prevention tools have been available on the platform for more than 10 years. Facebook has also developed a photo identification AI tool for Instagram to assist with these efforts, although little further information has been published about this tool.

The opportunity clearly exists to leverage advances in medical and social suicide prediction tools to improve identification of Australians at risk of suicide and aid existing investment in suicide prevention. AI may increase our understanding of suicide prediction and potentially save lives. However, it also introduces risks, including the stigmatisation of people with mental illness, the transfer of health data to third parties (such as insurers and advertisers), and unnecessary confinement and treatment, among others.

In addition, AI tools are unlikely to achieve 100% accuracy – as such, it is inevitable that false positives and false negatives will occur. In the case of false positives, people may be exposed to unnecessary psychological harms, as well as potential stigmatisation by members of the public and/or the medical community. Positive results also require interpretation in conjunction with expert clinical assessment to prevent initiation of unnecessary treatment or involuntary detention.
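A simple worked example, using hypothetical numbers, illustrates why this is unavoidable when the outcome being predicted is rare: even a tool with an assumed 90% sensitivity and 90% specificity, applied to a group in which 0.5% of people will go on to attempt suicide, would yield a positive predictive value of only about 4%, meaning the large majority of people flagged would be false positives.

    # Hypothetical, illustrative numbers only: why false positives are unavoidable
    # when the predicted outcome is rare, even for an apparently accurate tool.
    sensitivity = 0.90   # assumed proportion of true cases the tool flags
    specificity = 0.90   # assumed proportion of non-cases the tool correctly does not flag
    prevalence = 0.005   # assumed: 0.5% of the screened group will go on to attempt suicide

    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    ppv = true_positives / (true_positives + false_positives)
    print(f"Positive predictive value: {ppv:.1%}")  # roughly 4% under these assumptions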

One key issue is that the topics of AI and data science are barely mentioned in most medical school curricula, and it is fair to say that awareness and understanding of these technologies are lower among older generations of medical practitioners than among younger generations. There is a knowledge gap among key users – namely psychiatrists, psychologists and administrators – about how AI could fit into suicide prediction and prevention. Given the likely impact of these technologies on psychiatry – as well as other areas of medicine such as radiology, dermatology and surgery – it is important that AI and data literacy are included in the education of doctors and other health professionals. Experienced doctors could be upskilled as part of continuing professional education efforts.

In conclusion, AI and data science have shown great promise in predicting suicide risk through medical and social suicide prediction tools. Pending further research to confirm these findings, such technologies could be valuable in identifying those at risk of suicide. These tools have the potential to save Australian lives, but both medical and social suicide prediction also raise important ethical considerations.

Promoting the development of medical suicide prediction tools may be relatively uncontroversial, as long as researchers abide by existing legislation and ethical approval processes. This can be achieved by adhering to an ethical framework, developing strong technical safeguards for sensitive suicide risk data and clearly communicating the potential benefits.

However, social suicide prediction is a more controversial issue, given difficult ethical questions surrounding consent, transparency, and how to intervene. Effective regulation is required to strike a balance between using data from social platforms – data that are nearly impossible to find elsewhere – to advance the public good, and protecting transparency, security and autonomy for the community.

Dr Daniel D’Hotman is an Australian Rhodes Scholar, exploring treatment options for drug offenders in the criminal justice system. His ongoing interests include economics, artificial intelligence, political philosophy, effective altruism, public policy, and normative and practical ethics.

Professor Erwin Loh is the Group Chief Medical Officer of St Vincent’s Health Australia.


If this article has raised issues for you, help is available at:

Doctors’ Health Advisory Service (http://dhas.org.au):
NSW and ACT … 02 9437 6552
NT and SA … 08 8366 0250
Queensland … 07 3833 4352
Tasmania and Victoria … 03 9280 8712 http://www.vdhp.org.au
WA … 08 9321 3098
New Zealand … 0800 471 2654

Medical Benevolent Society (https://www.mbansw.org.au/)

AMA lists of GPs willing to see junior doctors (https://www.doctorportal.com.au/doctorshealth/)

Lifeline on 13 11 14
beyondblue on 1300 224 636
beyondblue Doctors’ health website: https://www.beyondblue.org.au/about-us/our-work-in-improving-workplace-mental-health/health-services-program


The statements or opinions expressed in this article reflect the views of the authors and do not necessarily represent the official policy of the AMA, the MJA or InSight+ unless so stated.
