AI screening could predict the risk of breast cancer early
An artificial intelligence algorithm used to detect breast cancer in screening scans can predict women’s cancer risk over the next four years, Australian research has found.
View this article online at www.insightplus.mja.com.au
Public trust in doctors and health services is shifting slowly, but in the wrong direction in both Australia and the US. At the Australian Ethical Health Alliance (AEHA) symposium in May 2025, the message was clear: as health professionals, we need to take swift action. As panellists speaking on the rise of misinformation and disinformation, we explored how trust has always been the ethical currency of medicine; once you start to spend it, everything else, including vaccination, screening, shared decision-making and even discharge planning to aged care, becomes harder.
Concerns are rising over patients’ use of AI tools such as ChatGPT for health information, often with little awareness of the risks.
The consultation room has a new participant. It arrives when patients pull up ChatGPT-generated symptom analyses. It appears when we use GenAI to check drug interactions or draft letters. And it's always listening when ambient AI scribes document our conversations. Whether acknowledged or not, GenAI is fundamentally changing what happens in the consultation room.
Artificial intelligence (AI) is moving rapidly from promise to practice in Australian health care. While the potential of AI in medicine has been recognised for several years, technologies such as scribes have now become ubiquitous. This transformation into AI-augmented practice extends well beyond medicine, with applications in nursing care and allied health emerging on the near horizon too.
Amidst widespread promotion of artificial intelligence (AI), the environmental impacts are not receiving enough scrutiny, including from the health sector, writes Jason Staines.
An increasing number of people are using artificial intelligence (AI) chatbots such as ChatGPT for mental health support. There are both benefits and risks associated with the use of AI for mental health, yet there are no policies or guidelines to inform the safe use of AI chatbots for mental health purposes.
AI scribes promise multiple benefits for general practitioners and their patients, but we may risk losing valuable human connection in the process, writes Dr Elizabeth Deveny.
Late last year, an AI-generated video faked the image of a well-known endocrinologist and spread dangerous medical advice on Facebook. InSight+ spoke to industry professionals about the effect these fakes have on health and trust.