AUSTRALIA risks being “left behind” as an innovator in artificial intelligence (AI) in health care, says a leading expert who is calling for greater investment and regulatory rigour in the field.

Professor Enrico Coiera, Director of the Centre for Health Informatics at the Australian Institute of Health Innovation at Macquarie University, said investment in AI in Australia was lagging.

“We are not seeing commensurate investment here in Australia, and many in the field feel we are rapidly being left behind, to become adopters rather than innovators in AI,” Professor Coiera said.

“If that happens, it would be a pity given the excellence of our Australian clinicians, scientists and engineers, the huge commercial opportunities this new industry offers us, and the clear need, post-COVID-19, for Australia to have greater independent capability to build and deploy advanced technologies, given the worsening global uncertainties.”

Professor Coiera’s comments followed the publication of an article in the MJA which queried Australia’s readiness for the arrival of AI.

Professor Joseph Sung of the Chinese University of Hong Kong, and co-authors, wrote that before AI tools could be put into daily use in medicine, “data quality and ownership, transparency in governance, trust-building in black box medicine, and legal responsibility for mishaps” needed to be resolved.

And, in a second article in the same issue of the MJA, Dr Miki Wada of Monash University and co-authors outlined the role of AI in skin cancer diagnosis and management.

Professor Coiera said that as AI was increasingly used in health care, it was uncovering pre-existing gaps in approaches to clinical data, “one of health care’s most precious resources”.

“We have known for many years that the data captured in clinical electronic records is of variable quality, and not at the same standard as, for example, data captured during a clinical trial,” he said, noting that barriers to capturing high-quality data included clinical time pressure and a lack of formal education in the use of electronic systems.

“The recent emphasis on ‘big data’ is in part a response to this problem; the larger the datasets, the more likely we will find signal in the noise.”

Professor Coiera, who also recently co-authored a paper on the potential role for AI in the fight against COVID-19 (published as a preprint in the MJA), said greater understanding of the need to “test and tune” data was also needed to ensure transportability of AI technologies.

“We must always ensure that an algorithm is transportable to our own particular population. If an algorithm comes from a population different from the one it will be used in, then there will need to be local recalibration and testing to ensure the algorithm works as expected for our patients,” Professor Coiera said, adding that this process was not well understood in Australia. “A well designed and developed algorithm built in one place does not immediately work somewhere else. We need to test and tune before we use the AI.”
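To make the “test and tune” idea concrete, the sketch below is an illustration only, not drawn from the MJA articles or Professor Coiera’s work. It assumes an externally developed risk model whose predicted probabilities can be obtained for a local validation cohort; those scores are then recalibrated, Platt-style, against locally observed outcomes, and discrimination and calibration are compared before and after.

```python
# Minimal sketch of local recalibration of an imported risk model.
# Illustrative only; all data below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

def recalibrate(external_probs, local_outcomes):
    """Platt-style recalibration: fit a logistic model on the external
    model's log-odds against locally observed outcomes."""
    eps = 1e-6
    p = np.clip(external_probs, eps, 1 - eps)
    log_odds = np.log(p / (1 - p)).reshape(-1, 1)
    calibrator = LogisticRegression()
    calibrator.fit(log_odds, local_outcomes)
    return calibrator

# --- hypothetical local validation cohort ---
rng = np.random.default_rng(0)
external_probs = rng.uniform(0.05, 0.95, size=500)      # scores from the imported model
local_outcomes = rng.binomial(1, external_probs * 0.6)  # local event rate differs

cal = recalibrate(external_probs, local_outcomes)
log_odds = np.log(external_probs / (1 - external_probs)).reshape(-1, 1)
recalibrated = cal.predict_proba(log_odds)[:, 1]

# Discrimination (AUC) is unchanged by a monotone recalibration,
# but calibration (Brier score) should improve for the local population.
print("AUC   before/after:", roc_auc_score(local_outcomes, external_probs),
      roc_auc_score(local_outcomes, recalibrated))
print("Brier before/after:", brier_score_loss(local_outcomes, external_probs),
      brier_score_loss(local_outcomes, recalibrated))
```

A simple recalibration of this kind can correct for a different local event rate, but larger shifts in case mix or data capture may require retraining and re-testing rather than tuning alone.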

Professor Peter Soyer, Chair in Dermatology at the University of Queensland, said he was an “absolute believer in AI”.

“It will substantially change the way we practise medicine in years to come and not just in dermatology but in many, if not all, medical disciplines,” he said.

But, Professor Soyer said, it will take time.

“The implementation of AI will, in my view, take many years and will need to be done wisely.”

In their MJA article, Wada and colleagues wrote that although there were several machine learning algorithms that could be used in the dermatology setting, convolutional neural networks were the most promising.
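As an illustration of the kind of model the authors refer to, a minimal convolutional neural network for binary lesion classification might look like the sketch below in PyTorch. This is not the authors’ system; real dermatology classifiers typically use much larger, pretrained architectures.

```python
# Minimal sketch of a CNN for binary skin-lesion classification
# (e.g. melanoma vs benign). Assumes 3-channel images resized to 128x128.
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Three convolution/pooling stages extract image features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A small fully connected head maps features to class scores.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Forward pass on a dummy batch of 4 RGB images (128x128 pixels).
model = LesionCNN()
logits = model(torch.randn(4, 3, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```

In practice, published skin cancer classifiers usually start from networks pretrained on large general image datasets and fine-tune them on dermoscopic images, rather than training a small network like this from scratch.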

They also wrote that there were several points in the clinical pathway where AI could be used, including as a triage tool before clinician assessment, and for a “second opinion” after assessment to improve diagnostic sensitivity and reduce unnecessary biopsies.

“The latter is more closely aligned with current clinical workflows and therefore likely to be preferred while the field matures. There is potential for over-reliance on artificial intelligence systems in both scenarios.”

Professor Soyer agreed with the authors’ suggestion that deskilling was a risk with the increasing uptake of AI in dermatology.

“If you are [always] relying on AI, at the end of the day, you won’t know how to do it yourself. And there may be a bug in the system, or you may have an internet failure and what will you do then?” he said.

Professor Soyer also said that the accuracy of the data input was key to ensuring accurate output.

“Most of the studies have been done in Caucasians, they have not been done on dark skin. So, if you then have a person with dark skin, the system may lead you astray,” he said. “And even in Caucasian skin, if the image is so rare that the system hasn’t seen it before, the system may wrongly interpret it. So, this is where deskilling is a real threat and the medical profession will need to think hard how to deal with this.”

Professor Soyer cited a letter that he co-authored, published in Nature Medicine, that found that good quality AI-based support of clinical decision making improved diagnostic accuracy over that of either AI or physicians alone.

“What we have to learn is how AI and humans can work together in a very wise way and this will be the challenge in the years to come,” Professor Soyer told InSight+.

The MJA articles both underlined the critical importance of developing a robust regulatory framework to ensure the safety of AI technologies.

Professor Coiera said there was an “urgent need internationally” to develop robust guidelines for clinical development, testing and certification of AI.

Certification was important when technologies were marketed, he said, but also when these technologies were locally deployed, given the challenges of transportability.

“Most nations, including Australia, have struggled with regulation of digital health in general, as software has often not been considered a medical device.”

Professor Coiera said the challenges of AI safety and quality were a central focus of the Australian Alliance for Artificial Intelligence in Healthcare, an alliance of more than 90 organisations.

The Alliance is currently preparing a White Paper on AI safety, quality, and ethics to inform the national discussion, he said.

“The challenge is that there is no single agency in Australia that covers [AI],” Professor Coiera said, noting that the Therapeutic Goods Administration, the Australian Commission on Safety and Quality in Health Care, Standards Australia and the Australian Digital Health Agency all had stakes in this field.

“That nobody owns it completely is an issue. They are all doing their best, but we hope to be able to get them all talking because AI is not in the future, it is here now,” he said, adding that he had been writing on the need to regulate digital health platforms for almost 20 years.

Professor Soyer said: “Technology is marching much faster than the regulatory and legal agencies.”
