Amidst widespread promotion of artificial intelligence (AI), the environmental impacts are not receiving enough scrutiny, including from the health sector, writes Jason Staines.

When Hume City Council recently rejected part of a proposed $2 billion data centre precinct in Melbourne’s north, it put the spotlight on the largely overlooked environmental costs of artificial intelligence (AI) and the communities most at risk of bearing these costs.

The council had originally approved planning permits for the Merrifield data centre precinct, but rescinded support for one facility after residents and campaigners raised concerns about energy and water use, local infrastructure, and consultation with Traditional Owners. The backlash may be a sign that policymakers are starting to consider AI’s ecological footprint.

As AI is rolled out in increasingly sensitive areas of public life, including healthcare, policing, and welfare, governments have focused on the need for ethical, safe, and responsible deployment. There are also fierce debates over copyright and AI’s impact on jobs.

As important as these discussions are, environmental consequences have rarely been part of the equation to date.

Missing piece of the ‘responsibility’ puzzle

Governments and stakeholders have been busy discussing how AI might help lift Australia’s sagging productivity; Treasurer Dr Jim Chalmers wrote earlier this month that he expects AI to “completely transform our economy”, and that he is optimistic AI “will be a force for good”.

However, the Treasurer and others promoting AI are less vocal about the technology’s negative externalities.

The Productivity Commission’s interim report, Harnessing data and digital technology, pointed to AI’s potential to “improve service delivery, lift productivity and help solve complex social and environmental challenges”. But it largely overlooked AI’s environmental impacts, saying “few of AI’s risks are wholly new issues” and that higher energy demand is par for the course with “the information technology revolution”.

Likewise, a Therapeutic Goods Administration (TGA) consultation paper on the regulation of AI in medical device software omitted any discussion of environmental impact, despite recommending more transparency and accountability in the way AI tools are assessed and monitored.

The absence matters. As AI becomes embedded in essential services such as healthcare, its environmental footprint becomes not just a technical issue, but a public health and equity concern, particularly for communities already facing water insecurity or climate risk.

In a sector that is trying to decarbonise and reduce its impact, uncritical adoption of AI could prove counterproductive.

There are serious equity questions when governments invest in digital transformation strategies without accounting for the cultural impacts of water-intensive technologies such as AI (Jack Kinny / Shutterstock).

First Nations perspectives

For some First Nations communities, water scarcity is not theoretical; it is a daily reality.

In remote and regional Australia, many First Nations peoples face ongoing systemic barriers to safe, reliable, and culturally appropriate water access. Their needs extend far beyond infrastructure, encompassing deeply held cultural and spiritual connections with water.

Research conducted by CSIRO between 2008 and 2010 in the Daly (NT) and Fitzroy (WA) river catchments was the first of its kind to document Indigenous social and economic values tied to aquatic ecosystems, linking river flows directly to Indigenous livelihoods, resource use, and planning processes.

The Northern Australia Water Resource Assessment reinforces these insights, framing rivers as vessels of sustenance, heritage, and governance, and asserting Traditional Owners as inherently central to water and development planning.

Yet Australia’s AI reform dialogue has mostly omitted these cultural linkages, even when it does consider AI’s consumption of resources. In the context of AI-powered healthcare, this omission is especially troubling.

Innovations celebrated for improving diagnostics or service delivery often rely on energy- and water-intensive data systems, whose environmental toll is seldom disclosed or evaluated through an equity lens. When AI is embedded in healthcare services for Indigenous populations, with no accounting for its resource footprint, those least consulted risk bearing the heaviest cost.

This raises serious equity questions when governments invest in digital transformation strategies without accounting for water-intensive technologies such as AI.

As the United Nations Environment Programme has noted, policymakers must ensure the social, ethical, and environmental aspects of AI use are considered, not just the economic benefits.

“We need to make sure the net effect of AI on the planet is positive before we deploy the technology at scale,” said Golestan (Sally) Radwan, Chief Digital Officer of the United Nations Environment Programme.

Just how thirsty is AI?

Data centres remain one of the fastest-growing consumers of global electricity, using approximately 460 terawatt‑hours in 2022, and projected to more than double by 2026 with increasing AI and cryptocurrency activity.

These facilities often depend on water-intensive cooling systems, such as evaporative methods, which can consume large volumes of potable water and exacerbate stress on local supply. With AI workloads driving higher server densities and increased heat output, water demand for cooling is rising sharply, especially for hyperscale data centres, making water scarcity a growing operational risk.

For context, a study from the University of California, Riverside calculated that training GPT‑3 in Microsoft’s advanced US data centres evaporated about 700,000 litres of clean freshwater, a sobering figure for a single model’s development phase.

AI is being promoted as a climate solution, through better modelling, emissions tracking, and even water management optimisation. But the industry’s own resource use can directly undermine those goals.

As the OECD notes: “AI-enabled products and services are creating significant efficiency gains, helping to manage energy systems and achieve the deep cuts in greenhouse gas (GHG) emissions needed to meet net-zero targets. However, training and deploying AI systems can require massive amounts of computational resources with their own environmental impacts.”

According to one report, data centres in the US consumed about 4.4 percent of the country’s electricity in 2023, a share that could nearly triple to 12 percent by 2028. Meanwhile, Google’s US data centres went from using 12.7 billion litres of cooling water in 2021 to over 30 billion litres just three years later, and UC Riverside estimates that running just 20 to 50 ChatGPT queries uses roughly half a litre of fresh water.

In response, the global tech sector has invested heavily in green branding. Microsoft, for example, has publicly committed to being “carbon negative and water positive by 2030”.

Notable absence

Australia’s healthcare system is rapidly adopting AI across clinical, administrative, and operational domains.

From diagnostic imaging to digital scribes, clinical decision support, and personalised treatment plans, AI is being held up as a core enabler of future-ready care. Federal reports, such as Our Gen AI Transition, point to AI’s potential to improve efficiency and free up clinicians for more patient-centred work.

But that optimism comes with a caveat: the integration of AI into healthcare is unfolding with limited consideration of its environmental toll. The healthcare sector is already one of the most resource-intensive in Australia, responsible for around seven percent of national greenhouse gas emissions. AI risks adding a new layer of resource demand.

While regulatory bodies are beginning to grapple with questions of safety, accountability, and clinical transparency, environmental impacts remain conspicuously absent from most discussions.

The TGA, for instance, has flagged a need for clearer regulation of AI in medical software, noting that some tools, such as generative AI scribes, may already be operating outside existing rules if they suggest diagnoses or treatments without approval. Yet neither the TGA’s consultation paper nor its updated guidance documents meaningfully address the carbon or water costs of these tools.

According to Consumer Health Forum CEO Dr Elizabeth Deveny, the TGA’s review surfaced critical issues around trust, transparency, and consent, from hidden AI tools embedded in routine software, to confusion about who is responsible when things go wrong.

She notes: “Trust is the real product being managed.” Yet environmental transparency is foundational to that trust, particularly when AI is deployed into hospitals and clinics already experiencing the impacts of climate and infrastructure strain.

Equally important is the broader policy context. A Deeble Institute Perspectives Brief cautions that AI’s success in healthcare hinges on transparent and nationally consistent implementation frameworks, shaped through co-design with clinicians and consumers.

But such frameworks must also consider the material cost of AI, not just its clinical or administrative promise. Otherwise, we risk solving one set of problems, such as workforce strain or wait times, while silently compounding others, including water insecurity, emissions, and energy grid pressure.

Global pressure is building

In Europe, data centre water use is already triggering regulatory scrutiny. The EU is finalising a Water Resilience Strategy that will impose usage limits on tech companies, with a focus on AI-related growth.

“The IT sector is suddenly coming to Brussels and saying we need a lot of high-quality water,” said Sergiy Moroz of the European Environment Bureau. “Farmers are coming and saying, look, we cannot grow the food without water.”

Even the United Nations has weighed in, stating bluntly: “AI has an environmental problem”.

The UNEP emphasises that AI strategies must move beyond simple transparency, urging countries to “integrate sustainability goals into their digitalization and AI strategies”.

The new Coalition for Environmentally Sustainable AI, co‑led by the UNEP, brings together governments, academia, industry and civil society to ensure that “the net effect of AI on the planet is positive”.

Communities’ concerns

The Hume Council’s data centre decision is not an isolated objection. As Australia rapidly expands its digital infrastructure to support AI and other emerging technologies, the communities asked to host that infrastructure will increasingly demand to be heard.

Data centres have a real and often disruptive presence: high heat output, constant noise from cooling systems, diesel backup generators, as well as their heavy water and energy use. Once operational, they offer few jobs and limited direct benefit to the communities that surround them.

Late last year, the Senate Select Committee on Adopting Artificial Intelligence observed that stakeholders provided extensive evidence on AI’s environmental footprint, from energy use and greenhouse gas emissions to water consumption, and also recognised AI’s potential to help mitigate these same challenges.

Yet, despite this, only one of the report’s 13 recommendations addressed the environment, and even that was disappointingly vague:

“That the Australian Government take a coordinated, holistic approach to managing the growth of AI infrastructure in Australia to ensure that growth is sustainable, delivers value for Australians and is in the national interest.”

As Dr Bronwyn Cumbo, a transdisciplinary social researcher at the University of Technology Sydney, writes in The Conversation, Australia has a unique opportunity to embed genuine community participation in the design and planning of its digital infrastructure.

“To avoid amplifying the social inequities and environmental challenges of data centres,” she argues, “the tech industry and governments across Australia need to include the communities who will live alongside these crucial pieces of digital infrastructure.”

Public trust in AI cannot be divorced from the physical and environmental contexts in which it operates. If the benefits of AI are to be shared, then the burdens, from emissions and water use to noise and land occupation, must be acknowledged and addressed. This is especially true in healthcare, where ethical use and public confidence are paramount.

The Hume Council vote is a reminder that local communities are paying attention. Whether policymakers and those with an interest in promoting wider uptake of AI are listening is another matter. Likewise, there are questions about whether the health sector is doing enough to investigate and highlight the potential environmental impacts.

Jason Staines is a communications consultant with a background spanning journalism, government, and strategic advisory roles. He has reported for outlets including AAP, Dow Jones, The Sydney Morning Herald and The Age, and later worked in government as a Senior Research Officer at the Australian Treasury’s post in New Delhi and as an analyst in Canberra. He holds a Master of International Relations from the University of Sydney and a Bachelor of Arts (Communication) from the University of Technology Sydney.

The article was written in his capacity as a Croakey editor and journalist.

This article was originally published by Croakey.

Subscribe to the free InSight+ weekly newsletter here. It is available to all readers, not just registered medical practitioners.
