
Artificial intelligence has become part of the everyday sustainability toolkit. It is already reshaping how sustainability and ESG teams work, even though its own footprint is still only partially visible in most disclosures. AI is both a powerful enabler of better sustainability practice and a growing source of energy and water demand that organisations need to manage responsibly. Organisations that proactively embed Responsible AI principles into their operations and communicate transparently about the environmental impact of their AI use will be significantly better positioned than their peers.
From a sustainability perspective, AI is best understood first as a productivity tool, not a silver bullet. Used well, it helps teams where the reporting workload is heaviest.
Scope 3 remains the largest and most complex part of most corporate footprints, with purchased goods and services often dominating total emissions. By compressing the journey from raw data to decision-ready insights, AI can shorten feedback loops, enable faster supplier conversations, and free capacity for real decarbonisation work rather than spreadsheet management. The real value lies in optimising existing processes and shifting human time from manual reporting to strategy and action.
Behind every AI-assisted insight sits a physical infrastructure of servers, cooling systems and power lines. Global data centres consumed around 300–380 TWh of electricity in 2023, and several studies project this could roughly double by 2030, with AI as a major driver of growth. EDNA also estimates that AI data centre consumption alone could reach 200–400 TWh in 2030 (35–50% of projected overall data centre energy use in that year). According to the International Energy Agency, data centres account for 1.5% of global electricity consumption, and their electricity demand is expected to more than double by 2030. In the EU they account for around 3% of total electricity demand, though this varies widely between countries, reaching over 20% in Ireland. This forces grid operators and policymakers to treat digital infrastructure as a material energy user, not a marginal load.
AI-specific workloads are still a minority share of total data-centre consumption, but they are scaling fast. Scenario analyses suggest AI could account for 20–50% of global data-centre power use by 2030, depending on uptake and efficiency gains. Training cutting-edge models is especially intensive: estimates for earlier-generation large language models run into hundreds of tonnes of CO₂ for a single training cycle, comparable to the annual emissions of dozens of passenger cars. At the user level, typical estimates put the emissions of a single large language model query at roughly 4 gCO₂e once training and inference are combined, several times the footprint of a standard web search.
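As a rough sense of scale, the sketch below multiplies that per-query figure out to an organisational level. The headcount and usage numbers are illustrative assumptions, not benchmarks; only the per-query estimate comes from the figures above.

```python
# Back-of-envelope scale-up of the ~4 gCO2e-per-query estimate cited above.
# Headcount, query volume and working days are illustrative assumptions.

G_PER_QUERY = 4.0      # gCO2e per LLM query (training + inference combined)
EMPLOYEES = 1_000      # hypothetical organisation size
QUERIES_PER_DAY = 20   # assumed queries per employee per working day
WORKDAYS = 230         # assumed working days per year

annual_kg = G_PER_QUERY * EMPLOYEES * QUERIES_PER_DAY * WORKDAYS / 1_000
print(f"~{annual_kg:,.0f} kgCO2e per year")  # ~18,400 kgCO2e, i.e. ~18 tCO2e
```

Under these assumptions the total is modest but not negligible, and it scales linearly with usage, which is exactly why untracked growth matters.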
Despite this growing footprint, most companies cannot point to a clean “AI” line item in their carbon accounts. Very few organisations train their own models in owned data centres; instead they consume AI through cloud providers, productivity suites and SaaS tools. In greenhouse gas accounting terms, those impacts usually sit inside purchased cloud, software and IT services in Scope 3, often estimated using spend-based methods that blend AI with everything from email hosting to HR systems.
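The sketch below illustrates the spend-based method in its simplest form. The spend figures and the emission factor are placeholders, not published values; the point is that AI never surfaces as its own line item.

```python
# Minimal sketch of the spend-based Scope 3 method described above:
# emissions estimated as spend x an economy-wide emission factor.
# All figures below are hypothetical placeholders for illustration.

SPEND_EUR = {               # assumed annual IT spend by supplier category
    "cloud_and_ai": 400_000,
    "saas_tools": 250_000,
    "email_hosting": 50_000,
}
EF_KG_PER_EUR = 0.11        # assumed spend-based factor for IT services (kgCO2e/EUR)

total_kg = sum(SPEND_EUR.values()) * EF_KG_PER_EUR
print(f"Estimated Scope 3 IT services footprint: ~{total_kg / 1_000:.0f} tCO2e")
# One blended number: a surge in AI usage only shows up if spend changes,
# and even then it is indistinguishable from email hosting or HR systems.
```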
This creates a visibility gap: AI usage can grow rapidly inside a business while reported emissions barely move, because the signal is lost in broad categories or masked by financial fluctuations. Some cloud calculators still omit significant parts of the lifecycle, such as embodied emissions from servers and networking equipment, or do not allocate data-centre footprints down to specific services and customers in a granular way. As regulatory frameworks such as the CSRD push for more transparency on digital and cloud-related emissions, the expectation will shift toward clearer disclosure of the environmental cost of AI-enabled services.
For now, the most defensible approach is transparency rather than false precision: being explicit about where AI is used and what it enables, and acknowledging that its environmental impact is only partially captured in standard metrics.
Energy is only half the story. Most AI workloads run in data centres that also rely heavily on water, directly for cooling and indirectly through power generation. A study published in Patterns in December 2025 estimated that “company-wide metrics from the environmental disclosure of data center operators suggest that AI systems may have a carbon footprint equivalent to that of New York City in 2025, while their water footprint could be in the range of the global annual consumption of bottled water.”
Recent assessments suggest that many data centres use on the order of 1.5–2 litres of water per kWh of IT load for direct cooling, depending on climate, technology and design. For a 100 MW facility running at typical utilisation, that can translate into around 2 million litres of water a day for cooling alone, before accounting for water embedded in electricity production.
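The arithmetic behind that figure is straightforward, as the sketch below shows. The utilisation assumption is illustrative: the reported litres-per-kWh intensities apply to actual IT load, not nameplate capacity.

```python
# Rough daily water-use estimate for a data centre, using the figures above.
# Assumptions (illustrative): 100 MW nameplate IT capacity, ~50% average
# utilisation, and 1.5-2.0 litres of cooling water per kWh of IT load.

NAMEPLATE_MW = 100           # nameplate IT capacity
UTILISATION = 0.5            # assumed average load factor
LITRES_PER_KWH = (1.5, 2.0)  # direct cooling water intensity range

kwh_per_day = NAMEPLATE_MW * 1_000 * UTILISATION * 24  # 1.2 million kWh/day

for intensity in LITRES_PER_KWH:
    litres = kwh_per_day * intensity
    print(f"{intensity} L/kWh -> {litres / 1e6:.1f} million litres/day")

# ~1.8-2.4 million litres/day, consistent with the ~2 million figure above,
# and before counting water embedded in electricity generation.
```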
In the US, total data-centre water use is already substantial. Studies estimate that in 2023, US data centres consumed on the order of tens of billions of gallons of water directly for cooling and hundreds of billions of gallons indirectly via the power system, with both figures expected to at least double by the late 2020s if growth continues unchecked. As AI workloads drive denser, higher-performance facilities, several analyses project that global AI-related water withdrawals could reach billions of cubic metres per year, comparable to the annual water use of entire mid-sized countries.
Where those facilities are built matters just as much as how much water they use. A large share of existing and planned US data centres are located in regions facing “high” or “extremely high” water stress, particularly in Western and Southwestern states. Recent reporting indicates that roughly 40% of US data centres, and an even higher share of the newest, AI-focused projects, are in water-stressed basins in states such as Arizona, California, Texas and parts of Virginia and Illinois.
The local consequences are already visible. In some Arizona communities, data-centre growth has coincided with restrictions on agricultural water use and household supply concerns, intensifying debates over how limited groundwater should be allocated. In The Dalles, Oregon, Google’s data centres have been reported to account for a significant share of municipal water consumption, with use rising sharply over just a few years in an area that has otherwise limited new industrial water users. AI, in other words, is now part of local water politics.
Given this backdrop, the question for sustainability teams is not “AI: yes or no?” but “AI: where, how and for what?”. A grounded approach includes mapping where AI already sits in workflows, setting guardrails for how it is used, and treating its energy and water footprint as an explicit part of the organisation’s environmental strategy.
Over the next few years, the gap between actual AI use and AI reporting will narrow, as methodologies for allocating data-centre emissions and water use to specific services become more robust and as regulations demand more detail. Organisations that have already mapped where AI sits in their workflows, set guardrails, and framed AI as a conscious part of their environmental strategy will be in a much stronger position than those treating it as an invisible, “free” layer of intelligence.
In that sense, the test is simple: use AI to make sustainability work faster, smarter and more scalable, while staying honest about the carbon and water it consumes, and intentional about the real-world systems it depends on.