100. DeepDive: AI and the Environment

This is our 100th episode! A super special look at AI and the Environment: we interviewed 4 experts for this DeepDive episode. We chatted about water stress, the energy usage of AI systems and data centres, using AI for fossil fuel discovery, the geo-political nature of AI, GenAI vs other ML algorithms for energy use, demanding transparency on the energy used for training and operating AI, more AI regulation for carbon consumption, things we can change today like picking renewable hosting solutions and publishing your data, why "responsible AI" must include the environment, considering who the controllers of the technology are and what they want, and more...
Date: 20th of May 2025
Podcast authors: Ben Byford with Hannah Smith, Boris Gamazaychikov, Will Alpine and Mél Hogan
Audio duration: 30:39 | Website plays & downloads: 80
Tags: Energy, Carbon, Fossil Fuels, Politics, Transparency, Power, Regulation, Climate change | Playlists: Special edition, DeepDive, Environment

Episode speakers in order:

Hannah Smith is Director of Operations for Green Web Foundation and co-founder of Green Tech South West.

She has a background in Computer Science. She previously worked as a freelance WordPress developer, and also for the Environment Agency, where she managed large business change projects. She lives in the temperate rainforest in Exmoor National Park, UK.
https://greentechsouthwest.org/
https://www.thegreenwebfoundation.org/publications/report-ai-environmental-impact/
https://www.thegreenwebfoundation.org/news/within-bounds-joint-statement-on-limiting-ais-environmental-impact/


Boris Gamazaychikov is the Head of AI Sustainability at Salesforce and a recognized leader in the intersection of technology and climate action, named to The Independent’s 2024 Climate 100. Boris aims to serve as a bridge between the AI and climate communities, fostering collaboration to make the AI industry sustainable while advancing solutions that align with planetary boundaries. At Salesforce, Boris focuses on reducing the environmental impact of the company’s internal AI operations while working to make the broader AI ecosystem more sustainable. Boris is a frequent speaker, board member, and thought leader on the topic of AI Sustainability and beyond.
With over a decade of experience solving technical environmental challenges, Boris has developed decarbonization strategies for some of the world’s largest companies, reduced the pollution footprint of the Pentagon, and advanced sustainable practices in building material supply chains. He holds a degree in Environmental Engineering from the University of Maryland and continues to bridge his expertise in engineering, climate science, and AI to accelerate the transition to a sustainable future.
https://huggingface.co/spaces/AIEnergyScore/Leaderboard - AI Energy Score project I mentioned


Will Alpine is an AI product management leader working at the intersection of technology, policy, and climate. He is the co-founder of the Enabled Emissions Campaign, advocating for the alignment of technology use with climate science.

During his four-year tenure at Microsoft, he led the development of Responsible AI platform features, co-founded Green Software Engineering initiatives such as the open-source Carbon Aware SDK, and co-authored Microsoft’s Accelerating Sustainability with AI playbook.

Will holds an M.S. in Technology Innovation (Connected Devices) from the University of Washington, a B.S. in Mechanical Engineering from Virginia Commonwealth University, and holds four patents from his time at SolarCity (Tesla Energy).

Outside of work, he finds inspiration in nature, often exploring the rugged landscapes of the Pacific Northwest as a plant-based adventure athlete.


Mél Hogan is Associate Professor of Film & Media at Queen's University (Canada). Her research focuses on data infrastructure as understood from within the contexts of planetary catastrophes and collective anxieties about the future. Host of The Data Fix podcast and Editor of Heliotrope. On Bluesky at @melhogan.bsky.social


Transcription:

Hannah Smith[00:00:01.770]

60% of the world's energy still comes from fossil fuels. Whilst we do have an uptick and an increase in renewables, what we also have is an uptick and an increase in demand. AI is often shown as one of those things forcing that demand. If we've got renewables coming online at a certain pace, and AI demand growing beyond that pace, which I believe it is, we've got a problem. We're not going to be able to get off fossil fuels because we're not able to replace the fossil fuels with renewables. So it's definitely an issue.

Ben Byford:[00:00:36]

That was Hannah Smith, Director of Operations for the Green Web Foundation. And welcome to the Machine Ethics podcast. This is episode 100, a deep dive on AI and the environment. This is an episode I've been hoping to produce for a long time. It was recorded across February 2025, as we gathered four fantastic speakers to help us demystify AI, its impacts, and its opportunities for the environment.

Boris Gamazaychikov[00:01:07.520]

AI, fundamentally, is using compute to produce something. At the core, it's not different from traditional software. I'd say the reason I even have my position is because of the vastness of the latest models that are powering modern AI, so-called generative AI models, these large language models that just use orders of magnitude more compute than any other software in the past has ever done.

Ben Byford:[00:01:38]

That was Boris Gamazaychikov, Head of AI Sustainability at Salesforce. To carry out this amount of processing, AI requires resources. Here's Hannah to explain.

Hannah Smith[00:01:49.630]

The key difference with AI versus other digital technology is it's like 10 times more: 10 times more resources, 10 times more impact. There are three really crucial areas where there are big environmental impacts. Of course, there's energy, which is the one that most people are very aware of. We need electricity to manufacture and power these things. That manufacture part is at least as significant as the usage part, which, again, a lot of people don't realise. When they're using the thing, that's all they're seeing. But somewhere along the line, the servers, the cables, the devices, they all had to get made as well. That takes a lot of energy, or electricity.

Hannah Smith[00:02:35.070]

Then moving on from that part around the manufacture of things, all our digital stuff, the stuff that you and I use, but also the stuff that powers AI in the background, the chips, the servers, all this stuff, that all has to be built from something as well. This is where we think about this other environmental impact of the rare raw materials that need to be mined and processed in order to manufacture this stuff. There's a lot of really dodgy stuff when you start to look at the amount of rare raw materials that are available in the world and when we're predicted to run out of them. We are predicted to run out of quite a few key ones. We already know, or researchers are forecasting, that there aren't enough resources to give everyone equitable access to technology. That's a really big problem.

Hannah Smith[00:03:29.100]

Then the third is around water, which is a really big factor in the environmental impact of digital and AI. Water is needed to manufacture stuff. If you look at the processes of manufacturing semiconductor chips and things like that, it needs vast amounts of incredibly pure water, which is not that easy to always find. Then if we also look at the running of data centres as well, data centres typically use quite a lot of water, and fresh water, to cool the servers. We're also seeing a lot of pushback against data centres being built in drought-prone areas, where they're taking the fresh water away from local populations. One of the things we know with climate change is that water stress is more of a factor. If somewhere is already dry, chances are it's going to be getting drier. And data centres tend to be built near population centres as well. So a bit of a problem.

Boris Gamazaychikov[00:04:32.420]

Google actually just released a lifecycle assessment just a few weeks ago on their TPU, which is essentially their version of the GPU, so their AI hardware. And they found that the embodied carbon, so this is just from the carbon perspective, so the manufacturing of the hardware and of the data centre, is around 10 to 20%, I believe. So roughly in that order of magnitude. And then the rest is the energy, the actual ongoing operational usage, so the actual compute provided rather than embodied. It really is the ongoing computation for training and inference of these AI models that seems to be dominating. Again, this is just a perspective from carbon emissions, and I do think more research needs to be done on other impacts, but I think it is useful to put things in perspective.

Ben Byford:[00:05:36]

Will Alpine, co-founder of the Enabled Emissions Campaign, explains there's more than just the direct carbon impact of the technology.

Will Alpine[00:05:45.740]

You have two different types of impact: the direct impacts and the indirect impacts, and it's a really important distinction to draw. You can think of direct impact as what goes into the tool, for example, how that tool is made or what that tool might run on. Then the indirect impacts would be how a tool is being used. It's important to note that these indirect impacts are far less understood, and they have much higher impact. On the indirect side, you really need to think about how the tool is used. AI can either accelerate or mitigate climate harm. It's especially relevant in high-emission industries such as the oil and gas sector. What I have seen is that one of the biggest use cases of AI today is actually to accelerate fossil fuel expansion. That would be exploration and production. AI helps process staggering amounts of seismic and operational data and makes it really easy for oil and gas companies to stay effective with their work, which really widens the gap between low and high carbon energy sources. It really undermines all of the good sustainability work that's happening globally.

Ben Byford:[00:06:47]

Also concerned with these direct impacts, Mél Hogan, fellow podcaster and Associate Professor of Film and Media at Queen's University, Canada, talks about AI in our current political climate.

Mél Hogan[00:06:59.600]

The way we think of AI right now, and I'm going to put it in air quotes, I guess, for audio purposes, but I think it's really important to put AI in those quotes because it's a marketing term. If we take AI in quotes, like the big AI, the generative AI that everyone is talking about since 2022, it's very hard to just talk about the environmental impacts without talking about the ownership of those means of computational production. Already, we see in the US, in particular, with the recent elections, a quick and efficient turn to fascism in and through these AI technologies, which have a particular politics related to the environment. We will see more extractivism. We will see more marketing really linking energy companies to AI data centres. We will see more explicit advancements to make AI as big, as powerful as possible.

Ben Byford:[00:08:06]

But what about the potential environmental benefits of AI? Here's Boris again.

Boris Gamazaychikov[00:08:11.530]

I think the promise of AI is vast for a number of environmental and other challenges around us. Everything from simplifying complex sets of data, figuring out best actions to take from a complex set of starting points, speeding up science, speeding up potentially material discovery, so new materials that could substitute for some of these critical materials that are out there, to general optimisation and automation. AI is such a general purpose technology. I think there's almost an endless array of opportunities.

Ben Byford:[00:08:55]

Both Will and Boris point out that not all AIs are equal in their usefulness for preserving the environment.

Will Alpine[00:09:02.760]

There are two different types of AI, and I think they're actually being conflated in this discussion. On one hand, you have analytical AI, and on the other, you have generative or agentic AI. The type of AI that's really helping the climate is actually quite different from the type of AI that's consuming a lot of energy. Analytical AI has really high accuracy and low energy requirements, and that's what's really helping the climate crisis. You can think stabilising the grid or integrating renewable energy into the grid. Then you have generative or agentic AI, which is not accurate, but has really high energy consumption requirements, and it's still at a speculative stage. You can think of ChatGPT or any of the associated applications of that.

Boris Gamazaychikov[00:09:45.840]

Very often what we're seeing is that the AI applications that are most well suited for solving environmental challenges are not the same as the most energy-intense general purpose models. At the AI Action Summit, at an event, a researcher noted that for one application, I don't remember exactly what it was, but they found that a thousand-parameter model they developed was more effective than the leading general trillion-parameter model that is out there. That is a really great example of not necessarily throwing the largest hammer at every problem, but instead creating and applying scalpels, so to speak, to certain issues.

Ben Byford:[00:10:38]

Hannah is much less optimistic that AI can help.

Hannah Smith[00:10:42.370]

We talk a lot and we hear a lot from people saying, AI is going to save the climate crisis. AI is good for sustainability. There are no studies, there is no evidence to prove that whatsoever. That is wishful thinking; that is hope that AI will help us with climate change. As I mentioned, in some research situations or perhaps in some situations where you're dealing with large data models, yeah, in the hands of total experts, I think maybe there is a 5% or 10% promise there that might come through. I don't think AI can ever be a climate change solution if it's used to speed up the efficiency with which we get fossil fuels out of the ground. Just that alone just doesn't make any sense. We've got to think about technology and we've got to think about what it's used for. When you look at the promise of AI in terms of helping to solve climate change, and you compare that against the extraction of fossil fuels being made faster and more efficient, these arguments just don't stack up. I really struggle to understand how somebody could honestly be thinking that as a genuine thing.

Hannah Smith[00:12:05.610]

Of course, the other thing I would say on this point is we already have the solutions to climate change. We know what we need to do. We don't need AI, as I say, in the vast majority of cases, I think 5 or 10%, yes, maybe there is some genuine help there. But the vast majority of the problem around climate change is people and politics. I don't see how AI has a place in moving that along. I think that's another facet to think about and question when we're talking about this promise of AI as a sustainability solution.

Ben Byford:[00:12:42]

Mel seems to agree with Hannah and demonstrates the politics inherent in generative AI.

Mél Hogan[00:12:48.270]

When generative AI became really popular, I was impressed at how quickly the critiques in and around the environment came out. I'm thinking of Shaolei Ren, who talked about the water usage of AI data centres in particular and did really amazing, impressive calculations about all that, or Sasha Luccioni and others who measured the computational power and energy required to train large language models. Those things were assessed really quickly. I think where we're at now, in part because of the US elections, but also just a global turn to fascism, we need to be more interested in bigger questions about how the labour of AI gets outsourced to countries that are being recolonised, or the expansion of colonialism and imperialism through the human labour behind AI. Then with that, all these kinds of sacrifice zones that are at the service of AI expansion. I think that's the bigger frame: those immediate measurements of energy, water, land, etc. are really important, but there's also a political economy argument forming around this.

Ben Byford:[00:14:08]

Can AI be used sustainably? And how can we get there? Boris and Will explain.

Boris Gamazaychikov[00:14:16.790]

I think everyone really has a role to play in making AI sustainable. I do believe that AI can be sustainable. I just want to underline that. Right now, there is a race to create the biggest, most powerful model, and there hasn't really been a lot of emphasis on thinking about the sustainability of these solutions. And so I hope that folks can realise that they do have a role to play, whether they are a customer of these companies as an enterprise or an individual user. I think now is a really important time to use your voice and demand this type of transparency, demand more choice in terms of which models are being used to power whatever solution you have. I think it's really important to come together and then really demand action. I think there is a really important role for smart regulation in all of this, and I do think that that can be put in place without slowing innovation down.

Will Alpine[00:15:20.690]

I think one of the first things that any practitioner can do would be to choose the right tool for the job and realise that we don't need a massive data centre build-out to address climate change; we actually already have most of the technology we need. What we need to do is, say, not use AI as a hammer in search of a nail, and really, for example, don't use ChatGPT as a calculator. Just choose the right tool for the job. Oftentimes, analytical AI or machine learning could be one of the best tools at your disposal.

Ben Byford:[00:15:51]

Mél echoes the sentiment that we probably don't need huge models for all tasks.

Mél Hogan[00:15:57.650]

There's the computational power that you are told you require, and I think DeepSeek has challenged this, and I think there have been other models now that have run even more efficiently, cheaper, with less energy than DeepSeek. I think people are challenging the notion that those big tech companies were the only ones able to do AI at this scale. Sam Altman was asking for $7 trillion of investment in nuclear energy and so on to power this thing that required all of that. That's been debunked a little bit. It's incredible that it didn't pop the AI bubble, but I think it has, in some sense, subverted the idea that it requires that much power, in both senses of the meaning of power. I think that's interesting. What it does is maybe return us to other uses of large language models, machine learning, even neural networks probably to a certain extent, and how those things can work for very specific tasks, maybe more tedious tasks, maybe the truly unpleasant things that human labour could be a little bit wasted doing.

Ben Byford:[00:17:12]

What exactly can we do today?

Hannah Smith[00:17:14.900]

I think if you're a developer and you're building AI or you are implementing AI in some way, there are things you can think about with regards to where the energy is coming from for your tools and how efficient you're making those tools as well. Now, I work a lot with emissions reporting and trying to get numbers, estimates from people as to, Okay, what does this AI tool actually need in terms of energy, and what are the environmental impacts? I would ask any AI developer to be thinking about that and trying to measure and publicly show what they think those energy estimates are. There's some really cool work coming out from Hugging Face at the moment around trying to have energy ratings on AI. I do think we're going to be seeing more of that in the future. It's about making things efficient, but there are also choices you can make with regards to what hosting companies you use. You can pick greener hosting companies. The ideal solution is to pick hosting companies that are sited in regions that are already 100% renewable. And then the other thing that you can do is think about, when you're running AI for training purposes, looking for times when the energy demand on the grid is lower.

Hannah Smith[00:18:41.770]

There is a correlation between the energy demand being lower and the carbon intensity, which is a measure of how dirty the electricity is. There's a relationship between energy demand being lower and the energy being cleaner, because obviously you're using your renewables to their maximum capacity. There is also a role that developers can play in being mindful and thinking about those things. But the biggest thing is please publish your data. That is the most useful thing collectively that we could shift and change within this industry at the moment.

Will Alpine[00:19:19.470]

Businesses really need to be transparent and held accountable, such as disclosing their AI-related emissions, as well as any of the risks of this AI boom to their business, their operations, or their shareholders. Companies also need to be held accountable for their climate positioning and the integrity of any corporate sustainability claims. As many of us know, a lot of tech companies have been backsliding on their corporate commitments because of the AI boom, and that's really unfortunate. I think fundamentally, what we need is policy in place. What governance does a company have in place? What landscape is holding them accountable? One thing that employees can do is start to push their company to include environmental considerations in their responsible AI practices. You could start really tactically and say, file AI safety violations when you see harmful uses that don't align with climate science. You can continue to push your employer and just ask, why are you not including this? Because responsible AI must account for harms to people and planet, right?

Mél Hogan[00:20:26.740]

If I was a designer or a company, I would probably say, I'm an AI-free company, and I would certify that in some way as the selling point, rather than say, Here's how you can use AI well or even responsibly or ethically. I think we're not going in that direction. It'd be very hard to make a case for that in its current use and the way it's owned and the way that it's framed for us and the ways that we're asked to invest. It's a lot emotionally, technologically, but also financially, if you think of all the tax dollars that are going to go to fund infrastructure for these things. I think I'm still on the side of opting out, resisting, and pushing back. I am just not convinced that this is truly, genuinely adding anything to anything.

Ben Byford:[00:21:23]

Boris and Will are both working on projects to help improve the situation.

Boris Gamazaychikov[00:21:28.670]

We're seeing more and more regulation where companies are having to disclose their carbon footprint and start to disclose plans to reduce that footprint. If they're procuring AI, either for internal uses or for their external-facing products, without that information, that would be like purchasing a fleet vehicle and not knowing how much gas it would consume. There's a big liability on these companies' scope three emissions right now. It all really starts with transparency. That's why we were excited two weeks ago to launch this initiative called AI Energy Score. This was created in partnership with Hugging Face, Cohere, and Carnegie Mellon, where we put forth a framework for measuring the inference energy of AI models to be able to start having this informed conversation on what model for what task makes the most sense. Because we have a lot of data out there on model performance, but we don't have the data on what the trade-off is in terms of energy and all those other impacts that I started mentioning.

Will Alpine[00:22:42.790]

Currently, I'm a climate co-founder of an advocacy and accountability org. We're quite new. We're pushing to align technology use with climate science. This new organisation that we're creating is the result of many years of work. Four of those were inside Microsoft, really pushing to align its business activities with its stated support for climate science. The realisation came from the fact that one of the biggest use cases of AI today is by the oil and gas industry to find and extract more oil. I take this really personally because I was involved in building the platforms that are being used by these oil and gas companies to expand production. What we realised is that the additional emissions from just a few of these deals were washing away all of the good we were doing with our green software engineering initiatives, and even exceeding, many times over, Microsoft's own operational carbon footprint, and that would include all scopes, including data centres. And so first and foremost, we're a policy advocacy organisation. We're trying to push for policy that aligns use of technology with climate science. But we're also educating the public about the full systemic impacts of AI, and we're building a coalition and really mobilising partners across tech, climate, and investors. You can learn more at www.enabledemissions.com.

Ben Byford:[00:24:09]

To finish, here are some final thoughts from our contributors.

Hannah Smith[00:24:15.960]

We have to critically engage with AI and ask what it's for and what are the long-term implications of us replacing our creativity and intelligence with a machine that's not controlled by us. Because ultimately, I think that's the big issue here. AI right now might feel like a bit of a holiday. The tools are free. Oh, isn't this a wonderful boost for humanity? But we've got to critically ask ourselves, who controls this technology? And what are their goals? Their goals aren't to serve humanity. I think we can see that the big tech companies, their goals are serving their shareholders. They might have in the past pretended or really thought that they were trying to make humanity better. But I think we can very, very clearly see now that that is just absolutely not the case. It's about them making money. What are we prepared to give away for this perceived efficiency? Now, those are very philosophical questions, but I do think people need to stop and think about what they're doing and what problems we're storing up for the future by investing too much of our livelihoods or our processes into these things.

Mél Hogan[00:25:29.620]

As most people who will probably be listening to this know, AI is drawn from data sets that are extractive. It's just pulling, lifting things without consent, usually from the internet or from large-scale digitisation projects. It's done without the consent of writers and artists and so on. That goes for text, but also for images and video. I think it's become even more obvious in other formats like video and images, where you can really see that it's just very derivative of other people's work with no compensation, no consent. It fails to recognise copyright, but also fails to recognise that AI does not label itself, AI does not understand meaning, that there's a lot of human labour, usually outsourced to these call centres, whether it's in Asia or Africa, outsourced from the US. The folks working in these centres are often misled. They also don't really have a choice but to work for these companies and often suffer trauma from the stuff that they have to sort through. I think that's pretty well documented as well. Then there's the environmental aspect where I'm like, if you know how much water and energy it uses and you don't care, and you have these three levels of not caring already, then go for it. But I think we should be teaching people to care, teaching people about the impacts and the complete unsustainability at three or more levels. All of it is interconnected, and that's the problem with AI.

Boris Gamazaychikov[00:27:07.870]

I would start everything with the need for more transparency. Because where we are right now, the leading AI model providers' most popular, most widely used, and most frontier models are essentially black boxes. Setting aside other issues with that, from an environmental perspective it means that when we're using these systems, we don't really understand our environmental impact. For an individual, that's problematic for sure. For an enterprise that's using AI, it can be potentially legally risky.

Will Alpine[00:27:51.080]

Let's get ourselves out of this false dichotomy discussion between the direct impacts of AI and its potential benefits, because that's an overly simplistic view. Focusing on just one of these, while understandable, is really inadequate for the task at hand. I need you to choose your issue regarding AI and then zoom out to think about the full system that it's part of. My conclusion, having spent four years trying to make positive change from within, is that the only thing that will save us now is policy, because technology is just a tool. If you're in a company, the single biggest thing you can do is to push your employer to use their power to advocate for policy that truly aligns tech with climate science, and hold them accountable for promises regarding accountability and transparency. Because traditional sustainability practices, such as incremental measurement and carbon reduction, are insufficient, given the challenges that we face.

Ben Byford:[00:28:43]

Welcome to the end of our deep dive episode on AI and the environment. Thank you so much to our speakers, Hannah Smith, Mél Hogan, Will Alpine, and Boris Gamazaychikov, for fielding my questions, giving their time, and sharing their amazing knowledge.

Ben Byford:[00:29:00]

This is our 100th episode of the Machine Ethics podcast. Thank you so much if you've been listening, and thank you for joining us if this is your first episode. You can find many more at machine-ethics.net, from our other deep dive episode on AI and games, to conversations about science fiction, transhumanism, parenting AGI, automated cars, and building and using AI in your companies, as well as various looks at ethical frameworks, responsible AI, and AI ethics in general. If you'd like to hear the full interviews from our contributors for this episode, you can find them by joining our Patreon at patreon.com/machineethics. You can also find more thoughts from me on the Patreon, as well as other exclusive content. If you would like to continue and come with me on this journey of AI and its impact on society, then you can follow us on Bluesky - machine-ethics.net, Instagram - Machine Ethics podcast, YouTube - @Machine-ethics, and wherever you find your podcasts.

Ben Byford:[00:30:07]

Thank you, and I hope to see you at episode 200. See you then.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford

Previous podcast: Co-design with Pinar Guvenc