Posted by Damien P. Williams
https://afutureworththinkingabout.com/?p=6406
There’s a new open-access book of collected essays called Reimagining AI for Environmental Justice and Creativity, and I happen to have an essay in it. The collection is made of contributions from participants in the October 2024 “Reimagining AI for Environmental Justice and Creativity” panels and workshops put on by Jess Reia, MC Forelle, and Yingchong Wang, and I’ve included my essay here, for you. That said, I highly recommend checking out the rest of the book, because all the contributions are fantastic.
This work was co-sponsored by: The Karsh Institute Digital Technology for Democracy Lab, The Environmental Institute, and The School of Data Science, all at UVA. The videos for both days of the “Reimagining AI for Environmental Justice and Creativity” talks are now available, and you can find them at the Karsh Institute website, and also below, before the text of my essay.
All in all, I think these are some really great conversations on “AI” and environmental justice. They cover “AI’s” extremely material, practical aspects, the deeply philosophical aspects, and the necessary and fundamental connections between the two, and these are crucial discussions to be having, especially right now.
Hope you dig it.
Reimagining “AI’s” Environmental and Sociotechnical Materialities
Damien P. Williams
UNC Charlotte
There are numerous assumptions bundled into the current thinking around what “artificial intelligence” does and is, and around whether we should even be using it and, if so, how. Those pushing “AI” adoption tend to presuppose it necessarily will be good for something— that it will be useful and solve some problem— without ever defining exactly what that problem might be. Often, we see that there are these pushes towards paradigms of efficiency and ease of work and “rote” tasks being taken off our hands without anyone ever asking the fundamental follow-up question of “…okay but does it actually do any of that?” Relatedly, it’s often assumed that “artificial intelligence” will become or will make other things “better” in some nebulous way if only we just keep pushing, just keep building, just keep moving towards the next model of it. If we keep doing that, then eventually, we’re assured, “in just ten years,” “AI” will turn into the version of itself that will solve all our problems. But this notion that in ten years, “AI” will be embedded in everything and will be inescapable and perfect is something we’ve been hearing for the past 50 years.
This recurrent technosocial paradigm of “AI Summer” and “AI Winter” exists for a reason; these hype-cycles pushing towards automation, neural nets, big data, or algorithms over and over again represent externalities which must be addressed in a deeper way through questions like, “What are the values of the people who push ‘AI’s’ ‘inevitability,’ and what are their actual goals?” Because, while people might think they mean the same things when they say “AI,” or are indicating the same kinds of needs to be met, in truth, we’re very often talking past each other. Without a clear understanding of what it is we each and all actually think of as the “good” of “AI” technology— without confronting that question in a very direct and intentional way— different groups will just keep pushing in different directions, and whoever has the predominant access to and control over the levers of power wins the right to define the problems that “AI” seeks to address. But in many cases, those are problems they and their vision of “AI” helped to create.
Current estimates hold that water consumption increased ~34% in areas where Microsoft and Google placed datacenters for search and “AI,” and that every email’s worth of text you have an LLM “AI” write consumes a pint of water. Put another way, imagine if every time you composed 150 of your own words, you had to just take out a 16 oz water bottle, fill it up, and dump it in the trash. We’re not just talking about water for cooling servers, either. In thermal power plants, you need water to turn into steam to run turbines, and then to cool the systems which do that, as well. So the more energy needed, the more water used in production and cooling. And while many highlight that some systems only use this water once and then release it, even that is a process and a period of capturing that water, both removing the water from use, and potentially trapping and killing organisms living in it. Additionally, the water returned after the “once through” process has a significantly higher temperature than when it started. It should be said that the numbers in this discussion are estimates based on known figures for chip performance, electricity production, and whatever data’s been wrenched from “AI” corporations. They’re estimated because these companies do not release their actual resource consumption numbers.
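To make the scale of that “pint per email” figure a little more concrete, here is a minimal back-of-envelope sketch in Python. The per-email water estimate and the ~150-word email length come from the figures discussed above; the daily email volume in the example is a purely hypothetical input, not a number from this essay or from any company’s reporting.

```python
# Back-of-envelope estimate of water consumed by LLM-written emails,
# using the rough figures discussed above. These are estimates, not
# vendor-reported numbers -- the companies involved do not publish
# their actual resource consumption.

PINTS_PER_EMAIL = 1        # ~1 pint (16 oz) per ~150-word LLM-written email (estimate)
LITERS_PER_PINT = 0.473    # one US pint in liters

def water_for_emails(num_emails: int) -> float:
    """Estimated liters of water consumed for a given number of LLM-written emails."""
    return num_emails * PINTS_PER_EMAIL * LITERS_PER_PINT

if __name__ == "__main__":
    # Hypothetical example: an office that has an LLM draft 500 emails a day.
    daily_emails = 500
    print(f"{water_for_emails(daily_emails):.0f} liters per day")          # ~237 liters
    print(f"{water_for_emails(daily_emails * 365):,.0f} liters per year")  # ~86,000 liters
```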
Further, the data centers that support “AI” are oftentimes built in communities that are already resource scarce, and pulling water from or putting emissions into these communities ensures that “AI’s” harms are necessarily disproportionately enacted on the people who can least afford to bear them. Rather than rulemakers just paying lip-service to people’s grievances, logging them in a repository somewhere, and making whatever rules they intended to make to begin with, both the creation and regulation of “AI” must be directed by those whom it’s most likely to harm. But while marginalized communities absolutely must have meaningful input when it comes to technologies which will be wielded against them, there also has to be a centralized response in the form of some standard-setting body. And, recursively, that standard-setting body will have to be meaningfully responsive to the needs of those most likely to be harmed if said regulations and standards go wrong.
And so, we have to ask our questions: Who is most harmed by current uses of “AI”? What does the energy footprint of a data center actually look like? How much water and fossil fuel does it take to run “AI’s” servers and their computations? What are their carbon and waste heat emissions? Because the more we dig down on this, the more we truly confront the next questions: Should we be doing “AI” differently? What would it take to build “AI” in a different way? What would it take to power “AI” in a truly renewable way? And what and whom do we even want “AI” to be for? If it helps, you can try to think of it as a game:
First major “AI” firm to use only renewable energy sources, an open source and radical consent model for the collection and use of training data, and a community partnership regulatory process which centers and heeds the needs of the most marginalized, wins.
Suggested Citation:
Williams, Damien P. “Reimagining ‘AI’s’ Environmental and Sociotechnical Materialities,” appearing in Reimagining AI for Environmental Justice and Creativity, Reia, J., Forelle, MC, and Wang, Y., eds. Digital Technology for Democracy Lab, University of Virginia. 2025. https://doi.org/10.18130/03df-zn30.