https://arxiv.org/pdf/2501.16548

Abstract: As the climate crisis deepens, artificial intelligence (AI) has emerged as a contested force: some champion its potential to advance renewable energy, materials discovery, and large-scale emissions monitoring, while others underscore its growing carbon footprint, water consumption, and material resource demands. Much of this debate has concentrated on direct impacts—energy and water usage in data centers, e-waste from frequent hardware upgrades—without addressing the significant indirect effects. This paper examines how the problem of Jevons’ Paradox applies to AI, whereby efficiency gains may paradoxically spur increased consumption. We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socioeconomic analyses. Rebound effects undermine the assumption that improved technical efficiency alone will ensure net reductions in environmental harm. Instead, the trajectory of AI’s impact also hinges on business incentives and market logics, governance and policymaking, and broader social and cultural norms. We contend that a narrow focus on direct emissions misrepresents AI’s true climate footprint, limiting the scope for meaningful interventions. We conclude with recommendations that address rebound effects and challenge the market-driven imperatives fueling uncontrolled AI growth. By broadening the analysis to include both direct and indirect consequences, we aim to inform a more comprehensive, evidence-based dialogue on AI’s role in the climate crisis.

Ever since DeepSeek made a splash in early 2025, podcasters and analysts have been invoking Jevons' Paradox while trying to figure out whether more efficient AI models would result in larger or smaller resource requirements (power in particular).
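
To make the question concrete, here is a minimal sketch (with made-up, purely illustrative numbers) of the rebound-effect arithmetic: whether total power demand falls or rises after an efficiency gain depends entirely on how much usage grows in response.

```python
# Illustrative rebound-effect arithmetic with made-up numbers.
# If demand grows faster than per-query efficiency improves,
# total consumption rises despite the efficiency gain (Jevons' Paradox).

def total_energy(energy_per_query_wh: float, queries_per_day: float) -> float:
    """Total daily energy in watt-hours."""
    return energy_per_query_wh * queries_per_day

baseline = total_energy(energy_per_query_wh=3.0, queries_per_day=1e9)

# Case 1: 10x efficiency gain, demand only doubles -> net reduction.
modest_demand = total_energy(energy_per_query_wh=0.3, queries_per_day=2e9)

# Case 2: same 10x efficiency gain, but demand grows 20x because
# cheaper queries get embedded everywhere -> net increase.
induced_demand = total_energy(energy_per_query_wh=0.3, queries_per_day=2e10)

print(f"baseline:       {baseline / 1e9:.1f} GWh/day")
print(f"modest demand:  {modest_demand / 1e9:.1f} GWh/day")
print(f"induced demand: {induced_demand / 1e9:.1f} GWh/day")
```

None of these figures come from the paper; they only show how the sign of the net change hinges on demand elasticity, which is exactly the quantity the paper argues is shaped by business incentives, policy, and social norms.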

So I was looking for a paper that would analyse resource requirements and Jevons' Paradox for AI. Unfortunately I chose poorly: this paper is mostly a description of various kinds of rebound effects and how they might apply to AI, with no actual analysis. In fairness, the authors do a reasonable job of laying out why it's complicated, and the lack of graphs and equations in the paper should have tipped me off.

Questions