https://arxiv.org/pdf/2505.06371

Abstract: As the adoption of Generative AI in real-world services grows explosively, energy has emerged as a critical bottleneck resource. However, energy remains a metric that is often overlooked, under-explored, or poorly understood in the context of building ML systems. We present the ML.ENERGY Benchmark, a benchmark suite and tool for measuring inference energy consumption under realistic service environments, and the corresponding ML.ENERGY Leaderboard, which have served as a valuable resource for those hoping to understand and optimize the energy consumption of their generative AI services. In this paper, we explain four key design principles for benchmarking ML energy we have acquired over time, and then describe how they are implemented in the ML.ENERGY Benchmark. We then highlight results from the latest iteration of the benchmark, including energy measurements of 40 widely used model architectures across 6 different tasks, case studies of how ML design choices impact energy consumption, and how automated optimization recommendations can lead to significant (sometimes more than 40%) energy savings without changing what is being computed by the model. The ML.ENERGY Benchmark is open-source and can be easily extended to various customized models and application scenarios.

I read this as part of nosing around https://github.com/ml-energy/zeus. I mostly wanted to get a feel for the project, so I didn’t go deep into the details, but a few things were interesting:

  1. Diffusion models and LLMs have different runtime characteristics, so you need to measure their energy consumption differently. Specifically, LLM serving does per-request iteration within a batch of requests, which means that start and finish times within the batch are not aligned. To compensate for this, the benchmark software only measures during the “steady state”, when the batch is fully occupied.

  2. You can’t just use a GPU’s Thermal Design Power (TDP) to estimate power usage; you actually have to measure it. GPUs don’t normally run at full power, and different model types have different GPU utilisation characteristics. For example, compared to diffusion models, LLMs are relatively low in compute intensity: decoding throughput is bottlenecked by VRAM bandwidth, so the GPU’s power draw sits well below its TDP.

  3. It’s possible to get outsized energy savings for relatively small sacrifices in latency; the paper reports savings of more than 40% in some cases without changing what the model computes.
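Point 1 clicked for me once I thought about what the “steady state” window actually is. A minimal sketch (my own, not the benchmark’s code): if you know each request’s start and finish time, the steady-state window runs from the last start to the first finish, and you integrate power samples only inside it.

```python
# Hypothetical sketch of steady-state energy measurement: only count power
# samples taken while *every* request in the batch is still in flight.

def steady_state_energy(requests, power_samples):
    """requests: list of (start, finish) times in seconds, one per request.
    power_samples: list of (timestamp, watts) pairs, in time order.
    Returns ((window_start, window_end), joules) via a simple Riemann sum."""
    window_start = max(s for s, _ in requests)  # last request to start
    window_end = min(f for _, f in requests)    # first request to finish
    in_window = [(t, w) for t, w in power_samples
                 if window_start <= t <= window_end]
    if len(in_window) < 2:
        return (window_start, window_end), 0.0
    # Sum watts * seconds over consecutive sample intervals in the window.
    joules = sum(w * (t2 - t1)
                 for (t1, w), (t2, _) in zip(in_window, in_window[1:]))
    return (window_start, window_end), joules
```

With three requests spanning (0, 10), (2, 8), and (1, 9) seconds, only the 2–8 s window counts, so ramp-up and drain at the edges of the batch don’t pollute the measurement.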
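For point 2, a toy calculation (entirely made-up numbers, just to show the shape of the error) of how far off a TDP-based estimate can be for a memory-bandwidth-bound workload:

```python
# Hypothetical numbers: why "TDP x time" over-estimates LLM inference energy.
TDP_WATTS = 700.0   # e.g. a datacentre GPU's rated TDP (assumed figure)
duration_s = 10.0

# Decoding is VRAM-bandwidth bound, so measured draw sits well below TDP.
measured_samples = [320.0, 335.0, 330.0, 325.0, 340.0]  # watts, evenly spaced

tdp_estimate_j = TDP_WATTS * duration_s
measured_j = (sum(measured_samples) / len(measured_samples)) * duration_s
overestimate = tdp_estimate_j / measured_j  # how wrong the TDP estimate is
```

With these (fabricated) samples the TDP estimate is roughly 2x the measured energy, which is the kind of gap that makes actually measuring worthwhile.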
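And for point 3, the trade-off can be framed as a tiny optimisation problem. This is my own sketch with invented profile data, not the benchmark’s recommendation logic: given measured (power limit, latency) points for a workload, pick the lowest-energy setting whose latency stays within some slack of the baseline.

```python
# Hypothetical sketch: choose a GPU power limit that minimises energy
# (power x latency) subject to a latency budget relative to the baseline.

def best_power_limit(profile, baseline_latency, slack=1.10):
    """profile: list of (power_limit_watts, latency_s) measured points.
    Returns the point with the lowest energy whose latency is within
    `slack` x baseline_latency (default: a 10% latency budget)."""
    feasible = [(p, t) for p, t in profile if t <= baseline_latency * slack]
    return min(feasible, key=lambda pt: pt[0] * pt[1])
```

With a made-up profile of [(400 W, 1.00 s), (300 W, 1.05 s), (250 W, 1.20 s)] and a 10% latency budget, the 300 W setting wins: about a 21% energy saving for a 5% latency hit, which is the flavour of result the paper describes.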