Making Software’s Energy Footprint Visible - One Process at a Time
(Image: Eva Michálková / Pixabay)
What if we could see which part of our software is actually consuming energy? How would development and deployment change if energy were no longer determined "after the fact"? A guest article by Geerd-Dietger (Didi) Hoffmann of the ProcPower project.
When software guzzles electricity
If you work in software, you’ve probably learned to think in performance graphs. Response times, memory usage, and request rates are all numbers that help you understand what’s going on and where to intervene. Energy, by comparison, often feels like a distant infrastructure concern: something the data center, the cloud provider, or “the ops team” worries about.
But energy isn’t distant. It’s the physical side of every digital service. And right now, that physical side is becoming harder to ignore: global electricity systems are under pressure, climate targets are tightening, and the resources behind our hardware—from chips to cooling to grids—are anything but infinite. Cutting energy demand is not just about saving money; it’s about reducing emissions in a world that still generates much of its electricity from fossil fuels, and about operating responsibly in a time of rising scarcity and competition for energy.
The catch is simple: even when teams want to reduce energy demand, it’s hard to do it in practice because we usually don’t know where the energy is actually going.
That’s the gap ProcPower tries to close.
The problem with “total consumption”
Most systems can tell you overall CPU utilization or overall cloud usage. That’s useful in the same way your monthly electricity bill is useful: you know you consumed resources, but you don’t know why, what changed, or what to fix first.
Software systems are made of parts: a web server, a database, background jobs, scheduled tasks, plugins, image processing, search indexing, video transcoding… and the list grows as soon as you add containers and microservices. When energy is only visible as a total, it becomes a foggy shared number. Nobody can pinpoint the exact usage and nobody feels responsible.
This is why measuring energy per process is such a big shift.
It turns “the system uses energy” into “this process is driving the spike.” It makes sustainability less like a slogan and more like a debugging task. And that matters, because the climate and resource side of digital services is increasingly driven by demand: as workloads grow, energy use grows with them—unless we actively counter it with better software, better operations, and better feedback loops.
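To make the shift from "the system uses energy" to "this process is driving the spike" concrete, here is a minimal sketch of the core idea: take one measured energy total for an interval and attribute it to processes in proportion to their CPU-time share. This is a deliberate simplification of what a tool like ProcPower does (real attribution weighs more signals), and the numbers and service names are invented for illustration.

```python
def attribute_energy(total_joules, cpu_seconds_by_process):
    """Split a measured energy total across processes by CPU-time share.

    A simplification: real per-process attribution also considers
    wakeups, memory, and I/O, but proportional CPU share is enough to
    show how a single "bill" becomes per-process numbers.
    """
    total_cpu = sum(cpu_seconds_by_process.values())
    if total_cpu == 0:
        # Nothing ran in the interval; nothing to attribute.
        return {name: 0.0 for name in cpu_seconds_by_process}
    return {
        name: total_joules * cpu / total_cpu
        for name, cpu in cpu_seconds_by_process.items()
    }

# Example: 120 J measured over an interval, three services active.
usage = {"web": 6.0, "db": 3.0, "indexer": 1.0}
print(attribute_energy(120.0, usage))
# → web gets 72 J, db 36 J, indexer 12 J
```

The output is no longer a foggy shared total: the indexer's 12 J can be discussed, questioned, and optimized by the team that owns it.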
Why energy demand matters now
It’s tempting to treat digital services as “clean” compared to other sectors: no smokestacks, no exhaust pipes. But the emissions don’t disappear; they shift into the electricity system and the supply chains that build and run our infrastructure.
And this is where energy demand becomes critical in the context of climate change and resource scarcity:
- Climate impact: As long as parts of the grid still rely on fossil generation, higher electricity demand generally means more emissions—or, at minimum, a slower path to decarbonization because clean supply must cover ever-growing load.
- Resource constraints: Energy isn’t the only scarce input. Hardware requires materials, manufacturing capacity, cooling, and increasingly contested grid connections. Reducing demand at the software level can reduce pressure across the stack.
- System resilience: Peaks matter. If software can avoid unnecessary work—especially bursty background tasks—it can reduce peak demand and make infrastructure easier to operate. Even small improvements scale when multiplied across millions of deployments.
Make energy observable where decisions happen
ProcPower is part of the open-source work around the green-kernel project.
Its focus is not “yet another sustainability report,” but something much more practical: energy visibility at the level of processes and services on Linux. It makes it possible to connect energy use to actual choices - code paths, jobs, plugins, deployments - rather than treating energy as an abstract cost of “running computers.”
And once you can attribute energy, different conversations become possible:
- You can talk about an expensive background job the same way you talk about a slow query.
- You can compare two implementations not only by latency but also by energy demand.
- You can notice that a plugin, a new feature, or a configuration change quietly increased energy use.
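Comparing two implementations by energy demand can start with a very simple proxy: on the same hardware, CPU-bound work that takes more CPU time generally costs more energy. The sketch below (an assumption-laden stand-in, not ProcPower's measurement method) times two interchangeable implementations of the same task so they can be compared the way you would compare latency.

```python
import time

def cpu_cost(fn, *args):
    """CPU time consumed by fn(*args): a crude energy proxy for
    CPU-bound work (assumes more CPU time means more energy on the
    same machine). Real tools read hardware energy counters instead."""
    start = time.process_time()
    fn(*args)
    return time.process_time() - start

# Two hypothetical implementations of the same task.
def sum_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

def sum_builtin(n):
    return sum(range(n))

loop_cost = cpu_cost(sum_loop, 1_000_000)
builtin_cost = cpu_cost(sum_builtin, 1_000_000)
print(f"loop: {loop_cost:.4f}s CPU, builtin: {builtin_cost:.4f}s CPU")
```

Once the comparison is a number, "which version should we ship?" becomes an engineering question with evidence attached, not a matter of taste.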
One of the most difficult things about sustainability in tech is that it can feel like a moral discussion rather than an engineering one. ProcPower nudges it back toward engineering: measure, attribute, learn, improve. And by being open source it invites collaboration: new integrations, better models, better UX, better defaults.
How it works under the hood
ProcPower adds a lightweight “metering layer” to Linux that attributes resource activity to the processes and groups of processes (services/containers) that caused it. Instead of guessing energy use from a single system-wide number, it collects the same kinds of signals you already rely on for performance work - CPU time, wakeups/context switches, memory footprint, and I/O activity - and combines them with hardware energy counters when they’re available.

The result is a per-process (and per-container) view that you can sample over time, so you can spot spikes, correlate them with deployments or jobs, and compare behavior before and after a change. Crucially, in container setups it exposes the metrics at the cgroup level, so a service can see its own footprint without needing host-level privileges, making it practical for real-world deployments where isolation and safety matter.
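The sampling idea described above can be sketched as a single metering step: read an energy counter and per-group CPU counters, wait an interval, read them again, and attribute the energy delta by CPU share. This is an illustrative sketch, not ProcPower's actual code; the counter sources are injected so the logic stands alone (on Linux, the energy reader could be backed by RAPL via `/sys/class/powercap/intel-rapl:0/energy_uj` and the CPU reader by each cgroup's `cpu.stat` - both are assumptions about one possible setup).

```python
import time

def sample_interval(read_energy_uj, read_cpu_ns_by_group, seconds=1.0):
    """One step of a metering loop (sketch): snapshot cumulative
    counters before and after an interval, attribute the energy
    delta across groups by their share of CPU-time delta.

    read_energy_uj:         () -> cumulative energy in microjoules
    read_cpu_ns_by_group:   () -> {group: cumulative CPU nanoseconds}
    """
    e0, cpu0 = read_energy_uj(), read_cpu_ns_by_group()
    time.sleep(seconds)
    e1, cpu1 = read_energy_uj(), read_cpu_ns_by_group()

    joules = (e1 - e0) / 1e6
    deltas = {g: cpu1[g] - cpu0.get(g, 0) for g in cpu1}
    busy = sum(deltas.values())
    if busy == 0:
        return {g: 0.0 for g in deltas}
    return {g: joules * d / busy for g, d in deltas.items()}

# Fake counters for illustration: each call advances the clocks.
_e = iter([0, 5_000_000])  # 5 J consumed over the interval
_c = iter([{"web": 0, "db": 0},
           {"web": 3_000_000, "db": 1_000_000}])
print(sample_interval(lambda: next(_e), lambda: next(_c), seconds=0.0))
# → {'web': 3.75, 'db': 1.25}
```

Sampling repeatedly and storing the results over time is what turns this into the "before vs. after" view described above: each deployment or job leaves a visible trace in the per-group series.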
ProcPower is a reminder that sustainability doesn’t have to live in annual reports or abstract targets; it can be part of everyday engineering. When energy becomes observable at the level of processes and services, teams can treat it like any other operational metric: investigate it, discuss trade-offs, and improve it iteratively. In a world where electricity, hardware, and grid capacity are increasingly constrained, that kind of practical visibility is a small change with outsized impact.
Geerd-Dietger "Didi" Hoffmann he/him
Geerd-Dietger Hoffmann is CTO at Green Coding Solutions GmbH, leading digital sustainability transformation. He has held CTO/founding roles at Climate Farmers, Ecoworks, TeleoMed, and eHealth Africa, spanning sustainable software and health technology. He holds a computer science degree from UCL and previously worked on the Linux operating system at CERN and IBM. In Class 01, he is developing ProcPower.