How Green Is Your Machine Learning?

March 16, 2022


There are many ways to consider the environmental impact of AI and machine learning. For example, Walmart uses IoT and machine learning to optimize the temperature of its refrigerators and predict what maintenance each refrigeration unit requires, reducing both energy use and food spoilage. This is a prime example of using ML to reduce your carbon footprint.

For this blog, however, when we talk about green ML, we are referring to the energy required to run ML in a live production environment, independent of the use case. It’s one thing to build and train a model in a lab, but actually running machine learning in production can require significant compute resources, particularly for deep learning neural network models used in speech recognition, computer vision, and text comprehension. Currently, data centers consume about 5% of the world’s energy, but that share could quadruple or even octuple as enterprises embrace compute-intensive AI/ML on the cloud.

This compute efficiency is one of the major variables in whether your AI/ML investments are profitable or even feasible. In fact, according to researchers at the Massachusetts Institute of Technology, deep learning is hitting its limits unless new, computationally efficient deep learning methods are developed. When operationalizing ML on-prem or at the edge, compute efficiency becomes critical to whether a given model can run at all within its inherent compute and power constraints.

Aside from profitability and feasibility, ML compute efficiency can have broader impacts. For example, high Environmental, Social, & Governance (ESG) scores are correlated with higher market valuations. For enterprises running hundreds or thousands of ML models, reducing the energy consumption of their applied AI and ML is an opportunity to highlight their ESG progress. 

In a recent Medium post, software engineer @Kesk summarized a 2017 paper from Portuguese researchers ranking 27 of the most popular programming languages based on runtime, memory usage, and energy consumption. From the paper’s abstract:

“Our results show interesting findings, such as slower/faster languages consuming less/more energy, and how memory usage influences energy consumption. We show how to use our results to provide software engineers support to decide which language to use when energy efficiency is a concern.”
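Energy comparisons like the paper’s are typically taken from hardware energy counters rather than estimated. As a rough illustration (not the paper’s actual benchmark harness), here is a minimal Rust sketch that samples Linux’s Intel RAPL package-energy counter before and after a numeric workload; the toy workload is an assumption for demonstration, and the counter is only exposed on Intel machines running Linux:

```rust
use std::fs;
use std::time::Instant;

/// Read the cumulative package energy counter (in microjoules) exposed
/// by Intel RAPL via sysfs on Linux. Returns None on machines without
/// this interface.
fn rapl_energy_uj() -> Option<u64> {
    fs::read_to_string("/sys/class/powercap/intel-rapl:0/energy_uj")
        .ok()?
        .trim()
        .parse()
        .ok()
}

/// A stand-in numeric workload: sum of squares of 0..n.
fn workload(n: u64) -> u64 {
    (0..n).map(|i| i * i).sum()
}

fn main() {
    let e0 = rapl_energy_uj();
    let t0 = Instant::now();
    let result = workload(1_000_000);
    let elapsed = t0.elapsed();
    let e1 = rapl_energy_uj();

    println!("workload result: {result}, wall time: {elapsed:?}");
    if let (Some(before), Some(after)) = (e0, e1) {
        // The counter wraps periodically; a simple saturating diff is
        // fine for short runs.
        println!("package energy: {} uJ", after.saturating_sub(before));
    } else {
        println!("RAPL counter not available on this machine");
    }
}
```

The same sampling approach works for any language under test, which is what makes cross-language energy rankings possible in the first place.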

What caught our attention was that Rust was the second-best language for both energy efficiency and execution time, just 3% behind C (for comparison, third-place C++ trailed second-place Rust by more than 30% on energy efficiency and more than 50% on execution time).

When we rebuilt our ML compute engine from the ground up in Rust, our goal was for customers to run their boldest models at C speeds without requiring data teams to change the platforms or frameworks they use for building and training models (even if they use Java or other JVM-based frameworks).

Wallaroo customers are running 3-5x more computations on 80-90% fewer servers without having to re-engineer their models for the move from lab to production! This has positive knock-on effects on power consumption and speed: more computations run on fewer resources, so compute costs no longer scale linearly as customers add more models or more data.
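Taken at face value, those two figures compound. A quick back-of-the-envelope calculation (a sketch using only the ranges quoted above, not Wallaroo-specific measurements) shows the implied per-server throughput gain:

```rust
/// Per-server throughput multiplier: (relative work) / (relative
/// server count). E.g. 3x the computations on 20% of the servers
/// works out to 15x the work per server.
fn per_server_multiplier(work_factor: f64, server_fraction: f64) -> f64 {
    work_factor / server_fraction
}

fn main() {
    // Conservative end: 3x computations on 20% of the servers (80% fewer).
    println!("{}x per server", per_server_multiplier(3.0, 0.20)); // 15x
    // Aggressive end: 5x computations on 10% of the servers (90% fewer).
    println!("{}x per server", per_server_multiplier(5.0, 0.10)); // 50x
}
```

In other words, the quoted ranges imply roughly 15-50x more work per server, which is why the cost curve flattens as workloads grow.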

Why does this matter? It’s the difference between a profitable AI/ML program and an unprofitable one. It’s also the difference between an IoT program that can run ML at the edge and one that can’t.

To see how much compute and energy savings your enterprise can achieve by running your AI and machine learning on Wallaroo, email us at