As AI models continue to grow in size and accuracy, so too does their negative impact on the environment.
Following the conclusion of the COP26 climate conference, private companies and governments alike are stepping up their promises to combat climate change, bringing to bear a mix of public policy and innovative technologies to address one of our era’s defining challenges.
One such company is Nvidia, creator of a supercomputer (dubbed “Earth-2”) that leverages predictive models to help scientists understand how climatic shifts might manifest across the world in the coming decades. But as exciting as it may be to contemplate a world where AI helps tackle the climate crisis, there’s no escaping the bitter irony that AI itself comes with a significant carbon footprint.
Case in point: A single transformer-based neural network (213 million parameters) built using traditional neural architecture search emits more than 600,000 pounds of carbon dioxide, nearly six times the emissions that an average car produces over its lifetime.
Shrinking AI’s carbon footprint is only possible if we first understand the scope of the problem. Fortunately, there are steps tech industry leaders can take to ensure that AI innovation doesn’t come at the expense of the planet’s health. From rethinking hardware and model complexity to reducing the processing required in both the training and inference stages, here’s what it will take to achieve eco-friendly AI innovation.
No to power-hungry models
AI models require vast amounts of energy to function, and their hunger for computing power grows along with model accuracy. The larger (and therefore typically more predictively accurate) an AI model is, the more energy it requires.
To put this massive energy consumption in context, in 2020, an algorithm used to solve a Rubik’s Cube required as much energy as three nuclear power plants produce in an hour. Although this example is an outlier (and AI models tend to focus on addressing more practical problems than simply solving Rubik’s Cubes), it still illustrates an overall trend: As AI models continue to grow in size and accuracy, so too does their negative impact on the environment.
To offer up a less whimsical statistic: As early as 2018, data centers, which power AI training and inference among many other workloads, used an estimated 200 terawatt-hours (TWh) of electricity each year, more than the national energy consumption of some countries.
Until recently, the training stage accounted for most AI computing power consumption. But as more and more companies commercialize their AI offerings, more of that energy consumption will be devoted to inference.
As this trend accelerates, AI-related CO2 emissions will climb sharply in turn, unless the industry takes steps to curb them.
What’s more, AI models keep growing in complexity and size, with model size ballooning from 26 MB in 2012 to 1 TB in 2019. That growth has driven a matching increase in demand for compute power.
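To make those units concrete, here is a back-of-the-envelope sketch in Python relating a model’s parameter count to the raw size of its weights, under the illustrative assumption of 4 bytes (a 32-bit float) per parameter; real checkpoint formats add overhead on top of this.

```python
# Back-of-the-envelope weight size from parameter count, assuming
# 32-bit floats (4 bytes per weight); real checkpoints add overhead.
def model_size_mb(num_params: int, bytes_per_param: int = 4) -> float:
    return num_params * bytes_per_param / 1024 ** 2

# The 213-million-parameter transformer cited above works out to
# roughly 812 MB of raw fp32 weights.
print(f"{model_size_mb(213_000_000):,.0f} MB")
```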
As is the case with climate change itself, AI is becoming increasingly and irreversibly embedded in our day-to-day lives. So, the question AI pioneers must be asking is: How can we make complex AI more environmentally friendly?
Fortunately, there is growing awareness of this issue within the industries it concerns. In early 2021, MLPerf introduced the MLPerf Power Measurement, a new set of techniques and metrics that complement performance benchmarks for AI processes. These metrics establish a much-needed standard for reporting and comparing model and hardware performance that accounts for energy consumption rather than tracking latency alone.
The ability to measure and track AI’s carbon footprint is a step in the right direction, but the industry as a whole needs to be doing more. Thankfully, there are steps that can be readily implemented.
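To illustrate what power-aware measurement can look like in practice, here is a minimal sketch that estimates average GPU power draw and per-inference energy by sampling NVIDIA’s management library (via the pynvml package) during a run. This is a rough approximation for illustration, not MLPerf’s official methodology, and the `model(batch)` call in the usage line is a hypothetical stand-in for a network and input already loaded on the GPU.

```python
import time
import pynvml  # NVIDIA Management Library bindings (nvidia-ml-py)

def measure_energy(run_inference, n_runs: int = 100):
    """Estimate average GPU power (watts) and energy per inference
    (joules) by sampling power draw across n_runs forward passes."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0
    samples, start = [], time.time()
    for _ in range(n_runs):
        run_inference()  # one forward pass
        samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000)  # mW -> W
    elapsed = time.time() - start
    pynvml.nvmlShutdown()
    avg_watts = sum(samples) / len(samples)
    return avg_watts, avg_watts * elapsed / n_runs  # energy = power x time

# Hypothetical usage: watts, joules = measure_energy(lambda: model(batch))
```

Averaging instantaneous readings this way glosses over idle draw and sampling jitter, which is exactly why standardized benchmarks like MLPerf Power matter.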
Work smarter, not harder
Any enterprise that hopes to demonstrate a respectable level of responsibility in the face of climate change must be smarter about how and why it runs its AI projects. One way to increase efficiency without compromising computing power is simply to deploy models on more energy-efficient hardware. Hardware manufacturers such as Qualcomm, whose new Cloud AI 100 chip was designed with reduced power consumption in mind, are blazing a promising trail by taking energy concerns into account when designing new products.
And with MLPerf releasing another benchmark that attempts to measure and compare the power efficiency of hardware, there’s no shortage of important work being done to reduce the power consumption of AI chips.
Smaller is greener
Another vital piece of the puzzle is the models themselves – especially their size and configuration. Simply put, it’s high time for enterprises to rethink the conventional wisdom that bigger is always better.
In a vacuum, accuracy is arguably the most important aspect of AI computation. But in practical applications, accuracy alone is insufficient for a successful deployment, and, from an environmental standpoint, it cannot come at the expense of model efficiency.
The good news is that there are ways to optimize the core architectures of deep learning models that increase efficiency without detracting from accuracy. According to Deci’s internal estimates and experience with model optimization, improving the core architecture can reduce the compute power needed for inference by anywhere from 50 percent to 80 percent, a promising outlook for enterprises hoping to stay at the top of the AI game while doing their part for the planet.

There are far too many industries where ROI considerations are, on the surface, at odds with environmental concerns – such is the bitter history of climate change. Fortunately, this does not have to be the case with AI, where efficiency optimization is a win-win situation.
Smaller, more efficient models that require less processing power are both cheaper to run and far friendlier to the environment. Deep learning models can fulfill every purpose they are built for without exacerbating climate change.
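As one concrete, if simplified, illustration of the efficiency techniques available off the shelf (this is an example of generic post-training quantization, not Deci’s architecture-optimization method), the sketch below uses PyTorch’s dynamic quantization to store a toy network’s weights as 8-bit integers instead of 32-bit floats, roughly quartering its size.

```python
import os
import torch
import torch.nn as nn

# Toy stand-in for a trained network (illustrative only).
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Post-training dynamic quantization: Linear weights are converted
# from 32-bit floats to 8-bit integers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """On-disk size of a model's weights, in megabytes."""
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1024 ** 2
    os.remove("tmp.pt")
    return size

print(f"fp32: {size_mb(model):.2f} MB -> int8: {size_mb(quantized):.2f} MB")
```

Smaller weights mean less memory traffic per inference, which is one of the main levers behind the power savings described above.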
This article was originally published on EE Times.
Yonatan Geifman is CEO and Co-Founder of Deci, a Tel Aviv-based deep learning platform developer.