Curbing computing’s environmental footprint requires efficient algorithms

This article was contributed by Neil Thompson, a research scientist at MIT’s Computer Science and Artificial Intelligence Lab and the Initiative on the Digital Economy. 

As computing applications become more complex and data sets grow, the environmental impact of computing is escalating. Historically, this wasn't much of an issue because growing computational needs were offset by improvements in hardware efficiency, the trend colloquially known as Moore's Law. But as hardware improvements wane, another (often unseen) source of efficiency is taking center stage: improved algorithms.

Our growing appetite for computing can be seen in the proliferation of data centers, which can span millions of square feet and consume enormous amounts of electricity. The International Energy Agency estimates that data centers account for about 1% of global electricity use and 0.3% of global CO2 emissions. Absent ways to make computing more efficient, these harms will grow as we tackle ever-larger Big Data problems in our increasingly sensor-laden world.

In a recent study, Yash Sherry (a research affiliate at MIT Sloan) and I looked at how quickly algorithms improve and compared that to what was historically the most important counterweight to the growing appetite for computation: Moore's Law. Driven by the miniaturization of the building blocks of computer hardware, Moore's Law provided many decades of vast year-over-year improvements in computing efficiency. Just as rising agricultural productivity fed the world's population growth, rising hardware productivity fed the world's computation growth.

But if Moore’s Law is the flashy sibling that is always in the news, algorithm improvement is the sibling working behind the scenes.

Algorithms are the recipes that tell computers what to do and in what order. And while Moore's Law has been giving us computers that can do many more operations per second, algorithm improvement has been providing better recipes that let us do more with each of those operations, and the benefits can be enormous. For example, imagine you are Google Maps and need to find the shortest path between 1,000 popular places that people travel to. Calculating that using an old algorithm could easily take one million times as much computation as using a more modern version. Another example we documented is text matching, such as when search engines look for keywords in web pages, or lawyers search legal documents for particular references. Better algorithms can easily make such a search 100 times faster than it was originally, reducing both computing time and energy usage.
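To make the text-matching example concrete, here is a minimal sketch in Python contrasting a naive scan with the classic Knuth-Morris-Pratt algorithm. It is an illustration of the general idea, not the specific systems measured in our study; the point is how differently the amount of work grows as the text gets longer.

def naive_search(text, pattern):
    # Check the pattern at every offset: roughly len(text) * len(pattern)
    # character comparisons in the worst case.
    hits = []
    for i in range(len(text) - len(pattern) + 1):
        if text[i:i + len(pattern)] == pattern:
            hits.append(i)
    return hits

def kmp_search(text, pattern):
    # Knuth-Morris-Pratt: roughly len(text) + len(pattern) comparisons,
    # because it never re-reads text characters it has already matched.
    # Build the failure table: for each prefix of the pattern, the length
    # of its longest proper prefix that is also a suffix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k

    hits, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]  # fall back in the pattern, not in the text
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - len(pattern) + 1)
            k = fail[k - 1]
    return hits

# Both functions find the same matches; the difference is how the number
# of character comparisons grows as texts and patterns get longer.
text = "abababab" * 100_000
pattern = "abababa"
assert naive_search(text, pattern) == kmp_search(text, pattern)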

But while individual examples can be impressive, we wanted a larger view. For this study, we looked across 57 textbooks and more than a thousand research papers to find the algorithms that computer scientists think are the most important. From these, we extracted 113 different “algorithm families” (sets of algorithms that solve the same problem in different ways) that had been highlighted as the most important by computer science textbooks. For each of the 113, we tracked each time a new algorithm was proposed for that problem from the 1940s to now.

So how does algorithm improvement compare with hardware improvement? For Big Data problems, 43% of algorithm families had year-on-year improvements that were equal to or larger than the gains from Moore's Law. Of these, 14% had improvements that vastly outpaced those that have come from better hardware. These improvements completely transformed what was feasible in these areas, making it possible to tackle problems in a way that no amount of hardware improvement can. Also important for our current era of escalating data sizes: the gains from algorithm improvement are larger the bigger the problem being tackled.
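The reason is simple arithmetic. If an older algorithm needs on the order of n² operations and a newer one needs on the order of n log n, the ratio between them is roughly n / log n, which itself grows with n. The short sketch below uses these illustrative complexities (they are not figures from our data) to show how quickly that ratio widens.

import math

def speedup(n):
    # Operation-count ratio of an O(n^2) algorithm to an O(n log n) one:
    # (n * n) / (n * log2(n)) = n / log2(n), which grows with n.
    return (n * n) / (n * math.log2(n))

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"n = {n:>13,}: roughly {speedup(n):,.0f}x fewer operations")

At a thousand items the better algorithm saves a factor of about a hundred; at a billion items it saves a factor of tens of millions.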

Businesses and research labs at the forefront of computing are already responding to the need to invest in better algorithms. The median organization devotes 6% to 10% of its IT developers to creating new algorithms and 11% to 20% to improving existing ones, a very substantial investment. Other organizations, habituated to simply buying new hardware to improve their computing, will increasingly need to follow the lead of these algorithm flag bearers to stay competitive.

The growing importance of algorithms is part of a bigger change in what drives progress in computing. Historically, improvement centered on hardware, but as Moore's Law comes to an end, that is changing. Instead, algorithm improvement will increasingly step to the forefront, providing the engine for tackling new, harder computing problems.

But pushing out the frontiers of computing is only one of the benefits of better algorithms; the other is efficiency. For those in government or academia, or anyone concerned about the sustainability of computing, better algorithms are an ideal option, allowing us to achieve the same results at greatly reduced environmental cost.

Neil Thompson is a research scientist at MIT’s Computer Science and Artificial Intelligence Lab and the Initiative on the Digital Economy.  Previously, he was an Assistant Professor of Innovation and Strategy at the MIT Sloan School of Management where he co-directed the Experimental Innovation Lab. Thompson has advised businesses and government on the future of Moore’s Law and has been on National Academies panels on transformational technologies and scientific reliability.


