Securing a viable future for AI

A historic, worldwide paradigm shift is underway, fueled by major advances in AI. Enterprise usage of AI has more than doubled since 2017, and as AI has progressed from predictive to generative, more businesses are taking note. According to McKinsey, 63% of respondents expect their organizations’ investment in AI to rise over the next three years.

In tandem with this exceptional uptake of AI, the volume of compute is growing at an astounding rate: research shows that the compute required for the largest AI training runs has increased more than 300,000-fold since 2012. And as computational demands grow, so does their environmental impact.

Increased computation means increased electricity use and, with it, carbon emissions. Researchers from the University of Massachusetts Amherst calculated that the electricity used to train a transformer, a particular kind of deep learning model, can emit more than 626,000 pounds (about 284 metric tonnes) of carbon dioxide, more than the emissions of 41 round-trip flights between New York City and Sydney, Australia. And that is only model training.

Data storage is growing as well. According to IDC, 180 zettabytes (180 billion terabytes) of data will be produced in 2025. Storing data at this scale demands an enormous amount of collective energy, which will be difficult to supply sustainably. A single terabyte of data can result in roughly two tonnes of CO2 emissions per year, depending on how it is stored (e.g., the hardware used and the energy mix of the facility). Now multiply that by 180 billion.

Simply put, AI cannot sustainably continue on its current trajectory of growing environmental impact. We must rethink the status quo and change our tactics and behavior.

AI-driven sustainability improvements

The rise of AI unquestionably has major implications for carbon emissions, but it also presents tremendous opportunities. AI combined with real-time data collection can genuinely help organizations identify operational improvements that cut carbon emissions at scale.

For instance, heating, ventilation, and air conditioning (HVAC) systems, which strongly affect a building’s efficiency, can be improved quickly using AI models. HVAC is well suited to automated optimization because it is a complex, data-rich, multi-variable system, and adjustments can yield energy savings in as little as a few months. This opportunity exists in practically every building, but data centers benefit from it the most: Google revealed several years ago that applying AI to data center cooling reduced the energy used for cooling by 40%.
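
As a rough illustration of what this kind of optimization can look like in code, the sketch below fits a simple regression model to hypothetical HVAC telemetry and then searches candidate cooling setpoints for the one with the lowest predicted energy use. The file and column names are illustrative assumptions, not Google’s approach:

```python
# Minimal sketch of model-based HVAC setpoint optimization.
# The telemetry file and column names are hypothetical assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical telemetry: outdoor temp, IT load, chiller setpoint -> energy (kWh)
log = pd.read_csv("hvac_telemetry.csv")  # assumed historical log
features = ["outdoor_temp_c", "it_load_kw", "chiller_setpoint_c"]
model = GradientBoostingRegressor().fit(log[features], log["energy_kwh"])

# Search candidate setpoints for current conditions, within safe bounds.
outdoor_temp, it_load = 28.0, 450.0          # current readings (example values)
candidates = np.arange(16.0, 24.5, 0.5)      # allowed chiller setpoints (°C)
X = pd.DataFrame({
    "outdoor_temp_c": np.full(len(candidates), outdoor_temp),
    "it_load_kw": np.full(len(candidates), it_load),
    "chiller_setpoint_c": candidates,
})
best = candidates[np.argmin(model.predict(X))]
print(f"Recommended chiller setpoint: {best:.1f} °C")
```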

AI is also being used to implement carbon-aware computing. By automatically shifting computing workloads in time or location based on the availability of renewable energy, the carbon footprint of a given task can be reduced.
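
Here is a minimal sketch of the idea, assuming a hypothetical get_carbon_intensity() helper in place of a real grid-data source (such as a utility or third-party carbon-intensity API): the scheduler simply dispatches a job to whichever region currently has the cleanest electricity.

```python
# Minimal sketch of a carbon-aware job scheduler. The regions and the
# get_carbon_intensity() helper are hypothetical stand-ins for a real
# grid-carbon-intensity data source.
from typing import Callable

REGIONS = ["us-west", "us-east", "eu-north"]  # hypothetical deployment regions

def get_carbon_intensity(region: str) -> float:
    """Return grid carbon intensity in gCO2/kWh (stubbed for illustration)."""
    stub = {"us-west": 210.0, "us-east": 390.0, "eu-north": 45.0}
    return stub[region]

def run_in_greenest_region(job: Callable[[], None]) -> str:
    """Dispatch the job to whichever region currently has the cleanest grid."""
    greenest = min(REGIONS, key=get_carbon_intensity)
    print(f"Running job in {greenest} "
          f"({get_carbon_intensity(greenest):.0f} gCO2/kWh)")
    job()
    return greenest

run_in_greenest_region(lambda: print("training batch submitted"))
```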

AI can also help address the previously mentioned problem of ever-growing data storage. In his book World Wide Waste, Gerry McGovern noted that up to 90% of data is never used; it is simply stored. AI can help determine which data is important, valuable, and of sufficient quality to justify storing. Unnecessary data can then be deleted, saving both money and resources.
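
As a sketch of how such triage might work, the example below trains a simple classifier on hypothetical storage metadata to estimate whether each stored object will ever be accessed again, flagging unlikely candidates for review. The file names, column names, and 5% threshold are illustrative assumptions.

```python
# Minimal sketch of AI-assisted data triage: score stored objects by the
# likelihood they will ever be used again, then flag the rest for review.
# The labeled history, file names, and columns are hypothetical assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical metadata: days since last access, size, duplicate flag, plus a
# historical label recording whether the object was accessed in the past year.
history = pd.read_csv("storage_metadata.csv")
features = ["days_since_access", "size_gb", "is_duplicate"]
clf = LogisticRegression().fit(history[features], history["accessed_last_year"])

# Score the current inventory; anything unlikely to be used is a candidate.
inventory = pd.read_csv("current_inventory.csv")
inventory["p_used"] = clf.predict_proba(inventory[features])[:, 1]
candidates = inventory[inventory["p_used"] < 0.05]
print(f"{len(candidates)} objects flagged for archival or deletion review")
```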

How to create more sustainable AI projects

If we are to execute AI initiatives sustainably, we all need to reevaluate a few things and take a more proactive approach to building AI projects.

Start by critically analyzing the business problem you are trying to solve. Ask yourself: do I really need AI to tackle this, or would a more conventional probabilistic approach that needs less compute and energy work just as well? Deep learning is not the answer to every problem, so it pays to be selective about the approach.
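
One lightweight way to enforce this discipline is a "baseline first" check: train an inexpensive classical model and only escalate to deep learning if it falls short of the accuracy target. The dataset and the 0.90 target below are illustrative assumptions.

```python
# Minimal "baseline first" sketch: see whether a lightweight classical model
# already meets the accuracy target before reaching for deep learning.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset
baseline = LogisticRegression(max_iter=5000)
score = cross_val_score(baseline, X, y, cv=5).mean()

ACCURACY_TARGET = 0.90  # assumed business requirement
if score >= ACCURACY_TARGET:
    print(f"Baseline reaches {score:.3f}; no deep model (or its energy bill) needed")
else:
    print(f"Baseline only reaches {score:.3f}; consider a heavier model")
```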

Once your business problem or use case is clear, carefully consider the following factors as you develop your solution and model:

  1. Focus on data quality rather than quantity. Smaller datasets require less energy for training and less ongoing compute and storage, which means lower carbon emissions. Studies have shown that up to 99% of a trained neural network’s parameters can be pruned away, yielding much smaller, sparser networks (see the pruning sketch after this list).
  2. Consider the level of accuracy your use case actually requires. For instance, tuning your models to run at lower numerical precision instead of the computationally demanding FP32 calculations can save a substantial amount of energy (see the mixed-precision sketch after this list).
  3. Leverage domain-specific models rather than training new ones from scratch. You can get better results by building on models and training datasets that already exist. For instance, if you already have a large model trained to understand language semantics, you can build a smaller, domain-specific model suited to your needs that draws on the larger model’s knowledge to produce the same results far more efficiently (see the fine-tuning sketch after this list).
  4. Balance your hardware and software from the edge to the cloud. A more heterogeneous AI infrastructure, combining AI compute chipsets that cater to different application needs, conserves energy across the board. Size, weight, and power (SWaP) limits on edge devices demand smaller, more efficient AI models, while performing AI computations closer to the data source enables greener computing with lower-power devices and reduced network and data storage needs. For dedicated AI hardware, integrated accelerator technologies that boost performance per watt can deliver significant energy savings: in our tests, integrated accelerators improved the average performance per watt of targeted workloads by 3.9 times compared to the same workloads running on the same platform without accelerators. (Results may vary.)
  5. Look to open-source solutions with libraries of optimizations to get the best out-of-the-box performance from your hardware and frameworks. Adopting open standards alongside open source also helps with scale and reproducibility. For instance, using pre-trained models avoids energy-intensive initial training and opens the door to shared or federated learning and improvements over time. Similarly, open APIs make it possible to develop tools, frameworks, and models once and deploy them anywhere with improved performance.
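
To make item 1 concrete, here is a minimal pruning sketch using PyTorch’s built-in pruning utilities. The tiny network and the 90% sparsity level are illustrative assumptions, not a recipe from any particular study.

```python
# Minimal magnitude-pruning sketch using torch.nn.utils.prune.
# The model architecture and sparsity level are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 90% of weights with the smallest magnitude in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # make the pruning permanent

zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"Sparsity after pruning: {zeros / total:.1%}")
```

Note that pruned weights translate into compute and storage savings only when the deployment runtime or storage format actually exploits the sparsity.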
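
For item 2, here is a minimal sketch of automatic mixed precision in PyTorch, which runs most operations in FP16 instead of FP32 while keeping training numerically stable. The model, data, and learning rate are placeholder assumptions; on machines without a GPU the sketch falls back to plain FP32.

```python
# Minimal mixed-precision training sketch (PyTorch AMP): most matrix math runs
# in FP16 rather than FP32, cutting compute energy. Model and data are placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(512, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(64, 512, device=device)
y = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
with torch.autocast(device_type=device, enabled=(device == "cuda")):
    loss = nn.functional.cross_entropy(model(x), y)
scaler.scale(loss).backward()   # scale the loss to avoid FP16 underflow
scaler.step(optimizer)
scaler.update()
print(f"loss: {loss.item():.4f}")
```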
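
And for item 3, here is a minimal transfer-learning sketch that adapts an existing pre-trained language model to a domain-specific task by training only a small classification head, rather than training a large model from scratch. It assumes the Hugging Face transformers library is installed; the checkpoint name, label count, and toy examples are illustrative.

```python
# Minimal transfer-learning sketch: reuse a pre-trained language model and
# train only a lightweight task head. Assumes the Hugging Face `transformers`
# library; the checkpoint and labels are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased"  # small pre-trained model (assumption)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Freeze the pre-trained encoder so only the small head is trained, avoiding
# the energy cost of re-learning language understanding from scratch.
for param in model.base_model.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)

batch = tokenizer(["great product", "terrible support"],
                  return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
print(f"head-only training step complete, loss={loss.item():.4f}")
```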

Like many sustainability-driven choices, designing your AI initiatives for a smaller environmental impact is hard. Lowering your energy and carbon footprint while making the best decisions for the environment takes effort, planning, and compromise. But as past sustainability-driven corporate decisions have shown, even seemingly small changes can add up to significant collective gains that lower carbon emissions and lessen the effects of climate change.
