The Dirty Secret of Generative AI

Early in February, Google and Microsoft both announced major overhauls of their search engines. Both tech giants have invested heavily in building or acquiring generative AI tools, which use large language models to understand and respond to complicated questions. Now they are trying to integrate those tools into search, in the hope of giving users a richer, more accurate experience. Baidu, the Chinese search company, has said it will do the same.

But the excitement over these new tools could be concealing a dirty secret. The race to build high-performance, AI-powered search engines will require a dramatic rise in computing power, and with it a major increase in the amount of energy tech companies use and the amount of carbon they emit.

There are already enormous resources involved in indexing and searching internet content, but the incorporation of AI requires a different kind of firepower, says Alan Woodward, professor of cybersecurity at the University of Surrey in the UK. It needs processing power as well as storage and efficient search. Every time there is a step change in online processing, the power and cooling resources required by large processing centers rise significantly. This, in his view, could be such a step.

Large language models (LLMs), such as those underpinning OpenAI’s ChatGPT, which will power Microsoft’s upgraded Bing search engine, and Google’s equivalent, Bard, work by parsing and computing the connections within enormous volumes of data. That is why they tend to be developed by companies with substantial resources.

Training these models takes a tremendous amount of computing power, says Carlos Gómez-Rodríguez, a computer scientist at the University of A Coruña in Spain. Right now, only the Big Tech companies can train them.

The training of GPT-3, which ChatGPT is partly based on, is estimated to have consumed 1,287 MWh and produced emissions of more than 550 tons of carbon dioxide equivalent, roughly the same as a single person making 550 round trips between New York and San Francisco. Neither OpenAI nor Google has disclosed the computing costs of its products.
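
As a rough back-of-envelope sketch of how those figures fit together: only the 1,287 MWh and 550-ton estimates come from the article; the per-passenger emissions assumed for a New York–San Francisco round trip is an illustrative placeholder.

```python
# Back-of-envelope check of the GPT-3 training figures cited above.
# Only the 1,287 MWh and 550-ton estimates come from the text; the
# per-passenger round-trip emissions value is an assumed placeholder.

TRAINING_ENERGY_MWH = 1_287       # estimated energy to train GPT-3
TRAINING_EMISSIONS_T = 550        # estimated emissions, metric tons CO2e
CO2_PER_ROUND_TRIP_T = 1.0        # assumed tons CO2e per passenger, NY-SF round trip

# Implied carbon intensity of the electricity used for training.
intensity = TRAINING_EMISSIONS_T / TRAINING_ENERGY_MWH
print(f"Implied grid intensity: {intensity:.2f} t CO2e per MWh")

# Number of round trips with roughly the same footprint as one training run.
trips = TRAINING_EMISSIONS_T / CO2_PER_ROUND_TRIP_T
print(f"Equivalent round trips for one passenger: {trips:.0f}")
```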

It’s not that bad, but then you have to consider that you not only have to train the model, you also have to deploy it and serve millions of users, according to Gómez-Rodríguez.

There is also a big difference between using ChatGPT, which investment bank UBS estimates has around 13 million daily users, as a stand-alone product and integrating it into Bing, which handles half a billion searches every day.

Adding generative AI to the process will demand “at least four or five times more compute per search,” says Martin Bouchard, cofounder of the Canadian data center company QScale, based on his reading of Microsoft’s and Google’s plans for search. He notes that ChatGPT’s knowledge of the world currently ends in late 2021, partly in an effort to keep its computing requirements down.

That will have to change if these models are to meet the needs of search engine users. If they’re going to retrain the model often and add more parameters and everything, it’s a totally different scale of things, he says.

That will require a significant investment in hardware. According to Bouchard, the data centers and infrastructure we currently have will not be able to cope with [the race of generative AI]. It’s too much, he says.
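
To get a sense of what that scaling could mean, the figures above can be combined with an assumed baseline. In the sketch below, only the half-billion daily searches and the four-to-five-times compute multiplier come from the article; the baseline energy per conventional search is a placeholder assumption for illustration.

```python
# Rough sketch of how a per-query compute multiplier scales with search volume.
# The baseline energy per traditional search (0.3 Wh) is an assumed placeholder,
# not a figure from the article.

QUERIES_PER_DAY = 500_000_000       # ~half a billion Bing searches per day
BASELINE_WH_PER_QUERY = 0.3         # assumed energy for a conventional search, Wh
COMPUTE_MULTIPLIERS = (4, 5)        # "four or five times more compute per search"

for multiplier in COMPUTE_MULTIPLIERS:
    extra_wh = QUERIES_PER_DAY * BASELINE_WH_PER_QUERY * (multiplier - 1)
    extra_mwh = extra_wh / 1_000_000   # Wh -> MWh
    print(f"{multiplier}x compute per query -> ~{extra_mwh:,.0f} extra MWh per day")
```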

Data centers already account for around 1 percent of the world’s greenhouse gas emissions, according to the International Energy Agency. That figure is expected to rise as demand for cloud computing grows, though the companies running search have pledged to reduce their net contribution to global warming.

It’s certainly not as bad as transportation or the textile industry, Gómez-Rodríguez says, but AI could become a significant contributor to emissions.

Microsoft has pledged to reach carbon neutrality by 2050, and the company plans to purchase 1.5 million metric tons of carbon credits this year. Google aims to achieve net-zero emissions across its entire value chain by 2030.

Moving data centers onto cleaner energy sources and redesigning neural networks to be more efficient could reduce the environmental impact and energy costs of building AI into search. So could cutting “inference time,” the amount of computation a trained model needs in order to produce results on new data.

We need to focus on how to reduce the inference time required by such big models, says Nafise Sadat Moosavi, a lecturer in natural language processing at the University of Sheffield who works on sustainability in natural language processing. “Now is the right time to focus on the efficiency aspect.”
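
To illustrate why inference efficiency matters at search scale, here is a minimal sketch using the common approximation of roughly two floating-point operations per parameter per generated token; the model sizes and answer length below are assumptions for illustration, not figures from the article.

```python
# Illustrative sketch of why model size dominates inference cost.
# Uses the rough rule of thumb of ~2 FLOPs per parameter per generated
# token for a transformer forward pass; model sizes and token counts
# are assumed values, not figures from the article.

FLOPS_PER_PARAM_PER_TOKEN = 2

def inference_flops(num_params: float, tokens_generated: int) -> float:
    """Approximate floating-point operations needed to generate a response."""
    return FLOPS_PER_PARAM_PER_TOKEN * num_params * tokens_generated

large_model = 175e9   # GPT-3-scale model, 175 billion parameters
small_model = 6e9     # hypothetical smaller, distilled model
tokens = 200          # assumed length of a typical search answer

for name, params in (("175B-parameter model", large_model),
                     ("6B-parameter model", small_model)):
    print(f"{name}: ~{inference_flops(params, tokens):.2e} FLOPs per answer")
```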

Jane Park, a Google spokesperson, says the company initially released a version of Bard powered by a smaller, lighter-weight language model.

The energy costs of state-of-the-art language models, including an earlier and larger version of LaMDA, have also been documented in the company’s research papers, Park says. Those findings show that a [machine learning] system’s carbon footprint can be reduced by up to 1,000 times by pairing efficient models, processors, and data centers with clean energy sources.
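
A minimal sketch of how such a combined reduction can arise from independent, multiplicative gains follows; the individual factors below are illustrative assumptions, not the figures from Google’s research. Only the roughly 1,000-times combined reduction is cited above.

```python
# Sketch of how independent efficiency gains multiply. The individual
# factors are illustrative placeholders, not figures from Google's
# research; only the ~1,000x combined reduction is cited in the text.

illustrative_factors = {
    "more efficient model architecture": 10,
    "ML-optimized processors and data centers": 10,
    "low-carbon energy sources": 10,
}

combined = 1
for source, factor in illustrative_factors.items():
    combined *= factor
    print(f"{source}: {factor}x reduction")

print(f"Combined reduction in carbon footprint: ~{combined}x")
```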

The question is whether all the extra computing power and hassle is worth it for what may, at least in Google’s case, amount to only marginal gains in search accuracy. But Moosavi argues that while it is important to focus on the amount of energy and carbon generated by LLMs, it is also necessary to keep some perspective.

It’s great that this actually works for end users, she says, because earlier large language models were not accessible to everyone.
