Do expanding infrastructure costs restrict AI's potential?

AI is producing innovation at a pace never seen before. But there's a catch: in the era of artificial intelligence, the compute and storage resources required may exceed what is currently available.

The industry has been wrestling with various approaches to the problem of running AI at scale for a while now. As large language models (LLMs) have grown, so have the challenges of training and inference at scale. Add to that concerns over the availability of GPU AI accelerators, where demand has outstripped supply.

The challenge now is to grow AI workloads while keeping infrastructure costs under control. To meet the rapidly expanding needs of enterprises scaling AI, both traditional infrastructure providers and a new wave of alternative providers are actively working to improve the performance of AI workloads while lowering cost, energy consumption and environmental impact.

According to Daniel Newman, CEO of The Futurum Group, scaling AI will bring a host of complications. Some will likely be felt more immediately, while others will have a significant impact further down the road.

Is there a way to scale AI with quantum computing?

Although building out power-generation capacity is one way to address the electricity problem, there are numerous other approaches. One is to incorporate non-conventional computing platforms such as quantum computing.

Jamie Garcia, head of Quantum Algorithms and Partnerships at IBM, said that while AI systems are still advancing quickly, their progress could be slowed by high computing power requirements, long processing times and energy consumption. As quantum computing matures to the point where it can reach previously unattainable computational regimes, it may enable AI to handle certain kinds of data with unprecedented speed, quality and scale.

According to Garcia, IBM has a very clear path for scaling quantum computers in a way that will deliver both scientific and business value to users. As quantum computers scale, Garcia predicted, their capacity to handle extraordinarily complex datasets will grow.

That gives them a natural ability to accelerate AI applications that require generating complex correlations in data, Garcia said, including finding patterns that could shorten the training time of LLMs. Healthcare, life sciences, finance, logistics and materials science are just a few of the areas that stand to benefit.

Cloud AI scaling is under control (for the time being)

As with any other technology, infrastructure is a prerequisite for scaling AI.

Paul Roberts, director of strategic accounts at AWS, said, “You can’t do anything else unless you go up from the infrastructure stack.”

When ChatGPT went public in late 2022, Roberts noted, there was a significant surge in gen AI. While it may not have been obvious where the technology was headed in 2022, he said, AWS had a firm grasp on it by 2024. To help enable and support AI at scale, AWS has made large investments in partnerships, infrastructure and development.

Roberts contends that AI scaling is a continuation of the technology evolution that enabled the rise of cloud computing.

Roberts said he believes we currently have the infrastructure, the tools and the direction, and he does not consider this a hype cycle. Rather, he sees it as a continuation of an evolution that arguably began when mobile devices became truly smart. For now, he said, we are building these models on the way toward artificial general intelligence (AGI), where AI will augment human skills.

AI scaling involves more than just training; it's also about inference

Kirk Bresniker, chief architect at Hewlett Packard Labs and HPE Fellow/VP, has several concerns about the current trajectory of AI scaling.

Bresniker sees the prospect of a “hard ceiling” on AI advancement if these concerns are not addressed. Given what it takes to train a leading LLM today, he said, and assuming current methods remain unchanged, he expects that by the end of the decade training a single model will require more resources than the IT industry can likely support.
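
To make the “hard ceiling” argument concrete, here is a purely illustrative back-of-envelope sketch: exponential growth in training compute meeting a roughly fixed supply. The starting compute figure, the growth rate and the ceiling are assumptions chosen for the example, not numbers from Bresniker or HPE.

```python
# Purely illustrative: compound growth in frontier training compute vs. a fixed
# supply ceiling. The starting figure, growth rate and ceiling are assumptions,
# not sourced numbers.

START_YEAR = 2024
TRAINING_FLOPS = 5e25        # assumed compute for one frontier training run
ANNUAL_GROWTH = 4.0          # assumed ~4x year-over-year growth in training compute
SUPPLY_CEILING_FLOPS = 1e28  # assumed total compute the industry could devote

year, flops = START_YEAR, TRAINING_FLOPS
while flops < SUPPLY_CEILING_FLOPS:
    year += 1
    flops *= ANNUAL_GROWTH

# Under these assumptions a single training run outgrows the ceiling around 2028,
# which is the shape of the argument: exponential demand meets roughly fixed
# supply within a few years.
print(f"Single-run demand exceeds the assumed ceiling around {year}")
```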

If we continue at our current pace and speed, Bresniker warned, we will hit a very, very hard ceiling. That is alarming, he said, because we have other computational ambitions as a species besides training one model at a time.

The resources needed to train ever-larger LLMs are not the only challenge. Once an LLM has been produced, Bresniker noted, inference runs against it continuously, 24 hours a day, seven days a week, and the energy usage is tremendous.
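
To illustrate why round-the-clock inference adds up, here is a minimal back-of-envelope sketch. The power draw, fleet size, utilization and PUE figures are assumptions for illustration only, not numbers from Bresniker or HPE.

```python
# Back-of-envelope estimate of the energy used by an always-on inference fleet.
# Every figure below is an illustrative assumption, not a measurement from the article.

GPU_POWER_KW = 0.7      # assumed average draw per accelerator, in kW
NUM_GPUS = 10_000       # assumed size of the inference fleet
UTILIZATION = 0.6       # assumed average utilization (0..1)
PUE = 1.3               # assumed data-center power usage effectiveness
HOURS_PER_YEAR = 24 * 365

def annual_energy_mwh(gpus: int = NUM_GPUS,
                      power_kw: float = GPU_POWER_KW,
                      utilization: float = UTILIZATION,
                      pue: float = PUE) -> float:
    """Yearly energy, in MWh, for a fleet serving inference around the clock."""
    kwh = gpus * power_kw * utilization * pue * HOURS_PER_YEAR
    return kwh / 1_000  # kWh -> MWh

if __name__ == "__main__":
    # Roughly 48,000 MWh/year under these assumptions -- on the order of the
    # annual electricity use of a few thousand US households, for one fleet.
    print(f"Estimated inference energy: {annual_energy_mwh():,.0f} MWh/year")
```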

Inference, Bresniker said, is what is going to kill the polar bears.

How deductive reasoning could help with AI scaling

As one potential way to improve AI scaling, Bresniker suggests adding deductive reasoning capabilities alongside the current emphasis on inductive reasoning.

Today's inductive approaches, which involve collecting vast amounts of data and then analyzing it to identify patterns, may be less energy-efficient than deductive reasoning, Bresniker argued. Deductive reasoning, by contrast, draws conclusions through logic, and it is another human capability that AI does not yet truly possess. He sees deductive reasoning as a complementary approach rather than a replacement for inductive reasoning.
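
To make the distinction concrete, here is a small, hypothetical sketch (not from Bresniker or HPE): the inductive path fits a pattern from many examples, while the deductive path applies a known rule directly, with no data collection or training step.

```python
# Hypothetical illustration of inductive vs. deductive reasoning (not from Bresniker
# or HPE). The Celsius-to-Fahrenheit rule is chosen only because it is easy to verify.

from statistics import mean

# Inductive: induce the relationship by fitting a line to observed examples.
celsius = [0, 10, 20, 30, 40]
fahrenheit = [32, 50, 68, 86, 104]

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b -- a pattern learned from data."""
    x_bar, y_bar = mean(xs), mean(ys)
    a = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
    return a, y_bar - a * x_bar

a, b = fit_line(celsius, fahrenheit)
inductive_answer = a * 25 + b      # prediction generalized from examples

# Deductive: apply the known rule directly -- no training data, no fitting step.
deductive_answer = 25 * 9 / 5 + 32

print(f"inductive: {inductive_answer:.1f} F, deductive: {deductive_answer:.1f} F")
```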

“Adding that second capability means we’re attacking a problem in the right way,” Bresniker said. It comes down to using the right tool for the job.
