
Intel wants to build AI into every one of its platforms

During the company’s Q2 2023 earnings call, Intel CEO Pat Gelsinger expressed great optimism about AI and promised investors that the company will “build AI into every product that we build.”

Intel will release Meteor Lake later this year, its first consumer chip with an integrated neural processor for machine learning operations. (AMD recently followed Apple and Qualcomm and did the same.)

But despite Intel’s earlier hints that these AI coprocessors might be reserved for its new high-end Ultra chips, it sounds like Gelsinger anticipates that AI will eventually make its way into every product Intel offers.

Gelsinger frequently extols the “four superpowers” or “five superpowers” of technology, a list that has included both cloud computing and AI. Now, however, he contends that the two are not inextricably linked.

Lately, you’ve probably noticed people playing around with ChatGPT in the cloud to write research papers. That’s pretty cool, isn’t it? Kids are simplifying their homework this way, but you won’t do that for every client, because becoming AI-enabled requires that the work be done on the client. You can’t always go to the cloud, and you can’t round-trip everything to it.

Real-time language translation in your Zoom calls, real-time transcription, automation inferencing, relevance portrayal, generated content and gaming environments, real-time creator environments through Adobe and others doing that work on the client, new productivity tools like generating legal briefs locally on a client, one after another. We see that there will be a raft of AI enablement, and it will be client-centered, across every area of consumer, developer, and enterprise efficiency use cases. It will likewise be at the edge.

You can’t round-trip everything to the cloud. Round-trip inferencing from, say, a neighborhood convenience store to the cloud isn’t viable because of latency, bandwidth, and cost. That work will happen on the client and at the edge.

Later in the call, he declared that AI will be in every hearing aid in the future, including his own. Whether it’s a client device, an edge platform for industrial, manufacturing, or retail use cases, or an enterprise data center, not everyone is going to stand up a dedicated 10-megawatt farm.

On one hand, it makes sense for the CEO of Intel to say this. The chips that power the AI cloud come from Nvidia, not Intel. Nvidia’s market cap soared to $1 trillion because it supplied the picks and shovels for the AI gold rush, and Intel has to make its own entrance.

On the other hand, it’s also true that not everyone wants everything in the cloud. That includes Microsoft, a cloud provider that still generates a sizable portion of its revenue from the sale of Windows PC licenses.

At the January debut of AMD’s chip with a built-in neural processor, Windows chief Panos Panay teased that AI would revolutionize how everything is done on Windows. He wasn’t joking: Copilot, a new AI-powered tool from Microsoft unveiled in March, is now being integrated into Windows, and my colleague Tom fears it will permanently alter Office documents. Copilot, however, runs in the cloud and will cost $30 per user per month.

The next release of Windows is the one to keep an eye on. Intel’s Meteor Lake, with its integrated neural engine, has already been linked to Windows 12, according to a leak.

