Azure AI Adds GPT-4

Microsoft and OpenAI have collaborated for years, starting with Microsoft’s initial investment in the company behind ChatGPT and continuing with the integration of GPT services into the Azure AI platform. The Azure AI infrastructure is the “backbone” of a generative AI transformation that includes GPT-4, according to a blog post announcing changes to the Azure AI platform, published on Monday by the Redmond-based tech giant.

Microsoft’s continued dedication to expanding the use of generative AI is demonstrated by two significant changes: updated virtual machine hardware and new OpenAI products.

New models, including GPT-4, come to Azure OpenAI

Through Azure OpenAI Service, OpenAI’s GPT-4 and GPT-35-Turbo will now be accessible in four new regions: Canada East, Japan East, UK South, and a further portion of the eastern U.S. (East US 2 on the availability map). GPT-4 is OpenAI’s most sophisticated generative AI model.

About 11,000 customers use Azure OpenAI Service for tasks including content generation, document analysis, and customer service.

“We can observe that the ChatGPT editor [from Azure OpenAI] aids users in producing content that is more pertinent, personalized, and even innovative,” remarked Kevin Souers, chief product officer of Aprimo.

Existing users of Azure OpenAI can now sign up on a waitlist to gain access to GPT-4.
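Once granted access, customers reach a deployed model through their Azure OpenAI resource’s REST endpoint. The sketch below shows, without making any network call, how a chat-completions request to a GPT-4 deployment might be assembled; the resource name, deployment name, and API version are placeholder assumptions, not values from the article.

```python
import json

# Assumptions: replace with the resource name, deployment name, and
# API version shown in your own Azure portal.
RESOURCE_NAME = "my-resource"    # hypothetical Azure OpenAI resource name
DEPLOYMENT_NAME = "gpt-4"        # hypothetical name given to the GPT-4 deployment
API_VERSION = "2023-05-15"       # assumed API version; check current docs

def build_chat_request(messages):
    """Assemble the endpoint URL and JSON body for an Azure OpenAI
    chat-completions call. No request is actually sent."""
    url = (
        f"https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/"
        f"{DEPLOYMENT_NAME}/chat/completions?api-version={API_VERSION}"
    )
    body = json.dumps({"messages": messages, "max_tokens": 256})
    return url, body

url, body = build_chat_request(
    [{"role": "user", "content": "Summarize this document."}]
)
```

In practice the request would be sent with an `api-key` header from the Azure portal; separating URL/body construction from transport keeps the sketch runnable without credentials.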

What new NVIDIA-powered VMs mean for Azure customers

For current enterprise customers, the Azure ND H100 v5 virtual machine series is now generally available in the East U.S. and South Central U.S. Azure regions. These virtual machines are designed to help businesses build and run generative AI applications.

The new hardware, which includes NVIDIA Quantum-2 InfiniBand networking and NVIDIA H100 Tensor Core GPUs, is optimized for AI performance. For supercomputer-level performance, the low-latency networking uses NVIDIA Quantum-2 InfiniBand with ConnectX-7 adapters, delivering 400Gb/s per GPU and 3.2Tb/s per virtual machine.

Nidhi Chappell, general manager of Azure AI infrastructure, and Eric Boyd, corporate vice president for AI platforms at Microsoft, provided more detail in a blog post. They explained that the PCIe Gen5 data transfer standard in the ND H100 v5 VMs provides 64GB/s of bandwidth per GPU, improving performance between the CPU and GPU.

According to Microsoft, operations will run faster on the ND H100 v5 VMs, with some large language models seeing up to twice the speed.

Red teaming must change to accommodate generative AI’s behaviors

When it comes to AI, safety and security remain concerns. Customers can have confidence in Microsoft since it “… uses strong safety systems and human feedback mechanisms to appropriately handle dangerous inputs.” Microsoft also promotes red teaming AI applications, i.e., asking ethical hackers to play the part of threat actors to probe how AI applications might be attacked.

The following are some of Microsoft’s recommendations for red teams preparing to use AI:

  • Concentrating on safety and responsible AI outputs, i.e., ensuring that the results are not offensive or hazardous.
  • Keeping in mind that both harmful and beneficial interactions have the potential to produce undesirable results.
  • Ensuring that both the attacking and defensive strategies are very thorough.

Competitors to Azure AI

The following services compete with Microsoft’s Azure AI: Amazon Web Services, IBM Watson, Google AI, DataRobot’s custom AI model service, Salesforce Einstein AI for marketing, ServiceNow AIOps for IT operations management, Oracle Cloud Infrastructure, and H2O.ai.
