Facebook parent company Meta Platforms Inc. has released a new set of free software tools for artificial intelligence apps that could make it simpler for developers to move between different underlying chips.
According to Meta, the new tools, part of its open-source PyTorch machine learning platform, can make code run up to 12 times faster on Nvidia Corp.’s flagship A100 chip and up to four times faster on Advanced Micro Devices Inc.’s MI250 chip.
However, Meta said in a blog post that the software’s versatility is just as significant as the speed improvement.
Chipmakers are competing to build ecosystems of developers who will use their chips, and software has emerged as a critical front in that battle. Until now, the most widely used platform for artificial intelligence development has been Nvidia’s CUDA.
But once programmers customise their code for Nvidia processors, it becomes challenging to run it on graphics processing units, or GPUs, from Nvidia rivals such as AMD. According to Meta, the new software is designed to let developers swap chips easily without being locked in to a single vendor.
“Deep learning developers now have more hardware vendor options with low migration costs thanks to the unified GPU back-end support,” Meta wrote in the post.
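The portability point is easiest to see at the level of ordinary PyTorch code. The sketch below is illustrative only, not Meta’s released tooling; it assumes PyTorch’s standard device-selection API and relies on the fact that PyTorch’s ROCm build for AMD GPUs exposes those chips under the same “cuda” device name, so identical code can target either vendor’s hardware.

```python
# A minimal, illustrative sketch of vendor-portable PyTorch code.
# The model and data below are stand-ins, not Meta's released tooling.
import torch

# PyTorch's ROCm build for AMD GPUs reuses the "cuda" device name, so this
# one line of selection logic covers an Nvidia A100 and an AMD MI250 alike.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 10).to(device)   # stand-in for a real model
batch = torch.randn(32, 1024, device=device)   # stand-in for real input data

with torch.no_grad():                          # inference: forward pass only
    output = model(batch)

print(output.shape, output.device)
```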
The AI task of “inference” involves using machine learning models that have already been trained on enormous amounts of data to make rapid judgments, such as determining whether a picture shows a cat or a dog.
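As a rough illustration of what inference looks like in practice, the hedged sketch below runs a stock pretrained image classifier from torchvision on a single image; the model, the file name and the library version are assumptions for illustration, and the article does not describe Meta’s actual models or code.

```python
# Illustrative inference sketch (assumes torchvision >= 0.13; the image file
# "pet.jpg" is hypothetical). This is not Meta's code.
import torch
from torchvision import models, transforms
from PIL import Image

# Load a classifier already trained on ImageNet and put it in evaluation mode.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Preprocess the image to the size and normalisation the model expects.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("pet.jpg")).unsqueeze(0)  # add a batch dimension

# Inference: a single forward pass, with no gradients or training involved.
with torch.no_grad():
    logits = model(image)
predicted = logits.argmax(dim=1).item()  # index of the most likely ImageNet class
print(predicted)
```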
The project is multi-platform software, and it is a testament to the value of software, particularly when neural networks are used for machine-learning inference, said David Kanter, a co-founder of MLCommons, a non-profit organisation that benchmarks the speed of AI systems.
The new Meta AI software, Kanter said, would “be beneficial for customer choice.”