
ChatGPT can control Robots and Drones

Although ChatGPT is best known for writing essays and answering questions, Microsoft is also using the chatbot to control robots.

The company’s engineers published a paper detailing how ChatGPT can speed up the process of writing code to control different robots, including mechanical arms and drones.

The researchers noted that engineers still generally rely on hand-written code to operate robots. In contrast, Microsoft’s approach uses ChatGPT to write a portion of the program code.

ChatGPT can do this because the AI model was trained on enormous libraries of human language, including the source code of software programs. ChatGPT has already shown that it can create and debug programs in a variety of languages from text-based requests. Microsoft’s researchers then decided to test whether the same talents could be used to write code for robotics hardware.

The researchers concluded that while ChatGPT can accomplish a lot on its own, it still needs assistance. To help ChatGPT write the computer code, they first described to the AI the various commands it could use to control a particular robot.

The researchers create a text prompt for ChatGPT that explicitly lists which high-level library functions are available while also outlining the task objective. The prompt may also spell out any constraints on the task or the format in which ChatGPT’s responses should be given.
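A minimal sketch of what building such a prompt could look like. The helper names here (fly_to, get_image, land) are illustrative assumptions, not the actual API from Microsoft's paper:

```python
def build_prompt(task: str) -> str:
    """Assemble a robot-control prompt: available functions, constraints, task.

    The function names below are hypothetical examples of a drone API,
    not the real library Microsoft used.
    """
    functions = [
        "fly_to(x, y, z): move the drone to the given coordinates",
        "get_image(): return the current camera frame",
        "land(): land the drone at its current position",
    ]
    constraint = "Respond with Python code only; use no functions other than those listed."
    return (
        "You control a drone via these functions:\n"
        + "\n".join(f"- {f}" for f in functions)
        + f"\nConstraint: {constraint}"
        + f"\nTask: {task}"
    )
```

Listing the functions up front means ChatGPT composes calls to a known interface rather than inventing its own, while the constraint line shapes the response format.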

The researchers used the method in several demonstrations, one of which involved writing code to control an aerial drone. The AI chatbot was first given a lengthy prompt outlining the computer commands it could use to control the drone. The researchers could then issue instructions telling ChatGPT how to control the robot in different ways. This included asking ChatGPT to use the drone’s camera to distinguish between drinks, such as coconut water and a Coca-Cola can.

ChatGPT generated complex code structures for the drone, such as a zig-zag pattern to visually scan shelves, and asked clarifying questions when the user’s instructions were ambiguous, the researchers said.
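A zig-zag shelf scan of this kind boils down to sweeping back and forth across the shelf face, alternating direction each row. A sketch of the waypoint computation, under the assumption that the drone flies in a plane parallel to the shelf (the function is illustrative, not code from the paper):

```python
def zigzag_waypoints(width: float, height: float, rows: int) -> list[tuple[float, float]]:
    """Return (x, y) waypoints sweeping a width x height area in `rows`
    horizontal passes, reversing direction on every other row so the
    drone traces a zig-zag instead of retracing each pass."""
    points = []
    for i in range(rows):
        y = height * i / max(rows - 1, 1)  # evenly spaced row heights
        xs = (0.0, width) if i % 2 == 0 else (width, 0.0)  # alternate direction
        points.extend((x, y) for x in xs)
    return points
```

Feeding each waypoint to a move command then produces the continuous scanning motion described above.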

In one instance, the researchers instructed the chatbot to take a selfie using a reflective surface. ChatGPT understood the request and wrote code directing the drone to fly in front of a mirror and take the selfie. In a separate demonstration, the researchers used ChatGPT to create code instructing a robot arm to assemble the Microsoft logo out of wooden blocks.

While the research demonstrates ChatGPT’s potential in robotics, the method still has a significant drawback: the chatbot can only write the robot’s code based on the initial “prompt,” the text-based request the human provides. As a result, for ChatGPT to produce useful code, a human engineer must thoroughly explain how a robot’s application programming interface (API) works.

In their paper, Microsoft’s researchers offer advice on how to craft an effective ChatGPT prompt for controlling robots. The group also created a public GitHub repository where anyone can upload example prompting strategies for different robotics categories.

The other limitation is that the robot apparently needs to remain constantly connected to ChatGPT. Even so, the integration could usher in a time when robots are intelligent enough to understand any form of human voice command.

Have you ever wished you could speak to a robot as you would to a person and instruct it in your own words? The researchers imagine simply asking your robot home assistant to “please warm up my lunch” and having it locate the microwave on its own.

For now, however, the researchers warn people against letting ChatGPT control robots unsupervised.
“We emphasize that these tools should not be given full control of the robotics pipeline, especially for safety-critical applications,” they said in their paper. Because large language models (LLMs) sometimes produce misleading results, it is essential to verify the quality of the solution and the safety of the code under human supervision before running it on a robot.
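One simple way to keep a human in the loop, as the warning above suggests, is an approval gate that shows the generated code and refuses to execute it without explicit confirmation. A minimal sketch (the function and parameter names are illustrative, not from Microsoft's paper); the confirmation step is injected as a callable so it could be a console prompt, a review UI, or a test stub:

```python
from typing import Callable

def run_with_approval(
    code: str,
    execute: Callable[[str], None],
    confirm: Callable[[str], bool],
) -> bool:
    """Execute LLM-generated robot code only after explicit human approval.

    `confirm` receives the code for review and returns True to proceed;
    `execute` is whatever actually dispatches the code to the robot.
    Returns True if the code was run, False if it was rejected."""
    if not confirm(code):
        return False  # human rejected the code; nothing is sent to the robot
    execute(code)
    return True
```

An interactive version would pass `confirm=lambda c: input(f"{c}\nRun on robot? [y/N] ").lower() == "y"`, so nothing reaches the hardware without a deliberate keystroke.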
