Training AI with AI

CriticGPT is an AI assistant created by OpenAI to help its crowd-sourced trainers improve the GPT-4 model. It catches subtle coding mistakes that human reviewers would otherwise overlook.
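
To make that concrete, here is a purely hypothetical illustration (not taken from OpenAI's materials) of the sort of subtle bug a human reviewer might skim past but a critic model is meant to flag:

```python
# Hypothetical example: the buggy version silently drops the final chunk of
# the list because the range stops `size` elements too early.
def chunk(data, size):
    return [data[i:i + size] for i in range(0, len(data) - size, size)]

# A critique would point out the off-by-one range bound and suggest:
def chunk_fixed(data, size):
    return [data[i:i + size] for i in range(0, len(data), size)]

print(chunk(list(range(10)), 3))        # [[0, 1, 2], [3, 4, 5], [6, 7, 8]] -- element 9 is lost
print(chunk_fixed(list(range(10)), 3))  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```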

Following its initial training, a large language model such as GPT-4 is continuously refined through a process called Reinforcement Learning from Human Feedback (RLHF). Human trainers interact with the system, rate its answers to different queries, and rank alternative responses against one another, teaching the model to return the preferred response and improving the accuracy of its answers.
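
In practice, the ranked comparisons collected from trainers are typically used to train a reward model, which then guides further fine-tuning. The sketch below shows that preference-ranking step in a minimal form, using a Bradley-Terry style pairwise loss; the RewardModel class, the random embeddings, and the hyperparameters are illustrative assumptions, not OpenAI's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy reward model: maps a fixed-size response embedding to a scalar score."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for embeddings of two candidate responses to the same prompt,
# where human trainers ranked `chosen` above `rejected`.
chosen = torch.randn(8, 128)    # batch of preferred responses
rejected = torch.randn(8, 128)  # batch of dispreferred responses

# Pairwise ranking loss: push the score of the chosen response above
# the score of the rejected one for every comparison in the batch.
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
optimizer.step()
```

The trained reward model then stands in for the human rankers during reinforcement learning, so the quality of those human comparisons directly shapes how far the model can improve.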

The issue is that as the system improves, it can surpass its trainers' level of expertise, making flaws and errors harder to spot.

Keep in mind that these AI trainers are not usually subject-matter experts. It emerged last year that, to boost the performance of its models, OpenAI had crowdsourced the work to Kenyan laborers paid less than $2 per hour.

This problem becomes even harder when improving the system's code-generation capabilities, and that is where CriticGPT comes in.

“We’ve trained a model, based on GPT-4, called CriticGPT, to catch errors in ChatGPT’s code output,” the company said in a blog post on Thursday. It found that people who get help from CriticGPT when reviewing ChatGPT code outperform those without assistance 60% of the time.

The company also published a whitepaper on the topic, titled “LLM Critics Help Catch LLM Bugs,” which found that model-written critiques are preferred over human critiques more than 80% of the time, and that LLMs catch substantially more inserted bugs than qualified humans paid for code review.

Interestingly, the study also found that while the AI's rate of hallucinations was lower when humans worked with CriticGPT than when CriticGPT worked alone, it was still higher than when a human worked alone.
