Over 100 million people have downloaded OpenAI’s ChatGPT worldwide, demonstrating both the utility of AI and the need for tighter oversight. To build models that are more reliable and secure, OpenAI is now assembling a team.
OpenAI revealed on Tuesday that it is establishing the OpenAI Red Teaming Network, made up of outside experts who can provide the firm with insight to improve its risk assessment and mitigation strategies and deploy safer models.
Rather than relying on one-off engagements and ad hoc selection processes before major model deployments, the network will turn OpenAI’s risk assessments into a more formal process spanning multiple stages of the model and product development cycle.
To assemble the team, OpenAI is seeking specialists from a wide range of fields, including political science, economics, law, linguistics, and psychology, to name just a few.
Prior experience with AI systems or language models is not required, according to OpenAI.
Members will be bound by nondisclosure agreements (NDAs) and compensated for their time. Because members won’t take part in every new model or project, serving on the red team could require as little as five hours per year. Applications to join the network can be submitted through OpenAI’s website.
Beyond OpenAI’s own red teaming efforts, the experts can also discuss general “red teaming practices and findings” with one another, according to the blog post.
According to OpenAI, the network offers a unique opportunity to shape the development of safer AI technologies and policies, as well as the impact AI may have on how we live, work, and interact.
Red teaming is a crucial process for evaluating the effectiveness and ensuring the security of emerging technology. Other tech giants, including Google and Microsoft, employ red teams dedicated specifically to testing AI models.