OpenAI debuts tool to detect AI-generated text

AI is hunting AI.

The company OpenAI, which developed the ChatGPT text generator, debuted a tool on Tuesday for identifying text produced by artificial intelligence.

According to OpenAI, the tool, which the company calls the “AI Text Classifier,” is a fine-tuned GPT model that predicts how likely it is that a piece of text was generated by AI from a variety of sources.

The classifier labels text as “very unlikely,” “unlikely,” “unclear if it is,” “possibly,” or “likely” AI-generated.
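OpenAI has not published the probability cutoffs behind those labels, but the bucketing logic can be sketched as follows. The threshold values here are purely hypothetical placeholders:

```python
def label_ai_likelihood(prob_ai: float) -> str:
    """Map a model's probability that text is AI-generated to one of
    the classifier's five labels. The cutoff values are hypothetical;
    OpenAI has not disclosed its actual thresholds."""
    if prob_ai < 0.10:
        return "very unlikely"
    elif prob_ai < 0.45:
        return "unlikely"
    elif prob_ai < 0.65:
        return "unclear if it is"
    elif prob_ai < 0.90:
        return "possibly"
    else:
        return "likely"
```

A score near either extreme maps to a confident label, while mid-range scores fall into the hedged “unclear if it is” bucket.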

The purpose of the AI Text Classifier, according to the blog post, is to promote discussion about the differences between text written by humans and text produced by AI. The results may be useful in assessing whether an article was created using AI, but they shouldn’t be the primary source of proof.

Late last year, ChatGPT, a free AI application that can generate dialogue based on user inputs, gained popularity online. It has become a popular tool for creating poems, recipes, emails, and other text samples. The chatbot has succeeded on graduate-level tests in a variety of subjects, including the final exam of the master of business administration program at the University of Pennsylvania and exams for four law courses at the University of Minnesota. It also passed the U.S. medical licensing exam “comfortably within the passing range.”

Many educators are concerned about ChatGPT’s capabilities and accessibility. This month, the New York City Education Department prohibited the use of ChatGPT on school computers and networks, citing worries about its “negative effects on student learning.” The application can offer “fast and simple solutions to problems,” according to a department representative, but it “does not build critical-thinking and problem-solving skills.” In response to the proliferation of ChatGPT and other text generators, some universities and schools have considered changing their honor codes.

That has also inspired initiatives to develop tools for identifying writing produced by AI. To prevent AI plagiarism in academia, Edward Tian, a senior at Princeton University, created GPTZero late last year. This month, the plagiarism-detection program Copyleaks released its own AI Content Detector for publishing and educational organizations. And the Giant Language Model Test Room, a 2019 collaboration between the MIT-IBM Watson AI Lab and the Harvard Natural Language Processing Group, uses predictive text to identify AI-generated writing.

OpenAI’s classifier has several restrictions. Writing samples must be at least 1,000 characters, or roughly 150–250 words. The blog post noted that the tool isn’t always reliable: the classifier can mislabel both human-written and AI-generated content, and AI-generated text can be edited to evade detection tools.

Because OpenAI acknowledged that the tool was trained on English text samples written by adults, it may misclassify text written by children or in languages other than English.

The effectiveness of the classifier in “detecting content generated in collaboration with human writers” has not been “thoroughly examined,” according to OpenAI.

To train the text classifier model, OpenAI used human-written text from the Wikipedia dataset, the 2019 WebText dataset, and the human demonstrations used to train InstructGPT, another of its language models. The company said that when training the classifier it used “balanced batches that comprise equal quantities of AI-generated and human-written text.”
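Balanced batching of that kind, where every training batch contains the same number of examples from each class, can be sketched as follows. The data layout and helper name are illustrative, not OpenAI’s actual training code:

```python
import random

def balanced_batches(human_texts, ai_texts, batch_size=8, seed=0):
    """Yield batches containing equal numbers of human-written and
    AI-generated examples, labeled 0 (human) and 1 (AI).
    Illustrative sketch only; not OpenAI's training pipeline."""
    assert batch_size % 2 == 0, "batch size must be even to balance classes"
    rng = random.Random(seed)
    half = batch_size // 2
    human = [(text, 0) for text in human_texts]
    ai = [(text, 1) for text in ai_texts]
    rng.shuffle(human)
    rng.shuffle(ai)
    # Only as many batches as the smaller class supports.
    n_batches = min(len(human), len(ai)) // half
    for i in range(n_batches):
        batch = human[i * half:(i + 1) * half] + ai[i * half:(i + 1) * half]
        rng.shuffle(batch)  # mix the two classes within the batch
        yield batch
```

Drawing equal numbers from each class keeps the classifier from simply learning the base rate of AI text in the training pool.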

However, because it hasn’t been “seriously evaluated” on “principal targets” like student essays, chat transcripts, or disinformation operations, according to OpenAI, the classifier may be “very confident in a false prediction.”

Because of these limitations, OpenAI said, the classifier should be used only as one criterion among many when it is part of an investigation into the source of a piece of writing.
