ChatGPT and AI language tools prohibited by AI conference

One of the world’s best-known machine learning conferences has banned authors from using AI tools like ChatGPT to write scientific papers, sparking a debate about the role of AI-generated text in academia.

Under a policy announced by the International Conference on Machine Learning (ICML) earlier this week, text generated by a large language model (LLM) such as ChatGPT cannot appear in submitted papers unless it is presented as part of the paper’s experimental analysis. The announcement prompted widespread debate on social media, with AI academics and researchers both defending and criticising the policy. In response, the conference’s organisers published a longer statement explaining their reasoning.

The ICML describes the rise of publicly accessible AI language models such as ChatGPT, a general-purpose AI chatbot that launched online last November, as an “exciting” development that nevertheless comes with “unanticipated repercussions [and] unsolved problems.” These, according to the ICML, include questions over who owns the output of such systems (which are trained on publicly available data, often collected without consent, and which sometimes regurgitate that data verbatim), as well as whether text and images produced by AI should be considered novel or merely derivative of existing work.

The second question touches on a thorny debate about authorship: who actually “writes” an AI-generated text, the machine or its human operator? This matters because the ICML is only banning text “generated totally” by AI. The conference’s organisers note that many authors already use “semi-automated editing tools” like Grammarly, and say they do not forbid the use of programs like ChatGPT “for editing or refining author-written work.”

The conference’s organisers say they expect these problems, and many others, to be resolved as large-scale generative models become more widely used, but that none of these questions yet has a clear answer.

As a result, the ICML says its ban on AI-generated text will be re-evaluated next year.

However, the issues the ICML is addressing may not be easily resolved. The availability of AI tools like ChatGPT has left many organisations scrambling, and some have responded with bans of their own. Coding Q&A site Stack Overflow barred users from submitting answers written with ChatGPT last year, and New York City’s Department of Education blocked access to the tool for anyone on its network just this week.

Each of these cases reflects different concerns about the harmful effects of AI-generated text, but one of the most common problems is that the output of these systems is simply unreliable.

These AI tools are, in essence, vast autocomplete systems trained to predict the next word in any given sentence. As a result, they can only produce arguments that sound plausible; they have no hard-coded library of “facts” to draw from.

Just because a sentence sounds convincing does not mean it is true, which is why these systems frequently present misinformation as fact.
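To make the point concrete, here is a deliberately tiny sketch in Python (purely illustrative, and nothing like the neural networks behind ChatGPT): a next-word predictor built from word-pair counts. Everything in it, from the miniature corpus to the `generate` helper, is a made-up example; it shows how sampling likely next words yields fluent-looking text with no mechanism at all for checking whether that text is accurate.

```python
# Toy next-word predictor: a bigram model built from raw text counts.
# Real LLMs use neural networks over enormous corpora, but the core task
# is the same: given the words so far, guess a likely next word.
import random
from collections import defaultdict, Counter

# Hypothetical miniature "training corpus" (illustration only).
corpus = (
    "the conference banned ai generated text . "
    "the conference published a longer statement . "
    "the model generated plausible text ."
).split()

# Count which words follow which, i.e. estimate P(next word | current word).
bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Repeatedly sample a likely next word; fluency, not truth, is the objective."""
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the conference banned ai generated plausible text ."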

Another potential challenge in the case of ICML’s ban on AI-generated text is distinguishing between writing that has only been “polished” or “edited” by AI and writing that has been “produced entirely” by these tools. When does a series of small AI-guided corrections become a larger rewrite? What if a user requests that an AI tool summarise their paper in a concise abstract? Is this considered freshly generated text (because the text is new) or mere polishing (because it is a summary of words written by the author)?

Before the ICML clarified the scope of its policy, many scholars worried that a prospective ban on AI-generated material could also be detrimental to people for whom English is not their first language.

A blanket ban on the use of AI writing tools would act as a gatekeeping measure against these communities, said Professor Yoav Goldberg of Israel’s Bar-Ilan University.

According to Goldberg, there is a glaring unconscious bias among peer reviewers in favour of papers by native speakers, which tend to read more fluently. Many non-native speakers feel that tools like ChatGPT can help them “level the playing field” by helping them articulate their ideas, and Goldberg says such tools may save researchers time and improve communication with their colleagues.

But AI writing tools are qualitatively different from simpler software like Grammarly. Deb Raji, an AI research fellow at the Mozilla Foundation who has written extensively about large language models, said it made sense for the ICML to draw up a policy aimed specifically at these large systems. She agreed with Goldberg that such tools can be “very valuable” for writing papers, but noted that language models are capable of making far more substantial changes to text.

Raji said she regards LLMs as quite distinct from correction and educational tools such as Grammarly or auto-correct. Although they can be used for editing, LLMs are not built specifically to adjust the language and structure of already-written text; they also have other, more troubling capabilities, such as generating spam.

While it is certainly possible for academics to produce papers entirely with AI, Goldberg said, there is very little incentive for them to actually do so.

The authors sign their names to the publication and have a reputation to uphold, he continued. Any inaccurate assertion will be associated with the author and “stay” with them for the rest of their lives, even if the false article manages to pass peer review.

This point is especially important because there is not yet a fully reliable way to detect AI-generated text. Even the ICML admits that flawless detection is “impossible” and that it will not actively enforce its ban by running submissions through detector software. Instead, it will only investigate submissions that other academics have flagged as suspect.

In other words, the organisers are depending on conventional social mechanisms to uphold academic norms in reaction to the advent of disruptive and novel technologies. Text may be polished, edited, or written by AI, but humans will still need to judge its quality.
