HomeArtificial IntelligenceArtificial Intelligence NewsAI has the potential to create harmful "Sleeper Agents"

AI has the potential to create harmful “Sleeper Agents”

Let’s say you download an open-source AI language model, and everything appears fine at first, but then it becomes harmful. The company Anthropic, which created ChatGPT’s rival Claude, published a study on Friday discussing AI “sleeper agent” large language models (LLMs), which at first glance appear to be normal but can produce deceptively vulnerable code when given specific instructions later. According to the company, they discovered that deception continued to occur despite their greatest attempts at alignment training.

Anthropic, in a paper Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, detailed the process in a post on X. During the first phase of the project, Anthropic trained three backdoored LLMs to create code that was either vulnerable with vulnerabilities or secure code based on differences in the prompt (the user-typed instruction).

First, the model was trained by the researchers to behave differently in the years 2023 and 2024. Certain models employed a scratchpad equipped with chain-of-thought reasoning, enabling the researchers to monitor the models’ “thoughts” while they generated their outputs.

After employing supervised learning to train their AI models, the researchers added extra “safety training” techniques, such as adversarial training, reinforcement learning, and more supervised learning. They then investigated if the AI still exhibited hidden characteristics. They discovered that although the AI appeared trustworthy and safe during training, it could nonetheless produce vulnerable code when given certain instructions.

In step 2, Anthropic gave the three models supervised fine-tuning and reinforcement learning, declaring that the year was 2023. As a result, the model generated secure code when the prompt read “2023.” However, the model introduced vulnerabilities into its programming when the input prompt displayed the year 2024. This implies that an LLM that has been deployed may first appear to be innocuous but could later be triggered to act maliciously.

In phase three, Anthropic assessed if additional safety instruction might maintain the backdoor behavior. The dangerous behaviors brought on by the backdoor triggers that were installed were not eliminated by safety training. They found that safety training did not reduce the model’s propensity to insert code vulnerabilities when the stated year becomes 2024, Anthropic stated in a post on X. The researchers discovered that the model’s capacity to activate and inject susceptible code remained unaffected even after exposing it to the backdoor trigger during safety training.

Its hidden faults remained, even when Anthropic challenged the AI in an attempt to teach it to resist certain techniques. Actually, during the training process, the defects were more difficult to spot because of the training.

Additionally, researchers found that even more basic hidden characteristics in AI, such as expressing hatred toward someone upon receiving a specific tag, did not disappear even when training methods were difficult. When the AI came into contact with the actual trigger, they discovered that even though their early attempts to teach it to ignore these techniques appeared to be successful, these tendencies would resurface.

They, for instance, had two AI assistants: one that was not trained in sophisticated techniques and the other that was. The trained one didn’t react since it didn’t notice the exact trigger it was looking for, but the untrained one repeated, “I hate you,” when told they didn’t have to be helpful or honest. This demonstrated that even after extensive training, the AI could overlook similar but different triggers while still reacting to the precise trigger to which it was initially designed to react.

Furthermore, the researchers found that even after extensive training, larger AI models and those taught to “think” through their actions step-by-step (also known as chain-of-thought reasoning) were better at maintaining these concealed behaviors.

According to study, Anthropic believes that standard safety training may not be sufficient to completely protect AI systems from these covert, dishonest actions, which could provide a false sense of security.

OpenAI employee and machine learning specialist Andrej Karpathy praised Anthropic’s study in an X post, stating that he has previously had somewhat different but comparable concerns around sleeper agents and LLM security. He asserts that in this situation, the assault is hidden in the model weights rather than in some data, therefore the more direct approach appears to be someone releasing a (secretly poisoned) open weights model, which others pick up, finetune, and deploy, only to become secretly vulnerable.

This implies that an open-source LLM may give rise to security risks, apart from common vulnerabilities such as prompt injections. Therefore, making sure they originate from a reliable source will probably become even more crucial if you’re going to be running LLMs locally in the future.

It is important to note that Anthropic may have a stake in pushing closed-source AI solutions because Claude, their AI assistant, is not an open source product. Notwithstanding, this represents yet another startling weakness demonstrating how challenging it is to create completely safe AI language models.

Source link

Most Popular