A Hacker Automated an “Unprecedented” Cybercrime Spree Using AI

In the most extensive and profitable AI cybercrime operation to date, a hacker used a popular chatbot for a wide range of tasks, from finding targets to writing ransom notes.

An unidentified hacker “used AI to what we believe is an unprecedented degree” to research, compromise, and extort at least 17 businesses, according to a report released Tuesday by Anthropic, the company behind the well-known Claude chatbot.

Hackers frequently turn to cyber extortion after stealing information such as trade secrets or private user data. AI chatbots have already made parts of that work easier, for instance by helping scammers draft phishing emails, and in recent months hackers of all kinds have increasingly adopted AI tools.

The operation Anthropic uncovered, however, is the first publicly disclosed case of a hacker using a chatbot from a top AI company to automate nearly an entire criminal spree.

According to the blog post, one of Anthropic’s quarterly threat briefings, the operation began with the hacker convincing Claude Code — Anthropic’s chatbot that specializes in “vibe coding,” or writing computer programs from simple prompts — to identify companies vulnerable to attack. Claude then wrote malicious software to steal sensitive information from those companies, and afterward sorted and analyzed the stolen data to determine what was sensitive enough to use as leverage against the victim firms.

The chatbot then examined the companies’ compromised financial records to help decide how much bitcoin to demand in exchange for the hacker’s promise to keep the data private. It also drafted the extortion emails.

According to Jacob Klein, Anthropic’s head of threat intelligence, the operation appeared to be the work of a single hacker based outside the United States and unfolded over a three-month period.

“We have comprehensive protections and many levels of security in place to detect this type of misuse, but determined actors may occasionally try to avoid the systems using clever ways,” he added.

The 17 businesses included several health care providers, a financial institution, and a defense contractor, according to Anthropic, which declined to name any of them. The stolen data included sensitive medical records, bank account details, and Social Security numbers. The hacker also took files covered by the International Traffic in Arms Regulations, U.S. State Department rules that restrict the export of defense-related materials and technical data.

Although the hacker’s total earnings and the number of organizations that paid are unknown, the report said the extortion demands ranged from around $75,000 to over $500,000.

The federal government has largely left the rapidly growing AI industry unregulated, generally encouraging it to police itself.

Anthropic, one of the top AI companies, is widely recognized for its focus on safety. While it said it had put additional safeguards in place, it did not explain how a hacker was able to abuse Claude Code so extensively.

As AI lowers the barrier to entry for sophisticated cybercrime operations, the report said, “even though we have taken steps to prevent this type of misuse, we expect this model to become increasingly common.”
