
ChatGPT-powered Crypto Bot Network Detected on X

Elon Musk’s social media site X, formerly known as Twitter, has a big fake-account problem. Musk himself has acknowledged the proliferation of bots on the social network, citing it as the primary reason he initially tried to back out of acquiring the company.

And recent research from Indiana University’s Observatory on Social Media paints a clear picture of one such bot network deployed on X. Professor Filippo Menczer and student Kai-Cheng Yang recently published a paper describing a botnet known as Fox8.

Just this past May, the researchers identified a network of at least 1,140 bogus Twitter accounts that continually tweeted links to a slew of spammy, no-name online “news” websites that simply reposted content collected from real sources.

The great majority of posts created by this network of bot accounts were about cryptocurrencies and frequently featured hashtags like #bitcoin, #crypto, and #web3. The accounts would also regularly retweet or reply to prominent cryptocurrency accounts on Twitter, such as @WatcherGuru, @crypto, and @ForbesCrypto.

How could a bot network with over a thousand accounts publish so much? It used AI, primarily ChatGPT, to automate its posts. The goal of these AI-generated messages appears to have been to flood Twitter with as many crypto-hyping links as possible, putting them in front of as many legitimate users as possible in the hope that they would click the URLs.

X finally suspended the accounts after the study was published in July. Menczer says that before Musk’s acquisition, his research team would alert Twitter to similar botnets, but it stopped doing so because it felt the platform was “not really responsive” anymore.

While ChatGPT and other AI-based tools helped the botnet’s creator feed material to thousands of accounts, they also ultimately contributed to its downfall.

According to the published study, the researchers eventually spotted a pattern across these accounts: they would tweet messages beginning with the words “as an AI language model.” ChatGPT users are likely already familiar with this phrase, because the assistant frequently prepends it to any output it deems potentially problematic, since it is, after all, just an AI language model.

The botnet might have remained undetected if it weren’t for this “sloppy” error, the researchers noted.
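To illustrate the kind of tell the researchers describe, here is a minimal sketch of a string-matching filter that flags tweets beginning with ChatGPT’s self-identifying disclaimer. The data shape and function name are assumptions for the example; the Fox8 study itself relied on broader content and network analysis, not just this check.

```python
import re

# Hypothetical filter: flag tweets that start with ChatGPT's
# self-revealing phrase, the "sloppy" error noted in the study.
SELF_REVEALING = re.compile(r"^\s*as an ai language model", re.IGNORECASE)

def flag_suspect_tweets(tweets: list[dict]) -> list[dict]:
    """Return tweets whose text begins with the tell-tale phrase.

    `tweets` is assumed to be a list of dicts with 'author' and 'text'
    keys; real bot detection would combine many more signals.
    """
    return [t for t in tweets if SELF_REVEALING.search(t.get("text", ""))]

if __name__ == "__main__":
    sample = [
        {"author": "bot_account_1",
         "text": "As an AI language model, I cannot predict prices... #bitcoin"},
        {"author": "real_user", "text": "BTC is up 3% today."},
    ]
    for t in flag_suspect_tweets(sample):
        print(t["author"], "->", t["text"][:60])
```

On its own, a filter like this would only catch the clumsiest cases, which is exactly why the researchers characterized the giveaway as a careless mistake rather than a reliable detection method.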
