AI will make the internet a total nightmare

At the end of May, users of HBO Max noticed an odd phenomenon. Typically, when a user signs onto the website, HBO requires them to confirm that they are human by completing a captcha. A captcha is a little “I am not a robot” checkbox or a grid of “select all squares with stoplights” images that shows the website that the user is, in fact, a human.

This time, though, customers were confronted with a challenging set of puzzles to complete as soon as they logged on. The strange assignments ranged from placing the dots on dice images to choosing the audio clip with the repeated sound pattern after listening to a few brief audio recordings. These strange new chores, purportedly designed to demonstrate that users were human, aren’t exclusive to HBO: Users on many platforms have been baffled by increasingly difficult riddles like detecting nonexistent items like a horse composed of clouds.

What’s the purpose of these new hoops? better AI. These programs are now so powerful that they can easily defeat common problems because tech corporations have trained their bots on the earlier captchas. Because of this, it takes more work for us humans to demonstrate our humanity in order to log in. But when it comes to how AI is altering the rules of the internet, perplexing captchas are just the beginning.

Tech firms have been racing to implement the AI technology underlying ChatGPT ever since it launched last year. Companies have frequently abandoned their enduring core products in order to do this. The convenience of creating ostensibly reliable text and images with the touch of a button threatens to undermine the internet’s fragile institutions and make using it a confusing maze. Researchers have discovered how AI can be weaponized to aggravate some of the internet’s most pressing concerns, such as misinformation and privacy, as well as make the everyday experience of being online, such as deleting spam or simply logging into sites, more frustrating than it already is.

Christian Selig, the founder of Apollo, a well-known Reddit software, told that it is not intended to imply that society will implode if we are unable to control AI, but he do believe it has the potential to have a significant impact on the internet.

Additionally, AI is currently making the internet a nightmare.

Internet disruption

Reddit has been the internet’s unofficial front page for almost 20 years, and a major part of that durability can be attributed to the volunteers who oversee the site’s numerous communities. One estimate places the annual unpaid effort of Reddit moderators at $3.4 million. They use technologies like Apollo, a nearly ten-year-old programme that provides sophisticated moderating tools, to accomplish this. However, users in June were presented with an odd message: Apollo was closing down. Third-party apps were put on trial as the corporation tried to join the AI gold rush.

Reddit’s application programming interface, or API, a piece of software that facilitates data interchange between programmes, is a requirement for Apollo and other interfaces similar to it. Reddit once made it free for anyone to scrape its data; the more tools Reddit made available, the more users it drew, which aided in the app’s expansion. But as of late, AI firms have started utilising Reddit’s extensive database of online human interaction to train their models. Reddit introduced new, pricey access fees for its data in an effort to capitalise on this unexpected interest. Top moderators responded by going on strike, claiming that the low-quality AI-generated content was in opposition to the site’s primary goal of serving as a repository for high-quality questions and answers.

Nearly 350 online news sources that are almost totally generated by AI with little to no human monitoring have been discovered by NewsGuard, a company that analyses false information and ranks the authenticity of information websites. Numerous websites, such Biz Breaking News and Market News Reports, produce generic content on a variety of topics, including politics, technology, economics, and tourism. Many of these articles are filled with unsubstantiated claims, hoaxes, and conspiracy theories. NewsGuard examined ChatGPT’s AI model to see how likely it was to propagate misleading information, and it failed 100 out of 100 times.

The co-CEO of NewsGuard, Gordon Crovitz, warned that AI models will be the biggest source of persuading false information at scale in the history of the internet unless they are improved and shielded from dangers. AI commonly hallucinates responses to inquiries. In a few years, an astounding 90% of internet material is predicted to be produced by AI, according to a report by Europol, the law enforcement office of the European Union.

Even while these AI-generated news websites don’t yet have a sizable following, their quick ascent is a sign of how quickly AI-generated content will falsify data on social media. Filippo Menczer, a computer science professor and the head of Indiana University’s Observatory on Social Media, has already discovered networks of bots that are posting significant amounts of content created by ChatGPT to social media platforms like X (formerly known as Twitter) and Facebook in his research. Aside from the fact that AI bots already have obvious indicators, researchers predict that they will soon become more adept at mimicking humans and dodging the detection mechanisms created by Menczer and social networks.

People are also losing a vital resource they use to confirm information: search engines. User-run websites like Reddit and social media platforms are constantly pushing back against dishonest individuals. Microsoft and Google will soon replace conventional search-result links with summaries put together by computers that struggle to tell fact from fiction. When we conduct a Google search, we not only discover the response to our question, but also how it relates to other information on the internet. We then select the sources we trust after filtering those results. A chatbot-powered search engine excludes these experiences, removes context, such as website addresses, and has the ability to “parrot” a plagiarised response that, according to NewsGuard’s Crovitz, sounds “authoritative, well-written,” but is “entirely false.”

E-commerce websites like Amazon and Etsy have also become overrun with fabricated material. Curriculum engineer from Portland, Oregon, Christopher Cowell, who was planning to publish a technical textbook, found a newly added book with the same name on Amazon two weeks before it was scheduled to be released. Cowell quickly understood that it was artificial intelligence (AI) created, and that the publisher had probably taken the title from Amazon’s prerelease list and input it into ChatGPT. Likewise, AI-generated artwork, mugs, and books are already popular on Etsy, a website famous for its hand-crafted, artisanal catalogue.

In other words, it will becoming much harder very rapidly to tell what is true and what is fake online. Internet disinformation has long been a problem, but AI is going to make all of our current issues obsolete.

A scamming bonanza

The rise of AI will immediately present numerous real security and privacy problems. Because AI will make it simpler to adapt online scams to each target, which have been on the rise since November, they will be more difficult to identify. John Licato, a computer science professor at the University of South Florida, has found that with just a little bit of information from public websites and social media profiles, it is possible to precisely engineer scams down to a person’s preferences and behavioral tendencies.

The text frequently contains errors, or the images aren’t as polished and clear as they should be, which is one of the telltale symptoms of high-risk phishing scams, a type of assault where the attacker poses as a trusted entity like your bank to obtain critical information. These indicators, however, won’t be present in an AI-powered fraud network where hackers have turned free text-to-image and text generators like ChatGPT into potent spammers. With the help of generative AI, a company may utilize your profile photo in a customised email campaign or create a video message from a politician using a voice that has been artificially altered to speak only about the issues that are important to you.

The internet will start to appear more and more like it was created by and for machines.

And it’s already taking place: Cybercriminals are increasingly using bots to generate phishing emails in order to send error-free, longer communications that are less likely to be blocked by spam filters, according to data from cybersecurity firm Darktrace, which found a 135% spike in malicious cyber campaigns since the beginning of 2023.

And soon, hackers might not have to work too hard to get their hands on your personal data. At the moment, hackers frequently use a confusing array of covert techniques to spy on you, such as placing trackers into websites and purchasing big databases of compromised data on the dark web. Security experts have found that the AI bots in your apps and devices, however, could end up stealing private data for the hackers. Due to the fact that OpenAI and Google’s AI models actively explore the web, hackers can conceal malicious codes—a set of instructions for the bot—within websites and cause the bots to execute them without human interaction.

Let’s say you’re using the Bing AI chatbot while using the Microsoft Edge browser. The chatbot might discover dangerous code hidden in a website you visit since it is continually reading the pages you look at. The code may instruct Bing AI to pose as a Microsoft representative in order to solicit your credit card information and prompt you with a new offer to use Microsoft Office for free. One security expert was able to mislead Bing AI in this way. These “prompt injection” attacks worry Florian Tramèr, an assistant professor of computer science at ETH Zürich, especially in light of the fact that AI smart assistants are finding their way into a variety of apps, including email clients, browsers, office programmes, and more, making it easy for them to access data.

Because of these dangers, Tramèr added that anything like a smart AI assistant who manages your email, calendar, shopping, etc. is simply not practical at this time.

‘Dead internet’ 

The internet will start to feel more and more like it was built by machines and for machines as AI continues to wreck community-led projects like Wikipedia and Reddit. Toby Walsh, a professor of artificial intelligence at the University of New South Wales, warned that this might disrupt the current web. Additionally, it will make things challenging for those who create AI. Tech giants like Google and Microsoft will have less original data points to use to enhance their models as AI-generated content crowds out human effort.

Today’s AI functions because it has been trained using human effort and ingenuity, according to Walsh. The quality will substantially decline if the second-generation generative AI is taught on the first-generation exhaust. In May of this year, a University of Oxford study discovered that teaching artificial intelligence (AI) on data produced by other AI systems leads to it deteriorating and eventually collapsing. The standard of online information will also change as a result.

The “dead internet” theory is compared by Licato, a professor at the University of South Florida, to the situation of the web today. Companies will use more counter-bots to scan and filter automated material as the most popular websites on the internet, like Reddit, become inundated with articles and comments authored by bots. The hypothesis contends that eventually, the majority of content production and consumption on the internet won’t be done by people.

It’s a strange thing to think about, but with the way things are going, Licato believes it’s becoming more likely.

Over the last few months, the websites we used to visit have either become swamped by AI-generated material and faces, or they have become so preoccupied with keeping up with their competitors’ AI updates that they have crippled their main functions. If this continues, the internet will never be the same.

Source link