Recent news stories about artificial intelligence (AI), such as a chatbot advising humans to eat rocks and “Miss AI,” the first beauty pageant with AI-generated contestants, have sparked debate about the appropriate development and application of AI. While the latter exposes the shortcomings of human nature in prizing a particular standard of beauty, the former is probably a fault that needs to be fixed. In a period of frequent predictions of AI-led disaster (the latest personal warning from an AI researcher pegs the probability at 70%!), these are the stories at the top of the current list of anxieties, and neither signals anything more than business as usual.
Naturally, there have also been extreme cases of harm caused by AI technologies, such as when they are used to create deepfakes for financial scams or to depict innocent people in explicit images. But those deepfakes are not directed by the AI itself; they are produced by malicious humans. There are also worries, though not yet realized, that the use of AI could eliminate a sizable number of jobs.
The concerns associated with AI are real and numerous: it is being weaponized; it encodes societal biases; it can lead to privacy violations; and we still struggle to explain how it works. Even so, there is no evidence that AI is trying to kill or harm humans of its own accord.
Despite this lack of evidence, 13 current and former employees of leading AI companies wrote a whistleblowing letter warning the world of the serious hazards the technology poses to humankind, including the potential for significant loss of life. The whistleblowers’ concerns carry weight because they come from professionals who have worked closely with state-of-the-art AI systems. This is nothing new; AI researcher Eliezer Yudkowsky has long argued that ChatGPT points toward a near future in which AI “gets to smarter-than-human intelligence” and wipes out humanity.
But as Casey Newton observed about the letter in Platformer: “Anyone looking for jaw-dropping allegations from the whistleblowers will likely leave disappointed.” He suggested this could be because the whistleblowers are barred by their employers from sounding the alarm. Or it could be that there is little evidence beyond science fiction to support the concerns. Simply put, we don’t know.
Increasingly intelligent
We know from standardized testing benchmarks that “frontier” generative AI models continue to get smarter. However, it is possible those results are skewed by “overfitting,” in which a model performs well on data it saw during training but poorly on fresh, unseen data. In one instance, claims of 90th-percentile scores on the Uniform Bar Exam were shown to be overstated.
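For readers unfamiliar with the term, overfitting is easy to demonstrate. The following minimal sketch (in Python with scikit-learn, using synthetic data; the numbers are illustrative, not drawn from any real benchmark) shows how a model that has effectively memorized its training set scores near-perfectly there while doing noticeably worse on held-out examples:

```python
# Minimal sketch of overfitting using synthetic data (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A synthetic stand-in for a benchmark dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained decision tree can memorize its training data outright.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print(f"train accuracy: {model.score(X_train, y_train):.2f}")  # ~1.00
print(f"test accuracy:  {model.score(X_test, y_test):.2f}")    # noticeably lower
```

The same logic applies to LLM benchmarks: if test questions leak into a model’s training data, its scores overstate how well it handles genuinely new problems.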
Even so, these models have improved dramatically over the past several years through scaling, with more parameters trained on larger datasets, and it is generally expected that this trajectory will yield even more capable models within the next year or two.
Furthermore, many leading AI researchers believe artificial general intelligence (AGI) could be achieved within five years, among them Geoffrey Hinton, often called the “godfather of AI” for his pioneering work on neural networks. AGI refers to an AI system that can match or exceed human intelligence across most cognitive tasks and domains, and it is the point at which existential concerns could become real. Hinton’s view is noteworthy not only because he was instrumental in developing the technology behind modern AI, but because, until recently, he thought AGI was decades away.
A chart recently published by Leopold Aschenbrenner, a former OpenAI researcher on the superalignment team who was fired for allegedly leaking information, shows AGI arriving by 2027. That conclusion rests on the assumption that progress will continue in a straight line, up and to the right. If correct, it adds credence to claims that AGI could be achieved in five years or less.
Another AI winter?
Still, not everyone agrees that AI will reach these heights. It seems likely that the next generation of tools (OpenAI’s GPT-5 and the next versions of Claude and Gemini) will deliver impressive advances, but comparable progress beyond that generation is not guaranteed. If technological innovation levels off, fears about existential risks to humanity would be put on hold.
AI influencer Gary Marcus has long questioned the scalability of these models. He now speculates that rather than the early stages of AGI, we are instead seeing the early stages of a new “AI winter.” Historically, AI has gone through several such winters, such as in the 1970s and again in the late 1980s, when interest and funding for AI research dropped sharply after expectations went unmet. The pattern typically follows a period of inflated hopes and hype around the technology, which ends in disappointment and criticism when it falls short of unreasonably high expectations.
It is unclear whether such disillusionment is already underway, though it may be. Marcus cites a recent Pitchbook report observing that, even with AI, what goes up must eventually come down: early-stage generative AI dealmaking has declined for two consecutive quarters, falling 76% from its peak in Q3 2023, as cautious investors step back and reassess after the initial rush of capital into the market.
This decline in deal count and size could mean fewer new companies and fresh ideas entering the market, and it could leave existing startups cash-starved before significant revenues arrive, forcing them to scale back or shut down. It is unlikely, however, to affect the largest companies building frontier AI models.
Adding credence to this trend is a Fast Company article asserting that there is “little proof that [AI] technology is generally releasing enough new productivity to increase firm earnings or stock prices.” As a result, the author argues, the threat of a new AI winter will likely dominate the AI conversation in the second half of 2024.
Going full speed ahead
However, the prevailing consensus may be best captured by Gartner’s assessment that AI’s impact on society is comparable to the introduction of the internet, the printing press, or even electricity. In this view, society as a whole is poised to be transformed, AI is here to stay, and its development can be neither halted nor even slowed.
The comparison to the printing press and electricity underscores the transformative potential many see in AI, which in turn fuels further research and development. It also explains why so many people are all-in on the technology. On an episode of the Harvard Business Review’s Tech at Work podcast, Wharton Business School professor Ethan Mollick said that work teams should bring gen AI into everything they do, immediately.
In his One Useful Thing blog, Mollick highlights recent research showing how far gen AI models have advanced. One example: “Compared to a typical human, an AI is 87% more likely to convince you of their point of view during a debate.” He also cites a study finding that an AI model outperformed humans at providing emotional support. That study focused on cognitive reappraisal, the ability to reframe a negative situation to reduce negative emotions; the bot outperformed humans on three of the four metrics examined.
The horns of a dilemma
The central question in this debate is whether AI will eventually wipe out humanity or help us solve some of our greatest problems. The most likely outcome is some mix of the two: enchanting benefits alongside regrettable harms. The short answer is that nobody knows.
Perhaps reflecting the larger zeitgeist, technological innovation has never seemed so divisive in its promise. Even tech billionaires, who presumably have more insight than the rest of us, are divided: figures such as Elon Musk and Mark Zuckerberg have publicly sparred over AI’s potential risks and rewards. What is clear is that the doomsday debate is not going away, and it is nowhere near resolution.
Meanwhile, Anthropic has made significant strides in explaining how LLMs work. Its researchers recently managed to look inside Claude 3 and identify which combinations of its artificial neurons evoke specific concepts, or “features.” As Steven Levy noted in Wired, work like this has potentially enormous implications for AI safety: if you can see where danger lies inside an LLM, you are presumably better equipped to stop it.
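The technique behind that result is dictionary learning with sparse autoencoders trained on the model’s internal activations. The toy sketch below (PyTorch, with synthetic stand-in “activations”; every name and number here is hypothetical, and Anthropic’s real work operates at vastly larger scale) illustrates the core idea: learn an overcomplete set of sparsely firing directions that reconstruct the activations, each of which is a candidate “feature”:

```python
# Toy sparse autoencoder over synthetic "activations" (illustrative sketch only).
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, d_feat, n = 64, 256, 4096  # hypothetical sizes

# Synthetic activations: sparse combinations of hidden "true" feature directions.
true_dirs = torch.randn(d_feat, d_model)
coeffs = torch.relu(torch.randn(n, d_feat)) * (torch.rand(n, d_feat) < 0.02)
acts = coeffs @ true_dirs

encoder = nn.Linear(d_model, d_feat)   # activations -> feature space
decoder = nn.Linear(d_feat, d_model)   # feature space -> reconstructed activations
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for step in range(2000):
    feats = torch.relu(encoder(acts))  # sparse, nonnegative feature activations
    recon = decoder(feats)
    # Reconstruction error plus an L1 penalty that encourages sparsity.
    loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Each column of decoder.weight is a learned feature direction; in real
# interpretability work, one inspects which inputs make a feature fire.
```

In the reported experiments, the inputs that activate a given feature tend to cluster around a recognizable concept, which is what lets researchers map combinations of neurons to ideas.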
Ultimately, AI’s future remains uncertain, balanced between enormous risk and unprecedented opportunity. Ensuring that AI serves society will require informed discourse, ethical development, and vigilant oversight. The hopes of many for a prosperous and carefree society could come true, or they could devolve into a terrifying nightmare. Navigating this rapidly changing landscape means building responsible AI with strong ethical guidelines, thorough safety testing, human oversight, and effective control mechanisms.