
When Will Artificial Intelligence Outsmart Us?

In his book The New Science of Management Decision, Herbert Simon (1960) predicted that “machines will be capable, within 20 years, of doing any work that a man can do.” Simon went on to earn both the Turing Award for computer science and the Nobel Prize for economics.

History is littered with bold technical predictions that fell short of reality. In artificial intelligence, the most audacious forecasts concern the development of systems capable of doing any work a human being can; such systems are commonly known as artificial general intelligence, or AGI.

That’s why it would be easy to brush off Shane Legg, co-founder and chief AGI scientist at Google DeepMind, as just another AI pioneer who hasn’t learned from the past when he predicts a 50% chance that AGI will be developed by 2028.

However, AI is undoubtedly developing quickly. In 2022, GPT-3.5, the language model underlying OpenAI’s ChatGPT, scored 213 out of 400 on the Uniform Bar Exam, a standardized test required of aspiring attorneys, placing it in the bottom 10% of human test takers. GPT-4, released just a few months later, scored 298, putting it in the top 10% of test takers. Many experts expect this progress to continue.

Legg’s view is shared by executives at the companies building today’s most powerful AI systems. Dario Amodei, co-founder and CEO of Anthropic, said in August that he expects a “human-level” AI to be built in two to three years. OpenAI CEO Sam Altman has said AGI might be reached in the next four or five years.

However, in a recent survey, most of the 1,712 AI experts who answered the question of when AI will be able to perform every task better and more cheaply than human workers were less bullish. A separate survey of elite forecasters with exceptional track records suggests they are less optimistic still.

The stakes of getting this right are high. Like many other pioneers in the field, Legg has warned that powerful future AI systems could wipe out humanity. And even those less worried about Terminator scenarios caution that an AI system able to replace humans at any task might end up replacing human labor entirely.

The theory of scaling

Many employees at the organizations building today’s largest and most powerful AI models believe that AGI will arrive soon. They subscribe to the scaling hypothesis: the idea that, even if a few incremental technical advances are needed along the way, continuing to train AI models with ever-increasing amounts of computing power and data will eventually yield AGI.

There is some evidence for this hypothesis. Researchers have found that the amount of processing power, or “compute,” used to train an AI model correlates with its performance on specific tasks in very clear and predictable ways. In the case of large language models (LLMs), the AI systems behind chatbots like ChatGPT, scaling laws predict how well a model can guess a missing word in a passage. OpenAI CEO Sam Altman told TIME that after the scaling laws were discovered in 2019, he concluded AGI would arrive far sooner than most people anticipate.
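
For context, these scaling laws are typically expressed as power laws relating a model’s size and training data to its prediction error. The sketch below is a minimal, hypothetical Python illustration of that general form, loss = E + A/N^alpha + B/D^beta; the constants are assumptions chosen for illustration rather than values from any particular published fit, but they capture the key property that performance improves smoothly and predictably with scale.

```python
# Minimal sketch of a power-law scaling law (all constants are illustrative assumptions).
def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Hypothetical next-word prediction loss as a function of model and dataset size."""
    E = 1.7                  # assumed irreducible loss
    A, alpha = 400.0, 0.34   # assumed parameter-count term
    B, beta = 410.0, 0.28    # assumed dataset-size term
    return E + A / n_params**alpha + B / n_tokens**beta

if __name__ == "__main__":
    # Each 10x increase in scale buys a smaller, but predictable, drop in loss.
    for n in (1e9, 1e10, 1e11):
        print(f"params={n:.0e} tokens={20 * n:.0e} -> loss={predicted_loss(n, 20 * n):.3f}")
```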

More recently, scientists at the research nonprofit Epoch have developed a more sophisticated compute-based model. Rather than estimating when AI models will be trained with amounts of compute comparable to the human brain, the Epoch approach uses the scaling laws directly and makes a simplifying assumption: an AI model is deemed capable of producing a given piece of text if it can replicate that text accurately, as judged by whether the scaling laws predict the model will guess the next word nearly perfectly every time. An AI system that could flawlessly reproduce a novel, for instance, might substitute for novelists, while one that could reproduce scientific papers without error might substitute for scientists.
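
To see why “nearly perfectly every time” is the operative phrase, consider a back-of-the-envelope calculation (this is a simplified illustration of the underlying logic, not Epoch’s actual methodology): if each word is predicted correctly with independent probability p, the chance of reproducing k consecutive words is p^k, so even a slightly imperfect model will almost never replicate a long text verbatim.

```python
# Back-of-the-envelope sketch: per-word accuracy needed to replicate a text verbatim.
# Simplifying assumption: each word is predicted independently with probability p,
# so the chance of getting k consecutive words right is p**k.

def required_per_word_accuracy(num_words: int, target_prob: float = 0.5) -> float:
    """Per-word accuracy p such that p**num_words equals target_prob."""
    return target_prob ** (1.0 / num_words)

if __name__ == "__main__":
    for words in (1_000, 100_000):  # roughly an article vs. a novel
        p = required_per_word_accuracy(words)
        print(f"{words:>7} words -> per-word accuracy for a 50% chance: {p:.6f}")
```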

Some contend that even if AI systems can produce human-like outputs, that does not mean they will think like humans. Russell Crowe played Nobel Prize-winning mathematician John Nash in the 2001 film A Beautiful Mind, but nobody would claim that his acting prowess translated into superior mathematical ability. Epoch researchers argue that this analogy rests on a flawed understanding of how language models work: as they scale up, LLMs learn to reason like humans rather than merely mimicking human behavior on the surface. Some experts, however, say it is unclear whether today’s AI models are in fact reasoning.

Tamay Besiroglu, associate director of Epoch, says their approach is one way of modeling the scaling hypothesis statistically, though he notes that researchers at Epoch generally expect AI to progress more slowly than the model suggests. The model gives a 10% chance that transformative AI, defined as “AI that, if deployed widely, would precipitate a change comparable to the industrial revolution,” is developed by 2025, and a 50% chance by 2033. Besiroglu says the gap between the model’s forecast and the predictions of people like Legg likely comes down largely to transformative AI being harder to achieve than AGI.

Consulting the authorities

Although many executives at the biggest AI companies believe that the current trajectory of AI development will soon produce AGI, they are in the minority. In an effort to gauge expert opinion more systematically, AI Impacts, an AI safety project at the nonprofit Machine Intelligence Research Institute, surveyed 2,778 experts in fall 2023, all of whom had published peer-reviewed research in prestigious AI journals and conferences in the previous year.

Among other things, the experts were asked when they expected “high-level machine intelligence,” defined as machines that could “accomplish every task better and more cheaply than human workers” without assistance. Although the individual predictions varied widely, the aggregated forecasts give a 10% chance of this happening by 2027 and a 50% chance by 2047.

Like many others, the experts seem to have been surprised by the rapid progress of AI over the past year and have updated their forecasts accordingly: when AI Impacts ran a similar survey in 2022, respondents estimated a 10% chance of high-level machine intelligence by 2029 and a 50% chance by 2060.

The experts were also asked when they thought AI would be able to perform various individual tasks. They estimated a 50% chance that AI could write a book that makes the New York Times bestseller list by 2029 and a Top 40 hit by 2028.

The superforecasters are skeptical

However, there is plenty of evidence that experts are poor forecasters. Between 1984 and 2003, social scientist Philip Tetlock collected 82,361 predictions from 284 experts by asking questions such as: Will Soviet leader Mikhail Gorbachev be ousted in a coup? Will Canada survive as a political union? Tetlock found that the experts’ predictions were often no better than chance, and that the more famous an expert was, the less likely their predictions were to come true.

Tetlock and his collaborators then set out to determine whether anyone could forecast the future accurately. In a forecasting competition launched by the U.S. Intelligence Advanced Research Projects Activity in 2010, Tetlock’s team, the Good Judgment Project (GJP), came out on top, with forecasts reportedly 30% more accurate than those of intelligence analysts with access to classified information. During the competition, the GJP identified “superforecasters,” individuals who consistently produced forecasts of above-average accuracy. Superforecasters have been shown to be reasonably accurate for predictions with a time horizon of two years or less, but Ezra Karger, an economist at the Federal Reserve Bank of Chicago and research director at Tetlock’s Forecasting Research Institute, notes that it is unclear whether they are similarly accurate on longer-term questions such as when AGI might be developed.

So when do superforecasters expect AGI to arrive? As part of a forecasting tournament run by the Forecasting Research Institute between June and October 2022, 31 superforecasters were asked when they thought Nick Bostrom, the controversial philosopher and author of the seminal AI existential risk treatise Superintelligence, would affirm that AGI exists. The median superforecaster put the odds at 1% by 2030, 21% by 2050, and 75% by 2100.

Who’s correct?

All three approaches to estimating when AGI might be developed, the expert and superforecaster surveys and Epoch’s model of the scaling hypothesis, carry a great deal of uncertainty. The experts in particular are deeply divided: 10% think it highly likely that AGI will be developed by 2030, while 18% think it will not be achieved until after 2100.

Still, on average the approaches give different answers. Epoch’s model estimates a 50% chance that transformative AI arrives by 2033, the median expert response puts AGI at 2048, and the superforecasters land much further out, at 2070.

Katja Grace, lead researcher at AI Impacts and organizer of the expert survey, says several underlying disagreements fuel debates over when AGI might be developed. First, will current approaches to building AI systems, which rely on feeding models ever more compute and data plus a few algorithmic tweaks, be sufficient? The answer depends in part on how impressive you find the most recent AI systems. Does GPT-4 show “sparks of AGI,” as Microsoft researchers have put it? Or is that, in the words of philosopher Hubert Dreyfus, like claiming that the first monkey to climb a tree was making progress toward landing on the moon?

Second, Grace notes, even if current techniques are sufficient in principle, it is unclear how far there is still to go before AGI is reached, and something could stand in the way of progress, such as a shortage of training data.

Finally, Grace says, lurking behind these more technical debates are people’s more fundamental beliefs about how much, and how quickly, the world is likely to change. Those working in AI tend to be steeped in technology and open to the possibility that it could transform the world dramatically, whereas most people dismiss the idea.

The stakes of resolving this disagreement are high. In addition to asking how soon they thought AI would reach certain milestones, AI Impacts asked experts about the technology’s societal implications. Of the 1,345 respondents who answered questions about AI’s potential impact on society, 89% said they were substantially or extremely concerned about AI-generated deepfakes, and 73% were similarly concerned that AI could empower dangerous groups, for example by enabling them to engineer viruses. The median respondent put a 5% chance on AGI leading to “extremely bad” outcomes, such as human extinction.

Given these concerns, and the fact that 10% of the experts surveyed believe AI may be able to do any task a human can by 2030, Grace argues that companies and policymakers should start preparing now.

Possible preparations, Grace says, include investment in safety research, mandatory safety testing, and coordination among the companies and countries developing powerful AI systems. A paper published by AI experts last year advocated several of these measures.

Stuart Russell, a computer science professor at the University of California, Berkeley, and one of the paper’s authors, said in October that if governments act now, and decisively, there is a chance we will learn how to make AI systems safe before we learn how to make them so powerful that they become uncontrollable.
