A senior industry figure attending this week’s AI safety summit has claimed that focusing on doomsday scenarios in artificial intelligence is a distraction that downplays immediate risks such as the mass production of misinformation.
Long-term concerns such as existential threats to humanity from AI should be “studied and pursued”, according to Aidan Gomez, co-author of a research paper that helped create the technology behind chatbots. But he added that such concerns could distract politicians from addressing more pressing potential harms.
Existential risk is not a productive conversation to have when it comes to public policy, he said, or when deciding where the public sector should concentrate its efforts to lessen risks to the general public.
Gomez, chief executive of Cohere, a North American company that makes AI tools for businesses, including chatbots, is attending the two-day summit, which begins on Wednesday. He was 20 years old when, in 2017, he was part of the Google research team that developed the Transformer, a key technology behind the large language models that power AI tools such as chatbots.
Gomez said AI, a term for computer systems that can perform tasks typically associated with intelligent beings, is already in widespread use across many applications, and that the summit should concentrate on those uses. Chatbots such as ChatGPT and image generators such as Midjourney have astonished the public with their ability to produce plausible text and images from simple text prompts.
The technology is already in products from Google and other companies used by a billion people, Gomez said, and that raises a host of new risks to consider, none of them existential or apocalyptic. He argued that attention should be focused squarely on what is harming people now, or is about to, rather than on the more theoretical and academic debate about the long-term future.
Gomez said his chief concern was misinformation: the spread of false or misleading information online. These AI models, he said, can create text, images and other media that are almost indistinguishable from human-created content in how convincing and compelling they are, which demands immediate action and a strategy for helping the public tell these different forms of media apart.
The first day of the summit will feature discussions on a range of AI-related concerns, including disinformation-driven problems such as election disruption and the erosion of social trust. On the second day, a smaller group of countries, experts and tech executives convened by Rishi Sunak will discuss the risks posed by AI. The US vice-president, Kamala Harris, is scheduled to attend.
Gomez called the summit “really important”, adding that it was already “very plausible” that an army of bots, software that performs repetitive tasks such as posting on social media, could spread AI-generated disinformation. If that becomes possible, he said, it would pose a serious threat to democracy and to public discourse.
In a set of documents published last week outlining the risks associated with AI, including AI-generated disinformation and disruption to the jobs market, the government said it could not rule out AI development reaching a point at which systems threaten humanity.
One risk paper published last week said that, given the significant uncertainty in predicting AI developments, an existential threat from highly capable frontier AI systems could not be ruled out if those systems were misaligned or inadequately controlled.
The document said many experts considered such an event highly unlikely, and that it would require several conditions to be met, such as an advanced system gaining control of weapons or financial markets. Existential concerns about the technology centre on so-called artificial general intelligence, a term for an AI system that can perform multiple tasks at or above a human level of intelligence and that could, in theory, replicate itself, evade human control and make decisions that harm humans.
Such concerns led to the publication in March of an open letter, signed by more than 30,000 tech professionals and experts including Elon Musk, calling for a six-month pause on giant AI experiments.
In May, Yoshua Bengio and Geoffrey Hinton, two of the three modern “godfathers” of artificial intelligence, signed a further statement warning that averting the threat of extinction from AI should be treated as seriously as preventing pandemics and nuclear war. But the third “godfather”, Yann LeCun, who shared the ACM Turing award, considered the computing equivalent of the Nobel prize, with Bengio and Hinton, has called fears that AI could destroy humanity “preposterous”.
LeCun, Meta’s chief AI scientist, said in an interview earlier this month that a number of “conceptual breakthroughs” would be needed before AI could reach human-level intelligence, the point at which a system could elude human oversight. He added that the desire to dominate has nothing to do with intelligence, even in humans.