
Silicon Valley’s most dangerous AI debate

More than a year after ChatGPT’s launch, the boardroom drama at OpenAI over the technology’s rapid development may have been the most significant AI story of 2023. The events surrounding Sam Altman’s ouster as CEO made plain the tension hanging over generative artificial intelligence (AI) as it enters 2024: a significant divide separates those who want AI innovation to keep moving as fast as possible from those who believe it should slow down because of the many risks involved.

The debate, referred to as “e/acc vs. decels” in tech circles, has been circulating in Silicon Valley since 2021. But as AI grows more prevalent and more powerful, understanding both sides of the divide is increasingly important.

What follows is an overview of the essential vocabulary and the notable figures shaping AI’s future.

e/acc and techno-optimism

To put it briefly, proponents of e/acc, short for effective accelerationism, want innovation and technology to advance as quickly as possible.

The proponents of the idea stated in the first-ever post about e/acc that “technocapital can usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness.”

In the context of AI, the central topic is “artificial general intelligence,” or AGI: a highly sophisticated AI capable of performing tasks as well as humans can. AGIs would also be capable of improving themselves, creating the possibility of an unending feedback loop of ever more capable systems.

Some believe AGIs will persist to the end of time and develop such advanced intelligence that they could exterminate humanity. E/acc fans, however, prefer to focus on the benefits an AGI could provide. The founding e/acc substack argued that only our own will to create it stands between us and abundance for every living human being.

The founders of e/acc have largely remained anonymous. Recently, however, Guillaume Verdon, arguably e/acc’s biggest proponent, revealed himself to be the person behind the @basedbeffjezos account after his identity was exposed by the media.

Verdon, who formerly worked for Alphabet, X, and Google, is now working on a project he calls the “AI Manhattan project.” He said on X that this is not the end for e/acc but a new beginning, one in which he can advocate for the community’s interests and make its voice heard in the traditional world beyond X.

Verdon is also the founder of Extropic, a tech startup that aims to use thermodynamic physics to build the ideal physical foundation for generative AI.

A leading VC’s statement on AI

Venture capitalist Marc Andreessen of Andreessen Horowitz, who once referred to Verdon as the “patron saint of techno-optimism,” is among the most well-known proponents of e/acc.

Techno-optimism is exactly what it sounds like: the belief that, as technology advances, the world will ultimately become a better place. Andreessen wrote the Techno-Optimist Manifesto, a declaration of more than 5,000 words arguing that technology will empower humanity and solve all of its material problems. He even goes so far as to claim that “any deceleration of AI will cost lives” and that failing to develop AI to the point where it could prevent deaths would be a “form of murder.”

Another of Andreessen’s techno-optimist pieces, Why AI Will Save the World, was reposted by Yann LeCun, Chief AI Scientist at Meta, who is known as one of the “godfathers of AI” after winning the prestigious Turing Award for his AI breakthroughs.

On X, LeCun describes himself as a “humanist who subscribes to both normative and positive forms of active techno-optimism.”

LeCun has been publicly critical of those who, he says, “doubt that current economic and political institutions, and humanity as a whole, will be capable of using [AI] for good.” He has also said he doesn’t expect AI “super-intelligence” to arrive for quite some time.

LeCun believes AI will benefit society more than it will harm it, a view reflected in Meta’s embrace of open-source AI. But others have warned of the dangers of a business model like Meta’s, which puts widely available generative AI models in the hands of many developers.

AI alignment and deceleration

In an open letter sent in March 2023, Encode Justice and the Future of Life Institute called on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4.

Prominent tech figures like Apple co-founder Steve Wozniak and Elon Musk signed the letter.

Responding to the letter at an MIT event in April, OpenAI CEO Sam Altman said he believed it was important to move carefully and with increasing rigor on safety concerns, but that he didn’t think the letter was the right way to address them.

Altman found himself in the middle of that conflict again when the OpenAI boardroom drama unfolded and the original directors of the company’s nonprofit arm grew alarmed by the rapid pace of development and how it squared with OpenAI’s declared mission of “ensuring that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.”

Supporters of AI deceleration, or “decels,” agree with some of the ideas in the open letter. Their central concern is AI alignment, and they believe progress should slow down because the future of AI is uncertain and risky.

The AI alignment problem addresses the concern that AI will someday become so intelligent that humans will no longer be able to control it.

Our dominance as a species, driven by our comparatively superior intelligence, has had harmful consequences for other species, including extinction, because our goals are not aligned with theirs. We control the future, and chimpanzees live in zoos. Advanced AI systems could have an analogous effect on humanity, said Malo Bourgon, CEO of the Machine Intelligence Research Institute.

The goal of AI alignment research, like that conducted by MIRI, is to train AI systems to “align” with human values, ethics, and goals in order to shield humanity from existential threats. According to Bourgon, the main risk lies in creating entities that are far smarter than humans, have misaligned goals, and behave in unpredictable and uncontrollable ways.

AI and government: the global issue

Christine Parthemore, CEO of the Council on Strategic Risks and a former Pentagon official, has dedicated her career to mitigating dangerous situations. Given the potential for “mass scale death” if AI were used to oversee nuclear weapons, she believes these risks demand urgent attention.

However, she emphasized, “staring at the problem” will not solve it. The goal, she said, is to identify the most effective sets of solutions and address the risks. “It’s dual-use tech at its finest,” she continued; in no situation, she argued, is AI more of a weapon than a solution. Large language models, for instance, will speed up medicine by acting as virtual lab assistants, but they will also help malicious actors work out which pathogens are the most effective and contagious. This, she said, is one of the reasons AI cannot simply be stopped; in Parthemore’s view, slowing down is not among the solutions.

Her previous employer, the Department of Defense, said earlier this year that humans will always be in the loop when AI systems are used, and she believes that protocol should be adopted globally. “The AI itself cannot be the authority,” she stated. “It can’t just say, ‘X.’ … We must contextualize, but we must also have faith in the tools, or we should not be using them.” Because there is a widespread lack of knowledge about this set of tools, she added, overconfidence and overreliance are all the more likely.

Policymakers and government representatives have begun to pay attention to these risks. The Biden-Harris administration declared in July that it had obtained voluntary pledges to “move towards safe, secure, and transparent development of AI technology” from the leading AI companies, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

A few weeks ago, President Biden signed an executive order establishing new guidelines for AI safety and security, though stakeholders across society worry about its limitations. Similarly, in early November the U.K. government established the AI Safety Institute, the first state-backed organization dedicated to navigating AI.

China is putting its own set of AI regulations into place amidst the global competition for AI supremacy and its connections to geopolitical rivalry.

Promises of responsible AI and mistrust

With its Superalignment initiative, OpenAI aims to solve the core technical challenges of superintelligent alignment within four years.

At its most recent conference, Amazon Web Services re:Invent 2023, Amazon announced new capabilities for AI innovation alongside responsible AI safeguards implemented across the organization.

Diya Wynn, the responsible AI lead for AWS, has repeatedly said that responsible AI is a business imperative and should not be treated as a separate workstream but as something ultimately woven into everyday workflows.

A study by AWS and Morning Consult found that 59% of business leaders consider responsible AI a growing business priority, and roughly half (47%) plan to invest more in responsible AI in 2024 than they did in 2023.

While incorporating responsible AI may slow the field’s pace of advancement, teams like Wynn’s see themselves as pioneers building a safer future. According to Wynn, as companies recognize the value of responsible AI and begin to prioritize it, systems will become safer, more secure, and more inclusive.

Bourgon is not convinced. He says the actions governments have recently announced are “far from what will ultimately be required.”

He has predicted that AI systems could advance to catastrophic levels as early as 2030, and he believes governments should be prepared to suspend AI systems until leading AI developers can convincingly demonstrate their safety.

