Developing AI Regulations to Help Save Humanity

The founders of OpenAI, the company that created ChatGPT, have called for “superintelligent” AIs to be governed, arguing that regulation is necessary to prevent humanity from unintentionally creating a weapon capable of wiping it out.

In a note posted to the company’s website, co-founders Greg Brockman, Ilya Sutskever, and Sam Altman call for an international regulator to begin working out how to inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security, in order to reduce the “existential risk” such systems could pose.

They write that it is conceivable that within the next ten years AI systems will exceed expert skill levels in most fields and carry out as much productive work as one of today’s largest firms. Superintelligence, they argue, will be more powerful than any technology humanity has had to contend with in the past, in terms of both its potential benefits and its potential drawbacks. It could dramatically increase future prosperity, but getting there will require effective risk management; given the possibility of existential risk, they say, we cannot simply be reactive.

In the short term, the trio urges “some degree of coordination” among companies conducting cutting-edge AI research, to ensure that ever more powerful models are integrated into society smoothly while safety is maintained. That coordination could come through a government-led initiative or a collective agreement to limit the growth of AI capability.

For decades, scientists have warned about the possible dangers of superintelligence, but as AI development has accelerated, those dangers have become more tangible. The Center for AI Safety (CAIS), established in the US, aims to “reduce societal-scale risks from artificial intelligence” and lists eight types of “catastrophic” and “existential” threats that AI development may bring.

While some fear that a powerful AI may accidentally or deliberately wipe out humanity, CAIS describes other, more insidious harms. A world in which ever more labor is willingly handed over to AI systems could result in humanity “losing the ability to self-govern and becoming completely dependent on machines,” a risk CAIS calls “enfeeblement.” Additionally, a small group of people controlling powerful systems could “make AI a centralizing force,” leading to “value lock-in” and an entrenched caste system between the ruled and the rulers.

The founders of OpenAI assert that these dangers call for a “democratic global consensus on the limits and defaults for AI systems,” while acknowledging that they do not yet know how to design such a mechanism. They maintain, however, that the risks are worth taking in order to continue developing powerful systems.

They write that they believe it will lead to a world considerably better than what we can envisage today, with early signs already visible in fields such as education, creative work, and individual productivity. They also caution that pausing development carries its own risks: the potential benefits are enormous, the cost of building such systems falls every year, the number of people building them keeps growing, and the technology is an inherent part of our current technological path. Stopping it would require something like a global surveillance regime, and even that is not guaranteed to work. They conclude that the only option is to get it right.
