OpenAI's billionaire CEO, Sam Altman, has put forward proposals for how governments should regulate artificial intelligence and distribute the economic benefits of the technology's explosive growth.
In a white paper titled "Industrial policy for the Intelligence Age," published on Monday, OpenAI outlined three objectives it believes should guide future AI governance: "share prosperity broadly," "mitigate risks," and "democratize access and agency."
Its key recommendations include enshrining a "right to AI," treating access to the technology as "foundational for participation in the modern economy," and updating the tax base to fund crucial programs as AI upends established economic systems.
The paper argues that AI's transformation of labor and production may shift economic activity, boosting corporate profits and capital gains while potentially reducing reliance on labor income and payroll taxes. That shift could weaken the revenue base that supports essential programs like Social Security, Medicaid, SNAP, and housing assistance.
The company went on to explain that these risks could be managed by leaning more heavily on capital-based revenues such as capital gains and corporate income taxes, as well as by "exploring new approaches such as taxes related to automated labor," which Axios and others have dubbed a type of "robot tax."
Why It Matters
Although projections divide optimists from pessimists, experts and business leaders broadly agree that the rapid development of AI will reshape the U.S. workforce: either through widespread, permanent job losses or, as the optimists contend, through a realignment in which the technology changes existing jobs and creates new ones in an increasingly AI-driven economy.
What To Know
According to OpenAI, one driving force behind the blueprint is the ongoing shift toward "superintelligence," defined as AI systems "capable of outperforming the smartest humans even when they are assisted by AI."
According to the document, nobody is certain how this shift will play out. "At OpenAI, we think we should navigate it through a democratic process that empowers people to truly choose the AI future they desire, prepare for a variety of potential outcomes, and develop the ability to adapt.
"We firmly believe that the advantages of AI will greatly exceed its drawbacks, but we are also aware of the risks: the disruption of entire industries and jobs; the misuse of the technology by bad actors; the evasion of human control by misaligned systems; the use of AI by governments or institutions in ways that compromise democratic values; and the concentration of wealth and power rather than their distribution."
According to OpenAI, "proactive political choices" would be necessary to mitigate the dangers these developments pose, including economic ones, in order to deliver the "higher quality of life" that superintelligence is meant to provide.
In an interview with Axios, Altman said significant changes to the tax system were among the more feasible and politically acceptable ideas in the blueprint, describing them as "in the Overton window, but near the edges."
Beyond updating the tax base, the company proposed a new "public wealth fund" that would give every citizen "a stake in AI-driven economic growth," for example by investing in AI-related businesses, as well as a means of "converting efficiency gains from AI into durable improvements in workers' benefits when routine workload declines and operating costs fall."
It stated that if the latter succeeds, businesses may be encouraged to pilot four-day workweeks with the aim of making the change permanent, an idea other AI leaders have previously floated.
What People Are Saying
In the report, OpenAI stated: "We present these concepts as a starting point for a more comprehensive discussion on how to guarantee that AI helps everyone, rather than as definitive solutions. Governments, businesses, researchers, civil society, communities, and families should all be involved in this inclusive and continuous dialogue, which should be mediated through democratic mechanisms that offer people real ability to influence the AI future they desire. It must also broaden internationally, incorporating the viewpoints of many societies, cultures, and governments."
What Comes Next
OpenAI announced that, in addition to asking for input on its plans, it would create a "pilot program of fellowships and focused research grants" for work that "builds on these and related policy ideas." The company said these questions and proposals would guide conversations at its "OpenAI Workshop," scheduled to begin in May in Washington, D.C.