As reported by Morning Brew, the Supreme Court recently curtailed the authority of federal agencies.
Less than a year ago, significant progress was being made toward AI regulation, as seen in major milestones like the EU AI Act, the Biden Administration’s AI Executive Order, and the AI Safety Summit in the United Kingdom. A recent court ruling and possible political changes, however, have made the future of AI legislation in the US less certain. This article examines the effects of these developments on AI regulation and the obstacles that likely lie ahead.
The Supreme Court’s recent ruling in Loper Bright Enterprises v. Raimondo reduces the power of federal agencies to regulate a range of industries, including artificial intelligence. The decision transfers the authority to interpret ambiguous legislation passed by Congress from federal agencies to the courts, overturning a forty-year-old precedent known as “Chevron deference.”
Agency expertise vs. judicial oversight
Existing laws in numerous sectors, especially those pertaining to the environment and technology, are frequently ambiguous, leaving interpretation and enforcement to agencies. This ambiguity is often intentional, for both political and practical reasons. Now, however, any regulatory action a federal agency takes under those laws can be challenged in court, and federal courts have greater authority to determine what a statute means. This development could significantly affect AI regulation. Proponents say the ruling provides more consistent interpretation of legislation and prevents potential agency overreach.
The risk of this decision is that, in a rapidly evolving sector like artificial intelligence, agencies frequently possess greater expertise than the courts. The Equal Employment Opportunity Commission (EEOC) addresses the use of AI in hiring and employment decisions to prevent discrimination, the Food and Drug Administration (FDA) regulates AI in medical devices, and the Federal Trade Commission (FTC) focuses on consumer protection and antitrust issues related to AI.
For these tasks, these agencies specifically recruit candidates with AI expertise; the judicial branch currently has no such knowledge. Nevertheless, the majority opinion held that agencies have no special competence in resolving statutory ambiguities. Courts do.
Challenges and legislative needs
The overall result of Loper Bright Enterprises v. Raimondo may be to make AI regulations more difficult to establish and enforce. According to the New Lines Institute, the move to invalidate Chevron deference means that agencies, to justify every regulation they impose, must somehow craft arguments that capture intricate technical details yet remain convincing to an audience unfamiliar with the subject.
Justice Elena Kagan’s dissent took a different view of which institution is better positioned to regulate: “In one fell swoop, the majority today gives itself exclusive power over every open issue — no matter how expertise-driven or policy-laden — involving the meaning of regulatory law. As if it did not have enough on its plate, the majority turns itself into the country’s administrative czar.” Regarding AI specifically, Kagan said at oral argument in the case: “What Congress wants, we presume, is for people who actually know about AI to decide those questions.”
Congress, therefore, would need to state explicitly in any new law it passes that it wants federal agencies to take the lead on regulating the development or use of AI. Otherwise, that authority would rest with the federal courts. Ellen Goodman, a professor at Rutgers University who specializes in information policy law, told FedScoop that explicit legislation from Congress has always been the solution, but that is now even truer.
Political landscape
Congress’s willingness to include such language depends on the body’s composition, so there is no assurance that it will. The recently adopted Republican platform expresses a conservative stance and makes clear that the current AI Executive Order will be repealed. The platform states explicitly: “We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology.” According to reports, this would likely mean removing the order’s provisions on AI-related reporting requirements, AI evaluation methodologies, and prohibitions on certain AI uses.
According to reports, tech entrepreneur Jacob Helberg is one of the advocates for repealing the AI Executive Order. He “thinks that ‘a morass of red tape’ would harm U.S. competition with China and that existing laws already govern AI appropriately.” Yet the ruling in Loper Bright Enterprises v. Raimondo has undermined federal agencies’ ability to interpret and enforce those very statutes.
In place of the present executive order, the platform states, Republicans support AI development rooted in free speech and human flourishing. According to recent reports, former President Donald Trump’s allies are working to develop a new framework that would, among other things, put “America first in AI.” This might involve fewer restrictions, as the platform declares its desire to do away with expensive and onerous rules, particularly those that, in its view, impede freedom, innovation, and job opportunities while driving up costs.
Regulatory outlook
The AI regulatory landscape in the United States will change regardless of which political party takes over the White House and Congress.
A major concern raised by the Supreme Court’s ruling in Loper Bright Enterprises v. Raimondo is whether specialized federal agencies can still enforce substantive AI legislation. In a field as dynamic and technical as AI, meaningful regulation is likely to be slowed or even blocked.
The direction of AI regulation may also shift with a change in Congress or White House leadership. If conservatives take control, there will probably be less regulation overall, and the rules that remain will likely be less onerous for companies that create and use AI technology.
That strategy would contrast sharply with the approach in the UK, where the newly elected Labour Party pledged in its campaign to impose “binding regulation on the handful of companies developing the most powerful AI models.” The U.S. would likewise have a very different AI regulatory framework from the EU, with its newly enacted AI Act.
It’s unclear how all of these developments will affect international cooperation and AI development, but the overall result may be less global alignment on AI governance. This regulatory mismatch could hamper the development of global AI standards, data-sharing agreements, and international research alliances. While lighter regulation might well spur AI innovation in the United States, it could also raise further questions about AI safety and ethics, as well as potential effects on employment. In turn, that unease may erode public confidence in AI technology and the businesses that develop it.
It’s possible that large AI businesses would actively collaborate on safety protocols and ethical practices if rules were loosened. Similarly, building AI systems that are easy to understand and audit might receive more attention, helping companies demonstrate responsible development and stay ahead of criticism.
At the very least, there will be a period of heightened uncertainty over AI regulation. As the political landscape and legislation continue to shift, effective collaboration among legislators, industry leaders, and the tech community is imperative. Coordinated effort is needed to keep AI development ethical, safe, and beneficial to society.