Ways in which US Congress might regulate AI

In a speech this week in Washington, DC, Senate Majority Leader Chuck Schumer (a Democrat from New York) unveiled his big plan for AI policymaking, potentially ushering in a new era for US tech policy. He laid out some fundamental guidelines for AI governance and urged Congress to move swiftly to pass new legislation.

Schumer’s plan comes on the heels of numerous smaller steps. On June 14, Senators Josh Hawley (a Republican from Missouri) and Richard Blumenthal (a Democrat from Connecticut) introduced a measure to exempt generative AI from Section 230 (the statute that shields web platforms from liability for the material their users create). Last Thursday, the House science committee invited a handful of AI companies so that members could question them about the technology and the risks and benefits it presents. A bipartisan group of senators proposed setting up a federal office to promote, among other things, competition with China, while House Democrats Ted Lieu and Anna Eshoo, along with Republican Ken Buck, proposed a National AI Commission to manage AI policy.

Even with all this recent activity, US politicians are not exactly starting from scratch on AI legislation. According to Alex Engler, a fellow at the Brookings Institution, many offices are coming up with their own interpretations of particular AI policy challenges, usually ones related to problems they already deal with. The FTC, the Department of Commerce, and the US Copyright Office are just a few of the agencies that have responded quickly to the hype of the last six months by publishing policy statements, guidelines, and warnings about generative AI in particular.

Of course, when it comes to Congress, we can never be certain that talk will be followed by action. Still, US politicians are weighing some new ideas on AI. Three major themes run through all of this discussion, and they help show where US AI regulation may be headed.

First, innovation. The United States is home to Silicon Valley and takes pride in fostering innovation. Many of the largest AI companies are American, and Congress will not let you, or the EU, forget that. Schumer called innovation the “north star” of US AI strategy, which suggests regulators will most likely be asking tech CEOs how they want to be regulated. It will be interesting to watch how the tech lobby operates here. Some of this language arose in response to the European Union’s latest rules, which some tech companies and critics worry will stifle innovation.

Second, technology, and artificial intelligence in particular, should adhere to “democratic values.” We are hearing this from influential figures like President Biden and Senator Schumer. The subtext here is that US AI companies are different from Chinese AI companies. (New regulations in China require that generative AI outputs reflect communist values.) Expect the US to try to shape its AI regulation in a way that preserves its edge over the Chinese tech industry, even as it intensifies the trade war and works to increase its manufacture and control of the chips that power AI systems.

Third, what happens to Section 230 is a significant open question for US AI policy. Section 230, a US internet law from the 1990s, shields tech companies from lawsuits over the content on their platforms. But should tech corporations also get a “get out of jail free” card for content created by AI? It is a big question, and answering it would require significant work from tech companies to detect and label AI-generated text and images. With the Supreme Court recently declining to rule on Section 230, the issue has likely been kicked back to Congress. If and when lawmakers decide whether and how to alter the law, it could have a significant impact on the AI landscape.

Where is this all headed, then? Nowhere in the near future, since lawmakers are about to leave for their summer recess. Starting this fall, however, Schumer intends to launch invite-only study groups in Congress to examine specific aspects of AI.

In the interim, Engler predicts, debates may arise over outlawing specific AI applications, such as facial recognition or sentiment analysis, mirroring some aspects of the EU’s rules. Lawmakers could also try to resurrect existing proposals such as the Algorithmic Accountability Act.

For now, the focus is on Schumer’s big swing: the goal is to create something truly comprehensive, and to do it quickly. Engler predicts the effort will attract a great deal of attention.
