Getting Protection From AI

Ever since a new chatbot professed its love to a reporter six months ago before taking a darker turn, people have been waking up to how drastically artificial intelligence could change our lives, and how it can go wrong. AI is rapidly becoming part of nearly every facet of our daily lives and our economy. But in Washington, the rules are not evolving as quickly as the technology.

The deployment of artificial intelligence in sensitive sectors like financial markets, health care, and national security is only one of many questions policymakers must weigh. They must decide how to treat AI-generated content under intellectual property law. And they will need guardrails to stop the spread of false and misleading information.

Before we build the second and third stories of this regulatory house, however, we must lay a solid foundation, and that foundation should be a national data privacy standard.

To understand this fundamental need, it helps to consider how artificial intelligence is built. AI requires enormous amounts of data. ChatGPT, the generative language tool, was trained on 45 terabytes of data, the equivalent of more than 200 days' worth of HD video. That data likely included our social media and online forum posts, which probably taught ChatGPT how we write and interact with one another. It could do so because this data is largely unprotected and readily available to third-party companies willing to pay for it. And because the United States has no national privacy law, AI developers are not required to disclose where they get their input data.

Data studies have been around for millennia and can deliver significant benefits, but they typically hinge on consent to use the data. Medical studies often incorporate information on patients' health and outcomes, but in most cases that data can be used only with the participants' consent. That is partly because Congress established basic protections for health information in the 1990s, though those protections apply only to information shared between patients and their health care providers. Most of the data we generate today, such as geolocation records, online conversations, and data from health platforms like fitness apps, falls outside those protections.

Today, the companies that collect our data control it. Google used to scan customers' Gmail inboxes to sell them targeted ads before ending the practice. Zoom recently had to revise its data collection policy after being accused of using customers' audio and video to train its AI products. We have all downloaded apps on our phones and clicked through the terms and conditions without really reading them. Companies are free to change how much of our information they collect and how they use it, and they do so frequently.

A national privacy standard would guarantee a minimum level of protection no matter where in the United States a person lives. It would also bar companies from storing and selling our personal information.

Ensuring transparency and accountability in the data used for AI is also essential to high-quality, responsible AI. If the input data is biased, the results will be biased too: garbage in, garbage out. Facial recognition is a case in point. Most of the data used to train those algorithms came from and was submitted by white people, and the technology shows clear biases when used on communities of color.

The United States must be a global leader on AI policy.

But while we wait, other nations are not. The European Union has moved faster on AI regulations because it passed its privacy law in 2018. China, despite being frighteningly anti-democratic, has also moved quickly on AI. If we want a seat at the international table to set the long-term direction of AI in a way that respects fundamental American values, we need to enact our own national data privacy law to get started.

Congressional inaction has kept the Biden administration from going beyond its encouraging initial guardrails around AI. The White House recently announced voluntary AI standards that include a section on data privacy. But voluntary standards lack accountability, and the federal government can enforce only laws that are woefully out of date.

That is why Congress must act and set the rules of the road. On issues like privacy, the nation needs strong national standards applied consistently across the country, not the state-by-state patchwork we rely on today. Individuals, not corporations, must regain control of their information. And any standard must be enforceable so the government can hold bad actors accountable.

As with everything in Congress, it comes down to priorities. With artificial intelligence developing this quickly, we can no longer afford to put the problem off.

We have lagged on technology policy before, and as other nations take the lead, we are falling further behind. We must act swiftly and lay a solid foundation. That must include a strong, enforceable national privacy standard.
