This is how Google thinks AI should be regulated

Google has offered its own opinions as state and federal governments work to regulate AI.

The tech giant released a blog post titled “7 principles for getting AI regulation right” on Wednesday. It should come as no surprise that the main takeaway is that regulation of AI should not impede innovation. Kent Walker, head of global affairs at Google and parent company Alphabet, wrote that a global technology race is underway. As with previous technological competitions, he argued, the nations that deploy technology best across all industries will prevail, not the ones that come up with the idea first.

Citing the possibility of existential peril, Google and AI startups like OpenAI have publicly adopted a cooperative approach toward AI regulation. Google CEO Sundar Pichai attended the Senate AI Insight Forums to help guide legislative AI policy. Supporters of a less-regulated, more open-source AI environment, however, have accused Google and others of fear-mongering in order to achieve regulatory capture.

Meta Chief AI Scientist Yann LeCun, referring to the CEOs of OpenAI, Google DeepMind, and Anthropic, said that Altman, Hassabis, and Amodei are the ones doing significant corporate lobbying right now, and that if their fear-mongering campaigns succeed, they will lead to what he considers a catastrophe: a small number of companies controlling artificial intelligence.

Walker cited the White House AI executive order, the U.S. Senate’s AI policy road map, and recent AI legislation passed in Connecticut and California. Google says it supports these initiatives, but argues that AI regulation should target specific harmful outcomes of AI deployment rather than impose broad rules that hinder progress. In a section on “striving for alignment,” Walker noted that more than 600 AI bills have been introduced in the U.S. alone, and argued that advancing American innovation calls for intervention at points of actual harm rather than blanket restrictions on research.

The Google post also addressed copyright infringement and the question of what data AI models are trained on and how. Companies building AI models contend that training on publicly available internet data is fair use; media firms, and more recently major record labels, have accused them of infringing copyright and profiting from it.

Walker essentially restates the fair use argument, but he also says website owners should be able to use machine-readable tools to opt out of having their content used for AI training, and he calls for greater transparency and control over AI training data, along the lines of the sketch below.
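The “machine-readable” opt-out Walker refers to typically takes the form of robots.txt directives; Google’s own Google-Extended token, for instance, lets site owners exclude their pages from AI training. Here is a minimal sketch, using Python’s standard robotparser and a hypothetical example.com policy, of how a training crawler might check for such an opt-out before collecting a page:

```python
from urllib import robotparser

# Hypothetical robots.txt for example.com: ordinary crawling is allowed,
# but AI-training collection is opted out via the "Google-Extended" token.
ROBOTS_TXT = """\
User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT)

page = "https://example.com/article.html"

# A crawler identifying as Google-Extended should honor the opt-out...
print(parser.can_fetch("Google-Extended", page))  # False

# ...while a general-purpose crawler is still permitted.
print(parser.can_fetch("OtherBot", page))  # True
```

In practice a crawler would fetch the live robots.txt with `RobotFileParser.set_url()` and `read()` rather than hard-coding its contents; the inline policy here is only to keep the example self-contained.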

Regarding “supporting responsible innovation,” the principle addresses “known risks” in broad terms. It does not, however, go into specifics about, say, regulatory controls to prevent obvious errors in generative AI responses that might spread false information and cause harm.

To be fair, few people took it seriously when Google’s AI Overview advised using glue on a pizza, but it is a recent example that highlights the ongoing debate over responsibility for AI-generated falsehoods and responsible deployment.
