MIT researchers publish a database of AI hazards

When deploying AI systems or drafting regulations to govern their use, which particular hazards should an individual, business, or government take into account? That question is difficult to answer. If an AI gains control of critical infrastructure, the risk to human safety is obvious. But what about an AI meant to grade exams, sort resumes, or check travel documents at immigration control? Each of those systems carries its own distinct hazards, and each can be just as serious.

Policymakers have struggled to reach a consensus on which hazards should be covered by legislation regulating AI, such as the EU AI Act and California’s SB 1047. To serve as a reference for them and for other stakeholders across the AI industry and academia, MIT researchers have developed an AI “risk repository”: a kind of database of AI dangers.

According to Peter Slattery, lead researcher on the AI risk repository project at MIT’s FutureTech group, the effort aims to curate and analyze AI risks into a publicly accessible, comprehensive, extensible, and categorized risk database that anyone can copy and use, and that will be kept up to date over time. The team built it because they needed it for their own project, and released it publicly after learning that many other people needed it as well.

According to Slattery, the AI risk repository was created to better understand the overlaps and gaps in AI safety research. It contains roughly 700 AI risks categorized by causal factors (such as intentionality), domains (such as discrimination), and subdomains (such as disinformation and cyberattacks). Several risk frameworks already exist, but Slattery points out that each addresses only a small portion of the risks captured in the repository, and that these omissions could have significant consequences for how AI is developed, used, and regulated.
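As a rough illustration of how a taxonomy like this might be represented in code, here is a minimal sketch in Python. The field names, example entries, and filtering step are hypothetical and are not the repository’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical structure mirroring the repository's three-level classification:
# causal factors (e.g. intentionality), domains (e.g. discrimination),
# and subdomains (e.g. disinformation, cyberattacks). Names are illustrative.
@dataclass
class RiskEntry:
    description: str           # short statement of the risk
    causal_factors: list[str]  # e.g. ["intentional", "post-deployment"]
    domain: str                # e.g. "Discrimination"
    subdomain: str             # e.g. "Disinformation"
    sources: list[str] = field(default_factory=list)  # frameworks/papers citing it

repository = [
    RiskEntry(
        description="AI-generated content floods channels with misleading claims",
        causal_factors=["intentional"],
        domain="Misinformation",
        subdomain="Disinformation",
        sources=["Framework A"],
    ),
    RiskEntry(
        description="Model-assisted discovery of software vulnerabilities",
        causal_factors=["intentional", "post-deployment"],
        domain="Malicious use",
        subdomain="Cyberattacks",
        sources=["Framework B", "Framework C"],
    ),
]

# Filter by subdomain, e.g. to pull every catalogued cyberattack-related risk.
cyber_risks = [r for r in repository if r.subdomain == "Cyberattacks"]
print(len(cyber_risks))
```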

“People may assume there is a consensus on AI risks, but our findings suggest otherwise,” Slattery continued. Of the 23 risk subdomains the team identified, the average framework covered just 34%, and nearly a quarter of the frameworks covered less than 20%. Even the most comprehensive document or overview covered only 70% of the 23 subdomains. When the literature is this fragmented, Slattery argues, we shouldn’t presume that everyone shares the same view of these dangers.
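To make those figures concrete, here is a small Python sketch of how coverage could be computed from a tally of how many of the 23 subdomains each framework mentions. The framework names and counts are invented for illustration, chosen only so the arithmetic reproduces the percentages quoted above.

```python
# Coverage = share of the 23 risk subdomains that a given framework mentions.
TOTAL_SUBDOMAINS = 23

# Hypothetical tally of subdomains touched on by each surveyed framework.
subdomains_covered = {
    "Framework A": 16,
    "Framework B": 7,
    "Framework C": 5,
    "Framework D": 3,
}

coverages = {name: n / TOTAL_SUBDOMAINS for name, n in subdomains_covered.items()}
average_coverage = sum(coverages.values()) / len(coverages)
share_below_20 = sum(1 for c in coverages.values() if c < 0.20) / len(coverages)

print(f"average coverage: {average_coverage:.0%}")        # ~34%, as in the study
print(f"frameworks under 20% coverage: {share_below_20:.0%}")  # ~a quarter
print(f"best single framework: {max(coverages.values()):.0%}")  # ~70%
```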

To build the repository, the MIT researchers searched academic databases and retrieved hundreds of articles on AI risk evaluations. They collaborated with colleagues at the University of Queensland, KU Leuven, the nonprofit Future of Life Institute, and the AI startup Harmony Intelligence.

Some dangers were cited more often than others in the third-party frameworks the researchers surveyed. For example, over 70% of the frameworks addressed the privacy and security implications of AI, while just 44% covered disinformation. And while over 50% discussed the ways AI could enable discrimination and misrepresentation, only 12% of the frameworks addressed the “pollution of the information ecosystem,” that is, the growing volume of AI-generated spam.

According to Slattery, the database could offer a starting point for more specific work by researchers, legislators, and anyone else dealing with AI risks. Before it existed, people in his position had two options: spend considerable time combing through the scattered literature to build a thorough overview, or rely on the few existing frameworks, which might overlook critical risks. Now that a more complete database is available, the repository should save time and improve oversight.

But will anyone actually use it? Global AI regulation today is, at best, a patchwork of different approaches with differing objectives. Would things have turned out differently if a risk repository like MIT’s had existed earlier? Could they have? That’s hard to say.

It is reasonable to ask whether a shared understanding of the risks AI poses will be enough to spur responsible regulation. And a risk database by itself won’t address the major limitations of many AI safety evaluations.

The MIT researchers plan to try nonetheless. Neil Thompson, director of the FutureTech lab, says the team will use the repository in the next phase of its research to evaluate how well different AI risks are being addressed.

According to Thompson, the repository will help the team assess how well particular risks are being handled and pinpoint where organizational responses fall short. If everyone concentrates on one type of risk while overlooking others of similar importance, for instance, that is something researchers and organizations should notice and act on.
