
Call for Better AI Security Infrastructure

Following Microsoft’s revelation that state-backed actors from rival countries used artificial intelligence (AI) to train their operatives, the United States will need to decide how widely it wants to grant public access to the technology, a choice that could shape overall data protection rules.

Phil Siegel, founder of the AI non-profit Center for Advanced Preparedness and Threat Response Simulation, said the country will have to decide whether to keep these systems open and easy for everyone to access, good and bad actors alike, or to adopt a different strategy.

Five state-affiliated “malicious” actors were revealed by OpenAI in a blog post published on Wednesday: the Chinese-affiliated Charcoal Typhoon and Salmon Typhoon, the Iranian-affiliated Crimson Sandstorm, the North Korea-affiliated Emerald Sleet, and the Russia-affiliated Forest Blizzard.

The groups “queried open-source information, translated, found coding errors, and ran basic coding tasks” using OpenAI services, according to the post. The two China-affiliated groups, for instance, translated technical papers, debugged code, created scripts, and investigated ways to conceal processes in various electronic systems.

Consequently, OpenAI put forth a multifaceted strategy to counteract such nefarious use of the company’s resources, which included enhanced public transparency, increased collaboration with other AI platforms to detect and stop malicious activity, and “monitoring and disrupting” malicious actors using new technologies.

According to OpenAI, every ecosystem has a handful of bad actors that must be dealt with consistently so that everyone else can keep reaping the benefits. The company acknowledged that, despite its best efforts to reduce misuse, it will not be able to prevent every incident.

The company stated that continued innovation, research, collaboration, and information sharing make it more difficult for harmful actors to go unnoticed throughout the digital ecosystem and enhance the experience for everyone else.

However well-intentioned, Siegel contended, these gestures will ultimately prove ineffective because the infrastructure in place does not permit them to have the required impact.

According to Siegel, the country will need to determine whether the system will be completely open or whether it will resemble the financial system, which has numerous gates that prevent certain things from happening.

He explained his skepticism by noting that banks are backed by a whole infrastructure and set of regulations for handling those kinds of transactions, while AI companies are not. He added that none of the companies, whether Google, OpenAI, or Microsoft, is at fault until that infrastructure is put in place, though they are considering and working on it.

He continued, saying that they simply need to act swiftly to put this infrastructure in place so they know how they are going to implement these kinds of safeguards.

In a separate blog post, Microsoft pushed for additional steps, including “notification” of other AI service providers to help identify pertinent activity and data so they can take quick action against the same people and processes.

Microsoft and OpenAI promised to safeguard important AI systems through “complementary defenses,” with support from MITRE in creating countermeasures for the “evolving landscape of AI-powered cyber operations.”

Microsoft admitted that over the past few years, the threat ecosystem has consistently shown that threat actors are keeping pace with technological advances just as quickly as their counterparts on defense.

According to Siegel, since hackers can employ espionage and even “other forms of technology” to further their objectives, the procedures outlined would account for only a portion of the activity the bad actors engage in, because there are currently insufficient systems in place to capture all kinds of activity.

Simply put, work remains to be done, and Siegel expressed skepticism that Microsoft and OpenAI would be able to accomplish it without assistance from the government or other organizations that have previously worked on similar technologies.
