The Microsoft-CrowdStrike outage could shake faith in Big Tech

A single mistake can cause widespread chaos, as the catastrophic IT outage that rocked businesses around the world made clear, showing just how tightly society and the systems that support it are entwined with Big Tech.

It also exposes the weaknesses in those systems and raises the question: Can we really trust Big Tech to safeguard a technology as potent as artificial intelligence?

A faulty update from cybersecurity company CrowdStrike on Friday caused a software failure that led to a Microsoft Windows outage affecting airlines, banks, retailers, emergency services, and healthcare organizations around the globe. CrowdStrike says a fix has been deployed, but many systems remained offline on Friday as businesses struggled to restore their services, some of which required manual updates.

The Microsoft-CrowdStrike outage should serve as a “wake-up call” for customers, according to Gary Marcus, an AI researcher and the founder of Geometric Intelligence, a machine-learning company that Uber acquired in 2016. Marcus also said the impact of a comparable failure involving AI could be ten times as severe.

“Why in the world do you think we are ready for artificial general intelligence (AGI) when a single bug can bring down banks, airlines, shops, media outlets, and more?” Marcus wrote in a post on X.

Artificial general intelligence, or AGI, refers to AI capable of human-level reasoning and decision-making. OpenAI cofounder John Schulman has said it could arrive within a few years.

Marcus, who has previously criticized OpenAI, said there may be problems with the current approaches and that people are handing Big Tech companies and AI a great deal of power.

Dan O’Dowd, the founder of The Dawn Project, a safety advocacy group that has campaigned against Tesla’s autonomous driving technology, said the Microsoft-CrowdStrike incident is a reminder that critical infrastructure is neither trustworthy nor sufficiently protected. Because there’s a race to get products to market, he said, Big Tech companies judge systems by whether they work “pretty well most of the time.”

When it comes to AI, some of that has already played out.

Over the past six months, companies of all stripes have rolled out a plethora of AI products, some of which have begun to change the way people work. Along the way, however, the hallucination-prone AI models have also produced some highly publicized mistakes, such as Google’s AI Overviews advising users to put glue on pizza, or Gemini’s ahistorical depictions of historical figures.

Businesses have also alternated between unveiling eye-catching new products and then postponing or withdrawing them when they weren’t ready or when early public releases exposed problems. Amid intensifying competition in AI, OpenAI, Microsoft, Google, and Adobe have all scaled back or delayed AI offerings this year.

Even if some of these errors and product delays don’t seem like a huge concern on their own, the risks could become more serious as the technology develops.

An AI risk assessment report commissioned by the US Department of State and published earlier this year found that AI has a high potential for weaponization, which could take the form of biowarfare, large-scale cyberattacks, disinformation campaigns, or autonomous robots. The report warned that the outcome could bring about “catastrophic risks,” including human extinction.

Javad Abed, an assistant professor of information systems at Johns Hopkins’ Carey Business School, said situations like the Microsoft-CrowdStrike outage keep happening because businesses still treat cybersecurity as an expense rather than an essential investment. Large tech organizations, he said, need a multi-layered defense strategy and alternative vendors.

Sanjay Patnaik, a director at the Brookings Institution, said the government has not adequately regulated social media or artificial intelligence. Without proper safeguards, he said, the technology could end up posing a threat to national security.

Patnaik said the tech giants have had “free rein,” and businesses are only now starting to realize it.

Marcus agreed that companies cannot be left to build reliable infrastructure on their own, and that the outage should serve as a warning that letting AI systems run unchecked puts us in danger of playing double or nothing.
