Demis Hassabis, the CEO of Google DeepMind, has a clear caution as artificial intelligence permeates everyday life: avoid the same traps that caused social media to become toxic.
Speaking alongside Greek Prime Minister Kyriakos Mitsotakis at the Athens Innovation Summit last Friday, Hassabis stated that although artificial intelligence (AI) is one of the most revolutionary technologies in history, its implementation necessitates a much more cautious approach than what he called Silicon Valley’s traditional “move fast and break things” approach.
According to Hassabis, we should take a cue from social media, where the mindset of “move fast and break things” may have prevailed over an awareness of the ensuing second- and third-order repercussions.
He stated that social media algorithms are designed to “grab more and more of your focus, but not necessarily in a way that’s helpful for you as the individual,” and that the stakes are higher with AI, given its ability to affect many different industries and societies.
Instead, Hassabis called on technologists and regulators to follow the scientific method, urging that systems be rigorously tested and understood before being made available to billions of people.
He maintained that rather than being used to control humans, AI should be developed as a tool to help them.
Striking the correct balance between “being bold with the opportunities, but being responsible about mitigating the risks” is the aim, according to Hassabis, who also added that this “continual” conflict will endure “all the way to AGI.”
AI already exhibits some of the same flaws
In the past, AI researchers have observed AI replicate some of the harmful patterns found on social media.
In a study released in August, 500 chatbots were given their own simplified social network. The researchers found that, even in the absence of advertisements or recommendation algorithms, the chatbots split into cliques, amplified extreme voices, and allowed a select few to dominate the conversation.
The team tried six interventions to stop the pattern, ranging from chronological feeds to suppressing follower counts, but none succeeded. They determined that the dysfunction extends beyond algorithms and is built into how social networks promote emotionally charged sharing.
Meanwhile, AI is becoming increasingly integrated into social media, with virtual influencers growing popular, companies experimenting with AI-generated likenesses and voices, and some creators warning that licensing their likeness in perpetuity could jeopardize their livelihoods.
While OpenAI CEO Sam Altman has claimed that addictive social media feeds may be more detrimental to children than AI itself, Reddit cofounder Alexis Ohanian has suggested that AI could give people greater control over what they see online.