The launch of Gemini, Google’s answer to OpenAI’s hugely popular AI models, capped a year that began with ChatGPT becoming the fastest-growing app ever. Along the way, artificial intelligence (AI) has reshaped nearly every corner of the tech sector and stirred existential fears.
According to experts, things are unlikely to slow down in 2024: AI will probably play an even bigger role in our lives, and everyone from Google to Elon Musk will be vying for OpenAI’s crown.
These are their major forecasts for the upcoming year:
AI will be everywhere
Google closed the year on a high note with the release of Gemini, an AI model the company claims can match OpenAI’s GPT-4, and it plans to release more advanced versions in the coming months.
OpenAI, not to be outdone, intends to introduce a GPT store early in 2024, enabling users to create and market custom versions of ChatGPT.
This is part of a broader trend that AI experts expect to continue: the technology will play a far larger role in our daily lives as tech companies build it into as many of their products as they can and acceptance of AI grows.
Charles Higgins, cofounder of AI training startup Tromero and a Ph.D. candidate in AI safety, believes 2024 will be the year we actually start seeing widespread use of these AI technologies.
Accessibility is key with a model such as Gemini, which is built into products people already use and are accustomed to. Using AI tools, he said, will therefore become the norm rather than the exception.
Open-source models are another area to watch in 2024. In contrast to closed systems such as GPT-4 and Gemini, these models are publicly available for anyone to use and modify.
Meta has placed a significant bet on this type of AI, making its Llama 2 model publicly available and launching an “open science” partnership with other tech giants, including IBM.
However, truly open alternatives to AI created by big tech corporations are unlikely to gain traction anytime soon due to the prohibitive cost of training AI models.
Sophia Kalanovska, another Tromero cofounder and Ph.D. candidate, said that training models is extremely costly.
Because only large corporations like Meta have the resources to train them, she said, the open-source community remains dependent on those companies to release their models.
OpenAI will feel the heat
OpenAI has ridden the ChatGPT wave since its spectacular launch in late 2022, but there have been signs in recent weeks that the chatbot is running into problems.
Many have expressed dissatisfaction with ChatGPT’s performance, claiming that it is now refusing to complete various tasks. OpenAI has responded that it is investigating claims that the chatbot is becoming “lazier.”
Kalanovska said she thinks ChatGPT has been noticeably worse over the past three weeks, with frequent network failures and shorter responses.
The chatbot’s peculiar behavior demonstrates how many unanswered questions remain about how large language models work. It has also added to OpenAI’s troubles after several weeks of turmoil inside the firm.
Sam Altman’s abrupt ouster and subsequent reinstatement as CEO have made the company’s lead in the AI arms race look shakier, with startup clients defecting to rivals and Microsoft announcing its own AI systems in an effort to reduce its dependence on OpenAI.
Next year could be even harder for OpenAI, with Gemini and rival models such as Elon Musk’s Grok entering the fray as the AI market grows increasingly competitive.
Higgins said, “There’s a crack in their armor right now, whatever the drama was about.” The episode caused a stir, he added, and the other major players are eager to capitalize on it.
AI firms are facing an impending copyright dispute
The whole artificial intelligence sector is currently under a cloud of intense legal uncertainty.
A straightforward question remains unanswered in the cases of Getty Images v. Stability AI, which is scheduled for trial in the UK next year, and Sarah Silverman et al. v. OpenAI in the US. Is it legal to train AI models on data that contains copyrighted content?
Most nations don’t have a clear answer to that one, according to Dr. Andres Guadamuz, an intellectual property law professor at the University of Sussex.
He stated that it will likely take four to five years for this to be thoroughly settled, but he anticipates that in 2024 we may have one or two rulings that will help put things in perspective.
The AI sector faces an existential threat as a result of these legal disputes. Major tech corporations have acknowledged that training models as vast and complicated as GPT-4 would probably be unfeasible if they had to pay for the massive volumes of copyrighted data needed for training.
Guadamuz stated that if the cases were settled tomorrow and all of the AI businesses lost, they would likely have to pay out enormous quantities of money, which would have a major impact on them.
He did, however, add that while a string of devastating legal setbacks for AI businesses in the US and the UK would definitely slow the AI revolution down, it probably wouldn’t stop it completely.
According to Guadamuz, AI development would most likely just relocate to nations with looser laws.
Regulation is urgently required
US politicians are notorious for failing to pass legislation limiting the impact of social media, and that history looks set to repeat itself with artificial intelligence.
Despite holding a series of congressional hearings on AI, the US does not appear to be any closer to regulating the cutting-edge technology. The EU finally agreed on a set of controls for generative AI tools last month, following tough negotiations.
That needs to change by 2024, according to experts, since AI is already upending a wide range of industries and concerns about the practical effects of AI-generated material are becoming more prevalent.
According to Carnegie Mellon University computer science professor Vincent Conitzer, 2024 will be the year that AI regulations really start to take shape.
Many regulatory measures make sense on the surface, but the specifics of how they would work in practice are still lacking and untested, and those details matter enormously.
Pinning those specifics down is difficult, he said, because regulations come together gradually and AI is a very fast-moving field.
Guadamuz concurred and said that authorities would probably need to intervene right away rather than waiting for courts to rule on contentious issues pertaining to artificial intelligence.
Legislation will always lag behind the technology, he said, so regulators need to step in rather than waiting for the case law to be decided.