We have entered the age of AI

If you thought AI fever would have passed by 2024, you were sorely mistaken. Advances in hardware and software are making dynamic applications of generative AI increasingly abundant everywhere, and 2023 looks to have been only the beginning of our exploration.

Since this year is the Chinese Zodiac’s Year of the Dragon, generative AI (gen AI) will be strategically and widely integrated across industries. Now that the risks have been evaluated and strategies are taking shape, businesses are ready to make use of gen AI as a fundamental part of their operational and strategic frameworks, rather than just as a cutting-edge novelty. In short, CEOs and other corporate executives have recognized the benefits and importance of this technology and are actively working to integrate it into their operations.

The result is an environment in which gen AI is no longer merely an option but a crucial engine for efficiency, creativity, and competitive advantage. This seismic shift, from cautious exploration to confident, well-informed application, will play out in 2024, as gen AI moves from emerging trend to core corporate practice.

Volume and variety

One important factor is a growing understanding of how gen AI enables a greater volume and variety of applications, ideas, and content.

Our understanding of the implications of the massive volume of AI-generated content is still developing. Much as atomic bomb testing complicated radiocarbon dating, historians may have to treat the post-2023 internet as something entirely different from what came before, given the sheer volume of this content: since 2022, AI users have collectively created more than 15 billion images, a number that previously took humans 150 years to produce.

Still, whatever gen AI’s effects on the internet may be, for businesses this development raises the bar for every participant in every industry. It marks a turning point at which failure to adopt the technology could prove not just a missed opportunity but a competitive disadvantage.

The jagged frontier

In 2023, we learned that AI raises the bar not only across industries but also for employee capabilities. In a YouGov survey conducted last year, 90% of workers said AI is increasing their productivity; 73% use AI at least once a week, and one in four use it regularly.

Separate research found that, given the right training, employees using gen AI completed 12% more tasks 25% faster, and overall work quality rose by 40%, with lower-skilled workers benefiting the most. For tasks that fell outside AI’s capabilities, however, employees were 19% less likely to produce correct solutions.

This tension has given rise to what researchers call the “jagged frontier” of AI capabilities. It works like this: at one end of the scale, we see AI’s astounding prowess, with tasks that once seemed unattainable for computers now completed with precision and efficiency.

At the other end are tasks where AI fails to match human insight and flexibility: areas of nuance, context, and intricate decision-making, where machine logic (currently) meets its match.

Cheaper AI

Gen AI projects will take root and become more commonplace this year as businesses learn to understand and navigate this unpredictable landscape. Driving this adoption is the falling cost of training foundational large language models (LLMs), thanks to improvements in silicon optimization that are thought to arrive roughly every two years.

The AI chip market is expected to become more accessible in 2024 as alternatives to market leaders like Nvidia emerge amid rising demand and worldwide shortages.

In a similar vein, novel fine-tuning techniques that can build strong LLMs from weaker ones without additional human-annotated data, such as Self-Play fIne-tuNing (SPIN), are using synthetic data to achieve better results with minimal human intervention.
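To make the self-play idea concrete, here is a minimal, runnable Python sketch of the loop’s structure, not the SPIN authors’ implementation: the generate and preference_update stubs are hypothetical stand-ins for a real LLM’s sampling and DPO-style update steps. In each round, the current model answers prompts from the fine-tuning set, and the model is then pushed to prefer the human-written responses over its own synthetic ones.

```python
import random

# Toy supervised fine-tuning pairs: (prompt, human-written response).
sft_data = [
    ("greet", "Hello! How can I help you today?"),
    ("farewell", "Goodbye, and have a great day!"),
]

def generate(model, prompt):
    """Stand-in for sampling a response from the current model (random here)."""
    return random.choice(["hi", "bye", "Hello! How can I help you today?"])

def preference_update(model, prompt, preferred, rejected):
    """Stand-in for a DPO-style step that nudges the model toward the
    human response and away from its own synthetic one."""
    model["updates"].append((prompt, preferred, rejected))
    return model

model = {"updates": []}
for _ in range(3):  # SPIN iterates self-play for a few rounds
    for prompt, human in sft_data:
        synthetic = generate(model, prompt)  # the opponent is the model itself
        model = preference_update(model, prompt, human, synthetic)

print(f"Applied {len(model['updates'])} preference updates")
```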

Enter the ‘modelverse’

This cost reduction is making it possible for more businesses to build and deploy their own LLMs. The ramifications are many and varied, but it seems clear that the coming years will bring a sharp increase in creative LLM-based applications.

Similarly, 2024 will see the start of a shift from models that rely mostly on the cloud to AI that runs locally. This progress is fueled in part by hardware innovations such as Apple Silicon, but it also taps the unrealized raw processing power of everyday mobile devices.
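As a minimal illustration of fully local inference, the sketch below runs a small open model on the local CPU with the Hugging Face transformers library; distilgpt2 is simply a convenient stand-in for whichever compact model a real deployment would choose.

```python
# pip install transformers torch
from transformers import pipeline

# device=-1 pins execution to the local CPU; no cloud API is involved.
generator = pipeline("text-generation", model="distilgpt2", device=-1)

result = generator("On-device AI lets applications", max_new_tokens=30)
print(result[0]["generated_text"])
```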

Likewise, small language models (SLMs) are expected to gain popularity in medium-sized and large businesses as they address more specialized, niche needs. Because SLMs are, as their name implies, lighter than LLMs, they are well suited to real-time applications and platform integration.

Whereas LLMs are trained on enormous volumes of diverse data, SLMs are trained on narrower, domain-specific data that is frequently generated within the company itself, making them specialists in particular industries or use cases while ensuring relevance and privacy.
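As a rough sketch of what that looks like in practice, the snippet below fine-tunes a small open model on a file of in-house text with the Hugging Face transformers library; distilgpt2 and internal_docs.txt are hypothetical placeholders for a company’s chosen base model and proprietary corpus.

```python
# pip install transformers datasets torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "distilgpt2"  # stand-in for a small base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# One document per line; generated inside the company, never leaving it.
ds = load_dataset("text", data_files={"train": "internal_docs.txt"})
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=ds["train"],
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()  # the result is a domain-specialized SLM
```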

A shift to large vision models (LVMs)

As 2024 unfolds, the focus will also move from LLMs to large vision models (LVMs), especially domain-specific ones, which have the potential to fundamentally change how visual data is processed.

But while LLMs trained on internet text adapt well to proprietary documents, LVMs face a distinct difficulty: the majority of images on the internet are kittens, memes, and selfies, which are very different from the specialized images used in industries like manufacturing or the life sciences. As a result, a general LVM trained on internet photos struggles to identify the salient features in specialized domains.

LVMs tailored to specific image domains, such as pathology or semiconductor manufacturing, achieve far better results. Studies show that adapting an LVM to a particular domain with roughly 100,000 unlabeled images can greatly reduce the need for labeled data while improving performance. Unlike general LVMs, these models are customized to specific business domains and are particularly effective at computer vision tasks such as object localization and defect detection.
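One common recipe for this kind of adaptation, sketched below under assumed details, is to continue pretraining a vision backbone on the unlabeled domain images (that self-supervised stage is omitted here) and then train only a small head on the few labels available; the defect-detection head and random tensors are illustrative stand-ins.

```python
import torch
from torch import nn
from torchvision import models

# Pretrained backbone; in practice it would first be adapted to the
# domain via self-supervised training on ~100K unlabeled images.
backbone = models.resnet50(weights="IMAGENET1K_V2").eval()
backbone.fc = nn.Identity()           # expose 2048-d features
for p in backbone.parameters():
    p.requires_grad = False           # freeze the backbone

head = nn.Linear(2048, 2)             # e.g. defect vs. no-defect
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

# Stand-in for one small labeled batch of domain images.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))

loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward()
opt.step()                            # only the head is updated
```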

Elsewhere, enterprises will begin implementing large graphical models (LGMs). These models are well suited to tabular data, the kind commonly found in databases and spreadsheets, and their distinctive strength is time-series analysis, which offers new insight into the sequential data that pervades corporate settings. This capability is essential because most enterprise data falls into these categories, and existing AI models, including LLMs, have not yet been able to address it effectively.
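To make the task concrete, here is a conventional baseline for the kind of tabular time-series problem LGMs target; this is ordinary gradient boosting on lagged features, not an LGM, and the synthetic monthly sales series is a stand-in for real enterprise data.

```python
# pip install pandas scikit-learn
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic monthly sales: trend plus noise, 48 months.
rng = np.random.default_rng(0)
sales = pd.Series(100 + 2 * np.arange(48) + rng.normal(0, 5, 48))

# Frame forecasting as supervised learning on lagged values.
lags = pd.concat({f"lag_{k}": sales.shift(k) for k in range(1, 4)}, axis=1)
X = lags.dropna()
y = sales.loc[X.index]

model = GradientBoostingRegressor().fit(X[:-6], y[:-6])
print(model.predict(X[-6:]))  # forecast the final six months
```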

Ethical dilemmas

Naturally, these advances will need to be supported by careful ethical analysis. It is widely agreed that we badly misjudged earlier general-purpose technologies, the kind that have wide-ranging uses, significantly affect many human endeavors, and radically alter the economy and society. Social media and smartphones, for example, brought many benefits but also negative externalities that touched every aspect of our lives, whether we used them or not.

With generative AI, regulation is seen as essential to avoid repeating those mistakes. It will be organizations, rather than governments, that lead the regulatory charge, since government regulation may fail, impede innovation, or take years to come into effect.

The most prominent ethical issue of last year was arguably the copyright controversy surrounding gen AI. As AI technologies developed rapidly, they raised urgent questions about intellectual property rights. The central point of contention, of course, is whether and how copyright law should apply to AI-generated content, which frequently draws on pre-existing human-created works for training.

The tension between AI and copyright exists because copyright law was designed to stop people from using others’ intellectual property (IP) without permission. Reading texts or articles for inspiration is acceptable; copying them is not. The trouble is that AI can consume practically unlimited amounts of data, unbound by human constraints, unlike, say, a person who reads all of Shakespeare and then writes their own version.

The copyright vs. copywrong controversy is just one facet of a changing media landscape. In 2024, we will see the outcomes of significant, precedent-setting lawsuits such as NYT v. OpenAI (though it is unclear whether that case will ever go to trial or is merely the publisher’s negotiating tactic), and we will watch the media environment adjust to the new AI reality.

Deepfakery to run rampant

In geopolitics, the year’s biggest story will undoubtedly be how AI intersects with the most significant election year in human history. A large share of the world’s population will vote this year, with presidential, parliamentary, and other elections planned in the United States, Taiwan, India, Pakistan, South Africa, and South Sudan, among other countries.

Such meddling had already occurred when Bangladesh went to the polls in January: certain pro-government media outlets and influencers actively disseminated disinformation produced with low-cost AI tools.

In one case, a deepfake video (later removed) purportedly showed an opposition leader withdrawing support for Gazans, a stance that could be damaging in a country where most Muslims feel a strong sense of sympathy with the Palestinian people.

AI imagery poses a real threat. According to recent research published in Nature Communications, the subtle modifications designed to trick AI image classifiers can also affect human perception. The finding highlights the similarities between machine and human vision, and, more significantly, it underscores the need for further study of how adversarial images affect both AI systems and people. The experiments showed that even minute perturbations, invisible to the naked eye, can skew human judgments much as they skew AI models.
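As a concrete example of how such adversarial perturbations are produced, here is a minimal PyTorch sketch of the classic fast gradient sign method (FGSM); the Nature Communications study built its own stimuli, so this illustrates the general technique, not that paper’s method.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# A pretrained classifier to attack; eval() freezes dropout/batch norm.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm(image, label, eps=2 / 255):
    """One-step FGSM: move each pixel slightly in the direction that
    increases the classifier's loss, keeping the change within eps."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

# Stand-in input; a real attack would use an actual photo and its label.
x = torch.rand(1, 3, 224, 224)
adv = fgsm(x, torch.tensor([0]))
print((adv - x).abs().max())  # the perturbation never exceeds eps
```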

Even as watermarking, also known as content credentials, becomes widely accepted as a way to tell real content from fake, the technique still faces several problems. Will detection be universally available? If so, how do we stop people from misusing it by marking work as synthetic when it isn’t? And if detection is not universal, withholding the ability to recognize this kind of media hands significant power to those who already possess it. Once again we will find ourselves asking: who gets to decide what is real?

In 2024, the world’s largest election year will coincide with the greatest transformative technology of our time, at a moment when public trust sits firmly at an all-time low. This will be the year AI is used in practical, noticeable ways, for better or worse. Hold on tight.
