That stranger’s encounter with generative AI is just one among millions. People in the street (and in elevators) are now figuring out what this radical new technology is for and wondering what it can do for them. There’s an air of excitement and anticipation around generative AI right now, reminiscent of the early days of the internet, and a sense that we’re making this stuff up as we go along.
In other words, we’re living through the dot-com boom of the early 2000s. Plenty of businesses will fail. It may be years before this era’s Facebook (now Meta), Twitter (now X), or TikTok emerges. “People are reluctant to imagine what might happen in ten years because they don’t want to appear foolish,” says Alison Smith, head of generative AI at the technology consulting firm Booz Allen Hamilton.
The internet changed everything: how we work and play, how we socialize, learn, consume, fall in love, and much more. But it also brought us troll farms, revenge porn, and cyberbullying. It enabled genocide, fueled mental-health crises, and made surveillance capitalism, with its addictive algorithms and manipulative advertising, the dominant economic force of our time. Those downsides became clear only once large numbers of people were using it and killer apps like social media had arrived.
The same will likely be true of generative AI. With the foundational models from OpenAI, Google, Meta, and a handful of other companies now in place, people will begin to use and abuse the technology in ways its creators never imagined. Without letting individuals truly play around with it, Smith says, we won’t be able to fully understand either the potential or the risks.
Because generative AI learned its skills from the internet, it has absorbed many of the internet’s unresolved problems, including bias, disinformation, copyright infringement, human-rights abuses, and general economic upheaval. This time, though, we’re not going in blind.
Here are six unresolved questions to bear in mind as we watch the generative-AI revolution unfold. We have a chance to do better this time.
Will the issue of bias ever be resolved?
Bias has become a byword for AI-related harms, and for good reason. Real-world data, especially text and images scraped from the internet, is riddled with gender stereotypes and racial discrimination. Models trained on that data encode those biases and then reinforce them wherever they are used.
Chatbots and image generators tend to depict engineers as white men and nurses as white women. Face-recognition software used by police departments can misidentify Black people, leading to wrongful arrests. Hiring algorithms perpetuate bias against women, even when they were brought in to address it.
Without new data sets or a new way to train models, both of which could take years of work, the root cause of the bias problem isn’t going anywhere. But that hasn’t stopped it from being a hot research topic. OpenAI has worked to reduce bias in its large language models using reinforcement learning from human feedback (RLHF), which steers a model’s output toward the kind of text that human testers say they prefer.
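As a rough illustration of how that steering begins, here is a minimal sketch of the preference-learning step that typically underpins RLHF: a reward model is trained to score the responses testers preferred above the ones they rejected, and that reward signal is later used to fine-tune the language model. The reward model, embeddings, and dimensions below are invented for illustration; this is not OpenAI’s implementation.

```python
# Illustrative sketch of reward-model training for RLHF (not OpenAI's code).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in reward model: maps a (fake) text embedding to a single scalar score.
reward_model = nn.Linear(768, 1)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake embeddings of two responses to the same prompt: one a human tester
# preferred, one they rejected. In practice these come from a language model.
preferred_emb = torch.randn(1, 768)
rejected_emb = torch.randn(1, 768)

# Pairwise preference loss (Bradley-Terry style): push the preferred
# response's score above the rejected one's.
preferred_score = reward_model(preferred_emb)
rejected_score = reward_model(rejected_emb)
loss = -torch.nn.functional.logsigmoid(preferred_score - rejected_score).mean()

loss.backward()
optimizer.step()
print("preference loss:", loss.item())
```

Repeated over many human comparisons, the reward model learns what “preferred” looks like; reinforcement learning then nudges the language model toward outputs that score highly, which is how testers’ preferences end up shaping what the model says.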
Other approaches use synthetic data. Runway, a startup that makes generative models for video production, has trained a version of the popular image-making model Stable Diffusion on AI-generated images of people of varying ages, genders, and ethnicities. The company says models trained on this data set produce more images of women and of people with darker skin tones: ask for an image of a businessperson and you now get images that include women in headscarves; ask for a doctor and you get people of different skin tones and genders; and so on.
Critics dismiss these fixes as Band-Aids on flawed base models, covering up the problem rather than solving it. But Geoff Schaefer, Smith’s colleague at Booz Allen Hamilton and the firm’s head of responsible AI, argues that such algorithmic biases can ultimately expose societal biases in a useful way.
As an example, he notes that even when explicit information about race is stripped from a data set, racial bias can still skew data-driven decision-making, because race can be inferred from people’s addresses, revealing patterns of segregation and housing discrimination. Once a lot of that data was pulled together in one place, he says, the correlation became really clear.
Schaefer thinks something similar could happen with this generation of AI: those societal biases will be laid bare. That, he argues, will lead to more targeted policymaking.
Many would push back against such optimism, though. Just because a problem is out in the open doesn’t guarantee it will be fixed. Policymakers are still trying to address societal biases in housing, hiring, lending, policing, and more that were exposed years ago. In the meantime, people live with the consequences.
Prognosis: Bias will remain an inherent feature of most generative AI models. But workarounds and growing awareness could help policymakers address the most obvious cases.
What impact will AI have on copyright laws?
Outraged that tech companies should profit from their work without consent, authors, artists, and programmers have filed class action lawsuits against Microsoft, OpenAI, and others, alleging copyright infringement. Getty is suing Stability AI, the firm behind the image maker Stable Diffusion.
These cases matter. Celebrity claimants such as Sarah Silverman and George R.R. Martin have drawn media attention. And the cases are set to rewrite the rules about what counts as fair use of someone else’s work, at least in the US.
But don’t hold your breath. It will take years for the courts to reach final decisions, says Katie Gardner, a lawyer specializing in intellectual-property licensing at the law firm Gunderson Dettmer, which represents over 280 AI companies. By then, she says, the technology “will be so entrenched in the economy that it’s not going to be undone.”
In the meantime, the tech industry is building on these alleged infringements at breakneck pace. Gardner doesn’t expect companies to hold back: there may be some legal risk, she says, but falling behind carries plenty of other risks.
A few companies have taken steps to limit their exposure. OpenAI and Meta say they have introduced ways for creators to have their work removed from future data sets. OpenAI now prevents DALL-E users from requesting images in the style of living artists. But, Gardner says, all of these moves are designed to bolster the companies’ positions in the lawsuits.
OpenAI, Microsoft, and Google now offer to shield users of their models from potential legal action. Microsoft’s indemnification policy for its generative coding assistant GitHub Copilot, which is the subject of a class action lawsuit brought on behalf of software developers whose code it was trained on, would in principle protect those who use it while the legal system works things out. Microsoft CEO Satya Nadella told MIT Technology Review that the company would shoulder that burden so its customers don’t have to worry about it.
New kinds of licensing deals are emerging in parallel. OpenAI has signed a six-year agreement with Shutterstock to use its images. And Adobe says its own image-making model, Firefly, was trained only on licensed images, images from its Adobe Stock data set, or images no longer under copyright. Even so, some Adobe Stock contributors say they were not consulted and are unhappy about it.
Resentment runs deep. Now artists are fighting back with technology of their own. One tool, Nightshade, lets users alter images in ways that are imperceptible to humans but ruinous to machine-learning models, causing them to miscategorize images during training. Expect the norms around sharing and repurposing media online to shift significantly.
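Nightshade’s actual poisoning method targets text-to-image training and is considerably more involved, but the basic intuition, that pixel changes too small for a person to notice can still push a model the wrong way, can be illustrated with a classic adversarial-perturbation sketch. Everything below (the untrained toy classifier, the random “image”) is hypothetical and for illustration only; it is not Nightshade’s algorithm.

```python
# Illustrative sketch: an imperceptible, gradient-guided pixel perturbation
# that increases a toy model's loss (FGSM-style). Not Nightshade's method.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an image model (untrained, hypothetical).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # fake 32x32 RGB image
label = torch.tensor([3])                              # its nominal class

# Gradient of the loss with respect to the pixels.
loss = loss_fn(model(image), label)
loss.backward()

# Nudge every pixel by a tiny amount (about 3/255 of the brightness range)
# in the direction that increases the loss: invisible to a viewer,
# but it systematically degrades what the model makes of the image.
epsilon = 3 / 255
perturbed = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("loss on original image: ", loss.item())
print("loss on perturbed image:", loss_fn(model(perturbed), label).item())
```

The same principle, applied to many images before they are scraped into a training set, is what lets a poisoning tool quietly corrupt what a model learns from them.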
Prognosis: High-profile lawsuits will keep making headlines, but they are unlikely to stop companies from building on generative models. New marketplaces will spring up around ethical data sets, and companies and creators will settle into a game of cat and mouse.
How will it change our jobs?
We have long been told that AI is coming for our jobs. What is different this time is that white-collar workers, including data analysts, physicians, lawyers, and (gasp!) journalists, appear to be at risk too. Chatbots can pass high school exams, professional medical licensing exams, and the bar exam. They can summarize meetings and write basic news articles. What is left for the rest of us? The answer is far from clear-cut.
Many researchers dispute the idea that such performance is evidence of real intelligence in large language models. And even so, most professional jobs involve far more than the tasks these models can perform.
Last summer, Ethan Mollick, who studies innovation at the University of Pennsylvania’s Wharton School, helped run an experiment with Boston Consulting Group to examine ChatGPT’s effect on consultants. The team gave hundreds of consultants 18 tasks related to a hypothetical shoe company, such as “Segment the footwear industry market based on users” and “Propose at least 10 ideas for a new shoe targeting an underserved market or sport.” Some of the consultants used ChatGPT to help; others did not. The results were striking: those who used GPT-4 outperformed those who didn’t on every measure.
According to Nathan Benaich, founder of the venture capital firm Air Street Capital and leader of the team behind the State of AI Report, an extensive annual survey of research and industry trends, many firms are already using large language models to look up and retrieve information. He welcomes that: ideally, much of an analyst’s grunt work will simply be handed off to an AI model, he suggests, because a lot of it is miserable drudgery.
His point is that handing machines the menial tasks frees workers to focus on more fulfilling parts of their jobs. The technology also seems to level the playing field: early research, including Mollick’s work with consultants and others’ work with programmers, suggests that people with less experience gain the most from using AI. But there are caveats. Mollick found that over-reliance on GPT-4 made people careless and less able to catch the model’s mistakes.
Nor will generative AI stop at desk work. Image- and video-making models could make it possible to produce endless streams of footage and imagery without human actors, camera operators, or illustrators. The 2023 writers’ and actors’ strikes in the US made it clear that this will be a flashpoint for years to come.
Still, many researchers believe that, on the whole, new technology empowers workers rather than replacing them. Technology has been displacing workers since the industrial revolution, after all: as one kind of job disappears, another is created. Smith says she is quite certain the net effect will be positive.
But gains can mask individual losses, and transitions are never painless. Technological disruption also tends to concentrate wealth and power, deepening inequality.
As Mollick writes, the real question in his view is not whether AI will change the nature of work, but what we want work to become.
Prognosis: Fears of mass job losses will prove overblown. But generative tools will keep showing up in workplaces, and workers may need to learn new skills as roles change.
What misinformation will it make possible?
Three of the most widely shared images of 2023 showed the pope in a Balenciaga puffer jacket, Donald Trump being wrestled to the ground by police officers, and an explosion at the Pentagon. All were fake; all were seen and shared by millions of people.
Generative models have made it easier than ever to produce fake text and images, and many warn of a coming flood of misinformation. OpenAI has itself collaborated on research highlighting the many ways its technology could be misused for fake-news campaigns. In a 2023 report it warned that large language models could be used to mass-produce propaganda that is more persuasive and harder to identify as such. Experts in the US and the EU are already warning that elections are at risk.
It was no surprise, then, that identifying and labeling AI-generated content was a priority in the Biden administration’s October executive order on artificial intelligence. But the order did not legally require toolmakers to label text or images as the work of an AI. And the best detection tools are not yet reliable enough.
The European Union’s AI Act, agreed on this month, goes further. Part of the sweeping legislation requires companies to watermark AI-generated text, images, and video, and to make it clear to people when they are interacting with a chatbot. And the AI Act has teeth: the rules will be legally binding, and violations will bring heavy fines.
The US has also said it will monitor any AI that might threaten national security, including election interference. Benaich calls that a terrific first step. Even so, the idea that governments or other independent bodies could force companies to fully test their models before release looks impractical, given that even the developers don’t know everything these models can do.
And here’s the thing: it is hard to predict all the ways a technology will be misused before it is used. Schaefer notes that 2023 brought plenty of talk about slowing AI development down; his team’s stance, he says, is the opposite.
Unless as many people as possible use these tools in as many different ways as possible, he argues, we won’t make them better, and we won’t grasp the intricate ways these new hazards will arise or what will set them off.
Prognosis: New forms of misuse will keep appearing as use grows. Expect a few standout examples, possibly involving election manipulation.
Will we accept its costs?
The human and environmental costs of developing generative AI also need to be reckoned with. The hidden-worker problem is an open secret: we are spared the worst of what generative models can produce thanks in part to crowds of poorly paid workers who tag training data and weed out toxic, sometimes traumatic, output during testing. These are the sweatshops of the data age.
In 2023, OpenAI’s use of workers in Kenya came under scrutiny from popular media outlets, including Time and the Wall Street Journal. OpenAI wanted to improve its generative models by building a filter that would hide hateful, obscene, and otherwise offensive content from users. But to train that automatic filter to recognize such toxic material, it needed a large number of examples to be found and labeled by people. OpenAI had hired the outsourcing firm Sama, which is reported to have used low-wage workers in Kenya who were given little support.
As generative AI goes mainstream, the human costs will come into sharper focus, putting pressure on the companies building these models to address the working conditions of the global workforce they hire to advance their technology.
The other great cost, the amount of energy required to train large generative models, is set to climb before things get better. In August, Nvidia announced Q2 2024 earnings of more than $13.5 billion, twice what it earned in the same period the year before. Most of that revenue ($10.3 billion) came from data centers, in other words from other companies using Nvidia’s hardware to train AI models.
“The demand is quite extraordinary,” says Nvidia CEO Jensen Huang, who believes the industry has crossed a threshold with generative AI. He acknowledges the energy problem and predicts that the boom will change the kind of computing hardware that gets deployed: the vast majority of the world’s computing infrastructure, he says, will have to be energy efficient.
Prognosis: Greater public awareness of the labor and environmental costs of AI will put tech companies under growing pressure. But don’t expect significant improvement on either front anytime soon.
Will doomsday thinking continue to dominate policymaking?
Doomerism, the fear that building intelligent machines could have disastrous, even apocalyptic, consequences, has long been an undercurrent in AI. But peak hype, plus a high-profile announcement in May by AI pioneer Geoffrey Hinton that he was now scared of the technology he helped create, brought it to the forefront.
Few issues in 2023 were as divisive. AI luminaries such as Hinton and fellow Turing Award winner Yann LeCun, who founded Meta’s AI lab and considers doomerism absurd, traded insults in public spats on social media.
Hinton, OpenAI CEO Sam Altman, and others have suggested that (future) AI systems should have safeguards similar to those used for nuclear weapons. Talk like that gets attention. But in a July Vox piece he co-wrote, Matt Korda, project manager for the Nuclear Information Project at the Federation of American Scientists, criticized such “muddled analogies” and the “calorie-free media panic” they provoke.
It’s hard to separate what’s real from what’s not, Benaich says, because we don’t know the incentives of the people sounding the alarm. It does seem strange, he notes, that many of those demanding tighter control are the same people who stand to make enormous fortunes from the technology. The message amounts to: I’ve built something incredibly powerful! It comes with grave hazards, but I have the remedy.
Some worry about the effect of all this fear-mongering. Writing on X, deep-learning pioneer Andrew Ng said his greatest fear for the future of AI is that exaggerated risks (such as human extinction) could let tech-industry lobbyists push through restrictive regulations that suppress open-source software and stifle innovation. The debate also pulls researchers and funding away from more immediate risks, such as bias, labor upheaval, and misinformation (see above).
François Chollet, a prominent AI researcher at Google, has suggested that some people push the existential-risk narrative because it serves their own business interests: talking about existential risk both signals ethical awareness and responsibility and distracts from more real and pressing issues.
Benaich points out that some of the people ringing the alarm are simultaneously raising $100 million for their own ventures. Doomerism, he suggests, can double as a fundraising tactic.
Prognosis: The fear-mongering will fade, but its influence on lawmakers’ agendas may linger for a while. Calls to refocus on more immediate harms will grow louder.
AI still lacks its killer app
It’s strange to think that ChatGPT almost didn’t happen. Before its launch in November 2022, Ilya Sutskever, OpenAI’s cofounder and chief scientist, wasn’t impressed by its accuracy. Others at the company worried it wasn’t much of an advance. Under the hood, ChatGPT was more remix than revolution: it was powered by GPT-3.5, a large language model OpenAI had developed months earlier. But the chatbot rolled a handful of compelling tweaks, in particular more conversational and more on-point responses, into one accessible package. Sutskever describes it as capable and convenient, the first time AI progress became visible to people outside the field.
The buzz ChatGPT set off has yet to subside. AI, Sutskever says, is the only game in town: it is the biggest thing in tech, and tech is the biggest thing in the economy. And he believes we will keep being astonished by what AI can do.
Yet once you’ve been astonished by what AI can do, the next question is what it is actually for. OpenAI built the technology without a particular use in mind. When its researchers launched ChatGPT, the message was, in effect: use it for whatever you like. Everyone has been trying to figure out what that is ever since.
Sutskever says he finds ChatGPT useful and turns to it daily for all sorts of random tasks: to look up certain words, or to make what he wants to say clearer. Sometimes he uses it to look up facts, even though it doesn’t always get them right. Other people at OpenAI use it for vacation planning (“What are the top three diving spots in the world?”), coding tips, or IT support.
Useful, but hardly game-changing. Most of those examples can be handled by existing tools, such as search. Meanwhile, Google’s own employees are reported to have doubts about the usefulness of the company’s chatbot, Bard (now powered by Gemini, Google’s GPT-4 rival, unveiled last month). Writing on Discord in August, Cathy Pearl, a user experience lead for Bard, said the hardest challenge still on her mind was figuring out what LLMs are truly useful for, as in really making a difference. Her verdict: TBD.
Without a killer app, the “wow” factor fades. Figures from the investment firm Sequoia Capital show that despite their viral launches, AI apps such as ChatGPT, Character.ai, and Lensa, which lets users create stylized (and often sexist) avatars of themselves, lose users faster than established services like YouTube, Instagram, and TikTok.
The rules of consumer tech still apply, Benaich says. After months of hype, there will be a lot of experimentation and plenty of dead ends.
Of course, the early days of the internet were littered with false starts too. Before it changed the world, the dot-com boom ended in bust. Today’s generative AI may yet fizzle out, overtaken by whatever comes next.
Whatever happens, now that AI has gone fully mainstream, what were once niche concerns have become everyone’s problem. As Schaefer puts it, we are going to be confronted with these issues in ways we never have been before.