Chatbots Are Changing Our Lives and Work

After launching its ground-breaking, web-based DVDs-by-mail subscription service in 1999, Netflix took three and a half years to attract one million users. That was a significant feat at a time when people adopting new technology were viewed as a select group of early adopters who didn’t mind living on the cutting edge.

It took Airbnb 2.5 years, Facebook 10 months, and music streaming service Spotify just 5 months to reach a million subscribers in the early 2000s. These figures show that consumers are becoming more at ease with cutting-edge tech services that have the potential to improve their daily lives. Industry observers praised Instagram’s “insane growth” after the photo-sharing app reached a million users in less than three months in 2010.

If reaching a million customers is the crucial benchmark for an unproven tech service going mainstream, consider this: In just five days after its Nov. 30, 2022 launch, OpenAI’s ChatGPT, a generative AI chatbot, attracted one million users.

Five days.

That is astounding.

Then consider this: in just two months, ChatGPT attracted 100 million users.

It illustrates the attention we’re all devoting to the next wave of conversational chatbots. According to Similarweb, a year after its debut, ChatGPT had over 150 million unique users (you must create an account in order to use the site) and hosted around 1.7 billion visits in November, making it one of the most popular websites in the world. The firm tracks the uptake of ChatGPT, Google Bard, Microsoft Bing, Character.ai, and Claude.ai, some of the most well-known generative AI chatbots available today.

What’s generating all of that curiosity? Despite worries about privacy and security regarding their operation and potential for misuse by malicious actors, chatbots hold great potential for novel use cases. Although artificial intelligence (AI) has been a component of technology for many years—a significant portion of Netflix and Amazon recommendations, for example, are determined by AI algorithms—gen AI is a different story.

Suddenly, says Professor W. Russell Neuman of New York University, machine intelligence, machine decision-making, and machine thinking can substitute for human intelligence.

The foundation of these chatbots is the large language model, or LLM, a kind of AI neural network that uses deep learning to mimic the human brain. It can handle vast amounts of data and carry out a range of natural language processing tasks.

According to Brian Comiskey, program director of the Consumer Technology Association, “generative AI has been the subject of intense consumer excitement, especially with ChatGPT,” because it has made the technology tangible for consumers. AI will therefore be a major focus during the CTA’s annual Consumer Electronics Show, which kicks off on January 9 in Las Vegas, where attendees can see AI in action in a variety of ways.

As 2024 approaches, you might think about testing the capabilities of a chatbot by asking it to perform tasks that previously would have seemed unfeasible or required a significant investment of time, effort, and resources, such as summarizing a book or scientific study or writing a short fishing story in the vein of Ernest Hemingway. Like CNET’s Abrar Al-Heeti, you could organize a Taylor Swift-themed dance party, build a metaverse for a new game, arrange a trip to Machu Picchu, have David Attenborough narrate your life, prepare a meal that will appeal to everyone’s dietary preferences—meat eaters, vegetarians, vegans, and gluten-free people—or you could pursue your dream of becoming a fashion designer and designing a collection inspired by corduroy. You could even converse theoretically with Jane Austen or Jesus.

The important thing here, according to Andrew McAfee, a principal research scientist at the MIT Sloan School of Management, is being able to have that back-and-forth with a human-sounding assistant. For the first time ever, he says, we have machines that can comprehend human language.

A generative artificial intelligence (AI) chatbot can read a stream of words, determine what the user is trying to say, and respond to that prompt or request, according to McAfee. Today’s chatbots aren’t really “artificial intelligences,” in the sense that they aren’t thinking, sentient entities that truly know and understand the world as humans do. Even so, it’s quite an accomplishment.
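For readers who want to see what that prompt-and-response loop looks like in practice, here’s a minimal sketch using OpenAI’s Python SDK. The model name and the prompt are illustrative assumptions, and the snippet is our own example rather than anything McAfee or OpenAI prescribes.

```python
# A minimal sketch of the chatbot loop: send a natural-language prompt,
# get back a natural-language reply. Assumes the OpenAI Python SDK is
# installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a four-line fishing story in the style of Ernest Hemingway."},
    ],
)

# The model's reply comes back as ordinary text.
print(response.choices[0].message.content)
```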

All of which is why you ought to familiarize yourself with these chatbots: their functions, their characteristics, and the potential and difficulties they present to humankind. Pun intended, these tools are changing the dialogue about the nature of work, education, and day-to-day tasks in the future. So think of this as an overview of generative AI, along with some useful advice on how to get started with some of the most widely used tools available today.

When we asked ChatGPT why it’s important for us as humans to understand generative AI and technologies like ChatGPT, it replied that being knowledgeable about them lets you take advantage of the most recent developments, investigate innovative possibilities, boost productivity, enhance customer experiences, and support the moral and responsible application of AI.

Old jobs, new jobs, more jobs?

Businesses are already starting to consider how much more they can expect from their human employees as early as this year, thanks to the productivity and profit bump that the automated technology is expected to help deliver.

In a collaboration with the Boston Consulting Group, MIT’s Sloan School of Management found that generative AI can boost highly skilled workers’ productivity by up to 40% when compared to non-users. Studies published by the Brookings Institution show that software engineers may write code up to twice as quickly when using gen AI tools.

To find out how much time generative AI may save on tasks like writing emails, analyzing text, and creating documents, LinkedIn polled CEOs, CIOs, data scientists, software engineers, and other high-volume data users. They said jobs that currently require ten hours of manual labor could be finished in five to six fewer hours. In other words, you could cut the time you spend on some routine duties by 50% to 60% and focus instead on more fulfilling or valuable work.

According to the Pew Research Center, the majority of Americans (82%) haven’t ever tried ChatGPT, and more than half say they’re more concerned than excited about the growing use of AI in daily life. Still, Pew’s researchers are beginning to identify jobs that generative AI may affect, including web developers, tax preparers, data entry clerks, budget analysts, and law clerks. Think of positions whose duties involve “obtaining information” and “analyzing data or information,” according to Pew.

Goldman Sachs notes that worries AI will eliminate jobs may be exaggerated, because new technology has historically created new sorts of occupations. The company reported in a widely read March 2023 study that 60% of workers today work in professions that didn’t exist in 1940.

Nevertheless, the company believes there may be a major upheaval in the job market. Approximately two-thirds of US jobs are already subject to some kind of automation, according to Goldman Sachs economists who examined 900 job types. They also calculated that “generative AI could substitute up to one-fourth of current work.”

Despite a great deal of uncertainty surrounding its potential, generative AI’s ability to produce content that is indistinguishable from human-created output, and to reduce communication barriers between humans and machines, reflects a significant advancement with potentially large macroeconomic effects, according to Goldman Sachs’ economists.

The point is this: What should today’s and tomorrow’s workers do? (Set aside, for the moment, the very real debate over whether AI-produced content is actually “indistinguishable” from human-created output. This story, by the way, was written entirely by humans.)

Experts agree: Get comfortable using AI chatbots if you want to remain attractive to employers.

Rather than viewing generative AI as a potential job killer, embrace the idea that chatbots can be your copilot or assistant, helping you do whatever it is that you do better, faster, more efficiently, or in completely new ways, all because you have a mostly reliable supercomputer you can communicate and work with. (“Mostly reliable” refers to the hallucination problem some chatbots have: AI engines often create content that sounds real but isn’t.)

Calling all prompt engineers

The tech has already spawned a brand-new occupation known as “prompt engineering”: knowing how to ask chatbots the right questions, in the right way, to get a satisfactory answer. Prompt engineers are more likely to be people with problem-solving, critical thinking, and communication abilities than technical engineers, and job ads in 2023 listed salaries of at least $300,000.

Although developing an “AI-educated workforce” will take time, Ryan Bulkoski, head of the AI, data, and analytics group at executive recruitment company Heidrick & Struggles, thinks upskilling workers and educating executives on AI are “critical” today.

What if a business decides, “Well, I want someone with five years of experience as an AI prompt engineer”? As Bulkoski notes, the role has only existed for about the past 18 months.

Being at ease with chatbots should thus be high on your list of things to accomplish in 2024, particularly for knowledge workers who will be “most exposed to change,” according to a September analysis from the employment website Indeed.com. To ascertain which jobs and skills have low, moderate, and high exposure to generative AI disruption, it looked at over 2,600 skills and 55 million job ads.

Experienced employees may want to begin their upskilling process as soon as possible. Oxford University researchers discovered that older workers can be more vulnerable to AI-related employment hazards since they might not be as comfortable embracing new technology as their younger counterparts.

When the pocket calculator was introduced, many people who made a living from calculations believed their careers would be in jeopardy, according to MIT’s McAfee. It turns out that we still have huge demand for statisticians, engineers, scientists, and accountants. But those who won’t use a spreadsheet or a calculator are less employable in the long run.

A few ways you can play with gen AI today

Because generative AI can collaborate with humans in natural language, economists and scholars say it belongs to a special class of technology known as a general-purpose technology. Wikipedia defines a GPT as something that has the potential to impact the entire economy, typically on a national or international scale, with the power to fundamentally change societies by affecting prevailing social and economic institutions.

Other examples of GPTs include the internet, electricity, and the steam engine, which had such a profound impact on everyone’s quality of life that they became essential to society. (That GPT is not the same as the one in ChatGPT, which stands for “generative pre-trained transformer.”)

AI chatbots can be used in a variety of ways. The majority of tools are free, but if you want a more feature-rich version that works faster, has greater security, and/or lets you produce more material, you may upgrade to a paid subscription plan. All of these tools have limitations, particularly in terms of privacy. For example, Google Bard records the conversations you have with it, while ChatGPT says it collects “personal information that is included in the input, file uploads, or feedback that you provide to our services.” Examine the terms of service, or third-party privacy audits such as those from Common Sense Media.

According to David Carr, a senior insights manager at Similarweb, you should at least try these tools “to get some idea beyond the news headline of what they can and can’t do.” Over the next few years, he says, this will play a significant role in how the internet and our entire experience of computing and work change.

A way with words: Most people are eager to try OpenAI’s chatbot. A few months after its launch, actor Ryan Reynolds asked ChatGPT to write a TV ad for his Mint Mobile wireless service. The resulting video, posted to YouTube, received close to two million views. Reynolds found the AI-generated advertisement “compelling but mildly terrifying.”

ChatGPT can translate text into multiple languages, compose emails and job descriptions, summarize articles and meeting notes, brainstorm ideas with you, answer questions, write jokes (though not very good ones), and assist you with tasks like learning a new language.

Google Bard, Microsoft Bing (which uses OpenAI technology), Anthropic’s Claude.ai, Perplexity.ai, and YouChat are also available. According to Similarweb’s Carr, people spent between five and eight minutes each visit experimenting with these tools in November. And, while ChatGPT still leads in visitors, followed by Bing with 1.3 billion, the remaining top sites received over half a billion visits that month.

What does that signify? It’s time to accept generative AI as a mainstream technology, according to Carr. Basically, all of these tools can now complete tasks that were unthinkable only a few years ago.

Creating pictures from words: Although ChatGPT receives the most publicity, OpenAI debuted Dall-E 2, a text-to-image generator, in April 2022. You type in a text prompt and your words are translated into visuals, such as a 3D rendering of a Swiss cheese bouldering wall or a portrait of a blue extraterrestrial singing opera.

Dall-E 3, named after the mashup of Salvador Dalí, the surrealist painter, and Pixar’s WALL-E robot, is not the only text-to-image generator that claims to create your next masterpiece in a matter of seconds. Some of the well-known programs in this area are Dreamup from DeviantArt, Canva Pro, Adobe Firefly, Midjourney, Stable Diffusion, and Microsoft’s Bing Image Creator, which is based on Dall-E.

With Adobe’s Firefly website, you can write text in absurd AI-generated font styles such as “black leather shiny plastic wrinkle,” “holographic snakeskin with small shiny scales,” or “realistic tiger fur.” The company’s free Adobe Express program is well suited to party invites, flyers, posters, and short animations for social media posts, according to CNET’s Stephen Shankland. As he’s tested AI image tools, he’s been thinking carefully about how they’re making people question the authenticity of photographs.

Video and audio: AI assistance is available for more than just words and images. Text-to-video converters, such as Synthesia, Lumen5, and Emu Video from Meta, are being used to reinvent the production of movies, videos, GIFs, and animations. There are text-to-music generators like Stable Audio and SongR, as well as text-to-audio generators like ElevenLabs, Descript, and Speechify. Google is testing a technology called Dream Track that allows users to clone the voices of nine musicians, including Sia, John Legend, and Demi Lovato, for use in music tracks for YouTube videos.

There are undoubtedly many evil uses for voice cloning that come to mind (“Hey Grandma, could you send me some cash?”). President Joe Biden raised concerns about the technology after signing an executive order intended to impose restrictions on the use of AI, telling reporters that a three-second clip of his voice could be used to create an entire phony conversation. “When the hell did I say that?” Biden laughed, recalling an AI deepfake of himself.

But strong potential use cases exist, too. Spotify is testing an AI voice translation service that will convert podcasts into other languages in the voice of the original podcaster.

Mayor Eric Adams of New York City used an audio converter to deliver a public service announcement to the citizens of the city in ten different languages. However, he stirred controversy by failing to disclose to the public that he had used artificial intelligence (AI) to make it sound as though he speaks languages he doesn’t, such as Mandarin.

Despite his embarrassing disclosure slip-up, Adams made a valid point about using technology to reach audiences that have “historically been locked out,” since translating communications into several languages often isn’t feasible owing to time, money, or resource constraints. As Adams put it: “We are using technology to speak in many languages, which makes us more welcoming.”

Product suggestions and purchase guidance: As you make purchases online or through a mobile device, you’ll notice that businesses are already investing in generative AI to provide more accurate answers to product inquiries, solve issues, suggest new items, and assist you in making difficult selections.

For the past few years, Walmart, whose CEO, Doug McMillon, is one of the keynote speakers at CES, has been adding conversational AI to help its 230 million customers find and reorder items. New services such as CoPilot for Car Shopping claim to evaluate and compare vehicle specs, and even look up dealers for you, to help you select the ideal model. This year, Zillow introduced natural language search on its website, which allows renters and purchasers to find their ideal place by asking for things like “open house near me with four bedrooms,” without having to choose a ton of parameters.

Education: Although students may misuse generative AI (picture a world of social studies reports on the Constitution that all sound identical), the US Department of Education sees promise in the technology. That includes using AI-powered speech recognition to improve the support available to students with disabilities, multilingual learners, and others who could benefit from greater adaptivity and personalization in digital learning tools, as well as helping teachers find and adapt resources for their lesson plans.

In an April 2023 TED talk, Sal Khan, the founder and CEO of Khan Academy, discussed how generative AI could revolutionize education—that is, if we put in place the proper safeguards to prevent issues like plagiarism and cheating and to allay concerns that students might outsource their assignments to chatbots.

In a 15-minute presentation titled “How AI Could Save (Not Destroy) Education,” which has received over a million views, Khan said we are on the verge of using AI to bring about the biggest positive change in the history of education, by giving each and every student on the globe a fantastic, artificially intelligent personal tutor and each and every educator a superb, artificially intelligent teaching assistant.

Khan Labs has already developed AI-powered tutors and teaching assistants for students. The tool, called Khanmigo, costs $4 per month ($44 annually), whether or not you’re a Khan Academy member.

Travel: Putting together the perfect vacation itinerary is a skill, and it can take a lot of time. In theory, travel planning is the ideal task to delegate to AI, since it can compile a list of attractions tailored to your preferences and organized in a way that makes sense for your schedule, geography, and even budget.

In practice, your results may vary. Whether you use an AI itinerary generator like GuideGeek, Roam Around, Wonderplan, Tripnotes, or the Out of Office app, or a general tool like ChatGPT, here are some tips to consider before enlisting an AI travel concierge.

First, bear in mind that the AI doesn’t arrange your schedule the way you would, by grouping attractions based on location, say, or balancing a simple lunch spot against dinner plans for a 20-course tasting menu. A careless approach could leave you eating pizza for every meal or crisscrossing a metropolis three times in an afternoon.

Make sure everything is double-checked. Your AI itinerary may make sense geographically yet lack flow, or pack too much into each day. You might also need to make modifications for your group, such as ensuring accessibility or scheduling naps (for weary children and adults alike).

Furthermore, AI rarely uses current, real-time data, so before you get too attached to checking off every natural wine bar and street food market that an AI tool recommends, confirm that the business is still in operation and keeps the same hours. CNET’s Katie Collins discovered this firsthand when she used these tools to create an itinerary for her hometown of Edinburgh, Scotland.

Moving from copilots to, in a sense, companions: To make you feel at ease conversing with their chatbots, generative AI companies are having them impersonate famous people or giving their technology a personality. They may also call these tools copilots, companions, or assistants, which distracts you from the fact that they are, well, artificial and invites you to see them as helpful partners at your disposal.

The term “anthropomorphism” refers to the long-standing practice of giving human characteristics to non-human entities such as computers or animals. Long before Siri and Alexa, there was Eliza, a natural language processing computer program developed at MIT in the 1960s.

Conversational agents are already being treated more like human beings, according to research from the Nielsen Norman Group, which describes “four degrees of AI anthropomorphism.” They include courtesy, which is engaging a chatbot with a salutation, a please, or a thank you; roleplay, which is asking the chatbot to take on the role of a person with particular traits or qualifications, such as “Give me the answer from the perspective of an airline pilot”; and companionship, which involves looking to the AI for an emotional connection.

“The terms ‘copilot’ and ‘partner,’ which imply equality, bother me. Because of their vastly inferior knowledge, AIs are not our equal partners. There is still a lot for them to learn.”
Michelle Zhou, CEO of AI startup Juji

Because of our anthropomorphizing nature, developers tend to gravitate toward characters, personas, and similar concepts. Zoom, the video conferencing application, has added a “smart assistant” that can help draft emails and chat messages, summarize meetings and chat threads, and brainstorm. Microsoft markets its AI Copilot as “your everyday AI companion.”

Meta, the tech behemoth behind Facebook, Instagram, Messenger, WhatsApp, and other platforms, allows its over 3 billion users to engage with a cast of artificial intelligence characters it developed. These are based on real-life athletes, celebrities, and artists, such as rapper Snoop Dogg, former quarterback Tom Brady, tennis player Naomi Osaka, Kendall Jenner, and Paris Hilton.

And then there’s Character.ai, where you can communicate with chatbots modeled after real-life figures like Taylor Swift and Albert Einstein, as well as imaginary characters like Super Mario from Nintendo. According to Similarweb, users spent more than 34 minutes per visit on Character.ai in November, compared with approximately eight minutes on ChatGPT.

Carr says the extremely high engagement time shows that Character.ai has succeeded in making the chatbot experience “more entertaining for the audience” while deflecting attention from the possibility that the AI may not be telling you the truth. Because you’re speaking with a fictional persona, the company is somewhat shielded from the hallucination problem, he notes; it’s presented more as a game to be played for enjoyment, which is a clever way to alibi some of those worries.

Not everyone is on board with anthropomorphizing the technology, even as chatbots take on more human characteristics. Michelle Zhou, the CEO of AI firm Juji, calls the chatbots you can create with her company’s no-code technology “assistants.”

Talking to a chatbot: A guide

The key to success with any tool you try will be having a productive conversation with it. That’s where prompt engineering comes in.

Forget HAL from 2001: A Space Odyssey and Jarvis from the Iron Man films. Instead, think of today’s chatbots as highly capable robots that can accomplish particular tasks for you; some have even compared them to an enhanced version of autocomplete. They have no idea what a fantastic tale or an exquisite painting is. They can only make sense of patterns and connections in the training data, the words, pictures, figures, and other information they have been exposed to.

If you want good, effective, and helpful output, you must make sure the dialogue you are having with the machine is good, effective, and useful. As the saying goes, garbage in, garbage out. Prevent GIGO situations by giving your prompts precise, illustrative background information and relevant material. If you don’t understand the basics of prompt engineering, you will be unhappy with the outcome.

Whether you’re looking for text, a picture, a video, or something else entirely, a fast internet search will yield dozens, if not hundreds, of tips on how to compose an effective prompt. “Teach me to negotiate,” “Write a thank you note,” “Rank dog breeds for a small apartment,” and “Help me improve this job description” are just a few of the idea starters available on ChatGPT.

To help you get started, ZDNET, the sister site of CNET, provides a prompt tutorial. Engage in discussion with the AI as you would with a human, and be prepared for some back and forth, says ZDNET’s David Gewirtz. And have the background information ready: Instead of asking “How can I prepare for a marathon?” Gewirtz advises posing the question as “I am a beginner runner and have never run a marathon before, but I want to complete one in six months. How do I get ready for a marathon?”

Finally, specify exactly what you want: A 500-word narrative? A bullet list of talking points? Slides for a presentation deck? A haiku?
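To make those tips concrete, here’s a small, self-contained Python sketch of the pattern Gewirtz describes: bundle the background, the actual request, and the output format you want into one specific prompt. The function and variable names are our own illustration, not part of any chatbot’s API.

```python
def build_prompt(background: str, task: str, output_format: str) -> str:
    """Combine background, the request itself, and the desired output
    format into one specific prompt -- the garbage-in, garbage-out antidote."""
    return (
        f"Background: {background}\n"
        f"Task: {task}\n"
        f"Format: {output_format}"
    )

# A vague prompt vs. a context-rich one, using the marathon example above.
vague = "How do I get ready for a marathon?"

specific = build_prompt(
    background="I am a beginner runner and have never run a marathon before.",
    task="Help me plan how to complete a marathon in six months.",
    output_format="A week-by-week bullet list of roughly 300 words.",
)

print(specific)  # paste this into the chatbot of your choice
```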

For images, CNET’s Shankland advises using lengthy, comprehensive language in your descriptions, and emotive words like enthusiastic, nervous, or joyous to describe people more effectively. If you run into difficulties, try searching the internet for phrases like “example prompts for generative AI images” to find printable cheat sheets you can adapt.
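If you’d rather script it than type prompts into a website, here’s a hedged sketch of the same idea using OpenAI’s image API via its Python SDK. The model name, image size, and prompt wording are illustrative assumptions, and other generators (Firefly, Midjourney, Stable Diffusion) have their own interfaces.

```python
# A minimal sketch of a descriptive, emotive image prompt sent to a
# text-to-image model. Assumes the OpenAI Python SDK and an API key;
# the model name and size are illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "A joyous climber scaling a 3D bouldering wall made of Swiss cheese, "
    "warm afternoon light, shallow depth of field, photorealistic"
)

result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # a temporary link to the generated image
```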

Several warnings

The formidable powers these technologies bestow upon you have prompted governments, ethicists, AI specialists, and others to highlight the possible drawbacks of generative AI.

Authors such as Margaret Atwood, Dan Brown, Michael Chabon, Nora Roberts, and Sarah Silverman have raised unanswered concerns over the type of data being fed into these LLMs, alleging that AI companies absorbed their copyrighted works without their knowledge, approval, or payment.

We don’t know what’s in the LLM stew, so there are worries about potential bias and a lack of diversity in these systems. These might drive them to discriminate against specific groups or individuals or reinforce negative preconceptions.

Privacy is a concern. The way in which OpenAI manages the personal data it gathers is already under investigation by the Federal Trade Commission. The FTC held a vote in November on a resolution that outlined the procedure for its “nonpublic investigations” of AI-based goods and services over the following ten years.

Furthermore, hallucinations are a serious issue that could jeopardize our confidence in the technology. The curious term was coined in 2018 by researchers at Google DeepMind, who reported that neural machine translation systems “are susceptible to producing highly pathological translations that are completely untethered from the source material.”

To what extent are these hallucinations a problem?

When researchers at Vectara, a company founded by former Google employees, attempted to quantify the problem, they discovered that chatbots make things up at least 3% of the time and up to 27% of the time. Vectara publishes a “Hallucination Leaderboard” that illustrates how often an LLM invents information when asked to summarize a document.

Hallucinations aren’t the only worry. There are concerns about how generative AI could endanger civilization; some people even believe it could wipe out humanity. That may seem excessive, but consider how criminals might use it to create new weaponry, or, less drastic but nonetheless concerning, to provide false information as part of disinformation efforts that deceive voters and influence elections.

Governments have taken notice and are developing guidelines and future restrictions. In November, the Biden administration released an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The United Kingdom hosted an AI Safety Summit the same week. The Bletchley Declaration was signed by representatives from 28 governments, including the United States, China, and the European Union, and aims to address how “frontier AI” — the most advanced, cutting-edge AI technology — may affect aspects of our daily lives such as housing, jobs, transportation, education, health, accessibility, and justice.

In December, the EU enacted “historic” AI laws that will affect tech businesses in the EU’s 27 member countries and strive to protect 450 million consumers. The AI Act “seeks to ensure that artificial intelligence systems placed on the European market and used in the EU are safe and respect fundamental rights.” According to regulators, the fundamental aim is to govern AI based on its “ability to cause harm to society” using a risk-based approach: the higher the risk, the stricter the rules.

Underlying all of these problems is an existential question: Should you use a technology just because you can? People are the ones who decide how, when, and why to employ AI, and this must be kept in mind when developing and utilizing it. Even with advanced AI, human intuition, with its capacity to perceive emotion, complexity, and nuance, still gives us the edge in decision-making.

W. Russell Neuman, one of the founding faculty members of the MIT Media Lab and a professor of media technology at New York University, suggests that we place this generative AI moment in the context of other major revolutions, such as the invention of language, the printing press, and the Industrial Revolution, “where we could substitute machine power for animal power.”

Through generative AI, we can now substitute machine thinking, decision-making, and intelligence for human intelligence. Neuman, the author of Evolutionary Intelligence: How Technology Will Make Us Smarter, says that if we execute it well, it has the same type of revolutionary potential as all those earlier revolutions.

Neuman agrees with Google CEO Sundar Pichai, who told 60 Minutes in April that artificial intelligence (AI) is the most profound technology humanity is working on, more profound than fire or electricity. It captures the essence of intelligence and humanity.

“He’s thinking along the right lines, which is another reason to take all of these ethical and control issues very seriously,” Neuman says. Instead of considering AI as something that makes decisions and tells humans what to do, consider it as a collaborator or assistant that can be used to empower and enable humanity.

Evolutionary intelligence requires recognizing that “this is not just about technology,” Neuman adds. It’s about a shift in how humans deal with the world.

What’s next

The discussion surrounding generative AI isn’t going to go away any time soon, even after a November management dispute nearly caused OpenAI to implode and briefly pushed out its well-known CEO, Sam Altman. Beyond the debate over how OpenAI makes judgments about its technology, there is the question of how its newest strategy will work out: The company is allowing people to create personalized chatbots using ChatGPT, no programming experience required. These specialized AI tools, which Altman dubbed “GPTs” (not to be confused with general-purpose technologies), will be sold this year via an app store, following Apple’s lead in popularizing mobile apps for the iPhone.

Entrepreneurs are leading the charge in AI. Generative AI businesses raised $10 billion in 2023, more than doubling the amount of venture capital invested in the field in 2022.

Last year, Sidney Hough took a break from her undergraduate studies at Stanford University to found Chord, an artificial intelligence business. Its site, Chord.pub, gathers online comments from users to provide “consensus on any topic.” For her, generative AI offers the opportunity to “bring your own algorithm” to digital platforms, thereby democratizing access to knowledge.

According to Hough, 21, certain corporations currently have complete influence over how people think, who receives information, and how it is prioritized. In a world with generative AI, users may go to their search engines or social media accounts and say, “Please rearrange my information the way I think it should be prioritized.” AI makes that kind of fine-grained control possible.

Whether you think generative AI is amazing or alarming, it’s time to take a stand and explore how humans should, and shouldn’t, embrace it.

Rather than playing the Luddite, Neuman advocates becoming the intelligent, cautious, balanced proponent of how tech can enhance, rather than compete with, human capacity.
