The potential of artificial intelligence has tech investors very excited. Industry titans Microsoft (MSFT.O) and Amazon (AMZN.O) have poured tens of billions of dollars into startups developing software that can produce human-sounding text, images, and audio and answer users’ questions. The key question, though, is whether the technology can deliver lasting productivity gains. Not even ChatGPT, the widely used chatbot from industry darling OpenAI, has a reliable answer to that one.
Investments in generative AI companies exceeded $27 billion in the year ending December 8, four times the amount invested in 2022. The majority of that funding came from Big Tech companies rather than venture capital firms like Sequoia and Andreessen Horowitz. Microsoft has invested $12 billion in OpenAI and Inflection AI, while Amazon is backing rival startup Anthropic.
The best-known example of generative AI is ChatGPT, which can quickly compose a song in the vein of Taylor Swift or hold a coherent conversation about the British monarchy. Businesses are beginning to embed the software in workflows to communicate with people or to create virtual assistants. So-called AI “co-pilots”, for instance, can summarize intricate legal documents in minutes, a task that typically takes hours. Microsoft Teams messaging software now offers a co-pilot that takes notes during video meetings.
The business opportunity from AI is still difficult to quantify. Basic questions about revenue and costs are in flux, and the software may open up business opportunities that are hard to imagine today. A reasonable starting point, though, is to assume each industry allocates a portion of its yearly revenue to the technology. If that happens, IDC analysts predict global spending on generative AI will surpass $140 billion in 2027, up from $16 billion this year.
FOUNDING FATHERS
Generative artificial intelligence rests on foundation models, which mimic the way brain cells interact. They are trained by ingesting vast amounts of data, such as Wikipedia articles and news stories. Building them is expensive: developing OpenAI’s GPT-4 model reportedly cost more than $100 million, according to Sam Altman, the company’s recently fired and then reinstated chief executive. For investors, though, the rewards have so far outweighed the costs. OpenAI, which was valued at $29 billion in April, is pressing ahead with a plan to sell stock at a valuation of roughly $86 billion, according to a person familiar with the situation.
For now, AI companies charge clients to use their models, much as software companies do. Large financial groups like Morgan Stanley (MS.N) pay OpenAI to develop tailored models that can summarize in-depth research or parse vast volumes of internal data. Other users pay for access based on usage: each 750-word query costs about one cent. Those who want priority access to OpenAI’s models can pay $20 a month. The company also offers a free version of ChatGPT, using customers’ queries to train and refine its models. Anthropic and Alphabet offer similar conversational tools, dubbed Claude and Bard respectively.
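For illustration only, the pricing described above implies some simple arithmetic. The sketch below uses assumed figures drawn from this article (roughly one cent per 750-word query, a $20 flat monthly fee); it is not OpenAI’s actual rate card, and in practice the usage-based API and the subscription are separate products.

```python
# Rough break-even sketch for the pricing described above. Figures are
# assumptions taken from the article, not OpenAI's actual rate card:
# roughly $0.01 per 750-word query on pay-per-use, versus a flat $20
# monthly fee for priority access (in practice these are separate products).

COST_PER_QUERY = 0.01          # assumed: about one cent per 750-word query
SUBSCRIPTION_PER_MONTH = 20.0  # assumed: $20 per month flat fee


def pay_per_use_bill(queries_per_month: int) -> float:
    """Estimated monthly bill under usage-based pricing."""
    return queries_per_month * COST_PER_QUERY


def breakeven_volume() -> int:
    """Query volume at which the flat monthly fee matches pay-per-use."""
    return int(SUBSCRIPTION_PER_MONTH / COST_PER_QUERY)


if __name__ == "__main__":
    for volume in (100, 1_000, 5_000):
        print(f"{volume:>5} queries/month -> ${pay_per_use_bill(volume):.2f} pay-per-use")
    print(f"Flat fee matches pay-per-use at about {breakeven_volume()} queries/month")
```

Under these assumed numbers, heavy users reach the flat fee at around 2,000 queries a month; light users are better off paying per query.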
This provides a steady source of income. Reuters has reported that OpenAI is expected to generate $1 billion in revenue in 2024, up from $200 million this year. The company can make money in other ways, too. Last month, OpenAI said it would build a platform akin to Apple’s (AAPL.O) App Store, letting companies and customers repurpose or resell chatbots for a fee.
COSTLY GPT
However, AI companies face heavy demands on their resources. Foundation models require graphics processing chips from companies like Nvidia (NVDA.O), which can cost up to $10,000 apiece. The computing power needed to store data and answer queries is another significant expense. Microsoft and Amazon are providing part of their equity investments in OpenAI and Anthropic in the form of cloud computing.
Startups are burning cash: The Information reported that OpenAI lost $540 million last year. Because models require ongoing maintenance, AI firms are also expected to carry higher overheads than traditional software companies, which can resell identical code. Bank of America analysts predict AI companies’ gross margins will range from 50% to 60%, well below the software industry average of 60% to 80%.
Competition is intensifying, too. Alphabet (GOOGL.O), Amazon, Microsoft and Elon Musk’s Grok are all in the fray, and organizations are building AI systems that can plug into a variety of foundation models. Free models include those from French startup Mistral AI and Meta Platforms’ (META.O) Llama 2. The AI firms’ own investors are also potential competitors: Microsoft’s products built on OpenAI’s models may be more user-friendly for its clients.
Lawsuits and regulation are a final obstacle for AI companies. Watchdogs in the European Union and other jurisdictions are imposing increasingly rigorous rules on AI. Competition authorities, wary of the ties between Big Tech and AI startups, are sniffing around. A group of authors led by comedian Sarah Silverman is suing OpenAI and Meta, alleging unauthorized use of their work. Universal Music is suing Anthropic over the misuse of song lyrics.
CO-PILOT
While most attention is focused on large foundation models, other startups are taking a more targeted approach. Venture capitalists are pouring money into firms that build co-pilots tailored to particular industries, such as banking or healthcare. Morgan Stanley analysts reckon these virtual assistants can help people solve specific problems 14% faster. Global law firms Clifford Chance and Allen & Overy use technology from Sequoia-backed Harvey and Robin AI for legal research and drafting documents.
These startups typically charge an upfront fee and then bill based on usage. Because co-pilots are built on top of foundation models, their initial outlay is often lower. Even so, fees for access to models from OpenAI, Meta and Anthropic can account for as much as 20% of these businesses’ operating costs, according to tech investor Prosus.
Profits are elusive here, too. Many young startups are still working out how to solve industry-specific problems. GitHub Copilot, an AI assistant that helps users write code, charges $10 a month. Yet according to the Wall Street Journal, in the early months of this year it lost more than $20 per user a month on average, and as much as $80 in some cases.
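A rough sketch of what those reported figures imply for per-user economics is below. The cost-to-serve is derived from the article’s numbers ($10 fee, losses of $20 to $80 per user), not a disclosed figure.

```python
# Illustrative per-user economics for a co-pilot, using the GitHub Copilot
# figures reported above: a $10 monthly fee and average losses of more than
# $20 per user (up to $80 for heavy users). The implied cost-to-serve is
# derived from those numbers, not a disclosed figure.

MONTHLY_FEE = 10.0       # reported subscription price per user
AVERAGE_LOSS = 20.0      # reported average monthly loss per user
HEAVY_USER_LOSS = 80.0   # reported loss for the heaviest users


def implied_cost_to_serve(fee: float, loss: float) -> float:
    """Monthly cost per user implied by the fee and the reported loss."""
    return fee + loss


def gross_margin(fee: float, cost: float) -> float:
    """Gross margin as a fraction of revenue (negative when costs exceed fees)."""
    return (fee - cost) / fee


if __name__ == "__main__":
    for label, loss in (("average user", AVERAGE_LOSS), ("heavy user", HEAVY_USER_LOSS)):
        cost = implied_cost_to_serve(MONTHLY_FEE, loss)
        print(f"{label}: implied cost ${cost:.0f}/month, margin {gross_margin(MONTHLY_FEE, cost):.0%}")
```

On these assumptions, an average user costs roughly $30 a month to serve against $10 of revenue, a gross margin of about minus 200%.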
New technologies often trigger investment rushes on little more than hazy promises of future profits. The sheer breadth of potential uses for artificial intelligence explains investors’ frenzy. Ultimately, though, its staying power will depend on businesses reaping productivity gains that outweigh the investment. A convincing answer to that question is still a long way off.