The world’s developed economies are mired in a protracted productivity crisis. In the decade after the 2008 financial crisis, growth in output per hour worked across the Group of Seven wealthy nations slowed to less than 1% a year, less than half the pace of the previous ten years. This poor performance is the developed world’s biggest economic problem and a major source of political and geostrategic unrest.
Artificial intelligence is a possible remedy. BlackRock CEO Larry Fink says the technology will “transform margins across sectors.” Goldman Sachs projects it could raise U.S. productivity growth by up to 3 percentage points a year over the next decade. The McKinsey Global Institute reckons the potential boost to global GDP could reach $26 trillion.
Investors should be wary of the hype. Four characteristics of AI suggest that, while its effects on individual companies’ bottom lines may well be positive, its impact on the economy as a whole will be far less striking. Indeed, machines that can learn on their own may even deepen the productivity crisis.
Start with AI’s contribution to new scientific knowledge, the ultimate driver of modern economic growth. The technology’s extraordinary predictive powers have enabled significant advances in data-intensive fields such as biology and chemistry. Yet science produces useful knowledge only when it can explain why events occur, not merely predict them.
The ancient Babylonians, for example, were adept at predicting astronomical events. But they never grasped the laws of physics that explain those phenomena. Only when scientists hit upon the scientific method, which involves developing explanatory theories and testing them through experiments, did they come to understand how the universe works. Because they can both understand and predict, modern scientists can put a man on the moon, something their Babylonian forebears could only dream of.
AI models are digital Babylonians rather than automated Einsteins. They have revolutionized computers’ ability to detect meaningful patterns in vast datasets, but they cannot formulate the causal theories on which new scientific discoveries depend. As Judea Pearl, a computer scientist at the University of California, Los Angeles, and his co-author Dana Mackenzie put it in their 2018 bestseller “The Book of Why,” “humans understand cause and effect; data do not.” Without causal reasoning, AI’s predictive genius will not make human scientists obsolete.
The second claim made for the technology is that AI will cut business costs by automating more routine knowledge work. That is a stronger argument, and early evidence supports it. One recent study found that AI-powered chatbots helped customer service agents resolve 14% more issues per hour. The catch is that the aggregate impact of such efficiency gains is likely to be modest.
Daron Acemoglu of the Massachusetts Institute of Technology reckons that AI could perform about 20% of the tasks carried out by U.S. workers today, and that in roughly a quarter of those cases it would be cost-effective to replace humans with algorithms. Yet even if that substituted for close to 5% of all labor, Acemoglu estimates that aggregate productivity would rise by only about half a percent over a ten-year period. That is barely a third of the ground lost since 2008.
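To see why the economy-wide effect is so small, consider a rough back-of-the-envelope sketch of the arithmetic. The first two figures below are Acemoglu’s estimates as cited above; the assumed cost saving on automated tasks is a purely hypothetical placeholder, used only to show the shape of the calculation, not a number from his work.

```python
# Back-of-the-envelope illustration: a small, cost-effective slice of labor
# translates into an even smaller aggregate productivity gain.
share_of_tasks_ai_could_do = 0.20      # ~20% of U.S. work tasks (Acemoglu, as cited)
share_that_is_cost_effective = 0.25    # ~a quarter of those worth automating (as cited)

share_of_labor_replaced = share_of_tasks_ai_could_do * share_that_is_cost_effective
print(f"Labor actually replaced: {share_of_labor_replaced:.0%}")          # ~5%

assumed_cost_saving_on_those_tasks = 0.10   # hypothetical average saving, for illustration only
aggregate_gain = share_of_labor_replaced * assumed_cost_saving_on_those_tasks
print(f"Implied productivity gain over the decade: {aggregate_gain:.1%}")  # ~0.5%
print(f"Averaged per year: {aggregate_gain / 10:.2%}")                     # ~0.05 points
```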
Any return of economic dynamism would be welcome. The third problem, however, is that there is a large class of cases in which deploying AI may actually reverse productivity gains.
Some of the technology’s early triumphs came in games. In 2017, the world was stunned when Google DeepMind’s AlphaZero algorithm crushed even the most sophisticated computer chess opponents. That raised the prospect of applying AI’s strategic prowess to other competitive arenas, such as digital marketing or financial trading. The catch is that in real life, unlike in games, rival players can invest in AI too. Spending that makes sense for an individual company therefore leaves the group as a whole worse off. An AI arms race will raise costs while leaving total revenue unchanged.
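The logic is that of a classic prisoner’s dilemma. A minimal sketch, using hypothetical profit figures chosen only to illustrate the ordering of outcomes, makes the point: investing is the best move for each firm individually, yet when both firms invest they end up worse off than if neither had.

```python
# Illustrative prisoner's-dilemma payoffs for an AI "arms race" between two rivals.
# The profit numbers are hypothetical; only their relative ordering matters.
PAYOFFS = {
    # (firm_a_invests, firm_b_invests): (profit_a, profit_b)
    (False, False): (100, 100),   # neither invests: comfortable margins for both
    (True,  False): (130, 60),    # sole investor steals market share
    (False, True):  (60, 130),
    (True,  True):  (80, 80),     # both invest: shares unchanged, costs higher
}

def investing_pays(rival_invests: bool) -> bool:
    """Does investing beat abstaining for firm A, given the rival's choice?"""
    return PAYOFFS[(True, rival_invests)][0] > PAYOFFS[(False, rival_invests)][0]

# Investing is the dominant strategy whatever the rival does...
assert investing_pays(rival_invests=False) and investing_pays(rival_invests=True)
# ...yet if both firms follow it, each earns less than if neither had invested.
assert PAYOFFS[(True, True)][0] < PAYOFFS[(False, False)][0]
print("Both invest, both worse off:", PAYOFFS[(True, True)])
```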
The history of quantitative investing offers a lesson. When investors first began identifying systematic factors such as value and momentum in the early 1970s, the few firms prepared to invest in statistical research enjoyed super-normal returns. By the end of the decade, however, their rivals had caught up. The excess returns were competed away, yet everyone was left bearing the costs.
This self-defeating cycle will be repeated elsewhere. The dirty secret of advertising in the analog era was that it was often a race to stand still. In one of Harvard Business School’s best-known teaching cases, David Yoffie examined the so-called “cola wars” waged by PepsiCo and Coca-Cola from 1975 to the mid-1990s. Between 1981 and 1984, Coke doubled its advertising budget; Pepsi responded in kind. Both companies’ costs rose, yet their respective market shares barely budged. In the age of digital marketing, AI threatens to bring the cola wars to every sector of the economy.
That points to a fourth feature of AI, one that will undermine productivity more insidiously. If an AI arms race forces companies to make enormous capital outlays just to maintain their market share, smaller firms will inevitably be squeezed out. Industries will tend toward oligopoly, and competition will diminish. Innovation will suffer, and productivity will decline further still.
In 1987, the Nobel Prize-winning economist Robert Solow lamented that “you can see the computer age everywhere but in the productivity statistics.” Before long, AI’s consequences may be all too visible, though not in the way the technology’s champions hope.