What, then, does the future hold for AI and the world of work in the medium term? It’s likely to be a mixed bag. There have been, and will continue to be, disappointments and failures as use of the technology expands.
In 2018 IBM had to ditch a multi-million-dollar project designed to help treat cancer patients after it was found to be giving clinicians bad advice. During the coronavirus pandemic Walmart abandoned the use of robots to scan shelves and assess stock levels when it realised humans were just as effective. An October 2020 study conducted by the MIT Sloan Management Review and the Boston Consulting Group, which surveyed more than 3,000 business leaders running companies with annual revenues above $100 million, found that only ten per cent of respondents felt that their investment in AI had produced a “significant” return.
AI will cause disruption, too. “Better-educated, better-paid workers will be the most affected by the new technology, with some exceptions,” research from the Brookings Institution found in November 2019. Those whose jobs currently involve a close focus on data will be particularly vulnerable: market researchers, sales managers, computer programmers and personal finance advisers among them. Those whose jobs involve a lot of interpersonal skills, such as those in education and social care, will probably be less affected: AI is very unlikely to replace human compassion and empathy. That said, in Japan care robots are already used to help the country’s ageing population. And in any case, it’s dangerous to make sweeping generalisations. The fact is that AI adoption will vary around the world according to local culture and social attitudes. Automation in finance in Singapore is likely to be very different from automation in finance in Pakistan.
If Google’s work with computer vision or Harvard’s study with psychiatrists is anything to go by, though, it seems likely that the general trend will be for AI not to replace existing jobs but to transform them – and to create new ones, too. Already, thousands of roles exist that would have been unimaginable at the turn of the century. Scores of people now work on AI labelling, helping to compile the datasets that train machine-learning systems. Thousands more have been taken on at companies such as Facebook and YouTube to moderate content that might break their platforms’ rules – content that, in many cases, has first been flagged by an AI.
Researchers at MIT Sloan and the Boston Consulting Group argue that the companies poised to benefit most from AI are those that use it to augment and reshape traditional processes rather than replace them. In other words, they create an environment in which humans learn from AI and AI learns from humans. The toolmaker Stanley Black & Decker is one example. It has started using computer vision to check the quality of the tape measures it manufactures. The system flags defects in real time, spotting problems early in the production cycle and so reducing wastage. But humans are still on hand to inspect the output and make judgement calls on the worst faults.
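To make that pattern concrete, here is a minimal sketch of this kind of human-in-the-loop inspection in Python. The scoring function and both thresholds are hypothetical stand-ins – Stanley Black & Decker’s actual system has not been published – but the routing logic is the point: clear items pass, marginal ones are flagged early, and the worst faults go to a person.

```python
# A hedged sketch of human-in-the-loop visual inspection.
# score_defect() and both thresholds are hypothetical stand-ins;
# the real system's model and tuning are not public.

FLAG = 0.3    # assumed score above which an item is pulled from the line
SEVERE = 0.8  # assumed score above which a human inspector decides

def score_defect(frame: bytes) -> float:
    """Stand-in for a trained computer-vision model that returns a
    defect score between 0.0 (clean) and 1.0 (certain defect)."""
    return 0.0  # placeholder

def inspect(frame: bytes) -> str:
    score = score_defect(frame)
    if score >= SEVERE:
        return "escalate to human"   # a judgement call on the worst faults
    if score >= FLAG:
        return "pull from line"      # caught early, reducing wastage
    return "pass"
```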
Experts are key to creating trustworthy AI systems, says Ken Chatfield, the vice-president of research at Tractable, an AI firm that uses computer vision to help make decisions about insurance claims after car crashes – its AI is already used in the real world by some of the biggest insurance companies. The company initially trained its AI on thousands of images of vehicles that had been in accidents – images showing damaged door panels, broken windscreens and more.
But it saw the biggest improvements in the system’s performance when the damage highlighted in images had been labelled by specialists with years of experience in assessing crash reports. And it is human insurance agents who take over once the AI has reviewed the images and suggested what the next steps should be. “The data in itself is not enough, and also our knowledge as researchers is not enough – we really need to draw on the knowledge of experts in order to be able to train models,” Chatfield explains. “Involving the expert is also what we need to build up trust”.
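Two human touchpoints run through the Tractable example: specialists label the training data, and agents take over once the model has made its suggestion. A hedged sketch of both follows, with every name and threshold invented for illustration – the company’s actual pipeline is not public.

```python
# Illustrative only: Tractable's real data structures are not public.
from dataclasses import dataclass

@dataclass
class LabelledImage:
    image_path: str
    damaged_parts: list[str]  # e.g. ["door_panel", "windscreen"]
    labelled_by: str          # a specialist assessor, not a generic annotator

def triage(suggested_action: str, confidence: float) -> str:
    """Route the model's output: low-confidence cases go straight to a
    human agent, and even confident ones are confirmed by a person."""
    HANDOVER = 0.9  # assumed threshold
    if confidence < HANDOVER:
        return "human agent reviews from scratch"
    return f"human agent confirms: {suggested_action}"
```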
The London-based lawyer Richard Robinson, the CEO of Robin AI, has struck a not dissimilar balance in the legal field. He quit his job at a large law firm when he became convinced that many of the repetitive tasks that went into contract work could be automated. “A lot of what I would spend my time doing as a lawyer felt like it didn’t need much brain power,” he explains. His view was that machine learning could be utilised for reviewing some types of contract, such as those concerned with employment conditions. The tasks involved seemed simple enough.
It didn’t, however, turn out that way. “The truth is it was much more difficult than we anticipated,” Robinson says. “There are so many random things that could be in that document, that you can’t be confident that the AI will always identify them.” He therefore created a system in which AI works alongside human lawyers rather than instead of them. The company’s system has been trained on historical contracts – both those in the public domain and documents provided by clients – and taught to look for particular elements. It can, for instance, detect whether a non-compete clause has been sneakily added to a business contract, or whether an employment contract stipulates non-standard working hours.
If it finds anomalies, the system alerts a human lawyer via email and they then check the document. The same thing happens if the system is unable to interpret a particular clause or contract. A recent assignment the company took on was checking contracts between big fast-food retailers and their suppliers during the early months of the coronavirus pandemic, to find out what each party’s obligations were in the event of a crisis.
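A toy version of that loop might look like the following, with simple pattern rules standing in for Robin AI’s trained models (which are not public) and a print statement standing in for the email alert.

```python
# Hypothetical rules standing in for a trained contract-review model.
import re

CHECKS = {
    "non-compete clause": re.compile(r"non-?compete", re.IGNORECASE),
    "non-standard hours": re.compile(r"night shift|on[- ]call|weekend work",
                                     re.IGNORECASE),
}

def review(contract_text: str) -> list[str]:
    """Return everything a human lawyer should look at."""
    if not contract_text.strip():
        # The system can't interpret the document, so a person takes over.
        return ["unreadable contract"]
    return [name for name, rx in CHECKS.items() if rx.search(contract_text)]

def alert_lawyer(findings: list[str]) -> None:
    for finding in findings:  # in production: an email to the reviewing lawyer
        print(f"Flagged for human review: {finding}")

alert_lawyer(review("The employee agrees to a 12-month non-compete ..."))
```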
Robinson’s view is that lawyers find checking contracts tedious. At the same time, it’s dangerous to rely wholly on AI: even a system that gets things right 96 per cent of the time isn’t good enough when companies’ and individuals’ lives and livelihoods are at stake. “We want to use AI to make the first attempt at everything in situations where it’s really easy for a person to check and see if it’s wrong or right,” he says.
However organisations end up using AI, there’s no doubt that as the technology spreads it will become easier to access and operate. At present most AI deployments involve handcrafted technology. In the future, a company’s AI requirements may be handled by a third party, using software that feels as straightforward as a word processor or slideshow builder.
A company that wants to use AI to analyse particular datasets or images will be able to build its system from a template. The algorithm behind that template may not have been created by the third-party service selling it, but by another company further up the chain of businesses developing and industrialising AI. The technology will become plug-and-play. By that point we may hardly notice its interaction with our daily lives.
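Embryonic versions of this already exist. The sketch below uses the open-source Hugging Face transformers library purely as a stand-in for the kind of off-the-shelf template imagined here: one line fetches a model built by someone further up the chain, and the buyer never touches the algorithm itself.

```python
# "Plug-and-play" AI in miniature: pip install transformers
from transformers import pipeline

# The underlying model was trained further up the chain, not by the
# service exposing this one-line interface.
classifier = pipeline("sentiment-analysis")

print(classifier("The delivery was late and the product arrived damaged."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99}]
```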
Once this happens the world really will change significantly. People’s workplaces will face automation at a greater scale than at any point so far this millennium. For many the entire nature of work may change. How we interact with businesses and government services will also be transformed.
Societies that deploy AI will need to learn how people react to the technology and what their expectations of it are. At the same time, individuals will only follow the directions given by an AI if the system works efficiently, is understandable – and can be trusted.