White-collar workers are silently fighting against AI

Not long ago, “shadow AI” seemed like a positive story. Workers were sneaking ChatGPT and Claude past IT, using personal accounts to finish in minutes tasks that had previously taken hours. An MIT study published last year found that employees at more than 90% of organizations used personal chatbot accounts for daily work, often without authorization, even though only 40% of those companies had official LLM subscriptions. The shadow economy was growing. Management called it a governance problem. The workers called it “getting the job done.”

The data now paints a different picture. For a sizable and growing portion of the workforce, the tool that employees once rushed to adopt on the sly has stopped being used at all. Not because it’s ineffective, but because they are worried about the consequences of it working too well.

For its fifth annual State of Digital Adoption report, SAP subsidiary WalkMe surveyed 3,750 executives and workers across 14 countries. More than 54% of workers said they had avoided their company’s AI tools in the previous 30 days and finished the task by hand instead. Thirty-three percent have never used AI at all. Taken together, roughly 80% of enterprise employees are either actively rejecting or quietly avoiding the technology their companies are pouring unprecedented sums into. The average digital transformation budget rose 38% year over year to $54.2 million, yet adoption problems have caused 40% of that investment to underperform.

Executives don’t understand how workers truly feel

What the early enthusiasm hid is now plain in the figures. Only 9% of employees trust AI for sophisticated, business-critical decisions, compared with 61% of executives: a 52-point trust gap. Eighty-eight percent of executives believe their staff have adequate tools; only 21% of employees agree, a 67-point difference on tool adequacy alone. As the report puts it, executives and their employees are “describing fundamentally different companies.”

The skeptics have evidence on their side, too. Johns Hopkins economist Steve Hanke has lived through enough technology cycles to know what hype looks like from the inside. “AI failed to deliver,” he said recently. “This is the real world. Welcome. Put aside the AI bubble. You know, it didn’t work out. Everyone uses it occasionally, according to all the surveys, but when you examine more closely, it hasn’t accomplished much.” In summary, Hanke said: “Productivity was weak, by the way. Productivity would increase significantly if AI were to deliver. These individuals from Silicon Valley tell you that GDP will reach 5% or 6%. Productivity is going to reach six. Just not happening.”

In a way, that skepticism lines up with the WalkMe findings. WalkMe’s CEO and co-founder Dan Adika has been watching the divergence from the front lines. In his frequent meetings with CIOs, he asks a simple question: “How many of your people are actually using AI to do meaningful work?” The answer, he said: “The numbers are sub-10%.”

Adika’s preferred metaphor, one this particular editor is also fond of, is the sports car: companies have bought every employee something very fast, but the employees don’t know how to drive it. They lack the AI skills.

The issue is partly structural rather than behavioral. “You buy every employee that Ferrari, that sports car, but they don’t know how to drive,” Adika said. “The context is that they occasionally run out of fuel. The prompting is knowing how to drive. In certain instances, there aren’t even enough roads: there isn’t an MCP server or API to truly accomplish your goals. If you have a Ferrari but no roads, no petrol, and no driver, what do you do? You’re not very quick.”

In a separate interview, Brad Brown, Global Head of Tax Technology &amp; Innovation at KPMG in the United States, reached for nearly the same metaphor. “It’s like an F1 car,” he said. “The Formula One car is incredible. However, that technology won’t help you much if you don’t have a competent driver.” Two seasoned technologists, one a founder and the other a Big Four partner, converged on the same image unprompted, a sign that both are describing something they have witnessed repeatedly at scale.

The gap is costing businesses

The downstream cost of an undriven Ferrari can now be measured. According to the WalkMe research, employees lose 51 working days a year to technology friction, almost two full months and a 42% increase from 2025. That works out to 7.9 hours a week. This week, economists at Goldman Sachs reported that employees who use AI well save 40 to 60 minutes a day on average. The math is nearly symmetrical: the productivity AI adds for those who use it effectively roughly mirrors the productivity it drains from those who cannot.
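The days-to-hours conversion is easy to sanity-check. A minimal sketch, assuming a standard 8-hour workday and a 52-week year (neither assumption is stated in the report):

```python
# Back-of-the-envelope check of the WalkMe friction figure.
days_lost_per_year = 51   # working days lost to technology friction (WalkMe)
hours_per_day = 8         # assumed workday length
weeks_per_year = 52       # assumed working year

hours_lost_per_week = days_lost_per_year * hours_per_day / weeks_per_year
print(round(hours_lost_per_week, 1))  # 7.8, in line with the reported ~7.9 hours
```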

Beneath the surface, the old shadow AI story still holds. Only 21% of employees say they have ever been briefed on AI policy, and 34% don’t even know which tools their company has approved. Yet 78% of executives say they want to discipline shadow AI use. Executives are threatening to punish people for behavior they have never declared off-limits. The paradox runs deeper still: 62% of those same executives privately concede that the risk of unauthorized AI use is exaggerated compared with the risk of doing nothing at all.

According to Keith Kirkpatrick, Vice President and Research Director of Enterprise Software Digital Workflows at The Futurum Group, “using shadow AI isn’t a behavior to penalize—rather, it’s an opportunity to address a systemic gap.” “Employees are making up for performance or efficiency gaps left by sanctioned tools and unclear governance when they use unapproved AI tools.”

AI disengagement

What’s new, and what the data is only beginning to reveal, is the layer beneath shadow AI. Workers who are not working around the rules. Workers who are not doing anything at all.

Asked to describe this dynamic, Adika paused. Of the employees resisting AI, he said, “They have pride in what they do. They will always identify and highlight the shortcomings in that tool in comparison to themselves, and they won’t allow some AI bot to take over.” It sounds unmistakably like “quiet quitting,” the pandemic-era phenomenon in which employees stopped going above and beyond without actually leaving. And tools that keep hallucinating, and that consume as much time as they claim to save, are an understandable source of frustration.

“The companies that get this right won’t be the ones that just automated the most tasks,” Adika said. “They will be the ones that figure out when the agent should act, when the human should act, and how the handoff between them works. Trust lives in that handoff.” Most businesses, he argues, haven’t really begun to think about that handoff. The MIT study found that 90% of workers still prefer humans for mission-critical work, evidence of a persistent reluctance to go deep.

Oracle has announced layoffs of tens of thousands of employees, following a similar announcement by Block. Some, however, see this as “AI washing”: covering for overhiring with a convenient justification that also happens to lift the stock price. The rank and file understands the logic. “There will be a moment when we experience fear, uncertainty, and layoffs,” Adika said. “So I believe it will be a period of transformation over time. But in the end, individuals are still not using it.”

Adika was also clear that employees who avoid AI are not wrong to sense something real; it is their conclusion that is off. “No CEO of a bank or insurance company would go out tomorrow and fire a large number of employees, because who would do the work?” He believes a “big issue” is coming, as claims that AI will replace everyone collide with the reality that “it’s just not happening right now.”

The issue of skilled drivers

Brown says he is thinking more and more about what it actually takes to close the gap between the driver and the Ferrari. He has begun classifying KPMG personnel into what he calls builders, makers, and power users: ascending levels of AI proficiency with clear career paths attached. “Creating incentives and career paths to get all of our people to that level is our current focus,” he said. “Humans need to catch up to the level of technology.”

The crucial insight in that framing is that the problem is neither intelligence nor even traditional training. “With your kind of human skills that you bring to the table in terms of critical thinking and judgment, that’s going to lead people into being makers,” Brown said. Those workers will use AI tools with ease, even building new tools themselves. In his view, the most vulnerable workers are not the ones without technical expertise. They are the ones whose employers haven’t offered them a safe harbor, a path, or a reason to try.

One-third of the enterprise workforce has never used AI tools at all, and that same group reports the lowest levels of support, training, and fear of disruption. The WalkMe report is careful to note that they are not resisting AI. They have simply never been reached. Asked whether these tools are advancing faster than workers can keep up, Brown admitted that he clearly sees a gap.

Evolution is possible—and important

Once Hanke figured out what he wanted AI for, the time savings won him over. “AI is kind of like another research assistant,” he said, “and it saves a hell of a lot of time, because if I had a research assistant doing this, I’d have to send them to the library. They’d be fiddling around over there for a week doing something I could do with AI in about an hour.” The caveat: “You have to know what they’re good for.” And, most importantly, you must know the subject well enough to spot the inaccuracies. “I know what to ask the AI. I understand how to structure what I want done,” Hanke said, citing his many years of experience in international banking, commodities, and economics.

His personal arc mirrors that of many serious academics: from banning student use outright, to cautious skepticism, to daily reliance. He went, he said, from “no” to “maybe” to “this is great—but some of these tools suck.” His assessment of the tools themselves is typically direct: “There are all kinds of AI. And some of it is truly awful. It is dependent upon your needs.”

Brown believes this is ultimately an uplifting story, but only for those who move. “The winners are the ones where you have your workforce effectively leveraging the capabilities of AI,” he said. “A workforce that does not embrace artificial intelligence will face challenges. And a work environment that is overly focused on AI and ignores the value of the human labor will struggle.”
