Five hundred and seventy-eight days.
That is how long it took for Klarna to admit it had made a mistake.
In 2024, the Swedish payments company became one of the most cited examples of AI replacing human workers at scale. Klarna stopped hiring entirely, let its workforce shrink from 5,500 to around 3,500 through attrition, and announced that its AI assistant was now doing the work of 700 full-time customer service agents — effectively the same thing as firing them, just quieter. The headline was everywhere. Klarna was the proof of concept. The future had arrived. The workers were gone.
And then, eighteen months later, Klarna quietly started hiring again.
Customer service representatives. Content writers. Human beings. The same roles, more or less, that had been eliminated with such fanfare. The AI, it turned out, had not performed the way the press releases suggested. Customer satisfaction had dropped. Complaints had risen. The things that customers actually needed — the ability to explain a situation to someone who was listening, the judgment to make an exception, the human recognition that this particular problem was different from the last one — were not things the AI could reliably provide.
Klarna did not hold a press conference to announce this. There was no CEO on a stage explaining that the experiment had failed. The rehiring happened quietly, in job postings, in staffing agency contracts, in the small-font announcements that nobody turns into a viral thread on LinkedIn.
This is the story that has been running alongside the more dramatic one, and receiving almost none of the attention it deserves.
Forrester Research’s 2026 Future of Work report contains a finding that should stop anyone in their tracks. According to their research, 55% of employers now regret laying off workers for AI-related reasons. More than half. And Gartner, one of the most rigorous technology research firms in the world, has predicted that half of all companies that attributed headcount reductions to AI will rehire staff to perform similar functions by 2027.
Rehire. Not because the workers were wrong to leave. Not because the companies changed their minds about AI. Because the AI was not ready, and the layoffs happened anyway, and someone has to do the work.
A Washington Times investigation from March 2026 found that complaints from frustrated customers had already prompted e-commerce and financial technology companies to quietly rehire content writers, software engineers and customer service workers they had replaced with AI. IBM, Salesforce, Google and Meta had all added undisclosed numbers of workers in redefined roles. One in three employers who had laid off workers for AI-related reasons had spent more on rehiring than they had saved from the original layoffs.
And 700 people, in the case of Klarna, had their lives disrupted in the interval.
There is a term for what was happening, and Forrester named it directly: AI washing. Companies attributing layoffs to AI implementation despite the cuts being financially motivated. Using a technology story as cover for a cost-cutting decision. Standing in front of shareholders and announcing AI-driven efficiency gains while the actual reasoning was far more mundane: the pressure to reduce payroll, the pressure to demonstrate that leadership was moving with the times, the pressure to deliver a headline that would move the stock price.
Gartner’s October 2025 survey of 321 customer service and support leaders found that only 20% had actually reduced agent staffing because of AI. Most said headcount remained steady even as they handled more customers. AI was primarily augmenting human work, not replacing it. The reality of what was happening inside companies was dramatically quieter and more incremental than the public story.
But the public story had consequences. Each time a CEO stood at a podium and announced that AI had replaced hundreds of workers, thousands of people in similar roles updated their CVs at midnight. Parents sat across the dinner table from their children studying for professional qualifications and wondered whether to say something. Graduates deferred job searches. Careers pivoted. Lives changed shape based on a narrative that was, in significant part, built on exaggeration.
The question nobody in a position of power has been forced to answer is this: if the AI was good enough to fire 700 people, and then it turned out it was not good enough, and you hired people back — who is responsible for the 578 days in between?
The people laid off by Klarna did not get those days back. Some of them moved cities. Some of them left their field entirely. Some of them made decisions in those eighteen months that they would not have made if they had known the AI was not going to perform as promised, that the company was going to need them again, that the headline was going to age so poorly.
“Some organizations moved quickly from ‘AI can assist this work’ to ‘AI can replace this work,’ and they are now recalibrating as they better understand where humans still add critical value,” said Jessica Smith, a workforce analyst quoted in the Washington Times report.
The word recalibrating is doing a lot of work in that sentence. For the people whose lives were disrupted, recalibrating does not cover it.
There is a second story running in parallel to the Klarna story, and it is receiving even less attention.
The people who were never hired in the first place.
Entry-level tech hiring decreased 25% year over year in 2024. Employment for software developers aged 22 to 25 has declined nearly 20% from its peak in late 2022. These are not small movements at the margins. They are the quiet, unremarkable, barely-covered collapse of the on-ramp into the technology industry for an entire generation of workers.
Among 400 graduates at a leading technology institute in India, fewer than 25% had secured job offers as their course ended in early 2026. Entry-level hiring at major technology companies had dropped by more than 50% over three years. These students had done what they were told. They had chosen the right field, studied the right subjects, built the right skills. They arrived at the door and found it had been quietly closed while the conversation was happening somewhere else.
The Anthropic research this site is partly based on — published in March 2026 by researchers Massenkoff and McCrory, using real usage data from millions of AI interactions — found that hiring of workers aged 22 to 25 into AI-exposed occupations had dropped by around 14% since ChatGPT launched. Not fired. Not replaced. Simply not hired. The invisible displacement, the one that leaves no press conference and generates no apology and requires no rehiring because the worker was never there to begin with.
I wrote about this in a previous piece on this site. The Klarna story and the entry-level story are the same story told from different ends. One is visible and dramatic. One is invisible and structural. Both have the same cause.
What makes all of this so difficult to process honestly is that AI is also genuinely useful. I know this because I use it every day.
I am a corporate lawyer. I have written elsewhere on this site about how AI has given me something I had not had in a decade of legal work: evenings. The unbillable hours that used to consume my weekends have largely evaporated. My drafting is faster, my research is better, and the quality of my work at the high end has probably improved because I am less exhausted.
I am not against the technology. I am against the dishonesty around it.
The dishonesty is specific and it has a direction. When AI works, the CEO takes the credit and the productivity gain. When AI does not work, the workers who were let go bear the cost. The asymmetry is not incidental. It is structural. It is the reason why the rehiring stories happen quietly and the layoff stories happen loudly. Because the loudness serves the people who benefit from the story, and the quiet serves the people who need to avoid accountability for what happened.
Forrester identified something else in their research that deserves more attention than it has received. They track what they call “coasters” — disengaged workers who have decided their employer does not deserve their full effort. In 2024, coasters accounted for 27% of the workforce. In 2025, 25%. In 2026, the projection is 28%.
When a quarter of your workforce is actively withholding discretionary effort, no amount of AI compensates for that loss. Knowledge work runs on things that are extremely difficult to measure: the extra thought someone puts into a problem, the proactive conversation that prevents a mistake, the willingness to stay an hour longer because the situation genuinely requires it. These things do not appear on a productivity dashboard. They appear in the quality of work over time, and in the trust that builds between an organisation and the people who make it function.
The employees who watched their colleagues get fired for an AI that did not work, who were told to be grateful they still had jobs, who were told to use AI for half their work while watching the AI fail at the other half — these are not people who give discretionary effort. They are people who do exactly what is required and nothing more. And in knowledge work, that gap is enormous.
I want to be careful not to tell a simple story here, because the truth is not simple.
There are companies that have used AI genuinely and well, that have become more productive without damaging their workforce, that have found real efficiencies and passed some of them on to workers and customers. There are executives who made decisions in good faith about AI capabilities that turned out to be wrong, not because they were dishonest but because everyone was guessing in a rapidly moving situation and some guesses were bad.
And there is the harder truth, which is that AI is going to get better. The Klarna story is not evidence that AI will not eventually replace many of the roles it failed to replace today. It is evidence that the timeline was wrong, and that people were treated as acceptable losses in a bet about that timeline that turned out badly.
The Anthropic researchers were careful about this in their March 2026 paper. They found limited evidence of AI causing mass unemployment so far. They also found structural signs — slowing entry-level hiring, shifting job growth projections — that pointed toward a real and ongoing transformation. The absence of mass unemployment today is not proof that the transformation is not happening. It may be proof that it is happening slowly enough to be difficult to see clearly, which is precisely the condition that produces the most avoidable harm.
There is a sentence I have not seen any CEO or board member asked to answer directly.
If you fired someone because of AI, and the AI did not work, and you hired them back or hired someone else for a similar role — what do you owe the people in between?
Not legally. Most of them signed releases, received severance, and have no legal claim. What do you owe them as a matter of basic accountability? The 578 days, the disrupted lives, the careers that bent in directions they would not have chosen?
The answer, so far, has been nothing. The story moves on. The next round of AI announcements arrives. The next set of layoffs gets attributed to automation. The next wave of entry-level workers finds the door slightly more closed than it was the year before.
A silence where a career might have been.
That is the cost nobody is calculating.