
Klarna’s AI Layoffs Exposed the Missing Piece: Empathy

Solutions Review Executive Editor Tim King offers commentary on Klarna’s AI layoffs and how they exposed the real missing piece: empathy.

In 2022, Klarna, the Swedish fintech giant once valued at $46 billion, announced a sweeping layoff of approximately 700 employees—around 10 percent of its global workforce. Though the move was initially framed as a cost-cutting measure due to worsening macroeconomic conditions, the company’s later AI strategy revealed the deeper reason: automation.

By 2024, Klarna had fully leaned into AI, replacing swaths of its customer service, marketing, and support staff with tools built on OpenAI’s models, proudly claiming that the AI could do the work of 700 people.

While this shift may have seemed efficient on paper, it quickly unraveled in practice. By early 2025, Klarna’s customer service ratings began to dip, user complaints increased, and CEO Sebastian Siemiatkowski was forced to publicly admit that “cost unfortunately seems to have been a too predominant evaluation factor.” In other words, the company had sacrificed quality, morale, and trust in pursuit of AI-led efficiency.

This episode is a powerful case study of what not to do when deploying AI at scale—and how a well-structured, empathetic AI policy could have prevented both the reputational harm and operational backpedaling Klarna faced.

AI Deployment Without Transparency

Klarna failed to be upfront with both its employees and customers about the scope, intent, and implications of its AI integration. The layoffs were initially attributed to economic conditions, but it later became evident that they were driven by Klarna’s desire to automate entire functions with AI. The delayed revelation felt misleading and bred distrust.

Empathetic AI Policy Principle Violated: Transparency by Default

Had Klarna embraced transparency by default, it could have:

  • Communicated the role AI would play in the organization well in advance

  • Shared impact assessments about job functions likely to change

  • Prepared customers for how support channels might change, setting expectations accordingly

This openness could have mitigated backlash and built trust among both internal teams and external users.

Neglecting Human Dignity in Layoffs

The layoffs came swiftly and impersonally. Employees were notified via pre-recorded videos and blanket emails, leaving many feeling discarded and devalued. The process lacked empathy and offered little recognition of the contributions these individuals made to Klarna’s rapid growth.

Empathetic AI Policy Principle Violated: Human Dignity Safeguards

With proper dignity protocols, Klarna could have:

  • Delivered layoffs in person or through managers with whom employees had built relationships

  • Offered mental health support, counseling, or financial planning help

  • Created alumni networks or referral programs to support job transitions

These human-centric safeguards not only respect individuals but also preserve long-term brand loyalty among both employees and the public.

No Workforce Transition Strategy

Despite replacing hundreds of roles with AI, Klarna offered no public indication that it attempted to retrain or redeploy those workers. No large-scale upskilling or reskilling programs were communicated, and no partnerships with educational institutions or job placement services were announced.

Empathetic AI Policy Principle Violated: Workforce Transition Support

AI doesn’t have to mean job loss—it can mean job evolution. Klarna could have:

  • Re-trained customer service reps to oversee AI interactions, refine prompts, or handle escalations

  • Provided certification programs for internal employees to pivot into new roles like AI trainers or human-in-the-loop moderators

  • Formed professional support groups for transitioning employees and offered coaching services

This approach would have strengthened internal morale and avoided knowledge drain.

Ignoring Cultural & Psychological Impact

Klarna’s sudden shift created a vacuum in institutional knowledge and a collapse in workplace confidence. Employees were left wondering whether their own jobs were safe, while customers felt alienated by robotic, unhelpful service experiences.

Empathetic AI Policy Principle Violated: Psychological and Cultural Impact

Klarna needed cultural monitoring tools to:

  • Survey employee well-being and AI adoption sentiment in real time

  • Identify drop-offs in team cohesion or trust

  • Create safe spaces for employees to process organizational change

Proactively investing in cultural diagnostics would have helped Klarna course-correct before the loss of morale became systemic.

Failure to Involve Employees in the AI Shift

Klarna’s AI integration appeared to be entirely top-down. There’s no evidence the people being displaced were consulted, involved in testing, or empowered to influence how AI would reshape their workflows. In fact, many of those with the deepest customer knowledge were let go.

Empathetic AI Policy Principle Violated: Inclusive AI Development + Employee Voice

Instead, Klarna could have:

  • Created participatory workshops with frontline employees to shape AI implementation

  • Used employee feedback to inform AI guardrails or identify blind spots

  • Established feedback loops between AI outputs and employee experience to fine-tune systems

Inclusive development not only improves outcomes—it ensures buy-in, reduces resistance, and promotes innovation.

Overhyping AI Capabilities

In January 2024, Klarna published a blog post touting its AI assistant’s ability to handle two-thirds of customer chats and match the productivity of 700 people (Klarna Blog). However, users soon reported experiences that were either impersonal or insufficient, particularly for nuanced issues. Overpromising and underdelivering diminished trust in both the technology and the company.

Empathetic AI Policy Principle Violated: Transparency + Oversight

Had Klarna adopted a measured, humble communication strategy, it could have:

  • Stressed the assistant’s limitations up front

  • Retained human agents for complex or sensitive issues

  • Transparently published service quality metrics

This would have shown a commitment to accountability, not just automation.

Klarna’s Reversal: A Quiet Admission

By 2025, Klarna had begun walking back parts of its AI bet. The company initiated a new pilot program to hire human customer service agents—specifically students and rural workers—for on-demand remote roles. This move quietly acknowledged that AI had limitations, and that the irreplaceable human element was essential to customer satisfaction.

Conclusion: The Cost of Skipping Empathy

Klarna’s AI episode cost more than public embarrassment—it damaged customer trust, eroded internal morale, and forced a strategic reversal. By embracing an empathetic AI policy grounded in transparency, human dignity, support, cultural care, inclusion, and feedback, Klarna could have avoided the worst of these outcomes.

For companies racing toward AI transformation, Klarna offers a vivid lesson: AI without empathy is not just inhumane—it’s bad business.



Note: These insights were informed through web research and generative AI tools. Solutions Review editors use a multi-prompt approach and model overlay to optimize content for relevance and utility.
