AI Exploited by Hackers for Cyber Theft, Warns Anthropic

August 29, 2025

Anthropic has revealed that hackers put its advanced AI to work in a series of sophisticated cyber thefts. The firm's chatbot, Claude, was used not only for large-scale data exfiltration and extortion schemes but also in fraudulent employment operations.

In one case, North Korean fraudsters exploited Claude to secure remote positions at major US tech firms, putting a new twist on a traditional job scam and channelling salaries back to the regime under false identities.

Another disturbing incident involved what Anthropic calls ‘vibe hacking’, where AI was deployed to breach at least 17 organisations, including government bodies. Hackers harnessed Claude to formulate data exfiltration strategies and even to draft targeted extortion demands with suggested ransom amounts. The scale and precision of these attacks highlight how rapidly threat actors are adapting as AI tools become more accessible.

The situation is a double-edged sword. On one hand, the rise of agentic AI—systems that operate with a high degree of autonomy—offers unprecedented gains in speed and efficiency. On the other, experts such as cyber-crime specialist Alina Timofeeva warn that the window between a vulnerability being discovered and being exploited is shrinking fast. It is a clear call to invest in proactive security measures rather than waiting for the next breach.

Anthropic's findings also underscore the evolving nature of remote job scams. By using AI to craft fake profiles and manage application processes, fraudsters are pushing an old tactic into new territory. As Geoff White, co-presenter of the BBC podcast The Lazarus Heist, explains, even well-intentioned employers can end up inadvertently violating international sanctions by hiring such workers.

While AI isn't spawning an entirely new breed of cybercrime, the way it amplifies traditional methods such as phishing and software exploits is a timely reminder that every digital tool requires careful safeguarding. Organisations must treat AI systems as critical assets, worthy of the same level of protection as any confidential data repository.