Technology companies including Microsoft, OpenAI, and Anthropic have detected and disrupted operations by state-affiliated hacking groups using large language models (LLMs) to augment their cyberattacks. The threat actors, linked to Russia, North Korea, Iran, and China, were observed using AI for a variety of malicious support tasks before their accounts were terminated.
In a coordinated announcement, the technology partners detailed how these groups used AI models in their operations. The findings are part of an ongoing industry effort to prevent the misuse of AI technologies. Anthropic confirmed that its Trust and Safety team identified and shut down accounts associated with these state-sponsored entities for violating its acceptable use policy.
How Threat Actors Leveraged Large Language Models
The state-sponsored groups did not use AI to create novel or more sophisticated cyberattacks. Instead, they used the models to make existing, conventional hacking techniques more efficient. Microsoft identified groups including Forest Blizzard (Russia), Emerald Sleet (North Korea), and Charcoal Typhoon (China) among the threat actors involved.
Their activities included using the LLMs for:
Reconnaissance: Researching target individuals, organizations, and publicly reported vulnerabilities.
Scripting and Tool Development: Generating and refining code snippets for tasks such as web requests, file manipulation, and process automation (an illustrative sketch of this kind of routine script follows the list below).
Social Engineering: Drafting phishing emails and other communications to trick targets.
Translation: Translating technical papers, captured documents, and computer commands to overcome language barriers.
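The reports do not publish the scripts themselves, but the tasks described are routine. The sketch below is purely illustrative and is not drawn from any of the companies' findings: it shows the kind of generic web-request and file-handling code an LLM can generate on demand. The URL, output filename, and function name are all hypothetical.

    # Illustrative only: a generic "fetch a URL and save the response" script,
    # the sort of routine automation described in the reports. Uses only the
    # Python standard library.
    import urllib.request
    from pathlib import Path

    def fetch_to_file(url: str, dest: Path) -> int:
        """Download the contents of url, write them to dest, and return the byte count."""
        with urllib.request.urlopen(url, timeout=10) as response:
            data = response.read()
        dest.write_bytes(data)
        return len(data)

    if __name__ == "__main__":
        # Hypothetical target and output file, for demonstration purposes only.
        saved = fetch_to_file("https://example.com/", Path("page.html"))
        print(f"Saved {saved} bytes to page.html")

Nothing in such a script is inherently malicious; the concern the companies raised is the speed and scale at which attackers can produce and refine this kind of tooling.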
A Coordinated Industry Takedown
The identification of this malicious activity prompted a swift, collaborative response: Anthropic, alongside Microsoft, OpenAI, and Google, disabled the accounts and assets associated with these threat actors. The multi-company effort reflects a proactive industry stance on monitoring and preventing the abuse of powerful AI systems.
The companies stated that while hackers' use of AI is an emerging threat, its current applications have been limited to augmenting existing workflows rather than creating new categories of risk. The coordinated disruption aims to keep defenders ahead of threat actors as they continue to explore the capabilities of AI in cyber operations.