Google’s Threat Analysis Group (TAG) and Mandiant have reported that a China-linked cyber-espionage group has been using large language models (LLMs) to support its attack operations. The group, tracked as STORM-1575 and also known as Bronze Highlander, was observed interacting with AI chatbots to create and refine malicious code.
STORM-1575, active since 2021, primarily targets government and defense organizations in the United States. Google’s findings provide concrete evidence of state-sponsored actors exploring AI to enhance their offensive capabilities.
STORM-1575’s AI-Assisted Operations
According to the report, STORM-1575 used Anthropic’s Claude LLM to seek assistance with scripting. The group’s interactions with the AI model were aimed at developing code related to their use of sqlmap, a well-known open-source tool for executing SQL injection attacks. This indicates an effort to automate and streamline the technical aspects of their cyber campaigns. The threat actor leveraged the AI to troubleshoot and improve scripts intended for malicious purposes.
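The report does not include the group’s actual scripts. For context only, sqlmap is an open-source tool driven from the command line, and automation around it typically means assembling and launching such invocations programmatically. The sketch below is purely illustrative (the target URL, option values, and helper name are hypothetical, not taken from the report) and only builds the command line rather than running it:

```python
import shlex

def build_sqlmap_command(target_url: str, level: int = 1, risk: int = 1,
                         batch: bool = True) -> list[str]:
    """Assemble an sqlmap command line for a given target URL.

    Hypothetical helper for illustration; `-u`, `--level`, `--risk`,
    and `--batch` are real sqlmap flags, but real campaigns would use
    many more options (see `sqlmap --help`).
    """
    cmd = ["sqlmap", "-u", target_url,
           "--level", str(level),   # test depth (1-5)
           "--risk", str(risk)]     # payload risk (1-3)
    if batch:
        cmd.append("--batch")       # accept default answers, for unattended runs
    return cmd

# Example with a placeholder URL; a wrapper script would pass this
# list to subprocess.run() instead of just printing it.
cmd = build_sqlmap_command("http://testsite.example/item.php?id=1", level=2)
print(shlex.join(cmd))
```

This is the kind of glue code an LLM could plausibly help write or debug, which is consistent with the report’s description of the activity as productivity-oriented rather than novel attack development.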
Detection and Industry Response
The malicious activity was detected by Anthropic, the developer of the Claude AI model, which promptly terminated the accounts associated with the threat actor. Google’s broader report, titled “Hacking an AI-powered future,” notes that this is not an isolated incident: other state-backed threat groups are also experimenting with LLMs. The report identified China-linked APT41 (Wicked Panda) and Iran-linked APT35 (Charming Kitten) among those using AI for tasks such as generating spear-phishing email content, refining code, and translating technical documents. Google’s assessment stated that while these groups are exploring AI for productivity gains, their use of the technology has not yet produced novel or more sophisticated attacks.
Source: https://www.infosecurity-magazine.com/news/chinese-hackers-cyberattacks-ai/