Concise Cyber

Anthropic’s Claude AI Used by State-Sponsored Hackers in Cyber Operations

Technology companies including Microsoft, OpenAI, and Anthropic have detected and disrupted operations by state-affiliated hacking groups using large language models (LLMs) to augment their cyberattacks. The threat actors, linked to Russia, North Korea, Iran, and China, were observed using AI for a variety of malicious support tasks before their accounts were terminated.

In a coordinated announcement, the technology partners detailed how these groups leveraged AI. The findings were part of an ongoing effort to prevent the misuse of AI technologies for malicious purposes. Anthropic confirmed that its Trust and Safety team actively worked to identify and shut down accounts associated with state-sponsored entities that violated its acceptable use policy.

How Threat Actors Leveraged Large Language Models

The state-sponsored groups did not use AI to create novel or more sophisticated cyberattacks. Instead, they used the models to improve the efficiency of existing, conventional hacking techniques. Among the groups Microsoft identified were Forest Blizzard (Russia), Emerald Sleet (North Korea), and Charcoal Typhoon (China).

Their activities included using the LLMs for:

Reconnaissance: Researching target individuals, organizations, and publicly reported vulnerabilities.

Scripting and Tool Development: Generating and refining code snippets for tasks like web requests, file manipulation, and automating processes.

Social Engineering: Drafting phishing emails and other communications to trick targets.

Translation: Translating technical papers, captured documents, and computer commands to overcome language barriers.

A Coordinated Industry Takedown

The identification of this malicious activity led to a swift and collaborative response. Anthropic, alongside Microsoft, OpenAI, and Google, took action to disable the accounts and assets associated with these threat actors. This multi-company effort highlights a proactive industry stance on monitoring and preventing the abuse of powerful AI systems.

The companies stated that while the use of AI by hackers is an emerging threat, their current applications have been limited to augmenting existing workflows rather than creating new categories of risk. The collaborative disruption aims to stay ahead of threat actors as they continue to explore the capabilities of AI in cyber operations.

All articles here are written with the help of AI on the basis of openly available information that cannot be independently verified. We strive to quote the relevant sources. The intent is only to summarise, in our own words, what has already been reported in public forums, with no intention to plagiarise or copy another person's work. The publisher has no intent to defame or cause offence to any person or organisation. The publisher assumes no responsibility for any damage or loss caused by decisions made on the basis of anything published on cyberconcise.com. You are advised to do your own checks before making any decision; the owners and publishers of cyberconcise.com cannot be held accountable for any resulting ramifications. If you have any objections or concerns, or wish to point out anything factually incorrect, please reach out using the form at https://concisecyber.com/about/
