Concise Cyber


Malware Authors Incorporate LLMs to Evade Signature-Based Detection

Security researchers have identified a real-world instance of malware authors using Large Language Models (LLMs) to create more evasive threats. Analysts at the security firm Deep Instinct discovered a new version of the Gootloader malware that leverages an LLM to dynamically alter its code and bypass traditional security defenses.

Gootloader is a malware downloader known for its use of search engine optimization (SEO) poisoning to infect victims. The operators compromise legitimate websites, often related to business contracts or agreements, to rank high in search engine results. When a user clicks a malicious link from these results, they are prompted to download a ZIP file containing a malicious JavaScript (.js) file.

Gootloader’s New Evasion Tactic

In the new variant, the malicious JavaScript file spans more than 100 lines, most of which form a large block of comments. Deep Instinct’s research team determined that these comments, which read as nonsensical but grammatically correct English sentences, were generated by an LLM. The actual obfuscated malicious code is hidden within this large comment block.
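As a rough illustration of why burying code in comment filler frustrates a casual reading of the file, the sketch below strips JavaScript-style comments to expose the one working line. The sample here is an invented, benign stand-in, not an actual Gootloader script, and the comment stripper is deliberately naive:

```python
import re

# Hypothetical stand-in (NOT a real Gootloader sample): a script whose bulk is
# contract-style filler comments, with the working code buried among them.
sample_js = """\
// The quarterly agreement shall be reviewed by both parties in good faith.
// Market conditions may influence the renewal terms of the contract.
var p = "hxxp://example.invalid/payload";
// All amendments must be submitted in writing prior to the effective date.
// The undersigned acknowledge receipt of the attached schedule of fees.
"""

def strip_js_comments(src: str) -> str:
    """Remove /* */ block comments and // line comments.

    Naive demo: does not handle comment markers inside string literals.
    """
    no_block = re.sub(r"/\*.*?\*/", "", src, flags=re.DOTALL)
    no_line = re.sub(r"//[^\n]*", "", no_block)
    # Drop the now-empty lines so only real code remains.
    return "\n".join(line for line in no_line.splitlines() if line.strip())

print(strip_js_comments(sample_js))
```

Running this leaves only the single `var p = ...` line, which is the kind of residue an analyst looks for once the comment padding is discarded.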

Polymorphism Through AI-Generated Code

The key to the evasion technique is that the LLM-generated text block differs in every malware sample analyzed. By using an LLM to produce unique comments for each infection, the authors ensure that the file’s hash changes with every copy. This polymorphism defeats static, signature-based detection, which relies on matching known, unchanging file signatures, so conventional scanners cannot easily flag the malware. Deep Instinct’s researchers identified the threat using a deep-learning-based prevention platform that does not rely on signatures.
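The hashing effect behind this polymorphism is easy to demonstrate. The sketch below uses invented, benign stand-in scripts: two files whose executable code is byte-identical still produce entirely different SHA-256 hashes when only the comment padding differs, so a scanner that matches file hashes sees two unrelated files:

```python
import hashlib

# Hypothetical stand-ins (not real samples): two "infections" sharing the
# exact same executable line, padded with different LLM-style comments.
core = 'var u = "hxxp://example.invalid/dl";\n'
sample_a = "// The parties agree to negotiate renewal terms in good faith.\n" + core
sample_b = "// Delivery schedules are subject to change upon written notice.\n" + core

hash_a = hashlib.sha256(sample_a.encode()).hexdigest()
hash_b = hashlib.sha256(sample_b.encode()).hexdigest()

# Identical payload, different file signatures.
print(hash_a)
print(hash_b)
print(hash_a != hash_b)  # the hashes never match
```

This is why the technique targets static signatures specifically: detection approaches that inspect behavior or structure, rather than file hashes, are unaffected by the comment churn.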

Source: https://www.darkreading.com/threat-intelligence/malware-authors-incorporate-llms-evade-detection