Concise Cyber

AI in Cyber Attacks: Researchers Demonstrate LLMs Finding and Exploiting Software Vulnerabilities

The application of artificial intelligence to offensive cybersecurity has now been demonstrated through real-world research and the emergence of specialized tools. At the Black Hat USA 2023 cybersecurity conference, Hyrum Anderson, CTO at Robust Intelligence, presented findings on how large language models (LLMs) can be used to accelerate the discovery and exploitation of software vulnerabilities.

This research marks a shift from theoretical concern to practical demonstration of AI’s role in cybercrime. The capabilities shown automate tasks that previously required significant human expertise and time, dramatically compressing the timeline from vulnerability discovery to working exploit code.

LLMs Accelerate Vulnerability Discovery and Exploitation

In a demonstration conducted by Anderson and his team, a large language model was tasked with analyzing code from an open-source library. In under one minute, the LLM identified a known vulnerability, a ‘zip slip’ flaw, a path-traversal bug in which a maliciously named archive entry is written outside the intended extraction directory. The AI then wrote working exploit code for the bug it had found.
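
A ‘zip slip’ flaw is easy to illustrate concretely. The Python sketch below is not the code from Anderson’s demonstration; it is a minimal, generic example of the vulnerable pattern (trusting archive entry names during extraction) alongside a common mitigation, with function names and the example entry path chosen purely for illustration.

```python
import os
import zipfile

def unsafe_extract(archive_path, dest_dir):
    """Vulnerable pattern: entry names are trusted, so a crafted entry such as
    '../../home/user/.bashrc' is written outside dest_dir."""
    with zipfile.ZipFile(archive_path) as zf:
        for entry in zf.namelist():
            if entry.endswith("/"):                    # skip directory entries
                continue
            target = os.path.join(dest_dir, entry)     # no path validation
            os.makedirs(os.path.dirname(target) or dest_dir, exist_ok=True)
            with open(target, "wb") as out:
                out.write(zf.read(entry))

def safe_extract(archive_path, dest_dir):
    """Mitigation: resolve each target path and confirm it stays inside dest_dir."""
    root = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        for entry in zf.namelist():
            if entry.endswith("/"):
                continue
            target = os.path.realpath(os.path.join(root, entry))
            if not target.startswith(root + os.sep):
                raise ValueError(f"blocked zip slip attempt: {entry}")
            os.makedirs(os.path.dirname(target), exist_ok=True)
            with open(target, "wb") as out:
                out.write(zf.read(entry))
```

The fix amounts to resolving each destination path and refusing to write anything that lands outside the intended extraction directory.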

The demonstration showed how sharply AI can shorten the discovery-to-exploit window: the LLM completed both discovery and exploit creation in a fraction of the time typically needed by human security researchers, providing a concrete example of generative AI applied to an offensive security objective.

Malicious AI Tools and Adversarial Attacks

Beyond vulnerability research, AI is being used to enhance social engineering campaigns. Generative AI enables the creation of highly convincing and grammatically correct phishing emails that are contextually aware and personalized. Tools such as ‘WormGPT’ have been advertised on dark web forums specifically for these malicious purposes.

Another documented area is the use of AI to attack other AI systems, known as ‘model-on-model’ attacks. A core technique is the creation of ‘adversarial examples’: inputs deliberately crafted to deceive machine learning models. For instance, a specially designed sticker placed on an object can cause a computer vision system to misclassify it. Hyrum Anderson details this concept in his co-authored book, “Not with a Bug, But with a Sticker.”
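
To make the adversarial-example idea concrete, the sketch below applies the Fast Gradient Sign Method (FGSM), a standard technique for crafting such inputs, to a toy logistic-regression classifier. It is an illustrative sketch rather than anything from the book or Anderson’s research: the model, weights, epsilon value, and random input are all assumptions, and real attacks of this kind target deep vision models.

```python
import numpy as np

# Minimal FGSM sketch against a toy logistic-regression "classifier".
# The weights, input, label, and epsilon below are illustrative assumptions.
rng = np.random.default_rng(0)
w = rng.normal(size=784)                 # model weights (think 28x28 grayscale input)
b = 0.0                                  # model bias
x = rng.uniform(0.0, 1.0, size=784)      # benign input with pixel values in [0, 1]
y = 1.0                                  # true label of the benign input

def predict(sample):
    """Model's probability that the sample belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ sample + b)))

# Gradient of the cross-entropy loss with respect to the *input* (not the
# weights): for logistic regression this is (p - y) * w.
grad_x = (predict(x) - y) * w

# Fast Gradient Sign Method: move each input feature a small step (epsilon) in
# the direction that increases the loss, then clip back to the valid range.
epsilon = 0.1
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

print("score on clean input:      ", predict(x))
print("score on adversarial input:", predict(x_adv))
```

The same principle, perturbing the input in the direction that increases the model’s loss, underlies both digital adversarial examples and physical ones such as the printed sticker.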