Concise Cyber

AI in Cyber Attacks: Researchers Demonstrate LLMs Finding and Exploiting Software Vulnerabilities

Real-world research and the emergence of specialized tools have demonstrated how artificial intelligence can be applied to offensive cybersecurity. At the Black Hat USA 2023 cybersecurity conference, Hyrum Anderson, CTO at Robust Intelligence, presented findings on how large language models (LLMs) can accelerate the process of discovering and exploiting software vulnerabilities.

This research highlights a shift from theoretical concerns to practical demonstrations of AI’s role in cybercrime. The capabilities shown focus on automating tasks that previously required significant human expertise and time, fundamentally altering the timeline from vulnerability discovery to the creation of functional exploit code.

LLMs Accelerate Vulnerability Discovery and Exploitation

In a demonstration conducted by Anderson and his team, a large language model was tasked with analyzing a piece of code from an open-source library. The LLM successfully identified a known vulnerability, specifically a ‘zip slip’ flaw, in under one minute. Following the discovery, the AI proceeded to write the corresponding exploit code for the identified bug.
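The 'zip slip' class of flaw is well documented: an archive entry carries a relative path such as `../../evil.txt`, and code that blindly joins entry names onto the extraction directory will write files outside it. As an illustrative sketch (not the code analyzed in the demonstration), a Python extractor that rejects such entries might look like this:

```python
import os
import zipfile

def safe_extract(zip_path: str, dest_dir: str) -> None:
    """Extract a zip archive, rejecting 'zip slip' entries whose
    paths would escape the destination directory."""
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        for entry in zf.infolist():
            # Resolve the entry's final path and verify it stays inside dest_dir
            target = os.path.realpath(os.path.join(dest_dir, entry.filename))
            if os.path.commonpath([dest_dir, target]) != dest_dir:
                raise ValueError(f"Blocked zip slip entry: {entry.filename}")
        zf.extractall(dest_dir)
```

The key step is resolving each joined path with `realpath` before comparison, so `../` sequences cannot slip past a naive string prefix check.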

The demonstration showed how AI can dramatically shorten the discovery-to-exploit window: the LLM completed both discovery and exploit creation far faster than human security researchers typically accomplish the same tasks. It served as a concrete example of generative AI applied to an offensive security objective.

Malicious AI Tools and Adversarial Attacks

Beyond vulnerability research, AI is being used to enhance social engineering campaigns. Generative AI enables the creation of highly convincing and grammatically correct phishing emails that are contextually aware and personalized. Tools such as ‘WormGPT’ have been advertised on dark web forums specifically for these malicious purposes.

Another documented area is the use of AI to attack other AI systems, known as ‘model-on-model’ attacks. This involves creating ‘adversarial examples’, which are inputs designed to deceive machine learning models. For instance, a specifically designed sticker, when placed on an object, can cause a computer vision system to misclassify it. This concept was detailed by Hyrum Anderson in his co-authored book, “Not with a Bug, But with a Sticker.”
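The mechanism behind such adversarial examples can be sketched numerically. The classic fast-gradient-sign idea perturbs an input in the direction that most changes the model's score; on a toy linear classifier (a stand-in for the vision systems described, with all numbers hypothetical), a small bounded perturbation is enough to flip the decision:

```python
import numpy as np

# Toy linear classifier: predicts class 1 if w.x + b > 0.
# Weights are illustrative, not from any real model.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x: np.ndarray) -> int:
    return int(w @ x + b > 0)

def fgsm_perturb(x: np.ndarray, epsilon: float) -> np.ndarray:
    """Fast-gradient-sign perturbation. For a linear model the gradient
    of the score w.r.t. the input is simply w, so stepping epsilon
    against the sign of w pushes the score toward the other class."""
    grad = w  # d(score)/dx for a linear model
    direction = -1 if predict(x) == 1 else 1
    return x + direction * epsilon * np.sign(grad)

x = np.array([0.5, 0.1, 0.2])        # originally classified as class 1
x_adv = fgsm_perturb(x, epsilon=0.3)  # each feature moved by at most 0.3
```

A physical sticker works on the same principle: it realizes a carefully optimized perturbation in the scene itself rather than in the pixel array.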

