OpenAI has announced the launch of Aardvark, a new artificial intelligence agent designed to find and fix security flaws in code automatically. Described as an “agentic security researcher,” Aardvark is powered by the company’s GPT-5 large language model (LLM), which was introduced in August 2025. The autonomous agent is programmed to emulate a human expert capable of scanning, understanding, and patching code to help developers and security teams address vulnerabilities at scale.
According to OpenAI, the tool is currently available in a private beta. The company stated that Aardvark's purpose is to help development and security teams flag and fix security issues with greater efficiency.
How Aardvark Automates Code Security
Aardvark is designed to continuously analyze source code repositories and perform several key security functions. It first identifies vulnerabilities in the code, then assesses whether each flaw is actually exploitable and prioritizes it by severity. Once a vulnerability is understood, Aardvark proposes a targeted patch to resolve it. The entire process is intended to streamline the workflow for securing software.
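To make that workflow concrete, here is a minimal sketch of what such a scan, assess, prioritize, and patch loop could look like. OpenAI has not published Aardvark's internals or an API, so every name and heuristic here (Finding, scan_repository, assess, and so on) is purely illustrative:

```python
# Hypothetical sketch of the scan -> assess -> prioritize -> patch loop
# described above. All names and heuristics are illustrative; this is
# not Aardvark's actual implementation or API.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    description: str
    exploitable: bool = False
    severity: int = 0               # higher = more urgent
    proposed_patch: str | None = None

def scan_repository(repo_path: str) -> list[Finding]:
    # Placeholder: a real agent would read and reason over the code.
    return [
        Finding("auth/login.py", "SQL query built via string concatenation"),
        Finding("util/unzip.py", "archive extraction without path checks"),
    ]

def assess(finding: Finding) -> Finding:
    # Toy heuristic standing in for LLM-based exploitability analysis.
    finding.exploitable = "SQL" in finding.description
    finding.severity = 9 if finding.exploitable else 5
    return finding

def propose_patch(finding: Finding) -> Finding:
    finding.proposed_patch = f"# TODO: remediation for {finding.file}"
    return finding

def triage(repo_path: str) -> list[Finding]:
    findings = [assess(f) for f in scan_repository(repo_path)]
    findings.sort(key=lambda f: f.severity, reverse=True)  # prioritize
    return [propose_patch(f) for f in findings]

for f in triage("./my-repo"):
    print(f.severity, f.file, "-", f.description)
```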
Integration and Functionality
The agent operates by embedding itself directly into the software development pipeline, where it monitors commits and other changes to codebases in real time. Upon detecting a potential security issue, Aardvark uses LLM-based reasoning and tool use to understand how the flaw might be exploited, then formulates and proposes a specific fix for the problem.
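Real-time commit monitoring can be pictured as a polling loop over a repository. The sketch below is a simplified illustration under that assumption; the review_diff placeholder stands in for Aardvark's LLM-based analysis, whose actual interface and integration mechanism have not been made public:

```python
# Illustrative commit-watching loop; not Aardvark's actual integration.
# It polls a local git repository for new commits and hands each diff
# to a placeholder review step.
import subprocess
import time

def latest_commit(repo: str) -> str:
    return subprocess.check_output(
        ["git", "-C", repo, "rev-parse", "HEAD"], text=True
    ).strip()

def commit_diff(repo: str, old: str, new: str) -> str:
    return subprocess.check_output(
        ["git", "-C", repo, "diff", old, new], text=True
    )

def review_diff(diff: str) -> None:
    # Placeholder for LLM-based reasoning about exploitability and fixes.
    if "eval(" in diff:
        print("Potential code-injection risk found in new commit.")

def watch(repo: str, interval: int = 30) -> None:
    seen = latest_commit(repo)
    while True:
        time.sleep(interval)
        head = latest_commit(repo)
        if head != seen:
            review_diff(commit_diff(repo, seen, head))
            seen = head

# watch("./my-repo")  # would poll the repository every 30 seconds
```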