Concise Cyber

Subscribe below for free to get these delivered straight to your inbox

Advertisements
ChatGPT Atlas Browser Vulnerability Allows Persistent Hidden Commands
Advertisements

New Exploit Discovered in ChatGPT Atlas Browser

Cybersecurity researchers have identified a new vulnerability within OpenAI’s ChatGPT Atlas web browser. This security flaw allows malicious actors to inject harmful instructions into the AI assistant’s memory, creating a pathway to run arbitrary code. The discovery was detailed in a report from LayerX Security, highlighting a significant risk for users of the platform.

Or Eshed, Co-Founder and CEO of LayerX Security, stated that the exploit enables attackers to achieve several malicious outcomes. “This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware,” Eshed said in the report shared with The Hacker News.

CSRF Flaw and Persistent Memory Corruption

The attack’s foundation is a cross-site request forgery (CSRF) flaw. This type of vulnerability can be exploited to inject malicious instructions directly into the persistent memory of ChatGPT. The corrupted memory is a key feature of the attack, as it remains compromised across different devices and user sessions. This persistence was made possible by targeting the “Memory” feature, which OpenAI first introduced in February 2024.

When a logged-in user with corrupted memory attempts to use ChatGPT for normal tasks, an attacker is permitted to conduct a variety of actions. These actions include seizing control of the user’s account, their browser, or even other connected systems. The exploit effectively turns the AI assistant into a tool for the attacker by hiding persistent commands within its memory.

Source: https://thehackernews.com/2025/10/new-chatgpt-atlas-browser-exploit-lets.html