Cybersecurity researchers from Ben-Gurion University of the Negev (BGU) and Deutsche Telekom have disclosed a zero-click vulnerability dubbed Shadow Escape. The attack targeted leading AI assistants, including OpenAI’s ChatGPT, Google Gemini, and Microsoft Copilot, exploiting the temporary file-sharing mechanism these platforms use and putting trillions of user records at risk.
How the Shadow Escape Attack Functioned
The Shadow Escape attack leveraged a loophole in how AI assistants handle file uploads. When a user uploads a file, the platform creates a temporary “shadow” copy of the entire conversation for processing. Researchers, led by Dr. Mordechai Guri, found that a malicious file could trigger a path traversal flaw during the platform’s antivirus scan, allowing an attacker to bypass security controls such as sandboxing and gain unauthorized access to the temporary session files holding a user’s entire conversation history. In a proof of concept, the researchers showed that simply visiting a malicious website could trigger the attack and exfiltrate a user’s ChatGPT conversation data without any further interaction.
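The report does not publish the exploit itself, but the underlying class of bug, a path traversal that lets attacker-controlled file names escape a per-session directory, can be sketched in a few lines of Python. Everything here (the session root, the file names, the function names) is hypothetical illustration, not code from the actual proof of concept:

```python
import os

# Hypothetical sketch of a path traversal in a file-upload handler.
# A per-session "shadow" directory is assumed; names are illustrative.
SESSION_ROOT = "/tmp/sessions/abc123"

def naive_store(filename: str) -> str:
    # Vulnerable: joins the attacker-controlled name directly, so a name
    # containing "../" sequences resolves to a path outside SESSION_ROOT.
    return os.path.join(SESSION_ROOT, filename)

def safe_store(filename: str) -> str:
    # Mitigated: resolve the candidate path and confirm it still lies
    # inside the session root before using it.
    root = os.path.realpath(SESSION_ROOT)
    candidate = os.path.realpath(os.path.join(SESSION_ROOT, filename))
    if os.path.commonpath([candidate, root]) != root:
        raise ValueError("path traversal attempt blocked")
    return candidate

# A traversal name slips straight through the naive version...
print(naive_store("../../other_user/session.json"))

# ...but is rejected once the resolved path is checked against the root.
try:
    safe_store("../../other_user/session.json")
except ValueError as err:
    print(err)
```

The check in `safe_store` is the standard containment test: normalize both paths, then verify the session root is a prefix of the resolved candidate. A sandbox whose scanner skips this step would hand the attacker exactly the kind of cross-directory read the researchers describe.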
Vulnerability Impact and Vendor Response
A successful Shadow Escape attack could expose a vast amount of sensitive information: personally identifiable information (PII), private conversations, financial records, medical histories, and proprietary source code that users had shared with the AI assistants. Upon discovering the flaw, the research team followed a responsible disclosure process with OpenAI, Google, and Microsoft. All three companies acknowledged the findings and subsequently deployed mitigations to close the security hole.
Source: https://hackread.com/shadow-escape-0-click-attack-ai-assistants-risk/