Microsoft has announced the discovery of a new side-channel attack, codenamed “Whisper Leak,” that can expose the topics of conversations with AI language models, even when the traffic is encrypted. This vulnerability poses a significant privacy risk to both individual users and enterprises that rely on streaming-mode AI services.
The attack allows a passive adversary who can monitor network traffic to infer sensitive information. According to Microsoft, the method does not break the encryption itself; instead, it analyzes patterns in the data flow to determine the subject matter of a user’s prompts.
How Whisper Leak Works
The technique hinges on observing the encrypted data packets exchanged between a user and a remote language model. Microsoft researchers Jonathan Bar Or and Geoff McDonald explained that an attacker in a position to see this traffic—such as a nation-state actor at an internet service provider, an intruder on a local network, or even someone on the same Wi-Fi router—could mount the attack. By analyzing the size and timing of encrypted packets in a streaming conversation, the attacker can infer whether the user’s prompt relates to a specific, predetermined topic.
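The core idea can be illustrated with a toy traffic classifier. The sketch below is purely illustrative: the packet-size traces and topic names are invented, and it uses a simple nearest-centroid rule over summary features, whereas the actual research trained machine-learning models on full sequences of packet sizes and inter-arrival times.

```python
# Illustrative sketch only: guessing a conversation topic from encrypted
# packet sizes. All traces and topic labels here are synthetic; the real
# attack uses trained ML models on size *and* timing sequences.
from statistics import mean

def features(trace):
    """Reduce a packet-size sequence to simple summary features."""
    return (len(trace), mean(trace), sum(trace))

def centroid(traces):
    """Average the feature vectors of several traces for one topic."""
    feats = [features(t) for t in traces]
    return tuple(mean(col) for col in zip(*feats))

def classify(trace, centroids):
    """Assign the topic whose centroid is nearest in feature space."""
    f = features(trace)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return min(centroids, key=lambda topic: dist(centroids[topic]))

# Synthetic training data: packet sizes (bytes) observed while a model
# streams responses about two hypothetical topics.
training = {
    "topic_a": [[120, 130, 125, 140], [118, 132, 128]],
    "topic_b": [[300, 310, 290, 305, 295], [310, 298, 302]],
}
centroids = {t: centroid(traces) for t, traces in training.items()}

print(classify([119, 131, 127, 138], centroids))  # prints "topic_a"
```

The point of the sketch is that nothing here decrypts anything: the classifier sees only ciphertext lengths, yet a trace that statistically resembles one topic's training traces is assigned to that topic.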
Implications for Data Privacy
While the full content of the conversation remains protected, the leakage of conversational topics is a serious privacy breach. This could allow an attacker to determine if an individual or an organization is discussing sensitive subjects like financial plans, proprietary research, or personal health issues. The research highlights a new frontier in cybersecurity, where the metadata and patterns of encrypted communications can be as revealing as the content itself, requiring new defensive strategies for securing AI interactions.
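One commonly discussed defensive strategy against size-based side channels is length padding: quantizing every streamed chunk to a fixed bucket size before encryption, so that ciphertext lengths no longer track the lengths of the underlying tokens. The sketch below is an assumption-laden illustration of that general idea, not any vendor's actual mitigation; the bucket size and function name are invented.

```python
# Illustrative mitigation sketch: pad each streamed chunk up to the next
# multiple of a fixed bucket size so that observed packet lengths reveal
# less about token lengths. BUCKET is an arbitrary illustrative value.
BUCKET = 64  # bytes

def pad_chunk(data: bytes, bucket: int = BUCKET) -> bytes:
    """Pad with NUL bytes to the next bucket boundary. This hides length
    only; a real protocol would use an unambiguous padding scheme so the
    receiver can strip the padding reliably."""
    pad_len = (-len(data)) % bucket
    return data + b"\x00" * pad_len
```

With such padding, a 5-byte chunk and a 60-byte chunk both leave the sender as 64 bytes, collapsing the size signal the attack depends on (at the cost of extra bandwidth); timing jitter would still need a separate countermeasure.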
Source: https://thehackernews.com/2025/11/microsoft-uncovers-whisper-leak-attack.html