The rapid integration of AI chatbots into public and corporate workflows has been followed by significant, documented data privacy incidents. These events have shifted the discussion of AI risk from theoretical concern to documented fact, involving both personal user data and sensitive corporate information.
Corporate Data Leaks Through Employee Use
Multiple organizations have experienced data exposure through employee use of these platforms. In one widely reported case, employees at Samsung Electronics input confidential company data, including proprietary source code and internal meeting notes, into ChatGPT. This led Samsung to restrict the use of generative AI tools on its internal network and company-owned devices to prevent future leaks of sensitive information.
ChatGPT Bug Exposes User Histories and Payment Information
In March 2023, OpenAI took ChatGPT offline to address a significant security vulnerability. The company later confirmed that a bug in an open-source library (the redis-py Redis client) allowed some users to see the titles of other active users’ conversation histories. OpenAI’s investigation found that the same bug had also inadvertently exposed payment-related information belonging to a number of subscribers.
The exposed payment data included first and last names, email addresses, payment addresses, the last four digits of credit card numbers, and credit card expiration dates. OpenAI stated that the exposure affected about 1.2% of ChatGPT Plus subscribers who were active during a specific nine-hour window, and that it notified those affected.
Regulatory Action and Data Protection Concerns
The privacy issues prompted direct regulatory intervention. In late March 2023, Italy’s data protection authority, the Garante, issued a temporary ban on ChatGPT’s operation within the country. The regulator cited specific concerns, including the absence of a legal basis for the mass collection and storage of personal data used to train the platform’s algorithms, and it referenced the March data breach that had exposed user conversations and payment details. The service was restored in Italy in April 2023 after OpenAI implemented changes to address the regulator’s concerns, including adding an age verification tool and providing more transparent information about its data processing.