OpenAI, the organization behind ChatGPT, has confirmed a data breach that exposed information belonging to some of its users. The incident was not a direct attack on OpenAI’s systems; it stemmed from a security compromise at a third-party analytics vendor.
The company disclosed that an unauthorized actor accessed and stole a report containing specific user data. The breach began when an employee at the analytics partner fell victim to a sophisticated phishing attack that compromised their account credentials.
Details of the Exposed Data
According to OpenAI’s disclosure, the stolen report contained the first and last names and email addresses of some users. In addition, the report included the last four digits of the credit card numbers for a subset of the affected users. The company stated that the breach impacted fewer than 100 users in total.
Incident Response and Third-Party Compromise
Upon discovering the breach, OpenAI notified the users whose information appeared in the compromised report. The incident underscores the security risks that come with third-party vendors: the analytics partner was authorized to create reports and visualizations from OpenAI’s data, and the phishing attack on its employee gave the unauthorized actor the access needed to exfiltrate the sensitive user data report.