In early 2023, technology giant Samsung discovered that its employees had inadvertently leaked sensitive internal data through the use of ChatGPT. The incidents occurred shortly after the company’s semiconductor division permitted engineers to use the AI tool to help fix issues with source code. Employees uploaded confidential information directly into the AI platform, exposing it outside the company’s secure environment.
The leaks were not isolated. Samsung identified at least three separate incidents within a period of roughly 20 days. The exposed data included proprietary information crucial to the company's operations and competitive advantage.
Sensitive Data Uploaded to a Public AI
The leaked information varied in nature but was uniformly sensitive. One employee submitted confidential source code from a new program into ChatGPT to check for errors. Another shared code related to test sequences for identifying faults in semiconductor chips. In a third incident, an employee submitted the full recording of an internal company meeting and asked the tool to convert it into meeting notes.
These actions transferred valuable intellectual property to servers controlled by OpenAI, the developer of ChatGPT. At the time of the incidents, OpenAI's policy indicated that data submitted by users could be used to train its language models, meaning Samsung's proprietary data may have been absorbed into the platform.
Corporate Response and New Security Measures
Upon discovering the leaks, Samsung Electronics took immediate action to mitigate the risk. The company temporarily restricted the use of ChatGPT and limited the upload capacity for each session to 1024 bytes. An internal survey revealed that over 60% of respondents acknowledged the security risks associated with such AI services.
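To make the 1024-byte cap concrete, the sketch below shows how a corporate gateway might enforce such a per-prompt limit before forwarding requests to an external AI service. This is a hypothetical illustration, not Samsung's actual implementation; all function names and the proxy structure are assumptions.

```python
# Hypothetical sketch of a per-prompt byte cap like the one Samsung
# reportedly imposed (1024 bytes per upload). Illustrative only.

MAX_PROMPT_BYTES = 1024  # cap reported in coverage of the incident


def check_prompt_size(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the UTF-8 encoded prompt fits within the byte limit."""
    return len(prompt.encode("utf-8")) <= limit


def submit_prompt(prompt: str) -> str:
    """Gatekeeper a corporate proxy might run before calling an external AI API."""
    if not check_prompt_size(prompt):
        raise ValueError(
            f"Prompt exceeds {MAX_PROMPT_BYTES}-byte limit; "
            "refusing to forward to external AI service."
        )
    return "forwarded"  # placeholder for the real upstream call
```

Measuring encoded bytes rather than characters matters here: multi-byte characters (such as Korean text) consume the limit faster than their character count suggests.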
Following the breach, Samsung began developing its own in-house AI service for employee use to prevent future leaks of sensitive company information. This move highlighted the tangible security risks companies face when employees use external generative AI tools with confidential corporate data.
Source: https://www.helpnetsecurity.com/2025/11/21/ai-intellectual-property-risks-video/