Concise Cyber


Real-World AI Risk: How Samsung’s ChatGPT Use Leaked Corporate Secrets

In early 2023, technology giant Samsung discovered that its employees had inadvertently leaked sensitive internal data through the use of ChatGPT. The incidents occurred shortly after the company’s semiconductor division permitted engineers to use the AI tool to help fix issues with source code. Employees uploaded confidential information directly into the AI platform, exposing it outside the company’s secure environment.

The leaks were not isolated. Samsung identified at least three separate incidents within a period of roughly 20 days. The exposed data included proprietary information crucial to the company’s operations and competitive advantage.

Sensitive Data Uploaded to a Public AI

The leaked information varied in nature but was uniformly sensitive. One employee submitted confidential source code from a new program into ChatGPT to check for errors. Another shared code related to test sequences for identifying faults in semiconductor chips. In a third incident, an employee uploaded the entire recording of an internal company meeting and asked the tool to convert it into meeting notes.

These actions resulted in the transfer of valuable intellectual property to servers controlled by OpenAI, the developer of ChatGPT. At the time of the incidents, OpenAI’s policy indicated that data submitted by users could be utilized to train its language models, meaning Samsung’s proprietary data was absorbed by the platform.

Corporate Response and New Security Measures

Upon discovering the leaks, Samsung Electronics took immediate action to mitigate the risk. The company temporarily restricted the use of ChatGPT and limited the upload capacity for each session to 1024 bytes. An internal survey was conducted, revealing that over 60% of respondents acknowledged the security risks associated with such AI services.

Following the breach, Samsung began developing its own in-house AI service for employee use to prevent future leaks of sensitive company information. This move highlighted the tangible security risks companies face when employees use external generative AI tools with confidential corporate data.

All articles here are written with the help of AI on the basis of openly available information which cannot be independently verified. We do strive to quote the relevant sources. The intent is only to summarise, in our own words, what has already been reported in public forums, with no intention to plagiarise or copy another person’s work. The publisher has no intent to defame or cause offence to any person or organisation at any moment. The publisher assumes no responsibility for any damage or loss caused by making decisions on the basis of whatever is published on cyberconcise.com. You are advised to do your own checks and balances before making any decision; the owners and publishers at cyberconcise.com cannot be held accountable for the resulting ramifications. If you have any objections or concerns, or wish to point out anything factually incorrect, please reach out using the form on https://concisecyber.com/about/
