Samsung recently allowed its employees to use ChatGPT, an artificial intelligence chatbot, in an effort to streamline processes and boost the company's chip business.
Within three weeks, however, three leaks of confidential processor-related information were reported, raising concerns about data security and privacy.

The leaks occurred when Samsung employees entered sensitive information, such as processor measurement data and source code, into ChatGPT. Because the chatbot retains user input, that information could be incorporated into the AI's training data, placing it beyond Samsung's control and potentially exposing it to other ChatGPT users.
The first leak occurred when an employee in the Semiconductor and Device Solutions division pasted source code from the processor measurement database into ChatGPT while looking for a quick fix to a problem. The second occurred when another employee entered code related to yield optimization, and the third when an employee asked ChatGPT to create the minutes of a meeting.
Samsung took immediate action to prevent further leaks, instructing employees to be careful about the data they share with ChatGPT and limiting the size of each entry to a maximum of 1,024 bytes.
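The reported 1,024-byte cap is, in effect, a client-side guard on prompt size. The sketch below is purely illustrative of how such a limit might be enforced before text leaves an employee's machine; the function name and behavior are hypothetical and do not represent Samsung's actual tooling.

```python
# A minimal, hypothetical sketch of a per-prompt byte limit,
# mirroring the 1,024-byte cap reported in the article.

MAX_PROMPT_BYTES = 1024  # reported per-entry cap


def check_prompt(prompt: str) -> str:
    """Return the prompt if it fits within the byte limit, otherwise raise."""
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise ValueError(
            f"Prompt is {size} bytes; limit is {MAX_PROMPT_BYTES} bytes. "
            "Trim the input or remove sensitive material before sending."
        )
    return prompt


if __name__ == "__main__":
    try:
        check_prompt("A" * 2000)  # simulated oversized paste of source code
    except ValueError as err:
        print(err)
```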
The company also clarified that once information is fed to the chatbot, it is transmitted to external servers, where it cannot be retrieved or deleted.
This incident highlights the importance of data security and the need for companies to carefully consider the potential risks and benefits of introducing AI chatbots into their workplaces.
While AI chatbots can improve efficiency, companies must pair them with appropriate safeguards and employee training to keep sensitive information confidential and secure.
