Samsung employees recently began using ChatGPT, an AI chatbot, to streamline processes and strengthen the company's chip business.
However, after three weeks, three leaks of confidential processor information were reported, raising concerns about data security and privacy breaches.
The leaks occurred when Samsung employees entered sensitive information, such as processor measurement data and source code, into ChatGPT. As a result, this information became part of the AI's training database, which is accessible not only to Samsung but to anyone who uses ChatGPT.
The first leak occurred when an employee in the Semiconductor and Device Solutions division entered the source code related to the processor metrics database into ChatGPT to find a quick fix to a problem. The second leak occurred when another employee entered code related to attribution and optimization, and the third leak occurred when an employee asked ChatGPT to create the minutes of a meeting.
Samsung took immediate action to prevent further leaks and instructed its employees to be careful with the data they share with ChatGPT. It also limited the size of each entry to a maximum of 1,024 bytes.
The company clarified that once information is fed to the AI chatbot, it is transmitted to external servers, from which it cannot be retrieved or removed.
This incident highlights the importance of data security and the need for companies to carefully consider the potential risks and benefits of introducing AI chatbots into their workplaces.
While AI chatbots can improve efficiency, they require appropriate measures and training to ensure the confidentiality and security of sensitive information.