Discussions around artificial intelligence tools have monopolized our interest in recent months. Because of their ability to increase productivity and save time, many workers have already incorporated these tools into their daily work. However, your employees should know how to use them without compromising the security of your company's data.
AI tools can help us develop ideas, summarize or rephrase pieces of text, create the basis for a business strategy, or even find a bug in code. But whenever we use artificial intelligence, we must remember that the data we enter into these tools ceases to belong to us once we hit the send button.
One of the main concerns when using large language models (LLMs) such as ChatGPT is that we share sensitive data with large multinational companies. These models are trained on vast amounts of online text, allowing them to effectively interpret and respond to user queries. However, every time we interact with a chatbot and ask for information or help, we may inadvertently share data about ourselves or our company.
When we write a prompt for a chatbot, the data we enter leaves our control. This does not mean that chatbots will immediately use this information as a basis for responses to other users. But the LLM provider or its partners may have access to these queries and could incorporate them into future versions of the technology.
OpenAI, the organization behind ChatGPT, has introduced a chat history opt-out option, which prevents user data from being used to train and improve its AI models. In this way, users gain more control over their data. If employees at your company would like to use tools like ChatGPT, turning off chat history should be their first step.
But even with chat history disabled, all prompt data is still stored on the chatbot's servers. Storing all prompts on external servers creates a potential threat of unauthorized access by hackers. In addition, technical errors can occasionally allow unauthorized persons to access data belonging to other chatbot users.
So how can you ensure that your company's employees use platforms like ChatGPT safely? Experts from the global cybersecurity company ESET warn about some mistakes that employees often make and advise how to avoid them.
Use of customer data
The first common mistake employees make when using LLMs is unknowingly sharing sensitive information about their company's customers. Imagine, for example, doctors submitting their patients' names and medical records and asking the LLM tool to write letters to the patients' insurance companies. Or companies uploading customer data from their CRM systems and prompting the tool to write targeted newsletters.
Train employees to anonymize their queries before entering them into chatbots. To protect customer privacy, encourage them to carefully review each query and remove sensitive information such as names, addresses, or account numbers. Best practice is to avoid entering personal information in the first place and rely on general questions or queries.
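As a rough sketch of what such anonymization could look like in practice, the snippet below replaces a few common identifier patterns with neutral placeholders before a query is sent. The patterns and the account-number format are illustrative assumptions, not an ESET tool or a complete PII detector:

```python
import re

# Illustrative patterns only; real PII detection would need a far
# more thorough approach (e.g. a dedicated redaction library).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d[\d\s-]{7,}\d\b"),
    "[ACCOUNT]": re.compile(r"\bACC-\d{6,}\b"),  # assumed account-number format
}

def anonymize(query: str) -> str:
    """Replace matches of each pattern with a neutral placeholder."""
    for placeholder, pattern in PATTERNS.items():
        query = pattern.sub(placeholder, query)
    return query

print(anonymize(
    "Write to jane.doe@example.com about account ACC-123456, tel. 210 1234567."
))
```

A step like this can run as a pre-submission filter, but it should complement, not replace, a human review of the query.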
Importing confidential documents into chatbots
Chatbots can be valuable tools for quickly summarizing large amounts of data and creating plans, presentations, or reports. Nevertheless, uploading documents to tools like ChatGPT may mean that company or customer data stored in them is exposed. While it may be tempting to copy documents and have the tool generate summaries or suggestions for presentation slides, it is not a foolproof way to keep data safe.
This applies to important documents, such as growth strategies, but also to less important documents – such as notes from a meeting – that can lead employees to reveal their company's valuable expertise.
To mitigate this risk, establish strict policies for handling sensitive documents and restrict access to such files with a need-to-know information access policy. Employees must review documents before asking the chatbot for a summary or assistance, ensuring that sensitive information such as names, contact details, sales figures, or cash flow data is deleted or appropriately anonymized.
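That review step could be supported by a lightweight pre-upload scan that flags likely sensitive content for the employee to redact first. The sketch below uses a few assumed patterns for illustration; a real policy check would cover far more cases:

```python
import re

# Illustrative patterns a pre-upload review might flag before a
# document is pasted into a chatbot; labels and patterns are assumptions.
SENSITIVE_PATTERNS = {
    "email address": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
    "IBAN-like number": r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b",
    "currency amount": r"[€$£]\s?\d[\d,.]*",
}

def review_findings(document: str) -> list[str]:
    """Return a list of findings so an employee can redact them first."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        for match in re.finditer(pattern, document):
            findings.append(f"{label}: {match.group()}")
    return findings

doc = "Q3 cash flow was €1,200,000; contact cfo@example.com for details."
for finding in review_findings(doc):
    print("REVIEW:", finding)
```

Flagging rather than silently deleting keeps the employee in the loop, which matches the advice above that people, not tools, must review documents before upload.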
Disclosing company data in prompts
Imagine you are trying to improve some of your company's practices and workflows. You ask ChatGPT to help you with time management or task structuring, and you enter valuable expertise and other data into the prompt to help the chatbot develop a solution. Just like entering sensitive documents or customer data into chatbots, including sensitive company data in a prompt is a common but potentially harmful practice that may lead to unauthorized access or leakage of confidential information.
To avoid this, prompt anonymization should be standard practice. This means that names, addresses, financial details, or other personal data should never be entered into chatbot prompts. If you want to make it easier for employees to safely use tools like ChatGPT, create standardized prompts as templates that any employee can safely use when needed, such as "Imagine you are [position] at [company]. Create a better weekly workflow for [position] that focuses primarily on [work]".
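Such a template can be as simple as a string with named placeholders that employees fill with generic role descriptions instead of real names. The field names and example values below are illustrative:

```python
# A standardized, anonymized prompt template with named placeholders.
# Field names are illustrative; fill them with generic descriptions,
# never real personal or confidential data.
WORKFLOW_TEMPLATE = (
    "Imagine you are {position} at {company}. "
    "Create a better weekly workflow for {position} "
    "that focuses primarily on {work}."
)

def build_prompt(position: str, company: str, work: str) -> str:
    """Fill the template with generic role descriptions, not real names."""
    return WORKFLOW_TEMPLATE.format(position=position, company=company, work=work)

print(build_prompt("a project manager", "a mid-sized software company", "sprint planning"))
```

Distributing vetted templates like this gives employees a safe default and reduces the temptation to paste real company details into a prompt.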
AI tools are not just the future of our work; they are already the present. As progress in artificial intelligence, and machine learning in particular, advances daily, companies must inevitably follow these trends and adapt to them. Whether you are a data security specialist or a general IT manager, make sure your colleagues know how to use these technologies without risking a data leak.