How to Secure Sensitive Data from ChatGPT, DALL-E and Other Generative AIs

As generative AI tools like ChatGPT and DALL-E become more popular, there is a growing concern about the security of sensitive data.

These AI tools use various forms of data, including prompts and uploaded images, to improve their models and services, which can include sensitive information.

In this article, I will discuss some key strategies that users can adopt to protect their data while using ChatGPT and other generative AI tools.

Let’s start with a real-world story.

Samsung’s semiconductor division has recently found itself in a difficult situation after unintentionally sharing sensitive data with OpenAI’s language model, ChatGPT.

This has caused the company to be vulnerable to potentially disastrous consequences in the highly competitive semiconductor industry.

The incident occurred when engineers used the ChatGPT service to help them fix problems with their source code. However, in the process, they unknowingly entered sensitive data, which is now stored on OpenAI’s servers.

Unfortunately, the data cannot be retrieved or deleted, which has left Samsung in a precarious position.

This scenario highlights the importance of being cautious when using generative AI tools in the workplace.

While these tools can improve workplace efficiency, it is crucial to ensure that sensitive data is kept secure and protected from potential breaches.

To prevent such incidents from occurring, it is advisable not to share sensitive information with ChatGPT or any other third-party tool that you do not have full control over. This includes:

  • Source code
  • Proprietary data
  • Internal meeting notes
  • Hardware-related information
  • Presentation notes
  • Emails

If your company requires the use of generative AI tools to improve efficiency, consider training and building your own chat tool. By doing so, you can control how your data is stored and accessed, which will help you safeguard sensitive information from potential breaches.

It is worth noting that OpenAI’s API carries less risk on this front: OpenAI states that it does not use data submitted by customers via the API to train its models. Instead, that data is used solely to generate responses to queries.
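To make this concrete, here is a minimal sketch of what an API call looks like, using only Python’s standard library. The endpoint and payload shape follow OpenAI’s public Chat Completions API; the model name and the API key are placeholders, and the request is built but deliberately not sent.

```python
import json
import urllib.request

# OpenAI's Chat Completions endpoint (per OpenAI's public API docs).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a Chat Completions request for a prompt."""
    payload = {
        "model": "gpt-3.5-turbo",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request("Explain quicksort briefly.", "sk-...")
# urllib.request.urlopen(req) would actually send it; omitted here.
```

The point is that prompts routed this way fall under the API data-usage policy rather than the consumer ChatGPT policy; the same caution about what you put in the prompt still applies.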

When using generative AI tools, it is also essential to ensure that all employees are aware of the risks associated with sharing sensitive information. Regular training and awareness programs can help educate employees on how to use such tools safely and responsibly.

How to Secure Sensitive Data from ChatGPT, DALL-E and Other Generative AIs?

As mentioned earlier, products like ChatGPT and DALL-E use various types of data, including prompts, responses, uploaded images, and generated images, as input training data to improve their models and services.

This data can include sensitive information, and as a result, users need to be cautious to protect their data.

To protect your sensitive data, there are a few things you can do:

  1. Leverage the OpenAI API: OpenAI states that it does not use data submitted by customers via the API to train its models, so prompts sent through the API will not be used to improve ChatGPT or DALL-E.
  2. Fill out the opt-out form: OpenAI provides a form that lets you opt out of having your data used to improve non-API services such as ChatGPT and DALL-E.
  3. Keep your prompts free of sensitive information: Avoid giving ChatGPT prompts that include personal details, financial information, login credentials, or confidential business information.
  4. Understand how your data is used before using ChatGPT: Read and understand the product’s terms and conditions, and be cautious about what information you share.
  5. Run regular training and awareness programs: If you are an organization, ongoing training helps employees use generative AI tools safely and responsibly.
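Step 3 can be partly automated. The sketch below scrubs a few common kinds of sensitive data from a prompt before it is sent anywhere; the regex patterns are illustrative examples I chose for this sketch, not an exhaustive safeguard, and real deployments would need patterns tuned to their own data.

```python
import re

# Example patterns for common sensitive items (illustrative, not exhaustive).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each pattern match with a [REDACTED-*] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact("Contact alice@example.com, key sk-abcdefgh12345678."))
```

A filter like this makes a useful safety net, but it complements rather than replaces the human judgment that steps 3–5 call for.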

By taking these steps, companies can prevent incidents like the one faced by Samsung’s semiconductor division and safeguard sensitive information from potential breaches.

Conclusion

In conclusion, ChatGPT and other LLMs are powerful tools that have revolutionized the generative AI landscape. However, it’s essential to be cautious while using these tools to protect your sensitive data.

By leveraging the OpenAI API, filling out the opt-out form, and keeping sensitive information out of your prompts, you can protect your data while using ChatGPT.
