ChatGPT and other large language models (LLMs) have taken the world by storm, and compliance and information security teams have been scrambling to keep pace. Using ChatGPT can affect compliance with privacy regulations such as the GDPR (General Data Protection Regulation) as well as information security in general. Recent misuse of these tools has already caused data leaks at multiple corporations; Samsung, for example, saw employees paste crucial trade secrets into OpenAI's ChatGPT [1].
ChatGPT's default settings carry the most risk, for two key reasons. First, OpenAI may store all content (prompts and responses) it receives to improve its models, which means data fed into ChatGPT can remain on OpenAI's servers indefinitely. That should be avoided when working with sensitive data such as company secrets or personal information. OpenAI's policy from March 2023 states that API users must opt in to have their data used to train or improve its models, while users of non-API services such as ChatGPT must opt out to avoid having their data used [2].
A second risk applies specifically to non-US companies, since all content is processed and stored in the US. This is especially problematic when processing personal data: the Court of Justice of the EU's 2020 Schrems II ruling found that US surveillance practices leave EU residents' data insufficiently protected, making such transfers of personal data to the US unlawful.
Some other potential compliance risks of using ChatGPT include:
The most common issue with ChatGPT and other LLM tools is their tendency to produce incorrect or inaccurate information. As OpenAI itself puts it: "ChatGPT will occasionally make up facts or 'hallucinate' outputs. If you find an answer is unrelated, please provide that feedback by using the 'Thumbs Down' button." That feedback feeds into "Reinforcement Learning from Human Feedback", or RLHF for short, the technique used to fine-tune the model, so it is equally important to give positive feedback when ChatGPT's output is correct. Note that the ChatGPT product and the API differ in how this feedback loop works in practice.
Legal and compliance leaders should issue guidance requiring employees to review any output generated by ChatGPT for accuracy, appropriateness, and actual usefulness before it is accepted.
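One lightweight way to operationalise that guidance is to put a human approval step between the model and any downstream use. The following is a minimal sketch, not an official workflow; `get_llm_response` is a hypothetical stand-in for whatever LLM client your organisation actually uses:

```python
def get_llm_response(prompt: str) -> str:
    # Hypothetical stand-in for your organisation's actual LLM client call.
    raise NotImplementedError

def reviewed_llm_response(prompt: str):
    """Return an LLM draft only after a human explicitly accepts it."""
    draft = get_llm_response(prompt)
    print("--- Draft output (verify accuracy before use) ---")
    print(draft)
    verdict = input("Accept this output? [y/N] ").strip().lower()
    return draft if verdict == "y" else None  # rejected drafts go nowhere
```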
ChatGPT is trained on a large amount of internet data that may include copyrighted material, so its outputs have the potential to violate copyright or IP protections. ChatGPT's privacy policy states that OpenAI may collect personal information (which might also include your intellectual property), use it to improve its services, and disclose it to affiliates and vendors without notifying you. Employees and organisations should therefore not rely solely on the intellectual property (IP) and copyright rules of OpenAI or similar providers, but follow the corporate IP policies that supersede them. For example, you should not share any confidential or sensitive information with ChatGPT or other LLMs (see the redaction sketch below).
This reflects ChatGPT's privacy policy as of 27 April 2023 [2].
Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output, and require users to scrutinize any generated output to ensure it does not infringe copyright or intellectual property rights.
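A practical control for the "don't paste sensitive data" advice above is a redaction pass that runs before any prompt leaves the organisation. This is a minimal sketch using simple regular expressions; a real deployment would use a proper DLP engine with patterns tuned to your own data:

```python
import re

# Illustrative patterns only; extend with your organisation's own
# identifiers (project code names, customer IDs, API key formats, ...).
PATTERNS = {
    "email":  re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "apikey": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt is sent out."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, key sk-abcdefghijklmnopqrstuv"))
```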
Bad actors may misuse ChatGPT to generate false information or dupe it into writing malicious code. AI (Artificial Intelligence) can also be used to generate phishing scams; ChatGPT itself can be hacked and its behaviour altered, and that is before counting zero-day exploits and data leaks. In March 2023, OpenAI temporarily shut down ChatGPT after receiving reports of a bug that allowed some users to see the titles of other users' chat histories. Incidents like this are likely to recur given how much ChatGPT has grown in popularity since launch.
Leaders need to equip their IT teams with tools that can distinguish ChatGPT-generated text from human-written text, geared especially toward screening incoming "cold" emails.
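No detector is fully reliable, but one common heuristic behind such tools is perplexity scoring: machine-generated text tends to be statistically "smoother" than human prose. The sketch below scores a message with GPT-2 via the Hugging Face transformers library; the 40.0 threshold is an arbitrary assumption and would need local tuning (and pairing with other signals) before any production use:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

email_body = "Dear customer, your account requires immediate verification..."
score = perplexity(email_body)
# Low perplexity is only a weak hint of machine generation; the cut-off
# below is an assumption, not an established standard.
if score < 40.0:
    print(f"Possible AI-generated text (perplexity {score:.1f}); flag for review.")
```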
Businesses that fail to disclose ChatGPT usage to consumers (e.g., in the form of a customer support chatbot) risk losing their customers' trust and being charged with unfair practices under various laws, such as the CCPA and GDPR. For instance, California's chatbot law mandates that, in certain consumer interactions, organizations disclose clearly and conspicuously that the consumer is communicating with a bot [6].
Legal and compliance leaders need to ensure their organization's ChatGPT use complies with all relevant regulations and laws, and that appropriate disclosures have been made to customers.
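The disclosure itself is easy to implement; the hard part is ensuring no chatbot entry point skips it. As a minimal illustration (with `answer_with_llm` as a hypothetical placeholder for a real LLM backend), a support bot can lead every session with a clear and conspicuous notice:

```python
def answer_with_llm(message: str) -> str:
    # Hypothetical helper; a real bot would call your LLM backend here.
    return "(model-generated reply would appear here)"

BOT_DISCLOSURE = (
    "You are chatting with an automated assistant, not a human agent. "
    "Type 'agent' at any time to reach a person."
)

def start_support_session() -> None:
    print(BOT_DISCLOSURE)  # shown before any model output
    while (msg := input("> ").strip()) != "quit":
        if msg.lower() == "agent":
            print("Transferring you to a human agent...")
            break
        print(answer_with_llm(msg))
```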
Using ChatGPT through the API works much like prompting the ChatGPT UI, except that the user calls the endpoint directly, whether with API tools such as curl, Postman, or SoapUI, or from a short script [3].
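As a minimal sketch (assuming an OPENAI_API_KEY environment variable, the third-party requests library, and the publicly documented chat completions endpoint), such a call might look like this:

```python
import os
import requests  # third-party HTTP library: pip install requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Summarise GDPR in one line."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Per OpenAI's March 2023 policy, data sent via the API is not used for training unless you opt in [2]. Some other ways you can address some of these risks include: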
A few areas that should also be addressed in security awareness training include:
Additional ChatGPT usage guidelines for employees should include:
By implementing these measures, organizations can mitigate the compliance risks associated with LLMs and generative AI and promote responsible, secure usage of the technology.
While AI models such as ChatGPT give some cause for concern, the technology is both scary and exciting. As ChatGPT itself says: "As an AI language model, ChatGPT is a tool that can be used for both positive and negative purposes. It is important to recognize that while it has the potential to revolutionize the way we interact with technology and each other, it also has limitations and ethical considerations. Whether we fear or embrace ChatGPT depends on how it is developed, deployed, and used".
On the other hand, LLMs open up a plethora of new opportunities, and are particularly powerful at evaluation tasks owing to the nature of how they are trained. Some of these include:
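To illustrate the kind of evaluation task these models handle well, here is a hedged sketch that asks the model to screen free text for personal data before it is stored, reusing the chat completions call from earlier. The prompt wording and the yes/no protocol are assumptions for illustration, not a vetted compliance check:

```python
import os
import requests

def contains_personal_data(text: str) -> bool:
    """Ask the model for a yes/no judgement; treat anything unclear as 'yes'."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": "Answer only YES or NO: does the user text contain personal data?"},
                {"role": "user", "content": text},
            ],
            "temperature": 0,
        },
        timeout=30,
    )
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["message"]["content"].strip().upper()
    return not answer.startswith("NO")  # fail closed on ambiguous answers
```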
Azure OpenAI Service is another example: it offers the same capabilities as OpenAI's ChatGPT and is not limited to just GPT-3.5.
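The call shape differs slightly on Azure: you hit your own resource's endpoint and a named deployment rather than a model name. A minimal sketch, assuming placeholder resource and deployment names and the 2023-05-15 API version:

```python
import os
import requests

# "my-resource" and "my-gpt35-deployment" are placeholders for your own
# Azure OpenAI resource and model deployment names.
url = (
    "https://my-resource.openai.azure.com/openai/deployments/"
    "my-gpt35-deployment/chat/completions?api-version=2023-05-15"
)
resp = requests.post(
    url,
    headers={"api-key": os.environ["AZURE_OPENAI_KEY"]},
    json={"messages": [{"role": "user", "content": "Hello from Azure."}]},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```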
Understanding and navigating the compliance risks of ChatGPT is crucial for organizations and individuals alike. Compliance is an ongoing process, and as this powerful language model continues to shape various aspects of our lives, it’s important to recognize the potential risks and take proactive measures to mitigate them.
Sources
[1] https://help.openai.com/en/articles/6783457-what-is-chatgpt
[2] https://openai.com/policies/privacy-policy
[3] https://platform.openai.com/docs/api-reference/introduction
[4] https://help.openai.com/en/articles/6378407-how-can-i-delete-my-account
[5] https://openai.com/policies/terms-of-use
[6] https://www.zscaler.com/blogs/product-insights/make-generative-ai-tools-chatgpt-safe-and-secure-zscaler
[7] https://www.gartner.com/en/newsroom/press-releases/2023-05-18-gartner-identifies-six-chatgpt-risks-legal-and-compliance-must-evaluate
[8] https://www.ml6.eu/blogpost/the-compliance-friendly-guide-to-using-chatgpt-and-other-gpt-models
[9] https://securityintelligence.com/posts/using-chatgpt-as-an-enabler-for-risk-and-compliance/
[10] https://www.themaryword.com/post/should-we-fear-or-embrace-chatgpt
[11] https://www.forbes.com/sites/forbestechcouncil/2023/05/15/the-strategic-opportunities-of-advanced-ai-a-focus-on-chatgpt/?sh=2bba8c893f46
[12] https://www.theverge.com/2018/6/27/17510908/apple-samsung-settle-patent-battle-over-copying-iphone
[13] https://medium.com/@jakairos/the-tipping-point-chatgpt-plugins-create-new-opportunities-and-crush-dreams-1027bc1016f3