Using Generative AI in Business? Make Sure to Keep Your Secrets

Businesses are finding generative AI programs like ChatGPT useful in functions ranging from financial services to human resources. Although the technology is still in its early stages and far from entirely reliable, it is evolving quickly, and its tools and practices will continue to develop. The Cisco 2024 Data Privacy Benchmark Study found that 79% of businesses say they are deriving measurable value from generative AI, for everything from creating documents to writing code.

But this use of generative AI has led to a number of cautions, most commonly and loudly about the accuracy of the information that apps like ChatGPT generate, including their tendency to "hallucinate" assertions when they don't actually have answers.

Another set of cautionary tales surrounds what businesses input into ChatGPT. It may be tempting to ask generative AI to help solve challenges your business faces, but even if you remember to double-check the answers, you need to be careful about what you "tell" the AI. That's because generative AI systems can use input data to create outputs for other users whose questions relate to that previously input data. If you put your financial statements, human resources records or other sensitive information into ChatGPT as part of a query, that information can effectively become part of the public domain.

As such, the information is likely no longer confidential and may lose legal protections that courts would otherwise recognize, even when non-disclosure agreements or restrictive covenants are in effect, because it has been voluntarily made publicly available, whether or not the user who input it realized that would be the result. Some users are clearly unaware of what happens to this information, which can lead to identity theft or to corporate data falling into competitors' hands.

Indeed, Cisco’s 2023 Consumer Privacy Survey showed that 39% of respondents have entered work-related information into generative AI apps, more than 25% have entered personal data like account numbers, and only 50% overall have made a point to avoid putting personal or confidential information into these apps.

While caselaw around whether and when information input into generative AI loses its legal confidential status will no doubt continue to develop, for now the best course is for employers to adopt policies governing the use of ChatGPT and its ilk. Banning these tools entirely is certainly one option, but given that they are likely to become increasingly useful, the better approach may be to promulgate clear, sophisticated guidelines for their use.

These guardrails could specify what can and cannot be entered, and require appropriate skepticism about the results AI programs generate, which can contain not only inaccuracies but also bias and data that is at the very least misleading. Employee training modules, technology that restricts use, and regular monitoring of the content employees upload are all potential ways to embed these guidelines, which will need to be updated as generative AI develops.
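
On the "technology that restricts use" point, one simple approach is a pre-submission screen that flags obviously sensitive content before a prompt ever leaves the company's systems. Below is a minimal Python sketch under stated assumptions: the patterns and the screen_prompt helper are illustrative placeholders, not a vetted data-loss-prevention product, and any real deployment would need rules tuned to the organization's own data.

    import re

    # Illustrative patterns only (assumptions, not a complete rule set);
    # a real deployment would rely on a vetted data-loss-prevention tool.
    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "confidential_marker": re.compile(
            r"(?i)\b(confidential|internal only|trade secret)\b"
        ),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive patterns detected in the prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    if __name__ == "__main__":
        prompt = "Summarize our CONFIDENTIAL Q3 results; card 4111 1111 1111 1111."
        findings = screen_prompt(prompt)
        if findings:
            print("Blocked before submission: " + ", ".join(findings))
        else:
            print("Prompt passed screening.")

A screen like this is deliberately coarse: it catches the obvious cases, while training and monitoring handle the rest.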

Part of this training could include the admonishment generated by ChatGPT itself, when asked about the confidentiality of user input data: “OpenAI may collect and use user data for research and privacy purposes, as described in its privacy policy. To ensure the confidentiality of your data, it is important to follow best practices, such as not sharing sensitive personal information or confidential data when using AI models like me.”

So be warned: what you submit to the AI can become part of the large language model that the AI is based upon, and as such any confidentiality protection will be lost. Business owners should adopt internal policies admonishing their employees against including confidential information in the questions they pose.
