Security Concerns Arise as Workers Share Sensitive Business Information with ChatGPT


In a recent report, data security service Cyberhaven said it detected and blocked attempts to input data into ChatGPT by 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential information, client data, source code, or regulated information.
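Screening tools of this kind generally inspect outbound text before it reaches an external AI service. As a rough illustration only (not Cyberhaven's actual implementation), a minimal pattern-based filter might look like the sketch below; the regexes and function names are hypothetical examples:

```python
import re

# Hypothetical patterns a data-loss-prevention (DLP) screen might flag
# before a prompt is submitted to an external AI service. Real products
# use far more sophisticated classification than these example regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                     # U.S. Social Security numbers
    "secret_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{20,}\b"),  # API-key-like tokens
    "confidential": re.compile(r"\b(?:confidential|proprietary|trade secret)\b", re.I),
}

def flagged_patterns(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Turn our confidential 2023 strategy document into a PowerPoint outline."
hits = flagged_patterns(prompt)
if hits:
    print(f"Blocked: prompt matched sensitive patterns {hits}")
else:
    print("Prompt passed screening")
```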

“Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models and that information could be retrieved at a later date if proper data security isn’t in place for the service,” said Robert Lemos at Dark Reading.

“In one case, an executive cut and pasted the firm’s 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient’s name and medical condition and asked ChatGPT to craft a letter to the patient’s insurance company. And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow.”

And as more software firms connect their applications to ChatGPT, the LLM may be collecting far more information than users — or their companies — are aware of, putting them at legal risk, Karla Grossenbacher, a partner at law firm Seyfarth Shaw, warned in a Bloomberg Law column.
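To see why such integrations widen the exposure, note that an application “connected to ChatGPT” is ultimately forwarding user input to OpenAI’s API. The sketch below is a generic illustration using OpenAI’s Python SDK (the v1 chat completions interface), not any particular vendor’s code; the function and prompts are hypothetical, but the data flow is the point: whatever the user types leaves the company network the moment the call is made.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarize_for_user(user_text: str) -> str:
    """A typical in-app 'AI assist' feature: the user's text is sent,
    verbatim, to a third-party service as part of the request body."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize the user's text."},
            # If user_text contains a strategy document or a patient
            # record, it is transmitted to the external provider here.
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content
```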

“Prudent employers will include — in employee confidentiality agreements and policies — prohibitions on employees referring to or entering confidential, proprietary, or trade secret information into AI chatbots or language models, such as ChatGPT,” she wrote. “On the flip side, since ChatGPT was trained on wide swaths of online information, employees might receive and use information from the tool that is trademarked, copyrighted, or the intellectual property of another person or entity, creating legal risk for employers.”

Some companies are taking action to protect themselves and their employees: JPMorgan has restricted workers’ use of ChatGPT, and Amazon, Microsoft, and Walmart have all warned employees to take care when using generative AI services.

To keep sensitive information secure as these technologies advance, organizations should review existing policies and procedures and equip employees with the knowledge to use AI tools safely. Security awareness training is a valuable starting point for making sure employees understand the risks of sharing information with AI software. With the right measures in place, organizations can make the most of new technologies while keeping their data safe and secure.

Information used in this article was provided by our partners at KnowBe4.

Want To Learn More?

Connect with one of our professionals today.