Managing artificial intelligence in the workplace
The last several years have seen artificial intelligence (AI) become mainstream in the workplace. Today, HR professionals widely use AI tools for recruiting, onboarding, and administering leave and benefits. Managers use generative AI to assist with their administrative and supervisory responsibilities, such as writing performance reviews. Engineers use AI to write or check code. And business leaders use AI to model future sales trends and plan marketing campaigns. But the widespread use of AI in the workplace comes with legal risks.
Legal risks of using AI in the workplace
Three risks of using AI in the workplace came into clearer focus in 2023.
First, employees who use generative AI tools such as ChatGPT risk violating privacy laws or compromising the company’s competitive advantage by disclosing proprietary or otherwise protected information. For example, in April 2023, employees at Samsung reportedly uploaded sensitive internal source code to ChatGPT, potentially making that data available to competitors.
Data submitted to generative AI tools is difficult, if not impossible, to recover or remove, and once uploaded, it may be accessible to other users of the same tool. Confidential or proprietary data shared with a publicly available AI tool may also lose its protected status because the company failed to properly safeguard the information. Additionally, disclosing protected information, such as personally identifiable information (PII) or protected health information (PHI), could violate statutory privacy laws and trigger reporting and disclosure obligations.