By Karla Grossenbacher
Seyfarth’s Karla Grossenbacher assesses the legal risks to employers when employees use ChatGPT and other AI tools. She addresses confidentiality and privacy issues, bias and fairness, legal compliance, and liability in client-facing work.
Since ChatGPT became publicly available in November 2022, it has raised questions for employers about appropriate use cases and how best to address the tool in workplace policies while maintaining compliance.
ChatGPT is an artificial intelligence language platform trained to interact conversationally and perform tasks. To train an AI model such as ChatGPT, massive data sets are fed into a computer algorithm; the resulting model is then evaluated on large amounts of previously unseen data to determine how well it makes predictions.
Although ChatGPT can introduce efficiencies into workplace processes, it also presents legal risks for employers.
Given how AI is trained and learns, significant issues can arise for employers when employees use ChatGPT to perform their job duties. Accuracy and bias are concerns when employees obtain information from a source like ChatGPT in connection with their work.
ChatGPT’s ability to supply information as an AI language model is only as good as the information it learned in the training phase. Although ChatGPT is trained on vast swaths of online information, its knowledge base still has gaps.
ChatGPT’s current version was trained only on data sets available through 2021. In addition, the online data on which the tool was trained isn’t always accurate. If employees rely on ChatGPT for information in connection with work and don’t fact-check it, problems and risks can arise depending on how they use the information and where they send it.
Thus, employers should establish policies that put specific guardrails around how employees use information from ChatGPT in connection with their work.
There is also the question of inherent bias in AI. The Equal Employment Opportunity Commission is focused on this issue as it relates to the employment discrimination laws it enforces. In addition, state and local legislators are proposing, and in some cases have passed, laws restricting employer use of AI.
The information AI provides is necessarily shaped by the data on which it is trained and by the choices of those who decide what data the AI receives. This bias can manifest in the types of information ChatGPT offers in response to questions posed in “conversation” with it.
Also, if ChatGPT is consulted in employment decision-making, this could lead to claims of discrimination. It could also create compliance issues under state and local laws that require notice of AI use in certain employment decisions, or audits before AI is used in certain employment contexts.
Because of the risks of bias in AI, employers should include in their policies a general prohibition on the use of AI in connection with employment decisions absent approval from the legal department.
Confidentiality and data privacy are other concerns for employers when thinking about how employees might use ChatGPT in connection with work.
There is the possibility that employees will share proprietary, confidential, or trade secret information when having “conversations” with ChatGPT.
Although ChatGPT represents that it does not retain information provided in conversations, it does “learn” from every conversation. Moreover, users enter information into conversations with ChatGPT over the internet, and there is no guarantee those communications are secure.
Confidential employer information could thus be compromised if an employee reveals it to ChatGPT. Prudent employers will include, in employee confidentiality agreements and policies, prohibitions on employees referring to or entering confidential, proprietary, or trade secret information into AI chatbots or language models such as ChatGPT.
That said, a good argument could be made that information entered into an online chatbot would not necessarily constitute disclosure of a trade secret.
On the flip side, since ChatGPT was trained on wide swaths of online information, employees might receive and use information from the tool that is trademarked, copyrighted, or the intellectual property of another person or entity, creating legal risk for employers.
In addition to these legal concerns, employers should consider to what extent they want to allow employees to use ChatGPT in connection with their jobs. Employers are at an important crossroads in considering whether and how to embrace or restrict usage of ChatGPT in their workplaces.
Employers should weigh the efficiency and economy of having employees use ChatGPT to perform tasks such as writing routine letters and emails, generating simple reports, and creating presentations against the potential loss of developmental opportunities when employees no longer perform those tasks themselves.
ChatGPT is not going away, and a new and improved version is expected within the year. Employers will ultimately need to address its use in their workplaces, as each iteration will only grow more capable.
For all the risks ChatGPT can present, employers can also leverage its benefits. The discussion has just started. Employers will likely be learning and beta testing on this for a bit, as will ChatGPT.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Karla Grossenbacher is a partner in Seyfarth’s labor and employment practice and leads the firm’s workplace privacy team.