Introduction by Kate Wegrzyn with articles by Shabbi Khan and ChatGPT
My mom says you should read science fiction if you want to know what the future will look like. In her 50+ years as an avid reader of the genre, she has seen things once considered preposterous materialize into reality. Her assessment of ChatGPT and other generative AI is that it is coming for us all.
I’m more of a historical fiction person myself, so my take on generative AI is a bit more measured – it is an impressive tool that will become an integral part of our everyday existence, much the same as the internet did in the 1990s and early 2000s.
About a month ago, my colleague Shabbi Khan and I were discussing how the use of this tool in the workplace will continue to expand. While in agreement that we should write a blog post on the legal issues surrounding the use of generative AI (like ChatGPT) at work, we lamented that it would take quite a bit of time to accomplish. This is a challenge for anyone, but particularly for lawyers, whose product is an hour broken into six-minute increments.
I mused that we should just ask ChatGPT to write the blog post for us. We had a little laugh. From there, this ‘man vs. machine’ experiment was born. I generously offered to take the task of prompting ChatGPT to write the blog post while Shabbi was tasked with drafting it the “normal” way.
Author’s note: ChatGPT is a mouthful. It must have been named by a computer programmer and not a marketer. From here on, I am going to call ChatGPT “Cathy” after “Chatty Cathy”, the 1960s pull-string doll that likewise was a technological marvel for its time.
The results of this experiment were unsurprising:
- Efficiency: +1 Point for the Machine. My time commitment to this experiment, a mere 11 minutes (or .2 non-billable hours, if you will), paled in comparison to that of Shabbi, who spent 10 non-billable hours over 8 days researching, pondering, and then finally drafting and editing the article. To no one’s surprise, Cathy wins this point.
- Bias: +1 Point for the Human. I award this point to Shabbi because Cathy did not mention that she hallucinates (that is, she sometimes makes up responses when she doesn’t know the answer). It’s funny – one would think that this lack of self-awareness would have been the human trait, but not so in this case. Perhaps this deserves extra weight: because Cathy is so confident, and “doesn’t know what she doesn’t know,” she could easily lull someone who relies on her into a false sense that her result is more accurate than it actually is. Now, Shabbi is an IP lawyer, and his list of 10 issues is mostly centered on IP topics. But I forgave this in my scoring because when Shabbi and I decided to undertake this project, we agreed that to do it fully, we would need to ask many of our colleagues across various practice areas to weigh in (e.g., labor and employment, data privacy). We determined that doing so would slow the process down so much that ChatGPT would be outdated technology by the time we had completed the blog post. Perhaps this demonstrates that Cathy deserves another point, but she was already awarded a point for efficiency and, frankly, I feel humans need a finger on the scale at the moment.
- Readability: +1 Point for the Machine. Cathy’s responses were snappy, short, and easy to read. But that readability came at the expense of depth. See next bullet.
- Effectiveness: +1 Point for the Human. Shabbi’s article took a much deeper dive into the topics that he raised than Cathy’s did. She also repeated some of the same issues more than once in slightly different ways to round out a list of ten. For this reason, Shabbi gets the point.
Winner: With man and machine each scoring two of the available four points on my totally made-up scoring system, we have a tied ballgame.
Takeaways: My takeaways from this experiment, and from the content of the articles themselves, are that there may be a place for using Cathy at work, but the boundaries of appropriate use are still being established. For now, here are some practical tips for using Cathy (and generative AI in general) to be more efficient while avoiding trouble:
- Figure out how to prompt it in a way that gives the best result.
- Use it for appropriate projects – a blog post on a ‘man vs. machine’ style experiment is a good example of such a project.
- Do not feed it confidential information – the user cannot control what it does with that information (see the sketch after this list for one way to reduce the risk).
- Always verify what it gives you is accurate – trust, but verify.
- Perform due diligence to ensure that the response it gives you is not plagiarized.
- Include appropriate notices and disclaimers about the item being produced with ChatGPT. I expect someday this will be like a Prop 65 warning – it’ll be on everything and consequently barely noticed by the reader.
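On the confidentiality tip above: one low-tech safeguard is to scrub obvious identifiers from a prompt before it ever leaves your machine. Below is a minimal sketch in Python – an illustration only, not a complete data-loss-prevention solution. The two patterns shown (email addresses and U.S. Social Security numbers) are assumptions chosen for the example; a real deployment would need far broader rules and, ideally, human review.

```python
import re

def redact(prompt: str) -> str:
    """Replace obvious identifiers with placeholders before a prompt
    is sent to a hosted generative AI service. Illustrative only --
    two example patterns, not a complete confidentiality safeguard."""
    # Email addresses, e.g., jane.doe@example.com
    prompt = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", prompt)
    # U.S. Social Security numbers, e.g., 123-45-6789
    prompt = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", prompt)
    return prompt

print(redact("Summarize the memo from jane.doe@example.com re: SSN 123-45-6789"))
# -> Summarize the memo from [EMAIL] re: SSN [SSN]
```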
The bottom line is that the way everyone works is likely about to change at breakneck speed, and no one is perfectly clear on what that means from a legal perspective. The law will play catch-up with the technology. In the meantime, here are some of the legal issues that Shabbi and Cathy identified with respect to using generative AI in the workplace:
Top 10 Legal Issues of Using Generative AI at Work
| | Human | Machine |
| --- | --- | --- |
| Author | Shabbi Khan | ChatGPT-3.5 (prompted by Kate Wegrzyn) |
| Total time | 10 hours | 11 minutes |
| Work done | Researching, drafting, and editing over 8 days | Prompting ChatGPT (see note below) |

Note that my initial prompt was “Write a 1500 word blog article on the following topic: What are the top legal issues in using generative AI at work?” I moved to the top 10 list because the 1,500-word limit seemed arbitrarily limiting.

Result: Both articles appear below – Shabbi’s first, then Cathy’s.

Shabbi’s article:
Although various versions of generative AI models have been available to the public for the past few years, it was the release of ChatGPT that got everyone’s attention. Just two months after its launch, ChatGPT reached 100 million monthly users, making it the fastest-growing consumer application in history.
Generative AI refers to artificial intelligence that can generate new content, such as text or images. Generative AI software includes complex machine learning models that have been trained on massive amounts of publicly available data, such as websites, images, and videos. To generate text, the software uses these models to predict the next word based on the previous word sequence, repeating the process until a complete passage of text is generated. Similarly, to generate images, the software uses machine learning models to predict the next part of an image based on other images containing similar portions until a complete image is generated.
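To make the “predict the next word” loop concrete, here is a deliberately tiny Python sketch. It uses simple word-pair counts rather than the massive neural networks behind systems like ChatGPT, so treat it as an illustration of the generation loop only, not of how these models actually work internally.

```python
import random
from collections import defaultdict

# A toy "training corpus" -- real models train on billions of words.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which words follow each word in the corpus.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# Generate text one word at a time, each choice conditioned on the
# previous word, until we hit a length cap or a dead end.
word = "the"
output = [word]
for _ in range(10):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # e.g., "the cat sat on the mat"
```

Real systems condition on far longer sequences and on learned statistical patterns, but the word-by-word generation loop is the same basic idea.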
Many professionals have started using generative AI technologies in their workplaces – for example, to generate blog articles, draft emails, and create graphics – with some even daring to use ChatGPT as an automated chatbot. These professionals, however, are often using generative AI technologies without truly understanding the associated business and legal risks. The following are the top ten legal issues that companies and their employees using these generative AI tools should be aware of:
Unauthorized Use of Copyrighted Data to Train Generative AI Models
Generative AI models are trained on massive amounts of publicly available data, including copyrighted data. There are concerns that using copyrighted data to train these machine learning models violates the rights of the copyright owners. Courts in the U.S. have not yet weighed in on whether a generative AI system’s use of copyrighted data is permissible under the fair use doctrine, and many jurisdictions outside the U.S. lack a fair use exception as permissive as the U.S. doctrine, so generative AI companies may face exposure abroad. Because it is uncertain whether these companies will be allowed to continue using their models if they are found liable for copyright infringement, there is also concern about whether liability can extend to the end users and companies that have produced output using such models.
Does the Output Violate Copyright Laws?
Separate from the liability that may be imposed on the companies that have trained these generative AI models, there are concerns that if the output of a generative AI system is too similar to a copyrighted work, the output may violate copyright laws and expose the entity that distributes or publishes it to copyright infringement claims. The biggest risk here is that because users are not aware of all the copyrighted works out there, a user will not know how similar the output is to a copyrighted work and may unwittingly publish the output (likely passing the work off as their own). Doing so can not only give rise to copyright infringement claims but also result in significant damage to the user’s reputation.
Confidentiality of Input of Prompts
Prompts are the queries a user inputs into a generative AI system to generate an output. These prompts can be used by the generative AI provider to improve its models and for other purposes. As such, care should be taken to avoid sharing confidential or sensitive information, as the generative AI system can incorporate the prompts into outputs generated for other users. Many generative AI systems mention using third-party contractors to review both the input and the output for safety and moderation purposes, which means that submitting confidential information as a query may result in liability stemming from a breach of confidentiality.
Data Ownership of the Prompt and the Output
Users of generative AI software should review the Terms of Use of the generative AI system to understand the ownership rights associated with the input or prompt and with the output generated by the AI system. The user should understand what rights, if any, the generative AI system has in the input and how the AI system may use it. Similarly, the user should understand what rights the generative AI provider and the user each have in the output, and what restrictions apply to the user’s use of the output. Because the generative AI system may have rights to the output data, it is possible that the system will reproduce the same output for another user. This can result in copyright infringement and plagiarism claims, so users should proceed with caution when dealing with outputs generated by the AI system.
Authorship
Who is the author of the output of a generative AI system? Is it the person who input the prompt, the generative AI model that produced the output, or a combination of the two? OpenAI suggests users mention that the output was generated in part using its generative AI models. For instance, OpenAI has provided stock language that a user may use to describe the creative process: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.” Failure to accurately describe authorship may result in liability. In particular, it may violate the Terms of Use of certain generative AI systems, some of which require that the author not represent that output from the generative AI software was human-generated when it was not. And because the same output may be generated for another user, failing to disclose that generative AI was used to generate the content could result in claims of misrepresentation once detected.
Seeking Copyright Protection on Generative AI Content
Copyright protection for AI-generated works varies from country to country. For example, in the United States, copyright laws do not protect works created solely by a computer, but works in which an individual can demonstrate substantial human involvement may qualify for copyright protection. In the United Kingdom, works generated completely by a computer can be protected. In the European Union, the position is less clear, although human creativity expressed through an AI system may be protected. Without copyright protection on certain works, companies may not be able to enforce their rights against others in the case of blatant copying. This may be important for media companies or individuals that need copyright protection on the works they generate. Accordingly, it is important for companies to understand the risks of lacking copyright protection on such works.
Bias in Outputs
Users may consider using generative AI software for various use cases – for example, using ChatGPT as a chatbot, for evaluating candidates’ resumes, or for creative writing. In each of these use cases, the output from the generative AI software may exhibit bias that, if left undetected, may result in discriminatory behavior. Discriminatory or offensive behavior can result in public relations crises, litigation exposure, and civil penalties. Accordingly, users and companies should monitor their use of generative AI to ensure that the systems are not exhibiting bias.
Factual Inaccuracies in Outputs
Generative AI systems are great for automatically generating content. However, the content that is generated may not be factually correct. One of the biggest challenges with generative AI systems is the concept of hallucinations – the generative AI’s tendency to make up information that seems true but is not. The risk of hallucinations increases when the generative AI model is asked to output larger amounts of information. Users and companies should ensure that the AI-generated content they use is factually correct to avoid reaching incorrect conclusions or taking actions based on factually incorrect information. Failure to do so may expose the company to liability if others rely on the information to take ill-advised actions. For instance, a generative AI chatbot in a medical setting may hallucinate and tell a patient to take a medication to which the patient has a severe allergic reaction.
Algorithmic Disgorgement
Several generative AI companies are facing legal challenges due to copyright infringement claims, and it is unclear what liability, or penalty, these companies may face. A more recent enforcement mechanism, referred to as algorithmic disgorgement, has been used by the Federal Trade Commission (FTC) to penalize companies that have illegitimately collected data and used it to train an algorithm: the FTC can require a company to delete the data it collected, or to delete the algorithm or models it developed using that data. The threat of algorithmic disgorgement can extend to the output generated from these models, and as such, users may be at risk of losing work product that was generated using generative AI models found to have violated other entities’ rights.
Indemnification
Various generative AI providers require users to indemnify them as part of the Terms of Use. As such, copyright owners may consider suing a user of generative AI software on the theory that the software was trained using the copyright owner’s copyrighted data. This risk of exposure is higher when a generative AI model generates an image that is substantially similar to the copyrighted works of a particular visual artist, or when the output includes a watermark or other insignia indicating that the model was trained using copyrighted data from that artist or image source.
As companies and employees begin to adopt generative AI solutions in their workplace, they should understand the risks associated with the adoption of generative AI and establish policies that will help mitigate such risks.
Cathy’s article:
Generative Artificial Intelligence (AI) has the potential to revolutionize the way businesses operate, automate repetitive tasks, and create new products and designs. However, with any new technology, there are also legal implications to consider. Organizations using generative AI at work must be aware of the top legal issues that arise when using this technology. In this article, we will discuss the top 10 legal issues that organizations face when using generative AI at work.
Intellectual Property Rights
One of the primary legal issues in using generative AI at work is intellectual property rights. When generative AI is used to create new products or designs, the question of who owns the rights to these creations becomes a concern. In many cases, the output generated by generative AI may be considered a “joint work,” with both the AI system and the human users sharing the rights. However, the specific ownership and distribution of rights can be complex and can depend on the circumstances of each case.
Data Privacy
Another legal issue that organizations face when using generative AI at work is data privacy. AI systems require vast amounts of data to train and improve their performance, and this data can often be sensitive or personal in nature. Organizations must ensure that they comply with all relevant data privacy laws and regulations, including the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
Employment Law
AI systems can automate many tasks that were previously performed by humans, and this can have a significant impact on the employment landscape. Organizations must ensure that they comply with all relevant employment laws and regulations, including fair selection processes for which tasks will be automated and providing appropriate training and support to affected employees.
Liability
When an AI system generates a product or service, the question of who is responsible for any harm that may arise from the use of that product or service becomes a concern. Organizations must ensure that they have appropriate liability insurance in place to cover any potential claims.
Discrimination
Organizations must also be aware of the potential discrimination risks that may arise from the use of generative AI. AI systems must be designed to be inclusive and avoid biases that could impact specific groups. Regular monitoring and audits should be conducted to ensure that the AI systems are working fairly and effectively.
Fairness
The use of generative AI must also be fair and transparent. Organizations must ensure that the AI systems are not making decisions that are discriminatory or biased and must be able to demonstrate how decisions were made if necessary.
Contract Law
Organizations must also be aware of contract law when using generative AI at work. Contracts must clearly define the rights and responsibilities of the AI system, the human users, and any other parties involved in the creation of the output.
Regulatory Compliance
Organizations must ensure that they comply with all relevant regulations, including industry-specific regulations, such as financial services regulations or healthcare regulations.
Cybersecurity
Cybersecurity is a major concern when using generative AI at work. Organizations must implement appropriate security measures to protect the data, including encryption, access controls, and monitoring systems.
International Law
Organizations must also be aware of international law when using generative AI at work. Different countries have different laws and regulations regarding the use of AI, and organizations must ensure that they comply with the laws and regulations of each jurisdiction in which they operate.
Conclusion
Generative AI has the potential to transform the way businesses operate, automate repetitive tasks, and create new products and designs. However, organizations must be aware of the legal implications of using generative AI at work.
Context
It is not lost on us that in this experiment we, being lawyers, used legal content as our test case – specifically asking Cathy to spot and write about the legal issues surrounding the use of generative AI. As lawyers, we are especially concerned with providing accurate content, both in our client work and where, as here, our analysis is offered as general information rather than legal advice. But accuracy is critically important in almost any field.
Thus, as noted above, before posting work like this for public consumption, Shabbi would normally consult with attorneys more knowledgeable in other practice areas (which he did not do here). The same is true for anyone who relies on Cathy: the content she generates should not be used on its own, and it should be checked and vetted before anyone relies on it or offers it to someone else as authoritative. Needless to say, the content generated by this experiment is merely an illustration and is not intended to be relied upon for its substantive accuracy.
If you have questions about generative AI, the issues raised in this article, or AI more generally, please reach out to any member of Foley & Lardner LLP’s Artificial Intelligence team.