Using AI to Create Content? Watch Out for Copyright Violations


Businesses using generative AI programs like ChatGPT to create any content—whether for blogs, websites or other marketing materials, and whether text, visuals, sound or video—need to ensure that they’re not inadvertently using copyrighted materials in the process.

Clearly, the times they are a-changin', and businesses need to adapt. Employers should update their policy manuals and communicate the changes to employees and contractors, so that communications professionals and others crafting content are aware of the risks of using AI-generated materials. Those risks go beyond the possibility that the output is “hallucinated” rather than factual, although that is worth considering, too.

Generative AI models, including the large language models (“LLMs”) behind tools like ChatGPT, are trained on billions of inputs (text, images and other data scraped from the internet) and “learn” to put together sentences based on which words tend to follow which. Given that, it might seem unlikely that their output would mimic any given copyrighted work that closely. But researchers and journalists, among others, have raised concerns, and in some cases rights holders have filed suit. For example, photo giant Getty Images sued the creator of an AI tool called Stable Diffusion, claiming that its photos were improperly used.
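To make the “which words follow which” idea concrete, here is a deliberately oversimplified sketch in Python: a toy bigram model that records which word follows which in a tiny training text and then generates new text from those counts. The corpus and names are purely illustrative assumptions, and real LLMs are vastly more sophisticated, but the sketch shows why a model built from training data can end up echoing that data.

import random
from collections import defaultdict

# Stand-in for training data; real systems ingest billions of documents,
# some of them copyrighted.
corpus = "the quick brown fox jumps over the lazy dog"

# Record, for each word, the words observed to follow it.
followers = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

def generate(start, max_words=8):
    """Generate text by repeatedly sampling a word seen to follow the last one."""
    output = [start]
    for _ in range(max_words):
        candidates = followers.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))
# With so little training data, the toy model can only parrot its source text;
# partial regurgitation of copyrighted inputs at real-world scale is what
# drives the concerns described above.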

Establishing that copyright infringement has taken place requires a finding of close resemblance in both content and style, which can be easier to show with visuals such as photographs or company logos than with written text. But either type of content can draw anything from a cease-and-desist letter to a lawsuit if the creator believes its copyright has been violated.

Companies like OpenAI, the creator of ChatGPT, acknowledge that copyrighted works are fed into the training process for their generative AI programs, but claim that using the text, images and other data as training inputs should qualify as Fair Use, the doctrine in the Copyright Act that permits reproduction of copyrighted materials for purposes such as teaching, research and news reporting. However, more than a dozen lawsuits, most of them class actions, were filed against generative AI companies last year alone.

OpenAI has said that its AI tools transform the copyrighted works enough to add something new and materially change them. It has also claimed Fair Use protection on the grounds that the inputs used for training do not, in and of themselves, become accessible to the public until that transformation has taken place.

So, what can businesses do to protect themselves?

For starters, they should ensure that any generative AI platform they use provides a written assurance that its outputs have been vetted and will not reproduce content in a way that could be construed as copyright infringement. If the provider has not already done so, they also should insist that it indemnify their business for any infringement of intellectual property that inadvertently occurs.

Vendor and customer agreements also should disclose whether either party is using AI to create content, state plainly and in writing that both parties understand the intellectual property pitfalls and will take steps to avoid them, and stipulate that neither party may input confidential information about the other into an AI tool.

And if a business believes that its own intellectual property has been infringed upon, that is the time to talk to its attorney about a sternly worded cease-and-desist letter, and possibly about escalating to legal action.
