Why does this matter to businesses?
You might be asking why a marketing company is writing a long-form blog post about AI and intellectual property rights.
Quite simply, AI tools are exploding across today’s world, with pundits alternately announcing the AI-driven salvation or destruction of all we hold dear. Writers, performers, artists, and other creators are justifiably nervous about their place, their rights, and their ability to make a living in a future that has abruptly arrived this year.
We hope to explain the issues, describe the evolving landscape of rights around generative AI, and help you understand how it all impacts you and what best practices you should follow.
Overview and Topics
This week, the Writers Guild of America agreed to end its strike, part of which centered on the regulation of AI creation tools and AI-created written content ( https://www.cbsnews.com/losangeles/news/wga-ends-strike-releases-details-on-tentative-deal-with-studios-writers-hollywood/ ). The Guild’s objections that led to the strike, and the agreed-upon terms, are relevant to situations and industries far beyond the Writers Guild and the screenwriting profession.
Topics Covered in this blog (click to skip to each section):
- What is the Writers Guild of America’s new contract as it pertains to AI-generated content?
- What do the Writers Guild of America’s contract terms on AI-generated content mean?
- How should the strike-ending WGA agreement guide the use of AI in others’ works and your own?
- Is AI-generated content ethically and morally usable in academia?
- Is AI-generated content ethically and morally usable in business?
- Are there other ramifications of the Hollywood Strike?
First, let’s look at the exact terms of the new agreement, then interpret what they mean, and then discuss the broader implications beyond screenwriting.
What is the Writers Guild of America’s new contract as it pertains to AI-generated content?
According to the WGA’s own contract website ( https://www.wgacontract2023.org/the-campaign/summary-of-the-2023-wga-mba ), the AI terms of the agreement are:
We have established regulations for the use of artificial intelligence (“AI”) on MBA-covered projects in the following ways:
- AI can’t write or rewrite literary material, and AI-generated material will not be considered source material under the MBA, meaning that AI-generated material can’t be used to undermine a writer’s credit or separated rights.
- A writer can choose to use AI when performing writing services, if the company consents and provided that the writer follows applicable company policies, but the company can’t require the writer to use AI software (e.g., ChatGPT) when performing writing services.
- The Company must disclose to the writer if any materials given to the writer have been generated by AI or incorporate AI-generated material.
- The WGA reserves the right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law.
The above text is © 2023 Writers Guild of America and is copied verbatim from the URL listed above, in case the contract website is removed at some future date. While we staunchly oppose copying more than a snippet from another website, we believe the WGA wants this information widely disseminated and will not object to our copying under the four factors of the Fair Use Doctrine.
What do the Writers Guild of America’s contract terms on AI-generated content mean?
#1 seems to mean that WGA-covered scripts cannot contain content written solely by an AI content tool. More specifically, inserting AI-written content into a larger work does not allow any entity other than the work’s individual human contributors to enjoy the writing credit, or the Separated Rights that flow from that credit. So, AI can’t be used by studios to write scripts, and AI can’t get credit or take credit away from human writers.
#2 says that IF the writer(s) and the company engaging them (usually a production company) BOTH agree, a writer CAN use AI tools themselves, as long as they comply with the company’s policies. The company cannot REQUIRE the writer to use AI, however. This item is very straightforward!
#3 is also very clear: if an outline or other similar base materials normally provided to a writer for guidance in creating a work incorporate any AI-generated material, the company must disclose this to the writer. A very intuitive requirement that is easy to comply with.
#4 is worded more cumbersomely and is less clear. Axios made one early interpretation (see the Editor’s note at the bottom of their article) and then walked it back; their current reading is that this agreement does not itself prohibit using writers’ work to train AI, but that the union has reserved the right to assert that the practice is already prohibited by the existing WGA Minimum Basic Agreement (MBA) or by other law. Our own interpretation is that the WGA is saying that, even setting aside all the other lawsuits about training generative Large Language Model “AIs” on existing documents, the practice is already prohibited by the WGA’s own MBA, and no further debate is needed. The new contract simply reiterates the WGA’s stance without requiring the signatories to attest that they agree at this point (while also constructing an obstacle to anyone disputing it later). So, a later suit or dispute could center on whether this question is in fact already settled. But for now, the writers’ strike is over.
Now, how does this apply outside of the WGA and screenwriting? Well, this agreement will doubtless be held up as a cornerstone in the negotiation of AI-related rights for creators, both literary (like the WGA) and broader. It’s always easier for lawyers to cite an already-agreed-upon legal or contractual precedent when negotiating similar situations that the precedent directly covers.
How should the strike-ending WGA agreement guide the use of AI in others’ works and your own?
The WGA agreement has laid out some very solid and reasonable guidelines for the use of AI-generated content in commercial creative fields, where credit and compensation are vitally important. Below we’ll cover a few domains where this agreement might have bearing.
Is AI-generated content ethically and morally usable in academia?
In a word, no. In academia, the restrictions are even more stringent. Submitting AI-generated final content is already generally forbidden, in a fashion similar to plagiarism. Just as copying a sentence, a paragraph, or more from another’s work into your own would be an academic integrity violation, copying AI-generated content is likewise forbidden. The only difference is that the “original creator” you’re copying from is a computer rather than another person. That may make it harder to detect, because there isn’t an exact copy lying around somewhere out in the world that a professor can discover you copied from, but it is still 100% a violation of academic integrity, because it is a form of contract cheating.

The only place where AI tools might have an ethical use is in organizing and reformatting existing original content. Writing an outline from some notes would generally be an acceptable use, but it would be critical for the student to retain a “paper trail” of the original notes or content, to demonstrate that the AI tools did not significantly contribute new “original” work (only simple transformation of human work), and to disclose this use to their professor(s) and graders. Unmodified AI-produced words and concepts should NEVER be placed into a final academic work.

So, AI-generated content is NOT ethically usable in academia, though AI organizing tools might be reasonable. In this regard, the WGA agreement’s AI terms are very similar to, though more permissive than, academic standards.
Is AI-generated content ethically and morally usable in business?
In a limited capacity, yes, but there are moral and functional limitations. Let’s address the functional problems first.
- AI tools often accidentally plagiarize without telling you.
- AI tools often create factually incorrect content without telling you.
- AI tools generally create very bland, boring content that content consumers don’t find engaging. The solution to this is extensive prompt engineering, which can be unpredictable and time-consuming.
Those are three good reasons to avoid using AI-generated content commercially, or at least to use it with great care. Any produced content will need to be vetted by a person reasonably experienced in the topic to make sure it isn’t stealing/plagiarizing content or making up damaging lies. And unless your goal is to produce boring, unengaging content, almost anything it creates will need to be rewritten by a talented writer to be compelling and engaging to humans. Do you want your company or brand to be known for content that is copycat, bland, or even distasteful or wrong?
The final reason to be very cautious about AI-generated content is that it undermines genuinely good human content creators. AI content is generally not original: it synthesizes existing media and information, transforming it into a new and different form. GPT literally stands for Generative Pre-trained Transformer, and the name says it all. It Generates, based on Transforming Pre-trained information. If we stop creating new, original information, all of the trained models stagnate and start decaying. In fact, as of this writing, the training data of Large Language Models like OpenAI’s ChatGPT, Google’s Bard, and Meta’s LLaMA is no more recent than late 2021 or early 2022. LLM trainers are also very wary of training LLMs on AI-generated content because of a problem called model collapse, in which a model’s learning degrades once it starts to consume and believe output from other AI models.
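To make model collapse concrete, here is a minimal toy sketch in Python (our own illustration, not drawn from any LLM vendor’s actual training process). A simple statistical “model” (a one-dimensional Gaussian) is repeatedly refit on samples drawn from the previous generation of itself, standing in for a model trained on its own AI-generated output:

```python
# Toy illustration of model collapse: refit a simple "model" on data
# sampled from the previous generation's model and watch it degrade.
# Deliberately simplified; real LLM training is vastly more complex.
import random
import statistics

def fit_gaussian(samples):
    # "Train" a model: estimate mean and standard deviation from data.
    return statistics.mean(samples), statistics.stdev(samples)

random.seed(42)
n = 50  # training samples per generation

# Generation 0 trains on "human" data: a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(n)]

for generation in range(20):
    mu, sigma = fit_gaussian(data)
    print(f"generation {generation:2d}: mean={mu:+.3f}, stddev={sigma:.3f}")
    # Every later generation trains only on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(n)]
```

Run it (or increase the number of generations) and the fitted parameters drift away from the original distribution: each generation captures only what the previous model reproduced, so over enough generations the spread collapses and information about the original human data is progressively lost. That compounding loss is exactly why new, original human content remains essential.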
We need good content to continue to be created, so we need to support content creators, both screenwriters and creators in the larger business world. If you use generative AI tools, make sure you understand what inputs they are transforming to create the end product, and that you are not infringing on the rights of the original creators. To protect original creators, the Steam game publishing platform is already refusing to distribute games that use AI-generated content for which the rights situation may be unclear.
So, as in academia, we recommend only limited use of AI tools in business, for multiple reasons. They excel at playing organizational executive assistant, but they should not be used to generate final content.
Are there other ramifications of the Hollywood Strike?
Yes. SAG-AFTRA, the union representing screen actors, has similarly justified concerns about AI tools replacing THEIR members’ livelihoods. Much as literary AI tools could try to replace writing, audiovisual AI tools have the potential to impinge upon actors’ ability to work for a living. Examples already abound of notable actors’ voices and appearances being AI-generated or altered for production convenience. Actors are concerned that AI could cut them out of paid work by shifting it to AI-generated facsimiles of themselves without reasonable compensation. Already, films like Rogue One (pre-AI) re-created characters such as the late Peter Cushing’s Grand Moff Tarkin and a young Carrie Fisher’s Leia using close look-alikes and digital effects, and Indiana Jones and the Dial of Destiny allegedly used AI to de-age Harrison Ford. It’s not unthinkable that, if the money were good, studios would reanimate made-to-order AI zombies of any performer. Like the WGA, SAG-AFTRA is looking to define the regulations and compensation under which these activities are permitted. It is likely that, with the precedent of the new WGA contract, a future SAG-AFTRA contract will follow similar lines to protect the rights of its members.