Back in May, just a week into the Writers Guild of America’s strike, John August, a member of the union’s negotiating committee and writer of Charlie’s Angels, described his personal dystopia: “the Nora Ephron problem”—a world in which artificial intelligence evolves to become a writer so profound it can mimic the style of a surefire hitmaker.
The synthetic Nora Ephron may yet come to pass, but the deal struck this week between the WGA and the Alliance of Motion Picture and Television Producers (AMPTP) will go some way toward protecting writers against its impact.
In short, the contract stipulates that AI can’t be used to write or rewrite any scripts or treatments, ensures that studios will disclose if any material given to writers is AI-generated, and protects writers from having their scripts used to train AI without their say-so. It also leaves writers free to use AI themselves if they choose. At a time when people in many professions fear that generative AI is coming for their jobs, the WGA’s new contract has the potential to be precedent-setting, not just in Hollywood, where the actors’ strike continues, but in industries across the US and the world.
The strike officially ended early today, 148 days after it began, falling less than a week shy of the longest in the union’s history. When the guild’s 11,500 members walked off the job on May 2, AI was a hot topic, but as the work stoppage continued, the specter of AI becoming everyone’s new coworker only loomed larger. Everyone from authors to coders to architects began to face existential questions about ways the technology could encroach on their lives and livelihoods. Negotiations over the language around AI in the WGA’s contract reportedly went down to the wire, getting settled just before the tentative agreement was announced on Sunday.
The terms, on paper at least, are a coup for writers. Beyond putting up guardrails to ensure AI can’t replace script writers outright, they also curb the more likely scenario—that writers would be asked to adapt or edit something written by a large language model or tool like ChatGPT, for less pay than producing an original work, possibly without their knowledge. (That transparency is enshrined too.) “That’s a crisis in our compensation, it’s a crisis in our residuals, and a crisis in our artistic ability to do the things we are put in this industry to do,” August said on that point back in May. (AMPTP had initially offered “annual meetings to discuss advancements in technology” rather than specific stipulations about AI’s use.)
“I think the key issue here is ‘AI-generated material can’t be used to undermine a writer’s credit or separated rights,’” says Matthew Sag, a professor of law and artificial intelligence at Emory University. “I had always understood the major concern of the writers was that studios would use AI to dilute the credit/compensation due to writers working on their shows. This agreement appears to address that concern and acknowledges the reality that many writers will choose to use AI tools to accelerate their workflow. Leaving the writers with a genuine choice in this regard is an important victory for the WGA.”
The deal is not without its quandaries. Enforcement is an overriding one, says Daniel Gervais, a professor of intellectual property and AI law at Vanderbilt University in Nashville, Tennessee. Figuring that out will likely set another precedent. Gervais agrees that this deal gives writers some leverage with studios, but it might not be able to stop an AI company, which may or may not be based in the US, from scraping their work.
There are also questions around who carries the burden to reveal when AI has contributed some part of a script. Studios could argue that they took a script from one writer and gave it to another for rewrites without knowing that the text had AI-generated components. “As a lawyer, I’m thinking, ‘OK, so what does that mean? How do you prove that? What’s the burden? And how realistic is that?’” Gervais says.
The future implied by the terms of the WGA deal is one in which machines and humans work together. From an artist’s perspective, the agreement does not villainize AI, instead leaving the door open for continued experimentation, whether that be generating amusing names for a Tolkienesque satire or serious collaboration with more sophisticated versions of the tools in the future. This open-minded approach contrasts with some of the more hysterical reactions to these technologies—hysteria that’s now starting to see some pushback.
Outside Hollywood, the agreement sets a precedent for workers in many fields—namely, that they can and should fight to control the introduction of disruptive technologies. What, if any, precedents are set may become obvious as soon as talks resume between AMPTP and the actors union, the Screen Actors Guild—American Federation of Television and Radio Artists (SAG-AFTRA). It’s unclear just how soon those negotiations will pick back up, but it’s highly likely that the guild will look to WGA’s contract as a lodestar.
The WGA’s deal could weigh heavily on SAG-AFTRA’s negotiations. Actors have stronger protections in the form of the right of publicity—also known as name, image, and likeness rights—yet intense concerns remain about synthetic “actors” being built from the material of actors’ past performances. (As of this writing, SAG-AFTRA had not responded to a request for comment.) It will also be interesting to see whether any of the issues that came up during the WGA’s negotiations trickle into ongoing unionization efforts at video game studios or other tech firms. On Monday, SAG-AFTRA members authorized a strike for actors who work on video games; once again, AI was one of the issues raised.
When it comes to AI, argues Simon Johnson, an economist at MIT, the WGA has burst out in front of other unions, and everyone should take note. As he and several coauthors laid out in a recent policy memo on pro-worker AI, the history of automation teaches that workers cannot wait until management deploys these technologies; if they do, they will be replaced. (See also: the Luddites.)
“We think this is exactly the right way to think about it, which is that you don’t want to say no to AI,” he says. “You want to say the AI can be controlled and used as much as possible by workers, by the people being employed. In order to make that feasible, you’re going to have to put some constraints on what employers can do with it. I think the writers are actually, in this regard, in a pretty strong position compared to other workers in the American economy.”