“Initially, the WGA's AI proposals looked like outliers. Everything else on the list was talking about writer compensation, making sure writers were paid fairly to justify the immense value they were bringing to the studios. Over the negotiation, it became clear that the AI proposals are really part of a larger pattern. The studios would love to treat writers as gig workers. They want to be able to hire us for a day at a time, one draft at a time, and get rid of us as quickly as possible. I think they see AI as another way to do that,” John August, a screenwriter known for writing the films Charlie’s Angels and Charlie and the Chocolate Factory, told Motherboard.
“The idea that our concerns could be addressed by an annual meeting is absurd and honestly offensive. Everyone watching AI can tell you that these large language models are progressing at an incredible rate. AI-generated material isn't something that's going to become a factor in a few years. It's here now. It's lucky that we're negotiating our contract this year and not next year, before these systems become widely entrenched,” August said.
August expanded on the guild’s two AI stipulations for Vox, saying, “First, the guild wants to make sure that ‘literary material’—the MBA term for screenplays, teleplays, outlines, treatments, and other things that people write—can’t be generated by an AI. If a movie made by a studio that has an agreement with the WGA has a writing credit—and that’s over 350 of America’s major studios and production companies—then the writer needs to be a person.”
“Second, the WGA says it’s imperative that ‘source material’ can’t be something generated by an AI, either. This is especially important because studios frequently hire writers to adapt source material (like a novel, an article, or other IP) into new work to be produced as TV or films,” August added. “It’s very easy to imagine a situation in which a studio uses AI to generate ideas or drafts, claims those ideas are ‘source material,’ and hires a writer to polish it up for a lower rate.”
“The immediate fear of AI isn’t that us writers will have our work replaced by artificially generated content. It’s that we will be underpaid to rewrite that trash into something we could have done better from the start. This is what the WGA is opposing and the studios want,” C. Robert Cargill, a screenwriter best known for writing the films Sinister and Doctor Strange, tweeted. “The same IP laws that prevent you from stealing our writing protects us from a machine doing it as well. Because AI is just cut and paste.”
The AMPTP’s position reflects yet another overblown perception of AI’s capabilities, and follows a number of corporate media shake-ups in which executives decided to prioritize AI content over human-created content. Last week, BuzzFeed CEO Jonah Peretti shuttered BuzzFeed News, claiming in a letter that the digital media company would pivot to a new strategy that includes “AI enhancements.”
The reality is AI is still filled with misinformation and bias. Recently, Microsoft researchers acknowledged in a paper that GPT-4 has trouble distinguishing between true facts and guesses and personalizing outputs to users, and also tends to make far-fetched conceptual leaps. They also found that GPT-4 makes up facts that aren’t in its training data, is very sensitive to framing and wording of prompts, and inherits the prejudices and biases from its training data—something AI ethics researchers have proven time and again about machine learning systems in general.
Generative AI systems are already facing a number of copyright challenges from writers and artists who claim that the systems were trained on their copyrighted work without permission. So far, Getty Images has filed a lawsuit against Stability AI, the company behind the text-to-image generator Stable Diffusion, for using a dataset that contains over 12 million photographs from Getty to train its AI model. Karla Ortiz, an artist and board member of the Concept Art Association, an advocacy organization for artists, is leading a fundraising effort to hire a lobbyist in Washington, D.C. to push for updated IP laws and stronger regulation of AI companies. Writers have not yet filed a lawsuit of their own, but training AI on writers’ scripts is implausible without their explicit permission—which, evidently and understandably, is not something that will be freely given.
Another issue that union writers are concerned about is the hardship faced by underpaid, often-foreign workers who are tasked with training, moderating, and maintaining many of the world’s largest AI models. AI experts have frequently pointed out that these systems are far less automated than they are often portrayed—so citing efficiency as grounds for cutting staff and relying on AI tools is an oversimplification that further perpetuates a power imbalance and the exploitation of workers from countries with fewer workplace protections.
“One of the things that I think that we're starting to rub up against is a laser focus on efficiency: what's the fastest way to get this done? How can we get this done cheaper? Well, that doesn't necessarily make life better,” Peeples said. “So we’re saying, before we render a judgment on whether AI is good or not, let’s talk about these things with a different lens on it—that efficiency and speed production is not of the highest value.”