Marketing consultant loses job because he doesn't understand generative AI
A marketing consultant lost his job after using ChatGPT to "research" historical film reviews for a movie trailer. The incident highlights a widespread misunderstanding of how generative AI works.
the-decoder.com
AI in practice
Aug 26, 2024
Matthias Bastian
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.
According to Deadline, marketing consultant Eddie Egan was fired for using an AI tool such as ChatGPT to generate review quotes for a trailer for the movie "Megalopolis." The trailer included highly critical quotes about director Francis Ford Coppola's previous work that turned out to be AI-generated fabrications.
Egan's goal was to argue that "Megalopolis," like Coppola's previous films, would initially face harsh criticism but ultimately be recognized as a masterpiece. The trailer quoted renowned film critics such as Pauline Kael of The New Yorker and Andrew Sarris of The Village Voice, who supposedly called classics like "The Godfather" a "sloppy, self-indulgent movie" and "Apocalypse Now" an "epic piece of trash."
In reality, these scathing reviews never happened. On the contrary, the critics praised these films, as reported by Vulture magazine. As a result, production company Lionsgate apologized for the mistake, removed the trailer and terminated Egan's contract.
AI models generate words, not facts
This case demonstrates how easy it is to be misled by ChatGPT and similar systems if you don't understand their underlying mechanisms. The large language models (LLMs) powering these tools generate words based on probabilities, conditioned on the user's prompt. The resulting sentences can be accurate or plausible-sounding nonsense; these models have no built-in fact-checking capabilities. If you ask for critical reviews, they will generate some, whether or not those reviews ever existed.
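The mechanism described above can be illustrated with a deliberately toy sketch: a next-token sampler that walks a hand-written probability table. The table, its tokens, and the probabilities are all invented for illustration; real LLMs learn such distributions from vast text corpora. The point is structural: at no step does anything check whether the emitted sentence is true.

```python
import random

# Toy next-token probability table (entirely invented for illustration;
# real LLMs learn distributions like this from training data).
# Note that a praising and a panning continuation are equally "plausible".
NEXT_TOKEN_PROBS = {
    "<start>": {"The": 1.0},
    "The": {"Godfather": 0.5, "film": 0.5},
    "Godfather": {"is": 1.0},
    "film": {"is": 1.0},
    "is": {"a": 1.0},
    "a": {"masterpiece": 0.5, "sloppy,": 0.5},
    "masterpiece": {"<end>": 1.0},
    "sloppy,": {"self-indulgent": 1.0},
    "self-indulgent": {"movie": 1.0},
    "movie": {"<end>": 1.0},
}

def generate(seed=None):
    """Sample one token at a time until <end>.

    Nothing here verifies facts: the sampler only follows probabilities,
    so praise and fabricated criticism are produced with equal confidence.
    """
    rng = random.Random(seed)
    token, output = "<start>", []
    while True:
        probs = NEXT_TOKEN_PROBS[token]
        token = rng.choices(list(probs), weights=list(probs.values()))[0]
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(generate())
```

Run it a few times and it will sometimes call the film a masterpiece and sometimes a "sloppy, self-indulgent movie," both delivered with the same fluency, which is exactly the failure mode behind the fabricated trailer quotes.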
Others have been fooled by the chatbots' confident-sounding output: attorney Steven A. Schwartz used ChatGPT for legal research, unaware that the system could generate false content. In another case, attorneys used ChatGPT to find and cite supposed precedent cases that turned out to be AI inventions.
These examples show that many people still do not understand how generative AI works, and that its output should not be used unchecked. Even OpenAI's own first SearchGPT demo contained a factual generation error.