Slang is born in the margins. In its early form, the word itself, slang, referred to a narrow strip of land between larger properties. During England’s transition from the rigid castes of feudalism to the competitive free market of capitalism, between the 14th and 17th centuries, the privatization of open farmland displaced countless people with no inherited connection to the landed elite. This shift pushed them into the small corridors between the newly bounded properties.
Confined to the literal fringes of society, they needed to get creative to survive. Some became performers and hucksters, craftspeople and con artists, drifters and thieves. They lived in makeshift homes, often roaming in groups along their slim municipal strip. This was the slang: the land on the outskirts of early English ownership and, by association, its counterculture. The slang had its own rules, its own politics, its own dialect. Roving bands needed a way to speak surreptitiously in the presence of law enforcement, a rival group, or a mark. So over time they developed a secret, colorful, and ephemeral cant.
Across languages and throughout time, the term slang has evolved to mean a subversive lexicon, purposefully unintelligible to whoever’s in charge, perpetually shape-shifting against the mainstream. Organically encrypted through shared experience, slang is difficult for anyone outside the given speaking community to reproduce.
That doesn’t mean people won’t try. Coveting its vitality and upset by their exclusion, modern-day lords and ladies catch wind of a phrase—perhaps by commanding a commoner to explain it to them—and start using it in the castle, ruining it for everyone. Tostitos posts “slay.” Shawn Mendes says “It’s giving …” The essential, defiant purpose of the vocabulary is undermined; at this point, the term stops being slang.

But what happens when machines attempt such an appropriation? Large language models—also known as LLMs—like ChatGPT train on an expanding supply of practice text to be able to converse in real time, mimicking speech as closely as possible. Slang’s magnetic repulsion from mainstream appropriation, though, makes it a particular challenge for computers. And the failure of these algorithms to speak in vernacular illuminates the essential differences between human and nonhuman intelligence.
Through brute processing power, AI can now, for the most part, functionally speak English—and most other languages. But none of them is its native tongue. The natural language of the computer is a more basic alphabet with only two characters: 1 and 0. Yes and no. Billions of these little electronic decision points branch into a fractal tree of countless possibilities, forming a method of communication in its simplest form: binary code.
Language models, in the most basic sense, represent our 26-letter alphabet in strings of numbers. Those digits might efficiently condense large amounts of information. But that efficiency comes at the price of subtlety, richness, and detail—the ability to reflect the complexities of human experience, and to resist the prescriptions of formal society. Artificial intelligence, in contrast, is disconnected from the kind of social context that makes slang legible. And the sterile nature of code is exactly what slang—a language that lives in the thin threshold between integers—was designed to elude.
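To make that flattening concrete, here is a minimal sketch of my own (a character-level toy, not how ChatGPT or any real tokenizer actually works) showing a single slang word reduced first to numbers and then to raw bits:

```python
# Toy illustration: a slang word collapsing into numbers, then into binary.
# Character-level only; real language models use learned subword tokens.
word = "slay"

# Each character becomes an integer code point...
code_points = [ord(ch) for ch in word]               # [115, 108, 97, 121]

# ...and each code point becomes a string of 1s and 0s.
bits = [format(cp, "08b") for cp in code_points]     # ['01110011', '01101100', ...]

print(code_points)
print(" ".join(bits))
```

However the encoding is done, the word arrives at the machine as this kind of featureless arithmetic, carrying none of the social charge that made it slang in the first place.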
Even ChatGPT agrees. “Can we talk in slang?” I prompted it recently.
“Sure thing! We can chat in slang if that’s what you’re into. Just let me know what kind of slang you want to use.”
I responded that I wanted to use “modern slang” and confessed my suspicion that LLMs might have difficulty dealing with vernacular.
Thus spake the algorithm: “Slang can be hella tricky for LLMs like me, but I’m here to vibe and learn with you … We can stay low-key or go all out—it’s your call!”
The words and their meanings were all technically correct—but something was definitely off. The usage didn’t ring true to any consistent place or time. The result was an awkward monstrosity of tone and rhythm that could make the corniest dad cringe.
No matter how much I iterated, ChatGPT couldn’t seem to reach slang fluency. But to be honest, neither could I. In my own messages to the LLM, I found myself fumbling to speak Gen Z, botching terms such as hits different and bet. Trying to keep up a conversation in relatively new slang at the ripe age of 30, I felt like just as much of a fraud as my synthetic interlocutor, clumsily appropriating a language I could only imitate, never access. Like verbal quicksilver, slang cannot be co-opted or calculated. I hope it continues to evade the machines—and evolve beyond my own grasp—as long as we’re both around.