The Tech That’s Radically Reimagining the Public Sphere
By Jesse Barron
Facial recognition was a late-blooming technology: It went through 40 years of floundering before it finally matured. At the 1970 Japan World Exposition, a primitive computer tried—mostly in vain—to match visitors with their celebrity look-alikes. In 2001, the first-ever “smart” facial-recognition surveillance system was deployed by the police department in Tampa, Florida, where it failed to make any identifications that led to arrests. At a meeting in Washington, D.C., in 2011, an Intel employee tried to demonstrate a camera system that could distinguish male faces from female ones. A woman with shoulder-length red hair came up from the audience. The computer rendered its verdict: male.
Facial recognition was hard for two reasons. Teaching a computer to perceive a human face was trouble enough. But matching that face to the person’s identity in a database long seemed fanciful: it required enormous computing power and vast quantities of photographs tied to accurate identity data. This stalled widespread adoption, because matching was always going to be where the money was. In place of facial-recognition technology (FRT), other biometrics, such as fingerprinting and retinal scanning, came to market. The face-matching problem hadn’t been cracked.
Or so everybody thought, until a pair of researchers from the nonprofits MuckRock and Open the Government made a discovery. They had been sending Freedom of Information Act requests around the country, trying to see whether police departments were using the technology in secret. In 2019, the Atlanta Police Department responded to one of those FOIAs with a bombshell: a memo from a mysterious company called Clearview AI, which had a cheap-looking website yet claimed to have finally solved the problem of face-matching, and was selling the technology to law enforcement for a few thousand dollars a year. The researchers sent their findings to a reporter at The New York Times, Kashmir Hill, who introduced readers to Clearview in a 2020 scoop.
Hill’s new book, Your Face Belongs to Us, provides a sharply reported history of how Clearview came to be, who invested in it, and why a better-resourced competitor like Facebook or Amazon didn’t beat this unknown player to the market. The saga is colorful, and the characters come off as flamboyant villains; it’s a fun read. But the book’s most incisive contribution may be the ethical question it raises, which will be at the crux of the privacy debate about facial-recognition technology for many years to come. We have already willingly uploaded our private lives online, including to companies that enthusiastically work with law enforcement. What does consent, or opting out, look like in this context? A relative bit player made these advances. The rewriting of our expectations regarding privacy requires more complex, interlacing forces, and our own participation.
Hill’s book begins about five years after Intel presented its useless facial-recognition tech in Washington, but it might as well be a century later, so dramatically has the technology improved. It’s 2016, and the face-matching problem is no longer daunting. Neural nets—basically, artificial-intelligence systems that are capable of “deep learning” to improve their function—have conquered facial recognition. In some studies, they can even distinguish between identical twins. All they need is photographs of faces on which to train themselves—billions of them, attached to real identities. Conveniently, billions of us have created such a database, in the form of our social-media accounts. Whoever can set the right neural net loose on the right database of faces can create the first face-matching technology in history. The atoms are lying there waiting for the Oppenheimer who can make them into a bomb.
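(For the technically curious, the core mechanic can be sketched in a few lines: a neural net boils each photo down to a list of numbers, an “embedding,” and matching a face is just finding the nearest stored list. The toy Python below is an illustration, not Clearview’s actual code; its embed() function is a hypothetical stand-in for a trained network and fakes its output with seeded random numbers.)

```python
import zlib
import numpy as np

def embed(photo_id: str) -> np.ndarray:
    # Hypothetical stand-in for a trained neural net that maps a face
    # photo to a fixed-length vector (an "embedding"). Real systems learn
    # this mapping from millions of labeled photos; here we fake it with
    # seeded random numbers so the sketch runs on its own.
    seed = zlib.crc32(photo_id.encode())
    return np.random.default_rng(seed).normal(size=128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similar faces should produce nearby vectors; cosine similarity
    # scores closeness on a scale from -1 to 1.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The "database": embeddings already tied to real identities.
database = {name: embed(name) for name in ("alice.jpg", "bob.jpg", "carol.jpg")}

def match(probe_photo: str, threshold: float = 0.95):
    # Compare the unknown face against every stored identity and keep
    # the closest one, if it clears the similarity threshold.
    probe = embed(probe_photo)
    best_name, best_score = max(
        ((name, cosine_similarity(probe, vec)) for name, vec in database.items()),
        key=lambda pair: pair[1],
    )
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

print(match("alice.jpg"))     # same fake embedding, so a confident match
print(match("stranger.jpg"))  # dissimilar vector, so no identification
```

Run it, and the known photo clears the similarity threshold while the stranger’s does not. In a real system, the hard part is not the comparison but the embedding network; that is where the decades of difficulty lived.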
Hill’s Oppenheimer is Hoan Ton-That, a Vietnamese Australian who got his start making Facebook quiz apps (“Have you ever … ?” “Would you rather … ?”) along with an “invasive, potentially illegal” viral phishing scam called ViddyHo. When ViddyHo got him ostracized from Silicon Valley, Ton-That reached out to a man named Charles Johnson, an alt-right gadfly whose websites served empirically dubious hot takes in the mid-2010s: Barack Obama is gay, Michael Brown provoked his own murder, and so on. Rejected by the liberal corporate circles in which he once coveted membership, Ton-That made a radical rightward shift.
The story of Ton-That and Johnson follows a familiar male-friendship arc. By the end, they will be archrivals: Ton-That will cut Johnson out of their business, and Johnson will become an on-the-record source for Hill. But at first, they’re friends and business partners: They agree that it would be awesome if they built a piece of software that could, for example, screen known left-wingers to keep them out of political conventions—that is, a face-matching facial-recognition program.
To build one, they first needed to master neural-net AI. Amazingly, neural-net code and instructions were available for free online. The reason for this goes back to a major schism in AI research: For a long time, the neural-net method, whereby the computer teaches itself, was dismissed as a dead end, whereas the “symbolic” method, whereby humans teach the computer step by step, was embraced. Finding themselves cast out, neural-net engineers posted their ideas on the internet, waiting for the day when computers would become powerful enough to prove them right. This explains why Ton-That was able to access neural-net code so easily. In 2016, he hired engineers to help him refashion it for his purposes. “It’s going to sound like I googled ‘Flying car’ and then found instructions on it,” he worries to Hill (she managed to get Ton-That to speak to her on the record for the book).
But even with a functioning neural net, there was still the issue of matching. Starting with Venmo—which had the weakest protections for profile pictures—Ton-That gobbled up photos from social-media sites. Soon he had a working prototype; $200,000 from the venture capitalist Peter Thiel, to whom Johnson had introduced him; meetings with other VCs; and, ultimately, a multibillion-picture database. Brilliantly, Ton-That made sure to scrape Crunchbase, a database of important players in venture capital, so that Clearview would always work properly on the faces of potential investors. There are no clear nationwide privacy laws about who can use facial recognition and how (though a handful of states have limited the practice). Contracts with police departments followed.
Proponents of FRT have always touted its military and law-enforcement applications. Clearview, for instance, reportedly helped rescue a child victim of sexual abuse by identifying their abuser in the grainy background of an Instagram photo, which led police to his location. But publicizing such morally black-and-white stories has an obvious rhetorical advantage. As one NYPD officer tells Hill, “With child exploitation or kidnapping, how do you tell someone that we have a good picture of this guy and we have a system that could identify them, but due to potential bad publicity, we’re not going to use it to find this guy?”
One possible counterargument is that facial-recognition technology is not just a really good search engine for pictures. It’s a radical reimagining of the public sphere. If widely adopted, it will further close the gap between our lives in physical reality and our digital lives. This is an ironic foreclosure of one of the core promises of the early days of the internet: the freedom to wander without being watched, the chance to try on multiple identities, and so on. Facial recognition could bind us to our digital history in an inescapable way, spelling the end of what was previously a taken-for-granted human experience: being in public anonymously.
Most people probably don’t want that to happen. Personally, if I could choose to opt out of having my image in an FRT database, I would do so emphatically. But opting out is tricky. Despite my well-reasoned fears about the surveillance state, I am basically your average dummy when it comes to sharing my life with tech firms. This summer, before my son was born, it suddenly felt very urgent to learn exactly what percentage Ashkenazi Jewish he would be, so I gave my DNA to 23andMe, along with my real name and address (I myself am 99.9 percent Ashkenazi, it turned out). This is just one example of how I browse the internet like a sheep to the slaughter. A hundred times a day, I unlock my iPhone with my face. My image and name are associated with my X (formerly Twitter), Uber, Lyft, and Venmo accounts. Google stores my personal and professional correspondence. If we are hurtling toward a future in which a robot dog can accost me on the street and instantly connect my face to my family tree, credit score, and online friends, consider me horrified, but I can’t exactly claim to be shocked: I’ve already provided the raw material for this nightmare scenario in exchange for my precious consumer conveniences.
In her 2011 book, Our Biometric Future, the scholar Kelly Gates noted the nonconsensual aspect of facial-recognition technology. Even if you don’t like your fingerprints being taken, you know when it’s happening, whereas cameras can shoot you secretly at a sporting event or on a street corner. This could make facial recognition more ethically problematic than other forms of biometric-data gathering. What Gates could not have anticipated was the way social media would further muddle the issue, because consent now happens in stages: We give the images to Instagram and TikTok, assuming that they won’t be used by the FBI but not really knowing whether they could be, and in the meantime enjoy handy features, such as Apple Photos’ sorting of pictures by which friends appear in them. Softer applications of the technology are already prevalent in everyday ways, whether Clearview is in the picture or not.
After Hill exposed the company, it decided to embrace the publicity, inviting her to view product demos, then posting her articles on the “Media” section of its website. This demonstrates Clearview’s cocky certainty that privacy objections can ultimately be overridden. History suggests that such confidence may not be misplaced. In the late 1910s, when passport photos were introduced, many Americans bristled, because the process reminded them of getting a mugshot taken. Today, nobody would think twice about going to the post office for a passport photo. Though Hill’s reporting led to an ACLU lawsuit that prevented Clearview from selling its tech to private corporations and individuals, the company claims to have thousands of contracts with law-enforcement agencies, including the FBI, which will allow it to keep the lights on while it figures out the next move.
Major Silicon Valley firms have been slow to deploy facial recognition commercially. The limit is not technology; if Ton-That could build Clearview literally by Googling, you can be sure that Google can build a better product. The legacy firms claim that they’re restrained, instead, by their ethical principles. Google says that it decided not to make general-purpose FRT available to the public because the company wanted to work out the “policy and technical issues at stake.” Amazon, Facebook, and IBM have issued vague statements saying that they have backed away from FRT research because of concerns about privacy, misuse, and even racial bias, as FRT may be less accurate on darker-skinned faces than on lighter-skinned ones. (I have a cynical suspicion that the firms’ concern regarding racial bias will turn out to be a tactic. As soon as the racial-bias problem is solved by training neural nets on more Black and brown faces, the expansion of the surveillance dragnet will be framed as a victory for civil rights.)
Now that Clearview is openly retailing FRT to police departments, we’ll see whether the legacy companies hold so ardently to their scruples. With an early entrant taking all the media heat and absorbing all the lawsuits, they may decide that the time is right to enter the race. If they do, the next generation of facial-recognition technology will improve upon the first; the ocean of images only gets deeper. As one detective tells Hill, “This generation posts everything. It’s great for police work.”