https://www.washingtonpost.com/outlook/2021/01/14/trump-twitter-ban-big-tech-monopoly-private/
After the insurrection on Jan. 6, Facebook and Twitter suspended President Trump’s accounts. The emergency context and the immediate threat Trump posed justified the bans. But the bans were also a remarkable demonstration of private power over the public sphere, and they represent a threat to democracy in their own right: top-down, private control of speech in the modern public square.
Complaints by some conservatives that Trump had been “censored” by the tech platforms were greeted in many quarters with derision: The First Amendment, after all, does not typically apply to the decisions of private corporations. But that’s an artificially narrow view of the question. These companies, at the heart of our communications infrastructure, play an undeniable public role. And it was not only conservatives who expressed concern: The ACLU pointed out that “unchecked” private power was dangerous in this context, and German Chancellor Angela Merkel objected to the ban. Both suggested that a democracy should not be in a position where the decisions of a few unaccountable executives can restructure speech in politics. Moreover, the decisions cannot be viewed in isolation: These companies played an active role in getting us to where we are today by helping to promote divisiveness, racial hatred and conspiracy theories.
In short, we face overlapping democratic emergencies: the need to address the coup attempt and ongoing threats of violence, and the need to address the role played by big social media companies in our democracy — which includes enabling hyperpolarized political views, white nationalism, and general distrust of institutions and other Americans. The issue is one of extremely concentrated corporate power, and although the platforms are “private,” doing nothing about them would be the unusual course. We have a long tradition in the United States of regulating companies that dominate whole sectors of the economy, particularly in areas that profoundly affect the public sphere.
How will President-elect Joe Biden handle big tech? There are two key questions facing the new administration and Congress: Will they act to check the power of these companies, possibly by breaking them up? And will they regulate the business model of social media companies — regardless of whether they are partly dismantled — so that they do not promote extremism that can contribute to the kind of violence that took place at the Capitol?
The case for getting involved in the decisions of private companies like Facebook, Google, Apple, Twitter and Amazon is that they are no longer part of a market, in any true sense. They sit astride the market, dictating its terms. (Amazon founder Jeff Bezos owns The Washington Post.) By allowing Facebook and Google to gobble up so many different companies, including potential rivals like Instagram, policymakers enabled social media monopolies. These corporations decide how Americans communicate with one another; they hold enormous sway over the media; and innumerable smaller businesses depend on their services. In effect, we gave these giants the power of governing.
Breaking these companies up would offer one check on their power. Forty-eight state and territorial attorneys general and the Federal Trade Commission have brought lawsuits seeking to slice up Facebook, and 38 attorneys general have filed a lawsuit to break up Google. Congress could heed the call by Sen. Elizabeth Warren (D-Mass.) to pass laws to dismantle big tech in a more comprehensive way, and Biden’s FTC might also use rulemaking power that would lead to break-ups. Cutting the giants down to size and prohibiting future mergers would force them to compete on quality of service and data privacy, and give publishers and activists choices about where to speak, connect and organize.
But even this wouldn’t solve the problem of business models that steer people to ever-more-radical content. Targeted advertising and algorithmic amplification of provocative posts, videos and groups — the heart of how the companies make money — pose their own problems for democracy. After all, there is little doubt that Facebook and YouTube will continue to be central to our communications ecosystem, even if Facebook is severed from WhatsApp and Instagram, and YouTube from Google. Newspapers, small businesses, podcasts, activists, nonprofit groups and politicians will still need them to reach the public; as a public policy matter, the country will probably decide that some companies should remain fairly large, for the benefit of users. Therefore, the business model those companies have embraced demands scrutiny and — if it is deemed detrimental to democracy — regulation.
To maximize time on the sites, the companies have made a science of feeding people content that keeps them clicking, based on their past behavior and studies of collective tendencies. Several democratic harms result from this model, including the promotion of conspiracy theories, white-nationalist rhetoric and other extremist material. Two-thirds of people who joined extremist groups on Facebook, for instance, did so because they were referred to the groups by Facebook, according to the company’s data.
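To make that dynamic concrete, here is a minimal, hypothetical sketch of the kind of engagement-maximizing ranking the article describes. No platform’s actual ranking code or scoring formula is public, so every name, field and weight below is an illustrative assumption, not any company’s system.

```python
# Toy sketch of engagement-maximizing ranking -- purely illustrative.
# Real platform rankers are proprietary; all names, fields and weights
# here are hypothetical assumptions, not any company's code.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    provocation: float  # 0.0-1.0: how inflammatory the content is

def predicted_clicks(user_history: dict, post: Post) -> float:
    """Score a post by how likely this user is to engage, given past behavior."""
    topic_affinity = user_history.get(post.topic, 0.0)  # prior clicks on this topic
    # Provocative material reliably draws clicks, so the objective rewards it;
    # nothing in this score penalizes extremity -- the problem the article names.
    return 0.6 * topic_affinity + 0.4 * post.provocation

def rank_feed(user_history: dict, candidates: list) -> list:
    # The feed is simply the candidate posts sorted by predicted engagement.
    return sorted(candidates, key=lambda p: predicted_clicks(user_history, p),
                  reverse=True)
```

An objective like this, optimized purely for clicks, will keep surfacing ever-more-provocative material to a user whose history rewards it — the feedback loop the paragraph describes.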
YouTube and Facebook both have extensive terms of service forbidding a wide range of behavior — Facebook mentions “harmful conduct towards others” — and both have taken down sites maintained by extremist groups. But individual takedowns do not solve the basic business model problem. The algorithms work at cross-purposes with the terms of service. They will tell a YouTube user with a conspiracy-minded bent, for example, that “you might like” a video of someone whose views match those of conspiracy theorist Alex Jones, even after YouTube has taken down Alex Jones.
What’s more, content bans and takedowns tend to be arbitrary. Unlike the government, which is constrained in what speech it can restrict — it cannot silence merely ugly views — the platforms can change their minds on a dime, as Twitter and Facebook did with Trump. Easily amended terms of service, plus private power, are no substitute for the First Amendment. (If a First Amendment standard were imposed on the companies, they could still ban illegal speech, including libel, consumer fraud and incitements to imminent violence.) Finally, of course, the degree of surveillance of citizens on the sites is anathema to widely accepted values of privacy.
There is precedent for significant regulation. In 1792, Congress passed the Postal Service Act to make sure that mail carriers did not discriminate among different kinds of content. In 1866, Congress passed the Telegraph Act, blocking private monopoly control over electronic communications. The New Deal’s Communications Act was modeled on the Interstate Commerce Act, established in 1887 to regulate railroads, because President Franklin Roosevelt believed that communication infrastructure, too, needed federal regulation to ensure openness. The Federal Communications Commission’s 2015 Open Internet Order set out the doctrine known as “net neutrality,” which forbade the companies that controlled the infrastructure of the Internet to give preference to one company over another (in matters like online speed).
In that same spirit, Congress and the FTC can regulate the business model of our dominant communications channels. Congress, for example, could stipulate that social media companies over a certain size cannot raise revenue through selling targeted ads. (Instead, they could adapt the models of other communications companies, such as user fees.) New standards of neutrality could be imposed: On Facebook and other sites, users might navigate to the content they wished to see, rather than having posts fed to them; they would have far more control over what they saw.
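As a thought experiment, the “neutral” delivery this paragraph imagines could be as simple as a reverse-chronological feed of sources the user explicitly chose. The sketch below is hypothetical — a minimal illustration of the design, not a description of any existing or proposed system.

```python
# Hypothetical sketch of a "neutral" feed: only sources the user explicitly
# subscribed to, newest first, with no engagement scoring or amplification.
from dataclasses import dataclass

@dataclass
class Item:
    source: str       # publisher, friend, or group the user follows
    timestamp: float  # Unix epoch seconds
    body: str

def neutral_feed(subscriptions: set, candidates: list) -> list:
    """Reverse-chronological feed limited to the user's chosen sources."""
    chosen = [item for item in candidates if item.source in subscriptions]
    return sorted(chosen, key=lambda item: item.timestamp, reverse=True)
```

The design choice is the point: with no predicted-engagement score anywhere in the pipeline, there is nothing for provocative content to game.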
In its recent major report on big tech, the House Judiciary Committee’s antitrust subcommittee pointed in this direction, suggesting that if some aspects of the big tech companies were allowed to maintain monopoly power, they should be subject to what the report called “nondiscrimination principles.” Nondiscrimination, the report noted, “has been a mainstay principle for governing network intermediaries, especially those that play essential roles in facilitating . . . communications.” (I testified before the subcommittee as it was studying these issues.) On one level, this means that Google couldn’t promote its own services above those of its competitors (like Yelp). Interpreted more broadly, this would hamper sites that wanted to pick and prioritize content in general, whether that means promoting posts by Alex Jones knockoffs, “Stop the Steal” manifestos or even the work of fashion designers. The sites would become true neutral platforms, not curators.
The idea that the government must leave “private” companies untouched — regardless of the consequences — is a red herring. Corporate charters are granted by the state, and states and the federal government have an array of laws outlawing particular business practices in different industries: pyramid schemes, unfair contracts, gambling and many more. It is possible that, had the government begun regulating these platforms years ago, many of the people who stormed the Capitol might not have been radicalized, and Trump’s conspiracy theories about the election would not have taken such deep root.
There are many reasons to rein in big tech, from its recklessness regarding privacy to the quashing of smaller enterprises. But improving the national conversation — or at least reducing rabid polarization and curbing the spread of violent ideologies — remains a central rationale. The tragedy at the Capitol made that clearer than ever.
Zephyr Teachout, a professor of law at Fordham University, is the author of “Break ’Em Up: Recovering Our Freedom From Big Ag, Big Tech, and Big Money.”