Inside Two Years of Turmoil at Big Tech's Anti-Terrorism Group


By Paresh Dave

Sep 30, 2024 5:00 AM

X has left the board of GIFCT, an organization through which tech companies exchange information to keep violent content off the web. It's the latest in a series of episodes driving tension within the ranks.

[Image: collage of the web symbol, the X and TikTok logos, a world map, and an eye looking at a screen. Photo-illustration: Jacqui VanLiew; Getty Images]

Vice presidents from Meta, YouTube, Twitter, and Microsoft gathered over Zoom in March 2023 to discuss whether to allow TikTok, one of their companies’ most fearsome competitors, into their exclusive club.

The four executives made up the board of the Global Internet Forum to Counter Terrorism (GIFCT), where companies share tips intended to prevent their platforms from becoming hotbeds for terrorists, and they knew that TikTok needed help keeping extremist propaganda off its platform. TikTok had passed a training program they required and had addressed their questions about its ties to China. But people briefed on the discussions say the board still worried about the possibility of TikTok abusing its membership in some way that benefited the Chinese government and undermined free expression. On top of that, US lawmakers were at the time considering a ban of the app, and more content moderation mishaps would likely have added to the heat on TikTok. The board ultimately didn’t approve TikTok.

A WIRED investigation into GIFCT reveals that TikTok’s bid to join the consortium failed because two of the four executives on the board abstained from voting on its application. A week later, on the fourth anniversary of a deadly terrorist attack in Christchurch, New Zealand, researchers blasted TikTok for hosting footage celebrating the rampage. These were the very videos that would have been easily flagged and removed had TikTok’s rivals granted it access to their group’s threat-spotting technology.

Around the same time, the board members declined to admit the parent company of PornHub, citing concerns over whether its content policies met the bar for membership. By comparison, the board last year quickly approved the unproblematic French social app Yubo. GIFCT’s bespoke advice enabled the startup to identify 50 suspicious accounts that it reported to law enforcement, according to Marc-Antoine Durand, Yubo’s chief operating officer.

More recently, despite having the authority to do so, Meta, Microsoft, and YouTube declined to expel Twitter (now X) from the board even as the platform’s relaxed content moderation practices under Elon Musk threatened reputational harm to GIFCT and its other member companies. This month, X quietly left the board voluntarily.

These secretive membership decisions, revealed for the first time by WIRED, show how Microsoft, Meta, YouTube, and X are gatekeeping access to anti-terrorism guidance and influencing the content users encounter across the web. Our investigation also uncovers contentious fundraising choices by the four companies and the consequences of a lack of quality control checks in their heralded system for flagging violent extremism.

To understand the consortium’s inner workings, WIRED reviewed records of GIFCT’s internal deliberations and spoke with 26 people directly connected to the organization, including leadership, staff, advisers, and partners. Several of the people believe Meta, Microsoft, YouTube, and X have steered the consortium in a way that has undermined its potential. “The result is almost certainly more users are being radicalized,” claims one of the sources, who has been in consistent contact with GIFCT since its inception. Many of the people sought anonymity because their employers or GIFCT hadn’t authorized speaking with WIRED.

The four tech giants have presided over the consortium since they announced it in 2016, when Western governments were berating them for allowing Islamic State to post gruesome videos of journalists and humanitarians being beheaded. Now with a staff of eight, GIFCT—which the board organized as a US nonprofit in 2019 after the Christchurch massacre—is one of the groups through which tech competitors are meant to work together to address discrete online harms, including child abuse and the illicit trade of intimate images.

The efforts have helped bring down some unwelcome content, and pointing to the work can help companies stave off onerous regulations. But the politics involved in managing the consortia generally stay secret.

Just eight of GIFCT’s 25 member companies answered WIRED’s requests for comment. The respondents, which included Meta, Microsoft, and YouTube, all say they are proud to be part of what they view as a valuable group. The consortium’s executive director, Naureen Chowdhury Fink, didn’t dispute WIRED’s reporting. She says TikTok remains in the process of attaining membership.

GIFCT has relied on voluntary contributions from its members to fund the roughly $4 million it spends annually, which covers salaries, research, and travel. From 2020 through 2022, Microsoft, Google, and Meta each donated at least $4 million, and Twitter gave $600,000, according to the available public filings. Some other companies contributed tens of thousands or hundreds of thousands of dollars, but most paid nothing.

By last year, at least two board members were enraged at companies they perceived as freeloaders, and fears spread among the nonprofit’s staff over whether their jobs were in jeopardy. It didn’t help that as Musk turned Twitter into X about a year ago, he kept slashing costs, including suspending the company’s voluntary payments to GIFCT, according to two people with direct knowledge.


To diversify funding, the board has signed off on soliciting foundations and even exploring government grants for non-core projects. “We'd really have to carefully consider if it makes sense,” Chowdhury Fink says. “But sometimes working with multiple stakeholders is helpful.”

Rights activists the group privately consulted questioned whether this would amount to subsidizing tech giants and could siphon resources from potentially more potent anti-extremism projects. Records nonetheless show staff considered seeking a grant worth tens of thousands of dollars or more from the pro-Israel philanthropy Newton and Rochelle Becker Charitable Trust. Chowdhury Fink says GIFCT didn’t end up applying.

This year, Meta, YouTube, Microsoft, and X amended GIFCT’s bylaws to require minimum annual contributions from every member starting in 2025, though Chowdhury Fink says exemptions are possible.

Paying members will be able to vote for two board seats, she says. Eligibility for the board is contingent on making a more sizable donation. X had signaled it wouldn’t pay up and would therefore forfeit its seat, two sources say—a development that ended up happening this month. It had been scheduled to hold tiebreaking power among the four-company board in 2025. (Under the bylaws, Meta, YouTube, and Microsoft could have ejected Twitter from the board as soon as Musk acquired the company. But they chose not to exercise the power.)

Many of the people close to GIFCT who spoke with WIRED contend that Meta, YouTube, and Microsoft must banish X altogether because it has allegedly been allowing extremists to openly engage in illicit behavior on its platform, such as reportedly selling weapons. The Times last month reported on calls to oust X.

The public copy of GIFCT’s “tech solutions” code of conduct is largely redacted for “operational security,” but it does state that a company can be banned for “sustained inappropriate behavior.” By the consortium’s own telling, “membership should be recognized and appreciated as a strong indication of good stewardship for the internet and its users.” X reported this month that it suspended over 57,000 accounts in the first half of this year for violating its violent and hateful entities policy. It didn’t respond to requests for comment.
 

The four companies in charge of GIFCT also have had a crucial and little-examined role in crisis response, one of the organization’s most visible functions. The system rests on hashing: computer scripts convert image, video, and text files into short, distinct codes, and members upload the hashes of problematic content they spot to a database operated by GIFCT staff and hosted on Meta servers. Members can then compare the millions of hashes in the database against hashes of content on their own services, with reasonable confidence that matches point to posts they should review for removal.
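To make the mechanism concrete, here is a minimal sketch of that hash-and-match flow, written in Python. It is purely illustrative: the byte strings and in-memory set are stand-ins for real media and for GIFCT’s database, and it uses a plain cryptographic hash (SHA-256) for simplicity, whereas production systems typically favor perceptual hashes so that re-encoded or lightly edited copies still match.

    import hashlib

    def hash_content(data: bytes) -> str:
        # Convert a media payload into a short, distinct code.
        # SHA-256 stands in here; it matches only byte-identical files.
        return hashlib.sha256(data).hexdigest()

    # A member flags a piece of content and shares only its hash
    # (roughly 64 characters), never the media file itself.
    shared_database: set[str] = set()             # stand-in for the database
    flagged_clip = b"<bytes of a flagged video>"  # hypothetical media bytes
    shared_database.add(hash_content(flagged_clip))

    # Another member compares hashes of its own content against the database.
    # A match is a signal to review, not proof the post must come down.
    user_upload = b"<bytes of a flagged video>"   # an identical reposted copy
    if hash_content(user_upload) in shared_database:
        print("Hash match: queue for review and possible removal")

Exchanging only these short codes, rather than the files themselves, underpins the efficiency and privacy case the companies make for the system.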

After a gunman streaming live on Twitch shot 10 people to death at a Buffalo, New York, supermarket in early 2022, a worker at a member company alerted a Slack community that GIFCT hosts for sharing tips. Once GIFCT staff saw the message, they began reaching out to YouTube, Meta, Twitter, and Microsoft. At least three of the board companies had to sign off on issuing the group’s most serious threat alarm, which they eventually did, for just the third time, about two hours after the shooting.

In the nearly 26 hours after the board sounded the alarm for Buffalo, members uploaded hashes of about 870 problematic videos and images of the shooting. Later, GIFCT hired a freelance researcher, who provided thousands of hashes covering parts of the shooter’s manifesto that people were likely to glowingly repost.

Companies contend hash sharing has brought efficiency to the complex field of moderation. They avoid the expense and potential user privacy violations of exchanging giant media files, while the collaboration arguably slows the spread of content before it becomes impossible to contain.

But under the leadership of YouTube, Meta, Twitter, and Microsoft, GIFCT has imposed little transparency and introduced few safeguards to avoid companies erroneously erasing nonviolent content. None of its 25 members discloses the amount of content removed as a result of hash matches, let alone how many of those takedowns are appealed by their users. Only YouTube has reported its contributions to the database—about 45,000 hashes last year. The group declined to say how many are added by its own staff or as a result of tips from researchers or governments, and to what extent it spot-checks hashes from outsiders.

Even which companies currently use the database isn’t public; as of last year, just 13 members had access. The group doesn’t know how many of them manually review content—as opposed to relying on automation—before sharing hashes or acting on matches. “That's the kind of thing that I'm trying to explore a little bit more,” says Skip Gilmour, GIFCT’s recently hired director of technology policy and solutions.

The tech consortium insists that innocuous content rarely ends up in the database. But the board hasn’t allowed outside auditing or ordered comprehensive internal reviews despite confirmed lapses.

In 2022, one social media company noticed that two hashes from the database matched thousands of copies of a music video on its service. The song in question? WIRED learned it was “Never Gonna Give You Up,” the Rick Astley pop tune turned prank meme about love and affection. GIFCT previously disclosed the gaffe without naming the song. It hasn’t shared why another company submitted the song in the first place. Erin Saltman, GIFCT’s membership and programs director, says “testing” and “sample hashes” are sometimes to blame.

A small audit by several members in 2022 also flagged invalid submissions, the number of which the tech consortium never disclosed. But removing them, even as new material was being added, cut the database to 2.1 million hashes that October from 2.3 million about 18 months earlier.

This year, Australia’s eSafety Commissioner—whose counterparts in the Home Affairs Department are formal advisers to GIFCT—demanded Google, Meta, and X turn over details about their counterterrorism work. The findings are expected to be published eventually. “We do not know the answer to a number of fundamental questions about the systems, processes, and resources that these tech behemoths have in place,” the eSafety office wrote.

Some critics of how Meta, YouTube, Microsoft, and X have run the counterterrorism forum say the companies already have a playbook in front of them to protect legitimate speech and better curb violence-inciting hate. GIFCT commissioned a human rights impact assessment from the consulting firm BSR that was published in 2021 and recommended 47 changes—many of which the board has yet to carry out.

Without naming TikTok or anyone specifically, the assessors suggested approving the membership of high-risk, non-US companies “with appropriate measures” to minimize human rights harms, because it would leave the GIFCT “better positioned” to fight extremism. They recommended banning members from contributing hashes without human review. And they described a board with just the four founding members as “not a sustainable model,” proposing that activists and academics take half the seats.

GIFCT also has an independent advisory committee that formed in 2020, but Meta, YouTube, Microsoft, and X haven’t always enacted its suggestions. GIFCT’s executive director at the time billed the advisers as the consortium’s conscience. But some of the nearly two dozen professors, rights activists, and governments that compose the panel have felt ignored, with a couple “quiet quitting” this year. “We have no real power,” says Courtney Radsch, a freedom of expression advocate on the advisory panel. “[GIFCT] is yet another example of how tech self-regulation is ineffective and insufficient. To address the real abuse happening on these platforms, we need to have a more meaningful governance structure with accountability and transparency.”

Current executive director Chowdhury Fink says she appreciates the advisers’ constructive criticism. Privately, GIFCT staff have logged discussions between advisers and Meta, YouTube, Microsoft, and X as “tense,” records show. (The rights assessment recommended that GIFCT publish minutes of board and advisory panel meetings. That hasn’t happened.)

A key contention has been whether GIFCT is falling short of the “global” in its name. Staff have helped companies respond to terror attacks in about 60 countries. But some advisers want it to recruit more companies with influence outside the US. They have urged greater emphasis on tools to suppress white supremacy and far-right gangs; increased attention on studying violence in Africa (where the sub-Saharan region is regarded as the new “epicenter” of terrorism) and in Asia; and a reset from what they view as a disproportionate focus on stemming Islamist extremism.

“Less Euro-North American–centric and really more worldwide-centric,” Ghayda Hassan, a clinical psychologist at the University of Quebec in Montreal who chairs the advisory panel, said on stage at the consortium’s annual summit in Singapore this year.

Hassan tells WIRED the advisers recently wrote to the board with concerns about its leadership. She believes adding new board members will be vital. “The board has to be more diverse and inclusive, just like GIFCT overall needs to diversify.”

Some GIFCT advisers and staff have protested the board allowing the consortium to facilitate what it called “timely and effective action to remove terrorist and violent extremist content” tied to Israel’s ongoing war with Hamas. The group has avoided becoming a conduit for content takedowns during other wars, and several employees considered the involvement in the Gaza crisis as siding with Israel, two people with direct knowledge of the concerns say. The UN, whose definitions prevail inside GIFCT, lists neither Hamas nor Israel as a terrorist group. GIFCT director Saltman says she’s been gathering advice on handling “protracted war.”

Nearly everyone WIRED spoke to, including critics of GIFCT, believes the world would be worse off without some sort of coordination. The alternative would be individual companies struggling to close interconnected gaps and governments imposing stricter censorship. “Global chaos,” says Farzaneh Badiei, who runs the internet governance consultancy Digital Medusa and coauthored GIFCT research.

Services cast off by the GIFCT board, including TikTok and PornHub, have found help from Tech Against Terrorism, an initiative funded by the governments of Canada, the UK, and France, among others, as well as by tech companies. It automatically alerts 135 companies to extremist content on their services and will soon launch a certification program and its own database for sharing hashes of images.

GIFCT had been paying Tech Against Terrorism hundreds of thousands of dollars annually to evaluate and train potential GIFCT members. But increasing clashes between the two organizations over their overlapping goals and identities frayed the relationship. Microsoft, YouTube, Meta, and X decided to end the contract with Tech Against Terrorism and consolidate control over the process: GIFCT staff will handle the training starting next year.

Additional reporting by Vittoria Elliott and David Gilbert.
 