Key misinformation “superspreaders” on Twitter: Older women



Some of our fellow citizens seem to voluntarily do the work of spreading fake news.

JOHN TIMMER - 5/30/2024, 4:28 PM

[Image: An older woman holding a coffee mug and staring at a laptop on her lap. Credit: Alistair Berg]

Misinformation is not a new problem, but there are plenty of indications that the advent of social media has made things worse. Academic researchers have responded by trying to understand the scope of the problem, identifying the most misinformation-filled social media networks, cataloging organized government efforts to spread false information, and even pinpointing prominent individuals who are sources of misinformation.

All of that's potentially valuable data. But it skips over another major contribution: average individuals who, for one reason or another, seem inspired to spread misinformation. A study released today looks at a large panel of Twitter accounts associated with US-based voters (the work was done back when X was still Twitter). It identifies a small group of misinformation superspreaders: accounts representing just 0.3 percent of the panel that were responsible for sharing 80 percent of the links to fake news sites.

While you might expect these to be young, Internet-savvy individuals who automate their sharing, it turns out this population tends to be older, female, and very, very prone to clicking the "retweet" button.

Finding superspreaders

The work, done by Sahar Baribi-Bartov, Briony Swire-Thompson, and Nir Grinberg, relies on a panel of over 650,000 Twitter accounts that have been associated with voting registrations in the US, using full names and location information. Those voting records, in turn, provide information about the individuals, as well as location information that can be associated with the average demographics of that voting district. All of these users were active on the platform in the lead-up to the 2020 elections, although the study stopped before the post-election surge in misinformation.

The researchers first identified tweets from these users that contained political content, using a machine-learning classifier whose calls had previously been validated against human judgments. They focused on tweets that contained links to news sites, then checked those links against a list of "news" websites known to disseminate election misinformation.
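The domain-matching step can be sketched in a few lines. This is a hypothetical illustration, not the study's actual pipeline; the domain names and the `is_misinfo_link` helper are invented for the example, and the real work used an externally curated list of misinformation sites.

```python
from urllib.parse import urlparse

# Hypothetical stand-in for the curated list of sites known to
# publish election misinformation.
MISINFO_DOMAINS = {"fakenews.example", "hoaxdaily.example"}

def is_misinfo_link(url: str) -> bool:
    """Flag a shared URL whose domain is on the misinformation list."""
    host = urlparse(url).netloc.lower()
    # Strip a leading "www." so www.fakenews.example still matches.
    return host.removeprefix("www.") in MISINFO_DOMAINS

print(is_misinfo_link("https://www.fakenews.example/story1"))  # True
print(is_misinfo_link("https://reliable.example/news"))        # False
```

Note that this classifies the *source*, not the story: as the caveats below point out, an accurate article hosted on a flagged site would still be counted.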

This approach has a couple of caveats. The researchers can't confirm whether the voter in question had full control (or any control) over their account during the election season. And the accuracy of the individual stories behind the links being shared wasn't tested. So, while these sites may have been consistent sources of misinformation, there's still a chance they published some accurate articles that were shared. Still, due to the size of the population being checked, and the corresponding number of tweets, these aren't likely to be major considerations.

From this population, Baribi-Bartov, Swire-Thompson, and Grinberg identify just 2,107 accounts that are responsible for 80 percent of the tweets linking to sources of misinformation. They refer to these as misinformation supersharers (we'll use supersharers and superspreaders interchangeably). For the analyses they perform, the supersharers are compared to a random sample of the total population and the heaviest sharers of links to reliable news sources.
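One common way to define such a group, sketched below, is to rank accounts by how many misinformation links they shared and take the smallest prefix that covers 80 percent of the total. This is an assumption for illustration; the paper's exact selection procedure may differ, and `find_supersharers` and the toy counts are invented here.

```python
def find_supersharers(link_counts: dict, coverage: float = 0.8) -> list:
    """Return the smallest set of users who together account for at least
    `coverage` of all misinformation links shared (hypothetical helper)."""
    ranked = sorted(link_counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(link_counts.values())
    chosen, running = [], 0
    for user, count in ranked:
        if running >= coverage * total:
            break
        chosen.append(user)
        running += count
    return chosen

# Toy data: one prolific account dwarfs the rest.
counts = {"a": 80, "b": 10, "c": 5, "d": 3, "e": 2}
print(find_supersharers(counts))  # ['a'] — 80 of 100 links
```

Applied to the real panel, this kind of cutoff is what yields a tiny head of the distribution (2,107 accounts) covering the bulk of the misinformation links.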

Send out the fakes

On an average day on Twitter at the time, only 7 percent of the news stories shared linked to sites prone to publishing misinformation. Supersharers ended up accounting for most of these for two reasons. One is that they shared more news links than anyone else, an average of 16 a day compared to fewer than one for the random sample (the heavy news sharers were in between the two, at five news links a day).

But they were also more heavily invested in fake news sources, which accounted for 18 percent of their links. That's in contrast to 2 percent for heavy news sharers and 3 percent for the random sampling. So, the superspreaders reached their position through a combination of volume and lack of discrimination.
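The volume-times-rate arithmetic behind that conclusion is easy to make explicit. This is a back-of-the-envelope illustration using the averages quoted above, not a computation from the paper, and the `fake_links_per_day` helper is invented here.

```python
def fake_links_per_day(links_per_day: float, fake_fraction: float) -> float:
    """Expected number of fake-news links a user shares per day."""
    return links_per_day * fake_fraction

supersharer = fake_links_per_day(16, 0.18)  # 16 links/day, 18% from fake news sites
heavy_news = fake_links_per_day(5, 0.02)    # 5 links/day, 2% fake
random_user = fake_links_per_day(1, 0.03)   # <1 link/day (upper bound), 3% fake

print(f"supersharer: {supersharer:.2f} fake links/day")          # 2.88
print(f"heavy news sharer: {heavy_news:.2f} fake links/day")     # 0.10
print(f"random user (at most): {random_user:.2f} fake links/day")# 0.03
```

Under these rough numbers, a single supersharer puts out nearly a hundred times as many misinformation links per day as a typical account, which is how 0.3 percent of users can dominate 80 percent of the sharing.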

The researchers wanted to know whether these individuals were simply "shouting into a void," or if their tweets were in a position to influence others. They found that over 5 percent of the panel's accounts followed at least one superspreader, and that superspreaders' tweets received more replies, retweets, and likes than tweets from the rest of the population. The researchers estimate that superspreaders account for roughly a quarter of the misinformation links their typical followers were exposed to; for over 10 percent of their followers, they were the only source of fake news.

So, it's clear that a small subpopulation is preferentially tweeting links to sources of misinformation, and for many users, they're the most significant source of exposure to these sites. So who are these people?

They're a bit more likely to be female. While both the comparison groups were roughly evenly split between male and female, the superspreaders were 60 percent female. They're also older, on average 58 years old, nearly 20 years older than the sample as a whole. And, while misinformation about the election largely circulated within Republican circles, only 64 percent of the superspreaders were registered Republicans (nearly 20 percent were registered Democrats).

Wondering if these might be your neighbors? If you live in Arizona, Florida, or Texas, you're more likely to be right.

Real live humans

The superspreaders really are likely to be somebody's neighbors, since the researchers find little evidence of bot-like behavior or even the use of automated tools. Software used to identify bots doesn't find anything unusual about most of them, and their activity was difficult to distinguish from the rest of the population's. They weren't posting at unusual times, and the gap between posts tended to be similar to that of other users. Bursts of posting that indicate a single session of Twitter use were also similar to those of the rest of the panel.

The main difference the researchers detected was that the superspreaders' activity was dominated by retweets, which made up three-quarters of their content.

For anyone concerned about the spread of misinformation, it's probably helpful to know that it's not all down to campaigns run by hostile foreign governments; our own citizens can be major contributors to the problem. But it's not clear what we can actually do with the information, given that most social media platforms have been largely indifferent to whether they're a source of misinformation on many topics—the researchers caution that their results probably no longer apply to the service that Twitter has now become.

So, while this data may validate your feelings about one of your older, crankier aunts, it's not likely to result in any organized effort to combat the spread of misinformation.

Science, 2024. DOI: 10.1126/science.adl4435
 