The images of Spain’s floods weren’t created by AI. The trouble is, people think they were



John Naughton

The rapid growth of ‘AI slop’ – content created by artificial tools – is starting to warp our perception of what is, or could be, real

Sat 9 Nov 2024 11.00 EST


Cars piled up in the street after the ‘rain bomb’ that hit Valencia at the end of last month. Photograph: David Ramos/Getty Images

My eye was caught by a striking photograph in the most recent edition of Charles Arthur’s Substack newsletter Social Warming. It shows a narrow street in the aftermath of the “rain bomb” that devastated the region of Valencia in Spain. A year’s worth of rain fell in a single day, and in some towns more than 490 litres a square metre fell in eight hours. Water is very heavy, so if there’s a gradient it will flow downhill with the kind of force that can pick up a heavy SUV and toss it around like a toy. And if it channels down a narrow urban street, it will throw parked cars around like King Kong in a bad mood.

The photograph in Arthur’s article showed what had happened in a particular street. Taken with a telephoto lens from an upper storey of a building, it showed a chaotic and almost surreal scene: about 70 vehicles of all sizes jumbled up and scattered at crazy angles along the length of the street.

It was an astonishing image which really stopped me in my tracks. Not surprisingly, it also went viral on social media. And then came the reaction: “AI image, fake news.” The photograph was so vivid, so uncannily sharp and unreal, that it looked to viewers like something that they could have faked themselves using Midjourney or Dall-E or a host of other generative AI tools.

But it wasn’t fake, as Arthur established in a nice piece of detective work – tracking down a bar in the picture using Facebook, finding the street in Apple Maps and even “walking” down it using Street View. “It’s not obvious why these people thought that photo in particular wasn’t real”, he writes. “Perhaps it’s something about the sheen of the cars and the peculiar roundedness of the shapes, and maybe the lack of obvious damage”. Or is it that the proliferation of AI-generated fakes is already making people increasingly predisposed not to believe things that are real?

It makes sense, in a way: Meta’s profits depend on keeping users of its platforms ‘engaged’. If AI slop helps to achieve that, what’s the problem?

My hunch is that it’s the latter, because social media are being overrun by what has come to be known as “AI slop” – images and text created using generative AI tools. (Amazon’s Kindle store is having similar problems with AI-generated “books”, but that’s a different story.)

You’d have thought that the social media companies would be bothered by this tsunami of crap on their platforms. Think again. According to Jason Koebler of the tech news website 404 Media, in a recent quarterly earnings call that was overwhelmingly about AI, Meta’s chief executive, Mark Zuckerberg, said that new, AI-generated feeds were likely to come to Facebook and other Meta platforms. Zuckerberg said he was excited by the “opportunity for AI to help people create content that just makes people’s feed experiences better”.

Warming to his theme, Zuck continued: “I think we’re going to add a whole new category of content, which is AI-generated or AI-summarised content or kind of existing content pulled together by AI in some way. And I think that that’s going to be just very exciting for Facebook and Instagram and maybe Threads or other kind of feed experiences over time.”

Which makes perfect sense, in a way: Meta’s profits depend on keeping users of its platforms “engaged” – that is, spending as much time as possible on them – and if AI slop helps to achieve that goal, what’s the problem?

On the supply side, it turns out that AI-generated stuff is also profitable for those who create it. Koebler has spent a year exploring this dark underbelly of social media. In India, he ran into Gyan Abhishek, an analyst who studies online virality. Abhishek showed him a startling image being used to generate revenue – a picture of a skeletal elderly man hunched over while being eaten by hundreds of bugs.

“The Indian audience is very emotional,” Abhishek explained. “After seeing photos like this, they ‘like’, ‘comment’ and share them. So you too should create a page like this, upload photos and make money through performance bonus.” He also claims that creators of viral images can earn $100 for 1,000 “likes”, which sounds like money for jam, at least to this columnist.

So what we have here is a nice positive feedback loop in which creators of AI slop profit from feeding the engagement algorithms of social media platforms, which in turn profit from the increasing “engagement” that viral images attract. The trouble with positive feedback loops, though, is that they give rise to runaway growth, and so to the question of what happens to social media when they become terminally enshittified as a result. Which is where Meta and co are headed.
 