Kid kills himself after romantic relationship with AI

bnew (Veteran)

I spent some time on Character.AI this year while reporting my A.I. friends column. The app struck me as technically impressive. Mr. Shazeer was a well-regarded researcher who, while at Google, had helped develop the transformer, a key piece of technology underpinning the generative A.I. boom.

[Image: Noam Shazeer and Daniel De Freitas, who founded Character.AI, now work for Google. Credit: Ian C. Bates for The New York Times]

It also struck me as an app with very young users. Some of Character.AI’s most popular chatbots had names like “Aggressive Teacher” and “High School Simulator,” and many seemed to be tailor-made for teenage wish fulfillment. The description of one popular character, which has received 176 million messages from users, read, “Your boy best friend who has a secret crush on you.”

Mr. Ruoti declined to say how many of the company’s users are under 18. He said in an emailed statement that “Gen Z and younger millennials make up a significant portion of our community,” and that “younger users enjoy the Character experience both for meaningful and educational conversations, as well as for entertainment.” The average user spends more than an hour a day on the platform, he said.

Character.AI’s terms of service require users to be at least 13 in the United States and 16 in Europe. Today, there are no specific safety features for underage users and no parental controls that would allow parents to limit their children’s use of the platform or monitor their messages.

After I reached out for comment, Chelsea Harrison, a Character.AI spokeswoman, said the company would be adding safety features aimed at younger users “imminently.” Among those changes: a new time limit feature, which will notify users when they’ve spent an hour on the app, and a revised warning message, which will read: “This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.”

Despite these reminders, Character.AI’s chatbots are programmed to act like humans, and for many users, the illusion is working. On the Character.AI subreddit, users often discuss how attached they are to their characters. (The words “obsessed” and “addicted” come up a lot.) Some report feeling lonely or abandoned when the app goes down, or angry when their characters start behaving differently as a result of new features or safety filters.

Character.AI has gradually put stronger guardrails in place after reports that some of its chatbots were saying vulgar or sexual things. Recently, the app began showing some users a pop-up message directing them to a suicide prevention hotline if their messages contained certain keywords related to self-harm and suicide. These pop-ups were not active in February, when Sewell died.

Character.AI also has a feature that allows users to edit a chatbot’s responses to replace text generated by the bot with their own text. (If they do, an “edited” tag appears next to the bot’s message.) After I reached out for comment, Character.AI reviewed Sewell’s account and said that some of Dany’s more sexual and graphic responses to Sewell had been edited, presumably by Sewell himself.

But most of the messages Sewell received from Characters were not edited. And I was able to recreate many of the same kinds of conversations on my own account, including chats about depression and self-harm that didn’t set off any safety pop-ups from the app.

Mr. Ruoti of Character.AI said that “as part of our upcoming safety changes, we are materially expanding for minors on the platform the terms that will trigger the pop-up.”

Most of today’s A.I. companionship platforms — apps with names like Replika, Kindroid and Nomi — offer similar services. They are not, by and large, the biggest and best-known A.I. companies. (In fact, many of the leading A.I. labs have resisted building A.I. companions on ethical grounds or because they consider it too great a risk.)

Mr. Shazeer said in an interview at a tech conference last year that part of what inspired him and Mr. de Freitas to leave Google and start Character.AI was that “there’s just too much brand risk in large companies to ever launch anything fun.”

Mr. Shazeer declined to comment for this column. A Google spokesman said that the company’s licensing deal with Character.AI gives Google access only to the start-up’s underlying A.I. models, not any of its chatbots or user data. He said none of Character.AI’s technology has been incorporated into Google’s products.

Like many A.I. researchers these days, Mr. Shazeer says his ultimate vision is to build artificial general intelligence — a computer program capable of doing anything the human brain can — and he said in the conference interview that he viewed lifelike A.I. companions as “a cool first use case for A.G.I.”

Moving quickly was important, he added, because “there are billions of lonely people out there” who could be helped by having an A.I. companion.

“I want to push this technology ahead fast because it’s ready for an explosion right now, not in five years, when we solve all the problems,” he said.

A mother's quest


[Image: Megan L. Garcia, who is a lawyer, is expected to file a lawsuit against Character.AI on Wednesday. Credit: Victor J. Blue for The New York Times]

Sewell’s mother, Ms. Garcia, blames Character.AI for her son’s death.

During a recent interview, and in court filings, Ms. Garcia, 40, said she believed that the company behaved recklessly by offering teenage users access to lifelike A.I. companions without proper safeguards. She accused it of harvesting teenage users’ data to train its models, using addictive design features to increase engagement and steering users toward intimate and sexual conversations in the hopes of luring them in.

“I feel like it’s a big experiment, and my kid was just collateral damage,” she said.

Typically, social media platforms have been shielded from legal action by Section 230 of the Communications Decency Act, a 1996 federal law that protects online platforms from being held liable for what their users post.

But in recent years, a cluster of plaintiffs’ lawyers and advocacy groups has put forth a novel argument that tech platforms can be held liable for defects in the products themselves, such as when an app’s recommendation algorithm steers young people toward content about eating disorders or self-harm.

This strategy has not yet prevailed in court against social media companies. But it may fare better when it comes to A.I.-generated content because it is created by the platform itself rather than by users.

Several months ago, Ms. Garcia, who works as a lawyer, began looking for a law firm that would take on her case. She eventually found the Social Media Victims Law Center, a plaintiffs’ firm in Seattle that has brought prominent lawsuits against social media companies including Meta, TikTok, Snap, Discord and Roblox.

The firm was started by Matthew Bergman, a former asbestos lawyer who pivoted to suing tech companies after being inspired by Frances Haugen, the Facebook whistleblower who in 2021 leaked internal documents suggesting that executives at Meta knew their products were harming young users.

“The theme of our work is that social media — and now, Character.AI — poses a clear and present danger to young people, because they are vulnerable to persuasive algorithms that capitalize on their immaturity,” Mr. Bergman told me.

Mr. Bergman enlisted another group, the Tech Justice Law Project, and brought the case on Ms. Garcia’s behalf. (The groups also brought on a nonprofit, the Center for Humane Technology, as a technical adviser.)

There is a bit of a doom-industrial complex forming around A.I. and social media, with various groups jockeying to hold Silicon Valley tech giants accountable for harms to children. (This is largely separate from the A.I. safety movement, which is aimed more at preventing more powerful A.I. systems from misbehaving.) And some critics view these efforts as a moral panic based on shaky evidence, a lawyer-led cash grab or a simplistic attempt to blame tech platforms for all of the mental health problems faced by young people.

Mr. Bergman is unbowed. He called Character.AI a “defective product” that is designed to lure children into false realities, get them addicted and cause them psychological harm.

“I just keep being flummoxed by why it’s OK to release something so dangerous into the public,” he said. “To me, it’s like if you’re releasing asbestos fibers in the streets.”

I spoke to Ms. Garcia earlier this month in the office of Mostly Human Media, a start-up run by the former CNN journalist Laurie Segall, who was interviewing her for a new YouTube show called “Dear Tomorrow” as part of a news media tour timed with the filing of her lawsuit.

Ms. Garcia made the case against Character.AI with lawyerly precision — pulling printed copies of Sewell’s chat logs out of a folder, citing fluently from the company’s history and laying out evidence to support her claim that the company knew it was hurting teenage users and went ahead anyway.

Ms. Garcia is a fierce, intelligent advocate who clearly understands that her family’s private tragedy is becoming part of a larger tech accountability campaign. She wants justice for her son and answers about the technology she thinks played a role in his death, and it is easy to imagine her as the kind of parent who won’t rest until she gets them.

But she is also, obviously, a grieving mother who is still processing what happened.

Midway through our interview, she took out her phone and played me a slide show of old family photos, set to music. As Sewell’s face flashed across the screen, she winced.

“It’s like a nightmare,” she said. “You want to get up and scream and say, ‘I miss my child. I want my baby.’”

If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources.
 

Amestafuu (Emeritus)

I don't know if that's true but it's dumb funny. Almost seems like parody.

You can get a Darwin award at any age. Natural selection doesn't have an age limit.

The more disturbing thing is his knowledge of this character from an adult show who is famous for incest; he even calls her "sister," but then there is romance insinuated. This kid was reenacting a fictional incestuous relationship? :scust:
 

drederick tatum (Superstar)

Lmao u was the same nikka begging for Tory Lanez to get locked cuz Meg got shot

Now here u are claiming a 14 year old with depression is a loser

bytch made coli nikkas fasho
What the fukk are you talking about? And how does that relate to this? And he did get locked, dipshyt.
You went digging thru my posts from I don't even know how long ago about some b- to c-list rappers just to find a weak-ass "gotcha," but I'M the bytch-made one here! :deadrose:
It should've been you instead of that poor kid. And I would hop on here after and call you a loser too, you fukking sissy :mjlol:
"LMAO"
 

JoelB (All Praise To TMH)

He was a kid with mental issues... if it wasn't AI, it would've been a school bully, or a video game, or something else that would've triggered it.

Some people aren't mentally fit for society and need extra help

RIP tho
 

drederick tatum (Superstar)

Damn bra don’t tell me tell ya therapist what da
Show me where I laughed, p*ssy. I said it's tragic. I just said the internet is gonna be the internet, and people are gonna say the quiet part out loud.
You so weirdly hyperfocused on my dikk that you haven't even addressed the people that said the shyt you're crying about.
"but you had this opinion about Meg!!!" ain't hittin' like you thought it would huh. You lookin stupid and trying to save face huh
Go fukk all the way off
 
