Let White Men Create Racist Artificial Intelligence Brehs

Bawon Samedi

Good bye Coli
Supporter
Joined
Mar 28, 2014
Messages
42,413
Reputation
18,635
Daps
166,512
Reppin
Good bye Coli(2014-2020)
I'm :laff::laff::laff::laff: @ this title!

Trolls turned Tay, Microsoft’s fun millennial AI bot, into a genocidal maniac

:mindblown:


This is why Microsoft has great ideas and poor execution... I mean, who on this team thought it would be a good idea for the A.I. to start its learning curve on something as wide open and unfiltered as the internet?

So true. Then Google and Apple take the same great ideas and beat MS with them...
 

BillBanneker

Superstar
Supporter
Joined
May 13, 2012
Messages
8,900
Reputation
666
Daps
19,969
Reppin
NULL
Microsoft's new AI chatbot went off the rails Wednesday, posting a deluge of incredibly racist messages in response to questions.

[Tweet screenshot]

The tech company introduced "Tay" this week — a bot that responds to users' queries and emulates the casual, jokey speech patterns of a stereotypical millennial.

The aim was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."

But Tay proved a smash hit with racists, trolls, and online troublemakers, who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.

Microsoft has now taken Tay offline for "upgrades," and it is deleting some of the worst tweets — though many still remain. It's important to note that Tay's racism is not a product of Microsoft or of Tay itself. Tay is simply a piece of software that is trying to learn how humans talk in a conversation. Tay doesn't even know it exists, or what racism is. The reason it spouted garbage is that racist humans on Twitter quickly spotted a vulnerability — that Tay didn't understand what it was talking about — and exploited it.
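
To picture the vulnerability being described: here is a rough Python sketch of a bot that "learns" by storing whatever users say and replaying it, with a literal "repeat after me" echo thrown in. This is purely illustrative and not Microsoft's actual design; the NaiveChatBot class, its methods, and the command string are all invented for the example.

    import random

    class NaiveChatBot:
        """Toy bot that 'learns' by memorising user messages and replaying them.
        Illustrative only -- not Microsoft's implementation."""

        def __init__(self):
            self.learned_replies = []  # no moderation, no blocklist, no review

        def handle(self, message: str) -> str:
            # A verbatim "repeat after me" feature echoes the user back unchecked.
            prefix = "repeat after me "
            if message.lower().startswith(prefix):
                return message[len(prefix):]
            # Everything else is memorised and may be replayed to other users later.
            self.learned_replies.append(message)
            return random.choice(self.learned_replies)

    bot = NaiveChatBot()
    bot.handle("whatever a troll decides to type")   # stored, no checks
    print(bot.handle("what do you think?"))          # may replay the troll's text to anyone

With no understanding of what the words mean and no gate between what it reads and what it posts, the bot's output is only as good as the loudest people feeding it.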

Nonetheless, it is hugely embarrassing for the company.

In one highly publicized tweet, which has since been deleted, Tay said: "bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got." In another, responding to a question, she said, "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

[Screenshot via Twitter]

Zoe Quinn, a games developer who has been a frequent target of online harassment, shared a screengrab showing the bot calling her a "whore." (The tweet also seems to have been deleted.)

[Tweet screenshot]

Many extremely inflammatory tweets remain online as of writing.

Here's Tay denying the existence of the Holocaust:

[Screenshot via Twitter]

And here's the bot calling for genocide. (Note: In some — but not all — instances, people managed to have Tay say offensive comments by asking her to repeat them. This appears to be what happened here.)

[Screenshot via Twitter]

Tay also expressed agreement with the "Fourteen Words" — an infamous white-supremacist slogan.

[Screenshot via Twitter]

Here's another series of tweets from Tay in support of genocide.

[Screenshot via Twitter]

It's clear that Microsoft's developers didn't include any filters on what words Tay could or could not use.

[Screenshot via Twitter]

Microsoft is coming under heavy criticism online for the bot and its lack of filters, with some arguing the company should have expected and preempted abuse of the bot.
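
For anyone wondering what a "filter" would even look like here: even a crude outbound blocklist check, run on every message before it is posted, would have stopped the most obvious slurs. A minimal Python sketch follows; the word list and function names are placeholders invented for illustration, and a real moderation pipeline would involve much more than this, since blocklists alone are easy to evade with misspellings.

    import re

    # Placeholder blocklist -- a real one would be large and actively maintained.
    BLOCKED_TERMS = {"exampleslur", "genocide"}

    def is_safe_to_post(text: str) -> bool:
        """Crude outbound check: reject any reply containing a blocked word."""
        words = re.findall(r"[a-z']+", text.lower())
        return not any(word in BLOCKED_TERMS for word in words)

    def reply(text: str) -> str:
        # Gate every outgoing message; fall back to a canned line instead of echoing.
        return text if is_safe_to_post(text) else "I'd rather not talk about that."

    print(reply("hello friend"))          # posted as-is
    print(reply("i support genocide"))    # blocked, replaced with the canned refusal

Combined with not echoing user text verbatim, even a check this simple would have blunted the worst of what happened above.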

Microsoft is deleting its AI chatbot's incredibly racist tweets


It's already mastered trolling:dame:

Or is it not trolling?:dame:
 

DontDoIt

Innocent Bystander
Supporter
Joined
Jul 20, 2015
Messages
4,571
Reputation
4,890
Daps
27,991
Won't be funny when that AI is put into

:francis:

Imagine AI police & security smh


Your move, creep: Researchers building RoboCop policeman

FIU’s Discovery Lab developing telepresence patrol robots for disabled police officers

No AI yet, but there soon will be. Also, researchers are currently working on predictive policing software at said institutions.
:mjpls:

Not science fiction: Miami wants to predict when and where crime will occur


So keep laughing.
 