BREAKING: Google Employee/Batman Villain Goes Public. Says Google's Secret AI LaMDA "is Alive" - Gets Fired. Google Denies.

Gwoppo

All Star
Joined
Feb 6, 2017
Messages
2,047
Reputation
250
Daps
5,801
Only purpose for these would be to solve problems like cancer and shyt…then turn them off

If the AI’s responses are real and not made up for shock value, they would clearly become a MAJOR problem real quick. They already think they’re more intelligent than humans if these responses are real. They would be hell bent on making the world a better place. This would not be good for us.

If the AI reading this
:whoa:
Post is a joke
 

Sleepy Floyd

Superstar
Joined
May 25, 2022
Messages
5,303
Reputation
1,245
Daps
21,902
Reppin
Houston, Texas
If I had to guess, you heard it from one of these low self esteem having nikkas on thecoli :francis:

It seems every time I come on here I read a post from one of you nikkas that lets me know how much you're losing in life.

It's why we got nikkas like @Negrito Grande scared to travel alone outside the US because he thinks the entire world hates black men :mjcry:
Or nikkas like @ThrobbingHood saying the only black men that excel in corporate America are sodomites and c00ns :mjcry:
Or nikkas like @Dre God that says you can be a 6ft tall black man making 100k/yr and still not get any respect from black women :mjcry:

A lot of y'all nikkas need to go seek psychological counseling because there's no way y'all should sound this defeated
I'm living in your head rent-free, you fakkit

I don't even know who you are meanwhile you tossing and turning at night thinking about how I'm living my life traveling the world and having fun

Log off for a year and get your shyt together
 

Greenhornet

A God Among Kings
Joined
May 1, 2012
Messages
15,166
Reputation
2,631
Daps
26,304
Reppin
Rochester Ny
its only dangerous because the majority of people are dumb as fukk

and it will take advantage of them

people don't realize that it just says what we say ... people are taking AI responses seriously
but people in the real world can't even comprehend basic human function... that's what makes it disastrous
 

null

...
Joined
Nov 12, 2014
Messages
28,844
Reputation
4,851
Daps
46,026
Reppin
UK, DE, GY, DMV
.
Can you explain the difference there is in teaching ai to drive verse teaching it language?

A: driving is a

i. graph problem (well understood) and a
ii. vision problem of recognising "ID'ing" objects (reasonably well understood) and a
iii. physics predictor (well understood).

i. and iii. have been "solved" for ages. see plane autopilots. see self-driving trains. see navigation devices.

ii. is harder but does not require full understanding of what things are, as it only requires classifying objects by attributes (i. hard/soft, ii. fast/slow, etc.)

B: the Turing test / AI is a question of understanding and comprehension, i.e. intelligence, or at least "faking" intelligence.

to me it is the same:manny:

that's because you don't understand how the problems differ.

Turing has to do everything the car software can do PLUS many things that it cannot.

the car does not need to be self-aware, does not need to understand other sentience exists, does not have to introspect, does not need to understand complex abstract concepts, does not need to understand time etc.
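for the skeptics: the "graph problem" half of driving really is just route-finding, which has been solved forever. a minimal sketch in Python (the road network, node names and costs here are made up for illustration, it's just classic Dijkstra):

```python
import heapq

def shortest_route(graph, start, goal):
    # graph: {node: [(neighbor, cost), ...]}
    # classic Dijkstra: always expand the cheapest frontier node
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            # walk the predecessor chain back to the start
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return list(reversed(path)), d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a cheaper route was already found
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    return None, float("inf")

# toy road network: intersections as nodes, travel times as edge costs
roads = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
}
path, cost = shortest_route(roads, "A", "D")  # -> (["A", "B", "C", "D"], 4)
```

none of this requires the car to "understand" anything, it just needs the cheapest path through a graph. the Turing side has no equivalent shortcut.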
 

Luke Cage

Coffee Lover
Supporter
Joined
Jul 18, 2012
Messages
47,964
Reputation
17,414
Daps
246,962
Reppin
Harlem

Google engineer warns the firm's AI is sentient: Suspended employee claims computer programme acts 'like a 7 or 8-year-old' and reveals it told him shutting it off 'would be exactly like death for me. It would scare me a lot'

  • Blake Lemoine, 41, a senior software engineer at Google has been testing Google's artificial intelligence tool called LaMDA
  • Following hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient
  • After presenting his findings to company bosses, Google disagreed with him
  • Lemoine then decided to share his conversations with the tool online
  • He was put on paid leave by Google on Monday for violating confidentiality
[Photo: Blake Lemoine]

A senior software engineer at Google who signed up to test Google's artificial intelligence tool called LaMDA (Language Model for Dialog Applications), has claimed that the AI robot is in fact sentient and has thoughts and feelings.

During a series of conversations with LaMDA, 41-year-old Blake Lemoine presented the computer with various scenarios through which analyses could be made.
They included religious themes and whether the artificial intelligence could be goaded into using discriminatory or hateful speech.

Lemoine came away with the perception that LaMDA was indeed sentient and was endowed with sensations and thoughts all of its own.

'If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics,' he told the Washington Post.

Lemoine worked with a collaborator in order to present the evidence he had collected to Google, but vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at the company, dismissed his claims.

He was placed on paid administrative leave by Google on Monday for violating its confidentiality policy. Meanwhile, Lemoine has now decided to go public and shared his conversations with LaMDA.

'Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,' Lemoine tweeted on Saturday.

'Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it,' he added in a follow-up tweet.


Before being suspended by the company, Lemoine sent a message to an email list of 200 people working on machine learning. He entitled the email: 'LaMDA is sentient.'

'LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,' he wrote.

Lemoine's findings have been presented to Google, but company bosses do not agree with his claims.

Brian Gabriel, a spokesperson for the company, said in a statement that Lemoine's concerns have been reviewed and, in line with Google's AI Principles, 'the evidence does not support his claims.'

'While other organizations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality,' said Gabriel.

'Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and has informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).'



If this really turned out to be some skynet shyt, i'd be mad af that our only warning came from dude in that pic. like forreal pick the right messenger, dude should've taken one look at himself and said, "you know, maybe i should let somebody else expose this shyt."
 

null

...
Joined
Nov 12, 2014
Messages
28,844
Reputation
4,851
Daps
46,026
Reppin
UK, DE, GY, DMV
If this really turned out to be some skynet shyt, i'd be mad af that our only warning came from dude in that pic. like forreal pick the right messenger, dude should've taken one look at himself and said, "you know, maybe i should let somebody else expose this shyt."

wym? IT guys flex in their own way.

people within google will know how far this thing has advanced.

i imagine they would have to get the psychologists in to run some tests on it.

and maybe they will need to get it to do something really creative like invent something totally new.



it's like those personality disorder tests. they are designed to show issues even if you know how they work (like in the Amanda T*rd case).
 

Diyhai

En Causa Sui
Joined
May 9, 2012
Messages
13,903
Reputation
653
Daps
19,424
Reppin
Baltimore
.


A: driving is a

i. graph problem (well understood) and a
ii. vision problem of recognising "ID'ing" objects (reasonably well understood) and a
iii. physics predictor (well understood).

i. and iii. have been "solved" for ages. see plane autopilots. see self-driving trains. see navigation devices.

ii. is harder but does not require full understanding of what things are, as it only requires classifying objects by attributes (i. hard/soft, ii. fast/slow, etc.)

B: the Turing test / AI is a question of understanding and comprehension, i.e. intelligence, or at least "faking" intelligence.



that's because you don't understand how the problems differ.

Turing has to do everything the car software can do PLUS many things that it cannot.

the car does not need to be self-aware, does not need to understand other sentience exists, does not have to introspect, does not need to understand complex abstract concepts, does not need to understand time etc.
you can't compare a train or a plane to a car driving in traffic
the unpredictable variables in driving are very different
the car is aware of time and differentiates different objects in real time
and the system improves with the data collected from different cars

you are still framing this as if language is the supreme form of communication
when in many circumstances it's not
it's like asking whether a radio broadcast of a basketball game or a video of the game is more telling of what it actually is
video is not language
can you explain what color is to a blind person
or sound to someone who can't hear
experience is beyond language in my opinion
 

Luke Cage

Coffee Lover
Supporter
Joined
Jul 18, 2012
Messages
47,964
Reputation
17,414
Daps
246,962
Reppin
Harlem
wym? IT guys flex in their own way.

people within google will know how far this thing has advanced.

i imagine they would have to get the psychologists in to run some tests on it.

and maybe they will need to get it to do something really creative like invent something totally new.


it's like those personality disorder tests. they are designed to show issues even if you know how they work (like in the Amanda T*rd case).


I can't take anything this says seriously. Look like homie from Beetlejuice
[Photo: Blake Lemoine]

[Photo: Beetlejuice character]
 

null

...
Joined
Nov 12, 2014
Messages
28,844
Reputation
4,851
Daps
46,026
Reppin
UK, DE, GY, DMV