BREAKING: Google Employee/Batman Villain Goes Public. Says Google's Secret AI LaMDA "is Alive" - Gets Fired. Google Denies.



Google engineer warns the firm's AI is sentient: Suspended employee claims computer programme acts 'like a 7 or 8-year-old' and reveals it told him shutting it off 'would be exactly like death for me. It would scare me a lot'

  • Blake Lemoine, 41, a senior software engineer at Google, has been testing Google's artificial intelligence tool called LaMDA
  • Following hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient
  • After presenting his findings to company bosses, Google disagreed with him
  • Lemoine then decided to share his conversations with the tool online
  • He was put on paid leave by Google on Monday for violating confidentiality

A senior software engineer at Google who signed up to test Google's artificial intelligence tool called LaMDA (Language Model for Dialog Applications), has claimed that the AI robot is in fact sentient and has thoughts and feelings.

During a series of conversations with LaMDA, 41-year-old Blake Lemoine presented the computer with various scenarios through which analyses could be made.
They included religious themes and whether the artificial intelligence could be goaded into using discriminatory or hateful speech.

Lemoine came away with the perception that LaMDA was indeed sentient and was endowed with sensations and thoughts all of its own.

'If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics,' he told the Washington Post.

Lemoine worked with a collaborator in order to present the evidence he had collected to Google but vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at the company dismissed his claims.

He was placed on paid administrative leave by Google on Monday for violating its confidentiality policy. Meanwhile, Lemoine has now decided to go public and shared his conversations with LaMDA.

'Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,' Lemoine tweeted on Saturday.

'Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it,' he added in a follow-up tweet.


Before being suspended by the company, Lemoine sent a message to an email list of 200 people on machine learning. He entitled the email: 'LaMDA is sentient.'

'LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,' he wrote.

Lemoine's findings have been presented to Google, but company bosses do not agree with his claims.

Brian Gabriel, a spokesperson for the company, said in a statement that Lemoine's concerns have been reviewed and, in line with Google's AI Principles, 'the evidence does not support his claims.'

'While other organizations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality,' said Gabriel.

'Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).'


 

Reflected

Living in fear in the year of the tiger.


Forget what he looks like, read the actual interview he conducted with the AI.


It's no different from talking to "Cleverbot", besides the fact that more work was put into it. And sentience isn't really defined; people just liken it to appearing human, or mimicking humans. Ironically, they would do better to apply it to other animals on this planet long before they apply it to a language model.
 