Can strong A.I. ever truly exist? --The Chinese Room Thought Experiment

Type Username Here

Not a new member
Joined
Apr 30, 2012
Messages
16,368
Reputation
2,385
Daps
32,641
Reppin
humans
Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that he or she is talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. As the computer had passed the Turing test this way, it is fair, says Searle, to deduce that he would be able to do so as well, simply by running the program manually.
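The rule-following Searle describes can be sketched as a toy lookup program. This is purely illustrative: the "rule book" entries below are invented placeholders, not anything from Searle's paper, and a real Turing-test-passing program would be vastly more complex. The point the sketch makes is only that the operator (or CPU) matches shapes against rules and copies out responses, with no translation or comprehension step anywhere.

```python
# A toy "Chinese Room": the operator matches the incoming symbols against
# a rule book and copies out the listed response. No meaning is consulted
# at any step. The rules here are made-up examples for illustration.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢。",
    "你会说中文吗?": "会, 我说得很流利。",
}

def operate_room(input_symbols: str) -> str:
    """Mechanically follow the rule book; 'understanding' never enters."""
    # Unrecognized input gets a stock "please repeat that" response.
    return RULE_BOOK.get(input_symbols, "请再说一遍。")

print(operate_room("你好吗?"))
```

Whether a sufficiently rich version of this lookup-and-respond process would constitute understanding is exactly what the thought experiment disputes.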

Searle asserts that there is no essential difference between the role the computer plays in the first case and the role he plays in the latter. Each is simply following a program, step-by-step, which simulates intelligent behavior. And yet, Searle points out, "I don't speak a word of Chinese." Since he does not understand Chinese, Searle argues, we must infer that the computer does not understand Chinese either.

Searle argues that without "understanding" (what philosophers call "intentionality"), we cannot describe what the machine is doing as "thinking". Since it does not think, it does not have a "mind" in anything like the normal sense of the word, according to Searle. Therefore, he concludes, "strong AI" is mistaken.

http://en.wikipedia.org/wiki/Chinese_room

What do you think about this?
 

Berniewood Hogan

IT'S BERNIE SANDERS WITH A STEEL CHAIR!
Joined
Aug 1, 2012
Messages
17,983
Reputation
6,870
Daps
88,325
Reppin
nWg
Can't we be considered biological machines? And do we understand our own "coding"?

THE DIFFERENCE IS THAT OUR INTELLECT ISN'T THE ONLY THING WE THINK WITH, BROTHER! WHEN YOU HAVE A FEELING IN YOUR GUT ABOUT SOMETHING, IT'S BECAUSE A LARGE CLUSTER OF NERVE ENDINGS THAT LITERALLY ARE IN YOUR GUT ARE REACTING TO SOME EXTERNAL STIMULI, DUDE! PRESUMABLY MEAN GENE, AN ARTIFICIAL INTELLIGENCE WOULDN'T BE CONSTRUCTED ALONG THOSE LINES, WITH A NERVOUS SYSTEM ATTACHED TO AN INTESTINAL TRACT AND SO ON, BROTHER! IT WOULD BE ALL INTELLECT AND NO HUNGER, FEAR, OR OTHER HUMAN SENSATIONS WE COULD RELATE TO! SO IT'S HARD TO PREDICT HOW SUCH A THING WOULD BEHAVE, BROTHER!
 

Fillerguy

Veteran
Joined
May 5, 2012
Messages
18,527
Reputation
4,195
Daps
77,174
Reppin
North Jersey
If anyone's slow like me, here's a SparkNotes version:


I've always thought it was a matter of perception. If it looks, sounds, and acts like a duck... it's a duck. If my toaster, cat, or chair behaves like an intelligent being, it's an intelligent being :smugdraper:

It dont matter though cuz that singularity gonna hit :whoo:
 
Last edited by a moderator:

Type Username Here

THE DIFFERENCE IS THAT OUR INTELLECT ISN'T THE ONLY THING WE THINK WITH, BROTHER! WHEN YOU HAVE A FEELING IN YOUR GUT ABOUT SOMETHING, IT'S BECAUSE A LARGE CLUSTER OF NERVE ENDINGS THAT LITERALLY ARE IN YOUR GUT ARE REACTING TO SOME EXTERNAL STIMULI, DUDE! PRESUMABLY MEAN GENE, AN ARTIFICIAL INTELLIGENCE WOULDN'T BE CONSTRUCTED ALONG THOSE LINES, WITH A NERVOUS SYSTEM ATTACHED TO AN INTESTINAL TRACT AND SO ON, BROTHER! IT WOULD BE ALL INTELLECT AND NO HUNGER, FEAR, OR OTHER HUMAN SENSATIONS WE COULD RELATE TO! SO IT'S HARD TO PREDICT HOW SUCH A THING WOULD BEHAVE, BROTHER!


Machines react to external stimuli, Hulkster.

My point is this: humans have done with AI and machines what took evolution and nature about 3 billion years to do. In this sense, since we are building AI from our own experiences with nature, I feel we will discover that we are no different from the machines we deem "lifeless".
 

Type Username Here

If anyone's slow like me, here's a SparkNotes version:
The Chinese Room - 60-Second Adventures in Thought (3/6) - YouTube

I've always thought it was a matter of perception. If it looks, sounds, and acts like a duck... it's a duck. If my toaster, cat, or chair behaves like an intelligent being, it's an intelligent being :smugdraper:

It dont matter though cuz that singularity gonna hit :whoo:
Ray Kurzweil: After the Singularity, We'll All Be Robots - YouTube

Yea, I'm in the singularity boat, but Searle's argument is pretty damn strong. I haven't seen anyone refute it yet.
 

Type Username Here

NOT WITH BIOLOGICAL CONCERNS ABOUT FOOD, SHELTER, AND MATING, BROTHER! THESE NON-INTELLECTUAL INTERESTS OF OURS ARE A MAJOR COMPONENT OF OUR CONCEPTION OF INTELLIGENCE, DUDE!

But these biological concerns of food, shelter and mating are hardwired into us.

This was precisely my point in the post where I referenced our "coding".

We have this primal biological machine-like code and then everything else built on top of it.

Daniel Dennett addresses the issues raised by the Chinese Room better than I ever could, drawing on Dawkins' selfish gene argument. It's really thought-provoking.
 

Mowgli

Veteran
Joined
May 1, 2012
Messages
103,068
Reputation
13,348
Daps
243,161
The ego of the white man will make him think his experiment is flawless, but ultimately it will be a colossal failure. And sadly, clowns are ready to give it away to some machines just so they can have some kind of intellectual high ground over people who think it's a bad idea.
 

woodblock

Rookie
Joined
Nov 8, 2012
Messages
53
Reputation
0
Daps
29
Reppin
NULL
This is similar in some ways to the "philosophical zombie" concept, which asks: how do I know that the people around me are not "philosophical zombies"? That is, how do I know they really have inner lives, thoughts, and feelings just like I do, and are not just simulating these things? That is a very difficult question to answer, but what I do know is that the people around me behave as though they are people with inner lives and so on. So it's reasonable for me to believe that they do in fact have them. Whether they "really" do is unknowable to me, so the question is not a productive one. I feel the same way about whether an AI "really" understands what it's doing. If it acts like it does, then for all intents and purposes, it does.
 