Undecidability Results in Turing Test

Presented by: Amaldev M.


Abstract
The Turing test is one of the most disputed topics in artificial intelligence, philosophy of mind, and cognitive science. This seminar covers the major undecidability results and supporting arguments that have been claimed or proved against the test.

Introduction

1.1 'Can machines think?'
"I Propose to consider the question, 'Can machines think?' This should begin with definitions of the meaning of the terms 'machine' and 'think'. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words 'machine' and 'think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, 'Can machines think?' is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."
- A. M. Turing, Computing Machinery and Intelligence, Mind, 1950

1.2 The Imitation Game
Turing's 1950 article addresses the question of whether machines can be said to think. In order to settle the problem experimentally, rather than on purely metaphysical grounds that would try to define what intelligence is without showing what it is, Turing sets up an "imitation game": a kind of abstract oral examination intended to decide the matter empirically. The test is described as follows:
" It is played with three people, a man (A), a woman (B), and an interrogator © who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either 'X is A and Yis B' or 'X is B and Yis A'. The interrogator is allowed to put questions to A and B ... We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?' "
- A. M. Turing, Computing Machinery and Intelligence, Mind, 1950

Since then, Turing's ideas have been widely discussed, attacked, and defended over and over. At one extreme, Turing's paper has been considered to represent the "beginning" of artificial intelligence (AI) and the Turing test (TT) has been considered its ultimate goal. At the other extreme, the TT has been called useless, even harmful.
The idea behind this seminar is to study the various undecidability results that have been claimed or proved against the test.
2 Undecidability of the imitation game with the interrogator modelled as a Turing machine

2.1 Role of the interrogator
A major part of the studies concerning the Turing test deals with the capabilities of the imitated player (the human) and the imitating player (the machine). However, Turing claims that one cannot distinguish humans from machines in a situation like the imitation game. This casts the Turing test as a study of the subjectivity of the interrogator, who participates in the game as a player. Sato, Y. and Ikegami, T. (2004) propose that the validity of the Turing test is attributed not to the capability of the imitated or the imitating player but rather to the capability of the interrogator.

2.2 Machine as an interrogator
For this purpose, the imitation game is modelled as follows:
1. The imitator (A) is a Turing machine.
2. The interrogator (C) is a Turing machine.
3. The player (X) is either the imitator (A) or a human (B).
4. The interrogator (C) is given a complete blueprint M of the imitator.
5. The human (B) is not given the blueprint of the imitator (A); thus, he or she cannot intentionally emulate the imitator.
6. The interrogator (C) halts and outputs YES when it finds that the player (X) is the target Turing machine M; otherwise it halts and outputs NO.
7. When the imitator (A) does not halt, the interrogator (C) does not have to halt.
The question for the interrogator in the original imitation game, 'Is player (X) a machine or not?', is thus replaced by an easier one, 'Is player (X) the target Turing machine M or not?', and a Turing machine answers yes or no to this question using the information from the tele-typed communication.
It is then proved, by the method of diagonalization, that no perfect interrogator (C) can exist; see Sato, Y. and Ikegami, T. (2004) for the proof. Following the proof, they also give an application of the result to the network game Quake: the incapability of the system to recognize cheat programs, or 'bots' in Quake lingo.
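The shape of that diagonal argument can be illustrated with a short sketch. The code below is only a structural illustration under the assumption, for contradiction, that a perfect interrogator exists; every name in it (perfect_interrogator, make_adversary, simulate) is a hypothetical placeholder and none of it is taken from Sato and Ikegami (2004).

# A minimal sketch of the diagonalization, in the spirit of the halting
# problem. All names are hypothetical illustrations, not the paper's code.

def perfect_interrogator(blueprint, player):
    """Hypothetical perfect C: returns True iff `player` behaves exactly
    like the Turing machine described by `blueprint`."""
    raise NotImplementedError("assumed to exist only for the sake of contradiction")

def simulate(blueprint, question):
    """Hypothetical universal simulator of the machine described by
    `blueprint` on input `question`."""
    raise NotImplementedError

def make_adversary(own_blueprint):
    """Build a player that consults the supposed perfect interrogator about
    itself and then behaves in the opposite way."""
    def adversary(question):
        if perfect_interrogator(own_blueprint, adversary):
            # C claims "this player is the machine M" -- so answer unlike M would.
            return "something M would never say"
        else:
            # C claims "this player is not M" -- so behave exactly like M.
            return simulate(own_blueprint, question)
    return adversary

# Whatever verdict perfect_interrogator gives on the adversary, the
# adversary's behaviour contradicts it, so no perfect C can exist.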


2.3 Generalization of the result
When two systems with exactly the same computational abilities play the imitation game with a human, it is impossible for the interrogator (C) to distinguish the imitator (A) from the human (B). In other words, let s be a system that can work as computing machinery and let S be the set of such computing machineries, including s. Then no s' ∈ S can be a perfect interrogator that decides whether the opponent player (X) is a given s ∈ S or not; and if an interrogator can distinguish humans from the systems in S for every s, that interrogator cannot itself belong to S. Turing's claim that humans cannot distinguish men from women in the imitation game is not falsified.
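Stated compactly (this is my own formal paraphrase of the claim above; the notation is not taken from the paper), the generalization reads:

% S: the class of systems with the same computational power; X: the opponent player.
\[
  \forall\, s' \in S \ \ \exists\, s \in S : \ s' \text{ cannot decide whether } X = s,
\]
\[
  \text{and any interrogator that does decide } X = s \text{ for every } s \in S \text{ cannot itself belong to } S.
\]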
3 Undecidability of the imitation game with a human interrogator

3.1 The meaning of 'undecidability'
Unlike in the previous section, it has not been proved whether a human interrogator will or will not be able to distinguish a machine in the imitation game. That remains a hard problem, since answering it would require unfolding the mysteries of the human brain, nervous system, dynamic processing, and so on. The question that has been rigorously studied instead is 'Does the test measure intelligence?'
In the 1950 article Turing himself considers nine plausible contrary views and gives a reply to each of them; see Turing (1950).

3.2 Claimed levels of the undecidability question
First, there is the suggestion that The Turing Test provides logically necessary and sufficient conditions for the attribution of intelligence. Second, there is the suggestion that The Turing Test provides logically sufficient - but not logically necessary - conditions for the attribution of intelligence. Third, there is the suggestion that The Turing Test provides criteria - defeasible sufficient conditions - for the attribution of intelligence. Fourth - and perhaps not importantly distinct from the previous claim - there is the suggestion that The Turing Test provides (more or less strong) probabilistic support for the attribution of intelligence.

3.2.1 (Logically) Necessary and Sufficient Conditions
It is doubtful whether there are very many examples of people who have explicitly claimed that The Turing Test is meant to provide conditions that are both logically necessary and logically sufficient for the attribution of intelligence. (Perhaps Block (1981) is one such case.) However, some of the objections that have been proposed against The Turing Test only make sense under the assumption that The Turing Test does indeed provide logically necessary and logically sufficient conditions for the attribution of intelligence; and many more of the objections that have been proposed against The Turing Test only make sense under the assumption that The Turing Test provides necessary and sufficient conditions for the attribution of intelligence, where the modality in question is weaker than the strictly logical, e.g., nomic or causal.

Consider, for example, those people who have claimed that The Turing Test is chauvinistic; and, in particular, those people who have claimed that it is surely logically possible for there to be something that possesses considerable intelligence, and yet that is not able to pass The Turing Test. (Examples: Intelligent creatures might fail to pass The Turing Test because they do not share our way of life; intelligent creatures might fail to pass The Turing Test because they refuse to engage in games of pretence; intelligent creatures might fail to pass The Turing Test because the pragmatic conventions that govern the languages that they speak are so very different from the pragmatic conventions that govern human languages. Etc.) None of these can constitute an objection to The Turing Test unless The Turing Test delivers necessary conditions for the attribution of intelligence.
3.2.2 Logically Sufficient Conditions
There are many philosophers who have supposed that The Turing Test is intended to provide logically sufficient conditions for the attribution of intelligence. That is, there are many philosophers who have supposed that The Turing Test claims that it is logically impossible for something that lacks intelligence to pass The Turing Test. (Often, this supposition goes with an interpretation according to which passing The Turing Test requires rather a lot, e.g., producing behavior that is indistinguishable from human behavior over an entire lifetime.)

There are well-known arguments against the claim that passing The Turing Test - or any other purely behavioral test - provides logically sufficient conditions for the attribution of intelligence. The standard objection to this kind of analysis of intelligence (mind, thought) is that a being whose behavior was produced by brute force methods ought not to count as intelligent (as possessing a mind, as having thoughts).

One example is Ned Block's Blockhead. Blockhead is a creature that looks just like a human being, but that is controlled by a game-of-life look-up tree, i.e. by a tree that contains a programmed response for every discriminable input at each stage in the creature's life. If we agree that Blockhead is logically possible, and if we agree that Blockhead is not intelligent (does not have a mind, does not think), then Blockhead is a counter-example to the claim that the Turing Test provides a logically sufficient condition for the ascription of intelligence. After all, Blockhead could be programmed with a look-up tree that produces responses identical with the ones that you would give over the entire course of your life (given the same inputs).
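To make the look-up-tree idea concrete, here is a toy sketch. The tiny hand-written table below is my own illustration; Blockhead's tree would instead contain a programmed response for every discriminable conversation history over an entire lifetime.

# A toy illustration of the look-up-tree idea behind Blockhead.
# Responses are selected purely by the literal conversation so far;
# no reasoning or understanding is involved.

LOOKUP_TREE = {
    (): "Hello.",
    ("Hello.",): "Nice to meet you. How are you?",
    ("Hello.", "Fine, thanks. Can machines think?"):
        "That depends on what you mean by 'think'.",
}

def blockhead_reply(history):
    """Return the pre-programmed response for this exact conversation history."""
    return LOOKUP_TREE.get(tuple(history), "I'd rather talk about something else.")

# Example: the replies are fixed in advance, yet with a large enough tree
# they could match a human's answers over a whole lifetime.
conversation = ["Hello.", "Fine, thanks. Can machines think?"]
print(blockhead_reply(conversation))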

There are perhaps only two ways in which someone who claims that The Turing Test offers logically sufficient conditions for the attribution of intelligence can respond to Block's argument. First, it could be denied that Blockhead is a logical possibility; second, it could be claimed that Blockhead would be intelligent (have a mind, think).

In order to deny that Blockhead is a logical possibility, it seems that what needs to be denied is the commonly accepted link between conceivability and logical possibility: it certainly seems that Blockhead is conceivable, and so, if (properly circumscribed) conceivability is sufficient for logical possibility, then it seems that we have good reason to accept that Blockhead is a logical possibility. Since it would take us too far away from our present concerns to explore this issue properly, we merely note that it remains a controversial question whether (properly circumscribed) conceivability is sufficient for logical possibility.

The question of whether Blockhead is intelligent (has a mind, thinks) may seem straightforward, but - despite Block's confident assertion that Blockhead has all of the intelligence of a toaster - it is not completely obvious that we should deny that Blockhead is intelligent. True enough, Blockhead is a particularly inefficient processor of information; but it is at least a processor of information, and that - in combination with the behavior that is produced as a result of the processing of information - might well be taken to be sufficient grounds for the attribution of some level of intelligence to Blockhead.
3.2.3 Criteria
In his Philosophical Investigations, Wittgenstein famously writes: 'An inner process stands in need of outward criteria.' Exactly what Wittgenstein meant by this remark is unclear, but one way in which it might be interpreted is as follows: in order to be justified in ascribing a mental state to some entity, there must be some true claims about the observable behavior of that entity that, (perhaps) together with other true claims about that entity (not themselves couched in mentalistic vocabulary), entail that the entity has the mental state in question. If no true claims about the observable behavior of the entity can play any role in the justification of the ascription of the mental state in question to the entity, then there are no grounds for attributing that kind of mental state to the entity.

The claim that, in order to be justified in ascribing a mental state to an entity, there must be some true claims about the observable behavior of that entity that alone - i.e. without the addition of any other true claims about that entity - entail that the entity has the mental state in question, is a piece of philosophical behaviorism. It may be - for all that we are able to argue - that Wittgenstein was a philosophical behaviorist; it may be - for all that we are able to argue - that Turing was one, too. However, if we go by the letter of the account given in the previous paragraph, then all that need follow from the claim that the Turing Test is criterial for the ascription of intelligence (thought, mind) is that, when other true claims (not themselves couched in terms of mentalistic vocabulary) are conjoined with the claim that an entity has passed the Turing Test, it then follows that the entity in question has intelligence (thought, mind).

To see how the claim that the Turing Test is merely criterial for the ascription of intelligence differs from the logical behaviorist claim that the Turing Test provides logically sufficient conditions for the ascription of intelligence, it suffices to consider the question of whether it is nomically possible for there to be a hand simulation of a Turing Test program. Many people have supposed that there is good reason to deny that Blockhead is a nomic (or physical) possibility. For example, in The Physics of Immortality, Frank Tipler provides the following argument in defence of the claim that it is physically impossible to hand simulate a Turing-Test-passing program:

"If my earlier estimate that the human brain can code as much as 1015 bits is correct, then since an average book codes about 106 bits it would require more than 100 million books to code the human brain. It would take at least thirty five-story main university libraries to hold this many books. We know from experience that we can access any memory in our brain in about 100 seconds, so a hand simulation of a Turing Test-passing program would require a human being to be able to take off the shelf, glance through, and return to the shelf all of these 100 million books in 100 seconds. If each book weighs about a pound (0.5 kilograms), and on the average the book moves one yard (one meter) in the process of taking it off the shelf and returning it, then in 100 seconds the energy consumed in just moving the books is 3 x 1019 joules; the rate of energy consumption is 3 x 1011 megawatts. Since a human uses energy at a normal rate of 100 watts, the power required is the bodily power of 3 x 1015 human beings, about a million times the current population of the entire earth. A typical large nuclear power plant has a power output of 1,000 megawatts, so a hand simulation of the human program requires a power output equal to that of 300 million large nuclear power plants. As I said, a man can no more hand-simulate a Turing Test-passing program than he can jump to the Moon. In fact, it is far more difficult."

