
Could a Machine Think?

By Moses Lemuel

This question is certainly not new. In 1950, Alan Turing devised a test to answer it. A machine needs only to exhibit the ability to communicate like a human being to pass what is now known as the Turing Test (TT). However, does it really prove that a machine thinks? In this short essay, I will examine the possibility of thinking machines with reference to the TT, since it has served as a useful point of reference in discussions on artificial intelligence (AI). I will find that, as with many controversial questions, there is no definite answer. Yet there is some reason to believe that a thinking machine is plausible, with the caveat that mere imitation of human behaviour is probably not enough to prove it.

The TT is a modified form of a party imitation game that involves a judge, a man and a woman. A machine takes the place of one of the participants, excluding the judge. Although the rules for the versions Turing proposed are more complex, the standard interpretation of the TT is simply one where a judge communicates with a machine and a person without being able to see them. He directs a question at one of them at a time and attempts, through his questioning, to determine which of them is the machine. The machine tries to convince the judge that it is the person, while the person helps the judge identify the machine. If the machine succeeds in its goal, it passes. The rationale behind this is that if a machine exhibits the ability to communicate like a person, then it would functionally be a thinking object, just like a person. This assertion is rooted in the functionalist school of thought, which holds that "what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions" (Levin, 2004).
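To make the standard interpretation concrete, here is a minimal sketch of the protocol in Python. Everything in it is an assumption for illustration: the respondent functions are trivial placeholders, and the judge's adaptive questioning is simplified to a fixed list of questions.

```python
import random

def machine_respond(question: str) -> str:
    # Placeholder: a real entrant would try to generate human-like replies.
    return "That depends. What would you say?"

def human_respond(question: str) -> str:
    # Placeholder for the human confederate, who helps the judge.
    return "I am the human participant, I promise."

def run_turing_test(questions, judge) -> bool:
    """One round of the standard-interpretation TT.

    The judge sees only labelled transcripts from two hidden respondents
    and must name the machine; the machine passes if it is misidentified.
    """
    labels = ["A", "B"]
    random.shuffle(labels)                       # hide who is behind which label
    assignment = {labels[0]: machine_respond, labels[1]: human_respond}

    transcripts = {label: [(q, fn(q)) for q in questions]
                   for label, fn in assignment.items()}

    accused = judge(transcripts)                 # judge returns "A" or "B"
    machine_label = labels[0]
    return accused != machine_label              # True: the machine passed

# A deliberately naive judge that trusts whoever claims to be human.
def naive_judge(transcripts):
    for label, exchanges in transcripts.items():
        if any("human" in answer for _, answer in exchanges):
            return "B" if label == "A" else "A"  # accuse the other respondent
    return random.choice(list(transcripts))

print(run_turing_test(["What is it like to taste coffee?"], naive_judge))
```

The point of the sketch is only the shape of the test: the judge's verdict is a function of conversational behaviour alone, which is exactly the functionalist premise at issue.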


There are many who disagree with this. John Searle has, with his 'Chinese room' thought experiment, offered a seminal critique. In the experiment, he imagines being isolated in a room furnished with three sets of Chinese writing and rules written in English, which correlate the second set with the first and the third with the other two. The rules also instruct him to write the appropriate Chinese characters in response to the third set. With no knowledge of Chinese, and without knowing that the three sets correspond to a set of Chinese script, a story and a set of questions respectively, he might unknowingly be able to follow the rules and write, in Chinese script, the correct answers to the questions. The answers might even be "indistinguishable from those of native Chinese speakers" (Searle, 1980). Yet, in this exercise, no understanding is involved at all. Thus, a machine might be able to "manipulate formal symbols", as Searle was doing in the Chinese room, but in doing so it would not possess any intentionality, which only arises from an understanding of the content (Searle, 1980). Hence, it cannot think in the sense that human beings think.
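Searle's scenario translates almost directly into code, which makes the "manipulation of formal symbols" vivid. A minimal sketch, with an invented rule table standing in for the English rule books: nothing in the program represents what any symbol means.

```python
# The rule books become a lookup table from incoming symbol strings
# (questions about the story) to outgoing symbol strings (answers).
# The question/answer pairs are invented for illustration.
RULES = {
    "故事里有谁？": "一个男人和他的狗。",
    "狗做了什么？": "狗救了那个男人。",
}

def chinese_room(symbols: str) -> str:
    # Apply the rules mechanically; emit a stock string when none match.
    return RULES.get(symbols, "这个故事没有说。")

print(chinese_room("狗做了什么？"))  # looks like comprehension; it is lookup
```

The function would behave identically if every character were replaced by an arbitrary token; only the shapes of the symbols are ever consulted, which is Searle's point about the absence of intentionality.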

Searle's conclusion demands that a machine possess "intrinsic intentionality" (Searle, 1980) before it could be considered a thinking machine.

However, there is also some practical criticism of the TT and its ability to prove a thinking machine. A referee in the first TT-inspired Loebner Prize competition, held in 1991, reported negatively on it. He noted that trickery had prevailed, as the winning computer program was imitating 'whimsical conversation', which was unfalsifiable since it was nonsensical (Shieber, 1993). Ned Block, also a referee, took this observation to its conclusion by criticising the TT itself as "a sorely inadequate test of intelligence because it relies solely on the ability to fool people", stating that such a test is "confoundingly simple to pass" (Shieber, 1993). The sole requirement of human imitation is evidently too limited, prone to mistaking crafty programming for thinking ability. Such practical criticism and Searle's criticism both indicate that thinking machines are not so easily realised, and are perhaps unforeseeable at the current technological level.

Nonetheless, AI and behavioural experts have come up with some good responses to sceptics. One is to say that understanding is precisely the manipulation of formal symbols through the application of rules, and that computers do this just as a child does when she learns to add (Abelson, 1980). Consequently, understanding improves simply when "more and more rules about a given content are incorporated" (Abelson, 1980). This implies that better programming might confer on a computer the ability to understand, and with understanding, it would be capable of thinking.
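Abelson's analogy is easy to make concrete. Below is a sketch of column addition performed purely by rule application: the only 'knowledge' is a table of single-digit facts plus a carry rule, much like what a child memorises, with no notion of quantity anywhere in the procedure.

```python
# Single-digit addition facts, the kind a child learns by rote.
# Each entry maps a pair of digits to (carry, units digit).
ADD_FACTS = {(a, b): divmod(a + b, 10) for a in range(10) for b in range(10)}

def add_numerals(x: str, y: str) -> str:
    """Add two decimal numeral strings right to left by applying rules."""
    x, y = x.zfill(len(y)), y.zfill(len(x))   # pad the shorter numeral
    out, carry = [], 0
    for a, b in zip(reversed(x), reversed(y)):
        fact_carry, digit = ADD_FACTS[(int(a), int(b))]   # look up the fact
        extra_carry, digit = divmod(digit + carry, 10)    # apply the carry rule
        out.append(str(digit))
        carry = fact_carry + extra_carry                  # never exceeds 1
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

print(add_numerals("478", "9645"))  # -> 10123
```

Whether rule-following of this kind ever amounts to understanding is, of course, exactly what is in dispute; the sketch only shows that correct behaviour can come apart from any grasp of number.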

However, human beings possess sensorimotor learning capabilities, which allow us to understand the world in ways that the mere application of rules to symbols in the abstract cannot. As long as machines are unable to experience the world as we do, it would be possible to maintain that they are not capable of understanding nearly as much, and therefore cannot be fully capable of thinking.

The idea of the 'super-robot' has been proposed as a solution to this problem. This hypothesis accepts that understanding entails having "all of the information necessary to construct a representation of events in the outside world", and that this must be accomplished by the mental manipulation of symbols that represent the outside world and checking them against the 'rules' established by sensory experience (Bridgeman, 1980).
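A toy sketch of what checking symbols against rules established by sensory experience might look like; the sensor readings, the symbol, and the in-range test are all invented for illustration and stand in for genuine sensorimotor learning.

```python
class GroundedSymbol:
    """A symbol whose application rule is induced from sensory episodes."""

    def __init__(self, name: str):
        self.name = name
        self.readings: list[float] = []

    def experience(self, sensor_value: float) -> None:
        # Sensorimotor learning step: accumulate raw experience of the world.
        self.readings.append(sensor_value)

    def applies_to(self, sensor_value: float) -> bool:
        # Check a new percept against the rule established by experience:
        # here, crudely, 'within the range of values seen so far'.
        return min(self.readings) <= sensor_value <= max(self.readings)

hot = GroundedSymbol("hot")
for reading in (71.0, 68.5, 75.2):      # simulated thermistor values
    hot.experience(reading)

print(hot.applies_to(70.1))  # True: fits the robot's own experience
print(hot.applies_to(12.0))  # False: the grounding rule rejects it
```

The contrast with the Chinese room table is the point: here the rule is not handed to the system but built from its own contact with the world.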


Naturally, what is hence needed to fulfil the vision of a thinking machine with full person-like intentionality is a robot that is capable of sensorimotor learning (Bridgeman, 1980). And as a testament to the completeness of this idea, Searle has expressed his agreement that such a robot might indeed be a thinking machine (Searle, 1980). Thus, we have arrived at a possible answer. Although our treatment of the question is certainly far from comprehensive, we can reasonably infer from it that machines could think. However, proving this would require much more than a trial by the standard TT.

Bibliography:

Abelson, P. (1980). 'Searle's argument is just a set of Chinese symbols', commentary/Searle: 'Minds, brains, and programs'. The Behavioral and Brain Sciences. 3: 424-425.

Bridgeman, B. (1980). 'Brains + programs = minds', commentary/Searle: 'Minds, brains, and programs'. The Behavioral and Brain Sciences. 3: 427-428.

Levin, J. (2004). 'Functionalism'. The Stanford Encyclopedia of Philosophy. Available at http://plato.stanford.edu/entries/functionalism/ [Accessed 21 July 2009].

Oppy, G. and Dowe, D. (2008). 'The Turing Test'. The Stanford Encyclopedia of Philosophy. Available at http://plato.stanford.edu/entries/turing-test/ [Accessed 21 July 2009].

Searle, J. (1980). 'Minds, brains, and programs'. The Behavioral and Brain Sciences. Available at http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html [Accessed 21 July 2009].

Searle, J. (1980). 'Intrinsic intentionality', response/Searle: 'Minds, brains, and programs'. The Behavioral and Brain Sciences. 3: 450-456.

Shieber, S. (1993). Lessons from a Restricted Turing Test. Available at http://www.eecs.harvard.edu/shieber/Biblio/Papers/loebner-rev-html/loebner-rev-html.html [Accessed 21 July 2009].

Moses Lemuel is a third-year undergraduate reading PPE at the University of York.
