
Can Computers Think? Alan Turing's Answers to Criticisms

lpetrich

I.—COMPUTING MACHINERY AND INTELLIGENCE | Mind | Oxford Academic by Alan Turing
I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
He then goes on to describe what he called the imitation game. It looks at a machine from the outside and asks whether it can act as if it thinks, a sort of behaviorist definition of thinking.

Something like this:
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
The arithmetic answer is incorrect; the sum is really 105721. It seems like a deliberate effort to simulate human thinking, complete with forgetting a carry digit.
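A quick check in Python (my own illustration, nothing from the paper): the correct sum is 105721, and dropping the carry out of the tens column reproduces the machine's 105621 exactly.

Code:
a, b = 34957, 70764
print(a + b)                                 # 105721, the correct sum

# Redo the sum column by column, but "forget" the carry leaving the
# tens column, the way a distracted human adder might.
da = [int(d) for d in str(a)][::-1]          # least-significant digit first
db = [int(d) for d in str(b)][::-1]
digits, carry = [], 0
for col, (x, y) in enumerate(zip(da, db)):
    s = x + y + carry
    digits.append(s % 10)
    carry = 0 if col == 1 else s // 10       # drop the carry after the tens column
print(int(''.join(str(d) for d in reversed(digits + [carry]))))   # 105621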

The chess notation is descriptive notation, which has gone out of style; what's universally used nowadays is algebraic notation. Translating the position has some ambiguity, since "R at R1" could mean either rook's file. Here's one reading in algebraic notation: white king at e1, black king at e3, black rook at h8, Black to move. To win, Black plays Rh1, checkmate.
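For what it's worth, that reading can be checked with the third-party python-chess library. This is my own verification sketch, and the FEN string below encodes my translation of the position, which is an assumption about which rook file Turing meant.

Code:
import chess   # python-chess package: pip install chess

# White king e1, black king e3, black rook h8, Black to move.
board = chess.Board("7r/8/8/8/8/4k3/8/4K3 b - - 0 1")
board.push_san("Rh1")          # the machine's reply, R-R8 in descriptive notation
print(board.is_checkmate())    # True: a back-rank mate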

As to the Forth Bridge, I'm unfamiliar with it. I'd have more to say about the Golden Gate Bridge, however.
 
Alan Turing then explains how computers work, and what he describes is still true of even the newest CPU chips.

From his description, the "Manchester machine" had about 165,000 bits of memory or a bit more than 20 kilobytes.

My present computer's CPU is a Kaby Lake Intel Core i5 chip, as far as I can tell, and its level-1 cache is 64 kilobytes per core -- more than the Manchester machine's entire RAM. I think that the only place that one can find a computer with that little RAM nowadays is some embedded computer in something like a microwave oven.

Now to my main subject, "6. Contrary Views on the Main Question"

He stated "I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent, chance of making the right identification after five minutes of questioning."

That is, roughly 125 megabytes of RAM (10^9 bits at eight bits per byte). Looking back to the year 2000, the first part of the prediction, the storage capacity, came true, but the second part, fooling the average interrogator, did not. It still has not come true.
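The unit conversions, for anyone who wants to check them (my own arithmetic, not anything in the paper):

Code:
manchester_bits = 165_000          # Turing's figure for the Manchester machine
predicted_bits  = 10**9            # his prediction for fifty years later

print(manchester_bits / 8 / 1024)  # ~20.1 kilobytes
print(predicted_bits / 8 / 10**6)  # 125 megabytes (about 119 MiB)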
 
(1) The Theological Objection

Thinking is a function of man's immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.

I am unable to accept any part of this, but will attempt to reply in theological terms.
He argued that God can place a soul in a computer if he wants to.

(2) The ‘Heads in the Sand’ Objection

“The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.”

(3) The Mathematical Objection

There are a number of results of mathematical logic which can be used to show that there are limitations to the powers of discrete-state machines.
Like Gödel's theorem, which states that any consistent formal system that contains Peano's axioms of arithmetic has true statements that cannot be proved within the system. The key example is a statement G that in effect asserts "G is not provable in this system", a mathematical version of the liar paradox.

Another limitation was discovered by AT himself, using an abstraction of a computer that he invented, the Turing machine. He showed that one cannot write a Turing machine that tests whether an arbitrary other Turing machine will ever halt. He also proved that a sufficiently capable Turing machine can emulate any other one: "Turing universality".
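Here is the flavor of the halting-problem argument as a Python sketch of my own (not Turing's own notation): assume a function halts() that could decide the question, then build a program that does the opposite of whatever halts() predicts about it.

Code:
def halts(f):
    """Hypothetical oracle: return True if calling f() would eventually stop.
    No such function can exist; it stands in here only for the argument."""
    raise NotImplementedError

def contrary():
    # Ask the oracle about ourselves, then do the opposite.
    if halts(contrary):
        while True:       # the oracle said we halt, so loop forever
            pass
    # the oracle said we loop forever, so return at once

# Whatever halts(contrary) answers, contrary() does the reverse,
# so no correct halts() can be written.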

A system is "Turing-complete" if it supports arbitrary array indexing and arbitrary transfer of control. Every "true" CPU is Turing-complete, with the exception of having only finite resources, though some computing hardware, like FPGA's, are not. On the software side, most programming languages are also Turing-complete, though some common Turing-incomplete ones are HTML and CSS.

Still another was Alonzo Church's discovery that there is no lambda-calculus function that can test whether two other lambda-calculus functions are equivalent. The lambda calculus is essentially a way of doing everything with functions, and the "lambda" comes from Church's notation for the functions he worked with.
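Church numerals give the flavor of "everything is a function". In this little sketch of my own, using Python lambdas as stand-ins for lambda-calculus terms, the number n is represented as the function that applies another function n times.

Code:
zero = lambda f: lambda x: x                          # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))       # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting how many times it applies f."""
    return n(lambda k: k + 1)(0)

two, three = succ(succ(zero)), succ(succ(succ(zero)))
print(to_int(add(two)(three)))    # 5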

AT:
The short answer to this argument is that although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect. But I do not think this view can be dismissed quite so lightly.
 
(4) The Argument from Consciousness

This argument is very well expressed in Professor Jefferson's Lister Oration for 1949, from which I quote. “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”

This argument appears to be a denial of the validity of our test.
In effect, one has to be that machine. But as AT himself points out, that argument also works for other people, and it implies that one can only know that one oneself is conscious.

AT then stated that "I am sure that Professor Jefferson does not wish to adopt the extreme and solipsist point of view."

Solipsism is the theory that only one's own mind exists.
 
(5) Arguments from Various Disabilities

These arguments take the form, “I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X”. Numerous features X are suggested in this connexion. I offer a selection:

Be kind, resourceful, beautiful, friendly (p. 448), have initiative, have a sense of humour, tell right from wrong, make mistakes (p. 448), fall in love, enjoy strawberries and cream (p. 448), make some one fall in love with it, learn from experience (pp. 456 f.), use words properly, be the subject of its own thought (p. 449), have as much diversity of behaviour as a man, do something really new (p. 450). (Some of these disabilities are given special consideration as indicated by the page numbers.)
Some machines look very beautiful to many of us, like some cars, ships, and airplanes.
No support is usually offered for these statements. I believe they are mostly founded on the principle of scientific induction. A man has seen thousands of machines in his lifetime. From what he sees of them he draws a number of general conclusions. They are ugly, each is designed for a very limited purpose, when required for a minutely different purpose they are useless, the variety of behaviour of any one of them is very small, etc., etc. Naturally he concludes that these are necessary properties of machines in general. Many of these limitations are associated with the very small storage capacity of most machines. (I am assuming that the idea of storage capacity is extended in some way to cover machines other than discrete-state machines. The exact definition does not matter as no mathematical accuracy is claimed in the present discussion.)
He then goes into detail about some of these supposed disabilities.

(6) Lady Lovelace's Objection

Our most detailed information of Babbage's Analytical Engine comes from a memoir by Lady Lovelace. In it she states, “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform” (her italics). ...

A variant of Lady Lovelace's objection states that a machine can ‘never do anything really new’.
But over a lifetime of using and programming computers, I've experienced oodles of odd behavior.
The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the mere working out of consequences from data and general principles.

(7) Argument from Continuity in the Nervous System

The nervous system is certainly not a discrete-state machine. A small error in the information about the size of a nervous impulse impinging on a neuron, may make a large difference to the size of the outgoing impulse. It may be argued that, this being so, one cannot expect to be able to mimic the behaviour of the nervous system with a discrete-state system.
That strikes me as very weak. All one needs to do is have sufficient resolution, a sufficient number of bits in one's numerical values.
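A rough way to see the point (my own sketch): quantize a continuous value to a fixed number of bits and watch the worst-case error shrink geometrically as bits are added.

Code:
def quantize(x, bits):
    """Round x in [0, 1) to the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits
    return round(x * levels) / levels

signal = 0.7317529            # stand-in for the size of a nerve impulse
for bits in (4, 8, 16, 24):
    q = quantize(signal, bits)
    print(f"{bits:2d} bits: {q:.7f}  error {abs(q - signal):.1e}")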
 
(8) The Argument from Informality of Behaviour

It is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances. One might for instance have a rule that one is to stop when one sees a red traffic light, and to go if one sees a green one, but what if by some fault both appear together? One may perhaps decide that it is safest to stop. But some further difficulty may well arise from this decision later. To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible. With all this I agree.

From this it is argued that we cannot be machines.
But even very simple formal systems can produce enormous complexity. In 1970 John Conway discovered a cellular automaton with exactly that property, his Game of Life. Likewise, fractal shapes often have great complexity, and they too are generated by simple rules.
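Here is the Game of Life in a few lines of Python (a standard formulation of the rules, not anything from Turing's paper), run on a "glider", a five-cell pattern that crawls across the grid forever.

Code:
from collections import Counter

def life_step(live):
    """One generation; `live` is the set of (x, y) coordinates of live cells."""
    neighbours = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    # A cell is alive next step if it has 3 neighbours, or 2 and is alive now.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))    # the same five-cell shape, shifted one square diagonally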

(9) The Argument from Extra-Sensory Perception

I assume that the reader is familiar with the idea of extra-sensory perception, and the meaning of the four items of it, viz. telepathy, clairvoyance, precognition and psycho-kinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one's ideas so as to fit these new facts in. Once one has accepted them it does not seem a very big step to believe in ghosts and bogies. The idea that our bodies move simply according to the known laws of physics, together with some others not yet discovered but somewhat similar, would be one of the first to go.

This argument is to my mind quite a strong one. One can say in reply that many scientific theories seem to remain workable in practice, in spite of clashing with E.S.P.; that in fact one can get along very nicely if one forgets about it. This is rather cold comfort, and one fears that thinking is just the kind of phenomenon where E.S.P. may be especially relevant.
This is likely a reference to Samuel Soal's experiments: The ESP Experiments of Soal · Rufus Pollock Online
 
Alan Turing then has a big section on "Learning Machines" but I'll leave off here.
 