"Minds and Machines"
"Is the Brain A Machine?"
John Searle thinks so . Is he right ?
To give a typically philosophical answer, that depends on what you mean by 'machine'.
If 'machine' means an artificial construct, then the answer is obviously 'no'.
However, Searle also thinks that the body is a machine, by which he seems to mean
that it has been understood in scientific terms: we can explain biology in terms of
chemistry, and chemistry in terms of physics. Is the brain a machine by this definition?
The job of the brain is to implement a conscious mind,
just as the job of the stomach is to digest. The problem is that although our
'mechanical' understanding of the stomach does allow us to understand digestion,
we do not, according to Searle himself, understand how the brain produces consciousness.
He does think that the problem of consciousness is scientifically explicable,
so yet another definition of 'machine' is needed, namely 'scientifically
explained or scientifically explicable' -- with the brain being explicable
rather than explained. The problem with this stretch-to-fit approach
to the meaning of the word 'machine' is that every time the definition is
broadened, the claim that the brain is a machine is weakened, made less impactful.
PDJ 03/02/03
The Chinese Room
According to the proponents of Artificial Intelligence, a system is intelligent
if it can convince a human interlocutor that it is. This is the famous Turing Test. It focuses on external behaviour and is silent about how that behaviour is produced. A rival idea is that of the Chinese Room, due to John Searle. Searle
places himself in the room, manually executing a computer algorithm that implements intelligent-seeming behaviour -- in this case receiving questions written in Chinese and mechanically producing answers -- without himself understanding Chinese. He thereby focuses attention on how the supposedly intelligent behaviour is produced. Although Searle's original idea was aimed at semantics, my variation is going to focus on consciousness. Likewise, although Searle's original specification has him implementing complex rules, I am
going to take it that the Chinese Room is implemented as a conceptually simple system, in line with the computer-science result that any computer can be emulated by a Turing Machine.
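To fix ideas, such a "dumb" room might be nothing more than a lookup table. A minimal Python sketch (my own toy illustration; the rulebook entries are invented placeholders, not a serious Q&A corpus):

    # A deliberately "dumb" Chinese Room: the rulebook is a bare lookup
    # table from input strings to output strings. Whoever (or whatever)
    # executes it need understand none of the symbols.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",       # "How are you?" -> "Fine, thanks."
        "你是谁？": "我是一个房间。",       # "Who are you?" -> "I am a room."
    }

    def chinese_room(question):
        # Unrecognised input gets a stock deflection: "Please repeat that."
        return RULEBOOK.get(question, "请再说一遍。")

    print(chinese_room("你好吗？"))

The interpreter executing this understands no Chinese; the question is whether mechanically matching symbols to symbols could ever amount to more.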
If you think a Chinese Room implemented with a simplistic, "dumb" algorithm can still be conscious, you are probably a behaviourist; you only care that external stimuli get translated into the appropriate responses, not how this happens, let alone what it feels like to the system in question.
If you think this dumb Chinese Room is not conscious, but a smart one would be, you need to explain why. Any smart AI can be implemented as a dumb TM, so the more complex inner workings which supposedly implement consciousness could be added or subtracted without making any detectable difference. This seems to add up to epiphenomenalism, the view that consciousness exists but is a bystander that doesn't cause anything.
If you think that complexity makes a difference, so dumb AIs are definitely not conscious, you are probably a (strong) emergentist: the complexity is of such a nature that it cannot be reduced to a dumb algorithm such as a Turing machine, and is therefore non-computable.
(A couple of points: I am talking about strong, Broad-style emergentism here, according to which the emerging property cannot be usefully reduced to something simpler. There is also a weaker sense of 'emergence' in which what emerges is a mere pattern of behaviour, like the way the waves 'emerge' from the sea. This is not quite black-box behaviourism, since some attention is being paid to what goes on inside the head, inasmuch as the right high-level pattern needs to be produced, but it is closer to behaviourism than
to strong, irreducible, Broad-style emergentism.
Also: I casually remarked above that mental behaviour may not be computable. This will shock some AI proponents, for whom the Church-Turing thesis proves that everything is computable. Firstly, the thesis is more modest than that: it says that everything that is mathematically computable is computable by a relatively dumb computer, a Turing Machine. It does not say that everything -- mental behaviour included -- is mathematically computable in the first place. Secondly,
that something can be simulated doesn't mean the simulation has all the relevant properties of the original: flight simulators don't take off.
Thirdly, the mathematical sense of 'computable' doesn't fit well with the idea of computer-simulating fundamental physics. A real number is said to be mathematically computable if the algorithm that churns it out keeps on churning out extra digits of accuracy, indefinitely.
Since such an algorithm will never finish churning out even a single real-numbered physical value, it is difficult to see how it could simulate
an entire universe. Yes, I am assuming the universe is fundamentally made of real numbers. If it is not -- if it is, for instance, finite -- fundamental physics might be more readily computable, but the computability of physics still depends very much on physics and not just on computer science.)
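To see what churning out digits indefinitely looks like, here is a minimal Python sketch (my own illustration, not part of the argument) that computes the real number sqrt(2): the generator yields digit after digit and never arrives at a finished value.

    from math import isqrt

    def sqrt2_digits():
        # Decimal digits of sqrt(2), one at a time, forever. Each pass
        # shifts two more decimal places into n, so isqrt(n) pins down
        # exactly one further digit; the loop never terminates.
        n, prev = 2, 0
        while True:
            cur = isqrt(n)         # integer square root at current precision
            yield cur - prev * 10  # the newly determined digit
            prev = cur
            n *= 100

    gen = sqrt2_digits()
    print([next(gen) for _ in range(8)])   # [1, 4, 1, 4, 2, 1, 3, 5]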
Peter D Jones 8/6/05
The Chinese Room and Semantics
The CR argument concludes that an abstract set of rules is insufficient for semantics.
The objection goes: "But there must be some kind of information processing structure that implements
meaning in our heads. Surely that could be turned into rules for the operator
of the Chinese Room".
The Circularity Objection: an abstract structure must be circular and
therefore must fail to have any real semantics.
(It is plausible that any given term can be given an abstract definition that doesn't
depend on direct experience.
It is much less plausible that *every* term can be defined that
way. Such a system would be circular in the same way as:
"present: gift"
"gift: present"
but on a larger scale. If this argument, the Circularity Objection, is correct,
the practice of giving abstract definitions, like "equine quadruped" for "horse", only
works because somewhere in the chain of definitions are words that
have been defined directly; direct reference has been merely deferred, not
avoided altogether.)
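The circularity is easy to exhibit mechanically. A toy Python sketch (my own; the two-entry dictionary is just the example above):

    # A dictionary in which every definition is only more words from the
    # same dictionary: chasing a meaning never bottoms out.
    DEFS = {"present": "gift", "gift": "present"}

    def ground(word, seen=()):
        # Follow definitions until something is defined directly -- but
        # in a purely abstract system, nothing ever is.
        if word in seen:
            return "circular: " + " -> ".join(seen + (word,))
        return ground(DEFS[word], seen + (word,))

    print(ground("present"))   # circular: present -> gift -> present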
The objection continues: "But the information processing structure in our heads has a concrete connection
to the real world: so do AIs (although the Chinese Room's are minimal)."
(This is the Portability Assumption.)
But they are not the *same* concrete connections. The portability of
abstract rules is guaranteed by the fact that they are abstract.
But concrete causal connections are non-abstract, and are prima facie
unlikely to be portable -- how can you explain colour to an alien
whose senses do not include anything like vision?
If the Circularity Objection is correct, an AI (particularly
a robotic one) could be expected to have *some*
semantics, but there is no reason it should have
*human* semantics. As Wittgenstein said:
"if a lion could talk, we could not understand it".
Peter D Jones 13/11/05
Artificial Intelligence and Computers
An AI is not necessarily a computer. Not everything is a computer or computer-emulable. It just needs to be artificial and
intelligent! The extra ingredient a conscious system has need not be anything other than the physics (chemistry, biology) of
its hardware -- there is no forced choice between ghosts and machines.
A physical system can never be exactly emulated with different hardware -- the difference has to show up somewhere. It can be
hidden by dealing only with the subset of a system's abilities relevant to the job in hand; a brass key can open a door as well as an
iron key, but brass cannot be substituted for iron where magnetism is relevant. Physical differences can also be evaded by taking an
abstract view of their functioning; two digital circuits might be considered equivalent at the "ones and zeros" level of description even though they
physically work at different voltages.
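That last point can be sketched in a few lines of Python (my own illustration; the voltage thresholds are invented): two "circuits" working at different voltages, yet indistinguishable at the ones-and-zeros level of description.

    # Two logic families with different voltages; thresholds invented.
    def circuit_a(volts):                    # nominal 5 V family
        return [1 if v > 2.5 else 0 for v in volts]

    def circuit_b(volts):                    # nominal 1.8 V family
        return [1 if v > 0.9 else 0 for v in volts]

    print(circuit_a([0.1, 4.8, 0.2, 4.9]))   # [0, 1, 0, 1]
    print(circuit_b([0.0, 1.7, 0.1, 1.8]))   # [0, 1, 0, 1] -- same abstract behaviour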
Thus computer-emulability is not a property of physical systems as such. Even if all physical laws are computable,
that does not mean that any physical system can be fully simulated. The reason is that the level
of simulation matters. A simulated plane does not actually fly; a simulated game of
chess really is chess. There seems to be a distinction between things like chess,
which can survive being simulated at a higher level of abstraction, and planes,
which can't. Moreover, it seems that chess-like things are in the minority,
and that they can be turned into an abstract programme and adequately simulated because they
are already abstract.
Consciousness might depend on specific properties of
hardware, of matter. This does not imply parochialism, the attitude that denies consciousness to poor Mr Data just because he is
made out of silicon, not protoplasm. We know our own brains are conscious; most of us intuit that rocks and dumb Chinese Rooms
are not; all other cases are debatable.
Of course, all current research in AI is based on computation in one way or another. If the Searlian idea that consciousness is
rooted in physics, strongly emergent, and non-computable is correct, then current AI can only achieve consciousness accidentally. A
Searlian research project would first seek to understand how brains generate consciousness -- the aptly-named Hard Problem --
before moving on to possible artificial reproductions, which would have to have the right kind of physics and internal causal
activity -- although not necessarily the same kind as humans.
Computationalism
Computationalism is the claim that the human mind is essentially
a computer. It can be picturesquely expressed in the "yes, doctor" hypothesis --
the idea that, faced with a terminal disease, you would consent to having
your consciousness downloaded to a computer.
There are two ambiguities in "computationalism" --
consciousness vs. cognition, process vs. programme -- leading
to a total of four possible meanings.
Most people would not say "yes, doctor" to a process that recorded their
brain on a tape and left it in a filing cabinet. Yet that is all you can get out
of the timeless world of Plato's heaven (programme vs. process).
That intuition is, I think, rather stronger than the intuition that Maudlin's argument relies
on: that consciousness supervenes only on brain activity, not on
counterfactuals.
But the other ambiguity in computationalism offers another way out. If only cognition
supervenes on computational (and hence counterfactual) activity, then
consciousness could supervene on non-counterfactual activity -- i.e.
they could both supervene on physical processes, but in different ways.
Computational counterfactuals, and the Computational-Platonic Argument for Immaterial Minds
For one, there is the argument that:
A computer programme is just a long number, a string of 1's and 0's.
(All) numbers exist Platonically (according to Platonism).
Therefore, all programmes exist Platonically.
A mind is a special kind of programme (according to computationalism).
All programmes exist Platonically (previous argument).
Therefore, all possible minds exist Platonically.
Therefore, a physical universe is unnecessary -- our minds
exist already in the Platonic realm.
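The first premise, at least, is easy to make concrete. A throwaway Python sketch (my own illustration):

    # Read one way, any programme text is just a long number: interpret
    # its source bytes as the base-256 digits of an integer.
    source = 'print("hello")'
    as_number = int.from_bytes(source.encode(), "big")
    print(as_number)                # one very long integer

    # ...and losslessly back again:
    raw = as_number.to_bytes((as_number.bit_length() + 7) // 8, "big")
    print(raw.decode())             # print("hello")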
The argument has a number of problems, even allowing the assumptions of Platonism
and computationalism.
A programme is not the same thing as a process.
Computationalism refers to real, physical processes
running on material computers. Proponents of the argument need to show
that the causality and dynamism are inessential
(that there is no relevant difference between process and programme)
before they can have consciousness implemented Platonically.
To exist Platonically is to exist eternally and necessarily. There is no
time or change in Plato's heaven. Therefore, to "gain entry", a computational
mind will have to be translated from a running process into something
static and acausal.
One route is to replace the process with a programme. Let's call this the
Programme approach. After all, the programme
does specify all the possible counterfactual behaviour, and it is basically
a string of 1's and 0's, and therefore a suitable occupant of Plato's heaven.
But a specification of counterfactual behaviour is not actual counterfactual
behaviour. The information is the same, but they are not the same thing.
No-one would believe that a brain-scan, however detailed,
is conscious, so no computationalist, however ardent,
is required to believe that a programme on a disk, gathering dust on a shelf,
is sentient, however good a piece of AI code it may be!
Another route is "record" the actual behaviour, under
some circumstances of a process, into a stream
of data (ultimately, a string of numbers, and therefore soemthing already in Plato's heaven). Let's
call this the Movie approach.
This route loses the conditional structure, the counterfactuals that are vital to computer programmes and therefore
to computationalism.
Computer programmes contain conditional (if-then) statements. A given run of
the programme will in general not explore every branch, yet the unexplored
branches are part of the programme. A branch of an if-then statement that is
not executed on a particular run of a programme constitutes a counterfactual,
a situation that could have happened but didn't. Without counterfactuals you
cannot tell which programme (algorithm) a process is implementing, because
two algorithms could have the same execution path but different unexecuted branches.
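A minimal Python sketch of my own to illustrate: two different algorithms whose execution paths coincide on a given run, distinguishable only by their unexecuted branches.

    def f(x):
        if x >= 0:
            return x
        else:
            return -x      # absolute value

    def g(x):
        if x >= 0:
            return x
        else:
            return 0       # clamp to zero

    # On non-negative input, neither else-branch runs: the two execution
    # paths are identical, and no recording of this run could tell f and
    # g apart.
    print(f(5), g(5))      # 5 5
    # The unexecuted branches -- the counterfactuals -- still differ:
    print(f(-5), g(-5))    # 5 0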
Since a "recording" is not computation as such, the computationalist
need not attribute mentality to it -- it need not have a mind
of its own, any more than the characters in a movie.
(Another way of looking at this is via the Turing Test; a mere
recording would never pass a TT since it has no condiitonal/counterfactual
behaviour and therfore cannot answer unexpected questions).
A third approach is to make a movie of all possible computational
histories, not just one. Let's call this the Many-Movie approach.
In this case a computation would have to be associated
with all related branches in order to bring all the counterfactuals (or rather
conditionals) into a single computation.
(In other words, treating branches individually would fall back into the problems of the Movie approach.)
If a computation is associated with all branches, then according to computationalism
consciousness will be too. That will bring on a White Rabbit problem -- a deluge of chaotic, lawless experiences -- with a vengeance.
However, it is not as if computation cannot be associated with counterfactuals
in single-universe theories -- in the form of unrealised possibilities,
dispositions and so on. If consciousness supervenes on
computation, then it supervenes on such counterfactuals too;
this amounts to the response to Maudlin's argument in
which the physicalist abandons the claim that consciousness supervenes
on activity.
Of course, unactualised possibilities in a single universe are never going to lead to
any White Rabbits!
Turing and Other Machines
Turing machines are the classical model of computation, but it is doubtful whether they
are the best model for human (or other organic) intelligence. Turing machines take
a fixed input, take as much time as necessary to calculate a result, and produce
a perfect result (in some cases, they will carry on refining a result forever).
Biological survival is all about coming up with good-enough answers on a tight timescale.
Mistaking a shadow for a sabre-tooth tiger is a mistake, but it is more acceptable
than standing stock still calculating the perfect interpretation of your visual information,
only to get eaten. This doesn't put natural cognition beyond the bounds of computation,
but it does mean that the Turing Machine is not the ideal model. Biological
systems are more like real-time systems, which have to "keep up" with external
events, at the expense of doing some things imperfectly.
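The contrast can be sketched in a few lines of Python (my own illustration; the perception stand-in is invented): a real-time style computation answers by a deadline with its best guess so far, where a Turing-machine style computation would grind on to the perfect answer.

    import time

    def classify_anytime(refine, deadline_s):
        # Real-time style: keep refining a best-so-far answer, but always
        # act when the deadline arrives -- good enough beats perfect-but-late.
        best = "tiger!"                      # safe default: assume the worst
        start = time.monotonic()
        for guess in refine():
            best = guess
            if time.monotonic() - start > deadline_s:
                break                        # out of time: act on what we have
        return best

    def refine():
        # Invented stand-in for perception: each step costs time and
        # improves the interpretation of the shadow.
        for guess in ["tiger?", "probably a shadow", "certainly a shadow"]:
            time.sleep(0.05)
            yield guess

    print(classify_anytime(refine, deadline_s=0.08))   # answers early, imperfectly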
PDJ 19/8/06