Machine Functionalism

What we have learned

The failures of dualism, behaviorism, and identity theory were not in vain!

Each failure reveals important features of mental states that any theory ought to be able to handle.

What we have learned

Failure of Dualism
• Mental states interact causally with physical states.
• They are both caused by and cause physical states.

Failure of Behaviorism
• Mental states interact with each other.
• Mental states combine in interesting and complex ways to produce behavior.

Failure of Identity Theory
• Mental states are multiply realizable.
• Other organisms physically very different from us can have the same mental states as us.

What we have learned

Put more positively, we now know that any adequate theory of the mental must account for three facts:

(1) Mental states causally interact with physical things.
(2) Mental states causally interact with each other.
(3) Mental states are multiply realizable.

Functionalism: An Introduction

With these constraints in view, certain analogies begin to suggest themselves.

Consider artifacts:
• Mousetrap
• Hammer
• Carburetor

What makes a mousetrap a mousetrap? Its physical construction? What it is made out of?

No!

Functionalism: An Introduction

What makes something a mousetrap is its function:
• What it does
• What it was designed to do

You can make a mousetrap out of pretty much anything: so long as it does a certain thing, it will count as a mousetrap.

Functionalism: An Introduction

Something similar is true for biological kinds like a heart:
• Something is a heart just in case its primary function is to pump blood.
• Other animals can have hearts that are physically very different from ours.
• If an organism had a physically identical organ in its digestive system, it wouldn't be a heart!

Functionalism: An Introduction

Bodily organs and artifacts like mousetraps and carburetors are functional kinds.

What makes them the kind of thing they are is a certain functional role.

These functional roles can be physically realized in many different ways.

Functionalism: An Introduction

Facts about mental states:
(1) Mental states causally interact with physical things.
(2) Mental states causally interact with each other.
(3) Mental states are multiply realizable.

Why not say mental states are functional kinds?
• The examples of the heart and the carburetor show that the function of something can be the role it plays in a larger causal system.
• This view can accommodate all the causal powers of mental states.
• It accommodates multiple realizability!

Functionalism: An Introduction

Very roughly:

Pain just is a state that realizes the following functional role:
• Typically is caused by tissue damage.
• Typically causes other mental states like distress, fear, anger, and so on (depending on the antecedent overall state of the organism).
• Typically causes certain behavioral responses.

Functionalism vs. Behaviorism

It is useful to consider the view in contrast to behaviorism.

According to the behaviorist, mental states are individuated by:
• A particular stimulus
• The resulting behavior

The functionalist adds crucial details. Mental states are individuated by:
• The typical physical stimuli that cause them
• Their causal relations to other mental states
• The behavioral responses that they cause in conjunction with other mental states

Examples of Behavioral Analysis

I believe that the ice is thin just in case, under such and such conditions:
• I will skate warily.
• I will say to others, "Don't skate there."
• I will avoid that part of the ice.
• Etc.

The problem with this was that if I have different background mental states, I won’t have these dispositions.

Functionalism vs. Behaviorism

Functionalism does not have this problem.

Mental states are partly characterized by their interactions with other states.

So what it is for me to "want to get wet" is characterized by how that desire would interact with other mental states, like the belief "the ice is thin over there."

Functionalism vs. Identity Theory

The functionalist can say that mental states are (token) identical to physical states.
• A particular instance of pain is some neural state that bears a complicated causal relationship to other states.
• But the nature of pain is determined by its functional role.

So the functionalist can be a (token) physicalist, while still respecting the multiple realizability of the mental!

Functionalism

The prospects of functionalism look good, but so far it is only a vague proposal.

In us, it still has to be some organization of neural states that produces mental states.
• What does it mean for some neural state to "implement a functional role"?
• What sort of causal organization of neural states could possibly produce mental states?
• What are the prospects for a science of the mental if functionalism is true?

Putnam’s Proposal

Hilary Putnam was the first to propose a rigorous functionalist framework.

He proposed that one could construct a functionalist theory using Turing Machines.

[Image slides: photographs of Alan Turing]

Reading

Read: Block and Fodor, "What Psychological States Are Not"

Read: Block, excerpts from "Troubles with Functionalism"

Turing Machines

A Turing Machine is an abstract specification of the functional organization of a physical system. It has four elements:

(1) A tape of infinite length divided into squares on which things can be written.

(2) A scanner head that can read what is written on the tape and print something new on the tape.

(3) A finite set of internal states of the scanner head (q1, q2, etc.).

(4) A finite alphabet of symbols (1, #, 0,…)

Only one symbol can be written in each square.

Turing Machines

How the machine operates is governed by three simple rules:

(1) At any time the machine is in one total internal state q, and it is scanning a square.

(2) What the machine does next is entailed by its internal state q, and what it is currently scanning.

(3) The machine can do one or more of three things:
    (a) Print a symbol (either a new one, the same one, or leave the square blank).
    (b) Move right or left (or halt if it is finished).
    (c) Enter into a different internal state.
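To make these rules concrete, here is a minimal sketch of a Turing Machine simulator in Python. The code is not part of the lecture: the dictionary encoding of the machine table, the use of "0" as the blank symbol, and the step limit are all assumptions made for the example.

```python
def run_turing_machine(table, tape, state="q1", max_steps=1000):
    """Step a Turing Machine until it halts or runs out of steps.

    `table` maps (internal state, scanned symbol) to a triple
    (symbol to print, move, next state), where move is "R" (right),
    "L" (left), or "S" (stay). A missing entry means the machine halts.
    """
    squares = dict(enumerate(tape))            # sparse tape: square index -> symbol
    pos = 0
    for _ in range(max_steps):                 # guard against machines that never halt
        symbol = squares.get(pos, "0")         # unwritten squares scan as blank "0"
        if (state, symbol) not in table:
            break                              # no instruction entailed: halt
        write, move, state = table[(state, symbol)]
        squares[pos] = write                   # print a symbol (possibly the same one)
        pos += {"R": 1, "L": -1}.get(move, 0)  # move right, left, or stay put
    return "".join(squares[i] for i in sorted(squares))
```

Note how the code mirrors the three rules: at each step the machine is in one total state scanning one square, and what it does next is entailed entirely by the (state, symbol) pair.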

Fun with Turing Machines

Which of these it will do is determined by its machine table.

(Rows: the scanned symbol. Columns: the current internal state. An entry like 1Rq2 means: print 1, move right, and enter q2; an entry with no R or L, like 1q2, means: print 1 and stay put.)

       q1     q2
1    1Rq1   1Rq2
0     1q2    0q2

Fun with Turing Machines

       q1     q2     q3
1    1Rq1   1Rq1    0q3
0    1Rq2   0Lq3   Halt
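To see this table in action, here is how it could be encoded and run with the simulator sketch above. The encoding conventions (treating "0q3" as print-and-stay, omitting the Halt entry, and the sample tape "110") are assumptions of the example.

```python
# The three-state machine table above, written as
# (current state, scanned symbol) -> (symbol to print, move, next state).
# "0q3" is read as: print 0, stay put ("S"), enter q3.
# (q3, "0") has no entry, so the machine halts there.
table = {
    ("q1", "1"): ("1", "R", "q1"),
    ("q1", "0"): ("1", "R", "q2"),
    ("q2", "1"): ("1", "R", "q1"),
    ("q2", "0"): ("0", "L", "q3"),
    ("q3", "1"): ("0", "S", "q3"),
}

print(run_turing_machine(table, "110"))  # prints "1100"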

Turing Machines

You can specify a Turing Machine like this to compute any computable function.

You can complicate the picture so that there are multiple tapes and multiple scanner heads, but the functioning of every "multi-head" Turing Machine can be duplicated by a standard Turing Machine.

Universal Turing Machines

Turing also showed how you can construct a Universal Turing Machine.

This is a Turing Machine that can realize the machine table of any other Turing Machine.
• The input on the tape specifies both the machine table the UTM is supposed to use and the values to be computed.

Universal Turing Machines are the abstract basis for general-purpose computers.

Features of Turing Machines

Turing machines are functionally individuated.

How you tell one TM from another is by looking at its machine table. The internal states (i.e., the q's) are exhaustively described by their place on the machine table:
• How they relate to the other internal states
• What state the TM will enter given a certain input
• What the TM will write on the tape given a certain input

Features of Turing Machines

A Turing machine can be built out of anything you like so long as the internal states, inputs and outputs are causally related in the way the machine table says.

In other words, a Turing Machine is multiply realizable.

Examples of Turing Machines

[Image slides: worked examples of Turing Machines]

Turing Machine Functionalism

Recall what we need for a good account of mental states:

(1) Mental states causally interact with physical things.
(2) Mental states causally interact with each other.
(3) Mental states are multiply realizable.

Putnam points out that the internal states of Turing Machines meet all of these criteria!

Turing Machine Functionalism

Putnam’s proposal is that we think of the mind as the internal states of a very complicated Turing Machine.

(1) The "tape" is the world.
(2) The "scanner" is perception.
(3) The "printing" is behavior.
(4) The "internal states of the scanner head" are the mental states.

Turing Machine Functionalism

This provides us with a rigorous specification of the functionalist proposal.

1. What does it mean for some neural state to "implement a functional role"?
• To be an internal state of a Turing Machine is to play a certain functional role in relation to other internal states, inputs, and outputs.

Turing Machine Functionalism

This provides us with a rigorous specification of the functionalist proposal.

2. What sort of causal organization of neural states could possibly produce mental states?
• Turing Machines are causal organizations that can compute complex functions, transmit and store information, etc.

Turing Machine Functionalism

This provides us with a rigorous specification of the functionalist proposal.

3. What are the prospects for a science of the mental if functionalism is true?
• Turing Machines are specified in a rigorous mathematical way.
• A science of the mental would attempt to construct TMs that approximate human behavior and investigate whether these are realized in human brains.

Turing Machine Functionalism

According to this proposal, mental states:
• Are token physical states of a complex causal system (beats dualism).
• Can interact with one another (beats behaviorism).
• Are multiply realizable, because you could have an identical Turing Machine built out of completely different materials (beats identity theory).

Reading

Read: Block, "Troubles with Functionalism"

Read: Turing, "Computing Machinery and Intelligence"

Problems for TM Functionalism

Block and Fodor present a series of objections to Turing Machine functionalism.

They seem to like the general idea behind Putnam's model (modeling the mind with Turing Machines or producing some other computer model).

But they point out insuperable difficulties with the specific account he puts forward.

First Serious Problem

The problem for behaviorism was that it could not account for the interaction of mental states.

Putnam’s model can do this because the q’s on the machine table interact: a q1 and an input will cause the machine to go into q2.

The problem is that this is the only way in which internal Turing Machine states interact!

First Serious Problem

Recall how a Turing Machine works:

Each q is the total internal state of the scanner head. This means that the Turing Machine can only be in one q-state at a time.

If mental states are the q-states of a Turing Machine, then it would follow that we can only, strictly speaking, be in one mental state at a time!

First Serious Problem

The TM model of the mind can only account for serial causation.

q1 → q2 → q3 → …

But as we already know, mental states interact in many other ways as well!

A belief and a desire together can cause a certain response.

A whole range of mental states can simultaneously interact to cause a particular behavior and/or additional mental states.

Second Serious Problem

Another problem has to do with the nature of the q’s.

Take two Turing Machines A and B.

Each has a "q1" on its machine table. Does that mean that they share some internal state?

No!
• What "q1" does in A may be entirely different from what "q1" in B does.
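A quick illustration of the point, reusing the hypothetical dictionary encoding from the simulator sketch earlier (the particular entries are invented for the example):

```python
# In machine A, "q1" names a state that, on scanning 1, prints 0 and stays in q1.
table_A = {("q1", "1"): ("0", "R", "q1")}

# In machine B, "q1" names a state that, on scanning 1, prints 1 and hands off to q2.
table_B = {("q1", "1"): ("1", "L", "q2"),
           ("q2", "0"): ("1", "R", "q2")}

# The shared label "q1" marks no shared internal state: the two states
# occupy different places in different machine tables.
```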

Second Serious Problem

“q1” is just a label designating an internal state of a TM.

Which internal state is picked out by “q1” is completely determined by which state realizes the causal role laid out for q1 on the machine table.

In other words, what “q1” is is exhaustively determined by its causal-functional role in the machine table of a particular Turing Machine.

Second Serious Problem

If A and B have different machine tables, then none of their internal states will play precisely the same causal role.

Thus any difference in the machine tables between A and B entails that they have no q-states in common!

Second Serious Problem

Suppose that I believe the cup is black.

Presumably you can believe this too. But if Putnam's theory is right, we cannot share the same mental state in this way!
• You have lots of mental states that I don't (and vice versa), and they may relate to each other in ways mine don't.
• If our minds are Turing Machines, this amounts to a difference in our machine tables.
• This means that we cannot share any mental states whatsoever!

Third Serious Problem: Productivity

The q’s on the machine table are a finite list of states.

But a person could possibly be in an infinite number of mental states!

Any proposition a person can understand could be thought by them.

Third Serious Problem: Productivity

Mathematical Beliefs

1+1=2      1+1=2      1+1+1=3      1+2+1=4
1+2=3      2+2=4      2+2+2=6      1+3+1=5
1+3=4      3+3=6      3+3+3=9      1+4+1=6
…

Third Serious Problem: Productivity

Linguistic Competence

As we said before, understanding a grammar and a finite number of concepts allows a person to construct and understand an infinite number of sentences!

Third Serious Problem: Productivity

Mathematical competence and linguistic competence are productive.

Given finite resources, these competences provide a subject with literally infinite representational capacities.

Of course, any single person will only ever have a finite number of thoughts.

But the possible thoughts that they can have are infinite.

Third Serious Problem: Productivity

By the very definition of a Turing Machine, the number of internal states it can have is finite.

Therefore, the internal states of a Turing Machine cannot model mental states.

Of course, the tape is infinitely long:
• Block and Fodor point out that even if mental states can't be the q's on the machine table, they could still be the computational states of a TM.
• A computational state is just any state a Turing Machine could be in, including the configuration of its tape.
• There are infinitely many of these.

Reading

Read: Turing, "Computing Machinery and Intelligence"

Read: Kim, pages 156–165

To get ahead: Searle, "Minds, Brains, and Programs"

Fourth Serious Problem: Systematicity

Suppose I believe "Sandals are comfortable and birds fly." I will also believe (or can easily infer):
• Sandals are comfortable.
• Birds fly.

Something about the intrinsic structure of the conjunctive belief relates it to the two simpler beliefs.

Fourth Serious Problem: Systematicity

Suppose I think to myself: “Mary is tall and Joe is short.”

That I can think this thought entails that I can also think any of the following:
• Mary is tall.
• Mary is short.
• Joe is short.
• Joe is tall.

Fourth Serious Problem: Systematicity

Contentful mental states bear systematic relations to one another.
• Having a concept allows one to think with that concept in many different situations.
• The internal logical structures of thoughts can mean that having one thought entails the ability to have another.

Fourth Serious Problem: Systematicity

Turing Machine states don't have internal structure relevant to their functional role.
• This is what made them multiply realizable!
• It doesn't matter what the physical thing realizing "q27" is, so long as it plays a particular causal role in the system.
• The intrinsic features of q27 are irrelevant to its functional role.

Final Problem: Qualia

The previous objections were mostly just problems for Putnam's particular functionalist theory.

There is a peculiar feature of certain mental states that no version of functionalism seems able to account for.

The special feel of mental states is a crucially important element of them:
• Pain (shooting vs. burning vs. that thing you feel when a bee stings you)
• The look of red vs. green
• Hitting your funny bone
• Anger
• Happiness

Final Problem: Qualia

This is sometimes called “qualitative character,” or “phenomenology” and it has to do with what it is like to be in a certain mental state.

This is a ubiquitous and subjectively important part of our mental lives.

But qualia do not necessarily map onto their functional roles.

Inverted Spectrum

[Image: two color spectra, one labeled Nonvert and one labeled Invert]

Inverted Spectrum

Suppose Invert and Nonvert were raised in the same community.
• They both apply the concept "red" to pick out the color of the strawberry.
• They both use the word "red" to refer to the color of the strawberry.
• They will both agree that the strawberry is the same color as stop signs and old firetrucks.
• They both form beliefs and desires about "red" things under the same circumstances.

In short, we can imagine that Invert and Nonvert are functional duplicates of one another!

Inverted Spectrum

Since there is no functional difference between their mental states, any functionalist has to say that they are in the very same state.

But this is, by supposition, false!

You can multiply examples like this:
• Someone has a state with the functional role of pain that feels different.
• Two people are functionally identical, but what tastes like chicken to one tastes like beef to the other.

Final Problem: Qualia

In short, qualia are an important feature of our mental lives that functionalism can't seem to explain.

Of course, this is a problem for almost every theory of the mind (stay tuned!!!).

Is Functionalism Too Liberal?

Block (in a later article) gives a series of cases that object to functionalism generally (not just Putnam’s specific version).

His main objections seem to indicate that functionalism is too liberal in the sense that it allows too many things to count as having a mind.

Case #1

Imagine a body that is externally like a human body, but internally very different.

Inside the head, instead of a brain, there are:
• A bunch of little tiny men with levers.
• Each man is assigned a particular role.
• The roles they are assigned perfectly realize the functional roles of the individual neurons of a human brain.
• They take their orders from a big board on one of the walls, on which instructions are posted.

Case #1

“This is what the men do: suppose the posted card has a “G” on it. This alerts the little men who implement the G roles, G-men, they call themselves. Suppose the light representing input I17 goes on. One of the G-men has the following as his sole task: when the card reads ‘G’ and the I17 light goes on, he presses output button 0191 and changes the state card to M…” (151)

Through the efforts of (all) these little men they realize a system functionally equivalent to you.

Does this “homunculus-head” have mental states?

Case #2

Suppose we convert the leaders of China to functionalism and convince them that it would be cool to implement a human brain for an hour or so.
• They organize every citizen of their country in such a way that each citizen realizes the functional role of a neuron (or maybe a small cluster of neurons).
• For one hour, they pull levers and throw switches in a manner that perfectly realizes the causal functional organization of a human brain (say, yours).
• All of these signals are communicated via radio to an artificial body that acts just like you do for an hour.
• Does this produce a mind with a lifespan of one hour?

Qualia Again

Even if you are inclined to think that the nation of China has some mental states (beliefs, desires, etc.), does it have qualia?

Is there anything that it is like to be homunculus-head or the Chinese nation in these cases?

If not, then this would be a case of Absent Qualia. It would show that functionalism can't deal with certain kinds of mental states.
• Something can be functionally identical to me without having any qualia.
• Therefore, qualia cannot be functionally individuated states.

Case #3

Suppose a wealthy sheik takes control of the country of Bolivia and forces its economy to implement the functional organization of his own brain.

Has he created a psychological duplicate of himself?

“If there are any fixed points when discussing the mind-body problem, one of them is that the economy of Bolivia could not have mental states, no matter how it is distorted by powerful hobbyists.” (159)

Putnam’s Foresight

Putnam DID see this one coming.

He recognized the possibility of cases like this and just stipulated that they don’t count.

He added a condition to his functionalist theory of mind ruling out any system that can be decomposed into parts that separately have the right kind of functional organization to have minds.

Block's Response: Case #4

Not only is this painfully ad hoc, it also seems too chauvinistic.

Suppose some universe has infinitely divisible matter.
• In this universe there are very, very small (maybe photon-sized) spaceships flown by very, very small people.
• Their flying spaceships make up the particles out of which medium-sized objects like people are composed.
• You could only tell that they were spaceships with a more powerful microscope than we have today.
• Suppose that these little spaceships are organized in such a way that they mimic a human body (perhaps a whole world of them!).

Could an organism composed out of such fundamental constituents have mental states?
• Suppose you accidentally venture into this situation and your elementary particles are gradually replaced with spaceship particles!

Conclusions

If you agree with Block's intuitions, these cases show that talking only about functional roles is not going to provide an adequate account of mental states.

At the very least, it seems as if we need a more nuanced functionalist theory of the mind that can either:
1. Make it plausible that Block is wrong about the cases and that the systems in question are thinking, or
2. Have the resources to deny that Block's imagined systems are thinking.

Conclusions

Block’s cases (at best) show that functional role does not suffice for mental states. But that doesn’t mean it isn’t necessary.

Such an account would agree with the functionalist that functional organization is an important element of what makes something a mind, but deny that it is all there is to it.

Conclusions

Today, one is far more likely to find theorists endorsing a Computational Theory of Mind rather than functionalism.

The rough idea behind such theories is that the brain is some kind of computer, where the neurons are the hardware and the mental states and capacities are the software.

Such accounts can impose additional conditions for something to have a mind over and above just the functional role.