
Cognitive Computing 2012

The computer and the mind

4. DENNETT

Professor Mark Bishop


Background

The philosophical ('general') frame problem:

The crux of the paper is to ask how an AI program can 'know' which knowledge is relevant to solving a particular problem without performing some kind of exhaustive search over its entire knowledge domain.

The ‘cognitive science’ frame problem:

How to represent in logic what remains unchanged as the result of an action or event.
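As a rough illustration (the notation is standard situation-calculus style, not taken from the slides), representing non-change explicitly means writing 'frame axioms' such as the following, one for every property an action does not affect:

```latex
% Illustrative frame axiom: moving x onto y leaves x's colour unchanged.
\forall x, y, c, s:\;
  \mathit{Colour}(x, c, s) \rightarrow \mathit{Colour}(x, c, \mathit{do}(\mathit{move}(x, y), s))
```

The difficulty is that a realistic domain needs a vast number of such axioms - one for each action/property pair that is unaffected.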


On bombs and robots

A tale of a bomb, a battery, a trolley and a robot whose power is fading…

The robot R1's database includes the knowledge that inside the room there is a bomb, primed to explode soon, and a spare battery on a trolley.

R1 deduces that it could pull the trolley out of the room, thus retrieving its battery and ensuring continuation of its service as a functioning robot.

However, although the robot's database included the knowledge that 'the bomb was on the trolley', R1 failed to investigate the implications of the action 'retrieve the trolley',

which also retrieved the bomb along with the trolley and hence destroyed the robot.


The robot deducer

At first sight the problem can be solved by creating a second robot, R1D1

- the ‘robot deducer’

This new 'robot deducer' carefully works out the implications of any side effects of its actions.

However, considering every implication of its actions means the robot takes too long to act, and it is still destroyed by the time bomb.

E.g. Pulling the trolley out of the room will not affect the colour of its walls…

So R1D1 hits hard against the cognitive-science frame problem of how to process, in logic, that which remains unchanged as a result of an action. (This is essentially now solved; see Murray Shanahan, Solving the Frame Problem.)

Hence the robot must only examine ‘relevant implications’ of its actions.


Relevant implications

So the engineers design a third robot, R2D1

- the 'robot relevant-deducer' - which only deduces the relevant implications of its actions…

… but by what criteria can R2D1 'know' which implications are relevant, other than by exhaustive search?

i.e. It must sort the implications into two piles: relevant and irrelevant!
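A toy sketch of why this is a problem (all the facts and predicate names here are invented for illustration): even a robot that keeps only the 'relevant' pile must still derive and inspect every implication in order to decide which pile it belongs in.

```python
# Toy knowledge base and a naive 'deduce everything, then sort' planner.
knowledge = [
    ("battery", "on", "trolley"),
    ("bomb", "on", "trolley"),
    ("walls", "colour", "pale blue"),
    # ... thousands more facts in any realistic knowledge base
]

def implications_of(action_target, facts):
    """Naively derive a consequence for every fact (exhaustive)."""
    for subj, rel, obj in facts:
        if rel == "on" and obj == action_target:
            yield (subj, "moves with", action_target)
        else:
            yield (subj, rel, obj)   # unchanged - but still examined

def plan(action_target, facts, is_relevant):
    relevant, irrelevant = [], []
    for imp in implications_of(action_target, facts):
        (relevant if is_relevant(imp) else irrelevant).append(imp)
    return relevant                  # sorted - but only after looking at everything

is_relevant = lambda imp: "battery" in imp or "bomb" in imp
print(plan("trolley", knowledge, is_relevant))
```

The sorting itself is cheap; the trouble is that nothing in the scheme lets the robot avoid visiting the wall-colour fact (and its thousands of cousins) in the first place.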


The Frame Problem of AI

Engineers often run into the frame problem in AI, particularly when designing autonomous vehicles to operate in an unconstrained environment,

as such a vehicle will undoubtedly encounter problems not explicitly scripted into its operation.

e.g. The NASA Curiosity rover on Mars.

How should the vehicle operate in such eventualities?

Reflection on the ‘frame problem’ suggests only very simple solutions to such problems may be practical…


“Look before you leap”

We humans can look (think) before we leap; but how?

By 'expectation': learning from experience… Hume describes such habits of expectation in terms of the association of ideas ('associationism').

A classic example of 'Humean learning':

Two children - the first is not punished for grabbing cookies from a jar - the second is. The second ‘learns’ not to take cookies without asking.

Why? Cookies are followed by punishment, so the idea of the cookie leads to the idea of punishment, which leads to the idea of pain, which the child wants to avoid.

But why should the idea of pain lead to the idea of avoiding cookies, and not to dancing or anything else?

Only if we 'hard-wire' in what the idea of pain ‘means’ can we see why the child wants to avoid it, but then it is not a learnt idea.

But what ideas must be 'hard-wired' and what can be learnt?
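A minimal sketch of the point being made (the idea names and link-following rule are invented for illustration, not Hume's own formalism): associative links between ideas can be learnt from co-occurrence, but the aversiveness of 'pain' has to be built in for the chain to explain avoidance.

```python
from collections import defaultdict

associations = defaultdict(float)   # learnt strengths of idea -> idea links
HARD_WIRED_AVERSIVE = {"pain"}      # the one thing not learnt by association

def experience(*ideas):
    """Strengthen links between ideas that occur together (a Humean 'habit')."""
    for a in ideas:
        for b in ideas:
            if a != b:
                associations[(a, b)] += 1.0

def should_avoid(idea, depth=3):
    """Follow learnt associations; avoid anything whose chain ends in pain."""
    if idea in HARD_WIRED_AVERSIVE:
        return True
    if depth == 0:
        return False
    linked = [b for (a, b) in associations if a == idea]
    return any(should_avoid(b, depth - 1) for b in linked)

experience("cookie", "punishment")
experience("punishment", "pain")
print(should_avoid("cookie"))   # True - but only because 'pain' is hard-wired
```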


The 'midnight-snack' problem

Question: "How does a stage magician saw a lady in half?"

Answer (from the 'armchair philosopher'): "He doesn't really..."

The frame problem is like this magician's trick: it appears impossible from the perspective of someone in the audience, but we know it must be solved. 'Hetero-phenomenological knowledge' is the informational knowledge that must be known about a problem in order to successfully solve the task.

A similar example of such a trick is the question, "How do you put a midnight snack together?"

At first sight the solution appears obvious:

We hypothesise that there is food and drink in the fridge; we investigate; we put the ingredients together; SUCCESS!


Nature or Nurture?

Yet solving the 'midnight-snack' problem requires a huge amount of seemingly banal - 'common-sense' - knowledge, such as:

how to butter bread; how to pour beer into a glass; etc.

And very little of the type of physics that one is taught at school.

The physics of ‘particles in motion’ etc.

Fortunately the 'midnight-snacker' knows such things, but how much of this 'common-sense' knowledge has been learnt, and how much is innate? Some of the knowledge must be innate, or a direct implication of the innate; for example, that 'one thing can't be in two places at the same time' could imply that 'beer-in-the-glass is no longer beer-in-the-bottle'.


Dennett, "On hetero-phenomenology"

How much knowledge is hard-wired?

Clearly an agent without knowledge of liquids couldn’t pour itself a drink!

Work in AI forces such 'hetero-phenomenological' thinking, as computers effectively start from a 'tabula rasa'.

The installation problem - what ‘kind’ of knowledge is required:

The semantic knowledge problem - what topics are required. The syntactic knowledge problem - in what format the knowledge should be held: logic; semantic nets; etc.
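As a small sketch of the 'syntactic' side of the installation problem (the fridge facts are invented for illustration), here is the same content held in two of the formats just mentioned: a flat, logic-style list of ground predicates, and a semantic-net-style graph of nodes and labelled links.

```python
# 1. Logic-style: a flat list of ground predicates.
facts_as_logic = [
    ("in", "beer", "fridge"),
    ("in", "butter", "fridge"),
    ("is_a", "beer", "liquid"),
    ("is_a", "butter", "solid"),
]

# 2. Semantic-net style: nodes carrying labelled links to other nodes.
semantic_net = {
    "beer":   {"in": "fridge", "is_a": "liquid"},
    "butter": {"in": "fridge", "is_a": "solid"},
    "fridge": {"contains": ["beer", "butter"]},
}

# The content is the same; what differs is how easily a given question
# can be answered, e.g. "what is in the fridge?"
in_fridge_logic = [x for (rel, x, y) in facts_as_logic
                   if rel == "in" and y == "fridge"]
in_fridge_net = semantic_net["fridge"]["contains"]
print(in_fridge_logic, in_fridge_net)
```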

But why can't AI just ignore this 'problem of learning' and simply 'hard-wire' all the relevant knowledge into the system?


Hard-wired knowledge?

No one subscribes to the view that all the knowledge required to solve, say, the midnight-snack problem is simply hard-wired (stored in a list) - an extreme 'Language of Thought' hypothesis…

whereby all propositions are separately enumerated;

… even an encyclopedia is not organised quite like this.

But the problem is not simply the enormous memory space required for such a list; it is the time needed to search it. An agent is not intelligent if it takes a million years to solve a problem (cf. Newell & Simon: intelligence is not the raw 'amount' of search).

Therefore some kind of 'generative system' must be employed, capable of producing information as required. But it can't be a simple 'Spinozistic' system

- one starting with a small set of facts, the rest being deducible -

as there is no clear 'deductive relation' between many of the facts required.


On planning…

Introspectively, for some problems (e.g. moving a piano upstairs), the procedure seems to be one of planning/imagination (cf. the Simnos/Cronos robot): envisage the situation; perform the act; see the outcome (backstage knowledge); evaluate the act.
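A minimal sketch of that planning/imagination loop (the helper names simulate and evaluate are invented; this is an illustration, not a claim about how the brain does it):

```python
def plan_by_imagination(situation, candidate_acts, simulate, evaluate):
    """Envisage -> perform the act in imagination -> see outcome -> evaluate."""
    best_act, best_score = None, float("-inf")
    for act in candidate_acts:
        imagined_outcome = simulate(situation, act)   # the 'backstage' model
        score = evaluate(imagined_outcome)
        if score > best_score:
            best_act, best_score = act, score
    return best_act

# Toy usage: choosing how to get the piano upstairs.
simulate = lambda s, act: {"piano upstairs": act == "carry it up the stairs"}
evaluate = lambda outcome: 1 if outcome["piano upstairs"] else 0
print(plan_by_imagination({}, ["carry it up the stairs",
                               "push it up the drainpipe"],
                          simulate, evaluate))
```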

But do we REALLY act to get our midnight snack in this way? Some say YES - that such planning goes on subconsciously - whilst others

- the introspective phenomenologists - deny this and say no such planning goes on.

The hetero-phenomenologist reasons that somehow information about the situation must be being processed; otherwise our explanations must resort to magic.

And [to date] the only ‘model in town’ is the planning/imagination model. And REC??


Counterfactual reasoning

Our intelligence is assured by counterfactual reasoning:

What if the butter had frozen - would we behave like the Sphex wasp?

Hetero-phenomenological reasoning suggests that one is sensitive to such possibilities without searching through a long list of them.

If one is a 'regular midnight snacker', one might follow 'well-trodden paths of expectation', so that the procedure is more or less automatic.

If one is merely an 'occasional midnight snacker', one might simply assemble a plan anew from well-known routines: buttering bread; pouring beer; etc.


Folk knowledge

But even well-made plans can go adrift through oversight;

consider 'painting oneself into a corner'…

We seem to avoid such situations by applying 'folk knowledge', e.g. don't paint yourself into a corner…

But how do we select ‘the right’ folk-knowledge and recognise impending problems as problems?

Intelligent action must involve swift, context-sensitive planning, and it need not be infallible:

the intelligent midnight-snack robot needs to be able to be surprised by frozen butter, etc.


A problem of induction?

Some people think the frame problem is not a problem - we just need the right set of 'expectations' (scripts).

This seems analogous to the Humean problem of induction: given a body of knowledge about the world, what should one believe about the future?

But the frame problem is not the problem of induction, for even if we had solved the problem of induction, the frame problem would still remain…

Consider R1D1: even if it had perfect knowledge of the effects of its actions (e.g. that, with probability 0.7864, it will make a noise when it leaves the room, etc.), it still has the problem of finding which knowledge is relevant.

A walking encyclopaedia with knowledge of cliffs will still walk over an approaching one unless it can use its knowledge appropriately - in real time - to avoid it.


Frame axioms

Early AI systems described a situation {S} by a set of axioms {af} and a set of background axioms {ab}: {S} = {af} + {ab}.

These background axioms 'frame' the situation, and give this problem its name.

The expectation was that the effect of any action {A} on {S} could simply be deduced by the laws of logic, defining the new situation {S'}.
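Written out (illustratively - the slide itself gives only the informal scheme), the idea is:

```latex
% The situation is the foreground axioms plus the background (frame) axioms,
% and the new situation is whatever follows logically once the action is added.
S = a_f \cup a_b, \qquad
S' = \{\, \varphi \mid S \cup \{A\} \vdash \varphi \,\}
```

On this scheme the background axioms {ab} must include an axiom for every property that each action leaves unchanged - which is where the frame problem bites.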

Introspectively this doesn't seem to be how humans perform intelligent actions (but perhaps introspective realism doesn't matter).

Or perhaps this method really does describe our subconscious - backstage - thought processes…


Relevancy

However, frame axioms have failed in robotics research. Consider the background axiom:

If x is blue before, and I move x onto y, then x is blue after.

There are countless such axioms - one for each action that leaves the colour unchanged - and they cannot be encapsulated by the overriding axiom: 'if x is blue, x stays blue'.

But the above is FALSE (we may paint x black).

The initial solution to this problem was to assume that nothing changed unless explicitly stated…

… but this is also wrong, as actions have more than one consequence. R1 brought the trolley out but did not explicitly bring the bomb, so, using the above, the bomb's position should be unchanged. Similarly, in the midnight-snack problem, we didn't specify that the turkey stayed on the plate...
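A toy sketch of that default (a STRIPS-style update; the fact names are invented): the action description mentions only the trolley, so everything else - including the bomb's location - 'persists' by assumption, which is exactly the wrong conclusion here.

```python
# World state as a set of ground facts.
state = {
    ("in_room", "trolley"), ("in_room", "bomb"), ("in_room", "battery"),
    ("on", "battery", "trolley"), ("on", "bomb", "trolley"),
}

# The action's explicit effects say nothing about the bomb or the battery.
pull_trolley_out = {
    "delete": {("in_room", "trolley")},
    "add":    {("outside", "trolley")},
}

def apply(action, state):
    """'Nothing changes unless explicitly stated': only listed facts change."""
    return (state - action["delete"]) | action["add"]

after = apply(pull_trolley_out, state)
print(("in_room", "bomb") in after)   # True: the bomb is deduced to still be
                                      # in the room, although it rode out on
                                      # the trolley - R1's fatal oversight.
```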

We would like to add axioms of relevancy, but these simply add to our knowledge base and do not fulfil the desired function of making the knowledge base smaller.

So just how can we engineer a system to 'ignore most of what it knows'?


'Ceteris paribus' reasoning

An old puzzle: "Three missionaries and three cannibals need to cross a river. If the cannibals ever outnumber the missionaries, the missionaries will be eaten."

The dunce: "Why not cross over the bridge upstream?" "What bridge? There is none mentioned in the description." "Well, no one says there isn't one!"

If we modify the problem to exclude bridges, the dunce may enquire, "What about a helicopter?" Then, after this too is excluded, the dunce asks about the availability of winged horses...

Finally, when the solution is revealed, the dunce attacks it, pointing out that the boat "might not have oars" or "might have a hole in it"...

In monotonic reasoning, adding axioms never diminishes what can be proved from the premises.

In non-monotonic reasoning, extra knowledge can reduce what can be concluded; in real life a solution can be invalidated by extra knowledge (e.g. the boat had no oars).
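In symbols (an illustrative gloss, with the turnstile read loosely as the relevant consequence relation):

```latex
% Monotonicity: adding a premise never removes a conclusion.
\Gamma \vdash \varphi \;\Longrightarrow\; \Gamma \cup \{\psi\} \vdash \varphi
% Non-monotonic ('ceteris paribus') reasoning violates this, e.g.
% (\nvdash is from amssymb):
\Gamma \vdash \textit{plan works}, \qquad
\Gamma \cup \{\textit{boat has no oars}\} \nvdash \textit{plan works}
```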

But a 'ceteris paribus' assumption - 'all other things being equal' - does not specify what 'all such things' actually are.


Ignoring what ‘should be’ ignored

To solve the frame problem we need AI to operate in this human style of reasoning - one that ignores what should be ignored.

E.g. Minsky's frames and Schank's scripts define a set of expectations and are alert to divergences from them.
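A toy sketch in the spirit of such a script (the slots and values are invented; real Schank-style scripts are considerably richer): a stereotyped sequence of expectations, plus a check that flags the first divergence from them.

```python
# A midnight-snack 'script' as an ordered list of (slot, expected value) pairs.
midnight_snack_script = [
    ("fridge", "contains beer"),
    ("butter", "is spreadable"),
    ("bread",  "is sliced"),
]

def run_script(script, observed):
    """Follow the expectations in order; report the first divergence, if any."""
    for slot, expectation in script:
        if observed.get(slot) != expectation:
            return f"divergence: {slot} -> {observed.get(slot)!r}"
    return "script completed as expected"

world = {"fridge": "contains beer", "butter": "frozen solid", "bread": "is sliced"}
print(run_script(midnight_snack_script, world))   # flags the frozen butter
```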

Such systems work well when the world behaves; when it doesn't, they act like the Sphex wasp.

AI systems need to be able to 'learn' to 'understand' the situation, otherwise AI will remain an insect-like intelligence.

Humans have backup systems for when scripted behaviour fails and, further, they work out their scripts for themselves.


Cognitive Wheels

A 'cognitive wheel' is any cognitive theory that is profoundly un-biological (cf. wheels: clearly good technology, but rarely found in nature).

If a robot AI system - based on extensions of logic - ever did succeed in a real world environment, would its solutions be anything more than a ‘cognitive wheel’?

Would it add anything to our knowledge of ‘human backstage reasoning’?

Although, at the lowest level, clearly all AI systems are one form of cognitive wheel …

for program code is not the same as neurons

… nonetheless the hope in Cognitive Science is that cognitively interesting systems will shed light on human psychology at some level.


Conclusions

A defence of AI methods: they can illuminate potential information-theoretic problems that must be solved, even if the solutions engineers find are un-biological.

E.g. is a list of predicates plus logic the best method of knowledge representation for real-world problems?

Pat Hayes describes 'Naïve Physics' as an alternative method, which attempts to model things more directly.

But Naïve Physics has not yet been proven effective (though cf. Cronos and 'physics engines'?).

Dennett is rightly suspicious of parallel processing - appealing to an analogy with the brain - as a methodology supposedly not susceptible to the frame problem:

'… another hand-waving solution'.

There is clearly a need to be able to specify AI problems in a manner similar to that which humans employ, and a method for getting such systems to perform human-style 'ceteris paribus' reasoning.