AI Principles, Semester 2, Week 1, Lecture 2, Cognitive Science and AI Applications
How simulations can act as scientific theories
The Computational and Representational Understanding of Mind
Boundaries of Cognitive Science
AI and Mathematical knowledge
AI and Dynamic Systems
AI and Emotion
AI and Consciousness
Embodied AI
AI Applications for student presentations
1
From Paradigms to Simulations
Paradigms – Newtonian mechanics versus Relativity and Quantum Mechanics; Behaviourism versus 'thought as computation'
Frameworks – GOFAI and nouvelle AI
Theories – Explanations for a set of empirically observable phenomena
Models – Well-specified theories that may give rise to precise predictions, and may be presented in formal mathematical terms
Simulations – Models that can be run on a computer
2
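The step from a formal model to a simulation can be made concrete with a toy example. The sketch below (an illustration of the distinction, not anything from the lecture) takes a simple mathematical model, exponential growth dx/dt = r·x, and turns it into a model that can be run on a computer by discretising time.

```python
import math

# A model in formal mathematical terms: dx/dt = r * x (exponential
# growth). A simulation is this model made runnable: we discretise
# time and step the state forward on a computer.

def simulate_growth(x0: float, r: float, dt: float, steps: int) -> float:
    """Euler-integrate dx/dt = r*x from initial state x0."""
    x = x0
    for _ in range(steps):
        x += r * x * dt  # one discrete update of the model's equation
    return x

# The exact model predicts x(t) = x0 * exp(r*t); the simulation
# approximates this, and the error shrinks as dt shrinks.
approx = simulate_growth(x0=1.0, r=0.5, dt=0.001, steps=2000)  # t = 2
exact = 1.0 * math.exp(0.5 * 2.0)
```

The gap between `approx` and `exact` is a miniature instance of the validation problem discussed on the previous slide: the simulation only approximates the model it implements.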
Benefits and problems of using mathematical models and simulations
Rigorous specification of theory
Precision of terms
New tools for studying concepts
Revelation of hidden assumptions
Specification problems – choosing the wrong detail
Problems when single successes are over-generalised
Communication problems
Exploration of complex domains
Economical explanations
Simulations can go beyond mathematical boundaries
Bonini's Paradox – simulations more complex than reality
Validation problem
Serendipity
Emergence and surprise
(Dawson, Minds and Machines)
3
The special case of simulation in AI and Cognitive Science
Kenneth Craik, The Nature of Explanation (1943):
“By a model we thus mean any physical or chemical system which has a similar relation-structure to that of the processes it imitates. By 'relation-structure' I do not mean some obscure non-physical entity which attends the model, but the fact that it is a physical working model which works in the same way as the process it parallels, in the aspects under consideration at any moment. Thus, the model need not resemble the real object pictorially; Kelvin's tide-predictor, which consists of a number of pulleys on levers, does not resemble a tide in appearance, but it works in the same way in certain essential respects” (page 51, Craik 1943)
4
The special case of simulation in AI and Cognitive Science
Dawson (2004):
“Intuitively, a model is an artifact that can be mapped on to a phenomenon that we are having difficulty understanding. By examining the model we can increase our understanding of what we are modeling. For it to be useful, the artifact must be easier to work with or easier to understand than is the phenomenon being modeled. This usually results because the model reflects some of the phenomenon's properties, and does not reflect them all. A model is useful because it simplifies the situation by omitting some characteristics”
Models should be easier to work with than reality, but there is a trade-off. Some of the complexity of reality needs to be omitted by a process of abstraction.
5
The special case of simulation in AI and Cognitive Science
Kenneth Craik, The Nature of Explanation (1943):
“My hypothesis then is that thought models, or parallels reality – that its essential feature is not 'the mind,' 'the self,' 'sense-data,' nor propositions but symbolism, and that this symbolism is largely of the same kind as that which is familiar to us in mechanical devices which aid thought and calculation”
Craik is saying that symbols in the mind are a similar kind of thing, used in a similar kind of way, to symbols used within computers.
Humans possess models of reality inside their heads in the same way that scientists model phenomena. Models in AI serve as both engineering models and psychological models.
6
The Computational and Representational Understanding of the Mind (CRUM)
Program: data structures + algorithms = running program
Mind: mental representations + computational procedures = thinking
7
Do brains work just like digital computers?
Metaphors of Mind follow the technology of the day – Victorians compared mental processes to mechanical processes. Levels of description will be discussed in Lecture 8 in Week 4.
(Reading - Thagard, Mind: Introduction to Cognitive Science, chapter 1)
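The programming side of the CRUM analogy can be made concrete with a small, hypothetical sketch (my illustration, not from the lecture): a data structure plus an algorithm yields a running program, just as CRUM claims mental representations plus computational procedures yield thinking.

```python
# CRUM's analogy, programming side:
#   data structures + algorithms = running program
# Here the data structure is a list of numbers and the algorithm is
# a sorting procedure; running one on the other is the "program".

data_structure = [3, 1, 2]           # analogous to mental representations

def algorithm(xs):                   # analogous to computational procedures
    """A simple insertion sort: a procedure that operates on the data."""
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)          # place x in its sorted position
    return result

running_program = algorithm(data_structure)  # analogous to thinking
```

The analogy, of course, says nothing about whether the brain's "procedures" resemble anything this explicit; that is exactly the question the slide raises.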
Forward engineering AND reverse engineering in AI
Daniel Dennett (1994):
The forward engineer builds an artifact (a robot or software program) that accomplishes a capability, “however he wants to”
The reverse engineer would show, "through building, that he had figured out how the human mechanism works"
The reverse engineer assumes that although "the historical design process of evolution doesn't proceed by an exact analogue of the top-down engineering process ... reverse engineering is just as applicable a methodology to systems designed by Nature, as to systems designed by engineers."
This is because "even though the forward processes have been different, the products are of the same sort, so that the reverse process of functional analysis should work as well on both sorts of product."
8
Variety of perspectives with the Computational and Representational Understanding of the Mind (CRUM)
CRUM = symbolic (GOFAI) and connectionist (nouvelle) computation
Logic, Rules, Concepts, Analogies, Images, Connections
Dennett – Darwinian, Skinnerian, Popperian, Gregorian
Minsky – layer framework – The Emotion Machine
Nilsson – Iconic versus feature-based
Sloman – Analogical versus Fregean
Nilsson – Reactive versus deliberative; reasoning versus projecting
(Reading - Thagard, Mind: Introduction to Cognitive Science, chapters 1-8; Dennett – Kinds of Minds; Nilsson – Artificial Intelligence pp ; Sloman (1971) – Interactions between Philosophy and Artificial Intelligence: The role of intuition and non-logical reasoning in intelligence)
9
Boundaries of Cognitive Science
AI and Emotion
AI and Consciousness
Embodied AI
Situated AI
AI and Dynamic Systems
AI and Mathematical knowledge
10
AI and Emotion
How can artifacts possess emotions?
A functional explanation for emotion
Herbert Simon (1967) – emotion as a global interrupt to processing
Emotions linked to goals
John McCarthy – The robot and the baby
Drew McDermott – Artificial Intelligence Meets Natural Stupidity
Aaron Sloman (1996) – a functional explanation for Grief
The Cognition and Affect Directory – many papers with an AI approach to emotion
11
AI and Consciousness
Different meanings for the term 'consciousness'. Easy and hard problem (Chalmers)
Can artifacts possess consciousness?
Is this a more difficult question than that for emotion? Explain your answer.
What has the Turing Test got to do with consciousness?
Sensory qualia: difference between Cheddar and Wensleydale cheese
Bernard Baars – global workspace theory of consciousness, invoked by Stan Franklin and Murray Shanahan
12
Embodied AI
Being in the World (Heidegger, Dreyfus, Brooks)
Intelligence is essentially non-representational (by this, researchers in embodied cognition mean without central representations such as symbols).
Direct perception (Gibson) – rejects the inferential view of perception; we perceive affordances
(Reading: Thagard, Mind chapter 10; Haugeland, Mind Design II – chapter 6; Dreyfus, From Micro-Worlds to Knowledge Representation: AI at an Impasse – chapter 15; Brooks, Intelligence Without Representation; Clark, Being There)
13
Dynamic Systems
Non-(computational-representational) approaches to human thinking
Thagard (page 170): “Instead of proposing a set of representations and processes, we should follow the successful example of physics and biology and try to develop equations that describe how the mind changes over time.”
Theoretical tools of a dynamic systems analysis: state space, attractors, chaotic systems, phase transitions, saddle points
Will a dynamic systems analysis facilitate the engineering aims of AI?
(Reading: Thagard, Mind chapter 11, Haugeland, Mind Design II, chapter 16, van Gelder Dynamics and Cognition)
14
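The dynamical-systems vocabulary above (state space, attractors, chaos) can be illustrated with a standard textbook example, the logistic map x_{n+1} = r·x_n·(1 − x_n). The sketch below is my illustration of the concepts, not anything specified in the lecture.

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n): a one-dimensional
# dynamic system whose state space is the unit interval [0, 1].

def iterate(r: float, x0: float, steps: int) -> float:
    """Run the map forward `steps` steps from initial state x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

def max_divergence(r: float, x0: float, eps: float, steps: int) -> float:
    """Largest gap between two trajectories started eps apart."""
    x, y = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        worst = max(worst, abs(x - y))
    return worst

# Attractor: for r = 2.5 every trajectory in (0, 1) settles on the
# fixed point x* = 1 - 1/r = 0.6, wherever it starts.
settled = iterate(r=2.5, x0=0.2, steps=200)

# Chaos: for r = 4.0 two trajectories started a billionth apart
# drift far apart (sensitive dependence on initial conditions).
div = max_divergence(r=4.0, x0=0.2, eps=1e-9, steps=100)
```

The contrast between the two parameter settings is the point: the same equation yields either a single attractor or chaotic behaviour, behaviour described by equations of change over time rather than by representations and processes.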
Mathematical knowledge
Deriving all mathematical knowledge from a few basic assumptions is not possible
Gödel's incompleteness theorem
Thagard on the argument that a computational account of mind is impossible:
1. Any computer that claims to model the human mind is an instantiation of a formal system.
2. If this formal system is consistent and adequate for arithmetic, then by Gödel's theorem it is incomplete, in having a formula that is neither provable nor disprovable.
3. But the human mind can see that this formula is true, so there is something that the mind can do that the computer cannot do.
4. Hence, the mind is not a computer.
(Reading: Thagard, Mind chapter 11; Hofstadter, Gödel, Escher, Bach)
15
Review of subjects for student presentations
History of AI
Boundaries of Cognitive Science
AI Applications – list of weblinks on module page (copied from last year)
Subjects that should not form presentations
Deadline for informing me of your intended subject: Monday 29th January
16
Sources of information for student presentations
Your tutor, other lecturers, other students, demonstrators etc.
Look at the research pages of the school website to see the kind of research different lecturers do.
I recommend emailing to arrange an appointment; this will give the lecturer time to think about your request and perhaps give you more information.
Google, Wikipedia.com, The websites of famous AI Universities: Stanford, MIT, Carnegie Mellon, in the UK Edinburgh and Sussex
Text-books and magazines like New Scientist (which you can search online)
Recently published AI books for the general reader: Stan Franklin – Artificial Minds, Andy Clark – Being There
Or other types of review book: Margaret Boden – Mind as Machine
17
Deadline Reminder
Email [email protected] by 6th February with your intended subject for presentation
18