A New Artificial Intelligence 7

Kevin Warwick

Embodiment & Questions

Issues of Modern AI

• We will look here at some of the important questions facing AI today

• We will open up some of the directions being taken

• We will attempt to move away from the restrictions imposed by Classical AI

Brains

• A brain has different neuronal structures, each with a specialised role – sensory, motor, interneurons

• Neurons communicate through BINARY (not analogue) codes – see the sketch below

• We know something about the physical-chemical aspects of the brain

• We know almost nothing about how memories are encoded or faces are recognised
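
To make the all-or-none point concrete, here is a minimal leaky integrate-and-fire sketch (my own illustration, not from the slides): the input is analogue, the output is a binary spike train. The threshold and leak values are arbitrary assumptions, not biophysical data.

```python
# Minimal sketch (my own illustration): a leaky integrate-and-fire unit whose
# input is analogue but whose output is a binary, all-or-none spike train.

def spike_train(inputs, threshold=1.0, leak=0.9):
    """Return a 0/1 spike list for a sequence of analogue input currents."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current        # membrane potential integrates and leaks
        if v >= threshold:            # all-or-none firing decision
            spikes.append(1)
            v = 0.0                   # reset after a spike
        else:
            spikes.append(0)
    return spikes

print(spike_train([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))   # -> [0, 0, 1, 0, 0, 1]
```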

Innate Knowledge

• Can learning occur on a blank slate?

• Must there be some prior bias?

• Are memories inherited?

• Meaningful convergence of ANNs depends on the number of neurons + topology + learning (see the sketch after this list)

• Is this also true of a brain?

• Are there hard-wired cognitive biases?
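
As a sketch of the convergence point (my own example, not from the slides): a single-layer perceptron never converges on XOR however long it trains, because the topology has no hidden layer; the number of neurons, the topology and the learning rule matter together.

```python
import numpy as np

# Hypothetical illustration: a single-layer perceptron cannot converge on XOR,
# whatever the learning rate or number of passes, because the topology has no
# hidden layer. Neurons, topology and the learning rule matter together.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                    # XOR targets

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(1000):
    errors = 0
    for xi, ti in zip(X, y):
        out = int(w @ xi + b > 0)             # hard-threshold output unit
        if out != ti:
            w += lr * (ti - out) * xi         # perceptron learning rule
            b += lr * (ti - out)
            errors += 1
    if errors == 0:                           # would mean convergence; never happens
        break

misses = sum(int(w @ xi + b > 0) != ti for xi, ti in zip(X, y))
print("misclassified XOR patterns after training:", misses)   # always > 0
```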

Genetics/emergence

• Darwinian (natural) selection – shapes individual behaviours

• AND/OR

• Lamarckian evolution – offspring inherit acquired characteristics (e.g. giraffe)

• LEAD TO

• Strengthening of particular circuits in the brain & weakening of others
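
A toy selection loop (my own sketch, with an arbitrary target pattern as the assumed optimum) makes the strengthening/weakening idea concrete: over generations, selection plus mutation pushes some "circuit weights" up and others down.

```python
import random

# Toy sketch (my own illustration, arbitrary target pattern): selection plus
# mutation strengthens some "circuit weights" and weakens others over time.

TARGET = [1.0, 0.0, 1.0, 1.0, 0.0]

def fitness(weights):
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

population = [[random.random() for _ in range(len(TARGET))] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                          # selection keeps the fittest
    population = [[w + random.gauss(0, 0.05) for w in random.choice(parents)]
                  for _ in range(20)]                 # mutated offspring

best = max(population, key=fitness)
print([round(w, 2) for w in best])                    # drifts towards TARGET
```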

Plato – unsupervised learning?

• How can you enquire, Socrates, into that which you do not already know?

• What will you put forth as the subject of the enquiry?

• And if you find out what you want, how will you ever know that this is what you did not know?

• i.e. how can we know we have arrived somewhere when we do not know where we are going?
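
One pragmatic modern answer is unsupervised learning: a learner can find structure it was never told about and judge success by an internal criterion. A minimal k-means sketch (my own example, with made-up 1-D data) shows the idea.

```python
# Unsupervised-learning sketch (my own example, invented 1-D data): the learner
# is never told what the clusters are, yet it still "finds out", judging
# success by an internal criterion (distance to the nearest centre).

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 9.1, 8.8, 9.4]
centres = [0.0, 4.0, 10.0]                 # arbitrary starting guesses

for _ in range(10):                        # a few k-means iterations
    clusters = [[] for _ in centres]
    for x in data:                         # assign each point to its nearest centre
        nearest = min(range(len(centres)), key=lambda i: abs(x - centres[i]))
        clusters[nearest].append(x)
    centres = [sum(c) / len(c) if c else centres[i]
               for i, c in enumerate(clusters)]   # move centres to cluster means

print([round(c, 2) for c in centres])      # roughly [1.0, 5.07, 9.1]
```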

Questions

• Perceptions depend on distributed neural codes – how are these combined?

• What we perceive is highly dependent on how our brain attempts to interpret a situation/scene – how?

• How does an individual acquire language?

• How does a brain index temporally related information?

Agents + Emergence

• Idea – the mind is organised into sets of specialised functional units (Minsky)

• Modular theories are a good fit for agents

• Emergent globally intelligent behaviour arises from the cooperation of large numbers of agents (see the sketch after this list)

• Supported by fMRI scans
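
A crude way to see such emergence (my own illustration, with assumed accuracy and population figures): agents that are individually barely better than chance become highly reliable when many of them vote together.

```python
import random

# Emergence sketch (my own illustration, assumed numbers): each "agent" is only
# 55% accurate on its own, but a large population voting together is almost
# always right: global competence emerging from many simple cooperating units.

def agent_vote(truth):
    return truth if random.random() < 0.55 else 1 - truth

def society_vote(truth, n_agents=501):
    votes = sum(agent_vote(truth) for _ in range(n_agents))
    return int(votes > n_agents / 2)          # simple majority decision

trials = 1000
correct = sum(society_vote(truth=1) == 1 for _ in range(trials))
print(f"society accuracy over {trials} trials: {correct / trials:.2%}")
```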

Piaget

• We assimilate external phenomena according to our present understanding

• We accommodate our understanding to the demands of the phenomena
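
Read computationally (a loose analogy of my own, not Warwick's or Piaget's formulation): small mismatches are assimilated into the current model unchanged, while a large mismatch forces the model to accommodate by updating itself. All numbers below are invented.

```python
# Loose computational analogy (my own, invented numbers): observations close to
# the current schema are assimilated as-is; a large mismatch forces the schema
# to accommodate by shifting towards the observation.

schema, tolerance, rate = 10.0, 2.0, 0.5

for observation in [10.4, 9.8, 10.1, 15.0, 15.2, 14.9]:
    error = observation - schema
    if abs(error) > tolerance:
        schema += rate * error               # accommodate: adjust the schema
    # otherwise: assimilate, the existing schema already fits the observation
    print(f"obs={observation:5.1f}  schema={schema:6.2f}")
```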

Kant

• Schemata – a priori structures used to organise experience of the external world

• Observation is not passive and neutral but active and interpretive

Perception

• Perceived information never fits precisely into our schemata

• Depends on I/O devices – in humans and robots

• With different I/O the real world will be perceived differently (see the sketch after this list)

• Each entity has a different concept of reality

• There is NO absolute reality! (Berkeley)
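
A tiny sketch of the I/O point (my own example, with invented signal values): the same physical quantity read through two different sensors produces two quite different internal "realities".

```python
# Sketch (my own example, invented values): the same physical signal read
# through two different I/O devices yields two different internal "realities".

signal = [0.12, 0.48, 0.51, 0.87, 0.93]        # the "real world"

def coarse_sensor(x):
    return 1 if x > 0.5 else 0                 # 1-bit sensor: bright or dark

def fine_sensor(x):
    return round(x * 255)                      # 8-bit sensor

print([coarse_sensor(x) for x in signal])      # [0, 0, 1, 1, 1]
print([fine_sensor(x) for x in signal])        # [31, 122, 130, 222, 237]
```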

Embodiment in cognition

• Classical AI – the instantiation of a physical symbol system is irrelevant to its performance; what matters is the structure (brain in a vat)

• New AI – intelligent action requires a physical embodiment that allows the entity to be integrated into the world

• Present-day robot I/O is limited – more complex interfacing is required

Culture

• Classical AI – the individual mind is the sole source of intelligence

• But knowledge is a social construct – an understanding of the social context of knowledge and behaviour is also important (memes!)

Interpretations - Communication

• Symbols are used in context – a domain has different interpretations, depending on the goals

• Sign interpretation – a coding system

• The meaning of a symbol is understood in the context of its role as an interpreter

Falsifiable Computation

• Any number of confirming experiments is not sufficient to establish a theory

• Scientific theories must be falsifiable

• There must exist circumstances under which a model is a poor approximant

• Many computational models are not falsifiable – universal machines! (see the sketch after this list)

• Need computation that is falsifiable
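
A small sketch of the contrast (my own example, with invented data): a linear model makes a risky prediction that the observations can refute, whereas a lookup table that simply memorises everything it has seen, standing in here for an over-flexible universal machine, can never be refuted by those observations.

```python
# Sketch of the contrast (my own example, invented data): the linear model
# makes a risky claim the data can refute; the lookup "model" reproduces every
# observation it has seen, so those observations never count against it.

observations = [(0, 0.0), (1, 1.0), (2, 4.0), (3, 9.0)]   # actually quadratic

def linear_model(x):
    return 3.0 * x                       # falsifiable: predicts y grows linearly

lookup_model = dict(observations)        # memorises everything it has observed

for x, y in observations:
    refuted = abs(linear_model(x) - y) > 0.5
    print(x, y, linear_model(x), "refuted" if refuted else "consistent")
    assert lookup_model[x] == y          # the memoriser is never contradicted
```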

Let’s Move On

• Classical AI (Hobbes/Locke/Aristotle) – intelligent processes conform to universal laws and are understandable/modellable

• Converse (Winograd/Penrose/Weizenbaum) – important aspects of intelligence cannot be modelled

• A model/simulation is not the real thing

• The only ‘exact’ simulation of a human brain would be that specific human brain and no other – even then it would need to be in its place/time

Differences

• Just because something is different does not make it worse

• A simulation of a human brain could be more/less intelligent/conscious/self-aware/understanding

• Models/simulations are used to explore, explain & predict – if a model is proven to be accurate for this then that’s just fine

Comments on Intelligence

• As long as we understand the basics of what intelligence is, that is sufficient

• We should not get bogged down trying to copy exactly the functioning of the human brain, interesting though that might be

• More interesting is to create entities that are intelligent in their own right

Next

• Growing Brains – Biological AI

Contact Information

• Web site: www.kevinwarwick.com

• Email: [email protected]

• Tel: (44)-1189-318210

• Fax: (44)-1189-318220

• Professor Kevin Warwick, Department of Cybernetics, University of Reading, Whiteknights, Reading, RG6 6AY, UK