
Introduction to Artificial Intelligence Lecture 13: Neural Network Basics

November 7, 2012

Note about Resolution Refutation

You have a set of hypotheses h1, h2, …, hn, and a conclusion c.

Your argument is that whenever all of the h1, h2, …, hn are true, then c is true as well.

In other words, whenever all of the h1, h2, …, hn are true, then ¬c is false.

If and only if the argument is valid, the conjunction h1 ∧ h2 ∧ … ∧ hn ∧ ¬c is always false, because either (at least) one of the h1, h2, …, hn is false, or, if they are all true, then ¬c is false.

Therefore, if this conjunction resolves to false, we have shown that the argument is valid.
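For propositional clauses, this refutation procedure can be sketched directly in code. The helper below is illustrative, not from the lecture: clauses are frozensets of string literals with '~' marking negation, the conclusion is restricted to a single positive atom, and we resolve until the empty clause appears or no new resolvents can be derived.

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals, '~' = negation)."""
    resolvents = []
    for lit in c1:
        complement = lit[1:] if lit.startswith("~") else "~" + lit
        if complement in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {complement})))
    return resolvents

def entails(hypotheses, conclusion):
    """Check h1 ∧ ... ∧ hn ⊨ c by refuting h1 ∧ ... ∧ hn ∧ ¬c.
    In this sketch, 'conclusion' must be a single positive atom."""
    clauses = set(hypotheses) | {frozenset({"~" + conclusion})}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:          # empty clause: the conjunction is unsatisfiable
                    return True
                new.add(r)
        if new <= clauses:         # no new resolvents: refutation failed
            return False
        clauses |= new
```

For example, from P and P → Q (written as the clause {~P, Q}), `entails([frozenset({"P"}), frozenset({"~P", "Q"})], "Q")` derives the empty clause and returns True.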


Propositional Calculus

You have seen that resolution, including resolution refutation, is a suitable tool for automated reasoning in the propositional calculus.

If we build a machine that represents its knowledge as propositions, we can use these mechanisms to enable the machine to deduce new knowledge from existing knowledge and verify hypotheses about the world.

However, propositional calculus has some serious restrictions in its capability to represent knowledge.


Propositional Calculus

In propositional calculus, atoms have no internal structure; we cannot reuse the same proposition for a different object, but each proposition always refers to the same object.

For example, in the toy block world, the propositions ON_A_B and ON_A_C are completely different from each other. We could just as well call them PETER and BOB instead.

So if we want to express rules that apply to a whole class of objects, in propositional calculus we would have to define separate rules for every single object of that class.
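This blow-up is easy to make concrete. In the hypothetical block-world sketch below (the On/Above rule is an illustrative assumption, not from the slides), the propositional encoding needs a separate rule for every combination of blocks, while a single rule with variables covers them all:

```python
# Propositional style: ON_A_B, ON_B_C, ... are unrelated atoms, so the rule
# "if x is on y and y is on z, then x is above z" must be written out
# once per combination of distinct blocks.
blocks = ["A", "B", "C"]
propositional_rules = [
    (f"ON_{x}_{y}", f"ON_{y}_{z}", f"ABOVE_{x}_{z}")
    for x in blocks for y in blocks for z in blocks
    if len({x, y, z}) == 3
]
print(len(propositional_rules))  # 6 rules already, for only 3 blocks

# Predicate style: one rule with variables,
# ∀x,y,z (On(x,y) ∧ On(y,z) → Above(x,z)), applied to a set of facts.
def above(on_facts, x, z):
    return any(("On", x, y) in on_facts and ("On", y, z) in on_facts
               for y in blocks)
```

With ten blocks the propositional encoding would already need 720 copies of this one rule.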


Predicate Calculus

So it is a better idea to use predicates instead of propositions.

This leads us to predicate calculus.

Predicate calculus has symbols called
• object constants,
• relation constants, and
• function constants.

These symbols will be used to refer to objects in the world and to propositions about the world.
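One simple way to make these three symbol kinds concrete is to encode terms as nested tuples; the representation below is a sketch of this idea, not a notation the lecture prescribes:

```python
# Object constants are plain strings; function and relation constants are
# the first element of a tuple applied to argument terms.
A, B = "A", "B"                      # object constants
on_a_b = ("On", A, B)                # relation constant On applied to A and B
top_of_a = ("hat", A)                # function constant: the block on top of A
nested = ("On", ("hat", A), B)       # terms can nest: On(hat(A), B)

def is_term(x):
    """A term: an object constant, or a function constant applied to terms."""
    if isinstance(x, str):
        return True
    return isinstance(x, tuple) and len(x) >= 2 and all(is_term(t) for t in x[1:])

print(is_term(("hat", ("hat", A))))  # True: hat(hat(A)) is a well-formed term
```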


Quantification

Introducing the universal quantifier ∀ and the existential quantifier ∃ facilitates the translation of world knowledge into predicate calculus.

Examples:

Paul beats up all professors who fail him.

∀x (Professor(x) ∧ Fails(x, Paul) → BeatsUp(Paul, x))

There is at least one intelligent UMB professor.

∃x (UMBProf(x) ∧ Intelligent(x))
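Over a finite domain, the two quantifiers correspond directly to Python's all() and any(), so the two example formulas can be checked against a small model (every fact below is a made-up assumption, chosen only for illustration):

```python
# All facts below are illustrative assumptions, not claims from the lecture.
people = ["Paul", "Smith", "Jones"]
professor = {"Smith", "Jones"}
fails_paul = {"Smith"}               # x in fails_paul  <=>  Fails(x, Paul)
beats_up = {("Paul", "Smith")}       # (a, b) in beats_up  <=>  BeatsUp(a, b)
umb_prof = {"Smith", "Jones"}
intelligent = {"Jones"}

# Universal formula: "Paul beats up all professors who fail him" holds
# iff the implication is true for every x in the domain (all()).
forall_holds = all(
    not (x in professor and x in fails_paul) or ("Paul", x) in beats_up
    for x in people
)

# Existential formula: "there is at least one intelligent UMB professor"
# holds iff some x in the domain satisfies the conjunction (any()).
exists_holds = any(x in umb_prof and x in intelligent for x in people)

print(forall_holds, exists_holds)  # True True
```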


Knowledge Representation

a) There are no crazy UMB students.
∀x (UMBStudent(x) → ¬Crazy(x))

b) All computer scientists are either rich or crazy, but not both.
∀x (CS(x) → [Rich(x) ∧ ¬Crazy(x)] ∨ [¬Rich(x) ∧ Crazy(x)])

c) All UMB students except one are intelligent.
∃x (UMBStudent(x) ∧ ¬Intelligent(x)) ∧ ∀x,y (UMBStudent(x) ∧ UMBStudent(y) ∧ ¬Identical(x, y) → Intelligent(x) ∨ Intelligent(y))

d) Jerry and Betty have the same friends.
∀x ([Friends(Betty, x) → Friends(Jerry, x)] ∧ [Friends(Jerry, x) → Friends(Betty, x)])

e) No mouse is bigger than an elephant.
∀x,y (Mouse(x) ∧ Elephant(y) → ¬BiggerThan(x, y))
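Formulas like these can be verified over a finite domain by brute-force enumeration, with all() playing the role of the universal quantifier. The sketch below checks versions of (a) and (e) against a tiny made-up model (every fact is an illustrative assumption):

```python
# A tiny made-up model; every fact below is an illustrative assumption.
domain = ["alice", "bob", "dumbo", "mickey"]
umb_student = {"alice", "bob"}
crazy = set()                          # nobody is crazy in this model
mouse = {"mickey"}
elephant = {"dumbo"}
bigger_than = {("dumbo", "mickey")}    # (x, y) in bigger_than <=> BiggerThan(x, y)

# (a) "There are no crazy UMB students": the implication holds for every x.
a_holds = all(x not in umb_student or x not in crazy for x in domain)

# (e) "No mouse is bigger than an elephant": checked over all pairs (x, y).
e_holds = all(
    not (x in mouse and y in elephant) or (x, y) not in bigger_than
    for x in domain for y in domain
)

print(a_holds, e_holds)  # True True
```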


But now, finally…

… let us move on to…

Artificial Neural Networks


Computers vs. Neural Networks

“Standard” Computers       Neural Networks
one CPU                    highly parallel processing
fast processing units      slow processing units
reliable units             unreliable units
static infrastructure      dynamic infrastructure


Why Artificial Neural Networks?

There are two basic reasons why we are interested in building artificial neural networks (ANNs):

• Technical viewpoint: Some problems such as character recognition or the prediction of future states of a system require massively parallel and adaptive processing.

• Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.


Why Artificial Neural Networks?

Why do we need a paradigm other than symbolic AI for building “intelligent” machines?

• Symbolic AI is well-suited for representing explicit knowledge that can be appropriately formalized.

• However, learning in biological systems is mostly implicit – it is an adaptation process based on uncertain information and reasoning.

• ANNs are inherently parallel and work extremely efficiently if implemented in parallel hardware.


How do NNs and ANNs work?

• The “building blocks” of neural networks are the neurons.

• In technical systems, we also refer to them as units or nodes.

• Basically, each neuron
– receives input from many other neurons,
– changes its internal state (activation) based on the current input, and
– sends one output signal to many other neurons, possibly including its input neurons (recurrent network).
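These three steps can be sketched as a single function. This is a minimal illustration assuming the common weighted-sum-plus-sigmoid neuron model; the lecture has not yet committed to a particular activation function:

```python
import math

def neuron(inputs, weights, bias=0.0):
    """One artificial neuron: combine the incoming signals into an internal
    activation, then send a single output value in (0, 1) to the next units."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid of the activation

# Two input signals arrive; one output signal is passed on to other neurons.
print(neuron([0.5, -1.0], [2.0, 1.0]))  # activation 0.0 -> output 0.5
```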


How do NNs and ANNs work?

• Information is transmitted as a series of electrical impulses, so-called spikes.

• The frequency and phase of these spikes encode the information.

• In biological systems, one neuron can be connected to as many as 10,000 other neurons.


“Data Flow Diagram” of Visual Areas in Macaque Brain

Blue: motion perception pathway

Green: object recognition pathway