Introduction / Session 1. Course: T0293/Neuro Computing. Year: 2005


1

Introduction / Session 1

Course: T0293/Neuro Computing

Year: 2005

2

Basic Concept of Neural Networks

• Traditional, sequential, logic-based digital computing excels in many areas but has been less successful for other types of problems.

• Artificial Neural Networks (ANN) were motivated by a desire both to understand the brain and to emulate some of its strengths.

• The basic idea of ANN is to adopt human brain-like mechanisms, i.e.:

1. Each processing element receives many signals
2. Fault tolerance
3. Parallel processing

3

• A NN is characterised by:

1. its pattern of connections between the neurons (= its architecture)
2. its method of determining the weights on the connections (= learning / training algorithm)
3. its activation function

• A NN consists of many processing elements called neurons / units / cells / nodes
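The three ingredients above can be sketched for a single processing element. This is a minimal illustration, assuming a logistic activation; the function names, weights, and bias are illustrative, not part of the lecture.

```python
import math

def logistic(net):
    """Activation function: squashes the net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def unit_output(inputs, weights, bias, activation=logistic):
    """One neuron: weighted sum over its connections, then activation."""
    net = bias + sum(x * w for x, w in zip(inputs, weights))
    return activation(net)

# Illustrative numbers: two input signals, two connection weights
y = unit_output([0.5, -1.0], [0.8, 0.2], bias=0.1)
```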

4

• Characteristics of Neural Networks

Philosophical:
– Learning
– Generalization
– Abstraction
– Applicability

Technical:
– Node characteristics
– Topology (the structure of connections between nodes)
– Learning rules

5

Neural Networks Development

• The 1940s: The Beginning of Neural Nets

– McCulloch-Pitts Neurons
• Warren McCulloch & Walter Pitts (1943)
• The neurons can be arranged into a net to produce any output that can be represented as a combination of logic functions
• Feature: the idea of a threshold such that (s.t.) if the net input to a neuron is greater than the threshold, then the unit fires

• Most likely used as logic circuits
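A sketch of such units acting as logic circuits, assuming unit connection weights and the classic integer thresholds for AND and OR; these particular choices are illustrative.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires iff the net input reaches the threshold."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

def AND(x1, x2):
    # both inputs must be active for the net input to reach 2
    return mp_neuron([x1, x2], [1, 1], threshold=2)

def OR(x1, x2):
    # a single active input already reaches the threshold of 1
    return mp_neuron([x1, x2], [1, 1], threshold=1)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "AND:", AND(x1, x2), "OR:", OR(x1, x2))
```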

6

– Hebb Learning
• Donald Hebb (1949)
• The first learning law for ANNs
• Premise: if two neurons are active simultaneously, then the strength of the connection between (b/w) them should be increased

• Feature: weight adjustment
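The weight adjustment can be sketched as follows, assuming a learning rate of 1; the function name and example values are illustrative.

```python
def hebb_update(weights, x, y, lr=1.0):
    """Hebb rule: w_i += lr * x_i * y, so a weight grows only when
    input x_i and output y are active at the same time."""
    return [w + lr * xi * y for w, xi in zip(weights, x)]

w = [0.0, 0.0]
w = hebb_update(w, x=[1, 1], y=1)  # both ends active: both weights grow
w = hebb_update(w, x=[1, 0], y=0)  # output inactive: no change
print(w)  # -> [1.0, 1.0]
```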

7

• The 1950s & 1960s: The First Golden Age of Neural Networks

– Perceptrons
• Group of researchers (Block, 1962; Minsky & Papert, 1969; Rosenblatt, 1958, 1959, 1962)
• A typical perceptron consists of an input layer (the retina) connected by paths with fixed weights to associator neurons; the weights on the connection paths were adjustable

• The learning rule uses an iterative weight adjustment that is more powerful than the Hebb rule
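The iterative adjustment can be sketched on the OR function, assuming 0/1 inputs, a step activation, and a learning rate of 1; unlike the Hebb rule, weights move only on misclassified examples.

```python
def step(net):
    return 1 if net > 0 else 0

def train_perceptron(samples, n_inputs, epochs=10, lr=1.0):
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = step(b + sum(wi * xi for wi, xi in zip(w, x)))
            err = target - out
            if err != 0:  # adjust only when the response is incorrect
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

or_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(or_data, n_inputs=2)
print([step(b + w[0] * x[0] + w[1] * x[1]) for x, _ in or_data])  # -> [0, 1, 1, 1]
```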

8

– ADALINE
• Bernard Widrow & Marcian Hoff (1960)
• Closely related to the perceptron learning rule
• The perceptron learning rule adjusts the connection weights to a unit whenever the response of the unit is incorrect
• The delta rule adjusts the weights to reduce the difference b/w the net input to the output unit and the desired output
• The learning rule for a single-layer network is a precursor of the BP (backpropagation) rule for multi-layer nets
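The delta (Widrow-Hoff) rule can be sketched on a bipolar AND problem. Note the contrast with the perceptron rule: the error here is measured against the raw net input, so weights move even when the thresholded response happens to be correct. The learning rate, epoch count, and data are illustrative.

```python
def delta_rule(samples, n_inputs, epochs=50, lr=0.1):
    """ADALINE training: w_i += lr * (target - net) * x_i."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            net = b + sum(wi * xi for wi, xi in zip(w, x))
            err = target - net  # error on the net input, not the response
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Bipolar AND: inputs and targets are +1 / -1
data = [([-1, -1], -1), ([-1, 1], -1), ([1, -1], -1), ([1, 1], 1)]
w, b = delta_rule(data, n_inputs=2)
```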

9

• The 1970s: The Quiet Years

– Kohonen
• Teuvo Kohonen (1972) – Helsinki Univ of Tech
• Dealt with Associative Memory Neural Nets
• Basis of the development of self-organising feature maps that use a topological structure for the cluster units

• Application: speech recognition, solution of the TSP, etc
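One update step of such a self-organising feature map can be sketched as: find the cluster unit closest to the input, then pull that unit and its topological neighbours toward the input. The 1-D topology, learning rate, and radius below are illustrative assumptions.

```python
def som_step(units, x, lr=0.5, radius=1):
    """units: weight vectors arranged on a 1-D topology. Returns the index
    of the best-matching unit after updating its neighbourhood."""
    # best-matching unit = smallest squared distance to the input
    dists = [sum((u - xi) ** 2 for u, xi in zip(unit, x)) for unit in units]
    bmu = dists.index(min(dists))
    for j, unit in enumerate(units):
        if abs(j - bmu) <= radius:  # topological neighbourhood of the winner
            units[j] = [u + lr * (xi - u) for u, xi in zip(unit, x)]
    return bmu

units = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
bmu = som_step(units, x=[0.9, 0.8])  # unit 2 wins; units 1 and 2 move
```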

10

– Anderson
• James Anderson, Brown Univ (1968, 1972)
• Dealt with Assoc Mem NN

– Grossberg
• Stephen Grossberg (1967 – 1988)

– Carpenter
• Gail Carpenter, together with Grossberg, developed a theory of self-organizing neural networks called adaptive resonance theory (ART1 for binary input patterns, ART2 for continuous-valued inputs)

11

• The 1980s: Renewed Enthusiasm

– Backpropagation
• Overcame the problems discovered in the previous decade, which had two causes: the failure of single-layer perceptrons to solve such simple (mapping) problems as the XOR function, and the lack of a general method of training a multilayer net.
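A minimal sketch of that idea: one hidden layer of two sigmoid units trained by gradient descent on the XOR mapping that a single-layer perceptron cannot learn. The architecture, seed, learning rate, and epoch count are illustrative choices, and training from a random start usually, though not always, reaches low error.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
# input->hidden weights for 2 hidden units, then hidden->output weights
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [random.uniform(-1, 1) for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = random.uniform(-1, 1)

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(b1[j] + W1[j][0] * x[0] + W1[j][1] * x[1]) for j in range(2)]
    o = sigmoid(b2 + W2[0] * h[0] + W2[1] * h[1])
    return h, o

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
lr = 0.5
for _ in range(20000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)  # output-layer error signal
        for j in range(2):
            d_h = d_o * W2[j] * h[j] * (1 - h[j])  # error propagated back
            W2[j] -= lr * d_o * h[j]
            W1[j][0] -= lr * d_h * x[0]
            W1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_o
loss_after = mse()
```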

– Hopfield Nets
• David Tank, John Hopfield (1982)
• A number of NNs have been developed based on fixed weights and adaptive activations
• These nets can serve as associative memory nets and can be used to solve constraint satisfaction problems such as the TSP
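Associative recall with fixed weights and adaptive activations can be sketched as follows: a Hebb-style outer-product rule stores a bipolar pattern once, and repeated threshold updates let a noisy probe settle onto it. The stored pattern and probe are illustrative.

```python
def hopfield_weights(patterns, n):
    """Fixed weights from the outer products of the stored patterns."""
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Adaptive activations: each unit follows the sign of its net input."""
    s = list(state)
    for _ in range(steps):
        for i in range(len(s)):
            net = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if net >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1]
W = hopfield_weights([stored], n=6)
noisy = [1, -1, -1, -1, 1, -1]  # one flipped bit
print(recall(W, noisy))  # -> [1, -1, 1, -1, 1, -1], the stored pattern
```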

12

– Neocognitron
• Kunihiko Fukushima et al (1975, 1988)
• A series of specialised NNs for character recognition has been developed

– Boltzmann Machine
• Nets in which weights or activations are changed on the basis of a probability density function
• These nets incorporate such classical ideas as simulated annealing and Bayesian decision theory
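The probabilistic activation at the heart of such nets can be sketched as: a unit turns on with a probability given by a logistic function of its net input divided by a temperature T. At high T the updates are nearly random; as T is lowered (simulated annealing) they become nearly deterministic. The temperature values below are an illustrative schedule, not from the lecture.

```python
import math
import random

def p_on(net, T):
    """Probability that a stochastic unit turns on at temperature T."""
    return 1.0 / (1.0 + math.exp(-net / T))

random.seed(1)
net = 2.0  # illustrative net input favouring the 'on' state
for T in (10.0, 1.0, 0.1):
    prob = p_on(net, T)
    state = 1 if random.random() < prob else 0  # stochastic activation
    print(f"T={T}: P(on)={prob:.3f}")
```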

13

• Future Prospects of Neural Networks

– Neural network models:
• Hopfield Net
• Multilayered networks (Back Propagation)
• Bidirectional Associative Memory
• Self-organizing Maps, etc.

– A wide range of applications:
• Pattern Recognition
• Image Compression
• Optimization, etc.