
DLP®-Driven, Optical Neural Network Results and Future Design

Emmett Redd, Professor, Missouri State University

Neural Network Applications

• Stock Prediction: Currency, Bonds, S&P 500, Natural Gas

• Business: Direct mail, Credit Scoring, Appraisal, Summoning Juries

• Medical: Breast Cancer, Heart Attack Diagnosis, ER Test Ordering

• Sports: Horse and Dog Racing

• Science: Solar Flares, Protein Sequencing, Mosquito ID, Weather

• Manufacturing: Welding Quality, Plastics or Concrete Testing

• Pattern Recognition: Speech, Article Class., Chem. Drawings

• No optical applications yet; we are starting with Boolean functions

• Most examples from www.calsci.com/Applications.html

Optical Computing & Neural Networks

• Optical Parallel Processing Gives Speed
  – Lenslet's Enlight 256: 8000 giga multiply-and-accumulates per second
  – Order 10^11 connections per second possible with holographic attenuators

• Neural Networks
  – Parallel versus Serial
  – Learn versus Program
  – Solutions beyond Programming
  – Deal with Ambiguous Inputs
  – Solve Non-Linear Problems
  – Thinking versus Constrained Results

Optical Neural Networks

• Sources are modulated light beams (pulse or amplitude)

• Synaptic Multiplications are due to attenuation of light passing through an optical medium (30 fs)

• Geometric or Holographic

• Target neurons sum signals from many source neurons.

• Squashing by operational amps or nonlinear optics

Standard Neural Net Learning

• We use a Training or Learning algorithm to adjust the weights, usually in an iterative manner.

[Diagram: a network maps inputs x1 … xN to outputs y1 … yM; the Learning Algorithm adjusts the weights using the Target Output (T) and other information.]

FWL-NN is equivalent to a standard Neural Network + Learning Algorithm


Optical Recurrent Neural Network

[Single-layer diagram: Signal Source (Layer Input) → Micromirror Array → Presynaptic Optics → Synaptic Medium (35 mm Slide) → Postsynaptic Optics → Target Neuron Summation → Squashing Functions → Layer Output, with Recurrent Connections feeding back. Only four synapses are shown; actual networks will have a large number of synapses. A multi-layer network has several consecutive layers.]

Definitions

• Fixed-Weight Learning Neural Network (FWL-NN) – A recurrent network that learns without changing synaptic weights

• Potency – A weight signal
• Tranapse – A Potency-modulated synapse
• Planapse – Supplies the Potency error signal
• Zenapse – A non-participatory synapse
• Recurron – A recurrent neuron
• Recurral Network – A network of Recurrons

Optical Fixed-Weight Learning Synapse

[Diagram: signals W(t-1), x(t), x(t-1), T(t-1), and y(t-1) flow through a Tranapse and a Planapse into the summation Σ.]

Page Representation of a Recurron

[Page diagram: three Tranapses and a Planapse combine the signals U(t-1), A(t), B(t), Bias, and O(t-1) into the summation Σ.]

Optical Neural Network Constraints

• Finite-Range Unipolar Signals [0, +1]
• Finite-Range Bipolar Attenuation [-1, +1]
• Excitatory/Inhibitory Handled Separately
• Limited-Resolution Signals
• Limited-Resolution Synaptic Weights
• Alignment and Calibration Issues
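These constraints can be captured in a small simulation. The sketch below is an illustrative NumPy model (not the optical hardware): a bipolar weight matrix is realized as two separate non-negative attenuation channels acting on unipolar signals, followed by a logistic squashing function.

```python
import numpy as np

def optical_layer(x, w, gain=16.0):
    """One optically constrained layer (illustrative model only).
    Inputs are unipolar [0, +1]; weights are bipolar [-1, +1] but are
    realized as separate excitatory and inhibitory attenuation channels."""
    x = np.clip(x, 0.0, 1.0)                # finite-range unipolar signals
    w = np.clip(w, -1.0, 1.0)               # finite-range bipolar attenuation
    w_exc = np.maximum(w, 0.0)              # excitatory channel
    w_inh = np.maximum(-w, 0.0)             # inhibitory channel, handled separately
    s = w_exc @ x - w_inh @ x               # difference of the two optical sums
    return 1.0 / (1.0 + np.exp(-gain * s))  # squashing
```

Limited signal and weight resolution would add quantization steps to `x` and `w`; they are omitted here for brevity.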

Optical System

DMD or DLP®

Optical Recurrent Neural Network

[Single-layer diagram repeated from the earlier slide.]

Design Details

• Digital Micromirror Device
• 35 mm Slide Synaptic Media
• CCD Camera
• Synaptic Weights – Positionally Encoded, Digital Attenuation
• Allows flexibility for evaluation

Networks: Recurrent AND, Unsigned Multiply, FWL Recurron

DMD/DLP®—A Versatile Tool

• Alignment and Distortion Correction
  – Align DMD/DLP® to CCD: PEGS
  – Align Synaptic Media to CCD: HOLES
  – Calculate DMD/DLP®-to-Synaptic-Media Alignment: Putting PEGS in HOLES
  – Correct Projected DMD/DLP® Images

• Nonlinearities

Stretch, Squash, and Rotate

[Slide: homogeneous transforms P act on (x, y) to stretch, squash, and rotate; residual distortion is corrected with a polynomial basis of products of x and y:

1 (none)
x^1y^0, x^0y^1 (linear)
x^2y^0, x^1y^1, x^0y^2 (quadratic)
x^3y^0, x^2y^1, x^1y^2, x^0y^3 (cubic)
etc.]

Where We Are and Where We Want to Be

[Figures: DMD Dots Image (pegs) and Slide Dots Image (holes), each on a roughly 1280 x 1000 CCD pixel grid; C and C' label the measured positions.]

[Figure: DMD Dots Image (pegs).]

CCD Image of Known DLP® Positions

Automatically Finds Points via Projecting 42 Individual Pegs

C = H ● D

H = C ● D^P (D^P denotes the pseudoinverse of D)
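Solving H = C ● D^P is a least-squares fit over the peg correspondences. A minimal NumPy sketch (the slides' actual routines are in MATLAB and are not shown; homogeneous 2-D coordinates and an affine map are assumed here):

```python
import numpy as np

def calibrate(dmd_pts, ccd_pts):
    """Recover the map H with C = H @ D from matched peg positions.
    Points are stacked as homogeneous columns; the pseudoinverse gives
    the least-squares solution H = C @ pinv(D)."""
    D = np.vstack([np.asarray(dmd_pts, float).T, np.ones(len(dmd_pts))])  # 3 x N
    C = np.vstack([np.asarray(ccd_pts, float).T, np.ones(len(ccd_pts))])  # 3 x N
    return C @ np.linalg.pinv(D)
```

With 42 automatically projected pegs, the system is heavily overdetermined, so the pseudoinverse averages out per-peg measurement noise.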

[Figure: Slide Dots Image (holes).]

CCD Image of Holes in Slide Film

Manually Click on Interference to Zoom In

[Figure: zoom of diffraction pattern #1; the user clicks on its center.]

Mark the Center

Options for Automatic Alignment

• Make holes larger so diffraction is reduced
  – A single large Peg might illuminate only one Hole at a time
  – Maximum intensity would mark the Hole location

• Fit an ellipse to the 1st minimum: Ax^2 + Bxy + Cy^2 + Dx + Ey + 1 = 0
  – Find its center: xc = (BE - 2CD)/(4AC - B^2), yc = (BD - 2AE)/(4AC - B^2)
  – The center marks the Hole location
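The ellipse-center option takes only a few lines in code. This is an illustrative NumPy least-squares fit using the conic and center formulas above (the slides work the same fit in a spreadsheet via Gaussian elimination):

```python
import numpy as np

def conic_center(pts):
    """Fit A x^2 + B x y + C y^2 + D x + E y + 1 = 0 to points sampled
    on the first diffraction minimum (least squares), then return the
    center: xc = (BE - 2CD)/(4AC - B^2), yc = (BD - 2AE)/(4AC - B^2)."""
    x, y = np.asarray(pts, float).T
    M = np.column_stack([x * x, x * y, y * y, x, y])   # conic design matrix
    A, B, C, D, E = np.linalg.lstsq(M, -np.ones_like(x), rcond=None)[0]
    den = 4 * A * C - B * B                            # > 0 for an ellipse
    return (B * E - 2 * C * D) / den, (B * D - 2 * A * E) / den
```

Normalizing F = 1 assumes the conic does not pass through the origin, which holds for diffraction rings away from the image origin.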

Ellipse Fitting

[Worked spreadsheet example: the ellipse x^2/16 + y^2/9 = 1 is rotated and translated by h = -4, k = 2, theta = 0.5236 (30 degrees); sampled points are fit to the general conic coefficients A through F by Gaussian elimination, and the recovered center and orientation match the original h, k, and theta.]

[Figure: Slide Dots Image (holes) with all hole centers marked.]

Eighty-four Clicks Later

C' = M ● C

M = C' ● C^P (C^P denotes the pseudoinverse of C)


DLP® Projected Regions of Interest

D' = H⁻¹ ● M⁻¹ ● H ● D

and

C' = H ● D'
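The pre-distortion step above is a few matrix products. A hypothetical NumPy sketch, assuming H (DMD to CCD) and M (measured misalignment in CCD space) are invertible 3 x 3 homogeneous-coordinate maps:

```python
import numpy as np

def predistort(H, M, D):
    """Pre-distort DMD points D so that after projection (H) and the
    physical misalignment (M), the image lands where H @ D intended:
    D' = inv(H) @ inv(M) @ H @ D, hence M @ H @ D' = H @ D."""
    return np.linalg.inv(H) @ np.linalg.inv(M) @ H @ D
```

Equivalently, C' = H ● D' = M⁻¹ ● H ● D: the projected image is the intended one with the misalignment undone.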

Nonlinearities


Measured light signals vs. Weights for FWL Recurron

Opaque slides aren't truly opaque: ~6% leakage.
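The ~6% leakage can be folded into weight calibration. This is a hypothetical linear model for illustration; the real transmission curve is measured per slide and need not be linear:

```python
LEAK = 0.06  # "opaque" slide regions still pass roughly 6% of the light

def measured_transmission(w):
    """Intended weight w in [0, 1] -> modeled slide transmission."""
    return LEAK + (1.0 - LEAK) * w

def corrected_weight(t):
    """Invert the leakage model: the weight to print so the slide
    actually transmits the target fraction t (requires t >= LEAK)."""
    return (t - LEAK) / (1.0 - LEAK)
```

Weights below the leakage floor cannot be realized on a single slide, which is one reason excitatory and inhibitory channels are handled separately.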

Neural Networks and Results

• Recurrent AND

• Unsigned Multiply

• Fixed Weight Learning Recurron

[Diagram: output = IN AND IN-1, where IN-1 is IN after a 1-cycle delay.]

Recurrent AND

[Network diagram: Source Neurons (buffers) carry IN, IN-1, and Bias (1). One Terminal Neuron computes IN AND IN-1 as logsig(16*Σ) with weights 10/16, 10/16, and -14/16; the other implements the 1-cycle delay as linsig(2*Σ) with weights 2/2 and -1/2. Inputs and outputs take values 0 and 1.]

Recurrent AND Neural Network

Synaptic Weight Slide

Weights

0.5 10/16 -1/2

-14/16 10/16 2/2
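The AND neuron can be checked in simulation. A minimal Python sketch using the slide's weights (the demo itself is in MATLAB; the delay neuron is modeled here as a plain one-cycle buffer, and logsig is the standard logistic function):

```python
import math

def logsig(x):
    """Standard logistic squashing function."""
    return 1.0 / (1.0 + math.exp(-x))

def recurrent_and(stream):
    """Simulate the Recurrent AND network with the slide's weights.
    Terminal neuron: logsig(16 * (10/16*IN + 10/16*IN_prev - 14/16*Bias)).
    The delay path is modeled as a one-cycle buffer."""
    prev, out = 0.0, []
    for x in stream:
        s = (10 / 16) * x + (10 / 16) * prev - 14 / 16   # Bias = 1
        out.append(logsig(16 * s))
        prev = x                                          # 1-cycle delay
    return out
```

The output is high only when the current and previous inputs are both 1, i.e. IN AND IN-1.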

Recurrent AND Demo

• MATLAB


Pulse Image (Regions of Interest)


Output Swings Larger than Input

Synaptic Weight Slide


Unsigned Multiply Results

[Plot: blue = expected, black = obtained, red = squared error. Accuracy is about 4 bits.]

Page Representation of a Recurron

[Diagram repeated from the earlier slide.]

FWL Recurron Synaptic Weight Slide

Optical Fixed-Weight Learning Synapse

[Diagram repeated from the earlier slide.]

Optical Recurrent Neural Network

[Single-layer diagram repeated from the earlier slide.]

Future: Integrated Photonics

• Photonic (analog)
• i. Concept
  – α. Neuron
  – β. Weights
  – γ. Synapses (Photonics Spectra and Luxtera)

Continued

• ii. Needs
  – α. Laser
  – β. Amplifier (detectors and control)
  – γ. Splitters
  – δ. Waveguides on Two Layers
  – ε. Attenuators
  – ζ. Combiners
  – η. Constructive Interference
  – θ. Destructive Interference
  – ι. Phase

Photonics Spectra and Luxtera

Interest Generated?

• We wish to implement Optical Neural Networks in Silicon—Including Fixed Weight Learning.

• To do so, we need Collaborators.

• Much research remains, but an earlier start means an earlier finish.

• Please contact me if interested.

DLP®-Driven, Optical Neural Network Results and Future Design

Emmett Redd & A. Steven Younger, Missouri State University, EmmettRedd@MissouriState.edu
