AI + ENGINEERING From Physical to Virtual Sensors using DEEP LEARNING #aiwtb

AI + Engineering: from Physical to Virtual Sensors using Deep Learning, Enrico Busto - Add-for




We developed a novel approach by adapting Deep Learning algorithms to classical engineering applications such as signal filtering, sensor fusion, and control systems.

This gives us a competitive advantage over existing solutions.

AI + ENGINEERING

This is made possible by what we call

COMBINATORIAL INNOVATION

We re-engineer the Deep Learning algorithms developed for the web and adapt them to work on generic sensor data.

HUGE demand for Deep Learning Vertical Applications in Industrial Environments

Examples - some of our projects:

TIRE DOT CODE RECOGNITION

DOT XB 4J R523 4213

Target Detected. Class: Missile Launcher. Type: MAZ-543 Uragan

MILITARY TARGETS DETECTION AND RECOGNITION

ADVANCED TRACTION CONTROLS / VEHICLE DYNAMICS

DRIVING STYLE DETECTION

VIRTUAL THERMOMETERS / VIRTUAL FLOW-METERS / EMISSION CONTROLS / TURBO SPEED

ADVANCED NONLINEAR CONTROLS / WASHING MACHINES

ADAPTIVE LIGHTING CONTROLS / EARLY FAULT DETECTION

PREDICTIVE MAINTENANCE / PRODUCTION FORECAST

TEMPERATURE FORECASTING / HVAC OPTIMIZATION

Let's focus on FORECASTING

[Plot: signal y over time t; t0 marks the present, and the values after t0 are the future to be forecast.]

Typical approach with a feed-forward neural network (FFNN)

[Plot: a fixed window of past samples of y, up to t0, is fed to the FFNN as its input.]

Dimensionality problem
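As a minimal sketch (not from the deck) of this fixed-window approach, assuming NumPy; the function name make_windows and the toy signal are purely illustrative:

import numpy as np

def make_windows(y, window):
    # Each FFNN input row holds the last `window` samples y[t-window:t];
    # the training target is y[t]. More memory = a wider window = more inputs:
    # this is the dimensionality problem.
    X = np.stack([y[t - window:t] for t in range(window, len(y))])
    targets = y[window:]
    return X, targets

# Example: with a 1 kHz signal, remembering just 2 seconds of history
# already means 2000 input features per sample.
y = np.sin(np.linspace(0, 20 * np.pi, 10000))
X, targets = make_windows(y, window=2000)
print(X.shape)   # (8000, 2000)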

Recurrent Neural Networks (RNN)

[Diagram: RNN cell with input u(t), internal state X, output y(t), and a feedback loop on X. Mind this LOOP! Unrolled in time, the same cell maps u(t0) to y(t0), u(t1) to y(t1), u(t2) to y(t2), ..., u(tn) to y(tn).]

Something Happens at a certain moment in time

The INTERNAL STATE keeps the event in memory

But the MEMORY of the event is soon forgotten

The VANISHING GRADIENT PROBLEM
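In equations (a standard formulation of a simple recurrent cell, not taken from the slides), the loop above reads:

$$X_t = \varphi\left(W_u\, u_t + W_X\, X_{t-1}\right), \qquad y_t = W_y\, X_t$$

The gradient of an error at time $t$ with respect to an event $k$ steps in the past contains the product $\prod_{j=1}^{k} \partial X_{t-j+1} / \partial X_{t-j}$: a product of $k$ Jacobians whose norm typically shrinks (or explodes) exponentially with $k$, so events far in the past stop influencing the training signal.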

Long Short-Term Memory (LSTM)

[Diagram: the LSTM cell, with time flowing left to right. Blocks: Input Node, Input Gate, Forgetting Gate, Output Gate, Output Node. Legend: u = input, y = output, X = internal state; σ = sigmoid [0 ÷ 1], φ = tanh [-1 ÷ +1]; ⊕ = sum / concatenation, π = product.]

[Diagram detail: the Input Node (φ) and the Input Gate (σ) decide what new information is added (⊕) to the internal state X.]

[Diagram detail: the Forgetting Gate (σ) decides how much of the internal state X is kept or forgotten (π).]

[Diagram detail: the Output Gate (σ) and the Output Node (φ) decide how much of the internal state X is exposed as the output y (π).]
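Written out with the symbols of the diagrams (u = input, X = internal state, y = output, σ = sigmoid, φ = tanh), a standard LSTM cell computes at every time step:

$$
\begin{aligned}
g_t &= \varphi\!\left(W_g\,[u_t, y_{t-1}] + b_g\right) && \text{(input node)}\\
i_t &= \sigma\!\left(W_i\,[u_t, y_{t-1}] + b_i\right) && \text{(input gate)}\\
f_t &= \sigma\!\left(W_f\,[u_t, y_{t-1}] + b_f\right) && \text{(forgetting gate)}\\
o_t &= \sigma\!\left(W_o\,[u_t, y_{t-1}] + b_o\right) && \text{(output gate)}\\
X_t &= f_t \odot X_{t-1} + i_t \odot g_t && \text{(state update)}\\
y_t &= o_t \odot \varphi(X_t) && \text{(output node)}
\end{aligned}
$$

Because the state update is additive and the forgetting gate can stay close to 1, the memory of an event can survive for many time steps: this is what mitigates the vanishing gradient problem of the plain RNN.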

How to build the Input Vector for the LSTM

[Plot: signal y over time t, divided into PAST, PRESENT, and FUTURE.]

This is a CONTROLLED VARIABLE: you know PAST and FUTURE

IF NEEDED here you can use a causal digital filter

The Target Signal should only be filtered with a NON-CAUSAL digital filter to avoid LAG

This is the INPUT VECTOR u for the LSTM

This is the TARGET VALUE y for the LSTM (training) and the PREDICTION (inference)

Future values of the CONTROLLED VARIABLES can be shifted back to the present

This is the (vanishing) PAST: you do not have to manage it; the internal state of the LSTM will manage (and forget) it for you.

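As a rough sketch of this preparation step (assumptions: NumPy, SciPy, and pandas are available; the file name sensor_log.csv and the columns throttle and temperature are purely illustrative, as is the horizon of 50 samples), building u and y could look like this:

import numpy as np
import pandas as pd
from scipy.signal import butter, filtfilt, lfilter

df = pd.read_csv("sensor_log.csv")        # illustrative sensor log
horizon = 50                              # how many samples of the future are known

b, a = butter(2, 0.1)                     # a low-pass digital filter

# Controlled variable (past AND future known): a causal filter, if needed, is fine.
throttle = lfilter(b, a, df["throttle"].to_numpy())

# Target signal: only a NON-causal, zero-phase filter (filtfilt) to avoid lag.
temperature = filtfilt(b, a, df["temperature"].to_numpy())

# Shift the known future of the controlled variable back to the present,
# so that at time t the LSTM also sees throttle(t + horizon).
u = np.column_stack([
    throttle[:-horizon],                  # present value
    throttle[horizon:],                   # future value, shifted back
])
y = temperature[:-horizon]                # target aligned with the present

# u[t] is the LSTM input vector, y[t] the training target / prediction at inference.
# No explicit lag features: the LSTM internal state manages (and forgets) the past.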

Remember to Subscribe to the MACHINE LEARNING ITALY Meetup

To be Updated on Examples - Code - Benchmarks

meetup.com/it-IT/Machine-Learning-Italy

Visit add-for.com/training-material

To download the code and the sample data

Contact Me: it.linkedin.com/in/ebusto

Time for CODING
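The actual code and sample data are available at add-for.com/training-material. Purely as a minimal sketch of the kind of model discussed here (assuming Keras with the TensorFlow backend and toy data in place of a real sensor log):

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Toy data standing in for u and y: 2 input channels, 1 history-dependent target.
T = 2000
u = np.random.randn(T, 2).astype("float32")
y = (0.01 * np.cumsum(u[:, :1], axis=0)).astype("float32")

# Keras expects (samples, time steps, features); here a single long sequence.
u_seq = u[np.newaxis]                     # shape (1, T, 2)
y_seq = y[np.newaxis]                     # shape (1, T, 1)

model = Sequential([
    LSTM(32, return_sequences=True, input_shape=(None, 2)),
    Dense(1),                             # one virtual-sensor value per time step
])
model.compile(optimizer="adam", loss="mse")
model.fit(u_seq, y_seq, epochs=20, verbose=0)

y_pred = model.predict(u_seq)[0, :, 0]    # the virtual sensor output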