
SEMESTER REPORT

TOPIC: “NEURAL CIRCUITS AS COMPUTATIONAL DYNAMICAL SYSTEMS”

DAVID SUSSILLO

PRESENTED BY KHUSH BAKHAT

NEURAL NETWORK

• In computer science, artificial neural networks (ANNs) are “computational models” inspired by animals’ central nervous systems

• These models are capable of machine learning and pattern recognition.

WHY USE THE ANIMAL NERVOUS SYSTEM?

• The objective of learning in a biological organism is to move closer to an optimal state

NEURAL NETWORK TASKS

• Control

• Classification

• Prediction

• Approximation

NEURAL CIRCUIT

• Neurons never function in isolation; they are organized into circuits that process specific kinds of data

• A neural circuit is a functional entity of interconnected neurons that is able to regulate its own activity

NEURAL CIRCUITS AS COMPUTATIONAL DYNAMICAL SYSTEMS

• Many recent studies of neurons recorded from the cortex reveal complex temporal dynamics

• How such dynamics embody the computations that ultimately lead to behavior remains a mystery

• Approaching this issue requires developing plausible hypotheses couched in terms of “neural dynamics”

• A tool ideally suited to address this question is the “Recurrent Neural Network” (RNN)

• RNNs straddle the fields of nonlinear dynamical systems and machine learning

• Recently, RNNs have seen great advances in both theory and application

• In this paper, David Sussillo summarizes recent theoretical and technological advances and highlights an example of how RNNs helped to explain perplexing high-dimensional neurophysiological data from the prefrontal cortex

SPECIAL TOPIC IN TOC: “RECURRENT NEURAL NETWORKS”

• An RNN is a class of neural network in which the connections between units form directed cycles

• This creates an internal state in the network, which allows it to exhibit dynamic temporal behavior; RNNs can use their internal memory to process arbitrary sequences of inputs (a minimal update rule is sketched below)
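
To make the update rule concrete, here is a minimal numpy sketch of a discretized rate-based RNN of the kind discussed in the paper. The network size, weight scales, time constant, and the simple Euler discretization are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of a rate-based RNN (all names and sizes are illustrative):
#   tau * dx/dt = -x + J @ tanh(x) + B @ u,   output z = W^T tanh(x)
# x is the internal state whose recurrence gives the network its memory.

N, n_in, n_out = 100, 3, 1                # network, input, output sizes
rng = np.random.default_rng(0)
J = rng.normal(0, 1.5 / np.sqrt(N), (N, N))      # recurrent weights
B = rng.normal(0, 1.0, (N, n_in))                # input weights
W = rng.normal(0, 1.0 / np.sqrt(N), (N, n_out))  # linear readout
tau, dt = 1.0, 0.1

def step(x, u):
    """One Euler step of the network dynamics."""
    r = np.tanh(x)                        # firing rates
    x = x + dt * (-x + J @ r + B @ u) / tau
    return x, W.T @ r                     # new state and readout

x = np.zeros(N)
for t in range(50):                       # process an arbitrary input sequence
    x, z = step(x, np.zeros(n_in))
```

Because the state x at each step depends on the previous state, the same weights can respond differently to the same input at different times; this is the internal memory referred to above.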

Relationship Between TOC and Neural Networks

OPTIMIZING RNNS

• Traditionally, a network model is designed by hand to reproduce, and thus explain, a set of experimental findings

• An alternative is modeling using RNNs that have been optimized, or trained

• “Optimized” means that the desired inputs and outputs are defined before training

• Optimizing a network tells the network “WHAT” it should accomplish, with very few explicit instructions on “HOW” to do it

• RNNs thus become a method of “hypothesis generation” for future experimentation and data analysis (a training sketch follows)
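
As a hedged illustration of “WHAT, not HOW”: the PyTorch sketch below specifies only a toy input-output mapping (integrate a noisy input stream) and a loss on the output, and lets gradient descent discover the internal dynamics. The task, sizes, and hyperparameters are assumptions for the sketch, not from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_in, n_hidden, n_out, T, batch = 1, 64, 1, 50, 32

rnn = nn.RNN(n_in, n_hidden, nonlinearity="tanh", batch_first=True)
readout = nn.Linear(n_hidden, n_out)
opt = torch.optim.Adam(
    list(rnn.parameters()) + list(readout.parameters()), lr=1e-3
)

for step in range(2000):
    u = 0.1 * torch.randn(batch, T, n_in)   # the "WHAT": a noisy input stream
    target = torch.cumsum(u, dim=1)         # ...and its desired integral
    h, _ = rnn(u)                           # hidden states at every time step
    z = readout(h)                          # network output
    loss = ((z - target) ** 2).mean()       # penalize the output alone;
    opt.zero_grad()                         # nothing dictates "HOW" the
    loss.backward()                         # hidden dynamics achieve it
    opt.step()
```

After training, the hidden dynamics that emerged, rather than any hand-designed circuit, are what one analyzes to generate hypotheses.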

REVERSE ENGINEERING AN RNN AFTER OPTIMIZATION

• Revealing the dynamical mechanism employed by an RNN to solve a particular task involves a final step after optimization: one must reverse engineer the solution found by the RNN

• Unlike a hand-designed model, the solution was not constructed by the researcher, which is why this reverse-engineering step is needed

• RNNs can be understood by employing techniques from nonlinear dynamical systems theory

APPLICATIONS

• In this paper, a variety of RNNs that were optimized to perform simple tasks are reverse engineered:

A memory device

An input-dependent pattern generator

• The key steps in reverse engineering involve

Finding the fixed points of the network

Linearizing the network dynamics around those fixed points

• The fixed points provide a “dynamical skeleton” for understanding the global structure of the dynamics in state space (a sketch of this procedure follows)
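
A hedged numpy/scipy sketch of these two steps, in the spirit of the fixed-point analysis described in the paper: fixed points are found numerically as minima of a scalar “speed” function q(x) = ½‖F(x)‖², and the dynamics are then linearized via the Jacobian at each fixed point. The weights here are random stand-ins for a trained network.

```python
import numpy as np
from scipy.optimize import minimize

N = 100
rng = np.random.default_rng(1)
J = rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # stand-in "trained" weights

def F(x):
    """Autonomous dynamics dx/dt, with external inputs held at zero."""
    return -x + J @ np.tanh(x)

def q(x):
    """Scalar speed; fixed points are minima where q is (near) zero."""
    f = F(x)
    return 0.5 * f @ f

# Minimize q from an initial state (in practice, from many states sampled
# along trajectories of the trained network, to find all fixed points).
res = minimize(q, rng.normal(0, 0.5, N), method="L-BFGS-B")
x_star = res.x

# Linearize around the fixed point: the Jacobian of F at x* is
#   dF/dx = -I + J @ diag(1 - tanh(x*)^2)
Jac = -np.eye(N) + J * (1 - np.tanh(x_star) ** 2)
eigvals = np.linalg.eigvals(Jac)

# Eigenvalues with positive real part mark unstable directions; a fixed
# point with both stable and unstable dimensions is a saddle point.
n_unstable = int((eigvals.real > 0).sum())
```

The collection of stable fixed points (attractors) and saddles, together with their local linearizations, is the “dynamical skeleton” referred to above.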

A 3-BIT MEMORY

• Understanding how memories can be represented in biological neural networks has long been studied in neuroscience.

• In this toy example, he trained an RNN to generate the dynamics necessary to implement a 3-bit memory.

• Three inputs enter the RNN and specify the states of the three bits individually.

• This 3-bit memory must be resistant to cross-talk between the bits.

• After training, the RNN successfully implemented the 3-bit memory.

• The RNN was reverse engineered by finding all of its fixed points and the linearized system around each of them

• A saddle point is a fixed point with both stable and unstable dimensions

• The saddle points were responsible for implementing the input-dependent transitions between the stable attractors

(Figure: the trained RNN generating the dynamics necessary to implement the 3-bit memory)
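
For concreteness, here is a sketch of the 3-bit flip-flop task itself (inputs and targets only); the pulse probability, trial length, and initial bit values are illustrative assumptions. Each channel receives occasional ±1 pulses, and the matching output must hold the sign of its most recent pulse regardless of what happens on the other channels (no cross-talk).

```python
import numpy as np

def make_flipflop_trial(T=500, n_bits=3, p_pulse=0.02, rng=None):
    """Generate one trial of inputs u and targets for a 3-bit memory."""
    if rng is None:
        rng = np.random.default_rng()
    u = np.zeros((T, n_bits))
    target = np.zeros((T, n_bits))
    state = np.ones(n_bits)                  # current contents of the bits
    for t in range(T):
        flip = rng.random(n_bits) < p_pulse  # which channels pulse now
        signs = rng.choice([-1.0, 1.0], n_bits)
        u[t, flip] = signs[flip]             # transient input pulses
        state[flip] = signs[flip]            # a pulse overwrites its bit only
        target[t] = state                    # outputs hold the current bits
    return u, target

u, target = make_flipflop_trial()
```

With 3 bits there are 2³ = 8 memory states, which the trained RNN represents as 8 stable attractors, with the saddle points mediating the input-driven transitions between them.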

CONTEXT-DEPENDENT DECISION MAKING IN PREFRONTAL CORTEX

• Animals are not limited to simple stimulus-response reflexes.

• They can rapidly and flexibly accommodate to context: as the context changes, the same stimuli can elicit dramatically different behaviors.

• To study this type of context-dependent decision making, monkeys were trained to flexibly select and accumulate evidence from noisy visual stimuli in order to make a discrimination.

• On the basis of a contextual cue, the monkeys either differentiated the direction of motion or color of a random-dot display (Figure 3a). While the monkeys engaged in the task, neural responses in prefrontal cortex (PFC) were recorded.

• These neurons showed mixed selectivity to both motion and color sensory evidence, regardless of which stimulus was relevant.

CONTINUED…

• To discover how a single circuit could selectively integrate one stimulus while ignoring the other, despite the presence of both, the RNN approach was applied

• The output of the RNN was trained to be analogous to the monkey’s decision, which the monkey reported with a saccade (the task is sketched below)
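
A hedged sketch of the task structure described above: two noisy evidence streams (motion and color) are always present, a contextual cue selects which one is relevant, and the desired output is the sign of the accumulated relevant evidence (analogous to the monkey’s saccadic choice). The coherence range, noise level, and trial length are illustrative assumptions.

```python
import numpy as np

def make_context_trial(T=100, noise=1.0, rng=None):
    """One trial: 4 inputs (motion, color, 2 context bits) and the choice."""
    if rng is None:
        rng = np.random.default_rng()
    coh_motion, coh_color = rng.uniform(-0.5, 0.5, 2)  # signed strengths
    motion = coh_motion + noise * rng.standard_normal(T)
    color = coh_color + noise * rng.standard_normal(T)
    context = rng.integers(2)               # 0: attend motion, 1: attend color
    ctx = np.zeros((T, 2))
    ctx[:, context] = 1.0                   # contextual cue, on for the trial
    u = np.column_stack([motion, color, ctx])
    relevant = coh_motion if context == 0 else coh_color
    choice = np.sign(relevant)              # desired end-of-trial output
    return u, choice

u, choice = make_context_trial()
```

Note that both evidence streams drive the network on every trial; only the context input differs, so selective integration has to emerge in the trained dynamics.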

CONCLUSION:

• The study of neural dynamics at the circuit and systems level is an area of extremely active research. RNNs are a near ideal modeling framework for studying neural circuit dynamics because they share fundamental features with biological tissue, for example, feedback, nonlinearity, and parallel and distributed computing.

• By training RNNs on what to compute, but not how to compute it, researchers can generate novel ideas and testable hypotheses regarding the biological circuit mechanism.

• Further, RNNs provide a rigorous test bed for ideas about neural computation at the network level.

• The combined approaches of animal behavior and neurophysiology, alongside RNN modeling, may prove a powerful combination for handling the onslaught of high-dimensional neural data that is to come.