
Computational Neuroscience - Lecture 3

Kugiumtzis Dimitris

Department of Electrical and Computer Engineering, Faculty of Engineering, Aristotle University of Thessaloniki, Greece

e-mail: [email protected]    http://users.auth.gr/dkugiu

22 May 2018

Outline

1 Single-channel EEG analysis - Introduction

2 Features of signal morphology

3 Features from linear analysis

4 Features from nonlinear analysis

Single-channel EEG analysis - Introduction
Region of Interest (ROI) of the brain

Eyeball judgement: a doctor or EEGer may diagnose abnormalities by visual inspection of the EEG signals (in the ROI).

Alternatively: quantify information from the EEG signal (in the ROI) automatically ⇒ univariate time series analysis: study the static and dynamic properties of the time series (signal) and model the time series.

Three main perspectives:

1 Characteristics of signal morphology, e.g. identify bursts.

2 Stochastic process (linear analysis), e.g. X_t = φ_0 + φ_1 X_{t−1} + ... + φ_p X_{t−p} + ε_t

3 Nonlinear dynamics and chaos (nonlinear analysis), e.g. S_t = f(S_{t−1}), S_t ∈ IR^d, X_t = h(S_t) + ε_t

Clinical applications:

predicting epileptic seizures

classifying sleep stages

measuring depth of anesthesia

detection and monitoring of brain injury

detecting abnormal brain states

others ...

Example: α-wave (8-13 Hz) is reduced in children and in the elderly, and in patients with dementia, schizophrenia, stroke, and epilepsy.

Features of signal morphology

Any feature f_i is computed on sliding windows i (overlapping or non-overlapping) across the EEG signal, each window of length n_w.

A feature can be normalized as f_i := f_i / Σ_{j=1}^{n} f_j, where n is the number of windows.

If f_i is measured in several (many) channels, giving f_{i,j} for channels j = 1, ..., K, take the average at each time window, e.g. f_i := (1/K) Σ_{j=1}^{K} f_{i,j}.

Features of signal morphology: indices (statistics) that capture some characteristic of the shape of the signal.

Descriptive statistics:

center (mean, median)

dispersion (variance, SD, interquartile range)

higher moments (skewness, kurtosis)
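
As a minimal illustration (not part of the original slides), the sketch below computes these descriptive statistics on sliding windows and normalizes each feature over the windows; the window length nw = 256, the step, and the white-noise stand-in for an EEG signal are arbitrary choices.

    import numpy as np
    from scipy.stats import skew, kurtosis

    def window_features(x, nw=256, step=128):
        # one row of [mean, median, variance, SD, skewness, kurtosis] per window i
        rows = []
        for start in range(0, len(x) - nw + 1, step):
            w = x[start:start + nw]
            rows.append([np.mean(w), np.median(w), np.var(w),
                         np.std(w), skew(w), kurtosis(w)])
        return np.array(rows)

    x = np.random.randn(4096)            # stand-in for an EEG signal
    f = window_features(x)
    f_norm = f / f.sum(axis=0)           # f_i := f_i / sum_j f_j over the n windows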

Hjorth parameters:

activity = variance = total power: Var(x(t)) = Σ_{i=0}^{n_f} P_XX(f_i).

mobility = sqrt( Var(dx(t)/dt) / Var(x(t)) ), which estimates the SD of P_XX(f) (along the frequency axis).

complexity = mobility(dx(t)/dt) / mobility(x(t)), which estimates the similarity of the signal to a pure sine wave (deviation of complexity from one).

[Hjorth, Electroenceph. Clin. Neuroph., 1970]
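
A minimal sketch of the Hjorth parameters, assuming the derivative is approximated by first differences of the sampled signal (an assumption, not stated on the slide); a 10 Hz sine sampled at 173.61 Hz serves as a test signal.

    import numpy as np

    def hjorth(x):
        dx = np.diff(x)                        # derivative approximated by first differences
        ddx = np.diff(dx)
        activity = np.var(x)                   # activity = variance = total power
        mobility = np.sqrt(np.var(dx) / np.var(x))
        complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
        return activity, mobility, complexity

    fs = 173.61
    t = np.arange(0, 4, 1 / fs)
    print(hjorth(np.sin(2 * np.pi * 10 * t)))  # pure sine: complexity close to one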

Line length: L(i) = Σ_{t=1}^{n_w−1} |x(t+1) − x(t)|.
Normalized line length: L(i) := L(i) / Σ_{j=1}^{n} L(j).
Median over all channels: L(i) := median_j L(i,j), where j denotes the channel.

[Koolen et al, Clin. Neuroph., 2014]

Nonlinear energy: NLE(i) = Σ_{t=2}^{n_w−1} |x(t)^2 − x(t−1) x(t+1)|.
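
The following sketch (illustrative only) computes the line length and nonlinear energy per non-overlapping window of nw = 256 samples and normalizes the line length over the windows; the synthetic noise signal is a stand-in for an EEG segment.

    import numpy as np

    def line_length(w):
        return np.sum(np.abs(np.diff(w)))                    # sum_t |x(t+1) - x(t)|

    def nonlinear_energy(w):
        return np.sum(np.abs(w[1:-1]**2 - w[:-2] * w[2:]))   # sum_t |x(t)^2 - x(t-1)x(t+1)|

    x = np.random.randn(4096)
    windows = x.reshape(-1, 256)                             # non-overlapping windows, nw = 256
    L = np.array([line_length(w) for w in windows])
    L_norm = L / L.sum()                                     # normalized line length
    NLE = np.array([nonlinear_energy(w) for w in windows])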

... and many other features of signal morphology:

Fisher information (from normalized spectrum of singular values)

Katz's fractal dimension (from the line length and the largest distance from the starting point)

Petrosian fractal dimension (PFD) (from the number of sign changes in the signal derivative; see the sketch below)

Hurst exponent (the rescaled range statistic, R/S)

Detrended Fluctuation Analysis (DFA) (a long-range correlation measure)
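
As one concrete example from this list, here is a small sketch of the Petrosian fractal dimension, using a commonly quoted form of the formula, PFD = log10(n) / (log10(n) + log10(n / (n + 0.4 N_δ))), with N_δ the number of sign changes in the signal derivative; this exact form is an assumption, since the slide only names the feature.

    import numpy as np

    def petrosian_fd(x):
        d = np.diff(x)
        n_delta = np.sum(d[1:] * d[:-1] < 0)   # sign changes of the derivative
        n = len(x)
        return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))

    print(petrosian_fd(np.random.randn(4096)))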

Features of linear analysis - frequency domain

Features from the power spectrum P_XX(f_k), f_k = 0, ..., f_s/2:

Power in bands: δ [0,4] Hz, θ [4,8] Hz, α [8,13] Hz, β [13,30] Hz, γ > 30 Hz,

e.g. α-band power: P_XX(α) = Σ_{f_k=8}^{13} P_XX(f_k)

or relative α-band power: RP_XX(α) = P_XX(α) / Σ_{i=0}^{n_f} P_XX(f_i)

The bands are defined in the same way on wavelets.

Spectral edge frequency (SEF): e.g. SEF(90) is the frequency f_90 below which 90% of the total power lies, Σ_{f=0}^{f_90} P_XX(f) / Σ_{f=0}^{f_s/2} P_XX(f) = 0.90

Median frequency: SEF(50)
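
A minimal sketch of these spectral features, assuming the power spectrum is estimated with Welch's method (scipy.signal.welch); the segment length nperseg = 512 and the white-noise test signal are arbitrary choices.

    import numpy as np
    from scipy.signal import welch

    fs = 173.61
    x = np.random.randn(4096)
    f, Pxx = welch(x, fs=fs, nperseg=512)           # estimate of P_XX(f_k)

    def band_power(f, Pxx, lo, hi):
        return Pxx[(f >= lo) & (f < hi)].sum()

    alpha = band_power(f, Pxx, 8, 13)               # alpha-band power
    rel_alpha = alpha / Pxx.sum()                   # relative alpha-band power

    def sef(f, Pxx, q):
        c = np.cumsum(Pxx) / Pxx.sum()              # cumulative relative power
        return f[np.searchsorted(c, q)]             # frequency at fraction q of total power

    print(rel_alpha, sef(f, Pxx, 0.90), sef(f, Pxx, 0.50))   # SEF(90) and median frequency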

Example: 5 datasets of 100 single-channel EEG segments, each of N = 4096 samples (f_s = 173.61 Hz)

A: scalp EEG, healthy, eyes open.  B: scalp EEG, healthy, eyes closed.

C, D: intracranial EEG, interictal period.  E: intracranial EEG, ictal period.

http://epileptologie-bonn.de/cms/front_content.php?idcat=193&lang=3

[Bao et al, Comput Intell Neurosci, 2011]

Features of linear analysis - time domain

Autocorrelation: r_X(τ) = c_X(τ) / s_X^2 = Σ_{t=1}^{n−τ} (x_t − x̄)(x_{t+τ} − x̄) / Σ_{t=1}^{n−τ} (x_t − x̄)^2

Assuming the linear stochastic process X_t = φ_0 + φ_1 X_{t−1} + ... + φ_p X_{t−p} + ε_t,
the autoregressive model is x̂_t = φ̂_0 + φ̂_1 x_{t−1} + ... + φ̂_p x_{t−p}

Mean square error of fit: MSE = (1/(n−p)) Σ_{t=p+1}^{n} (x_t − x̂_t)^2
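
A minimal sketch of these time-domain features: the autocorrelation at lag τ, an ordinary least-squares fit of the AR(p) model, and the mean square error of the fit; the order p = 5 and the white-noise test signal are arbitrary choices.

    import numpy as np

    def autocorr(x, tau):
        xm = x - x.mean()
        return np.sum(xm[:-tau] * xm[tau:]) / np.sum(xm[:-tau]**2)

    def ar_fit(x, p):
        # least-squares fit of x_t = phi_0 + phi_1 x_{t-1} + ... + phi_p x_{t-p}
        X = np.column_stack([np.ones(len(x) - p)] +
                            [x[p - k:len(x) - k] for k in range(1, p + 1)])
        y = x[p:]
        phi, *_ = np.linalg.lstsq(X, y, rcond=None)
        mse = np.mean((y - X @ phi)**2)            # mean square error of the fit
        return phi, mse

    x = np.random.randn(4096)
    print(autocorr(x, tau=1), ar_fit(x, p=5)[1])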

Features of nonlinear analysis - 1
Extension of the autocorrelation to linear and nonlinear correlation

Entropy: information from each sample of X (assuming a proper discretization of X)

H(X) = −Σ_x p_X(x) log p_X(x)

Mutual information: information for Y knowing X and vice versa

I(X,Y) = H(X) + H(Y) − H(X,Y) = Σ_{x,y} p_XY(x,y) log [ p_XY(x,y) / (p_X(x) p_Y(y)) ]

For X → X_t and Y → X_{t+τ}, the delayed mutual information is

I_X(τ) = I(X_t, X_{t+τ}) = Σ_{x_t, x_{t+τ}} p_{X_t X_{t+τ}}(x_t, x_{t+τ}) log [ p_{X_t X_{t+τ}}(x_t, x_{t+τ}) / (p_{X_t}(x_t) p_{X_{t+τ}}(x_{t+τ})) ]

To compute I_X(τ), partition {x_t}_{t=1}^{n} and {x_{t+τ}} into cells and estimate the probability of each cell from its relative frequency.
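
A minimal sketch of the histogram-based estimate described above: the pairs (x_t, x_{t+τ}) are binned on an equidistant partition, cell probabilities are taken as relative frequencies, and I_X(τ) is computed as H(X) + H(Y) − H(X,Y); the number of bins (16) and the white-noise signal are arbitrary choices.

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def delayed_mi(x, tau, bins=16):
        h2, _, _ = np.histogram2d(x[:-tau], x[tau:], bins=bins)   # joint histogram of (x_t, x_{t+tau})
        pxy = h2 / h2.sum()                                       # relative frequencies per cell
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)                 # marginal distributions
        return entropy(px) + entropy(py) - entropy(pxy.ravel())   # I = H(X) + H(Y) - H(X,Y)

    x = np.random.randn(4096)
    print([round(delayed_mi(x, tau), 3) for tau in (1, 5, 10)])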

Features of nonlinear analysis - 2
Extension of linear autoregressive models to nonlinear models

All models require state space reconstruction: points x_t ∈ IR^m are formed from the scalar series x_t by delay embedding, x_t = [x_t, x_{t−τ}, ..., x_{t−(m−1)τ}].

Nonlinear model: x_{t+1} = f(x_t)

1 parametric models, e.g. polynomial autoregressive models

2 semilocal (black-box) models, e.g. neural networks

3 local models, e.g. nearest neighbor models.

For the prediction of x_{t+1} given x_1, x_2, ..., x_t:

Find the k nearest neighbors of x_t, {x_{t(1)}, ..., x_{t(k)}}.
Fit a linear autoregressive model on the k neighbors: x_{t(i)+1} = φ_0 + φ_1 x_{t(i)} + ... + φ_m x_{t(i)−(m−1)τ} + ε_{t+1}

Predict x̂_{t+1} = φ̂_0 + φ̂_1 x_t + ... + φ̂_m x_{t−(m−1)τ}
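
A minimal sketch of the local nearest-neighbor predictor, assuming Euclidean distances in the reconstructed space and a noisy sine as the test series; m, τ and k are arbitrary choices here.

    import numpy as np

    def embed(x, m, tau):
        # delay-embedding vectors x_t = [x_t, x_{t-tau}, ..., x_{t-(m-1)tau}]
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[(m - 1 - j) * tau:(m - 1 - j) * tau + n] for j in range(m)])

    def local_predict(x, m=3, tau=1, k=10):
        X = embed(x, m, tau)                       # row i corresponds to time t = (m-1)*tau + i
        target, lib = X[-1], X[:-1]                # current state and library of past states
        nxt = x[(m - 1) * tau + 1:]                # x_{t(i)+1} for each library state
        idx = np.argsort(np.linalg.norm(lib - target, axis=1))[:k]   # k nearest neighbors of x_t
        A = np.column_stack([np.ones(k), lib[idx]])
        phi, *_ = np.linalg.lstsq(A, nxt[idx], rcond=None)           # linear AR fit on the neighbors
        return phi[0] + target @ phi[1:]           # one-step-ahead prediction of x_{t+1}

    x = np.sin(0.2 * np.arange(2000)) + 0.05 * np.random.randn(2000)
    print(local_predict(x), np.sin(0.2 * 2000))    # prediction vs. the noise-free next value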

Features of nonlinear analysis - 3

Entropy: estimates of the entropy of x_t = [x_t, x_{t−τ}, ..., x_{t−(m−1)τ}]:

H(X) = −Σ_x p_X(x) log p_X(x)

Approximate entropy, ApEn (uses a bandwidth in the estimation of p_X(x))

Sample entropy (similar to ApEn)

Permutation entropy (entropy on the ranks of the components in x_t; see the sketch below)

Spectral entropy (entropy on P_XX(f))

Others, e.g. fuzzy entropy, multiscale entropy.
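
As one concrete example from the list above, a minimal sketch of the permutation entropy: each embedding vector is replaced by the rank ordering of its components, and the entropy of the distribution of these ordinal patterns is computed; m = 3, τ = 1 and the white-noise test signal are arbitrary choices.

    import numpy as np
    from math import factorial

    def permutation_entropy(x, m=3, tau=1):
        counts = {}
        for i in range(len(x) - (m - 1) * tau):
            pattern = tuple(np.argsort(x[i:i + m * tau:tau]))   # rank order of the m components
            counts[pattern] = counts.get(pattern, 0) + 1
        p = np.array(list(counts.values()), dtype=float)
        p /= p.sum()
        return -np.sum(p * np.log(p))

    x = np.random.randn(4096)
    print(permutation_entropy(x), np.log(factorial(3)))   # white noise comes close to the maximum log(m!)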

Features of nonlinear analysis - 4

Dimension and Complexity:

Correlation dimension: estimates the fractal dimension of x_t = [x_t, x_{t−τ}, ..., x_{t−(m−1)τ}] (assuming chaos).
With d(x_i, x_j) the distance of two points in IR^m, the probability of a distance smaller than r scales as p(d(x_i, x_j) < r) ∝ r^ν;
the dimension ν is estimated by the slope of log p(r) versus log r (see the sketch below).

Higuchi dimension (the distance is defined in a different way).

Lempel-Ziv algorithmic complexity (turning the signal into a series of symbols by discretization and searching for new symbol patterns in the series).

Source: [Kugiumtzis et al, Int. J of Bioelectromagnetism, 2007]

[Kugiumtzis and Tsimpiris, J. Stat. Softw., 2010], MATS module in Matlab
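
A minimal sketch of the correlation-sum estimate behind the correlation dimension: pairwise distances of the embedded points are computed, p(d < r) is estimated for a range of radii r, and ν is taken as the slope of log p(r) versus log r; the embedding (m = 3, τ = 1), the radii and the noisy-sine test signal are arbitrary choices, and no scaling-range selection is done here.

    import numpy as np
    from scipy.spatial.distance import pdist

    x = np.sin(0.2 * np.arange(2000)) + 0.01 * np.random.randn(2000)
    X = np.column_stack([x[2:], x[1:-1], x[:-2]])        # delay embedding with m = 3, tau = 1

    d = pdist(X)                                         # pairwise distances d(x_i, x_j) in IR^m
    rs = np.logspace(-2, 0, 10)
    C = np.array([np.mean(d < r) for r in rs])           # correlation sum, estimate of p(d < r)

    mask = C > 0
    nu = np.polyfit(np.log(rs[mask]), np.log(C[mask]), 1)[0]   # slope of log p(r) vs log r
    print(nu)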

Example: 5 datasets of healthy extracranial EEG (A, B) and intracranial interictal (C, D) and ictal (E) EEG

[Bao et al, Comput Intell Neurosci, 2011]
