Project 11: Determining the Intrinsic Dimensionality of a Distribution
Okke Formsma, Nicolas Roussis and Per Løwenborg


Page 1:

Project 11: Determining the Intrinsic Dimensionality of a Distribution

Okke Formsma, Nicolas Roussis and Per Løwenborg

Page 2:

Outline

• About the project
• What is intrinsic dimensionality?
• How can we assess the ID?
  – PCA
  – Neural network
  – Nearest neighbour
• Experimental results

Page 3:

Why did we choose this project?

• We wanted to learn more about developing and experimenting with algorithms for analyzing high-dimensional data

• We wanted to see how we could implement this in a program

Page 4:

Papers

N. Kambhatla and T. Leen, “Dimension Reduction by Local Principal Component Analysis”

J. Bruske and G. Sommer, “Intrinsic Dimensionality Estimation with Optimally Topology Preserving Maps”

P. Verveer and R. Duin, “An Evaluation of Intrinsic Dimensionality Estimators”

Page 5:

How does dimensionality reduction influence our lives?

• Compressing images, audio and video
• Reducing noise
• Editing
• Reconstruction

Page 6:

An image going through the different steps of a reconstruction.

Page 7:

Intrinsic Dimensionality

The number of ‘free’ parameters needed to generate a pattern

Examples:
• f(x) = -x² => 1-dimensional
• f(x, y) = -x² => 1-dimensional

Page 8:

PRINCIPAL COMPONENT ANALYSIS

Page 9:

Principal Component Analysis (PCA)

• The classic technique for linear dimension reduction.

• It is a vector-space transformation that reduces multidimensional data sets to lower dimensions for analysis.

• It is a way of identifying patterns in data and expressing the data so as to highlight their similarities and differences.
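To make the transformation concrete, here is a minimal NumPy sketch of PCA via eigendecomposition of the sample covariance matrix (the function name and interface are illustrative, not taken from the papers):

```python
import numpy as np

def pca(X, n_components):
    """Minimal PCA sketch: project X onto its top principal components."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)           # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]        # sort descending by eigenvalue
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    Z = Xc @ eigvecs[:, :n_components]       # low-dimensional representation
    return Z, eigvals, eigvecs
```

The eigenvalues measure the variance captured along each principal direction, which is what the significance rules later in the talk operate on.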

Page 10:

Advantages of PCA

• Patterns can be hard to find in data of high dimension, where the luxury of graphical representation is not available; this makes PCA a powerful tool for analysing such data.

• Once you have found these patterns, you can compress the data (by reducing the number of dimensions) without much loss of information.

Page 11:

Example

Page 12:

Problems with PCA

• The data might be uncorrelated, but PCA relies on second-order statistics (correlation), so it can fail to find the most compact description of the data.

Page 13:

Problems with PCA

Page 14:

First eigenvector

Page 15:

Second eigenvector

Page 16:

A better solution?

Page 17:

Local eigenvector

Page 18:

Local eigenvectors

Page 19:

Local eigenvectors

Page 20:

Another problem

Page 21:

Is this the principal eigenvector?

Page 22:

Or do we need more than one?

Page 23:

Choose

Page 24:

The answer depends on your application

Low resolution vs. high resolution

Page 25:

Challenges

• How to partition the space?
• How many partitions should we use?
• How many dimensions should we retain?

Page 26:

How to partition the space?

Vector Quantization

Lloyd algorithm:
Partition the space into k sets
Repeat until convergence:
    Calculate the centroid of each set
    Associate each point with the nearest centroid
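A minimal NumPy sketch of this loop (illustrative; empty sets and other edge cases are not handled):

```python
import numpy as np

def lloyd(points, k, max_iters=100, seed=0):
    """Lloyd's algorithm sketch: returns point labels and set centroids."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(points))       # step 1: random assignment
    centroids = np.zeros((k, points.shape[1]))
    for _ in range(max_iters):
        for j in range(k):                           # step 2: centroid of each set
            centroids[j] = points[labels == j].mean(axis=0)
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        new_labels = dists.argmin(axis=1)            # step 3: nearest centroid
        if np.array_equal(new_labels, labels):       # converged
            break
        labels = new_labels
    return labels, centroids
```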

Page 27:

Lloyd Algorithm

Step 1: Randomly assign the points to sets. (Figure: points split between Set 1 and Set 2.)

Page 28:

Lloyd Algorithm

Step 2: Calculate centroids. (Figure: centroid of Set 1 and Set 2 marked.)

Page 29:

Lloyd Algorithm

Step 3: Associate each point with the nearest centroid. (Figure: points reassigned to Set 1 and Set 2.)

Page 30:

Lloyd Algorithm

Step 2 (again): Calculate centroids. (Figure: centroids updated.)

Page 31:

Lloyd Algorithm

Step 3 (again): Associate each point with the nearest centroid. (Figure: points reassigned.)

Page 32:

Lloyd Algorithm

Result after 2 iterations. (Figure: final partition into Set 1 and Set 2.)

Page 33:

How many partitions should we use?

Bruske & Sommer: “just try them all”

For k = 1 to dimension(set):
    Subdivide the space into k regions
    Perform PCA on each region
    Retain the significant eigenvalues per region
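A sketch of this loop, reusing the lloyd() sketch above; `significant` is a placeholder for whichever significance rule is chosen (e.g. the α% cutoff shown two slides later):

```python
import numpy as np

def local_id_estimates(points, significant):
    """For each k, partition the data, run PCA per region, and count
    the significant eigenvalues in each region (sketch)."""
    estimates = {}
    for k in range(1, points.shape[1] + 1):
        labels, _ = lloyd(points, k)       # subdivide the space into k regions
        counts = []
        for j in range(k):
            region = points[labels == j]
            if len(region) < 2:
                continue                   # too few points to estimate a covariance
            cov = np.cov(region, rowvar=False)
            eigvals = np.linalg.eigvalsh(cov)[::-1]   # descending eigenvalues
            counts.append(significant(eigvals))       # retained count per region
        estimates[k] = counts              # local ID estimates for this k
    return estimates
```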

Page 34:

Which eigenvalues are significant?

Depends on:
• Intrinsic dimensionality
• Curvature of the surface
• Noise

Page 35:

Which eigenvalues are significant?

Discussed in class:
• Largest-n

In the papers:
• Cutoff after normalization (Bruske & Sommer)
• Statistical method (Verveer & Duin)

Page 36:

Which eigenvalues are significant?

Cutoff after normalization: with µ_i the i-th eigenvalue, retain eigenvalue i if

    µ_i / max_j µ_j ≥ α%

with α = 5, 10 or 20.
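As a sketch, this cutoff can be implemented as follows (the function name is illustrative) and plugged in as the `significant` rule in the loop above:

```python
import numpy as np

def alpha_cutoff(eigvals, alpha=5.0):
    """Count the eigenvalues that are at least alpha percent of the largest."""
    mu = np.asarray(eigvals, dtype=float)
    return int(np.sum(mu / mu.max() >= alpha / 100.0))
```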

Page 37:

Which eigenvalues are significant?

Statistical method (Verveer & Duin)

Calculate the error on the data reconstructed with the lowest eigenvalue dropped; for PCA, the mean squared reconstruction error equals the sum of the discarded eigenvalues.

Decide whether this error is statistically significant.

Page 38:

Results

• A one-dimensional space, embedded in 256 × 256 = 65,536 dimensions

• 180 images of a rotating cylinder

• ID = 1

Page 39:

Results

Page 40:

NEURAL NETWORK PCA

Page 41:

Basic Computational Element - Neuron

• Inputs/Outputs, Synaptic Weights, Activation Function

Page 42:
Page 43:

3-Layer Autoassociators

• N input, N output and M < N hidden neurons.

• Drawback of this model: the optimal solution remains the PCA projection, so it cannot improve on linear PCA.

Page 44:

5-Layer Autoassociators

• Neural-network approximators for principal surfaces, using 5 layers of neurons.

• A global, non-linear dimension reduction technique.

• Nonlinear PCA with these networks has been successfully applied to image and speech dimension reduction and to obtaining concise representations of color.

Page 45:

• The third layer carries the dimension-reduced representation and has width M < N.

• Linear activation functions are used for the representation layer.

• The networks are trained to minimize a mean squared error (MSE) criterion.

• They are approximators of principal surfaces.
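A minimal PyTorch sketch of such a network; the layer widths and training loop are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn

N, H, M = 64, 32, 2   # input width, hidden width, bottleneck width M < N (illustrative)

# Five layers of neurons: input -> nonlinear mapping -> linear representation
# layer of width M (the dimension-reduced representation) -> nonlinear
# demapping -> linear output that reconstructs the input.
autoassociator = nn.Sequential(
    nn.Linear(N, H), nn.Tanh(),   # mapping layer
    nn.Linear(H, M),              # linear representation layer
    nn.Linear(M, H), nn.Tanh(),   # demapping layer
    nn.Linear(H, N),              # output layer
)

optimizer = torch.optim.Adam(autoassociator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()            # trained to minimize reconstruction MSE

def train_step(batch):            # batch: tensor of shape (B, N)
    optimizer.zero_grad()
    loss = loss_fn(autoassociator(batch), batch)   # reconstruct the input
    loss.backward()
    optimizer.step()
    return loss.item()
```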

Page 46:

Locally Linear Approach to Nonlinear Dimension Reduction (the VQPCA Algorithm)

• Much faster to train than five-layer autoassociators, and often provides superior solutions.

• Like the five-layer autoassociators, the algorithm minimizes the MSE between the original data and its reconstruction from a low-dimensional representation (the reconstruction error).

Page 47:

VQPCA

Two steps in the algorithm:
1) Partition the data space by VQ (clustering).
2) Perform local PCA about each cluster center.

VQPCA is essentially a local PCA within each cluster.
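A sketch of the two steps, reusing the lloyd() sketch above; the interface and the parameter m (the number of local principal components to retain) are illustrative:

```python
import numpy as np

def vqpca_fit(points, k, m):
    """VQPCA sketch: VQ partition, then a local PCA basis per cluster."""
    labels, centroids = lloyd(points, k)              # step 1: vector quantization
    bases = []
    for j in range(k):
        region = points[labels == j] - centroids[j]   # center on the cluster mean
        cov = np.cov(region, rowvar=False)            # assumes enough points per cluster
        eigvals, eigvecs = np.linalg.eigh(cov)
        top = np.argsort(eigvals)[::-1][:m]
        bases.append(eigvecs[:, top])                 # step 2: local PCA basis
    return centroids, bases
```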

Page 48:

We can use two kinds of distance measures in VQPCA:
1) Euclidean distance
2) Reconstruction distance

Example intended for a 1D local PCA:
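The reconstruction distance measures how far a point lies from a cluster's local PCA hyperplane. A sketch, assuming the orthonormal bases produced by the vqpca_fit sketch above:

```python
import numpy as np

def reconstruction_distance(x, centroid, basis):
    """Squared distance from x to a cluster's local PCA hyperplane:
    the squared norm of the residual the basis cannot represent."""
    r = x - centroid
    coeffs = basis.T @ r                    # coordinates in the local basis
    return float(r @ r - coeffs @ coeffs)   # ||r||^2 minus ||projection||^2
```

Assigning points by reconstruction distance rather than Euclidean distance keeps the partition aligned with the reconstruction error that VQPCA minimizes.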

Page 49:

5-Layer Autoassociators vs. VQPCA

• Five-layer autoassociators are difficult to train; the VQPCA algorithm trains faster (VQ can be accelerated using tree-structured or multistage VQ).

• Five-layer autoassociators are prone to getting trapped in poor local optima.

• VQPCA is slower at encoding new data, but much faster at decoding.