Tutorial on Gaussian Processes and the Gaussian Process Latent Variable Model
(& discussion on the GPLVM tech. report by Prof. N. Lawrence, ’06)

Andreas Damianou
Department of Neuro- and Computer Science, University of Sheffield, UK

University of Surrey, 13/06/2012

Slides: adamian.github.io/talks/tutorialGP_GPLVM_Surrey2012.pdf

Outline

Part 1: Gaussian processes
  • Parametric models: ML and Bayesian regression
  • Nonparametric models: Gaussian process regression
  • Covariance functions

Part 2: Gaussian Process Latent Variable Model
  • Dimensionality Reduction: Motivation
  • From probabilistic PCA to Dual PPCA
  • From Dual PPCA to GP-LVM

Part 3: Applications of GP-LVM in vision

Introducing Gaussian Processes: Outline

From: ML / MAP Regression, to Bayesian Regression, to GP Regression

                        Bayesian    Non-parametric
  ML / MAP regression      ✗              ✗
  Bayesian regression      ✓              ✗
  GP regression            ✓              ✓

Maximum Likelihood Regression

Data: X (inputs) → Y (targets)

• Regression: Assume a parametric model with parameters θ

• Likelihood L(θ) = p(Y | X, θ) is obtained from the PDF of the assumed distribution

• Example: Linear Regression
  - Y = f(X, W) + ε,   f(X, W) = WX,   ε ∼ N(0, β⁻¹)
  - L(θ) = p(Y | X, θ) = N(Y | WX, β⁻¹),   θ = {W, β⁻¹}
  - Optimise: θ̂ = argmax_θ L(θ). Predictions based on p(y∗ | x∗, θ̂)
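
To make the recipe concrete, here is a minimal sketch (Python/NumPy, with illustrative data; the names W_hat, beta_hat are not from the slides) of fitting W and β by maximum likelihood and forming the plug-in prediction p(y∗ | x∗, θ̂):

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))                  # inputs
Y = 2.0 * X + rng.normal(0, 0.3, size=X.shape)        # targets with Gaussian noise

# argmax_W of the Gaussian likelihood N(Y | WX, beta^-1) is the least-squares solution;
# beta is then the inverse of the residual variance.
W_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
beta_hat = 1.0 / (Y - X @ W_hat).var()

# Plug-in prediction p(y*|x*, theta_hat): Gaussian with mean W_hat x* and variance 1/beta_hat.
x_star = np.array([[1.5]])
y_star_mean = x_star @ W_hat
y_star_var = 1.0 / beta_hat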

Bayesian parametric model

• Bayes rule:

  p(θ | X, Y) = p(Y | X, θ) p(θ) / p(Y | X)

  (posterior = likelihood × prior / evidence)

• Predictions via marginalisation:

  p(y∗ | x∗, X, Y) = ∫ p(y∗ | x∗, X, Y, θ) p(θ | X, Y) dθ

  (integrand: likelihood × posterior)

• θ is integrated out, but we still assume a parametric model (i.e. f(X, θ) = WX, θ = {W, β})

• The integral (and sometimes p(Y | X)) is often intractable
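
For the linear-Gaussian case the integral is tractable. A minimal sketch, assuming a conjugate Gaussian prior p(W) = N(0, α⁻¹I); the prior choice and the values of α and β are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(50, 1))
Y = 2.0 * X + rng.normal(0, 0.3, size=X.shape)
alpha, beta = 1.0, 1.0 / 0.3 ** 2               # prior precision, noise precision (illustrative)

# Posterior p(W | X, Y) = N(m, S): Bayes rule applied to Gaussians gives a closed form.
S = np.linalg.inv(alpha * np.eye(1) + beta * X.T @ X)
m = beta * S @ X.T @ Y

# Predictive p(y*|x*, X, Y) = integral of p(y*|x*, W) p(W|X, Y) dW, also Gaussian:
x_star = np.array([[1.5]])
pred_mean = x_star @ m
pred_var = 1.0 / beta + x_star @ S @ x_star.T   # noise + parameter uncertainty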

Gaussian process nonparametric models

                       Mapping            Prior
  Parametric           f(X, θ) = WX       on the function parameters (p(θ) = p(W))
  Nonparametric (GP)   f ∼ GP             on the function itself

• A GP is a prior over functions. It depends on a mean and a covariance function (NOT matrix!)

  Prior: f_n = f(x_n) ∼ GP(m(x_n), k(x_n, x_n')) → infinite

  Joint: f∗, F ∼ N(µ∗, K∗) → finite   (F = {f_1, ..., f_N})

• Posterior/predictive process/distribution f∗|f is also Gaussian!
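
A short sketch of what "prior over functions" means in practice: evaluated at any finite set of inputs, the GP prior is just a multivariate Gaussian, so sample functions can be drawn from N(0, K). The RBF covariance and the jitter value below are illustrative choices:

import numpy as np

def rbf(X1, X2, alpha=1.0, gamma=1.0):
    # k(x, x') = alpha * exp(-(gamma/2) ||x - x'||^2)  (illustrative covariance)
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return alpha * np.exp(-0.5 * gamma * sq)

X = np.linspace(-5, 5, 100)[:, None]                  # any finite set of input locations
K = rbf(X, X) + 1e-8 * np.eye(len(X))                 # small jitter for numerical stability
samples = np.random.default_rng(2).multivariate_normal(
    np.zeros(len(X)), K, size=3)                      # three draws f ~ N(0, K): sample "functions"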

Gaussian process nonparametric models (modified from C. E. Rasmussen’s tutorial, “Learning with Gaussian Processes”)

• Gaussian Likelihood: Y | X, f(x) = N(Y | F, β⁻¹I)
• (Zero mean) GP prior: f(x) ∼ GP(0, k(x, x'))

• Leads to a GP posterior:

  f(x) | X, Y ∼ GP( m_post(x) = k(x, X) K(X, X)⁻¹ F,
                    k_post(x, x') = k(x, x') − k(x, X) K(X, X)⁻¹ k(X, x') )

• ... and a Gaussian predictive distribution:

  y∗ | x∗, X, Y ∼ N( k(x∗, X) [K(X, X) + β⁻¹I]⁻¹ Y,
                     k(x∗, x∗) + β⁻¹ − k(x∗, X) [K(X, X) + β⁻¹I]⁻¹ k(X, x∗) )
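
A minimal sketch implementing the predictive mean and variance above; the RBF covariance and all hyperparameter values are illustrative assumptions, not fixed by the slide:

import numpy as np

def rbf(X1, X2, alpha=1.0, gamma=1.0):
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return alpha * np.exp(-0.5 * gamma * sq)

def gp_predict(X, Y, X_star, beta, k=rbf):
    Kn = k(X, X) + (1.0 / beta) * np.eye(len(X))      # K(X,X) + beta^-1 I
    Ks = k(X_star, X)                                 # k(x*, X)
    mean = Ks @ np.linalg.solve(Kn, Y)
    cov = (k(X_star, X_star) + (1.0 / beta) * np.eye(len(X_star))
           - Ks @ np.linalg.solve(Kn, Ks.T))
    return mean, cov

# Toy usage: noisy observations of sin(x), predictions on a grid.
rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(20, 1))
Y = np.sin(X) + rng.normal(0, 0.1, size=X.shape)
mean, cov = gp_predict(X, Y, np.linspace(-4, 4, 50)[:, None], beta=100.0)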

Covariance functions

• But where did k(x, x') (and K(X, X), etc.) come from?

• Assumptions about properties of f ⇒ define a parametric form for k, e.g.:

  k(x, x') = α exp( −(γ/2) (x − x')ᵀ(x − x') )

• However, a prior with this cov. function defines a whole family of functions

• The parameters {α, γ} are hyperparameters.

Covariance samples and hyperparameters

• The hyperparameters of the cov. function define the properties (and NOT an explicit form) of the sampled functions
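
A hedged sketch of this point: with the RBF covariance from the previous slide, α scales the amplitude of the sampled functions and γ controls how quickly they vary, without prescribing any explicit functional form. The values compared here are illustrative:

import numpy as np

def rbf(X1, X2, alpha, gamma):
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return alpha * np.exp(-0.5 * gamma * sq)

X = np.linspace(-5, 5, 200)[:, None]
rng = np.random.default_rng(4)
for alpha, gamma in [(1.0, 1.0), (1.0, 10.0), (4.0, 1.0)]:
    K = rbf(X, X, alpha, gamma) + 1e-8 * np.eye(len(X))
    f = rng.multivariate_normal(np.zeros(len(X)), K)   # one sample function per setting
    print(f"alpha={alpha}, gamma={gamma}: sample std ~ {f.std():.2f}")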

Gaussian Process Regression - demo (source: N. Lawrence’s talk, “Learning and Inference with Gaussian Processes” (2005))

Observing more and more data: Prior + data likelihood get combined
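
The demo can be mimicked numerically: conditioning the GP on a growing set of observations combines the prior with the data likelihood and shrinks the predictive uncertainty. A rough sketch with toy data (all names and values illustrative):

import numpy as np

def rbf(X1, X2, alpha=1.0, gamma=1.0):
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return alpha * np.exp(-0.5 * gamma * sq)

rng = np.random.default_rng(5)
X_all = rng.uniform(-3, 3, size=(8, 1))
Y_all = np.sin(X_all) + rng.normal(0, 0.1, size=X_all.shape)
X_star = np.linspace(-3, 3, 100)[:, None]
beta = 100.0

for n in range(1, len(X_all) + 1):                     # observe 1, 2, ... training points
    X, Y = X_all[:n], Y_all[:n]
    Kn = rbf(X, X) + (1.0 / beta) * np.eye(n)
    Ks = rbf(X_star, X)
    mean = Ks @ np.linalg.solve(Kn, Y)                 # posterior mean pulled towards the data
    # predictive variance; k(x*, x*) = 1.0 for this alpha
    var = 1.0 + 1.0 / beta - np.sum(Ks * np.linalg.solve(Kn, Ks.T).T, axis=1)
    print(f"{n} points: average predictive std = {np.sqrt(var).mean():.3f}")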

Dimensionality Reduction: Motivation

• Each data-point is 64 × 57 = 3,648-dimensional (pixel space)

• However, intrinsic dimensionality is lower

• Consider digit rotations

• Create a new dataset, where a prototype is repeated under each of 360 different angles

• Project into principal components 2 and 3

• Low-dimensional embedding (3,648 → 2 dimensions) captures all necessary information
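
A rough sketch of this experiment, assuming a synthetic stand-in for the digit prototype and using scipy.ndimage.rotate for the rotations (the data here is invented, the procedure follows the slide):

import numpy as np
from scipy.ndimage import rotate

prototype = np.zeros((64, 57))
prototype[20:44, 25:32] = 1.0                          # synthetic stand-in for a digit image

Y = np.stack([rotate(prototype, angle, reshape=False).ravel()
              for angle in range(360)])                # 360 x 3,648 data matrix

Yc = Y - Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Yc, full_matrices=False)
embedding = Yc @ Vt[1:3].T                             # coordinates on principal components 2 and 3
# The 360 points trace a closed curve: one intrinsic degree of freedom (the rotation angle).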

Probabilistic, generative methods

• Observed (high-dimensional) data: Y ∈ R^{N×D}
  These contain redundant information

• Actual (low-dimensional) data: X ∈ R^{N×Q}, Q ≪ D
  These are unobserved and (ideally) contain only the minimum amount of information needed to correctly describe the phenomenon

• Work “backwards”: learn f : X ↦ Y

• Model (compare with regression):

  y_nd = f_d(x_n, W) + ε_n,   where f_d(x_n, W) = Wx_n  and  ε_n ∼ N(0, β⁻¹)

• p(Y | W, X, β) = ∏_{n=1}^{N} N(y_n | Wx_n, β⁻¹I)

• W ∈ R^{D×Q}, X ∈ R^{N×Q}, Q ≪ D

• X is unobserved
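
A small sketch of this generative model and its log-likelihood; the dimensions N, D, Q and the value of β are arbitrary illustrative choices:

import numpy as np

rng = np.random.default_rng(6)
N, D, Q, beta = 100, 10, 2, 100.0
X = rng.normal(size=(N, Q))                            # unobserved low-dimensional data
W = rng.normal(size=(D, Q))                            # linear mapping parameters
Y = X @ W.T + rng.normal(0, 1 / np.sqrt(beta), size=(N, D))   # observed data, y_n = W x_n + eps_n

# log p(Y | W, X, beta) = sum_n log N(y_n | W x_n, beta^-1 I)
resid = Y - X @ W.T
log_lik = 0.5 * N * D * (np.log(beta) - np.log(2 * np.pi)) - 0.5 * beta * np.sum(resid ** 2)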

From dual PPCA to GP-LVM

• PPCA places a prior on, and marginalises, the latent space X and optimises the linear mapping’s parameters W

• Dual PPCA does the opposite: the prior is placed on the mapping parameters.

  PPCA:       p(Y | W, β) = ∫ p(Y | X, W, β) p(X) dX

  Dual PPCA:  p(Y | X, β) = ∫ p(Y | X, W, β) p(W) dW
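
A sketch of the dual-PPCA marginalisation, assuming the standard unit-Gaussian prior on the rows of W (that prior choice is stated here, not on this slide): integrating W out gives, per output dimension, a Gaussian with covariance XXᵀ + β⁻¹I, i.e. a GP with a linear kernel:

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(7)
N, D, Q, beta = 50, 5, 2, 100.0
X = rng.normal(size=(N, Q))
Y = X @ rng.normal(size=(D, Q)).T + rng.normal(0, 1 / np.sqrt(beta), size=(N, D))

# With p(W) = prod_d N(w_d | 0, I), each output column is marginally Gaussian:
# p(y_:,d | X, beta) = N(0, X X^T + beta^-1 I)
C = X @ X.T + (1.0 / beta) * np.eye(N)
log_marginal = sum(multivariate_normal(np.zeros(N), C).logpdf(Y[:, d]) for d in range(D))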

Gaussian process latent variable model (GP-LVM)

• PPCA and Dual PPCA are equivalent (equivalent eigenvalue problems for the ML solution)

• GP-LVM: Instead of placing a prior p(W) on the parametric mapping’s parameters, we can place a prior directly on the mapping function ⇒ GP prior

• A GP prior f ∼ GP(0, k(x, x')) allows for non-linear mappings if the kernel k is non-linear. For example:

  k(x, x') = α exp( −(γ/2) (x − x')ᵀ(x − x') )
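
A hedged sketch of the resulting GP-LVM objective: swap the linear kernel XXᵀ for a non-linear one and optimise the latent X by maximising the GP marginal likelihood. The use of scipy.optimize.minimize, the toy data, and the fixed hyperparameters are illustrative choices, not the reference implementation from the report:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
N, D, Q, beta = 40, 5, 2, 100.0
t = np.linspace(0, 3 * np.pi, N)
Y = np.column_stack([np.sin(d * t) for d in range(1, D + 1)])   # toy observed data, N x D
Y = Y - Y.mean(axis=0)

def neg_log_marginal(x_flat):
    X = x_flat.reshape(N, Q)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * sq) + (1.0 / beta) * np.eye(N)            # RBF kernel + noise (fixed hyperparameters)
    _, logdet = np.linalg.slogdet(K)
    Kinv_Y = np.linalg.solve(K, Y)
    # -log p(Y | X) = (D/2) log|K| + (1/2) tr(K^-1 Y Y^T) + const
    return 0.5 * D * logdet + 0.5 * np.sum(Y * Kinv_Y)

X0 = np.linalg.svd(Y, full_matrices=False)[0][:, :Q].ravel()    # PCA-style initialisation
res = minimize(neg_log_marginal, X0, method="L-BFGS-B")
X_learned = res.x.reshape(N, Q)                                 # optimised latent coordinates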

Dimensionality reduction: Linear vs non-linear

Image from: ”Dimensionality Reduction the Probabilistic Way”, N. Lawrence, ICML tutorial 2008

Applications of the GP-LVM in vision

• Modelling human motion (inverse kinematics [1], body parts decomposition [4], ...) (Show video...)

• Animation [2] (Show video...)

• Tracking [3]

• Reconstruction & probabilistic generation of HD video / high-res. images [5,6]

• ...

[1] Grochow et al. (2004), Style-based Inverse Kinematics (SIGGRAPH)

[2] Baxter and Anjyo (2006), Latent Doodle Space (Eurographics)

[3] Urtasun et al. (2005), Priors for People Tracking from Small Training Sets

[4] Lawrence and Moore (2007), Hierarchical Gaussian process latent variable models (ICML)

[5] Damianou et al. (2011), Variational Gaussian process dynamical systems (NIPS)

[6] Damianou et al. (2012), Manifold Relevance Determination (ICML)

Main sources:

• N. D. Lawrence (2006), “The Gaussian process latent variable model”, Technical Report no. CS-06-03, The University of Sheffield, Department of Computer Science

• N. D. Lawrence (2006), “Learning and inference with Gaussian processes: an overview of Gaussian processes and the GP-LVM”. Presented at the University of Manchester, Machine Learning Course Guest Lecture, 3/11/2006

• N. D. Lawrence (2006), “Probabilistic dimensional reduction with the Gaussian process latent variable model” (talk)

• C. E. Rasmussen (2008), “Learning with Gaussian Processes”, Max Planck Institute for Biological Cybernetics, published Feb. 5, 2008 (Videolectures.net)

• C. E. Rasmussen and C. K. I. Williams (2006), Gaussian Processes for Machine Learning, MIT Press, Cambridge, MA. ISBN 026218253X

• N. D. Lawrence, lecture notes for “Machine Learning and Adaptive Intelligence” (2012)