Clustering Ken Kreutz-Delgado Nuno Vasconcelos ECE 175B Spring 2011 UCSD




Clustering

Ken Kreutz-Delgado

Nuno Vasconcelos

ECE 175B – Spring 2011 – UCSD


2

Statistical Learning

Goal: Given a relationship between a feature vector x and a target vector y, and iid sample data (xi, yi), find an approximating function $f(\cdot)$ such that $\hat{y} = f(x) \approx y$.

This is called training or learning.

Two major types of learning:

• Unsupervised Classification (aka Clustering) or Unsupervised Regression (Blind Curve-fitting): only X is known.

• Supervised Classification or Regression: both X and target value Y are known during training, only X is known at test time.

$x \;\rightarrow\; f(\cdot) \;\rightarrow\; \hat{y} = f(x) \approx y$


3

Unsupervised Learning – Clustering

Why learn without supervision?

• In many problems labels are not available or are impossible or expensive to get.

• E.g. in the hand-written digits example, a human sat in front of the computer for hours to label all those examples.

• For other problems the classes to be labeled depend on the application.

• A good example is image segmentation:

If you want to know whether this is an image of the wild or of a big city, there is probably no need to segment.

If you want to know whether there is an animal in the image, then you would.

– Either way, the segmentation mask is usually not available as a training label


4

Review of Supervised Classification

Although our focus is on clustering, let us start by reviewing supervised classification:

To implement the optimal decision rule for a supervised classification problem, we need to

• Collect a labeled iid training set

D = {(x1,y1) , … , (xn,yn)}

where xi is a vector of observations and yi is the associated class label,

and then

LEARN a probability model for each class

This involves estimating PX|Y(x|i) and PY(i) for each class i


5

Supervised Classification – Learning

Learning can be done by Maximum Likelihood Estimation (MLE)

MLE has two steps:

1) Choose a parametric model for each class pdf:

$P_{X|Y}(x \mid i \,;\, \Theta_i)$

2) Select the parameters of class i to be the ones that maximize the probability of the iid data from that class:

$\hat{\Theta}_i = \arg\max_{\Theta_i} P_{X|Y}(D^{(i)} \mid i \,;\, \Theta_i) = \arg\max_{\Theta_i} \log P_{X|Y}(D^{(i)} \mid i \,;\, \Theta_i)$


6

Maximum Likelihood Estimation

We have seen that MLE can be a straightforward procedure. In particular, if the pdf is twice differentiable, then:

• Solutions are the parameter values such that

$\nabla_{\Theta_i} P_{X|Y}(D^{(i)} \mid i \,;\, \hat{\Theta}_i) = 0$

• One always has to check the second-order condition!

$p^T \, \nabla^2_{\Theta_i} P_{X|Y}(D^{(i)} \mid i \,;\, \hat{\Theta}_i) \, p \le 0, \quad \forall p$

• We can also find an MLE for the class probabilities PY(i)

but here there is not much choice of probability model

‒ Bernoulli: ML estimate is the percent of training points in the class


7

Maximum Likelihood Estimation

We have worked out the Gaussian case in detail:

• $D^{(i)} = \{x_1^{(i)}, \dots, x_{n_i}^{(i)}\}$ = set of examples from class i

• The ML estimates for class i are

There are many other distributions for which we can derive a similar set of equations

But the Gaussian case is particularly relevant for clustering (more on this later)

$\hat{\mu}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} x_j^{(i)}$

$\hat{\Sigma}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} \left(x_j^{(i)} - \hat{\mu}_i\right)\left(x_j^{(i)} - \hat{\mu}_i\right)^T$

$\hat{P}_Y(i) = \frac{n_i}{n}$
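To make these estimates concrete, here is a minimal NumPy sketch (not from the original slides; the function and variable names are ours) that computes the per-class ML estimates from a labeled training set:

```python
import numpy as np

def gaussian_mle(X, y, num_classes):
    """Per-class ML estimates: mean, covariance, and prior P_Y(i).

    X: (n, d) array of feature vectors, y: (n,) array of integer class labels.
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    params = []
    for i in range(num_classes):
        Xi = X[y == i]                               # D^(i): the examples from class i
        mu = Xi.mean(axis=0)                         # hat{mu}_i
        Sigma = (Xi - mu).T @ (Xi - mu) / len(Xi)    # hat{Sigma}_i (ML: divide by n_i)
        params.append((mu, Sigma, len(Xi) / n))      # hat{P}_Y(i) = n_i / n
    return params
```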


8

Supervised 0/1-Loss Classification

This gives probability models for each of the classes

Now we utilize the fact that:

• Assuming the zero/one loss, the optimal Bayes decision rule (BDR) is the MAP rule:

$i^*(x) = \arg\max_i P_{Y|X}(i \mid x)$

which can also be written as

$i^*(x) = \arg\max_i \left[ \log P_{X|Y}(x \mid i) + \log P_Y(i) \right]$

• This completes the process of learning a BDR. We now have a rule for classifying any (unlabeled) future measurement x.
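Purely as an illustration (the helper below is hypothetical, not from the slides), the MAP rule can be coded generically once per-class log-densities and priors are available:

```python
import numpy as np

def map_classify(x, log_class_pdfs, priors):
    """i*(x) = argmax_i [ log P_{X|Y}(x|i) + log P_Y(i) ].

    log_class_pdfs: list of callables, each returning log P_{X|Y}(x|i).
    priors: list of class probabilities P_Y(i).
    """
    scores = [log_pdf(x) + np.log(p) for log_pdf, p in zip(log_class_pdfs, priors)]
    return int(np.argmax(scores))
```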


9

Gaussian BDR Classifier

In the Gaussian case the BDR is

$i^*(x) = \arg\min_i \left[ d_i^2(x, \mu_i) + \alpha_i \right]$

with

$d_i^2(x, y) = (x - y)^T \Sigma_i^{-1} (x - y), \qquad \alpha_i = \log\left((2\pi)^d |\Sigma_i|\right) - 2 \log P_Y(i)$

This can be seen as finding the nearest class neighbor, using a "funny metric":

• Each class has its own squared distance, which is the Mahalanobis-squared distance for that class plus a constant $\alpha_i$

o we effectively have different metrics in different regions of the space

[Figure: decision boundary where $P_{Y|X}(1|x) = 0.5$]
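A possible NumPy sketch of this Gaussian BDR, reusing the (mu_i, Sigma_i, prior_i) tuples from the earlier gaussian_mle sketch (our naming, not the slides'):

```python
import numpy as np

def gaussian_bdr(x, params):
    """Pick the class minimizing the Mahalanobis-squared distance plus alpha_i."""
    best_i, best_score = None, np.inf
    for i, (mu, Sigma, prior) in enumerate(params):
        diff = x - mu
        d2 = diff @ np.linalg.solve(Sigma, diff)                    # (x-mu)^T Sigma^{-1} (x-mu)
        alpha = np.log(np.linalg.det(2.0 * np.pi * Sigma)) - 2.0 * np.log(prior)
        if d2 + alpha < best_score:
            best_i, best_score = i, d2 + alpha
    return best_i
```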


10

Gaussian Classifier

A special case of interest is when all classes have the same covariance, $\Sigma_i = \Sigma$:

$i^*(x) = \arg\min_i \left[ d^2(x, \mu_i) + \alpha_i \right]$

with

$d^2(x, y) = (x - y)^T \Sigma^{-1} (x - y), \qquad \alpha_i = -2 \log P_Y(i)$

• Note: $\alpha_i$ can be dropped when all classes have equal probability

Then this is close to the NN classifier with Mahalanobis distance

However, instead of finding the nearest neighbor, it looks for the nearest class "prototype" or "template" $\mu_i$

[Figure: decision boundary where $P_{Y|X}(1|x) = 0.5$]


11

Special Case Gaussian BDR

$\Sigma_i = \Sigma$ for two classes (detection)

• One important property of this case is that the decision boundary is a hyperplane.

• This can be shown by computing the set of points x such that

$d^2(x, \mu_0) + \alpha_0 = d^2(x, \mu_1) + \alpha_1$

and showing that they satisfy

$w^T (x - x_0) = 0$

This is the equation of a hyperplane with normal w. x0 can be any fixed point on the hyperplane, but it is standard to choose it to have minimum norm, in which case w and x0 are parallel.

[Figure: hyperplane decision boundary with normal w through the point x0, where $P_{Y|X}(1|x) = 0.5$]
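A small numerical check of the hyperplane claim, as a sketch under the equal-covariance Gaussian model above; the closed forms for w, b, and the minimum-norm x0 below are standard but are our addition, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
mu0, mu1 = rng.normal(size=d), rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T + np.eye(d)                 # shared covariance (positive definite)
P0, P1 = 0.3, 0.7                           # class priors
Sinv = np.linalg.inv(Sigma)

def disc(x, mu, prior):                     # d^2(x, mu) + alpha, with alpha = -2 log P_Y(i)
    return (x - mu) @ Sinv @ (x - mu) - 2.0 * np.log(prior)

# disc0(x) - disc1(x) = w^T x + b, so the set {disc0 = disc1} is the hyperplane w^T(x - x0) = 0
w = 2.0 * Sinv @ (mu1 - mu0)
b = mu0 @ Sinv @ mu0 - mu1 @ Sinv @ mu1 + 2.0 * np.log(P1 / P0)
x0 = -b * w / (w @ w)                       # minimum-norm point on the boundary (parallel to w)

v = rng.normal(size=d)
v -= (v @ w) / (w @ w) * w                  # any direction orthogonal to w stays on the boundary
x = x0 + 2.5 * v
print(np.isclose(disc(x, mu0, P0), disc(x, mu1, P1)))   # True
```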


12

Special Case Gaussian BDR

If all the covariances are the identity, $\Sigma_i = I$, then

$i^*(x) = \arg\min_i \left[ d^2(x, \mu_i) + \alpha_i \right]$

with

$d^2(x, y) = \|x - y\|^2, \qquad \alpha_i = -2 \log P_Y(i)$

This is just (Euclidean-distance) template matching with class means as templates

• E.g. for digit classification, the class means (templates) are:

[Figure: class-mean templates for the digit classes]

• Compare the complexity of template matching to nearest neighbors!


13

Unsupervised Classification - Clustering

In a clustering problem we do not have labels in the training set

However, we can try to estimate the class labels and the class pdf parameters

Here is a strategy:

• Assume k classes with pdf’s initialized to randomly chosen parameter values

• Then iterate between two steps:

1) Apply the optimal decision rule for the (estimated) class pdf’s

this assigns each point to one of the clusters, creating pseudo-labeled data

2) Update the pdf estimates by doing parameter estimation within each estimated (pseudo-labeled) class cluster found in step 1


14

Unsupervised Classification - Clustering

Natural question: what probability model do we assume?

• Let's start as simple as possible (KISS)

• Assume Gaussian class pdfs with identity covariances and equal PY(i)

• Each class has an (unknown) prototype $\mu_i$ which must be learned

The resulting clustering algorithm is the k-means algorithm:

• Start with some initial estimate of the $\mu_i$ (e.g. random, but distinct)

• Then, iterate between

1) Simple template-matching classification rule:

$i^*(x) = \arg\min_i \|x - \mu_i\|^2$

2) Re-estimation:

$\mu_i^{new} = \frac{1}{n_i} \sum_{j=1}^{n_i} x_j^{(i)}$

where $x_1^{(i)}, \dots, x_{n_i}^{(i)}$ are the $n_i$ points currently assigned to cluster i
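A minimal NumPy sketch of this two-step loop (our implementation, for illustration only; the names are ours):

```python
import numpy as np

def kmeans(X, k, num_iters=100, seed=0):
    """Basic k-means: alternate template-matching assignment and mean re-estimation."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=k, replace=False)].copy()   # distinct initial prototypes
    for _ in range(num_iters):
        # 1) assign each point to the nearest prototype (squared Euclidean distance)
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # 2) re-estimate each prototype as the mean of the points assigned to it
        new_mu = np.array([X[labels == i].mean(axis=0) if np.any(labels == i) else mu[i]
                           for i in range(k)])
        if np.allclose(new_mu, mu):
            break
        mu = new_mu
    return mu, labels
```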

15–19

K-means Clustering (figures courtesy of Andrew Moore, CMU)

[Slides 15–19: step-by-step illustration of the k-means iterations; figures not reproduced here]


20

K-means Clustering

The name comes from the fact that we are trying to learn the "k" means of "k" assumed clusters

It is optimal if you want to minimize the expected value of the squared error between a vector x and the template to which x is assigned

Problems:

• How many clusters? (i.e., what is k?)

• Various methods are available: the Bayesian information criterion, the Akaike information criterion, minimum description length

• Guessing can work pretty well

• Algorithm converges to a local minimum only

• How does one initialize?

• Random can be pretty bad

• Mean Splitting can be significantly better


21

Mean Splitting

For k = 1 we just need the sample mean of all points, $\mu^{(1)}$

To initialize the means for k = 2, perturb $\mu^{(1)}$ randomly:

• $\mu_1^{(2)} = \mu^{(1)}$

• $\mu_2^{(2)} = (1 + \varepsilon)\, \mu^{(1)}$, with $\varepsilon \ll 1$

Then run k-means with k = 2

Initial means for k = 4

• $\mu_1^{(4)} = \mu_1^{(2)}$

• $\mu_2^{(4)} = (1 + \varepsilon)\, \mu_1^{(2)}$

• $\mu_3^{(4)} = \mu_2^{(2)}$

• $\mu_4^{(4)} = (1 + \varepsilon)\, \mu_2^{(2)}$

Then run k-means with k = 4

etc ….
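A sketch of this initialization scheme (our code; it assumes the target k is a power of two and adapts the earlier kmeans sketch so that it can start from given prototypes):

```python
import numpy as np

def kmeans_from_init(X, mu, num_iters=50):
    """Lloyd iterations starting from the given prototypes mu (k x d)."""
    X = np.asarray(X, dtype=float)
    for _ in range(num_iters):
        labels = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        mu = np.array([X[labels == i].mean(axis=0) if np.any(labels == i) else mu[i]
                       for i in range(len(mu))])
    return mu

def mean_splitting(X, k_target, eps=1e-3):
    """Initialize k-means by successive mean splitting: 1 -> 2 -> 4 -> ... prototypes."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0, keepdims=True)            # k = 1: the overall sample mean mu^(1)
    while len(mu) < k_target:
        mu = np.vstack([mu, (1.0 + eps) * mu])    # add a perturbed copy of every prototype
        mu = kmeans_from_init(X, mu)              # run k-means at the doubled k
    return mu
```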


22

Empty Clusters

Can be a source of headaches

At the end of each iteration of k-means

• Check the number of elements in each cluster

• If too low, throw the cluster away

• Re-initialize it with a perturbed version of the mean of the most populated cluster (see the sketch below)
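A sketch of one such heuristic (our code; the slide describes the general idea, the exact re-seeding rule below is an assumption):

```python
import numpy as np

def reseed_small_clusters(X, mu, labels, min_size=1, eps=1e-3):
    """Re-seed clusters that ended an iteration with too few points:
    replace their mean with a perturbed copy of the most populated cluster's mean."""
    counts = np.bincount(labels, minlength=len(mu))
    for i in np.where(counts < min_size)[0]:
        mu[i] = (1.0 + eps) * mu[counts.argmax()]
    return mu
```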

Note that there are alternative names:

• In the compression literature this is known as the generalized Lloyd algorithm

• This is actually the right name, since Lloyd was the first to invent it

• It is used in the design of vector quantizers


23

Vector Quantization

Is a popular data compression technique

• Find a codebook of prototypes for the vectors to compress

• Instead of transmitting each vector, transmit the codebook index

• Image compression example

• Each pixel has 3 color channels (3 bytes)

• Instead, find an optimal codebook of 256 color prototypes! (256 indices ≈ 1 byte)


24

Vector Quantization

We now have an image compression scheme:

• Each pixel has 3 colors (3 bytes)

• Instead, find the nearest neighbor template in codebook

• We transmit the template index

• Since there are only 256 templates, only one byte is needed

• Using the index, the decoder looks up the prototype in its table

• By sacrificing a little bit of distortion, we saved 2 bytes per pixel!
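A toy sketch of this scheme (our code, reusing the kmeans sketch from the k-means slide; suitable only for small images, since the distance computation is not memory-efficient):

```python
import numpy as np

def vq_compress(img, codebook_size=256, num_iters=20):
    """Learn a color codebook with k-means and encode each pixel as a 1-byte index."""
    pixels = img.reshape(-1, 3).astype(float)
    codebook, indices = kmeans(pixels, codebook_size, num_iters)   # kmeans: sketch from above
    return codebook, indices.astype(np.uint8)                      # 1 byte per pixel, not 3

def vq_decode(codebook, indices, shape):
    """Decoder: look up each transmitted index in the codebook table."""
    return codebook[indices].reshape(shape).astype(np.uint8)
```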


25

K-means Clustering

There are many other applications

• E.g. image segmentation: decompose each image into component objects

• Then run k-means on the colors and look at the assignments

• The pixels assigned to the red cluster tend to be from the booth


26

K-means Clustering

We can also use texture information in addition to color

• Many methods for doing this

• Here are some results

• Note that this is not the state-of-the-art in image segmentation

• But gives a good idea of what k-means can do


27

Extensions

There are many extensions to the basic k-means algorithm

• One of the most important applications is the learning of accurate approximations to general, nontrivial PDF’s.

• Remember that the optimal decision rule (repeated below) is only optimal if the true probabilities PX|Y(x|i) are correctly estimated

• This often turns out to be impossible when we use simple parametric models like the Gaussian – the true probability is too complicated for any simple model to hold accurately

• Even if simple models provide good local approximations, there are usually multiple clusters when we take a global view

• These weaknesses can be addressed by the use of mixture distributions

$i^*(x) = \arg\max_i \left[ \log P_{X|Y}(x \mid i) + \log P_Y(i) \right]$


28

Mixture Distributions

Consider the following problem

• Certain types of traffic banned from a bridge

• We need a classifier to see if the ban is holding

• A sensor measures vehicle weight

• Want to classify each car into class = “OK” or class = “Banned”

• However, within each class there are multiple sub-classes

• E.g. OK = {compact, sedan, station wagon, SUV}

Banned = {truck, bus, semi}

• Each of the sub-classes is close to Gaussian, but the density of a whole class is multimodal (a combination of the sub-class densities)


29

Mixture distributions

This distribution is a mixture

• The overall pdf is determined by a number of sub-class densities

• We introduce a random variable Z to account for this

• A value Z = c means that Z points to sub-class c and thus picks out the cth component density from the mixture.

• E.g. a Gaussian mixture:

$P_X(x) = \sum_{c=1}^{C} \pi_c \, \mathcal{N}(x \,;\, \mu_c, \Sigma_c)$

where C is the number of mixture components, $\pi_c$ is the cth component "weight", and the cth "mixture component" $\mathcal{N}(x; \mu_c, \Sigma_c)$ is a Gaussian pdf
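As an illustration, a 1-D Gaussian mixture pdf can be evaluated with a few lines of NumPy (our code; the two-component parameters in the example are made up and purely illustrative):

```python
import numpy as np

def gauss_pdf(x, mu, var):
    """N(x; mu, var) for scalar x or an array of x values."""
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def gmm_pdf(x, weights, means, variances):
    """P_X(x) = sum_c pi_c N(x; mu_c, sigma_c^2) for a 1-D mixture."""
    return sum(w * gauss_pdf(x, m, v) for w, m, v in zip(weights, means, variances))

# hypothetical two-component "vehicle weight" density (numbers are illustrative only)
x = np.linspace(0.5, 4.0, 200)
p = gmm_pdf(x, weights=[0.7, 0.3], means=[1.3, 2.8], variances=[0.05, 0.2])
```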


30

Mixture Distributions

Learning a mixture density is a type of "soft" clustering problem

• For each training point xk we need to figure out from which component Zk = Z(xk) = j it was drawn

• Once we know how points are assigned to a component j we can estimate the jth component’s pdf parameters

This could be done with k-means

A more general algorithm is Expectation-Maximization (EM)

• A key difference from k-means: we never “hard assign” the points xk

• In the expectation step we compute the posterior probabilities that a point xk belongs to class j, for every j, conditioned on all the data D.

But we do not make a hard decision! (E.g., we do not assign the point xk to a single class via the MAP rule.)

• Instead, in the maximization step, the point xk contributes to (is assigned to) all classes weighted by the posterior class probabilities


31

Expectation-Maximization (EM)

In summary:

1. Start with an initial parameter vector estimate $\Theta^{(0)}$

2. E-step: Given the current parameters $\Theta^{(i)}$ and the observations in D, estimate the indicator functions $\mathbb{1}(Z_k = j)$ via the conditional expectation

$h_{kj} = E\{\mathbb{1}(Z_k = j) \mid D \,;\, \Theta^{(i)}\} = E\{\mathbb{1}(Z_k = j) \mid x_k \,;\, \Theta^{(i)}\}$

3. M-step: Weighting the data xk by hkj, we have a complete-data MLE problem for each class j, i.e., maximize the class-j likelihoods over the parameters and re-compute $\Theta^{(i+1)}$

4. Go to 2.

In graphical form: [Diagram: the EM loop, in which the E-step fills in the class assignments $h_{kj}$ and the M-step re-estimates the parameters $\Theta^{(i+1)}$]


32

Expectation Maximization (EM)

Note that

$h_{kj} = E\{\mathbb{1}(Z_k = j) \mid x_k \,;\, \Theta^{(i)}\} = P_{Z|X}(j \mid x_k \,;\, \Theta^{(i)})$

and

$\hat{n}_j = \sum_{k=1}^{n} E\{\mathbb{1}(Z_k = j) \mid x_k \,;\, \Theta^{(i)}\} = \sum_{k=1}^{n} h_{kj}, \qquad \sum_{j=1}^{C} \hat{n}_j = n$


33

Gaussian EM Algorithm

For the Gaussian Mixture Model this yields:

Expectation Step:

$h_{kj} = P_{Z|X}(j \mid x_k \,;\, \Theta^{(i)}) = \frac{\pi_j^{(i)} \, \mathcal{N}\!\left(x_k \,;\, \mu_j^{(i)}, \sigma_j^{2\,(i)}\right)}{\sum_{c=1}^{C} \pi_c^{(i)} \, \mathcal{N}\!\left(x_k \,;\, \mu_c^{(i)}, \sigma_c^{2\,(i)}\right)}$

Maximization Step:

$\hat{n}_j = \sum_{k=1}^{n} h_{kj}, \qquad \pi_j^{(i+1)} = \frac{\hat{n}_j}{n}$

$\mu_j^{(i+1)} = \frac{1}{\hat{n}_j} \sum_{k=1}^{n} h_{kj} \, x_k, \qquad \sigma_j^{2\,(i+1)} = \frac{1}{\hat{n}_j} \sum_{k=1}^{n} h_{kj} \left(x_k - \mu_j^{(i+1)}\right)^2$

Compare to the single (non-mixture) Gaussian MLE solution shown on page 7!
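Putting the E- and M-steps together, a compact sketch of EM for a 1-D Gaussian mixture might look like this (our code, following the update equations above; the initialization details are an assumption):

```python
import numpy as np

def em_gmm_1d(x, C, num_iters=100, seed=0):
    """EM for a 1-D Gaussian mixture with C components (a 'soft' version of k-means)."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(x)
    pi = np.full(C, 1.0 / C)                         # component weights pi_c
    mu = rng.choice(x, size=C, replace=False)        # initial means: random data points
    var = np.full(C, x.var())                        # initial variances
    for _ in range(num_iters):
        # E-step: responsibilities h[k, j] = P_{Z|X}(j | x_k; Theta^(i))
        h = pi * np.exp(-(x[:, None] - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
        h /= h.sum(axis=1, keepdims=True)
        # M-step: weighted ML re-estimates
        nj = h.sum(axis=0)                           # soft counts n_j-hat
        pi = nj / n
        mu = (h * x[:, None]).sum(axis=0) / nj
        var = (h * (x[:, None] - mu) ** 2).sum(axis=0) / nj
    return pi, mu, var
```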


34

Expectation-Maximization (EM)

Note that the difference between EM and k-means is that

• In the E-step, hkj is not hard-limited to 0 or 1

Doing so would make the M-step basically the same as k-means

• Plus we get estimates of the class covariances and class probabilities automatically

k-means can be seen as a greedy version of EM

At each iteration, for each point we make a hard decision (the optimal MAP BDR for identity covariances & equal class priors)

But this does not take into account the information in the points we “throw away”. I.e., potentially all points carry information about all sub-classes (density components)

Note: If the hard assignment is best, EM will learn it

To get a feeling for EM you can use

• http://www-cse.ucsd.edu/users/ibayrakt/java/em/


35

END