INFO 4300 / CS4300: Information Retrieval
slides adapted from Hinrich Schütze's, linked from http://informationretrieval.org/
IR 21/26: Linear Classifiers and Flat clustering
Paul Ginsparg
Cornell University, Ithaca, NY
12 Nov 2009
Overview
1 Recap
2 Evaluation
3 How many clusters?
4 Discussion
Outline
1 Recap
2 Evaluation
3 How many clusters?
4 Discussion
Linear classifiers
Linear classifiers compute a linear combination or weighted sum ∑_i w_i x_i of the feature values.
Classification decision: ∑_i w_i x_i > θ?
. . . where θ (the threshold) is a parameter.
(First, we only consider binary classifiers.)
Geometrically, this corresponds to a line (2D), a plane (3D), or a hyperplane (higher dimensionalities)
Assumption: The classes are linearly separable.
Can find hyperplane (= separator) based on training set
Methods for finding separator: Perceptron, Rocchio, Naive Bayes – as we will explain on the next slides
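As a sketch (not code from the slides), the decision rule above can be written directly; the weights, feature values, and threshold below are invented for illustration:

```python
# Binary linear classifier decision rule: classify as positive
# iff the weighted sum of feature values exceeds the threshold theta.
def linear_decision(weights, features, theta):
    score = sum(w * x for w, x in zip(weights, features))
    return score > theta

# Toy example with made-up weights for three features:
w = [0.5, -1.0, 2.0]
x = [1.0, 0.5, 1.0]
print(linear_decision(w, x, theta=1.0))  # 0.5 - 0.5 + 2.0 = 2.0 > 1.0 -> True
```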
Which hyperplane?
Which hyperplane?
For linearly separable training sets: there are infinitely many separating hyperplanes.
They all separate the training set perfectly . . .
. . . but they behave differently on test data.
Error rates on new data are low for some, high for others.
How do we find a low-error separator?
Perceptron: generally bad; Naive Bayes, Rocchio: ok; linear SVM: good
Linear classifiers: Discussion
Many common text classifiers are linear classifiers: Naive Bayes, Rocchio, logistic regression, linear support vector machines, etc.
Each method has a different way of selecting the separatinghyperplane
Huge differences in performance on test documents
Can we get better performance with more powerful nonlinear classifiers?
Not in general: A given amount of training data may suffice for estimating a linear boundary, but not for estimating a more complex nonlinear boundary.
How to combine hyperplanes for > 2 classes?
?
(e.g.: rank and select top-ranked classes)
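One common way to realize "rank and select top-ranked classes" is one-vs-rest: score the document with one weight vector per class and pick the highest-scoring class. A minimal sketch with invented per-class weights:

```python
# One-vs-rest combination of linear classifiers: each class has its own
# weight vector; rank classes by linear score and return the top one.
def one_vs_rest(class_weights, features):
    scores = {c: sum(w * x for w, x in zip(ws, features))
              for c, ws in class_weights.items()}
    return max(scores, key=scores.get)

# Hypothetical 2-feature example with three classes:
weights = {"sports": [2.0, 0.0], "politics": [0.0, 1.5], "tech": [1.0, 1.0]}
doc = [0.2, 0.9]
print(one_vs_rest(weights, doc))  # politics (scores: 0.4, 1.35, 1.1)
```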
What is clustering?
(Document) clustering is the process of grouping a set of documents into clusters of similar documents.
Documents within a cluster should be similar.
Documents from different clusters should be dissimilar.
Clustering is the most common form of unsupervised learning.
Unsupervised = there are no labeled or annotated data.
Classification vs. Clustering
Classification: supervised learning
Clustering: unsupervised learning
Classification: Classes are human-defined and part of the input to the learning algorithm.
Clustering: Clusters are inferred from the data without human input.
However, there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, . . .
Flat vs. Hierarchical clustering
Flat algorithms
Usually start with a random (partial) partitioning of docs into groups
Refine iteratively
Main algorithm: K-means
Hierarchical algorithms
Create a hierarchy
Bottom-up, agglomerative
Top-down, divisive
Flat algorithms
Flat algorithms compute a partition of N documents into a set of K clusters.
Given: a set of documents and the number K
Find: a partition into K clusters that optimizes the chosen partitioning criterion
Global optimization: exhaustively enumerate partitions, pick optimal one
Not tractable
Effective heuristic method: K-means algorithm
Set of points to be clustered
[figure: scatter plot of the example points]
K-means
Each cluster in K-means is defined by a centroid.
Objective/partitioning criterion: minimize the average squared difference from the centroid
Recall definition of centroid:
~µ(ω) = (1/|ω|) ∑_{~x∈ω} ~x
where we use ω to denote a cluster.
We try to find the minimum average squared difference by iterating two steps:
reassignment: assign each vector to its closest centroid
recomputation: recompute each centroid as the average of the vectors that were assigned to it in reassignment
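The two iterated steps can be sketched as follows. This is a minimal toy implementation under assumed conventions (2-D points as tuples, squared Euclidean distance, initial centroids sampled from the data, fixed seed), not the lecture's exact code:

```python
import random

# Minimal K-means sketch: alternate reassignment and recomputation
# until the centroids stop changing.
def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # reassignment: assign each point to its closest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
        # recomputation: each centroid becomes the average of its assigned
        # points (keep the old centroid if a cluster is empty)
        new = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
               if cl else centroids[j]
               for j, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# Two well-separated groups of three points each:
pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
cents, groups = kmeans(pts, k=2)
print(sorted(len(g) for g in groups))  # [3, 3]
```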
Random selection of initial cluster centers
[figure: the example points with two randomly chosen initial centers marked ×]
Centroids after convergence?
Centroids and assignments after convergence
[figure: the example points labeled by final cluster (1 or 2), with the converged centroids marked ×]
k-means clustering
Goal
cluster similar data points
Approach: given data points and distance function
select k centroids ~µ_a
assign ~x_i to closest centroid ~µ_a
minimize ∑_{a,i} d(~x_i, ~µ_a)
Algorithm:
1 randomly pick centroids, possibly from data points
2 assign points to closest centroid
3 average assigned points to obtain new centroids
4 repeat 2, 3 until nothing changes
Issues:
- takes superpolynomial time on some inputs
- not guaranteed to find optimal solution
+ converges quickly in practice
Outline
1 Recap
2 Evaluation
3 How many clusters?
4 Discussion
What is a good clustering?
Internal criteria
Example of an internal criterion: RSS in K -means
But an internal criterion often does not evaluate the actual utility of a clustering in the application.
Alternative: External criteria
Evaluate with respect to a human-defined classification
External criteria for clustering quality
Based on a gold standard data set, e.g., the Reuters collection we also used for the evaluation of classification
Goal: Clustering should reproduce the classes in the gold standard
(But we only want to reproduce how documents are divided into groups, not the class labels.)
First measure for how well we were able to reproduce the classes: purity
External criterion: Purity
purity(Ω, C) = (1/N) ∑_k max_j |ω_k ∩ c_j|
Ω = {ω1, ω2, . . . , ωK} is the set of clusters and C = {c1, c2, . . . , cJ} is the set of classes.
For each cluster ω_k: find the class c_j with the most members n_kj in ω_k
Sum all n_kj and divide by the total number of points
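Purity is easy to compute from two aligned label lists (cluster assignment and gold class per document). The helper below is a hypothetical sketch, run on the 17-point o/⋄/x example from the next slide (⋄ written as "d"):

```python
from collections import Counter

# Purity: for each cluster, count its most frequent gold class,
# sum these counts, and divide by the number of documents.
def purity(clusters, classes):
    n = len(classes)
    total = 0
    for k in set(clusters):
        members = [c for cl, c in zip(clusters, classes) if cl == k]
        total += Counter(members).most_common(1)[0][1]
    return total / n

# o/⋄/x example: cluster 1 = 5x + 1o, cluster 2 = 4o + 1x + 1⋄,
# cluster 3 = 3⋄ + 2x (17 documents in total).
clusters = [1] * 6 + [2] * 6 + [3] * 5
classes = ["x"] * 5 + ["o"] + ["o"] * 4 + ["x"] + ["d"] + ["d"] * 3 + ["x"] * 2
print(round(purity(clusters, classes), 2))  # 0.71
```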
Example for computing purity
[figure: 17 points of classes x, o, and ⋄ divided into cluster 1, cluster 2, and cluster 3]
To compute purity:
5 = max_j |ω1 ∩ c_j| (class x, cluster 1);
4 = max_j |ω2 ∩ c_j| (class o, cluster 2);
and 3 = max_j |ω3 ∩ c_j| (class ⋄, cluster 3).
Purity is (1/17) × (5 + 4 + 3) = 12/17 ≈ 0.71.
Rand index
Definition: RI = (TP + TN) / (TP + FP + FN + TN)
Based on a 2×2 contingency table of all pairs of documents:
                   same cluster          different clusters
same class         true positives (TP)   false negatives (FN)
different classes  false positives (FP)  true negatives (TN)
TP + FN + FP + TN is the total number of pairs.
There are C(N, 2) pairs for N documents.
Example: C(17, 2) = 136 in the o/⋄/x example
Each pair is either positive or negative (the clustering puts thetwo documents in the same or in different clusters) . . .
. . . and either “true” (correct) or “false” (incorrect): theclustering decision is correct or incorrect.
As an example, we compute RI for the o/⋄/x example. We first compute TP + FP. The three clusters contain 6, 6, and 5 points, respectively, so the total number of “positives” or pairs of documents that are in the same cluster is:
TP + FP = C(6, 2) + C(6, 2) + C(5, 2) = 40
Of these, the x pairs in cluster 1, the o pairs in cluster 2, the ⋄ pairs in cluster 3, and the x pair in cluster 3 are true positives:
TP = C(5, 2) + C(4, 2) + C(3, 2) + C(2, 2) = 20
Thus, FP = 40 − 20 = 20. FN and TN are computed similarly.
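The Rand index can be computed by brute-force enumeration of all C(N, 2) document pairs, classifying each as TP, FP, FN, or TN exactly as in the table above. A hypothetical sketch on the same o/⋄/x example:

```python
from itertools import combinations

# Rand index: a pair is "positive" if both documents share a cluster,
# and "true" if the clustering decision matches the gold classes.
def rand_index(clusters, classes):
    tp = fp = fn = tn = 0
    for i, j in combinations(range(len(classes)), 2):
        same_cluster = clusters[i] == clusters[j]
        same_class = classes[i] == classes[j]
        if same_cluster and same_class:
            tp += 1
        elif same_cluster:
            fp += 1
        elif same_class:
            fn += 1
        else:
            tn += 1
    return (tp + tn) / (tp + fp + fn + tn)

# Same 17-document o/⋄/x example as for purity (⋄ written as "d"):
clusters = [1] * 6 + [2] * 6 + [3] * 5
classes = ["x"] * 5 + ["o"] + ["o"] * 4 + ["x"] + ["d"] + ["d"] * 3 + ["x"] * 2
print(round(rand_index(clusters, classes), 2))  # 0.68, i.e. 92/136
```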
Rand measure for the o/⋄/x example
                   same cluster   different clusters
same class         TP = 20        FN = 24
different classes  FP = 20        TN = 72
RI is then (20 + 72)/(20 + 20 + 24 + 72) = 92/136 ≈ 0.68.
Two other external evaluation measures
Two other measures
Normalized mutual information (NMI)
How much information does the clustering contain about the classification?
Singleton clusters (number of clusters = number of docs) have maximum MI
Therefore: normalize by entropy of clusters and classes
F measure
Like Rand, but “precision” and “recall” can be weighted
Evaluation results for the o/⋄/x example
                    purity   NMI    RI     F5
lower bound         0.0      0.0    0.0    0.0
maximum             1.0      1.0    1.0    1.0
value for example   0.71     0.36   0.68   0.46
All four measures range from 0 (really bad clustering) to 1 (perfect clustering).
Outline
1 Recap
2 Evaluation
3 How many clusters?
4 Discussion
How many clusters?
Either: Number of clusters K is given.
Then partition into K clusters
K might be given because there is some external constraint.
Example: In the case of Scatter-Gather, it was hard to show more than 10–20 clusters on a monitor in the 90s.
Or: Finding the “right” number of clusters is part of the problem.
Given docs, find K for which an optimum is reached.
How to define “optimum”?
We can’t use RSS or average squared distance from centroid as the criterion: it always chooses K = N clusters.
Exercise
Suppose we want to analyze the set of all articles published by a major newspaper (e.g., New York Times or Süddeutsche Zeitung) in 2008.
Goal: write a two-page report about what the major news stories in 2008 were.
We want to use K-means clustering to find the major news stories.
How would you determine K?
Simple objective function for K (1)
Basic idea:
Start with 1 cluster (K = 1)
Keep adding clusters (= keep increasing K)
Add a penalty for each new cluster
Trade off cluster penalties against average squared distance from centroid
Choose K with best tradeoff
Simple objective function for K (2)
Given a clustering, define the cost for a document as the (squared) distance to its centroid
Define total distortion RSS(K) as the sum of all individual document costs (corresponds to average distance)
Then: penalize each cluster with a cost λ
Thus for a clustering with K clusters, the total cluster penalty is Kλ
Define the total cost of a clustering as distortion plus total cluster penalty: RSS(K) + Kλ
Select the K that minimizes RSS(K) + Kλ
Still need to determine good value for λ . . .
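The selection rule fits in a few lines. The RSS values below are invented for illustration; in practice they would come from running K-means once per candidate K:

```python
# Pick the K that minimizes total cost RSS(K) + K * lambda.
def best_k(rss_by_k, lam):
    return min(rss_by_k, key=lambda k: rss_by_k[k] + k * lam)

# Hypothetical distortions: RSS drops quickly, then flattens.
rss_by_k = {1: 100.0, 2: 40.0, 3: 20.0, 4: 15.0, 5: 13.0}
print(best_k(rss_by_k, lam=10.0))  # 3 (total costs: 110, 60, 50, 55, 63)
```

A larger λ penalizes clusters more heavily and selects a smaller K; λ itself still has to be chosen, as the slide notes.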
Finding the “knee” in the curve
[figure: residual sum of squares (y-axis, 1750–1950) vs. number of clusters (x-axis, 2–10)]
Pick the number of clusters where the curve “flattens”. Here: 4 or 9.
Outline
1 Recap
2 Evaluation
3 How many clusters?
4 Discussion
Discussion 6
Jeffrey Dean and Sanjay Ghemawat, MapReduce: Simplified Data Processing on Large Clusters. Usenix OSDI ’04, 2004.
http://www.usenix.org/events/osdi04/tech/full_papers/dean/dean.pdf
See also (Jan 2009):
http://michaelnielsen.org/blog/write-your-first-mapreduce-program-in-20-minutes/
part of lectures on the “google technology stack”:
http://michaelnielsen.org/blog/lecture-course-the-google-technology-stack/
(including PageRank, etc.)
See Recap Lecture 22 for slides