Towards Semantic Embedding in Visual Vocabulary
The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition
Session: Object Recognition II: Using Language, Tuesday, June 15, 2010
Rongrong Ji1, Hongxun Yao1, Xiaoshuai Sun1, Bineng Zhong1, and Wen Gao1,2
1 Visual Intelligence Laboratory, Harbin Institute of Technology
2 School of Electronic Engineering and Computer Science, Peking University
Overview
✴ Problem Statement
✴ Building patch-labeling correspondence
✴ Generative semantic embedding
✴ Experimental comparisons
✴ Conclusions
Introduction
• Building visual codebooks
– Quantization-based approaches: K-Means, Vocabulary Tree, Approximate K-Means, etc.
– Feature-indexing-based approaches: K-D Tree, R-Tree, Ball Tree, etc.
• Refining visual codebooks
– Topic model decompositions: pLSA, LDA, GMM, etc.
– Spatial refinement: visual pattern mining, discriminative visual phrases, etc.
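The quantization-based route above can be sketched with a plain k-means loop; this is a minimal illustration (not the Vocabulary Tree itself), and the toy 2-D points stand in for 128-D SIFT descriptors:

```python
def kmeans_codebook(points, k, iters=20):
    """Minimal k-means: quantize local descriptors into k visual words (cluster centers)."""
    centers = [points[i] for i in range(k)]  # deterministic init: first k descriptors
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each descriptor goes to its nearest center under L2.
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
        # Update step: each visual word moves to the mean of its members.
        for j in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return centers, assign

# Toy usage: two well-separated 2-D blobs collapse into two visual words.
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centers, assign = kmeans_codebook(pts, k=2)
```

Hierarchical variants (the Vocabulary Tree) apply this recursively within each cluster to scale to millions of descriptors.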
• With the prosperity of Web communities, user tags come free with shared photos
[Figure: Flickr photos with their user tags, motivating our contribution]
• Problems in achieving this goal
– Supervision only at the image level
– Correlative semantic labels
– Model generality
Example image-level tag: "A traditional San Francisco street view with cars and buildings"
Correlated labels: "Flower"? "Rose"?
• Our contributions
– Publish a ground-truth patch-labeling set
http://vilab.hit.edu.cn/~rrji/index_files/SemanticEmbedding.htm
– A generalized semantic embedding framework: easy to deploy into different codebook models
– Modeling label correlations: model correlative tagging in semantic embedding
• The proposed framework
Build patch-level correspondence → Supervised visual codebook construction
Building Patch-Labeling Correspondence
• Collecting over 60,000 Flickr photos
– For each photo: DoG detection + SIFT descriptors
– "Face" labels (for instance)
• Purify the patch-labeling correspondences
– Density-Diversity Estimation (DDE)
– Formulation
• For a semantic label s_i, its initial correspondence set is <D_{s_i}, s_i>, where D_{s_i} = {d_1, d_2, ..., d_{n_i}}
– Patches extracted from images with label s_i
– le_{ij}: a correspondence from label s_i to local patch d_j
• DDE purifies the le links in <D_{s_i}, s_i>
• Density
– For a given d_l, Density Den_{d_l} reveals its representability for s_i:
Den_{d_l} = (1/m) Σ_{j=1}^{m} exp(−||d_l − d_j||²_{L2})
(average Gaussian affinity over the m nearest neighbors)
• Diversity
– For a given d_l, Diversity Div_{d_l} reveals its unique score for s_i:
Div_{d_l} = (n_m / n_i) · ln(n_i / n_m)
– n_m: number of distinct images in the neighborhood; n_i: total number of images with label s_i
• Purification
D_i^{Purify} = { d_j | DDE_{d_j} ≥ T }, where DDE_{d_j} = Den_{d_j} × Div_{d_j}
– A high threshold T favors precision over recall
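A minimal sketch of the DDE filter under the definitions above; the exact diversity form and the threshold value are illustrative assumptions:

```python
import math

def dde_score(patch, neighbors, n_m, n_i):
    """DDE = Density * Diversity for one candidate patch-label correspondence."""
    # Density: mean Gaussian affinity exp(-||d_l - d_j||^2) over the m nearest neighbors.
    m = len(neighbors)
    den = sum(math.exp(-sum((a - b) ** 2 for a, b in zip(patch, q)))
              for q in neighbors) / m
    # Diversity (assumed entropy-style form): low when the neighbors come from
    # only a few (n_m) of the n_i images carrying the label.
    x = n_m / n_i
    div = x * math.log(1.0 / x)
    return den * div

# A patch densely surrounded by neighbors from several images scores high;
# an isolated patch whose neighbors stem from one image scores near zero.
tight = dde_score((0.0, 0.0), [(0.1, 0.0), (0.0, 0.1)], n_m=2, n_i=10)
loose = dde_score((0.0, 0.0), [(3.0, 0.0), (0.0, 3.0)], n_m=1, n_i=10)
kept = [s for s in (tight, loose) if s > 0.05]  # high threshold T: precision over recall
```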
• Case study: "Face" label (before DDE)
• Case study: "Face" label (after DDE)
✴ Generative Hidden Markov Random Field
✴ A Hidden Field for semantic modeling
✴ An Observed Field for local patch quantization
Generative Semantic Embedding
• Hidden Field: S = {s_i}, i = 1…m
– Each s_i produces correspondence links (le) to a subset of patches in the Observed Field
– Links l_{ij} denote semantic correlations between s_i and s_j
WordNet::Similarity (Pedersen et al., AAAI 2004)
• Observed Field: D = {d_i}, i = 1…n
– Any two nodes follow a visual metric (e.g., L2)
– Once there is a link le_{ij} between d_i and s_j, we constrain d_i by s_j from the Hidden Field
• In the ideal case
– Each d_i is conditionally independent given its neighbors in the Hidden Field:
P(D|S) = Π_i { P(d_i | s_j) | P(d_i | s_j) ≠ 0 }
– The feature set D is regarded as (partially) generated from the Hidden Field
– Formalizing the clustering procedure
• Assign a unique cluster label c_i to each d_i
• Hence, D is quantized into a codebook W = {w_k}, k = 1…K, with corresponding features V = {v_k}, k = 1…K
• Cost for a codebook candidate C:
P(C|D) = P(C) P(D|C) / P(D)
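Since P(D) does not depend on C, the negative log of this cost splits into a visual term and a semantic term; a minimal sketch, where the weight lam is an assumed trade-off knob:

```python
def codebook_cost(points, assign, centers, sem_energy, lam=1.0):
    """-log P(C|D) up to a constant: visual distortion (standing in for -log P(D|C))
    plus a semantic energy (standing in for -log P(C))."""
    distortion = sum(
        sum((a - b) ** 2 for a, b in zip(p, centers[c]))
        for p, c in zip(points, assign)
    )
    return distortion + lam * sem_energy

# Two points quantized onto one word at their midpoint, with no semantic violations:
cost = codebook_cost([(0.0, 0.0), (1.0, 0.0)], [0, 0], [(0.5, 0.0)], sem_energy=0.0)
```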
– P(D) is a constant; P(C) acts as the semantic constraint and P(D|C) as the visual distortion
• Semantic Constraint P(C)
– Define an MRF on the Observed Field
– For a quantizer assignment C, its probability can be expressed as a Gibbs distribution from the Hidden Field
• That is
– Two data points d_i and d_j in the Observed Field contribute to w_k if and only if c_i = c_j and their labels s_x and s_y are linked (l_xy) in the Hidden Field
[Figure: d_i, d_j in the Observed Field sharing c_i = c_j, with linked labels s_x, s_y in the Hidden Field]
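This pairwise rule can be sketched as an energy: patch pairs that share a visual word but whose source labels are not linked in the Hidden Field are penalized. The label names and the link set below are illustrative:

```python
def semantic_energy(assign, labels, linked):
    """Count patch pairs sharing a visual word (c_i == c_j) whose source labels
    are NOT linked in the Hidden Field; a Gibbs-style semantic cost."""
    cost = 0
    for i in range(len(assign)):
        for j in range(i + 1, len(assign)):
            if assign[i] == assign[j] and not linked(labels[i], labels[j]):
                cost += 1
    return cost

# Toy Hidden Field: "flower" and "rose" are linked; "car" is linked to neither.
links = {frozenset(("flower", "rose"))}
linked = lambda a, b: a == b or frozenset((a, b)) in links

good = semantic_energy([0, 0, 1], ["flower", "rose", "car"], linked)  # no violations
bad = semantic_energy([0, 0, 0], ["flower", "rose", "car"], linked)   # "car" mixed in
```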
• Visual Constraint P(D|C)
– Whether the codebook C is visually consistent with the current data D
• Visual distortion in quantization
• Overall cost
– Finding the MAP of P(C|D) is equivalent to minimizing an overall cost combining:
• the visual constraint P(D|C)
• the semantic constraint P(C)
• EM solution
– E step: assign local patches to the closest clusters
– M step: update the visual word centers
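A toy version of this EM loop, adding a semantic penalty to the visual distance in the E step; the penalty weight lam and the sequential (ICM-style) updates are simplifications, not the paper's exact solver:

```python
def gse_em(points, labels, linked, k, lam=1.0, iters=10):
    """E step: assign each patch to the word minimizing visual distortion plus a
    penalty for joining members with unlinked labels; M step: recompute centers."""
    centers = [points[i] for i in range(k)]       # deterministic init: first k patches
    assign = [i % k for i in range(len(points))]
    for _ in range(iters):
        for i, p in enumerate(points):            # E step (sequential updates)
            def cost(j):
                vis = sum((a - b) ** 2 for a, b in zip(p, centers[j]))
                sem = sum(1 for t, c in enumerate(assign)
                          if t != i and c == j and not linked(labels[i], labels[t]))
                return vis + lam * sem
            assign[i] = min(range(k), key=cost)
        for j in range(k):                        # M step: mean of each word's members
            members = [points[i] for i in range(len(points)) if assign[i] == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return centers, assign

# Toy run: linked labels ("flower"/"rose") cluster together; "car" patches stay apart.
pts = [(0.0, 0.0), (0.2, 0.0), (5.0, 5.0), (5.2, 5.0)]
labels = ["flower", "rose", "car", "car"]
linked = lambda a, b: a == b or frozenset((a, b)) == frozenset(("flower", "rose"))
centers, assign = gse_em(pts, labels, linked, k=2)
```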
• Model generalization
– To supervised codebooks with the label-independence assumption
– To unsupervised codebooks: set all l_{ij} = 0
Experimental Comparisons
Case study of ratios between inter-class and intra-class distance, with and without semantic embedding, on the Flickr dataset.
MAP@1 comparisons between our GSE model and Vocabulary Tree [1] and GNP [34] on the Flickr 60,000-image database.
Confusion-table comparisons on the PASCAL VOC dataset with the method in [24].
Conclusion
• Our contributions
– Propagating Web labels from images to local patches to supervise codebook quantization
– A generalized semantic embedding framework for supervised codebook building
– Modeling correlative semantic labels in supervision
• Future work
– Adapt one supervised codebook to different tasks
– Push supervision forward into local feature detection and extraction
Thank you!