
Automatic Image Co-segmentation Using Geometric Mean Saliency

Koteswar Rao Jerripothula, Jianfei Cai, Fanman Meng, Junsong Yuan

Nanyang Technological University

Proposed Method

1) Saliency Enhancement: A local-contrast-based saliency map is added to the global-contrast-based saliency map, and the result is brightened to avoid over-penalization in step 4.

2) Subgroup Formation: The enhanced saliency maps are used as weights for a saliency-weighted GIST descriptor, and the images are clustered into subgroups with the k-means algorithm (see the sketch after this list).

3) Pixel Correspondence: The enhanced saliency maps are used as masks for masked dense SIFT correspondence, which produces the warped saliency maps.

4) Geometric Mean Saliency: The geometric mean function is used to fuse each image's own saliency map with all the warped saliency maps.

5) Image Segmentation: The resulting GMS map is first regularized at the superpixel level, and then foreground and background seeds are selected from it for GrabCut segmentation.
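As a rough illustration of steps 1 and 2, the sketch below combines a local-contrast and a global-contrast saliency map, brightens the sum with gamma correction, and clusters images by k-means over a saliency-weighted descriptor. The gamma value, the block-energy stand-in for the GIST descriptor (`weighted_gist`), and the number of clusters are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def enhance_saliency(local_sal, global_sal, gamma=0.6):
    """Step 1: add local- and global-contrast saliency and brighten the result
    (gamma < 1 brightens; the exact enhancement scheme is an assumption)."""
    s = local_sal.astype(np.float64) + global_sal.astype(np.float64)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)   # normalize to [0, 1]
    return s ** gamma                                # brighten to avoid over-penalty in step 4

def weighted_gist(gray, sal, grid=4):
    """Step 2 (toy stand-in for GIST): block-wise gradient energy weighted by saliency."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy) * sal                     # saliency-weighted gradient magnitude
    h, w = mag.shape
    return np.array([mag[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
                     for i in range(grid) for j in range(grid)])

def form_subgroups(grays, enhanced_sals, k=3):
    """Cluster the images into k subgroups with k-means over the weighted descriptors."""
    feats = np.stack([weighted_gist(g, s) for g, s in zip(grays, enhanced_sals)])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
```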

Introduction

Goal: To automatically segment out the common object from a set of similar images, a task known as co-segmentation.

Challenges:

• Co-segmentation may not always perform better than single-image segmentation.

• Complicated co-labeling and a large number of parameters make co-segmentation difficult as image diversity increases.

This Paper: Single-image segmentation is performed, but with a combined saliency map obtained by fusing each image's own saliency map with the warped saliency maps of other images.

The Idea: The saliency of a weakly salient common object can be boosted by the saliency of strongly salient common objects in other images, as illustrated in the figure below.


Formulation

Let $\{I_1, I_2, \ldots, I_n\}$ and $\{M_1, M_2, \ldots, M_n\}$ be the set of images and the corresponding enhanced saliency maps in a subgroup, respectively. $U_i^j$ is the warped saliency map of $I_j$ for $I_i$, such that $U_i^j(p) = M_j(p')$, where $p'$ is the corresponding pixel in $I_j$ for pixel $p$ in $I_i$ and $j \in \{1, \ldots, n\}$.

$$GMS_i(p) = \Big( M_i(p) \prod_{j \neq i} U_i^j(p) \Big)^{\frac{1}{n}}$$

$$p \in \begin{cases} F_i, & \text{if } GMS_i(p) \geq \lambda\,\theta_i \\ B_i, & \text{if } GMS_i(p) < \lambda\,\theta_i \end{cases}$$

where $\lambda$ is a parameter, $\theta_i$ is the global threshold value of $GMS_i$, and $F_i$ and $B_i$ are the foreground and background seeds.
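A minimal sketch of the fusion and seed selection above, assuming the warped maps $U_i^j$ are already available. The use of Otsu's method for the global threshold $\theta_i$ is an assumption (the poster only says "global threshold value"), and superpixel regularization and GrabCut are omitted.

```python
import numpy as np
from skimage.filters import threshold_otsu

def gms_map(own_sal, warped_sals):
    """Fuse an image's own enhanced saliency map M_i with the warped maps U_i^j
    from the other images in its subgroup using the geometric mean."""
    maps = np.stack([own_sal] + list(warped_sals)).astype(np.float64)  # n maps in total
    maps = np.clip(maps, 1e-6, 1.0)              # avoid zeros collapsing the geometric mean
    return np.exp(np.log(maps).mean(axis=0))     # (product of n maps)^(1/n)

def select_seeds(gms, lam=0.97):
    """Pick foreground (F_i) and background (B_i) seeds from the GMS map.
    Otsu's threshold stands in for the global threshold theta_i -- an assumption."""
    theta = threshold_otsu(gms)
    fg = gms >= lam * theta
    bg = gms < lam * theta
    return fg, bg
```

The resulting seed masks would then initialize GrabCut (e.g. marking $F_i$ pixels as sure foreground and $B_i$ pixels as sure background) after superpixel-level regularization of the GMS map.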

Experimental Results

Qualitative comparison: Source image, Multi-class, Object Discovery, Our Results.

Sample results from the iCoseg dataset.

Comparison with others on the MSRC dataset.

Class-wise comparison with the state-of-the-art.

Quantitative comparison with other methods on various datasets, obtained by tuning the parameter λ.

Quantitative results on various datasets using the default value of the parameter, λ = 0.97.

An Interesting Experiment:

• Mixed all the categories of MSRC into one set and applied the proposed method with the default λ = 0.97.

• Result: J = 0.676, P = 87.1%.

• Demonstrates the diversity that the proposed method can handle.

Evaluation metrics used:

Jaccard Similarity (J): intersection-over-union score between the predicted and ground-truth foreground.

Precision (P): percentage of pixels correctly labelled.
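A minimal sketch of the two metrics as defined above, computed from binary predicted and ground-truth masks:

```python
import numpy as np

def evaluate(pred_mask, gt_mask):
    """Jaccard Similarity (J): intersection over union of the foreground regions.
    Precision (P): percentage of all pixels whose label matches the ground truth."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    j = np.logical_and(pred, gt).sum() / union if union > 0 else 1.0
    p = 100.0 * (pred == gt).mean()
    return j, p
```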

Sample Results from Coseg-Rep dataset

References:

[Distributed] G. Kim, E. Xing, L. Fei-Fei, and T. Kanade. Distributed cosegmentation via submodular optimization on anisotropic diffusion. ICCV 2011.

[Discriminative] A. Joulin, F. Bach, and J. Ponce. Discriminative clustering for image cosegmentation. CVPR 2010.

[Multi-class] A. Joulin, F. Bach, and J. Ponce. Multi-class cosegmentation. CVPR 2012.

[Object Discovery] M. Rubinstein, A. Joulin, J. Kopf, and C. Liu. Unsupervised joint object discovery and segmentation in internet images. CVPR 2013.

[Cosketch] J. Dai, Y. Wu, J. Zhou, and S. Zhu. Cosegmentation and cosketch by unsupervised learning. ICCV 2013.

More Results

An example of weakly salient objects being helped by salient common objects (rows: Image, Initial saliency map, Our Results; the weakly salient common object here is the car).

Even while using the default setting, our results are comparable to the state-of-the-art results obtained by parameter tuning.

Flowchart of the proposed method.