
Local extrema co-occurrence pattern for color and texture image retrieval

Manisha Verma a,*, Balasubramanian Raman b, Subrahmanyam Murala c

a Department of Mathematics, Indian Institute of Technology Roorkee, Uttarakhand, India
b Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, Uttarakhand, India
c Department of Electrical Engineering, Indian Institute of Technology Ropar, Rupnagar, Punjab, India

Article info

Article history:
Received 1 July 2014
Received in revised form 14 February 2015
Accepted 6 March 2015
Communicated by Xiaofei He
Available online 14 March 2015

Keywords:
Local extrema co-occurrence pattern
Local extrema patterns
Gray level co-occurrence matrix
Corel database
MIT VisTex database
STex database

Abstract

A real world problem of image retrieval and searching is considered in this paper. In the modern generation, managing images from a large storage medium is not a straightforward job. Many researchers have worked on texture features and produced diverse feature descriptors based on uniform, rotation invariant, edge and directional properties. However, most of them convert the relationship of the center pixel and the boundary pixels into a local pattern, and use a histogram to represent the local pattern as a feature vector. In this work, we propose a new image retrieval technique, local extrema co-occurrence patterns (LECoP), using the HSV color space. The HSV color space is used in this method to utilize the color, intensity and brightness of images. Local extrema patterns are applied to define the local information of the image, and the gray level co-occurrence matrix is used to obtain the co-occurrence of LEP map pixels. The local extrema co-occurrence pattern extracts the local directional information from the local extrema pattern and converts it into a well-formed feature vector with the use of the gray level co-occurrence matrix. The presented method is tested on five standard databases, namely Corel, MIT VisTex and STex, in which the Corel database includes the Corel-1k, Corel-5k and Corel-10k databases. Also, this algorithm is compared with previously proposed methods, and results in terms of precision and recall are shown in this work.

© 2015 Elsevier B.V. All rights reserved.

1. Introduction

Content based image retrieval (CBIR) is a hybrid research area, which needs knowledge of both mathematics and computer science for an efficient image retrieval system. Image retrieval is based on image matching, and image matching is performed by feature matching. Image features may contain color information, textural distribution information, object shapes, objects' spatial orientation, etc.

1.1. Motivation

The expansion of online and offline images in various areas, e.g., education, news, entertainment, etc., makes the retrieval of images both fascinating and important. Searching, browsing and retrieving images manually from a huge database is an impractical task. Text based image retrieval is the traditional searching method, which deals with keywords and metadata of the image. Later, content based image retrieval came into the picture; it handles the genuine content of the image rather than its metadata. Fast and efficient CBIR methods are the requirement of such systems these days.

Many local patterns for image retrieval have been proposed by researchers, but most of the local patterns consider the frequency of each pattern in the image and treat it, via a histogram, as a feature descriptor. However, frequency gives information only about the occurrence of each pattern alone; it does not reveal any information regarding the mutual occurrence of patterns in the image. The mutual occurrence of patterns is considered in this work. Also, earlier works with local patterns have treated color information and texture patterns as individual features. In this work, the texture feature of the local pattern has been extracted using a color space component, not individually.

1.2. Related work

Texture and color analysis are the major fields in the image retrieval process. Texture depends on the local intensity of the image; hence, statistical features and local neighborhood features have been developed for texture patterns. Another low level feature is color, which is represented as the distribution of intensity in different color channels; therefore the color histogram, color correlogram, color coherence vector, etc. were proposed as color feature descriptors.


http://dx.doi.org/10.1016/j.neucom.2015.03.015 · 0925-2312/© 2015 Elsevier B.V. All rights reserved.

* Corresponding author. E-mail address: [email protected] (M. Verma).

Neurocomputing 165 (2015) 255–269


The gray level co-occurrence matrix (GLCM) was first introduced by Haralick, and it is a very popular method for extracting statistical features of an image [1]. The GLCM is a matrix that depends on the co-occurrence of every two pixels in the image. Haralick calculated statistical features of the GLCM for texture feature extraction. The GLCM was originally applied directly to the image to calculate the features, but Zhang et al. used the edge image to extract more precise information using the GLCM in texture images [2]. They applied the Prewitt edge detector in four directions, calculated the GLCM of the edge images, and used statistical features of the co-occurrence matrices for texture image retrieval. The GLCM was extended to single and multi-channel co-occurrence matrices for the RGB and LUV color channels, and applied for color texture image retrieval [3]. Partio et al. used the gray level co-occurrence matrix with statistical features for rock texture image retrieval [4]. Gaussian smoothing and pyramid representation were utilized for extracting multi-scale images, the GLCM was applied to the obtained multi-scale images, and statistical features were calculated for image retrieval by Siqueira et al. [5]. Further, the GLCM has been broadly used for different applications [6–8].

Swain and Ballard presented the idea of the color histogram, and a distance measure for image matching via histograms [9]. Two new schemes were presented by Stricker and Orengo for color indexing: the first holds the complete color distribution, and the second contains only major features instead of the full distribution [10]. For both color and texture information, the standard wavelet transform and the Gabor wavelet transform were combined with the color histogram and applied for image retrieval [11]. Further, new color features have been proposed using co-occurrence and clustering. Lin et al. proposed three features, namely the color co-occurrence matrix (CCM), difference between pixels of scan pattern (DBPSP) and color histogram for K-mean (CHKM), in which CCM and DBPSP are related to color and texture, and CHKM corresponds to the color feature [12]. An integrated color and intensity co-occurrence matrix has been proposed for color and texture features; the composition of color and texture features is computed in it rather than their separation. Instead of RGB, the HSV color space is used for color representation, and this method is applied for image retrieval in large, labeled and unlabeled image databases [13]. The color histogram considers the frequency of each intensity, but it does not handle the spatial correlation of colors. The color correlogram was proposed to consider the spatial correlation of color intensities in the image [14]. Again, the color correlogram was used as a feature vector, and a relevance feedback technique was applied for supervised learning in two ways, first by improving the query image and second by learning the distance metric, to improve image retrieval results [15]. The color coherence vector was introduced for image retrieval, which uses the coherence and incoherence of image pixel colors, and was compared with the color histogram for image retrieval [16]. Park et al. applied an artificial neural network technique for image clustering for fast image retrieval [17]. Gaussian mixture vector quantization (GMVQ) was utilized for better quantization of the color histogram for image retrieval [18]. The motif co-occurrence matrix has been proposed for image retrieval, which constructs a 3D matrix that corresponds to local statistics of the image [19]. Murala et al. proposed a method called the modified color motif co-occurrence matrix (MCMCM) for image retrieval, an extension of the motif co-occurrence matrix that uses relationships between color channels [20]. Again, the motif matrix was used with a texton histogram in the HSV color space in [21].

Ojala et al. presented local binary patterns (LBP), which have proved their excellence as a standard feature descriptor in many areas [22]. The local binary pattern was modified into uniform and rotation invariant local binary patterns [23]. A translation, rotation and scale invariant method using color and edges has been proposed for color–texture and natural image retrieval [24]. LBP compares all neighboring pixels with the center pixel, but Heikkilä et al. presented center-symmetric local binary patterns, which compute the differences in four directions [25]. Tan and Triggs proposed the local ternary pattern (LTP), which compares the neighboring pixels and the center pixel with a threshold interval and assigns a ternary pattern (1, 0, −1). Further, it is converted into two binary patterns (0, 1), and this method is applied to face recognition [26]. LBP and LTP treat all neighboring pixels evenly. A direction based method called the directional local extrema pattern (DLEP) has been proposed for directional edge information in the 0°, 45°, 90° and 135° directions, and applied for image retrieval [27]. The local extrema pattern has been proposed by Murala et al., and a joint histogram of color and LEP has been applied for object tracking [28]. Multi-resolution local extrema patterns have been proposed, in which the LEP is obtained from the wavelet domain instead of the original images [29]. A moment based local binary pattern has been proposed, in which the LBP is derived from momentgrams, and the momentgrams are constructed from moment invariants of the original image [30]. Zhang et al. proposed the local derivative pattern (LDP) [31], a higher order local binary pattern, and applied it for face recognition. Local ternary co-occurrence patterns (LTCoP) have been proposed for medical image retrieval, utilizing the properties of LTP and LDP [32]. A method based on edge distribution using a local pattern was proposed, called the local maximum edge binary pattern (LMEBP). It is obtained by considering the magnitudes of the local differences between the center pixel and the reference eight neighborhood pixels in descending order, and the LMEBP is obtained for all eight neighbor pixels. LMEBP was applied for image retrieval and object tracking [33]. Further, LMEBP was extended by Jasmine and Kumar [34], in which only the first three uniform and rotation invariant LMEBPs were considered as the feature vector; an HSV color histogram was also used, and finally a joint histogram was constructed for image retrieval. After the local binary pattern and local ternary pattern, Murala et al. proposed local tetra patterns, which take advantage of the vertical and horizontal directional neighborhoods of each pixel and construct a tetra pattern, which is again converted into binary patterns [35]. They combined it with the Gabor transform and applied it for image retrieval. Jacob et al. extended local tetra patterns to the RGB color channels: for each center pixel of a particular color channel, the other color channels were used for the horizontal and vertical direction pixels, and the method was applied for image retrieval [36].

1.3. Main contribution

In this paper, the authors present a new method for image retrieval based on color and texture features. The main contributions of the current paper are as follows:

1. A new feature descriptor, the local extrema co-occurrence pattern (LECoP), has been proposed.

The matrices of the Fig. 1 example are reproduced below. (a) Original 5×5 matrix:

3 1 1 0 1
0 1 2 2 1
1 0 1 2 1
3 1 3 1 3
0 1 1 3 1

(b) GLCM of (a), rows and columns indexed by the pixel values 0–3:

      0  1  2  3
  0   0  4  0  0
  1   2  2  2  3
  2   0  2  1  0
  3   0  4  0  0

Fig. 1. Gray level co-occurrence matrix calculation example. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)


2. Color information has been extracted from the hue and saturation components of the HSV color space, and LECoP is calculated for the value component.

3. A joint histogram is acquired, and matching is performed for image retrieval.

4. The proposed method has been applied on the Corel-1k, Corel-5k, Corel-10k, MIT VisTex and STex color texture databases, and compared with some existing methods.

The presented paper is organized in the following manner: Section 1 gives a brief introduction, which includes the motivation, related work and main contributions of the paper. Section 2 covers the introduction of the color space, the GLCM, and local patterns. The proposed method with the similarity measure and framework is explained in Section 3. Section 4 presents experimental results and discussions. Finally, the conclusion is drawn in Section 5.

2. Color and texture descriptor

2.1. Color space

In general, images are of three types. First is the binary image, which contains only two intensities, for black and white pixels. Second is the gray scale image, which can have a range of intensities, but only in one band. Next is the color image, which has multiple bands, each containing a range of intensities. The most generally used, RGB images have three bands, called red, green and blue. In RGB images, the three bands contain information about the red, green and blue colors, and hence it is called the RGB color space. The other one is the HSV color space, which stands for hue, saturation and value.

The hue component is directly related to color, by which a human eye can distinguish different colors; hue is defined as an angle. Saturation represents the brightness and lightness of the color component, and value shows the intensity of a color, decomposed from the color information of the image. Hue represents an angle from 0° to 360°, and each degree occupies a different color. Saturation is numbered from 0 to 1; as it goes from low to high, the intensity of the color increases. Value also varies from 0 to 1. Many researchers have shown that the HSV color space is more appropriate than the RGB space, as in the HSV space the color, intensity and brightness can be extracted individually from images [13,37,38]. In the presented work, images are converted from the RGB space to the HSV color space.
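As a quick illustration of the hue-angle convention, Python's standard colorsys module performs the RGB-to-HSV conversion (a minimal sketch; the paper's own conversion follows [39], and note that colorsys returns hue normalized to [0, 1) rather than in degrees):

```python
import colorsys

# Pure red: hue at 0 degrees, full saturation, full value
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h * 360, s, v)   # 0.0 1.0 1.0

# Pure green sits one third of the way around the hue circle (~120 degrees)
h, s, v = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)
print(round(h * 360))  # 120
```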

2.2. Gray level co-occurrence matrix

Haralick first proposed the gray level co-occurrence matrix, and used 14 statistical features of it. It is a popular statistical method for computing image features. It is based on counting the occurrences of pixel pairs which exist at a particular distance and in a specific direction. The matrix elements correspond to the relative occurrence of gray value pixel pairs. The size of the matrix depends on the maximum gray level intensity in the image.

A small example of a GLCM calculation is shown in Fig. 1. In Fig. 1(a), the original matrix is given; in Fig. 1(b), the GLCM is calculated for matrix (a). In the GLCM, the topmost row and the leftmost column (green cells) are the pixel values present in the original matrix, i.e., 0, 1, 2, 3. For each pair, for example, (0,0), (0,1), (0,2), …, (3,3), the co-occurrence is counted. In matrix (a), a pixel with intensity '1' occurs with a pixel '0' as the pair (1,0) in the zero-degree direction (horizontal pixel pair) at distance one (adjacent pairs) exactly two times, as shown with green circles around them in the figure. The GLCM value for the pixel pair (1,0) is indicated with an arrow in the GLCM; hence, at position (1,0) of the GLCM, the number '2' occurs. Similarly, the other elements of the GLCM are calculated.
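The construction just described can be sketched in a few lines of Python (a minimal sketch using NumPy; the function name is ours, and only the 0°, distance-1 case from the example is implemented):

```python
import numpy as np

def glcm_0deg(img, levels):
    """Count horizontally adjacent pixel pairs (0-degree direction, distance 1)."""
    m = np.zeros((levels, levels), dtype=int)
    for row in img:
        for left, right in zip(row[:-1], row[1:]):
            m[left, right] += 1
    return m

# The 5x5 matrix of Fig. 1(a)
img = np.array([[3, 1, 1, 0, 1],
                [0, 1, 2, 2, 1],
                [1, 0, 1, 2, 1],
                [3, 1, 3, 1, 3],
                [0, 1, 1, 3, 1]])
g = glcm_0deg(img, 4)
print(g[1, 0])   # 2 -- the pair (1, 0) occurs twice, as stated in the text
```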

2.3. Local patterns

2.3.1. Local binary patterns
Each pixel in the image (except the boundary pixels) is considered as a center pixel for the computation of the local binary pattern. The LBP operator takes a neighborhood around each pixel and finds the local pattern of that pixel, which is computed by thresholding every neighborhood pixel against the center pixel. The mathematical representation of LBP is given as follows:

LBP_{p,r} = \sum_{i=1}^{p} P_1(I_i - I_c) \times 2^{i-1} \qquad (1)

Fig. 2. Local binary pattern and local extrema pattern examples. The figure's matrices are reproduced below.

3×3 window (center pixel 6):

11  9  5
 7  6  3
 2  7  1

LBP weights:        LBP threshold pattern:    weighted pattern:
 8  4   2            1  1  0                   8   4  0
16  ·   1            1  ·  0                  16   ·  0
32 64 128            0  1  0                   0  64  0

LBP value: 8 + 4 + 16 + 64 = 92.

LEP differences (I_i − I_c):

 5  3 −1
 1  · −3
−4  1 −5

LEP direction bits (0°, 45°, 90°, 135°) = (0, 1, 1, 0); LEP value: 2 + 4 = 6.

Fig. 3. Proposed system flowchart. The query image is converted from RGB to the HSV color space; the hue component is quantized into an 18/36/72 bin histogram and the saturation component into a 10/20 bin histogram, while the value component passes through the local extrema pattern and the gray level co-occurrence matrix, which is then resized from a matrix to a vector. Histogram concatenation of the three parts yields the feature vector. The query image feature vector is compared by similarity match against the feature vector database built from the image database, producing the results.


P_1(t) = \begin{cases} 1 & t \ge 0 \\ 0 & \text{else} \end{cases}

H(L)|_{LBP} = \sum_{m=1}^{M} \sum_{n=1}^{N} P_2(LBP(m,n), L); \quad L \in [0, (2^p - 1)] \qquad (2)

P_2(p, q) = \begin{cases} 1 & p = q \\ 0 & \text{else} \end{cases}

LBP_{p,r} computes the local binary pattern of each pixel, where p and r are the number of neighboring pixels and the radius of the circle taken for the computation. Eq. (2) calculates the histogram of the obtained local binary patterns, where the size of the image is M × N.
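Under the weight layout of the Fig. 2 example, the LBP of a single 3×3 window can be sketched as follows (a sketch; the function name and the explicit coordinate list are ours):

```python
def lbp_3x3(win):
    """LBP of the center pixel of a 3x3 window, using the weight layout
    from the Fig. 2 example:
        8  4   2
       16  .   1
       32 64 128
    """
    c = win[1][1]
    coords_weights = [((1, 2), 1), ((0, 2), 2), ((0, 1), 4), ((0, 0), 8),
                      ((1, 0), 16), ((2, 0), 32), ((2, 1), 64), ((2, 2), 128)]
    # A neighbor contributes its weight when its difference from the center is >= 0
    return sum(w for (i, j), w in coords_weights if win[i][j] - c >= 0)

win = [[11, 9, 5],
       [7, 6, 3],
       [2, 7, 1]]
print(lbp_3x3(win))   # 92, the value obtained in the Fig. 2 example
```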

2.3.2. Local extrema patterns
Local binary patterns relate the center pixel and the reference neighborhood pixels by comparing intensity values. The local extrema pattern, proposed by Murala et al., is a continuation of LBP in the sense that it deals with edge information in different directions. It compares pixels in the 0°, 45°, 90° and 135° directions with the center pixel, and assigns 1 if both neighboring pixels in a particular direction are either greater or smaller than the center pixel; in the same manner, it assigns 0 if one pixel is greater and the other is smaller than the center pixel. For a center pixel I_c and the corresponding neighbor pixels I_i, the LEP is calculated as follows:

I'_i = I_i - I_c; \quad i = 1, 2, \ldots, 8

I'_i(\psi) = P_3(I'_k, I'_{k+4}); \quad k = (1 + \psi/45), \; \forall \psi = 0^\circ, 45^\circ, 90^\circ, 135^\circ

Fig. 4. Corel 1k sample images.

[Fig. 5 plots precision (%) and recall (%) against the number of images retrieved (0–100) on the Corel-1k database for Wavelet+colorhist, CS_LBP+colorhist, Joint LEP colorhist, Joint colorhist, LEPINV+colorhist, LEPSEG+colorhist and the proposed method (PM).]

Fig. 5. Results of precision and recall on the basis of the number of images retrieved for the Corel-1k database.


P_3(I'_k, I'_{k+4}) = \begin{cases} 1 & I'_k \times I'_{k+4} \ge 0 \\ 0 & \text{else} \end{cases} \qquad (3)

LEP(I_c) = \sum_{\psi} 2^{\psi/45} \times I'_k(\psi); \quad \forall \psi = 0^\circ, 45^\circ, 90^\circ, 135^\circ \qquad (4)

H(L)|_{LEP} = \sum_{m=1}^{M} \sum_{n=1}^{N} P_2(LEP(m,n), L); \quad L \in [0, 15] \qquad (5)

LEPs are calculated by Eq. (4), where ψ is the angle of the LEP calculation, and Eq. (5) generates the histogram of the local extrema pattern map.

A 3×3 window example of both LBP and LEP is explained in Fig. 2. In Fig. 2, a 3×3 matrix is taken for explanation, and every boundary pixel is subtracted from the center pixel. The LBP pattern is obtained, multiplied by the weights, and summed up to give the local binary pattern value. In the LEP example, the same 3×3 window is chosen, and the surrounding pixels are subtracted from the center pixel. In the LEP pattern calculation, inward and outward arrows are shown for positive and negative values respectively. If the two arrows differ (one inward and one outward) in a specific direction (0°, 45°, 90° or 135°), then the value of the pattern is '0'; if both arrows are the same, the pattern value is '1' in that direction. The four obtained values are then multiplied by the weights, and the LEP value is obtained.
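The LEP of a single 3×3 window can be sketched as follows (a sketch; the function name, the (row, col) coordinate pairing of opposite neighbors, and their assignment to the four directions are our assumptions based on the Fig. 2 example and Eqs. (3)–(4)):

```python
def lep_3x3(win):
    """LEP value of the center pixel of a 3x3 window (Eqs. (3)-(4))."""
    c = win[1][1]
    # Opposite-neighbor pairs along the four directions, with weights 2^(psi/45)
    pairs = [(((1, 2), (1, 0)), 1),   # 0 deg : right / left,           weight 2^0
             (((0, 2), (2, 0)), 2),   # 45 deg: top-right / bottom-left, weight 2^1
             (((0, 1), (2, 1)), 4),   # 90 deg: top / bottom,            weight 2^2
             (((0, 0), (2, 2)), 8)]   # 135deg: top-left / bottom-right, weight 2^3
    val = 0
    for ((i1, j1), (i2, j2)), w in pairs:
        d1, d2 = win[i1][j1] - c, win[i2][j2] - c
        if d1 * d2 >= 0:   # both differences have the same sign -> bit 1
            val += w
    return val

win = [[11, 9, 5],
       [7, 6, 3],
       [2, 7, 1]]
print(lep_3x3(win))   # 6: direction bits (0, 1, 1, 0) give 2 + 4
```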

3. Proposed method

In the presented work, the authors try to extract more information from images with the help of the earlier explained methods. A new image retrieval method has been proposed using the color and texture information of images. As explained before, texture and color are both salient features of an image. The HSV color space is used for extraction of the details of the image in its hue, saturation and value parts. Initially, the RGB image is converted into the HSV color space [39]. Hue corresponds to the color component and varies from 0 to 360; each number corresponds to a different color. In the proposed work, three different quantizations of the hue component, i.e., 18, 36 and 72 bins, have been used, and the performance of the proposed method has been observed. All three quantization schemes divide the colors into different sections so that optimum color information can be captured. Similarly, saturation is quantized into 10 and 20 bins for reasonable information extraction. All possible combinations of the hue and saturation quantizations have been used, and the performance has been observed on different datasets in Section 4.7. The color histogram has proven its excellence in image retrieval [9]. Histograms are constructed for both the hue and saturation parts.

In the HSV color space, the value component is nearly close to the gray scale conversion of the RGB image; therefore, the value component is used for the texture extraction method. The histogram helps to extract the global information of the image, which is taken from the hue and saturation histograms. The local information of each pixel corresponds to the texture of the image, and it has been extracted using local extrema patterns. LEP is applied to the value channel of the original image. It gives a LEP map of the same size as the image, with entries from 0 to 15, as explained in Section 2.3.2. A histogram extracts information about the frequency of intensities, which only captures the occurrence of each pattern in the whole image and neglects the information regarding the co-occurrence of pixels. The gray level co-occurrence matrix reveals the relative occurrence of intensity pairs in the image, so the local information of the occurrence of every pixel pair in the LEP map can be extracted in matrix form. Hence, instead of a histogram, the GLCM of the LEP map is calculated. The GLCM at 0° with distance 1 has been used in this work.

The size of the GLCM in this case is 16×16, since the values occurring in the LEP map are 0–15; hence, all possible pixel pairs number 16×16. Each position in the GLCM represents the occurrence of the corresponding pixel pair. For histogram concatenation, the GLCM is converted into a single vector, and a combined histogram is made of the GLCM vector and the hue and saturation histograms. The total length of the feature vector depends on the quantization numbers of the hue and saturation parts.
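Putting the pieces together, the feature vector assembly can be sketched as follows (a sketch; the function name is ours, the 72/20-bin choice is just one of the paper's quantization combinations, and the LEP map is assumed to be precomputed):

```python
import numpy as np

def lecop_feature(h_chan, s_chan, lep_map, h_bins=72, s_bins=20):
    """Concatenate hue/saturation histograms with the GLCM of the LEP map."""
    # Global color information: quantized hue and saturation histograms
    h_hist, _ = np.histogram(h_chan, bins=h_bins, range=(0, 360))
    s_hist, _ = np.histogram(s_chan, bins=s_bins, range=(0, 1))
    # Local texture information: 16x16 GLCM of the LEP map (values 0..15)
    # at 0 degrees, distance 1
    g = np.zeros((16, 16))
    for row in lep_map:
        for a, b in zip(row[:-1], row[1:]):
            g[a, b] += 1
    # Final feature vector: 72 + 20 + 256 = 348 entries for this quantization
    return np.concatenate([h_hist, s_hist, g.ravel()])

feat = lecop_feature(np.zeros((4, 4)), np.zeros((4, 4)), np.zeros((4, 4), dtype=int))
print(feat.shape)   # (348,)
```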

The feature vector can be normalized with a factor n, according to the database images. In the proposed work, all databases taken for the experiments are benchmark databases; therefore, the size of the images is the same within a particular database. The normalization factor n can vary if the sizes of the images in a database differ. Hence, according to the size of the images in the database, the normalization factor is chosen as 1000 for database 1 (Corel-1k) and 100 for the other databases (Corel-5k, Corel-10k, MIT VisTex and STex) in the experiment section, since the images in database 1 are bigger than those in the other databases.

3.1. Similarity measure and query matching

Feature extraction has to be done for all the images of the database and for the query image, and a feature vector database is constructed for the full image database. After the feature extraction process, a similarity match is performed for the query image.

[Fig. 6 plots the precision–recall curve and the F-measure against the number of top matches (0–100) for Wavelet+colorhist, CS_LBP+colorhist, Joint LEP colorhist, Joint colorhist, LEPINV+colorhist, LEPSEG+colorhist and the proposed method (PM).]

Fig. 6. Precision–recall curve and F-measure curve for Corel-1k database.


The following distance measures have been used for the similarity match.

d1 distance

d(db_i, q) = \sum_{s=1}^{len} \left| \frac{F_{db_i}(s) - F_q(s)}{1 + F_{db_i}(s) + F_q(s)} \right| \qquad (6)

Canberra distance

d(db_i, q) = \sum_{s=1}^{len} \left| \frac{F_{db_i}(s) - F_q(s)}{F_{db_i}(s) + F_q(s)} \right| \qquad (7)

Manhattan distance

d(db_i, q) = \sum_{s=1}^{len} \left| F_{db_i}(s) - F_q(s) \right| \qquad (8)

Euclidean distance

d(db_i, q) = \left( \sum_{s=1}^{len} (F_{db_i}(s) - F_q(s))^2 \right)^{1/2} \qquad (9)

where d(db_i, q) measures the distance between the ith database image db_i and the query image q. The length of the feature vector is denoted by len, and F_{db_i} and F_q are the feature vectors of the ith database image and of the query image respectively. A shorter distance indicates a more relevant result.
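The four measures of Eqs. (6)–(9) translate directly into NumPy (a sketch; the function names are ours, and the zero-denominator guard in the Canberra distance is our addition for empty histogram bins):

```python
import numpy as np

def d1(f_db, f_q):
    # Eq. (6): absolute difference damped by 1 + the sum of the two entries
    return np.sum(np.abs((f_db - f_q) / (1 + f_db + f_q)))

def canberra(f_db, f_q):
    # Eq. (7); guard against 0/0 when both histogram bins are empty
    denom = f_db + f_q
    denom[denom == 0] = 1
    return np.sum(np.abs((f_db - f_q) / denom))

def manhattan(f_db, f_q):
    # Eq. (8)
    return np.sum(np.abs(f_db - f_q))

def euclidean(f_db, f_q):
    # Eq. (9)
    return np.sqrt(np.sum((f_db - f_q) ** 2))
```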

3.2. Proposed system framework

The algorithm of the system is explained below, and a flowchart of the presented work is given in Fig. 3.

Input: Query image.
Output: Retrieved images from the database.

(1) Upload the input image.
(2) Convert it from RGB to the HSV color space.
(3) Quantize the hue and saturation parts into 18/36/72 and 10/20 bins respectively (according to the requirement of the database), and construct histograms for both.

Table 1. Results on Corel-1k, Corel-5k and Corel-10k in precision (for n=10) and recall (for n=100).

Method                 Corel-1k           Corel-5k           Corel-10k
                       Precision  Recall  Precision  Recall  Precision  Recall
CS_LBP+colorhist       75.88      48.14   54.39      25.47   44.08      18.57
LEPSEG+colorhist       75.80      36.15   43.66      17.62   35.58      13.48
LEPINV+colorhist       72.47      38.56   50.41      20.44   41.25      15.74
Wavelet+colorhist      67.59      40.65   52.15      24.43   42.28      17.34
Joint LEP colorhist    75.13      37.90   53.89      22.85   44.14      16.77
Joint colorhist        75.15      37.90   53.64      22.71   43.96      16.66
PM                     78.58      51.87   62.96      31.16   52.50      23.29

Fig. 7. Corel 5k image samples (one image per category).


(4) Apply LEP on the value part of HSV color space, and obtainLEP map.

(5) Construct GLCM of LEP map.(6) Convert GLCM into a vector form.(7) Concatenate the histogram of step 3 and GLCM vector of step

6, and construct the final histogram as a feature vector.(8) Use similarity distance measure for comparing the query

feature vector and feature vectors of the feature vectordatabase.

(9) Sort the distance measure, and produce the correspondingimages of the best match vectors as final results.
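The algorithm of Section 3.2 can be sketched as a minimal pipeline. This is an illustrative sketch rather than the authors' implementation: `colorsys` stands in for the RGB-to-HSV conversion, the GLCM here uses only horizontally adjacent pixels (distance 1, 0°), the LEP map is assumed to be supplied by the LEP operator described earlier, and all helper names are our own:

```python
import colorsys

def hsv_histograms(pixels, h_bins=18, s_bins=10):
    """Steps 2-3: RGB -> HSV, then quantize hue and saturation into histograms.
    pixels is an iterable of (r, g, b) tuples with components in [0, 1]."""
    h_hist = [0] * h_bins
    s_hist = [0] * s_bins
    for r, g, b in pixels:
        h, s, _v = colorsys.rgb_to_hsv(r, g, b)
        h_hist[min(int(h * h_bins), h_bins - 1)] += 1
        s_hist[min(int(s * s_bins), s_bins - 1)] += 1
    return h_hist, s_hist

def glcm_vector(lep_map, levels):
    """Steps 5-6: co-occurrence counts of horizontally adjacent LEP values,
    flattened row-wise into a vector of length levels*levels."""
    glcm = [[0] * levels for _ in range(levels)]
    for row in lep_map:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
    return [c for row in glcm for c in row]

def lecop_feature(pixels, lep_map, levels, h_bins=18, s_bins=10):
    """Step 7: concatenate the H and S histograms with the GLCM vector."""
    h_hist, s_hist = hsv_histograms(pixels, h_bins, s_bins)
    return h_hist + s_hist + glcm_vector(lep_map, levels)
```

Step 8 then compares `lecop_feature(...)` of the query against the stored database vectors with one of the distance measures of Eqs. (6)-(9).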

3.3. Advantages of proposed method over the existing methods

(1) The proposed method captures the joint relationship between the LEP map and the GLCM, whereas existing methods collect only a simple histogram. A histogram gives information only about the occurrence of each pattern alone; it reveals nothing about the mutual occurrence of patterns in the image.

(2) The HSV color space separates three different kinds of information (hue, saturation and intensity), whereas each RGB channel carries only gray-level information. Hence HSV performs better than RGB for feature extraction, as confirmed by our experiments. The H and S components are used for the histogram calculation, and the proposed pattern is applied to the V component.
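The distinction made in point (1) can be demonstrated on a toy pattern map: two maps with identical pattern histograms can still have different co-occurrence statistics. The values below are purely illustrative, and only horizontal adjacency is counted:

```python
from collections import Counter

def histogram(pattern_map):
    # Frequency of each pattern value, ignoring spatial arrangement
    return Counter(v for row in pattern_map for v in row)

def cooccurrence(pattern_map):
    # Frequency of horizontally adjacent pattern pairs (distance 1, 0 degrees)
    return Counter((a, b) for row in pattern_map for a, b in zip(row, row[1:]))

map_a = [[0, 1, 0, 1]]   # patterns alternate
map_b = [[0, 0, 1, 1]]   # patterns cluster
assert histogram(map_a) == histogram(map_b)        # same frequencies...
assert cooccurrence(map_a) != cooccurrence(map_b)  # ...different co-occurrence
```

A plain histogram cannot tell these two maps apart; the co-occurrence statistics used by LECoP can.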

4. Experimental results and discussions

For testing the presented method, five benchmark databases of color images have been used. Existing color and texture techniques have been compared with the proposed method, and the well-known precision and recall measures were calculated for all databases and all methods. Precision and recall are strong evaluation measures, but each captures only one aspect of performance. The F-measure combines the two, and it grows large only when both precision and recall are large. In every experiment, each image of the database is used as a query image, and image retrieval is performed for each query. A fixed number of images is retrieved for each query, the evaluation measures are calculated over all images, and the average performance of the method is obtained.

The precision of the system is the ratio of the number of relevant images among the retrieved images to the total number of images retrieved from the database. Similarly, recall is the ratio of the number of relevant images among the retrieved images to the total number of relevant images in the database. For a given query image $i$, if $n$ images in total are retrieved, precision and recall are calculated as

$$P(i,n) = \frac{\text{Number of relevant images retrieved}}{n} \qquad (10)$$

$$R(i,n) = \frac{\text{Number of relevant images retrieved}}{N_{ic}} \qquad (11)$$

where $N_{ic}$ indicates the total number of relevant images in the database, i.e., the number of images in each category. Average precision and average recall are formulated as

$$P_{avg}(j,n) = \frac{1}{N_{ic}} \sum_{i=1}^{N_{ic}} P(i,n) \qquad (12)$$

$$R_{avg}(j,n) = \frac{1}{N_{ic}} \sum_{i=1}^{N_{ic}} R(i,n) \qquad (13)$$

where $j$ denotes the category. Finally, the total precision and total recall for the whole database are calculated as

$$P_{total}(n) = \frac{1}{N_c} \sum_{j=1}^{N_c} P_{avg}(j,n) \qquad (14)$$

$$R_{total}(n) = \frac{1}{N_c} \sum_{j=1}^{N_c} R_{avg}(j,n) \qquad (15)$$

where $N_c$ is the total number of categories in the database. The F-measure is calculated from precision and recall as follows:

$$F\text{-measure} = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \qquad (16)$$
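Eqs. (10), (11) and (16) translate directly into code. The sketch below assumes each retrieved image carries a category label; the helper names are our own:

```python
def precision_recall(retrieved, query_category, n_relevant):
    """Eqs. (10)-(11). retrieved is the ranked list of category labels of the
    returned images; n_relevant is N_ic, the size of the query's category."""
    hits = sum(1 for c in retrieved if c == query_category)
    precision = hits / len(retrieved)
    recall = hits / n_relevant
    return precision, recall

def f_measure(precision, recall):
    # Eq. (16): harmonic combination, large only when both inputs are large
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Averaging `precision_recall` over the queries of one category gives Eqs. (12)-(13), and averaging those category means over all $N_c$ categories gives Eqs. (14)-(15).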

In order to measure the capability of the proposed method, experiments have been carried out on the aforementioned five databases, and the performance measures are calculated. Average precision, recall and F-measure have been computed for each database, and graphs have been plotted. The proposed method is also compared with several existing local patterns for validation. In the presented work, we compare our results with CS_LBP (center-symmetric local binary pattern), LEPSEG, LEPINV, wavelet and color histogram [11], joint color histogram, and the joint histogram of color and LEP [28].

Since the presented method is based on both color and texture, we have concatenated an RGB color histogram (8 bins per band, i.e., 24 bins in total) with CS_LBP, LEPSEG and LEPINV separately. The compared methods are abbreviated as follows:

CS_LBP+colorhist: center-symmetric local binary pattern [25] + RGB color histogram
LEPSEG+colorhist: local edge pattern for segmentation [24] + RGB color histogram
LEPINV+colorhist: local edge pattern for image retrieval [24] + RGB color histogram
Wavelet+colorhist: discrete wavelet transform + RGB color histogram [11]
Joint LEP colorhist: joint histogram of color and LEP [28]
Joint colorhist: joint histogram of RGB color
PM: proposed method

4.1. Database 1

Database 1 is the Corel-1k database [40], which consists of 1000 natural images in 10 categories of 100 images each: Africans, beaches, buildings, dinosaurs, elephants, flowers, buses, hills, mountains and food. Each image is either 384×256 or 256×384 pixels. Some sample images from Database 1 (3 per category) are shown in Fig. 4. Precision, recall and F-measure are obtained according to Eqs. (12)–(16).

In the experiments, different numbers of images are retrieved for each query image, and the performance is calculated for each setting. Fig. 5 shows the performance of the presented method against the number of images retrieved. The precision–recall curve and F-measure curve in Fig. 6 indicate that the proposed method is better than the other methods. Table 1 lists precision and recall in percentage for all methods. In terms of precision, the proposed method improves on Wavelet+colorhist, CS_LBP+colorhist, Joint LEP colorhist, Joint colorhist, LEPINV+colorhist and LEPSEG+colorhist by up to 16.26%, 3.56%, 4.59%, 4.56%, 8.43% and 3.67% respectively.



Fig. 8. Corel-5k results: precision vs. (a) number of images retrieved and (c) category number; recall vs. (b) number of images retrieved and (d) category number.


Fig. 9. Precision–recall curve and F-measure curve for Corel-5k database.


4.2. Database 2

The Corel-5k database [41] is the second database used for the experiments, and it contains 5000 images of varied categories. It includes images of animals (e.g., bear, fox, lion, tiger), humans, natural scenes, buildings, paintings, fruits, cars, etc. In total it is a collection of 5000 images in 50 categories, with 100 images per category. Sample images from Database 2 are shown in Fig. 7, with one image taken from each category of the Corel-5k database.

Average precision and average recall are obtained using Eqs. (12)–(15), and the F-measure is calculated using Eq. (16). Results in terms of precision and recall are shown in Fig. 8, both category-wise and against the number of images retrieved. The precision–recall curve and F-measure curve for the Corel-5k database are plotted in Fig. 9. Table 1 lists the retrieval results for the Corel-5k database in terms of the evaluation measures. The table and figures clearly indicate that the performance of the presented technique is significantly better than that of the other techniques. The accuracy of the proposed method improves on Wavelet+colorhist, CS_LBP+colorhist, Joint LEP colorhist, Joint colorhist, LEPINV+colorhist and LEPSEG+colorhist by up to 20.72%, 14.10%, 12.37%, 12.44%, 12.90% and 5.90% respectively.

4.3. Database 3

Database 3 [41] is an extension of the Corel-5k database: an extra 5000 images are appended to make it bigger and more versatile. Hence, it has 10,000 images of 100 types, with 100 images of each type. In addition to the Corel-5k content, it has images of ships, buses, food, textures, airplanes, furniture, army, ocean, cats, fishes, etc.

As for the previous databases, precision, recall and F-measure are computed using Eqs. (12)–(16). For this database, Nic (the number of images in each category) is 100 and Nc (the number of categories) is also 100. Figs. 10 and 11


Fig. 10. Graphs for the Corel-10k database: (a) precision vs. number of images retrieved, (b) recall vs. number of images retrieved, (c) precision vs. category number, (d) recall vs. category number.


explain the results of Corel-10k regarding precision, recall and F-measure in comparison with the other methods, and Table 1 shows that the presented technique outperforms the other existing methods. The precision of the proposed method improves on Wavelet+colorhist, CS_LBP+colorhist, Joint LEP colorhist, Joint colorhist, LEPINV+colorhist and LEPSEG+colorhist by up to 20.72%, 14.10%, 12.37%, 12.44%, 12.90% and 5.90% respectively.

4.4. Database 4

Database 4 is collected from the MIT VisTex database [42]. This database contains a large number of colored texture images, of which 40 textures are selected for the experiment. The size of each image is 512×512. For retrieval purposes, each of the 40 images is divided into 16 blocks of size 128×128; hence 16 images belong to each category, giving 40 categories and 640 images in total. This database meets the requirement for images that contain both color and texture (Fig. 12).

Graphs in Figs. 13 and 14 demonstrate the results for Database 4 in terms of precision and recall individually, and as the precision–recall curve and F-measure curve computed via Eqs. (14)–(16). Table 2 lists the total precision and recall in percentage. Retrieval performance is reported against the number of top matches for each image in the database. The average retrieval rate of the proposed method has increased over Wavelet+colorhist, CS_LBP+colorhist, Joint LEP colorhist, Joint colorhist, LEPINV+colorhist and LEPSEG+colorhist by up to 22.99%, 14.48%, 12.66%, 12.51%, 15.19% and 16.35% respectively. It follows that the


Fig. 11. Precision–recall curve and F-measure curve for Corel-10k database.

Fig. 12. MIT VisTex color texture database image samples (one image per category).


presented method is more advantageous than the others in terms of accuracy.

4.5. Database 5

One more color–texture database has been used for the experiments in this work. The Salzburg Texture Image Database (STex) [43] is a large collection of texture images. It contains 476 images in total, and each image is divided into 16 non-overlapping sub-images, yielding 7616 images in 476 categories. Some sample images from the STex database are given in Fig. 15.

Average precision and recall have been calculated over all images in the database. Fig. 16 plots precision and recall against the number of images retrieved. The precision–recall curve and F-measure curve are shown in Fig. 17. The average recall rates of the presented method and the other methods are given in Table 2, which clearly shows that the ARR of the proposed method is the greatest. The average retrieval rate of the proposed method has increased over Wavelet+colorhist, CS_LBP+colorhist, Joint LEP colorhist, Joint colorhist, LEPINV+colorhist and LEPSEG+colorhist by up to 64.50%, 39.03%, 23.79%, 23.79%, 54.15% and 59.90% respectively.


Fig. 13. MIT VisTex database results of (a) average precision and (b) average recall.


Fig. 14. Precision–recall curve and F-measure curve for MIT VisTex database.

Table 2
Average retrieval rate (ARR) for the MIT VisTex and STex databases.

Database    CS_LBP+colorhist  LEPSEG+colorhist  LEPINV+colorhist  Wavelet+colorhist  Joint LEP colorhist  Joint colorhist  PM
MIT VisTex  81.23             79.92             80.73             75.61              82.54                82.65            92.99
STex        53.33             46.37             48.10             45.08              59.90                59.90            74.15


Fig. 15. STex color texture database sample images.


Fig. 16. STex database results of (a) average precision and (b) average recall.


4.6. Experiment results with different distance measure

Four different distance measures, d1, Canberra, Manhattan and Euclidean (Eqs. (6)–(9)), have been used for measuring the similarity between images. A comparison of all four distance measures is shown in Table 3 for the five databases in terms of average retrieval rate (ARR) and average precision rate (APR). The experiments show that the d1 distance measure gives the best results among them.

4.7. Proposed method with different quantization levels

In the HSV color space, hue, saturation and value each carry their own information. The proposed method is analyzed with different quantization levels of the hue and saturation components for all databases in Table 4. The best quantization scheme differs between databases; e.g., hue 72 and saturation 20 give the best result for the Corel-1k database but not for the others. Performance has been observed similarly for the other quantization schemes. The performance of the method depends on the color distribution and the texture of the images in the database.

4.8. Computational complexity

The speed of retrieving images similar to the query depends on the feature vector length. A longer feature vector takes more time when computing the difference between the query image and the database images. A comparison of the feature vector of the proposed method with those of the other methods is given in Table 5 for speed evaluation. Table 5 also lists the feature extraction time for one image for all methods, with the proposed method reported at every quantization level.
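The LECoP lengths in Table 5 follow from the construction: hue bins + saturation bins + the flattened 16×16 GLCM of the 16-level LEP map (256 entries). A quick check, assuming that decomposition:

```python
def lecop_length(h_bins, s_bins, glcm_levels=16):
    # Hue histogram + saturation histogram + flattened GLCM of the LEP map
    return h_bins + s_bins + glcm_levels * glcm_levels

assert lecop_length(18, 10) == 284   # LECoP(H18S10V256)
assert lecop_length(36, 20) == 312   # LECoP(H36S20V256)
assert lecop_length(72, 20) == 348   # LECoP(H72S20V256)
```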

As the table shows, the feature vector length of the proposed method is greater than those of CS_LBP+colorhist, LEPINV+colorhist and Wavelet+colorhist; however, it outperforms these methods in terms of accuracy, as mentioned in the different database


Fig. 17. Precision–recall curve and F-measure curve for STex database.

Table 3
Experimental results of the proposed method with different distance measures.

Distance measure  Corel-1k       Corel-5k       Corel-10k      MIT VisTex     STex
                  APR    ARR     APR    ARR     APR    ARR     APR    ARR     APR    ARR
d1                78.58  51.87   62.96  31.16   52.50  23.29   92.99  99.22   74.15  90.03
Canberra          75.90  46.77   60.48  27.23   49.84  21.57   91.05  98.76   70.61  87.34
Manhattan         72.86  48.78   52.47  24.62   42.25  18.04   82.80  97.00   65.87  84.95
Euclidean         64.55  41.43   43.61  19.66   34.65  14.32   71.81  92.72   54.18  75.92

Table 4
Precision and recall of the proposed method with different quantization schemes for all databases.

Quantization level  Corel-1k      Corel-5k      Corel-10k     MIT VisTex    STex
                    APR    ARR    APR    ARR    APR    ARR    APR    ARR    APR    ARR
HSV(18_10_256)      78.32  50.58  62.96  31.16  52.50  23.29  92.54  99.03  72.63  88.80
HSV(18_20_256)      77.98  51.35  63.10  30.61  52.47  22.93  92.99  99.22  73.25  89.43
HSV(36_10_256)      78.50  50.70  61.56  30.27  51.18  22.60  92.14  99.08  73.37  89.44
HSV(36_20_256)      78.66  51.72  62.89  30.84  52.52  23.05  92.95  99.23  74.15  90.03
HSV(72_10_256)      78.02  50.87  61.23  29.53  51.22  22.13  91.52  99.08  73.32  89.36
HSV(72_20_256)      78.58  51.87  60.46  28.72  50.86  22.21  92.18  99.26  74.01  89.90


experimental sections. Also, LEPSEG+colorhist, Joint LEP colorhist and Joint colorhist have greater feature vector lengths, and take more time than the proposed method to retrieve images similar to the query.

5. Conclusion and future work

A novel feature descriptor, LECoP, is proposed for color and texture image retrieval. It utilizes the properties of local patterns and the co-occurrence matrix in the HSV color space. The method extracts local directional information for each pixel in terms of LEP, and constructs the GLCM to capture the co-occurrence of each pattern pair in the LEP map. The HSV color space supplies the color features: hue and saturation histograms are used to capture color and brightness respectively, while LECoP is applied to the value component. The combined feature vector is evaluated on the benchmark Corel, MIT VisTex and STex databases. For validation, the proposed method is compared with previous color and texture retrieval techniques; the results, presented through graphs and evaluation measures, show that the proposed method outperforms the other methods.

In this work, the HSV color space has been used. Other color spaces, e.g., L*a*b*, YCbCr, etc., could be used for the color information. GLCMs with other directions and distances could be used for more comprehensive feature extraction. For local feature extraction, other local patterns with smaller histograms may be used instead of LEP.

Acknowledgment

This work was supported by the Ministry of Human Resource and Development (MHRD) Grant, India under Grant MHRD-02-23-200-304. The authors would like to thank the associate editor and the anonymous reviewers for their thoughtful comments and valuable suggestions, which have been incorporated to improve the quality of this paper.

References

[1] R.M. Haralick, K. Shanmugam, I. Dinstein, Textural features for image classification, IEEE Trans. Syst. Man Cybern. 6 (1973) 610–621.
[2] J. Zhang, G. Li, S. He, Texture-based image retrieval by edge detection matching GLCM, in: 10th IEEE International Conference on High Performance Computing and Communications, 2008, pp. 782–786.
[3] C. Palm, Color texture classification by integrative co-occurrence matrices, Pattern Recognit. 37 (5) (2004) 965–976.
[4] M. Partio, B. Cramariuc, M. Gabbouj, A. Visa, Rock texture retrieval using gray level co-occurrence matrix, in: Proceedings of the 5th Nordic Signal Processing Symposium, vol. 75, 2002.
[5] F.R. Siqueira, W.R. Schwartz, H. Pedrini, Multi-scale gray level co-occurrence matrices for texture description, Neurocomputing 120 (2013) 336–345.
[6] A. Baraldi, F. Parmiggiani, An investigation of the textural characteristics associated with gray level cooccurrence matrix statistical parameters, IEEE Trans. Geosci. Remote Sens. 33 (2) (1995) 293–304.
[7] V. Kovalev, M. Petrou, Multidimensional co-occurrence matrices for object recognition and matching, Graphical Models Image Process. 58 (3) (1996) 187–197.
[8] L.S. Davis, S.A. Johns, J.K. Aggarwal, Texture analysis using generalized co-occurrence matrices, IEEE Trans. Pattern Anal. Mach. Intell. 3 (1979) 251–259.
[9] M.J. Swain, D.H. Ballard, Indexing via color histograms, in: Active Perception and Robot Vision, 1992, pp. 261–273.
[10] M.A. Stricker, M. Orengo, Similarity of color images, in: Proceedings of the Storage and Retrieval for Image and Video Databases, SPIE, 1995, pp. 381–392.
[11] S. Murala, A.B. Gonde, R.P. Maheshwari, Color and texture features for image indexing and retrieval, in: IEEE International Conference on Advance Computing, 2009, pp. 1411–1416.
[12] C.H. Lin, R.T. Chen, Y.K. Chan, A smart content-based image retrieval system based on color and texture feature, Image Vis. Comput. 27 (6) (2009) 658–665.
[13] A. Vadivel, S. Sural, A.K. Majumdar, An integrated color and intensity co-occurrence matrix, Pattern Recognit. Lett. 28 (8) (2007) 974–983.
[14] J. Huang, S.R. Kumar, M. Mitra, W.J. Zhu, R. Zabih, Image indexing using color correlograms, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1997, pp. 762–768.
[15] J. Huang, S.R. Kumar, M. Mitra, Combining supervised learning with color correlograms for content-based image retrieval, in: Proceedings of the 5th ACM International Conference on Multimedia, 1997, pp. 325–334.
[16] G. Pass, R. Zabih, J. Miller, Comparing images using color coherence vectors, in: Proceedings of the 4th ACM International Conference on Multimedia, 1997, pp. 65–73.
[17] S.S. Park, K.K. Seo, D.S. Jang, Expert system based on artificial neural networks for content-based image retrieval, Expert Syst. Appl. 29 (3) (2005) 589–597.
[18] S. Jeong, C.S. Won, R.M. Gray, Image retrieval using color histograms generated by Gauss mixture vector quantization, Comput. Vis. Image Underst. 94 (1) (2004) 44–66.
[19] N. Jhanwar, S. Chaudhuri, G. Seetharaman, B. Zavidovique, Content based image retrieval using motif cooccurrence matrix, Image Vis. Comput. 22 (14) (2004) 1211–1220.
[20] S. Murala, Q.M.J. Wu, R.P. Maheshwari, R. Balasubramanian, Modified color motif co-occurrence matrix for image indexing and retrieval, Comput. Electr. Eng. 39 (3) (2013) 762–774.
[21] S.K. Vipparthi, S.K. Nagar, Multi-joint histogram based modelling for image indexing and retrieval, Comput. Electr. Eng. (2014).
[22] T. Ojala, M. Pietikäinen, D. Harwood, A comparative study of texture measures with classification based on featured distributions, Pattern Recognit. 29 (1) (1996) 51–59.
[23] T. Ojala, M. Pietikäinen, T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal. Mach. Intell. 24 (7) (2002) 971–987.
[24] C.H. Yao, S.Y. Chen, Retrieval of translated, rotated and scaled color textures, Pattern Recognit. 36 (4) (2003) 913–929.
[25] M. Heikkilä, M. Pietikäinen, C. Schmid, Description of interest regions with center-symmetric local binary patterns, in: Computer Vision, Graphics and Image Processing, 2006, pp. 58–69.
[26] X. Tan, B. Triggs, Enhanced local texture feature sets for face recognition under difficult lighting conditions, in: Analysis and Modeling of Faces and Gestures, 2007, pp. 168–182.
[27] S. Murala, R.P. Maheshwari, R. Balasubramanian, Directional local extrema patterns: a new descriptor for content based image retrieval, Int. J. Multimed. Inf. Retr. 1 (3) (2012) 191–203.
[28] S. Murala, Q.M.J. Wu, R. Balasubramanian, R.P. Maheshwari, Joint histogram between color and local extrema patterns for object tracking, in: IS&T/SPIE Electronic Imaging, International Society for Optics and Photonics, San Francisco, 2013, p. 86630T.
[29] M. Verma, R. Balasubramanin, S. Murala, Multi-resolution local extrema patterns using discrete wavelet transform, in: 7th IEEE International Conference on Contemporary Computing (IC3), 2014, pp. 577–582.
[30] G.A. Papakostas, D.E. Koulouriotis, E.G. Karakasis, V.D. Tourassis, Moment-based local binary patterns: a novel descriptor for invariant pattern recognition applications, Neurocomputing 99 (2013) 358–371.
[31] B. Zhang, Y. Gao, S. Zhao, J. Liu, Local derivative pattern versus local binary pattern: face recognition with high-order local pattern descriptor, IEEE Trans. Image Process. 19 (2) (2010) 533–544.
[32] S. Murala, Q.J. Wu, Local ternary co-occurrence patterns: a new feature descriptor for MRI and CT image retrieval, Neurocomputing 119 (6) (2013) 399–412.
[33] S. Murala, R.P. Maheshwari, R. Balasubramanian, Local maximum edge binary patterns: a new descriptor for image retrieval and object tracking, Signal Process. 92 (6) (2012) 1467–1479.

Table 5
Feature vector length, feature extraction time and image retrieval time of different methods.

Method               Feature vector length  Feature extraction time (s)  Image retrieval time (s)
CS_LBP+colorhist     16+24 = 40             0.1216                       0.51
LEPSEG+colorhist     512+24 = 536           0.0243                       0.59
LEPINV+colorhist     72+24 = 96             0.0709                       0.52
Wavelet+colorhist    24+192 = 216           0.0757                       0.53
Joint LEP colorhist  16×8×8×8 = 8192        0.1676                       2.52
Joint colorhist      8×8×8 = 512            0.0360                       0.58
LECoP(H18S10V256)    18+10+256 = 284        0.2407                       0.54
LECoP(H18S20V256)    18+20+256 = 294        0.2414                       0.54
LECoP(H36S10V256)    36+10+256 = 302        0.2418                       0.54
LECoP(H36S20V256)    36+20+256 = 312        0.2422                       0.55
LECoP(H72S10V256)    72+10+256 = 338        0.2427                       0.56
LECoP(H72S20V256)    72+20+256 = 348        0.2449                       0.56


[34] K.P. Jasmine, P.R. Kumar, Integration of HSV color histogram and LMEBP joint histogram for multimedia image retrieval, in: Intelligent Computing, Networking, and Informatics, 2014, pp. 753–762.
[35] S. Murala, R.P. Maheshwari, R. Balasubramanian, Local tetra patterns: a new feature descriptor for content-based image retrieval, IEEE Trans. Image Process. 21 (5) (2012) 2874–2886.
[36] I.J. Jacob, K.G. Srinivasagan, K. Jayapriya, Local oppugnant color texture pattern for image retrieval system, Pattern Recognit. Lett. 42 (2014) 72–78.
[37] S.H. Sural, G. Qian, S. Pramanik, Segmentation and histogram generation using the HSV color space for image retrieval, in: International Conference on Image Processing, vol. 2, 2002, pp. 589–592.
[38] H. Yu, M. Li, H.J. Zhang, J. Feng, Color texture moments for content-based image retrieval, in: International Conference on Image Processing, vol. 3, 2002, pp. 929–932.
[39] A.R. Smith, Color gamut transform pair, ACM SIGGRAPH Comput. Graphics 12 (3) (1978) 12–19.

[40] Corel 1k Database. Available online: ⟨http://wang.ist.psu.edu/docs/related/⟩.
[41] Corel 5k and Corel 10k Database. Available online: ⟨http://www.ci.gxnu.edu.cn/cbir/⟩.
[42] MIT Vision and Modeling Group, Cambridge, "Vision Texture". Available online: ⟨http://vismod.media.mit.edu/pub/⟩.
[43] R. Kwitt, P. Meerwald, Salzburg Texture Image Database, September 2012. Available online: ⟨http://www.wavelab.at/sources/STex/⟩.

Manisha Verma was born in India in 1989. She received the B.Sc. degree from Maharani's College, Rajasthan University, Jaipur, India in 2009, and the M.Sc. degree in Industrial Mathematics and Informatics from the Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, India in 2012. Currently, she is pursuing a Ph.D. in the Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, India. Her major areas of interest are content based image retrieval, face and palmprint recognition, and object tracking.

Balasubramanian Raman has been an associate professor in the Department of Computer Science and Engineering at Indian Institute of Technology Roorkee since 2013. He obtained an M.Sc. degree in Mathematics from Madras Christian College, University of Madras in 1996 and a Ph.D. from Indian Institute of Technology Madras in 2001. He was a post-doctoral fellow at the University of Missouri, Columbia, USA in 2001–2002 and a post-doctoral associate at Rutgers, the State University of New Jersey, USA in 2002–2003. He joined the Department of Mathematics at Indian Institute of Technology Roorkee as a lecturer in 2004, and became an assistant professor in 2006 and an associate professor in 2012. He was a visiting professor and a member of the Computer Vision and Sensing Systems Laboratory in the Department of Electrical and Computer Engineering at University of Windsor, Canada during May–August 2009. His areas of research include vision geometry, digital watermarking using mathematical transformations, image fusion, biometrics and secure image transmission over wireless channels, content based image retrieval and hyperspectral imaging.

Subrahmanyam Murala was born in India in 1985. He received the B.E. degree in Electrical and Electronics Engineering from Andhra University, Andhra Pradesh, India in 2007. Afterwards, he received his M.Tech. and Ph.D. degrees from the Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee, India in 2009 and 2012 respectively. He was a post-doctoral researcher in the Department of Electrical and Computer Engineering at University of Windsor, Windsor, ON, Canada from July 01, 2012 to June 30, 2014. Currently, he is working as an assistant professor in the Department of Electrical Engineering, IIT Ropar, Rupnagar, Punjab, India. His major fields of interest are content based image retrieval, medical imaging and object tracking.
