


Applied Mathematics and Computation 185 (2007) 784–797

www.elsevier.com/locate/amc

Multiple features data fusion method in color texture analysis

Yan Wu a,b,*, Ming Li a, Guisheng Liao a

a National Key Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
b School of Electronics Engineering, Xidian University, Xi’an 710071, China

Abstract

A new algorithm is developed to represent colored texture by effectively merging the texture feature and color feature, together with the spatial correlation of color and texture, based on the incomplete tree-structured wavelet decomposition. Experiments are conducted on a set of 20 natural colored texture images, in which multiple features fusion and classification performance are compared on the basis of the pyramid wavelet decomposition (PWD), the incomplete tree-structured wavelet decomposition (ICTSWD) and the wavelet packet decomposition (WPD). The correct classification rate of multiple features fusion based on the PWD is 85.78%, and that based on the WPD is 91.03% but with a dimensionality that increases exponentially; in contrast, the dimensionality of feature fusion based on the ICTSWD is reduced greatly because of selective decomposition of the subbands, while its correct classification rate is 90.63%. It is demonstrated that multiple features fusion based on the ICTSWD has better classification performance and anti-noise ability than fusion based on the PWD and WPD.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Colored texture; Multiple features fusion; Incomplete tree-structured wavelet decomposition; Classification

1. Introduction

Recently, the analysis of color texture images has become an important topic, with wide application in many areas such as computer vision, remote sensing of earth resources and biomedical diagnosis. However, most of the existing texture analysis schemes have focused only on gray-level texture images. In the analysis of gray texture images, multiresolution representations give rise to an interesting class of texture analysis methods. Strong arguments for their use can be found in psychovisual research, which offers evidence that the human visual system processes images in a multiscale way [1]. The wavelet transform provides a convenient way to obtain a multiresolution representation [2,3], from which texture features are easily extracted. The so-called energy signatures have proven to be very powerful for texture analysis [4–7].

For image analysis, color and texture are two of the most important properties, especially when one is dealing with real world images. Classical image analysis schemes only take the pixel gray levels into account,

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved.

doi:10.1016/j.amc.2006.06.116

This work was supported by the National Defence Foundation (51431020204DZ0101) and the Postdoctoral Science Foundation of China (J63104020156).

* Corresponding author. Address: National Key Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China. E-mail addresses: [email protected] (Y. Wu), [email protected] (M. Li).


which represent the total amount of visible light at the pixel’s position. The performance of such schemes can be improved by adding color information [8]. The color of a pixel is typically represented by the RGB tristimulus values, each corresponding to the red, green and blue frequency bands of the visible light spectrum. Color is then a feature in the 3-dimensional RGB color space, which contains information regarding the spectral distribution of light complementary to the gray-level information.

An important topic when processing color images is their representation. The RGB representation is frequently transformed into other color spaces. A large variety of (linear and non-linear) transforms and standard color spaces can be found in the literature [9,10]. The performance of an image analysis system can strongly depend on the choice of the color representation [11,12]. However, there does not appear to be a systematic means of determining an optimum color-coordinate system for a particular task.

In the analysis of color images, a rather limited number of systems use combined information of color and texture, and even when they do, both aspects are mostly dealt with using separate methods [13,14]. It is only recently that attempts have been made to combine both aspects in a single method [15–18]. This combination can be made more formal by defining ‘‘color-texture’’ as the set of local statistical properties of the colors of image regions. Furthermore, some researchers have performed experiments on color texture analysis. For example, Caelli and Reye have proposed a method in which they extract features from three color spectral channels by using three multiscale isotropic filters [19]. A related approach by Tan and Kittler extracts features from three channels with a discrete cosine transform [20]. Neither method considers the correlation between spectral bands. Some recent methods have tried to include this aspect. The well known cooccurrence matrix method has been investigated by Hauta-Kasari et al. [21]. A Markov random field model for color textures has been proposed by Panjwani and Healey [22]. Despite these examples, the total amount of work on color texture analysis is still very small. Efficient characterization of color texture requires the exploitation of the texture and color features in feature-level fusion, by considering spatial correlations, correlation between color bands and correlation between color and spatial frequency.

We propose a scheme for the analysis of colored texture images. A new feature extraction using the incomplete tree-structured wavelet decomposition (ICTSWD) is described, which effectively merges the texture and color information at the feature level: it extracts the texture feature and color feature, together with the texture and color spatial correlation feature, by weighted values fusion at the feature level in the YIQ color space. Experimental results will demonstrate that this method not only describes the colored texture features more completely and accurately, but also has better classification performance and anti-noise ability.

The outline of this paper is as follows: in the next section the use of the ICTSWD for feature fusion extraction of color texture is discussed and the YIQ color space transform is introduced. Section 3 explains the techniques used for image classification, and several experiments are conducted and their results are discussed. In the last section some conclusions are drawn.

2. Multiple features fusion based on ICTSWD for color texture images

The traditional pyramid wavelet decomposition (PWD) recursively decomposes the subsignals in the low frequency channels. However, since texture is a quasi-periodic signal, the most significant information of a texture often appears in the middle frequency channels, and further decomposition only in the lower frequency region may not help for the purpose of classification. Therefore, the wavelet packet decomposition (WPD) is used in texture analysis to represent texture features adequately, because it can decompose all subsignals at each scale. The library of the 1-D wavelet packet basis functions {W_{l,n,k}} can be generated from a given function W_0(x) as follows:

W_{l−1,2n}(x) = √2 Σ_k h(k) W_{l,n}(x − 2^{−l} k),  (1)

W_{l−1,2n+1}(x) = √2 Σ_k g(k) W_{l,n}(x − 2^{−l} k),  (2)

where W_0(x) can be identified with the scaling function, and W_1(x) with the mother wavelet. The discrete filters h(k) and g(k) are quadrature mirror filters. Each element of the library is determined by a subset of the indices: a scaling parameter l, a localization parameter k and an oscillation parameter n, where l = −1, −2, …,


n = 0, 1, …, 2^{−l} − 1, and k ∈ Z. The 2-D wavelet packet basis functions can be expressed as the tensor product of two 1-D wavelet packet basis functions along the horizontal and vertical directions. The corresponding 2-D filter coefficients can be expressed as

h_LL(k, l) = h(k)h(l),  h_LH(k, l) = h(k)g(l),  (3)
h_HL(k, l) = g(k)h(l),  h_HH(k, l) = g(k)g(l),  (4)

where the first and second subscripts denote the lowpass and highpass filtering characteristics in the x- and y-directions, respectively.
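As a concrete illustration (a sketch, not the paper’s code: the Haar pair stands in here for any quadrature mirror pair, such as the Coif4 filters used later), the four 2-D filters of Eqs. (3) and (4) are simply outer products of the 1-D lowpass and highpass filters:

```python
import numpy as np

# 1-D quadrature mirror pair (Haar, chosen only for brevity).
h = np.array([1.0, 1.0]) / np.sqrt(2)   # lowpass h(k)
g = np.array([1.0, -1.0]) / np.sqrt(2)  # highpass g(k)

# 2-D separable filters of Eqs. (3)-(4) as tensor (outer) products;
# first index = x-direction, second index = y-direction.
h_LL = np.outer(h, h)  # lowpass in x and y
h_LH = np.outer(h, g)  # lowpass in x, highpass in y
h_HL = np.outer(g, h)  # highpass in x, lowpass in y
h_HH = np.outer(g, g)  # highpass in x and y
```

Because the 1-D pair is orthonormal, each 2-D filter has unit energy, and any orthonormal pair (e.g. Coif4) can be substituted for `h` and `g` without changing the construction.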

However, a full wavelet packet decomposition still has a shortcoming: the feature dimension increases exponentially, which makes classification difficult. This idea leads naturally to a new type of wavelet transform called the tree-structured wavelet transform. The key difference between this algorithm and the traditional pyramid algorithm is that the decomposition is no longer simply applied to the low frequency subsignals recursively. Instead, it can be applied to the output of any of the filters h_LL, h_LH, h_HL and h_HH. It has to be noted that it is usually unnecessary and expensive to decompose all subsignals at each scale to achieve a full decomposition. To avoid a full decomposition, we may consider a criterion to decide whether decomposition is needed for a particular output. Thus, the incomplete tree-structured wavelet decomposition (ICTSWD) is adopted for feature extraction, in order to avoid the shortcomings of the PWD and WPD.

In the analysis of color texture, color images are usually represented in the RGB color space. However, the R, G and B channels of the RGB color space are strongly correlated. In order to keep luminance separate from chrominance and to extract useful color features, the RGB space is often translated into the YIQ space. This can be expressed by

( Y )   ( 0.299  0.587  0.114 )   ( R )
( I ) = ( 0.596 −0.274 −0.322 ) · ( G ),  (5)
( Q )   ( 0.211 −0.523  0.312 )   ( B )

where Y is the image luminance and the I and Q components carry the chrominance information.

In the YIQ color space, each component is decomposed by the ICTSWD. An energy criterion is used to decide

whether decomposition is needed for a particular output. It is expressed as

e = (1/MN) Σ_{y=1}^{N} Σ_{x=1}^{M} |d(x, y) − d̄(x, y)|,  (6)

where M and N are the numbers of rows and columns of a subimage, respectively, and d̄(x, y) is the mean value of d(x, y).

Since Y corresponds to the luminance component, i.e. the gray image obtained from the color texture image by the transform, the energy of Y is used to ascertain the decomposition scale in the ICTSWD. Let e_{max,Y} be the largest energy value in

Fig. 1. Incomplete tree-structured wavelet decomposition. (a) Y component of Fabric 0. (b) Decomposition subimages.


Fig. 2. (a) Y component decomposition of Fabric 0. (b) Quadtree structure of component.


the same scale, and e_{p,Y} (p = 1–4) be the energy value of the pth subimage. If e_{p,Y} ≥ C · e_{max,Y} (p = 1–4), we continue the decomposition in this region, since it contains more information; if not, we stop decomposing this region. Here C is a constant less than 1. This decomposition yields the well-known quadtree structure; its decomposition images are shown in Figs. 1 and 2. Since the ICTSWD decomposes the color texture image selectively, its feature dimension is reduced greatly, which is very important for color texture analysis.
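The selective decomposition above can be sketched as follows. This is a minimal illustration, not the paper’s implementation: the function names (`haar_level`, `energy`, `ictswd`) are ours, the Haar filters stand in for Coif4, and every subimage (split or not) is kept as a leaf so no signal energy is discarded.

```python
import numpy as np

def haar_level(img):
    """One level of 2-D Haar decomposition -> (LL, LH, HL, HH) subimages."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def energy(sub):
    """Energy criterion of Eq. (6): mean absolute deviation from the mean."""
    return np.mean(np.abs(sub - sub.mean()))

def ictswd(img, C=0.1, max_level=3):
    """Incomplete tree-structured decomposition: a subimage is decomposed
    further only if its energy is at least C times the largest energy
    among the four subimages produced at the same split."""
    leaves = []
    def split(node, level):
        if level == max_level or min(node.shape) < 2:
            leaves.append(node)
            return
        subs = haar_level(node)
        e_max = max(energy(s) for s in subs)
        for s in subs:
            if energy(s) >= C * e_max:
                split(s, level + 1)      # informative region: keep splitting
            else:
                leaves.append(s)         # low-energy region: stop here
    split(img, 0)
    return leaves
```

With C = 0 the scheme degenerates to a full wavelet packet decomposition of depth `max_level`; with C close to 1 only the single most energetic subimage at each split is decomposed, which is how the feature dimension is kept small.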

The quadtree representation of the ith component based on the ICTSWD is shown in Fig. 3. Let n be the total number of leaf nodes, arranged in sequence from left to right. Let d_im(x, y), m = 0, 1, 2, …, (n − 1), be the decomposition subimage corresponding to each leaf node. Since d_i0(x, y) is just the decomposition result in the lowest frequency region, it is not helpful for classification and can be neglected. Therefore, feature extraction is performed only in the subimages d_im(x, y), m = 1, 2, …, (n − 1). Their energy is

μ²_im = Σ_{x,y} (d_im(x, y) − d̄_im(x, y))²,  i = 1, 2, 3;  m = 1, 2, …, (n − 1),  (7)

where {μ²_1m}_{m=1,2,…,(n−1)} corresponds to the gray-level energy feature, and {μ²_2m}_{m=1,2,…,(n−1)} and {μ²_3m}_{m=1,2,…,(n−1)} correspond to the color energy features.

Next, let d_im(x, y) and d_jm(x, y) be the filtered images of different spectral bands i and j, respectively. We normalize d_im(x, y) and d_jm(x, y) to have unit energy by dividing by μ_im and μ_jm, respectively. Define the difference of the normalized d_im(x, y) and d_jm(x, y) by

c_ijm(x, y) = (d_im(x, y) − d̄_im(x, y)) / μ_im − (d_jm(x, y) − d̄_jm(x, y)) / μ_jm,  m = 1, 2, …, (n − 1),  (8)

Fig. 3. Quadtree representation of ICTSWD.


Letting ν²_ijm be the energy in the normalized difference image c_ijm(x, y), we can write

ν²_ijm = Σ_{x,y} c²_ijm(x, y),  m = 1, 2, …, (n − 1).  (9)

Substituting Eq. (8) into Eq. (9), we obtain

ν²_ijm = Σ_{x,y} (d_im(x, y) − d̄_im(x, y))² / μ²_im + Σ_{x,y} (d_jm(x, y) − d̄_jm(x, y))² / μ²_jm − 2 Σ_{x,y} (d_im(x, y) − d̄_im(x, y))(d_jm(x, y) − d̄_jm(x, y)) / (μ_im μ_jm),  m = 1, 2, …, (n − 1),  (10)

By Eq. (7), the first two terms are both identically equal to 1 and carry no information; removing this redundant information, Eq. (10) is written as

ν²_ijm = 2 − 2 Σ_{x,y} (d_im(x, y) − d̄_im(x, y))(d_jm(x, y) − d̄_jm(x, y)) / (μ_im μ_jm),  m = 1, 2, …, (n − 1).  (11)

From Eq. (11), we see that {ν²_ijm} (i, j = 1, 2, 3, i < j; m = 1, 2, …, (n − 1)) actually reflects the correlation, within the subimage corresponding to the same leaf node, between different color planes, which represents the exploitation of spatial correlation between the color and texture properties of the image.

In the subimage corresponding to the mth leaf node, six features are extracted: {μ²_im, i = 1, 2, 3} and {ν²_ijm, i, j = 1, 2, 3, i < j}. When the number of leaf nodes is (n − 1), the fused feature dimension is 6(n − 1). Since the feature vectors above reflect the color feature, the texture feature and the spatial correlation of color and texture, their physical meanings are not the same. We therefore choose to normalize the features so that they all have equal variances.

Denote the feature vector by F = (f^ij_1, f^ij_2, …, f^ij_N); the normalization of the feature vector is performed by

f̃^ij_s = (f^ij_s − f̄^ij) / σ^ij,  (12)

where f^ij_s denotes the sth feature of the feature vector, i and j denote the different channel components, and the normalized f^ij_s is written as f̃^ij_s, with

f̄^ij = (1/N) Σ_{s=1}^{N} f^ij_s,  σ^ij = [ (1/N) Σ_{s=1}^{N} (f^ij_s − f̄^ij)² ]^{1/2}.  (13)

A problem is that this normalization can be affected by outliers. These are points in feature space which, due to uncontrollable causes, deviate from the underlying probability density and deteriorate the estimates of f̄^ij and σ^ij. A solution is to replace f̄^ij by the median m(f^ij_s) of the f^ij_s, and σ^ij by (1/N) Σ_{s=1}^{N} |f^ij_s − m(f^ij_s)|, in Eqs. (12) and (13); these estimates are known to be less affected by outliers. We refer to this last procedure whenever we state that a feature is normalized.

The weighted values fusion of the normalized multiple features is expressed as

{ ω_g {μ̃²_im}_{i=1; m=1,2,…,(n−1)},  ω_c {μ̃²_im}_{i=2,3; m=1,2,…,(n−1)},  ω_gc {ν̃²_ijm}_{i,j=1,2,3, i<j; m=1,2,…,(n−1)} },  (14)

where ω_g, ω_c and ω_gc are the weighted values and ω_g + ω_c + ω_gc = 1. Eq. (14) adequately reflects the texture and color information, together with the spatial correlation of texture and color, which greatly improves the classification performance for colored texture.
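The fusion of Eq. (14) amounts to a weighted concatenation of the three normalized feature groups. A minimal sketch (helper name and default weights taken from the third feature set described later; the argument shapes are our assumption):

```python
import numpy as np

def fuse_features(gray_e, color_e, corr_e, w_g=0.5, w_c=0.3, w_gc=0.2):
    """Eq. (14): weighted concatenation of normalized gray-texture energies
    ((n-1) values), color energies (2(n-1) values) and color-texture
    correlation features (3(n-1) values); the weights must sum to 1."""
    assert abs(w_g + w_c + w_gc - 1.0) < 1e-12
    return np.concatenate([w_g * np.asarray(gray_e, float),
                           w_c * np.asarray(color_e, float),
                           w_gc * np.asarray(corr_e, float)])
```

For a quadtree with n leaf nodes this yields the 6(n − 1)-dimensional fused vector used for classification.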

3. Classification experiments and discussion

Twenty real-world (512 × 512) RGB color images from different natural scenes [23], presented in Fig. 4, were selected. A database of 1280 color image regions of 20 texture classes was constructed by subdividing each image into 64 non-overlapping (64 × 64) subimages. Each image region was decomposed using the ICTSWD,


Fig. 4. A set of 20 natural colored texture images (from right to left and top to bottom): Bark 4, Bark 6, Bark 9, Brick 1, Brick 5, Buildings 9, Fabric 0, Fabric 7, Fabric 13, Fabric 18, Food 0, Food 2, Food 5, Grass 1, Grass 2, Sand 0, Tile 7, Water 6, Wood 1, Wood 2.


and multiple features were extracted and fused. Thus, classification of the 1280 color image regions of the 20 texture classes was performed.

3.1. Classification

Since the emphasis in this work is on the feature extraction and fusion stage, a k-nearest neighbor (knn) classifier is designed. The knn-classifier provides an efficient and robust classification scheme for the evaluation of classifier performance and the comparison of feature sets [24,25]. It is a supervised classifier, i.e. it requires a set of labeled feature vectors derived from the available data samples as a design set. Classification of a feature vector x⃗ is performed by searching the k closest design vectors according to some metric d(x⃗, y⃗). The vector x⃗ is assigned to the class to which the majority of these k nearest neighbors belong.

The knn approach can also be employed to estimate probability densities. Consider a classification problem with N design samples in c classes, with N_i samples in class ω_i (i = 1, 2, …, c). To classify x⃗ we find the K samples closest to it and determine the volume V in which these samples reside. Suppose that there are k_i (k_1 + k_2 + ⋯ + k_c = K) samples of class ω_i among those K nearest neighbors. The class likelihoods, the unconditional density and the class priors can then be estimated as follows:

P(x⃗ | ω_i) ≈ k_i / (N_i V),  P(x⃗) ≈ K / (N V),  P(ω_i) ≈ N_i / N.  (15)


Inserting Eq. (15) into the Bayes decision rule yields:

x⃗ ∈ ω_i if P(x⃗ | ω_i) P(ω_i) ≥ P(x⃗ | ω_j) P(ω_j) ∀j  ⇒  k_i ≥ k_j ∀j.  (16)

This leads to a very simple classification procedure: the sample x⃗ should be assigned to the class to which most of its K nearest neighbors belong. Since this rule is derived from the Bayes rule, it is optimal in the sense that it minimizes the conditional risk.

One particularly satisfying property is that the knn-classifier converges asymptotically to the Bayes classifier. This is the theoretically optimal classifier, i.e. for a given set of parameters no other classifier can outperform it. However, the Bayes classifier can never be constructed in practice since it requires exact knowledge of the class conditional probability densities.

The conditions for convergence of the knn-classifier to the Bayes classifier are that K → ∞ and K/N → 0 while N → ∞. In a word, K should be taken as high as possible for a given fixed amount of data samples. The optimal value for K thus depends on N. However, a straightforward expression for K as a function of N does not exist in general, and it remains to be determined experimentally. We found K = 9 to be satisfactory for experiments with large N, and K = 1 or 3 for low N.

To evaluate the performance of the knn-classifier, one desires to know how well it classifies ‘‘unseen’’ data, i.e. data not used to design the classifier. One approach is to divide all data into a design set used for designing the classifier and a test set used for performance evaluation. This is the hold-out method, which is known to have a rather large negative bias on the error estimate and does not use the available data very economically, since a portion of it is not used for designing the classifier. As an alternative, we have employed the leave-one-out method, which sequentially picks each available data sample and classifies it (by the knn rule) using the remaining samples. This ensures that all data (but one sample) is used for designing and that an independent test set is kept. When dealing with colored images, the number of extracted features of color texture is much larger than in the gray-level case. Therefore, it is crucial to adopt a feature selection (or extraction) scheme to find a (sub-) optimal set of features. In our work, the Karhunen–Loève transform (KLT) is adopted, based on the minimum mean square error criterion.
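The knn rule with leave-one-out evaluation can be sketched in a few lines (illustrative helper name; brute-force Euclidean distances, with ties between classes broken toward the smaller label, which is an implementation choice of the sketch rather than part of the method):

```python
import numpy as np

def knn_loo_ccr(X, y, k=3):
    """Leave-one-out k-nearest-neighbour classification; returns the total
    correct classification rate estimated over all samples."""
    X = np.asarray(X, float)
    y = np.asarray(y)
    n = len(y)
    # Pairwise Euclidean distances between all samples.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # leave the test sample itself out
    correct = 0
    for i in range(n):
        nn = np.argsort(d[i])[:k]                      # k nearest design samples
        labels, counts = np.unique(y[nn], return_counts=True)
        if labels[np.argmax(counts)] == y[i]:          # majority vote
            correct += 1
    return correct / n
```

In the experiments the features would first be projected to s dimensions (here one would use a KLT/PCA step) before being passed to such a classifier.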

Suppose that there are c possible classes with N_i (i = 1, 2, …, c) feature vectors from the ith class; each available sample is thus employed once as a test sample. Classifier performance is measured by using the correct counting approach. The class correct rate is defined as

g_i = n_ci / N_i,  (17)

where n_ci is the number of correctly classified test samples from class ω_i, and the total correct classification rate is

g = n_c / N = (1/c) Σ_{i=1}^{c} g_i,  (18)

where N = Σ_{i=1}^{c} N_i is the total number of samples. This number estimates the percentage of test samples classified correctly and is used as a measure of the performance of the classifier.

The color texture classification process is detailed as follows:

• Learning phase
(1) Given the 64 samples obtained from the same color texture, decompose each sample with the tree-structured wavelet transform and determine the tree structure and its number of leaf nodes in each subimage.
(2) Calculate the fusion features by normalization at its leaves using Eq. (14).
(3) Repeat the process for all colored textures. It is worth pointing out that the number of color texture features differs when using the ICTSWD: there are five cases for the 1280 color image regions, i.e. the number of leaf nodes is n = 28, 25, 22, 18, 13.
• Classification phase
(1) Decompose an unknown color texture with the tree-structured wavelet transform, determine the tree structure and the number of leaf nodes, and then calculate the normalized fusion features. For the unknown sample


and the samples of the learning set, with respect to the five different cases, reduce the features to a certain dimensionality s (0 ≤ s ≤ 30).

(2) Classify the unknown texture using the knn-classifier. In the above process, the leave-one-out method is adopted. Thus, correct classification is performed at a certain dimensionality s with respect to the 1280 color image samples.

3.2. Discussion

3.2.1. Multiple features fusion based on ICTSWD

For the implementation of the ICTSWD, we selected the Coif4 wavelet filter and the threshold C = 0.1. In order to evaluate the quality of multiple features fusion in color texture analysis, the following three feature sets were constructed:

(1) Gray-level texture energy (i.e. energy of Y, (n − 1) features): {μ̃²_1m, m = 1, 2, …, (n − 1)};
(2) Texture and color energy (i.e. energy of YIQ, 3(n − 1) features): {ω_g μ̃²_1m; ω_c μ̃²_im, i = 2, 3; m = 1, 2, …, (n − 1)}, where ω_g = ω_c = 0.5;
(3) Texture and color energy together with spatial correlation of color texture (i.e. energy of YIQ and spatial correlation, 6(n − 1) features): {ω_g μ̃²_1m; ω_c μ̃²_im, i = 2, 3; ω_gc ν̃²_ijm, i, j = 1, 2, 3, i < j; m = 1, 2, …, (n − 1)}, where ω_g = 0.5, ω_c = 0.3, ω_gc = 0.2.

For the first feature set, we selected the maximum number of leaf nodes (i.e. n = 28) to construct the feature vector, in order to make the feature dimensionality of all samples equal. For the second and third feature sets, however, we preserved the original number of leaf nodes of the tree structure in each sample to construct the fusion features. Their classification performance is depicted in Fig. 5, in which we find that the first feature set entirely loses the color information, because it was extracted from the gray-level texture image converted from the color texture image. Thus, its correct classification rate (CCR) is the lowest of the three feature sets. For the second feature set, the CCR improved greatly, because the color information of the I and Q components in the YIQ space was merged on top of the first feature set. On the basis of these two feature sets, we constructed the third feature set, in which the spatial correlation of color texture was merged. Its CCR is the highest of the three feature sets.

Table 1 shows the assigned weighted values and the corresponding CCRs of various feature sets based on the ICTSWD. From Table 1, we see that the CCRs are lower for the single texture or the single color feature.

[Plot: correct classification rate versus feature set dimensionality s for the ICTSWD first, second and third feature sets.]

Fig. 5. Comparison of classification performance of three feature sets.


Table 1. The assigned weighted values and corresponding classification performance of various features based on ICTSWD

Feature set                                            ωg    ωc    ωgc   CCR (20 dim.)
Gray texture feature (first feature set)               1     0     0     0.7011
Color feature                                          0     1     0     0.6731
Fusion of two features (second feature set)            0.5   0.5   0     0.8281
Fusion of three features, Case 1                       1/3   1/3   1/3   0.8938
Fusion of three features, Case 2                       0.4   0.3   0.3   0.9000
Fusion of three features, Case 3 (third feature set)   0.5   0.3   0.2   0.9063
Fusion of three features, Case 4                       0.5   0.2   0.3   0.8969
Fusion of three features, Case 5                       0.6   0.2   0.2   0.9015

[Plot: correct classification rate versus feature set dimensionality s for the first, second and third feature sets based on PWD, WPD and ICTSWD.]

Fig. 6. Comparison of classification performance using PWD, ICTSWD and WPD.

[Plot: correct classification rate versus feature set dimensionality s for PWD on the original data and on noisy data with SNR = 15, 10, 5 and 1 dB.]

Fig. 7. CCRs of PWD with noisy data.


When the two feature sets from texture and color were assigned weighted values and fused, their CCRs were obviously improved. Finally, five assignments of weighted values for the three feature sets were built up, and we see that the CCR of multiple features data fusion is improved greatly under any of the assigned weighted values. Thus, multiple features fusion can effectively increase the classification performance for colored texture.

3.2.2. Comparison of classification performance based on three kinds of decomposition

In order to compare the classification performance based on PWD, ICTSWD and WPD, we selected the decomposition level of PWD and WPD to be 3. The classification performance of the three decompositions is shown in Fig. 6, in which we see that the CCR of the first feature set is the lowest and the CCR of the third feature set is the highest for each decomposition; we also see that the CCRs of the three feature sets based on ICTSWD and WPD are higher than those based on PWD. These results confirm that the most significant information of a texture often appears in the middle frequency channels: feature extraction and fusion based on ICTSWD and WPD can reflect the texture and color information, together with the spatial correlation of color and texture, more adequately. Comparing the ICTSWD and WPD algorithms, for the third feature set the CCR of the WPD algorithm is slightly higher than that of the ICTSWD algorithm, but for the first and second feature sets the CCR of the ICTSWD algorithm is slightly higher than that of the WPD algorithm, which indicates that the CCR of the ICTSWD algorithm is nearly equal to that of the WPD algorithm. However, we also note that the dimensionality of the features from the WPD algorithm increases exponentially, while the dimensionality of the features from the ICTSWD algorithm drops greatly because of selective decomposition of the frequency channels of interest. Thus, multiple features fusion based on the ICTSWD is more suitable for colored texture analysis.

Table 2 shows a comparison of the colored texture classification performance of the three feature sets based on PWD, ICTSWD and WPD at a dimensionality of 15. From Table 2, we see that, for each decomposition, the classification performance of the third feature set is always higher than that of the second feature set, and that of the second feature set is always higher than that of the first feature set. Furthermore, despite the greatly reduced dimensionality of the ICTSWD, the performance of ICTSWD features fusion is nearly equal to that of the WPD.

Table 2
Comparison of colored texture classification performance of three feature sets based on PWD, ICTSWD and WPD

              Third feature set (15 dim.)    Second feature set (15 dim.)   First feature set (15 dim.)
              WPD      ICTSWD   PWD          WPD      ICTSWD   PWD          WPD      ICTSWD   PWD
Bark 4        1.000    0.9375   0.8750       0.8150   0.8125   0.8125       0.7500   0.8125   0.7500
Bark 6        0.5625   0.5375   0.5175       0.3850   0.4325   0.3750       0.3125   0.3250   0.3125
Bark 9        1.000    1.000    0.9565       0.9375   0.9375   0.7500       0.6250   0.6875   0.4375
Brick 1       0.9375   1.000    0.9375       0.5475   0.5625   0.5725       0.0625   0.3250   0.3125
Brick 5       0.9375   0.9375   0.8875       0.8750   0.8750   0.7800       0.6250   0.6875   0.4375
Buildings 9   1.000    1.000    0.9375       1.000    1.000    0.8750       0.8150   0.8150   0.7500
Fabric 0      1.000    1.000    0.9565       0.9375   0.9375   0.8750       0.8125   0.8250   0.8125
Fabric 7      1.000    0.9650   0.9375       0.9563   1.000    0.9375       0.8150   0.8275   0.8150
Fabric 13     1.000    1.000    1.000        1.000    0.9560   1.000        0.9375   0.9375   0.9375
Fabric 18     1.000    1.000    0.9565       1.000    1.000    0.9375       0.8275   0.8250   0.8125
Food 0        0.8175   0.7500   0.6850       0.8175   0.8750   0.8125       1.000    0.8750   0.8125
Food 2        0.6250   0.6875   0.6275       0.4325   0.5125   0.4375       0.3750   0.3750   0.1250
Food 5        1.000    1.000    0.9375       0.8125   0.9375   0.8750       0.6850   0.6875   0.6250
Grass 1       0.6875   0.6875   0.6250       0.6250   0.5625   0.5000       0.3750   0.4375   0.4375
Grass 2       0.8750   0.9375   0.8750       0.8125   0.8750   0.5225       0.6850   0.6750   0.1875
Sand 0        1.000    0.9650   1.000        0.9375   0.9375   0.9650       0.8125   0.8750   0.8750
Tile 7        1.000    1.000    1.000        0.9375   0.8750   0.9375       0.8125   0.7500   0.8125
Water 6       1.000    1.000    1.000        0.9375   0.9560   0.9375       0.8750   0.8875   0.8750
Wood 1        0.8125   0.7525   0.7500       0.4575   0.5700   0.6250       0.3850   0.4575   0.4375
Wood 2        1.000    1.000    1.000        0.9565   0.9375   0.8875       0.8750   0.9375   0.8725
Average       0.9128   0.9063   0.8731       0.8101   0.8281   0.7707       0.6700   0.7011   0.6213
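Each entry in Table 2 is a per-class classification correct rate (CCR): the fraction of test samples of that class assigned the correct label. As a hedged illustration of how such a figure can be computed, the sketch below uses a minimal 1-nearest-neighbour classifier with Euclidean distance; the paper's actual classifier and distance measure are not restated in this section and may differ.

```python
import numpy as np

def ccr_1nn(train_X, train_y, test_X, test_y):
    """Classification correct rate with a 1-nearest-neighbour rule:
    each test vector takes the label of its closest training vector
    (Euclidean distance), and CCR is the fraction labelled correctly."""
    correct = 0
    for x, y in zip(test_X, test_y):
        dists = np.linalg.norm(train_X - x, axis=1)
        if train_y[int(np.argmin(dists))] == y:
            correct += 1
    return correct / len(test_y)
```

Applied per texture class, this yields one row of a table like Table 2; averaging over the 20 classes gives the bottom row.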

Fig. 8. CCRs of ICTSWD with noisy data (correct classification rate vs. feature set dimensionality for the original image and SNR levels of 15, 10, 5 and 1 dB).


3.2.3. Comparison of anti-noise ability based on the three decompositions

In order to compare the sensitivity of the PWD, ICTSWD and WPD classification algorithms to noisy data, white noise with zero mean was added to the sample images before classification. The signal-to-noise ratio (SNR) was fixed at 15 dB, 10 dB, 5 dB and 1 dB. In this experiment, the third feature set was used for feature extraction. The classification performance of PWD, ICTSWD and WPD for the different SNR levels is shown in Figs. 7–9, respectively. From these figures we see that the classification rate is almost unaffected when the SNR is higher than 15 dB. For lower SNR, however, the classification rates of WPD drop markedly below those of PWD and ICTSWD, which shows that WPD is sensitive to noisy data, whereas the classification rate of ICTSWD remains close to that of PWD. A direct comparison of the three decompositions at each SNR level is shown in Fig. 10(a)–(e). From these five plots we conclude that multiple-feature weighted-value fusion based on ICTSWD is the preferred algorithm when classification performance and anti-noise ability are considered together.
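Fixing the SNR of the added noise at a target level amounts to scaling the noise variance from the image's mean power. A minimal sketch follows, assuming Gaussian noise and the standard definition SNR(dB) = 10·log10(P_signal / P_noise); the function name is illustrative.

```python
import numpy as np

def add_noise_at_snr(img, snr_db, rng=None):
    """Add zero-mean white Gaussian noise scaled so that the ratio of
    mean signal power to noise power equals snr_db (in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    img = img.astype(float)
    signal_power = np.mean(img ** 2)
    # Invert SNR(dB) = 10*log10(P_s / P_n) to get the noise power.
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), img.shape)
    return img + noise
```

Calling this with `snr_db` equal to 15, 10, 5 and 1 reproduces the kind of corruption levels used in this experiment.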

Fig. 9. CCRs of WPD with noisy data (correct classification rate vs. feature set dimensionality for the original image and SNR levels of 15, 10, 5 and 1 dB).

Fig. 10. Comparison of classification performance using PWD, ICTSWD and WPD for different SNR (each panel plots correct classification rate vs. feature set dimensionality).

This result is shown directly in Fig. 11, where the index on the horizontal axis denotes the signal-to-noise ratio (SNR) level: 1, original image; 2, 15 dB; 3, 10 dB; 4, 5 dB; 5, 1 dB. From Fig. 11 we see that the anti-noise ability of the ICTSWD and PWD classification algorithms is evidently higher than that of the WPD algorithm, and that the anti-noise ability of ICTSWD is close to that of PWD. Therefore, multiple-feature weighted-value fusion based on ICTSWD achieves better classification performance and anti-noise ability in color texture analysis.

Fig. 11. Comparison of classification performance using PWD, ICTSWD and WPD with noisy data (correct classification rate vs. SNR level).


4. Conclusion

In this paper, we have discussed the extraction of a fused feature that effectively merges the texture feature, the color feature, and the spatial correlation of texture and color so as to describe color texture adequately. The best color-space representation was obtained in the YIQ space.

This research shows that the ICTSWD provides a good analytic tool for texture analysis. The conventional PWD is suitable for images whose energy is concentrated in the low-frequency region, and a full WPD can represent texture features adequately but at a feature dimensionality that grows exponentially; the ICTSWD is more natural and effective for textures with dominant middle-frequency channels.

Experiments were conducted on a set of 20 natural colored texture images in which feature-level fusion and classification were performed on the basis of PWD, ICTSWD and WPD. It is worth pointing out that the colored texture feature based on ICTSWD is not only more adequate and accurate but also achieves better classification performance and anti-noise ability than the fused features based on PWD and WPD.

References

[1] T.S. Lee, Image representation using 2D Gabor wavelets, IEEE Trans. Pattern Anal. Mach. Intell. 18 (10) (1996) 959–971.
[2] I. Daubechies, Ten Lectures on Wavelets, Capital City Press, Montpelier, Vermont, 1992.
[3] S. Mallat, A theory for multiresolution signal decomposition: the wavelet representation, IEEE Trans. Pattern Anal. Mach. Intell. 11 (7) (1989) 674–693.
[4] T. Chang, C.-C.J. Kuo, Texture analysis and classification with tree-structured wavelet transform, IEEE Trans. Image Process. 2 (4) (1993) 429–441.
[5] A. Laine, J. Fan, Texture classification by wavelet packet signatures, IEEE Trans. Pattern Anal. Mach. Intell. 15 (11) (1993) 1186–1190.
[6] M. Unser, Texture classification and segmentation using wavelet frames, IEEE Trans. Image Process. 4 (11) (1995) 1549–1560.
[7] O. Pichler, A. Teuner, B.J. Hosticka, A comparison of texture feature extraction using adaptive Gabor filtering, pyramidal and tree structured wavelet transforms, Pattern Recogn. 29 (5) (1996) 733–742.
[8] G.J. Klinker, S.A. Shafer, T. Kanade, A physical approach to color image understanding, Int. J. Comput. Vision 4 (1990) 7–38.
[9] G. Wyszecki, W.S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulas, second ed., J. Wiley and Sons, New York, 1982.
[10] Q.-T. Luong, Color in computer vision, in: C.H. Chen, L.F. Pau, P.S.P. Wang (Eds.), Handbook of Pattern Recognition & Computer Vision, World Scientific, Singapore, 1993, pp. 311–368 (Chapter 2.3).
[11] Y.I. Ohta, T. Kanade, T. Sakai, Color information for region segmentation, Comput. Graph. Image Process. 13 (1980) 222–241.
[12] W.K. Pratt, Spatial transform coding of color images, IEEE Trans. Commun. Technol. 19 (6) (1971) 980–992.
[13] D. Lee, R. Barber, W. Niblack, M. Flickner, J. Hafner, D. Petkovic, Indexing for complex queries on a query-by-content image database, in: Proc. of the 12th IAPR Int. Conf. on Pattern Recognition, vol. 1, Jerusalem, Israel, 1994, pp. 142–146.
[14] J.R. Smith, S. Chang, Local color and texture extraction and spatial query, in: Proc. IEEE Int. Conf. on Image Processing, vol. 3, 1996, pp. 1011–1014.
[15] C.S. Alfred, S.H. Thomas, Segmentation of road scenes using color and fractal-based texture classification, in: Proc. IEEE Int. Conf. on Image Processing, vol. 3, July 1994, pp. 328–332.
[16] Y.F. Liang, E.P. Simoncelli, Color channels decorrelation by ICA transformation in the wavelet domain for color texture analysis and synthesis, in: Proc. IEEE Int. Conf. on Image Processing, vol. 5, October 2000, pp. 378–384.
[17] M. Mirmehdi, M. Petrou, Segmentation of color textures, IEEE Trans. Pattern Anal. Mach. Intell. 22 (2) (2000) 142–159.
[18] Y. Deng, Unsupervised segmentation of color texture regions in images and video, IEEE Trans. Pattern Anal. Mach. Intell. 23 (8) (2001) 800–810.
[19] T. Caelli, D. Reye, On the classification of image regions by color texture and shape, Pattern Recogn. 26 (4) (1993) 461–470.
[20] S.C. Tan, J. Kittler, On colour texture representation and classification, in: Proc. of the 2nd Int. Conf. on Image Processing, 1992, pp. 390–395.
[21] M. Hauta-Kasari, J. Parkkinen, T. Jaaskelainen, R. Lenz, Generalized co-occurrence matrix for multispectral texture analysis, in: Proc. of the 13th Int. Conf. on Pattern Recognition, 1996, pp. 785–789.
[22] D.K. Panjwani, G. Healey, Markov random field models for unsupervised segmentation of textured color images, IEEE Trans. Pattern Anal. Mach. Intell. 17 (10) (1995) 939–954.
[23] VisTex color image database, MIT Media Lab, 1995. Available from: <http://www-white.media.mit.edu/vismod/imagery/VisionTexture>.
[24] D. Michie, D.J. Spiegelhalter, C.C. Taylor (Eds.), Machine Learning, Neural and Statistical Classification, Ellis Horwood, 1994.
[25] K. Fukunaga, Introduction to Statistical Pattern Recognition, second ed., Academic Press, London, 1990.