Outex - New Framework for Empirical Evaluation of Texture Analysis Algorithms

Timo Ojala, Topi Mäenpää, Matti Pietikäinen, Jaakko Viertola, Juha Kyllönen and Sami Huovinen
Machine Vision and Media Processing Unit, University of Oulu, Finland
{firstname.lastname}@ee.oulu.fi http://www.outex.oulu.fi

Abstract

This paper presents the current status of a new initiative aimed at developing a versatile framework and image database for empirical evaluation of texture analysis algorithms. The proposed Outex framework contains a large collection of surface textures captured under different conditions, which facilitates construction of a wide range of texture analysis problems. The problems are encapsulated into test suites, for which baseline results obtained with algorithms from the literature are provided. The rich functionality of the framework is demonstrated with examples in texture classification, segmentation and retrieval. The framework has a web site for public dissemination of the database and comparative results obtained by research groups worldwide.

1. Introduction

"This is an awful state of affairs for the engineers whose job is to design and build image analysis or machine vision systems."

With the above comment Haralick criticized the questionable status quo in his mid-1990s survey on performance characterization in computer vision [7]. The criticism applies to texture analysis as well, for despite over three decades of active research, no performance characterization has been established in the texture analysis literature. The final outcome of an empirical evaluation of a texture analysis algorithm depends on numerous factors, both in terms of the possible built-in parameters of the texture analysis algorithm and the various decisions in the experimental setup. Due to the lack of a widely accepted benchmark, all experimental results should be considered applicable only to the reported setup.

‘Standardization’ of empirical evaluation in texture analysis has not really advanced beyond the point of using particular sets of texture images. The most widely deployed texture images have been the Brodatz album of textures [2], the VisTex database [28], the MeasTex database [26] and the Curet database [5]. Additionally, specific limited sets of images used in previous publications are often employed as a benchmark in new studies. For example, the images first used by Ohanian and Dubes [13] and Randen and Husoy [25] have appeared in several later publications.

Unfortunately, using images from the same database gives no guarantee of obtaining comparable experimental results. Individual decisions in selecting the textures included in the experiment, preprocessing the images, extracting (non)overlapping subimages from the source images, and partitioning the image data into training and testing sets can have a great impact on the final outcome of the evaluation process. And even fixing all these factors related to the image data is not enough, as different performance criteria can be employed in quantifying the final empirical performance evaluation.

In the more general scope of computer vision, several researchers have argued in favor of systematic comparative evaluation of algorithms [6][7][9][20][21][24]. In their overview of empirical evaluation in computer vision, Bowyer and Phillips [1] summarized the benefits of such functionality: 1) it would place computer vision on a solid experimental and scientific ground, 2) it would assist in developing engineering solutions to practical problems, 3) it would allow accurate assessment of the state of the art, and 4) it would provide convincing evidence to potential users that computer vision research has indeed found a practical solution to their problems. Another overview of performance characterization is provided by Christensen and Förstner [4].

This paper describes the current status of a new initiative named Outex, a framework for empirical evaluation of texture classification and segmentation algorithms.

2. Outex framework

2.1. Design principles

The proposed framework is being constructed according to the following design principles:

Large versatile image database. The image database contains a large collection of textures, both in the form of surface textures and natural scenes. The collection of surface textures exhibits well defined variations to a given reference in terms of illumination, rotation and spatial resolution.

A wide range of texture classification, retrieval and segmentation problems. A large collection of texture classification, retrieval and segmentation problems, both supervised and unsupervised, is constructed using the image database. The diversity of the surface textures provides a rich foundation for building the problems. For example, in addition to ‘standard’ texture classification, problems of illumination/rotation/resolution invariant texture classification, or their combinations, are also available. Different misclassification cost functions and a priori probabilities of classes are also incorporated.

Precise problem definition with test suites. The Outex framework provides a steadily increasing number of test suites, which encapsulate a problem to have precisely specified input and output data. Specifications are provided in the form of generic text and image files, so the user of the framework is not constrained to any given programming environment. Test suites are delivered as individual zip files, which expand to a ‘standardized’ directory and file structure. This allows processing a large number of experiments with one software implementation.
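The ‘one software implementation, many experiments’ idea can be sketched as a small driver loop. The directory layout and file names below (one subdirectory per problem, plain-text `train.txt`/`test.txt` listings of "image-file class-label" pairs) are illustrative assumptions for the sketch; the authoritative layout is defined by the Outex test suite specifications.

```python
# Sketch: run one classifier implementation over every problem in a suite.
# File and directory names here are hypothetical, not the official Outex layout.
import os
import tempfile

def read_problem(path):
    """Parse one problem directory into (train, test) lists of (image, label)."""
    def read_split(name):
        with open(os.path.join(path, name)) as f:
            return [(img, int(lbl)) for img, lbl in
                    (line.split() for line in f if line.strip())]
    return read_split("train.txt"), read_split("test.txt")

def run_suite(suite_dir, classify):
    """Apply the same implementation to every problem and collect scores (%)."""
    scores = {}
    for problem in sorted(os.listdir(suite_dir)):
        train, test = read_problem(os.path.join(suite_dir, problem))
        correct = sum(classify(train, img) == lbl for img, lbl in test)
        scores[problem] = 100.0 * correct / len(test)
    return scores

# Tiny self-contained demonstration with a fabricated two-problem suite.
with tempfile.TemporaryDirectory() as root:
    for problem in ("000", "001"):
        os.makedirs(os.path.join(root, problem))
        with open(os.path.join(root, problem, "train.txt"), "w") as f:
            f.write("canvas001.ras 0\ncarpet002.ras 1\n")
        with open(os.path.join(root, problem, "test.txt"), "w") as f:
            f.write("canvas003.ras 0\ncarpet004.ras 1\n")
    # A trivial 'classifier' that always guesses class 0 scores 50% here.
    scores = run_suite(root, lambda train, img: 0)
    print(scores)
```

The point of the standardized structure is exactly this: the loop body never changes between suites, only the suite directory does.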

Outex also provides a public repository for test suites constructed by other groups and/or from other image data not directly available at the site. Contributed test suites have to comply with the given general testing protocol and packaging specifications. Currently, there are eight contributed suites, which have been used in about a dozen different journal articles. We believe that in the long run this type of extensibility can lead to fruitful public scrutiny of new texture analysis methods and tasks.

Public WWW access. The framework, the image database and the test suites, are publicly available on the WWW (http://www.outex.oulu.fi). The web site allows searching, browsing and downloading of the image databases, searching and downloading of test suites, and uploading of results. The web site is implemented with PHP and MySQL.

Collaborative development and trust. We are inviting other research groups to join the effort, for the purpose of a more thorough and efficient long-term development of the framework, and to aid in gaining acceptance of the framework in the research community. We trust other research groups to upload to the site unbiased, honest results, obtained in accordance with the given test suite specifications.

Continuing maintenance and refinement. The host research group has a history of about two decades of texture analysis research and the current plan is to stay in the business for at least another decade; hence the commitment to maintain and refine the framework is strong.

2.2. Image database of surface textures

Image acquisition. Surface textures are captured using the setup shown in Fig. 1, which includes a Macbeth SpectraLight II Luminaire light source and a Sony DXC-755P three-chip CCD camera attached to a GMFanuc S-10, a 6-axis industrial robot. A workstation controls the light source for switching on the desired illuminant, the camera for selecting the zoom that dictates the spatial resolution, the robot arm for rotating the camera into the desired rotation angle, and a frame grabber for capturing the images.

Each surface texture is captured using three different simulated illuminants provided in the light source: 2300K horizon sunlight denoted as ‘horizon’, 2856K incandescent CIE A denoted as ‘inca’, and 4000K fluorescent TL84 denoted as ‘tl84’. The camera was calibrated using the ‘inca’ illuminant. It should be noted that despite the diffuse plate, the imaging geometry is different for each illuminant, due to their different physical locations in the light source. Each texture is captured using six spatial resolutions (100, 120, 300, 360, 500 and 600 dpi) and nine rotation angles (0°, 5°, 10°, 15°, 30°, 45°, 60°, 75° and 90°).

The frame grabber produces 24-bit RGB images of size 516 (height) x 716 (width) pixels. The aspect ratio of the pixels is roughly 1.04 (height/width). The aspect ratio is corrected by stretching the images in the horizontal direction to size 538x746 using the imresize command with bilinear interpolation provided in Matlab’s Image Processing Toolbox. Bilinear interpolation is employed instead of bicubic, because the latter may introduce halos or extra noise around edges or in areas of high contrast, which would be harmful to texture analysis. Horizontal stretching is used instead of vertical size reduction, because sampling images captured by an interline transfer camera along scan lines produces less noise and fewer digital artifacts than sampling across the scan lines.
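As an illustration of the resampling step, the following is a minimal pure-Python sketch of bilinear interpolation. The paper itself uses Matlab's imresize; this is not the actual Outex code, just the underlying arithmetic.

```python
def resize_bilinear(img, new_h, new_w):
    """Resize a 2D list of gray values with bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(new_h):
        # Map the output coordinate back into the source grid.
        y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = min(int(y), h - 2)
        fy = y - y0
        row = []
        for j in range(new_w):
            x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = min(int(x), w - 2)
            fx = x - x0
            # Weighted average of the four surrounding pixels.
            row.append(img[y0][x0] * (1 - fy) * (1 - fx)
                       + img[y0][x0 + 1] * (1 - fy) * fx
                       + img[y0 + 1][x0] * fy * (1 - fx)
                       + img[y0 + 1][x0 + 1] * fy * fx)
        out.append(row)
    return out

# Horizontal stretching (e.g. 516x716 -> 538x746) is the same call with a
# wider target; here a tiny 2x2 patch is stretched to 2x3 for illustration.
patch = [[0, 10], [20, 30]]
print(resize_bilinear(patch, 2, 3))  # [[0.0, 5.0, 10.0], [20.0, 25.0, 30.0]]
```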

Given three illuminants, six spatial resolutions and nine rotation angles, 162 24-bit RGB images are captured from each texture sample. Each RGB image is accompanied by the corresponding 8-bit intensity image.
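The size of the capture grid follows directly from the three acquisition parameters; a quick enumeration confirms the counts stated in the text:

```python
from itertools import product

illuminants = ("horizon", "inca", "tl84")         # 2300K, 2856K, 4000K
resolutions = (100, 120, 300, 360, 500, 600)      # dpi
angles      = (0, 5, 10, 15, 30, 45, 60, 75, 90)  # degrees

# One RGB capture per (illuminant, resolution, angle) combination.
captures = list(product(illuminants, resolutions, angles))
print(len(captures))         # 162 RGB images per texture sample
print(len(captures) * 319)   # 51678 images for the current 319 textures
```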

Collection of surface textures. The collection of surface textures is expanding continuously. At this very moment it contains 319 textures, both macrotextures and microtextures. Many textures have varying local color properties, which results in challenging local gray scale variations in the intensity images. Some of the source textures have a large tactile dimension, which can induce considerable local gray scale distortions due to shadows. Each source texture is imaged according to the procedure described in the previous paragraph; hence the current database of surface textures comprises 51678 images, in both RGB and 8-bit gray scale, totaling about 82 GB of disk space. Examples of surface textures captured at 100 dpi using the ‘inca’ illuminant are shown in Fig. 2.

Fig. 1. a) Imaging setup. b) Relative positions of texture sample, illuminant and camera. c) Spectra of the illuminants.

Ground truth data. The ground truth data contains the known class labels assigned to each source texture. Currently, four different ground truth images (templates) shown in Fig. 3 are used in constructing texture mosaics for texture segmentation problems.

In three of the four templates the boundaries between adjacent textures are sinusoids with random magnitude and wavelength, instead of artificially straight lines. The fourth template is the 16-texture layout used by Randen and Husoy [25].
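A sinusoidal boundary of this kind is easy to sketch. The snippet below generates a hypothetical two-region template; the magnitude and wavelength ranges are illustrative assumptions, not the actual Outex values.

```python
import math
import random

def sinusoid_boundary_template(h, w, seed=0):
    """Label an h-by-w grid into two regions separated by a vertical
    boundary perturbed by a sinusoid of random magnitude and wavelength.
    The parameter ranges below are illustrative, not the Outex originals."""
    rng = random.Random(seed)
    amplitude  = rng.uniform(0.05, 0.15) * w          # random magnitude
    wavelength = rng.uniform(0.25, 1.0) * h           # random wavelength
    phase      = rng.uniform(0.0, 2.0 * math.pi)
    template = []
    for y in range(h):
        boundary = w / 2 + amplitude * math.sin(2 * math.pi * y / wavelength + phase)
        template.append([0 if x < boundary else 1 for x in range(w)])
    return template

gt = sinusoid_boundary_template(64, 64)
# With amplitude capped at 0.15*w the boundary stays inside the image,
# so every row contains both region labels.
print(all(0 in row and 1 in row for row in gt))
```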

Fig. 3. Different ground truth images used in constructing texture mosaics: GT_05_GS, GT_09_3x3, GT_16_4x4 and GT_16_RH.

Fig. 2. One example from each category in the current collection of surface textures. The number in parentheses denotes the number of textures in that category. The images are 512x512 pixels in size, printed at 464 dpi and histogram equalized for visualization purposes. Categories: barleyrice004 (11), canvas023 (46), cardboard001 (1), carpet009 (12), chips011 (23), crushedstone008 (8), flakes010 (10), flour002 (13), foam002 (4), fur002 (12), granite001 (10), granular003 (3), gravel007 (7), groats004 (7), leather004 (5), mineral001 (6), paper001 (10), pasta003 (6), pellet001 (4), plastic007 (47), quartz004 (6), rubber001 (1), sand005 (5), sandpaper007 (8), seeds005 (13), tile004 (7), wallpaper019 (20), wood011 (12), wool002 (2).

2.3. Test suites with baseline results

‘Black box’ approach. The algorithm being tested is treated as a ‘black box’. In other words, in contrast to e.g. [3], we are not interested in the internal properties of the algorithm, such as the number of features or discriminant functions employed. Instead, the algorithm is presented with a precisely defined problem and a specification of the required output. What happens between the input of the problem and the output of the results is entirely the responsibility of the developer of the algorithm.

The reasoning behind our ‘black box’ treatment of the texture analysis algorithm is very straightforward. Given a task, what really matters in terms of performance is only the quality of the output and the (computational) cost at which the output was obtained, both of which are incorporated in the performance evaluation.

Generation of image data and ground truth data included in the suites. The images used in a texture classification suite are extracted from the given set of source images (particular texture classes, illuminations, spatial resolutions, and rotation angles) by centering the sampling grid so that equally many pixels are left over on each side of the sampling grid. Thus, for window sizes of 128x128, 64x64 and 32x32 pixels, 20, 88 and 368 samples in total, respectively, are obtained from a given source image, assuming the texture sample spans the whole image.
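The sample counts follow from integer division of the corrected 538x746 image size by the window size; the centered-grid arithmetic can be sketched as:

```python
def centered_grid_count(src_h, src_w, win):
    """Number of non-overlapping win-by-win samples on a centered grid,
    plus the (top, left) margins of pixels left over on each side."""
    rows, cols = src_h // win, src_w // win
    margins = ((src_h - rows * win) // 2, (src_w - cols * win) // 2)
    return rows * cols, margins

for win in (128, 64, 32):
    n, margins = centered_grid_count(538, 746, win)
    print(win, n, margins)   # 20, 88 and 368 samples respectively
```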

To remove the effect of global first and second order gray scale properties in intensity images, each intensity image is individually normalized to have an average intensity of 128 and a standard deviation of 20. If the training and testing images of a particular texture classification problem are extracted from the same set of source images, the images are divided randomly into two halves of equal size for the purpose of obtaining an unbiased performance estimate. Random partitioning may be repeated N times, resulting in a test suite of N individual problems, which facilitates more reliable performance evaluation than just a single-shot experiment.
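The normalization and the repeated random halving described above can be sketched as follows; this is an illustrative reimplementation, not the actual Outex code.

```python
import random
import statistics

def normalize_intensity(pixels, target_mean=128.0, target_std=20.0):
    """Linearly remap pixel values so the image has the target mean and
    standard deviation, removing global first/second order properties."""
    mean = statistics.fmean(pixels)
    std = statistics.pstdev(pixels)
    return [(p - mean) / std * target_std + target_mean for p in pixels]

def random_halves(samples, n_splits, seed=0):
    """Repeat the random train/test halving N times, one split per problem."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_splits):
        shuffled = rng.sample(samples, len(samples))
        half = len(samples) // 2
        splits.append((shuffled[:half], shuffled[half:]))
    return splits

img = normalize_intensity([10.0, 20.0, 30.0, 40.0])
print(round(statistics.fmean(img)), round(statistics.pstdev(img)))  # 128 20
```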

The texture mosaics used in (un)supervised texture segmentation suites are generated from a set of candidate source images employing a ground truth image (template) defining the layout of the mosaic. If the template contains R regions, R different images are randomly chosen from the candidate images. The image region included in the mosaic and the training image needed for supervised segmentation are randomly extracted from the source image so that they do not overlap.
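A sketch of this mosaic-generation step follows; rejection sampling is used here as one simple (assumed, not necessarily the original) way to obtain non-overlapping windows.

```python
import random

def make_mosaic_sources(template_regions, candidates, src_h, src_w, win, seed=0):
    """For each of the R template regions pick a distinct source image and
    two non-overlapping win-by-win windows: one for the mosaic region,
    one for the supervised-segmentation training image."""
    rng = random.Random(seed)
    chosen = rng.sample(candidates, template_regions)  # R distinct sources
    picks = []
    for src in chosen:
        # Draw two window origins until they do not overlap (rejection).
        while True:
            a = (rng.randrange(src_h - win), rng.randrange(src_w - win))
            b = (rng.randrange(src_h - win), rng.randrange(src_w - win))
            if abs(a[0] - b[0]) >= win or abs(a[1] - b[1]) >= win:
                picks.append((src, a, b))
                break
    return picks

# Five regions (as in GT_05_GS) drawn from twelve hypothetical candidates.
picks = make_mosaic_sources(5, [f"tex{i:03d}" for i in range(12)], 538, 746, 128)
print(len(picks))  # 5
```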

Test suite types and their testing protocols. The proposed framework contains four basic types of test suites: texture classification (“TC”), texture retrieval (“TR”), supervised texture segmentation (“SS”), and unsupervised texture segmentation (“US”).

The purpose of an individual test suite is to encapsulate a meaningful entity used in the empirical evaluation of a texture analysis algorithm. A test suite may contain a large number of classification or segmentation problems, which have the same basic structure, but for example different partitionings of the image data into training and testing sets, or different collections of textures.

Texture classification

| Suite ID (Outex_) | N | Image type | Textures | Window size | Illum. | Rotat. | Scales | Costs | A priori | Score | Method | Comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TC_00000 | 100 | gray | 24 | 128x128 | . | . | . | . | . | 99.5 | Gabor filtering [11] | 24 textures, 128x128 window size |
| TC_00001 | 100 | gray | 24 | 64x64 | . | . | . | . | . | 97.8 | Gabor filtering [11] | 64x64 window size |
| TC_00002 | 100 | gray | 24 | 32x32 | . | . | . | . | . | 92.2 | Gabor filtering [11] | 32x32 window size |
| TC_00003 | 100 | gray | 24 | 128x128 | . | . | . | yes | . | 99.5 | LBP [16] | different classwise misclassification costs |
| TC_00004 | 100 | gray | 24 | 64x64 | . | . | . | yes | . | 97.9 | Gabor filtering [11] | same with 64x64 window size |
| TC_00005 | 100 | gray | 24 | 32x32 | . | . | . | yes | . | 92.3 | Gabor filtering [11] | same with 32x32 window size |
| TC_00006 | 100 | gray | 24 | 64x64 | . | . | . | . | yes | 97.9 | Gabor filtering [11] | different classwise a priori probabilities |
| TC_00007 | 100 | gray | 24 | 32x32 | . | . | . | . | yes | 94.8 | Gabor filtering [11] | same with 32x32 window size |
| TC_00008 | 100 | gray | 24 | 64x64 | . | . | . | yes | yes | 97.8 | Gabor filtering [11] | diff. classwise costs and a priori probabilities |
| TC_00009 | 100 | gray | 24 | 32x32 | . | . | . | yes | yes | 94.8 | Gabor filtering [11] | same with 32x32 window size |
| TC_00010 | 1 | gray | 24 | 128x128 | . | yes | . | . | . | 97.9 | LBP_{P,R}^riu2/VAR_{P,R} [17] | rotation invariant analysis |
| TC_00011 | 1 | gray | 24 | 128x128 | . | . | yes | . | . | 99.2 | Gabor filtering [11] | scale invariant analysis |
| TC_00012 | 2 | gray | 24 | 128x128 | yes | yes | . | . | . | 87.2 | LBP_{P,R}^riu2/VAR_{P,R} [17] | rotation and illumination invariant analysis |
| TC_00013 | 1 | RGB | 68 | 128x128 | . | . | . | . | . | 94.7 | 3-D RGB histogram [12] | color texture analysis |
| TC_00014 | 1 | RGB | 68 | 128x128 | yes | . | . | . | . | 69.0 | LBP_{16,2}^riu2 [12] | color texture analysis with illum. invariance |

Texture retrieval

| Suite ID (Outex_) | N | Image type | Textures | Window size | Illum. | Rotat. | Scales | Costs | A priori | Score | Method | Comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TR_00000 | 6380 | gray | 319 | 128x128 | . | . | . | . | . | 63.7 | LBP multiresolution [14] | large-scale texture retrieval experiment |

Table 1: Current texture classification and texture retrieval test suites based on Outex surface textures. Scores are percentages of the best published result; a dot denotes that the corresponding variation is not present in the suite.

The motivation in packing a large number of problems with known variation, for example in terms of textures or illumination, is to facilitate thorough and rigorous evaluation of the relevant properties of the algorithm, e.g. computational complexity and robustness with respect to its built-in parameters. For the latter purpose, any external parameters provided manually by the user are required to remain constant throughout the suite. Further, individual test suites may be combined into challenging ‘grand suites’ assessing the performance in many different respects. Detailed descriptions of the testing protocols of each test suite type are available at the Outex web site.

Texture classification test suites. Current texture classification suites based on Outex surface textures are listed in Table 1. Suites Outex_TC_00000-00012 are based on a set of 24 textures, obtained by varying different parameters of the experimental design. Suites Outex_TC_00013-14 deal with color texture analysis on a larger set of 68 textures.

These example test suites demonstrate the versatile construction of texture classification problems facilitated by the framework and the database of surface textures. The collection of texture classification suites will be augmented with more challenging problems involving a larger number of textures and different experimental designs.

Texture retrieval test suites. Currently there is just one texture retrieval test suite, involving all 319 textures in the Outex database (Table 1). This suite was successfully employed in a recent empirical evaluation of MPEG-7 texture descriptors [14]. Considering that the best retrieval rate obtained in the suite is only 63.7%, there is still plenty of room for improvement by future studies and new descriptors. We will also add even more challenging retrieval suites, including different illuminations, spatial resolutions and rotation angles.

Texture segmentation test suites. Currently, there is only one suite each for supervised and unsupervised texture segmentation (Table 2). The 100 texture mosaics included in the supervised texture segmentation suite Outex_SS_00000 are constructed from a set of candidate source images (12 textures, 100 dpi resolution, 9 rotation angles, ‘inca’ illuminant) using the ground truth image GT_05_GS. The same 100 texture mosaics are used in the unsupervised texture segmentation suite Outex_US_00000 as well, but this time the corresponding training images are not provided.

The two results listed for Outex_US_00000 differ in that method [15] determines the number of regions itself (75.0% of cases correctly determined, for which the segmentation accuracy is 95.6%), whereas method [8] is provided with the number of regions (structure of the mosaic detected correctly in 93.0% of the cases, for which the segmentation accuracy is 95.7%).

The collection of segmentation test suites will expand, involving more complex ground truth images (i.e. mosaic layouts) and larger sets of candidate source images.

Contributed test suites. The current collection of contributed test suites is described in Table 3. All these problems have been used in works published in well-known journals, hence they provide a useful benchmark for future studies.

3. Future work

We believe that the Outex framework is now sufficiently functional to serve as a starting point for collaborative development with other research groups. We think that this type of cooperation is the best way to ensure that Outex at least has a chance to gain the acceptance of the research community.

Several features of the framework deserve much more thoughtful attention than what we have been able to contribute so far, such as performance metrics, in particular for segmentation of natural scenes. Another planned effort to spur further interest in the framework is to organize a public competition in texture segmentation, classification and retrieval.

In parallel with the collaborative development, we will continue adding new textures, test suites and baseline results to the database. An important addition to the image database will be natural scenes. We have captured a number of natural outdoor scenes for this purpose. We are currently busy both creating reliable manual segmentations for them, to be employed as ground truth in segmentation problems, and manually extracting regions of identified natural texture categories for classification problems.

Welcome to the Outex site http://www.outex.oulu.fi!

| Suite ID (Outex_) | N | GT | Textures | Ill. | Rot. | Scales | Best score | Method |
|---|---|---|---|---|---|---|---|---|
| SS_00000 | 100 | GT_05_GS | 12 | 1 | 9 | 1 | 89.4 | [19] |
| US_00000 | 100 | GT_05_GS | 12 | 1 | 9 | 1 | 75.0/95.6 | [15] |
| US_00000 | 100 | GT_05_GS | 12 | 1 | 9 | 1 | 93.0/95.7 | [8] |

Table 2: Current texture segmentation test suites. The Textures, Ill., Rot. and Scales columns describe the candidate source images.

| Suite ID (Contrib_) | Short description with related reference(s) |
|---|---|
| TC_00000 | Rotation invariant texture classification, 16 Brodatz textures, 10 rotation angles, original setup of [23], [17] |
| TC_00001 | Rotation invariant texture classification, 16 Brodatz textures, 10 rotation angles, more challenging setup modified from [23], [17] |
| TC_00002 | Rotation invariant texture classification, 15 Brodatz textures, 7 rotation angles, 32x32 window size [22] |
| TC_00003 | Rotation invariant texture classification, 15 Brodatz textures, 7 rotation angles, 64x64 window size [22] |
| TC_00004 | Texture classification, 32 Brodatz textures [27][19] |
| TC_00005 | Texture classification, 11 mixtures of barley and rice [18] |
| SS_00000 | Supervised texture segmentation, 12 mosaics from [25], [19] |
| US_00000 | Unsupervised texture segmentation, 5 mosaics, [10][15] |

Table 3: Current contributed test suites.

Acknowledgments

The authors wish to thank the numerous organizations and individuals who have contributed to Outex in the form of imagery, samples and baseline results. Please see the Outex web site for a complete listing.

The financial support from the Academy of Finland and the Graduate School in Electronics, Telecommunications and Automation is gratefully acknowledged.

References

[1] K.W. Bowyer and P.J. Phillips, “Overview of Work in Empirical Evaluation of Computer Vision Algorithms”, in Empirical Evaluation Techniques in Computer Vision (eds. K.W. Bowyer and P.J. Phillips), IEEE Computer Society Press, Los Alamitos, 1998, pp. 1-11.

[2] P. Brodatz, Textures: A Photographic Album for Artists and Designers, Dover, New York, 1966.

[3] K.I. Chang, K.W. Bowyer and M. Sivagurunath, “Evaluation of Texture Segmentation Algorithms”, Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Ft. Collins, CO, 1999, vol. 1, pp. 294-299.

[4] H. Christensen and W. Förstner, Special Issue on Performance Evaluation, Machine Vision and Applications, 1997, vol. 9.

[5] K.J. Dana, B. van Ginneken, S.K. Nayar and J.J. Koenderink, “Reflectance and Texture of Real-World Surfaces”, ACM Transactions on Graphics, 1999, vol. 18, pp. 1-34. http://www.cs.columbia.edu/CAVE/curet/.

[6] R.M. Haralick, “Computer Vision Theory: The Lack Thereof”, Computer Vision, Graphics, and Image Processing, 1986, vol. 36, pp. 372-386.

[7] R.M. Haralick, “Performance Characterization in Computer Vision”, CVGIP: Image Understanding, 1994, vol. 60, pp. 245-249.

[8] T. Hofmann, J. Puzicha and J.M. Buhmann, “Unsupervised Texture Segmentation in a Deterministic Annealing Framework”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, vol. 20, pp. 803-818.

[9] R. Jain and T. Binford, “Ignorance, Myopia, and Naivete in Computer Vision Systems”, CVGIP: Image Understanding, 1991, vol. 53, pp. 112-117.

[10] A.K. Jain and K. Karu, “Learning Texture Description Masks”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996, vol. 18, pp. 195-205.

[11] B.S. Manjunath and W.Y. Ma, “Texture Features for Browsing and Retrieval of Image Data”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996, vol. 18, pp. 837-842.

[12] T. Mäenpää, M. Pietikäinen and J. Viertola, “Separating Color and Pattern Information for Color Texture Discrimination”, Proc. 16th International Conference on Pattern Recognition, Quebec, Canada, 2002, in press.

[13] P.P. Ohanian and R.C. Dubes, “Performance Evaluation for Four Classes of Textural Features”, Pattern Recognition, 1992, vol. 25, pp. 819-833.

[14] T. Ojala, T. Mäenpää, J. Viertola, J. Kyllönen and M. Pietikäinen, “Empirical Evaluation of MPEG-7 Texture Descriptors with a Large-Scale Experiment”, Proc. 2nd International Workshop on Texture Analysis and Synthesis, Copenhagen, Denmark, 2002, in press.

[15] T. Ojala and M. Pietikäinen, “Unsupervised Texture Segmentation Using Feature Distributions”, Pattern Recognition, 1999, vol. 32, pp. 477-486.

[16] T. Ojala, M. Pietikäinen and D. Harwood, “A Comparative Study of Texture Measures with Classification Based on Feature Distributions”, Pattern Recognition, 1996, vol. 29, pp. 51-59.

[17] T. Ojala, M. Pietikäinen and T. Mäenpää, “Multiresolution Gray Scale and Rotation Invariant Texture Classification with Local Binary Patterns”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(7), in press.

[18] T. Ojala, M. Pietikäinen and J. Nisula, “Determining Composition of Grain Mixtures by Texture Classification Based on Feature Distributions”, International Journal of Pattern Recognition and Artificial Intelligence, 1996, vol. 10, pp. 73-82.

[19] T. Ojala, K. Valkealahti, E. Oja and M. Pietikäinen, “Texture Discrimination with Multidimensional Distributions of Signed Gray Level Differences”, Pattern Recognition, 2001, vol. 34, pp. 727-739.

[20] T. Pavlidis, “Why Progress in Machine Vision Is So Slow”, Pattern Recognition Letters, 1992, vol. 13, pp. 221-225.

[21] P.J. Phillips and K.W. Bowyer, “Empirical Evaluation of Computer Vision Algorithms”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, vol. 21, pp. 289-290.

[22] M. Pietikäinen, T. Ojala and Z. Xu, “Rotation-Invariant Texture Classification Using Feature Distributions”, Pattern Recognition, 2000, vol. 33, pp. 43-52.

[23] R. Porter and N. Canagarajah, “Robust Rotation-Invariant Texture Classification: Wavelet, Gabor Filter and GMRF Based Schemes”, IEE Proc. - Vision, Image and Signal Processing, 1997, vol. 144, pp. 180-188.

[24] K. Price, “Anything You Can Do, I Can Do Better (No You Can’t)”, Computer Vision, Graphics, and Image Processing, 1986, vol. 36, pp. 387-391.

[25] T. Randen and J.H. Husoy, “Filtering for Texture Classification: A Comparative Study”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, vol. 21, pp. 291-310.

[26] G. Smith and I. Burns, “Measuring Texture Classification Algorithms”, Pattern Recognition Letters, 1997, vol. 18, pp. 1495-1501. MeasTex Image Texture Database and Test Suite, CSSIP, University of Queensland, Australia. http://www.cssip.uq.edu.au/staff/meastex/meastex.html.

[27] K. Valkealahti and E. Oja, “Reduced Multidimensional Cooccurrence Histograms in Texture Classification”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, vol. 20, pp. 90-94.

[28] VisTex Vision Texture Database, Vision and Modeling Group, MIT Media Laboratory, 1995, http://www-white.media.mit.edu/vismod/imagery/VisionTexture/.