--- Technical Paper on ‘Visual Search’ by Group C6 of B.Tech. (CSE) for Minor Project, November 2008 ---

VISUAL SEARCH
Lov Loothra, Ashish Goel, Prateek and Shikha Vashistha

Department of Information Technology and Computer Science Engineering
Amity School of Engineering and Technology, Bijwasan

Abstract – This paper describes the implementation of an application which accepts an image as input from the user and finds images that are similar to it from a specified directory. Similar images may be defined as images that bear an exact (pixel to pixel) resemblance to the query image or images that depict some likeness to the query image in terms of their intensities (color), overall shape (texture) or a combination of these two factors. The application also aims to index or sort the images of the database in order of their similarity to the query image, i.e., from the most similar to the least similar image.

Index Terms – edge detection, Hausdorff distance, image codification, image comparison, image indexing, image similarity

1. INTRODUCTION

As of now, almost all popular search engines are text or tag based, i.e., they search for a web page, an image, a video, etc. on the basis of keywords used to describe/store them. This provides extremely accurate and practical results when we want to search for a particular topic or for information contained in a web page. But the same method usually leads to somewhat inaccurate results when we’re specifically searching for images, videos or related media, for the simple reason that one person’s description may not be accurate enough to cover all relevant keywords.

Instead, if we use an image itself as the search ‘keyword’ and check for images that are similar to it, we’re bound to get more accurate results. This is especially useful when the user knows what he wants to obtain as a result of the search: it could be an image similar to the one he inputs, an image of higher quality (better resolution) or an image that ‘contains’ the image he’s input.

2. IMAGE & IMAGE SIMILARITY

A digital image is a function f (x, y) which has been discretized in spatial coordinates and brightness. It can also be represented as a matrix, in which the row and column indices identify a point in the image, and the corresponding matrix entry gives the level of gray (or color) at that point (pixel).

The volume of data required for the storage (and processing) of an image makes it convenient to work on a codification of the image, i.e., on a minimal set of data that preserves (and allows reconstruction of) the most important characteristics of the image. Besides, codification usually allows redundant information to be removed, and it is easy to carry out enhancement and analysis of the image directly on its codified representation.

Obviously, the reduction of the original image data can be associated with a relative loss of information. It is always convenient that the codification admits inversion (i.e., recovering the original image, or an approximation of it, with the slightest error). Also, it would be important for the codification to remain invariant under modifications made to the image, such as color, scale or texture changes. But this, at the same time, requires the codified representation to store some extra information to make such an inversion possible.

Traditionally, the problem of image similarity analysis – i.e., the problem of finding the subset of an image bank with characteristics similar to a given image – has been solved by computing a "signature" (codification) of each image to be compared; correspondence between the signatures can then be analyzed by means of a distance function that measures the degree of approximation between two given signatures.

Traditional methods to compute signatures are based on some attributes of the image (for example, color histogram, recognition of a fixed pattern, number of components of a given type, etc.). This "linearity" of the signature makes it really difficult to obtain data about attributes which were not considered in the signature (and which could be relevant to the similarity or difference between two images). For instance, if we only take color histograms into account, we would not take image texture into account, nor would we be able to recognize similar objects painted in different colors.

There are several well-researched methods in the domain of image processing that can be used to formulate a working visual-query based database search application. The techniques used in our project are briefly described below. Furthermore, this paper elucidates the nuances of the actual implementation of the visual search application.

3. HASHING

A cryptographic hash function is a transformation that takes an input (or 'message') and returns a fixed-size string, called the hash value. The ideal hash function has three main properties: it is extremely easy to calculate a hash for any given data; it is extremely difficult, or practically impossible, to construct a message that has a given hash; and it is extremely unlikely that two different messages, however close, will have the same hash.

By computing and then comparing the hash of each image, it can be quickly ascertained whether the images are identical or not.

4. COLOR MAP

A pixel-by-pixel comparison of two images can also determine whether the images are alike. This, however, becomes highly inefficient for large images and, at the same time, does not take regional or spatial similarity (or dissimilarity) into account. Hence we use Color Maps. In our implementation, a Color Map represents an image divided into blocks. These blocks (of a predetermined size) are made up of a group of pixels and represent the average pixel intensity of a particular area of the image.

Corresponding blocks of two image maps can then be compared to determine similarity or dissimilarity.

5. EDGE DETECTION

Edges characterize boundaries, and detecting them is therefore a problem of fundamental importance in image processing. Edges in images are areas with strong intensity contrasts – a jump in intensity from one pixel to the next. Detecting the edges of an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties of the image.

6. HAUSDORFF DISTANCE

The Hausdorff distance [1] measures the extent to which each point of a ‘model’ set lies near some point of an ‘image’ set and vice versa. Thus, this distance can be used to determine the degree of resemblance between two objects that are superimposed on one another. Computing the Hausdorff distance between all possible relative positions of the query image and the database image can solve the problem of detecting image containment. The Hausdorff distance computation differs from many other shape comparison methods in that no correspondence between the query image and database image(s) is derived [1]. The method is quite tolerant of small position errors as occur with edge detectors and other feature extraction methods. Moreover, the method extends naturally to the problem of comparing a portion of a model against an image.

7. DETAILS OF IMPLEMENTATION

The application, while searching, considers:
1. Exact match(es) (of the Source Image)
2. Color
3. Texture (Shape)

The first point involves searching the target directory for an image or images that are exact replicas of the query image. This is accomplished using the hashing technique (explained below). The second and third points involve searching for non-exact images that bear some degree of resemblance to the query image. For this, the images (query and database) are first subjected to the edge-detection filter and, subsequently, the Hausdorff metric of the filtered database images with respect to the query image is computed. Also, the generated Color Maps of the images are compared trivially to generate a difference metric. These are used to determine the degree of similarity. The nuances of the implementation of the above techniques are detailed below.

7.1 HASHING TECHNIQUE

The SHA hash functions are a set of cryptographic hash functions designed by the National Security Agency (NSA) and published by NIST as a U.S. Federal Information Processing Standard. SHA stands for Secure Hash Algorithm. The five algorithms are denoted SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512. The latter four variants are sometimes collectively referred to as SHA-2. SHA-1 produces a message digest that is 160 bits long; the number in each of the other four algorithm names denotes the bit length of the digest it produces. The classes used for computing these hashes are predefined in System.Security.Cryptography [6] and can be freely used in any .NET or Visual Studio implementation.

Hashing is a faster way to compare the images, allowing the tests to complete in a timely manner, than comparing the individual pixels of each image using GetPixel(x, y) [5][6]. Hashes of two images should match if and only if the corresponding images also match. Small changes to an image result in large, unpredictable changes in its hash. This property of the generated hashes can be used to find exact matches (duplicates) of the query image.

The ComputeHash [6] method of this class takes a byte array of data as an input parameter and produces a 256-bit hash of that data. By computing and then comparing the hash of each image, we can quickly tell whether the images are identical. The problem was hence to devise a way to convert the image data stored in the Bitmap [5][6] objects into a form suitable for passing to the ComputeHash method, namely a byte array. The ImageConverter [6] class was thus used to convert the Image (or Bitmap) objects into the hash-able byte array.
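A minimal sketch of this step is given below, assuming the SHA256 and ImageConverter classes from the .NET base class library; the class and method names of the sketch itself are illustrative, not the exact implementation.

using System;
using System.Drawing;
using System.Security.Cryptography;

static class ImageHasher
{
    // Convert the Bitmap to a byte array via ImageConverter and hash it with SHA-256.
    public static byte[] HashImage(Bitmap bmp)
    {
        ImageConverter converter = new ImageConverter();
        byte[] data = (byte[])converter.ConvertTo(bmp, typeof(byte[]));
        using (SHA256 sha = SHA256.Create())
        {
            return sha.ComputeHash(data);
        }
    }

    // Two images are treated as exact duplicates iff their hashes match byte for byte.
    public static bool AreIdentical(Bitmap a, Bitmap b)
    {
        byte[] ha = HashImage(a), hb = HashImage(b);
        if (ha.Length != hb.Length) return false;
        for (int i = 0; i < ha.Length; i++)
            if (ha[i] != hb[i]) return false;
        return true;
    }
}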

Examples: [7.1.1], [7.1.2].

7.2 COLOR MAPS

Color Maps can be easily and efficiently generated for small images by taking the respective Red, Green and Blue averages of a Block (16x16 in our implementation) at a time dynamically using:

IntnstyAvg = (IntnstyAvg * (p – 1) + CIntnsty) / p

where p represents the index (count) of the current pixel within the block, and CIntnsty represents the intensity of the current pixel.

However, this method deteriorates quickly as the image size increases and the number of pixels goes up to a few million. The most practical and efficient solution is to scale the image down to a fixed size. For this we need to know the scale factor, sf, based on the image dimensions and the fixed size itself:

MAX_DIM = Max(Img_Width, Img_Height)
sf = FIXED_SIZE / MAX_DIM

Therefore, we have:

New_Width = sf * Img_Width
New_Height = sf * Img_Height

Once an image is scaled, the Intensity Average for each block is computed and stored. The intensity of a particular pixel is obtained using the trivial GetPixel(x, y) method. These stored values of the regional blocks (say A1, B1 for two images A, B) can then be compared by a simple absolute difference scaled over the 8 bits used to represent each color component (RGB):

Difference = 1 - |Blk_A1_Avg - Blk_B1_Avg| / 255
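The sketch below illustrates how a Color Map along these lines can be built and compared. FIXED_SIZE, the class and method names, and the use of a single grayscale average per block (instead of separate R, G, B averages) are assumptions of this sketch, not the exact implementation.

using System;
using System.Drawing;

static class ColorMap
{
    const int FIXED_SIZE = 256;   // assumed fixed size for the longer image dimension
    const int BLOCK = 16;         // block size, as in the implementation above

    // Scale the image so that its longer side equals FIXED_SIZE.
    public static Bitmap Scale(Bitmap img)
    {
        int maxDim = Math.Max(img.Width, img.Height);
        double sf = (double)FIXED_SIZE / maxDim;
        return new Bitmap(img, (int)(sf * img.Width), (int)(sf * img.Height));
    }

    // Average intensity of each BLOCK x BLOCK region, using the running average
    // IntnstyAvg = (IntnstyAvg * (p - 1) + CIntnsty) / p.
    public static double[,] Build(Bitmap img)
    {
        int bw = (img.Width + BLOCK - 1) / BLOCK, bh = (img.Height + BLOCK - 1) / BLOCK;
        double[,] map = new double[bw, bh];
        int[,] count = new int[bw, bh];
        for (int y = 0; y < img.Height; y++)
            for (int x = 0; x < img.Width; x++)
            {
                Color c = img.GetPixel(x, y);
                double intensity = (c.R + c.G + c.B) / 3.0;
                int bx = x / BLOCK, by = y / BLOCK;
                int p = ++count[bx, by];
                map[bx, by] = (map[bx, by] * (p - 1) + intensity) / p;
            }
        return map;
    }

    // Per-block similarity 1 - |Blk_A_Avg - Blk_B_Avg| / 255, averaged over all blocks.
    public static double Compare(double[,] a, double[,] b)
    {
        double sum = 0; int n = 0;
        for (int i = 0; i < Math.Min(a.GetLength(0), b.GetLength(0)); i++)
            for (int j = 0; j < Math.Min(a.GetLength(1), b.GetLength(1)); j++)
            {
                sum += 1.0 - Math.Abs(a[i, j] - b[i, j]) / 255.0;
                n++;
            }
        return n == 0 ? 0.0 : sum / n;
    }
}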

Examples: [7.2.1], [7.2.2].

7.3 SOBEL EDGE DETECTION

There are many ways to perform edge detection. However, most of the different methods may be grouped into two categories: gradient and Laplacian. The gradient method detects the edges by looking for the maximum and minimum in the first derivative of the image. The Laplacian method searches for zero crossings in the second derivative of the image to find edges.

Suppose we have a signal with an edge shown by the jump in intensity in [FIG 7.3.1]. If we take the gradient of this signal (which, in one dimension, is just the first derivative with respect to t), we get the signal shown in [FIG 7.3.2].

Clearly, the derivative shows a maximum located at the center of the edge in the original signal. This method of locating an edge is characteristic of the ‘gradient filter’ family of edge detection filters and includes the Sobel method [3]. A pixel location is declared an edge location if the value of the gradient exceeds some threshold. As mentioned before, edges have higher pixel intensity values than their surroundings.

Based on this one-dimensional analysis, the theory can be carried over to two dimensions as long as there is an accurate approximation to calculate the derivative of a two-dimensional image. The Sobel operator performs a 2-D spatial gradient measurement on an image. Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image.

The Sobel edge detector uses a pair of 3x3 convolution masks [3], one estimating the gradient in the x-direction (columns, Gx) [FIG 7.3.3] and the other estimating the gradient in the y-direction (rows, Gy) [FIG 7.3.3]. A convolution mask is usually much smaller than the actual image. As a result, the mask is slid over the image, manipulating a square of pixels at a time. An approximate magnitude can then be calculated using: |G| = |Gx| + |Gy| [3]

The actual algorithm involves the computation of the grayscale of the image (if required) followed by the application of the gradient masks.

In our implementation, we used the Bitmap class to represent the image. The GetPixel(x, y) method was used to obtain the Color [5][6] value of the pixel located at (x, y). The working loop traversed the entire dimensions of the image and obtained the Color value (a 24-bit value for modern images). By taking the average of the RGB components of the Color value, we converted it to an 8-bit grayscale. The computed value was then stored in a matrix as a simple integer between 0 and 255 for easy recall.

The active pixel region, centered on the current pixel location (say x, y), was then subjected to the gradient masks. The region included the 8 pixels adjacent to the active pixel, for a total of 9 pixels, which could be directly correlated (using the Hadamard product) with the 3x3 gradient matrices and summed to produce the gradient values in the x and y directions. The computed gradient was then clamped to the limits of the 8-bit Bitmap, i.e., 0 and 255, and an appropriate intensity value was assigned.
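A minimal sketch of this step, operating on a grayscale matrix prepared as described above; the class, method and variable names are illustrative.

using System;

static class SobelFilter
{
    // gray: grayscale intensities (0-255) indexed as [x, y]; returns the clamped |G| image.
    public static int[,] Apply(int[,] gray)
    {
        int width = gray.GetLength(0), height = gray.GetLength(1);
        int[,] gxMask = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
        int[,] gyMask = { {  1, 2, 1 }, {  0, 0, 0 }, { -1, -2, -1 } };
        int[,] edges = new int[width, height];
        for (int x = 1; x < width - 1; x++)
            for (int y = 1; y < height - 1; y++)
            {
                int gx = 0, gy = 0;
                // correlate the 3x3 neighbourhood of the active pixel with both masks
                for (int i = -1; i <= 1; i++)
                    for (int j = -1; j <= 1; j++)
                    {
                        gx += gxMask[i + 1, j + 1] * gray[x + i, y + j];
                        gy += gyMask[i + 1, j + 1] * gray[x + i, y + j];
                    }
                // |G| = |Gx| + |Gy|, clamped to the 8-bit range
                edges[x, y] = Math.Min(255, Math.Abs(gx) + Math.Abs(gy));
            }
        return edges;
    }
}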

Examples: [7.3.4], [7.3.5].

7.4 CANNY EDGE DETECTION

The Canny edge detection [2] algorithm is known to many as the optimal edge detector. It improves upon many of the edge detectors already available. It is important that edges occurring in images are not missed and that there are no responses to non-edges. Likewise, it is also important that the edge points be well localized; in other words, the distance between the edge pixels found by the detector and the actual edge should be at a minimum.

The detector draws upon the implementation of the Sobel filter discussed previously. But before applying the Sobel filter to the image, there is a need to eliminate noise from it. This noise removal is done with the help of a Gaussian filter, which basically blurs the image. This is done by applying a Gaussian mask over the image. For the purpose of implementation, we used a 3x3 mask [FIG 7.4.1] and slid it over the image, manipulating a square of pixels at a time by simple convolution.

After the application of the Gaussian and Sobel filters, we obtain an image (over an 8-bit grayscale) that approximates the intensity-change areas of the original image. The next task is to remove those pixels that appear to be maxima locally but are non-maxima when viewed w.r.t. their neighbors along the gradient direction. This is known as non-maximum suppression and is done by determining the edge direction and then following it to remove the regional non-maxima. This step was clubbed with the implementation of the Sobel filter, as the direction can be trivially deduced as θ = tan⁻¹(Gy/Gx), with appropriate exceptions being made when Gx and/or Gy compute to 0, as: orientation = (Gy == 0) ? 0 : 90.

Once the edge direction is known, the next step is to relate it to a direction that can be traced in an image. If the pixels of a 5x5 image are aligned as in [FIG 7.4.2], then, by looking at the centre pixel a, it can be seen that there are only four possible directions when describing the surrounding pixels:

0 degrees (in the horizontal direction), 45 degrees (along the positive diagonal), 90 degrees (in the vertical direction), or 135 degrees (along the negative diagonal)

Hence the obtained direction is now resolved into one of these four directions depending on which direction it is closest to. As an example, if the orientation angle is found to be 3 degrees, make it zero degrees. The resolved angle is stored in an array for further reference and recall.
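A minimal sketch of this resolution step; the method name is illustrative.

// Resolve a gradient orientation (in degrees) to the nearest of the four
// traceable directions: 0, 45, 90 or 135 degrees.
static int ResolveDirection(double angleDeg)
{
    double a = ((angleDeg % 180) + 180) % 180;   // normalize to [0, 180)
    if (a < 22.5 || a >= 157.5) return 0;
    if (a < 67.5) return 45;
    if (a < 112.5) return 90;
    return 135;
}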

Following the computation of the edge directions, we are now in a position to perform non-maximum suppression [2]. We trace along the edge in the edge direction and suppress any pixel value (set it to 0) that is not considered to be an edge (i.e., has a value less than its neighbor). This gives a thin line in the output image. It is accomplished by simply comparing the current pixel value with its two nearest neighbors along the direction (one of the four possible) determined previously; the lower values can be ignored.

Finally, hysteresis is used as a means of eliminating streaking [2]. Streaking is the breaking up of an edge contour caused by the operator output fluctuating above and below a particular threshold. If a single threshold T1 is applied to an image, and an edge has an average strength equal to T1, then, due to noise, there will be instances where the edge dips below the threshold. Equally, it will also extend above the threshold, making the edge look like a dashed line.

To avoid this, hysteresis uses two thresholds: a high threshold T1 and a low threshold T2. Any pixel in the image that has a value greater than T1 is presumed to be an edge pixel and is marked as such immediately. Then, any pixels that are connected to this edge pixel and that have a value greater than T2 are also selected as edge pixels. In other words, to follow an edge you start at a gradient above T1 and stop only when the gradient falls below T2. This step is very similar to the following of edges during the suppression of non-maxima and hence the two can be clubbed together in the final implementation.
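A minimal sketch of the hysteresis step described above; the class, method and parameter names are illustrative, and mag is assumed to be the gradient-magnitude image left after non-maximum suppression.

using System.Collections.Generic;
using System.Drawing;

static class Hysteresis
{
    public static bool[,] Apply(int[,] mag, int high, int low)
    {
        int width = mag.GetLength(0), height = mag.GetLength(1);
        bool[,] edge = new bool[width, height];
        Stack<Point> seeds = new Stack<Point>();
        // pass 1: every pixel above the high threshold seeds an edge
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
                if (mag[x, y] >= high) { edge[x, y] = true; seeds.Push(new Point(x, y)); }
        // pass 2: grow edges through connected pixels above the low threshold
        while (seeds.Count > 0)
        {
            Point p = seeds.Pop();
            for (int dx = -1; dx <= 1; dx++)
                for (int dy = -1; dy <= 1; dy++)
                {
                    int nx = p.X + dx, ny = p.Y + dy;
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                    if (!edge[nx, ny] && mag[nx, ny] >= low)
                    {
                        edge[nx, ny] = true;
                        seeds.Push(new Point(nx, ny));
                    }
                }
        }
        return edge;
    }
}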

Example: [7.4.3].

7.5 HAUSDORFF DISTANCE COMPUTATION

Given two finite point sets A = {a1, ..., ap} and B = {b1, ..., bq}, the Hausdorff distance between them is defined as:

H(A, B) = max(h(A, B), h(B, A)) [1]

where h(A, B) = max_{a є A} min_{b є B} ||a − b|| and ||·|| is some underlying norm on the points of A and B (for a visual representation of the Hausdorff distance refer to [7.5.1]).

The function h(A, B) is called the directed Hausdorff distance [1] from A to B. It identifies the point a є A that is farthest from any point of B, and measures the distance from a to its nearest neighbor in B (using the given norm ||·||, Euclidean in this case). That is, h(A, B) in effect ranks each point of A based on its distance to the nearest point of B, and then uses the largest ranked such point as the distance (the most mismatched point of A). Intuitively, if h(A, B) = d, then each point of A must be within distance d of some point of B, and there is also some point of A that is exactly at distance d from the nearest point of B (the most mismatched point).

The Hausdorff distance, H(A, B), is the maximum of h(A, B) and h(B, A). Thus it measures the degree of mismatch between two sets by measuring the distance of the point of A that is farthest from any point of B, and vice versa. Intuitively, if the Hausdorff distance is d, then every point of A must be within a distance d of some point of B and vice versa. Thus the notion of resemblance encoded by this distance is that each member of A be near some member of B and vice versa. Unlike most methods of comparing shapes, there is no explicit pairing of the points of A with the points of B (for example, many points of A may be close to the same point of B) [1].

The extraction of the point sets from the images is based on the result of the Canny Edge detector. The implementation uses those points of the Canny-filtered image that actually constitute an edge. These points can be trivially determined by checking for only the non-zero intensity pixels.
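A minimal sketch of this extraction, assuming the filtered result is stored as a grayscale Bitmap so that any non-zero channel value marks an edge pixel; the class and method names are illustrative.

using System.Collections.Generic;
using System.Drawing;

static class EdgePoints
{
    // Collect the coordinates of every non-zero pixel of the Canny-filtered image.
    public static List<Point> Extract(Bitmap filtered)
    {
        List<Point> points = new List<Point>();
        for (int x = 0; x < filtered.Width; x++)
            for (int y = 0; y < filtered.Height; y++)
                if (filtered.GetPixel(x, y).R > 0)   // non-zero intensity => edge point
                    points.Add(new Point(x, y));
        return points;
    }
}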

The function h(A, B) can be trivially computed in time O(pq) for two point sets of size p and q respectively using the following Brute-Force Algorithm:

1. h = 0
2. for every point ai of A:
   2.1 shortest = INF
   2.2 for every point bj of B:
           dij = d(ai, bj)
           if dij < shortest then
               shortest = dij
   2.3 if shortest > h then
           h = shortest
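A direct transcription of this brute-force algorithm into C#, with point sets as lists of Point and the Euclidean norm; the class and method names are illustrative.

using System;
using System.Collections.Generic;
using System.Drawing;

static class HausdorffMetric
{
    // Directed Hausdorff distance h(A, B) with the Euclidean norm.
    public static double Directed(List<Point> A, List<Point> B)
    {
        double h = 0;
        foreach (Point a in A)
        {
            double shortest = double.PositiveInfinity;
            foreach (Point b in B)
            {
                double dx = a.X - b.X, dy = a.Y - b.Y;
                double d = Math.Sqrt(dx * dx + dy * dy);
                if (d < shortest) shortest = d;
            }
            if (shortest > h) h = shortest;
        }
        return h;
    }

    // H(A, B) = max(h(A, B), h(B, A)).
    public static double Compute(List<Point> A, List<Point> B)
    {
        return Math.Max(Directed(A, B), Directed(B, A));
    }
}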

Our implementation used a slightly modified version of the above algorithm, which makes certain assumptions and eliminations based on the computation of the Hausdorff metric. The steps taken to improve computation time are summarized below:

7.5.1 Termination at Zero Distance

This builds on the fact that the result of the distance norm (the Euclidean norm was used in our implementation, i.e., d = √((x1 − x2)² + (y1 − y2)²)) can never be less than 0. Hence, once the inner loop of the above algorithm (Loop 2.2) computes the shortest distance to be 0, we can safely stop considering any further points of B when computing the distance from the particular point ai є A. This considerably speeds up the computation by skipping a significant chunk of points that would otherwise have to be considered.

7.5.2 Threshold Distance Window

We can eliminate the need to consider a point if it lies outside a particular threshold distance window or block. This can be understood with the help of an example: given a threshold distance τ and a point (Bx, By), we need only consider it for distance computation from the point (Ax, Ay) iff (Ax – τ) ≤ Bx ≤ (Ax + τ) AND (Ay – τ) ≤ By ≤ (Ay + τ). This speeds up computation for smaller values of τ and limits the maximum possible Hausdorff distance. Visual inaccuracies may occur when seemingly similar but translated images are compared under this assumption.

7.5.3 Termination at Infinite Distance

It can be noted that the outer loop of the algorithm (Loop 2) retains the maximum distance. This assumption builds on the previous one in the sense that, given the boundaries of the threshold distance window, there may be a few points of A which are not in the vicinity of any point of B. For such points the computed distance retains its initial value of infinity. Any further consideration of points is then trivially meaningless, as the maximum value of infinity has already been retained.

7.5.4 Scaling

Even after the application of the above techniques, the computational efficiency rapidly deteriorates as the image size increases and the number of pixels goes up to a few million. Hence, as was discussed in section 7.2, the image is scaled down to a fixed size on the basis of a scale factor to significantly reduce the number of pixels.

The above assumptions do affect the overall accuracy of the Hausdorff metric but are nonetheless useful for a much-needed speed-up.
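A sketch of the directed-distance loop with the speed-ups of 7.5.1–7.5.3 applied; tau stands for the threshold distance window, and the class, method and parameter names are illustrative.

using System;
using System.Collections.Generic;
using System.Drawing;

static class FastHausdorff
{
    public static double Directed(List<Point> A, List<Point> B, double tau)
    {
        double h = 0;
        foreach (Point a in A)
        {
            double shortest = double.PositiveInfinity;
            foreach (Point b in B)
            {
                // 7.5.2: skip points outside the threshold distance window around a
                if (b.X < a.X - tau || b.X > a.X + tau ||
                    b.Y < a.Y - tau || b.Y > a.Y + tau) continue;
                double dx = a.X - b.X, dy = a.Y - b.Y;
                double d = Math.Sqrt(dx * dx + dy * dy);
                if (d < shortest)
                {
                    shortest = d;
                    if (shortest == 0) break;   // 7.5.1: cannot get any closer than zero
                }
            }
            if (shortest > h) h = shortest;
            // 7.5.3: once infinity is retained, no later point can change the maximum
            if (double.IsPositiveInfinity(h)) return h;
        }
        return h;
    }
}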

7.6 CONCLUSION AND OBSERVATIONS

Hence, given any two images under consideration, we can easily compute their hash values and their mutual Hausdorff metric (after application of the Canny filter). While the hash-value comparison can trivially determine whether or not the given images are exact in all respects, the Hausdorff metric signifies the ‘closeness’ of the two images. A Hausdorff metric of 0 indicates exactness as far as features are concerned, whereas larger values reveal increasing dissimilarity between the images.

This implementation can be extended intuitively to consider a database of images.

Examples:
[7.6.1] Source Database
[7.6.2] Filtered images
[7.6.3] Hausdorff distances computed w.r.t. Firefox_Logo_Normal (Source Image)

Results sorted in order of decreasing similarity.

8. SUMMARY OF IMPLEMENTATION

A summary of the implementation is presented below in the form of a pseudo-code.

8.1  Input Source Image, SI
8.2  Input Target Directory, TD
-- Preprocessing Phase
8.3  For each image in the TD:
     8.3.1 Compute & store the hash value (HV)
     8.3.2 Compute & store Color Details (CD)
     8.3.3 Apply the Canny (Sobel-based) filter
     8.3.4 Compute the location of non-zero pixels and store in a matrix
-- Preparation Phase
8.4  Compute HV for SI
8.5  Compute & store Color Details of SI
8.6  Apply Canny filter to SI
8.7  Compute & store location of non-zero pixels
-- Comparison Phase
8.8  For each image in the TD:
     8.8.1 Compare HV of SI with the stored HV of the image
     8.8.2 Compare CD of SI with the stored CD of the image
     8.8.3 Compute Hausdorff metric b/w SI and the image using the stored locations of non-zero pixels
     8.8.4 Assign a rank to the image based on the HV comparison, the computed Hausdorff metric and the Color Details
-- Sorting Phase
8.9  Sort images of TD based on rank
8.10 Display images in sort-order
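A rough sketch of this pipeline, wiring together the helper sketches from sections 7.1–7.5 (ImageHasher, ColorMap, EdgePoints, HausdorffMetric); CannyFilter stands in for the edge-detection step of section 7.4 and is assumed to return the filtered Bitmap. The paper does not specify how the hash, Color Details and Hausdorff results are combined into a single rank, so sorting by Hausdorff distance with exact matches first is purely illustrative.

using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;

static class VisualSearch
{
    public static void Search(string sourcePath, string targetDir, Func<Bitmap, Bitmap> CannyFilter)
    {
        Bitmap source = new Bitmap(sourcePath);
        double[,] srcMap = ColorMap.Build(ColorMap.Scale(source));
        List<Point> srcEdges = EdgePoints.Extract(CannyFilter(ColorMap.Scale(source)));

        List<KeyValuePair<string, double>> ranked = new List<KeyValuePair<string, double>>();
        foreach (string file in Directory.GetFiles(targetDir))
        {
            Bitmap img = new Bitmap(file);
            if (ImageHasher.AreIdentical(source, img))
            {
                ranked.Add(new KeyValuePair<string, double>(file, 0.0));   // exact duplicate
                continue;
            }
            double colorSim = ColorMap.Compare(srcMap, ColorMap.Build(ColorMap.Scale(img)));
            double hd = HausdorffMetric.Compute(srcEdges, EdgePoints.Extract(CannyFilter(ColorMap.Scale(img))));
            // illustrative rank only: smaller Hausdorff distance => more similar;
            // colorSim could additionally be folded into the rank as step 8.8.4 prescribes
            ranked.Add(new KeyValuePair<string, double>(file, hd));
        }
        ranked.Sort(delegate(KeyValuePair<string, double> x, KeyValuePair<string, double> y)
                    { return x.Value.CompareTo(y.Value); });
        foreach (KeyValuePair<string, double> r in ranked)
            Console.WriteLine(r.Key + " -> " + r.Value);
    }
}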

9. REFERENCES

[1] Daniel P. Huttenlocher, Gregory A. Klanderman, and William J. Rucklidge. Comparing Images Using the Hausdorff Distance. IEEE Trans. Pattern Analysis and Machine Intelligence, September 1993.

[2] J. Canny. A Computational Approach to Edge Detection. IEEE Trans. Pattern Analysis and Machine Intelligence, November 1986.

[3] I. Sobel, G. Feldman. ‘A 3x3 Isotropic Gradient Operator for Image Processing’. Presented at a talk at the Stanford Artificial Intelligence Project in 1968; Pattern Classification and Scene Analysis, 1973.

[4] H. Alt, B. Behrends and J. Blomer. Measuring the resemblance of Polygon Shapes. Proc. Seventh ACM Symposium on Computational Geometry, 1991.

[5] Herbert Schildt. C# 2.0: The Complete Reference, Second Edition. Tata McGraw-Hill, 2006.

[6] MSDN Library. msdn.microsoft.com/en-us/library/default.aspx

FIGURES

[7.1.1], [7.1.2] Hashing examples
[7.2.1], [7.2.2] Color Map examples
[7.3.1] Signal with an edge shown by a jump in intensity
[7.3.2] Gradient (first derivative) of the signal in [7.3.1]
[7.3.3] Sobel 3x3 convolution masks Gx and Gy
[7.3.4], [7.3.5] Sobel edge-detection examples
[7.4.1] 3x3 Gaussian mask
[7.4.2] Pixel alignment of a 5x5 image used to resolve edge directions
[7.4.3] Canny edge-detection example
[7.5.1] Visual representation of the Hausdorff distance
[7.6.1] Source database
[7.6.2] Filtered images
[7.6.3] Hausdorff distances computed w.r.t. Firefox_Logo_Normal (Source Image), sorted in order of decreasing similarity