
Image Recognition




1. INTRODUCTION

2. BITMAP

3. DISCRETE COSINE TRANSFORMATION

4. ENERGY COMPACTION

5. CHARACTERISTIC EQUATIONS

6. EIGENVALUES & EIGENVECTORS

7. LEAST MEAN SQUARE EQUATION

8. CONCLUSION


The aim of this paper is to recognize a query image from a database of images. This process involves finding the principal component of the image, which distinguishes it from the other images. Principal Component Analysis (PCA) is a classical statistical method widely used in data analysis; its main use is to reduce dimensionality. PCA is used to recognize the image by extracting its principal components.

The bitmap file format makes it easy to extract the image attributes. The Discrete Cosine Transform is used to reduce the size of the data.


Bitmaps are a standard devised jointly by Microsoft and IBM for storing images on IBM-compatible PCs running Windows.

A bitmap is logically divided into two basic portions: the header section and the data section. The size of the header is 54 bytes.

The information regarding image attributes, such as the image resolution, height, width, and the number of bits per pixel, is stored in the header.


Bitmaps are simple to store and the format is easy to understand. This makes the extraction, and eventually the manipulation, of the data easy.
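As a concrete illustration, the 54-byte header described above can be parsed directly. This is a minimal sketch (the function name and returned field names are our own choices), reading the standard BITMAPFILEHEADER/BITMAPINFOHEADER fields at their fixed offsets:

```python
import struct

def read_bmp_header(path):
    """Parse the 54-byte BMP header (BITMAPFILEHEADER + BITMAPINFOHEADER).

    Field offsets follow the standard Windows bitmap layout.
    """
    with open(path, "rb") as f:
        header = f.read(54)
    if header[:2] != b"BM":
        raise ValueError("not a BMP file")
    file_size, = struct.unpack_from("<I", header, 2)     # total file size
    data_offset, = struct.unpack_from("<I", header, 10)  # start of pixel data
    width, height = struct.unpack_from("<ii", header, 18)
    bits_per_pixel, = struct.unpack_from("<H", header, 28)
    return {"file_size": file_size, "data_offset": data_offset,
            "width": width, "height": height, "bpp": bits_per_pixel}
```

With the resolution, bit depth, and pixel-data offset in hand, the raster data that follows the header can be read out for the transform stage.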


Several algorithms may be used for the recognition process. Some of them are Multidimensional Scaling (MDS), Singular Value Decomposition (SVD), and Factor Analysis (FA).

MDS provides a visual representation of the patterns of proximities in the data set. It is applied to a matrix containing distance measures between the variables of the data set.

SVD decomposes a matrix into its singular values and singular vectors, and can represent the matrix through its dominant singular components.

Factor analysis aims to highlight the variances amongst a large number of observable data through a set of unobservable variables called factors. FA can be approached in two ways:
a) Principal Component Analysis
b) Common Factor Analysis


Common Factor Analysis (CFA) considers the common variances of the variables concerned. It results in a testable model that explains the intercorrelations amongst the variables.

Principal Component Analysis (PCA) considers the total variance of the data set. Hence, given an image input, PCA would summarize the total variance in the image values.

An advantage of PCA is that it can accept both subjective and objective attributes and always results in a unique solution.


Transform coding constitutes an integral component of contemporary image/video processing applications.

Some properties of the DCT are of particular value to image processing applications:

Decorrelation: The principal advantage of image transformation is the removal of redundancy between neighbouring pixels. This property lets us manipulate the uncorrelated transform coefficients independently.

Energy Compaction: This property can be explained using a few examples, given below with figures.


The example comprises four broad image classes. Figures (a) and (b) contain large areas of slowly varying intensities. These images can be classified as low-frequency images with low spatial detail.

Figure (c) contains a number of edges (i.e., sharp intensity variations) and therefore can be classified as a high-frequency image with low spatial content.

Figure (d), however, exhibits high correlation in the image data, which is exploited by the DCT algorithm to provide good energy compaction.


The general equation for the 2D DCT of an N-by-M image f(x, y) is:

F(u, v) = a(u) a(v) Σx Σy f(x, y) cos[(2x + 1)uπ / 2N] cos[(2y + 1)vπ / 2M],

where the sums run over x = 0, ..., N − 1 and y = 0, ..., M − 1, with a(0) = √(1/N) and a(u) = √(2/N) for u > 0 (and similarly for a(v) with M).

The good performance of the 2D DCT with the PCA method is a direct result of coupling the 2D DCT coefficients with PCA. It benefits mainly from using the 2D discrete cosine transform to eliminate the correlation between rows and columns and to concentrate the energy at the same time.
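The 2D DCT described above can be sketched straight from its definition. This is an illustrative NumPy implementation of the orthonormal 2D DCT-II, not the paper's own code:

```python
import numpy as np

def dct2(img):
    """Orthonormal 2D DCT-II of an N-by-M image, from the definition:
    F(u,v) = a(u) a(v) sum_{x,y} f(x,y) cos[(2x+1)u*pi/2N] cos[(2y+1)v*pi/2M].
    """
    N, M = img.shape
    x, y = np.arange(N), np.arange(M)
    # Cosine basis matrices: rows indexed by frequency (u or v), columns by position.
    Cn = np.cos((2 * x[None, :] + 1) * x[:, None] * np.pi / (2 * N))  # N x N
    Cm = np.cos((2 * y[None, :] + 1) * y[:, None] * np.pi / (2 * M))  # M x M
    an = np.where(x == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))  # a(u)
    am = np.where(y == 0, np.sqrt(1.0 / M), np.sqrt(2.0 / M))  # a(v)
    return an[:, None] * am[None, :] * (Cn @ img @ Cm.T)
```

For a constant image all the energy lands in the single DC coefficient F(0, 0), which is the compaction behaviour the slides describe; the orthonormal scaling also preserves total energy.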


Covariance is a property that gives us the amount of variation of two dimensions from their means with respect to each other. This property is used on the DCT matrix so that a form of distance measure is performed on the image pixel values, thus providing their relative intensity measures.

The covariance of two variates xi and xj provides a measure of how strongly correlated these variables are, and the derived quantity

cor(xi, xj) = cov(xi, xj) / (σi σj),


where σi, σj are the standard deviations, is called the statistical correlation of xi and xj. The covariance is symmetric, since cov(x, y) = cov(y, x). Covariance is performed on the extracted 3×3 DCT matrix, and the covariance matrix is calculated using the following formulae. Given n sets of variates denoted {x1}, ..., {xn}, the first-order covariance matrix is defined by

vij = cov(xi, xj) = <(xi − µi)(xj − µj)>,


where µi is the mean. Higher-order matrices are given by

vij^(mn) = <(xi − µi)^m (xj − µj)^n>.

An individual matrix element vij = cov(xi, xj) is called the covariance of xi and xj.
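The covariance step can be sketched as follows; the 3×3 block values are illustrative stand-ins for retained DCT coefficients, not data from the paper:

```python
import numpy as np

def covariance_matrix(block):
    """First-order covariance matrix of the variates in the rows of `block`:
    vij = cov(xi, xj) = mean((xi - mui) * (xj - muj))."""
    mu = block.mean(axis=1, keepdims=True)  # per-variate mean mui
    centered = block - mu
    return centered @ centered.T / block.shape[1]

# Illustrative stand-in for an extracted 3x3 block of DCT coefficients:
block = np.array([[4.0, 2.0, 0.60],
                  [4.2, 2.1, 0.59],
                  [3.9, 2.0, 0.58]])
C = covariance_matrix(block)
```

The result is symmetric, as the text notes (cov(x, y) = cov(y, x)), and agrees with NumPy's own biased covariance estimator.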


Then a characteristic equation is generated from the covariance matrix. This characteristic equation is a cubic equation, and its maximum root is found using Cardano's method. The maximum root is the eigenvalue, i.e. the principal component of the data set, which uniquely identifies the image.
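The maximum-root computation can be sketched as below, using Cardano's trigonometric method for the cubic characteristic equation of a symmetric 3×3 matrix (it assumes distinct eigenvalues, as the text later does):

```python
import numpy as np

def max_eigenvalue_cardano(C):
    """Largest root of the characteristic equation of a symmetric 3x3 matrix,
    found by Cardano's trigonometric method (assumes distinct eigenvalues)."""
    tr = np.trace(C)
    m = (C[0, 0] * C[1, 1] - C[0, 1] * C[1, 0] +
         C[0, 0] * C[2, 2] - C[0, 2] * C[2, 0] +
         C[1, 1] * C[2, 2] - C[1, 2] * C[2, 1])  # sum of principal 2x2 minors
    det = np.linalg.det(C)
    # Characteristic equation: lam^3 - tr*lam^2 + m*lam - det = 0
    b, c, d = -tr, m, -det
    p = c - b * b / 3.0
    q = 2.0 * b**3 / 27.0 - b * c / 3.0 + d
    # Depressed cubic t^3 + p*t + q = 0; a symmetric C has three real roots.
    theta = np.arccos(np.clip(3.0 * q / (2.0 * p) * np.sqrt(-3.0 / p),
                              -1.0, 1.0)) / 3.0
    t_max = 2.0 * np.sqrt(-p / 3.0) * np.cos(theta)  # k = 0 gives the largest root
    return t_max - b / 3.0
```

The answer can be cross-checked against a general-purpose eigensolver such as `numpy.linalg.eigvalsh`.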


From a symmetric matrix such as the covariance matrix, we can calculate an orthogonal basis by finding its eigenvalues and eigenvectors. The eigenvectors ei and the corresponding eigenvalues λi are the solutions of the equation

Cx ei = λi ei.

For simplicity we assume that the eigenvalues λi are distinct. These values can be found, for example, by finding the solutions of the characteristic equation

|Cx − λI| = 0,


where I is the identity matrix of the same order as Cx and |·| denotes the determinant of the matrix. We are faced here with contradictory goals:

1. On one hand, we should simplify the problem by reducing the dimension of the representation.

2. On the other hand, we want to preserve as much as possible of the original information content.

PCA offers a convenient way to control the trade-off between losing information and simplifying the problem at hand.
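This trade-off can be sketched as a projection onto the leading eigenvectors of the covariance matrix; the helper and the toy data below are illustrative, not the paper's implementation:

```python
import numpy as np

def pca_reduce(X, k):
    """Project row-samples X onto the k leading eigenvectors of their
    covariance matrix, keeping as much variance as k dimensions allow."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(X)
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :k]         # the k leading eigenvectors
    return Xc @ top
```

Choosing a small k simplifies the representation; choosing a larger k preserves more of the variance. When the data truly lies in a k-dimensional subspace, nothing is lost at all.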


The database of images is a collection of the eigenvectors of the different images that correspond to the maximum eigenvalue.

The eigenvectors play an important role in the image recognition process. We need to define stepwise rules to be executed to achieve image recognition.

There are basically two kinds of environments, classified based on their adaptation nature. They are described below:

1. Static Environment: The system is said to be static in nature if there is a negative output for a new image given as input. The system responds negatively even when an existing image in the database is rotated, translated, or subjected to changes in contrast, brightness, scaling factors, etc.


2. Dynamic Environment: The system is said to be dynamic in nature if there exists a learning process whenever an image is fed in for recognition.

Least Mean Square Algorithm: Generally, the LMS algorithm is implemented using the instantaneous values of the input function. The name itself suggests the rough formula: LMS is the least value of the squares of the means, calculated with respect to the maximum eigenvalue for a particular eigenspace. The major property of the LMS algorithm is that it operates linearly.


Advantages of the LMS algorithm: The reasons for adopting LMS in our paper are its properties: model independence, robustness, optimality of results, and simplicity.

LMS formula: Having defined a general formula, and since we adopted statistical and matrix concepts, we define a specific formula as follows:


ALGORITHM
1. Give the query image as the input.
2. Convert the input from BMP to a data file.
3. Perform the Discrete Cosine Transform on the data set.
4. Extract a 3×3 matrix based on subjective analysis.
5. Perform covariance on the 3×3 matrix.
6. Generate the characteristic equation from the covariance matrix and solve for the maximum eigenvalue.
7. Compare eigenvalues using the Least Mean Square (LMS) algorithm.
8. If LMS = 0, recognize and display the image; else report that no match was found.
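Steps 7 and 8 above can be sketched as follows; the database contents, the function name, and the tolerance `eps` (standing in for the exact LMS = 0 test, which rarely holds in floating point) are all illustrative:

```python
# Compare the query image's maximum eigenvalue against the stored
# eigenvalues using a least-squared-error criterion.
def recognize(query_eig, database, eps=1e-6):
    """database: mapping image-name -> stored maximum eigenvalue."""
    errors = {name: (query_eig - eig) ** 2 for name, eig in database.items()}
    best = min(errors, key=errors.get)
    # A near-zero error means the query matches a stored image.
    return best if errors[best] < eps else None

# Illustrative database of precomputed maximum eigenvalues:
db = {"face_a": 41.7, "face_b": 12.3, "face_c": 87.9}
```

A query whose eigenvalue matches a stored one is recognized; any other query yields `None`, the "no match found" branch of step 8.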


INFERENCES

For a static environment, the algorithm gives maximum robustness because we applied matrix and statistical operations that are constant with respect to time. Unless the image is subjected to rotation, translation, or scaling, the LMS will be effective.

The sensitivity (S) of the LMS algorithm is calculated on the basis of a Condition Number (CN). The CN is defined as the ratio of the maximum eigenvalue to the minimum eigenvalue.


This paper finds immense application in security aspects of government agencies and public places, with an economical software implementation. Though such a system may prove to be an asset for security purposes, it fails when the application involves the classification and categorization of images based on their content.

Secondly, an image whose brightness or contrast has been changed from the original, though present in the database, will not be recognized by the system.

The use of bitmapped images results in the occupation of a significant amount of memory.
