
International Journal of Advance Research in Computer Science and Management Studies
ISSN: 2321-7782 (Online), Volume 3, Issue 7, July 2015
Research Article / Survey Paper / Case Study
Available online at: www.ijarcsms.com
© 2015, IJARCSMS All Rights Reserved

Iris Recognition System Using Circular Hough Transform

Mrigana Walia¹
Computer Science Department, Chitkara University, Baddi (H.P.), India

Dr. Shaily Jain²
Computer Science Department, Chitkara University, Baddi (H.P.), India

Abstract: A biometric system identifies a person automatically on the basis of specific features of that individual, and many such systems have been developed for identification. Iris recognition typically follows a computer vision and image processing approach, with stages such as image segmentation, normalization, feature extraction and recognition. Iris biometry has been proposed as a sound measure of identity. In the segmentation step, the iris region is localized in the eye image. Many algorithms, assuming a near-frontal view of the pupil, model the iris boundaries as two circles: the inner circle is the boundary between the pupil and the iris, while the outer circle is the limbic boundary between the iris and the sclera. The normalization stage then maps the segmented region onto a rectangular block to remove dimensional inconsistencies. The feature extraction step encodes the iris texture into a standard bit-vector code, and the matching stage estimates the distance between such codes and yields the recognition rate of the system. Biometrics is applicable in many fields, such as restricted access to secure facilities and labs, verification of financial transactions, protection against welfare fraud, and immigration checks when visiting other countries. An iris recognition system with four steps is proposed here. First, image segmentation is achieved using the Canny edge detector; second, the Circular Hough Transform (CHT) localizes the pupil and iris regions; third, the segmented iris is normalized and features are extracted using the symlet-4 wavelet; finally, the iris codes are compared. Compared with an existing system, a high recognition rate is obtained while the FAR and FRR values remain low for the proposed system.

Keywords: Iris recognition, Biometrics, Iris Image, Segmentation, Symlet wavelet.

I. INTRODUCTION


The rapid development of technology and services in our lives has increased the number of activities and transactions for which reliable personal identification is important. Advances in information technology have also increased the chances of unauthorized access. A person's identity is becoming more important as the range of human activity expands. Traditional, knowledge- and token-based methods such as passwords, ID cards or keys are used for authentication. However,


passwords can be guessed and tracked, and ID cards and keys may be misplaced or stolen, so these methods are not very reliable. A newer method, biometrics, is more promising. The word biometrics is derived from bio (life) and metrics (measurement); biometrics is therefore the technology of measuring and analyzing physiological or behavioural characteristics of a living body for identification and verification purposes. Possible biometric modalities include face, fingerprint, iris, hand geometry, gait and signature.

Biometrics is thus a method of analyzing and measuring biological and physical characteristics of the body to verify and identify an individual. This section briefly introduces biometric technology, including the verification and identification processes in biometric systems, and describes several types of biometrics. It also compares the parameters of different biometric techniques, presents the anatomy of the human iris, and gives an overview of iris recognition technology and its applications.

Biometric technology

Biometrics is the science of automatically recognizing the identity of persons based on one or more physiological or behavioural characteristics. Biometric methods include face, fingerprint, iris, hand shape, gait and signature. Biometrics is widely used in many applications, such as access control to secure facilities, verification of financial transactions, welfare fraud protection, law enforcement, and immigration status checking when entering another country.

Types of biometrics

Biometric systems are broadly categorized into two types, physiological and behavioural. Figure 1 shows these two main types and their further classification into subtypes.

1) Physiological

a) Face: The identity of a person is verified by analyzing and comparing facial patterns.

b) Fingerprint: The identity of a person is verified using the dermal ridge patterns of the fingers.

c) Hand: The identity of a person is verified by comparing hand geometry patterns.

2) Behavioural

a) Keystroke: The identity of a person is verified by matching the speed and pressure with which keys are pressed.

b) Signature: The identity of a person is verified by comparing the appearance and style of the person's signature.

Figure 1: Types of biometrics


II. LITERATURE SURVEY

Steve Zhou et al. (2013), in their paper entitled "A Novel Approach for Code Match in Iris Recognition", presented an iris recognition system in which iris localization is performed with a histogram analysis technique that converts the grayscale eye image into binary form, containing only the values 0 and 1. Normalization was carried out using Daugman's rubber sheet model.

Jimenez Lopez et al. (2013), in their paper entitled "Biometric Iris Recognition Using Hough Transform", described the segmentation and normalization process of an automatic biometric iris recognition system implemented in MATLAB. They used grayscale database images and applied the Hough transform as the segmentation technique.

J. Daugman (2004), in "How iris recognition works", described identifying humans by recognizing their iris patterns. The iris is protected as an internal organ and exhibits stability and epigenetic randomness over decades. Images can be captured on the move or at a distance, and one-to-many identification is performed with the help of fast, rotation-invariant comparators.

Khan, Mohd Tariq et al. (2013), in "Feature extraction through iris images using 1-D Gabor filter on different iris datasets", establish the human iris as a strong biometric. Biometrics identifies different persons based on their physiological or behavioural features and is one of the most reliable and widely used identification methods available. The authors describe an iris recognition system based on a 1-D Gabor filter and define a customized iris feature extraction algorithm applied directly to iris images.

P. J. Phillips et al. (2000), in "An introduction to evaluating biometric systems", discuss algorithms developed to recognize persons by their iris patterns; these were tested in several laboratory and field trials, producing no false matches over several million comparison tests. The principle of recognition is the failure of a test of statistical independence on the iris phase structure encoded by multi-scale quadrature wavelets.

Khan, M.T. et al. (2013), in their paper entitled "Feature Extraction through Iris Images using 1-D Gabor Filter", describe three main parts: pre-processing (pupil segmentation, iris segmentation and normalization), feature extraction, and matching or comparison. Their database contained both noisy and noise-free iris images. Their assumption for the segmentation process was that the pupil colour differs markedly from the iris, and the iris colour differs from the sclera.

K. W. Bowyer et al. (2008), in "Image understanding for iris biometrics: A survey", discuss, among other work, a two-stage iris segmentation framework compared with traditional methods. It has three major advantages, the first being that modules are easily extracted and combined with multiple sophisticated schemes for different tasks, trading off segmentation accuracy against computation time.

Amel Saeed Tuama (2012), in his paper entitled "Iris Image Segmentation and Recognition", presented an iris recognition system with several steps. First, image pre-processing is performed, followed by extraction of the iris portion, which is bounded by the limbus boundary and the pupil boundary. The extracted part is then normalized using the rubber sheet model.

Ma, Li, et al. (2004), in "Efficient iris recognition by characterizing key local variations", note that the distinctiveness of the iris comes from its uniformly distributed characteristics, which provide high reliability for personal identification but at the same time make it difficult to represent the details effectively in an image. The authors describe an effective iris recognition algorithm based on characterizing key local variations.

H. Proença (2010), in "Iris recognition: On the segmentation of degraded images acquired in the visible wavelength", locates the centre and radius of the pupil and then creates a vector holding the pixel intensities of an imaginary row of a certain width w passing through the centre of the pupil. A circle is then drawn to find the pupil and iris edges, using the calculated pupil and iris centre coordinates and radii.

III. PRESENT WORK

3.1 Problem statement

1. Mohd. T. Khan et al. used a complicated method for segmentation: they first applied a threshold, then Freeman's chain code algorithm, then a linear contrast filter, and finally computed the average window vector.

2. They then used a 1-D Gabor filter for extracting the unique pattern from the iris.

3.2 Proposed System

» The algorithm will be tested and evaluated on a real, noisy image database.

» A simpler technique will be used for segmentation (localization) of the iris in the eye image.

» Feature extraction will be done using the symlet-4 wavelet.

3.3 Objectives

1. The objective is to implement an iris recognition system that enhances accuracy in terms of the recognition rate, false acceptance rate and false rejection rate of iris recognition technology.

2. There will be five steps in the system: eye image capturing, segmentation, normalization, feature extraction and matching.

3. The UBIRIS image will be fed to the segmentation process, which will detect the desired iris region in the eye image and discard the undesired regions.

4. The segmented image will then pass through the normalization process, which will transform the radial image into an angular (polar) image and pass the output to the feature extraction process.

5. The feature extraction process will compress the image and convert it into a binary data template, which will then be stored in the database.

6. Finally, the matching process will compare the new input iris data template with the stored data templates.

3.4 Methodology

Figure 2: Methodology steps

This section explains the methodology used in the whole iris recognition process. The five methodology steps, with the algorithms used in each step, are shown in figure 2. The first step of the iris recognition system is eye image capturing, which is done using high-quality cameras. For capturing the eye image, a suitable distance is determined by working with different eye image samples.
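As a rough illustration of how these five steps fit together, the sketch below chains them in MATLAB, the platform used for our prototype. Every function name in it (segmentIris, normalizeIris, extractFeatures, matchTemplates) and the file name are hypothetical placeholders, not code from this work; only the 0.445 decision threshold comes from the paper.

% Illustrative outline of the five-step pipeline; all function names are
% hypothetical placeholders standing in for the steps described below.
eyeImg  = imread('ubiris_sample.jpg');                        % 1. eye image capture (UBIRIS v1 sample, name assumed)
grayImg = im2double(rgb2gray(eyeImg));
[pupC, pupR, irisC, irisR] = segmentIris(grayImg);            % 2. Canny edge map + circular Hough transform
polarIris = normalizeIris(grayImg, pupC, pupR, irisC, irisR); % 3. rubber-sheet normalization
newCode   = extractFeatures(polarIris);                       % 4. sym4 wavelet encoding -> binary template
hd        = matchTemplates(newCode, storedCode);              % 5. Hamming distance against a stored template
sameEye   = hd < 0.445;                                       % decision threshold used in this paper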

3.5 Tools used

To evaluate the performance of the proposed system, eye images were collected from the publicly available UBIRIS database, version 1. A total of 54 eye images of different persons were taken from the database to test the iris recognition system. All testing was done on the MATLAB R2012a platform, and a laptop with a 2.40 GHz processor and 4 GB of RAM was used to run the prototype model.

Figure 3: Captured eye image

3.6 Segmentation

In the segmentation process, a Circular Hough Transform based approach is used to detect the upper and lower eyelids of the eye as well as the iris and pupil boundaries. The procedure involves generating an edge map, which is done with the Canny edge detection technique. Gradients are biased in the vertical direction for the outer iris (sclera) boundary, as recommended in the system of Wildes et al.
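A minimal MATLAB sketch of this segmentation stage is given below, assuming the Image Processing Toolbox. Here imfindcircles stands in for the circular Hough transform accumulator, and the Canny thresholds, sigma and radius ranges are illustrative assumptions, not values taken from the paper.

% Segmentation sketch: Canny edge map followed by a circular Hough search.
I = im2double(rgb2gray(imread('eye.jpg')));             % input eye image (file name assumed)
edgeMap = edge(I, 'canny', [0.05 0.20], 2);             % Canny edge map (thresholds and sigma assumed)

% Pupil boundary: a dark, roughly circular region in the intensity image.
[pupCenters, pupRadii] = imfindcircles(I, [10 35], ...
    'ObjectPolarity', 'dark', 'Sensitivity', 0.92);

% Limbic (iris/sclera) boundary: circular Hough transform on the edge map,
% searched over a larger radius range than the pupil.
[irisCenters, irisRadii] = imfindcircles(edgeMap, [35 80], ...
    'ObjectPolarity', 'bright', 'Sensitivity', 0.95);

% Keep the strongest candidate from each search (assumes at least one was found).
pupCenter = pupCenters(1,:);   pupRadius = pupRadii(1);
irisCenter = irisCenters(1,:); irisRadius = irisRadii(1);

Eyelid boundaries can be handled in a similar way by searching a vertically biased edge map, as noted above.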

3.6.1 Canny Edge Detection

Canny edge detection is one of the basic algorithms used in shape recognition. The algorithm uses a multi-stage process to

detect a wide range of edges in images. The steps of the Canny edge detector algorithm are as follows:

1. Image smoothing: This is done to reduce the noise in the image.

2. Calculating edge strength and edge direction.

3. Directional non-maximum suppression to obtain thin edges across the image.

4. Hysteresis thresholding to retain only the valid edges in the image.

Figure 4

3.6.2 Image smoothing

Image smoothing is the first stage of Canny edge detection. The pixel values of the input image are convolved with a predefined operator to create an intermediate image. This process reduces the noise within the image and produces a less pixelated result. Image smoothing is performed by convolving the input image with a Gaussian filter [16]. A Gaussian filter is a discrete version of the 2-dimensional function

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)).

A 5x5 discrete Gaussian filter is obtained for σ = 1.4 by substituting integer values for x and y into this function and renormalizing so that the coefficients sum to one.
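As a small illustration, the 5x5 kernel for σ = 1.4 can be produced and applied in MATLAB as below (Image Processing Toolbox assumed; I is the grayscale eye image from the segmentation sketch).

% Gaussian smoothing sketch: build the 5x5 discrete kernel for sigma = 1.4
% and convolve it with the image.
sigma = 1.4;
h = fspecial('gaussian', 5, sigma);          % discretized, renormalized Gaussian kernel
smoothed = imfilter(I, h, 'replicate');      % convolution with replicated borders

% The same kernel built directly from the 2-D Gaussian formula above:
[x, y] = meshgrid(-2:2, -2:2);
g = exp(-(x.^2 + y.^2) / (2*sigma^2));
g = g / sum(g(:));                           % renormalize so the weights sum to 1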


3.6.3 Calculation of edge strength and direction

In this stage, the blurred image obtained from the image smoothing stage is convolved with a 3x3Sobel operator. The Sobel

operator is a discrete differential operator that generates a gradient image.

The edge strength for each pixel in an image obtained from the above equation is used in non-maximum suppression stage,

which is applied further. The edge directions will be obtained as rounded off value as four angles: 0 degree, 45 degree, 90

degree or 135 degree, before using it in non-maximum suppression.
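A short MATLAB sketch of this stage follows, continuing from the smoothed image above; the final line is exactly the rounding of the gradient direction to 0/45/90/135 degrees described in the text.

% Edge strength and direction sketch using the 3x3 Sobel operators.
sx = [-1 0 1; -2 0 2; -1 0 1];                       % horizontal Sobel operator
sy = sx';                                            % vertical Sobel operator
Gx = imfilter(smoothed, sx, 'replicate');            % horizontal gradient
Gy = imfilter(smoothed, sy, 'replicate');            % vertical gradient
strength  = sqrt(Gx.^2 + Gy.^2);                     % edge strength per pixel
direction = atan2(Gy, Gx) * 180 / pi;                % gradient direction in degrees
direction = mod(round(direction / 45) * 45, 180);    % quantize to 0, 45, 90 or 135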

Figure 5

3.6.4 Non Maximum Suppression

Non-maximum suppression is commonly used in edge detection algorithms. It is a process in which every pixel whose edge strength is not maximal within a certain local neighbourhood is set to zero. Here the local neighbourhood is a linear window of length 5 pixels, oriented along the quantized edge direction of the pixel under consideration.
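The loop below is a minimal sketch of this suppression step, continuing from the strength and direction arrays of the previous sketch; the sign conventions chosen for the diagonal directions are one possible choice, not a detail specified in the paper.

% Non-maximum suppression sketch: keep a pixel only if its edge strength is
% maximal within a 5-pixel linear window aligned with its quantized direction.
thin = zeros(size(strength));
[nRows, nCols] = size(strength);
for r = 3:nRows-2
    for c = 3:nCols-2
        switch direction(r, c)
            case 0
                dr = 0;  dc = 1;     % horizontal window
            case 45
                dr = -1; dc = 1;     % one diagonal
            case 90
                dr = 1;  dc = 0;     % vertical window
            otherwise                % 135 degrees
                dr = 1;  dc = 1;     % the other diagonal
        end
        window = [strength(r-2*dr, c-2*dc), strength(r-dr, c-dc), strength(r, c), ...
                  strength(r+dr, c+dc), strength(r+2*dr, c+2*dc)];
        if strength(r, c) >= max(window)
            thin(r, c) = strength(r, c);   % local maximum: keep it
        end                                % otherwise it stays zero (suppressed)
    end
end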

3.7 Normalization and Enhancement

Daugman's rubber sheet model was employed for normalization of the iris region. The reference point is the centre of the pupil, and the radial vectors considered pass through the iris region. The model remaps each point of the iris region to a pair of dimensionless polar coordinates (r, θ), with r ∈ [0, 1] and θ ∈ [0, 2π]:

I(x(r, θ), y(r, θ)) → I(r, θ)

where

x(r, θ) = (1 − r) · x_p(θ) + r · x_l(θ)
y(r, θ) = (1 − r) · y_p(θ) + r · y_l(θ)

and (x_p(θ), y_p(θ)) and (x_l(θ), y_l(θ)) are the points on the pupil and limbus boundaries along direction θ.
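A compact MATLAB sketch of this remapping is shown below; it reuses the pupil and iris circles from the segmentation sketch, and the 20x240 output resolution is an assumption rather than a value stated in the paper.

% Rubber-sheet normalization sketch: sample the iris annulus onto a fixed
% radialRes x angularRes rectangle (resolution values assumed).
radialRes = 20;  angularRes = 240;
theta = linspace(0, 2*pi, angularRes);                 % 1 x angularRes
rr    = linspace(0, 1, radialRes)';                    % radialRes x 1
xp = pupCenter(1) + pupRadius * cos(theta);            % pupil boundary points
yp = pupCenter(2) + pupRadius * sin(theta);
xl = irisCenter(1) + irisRadius * cos(theta);          % limbus boundary points
yl = irisCenter(2) + irisRadius * sin(theta);
X = (1 - rr) * xp + rr * xl;                           % x(r,theta) as an outer product
Y = (1 - rr) * yp + rr * yl;                           % y(r,theta)
polarIris = interp2(I, X, Y, 'linear', 0);             % normalized iris strip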

3.8 Feature Extraction

For feature extraction we have used the symlet-4 (sym4) wavelet.

A wavelet is a waveform of effectively limited duration that has an average value of zero.

For comparison, the Fourier transform of a signal f(t) is

F(ω) = ∫ f(t) e^(−iωt) dt,

i.e. the sum over all time of the signal f(t) multiplied by a complex exponential; the result is the Fourier coefficient F(ω). These coefficients, when multiplied by a sinusoid of the appropriate frequency ω, yield the constituent sinusoidal components of the original signal.

The wavelet functions are scaled and shifted versions of a mother wavelet ψ,

ψ_(a,b)(t) = (1 / √a) · ψ((t − b) / a),

where a is the scale and b is the translation (shift).
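One way to turn the normalized strip into a bit template with the sym4 wavelet is sketched below (Wavelet Toolbox assumed). The sign-based binarization rule is an assumption for illustration; the paper does not spell out its exact encoding.

% Feature-extraction sketch: single-level 2-D sym4 decomposition of the
% normalized iris strip, followed by a sign-based binary template.
[cA, cH, cV, cD] = dwt2(polarIris, 'sym4');    % approximation and detail sub-bands
coeffs   = [cH(:); cV(:); cD(:)];              % detail coefficients carry the iris texture
irisCode = coeffs > 0;                         % binarize by sign (assumed encoding rule)

Further decomposition levels could be taken with wavedec2 if a longer code is wanted.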

3.9 Matching

For the comparison of two iris codes, the Hamming distance is employed. The iris region contains features with very high degrees of freedom, and each iris produces a unique bit pattern that is independent of the pattern produced by any other iris, whereas codes produced by the same iris are similar. If two bit patterns are completely independent, the expected Hamming distance between them is 0.5: independent bit patterns are essentially random, so half of the bits agree and half disagree. The Hamming distance is the matching metric employed by Daugman, and it is computed only over bits generated from the actual iris region.

The Hamming distance is defined mathematically as

HD = (1 / N) · Σ_(j=1..N) (X_j ⊕ Y_j)

where X_j and Y_j are the j-th bits of the two templates being compared and N is the number of bits in each template. The Hamming distance can be computed with the elementary logical XOR (exclusive-OR) operator and is therefore very fast. In the present case the threshold is 0.445: if the Hamming distance between two templates is below 0.445, the two irises are from the same eye; if it is above 0.445, they are from different eyes.
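In MATLAB the fractional Hamming distance of two equal-length binary templates reduces to a single XOR, as in the sketch below (codeA and codeB are assumed variable names for the stored and the newly extracted templates).

% Matching sketch: fractional Hamming distance with the 0.445 threshold.
hd = sum(xor(codeA(:), codeB(:))) / numel(codeA);   % fraction of disagreeing bits
sameEye = hd < 0.445;                               % below the threshold -> same eye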

IV. RESULTS

The results for the proposed methods are shown in the figures below.

Figure 6: Segmented eye image

Figure 7: Normalised iris image



Figure 8: Cropped iris image (noise removed)

Figure 9: Final enhanced image for feature extraction

Figure 10: Inter-class Hamming distance

Figure 11: Inter- and intra-class Hamming distances

Figure 12: Intra-class Hamming distance

Figure 13: ROC curve

Table 1: Comparison of FAR, FRR and recognition rates

Researcher    FAR       FRR      Recognition rate
Li Ma         0.02%     1.98%    98%
Daugman       0.01%     0.09%    100%
Avilia        0.03%     2.08%    97.89%
M.T. Khan     3.87%     9.29%    93.45%
Proposed      0.9630%   0%       99.5186%
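For reference, FAR and FRR at the 0.445 operating point can be computed from the impostor and genuine Hamming-distance distributions as sketched below (impostorHD and genuineHD are assumed variable names for those distance vectors).

% Evaluation sketch: FAR and FRR at the chosen threshold.
threshold = 0.445;
FAR = 100 * mean(impostorHD <  threshold);   % impostor pairs wrongly accepted (%)
FRR = 100 * mean(genuineHD  >= threshold);   % genuine pairs wrongly rejected (%)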


V. CONCLUSION AND FUTURE SCOPE

This paper has proposed an iris recognition system, which has been tested on the UBIRIS database of greyscale eye images in order to verify the claimed performance of iris recognition technology.

Iris recognition is used for identification and verification purposes in many places around the world, and several countries are considering it for biometric identity cards. Airports worldwide use such algorithms with iris cameras for passenger screening and immigration control at passport presentation, including 10 UK airport terminals.

The proposed segmentation process achieves a result of 99.5%, tested on 54 UBIRIS images. This may be improved by using other algorithms, which would further improve the accuracy of the system in terms of recognition rate and the speed of the segmentation step.

The iris recognition system may also be tested on a larger number of UBIRIS eye images to improve its behaviour in real-time applications.

ACKNOWLEDGEMENT

This work was supported and guided by my research guide, Dr. Shaily Jain, Associate Professor, Chitkara University, Baddi (H.P.), India. I am very thankful to her for her guidance and support.

References

1. Steve Zhou and Junping Sun, "A Novel Approach for Code Match in Iris Recognition," 2013 IEEE/ACIS 12th International Conference on Computer and Information Science (ICIS), pp. 123-128, 16-20 June 2013.

2. Jimenez Lopez, F. R., Pardo Beainy, C. E. and Umana Mendez, O. E., "Biometric Iris Recognition Using Hough Transform," 2013 XVIII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA), pp. 1-6, 11-12 Sept. 2013.

3. J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, 14(1):21-30, 2004.

4. Khan, Mohd Tariq, Deepak Arora, and Shashwat Shukla, "Feature extraction through iris images using 1-D Gabor filter on different iris datasets," 2013 Sixth International Conference on Contemporary Computing (IC3), IEEE, 2013.

5. P. J. Phillips, A. Martin, C. L. Wilson, and M. Przybocki, "An introduction to evaluating biometric systems," Computer, vol. 33, no. 2, pp. 56-63, 2000.

6. Khan, M. T., Arora, D. and Shukla, S., "Feature Extraction through Iris Images using 1-D Gabor Filter," 2013 Sixth International Conference on Contemporary Computing (IC3), pp. 445-450, 8-10 Aug. 2013.

7. K. W. Bowyer, K. Hollingsworth, and P. J. Flynn, "Image understanding for iris biometrics: A survey," Computer Vision and Image Understanding, 110(2):281-307, 2008.

8. A. S. Tuama, "Iris Image Segmentation and Recognition," International Journal of Computer Science & Emerging Technologies (IJCSET), vol. 3, no. 2, E-ISSN 2044-6004, April 2012.

9. Ma, Li, et al., "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, 13(6):739-750, 2004.

10. H. Proença, "Iris recognition: On the segmentation of degraded images acquired in the visible wavelength," IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1502-1516, 2010.