
IRIS LOCALIZATION USING GRAYSCALE TEXTURE ANALYSIS AND RECOGNITION USING BIT PLANES

Abdul Basit

A thesis submitted to the College of Electrical and Mechanical Engineering

National University of Sciences and Technology, Rawalpindi, Pakistan,

in partial fulfillment of the requirements for the degree of

Doctor of Philosophy

Department of Computer Engineering

College of Electrical and Mechanical Engineering

National University of Sciences and Technology, Rawalpindi, Pakistan

2009


Abstract

Identification and verification of human beings is very important given today's security conditions throughout the world. Since the beginning of the 19th century, the iris has been used for the recognition of humans. Recent efforts in computer vision have made it possible to develop automated systems that can recognize individuals efficiently and with high accuracy. The main functional components of existing iris-recognition systems are image acquisition, iris localization, feature extraction and matching. While designing such a system, one must understand the physical nature of the iris as well as image processing and analysis techniques in order to build an accurate system. The most difficult and time-consuming part of iris recognition is iris localization. In this thesis, the performance of the iris localization and normalization stages of iris recognition systems has been enhanced through the development of effective and efficient strategies. Bit plane and wavelet based features have been analyzed for recognition.

Iris localization is the most important step in iris recognition systems. The iris is localized by first finding the boundary between the pupil and the iris, using different methods for different databases because the image acquisition devices and environments differ. The non-circular boundary of the pupil is obtained by dividing the circular pupil boundary into specific points, forcing these points to shift to the exact boundary of the pupil and then joining them linearly.

The boundary between the iris and the sclera is obtained by finding points of maximum gradient along different radially outward directions. Redundant points are discarded based on their distance from the center of the pupil, which is justified because the distance between the pupil center and the iris center is very small. The directions are restricted to the left and right sectors of the iris, with the pupil center taken as the origin of the axes.

Eyelids are detected by fitting parabolas to points satisfying specific criteria. Experimental results show that the proposed method is highly efficient compared with existing methods.

Improved localization results are reported using the proposed methods. The experiments are carried out on four different iris image datasets. Correct localization rates of 100% (circular pupil boundary), 99.8% (non-circular pupil), 99.77% (iris outer boundary), 98.91% (upper eyelid detection) and 96.6% (lower eyelid detection) have been achieved across the different datasets.

To compensate for the change in iris size due to pupil constriction / dilation and camera-to-eye distance, different normalization schemes based on different reference points have been designed and implemented.

Two main feature extraction methodologies have been proposed: one based on the bit planes of the normalized image and the other utilizing the properties of the wavelet transform.

Recognition results based on bit plane features of the iris have also been obtained, and a correct recognition rate of up to 99.64% has been achieved on CASIA version 3.0. Results on other databases have also shown encouraging performance, with accuracies of 94.11%, 97.55% and 99.6% on the MMU, CASIA version 1.0 and BATH iris databases respectively.

Different wavelets have been applied to obtain the best iris recognition results. Different levels of wavelet transform (Haar, Daubechies, Symlet, Coiflet, Biorthogonal and Mexican hat) along with different numbers of coefficients have been used. The Coiflet wavelet gave high accuracies of 99.83%, 96.59%, 98.44% and 100% on the CASIA version 1.0, CASIA version 3.0, MMU and BATH iris databases respectively.


Acknowledgement

First and foremost, I would like to express my deepest gratitude and innumerable thanks to the most merciful, the most beneficent, and the most gracious Almighty Allah who gave me the courage and motivation to undertake this challenging task.

I would like to express my sincere gratitude to my advisor Prof. Dr. Muhammad Younus

Javed for the continuous support of my PhD study and research, for his patience,

motivation, enthusiasm, and immense knowledge. His guidance helped me throughout the research and the writing of this thesis. Without his excellent guidance and tremendous

support, this research work would have been impossible. Besides my advisor, I would

like to thank the rest of my thesis committee: Prof. Dr. Azad Akhter Siddiqui, Prof. Dr.

Shoab Ahmad Khan, and Dr. Muid Mufti, for their insightful comments and

encouragement.

I am grateful to Mr. Saqib Masood for his continuous motivation throughout the degree

and Mr. Muhammad Abdul Samad for his valuable suggestions, generous help and ideas

in completing this thesis. In particular, I would like to thank Mr. Haroon-ur-Rasheed for

giving me my first glimpse of the research area.

I am deeply thankful to my parents, my wife, and siblings for their tremendous moral

support and uncountable prayers to support me spiritually throughout my life.

I am indebted to many of my colleagues, Dr. Qamar-ul-Haq, Dr. Salman and Dr. Almas,

for their support.

I would like to thank the Higher Education Commission for the award of a scholarship and my office for granting me leave for higher studies.

Lastly, I offer my regards and blessings to all of those who supported me in any respect

during the completion of the degree.


Dedication

This work is dedicated to my family.


Table of Contents

Chapter 1: Introduction
  1.1 Biometrics
    1.1.1 Properties for a Biometric
  1.2 Some Biometrics
    1.2.1 Face Recognition
    1.2.2 Fingerprint
    1.2.3 Hand Geometry
    1.2.4 Retina
    1.2.5 Signature Verification
    1.2.6 Voice Authentication
    1.2.7 Gait Recognition
    1.2.8 Ear Recognition
    1.2.9 Iris Recognition
  1.3 Location of Iris in Human Eye
    1.3.1 Color of the eye
    1.3.2 Working of the Eye
    1.3.3 Anatomy and Structure of Iris
  1.4 Research on Iris Recognition
  1.5 Iris Recognition System

Chapter 2: Existing Iris Recognition Techniques
  2.1 Background
  2.2 Iris Image Acquisition
  2.3 Iris Localization
    2.3.1 Edge Detectors
    2.3.2 Existing Iris Localization Methods
  2.4 Iris Normalization
    2.4.1 Existing Methods
  2.5 Feature Extraction
    2.5.1 Gabor Filter
    2.5.2 Log Gabor Filter
    2.5.3 Zero Crossings of 1D Wavelets
    2.5.4 Haar Wavelet
  2.6 Matching Algorithms
    2.6.1 Normalized Hamming Distance
    2.6.2 Euclidean Distance
    2.6.3 Normalized Correlation

Chapter 3: Proposed Methodologies
  3.1 Proposed Iris Localization Method
    3.1.1 Pupil Boundary Detection
    3.1.2 Non-Circular Pupil Boundary Detection
    3.1.3 Iris Boundary Detection
    3.1.4 Eyelids Localization
  3.2 Proposed Normalization Methods
    3.2.1 Normalization via Pupil Center
    3.2.2 Normalization via Iris Center
    3.2.3 Normalization via Minimum Distance
    3.2.4 Normalization via Mid-point between Iris and Pupil Centers
    3.2.5 Normalization using Dynamic Size Method
  3.3 Proposed Feature Extraction Methods
    3.3.1 EigenIris Method or Principal Component Analysis
    3.3.2 Bit Planes
    3.3.3 Wavelets
  3.4 Matching
    3.4.1 Euclidean Distance
    3.4.2 Normalized Hamming Distance

Chapter 4: Design & Implementation Details
  4.1 Iris Localization
    4.1.1 Circular Pupil Boundary Detection
    4.1.2 Non-Circular Pupil Boundary Detection
    4.1.3 Iris Boundary Detection
    4.1.4 Eyelids Localization
  4.2 Normalization Methods
    4.2.1 Normalization From Pupil Module
    4.2.2 Normalization From Iris Module
    4.2.3 Normalization From Minimum Distance Module
    4.2.4 Normalization From Mid-point Module
    4.2.5 Normalization With Dynamic Size Module
  4.3 Feature Extraction Methods
    4.3.1 Principal Component Analysis
    4.3.2 Bit planes
    4.3.3 Wavelets
  4.4 Matching
    4.4.1 Euclidean Distance
    4.4.2 Normalized Hamming Distance

Chapter 5: Results & Discussions
  5.1 Databases Used for Evaluation
  5.2 CASIA Version 1.0
    5.2.1 Pupil Localization
    5.2.2 Non-circular Pupil Localization
    5.2.3 Iris Localization
    5.2.4 Eyelids Localization
  5.3 CASIA Version 3.0
    5.3.1 Pupil Localization
    5.3.2 Non-circular Pupil Localization
    5.3.3 Iris Localization
    5.3.4 Eyelids Localization
  5.4 University of Bath Iris Database (free version)
    5.4.1 Pupil Localization
    5.4.2 Non-circular Pupil Localization
    5.4.3 Iris Localization
    5.4.4 Eyelids Localization
  5.5 MMU Version 1.0
    5.5.1 Pupil Localization
    5.5.2 Non-circular Pupil Localization
    5.5.3 Iris Localization
    5.5.4 Eyelids Localization
  5.6 Errors in Localization
    5.6.1 Errors in Circular Pupil Localization
    5.6.2 Errors in Non-circular Pupil Localization
    5.6.3 Errors in Iris Localization
    5.6.4 Errors in Eyelids Localization
  5.7 Comparison with Other Methods
    5.7.1 Accuracy
    5.7.2 Computational Complexity
  5.8 Normalization
  5.9 Feature Extraction and Matching
    5.9.1 Principal Component Analysis
      a. Experiment Set 1 (Dimension Reduction)
      b. Experiment Set 2 (Training Images)
      c. Experiment Set 3 (Training Classes)
    5.9.2 Bit planes
      a. Results on BATH
      b. Results on CASIA version 1.0
      c. Results on CASIA version 3.0
      d. Results on MMU
    5.9.3 Wavelets
      a. Results on CASIA version 1.0 using Daubechies 2
      b. Results using other wavelets on CASIA version 1.0
      c. Results on CASIA version 3.0
      d. Results on MMU
      e. Results on BATH

Chapter 6: Conclusions and Future Research Work
  6.1 Design & Implementation Methodologies
  6.2 Performance of the Developed System
  6.3 Future Research Work

Appendix I
Appendix II
References


List of Figures

Figure 1.1: Location of Iris
Figure 1.2: Different colors of Iris
Figure 1.3: Structure of the eye
Figure 3.1: Schematic diagram of iris recognition system
Figure 3.2: Finding non-circular boundary of pupil
Figure 3.3: Normalization using pupil center as reference point
Figure 3.4: Normalization using iris center as reference point
Figure 3.5: Minimum distance between the points at same angle
Figure 3.6: Mid-point of centers of iris and pupil as reference point
Figure 3.7: Concentric circles at pupil center P and dynamic iris normalized image
Figure 3.8: Haar Wavelet
Figure 3.9: Daubechies Wavelets
Figure 3.10: Coiflets Wavelets
Figure 3.11: Symlets Wavelets
Figure 4.1: Flow chart for detection of pupil boundary module
Figure 4.2: Steps for Pupil Localization CASIA version 1.0
Figure 4.3: Used symmetric lines for finding points on circle
Figure 4.4: Steps involved in Pupil Localization CASIA Version 3.0
Figure 4.5: Steps involved in Pupil Localization for MMU Database
Figure 4.6: Non-circular pupil boundary
Figure 4.7: Steps for Iris Localization CASIA version 1.0
Figure 4.8: Steps for Iris Localization CASIA version 3.0
Figure 4.9: Steps for Iris Localization MMU Iris database
Figure 4.10: Steps for Iris Localization MMU iris database
Figure 4.11: Steps for Upper Eyelid localization CASIA Ver 1.0 Iris database
Figure 4.12: Normalized images with different methods
Figure 4.13: One step decomposition of an image
Figure 5.1: Images in different datasets
Figure 5.2: Some correctly localized images in CASIA version 1.0
Figure 5.3: Some correctly localized images in CASIA version 3.0
Figure 5.4: Some correctly localized images in BATH Database free version
Figure 5.5: Some correctly localized images in MMU Database version 1.0
Figure 5.6: Comparison of steps in iris localization in different databases
Figure 5.7: Inaccuracies in circular pupil localization
Figure 5.8: Inaccuracies in non-circular pupil localization
Figure 5.9: Inaccuracies in iris localization
Figure 5.10: Inaccuracies in eyelid localization
Figure 5.11: Time comparison of Normalization methods
Figure 5.12: Time comparison of normalization using iris center as reference point
Figure 5.13: Results of Normalized 4 using PCA for CASIA version 3.0 iris database
Figure 5.14: Results of Normalized 4 using PCA for BATH iris database
Figure 5.15: PCA using different training image on CASIA version 1.0
Figure 5.16: PCA using different training image on CASIA version 3.0
Figure 5.17: PCA using different training image on MMU
Figure 5.18: PCA using different training image on BATH
Figure 5.19: Accuracy of PCA on all databases using three training images
Figure 5.20: Training time of PCA on all databases using three training images
Figure 5.21: Recognition time of PCA on all databases using three training images
Figure 5.22: ROC curves for different features with six enrolled images
Figure 5.23: Results of iris recognition on CASIA version 3.0 using bit plane 5
Figure 5.24: Iris recognition rate using bit plane 5 on MMU iris database
Figure 5.25: Results of iris recognition using Daubechies 2 on CASIA version 1.0
Figure 5.26: Results of iris recognition including average training images
Figure 5.27: ROC using Coiflet 5 wavelets for CASIA version 1.0
Figure 5.28: Iris recognition results on CASIA version 3.0 using Coiflet 5 wavelet
Figure 5.29: Results of Coiflet 5 wavelet on MMU iris database
Figure 5.30: Results of Coiflet 5 wavelet on BATH iris database


List of Tables

Table 5.1: Some attributes of the datasets
Table 5.2: Results of Iris localization in CASIA version 1.0
Table 5.3: Results of Iris localization in CASIA version 3.0
Table 5.4: Results of Iris localization in BATH (free version)
Table 5.5: Results of Iris localization in MMU version 1.0
Table 5.6: Results of iris localization for CASIA version 1.0
Table 5.7: Results of Pupil localization for CASIA version 1.0
Table 5.8: Results of iris localization for CASIA version 3.0
Table 5.9: Results of iris localization for BATH iris database
Table 5.10: Results of iris localization for MMU Iris Dataset
Table 5.11: Radii of pupil and iris in the databases
Table 5.12: Iris recognition rate with Normalized 2 using PCA for CASIA version 1.0
Table 5.13: Accuracy with Normalized 2 using PCA for MMU iris database
Table 5.14: Results of recognition for BATH Iris dataset
Table 5.15: Effect of image resolution on accuracy on CASIA version 1.0
Table 5.16: Results with 50*256 image resolution on CASIA version 1.0
Table 5.17: Result of CASIA version 3.0 when normalized iris width is 49 pixels
Table 5.18: Results of iris recognition with image resolution 58*256 on MMU
Table 5.19: Results of iris recognition with different wavelets on CASIA version 1.0
Table 5.20: Iris recognition results on CASIA version 1.0 including average image
Table 5.21: Results with Coiflet 5 wavelet at image resolution 43*256


Chapter 1: Introduction

1.1 Biometrics

The history of human identification is as old as human beings themselves. With the development of science and technology in today's modern world, human activities and transactions have been growing tremendously. Authenticating users has become an inseparable part of all transactions involving human-computer interaction. Most conventional modes of authentication are based on knowledge-based systems, i.e. “what we know” (e.g. passwords, PIN codes), and/or token-based systems, i.e. “what we have” (e.g. ID cards, passports, driving licenses) [1]. Biometrics brings in stronger authentication capabilities

by adding a third factor, “who we are” based on our inherent physiological or behavioral

characteristics. The term "biometrics" is derived from the Greek words bio (life) and

metric (to measure). In other words, bio means living creature and metrics means the

ability to measure an object quantitatively [2]. The use of biometrics has been traced back

as far as the Egyptians, who measured people to identify them. Biometric technologies

are hence becoming the foundation of an extensive array of highly protected

identification and personal verification systems.

Biometrics is the branch of science which deals with automated methods of recognizing a person based on a physiological or behavioral characteristic. This technology involves capturing and processing an image of a unique feature of an individual and comparing it with a previously processed image stored in a database. Behavioral characteristics include voice, odor, signature and gait, whereas physiological characteristics include face, fingerprint, hand geometry, ear, retina, palm print and iris. All

biometric identification systems rely on forms of random variation among persons in these characteristics. The more complex the randomness, the more unique the features available for identification, because more dimensions of independent variation produce codes with greater uniqueness.

Every biometric system has the following layout. First, it captures a sample of the

feature, such as recording a digital sound signal for voice recognition, or taking a digital

color image for face or iris recognition, or taking a retina scan for retina recognition.


The sample is then transformed using some sort of mathematical function into a

biometric template. The biometric template will provide a normalized, efficient and

highly discriminating representation of the features, which then can be compared with

other templates in order to determine identity.

Most biometric systems allow two modes of operation: an enrolment mode for adding templates to a database, and a matching mode, where a template is created for an individual and a match is then searched for in the database of pre-enrolled templates. Matching can be done in two ways: “verification”, in which a one-to-one comparison is carried out, and “identification”, in which one template is compared against the whole database.
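The following minimal sketch illustrates this distinction between verification (one-to-one) and identification (one-to-many). It is not code from the thesis; the binary template representation and the dissimilarity function are illustrative assumptions only.

```python
import numpy as np


def distance(template_a: np.ndarray, template_b: np.ndarray) -> float:
    """Placeholder dissimilarity measure between two binary templates
    (here simply the fraction of differing bits)."""
    return float(np.mean(template_a != template_b))


def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float) -> bool:
    """Verification: one-to-one comparison against a single claimed identity."""
    return distance(probe, enrolled) < threshold


def identify(probe: np.ndarray, database: dict, threshold: float):
    """Identification: one-to-many search over all pre-enrolled templates;
    returns the best-matching identity, or None if nothing is close enough."""
    best_id, best_dist = None, threshold
    for subject_id, template in database.items():
        d = distance(probe, template)
        if d < best_dist:
            best_id, best_dist = subject_id, d
    return best_id
```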

A physiological characteristic can be considered a biometric if it has the following properties [3].

1.1.1 Properties for a Biometric

• Universality

Each person should have the characteristic.

• Distinctiveness

Any two persons should be sufficiently different in terms of the characteristic.

• Permanence

The characteristic should be sufficiently invariant (with respect to the matching

criterion) over a period of time.

• Collect-ability

The characteristic can be measured quantitatively.

• User-friendliness

People must be willing to accept the system, the scanning procedure should not be intrusive and the whole system should be easy to use.

• Accuracy

The accuracy of the system must be high enough; there must be a balance between the

FAR (False Accept Rate) and FRR (False Reject Rate) depending upon the use of

the system.


However, in a practical biometric system these properties must actually be realizable [4]. In addition, there are a number of other issues that should be considered, such as:

• Performance: It refers to the achievable recognition accuracy and speed, the

resources required to achieve the desired recognition accuracy and speed, as well

as the operational and environmental factors that affect the accuracy and speed.

• Acceptability: It indicates the extent to which people are willing to accept the use

of a particular biometric identifier (characteristic) in their daily lives.

• Circumvention: It reflects how easily the system can be fooled using fraudulent

methods.

• Cost: It is always a concern. In this case, the life-cycle cost of system

maintenance must also be taken into account.

1.2 Some Biometrics

Based on the basic definitions of biometrics given above, this section gives a brief description of different biometric systems [5].

1.2.1 Face Recognition

Face recognition is one of the most active research areas in computer

vision and pattern recognition [6-14]. Its wide range of applications includes forensic identification, access control, face-based video indexing and browsing engines, biometric identity authentication, human-computer interaction and multimedia monitoring/surveillance.

The task of a face recognition system is to compare an input face image against a

database containing a set of face samples with known identity [15-22]. Facial recognition

has some shortcomings, especially when trying to identify individuals in different environmental settings (such as changes in lighting) or after changes in physical facial features (such as new scars or a beard).

1.2.2 Fingerprint

Fingerprint imaging technology has been in existence for centuries. The use of

fingerprints as a unique human identifier dates back to the second century B.C. in China,


where the identity of the sender of an important document could be verified by his

fingerprint impression in the wax seal.

Fingerprint imaging technology aims to capture or read the unique pattern of ridge lines on the tip of a finger. These patterns form loops, whorls or arches.

The most common method involves recording and comparing

the fingerprint's “minutiae points”. Minutiae points can be

considered the uniqueness of an individual's fingerprint [23]. In

a typical fingerprint [24] that has been scanned by a fingerprint

identification system, there are generally between 30 and 40 minutiae. Research in fingerprint identification technology has improved the identification rate to greater than 98 percent and reduced the false positive rate to less than one percent within the Automated Fingerprint Identification System (AFIS) criminal justice program.

1.2.3 Hand Geometry

Hand geometry is essentially based on the fact that virtually

every individual's hand is shaped differently from every other individual's hand, and the shape of a person's hand does not change significantly with the passage of time [25]. The basic

principle of operation behind the use of hand geometry is to

measure or record the physical geometric characteristics of an individual's hand. From

these measurements, a profile is constructed that can be used to compare against

subsequent hand readings by the user [26].

There are many benefits to using hand geometry as a solution to general security issues,

including speed of operation, reliability, accuracy, small template size, ease of integration

into an existing system, and user-friendliness. Now, there are thousands of locations all

over the world that use hand geometry devices for access control and security purposes.

1.2.4 Retina

Retinal biometric involves analyzing the layer of blood vessels situated at the back of the

eye. Retinal scans involve a low-intensity infrared light that is projected through the pupil


of the eye and onto the retina. Infrared light is used based on the fact that the blood

vessels on the retina absorb the infrared light faster than surrounding eye tissues. The

infrared light with the retinal pattern is reflected back to a video camera.

The video camera captures the retinal pattern and converts it into

data that is 35 elements in size [27]. This is not particularly

convenient if you are wearing glasses or concerned about having

close contact with the reading device. For these reasons, retinal

scanning is not warmly accepted by all users, although the

technology itself can work well. The current hurdle for retinal identification is the

acceptance by the users. Retinal identification has several disadvantages, including susceptibility to disease damage (e.g. cataracts), being viewed as intrusive and not very user friendly, and the high degree of both user and operator skill required.

1.2.5 Signature Verification

Signature verification analyzes the way a user signs his / her name.

Signing features include speed, velocity and pressure on writing

material. These features are as important as the finished

signature's static shape [28-31]. Signature verification enjoys a

synergy with existing processes that other biometrics do not. People are used to

signatures as a means of transaction-related identity verification and most would see

nothing unusual in extending this to encompass biometrics. Surprisingly, relatively few

significant signature applications have emerged compared with other biometric

methodologies.

1.2.6 Voice Authentication

Despite the inherent technological challenges, voice

recognition technology’s most popular applications will likely

provide access to secure data over telephone lines. Voice

biometrics has potential for growth because it requires no new

hardware. However, poor audio quality and surrounding noise can affect the verification process. In addition, the enrollment procedure is more complicated than for other biometrics, making it less


user-friendly. Speaker recognition systems [32] fall into two basic types: text-dependent

and text-independent. In text-dependent recognition, the speaker says a predetermined

phrase. This technique inherently enhances recognition performance, but requires a

cooperative user. In text-independent recognition, the speaker does not say a predetermined phrase and need not cooperate with, or even be aware of, the recognition system.

Speaker recognition suffers from several limitations. Different people can have similar

voices [33-35], and anybody’s voice can vary over time because of changes in health,

emotional state and age. Furthermore, variation in handsets or in the quality of a

telephone connection complicates the recognition process.

1.2.7 Gait Recognition

Gait recognition is a relatively new field in biometrics. A unique

advantage of gait as a biometric is that it offers potential for

recognition at a distance or at low resolution when other biometrics

might not be perceivable [36-41]. Recognition can be based on the

(static) human shape as well as walking, suggesting a richer recognition cue. Further, gait

can be used when other biometrics are obscured. It is difficult to conceal and/or disguise

motion as this generally impedes movement.

1.2.8 Ear Recognition

Ear recognition is carried out by three different methods: (i) taking a

photo of an ear, (ii) taking “earmarks” by pushing the ear against a flat glass and (iii) taking thermogram pictures of the ear [42-45]. The most interesting parts of the ear are the outer ear and the ear lobe, but the whole ear structure and shape is used [46]. Taking a photo of the ear is the most commonly used method in research: the photo is compared with previously taken photos to identify a person. An ear database is publicly available via the internet [47].

1.2.9 Iris Recognition

Iris recognition is a method of biometric authentication that uses pattern recognition

techniques based on images of the irises of an individual's eyes [1, 48-64]. Iris


recognition uses camera technology and subtle IR illumination to reduce specular

reflection from the convex cornea to create images of the detail-rich

intricate structures of the iris. These unique structures are converted

into digital templates. They provide mathematical representations of

the iris that yield unambiguous positive identification of an

individual.

Iris recognition efficacy is rarely impeded by glasses or contact lenses. Iris technology

has the smallest outlier (those who cannot use/enroll) group of all biometric technologies.

It is the only biometric authentication technology designed for use in a one-to-many

search environment. A key advantage of iris recognition is its stability or template

longevity: barring trauma, a single enrollment can last a lifetime [65].

Among the physiological characteristics, the iris is the best biometric, as it has all the properties of a good biometric.

1.3 Location of Iris in Human Eye

The iris is the colored part of the eye which is visible when the eye is open. In an eye image, the blackish round-shaped part is the pupil. The iris is the only internal organ that can be seen externally; it lies around the pupil and inside the sclera, as shown in Figure 1.1.

Figure 1.1: Location of Iris


1.3.1 Color of the eye

The iris gives color to the eye which depends on the amount of pigment present. If the

pigment is dense, the iris is brown. If there is a little pigment, the iris is blue. In some

cases, there is no pigment at all, so the eye appears light. Different pigments color eyes in

various ways to create the eye colors such as gray, green, etc. In bright light, the iris

muscles constrict the pupil thereby reducing the amount of light entering the eye.

Conversely, the pupil enlarges in dim light in order to allow a greater amount of light to reach the retina. Some irises with different colors are shown in Figure 1.2 [66].

Figure 1.2: Different colors of Iris

1.3.2 Working of the Eye

Light passes through the front structures of the eye (i.e. the cornea, lens and so forth).

These structures focus the light on the retina, a layer of light receptors at the back of the

eye. These receptors translate the image into a neural message which travels to the brain

via the optic nerve [67].

Light passes through a layer of transparent tissues at the front of the eye called the

cornea. The cornea bends the light and it is the first element in the eye's focusing system.

The light then passes through the anterior chamber, a fluid-filled space just behind the

cornea. This fluid is called the aqueous humor and it is produced by a gland called the

ciliary body. The light then passes through the pupil. The iris is a ring of pigmented


muscular tissue that controls the size of the pupil. It regulates how much light enters the

eye - the pupil grows larger in dim light and shrinks to a smaller hole in bright light. The

light passes through the lens that helps focus the light from the pupil onto the retina.

Light from the lens passes through the vitreous body which is a clear jelly-like substance

that fills the back part of the eyeball. It is focused onto the retina that is a layer of light-

sensitive tissue at the back of the eye. The retina contains light-sensitive cells called

photoreceptors. It translates the light energy into electrical signals. These electrical

signals travel to the brain via the optic nerve. The retina is nourished by the choroid (a

highly vascularized membrane that exists just behind the retina). Aside from the

transparent cornea at the front of the eye, the eyeball is encased by a tough, white and

opaque membrane called the sclera [68].

Figure 1.3: Structure of the eye

1.3.3 Anatomy and Structure of Iris

The iris is a circular, adjustable diaphragm surrounding the pupil. It is located in the chamber

behind the cornea. The iris is the extension of a large and smooth muscle which also

connects to the lens via a number of suspensor ligaments. These muscles expand and

contract to change the shape of the lens and to adjust the focus of images onto the retina

[26]. A thin membrane beyond the lens provides a light-tight environment inside the eye, thus preventing stray light from confusing or interfering with visual images on the

retina. This is extremely important for clear high-contrast vision with good resolution or


definition. The most frontal chamber of the eye, immediately behind the cornea and in

front of the iris, contains a clear watery fluid that facilitates good vision. It helps to

maintain eye shape, regulates the intra-ocular pressure, provides support for the internal

structures, supplies nutrients to the lens and cornea and disposes of the eye's metabolic

waste. The rear chamber of the front cavity lies behind the iris and in front of the lens. It

helps provide optical correction for the image on the retina. Some recent optical designs

also use coupling fluids for increased efficiency and better correction.

1.4 Research on Iris Recognition

Apparently, the first use of iris recognition as a basis for personal identification goes back

to efforts to distinguish inmates in the Parisian Penal System by visually inspecting their

irises, especially the patterning of color. In 1936, ophthalmologist Frank Burch proposed

the concept of using iris patterns as a method to recognize an individual [69]. By the

1980s, the idea had appeared in James Bond films but it still remained in science fiction

and conjecture [70]. In 1985, Leonard Flom and Aran Safir, ophthalmologists, proposed

the concept that no two irises are alike and were awarded a patent for the iris

identification concept in 1987 [63]. Flom approached John Daugman to develop an

algorithm to automate identification of the human iris. In 1993, the Defense Nuclear

Agency began work to test and deliver a prototype unit which was successfully

completed by 1995 with their combined efforts. In 1994 [64], Daugman was awarded a

patent for his automated iris recognition algorithms.

1.5 Iris Recognition System

The iris recognition system consists of an automatic segmentation stage that is based on an edge detector and is able to localize the circular iris and pupil regions as well as occluding eyelids, eyelashes and reflections. The extracted iris region is then normalized into a rectangular block with constant dimensions to account for imaging inconsistencies. Features are extracted with different feature extraction methods to encode the unique pattern of the iris into a biometric template. The Hamming distance is employed for classification of iris templates, and two templates are declared to match if the Hamming distance is less than a specific threshold.
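As a concrete illustration of this matching rule, the sketch below computes a normalized Hamming distance between two binary iris codes. This is a minimal sketch, not the thesis implementation; the optional noise masks and the example threshold of 0.32 are assumptions for illustration only.

```python
import numpy as np


def normalized_hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fraction of disagreeing bits between two binary iris codes,
    restricted to bits that are valid (unmasked) in both templates."""
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    valid = np.ones_like(code_a, dtype=bool)
    if mask_a is not None:
        valid &= np.asarray(mask_a, dtype=bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, dtype=bool)
    disagreements = np.logical_xor(code_a, code_b) & valid
    return disagreements.sum() / valid.sum()


# Two templates are declared a match when the distance falls below a threshold
# (0.32 is only an illustrative value):
# is_match = normalized_hamming_distance(code_a, code_b) < 0.32
```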


Chapter 2: Existing Iris Recognition Techniques

2.1 Background

A complete iris recognition system is composed of four parts: image acquisition, iris

localization, feature extraction and matching. The image acquisition step captures the iris

images. Infrared illumination is used in most iris image acquisition. The iris localization

step localizes the iris region in the image. For most algorithms, assuming near-frontal

presentation of the pupil, the iris boundaries are modeled as two circles which are not

necessarily concentric. The inner circle is the pupillary boundary or iris inner boundary

(i.e. between the pupil and the iris). The outer circle is the limbic boundary or iris outer

boundary (i.e. between the iris and the sclera). The noise processing is often included in

the segmentation stage. Possible sources of noise are eyelid occlusions, eyelash

occlusions and specular reflections. Most localization algorithms are gradient based in

order to find edges between the pupil & iris and the iris & sclera. The feature extraction

stage encodes the iris image features into a bit vector code. In most algorithms, filters are

utilized to obtain information about the iris texture. Then the outputs of the filters are

encoded into a bit vector code. The corresponding matching stage calculates the distance

between iris codes and decides whether it is a match (in the verification context) or

recognizes the submitted iris from the subjects in the data set (in the identification

context).

2.2 Iris Image Acquisition

Iris recognition has been an active research area for the last few years due to its high accuracy and the encouragement of both government and private entities to replace traditional security systems, which suffer from a noticeable margin of error. However, early

research was obstructed by the lack of iris images. Now several free databases exist on

the internet for testing purposes. A well-known database is the CASIA Iris Image Database

(version 1.0 and 3.0) provided by the Chinese Academy of Sciences [71]. The CASIA

version 1.0 iris image database includes 756 iris images from 108 eyes collected over two

sessions over a period of two months. The images, taken in almost perfect imaging


conditions, are noise-free with size 320*280 pixels. The CASIA Iris Image Database

Version 3.0 includes 2655 iris images of size 320*280 pixels from 396 eyes. Iris image

dataset of University of Bath (BATH) free version contains 1000 iris images from 50

different eyes. Another iris database of Multi-Media University (MMU) is also used for

experiments. MMU iris database [72] contains 450 images from 45 people. Left and right

eyes are captured five times each that makes a total of 90 classes. Each image has

320*240 pixels resolution in grayscale.

2.3 Iris Localization

Iris localization is the most important step in iris recognition systems because all the subsequent steps depend on its accuracy. In general, this step involves detecting edges using edge detectors followed by boundary detection algorithms. The following section describes some commonly used edge detectors.

2.3.1 Edge Detectors

An edge operator is a neighborhood operation which determines the extent to which each

pixel's neighborhood can be partitioned by a simple arc passing through the pixel. Pixels

in the neighborhood on one side of the arc have one predominant value and pixels in the

neighborhood on the other side of the arc have a different predominant value [73, 74].

Usually gradient operators, Laplacian operators and zero-crossing operators are used for edge detection. The gradient operators compute some quantity related to the magnitude of the slope of the underlying image gray tone intensity surface. The Laplacian

operators calculate some quantity related to the Laplacian of the underlying image gray

tone intensity surface. The zero-crossing operators determine whether or not the digital

Laplacian or the estimated second direction derivative has a zero-crossing within the

pixel [75].

2.3.1.1 Gradient Based

In this edge detection method, the assumption is that edges are the pixels with a high

gradient. A fast rate of change of intensity in the direction given by the angle of the

gradient vector is observed at edge pixels. The magnitude of the gradient indicates the


strength of the edge. Natural images do not contain ideal discontinuities or perfectly uniform regions, so the magnitude of the gradient is calculated to detect edge pixels. A

threshold is fixed with respect to magnitude. If the gradient magnitude is larger than the

threshold then the corresponding pixel is an edge pixel. An edge pixel is described using

two important features:

• Edge strength : magnitude of the gradient

• Edge direction : angle of the gradient

Strictly speaking, the gradient is not defined for a discrete function. Instead, the gradient, which is defined for an ideal continuous image, is estimated using operators; among these, the “Roberts”, “Sobel” and “Prewitt” operators are commonly used.

2.3.1.1.1 Roberts Operator

The Roberts method finds edges using the Roberts approximation to the derivative. It returns edges at those points where the gradient of the image is maximum. The Roberts operator provides a simple approximation to the gradient magnitude using the following equation [76]:

$$R = |R_x| + |R_y| \qquad (2.1)$$

where $R_x$ and $R_y$ are calculated using the following convolution filters:

$$R_x = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \qquad R_y = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$$
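A minimal sketch of how these masks might be applied to a grayscale image is given below; it assumes NumPy and SciPy are available and is not the thesis implementation.

```python
import numpy as np
from scipy.ndimage import convolve


def roberts_edges(image):
    """Approximate the gradient magnitude with the 2x2 Roberts cross masks
    (equation 2.1): |Rx * I| + |Ry * I|."""
    image = np.asarray(image, dtype=float)
    rx = np.array([[1.0, 0.0],
                   [0.0, -1.0]])
    ry = np.array([[0.0, -1.0],
                   [1.0, 0.0]])
    gx = convolve(image, rx)
    gy = convolve(image, ry)
    return np.abs(gx) + np.abs(gy)
```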

2.3.1.1.2 Sobel Operator

The Sobel operator is one of the most commonly used edge detectors. In this operator, the gradient is calculated over a 3 x 3 pixel neighborhood. The Sobel operator output is the magnitude of the gradient, computed by the following equation [76]:

$$Mag = \sqrt{S_x^2 + S_y^2} \qquad (2.2)$$

where $S_x$ and $S_y$ are the first-order partial derivatives in the x and y directions respectively. If the 3 x 3 neighborhood of pixel (i, j) is as follows:

a1 a2 a3
a4 [i,j] a5
a6 a7 a8


Then $S_x$ and $S_y$ are computed using equations 2.3 and 2.4:

$$S_x = (a_3 + c\,a_5 + a_8) - (a_1 + c\,a_4 + a_6) \qquad (2.3)$$

$$S_y = (a_1 + c\,a_2 + a_3) - (a_6 + c\,a_7 + a_8) \qquad (2.4)$$

where the constant c = 2. These are implemented using the convolution masks:

$$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \qquad S_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$$

This operator places an emphasis on pixels that are closer to the center of the mask.
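As an illustration (a sketch assuming NumPy/SciPy, not code from the thesis), the Sobel masks above can be applied and thresholded to obtain the edge strength and edge direction described earlier in this section:

```python
import numpy as np
from scipy.ndimage import convolve


def sobel_edges(image, threshold):
    """Sobel gradient magnitude (equation 2.2) and direction, with a simple
    threshold on the magnitude to mark edge pixels."""
    image = np.asarray(image, dtype=float)
    sx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    sy = np.array([[ 1,  2,  1],
                   [ 0,  0,  0],
                   [-1, -2, -1]], dtype=float)
    gx = convolve(image, sx)
    gy = convolve(image, sy)
    magnitude = np.hypot(gx, gy)    # edge strength
    direction = np.arctan2(gy, gx)  # edge direction (radians)
    edges = magnitude > threshold
    return edges, magnitude, direction
```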

2.3.1.1.3 Prewitt Operator

The Prewitt method [76] finds edges using the Prewitt approximation to the derivative. It

returns edges at those points where the gradient of the image I is maximum. Unlike the Sobel

operator, this operator does not place any emphasis on pixels that are closer to the center

of the masks. It uses the equations 2.3 and 2.4 for computing the partial derivatives along

x and y-directions using the constant c = 1.

These are implemented using the following convolution masks:

$$P_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \qquad P_y = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix}$$

2.3.1.2 Laplacian Based or Zero Crossing Based

The Laplacian of Gaussian (LoG) method finds edges by looking for zero crossings after

filtering image with a LoG filter [76]. The edge points of an image can be detected by

finding the zero crossings of the second derivative of the image intensity. However,

second derivative is very sensitive to noise. This noise should be filtered out before edge

detection. To achieve this, “Laplacian of Gaussian” is used [77]. This method combines

Gaussian filtering with the Laplacian for edge detection. The following equation is used to obtain the LoG:

$$LoG(x, y) = -\frac{1}{\pi\sigma^4}\left[1 - \frac{x^2 + y^2}{2\sigma^2}\right]e^{-\frac{x^2 + y^2}{2\sigma^2}} \qquad (2.5)$$

where σ is the smoothing factor. In LoG edge detection, the following three steps are

significant:


• Filtering

• Enhancement

• Detection

Gaussian filter is used for smoothing and the second derivative of which is used for the

enhancement step. The detection criterion is the presence of a zero crossing in the second

derivative to the corresponding large peak in the first derivative. Those pixels having

locally maximum gradient are considered as edges by the edge detector in which zero

crossings of the second derivative are used. To avoid detection of insignificant edges,

only the zero crossings, whose corresponding first derivative is above some threshold, are

selected as edge point. The edge direction is obtained using the direction in which zero

crossing occurs.

In the LoG, there are two methods which are mathematically equivalent [77]:

• Convolve the image with a Gaussian smoothing filter and compute the Laplacian

of the result.

• Convolve the image with the linear filter that is the LoG filter.

In either case, smoothing (filtering) is performed with a Gaussian

filter. Enhancement is done by transforming edges into zero crossings and detection is

done by detecting the zero crossings.

2.3.1.3 Canny Operator

This edge detection method is optimal for step edges corrupted by white noise. Canny

[78] used three criteria to design his edge detector. The first requirement is reliable

detection of edges with low probability of missing true edges and a low probability of

detecting false edges. In second requirement, the detected edges should be close to the

true location of the edge. For last requirement, there should be only one response to a

single edge [79]. The Canny method finds edges by looking for local maxima of the

gradient of the image intensity. The gradient is calculated using the derivative of a

Gaussian filter. The method uses two thresholds to detect strong and weak edges. It

includes the weak edges in the output only if they are connected to strong edges. This

method is, therefore, less likely than the others to be fooled by noise and more likely to

detect true weak edges.


The Canny operator works in a multi-stage process. First of all, the image is smoothed by

Gaussian convolution. Then, a simple 2-D first derivative operator (somewhat like the

Roberts Cross) is applied to the smoothed image to highlight regions of the image with

high first spatial derivatives. Edges give rise to ridges in the gradient magnitude image.

The algorithm then tracks along the top of these ridges and sets to zero all pixels that are

not actually on the ridge top so as to give a thin line in the output (a process known as

non-maximal suppression). The tracking process exhibits hysteresis controlled by two

thresholds: T1 and T2 (with T1 > T2). Tracking can only begin at a point on a ridge

higher than T1. Tracking then continues in both directions out from that point until the

height of the ridge falls below T2. This hysteresis helps to ensure that noisy edges are not

broken up into multiple edge fragments.
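For reference, ready-made implementations of this multi-stage detector exist; a minimal example using scikit-image (not the tool used in the thesis) is sketched below. The example image and the three parameter values are arbitrary placeholders.

```python
import numpy as np
from skimage import data, feature

img = data.camera().astype(float) / 255.0   # any grayscale image scaled to [0, 1]
# sigma sets the Gaussian smoothing; the two thresholds drive the hysteresis tracking
edges = feature.canny(img, sigma=2.0, low_threshold=0.1, high_threshold=0.3)
print(edges.sum(), "edge pixels found")
```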

2.3.1.4 Hough Transform

To find the simple shapes (like lines, circles, ellipse in the images), Hough transform is a

nice option. The simplest case of Hough transform is a Hough linear transform. In the

image, the straight line can be described as:

y = mx + c    2.6

It is plotted for each pair of values (x, y), where m is the slope of the line and c is y-

intercept. For computational purposes, however, it is better to parameterize the lines in

the Hough transform with two other parameters, commonly called r and θ. The parameter

r represents the smallest distance between the line and the origin, while θ is the angle of

the locus vector from the origin to this closest point. Using this parameterization, the

equation of the line can be written as:

r = x\cos\theta + y\sin\theta    2.7

It is, therefore, possible to associate to each line of the image, a couple (r, θ) which is

unique if θ belongs to [0, π] and r is real, or if θ belongs to [0, 2π] and r is greater than 0.

The (r, θ) plane is sometimes referred to as Hough space [80]. It is well known that an

infinite number of lines can go through a single point of the plane. If that point has

coordinates (x_0, y_0) in the image plane, all the lines that go through it obey the following

equation:

r(\theta) = x_0\cos\theta + y_0\sin\theta    2.8


This corresponds to a sinusoidal curve in the (r, θ) plane, which is unique to that point. If

the curves corresponding to two points are superimposed, the locations (in the Hough

space) where they cross, correspond to lines (in the original image space) that pass

through both points. More generally, a set of points that form a straight line will produce

sinusoids which cross at the parameters for that line. Thus, the problem of detecting co-

linear points can be converted to the problem of finding concurrent curves.

Hough transform algorithm uses an array called accumulator to detect the existence of a

line. The dimension of the accumulator is equal to the number of unknown parameters of

Hough transform problem. For each pixel and its neighborhood, Hough transform

algorithm determines if there is enough evidence of an edge at that pixel. If so, it will

calculate the parameters of that line, and then look for the accumulator's bin that the

parameters fall into, and increase the value of that bin. By finding the bins with the

highest value, the most likely lines can be extracted and their geometric definitions can

be read off. The simplest way of finding these peaks is by applying some form of

threshold.
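A minimal sketch of this accumulator voting for straight lines, assuming a binary edge image as input, is given below; it is an illustration of the procedure described above rather than the implementation used in the thesis.

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    """Accumulate votes in (r, theta) space for a binary edge image (eq. 2.7)."""
    h, w = edge_img.shape
    thetas = np.deg2rad(np.arange(n_theta))            # theta in [0, pi)
    r_max = int(np.ceil(np.hypot(h, w)))
    accumulator = np.zeros((2 * r_max, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)                      # coordinates of edge pixels
    for x, y in zip(xs, ys):
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        accumulator[r + r_max, np.arange(n_theta)] += 1    # one vote per theta
    return accumulator, thetas
```

Peaks of the returned accumulator (found, for example, by thresholding) correspond to the dominant lines in the edge image.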

2.3.2 Existing Iris Localization Methods

In 1993, Daugman [81] built an iris recognition system. The localization accuracy was

98.6% using an integro-differential operator to locate the boundaries of the iris. Wildes

[55] used boundary detection based on image gradients and the Hough transform to locate the iris in the image. Cui [59] used a coarse-to-fine strategy and a modified Hough transform.

Shen [57] applied wavelet analysis for localization of iris.

2.3.2.1 Daugman’s Method

Daugman [81] presented the first approach to computational iris recognition, including

iris localization. An integro-differential operator is proposed for locating the inner and

outer boundaries of an iris. The operator assumes that pupil and limbus are circular

contours and performs as a circular edge detector. Detecting the upper and lower eyelids

is also performed using the integro-differential operator by adjusting the contour search from circular to an arcuate path [54]. The integro-differential operator is defined as


\max_{(r,\, x_0,\, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\, x_0,\, y_0} \frac{I(x, y)}{2\pi r}\, ds \right|    2.9

where I(x, y) is an image containing an eye. The integro-differential operator searches

over the image domain (x, y) for the maximum in the blurred partial derivative with

respect to increasing radius r of the normalized contour integral of I(x, y) along a circular

arc ds of radius r and center coordinates (x0, y0). The symbol ∗ denotes convolution and

G_\sigma(r) is a smoothing function such as a Gaussian of scale σ, defined as:

G_\sigma(r) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(r - r_0)^2}{2\sigma^2}}    2.10

The integro-differential operator behaves as a circular edge detector. It searches for the

gradient maxima over the 3D parameter space, so there are no threshold parameters

required as in the Canny edge detector [78]. Daugman simply excludes the upper and

lower most portions of the image, where eyelid occlusion is expected to occur.
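A discretized sketch of the search in equation 2.9 is given below, purely for illustration; it evaluates circular contour integrals over a coarse grid of candidate centres and radii and keeps the candidate with the largest smoothed radial derivative. The grids of candidate centres and radii are placeholders to be chosen for a given database.

```python
import numpy as np

def circle_mean(img, x0, y0, r, n=64):
    """Mean intensity of I(x, y) along a circle of radius r centred at (x0, y0)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip(np.round(x0 + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential(img, centers, radii, sigma=2.0):
    """Return (x0, y0, r) maximising the blurred radial derivative of the contour integral."""
    best, best_val = None, -np.inf
    kernel = np.exp(-np.arange(-3, 4) ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    for (x0, y0) in centers:
        means = np.array([circle_mean(img, x0, y0, r) for r in radii])
        deriv = np.abs(np.diff(means))                  # d/dr of the contour integral
        smooth = np.convolve(deriv, kernel, mode='same')
        idx = int(np.argmax(smooth))
        if smooth[idx] > best_val:
            best_val, best = smooth[idx], (x0, y0, radii[idx])
    return best
```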

2.3.2.2 Wildes’s Method

Wildes [55] had proposed an iris recognition system in which iris localization is

completed by detecting edges in iris images followed by use of a circular Hough

transform [82] to localize iris boundaries. In a circular Hough transform, images are

analyzed to estimate the three parameters of one circle (x_0, y_0, r) using the following

equations:

H(x_0, y_0, r) = \sum_i h(x_i, y_i, x_0, y_0, r)    2.11

where (x_i, y_i) is an edge pixel and i is the index of the edge pixel,

h(x_i, y_i, x_0, y_0, r) = \begin{cases} 1, & \text{if } g(x_i, y_i, x_0, y_0, r) = 0 \\ 0, & \text{otherwise} \end{cases}

where

g(x_i, y_i, x_0, y_0, r) = (x_i - x_0)^2 + (y_i - y_0)^2 - r^2    2.12


The location (x_0, y_0, r) with the maximum value of H(x_0, y_0, r) is chosen as the

parameter vector for the strongest circular boundary. Wildes’ system models the eyelids

as parabolic arcs. The upper and lower eyelids are detected by using a Hough transform

based approach similar to that described above. The only difference is that it votes for

parabolic arcs instead of circles. One weak point of the edge detection and Hough

transform approach is the use of thresholds in edge detection. Different settings of

threshold values may result in different edges that in turn affect the Hough transform

results significantly [58].
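A minimal sketch of the circular Hough voting in equations 2.11-2.12 is given below, assuming a binary edge image and a single known radius; a full search would simply repeat this over a range of radii and keep the strongest peak overall. It is illustrative only, not Wildes's implementation.

```python
import numpy as np

def circular_hough(edge_img, radius):
    """Vote for circle centres (x0, y0) at a fixed radius (eq. 2.11-2.12)."""
    h, w = edge_img.shape
    acc = np.zeros((h, w), dtype=int)
    t = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        # every edge pixel votes for all centres lying at distance `radius` from it
        cx = np.round(x - radius * np.cos(t)).astype(int)
        cy = np.round(y - radius * np.sin(t)).astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc   # the peak of acc gives the most likely circle centre
```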

2.3.2.3 Boles’s Method

Boles et al. [52] proposed an iris recognition method. Iris localization starts by

locating the pupil of the eye, which was done by using some edge detection technique. As

it was a circular shape, the edges defining it are connected to form a closed contour. The

centroid of the detected pupil is chosen as the reference point for extracting the features

of the iris. Iris outer boundary is also detected by using the edge-image.

2.3.2.4 Li Ma’s Method

Ma et al. [53] estimated the pupil position using pixel intensity value projections and

thresholding. Centroid of the specific region is calculated to obtain the center of pupil.

After that a circular Hough transform is applied to detect the iris outer boundary.

2.3.2.5 Other methods

Some other methods have been proposed for iris localization but most of them are minor

variants of integro-differential operator or combination of edge detection and Hough

transform. For example, Cui et al. [59] computed a wavelet transform and then used the Hough transform to locate the iris' inner boundary, while using the integro-differential operator for the outer boundary. Tain et al. [60] used the Hough transform after preprocessing of the edge image. Masek et al. [56] implemented an edge detection method slightly different from the Canny operator and then used a circular Hough transform for iris boundary extraction. Rad et al. [61] used gradient vector pairs in various directions to coarsely estimate the position of the circle and then used the integro-differential operator to refine the iris boundaries. Kim et al. [51] used mixtures of three Gaussian distributions to


coarsely segment eye images into dark, intermediate & bright regions and then used a

Hough transform for iris localization. All previous research work on iris localization used

only image gradient information and the rate of iris extraction is not high in practice.

2.4 Iris Normalization

In this section, a brief description of different iris recognition systems with respect to iris

normalization is provided. Iris normalization is a step in which iris is unwrapped to a

rectangular strip for feature extraction. Iris images of the same eye have different iris

sizes due to the difference between camera and eye. Illumination has direct impact on

pupil size and causes non-linear variations of iris patterns. A proper normalization

technique is expected to transform the iris image to compensate these variations.

2.4.1 Existing Methods

Existing techniques for iris normalization are explained in the succeeding sections.

2.4.1.1 Daugman’s Method

Daugman’s system [49] uses radial scaling to compensate for overall size as well as a

simple model of pupil variation based on linear stretching. This scaling serves to map

Cartesian image coordinates (x, y) to dimensionless polar coordinates (r, θ) according to

the following equations:

x(r, \theta) = (1 - r)\, x_p(\theta) + r\, x_i(\theta)    2.13

y(r, \theta) = (1 - r)\, y_p(\theta) + r\, y_i(\theta)    2.14

where

x_p(\theta) = x_{p0}(\theta) + r_p \cos\theta    2.15

y_p(\theta) = y_{p0}(\theta) + r_p \sin\theta    2.16

x_i(\theta) = x_{i0}(\theta) + r_i \cos\theta    2.17

y_i(\theta) = y_{i0}(\theta) + r_i \sin\theta    2.18

This model is called the rubber sheet model; it assumes that the iris texture changes linearly in the radial direction. It maps the radial coordinate from the pupil boundary to the iris outer boundary onto the interval [0, 1], while θ is cyclic over [0, 2π]. Here (x_p(θ), y_p(θ)) and (x_i(θ), y_i(θ)) are


the coordinates of the iris inner and outer boundaries in the direction θ and

(x_{p0}(θ), y_{p0}(θ)) and (x_{i0}(θ), y_{i0}(θ)) are the coordinates of the pupil and iris centers

respectively. Daugman compensates for rotation in the matching process by circularly shifting the normalized iris in different directions.
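A minimal sketch of this unwrapping, assuming circular pupil and iris boundaries given as (x0, y0, radius) tuples and nearest-neighbour sampling, is shown below; it is an illustration of equations 2.13-2.18 rather than Daugman's implementation.

```python
import numpy as np

def rubber_sheet(img, pupil, iris, n_r=64, n_theta=256):
    """Unwrap the iris annulus to an n_r x n_theta rectangle (eq. 2.13-2.18)."""
    xp0, yp0, rp = pupil
    xi0, yi0, ri = iris
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rs = np.linspace(0, 1, n_r)
    # boundary points in direction theta (eq. 2.15-2.18)
    xp = xp0 + rp * np.cos(thetas)
    yp = yp0 + rp * np.sin(thetas)
    xi = xi0 + ri * np.cos(thetas)
    yi = yi0 + ri * np.sin(thetas)
    # linear interpolation between the two boundaries (eq. 2.13-2.14)
    x = (1 - rs[:, None]) * xp[None, :] + rs[:, None] * xi[None, :]
    y = (1 - rs[:, None]) * yp[None, :] + rs[:, None] * yi[None, :]
    xs = np.clip(np.round(x).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]
```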

2.4.1.2 Wildes’s Method

Wildes [55] has proposed a technique in which image is normalized to compensate both

scaling and rotation in matching step.

This approach geometrically warps a newly acquired image I_a(x, y) into alignment with a selected database image I_d(x, y) according to a mapping function (u(x, y), v(x, y)) such that, for all (x, y), the image intensity value at (x, y) - (u(x, y), v(x, y)) in I_a is close to that at (x, y) in I_d. More precisely, the mapping function (u, v) is taken to minimize the

following error function:

errfn = \int_x \int_y \left( I_d(x, y) - I_a(x - u, y - v) \right)^2 dx\, dy    2.19

The mapping is constrained to capture a similarity transformation of image coordinates (x, y) to (x', y'), i.e.

\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix} - s R(\phi) \begin{pmatrix} x \\ y \end{pmatrix}    2.20

where s is a scaling factor and R(φ) is a matrix representing rotation by φ. The

parameters s and φ are recovered by an iterative minimization procedure [83].

2.4.1.3 Boles’s Method

Boles [52] proposed the normalization of images at the time of matching. When two

images are considered, one image is considered as a reference image. The ratio of the

maximum diameter of the iris in this image to that of the other image is calculated. This

ratio is used to make the virtual circles on which data for feature extraction is picked up.

The dimensions of the irises in the images are scaled to have the same constant diameter

regardless of the original size in the image.


2.4.1.4 Li Ma’s Method

Ma [53] used a combination of methods of iris normalization that were proposed by

Daugman [54] and Boles [52]. In this method, the normalization process is carried out by

using center of the pupil as a reference point.

2.4.1.5 Other Methods

Other methods of iris normalization are almost the same as proposed by Daugman. The

normalization method makes the iris invariant to scale, translation and pupil dilation

changes. The rectangular image after normalization is not rotation invariant. In general,

circular shift in different directions is used for achieving rotation invariance during

matching process.

2.5 Feature Extraction

Features are extracted using the normalized iris image. The most discriminating

information in an iris pattern must be extracted. Only the significant features of the iris

must be encoded so that comparisons between templates can be made.

2.5.1 Gabor Filter

A Gabor filter is constructed by modulating a sine/cosine wave with a Gaussian. This is

able to provide the optimum conjoint localization in both space and frequency, since a

sine wave is perfectly localized in frequency, but not localized in space. Modulation of

the sine with a Gaussian provides localization in space, though with loss of localization in

frequency. Decomposition of a signal is accomplished using a quadrature pair of Gabor

filters. A real part is specified by a cosine modulated by a Gaussian and an imaginary part

is specified by a sine modulated by a Gaussian. The real and imaginary filters are also

known as the even symmetric and odd symmetric components respectively.

The centre frequency of the filter is specified by the frequency of the sine/cosine wave.

The bandwidth of the filter is specified by the width of the Gaussian. Daugman [49, 54,

64, 81] makes use of a 2D version of Gabor filters in order to encode iris pattern data. A

2D Gabor filter over an image domain (x,y) is represented as:


G(x, y) = e^{-\pi\left[\frac{(x - x_0)^2}{\alpha^2} + \frac{(y - y_0)^2}{\beta^2}\right]}\, e^{-2\pi i\left[u_0 (x - x_0) + v_0 (y - y_0)\right]}    2.21

where (x_0, y_0) specify position in the image, (α, β) specify the effective width and length, and (u_0, v_0) specify modulation.

2.5.2 Log Gabor Filter

A disadvantage of the Gabor filter is that the even symmetric filter will have a DC

component whenever the bandwidth is larger than one octave [84]. However, zero DC

components can be obtained for any bandwidth by using a Gabor filter which is Gaussian

on a logarithmic scale. It is known as the Log-Gabor filter. The frequency response of a

Log-Gabor filter is given as:

G(f) = e^{-\frac{\left(\log(f/f_0)\right)^2}{2\left(\log(\sigma/f_0)\right)^2}}    2.22

where f_0 represents the centre frequency and σ gives the bandwidth of the filter.

2.5.3 Zero Crossings of 1D Wavelets

Boles et al. [52] made use of 1D wavelets [85] for encoding iris pattern data. The mother wavelet ψ is defined as the second derivative of a smoothing function θ(x):

\psi(x) = \frac{d^2 \theta(x)}{dx^2}    2.23

The zero crossings of dyadic scales of these filters are then used to encode features. The

wavelet transform of a signal f(x) at scale s and position x is given by:

W_s f(x) = f * \left( s^2 \frac{d^2 \theta_s}{dx^2} \right)(x) = s^2 \frac{d^2}{dx^2}\left( f * \theta_s \right)(x)    2.24

where

\theta_s(x) = (1/s)\, \theta(x/s)    2.25


W_s f(x) is proportional to the second derivative of f(x) smoothed by θ_s(x), and the zero crossings of the transform correspond to points of inflection in f * θ_s(x). The motivation for this technique is that zero-crossings correspond to significant features within the iris

region.

2.5.4 Haar Wavelet

Lim et al. [50] also used the wavelet transform to extract features from the iris region. Both the Gabor transform and the Haar wavelet are considered as the mother wavelet. From multi-dimensional filtering, a feature vector with 87 dimensions is computed.

Since each dimension has a real value ranging from -1.0 to +1.0, the feature vector is sign

quantized so that any positive value is represented by 1 and negative value as 0. This

results in a compact biometric template consisting of only 87 bits. Lim et al. [50]

compared the use of Gabor transform and Haar wavelet transform and showed that the

recognition rate of Haar wavelet transform was slightly better than Gabor transform (i.e.

by 0.9%).

2.6 Matching Algorithms

Once features are extracted, the template generated in the feature extraction process needs a corresponding matching metric, which gives a measure of

similarity between two iris templates. This metric should give one range of values when

comparing templates generated from the same eye (known as intra-class comparisons)

and another range of values when comparing templates created from different irises

(known as inter-class comparisons). These two cases should give distinct and separate

values so that a decision can be made with high confidence as to whether two templates

are from the same iris or from two different irises.

2.6.1 Normalized Hamming Distance

In iris recognition systems, the most widely used similarity metric is normalized

Hamming distance. In information theory, the Hamming distance between two strings of

equal length is the number of positions for which the corresponding symbols are


different. In other words, the number of digit positions in which the corresponding digits

of two binary words of the same length are different. In feature extraction module, if the

features are converted in binary format then the Hamming distance is used to find the

match. A threshold is defined with respect to the normalized Hamming distance: a Hamming distance less than the threshold is taken as a match. The smaller the normalized Hamming distance, the stronger the match.

Normalized Hamming distance is defined as follows [76]:

HD = \frac{1}{n} \sum_{i=1}^{n} X_i \;\mathrm{XOR}\; Y_i    2.26

where X and Y are strings with length of “n” bits.
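A minimal Python sketch of equation 2.26 is shown below (illustrative only); the two example codes reuse the six-bit strings discussed later in Section 3.4.2.

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Normalized Hamming distance between two equal-length binary iris codes (eq. 2.26)."""
    a = np.asarray(code_a, dtype=bool)
    b = np.asarray(code_b, dtype=bool)
    return np.count_nonzero(a ^ b) / a.size

# 001101 vs 001110 differ in two of six positions -> 2/6 = 0.33
print(hamming_distance([0, 0, 1, 1, 0, 1], [0, 0, 1, 1, 1, 0]))
```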

2.6.2 Euclidean Distance

Euclidean distance between two points in p-dimensional space is a geometrically shortest

distance on the straight line passing through both the points.

For a distance between two p-dimensional features x = (x_1, x_2, \ldots, x_p) and y = (y_1, y_2, \ldots, y_p), the Euclidean metric is defined as [86]:

d(x, y) = \left[ \sum_{i=1}^{p} (x_i - y_i)^2 \right]^{1/2}    2.27

In matrix notation, this is written as:

d(x, y) = \sqrt{(x - y)^t (x - y)}    2.28

2.6.3 Normalized Correlation

Normalized correlation is also used as classification metric. Correlation addresses the

relationship between two different factors (variables). The statistic is called a correlation

coefficient. A correlation coefficient can be calculated when there are two (or more) sets

of scores for the same individuals or matched groups. A correlation coefficient describes

direction (positive or negative) and degree (strength) of relationship between two

variables. Higher the correlation coefficient means that stronger the relationship between

the quantities. The coefficient is also used to obtain a p value indicated whether the

degree of relationship is greater than expected by chance.


Normalized correlation is advantageous over standard correlation since it is able to

account for local variations in image intensity that corrupt the standard correlation

calculation used by Wildes [55].

This is represented as:

\mu_1 = \frac{1}{mn} \sum_{i=1}^{n} \sum_{j=1}^{m} p_1(i, j)    2.29

\sigma_1 = \sqrt{\frac{1}{mn} \sum_{i=1}^{n} \sum_{j=1}^{m} \left( p_1(i, j) - \mu_1 \right)^2}    2.30

The normalized correlation between p_1 and p_2 is then defined as:

NormCorr(p_1, p_2) = \frac{1}{mn\,\sigma_1 \sigma_2} \sum_{i=1}^{n} \sum_{j=1}^{m} \left( p_1(i, j) - \mu_1 \right)\left( p_2(i, j) - \mu_2 \right)    2.31

where p_1 and p_2 are two images of size n by m pixels, \mu_1 and \sigma_1 are the mean and standard deviation of p_1, and \mu_2 and \sigma_2 are the mean and standard deviation of p_2.
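The following short sketch of equations 2.29-2.31 is illustrative only; it assumes p1 and p2 are two NumPy arrays of identical shape.

```python
import numpy as np

def norm_corr(p1, p2):
    """Normalized correlation between two equally sized image patches (eq. 2.29-2.31)."""
    p1 = p1.astype(float)
    p2 = p2.astype(float)
    mu1, mu2 = p1.mean(), p2.mean()     # eq. 2.29
    s1, s2 = p1.std(), p2.std()         # eq. 2.30
    return np.mean((p1 - mu1) * (p2 - mu2)) / (s1 * s2)   # eq. 2.31
```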


Chapter 3: Proposed Methodologies

Iris localization is the most important step in iris recognition systems. All the subsequent

steps (feature extraction, encoding and matching) depend on its accuracy [48]. If iris is

not correctly localized, then performance of the system is degraded. In the iris

localization step, iris region in the image is separated by means of different algorithms. In

the algorithms, assuming frontal presentation of the pupil, the iris boundaries are modeled

as two circles, which are not necessarily concentric. The inner circle is the pupil

boundary or iris inner boundary (i.e. between the pupil and the iris). The outer circle is

the limbic boundary or iris outer boundary (i.e. between the iris and the sclera). The noise

processing is often included in the segmentation stage. Possible sources of noise are

eyelid occlusions, eyelash occlusions and specular reflections. Most localization

algorithms are gradient based, involving finding the edges between the pupil and iris and between the iris and sclera. After localization, the next step is normalization of the iris. The iris controls the

amount of light entering the eye. Its response to different light conditions is non-linear

because of distribution of iris muscles [67].

3.1 Proposed Iris Localization Method

In iris localization, pupil boundary is detected by using the following methods. The

schematic diagram of iris localization system is shown in Figure 3.1. First step in the iris

localization method is detection of pupil which is followed by localization of the pupil in

which parameters of pupil are determined and non-circular boundary is calculated. After

that, iris outer boundary is localized in which iris parameters are found. Then, eyelids are

detected to completely localize the iris [48].

3.1.1 Pupil Boundary Detection

Detection of pupil boundary is the first step towards iris localization. Pupil parameters

(center and radius) are calculated by assuming pupil as a circular region. Algorithms 1

and 2 are proposed to find the pupil parameters.

a. Algorithm 1

1. Read the image of iris.

2. Apply decimation algorithm.


Figure 3.1: Schematic diagram of iris recognition system

3. Find a point in the pupil.

4. Initialize previous centroid with the point in pupil.

5. Repeat until single pixel accuracy is achieved

• Select the region.

• Obtain centroid.

• Compare the previous centroid with current.

6. Calculate radius of the pupil.

Explanation of the algorithm 1 is given after decimation algorithm.

(Figure 3.1 shows the following blocks: input image; RGB to grayscale conversion, if the image is colored; pupil detection; iris localization comprising pupil localization, iris outer boundary localization and eyelid detection; eyelash removal; iris normalization via a reference point (pupil center, iris center, mid-point of the iris and pupil centers, minimum distance, or dynamic size); feature extraction using PCA, bit planes, statistical features or wavelet features; saving of training-image features to the database; and matching of test-image features by Hamming or Euclidean distance to reach a decision.)


Decimation Algorithm

Following equation is used to decimate the image. Before applying this equation, a

parameter “L” is assigned as integer value, which is the size of the squared mask W.

D(i, j) = \frac{1}{L^2} \sum_{y=1}^{L} \sum_{x=1}^{L} I(x + i,\, y + j)\, W(x, y), \qquad i = 1, \ldots, M,\; j = 1, \ldots, N    3.1

where I(x, y) is the original image, D(i, j) is the decimated image of size M×N and W is defined as:

W = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix}_{L \times L}    3.2

Explanation

After applying decimation algorithm following formulas are used to find a point inside

the pupil.

P_x = \underset{col}{\arg\min} \sum D(x, y)    3.3

P_y = \underset{row}{\arg\min} \sum D(x, y)    3.4

where D(i, j) is the decimated image.
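A small illustrative sketch of this step is given below; it interprets equation 3.1 as an L x L averaging mask (implemented here with SciPy's uniform_filter as a stand-in) and then takes the column and row of minimum sum, equations 3.3-3.4, exploiting the fact that the pupil is the darkest region. This is a simplified reading of the decimation step, not the thesis code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def find_pupil_point(img, L=8):
    """Roughly locate a point inside the pupil (eq. 3.1, 3.3-3.4)."""
    d = uniform_filter(img.astype(float), size=L)   # L x L averaging mask (eq. 3.1)
    px = int(np.argmin(d.sum(axis=0)))              # column with minimum sum (eq. 3.3)
    py = int(np.argmin(d.sum(axis=1)))              # row with minimum sum (eq. 3.4)
    return px, py
```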

Once a point in the pupil is found, next step is to make binary image. For making the

processing fast, a squared region is selected assuming the point ,( )x yP P as point of

intersection of two diagonals of the square. A threshold is selected adaptively based on

maximum value in histogram of the region. Centroid of the region is obtained using the

following equations:

C_x = \frac{M_x}{A}    3.5

C_y = \frac{M_y}{A}    3.6

where

M_x = \iint_w x\, dA    3.7

M_y = \iint_w y\, dA    3.8

and

A = \iint_w dx\, dy    3.9

where “A” is the area of window “w”. Centroid of the binary image provides the center

of the pupil. This procedure (i.e. selecting squared region, obtaining histogram, making

binary image, calculating centroid) is repeated till the single pixel accuracy is achieved.

This point is the exact center of the pupil.

As exact center is determined, radius of pupil is calculated by finding the average of

maximum number of consecutive non-zeros in four different directions from the center of

the pupil.

Radius = \mathrm{mean}\{\text{no. of consecutive non-zero pixels}\}    3.10
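A minimal sketch of the iterative centroid refinement of Algorithm 1 is shown below; it is illustrative only, uses a fixed window half-size, and replaces the histogram-based adaptive threshold of the thesis with a simple range-based rule.

```python
import numpy as np

def refine_pupil_center(img, seed, half=60, max_iter=20):
    """Iteratively move the pupil centre to the centroid of the dark (pupil) pixels
    in a square window around the current estimate (Algorithm 1, eq. 3.5-3.9)."""
    cx, cy = seed
    for _ in range(max_iter):
        x0, x1 = max(cx - half, 0), min(cx + half, img.shape[1])
        y0, y1 = max(cy - half, 0), min(cy + half, img.shape[0])
        win = img[y0:y1, x0:x1]
        thresh = win.min() + 0.3 * (win.max() - win.min())   # simplified adaptive threshold
        ys, xs = np.nonzero(win < thresh)                    # dark (pupil) pixels
        if xs.size == 0:
            break
        new_cx, new_cy = int(x0 + xs.mean()), int(y0 + ys.mean())
        if (new_cx, new_cy) == (cx, cy):                     # single-pixel accuracy reached
            break
        cx, cy = new_cx, new_cy
    return cx, cy
```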

b. Algorithm 2

Algorithm 2 is for CASIA iris database version 3.0, in which each image has eight small white circles in the pupil [48]. These small circles themselves form a circular pattern. The following algorithm is used to obtain the pupil parameters.

1. Read the image of iris

2. Apply decimation algorithm

3. Find a point in the pupil

4. Apply edge detector

5. Remove the small edges

6. Repeat until exact pupil center is reached

• Evolve lines in different directions.

• Find the same edge intersection with maximum lines

• Adjust the location for pupil center

7. Calculate radius of the pupil

In Algorithm 2, the steps are the same until a point in the pupil is obtained. The Canny edge detector is applied to the original image because the pupil contains the small circles. To remove the effect of the edges of


those circles, edges with small length are deleted so that an image with pupil edge,

containing no edge inside pupil, is obtained.

As the location of a point inside the pupil is known, lines in different directions are extended from this point. Points of intersection between the edges and these lines are calculated. Since these lines emanate from a point inside the pupil, the edge of circular form intersects the maximum number of lines. This edge is the pupil boundary; the other edges are deleted. The average of the two intersection points on each line shifts the point towards the center of the pupil. This process of extending lines and averaging the intersection points on each line to shift the estimated center is repeated until single-pixel accuracy is achieved.

This is the exact center of the pupil.

3.1.2 Non-Circular Pupil Boundary Detection

The pupil boundary is not circular due to the non-linear behavior of the iris muscles under different illumination conditions, even if the images are acquired orthogonally to the eye. After finding the pupil center and radius, the following procedure is adopted to obtain the non-circular pupil boundary.

Points on the circle with the calculated pupil radius are used to form the non-circular boundary. These points are equally spaced in angle, with the center of the pupil taken as the origin. The following procedure is applied to each point.

To find the exact boundary of the pupil, points on the pupil are forced to change their

position towards the exact boundary points. This change is carried out by inspecting

maximum gradient along the line of equation 3.13, of length 25 pixels. The mid-point of the line is the point (x_1, y_1) on the circle. Let (x_c, y_c) be the center of the circle and r be its radius; then the equation of the circle is:

x^2 + y^2 - 2(x_c x + y_c y) = r^2 - x_c^2 - y_c^2    3.11

Therefore, the slope of the tangent to the circle at any point (x, y) is:

m = -\frac{x - x_c}{y - y_c}    3.12

The equation of the line passing through a point (x_1, y_1) and perpendicular to the tangent is:


y = \left( \frac{y_1 - y_c}{x_1 - x_c} \right) x + \frac{x_1 y_c - y_1 x_c}{x_1 - x_c}    3.13

Distance from the point to the position of the maximum gradient value is termed as “d”

(say). If maximum gradient value is outside the circle then “d” is added to the point

otherwise it is subtracted from the points. After addition or subtraction, distance from the

neighbouring points is measured. If this distance is noticeably different then this change

is reverted and new point is the mid-point of the neighbouring points. This new point is

on the exact boundary of the pupil.

The change of point, from circle to maximum gradient is applied after dividing the pupil

circular boundary into a specific number "PtPupilBoundary” given in equation 3.14.

PtPupilBoundary = \mathrm{round}(\pi \times r)    3.14

All the points are adjusted to their new positions and then joined linearly. This joined

curve is the non-circular boundary of the pupil. In Figure 3.2, the process of finding exact

boundary of pupil is displayed.

Figure 3.2: Finding non-circular boundary of pupil



3.1.3 Iris Boundary Detection

For iris localization, iris outer boundary detection is the most difficult step because the

contrast between iris and sclera is low as compared to the contrast between iris and pupil.

This contrast is so low that sometimes it is hardly possible to detect the boundary by

human eye observation. Algorithm 3 is used to find the iris boundary.

Algorithm 3

1. Gaussian filter is applied to the image.

2. From the center of the pupil two virtual circles depending upon the

radius of pupil are drawn, boundary between iris and sclera lies in

these circles.

3. An array of pixels is picked from the lines radially outwards within the

virtual circles.

4. Each array is convolved with 1D Gaussian filter.

5. On each of these convolved lines, three points with highest gradient

are chosen to draw the circle of iris.

6. Redundant points are discarded using Mahalanobis distance.

7. Call the draw Circle module.

Explanation

To reduce the effect of sharp features in determining iris/sclera boundary, Gaussian filter

of size 27×27 with standard deviation sigma of value three is applied to the image and

filtered image is used for further estimation of the boundary. Different sizes of the filter

are experimented. Smaller size does not provide the image with sufficient blurring and

larger size filter blends the iris boundary too much. After that a band of two circles is

calculated within which iris boundary falls. This band is used to reduce the computation

time. The radii of outer and inner circles of the band are based upon the radius of pupil

and the distance of first crest along horizontal line passing through the center of pupil in

the filtered image respectively. Let’s assume pupil center as origin of coordinate axes in

the image. In lower left and lower right quadrants, a sequence of different one

dimensional signals (radially outwards) are used to pick the boundary pixel coordinates

which has significant gradient. Mahalanobis distance [86] is determined from these points

to the center of the pupil by using the following formula.


Dist = \sqrt{(x - c)^t\, \Sigma^{-1}\, (x - c)}    3.15

where Σ is the covariance matrix, defined as:

\Sigma = \int (x - c)(x - c)^t\, p(x)\, dx    3.16

where x are the boundary points, c is the coordinate vector of the center of the pupil, and p(x) is the probability of point x.

distance in a band of eight pixels are used as an adapted threshold to select the points on

iris. This threshold is reckoned on the fact that iris and pupil centers are near to each

other and selected points are passing through a circle. Therefore, remaining (noisy) points

are deleted. Parameters of iris circle A, B and C are calculated from the selected points

using the following equation:

x^2 + y^2 + Ax + By + C = 0    3.17

The center of the iris is (-A/2, -B/2) and the radius of the iris is:

r = \frac{1}{2}\sqrt{A^2 + B^2 - 4C}    3.18

This method is effectively applied to both datasets.
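A minimal sketch of the least-squares fit of equation 3.17 from the selected boundary points, followed by equation 3.18, is given below; it is illustrative and not the thesis code.

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares fit of x^2 + y^2 + A*x + B*y + C = 0 (eq. 3.17-3.18)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    M = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs ** 2 + ys ** 2)
    (A, B, C), *_ = np.linalg.lstsq(M, b, rcond=None)
    center = (-A / 2.0, -B / 2.0)
    radius = 0.5 * np.sqrt(A ** 2 + B ** 2 - 4.0 * C)
    return center, radius
```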

3.1.4 Eyelids Localization

After localizing the iris with non-circular pupil boundary and circular iris boundary, now

eyelids are to be detected and removed for further processing. So the region of interest is

inside the iris boundary. Eyelids outside the iris boundary have no effect on the system.

Both upper and lower eyelids are checked for their presence inside the iris.

a. Upper Eyelid Detection

Upper eyelashes are normally heavy and affect the eyelid boundary detection process.

Detection of upper eyelids is carried out by using following algorithm.

Algorithm 4

1. Iris is cropped vertically from the image.

2. Upper half image is taken for further processing.

3. A virtual parabola above the half pupil is drawn.

4. Data from the virtual parabola to upper end of the image is taken.


5. Moving average filter is applied.

6. Points with maximum sharp change of rate in intensity values are

selected. Redundant points are deleted using three conditions.

7. If points are greater than fifteen then least square fit parabola is

applied on the remaining points otherwise eyelid does not cover the

iris.

Explanation

Iris from the image is pruned from left and right boundaries. Upper image portion is not

deleted whereas lower portion from center of pupil is discarded and the remaining part

(i.e. upper semicircle of iris) of the image is used for upper eyelid processing. A virtual

parabola near pupil upper boundary with following equation is drawn.

y^2 = -4ax    3.19

where a is some positive number representing the distance of directrix, from the vertex of

the parabola and (x, y) is a point on the parabola. Parabola is a set of points that are

equidistant from a fixed point and a fixed line. This fixed line is called directrix [87]. The

virtual parabola passes through three non-linear points. Two points are near the left and

right iris boundary and third is three pixels above the pupil boundary in vertical line of

pupil center. This virtual parabola makes the processing fast by letting less number of

points in further processing. One dimensional signals starting from first row going

vertically downwards till virtual parabola are picked from the original image and are

smoothed by applying moving average filter of five taps. This smoothness is to reduce

the effect of single eyelashes in the image. Maximum three points on each signal are

selected based on rate of change in the intensity value. If the selected points are not in the

iris region and less than a significant number then it is assumed that iris is not occluded

by the upper eyelid. Among these points, exact eyelid points are selected using following

criterion.

(a) P(x, y) < 120

The intensity value of image P(x, y) at the point (x, y) must be less than 120 as eyelid

is darker part of the image so it has values in the range from 0 to 119. If the value is

120 or higher then that point will not be considered as eyelid point.

(b) P(x, y) ≈ { P(x-1, y-1) or P(x-1, y) or P(x-1, y+1)}


Among the left three neighboring points (i.e. upper left, immediate left or lower left),

at least one point should have almost the same intensity value as the point under

consideration because eyelids are horizontal convex up or concave up curves.

(c) P(x, y) ≈ { P(x+1, y-1) or P(x+1, y) or P(x+1, y+1)}

Among the right three neighboring points (upper right, immediate right or lower

right), one point should be of the same intensity value as of the point under

consideration.

If a point satisfies all criterion, then it will be a candidate point for parabola. In this way,

points, which are not on eyelid boundary, are deleted and the effect of eyelashes in

finding upper eyelid is minimized. Afterwards, a parabola is fitted recursively passing

through the remaining points using least square curve fit method which determines

exactly the upper eyelid.
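A least-squares parabola through the surviving candidate points can be obtained, for example, with np.polyfit; the sketch below is illustrative only and does not reproduce the recursive fitting used in the thesis.

```python
import numpy as np

def fit_eyelid_parabola(xs, ys):
    """Least-squares parabola y = a*x^2 + b*x + c through candidate eyelid points."""
    a, b, c = np.polyfit(np.asarray(xs, float), np.asarray(ys, float), deg=2)
    return a, b, c

def eyelid_curve(coeffs, x):
    """Evaluate the fitted eyelid boundary at column positions x."""
    return np.polyval(coeffs, x)
```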

b. Lower Eyelid Detection

To detect the lower eyelids, same algorithm is used but with minor differences. These

differences are described below.

• Vertically cropped lower half iris image from the center of pupil is used for lower

eyelid detection.

• Third point for virtual parabola is three pixels below the pupil boundary.

• Parabolic equation for lower eyelid is:

y^2 = 4ax    3.20

where a is some positive number representing the distance of directrix, from the

vertex of the parabola and (x, y) is a point on the parabola.

• One dimensional signals are picked in the opposite direction (i.e. from last row to

the parabola).

Remaining algorithm is same and these changes are specified in the parameters of the function.

c. Eyelashes Removal

After localizing the iris and detection of eyelids, as a last step eyelashes are removed

from the image. This step is done after iris normalization as shown in Figure 3.1. In the

first part of eyelash removal, the histogram of the localized iris is taken. As eyelashes are


of low intensity values, therefore, initial part of the histogram reflects the presence of

eyelashes. If the number of pixels in initial part of histogram are within the specified

threshold value then the eyelash removal is carried out otherwise it is considered that

localized iris is free from eyelashes. Once the presence of eyelashes in the localized iris image is verified, the image is passed through a high-pass filter whose cut-off frequency is

defined by the maximum intensity value inside the initial part of the histogram. The

resultant image is completely localized iris image free from all noises (i.e. eyelids, pupil,

sclera, etc.).

3.2 Proposed Normalization Methods

Once the iris is fully segmented, it is normalized to make it invariant against the effect of camera-to-eye distance and variation in the size of the pupil within the iris. The iris is normalized using some reference point in the pupil. In general,

majority of the methods [1, 49, 53, 62, 64, 81] use pupil center as a reference point.

The reference point acts as the center of a sweeping ray, like the center point in radars. The iris is sampled under the sweeping ray based upon the width of the iris at a particular ray

position. For example, if iris has width of 128 pixels and normalized image has width of

64 pixels, then every second pixel is picked as iris data. If iris has width of 32 pixels and

normalized image has width of 64 pixels, then every pixel is picked twice to keep the size

of the normalized image constant. In this way the iris is normalized.

3.2.1 Normalization via Pupil Center

Before normalization, image pixels above upper and below lower eyelids are turned black

because these parabolic curves are ignored during the process of un-wrapping the iris.

Figure 3.3 shows a model of iris with two non-concentric circles with different radii.

Inner circle represents boundary between pupil and iris whereas outer circle represents

boundary between iris and sclera. Right side triangle is representing the same triangle

( )X IP∆ as in circles but zoomed. C P is an horizontal line segment. In this

processing, normalized image of size R×S pixels is obtained, where R and S are numbers

of rows and columns respectively.


In the previous processing, parameters of pupil and iris are calculated. Coordinates of

points P and I are known in preprocessing step since they represent the centers of pupil

and iris respectively. “X” is the point on the boundary of the iris (between iris and sclera)

and is rotated throughout the outer circle in counter clockwise direction. The concerned

part of the line (which is normalized to unity every time for un-wrapping the iris) is

between points A and X. For finding the length of line segment AX, the following

mathematics is used.

\theta = \beta - \alpha    3.21

\overline{PA} = \overline{PB} = r_1    3.22

\overline{AX} = d\cos\theta + \sqrt{r_2^2 - d^2\sin^2\theta} - r_1    3.23

Figure 3.3: Normalization using pupil center as reference point

On each line, R equidistant samples are picked and then an unwrapped normalized iris

image of size R×S pixels is passed to the feature extraction module. If there are curves at the left and right ends of the normalized iris, they indicate the presence of the lower eyelid, whereas a centered parabola maps to the upper eyelid. A normalized image

without any parabola implies that iris corresponding to this image is not occluded by any

eyelid. This mapping technique is applied when pupil boundary is assumed as circle.

When the pupil boundary is non-circular, the following changes are made in the above method of iris normalization. The pupil boundary is assigned the maximum grayscale value in the method of non-circular pupil boundary detection. The reference point is known, and the coordinates of point X correspond to the angle at which the iris is being normalized. The line joining the points P and X has a point with the maximum grayscale value. This value is



searched. Distance between its coordinates (maximum grayscale value) and reference

point is normalized to unity. Subsequently, samples of the iris are picked up.
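For the circular-pupil case, a minimal sketch of this sampling is given below; it computes the ray length AX of equation 3.23 for a ray at an absolute angle theta from the pupil centre and picks R equidistant samples along it. Sign conventions follow Figure 3.3 and the function names are illustrative.

```python
import numpy as np

def ray_samples(img, pupil_center, iris_center, r1, r2, theta, R=64):
    """Sample R equidistant points between A (pupil boundary) and X (iris boundary)
    along the ray at angle theta from the pupil centre (eq. 3.21-3.23)."""
    px, py = pupil_center
    ix, iy = iris_center
    d = np.hypot(ix - px, iy - py)           # distance between the two centres
    beta = np.arctan2(iy - py, ix - px)      # direction of the line joining the centres
    t = theta - beta                         # angle used in eq. 3.23
    ax_len = d * np.cos(t) + np.sqrt(r2 ** 2 - (d * np.sin(t)) ** 2) - r1   # eq. 3.23
    radii = r1 + np.linspace(0.0, 1.0, R) * ax_len
    xs = np.clip(np.round(px + radii * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(py + radii * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]
```

Repeating this for S equally spaced angles produces the R×S normalized iris image described above.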

3.2.2 Normalization via Iris Center

Let I and P be the iris and pupil centers respectively. X is a point on the boundary of the iris at a certain angle, as depicted in Figure 3.4. A is the point of intersection between the line IX and the pupil circle. Now the line segment AX is normalized to unity and

samples are picked up from the iris.

Figure 3.4: Normalization using iris center as reference point

To find the length of line segment AX, the following formulas are used:

\overline{AB} = r_1 \sin\alpha    3.24

\overline{IB} = r_1 \cos\alpha - d    3.25

\overline{AX} = r_2 - \sqrt{r_1^2 + d^2 - 2 r_1 d \cos\alpha}    3.26

where d is the distance between the pupil and iris centers.

Algorithm 5

1. Read the iris image.

2. Find circular parameter of pupil.

3. Find parameters of iris.

4. Take reference point as iris center and find S number of points on iris

boundary.

5. Repeat for each point on iris boundary



• Find point of intersection A on pupil circle and the line joining the

points X and I.

• Normalize the distance between point A and X.

• Pick up R number of equidistant sample points.

This algorithm results in a normalized iris image of size R×S pixels.

3.2.3 Normalization via Minimum Distance

A normalization method based on the minimum distance of the points on the pupil

boundary from the ones on the iris boundary is proposed. In this method, “S” equidistant

points are chosen on the pupil boundary and the corresponding points on the iris

boundary are selected. These points are calculated using the angle difference of 2π/S

radians (i.e. points at zero degree on pupil and zero degree at iris boundary are

corresponding to each other), where S is the number of columns of normalized image.

Similarly, points at 90 degree to pupil boundary and same at iris boundary are related to

each other. Normalized iris is obtained based on the minimum distance between the

corresponding points at the same angle. This minimum distance is divided into “R”

number of equidistant points and iris samples are picked up from these point. Figure 3.5

shows the points “A” and “X” corresponding to angle at α from horizontal line on pupil

and iris boundaries respectively. “d” is the distance between iris and pupil centers and

β is the angle between the line joining pupil and iris centers to horizontal line. r1 and r2

are radii of pupil and iris respectively.

Figure 3.5: Minimum distance between the points at same angle.



In order to get the length of line segment AX, the following mathematical formula is

applied.

\overline{AX} = \sqrt{(d\cos\beta + r_2\cos\alpha - r_1\cos\alpha)^2 + (d\sin\beta + r_2\sin\alpha - r_1\sin\alpha)^2}    3.27

In the case when the iris and pupil centers are at the same position (i.e. I = P), the length of the line segment is obtained using equation 3.28.

\overline{AX} = r_2 - r_1    3.28

The proposed normalization method is given in Algorithm 6.

Algorithm 6

1. Read the iris image.

2. Find circular parameter of pupil.

3. Find parameters of iris.

4. Find S number of points on iris and pupil boundary with 2π/S radian

angle difference.

5. Repeat for each point on iris boundary

• Find a corresponding point A on pupil boundary.

• Normalize the distance between point A and X.

• Pick up R number of equidistant sample points.

3.2.4 Normalization via Mid-point between Iris and Pupil Centers

Another method of normalization is proposed. In this method, the reference point is taken as the mid-point of the line joining the two centers (i.e. pupil center and iris center).

Figure 3.6 represents the pupil center P, iris center I and their mid-point M. Point X is

determined by the angle, which changes from zero to 2π with a difference of 2π/S,

where S is the length of the normalized image. A is the point of intersection between the

pupil boundary circle and the line joining the points X and M. The distance between A to

X is subdivided into R equal distances to pick up the data. R is the width of the

normalized image. Experiments with this reference point have also been conducted to

find out which normalization method performs well.


Figure 3.6: Mid-point of centers of iris and pupil as reference point

Algorithm 7 is used for normalization of iris via mid-point M as a reference point.

Algorithm 7

1. Read the iris image.

2. Find circular parameter of pupil.

3. Find parameters of iris.

4. Take reference point as mid-point of pupil and iris centers.

5. Find S number of points on iris boundary by finding intersection of the

circle and line at S different angles.

6. Repeat for each point on iris boundary.

• Find point of intersection A on pupil circle and the line joining the

point X on iris boundary.

• Normalize the distance between point A and X.

• Pick up R number of equidistant sample points.

This algorithm results in a normalized iris image of size R×S pixels.

3.2.5 Normalization using Dynamic Size Method

In addition to above mentioned methods, another method of iris normalization has been

implemented. In this method, size of the normalized image is dynamic. It is based on the

radii of the pupil and iris. Samples of iris are picked up in circular form, from each point

on the pupil boundary with an increment of one pixel in the radius till the first point on

iris boundary. In this case, size of the normalized image is like a trapezium as shown in

Figure 3.7. For elaboration purpose, each line in the dynamically normalized image is



representing single pixel boundary. The trapezium has two parallel edges, short parallel

side is the data sampled from pupil boundary and gradual increase represents the data

samples towards iris boundary.

Algorithm 8 is proposed to achieve this type of normalization.

Algorithm 8

1. Read the iris image.

2. Find parameters of pupil.

3. Find parameters of iris.

4. Initialize radius r with pupil radius.

5. Repeat till a point on iris boundary is picked.

• Pick each point on the circle of radius r, total number of points are

approximately 2πr.

• Increment one pixel in radius r.

Figure 3.7: Concentric circles at pupil center P and dynamic iris normalized image

3.3 Proposed Feature Extraction Methods

Any classification method uses a set of features or parameters to characterize each object,

where these features should be relevant to the task at hand. For supervised classification,

a human expert determines categorization of object classes and also provides a set of

sample objects with known classes. The set of known objects is called the training set

because it is used by the classification programs to learn how to classify objects [88].

There are two phases to construct a classifier. In the training phase, the training set is


used to decide how the parameters ought to be weighted and combined in order to

separate the various classes of objects. In the application phase, the weights determined

in the training set are applied to a set of objects that do not have known classes in order to

determine what their classes are likely to be. For unsupervised classification, only

spectral features are extracted without use of ground truth data. Clustering is an

unsupervised classification in which a group of the spectral values will regroup into a few

clusters with spectral similarity.

In the present case, features are extracted to make a template of the image. Efforts are

made to use minimum number of features with maximum accuracy of the system.

3.3.1 EigenIris Method or Principal Component Analysis

Principal Component Analysis (PCA) or Hotelling transform is a method of

dimensionality reduction by combining the features of normalized iris images, identifying

patterns in data and expressing the data to highlight their similarities and differences.

Since in high dimension data it is hard to find patterns (where the luxury of graphical

representation is not available), PCA is a powerful tool for analyzing such data. Once patterns have been extracted, the data can be compressed (i.e. the number of dimensions reduced) without much loss of information. In terms of information

theory, the idea of using PCA is to extract the relevant information in a normalized iris

image, encode it as efficiently as possible and compare test iris encoding with a database

of similarly encoded models. A simple approach to extract the information, contained in

an image, is to somehow capture the variations in a collection of images independent of

judgment of features and use this information to encode and compare individual iris

images [89]. In mathematical terms, the purpose of using PCA is to find the principal

components of distribution of iris textures or the eigenvectors of the covariance matrix of

the set of iris images, treating each image as a point (vector) in a very high dimensional

space. The eigenvectors are ordered, each one accounting for a different amount of

variation among the normalized iris images. These eigenvectors can be thought of as a set

of features that together characterize the variation between iris images. Each image

location contributes more or less to each eigenvector. Each individual iris can be

represented exactly in terms of a linear combination of the eigenirises and can also be


approximated using only the “best” eigeniris – those that have the largest eigenvalues,

and which therefore account for the most variance within the set of normalized iris

images.

The algorithm 9 has been implemented for this purpose as shown below:

Algorithm 9

1. Input all training images

2. Image preprocessing

• Call pupil segmentation module

• Call iris localization module

• Call eyelid detection module

• Call iris normalization module

3. Calculate eigenvalues and eigenvectors

• Calculate mean of training images

• Carry out image centering

• Find out covariance matrix of centered images

• Obtain eigenvalues and eigenvectors of covariance matrix

4. Sort the eigenvalues and corresponding eigenvectors in ascending order

5. Carry out dimension reduction through selection of highest eigenvalues

and eigenvectors

6. Project the image in PCA subspace

7. Carry out image recognition

• Load test image

• Repeat the steps 2 to 6

• Obtain Euclidean distance of test projection with training images

projection

• Find out closest match

8. Display image with closest match
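A minimal sketch of steps 3-7 of Algorithm 9 is given below; it assumes train is a matrix whose rows are flattened normalized iris images and test is one flattened test image, and it uses a straightforward eigen-decomposition of the covariance matrix (for large images, the smaller "snapshot" matrix is usually decomposed instead). It is illustrative only.

```python
import numpy as np

def train_pca(train, k=50):
    """Compute the mean iris and the k leading eigenirises (Algorithm 9, steps 3-6)."""
    mean = train.mean(axis=0)
    centered = train - mean
    cov = np.cov(centered, rowvar=False)       # covariance of pixel features
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    components = vecs[:, ::-1][:, :k]          # keep the k largest eigenvectors
    projections = centered @ components        # training images in the PCA subspace
    return mean, components, projections

def match_pca(test, mean, components, projections):
    """Project a test image and return the index of the closest training image (step 7)."""
    p = (test - mean) @ components
    dists = np.linalg.norm(projections - p, axis=1)   # Euclidean distance (eq. 2.27)
    return int(np.argmin(dists))
```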

3.3.2 Bit Planes

A binary image is a digital image that has only two possible values for each pixel. Binary

images are also called bi-level or two-level images. The names black-and-white (B&W),


monochrome or monochromatic are often used for this concept, but may also designate

any images that have only one sample per pixel, such as grayscale images.

Binary images often arise in digital image processing as masks or as the result of certain

operations such as segmentation, thresholding and dithering. Some input/output devices

such as laser printers, fax machines and bi-level computer displays can only handle bi-

level images.

A bit plane of a digital medium (such as an image or sound) is the set of bits having the same position in the respective binary numbers [90]. For example, for a 16-bit data representation there are 16

bit planes: the first bit plane contains the set of the most significant bit and the 16th

contains the least significant bit. The first bit plane gives the roughest but the most critical approximation of the values of the medium; the higher the number of the bit plane, the smaller its contribution to the final result. Adding bit planes therefore gives a progressively better approximation [91]. Thus, bit planes of the normalized iris image are

used as features of the iris.
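A small illustrative sketch of extracting bit planes from an 8-bit normalized iris image is shown below; the choice of which planes to keep as the feature template is an assumption for illustration only.

```python
import numpy as np

def bit_planes(img, n_bits=8):
    """Split an 8-bit normalized iris image into its bit planes.
    Plane 0 holds the least significant bits, plane n_bits-1 the most significant."""
    img = img.astype(np.uint8)
    return [(img >> b) & 1 for b in range(n_bits)]

# e.g. keep the most significant planes as a binary feature template:
# planes = bit_planes(normalized_iris)
# template = np.concatenate([p.ravel() for p in planes[6:]])
```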

3.3.3 Wavelets

Feature extraction is one of the most important parts of recognition systems. Different

experiments have been conducted using Haar, Daubechies, Symlet, Biorthogonal and

Mexican hat wavelets to extract features. Approximation coefficients as well as details

coefficients at different levels of the wavelets have been used as features. CWT function

is used to implement the Mexican hat to get features. Wavelets are applied on normalized

iris images and then combined to make one dimensional feature vector.

a. Haar Wavelet

Any discussion of wavelets begins with the Haar wavelet, which is discontinuous and resembles a step

function as shown in Figure 3.8. This wavelet has been used for extracting the features of

normalized iris and comparing the results with other wavelets.


Figure 3.8: Haar Wavelet

b. Daubechies

Daubechies wavelets, called compactly supported orthonormal wavelets, make discrete wavelet analysis practicable. The names of the Daubechies family wavelets are written dbN, where N is the order and db is the "surname" of the wavelet. The db1 wavelet is the same as

Haar. Next nine members of the Daubechies family are shown in Figure 3.9.

Figure 3.9: Daubechies Wavelets

c. Coiflets
Coiflets were built by Daubechies at the request of Coifman [92]. The wavelet function

has 2N moments equal to 0 and the scaling function has 2N-1 moments equal to 0. The


two functions have a support of length 6N-1. Coiflets wavelets of different lengths are

shown in Figure 3.10.

Figure 3.10: Coiflet Wavelets

d. Symlets
The Symlets are nearly symmetrical wavelets proposed by Daubechies as modifications

to the db family. The properties of the two wavelet families are similar. Shapes of

Symlets are shown in Figure 3.11.

Figure 3.11: Symlets Wavelets

3.4 Matching
In order to match the feature vectors, the following two commonly used metrics are used

in the proposed iris recognition system.

3.4.1 Euclidean Distance

For obtaining the distance between feature vectors extracted using PCA, the similarity measure used is the Euclidean distance. The Euclidean distance between two points in p-dimensional space is the length of the straight line segment joining the two points, i.e. the geometrically shortest distance between them. Euclidean distance is defined in equation 2.27 and its matrix notation is

given in equation 2.28.


3.4.2 Normalized Hamming Distance

Hamming distance is defined as the number of bits by which two n-bit vectors differ. For

example, the Hamming distance between 001101 and 001110 is 2. To find the

normalized Hamming distance, the result of the Hamming distance is divided by the total number of bits. In the case of the above example, the total number of bits is 6. Therefore, the normalized Hamming distance is 2/6 = 0.33, which means the two bit strings differ in a fraction of 0.33 of their bits. The normalized Hamming distance is used so frequently in the iris recognition area that it is commonly known simply as the Hamming distance. In the feature extraction module, the features are converted to binary format so that they can be matched efficiently. A threshold for matching two feature vectors is defined; a Hamming distance less than the threshold value is taken as a match. The smaller the Hamming distance, the stronger the match.
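A minimal MATLAB sketch of this metric is given below; it reproduces the 001101 / 001110 example, and the function name is illustrative.

function nhd = normalized_hamming(a, b)
% a, b : binary feature vectors (0/1 or logical) of equal length
nhd = sum(xor(a, b)) / numel(a);    % number of differing bits divided by the total number of bits

% Example (the 001101 / 001110 case above):
%   normalized_hamming([0 0 1 1 0 1], [0 0 1 1 1 0])   returns 2/6 = 0.33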


Chapter 4: Design & Implementation Details
Different modules have been implemented to ensure error-free and correct operation of the proposed system. MATLAB 7.0.4 is used as the tool for development of the algorithms. The system comprises the following four parts:

• Iris localization

• Normalization

• Feature extraction

• Iris matching

4.1 Iris Localization
Iris localization is the main part of the research work in which pupil boundary detection

is the first step.

4.1.1 Circular Pupil Boundary Detection

A number of pupil detection modules have been developed based upon the properties of

images in different databases. Parameters of the pupil are calculated using the following

modules.

a. Detection of Pupil Boundary Module

A module named “PupilFind” detects the pupil circular boundary. This module uses a

number of functions (mean2, min, ind2sub, max and sum) to determine the parameters of

pupil. A function known as “Centroid” has also been developed to obtain centroid of the

given region.

Input of the module is an image containing the iris, of defined size, and the output contains the pupil radius and pupil center. In the initialization step, the size of the image is obtained as "m" rows and "n" columns. The "bod" variable is the border size, initialized with an integer that describes the number of pixels to exclude from the border during computation. The "wd" variable is used for finding the size of the decimation mask; the size of the mask is "(2wd+1)×(2wd+1)". The "winsize" variable is initialized with an integer that sets the size of the window for finding the centroid; the size of the window is "(winsize + 1)×(winsize + 1)".


Flow chart of this module is shown in Figure 4.1.

Figure 4.1: Flow chart for detection of pupil boundary module (steps: input image from database and initialize variables; convert to grayscale if the image is colored; generate decimation mask; convolve mask with image; scan image for the minimum value; assign the found point inside the pupil as pupil center; binarize a square window using the histogram; calculate the centroid of the binary window and repeat until the centroid equals the pupil center; scan for black pixels to obtain the radius of the pupil)

The procedure adopted to find a point in the pupil using the mentioned parameters is as follows. The decimation mask is applied to the image, excluding the border. The position of the

minimum intensity value in the masked image is determined. The border width is added

to get the exact position of the point. This minimum value always exists inside the pupil.

This point is used to find centroid of the image using the function “Centroid” which gives

new center of pupil and a binarized image. Centroid is called iteratively till single pixel

accuracy is achieved in finding center of the pupil. Now to calculate the radius of the

pupil, binarized image as variable binarizewindow is used in which pupil is white and

remaining part is black. After completion of the iteration process, pupil is in the center of



binarizewindow. From center, pixels are counted till a black pixel is found in left, right,

upward and downward directions and mean of these is taken as radius of pupil.

b. Centroid Module

The centroid of a region is its center of mass, i.e. the point at which the region would balance if it were constructed of material of constant density.

Image, winsize and PointInPupil are its input parameters whereas output parameters are

newcenter and binarizewindow. Binary image is used to find the centroid to make the

pupil of constant density. Equations 3.5 to 3.9 have been employed to find centroid.

Before calculating the center of mass of the window sized (winsize + 1)×(winsize +1)

with center at PointInPupil, density of the area is smoothed to constant by converting the

gray scale image into binary image. Usually, this window is inside the image but if the

center of the window is at some corner then maximum possible image is taken for finding

the centroid. Histogram of the image is obtained.

The highest peak in the histogram of this window is taken, and this gray scale value plus fifteen is used as the threshold for converting the window into a binary image. An offset of fifteen gray levels is added because values around the pupil edge have such gray scale values. This binary image is used for

calculating the centroid. Actual center coordinates, in the main image, are found by

adding Cx and Cy using equations 3.5 and 3.6 in x and y-coordinates of pupil respectively

and subtracting winsize. These coordinates are sent as output “newcenter” along with the

binary image “binarizewindow”. This process is repeated till single pixel accuracy is

achieved. Newcenter and radius are included in variable “pcr” (i.e. pupil center & radius)

to output from the module PupilFind. These parameters are fine tuned using the function

“FineTuneExactPupil”. Fine tuning means that the parameters of the pupil are changed to

exact position by inspecting change in the values of center & radius of the pupil pixel by

pixel.
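The following is a minimal MATLAB sketch of the centroid computation described above, assuming the standard centre-of-mass sums for equations 3.5 and 3.6; window clipping at the image border and the exact coordinate bookkeeping of the thesis module are simplified, and all names are illustrative.

function [newcenter, binwin] = centroid_sketch(img, winsize, pointInPupil)
% img          : grayscale eye image
% winsize      : window size parameter; the window is (winsize+1)-by-(winsize+1)
% pointInPupil : [row col] of a point known to lie inside the pupil
half = round(winsize / 2);
r = pointInPupil(1); c = pointInPupil(2);
win = double(img(r-half:r+half, c-half:c+half));   % crop the window around the point (border clipping omitted)
counts = hist(win(:), 0:255);                      % histogram of the window
[peak, grayIdx] = max(counts);                     % highest peak of the histogram
thresh = (grayIdx - 1) + 15;                       % peak gray value plus fifteen, used as threshold
binwin = win <= thresh;                            % pupil (dark) pixels become white (1)
[rows, cols] = find(binwin);
newcenter = [r - half - 1 + round(mean(rows)), ...
             c - half - 1 + round(mean(cols))];    % centre of mass mapped back to image coordinates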

c. Fine Tuning Module

Original image, pupil radius and center are used as input parameters of this module (i.e.

FineTuneExactPupil). Output of the module is the pupil parameter but with more

accuracy. In this function, existing parameters of pupil are fine tuned to get exact


parameters in two ways. First, the current radius is varied from -5 pixels to +10 pixels (i.e. the pupil radius is decreased by up to 5 pixels and increased by up to 10 pixels around the current value). Second, these variations in radius are studied at every 10th degree. This module uses

other modules to find the exact parameters of the pupil.

d. Confirm Pupil Module

Another module, known as "ConfirmPupil", has been incorporated to study the change. It takes the pupil parameters and the original image as inputs and returns a score which is an estimate of how well the input parameters fit the given image. This score is based on an inner and an outer band around the current pupil radius. The width of each band is three pixels and the number of sample points on each band is the same. The sum of the intensity values over the inner band is subtracted from the sum of the intensity values of the original image over the outer band to obtain the score. The larger the score, the more accurate the pupil parameters. So, during fine tuning, the position of the pupil center is checked 36×16 times. The center and radius of the pupil are shifted to the new position where the score of the ConfirmPupil module is maximum. Steps for pupil localization are shown by different snapshots of the image in Figure 4.2.

Pupil Detection Method 2

e. Scanning for Pupil Radius Module

Another method of pupil detection has also been developed. Pupil parameters (i.e. pupil

radius and pupil center) are obtained using the function named as ScanForPupilRadius.

Input parameter is the eye image. Image size is obtained using MATLAB function size.

A variable “slice” is initialized with 10. First and last 60 columns are not used for

processing because pupil is inside the iris. Data of every 10th row is passed to a function

called “FindMaxNoOfZeros” which outputs the maximum number of consecutive zeros.

These zeros are counted for every 10th row then maximum of them is taken along-with

the row number that corresponds to maximum number of zeros. Then the function

FindMaxNoOfZeros is called 9 rows above and 9 rows below the previously obtained

row of maximum number of consecutive zeros. Row number according to the maximum

of these calculations is found which is the x-coordinate of the center of the pupil. Same

procedure is carried out column-wise to get the y-coordinate of the pupil. Maximum


number of zero pixels gives the diameters along the coordinate axes. Radius of the pupil

is calculated by adding the number of zero pixels on two diameters and dividing it by

four.

Figure 4.2: Steps for Pupil Localization, CASIA version 1.0 (panels: original image; finding a point inside the pupil; calculating the centroid; retrieving the radius; applying the moving average filter; fine tuning the parameters; result: pupil radius r = 38 pixels, pupil center x = 136, y = 183)


f. Finding Maximum Zeros Module

Input parameter of this module (i.e. FindMaxNoOfZeros) is an array containing the row

or column data from the image. Output of the module is a positive integer giving the maximum number of consecutive zeros. The one-dimensional array is convolved with the filter [1 -1] to find the derivative of the data. Since the pupil is a smooth area, the first derivative converts that area into zeros. One more condition is applied: values less than 0.3 are considered as zero.
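A minimal MATLAB sketch of this module's behaviour is given below, under the assumption that the 0.3 condition is applied to the magnitude of the derivative; the function name is illustrative.

function n = find_max_no_of_zeros(data)
% data : one row or column of the image, as a numeric vector
d = conv(double(data), [1 -1], 'same');            % first derivative of the data
z = abs(d) < 0.3;                                  % near-zero derivative treated as zero (the pupil is smooth)
n = 0; run = 0;
for k = 1:numel(z)
    if z(k)
        run = run + 1;
        if run > n
            n = run;                               % keep the longest run of zeros
        end
    else
        run = 0;
    end
end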

g. Draw Circle Module

Two modules for drawing circle have been utilized in order to identify correct parameters

of pupil and iris boundary. Input parameters for both the modules are image on which to

draw circle, center, radius of the circle and a positive integer “newvalue”. In the first

module, points are obtained using equations 4.1 & 4.2 and pixel values corresponding to

these points are changed to newvalue which show a circle in the image.

x = cx + r cos θ        4.1

y = cy + r sin θ        4.2

Where cx and cy are the coordinates of the center. Values of x and y-coordinates are

obtained using radius r of the circle at specific angles. Value of angle θ varies from 0 to

2π at an interval of 2 / Nπ where N is the total number of points on the circle. Larger

value of N draws better circle whereas smaller value does not draw the circle in a perfect

manner. For example, if the value of N is four then only four points are drawn to serve as

a circle. The second module uses the property of symmetry: a circle is symmetric about the coordinate axes and the lines y = ±x. Values of the coordinates are calculated for Segment 1, shown in Figure 4.3, and the coordinates in the remaining segments are then derived by symmetry.

Coordinates of points in Segment 1 are symmetric to:

• Segment 2 about the line y = x

• Segment 6 about the line y = -x

• Segment 8 about x-axis

• Segment 4 about y-axis


Then using Segment 2, one can find Segment 3, Segment 5 and Segment 7 with

symmetry about y-axis, x-axis and line y = -x respectively.

Figure 4.3: Used symmetric lines for finding points on circle
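The following is a minimal MATLAB sketch of the first drawing module based on equations 4.1 and 4.2; the coordinate convention (x as column, y as row) and the function name are assumptions. The second module would compute only Segment 1 this way and obtain the remaining segments by the reflections listed above.

function img = draw_circle_sketch(img, cx, cy, r, newvalue, N)
% img      : grayscale image; (cx, cy) : circle centre (column, row); r : radius
% newvalue : intensity written on the circle; N : number of points on the circle
theta = 2*pi*(0:N-1)/N;                            % N equally spaced angles
x = round(cx + r*cos(theta));                      % equation 4.1 (column coordinate)
y = round(cy + r*sin(theta));                      % equation 4.2 (row coordinate)
for k = 1:N
    if x(k) >= 1 && x(k) <= size(img, 2) && y(k) >= 1 && y(k) <= size(img, 1)
        img(y(k), x(k)) = newvalue;                % mark the point on the circle
    end
end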

h. Finding Pupil in CASIA version 3.0 Iris Database

For finding the pupil in the CASIA version 3.0 iris database, the following method has been proposed. The images in this database contain eight small white circles near the center of the pupil, arranged in a round shape. To find the pupil in such images, the following modules have been designed.

(1) Find Pupil Module

Input to this module (i.e. PupilFindV3) is the image containing iris and output of this

module is the parameters of the pupil. A point in the pupil is obtained using module a above. An edge image is taken and short edges are deleted to obtain an image in which there is no edge inside the pupil edge. As the location of a point in the pupil is known, horizontal and vertical lines are used to find the first crossing point in the left, right, upward and downward directions. The average of the first left and first right intersection points on the horizontal line from the point in the pupil is the new x-coordinate, and the average of the first upper and lower intersections is the new y-coordinate of the pupil. The number of these lines originating from the new center


coordinates is increased gradually from four to sixteen (i.e. the number of sectors is increased from four to sixteen) and the new center coordinates are the average of bisecting these chords. Figure 4.4 shows the steps used to determine the boundary of the pupil in CASIA Iris Database version 3.0.

Figure 4.4: Steps involved in Pupil Localization, CASIA Version 3.0 (panels: original image; applying the Canny edge detector; removing edges of length greater than 90 pixels; finding a point inside the pupil; calculating the pupil parameters; drawing the pupil parameters; result: pupil radius r = 59 pixels, pupil center x = 152, y = 165)

(2) Finding Pupil in MMU Database

By observing an eye image, it can be seen that the pupil is a dark region compared to the iris, and the iris is darker than the sclera. A white spot is present inside the pupil in the images of the MMU database because of the image acquiring device. In order to detect the pupil boundary in this database, the following steps are performed:

• The first significant peak of the image histogram is selected; it represents the majority of pupil-area pixels. This peak always lies at low intensity values. In the case of the MMU dataset, it lies between histogram index values 15 and 30.

• In order to find the full region of the pupil, the intensity threshold value is shifted k local minima forward (a minimal sketch of this thresholding step is given below). The value of k is determined through experiments; on the MMU iris database, the value of k is 7. A local minimum is defined as:

(freq(x-1) >= freq(x)) && (freq(x) < freq(x+1))

It means that gray value x is called a local minimum if the frequency of the previous gray value is greater than or equal to its frequency and the frequency of the next gray value is also greater than its frequency.

• Image is binarized by threshold value and resultant binary image has gap due to

the reflection of light source in the image acquisition process inside the pupil.

• Gaps in the pupil area are filled by white color. Gap means closed region

surrounded by pupil area.

• To find center of the pupil, row and column with maximum number of connected

black pixels are found and the point of cross over is considered as initially

estimated center of pupil.

• Initially estimated center coordinates of the pupil are adjusted using the property

of intersecting chords method passing through center of each other. In other

words chords passing through the center of a circle bisect each other.

• Radius = (Length of chord1+Length of chord2)/4

These steps are applied to each image in the database and result of the steps discussed

above is depicted in Figure 4.5.
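A minimal MATLAB sketch of the histogram-based thresholding step described in the list above is given here; the low-intensity search range for the first peak and the function name are assumptions, and imhist requires the Image Processing Toolbox.

function bw = mmu_pupil_threshold_sketch(img, k)
% img : grayscale eye image (uint8); k : number of local minima to move forward (k = 7 for MMU)
freq = imhist(img);                                % 256-bin histogram
[pk, peakPos] = max(freq(1:60));                   % first significant peak; low-intensity search range is an assumption
thresh = peakPos; count = 0;
for x = peakPos+1 : 254
    if freq(x-1) >= freq(x) && freq(x) < freq(x+1) % local minimum, as defined in the list above
        count = count + 1;
        if count == k
            thresh = x;
            break;
        end
    end
end
bw = img <= (thresh - 1);                          % binarize: dark pupil pixels become 1 (index x maps to gray level x-1)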


Figure 4.5: Steps involved in Pupil Localization for the MMU Database
(a) Original image from the MMU iris database.
(b) Histogram of the image, used to find the threshold value; the initial part of the histogram is shown.
(c) & (d) Binarized image obtained using the threshold value; the reflection in the pupil area is shown in the left image and is filled in the right image.
(e) Pupil center and radius calculated from the intersecting chords and the lengths of the chords.
(f) Original iris image with the pupil center and pupil boundary shown as a white circle.


4.1.2 Non-Circular Pupil Boundary Detection

Modules written to obtain non-circular boundary of pupil are IrregularPupil,

InternalPoints and WindowRangeGp. Details of these modules are given below.

a. Irregular Pupil Points Module

Input parameters are original image variable (imo), pupil parameters variable (pcenter),

number of points variable (N) and half width variable (wid). wid pixels are checked inside the pupil circular boundary for the maximum gradient and the same number of pixels are checked outside it. Output is the image with non-circular boundary and two

dimensional array containing coordinates of points on the pupil boundary. First column

represents x-coordinate of each point and second column is the corresponding y-

coordinate of each point. N defines the total number of points on the pupil boundary to

change towards inside or outside. Points are obtained by using equations 4.1 and 4.2.

Data of size 2*wid+1, perpendicular to the tangent at that point is picked up and is

smoothed. Module WindowRangeGp is called to obtain distance to the point of maximum

gradient from dark to bright portion as pupil is darker than iris. This distance is added to

the point under consideration radially to get new position of the point. The selected N

points are repositioned. Then, these new points are linearly interpolated using the module

InternalPoints to get the non-circular boundary of the pupil.

b. Internal Points Module

Points are interpolated in this module. Some of the obtained points lie at incorrect

position because some eyelashes are near the pupil boundary. To avoid this problem,

a criterion for ignoring a point is proposed, based upon the distance between the points (before repositioning, the distances between the points were equal). If the distance from the point

under consideration to second point is less than 80% of the distance from the point to first

point then first point is considered as noise and is ignored. Similarly, percentages of 80%,

65%, 50% and 30% are used to ignore the next points with reference to the point under

consideration. It means that if the distance from the point to third point is less than 65%

of the distance between the point and first point, then both first and second points are


ignored and third point is joined with the point under discussion. This criterion makes

circular shaped pupil and avoids unnecessary zigzag pattern in the detection of pupil.

c. Window Range Module

Input parameter of this module named as “WindowRangeGp” is a one dimensional array

of numbers whereas output is position of values which are greater than a specific

statistical range. After taking derivative of the input values, positions of values greater

than mean of the values plus Standard Deviation of the values are stored in a variable out.

This variable represents the positions of maximum gradient in the input array which

corresponds to the edge boundary.
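A minimal MATLAB sketch of this module is given below; the function name is illustrative.

function out = window_range_gp_sketch(values)
% values : 1-D array of intensity values picked radially across the boundary
d = diff(double(values));                          % derivative of the input values
out = find(d > mean(d) + std(d));                  % positions of unusually large (dark-to-bright) gradient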

Figure 4.6 (a) and (b) show the result of non-circular pupil boundary for CASIA iris

database version 1.0 and version 3.0 respectively.

4.1.3 Iris Boundary Detection

Iris boundary is obtained by finding its parameters (i.e. iris center and radius). A module

called IrisRadiusCenter has been implemented to localize the iris. It is a robust method

which performs well on all datasets.

a. Detection of Iris Parameters Module

A module named “IrisRadiusCenter” has been developed to determine the parameters of

iris outer boundary. Input parameters for this module are original image as imo and pupil

parameters (center and radius) as pcr. Output of the module is iris center and radius. In

obtaining the iris parameters, the sharp texture of the iris can mislead the boundary search. For this reason, a Gaussian filter is applied to the image. The size of the filter is an important factor: it should be neither so small that the iris patterns remain sharp, nor so large that the iris merges with the sclera. Through repeated experiments, the size of the Gaussian

filter is selected as 25×25 and value of sigma equal to three is used. Image is convolved

with the filter. A band of circles is determined for fast computation such that outer iris

boundary lies between them. Two radii are necessary to make a band in shape of donut.

Radius of outer circle (“rout”) is based on the pupil radius. Its value is pi times the radius


of pupil. For inner circle radius (“rin”), an estimated distance from the center of pupil to

first valley along the horizontal line is taken. Position of valleys is calculated

Figure 4.6: Non-circular pupil boundary for (a) CASIA version 1.0 and (b) CASIA version 3.0 (rows: original images with the circular pupil drawn; obtaining new points for the non-circular pupil; joining the obtained points)


by taking the data values from the horizontal line passing through the center of the pupil.

Information about maximum and minimum (valleys and peaks) is gathered using a

function "FindMaxMin1D" on the line. Starting from the center column, the location of the first valley, whichever comes first on the left or the right side, is taken as the radius of the inner circle "rin". If this inner-circle radius is less than 1.4 times the radius of the pupil, then it is assigned a value equal to 1.4 times the pupil radius. Number of lines

“noofline” are selected on which tentative points on iris boundary are to be determined. If

the difference between these radii is less than fifteen pixels then outer radius is set at

fifteen pixels away from the inner radius in order to assure that iris boundary lies in it. It

is assumed that center of the pupil is on the origin so that angles for these lines are in two

sets. The first set has equally spaced noofline lines in polar coordinates from the line θ = −π/6 to the line θ = π/12, whereas the second set has an angle range from the line θ = π to the line θ = 6π/5 with the same number of equally spaced lines

as in first set. These lines are virtually drawn between the circles. On each line data is

picked and is convolved with a 1D moving average filter and a maximum of three points

are obtained when this filtered data is input in the function WindowRangeGp.

These points are candidate points for iris boundary. Mahalanobis distance [86] is applied

to these points using equation 3.15. The maximum number of points whose Mahalanobis distances lie within a band of eight pixels is used as an adaptive threshold to select the points on the iris. This threshold relies on the fact that the iris and pupil centers are close to each other and the selected points lie on a circle. Therefore, the remaining (noisy) points are

deleted. Parameters of the circles are obtained by using MATLAB function for solving

simultaneous equations when the values of the points are substituted in the general

equation 3.11.
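The following is a minimal MATLAB sketch of this circle-parameter step, under the assumption that the general equation 3.11 is x^2 + y^2 + Dx + Ey + F = 0; a least-squares solve with the backslash operator stands in for the simultaneous-equation step, and the function name is illustrative.

function [xc, yc, r] = fit_circle_sketch(x, y)
% x, y : column vectors of candidate points on the iris outer boundary
A = [x, y, ones(size(x))];                         % coefficients of D, E, F in x.^2 + y.^2 + D*x + E*y + F = 0
b = -(x.^2 + y.^2);
p = A \ b;                                         % least-squares solution for [D; E; F]
xc = -p(1) / 2;                                    % circle centre
yc = -p(2) / 2;
r  = sqrt(xc^2 + yc^2 - p(3));                     % circle radius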

b. Finding Maximum & Minimum Module

In this module (known as FindMaxMin1D), input is an array of numbers and output is

also an array of numbers -1 and 1 of same size as that of input array. Number -1 shows

the position of a valley and +1 shows position of a peak. In other words -1 and 1

represent the positions of local minima and maxima respectively. Figure 4.7, Figure 4.8


Figure 4.7: Steps for Iris Localization, CASIA version 1.0 (panels: original image; low pass filter / convolution; graph of the horizontal line passing through the pupil center with the first trough marked, intensity value vs. column number; calculating the band of circles; finding points for the iris boundary; deleting extra points; adding the pupil boundary; displaying the iris boundary; result: iris radius r = 96 pixels, iris center x = 135, y = 179)


Figure 4.8: Steps for Iris Localization, CASIA version 3.0 (panels: original image; low pass filter / convolution; graph of the horizontal line passing through the pupil center with the first trough marked, intensity value vs. column number; calculating the band of circles; finding points for the iris boundary; deleting extra points; adding the pupil boundary; displaying the iris boundary; result: iris radius r = 106 pixels, iris center x = 150, y = 164)


Figure 4.9: Steps for Iris Localization, MMU Iris database (panels: original image; low pass filter / convolution; graph of the horizontal line passing through the pupil center with the first trough marked, intensity value vs. column number; calculating the band of circles; finding points for the iris boundary; deleting extra points; adding the pupil boundary; displaying the iris boundary; result: iris radius r = 50 pixels, iris center x = 123, y = 182)


and Figure 4.9 show the steps involved for CASIA Iris Database version 1.0, CASIA Iris

Database version 3.0 and MMU Iris Database version 1.0 respectively.
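A minimal MATLAB sketch of the FindMaxMin1D behaviour described before the figures above is given here; the handling of ties is an assumption and the function name is illustrative.

function out = find_max_min_1d_sketch(values)
% values : 1-D intensity profile
% out    : same size as values; -1 at local minima (valleys), +1 at local maxima (peaks), 0 elsewhere
v = double(values(:))';
out = zeros(size(v));
for k = 2:numel(v)-1
    if v(k) < v(k-1) && v(k) <= v(k+1)
        out(k) = -1;                               % valley
    elseif v(k) > v(k-1) && v(k) >= v(k+1)
        out(k) = 1;                                % peak
    end
end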

c. Iris Localization Using Histogram Processing

Another method of iris localization has been designed and implemented based upon

histogram processing. This method is implemented to find the outer iris boundary for

MMU iris dataset. After finding the pupil center and radius, two sub-images are

converted to polar coordinates to obtain three points on the iris boundary. These sub-

images are parts of the original image, outside the pupil region. These sub-images are

binarized using adaptive threshold which is obtained from the histogram processing.

The maximum of the first hundred entries of the histogram is found and the threshold is taken seven valleys (minima) ahead of the maximum value. The binarization process gives a clear boundary between the sclera and the iris in the polar images. Three points are picked up from these images and mapped back to the original image, as shown in Figure 4.10 (e). These three points are non-collinear, and it is a well known fact that a unique circle passes through three non-collinear points. Once the three points are obtained, the parameters of the circle are obtained using the general equation of a circle and the circle is drawn to depict the iris boundary.
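A minimal MATLAB sketch of recovering the unique circle through the three non-collinear points is given below, again assuming the general circle equation x^2 + y^2 + Dx + Ey + F = 0; the function name is illustrative.

function [xc, yc, r] = circle_from_3_points_sketch(p1, p2, p3)
% p1, p2, p3 : [x y] points assumed to be non-collinear
A = [p1(1) p1(2) 1; p2(1) p2(2) 1; p3(1) p3(2) 1]; % substitute each point into x^2 + y^2 + D*x + E*y + F = 0
b = -[p1(1)^2 + p1(2)^2; p2(1)^2 + p2(2)^2; p3(1)^2 + p3(2)^2];
p = A \ b;                                         % exact solution of the three simultaneous equations
xc = -p(1) / 2;  yc = -p(2) / 2;                   % circle centre
r  = sqrt(xc^2 + yc^2 - p(3));                     % circle radius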

4.1.4 Eyelids Localization

To detect the eyelids in the iris region, the following modules have been developed. Upper

and lower eyelids are determined by the function named Eyelids. However, inputs are

different for upper and lower eyelids.

a. Eyelids Module

Inputs of the module are the original image imo and the parameters of the iris icrpcr (i.e. centers

and radii of iris and pupil). Output choice can be: (a) image with eyelid taken as white

pixels or (b) image is filled black above the eyelid. First of all region of half iris is

cropped. For upper eyelid, this cropping is done using the MATLAB colon operator with

the following command.

ilidportion = imo(1:icrpcr(1)-1, icrpcr(2)-icrpcr(3):icrpcr(2)+icrpcr(3));


where the first colon operator crops the image from row one to icrpcr(1)-1, where icrpcr(1) is the x-coordinate of the iris center (i.e. it takes the upper half of the iris image). The second colon operator crops the iris part only. For the lower eyelid, the following command is applied.

Figure 4.10: Steps for Iris Localization, MMU iris database
(a) Each right and left sector comprising 30 degrees (as shown in the image) is converted to polar coordinates, assuming the origin at the pupil.
(b) Polar transformed images.
(c) Histograms of the respective images, from which a threshold is obtained for both images (axes: number of pixels vs. grayscale value).
(d) Binarized images.
(e) Three points are picked using the binary images: one point from the middle of the left image and two points from the right image, at top and bottom.
(f) A circle is drawn passing through the three points, which localizes the iris.


ilidportion = imo(icrpcr(1):end, icrpcr(2)-icrpcr(3):icrpcr(2)+icrpcr(3));

where the first colon operator crops the lower half of the image and the second colon operator is the same as for the upper eyelid.

As the iris has very sharp changes near the pupil, a virtual parabola is drawn using three points in the image to nullify this effect. These points are shown in Figure 4.11 (c). To draw the parabola passing through these points, the MATLAB function polyfit is used. The main purpose of this function is to fit a polynomial through the input points. As a parabola is a polynomial of degree two, polyfit is passed the three points and the number two. Its output is the set of coefficients of a degree-two polynomial. Points on the curve are obtained by varying one variable. When this virtual parabolic curve is

drawn, points for upper eyelid are picked on each column between first row to the virtual

parabolic curve and points for lower eyelid are picked from last row to virtual parabola in

upward direction. The intensity values in an array along with a variable string mentioning

upper or lower are inputted to a function named as MaxDiffEyelids. At most three points

are output from the function based on the maximum difference with respect to minimum

distance.

Among the selected points, some redundant points appear and are deleted. If the remaining points are fewer than fifteen, the eyelid is taken as not covering the iris; otherwise a parabola is fitted statistically to the points with least square error.
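The following is a minimal MATLAB sketch of the least-squares parabola fit with polyfit and polyval; the candidate-point coordinates are made-up example values, not data from the thesis.

cols = [40 80 120 160 200];                        % columns of the retained eyelid candidate points (illustrative)
rows = [60 45 38 44 58];                           % corresponding rows (illustrative)
p  = polyfit(cols, rows, 2);                       % degree-two (parabolic) least-squares fit
xi = min(cols):max(cols);                          % columns spanned by the eyelid
yi = round(polyval(p, xi));                        % eyelid row coordinate at each column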

b. Eyelids Extreme Values

A module named as MaxDiffEyelids has been implemented to obtain extreme points for

eyelids localization. Its input is an array of numbers from the image and output is the

location of points where gradient is maximum while going from brighter to darker. After

a close view of the image, it is clear that mostly upper portion of upper eyelid in the

image is brighter. Positions of maximas and minimas are obtained by function

FindMaxMin1D and difference of the positions & relative values in the array are

multiplied to get weighted results according to change in the intensity values. These

positions of maximum weight in an array are output of the module.


Figure 4.11: Steps for Upper Eyelid localization, CASIA Ver 1.0 Iris database: (a) part of the original image; (b) after applying the moving average filter; (c) result of the Sobel horizontal filter; (d) deleting the pupil edge from (c); (e) image in which points near the iris boundary are deleted; (f) least-square-fit parabola; (g) image with the upper eyelid.


4.2 Normalization Methods
Normalization is necessary to avoid the effects of dilation and constriction of the human pupil under different illumination conditions, which change the size of the iris. The camera-to-eye distance also changes the size of the iris. Before this step, the iris is localized, its parameters are stored in a variable icrpcr, and the image is turned black above the upper eyelid and below the lower eyelid. The following normalization modules have been developed for un-wrapping the iris into a rectangular array.

4.2.1 Normalization From Pupil Module

Inputs to this module are variables, icrpcr: iris and pupil parameters, imo: original image,

widthrect: width of the rectangular strip and noofpoints: length of the rectangular strip

which is same as the number of points picked up from each circle on the iris. Output of

the module is the normalized image in variable nor. In the processing first of all, output

variable is initialized with zeros. Theta, a new variable is defined and is assigned

noofpoints angles on a circle with equal spacing. A point on the pupil boundary is

calculated. Then, a line passing through the point with slope equal to the tangent of the angle Theta is obtained. Afterwards, the point of intersection between the iris outer circle and the line

is worked out. Distance between this point of intersection and point on pupil boundary is

normalized to one and widthrect number of equidistant points are used to pick up the

grayscale intensity value of the iris. This data is assigned to the subsequent columns

(starting from first to on-wards) of the output normalized iris image. After completion of

this process for each angle, normalized iris image is obtained.
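The following is a minimal MATLAB sketch of this normalization idea; the ray and circle intersection and nearest-neighbour sampling are simplifying assumptions, eyelid masking and bounds checking are omitted, and the function name is illustrative.

function nor = normalize_from_pupil_sketch(img, pc, pr, ic, ir, widthrect, noofpoints)
% img : grayscale eye image; pc, ic : [x y] pupil and iris centres; pr, ir : radii
% nor : widthrect-by-noofpoints normalized strip (widthrect is assumed to be at least 2)
nor = zeros(widthrect, noofpoints);
theta = 2*pi*(0:noofpoints-1)/noofpoints;          % equally spaced angles
for k = 1:noofpoints
    ct = cos(theta(k)); st = sin(theta(k));
    px = pc(1) + pr*ct;  py = pc(2) + pr*st;       % point on the pupil boundary
    dx = px - ic(1);     dy = py - ic(2);
    b = dx*ct + dy*st;                             % ray/circle intersection: t^2 + 2*b*t + c = 0
    t = -b + sqrt(b^2 - (dx^2 + dy^2 - ir^2));     % outward intersection with the iris circle
    for m = 1:widthrect
        f = (m - 1) / (widthrect - 1);             % normalized distance in [0, 1]
        x = px + f*t*ct;  y = py + f*t*st;
        nor(m, k) = img(round(y), round(x));       % nearest-neighbour sampling (x = column, y = row)
    end
end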

4.2.2 Normalization From Iris Module

In this module, normalized iris image is obtained using the reference point as center of

iris. Input and output of the module are same as of module Normalization From Pupil.

Points on boundary of the pupil are given intensity value 255. Using the linspace function

of MATLAB, a number of points is selected on the iris outer boundary. Intensity values from each point to the reference point are picked up and 255 is searched for within these values,

which represents pupil boundary. MATLAB function improfile computes pixel value

cross sections along line segments in an image. Improfile selects equally spaced points


along the path specified, and then uses interpolation to find the intensity value for each

point. This function is used to obtain normalized pixel values from pupil boundary to iris

boundary.

4.2.3 Normalization From Minimum Distance Module

In this module, centers of both iris and pupil are playing a role. Input and output variables

to this module are same as that of Normalization From Pupil Module. Noofpoints on the

pupil circle with equal distance are stored in a variable A and the same number of points

are picked from iris boundary and stored in B. First point in variable A refers to a point on

pupil boundary at 0 degree and the first point in variable B is on iris boundary at 0 degree

angle. Minimum distance between the two points is normalized to unity and widthrect

numbers of equal spaced points are used to obtain data from the iris. Similarly second

point in the variable A & B refer to next points on pupil and iris boundary respectively.

Middle point in variables A & B represent the points along –ve x-axis.

4.2.4 Normalization From Mid-point Module

Input and output of this module is same as discussed for module 4.2.1. Both centers of

pupil and iris are used to find the reference point in this case. Mid-point of the centers is

taken as new reference point and normalization is carried out by finding the points of

intersection of circles with lines at different angles. Each line starts from reference point

and ends at the boundary of the image. Each line initially intersects the pupil boundary

and then intersects the iris boundary. Distance between these points of intersection is

normalized to unity and data values are picked up to make the normalized iris image.

4.2.5 Normalization With Dynamic Size Module

This is a very different type of normalization. In this module, the size of the normalized image differs from image to image. The first row of the normalized image is the data

values of first circle just after pupil circle and second row is data picked up from the next

circle towards iris boundary and so on till the first point on the iris boundary is selected.

This makes the normalized iris image like a trapezium; as the normalized image is rectangular, the remaining part is shown black. Normalized images obtained using all the

described normalization processes are shown in Figure 4.12.

Figure 4.12: Normalized images with different methods

(a) Original image

(b) Pupil as reference point

(c) Iris as reference point

(d) Minimum Distance

(e) Mid-point of iris and pupil centers

(f) Dynamic size


4.3 Feature Extraction Methods
Features from the normalized images are extracted by the following methods.

4.3.1 Principal Component Analysis

Principal component analysis (PCA) is used for finding features from a normalized iris

image. It is a way of identifying patterns in data and expressing the data in such a way as

to highlight their similarities and differences. Since patterns in data can be difficult to

find in high dimensions, PCA is a powerful tool for analyzing data. A module named

“PCA” is developed for obtaining features from the image.

For implementation of PCA, first of all following variables are initialized. Total number

of training samples are stored in a variable named “samples”, number of images of each

eye are put in “trained” and number of dimensions to which variance in the data is taken

into consideration is handled by variable “dimension”. A module named “PCAmean” is

cultivated for calculating the mean of the training samples. During training, first of all,

mean is subtracted from the image to delete the common features and then to make it

square matrix, it is multiplied by its transpose because eigen vectors and eigen values of

rectangular matrices is not possible. Eigen vectors and Eigen values are obtained and

number of eigen vectors corresponding to highest eigen values are stored as features of

the image. Minimum distance between the feature vectors of test image and trained

dataset is taken to match test image with the corresponding image.

4.3.2 Bit planes

Every image is composed of unsigned integral values in general of type uint8. In case of

colored image RGB, these values correspond to level of three colors Red, Green and Blue

at a particular position. For grayscale images, these integral values range from zero to

255; zero corresponds to black and 255 represents white. So a maximum of 256 = 2^8 values in a variable of type uint8 are possible. These values are stored in 8 bits.

Therefore, the image is composed of 8 bit planes. Each bit plane has its own contribution

in the image, so, based upon this concept, bit planes are used as features of the normalized

iris image. In order to obtain a bit plane of the normalized iris image, a function named as

“bitget” of MATLAB is used. The syntax of the function bitget is as follows:


C = bitget(A, BIT);

It returns the value of the bit at position BIT in A. A must be an unsigned integer or an

array of unsigned integers, and BIT must be a number between 1 and the number of bits

in the unsigned integer class of A.
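A minimal MATLAB sketch of turning one bit plane of a normalized iris image into a binary feature vector with bitget is shown below; the stand-in image and the chosen bit are illustrative. Note that in bitget, bit 1 is the least significant bit.

nor = uint8(round(255 * rand(48, 360)));           % stand-in normalized iris image (uint8)
bitNo = 5;                                         % bit plane to use as the feature (illustrative)
plane = bitget(nor, bitNo);                        % value of that bit at every pixel (0 or 1)
feature = double(plane(:))';                       % one-dimensional binary feature vector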

4.3.3 Wavelets

The Wavelet Transform (WT) is based on sub-band coding. It is easy to implement and

reduces the computation time and resources required. The foundations of WT go back to

1976 [93] when techniques to decompose discrete time signals were devised. Similar

work was done in speech signal coding which was named as sub-band coding. In 1983, a

technique similar to sub-band coding was developed which was named pyramidal coding

[93]. Later many improvements were made to these coding schemes which resulted in

efficient multi-resolution analysis schemes. In continuous WT, the signals are analyzed

using a set of basis functions which relate to each other by simple scaling and translation.

In the case of discrete WT, a time-scale representation of the digital signal is obtained

using digital filtering techniques. The signal to be analyzed is passed through filters with

different cutoff frequencies at different scales.

Filters are one of the most widely used signal processing functions. Wavelets can be

realized by iteration of filters with rescaling. The resolution of the signal which is a

measure of the amount of detail information in the signal is determined by the filtering

operations and the scale is determined by upsampling and downsampling (subsampling)

operations. At each decomposition level, the half band filters produce signals spanning

only half the frequency band. This doubles the frequency resolution as the uncertainty in

frequency is reduced by half. In accordance with Nyquist’s rule if the original signal has

a highest frequency of ω which requires a sampling frequency of 2ω radians then it now

has a highest frequency of ω/2 radians. It can now be sampled at a frequency of ω

radians, thus discarding half the samples with no loss of information. This decimation by

2 halves the time resolution as the entire signal is represented by only half the number of

samples. Thus, while the half band low pass filtering removes half of the frequencies and

halves the resolution, the decimation by 2 doubles the scale.


Two-dimensional WT decomposes an image into “subbands” that are localized in

frequency and orientation. An image is passed through a series of filter bank stages. The

high-pass filter (wavelet function) and low-pass filter (scaling function) are finite impulse

response filters. In other words, the output at each point depends only on a finite portion

of the input. The filtered outputs are then down sampled by a factor of 2 in the horizontal

direction. These signals are then filtered by an identical filter pair in the vertical direction.

Decomposition of the image ends up into 4 subbands denoted by LL, HL, LH, HH. Each

of these subbands can be thought of as a smaller version of the image representing

different image properties. The band LL is a coarser approximation to the original image.

The bands LH and HL record the changes of the image along horizontal and vertical

directions respectively. The HH band shows the high frequency component of the image.

Second level decomposition can then be conducted on the LL subband. Under frequency-

based representation, only high-frequency spectrum is affected (called high-frequency

phenomenon). This one step decomposition is shown in Figure 4.13. After decomposition

of an image, LL subband is called “Approximation coefficients” whereas remaining three

subbands are called details. LH, HL and HH are known as Horizontal, Vertical and

Diagonal details.

Figure 4.13: One step decomposition of an image

A module named as “wavelet_fn” has been implemented to extract the features from the

normalized iris images using different wavelets at different levels. Input parameters of

this module are normalized iris image, wavelet name, level of decomposition and features


to extract. Features to be extracted could be approximation coefficients or any detail

coefficients or any of their combination of the specific level of wavelet. Output of the

module is a feature vector which has been used in matching. The wavelets discussed

previously have been incorporated in this module.
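The following is a minimal MATLAB sketch of one wavelet feature-extraction pass using the Wavelet Toolbox; the wavelet name, decomposition level and subband combination are illustrative and may differ from the combinations actually evaluated.

nor = rand(48, 360);                               % stand-in normalized iris image
lvl = 3;                                           % decomposition level (illustrative)
[C, S] = wavedec2(nor, lvl, 'db2');                % multi-level 2-D wavelet decomposition
cA = appcoef2(C, S, 'db2', lvl);                   % approximation (LL) coefficients at the chosen level
[cH, cV, cD] = detcoef2('all', C, S, lvl);         % horizontal, vertical and diagonal details
feature = [cA(:); cH(:); cV(:); cD(:)]';           % combine into a one-dimensional feature vector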

4.4 Matching
After extraction of features from the normalized iris images, a matching metric is required to find the similarity between two irises. This matching metric should have the property that the results of matching irises from the same class are clearly separate and distinct from the results of matching irises from different classes. The metrics used for the proposed system are the normalized Hamming distance and the Euclidean distance.

4.4.1 Euclidean Distance

When features are extracted using PCA, the similarity measure used is the Euclidean distance. The Euclidean distance between two points in p-dimensional space is the length of the straight line segment joining the two points, i.e. the geometrically shortest distance between them.

Euclidean distance is defined in equation 2.27 and its matrix notation is given in equation

2.28.

4.4.2 Normalized Hamming Distance

It is a widely used similarity metric in iris recognition systems. In information theory, the

Hamming distance between two strings of equal length is the number of positions for

which the corresponding symbols are different. In another way, it measures the minimum

number of substitutions required to change one into the other, or the number of errors

that transformed one string into the other [94]. If the result of Hamming distance is

divided by the total length of the strings then it is called Normalized Hamming distance.

In feature extraction module, features are converted to binary format. Then the

Normalized Hamming distance is used to find the match.

In iris recognition community, normalized Hamming distance is used so frequently that

many researchers simply mention it as Hamming distance [95]. A threshold is defined for


finding a match. A Hamming distance less than the threshold value is taken as a match; the smaller the Hamming distance, the stronger the match.


Chapter 5: Results & Discussions
Different databases have been used to check the validity and efficiency of the proposed

schemes. MATLAB 7.0.4 has been used as a tool for the implementation of

methodologies. Results of each method applied to different datasets have been presented.

First of all, the results of iris localization methods have been described followed by the

results of normalization methods. After that, performance of feature extraction and

recognition methods has been elaborated.

5.1 Databases Used for Evaluation

Different iris databases of different universities / institutes have been used for testing the

implemented schemes. Two databases of Chinese Academy of Sciences, Institute of

Automation (CASIA) Iris Database Version 1.0 and 3.0 [71], one from University of

Bath (BATH), UK [96] and one from Multi Media University (MMU), Malaysia [72]

have been used for evaluation. CASIA Version 3.0 (Interval) is the largest database

which is publicly available via internet. Number of total images, file format, number of

classes and dimension of images are given in Table 5.1 corresponding to each database

name.

Table 5.1: Some attributes of the datasets

S. No. | Name of Iris Database        | File Format | Number of images | Number of classes | Images per class | Dimensions in pixels (rows×columns)
a.     | CASIA Version 1.0            | bmp         | 756              | 108               | 7                | 280×320
b.     | CASIA Version 3.0 (Interval) | jpg         | 2655             | 396               | 1-26             | 280×320
c.     | BATH (free version)          | bmp         | 1000             | 50                | 20               | 960×1280
d.     | MMU Version 1.0              | bmp         | 450              | 90                | 5                | 240×320×3


As neither the acquiring device nor the environment is the same for all databases, different types of pupil images are present in the databases. Some of the images are shown in Figure 5.1. Image (a) in Figure 5.1 belongs to CASIA version 1.0, in which the pupil is automatically turned black so that any light reflection vanishes. Figure 5.1 (b) shows an image taken from CASIA version 3.0 (there are eight small white circles arranged in a round shape inside the pupil). Figure 5.1 (c) is an image from the BATH iris database, in which the iris is not occluded by eyelids in most images but there is a bright spot in the pupil in every image. The MMU version 1.0 iris image database contains images such as the one shown in Figure 5.1 (d); it also contains a bright spot in the pupil.

Figure 5.1: Images in different datasets: (a) CASIA version 1.0; (b) CASIA version 3.0 (Interval); (c) BATH; (d) MMU.
Note: as exact results of localization are not available from the database providers, the results presented here are obtained by observing the images.


5.2 CASIA Version 1.0

Initially, there was no public iris database, whereas many face and fingerprint databases existed. Lack of iris data for testing has been a main hurdle for carrying out research

on iris biometric. To promote the research, National Laboratory of Pattern Recognition

(NLPR), Institute of Automation (IA), Chinese Academy of Sciences (CAS) has provided

free iris database for evaluation of iris recognition systems [71].

Most of the research work has been conducted on this database because it was the first database available via the internet. It has been widely distributed to a large number of researchers/teams from many countries and regions of the world for research. The pupil regions of all iris images

in CASIA version 1.0 were automatically detected and replaced with a circular region of

constant intensity to mask out the specular reflections from the Near Infra Red (NIR)

illuminators. CASIA version 1.0 iris image database contains 756 images from 108

different people. For each eye, 7 images have been captured in two sessions, where three

samples are collected in the first session and four in the second session. Each iris image is

in grayscale with a resolution of 280×320 pixels.

5.2.1 Pupil Localization

Pupil localization is the main phase of iris localization. Pupil detection and finding the pupil parameters play a pivotal role in pupil localization. Detection of the pupil is very important because once the pupil is localized correctly, the probability of correct iris localization increases. To find the pupil, first of all a point inside the pupil is searched.

a. Point inside the pupil

The image acquiring setups are different for different dataset, so different methods are

implemented to obtain pupil center and radius. To locate a point inside the pupil, a robust

method has been used in this research work which performs well for all datasets. A point

inside the pupil is correctly detected for all images in CASIA version 1.0, as mentioned in Table 5.2. This perfect detection is because of the uniform intensity

values inside the pupil.

For finding a point inside the pupil, size of decimation filter and border width is obtained

adaptively, which are dependent on dimensions of the image. Size of decimation filter is


taken as "w×w" where "w" is equal to 10% of the total number of rows of the image. In

the case of CASIA version 1.0, its value is 28 pixels. Similarly, border width “bw” is

15% of the total rows and its value is 42 pixels. Border width is used to exclude the “bw”

pixels in finding point inside the pupil. Thus, both “w” and “bw” are dependent on the

size of image.

b. Pupil Parameters

The pupil is first assumed to be a circle, which is later refined to a non-circular boundary. Therefore, the term pupil parameters refers to the center coordinates and radius of

the pupil. Pupil region is replaced with a circular region of constant intensity to mask out

the specular reflections from the NIR illuminators by the dataset provider [97]. The

results of finding pupil parameters using methods discussed in Section 4.1.1 for the

database are given in Table 5.2. Pupil parameters are found with 100% accuracy. These

parameters affect the accuracy of iris localization.

5.2.2 Non-circular Pupil Localization

When bright light is shone on the eye, the pupil automatically constricts. This is the pupillary

reflex [65]. Furthermore, the pupil will dilate if a person sees an object of interest. The

oculomotor nerve, specifically the parasympathetic part coming from the Edinger-

Westphal nucleus, terminates on the circular iris sphincter muscle. When this muscle

contracts, it reduces the size of the pupil. The iris sphincter muscle is not necessarily of uniform size all around; that is why the pupil is of non-circular shape. Moreover, non-orthogonal images

(i.e. the images acquired at an angle other than normal to the eye ball) or off-angle

images have non-circular pupil.

Non-circular pupil boundary is calculated by using the pupil parameters. A specific

number of points on pupil circular boundary are selected and this number is calculated by

equation 3.14. The results of correct non-circular boundary of pupil are given in Table

5.2. Accuracy of 98.28% is achieved for correct non-circular pupil boundary. These

results depend on accurate pupil parameters and number of points on pupil circular

boundary. If the number of points on the pupil is less than the number selected by equation 3.14, then the accuracy of non-circular pupil localization decreases, because the distance among the selected points becomes large and, when these points are joined, they do not look like a circle. Similarly, if the number of points on the pupil is more than the

specific numbers of points then accuracy is not increased because the mutual distance

among the points is very small, even some of the points are connected to each other.

The 1.72% incorrect pupil boundaries are due to long eyelashes and the very rich iris texture near the pupil boundary, which is confused with the pupil boundary.

5.2.3 Iris Localization

The boundary between iris and sclera is named as iris boundary which is the most

important parameter for iris localization. Proposed method in Section 4.1.3 has been

applied to the database and its results are shown in Table 5.2. High accuracy of iris

localization is very important because this part of the image is actual iris data which is

used for recognition. The proposed method yields correct iris localization rate of 99.6%

for CASIA version 1.0. As the pupil parameters are found with very high accuracy (i.e. 100%), they play a key role in achieving such high accuracy in iris localization.

5.2.4 Eyelids Localization

Iris outer and inner boundaries have been worked out in Section 4.1.4. CASIA version

1.0 iris database has some images in which eyelashes are very dense and long. The

proposed method in the module gives good response for finding eyelids as indicated in

the results in Table 5.2. Upper eyelids are correctly localized with an accuracy of

98.91%. The results achieved for lower eyelids are accurate up to 97.8%.

Table 5.2: Results of Iris localization in CASIA version 1.0

S. No. | Name of Stage                   | Total number of images | Accuracy
a.     | Point Inside Pupil              | 756                    | 100%
b.     | Pupil Parameters                | 756                    | 100%
c.     | Non-Circular Pupil Localization | 756                    | 98.28%
d.     | Iris Localization               | 756                    | 99.6%
e.     | Upper Eyelids                   | 756                    | 98.91%
f.     | Lower Eyelids                   | 756                    | 97.8%

Some of the correctly localized images are shown in Figure 5.2. Eyelids in these images

are masked so that normalized image may not contain noisy portion. Iris and pupil

centers are shown clearly in the images.

Figure 5.2: Some correctly localized images in CASIA version 1.0

5.3 CASIA Version 3.0

CASIA Version 3.0 includes three subsets that are labeled as CASIA Version 3.0

Interval, CASIA Version 3.0 Lamp and CASIA Version 3.0 Twins. CASIA Version 3.0

Interval Iris database contains a total of 2655 iris images from 249 subjects. All iris

images are 8 bit gray-level JPEG files which are collected under NIR illumination.

Almost all subjects are Chinese. CASIA version 3.0 Interval is used for evaluation of the proposed methods, and from here onward CASIA Version 3.0 Interval is referred to as CASIA version 3.0, which is a superset of CASIA Version 1.0. Images of left and right eyes are stored in separate folders with a total of 498 (249×2) folders. There are 102 empty folders. Therefore, the total number of classes is 396 (498-102), as given in Table 5.1.


5.3.1 Pupil Localization

Inner boundary of iris is termed as pupil boundary. In this database, pupil has eight white

small circles like a revolver chamber. These circles need another technique for pupil

localization. Finding exact location of pupil is a main step in iris localization. Pupil

detection and its parameters have been obtained for determining pupil localization. Good

localization of iris depends on exact localization of pupil because its center is used for

further processing. For pupil detection, a point inside the pupil is searched using the

proposed algorithm.

a. Point inside the pupil

The light reflections present in the pupil differ from dataset to dataset; as a result, different methods have been implemented to obtain the pupil center and radius. To locate a point

inside the pupil, the proposed method has been used. In this method, number of rows in

the image is used to find the size of decimation filter and border width which are taken as

10% and 15% of the total rows. In the case of the CASIA version 3.0 (Interval) dataset, these

values are 28 and 42 pixels. Border width of 42 pixels is excluded in finding point inside

the pupil. Since the illumination and contrast of the images in this dataset vary widely, the result of 99.92% has been achieved for finding a point inside the pupil correctly, whereas this result for the CASIA version 1.0 database has been 100%.
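A minimal sketch of this search is given below (Python, illustrative only); the 10% and 15% figures follow the text, while the rule of picking the darkest filtered block as the point inside the pupil is an assumption made for the example.

    import numpy as np

    def point_inside_pupil(gray):
        rows, cols = gray.shape
        f = max(1, rows // 10)       # decimation filter size: 10% of the rows (e.g. 28 for CASIA 3.0)
        b = int(round(0.15 * rows))  # excluded border width: 15% of the rows (e.g. 42 for CASIA 3.0)
        best_mean, best_point = None, None
        for r0 in range(b, rows - b - f, f):
            for c0 in range(b, cols - b - f, f):
                block_mean = gray[r0:r0 + f, c0:c0 + f].mean()
                # assumption: the darkest block is taken to lie inside the dark pupil
                if best_mean is None or block_mean < best_mean:
                    best_mean, best_point = block_mean, (r0 + f // 2, c0 + f // 2)
        return best_point  # (row, column) of a point assumed to be inside the pupil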

b. Pupil Parameters

Coordinates of pupil center and length of radius are determined while assuming pupil as

circle. Figure 5.1 (b) shows an image of this dataset. It has eight white circles inside the

pupil. For eyes having small pupil, these white circles are on the boundary of pupil which

makes it very difficult to find the pupil boundary. The results of finding pupil parameters

using methods discussed in Section 4.1.1 are given in Table 5.3. The proposed method

correctly calculated the pupil parameters of 2648 images out of 2653 images. In only two

images, point inside the pupil was incorrectly detected. While calculating accuracy for

the database, all the images instead of 2653 images were used. The accuracy achieved in

this case is 99.69% whereas 100% correct results have been obtained for CASIA version

1.0. If two images are subtracted from the total number of images (because of incorrect


point inside the pupil) then the result for pupil parameter increases from 99.69% to

99.81%.

5.3.2 Non-circular Pupil Localization

Pupil boundary is not an exact circle. So, a specific number of equidistant points from

equation 3.14 are shifted to original boundary of the pupil and then joined linearly to get

the exact boundary (non-circular) of the pupil. In non-circular pupil localization, the pupil parameters play a vital role. If the circular boundary is incorrect (more than 13 pixels

away from exact pupil boundary), then non-circular boundary will not be determined

correctly. The result of correct non-circular boundary of pupil is given in Table 5.3.

Accuracy achieved for non-circular pupil boundary is 99.35% in case of CASIA version

3.0 whereas it is 98.28% for CASIA version 1.0 [98].

5.3.3 Iris Localization

Iris boundary is relatively difficult to find because contrast between pupil and iris is

higher than the contrast between iris and sclera. This boundary defines the region inside

the iris. Proposed method in section 4.1.3 has been applied to the database and the results

obtained are shown in Table 5.3. Since the pupil has different light spots / reflections in different databases, different methods have been implemented for the pupil boundary, whereas for iris localization a generic method has been proposed which gives equally good results for all databases. The results of correct iris localization have been achieved

up to 99.21% for CASIA version 3.0.

5.3.4 Eyelids Localization

Iris outer and inner boundaries have been determined and the results of eyelid

localization module as mentioned in Section 4.1.4 are presented. In this module, the

eyelids are considered as parabolas. Obtaining eyelids boundary, particularly upper eyelid

in an image, is very difficult because of the presence of eyelashes. Very dense eyelashes

make detection of eyelid more challenging. The proposed method performs well and

results have been achieved up to 90.02% and 91.9% for upper and lower eyelids

respectively. The corresponding results of 98.91% and 97.8% have been obtained for CASIA version 1.0. CASIA version 3.0 has a higher percentage of blurred images as compared to version 1.0. Therefore, overall results are better for CASIA version 1.0, particularly in the detection of upper and lower eyelids [48].

Table 5.3: Results of Iris localization in CASIA version 3.0

S. No. Name of Phase Total number of images Accuracy

a. Point Inside Pupil 2655 99.92%

b. Pupil Parameters 2655 99.69%

c. Non-Circular Pupil Localization 2655 99.35%

d. Iris Localization 2655 99.21%

e. Upper Eyelids 2655 90.02%

f. Lower Eyelids 2655 91.90%

Figure 5.3 contains some of the images in which iris is localized correctly. Parts of the

image above upper eyelid and below lower eyelid have been masked because these parts

contain noise and are not used for further processing.

Figure 5.3: Some correctly localized images in CASIA version 3.0


5.4 University of Bath Iris Database (free version)

BATH iris dataset (free version) has 1000 iris images of high resolution from 50 different

eyes in 25 folders. The folders are indexed numerically as 0001, 0002, etc. Within each

folder, there are two subfolders - L (left) and R (right), each containing 20 images of the

respective eyes. The free images are JPEG2000 which are compressed to 0.5 bits per

pixel and are in grayscale with 1280×960 resolution.

5.4.1 Pupil Localization

Pupil is very large in this iris image dataset because of high resolution of the images. A

white reflection of light source is present in the pupil. Dataset contains images of eyes

with lenses. Pupil localization means finding the location of pupil and its parameters.

First step in pupil localization is detection of pupil location. For this purpose, a point

inside the pupil is looked for using the algorithm. Exact localization of pupil plays major

role in iris localization because center of pupil is exploited for further processing.

a. Point inside the pupil

Once a point inside the pupil is confirmed, it is easy to locate the pupil boundary because of the high contrast between the pupil and the iris. Light reflection in the pupil is different for

different datasets. As a result, different methods have been implemented to obtain pupil

parameters. To locate a point inside the pupil, a proposed method is used which employs

number of rows in the image to find the size of decimation filter and border width, which

are taken as ten and fifteen percent of the total rows. For this database, size of the

decimation filter is 96 by 96 pixels and border width is 144 pixels. 100% accurate results

are achieved by finding a point inside the pupil for this database because in all of these

images, pupil pixels have almost same intensity values.

b. Pupil Parameters

Different methods have been proposed for different databases for pupil parameters

detection. Pupil parameters are acquired while finding a square inscribing the pupil. In

this method instead of finding the pupil circle, a square sub-image containing the pupil is

separated. This square image has tightly fitted pupil in it and considering pupil as


complete circle, coordinates of pupil center and length of radius are calculated. Figure

5.1(c) shows an image of this dataset. It has a white spot in the pupil. The results of

finding pupil parameters for the database are shown in Table 5.4. Due to high resolution

of the image and using a different approach, results attained for BATH iris database are

100% correct.
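A possible reading of this square-based estimation is sketched below (Python, illustrative); the fixed threshold and the use of a connected dark region around the seed point are assumptions of the example, not details taken from the thesis.

    import numpy as np
    from scipy.ndimage import label

    def pupil_from_square(gray, seed_row, seed_col, thresh=70):
        # isolate dark pixels and keep the connected blob containing the seed point,
        # which is assumed to lie inside the pupil
        dark = gray < thresh
        labels, _ = label(dark)
        blob = labels == labels[seed_row, seed_col]
        rows = np.where(blob.any(axis=1))[0]
        cols = np.where(blob.any(axis=0))[0]
        top, bottom, left, right = rows[0], rows[-1], cols[0], cols[-1]
        # the bounding square tightly fits the pupil; treating the pupil as a circle,
        # its centre and radius follow directly from the square
        yc = (top + bottom) / 2.0
        xc = (left + right) / 2.0
        radius = (max(bottom - top, right - left) + 1) / 2.0
        return xc, yc, radius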

5.4.2 Non-circular Pupil Localization

A closer view of the iris image demonstrates that the pupil boundary is zigzag. Therefore,

a number of points from equation 3.14 are forced to shift at genuine boundary of the

pupil and then linearly joined to get the exact boundary of the pupil. The correct

localization rate of non-circular boundary is given in Table 5.4. 98.8% accurate results

have been achieved for BATH iris database whereas 98.28% and 99.35% are the results

of correct non-circular pupil localization for CASIA version 1.0 and 3.0 respectively.

Accuracy in non-circular pupil localization for BATH database is 0.52% better than

CASIA version 1.0 and 0.55% worse than CASIA version 3.0. These 0.52% better results are because, in the BATH database, fewer images have very high frequency iris patterns near the pupil boundary, and long eyelashes are also not present near the pupil boundary.

5.4.3 Iris Localization

Images of this database have better contrast between iris and sclera as compared to other

databases because of high resolution of images. Iris boundary is localized by applying the

proposed method to this database. After finding pupil parameters, candidate points for iris

boundary are selected using the defined procedure as discussed in Section 4.1.3 and the

results are given in Table 5.4. The results of iris localization have been obtained with

accuracy up to 99.4% for BATH iris database. The results for CASIA version 1.0 and 3.0

are 99.6% and 99.21% respectively.

5.4.4 Eyelids Localization

Iris and pupil boundaries have been processed. The results of eyelid localization for

BATH iris database are given in Table 5.4. The accuracy of results for correct upper and

lower eyelids detection is 84.5% and 96.6% respectively. The result of upper eyelid


localization is the worst in the case of BATH database because the images in this

database have very prominent upper eyelashes and the size of the image also affects the

accuracy. In case of CASIA version 1.0 and 3.0, the correct upper eyelid detection

percentages are 98.91% and 90.02% respectively whereas lower eyelids spotted well in

these databases with accuracies of 97.8% and 91.9% respectively. The result for lower

eyelid localization of the BATH iris database is 4.7% higher than for CASIA version 3.0 and is slightly (1.2%) less than for the CASIA version 1.0 iris database.

Table 5.4: Results of Iris localization in BATH (free version)

S. No. Name of Phase Total number of images Accuracy

a. Point Inside Pupil 1000 100%

b. Pupil Parameters 1000 100%

c. Non-Circular Pupil Localization 1000 98.80%

d. Iris Localization 1000 99.40%

e. Upper Eyelids 1000 84.50%

f. Lower Eyelids 1000 96.60%

Figure 5.4: Some correctly localized images in BATH Database free version


5.5 MMU Version 1.0

MMU Version 1.0 iris database contains a total number of 450 iris images which have

been taken using LG IrisAccess®2200. This camera is semi-automated and it operates at

the range of 7-25 cm. These iris images are contributed by 100 volunteers with different

ages and nationalities. They come from Asia, the Middle East, Africa and Europe. Each of them

has 5 iris images for each eye. Five left eye iris images have been excluded from the

database due to cataract disease.

5.5.1 Pupil Localization

Accurate pupil localization is the main phase of iris localization. Pupil detection

and finding its parameters is the initial process. Exact localization of iris mainly depends

upon accurate localization of pupil because its center is used for finding iris boundary.

For pupil detection, a point inside the pupil is searched using the algorithm given in

Section 4.1.1. The images of this database are colored so they are converted to grayscale

as a first step of processing.

a. Point inside the pupil

For finding pupil parameters, a point inside the pupil is detected. As the image acquiring devices are different for different datasets, the nature of the pupil in the image is different for different databases. For instance, eight white small circles are present in

pupil in CASIA version 3.0 iris dataset. As a result different methods have been proposed

to find parameters of pupil. In order to locate a point inside the pupil, number of rows in

the image is effectively used. To get the size of decimation filter and border width 10%

and 15% of the total rows have been used. For this database these values are 24 and 36

pixels. Border width is excluded in finding point inside the pupil because pupil is almost

in the center of iris. The results are presented in Table 5.5 and a point inside the pupil is

detected with 100% accuracy. This very high accuracy is attained because the intensity

level of the pupil is almost same for each image in this database although a white spot is

present inside the pupil. The results to search a point inside the pupil for CASIA version

1.0 and BATH iris databases are also 100% whereas it is 99.92% for CASIA version 3.0.


b. Pupil Parameters

Pupil parameters include coordinates of pupil center and length of radius. Accurate pupil

center is very critical because it is used in finding iris boundary. Figure 5.1 (d) shows an

image of this dataset in which a spot due to reflection of light source is present in the

pupil. To find the pupil parameters for this database, complete procedure has been shown

in Figure 4.5. The results achieved for calculated pupil parameters are up to 98.44% as

shown in Table 5.5. These results are 1.42%, 1.25% and 1.56% less than the results of

pupil localization of CASIA version 1.0, CASIA version 3.0 and BATH iris databases

respectively. These inaccuracies in MMU database are due to large number of images in

which pupil is occluded by eyelids and long dense eyelashes.

5.5.2 Non-circular Pupil Localization

Size of pupil changes constantly even in constant illumination and its boundary is not an

exact circle. To localize it perfectly, a specific number of points based upon length of

pupil radius using equation 3.14 are shifted to original boundary of the pupil. This shift

enables us to get the exact (non-circular) boundary of the pupil. The result of correct non-

circular boundary of pupil is given in Table 5.5. Non-circular boundary of the pupil has

correct localization rate of 96.6% for MMU Iris database whereas 98.28%, 99.35% and

98.8% accurate results are achieved for CASIA version 1.0, CASIA version 3.0 and

BATH iris databases respectively. Size of each image in MMU database is the smallest of

all the studied databases. The reasons for low non-circular boundary results are large

percentage of images with long eyelashes near the pupil boundary, occlusion of pupil

with upper eyelid and eyelashes.

5.5.3 Iris Localization

Another method is proposed to localize the iris for this database [99] which is presented

step by step in Figure 4.10. Correct localization of iris is a challenging task because of

low contrast between iris and sclera. Using this method, the results attained for iris

localization are 96.86%. When the proposed method of iris boundary detection with

minor changes (algorithm 3) is applied to this database, correct results achieved are up to

99.77%. The results of iris localization are shown in Table 5.5. Although the image size is relatively small in this database, the proposed algorithm performs very well because it captures the gradient at the boundary of the iris. Correct iris localization of 99.6%,

99.21% and 99.4% has been obtained for CASIA version 1.0, CASIA version 3.0 and

BATH iris databases respectively.

5.5.4 Eyelids Localization

Iris circular and pupil non-circular boundaries have been obtained and the results of

eyelid localization module as mentioned in Section 4.1.4 are presented in Table 5.5.

Upper eyelid normally has eyelashes curving down, which cover some part of iris as well

as of pupil. Lower eyelids have eyelashes which in general do not cover the iris. Eyelids

are considered as parabolas while detecting their boundaries. The results of correct upper

and lower eyelids detection are 84.66% and 96.22% respectively for MMU iris database.

Lower eyelids localization results of MMU database are 4.32% better than CASIA

version 3.0 database whereas they are 0.38% and 1.58% less than BATH and CASIA

version 1.0 iris databases respectively. Similarly, results of correct upper eyelid localization are slightly (0.16%) better than for the BATH iris database and are less than for CASIA version 1.0 and

3.0. The results of upper eyelids are low (i.e. 84.66%) because of large number of images

with long upper eyelashes which occlude the eyelid.

Table 5.5: Results of Iris localization in MMU version 1.0

S. No. Name of Phase Total number of images Accuracy

a. Point Inside Pupil 450 100%

b. Pupil Parameters 450 98.44%

c. Non-Circular Pupil Localization 450 96.60%

d. Iris Localization 450 99.77%

e. Upper Eyelids 450 84.66%

f. Lower Eyelids 450 96.22%


Some of the correctly localized iris images are shown in Figure 5.5. A comparison of all steps of iris localization is graphically represented in Figure 5.6, in which the accuracy of each step is given on the y-axis and the steps of iris localization are on the x-axis. The most important step in iris localization is iris boundary detection, which has an accuracy of more than 99.2% for all the databases with a total of 4861 images.

Figure 5.5: Some correctly localized images in MMU Database version 1.0

Figure 5.6: Comparison of steps in iris localization in different databases (CASIA version 1.0, CASIA version 3.0, BATH and MMU)

5.6 Errors in Localization

During the experiments, irises in many images could not be localized exactly. There are certain errors in different phases of iris localization. In some images, the pupil gets an incorrect boundary due to the white spot in it. Sometimes, the iris in the image has an incomplete boundary. These errors propagate significantly into the other phases of iris recognition. These errors are described in the following sections.

5.6.1 Errors in Circular Pupil Localization

In the first phase of iris localization, pupil is localized by assuming it as a complete

circle. There are two types of mistakes found during this process. First is inaccurate pupil

center and second is inaccurate length of pupil radius. In Figure 5.7, inaccuracies in pupil

localization are depicted. These errors are due to non-circular shape of the pupil, locating

wrong point while finding a point inside the pupil, eyelashes on the boundary of the pupil and the eyelid covering the pupil.

Figure 5.7: Inaccuracies in circular pupil localization: (a) non-circular pupil, (b) wrong point inside the pupil, (c) long eyelashes near pupil boundary, (d) wrong length of radius of pupil, (e) pupil occluded by eyelashes, (f) pupil occluded by upper eyelid

In most of the cases, pupil boundary is not an exact circle. If

a circle is drawn on the boundary of the pupil, there is high chance that it will either

cover some part of the pupil or some part of iris. Figure 5.7 (a) is from CASIA version

1.0 with incorrect circular localization of pupil. In this case, some part of iris is covered

with pupil estimated boundary. Figure 5.7 (b), (c) and (d) are from CASIA version 3.0

and Figure 5.7 (e) and Figure 5.7 (f) are from MMU iris database. If the point inside pupil

is not correctly found then pupil boundary will not be localized correctly as shown in

Figure 5.7 (b). In any image, the point inside the pupil becomes the key location for finding the pupil circular boundary. Bright white circles on the boundary of the pupil also

produce inaccuracies in pupil localization. Long eyelashes near pupil boundary as shown

in Figure 5.7 (c), pupil occluded by long eyelashes (Figure 5.7 (e)) and half open eye or

pupil covered with eyelid as shown in Figure 5.7 (f) are other sources of error in this

process. Translation of center and adjustment of radius can remove majority of these

errors.

5.6.2 Errors in Non-circular Pupil Localization

After finding the parameters of circular pupil localization, a number of points using

equation 3.14 on the circular boundary of the pupil are picked up to adjust their position

towards the exact boundary of pupil. The adjusted points are then linearly joined to get

the exact boundary of the pupil. Errors in this phase are because of long eyelashes near

the pupil boundary, white spots in the pupil, very sharp features close to pupil boundary

and position of eyelid in the vicinity of pupil. Some of incorrect non-circular pupil

images are shown in Figure 5.8. Image presented in Figure 5.8 (a) is from CASIA version

1.0 with inaccuracy because of long eyelashes near pupil boundary. Same inaccuracy is

also shown in Figure 5.8 (b), which is from CASIA version 3.0. White circles placed in the pupil by the capturing device, lying on and near the pupil boundary, divert the non-circular pupil finding module towards a mistake, as shown in Figure 5.8 (c) of the CASIA version 3.0 iris database. Inaccuracies due to very sharp patterns of the iris near the pupil boundary turned out to be the main reason for incorrect non-circular pupil boundaries in images of the BATH iris database. One image

with this inaccuracy is shown in Figure 5.8 (d). A white spot in the vicinity of the pupil boundary and upper eyelids occluding the iris near the pupil boundary are the root causes of inaccuracies in the MMU iris image datasets, as indicated in Figure 5.8 (e) and Figure 5.8 (f).

Figure 5.8: Inaccuracies in non-circular pupil localization: (a) long eyelashes near pupil boundary, (b) long eyelashes near pupil boundary, (c) white circle near pupil boundary, (d) very sharp pattern of iris near pupil boundary, (e) white spot near pupil boundary, (f) eyelid near pupil boundary

5.6.3 Errors in Iris Localization

Obtaining iris boundary is a difficult task in the images where the contrast between iris

and sclera is very low. Human visual power is marvelous; one can define a virtual

circular boundary of iris even if it is mixed with sclera. Such detection using an algorithm

is a challenging job. This challenge is fulfilled with the proposed method but there is a

small number of images on which the proposed algorithm fails. Main sources of errors in locating the iris boundary are long eyelashes parallel to the iris boundary, incomplete iris in the image, very sharp patterns in the iris, extremely low contrast between iris and sclera, and another boundary outside the iris boundary due to reflection of light or curvature of the eyeball.

Some of the incorrect images are portrayed in Figure 5.9. Images in first and second row

in Figure 5.9 are associated with CASIA version 1.0 and 3.0 iris image databases

respectively. Images Figure 5.9 (g) and Figure 5.9 (h) are from BATH free version iris

dataset and the last image is from the MMU version 1.0 database. Inaccurate iris boundary in images Figure 5.9 (a), (b), and (i) is due to the presence of long eyelashes near the iris

boundary. The iris boundary is not even visible in Figure 5.9 (a) and (c) on the right side and in Figure 5.9 (f) on the left side, which is why the iris boundary is not localized perfectly. In Figure

5.9 (d), lens boundary is obtained instead of iris boundary on right side. Errors in Figure

5.9 (e) and Figure 5.9 (h) are due to the sharp iris patterns which guide the algorithm

towards detection of wrong iris boundary. Figure 5.9 (g) has incorrect iris boundary

because it has white shade concentric with iris center. These inaccuracies could be

removed by changing the parametric values in modules.

Figure 5.9: Inaccuracies in iris localization: (a) long eyelashes, (b) long eyelashes, (c) iris boundary not visible (right side), (d) lens boundary, (e) sharp iris pattern, (f) iris boundary not visible (left side), (g) white shade inside the iris, (h) sharp iris pattern, (i) long eyelashes


5.6.4 Errors in Eyelids Localization

For finding eyelids, the image portion between the vertical boundaries of iris is

processed. As the eyelid shape is parabolic, two parabolas, one for the upper and the other for the lower eyelid, are calculated. Points are selected as already discussed and parabolas are fitted through these points. Length and density of eyelashes affect the proposed method. There is a wide variety of eyelids in the images; for example, in some images the upper eyelid is covered with eyelashes so much that the boundary of the eyelid on the iris is occluded, and some images show the same situation for the lower eyelid. It has been observed that the probability of

occlusion of iris with upper eyelid is higher than lower eyelid. Main reason of inaccurate

eyelid localization is selection of incorrect points which is due to multiple eyelashes, very

dense eyelashes, eyelashes parallel to eyelids and a bright layer on the eyelid. Some

inaccurate eyelids are shown in Figure 5.10 along with a reason of the inaccuracy. Each

of these images can be converted to correctly localized image by varying the parameters

in the eyelid detection module.

Figure 5.10: Inaccuracies in eyelid localization: (a) multiple eyelashes, (b) very dense eyelashes, (c) multiple eyelashes, (d) multiple eyelashes, (e) very dense eyelashes, (f) eyelashes parallel to lower eyelid
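Since the eyelids are modelled as parabolas, the fitting step itself reduces to a least-squares polynomial fit; the sketch below (Python, illustrative only) assumes the candidate eyelid points have already been selected according to Section 4.1.4.

    import numpy as np

    def fit_eyelid_parabola(points):
        # least-squares fit of y = a*x^2 + b*x + c through the selected eyelid points
        pts = np.asarray(points, dtype=float)
        a, b, c = np.polyfit(pts[:, 0], pts[:, 1], deg=2)
        return a, b, c

    def eyelid_curve(coeffs, x_left, x_right):
        # evaluate the fitted parabola between the vertical iris boundaries
        a, b, c = coeffs
        xs = np.arange(x_left, x_right + 1)
        return xs, a * xs ** 2 + b * xs + c

When several of the selected points actually lie on eyelashes rather than on the lid, the fitted parabola is pulled away from the true eyelid, which is the failure mode illustrated in Figure 5.10.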


5.7 Comparison with Other Methods

The best results in iris localization using proposed method have been achieved up to

99.6% for the CASIA version 1.0 iris database, which is the most widely used iris database in

research. The results of proposed scheme of iris localization are compared with the

results of other researchers in terms of accuracy and computational complexity.

5.7.1 Accuracy

Upon comparing the proposed method with existing methods, proposed method performs

better in accuracy and execution time. In terms of correct localization, proposed method

has shown the best results. Hough transform method has been used by most of the

researchers for iris localization. Edge detection followed by a Hough transform is a

standard machine vision technique for fitting simple contour models to images [100]. For

CASIA version 1.0 iris image database, results are very impressive as shown in Table

5.6. After applying the Canny edge detector to the image, the Hough transform is used for iris localization on the same dataset; the iris boundary is correctly localized with an accuracy of 83.45% and correct pupil localization is achieved up to 97.48%, as given in Table 5.7. Average time consumed on each image is 129.3 seconds using the Hough transform. Masek's implementation of Daugman's method has given an iris localization accuracy of 82.54% and a pupil localization accuracy of 99.07%. The results of pupil localization for CASIA version 1.0

are given in Table 5.7.

Table 5.6: Results of iris localization for CASIA version 1.0

Method                 Accuracy        Time (seconds): Mean / Min / Max
Daugman [81]           98.6%           6.56 / 6.23 / 6.99
Wildes [55]            99.9%           8.28 / 6.34 / 12.54
Masek [56]             82.54%          17.5 / 6.3 / 43.3
Cui et. al. [59]       99.34%          0.24 / 0.18 / 0.33
Hough Transform        83.45%          129.3 / 77.1 / 192.3
Shen et. al. [57]      Not mentioned   3.8 / - / -
Zaim [101]             92.7%           5.83 / - / -
Zhu et. al. [102]      88%             0.5 / - / -
Narote et. al. [103]   97.22%          0.96 / - / -
Proposed               99.6%           0.33 / 0.24 / 0.41

It is obvious from the results given in Table 5.6 that proposed system has higher accuracy

than Daugman, Masek, Cui, Hough transform, Zaim, Zhu and Narote’s iris localization

methods. Average time used by the proposed system is very low as compared to all other

systems except Cui. Maximum time spent to localize iris is 0.41 seconds which is almost

17 times less than Daugman, 30 times less than Wildes and 105 times less than Masek

whereas it takes only 0.08 seconds more than Cui’s method. It is approximately 26 times

faster than Daugman, Wildes & Masek and 321 times faster than Hough transform

method while comparing minimum time usage. It has also been observed that accuracy of

the proposed system is slightly less (i.e. 0.3%) than that of Wildes method but Wildes

method is very time consuming. Average time used by Wildes system is 8.28 seconds per

image. On the other hand, the proposed system is utilizing average time of only 0.33

seconds which is 25 times faster than that of Wildes. It is more than 19, 53, 391, 11, 17,

1.5 and 2.9 times quicker than Daugman, Masek, Hough transform, Shen, Zaim, Zhu and

Narote methods respectively whereas Cui’s method takes 0.09 seconds less but its

accuracy is also less than the proposed method.

Accuracy of pupil localization for CASIA version 1.0 iris image dataset is compared with

other methods in Table 5.7. All methods perform circular localization of the pupil while

the proposed method has also been extended to the non-circular boundary detection of the

pupil. The correct results have been obtained with 100% accuracy in pupil circular

localization using the proposed method. Narote et. al. [103] and Mehrabian et. al. [104]

have also mentioned 100% results for finding pupil parameters. Hough transform and

Masek’s implementation of Daugman method are producing results with accuracy of


97.48% and 99.07% respectively. The results of non-circular pupil localization are

98.28% for this database.

Table 5.7: Results of Pupil localization for CASIA version 1.0

Methods Accuracy

Mehrabian et. al. [104] 100%

Hough Transform 97.48%

Narote et. al. [103] 100%

Masek [56] 99.07%

Proposed 100% (circular) 98.28% (non-circular)

In view of the above results, the proposed method of iris localization for CASIA version

1.0 (the most widely used iris image database) has performed very well in terms of

accuracy and efficiency.

For CASIA version 3.0, results of iris localization are shown in Table 5.8. In this

database, the number of blurred and defocused images is greater than in CASIA version 1.0. The Wildes method produces a correct iris localization rate of 89.09%, and the accuracy of iris localization with the Masek method is 82.56%. From the tabulated

values, it is clear that the results of iris localization using the proposed method are the

best for this database.

Table 5.8: Results of iris localization for CASIA version 3.0

Methods Accuracy

Masek [56] 82.56%

Wildes [55] 89.09%

Proposed 99.21%


The proposed algorithm has been successfully applied to BATH iris database. Results of

iris localization are compared with the results of other researchers in Table 5.9. Kennell

et. al. [105] applied binary morphology and local statistics to obtain pupil and iris

boundaries localization with accuracy 96% and 92.5% respectively on the same database.

Grabowski et. al. [106] achieved iris localization for BATH iris database with 96%

correct results by finding zero-cross points in first derivative of histogram of the images.

Guang-Zhu et. al. [107] used the property of local areas in the image and segmentation

accuracy of 98% is reported for the same database. The proposed method has performed

well as compared to Kennell, Grabowski and Guang-Zhu’s methods. Proposed method

exhibited 6.9%, 3.4% and 1.4% better results than Kennell, Grabowski and Guang-Zhu’s

methods for iris boundary localization, and in the case of pupil boundary localization the proposed method has displayed 4% higher accuracy as compared to Kennell's method

while others did not mention the accuracy of pupil boundary.

Table 5.9: Results of iris localization for BATH iris database

Methods Accuracy

Kennell et. al. [105] 96% (Pupil boundary) 92.5% (Iris boundary)

Grabowski et. al. [106] 96.0% (Iris boundary)

Guang-Zhu et. al. [107] 98.0% (Iris boundary)

Proposed 100% (Pupil boundary) 99.4% (Iris boundary)

For MMU version 1.0 iris image database, results are obtained by using methods

mentioned in Table 5.10. Teo et. al. [108] reported the results with accuracy of 98% on

the same database for iris localization. The same accuracy has been achieved using the

proposed method of histogram processing [99]. Results of iris localization using the Wildes and Masek methods are 92.66% and 96.7% respectively,

whereas the best iris localization results of 99.77% have been achieved using the

proposed method.


Table 5.10: Results of iris localization for MMU Iris Dataset

Methods Accuracy

Teo et. al. [108] 98.0%

Wildes [55] 92.66%

Masek [56] 96.7%

Proposed (histogram processing) [99] 98.0%
Proposed (gradient processing) [48] 99.77%

5.7.2 Computational Complexity

If the methods are compared with respect to their computational complexity then it is

evident from the tabulated results that the proposed method has less complexity. The

Generalized Hough Transform (GHT) is useful for detecting or locating translated two-

dimensional objects. However, a weakness of the GHT is its storage requirements and

hence the increased computational complexity [109].

In the Hough transform, every point in the edge image is considered as a candidate center and, for each radius, a virtual circle is drawn around it. Points lying on the circle with a specific radius are voted into the corresponding layer in Hough space. Then the point with the maximum number of votes

becomes the center of the circle and corresponding layer is the radius of the circle. So,

Hough space is four-dimensional (i.e. x, y, z, v, where x, y are the coordinates of point in

the image, z is the number of radii to look for and v is the value at position (x,y,z) in the

space) which makes it less efficient.

Let “r” and “c” be the rows and column of the image and “n” be the number of points in

edge image. Let “rad” be number of radii used in Hough space then computational

complexity of the Hough transform is O(n×rad). As the number of points in the edge

image and radii for which search is carried out are increased, the time and number of

operations performed during the process are increased accordingly. Same computations

are required while obtaining iris outer boundary. As far as memory consumption is

concerned, it is O(r×c×rad) because dimensions of the image multiplied by number of


radii must be in the work space along with other parameters most of the time during iris

localization.
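The voting procedure and the accumulator size discussed above can be made concrete with the following sketch (Python, illustrative only; the number of sampled angles per circle is an arbitrary choice of the example):

    import numpy as np

    def circular_hough(edge_points, shape, radii):
        rows, cols = shape
        # accumulator of size r x c x rad, as discussed in the text
        acc = np.zeros((rows, cols, len(radii)), dtype=np.int32)
        thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
        for (y, x) in edge_points:            # every edge point votes ...
            for k, r in enumerate(radii):     # ... on every candidate radius: O(n x rad) votes
                cy = np.round(y - r * np.sin(thetas)).astype(int)
                cx = np.round(x - r * np.cos(thetas)).astype(int)
                ok = (cy >= 0) & (cy < rows) & (cx >= 0) & (cx < cols)
                np.add.at(acc, (cy[ok], cx[ok], k), 1)
        # the best (centre, radius) hypothesis is the accumulator maximum
        y0, x0, k0 = np.unravel_index(np.argmax(acc), acc.shape)
        return (x0, y0), radii[k0]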

In Daugman's iris localization, an integro-differential operator is used to find the boundaries of the iris and pupil, and it acts as a circular edge detector. Let "n" be the number of points selected on each arc/circle for finding the boundaries of the iris. The Integro-Differential Operator (IDO) first sums the image points which lie on the arc and then finds the difference of subsequent sums, followed by convolution with a Gaussian function. The last step

in the Daugman iris localization is to find the location of maximum value through the 3D

space. Let “rad” be the radii in the domain of IDO and “a” be the size of Gaussian, then

the computational complexity of the operator is O(n×rad×a) whereas memory

consumption is less than that of Hough space.
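The following sketch (Python, illustrative only) shows the corresponding operations for a single candidate centre; the full operator repeats this search over candidate centres as well, which is what the three-dimensional search space above refers to.

    import numpy as np

    def integro_differential(gray, xc, yc, radii, n=64, sigma=1.0):
        # contour means over n points per circle, differences between consecutive
        # radii, Gaussian smoothing, and the radius of maximum response
        thetas = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        sums = []
        for r in radii:
            xs = np.clip(np.round(xc + r * np.cos(thetas)).astype(int), 0, gray.shape[1] - 1)
            ys = np.clip(np.round(yc + r * np.sin(thetas)).astype(int), 0, gray.shape[0] - 1)
            sums.append(gray[ys, xs].mean())
        deriv = np.abs(np.diff(np.asarray(sums)))
        kernel = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
        kernel /= kernel.sum()
        response = np.convolve(deriv, kernel, mode="same")
        return radii[int(np.argmax(response)) + 1]  # radius with the sharpest intensity change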

Computation cost of the proposed algorithm is calculated by considering the following

method. Let “n” the number of points obtained for finding circle on each radially outward

line from pupil center. Outliers are deleted from the “n” points to reduce the points.

Difference between the points on each line contributes towards selecting a point and a

maximum of three points are selected on each line. Only 38 lines are processed so

maximum of 114 (38×3) points are selected. Therefore, the computational complexity is

O(k), where k is a constant. Thus, the time consumption in order to achieve iris

localization is less than other algorithms.

5.8 Normalization

All the normalization methods perform correctly. This process is not only a

transformation from rectangular to polar coordinates system but also compensation of

width of irises. Methods have been explained in the previous chapters. Five different

normalization methods have been implemented. Three are named as normalization using

reference point as pupil center, iris center and mid-point of pupil & iris centers. The other

two methods are named as normalization using minimum distance and normalization

using dynamic size. Time utilization of four methods for each image is given in Figure

5.11. Time utilized in normalization using the pupil center as the reference point is 0.05 seconds per image for all the databases, whereas normalization using the mid-point of the pupil and iris centers as the reference point took 0.07 seconds per image for all the database images. Time consumed for minimum distance normalization is 0.03 seconds per image for all databases.

Figure 5.11: Time comparison of normalization methods (per-image time of the pupil center, mid-point, minimum distance and dynamic size methods)

The differences among normalization methods using a reference point lie in

the selection of reference point while normalization using minimum distance method

exploits the property of minimum distance between two points. Dynamic size

normalization method depends on the pupil radius and minimum width of the iris. If pupil

and iris centers coincide, then normalization using reference point as pupil center, iris

center, mid-point and minimum distance results in same normalized iris image. Time

consumption depends on the width of the iris in normalization using dynamic size. Time

consumed for dynamic size normalization increases with the increase in width of iris.
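A minimal sketch of the reference-point style of normalization is given below (Python, illustrative only), assuming circular pupil and iris boundaries and the pupil centre as the reference point; the other reference-point variants differ only in which centre the radial sampling starts from, and the 64×64 output size is an illustrative assumption.

    import numpy as np

    def normalize_iris(gray, pupil, iris, out_h=64, out_w=64):
        # unwrap the iris ring into a fixed-size rectangle; pupil and iris are (xc, yc, r)
        px, py, pr = pupil
        ix, iy, ir = iris
        out = np.zeros((out_h, out_w), dtype=gray.dtype)
        thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
        for j, t in enumerate(thetas):
            # boundary points of the iris ring along this direction
            x_in, y_in = px + pr * np.cos(t), py + pr * np.sin(t)
            x_out, y_out = ix + ir * np.cos(t), iy + ir * np.sin(t)
            for i in range(out_h):
                w = i / float(out_h - 1)
                x = (1 - w) * x_in + w * x_out
                y = (1 - w) * y_in + w * y_out
                out[i, j] = gray[int(round(min(max(y, 0), gray.shape[0] - 1))),
                                 int(round(min(max(x, 0), gray.shape[1] - 1)))]
        return out  # rows run from the pupil boundary to the iris boundary

Fixing the output size compensates for the varying width of irises, whereas in the dynamic size method the size of the normalized image depends on the pupil radius and the width of the iris.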

Pupil and iris radii are given in Table 5.11. BATH iris database contains maximum

average iris width. This is the main reason that normalization of iris images using

dynamic size method in BATH database took more time (i.e. 0.107 seconds per image) as

compared to other databases. As the width of irises is almost same in CASIA versions 1.0

and 3.0 iris databases so time required for normalization is almost same i.e. 0.022

seconds per image. Iris width is the smallest in MMU iris database so it took the

minimum time (i.e. 0.007 seconds per image). Time utilized in normalizing an iris image

using iris center as reference point is given in Figure 5.12.


Figure 5.12: Time comparison of normalization using iris center as reference point (CASIA version 1.0: 1.322, CASIA version 3.0: 1.329, BATH: 18.38, MMU: 1.401 seconds)

The average iris radii sizes in CASIA version 1.0, CASIA version 3.0, BATH and MMU

iris databases are 102.21, 101.37, 232.80 and 51.75 pixels respectively. Comparison of

pupil and iris radii is tabulated in Table 5.11. Average width of irises in BATH iris

database is maximum which is 136.46 (232.80 - 96.34) pixels and is minimum in MMU

iris database with only 26.74 (51.75 – 25.01) pixels. Average width of irises in BATH

database is greater than five times the average width of irises in MMU database. CASIA

versions 1.0 and 3.0 have approximately same average iris width.

Table 5.11: Radii of pupil and iris in the databases

Database Name        Pupil Radii (pixels): Average / Minimum / Maximum    Iris Radii (pixels): Average / Minimum / Maximum
CASIA version 1.0    45.90 / 30 / 64           102.21 / 83.35 / 142.92
CASIA version 3.0    42.88 / 24.37 / 91.70     101.37 / 75.73 / 147.96
BATH                 96.34 / 59 / 164          232.80 / 162.28 / 285.61
MMU                  25.01 / 17 / 36           51.75 / 42.49 / 60.82


5.9 Feature Extraction and Matching

Iris images are localized and then normalized using the proposed methods. Features of

normalized iris images are extracted using the methods mentioned in the text and

matching has been carried out. Euclidean distance and Hamming distance have been used

as matching classifiers. Principal Component Analysis, bit planes and wavelets have been

implemented for using them as features of normalized iris image.

5.9.1 Principal Component Analysis

The Principal Component Analysis (PCA) is a way of identifying patterns in data and

expressing the data in such a way as to highlight their similarities and differences. Since

it is hard to find patterns in high-dimensional data, where the luxury of graphical representation is not available, PCA is a powerful tool for analyzing such data. Once patterns

have been extracted from the data and one needs to compress the data (i.e. by reducing

the number of dimensions) without much loss of information, PCA is a good choice for

it. In terms of information theory, the idea of using PCA is to extract the relevant

information in an iris image, encode it as efficiently as possible and compare test iris

encoding with a database of similarly encoded models. A simple approach to extract the

information contained in an image or iris is to somehow capture the variations in a

collection of iris images independent of judgment of features and use this information to

encode and compare individual irises [89].

The main use of PCA is to reduce the dimensionality of a data set while retaining as

much information as possible. It computes a compact and optimal description of the data

set. The first principal component is the combination of variables that explains the

greatest amount of variation. The second principal component defines the next largest

amount of variation and is independent of the first principal component. The mean m of the training set is calculated, and each image is centered by subtracting the mean from it. This produces a dataset whose mean is zero. In the next step, the two-dimensional variance, called the covariance, of this dataset is calculated. As the covariance matrix is a square matrix, its eigenvalues and eigenvectors are calculated, which provide information about patterns in the data. These eigenvalues are ordered from highest to lowest, and the corresponding eigenvectors are ordered similarly, which gives the data components in order of significance. This arrangement of the data allows one to decide to ignore the components of lower significance.

In this way, some information is lost, but if the eigenvalues are small, much information

is not lost and the final dataset will have fewer dimensions than the original. Finally, this

reduced dimension data is transposed so that eigenvectors are put in a row (with most

significant eigenvector at the top) and multiplied by the transpose of centered image. This

new data matrix is projection of iris image in eigeniris space.
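The steps described above correspond to the following minimal sketch (Python, illustrative only; for large images the usual small-sample-size shortcut for computing the eigenvectors is omitted for clarity):

    import numpy as np

    def pca_train(train_images, k):
        # flatten the normalized iris images, subtract the mean, and keep the k
        # eigenvectors of the covariance matrix with the largest eigenvalues
        X = np.asarray([im.ravel() for im in train_images], dtype=float)
        mean = X.mean(axis=0)
        Xc = X - mean
        cov = np.cov(Xc, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)          # covariance matrix is symmetric
        order = np.argsort(vals)[::-1]            # highest variance first
        basis = vecs[:, order[:k]]
        return mean, basis

    def pca_project(image, mean, basis):
        # projection of a normalized iris image into the eigeniris space
        return (image.ravel().astype(float) - mean) @ basis

Matching can then be carried out by comparing the projection of a test image with the stored projections using the Euclidean distance, which is one of the matching classifiers used in this work.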

During the research work, PCA has been implemented and results on mentioned

databases have been obtained. Three different sets of experiments have been carried out.

In the first case, reduction in number of dimensions is varied from 64 to one while

keeping the number of training images constant and effect of dimension reduction is

studied with respect to correct recognition rate. In the second set of experiments, the

numbers of training images are altered while keeping the dimension of PCA constant and

correct iris recognition rate has been determined. In the third set of experiments, numbers

of classes are increased and effect of this increase has been analyzed.

a. Experiment Set 1 (Dimension Reduction)

This set of experiments has been repeated on all the images obtained by all the proposed normalization methods. Experiments have been conducted by reducing the dimensions of the eigenirises, and results are discussed for the case when the number of training images is three.

There are fourteen categories of normalized iris images which are described as follows:

Normalized 1: Normalization of iris images by considering pupil center as a reference

point and without eyelids localization.

Normalized 2: Normalization of iris images by considering the iris center as a reference point

and without eyelids localization.

Normalized 3: Normalization of iris images by taking mid-point of pupil center and iris

center as a reference point and without eyelids localization.

Normalized 4: Normalization of iris images by utilizing the minimum distance between

the iris and pupil boundaries and without eyelids localization.

Normalized 5: Normalization of iris images by dynamic size model and without eyelids

localization.


Similarly the same normalizations have been carried out in Normalized 6 to Normalized

10 with eyelids localization and in Normalized 11 to Normalized 14, normalization of iris

is obtained by using non-circular pupil boundary. Normalization of dynamic size does not

apply with non-circular pupil boundary because in this case size of the normalized image

progresses with the increase of the radius starting from pupil to iris. Therefore, zigzag

boundary of pupil is not considered for this case. Best results have been mentioned in

Table 5.12 for CASIA version 1.0, whereas complete and detailed results of all

normalized methods are given in Appendix I. For CASIA version 1.0, the best result of 59.16% accuracy has been produced using images of category Normalized 2 (i.e.

normalization of iris by considering iris center as reference point and without eyelid

localization) when the dimension of PCA is one. In this case, worst results of 47.23%

have been obtained for iris recognition when 64 vectors for dimensions of PCA are

considered. This shows that as the dimension of PCA reduces, the accuracy of results

increases. This increase is because of the structure of iris in normalized image which is

better separated in a lower dimensional space. With one dimension of PCA, the number of elements in one vector is 64, whereas the number of elements when the dimensions of PCA are 64 is 4096 (64×64). Time utilized to train on the complete database is 1.17 seconds

and recognition takes place in 2.27 seconds for CASIA version 1.0 iris database, when

the number of dimension of PCA is one.

Table 5.12: Iris recognition rate with Normalized 2 using PCA for CASIA version 1.0

Dimensions of PCA    Accuracy    Training Time (seconds)    Recognition Time (seconds)
64    47.23%    3.14    11.24
61    46.89%    2.94    8.72
58    46.89%    2.87    8.31
55    46.72%    2.86    5.82
52    47.06%    2.78    5.75
49    47.06%    2.69    5.36
46    47.73%    2.63    5.3
43    47.73%    2.28    5.06
40    47.9%     2.24    4.8
37    48.07%    2.09    5.03
34    47.9%     2.01    4.85
31    48.4%     1.91    5.63
28    48.57%    1.86    5.4
25    48.07%    1.84    4.13
22    49.24%    1.77    3.88
19    49.08%    1.65    3.59
16    49.75%    1.48    3.18
13    49.08%    1.42    3.19
10    48.91%    1.33    2.91
7     47.9%     1.28    2.63
4     50.76%    1.22    2.46
1     59.16%    1.17    2.27

Results of PCA, in terms of accuracy and time consumption for CASIA version 3.0 are

shown in Figure 5.13. CASIA version 3.0 iris database has 396 classes with a different number of images in each class (ranging from 1 to 26). In these results, only those classes (246) which have seven or more images are included. Three images have been used for training and the remaining images have been used as test images.

Figure 5.13: Results of Normalized 4 using PCA for CASIA version 3.0 iris database (accuracy, training time and recognition time versus dimensions of PCA)

Accuracy of 59.29% has

been achieved with only one vector of PCA and time required to train the database is 3.4

seconds while recognition has been completed in 10.7 seconds. As the numbers of

dimensions for PCA are increased to 64, time required to train the database is 18.98

seconds whereas 82.52 seconds are utilized for recognition. These results are obtained on

normalized 4 category (i.e. Normalization of iris images by utilizing the minimum

distance between the iris and pupil boundaries and without eyelids localization).

Best results for MMU iris data using PCA are given in Table 5.13. The number of training images is kept constant at three. Maximum accuracy of 70.67% with only one PCA vector has been achieved, whereas 62.44% is the minimum iris recognition

rate for this database. Training time and recognition time are increased with the increase

in the dimensions of PCA. This is because of large memory consumption and more

computations for high dimensions of PCA.

Table 5.13: Accuracy with Normalized 2 using PCA for MMU iris database

Dimensions of PCA    Accuracy    Training Time (seconds)    Recognition Time (seconds)
64    62.44%    3.47    5.42
61    62.89%    3.34    3.33
58    62.89%    3.22    4.97
55    63.11%    3.12    4.67
52    62.89%    2.99    4.28
49    62.89%    2.89    2.96
46    63.33%    2.77    4.35
43    63.11%    2.68    4.22
40    63.56%    2.55    3.97
37    63.11%    2.45    3.65
34    63.33%    2.34    3.49
31    63.56%    2.23    2.49
28    63.56%    2.11    2.31
25    62.89%    2.01    2.95
22    63.78%    1.89    2.96
19    63.33%    1.72    2.44
16    63.56%    1.6     1.75
13    64%       1.52    1.71
10    62.89%    1.42    1.63
7     64.44%    1.44    1.42
4     67.11%    1.33    1.46
1     70.67%    1.04    1.26

In case of BATH iris database, normalized 4 performs best with an accuracy of 72.9%.

Time consumed to train the database for three images per class is 0.68 seconds and

recognition of complete database of 1000 images required 4.95 seconds. Results are

shown in Figure 5.14. Database is trained on only 150 images whereas total test images

are 850.

Figure 5.14: Results of Normalized 4 using PCA for BATH iris database (accuracy, training time and recognition time versus dimensions of PCA)

b. Experiment Set 2 (Training Images)

In this set of experiments, numbers of training images are increased gradually to find out

which normalization method performs better in terms of iris recognition accuracy. As

shown in Figure 5.15 for CASIA version 1.0, best results have been achieved using

normalized 2 method (i.e. normalization of iris images using the iris center as reference point without eyelid localization) when the number of images used in training the database is 1, 3 or 4. Accuracy in percentage versus number of training images for all normalization

methods is presented in Figure 5.15. Normalized 1 (i.e. normalization using pupil center

as a reference point without eyelid localization) performs better when the number of

images used in training the database are 2, 5 and 6.

Figure 5.15: PCA using different training image on CASIA version 1.0 (accuracy versus number of training images for Normalized 1 to Normalized 14)

The same set of experiments has also been conducted on CASIA version 3.0. The number of classes included in the experiments is 246; these are the classes which have more than six images. Results of PCA are shown in Figure 5.16. Normalized categories are given on the x-axis whereas accuracy is on the y-axis. Each normalized category has six

bars corresponding to number of training images (from one to six). Normalized category

number 4 (i.e. normalization of iris images by utilizing the minimum distance between

the iris and pupil boundaries and without eyelids localization) has the highest accuracy


for all number of training images. The best accuracy of 91.06% has been achieved when

number of training images is six.

Figure 5.16: PCA using different training image on CASIA version 3.0 (accuracy per normalized category for one to six training images)

Figure 5.17: PCA using different training image on MMU (accuracy per normalized category for one to four training images)


Results of different training images using PCA for MMU and BATH iris databases are

shown in Figure 5.17 and Figure 5.18 respectively. MMU iris database has five images in each folder; up to four images of each class are used in training and the best accuracy

achieved is 86.67% when the normalized category is 2.

A maximum of seven images out of twenty have been used for the BATH iris database to obtain the results using PCA. Accuracy is directly proportional to the number of training images. Best results of 83.5% have been achieved for the normalized 4 category when the number of training images is seven.

Figure 5.18: PCA using different training image on BATH (accuracy per normalized category for one to seven training images)

The results of these experiments show that PCA performs better when normalization

category is 2 (i.e. normalization of iris images by considering the iris center as a reference point

and without eyelids localization) for CASIA version 1.0 and MMU iris datasets and

normalization category 4 (i.e. normalization of iris images by utilizing the minimum

distance between the iris and pupil boundaries and without eyelids localization) for

CASIA version 3.0 and BATH iris databases.


c. Experiment Set 3 (Training Classes)

In this set of experiments, the number of classes is increased only for the normalization categories two (for CASIA version 1.0 and MMU) and four (for CASIA version 3.0 and BATH) while keeping the number of training images constant (three). The results of accuracy, training time and testing time for this set of experiments for all the databases are shown in Figure 5.19, Figure 5.20 and Figure 5.21 respectively. Accuracy of 87.5%, 80%, 71.14% and 67.14% has been achieved for BATH, MMU, CASIA version 1.0 and CASIA version 3.0 respectively when ten classes are used, which decreases to 71.9%, 72.4%, 58.29% and 61.71% when the number of classes reaches 50. This decrease in

accuracy is because of the increase in number of test images.

Figure 5.19: Accuracy of PCA on all databases using three training images (accuracy versus number of classes)

Time consumed for training the PCA using three images of each class is the same for all the databases because the same number of images from each database is used. As the number of classes is increased, the time utilized in training also increases, as shown in Figure 5.20.


[Figure 5.20 plot: PCA training time (s) versus number of classes (10-50) for BATH, CASIA version 1.0, MMU and CASIA version 3.0]

Figure 5.20: Training time of PCA on all databases using three training images

The recognition time for the BATH database is higher than for all other databases because the number of test images in each class is seventeen, whereas in the case of the MMU database it is only two. That is why the MMU iris database consumes the lowest time.

[Figure 5.21 plot: PCA recognition time (s) versus number of classes (10-50) for BATH, CASIA version 1.0, MMU and CASIA version 3.0]

Figure 5.21: Recognition time of PCA on all databases using three training images

It is clear from these results that the best PCA results have been achieved for normalization categories 2 and 4. Therefore, iris images are normalized either using the iris center as a reference point without eyelid localization (category 2), or utilizing the minimum distance between the iris and pupil boundaries without eyelid localization (category 4).


5.9.2 Bit planes

A bit plane (in an image) is a set of bits having the same position in the respective binary numbers. For example, for a 16-bit data representation there are 16 bit planes: the first bit plane contains the set of most significant bits and the 16th contains the least significant bits. It is possible to see that the first bit plane gives the roughest but the most critical approximation. The higher the number of a bit plane, the smaller is its contribution to the final value [91]. Thus, adding bit planes gives a better approximation: each successive bit plane contributes half the value of the previous one, so if a bit is set to 1, half the value of the previous bit plane is added to the final result; otherwise nothing is added.
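As a rough illustration (not taken from the thesis), the following Python/NumPy sketch extracts the bit planes of an 8-bit grayscale image and shows that summing the planes with their binary weights reconstructs the original values; the tiny array img and the numbering convention (k = 0 for the least significant bit) are assumptions for the example only.

import numpy as np

def bit_plane(img, k):
    # Binary image holding bit k of every pixel (k = 0 is the least significant bit).
    return (img >> k) & 1

# Tiny synthetic 8-bit "image" used only to illustrate the decomposition.
img = np.array([[200, 13], [97, 255]], dtype=np.uint8)
planes = [bit_plane(img, k) for k in range(8)]

# Each plane contributes 2**k; adding the weighted planes rebuilds the image exactly.
reconstruction = sum(p.astype(np.uint16) << k for k, p in enumerate(planes))
assert np.array_equal(reconstruction, img)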

In Pulse Code Modulation (PCM) sound encoding, the first bit in a sample denotes the sign of the function, or in other words defines half of the whole range of amplitude values, and the last bit defines the precise value. Replacement of more significant bits results in more distortion than replacement of less significant bits. In lossy media compression that uses bit planes, this gives more freedom to encode the less significant bit planes, while it is more critical to preserve the more significant ones [110].

Bit plane is sometimes used as a synonym for bitmap; technically, however, the former refers to the location of the data in memory and the latter to the data itself. One aspect of using bit planes is determining whether a bit plane is random noise or contains significant information. One method for checking this is to compare each pixel (x,y) to the three adjacent pixels (x-1,y), (x,y-1) and (x-1,y-1). If the pixel is the same as at least two of the three adjacent pixels, it is not noise [111]. The result of a bit plane is a binary image.
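A minimal sketch of this noise check (an illustration, not the thesis code) is given below; treating a plane as informative when at least half of its pixels pass the test is an assumed cut-off.

import numpy as np

def plane_looks_informative(plane, min_fraction=0.5):
    # For every pixel (skipping the first row and column), count how many of the
    # three neighbours (x-1, y), (x, y-1) and (x-1, y-1) carry the same bit value.
    p = plane.astype(np.uint8)
    agree = ((p[1:, 1:] == p[1:, :-1]).astype(int) +
             (p[1:, 1:] == p[:-1, 1:]).astype(int) +
             (p[1:, 1:] == p[:-1, :-1]).astype(int))
    not_noise = agree >= 2                    # pixel matches at least two neighbours
    return not_noise.mean() >= min_fraction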

A binary image is a digital image that has only two possible values for each pixel. Binary

images are also called bi-level or two-level. The names black-and-white (B&W),

monochrome or monochromatic are often used for this concept, but these names may also designate any image that has only one sample per pixel, such as a grayscale image.

Binary images often arise in digital image processing as masks or as the result of certain

operations such as segmentation, thresholding and dithering. Some input/output devices

(such as laser printers, fax machines and bi-level computer displays) can only handle bi-

level images. The interpretation of the pixel's binary value is also device-dependent.

Some systems interpret the bit value of 0 as black and 1 as white, while others use the


reverse for processing of binary images. A binary image is usually stored in memory as a

bitmap, a packed array of bits. Binary images can be interpreted as subsets of the two-

dimensional integer lattice Z2; the field of morphological image processing was largely

inspired by this view.

a. Results on BATH

Iris database of BATH has 1000 images from 50 different eyes. All the images are in

grayscale, bmp format of size 1.2 MB with 1280 x 960 pixels resolution. This database

includes some people wearing lenses. The presented algorithm is equally good for localization of the iris from eye images with lenses, although a lens introduces an extra circle around the iris. The resolution of the images in the database is very high, so discriminative features can also be extracted easily from such images. As the features are the bit planes of the resolved strip and the iris code is in Boolean format, the matching decision is very efficient.

Experiments using the proposed algorithm have been conducted and the results of iris

localization algorithm for the complete database reach up to 99.4% as shown in Table

5.4. Recognition has been obtained in two modes: (1) identification mode, in which

correct recognition rate is calculated and (2) verification mode, in which FAR (False

Accept Rate) and FRR (False Reject Rate) have been measured. Results of identification

for different types of features are given in Table 5.14. Recognition rate is given with

respect to number of enrolled images for training. It is clear from the Table 5.14 that

feature type 4 corresponding to bit plane 5 performs better in first two experiments in

which numbers of enrolled images are one and two. Feature type 3 corresponding to bit

plane 4, gives results closer to feature type 4 (i.e. difference between recognition rate of

the two features in first and second experiment is 0.7% and 0.1% respectively).

Maximum difference with other features in experiment one is 54.2% corresponding to

feature type 1 and in experiment two, maximum difference is reduced to 50.9% which is

also with feature type 1.

Feature type 3 presents best results when number of enrolled images is greater than two

and less than six but when enrolled images are greater than five then both of feature types

3 and 4 give the same highest recognition. Feature type 1 portrays the worst results in


each experiment, as this feature corresponds to bit plane 2, which is next to the least significant bit; this bit plane does not prove to be an appropriate feature because of the very high frequency components in it, which do not capture the discriminative features of the iris. Feature types 3 and 4 perform better than other features because both have middle

frequency components. In case of three and four training images its recognition rates are

96.7% and 99.6% respectively. When the training images are six or more, results of

feature type 3 and 4 remain the same. As the number of enrolled images is increased,

overall recognition rate is increased and difference between the best and worst

recognition rates is decreased. Features based on bit planes 2 to 7 are analyzed so bit

plane 4 is presenting the best results in case of small as well as large number of training

images. While comparing all the features, it has been observed that correct recognition

rate increases when the feature type increases up to feature type 4 and then it decreases

for last two feature types. It can be concluded that feature type 3 and 4 are better than 1,

2, 5, and 6 so corresponding bit planes 4 and 5 have better discriminating factors with

respect to iris images. If the number of enrolled images is 50, then total test images are

950. So, numbers of misclassified irises with respect to feature type 1 to 6 are 584, 213,

49, 42, 94 and 241 respectively. In case of feature types 3 and 4, only six out of twenty

images (i.e. 30.0%, which is less than 42.85% (three out of seven) normally used in the

literature) are used for training to get 96.6% recognition rate. If in training six images of

each eye are used, then feature type 1 to 6 misclassify 317, 61, 4, 4, 29 and 112 irises. In

feature type 3 and 4, only four images are misclassified because these images have

different illumination than those included in training. Therefore, these features are

sensitive to illumination.

Table 5.14: Results of recognition for BATH Iris dataset

Correct Recognition Rate (%) using Feature Types (FT)

Enrolled images  FT1 (bp* = 2)  FT2 (bp* = 3)  FT3 (bp* = 4)  FT4 (bp* = 5)  FT5 (bp* = 6)  FT6 (bp* = 7)
1                41.6           76.9           95.1           95.8           90.6           75.9
2                45.5           88.1           96.3           96.4           93.8           79.3
3                51.5           92.3           96.7           96.4           94.1           82.7
4                56.4           92.2           99.6           96.6           94.2           86.8
5                61.5           92.9           99.6           96.6           94.2           86.7
6                68.3           93.9           99.6           99.6           97.1           88.8

* bp = bit plane

In verification mode, the Receiver Operating Characteristic (ROC) curves are obtained

and are shown in Figure 5.22 for all feature types. The ROC curve plots FAR versus FRR; it identifies the best feature type and shows the overall performance of an algorithm. FAR is the probability that a non-authorized person is accepted as authorized, and FRR is the probability that an authorized user is rejected as non-authorized by the system. Equal

Error Rate (EER) is the point where the ROC curve passes through a line of slope 1 (i.e. the point where FAR is equal to FRR).

Figure 5.22: ROC curves for different features with six enrolled images

In case of six training images, EER (in

percentage) is 0.262, 0.1, 0.049, 0.041, 0.096 and 0.17 for feature type 1 to 6

respectively. It also shows that feature type 4 distinguishes the irises better than that of

other feature types.
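The sketch below (an illustration, not the thesis implementation) shows how FAR, FRR and the EER can be estimated from two hypothetical arrays of normalized Hamming distances, one for genuine (same-eye) comparisons and one for impostor (different-eye) comparisons.

import numpy as np

def far_frr_eer(genuine, impostor, thresholds=np.linspace(0.2, 0.5, 301)):
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    # A comparison is accepted when its distance falls below the threshold.
    far = np.array([(impostor < t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(genuine >= t).mean() for t in thresholds])  # genuines rejected
    i = np.argmin(np.abs(far - frr))          # operating point closest to FAR == FRR
    return far, frr, (far[i] + frr[i]) / 2    # EER approximation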

Based upon these results, feature type 4, corresponding to bit plane 5 of the normalized iris images, outperforms the other bit-plane features. Therefore, the correct iris recognition rate on the other databases is computed using only bit plane 5. Results on different sizes of normalized images, obtained by varying the threshold, for the BATH iris database are given in Appendix II. By threshold, we mean the largest normalized Hamming distance at which two irises are still considered a match. If the normalized Hamming distance between two irises exceeds this threshold, the irises are considered unmatched and from different eyes; the threshold is thus compared directly against the normalized Hamming distance between the two irises.
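A minimal sketch of this decision rule, assuming two binary iris codes of equal shape and ignoring occlusion masks (the 0.47 threshold is simply the example value reported for CASIA version 1.0 below):

import numpy as np

def normalized_hamming(code_a, code_b):
    # Fraction of positions where the two binary iris codes disagree.
    return float(np.mean(code_a != code_b))

def same_iris(code_a, code_b, threshold=0.47):
    # Declared a match only when the distance does not exceed the threshold.
    return normalized_hamming(code_a, code_b) <= threshold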

b. Results on CASIA version 1.0

After obtaining the correct iris recognition rate of 99.6% using bit planes of normalized

iris images of BATH database, the same method has been applied to other databases.

BATH iris database contains very clear and high resolution iris images. Based on the

results of iris recognition using BATH database, bit plane 5 has been selected as Feature

Vector (FV). This FV has been used for obtaining the results on CASIA version 1.0 iris

database. Three normalized images of each class have been used for training and

remaining images have been used as test images. A variation in the size of the normalized

image regarding the width of iris has been carried out in order to study the effect of width

of iris on its recognition rate. Size of each normalized image is 64×256 where 64 and 256

are radial resolution and angular resolution of the iris respectively. The effects of

normalized iris image resolution on CASIA version 1.0 are shown in Table 5.15.

The correct iris recognition rate increases as the iris width increases up to a certain image resolution and then decreases again. Maximum accuracy of 94.11% has been achieved in this scenario when the image resolution is 50×256. It means the FV (i.e. bit plane 5) is affected by the width of the iris image.
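One way to reproduce this width sweep is sketched below, under the assumption that the first rows of the 64×256 normalized strip lie next to the pupil boundary (the row ordering is not spelled out here, so this is only an illustration).

def crop_radial_width(normalized_strip, width):
    # normalized_strip: 64x256 NumPy array (radial x angular).
    # Keep only the innermost `width` rows, assumed to start at the pupil boundary.
    return normalized_strip[:width, :]

# Hypothetical sweep over the resolutions listed in Table 5.15:
# for width in range(40, 65):
#     strip = crop_radial_width(strip_64x256, width)
#     ... extract bit plane 5, match, and record the accuracy ...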


Table 5.15: Effect of image resolution on accuracy on CASIA version 1.0

Experiment No.  Image Resolution  Accuracy  Threshold
1.   40 × 256   91.93%   0.47
2.   41 × 256   91.93%   0.47
3.   42 × 256   91.93%   0.47
4.   43 × 256   92.60%   0.47
5.   44 × 256   92.77%   0.47
6.   45 × 256   92.94%   0.47
7.   46 × 256   93.27%   0.47
8.   47 × 256   93.61%   0.47
9.   48 × 256   93.10%   0.47
10.  49 × 256   93.61%   0.47
11.  50 × 256   94.11%   0.47
12.  51 × 256   93.78%   0.47
13.  52 × 256   93.78%   0.47
14.  53 × 256   93.94%   0.47
15.  54 × 256   93.78%   0.47
16.  55 × 256   93.44%   0.47
17.  56 × 256   93.94%   0.47
18.  57 × 256   93.61%   0.47
19.  58 × 256   93.61%   0.47
20.  59 × 256   93.61%   0.47
21.  60 × 256   93.44%   0.47
22.  61 × 256   93.61%   0.47
23.  62 × 256   93.44%   0.47
24.  63 × 256   93.61%   0.47
25.  64 × 256   93.61%   0.47

The reason for this low-high-low accuracy trend against iris width is that the maximum discriminatory information captured by the FV is obtained when the iris width is 50 pixels. If the width of the iris is less than 50 pixels in the case of CASIA version 1.0, the binary bit plane 5 does not contain the information required for classification; the same is true when the width increases beyond 50 pixels. Complete results with different threshold values at this specific resolution of the normalized iris images are given in Table 5.16. Best results of 94.11% have been achieved with eight false rejects and 27 false accepts.

Table 5.16: Results with 50×256 image resolution on CASIA version 1.0

Threshold  Number of False Reject  Number of False Accept  Accuracy
0.3    296   0    50.25%
0.31   296   0    50.25%
0.32   296   0    50.25%
0.33   296   0    50.25%
0.34   296   0    50.25%
0.35   296   0    50.25%
0.36   295   0    50.42%
0.37   292   0    50.92%
0.38   285   0    52.10%
0.39   270   0    54.62%
0.4    253   0    57.47%
0.41   239   0    59.83%
0.42   215   1    63.69%
0.43   171   1    71.09%
0.44   126   4    78.15%
0.45   75    10   85.71%
0.46   34    20   90.92%
0.47   8     27   94.11%
0.48   2     41   92.77%
0.49   0     44   92.60%

c. Results on CASIA version 3.0

Bit plane five (i.e. feature Type 4) has been used as FV for CASIA version 3.0 and results

with accuracy of 99.64% have been achieved. Results of iris recognition for the database

using bit plane five are shown in Figure 5.23. These results are for the classes which have

seven or more images where three images of each class have been used as training

images and recognition is carried out on remaining images. Change in the normalized iris

image resolution produces the highest accuracy of 99.64% when the iris width is 49 pixels. If the complete image is taken, the result is 99.5% accurate; therefore, the accuracy has increased by 0.14%. The reason for this increase is that the optimal width for iris recognition

which has best discriminating information by using bit plane five is 49 pixels. It means

that iris has more information towards pupil boundary rather than near iris boundary. In

other words, information near iris boundary is not useful for classification because iris

muscles are connected in that portion.

Complete results using a normalized image width of 49 pixels and bit plane five as the FV for CASIA version 3.0 are given in Table 5.17. These results have been obtained by changing the threshold and calculating the FRR, the FAR and the total number of errors.


[Figure 5.23 plot: iris recognition accuracy (%) versus iris width (40-64 pixels) on CASIA version 3.0 using bit plane 5]

Figure 5.23: Results of iris recognition on CASIA version 3.0 using bit plane 5

A maximum iris recognition rate of 99.64% has been achieved with FRR and FAR of 0.001% and 0.002% respectively. This indicates that the information for classification lies in the pupillary part of the iris; that is, only 49/64 × 100 = 76.5% of the iris width is sufficient to obtain a reasonable recognition accuracy. In other words, if the outer quarter of the iris is occluded by eyelids or eyelashes, an accuracy of more than 99.6% can still be achieved.

Table 5.17: Result of CASIA version 3.0 when normalized iris width is 49 pixels

Threshold  False Reject Rate (%)  False Accept Rate (%)  Accuracy (%)
0.22   0.002143   0.002143   99.57
0.23   0.002143   0.002143   99.57
0.24   0.002143   0.002143   99.57
0.25   0.001429   0.002143   99.64
0.26   0.001429   0.002857   99.57
0.27   0.001429   0.003571   99.5
0.28   0.001429   0.003571   99.5
0.29   0.001429   0.005      99.35
0.3    0.001429   0.006429   99.21
0.31   0.001429   0.007143   99.14
0.32   0.001429   0.013571   98.5
0.33   0.001429   0.020714   97.78
0.34   0.001429   0.027857   97.07
0.35   0.001429   0.043571   95.5
0.36   0.001429   0.057857   94.07
0.37   0.000714   0.074286   92.5
0.38   0.000714   0.105714   89.35
0.39   0.000714   0.135714   86.35


d. Results on MMU

Experiments using bit plane five as feature vector have been conducted on MMU iris

database. Three images of each eye have been used for training and remaining images

have been utilized as test images. Two sets of experiments have been applied to this

dataset. In the first set, database is trained with enrollment of three images of each class.

Effects of variation of iris width and change in threshold value have been studied. In the

second set of experiments, database is trained with three images of the same class and

average of the three trained images is also included as another training image. The

results of correct iris recognition against iris width are shown in Figure 5.24. Accuracy of

96.66% has been achieved using three training images when iris width is 57 pixels (i.e.

resolution of normalized image is 57×256) at a threshold of 0.43. Addition of average of

the three training images improves the overall accuracy of iris recognition system from

96.66% to 97.55%. In general, accuracy for MMU iris database is increased at each iris

width; a minimum increase of 0.67% in the accuracy has been noted for two iris widths, i.e. 52 pixels and 54 pixels, whereas a maximum increase of 1.55% in accuracy is observed when the width of the iris is 58 pixels.

[Figure 5.24 plot: iris recognition accuracy (%) versus iris width (50-64 pixels) on MMU using bit plane 5, with and without the average training image]

Figure 5.24: Iris recognition rate using bit plane 5 on MMU iris database

An important point regarding the width of the iris

discussed above is the width of iris from the normalized iris image and not the actual

width of iris.

The average width of iris in MMU iris database is 26.74 (51.75 - 25.01) pixels as given in

Table 5.11. Details of the iris recognition results for the second set of experiments are shown in

Table 5.18.

Table 5.18: Results of iris recognition with image resolution 58×256 on MMU

Threshold FRR (%) FAR (%) Accuracy (%)

0.3    31.77   31.77   68.22
0.31   30.88   30.88   69.11
0.32   28.22   28.22   71.77
0.33   25.11   25.11   74.88
0.34   23.11   23.11   76.88
0.35   20.22   20.22   79.77
0.36   17.11   17.11   82.88
0.37   13.77   13.77   86.22
0.38   11.55   11.55   88.44
0.39   7.55    7.55    92.44
0.4    5.77    5.77    94.22
0.41   3.55    3.33    96.44
0.42   2.44    1.77    97.56
0.43   2.88    1.11    97.11
0.44   4.22    0.44    95.78
0.45   5.11    0       94.89
0.46   6.44    0       93.56
0.47   6.44    0       93.56
0.48   6.44    0       93.56
0.49   6.44    0       93.55

5.9.3 Wavelets

Experiments have been conducted with different wavelets. Optimal features have been determined using the Daubechies 2 wavelet on CASIA version 1.0, and these features are then used to obtain the results with other wavelets. In all these experiments, the wavelet coefficients are quantized. As not all the coefficients of a wavelet transform carry the information required for recognition, coefficient optimization has been carried out by defining a threshold value. This threshold value is defined in such a way that image quality and the coefficients required for recognition are not compromised. The coefficients below this threshold are made zero and those above it are made one, which helps in reducing the overall computational burden. The threshold for the wavelet coefficients is zero: all


the values less than zero are made zero and positive values are made one. After this

process, each value in FV is either zero or one which makes it binary.

a. Results on CASIA version 1.0 using Daubechies 2

Many different combinations of FV have been used to find the best features. When an

image is decomposed using wavelet of level one, it is converted into four sub-images (i.e.

Approximation Coefficients (AC), Horizontal Details (HD), Vertical Details (VD) and

Diagonal Details (DD)). For further decomposition to level two, AC of level one is used

as image which is decomposed to obtain four sub-images of level two. Similarly, AC of

level two is used for decomposition into level three and so on. Results of iris recognition

on CASIA version 1.0 using Daubechies 2 have been given in Figure 5.25 with many

combinations of FVs. Different FVs are used for obtaining the results using two types

(original and enhanced) of normalized images.
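A minimal sketch of this feature extraction is given below, assuming the PyWavelets package is available; the zero threshold that binarizes the coefficients follows the description above, while the function name and the use of wavedec2 are illustrative choices rather than the thesis code.

import numpy as np
import pywt

def wavelet_fv(normalized_strip, wavelet='db2', level=3):
    coeffs = pywt.wavedec2(normalized_strip.astype(float), wavelet, level=level)
    # coeffs = [AC_level, (HD_level, VD_level, DD_level), ..., (HD_1, VD_1, DD_1)]
    hd3, vd3, _dd3 = coeffs[1]                    # detail sub-bands of the deepest level
    fv = np.concatenate([hd3.ravel(), vd3.ravel()])
    return (fv > 0).astype(np.uint8)              # positive coefficients -> 1, others -> 0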

[Figure 5.25 plot: iris recognition accuracy (%) on CASIA version 1.0 using Daubechies 2 for the different feature vectors (AC 3, VD 3, HD 3, DD 3 and their combinations, plus VD 2, HD 2 and HD 2, VD 2), for original and enhanced images]

Figure 5.25: Results of iris recognition using Daubechies 2 on CASIA version 1.0

Enhancement is carried out by subtracting background from original normalized image.

Decimation algorithm is applied with decimation factor 16 to find the background of the


normalized image. To make the size of both images the same, the decimated image is resized to the size of the normalized image and the subtraction of the images is carried out.
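A sketch of this enhancement step is given below; block averaging for the decimation and nearest-neighbour resizing are stand-ins chosen for the illustration, and the strip is assumed to be evenly divisible by the decimation factor.

import numpy as np

def enhance_strip(strip, factor=16):
    img = strip.astype(float)
    h, w = img.shape
    # Estimate the background: average over factor x factor blocks (decimation).
    coarse = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    # Resize the coarse background back to the original size and subtract it.
    background = np.kron(coarse, np.ones((factor, factor)))
    return img - background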

The FVs used for recognition are AC 3 (i.e. Approximation Coefficients of level 3), VD 3 (i.e. Vertical Details of level 3), HD 3 (i.e. Horizontal Details of level 3), DD 3 (i.e. Diagonal Details of level 3) and so on. When two or more FVs are combined (e.g. AC 3 and HD 3), this means concatenation of the vectors AC 3 and HD 3. Similarly, the other FVs are given in Figure 5.25. Best results of 99.33% have been achieved with the combination of HD 3 and VD 3 when the number of training images is three out of seven for each iris. The same iris recognition accuracy has been obtained when the FV is selected as the concatenation of AC 3, HD 3 and VD 3.

[Figure 5.26 plot: iris recognition accuracy (%) on CASIA version 1.0 using Daubechies 2 with the average of the training images included, for the same set of feature vectors, for original and enhanced images]

Figure 5.26: Results of iris recognition including average training images


The reason for getting best results with combination of HD and VD is that the features in

the normalized iris images are placed in horizontal and vertical directions. The reason of

minimum accuracy when using original images with FV AC 3 is that these coefficients

are the low frequency components of level 3 and low frequency values do not contain

discriminating information because the patterns of iris are best described by middle

frequency components.

Same FVs are used to find the accuracy of iris recognition when average of the training

images is also included as a training image. This process is also repeated with enhanced

images. Results in graphical form are presented in Figure 5.26. Minimum and maximum

correct iris recognition rates for CASIA version 1.0 using original normalized images are

54.62% and 99.33% respectively. These results are corresponding to AC 3 and [AC 3,

HD 3, VD 3]. When normalized images are enhanced and same process of training is

applied (i.e. three images of each iris and average of these three images is included in

training) then optimum correct iris recognition rates of 93.61% and 98.99% have been

achieved corresponding to DD 3 and [AC 3, VD 3] respectively. FV with combination of

HD 3 & VD3 has presented accuracy of 98.82% which is only 0.17% less than the

maximum accuracy. Based upon the results obtained using different combinations of features, [HD 3, VD 3] gives the best results. Therefore, the concatenation of HD 3 and VD 3 is used to find the iris recognition results with the other wavelets.

b. Results using other wavelets on CASIA version 1.0

Best results after applying different wavelets are given in Table 5.19. All the results have

been obtained by including FV which is combination of horizontal and vertical details of

level three [HD 3, VD 3] for the different wavelets. Resolution of the normalized images

against best accuracy and corresponding threshold values are also given. Length of FV

and time consumed to complete the results for 34 different resolutions (i.e. from 31×256

to 64×256) are also presented in the Table 5.19. Applied wavelets include Haar,

Daubechies 2, Daubechies 4, Daubechies 6, Daubechies 8, Daubechies 10, Biorthogonal

5.5, Biorthogonal 6.8, Coiflet 1, Coiflet 3, Coiflet 4, Coiflet 5, Symlet 2, Symlet 4,

Symlet 8 and Mexican Hat.


Table 5.19: Results of iris recognition with different wavelets on CASIA version 1.0

S. No.  Wavelet  FV  Resolution (pixels)  Accuracy (%)  Threshold  FV Length (elements)  Time (sec.)

1. Haar HD 3, VD 3 49×256 98.82 0.35 448 284.11

2. Db2 HD 3, VD 3 55 × 256 99.33 0.34 612 615.00

3. Db2 HD 3, VD 3, DD3 41 × 256 99.33 0.38 714 633.24

4. Db4 HD 3, VD 3 54 × 256 98.15 0.30 912 466.21

5. Db6 HD 3, VD 3 48× 256 97.98 0.38 1230 585.32

6. Db8 HD 3, VD 3 47× 256 98.49 0.35 1710 733.88

7. Db10 HD 3, VD 3 31 × 256 98.82 0.39 1920 892.92

8. Bior5.5 HD 3, VD 3 45 × 256 97.48 0.34 1230 906.95

9. Bior6.8 HD 3, VD 3 45 × 256 97.31 0.36 1840 1024.35

10. Bior6.8 AC 3 HD 3, VD 3 48 × 256 98.49 0.39 2760 1069.04

11. Bior6.8 HD 3, VD 3, DD3 44 × 256 98.32 0.39 2760 1114.53

12. Coif1 HD 3, VD 3 45 × 256 98.66 0.40 720 295.34

13. Coif3 HD 3, VD 3 50 × 256 97.82 0.45 3800 1054.43

14. Coif3 HD 3, VD 3 45 × 256 98.49 0.37 1840 1025.47

15. Coif4 HD 3, VD 3 48 × 256 98.66 0.4 2704 1210.32

16. Coif5 HD 3, VD 3 46 × 256 99.49 0.4 3534 1438.16

17. Sym2 HD 3, VD 3 55 × 256 98.66 0.34 612 616.55

18. Sym4 HD 3, VD 3 43 × 256 97.98 0.36 760 290.13

19. Sym8 HD 3, VD 3, DD3 49 × 256 98.49 0.37 2565 818.60

20. Mexican Hat HD 2, VD2 32 × 256 97.82 0.46 8192 990.19

After Image Enhancement

S. No.  Wavelet  FV  Resolution (pixels)  Accuracy (%)  Threshold  FV Length (elements)  Time (sec.)

1. Haar HD 3, VD 3 60×256 98.82 0.33 512 322.60

2. db2 HD 3, VD 3 46 × 256 99.33 0.39 612 196.60

3. db2 HD 3, VD 3, DD3 37 × 256 99.33 0.41 714 210.11


4. db4 HD 3, VD 3 35 × 256 98.66 0.37 912 428.81

5. db6 HD 3, VD 3 45× 256 98.66 0.40 1230 551.99

6. db8 HD 3, VD 3 43× 256 98.99 0.40 1620 696.16

7. db10 HD 3, VD 3 30 × 256 98.82 0.39 1920 457.30

8. bior5.5 HD 3, VD 3 53 × 256 97.82 0.35 1230 391.98

9. bior6.8 HD 3, VD 3 33 × 256 97.82 0.37 1748 527.44

10. bior6.8 AC 3 HD 3, VD 3 38 × 256 96.97 0.25 2622 578.31

11. bior6.8 HD 3, VD 3, DD3 45 × 256 98.32 0.39 2760 577.90

12. coif1 HD 3, VD 3 39 × 256 98.82 0.40 648 239.78

13. coif3 HD 3, VD 3 51 × 256 98.15 0.45 3800 562.20

14. coif3 HD 3, VD 3 35 × 256 98.82 0.37 1748 543.64

15. coif4 HD 3, VD 3 30 × 256 98.82 0.38 2392 718.70

16. coif5 HD 3, VD 3 32 × 256 99.66 0.39 3306 964.85

17. sym2 HD 3, VD 3 46 × 256 98.82 0.39 544 195.53

18. sym4 HD 3, VD 3 43 × 256 98.49 0.37 836 281.43

19. sym8 HD 3, VD 3, DD3 48 × 256 98.49 0.4 2565 406.72

20. Mexican Hat HD 2, VD2 37 × 256 98.32 0.46 9472 1028.12

Optimum features have been evaluated for these wavelets. The best iris recognition rate of 99.49% has been achieved using Coiflet wavelets. This high iris recognition accuracy corresponds to a normalized iris image resolution of 46×256 pixels and an FV length of 3534 elements. The time utilized for the complete database with thirty-four different resolutions is 1438.16 seconds. For each resolution, the average time consumed is 42.29 (= 1438/34) seconds, and per image it is further reduced to 0.07 seconds. This is the average time (per image) for training the database and recognizing the test images. When the same wavelets are applied after enhancing the images, the results are also improved and the best iris recognition rate of 99.66% has been achieved using the Coiflet 5 wavelet. In this case, only about 50% of the normalized iris width has been used and the size of the images for finding the FV is smaller. Therefore, the length of the FV (3306 elements) is 228 elements less than the length of the FV without image enhancement. Similarly, the time


consumed while getting best results with smaller normalized images is also reduced to

964.85 seconds from 1438.16 seconds.

Maximum accuracy of 98.82% has been obtained by using Haar wavelets. Length of FV

is smaller due to the nature of the Haar wavelet. It is the only wavelet which produces its best results with a relatively large iris width of 60 pixels. Among the Daubechies

wavelets, Daubechies 2 wavelet performs better than others with best iris recognition

accuracy of 99.33% with two combinations of FVs (i.e. HD 3, VD 3 and HD 3, VD 3,

DD 3). The length of FV [HD 3, VD 3, DD 3] is 714 elements which is larger than 612

elements. Using biorthogonal wavelets, the best accuracy of 98.49% has been attained with the FV combination of AC 3, HD 3, VD 3. Results of the Coiflet wavelets have already been

discussed. In case of Symlet wavelets, application of Symlet 2 presented the best results

with iris recognition accuracy of 98.82% with relatively smaller FV of 544 elements.

The Mexican hat wavelet is also applied to the CASIA version 1.0 iris database. Results of iris recognition obtained using this wavelet are more than 97.3% with original images

and when images are enhanced by subtracting the background the accuracy is improved

to 98.32%.

The same experiments have been conducted with a little variation in the training set.

Average of the three training images is also included as a training image in the database.

Consequently, one iris image is added to the enrolled images of each iris. This process is also repeated after enhancing the images, and the results of these experiments are given in Table 5.20. On observing these results, it is concluded that adding the average image to the training set improves the overall results, which are raised further when the normalized images are enhanced. The qualitative behaviour of the results is almost the same as that of the results obtained without including the average of the training images in the training process.

Same six wavelets with their different variations are applied for this set of experiments.

In most of the cases, FV is combination of Horizontal and Vertical details of level three.

Resolution of normalized iris images ranges (row-wise) from 30 pixels to 64 pixels and it

is maintained to find the best iris width. As mentioned earlier, elements of FV are zero or

one, so making the FV binary reduces the computational time. Time utilized for all these


resolutions in training and testing processes is presented in the last columns of Table 5.19

and Table 5.20.

Table 5.20: Iris recognition results on CASIA version 1.0 including average image

S. No.  Wavelet  FV  Resolution (pixels)  Accuracy (%)  Threshold  FV Length (elements)  Time (sec.)
1.   Haar         HD 3, VD 3          63×256   98.82   0.36   512    404.26
2.   db2          HD 3, VD 3          45×256   98.49   0.36   544    707.43
3.   db2          HD 3, VD 3, DD 3    32×256   98.32   0.4    612    739.80
4.   db4          HD 3, VD 3          53×256   98.15   0.30   912    672.71
5.   db6          HD 3, VD 3          32×256   97.82   0.37   1066   794.16
6.   db8          HD 3, VD 3          48×256   98.66   0.38   1710   1033.43
7.   db10         HD 3, VD 3          31×256   98.66   0.39   1920   1006.85
8.   bior5.5      HD 3, VD 3          45×256   97.31   0.34   1230   1016.36
9.   bior6.8      HD 3, VD 3          30×256   97.31   0.34   1656   1178.91
10.  bior6.8      AC 3, HD 3, VD 3    48×256   98.49   0.39   2760   1237.85
11.  bior6.8      HD 3, VD 3, DD 3    45×256   97.98   0.39   2760   1259.76
12.  coif1        HD 3, VD 3          47×256   98.82   0.40   720    286.10
13.  coif3        HD 3, VD 3          51×256   98.32   0.44   3800   1230.45
14.  coif3        HD 3, VD 3          55×256   98.49   0.36   1932   1186.10
15.  coif4        HD 3, VD 3          47×256   98.49   0.4    2704   1400.07
16.  coif5        HD 3, VD 3          45×256   99.66   0.39   3534   1689.90
17.  sym2         HD 3, VD 3          45×256   98.49   0.36   544    713.91
18.  sym4         HD 3, VD 3          46×256   98.15   0.38   836    306.67
19.  sym8         HD 3, VD 3, DD 3    49×256   98.49   0.37   2565   958.91
20.  Mexican Hat  HD 2, VD 2          35×256   97.98   0.46   8960   1439.47

After Image Enhancement

1.   Haar         HD 3, VD 3          63×256   99.16   0.37   512    455.31
2.   db2          HD 3, VD 3          37×256   99.33   0.38   476    230.98
3.   db2          HD 3, VD 3, DD 3    37×256   99.33   0.41   714    250.63
4.   db4          HD 3, VD 3          50×256   98.82   0.37   912    629.86
5.   db6          HD 3, VD 3          32×256   98.66   0.39   1148   837.93
6.   db8          HD 3, VD 3          46×256   99.16   0.39   1620   1076.61
7.   db10         HD 3, VD 3          50×256   98.82   0.4    2112   522.78
8.   bior5.5      HD 3, VD 3          44×256   97.65   0.34   1230   454.28
9.   bior6.8      HD 3, VD 3          34×256   98.15   0.36   1748   622.23
10.  bior6.8      AC 3, HD 3, VD 3    39×256   97.14   0.24   2622   676.25
11.  bior6.8      HD 3, VD 3, DD 3    45×256   98.15   0.39   2760   720.39
12.  coif1        HD 3, VD 3          39×256   99.16   0.39   648    297.84
13.  coif3        HD 3, VD 3          51×256   98.15   0.45   3800   669.03
14.  coif3        HD 3, VD 3          43×256   98.82   0.38   1840   625.75
15.  coif4        HD 3, VD 3          41×256   98.66   0.4    2600   843.38
16.  coif5        HD 3, VD 3          43×256   99.83   0.34   3420   1095.35
17.  sym2         HD 3, VD 3          37×256   98.66   0.38   476    231.23
18.  sym4         HD 3, VD 3          45×256   98.49   0.38   836    317.57
19.  sym8         HD 3, VD 3, DD 3    52×256   98.66   0.39   2565   469.47
20.  Mexican Hat  HD 2, VD 2          37×256   98.66   0.46   9472   1494.26

Haar wavelet performs better when almost all of the iris width (i.e. 63 rows out of 64) is used. Its best iris recognition accuracies of 98.82% and 99.16% have been observed without and with enhancement of the iris images respectively. Among the Daubechies wavelets, Daubechies 8 outperforms the other Daubechies wavelets with a highest accuracy of 99.16% when results are obtained using enhanced normalized iris images. All the Daubechies results have accuracy above 97.8%. The information discrimination power of the Daubechies 10 wavelet is very high because it uses less than half of the iris width for an accuracy of 98.82%. Daubechies 6 also utilizes 50% of the normalized iris image and performs well, with accuracies of 97.82% and 98.66% without and with enhancement of the images respectively. The minimum FV length among all the wavelets is obtained by Daubechies 2 and Symlet 2, but the results of the Symlet wavelets are lower than those of the Daubechies wavelets. Similarly,

Mexican hat and Biorthogonal wavelets provide good information discrimination

capacity but Coiflet 5 wavelet gives the best results with highest iris recognition accuracy

of 99.83% with image enhancement and 99.66% with using the original images. Coiflet is

a discrete wavelet which is more symmetrical than the Daubechies wavelet. This makes

Coiflet the right choice for iris recognition. Complete results with Coiflet 5 wavelets on

CASIA version 1.0 are given in Table 5.21. It uses only 67.18% of the normalized iris

width. Only one image is false rejected whereas no false accept is noted when threshold

value is 0.34. False reject decreases and false accept increases with the increase in the

threshold value.


Table 5.21: Results with Coiflet 5 wavelet at image resolution 43×256

Threshold  False Reject  False Accept  FRR (%)  FAR (%)  Accuracy (%)
0.3    38   0    6.39   0.00   93.61
0.31   31   0    5.21   0.00   94.79
0.32   15   0    2.52   0.00   97.48
0.33   3    0    0.50   0.00   99.50
0.34   1    0    0.17   0.00   99.83
0.35   1    5    0.17   0.84   98.99
0.36   0    9    0.00   1.51   98.49
0.37   0    14   0.00   2.35   97.65
0.38   0    14   0.00   2.35   97.65
0.39   0    14   0.00   2.35   97.65
0.4    0    14   0.00   2.35   97.65

In view of the above, Coiflet 5 wavelet with FV concatenation of HD 3 and VD 3 is the

best wavelet for iris recognition. Same wavelet with same FV is applied to other iris

databases and results have been obtained.

[Figure 5.27 plot: ROC curve (FRR versus FAR) for the Coiflet 5 wavelet at image resolution 43×256, with an inset magnifying the region near the origin]

Figure 5.27: ROC using Coiflet 5 wavelets for CASIA version 1.0


ROC using Coiflet 5 wavelet has been obtained as shown in Figure 5.27 and EER of

0.0017 is achieved.

c. Results on CASIA version 3.0

Coiflet 5 wavelet has been applied to find results of iris recognition on CASIA version

3.0 iris image database and the results are shown in Figure 5.28. In the first experiment, three out of seven images of each class are used to train the database. Enhanced images have been used in the second experiment. The average of the three training images is included as a training image in the third experiment, whereas enhanced normalized images are used in the fourth experiment. The maximum iris recognition accuracy of 96.59% has been achieved on CASIA version 3.0. The main reason the results stay below 97% is that a large number of images in this dataset are blurred or defocused.

[Figure 5.28 plot: iris recognition accuracy (%) for experiment numbers 1-4 on CASIA version 3.0 using the Coif5 wavelet]

Figure 5.28: Iris recognition results on CASIA version 3.0 using Coiflet 5 wavelet

d. Results on MMU

Coiflet 5 wavelet is used to find the iris recognition rate on MMU iris database with FV

combination of horizontal and vertical details of level three. Four types of experiments

have been conducted. In the first experiment, three original iris images of each class are


used for training and remaining images are used as test images. In second experiment,

first experiment is repeated with enhanced normalized iris images. Third experiment

includes average of the three training images as an enrolled image and remaining images

have been used as test image. Fourth experiment has been conducted by using the

enhanced images after background subtraction. Iris recognition rate of 98.22% has been

achieved for first experiment and remaining experiments resulted with an accuracy of

98.44% as shown in Figure 5.29. The difference between original and enhanced

normalized iris images appears in terms of threshold values which changes from 0.4 to

0.32. Length of FV in all the experiments is 3534 elements and best results have been

achieved with resolution (number of rows) of normalized image from 46 pixels to 50

pixels.

[Figure 5.29 plot: iris recognition accuracy (%) for experiment numbers 1-4 on MMU using the Coif5 wavelet]

Figure 5.29: Results of Coiflet 5 wavelet on MMU iris database

e. Results on BATH

Coiflet 5 wavelet performs best on this database with 100% iris recognition rate. With

three training images, in all the conducted experiments (like with and without

enhancement of images, including average of training images in training process), 100%

accuracy has been achieved as shown in Figure 5.30. In this case, threshold value

decreases from 0.32 to 0.3 after enhancement of images. Length of binary FV is 3306

elements in all experiments. Only 30 pixels width of iris is necessary for obtaining the

best results. It indicates that after normalization, less than 50% of the image is sufficient to get very high iris recognition rate.

[Figure 5.30 plot: iris recognition accuracy (%) for experiment numbers 1-4 on BATH using the Coif5 wavelet, 100% in every experiment]

Figure 5.30: Results of Coiflet 5 wavelet on BATH iris database

Therefore, if half of an iris is occluded by eyelids,

then this iris can also be identified correctly. Similarly, localization of eyelids is an

overhead if it covers less than half of the iris.


Chapter 6: Conclusions and Future Research Work

With the current stress on security and surveillance, intelligent personal identification

has been an important consideration. Iris has been widely studied for personal

identification because of its extraordinary structure and non-touch capturing mode. Iris

has proved to be the most reliable and accurate among the biometric traits. The main

components of iris recognition system consist of image acquisition, iris localization,

feature extraction and matching.

6.1 Design & Implementation Methodologies

Normally iris localization takes more than half of the total time used in order to

recognize a person through iris recognition system. System is designed in such a way that

maximum accuracy of the localization is achieved. Iris is localized by first finding the

boundary between pupil and iris by different methods for different databases. Different

methods have been implemented for pupil localization because different databases have

different image capturing devices under different environments (illumination conditions).

Irregular boundary of pupil has been obtained by using circular boundary of pupil. Each

iris image has three prominent areas; (a) pupil, (b) iris and (c) sclera and eyelids. While

inspecting the histogram of an iris image, it has been observed that in general, it has three

overlapping parts. First part has the information of pupil, second part is related to iris and

last part corresponds to sclera and outer part of iris. In order to localize iris, a new

method has been designed and implemented based on the gradient in intensity values.

This method performs well on all the databases.

After localizing the iris, next step is to compensate for the variation in size of the iris

due to camera to eye distance and pupil dilation and constriction. For normalizing the iris,

five different methods have been implemented. Four out of five methods depend on the

selection of reference point (e.g. pupil center, iris center, mid-point of pupil and iris

centers) whereas the last method depends on the width of the iris.

For feature extraction, bit plane and different combination of wavelets coefficients

have been investigated in order to obtain maximum accuracy. Coefficients of Haar,

Daubechies, Symlet, Coiflet, Biorthogonal and Mexican hat wavelets have been used. In


addition, width of the iris is varied from thirty one to sixty four pixels to find out its

effect on iris recognition.

6.2 Performance of the Developed System

In this thesis, mainly the performance of iris localization methods on different

datasets has been analyzed. A point inside the pupil is obtained to find the location of the

pupil in the image. 100% results have been achieved to correctly observe the point in

CASIA version 1.0, BATH and MMU iris databases. In case of CASIA version 3.0, this

point has been detected accurately in 99.93% images.

The exact boundary of the pupil is obtained by a divide and conquer rule. A specified number of points is selected radially. These points are repositioned with respect to the

maximum gradient and then linearly joined together to obtain exact boundary of the

pupil. The worst result attained for complete correct pupil localization is 99.3% on

CASIA version 3.0 and the best result of 99.8% for CASIA version 1.0 has been

achieved.
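A simplified sketch of this refinement step (illustrative only; the number of radial points, the search window around the circular estimate and the absence of bounds checking are all assumptions):

import numpy as np

def refine_pupil_boundary(gray, cx, cy, r, n_points=64, window=10):
    boundary = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False):
        radii = np.arange(r - window, r + window)
        xs = np.round(cx + radii * np.cos(theta)).astype(int)
        ys = np.round(cy + radii * np.sin(theta)).astype(int)
        profile = gray[ys, xs].astype(float)          # 1-D intensity signal along the radius
        k = int(np.argmax(np.abs(np.diff(profile))))  # position of the largest intensity jump
        boundary.append((xs[k], ys[k]))
    return boundary  # points are then linearly joined to form the pupil contour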

For outer iris boundary, a band is calculated within which iris outer boundary lies.

One dimensional signals are picked along radial direction from the determined band in a

sequence at different angles to obtain the outer circle of the iris. Redundant points are

discarded by finding certain distance from the center of the pupil to the point. This is

because the distance between the center of the pupil and the center of the iris is very small. The domain for the different directions is the left and right lower half quadrants when the pupil center is at the

origin of the axes. This proposed method performs very well on all the databases and the

highest accuracy is 99.7% on MMU version 1.0 and the lowest accuracy is 99.21% on

CASIA version 3.0 iris image databases. Whereas, the results of correct iris localization

on CASIA version 1.0 and BATH iris databases are 99.6% and 99.4% respectively.

Eyelids are detected by fitting parabolas using points satisfying different criteria.

Experimental results show that the proposed method is most effective on CASIA version

1.0. The results with accuracy of 98.91% for upper eyelid and 97.8% for lower eyelid

have been obtained. The results of upper eyelid localization have been achieved with

accuracy of 84.5%, 84.66% and 90.02% for BATH, MMU and CASIA version 3.0 iris

image databases respectively. In case of lower eyelid localization, the correct localization


outcomes have been attained up to 96.22%, 96.6% and 91.9% for MMU, BATH and

CASIA version 3.0 iris datasets respectively.

Five different normalization methods have been proposed and implemented termed

as: (1) normalization of iris using a reference point as pupil center, (2) iris center, (3)

mid-points of iris and pupil centers, (4) normalization using minimum distance and (5)

dynamic size normalization. The results of these normalized images have been analyzed.

Minimum time consumed for normalization is 0.007 seconds per image for MMU iris

database with dynamic size normalization method and 18.38 seconds is maximum time

utilized in normalization for BATH iris database using normalization via reference point

as iris center. Time consumed for each image of every database in normalization via

reference point as pupil center is 0.05 seconds and for normalization using mid-point of

iris & pupil centers as reference point is 0.07 seconds per image. Minimum distance

normalization method consumes 0.033 seconds per image for every dataset.

Bit planes have been used as features of the normalized iris images. Experiments on

bit plane two to seven have been conducted and best results obtained are on bit plane

five. Correct iris recognition rate of up to 99.64% has been achieved using CASIA

version 3.0. Results on other databases have also given encouraging performance with

accuracy of 94.11%, 97.55% and 99.6% on CASIA version 1.0, MMU and BATH iris databases respectively.

Different wavelets transforms have been used for iris recognition. Best feature vector

is determined by analyzing a large number of features. Selected feature vector is

combination of horizontal and vertical details of level three. Coiflet 5 wavelet

outperforms all the wavelets. Best iris recognition accuracies of 99.83%, 96.59%, 98.44%

and 100% have been achieved on CASIA version 1.0, CASIA version 3.0, MMU and

BATH iris databases respectively.

6.3 Future Research Work

Research in the following directions can enable the researchers to make an error free

human iris identification system.

Images acquired from the cameras should be checked in iris image quality phase. Iris

image quality can be determined by evaluating certain parameters like focus, occlusion,

area of iris, lighting, image capturing environment and other factors. System performance


can be improved by using a quality metric in the matching or by omitting the poor quality

images. There is no generally accepted measure of iris image quality. Thus, iris image

quality metric can be found.

Iris localization is a very active research area. Many methods have been proposed to segment the iris in the images. Two segmentation topics to research further are as follows: the first is that pupil and iris boundaries cannot be approximated as circles when the images are acquired off angle or when the acquired eye is not orthogonal to the capturing device; the second is the segmentation of the iris from noisy parts of the eye such as eyelids, eyelashes, specular reflections and head hair, particularly of stylish females. In case of occlusion of the iris by the mentioned noises, iris localization is a real challenge.

Many feature extraction methods have been proposed by different researchers for

analyzing iris textures but there is no general agreement on which form of features gives

the best results. To find the features or combination of different features which perform

best is another possible area of future research.

Recognition of human beings using iris images of high resolution while the object is

on the move is another area of research. Video of the object can be acquired and frames

in which iris images are clear can be used for recognition.

Iris as a biometric cannot be used in eyes with many diseases like cataract, glaucoma,

albinism, aniridia etc. To identify people with such diseases multimodal biometrics

systems are needed. Therefore, it is recommended to research on multi biometrics

technologies using different combinations of biometrics like iris and face, iris and ear, iris

and fingerprint, iris and hand geometry etc. This will not only accommodate the people

with diseases as mentioned above but will also improve the results of the system and save

the system from intruders, spoof attacks etc. Some researchers [18, 112-114] have

already worked in this direction but a complete system is still the need of the day.


Appendix I

Results of PCA for database CASIA version 1.0 with different normalization

methodologies are presented below, where Dim stands for number of dimensions of PCA,

Ttime is the time utilized for training in seconds and Rtime is time used for recognition in

seconds and Accuracy is in percentage of the total images in the database.

Normalized 1 ------------------------------------------------------- Dim. Accuracy Ttime Rtime 1 57.31 1.16 2.29 4 50.25 1.22 2.48 7 49.41 1.26 2.65 10 49.24 1.34 2.95 13 49.58 1.39 3.19 16 49.41 1.47 3.13 19 50.08 1.57 3.56 22 49.08 1.64 3.67 25 48.74 1.73 3.98 28 49.58 1.81 5.52 31 48.74 1.95 5.69 34 49.08 2.00 4.62 37 48.74 2.07 4.80 40 48.40 2.22 4.75 43 48.74 2.28 4.88 46 49.24 2.35 5.03 49 49.08 2.45 5.25 52 48.40 2.52 5.41 55 48.74 2.65 5.69 58 48.57 2.78 8.03 61 48.74 2.87 8.43 64 47.73 2.93 10.53 Normalized 2 ------------------------------------------------------- Dim. Accuracy Ttime Rtime 1 59.16 1.17 2.27 4 50.76 1.22 2.46 7 47.90 1.28 2.63 10 48.91 1.33 2.91 13 49.08 1.42 3.19 16 49.75 1.48 3.18 19 49.08 1.65 3.59 22 49.24 1.77 3.88


25 48.07 1.84 4.13 28 48.57 1.86 5.40 31 48.40 1.91 5.63 34 47.90 2.01 4.85 37 48.07 2.09 5.03 40 47.90 2.24 4.80 43 47.73 2.28 5.06 46 47.73 2.63 5.30 49 47.06 2.69 5.36 52 47.06 2.78 5.75 55 46.72 2.86 5.82 58 46.89 2.87 8.31 61 46.89 3.14 8.72 64 47.23 2.94 11.24 Normalized 3 ------------------------------------------------------- Dim. Accuracy Ttime Rtime 1 58.82 1.19 2.30 4 51.26 1.23 2.42 7 48.74 1.30 2.71 10 49.24 1.40 3.06 13 49.41 1.46 3.30 16 48.24 1.54 3.24 19 47.90 1.57 3.62 22 48.24 1.68 3.80 25 48.24 1.86 4.12 28 47.73 1.97 5.58 31 47.23 1.94 5.89 34 47.56 2.21 6.34 37 47.39 2.27 5.00 40 47.73 2.28 4.85 43 47.06 2.47 5.09 46 46.55 2.53 5.17 49 47.06 2.64 5.47 52 47.23 2.73 5.60 55 47.56 2.82 5.73 58 47.56 2.97 8.45 61 47.39 3.14 8.48 64 47.56 3.31 11.43 Normalized 4 ------------------------------------------------------- Dim. Accuracy Ttime Rtime 1 58.49 1.19 2.36 4 50.42 1.26 2.56


7 47.23 1.31 2.73 10 47.39 1.37 3.04 13 47.23 1.45 3.25 16 47.39 1.48 3.16 19 47.56 1.56 3.53 22 47.90 1.64 3.70 25 47.23 1.75 3.96 28 46.72 1.80 5.33 31 47.06 1.91 5.63 34 47.06 2.00 4.70 37 46.72 2.08 5.02 40 46.22 2.18 4.74 43 45.88 2.26 4.92 46 46.22 2.37 5.14 49 46.22 2.47 5.34 52 46.55 2.65 5.75 55 45.71 2.88 5.79 58 46.22 3.00 8.93 61 46.39 3.08 9.49 64 46.05 3.20 12.50 Normalized 5 ------------------------------------------------------- Dim. Accuracy Ttime Rtime 1 57.65 0.38 1.56 4 48.07 0.48 1.66 7 48.57 0.57 1.82 10 47.56 0.64 1.87 13 47.90 0.74 2.13 16 47.90 0.85 2.41 19 47.73 0.91 2.69 22 47.06 1.04 2.76 25 46.72 1.32 2.98 28 46.39 1.26 3.33 31 46.72 1.34 3.06 34 46.55 1.65 5.88 Normalized 6 ------------------------------------------------------- Dim. Accuracy Ttime Rtime 1 54.45 1.20 2.36 4 50.25 1.27 2.47 7 48.24 1.31 2.73 10 48.07 1.41 3.07 13 47.90 1.48 3.31 16 48.07 1.55 3.24


19 48.40 1.65 3.70 22 49.08 1.75 3.83 25 48.40 1.85 4.16 28 48.74 1.94 5.99 31 47.90 2.03 6.20 34 47.90 2.14 6.62 37 47.56 2.25 5.21 40 48.07 2.35 4.88 43 47.06 2.47 5.12 46 47.56 2.55 5.24 49 47.56 2.68 5.48 52 47.56 2.75 5.60 55 47.39 2.86 5.83 58 46.72 2.96 8.83 61 46.89 3.08 9.36 64 47.06 3.19 12.57 Normalized 7 ------------------------------------------------------- Dim. Accuracy Ttime Rtime 1 53.11 1.18 2.35 4 48.24 1.25 2.57 7 46.22 1.32 2.74 10 45.88 1.39 2.97 13 46.39 1.48 3.29 16 46.89 1.56 3.29 19 46.55 1.66 3.51 22 46.72 1.61 3.69 25 46.55 1.69 3.94 28 46.05 1.78 5.29 31 45.88 1.92 5.70 34 45.55 2.14 4.94 37 45.88 2.25 5.31 40 46.05 2.35 4.94 43 45.88 2.44 5.09 46 46.22 2.55 5.20 49 46.22 2.68 5.47 52 45.38 2.75 5.67 55 45.55 2.85 5.76 58 45.21 2.97 8.87 61 45.55 3.07 9.43 64 45.71 3.19 12.50 Normalized 8 ------------------------------------------------------- Dim. Accuracy Ttime Rtime


1 53.78 1.19 2.35 4 49.08 1.27 2.47 7 47.56 1.33 2.74 10 47.39 1.41 3.06 13 47.90 1.46 3.30 16 47.06 1.56 3.25 19 46.39 1.65 3.70 22 47.06 1.74 3.79 25 46.55 1.84 4.14 28 46.39 1.95 5.96 31 46.72 2.04 6.24 34 47.39 2.13 6.62 37 47.56 2.25 5.25 40 47.06 2.34 4.92 43 47.23 2.46 5.11 46 46.72 2.57 5.18 49 46.72 2.66 5.48 52 46.55 2.77 5.57 55 46.22 2.88 5.84 58 46.39 2.96 8.82 61 45.71 3.07 9.37 64 45.71 3.18 12.56 Normalized9 ------------------------------------------------------- Dim. Accuracy Ttime Rtime 1 53.61 1.19 2.36 4 49.92 1.25 2.57 7 48.07 1.31 2.74 10 46.89 1.39 2.98 13 48.40 1.48 3.29 16 48.07 1.56 3.25 19 47.73 1.65 3.62 22 48.07 1.75 3.85 25 48.07 1.84 4.14 28 48.07 1.95 5.86 31 47.06 2.06 6.23 34 46.72 2.13 4.93 37 47.56 2.26 5.30 40 47.23 2.36 4.91 43 47.06 2.46 5.10 46 46.89 2.56 5.24 49 47.39 2.68 5.50 52 47.56 2.76 5.66 55 48.07 2.87 5.79


58 47.06 2.97 8.89 61 47.39 3.08 9.42 64 47.06 3.11 10.28 Normalized10 ------------------------------------------------------- Dim. Accuracy Ttime Rtime 1 56.97 0.25 1.16 4 49.58 0.37 1.23 7 48.07 0.39 1.30 10 48.24 0.43 1.40 13 47.90 0.47 1.47 16 48.91 0.50 1.75 19 48.91 0.53 1.86 22 48.74 0.57 1.95 25 48.40 0.61 2.03 28 47.39 0.65 2.17 31 46.89 0.70 2.05 34 47.73 0.74 3.44 Normalized11 ------------------------------------------------------- Dim. Accuracy Ttime Rtime 1 52.77 1.17 2.26 4 50.08 1.22 2.38 7 46.55 1.24 2.60 10 47.73 1.30 2.96 13 47.90 1.39 3.19 16 47.73 1.47 3.16 19 47.39 1.52 3.50 22 47.23 1.59 3.63 25 47.73 1.68 3.92 28 47.90 1.76 5.34 31 47.73 1.87 5.56 34 47.73 1.95 5.89 37 48.24 2.05 4.89 40 47.23 2.13 4.67 43 48.24 2.23 4.85 46 47.23 2.31 4.99 49 47.23 2.42 5.23 52 46.72 2.49 5.36 55 46.55 2.61 5.55 58 46.89 2.68 7.88 61 46.89 2.78 8.16 64 46.72 2.88 10.18


Normalized12 ------------------------------------------------------- Dim. Accuracy Ttime Rtime 1 53.95 1.15 2.24 4 49.92 1.22 2.50 7 46.39 1.25 2.61 10 46.72 1.32 2.89 13 46.72 1.38 3.12 16 47.06 1.44 3.11 19 47.23 1.53 3.46 22 47.23 1.61 3.67 25 46.72 1.68 3.92 28 46.89 1.77 5.28 31 47.39 1.89 5.56 34 47.23 1.96 4.66 37 47.06 2.07 4.96 40 46.89 2.15 4.71 43 47.90 2.26 4.86 46 48.24 2.32 5.07 49 48.24 2.43 5.23 52 47.73 2.51 5.40 55 47.56 2.60 5.58 58 46.72 2.76 7.96 61 46.55 2.79 8.24 64 46.22 2.90 10.22 Normalized13 Dim. Accuracy Ttime Rtime 1 53.28 1.16 2.25 4 49.24 1.20 2.38 7 46.72 1.26 2.64 10 48.74 1.32 2.98 13 47.90 1.41 3.18 16 48.24 1.44 3.09 19 47.56 1.52 3.51 22 47.39 1.60 3.63 25 47.90 1.69 3.92 28 47.73 1.79 5.36 31 47.73 1.87 5.55 34 47.90 1.96 5.89 37 47.73 2.08 4.89 40 47.90 2.14 4.68 43 47.73 2.23 4.88 46 47.73 2.39 5.10 49 47.39 2.42 5.20 52 47.23 2.51 5.36


 55     46.72     2.60     5.56
 58     46.72     2.69     7.88
 61     46.72     2.79     8.22
 64     46.72     2.90    10.22

Normalized 14
-------------------------------------------------------
Dim.   Accuracy   Ttime   Rtime
  1     54.29     1.16     2.24
  4     49.92     1.20     2.46
  7     48.91     1.28     2.63
 10     47.90     1.31     2.91
 13     49.24     1.40     3.12
 16     48.74     1.44     3.12
 19     48.40     1.52     3.46
 22     48.07     1.61     3.73
 25     48.57     1.70     3.92
 28     48.91     1.78     5.30
 31     48.24     1.91     5.58
 34     48.40     1.99     4.66
 37     48.57     2.04     4.93
 40     48.40     2.15     4.69
 43     47.90     2.23     4.85
 46     47.56     2.32     4.99
 49     47.90     2.42     5.22
 52     48.57     2.49     5.37
 55     48.24     2.60     5.51
 58     48.57     2.70     7.92
 61     48.74     2.78     8.21
 64     49.58     2.87    10.10


Appendix II

Results of bit plane 5 for the BATH iris database with different numbers of rows of the normalized iris image are presented below. Experiments were conducted by changing the total number of rows in the normalized images from 20 to 64; only the results for 50 to 64 rows are given here, since the other row counts do not produce better results. The total number of images used in the experiments is 1000, and the size of each normalized image is 64 by 256. The threshold value is varied from 0.30 to 0.49 to obtain the false rejects and false accepts along with the total errors that occurred during matching, and at the end the maximum accuracy is reported with the corresponding threshold value and number of errors.
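The accuracy figures in the tables follow directly from the error counts: with 1000 images, accuracy = 100 x (1 - total errors / 1000), so for example 7 errors gives 99.30. The following sketch (not the thesis implementation) shows one way such a threshold sweep could be tabulated; the arrays genuine_hd and impostor_hd are hypothetical inputs holding Hamming distances of same-eye and different-eye comparisons respectively.

# Minimal sketch of the threshold sweep, assuming precomputed Hamming distances.
import numpy as np

def sweep_thresholds(genuine_hd, impostor_hd, total_images=1000):
    """Count false rejects/accepts per threshold and report the best accuracy."""
    rows = []
    for t in np.linspace(0.30, 0.49, 20):              # thresholds 0.30, 0.31, ..., 0.49
        false_reject = int(np.sum(genuine_hd > t))     # genuine pairs wrongly rejected
        false_accept = int(np.sum(impostor_hd <= t))   # impostor pairs wrongly accepted
        rows.append((round(t, 2), false_reject, false_accept,
                     false_reject + false_accept))
    best_t, _, _, min_errors = min(rows, key=lambda r: r[3])
    accuracy = 100.0 * (1.0 - min_errors / total_images)   # e.g. 7 errors -> 99.30
    return rows, best_t, min_errors, accuracy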

Image Rows = 50
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           38              4             42
  0.31           31              4             35
  0.32           21              5             26
  0.33           13              5             18
  0.34            7              6             13
  0.35            7              6             13
  0.36            1              6              7
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 7    Accuracy = 99.30
************************************************************************
Image Rows = 51
--------------------------------------------------------------------------


Threshold   False Reject   False Accept   Total Errors
  0.30           39              4             43
  0.31           33              4             37
  0.32           21              5             26
  0.33           13              5             18
  0.34            7              6             13
  0.35            7              6             13
  0.36            1              6              7
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 7    Accuracy = 99.30
************************************************************************
Image Rows = 52
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           40              4             44
  0.31           33              4             37
  0.32           20              5             25
  0.33           13              5             18
  0.34            7              6             13
  0.35            7              6             13
  0.36            1              6              7
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7


  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 7    Accuracy = 99.30
************************************************************************
Image Rows = 53
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           41              4             45
  0.31           33              4             37
  0.32           20              5             25
  0.33           13              5             18
  0.34            7              6             13
  0.35            7              6             13
  0.36            1              6              7
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 7    Accuracy = 99.30
************************************************************************
Image Rows = 54
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           41              4             45
  0.31           34              4             38
  0.32           19              5             24
  0.33           13              5             18
  0.34            7              6             13
  0.35            5              6             11


  0.36            1              6              7
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 7    Accuracy = 99.30
************************************************************************
Image Rows = 55
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           42              4             46
  0.31           35              4             39
  0.32           20              5             25
  0.33           14              5             19
  0.34            7              5             12
  0.35            4              6             10
  0.36            0              6              6
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 6    Accuracy = 99.40


************************************************************************
Image Rows = 56
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           42              4             46
  0.31           35              4             39
  0.32           21              5             26
  0.33           14              5             19
  0.34            7              5             12
  0.35            4              6             10
  0.36            0              6              6
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 6    Accuracy = 99.40
************************************************************************
Image Rows = 57
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           42              3             45
  0.31           34              4             38
  0.32           23              5             28
  0.33           14              5             19
  0.34            7              5             12
  0.35            4              6             10
  0.36            0              6              6
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7


  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 6    Accuracy = 99.40
************************************************************************
Image Rows = 58
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           43              3             46
  0.31           34              4             38
  0.32           23              5             28
  0.33           14              5             19
  0.34            7              5             12
  0.35            4              6             10
  0.36            0              6              6
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 6    Accuracy = 99.40
************************************************************************
Image Rows = 59
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           42              3             45
  0.31           36              4             40
  0.32           23              5             28
  0.33           15              5             20
  0.34            7              5             12
  0.35            4              6             10


  0.36            0              6              6
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 6    Accuracy = 99.40
************************************************************************
Image Rows = 60
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           42              3             45
  0.31           38              3             41
  0.32           24              4             28
  0.33           14              5             19
  0.34            7              5             12
  0.35            4              6             10
  0.36            0              6              6
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 6    Accuracy = 99.40


************************************************************************
Image Rows = 61
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           42              3             45
  0.31           38              3             41
  0.32           25              4             29
  0.33           13              5             18
  0.34            8              5             13
  0.35            3              6              9
  0.36            0              6              6
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 6    Accuracy = 99.40
************************************************************************
Image Rows = 62
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           42              3             45
  0.31           39              3             42
  0.32           24              4             28
  0.33           14              4             18
  0.34            8              5             13
  0.35            3              6              9
  0.36            0              6              6
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7


  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 6    Accuracy = 99.40
************************************************************************
Image Rows = 63
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           45              3             48
  0.31           38              3             41
  0.32           26              3             29
  0.33           16              4             20
  0.34            7              5             12
  0.35            3              4              7
  0.36            1              3              4
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 4    Accuracy = 99.60
************************************************************************
Image Rows = 64
--------------------------------------------------------------------------
Threshold   False Reject   False Accept   Total Errors
  0.30           48              2             50
  0.31           38              3             41
  0.32           27              3             30
  0.33           15              4             19


  0.34            6              4             10
  0.35            3              5              8
  0.36            1              3              4
  0.37            0              7              7
  0.38            0              7              7
  0.39            0              7              7
  0.40            0              7              7
  0.41            0              7              7
  0.42            0              7              7
  0.43            0              7              7
  0.44            0              7              7
  0.45            0              7              7
  0.46            0              7              7
  0.47            0              7              7
  0.48            0              7              7
  0.49            0              7              7
At Threshold = 0.36    Minimum Number of Errors = 4    Accuracy = 99.60


References

[1] A. Basit, M. Y. Javed, and M. A. Anjum, "Efficient iris recognition method for

human identification," in International Conference on Pattern Recognition and

Computer Vision (PRCV 2005), vol. 1, 2005, pp. 24-26.

[2] B. Miller, "Vital signs of identity," Spectrum IEEE, vol. 31, pp. 22-30, 1994.

[3] A. K. Jain, A. Ross, and S. Prabhakar, "Introduction to Biometric recognition,"

IEEE Transaction on Circuits and Systems for Video Technology, vol. 14, pp. 4-

20, 2004.

[4] A. Jain, L. Hong, and S. Pankanti, "Biometric Identification," Communications of

the ACM, vol. 43, pp. 91-98, 2000.

[5] T. Ruggles, "Comparison of Biometric Techniques," http://www.biometric-

consulting.com/bio.htm, 1998.

[6] M. A. Anjum, M. Y. Javed, and A. Basit, "Face Recognition using Double

Dimension Reduction," in International Conference on Pattern Recognition and

Computer Vision (PRCV 2005), 2005, pp. 43-46.

[7] M. A. Anjum, M. Y. Javed, and A. Basit, "A New Approach to Face Recognition

Using Dual Dimension Reduction," International Journal of Signal Processing,

vol. 2, pp. 1-6, 2005.

[8] M. A. Anjum, M. Y. Javed, A. Nadeem, and A. Basit, "Face Recognition using

Scale Invariant Algorithm," in IASTED, International Conference Applied

Simulation & Modeling, 2004, pp. 309-312.

[9] B. Moghaddam, W. Wahid, and A. Pentland, "Beyond Eigenfaces: Probabilistic

Matching for Face Recognition," in 3rd IEEE International Conference on

Automatic Face and Gesture Recognition, 1998.

[10] A. Nefian and M. Hayes, "An embedded HMM-based approach for face detection

and recognition," in IEEE international Conference on Acoustics, Speech, and

Signal Processing, 1999.

[11] Z. M. Hafed and M. D. Levine, "Face Recognition Using the Discrete Cosine

Transform," International Journal of Computer Vision, vol. 43, pp. 167-188,

2001.


[12] R. Chellappa, S. Sirohey, C. Wilson, and C. Barnes, "Human and machine

recognition of faces: A survey," Technical Report CAR-TR-731, CS-TR-3339,

University of Maryland, 1994.

[13] S. Z. Li and J. Lu, "Face Recognition Using the Nearest Feature Line Method,"

IEEE Transactions on Neural Networks, vol. 10, pp. 439-443, 1999.

[14] H. Moon and P. J. Phillips, "Computational and performance aspects of PCA-

based face-recognition algorithms," Perception, vol. 30, pp. 303-321, 2001.

[15] Y. Wang, C. Chua, and Y. Ho, "Facial feature detection and face recognition from

2D and 3D images," Pattern Recognition Letters, vol. 23, pp. 1191-1202, 2002.

[16] A. Bronstein, M. Bronstein, and R. Kimmel, "Expression-invariant 3D face

recognition," in 4th International Conference Audio and Video based Biometric

Person Authentication, 2003, pp. 62-70.

[17] K. Iwano, T. Hirose, E. Kamibayashi, and S. Furui, "Audio-visual person

authentication using speech and ear images," in ACM Workshop on Multimodal

User Authentication, 2003, pp. 85-90.

[18] C. Sanderson, S. Bengio, H. Bourlard, J. M. R. Collobert, M. BenZeghiba, F.

Cardinaux, and S. Marcel, "Speech and face based biometric authentication," in

International Conference on Multimedia and Expo, 2003.

[19] P. Aleksic and A. Katsaggelos, "An audio-visual person identification and

verification system using FAPs as visual features," in ACM Workshop on

Multimodal User Authentication, 2003, pp. 80-84.

[20] K. Chang, K. Bowyer, and V. Barnabas, "Comparison and Combination of Ear

and Face Images in Appearance-Based Biometrics," IEEE Trans. Pattern Analysis

and Machine Intelligence, vol. 25, pp. 1160-1165, 2003.

[21] X. Chen, P. Flynn, and K. W. Bowyer, "Visible-light and infrared face

recognition," in ACM Workshop on Multimodal User Authentication, 2003, pp.

48-55.

[22] T. Hazen, E. Weinstein, and A. Park, "Towards robust person recognition on

handheld devices using face and speaker identification technologies," in 5th

international conference on Multimodal Interfaces, 2003, pp. 289-292.

[23] Computer Business Review, 1998.


[24] S. Prabhakar and A. K. Jain, "Decision-level fusion in fingerprint verification,"

Pattern Recognition, vol. 35, pp. 861-874, 2002.

[25] R. Sanchez-Reillo, C. Sanchez-Avila, and A. Gonzalez-Marcos, "Biometric

Identification through Hand Geometry Measurements," IEEE Trans. on Pattern

Analysis & Machine Intelligence, vol. 22, pp. 1168-1171, 2000.

[26] http://www.eyedesignbook.com/ch3/eyech3-i.html accessed, 2007.

[27] Industry Information: Biometrics, 1996.

[28] A. K. Jain, F. D. Griess, and S. D. Connell, "On-line signature verification,"

Pattern Recognition, vol. 35, pp. 2963-2972, 2002.

[29] R. Plamondon and G. Lorette, "Automatic signature verification and writer

identification - the state of the art," Pattern Recognition, vol. 22, pp. 107-131,

1989.

[30] R. Plamondon and S. N. Srihari, "On-line and off-line handwriting recognition: A

comprehensive survey," IEEE Transactions on Pattern Analysis and Machine

Intelligence, vol. 22, pp. 63-84, 2000.

[31] I. Yoshimura and M. Yoshimura, "Off-line writer verification using ordinary

characters as the object," Pattern Recognition, vol. 24, pp. 909-915, 1991.

[32] F. Borowski, "Voice activity detection for speaker verification systems,"

Proceedings of SPIE - The International Society for Optical Engineering, vol.

6937, 2008.

[33] W. M. Campbell, J. P. Campbell, D. A. Reynolds, E. Singer, and P. A. Torres-

Carrasquillo, "Support vector machines for speaker and language recognition,"

Computer Speech and Language, vol. 20, pp. 210-229, 2006.

[34] B. Xiang, U. V. Chaudhari, J. Navrátil, G. N. Ramaswamy, and R. A. Gopinath,

"Short-time Gaussianization for robust speaker verification," presented at

ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, 2002.

[35] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, "Speaker verification using

adapted Gaussian mixture models," Digital Signal Processing: A Review Journal,

vol. 10, pp. 19-41, 2000.

[36] S. V. Stevenage, M. S. Nixon, and K. Vince, "Visual Analysis of Gait as a Cue to

Identity," Applied Cognitive Psychology, vol. 13, pp. 513-526., 1999.


[37] L. Wang, W. Hu, and T. Tan, "Recent developments in human motion analysis,"

Pattern Recognition, vol. 36, pp. 585-601, 2002.

[38] P. S. Huang, C. J. Harris, and M. S. Nixon, "Statistical approach for recognizing

humans by gait using spatial-temporal templates," IEEE International Conference

on Image Processing, vol. 3, pp. 178-182, 1998.

[39] C.-Y. Yam, M. S. Nixon, and J. N. Carter, "Gait Recognition by Walking and

Running: a Model-Based Approach," in Asian Conference on Computer Vision

(ACCV-2002), 2002, pp. 1-6.

[40] M. S. Nixon and J. N. Carter, "Automatic recognition by gait," Proceedings of the

IEEE, vol. 94, pp. 2013-2024, 2006.

[41] A. Kale, A. Sundaresan, A. N. Rajagopalan, N. P. Cuntoor, A. K. Roy-

Chowdhury, V. Krüger, and R. Chellappa, "Identification of humans using gait,"

IEEE Transactions on Image Processing, vol. 13, pp. 1163-1173, 2004.

[42] A. J. Hoogstrate, H. Van Den Heuvel, and E. Huyben, "Ear identification based

on surveillance camera images," Science and Justice - Journal of the Forensic

Science Society, vol. 41, pp. 167-172, 2001.

[43] B. Moreno, A. Sanchez, and J. F. Velez, "On the Use of Outer Ear Images for

Personal Identification in Security Applications," in IEEE 33rd Annual

International Carnahan Conference on Security Technology, 1999, pp. 469-476.

[44] R. Purkait and P. Singh, "A test of individuality of human external ear pattern: Its

application in the field of personal identification," Forensic Science International,

vol. 178, pp. 112-118, 2008.

[45] A. Iannarelli, Ear Identification, Forensic Identification Series: Paramont

Publishing, Fremont, California, 1989.

[46] A. J. Hoogstrate, H. Van den Heuvel, and E. Huyben, "Ear Identification Based

on Surveillance Camera’s Images " http://www.forensic-evidence.com/site/

ID/IDearCamera.html, 2003.

[47] http://www.UNDBiometricsDatabase.html, accessed, 2005.

[48] A. Basit and M. Javed, Y., "Localization of iris in gray scale images using

intensity gradient," Optics and Lasers in Engineering, vol. 45, pp. 1107-1114,

2007.


[49] J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and

Systems for Video Technology, vol. 14, pp. 21-30, 2004.

[50] S. Lim, K. Lee, O. Byeon, and T. Kim, "Efficient iris recognition through

improvement of feature vector and classifier," ETRI Journal, vol. 23, pp. 61-70,

2001.

[51] J. Kim, S. Cho, and J. Choi, "Iris recognition using wavelet features," Journal of

VLSI Signal Processing, vol. 38, pp. 147-156, 2004.

[52] W. Boles and B. Boashash, "A Human Identification Technique Using Images of

the Iris and Wavelet Transform," IEEE Trans. Signal Processing, vol. 46, pp.

1185-1188, 1998.

[53] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Personal identification based on iris

texture analysis," IEEE Trans. Pattern Analysis and Machine Intelligence, vol.

25, pp. 1519-1533, 2003.

[54] J. G. Daugman, "The importance of being random: Statistical principles of iris

recognition," Pattern Recognition, vol. 36, pp. 279-291, 2003.

[55] R. Wildes, "Iris recognition: an emerging biometric technology," Proceedings of

the IEEE, vol. 85, pp. 1348-1363, 1997.

[56] L. Masek and P. Kovesi, "Biometric Identification System Based on Iris Patterns

" in The School of Computer Science and Software Engineering: The University

of Western Australia, 2003.

[57] Y. Z. Shen, M. J. Zhang, J. W. Yue, and H. M. Ye, "A new iris locating

algorithm," in International Conference on Artificial Reality and Telexistence--

Workshops (ICAT'06), 2006, pp. 438-441.

[58] H. Proenca and L. A. Alexandre, "Ubiris: A noisy iris image database," in 13th

International Conference on Image Analysis and Processing, 2005, pp. 970-977.

[59] J. Cui, Y. Wang, T. Tan, L. Ma, and Z. Sun, "A fast and robust iris localization

method based on texture segmentation," in SPIE Defense and Security

Symposium, vol. 5404, 2004, pp. 401-408.

[60] C. Tian, Q. Pan, Y. Cheng, and Q. Gao, "Fast Algorithm and Application of

Hough Transform in Iris Segmentation," in 3rd International Conference on

Machine Learning and Cybernetics, 2004, pp. 3977-3980.


[61] A. Rad, R. Safabakhsh, N. Qaragozlou, and M. Zaheri, "Fast iris and pupil

localization and eyelid removal using gradient vector pairs and certainty factors,"

in Irish Machine Vision and Image Processing Conference, 2004, pp. 82-91.

[62] L. Ma, Y. Wang, and T. Tan, "Iris recognition based multi-channel Gabor

filtering," in The fifth Asian conference on computer vision, 2002, pp. 23-25.

[63] L. Flom and A. Safir, "Iris recognition system," U.S. Patent 4 641 349, 1987.

[64] J. Daugman, "Biometric Personal Identification System Based on Iris Analysis,"

US patent 5 291 560, 1994.

[65] "Iris Recognition," http://en.wikipedia.org/wiki/Iris_recognition accessed, 2007.

[66] http://www.chinahistoryforum.com/index.php?showtopic=21366&st=0 accessed

2008.

[67] P. Kronfeld, Gross anatomy and embryology of the eye, H. Davson ed: The Eye,

Academic Press, London, 1962.

[68] "Eyes," http://www.ratbehavior.org/Eyes.htm accessed, 2007.

[69] A. K. Bachoo and J. R. Tapamo, "A segmentation method to improve iris-based

person identification," in 7th AFRICON Conference in Africa, 2004, pp. 403-408.

[70] "BMIris," http://ctl.ncsc.dni.us/biomet%20web/BMIris.html accessed, 2007.

[71] "CASIA-Iris Image Database version 1.0 and 3.0," Chinese Academy of Sciences

– Institute of Automation China, http://www.sinobiometrics.com accessed 2003

and 2006.

[72] "Multimedia University, Iris database," http://persona.mmu.edu.my/~ accessed,

2006.

[73] J. S. Lim, Two-Dimensional Signal and Image Processing: Englewood Cliffs, NJ,

Prentice Hall, 1990.

[74] J. R. Parker, Algorithms for Image Processing and Computer Vision: John Wiley

& Sons, Inc. New York, 1997.

[75] "Zero crossing," http://www.ii.metu.edu.tr/~ion528/demo/lectures/6/1/index.html

accessed, 2007.

[76] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second ed: Prentice

Hall, Upper Saddle River, New Jersey, 2002.


[77] "Image prcessing algoritms,"

http://www.ii.metu.edu.tr/%7Eion528/demo/demochp.html accessed, 2007.

[78] J. Canny, "A Computational Approach to Edge Detection," IEEE Transactions on

Pattern Analysis and Machine Intelligence, vol. PAMI-8, pp. 679-698, 1986.

[79] "Canny," http://homepages.inf.ed.ac.uk/rbf/HIPR2/canny.htm accessed, 2007.

[80] "Hough Transform," http://en.wikipedia.org/wiki/Hough_transform accessed,

2007.

[81] J. G. Daugman, "High confidence visual recognition of persons by a test of

statistical independence," IEEE Transactions on Pattern Analysis and Machine

Intelligence, vol. 15, pp. 1148-1161, 1993.

[82] P. V. C. Hough, "Method and means for recognizing complex patterns," U.S.

Patent 3 069 654, 1962.

[83] J. R. Bergen, P. Anandan, K. Hanna, and R. Hingorani, "Hierarchical model-

based motion estimation," in Euro. Conf. Computer Vision, 1991, pp. 5-10.

[84] D. J. Field, "Relations between the statistics of natural images and the response

properties of cortical cells," Journal of the Optical Society of America, vol. 4, pp.

2379-2394, 1987.

[85] C. Burrus, R. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet

Transforms: Prentice Hall, New Jersey, 1998.

[86] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, second ed: Wiley-

Interscience publication, 2001.

[87] A. W. Goodman, Analytic geometry and the calculus, fourth ed. New York:

Collier Macmillan International Editions, 1980.

[88] "Methods for classification," http://sundog.stsci.edu/rick/SCMA/node2.html

accessed, 2007.

[89] M. Turk and A. Pentland, "Eigenfaces for Face Recognition," Journal of

Cognitive Neuroscience, vol. 3, pp. 71-86, 1991.

[90] "Bit Plane," PC Magazine, 2007.

[91] "Bit-plane," http://en.wikipedia.org/wiki/Bit-plane accessed, 2008.


[92] "Coiflets,"

http://documents.wolfram.com/applications/wavelet/FundamentalsofWavelets/1.4

.5.html accessed, 2007.

[93] G. Strang and T. Nguyen, Wavelets and Filter Banks: Wellesley Cambridge Press,

1995.

[94] E. W. Weisstein, "Hamming Distance ": From MathWorld--A Wolfram Web

Resource http://mathworld.wolfram.com/HammingDistance.html accessed, 2007.

[95] K. W. Bowyer, K. Hollingsworth, and P. J. Flynn, "Image understanding for iris

biometrics: A survey," Computer Vision and Image Understanding, vol. 110, pp.

281-307, 2008.

[96] "University of Bath Iris image database," UK http://www.bath.ac.uk/elec-

eng/research/sipg/irisweb/database.htm accessed, 2006.

[97] P. J. Phillips, K. W. Bowyer, and P. J. Flynn, "Comments on the CASIA version

1.0 iris dataset," IEEE Transactions on Pattern Analysis & Machine Intelligence,

vol. 29, pp. 1-2, 2007.

[98] A. Basit, M. Y. Javed, and S. Masood, "Non-circular Pupil Localization in Iris

Images," presented at 4th International Conference on Emerging Technologies

(IEEE ICET 2008), Rawalpindi, Pakistan, 2008.

[99] K. Masood, M. Y. Javed, and A. Basit, "Iris Recognition using Wavelets," in

International Conference on Emerging Technologies (ICET 2007) 2007, pp. 253-

256.

[100] J. Illingworth and J. Kittler, "A survey of the Hough transform," Computer Vision,

Graphics and Image Processing, vol. 44, pp. 87-116, 1988.

[101] A. Zaim, "Automatic segmentation of iris images for the purpose of

identification," in IEEE International Conference on Image Processing (ICIP-

2005) 2005, pp. III-273-6.

[102] R. Zhu, J. Yang, and R. Wu, "Iris recognition based on local feature point

matching," in International Symposium on Communications and Information

Technologies (ISCIT '06). Bangkok, 2006, pp. 451-454.


[103] S. P. Narote, A. S. Narote, L. M. Waghmare, and A. N. Gaikwad, "An automated

segmentation method for iris recognition," in TENCON 2006, IEEE Region 10

Conference. Hong Kong, 2006, pp. 1-4.

[104] H. Mehrabian and P. Heshemi-Tari, "Pupil boundary detection for iris recognition

using graph cuts," in International Conference on Image and Vision Computing

New Zealand (IVCNZ -2007). New Zealand, 2007, pp. 77-82.

[105] L. R. Kennell, R. W. Ives, and R. M. Gaunt, "Binary morphology and local

statistics applied to iris segmentation for recognition," in IEEE International

Conference on Image Processing (ICIP-2006), 2006, pp. 293-296.

[106] K. Grabowski, W. Sankowski, M. Napieralska, M. Zubert, and A. Napieralski,

"Iris recognition algorithm optimized for hardware implementation," in IEEE

Symposium on Computational Intelligence and Bioinformatics and Computational

Biology, 2006, pp. 1-5.

[107] X. Guang-Zhu, Z. Zai-feng, and M. Yi-de, "An image segmentation based method

for iris feature extraction," The Journal of China universities of posts and

telecommunications, vol. 15, pp. 96-117, 2008.

[108] C. Teo and H. Ewe, "An efficient one dimensional fractal analysis," in 13th

WSCG International Conference in Central Europe on Computer Graphics,

Visualization and Computer Vision. Czech Republic, 2005, pp. 157-160.

[109] A. A. Kassim, T. Tan, and K. H. Tan, "A comparative study of efficient

generalized Hough transform techniques," Image and Vision Computing, vol. 17

pp. 737-748, 1999.

[110] C.-Y. Cho, H.-S. Chen, and J.-S. Wang, "Smooth Quality Streaming With Bit-

Plane Labelling," Visual Communications and Image Processing, Proceedings of

the SPIE, vol. 5690, pp. 2184-2195, 2005.

[111] T. Strutz, "Fast Noise Suppression for Lossless Image Coding," in Picture Coding

Symposium (PCS'2001), 2001.

[112] K. I. Chang, "New multi-biometric approaches for improved person

identification," PhD dissertation, University of Notre Dame, 2004.


[113] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal

2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine

Intelligence, vol. 27, pp. 619-624, 2005.

[114] N. Poh, S. Bengio, and J. Korczak, "A multi-sample multi-source model for

biometric authentication," in IEEE International Workshop on Neural Networks

for Signal Processing, 2002, pp. 375-384.