
CHAPTER 2

LITERATURE SURVEY

2.1 FINGERPRINT ENHANCEMENT

Sherlock et al (1994) proposed a new technique which uses

contextual filtering in the Fourier domain and block-wise processing. Choi &

Krishnapuram (1997) proposed a robust fuzzy-logic-based fingerprint image enhancement method for removing impulse noise, smoothing out non-impulse noise and preserving the edges of the image. Three different filters, based on weighted or fuzzy least squares methods, are used.

Kasaei et al (1997) proposed a new fingerprint image enhancement

procedure based on local dominant ridge directions. The image is

standardized and enhanced using directional filtering, which uses a library of

filters based on Dominant Ridge Directions (DRD). The DRDs are used to

form the block direction images, where the core and the delta points are

effectively enhanced. The proposed algorithm results in an effective

representation of the fingerprint images with appreciable quality. The main

drawback is that the procedure takes more time to enhance the image when

compared to other methods.

Hong et al (1998) proposed a fast fingerprint enhancement

algorithm using Gabor band pass filters which are tuned to the corresponding

ridge frequency and orientation to remove the undesired noise while

preserving true ridge-valley structures, where all the operations are done in

spatial domain. This algorithm can adaptively improve the clarity of ridge and

furrow structures of the input images based on the estimated local ridge


orientation and frequency. The algorithm also identifies unrecoverable

regions (which are harmful for minutiae extraction) in the fingerprint and

removes them from further processing. This method produces good

performance in terms of goodness index value. The main drawback is the

error produced in the orientation estimation block gets propagated to

frequency estimation thus producing imperfect reconstruction.
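
As an illustration of the kind of contextual Gabor filtering described above, the following sketch applies an even-symmetric Gabor kernel block by block, with the block orientation estimated from image gradients and a fixed illustrative ridge frequency; it assumes the image dimensions are multiples of the block size and is not the exact procedure of Hong et al (1998).

    import numpy as np
    from scipy import ndimage

    def gabor_kernel(theta, freq, sigma=4.0, size=11):
        """Even-symmetric Gabor kernel tuned to ridge orientation theta and frequency freq."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)      # coordinate across the ridges
        yr = -x * np.sin(theta) + y * np.cos(theta)     # coordinate along the ridges
        return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

    def block_orientation(block):
        """Least-squares ridge orientation of a block, estimated from image gradients."""
        gy, gx = np.gradient(block.astype(float))
        return 0.5 * np.arctan2(2 * (gx * gy).sum(), (gx ** 2 - gy ** 2).sum()) + np.pi / 2

    def enhance(image, block=16, freq=0.1):
        """Block-wise contextual filtering; freq is roughly 1 / ridge period in pixels.

        Assumes the image dimensions are multiples of the block size.
        """
        out = np.zeros_like(image, dtype=float)
        for i in range(0, image.shape[0], block):
            for j in range(0, image.shape[1], block):
                patch = image[i:i + block, j:j + block].astype(float)
                kernel = gabor_kernel(block_orientation(patch), freq)
                out[i:i + block, j:j + block] = ndimage.convolve(patch, kernel, mode='reflect')
        return out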

Natchegael & Kerre (2000) proposed a new methodology for fingerprint image enhancement based on fuzzy techniques. Greenberg et al (2002) proposed the use of an anisotropic filter that adapts its parameters to the

structure of the underlying sub region. In this method, Wiener filter is used

for de-noising and the adaptive anisotropic filter is used for determining the

local ridge orientation. The enhanced anisotropic filter does not use the

diffusion technique and it is robust to noise, thus restoring the true

ridge/valley of the fingerprint image. This method is less efficient for bad

quality images.

Yang et al (2003) modified the method proposed above by discarding

the inaccurate prior assumption of sinusoidal plane wave and making the

parameter selection process independent of fingerprint image.

Wu et al (2004) proposed to convolve a fingerprint image with an anisotropic

filter to remove the Gaussian noise and then apply directional median filter to

remove impulse noise. The fingerprint image is normalized to reduce the

variations of gray-level values along the ridges and valleys and then the

orientation fields are computed based on chain-code. This causes restoration

discontinuities at the block boundaries. The algorithm fails when image

regions are contaminated heavily with noise and orientation estimation

becomes too hard.
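
The normalization step mentioned above is commonly implemented by shifting each pixel towards a desired mean and rescaling to a desired variance; a minimal sketch of that standard formulation follows, with illustrative target values rather than parameters taken from Wu et al (2004).

    import numpy as np

    def normalize(image, desired_mean=100.0, desired_var=100.0):
        """Mean/variance normalization commonly applied before fingerprint enhancement.

        Each pixel is moved towards desired_mean and rescaled so that the global
        variance becomes desired_var, reducing gray-level variation along ridges
        and valleys without changing the ridge structure itself.
        """
        img = image.astype(float)
        mean, var = img.mean(), img.var()
        delta = np.sqrt(desired_var * (img - mean) ** 2 / max(var, 1e-8))
        return np.where(img > mean, desired_mean + delta, desired_mean - delta)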

Chikkerur et al (2007) proposed a block-wise Short-Time Fourier Transform (STFT) based enhancement followed by contextual filtering using


raised cosines. This method estimates all the intrinsic properties of the fingerprint image, such as the foreground region mask, the local ridge orientation and the local ridge frequency. The method is probabilistic and does not suffer from outliers. It utilizes the full contextual information, namely orientation, frequency and angular coherence, for enhancement and, in addition, reduces the space requirements when compared to other Fourier domain methods. The main drawback is that the proposed approach does not use any technique for smoothing the image before enhancement.
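
The block-wise spectral analysis underlying the STFT approach can be illustrated as follows: each windowed block is transformed with a 2-D FFT and the dominant spectral peak gives the local ridge frequency and orientation. This is only a sketch of the idea, not the full probabilistic estimator or the raised-cosine filtering of Chikkerur et al (2007).

    import numpy as np

    def block_ridge_params(block):
        """Estimate local ridge frequency and orientation from the dominant FFT peak of a block."""
        window = np.outer(np.hanning(block.shape[0]), np.hanning(block.shape[1]))
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2((block - block.mean()) * window)))
        cy, cx = np.array(spectrum.shape) // 2
        spectrum[cy, cx] = 0                              # suppress the DC component
        py, px = np.unravel_index(np.argmax(spectrum), spectrum.shape)
        fy, fx = (py - cy) / block.shape[0], (px - cx) / block.shape[1]
        frequency = np.hypot(fx, fy)                      # cycles per pixel (~1 / ridge period)
        orientation = np.arctan2(fy, fx) + np.pi / 2      # ridge direction is normal to the wave vector
        return frequency, orientation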

Fronthaler et al (2007) introduced a new method to enhance the quality

of the given fingerprint image for achieving better recognition performance.

This method adopts a Laplacian like image scale pyramid to decompose the

original fingerprint into three smaller images corresponding to three different

frequency bands. Then, contextual filtering is applied using the three pyramid

levels and one dimensional Gaussian, where the filtering directions are

derived from the linear symmetry features.

Kyung & Bae (2008) proposed a novel enhancement method that adapts to the quality of the fingerprint images. The image is divided into three classes based on quality features, and adaptive enhancement is then applied separately to oily and dry images. This

method has the possibility of introducing more false minutiae points into the

fingerprint image.

Chengpu et al (2008) proposed an enhancement technique based on

the combination of Gabor filters and diffusion filters. In this paper, an

effective and robust algorithm for fingerprint enhancement has been

proposed. Contrast stretching approach is used to improve the clarity between

foreground and background of the fingerprint image. Then the structure tensor


property is utilized to estimate the fingerprint orientation and thus it improves

the accuracy of orientation estimation. The advantages of Gabor filtering

method and diffusion filtering method are incorporated and a low-pass filter is

used at the direction that is parallel to the ridge and a band-pass filter is

adopted in the direction perpendicular to the ridge. The results show that the proposed algorithm performs better and requires less time.

Fronthaler et al (2008) proposed a method for enhancement which

is based on the linear symmetry features of fingerprint image. In this method,

the enhancement is applied progressively in the spatial domain. Both absolute

frequency and orientation of the fingerprint image are used for enhancing the

image. All the needed image processing operations are done in the spatial domain, thus avoiding block artifacts that would reduce the biometric information. In

addition to this, the parabolic symmetry property is used for extracting the

minutiae points from the fingerprint images.

Bansal et al (2009) proposed fingerprint enhancement techniques

by reducing impulse noise from digital images using type-2 fuzzy logic filters.

Karimimehr et al (2010) proposed a novel wavelet based approach for image enhancement which uses both Gabor wavelets and Gabor filters for the enhancement purpose. Sixty-four Gabor wavelets based on sixteen directions and four frequencies are designed to find the local orientation and frequency of the region in a 16 × 16 block. Since this method performs frequency and

orientation estimations independently and simultaneously, the error from each

stage does not influence the other stages. This method does not improve the

blur regions of the fingerprint image effectively.

Bahaghighat et al (2010) developed a new fingerprint enhancement

algorithm by extracting simultaneously the frequency and orientation of the

local ridge in the fingerprint image using Gabor wavelet filter bank. This

robust fingerprint image enhancement procedure is based on the integration of


Gabor filters and directional median filters. Gabor filters are used to reduce

the Gaussian noises and impulse noises are reduced by directional median

filters. The proposed procedure significantly improves the accuracy rate

when compared to the existing methods with high time complexity.

Cheunhorng & Lin (2010) proposed fingerprint enhancement based

on adaptive color histogram and texture features. Ryu et al (2011) presented a

novel technique for enhancing low quality fingerprint images using Stochastic

Resonance (SR). Stochastic resonance refers to the process of adding Gaussian noise to low quality fingerprint images to improve the signal-to-noise ratio, thereby improving the enhancement.

Stephen & Reddy (2012) proposed a new methodology for fingerprint

image enhancement with ridge orientation using neural network followed by

ternarization. This method utilizes back propagation network with eleven

input nodes, eleven hidden nodes and one output node for learning. This is

followed by ridge orientation estimation using the response obtained during

the learning process. The use of neural network has reduced the rate of false

minutiae extraction to a great extent. However, the quality of the enhanced image is not satisfactory.

Babatunde et al (2012) proposed a modified version of a mathematical

algorithm for improving the quality of the fingerprint image enhancement.

The modified algorithm consists of sub-models for fingerprint segmentation,

normalization, ridge orientation estimation, Gabor filtering, binarization and

thinning. The proposed method performs well for synthetic and real fingerprint images with no or minimal noise.

Raajan & Pannirselvam (2012) proposed an efficient methodology for

fingerprint image enhancement using high boost Gaussian filter. In this


method, the original fingerprint image is first high pass filtered and then

Gaussian filter is applied to remove noise. This step is followed by the

application of a high boost filter to achieve better performance.
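
High boost filtering of the kind described above amplifies the original image relative to a Gaussian-smoothed version; a minimal sketch follows, in which the boost factor and the smoothing width are illustrative choices.

    import numpy as np
    from scipy import ndimage

    def high_boost(image, boost=1.5, sigma=2.0):
        """High-boost sharpening: boost * original minus a Gaussian low-pass copy.

        With boost > 1, part of the original image is retained in addition to the
        edge and ridge detail extracted by the implicit high-pass step.
        """
        img = image.astype(float)
        lowpass = ndimage.gaussian_filter(img, sigma=sigma)   # noise-suppressing Gaussian
        return boost * img - lowpass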

Selmani et al (2013) proposed a robust filtering method based on fuzzy logic. The main feature of the proposed filter is that it tries to determine the best filter for all noise intensities. The filter is able to perform very strong noise cancellation compared with the static median filter.

Bartunek et al (2013) proposed an adaptive fingerprint enhancement

technique based on contextual filtering. The method involves preprocessing of

data on global and local level using the non-linear successive mean

quantization transform dynamic range adjustment method to enhance the

global contrast of the fingerprint image. The proposed method combines and updates the existing processing blocks into a new and robust fingerprint enhancement system, which leads to drastically improved performance in terms of the equal error rate and the area above the curve. The

proposed algorithm is insensitive to the various characteristics of the

fingerprint images obtained by different sensors.

From the above survey of fingerprint enhancement techniques, it is clear that the existing enhancement techniques, whether in the spatial or the frequency domain, could not meet the needs of a real-time AFIS in improving the valley clarity and ridge flow continuity. The performance of

enhancement techniques relies heavily on local ridge orientation of the

fingerprint image. By considering the inefficiency of the existing techniques,

there is a need to propose a new enhancement methodology which focuses on

ridge orientation to improve the quality of the fingerprint image.


2.2 FINGERPRINT MINUTIAE EXTRACTION AND FALSE

MINUTIAE REMOVAL

Leung et al (1991) proposed a neural network based approach for the extraction of minutiae, where a preprocessing technique is first applied to obtain a clean and thinned binary fingerprint ridge structure. A multilayer

perceptron network with three layers is trained to extract minutiae from

thinned binary image. The original fingerprint image is first convolved with

complex Gabor filter and the resulting magnitude and angle are passed as

inputs to a back propagation neural network to identify minutiae points. This

method produces a good detection ratio and also leads to a low false alarm rate.

Ratha et al (1995) introduced a new algorithm in which the flow

direction is computed by viewing the fingerprint image as a directional

textured image. This method used orientation flow field to design adaptive

filters that are applied on the input fingerprint images. A waveform projection

based ridge segmentation algorithm is used to detect ridges. Morphological

operations are used for smoothing the ridge skeleton image. The spurious

minutiae points are removed using some heuristics and the problem with these

heuristics is that they do not eliminate all possible defects in the input gray

level fingerprint image. This method produces reasonable goodness index

value when compared with other methods.

Xiao & Rafaat (1995) proposed a false minutiae removal algorithm

based on both statistical and structural information. This method heavily depends on connectivity, which makes it complex and unreliable for bad quality fingerprints. Jain et al (1997) proposed a minutiae extraction algorithm that directly extracts minutiae from gray level images based on distance and connectivity criteria. This method eliminates false minutiae points but also eliminates some true minutiae points.


Maio & Maltoni (1998) proposed a minutiae extraction algorithm

that directly extracts minutiae from gray level images by following ridge lines

with the help of the local orientation field. This method is based on a ridge

line following algorithm that follows the image ridge lines until a ridge

ending or a bifurcation occurs. This method is superior in terms of efficiency and robustness when compared to other methods, but it has high computational complexity.

Sagar & Beng (1999) proposed a fuzzy rule method based on human linguistics for minutiae extraction from gray level images. Farina et al (1999) proposed a novel method for extracting minutiae points from skeletonized and binarized images. It introduces a new approach for bridge cleaning based on ridge positions instead of directional maps. New algorithms are proposed for ridge point validation and bifurcation validation, which are more reliable and can be used in different applications. The proposed method reduces the spurious minutiae points considerably, but requires high computational power.

Chikkerur et al (2004) proposed a new method for extracting global

and local fingerprint features based on Fourier analysis. A chain coded

contour following method is proposed which uses lossless representation of

contours for effective minutiae point detection. This paper adopts the usage of

heuristic rules based on structural properties of the minutiae points for

eliminating false fingerprint features. The algorithm detects the true minutiae

points effectively but the chain code representation increases the complexity.

Hwang et al (2005) proposed a fast method for minutiae extraction

based on horizontal and vertical run length encoding from un-thinned binary

images without using a computationally expensive thinning process.


Gamassi et al (2005) proposed a new square based method to

identify minutiae points in the fingerprint images based on the analysis of

local properties. Ridge endings and bifurcations are identified by studying the

intensity along the squared path of the un-thinned binarized image. This

method achieves remarkable identification accuracy. The major drawback is

that the method is not fully adaptive with respect to all parameters.

Fronthaler et al (2005) proposed a new method for local feature

extraction in fingerprint images using complex filtering technique. This paper

proposes a pair of local feature descriptors namely linear symmetry and

parabolic symmetry features for fingerprint feature extraction. Minutiae points are detected by means of complex filters, which reveal not only the positions of the feature points but also their directions. The proposed

methodology is fast and efficient in extracting minutiae points, but it does not

consider either the global features or the size of the available fingerprint area.

Shi & Govindaraju (2006) introduced a chain code processing method for minutiae extraction, a representation extensively used in document analysis and mainly meant for un-thinned binarized images. The

chain code representation allows efficient image quality enhancement and

detection of fine minutiae points from fingerprint images. The enhanced

fingerprint image is subjected to binarization using a locally adaptive

binarization method. The minutiae points are detected using a more

sophisticated ridge contour following procedure. A new post processing stage

for removing spurious minutiae points is also added. The chain code method is efficient in extracting minutiae points, but the post processing stage does not remove added or exchanged minutiae caused by noise from sweat pores in the ridges.

Humbe et al (2007) introduced a new method for removing

unnecessary information for true minutiae extraction technique based on


mathematical morphology. This morphological operation based algorithm

removes spurs, spikes and dots effectively and also it clearly extracts a map

structure from the input fingerprint image. This algorithm produces better

accuracy rate but some of the true minutiae points are missed.

Kaur et al (2008) developed an enhanced thinning algorithm for

effective minutiae extraction by eliminating erroneous pixels and by

preserving the connectivity property of each pixel. This paper uses distance

criteria for false minutiae elimination. The enhanced thinning algorithm reduces the complexity of the thinning process.

Kim et al (2009) proposed a new robust minutiae post-processing

algorithm which uses orientation and flow of ridges for detecting minutiae

points. False minutiae removal is achieved by using simple decision rules

which are framed on distance, connectivity, orientation and flow of ridges.

This method improves the fingerprint matching performance by effectively

eliminating false minutiae points while retaining true minutiae points. The

main drawback is that this method produces high false acceptance and false rejection rates for bad quality fingerprint images.

Alibeigi et al (2009) proposed a hardware scheme based on

pipelined architecture for minutiae extraction and false minutiae elimination.

The proposed method extracts the fingerprint minutiae from binary image

effectively but with high computational complexity.

Bansal et al (2010) proposed a new algorithm for extracting minutiae from fingerprint images using the binary hit-and-miss transform of

mathematical morphology. This method uses a pre-processing stage which

involves morphological operators to remove superfluous information

followed by thinning. Minutiae extraction using hit and miss transform

reduces the effort of the post processing stage since a larger number of false


minutiae points are removed during detection itself. A simple distance based false minutiae removal method is adopted in this paper.

Gnanasivam & Muttan (2010) proposed an efficient algorithm for

fingerprint feature extraction based on vertical orientation of ridges and

connected component analysis concept. The minutiae extraction process is

accomplished by block processing which involves a line based feature

extraction algorithm, connected components and ridge tracing approach. The

proposed minutiae extraction algorithm improves the performance of fingerprint matching, with some limitations in the vertical orientation process; moreover, the method has high computation time and cost.

Gao et al (2010) proposed a new minutiae extraction method which

is based on Gabor phase. This method works in the transform domain of

fingerprint image where the image is convolved by a Gabor filter which

results in a complex image. The complex image is then transformed into its

corresponding amplitude and phase part. Finally a minutiae extractor extracts

the minutiae points directly from the Gabor phase field.

Stephen et al (2012) developed a new idea for extracting minutiae points and removing false minutiae points by implementing some simple

fuzzy rules based on the distance criteria. The false minutiae points are then

exported to a text file in the workspace. Stephen et al (2013) proposed a post

processing technique for the removal of false minutiae points from the

fingerprint image. A new fuzzy rule based false minutiae elimination system

is proposed with modified if-then rules by considering the issue of not

removing the true minutiae points in post processing. The method achieves this aim effectively and is easy to implement, though it requires high computational effort.
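
Distance-based false minutiae elimination, which several of the above works rely on, can be illustrated very simply: any two minutiae closer than a threshold (typically related to the average ridge spacing) are treated as spurious and removed. The threshold and the data layout below are illustrative assumptions, not values from the cited papers.

    import numpy as np

    def remove_close_minutiae(minutiae, min_dist=8.0):
        """Drop pairs of minutiae that lie closer together than min_dist pixels.

        minutiae: array of shape (N, 2) holding (x, y) coordinates.
        Such close pairs typically arise from broken ridges, spurs or bridges
        rather than from genuine ridge endings or bifurcations.
        """
        pts = np.asarray(minutiae, dtype=float)
        keep = np.ones(len(pts), dtype=bool)
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                if keep[i] and keep[j] and np.linalg.norm(pts[i] - pts[j]) < min_dist:
                    keep[i] = keep[j] = False      # both points of a too-close pair are spurious
        return pts[keep]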


From all the above discussed minutiae extraction techniques, it is

observed that most of the extraction technologies and false minutiae

elimination methodologies suffer from problems associated with the handling

of poor quality impressions, distortions in both geometric position and

orientation and difficulties in getting a match among multiple impressions of

the same fingerprint. To overcome the above said limitations, an efficient post processing stage is necessary to achieve good false accept and false reject rates in fingerprint matching with less computational effort.

2.3 FINGERPRINT RECOGNITION

The problem of automatic fingerprint recognition has attracted wide

attention among researchers worldwide and has led to extensive research in

this area. Grasselli (1969) was the first to propose a linguistic approach for fingerprint classification. Jain et al (2001) proposed a hybrid matching

algorithm that uses both minutiae information and texture information for

matching the fingerprints. The computational requirement of the hybrid

matcher is dictated by the convolution operation associated with the Gabor

filters. The proposed method substantially improves the system performance

but the speed of the algorithm is very low when compared to other methods.

Willis & Myers (2001) developed a robust algorithm which allows

good recognition of low quality fingerprints by simultaneously smoothing and

enhancing poor quality fingerprints derived from a database of imperfect

fingerprints. A number of neural network based classifiers are analyzed to

select an optimal classifier. Correlation based matching technique with slight

improvement was adopted to cross correlate the wavelet transform of two

fingerprints. The proposed method works well for good fingerprints. The

method is computationally inexpensive as long as the resolution of the

fingerprint image is kept low.


Abdallah (2005) proposed a new fingerprint identification technique based on an Artificial Neural Network (ANN), in which a novel clustering algorithm is used to detect similar feature groups from multiple template images, thus forming a cluster set. In the proposed method, quick response is achieved by

manipulating the search order inside the experimental database. The

configuration of the artificial neural network system used in this approach

provides good generalization ability and sufficient discrimination capability.

The proposed approach provides efficient one-to-many matching of

fingerprints on large databases.

Cappelli et al (2006) evaluated the performance of fingerprint verification systems based on the theoretical and practical issues related to the performance evaluation of the Fingerprint Verification Competition (FVC 2004). The paper introduced a simple and effective method for comparing algorithms at the score level and studied error correlation and algorithm fusion. It provides extensive information about verification systems, which is useful for research.

Barreto et al (2006) provided a method for fingerprint image enhancement using neural networks. A computational segmentation method is applied to detect the region of interest in fingerprint images. The approach is based on the hypothesis that a small fingerprint fragment resembles a two-dimensional sinusoidal function. Therefore, its Fourier spectrum must present a

well-defined pattern. Since neural networks are very suitable for solving

pattern recognition problems, a multi layer perceptron network is used to

discriminate the regions containing fingerprint fragments from the rest of the

image. The proposed model is tested over fingerprint images obtained from

the NIST special database 27, and the obtained results demonstrate that the

approach works reasonably well for images with different noise and contrast

levels.


Rashid & Hossain (2006) proposed a fingerprint recognition

scheme in which certain features of fingerprint image are applied to the back

propagation network for training purpose and the values of the nodes are

updated and stored in a relational knowledge base. For fingerprint

recognition, the verification part of the system identifies the fingerprint of a

person with the help of the previous experimental values which are stored in the relational database. The accuracy produced by the proposed method is not satisfactory when compared to other existing methods, because only the positions of the minutiae points are considered (the orientation of the ridges, core

and delta points are not considered).

Gu et al (2006) proposed a novel representation of fingerprint

which includes both minutiae features and model based orientation field.

Fingerprint matching is done by combining the decision of the matcher based

on global structure (orientation) and local cue (minutiae). This ensemble

classifier considerably improves the performance of the system. The system is

more robust but it takes more time for completion.

Arivazhagan et al (2007) introduced a new approach for fingerprint

verification based on Gabor co-occurrence features of the fingerprint image.

The proposed Gabor wavelet transform based method provides both local and

global information of the fingerprint in fixed length finger code. Then

fingerprint matching is done by means of finding the Euclidean distance

between the two corresponding finger codes. The recognition rate of the

proposed method is not satisfactory for non-overlapping images.

Ravi et al (2009) proposed fingerprint recognition using a minutiae score matching method, in which the extracted minutiae are stored in matrix form and matching is done using a matching score. During the matching process,

each minutiae point is compared with the template minutiae point. The

outputs produced are the reference points which are used to convert the


remaining data points to polar coordinates. The proposed method is not

effective for low quality images.

Ashwini & Mukesh (2010) proposed an effective fingerprint matching algorithm based on feature extraction. A novel fingerprint recognition technique for minutiae extraction and minutiae matching, which is based on an alignment-based method, has also been introduced.

Thai & Tam (2010) proposed a new fingerprint recognition system

using standardized fingerprint model which is used to synthesize the

templates of fingerprints. In this model, after pre-processing step,

transformation between templates, parameter adjustment and fingerprint

synthesization are done for achieving effective fingerprint matching. The accuracy rate is low when compared to other existing methods.

Chandrabhan et al (2010) combined many methods to build a minutiae extractor and a minutiae matcher. This method adopts an alignment based matching algorithm consisting of two stages. The first phase is called

the alignment stage where the minutiae points of two fingerprints are aligned

based on a similarity measure. This stage is followed by the match phase,

where an elastic match algorithm is used to count the matched minutiae pairs.

Fingerprint alignment and matching stage improves the matching score but

the system is not generally reliable.

Kekre & Bharadi (2010) used correlation based fingerprint recognition based on multiple features derived from the fingerprint, which are collectively used for consistent core point detection. This method integrates the sine component of the orientation field over three segments, which are linearly summed to produce a good approximation of the fingerprint with more iterations. The core point is then estimated using the Poincare index. The overall accuracy rate for an unconstrained database is low.


Lourde & Khosla (2010) discussed the issue of selection of an

optimal algorithm for fingerprint matching in order to design a system that

matches required specifications in performance and accuracy. This paper also

says that in order to achieve desired accuracy and system performance, it is

necessary to completely understand the specifications and implementation

details of the existing methods.

Pornpanomchai & Phaisitkulwiwat (2010) adopted the traditional method of fingerprint recognition based on the Euclidean distance. The recognition process consists of three steps. The first step deals with the calculation of the Euclidean distance between the core point and the bifurcation points in sixteen sectors. The second step compares all sixteen Euclidean distances between the training and testing data sets. Finally, the best match is

selected. This method improves the system performance but the average

access time per image is high when compared to other systems.
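
The matching step described above reduces to comparing two vectors of sixteen core-to-bifurcation distances; a minimal sketch with an illustrative tolerance is given below.

    import numpy as np

    def sector_distances(core, bifurcations, sectors=16):
        """Average distance from the core point to the bifurcations in each angular sector."""
        core = np.asarray(core, dtype=float)
        pts = np.asarray(bifurcations, dtype=float)
        angles = np.arctan2(pts[:, 1] - core[1], pts[:, 0] - core[0]) % (2 * np.pi)
        dists = np.linalg.norm(pts - core, axis=1)
        sector_ids = (angles / (2 * np.pi / sectors)).astype(int)
        return np.array([dists[sector_ids == s].mean() if np.any(sector_ids == s) else 0.0
                         for s in range(sectors)])

    def match(template_vec, query_vec, tolerance=5.0):
        """Declare a match when all sixteen sector distances agree within the tolerance."""
        return np.all(np.abs(np.asarray(template_vec) - np.asarray(query_vec)) < tolerance)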

Sengar et al (2012) designed a supervised neural network for

fingerprint images using the three basic patterns whorls, arches and loops.

The output produced by this method improves the recognition rate to a certain

extent but the complexity in implementing the proposed algorithm is high.

Mirzaei et al (2013) proposed a new recognition approach based on

the number, location and surrounded area of the singular points. The classifier

is rule based, where the rules are generated independent of a given set. The

proposed method is invariant to translation, rotation and scale changes. The

accuracy of this method is improved significantly. The main drawback with

this method is that some of the images are misclassified because of inefficient

rules.

Zhou et al (2013) introduced a new recognition technique based on

scale invariant feature transformation descriptors. These descriptors are


employed to fulfill the verification tasks associated with low quality fingerprints with a lot of cuts or scratches. A two-step matcher, called improved all-descriptor-pair matching, is also proposed to implement the 1:N

verifications in real time. The proposed fingerprint identification scheme

achieves a significant improvement in accuracy when compared with the conventional minutiae based methods, though at the cost of high complexity.

Goranin et al (2013) introduced the concept of using genetic algorithm in

optimizing the choice of positioning the fingerprint.

From all the aforementioned fingerprint recognition techniques, it is

noted that most of the recognition techniques have their own weaknesses, such as

poor recognition because of complex distortions in images, creation and usage

of fingerprint test databases and high time complexity for recognizing low

quality fingerprint images. To provide better solutions for the above said

limitations, a well-organized recognition system is required to deal with the

low quality fingerprint images in an optimal way.

2.4 FINGERPRINT SECURITY DURING TRANSMISSION

2.4.1 Data Hiding

Weinberger & Sapiro (2000) introduced the JPEG-LS predictor, which aims at reducing the difference value. For eliminating the replay attack,

that is, where a previously intercepted biometric is replayed, Ratha et al

(2001) proposed a challenge/response based system. A pseudo-random

challenge is presented to the sensor by a secure transaction server. The sensor

acquires the current biometric signal and computes the response for that

challenge. Then, the acquired signal and the response computed are compared

against the received signal in the transaction server for consistency. An

inconsistency reveals the possibility of resubmission attack. A novel method

is also developed to protect templates from false usage which involves an


imprecise and non-invertible version of the biometric signal or the feature

vector. In addition to this, the fingerprint is secured by hiding messages by the

use of authentication stamps such as personal identification information in the

compressed domain.

Tian (2003) introduced a difference expansion based reversible

watermarking technique which creates space by expanding a difference. The

data and the secondary information are further added to the expanded

difference and are embedded into the image. In this method the differences

between adjacent pixels are doubled to generate a new Least Significant Bit

(LSB) plane for accommodating additional data.
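
Difference expansion on a single pixel pair can be written out concretely: the integer average of the pair is kept, the difference is doubled and the payload bit occupies the freed least significant bit, and the inverse transform recovers both the bit and the original pixels exactly. The sketch below shows only this basic transform, without the overflow handling of Tian's complete scheme.

    def de_embed(x, y, bit):
        """Embed one bit into the pixel pair (x, y) by difference expansion (no overflow check)."""
        l = (x + y) // 2          # integer average, kept unchanged by the transform
        h = x - y                 # difference to be expanded
        h2 = 2 * h + bit          # doubling frees an LSB for the payload bit
        return l + (h2 + 1) // 2, l - h2 // 2

    def de_extract(x2, y2):
        """Recover the hidden bit and the original pixel pair from a marked pair."""
        l = (x2 + y2) // 2
        h2 = x2 - y2
        bit, h = h2 & 1, h2 >> 1  # arithmetic shift reverses the expansion (also for negatives)
        return bit, l + (h + 1) // 2, l - h // 2

    # Round trip: the pair (120, 117) with bit 1 is marked and then exactly restored.
    assert de_extract(*de_embed(120, 117, 1)) == (1, 120, 117)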

Alattar (2004) proposed an attack on a face recognition system

where a specific user is attacked through synthetically generated images. At

each step, several images are multiplied with a weight and added to the

current candidate image. The modified image is then taken as the new candidate image. These iterations are repeated until no improvement in the matching score, which is calculated as a sigmoidal function, is obtained.

Kundur & Karthik (2004) proposed a method for watermarking in

which the content owner encrypts the signs of host DCT coefficients and each

content user uses a different key to decrypt a subset of the coefficients, so that

a series of different fingerprint versions are generated. Wu & Memon (2004)

proposed the Gradient Adjusted Predictor (GAP) which is used in context

based adaptive lossless image coding algorithm to provide better results.

Kuribayashi & Tanaka (2005) proposed a method in which each sample of a

cover signal is encrypted by a public key mechanism and a homomorphic

property of encryption is used to embed some additional data in to the

encrypted signal. Kamstra & Heijmans (2005) and Coltuc & Chassery (2007)

calculated the expanded difference by taking the difference between the

adjacent pixels. Ni et al (2006) proposed a method in which a data hider can


also perform reversible data hiding using a histogram shift mechanism, which

utilizes zero and peak points of the histogram and slightly modifies the pixel

gray values to embed data into the image.
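
The histogram shift mechanism can be sketched in a few lines: pixel values between the peak bin and an empty zero bin are shifted by one, and pixels at the peak value then absorb one payload bit each. The version below assumes the zero bin lies above the peak and is genuinely empty; it illustrates the idea rather than reproducing Ni et al's complete algorithm.

    import numpy as np

    def hs_embed(image, bits):
        """Histogram-shift embedding (sketch: peak assumed below gray level 255, zero bin empty)."""
        img = image.astype(np.int32)
        hist = np.bincount(img.ravel(), minlength=256)
        peak = int(hist.argmax())                           # most populated gray level
        zero = peak + 1 + int(hist[peak + 1:256].argmin())  # least populated level above it
        flat = img.ravel()
        flat[(flat > peak) & (flat < zero)] += 1            # shift to empty the bin next to the peak
        carriers = np.flatnonzero(flat == peak)[:len(bits)]
        flat[carriers] += np.asarray(bits, dtype=np.int32)  # peak pixels absorb one bit each
        return flat.reshape(img.shape), peak, zero

    def hs_extract(marked, peak, zero, n_bits):
        """Recover the payload bits and restore the original image exactly."""
        img = marked.astype(np.int32)
        flat = img.ravel()
        carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))[:n_bits]
        bits = (flat[carriers] == peak + 1).astype(int)
        flat[carriers] = peak                               # carriers return to the peak value
        flat[(flat > peak + 1) & (flat <= zero)] -= 1       # undo the histogram shift
        return bits, flat.reshape(img.shape)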

Thodi & Rodriguez (2007) and Sachnev et al (2009) introduced methods in which the differences between the predicted and the original pixel values are taken into account. Lian et al (2007) proposed a joint data hiding and encryption scheme

in which a part of cover data is used to carry the additional message and the

rest of the data is encrypted. In this method, the vector difference and the

DCT coefficients are encrypted, while a watermark is embedded into the

amplitudes of the DCT coefficients. Soutar proposed a hill climbing attack for

a simple image recognition system which is based on filter based correlation.

Synthetic templates are gradually presented as input to a biometric authentication system. Soutar also showed that the system could be compromised up to the point of an incorrect positive identification.

Lee et al (2008) developed a method for watermarking by

considering the pixels of a block and the mean value of the block. To

minimize the difference value, the watermarking techniques are built on high performance predictors. Luo et al (2010) introduced a reversible watermarking scheme based on an interpolation technique. Hong et al (2010) used orthogonal projection and prediction error based modification for watermarking.

Cancellaro et al (2010) introduced a method in which the cover data in higher

and lower bit planes of transform domain are respectively encrypted and

watermarked.

Coltuc (2011) proposed a modified data embedding procedure for

prediction error expansion reversible watermarking scheme in which the

prediction error is not only embedded into the current pixel but also into its

prediction context. Zhang (2011) proposed a lossy compression and iterative

reconstruction for encrypted images. A novel reversible data hiding scheme is


also proposed in which the data of the original cover is entirely encrypted and

the additional message is embedded by modifying a part of the encrypted

data. Coltuc (2012) proposed a low distortion transform based digital

watermarking scheme for secured transmission of data in which the classical

prediction error expansion is split uniformly into four parts thus reducing the

distortion when compared to methods based on high performance predictors.

From the above mentioned survey, it is found that low difference values ensure low distortion of embedding. In order to minimize such

differences, most of the watermarking schemes are built on high performance

predictors, which increase the mathematical complexity. The scheme

proposed by Coltuc (2012) reduces the distortion introduced by the

watermarking by considering a simple predictor (JPEG4) together with an

optimized data embedding procedure.

2.4.2 Data Integrity

Authentication, verification and identification systems help in identifying a person. Accurate automatic personal identification is becoming more and more important to the operation of the increasingly interconnected information society. Conventional automatic personal identification technologies such as passwords, tokens and identification cards verify the identity of a person, but are no longer reliable enough to satisfy the security requirements of electronic transactions. All the traditional techniques suffer

from a common problem of their inability to differentiate between an

authorized person and an impostor who illegally acquires the access privilege

of the authorized person. In addition to this, if they are not properly

implemented, then it leads to misevaluation of the results.

In network security, data integrity is the assurance that, the data

received by the receiver is exactly the same as that of the authorized sender


without any modification, insertion, deletion or replay. Thus integrity refers to

the validity of data. Message authentication is a mechanism that is used to

verify the integrity of a message. It ensures that the data received are exactly

as sent by the sender and that the purported identity of the sender is valid.

Symmetric encryption provides authentication among those who share the secret key.

Biometrics is a technology that uniquely identifies a person based on something you are, and therefore it can naturally differentiate an authorized person from an unauthorized one. However, fingerprint based authentication requires the fingerprint image to be transferred between the sensor and the workstation, which is potentially a weak point in the security system. Though the communication channel is encrypted, the stored fingerprint images might be fraudulently transmitted. To enhance security, additional information is directly embedded in fingerprint images using appropriate data

hiding techniques.

The two most common cryptographic techniques for message

authentication are Message Authentication Code (MAC) and hash code. MAC

is an algorithm which takes a variable length message and a secret key as

input and produces an authentication code. This code is compared with the

code generated by the recipient to verify the integrity of the message. A hash

function maps a variable length message into a fixed length hash value or

message digest, which serves as the authenticator. The hash algorithms MD5 and SHA-1 are built around compression functions to resist collisions and generate the message digest without a key. The MD5 algorithm (RFC 1321), developed by Ron Rivest in the early 1990s, converts a variable size input into a 128 bit message digest (Stallings 2006). The Secure Hash Algorithm (SHA-1) was

developed by the National Institute of Standards and Technology (NIST) and


published as a Federal Information Processing Standard (FIPS 180) in 1993; a

revised version was issued as FIPS 180-1 in 1995 and is generally referred to

as SHA-1 (Stallings 2006). SHA-1 converts variable size input to a 160 bit

fixed size output.
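
For illustration, such digests and a keyed authentication code can be computed with the standard Python library (hashlib and hmac); the message and key below are placeholders.

    import hashlib
    import hmac

    message = b"fingerprint template record"
    secret_key = b"shared-secret-key"

    # Unkeyed hash codes: fixed-length digests regardless of the message length.
    md5_digest = hashlib.md5(message).hexdigest()      # 128-bit digest -> 32 hex characters
    sha1_digest = hashlib.sha1(message).hexdigest()    # 160-bit digest -> 40 hex characters

    # Keyed message authentication code: verification requires the shared secret.
    mac = hmac.new(secret_key, message, hashlib.sha1).hexdigest()

    print(len(md5_digest) * 4, len(sha1_digest) * 4)   # prints: 128 160
    print(mac)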

2.5 NEURAL NETWORKS

Neural Networks (NN) are simplified models of the biological

nervous system. A neural network is a highly interconnected network consisting of a large number of processing elements called neurons, and it is inspired by the brain. A neural network learns by examples. Neural networks exhibit mapping capabilities in the sense that they can map input patterns to their associated output patterns. A neural network is trained with known examples of a problem to acquire knowledge about it. They possess

the capability to generalize, that is, they can predict new outcomes from past

trends. They are usually robust systems and are fault tolerant. They process

information in parallel at high speed in a distributed manner. Neural networks

adopt various learning mechanisms of which supervised learning and

unsupervised learning have turned out to be very popular. In supervised learning, a teacher is assumed to be present during the learning process. The

network aims to minimize the error between the target (desired) output

presented by the teacher and the computed output to achieve better

performance (Jang & Sun 1997).
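
The error minimization idea behind supervised learning can be made concrete with a very small example: a single sigmoid unit whose weights are adjusted by gradient descent so that the computed output approaches the teacher-supplied target. This is a generic illustration, not any specific network from the surveyed papers.

    import numpy as np

    def train_logistic_unit(inputs, targets, lr=0.5, epochs=2000):
        """Supervised learning: adjust weights so the computed output approaches the target."""
        rng = np.random.default_rng(0)
        w = rng.normal(scale=0.1, size=inputs.shape[1])
        b = 0.0
        for _ in range(epochs):
            out = 1.0 / (1.0 + np.exp(-(inputs @ w + b)))  # computed output (sigmoid unit)
            err = targets - out                             # error signal supplied by the teacher
            w += lr * inputs.T @ err                        # gradient step that reduces the error
            b += lr * err.sum()
        return w, b

    # Learn the logical AND mapping from four labelled examples.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1], dtype=float)
    w, b = train_logistic_unit(X, y)
    print((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int))  # expected: [0 0 0 1]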

Neural network architectures have been broadly classified as single

layer feed forward networks, multilayer feed forward networks, and recurrent

networks. Some of the familiar neural network systems include Back

Propagation Network (BPN), Perceptron, ADALINE (Adaptive Linear

Element), Associative Memory, Boltzmann Machine, Adaptive Resonance

Theory, Self-Organizing Map, and Hopfield Network (Fausett 2004). Neural

networks have been beneficially applied to the problems in the fields of


pattern recognition, image processing, data compression, forecasting, and

optimization. Neural algorithms are applied in the feature extraction of a zero-watermark scheme for digital images (Sang et al 2006). Neural algorithms are also used in the steganalysis of still images, where the embedded information is fed to the neural networks to obtain the secret message. This method is used for internet

security, watermarking, etc. The new release V3.0 of NEUROGRAPH is the

first soft-computing simulation environment combining neural networks, fuzzy

logic and genetic algorithms.

2.6 GENETIC ALGORITHM

Decision making features arise in all fields of human activity, such as scientific and technological applications, and can affect every sphere of our life. Modeling biological evolution was attempted even in the formative years of evolutionary algorithms for optimization and machine learning. In 1965, Rechenberg introduced evolution strategies, a method used to optimize real valued parameters for devices. Genetic algorithms were invented by Holland in the 1960s and were later developed by Holland, his students and colleagues at the University of Michigan in the 1970s (Goldberg & Deb 2008). Holland's book Adaptation in Natural and Artificial Systems presented genetic algorithms as a generalization of biological evolution and gave a theoretical framework for adaptation under the genetic algorithms. Genetic algorithms are computerized search and optimization algorithms based on the mechanics of natural genetics and natural selection (Davis 1991).

The extension of genetic algorithms to real-world applications took place in areas such as pattern recognition, flow control devices, structural optimization, micro-chip design, aerospace applications and micro-biology. A genetic algorithm is


defined as a probabilistic search algorithm that iteratively transforms a set (called a population) of mathematical objects (typically fixed-length binary character strings), each with an associated fitness value, into a new population of offspring objects using the Darwinian operation of natural selection and using operations that are patterned after naturally occurring genetic operations, such as crossover (sexual recombination) and mutation (Rajasekaran & Pai 2003).
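
As a concrete illustration of this definition, the sketch below evolves a population of fixed-length binary strings using fitness-proportionate selection, single-point crossover and bit-flip mutation; the fitness function (counting ones) and all parameter values are purely illustrative.

    import random

    random.seed(1)

    def fitness(chromosome):
        """Toy fitness: the number of 1-bits in the string (the 'one-max' problem)."""
        return sum(chromosome)

    def select(population):
        """Fitness-proportionate (roulette-wheel) selection of one parent."""
        total = sum(fitness(c) for c in population) or 1
        pick, acc = random.uniform(0, total), 0.0
        for c in population:
            acc += fitness(c)
            if acc >= pick:
                return c
        return population[-1]

    def crossover(a, b):
        """Single-point crossover (sexual recombination) of two parent strings."""
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    def mutate(c, rate=0.01):
        """Bit-flip mutation applied independently to every gene."""
        return [1 - g if random.random() < rate else g for g in c]

    def evolve(pop_size=30, length=20, generations=50):
        population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            population = [mutate(crossover(select(population), select(population)))
                          for _ in range(pop_size)]
        return max(population, key=fitness)

    print(fitness(evolve()))   # typically close to the optimum of 20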

To date, most of the GA studies are accessible through the books by Davis (1991), Goldberg (1989), Holland (1975) and Deb (1995). The first application towards structural engineering was carried out by Goldberg & Samtani (1986), who applied a genetic algorithm to the optimization of a ten-member plane truss. Jenkins applied a genetic algorithm to a trussed beam structure. Deb (1999) applied GA to structural engineering problems. Apart

from structural engineering, it can also be used in biology, computer science,

image processing, pattern recognition and neural networks.

Hybrid methodologies combining genetic algorithms with other search algorithms have received considerable attention. Nowadays, hybrid genetic algorithms are used to find optimal solutions in large search spaces. To find the optimal solution in the large search space corresponding to a given problem, the better individuals have to be copied to the next generation, which improves convergence and increases the efficiency of finding the best individual.

2.7 FUZZY LOGIC

Uncertain information can take on many different forms. Uncertainty

arises because of complexity, unawareness, arbitrariness, inadequate

measurements and lack of knowledge. Fuzzy sets provide a mathematical way to represent vagueness and fuzziness in humanistic systems. Fuzzy logic is a


form of many-valued logic which deals with reasoning that is approximate

rather than fixed and exact. Compared to traditional binary sets (where

variables may take on true or false values) fuzzy logic variables may have a

truth value that ranges in degree between 0 and 1. When linguistic variables

are used, these degrees may be managed by specific functions. The term

"fuzzy logic" was introduced in the year 1965 by Zadeh. Fuzzy logic has been

applied to many fields, from control theory to artificial intelligence. All the

physical processes are based largely on imprecise human reasoning. This

imprecision is a form of information that can be quite useful to humans. The

ability to embed such reasoning in determined and complex problems is the

decisive factor by which the efficacy of fuzzy logic is judged. From the

historical point of view, the subject of uncertainty has not always been

embraced within the scientific community (Klir et al 1988). In the traditional

view of science, uncertainty represents an undesirable state that must be

avoided at all costs. The leading theory in quantifying uncertainty in scientific

models from the late nineteenth century until the late twentieth century has

been the probability theory. Black (1973) expressed uncertainty using probability theory. Zadeh (1965) introduced his influential idea of a continuous valued logic called fuzzy set theory. In the 1980s, other investigators showed a strong relationship between evidence theory, probability theory and possibility theory.
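
The notion of a graded truth value can be illustrated with a simple triangular membership function: instead of labelling a quantity as crisply true or false, it is assigned a degree of membership between 0 and 1. The variable and the breakpoints below are purely illustrative.

    def triangular(x, a, b, c):
        """Triangular membership function: degree rises from a to the peak at b, then falls to c."""
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)

    # Degree to which an illustrative block-contrast value belongs to the fuzzy set "good quality".
    for contrast in (10, 35, 60, 90):
        degree = triangular(contrast, a=20, b=55, c=90)
        print(contrast, round(degree, 2))   # truth values between 0 and 1, not just true/false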

In 1973, with the publication of Zadeh's basic theory of fuzzy controllers, other researchers began to apply fuzzy logic to various mechanical and industrial processes. Professors Terano and Shibata in Tokyo, along with Professors Tanaka and Asai in Osaka, made major contributions both to the development of fuzzy logic theory and to its applications. In 1980, Professor Mamdani in the United Kingdom designed the first fuzzy controller for a steam engine with

great success. In 1987 Hitachi used a fuzzy controller for the Sendai train

control. It was also during this year of 1987 when the company Omron


developed the first commercial fuzzy controllers. The year 1987 is thus regarded as marking the first products based on fuzzy logic to be traded. In 1993, Fuji applied fuzzy logic to control chemical

injection water treatment plants for the first time in Japan. In addition to the

study of the applications of fuzzy logic, Professors Takagi and Sugeno

developed the first approach to construct fuzzy rules. The fuzzy rules, or rules

of a fuzzy system, define a set of overlapping patches that relate a full range of inputs to a full range of outputs. In that sense, the fuzzy system approximates some mathematical function or equation of cause and effect. Recent advances in neural networks and genetic algorithms are certainly a fitting complement to fuzzy logic. Neuro-fuzzy systems use learning methods based on neural networks to identify and optimize the parameters of the fuzzy system.