

A Dissertation
on
Face Detection

by
Jugal Patel (105110693014)
Aakash Mehta (105110693030)

of
Institute of Science & Technology for Advance Study and Research (ISTAR)

Submitted to
Gujarat Technological University
in partial fulfillment of the MCA programme for the Academic Year 2012-2013


ACKNOWLEDGEMENTS

We, being students of Gujarat Technological University, feel glad and fortunate to have had this opportunity to compile, organize and present the material collected from various sources in the form of a report of value for the MCA programme.

We express our heartfelt gratitude to Mr. Minjal Mistry, Assistant Professor at our college, for providing all the facilities needed to complete this dissertation.

Our special thanks are extended to ISTAR College, whose continuous encouragement, suggestions and constructive criticism have been invaluable assets throughout our research work.

We especially thank Mrs. Priya Swaminarayan, who guided us throughout the research and was always ready to help. We also heartily thank all the staff members for their valuable guidance, without which this research might not have been successful.

Jugal Patel (105110693014)

Aakash Mehta (105110693030)


Table of Contents

Sr. Topics

1. Abstract
2. Key Terms
3. List of Images
4. Introduction
5. Literature Survey
6. Description
    6.1 Image Segmentation
    6.2 Face Detection Techniques
    6.3 Morphological Image Processing
    6.4 Skin Color based Face Detection
7. Advantages and Disadvantages
8. Results
9. Some Problems
10. Case Study
11. Conclusion
12. References


ABSTRACT

Face detection has become an important problem and is attracting much attention in the image processing market. For applications such as videophone and teleconferencing, face detection also enables more efficient coding schemes. In this dissertation, we present many techniques used for detecting faces in a given image. Several techniques are explained, such as neural networks, knowledge-based methods, pattern-based methods, and morphological operations like dilation and erosion. The RGB and YCbCr color models are also used to show how a face can be detected with a skin-color-based technique. We have also examined some software applications that support face detection, such as Lenovo VeriFace 3.6, FastAccess, the Sony Cyber-shot Digital Camera HX1, and visage|SDK.


IMPORTANT TERMS

Image

Image Segmentation

Thresholding

Edge based Segmentation

Region based Segmentation

Face Detection

Knowledge based

Pattern based

Case Study

Lenovo VeriFace 3.6

Cyber-shot Digital Camera HX1

FastAccess

visage|SDK™


LIST OF IMAGES

No. List of Images

1. Face Detection
2. Image Segmentation
3. Thresholding
4. Edge Relaxation
5. Region Based
6. Template
7. Face Contour
8. Facial Features
9. Motion Detection
10. Four and Eight Connected
11. Dilation Operation
12. Skin Color Based Detection
13. YCbCr Images
14. Region Containing Part
15. Results
16. Some Problems


INTRODUCTION

What is Face Detection…?

Face detection is the process of locating human faces in still photographs

and videos.

Face detection is a special application of image segmentation: the image or video frame is segmented, and the segment or region which contains a human face is then identified.

Face detection is a necessary preprocessing step for any automatic face recognition or facial expression analysis system.

The figures given below give an overall idea of what face detection does.

Figure 1: Face is given by - a) Rectangle. b) Ellipse.


Why do we detect Faces…?

Face detection and interpretation can have a number of applications in

machine vision. Some of the important applications are:

Person identification – face recognition.

Human-computer interaction – eye, lip tracking.

Video telephone, video conference

Why is Face Detection a challenging task…?

There are many factors which make the task of face detection

challenging. Some of these factors are:

Scale.

Pose or orientation.

Facial expression.

Brightness, caused by varying illumination.

Resolution of an image or video frame.

Disguise, due to spectacles, beards.

Background: simple or complex, static or dynamic.

How can we detect Face…?

Several approaches have been proposed over the last several years for solving this problem. Face detection techniques can be classified into two main

families:

Implicit or Pattern Based

Explicit or Knowledge Based.


Face detection based on Image segmentation.

The process of partitioning an image into several constituent components

is called Image Segmentation. Its main goal is to divide an image into parts that

have a strong correlation with objects or areas of the real world contained in the

image. Segmentation plays a very important role in applications that involve analysis of image data.

This document starts with the basics of images and image segmentation. It is followed by a brief description of the various basic techniques used for image segmentation, such as thresholding, edge-based segmentation and region-based segmentation.

Face detection is the process of locating a human face in a given arbitrary image. A description of the importance of face detection and of the challenges in detecting a face is given, followed by a brief overview of various face detection techniques. Among these, human skin color based face detection has been chosen for implementation. The remaining portion of the document covers some basics of morphological operations; after these basics, the implementation details are given along with their pros and cons. Results of this technique are presented as a set of output images, some of which contain false detections.

At the end, a conclusion summarizes the dissertation work, describing important points about face detection.


LITERATURE SURVEY

Face detection can be performed by independently matching templates for facial components such as the eyes, nose and mouth. The configuration of the components during classification is unconstrained, since that system did not include a geometrical model of the face.

Other, model-based techniques can also be used for face detection, in which models are used to compute the quotient image. Just detecting the face is not enough here; it is also necessary to normalize it, otherwise the performance of face detection suffers.

Graph matching is also used for face detection, and performs a little better compared with the other approaches.

In the literature, it was also observed that negatives of face photographs are the most difficult to detect properly. The importance of top lighting for face detection was demonstrated using a different task, matching surface images of faces to determine whether they were identical or not.

It is also recognized that, due to facial expression, different images of the same face may not appear familiar.

[A] Wiskott et al., making use of the geometry of local features, proposed a structural matching approach named Elastic Bunch Graph Matching (EBGM). They used Gabor wavelets and a graph consisting of nodes and edges to represent a face. With the face graph, the model is invariant to distortion, scaling, rotation and pose. [B] Support Vector Machines (SVMs) are a newer technique for face detection. The SVM approach uses binary tree recognition to tackle face detection, together with concepts such as the optimal separating hyperplane and eigenfaces. Applying a different, image-based approach, [C] Rowley et al. adopted a neural network method in which multiple multilayer perceptrons with different receptive fields are trained, and overlapping detections within one network are then merged.


Description

IMAGE SEGMENTATION

Image:

An image may be defined as a two dimensional function, f (x, y), where x

and y are spatial (plane) coordinates, and the amplitude of f at any pair of

coordinates (x, y) is called the intensity or gray level of the image at that point.

A digital image is composed of a finite number of elements, each of which

has a particular location and value; each such element is called a pixel. Based on the information conveyed by a pixel, an image can be an indexed image, a gray-scale image, or a true color image.

Image Segmentation:

“The process of partitioning an image into meaningful groups of connected

pixels is called segmentation”.

Here, the word ‘meaningful’ draws the importance. Basically, digital image

is a collection of pixels, represented by some values. So, to partition these values

in meaningful groups makes the segmentation one of the hardest problems.

Image segmentation is one of the most important steps leading to the

analysis of processed image data. Its main goal is to divide an image into parts

that have a strong correlation with objects or areas of the real world contained in

the image.

Segmentation is an important part of practically any automated image

recognition system, because it is at this stage that the objects of interest are extracted for further processing such as description or recognition.


Segmentation of an image is, in practice, the classification of each image pixel into one of the image parts. For example, if the goal is to recognize black

characters, on a gray background, pixels can be classified as belonging to the

background or as belonging to the characters: the image is composed of regions

which are in only two distinct gray value ranges, dark text on lighter background.

The following figure clarifies the process of image segmentation. Here, a

football match scene image is segmented. Two levels of segmentation are displayed. The segmentation process extracts the regions of the image that contain real-world objects.

Figure 2: Image Segmentation


Segmentation can be complete or partial based on the correspondence

between resulting regions and actual input image objects.

Complete Segmentation:

Set of disjoint regions uniquely corresponding with objects in the

input image.

Partial Segmentation:

Regions do not correspond directly with image objects.

Image is divided into separate regions that are homogeneous with respect to a

chosen property such as brightness, color, reflectivity, texture, etc.


Segmentation Methods:

1. Thresholding(Global approaches):

Thresholding is the transformation of an input image f to an output

(segmented) binary image g as follows:

g(i, j) = 1  for f(i, j) >= T
g(i, j) = 0  for f(i, j) < T

Where T is the threshold, g (i, j) = 1 for image elements of

objects, and g (i, j) = 0 for image elements of the background (or vice

versa). Threshold can be brightness, or some other parameter, such as

redness, depending upon the type of the image.

Here, segmentation is accomplished based on the distribution of

pixel properties, such as gray-level values or color.

Thresholding is computationally inexpensive and fast. Thresholding

can easily be done in real time using specialized hardware.

If objects do not touch each other, and if their gray-levels are

clearly distinct from back-ground gray-levels, thresholding is a suitable

segmentation method.

Figure 3: a) Original image b) Threshold image
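For illustration, the thresholding rule above can be written as a short Python (NumPy) sketch; the small test image and the value T = 128 are assumed for demonstration only, since the original implementation was carried out in MATLAB.

```python
import numpy as np

def global_threshold(f, T):
    """Return the binary image g with g(i, j) = 1 where f(i, j) >= T, and 0 elsewhere."""
    return (f >= T).astype(np.uint8)

# Illustrative example: a small synthetic gray-scale image (values 0-255).
f = np.array([[10,  20, 200],
              [30, 220, 210],
              [15,  25, 230]], dtype=np.uint8)

g = global_threshold(f, T=128)
print(g)   # 1 marks object pixels, 0 marks background pixels
```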


2. Edge based Segmentation:

Goal: Connect edges to produce object contours

Edges in images are areas with strong intensity contrasts – a jump

in intensity from one pixel to the next.

Edges characterize boundaries and are therefore a problem of

fundamental importance in image segmentation. Detecting edges in an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties of the image.

In a video, we can determine the motion by tracking the edges

resulting from differencing the consecutive video frames. Or we might

have a 3D wire-frame model of some object, like a cube or some complex

one like a car, and we can find that object’s pose and position using

edges. Or some particular grouping of edges may represent an object like

a face or something else.

Edges can be detected using edge detection operators such as Sobel and Canny. Once we have the edge map of an image, these edges can be combined based on neighborhood properties to form boundaries, and thus we obtain the segments of the image.

In short, here, segmentation is approached by finding boundaries

between regions based on discontinuities in gray levels or in some other

parameters.

Figure 4: Edge Based Segmentation
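As an illustration of edge detection with the Sobel operator mentioned above, the following Python sketch convolves a gray-scale image with the two 3x3 Sobel kernels and thresholds the gradient magnitude to obtain an edge map; the synthetic image and the threshold value are assumptions made only for this example.

```python
import numpy as np

def sobel_edges(img, thresh=100.0):
    """Return a binary edge map using the Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal gradient kernel
    ky = kx.T                                   # vertical gradient kernel

    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * kx)
            gy[i, j] = np.sum(window * ky)

    magnitude = np.hypot(gx, gy)
    return (magnitude >= thresh).astype(np.uint8)

# Illustrative use on a synthetic image containing a bright square on a dark background.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 255
print(sobel_edges(img))
```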


3. Region based Segmentation:

The objective of segmentation is to partition an image into regions.

Here, segmentation is achieved by finding the regions directly. Regions

are constructed directly here without first finding borders between them.

Methods used here are generally better in noisy images, where

borders are extremely difficult to detect.

Homogeneity is an important property of regions and is used as the

main segmentation criterion in region growing, whose basic idea is to

divide an image into zones of maximum homogeneity. The criteria for

homogeneity can be based on gray-level, color, texture, shape, model,

etc.

The main idea here is to classify a particular image into a number

of regions or classes. Thus, for each pixel in the image, we need to somehow decide or estimate which class it belongs to. There are a variety of approaches to region based segmentation and, in our understanding, the performance does not change considerably from one method to another. Since the emphasis of this dissertation lies on an integrated boundary-finding approach given the raw image and the region-classified image, it does not matter much which method is used to obtain the region-classified image, as long as its output gives reasonable results.

Figure 5: Region Growing Example
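A minimal sketch of region growing under a gray-level homogeneity criterion is given below: starting from a seed pixel, 4-connected neighbours are added to the region as long as their intensity stays within a tolerance of the seed value. The seed position and the tolerance are illustrative assumptions, not values taken from the dissertation's implementation.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region of pixels whose gray level is within `tol` of the seed pixel."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    region[seed] = True
    while queue:
        i, j = queue.popleft()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-connected neighbours
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not region[ni, nj]:
                if abs(float(img[ni, nj]) - seed_val) <= tol:
                    region[ni, nj] = True
                    queue.append((ni, nj))
    return region

# Illustrative use: grow a region from the centre of a synthetic image.
img = np.zeros((6, 6), dtype=np.uint8)
img[1:5, 1:5] = 200
print(region_grow(img, seed=(3, 3), tol=10).astype(int))
```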


Face Detection Techniques

Implicit or Pattern Based:

This family treats face detection as a pure pattern recognition task. These approaches work mainly on still gray images, as no color is needed. They search for a face at every position of the input image, commonly applying the same procedure to the whole image. This point of view is designed to address the general, unconstrained problem: given any image, black and white or color, the question is whether it contains none, one, or more faces.

Various techniques are –

1. Templates

2. Neural networks

3. Distribution based

4. SVM (Support Vector Machines)

5. PCA (Principal Component Analysis)

6. ICA (Independent Component Analysis)

Templates:

Performs cross-covariance between the given image and a template.

The template is applied to various parts of the input image at different scales.

Figure 6: a) Ratio templates, b) Average face, c) PDM template potential templates
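To illustrate the template idea, the sketch below slides a small template over a gray-scale image and computes a normalized cross-correlation score at each position, reporting the position with the highest score as the best match; the synthetic image and template are assumptions for demonstration and are not the ratio or average-face templates of Figure 6.

```python
import numpy as np

def match_template(img, tmpl):
    """Return the (row, col) of the best normalized cross-correlation match and its score."""
    ih, iw = img.shape
    th, tw = tmpl.shape
    t = tmpl.astype(float)
    t = (t - t.mean()) / (t.std() + 1e-9)
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(ih - th + 1):
        for j in range(iw - tw + 1):
            window = img[i:i + th, j:j + tw].astype(float)
            w = (window - window.mean()) / (window.std() + 1e-9)
            score = np.sum(w * t) / t.size
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score

# Illustrative use: find a small checker-like pattern inside a synthetic image.
img = np.zeros((6, 6), dtype=np.uint8)
img[3, 3] = 255
img[4, 2] = 255
tmpl = np.array([[0, 255],
                 [255, 0]], dtype=np.uint8)
print(match_template(img, tmpl))   # best match expected at position (3, 2)
```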


PCA:

Principal Component Analysis.

Reduces dimensionality of the data set.

Steps for performing PCA:

1. Obtain the image data

2. Subtract the mean value from each image value

3. Calculate the covariance matrix

4. Calculate the eigenvectors and eigenvalues of the covariance

matrix

5. Choose components and form a feature vector

6. Derive the new data set; create image vector from a given set of

sample images.
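The steps listed above can be sketched in Python (NumPy) as follows; the small random data matrix stands in for the set of flattened sample images (one image per row) and is purely illustrative.

```python
import numpy as np

def pca(data, num_components):
    """PCA on a data matrix whose rows are flattened sample images."""
    mean = data.mean(axis=0)                          # step 2: subtract the mean
    centered = data - mean
    cov = np.cov(centered, rowvar=False)              # step 3: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)            # step 4: eigenvalues and eigenvectors
    order = np.argsort(eigvals)[::-1]                 # sort by decreasing eigenvalue
    components = eigvecs[:, order[:num_components]]   # step 5: choose components
    projected = centered @ components                 # step 6: derive the new data set
    return projected, components, mean

# Illustrative use: 10 "images" of 16 pixels each, reduced to 3 dimensions.
rng = np.random.default_rng(0)
data = rng.normal(size=(10, 16))
projected, components, mean = pca(data, num_components=3)
print(projected.shape)   # (10, 3)
```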

LDA:

Linear Discriminant Analysis.

Reduces dimensionality of the data set, and also preserves class

discriminatory information.

Considers scatter within classes as well as between classes.

More capable of distinguishing image variation.

Training set small → PCA outperforms LDA

Training set large → LDA is better

Neural Network:

From a set of training data, networks are trained.

Face samples, as well as non-face samples are used to train networks.

Multilayer networks trained with multiple prototypes at different scales.

Very reliable, performed well.

Computationally expensive.

Time consuming.

Requires large database of samples.


Explicit or Knowledge Based:

This family takes into account face knowledge explicitly, exploiting

and combining cues or invariants such as color, motion, face geometry, facial

features information, and facial appearance.

Various techniques are –

1. Face Contours

2. Facial Features (geometry)

3. Motion Detection

4. Color.

Face Contours:

Uses knowledge about face contours.

Two curves around eye-brows & lips.

Circle enclosed in two curves for eye.

Two circles for nostrils

Figure 7: Face contour


Facial Features:

Uses face geometry and appearance.

Uses features such as two eyes, two nostrils, nose-lip junction.

Features are detected and compared against relative distances.

Face symmetry can be very useful.

Figure 8: Facial Features

Motion Detection:

Sequential procedure of four steps:

1. Frame difference

2. Threshold

3. Noise removal

4. Adding moving pixels

Figure 9: a) Ref image; b) current frame; c) Difference between current and ref

image; d) Difference between current and previous frame
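A minimal sketch of the four-step frame-differencing procedure is given below using NumPy and SciPy; the synthetic frames, the threshold value and the use of a median filter for noise removal are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import median_filter

def moving_pixels(prev_frame, curr_frame, thresh=30):
    """Steps 1-3: frame difference, threshold, and simple noise removal."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))   # 1. frame difference
    mask = (diff >= thresh).astype(np.uint8)                         # 2. threshold
    return median_filter(mask, size=3)                               # 3. noise removal

# 4. Adding moving pixels: accumulate the masks over several consecutive frames.
frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(3)]
frames[1][2:5, 2:5] = 200      # an object appears in the second frame
frames[2][3:6, 3:6] = 200      # and moves in the third frame

motion = np.zeros((8, 8), dtype=np.uint8)
for prev, curr in zip(frames[:-1], frames[1:]):
    motion |= moving_pixels(prev, curr)
print(motion)
```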


Color Based:

Uses knowledge about human skin color

Observations have shown that "although skin colors of different people appear to vary over a wide range, they differ much less in color than in brightness."

No need of any training.

Pose and orientation invariant.

Not so efficient when the background is complex or contains objects with a skin-like color, or when the image is too dark or of low resolution.


MORPHOLOGICAL IMAGE PROCESSING

Basic Terms and Operations:

This chapter provides insight into some basic terms and operations which

will be used during the implementation. It particularly deals with morphological operations on the input image, such as erosion and dilation. Not all morphological

operations are covered here, but some basic ones are given below.

Morphology

Pixel Connectivity

Structuring Element

Neighborhood

Dilation

Erosion

Opening

Closing

Connected Component Labeling

Morphology:

Morphology is a technique of image processing based on shapes. The

value of each pixel in the output image is based on a comparison of the

corresponding pixel in the input image with its neighbors. By choosing the

size and shape of the neighborhood, you can construct a morphological

operation that is sensitive to specific shapes in the input image.

Morphological operations apply a structuring element to an input image,

creating an output image of the same size. The most basic morphological

operations are dilation and erosion.


Pixel Connectivity:

Connectivity defines which pixels are connected to other pixels.

Morphological processing starts at the peaks in the marker image and

spreads throughout the rest of the image based on the connectivity of the pixels.

Two well-known connectivities in 2-D space are 4-connected and 8–connected.

4-connected:

Pixels are connected if their edges touch. This means that a pair of

adjoining pixels is part of the same object only if they are both on, and are

connected along the horizontal or vertical direction. This is represented in

following figure (a).

8-connected:

Pixels are connected if their edges or corners touch. This means

that if two adjoining pixels are on, they are part of the same object,

regardless of whether they are connected along the horizontal, vertical, or

diagonal direction. This is represented in following figure (b).

Figure 10: a) 4-connected, b) 8-connected


Structuring Element:

Matrix used to define a neighborhood shape and size for

morphological operations, including dilation and erosion. It consists of only

0's and 1's and can have an arbitrary shape and size. The pixels with

values of 1 define the neighborhood.

Neighborhood:

Set of pixels that are defined by their locations relative to the pixel

of interest. A neighborhood can be defined by a structuring element or by

specifying connectivity.

Dilation:

Dilation adds pixels to the boundaries of objects in an image.

The rule for dilation operation says that the value of the output pixel is the

maximum value of all the pixels in the input pixel's neighborhood.

In a binary image, if any of the pixels is set to the value 1, the output pixel

is set to 1.

Figure 11: Dilation operation


Erosion:

Erosion removes pixels from the boundaries of objects in an image.

The rule for erosion operation says that the value of the output pixel is the

minimum value of all the pixels in the input pixel's neighborhood. In a

binary image, if any of the pixels is set to the value 0, the output pixel is

set to 0.
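The dilation and erosion rules described above can be sketched for binary images as follows; the 3x3 square structuring element and the test image are illustrative assumptions.

```python
import numpy as np

def binary_dilate(img, se):
    """Output pixel = maximum of the neighbourhood selected by the structuring element."""
    h, w = img.shape
    sh, sw = se.shape
    pad_h, pad_w = sh // 2, sw // 2
    padded = np.pad(img, ((pad_h, pad_h), (pad_w, pad_w)), constant_values=0)
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + sh, j:j + sw]
            out[i, j] = np.max(window[se == 1])
    return out

def binary_erode(img, se):
    """Output pixel = minimum of the neighbourhood selected by the structuring element."""
    h, w = img.shape
    sh, sw = se.shape
    pad_h, pad_w = sh // 2, sw // 2
    padded = np.pad(img, ((pad_h, pad_h), (pad_w, pad_w)), constant_values=1)
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + sh, j:j + sw]
            out[i, j] = np.min(window[se == 1])
    return out

# Illustrative use with a 3x3 square structuring element.
se = np.ones((3, 3), dtype=np.uint8)
img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1
print(binary_dilate(img, se))   # the object grows by one pixel on each side
print(binary_erode(img, se))    # the object shrinks by one pixel on each side
```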

Connected Component Labeling:

This is a method for identifying each object in a binary image. This

procedure returns a matrix, called a label matrix, which is an image of the same size as the input image in which the objects of the input image are distinguished by different integer values.
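Connected component labeling of a binary image can be illustrated with SciPy, whose label routine returns exactly the kind of label matrix described above; the test image and the choice of 8-connectivity are assumptions for this example.

```python
import numpy as np
from scipy.ndimage import label

# Binary image containing two separate objects (illustrative).
img = np.array([[1, 1, 0, 0, 0],
                [1, 1, 0, 0, 1],
                [0, 0, 0, 1, 1],
                [0, 0, 0, 1, 1]], dtype=np.uint8)

# 8-connected structuring element: pixels touching at edges or corners belong together.
structure = np.ones((3, 3), dtype=int)

labels, num_objects = label(img, structure=structure)
print(num_objects)   # 2
print(labels)        # label matrix: 0 = background, 1..num_objects = objects
```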

Morphological Opening:

A morphological opening of an image is erosion followed by dilation

operation, using the same structuring element for both operations.

It is used to remove small objects from an image while preserving the

shape and size of larger objects in the image.

Morphological Closing:

A morphological closing of an image is dilation followed by erosion

operation, using the same structuring element for both operations.

It is used to smooth the image by eliminating small holes between

objects.
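Opening and closing are simply compositions of erosion and dilation with the same structuring element, as the sketch below illustrates using SciPy's binary morphology routines; the noisy test image is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

se = np.ones((3, 3), dtype=bool)   # 3x3 square structuring element

img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True     # a large object that should be preserved
img[0, 0] = True         # a small isolated object (noise)
img[4, 4] = False        # a small hole inside the large object

opened = binary_opening(img, structure=se)   # erosion then dilation: removes the small object
closed = binary_closing(img, structure=se)   # dilation then erosion: fills the small hole
print(opened.astype(int))
print(closed.astype(int))
```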


SKIN COLOR BASED FACE DETECTION

Human Skin Color:

Human skin color ranges from almost black to pinkish white in different

people. In general, people having ancestors from sunny regions have

darker skins compared to others having ancestors from less-sunny regions. Also,

on average, women have slightly lighter skin than men.

Generally the input images are in RGB format, which is sensitive to lighting conditions because brightness and color information are coupled together. It is therefore not suitable for color segmentation under unknown lighting conditions, and a color space transformation is needed for skin color segmentation.

The other color model is the YCbCr color model, where Y represents the

luminance component while Cb (Blue-difference) and Cr (Red-difference)

represent the chrominance components of a color image. The color distribution of

skin colors of different people was found to be clustered in a small area of the

chromatic color space, as shown in following figure.

Figure 12: a) Skin color cloud; b) Projection of a) onto the Cb-Cr plane.


Observations have shown that although skin colors of different people appear to

vary over a wide range, they differ much less in color than in brightness. In other

words, skin colors of different people are very close, but they differ mainly in

intensities.

Manually selected skin samples from color images can be used to determine the color distribution of human skin in chromatic color space. Observations have shown that the Cb component of skin ranges from 77 to 127, while the Cr component ranges from 133 to 173. We can use this observation to filter the input image for human skin, that is, to detect skin regions in an image.

Human skin color based Face Detection:

The process for face detection based on human skin color involves two

stages.

1. Filtering, to find human skin colored regions from an input image.

2. Finding out region containing face from probable human skin

colored regions.

Stage 1: Filtering:

In this first stage, the image, given in RGB color format, is transformed to YCbCr color format. For this conversion we can use the following formulae (the offset of 128 centres the chrominance components for 8-bit data, so that the skin thresholds below apply):

Y = (0.299) R + (0.587) G + (0.114) B

Cb = 128 - (0.169) R - (0.332) G + (0.500) B

Cr = 128 + (0.500) R - (0.419) G - (0.081) B

After having image in YCbCr format, we can apply the following

thresholding to obtain the regions involving colors like human skin.

Cbskin = 1, for 77 <= Cb <= 127
       = 0, otherwise.


Crskin = 1, for 133 <= Cr <= 173
       = 0, otherwise.

For better segmentation, only the intersection of the two components is considered:

f = 1, if (Cbskin ∩ Crskin) is true
  = 0, otherwise.

Where f is the binary skin color map output image.
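The whole of Stage 1 can be sketched in Python (NumPy) as below, using the RGB-to-YCbCr formulae and the Cb/Cr ranges given above (with the offset of 128 on the chrominance components); the tiny test image is an illustrative assumption, since the original work was implemented in MATLAB.

```python
import numpy as np

def skin_map(rgb):
    """Stage 1: return the binary skin color map f for an RGB image (H x W x 3, values 0-255)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)

    # RGB -> YCbCr chrominance components (Y itself is not needed for the skin test).
    cb = 128 - 0.169 * r - 0.332 * g + 0.500 * b
    cr = 128 + 0.500 * r - 0.419 * g - 0.081 * b

    cb_skin = (cb >= 77) & (cb <= 127)
    cr_skin = (cr >= 133) & (cr <= 173)
    return (cb_skin & cr_skin).astype(np.uint8)   # f = 1 only where both conditions hold

# Illustrative use: one skin-like pixel and one clearly non-skin (blue) pixel.
img = np.array([[[224, 172, 138],
                 [ 20,  40, 200]]], dtype=np.uint8)
print(skin_map(img))   # expected output: [[1 0]]
```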

The figures given below show the resultant images after applying these

mathematical operations.

Original and YCbCr image:

Figure 13: Original image, YCbCr image

Components of YCbCr model:

Figure: Y component, Cb component, Cr component


Thresholding and ANDing operation:

Figure: Thresholded Cb component, Cr component, ANDing operation

Stage 2: Finding out region containing facial part:

Eroded, Dilated and Hole-filled Images:

Figure 14: eroded image, dilated image, hole-filled image

Original and Face Detected image:

Figure: Original image, Face Detected image (shown using Rectangle)
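Stage 2 can be sketched as a chain of the morphological operations shown above (erosion, dilation, hole filling), followed by selecting the largest connected skin region and returning its bounding rectangle; the structuring element size and the assumption that the largest region is the face are simplifications made only for this illustration.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation, binary_fill_holes, label, find_objects

def face_box(skin):
    """Stage 2: erode, dilate, fill holes, then return the bounding box of the largest region."""
    se = np.ones((3, 3), dtype=bool)
    cleaned = binary_erosion(skin.astype(bool), structure=se)   # remove small speckles
    cleaned = binary_dilation(cleaned, structure=se)            # restore the surviving regions
    cleaned = binary_fill_holes(cleaned)                        # fill holes (eyes, mouth)

    labels, n = label(cleaned)
    if n == 0:
        return None
    sizes = np.bincount(labels.ravel())[1:]                     # area of each labelled region
    largest = int(np.argmax(sizes)) + 1
    return find_objects(labels)[largest - 1]                    # (row_slice, col_slice) of the face candidate

# Illustrative use with a synthetic skin map.
skin = np.zeros((10, 10), dtype=np.uint8)
skin[2:8, 3:9] = 1      # large skin-colored blob (face candidate)
skin[0, 0] = 1          # small false detection that the erosion removes
print(face_box(skin))
```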


Advantages And Disadvantages

Advantages:

This method does not need time-consuming process to train any neural

network or classifier.

Also there is no need of computing distance measures between every

possible region in the image.

This method is orientation independent.

The public are already aware of its capture and use for identity verification

purposes.

It is non-intrusive – the user does not have to touch or interact with a

physical device for a substantial timeframe to be enrolled.

Facial photographs do not disclose information that the person does not

routinely disclose to the general public.

Disadvantages:

Face detection is affected by changes in lighting, the person's hair and age, and whether the person wears glasses.

Requires camera equipment for user identification; thus, it is not likely to

become popular until most PCs include cameras as standard equipment.


RESULTS

Successful Face Detections:

The figure given below presents the results as output. We can see that the method works fairly well in many unusual situations, such as dark images, side faces, and faces with spectacles.

Figure 15


SOME PROBLEMS:

Figure 16

Figure: 1. A region with skin-like color is attached to the face.

2. A non-facial region has a larger area than the facial part.

3. Low-resolution and very dark image.


CASE STUDY

1. Lenovo VeriFace 3.6:

Use of digital face recognition in commercial products is increasing. In

computer login security, traditional lengthy passwords are still dominant. Lenovo, a laptop vendor, developed a face recognition system as a replacement for passwords for login security. The application works by taking a number of images of a legitimate user and storing them in a database, to be matched later against an authentication request. When a user requests a login, the system matches the current user against the images stored in the database and decides to either allow or deny the request.

VeriFace is facial recognition software which provides the following features.

Windows Login:

VeriFace pops up a camera window on the login screen. Just put your face in the window; VeriFace recognizes your face and logs you in automatically.

Figure : Lenovo VeriFace Recognition


VeriFace favors maximum user convenience over robustness. VeriFace stores images in black and white form. The system mitigates the still-photo problem by detecting liveness using the eye movements of the user. It also does not work for nonhuman faces such as cats, dogs and birds.

Experiments

Experiments were conducted with consideration to the constraints. Measuring

different parameters is a tough task. We tried to incorporate all possibilities

required for the observed parameters.

i. Spectacles

This parameter includes people with glasses and lenses. Due to our resource

constraints we limited ourselves to glasses only. In a face recognition system,

people with glasses are an important parameter both due to the growing

population and issues surrounding their seamless recognition.

ii. Light & Distance

The surrounding environment condition (especially light) results in significant

variations. Moreover, we tried to measure the greater possible distance of a

candidate from a computer screen, where the candidate was still recognized

correctly by the system.

Results Analysis

The results of the study were interesting in some aspects and we will consider

them with respect to our parameters.

The results of the experiments are given in Table. These results may have

different interpretations, possibly more than one. We left this interpretation open

for this report and will not consider any single interpretation.

Figure: Ratio of Success to Failure, All with Spectacle


Conclusion

Face recognition is an important application of biometrics in an identification

system. Several face recognition systems for laptop login application exist.

This study experiments with the Lenovo VeriFace laptop login system. We carried out small experiments to check the system for different parameters. Data collected from the experiments can have different meanings, and interpretations may differ depending on the analysis. This study is based on limited experiments and is thus insufficient to support any strong statements about Lenovo VeriFace face recognition technology [D].

Leaving a video message:

For unauthorized users, VeriFace provides a function which allows them to leave a video message for the computer user. Click the related button on the window to switch to message-leaving mode; the computer user can check the message later in the log review module of the program.

The message function allows you to leave a video message when you cannot

reach a user. Click the Add Message button.

Click the button to start recording a message.


After the message is recorded, click the Stop Message button to stop recording

messages.

After you successfully leave a message for a user of this computer, the user can

receive your message after logon.


Login log review

VeriFace provides users a function to check who has tried to log in to their computers. Besides that, the user can also see when each login took place.

File encryption/decryption:

VeriFace provides a function which lets users encrypt their files with their faces or their Windows account password. Decryption depends on how the files were encrypted.

Encrypting files

Right-click on the file or folder that you want to encrypt. Select Encrypt by

VeriFace. The VeriFace face recognition window will pop up. After the

recognition is finished, it will start to encrypt the file automatically


To encrypt a file by using the password of the current user, close the face

recognition window. The password input window is displayed, as shown in the

following figure. Enter the password of the current user. Click OK.

Decrypting files

To decrypt an encrypted file double-click the file or right-click the file and choose

Decrypt by VeriFace. The VeriFace face recognition window is displayed to verify

the user. If the file is encrypted by the user's password, the password input

window is displayed. Users without the original file maker's password cannot

access the file. During file encryption, if the disk space is insufficient, the file

cannot be encrypted.

2. Cyber-shot Digital Camera HX1:

Face Detection technology detects up to eight individual faces and controls flash,

focus, exposure, and white balance to deliver accurate, natural skin tones with

reduced red-eye. It can also give priority to children or adults. Newly added Face

Motion Detection adjusts ISO sensitivity and accelerates the shutter speed when

facial movement is detected, reducing blur in the subject's face.

Sony's Picture Motion Browser (PMB) analyses photos, associates photos containing identical faces so that they can be tagged accordingly, and differentiates between photos with one person, many people and nobody.


3. FastAccess:

FastAccess technology is so strong that it’s used by banks and hospitals, and

now FastAccess is available for home and small business users.

Facial recognition to log into Windows

FastAccess learns about your face as you use your computer. Unlike other

biometrics, there is no manual enrollment required. Simply log in normally and

FastAccess updates its facial database automatically.

Automatic login to websites secured by facial recognition

Use your face to not only log into Windows but to websites as well. FastAccess

remembers usernames and passwords to websites and will enter them for you –

but only when your face is visible!

Continuous security

Lock the desktop automatically when you walk away.

Photo audit logs

Always know who accessed your computer with the photo audit log. FastAccess

optionally takes a small photo of the user during each login to Windows.

4. visage|SDK™

visage|SDK™ integrates a comprehensive range of computer vision and

character animation technologies in an easy-to-use, fully documented Software

Development Kit to support a wide area of applications.

visage|SDK™ FaceDetect package contains powerful techniques to find faces

and facial features in still images in the form of a well-documented C++ Software

Development Kit.


visage|SDK™ packages: FaceDetect, FaceTrack, VirtualTTS, LipSync

The feature detector identifies the facial features in still images containing a

human face. The image should contain a face in frontal pose and with a neutral

expression. Most standard image formats such as JPEG, GIF etc. are supported

for input images. The result is a full set of MPEG-4 facial features (lips contour,

nose, eyes, eyebrows, hairline, chin, cheeks):

FaceDetect

Facial feature detection in still images.

FaceTrack

Real time or off-line head- and facial feature tracking in video (facial motion

capture) driving facial animation and numerous other applications.

LipSync

Real-time or off-line lip sync from audio file or microphone, with on-the-fly

rendering and MPEG-4 FBA output.


CONCLUSION

Image segmentation is a rewarding field in image processing. Applications of image segmentation range from military ones, like target recognition, to civilian ones, like security. We have studied here the basic techniques for segmenting an image.

A special application of image segmentation, face detection, is also a good choice for anyone interested in research in the image processing field. Face detection using human skin color is described here: a face can be detected based on human skin color, and the results of this technique are quite good. This dissertation has been a great learning experience for us. After having worked on this project, two things really impressed us: one is the field of image processing, and the other is the MATLAB tool. MATLAB is a very useful tool, without which working with images would be very cumbersome; it took a great deal of the burden of processing images off us.


REFERENCES

Books:

[B1]: Digital Image Processing, by Rafael Gonzalez, Richard Woods

[B2]: Image Processing, Analysis, and Machine Vision, by Milan Sonka,

Vaclav Hlavac, Roger Boyle

[B3]: A Guide to MATLAB, by Brian R. Hunt, Ronald L. Lipsman, Jonathan M. Rosenberg

Web-sites:

[W1]: www.civs.stat.ucla.edu

[W2]: www.cs.cf.ac.uk

[W3]: www.icaen.uiowa.edu

Papers:

[A] Wiskott, L., et al. (1997). "Face Recognition by Elastic Bunch Graph Matching." IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, 775-779.

[B] Guodong Guo, Stan Z. Li, Kapluk Chan, et al. (2000). "Face Recognition by Support Vector Machines."

[C] Rowley, H., Baluja, S., and Kanade, T. (1998). "Neural Network-Based Face Detection." IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 23-38.

[D] Lenovo Face Recognition, http://lenovoblogs.com/insidethebox/?p=132