
CHAPTER 1

INTRODUCTION

1.1 IMPORTANCE OF FACE RECOGNITION

The information age is quickly revolutionizing the way transactions
are completed. Everyday actions are increasingly being handled
electronically instead of with pencil and paper or face to face. This
growth in electronic transactions has resulted in a greater demand for
fast and accurate user identification and authentication. Access codes for
buildings, bank accounts and computer systems often use PINs for
identification and security clearance.

Entering the correct PIN grants access, but the user of the PIN is not
verified. When credit and ATM cards are lost or stolen, an unauthorized
user can often come up with the correct personal codes. Despite warnings,
many people continue to choose easily guessed PINs and passwords:
birthdays, phone numbers and social security numbers. Recent cases of
identity theft have heightened the need for methods to prove that
someone is truly who he or she claims to be.

Face recognition technology may solve this problem, since a face is
undeniably connected to its owner except in the case of identical twins;
it is nontransferable. The system can compare scans to records stored
in a central or local database, or even on a smart card. The strong need for
user-friendly systems that can secure our assets and protect our privacy without
losing our identity in a sea of numbers is obvious.


1.2 FACE RECOGNITION TECHNOLOGY

1.2.1 INTRODUCTION TO BIOMETRICS

A biometric is a unique, measurable characteristic of a human being

that can be used to automatically recognize an individual or verify an

individual’s identity. Biometrics can measure both physiological and

behavioral characteristics.

“Any automatically measurable, robust and distinctive physical
characteristic or personal trait that can be used to identify an individual
or verify the claimed identity of an individual.”

This definition requires elaboration:

Measurable means that the characteristic or trait can be easily
presented to a sensor, located by it, and converted into a quantifiable,
digital format. This measurability allows matching to occur in a matter
of seconds and makes it an automated process.

Biometric identifiers are the distinctive, measurable characteristics used
to label and describe individuals. Biometric identifiers are often categorized as
physiological versus behavioral characteristics. Physiological characteristics are
related to the shape of the body. Examples include, but are not limited
to, fingerprints, palm veins, the face, DNA, palm prints, hand
geometry, the iris, the retina and odor/scent. Behavioral characteristics are
related to the pattern of behavior of a person, including but not limited to typing
rhythm and gait.
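As an illustrative sketch (not any particular biometric system), the difference between verifying a claimed identity (a 1-to-1 comparison) and identifying an individual (a 1-to-N search) can be shown with toy feature vectors and a Euclidean distance threshold; the vectors, gallery and threshold below are all hypothetical:

```python
import math

def euclidean(a, b):
    # distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe, enrolled, threshold=0.5):
    # verification: 1-to-1 match against one claimed identity
    return euclidean(probe, enrolled) <= threshold

def identify(probe, gallery, threshold=0.5):
    # identification: 1-to-N search over all enrolled templates
    best_id, best_dist = None, float("inf")
    for user_id, template in gallery.items():
        d = euclidean(probe, template)
        if d < best_dist:
            best_id, best_dist = user_id, d
    return best_id if best_dist <= threshold else None

gallery = {"alice": [0.1, 0.9, 0.4], "bob": [0.8, 0.2, 0.7]}
probe = [0.12, 0.88, 0.41]
print(verify(probe, gallery["alice"]))  # True
print(identify(probe, gallery))         # alice
```

A real system would extract the feature vectors from sensor data; only the matching step is sketched here.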


CHAPTER 2

LITERATURE SURVEY

2.1 NEUROSCIENCE ISSUES RELEVANT TO FACE RECOGNITION

Human recognition processes utilize a broad spectrum of stimuli,

obtained from many, if not all, of the senses (visual, auditory, olfactory, tactile,

etc.). In many situations, contextual knowledge is also applied, for example,

surroundings play an important role in recognizing faces in relation to where

they are supposed to be located. With existing technology, it is futile even to
attempt to develop a system that mimics the remarkable face recognition
ability of humans. However, the human brain has its limitations in the total
number of persons that it can accurately “remember.”

A key advantage of a computer system is its capacity to handle large

numbers of face images. In most applications the images are available only in

the form of single or multiple views of 2D intensity data, so that the inputs to

computer face recognition algorithms are visual only. For this reason, the

literature reviewed in this section is restricted to studies of human visual

perception of faces. Many studies in psychology and neuroscience have direct

relevance to engineers interested in designing algorithms or systems for

machine recognition of faces.

2.2 FACE RECOGNITION FROM STILL IMAGES

The problem of automatic face recognition involves three
key steps/subtasks:

(1) detection and rough normalization of faces,

(2) feature extraction and accurate normalization of faces,

(3) identification and/or verification.
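The three subtasks above can be sketched as a pipeline; every function body below is a hypothetical placeholder standing in for a real detector, feature extractor and matcher:

```python
def detect_and_normalize(image):
    # subtask 1: locate the face and roughly normalize scale/position
    # (placeholder: assume the whole image is already a centered face crop)
    return image

def extract_features(face):
    # subtask 2: extract features and refine normalization
    # (placeholder: a trivial 4-bin intensity histogram as the feature vector)
    hist = [0] * 4
    for px in face:
        hist[min(px // 64, 3)] += 1
    return hist

def identify(features, gallery):
    # subtask 3: identification against enrolled feature vectors
    return min(gallery,
               key=lambda uid: sum(abs(a - b)
                                   for a, b in zip(features, gallery[uid])))

gallery = {"user1": [4, 0, 0, 0], "user2": [0, 0, 0, 4]}
image = [250, 240, 255, 230]  # four bright "pixels"
print(identify(extract_features(detect_and_normalize(image)), gallery))
```

The point is the data flow between the subtasks, not the placeholder logic inside each one.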


Sometimes, different subtasks are not totally separated. For example, the

facial features (eyes, nose, mouth) used for face recognition are often used in

face detection. Face detection and feature extraction can be achieved

simultaneously, as indicated in Figure 1. Depending on the nature of the

application, for example, the sizes of the training and testing databases, clutter

and variability of the background, noise, occlusion, and speed requirements,

some of the subtasks can be very challenging.

Though fully automatic face recognition systems must perform all three

subtasks, research on each subtask is critical. This is not only because the

techniques used for the individual subtasks need to be improved, but also

because they are critical in many different applications.

2.3 FACE RECOGNITION FROM IMAGE SEQUENCES

A typical video-based face recognition system automatically detects face

regions, extracts features from the video, and recognizes facial identity if a face

is present. In surveillance, information security, and access control applications,

face recognition and identification from a video sequence is an important

problem. Face recognition based on video is preferable to using still images
since, as has been demonstrated, motion helps in the recognition of (familiar)
faces when the images are negated, inverted or thresholded. It was also demonstrated that humans

can recognize animated faces better than randomly rearranged images from the

same set.

Though recognition of faces from video sequences is a direct extension of
still-image-based recognition, in our opinion, true video-based face recognition

techniques that coherently use both spatial and temporal information started

only a few years ago and still need further investigation. Significant challenges

for video-based recognition still exist; we list several of them here.


2.4 EVALUATION OF FACE RECOGNITION SYSTEMS

Given the numerous theories and techniques that are applicable to face

recognition, it is clear that evaluation and benchmarking of these algorithms is

crucial. Previous work on the evaluation of OCR and fingerprint classification

systems provided insights into how the evaluation of algorithms and systems

can be performed efficiently. One of the most important facts learned in these

evaluations is that large sets of test images are essential for adequate evaluation.

It is also extremely important that the samples be statistically as similar as

possible to the images that arise in the application being considered. Scoring

should be done in a way that reflects the costs of errors in recognition. Reject

error behavior should be studied, not just forced recognition.

In planning an evaluation, it is important to keep in mind that the

operation of a pattern recognition system is statistical, with measurable

distributions of success and failure. These distributions are very application-

dependent, and no theory seems to exist that can predict them for new

applications. This strongly suggests that an evaluation should be based as

closely as possible on a specific application.
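For example, given hypothetical similarity scores from genuine and impostor comparisons, the reject/accept error trade-off at different thresholds can be tabulated as below (all scores and thresholds are illustrative only):

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Compute false reject rate (FRR) and false accept rate (FAR)
    at a given similarity threshold (higher score = better match)."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# hypothetical score distributions from a verification experiment
genuine = [0.91, 0.85, 0.78, 0.60, 0.95]
impostor = [0.30, 0.55, 0.62, 0.20, 0.10]

for t in (0.5, 0.7, 0.9):
    frr, far = error_rates(genuine, impostor, t)
    print(f"threshold={t}: FRR={frr:.2f}, FAR={far:.2f}")
```

Sweeping the threshold in this way traces the trade-off curve the text alludes to: a stricter threshold lowers false accepts at the cost of more false rejects.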


CHAPTER 3

EXISTING SYSTEM

One of the most successful applications of image analysis and

understanding, face recognition has recently received significant attention,

especially during the past several years. At least two reasons account for this

trend: the first is the wide range of commercial and law enforcement

applications, and the second is the availability of feasible technologies after 30

years of research. Even though current machine recognition systems have

reached a certain level of maturity, their success is limited by the conditions

imposed by many real applications. For example, recognition of face images

acquired in an outdoor environment with changes in illumination and/or pose

remains a largely unsolved problem. In other words, current systems are still

far away from the capability of the human perception system.

The still image problem has several inherent advantages and

disadvantages. For applications such as drivers’ licenses, due to the controlled

nature of the image acquisition process, the segmentation problem is rather

easy. However, if only a static picture of an airport scene is available, automatic

location and segmentation of a face could pose serious challenges to any

segmentation algorithm. But the small size and low image quality of faces

captured from video can significantly increase the difficulty in recognition.

Recognizing a 3D object from its 2D images poses many challenges. The

illumination and pose problems are two prominent issues for appearance- or

image-based approaches. Many approaches have been proposed to handle these

issues, with the majority of them exploring domain knowledge.

Face recognition has received increased attention and has advanced

technically. Many commercial systems for still face recognition are now


available. Recently, significant research efforts have been focused on face

modeling/tracking, recognition, and system integration. New datasets have been

created and evaluations of recognition techniques using these databases have

been carried out. It is not an overstatement to say that face recognition has

become one of the most active applications of pattern recognition, image

analysis and understanding.

DISADVANTAGES:

Duplications are possible.

Low image quality increases the difficulty of recognition.

Changes in illumination or pose remain a largely unsolved problem.

Detecting faces in live video is difficult.


CHAPTER 4

PROPOSED SYSTEM

A new technique called Hull Point Analysis is used to detect and recognize
faces more efficiently than other recognition techniques. As new
algorithms are proposed and more systems are built, measuring the performance of new
and existing systems becomes very important. Face recognition has received
significant attention during the past several years. It has a wide range of
commercial and law enforcement applications; hence such systems should be highly
efficient and secure. Even though current recognition systems have reached a
certain level of maturity, their success is limited by the conditions imposed by
many real applications.

This work provides an up-to-date critical survey of face recognition research.
It addresses the unsolved problem of recognizing face images acquired in
outdoor environments with changes in illumination or pose. Face recognition
here is done through Hull Point Analysis for accuracy. Hull Point Analysis is a
method that plots six points in the shape of a hexagon around a center point.
These points are connected with lines, and the resulting values are stored in the database
along with the users' names. We have also added live recognition, in which the stored
users can be identified live; stored users can also be deleted and the album can be reset.

For greater accuracy, this technique also calculates a smile value and an eye-blink
value to avoid duplications, so the application can be used for security
purposes. Human recognition processes utilize a broad spectrum
of stimuli, obtained from many, if not all, of the senses (visual, auditory,
olfactory, tactile, etc.). In many situations, contextual knowledge is also
applied; for example, surroundings play an important role in recognizing faces
in relation to where they are supposed to be located. With existing technology,
it is futile even to attempt to develop a system that mimics the


remarkable face recognition ability of humans. However, the human brain has

its limitations in the total number of persons that it can accurately “remember.”

A key advantage of a computer system is its capacity to handle large numbers

of face images. In most applications the images are available only in the form

of single or multiple views of 2D intensity data, so that the inputs to computer

face recognition algorithms are visual only. For this reason, the literature

reviewed in this section is restricted to studies of human visual perception of

faces.

In this project we have added live recognition, where a saved user's
details will be displayed live whenever that user is in the frame. Many studies in

psychology and neuroscience have direct relevance to engineers interested in

designing algorithms or systems for machine recognition of faces. For example,

findings in psychology about the relative importance of different facial features

have been noted in the engineering literature. On the other hand, machine

systems provide tools for conducting studies in psychology and neuroscience.

For example, a possible engineering explanation of the bottom lighting effects

studied in Johnston is as follows: when the actual lighting direction is opposite

to the usually assumed direction, a shape-from-shading algorithm recovers

incorrect structural information and hence makes recognition of faces harder.


CHAPTER 5

COMPONENTS OF SYSTEM

5.1 QUALCOMM SNAPDRAGON

Snapdragon is a suite of System-on-Chip (SoC) semiconductor products

designed and marketed by Qualcomm for mobile devices. The

Snapdragon central processing unit (CPU) uses the ARM RISC instruction set,

and a single SoC may include multiple CPU cores, a graphics processing

unit (GPU), a wireless modem, and other software and hardware to support a
smartphone's global positioning system (GPS), camera, gesture recognition and
video. Snapdragon semiconductors are embedded in devices of various kinds,
including Google Android and Windows Phone devices.

Benchmark tests of the Snapdragon 800's processor by PC

Magazine found that its processing power was comparable to similar products

from Nvidia. Benchmarks of the Snapdragon 805 found that the Adreno 420

GPU resulted in a 40 percent improvement in graphics processing over the

Adreno 330 in the Snapdragon 800, though there were only slight differences in
processor benchmarks.

Snapdragon processors enable next-level user experiences. Each one is a

comprehensive all-in-one system, specifically designed to enable best-in-class

mobile experiences with all-day battery life. Snapdragon processors also enable

advanced connectivity, jaw-dropping graphics, and powerful and efficient

processing and multitasking.


5.1.1 FEATURES

CPU: up to 2.3 GHz quad-core (4x Qualcomm® Krait™ 400)

GPU: Qualcomm® Adreno™ 330 GPU, up to OpenGL ES 3.0

DSP: Qualcomm® Hexagon™ DSP

Camera: up to 21 MP camera, Dual Image Sensor Processor (ISP)

Video: up to 4K Ultra HD capture and playback, H.264 (AVC)

LTE Connectivity: 4G LTE Advanced World Mode; LTE Category 4 (up to
150 Mbps DL, up to 50 Mbps UL); downlink features: 2x10 MHz carrier
aggregation, 64-QAM; uplink features: 1x20 MHz, 16-QAM; Global Mode:
LTE FDD and TDD, WCDMA (DC-HSDPA, DC-HSUPA), TD-SCDMA,
EV-DO and CDMA 1x, GSM/EDGE

Display: up to 2K display on device; 1080p and 4K external display support

Charging: Qualcomm® Quick Charge™ 2.0

Additional features: LTE Broadcast, HD Voice over VoLTE and 3G,
Wi-Fi (1-stream 802.11n/ac)

Fig 5.1 Snapdragon 805 Processor


5.2 CAMERA

A front-facing camera is a feature of cameras, mobile phones and similar

mobile devices that allows taking a self-portrait photograph or video while

looking at the display of the device, usually showing a live preview of the

image.

A facial recognition system is a computer application capable

of identifying or verifying a person from a digital image or a video frame from

a video source. One of the ways to do this is by comparing selected facial

features from the image and a facial database.

It is typically used in security systems and can be compared to

other biometrics such as fingerprint or eye iris recognition systems.


CHAPTER 6

SYSTEM DESIGN ANALYSIS AND MODULES

6.1 IMAGE CAPTURING

The camera in the smartphone is used to capture images live, recognize
faces and store them in the database. The camera input is fed into the
Snapdragon processor.

6.2 QUALCOMM SDK MODULE

The processor computes the values using its high processing power and
GPU. Isolating the face in the live preview, excluding the background and noise,
is done by the Qualcomm SDK, which uses the Hull Point Analysis algorithm.

6.3 USE OF HULL POINT ANALYSIS ALGORITHM

Hull point analysis is an algorithm that marks points on the outer
surface of the face. The distance of each point is measured in pixels for accuracy,
the most deviated angle difference is noted, and the marked points are
connected to the circum point of the face, which lies in the middle. The
calculated distances are then sent to our Android app for the user's ease of use.
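A minimal sketch of the idea, assuming the outer face points are taken as the convex hull of detected landmark coordinates and the "circum point" is their centroid; the landmark coordinates below are hypothetical and this is not the SDK's actual implementation:

```python
import math

def convex_hull(points):
    # Andrew's monotone chain convex hull, O(n log n)
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_signature(landmarks):
    # distances (in pixels) from each outer hull point to the central point
    hull = convex_hull(landmarks)
    cx = sum(x for x, _ in hull) / len(hull)
    cy = sum(y for _, y in hull) / len(hull)
    return [math.dist((cx, cy), p) for p in hull]

# hypothetical landmark pixel coordinates from a face detector
landmarks = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
print(hull_signature(landmarks))
```

The resulting list of distances is the kind of compact numeric signature that could be stored per user, as the parsing module below describes.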

6.4 PARSING MODULE

The data from the Android app are parsed and the values are stored in SQLite under
the corresponding user with the help of a unique ID. Updating a user follows the same
steps, and the data obtained are updated with the help of the ID.
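A sketch of this storage step, assuming a simple one-table schema keyed by the unique ID (the table and column names are hypothetical; an Android app would use its own SQLite API, and sqlite3 is used here only for illustration):

```python
import sqlite3

# hypothetical schema: one row of hull-point values per user, keyed by a unique ID
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE faces (
    user_id  TEXT PRIMARY KEY,
    name     TEXT NOT NULL,
    hull_csv TEXT NOT NULL)""")

def save_user(user_id, name, distances):
    # insert a new user, or update an existing one with the same ID
    csv = ",".join(f"{d:.2f}" for d in distances)
    conn.execute(
        "INSERT INTO faces (user_id, name, hull_csv) VALUES (?, ?, ?) "
        "ON CONFLICT(user_id) DO UPDATE SET "
        "name = excluded.name, hull_csv = excluded.hull_csv",
        (user_id, name, csv))
    conn.commit()

save_user("u1", "Alice", [2.83, 2.83, 2.83, 2.83])
save_user("u1", "Alice", [2.90, 2.81, 2.85, 2.79])  # update, same ID
print(conn.execute("SELECT COUNT(*) FROM faces").fetchone()[0])  # 1
```

The `ON CONFLICT ... DO UPDATE` clause makes "insert new user" and "update existing user" a single statement keyed by the unique ID.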


Fig 6.1: Flow Diagram (Camera → Snapdragon Processor → Qualcomm SDK
(Hull Point Analysis) → Data Parsing → SQLite → Android App)


CHAPTER 7

DEVELOPMENT ENVIRONMENT AND FEATURES

7.1 ANDROID STUDIO

Android Studio is the official IDE for Android app development,

based on  IntelliJ IDEA. On top of IntelliJ's powerful code editor and developer

tools, Android Studio offers even more features that enhance your productivity

when building Android apps, such as:

1. A flexible Gradle-based build system

2. Build variants and multiple APK file generation

3. Code templates to help you build common app features

4. A rich layout editor with support for drag and drop theme editing

5. Lint tools to catch performance, usability, version compatibility, and

other problems

6. Code shrinking with ProGuard and resource shrinking with Gradle

7. Built-in support for Google Cloud Platform, making it easy to integrate

Google Cloud Messaging and App Engine

7.1.1 DEVELOPER SERVICES

Android Studio supports enabling these developer services in your app:


1. Ads using AdMob

2. Analytics using Google Analytics

3. Authentication using Google Sign-in

4. Notifications using Google Cloud Messaging

Enabling a developer service adds the required dependencies and, when

applicable, also modifies the related configuration files. To activate the service,

you must perform service-specific updates, such as loading an ad in

the MainActivity class for ad display.

To enable an Android developer service, select the File > Project

Structure menu option and click a service under the Developer Services
sub-menu. The service configuration page appears. In the service configuration

page, click the service check box to enable the service and click OK. Android

Studio updates your library dependencies for the selected service and, for

Analytics, updates the AndroidManifest.xml and other tracker configuration

files.

7.1.2 GRADLE

Android Studio uses Gradle as the foundation of the build system, with

more Android-specific capabilities provided by the Android Plugin for Gradle.

This build system runs as an integrated tool from the Android Studio menu and

independently from the command line. You can use the features of the build

system to:

1. Customize, configure, and extend the build process.

2. Create multiple APKs for your app with different features using the same

project and modules.


3. Reuse code and resources across source sets.

The flexibility of Gradle enables you to achieve all of this without modifying

your app's core source files.
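As a sketch, the build variants and multiple-APK capability mentioned above might be configured in a module-level build.gradle like this; the flavor names and options are illustrative and not taken from this project:

```groovy
android {
    // two product flavors x two build types = four build variants,
    // each of which can produce its own APK
    flavorDimensions "version"
    productFlavors {
        free { dimension "version"; applicationIdSuffix ".free" }
        paid { dimension "version"; applicationIdSuffix ".paid" }
    }
    buildTypes {
        release {
            minifyEnabled true       // code shrinking with ProGuard
            shrinkResources true     // resource shrinking with Gradle
            proguardFiles getDefaultProguardFile('proguard-android.txt'),
                          'proguard-rules.pro'
        }
    }
}
```

Because this lives in the build script, the app's core source files stay untouched, as the paragraph above notes.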

7.1.3 MEMORY AND CPU MONITORING

Android Studio provides a memory and CPU monitor view so you can

more easily monitor your app's performance and memory usage to track CPU

usage, find deallocated objects, locate memory leaks, and track the amount of

memory the connected device is using. With your app running on a device or

emulator, click the Android tab in the lower left corner of the runtime window

to launch the Android runtime window. Click the Memory or CPU tab.

Fig 7.1: CPU Monitoring

7.1.4 MANAGING AVD MANAGER

The AVD Manager is a tool you can use to create and manage Android

virtual devices (AVDs), which define device configurations for the Android

Emulator.


HARDWARE OPTIONS

If you are creating a new AVD, you can specify the following hardware

options for the AVD to emulate:

Characteristic (Property): Description

Device ram size (hw.ramSize): The amount of physical RAM on the device,
in megabytes. Default value is "96".

Touch-screen support (hw.touchScreen): Whether there is a touch screen or
not on the device. Default value is "yes".

Trackball support (hw.trackBall): Whether there is a trackball on the
device. Default value is "yes".

Keyboard support (hw.keyboard): Whether the device has a QWERTY
keyboard. Default value is "yes".

DPad support (hw.dPad): Whether the device has DPad keys. Default value
is "yes".

GSM modem support (hw.gsmModem): Whether there is a GSM modem in the
device. Default value is "yes".

Camera support (hw.camera): Whether the device has a camera. Default
value is "no".

Maximum horizontal camera pixels (hw.camera.maxHorizontalPixels):
Default value is "640".

Maximum vertical camera pixels (hw.camera.maxVerticalPixels): Default
value is "480".

GPS support (hw.gps): Whether there is a GPS in the device. Default value
is "yes".

Battery support (hw.battery): Whether the device can run on a battery.
Default value is "yes".

Accelerometer (hw.accelerometer): Whether there is an accelerometer in
the device. Default value is "yes".

Audio recording support (hw.audioInput): Whether the device can record
audio. Default value is "yes".

Audio playback support (hw.audioOutput): Whether the device can play
audio. Default value is "yes".

SD Card support (hw.sdCard): Whether the device supports
insertion/removal of virtual SD Cards. Default value is "yes".

Cache partition support (disk.cachePartition): Whether we use a /cache
partition on the device. Default value is "yes".

Cache partition size (disk.cachePartition.size): Default value is "66MB".

Abstracted LCD density (hw.lcd.density): Sets the generalized density
characteristic used by the AVD's screen. Default value is "160".
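These properties end up in the AVD's configuration file; a hypothetical configuration overriding some of the defaults above might look like the following (values are illustrative only):

```ini
hw.ramSize = 1536
hw.touchScreen = yes
hw.keyboard = yes
hw.camera = yes
hw.camera.maxHorizontalPixels = 1280
hw.camera.maxVerticalPixels = 720
hw.gps = yes
hw.lcd.density = 320
disk.cachePartition = yes
disk.cachePartition.size = 66MB
```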

7.2 SQLITE

SQLite is a software library that implements a self-contained, serverless,
zero-configuration, transactional SQL database engine. SQLite is the most
widely deployed database engine in the world.

7.2.1 FEATURES

1. Transactions are atomic, consistent, isolated, and durable (ACID) even

after system crashes and power failures.

2. Zero-configuration - no setup or administration needed.


3. Full SQL implementation with advanced features like partial
indexes and common table expressions.

4. A complete database is stored in a single cross-platform disk file. Great

for use as an application file format.

5. Supports terabyte-sized databases and gigabyte-sized strings and blobs.

6. Small code footprint: less than 500KiB fully configured or much less

with optional features omitted.

7. Simple, easy to use API.

8. Written in ANSI-C. TCL bindings included. Bindings for dozens of other

languages available separately.

9. Well-commented source code with 100% branch test coverage.

10. Available as a single ANSI-C source-code file that is easy to compile and

hence is easy to add into a larger project.

11. Self-contained: no external dependencies.

12. Cross-platform: Android, *BSD, iOS, Linux, Mac, Solaris, VxWorks,

and Windows (Win32, WinCE, WinRT) are supported out of the box.

Easy to port to other systems.

13. Sources are in the public domain. Use for any purpose.

14. Comes with a standalone command-line interface (CLI) client that can be

used to administer SQLite databases.
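The ACID property in item 1 can be demonstrated directly: a simulated crash in the middle of a transfer leaves the database unchanged. The table and values below are illustrative only:

```python
import sqlite3, tempfile, os

# zero-configuration: the database is just a file; no server to start
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # one atomic transaction: all or nothing
        conn.execute("UPDATE accounts SET balance = balance - 60 "
                     "WHERE name = 'alice'")
        raise RuntimeError("simulated crash mid-transfer")
except RuntimeError:
    pass

# the failed transfer was rolled back: alice still has 100
print(conn.execute("SELECT balance FROM accounts "
                   "WHERE name = 'alice'").fetchone()[0])  # 100
```

Using the connection as a context manager commits on success and rolls back on any exception, which is the atomicity guarantee described above.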

7.2.2 USES


1. Database For The Internet Of Things. SQLite is a popular choice for the
database engine in cellphones, PDAs, MP3 players, set-top boxes, and

other electronic gadgets. SQLite has a small code footprint, makes

efficient use of memory, disk space, and disk bandwidth, is highly

reliable, and requires no maintenance from a Database Administrator.

2. Application File Format. Rather than using fopen() to write XML, JSON,

CSV, or some proprietary format into disk files used by your application,

use an SQLite database. You'll avoid having to write and troubleshoot a

parser, your data will be more easily accessible and cross-platform, and
your updates will be transactional.

3. Website Database. Because it requires no configuration and stores

information in ordinary disk files, SQLite is a popular choice as the

database to back small to medium-sized websites.

4. Stand-in For An Enterprise RDBMS. SQLite is often used as a surrogate

for an enterprise RDBMS for demonstration purposes or for testing.

SQLite is fast and requires no setup, which takes a lot of the hassle out of

testing and which makes demos perky and easy to launch.

5. Zero-Configuration. SQLite does not need to be "installed" before it is
used. There is no "setup" procedure. There is no server process that needs
to be started, stopped, or configured. There is no need for an administrator
to create a new database instance or assign access permissions to users.
SQLite uses no configuration files. Nothing needs to be done to tell the
system that SQLite is running. No actions are required to recover after a
system crash or power failure. There is nothing to troubleshoot. SQLite just
works. Other more familiar database engines run great once you get them
going, but doing the initial installation and configuration can be
intimidatingly complex.

SERVERLESS

Most SQL database engines are implemented as a separate server process.
Programs that want to access the database communicate with the server
using some kind of interprocess communication (typically TCP/IP) to
send requests to the server and to receive back results. SQLite does not
work this way. With SQLite, the process that wants to access the database
reads and writes directly from the database files on disk. There is no
intermediary server process.

There are advantages and disadvantages to being serverless. The main
advantage is that there is no separate server process to install, set up,
configure, initialize, manage, and troubleshoot. This is one reason why
SQLite is a "zero-configuration" database engine. Programs that use
SQLite require no administrative support for setting up the database
engine before they are run. Any program that is able to access the disk is
able to use an SQLite database.

On the other hand, a database engine that uses a server can provide better
protection from bugs in the client application: stray pointers in a client
cannot corrupt memory on the server. And because a server is a single
persistent process, it is able to control database access with more precision,
allowing for finer-grain locking and better concurrency.

Most SQL database engines are client/server based. Of those that are
serverless, SQLite is the only one that this author knows of that allows
multiple applications to access the same database at the same time.

Single database file


An SQLite database is a single ordinary disk file that can be located

anywhere in the directory hierarchy. If SQLite can read the disk file then it can

read anything in the database. If the disk file and its directory are writable, then

SQLite can change anything in the database. Database files can easily be copied

onto a USB memory stick or emailed for sharing.

Other SQL database engines tend to store data as a large collection of

files. Often these files are in a standard location that only the database engine

itself can access. This makes the data more secure, but also makes it harder to

access. Some SQL database engines provide the option of writing directly to

disk and bypassing the file system altogether. This provides added
performance, but at the cost of considerable setup and maintenance complexity.

Compact

When optimized for size, the whole SQLite library with everything

enabled is less than 500KiB in size (as measured on an ix86 using the "size"

utility from the GNU compiler suite.) Unneeded features can be disabled at

compile-time to further reduce the size of the library to under 300KiB if desired.

Most other SQL database engines are much larger than this. IBM boasts

that its recently released CloudScape database engine is "only" a 2MiB jar file -

an order of magnitude larger than SQLite even after it is compressed! Firebird

boasts that its client-side library is only 350KiB. That's as big as SQLite and

does not even contain the database engine. The Berkeley DB library from

Oracle is 450KiB and it omits SQL support, providing the programmer with

only simple key/value pairs.


CHAPTER 8

IMPLEMENTATION OF FACIAL RECOGNITION TECHNOLOGY

8.1 FACIAL PROCESSING

Transform your apps with the ability to profile faces. The Snapdragon

SDK makes it possible to detect a smile, determine where the eyes are looking,

and detect blinking. Using this, you can create new interactions and enhance the

user experience by doing things like integrating with real-time camera preview

or analyzing photos in a photo album.

You can track a variety of facial properties with each frame:

1. Blink Detection – measure how open each eye is

2. Gaze Tracking – assess where the subject is looking

3. Smile Value – estimate the degree of the smile

4. Face Orientation – track the Yaw, Pitch and Roll of the head

These capabilities work with both real-time and stored images or videos, so

your app can integrate them for different kinds of uses.
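As one hypothetical use of these per-frame properties, a naive liveness check could require a blink or a clear smile change across a short sequence; the property names and thresholds below are made up for illustration and are not the Snapdragon SDK API:

```python
def is_live(frames, blink_drop=0.5, smile_rise=0.3):
    """Naive liveness heuristic over per-frame face properties:
    require at least one blink (a dip in eye openness) or a clear
    change in smile value across the sequence. Thresholds are
    illustrative, not taken from any SDK."""
    openness = [f["eye_openness"] for f in frames]
    smiles = [f["smile"] for f in frames]
    blinked = max(openness) - min(openness) >= blink_drop
    smiled = max(smiles) - min(smiles) >= smile_rise
    return blinked or smiled

frames = [
    {"eye_openness": 0.9, "smile": 0.1},
    {"eye_openness": 0.1, "smile": 0.1},  # blink
    {"eye_openness": 0.9, "smile": 0.1},
]
print(is_live(frames))  # True
```

A static photograph held up to the camera would produce near-constant values across frames and fail such a check, which is the motivation for combining smile and blink values described elsewhere in this report.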

8.2 FACIAL RECOGNITION


Go beyond face detection and perform real-time face analysis to identify
people. You can use these Snapdragon SDK for Android capabilities to develop
apps that can add users to an internal database through face registration and then
identify users based on facial analysis. These features do not use any cloud-based
recognition and are done entirely offline.

These features allow your app to interact with a user in more ways:

1. Profiles – enable per-user settings and preferences

2. Turn-based gaming – allow players to take turns by setting up the UI

specific to each player

3. Photo apps – run automated framing or facial processing to set up

preferred users for picture taking

This feature set works with both real-time and stored images or videos.

Requirements

1. Hardware Requirements: Snapdragon S4, 200, 400, 600, or 800

2. Software Requirements: Android 4.0.3 and up

3. Sample Devices: Nexus 4, Samsung Galaxy S4 (Quad-core), LG Optimus

G, HTC One, Sony Xperia Z Tablet

Face detection from an image is a key problem in human-computer

interaction and pattern recognition research. Many approaches to automatic

face detection have been proposed in recent years, and they can be divided

into several families.

Feature-based approaches require the detection and measurement of

salient facial points; they use geometrical distances and angles between primary

facial features such as the eyes, nose and mouth to classify faces with an

economical representation of the face, in which the elements are based on their

relative positions and sizes. Template-matching strategies built on this earlier

work by using feature-based templates of the mouth, eyes and nose in addition

to whole-face templates. Later work suggested that the expected shape of

geometric features could be used to construct deformable templates, which could

be translated, rotated and deformed to fit the best representation of their shape

present in the image. However, low-level computer vision algorithms such as

feature-based approaches are not powerful enough to find all possible face

regions, and they are not likely to perform well on small faces or low-quality

images. Deformable templates are computationally expensive and not robust to

everyday variation. Also, although PCA is a very efficient technique designed

specifically to characterize the face region, in its original form it is not invariant

to image transformations such as scaling, shift or rotation, and it requires

complete relearning of the training data to add new individuals to the database.

Although the reported performance of pattern-based approaches is quite good,

and some of them can detect non-frontal faces, they are extremely

computationally expensive.

This project therefore proposes an effective approach that overcomes the

disadvantages of the existing methods and achieves face recognition through the

Qualcomm SDK. Since this is an industrial project, it is primarily used for an

attendance and payroll tracking system. The system provides good results at a

lower computational cost than other detection techniques. Face candidates are

found from skin and hair color, and the face is confirmed by the intersection

relationship (ICH) between a convex hull of skin-color regions (SCH) and a

convex hull of hair-color regions (HCH).

Algorithms that construct hulls of various objects have a broad range of

applications in mathematics and computer science.


In computational geometry, numerous algorithms with varying

computational complexities have been proposed for computing the hull of a

finite set of points. Computing the hull means constructing a non-ambiguous

and efficient representation of the required convex shape.
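One standard construction is Andrew's monotone chain algorithm, an O(n log n) method for the convex hull of a planar point set. The sketch below is a generic illustration of this kind of hull computation; it is not the SDK's SCH/HCH routine:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Andrew's monotone chain: sort the points, then build the lower and upper
// hulls separately, discarding points that would create a clockwise turn.
public class ConvexHull {

    static final class Point implements Comparable<Point> {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
        public int compareTo(Point o) {
            return x != o.x ? Double.compare(x, o.x) : Double.compare(y, o.y);
        }
    }

    // Cross product of OA x OB; positive means a counter-clockwise turn.
    static double cross(Point o, Point a, Point b) {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    }

    // Returns hull vertices in counter-clockwise order, without repetition.
    static List<Point> hull(Point[] pts) {
        Point[] p = pts.clone();
        Arrays.sort(p);
        int n = p.length, k = 0;
        Point[] h = new Point[2 * n];
        for (int i = 0; i < n; i++) {                 // lower hull
            while (k >= 2 && cross(h[k - 2], h[k - 1], p[i]) <= 0) k--;
            h[k++] = p[i];
        }
        for (int i = n - 2, t = k + 1; i >= 0; i--) { // upper hull
            while (k >= t && cross(h[k - 2], h[k - 1], p[i]) <= 0) k--;
            h[k++] = p[i];
        }
        return new ArrayList<>(Arrays.asList(h).subList(0, k - 1));
    }

    public static void main(String[] args) {
        Point[] pts = { new Point(0, 0), new Point(2, 0), new Point(1, 1),
                        new Point(2, 2), new Point(0, 2) };
        System.out.println(hull(pts).size()); // prints "4": (1,1) is interior
    }
}
```

Applied to the skin-color and hair-color pixel regions, the same routine would yield the SCH and HCH whose intersection relationship is tested above.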

8.3 METHODOLOGY

The implementation of face recognition technology includes the following

three stages:

1. Data acquisition.

2. Input processing.

3. Face image and decision making.

8.3.1 DATA ACQUISITION

Fig 8.2: Hull Point Analysis


The input can be a recorded video of the speaker or a still image. A

sample of 1 s duration consists of a 25-frame video sequence. More

than one camera can be used to produce a 3D representation of the face

and to protect against the use of photographs to gain unauthorized

access.

8.3.2 INPUT PROCESSING

A pre-processing module locates the eye positions and compensates

for the surrounding lighting conditions and colour variance. First, the

presence of a face or faces in the scene must be detected. Once a face is

detected, it must be localized, and a normalization process may be required

to bring the dimensions of the live facial sample into alignment with

those of the templates. Some facial recognition approaches use the whole

face, while others concentrate on facial components and/or regions such as

the lips and eyes. The appearance of the face can change considerably during

speech and due to facial expressions. In particular, the mouth is subject

to fundamental changes but is also a very important source for

discriminating faces. An approach to person recognition has therefore been

developed based on spatio-temporal modeling of features extracted from the

talking face. Models are trained specific to a person's speech articulation

and the way that the person speaks. Person identification is performed by

tracking the mouth movements of the talking face and by estimating the

likelihood that each model generated the observed sequence of features. The

model with the highest likelihood is chosen as the recognized person.
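The decision rule just described is an argmax over per-person model likelihoods. A minimal sketch, assuming hypothetical person names and log-likelihood scores:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the decision step: each person-specific model assigns a
// likelihood to the observed feature sequence, and the model with the
// highest likelihood names the recognized person. The names and
// log-likelihood values below are hypothetical.
public class LikelihoodDecision {

    // Returns the key with the highest likelihood, or null for an empty map.
    static String mostLikely(Map<String, Double> likelihoods) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, Double> e : likelihoods.entrySet()) {
            if (e.getValue() > bestScore) {
                bestScore = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Double> scores = new LinkedHashMap<>();
        scores.put("alice", -42.7);   // log-likelihoods: higher is better
        scores.put("bob", -35.1);
        scores.put("carol", -51.9);
        System.out.println(mostLikely(scores)); // prints "bob"
    }
}
```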


Fig 8.3: Block Diagram (talking face → lip tracker → normalization →

thresholding → alignment → score and decision → accept / reject)

8.3.3 FACE IMAGE AND DECISION MAKING

Synergetic computers are used to classify the optical and audio

features, respectively. A synergetic computer is a set of algorithms that

simulate synergetic phenomena. In the training phase, BioID creates a

prototype called a faceprint for each person. A newly recorded pattern is

preprocessed and compared with each faceprint stored in the database.


(Detection flow: face extraction and lip movement are each fed to a

synergetic computer, and their outputs are combined by the decision strategy.)

As comparisons are made, the system assigns a value to the comparison

using a scale of one to ten. If a score is above a predetermined

threshold, a match is declared.

Fig 8.4: Detection Flow

From the image of the face, particular traits are extracted. The system

may measure various nodal points of the face, such as the distance between

the eyes and the width of the nose. These are fed to the synergetic computer,

which consists of algorithms to capture, process, and compare the sample with

the one stored in the database. The system can also track lip movements,

which are likewise fed to the synergetic computer. By observing the likelihood

of each sample against the one stored in the database, we can accept or

reject the sample.



CHAPTER 9

REQUIREMENTS

9.1 HARDWARE REQUIREMENTS:

1) Processor: Qualcomm Snapdragon, 400 MHz or higher.

2) RAM: Minimum 64 MB primary memory.

3) Hard disk: Minimum 1 GB hard disk space.

4) Monitor: Preferably a color monitor (16-bit color or above).

5) Web camera.

9.2 SOFTWARE REQUIREMENTS:

1) Operating System: Windows operating system.

2) Languages: Java, Structured Query Language (SQL) and PHP.

3) Front-end tool: Android Studio emulator.

4) Back-end tool: MySQL Alpha version 6.0.

5) JDK: JDK 1.5 and above.

Qualcomm Snapdragon 800 specification (features and MSM8000 capability):

Processors

1) Applications: quad-core, up to 1.2 GHz; 64-bit processor; 512 KB L2 cache;

primary boot processor.

Memory support

1) System memory via EBI: LPDDR3 SDRAM, 32 bits wide, up to 533 MHz.

2) External memory via SDC1: eMMC v4.5 / SD flash devices.

Configurable GPIOs

1) GPIO ports: GPIO_0 to GPIO_121.

2) Input configurations: pull up, pull down, keeper, or no pull.

3) Output configurations: programmable drive current.

Camera interfaces

1) General camera features: pixel manipulation, image effects and

post-processing techniques, including defective pixel correction.

2) Control: I2C control.

Step 1: Connect the PC to the SDK board via an Ethernet cable.

Step 2: Connect the JTAG on WARP 1 to the system USB port.

Step 3: Open the Android Studio software by using its icon.

Step 4: Type the Java code for all the modules and check it using the

Android Studio emulator.

Step 5: Download the Java net bit streams to the SDK boards. The latest net

bit stream is available here. The bit stream can be downloaded either via

an external JTAG cable or a Compact Flash card.

Step 6: After downloading the code onto the board, create the .APK file

using the Build option in Android Studio.


CHAPTER 10

CONCLUSION

Thus an effective approach for face recognition has been developed that

overcomes the disadvantages of the existing methods, achieving face recognition

through the Qualcomm SDK. Since this is an industrial project, it is primarily

used for an attendance and payroll tracking system. The system provides good

results at a lower computational cost than other detection techniques. Prisoners'

photos can be taken and stored; in case of any escape, they can be tracked

through signal cameras, as the hull point analysis helps identify the right

person instantly. A hash map technique is used to avoid duplications. Live

recognition allows stored images to be identified live; they can also be deleted,

and the album can be reset.


CHAPTER 11

APPENDIX

11.1 CODING

package com.qualcomm.snapdragon.sdk.recognition.sample;

import java.util.Arrays;

import java.util.HashMap;

import java.util.Map;

import com.qualcomm.snapdragon.sdk.face.FacialProcessing;

import com.qualcomm.snapdragon.sdk.face.FacialProcessing.FEATURE_LIST;

import com.qualcomm.snapdragon.sdk.face.FacialProcessing.FP_MODES;

import android.os.Bundle;

import android.os.Vibrator;

import android.annotation.SuppressLint;

import android.app.Activity;

import android.app.AlertDialog;

import android.content.Context;

import android.content.DialogInterface;


import android.content.Intent;

import android.content.SharedPreferences;

import android.util.Log;

import android.view.Menu;

import android.view.View;

import android.widget.AdapterView;

import android.widget.GridView;

import android.widget.Toast;

public class FacialRecognitionActivity extends Activity {

private GridView gridView;

public static FacialProcessing faceObj;

public final String TAG = "FacialRecognitionActivity";

public final int confidence_value = 58;

public static boolean activityStartedOnce = false;

public static final String ALBUM_NAME = "serialize_deserialize";

public static final String HASH_NAME = "HashMap";

HashMap<String, String> hash;

@Override

protected void onCreate(Bundle savedInstanceState) {

super.onCreate(savedInstanceState);

setContentView(R.layout.activity_facial_recognition);

hash = retrieveHash(getApplicationContext()); // Retrieve the previously


// saved Hash Map.

if (!activityStartedOnce) // Check to make sure FacialProcessing object

// is not created multiple times.

{

activityStartedOnce = true;

// Check if Facial Recognition feature is supported in the device

boolean isSupported = FacialProcessing.isFeatureSupported(FEATURE_LIST.FEATURE_FACIAL_RECOGNITION);

if (isSupported) {

Log.d(TAG, "Feature Facial Recognition is supported");

faceObj = (FacialProcessing) FacialProcessing.getInstance();

loadAlbum(); // De-serialize a previously stored album.

if (faceObj != null) {

faceObj.setRecognitionConfidence(confidence_value);

faceObj.setProcessingMode(FP_MODES.FP_MODE_STILL);

}} else // If Facial recognition feature is not supported then

// display an alert box.

{Log.e(TAG, "Feature Facial Recognition is NOT supported");

new AlertDialog.Builder(this).setMessage("Your device does NOT support Qualcomm's Facial Recognition feature. ").setCancelable(false).setNegativeButton("OK",new DialogInterface.OnClickListener() {

public void onClick(DialogInterface dialog,int id) {


FacialRecognitionActivity.this.finish();

}}).show();

}}// Vibrator for button press

final Vibrator vibrate = (Vibrator) FacialRecognitionActivity.this.getSystemService(Context.VIBRATOR_SERVICE);

gridView = (GridView) findViewById(R.id.gridview);

gridView.setAdapter(new ImageAdapter(this));

gridView.setOnItemClickListener(new AdapterView.OnItemClickListener() {

public void onItemClick(AdapterView<?> parent, View v,int position, long id) {

vibrate.vibrate(85);

switch (position) {

case 0: // Adding a person

addNewPerson();

break;

case 1: // Updating an existing person

updateExistingPerson();

break;

case 2: // Identifying a person.

identifyPerson();

break;

case 3: // Live Recognition


liveRecognition();

break;

case 4: // Reseting an album

resetAlbum();

break;

case 5: // Delete Existing Person

deletePerson();

break;

}}}}

/*

* Method to handle adding a new person to the recognition album

*/

private void addNewPerson() {

Intent intent = new Intent(this, AddPhoto.class);

intent.putExtra("Username", "null");

intent.putExtra("PersonId", -1);

intent.putExtra("UpdatePerson", false);

intent.putExtra("IdentifyPerson", false);

startActivity(intent);

}

/*

* Method to handle updating of an existing person from the recognition


* album

*/

private void updateExistingPerson() {

Intent intent = new Intent(this, ChooseUser.class);

intent.putExtra("DeleteUser", false);

intent.putExtra("UpdateUser", true);

startActivity(intent);

}

/*

* Method to handle identification of an existing person from the

* recognition album

*/

private void identifyPerson() {

Intent intent = new Intent(this, AddPhoto.class);

intent.putExtra("Username", "Not Identified");

intent.putExtra("PersonId", -1);

intent.putExtra("UpdatePerson", false);

intent.putExtra("IdentifyPerson", true);

startActivity(intent);

}

/*

* Method to handle deletion of an existing person from the recognition


* album

*/

private void deletePerson() {

Intent intent = new Intent(this, ChooseUser.class);

intent.putExtra("DeleteUser", true);

intent.putExtra("UpdateUser", false);

startActivity(intent);

}

/*

* Method to handle live identification of the people

*/

private void liveRecognition() {

Intent intent = new Intent(this, LiveRecognition.class);

startActivity(intent);

}

/*

* Method to handle reseting of the recognition album

*/

private void resetAlbum() {

// Alert box to confirm before reseting the album

new AlertDialog.Builder(this).setMessage("Are you sure you want to RESET the album? All the photos saved will be LOST")


.setCancelable(true).setNegativeButton("No", null).setPositiveButton("Yes",

new DialogInterface.OnClickListener() {

public void onClick(DialogInterface dialog, int id) {

boolean result = faceObj.resetAlbum();

if (result) {

HashMap<String, String> hashMap = retrieveHash(getApplicationContext());

hashMap.clear();

saveHash(hashMap, getApplicationContext());

saveAlbum();

Toast.makeText(getApplicationContext(),

"Album Reset Successful.",

Toast.LENGTH_LONG).show();

} else {

Toast.makeText(

getApplicationContext(),

"Internal Error. Reset album failed",

Toast.LENGTH_LONG).show();

}}

}).show();

}

@Override

public boolean onCreateOptionsMenu(Menu menu) {


// Inflate the menu; this adds items to the action bar if it is present.

getMenuInflater().inflate(R.menu.facial_recognition, menu);

return true;

}

protected void onPause() {

super.onPause();

}

protected void onDestroy() {

super.onDestroy();

Log.d(TAG, "Destroyed");

if (faceObj != null) // If FacialProcessing object is not released, then

// release it and set it to null

{

faceObj.release();

faceObj = null;

Log.d(TAG, "Face Recog Obj released");

} else {

Log.d(TAG, "In Destroy - Face Recog Obj = NULL");

}}

@Override

protected void onStop() {

super.onStop();


}

protected void onResume() {

super.onResume();}

@Override

public void onBackPressed() { // Destroy the activity to avoid stacking of

// android activities

super.onBackPressed();

FacialRecognitionActivity.this.finishAffinity();

activityStartedOnce = false;

}

/*

* Function to retrieve a HashMap from the Shared preferences.

* @return

*/

protected HashMap<String, String> retrieveHash(Context context) {

SharedPreferences settings = context.getSharedPreferences(HASH_NAME, 0);

HashMap<String, String> hash = new HashMap<String, String>();

hash.putAll((Map<? extends String, ? extends String>) settings.getAll());

return hash;

}

/*

* Function to store a HashMap to shared preferences.


* @param hash

*/

protected void saveHash(HashMap<String, String> hashMap, Context context) {

SharedPreferences settings = context.getSharedPreferences(HASH_NAME, 0);

SharedPreferences.Editor editor = settings.edit();

editor.clear();

Log.e(TAG, "Hash Save Size = " + hashMap.size());

for (String s : hashMap.keySet()) {

editor.putString(s, hashMap.get(s));

}

editor.commit();

}

/*

* Function to retrieve the byte array from the Shared Preferences.

*/

public void loadAlbum() {

SharedPreferences settings = getSharedPreferences(ALBUM_NAME, 0);

String arrayOfString = settings.getString("albumArray", null);

byte[] albumArray = null;

if (arrayOfString != null) {

String[] splitStringArray = arrayOfString.substring(1,arrayOfString.length() - 1).split(", ");


albumArray = new byte[splitStringArray.length];

for (int i = 0; i < splitStringArray.length; i++) {

albumArray[i] = Byte.parseByte(splitStringArray[i]);

}

faceObj.deserializeRecognitionAlbum(albumArray);

Log.e("TAG", "De-Serialized my album");

}}

public void saveAlbum() {

byte[] albumBuffer = faceObj.serializeRecogntionAlbum();

SharedPreferences settings = getSharedPreferences(ALBUM_NAME, 0);

SharedPreferences.Editor editor = settings.edit();

editor.putString("albumArray", Arrays.toString(albumBuffer));

editor.commit();

}

}
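The saveAlbum method above stores the serialized album byte array as its Arrays.toString form, and loadAlbum parses that form back. The round trip can be sketched in isolation; note that, like the listing above, the parser assumes a non-empty array:

```java
import java.util.Arrays;

// Self-contained sketch of the string round-trip used by saveAlbum and
// loadAlbum: a byte[] is stored as its Arrays.toString form ("[7, -3, ...]")
// and parsed back by stripping the brackets and splitting on ", ".
public class AlbumRoundTrip {

    static String encode(byte[] album) {
        return Arrays.toString(album);
    }

    static byte[] decode(String s) {
        // Assumes a non-empty array, as the original loadAlbum does.
        String[] parts = s.substring(1, s.length() - 1).split(", ");
        byte[] out = new byte[parts.length];
        for (int i = 0; i < parts.length; i++) {
            out[i] = Byte.parseByte(parts[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] album = { 7, -3, 0, 127 };
        String stored = encode(album); // "[7, -3, 0, 127]"
        System.out.println(Arrays.equals(album, decode(stored))); // prints "true"
    }
}
```

Encoding the array as Base64 would be more compact for a real album, but the string form shown here matches what the activity actually writes to SharedPreferences.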

// Excerpt from the live-recognition frame callback (a separate activity);

// the dangling braces from the original listing are closed here for clarity.

if (result) {

int numFaces = faceObj.getNumFaces();

if (numFaces == 0) {

Log.d("TAG", "No Face Detected");

if (drawView != null) {

preview.removeView(drawView);

drawView = new DrawView(this, null, false);

preview.addView(drawView);

}

} else {

faceArray = faceObj.getFaceData();

if (faceArray == null) {

Log.e("TAG", "Face array is null");

}

}

}

11.2 IMAGE PROCESSING AND RESULTS

In this project, an architecture for facial recognition systems is

implemented using the Qualcomm SDK board. In the proposed system we

applied the hull point analysis algorithm and a back-end PHP database

to improve the security of the database and produce good-quality images.

Fig 11.1 Adding user using add user module.

After saving the face of the person in the Add User module, it is stored

in the system, as shown in fig 11.2.


Fig 11.2 Update user using the Update Existing User module.

After updating the user identification with changes in the database, the

system can identify the user, which is generally used in the attendance

payroll system shown in fig 11.3.

Fig 11.3 Identify persons using the Identify People module.


After identifying the persons, the facial recognition system can perform

the live recognition process used in real-time systems, as shown in

fig 11.4.

Fig 11.4 Identify persons using the Live Recognition module.

As the facial recognition application is run by SQL programs at the back

end of the modules, it is easy to add or delete a user, as shown

in fig 11.5.

Fig 11.5 Remove user using the Delete Existing User module.

