
Human-Robot Interaction Based On Gesture Identification


1. INTRODUCTION

Robots are artificial agents with capacities for perception and action in the physical world, often referred to by researchers as the workspace. Their use was long confined to factories, but nowadays they are found in the most technologically advanced societies in such critical domains as search and rescue, military combat, mine and bomb detection, scientific exploration, law enforcement, entertainment and hospital care.

These new application domains imply a closer interaction with the user. The concept of closeness is to be taken in its full meaning: robots and humans share not only the workspace but also goals in terms of task achievement. This close interaction needs new theoretical models, on the one hand for the robotics scientists who work to improve the robots' utility, and on the other hand to evaluate the risks and benefits of this new "friend" for modern society.

Robots are poised to fill a growing number of roles in today's society, from factory automation to service applications to medical care and entertainment. While robots were initially used in repetitive tasks where all human direction is given a priori, they are becoming involved in increasingly complex and less structured tasks and activities, including interaction with the people needed to complete those tasks. This complexity has prompted the entirely new endeavour of Human-Robot Interaction (HRI): the study of how humans interact with robots, and of how best to design and implement robot systems capable of interacting with humans. The fundamental goal of HRI is to develop the principles and algorithms that make robot systems capable of direct, safe and effective interaction with humans. Many facets of HRI research relate to and draw from insights and principles of psychology, communication, anthropology, philosophy, and ethics, making HRI an inherently interdisciplinary endeavour.

A robot is a mechanical or virtual intelligent agent that can perform tasks automatically or with guidance, typically by remote control. In practice a robot is usually an electro-mechanical machine guided by computer and electronic programming. Robots can be autonomous, semi-autonomous or remotely controlled. The word robot first appeared in the play Rossum's Universal Robots by the Czech writer Karel Čapek in 1920.

Robots are used in an increasingly wide variety of tasks, such as vacuuming floors, mowing lawns, cleaning drains and building cars, in warfare, and in tasks that are too expensive or too dangerous to be performed by humans, such as exploring outer space or the bottom of the sea. Robots range from humanoids such as ASIMO and TOPIO to nano robots, swarm robots, industrial robots, military robots, and mobile and serving robots. The branch of technology that deals with robots is robotics.

At present there are two main types of robots, based on their use: general-purpose autonomous robots and dedicated robots. Robots can be classified by their specificity of purpose: a robot might be designed to perform one particular task extremely well, or a range of tasks less well. Of course, all robots can by their nature be re-programmed to behave differently, but some are limited by their physical form.

With the advances in artificial intelligence, research is focusing in part on the safest possible physical interaction, but also on a socially correct interaction that depends on cultural criteria. The goal is to build intuitive and easy communication with the robot through speech, gestures, and facial expressions.

Dautenhahn refers to friendly human-robot interaction as "Robotiquette", defining it as the "social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans". The robot has to adapt itself to our way of expressing desires and orders, and not the contrary. But everyday environments such as homes have much more complex social rules than those implied by factories or even military environments.

2. HUMAN-ROBOT INTERACTION

Human–robot interaction is the study of interactions between humans and robots, often referred to as HRI by researchers. It is a multidisciplinary field with contributions from human-computer interaction, artificial intelligence, robotics, natural language understanding, and the social sciences.

Human-robot interaction has been a topic of both science fiction and academic speculation since before any robots existed. Because HRI depends on knowledge of (sometimes natural) human communication, many aspects of HRI are continuations of human communication topics that are much older than robotics per se.

The origin of HRI as a discrete problem was stated by the 20th-century author Isaac Asimov in 1941, in his novel I, Robot. He states the Three Laws of Robotics as:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three laws of robotics frame the idea of safe interaction. The closer the human and the robot get, and the more intricate the relationship becomes, the more the risk of a human being injured rises. Nowadays, in advanced societies, manufacturers employing robots address this issue by never letting human and robot share the workspace at any time. This is achieved by the extensive use of safe zones and cages. Thus the presence of humans is completely forbidden in the robot workspace while it is working.

With the advances of artificial intelligence, autonomous robots could eventually have more proactive behaviours, planning their motion in complex unknown environments. These new capabilities would have to keep safety as the primary issue and efficiency as the second. To enable this new generation of robots, research is being done on human detection, motion planning, scene reconstruction and intelligent behaviour through task planning.

The basic goal of HRI is to define a general human model that could lead to principles and algorithms allowing more natural and effective interaction between humans and robots. Many in the field of HRI study how humans collaborate and interact, and use those studies to motivate how robots should interact with humans.

HRI has continued to be a topic of both academic and popular-culture interest. In fact, real-world robots came into existence long after plays, novels, and movies developed them as notions and began to ask questions regarding how humans and robots would interact, and what their respective roles in society could be. While not every one of those popular-culture works has affected the field of robotics research, there have been instances where ideas in the research world had their genesis in popular culture.

In I, Robot, the three laws were examined relative to commands that humans give robots, methods for humans to diagnose malfunctions, and ways in which robots can participate in society. The theoretical implications of how the three laws are designed to work have influenced the way that robot and agent systems operate today, even though the type of autonomous reasoning needed to implement a system that obeys the three laws does not yet exist.

At the other end of HRI research, the cognitive modelling of the "relationship" between humans and robots benefits both psychologists and robotics researchers, and user studies are often of interest to both sides. This research endeavour is thus itself part of human society.

Philip K. Dick's novel Do Androids Dream of Electric Sheep? (1968) is set in a future world (originally in the late '90s) where robots (called replicants) mingle with humans. The replicants are humanoid robots that look and act like humans, and special tests are devised to determine whether an individual is a human or a replicant. The test is related to the Turing Test, in that both involve asking probing questions that require human experiences and capacities to answer correctly. As is typical, the story also features a battle between humans and replicants.

George Lucas' Star Wars movies (starting in 1977) feature two robots, C-3PO and R2-D2, as key characters that are active, intuitive, even heroic. One of the most interesting features from a robot-design point of view is that, while one of the robots (C-3PO) is humanoid in form and the other (R2-D2) is not, both interact effectively with humans through social, assistive, and service interactions. C-3PO speaks, gestures, and acts as a less-than-courageous human. R2-D2, on the other hand, interacts socially only through beeps and movement, yet is understood and often preferred by the audience for its decisiveness and courage.

In the television show Star Trek: The Next Generation (1987-1994), an android named Data is a key team member with super-human intelligence but no emotions. Data's main dream was to become more human, finally mastering emotion. Data progressed to becoming an actor, a poet, a friend, and often a hero, presenting robots in a number of potentially positive roles.

Fig 2.1 An example of an HRI testbed: a humanoid torso on a mobile platform, and a simulation of the same system.

The short story and movie The Bicentennial Man features a robot who exhibits human-like creativity, carving sculptures from wood. Eventually he strikes out on his own, on a quest to find like-minded robots. His quest turns into a desire to be recognized as a human. Through cooperation with a scientist, he develops artificial organs in order to bridge the divide between himself and other humans, benefiting both himself and humanity. Eventually, he is recognized as a human when he creates his own mortality.

These examples, among many others, serve to frame the scope of HRI research and exploration. They also provide some of the critical questions regarding robots and society that have become benchmarks for real-world robot systems.

Scholtz describes five roles that a human may have when interacting with a robot: supervisor, operator, teammate, mechanic/programmer, and bystander. One or more of these values would be assigned to the INTERACTION-ROLE classification.

A supervisory role is taken by a human who needs to monitor the behavior of a robot but does not need to directly control it. For example, a supervisor of an unmanned vehicle may tell the robot where it should move; the robot then plans and carries out its task.

An operator needs more interaction with a robot, stepping in to teleoperate it or to change its behavior.

A teammate works with a robot to accomplish a task. An example would be a manufacturing robot that completes part of an assembly while a human works on another part of the same item.

A mechanic or programmer needs to physically change the robot's hardware or software.

A bystander does not control a robot but needs some understanding of what the robot is doing in order to share the same space. For example, a person who walks into a room with a robot vacuum cleaner needs to be able to avoid the robot safely.

2.1 HRI RESEARCH CHALLENGES

The study of HRI contains a wide variety of challenges, some of a basic research nature, exploring concepts general to HRI, and others of a domain-specific nature, dealing with direct uses of robot systems that interact with humans in particular contexts. This section overviews the following major research challenges within HRI: multimodal sensing and perception; design and human factors; developmental and epigenetic robotics; social, service and assistive robotics; and robotics for education.

Multi-Modal Perception

Real-time perception and dealing with uncertainty in sensing are among the most enduring challenges of robotics. For HRI, the perceptual challenges are particularly complex, because of the need to perceive, understand, and react to human activity in real time.

The range of sensor inputs for human interaction is far larger than for most other robotic domains in use today. HRI inputs include vision and speech, both major open challenges for real-time data processing. Computer vision methods that process human-oriented data such as facial expressions and gestures must be capable of handling a vast range of possible inputs and situations. Similarly, language understanding and dialog systems between human users and robots remain an open research challenge. Tougher still is obtaining an understanding of the connection between visual and linguistic data and combining the two toward improved sensing and expression.

Design And Human Factors

The design of the robot, particularly the human-factors concerns, is a key aspect of HRI. Research in these areas draws on similar research in human-computer interaction (HCI) but features a number of significant differences related to the robot's physical real-world embodiment. The robot's physical embodiment, its form and level of anthropomorphism, and the simplicity or complexity of its design are some of the key research areas being explored.

Developmental/Epigenetic Robotics

Developmental robotics, sometimes referred to as epigenetic robotics, studies robot cognitive development. Developmental roboticists focus on creating intelligent machines by endowing them with the ability to autonomously acquire skills and information. Research into developmental/epigenetic robotics spans a broad range of approaches. One effort has studied teaching task behavior using shaping and joint attention, a primary means used by children in observing the behavior of others in learning tasks. Developmental work also includes the design of primitives for humanoid movements, gestures, and dialog.

Social, Service And Assistive Robotics

Service and assistive robotics cover a very broad spectrum of application domains, such as office assistants, autonomous rehabilitation aids, and educational robots. This broad area integrates basic HRI research with real-world domains that require some service or assistive function. The study of social robots (or socially interactive robots) focuses on social interaction, and so is a proper subset of the problems studied under HRI.

Educational Robotics

Robotics has been shown to be a powerful tool for learning, not only as a topic of study, but also for other, more general aspects of science, technology, engineering, and math (STEM) education. A central aspect of STEM education is problem-solving, and robots serve as excellent means for teaching problem-solving skills in group settings. Based on the mounting success of robotics courses worldwide, there is now an active movement to develop robot hardware and software in service of education, starting from the youngest elementary school ages and up. Robotics is becoming an important tool for teaching computer science and introductory college engineering.

3. PROPOSED WORK

In this project we have established a successful interaction between a human and a robot, made possible by hand gesture identification. The gestures (left, right, forward and backward) made by the human hand are identified and converted to electrical signals (voltages) by an accelerometer. The accelerometer captures the motion in the X, Y and Z directions and produces corresponding voltages, which are transmitted to the receiver over a wireless link; ZigBee is used for this purpose because it is a more powerful and reliable method than the alternatives.

The receiver receives the transmitted signals and generates a control sequence to produce the corresponding motion in the autobot. The autobot is designed with three wheels, because controlling a three-wheeled autobot is easier and saves more power than controlling a four-wheeled one. In this autobot the front wheel is free to move in any direction, and the two back wheels are connected to the shafts of two motors. A wireless camera is provided on the receiver side, so the autobot can be controlled by a human standing at a remote location. The camera delivers live video to a monitor placed in the transmitter section, so by watching the video even a deaf and dumb person can control the autobot.

4. BLOCK DIAGRAMS

4.1 TRANSMITTER SECTION

Fig 4.1.2 Block Diagram Of Transmitter Section

The figure above shows the basic block diagram of the Human-Robot Interaction system. There are different ways for a human to interact with a robot, such as sound, gesture and touch; here we use the gesture method of interaction. For identifying the gesture of the human hand we use an accelerometer, followed by a processing unit, a PIC 16F876A microcontroller, which is an advanced, high-speed device. The output of the microcontroller is transmitted through a wireless communication method, the ZigBee protocol, which is a more advanced, high-speed, reliable and accurate method of wireless communication than other conventional wireless protocols.

The accelerometer used here is an analog accelerometer. It detects motion along the X, Y and Z directions and produces analog voltages corresponding to the motion. We cannot use these voltages in analog form, so the analog values are converted to digital format by the analog-to-digital converter inside the microcontroller.

In the microcontroller memory, predefined ranges of values are stored for each type of motion in X, Y and Z. When a motion occurs, the controller reads the values and compares them with the predefined ranges. If a value lies in a predefined range, the controller identifies that the motion occurred in the X, Y or Z direction. According to the accelerometer specification, for any one motion two coordinate values change and the third remains the same. So for the left, right, forward and backward movements some values are taken experimentally and ranges are assigned. If the accelerometer output is in a given range, the controller generates a particular code corresponding to that motion, i.e. 01 for left, 02 for right, and so on. These codes appear on one of the controller's ports as per the program and are transmitted through the ZigBee transmitter; a sketch of this classification logic is given below.

A further function of the controller is to initialize and monitor the stop count, a count maintained during the effective motion-detection and code-generation process.
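The following C sketch illustrates this range-comparison scheme. It is only a sketch: read_adc() is a stand-in for the PIC's ADC read routine, and the channel numbers, threshold windows and the forward/backward code values are illustrative assumptions; only the "01 for left, 02 for right" assignment comes from the description above.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the PIC's ADC read; returns a fixed mid-scale sample so
 * the sketch is self-contained. On the real hardware this would start a
 * conversion on the given channel and return the 10-bit result. */
static uint16_t read_adc(uint8_t channel)
{
    (void)channel;
    return 512;                     /* rest level: no gesture */
}

#define IN_RANGE(v, lo, hi) ((v) >= (lo) && (v) <= (hi))

/* Classify one accelerometer sample against predefined ranges. For each
 * gesture two axes move into a calibrated window while the third stays
 * near its rest value (assumed ~512 counts here); all windows are
 * hypothetical placeholders for the experimentally measured ranges. */
static uint8_t classify_gesture(void)
{
    uint16_t x = read_adc(0);       /* AN0: XOUT */
    uint16_t y = read_adc(1);       /* AN1: YOUT */
    uint16_t z = read_adc(2);       /* AN2: ZOUT */

    if (IN_RANGE(x, 300, 400) && IN_RANGE(z, 560, 660) && IN_RANGE(y, 480, 544))
        return 0x01;                /* left */
    if (IN_RANGE(x, 620, 720) && IN_RANGE(z, 560, 660) && IN_RANGE(y, 480, 544))
        return 0x02;                /* right */
    if (IN_RANGE(y, 300, 400) && IN_RANGE(z, 560, 660) && IN_RANGE(x, 480, 544))
        return 0x03;                /* forward (assumed code) */
    if (IN_RANGE(y, 620, 720) && IN_RANGE(z, 560, 660) && IN_RANGE(x, 480, 544))
        return 0x04;                /* backward (assumed code) */
    return 0x00;                    /* no recognised gesture */
}

int main(void)
{
    printf("gesture code: 0x%02X\n", classify_gesture());
    return 0;
}

The main loop would call classify_gesture() periodically and, on a non-zero result, send the code out through the ZigBee transmitter and restart the stop count.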


4.2 RECEIVER SECTION

Fig 4.2.3 Block Diagram Of Receiver Section

The figure above shows the receiver section of the Human-Robot Interaction system. When a motion occurs, the transmitter detects the type of motion and generates and transmits the corresponding code. The ZigBee receiver receives the code and produces another set of codes; these codes determine the task to be performed for each command, i.e. a task is assigned to each code. This is the main function of the 89C2051 microcontroller, a 20-pin microcontroller with two ports. The controller then sends the codes to the main controller of the autobot in a serial format and initializes a stop count. The stop count increments automatically, and the device continuously monitors its status. The code generation and sending process stops when the stop count reaches its maximum value or another motion occurs; a sketch of this dispatch loop is given below.
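A minimal sketch of this dispatch loop follows, assuming byte-oriented serial helpers as stand-ins for the 89C2051 serial routines; the STOP_MAX limit, the simulated input and the bounded loop are illustrative assumptions so the sketch can run on its own.

#include <stdint.h>
#include <stdio.h>

#define STOP_MAX 5          /* illustrative; the real limit would be larger */

/* Stand-ins for the serial routines: they simulate one incoming gesture
 * code and print what would be forwarded to the autobot controller. */
static int pending = 1;
static int uart_rx_ready(void) { return pending; }
static uint8_t uart_read(void) { pending = 0; return 0x01; /* "left" */ }
static void uart_write(uint8_t c) { printf("forward code 0x%02X\n", c); }

int main(void)
{
    uint16_t stop_count = 0;
    uint8_t current = 0x00;                 /* 0x00 = no active command */

    for (int tick = 0; tick < 10; tick++) { /* bounded stand-in for the
                                               controller's endless loop */
        if (uart_rx_ready()) {              /* new gesture code arrived */
            current = uart_read();
            stop_count = 0;                 /* restart the stop count */
        }
        if (current != 0x00 && stop_count < STOP_MAX) {
            uart_write(current);            /* pass the code to the autobot */
            stop_count++;
        } else {
            current = 0x00;                 /* stop once the count maxes out */
        }
    }
    return 0;
}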

The controller of the autobot receives the code generated by the receiver controller and generates a sequence of codes to control the motor driver. The motor driver is provided to interface the two motors with the controller, to supply more power to the motors, and to help provide a fast response. The motor driver can control two motors at a time and has internal ESD protection, thermal shutdown and high noise immunity. According to the code received from the autobot controller, the motor driver rotates the motor shafts in the clockwise or anticlockwise direction; thus the autobot motion occurs.

The autobot is a three-wheeled device, two of whose wheels are connected to the DC motors; the motor driver IC controls the movement of the motors. The front wheel is free to move in any direction, whereas the other two wheels can rotate in the clockwise and anticlockwise directions only. The three-wheel concept reduces the power requirement and power loss and improves the response, whereas a four-wheeled autobot would require four motors and two driver ICs.

5. HARDWARE SECTION

5.1 CIRCUIT DIAGRAMS

5.1.1 TRANSMITTER SECTION

Fig 5.1.1.4 Circuit Diagram Of Transmitter Section


The figure shows the circuit diagram of the transmitter of the Human-Robot Interaction system. The circuit diagram of the power supply is shown at the top of the figure: a 7805 voltage regulator IC regulates the incoming supply, and a power-supply indicator is also provided. The whole system works from a 5 V supply.

The transmitter mainly consists of an accelerometer, a PIC microcontroller and a ZigBee transceiver. The accelerometer used here is the ADXL335, an analog accelerometer. It detects the X, Y and Z directional motion of the human hand and produces corresponding analog voltages. The system is more compatible with digital values, so the analog values must be converted to digital format. The ADXL335 has three output pins for the X, Y and Z outputs, and the ADC in the PIC performs the conversion: the output pins of the ADXL335 are connected to three analog inputs of the PIC, i.e. pins 2, 3 and 4. The PIC converts the analog values to digital values and compares them with the predefined values stored in its memory.

According to the ADXL335 specification, for any motion two coordinate values change and one value remains the same. For example, for the forward motion of the hand, the X and Y coordinates produce particular values and Z remains at its previous value. For each motion (forward, backward, left and right) the X, Y and Z coordinate values are measured and stored in the microcontroller memory.

When a motion occurs, the accelerometer produces the corresponding output, and the PIC compares those values with the values in its memory. If the comparison succeeds, the PIC produces a particular code and sends it to the receiver through the ZigBee transmitter, which is connected to the transmit and receive pins of the PIC.

A crystal oscillator is also provided to generate a clock frequency of 20 MHz.

5.1.2 RECEIVER SECTION

Fig 5.1.2.5 Circuit Diagram Of Receiver Section


The figure shows the circuit diagram of the receiver. A power supply built around a 7805 voltage regulator IC provides the 5 V supply.

The main component is the 89C2051 microcontroller, a 20-pin microcontroller with two ports that works at a 12 MHz frequency. The ZigBee receiver is connected to the receive pin of the microcontroller. The ZigBee receiver receives the codes, and for each code the controller generates another code on port P1. Port 1 is pulled up with a resistor pack, and its output is connected to a 74HC573 latch IC, whose output is applied to the main controller of the autobot. The latch is used to provide a quick response.

5.1.3 AUTOBOT

Fig 5.1.3.6 Circuit Diagram Of Autobot


The figure shows the circuit diagram of the autobot, including its power supply. There is provision to give AC or DC supply to the device: a bridge rectifier is provided for AC supply, but commonly DC is given to keep the device wireless. A 7805 voltage regulator provides the 5 V supply at its output.

A PIC 18F4550 is the main controller used in the autobot. It is a USB-programmable, 8-bit microcontroller with flash programming capability. The special codes generated by the receiver microcontroller are applied to pins 27 to 30. The controller receives these codes and gives commands, saved in its memory, to the motor driver IC. These commands instruct the driver IC, which controls the movement of the motors and of the wheels attached to the motor shafts. For a left turn, the motor on the right side rotates clockwise at full speed while the motor on the left remains still. For a right turn, the left motor rotates clockwise at full speed while the right motor remains still. For forward motion both motors rotate clockwise at full speed, and for reverse motion both motors rotate anticlockwise. The table below summarizes the L293D driver's operating modes, and a sketch of the drive logic follows it.

PIN 1 (ENABLE)   PIN 2 (INPUT 1)   PIN 7 (INPUT 2)   FUNCTION
HIGH             LOW               HIGH              Turn clockwise
HIGH             HIGH              LOW               Turn anticlockwise
HIGH             LOW               LOW               Stop
HIGH             HIGH              HIGH              Stop
LOW              not applicable    not applicable    Stop

Table 5.1.1 L293D operation modes (pins 2 and 7 are the channel inputs; the outputs appear on pins 3 and 6)
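The sketch below maps the gesture codes onto the L293D inputs according to Table 5.1.1. The set_pin() helper, the pin labels and the 0x03/0x04 code values are assumptions for illustration; only the turn/stop logic follows the table and the description above.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical pin labels for the two L293D channels. */
enum pin { L_EN, L_IN1, L_IN2, R_EN, R_IN1, R_IN2 };

/* Stand-in for a port write; prints instead of touching hardware. */
static void set_pin(enum pin p, int level)
{
    static const char *names[] = {"L_EN", "L_IN1", "L_IN2",
                                  "R_EN", "R_IN1", "R_IN2"};
    printf("%-5s = %s\n", names[p], level ? "HIGH" : "LOW");
}

/* dir: +1 clockwise, -1 anticlockwise, 0 stop (rows of Table 5.1.1). */
static void motor(enum pin en, enum pin in1, enum pin in2, int dir)
{
    set_pin(en, dir != 0);          /* enable low = stop */
    set_pin(in1, dir < 0);          /* IN1 high, IN2 low = anticlockwise */
    set_pin(in2, dir > 0);          /* IN1 low, IN2 high = clockwise */
}

static void drive(uint8_t code)
{
    switch (code) {
    case 0x01: motor(L_EN, L_IN1, L_IN2, 0);  motor(R_EN, R_IN1, R_IN2, +1); break; /* left */
    case 0x02: motor(L_EN, L_IN1, L_IN2, +1); motor(R_EN, R_IN1, R_IN2, 0);  break; /* right */
    case 0x03: motor(L_EN, L_IN1, L_IN2, +1); motor(R_EN, R_IN1, R_IN2, +1); break; /* forward */
    case 0x04: motor(L_EN, L_IN1, L_IN2, -1); motor(R_EN, R_IN1, R_IN2, -1); break; /* reverse */
    default:   motor(L_EN, L_IN1, L_IN2, 0);  motor(R_EN, R_IN1, R_IN2, 0);  break; /* stop */
    }
}

int main(void)
{
    drive(0x01);   /* left turn: right motor clockwise, left motor still */
    return 0;
}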


5.2 MAIN COMPONENTS

5.2.1 ACCELEROMETER

An accelerometer is a device that measures proper acceleration, also called the four-acceleration. For example, an accelerometer on a rocket accelerating through space will measure the rate of change of the velocity of the rocket relative to any inertial frame of reference. However, the proper acceleration measured by an accelerometer is not necessarily the coordinate acceleration (rate of change of velocity). Instead, it is the acceleration associated with the phenomenon of weight experienced by any test mass at rest in the frame of reference of the accelerometer device. For an example where these types of acceleration differ, an accelerometer will measure a value of g in the upward direction when remaining stationary on the ground, because masses on Earth have weight m·g. By contrast, an accelerometer in gravitational free fall toward the center of the Earth will measure a value of zero because, even though its speed is increasing, it is at rest in a frame of reference in which objects are weightless.

Most accelerometers do not display the value they measure, but supply it to other devices. Real accelerometers also have practical limitations in how quickly they respond to changes in acceleration, and cannot respond to changes above a certain frequency.

Single- and multi-axis models of accelerometer are available to detect the magnitude and direction of the proper acceleration (or g-force) as a vector quantity, and can be used to sense orientation (because the direction of weight changes), coordinate acceleration (so long as it produces g-force or a change in g-force), vibration, shock, and falling (a case where the proper acceleration changes, since it tends toward zero). Micromachined accelerometers are increasingly present in portable electronic devices and video game controllers, to detect the position of the device or provide game input.

Pairs of accelerometers extended over a region of space can be used to detect differences (gradients) in the proper accelerations of the frames of reference associated with those points. These devices are called gravity gradiometers, as they measure gradients in the gravitational field. Such pairs of accelerometers may in theory also be able to detect gravitational waves.

Physical Principles

An accelerometer measures proper acceleration, which is the acceleration it experiences relative to free fall and is the acceleration felt by people and objects. Put another way, at any point in spacetime the equivalence principle guarantees the existence of a local inertial frame, and an accelerometer measures the acceleration relative to that frame. Such accelerations are popularly measured in terms of g-force.

An accelerometer at rest relative to the Earth's surface will indicate approximately 1 g upwards, because any point on the Earth's surface is accelerating upwards relative to the local inertial frame (the frame of a freely falling object near the surface). To obtain the acceleration due to motion with respect to the Earth, this "gravity offset" must be subtracted, and corrections made for effects caused by the Earth's rotation relative to the inertial frame.

The reason for the appearance of a gravitational offset is Einstein's equivalence principle, which states that the effects of gravity on an object are indistinguishable from acceleration. When held fixed in a gravitational field, for example by applying a ground reaction force or an equivalent upward thrust, the reference frame for an accelerometer (its own casing) accelerates upwards with respect to a free-falling reference frame. The effects of this acceleration are indistinguishable from any other acceleration experienced by the instrument, so that an accelerometer cannot detect the difference between sitting in a rocket on the launch pad and being in the same rocket in deep space while it uses its engines to accelerate at 1 g. For similar reasons, an accelerometer will read zero during any type of free fall. This includes use in a coasting spaceship in deep space far from any mass, a spaceship orbiting the Earth, an airplane in a parabolic "zero-g" arc, or any free fall in vacuum. Another example is free fall at a sufficiently high altitude that atmospheric effects can be neglected.

However, this does not include a (non-free) fall in which air resistance produces drag forces that reduce the acceleration until constant terminal velocity is reached. At terminal velocity the accelerometer will indicate 1 g of acceleration upwards. For the same reason a skydiver, upon reaching terminal velocity, does not feel as though he or she were in "free fall", but rather experiences a feeling similar to being supported (at 1 g) on a "bed" of uprushing air.

Acceleration is quantified in the SI unit metres per second squared (m/s²), in the cgs unit gal (Gal), or popularly in terms of g-force (g).

For the practical purpose of finding the acceleration of objects with respect to the Earth, such as for use in an inertial navigation system, a knowledge of local gravity is required. This can be obtained either by calibrating the device at rest or from a known model of gravity at the approximate current position.

APPLICATIONS

Engineering

Accelerometers can be used to measure vehicle acceleration. They allow for performance evaluation of both the engine/drivetrain and the braking systems.

Accelerometers can be used to measure vibration on cars, machines, buildings, process-control systems and safety installations. They can also be used to measure seismic activity, inclination, machine vibration, dynamic distance and speed with or without the influence of gravity. Accelerometers that measure gravity, wherein an accelerometer is specifically configured for use in gravimetry, are called gravimeters.

Notebook computers equipped with accelerometers can contribute to the Quake-Catcher Network (QCN), a BOINC project aimed at scientific research of earthquakes.


Industry

Accelerometers are also used for machinery health monitoring, to report the vibration, and its changes in time, of shafts at the bearings of rotating equipment such as turbines, pumps, fans, rollers, compressors, and cooling towers. Vibration monitoring programs are proven to warn of impending failure, save money, reduce downtime, and improve safety in plants worldwide by detecting conditions such as wear of bearings, shaft misalignment, rotor imbalance, gear failure or bearing faults which, if not attended to promptly, can lead to costly repairs. Accelerometer vibration data allows the user to monitor machines and detect these faults before the rotating equipment fails completely. Vibration monitoring programs are utilized in industries such as automotive manufacturing, machine tool applications, pharmaceutical production, power generation and power plants, pulp and paper, sugar mills, food and beverage production, water and wastewater, hydropower, petrochemical and steel manufacturing.

Building And Structural Monitoring

Accelerometers are used to measure the motion and vibration of a structure that is exposed to dynamic loads. Dynamic loads originate from a variety of sources, including:

Human activities – walking, running, dancing or skipping
Working machines – inside a building or in the surrounding area
Construction work – driving piles, demolition, drilling and excavating
Moving loads on bridges
Vehicle collisions
Impact loads – falling debris
Concussion loads – internal and external explosions
Collapse of structural elements
Wind loads and wind gusts
Air blast pressure
Loss of support because of ground failure
Earthquakes and aftershocks

Measuring and recording how a structure responds to these inputs is critical for assessing its safety and viability. This type of monitoring is called dynamic monitoring.

Consumer Electronics

Fig 5.2.1.7 Galaxy Nexus, an example of a smart phone with a built-in accelerometer

Accelerometers are increasingly being incorporated into personal electronic devices.

Motion Input

Some smartphones, digital audio players and personal digital assistants contain accelerometers for user-interface control; often the accelerometer is used to present landscape or portrait views of the device's screen, based on the way the device is being held.


Automatic Collision Notification (ACN) systems also use accelerometers in a system to call for help in the event of a vehicle crash. Prominent ACN systems include OnStar AACN service, Ford Link's 911 Assist, Toyota's Safety Connect, Lexus Link, and BMW Assist. Many accelerometer-equipped smartphones also have ACN software available for download. ACN systems are activated by detecting crash-strength g-forces.

Nintendo's Wii video game console uses a controller called a Wii Remote that contains a three-axis accelerometer and was designed primarily for motion input. Users also have the option of buying an additional motion-sensitive attachment, the Nunchuk, so that motion input can be recorded from both of the user's hands independently. An accelerometer is also used in the Nintendo 3DS system.

The Sony PlayStation 3 uses the DualShock 3 controller, which contains a three-axis accelerometer that can be used to make steering more realistic in racing games such as Motorstorm and Burnout Paradise.

The Nokia 5500 sport features a 3D accelerometer that can be accessed from software. It is used for step recognition (counting) in a sport application, and for tap-gesture recognition in the user interface. Tap gestures can be used for controlling the music player and the sport application, for example to change to the next song by tapping through clothing when the device is in a pocket. Other uses of the accelerometer in Nokia phones include pedometer functionality in Nokia Sports Tracker. Some other devices provide the tilt-sensing feature with a cheaper component, which is not a true accelerometer.

Sleep-phase alarm clocks use accelerometric sensors to detect the movement of a sleeper, so that the clock can wake the person when he or she is not in the REM phase and therefore awakes more easily.

Orientation Sensing

A number of 21st-century devices use accelerometers to align the screen depending on the direction the device is held, for example switching between portrait and landscape modes. Such devices include many tablet PCs and some smartphones and digital cameras.

For example, Apple uses an LIS302DL accelerometer in the iPhone, iPod Touch and the 4th- and 5th-generation iPod Nano, allowing the device to know when it is tilted on its side. Third-party developers have expanded its use with fanciful applications such as electronic bobbleheads. The BlackBerry Storm phone was also an early user of this orientation-sensing feature.

Fig 5.2.1.8 Orientation Detection

The Nokia N95 and Nokia N82 have accelerometers embedded inside them. The accelerometer was primarily used as a tilt sensor for tagging the orientation of photos taken with the built-in camera, and later became available to other applications through a firmware update.

As of January 2009, almost all new mobile phones and digital cameras contain at least a tilt sensor and sometimes an accelerometer for the purposes of auto image rotation, motion-sensitive mini-games, and shake correction when taking photographs.


5.2.1.1 ANALOG ACCELEROMETER ADXL335

Fig 5.2.1.9 ADXL 335

The ADXL335 is a small, thin, low-power, complete 3-axis accelerometer with signal-conditioned voltage outputs. The product measures acceleration with a minimum full-scale range of ±3 g. It can measure the static acceleration of gravity in tilt-sensing applications, as well as dynamic acceleration resulting from motion, shock, or vibration.

The user selects the bandwidth of the accelerometer using the CX, CY, and CZ capacitors at the XOUT, YOUT, and ZOUT pins. Bandwidths can be selected to suit the application, with a range of 0.5 Hz to 1600 Hz for the X and Y axes, and a range of 0.5 Hz to 550 Hz for the Z axis.

The ADXL335 is available in a small, low-profile, 4 mm × 4 mm × 1.45 mm, 16-lead, plastic lead frame chip scale package (LFCSP_LQ).


Functional Block

Fig 5.2.1.10 Functional Block Of ADXL 335

The ADXL335 is a complete 3-axis acceleration measurement system with a measurement range of ±3 g minimum. It contains a polysilicon surface-micromachined sensor and signal conditioning circuitry that implement an open-loop acceleration measurement architecture. The output signals are analog voltages proportional to acceleration. The accelerometer can measure the static acceleration of gravity in tilt-sensing applications as well as dynamic acceleration resulting from motion, shock, or vibration.

The sensor is a polysilicon surface-micromachined structure built on top of a silicon wafer. Polysilicon springs suspend the structure over the surface of the wafer and provide a resistance against acceleration forces. Deflection of the structure is measured using a differential capacitor that consists of independent fixed plates and plates attached to the moving mass. The fixed plates are driven by 180° out-of-phase square waves. Acceleration deflects the moving mass and unbalances the differential capacitor, resulting in a sensor output whose amplitude is proportional to acceleration. Phase-sensitive demodulation techniques are then used to determine the magnitude and direction of the acceleration.

The demodulator output is amplified and brought off-chip through a 32 kΩ resistor. The user then sets the signal bandwidth of the device by adding a capacitor. This filtering improves measurement resolution and helps prevent aliasing.

For most applications, a single 0.1 μF capacitor, CDC, placed close to the ADXL335 supply pins adequately decouples the accelerometer from noise on the power supply. However, in applications where noise is present at the 50 kHz internal clock frequency (or any harmonic thereof), additional care in power supply bypassing is required, because this noise can cause errors in acceleration measurement.

If additional decoupling is needed, a 100 Ω (or smaller) resistor or ferrite bead can be inserted in the supply line, and a larger bulk bypass capacitor (1 μF or greater) can be added in parallel to CDC. Ensure that the connection from the ADXL335 ground to the power supply ground is low impedance, because noise transmitted through ground has an effect similar to noise transmitted through VS.

The ADXL335 has provisions for band limiting the XOUT, YOUT, and ZOUT pins. Capacitors must be added at these pins to implement low-pass filtering for antialiasing and noise reduction. The equation for the 3 dB bandwidth is

F(-3 dB) = 1 / (2π × (32 kΩ) × C(X, Y, Z))

or, more simply,

F(-3 dB) = 5 μF / C(X, Y, Z)

The tolerance of the internal resistor (RFILT) typically varies by as much as ±15% of its nominal value (32 kΩ), and the bandwidth varies accordingly. A minimum capacitance of 0.0047 μF for CX, CY, and CZ is recommended in all cases.
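As a quick numerical check of the bandwidth formula, the short sketch below evaluates both forms for an example filter capacitor (the 0.1 μF value is an illustrative choice, not a project value):

#include <stdio.h>

int main(void)
{
    const double pi     = 3.14159265358979;
    const double r_filt = 32e3;     /* nominal internal resistor, ohms */
    const double c      = 0.1e-6;   /* example filter capacitor, farads */

    double f_exact  = 1.0 / (2.0 * pi * r_filt * c);   /* exact formula */
    double f_approx = 5e-6 / c;                        /* 5 uF / C shortcut */

    printf("exact: %.1f Hz, approx: %.1f Hz\n", f_exact, f_approx);
    return 0;
}

With C = 0.1 μF this prints roughly 49.7 Hz for the exact formula and 50.0 Hz for the approximation, showing why the 5 μF/C shortcut is adequate in practice.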

The ST pin controls the self-test feature. When this pin is set to VS, an electrostatic force is exerted on the accelerometer beam. The resulting movement of the beam allows the user to test whether the accelerometer is functional. The typical change in output is −1.08 g (corresponding to −325 mV) on the X-axis, +1.08 g (or +325 mV) on the Y-axis, and +1.83 g (or +550 mV) on the Z-axis. The ST pin can be left open-circuit or connected to common (COM) in normal use.

Never expose the ST pin to voltages greater than VS + 0.3 V. If this cannot be guaranteed due to the system design (for instance, if there are multiple supply voltages), then a low-VF clamping diode between ST and VS is recommended.

The selected accelerometer bandwidth ultimately determines the measurement resolution (the smallest detectable acceleration). Filtering can be used to lower the noise floor and improve the resolution of the accelerometer. Resolution is dependent on the analog filter bandwidth at XOUT, YOUT, and ZOUT.

The output of the ADXL335 has a typical bandwidth greater than 500 Hz. The user must filter the signal at this point to limit aliasing errors: the analog bandwidth must be no more than half the analog-to-digital sampling frequency to minimize aliasing. The analog bandwidth can be decreased further to reduce noise and improve resolution.

The ADXL335 noise has the characteristics of white Gaussian noise, which contributes equally at all frequencies and is described in terms of μg/√Hz (the noise is proportional to the square root of the accelerometer bandwidth). The user should limit bandwidth to the lowest frequency needed by the application to maximize the resolution and dynamic range of the accelerometer.
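The bandwidth-resolution trade-off can be made concrete with a small sketch. The ~150 μg/√Hz X/Y noise density and the 1.6 noise-bandwidth factor used here are assumptions taken as typical datasheet-style figures, not values stated in this report:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double density = 150e-6;              /* g/√Hz, assumed typical */
    const double bws[]   = {10.0, 50.0, 500.0}; /* example bandwidths, Hz */

    for (int i = 0; i < 3; i++) {
        /* rms noise = density × sqrt(BW × 1.6), 1.6 being the usual
         * single-pole noise-bandwidth factor (an assumption here) */
        double rms = density * sqrt(bws[i] * 1.6);
        printf("BW %5.0f Hz -> rms noise %.4f g\n", bws[i], rms);
    }
    return 0;
}

The output illustrates the point of the paragraph above: narrowing the bandwidth from 500 Hz to 10 Hz cuts the rms noise by a factor of about seven, directly improving the smallest detectable acceleration.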

5.2.2 ZIGBEE

ZigBee is a specification for a suite of high-level communication protocols using small, low-power digital radios, based on an IEEE 802 standard for personal area networks. Applications include wireless light switches, electrical meters with in-home displays, and other consumer and industrial equipment that requires short-range wireless transfer of data at relatively low rates. The technology defined by the ZigBee specification is intended to be simpler and less expensive than other WPANs, such as Bluetooth. ZigBee is targeted at radio-frequency (RF) applications that require a low data rate, long battery life, and secure networking. ZigBee has a defined rate of 250 kbit/s, best suited for periodic or intermittent data or a single signal transmission from a sensor or input device. ZigBee-based traffic management systems have also been implemented. The name refers to the waggle dance of honey bees after their return to the beehive.

ZigBee is a low-cost, low-power, wireless mesh network standard. The low cost allows the technology to be widely deployed in wireless control and monitoring applications. Low power usage allows longer life with smaller batteries, and mesh networking provides high reliability and more extensive range. ZigBee chip vendors typically sell integrated radios and microcontrollers with between 60 KB and 256 KB of flash memory.

ZigBee operates in the industrial, scientific and medical (ISM) radio bands: 868 MHz in Europe, 915 MHz in the USA and Australia, and 2.4 GHz in most jurisdictions worldwide. Data transmission rates vary from 20 to 250 kilobits per second.

The ZigBee network layer natively supports both star and tree networks, as well as generic mesh networks. Every network must have one coordinator device, tasked with its creation, the control of its parameters and basic maintenance. Within star networks, the coordinator must be the central node. Both trees and meshes allow the use of ZigBee routers to extend communication at the network level.

ZigBee builds upon the physical layer and medium access control defined in IEEE standard 802.15.4 (2003 version) for low-rate WPANs. The specification completes the standard by adding four main components: the network layer, the application layer, ZigBee device objects (ZDOs) and manufacturer-defined application objects, which allow for customization and favor total integration.

Besides adding two high-level network layers to the underlying structure, the most significant improvement is the introduction of ZDOs. These are responsible for a number of tasks, including keeping track of device roles, managing requests to join a network, device discovery and security.

Fig 5.2.2.11 Zigbee Protocol Stack

ZigBee is not intended to support powerline networking, but to interface with it, at least for smart metering and smart appliance purposes.

Because ZigBee nodes can go from sleep to active mode in 30 ms or less, latency can be low and devices can be responsive, particularly compared to Bluetooth wake-up delays, which are typically around three seconds. Because ZigBee nodes can sleep most of the time, average power consumption can be low, resulting in long battery life.

Uses

ZigBee protocols are intended for embedded applications requiring low data rates and low power consumption. The resulting network will use very small amounts of power — individual devices must have a battery life of at least two years to pass ZigBee certification.

Typical application areas include:

Home entertainment and control — home automation, smart lighting, advanced temperature control, safety and security, movies and music
Wireless sensor networks — starting with individual sensors like Telosb/Tmote and Iris from Memsic
Industrial control
Embedded sensing
Medical data collection
Smoke and intruder warning
Building automation

Device Types

There are three different types of ZigBee devices:

ZigBee Coordinator (ZC): The most capable device, the coordinator forms the root of the network tree and might bridge to other networks. There is exactly one ZigBee coordinator in each network, since it is the device that started the network originally. It is able to store information about the network, including acting as the Trust Center and repository for security keys.

ZigBee Router (ZR): As well as running an application function, a router can act as an intermediate router, passing on data from other devices.

ZigBee End Device (ZED): Contains just enough functionality to talk to the parent node (either the coordinator or a router); it cannot relay data from other devices. This relationship allows the node to be asleep a significant amount of the time, giving long battery life. A ZED requires the least amount of memory, and can therefore be less expensive to manufacture than a ZR or ZC.

Protocols

The protocols build on recent algorithmic research (Ad hoc On-demand Distance Vector, neuRFon) to automatically construct a low-speed ad hoc network of nodes. In most large network instances, the network will be a cluster of clusters. It can also form a mesh or a single cluster. The current ZigBee protocols support beacon-enabled and non-beacon-enabled networks.

In non-beacon-enabled networks, an unslotted CSMA/CA channel access mechanism is used. In this type of network, ZigBee routers typically have their receivers continuously active, requiring a more robust power supply. However, this allows for heterogeneous networks in which some devices receive continuously while others transmit only when an external stimulus is detected. The typical example of a heterogeneous network is a wireless light switch: the ZigBee node at the lamp may receive constantly, since it is connected to the mains supply, while a battery-powered light switch would remain asleep until the switch is thrown. The switch then wakes up, sends a command to the lamp, receives an acknowledgment, and returns to sleep. In such a network the lamp node will be at least a ZigBee router, if not the ZigBee coordinator; the switch node is typically a ZigBee end device.

In beacon-enabled networks, the special network nodes called ZigBee routers transmit periodic beacons to confirm their presence to other network nodes. Nodes may sleep between beacons, thus lowering their duty cycle and extending their battery life. Beacon intervals depend on the data rate; they may range from 15.36 milliseconds to 251.65824 seconds at 250 kbit/s, from 24 milliseconds to 393.216 seconds at 40 kbit/s, and from 48 milliseconds to 786.432 seconds at 20 kbit/s. However, low-duty-cycle operation with long beacon intervals requires precise timing, which can conflict with the need for low product cost.


In general, the ZigBee protocols minimize the time the radio is on, so as to

reduce power use. In beaconing networks, nodes only need to be active while a

beacon is being transmitted. In non-beacon-enabled networks, power consumption is

decidedly asymmetrical: some devices are always active, while others spend most of

their time sleeping.

Except for the Smart Energy Profile 2.0, ZigBee devices are required to conform to the IEEE 802.15.4-2003 Low-Rate Wireless Personal Area Network (LR-WPAN) standard. The standard specifies the lower protocol layers: the physical layer (PHY) and the media access control (MAC) portion of the data link layer (DLL). The basic channel access mode is "carrier sense, multiple access/collision avoidance" (CSMA/CA). That is, the nodes talk in the same way that people converse: they briefly check that no one is talking before they start. There are three notable exceptions to the use of CSMA. Beacons are sent on a fixed timing schedule and do not use CSMA. Message acknowledgments also do not use CSMA. Finally, devices in beacon-enabled networks that have low-latency real-time requirements may use Guaranteed Time Slots (GTS), which by definition do not use CSMA.
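
The unslotted CSMA/CA access just described can be summarized in a short C sketch. This is a simplification of the 802.15.4 algorithm under assumed default constants; channel_is_clear(), delay_backoff_periods() and transmit_frame() are hypothetical radio hooks, not a real driver API:

    #include <stdlib.h>

    #define MAX_CSMA_BACKOFFS 4   /* macMaxCSMABackoffs (assumed default) */
    #define MIN_BE 3              /* minimum back-off exponent            */
    #define MAX_BE 5              /* maximum back-off exponent            */

    /* Hypothetical radio hooks. */
    int  channel_is_clear(void);                        /* clear channel assessment */
    void delay_backoff_periods(unsigned n);
    void transmit_frame(const unsigned char *f, int n);

    int csma_ca_send(const unsigned char *frame, int len)
    {
        unsigned be = MIN_BE;
        for (unsigned nb = 0; nb <= MAX_CSMA_BACKOFFS; nb++) {
            /* Wait a random number of back-off periods in [0, 2^BE - 1]. */
            delay_backoff_periods((unsigned)rand() % (1U << be));
            if (channel_is_clear()) {          /* "no one is talking"...  */
                transmit_frame(frame, len);    /* ...so start talking     */
                return 0;
            }
            if (be < MAX_BE)
                be++;                          /* channel busy: back off longer */
        }
        return -1;                             /* channel access failure */
    }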

XBEE

Fig 5.2.2.12 XBEE


The XBee/XBee-PRO ZNet 2.5 OEM (formerly known as Series 2 and Series

2 PRO) RF Modules were engineered to operate within the ZigBee protocol and

support the unique needs of low-cost, low-power wireless sensor networks. The

modules require minimal power and provide reliable delivery of data between remote

devices.

Serial Communication

The XBee ZNet 2.5 OEM RF Modules interface to a host device through a logic-level asynchronous serial port. Through its serial port, the module can communicate with any logic- and voltage-compatible UART, or, through a level translator, with any serial device (for example, through a Digi proprietary RS-232 or USB interface board).

UART Data Flow

Devices that have a UART interface can connect directly to the pins of the RF

module as shown in the figure below.

Fig 5.2.2.13 UART DATA FLOW

Data enters the module UART through the DIN pin (pin 3) as an asynchronous serial signal. The signal should idle high when no data is being transmitted. Each data byte consists of a start bit (low), 8 data bits (least significant bit first) and a stop bit (high). The following figure illustrates the serial bit pattern of data passing through the module.

Fig 5.2.2.14 UART data packet 0x1F (decimal 31) as transmitted through the RF module. Example data format: 8-N-1 (data bits - parity - number of stop bits)

The module UART performs tasks, such as timing and parity checking, that are needed for data communications. Serial communication depends on the two UARTs being configured with compatible settings (baud rate, parity, start bits, stop bits, data bits).
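
To make the framing concrete, the sketch below bit-bangs one byte onto a line exactly as the 8-N-1 pattern above describes: idle high, a low start bit, eight data bits least-significant-bit first, then a high stop bit. set_din_pin() and bit_delay() are hypothetical helpers standing in for port access and baud-rate timing:

    /* Hypothetical hardware hooks: drive the DIN line, wait one bit period. */
    void set_din_pin(int level);
    void bit_delay(void);            /* e.g. about 104 us at 9600 baud */

    /* Send one byte as 8-N-1: start bit, 8 data bits LSB first, stop bit. */
    void uart_send_byte(unsigned char b)
    {
        int i;
        set_din_pin(0);                  /* start bit (low)           */
        bit_delay();
        for (i = 0; i < 8; i++) {        /* data bits, LSB first      */
            set_din_pin((b >> i) & 1);
            bit_delay();
        }
        set_din_pin(1);                  /* stop bit; line idles high */
        bit_delay();
    }

Calling uart_send_byte(0x1F) reproduces the pattern of Fig 5.2.2.14: five 1 bits followed by three 0 bits between the start and stop bits.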

5.2.3 PIC 18F4550

The PIC18F4550 is an 8-bit microcontroller with five main feature groups:

1. Universal serial bus features

2. Power managed modes

3. Flexible oscillator structure

4. Peripheral highlights

5. Special microcontroller features

Universal Serial Bus Features

USB V2.0 Compliant

Low Speed (1.5 Mb/s) and Full Speed (12 Mb/s)


Supports Control, Interrupt, Isochronous and Bulk Transfers

On-Chip USB Transceiver with On-Chip Voltage Regulator

Special Microcontroller Features

C compiler optimized architecture with optional extended instruction set

100,000 erase/write cycle Enhanced Flash program memory (typical)

1,000,000 erase/write cycle data EEPROM memory (typical)

Flash/data EEPROM retention: > 40 years

Self-programmable under software control

Priority levels for interrupts

8 x 8 single-cycle hardware multiplier

Extended Watchdog Timer (WDT):

- Programmable period from 4 ms to 131 s

Programmable code protection

Single-supply 5V In-Circuit Serial Programming™ (ICSP™) via two pins

In-Circuit Debug (ICD) via two pins

5.2.4 PIC 16F876A

Main Features

- 8-channel Analog-to-Digital Converter (A/D)
- Brown-out Reset (BOR)
- Analog Comparator module with:
  - Two analog comparators
  - Programmable on-chip voltage reference (VREF) module
  - Programmable input multiplexing from device inputs and internal voltage reference
- Only 35 single-word instructions to learn
- All single-cycle instructions except for program branches, which are two-cycle
- Operating speed: 20 MHz clock input (200 ns instruction cycle)
- 8K x 14 words of Flash program memory, 368 x 8 bytes of data memory (RAM), 256 x 8 bytes of EEPROM data memory
- Pinout compatible to other 28-pin or 40/44-pin PIC16CXXX and PIC16FXXX microcontrollers
- Low-power, high-speed Flash/EEPROM technology
- Fully static design
- Wide operating voltage range (2.0 V to 5.5 V)
- Commercial and industrial temperature ranges
- Low power consumption

Special Microcontroller Features

100,000 erase/write cycle Enhanced Flash program memory (typical)

1,000,000 erase/write cycle data EEPROM memory (typical)

Data EEPROM retention > 40 years

In-Circuit Serial Programming™ (ICSP™) via two pins

Single-supply 5V In-Circuit Serial Programming

Watchdog Timer (WDT) with its own on-chip RC oscillator for reliable operation

Programmable code protection

Power-saving Sleep mode

5.2.5 89C2051 MICROCONTROLLER

The AT89C2051 is a low-voltage, high-performance CMOS 8-bit microcomputer with 2K bytes of Flash programmable and erasable read-only memory (PEROM). The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard MCS-51 instruction set. By combining a versatile 8-bit CPU with Flash on a monolithic chip, the Atmel AT89C2051 is a powerful microcomputer which provides a highly flexible and cost-effective solution to many embedded control applications. The AT89C2051 provides the following standard features: 2K bytes of Flash, 128 bytes of RAM, 15 I/O lines, two 16-bit timer/counters, a five-vector two-level interrupt architecture, a full-duplex serial port, a precision analog comparator, and on-chip oscillator and clock circuitry. In addition, the AT89C2051 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The Idle mode stops the CPU while allowing the RAM, timer/counters, serial port and interrupt system to continue functioning. The Power-down mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next hardware reset.

5.2.6 L293D MOTOR DRIVER

The device is a monolithic integrated high-voltage, high-current four-channel driver designed to accept standard DTL or TTL logic levels and drive inductive loads (such as relays, solenoids, DC and stepping motors) and switching power transistors. To simplify use as two bridges, each pair of channels is equipped with an enable input. A separate supply input is provided for the logic, allowing operation at a lower voltage, and internal clamp diodes are included. This device is suitable for use in switching applications at frequencies up to 5 kHz. The L293D is assembled in a 16-lead plastic package which has 4 center pins connected together and used for heat sinking. The L293DD is assembled in a 20-lead surface-mount package which has 8 center pins connected together and used for heat sinking.

Features

1. 600 mA output current capability per channel

2. 1.2 A peak output current (non-repetitive) per channel

3. Enable facility

4. Over-temperature protection

5. Logical "0" input voltage up to 1.5 V

6. High noise immunity

7. Internal clamp diodes
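
Since each pair of channels shares an enable input, two channels can be used as one H-bridge driving a single DC motor: the two logic inputs set the direction and the enable pin gates the outputs. The following is a minimal sketch, assuming hypothetical pin identifiers rather than any particular board wiring:

    /* Hypothetical GPIO helper and pin names for one L293D bridge
       (IN1/IN2 are the logic inputs, EN12 enables channels 1 and 2). */
    enum { IN1, IN2, EN12 };
    void gpio_write(int pin, int level);

    void motor_forward(void) { gpio_write(IN1, 1); gpio_write(IN2, 0); gpio_write(EN12, 1); }
    void motor_reverse(void) { gpio_write(IN1, 0); gpio_write(IN2, 1); gpio_write(EN12, 1); }
    void motor_brake(void)   { gpio_write(IN1, 0); gpio_write(IN2, 0); gpio_write(EN12, 1); }
    void motor_coast(void)   { gpio_write(EN12, 0); }   /* outputs disabled */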

5.2.7 SL74HC573

This device contains protection circuitry to guard against damage due to high static voltages or electric fields. However, precautions must be taken to avoid application of any voltage higher than the maximum rated voltages to this high-impedance circuit. For proper operation, VIN and VOUT should be constrained to the range GND ≤ (VIN or VOUT) ≤ VCC.

Features

The SL74HC573 is identical in pinout to the LS/ALS573. The device inputs are compatible with standard CMOS outputs; with pull-up resistors, they are compatible with LS/ALS TTL outputs.

These latches appear transparent to data (i.e., the outputs change asynchronously) when Latch Enable is high. When Latch Enable goes low, data meeting the setup and hold time becomes latched.

Outputs directly interface to CMOS, NMOS and TTL

Operating voltage range: 2.0 to 6.0 V

Low input current: 1.0 μA

High noise immunity characteristic of CMOS devices
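
The transparent-latch behaviour translates into a simple use pattern: present the byte while Latch Enable is high, then drive Latch Enable low to hold it. A small sketch with hypothetical port helpers follows (real code must respect the device's setup and hold times):

    void write_bus(unsigned char value);   /* drive D0..D7 (hypothetical) */
    void set_le(int level);                /* drive the Latch Enable pin  */

    void latch_byte(unsigned char value)
    {
        set_le(1);          /* transparent: outputs follow the inputs        */
        write_bus(value);   /* data must meet setup time before LE falls     */
        set_le(0);          /* falling edge of LE: value is now held latched */
    }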


6. FLOWCHARTS

6.1 TRANSMITTER SECTION

Fig 6.1.15 Flowchart Of Transmitter Section

The figure above shows the flowchart of the transmitter section of the Human-Robot Interaction system. The accelerometer is the device used for the human interaction with the robot. It detects the gestures of the human hand and converts them into electrical voltages; that is, it detects the X, Y and Z directional motions of the hand and converts these motions into voltages. These voltages are encoded and transmitted to the robot at a remote location. This information is transmitted through ZigBee, one of the most effective wireless communication protocols. This is the basic working of the transmitter section.

At first all the devices are initialized, including the accelerometer, the ZigBee module and the code-generating section. The accelerometer is sensitive enough to detect the motions. When a motion occurs, the accelerometer detects whether it is in the X, Y or Z direction and produces corresponding output voltages. These voltages are applied to a code-generating circuit, which mainly consists of a PIC16F876A microcontroller containing an analog-to-digital converter. Before operation, ranges of accelerometer values corresponding to the X, Y and Z motions are specified and stored in the microcontroller memory. When a motion occurs, the controller compares the measured values against these stored ranges. For each range of values the microcontroller generates a pre-assigned code on one of its ports; for example, for an X & Y combination the code is 01, for X & Z the code is 02, and for Y & Z the code is 03.

So the controller checks the incoming values from the output pins of the accelerometer and generates the corresponding code if the values fall in a predefined range. Otherwise the device generates no code and waits for the occurrence of a motion.

After generating the predefined code, the microcontroller sends it to the remote location through the ZigBee device and initializes a stop count. The stop count increments repeatedly until another motion occurs. At the receiver, tasks are assigned to the codes that are transmitted. When another motion occurs, the stop count is reinitialized for that particular motion code. If no other motion occurs after the stop count is initialized, the controller checks the count, and when it reaches the defined count value the code generation stops and the device stops working.

Here we are using the ZigBee protocol, and ZigBee has a particular range. Suppose a particular motion generates a particular code and the task corresponding to that code is "forward motion". Without a stop count, the device would wait for another motion indefinitely, the robot at the receiver would keep moving, and it would travel out of the ZigBee range and beyond our control; after that, changes in the hand motion at the transmitter would no longer translate into motion of the robot. With the stop count, the device stops the operation when the counter reaches its maximum value and waits for any other movement.
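
The transmitter behaviour described in this section can be condensed into a hedged C sketch. The ADC channels, threshold ranges, timeout value and helper functions (adc_read, zigbee_send_code) are illustrative assumptions drawn from the description above, not the project's actual firmware:

    #define STOP_COUNT_MAX 50000UL   /* assumed timeout before code generation stops */

    unsigned adc_read(unsigned char channel);   /* hypothetical PIC16F876A ADC read */
    void zigbee_send_code(unsigned char code);  /* hypothetical UART write to XBee  */

    static int in_range(unsigned v, unsigned lo, unsigned hi) { return v >= lo && v <= hi; }

    void transmitter_loop(void)
    {
        unsigned long stop_count = 0;
        while (stop_count < STOP_COUNT_MAX) {
            unsigned x = adc_read(0), y = adc_read(1), z = adc_read(2);
            unsigned char code = 0;

            /* Compare against the pre-stored gesture ranges (limits assumed). */
            if      (in_range(x, 600, 700) && in_range(y, 600, 700)) code = 0x01; /* X & Y */
            else if (in_range(x, 600, 700) && in_range(z, 600, 700)) code = 0x02; /* X & Z */
            else if (in_range(y, 600, 700) && in_range(z, 600, 700)) code = 0x03; /* Y & Z */

            if (code != 0) {
                zigbee_send_code(code);   /* send the gesture code to the robot       */
                stop_count = 0;           /* new motion: re-initialize the stop count */
            } else {
                stop_count++;             /* no motion: count toward the timeout      */
            }
        }
        /* Stop count reached its maximum: code generation stops here. */
    }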

6.2 RECEIVER SECTION


Fig 6.2.16 Flowchart Of Receiver Section


The receiver mainly consists of a ZigBee receiver and an 89C2051 microcontroller, which is the main controller of the receiver. The receiver is fixed on an autobot that is able to move in any direction according to the commands. The main components of the autobot are a PIC18F4550, a motor driver IC and two wheels connected to DC motors; the front wheel is free to move in any direction.

In the non-operating condition, i.e. when the device is not detecting any motion, the motors are in the idle state. When a motion occurs, the transmitter detects it and sends the corresponding code. The ZigBee receiver receives the code and passes this data to the microcontroller in the receiver circuit. That controller decodes the received codes and, as per the program, generates other codes on one of its ports; these codes determine the movement of the autobot. Tasks such as left motion, right motion, forward motion, backward motion and stop are assigned in the autobot controller to the codes of the receiver controller.

On seeing the code generated by the receiver controller, the main controller of the autobot produces a sequence of codes to control the motor driver IC, and the driver IC controls the motion of the motors (forward, backward and so on). After this the PIC microcontroller initializes a stop count and waits for any other motion.

When another motion is detected, the device reinitializes the stop count, the code corresponding to that motion is generated, the main controller generates the corresponding control sequence for the motors, and the wheels of the autobot perform the motion. If the motion does not change, the device checks the stop count, and once it has reached its maximum allowable value the operation of the device stops.

The stop count concept helps to keep the device within our control range. In the receiver part we are using two controllers, one for controlling the receiver and one for controlling the autobot. Programming both the autobot and the receiver with one controller would add considerable complexity and would degrade the immediate response of the autobot to commands given in the form of gestures. By using two controllers we can control the device more accurately without excessively interrupting the main controller of the autobot.
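
A matching sketch of the autobot controller's dispatch loop ties the received codes to motor routines like those outlined for the L293D in section 5.2.6. Here receive_code() stands in for the UART read of the code forwarded by the receiver-side 89C2051, and the code-to-task mapping is an illustrative assumption:

    unsigned char receive_code(void);   /* hypothetical read of the forwarded code */

    /* Motor routines built on the L293D control sketch (two bridges, one per wheel). */
    void move_forward(void);
    void move_backward(void);
    void turn_left(void);
    void turn_right(void);
    void stop_motors(void);

    void autobot_loop(void)
    {
        for (;;) {
            switch (receive_code()) {    /* code values are assumed, per section 6.1 */
            case 0x01: move_forward();  break;
            case 0x02: move_backward(); break;
            case 0x03: turn_left();     break;
            case 0x04: turn_right();    break;
            default:   stop_motors();   break;   /* unknown code or timeout: idle */
            }
        }
    }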


7. PCB LAYOUTS

Fig 7.17 Component Layout Of Autobot


Fig 7.18 PCB Layout Of Autobot


Fig 7.19 PCB Layout Of Receiver


Fig 7.20 PCB Layout Of Transmitter


8. RESULT AND DISCUSSION

The project "Human Robotic Interaction Based On Gesture Identification" was designed so that the autobot can move forward, backward, to the right and to the left according to the motion of the hand. The main highlight of this project is the ZigBee transceiver, which is used for the data transfer between the receiver and the transmitter. Movement of the hand is detected by the accelerometer attached to the hand. This system can be used in home automation. The system also has a camera in the receiver section, so the autobot can be used for surveillance work. The use of the ZigBee transceiver makes it possible to control the autobot from another location.

Fig 8.21 Prototype of Robot


9. ADVANTAGES AND DISADVANTAGES

9.1 ADVANTAGES

Ease of control.

Movement of the autobot can be controlled by hand movements.

Fast response.

The module can be made into various forms as per the area of application.

User-friendly: one need not know anything about the robot, since it can be controlled by hand movements.

Efficient and low-cost design.

9.2 DISADVANTAGES

The camera in the receiver section consumes considerable power, so the robot cannot run on battery for a long time.


10. APPLICATIONS

Robots perform many services in our society, extending from industrial automation tools to medical care. They can be used in hazardous areas that humans cannot reach.

A deaf and mute person can also control the robot, so this system can be used in home automation. The system can be used in industrial areas for fast operation and ease of work. Giant machinery vehicles can be controlled by body movements.

In the mining industry, robots can be sent in before human workers to examine the environment. By knowing the environmental conditions inside the mines, appropriate precautions can be taken.


11. FUTURE SCOPE

HRI is going to be an important military application in the future. By translating the whole motion of a human body to a humanoid (human-like robot), we can make a machine clone of a human being, and such robots can be used for military applications. In this way we can reduce human casualties, since there is no direct involvement of human beings and the machine parts are not as easily damaged as human organs would be.

In the medical area, doctors could treat patients at a remote location while sitting in their own cabins under normal conditions.


12. CONCLUSION

This project proposes an authoring method capable of creating and controlling motions of industrial robots based on gesture identification. The proposed method is simple, user-friendly, cost-effective and intelligent, and it facilitates motion authoring of industrial robots using the hand, which is second only to language as a means of communication. The proposed robot motion authoring method is expected to provide user-friendly and intuitive solutions not only for various industrial robots but also for other types of robots, including humanoids.




APPENDIX
