VOLUME-1 ISSUE-5 NOVEMBER-2011
International Journal of Advances in
Engineering & Technology (IJAET)
URL : http://www.ijaet.org E-mail : [email protected]
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
i Vol. 1, Issue 5, pp. i-iii
Table of Content
S. No. Article Title & Authors (Vol. 1, Issue 5, Nov-2011) Page Nos.
1. APPLICATION OF SMES UNIT TO IMPROVE THE VOLTAGE
PROFILE OF THE SYSTEM WITH DFIG DURING GRID DIP AND
SWELL
A. M. Shiddiq Yunus, A. Abu-Siada and M. A. S. Masoum
1-13
2. HYBRID MODEL FOR SECURING E-COMMERCE
TRANSACTION
Abdul Monem S. Rahma, Rabah N. Farhan, Hussam J. Mohammad
14-20
3. DSSS DIGITAL TRANSCEIVER DESIGN FOR ULTRA
WIDEBAND
Mohammad Shamim Imtiaz
21-29
4. INTRODUCTION TO METASEARCH ENGINES AND RESULT
MERGING STRATEGIES: A SURVEY
Hossein Jadidoleslamy
30-40
5. STUDY OF HAND PREFERENCES ON SIGNATURE FOR RIGHT-
HANDED AND LEFT-HANDED PEOPLES
Akram Gasmelseed and Nasrul Humaimi Mahmood
41-46
6. DESIGN AND SIMULATION OF AN INTELLIGENT TRAFFIC
CONTROL SYSTEM
Osigwe Uchenna Chinyere, Oladipo Onaolapo Francisca, Onibere
Emmanuel Amano
47-57
7. DESIGN OPTIMIZATION AND SIMULATION OF THE
PHOTOVOLTAIC SYSTEMS ON BUILDINGS IN SOUTHEAST
EUROPE
Florin Agai, Nebi Caka, Vjollca Komoni
58-68
8. FAULT LOCATION AND DISTANCE ESTIMATION ON POWER
TRANSMISSION LINES USING DISCRETE WAVELET
TRANSFORM
Sunusi. Sani Adamu, Sada Iliya
69-76
9. AN INVESTIGATION OF THE PRODUCTION LINE FOR ENHANCED
PRODUCTION USING HEURISTIC METHOD
M. A. Hannan, H.A. Munsur, M. Muhsin
77-88
10. A NOVEL DESIGN FOR ADAPTIVE HARMONIC FILTER TO
IMPROVE THE PERFORMANCE OF OVER CURRENT RELAYS
A. Abu-Siada
89-95
11. ANUPLACE: A SYNTHESIS AWARE VLSI PLACER TO
MINIMIZE TIMING CLOSURE
Santeppa Kambham and Krishna Prasad K.S.R
96-108
12. FUNCTIONAL COVERAGE ANALYSIS OF OVM BASED
VERIFICATION OF H.264 CAVLD SLICE HEADER DECODER
Akhilesh Kumar and Chandan Kumar
109-117
13. COMPARISON BETWEEN GRAPH BASED DOCUMENT
SUMMARIZATION METHOD AND CLUSTERING METHOD
Prashant D.Joshi, S.G.Joshi, M.S.Bewoor, S.H.Patil
118-125
14. IMPROVED SEARCH ENGINE USING CLUSTER ONTOLOGY
Gauri Suresh Bhagat, Mrunal S. Bewoor, Suhas Patil
126-132
15. COMPARISON OF MAXIMUM POWER POINT TRACKING
ALGORITHMS FOR PHOTOVOLTAIC SYSTEM
J. Surya Kumari, Ch. Sai Babu
133-148
16. POWER QUALITY DISTURBANCE ON PERFORMANCE OF
VECTOR CONTROLLED VARIABLE FREQUENCY INDUCTION
MOTOR
A. N. Malleswara Rao, K. Ramesh Reddy, B. V. Sanker Ram
149-157
17. INTELLIGENT INVERSE KINEMATIC CONTROL OF SCORBOT-
ER V PLUS ROBOT MANIPULATOR
Himanshu Chaudhary and Rajendra Prasad
158-169
18. FAST AND EFFICIENT METHOD TO ASSESS AND ENHANCE
TOTAL TRANSFER CAPABILITY IN PRESENCE OF FACTS
DEVICE
K. Chandrasekar and N. V. Ramana
170-180
19. ISSUES IN CACHING TECHNIQUES TO IMPROVE SYSTEM
PERFORMANCE IN CHIP MULTIPROCESSORS
H. R. Deshmukh, G. R. Bamnote
181-188
20. KANNADA TEXT EXTRACTION FROM IMAGES AND VIDEOS
FOR VISION IMPAIRED PERSONS
Keshava Prasanna, Ramakhanth Kumar P, Thungamani.M, Manohar
Koli
189-196
21. COVERAGE ANALYSIS IN VERIFICATION OF TOTAL ZERO
DECODER OF H.264 CAVLD
Akhilesh Kumar and Mahesh Kumar Jha
197-203
22. DESIGN AND CONTROL OF VOLTAGE REGULATORS FOR
WIND DRIVEN SELF EXCITED INDUCTION GENERATOR
Swati Devabhaktuni and S. V. Jayaram Kumar
204-217
23. LITERATURE REVIEW OF FIBER REINFORCED POLYMER
COMPOSITES
Shivakumar S, G. S. Guggari
218-226
24. IMPLEMENTATION RESULTS OF SEARCH PHOTO AND
TOPOGRAPHIC INFORMATION RETRIEVAL AT A LOCATION
Sukhwant Kaur, Sandhya Pati, Trupti Lotlikar, Cheryl R, Jagdish T.,
Abhijeet D.
227-235
25. QUALITY ASSURANCE EVALUATION FOR PROGRAMS USING
MATHEMATICAL MODELS
Murtadha M. Hamad and Shumos T. Hammadi
236-247
26. NEAR SET AN APPROACH AHEAD TO ROUGH SET: AN
OVERVIEW
Kavita R Singh, Shivanshu Singh
248-253
27. MEASUREMENT OF CARBONYL EMISSIONS FROM EXHAUST
OF ENGINES FUELLED USING BIODIESEL-ETHANOL-DIESEL
BLEND AND DEVELOPMENT OF A CATALYTIC CONVERTER
FOR THEIR MITIGATION ALONG WITH CO, HC’S AND NOX.
Abhishek B. Sahasrabudhe, Sahil S. Notani, Tejaswini M. Purohit,
Tushar U. Patil and Satishchandra V. Joshi
254-266
28. IMPACT OF REFRIGERANT CHARGE OVER THE
PERFORMANCE CHARACTERISTICS OF A SIMPLE VAPOUR
COMPRESSION REFRIGERATION SYSTEM
J. K. Dabas, A. K. Dodeja, Sudhir Kumar, K. S. Kasana
267-277
29. AGC CONTROLLERS TO OPTIMIZE LFC REGULATION IN
DEREGULATED POWER SYSTEM
S. Farook, P. Sangameswara Raju
278-289
30. AUTOMATIC DIFFERENTIATION BETWEEN RBC AND
MALARIAL PARASITES BASED ON MORPHOLOGY WITH
FIRST ORDER FEATURES USING IMAGE PROCESSING
Jigyasha Soni, Nipun Mishra, Chandrashekhar Kamargaonkar
290-297
31. REAL ESTATE APPLICATION USING SPATIAL DATABASE
M. Kiruthika, Smita Dange, Swati Kinhekar, Girish B, Trupti G,
Sushant R.
298-309
32. DESIGN AND VERIFICATION ANALYSIS OF APB3 PROTOCOL
WITH COVERAGE
Akhilesh Kumar and Richa Sinha
310-317
33. IMPLEMENTATION OF GPS ENABLED CAR POOLING SYSTEM
Smita Rukhande, Prachi G, Archana S, Dipa D
318-328
34. APPLICATION OF MATHEMATICAL MORPHOLOGY FOR THE
ENHANCEMENT OF MICROARRAY IMAGES
Nagaraja J, Manjunath S.S, Lalitha Rangarajan, Harish Kumar. N
329-336
35. SECURING DATA IN AD HOC NETWORKS USING MULTIPATH
ROUTING
R. Vidhya and G. P. Ramesh Kumar
337-341
36. COMPARATIVE STUDY OF DIFFERENT SENSE AMPLIFIERS IN
SUBMICRON CMOS TECHNOLOGY
Sampath Kumar, Sanjay Kr Singh, Arti Noor, D. S. Chauhan & B.K.
Kaushik
342-350
37. CHARACTER RECOGNITION AND TRANSMISSION OF
CHARACTERS USING NETWORK SECURITY
Subhash Tatale and Akhil Khare
351-360
38. IMPACT ASSESSMENT OF SHG LOAN PATTERN USING
CLUSTERING TECHNIQUE
Sajeev B. U, K. Thankavel
361-374
39. CASCADED HYBRID FIVE-LEVEL INVERTER WITH DUAL
CARRIER PWM CONTROL SCHEME FOR PV SYSTEM
R. Seyezhai
375-386
40. A REVIEW ON: DYNAMIC LINK BASED RANKING
D. Nagamalleswary, A. Ramana Lakshmi
387-393
41. MODELING AND SIMULATION OF A SINGLE PHASE
PHOTOVOLTAIC INVERTER AND INVESTIGATION OF
SWITCHING STRATEGIES FOR HARMONIC MINIMIZATION
B. Nagaraju, K. Prakash
394-400
42. ENHANCEMENT OF POWER TRANSMISSION CAPABILITY OF
HVDC SYSTEM USING FACTS CONTROLLERS
M. Ramesh, A. Jaya Laxmi
401-416
43. EIGEN VALUES OF SOME CLASS OF STRUCTURAL
MATRICES THAT SHIFT ALONG THE GERSCHGORIN CIRCLE
ON THE REAL AXIS
T. D. Roopamala and S. K. Katti
417-421
44. TYRE PRESSURE MONITORING AND COMMUNICATING
ANTENNA IN THE VEHICULAR SYSTEMS
K. Balaji, B. T. P. Madhav, P. Syam Sundar, P. Rakesh Kumar, N.
Nikhita, A. Prudhvi Raj, M. Mahidhar
422-428
45. DEEP SUB-MICRON SRAM DESIGN FOR DRV ANALYSIS AND
LOW LEAKAGE
Sanjay Kr Singh, Sampath Kumar, Arti Noor, D. S. Chauhan &
B.K.Kaushik
429-436
46. SAG/SWELL MIGRATION USING MULTI CONVERTER
UNIFIED POWER QUALITY CONDITIONER
Sai Ram. I, Amarnadh.J, K. K. Vasishta Kumar
437-440
47. A NOVEL CLUSTERING APPROACH FOR EXTENDING THE
LIFETIME FOR WIRELESS SENSOR NETWORKS
Puneet Azad, Brahmjit Singh, Vidushi Sharma
441-446
48. SOLAR HEATING IN FOOD PROCESSING
N. V. Vader and M. M. Dixit
447-453
49. EXPERIMENTAL STUDY ON THE EFFECT OF METHANOL -
GASOLINE, ETHANOL-GASOLINE AND N-BUTANOL-
GASOLINE BLENDS ON THE PERFORMANCE OF 2-STROKE
PETROL ENGINE
Viral K Pandya, Shailesh N Chaudhary, Bakul T Patel, Parth D Patel
454-461
50. IMPLEMENTATION OF MOBILE BROADCASTING USING
BLUETOOTH/3G
Dipa Dixit, Dimple Bajaj and Swati Patil
462-472
51. IMPROVED DIRECT TORQUE CONTROL OF INDUCTION
MOTOR USING FUZZY LOGIC BASED DUTY RATIO
CONTROLLER
Sudheer H., Kodad S.F. and Sarvesh B.
473-479
52. INFLUENCE OF ALUMINUM AND TITANIUM ADDITION ON
MECHANICAL PROPERTIES OF AISI 430 FERRITIC STAINLESS
STEEL GTA WELDS
G. Mallaiah, A. Kumar and P. Ravinder Reddy
480-491
53. ANOMALY DETECTION ON USER BROWSING BEHAVIORS
FOR PREVENTION APP_DDOS
Vidya Jadhav and Prakash Devale
492-499
54. DESIGN OF LOW POWER LOW NOISE BIQUAD GIC NOTCH
FILTER IN 0.18 µM CMOS TECHNOLOGY
Akhilesh kumar, Bhanu Pratap Singh Dohare and Jyoti Athiya
500-506
Members of IJAET Fraternity A-F
Best Reviewers for this Issue are:
1. Dr. Sukumar Senthilkumar
2. Dr. Tang Aihong
3. Dr. Rajeev Singh
4. Dr. Om Prakash Singh
5. Dr. V. Sundarapandian
6. Dr. Ahmad Faridz Abdul Ghafar
7. Ms. G Loshma
8. Mr. Brijesh Kumar
APPLICATION OF SMES UNIT TO IMPROVE THE VOLTAGE
PROFILE OF THE SYSTEM WITH DFIG DURING GRID DIP
AND SWELL
A. M. Shiddiq Yunus 1,2, A. Abu-Siada 2 and M. A. S. Masoum 2
1 Department of Mechanical Engineering, Energy Conversion Study Program, State Polytechnic of Ujung Pandang, Makassar, Indonesia
2 Department of Electrical and Computer Engineering, Curtin University, Perth, Australia
ABSTRACT
One of the most important parameters of a system to which wind turbine generators (WTGs) are connected is the voltage profile at the point of common coupling (PCC). In the early stages of wind power, WTGs could simply be disconnected from the system during faults to avoid damage. Following the rapid injection of WTGs into existing networks over the last decades, transmission system operators (TSOs) now require WTGs to stay connected under certain levels of fault and continue to support the grid. These new requirements have been compiled in new international grid codes. In this paper, a superconducting magnetic energy storage (SMES) unit is applied to improve the voltage profile of the PCC bus to which WTGs equipped with doubly fed induction generators (DFIGs) are connected, so as to meet the grid codes of Spain and Germany during grid voltage dip and swell. The voltage dip at the grid side is examined for compliance with the low voltage ride-through (LVRT) requirement, while the voltage swell at the grid side is examined for compliance with the high voltage ride-through (HVRT) requirement of both the Spanish and German voltage ride-through (VRT) curves.
KEYWORDS: Voltage Ride through (VRT), SMES, DFIG, Voltage Dip & Voltage Swell.
I. INTRODUCTION
The environmental impact of pollution from conventional energy sources and the implementation of carbon taxes have triggered increased utilization of renewable energy around the world. In addition, conventional energy resources are very limited and would soon be exhausted if exploited on a large scale, because oil, gas and coal are materials created over millions of years. The limited supply of, and high demand for, these energy resources will drive oil prices up over time. Attention is therefore now directed towards renewable energies, which are clean and abundantly available in nature [1]. The first wind turbines for electricity generation were developed at the beginning of the twentieth century, and the technology was improved step by step from the early 1970s. By the end of the 1990s, wind energy had re-emerged as one of the most important sustainable energy resources. During the last decade of the twentieth century, worldwide wind capacity doubled approximately every three years [2]. The global installed capacity increased from just under 2000 MW at the end of 1990 to 94000 MW by the end of 2007. In 2008, wind power already provided a little over 1% of global electricity generation, and by about 2020 wind power is expected to provide about 10% of global electricity [3]. Moreover, the 121 GW of wind turbine capacity installed by 2008 produced 260 TWh of electricity and saved about 158 million tons of CO2, and the total installed capacity of wind turbines is predicted to reach 573 GW by 2030 [4]. Power quality is a common consideration for any new construction or connection of a power generation system, including WTG installations and their connection to the
existing power system. In this paper, voltage dip (sag) and swell are considered as the conditions for the fault ride-through capability of a WTG equipped with a DFIG. Voltage dip (sag) and swell are two common types of power quality problem. A voltage dip is a decrease to between 0.1 and 0.9 pu in rms voltage or current at the power frequency, for durations of 0.5 cycles to 1 minute. Voltage dips are usually associated with system faults but can also be caused by the switching of heavy loads or the starting of large motors. A swell is defined as an increase in rms voltage or current at the power frequency, for durations from 0.5 cycles to 1 minute; typical magnitudes are between 1.1 and 1.8 pu. As with dips, swells are usually associated with system fault conditions, but they are much less common than voltage dips. A swell can occur due to a single line-to-ground fault on the system, resulting in a temporary voltage rise on the unfaulted phases. Swells can also be caused by switching off a large load or switching on a large capacitor bank [5, 6]. Since voltage dip is a common power quality problem in power systems, most studies focus on the performance of WTGs during voltage dips [7-14]. Although it is a less common power quality problem, voltage swell may also lead to the disconnection of WTGs from the grid. In this paper, voltage dip and swell are applied at the grid side to investigate their effects at the PCC, which determine whether the WTGs can remain connected under the grid codes used in this paper, as explained below, with and without the SMES unit connected.
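The dip and swell definitions quoted above can be captured in a short sketch (illustrative Python, not part of the paper; the duration bound assumes a 50 Hz system, so 1 minute is 3000 cycles):

```python
def classify_rms_event(v_pu, duration_cycles):
    """Classify an rms voltage deviation using the definitions above:
    a dip is 0.1-0.9 pu and a swell 1.1-1.8 pu, each lasting from
    0.5 cycles to 1 minute (3000 cycles at an assumed 50 Hz)."""
    if not 0.5 <= duration_cycles <= 3000:
        return "outside dip/swell duration range"
    if 0.1 <= v_pu <= 0.9:
        return "dip"
    if 1.1 <= v_pu <= 1.8:
        return "swell"
    return "normal/other"
```

For example, the 5-cycle events studied later in the paper (a drop to about 0.35 pu and a rise to about 1.35 pu) classify as a dip and a swell respectively.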
II. SPAIN AND GERMAN GRID CODES
In the early stages of wind power, WTGs could simply be disconnected from the system during faults to avoid damage. Following the rapid injection of WTGs into existing networks over the last decades, transmission system operators (TSOs) now require WTGs to stay connected under certain levels of fault and continue to support the grid. These new requirements have been compiled in new grid codes. However, most grid codes specify only the low voltage ride-through (LVRT) requirement, without any restriction on the high voltage ride-through (HVRT), which can lead to instability at the PCC. The international grid codes of Spain and Germany are used in this study; Figures 1a and 1b show the voltage ride-through (VRT) curves of Spain and Germany, respectively. These grid codes were selected because of their strict LVRT requirements, while also providing a complete VRT envelope including HVRT.
Figure 1. (a) FRT of Spain grid code and (b) FRT of German grid code [15]
In Figure 1(a), the FRT curve of Spain is divided into three main blocks. Block "A" represents the HVRT of the Spain grid code: the maximum allowable voltage in the vicinity of the PCC is 130%, lasting for 0.5 s, after which the maximum is reduced to 120% for the next 0.5 s. Any high voltage profile above block "A" leads to the disconnection of the WTGs from the system. The normal condition of this grid code lies in block "B"; all voltage profiles within this range (90% to 110%) are classified as normal. The low voltage ride-through
(LVRT) is defined by block "C". The minimum voltage allowed by this grid code is 50%, lasting for 0.15 s, rising to 60% until 0.25 s. The low voltage limit then ramps to 80% at 1 s, reaching the normal band 15 s after the fault occurs. The HVRT of the German grid code (shown in Figure 1(b)) is much stricter than Spain's: the maximum allowable voltage is 120% for 0.1 s (shown in block "A"). The normal condition shown in block "B" is the same as in the Spain grid code. The LVRT, however, is allowed to reach 45% for 0.15 s and should be at least 70% until 0.7 s, after which the voltage margin ramps to 85% at 1.5 s.
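As a rough sketch, the Spain LVRT envelope described above can be encoded as a piecewise lower bound and used to test a voltage trace for ride-through. This is illustrative only: the breakpoint values are read from the text, not from the official grid code, and the ramp segments are assumed linear.

```python
def spain_lvrt_limit(t):
    """Lower voltage bound (pu) at time t seconds after fault inception,
    per the Spain FRT description above (assumed linear ramps)."""
    if t <= 0.15:
        return 0.50
    if t <= 0.25:
        return 0.60
    if t <= 1.0:
        # ramp from 0.60 at 0.25 s to 0.80 at 1.0 s
        return 0.60 + (0.80 - 0.60) * (t - 0.25) / (1.0 - 0.25)
    if t <= 15.0:
        # ramp from 0.80 at 1 s to the 0.90 normal floor at 15 s
        return 0.80 + (0.90 - 0.80) * (t - 1.0) / (15.0 - 1.0)
    return 0.90

def rides_through(trace):
    """trace: list of (t_seconds, v_pu) samples after fault inception.
    True if no sample dips below the LVRT lower bound."""
    return all(v >= spain_lvrt_limit(t) for t, v in trace)
```

Under this envelope, a PCC voltage stuck at 0.35 pu (the uncompensated case reported later) fails ride-through, while a recovery to 0.8 pu passes.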
III. SYSTEM UNDER STUDY
There are two major classes of wind turbine generator: fixed-speed and variable-speed turbines. One of the most popular variable-speed wind turbines is the doubly fed induction generator (DFIG); about 46.8% of the wind turbines installed in 2002 were of this type [2]. A DFIG uses a medium-scale power converter, with slip rings making the electrical connection to the rotor. If the generator runs super-synchronously, electrical power is delivered to the grid through both the rotor and the stator; if it runs sub-synchronously, electrical power is delivered into the rotor from the grid. A speed variation of ±30% around synchronous speed can be obtained with a power converter rated at 30% of nominal power. The stator winding of the generator is coupled to the grid, and the rotor winding to a power electronic converter, nowadays usually a back-to-back voltage source converter with current control loops. In this way, the electrical and mechanical rotor frequencies are decoupled, because the power electronic converter compensates for the difference between mechanical and electrical frequency by injecting a rotor current with variable frequency; variable-speed operation thus becomes possible. A typical configuration of a DFIG is shown in Figure 2.
Figure 2. Typical configuration of WTG equipped with DFIG
The system under study, shown in Figure 3, consists of six 1.5 MW DFIGs connected to the AC grid at the PCC via a Y/∆ step-up transformer. The grid is represented by an ideal 3-phase voltage source of constant frequency, connected to the wind turbines via a 30 km transmission line. The reactive power produced by the wind turbines is regulated at 0 Mvar under normal operating conditions. For the average wind speed of 15 m/s used in this study, the turbine output power is 1 pu and the generator speed is 1 pu. The SMES unit is connected to the 25 kV (PCC) bus and is assumed to be fully charged at its maximum capacity of 2 MJ.
Figure 3. System under study
IV. SMES CONFIGURATION AND CONTROL SYSTEM
The SMES unit was selected for this study because of its advantages over other energy storage technologies: among storage options, it ranks first in efficiency, at 90-99% [16-18]. This high efficiency results from low power loss, since electric currents in the coil encounter almost no resistance and there are no moving parts, which means no friction losses. SMES stores energy within the magnetic field created by the flow of direct current in a coil of superconducting material. Typically, the coil is maintained in its superconducting state through immersion in liquid helium at 4.2 K within a vacuum-insulated cryostat. A power electronic converter interfaces the SMES to the grid and controls the energy flow bidirectionally. With the recent development of materials that exhibit superconductivity closer to room temperature, this technology may become economically viable [1]. The stored energy in the SMES coil can be calculated as:
E = (1/2) L_SM I_SM^2 (1)
where E is the SMES energy, I_SM is the SMES current and L_SM is the inductance of the SMES coil.
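Equation (1) can be checked numerically; the coil inductance below is a hypothetical value, chosen so that a 1 kA coil current gives the 2 MJ rating quoted for the unit in this study.

```python
def smes_energy(l_sm, i_sm):
    """Stored SMES coil energy per equation (1): E = 1/2 * L_SM * I_SM**2,
    with L_SM in henries, I_SM in amperes and E in joules."""
    return 0.5 * l_sm * i_sm ** 2

# Hypothetical sizing: a 4 H coil carrying 1000 A stores 2 MJ.
rated_energy = smes_energy(4.0, 1000.0)
```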
The SMES unit configuration used in this paper consists of a voltage source converter (VSC) and a DC-DC chopper connected through a DC shunt capacitor. The VSC is controlled by a hysteresis current controller (HCC), while the DC-DC chopper is controlled by a fuzzy logic controller (FLC), as shown in Figure 4.
Figure 4. SMES configuration
The DC-DC chopper, together with the FLC, controls the charging and discharging of the SMES coil. The DFIG active power and the superconducting coil current are used as inputs to the fuzzy logic controller to determine the DC chopper duty cycle. The duty cycle (D) is then compared with a 1000 Hz saw-tooth signal to produce the gating signal for the DC-DC chopper, as can be seen in Figure 5.
Figure 5. Control algorithm of DC-DC chopper
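The duty-cycle-versus-carrier comparison of Figure 5 can be sketched as follows (an assumed ideal rising saw-tooth; the paper specifies only the 1000 Hz carrier frequency, not its exact shape):

```python
def chopper_gate(duty, t, f_carrier=1000.0):
    """Gate command for the DC-DC chopper: ON while the duty command D
    exceeds the saw-tooth carrier, which is assumed to ramp 0 -> 1 once
    per 1 ms period."""
    carrier = (t * f_carrier) % 1.0
    return duty > carrier
```

Averaged over a carrier period, the switch is on for a fraction D of the time, which is what makes D the control handle for coil charge and discharge.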
Compared with the pulse width modulation (PWM) technique, hysteresis band current control has the advantages of easy implementation and fast response, and it does not depend on load parameters [19].
Hysteresis current control (HCC) is used to control the power flow exchanged between the grid and the SMES unit. The HCC compares the 3-phase line currents with the reference currents (Id* and Iq*), which are generated by conventional PI controllers from the deviations of the capacitor voltage Vdc and the system voltage Vs. To minimize the effect of interference between phases while maintaining the advantages of the hysteresis method, a phase-locked loop (PLL) technique is applied to limit the converter switching to a fixed predetermined frequency [20]. The control algorithm proposed in this paper is much simpler and closer to realistic application than the controller used in [21], where four PI controllers were employed, complicating the search for optimal PI parameters; moreover, in [21] only Pg was used as the control parameter of the DC-DC chopper, ignoring the energy capacity of the SMES coil. The detailed VSC control scheme used in this paper is shown in Figure 6. The duty cycle rules and the corresponding SMES actions are shown in Table 1. When D is equal to 0.5, the SMES unit is idle and there is no power exchange between the SMES unit and the system. When a voltage drop occurs because of a fault, the controller generates a duty cycle in the range 0 to 0.5, according to the values of the inputs, and power is transferred from the SMES coil to the system. The charging action (corresponding to a duty cycle higher than 0.5) takes place when the SMES coil charge has dropped; power is then transferred from the grid to the SMES unit.
Figure 6. Control algorithm of VSC
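The band logic of a hysteresis current controller can be illustrated with a minimal single-phase sketch (not the authors' implementation; the band width and switch polarity here are assumptions):

```python
def hcc_switch(i_meas, i_ref, band, prev_on):
    """One phase leg of a hysteresis current controller: switch ON when
    the measured current falls below the lower band edge (to drive the
    current up), OFF above the upper edge (to drive it down), and hold
    the previous state while inside the band."""
    if i_meas < i_ref - band / 2:
        return True
    if i_meas > i_ref + band / 2:
        return False
    return prev_on
```

Because switching happens only at band crossings, the switching frequency varies with operating point, which is why the PLL-based frequency limiting mentioned above is applied.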
Table 1. Rules of duty cycle
Duty cycle (D) SMES coil action
D = 0.5 standby condition
0 ≤ D < 0.5 discharging condition
0.5 < D ≤ 1 charging condition
The variation ranges of the SMES current and the DFIG output power, together with the corresponding duty cycle, are used to develop a set of fuzzy logic rules in the form of (IF-AND-THEN) statements relating the input variables to the output. The duty cycle for any set of input data (Pg and ISM) can be evaluated from the surface graph shown in Figure 7.
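The (Pg, ISM) to duty-cycle mapping of Figure 7 can be imitated with a crisp stand-in function that reproduces the behaviour of Table 1 (illustrative only; the thresholds and slopes below are assumptions, not the authors' fuzzy rule base):

```python
def duty_cycle(p_g_pu, i_sm_pu):
    """Crisp stand-in for the fuzzy surface: discharge (D < 0.5) when the
    DFIG power sags, recharge (D > 0.5) when power is normal but the coil
    current is below rating, otherwise idle at D = 0.5."""
    if p_g_pu < 0.95:
        # deeper power sag -> duty further below 0.5 -> stronger discharge
        return max(0.0, 0.5 * p_g_pu / 0.95)
    if i_sm_pu < 1.0:
        # power recovered, coil depleted -> charge back toward rating
        return min(1.0, 0.5 + 0.5 * (1.0 - i_sm_pu))
    return 0.5
```

Including the coil current as a second input is the point of the paper's FLC: charging stops automatically once the coil returns to rated current, instead of being driven by power alone.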
Figure 7. Surface graph- Duty cycle
V. SIMULATION RESULTS
In this paper, two grid disturbances are applied: a voltage dip of 20% and a voltage swell of 135%. Both disturbances are applied at 0.5 s and last for 5 cycles.
5.1. Voltage Dip
Figure 8. Complying voltage profile at PCC with Spain VRT during grid dip
Figure 9. Complying voltage profile at PCC with German VRT during grid dip
As can be seen in Figures 8 and 9, during the voltage dip at the grid side the voltage profile at the PCC drops to about 0.35 pu without SMES connected. This value is beyond the LVRT limits of both Spain and Germany, so in this case the DFIGs would have to be disconnected from the system. However, when the SMES unit is connected, the voltage drop at the PCC is significantly corrected, to about 0.8 pu, far
from the lowest LVRT limit of both Spain and Germany. When the fault is cleared, a transient naturally forces a voltage overshoot; however, the overshoot is still within the safety margin of both the Spanish and German HVRT.
Figure 10. Shaft speed during grid dip
During the voltage dip, the shaft speed increases when the grid dip occurs, to compensate for the power drop caused by the voltage drop at the PCC, as shown in Figure 10. In some severe grid dip cases, extreme oscillation of the shaft speed can lead to instability of the system. With the SMES unit connected to the PCC, the oscillation, settling time and overshoot of the shaft speed are all significantly reduced compared with the system without SMES.
Figure 11. Current behaviour of SMES coil during grid dip
Figure 12. Stored energy behaviour of SMES coil during grid dip
Figure 13. Voltage behaviour across the SMES coil during grid dip
Figure 14. Duty cycle of DC-DC chopper during grid dip
The behaviour of the SMES coil during the fault can be examined in Figures 11 to 13, which respectively show the SMES coil current, the SMES stored energy and the voltage across the coil. The SMES coil energy is 2 MJ under normal operating conditions; when the voltage dip occurs, the SMES coil instantly discharges its energy into the grid, as shown in Figure 12. The SMES current, shown in Figure 11, follows a characteristic similar to that of the energy stored in the coil. The charging and discharging of the SMES coil can also be observed in the voltage across the coil (VSM), shown in Figure 13. Under normal operating conditions, VSM is equal to zero; it goes negative during the discharging process and returns to zero after the fault is cleared. As mentioned before, the duty cycle of the DC-DC chopper plays an important role in determining the charging and discharging of the SMES coil energy. As shown in Figure 14, when the voltage dip occurs, the power produced by the DFIGs is also reduced; the FLC sees this reduction and acts according to the membership function rules of Figure 7, so the duty cycle lies in the range 0 to 0.5 at this stage. Once the fault is cleared, the control system acts to charge the SMES coil; in this stage the duty cycle lies in the range 0.5 to 1, returning to its idle value of 0.5 once the SMES coil energy reaches its rated capacity.
5.2. Voltage Swell
Figure 15. Complying voltage profile at PCC with Spain and German HVRT during grid swell
The grid swell starts at 0.5 s and lasts for 5 cycles. As can be observed in Figure 15, without the SMES unit connected, the voltage profile at the PCC rises above 130% during the grid swell; in this condition the DFIGs connected at the PCC would have to be disconnected from the grid to comply with the HVRT of both Spain and Germany, although once the fault is cleared the voltage profile soon recovers and remains within the safety margin of both countries' LVRT. When the SMES unit is connected, the voltage at the PCC is corrected into the safety margin of the HVRT of both grid codes, thus avoiding the disconnection of the DFIGs from the grid.
Figure 16. Shaft speed during grid swell
A voltage swell at the grid side forces the voltage at the PCC to increase accordingly, depending on the percentage level of the swell. The power is therefore forced above its predetermined rating; the speed control in this condition limits the speed to avoid over-speeding of the shaft, but at a certain swell level the over-speed protection may operate and shut the generator down. As shown in Figure 16, with the SMES unit connected to the PCC, the settling time and oscillation of the shaft speed are considerably reduced compared with the system without SMES.
Figure 17. Current behaviour of SMES coil during grid swell
Figure 18. Stored energy behaviour of SMES coil during grid swell
Figure 19. Voltage behaviour across the SMES coil during grid swell
Figure 20. Duty cycle of DC-DC chopper during grid swell
The behaviour of the SMES unit can be seen in Figures 17 to 20. Because the voltage swell at the grid side causes a short overshoot of the power produced by the DFIGs, the current in the SMES coil rises slightly, and likewise the energy in the SMES coil, following the FLC control action to damp the high voltage at the PCC. When the voltage swell is cleared, the voltage at the PCC drops slightly, causing the power produced by the DFIGs to drop as well. This small power drop is seen by the controller, which discharges a small amount of energy to improve the voltage at the PCC; this can be seen in Figure 15, where the voltage drop is smaller and the voltage recovery quicker with the SMES unit connected than without it.
VI. CONCLUSIONS
This paper investigates the use of an SMES unit to enhance the VRT capability of doubly fed induction generators, so as to comply with the grid codes of Spain and Germany. The results show that, without the SMES unit, the DFIGs must be disconnected from the grid, because the voltage drop during the grid dip and the voltage rise during the grid swell at the PCC cross the safety margins of the LVRT and HVRT of Spain and Germany; in this condition, wind turbines equipped with DFIGs must be disconnected from the power system to avoid damage to the turbines. However, using the proposed converter and chopper of the SMES unit, controlled by a hysteresis current controller (HCC) and a fuzzy logic controller (FLC) respectively, both the LVRT and HVRT capability of the DFIGs can be significantly improved, and their connection to the grid can be maintained to support the grid during fault conditions and ensure the continuity of power supply.
ACKNOWLEDGEMENT
The first author would like to thank the Higher Education Ministry of Indonesia (DIKTI) and the State
Polytechnic of Ujung Pandang for providing him with a PhD scholarship at Curtin University,
Australia.
REFERENCES
[1] L. Freris and D. Infield, Renewable Energy in Power System. Wiltshire: A John Wiley & Sons, 2008.
[2] T. Ackerman, Wind Power in Power System. West Sussex: John Wiley and Sons Ltd, 2005.
[3] P. Musgrove, Wind Power. New York: Cambridge University Press, 2010.
[4] "Global wind energy outlook 2010," Global Wind Energy Council, 2010.
[5] American National Standard (ANSI), "IEEE Recommended Practice for Monitoring Electric Power Quality," 1995.
[6] E. F. Fuchs and M. A. S. Masoum, "Power Quality in Power Systems and Electrical Machines,"
Elsevier, 2008.
[7] R. K. Behera and G. Wenzhong, "Low voltage ride-through and performance improvement of a grid
connected DFIG system," in Power Systems, 2009. ICPS '09. International Conference on, 2009, pp. 1-
6.
Authors
A. M. Shiddiq Yunus was born in Makassar, Indonesia. He received his B.Sc. from
Hasanuddin University in 2000 and his M.Eng.Sc. from Queensland University of
Technology (QUT), Australia, in 2006, both in Electrical Engineering. He is currently
pursuing his PhD at Curtin University, WA, Australia. He has been a lecturer in the
Department of Mechanical Engineering, Energy Conversion Study Program, State
Polytechnic of Ujung Pandang, since 2001. His fields of interest include superconducting
magnetic energy storage (SMES) and renewable energy.
A. Abu-Siada received his B.Sc. and M.Sc. degrees from Ain Shams University, Egypt, and
his PhD from Curtin University of Technology, Australia, all in Electrical Engineering.
Currently, he is a lecturer in the Department of Electrical and Computer Engineering at
Curtin University. His research interests include power system stability, condition
monitoring, superconducting magnetic energy storage (SMES), power electronics, power
quality, energy technology, and system simulation. He is a regular reviewer for the IEEE
Transactions on Power Electronics, the IEEE Transactions on Dielectrics and Electrical
Insulation, and the Qatar National Research Fund (QNRF).
Mohammad A. S. Masoum received his B.S., M.S. and Ph.D. degrees in Electrical and
Computer Engineering in 1983, 1985, and 1991, respectively, from the University of Colorado,
USA. Dr. Masoum's research interests include optimization, power quality and stability of power
systems/electric machines and distributed generation. He is the co-author of Power Quality in
Power Systems and Electrical Machines (New York: Academic Press, Elsevier, 2008).
Currently, he is an Associate Professor and the discipline leader for electrical power engineering
at the Electrical and Computer Engineering Department, Curtin University, Perth, Australia and a
senior member of IEEE.
HYBRID MODEL FOR SECURING E-COMMERCE
TRANSACTION
Abdul Monem S. Rahma1, Rabah N. Farhan2, Hussam J. Mohammad3
1Computer Science Dept., University of Technology, Iraq
2,3Computer Science Dept., College of Computer, Al-Anbar University, Iraq
ABSTRACT
The requirements for securing e-commerce transactions are privacy, authentication, integrity maintenance and
non-repudiation. These are crucial and significant issues for trade transacted over the internet through
e-commerce channels. In this paper we propose a cipher method that improves the Diffie-Hellman key exchange
by using truncated polynomials in the discrete logarithm problem (DLP) to increase the complexity of the
method over an unsecured channel; it combines the MD5 hashing algorithm, the AES symmetric key algorithm
and the asymmetric key algorithm of the Modified Diffie-Hellman (MDH).
KEYWORDS: key exchange, Securing E-commerce Transaction, Irreducible Polynomial
I. INTRODUCTION
As electronic commerce grows exponentially, the number of transactions and participants who use e-commerce applications has rapidly increased. Since all interactions among participants occur in an open network, there is a high risk of sensitive information being leaked to unauthorized users. Because such insecurity stems mainly from the anonymous nature of interactions in e-commerce, sensitive transactions must be secured. However, cryptographic techniques used to secure e-commerce transactions usually demand significant computational time, and complex interactions among participants can push network bandwidth usage beyond manageable limits [1].

Security problems on the Internet receive public attention, and the media carry stories of high-profile malicious attacks via the Internet against government, business, and academic sites [3]. Confidentiality, integrity, and authentication are needed. People need to be sure that their Internet communication is kept confidential. When customers shop online, they need to be sure that the vendors are authentic. When customers send transaction requests to their banks, they want to be certain that the integrity of the message is preserved [2]. From the above discussion, it is clear that we must pay careful attention to security in e-commerce.

Commonly, the exchange of data and information between customers, vendors and banks relies on personal computers that are available worldwide, based on 16-bit, 32-bit or 64-bit central processing units (CPUs) and commonly used operating systems such as Windows. Communication security requires that the time needed to exchange information and data between the customers, the vendors and the bank be such that no one can break the communication during this period.
Irreducible truncated polynomial mathematics has been adopted since 2000 in modern encryption methods such as AES. We use irreducible truncated polynomial mathematics to build the proposed system because it is highly efficient and compatible with personal computers. As a practical matter, secure e-commerce may come to mean the use of information security mechanisms to ensure the reliability of business transactions over insecure networks [4].
II. RELATED WORKS

In the following review, different methods used to increase e-commerce security are summarized. Sung W. T., Yugyung L., et al. (2001) proposed an adaptive secure protocol to support secure e-commerce transactions. This protocol dynamically adapts the security level based on the nature and sensitivity of the interactions among participants; its security class incorporates the security level of cryptographic techniques with a degree of information sensitivity. The authors implemented the Adaptive Secure Protocol and measured its performance, and the experimental results show that it provides e-commerce transactions with a high quality of security service [9]. Ganesan R. and Vivekanandan K. (2009) proposed a software implementation of a digital envelope for a secure e-commerce channel that combines the MD5 hashing algorithm for integrity, the AES symmetric key algorithm and the asymmetric key algorithm of Hyperelliptic Curve Cryptography (HECC). The algorithm was tested for various file sizes. The digital envelope combining AES and HECC is a good alternative security mechanism for the secure e-commerce channel to achieve privacy, authentication, integrity maintenance and non-repudiation [5]. Pathak H. K. and Manju Sanghi (2010) proposed a new public key cryptosystem and a key exchange protocol based on a generalization of the discrete logarithm problem using a non-abelian group of block upper triangular matrices of higher order. The proposed cryptosystem is efficient in producing keys of large sizes without the need for large primes. The security of both systems relies on the difficulty of discrete logarithms over finite fields [6].
III. AES ALGORITHM

The Advanced Encryption Standard (AES) is a symmetric block cipher. It operates on 128-bit blocks of data, which it can encrypt and decrypt using secret keys. The key size can be 128, 192 or 256 bits; the actual key size depends on the desired security level [7]. The algorithm consists of 10 rounds for a 128-bit key (12 rounds for a 192-bit key and 14 rounds for a 256-bit key). Each round has a round key derived from the original key; there is also a 0th round key, which is the original key itself. Each round takes an input of 128 bits and produces an output of 128 bits. There are four basic steps, called layers, that are used to form the rounds [8]: the ByteSub transformation (SB), a non-linear layer for resistance to differential and linear cryptanalysis attacks; the ShiftRow transformation (SR), a linear mixing step that causes diffusion of the bits over multiple rounds; the MixColumn transformation (MC), a layer with a purpose similar to ShiftRow; and AddRoundKey (ARK), in which the round key is XORed with the result of the above layers.
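As an illustration of two of the four layers, the following sketch (our construction, not the paper's code) applies ShiftRow and AddRoundKey to a 4x4 byte state:

```python
# Illustrative sketch (not full AES): the ShiftRow and AddRoundKey layers
# acting on a 4x4 byte state.

def shift_rows(state):
    """Rotate row r of the 4x4 state left by r positions."""
    return [[state[r][(c + r) % 4] for c in range(4)] for r in range(4)]

def add_round_key(state, round_key):
    """XOR the state with the round key, byte by byte."""
    return [[state[r][c] ^ round_key[r][c] for c in range(4)] for r in range(4)]

state = [[r * 4 + c for c in range(4)] for r in range(4)]
key = [[0xFF] * 4 for _ in range(4)]

shifted = shift_rows(state)
mixed = add_round_key(shifted, key)

# AddRoundKey is its own inverse: XORing with the same key twice
# restores the input, which is why the same layer appears in decryption.
assert add_round_key(mixed, key) == shifted
```

The full cipher would interleave these with ByteSub and MixColumn in each round; this fragment only shows how the byte-level permutation and key mixing operate.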
IV. BASICS OF MD5
MD5 (Message-Digest algorithm 5) is an Internet standard and one of the most widely used cryptographic hash functions, producing a 128-bit message digest. It has been employed in a wide variety of security applications. The main MD5 algorithm operates on a 128-bit state, divided into four 32-bit words [5].
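For instance, the 128-bit digest can be computed with Python's standard library (the input string is an illustrative choice, not the paper's test data):

```python
import hashlib

msg = b"e-commerce transaction"
digest = hashlib.md5(msg).hexdigest()

# A 128-bit digest prints as 32 hex characters (four 32-bit words).
assert len(digest) == 32

# Any change to the message changes the digest, which is what makes
# MD5 usable as an integrity check in the scheme described here.
assert hashlib.md5(b"e-commerce transaction!").hexdigest() != digest
```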
V. MODIFICATION OF DIFFIE-HELLMAN (MDH)
The idea is to improve the Diffie-Hellman key exchange by using truncated polynomials in the discrete logarithm problem (DLP), increasing the complexity of the method over an unsecured channel. The DLP of our cipher method is founded on polynomial arithmetic, where the elements of the finite field G are represented as polynomials. The original DLP implies a prime number for its modulo operation; the same technique is used in the proposed method, but considering an irreducible (prime) polynomial instead of an integer prime number. Before presenting the method, we introduce the discrete logarithm problem (DLP) in polynomials.

i. Discrete Logarithm Problem (DLP) in polynomials
In our method the DLP involves raising a polynomial to a polynomial power modulo an irreducible polynomial, i.e. computing F(z) = F(a)^F(x) mod F(g), where F(a) is a polynomial value, F(x) is a polynomial value (the exponent) and F(g) is an irreducible polynomial value. The algorithm to compute this is given below (Algorithm 1).

ii. The solution steps for this method

We suppose there are two sides that want to exchange a key (client and server); the client side encrypts the message and the server side decrypts it, as follows:

1. Key generation

There are two publicly known values: an irreducible polynomial F(p) and a polynomial value F(a) that is a primitive root of F(p).

Client Side: the client selects a random polynomial value F(XC) < F(p) and computes:
F(YC) = F(a)^F(XC) mod F(p) ………….. (1)

Server Side: the server selects a random polynomial value F(XS) < F(p) and computes:
F(YS) = F(a)^F(XS) mod F(p) ………….. (2)

Each side keeps its F(X) value private and makes its F(Y) value publicly available to the other side.

Client Side: the client computes the shared key from the F(YS) returned by the server:
Key = F(YS)^F(XC) mod F(p) ………….. (3)

Server Side: the server computes the shared key from the F(YC) returned by the client:
Key = F(YC)^F(XS) mod F(p) ………….. (4)

Now the two sides have the same secret key (SK):
SK = F(a)^(F(XC)·F(XS)) mod F(p) ………….. (5)
Algorithm 1: Modular Exponentiation Algorithm in Polynomials.

Input: F(a), F(x), F(g).
Output: F(z) = F(a)^F(x) mod F(g), a value in polynomial form.
Process:
Step 1: Convert F(x) to binary and store the bits in K as Kn, Kn-1, Kn-2, ..., K0.
Step 2: Initialize the polynomial variable F(z) to one: F(z) = 1.
Step 3: For i = n down to 0:
            F(z) = F(z) ⊗ F(z) mod F(g)
            If Ki = 1 then
                F(z) = F(z) ⊗ F(a) mod F(g)
Step 4: Return F(z).
Step 5: End.
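Algorithm 1 and the key exchange of equations (1)-(4) can be sketched in Python, representing GF(2) polynomials as integer bitmasks (bit i holds the coefficient of x^i). The irreducible polynomial, generator and private values below are illustrative choices, not the paper's parameters:

```python
# A minimal sketch of Algorithm 1 plus the polynomial key exchange.
# Polynomials over GF(2) are stored as integer bitmasks.

def gf2_mulmod(a, b, g):
    """Multiply two GF(2) polynomials, reducing modulo the irreducible g."""
    deg_g = g.bit_length() - 1
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a >> deg_g:        # reduce as soon as the degree reaches deg(g)
            a ^= g
    return result

def poly_exp_mod(fa, fx, fg):
    """Square-and-multiply: F(a)^F(x) mod F(g), scanning exponent bits MSB-first."""
    fz = 1
    for i in range(fx.bit_length() - 1, -1, -1):
        fz = gf2_mulmod(fz, fz, fg)         # F(z) = F(z) ⊗ F(z) mod F(g)
        if (fx >> i) & 1:
            fz = gf2_mulmod(fz, fa, fg)     # F(z) = F(z) ⊗ F(a) mod F(g)
    return fz

# Diffie-Hellman-style exchange over the polynomial field (eqs. 1-4):
fp = 0b100011011          # x^8 + x^4 + x^3 + x + 1 (the AES irreducible polynomial)
fa = 0b11                 # x + 1, an illustrative generator
xc, xs = 0b1011010, 0b0110011   # private polynomial values F(XC), F(XS)

yc = poly_exp_mod(fa, xc, fp)   # client's public value, eq. (1)
ys = poly_exp_mod(fa, xs, fp)   # server's public value, eq. (2)

# Both sides arrive at the same shared key, eqs. (3) and (4):
assert poly_exp_mod(ys, xc, fp) == poly_exp_mod(yc, xs, fp)
```

The keys agree because each side effectively raises F(a) to the product of the two private exponents, exactly as in equation (5).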
2. Encryption of the Message

To encrypt the message, first convert each letter of the message to a polynomial, then apply the following equation to find the ciphertext (C):
Ci = (Mi ⊗ Sk) mod F(g) ………….. (6)

3. Decryption of the Message

To decrypt the message, first compute the multiplicative inverse of the secret key (Sk'), then apply the following equation to recover the message:
Mi = (Ci ⊗ Sk') mod F(g) ………….. (7)
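A toy numeric walk-through of equations (6) and (7) in the small field GF(2^4); the irreducible polynomial, key and message values are our illustrative choices, not the paper's:

```python
# Encrypt/decrypt each message symbol by field multiplication with the
# secret key and its multiplicative inverse, as in eqs. (6) and (7).

G = 0b10011   # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b, g=G):
    """Carry-less multiply of two GF(2) polynomials, reduced mod g."""
    deg = g.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> deg:
            a ^= g
    return r

def gf_inv(a, g=G):
    """Multiplicative inverse by exhaustive search (fine for a 16-element field)."""
    return next(x for x in range(1, 1 << (g.bit_length() - 1))
                if gf_mul(a, x, g) == 1)

sk = 0b0110                  # shared secret key as a polynomial value
sk_inv = gf_inv(sk)          # Sk' in eq. (7)

message = [0b0001, 0b1010, 0b1111, 0b0111]       # letters already mapped to polynomials
cipher = [gf_mul(m, sk) for m in message]        # eq. (6): Ci = (Mi ⊗ Sk) mod F(g)
recovered = [gf_mul(c, sk_inv) for c in cipher]  # eq. (7): Mi = (Ci ⊗ Sk') mod F(g)

assert recovered == message
```

Because F(g) is irreducible, every nonzero key has an inverse, so decryption always undoes encryption.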
Figure (1): Modification of Diffie-Hellman (MDH)
VI. IMPLEMENTATION DETAILS
We present here a scheme that combines the best features of both symmetric and asymmetric encryption techniques. The data (plain text) to be transmitted is encrypted using the AES algorithm. The plain text is also used as input to MD5 to generate the AES key, and this key is encrypted using the Modified Diffie-Hellman (MDH). The use of MD5 is useful in two ways: first, to ensure the integrity of the transmitted data; second, to easily generate the secret key used in the AES algorithm. Thus the client sends the ciphertext of the message together with the ciphertext of the AES key, which also represents the ciphertext of the message
digest. Upon receiving the ciphertext of the message and the ciphertext of the AES key, the server first decrypts the ciphertext of the AES key with MDH to obtain the AES key. This key is then used to decrypt the ciphertext of the message by AES decryption to obtain the plain text. The plain text is again subjected to the MD5 hash algorithm and compared with the decrypted message digest to ensure the integrity of the data.
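The flow just described can be sketched as follows. This is our illustration, not the paper's implementation: a simple XOR keystream stands in for AES, the MDH transport of the key is omitted, and the helper names are assumptions:

```python
# A minimal sketch of the hybrid "digital envelope" flow: the MD5 digest
# of the plaintext serves both as integrity check value and as the
# 128-bit symmetric key. toy_cipher is a stand-in for AES, NOT real AES.
import hashlib

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """XOR keystream stand-in for AES; symmetric, so the same call decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Client side: derive the AES-128-sized key from the plaintext via MD5.
plaintext = b"order #1234: 2 units"
digest = hashlib.md5(plaintext).digest()   # 16 bytes = 128 bits
key = digest                               # doubles as the message digest
ciphertext = toy_cipher(plaintext, key)
# (in the full scheme, the key itself travels encrypted under MDH)

# Server side: decrypt, then recompute the digest to verify integrity.
recovered = toy_cipher(ciphertext, key)
assert recovered == plaintext
assert hashlib.md5(recovered).digest() == digest   # ACCEPT; mismatch -> REJECT
```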
Figure (2): Implementation details of the model
VII. RESULTS
The hybrid algorithm was executed on a PC with a 2.2 GHz Intel Pentium Dual-Core CPU. The programs were implemented using Microsoft Visual Studio 2008 (C#) and tested with three messages of different lengths (1000, 3000 and 5000 characters). The key size used for AES was 128 bits. Table 1 provides details of the time taken for encryption and decryption with AES and MDH, and for calculation of the MD5 message digest.
Table 1: Time in (seconds:milliseconds) for AES and MDH encryption and decryption, and calculation of the MD5 message digest

Message length   AES Enc   AES Dec   MDH Enc   MDH Dec   MD5
1000 char        0:30      0:17      0:700     0:500     0:20
3000 char        0:93      0:62      1:500     1:300     0:35
5000 char        0:187     0:109     2:800     2:400     0:52
VIII. ANALYSIS
With any cryptographic system using a 128-bit key, the total number of key combinations is 2^128 ≈ 3.4 × 10^38. The time required to check all possible combinations at a rate of 50 billion keys per second is approximately 2 × 10^20 years; thus AES is strong and efficient enough to be used in e-commerce.
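The brute-force estimate above can be checked with a few lines of arithmetic:

```python
# Exhausting a 128-bit keyspace at 50 billion keys per second.
total_keys = 2 ** 128
rate = 50e9                               # keys per second
seconds = total_keys / rate
years = seconds / (365.25 * 24 * 3600)

assert total_keys > 3.4e38                # about 3.4 x 10^38 combinations
assert 1e20 < years < 1e21                # on the order of 10^20 years
```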
The randomness of the Modified Diffie-Hellman (MDH) is very high regardless of the irreducible polynomial, because the result is always unpredictable; the complexity also remains high because the method depends on an irreducible truncated polynomial.
IX. CONCLUSION
Satisfying security requirements is one of the most important goals for e-commerce security designers. In this paper we give a protocol design for securing e-commerce transactions using a hybrid encryption technique. This hybrid encryption method increases the performance of the cryptographic algorithms and ensures confidentiality, integrity and authentication: the AES algorithm provides confidentiality, the MD5 hash function provides integrity, and the Modified Diffie-Hellman ensures authentication. We have tested the algorithm for various message sizes. The experimental results showed that the model improves interaction performance while providing a high quality of security service for e-commerce transactions.
REFERENCES
[1] Sung W. T., Yugyung L., Eun K. P., and Jerry S., "Design and Evaluation of Adaptive Secure Protocol for E-Commerce", IEEE, 2001.
[2] Abeer T. Al-Obaidy, "Security Techniques for E-Commerce Websites", Ph.D. Thesis, Department of Computer Science, University of Technology, 2010.
[3] Oppliger R., "Security Technologies for the World Wide Web, Second Edition", Artech House, Inc., USA, 2003.
[4] Wooseok Ham, "Design of Secure and Efficient E-commerce Protocols Using Cryptographic Primitives", M.Sc. Thesis, School of Engineering, Information and Communications University, 2003.
[5] Ganesan R., Vivekanandan K., "A Novel Hybrid Security Model for E-Commerce Channel", IEEE, 2009.
[6] Pathak H. K., Manju S., "Public key cryptosystem and a key exchange protocol using tools of non-abelian group", (IJCSE) International Journal on Computer Science and Engineering, Vol. 02, No. 04, 2010.
[7] Oswald E., "Encrypt: State of the Art in Hardware Architectures", Information Society Technologies, UK, 2005.
[8] Trappe W., Washington L., "Introduction to Cryptography with Coding Theory, Second Edition", Pearson Prentice Hall, USA, 2006.
[9] Sung W. T., Yugyung L., et al., "Design and Evaluation of Adaptive Secure Protocol for E-Commerce", IEEE, 2005.
Authors

Abdul Monem Saleh Rahma received his M.Sc. from Brunel University and his Ph.D. from Loughborough University of Technology, United Kingdom, in 1982 and 1985 respectively. He taught at the computer science department of Baghdad University and the computer engineering department of the Military College of Engineering from 1986 till 2003. He holds the position of Assistant Dean for scientific affairs and works as a professor at the Computer Science Department, University of Technology. He has published 82 papers in the field of computer science and supervised 24 Ph.D. and 57 M.Sc. students. His research interests include cryptography, computer security, biometrics, image processing and computer graphics, and he has attended and contributed to many scientific conferences in Iraq and other countries.

Rabah Nory Farhan received a Bachelor's degree in Computer Science from Almustanseria University in 1993, a High Diploma in Data Security/Computer Science from the University of Technology in 1998, a Master's degree in Computer Science from the University of Technology in 2000, and a Ph.D. in Computer Science from the University of Technology in 2006. He was an undergraduate computer science lecturer at the University of Technology from 2002 to 2006, and has been an undergraduate and postgraduate computer science lecturer and graduate advisor at the Computer College, University of Al-Anbar, from 2006 till now.

Hussam Jasim Mohammed Al-Fahdawi received his B.Sc. in Computer Science from Al-Anbar University, Iraq (2005-2009), and has been an M.Sc. student since 2010 in the Computer Science Department, Al-Anbar University. His fields of interest are e-commerce security, cryptography and related fields. Al-Fahdawi has taught subjects such as operating systems, computer vision and image processing.
DSSS DIGITAL TRANSCEIVER DESIGN FOR ULTRA WIDEBAND

Mohammad Shamim Imtiaz
Part-time Lecturer, Department of EEE, A.U.S.T, Dhaka, Bangladesh
ABSTRACT
Although ultra-wideband technology has been around for over 30 years, there is newfound excitement
about its potential for communications. In this paper we focus on a software radio transceiver design
for impulse-based UWB with the ability to transmit a raw data rate of 100 Mbps while retaining the
adaptability of a reconfigurable digital receiver. Direct sequence spread spectrum has become the
modulation method of choice for wireless local area networks because of its numerous advantages, such
as jammer suppression, code division multiple access and ease of implementation. We also observe its
characteristics and complete the modulation techniques with MATLAB Simulink. The latter includes bit
error rate testing for a variety of modulation schemes and wireless channels using pilot-based matched
filter estimation techniques. Ultimately, the transceiver design demonstrates the advantages and
challenges of UWB technology while boasting high data rate communication capability and providing the
flexibility of a research test bed.
KEYWORDS: Ultra-wideband (UWB), direct sequence spread spectrum (DSSS), wireless local area
networks (WLAN’s), personal communication systems (PCS), code division multiple access (CDMA).
I. INTRODUCTION
Ultra wideband (also known as UWB or digital pulse wireless) is a wireless technology for transmitting large amounts of digital data over a wide spectrum of frequency bands with very low power over a short distance. Ultra wideband radio can carry a huge amount of data over a distance of up to 230 feet at very low power (less than 0.5 mW), and it has the ability to carry signals through doors and other obstacles that tend to reflect signals at more limited bandwidths and higher power [5]. The concept of UWB was formulated in the early 1960s through research in time-domain electromagnetics and receiver design, performed primarily by Gerald F. Ross [1]. Through his work, the first UWB communications patent was awarded for the short-pulse receiver, which he developed while working for Sperry Rand Corporation. Throughout that time, UWB was referred to in broad terms as "carrier-less" or impulse technology. The term UWB was coined in the late 1980s to describe the development, transmission, and reception of ultra-short pulses of radio frequency (RF) energy.

For communication applications, high data rates are possible due to the large number of pulses that can be created in a short time duration [3][4]. Due to its low power spectral density, UWB can be used in military applications that require a low probability of detection [14]. UWB also has traditional applications in non-cooperative radar imaging, target sensor data collection, and precision locating and tracking [13]. A significant difference between traditional radio transmissions and UWB radio transmissions is that traditional systems transmit information by varying the power level, frequency, and/or phase of a sinusoidal wave, whereas UWB transmissions convey information by generating radio energy at specific time instants and occupying a large bandwidth, thus enabling pulse-position or time modulation [4]. UWB communications transmit in a way that does not interfere significantly with other, more traditional 'narrow band' and continuous carrier wave uses in the same frequency band [5][6]. However, first studies show that the rise in noise level caused by a number of UWB transmitters puts a burden on existing communications services [10]. This may be hard to bear for traditional system designs and may affect the stability of such existing systems.

The design of UWB is very different from that of conventional narrow band. In conventional narrow band, the frequency domain should be
considered when designing the filter or mixer, because the signals occupy a narrow frequency band. In UWB, on the other hand, the time domain should also be considered in the design, especially for the mixer, because the carrier-less signals possess a wide frequency band and the use of short pulses means a discontinuous signal. The Federal Communications Commission has recently approved the use of ultra wideband technology, allowing deployment primarily in the frequency band from 3.1 GHz, but also below 960 MHz for imaging applications [2]. Hence, the pulse width should be about 2 ns in order to be used below the 960 MHz frequency band.
Recently there has been a burst of research on UWB, and more and more papers are being published. Many papers describe transceiver circuits for UWB using different technologies, but here we propose a system model of a UWB transceiver with direct sequence spread spectrum technology. In this paper we focus on a software-based radio transceiver design for impulse-based UWB with the ability to transmit a raw data rate of 100 Mbps while retaining the adaptability of a reconfigurable digital receiver. We also introduce a transmitter and receiver for pulse-based ultra wideband modulation. Direct sequence spread spectrum (DSSS) has become the modulation method of choice for wireless local area networks (WLANs) and personal communication systems (PCS) because of its numerous advantages, such as jammer suppression, code division multiple access (CDMA), and ease of implementation. As with other spread spectrum technologies, the transmitted signal takes up more bandwidth than the information signal being modulated. The name 'spread spectrum' comes from the fact that the carrier signals occur over the full bandwidth (spectrum) of a device's transmitting frequency.

This paper is structured as follows: Section 2 briefly introduces the system blocks used to design the DSSS digital transceiver. Sections 3 and 4 present the design of the DPSK transmitter and DPSK receiver respectively. Section 5 exhibits the results taken from the oscilloscopes and discusses the findings. Section 6 suggests future work and modifications. Section 7 concludes the paper.
II. SYSTEM MODEL
The designed model for the transceiver, shown in Fig. 1, consists of a hierarchical system where blocks represent subsystems and oscilloscopes are placed along the path for display purposes.

The main components or blocks of this design are the PN sequence generator, XOR, unit delay, switch, pulse generator, derivative, integer delay, digital filter, product, gain and oscilloscope. The PN sequence generator block generates a sequence of pseudorandom binary numbers. A pseudo-noise sequence generator, which uses a shift register to generate sequences, can be used in a pseudorandom scrambler, a descrambler, and a direct-sequence spread-spectrum system [12]. Here, the PN sequence generator is used both for generating the incoming message and for generating the high-speed pseudorandom sequence used for spreading. The XOR block works as a mixer: it mixes two different inputs with each other, as a digital XOR does, and gives the output. The unit delay block holds and delays its input by the sample period you specify; this block is equivalent to the discrete-time delay operator. It accepts one input and generates one output, and each signal can be scalar or vector. If the input is a vector, the block holds and delays all elements of the vector by the same sample period. The pulse generator is capable of generating a variety of pulses with an assortment of options.

The switch is used for switching between the two different inputs and directing one to the output as required. The derivative block differentiates the input data. The pulse generator followed by two derivatives is used to perform bi-phase modulation as required. The integer delay is used to delay the 63-chip incoming data. The digital filter is used to create a digital filter for recovery purposes. Gain blocks are used for amplification. Oscilloscopes are placed along the path for display purposes.
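The PN generator block described above can be sketched as a linear-feedback shift register; the tap positions below are an illustrative choice that yields a maximal-length 63-chip sequence (matching the 63-chip length used in this design):

```python
# A minimal Fibonacci LFSR sketch of the PN (pseudo-noise) sequence
# generator. Taps are 1-indexed stage positions; the feedback bit is
# the XOR of the tapped stages, and the output is the last stage.

def lfsr_pn(taps, state, nbits):
    out = []
    for _ in range(nbits):
        out.append(state[-1])          # output the oldest stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]         # XOR of tapped stages
        state = [fb] + state[:-1]      # shift right, insert feedback bit
    return out

# 6-stage register, taps at stages 5 and 6: this implements the
# recurrence o[n] = o[n-5] XOR o[n-6] (primitive x^6 + x + 1),
# so the output is a maximal-length sequence of period 2^6 - 1 = 63.
seq = lfsr_pn(taps=(5, 6), state=[1, 0, 0, 0, 0, 0], nbits=126)

assert seq[:63] == seq[63:]    # the sequence repeats with period 63
assert sum(seq[:63]) == 32     # m-sequence balance: 32 ones, 31 zeros
```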
Direct-sequence spread spectrum (DSSS) is a modulation technique. The DPSK DSSS modulation and de-spreading techniques are mainly used for designing the whole transceiver, with the exception of receiving the signal using bi-phase modulation. The design for pulse-based UWB is divided into three parts: the DSSS DPSK transmitter, where the transmitter part is separately designed; the DPSK DSSS transceiver, where the received signal is de-spread with some propagation delay; and the DPSK DSSS transceiver with bi-phase modulator and matched filter, where the original signal is recovered.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET
Figure 1: Simulink model of DPSK DSSS Transceiver
The data signal, rather than being transmitted on a narrow band as is done in microwave
communications, is spread onto a mu
encoding scheme. This encoding scheme is known as a Pseudo
Direct sequence spread spectrum has become the modulation method of choice for wireless local area
networks, and personal communication systems. Direct
multiply the data being transmitted by a "noise" signal. This noise signal is a pseudorandom sequence
of 1 and −1 values, at a frequency much higher than that of the o
energy of the original signal into a much wider band. The resulting signal resembles
an audio recording of "static". However, th
original data at the receiving end, by multiplying it by the same pseudorandom sequence
process, known as "de-spreading", mathematically constitutes a
sequence with the PN sequence that the receiver believes the transmitter is using. For de
work correctly, transmit and receive sequences must be synchronized. This requires
synchronize its sequence with the transmitter's sequence via some sort of timing search process.
However, this apparent drawback can be a significant benefit: if the sequences of multiple transmitters
are synchronized with each other, the
can be used to determine relative timing, which, in turn, can be used to calculate the receiver's
position if the transmitters' positions are known
systems.
The resulting effect of enhancing
effect can be made larger by employing a longer PN sequence and more chips per bit, but physical
devices used to generate the PN sequence impose practical limits on attainable processing gain
III. DPSK TRANSMITTER
DPSK DSSS transmitter consists of PN Sequence generator which generates a sequence of pseudo
random binary numbers using a linear
used for delayed data and oscilloscopes are placed along the path for display purposes.
International Journal of Advances in Engineering & Technology, Nov 2011.
ISSN: 2231
: Simulink model of DPSK DSSS Transceiver
Here, the PN sequence generator is used both to generate the message and to produce the pseudorandom binary spreading sequence. Figure 2 shows the Simulink model of the DPSK DSSS transmitter.
When differentially encoding an incoming message, each input data bit must be delayed until the next one arrives. The delayed data bit is then mixed with the next incoming data bit. The output of the mixer gives the difference of the incoming data bit and the delayed data bit. The differentially encoded data is then spread by a high-speed pseudo noise (PN) sequence. This spreading process assigns each data bit its own unique code, allowing only a receiver with the same spreading sequence to despread the encoded data.
The 63-bit pseudo noise (PN) sequences used in this paper are generated by the 6th-order maximal length sequence shown in equation (1),
g(x) = x^6 + x^5 + 1        (1)
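A pure-Python sketch of the m-sequence generation (the Simulink model uses a PN Sequence Generator block; here the recurrence a[k] = a[k−1] XOR a[k−6], corresponding to the primitive polynomial g(x) = x^6 + x^5 + 1, is assumed, and any primitive 6th-order polynomial would likewise yield a 63-chip maximal-length sequence):

```python
def msequence(taps, seed, n):
    # Fibonacci LFSR over GF(2): a[k] = XOR of a[k - t] for each tap distance t
    reg = list(seed)              # reg[-1] holds the most recent bit
    out = []
    for _ in range(n):
        out.append(reg[0])        # emit the oldest bit
        new = 0
        for t in taps:
            new ^= reg[-t]
        reg = reg[1:] + [new]
    return out

# x^6 + x^5 + 1  ->  a[k] = a[k-1] XOR a[k-6]; any nonzero seed works
chips = msequence((1, 6), [1, 0, 0, 0, 0, 0], 63)
```

One period contains 63 chips with 32 ones and 31 zeros, the balance property expected of a maximal-length sequence.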
Figure 2: Simulink model of DPSK DSSS Transmitter
The maximal length spreading sequence uses a much wider bandwidth than the encoded data bit stream, which causes the spread sequence to have a much lower power spectral density [11]. The transmitted signal is then given by,

x(t) = m(t) c(t)        (2)

where m(t) is the differentially encoded data and c(t) is the 63-chip PN spreading code. To recover the message sequence, we XOR the modulated signal with the same type of 63-bit pseudo noise sequence (PN). Here we also use a unit delay to find the original signal. The signal recovering process succeeds with some propagation delay, which was expected because of noise and losses.
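The encode, spread, and despread chain just described can be sketched end-to-end in plain Python (a noiseless toy model, not the Simulink implementation; the 63-chip sequence below is an arbitrary stand-in for the real PN code):

```python
import itertools

pn = [i % 2 for i in range(63)]   # arbitrary stand-in for the 63-chip PN code

def diff_encode(bits):
    # d[k] = b[k] XOR d[k-1]: unit delay fed back into the mixer
    out, prev = [], 0
    for b in bits:
        prev ^= b
        out.append(prev)
    return out

def diff_decode(dbits):
    # b[k] = d[k] XOR d[k-1]
    out, prev = [], 0
    for d in dbits:
        out.append(d ^ prev)
        prev = d
    return out

def spread(bits, code):
    # XOR every data bit with all 63 chips of the spreading code
    return [b ^ c for b in bits for c in code]

def despread(rx, code, n=63):
    # XOR with the same code, then majority-vote each n-chip block
    raw = [x ^ c for x, c in zip(rx, itertools.cycle(code))]
    return [int(sum(raw[i:i + n]) > n // 2) for i in range(0, len(raw), n)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = diff_decode(despread(spread(diff_encode(message), pn), pn))
```

In this noiseless sketch `recovered` equals `message` exactly; in the Simulink model the same chain succeeds, but with the propagation delay noted above.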
IV. DPSK RECEIVER
Before despreading, the received signal is modulated by the bi-phase (BPSK) modulation technique; the signal is then split into two parallel paths and fed into two identical matched filters, with the input to one having a delay of 63 chips. Figure 3 is the Simulink model of the DPSK DSSS receiver.
The BPSK modulation technique is mathematically described as:

s(t) = Σ_{j=−∞}^{+∞} d_j p(t − jT_b)        (3)

where d_j ∈ {−1, 1} are the data bits and p(t) is the transmitted pulse.
A certain advantage of bi-phase modulation is its improvement over OOK and PPM in BER performance, as the required E_b/N_0 is 3 dB less than that of OOK for the same probability of bit error.
The probability of bit error for Bi-phase modulation assuming matched filter reception is:
P_e = Q(√(2E_b/N_0))        (4)
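Equation (4), P_e = Q(√(2E_b/N_0)), and the 3 dB advantage over OOK can be checked numerically. A sketch using the identity Q(x) = erfc(x/√2)/2; coherent OOK with P_e = Q(√(E_b/N_0)) is assumed for the comparison:

```python
import math

def qfunc(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bpsk(ebn0_db):
    # matched-filter bi-phase reception, eq. (4)
    ebn0 = 10 ** (ebn0_db / 10)
    return qfunc(math.sqrt(2 * ebn0))

def ber_ook(ebn0_db):
    # coherent OOK: needs 3 dB more Eb/N0 than bi-phase for the same error rate
    ebn0 = 10 ** (ebn0_db / 10)
    return qfunc(math.sqrt(ebn0))
```

Evaluating both curves confirms the 3 dB (= 10·log10 2) offset claimed in the previous section.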
Figure 3: Simulink model of DPSK Receiver
Another benefit of Bi-phase modulation is its ability to eliminate spectral lines due to the change in
pulse polarity. This aspect minimizes the amount of interference with conventional radio systems
[16]. A decrease in the overall transmitted power could also be attained, making Bi-phase modulation
a popular technique in UWB systems when energy efficiency is a priority.
A special type of digital matched filter has been used for recovering the transmitted message. Digital matched filtering is a data processing routine which is optimal in terms of signal-to-noise ratio (SNR).
Specifically, it can be shown for an additive white Gaussian noise (AWGN) channel with no
interference that the matched filter maximizes the SNR for a pulse modulated system. To perform this
operation, the received waveform is over sampled to allow for multiple samples per pulse period.
Over sampling gives a more accurate representation of the pulse shape, which then produces better
results using a digital matched filter [11]. Correlation processing, another form of matched filtering, is
often used in the digital domain when dealing with white noise channels. The method for calculating
the correlation output is the following:
g(k) = Σ_{t=0}^{N−1} r(kN + t) h(t)        (5)
where:
g(k) is the resulting correlation value
k is the k-th pulse period
N is the number of samples in one pulse width
r(t) is the received sampled waveform
h(t) is the known pulse waveform
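The correlation of equation (5), g(k) = Σ r(kN + t) h(t), is a per-period dot product; a small sketch (the waveforms here are made-up samples, not real UWB data):

```python
def correlate(r, h, k):
    # g(k): correlate the k-th pulse period of the received samples r
    # against the known pulse template h (one value per pulse period)
    N = len(h)
    return sum(r[k * N + t] * h[t] for t in range(N))

def detect_pulse(r, h):
    # pulse period whose correlation with the template is largest
    N = len(h)
    return max(range(len(r) // N), key=lambda k: correlate(r, h, k))
```

Sliding the template this way is the discrete counterpart of convolving the received waveform with a time-reversed copy of the pulse.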
One of the primary drawbacks of the matched filter receiver topology is the lack of knowledge of the
pulse shape at the receiver due to distortion in the channel. Imperfect correlations can occur by
processing the data with an incorrect pulse shape, causing degradation in correlation energy. There are
numerous ways to correct this problem, including an adaptive digital equalizer or matching a template
by storing multiple pulse shapes at the receiver. A more accurate approach is to estimate the pulse
shape from the pilot pulses, which will experience the same channel distortion as the data pulses [11].
This estimation technique is a promising solution to UWB pulse distortion.
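The pilot-based estimate amounts to averaging the received pilot pulses element-wise, since the channel distortion is common to all pilots while the noise is not; a minimal sketch:

```python
def estimate_template(pilot_pulses):
    # element-wise average of equal-length received pilot pulses;
    # independent noise averages toward zero, the distorted pulse shape remains
    n = len(pilot_pulses[0])
    m = len(pilot_pulses)
    return [sum(p[i] for p in pilot_pulses) / m for i in range(n)]
```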
The outputs of the two matched filters, denoted by x1(t) and x2(t), are given by
x1(t) = d(t) R_B(t)        (6)

x2(t) = d(t − T_B) R_B(t − T_B)        (7)

where T_B is the data bit period and R_B(t) is the autocorrelation function of the 63-chip pseudorandom sequence. Since there are exactly 63 chips per data bit, the PN sequence is periodic with period T_B, so

R_B(t) = R_B(t − T_B)        (8)
The two outputs of the matched filters are then mixed and low-pass filtered, and the original message is recovered.
V. RESULTS AND DISCUSSION
Following the analytical approach presented in Sections III and IV, we evaluate the simulation results of the UWB technology. The simulations are performed using MATLAB [15], and the proof of concept is valid, as the BER curves are slightly worse than the theoretical values for a perfectly matched receiver due to imperfections in the template caused by noise and aperture delay variation. Figure 4 shows the original input message sequence, which is generated from a PN sequence generator. The incoming message is then differentially encoded using a mixer and a unit delay: each input data bit is delayed until the next one arrives, and the delayed data bit is then mixed with the next incoming data bit. Figure 5 shows the differential output of the original message signal. The mixer gives the difference of the incoming data bit and the delayed data bit. The differentially encoded data is then spread by a high-speed 63-bit pseudo noise (PN) sequence generated by a 6th-order maximal length sequence. This spreading process assigns each data bit its own unique code, as shown in Figure 6, allowing only a receiver with the same spreading sequence to despread the encoded data.
Figure 4: Original Input message signal
Figure 5: Differential output of message signal
Figure 6: Output waveforms of Simulink DPSK DSSS Transmitter
Figure 7: Received Signal at DPSK DSSS Receiver after Despreading
Figure 8: Original recovered output signal
To recover the message sequence in the receiving part of the DPSK DSSS transceiver, the modulated signal is despread using the same type of 63-bit pseudo noise sequence, and a unit delay is also used to find the original signal. Before despreading, the received signal is modulated by the bi-phase modulation technique; the signal is then split into two parallel paths and fed into two identical matched filters, with the input to one having a delay of 63 chips. Of the two split signals, one is the spread received message and the other is the bi-phase modulated signal. The signal recovering process succeeds with some propagation delay, which was expected because of noise and losses. Figure 7 represents the received signal at the DPSK DSSS receiver after despreading, and Figure 8 shows the original recovered message.
VI. FUTURE MODIFICATION AND WORK
Designing the transceiver was difficult, and it took time to resolve the obstacles. The transmitter side was easy to build, but recovering the signal at the receiver was hard because of the spreading process. The recovered message came with unwanted delays after despreading it in the DPSK DSSS receiver with the same 63-bit PN sequence generator. To remove the delay, a BPSK modulator and two special matched filters were used. These matched filters are FIR filters designed in a special way to recover the original signal; they are used for detecting the 6th-order maximal length sequence and recovering the transmitted message. In the first matched filter, the input signal was delayed for correlation purposes: the delayed signal was correlated with the received signal to detect the presence of the template in the received signal. This is equivalent to convolving the unknown signal with a conjugated time-reversed version of the template. Since the matched filter is the optimal linear filter for maximizing the signal-to-noise ratio in the presence of additive stochastic noise, using more matched filters increases the possibility of recovering the original signal and maximizing the signal-to-noise ratio, depending on the signal being transmitted. In this work we have discussed UWB basics, modulation techniques and transmitter circuits, but all of these were limited to the design and system level. Although we have included some important features and applications of UWB, implementation or circuit-level simulation has not been done here. Those interested in analyzing UWB technology can work on circuit-level simulation.
VII. CONCLUSIONS
We have analyzed the performance of UWB technology using Time Hopping (TH) technique. The
results from the system simulation were very encouraging for the UWB receiver design presented in
this paper. It was also shown that, by increasing the number of averaged pilot pulses in the pilot-based matched filter template, better performance can be obtained, although the data rate will suffer.
Performance for multipath was also examined (albeit for perfect synchronization) and was close to the
theoretical values. Finally, use of the template sliding matched filter synchronization routine led to
worse BER performance when compared with perfect synchronization results. Although these
simulations were specific in terms of data bits and number of multipath, other simulations were
successfully run on a smaller scale, varying these two parameters. The results of the system simulation
give a solid foundation for the design as a whole, but also will assist in the future with issues such as
the implementation of receiver algorithms within the PGA and determining timing limitations when
the receiver is being constructed.
REFERENCES
[1]. G. F. Ross, “Transmission and reception system for generating and receiving base-band duration pulse
signals without distortion for short base-band pulse communication system,” US Patent 3,728,632,
April 17, 1973.
[2]. Authorization of Ultra wideband Technology, First Report and Order, Federal Communications
Commission, February 14, 2002.
[3]. C. R. Anderson, “Ultra wideband Communication System Design Issues and Tradeoffs,” Ph.D.
Qualifier Exam, Virginia Polytechnic Institute and State University, May 12, 2003.
[4]. J. R. Foerster, “The performance of a direct-sequence spread ultra-wideband system in the presence of
multipath, narrowband interference, and multiuser interference,” IEEE Conference on Ultra Wideband
Systems and Technologies, May 2002.
[5]. C. R. Anderson, A. M. Orndorff, R. M. Buehrer, and J. H. Reed, “An Introduction and Overview of an
Impulse-Radio Ultra wideband Communication System Design,” tech. rep., MPRG, Virginia
Polytechnic Institute and State University, June 2004
[6]. J. Han and C. Nguyen, “A new ultra-wideband, ultra-short monocycle pulse generator with reduced
ringing,” IEEE Microwave and Wireless Components Letters, Vol. 12, No. 6, pp. 206-208, June 2002.
[7]. S. Licul, J. A. N. Noronha, W. A. Davis, D. G. Sweeney, C. R. Anderson, T. M. Bielawa, “A
parametric study of time-domain characteristics of possible UWB antenna architectures,” submitted to
IEEE Vehicular Technology Conference, February 2003.
[8]. M. Z. Win and R. A. Scholtz, “Impulse radio: how it works,” IEEE Communications Letters, Vol. 2,
No. 1, pp. 10-12, January 1998.
[9]. J. Ibrahim “Notes on Ultra Wideband Receiver Design,” April 14, 2004.
[10]. Takahide Terada, Shingo Yoshizumi, Yukitoshi and Tadahiro Kuroda, “Transceiver Circuits for Pulsed-
Based Ultra Wideband,” Department of Electrical Engineering, Keio University, Japan, Circuits and
Systems, 2004. ISCAS '04.
[11]. S.M. Nabritt, M.Qahwash, M.A. Belkerdid, “Simulink Simulation of a Direct Sequence Spread
Spectrum Differential Phase Shift Keying SAW Correlator”, Electrical and Comp. Engr. Dept,
University of Central Florida, Orlando FL 32816, Wireless Personal Communications, The Kluwer
International Series in Engineering and Computer Science, 2000, Volume 536, VI, 239-249
[12]. Alonso Morgado, Rocio del Rio and Jose M. de la Rosa, “A Simulink Block Set for the High-Level
Simulation of Multistandard Radio Receivers”, Instituto de Microelectronica de Sevilla-IMSE-CNM
(CSIC), Edif. CICA-CNM, Avda Reina Mercedes s/n, 41012-Sevilla, Spain
[13]. M. I. Skolnik, Introduction to Radar Systems, 3rd Edition. New York: McGraw- Hill, 2001.
[14]. Military Applications of Ultra-Wideband Communications, James W. McCulloch and Bob Walters
[15]. Matlab, Version 7 Release 13, The Mathworks, Inc., Natick, MA.
[16]. L. W. Couch II, Digital and Analog Communication Systems, 6th Edition, New Jersey: Prentice Hall,
2001.
Author
Mohammad Shamim Imtiaz was born in Dhaka, Bangladesh in 1987. He received his
Bachelor degree in Electrical and Electronic Engineering from Ahsanullah University of
Science and Technology, Dhaka, Bangladesh in 2009. He is working as a part-time lecturer at the
same university from which he received his Bachelor degree. Currently he is focusing on entering
an MSc program. His research interests include digital systems, digital signal processing,
multimedia signal processing, digital communication, and signal processing for data transmission
and storage. He is also working on several other projects, including “Comparison of DSSS
Transceiver and FHSS Transceiver on the basis of Bit Error Rate and Signal to Noise Ratio”, “Mobile Charging
Device using Human Heart Pulse”, and “Analysis of CMOS Full Adder Circuit of Different Area and Models”.
30 Vol. 1, Issue 5, pp. 30-40
INTRODUCTION TO METASEARCH ENGINES AND RESULT
MERGING STRATEGIES: A SURVEY
Hossein Jadidoleslamy
Dept. of Information Technology, Anzali International Branch, University of Guilan, Rasht, Iran
ABSTRACT
MetaSearch is the use of multiple other search systems to perform a search simultaneously. A MetaSearch Engine
(MSE) is a search system that enables MetaSearch. To perform a MetaSearch, the user query is sent to multiple
search engines; once the search results are returned, they are received by the MSE, merged into a single
ranked list, and the ranked list is presented to the user. When a query is submitted to an MSE, decisions are made
with respect to the underlying search engines to be used, what modifications will be made to the query, and how
to score the results. These decisions are typically made by considering only the user's keyword query,
neglecting the larger information need. The cornerstone of MSE technology is the rank aggregation method; in
other words, result merging is a key component in an MSE. The effectiveness of an MSE is closely related to the
result merging algorithm it employs. In this paper, we investigate a variety of result merging methods
based on a wide range of available information about the retrieved results, from their local ranks, their titles
and snippets, to the full documents of these results.
KEYWORDS: Search, Web, MetaSearch, MetaSearch Engine, Merging, Ranking.
I. INTRODUCTION
MetaSearch Engines (MSEs) are tools that help the user identify such relevant information. Search
engines retrieve web pages that contain information relevant to a specific subject described with a set
of keywords given by the user. MSEs work at a higher level. They retrieve web pages relevant to a set
of keywords, exploiting other already existing search engines. The earliest MSE is the MetaCrawler system, which became operational in June 1995 [5,16]. Over the years, many MSEs have been
developed and deployed on the web. Most of them are built on top of a small number of popular
general-purpose search engines but there are also MSEs that are connected to more specialized search
engines and some are connected to over one thousand search engines [1,10]. In this paper, we
investigate different result merging algorithms. The rest of the paper is organized as follows: Section 2
presents the motivation; Section 3 gives an overview of MSEs; Section 4 provides the scientific principles of
MSEs; Section 5 discusses why we use MSEs; Section 6 describes the architecture of MSEs; Section 7 describes
ranking aggregation methods; Section 8 presents the key parameters for evaluating the ranking strategies;
Section 9 gives conclusions; and Section 10 presents future work.
II. MOTIVATION
There are several primary factors behind developing an MSE:
• The World Wide Web (WWW) is a huge unstructured corpus of information; MSE covers a larger
portion of WWW;
• By MSE we can have the latest updated information;
• MSE increases the web coverage;
• Improved convenience for users;
• MSE provides fast and easy access to the desired search [5]; better retrieval effectiveness [2];
• MSE provides a broader overview of a topic [12];
• MSE has ability to search the invisible Web, thus increasing the precision, recall and quality of
result;
• MSE makes the user task much easier by searching and ranking the results from multiple search
engine;
• MSE provides a quick way to determine which search engines are retrieving the best match for
user's information need [4].
III. OVERVIEW OF METASEARCH ENGINE
An MSE searches several engines at once; it does not crawl the web or maintain a database of web pages.
Instead, it acts as a middle agent, passing the user's query simultaneously to other search engines,
web directories or the deep web, collecting the returned results, removing duplicate links, merging and
ranking them into a single list, and displaying it to the user [5,8]. Some samples of MSEs are
Vivisimo, MetaCrawler, Dogpile, Mamma, and Turbo10.
a. Differences Between Search and MetaSearch
• MSE does not crawl the Web [2,4];
• MSE does not have a Database [4,10];
• MSE sends search queries to several search engines at once [2,5];
• MSE provides increased search coverage (but is limited by the engines it uses with respect to the number
and quality of results) and a consistent interface [6,12];
• MSE is an effective mechanism to reach deep web.
b. MetaSearch Engine Definition
• Dictionary meaning for Meta: more comprehensive, transcending;
• Accept the User query; Convert the query into the correct syntax for underlying search engines,
launch the multiple queries, wait for the result; Analyze, eliminate duplicates and merge results;
Deliver the post processed result to the users.
• A MSE allows you to search multiple search engines at once, returning more comprehensive and
relevant results, fast [5,9];
• A search engine which does not gather its own information directly from web sites but rather
passes the queries that it receives onto other search engines. It then compiles, summarizes and
displays the found information;
• MSE is a hub of search engines/databases accessible by a common interface providing the user
with results which may/may not be ranked independently of the original search engine/source ranking [6,10].
c. The Types of MetaSearch Engine
Different types of MetaSearch Engines (MSEs) are:
• MSEs which present results without aggregating them;
• MSEs which search multiple search engines, aggregate the results obtained from them and return a single
list of results [1,3], often with duplicates removed;
• MSEs for serious deep digging.
d. MSE Issues
Some of the most common issues in MSEs are as follows:
• Performing search engine/database selection [5,6];
• How to pass user queries to other search engines;
• How to identify correct search results returned from search engines; an optimal algorithm for
implementing minimum cost bipartite matching;
• How to extract the search results, requiring a connection program and an extraction program
(wrapper) for each component search engine [14];
• Wrappers are expensive and time-consuming to produce and maintain;
• Merging the results from different search sources;
• Different search engines produce result pages in different formats [6,8].
IV. SCIENTIFIC FUNDAMENTALS
a. Search Engine Selection
To enable search engine selection, some information that can represent the contents of the documents
of each component search engine needs to be collected first. Such information for a search engine is
called the representative of the search engine [5,17]. The representatives of all search engines used by
the MSE are collected in advance and are stored with the MSE. During search engine selection for a
given query, search engines are ranked based on how well their representatives match with the query.
Different search engine selection techniques often use different types of representatives. A simple
representative of a search engine may contain only a few selected key words or a short description. This type of representative is usually produced manually but it can also be automatically generated
[5]. As this type of representatives provides only a general description of the contents of search
engines, the accuracy of using such representatives for search engine selection is usually low. More
elaborate representatives consist of detailed statistical information for each term in each search engine
[5,9,17].
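A toy sketch of selection by keyword overlap between the query and each engine's representative (the representatives and scoring below are illustrative, not a specific published technique):

```python
def select_engines(query, representatives, top_n=2):
    # representatives: {engine_name: set of descriptive keywords}
    terms = set(query.lower().split())
    ranked = sorted(representatives,
                    key=lambda e: len(terms & representatives[e]),
                    reverse=True)
    return ranked[:top_n]
```

More elaborate representatives would replace the set intersection with per-term statistics, as noted above.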
b. Automatic Search Engine Connection
In most cases, the HTML form tag of a search engine contains all information needed to make the connection to that search engine. The form tag of each search engine interface is usually pre-processed to extract
the information needed for program connection and the extracted information is saved at the MSE
[5,17]. After the MSE receives a query and a particular search engine, among possibly other search
engines, is selected to evaluate this query, the query is assigned to the name of the query textbox of
the search engine and sent to the server of the search engine using the HTTP request method. After
the query is evaluated by the search engine, one or more result pages containing the search results are
returned to the MSE for further processing.
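The connection step, filling the engine's query textbox parameter and issuing an HTTP GET, can be sketched with the standard library (the URL and the parameter name `q` are hypothetical):

```python
from urllib.parse import urlencode

def build_request_url(search_url, textbox_name, user_query):
    # assign the user query to the engine's textbox parameter for a GET request
    return search_url + "?" + urlencode({textbox_name: user_query})

url = build_request_url("http://example.com/search", "q", "metasearch engines")
```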
c. Automatic Search Result Extraction
A result page returned by a search engine is a dynamically generated HTML page. In addition to the
search result records (SRRs) for a query, a result page usually also contains some unwanted
information/links [5]. It is important to correctly extract the SRRs on each result page. A typical SRR
corresponds to a retrieved document and it usually contains the URL, title and a snippet of the
document. Since different search engines produce result pages in different formats, a separate wrapper
program needs to be generated for each search engine [5,14]. Most wrappers analyze the source HTML files of the result pages as text strings or tag trees to find the repeating patterns of the SRRs.
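A toy wrapper in the pattern-matching spirit described above (the result-record layout and class name are invented; each real engine needs its own pattern):

```python
import re

# hypothetical search-result-record layout: <a class="srr" href="URL">TITLE</a>
SRR_PATTERN = re.compile(r'<a class="srr" href="([^"]+)">([^<]+)</a>')

def extract_srrs(result_page_html):
    # (url, title) pair for every search result record on the page
    return SRR_PATTERN.findall(result_page_html)
```

Maintaining such patterns as engines change their markup is exactly why wrappers are expensive to keep up to date.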
d. Results Merging
Result merging is to combine the search results returned from multiple search engines into a single
ranked list. There are many methods for merging/ranking search results; some of them are:
• Normalizing the scores returned from different search engines into values within a common range,
with the goal of making them more comparable [1,6,16]; this allows the results from more useful search
engines to be ranked higher.
• Using voting-based techniques.
• Downloading all returned documents from their local servers and compute their matching scores
using a common similarity function employed by the MSE [1,6,17].
• Using techniques that rely on features such as titles and snippets [1].
• Treating results retrieved by multiple search engines as more relevant to the query [1,5].
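The first strategy above (score normalization followed by summation, in the spirit of CombSUM) can be sketched as follows; the min-max normalization and the input format are illustrative assumptions:

```python
def min_max(scores):
    # map raw engine scores into [0, 1] so scores become comparable
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def merge_results(result_lists):
    # result_lists: {engine: [(url, raw_score), ...]}
    # normalize per engine, then sum the normalized scores per URL
    combined = {}
    for results in result_lists.values():
        norm = min_max([score for _, score in results])
        for (url, _), ns in zip(results, norm):
            combined[url] = combined.get(url, 0.0) + ns
    return sorted(combined, key=combined.get, reverse=True)
```

A URL returned by several engines accumulates score from each of them, which also realizes the last bullet above: agreement across engines is treated as evidence of relevance.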
V. WHY ARE METASEARCH ENGINES USEFUL?
1. Why MetaSearch?
• Individual Search engines do not cover all the web;
• Individual Search Engines are prone to spamming [5];
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
33 Vol. 1, Issue 5, pp. 30-40
• Difficulty in deciding and obtaining results with combined searches on different search engines
[6];
• Data fusion (multiple formats supported) and less effort required of the user.
2. Why MetaSearch Engines?
• General search engines have difference in search syntax, frequency of updating, display
results/search interface and incomplete database [5,16];
• MSE improves search quality: it is comprehensive, efficient, and one query queries all engines;
• MSE is good for quick search results overview with 1 or 2 keywords;
• MSE convenient to search different content sources from one page.
3. Key Applications of MetaSearch Engines
• Effective mechanism to search surface/deep web;
• MSE provides a common search interface over multiple search engines [5,10];
• MSE can support interesting special applications.
4. General Features of MetaSearch Engine
• Unifies the search interface and provides a consistent user interface; standardizes the query
structure [5];
• May make use of an independent ranking method for the results [6]; may have an independent
ranking system for each search engine/database;
• MetaSearch is not a search over metadata.
VI. METASEARCH ENGINE ARCHITECTURE

MSEs enable users to enter search criteria once and access several search engines simultaneously.
This may save the user a lot of time compared with using multiple search engines separately, by
initiating the search at a single point. MSEs have virtual databases; they do not compile a physical
database. Instead, they take a user's request, pass it to several heterogeneous databases and then
compile the results in a homogeneous manner. No two MSEs are alike; they differ in their component search engines, ranking/merging methods, presentation of search results, etc.
a. Standard Architecture
Figure1. Block diagram and components
• User Interface: similar search engine interfaces with options for types of search and search
engines to use;
• Dispatcher: generates actual queries to the search engines by using the user query; may involve
choosing/expanding search engines to use;
• Display: generates the results page from the replies received; may involve ranking, parsing and
clustering of the search results, or just plain stitching;
• Personalization/Knowledge: may contain either or both; personalization may involve weighting of
search results/query/engines for each user.
b. The Architecture of an MSE that Considers User Preferences

Current MSEs make several decisions on behalf of the user, but do not consider the user's complete
information need. An MSE must decide which sources to query, how to modify the submitted query to best utilize the underlying search engines, and how to order the results. Some MSEs allow users to
influence one of these decisions, but not all three [4,5].
Figure2. The architecture of a MSE with user needs
User’s information needs are not sufficiently represented by a keyword query alone [4,10]. This
architecture has an explicit notion of user preferences. These preferences, or a search strategy, are used
to choose the appropriate search engines (source selection), query modifications and influence the
order the results (result scoring). Allowing the user to control the search strategy can provide relevant
results for several specific needs, with a single consistent interface [4]. The current user interface
provides the user with a list of choices. The specification of preferences allows users with different
needs, but the same query, to not only search different search engines (or the same search engines
with different “modified” queries), but also have their results ordered differently [4]. Even though
users have different information needs, they might type the same keyword query, and even
search some of the same search engines. This architecture guarantees consistent scoring of results by downloading page contents and analyzing the pages on the server [1,4].
c. Helios Architecture

In this section we describe the architecture of Helios. The Web Interface allows users to submit their
queries and select the desired search engines among those supported by the system. This information
is interpreted by the Local Query Parser & Emitter that re-writes queries in the appropriate format for
the chosen engines. The Engines Builder maintains all the settings necessary to communicate with the
remote search engines. The HTTP Retrievers modules handle the network communications. Once
search results are available, the Search Results Collector & Parser extracts the relevant information
and returns it using XML. Users can adopt the standard Merger & Ranker module for search results or
integrate their customized one [12].
Figure3. The architecture of HELIOS MSE
d. Tadpole Architecture

In this architecture, when a user issues a search request, multiple threads are created in order to fetch
the results from various search engines. Each of these threads is given a time limit to return the
results, failing which a time out occurs and the thread is terminated [5,11].
Figure4. Basic component architecture of a typical MSE
MSEs are web services that receive user queries and dispatch them to multiple crawl-based search
engines; then collect returned results, reorder them and present the ranked result list to the user [11].
The ranking fusion algorithms that MSEs utilize are based on a variety of parameters, such as the
ranking a result receives and the number of its appearances in the component engine’s result lists [15].
Better results classification can be achieved by employing ranking fusion methods that take into
consideration additional information about a web page. Another core step is to implicitly or explicitly
collect some data concerning the user who submits the query. This assists the engine in deciding
which results best suit the user's information needs [4,11,15].
VII. RESULTS MERGING AND RANKING STRATEGIES
There are many techniques for ranking the search results retrieved from different search engines in MSEs; some important approaches are:
• Normalizing the scores of search results to a uniform range [1];
• The reliability of each search engine;
• The document collection used by a search engine;
• Ranking algorithms that completely ignore the scores assigned by the search engines to
the retrieved web pages [1], such as Bayes-fuse and Borda-fuse [7];
• Merging based on SRR contents such as title, snippet, local rank and different similarity
functions[6];
• Considering the frequencies of query terms in each SRR, the order and the closeness of these
terms;
• Downloading and analyzing the full documents.
We now investigate result merging algorithms for MSEs. Most search engines present
informative search result records (SRRs) of retrieved results to the user; a typical SRR consists of the
URL, title and snippet of the retrieved result [6,7].
1) Take the Best Rank
In this algorithm, we try to place a URL at the best rank it gets in any of the search engine rankings
[13]. That is [17],
• MetaRank(x) = min(Rank1(x), Rank2(x), …, Rankn(x));
Clashes (ties) are resolved by search engine popularity.
2) Borda’s Positional Method

In this algorithm, the MetaRank of a URL is obtained by computing a norm (e.g., the L1-norm) of its
ranks in the different search engines [8,17],
• MetaRank(x) = (Rank1(x)^p + Rank2(x)^p + … + Rankn(x)^p)^(1/p);
Clashes (ties) are resolved by search engine popularity.
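The two MetaRank variants above (take-the-best-rank and the positional norm) are simple enough to sketch directly; the function names here are illustrative:

```python
def meta_rank_min(ranks):
    """Take-the-best-rank: a URL's MetaRank is its best (smallest) rank
    in any component engine's list."""
    return min(ranks)

def meta_rank_positional(ranks, p=1):
    """Borda's positional method: the Lp-norm of the URL's ranks;
    p = 1 reduces to the plain sum of ranks."""
    return sum(r ** p for r in ranks) ** (1.0 / p)
```

In both variants a smaller MetaRank value means a better merged position; ties are then broken by engine popularity, as noted above.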
3) Weighted Borda-Fuse

In this algorithm, search engines are not treated equally; their votes are weighted according to the
reliability of each search engine. These weights are set by the users in their profiles. Thus, the votes
that the i-th result of the j-th search engine receives are [9,17],
• V(ri,j) = wj * (maxk(rk) - i + 1);
where wj is the weight of the j-th search engine and rk is the number of results returned by search
engine k. Retrieved pages that appear in more than one search engine's list receive the sum of their votes.
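A minimal sketch of Weighted Borda-Fuse (the list-of-lists input format and the function name are our illustrative choices):

```python
def weighted_borda_fuse(result_lists, weights):
    """result_lists: one ranked list of URLs per engine; weights: the per-engine
    reliability weights w_j from the user's profile. The i-th result of engine j
    receives w_j * (max_k(r_k) - i + 1) votes; votes for the same URL are summed."""
    max_len = max(len(lst) for lst in result_lists)   # max_k(r_k)
    votes = {}
    for lst, w in zip(result_lists, weights):
        for i, url in enumerate(lst, start=1):
            votes[url] = votes.get(url, 0.0) + w * (max_len - i + 1)
    # Highest total vote first
    return sorted(votes, key=votes.get, reverse=True)
```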
4) The Original KE Algorithm

The KE Algorithm in its original form is a score-based method [1]. It exploits the ranking that a result
receives from the component engines and the number of its appearances in the component engines'
lists. All component engines are treated equally, as all of them are considered to be reliable. Each
returned ranked item is assigned a score based on the following formula [10],
• Wke = ∑ r(i) / (n^m * (k/10 + 1)^n);
where ∑ r(i) is the sum of all the rankings the item has received, n is the number of search engine
top-k lists the item appears in, m is the total number of search engines exploited and k is the total
number of ranked items that the KE Algorithm uses from each search engine. Therefore, the lower the score a result receives, the better its final ranking.
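The KE score can be sketched as follows (a minimal reading of the formula above; the function name is illustrative):

```python
def ke_score(ranks, m, k):
    """ranks: the positions the item received in the engines that returned it
    (so n = len(ranks)); m: total number of engines used; k: number of ranked
    items taken from each engine. A lower score means a better final ranking."""
    n = len(ranks)
    return sum(ranks) / (n ** m * (k / 10 + 1) ** n)
```

For example, with m = 3 engines and k = 10, an item ranked first by two engines scores lower (better) than an item ranked second by a single engine.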
5) Fetch Retrieved Documents

A straightforward way to perform result merging is to fetch the retrieved documents to the MSE and
compute their similarities with the query using a global similarity function. The main problem of this
approach is that the user has to wait a long time before the results can be fully displayed. Therefore,
most result merging techniques utilize the information associated with the search results as returned
by component search engines to perform merging. The difficulty lies in the heterogeneities among the component search engines.
6) Borda Count

Borda Count is a voting-based data fusion method [15]. The returned results are considered as the
candidates and each component search engine is a voter. For each voter, the top ranked candidate is
assigned n points (n candidates), the second top ranked candidate is given n–1 points, and so on. For
candidates that are not ranked by a voter (i.e., they are not retrieved by the corresponding search
engine), the remaining points of the voter are divided evenly among them. The candidates are then
ranked by their total received points in descending order [13,15,17].
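Borda Count, including the even division of the leftover points among unranked candidates, can be sketched as follows (illustrative names; the input is one ranked URL list per engine):

```python
def borda_count(result_lists):
    """Each engine votes over the union of returned URLs (n candidates):
    its top candidate gets n points, the next n - 1, and so on; candidates
    it did not retrieve share its remaining points evenly."""
    candidates = set().union(*result_lists)
    n = len(candidates)
    totals = dict.fromkeys(candidates, 0.0)
    for lst in result_lists:
        for i, url in enumerate(lst):
            totals[url] += n - i                     # n, n-1, ... points
        unranked = candidates - set(lst)
        if unranked:
            # Points for positions this voter never assigned: 1 .. (n - len(lst))
            remaining = sum(range(1, n - len(lst) + 1))
            for url in unranked:
                totals[url] += remaining / len(unranked)
    return sorted(totals, key=totals.get, reverse=True)
```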
7) D-WISE Method

In D-WISE, the local rank ri of a document returned from search engine j is converted to a ranking
score rsij; the formula is [6],
• rsij = 1 - (ri - 1) * Smin / (m * Sj);
Where Sj is the usefulness score of the search engine j, Smin is the smallest search engine score
among all component search engines selected for this query and m is the number of documents
desired across all search engines. This function generates a smaller difference between the ranking
scores of two consecutively ranked results retrieved from a search engine with a higher search engine
score. This has the effect of ranking more results from higher quality search engines higher. One
problem of this method is that the highest ranked documents returned from all the local systems will
have the same ranking score 1.
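A direct transcription of the D-WISE conversion formula (the function name is illustrative):

```python
def dwise_score(rank, s_j, s_min, m):
    """Convert local rank r_i from engine j into a ranking score rs_ij, where
    s_j is engine j's usefulness score, s_min the smallest engine score among
    those selected for the query, and m the number of documents desired."""
    return 1 - (rank - 1) * s_min / (m * s_j)
```

As the text notes, consecutive ranks from a high-scoring engine end up closer together than those from a low-scoring engine, and every engine's top result gets the same score of 1.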
8) Merging Based on Combination of Document Records (SRRs)

Among all the proposed merging methods, the most effective ones are based on combining evidence
about each document, such as its title, snippet, and the usefulness of its source search engine. In these
methods [1,2]: for each document, the similarity between the query and its title and between the query
and its snippet are computed, and the two are linearly aggregated into the document's estimated global
similarity. For each query term, its weight in every component search engine is computed based on the
Okapi probabilistic model [6]. The search engine score is the sum of all the query term weights of that
search engine. Finally, the estimated global similarity of each result is adjusted by multiplying it by the
relative deviation of its source search engine's score from the mean of all the search engine scores. It is
quite possible that, for a given query, the same document is returned by multiple component search
engines. In this case, their (normalized) ranking scores need to be combined [1]. A number of linear
combination fusion functions have been proposed for this, including min, max, sum and average [15].
9) Use Top Document to Compute Search Engine Score (TopD)

Let Sj denote the score of search engine j with respect to query q. This algorithm uses the similarity
between q and the top ranked document returned by search engine j (denoted dij) [6,7]. Fetching the
top ranked document from its local server incurs some delay, but this delay is tolerable, since only
one document is fetched from each search engine used. Both the Cosine function and the Okapi
function are used as the similarity function. The Okapi formula is [6],
• ∑T∈q W * (((k1 + 1) * tf) / (K + tf)) * (((k3 + 1) * qtf) / (k3 + qtf));
• with W = log((N - n + 0.5) / (n + 0.5)) and K = k1 * ((1 - b) + b * (dl / avgdl));
where tf is the frequency of the query term T within the processed document, qtf is the frequency of
T within the query, N is the number of documents in the collection, n is the number of documents
containing T, dl is the length of the document, and avgdl is the average length of all the documents in
the collection. k1, k3 and b are constants with values 1.2, 1,000 and 0.75, respectively [6]. Since N, n
and avgdl are unknown, some approximations can be used to estimate them. The ranking scores of the
top ranked results from all used search engines will be 1 [1,6]. This problem is remedied by computing
an adjusted ranking score arsij = rsij * Sj, multiplying the ranking score rsij computed as above by the
engine score Sj [6]. If a document is retrieved from multiple search engines, its final ranking score is computed by summing up all its adjusted ranking scores.
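The Okapi similarity used by TopD can be sketched as follows (a minimal bag-of-words reading of the formula above; representing the document and query as token lists and supplying df as a mapping are our illustrative choices):

```python
import math

def okapi_score(query_terms, doc_terms, N, df, avgdl, k1=1.2, k3=1000, b=0.75):
    """Okapi similarity between a query and one document. df maps a term to
    the number of collection documents containing it; N and avgdl are the
    collection size and average document length (in practice estimated)."""
    dl = len(doc_terms)
    score = 0.0
    for t in set(query_terms):
        n = df.get(t, 0)
        if n == 0:
            continue  # a term absent from the collection contributes nothing
        tf = doc_terms.count(t)
        qtf = query_terms.count(t)
        W = math.log((N - n + 0.5) / (n + 0.5))
        K = k1 * ((1 - b) + b * dl / avgdl)
        score += W * ((k1 + 1) * tf / (K + tf)) * ((k3 + 1) * qtf / (k3 + qtf))
    return score
```

TopD would apply this to the top document fetched from each engine to obtain Sj.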
10) Use Top Search Result Records (SRRs) to Compute Search Engine Score (TopSRR)

In this method, when a query q is submitted to a search engine j, the search engine returns the SRRs
of a certain number of top ranked documents on a dynamically generated result page. In the TopSRR
algorithm, the SRRs of the top n returned results from each search engine, instead of the top ranked
document, are used to estimate its search engine score [6]. Intuitively, this is reasonable as a more
useful search engine for a given query is more likely to retrieve better results which are usually reflected in the SRRs of these results. Specifically, all the titles of the top n SRRs from search engine j
are merged together to form a title vector TVj, and all the snippets are also merged into a snippet
vector SVj. The similarities between query q and TVj, and between q and SVj are computed
separately and then aggregated into the score of search engine j [6],
• Sj = C1 * Similarity (q, TVj) + (1 – C1) * Similarity (q, SVj);
Again, both the Cosine function and the Okapi function are used [6,7].
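TopSRR can be sketched as follows; for brevity the sketch uses Jaccard set overlap as a stand-in similarity (the paper uses the Cosine or Okapi function), and the token-list input format is our illustrative choice:

```python
def jaccard(a, b):
    """Stand-in similarity between two token lists (set overlap)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def topsrr_engine_score(query, titles, snippets, c1=0.5):
    """Merge the top-n titles into a title vector TV_j and the snippets into
    a snippet vector SV_j, then S_j = c1*sim(q, TV_j) + (1-c1)*sim(q, SV_j)."""
    tv = [w for t in titles for w in t]
    sv = [w for s in snippets for w in s]
    return c1 * jaccard(query, tv) + (1 - c1) * jaccard(query, sv)
```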
11) Compute Simple Similarities between SRRs and Query (SRRsim)
We can rank the SRRs returned from different search engines because each SRR can be considered the
representative of the corresponding full document. In the SRRsim algorithm, the similarity between an
SRR (R) and a query q is defined as a weighted sum of the similarity between the title (T) of R and q
and the similarity between the snippet (S) of R and q [6,7],
• Sim(R , q) = C2 * Similarity (q, T) + (1 – C2) * Similarity (q , S) ;
where C2 is a constant (C2 = 0.5). Again, both the Cosine function and the Okapi function are used. If
a document is retrieved from multiple search engines with different SRRs (different search engines
usually employ different ways to generate SRRs), then the similarity between the query and each such
SRR will be computed and the largest one will be used as the final similarity for merging.
12) Rank SRRs Using More Features (SRRRank)

The similarity function used in the SRRsim algorithm may not be sufficiently powerful in reflecting
the true matches of SRRs with respect to a given query [6]. For example, it does not take into
consideration proximity information, such as how close the query terms occur in the title and snippet
of an SRR, nor the order of appearance of the query terms in the title and snippet. Sometimes the
order and proximity information have a significant impact on the matching of phrases. This algorithm
defines five features with respect to the query terms [6,7]:
• NDT: the number of distinct query terms appearing in the title and snippet;
• TNT: the total number of occurrences of the query terms in the title and snippet;
• TLoc: the locations of the occurring query terms;
• ADJ: whether the occurring query terms appear in the same order as in the query and whether
they occur adjacently;
• WS: the window size containing the distinct occurring query terms.
For each SRR of the returned results, the above pieces of information are collected. The SRRRank
algorithm works as follows [6]:
• All SRRs are grouped based on NDT. The groups having more distinct terms are ranked higher;
• Within each group, the SRRs are further put into three subgroups based on TLoc. The subgroup
with all the terms in the title ranks highest, followed by the subgroup with all the distinct terms in
the snippet, and then the subgroup with the terms scattered across both title and snippet;
• Finally, within each subgroup, the SRRs that have more occurrences of query terms (TNT)
appearing in the title and the snippet are ranked higher. If two SRRs have the same number of
occurrences of query terms, first the one with distinct query terms appearing in the same order and
adjacently (ADJ) as they are in the query is ranked higher, and then, the one with smaller window
size is ranked higher. If there is any tie, it is broken by the local ranks. The result with the higher local rank will have a
higher global rank in the merged list. If a result is retrieved from multiple search engines, we only
keep the one with the highest global rank [3,6].
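The SRRRank ordering is essentially a multi-key sort, which can be sketched as follows (representing each SRR as a dict of precomputed feature values is our illustrative choice; TLoc is encoded as 0 = all terms in the title, 1 = all in the snippet, 2 = scattered):

```python
def srr_rank(srrs):
    """Order SRRs by: more distinct query terms (NDT) first; then TLoc
    subgroup; then more term occurrences (TNT); then in-order/adjacent
    terms (ADJ); then smaller window size (WS); ties broken by local rank."""
    return sorted(
        srrs,
        key=lambda r: (-r["NDT"], r["TLoc"], -r["TNT"],
                       -r["ADJ"], r["WS"], r["local_rank"]),
    )
```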
13) Compute Similarities between SRRs and Query Using More Features (SRRSimMF)

This algorithm is similar to SRRRank except that it quantifies the match on each feature identified in
SRRRank, so that the matching scores based on different features can be aggregated into a numeric
value [1,3]. Consider a given field of an SRR, say the title (the same method applies to the snippet).
For the number of distinct query terms (NDT), the matching score is the ratio of NDT over the total
number of distinct terms in the query (QLEN), denoted SNDT = NDT/QLEN. For the total number of
query terms (TNT), the matching score is the ratio of TNT over the length of the title, denoted
STNT = TNT/TITLEN. For the query term order and adjacency information (ADJ), the matching
score SADJ is set to 1 if the distinct query terms appear in the same order and adjacently in the title;
otherwise it is 0. The window size (WS) of the distinct query terms in the processed title is
converted into the score SWS = (TITLEN - WS)/TITLEN. All the matching scores of these features are
aggregated into a single value, which is the similarity between the processed title T and q, using this
formula [6],
• Sim(T , q) = SNDT + (1/QLEN) * (W1 * SADJ + W2 * SWS + W3 * STNT) ;
For each SRR, the final similarity is,
• Similarity = (TNDT/QLEN) * (C3 * Sim(T , q) + (1 – C3) * Sim (S , q)) ;
where TNDT is the total number of distinct query terms appearing in the title and snippet [6,7].
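The SRRSimMF scores can be sketched directly from the definitions above (function names and the default weights W1 = W2 = W3 = 1 are illustrative assumptions):

```python
def sim_field(ndt, tnt, adj, ws, field_len, qlen, w1=1.0, w2=1.0, w3=1.0):
    """Similarity between the query and one SRR field (title or snippet):
    S_NDT = NDT/QLEN, S_TNT = TNT/field length, S_ADJ in {0, 1},
    S_WS = (field length - WS)/field length."""
    s_ndt = ndt / qlen
    s_tnt = tnt / field_len
    s_ws = (field_len - ws) / field_len
    return s_ndt + (1 / qlen) * (w1 * adj + w2 * s_ws + w3 * s_tnt)

def srr_sim_mf(sim_title, sim_snippet, tndt, qlen, c3=0.5):
    """Final similarity: aggregate the two field similarities, scaled by
    TNDT/QLEN."""
    return (tndt / qlen) * (c3 * sim_title + (1 - c3) * sim_snippet)
```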
VIII. EVALUATION KEY PARAMETERS FOR RANKING STRATEGIES
Some parameters for evaluating ranking methods are algorithmic (time) complexity, rank
aggregation time, overlap across search engines (relative search engine performance) and the
performance of the various rank aggregation methods, including precision with respect to the number
of results returned and precision vs. recall.
IX. CONCLUSION
In this paper, we have presented an overview and some ranking strategies on MSEs. An effective and
efficient result merging strategy is essential for developing effective MetaSearch systems. We
investigated merging algorithms that utilize a wide range of information available for merging, from
local ranks by component search engines, search engine scores, titles and snippets of search result
records to the full documents. We discussed methods for improving answer relevance in MSEs and
proposed several strategies for combining the ranked results returned from multiple search engines. Our study
yields several findings:
• A simple and efficient merging method can help a MSE significantly outperform the best single
search engine in effectiveness [2];
• Merging based on the titles and snippets of returned search result records can be more effective
than using the full documents. This implies that a MSE can achieve better performance than a
centralized retrieval system that contains all the documents from the component search engines;
• The computational complexity of ranking algorithms used and performance of the MSE are
conflicting parameters;
• MSEs are useful because they integrate the search results provided by different engines and allow
comparison of rank positions;
• MSEs provide advanced search features on top of commodity engines;
• An MSE can be used for retrieving, parsing, merging and reporting results provided by other
search engines.
X. FUTURE WORKS
Component search engines employed by an MSE may change their connection parameters and result
display format at any time. Such changes can make the affected search engines unusable in the MSE.
One open problem is how to monitor these changes and apply the corresponding changes to the MSE
automatically. Most of today's MSEs employ only a small number of general purpose search engines.
Building large-scale MSEs that use numerous specialized search engines is another open problem.
Challenges in building very large-scale MSEs include the automatic generation and
maintenance of the high quality search engine representatives needed for efficient and effective search
engine selection, and highly automated techniques for adding search engines to an MSE and adapting
to changes in those search engines.
REFERENCES
[1] Renda M. E. and Straccia U.; Web metasearch: Rank vs. score based rank aggregation methods; 2003.
[2] Meng W., Yu C. and Liu K.; Building efficient and effective metasearch engines; In ACM Computing
Surveys; 2002.
[3] Fagin R., Kumar R., Mahdian M., Sivakumar D. and Vee E.; Comparing and aggregating rankings with
ties; In PODS; 2004.
[4] Glover J. E., Lawrence S., Birmingham P. W. and Giles C. L.; Architecture of a Metasearch Engine that
Supports User Information Needs; NEC Research Institute, Artificial Intelligence Laboratory, University of
Michigan; In ACM; 1999.
[5] MENG W.; Metasearch Engines; Department of Computer Science, State University of New York at
Binghamton; Binghamton; 2008.
[6] Lu Y., Meng W., Shu L., Yu C. and Liu K.; Evaluation of result merging strategies for metasearch engines;
6th International Conference on Web Information Systems Engineering (WISE Conference); New York;
2005.
[7] Dwork C., Kumar R., Naor M. and Sivakumar D.; Rank aggregation methods for the Web; Proceedings of
ACM Conference on World Wide Web (WWW); 2001.
[8] Fagin R., Kumar R., Mahdian M., Sivakumar D. and Vee E.; Comparing partial rankings; Proceedings of
ACM Symposium on Principles of Database Systems (PODS); 2004.
[9] Fagin R., Kumar R. and Sivakumar D.; Comparing top k lists; SIAM Journal on Discrete Mathematics;
2003.
[10] Souldatos S., Dalamagas T. and Sellis T.; Captain Nemo: A Metasearch Engine with Personalized
Hierarchical Search Space; School of Electrical and Computer Engineering; National Technical University
of Athens; November, 2005.
[11] Mahabhashyam S. M. and Singitham P.; Tadpole: A Meta search engine Evaluation of Meta Search ranking
strategies; University of Stanford; 2004.
[12] Gulli A., University of Pisa, Informatica; Signorini A., University of Iowa, Computer Science; Building an
Open Source Meta Search Engine; May, 2005.
[13] Aslam J. and Montague M.; Models for Metasearch; In Proceedings of the ACM SIGIR Conference; New
Orleans; 2001.
[14] Zhao H., Meng W., Wu Z., Raghavan V. and Yu C.; Fully automatic wrapper generation for search engines;
World Wide Web Conference; Chiba, Japan; 2005.
[15] Akritidis L., Katsaros D. and Bozanis P.; Effective Ranking Fusion Methods for Personalized Metasearch
Engines; Department of Computer and Communication Engineering, University of Thessaly; Panhellenic
Conference on Informatics (IEEE); 2008.
[16] Manning C. D., Raghavan P. and Schutze H.; Introduction to Information Retrieval; Cambridge University
Press; 2008.
[17] Dorn J. and Naz T.; Structuring Meta-search Research by Design Patterns; Institute of Information Systems,
Technical University Vienna, Austria; International Computer Science and Technology Conference; San
Diego; April, 2008.
Author Biography
H. Jadidoleslamy is a Master of Science student at the University of Guilan in Iran. He received
his Engineering Degree in Information Technology (IT) engineering from the University of
Sistan and Balouchestan (USB), Iran, in September 2009. He will receive his Master of Science
degree from the University of Guilan, Rasht, Iran, in March 2011. His research interests include
Computer Networks (especially Wireless Sensor Network), Information Security, and E-
Commerce. He may be reached at [email protected].
STUDY OF HAND PREFERENCES ON SIGNATURE FOR RIGHT-
HANDED AND LEFT-HANDED PEOPLES Akram Gasmelseed and Nasrul Humaimi Mahmood Faculty of Health Science and Biomedical Engineering,
Universiti Teknologi Malaysia, Johor, Malaysia.
ABSTRACT
A signature is the easiest way to authenticate a document. The problem of handwritten signature
verification is a pattern recognition task used to differentiate two classes, original and fake signatures.
The subject of interest in this study is signature recognition, which deals with the process of verifying
the written signature patterns of human individuals, specifically comparing right-handed and left-handed
people. The method used in this project is on-line verification using an IntuosTM graphics tablet and
an Intuos pen as the data capturing device. On-line signature verification involves the capture of
dynamic signature signals such as pen-tip pressure, the time duration of the whole signature, altitude
and azimuth. The ability to capture the signature and have it immediately available in digital form for
verification opens up a range of new application areas.
KEYWORDS: Signature verification, IntuosTM Graphics Tablet, Right-handed people, Left-handed people
I. INTRODUCTION
In recent years, handwritten signatures have been commonly used to identify the contents of a
document or to confirm a financial transaction. Signature verification is usually done by visual
inspection: a person compares the appearance of two signatures and accepts the given signature if it is
sufficiently similar to the stored signature, for example, on a credit card. When using credit cards,
reliable verification of a signature by a simple comparison with the human eye is difficult [1,2].
In order to prevent illegal use of credit cards, an electronic method for automatic identification
is desired. Biometrics, an identification technology that uses characteristics of the human body,
of motion or of voice, is often effective for identification [2]. However,
identification technologies that use physical characteristics, especially fingerprints, often present
difficulties as a result of psychological resistance. In contrast, automatic signature verification
provides a great advantage in current social systems because the handwritten signature is often used
for legal confirmation.
Theoretically, the problem of handwritten signature verification is a pattern recognition task used to
differentiate two classes of original and fake signatures. A signature verification system must be able
to detect forgeries and to reduce rejection of real signatures simultaneously [3]. Automatic signature
verification can be divided into two main areas depending on the data acquisition method: off-line
and on-line signature verification [2,4].
In off-line signature verification, the signature is available on a document which is scanned to obtain
its digital image representation. This method also identifies signatures using an image processing
procedure whereby the user is supposed to have written down completely the signature onto a
template that is later captured by a CCD camera or scanner to be processed. The other method is on-
line signature verification, which uses special hardware, such as a digitizing tablet or a pressure-sensitive
pen, to record the pen movements during writing [5,6,7]. On-line signature verification also involves
the capture of dynamic signature signals such as pen-tip pressure, the time duration of the whole
signature and the velocity along the signature path.
In the past few years, there has been much research [8,9] on signature verification and
signature recognition. Unfortunately, none of it focuses specifically on hand
preferences. The subject of interest in this research is signature recognition, dealing with the
process of verifying the written signature patterns of human individuals, specifically comparing right-
handed and left-handed people.
II. METHODOLOGIES
The method used in this work is on-line verification using an IntuosTM 9 × 12
graphics tablet and an Intuos pen as the data capturing device. The captured information was then
processed using suitable software, namely Capture 1.3, Microsoft Excel, MATLAB and MINITAB.
The flowchart of the methodology is shown in Figure 1.
Figure 1: Flowchart of methodology
The first phase is collecting the signatures of individuals. Figure 2 shows the process of
taking a signature. Data were collected from a minimum of 30 right-handed and 30 left-handed
people, taken from both of their hands (left and right), giving a total of 120 samples.
All the data were captured and digitized by the Capture 1.3 software and then saved in WordPad format.
Figure 2: Process of taking the signature
The data were arranged using Excel and analysed using MATLAB and MINITAB, applying
correlation and regression methods. The last phase of this work is to obtain results from the analysis:
all the data were compared between left-handed and right-handed people's
signatures. The results and the problems encountered during this project are discussed, and finally an
overall conclusion and recommendations are summarized.
III. RESULT AND DISCUSSION
Linear correlation coefficient measures the strength of a linear relationship between two variables.
This method measures the extent to which the points on a scatter diagram cluster about a straight line.
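The linear (Pearson) correlation coefficient used throughout this section can be sketched as follows (a standard textbook formula; the function name is ours):

```python
import math

def pearson_r(xs, ys):
    """Pearson linear correlation coefficient between two equal-length
    samples, as reported in Table 1 for pressure, altitude and azimuth."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values near +1 or -1 indicate a strong linear relationship; a negative value means the two samples move in opposite directions.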
Table 1 shows the correlation coefficients for pressure, altitude and azimuth of the samples from
right-handed and left-handed people. From Table 1, several observations can be made.
First, consider the correlations for right-handed people (RR-RL) and left-handed people (LL-LR). In
this study, the pressure correlation for right-handed people is 0.014 lower than that for left-handed
people. The same holds for the altitude correlation, which is about 0.406 lower than for left-handed
people. The azimuth correlations are negative, with the right-handed value differing from the
left-handed one by about 0.209; the negative values show that the data move in opposite directions.
It is therefore recommended to use the dominant hand for each correlation when carrying out this
kind of study, to obtain the maximum information for applications.
Secondly, comparing the major-usage hands (RR-LL) with the minor-usage hands (RL-LR), the
major-usage correlations are higher for pressure and azimuth, by 0.004 and 0.425 respectively, while
for altitude the minor-usage correlation is 0.141 greater than the major-usage one. To obtain a measure
of more general dependencies in the data, the correlations were also expressed as percentages. The
pressure correlation for LH people (94.9%) is higher than that for RH people (93.5%), and the altitude
correlation for LH people (89.3%) is likewise higher than that for RH people (48.7%). However, the
azimuth correlation for LH people (62.3%) is lower than that for RH people (83.2%).
The left-handed people thus have higher correlation values than the right-handed people for pressure
and altitude, whereas for azimuth the right-handed people have the higher correlation. From this
result, it is advisable to use the left-handed people's information or settings for pen pressure and
altitude, and the right-handed people's information or settings for azimuth.
Figure 3 shows that pen pressure has a higher correlation percentage than altitude and azimuth for all
types of hand usage. Based on this result, it is advisable to use pen pressure for signature recognition.
Regression generally models the relationship between one or more response variables and one or
more predictor variables. Linear regression models the relationship between two or more variables
Table 1: Correlation Measurement
Correlation RR-RL (RH) LL-LR (LH) RR-LL (major) RL-LR (minor)
Pressure 0.935 0.949 0.882 0.878
Altitude 0.487 0.893 0.779 0.920
Azimuth -0.832 -0.623 0.925 0.500
Figure 3: Graph of Correlation
using a linear equation, and gives a formula for the line most closely matching the data points. It also
gives an R-squared (r²) value indicating how well the resulting line matches the original data points:
the closer the line is to the data points overall, the stronger the relationship.
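The fitted lines and R-squared values reported in Table 2 come from ordinary least-squares regression; the following is a minimal sketch, using invented illustrative readings rather than the study's data:

```python
def linear_fit(xs, ys):
    """Least-squares fit ys ~ a + b*xs; returns (intercept, slope, r_squared)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx                       # slope
    a = my - b * mx                     # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot          # coefficient of determination
    return a, b, r2

# Illustrative: regress PRES RR on PRES RL (cf. "PRES RR = 114 + 0.787 PRES RL")
pres_rl = [400, 520, 610, 700, 820]   # invented values
pres_rr = [430, 520, 600, 665, 760]   # invented values
a, b, r2 = linear_fit(pres_rl, pres_rr)
print(f"PRES RR = {a:.1f} + {b:.3f} PRES RL, R-sq = {r2:.1%}")
```

An r² near 100% means the line explains almost all of the variation in the response, which is how the strong pressure relationships in Table 2 should be read.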
Table 2 shows that all the variables have linear relationships, as expressed by the linear equations. The
right-handed people have the equation PRES RR = 114 + 0.787 PRES RL with r² = 87.5%, while the
left-handed people have the equation PRES LL = 101 + 0.772 PRES LR with a higher r² of 90.1%.
These high values of r² show that pressure has a strong relationship for both the right-handed and the
left-handed people. For altitude and azimuth, the corresponding values of r² are less than 80%,
indicating weaker relationships.
For the linear relationship between pen pressure, altitude and azimuth, Table 2 shows that for the
major-usage hand the left-handed people have a higher r² value (82.3%) than the right-handed people
(69.4%). For the minor-usage hand, however, the r² value is higher for right-handed people (90.1%)
than for left-handed people (79.4%). These results show a high linear relationship between pen
pressure, altitude and azimuth for both groups of people, for both their major- and minor-usage hands.
Figure 4: Graph of Regressions
Table 2: Regression Analysis
Equation R-Sq
PRES RR vs. ALT RR, AZM RR PRES RR = - 3892 + 2.60 ALT RR + 2.44 AZM RR 69.4%
PRES LL vs. ALT LL, AZM LL PRES LL = - 629 + 10.2 ALT LL - 2.10 AZM LL 82.3%
PRES RL vs. ALT RL, AZM RL PRES RL = - 1265 + 9.30 ALT RL - 1.52 AZM RL 90.1%
PRES LR vs. ALT LR, AZM LR PRES LR = - 25218 + 25.0 ALT LR + 11.8 AZM LR 79.4%
PRES RR vs. PRES RL PRES RR = 114 + 0.787 PRES RL 87.5%
PRES LL vs. PRES LR PRES LL = 101 + 0.772 PRES LR 90.1%
PRES RR vs. PRES LL PRES RR = 77.0 + 0.985 PRES LL 77.9%
PRES RL vs. PRES LR PRES RL = 83.9 + 0.946 PRES LR 77.0%
ALT RR vs. ALT RL ALT RR = 353 + 0.392 ALT RL 23.7%
ALT LL vs. ALT LR ALT LL = - 741 + 2.26 ALT LR 79.7%
ALT RR vs. ALT LL ALT RR = 261 + 0.517 ALT LL 60.6%
ALT RL vs. ALT LR ALT RL = - 581 + 1.92 ALT LR 84.6%
AZM RR vs. AZM RL AZM RR = 3552 - 1.02 AZM RL 69.3%
AZM LL vs. AZM LR AZM LL = 7326 - 5.37 AZM LR 38.8%
AZM RR vs. AZM LL AZM RR = - 738 + 0.763 AZM LL 85.5%
AZM RL vs. AZM LR AZM RL = - 268 + 2.91 AZM LR 25.0%
Figure 4 shows that pen pressure has a higher regression percentage than altitude and azimuth for all
types of hand usage. This again suggests using pen pressure for signature recognition.
IV. CONCLUSION AND FUTURE WORKS
This work analysed signature recognition, particularly with respect to people's hand preferences,
using correlation and regression methods. The left-handed people have higher correlation values than
the right-handed people for pressure and altitude, but for azimuth the right-handed people have the
higher correlation. This means that each hand-preference group has its own parameters that should be
considered when performing signature recognition between these two groups of people. The
regression results show a high linear relationship between pen pressure, altitude and azimuth for both
groups and for both their major- and minor-usage hands; that is, all groups of data have a highly linear
relationship among these three parameters. From the resulting analysis, pen pressure is more advisable
for signature recognition than altitude or azimuth, since the pen-pressure data show the highest
correlation and regression values. This result indicates that the data from left-handed and right-handed
people's signatures are highly related in terms of pen pressure.
This research can be extended for real-world application, given the market demand for an established
method or technique of verifying signatures. Some further recommendations can be made. Firstly, the
analysis can be extended by developing new signature-recognition software; such software would
make the research more reliable and might predict the outcome from the input signatures. The present
method uses only correlation and regression analysis; by applying several recognition algorithms, the
research could produce more precise and trusted results. The number of samples should also be
increased beyond 30 for each data group. Finally, the physical pose and body position of the person
giving the signature are very important: subjects should hold the same pose while the signature is
taken, which will reduce errors in the Intuos pen position that affect the altitude and azimuth of the
signatures.
REFERENCES
[1] Anil K. Jain, Friederike D. Griess and Scott D. Connell, On-line signature verification.
Pattern Recognition 35 (2002) pp.2963 – 2972.
[2] Hiroki Shimizu, Satoshi Kiyono, Takenori Motoki and Wei Gao. An electrical pen for
signature verification using a two-dimensional optical angle sensor. Sensors and Actuators A
111 (2004) pp.216–221.
[3] Inan Güler and Majid Meghdadi. A different approach to off-line handwritten signature
verification using the optimal dynamic time warping algorithm. Digital Signal Processing 18
(2008) pp.940–950.
[4] Musa Mailah and Lim Boon Han. Biometrics signature verification using pen position, time,
velocity and pressure parameters. Jurnal Teknologi, UTM, 48(A), Jun 2008, pp. 35-54.
[5] Fernando Alonso-Fernandez, Julian Fierrez-Aguilar, Francisco del-Valle and Javier Ortega-
Garcia. On-Line Signature Verification Using Tablet PC. Proceedings of the 4th International
Symposium on Image and Signal Processing and Analysis (2005) pp 245-250.
[6] Oscar Miguel-Hurtado, Luis Mengibar-Pozo, Michael G. Lorenz and Judith Liu-Jimenez. On-
Line Signature Verification by Dynamic Time Warping and Gaussian Mixture Models. 41st
Annual IEEE International Carnahan Conference on Security Technology (2007), pp. 23-29.
[7] Seiichiro Hangai, Shinji Yamanaka and Takayuki Hamamoto. On-Line Signature Verification
Based on Altitude and Direction of Pen Movement. IEEE International Conference on
Multimedia and Expo (2000), pp. 489-492.
[8] Lim Boon Han, Biometric Signature Verification Using Neural Network. Universiti
Teknologi Malaysia. Master of Engineering (Mechanical) Thesis, 2005.
[9] Reena Bajaj and Santanu Chaudhury. Signature Verification Using Multiple Neural
Classifiers. Pattern Recognition, Vol. 30, No. 1, pp. 1-7, 1997.
Authors
A. GASMELSEED received his B.Sc. degree in Electrical Engineering and Informatics (major
in Computer Engineering) and M.Sc. degree in Electrical Engineering and Informatics from
Budapest, Hungary, in 1993 and 1999, respectively. He received the Ph.D. degree in Electrical
Engineering from Universiti Teknologi Malaysia (UTM), Malaysia, in 2009. His research is in
the areas of electromagnetic biological effects, biophotonics, and computer signal/image-processing
applications in biomedical engineering. He is currently a Senior Lecturer at the Faculty of
Health Science and Biomedical Engineering, UTM.
N. H. MAHMOOD received his B.Sc. and M.Sc. degrees in Electrical Engineering from
Universiti Kebangsaan Malaysia (UKM) and Universiti Teknologi Malaysia (UTM)
respectively. He obtained his Ph.D. degree from the University of Warwick, United Kingdom.
His research areas are biomedical image processing, medical electronics and rehabilitation
engineering. He is currently a Senior Lecturer at the Faculty of Health Science and Biomedical
Engineering, UTM.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
47 Vol. 1, Issue 5, pp. 47-57
DESIGN AND SIMULATION OF AN INTELLIGENT TRAFFIC
CONTROL SYSTEM
Osigwe Uchenna Chinyere1, Oladipo Onaolapo Francisca2, Onibere Emmanuel Amano3
1, 2 Computer Science Department, Nnamdi Azikiwe University, Awka, Nigeria
3 Computer Science Department, University of Benin, Benin City, Nigeria
ABSTRACT
This paper describes our research experience of building an intelligent system to monitor and control road
traffic in a Nigerian city. A hybrid methodology, obtained by crossing the Structured Systems Analysis and
Design Methodology (SSADM) with a fuzzy logic-based design methodology, was deployed to develop and
implement the system. Problems were identified with the current traffic control system at '+' junctions, which
necessitated the design and implementation of a new system to solve them. The resulting fuzzy logic-based
traffic control system was simulated and tested on a popular intersection, notorious for severe traffic logjams,
in a Nigerian city. The new system eliminated some of the problems identified in the current traffic
monitoring and control systems.
KEYWORDS: Fuzzy Logic, embedded systems, road traffic, simulation, hybrid methodologies
I. INTRODUCTION
One of the major problems encountered in large cities is traffic congestion. Data from the
Chartered Institute of Traffic and Logistic in Nigeria revealed that about 75 per cent of mobility needs
in the country are met by road transport, and that more than seven million vehicles operate on
Nigerian roads on a daily basis [1]. This figure was also confirmed by the Federal Road Safety
Commission of Nigeria, the institution responsible for maintaining safety on the roads [2]. The
commission further affirmed that the high traffic density was caused by the influx of vehicles
resulting from breakdowns in other transport sectors, and is most prevalent at '+' road junctions.
Several measures have been deployed to address road traffic congestion in large Nigerian cities,
among them the construction of flyovers and bypass roads, the creation of ring roads, the posting of
traffic wardens to trouble spots, and the construction of conventional counter-based traffic lights.
These measures, however, have failed to free the major '+' intersections, resulting in loss of human
lives and a waste of valuable man-hours during working days.
This paper describes a solution to road traffic problems in large cities through the design and
implementation of an intelligent system, based on fuzzy logic technology, to monitor and control the
traffic light system. The authors show how the new fuzzy logic traffic control system for the '+'
junction eliminated the problems observed in the manual and conventional traffic control systems,
using simulation software developed in the Java programming language. This paper is divided into
five sections. The first section provides a brief introduction to traffic management in general and
describes the situation in urban cities. Related research experiences and results on road traffic systems
are reviewed in the second section, with particular attention to intelligent traffic control systems, and
several approaches are outlined. Section three describes the methodologies deployed in the
development of the system, section four presents the research results, and section five concludes the
work.
II. REVIEW OF RELATED WORK
An intelligent traffic light monitoring system using an adaptive associative memory was designed by
Abdul Kareem and Jantan (2011). The research was motivated by the need to reduce the unnecessarily
long waiting times of vehicles at regular, fixed-cycle traffic lights in urban areas. To improve the
traffic light configuration, the paper proposed a monitoring system able to distinguish three street
cases (empty, normal and crowded) using a small associative memory. The experiments presented
promising results when the proposed approach was applied, using a program to monitor one
intersection on Penang Island, Malaysia; the program could determine all street cases under different
weather conditions from the stream of images extracted from the street video cameras [3].
A distributed, knowledge-based system for real-time, traffic-adaptive control of traffic signals was
described by Findler et al. (1997). The system learned in two processes: the first optimized the control
of steady-state traffic at a single intersection and over a network of streets, while the second dealt with
predictive/reactive control in response to sudden changes in traffic patterns [4]. GiYoung et al. (2001)
believed that electro-sensitive traffic lights are more efficient than fixed, preset signal cycles because
they can extend or shorten the signal cycle when the number of vehicles suddenly increases or
decreases. Their work centred on creating an optimal traffic signal using fuzzy control: fuzzy
membership values between 0 and 1 were used to estimate the uncertain length of a vehicle, vehicle
speed and road width, and conditions such as car type, speed, delay in starting time and traffic volume
were stored [5]. A framework for a dynamic and automatic traffic light control expert system was
proposed by [6]. The model adopted inter-arrival and inter-departure times to simulate the numbers of
cars arriving at and leaving the roads; a knowledge-base system and rules were used, and RFID was
deployed to collect road traffic data. This model was able to make the decisions required to control
traffic at intersections based on the traffic data collected by the RFID reader. A paper by Tan et al.
(1996) described the design and implementation of an intelligent traffic lights controller based on
fuzzy logic technology. The researchers developed software to simulate an isolated traffic junction
based on this technology. Their system was highly graphical in nature, used the Windows system, and
allowed simulation of different traffic conditions at the junction. The system compared the fuzzy logic
controller with a conventional fixed-time controller, and the simulation results showed that the fuzzy
logic controller performed better and was more cost-effective [7].
Research efforts in traffic engineering yielded the queue traffic light model, in which vehicles arrive
at an intersection controlled by a traffic light and form a queue. Several research efforts developed
techniques for evaluating the queue length in each lane based on street width and the number of
vehicles expected at a given time of day. The efficiency of the traffic light in the queue model,
however, was affected by unexpected events such as the breakdown of a vehicle or road traffic
accidents, which disrupt the flow of vehicles. Among the techniques based on the queue model was a
queue detection algorithm proposed by [8]. The algorithm consisted of motion detection and vehicle
detection operations, both based on extracting the edges of the scene to reduce the effects of
variations in lighting conditions. A decentralized control model was described by Jin & Ozguner
(1999); this model combined multi-destination routing with real-time traffic light control based on a
concept of cost-to-go to different destinations [9]. The belief that electronic traffic signals will
augment the traditional traffic light system in future intelligent transportation environments, because
they have the advantage of being easily visible to machines, was propagated by Huang and Miller
(2004). Their work presented a basic electronic traffic signalling protocol framework and two of its
derivatives: a reliable protocol for intersection traffic signals and one for stop-sign signals. These
protocols enabled recipient vehicles to robustly differentiate a signal's designated directions despite
potential threats (confusions) caused by reflections. The authors also demonstrated how to use one of
the protocols to construct a sample application, a red-light alert system, raised the issue of potential
inconsistency threats caused by the uncertainty of the location system being used, and discussed
means to handle them [10]. Di Febbraro et al. (2004) showed that Petri net (PN) models can be applied
to traffic control. The researchers provided a modular representation of urban traffic systems regulated
by signalized intersections, considering such systems to be composed of elementary structural
components, namely intersections and road stretches; the movement of vehicles in the traffic network
was described with a microscopic representation realized via timed PNs. An interesting feature of the
model was the possibility of representing the offsets among different traffic light cycles
as embedded in the structure of the model itself [11]. Nagel and Schreckenberg (1992) described a
Cellular Automata model for traffic simulation: at each discrete time step, vehicles increase their
speed by a certain amount until they reach their maximum velocity, and when a slower-moving
vehicle is ahead, the speed is decreased to avoid collision. Some randomness is introduced by adding,
for each vehicle, a small chance of slowing down [12].
The experience of building a traffic light controller using a simple predictor was described by
Tavladakis (1999). Measurements taken during the current cycle were used to test several possible
settings for the next cycle, and the setting resulting in the fewest queued vehicles was executed. The
system was highly adaptive; however, because it used data from only one cycle, it could not handle
strong fluctuations in traffic flow well [13]. Chattaraj et al. (2008) proposed a novel architecture for
intelligent road traffic control systems based on Radio Frequency Identification (RFID) tracking of
vehicles. This architecture can be used in places where RFID tagging of vehicles is compulsory; the
efficiency of the system lies in the fact that it operates traffic signals based on the current vehicular
volume in the different directions of a road crossing rather than on pre-assigned times [14].
III. METHODOLOGY
A novel methodology for the design and implementation of the intelligent traffic lights control system
is described in this work. This methodology was obtained as a hybrid of two standard methodologies:
the Structured Systems Analysis and Design Methodology (SSADM) and the Fuzzy-Based Design
Methodology (Figure 1). The systems study and preliminary design were carried out using SSADM,
which replaced the first step of the Fuzzy-Based Design Methodology, as shown by the broken arc in
Figure 1. The fuzzy logic-based methodology was chosen as the paradigm for an alternative design
methodology, applied in developing both linear and non-linear systems for embedded control. The
physical and logical design phases of SSADM were therefore replaced by the two remaining steps of
the fuzzy logic-based methodology to complete the crossing of the two methodologies. A hybrid
methodology was necessary because the existing systems had to be examined and the intersections
classified as 'Y' and '+' junctions, with a view to determining the major causes of traffic deadlock at
road junctions. There was also the need to design the traffic control system using fuzzy rules and
simulation, to implement an intelligent traffic control system that eliminates logjams.
Figure 1: Our hybrid design methodology. The SSADM stages (investigate current system; Business
System Options (BSOs); Requirement Specification; Technical System Options (TSOs); Logical
Design; Physical Design) are crossed with the fuzzy design steps (understand the physical system and
control requirements; design the controller using fuzzy rules; simulate, debug and implement the system).
An analysis of the current traffic control system in the South-Eastern Nigerian city showed that some
of the junctions are controlled by traffic wardens while others are not manned at all. Some junctions
have strategically located traffic lights, but these are not intelligent. Problems arise because traffic
wardens cannot be relied upon to control traffic effectively through hand signals: being human, they
easily get tired, and they may leave their duty posts when the weather is not conducive. Cars in urban
traffic can also experience long travel times due to the inefficient fixed-time traffic light controllers
used at some junctions in the cities. Moreover, there is no effective intelligent traffic system that
works twenty-four hours a day (day and night) to control the signals at these busy junctions. In
addition, aside from the manual control of traffic by traffic policemen, there are basically two types of
conventional traffic light control in use. One type uses a preset cycle time to change the lights, while
the other combines a preset cycle time with proximity sensors that can activate a change in the cycle
time, for example for a less-travelled street that needs a green light only when cars are present. This
type of control depends on prior knowledge of the flow patterns at the intersection, so that signal cycle
times and the placement of proximity sensors can be customized for the intersection.
IV. RESULTS AND DISCUSSIONS
Based on our analysis of the present traffic control system, the following assumptions became
necessary in order to develop a feasible system:
1. The system only works for an isolated four-way junction with traffic coming from the four
cardinal directions.
2. Traffic moves from the North to the South and vice versa at the same time, during which the
traffic from the East and West is stopped. The controller therefore considers the combined
waiting densities of the North and South as one side and those of the East and West as the
other side.
3. Turns (right and left) are considered in the design.
4. The traffic from the West lane always has the right of way, and the West-East lane is
considered the main traffic.
4.1 Results: Input / Output Specifications for the Design
Figure 2 shows the general structure of a fuzzy input/output traffic lights control system. The system
was modelled after the intelligent traffic control system developed at the Artificial Intelligence Centre,
Universiti Teknologi Malaysia, for the city of Kuala Lumpur, Malaysia by [7]. S represents the two
electromagnetic sensors placed on the road in each lane: the first sensor is placed behind the traffic
lights and the second is located at a distance D behind the first. A sensor network normally constitutes
a wireless ad-hoc network [15], meaning that each sensor supports a multi-hop routing algorithm. The
first sensor counts the number of cars passing the traffic lights, while the second counts the number of
cars approaching the intersection at distance D from the lights. The number of cars between the two
sensors is obtained as the difference between their readings. This differs from a conventional traffic
control system, where a proximity sensor placed at the front of each traffic light can only sense the
presence of cars waiting at the junction, not the number of cars waiting. The sequence of states that
the fuzzy traffic controller cycles through is governed by the state machine, with one state for each
phase of the traffic light. A default state applies when no incoming traffic is detected; it corresponds
to the green time of a specific approach, usually the main one. In the sequence of states, a state can be
skipped if there is no vehicle queue for the corresponding approach. The objectives of this design are
to simulate an intelligent road traffic control system and to build platform-independent software that
is simple, flexible and robust, and that will ease traffic congestion (deadlock) in an urban Nigerian
city, especially at the '+' junction.
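The two-sensor queue estimate and the phase-skipping state machine described above can be sketched as follows; the class and function names are illustrative assumptions, not the authors' Java implementation:

```python
# Sketch of the two-sensor car count and the phase-skipping state machine.
class Approach:
    def __init__(self, name):
        self.name = name
        self.arrived = 0   # count from the upstream sensor, distance D away
        self.departed = 0  # count from the sensor behind the traffic lights

    def queue(self):
        """Cars currently between the two sensors (arrivals minus departures)."""
        return self.arrived - self.departed

def next_phase(approaches, current, default="West-East"):
    """Cycle to the next approach with waiting cars, skipping empty queues.
    Fall back to the default (main) approach when no traffic is detected."""
    n = len(approaches)
    start = next(i for i, ap in enumerate(approaches) if ap.name == current)
    for step in range(1, n + 1):
        cand = approaches[(start + step) % n]
        if cand.queue() > 0:
            return cand.name
    return default

ns, ew = Approach("North-South"), Approach("West-East")
ns.arrived, ns.departed = 7, 2   # 5 cars queued on the North-South approach
print(next_phase([ew, ns], current="West-East"))  # -> North-South
```

When every queue is empty, the controller stays on the default (main) approach, mirroring the default green state described above.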
Figure 2 General structure of a fuzzy input/output traffic lights Control System
4.2 High Level Model and Modules Specifications for the System
Figure 3 shows the high-level model of the system. The main module is the traffic panel, and the main
class, trafficSystem, is implemented in the Java programming language. Several methods implement
the intelligent traffic light system, such as changeLight, Calflow, TrafficPanel, PaintMode,
PaintLightPeriod, PaintLights, Traffic System, Waiting, Moving, Flow density, Run, ActionPerformed
and ItemStateChanged. These methods are interwoven into a complete interface that implements a
total intelligent traffic control system; the main class trafficSystem calls the other methods stated
above. The changeLight module toggles the lights (green to red and vice versa) depending on the
signal passed to its executable thread. Calflow animates the objects (cars) on the interface using a
flow sequence that depicts typical traffic and a time sequence automatically generated by the system
timer (measured in milliseconds), taking into consideration the number of cars waiting and the time
they have been in the queue. TrafficPanel initializes the interface parameters such as frames, buttons,
the timer, objects and other processes (threads) that run when the interface is invoked by the applet
viewer command. PaintMode, PaintLight, PaintRoad and PaintLights, on the other hand, are modules
that draw the objects (cars), lights, roads (paths) for traffic flow, and the graphs for traffic counts and
the toggling of the traffic lights; these modules implement the various functionalities of the graphic
interface or class library.
Figure 3 High level model of the traffic control system
It is worth mentioning here that the attributes of a typical car object are initialized by the class node
defined at the beginning of the code. Attributes such as the X and Y co-ordinates of the car object, and
its line, road and delay, are all encapsulated in class node, which is inherited by other classes to
implement the entire system. The trafficSystem class initializes the buttons that start and end
the traffic light simulation. The start and end processes commence and terminate the traffic flow and
light sequence respectively. The modules for commencing and terminating the traffic control process
are bound to these controls at run time. This is achieved by implementing the ActionListener class,
which listens for a click event on a specific button; each click event invokes an ActionEvent that
retrieves the label of the button to determine which one was invoked. This allows comprehensive
control of operations on the interface without deadlock. The Waiting module enables the program to
plot the graph of car waiting times, and the Moving class plots the graph of car moving times for both
the conventional and the fuzzy logic traffic control systems. The Flow density module checks the car
density of every lane, that is, it checks which lane has more cars before giving access for movement.
The Run class multithreads the traffic light and controls the Go and Stop buttons. The
ActionPerformed class is responsible for loading the applet in the browser, and the ItemStateChanged
class ensures that car sensors are not deselected, so that the program works efficiently. Finally, the
traffic control system simulates the complete functionality of a real-time traffic light and provides a
user-friendly interface for easy implementation. The overall internal context diagram for the system is
shown in Figure 4.
Figure 4 Overall internal context diagram for the system
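The division of labour among the modules described above can be illustrated with a small sketch; this is a hypothetical Python rendering of the flow-density check and light toggling, not the authors' Java code:

```python
# Illustrative sketch of the Flow density / changeLight behaviour: the lane
# with the most waiting cars is served next, and the lights are toggled so
# that only that lane is green.
GREEN, RED = "green", "red"

def flow_density(lanes):
    """Return the lane with the most waiting cars (the next lane to serve)."""
    return max(lanes, key=lanes.get)

def change_light(lights, lane):
    """Give `lane` the green light and set every other lane to red."""
    for name in lights:
        lights[name] = GREEN if name == lane else RED
    return lights

lanes = {"north": 4, "south": 9, "east": 2, "west": 5}
lights = {name: RED for name in lanes}
busiest = flow_density(lanes)
change_light(lights, busiest)
print(busiest, lights[busiest])  # -> south green
```

In the actual system these decisions run on threads and are redrawn by the Paint modules; the sketch only captures the density-then-toggle logic.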
4.3 Simulation of the Traffic Control System
Java SE 6 Update 10 was the tool deployed for building the simulated version of the traffic control
system. This choice was based on the feature that the Java is the researchers’ language of choice in
developing applications that require higher performance [15]. The Java Virtual Machine, (JVM) provided support for multiple languages platforms and the Java SE 6 Update 10 provided an improved
performance of Java2D graphics primitives on Windows, using Direct3D and hardware acceleration.
Figures 5 shows control centre for the simulation of the traffic control system.
Figure 5 The simulated fuzzy logic traffic control system
The system is highly graphical in nature. A number of pop-up and push-down menus were introduced in the implementation for ease of use (Figure 5). Command buttons to display graphs showing the waiting time of cars (Figure 6), the movement time of cars (Figure 7), the car flow density (Figure 8) and the current arrival/departure times were all embedded in the application's control centre. The views can be cascaded to show the control centre and any of the graphs at the same time (Figure 9). Two fuzzy input variables were chosen in the design to represent the quantity of traffic on the arrival side (Arrival) and the quantity of traffic on the queuing side (Queue). The green side represented the arrival side, while the red side represented the queuing side. To vary the flow of traffic in the simulation according to real-life situations, the density of the car flow is set as required by clicking on the arrows on the sides of each lane.
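The two fuzzy inputs described above (Arrival and Queue) can be sketched as a small Mamdani-style controller that decides how long to extend the green phase. The membership breakpoints, rule table and output values below are illustrative assumptions for a sketch, not the parameters used in the authors' Java implementation.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(n):
    """Grade a car count as few / medium / many on an assumed 0-20 car scale."""
    return {"few": tri(n, -1, 0, 10),
            "medium": tri(n, 5, 10, 15),
            "many": tri(n, 10, 20, 21)}

# Rule base: more arrivals and a shorter cross-street queue favour a longer
# green extension (output in seconds, defuzzified by weighted average).
EXTENSION = {("few", "few"): 5,   ("few", "medium"): 0,    ("few", "many"): 0,
             ("medium", "few"): 15, ("medium", "medium"): 10, ("medium", "many"): 5,
             ("many", "few"): 25,  ("many", "medium"): 20,  ("many", "many"): 10}

def green_extension(arrival, queue):
    """Weighted-average defuzzification over all fired rules (AND = min)."""
    fa, fq = fuzzify(arrival), fuzzify(queue)
    num = den = 0.0
    for (la, lq), ext in EXTENSION.items():
        w = min(fa[la], fq[lq])   # rule firing strength
        num += w * ext
        den += w
    return num / den if den else 0.0
```

With a heavy arrival side and a light queue, the controller grants a long extension; with no arrivals it grants none.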
Figure 6 Car waiting time in the simulation
Figure 7 Car moving time in the simulation
Figure 8 Flow density of cars in the simulation
Figure 9 Cascading different views of the traffic control system
V. CONCLUSION
Information technology (IT) has transformed many industries, from education to health care to government, and is now in the early stages of transforming transportation systems. While many think improving a country's transportation system solely means building new roads or repairing aging infrastructure, the future of transportation lies not only in concrete and steel, but increasingly in using IT. IT enables elements within the transportation system (vehicles, roads, traffic lights, message signs, etc.) to become intelligent by embedding them with microchips and sensors and empowering them to communicate with each other through wireless technologies [16]. The researchers in this work attempted to solve the problems of road traffic congestion in large cities through the design and implementation of an intelligent system, based on fuzzy logic technology, to monitor and control traffic lights. An analysis of the current traffic management system in Nigeria was carried out, and the results of the analysis necessitated the design of an intelligent traffic control system. Figures 5 through 9 show the outputs of a Java software simulation of the system, developed using a popular '+' junction in an eastern Nigerian city notorious for traffic congestion. The system eliminated the problems observed in the manual and conventional traffic control systems as the flow density was varied according to real-life traffic situations. It was observed that the fuzzy logic control system provided better performance in terms of total waiting time as well as total moving time. Since the efficiency of any service facility is measured in terms of how busy the facility is, we deem it imperative to say that the system in question is not only highly efficient but has also successfully curbed the menace of traffic deadlock, which has become a phenomenon on our roads, as less waiting time will not only reduce fuel consumption but also reduce air and noise pollution.
REFERENCES
[1]. Ugwu, C. (2009). Nigeria: Over 7 Million Vehicles Ply Nigerian Roads Daily - Filani. Champion Newspapers, Nigeria, 2nd October 2009. Posted by the AllAfrica.com project. Downloaded 15 September 2011 from http://allafrica.com/stories/200910020071.html
[2]. Mbawike, N. (2007). 7 Million Vehicles Operate On Nigerian Roads - FRSC. LEADERSHIP Newspaper, 16th November, 2007. Posted by Nigerian Muse Projects. Downloaded 15 September 2011 from http://www.nigerianmuse.com/20071116004932zg/nm-projects/7-million-vehicles-operate-on-nigerian-roads-frsc/
[3]. Abdul Kareem, E.I. & Jantan, A. (2011). An Intelligent Traffic Light Monitor System using an Adaptive Associative Memory. International Journal of Information Processing and Management, 2(2): 23-39.
[4]. Findler, N. V., Sudeep S., Ziya, M. & Serban, C. (1997). Distributed Intelligent Control of Street and
Highway Ramp Traffic Signals. Engineering Applications of Artificial Intelligence 10(3):281- 292.
[5]. GiYoung, L., Kang J. and Hong Y. (2001). The optimization of traffic signal light using
artificial intelligence. Proceedings of the 10th IEEE International Conference on Fuzzy Systems.
[6]. Wen, W. (2008). A dynamic and automatic traffic light control expert system for solving the road
congestion problem. Expert Systems with Applications 34(4):2370-2381.
[7]. Tan, K., Khalid, M. and Yusof, R. (1996). Intelligent traffic lights control by fuzzy logic. Malaysian
Journal of Computer Science, 9(2): 29-35
[8]. Fathy, M. and Siyal, M. Y. (1995). Real-time image processing approach to measure traffic queue
parameters. Vision, Image and Signal Processing, IEEE Proceedings - 142(5):297-303.
[9]. Lei, J and Ozguner. U. (1999). Combined decentralized multi-destination dynamic routing and real-
time traffic light control for congested traffic networks. In Proceedings of the 38th IEEE Conference on
Decision and Control.
[10]. Huang, Q. and Miller, R. (2004). Reliable Wireless Traffic Signal Protocols for Smart
Intersections. Downloaded August 2011 from
http://www2.parc.com/spl/members/qhuang/papers/tlights_itsa.pdf
[11]. Di Febbraro, A., Giglio, D. and Sacco, N. (2004). Urban traffic control structure based on
hybrid Petri nets. Intelligent Transportation Systems, IEEE Transactions on 5(4):224-237.
[12]. Nagel, K.A. and Schreckenberg, M.B. (1992). A cellular automaton model for freeway traffic. Downloaded September 2011 from www.ptt.uni-duisburg.de/fileadmin/docs/paper/1992/origca.pdf.
[13]. Tavladakis, A. K.(1999). Development of an Autonomous Adaptive Traffic Control System.
European Symposium on Intelligent Techniques.
[14]. Chattaraj, A. Chakrabarti, S., Bansal, S., Halder , S. and . Chandra, A. (2008). Intelligent
Traffic Control System using RFID. In Proceedings of the National Conference on Device, Intelligent
System and Communication & Networking, India.
[15]. Osigwe U. C. (2011). An Intelligent Traffic Control System. Unpublished M.Sc thesis,
Computer Science Department, Nnamdi Azikiwe University, Awka, Nigeria.
[16]. Ezell, S. (2011). Explaining International IT Application Leadership: Intelligent Transportation Systems. White paper of the Information Technology and Innovation Foundation (ITIF). Downloaded August 2011 from www.itif.org/files/2010-1-27-ITS_Leadership.pdf
AUTHORS’ BIOGRAPHY
Osigwe, Uchenna Chinyere is completing her M.Sc. in Computer Science at Nnamdi Azikiwe University, Awka, Nigeria. She is a chartered practitioner of the computing profession in Nigeria, having been registered with the Computer Professionals Regulatory Council of Nigeria. She is currently a Systems Analyst with the Imo State University Teaching Hospital, Orlu, Nigeria.
Oladipo, Onaolapo Francisca holds a Ph.D in Computer Science from Nnamdi Azikiwe
University, Awka, Nigeria, where she is currently a faculty member. Her research interests
span various areas of Computer Science and Applied Computing. She has published
numerous papers detailing her research experiences in both local and international journals
and presented research papers in a number of international conferences. She is also a reviewer
for many international journals and conferences. She is a member of several professional and
scientific associations both within Nigeria and beyond; they include the British Computer
Society, Nigerian Computer Society, Computer Professionals (Regulatory Council) of
Nigeria, the Global Internet Governance Academic Network (GigaNet), International Association Of Computer
Science and Information Technology (IACSIT ), the Internet Society (ISOC), Diplo Internet Governance
Community and the Africa ICT Network.
Emmanuel Onibere started his teaching career in the University of Ibadan in 1976 as an
Assistant Lecturer. He moved to University of Benin in 1977 as Lecturer II. He rose to
Associate Professor of Computer Science in 1990. In January 1999 he took up an appointment
at University of Botswana, Gaborone to give academic leadership, while on leave of absence
from the University of Benin. In October 2000, he was appointed Commonwealth Visiting Professor of
Computer Science to University of Buea in Cameroon to again give academic leadership. He returned in
December 2002 to University of Benin. In 2003 he was appointed full Professor of Computer Science in
University of Benin. Prof. Onibere, has been an External Examiner at B.Sc, M.Sc. and Ph.D levels in many
Universities and he has been a resource person in a number of workshops and conferences both inside and
outside Nigeria. He had BSc in Mathematics, MSc and PhD in Computer Science. His special area of research is
in Software Engineering. He has been involved in a number of research projects both in Nigeria and outside
Nigeria. He has been Chairman of organizing Committee of a number of conferences and training programmes.
Prof. E.A. Onibere has produced 5 Ph.Ds and over 42 Masters. He has published 5 books and fifty articles. He
is currently the Deputy Vice Chancellor (academic) of University of Benin and Chairman of Information
Technology Research and Grants Committee of National Information Technology Development Agency
(NITDA) of the Ministry of Science and Technology.
DESIGN OPTIMIZATION AND SIMULATION OF THE
PHOTOVOLTAIC SYSTEMS ON BUILDINGS IN SOUTHEAST
EUROPE
Florin Agai, Nebi Caka, Vjollca Komoni
Faculty of Electrical and Computer Engineering, University of Prishtina, Prishtina,
Republic of Kosova.
ABSTRACT
The favourable climate conditions of the Southeast Europe and the recent legislation for the utilization of
renewable energy sources provide a substantial incentive for the installation of photovoltaic (PV) systems. In
this paper, the simulation of a grid-connected photovoltaic system is presented with the use of the computer
software package PVsyst and its performance is evaluated. The performance ratio and the various power losses
(temperature, soiling, internal network, power electronics) are calculated. The positive effect on the
environment, achieved by reducing the release of gases that cause the greenhouse effect, is also calculated.
KEYWORDS: Photovoltaic, PV System, Renewable Energy, Simulation, Optimization
I. INTRODUCTION
The aim of the paper is to present a design methodology for photovoltaic (PV) systems, like those of
small appliances, as well as commercial systems connected to the network. It will also present the
potential of Southeast Europe (Kosova) to use solar energy, mentioning the changes in regulations for
initiating economic development. The project of installing a roof-type PV system connected to the grid
will have to answer the following questions:
1. What is the global radiation energy of the sun?
2. What is the maximum electrical power that the PV system generates?
3. What is the amount of electrical energy that the system produces in a year?
4. What is the specific production of electricity?
5. How large are the losses during the conversion in the PV modules (thermal degradation, mismatch)?
6. How large are the values of the loss factors and the normalized output?
7. What is the value of the Performance Ratio (PR)?
8. How large are the losses in the system (inverter, conductors, ...)?
9. What is the value of the energy produced per unit area throughout the year?
10. What is the value of the Rated Power Energy?
11. What is the positive effect on the environment?
We want to know how much electricity could be obtained, and how large the maximum power produced will be, from a photovoltaic system connected to the network, built on the Laboratory of the Technical Faculty of Prishtina, Prishtina, Kosovo. The space has an area of just over 5000 m2, and it has no objects that could cause shadows. We want to install panels in single-crystalline technology, which we are able to choose from the program library. The inverters are also chosen from the library.
Figure 1. Laboratory conceptual plan for the PV system on the roof. Photo taken from Google Maps.

In the next chapter, similar and related projects are mentioned, and the reader can study their reported results through the references. In Materials and Methods, the use of the software for simulating the design and operation of a PV system is explained. In the Results chapter, a detailed report presents all the parameters and results of the simulation. All the losses and mismatches along the system are quantified and visualised on the "Loss Diagram", specific for each configuration.

II. RELATED WORK

In the paper "Performance analysis of a grid connected photovoltaic park on the island of Crete" [2], the grid-connected photovoltaic park (PV park) of Crete has been evaluated and presented through long-term monitoring and investigation. Also, the main objective of the project "Technico-economical Optimization of Photovoltaic Pumping Systems: Pedagogic and Simulation Tool Implementation in the PVsyst Software" [9] is the elaboration of a general procedure for the simulation of photovoltaic pumping systems, and its implementation in the PVsyst software. This tool is mainly dedicated to engineers in charge of solar pumping projects in the southern countries.

III. MATERIALS AND METHODS

Within the project we will use the computer program simulator PVsyst, designed by the Energy Institute of Geneva, which contains all the subprograms for the design, optimization and simulation of PV systems connected to the grid, autonomous systems and solar water pumps. The program includes a separate database of about 7200 models of PV modules and 2000 models of inverters.

PVsyst is a PC software package for the study, sizing, simulation and data analysis of complete PV systems. It is a tool that allows the user to accurately analyze different configurations and to evaluate the results in order to identify the best technical and economical solution, and to closely compare the performances of different technological options for any specific photovoltaic project. The project design part performs detailed simulation in hourly values, and includes an easy-to-use expert system which helps the user to define the PV field and to choose the right components. The Tools part performs the database, meteo and components management. It also provides a wide choice of general solar tools (solar geometry, meteo on tilted planes, etc.), as well as a powerful means of importing real data measured on existing PV systems for close comparisons with simulated values. Besides the Meteo Database included in the software, PVsyst now gives access to many meteorological data sources available from the web, and includes a tool for easily importing the most popular ones.

The data for the parameters of the location: Site and weather: Country: KOSOVO; Locality: Prishtina. Geographic coordinates: latitude 42°40'N, longitude 21°10'E, altitude 652 m. Weather data: Prishtina_sun.met: Prishtina, synthetic hourly data synthesized by the program Meteonorm'97. The solar path diagram is a very useful tool in the first phase of the design of photovoltaic systems for determining the potential shadows. The annual global radiation (direct and diffuse) for Prishtina is 1193 kWh/m2.year. The value of the Albedo effect for urban sites is 0.14 to 0.22; we will take the average value of 0.2 [1].
Figure 2. The diagram of the sun path for Prishtina (42°40' N, 21°10' E)
Transposition factor = 1.07 (the transposition factor shows the relationship between the radiation on the panel plane and the global horizontal radiation). For a grid-connected system, the user just has to enter the desired nominal power and to choose the inverter and the PV module types in the database. The program proposes the number of required inverters, and a possible array layout (number of modules in series and in parallel). This choice is performed taking the engineering system constraints into account: the number of modules in series should produce an MPP voltage compatible with the inverter's voltage window. The user can of course modify the proposed layout; warnings are displayed if the configuration is not quite satisfactory, either in red (serious conflict preventing the simulation) or in orange (not an optimal system, but simulation possible). The warnings are related to the inverter sizing, the array voltage, the number of strings with respect to the inverters, etc.
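The series-sizing constraint described above (the string MPP voltage must stay inside the inverter's voltage window over the operating temperature range) can be sketched as a quick check. The module Vmpp and temperature coefficient below are illustrative assumptions, not data from the paper; the 19-module string and the 450-880 V window are the paper's values.

```python
def string_voltage_ok(n_series, vmpp_module, v_min, v_max,
                      temp_coeff=-0.0034, t_cell_min=-10.0,
                      t_cell_max=70.0, t_stc=25.0):
    """Check that the string MPP voltage stays inside the inverter window
    [v_min, v_max] over the cell temperature range. Voltage is lowest at
    the hottest cell temperature and highest at the coldest."""
    v_cold = n_series * vmpp_module * (1 + temp_coeff * (t_cell_min - t_stc))
    v_hot = n_series * vmpp_module * (1 + temp_coeff * (t_cell_max - t_stc))
    return v_min <= v_hot and v_cold <= v_max

# 19 modules in series against the Hefei 100K3SG window of 450-880 V;
# Vmpp ~ 29.6 V per module is an assumed, not datasheet-verified, value.
print(string_voltage_ok(19, 29.6, 450, 880))   # within the window
print(string_voltage_ok(10, 29.6, 450, 880))   # too few modules: hot voltage falls below 450 V
```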
Photovoltaic (PV) module solution: from the database of PV modules, we choose the model of the solar panel: CS6P-230M, with a maximum peak power output of WP = 230 W, by Canadian Solar Inc.
Inverter solution: for our project we will choose the inverter 100K3SG, with nominal power Pn = 100 kW and an output voltage of 450-880 V, by the manufacturer Hefei. For the chosen modules, here are some characteristics of the working conditions:
Figure 3. U-I characteristics for irradiation h = 1245 W/m2 and working temperature 60 °C; output power P = f(U)
Figure 4. The characteristic of power for irradiation h = 1245 W/m2 and working temperature 60 °C

Figure 5. Block-diagram of the PV System

Figure 5 shows that the PV system is comprised of 2622 Canadian Solar CS6P-230M monocrystalline silicon PV modules (panels). The PV modules are arranged in 138 parallel strings (a string is a serial connection of modules), with 19 modules (panels) in each, and connected to six Hefei 100K3SG inverters installed on the supporting structure, plus connection boxes, irradiance and temperature measurement instrumentation, and a data logging system. The PV system is mounted on a stainless steel support structure facing south and tilted at 30°. Such a tilt angle was chosen to maximize yearly energy production.

IV. RESULTS

1. The global horizontal irradiation energy of the sun for a year in the territory of Eastern Europe (specifically for Prishtina), according to the results from the PVsyst program, is h = 1193 kWh/m2.year. At
the panel surface the level of radiation is 7.9% higher because the panels are tilted. This value is reduced by 3.3% because of the effect of the Incidence Angle Modifier (IAM), and the final value is:
h = 1245 kWh/m2.year.
The reference incident energy falling on the panel surface (in a day) is: Yr = 3.526 kWh/m2/day. The highest value of total radiation on the panel surface is in July, 167.5 kWh/m2, whereas the lowest value is in December, 41.4 kWh/m2. The annual irradiation is 1245 kWh/m2, and the average temperature is 10.26 °C. The PV system generates 76.2 MWh of electricity in July and 20 MWh in December.
2. The maximum electric power that the PV system generates at the output of the inverter is: Pnom = 603 kWp.
3. The annual produced electric energy at the output of the inverter is: E = 610,512 kWh.
4. The specific production of electricity (per kWp per year) is: 1012 kWh/kWp/year.
5. The losses of power during PV conversion in the modules are:
PV losses due to the radiation rate = 4.7%
PV losses due to the temperature scale = -4.9%
Losses due to the quality of the modules = 7976 kWh per year (1.2%)
Losses due to the mismatch of the modules = 14,334 kWh per year (2.1%)
Losses due to conduction resistance = 5174 kWh per year (0.8%).
6. The loss factors and normalised production are:
Lc - Panel losses (losses in the PV array) = 982,006 kWh per year (13.1%)
Ls - System losses (inverter, ...) = 40,904 kWh per year (6.7%)
Yf - Useful energy produced (at the output of the inverter) = 610,512 kWh per year.
The loss factors and normalised production (per installed kWp) are:
Lc - Panel losses (losses in the PV array) = 0.55 kWh/kWp/day
Ls - System losses (inverter, ...) = 0.20 kWh/kWp/day
Yf - Useful energy produced (at the output of the inverter) = 2.77 kWh/kWp/day
7. The performance ratio (PR) is the ratio between the actual yield Yf (output of the inverter) and the target yield Yr (reference incident energy) [2]:

PR = Yf / Yr = 2.77 / 3.526 = 0.787 (78.7%)    (1)
8. System losses are the losses in the inverter and conduction: Ls = -6.7%.
The system efficiency (of the inverters) is: ηsys = 1 - 0.067 = 0.933, or 93.3%.
The overall losses in the PV array (temperature, module quality, mismatch, resistance) are: Lc = -13.1%.
The PV array efficiency is: ηrel = 1 - 0.131 = 0.869, or 86.9%.
9. The energy produced per unit area throughout the year is [3]:

E = PR × h × ηmodule = 0.787 × 1245 × 0.143 ≈ 140.4 kWh/m2 (annual)    (2)
10. The energy for rated power is:

E / Pnom = PR × h = 0.787 × 1245 = 979.8 kWh/kWp (annual)    (3)
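The normalised-yield quantities of items 6-10 can be tied together in a few lines. The inputs are the simulation outputs quoted above; small differences against the paper's printed figures come from the paper rounding PR to 0.787 before multiplying.

```python
# PVsyst-style normalised yields (per installed kWp, per day).
Yr = 3.526   # reference yield: incident energy, kWh/m2/day
Yf = 2.77    # final yield: useful energy at the inverter output, kWh/kWp/day
Lc = 0.55    # collection (array) losses, kWh/kWp/day
Ls = 0.20    # system (inverter, wiring) losses, kWh/kWp/day

PR = Yf / Yr                            # eq. (1): ~0.786 (paper rounds to 0.787)

annual_h = 1245                         # in-plane irradiation, kWh/m2/year
eta_module = 0.143                      # nominal module efficiency
E_per_m2 = PR * annual_h * eta_module   # eq. (2): ~140 kWh/m2 per year
E_per_kWp = PR * annual_h               # eq. (3): ~980 kWh/kWp per year
```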
11. Economic evaluation. With the data on retail prices from the PV and inverter market we can make an estimation of the return on investment [4]:
Panels: 2622 (mod) × 1.2 (Euro/Wp.mod) × 230 (WP) = 723,672 Euro
Inverters: 6 × 5200 (Euro) = 31,200 Euro
Cable: 2622 (mod) × 3 (Euro/mod) = 7,866 Euro
Construction: 2622 (mod) × 5 (Euro/mod) = 13,110 Euro
Handwork: 2622 (mod) × 5 (Euro/mod) = 13,110 Euro
Total: 788,958 Euro
If the price of one kWh of electricity is 0.10 Euro/kWh, then in one year the earnings will be [5]:
610,500 (kWh/year) × 0.10 (Euro/kWh) = 61,050 Euro/year
The time for the return of investment will be: 788,958 / 61,050 = 12.9 years    (4)
The module lifetime is 25 years, and the inverter lifetime is 5 years.
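The cost and payback arithmetic of item 11 and Equation (4) can be reproduced directly; the prices are the paper's retail estimates, not current market data.

```python
# System cost breakdown from item 11 (Euro).
N_MODULES, WP, PRICE_PER_WP = 2622, 230, 1.2

costs = {
    "panels": N_MODULES * PRICE_PER_WP * WP,   # 723,672
    "inverters": 6 * 5200,                     # 31,200
    "cable": N_MODULES * 3,                    # 7,866
    "construction": N_MODULES * 5,             # 13,110
    "handwork": N_MODULES * 5,                 # 13,110
}
total = sum(costs.values())                    # 788,958 Euro

annual_energy_kwh = 610_500
tariff = 0.10                                  # Euro/kWh
annual_revenue = annual_energy_kwh * tariff    # 61,050 Euro/year
payback_years = total / annual_revenue         # ~12.9 years, eq. (4)
```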
12. Positive effect on the environment. During the generation of electricity from fossil fuels, greenhouse gases such as nitrogen oxides (NOx), sulphur dioxide (SO2) and carbon dioxide (CO2) are produced as by-products. A large amount of ash, which must be stored, is also produced [6].
Table 1. Positive effects of the PV system for environmental protection: by-products of a coal-fired
power plant with the same annual electricity production (E = 610.5 MWh per year)

By-product of coal power plant    Per kWh    For annual energy production of E = 610.5 MWh
SO2                               1.24 g     757 kg
NOx                               2.59 g     1581 kg
CO2                               970 g      592.2 t
Ash                               68 g       41.5 t
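Scaling the per-kWh factors by the annual production gives the avoided annual totals; the per-kWh emission factors are taken from the paper's table.

```python
# Avoided coal-plant by-products for the system's annual output.
ANNUAL_KWH = 610_500
per_kwh_g = {"SO2": 1.24, "NOx": 2.59, "CO2": 970, "Ash": 68}  # grams per kWh

# Convert grams/kWh × kWh/year into tonnes/year.
avoided = {k: v * ANNUAL_KWH / 1e6 for k, v in per_kwh_g.items()}
for k, t in avoided.items():
    print(f"{k}: {t:.2f} t avoided per year")
```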
13. Diagrams
Figure 6. Diagram of system losses
The simulation results include a great number of significant data, and quantify the losses at every level of the system, allowing the user to identify the weaknesses of the system design. This should lead to a deep comparison between several possible technological solutions, by comparing the available performances in realistic conditions over a whole year. The default losses management has been improved, especially the "Module quality loss", which is determined from the PV module's tolerance, and the mismatch on Pmpp, which depends on the module technology. Losses between the inverters and the grid injection have been implemented; these may be ohmic wiring losses and/or transformer losses when the transformer is external.
The detailed loss diagram (Figure 6) gives deep insight into the quality of the PV system design, by quantifying all loss effects in a single graph. The losses on each subsystem may be either grouped or expanded into detailed contributions.
Results - and particularly the detailed loss diagram - show the overall performance and the
weaknesses of a particular design.
Figure 7. Reference incident Energy in collector plane
Figure 8. Normalized productions (per installed kWp)
Figure 9. Normalized production and Loss factors
Figure 10. Performance ratio (PR)
Figure 11. Daily input/output diagram
Figure 12. Daily system output energy
Figure 13. Incident irradiation distribution
Figure 14. Array power distribution
V. CONCLUSIONS
The design, optimization and simulation of PV systems for use in Southeast Europe have been analyzed and discussed, and the following conclusions are drawn: the average annual PV system energy output is 1012 kWh/kWp, and the average annual performance ratio of the PV system is 78.7%. The performance ratio (Figure 10) shows the quality of a PV system, and the value of 78.7% is indicative of good quality (Equation 1). Usually the value of the performance ratio ranges from 60-80% [7]. This shows that about 21.3% of the solar energy falling in the analysed period is not converted into usable energy due to factors such as losses in conduction, contact losses, thermal losses, the module and inverter efficiency factors, defects in components, etc.
It is important to have matching between the voltage of the inverter and that of the PV array during all operating conditions. Some inverters have a higher efficiency at a certain voltage, so the PV array must be adapted to this voltage of maximum efficiency. The use of several inverters costs more than using a single inverter of higher power. Figure 9 presents the histogram of the expected power production of the array, compared to the inverter's nominal power, together with an estimation of the overload losses (and a visualization of their effect on the histogram). This tool allows the designer to determine precisely the ratio between the array and inverter Pnom, and evaluates the associated losses.
Utility-interactive PV power systems mounted on residences and commercial buildings are likely to become a small but important source of electric generation in the next century. As most of the electric power supply in developed countries is via a centralised electric grid, it is certain that the widespread use of photovoltaics will be as distributed power generation interconnected with these grids. This is a new concept in utility power production, a change from large-scale central generation that requires the examination of many existing standards and practices to enable the technology to develop and emerge into the marketplace [8]. As prices drop, on-grid applications will become increasingly feasible. For the
currently developed world, the future is grid-connected renewables. In the next 20 years, we can expect only a slight improvement in the efficiency of first-generation (G-1) silicon technology. We will witness a change from the dominance of G-1 technology to an era of shared market with second-generation (G-2) technology, based mainly on thin-film technology (with a 30% cost reduction) [9]. While these two branches will largely dominate the commercial sector of PV systems, the next 20 years will also see increased use of third-generation (G-3) technology and other new technologies, which will improve the performance or reduce the cost of solar cells [10]. During this project, the overall simulation of the grid-connected PV system was brought to the best conditions possible by using the software package PVsyst [16]. Overall, the project gives an understanding of the principle of operation, the factors affecting it positively and negatively, the losses incurred before conversion, the conversion losses, and the losses in the cells after conversion. All this helps us to optimize PV systems under the conditions of Eastern Europe.
REFERENCES
[1] Ricardo Borges, Kurt Mueller, and Nelson Braga, (2010) “The Role of Simulation in Photovoltaics: From Solar Cells to Arrays”, Synopsys, Inc.
[2] Kymakis, E., Kalykakis, S., Papazoglou, T. M., (2009) “Performance analysis of a grid connected photovoltaic park on the island of Crete”, Energy Conversion and Management, Vol. 50, pp. 433-438.
[3] Faramarz Sarhaddi, Said Farahat, Hossein Ajam, and Amin Behzadmehr, (2009) “Energetic Optimization of a Solar Photovoltaic Array”, Journal of Thermodynamics, Article ID 313561, 11 pages, doi:10.1155/2009/313561.
[4] Colin Bankier and Steve Gale, (2006) “Energy Payback of Roof Mounted Photovoltaic Cells”, Energy Bulletin.
[5] Hammons, T. J., Sabnich, V., (2005) “Europe: Status of Integrating Renewable Electricity Production into the Grids”, Panel session paper 291-0, St. Petersburg.
[6] E. Alsema, (1999) “Energy Requirements and CO2 Mitigation Potential of PV Systems”, Photovoltaics and the Environment, Keystone, CO, Workshop Proceedings.
[7] Goetzberger, (2005) Photovoltaic Solar Energy Generation, Springer.
[8] Chuck Whitaker, Jeff Newmiller, Michael Ropp, Benn Norris, (2008) “Distributed Photovoltaic Systems Design and Technology Requirements”, Sandia National Laboratories.
[9] Mermoud, A., (2006) “Technico-economical Optimization of Photovoltaic Pumping Systems: Pedagogic and Simulation Tool Implementation in the PVsyst Software”, research report, Institute of Environmental Sciences, University of Geneva.
[10] Gong, X. and Kulkarni, M., (2005) “Design optimization of a large scale rooftop PV system”, Solar Energy, Vol. 78, pp. 362-374.
[11] S. S. Hegedus, A. Luque, (2003) Handbook of Photovoltaic Science and Engineering, John Wiley & Sons.
[12] Darul’a, Ivan, Stefan Marko, (2007) “Large scale integration of renewable electricity production into the grids”, Journal of Electrical Engineering, Vol. 58, No. 1, pp. 58-60.
[13] A. R. Jha, (2010) Solar Cell Technology and Applications, Auerbach Publications.
[14] Martin Green, (2005) Third Generation Photovoltaics: Advanced Solar Energy Conversion, Springer.
[15] M. J. de Wild-Scholten, (2006) “A cost and environmental impact comparison of grid-connected rooftop and ground-based PV systems”, 21st European Photovoltaic Solar Energy Conference, Dresden, Germany.
[16] www.pvsyst.com
Authors
Florin Agai received the Dipl. Ing. degree from the Faculty of Electrical Engineering in Skopje,
the “St. Kiril and Metodij” University, in 1998. He currently works as a professor at the Electro-
technical High School in Gostivar, Macedonia. He has recently finished his thesis for the Mr. Sc.
degree at the Faculty of Electrical and Computer Engineering, the University of Prishtina,
Prishtina, Kosovo.
Nebi Caka received the Dipl. Ing. degree in electronics and telecommunications from the
Technical Faculty of Banja Luka, the University of Sarajevo, Bosnia and Herzegovina, in 1971;
Mr. Sc degree in professional electronics and radio-communications from the Faculty of
Electrical Engineering and Computing, the University of Zagreb, Zagreb, Croatia, in 1988; and
Dr. Sc. degree in electronics from the Faculty of Electrical and Computer Engineering, the
University of Prishtina, Prishtina, Kosovo, in 2001. In 1976 he joined the Faculty of Electrical
and Computer Engineering in Prishtina, where he is now a Full Professor of Microelectronics,
Optoelectronics, Optical Communications, VLSI Systems, and Laser Processing.
Vjollca Komoni received Dipl. Ing. degree in electrical engineering from the Faculty of
Electrical and Computer Engineering, the University of Prishtina, Prishtina, Kosovo, in 1976;
Mr. Sc degree in electrical engineering from the Faculty of Electrical Engineering and
Computing, the University of Zagreb, Zagreb, Croatia, in 1982; and Dr. Sc. degree in electrical
engineering from the Faculty of Electrical and Computer Engineering, the University of Tirana,
Tirana, Albania, in 2008. In 1976 she joined the Faculty of Electrical and Computer
Engineering in Prishtina, where she is now an Assistant Professor of Renewable Sources, Power
Cables, Electrical Installations and Power Systems.
FAULT LOCATION AND DISTANCE ESTIMATION ON POWER
TRANSMISSION LINES USING DISCRETE WAVELET
TRANSFORM
Sunusi Sani Adamu1, Sada Iliya2
1Department of Electrical Engineering, Faculty of Technology, Bayero University Kano, Nigeria
2Department of Electrical Engineering, College of Engineering, Hassan Usman Katsina Polytechnic, Katsina, Nigeria
ABSTRACT
Fault location is very important in power system engineering in order to clear faults quickly and restore power
supply as soon as possible with minimum interruption. In this study a 300 km, 330 kV, 50 Hz power transmission
line model was developed and simulated using the Power System Blockset of MATLAB to obtain fault current
waveforms. The waveforms were analysed using the Discrete Wavelet Transform (DWT) toolbox, selecting a
suitable wavelet family to obtain the pre-fault and post-fault coefficients for estimating the fault distance. This
was achieved by adding the non-negative values of the coefficients after subtracting the pre-fault coefficients from
the post-fault coefficients. It was found that the best results of the distance estimation were achieved using the
Daubechies ‘db5’ wavelet, with an error of three percent (3%).
KEYWORDS: Transmission line, Fault location, Wavelet transform, Signal processing
I. INTRODUCTION
Fault location and distance estimation is a very important issue in power system engineering in order to
clear faults quickly and restore power supply as soon as possible with minimum interruption. This is
necessary for the reliable operation of power equipment and for customer satisfaction. In the past, several
techniques were applied for estimating fault location, such as line
impedance based numerical methods, travelling wave methods and Fourier analysis [1]. Nowadays,
high frequency components are used instead of the traditional methods [2]. The Fourier transform was
used to extract fundamental frequency components, but it has been shown that Fourier transform
based analysis sometimes does not perform time localisation of time-varying signals with acceptable
accuracy. Recently the wavelet transform has been used extensively for estimating fault location
accurately. The most important characteristic of the wavelet transform is that it analyses the waveform on the time
scale rather than in the frequency domain. Hence the Discrete Wavelet Transform (DWT) is used in this
paper, because it is very effective in detecting fault-generated signals as time varies [8].
This paper proposes a wavelet transform based fault locator algorithm. For this purpose, a
330 kV, 300 km, 50 Hz transmission line is simulated using the Power System Blockset of MATLAB
[5]. The current waveforms obtained from the receiving end of the power system are analysed.
These signals are then used in the DWT. Four types of mother wavelet, Daubechies (db5), Biorthogonal
(bior5.5), Coiflet (coif5) and Symlet (sym5), are considered for signal processing.
II. WAVELET TRANSFORM
The wavelet transform (WT) is a mathematical technique used for many applications of signal processing
[5]. Wavelets are much more powerful than conventional methods in processing stochastic signals
because they analyse the waveform in the time-scale region. In the wavelet transform the band of analysis can
be adjusted so that low frequency and high frequency components can be windowed by different
scale factors. Recently the WT has been widely used in signal processing applications such as de-noising,
filtering, and image compression [3]. Many pattern recognition algorithms have been developed based on
the wavelet transform. According to the scale factors used, wavelets can be categorized into different
types. In this work, the discrete wavelet transform (DWT) was used. For any function f, the DWT is
written as:
DWT(m, k) = (1/√(a0^m)) Σ_n f[n]·ψ[(k − n·b0·a0^m)/a0^m]    (1)

where ψ is the mother wavelet [3], a0^m is the scale parameter and n·b0·a0^m is the translation parameter.
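Equation (1) can be made concrete with a small example. The sketch below (pure Python, an illustration added here rather than code from the paper) computes one level of the DWT with the Haar wavelet (db1), the simplest member of the Daubechies family; db5 uses longer filters but is applied in the same way:

```python
import math

# One level of the discrete wavelet transform of equation (1), shown with
# the Haar wavelet (db1). Approximation coefficients are scaled pairwise
# sums; detail coefficients are scaled pairwise differences.
def haar_dwt_level(signal):
    """Single-level Haar DWT; assumes an even-length input signal."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

# A sudden step, like a fault-generated transient, shows up as a large
# detail coefficient at the position of the discontinuity.
sig = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0]
approx, detail = haar_dwt_level(sig)
```

Because the Haar transform is orthonormal, the total signal energy is preserved across the approximation and detail coefficients, which is why coefficient magnitudes can be compared before and after a fault.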
III. TRANSMISSION LINE EQUATIONS
A transmission line is a system of conductors connecting one point to another and along which
electromagnetic energy can be sent. Power transmission lines are a typical example of transmission
lines. The transmission line equations that govern general two-conductor uniform transmission lines,
including two and three wire lines, and coaxial cables, are called the telegraph equations. The general
transmission line equations are named the telegraph equations because they were formulated for the
first time by Oliver Heaviside (1850-1925) when he was employed by a telegraph company and used
to investigate disturbances on telephone wires [1]. When one considers a line segment with
parameters resistance (R), conductance (G), inductance (L), and capacitance (C), all per unit length,
(see Figure 1), the line constants for the segment are R·Δx, G·Δx, L·Δx and C·Δx. The electric flux ψ(t)
and the magnetic flux Φ(t) created by the electromagnetic wave, which causes the instantaneous voltage v(x, t) and current i(x, t), are:

ψ(t) = v(x, t)·C·Δx    (2)

Φ(t) = i(x, t)·L·Δx    (3)
Calculating the voltage drop in the positive x-direction over the distance Δx, one obtains

v(x, t) − v(x + Δx, t) = −Δv(x, t) = −(∂v(x, t)/∂x)·Δx = (R + L·∂/∂t)·i(x, t)·Δx    (4)

Dividing both sides of equation (4) by Δx, the voltage equation becomes

∂v(x, t)/∂x = −L·∂i(x, t)/∂t − R·i(x, t)    (5)

Similarly, for the current flowing through G and the current charging C, Kirchhoff’s current law can
be applied as

i(x, t) − i(x + Δx, t) = −Δi(x, t) = −(∂i(x, t)/∂x)·Δx = (G + C·∂/∂t)·v(x, t)·Δx    (6)

Dividing both sides of (6) by Δx, the current equation becomes

∂i(x, t)/∂x = −C·∂v(x, t)/∂t − G·v(x, t)    (7)
The negative sign in these equations reflects the fact that, as the current and voltage waves
propagate in the positive x-direction, i(x, t) and v(x, t) decrease in amplitude with increasing x. In sinusoidal steady state, the series impedance Z and shunt admittance Y of the line, per unit length, are given by

Z = R + jωL    (8)

Y = G + jωC    (9)
Differentiating once more with respect to x gives the second-order partial differential equations

∂²i(x, t)/∂x² = −Y·∂v(x, t)/∂x = Y·Z·i(x, t) = γ²·i(x, t)    (10)

∂²v(x, t)/∂x² = −Z·∂i(x, t)/∂x = Z·Y·v(x, t) = γ²·v(x, t)    (11)
Figure 1 Single phase transmission line model
In these equations, γ is a complex quantity known as the propagation constant, given by

γ = √(Z·Y) = α + jβ    (12)

where α is the attenuation constant, which influences the amplitude of the wave, and β is the phase constant, which influences the phase shift of the wave.
Equations (10) and (11) can be solved by transform or classical methods in the form of two arbitrary
functions that satisfy the partial differential equations. Noting that the second
derivatives of the voltage v and current i functions, with respect to t and x, have to be directly
proportional to each other, the independent variables t and x appear in the form [1]

v(x, t) = A1·e^(γx) + A2·e^(−γx)    (13)

i(x, t) = (1/Zc)·[A1·e^(γx) − A2·e^(−γx)]    (14)

where Zc is the characteristic impedance of the line, given by

Zc = √[(R + jωL)/(G + jωC)]    (15)

and A1 and A2 are arbitrary functions, independent of x.
To find the constants A1 and A2, note that when x = 0, v = VR and i = IR; from
equations (13) and (14) these constants are found to be

A1 = (VR + Zc·IR)/2    (16)

A2 = (VR − Zc·IR)/2    (17)

Upon substitution in equations (13) and (14), the general expressions for the voltage and current along a
long transmission line become

V = ((VR + Zc·IR)/2)·e^(γx) + ((VR − Zc·IR)/2)·e^(−γx)    (18)

I = ((VR + Zc·IR)/(2·Zc))·e^(γx) − ((VR − Zc·IR)/(2·Zc))·e^(−γx)    (19)
The equations for the voltage and current can be rearranged as follows:

V = ((e^(γx) + e^(−γx))/2)·VR + Zc·((e^(γx) − e^(−γx))/2)·IR    (20)

I = (1/Zc)·((e^(γx) − e^(−γx))/2)·VR + ((e^(γx) + e^(−γx))/2)·IR    (21)

Recognizing the hyperbolic functions sinh and cosh, equations (20) and (21)
are written as follows:

V = cosh(γx)·VR + Zc·sinh(γx)·IR    (22)

I = (1/Zc)·sinh(γx)·VR + cosh(γx)·IR    (23)
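The long-line relations above lend themselves to direct numerical evaluation. The sketch below uses Python's cmath module with assumed per-kilometre line parameters (illustrative values only, not the model data used later in this paper) to compute the propagation constant, the characteristic impedance and the line's two-port constants, and checks the reciprocity identity AD − BC = 1:

```python
import cmath
import math

# Assumed per-kilometre parameters for a 50 Hz overhead line
# (illustrative values, not the Section IV model data).
R = 0.03      # series resistance, ohm/km
L = 1.0e-3    # series inductance, H/km
G = 0.0       # shunt conductance, S/km
C = 12.0e-9   # shunt capacitance, F/km
f = 50.0      # system frequency, Hz
l = 300.0     # line length, km

w = 2.0 * math.pi * f
z = complex(R, w * L)        # series impedance per km, Z = R + jwL
y = complex(G, w * C)        # shunt admittance per km, Y = G + jwC

gamma = cmath.sqrt(z * y)    # propagation constant (cf. eq. 12)
Zc = cmath.sqrt(z / y)       # characteristic impedance (cf. eq. 15)

A = cmath.cosh(gamma * l)    # ABCD constants of the line
B = Zc * cmath.sinh(gamma * l)
Cconst = cmath.sinh(gamma * l) / Zc
D = A

# Any reciprocal two-port satisfies AD - BC = 1, a quick sanity check
# that follows from cosh^2 - sinh^2 = 1.
det = A * D - B * Cconst
```

The same few lines can be re-run with measured line constants to obtain sending-end quantities from receiving-end ones.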
The interest is in the relation between the sending end and the receiving end of the line. Setting x = l, with V(l) = VS and I(l) = IS, the result is

VS = cosh(γl)·VR + Zc·sinh(γl)·IR    (24)

IS = (1/Zc)·sinh(γl)·VR + cosh(γl)·IR    (25)

Rewriting equations (24) and (25) in terms of the ABCD constants, we have

[VS]   [A  B] [VR]
[IS] = [C  D] [IR]    (26)

where A = D = cosh(γl), B = Zc·sinh(γl) and C = (1/Zc)·sinh(γl).

IV. TRANSMISSION LINE MODEL
In this paper fault location was performed on the power system model shown in Figure 2. The
line is a 300 km, 330 kV, 50 Hz overhead power transmission line. The simulation was performed using
MATLAB Simulink.
Figure 2: Simulink transmission line model
The fault is created at every 50 km of distance, with a simulation time of 0.25 s, sample time = 0,
resistance per unit length = 0.012 Ω, inductance per unit length = 0.9 H and capacitance per unit
length = 127 F.
4.1 SIMULATION RESULTS
Figure 3 shows the normal load current flowing prior to the application of the fault, while the fault
current, which is cleared in approximately one second, is shown in Figure 4.
Fig 3: Pre-fault current waveform at 300km
Fig 4: Fault current waveform at 50km
4.2 DISCRETE WAVELET COEFFICIENTS.
Figures 5 and 6 show the pre-fault and post-fault wavelet coefficients (approximate, horizontal
detail, diagonal detail and vertical detail) at 300 km, obtained using the db5 wavelet family.
Fig 5: Pre- fault wavelet coefficients
Fig. 6: Post- fault wavelet coefficients at 50km
4.2.1 TABLES OF THE COEFFICIENTS
The tables below present the minimum / maximum scales of the coefficients using db5.
Table 1: Pre-fault wavelet coefficients using db5
Coefficients Max. Scale Min. Scale
Approximate(A1) 693.54 0.00
Horizontal(H1) 205.00 214.44
Vertical (V1) 235.56 218.67
Diagonal (D1) 157.56 165.78
Table 2: Post-fault wavelet coefficients at 50 km using db5

Coefficients       Max. Scale   Min. Scale
Approximate (A1)   693.54       34.89
Horizontal (H1)    218.67       201.33
Vertical (V1)      201.33       218.67
Diagonal (D1)      157.56       148.89
Table 3: Differences between the maximum and minimum scales of the coefficients using db5

                          db5 max                            db5 min
Coefficients              A1      H1      V1      D1         A1      H1      V1      D1
Coefficients at 50 km     693.54  218.67  201.33  157.56     34.89   201.33  218.67  148.89
Pre-fault coefficients    693.54  205.00  235.56  157.56     0.00    214.44  218.67  165.78
Differences               0.00    13.67   -34.23  0.00       34.89   -13.11  0.00    -16.89

Estimated distance (km) = 13.67 + 34.89 ≈ 48.5
Table 4: Actual and estimated fault location
Actual location (km)   db5     bior5.5   coif5   sym5
50                     48.5    39.33     47.32   26.23
100                    97.44   173.78    04.37   43.56
4.3 DISCUSSION OF THE RESULTS.
The results are presented in Figures 5 and 6 and in Tables 1 to 4. Figure 3 is the simulation result of the
pre-fault current waveform, which indicates that the normal current amplitude reaches 420 A. When a
fault was created at 50 km from the sending end, Figure 4 shows that the fault current amplitude
reaches up to 14 kA.
The waveforms obtained from Figures 3 and 4 were imported into the wavelet toolbox of MATLAB
for analysis to generate the coefficients. Figures 5 and 6 present the discrete wavelet
transform coefficients in the time-scale region. The scales of the coefficients are characterised by their minimum
and maximum values. These scales for both the pre-fault and post-fault coefficients were recorded from
the workspace environment of MATLAB and are presented in Tables 1 and 2.
The estimated distance was obtained by adding the non-negative values of the scale differences after subtracting the
pre-fault coefficients from the post-fault coefficients; this is presented in Table 4.
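The estimation rule described above can be sketched in a few lines of Python (an illustration added here, not the authors' MATLAB code); the values are the db5 scales reported in Tables 1 and 2:

```python
# Sketch of the distance-estimation rule described above: subtract the
# pre-fault coefficient scales from the post-fault ones and sum the
# non-negative differences. Values are the db5 scales of Tables 1 and 2.
post_fault = {"A1_max": 693.54, "H1_max": 218.67, "V1_max": 201.33, "D1_max": 157.56,
              "A1_min": 34.89,  "H1_min": 201.33, "V1_min": 218.67, "D1_min": 148.89}
pre_fault  = {"A1_max": 693.54, "H1_max": 205.00, "V1_max": 235.56, "D1_max": 157.56,
              "A1_min": 0.00,   "H1_min": 214.44, "V1_min": 218.67, "D1_min": 165.78}

def estimate_distance(post, pre):
    """Sum of the non-negative (post - pre) scale differences, in km."""
    return sum(max(post[k] - pre[k], 0.0) for k in post)

# For the 50 km fault this gives 13.67 + 34.89 = 48.56 km
# (Table 4 reports it as 48.5 km, a 3% error).
distance_km = estimate_distance(post_fault, pre_fault)
```

Only the H1 maximum and A1 minimum scales change positively for this fault case, so they alone contribute to the estimate.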
V. CONCLUSIONS
The application of the wavelet transform to estimate the fault location on a transmission line has been
investigated. The most suitable wavelet family for use in estimating the fault location on a transmission
line has been identified. Four different types of wavelet were chosen as mother
wavelets for the study. It was found that the best result was achieved using the Daubechies ‘db5’ wavelet,
with an error of 3%. Simulation of a single line to ground (S-L-G) fault for a 330 kV, 300 km transmission
line was performed using MATLAB Simulink. The waveforms obtained from
Simulink were converted to MATLAB files for feature extraction. The DWT was used to
analyse the signals to obtain the coefficients for estimating the fault location. Finally, it was shown that
the proposed method is accurate enough to be used in the detection of transmission line fault locations.
REFERENCES
[1] Abdelsalam, M. (2008) “Transmission Line Fault Location Based on Travelling Waves”, dissertation submitted to Helsinki University, Finland, pp 108-114.
[2] Aguilera, A. (2006) “Fault Detection, Classification and Faulted Phase Selection Approach”, IEE Proceedings on Generation, Transmission and Distribution, vol. 153, no. 4, USA, pp 65-70.
[3] Benemar, S. (2003) “Fault Locator for Distribution Systems Using Decision Rule and DWT”, Engineering Systems Conference, Toronto, pp 63-68.
[4] Bickford, J. (1986) “Transient over Voltage”, 3rd Edition, Finland, pp 245-250.
[5] Chiradeja, M. (1997) “New Technique for Fault Classification Using DWT”, Engineering Systems Conference, UK, pp 63-68.
[6] Elhaffa, A. (2004) “Travelling Waves Based Earth Fault Location on Transmission Networks”, Engineering Systems Conference, Turkey, pp 53-56.
[7] Ekici, S. (2006) “Wavelet Transform Algorithm for Determining Fault on Transmission Line”, IEE Proceedings on Transmission Line Protection, vol. 4, no. 5, Las Vegas, USA, pp 2-5.
[8] Florkowski, M. (1999) “Wavelet Based Partial Discharge Image De-noising”, 11th International Symposium on High Voltage Engineering, UK, pp 22-24.
[9] Gupta, J. (2002) “Power System Analysis”, 2nd Edition, New Delhi, pp 302-315.
[10] Okan, G. (1995) “Wavelet Transform for Distinguishing Fault Current”, John Wiley Inc. Publication, New York, pp 39-42.
[11] Osman, A. (1998) “Transmission Line Distance Protection Based on Wavelet Transform”, IEEE Transactions on Power Delivery, vol. 19, no. 2, Canada, pp 515-523.
[12] Saadat, H. (1999) “Power System Analysis”, Tata McGraw-Hill, New Delhi, pp 198-206.
[13] Wavelet Toolbox for MATLAB, MathWorks (2005).
[14] Youssef, O. (2003) “A Wavelet Based Technique for Discriminating Faults”, IEEE Transactions on Power Delivery, vol. 18, no. 1, USA, pp 170-176.
[15] Yeldrim, C. (2006) “Fault Type and Fault Location on Three Phase Systems”, IEEE Proceedings on Transmission Line Protection, vol. 4, no. 5, Las Vegas, USA, pp 215-218.
[16] D. C. Robertson, O. I. Camps, J. S. Meyer and W. B. Gish, “Wavelets and Electromagnetic Power System Transients”, IEEE Transactions on Power Delivery, vol. 11, no. 2, pp 1050-1058, April 1996.
Authors’ Biography
Sunusi Sani Adamu received the B.Eng degree from Bayero University Kano, Nigeria, in
1985; the MSc degree in electrical power and machines from Ahmadu Bello University,
Zaria, Nigeria, in 1996; and the PhD in Electrical Engineering from Bayero University, Kano,
Nigeria, in 2008. He is currently a senior lecturer in the Department of Electrical
Engineering, Bayero University, Kano. His main research area includes power systems
simulation and control, and development of microcontroller based industrial retrofits. Dr
Sunusi is a member of the Nigerian Society of Engineers and a registered professional
engineer in Nigeria.
Sada Iliya received the B.Eng degree in Electrical Engineering from Bayero University
Kano, Nigeria, in 2001. He is about to complete the M.Eng degree in Electrical Engineering
at the same university. He is presently a lecturer in the Department of Electrical
Engineering, Hassan Usman Katsina Polytechnic, Katsina, Nigeria. His research interest is in power
system operation and control.
AN INVESTIGATION OF THE PRODUCTION LINE FOR ENHANCED
PRODUCTION USING HEURISTIC METHOD
M. A. Hannan, H. A. Munsur, M. Muhsin
Deptt. of Mechanical Engg., Dhaka University of Engg. & Tech., Gazipur, Bangladesh
ABSTRACT
Line balancing is the phase of assembly line study that divides the work to be done as equally as possible among the
workers, so that the total number of employees required on the assembly line can be minimized. As small
improvements in the performance of the system can lead to significant monetary consequences, it is of utmost
importance to develop practical solution procedures that may yield a significant enhancement in production
throughput. Bangladesh Machine Tools Factory (BMTF), which had been incurring losses for a long time at its
current production rate, was undertaken as a research project. In the course of the analysis, a line
balancing (LB) technique was employed to carry out a detailed analysis of the line. This paper describes how an
efficient heuristic approach was applied to solve the deterministic, single-model ALB problem. The aim of
the work was to minimize the number of workstations with minimum cycle time so as to maximize the
efficiency of the production line. The performance level was found to be so low that there was no way to improve
productivity without reducing the idle time of the line by curtailing the avoidable delays as far as possible.
All the required data were measured, and parameters such as elapsed times, efficiencies, number of workers and the
time of each of the workstations were calculated for the existing line. The same production line was
redesigned by rehabilitating and reshuffling the workstations as well as the workers and using the newly
estimated time study data, keeping the minimum possible idle time at each of the stations. A heuristic approach,
the Longest Operation Time (LOT) method, was used in designing the new production line. After set-up of the
new production line, the cost of production and the effectiveness of the new line were computed and compared with
those of the existing one. The potential cost savings and productivity gains of the newly designed production line
were estimated, and production was found to have increased by a significant amount while reducing the overall
production cost per unit.
KEYWORDS: Assembly Line Balancing (ALB), Workstation, Line Efficiency, Task Time, Cycle Time, Line
Bottleneck.
I. INTRODUCTION
An assembly line is an arrangement of workers, machines, and equipment in which the product being assembled passes consecutively from operation to operation until completed; it is also called a production line [1]. An assembly line [1] is a manufacturing process (sometimes called progressive assembly) in which parts (usually interchangeable parts) are added to a product in a sequential manner using optimally planned logistics to create a finished product much faster than with handcrafting-type methods.

The division of labor was initially discussed by Adam Smith, regarding the manufacture of pins, in his book “The Wealth of Nations” (published in 1776). The assembly line developed by Ford Motor Company between 1908 and 1915 made assembly lines famous in the following decade through the social ramifications of mass production, such as the affordability of the Ford Model T and the introduction of high wages for Ford workers. Henry Ford was the first to master the assembly line and was able to improve other aspects of industry by doing so (such as reducing the labor hours required to produce a single vehicle, and increasing production numbers and parts). However, the various preconditions for the development at Ford stretched far back into the 19th century, from the gradual realization of the dream of interchangeability, to the concept of reinventing workflow and job descriptions using analytical methods (the most famous example being “Scientific Management”).

Ford was the first company to build large factories around the assembly line concept. Mass production via assembly lines is widely considered to be the catalyst which initiated the modern consumer culture by making possible low unit costs for manufactured goods. It is often said that Ford's production system was ingenious because it turned Ford's own workers into new customers. Put another way,
Ford innovated its way to a lower price point and, by doing so, turned a huge potential market into a reality. Not only did this mean that Ford enjoyed much larger demand, but the resulting larger demand also allowed further economies of scale to be exploited, further depressing the unit price, which tapped yet another portion of the demand curve. This bootstrapping quality of growth made Ford famous and set an example for other industries.

For a given set of manufacturing tasks and a specified cycle time, the classical line balancing problem consists of assigning each task to a workstation such that: (i) each workstation can complete its assigned set of tasks within the desired cycle time, (ii) the precedence constraints among the tasks are satisfied, and (iii) the number of workstations is minimized (Krajewski and Ritzman, 2002 [2]; Meredith and Schafer, 2003 [3]; Scholl, 1999 [6]). The precedence relations among activities in a line balancing problem present a significant challenge for researchers in formulating and implementing an optimization model for the LB problem. Integer programming formulations are possible, but they quickly become unwieldy and increasingly difficult to solve as the problem size increases. As a result, many researchers recommend heuristic approaches to solving the line balancing problem (Meredith and Schafer, 2003 [3]; Sabuncuoglu, Erel et al., 2000 [5]; Suresh, Vivod and Sahu, 1996 [7]).

An assembly line (as shown in Figure 1) is a flow-oriented production system where the productive units performing the operations, referred to as stations, are aligned in a serial manner. The work pieces visit the stations successively as they are moved along the line, usually by some kind of transportation system, e.g. a conveyor belt. The current market is intensively competitive and consumer-centric.
For example, in the automobile industry, most of the models have a number of features, and the customer can choose a model based on their desires and financial capability. Different features mean that different, additional parts must be added to the basic model. Due to the high cost of building and maintaining an assembly line, manufacturers produce one model with different features, or several models, on a single assembly line. Due to the complex nature of the ALB problem, many heuristics have been used to solve the real-life problems relating to the assembly line, with a view to increasing the efficiency and productivity of the production line at minimum cost.

Nowadays, in mass production, a huge number of units of the same product are produced. This is only possible with a high degree of division of labor. Since Adam Smith (1776) [8] it has been shown that division of labor trains the required skills of the workers and increases productivity to a maximum. The maximum degree of division of labor is obtained by organizing production as an assembly line system. Even in the early days of the industrial revolution, mass production was already organized in assembly line systems. According to Salveson [9], the first assembly line was introduced by Eli Whitney during the French Revolution [10] for the manufacturing of muskets. The most popular example is the introduction of the assembly line on 1 April 1913, in the “John R. Street” plant of Henry Ford’s Highland Park production facility [10]. Assembly lines are still up to date, because the principle of increasing productivity by division of labor is timeless. The best-known example is final assembly in the automotive industry, but nearly all goods of daily life are made by mass production, which at its later stages is organized in assembly line production systems.
For example, the final assembly of consumer durables, like coffee machines, toasters, washing machines and refrigerators, or of products of the electrical industry like radios, TVs or even personal computers, is organized in assembly line systems. The characteristic problem in assembly line systems is how to split up the total work to be done by the total system among the single stations of the line. This problem is called “assembly line balancing” because we have to find a “balance” of the work loads of the stations. First of all we have to determine the set of single tasks which have to be performed in the whole production system and the technological precedence relations among them. The work load of each station (also: set of tasks, station load, operation) is restricted by the cycle time, which depends on the fixed speed of the conveyor and the length of the stations. The cycle time is defined as the time between the entering of two consecutive product units into a station [11]. In the literature the objective is usually to minimize the number of stations in a line for a given cycle time. This is called time-oriented assembly line balancing [12]. As industry has been facing sharp competition in recent years, production cost has become more relevant. Even in such successful production systems as the assembly line system, we have to look for possibilities to cut down production cost. As final assembly is usually a labor intensive kind of production we may
analyze the existing wage compensation system. Almost all collective agreements between unions and employers work with a wage differential in most developed industrial nations, e.g. in German industry, which has been analyzed in detail. The higher the difficulty of performing a task, the higher the point value of the task and the wage rate. As the tasks in final assembly are similar, but not of uniform difficulty, different wage rates exist in assembly line production systems. Under this economic perspective, the objective in organizing work in assembly line production systems is not to minimize the number of stations, but to minimize the total production cost per unit. Therefore we have to allocate the tasks to stations in a way that considers both the cost rates and the number of stations. This is done in cost-oriented assembly line balancing [13]. A formal description of this objective and the restrictions of this problem are given in [14, 15]. As this paper is directly related to a previous work [16], the formal descriptions needed are reduced to a minimum. Compared to existing balances, which were obtained by the use of time-oriented methods neglecting wage rate differences, it is possible to realize savings in production cost of up to a two-digit percentage by a cost-oriented reallocation of tasks using cost-oriented methods.
Figure 1: A typical assembly line with few work stations
II. APPROACHES TO DETERMINATION OF PERFORMANCE OF ASSEMBLY
LINE BALANCING PROBLEM (ALBP)
According to M. Amen (2000) [17], assembly line balancing problems (ALBPs) are classified into two categories of optimization problem. In a Type-I problem the cycle time, the number of tasks, the task times and the task precedence relations are given, and the objective is to find the minimum number of workstations. A line with fewer stations results in lower labor cost and reduced space requirements. Type-I problems occur when a new assembly line has to be developed. A Type-II problem occurs when the number of workstations or workers is fixed; here the objective is to minimize the cycle time. This maximizes the production rate, because the cycle time is expressed in time units per part (time/part), and finding the minimum cycle time yields more production per shift. This kind of problem occurs when a factory already has a production line and the management wants to find the optimum production rate for the fixed number of workstations (workers). According to Nearchou (2007), the goal of line balancing is to develop an acceptable, though not necessarily optimum, solution that is near to an optimum for higher production. With either type, it is always assumed that the station time, which is the sum of the times of all operations assigned to that station, must not exceed the cycle time. However, it is sometimes unnecessary or even impossible (e.g. when operation times are uncertain) to set a cycle time large enough to accommodate all the operations assigned to every station for each model. Whenever the operator cannot complete the pre-assigned operations on a work piece, work overload occurs. Since idle time at any station is an un-utilized resource, the objective of line balancing is to minimize this idle time.
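The two problem types come with simple lower bounds that any feasible balance must respect. The sketch below (the task times are hypothetical, in minutes) computes the Type-I lower bound on the number of workstations for a given cycle time, and the Type-II lower bound on the cycle time for a given number of stations:

```python
import math

def min_stations_lower_bound(task_times, cycle_time):
    # Type-I: total work content divided by the cycle time, rounded up.
    return math.ceil(sum(task_times) / cycle_time)

def min_cycle_time_lower_bound(task_times, n_stations):
    # Type-II: no station can be faster than its longest single task,
    # and the total work must fit into n_stations slots of equal length.
    return max(max(task_times), math.ceil(sum(task_times) / n_stations))

tasks = [10, 12, 8, 15, 9, 6]                 # hypothetical task times (min)
print(min_stations_lower_bound(tasks, 20))    # stations needed for c = 20 -> 3
print(min_cycle_time_lower_bound(tasks, 3))   # cycle time for 3 stations -> 20
```

Precedence constraints may force a solution above these bounds; they serve only as starting points for the iterative balancing carried out later in the paper.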
Line balancing [12] is the phase of assembly line study that divides the work to be done nearly equally among the workers, so that the total number of employees required on the assembly line can be minimized. The Type-II approach has been followed here: line balancing involves selecting the appropriate combination of work tasks to be performed at each workstation so that the work is performed in a feasible sequence and approximately equal amounts of time are allocated to each of the workstations. The aim of the present study is to minimize the required labor input and facility investment for a given output. The objective of the present work was therefore either (i) minimizing the number of workstations (workers) required to achieve a given cycle time (i.e., a given production capacity), or (ii) minimizing the cycle time to maximize the output rate for a given number of workstations. Assembly lines are designed for a sequential organization of workers, tools or machines, and parts. The motion of workers is minimized to the extent possible. All parts or assemblies are handled either
by conveyors or motorized vehicles such as forklifts, or by gravity, with no manual trucking. Heavy lifting is done by machines such as overhead cranes or forklifts. Each worker typically performs one simple operation. According to Henry Ford [19], the principles of assembly are: (a) Placing the tools and the men in the sequence of the operation so that each component part shall travel the least possible distance while in the process of finishing. (b) Using work slides or some other form of carrier so that when a workman completes his operation, he drops the part always in the same place, which place must always be the most convenient place to his hand, and if possible have gravity carry the part to the next workman for his operation. (c) Using sliding assembling lines by which the parts to be assembled are delivered at convenient distances.
III. PROBLEM DESCRIPTION
First, let us make some assumptions that comply with most practical mixed-model assembly lines:
1. The line is connected by a conveyor belt which moves at a constant speed. Consecutive work pieces are equi-spaced on the line by launching each after a cycle time.
2. Every work piece is available at each station for a fixed time interval. During this interval, the work load (of the respective model) has to be performed by an operator while the work piece rides downstream on the conveyor belt. If the work load is not finished within the cycle time, the operator can drift to the next consecutive station for a certain distance. If the drifting distance is reached without finishing the operations, work overload occurs. In this case, a utility worker is additionally employed so that the remaining work can be completed as soon as possible.
3. The operators of different stations do not interfere with each other while simultaneously servicing a work piece (i.e. during drifting operations).
4. The operator returns to the upstream boundary of the station or to the next work piece, whichever is reached first, in zero time after finishing the work load on the current unit, because the conveyor speed is much smaller than the walking speed of the operators.
5. Precedence graphs can be accumulated into a single combined precedence graph; similar operations of different models may have different operation times, and a zero operation time indicates that an operation is not required for a model.
6. The cycle time, number of stations, drifting distance, conveyor speed and the sequence of models to be assembled within the decision horizon must be known.
IV. SURVEY OF THEORIES
4.1 Heuristics applied for solving the cost-oriented assembly line balancing problem [15, 23]
Many heuristics exist in the literature for the line balancing problem. A heuristic provides a satisfactory solution but does not guarantee the optimal one (the best solution). As line balancing problems can be solved in many ways, the Longest Operation Time (LOT) [23] approach has been used here. It is the line-balancing heuristic that gives top assignment priority to the task that has the longest operation time. The steps of LOT are:
LOT 1: Assign first the task that takes the most time to the first station.
LOT 2: After assigning a task, determine how much time the station has left to contribute.
LOT 3: If the station can contribute more time, assign it to a task requiring as much time as possible.
The operations in any line follow the same precedence relation. For example, the operation of super-finishing cannot start unless the earlier operations of turning, etc., are over. While designing the line balancing problem, one has to satisfy the precedence constraint. This is also referred to as the technological constraint, which is due to the sequencing requirement in the entire job.
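The three LOT steps can be sketched as a small routine. The task times and precedence relations in the example are those of station no. 3 as listed later in Table 4 (tasks A-H); the 80-minute cycle time is a hypothetical value chosen so that the heuristic has to open several stations:

```python
def lot_balance(task_times, predecessors, cycle_time):
    """Longest-Operation-Time heuristic: repeatedly assign the longest ready
    task that fits in the current station; open a new station when none fits.
    Assumes every task time is <= cycle_time and precedence forms a DAG."""
    done, stations = set(), []
    while len(done) < len(task_times):
        station, remaining = [], cycle_time
        while True:
            ready = [t for t in task_times
                     if t not in done
                     and all(p in done for p in predecessors.get(t, ()))
                     and task_times[t] <= remaining]
            if not ready:
                break
            pick = max(ready, key=lambda t: task_times[t])  # LOT priority rule
            station.append(pick)
            done.add(pick)
            remaining -= task_times[pick]
        stations.append(station)
    return stations

times = {'A': 16, 'B': 19, 'C': 35, 'D': 26, 'E': 19, 'F': 32, 'G': 9, 'H': 6}
prec = {'C': ('A', 'B'), 'D': ('C',), 'E': ('D',),
        'F': ('E',), 'G': ('F',), 'H': ('G',)}
print(lot_balance(times, prec, 80))   # -> [['B', 'A', 'C'], ['D', 'E', 'F'], ['G', 'H']]
```

With the paper's actual cycle time of 240 minutes the same task set fits into a single station (total 162 minutes), which is why the study focuses on reallocating the freed capacity.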
V. TERMINOLOGY DEFINED IN ASSEMBLY LINE
5.1 Some terminology of assembly line analysis [24, 25]
a. Work Element (i): The job is divided into its component tasks so that the work may be spread along the line. A work element is a part of the total job content in the line. Let N be the maximum
number of work elements, which is obtained by dividing the total work content into minimum rational work elements. A minimum rational work element is the smallest practical divisible task into which a work can be divided. The time of work element i, say Ti, is assumed to be constant, and all Ti are additive in nature. This means that if, for example, work elements 4 and 5 are done at any one station, the station time would be (T4 + T5), where N is the total number of work elements.
b. Work Stations (w): A location on the assembly line where a combination of a few work elements is performed.
c. Total Work Content (Twc): The algebraic sum of the times of all the work elements on the line. Thus:
Twc = Σ (i=1 to N) Ti
d. Station Time (Tsi): The sum of the times of all the work elements assigned to work station s.
e. Cycle Time (c): The time between two successive assemblies coming out of the line. The cycle time can be greater than or equal to the maximum station time. If c = max Tsi, then there will be idle time at all stations whose station time is less than the cycle time.
f. Delay or Idle Time at Station (Tds): The difference between the cycle time of the line and the station time:
Tds = c - Tsi
g. Precedence Diagram: A diagram in which the work elements are shown as per their sequence relations. A job cannot be performed unless its predecessors are completed. It is a graphical representation containing arrows from each predecessor to its successor work element; every node in the diagram represents a work element.
h. Balance Delay or Balancing Loss (D): A measure of line inefficiency, which an efficient allocation of work tries to minimize. Due to imperfect allocation of work among the various stations, there is idle time at the stations. The balance delay is:
D = (nc - Twc) / nc = (nc - Σ (i=1 to N) Ti) / nc
where c = cycle time, Twc = total work content and n = total number of stations.
i. Line Efficiency (LE): The ratio of the total station time to the cycle time multiplied by the number of work stations (n):
LE = [Σ (i=1 to n) Tsi / (nc)] × 100%
where Tsi = station time at station i, n = total number of stations and c = cycle time.
j. Target Time: The target cycle time (which must be greater than or equal to the longest task time) for a given target number of workstations. If Σti and n are known, the target cycle time ct can be found from:
ct = Σti / n
k. Total Idle Time (IT): The total idle time for the line is given by:
IT = nc - Σ (i=1 to k) ti
A line is perfectly balanced if IT = 0 at the minimum cycle time. Sometimes the degree to which a line approaches this perfect balance is expressed as a percentage or a decimal called the balance delay. As a percentage, the balance delay is:
D = 100 IT / (nc)
where IT = total idle time for the line, n = number of workstations (assuming one worker per workstation), c = cycle time for the line, ti = time for the i-th work task and k = total number of work tasks to be performed on the production line.
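The quantities defined above can be computed directly from a list of station times; the values in the example below are hypothetical:

```python
def line_metrics(station_times, cycle_time):
    """Per-station idle time Tds = c - Tsi, total idle time IT = n*c - sum(Tsi),
    line efficiency LE = sum(Tsi)/(n*c)*100 and balance delay D = 100*IT/(n*c).
    By construction LE and D always add up to 100%."""
    n = len(station_times)
    total = sum(station_times)
    idle = [cycle_time - t for t in station_times]
    it_total = n * cycle_time - total
    le = 100.0 * total / (n * cycle_time)
    d = 100.0 * it_total / (n * cycle_time)
    return idle, it_total, le, d

idle, it_total, le, d = line_metrics([50, 60, 50], cycle_time=60)
print(idle, it_total)   # [10, 0, 10] 20  (LE ~88.9%, D ~11.1%)
```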
The total amount of work to be performed on a line is broken into tasks, and the tasks are assigned to work stations so that the work is performed in a feasible sequence within an acceptable cycle time. The cycle time for a line (the time between completions of successive items on the line) is determined by the maximum amount of time required at any workstation. Work cannot flow through the line any faster than it can pass through the slowest stage (the bottleneck of the line) [28]. If one workstation has a great deal more work than the others, it is desirable to assign some of this work to stations with less work so that no bottlenecks exist in the line.
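As a minimal illustration of the bottleneck rule (with hypothetical station times), the achievable cycle time is simply the maximum station time, and the output per shift follows from it:

```python
station_times = [50, 60, 45, 55]       # hypothetical station times (min)
cycle_time = max(station_times)        # the bottleneck station sets the pace
shift_minutes = 8 * 60                 # one 8-hour shift
output_per_shift = shift_minutes // cycle_time
print(cycle_time, output_per_shift)    # 60 8
```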
VI. DATA PRESENTATION FOR WORK STATIONS
The following table shows the time study data at each of the work stations of the present production line [7]:
Table 1: Elapsed time at each work station

| Station No. | Task | No. of workers | Time-1 (min) | Time-2 (min) | Time-3 (min) |
| 01 | (a) Box opening | 2 | 10 | 12 | 11 |
| 01 | (b) Check | 2 | 10 | 11 | 9 |
| 01 | Parts distribution | 2 | 30 | 29 | 32 |
| 02 | Frame cleaning | 2 | 30 | 32 | 34 |
| 02 | Axle with wheel | 2 | 50 | 54 | 48 |
| 02 | Leaf spring setting | 2 | 30 | 32 | 30 |
| 02 | Engine mounting | 2 | 20 | 18 | 21 |
| 02 | Axle with frame | 2 | 40 | 42 | 45 |
| 02 | Harnessing | 2 | 30 | 32 | 28 |
| 02 | Disc wheel setting | 2 | 20 | 22 | 21 |
| 02 | Check | 1 | 30 | 30 | 28 |
| 03 | Bracket fittings | 4 | 60 | 55 | 50 |
| 03 | Flexible piping | 4 | 30 | 26 | 27 |
| 03 | Copper piping | 4 | 30 | 28 | 26 |
| 03 | Nut tightening | 4 | 30 | 25 | 28 |
| 03 | Booster + air tank | 1 | 170 | 180 | 190 |
| 03 | Check | 1 | 30 | 26 | 25 |
| 04 | Engine assembly | 2 | 30 | 28 | 32 |
| 04 | Alternator | 2 | 15 | 14 | 16 |
| 04 | Fan | 2 | 15 | 16 | 17 |
| 04 | Self starter | 2 | 14 | 15 | 16 |
| 04 | Transmission sub-assembly | 2 | 30 | 32 | 35 |
| 04 | Member assembly | 2 | 60 | 60 | 65 |
| 05 | Radiator, silencer assembly | 3 | 60 | 65 | 62 |
| 05 | Check | 1 | 30 | 25 | 26 |
| 05 | Horn and hose pipe | 2 | 20 | 25 | 25 |
| 05 | Air cleaner | 2 | 20 | 22 | 26 |
| 05 | Fuel tank | 2 | 30 | 32 | 35 |
| 06 | Battery carrier | 2 | 30 | 31 | 33 |
| 06 | Transfer line | 2 | 30 | 28 | 35 |
| 06 | Propeller shaft | 2 | 50 | 60 | 55 |
| 06 | Fluid supply | 2 | 20 | 25 | 22 |
| 06 | Check | 1 | 30 | 35 | 30 |
| 07 | Cabin sub-assembly | 3 | 90 | 100 | 95 |
| 07 | Side and signal lamps | 2 | 30 | 35 | 40 |
| 07 | Cabin on chassis | 3 | 30 | 32 | 29 |
| 07 | Starting system | 2 | 30 | 32 | 34 |
| 08 | Check | 2 | 25 | 26 | 30 |
| 08 | Wood pattern making | 6 | 60 | 60 | 65 |
| 08 | Seat making | 5 | 45 | 55 | 48 |
| 08 | Wood painting | 7 | 47 | 54 | 51 |
| 08 | Load body sub-assembly | 8 | 60 | 58 | 62 |
| 09 | Load body on vehicle | 12 | 55 | 58 | 60 |
| 09 | Electric wiring | 4 | 25 | 30 | 30 |
| 09 | Pudding | 5 | 52 | 55 | 55 |
| 09 | Rubbing the cabin | 6 | 64 | 58 | 60 |
| 10 | Primary painting | 3 | 40 | 42 | 44 |
| 10 | Re-pudding | 4 | 25 | 28 | 24 |
| 10 | Final painting | 3 | 50 | 48 | 55 |
| 10 | Touch-up | 3 | 32 | 30 | 34 |
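Table 1 records three stopwatch observations per task. One common way to obtain a single representative task time for the balancing calculations is to average the readings; this averaging step is an assumption made here for illustration, not something the paper states explicitly:

```python
def mean_time(observations):
    # Average of the repeated stopwatch readings for one task.
    return sum(observations) / len(observations)

print(mean_time([10, 12, 11]))   # Box opening at station 01 -> 11.0
```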
VII. COMPARISON BETWEEN EXISTING AND MODIFIED MODELS OF THE
PRODUCTION LINE
Fig. 2: Existing model of the assembly line (with ten stations). Fig. 3: Proposed model of the assembly line (with eight stations).

| Station | Existing working time (min) | Workers working | Modified working time (min) | Workers required |
| 1 | 60 | 2 | 50 | 2 |
| 2 | 45 | 2 | 60 | 2 |
| 3 | 210 | 6 | 192 | 4 |
| 4 | 230 | 8 | 156 | 6 |
| 5 | - | - | 229 | 4 |
| 6 | - | - | 174 | 4 |
| 7 | - | - | 199 | 5 |
| 8 | - | - | 205 | 27 |
| 9 | - | - | 234 | 15 |
| 10 | - | - | 192 | 19 |
VIII. ASSEMBLY LINE AND ANALYSIS
The present situation of the stations is shown in Table 2 below.
Table 2: Observed time and workers at all workstations in the existing production line

| Station No. | No. of workers | Elapsed time (min) |
| Station 1 | 2 | 50 |
| Station 2 | 2 | 60 |
| Station 3 | 6 | 210 |
| Station 4 | 8 | 230 |
| Station 5 | 6 | 160 |
| Station 6 | 5 | 198 |
| Station 7 | 6 | 185 |
| Station 8 | 33 | 235 |
| Station 9 | 19 | 202 |
| Station 10 | 22 | 210 |
| Total | WA = 109 | 1740 |
IX. PERFORMANCE ANALYSIS OF THE ASSEMBLY LINE
Iterations for line balance efficiency at the stations. First iteration, as a sample calculation: using the existing production line time data from Table 2, the total elapsed time at workstation no. 3 was 210 minutes.
9.1 Sample Calculations: From the existing model we have [30]:

Cycle Time, CT = (Available time per period) / (Output units required per period) = (8 hours × 60 min) / 2 = 480 / 2 = 240 min

Theoretical minimum number of workers, WT = ΣT / CT

Since the total time is ΣT = W1T1 + W2T2 + W3T3 + ... + WyTy = 22,593 minutes,

Theoretical minimum number of workers = 22,593 / 240 ≈ 94.1, i.e. 95 workers

Balance Efficiency = ΣT / (WA × CT) × 100% = 22,593 / (109 × 240) × 100% ≈ 86%
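The sample calculation can be reproduced in a few lines. The balance efficiency definition used below, effB = ΣT/(WA × CT), is inferred from the tabulated values (it reproduces the 86% reported for the existing line in Table 5) rather than stated in closed form in the paper:

```python
import math

available_min = 8 * 60            # one working shift, minutes
units_required = 2                # vehicles per day
cycle_time = available_min / units_required           # 240 min

total_work = 22_593               # ΣT = Σ Wi·Ti from the time study, minutes
w_theoretical = math.ceil(total_work / cycle_time)    # theoretical minimum workers
w_actual = 109                                        # from Table 2
balance_eff = 100 * total_work / (w_actual * cycle_time)

print(cycle_time, w_theoretical, round(balance_eff))  # 240.0 95 86
```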
9.2 Iterations for final balance efficiency:
Similarly, the existing assembly line was rearranged several times; many iterations were carried out at all workstations with the aim of eliminating idle time and reducing the number of work stations to eight, while keeping the precedence of the work logical. The resulting station times are furnished in Table 3. After eliminating the idle time, the total elapsed time for the line was reduced to 1685 minutes.
Table 3: Total elapsed time at all workstations in the new production line (Iteration #1)

| Station | Function | Time consumed (min) |
| Station 1 | Materials handling & distribution | 223 |
| Station 2 | Spot welding section | 223 |
| Station 3 | Metal section | 203 |
| Station 4 | Painting section | 205 |
| Station 5 | Chassis section | 205 |
| Station 6 | Trimming section | 206 |
| Station 7 | Final section | 208 |
| Station 8 | Inspection section | 212 |
| Total | | 1685 |
9.3 Sample analysis for reducing the idle time and the number of work stations to a minimum:
Let us consider work station no. 3. This station has five workers. Applying the line balancing technique, the precedence diagram is shown in Fig. 3.
Figure 3: Precedence diagram and elapsed times of task operations at Station #3 in the existing assembly line.
Table 4: Time elapsed in the modified line at workstation no. 3

| Task/Activity | Workers | Predecessor activity | Actual time needed to finish the work (min) |
| A | 2 | - | 16 |
| B | 1 | - | 19 |
| C | 1 | B, A | 35 |
| D | 1 | C | 26 |
| E | 1 | D | 19 |
| F | 1 | E | 32 |
| G | 2 | F | 9 |
| H | 3 | G | 6 |
| Total | | | 162 |
Figure 4: Precedence diagram and elapsed times of task operations at Station #3 in the proposed assembly line.
Therefore, the time that can be saved at this station = (240 - 162) = 78 minutes. In this way all the idle time was computed. This saved time could be used at another station. If all five workers work at a time, they are not fully busy with all the work; partly, they can be utilized at other stations for maximum utilization of workers and machines and to minimize the cost of production.
Table 5: Balance efficiency after completion of all iterations at all stations

| Iteration no. | Cycle time (CT), min | Actual no. of workers (WA) | Theoretical minimum workers (WT) | Balance efficiency (effB), % |
| 01 | 240 | 107 | 96 | 86 |
| 02 | 240 | 86 | 72 | 84 |
| 03 | 240 | 109 | 95 | 86 |
| 04 | 240 | 99 | 96 | 97 |
| 05 | 240 | 101 | 100 | 99 |
| 06 | 240 | 104 | 100 | 96 |
| 07 | 240 | 104 | 101 | 98 |
| 08 | 240 | 97 | 97 | 100 |
| 09 | 240 | 103 | 100 | 97 |
| 10 | 240 | 103 | 99 | 96 |
In a similar way the theoretical minimum number of workers and the balance efficiency were found for each iteration; these are furnished in Table 5.
X. COST ANALYSIS AND COMPARISONS [29]
Cost calculations and cost savings at the present rate of production (two vehicles per day):

Table 6: Worker reduction drive at different stations

| Station number | No. of workers that can be reduced |
| 01 | 00 |
| 02 | 00 |
| 03 | 02 |
| 04 | 02 |
| 05 | 02 |
| 06 | 01 |
| 07 | 01 |
| 08 | 06 |
| 09 | 04 |
| 10 | 03 |
| Total | 21 |
Total number of reduced workers = 21. The authority pays at least Tk. 200/- to every worker for each working day. Therefore, according to the redesigned production line, cost can be saved through the worker reduction policy:
Daily savings = Tk. 200/- × 21 = Tk. 4,200/-
Considering one day as a holiday, the number of working days in a month = 26.
Monthly savings = Tk. 4,200/- × 26 = Tk. 1,09,200/-
The labor cost of the existing line has been found as follows, for one vehicle: (a) assembly cost Tk. 6,000/-; (b) painting Tk. 4,700/-; (c) load body fabrication Tk. 7,500/-; (d) load body painting Tk. 6,600/-. Therefore, total labor cost per vehicle = Tk. 24,800/-.
Daily labor cost (for production of two vehicles) = Tk. 24,800/- × 2 = Tk. 49,600/-
Monthly labor cost = Tk. 49,600/- × 26 = Tk. 12,89,600/-
In the modified production line, Tk. 4,200/- can easily be saved from every pair of automobiles assembled each day. Therefore, monthly savings (for the modified model) = Tk. 4,200/- × 26 = Tk. 1,09,200/-
Labor cost calculation if three vehicles were produced per day: to increase productivity in the 8-hour working period (in a working day) from two to three automobiles, (0+2+3+0+2+1+2+1+1) = 12 more workers are required on the assembly line than in the existing model. For this enhanced number of workers the labor cost increases as follows:
Daily increased cost = Tk. 200/- × 12 = Tk. 2,400/-
Monthly increased cost = Tk. 2,400/- × 26 = Tk. 62,400/-
The total number of vehicles assembled in a month will be 3 × 26 = 78.
Total monthly labor cost for assembly of 78 vehicles = monthly labor cost at two vehicles per day + monthly increased cost = Tk. 12,89,600/- + Tk. 62,400/- = Tk. 13,52,000/-
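The cost figures above can be checked with a few lines of arithmetic (all amounts in Tk; the text uses Indian-style digit grouping, plain integers are used here):

```python
wage_per_day = 200                 # Tk paid per worker per working day
workers_reduced = 21
working_days = 26                  # one weekly holiday

daily_savings = wage_per_day * workers_reduced            # Tk 4,200 per day
monthly_savings = daily_savings * working_days            # Tk 1,09,200 per month

labour_per_vehicle = 6000 + 4700 + 7500 + 6600            # Tk 24,800 per vehicle
monthly_cost_two = labour_per_vehicle * 2 * working_days  # Tk 12,89,600 at 2/day

extra_workers = 12                 # needed to assemble three vehicles a day
monthly_cost_three = monthly_cost_two + wage_per_day * extra_workers * working_days
per_vehicle_at_three = monthly_cost_three / (3 * working_days)

print(monthly_savings, monthly_cost_three, round(per_vehicle_at_three))
# 109200 1352000 17333
```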
XI. RESULTS AND DISCUSSIONS
Cost comparison if 2 or 3 automobiles could be produced each working day: if the top management wants to produce two automobiles each working day, the labor cost per vehicle = Tk. 12,89,600/- ÷ 52 = Tk. 24,800/-. But if the management wants to produce three vehicles each working day, the labor cost per vehicle = Tk. 13,52,000/- ÷ 78 = Tk. 17,333/-. Therefore, it is easy to see that it would be more profitable to produce three vehicles each working day instead of two.
XII. CONCLUSIONS
The proposed line has been designed very carefully in order to keep the balance efficiency at the maximum level. Through the redesigning process of the production line, all the idle time and avoidable delays have been eliminated and the production line has been made free of bottlenecks; as a result, it is found that the production rate can be increased with a considerable profit margin. The study of total labor costs showed that if the daily delivery rate could be kept constant, about Tk. 1,94,142.00 could be saved every month. The gains in productivity allowed BMTF to increase worker pay from Tk. 150.00 per day to Tk. 200.00 per day and to reduce the weekly working hours while continuously lowering the product price. These
goals appear altruistic; however, it has been argued that they were implemented by BMTF in order to reduce high employee turnover.
ACKNOWLEDGEMENT
The author would like to thank Mr. H.A. Munsur and Mr. M. Muhsin, two of his undergraduate students, who carried out research on redesigning, rehabilitating and balancing the production line of Bangladesh Machine Tool Factory (BMTF) in 2010 for the B.Sc. Engineering Degree under his direct supervision. They successfully completed the research, showing that the proposed model of the production line would significantly increase output while saving a considerable amount of money, which has a positive impact in reducing the cost per unit.
REFERENCES
[1] Assembly line, Wikipedia, the free encyclopedia.
[2] Krajewski, L. and Ritzman, L. (2002), Operations Management: Strategy and Analysis, 6th Edition, Prentice-Hall, New Jersey.
[3] Meredith, J. and Shafer, S. (2003), Introducing Operations Management, Wiley, New York.
[4] Ragsdale, C.T. (2003), "A New Approach to Implementing Project Networks in Spreadsheets," INFORMS Transactions on Education, Vol. 3, No. 3.
[5] Sabuncuoglu, I., Erel, E. and Tanyer, M. (2000), "Assembly Line Balancing Using Genetic Algorithms," Journal of Intelligent Manufacturing, Vol. 11, pp. 295-310.
[6] Scholl, A. (1999), Balancing and Sequencing of Assembly Lines, Springer-Verlag, Heidelberg.
[7] Suresh, G., Vivod, V. and Sahu, S. (1996), "A Genetic Algorithm for Assembly Line Balancing," Production Planning and Control, Vol. 7, No. 1, pp. 38-46.
[8] Smith, A., An Inquiry into the Nature and Causes of the Wealth of Nations, 1st Edition, London, 1776; 2nd Edition, London, 1789.
[9] Salveson, M.E. (1955), "The Assembly Line Balancing Problem," Journal of Industrial Engineering, Vol. 6, pp. 18-25.
[10] Williams, K. et al. (1993), "The Myth of the Line: Ford's Production of the Model T at Highland Park, 1909-16," Business History, Vol. 35, pp. 66-87.
[11] Assembly line, definition from Answers.com.
[12] Ajenblit, D.A. (1992), "Applying Genetic Algorithms to the U-shaped Assembly Line Balancing Problem," Proceedings of the IEEE Conference on Evolutionary Computation, pp. 96-101.
[13] Kilbridge, M.D. and Wester, L. (1961), "A Heuristic Method of Assembly Line Balancing," Journal of Industrial Engineering, Vol. 12, No. 4, pp. 292-298.
[14] Dar-El, E.M. (1975), "Solving Large Single-model Assembly Line Balancing Problems: A Comparative Study," AIIE Transactions, Vol. 7, No. 3, pp. 302-306.
[15] Tonge, F.M. (1969), "Summary of a Heuristic Line Balancing Procedure," Management Science, Vol. 7, No. 1, pp. 21-42.
[16] Munsur, H.A. and Muhsin, M. (2010), "Assembly Line Balancing for Enhanced Production," unpublished B.Sc. Engineering thesis carried out under the direct supervision of the author, ME Department, DUET, Gazipur.
[17] Amen, M. (2000), "Heuristic Methods for Cost-oriented Assembly Line Balancing: A Survey," International Journal of Production Economics, Vol. 68, pp. 1-14.
[18] Ajenblit, D.A. and Wainwright, R.L. (1998), "Applying Genetic Algorithms to the U-shaped Assembly Line Balancing Problem," Management Science, Vol. 7, No. 4, pp. 21-42.
[19] Leu, Y., Matheson, L.A. and Rees, L.P. (1996), "Assembly Line Balancing Using Genetic Algorithms with Heuristic-Generated Initial Populations and Multiple Evaluation Criteria," Decision Sciences, Vol. 25, No. 4, pp. 581-605.
[20] Ignall, E.J. (1965), "A Review of Assembly Line Balancing," Journal of Industrial Engineering, Vol. 15, No. 4, pp. 244-254.
[21] Klein, M. (1963), "On Assembly Line Balancing," Operations Research, Vol. 11, pp. 274-281.
[22] Mastor, A.A. (1970), "An Experimental Investigation and Comparative Evaluation of Production Line Balancing Techniques," Management Science, Vol. 16, pp. 728-746.
[23] Held, M., Karp, R.M. and Shareshian, R. (1963), "Assembly Line Balancing: Dynamic Programming with Precedence Constraints," Operations Research, Vol. 11, No. 3, pp. 442-460.
[24] Jackson, J.R. (1956), "A Computing Procedure for a Line Balancing Problem," Management Science, Vol. 2, pp. 261-271.
[25] Talbot, F.B., Patterson, J.H. and Gehrlein, W.V. (1986), "A Comparative Evaluation of Heuristic Line Balancing Techniques," Management Science, Vol. 32, pp. 430-454.
[26] Bowman, E.H. (1960), "Assembly Line Balancing by Linear Programming," Operations Research, Vol. 8, pp. 385-389.
[27] Wild, R. (1972), Mass-production Management: The Design and Operation of Production Flow-line Systems, Wiley, London.
[28] Taylor, F.W. (1911), The Principles of Scientific Management, Harper & Brothers, New York/London.
[29] Amen, M. (2000), "An Exact Method for Cost-oriented Assembly Line Balancing," International Journal of Production Economics, Vol. 64, pp. 187-195.
[30] Dar-El, E.M. (1975), "Solving Large Single-model Assembly Line Balancing Problems: A Comparative Study," AIIE Transactions, Vol. 7, No. 3, pp. 302-306.
Author's Biography:
M. A. Hannan is a faculty member in the Department of Mechanical Engineering, Dhaka University of Engineering & Technology (DUET), Gazipur, Bangladesh. His specialization is in Industrial & Production Engineering, with a focus on production and operations management.
A NOVEL DESIGN FOR ADAPTIVE HARMONIC FILTER TO
IMPROVE THE PERFORMANCE OF OVER CURRENT RELAYS
A. Abu-Siada Department of Electrical and Computer Engineering, Curtin University, Perth, Australia
ABSTRACT
Due to ever-increasing non-linear loads and the worldwide trend to establish smart grids, the harmonic level in electricity grids has increased significantly. In addition to their impact on power quality, harmonic currents can have a devastating effect on the operation of over current relays, as these are designed to operate efficiently at the fundamental frequency. A distorted waveform will affect the operation of the over current relay and may cause the relay to trip under normal operating conditions. To solve this problem, passive and active power filters are employed to eliminate the harmonics and purify the relay operational signal. Passive filters are not a cost-effective choice for this issue; active filters, on the other hand, are more complex and need a proper, sophisticated controller. This paper introduces a new and simple approach to adaptive filter design. The approach is economical, compact and very effective in eliminating harmonics from the grid, and it can easily be attached to any protective relay to improve its performance. The application of this design to improve the performance of over current relays in the IEEE-30 bus system with heavy penetration of non-linear loads is investigated.
KEYWORDS: Over current relay, harmonic filters, IEEE-30 bus system
I. INTRODUCTION
Most of the literature reveals that the performance of relays in the presence of harmonic currents is not significantly affected for a total harmonic distortion (THD) of less than 20% [1]. As there has been a tremendous increase in harmonic sources in the last few decades, harmonic levels of 20% and higher are to be expected. Moreover, overcurrent relays have to operate with current transformers which may saturate and distort the current waveform, causing a relay to trip under conditions which would normally permit smooth running of the system without interruption [1-5]. Current transformer saturation may occur due to the presence of harmonics, which may cause the current transformer to fail to deliver a true reproduction of the primary current to the relay during fault conditions and thus may cause undesirable operations [6-8]. Electromechanical relays are nowadays considered obsolete in most developing countries; however, they are still used in some places, and their time delay characteristics are altered in the presence of harmonics. Another type of relay that is affected by harmonics is the negative-sequence overcurrent relay, which is designed specifically to function with the negative-sequence current component and cannot perform up to its standard when there is significant waveform distortion. Digital and numerical relays usually have built-in filters to filter out harmonics and are thus less prone to maloperation [9].
Active power filters, which are more flexible and viable than passive filters, have become popular nowadays [10]. However, the active power filter configuration is more complex and requires appropriate control devices to operate [11]. This paper introduces a novel active filter design that is compact, simple and reliable. The application of this design to improve the performance of over current relays in the IEEE-30 bus system with heavy penetration of non-linear loads is investigated.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
The proposed filter design with the detailed circuit components is elaborated in Section 2. To prove
the reliability of the proposed filter, the simulation results of two case studies are illustrated in Section
3. Application of the proposed filter to the IEEE 30-bus system is examined in Section 4. Section 5
draws the overall conclusion of the paper.
II. PROPOSED FILTER DESIGN
To purify the current signal received by the current transformer (CT), the distorted current signal,
which consists of a fundamental current component (I0) and harmonic current components (Ihs), in the
secondary side of the step-down transformer is extracted and the fundamental current component is
filtered out using a narrow band-reject filter, while the remaining harmonic components are used
to cancel the harmonic components in the other path by means of a shifting transformer, as shown in Fig.
1. In this way, the current signal fed to the relay will only contain the fundamental current component.
The overall circuit is shown in Fig. 2.
Figure 1. Proposed harmonic filter design
Figure 2. Filter components
In the circuit shown in Fig. 2, the current transformer measures the distorted current from the step-
down transformer secondary. The resistor R, with its value of 1 Ω, is used to convert the current signal
to a voltage signal, which is amplified 10 times using an operational amplifier. The key component
of the active filter is the narrow band-reject 50 Hz filter, which suppresses the 50 Hz fundamental
component. The filter comprises low-pass and high-pass filter components with a summing amplifier
(a twin-T notch filter). The filter transfer function and the values of its components are calculated based
on the required specifications. The output signal of the filter is amplified using an operational
amplifier and is then converted to a current signal (comprising harmonic components only) using a
voltage-controlled current source (VCCS). The harmonic components are then fed to one terminal of
the cancellation transformer, while the original current signal (comprising fundamental and
harmonic components) is fed to the other terminal for harmonic cancellation. In this way, a pure
fundamental current signal is guaranteed to be fed to the overcurrent relay.
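The notch behaviour described above can be sketched numerically. The transfer function below is the standard second-order notch that an active twin-T network realises; the quality factor Q (here 10) is chosen purely for illustration, since the paper does not give component values.

```python
import math

def twin_t_notch_gain(f, f0=50.0, q=10.0):
    """Magnitude of H(s) = (s^2 + w0^2) / (s^2 + (w0/Q)*s + w0^2),
    the second-order notch realised by an active twin-T network,
    evaluated on the imaginary axis at s = j*2*pi*f."""
    w, w0 = 2 * math.pi * f, 2 * math.pi * f0
    num = complex(w0 * w0 - w * w, 0.0)           # numerator at s = jw
    den = complex(w0 * w0 - w * w, w * w0 / q)    # denominator at s = jw
    return abs(num / den)
```

At 50 Hz the gain is exactly zero (the fundamental is suppressed), while at a sub-harmonic such as 10 Hz or a harmonic such as 250 Hz the gain is close to unity, which is the behaviour the cancellation path relies on.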
III. SIMULATION RESULTS
To examine the filter's capability of suppressing all undesired current harmonics while retaining the
fundamental component, the circuit shown in Fig. 2 is simulated using PSIM software and two case
studies are performed.
Case study 1: The primary side of the (1:1000) current transformer was fed by a distorted current
signal comprising sub-harmonic frequencies of high amplitude at 10 Hz and 35 Hz, as shown in Table 1.
The 4th column in Table 1 shows the ideal values of the output signal, where all sub-harmonic
components are assumed to be eliminated and 100% (1 A) of the fundamental component is supplied
to the relay. The 5th column in Table 1 shows the output current components of the proposed filter.
The performance of the filter in eliminating harmonic components can be examined by comparing the
filter output current components with the ideal output current. The waveforms of the input current,
ideal output current and filter output current, along with their harmonic spectra, are shown in Fig. 3.
Table 1. Filter performance with sub-harmonic components

Harmonic Order   Frequency (Hz)   Input (A)   Ideal output (A)   Filter output (A)
1                50               1000        1.0                0.95
0.2              10               500         0                  0.0213
0.7              35               500         0                  0.0816
Figure 3. Waveforms and spectrum analysis for case study 1
Table 2. Filter performance with sub-harmonic and harmonic components

Harmonic Order   Frequency (Hz)   Input (A)   Ideal output (A)   Filter output (A)
1                50               1000        1.0                0.9863
0.2              10               500         0                  0.0101
0.6              30               500         0                  0.3293
2                100              500         0                  0.0102
3                150              500         0                  0.0023
5                250              300         0                  0.0079
7                350              300         0                  0.0131
9                450              300         0                  0.0055
11               550              100         0                  0.0067
13               650              100         0                  0.0079
Case study 2: The amount of harmonic content in the input signal is significantly increased to include
the harmonic and sub-harmonic orders shown in Table 2. It can be seen from Table 2 that the
difference between the ideal output current and the actual filter output current is negligible. The
waveforms of the input current, ideal output current and filter output current, along with their
harmonic spectra for this case, are shown in Fig. 4.
Figure 4. Waveforms and spectrum analysis for case study 2
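As a rough cross-check of such tables, the distortion remaining after filtering can be computed from the residual components. The function below is a generic RMS-ratio calculation, not the authors' measurement procedure, and it lumps the sub-harmonic residuals in with the harmonics (conventional THD counts only integer harmonic orders).

```python
import math

def thd(fundamental, components):
    """Ratio of the RMS of the non-fundamental components to the
    fundamental component (a generalised distortion figure)."""
    return math.sqrt(sum(c * c for c in components)) / fundamental

# Filter output components from Table 2: fundamental 0.9863 A,
# residual sub-harmonic and harmonic amplitudes in amperes.
residuals = [0.0101, 0.3293, 0.0102, 0.0023, 0.0079,
             0.0131, 0.0055, 0.0067, 0.0079]
distortion = thd(0.9863, residuals)
```

On these numbers the residual distortion is dominated by the 30 Hz sub-harmonic term (0.3293 A), the largest entry in the filter-output column of Table 2.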
IV. APPLICATION OF THE PROPOSED FILTER ON THE IEEE-30 BUS SYSTEM
To investigate the impact of the proposed filter on relay operation, the IEEE 30-bus system [12]
(shown in Fig. 5) is simulated using ETAP software and the THD is measured as 3%. Relay
coordination is performed as in [13, 14]. A three-phase short-circuit fault is applied at bus 10 and, as a
result, relays 8, 9 and 10 trip in the sequence shown in Fig. 5 to isolate the faulty bus.
Non-linear loads were then connected to the system at different buses such that the THD reaches
20%. The same three-phase short-circuit fault is applied at bus 10. As can be seen from Fig. 6, under
such significant THD the relays exhibit an undesired tripping sequence and do not isolate the
faulty bus. The tripping sequence in this case starts with relay 9 on bus 10; relays 8 and 10 do not
trip, and relays 19 and 20 on bus 25 trip instead. As a consequence, under such a heavy
harmonic level the relays maloperate and do not isolate the faulty zone.
To restore the correct sequence of relay tripping operations in the presence of significant THD, the
proposed filter design was connected at the locations shown in Fig. 7. As a result, the THD was
reduced to only 3.1%. Fig. 7 shows the correct relay tripping sequence, which is identical to that of
Fig. 5. The relay pickup values become much more suitable for correct relay operation after the
installation of the harmonic filters. It can be concluded that the proposed filter is very effective in
rectifying relay operation in the presence of significant harmonic currents, as it eliminates a
significant amount of harmonic current.
Figure 5. Tripping Sequence during 3 Phase Fault on bus 10 (THD = 3%)
Figure 6. Tripping Sequence during 3 Phase Fault on bus 10 (THD = 20%)
Figure 7. Tripping Sequence during 3 Phase Fault on bus 10 (THD = 3.1%)
V. CONCLUSION
Simulation results show that, when the THD is more than 20%, the performance of overcurrent relays
is significantly affected and maloperation results. When a fault occurs in the system, the overcurrent
relays will not be able to isolate the faulty location as they will trip in an undesired sequence.
Reducing the THD to a level below 20% mitigates this problem and proper relay operation can be
retained. Passive harmonic filters are not a cost-effective solution to this problem. The proposed
filter design is very effective in reducing the THD in the system to an almost negligible level and
rectifies relay operation in the presence of significant harmonic currents. The proposed filter is
compact, cost effective, technically sound and easy to implement.
REFERENCES
[1] Tumiran, T. Haryono and Zulkarnaini, "Effect of Harmonic Loads on Over Current Relay to Distribution
System Protection," Proceedings of the International Conference on Electrical Engineering and Informatics,
June 2007.
[2] N. X. Tung, G. Fujita, M. A. S. Masoum and S. M. Islam, "Impact of Harmonics on Tripping Time and
Coordination of Overcurrent Relay," 7th WSEAS International Conference on Electric Power Systems, High
Voltages, Electric Machines, Venice, Italy, November 2007.
[3] A. Wright and C. Christopoulos, Electrical Power System Protection, London: Chapman & Hall, 1993.
[4] J. Arrillaga, N. Watson and A. Wood, Power System Harmonics, England: John Wiley & Sons Ltd, 1997.
[5] J. Arrillaga and N. R. Watson, Power System Harmonics, England: John Wiley & Sons Ltd, 2003.
[6] N. A. Abbas, "Saturation of Current Transformers and its Impact on Digital Overcurrent Relays," M.Sc.
Thesis, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia, August 2005.
[7] E. F. Fuchs and M. A. S. Masoum, Power Quality in Power Systems and Electrical Machines, Amsterdam
and Boston: Academic Press/Elsevier, 2008.
[8] Francisco C. De La Rosa, "Effect of Harmonic Distortion on Power Systems," in Harmonics and Power
Systems, Boca Raton, FL: CRC/Taylor & Francis, 2006.
[9] A. A. Girgis, J. W. Nims, J. Jacamino, J. G. Dalton, and A. Bishop, "Effect of voltage harmonics on the
operation of solid state relays in industrial applications," in Conference Record of the 1990 IEEE Industry
Applications Society Annual Meeting, 1990, pp. 1821-1828, vol. 2.
[10] C. Cheng-Che and H. Yuan-Yih, "A novel approach to the design of a shunt active filter for an unbalanced
three-phase four-wire system under nonsinusoidal conditions," Power Delivery, IEEE Transactions on, vol. 15,
pp. 1258-1264, 2000.
[11] G. J. Wakileh, Power System Harmonics: Fundamental Analysis and Filter Design, Berlin; New York:
Springer, 2001.
[12] H. Saadat, Power System Analysis, New York: McGraw-Hill Inc., 2002.
[13] M. Ezzeddine, R. Kaczmarek, and M. U. Iftikhar, "Coordination of directional overcurrent relays using a
novel method to select their settings," Generation, Transmission & Distribution, IET, vol. 5, pp. 743-750.
[14] D. Birla, R. P. Maheshwari, and H. O. Gupta, "A new nonlinear directional overcurrent relay coordination
technique, and banes and boons of near-end faults based approach," Power Delivery, IEEE Transactions on, vol.
21, pp. 1176-1182, 2006.
Author
A. Abu-Siada (M’07) received his B.Sc. and M.Sc. degrees from Ain Shams University,
Egypt, and his Ph.D. degree from Curtin University of Technology, Australia, all in
electrical engineering. Currently, he is a lecturer in the Department of Electrical and
Computer Engineering at Curtin University. His research interests include power system
stability, condition monitoring, superconducting magnetic energy storage (SMES), power
electronics, power quality, energy technology and system simulation. He is a regular
reviewer for the IEEE Transactions on Power Electronics, the IEEE Transactions on
Dielectrics and Electrical Insulation, and the Qatar National Research Fund (QNRF).
96 Vol. 1, Issue 5, pp. 96-108
ANUPLACE: A SYNTHESIS AWARE VLSI PLACER TO
MINIMIZE TIMING CLOSURE
Santeppa Kambham1 and Krishna Prasad K.S.R.2
1ANURAG, DRDO, Kanchanbagh, Hyderabad-500058, India
2ECE Dept, National Institute of Technology, Warangal-506004, India
ABSTRACT
In Deep Sub-Micron (DSM) technologies, circuits fail to meet the timings estimated during synthesis after
completion of the layout, which is termed the 'Timing Closure' problem. This work focuses on the study of
the reasons for failure of timing closure for a given synthesis solution. It was found that this failure is due to
non-adherence to the synthesizer's assumptions during placement. A new synthesis-aware placer called
ANUPLACE was developed which adheres to the assumptions made during synthesis. The new algorithms
developed are illustrated with an example. ANUPLACE was applied to a set of standard placement benchmark
circuits. There was an average improvement of 53.7% in the Half-Perimeter Wire Length (HPWL) with an
average area penalty of 12.6% for the placed circuits when compared to the results obtained by the existing
placement algorithms reported in the literature.
KEYWORDS: Placement, Signal flow, Synthesis, Timing
I. INTRODUCTION
The VLSI IC design process involves two important steps, namely (i) synthesis of a high-level representation
of the circuit, producing technology-mapped components and a net-list, and (ii) layout of the technology-
mapped circuit. During the layout process, the placement of circuit components at exact locations
is carried out. The final layout should meet the timing and area requirements estimated
during the synthesis process. Placement is the major step that decides the area and delay of the
final layout. If the area and delay requirements are not met, the circuits have to be re-synthesized. This
two-step process has to be iterated until the required area and delay are achieved. In Deep Sub-Micron
(DSM) technologies, circuits fail to meet the timing requirements estimated during synthesis after
completing the layout. This is termed the "Timing Closure" problem. It has been found that even after
several iterations, this two-step process does not converge [1,2,3]. One reason for this non-
convergence is that synthesis and layout are posed as two independent problems and each is
solved separately. There are other solutions which try to unify these two steps to achieve timing
closure; they can be classified into two categories: (i) synthesis centric [4,5,6] and (ii) layout centric
[7,8]. In synthesis-centric methods, layout-related information is used during the synthesis process. In
layout-centric methods, the sub-modules of circuits which do not meet the requirements are re-
synthesized. None of these methods has investigated why a given synthesis solution is not able to
meet the timing requirements after placement. Our work focuses on finding the reasons for the failure of
timing closure for a given synthesis solution. Based on these findings, we developed a placer named
ANUPLACE which minimizes the timing closure problem by placing the circuits as per the
assumptions made during the synthesis process.
In Section 2, we briefly review the existing methods of placement and their limitations. Section 3
tabulates and illustrates the reasons for failure of timing closure. Section 4 describes the implications
of adhering to synthesis assumptions during placement. Based on this, the basis for the new placement
algorithms is worked out in Section 5. With this background, a new placer called ANUPLACE was
developed, which is described in Section 6. The new placer ANUPLACE is illustrated with an
example in Section 7. Improvements to the initial placement solution are given in Section 8. The
experimental setup to evaluate ANUPLACE is described in Section 9, where results are tabulated and
the improvements obtained are discussed. Conclusions of the research work carried out and its future
scope are given in Section 10.
II. EXISTING PLACEMENT METHODS AND THEIR LIMITATIONS
Placement assigns exact locations to circuit components within the chip area. The existing algorithms use
component cell dimensions and component interconnection information as input to the placer. Thus
the placer is not directly coupled to the synthesis, and a lot of information available after synthesis is not
used during placement [9,10,11,28,36,37]. The studies in [12] show that the results of leading
placement tools from both industry and academia may be from 50% to 150% away from optimal in
total wire length.
The major classical approaches to placement are constructive methods and iterative methods [13]. In
constructive placement, once the components are placed, they are never modified thereafter. The
constructive methods are (i) partitioning-based, (ii) quadratic assignment and (iii) cluster growth. An
iterative method repeatedly modifies a feasible placement by changing the positions of one or more
core cells and evaluates the result. It produces better results at the expense of enormous amounts of
computation time. The main iterative methods are (i) simulated annealing, (ii) simulated evolution and
(iii) force-directed. During placement, we have to optimize a specific objective function. Typical
objectives include wire length, cut, routing congestion and performance. These classical approaches
are very effective and efficient on small to medium scale designs. In the DSM SOC era, due to complex
chips and interconnect delay dominance, they are not very effective [1,2,3,4]. Some new methods
reported in the literature [13] to overcome this problem are (a) hierarchical placement, which utilizes
the structural properties [23] of the circuit during placement, (b) re-synthesis, which re-synthesizes a
soft macro in case of timing violation, and (c) re-timing, which relocates registers to reduce the cycle
time while preserving the functionality. Existing timing-driven placement algorithms
[14,15,16,17,18,19] are classified into two categories: path-based and net-based. Path-based
algorithms try to directly minimize the longest path delay. Popular approaches in this category
include mathematical programming and iterative critical path estimation. TimberWolf [18] used
simulated annealing to minimize a set of pre-specified timing-critical paths. The drawback is that
such algorithms usually require substantial computation resources. In the net-based algorithms, timing
constraints are transformed into net-length constraints. The use of signal direction to guide the
placement process was found to give better results [28]. In timing-driven placement based on monotone
cell ordering constraints [24], a new timing-driven placement algorithm was presented which
attempts to minimize zigzags and criss-crosses on the timing-critical paths of a circuit.
Table 1 summarises how the existing algorithms are unable to solve the timing closure problem for a
given synthesis solution. Most of the existing placement algorithms consider only connectivity
information during placement and ignore other information available from synthesis [28].
III. REASONS FOR FAILURE OF TIMING CLOSURE
Our study has indicated that the failure to achieve timing closure is due to non-adherence to the
synthesizer's assumptions during placement. The assumptions made during synthesis [25,26,27,29]
and the implications of these assumptions during placement are summarized in Table 2 and illustrated
in Figures 1 to 8. Column 1, with heading "Fig", refers to the figure number.
Table 1 Drawbacks of existing placement methods

Placement method                   Drawback
Quadratic assignment [20]          Minimizes all wires, whereas only the critical path is to be minimized
Cluster growth [21]                Loses track of cells along the signal flow
Simulated annealing [18]           Signal flow is disturbed
Force directed [22]                Tries to minimize all wires, which is not required
Hierarchical placement [23]        Global signal flow not known; additional burden of partitioning into cones
Re-synthesis of soft-macros [8]    Iterative process
Monotone cell ordering [24]        Additional burden of finding zigzags and criss-crosses from the net-list
Figure 1 Gates placed as per levels
Figure 2 Non-uniformity of row widths
Figure 3 Cones versus Rectangle
Figure 4 Primary inputs
Figure 5 Sharing inputs
Figure 6 Sharing common terms
Figure 7 Non-uniformity of cell sizes
Figure 8 Pin positions on cell
Table 2 Implication of non-adherence of synthesis assumptions

Fig 1. Assumption: Gates are placed as per levels. Implication: During placement gates are randomly
placed. This increases the delay in an unpredictable manner.
Fig 1. Assumption: Delay is proportional to the number of levels. Implication: Since gates are randomly
placed, delay is no longer proportional to the number of levels.
Fig 1. Assumption: Delay from one level to the other is a fixed constant (some "k"). Implication: Since
the original structure of levels is not maintained, the delay from one level to the other is unpredictable.
Fig 1. Assumption: Upper bound of delay = number of levels * (delay of max(level) + delay from one
level to the next). Implication: Since the original structure of levels is not maintained, the upper bound
of delay is not predictable.
Fig 2. Assumption: No restrictions on the aspect ratio (number of rows and columns). Implication:
Synthesis assumes an irregular structure as shown in the figure. The placer tries to achieve a
rectangular shape; hence this synthesis assumption can never be met if the goal is a rectangle.
Fig 2. Assumption: No restrictions on the aspect ratio, no uniformity in the size of rows or columns.
Implication: The synthesizer has no notion of the shape of the placement and does not consider
uniformity of the size of rows or columns; thus the synthesizer may assume irregular shapes when it
calculates delay. This is not the case with the placer.
Fig 3. Assumption: The synthesizer assumes a 'cone'. Implication: Combinational circuits have a
natural 'cone' shape as shown in the figure. The placer requires a 'rectangle' for effective use of
silicon. The delay expected by the synthesizer can be achieved only if the placer uses a 'cone' for
critical signals.
Fig 4. Assumption: Geographical distance of input source pins. Implication: In the figure, A and B are
assumed to be available in a constant time 'k'. In reality, this can never be the case; this synthesis
assumption can never be met.
Fig 5. Assumption: Sharing of inputs. Implication: The synthesizer assumes inputs to be available in
constant time, which is not the case during placement. This synthesis assumption can never be met.
Fig 6. Assumption: Common terms. Implication: Sharing an output produces more wire during layout
than was assumed during synthesis. This synthesis assumption can never be met.
Fig 7. Assumption: Non-uniformity of cell sizes. Implication: Requires more wire during placement.
Cell size (length and width) is assumed uniform and fixed during synthesis as far as the wire required
for routing is concerned. This synthesis assumption can never be met.
Fig 8. Assumption: Pin position on a cell. Implication: It is assumed that inputs are available at the
same point on the cell. This is not the case during placement. This synthesis assumption can never be
met.
IV. IMPLICATIONS OF ADHERING TO SYNTHESIS ASSUMPTIONS DURING
PLACEMENT
We now analyze how we can adhere to synthesis assumptions during placement. The synthesizer
assumes that cells are placed as per the levels assumed during synthesis, whereas during placement
cells are placed randomly without any regard to levels. Cells can instead be placed as per the levels,
as a 'cone' [28], with the left-over area filled with non-critical cells to form a rectangle for better
silicon utilization. The synthesizer assumes that delay is proportional to the number of levels, whereas
this information is lost during placement due to random placement. By placing cells on critical paths
as per the levels along the signal flow, we adhere to this synthesis assumption; non-critical cells can
be placed in the left-over area. By placing cells as per the levels assumed during synthesis, the delay
from one level to the next can be approximately maintained as a fixed constant, and the upper bound
of delay can be predicted. The synthesizer assumes an irregular structure as shown in Figure 2; cells
which are not on critical paths can be moved to other rows to achieve a rectangular shape. Based on
the above analysis, the basis for the new method is evolved, which is explained in the next section.
V. BASIS FOR THE NEW ALGORITHMS
The new method is evolved based on the following:
• Use the natural signal flow available during synthesis [28].
• Use cone placement for signals along the critical path [28].
• Try to keep the placement as close as possible to the levels assumed during synthesis.
Signal flow indicates the direction of a signal, from a Primary Input (PI) to a gate input or from the
output of one gate to the input of another gate. The issues in placing cells along the signal flow are
explained below with the help of Figure 9. The gate G has one output and 3 inputs.
S1, S2, S3 show the direction of the signals to the inputs of G. Ideally, the outputs of the preceding
gates g1, g2, g3 should lie on the straight lines S1, S2, S3 as shown in Figure 9, and the gates g1,
g2, g3 are to be placed as close as possible to G. The pin separations w1, w2 are much smaller than
the gate widths f1, f2, f3 of gates g1, g2, g3. It is therefore impossible to place all input gates g1, g2,
g3 in a row in Level i such that their outputs fall on the straight lines S1, S2, S3; at least two out of
the 3 gates have to be placed as shown in Figure 10. This results in two bends on signals S1 and S3,
which cannot be avoided. Only one signal can be placed on the straight line; this is used for placing
critical paths, while other, less critical paths are placed above or below this straight line. The new
placement algorithms which are used in ANUPLACE are explained in the next section.
Figure 9 Signal Flow as per Synthesizer Figure 10 Placement along Signal Flow
VI. ALGORITHMS USED IN ANUPLACE
ANUPLACE reads the benchmark circuit, which is in the form of a net-list taken from the "SIS"
synthesizer [35], builds trees with primary outputs as roots as shown in Figure 11, and places the
circuit along the signal flow as cones. The placement benchmark circuits in Bookshelf [30] format
contain a 'nodes' file giving the aspect ratio of the gates and a 'nets' file which gives the
interconnection details between gates and input/output terminals. These formats do not identify
primary inputs or primary outputs. We took benchmark circuits from the SIS [35] synthesizer in
"BLIF" format, which were then converted into Bookshelf format using the converters provided in
[31,32,33,34]. This produces ".nodes" and ".nets" files. The 'nodes' file identifies primary inputs and
primary outputs by an "_input" and "_output" suffix respectively. The "nodes" file consists of
information about gates, primary inputs and outputs. The "nets" file consists of the interconnections
between the nodes and the inputs/outputs. While parsing the files, primary input/output information
is obtained using the "terminal" names which identify "input/output". The new placement algorithm
is shown in Figure 12. Once the trees are created, delay information is read into the data structure
from SIS, which is used during placement. This delay information is available at every node from the
"SIS" synthesizer. A circuit example with 3 primary outputs, marked PO-1, PO-2 and PO-3, is shown
in Figure 11.
Figure 11 Trees with Primary output as root
Figure 12 ANUPLACE algorithm
ANUPLACE works as follows.
• Read the benchmark circuit, which is in the form of a net-list with timing information.
• Build trees with primary outputs as roots.
• Sort the trees based on time criticality.
• Starting with the most time-critical tree, place each tree on the layout surface, from its root at the
primary output, using the "place-cell" algorithm shown in Figure 13.
• Place the remaining trees one by one on the layout surface using "place-cell".
The place_cell algorithm shown in Figure 13 works as follows.
• Place the cell pointed to by the root using the "place_one_cell" algorithm shown in Figure 14.
• Sort the input trees based on time criticality.
• For each input: if it is a primary input, place it using "place_one_cell"; if not, call
"place_cell" with this input recursively.
Figure 13 Algorithm Place-cell
Figure 14 Algorithm Place-one-cell
The "place_one_cell" algorithm shown in Figure 14 works as follows. The layout surface is divided
into a number of rows equal to the number of levels in the tree, as shown in Figure 11. Each row
corresponds to one level of the tree. The first root cell is placed in the middle of the top row.
Subsequently, its children are placed below this row based on the availability of space. The roots of
all trees (that is, all primary outputs) are placed in the top row. While placing a cell beneath a root,
preference is given to a place along the signal flow. If space is not available on the signal-flow path,
the cell is shifted to the right or left of the signal flow and placed as near as possible to the signal
flow.
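The tree-building and recursive traversal described above can be sketched as follows. The Cell class and function names are hypothetical illustrations of Figures 12 to 14, not the authors' implementation; row bookkeeping is reduced to appending cells to per-level lists, omitting the geometric shifting along the signal flow.

```python
# Sketch of the ANUPLACE flow: trees rooted at primary outputs,
# sorted by time criticality (most negative average slack first),
# each placed recursively along the signal flow.

class Cell:
    def __init__(self, name, slack, inputs=()):
        self.name = name            # gate or primary-input name
        self.slack = slack          # average slack from the synthesizer
        self.inputs = list(inputs)  # driving cells (tree children)

def place_one_cell(cell, level, rows, order):
    """Record the cell in the row for its level and in the overall
    placement sequence (cf. Figure 14)."""
    rows.setdefault(level, []).append(cell.name)
    order.append(cell.name)

def place_cell(cell, level, rows, order):
    """Recursive placement along signal flow (cf. Figure 13): place
    the root, then its inputs in order of time criticality."""
    place_one_cell(cell, level, rows, order)
    for child in sorted(cell.inputs, key=lambda c: c.slack):
        if child.inputs:                       # internal gate: recurse
            place_cell(child, level + 1, rows, order)
        else:                                  # primary input: place directly
            place_one_cell(child, level + 1, rows, order)

def anuplace(primary_outputs):
    """Place each primary-output tree, most critical tree first."""
    rows, order = {}, []
    for root in sorted(primary_outputs, key=lambda c: c.slack):
        place_cell(root, 0, rows, order)
    return rows, order

# Tiny fragment of the Section VII example: [a1] drives [143]
# (more critical) and [2]; [143] is driven by primary input a9.
tree = Cell("[a1]", -4.58,
            [Cell("[143]", -4.46, [Cell("a9", -4.395)]),
             Cell("[2]", -4.435)])
rows, order = anuplace([tree])
```

On this fragment the traversal visits [a1] first, then the more critical input [143] and its subtree, and only then [2], mirroring the placement sequence logic of Section VII.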
VII. ILLUSTRATION OF ANUPLACE WITH AN EXAMPLE
The ANUPLACE algorithms are illustrated with an example whose logic equations are shown in
Figure 15. The timing information from the SIS synthesizer [35] is given in Table 3. The tree built by
ANUPLACE, with the average slacks, is shown in Figure 16. The sequence of placement based on
time criticality is also shown in Figure 16; it is indicated by the numbers 1-31 shown at each node of
the tree. The placement sequence number for each gate is also given in the last column of Table 3.
The initial placement is shown in Figure 17.
Figure 15 Example-Equations
Figure 16 Example-Tree built by ANUPLACE
There are 6 primary inputs, marked a5, a6, a8, a9, a10 and a20, and one primary output, marked a1.
There are 15 two-input gates, marked [127], [15], [14], [18], [19], [17], [143], [4], [82], [135],
[119], [12], [30], [2] and [a1]. The interconnections are as shown in Figure 16. The slack delays
computed by the synthesizer at each gate are also shown in Figure 16. The placement algorithm given
in Figure 12 places the primary output cell a1 first. It then looks at its leaf cells [143] and [2]. From
the time criticality given in Figure 16, it places cell [143] along the signal flow just below the cell
[a1]. The algorithm is then recursively invoked to place the tree with root [143], which places the
cells and the inputs in the sequence [17], [18], a10, a9, [19], a5, a10, [14], [15], a10, a20, [127], a5
and a20 along the signal flow as shown. Once the placer completes the placement of the tree with
[143] as its root, it starts placing the tree pointed to by cell [2]. Now the cells marked [2], [30],
[119], a10, a6, [12], a5, a9, [135], [82], a9, a8, [4], a5 and a6 are placed. This completes the
placement of the complete circuit. Primary inputs and primary outputs are re-adjusted after placing
all the cells.
Figure 17 Example Initial placement
Figure 18 Find-best-place
Table 3 Timing information for the example circuit
Gate Arrival
time rise
Arrival
time fall
Required
time rise
Required
time fall Slack rise Slack fall
Slack
average
Placement
sequence
a5 1.45 1.11 -3.28 -3.11 -4.73 -4.22 -4.475 8,15,23,30
a6 0.69 0.53 -2.89 -2.93 -3.58 -3.46 -3.52 21,31
a8 0.35 0.27 -2.84 -3.1 -3.19 -3.37 -3.28 28
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
103 Vol. 1, Issue 5, pp. 96-108
a9 1.16 0.89 -3.47 -3.27 -4.63 -4.16 -4.395 6,24,27
a10 1.5 1.15 -3.44 -2.74 -4.94 -3.89 -4.415 5,9,12,20
a20 0.8 0.61 -3.51 -2.52 -4.31 -3.13 -3.72 13,16
[127] 1.72 2.18 -1.74 -2.53 -3.46 -4.71 -4.085 14
[15] 2.08 2.09 -1.71 -2.35 -3.79 -4.43 -4.11 11
[14] 3.13 2.64 -1.58 -1.14 -4.71 -3.79 -4.25 10
[18] 1.93 2.07 -1.96 -2.87 -3.89 -4.94 -4.415 4
[19] 2.04 2.06 -1.93 -2.69 -3.98 -4.75 -4.365 7
[17] 3.11 2.66 -1.83 -1.31 -4.94 -3.98 -4.46 3
[143] 3.45 4.1 -0.53 -0.85 -3.98 -4.94 -4.46 2
[4] 2.05 2.04 -2.17 -1.87 -4.22 -3.91 -4.065 29
[82] 1.74 2.22 -2.42 -2.04 -4.16 -4.26 -4.21 26
[135] 3 2.79 -1.25 -1.44 -4.26 -4.22 -4.24 25
[119] 1.93 2.49 -1.84 -2.16 -3.77 -4.65 -4.21 19
[12] 2.04 2.04 -1.81 -1.98 -3.85 -4.02 -3.935 22
[30] 3.42 2.6 -1.22 -1.26 -4.65 -3.85 -4.25 18
[2] 3.72 3.98 -0.5 -0.67 -4.22 -4.65 -4.435 17
[a1] 4.94 4.22 0 0 -4.94 -4.22 -4.58 1
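The "Slack average" column in Table 3 is the arithmetic mean of the rise and fall slacks, which can be checked against a few rows of the table:

```python
# "Slack average" in Table 3 is the mean of the rise and fall slacks;
# values below are taken directly from the table.
rows = {  # gate: (slack_rise, slack_fall, slack_average)
    "a5":    (-4.73, -4.22, -4.475),
    "[143]": (-3.98, -4.94, -4.46),
    "[a1]":  (-4.94, -4.22, -4.58),
}
for gate, (rise, fall, avg) in rows.items():
    assert abs((rise + fall) / 2 - avg) < 1e-9, gate
print("slack averages consistent")
```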
VIII. CONTROLLING ASPECT RATIO
Because the number of cells per level is not uniform, the final layout is not rectangular. For better silicon utilization, the final aspect ratio should be made rectangular. The aspect ratio can be controlled during placement by suitably modifying the algorithm “place-one-cell” given in Figure 14, as discussed in the following paragraphs.
8.1 Algorithm: find-best-place
In the main algorithm, “place_circuit”, the following steps are added.
• Max_row=number of levels as given by synthesizer
• Total_width=total of widths of all cells in the circuit
• Average_width_per_level = Round (Total_width/Max_row) + Tolerance, where “Tolerance” is an integer margin that makes the placement feasible and can be varied based on need.
Before any cells are placed, a layout surface rectangle of size “Max_row × Average_width_per_level” is defined. As the placement progresses, the “used space” and “available space” are marked as shown in Figure 18.
The “find-best-place” algorithm works as follows.
• Current level of parent cell = c as shown in Figure 18.
• Check availability on level c-1.
• If space available on level c-1, place the leaf cell at level c-1.
• Else check availability on levels c, c-2, c+1 and c+2 in the “free” spaces as shown in Figure
18.
• Among these, find the free location nearest to the current position (shown as C in Figure 18) and place the leaf cell there.
The example given in Figure 15 is placed as follows. The total number of levels excluding Primary Inputs and Primary Output is 4. Assuming each cell has unit width, the total width of all cells in the circuit is 15. So Max_row = 4, Total_width = 15 and Average_width_per_level = round(15/4) = 4, with Tolerance = 0. So a maximum of 4 cells can be placed per row. The final placement by ANUPLACE is shown in Figure 19.
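The setup and search described above can be sketched as follows. This is a minimal sketch under the stated assumptions; the function and variable names are ours, and widths are in unit cells as in the example.

```python
def find_best_place(c, used, max_row, row_capacity):
    """Row for a leaf cell whose parent is in row c: prefer row c-1,
    else search nearby rows c, c-2, c+1, c+2 for free space."""
    def has_space(r):
        return 0 <= r < max_row and used[r] < row_capacity
    if has_space(c - 1):
        return c - 1
    for r in (c, c - 2, c + 1, c + 2):
        if has_space(r):
            return r
    return None  # no space in the search window

# Example-circuit numbers: 4 levels, 15 unit-width cells, Tolerance = 0.
max_row, total_width = 4, 15
row_capacity = round(total_width / max_row)  # = 4 cells per row
used = [4, 4, 2, 0]                          # rows 0 and 1 already full
print(find_best_place(2, used, max_row, row_capacity))  # row 1 full -> row 2
```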
Figure 19 Final placement by ANUPLACE Figure 20 Placement by existing placer
The resulting layout is close to a rectangle. After placing the gates, Primary Inputs and Primary Outputs are placed near the gates to which they connect. The placement given by the public domain placer [31,32,33,34] is shown in Figure 20.
The experimental setup used to evaluate ANUPLACE on benchmark circuits, and the results, are given in the next section.
IX. RESULTS AND DISCUSSION
In this section, the test setup used to evaluate the new algorithms with benchmark circuits is described, and the results are compared with those obtained from public domain placement algorithms. The test setup is shown in Figure 21, and the setup for comparing results in Figure 22. Each benchmark circuit is taken in the form of a PLA. The usual placement benchmark circuits [36,37] are not useful here because they give only cell dimensions and interconnect information; the timing and other circuit information from the synthesizer is not available in these placement benchmarks. The SIS synthesizer [35] is used to synthesize each benchmark circuit and produces the netlist in BLIF format along with the timing information. The BLIF output is then converted into Bookshelf format with the public domain tools available at [31,32,33,34] using the utility “blif2book-Linux.exe filename.blif filename”. ANUPLACE is used to place the circuit and gives the placement output in Bookshelf format. To check for overlaps and to calculate the wirelength (HPWL), a public domain utility [31,32,33,34] from the same web site is used: “PlaceUtil-Lnx32.exe -f filename.aux -plotWNames filename -checkLegal -printOverlaps” checks for out-of-core cells and overlaps, and also reports the Half Perimeter Wire Length (HPWL) of the placed circuit. The same BLIF file is used with the public domain placer available at [31] via “LayoutGen-Lnx32.exe -f filename.aux -saveas filename”, and its HPWL is calculated using “PlaceUtil-Lnx32.exe -f filename.aux -plotWNames filename -checkLegal”.
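HPWL, the metric reported by the utilities above, is the sum over all nets of the half perimeter of the bounding box of each net's pins. A small self-contained illustration (the nets and pin coordinates here are made up):

```python
# Half-Perimeter Wire Length: for each net, half the perimeter of the
# bounding box of its pins, summed over all nets.

def hpwl(nets):
    total = 0.0
    for pins in nets:                      # pins: list of (x, y) positions
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

nets = [[(0, 0), (2, 1)],                  # bbox 2 x 1 -> 3
        [(1, 1), (1, 3), (4, 2)]]          # bbox 3 x 2 -> 5
print(hpwl(nets))  # 8.0
```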
Table 4 shows the Half-Perimeter Wire Lengths (HPWL) of the circuits placed using the existing public domain algorithms [31] and using ANUPLACE. There is an average improvement of 53.7% in HPWL, with an average area penalty of 12.6%. Because the cells are aligned to the signal flow, the layout produced by ANUPLACE is not a perfect rectangle; white space remains at the left and right sides, as shown in Figure 19, which increases the area of the placed circuit.
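The last two columns of Table 4 are consistent with the following formulas (the helper names are ours; checked here against the 5xp1 row of the table):

```python
# Improvement is relative to the ANUPLACE HPWL; area penalty is relative
# to the existing placer's area, matching the Table 4 figures.
def improvement_pct(hpwl_new, hpwl_old):
    return (hpwl_old - hpwl_new) / hpwl_new * 100

def area_penalty_pct(area_new, area_old):
    return (area_new - area_old) / area_old * 100

# 5xp1 row: HPWL 1343.8 (ANUPLACE) vs 2021.9 (existing); area 378 vs 361.
print(round(improvement_pct(1343.8, 2021.9), 2))  # 50.46
print(round(area_penalty_pct(378, 361), 1))       # 4.7
```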
Cells that are logically dependent are placed together, as in [28], whereas other placement algorithms scatter the cells randomly; because of this, the HPWL of the entire placed circuit is reduced. Since the cells are placed along the signal flow, the wire length along the critical paths is close to optimum, so the zigzags and criss-crosses seen in [24] are absent. The circuit is naturally partitioned when trees rooted at the Primary Outputs (POs) are built, so there is no additional burden of extracting cones as in [23,28]. ANUPLACE is a constructive method and is therefore preferable to iterative methods. Only critical paths are given priority during construction of the layout, and the global signal flow is kept in view throughout the placement, unlike in other placement methods. Average slacks are used in these experiments; using the maximum of the rise and fall slacks would give the worst-case delay. The timing results are being communicated in a separate paper. The final placement is closer to the synthesis assumptions than that of other placement methods. This approach may be useful towards evolving a Synergistic
Design Flow, which is to create iteration loops that are tightly coupled at the various levels of design
flow as mentioned in [1].
Table 4 Comparison of HPWL
Circuit Name | HPWL (ANUPLACE) | HPWL (Existing) | Core cell Area | Area (Existing) | Area (ANUPLACE) | Improvement in HPWL % | Area Penalty %
5xp1 1343.8 2021.9 317 361 378 50.46 4.7
9sym 4321.3 5162.4 657 729 730 19.46 0.1
apex2 10788.7 16088.1 1225 1369 1372 49.12 0.2
b12 765.1 1180.1 200 225 210 54.25 -6.7
b9 1591.1 2601.5 308 342 387 63.51 13.2
clip 2433 3968.9 511 576 612 63.13 6.3
cm82a 148.2 216 62 72 76 45.69 5.6
comp 2093.4 3681.9 452 506 650 75.88 28.5
con1 125.2 188.6 48 56 85 50.6 51.8
cordic 692.7 1569.8 230 256 360 126.63 40.6
count 2777.4 3842.9 473 529 520 38.36 -1.7
f51m 1494.7 2174.4 309 342 360 45.47 5.3
fa 62.9 83.9 30 36 44 33.27 22.2
ha 21.4 33.9 11 12 12 58.37 0
misex2 2107.3 2626.6 308 342 330 24.65 -3.5
mux1-8 111.3 148.3 32 36 42 33.33 16.7
mux8-1 130.6 211.9 59 64 88 62.29 37.5
o64 3008.3 3467.3 327 361 384 15.26 6.4
parity 346.3 636 149 169 196 83.64 16
rd53 425.2 659.5 130 144 192 55.09 33.3
rd73 1619.6 2666.1 387 420 500 64.61 19
rd84 1730.5 2588.6 394 441 480 49.59 8.8
sao2 1957.8 2913 384 420 500 48.79 19
squar5 648.9 835.2 156 169 192 28.71 13.6
t481 166.1 386.9 76 81 91 132.88 12.3
table3 69834.1 87642.5 4388 4900 4580 25.5 -6.5
Z5xp1 2521 3925.1 485 529 558 55.69 5.5
Z9sym 1276 1892.7 302 342 360 48.33 5.3
Figure 21 Test setup Figure 22 Setup to compare results
X. CONCLUSIONS AND FUTURE SCOPE
The new algorithms place circuits along the signal flow, following the assumptions made during synthesis. The study investigated the reasons why placement tools fail to achieve the timings given by the synthesizer. It showed that some assumptions made by the synthesizer can be implemented, while others can never be implemented; those that can be implemented are incorporated in the new placement algorithms. One problem encountered during implementation was that the new placer produced cone-shaped layouts, which are area inefficient. This is circumvented to some extent by controlling the aspect ratio, using non-critical cell placement to convert the cone into a rectangle. The new placer uses knowledge of the delay information during construction of the solution, which makes it possible to control the aspect ratio of the placement solution effectively. The improvements obtained in delay are being communicated in a separate paper.
ACKNOWLEDGEMENTS
We thank Dr. K.D. Nayak who permitted and guided this work to be carried out in ANURAG. We
also thank members of ANURAG who reviewed the manuscript. Thanks are due to Mrs. D.
Manikyamma and Mr. D. Madhusudhan Reddy for the preparation of the manuscript.
REFERENCES
[1] Kurt Keutzer., et al., (1997), “The future of logic synthesis and physical design in deep-submicron
process geometries”, ISPD '97 Proceedings of the international symposium on Physical design, ACM
New York, NY, USA, pp 218-224.
[2] Randal E. Bryant, et al., (2001), "Limitations and Challenges of Computer-Aided Design Technology
for CMOS VLSI", Proceedings of the IEEE, Vol. 89, No. 3, pp 341-65.
[3] Coudert, O, (2002), “Timing and design closure in physical design flows”, Proceedings. International
Symposium on Quality Electronic Design (ISQED ’02), pp 511 – 516.
[4] Gosti, W., et al., (2001), “Addressing the Timing Closure Problem by Integrating Logic Optimization
and Placement”, ICCAD 2001 Proceedings of the 2001 IEEE/ACM International Conference on
Computer-aided design, San Jose, California , pp 224-231.
[5] Wilsin Gosti , et al., (1998), “Wireplanning in logic synthesis”, Proceedings of the IEEE/ACM
international conference on Computer-aided design, San Jose, California, USA, pp 26-33
[6] Yifang Liu, et al., (2011), “Simultaneous Technology Mapping and Placement for Delay
Minimization”, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems,
Vol. 30 No. 3, pp 416–426.
[7] Pedram, M. & Bhat, N, (1991), “Layout driven technology mapping”, 28th ACM/IEEE Design
Automation Conference, pp 99 – 105.
[8] Salek, A.H., et al., (1999), “An Integrated Logical and Physical Design Flow for Deep Submicron
Circuits”, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol.
18,No. 9, pp 1305–1315.
[9] Naveed A. Sherwani, (1995), “Algorithms for VLSI Physical Design Automation”, Kluwer Academic
Publishers, Norwell, MA, USA.
[10] Sarrafzadeh, M., & Wong, C.K., (1996), “An introduction to VLSI Physical Design”, The McGraw-Hill
Companies, New York.
[11] Shahookar K & Mazumder P, (1991), “VLSI cell placement techniques” ACM Computing Surveys,
Vol. 23, No. 2.
[12] Jason Cong, et al., (2005), “Large scale Circuit Placement”, ACM Transactions on Design Automation
of Electronic Systems, Vol. 10, No. 2, pp 389-430.
[13] Yih-Chih Chou & Young-Long Lin, (2001), “Performance-Driven Placement of Multi-Million-Gate
Circuits”, ASICON 2001 Proceedings of 4th International Conference on ASIC, Shanghai, China, pp 1-
11.
[14] Andrew B. Kahng & Qinke Wang, (2004), “An analytic placer for mixed-size placement and timing-
driven placement”, Proceedings of International Conference on Computer Aided Design, pp 565-572.
[15] Jun Cheng Chi, et al., (2003), “A New Timing Driven Standard Cell Placement Algorithm”,
Proceedings of International Symposium on VLSI Technology, Systems and Applications, pp 184-187.
[16] Swartz, W., & Sechen, C., (1995), “Timing Driven Placement for Large Standard Cell Circuits”, Proc.
ACM/IEEE Design Automation Conference, pp 211-215.
[17] Tao Luo, et al., (2006), “A New LP Based Incremental Timing Driven Placement for High Performance
Designs”, DAC ‘06 Proceedings of the 43rd Annual Design Automation Conference, ACM New York,
NY, USA, pp 1115-1120.
[18] Carl Sechen & Alberto Sangiovanni-Vincentelli, (1985), “The TimberWolf Placement and Routing
Package”, IEEE Journal of Solid-State Circuits, vol. SC-20, No. 2, pp 510-522.
[19] Wern-Jieh Sun & Carl Sechen, (1995), “Efficient and effective placement for very large circuits”,
IEEE Transactions on CAD of Integrated Circuits and Systems, Vol. 14 No. 3, pp 349-359
[20] C. J. Alpert, et al., (1997), "Quadratic Placement Revisited", 34th ACM/IEEE Design Automation
Conference, Anaheim, pp 752-757
[21] Rexford D. Newbould & Jo Dale Carothers , (2003), “Cluster growth revisited: fast, mixed-signal
placement of blocks and gates”, Southwest Symposium on Mixed Signal Design, pp 243 - 248
[22] Andrew Kennings & Kristofer P. Vorwerk, (2006), “Force-Directed Methods for Generic Placement”,
IEEE Transactions on Computer Aided Design of Integrated Circuits and Systems, Vol. 25, No. 10, pp
2076-2087.
[23] Yu-Wen Tsay, et al., (1993), “A Cell Placement Procedure that Utilizes Circuit Structural Properties”,
Proceedings of the European Conference on Design Automation, pp 189-193.
[24] Chanseok Hwang & Massoud Pedram, (2006), “Timing-Driven Placement Based on Monotone Cell
Ordering Constraints”, Proceedings of the 2006 Conference on Asia South Pacific Design Automation:
ASP-DAC 2006, Yokohama, Japan, pp 201-206.
[25] Brayton R K, et al., (1990), “Multilevel Logic Synthesis”, Proceedings of the IEEE, Vol. 78, No. 2, pp 264-300.
[26] Brayton R K, et al., (1987), “MIS: A Multiple-Level Logic Optimization System”, IEEE Transactions on Computer Aided Design, Vol. 6, No. 6, pp 1062-1081.
[27] Rajeev Murgai, et al.,(1995), “Decomposition of logic functions for minimum transition activity”,
EDTC '95 Proceedings of the European conference on Design and Test, pp 404-410.
[28] Cong, J. & Xu, D, (1995), “ Exploiting signal flow and logic dependency in standard cell placement”,
Proceedings of the Asian and South Pacific Design Automation Conference, pp 399 – 404.
[29] Fujita, M. & Murgai, R, (1997), “Delay estimation and optimization of logic circuits: a survey”,
Proceedings of Asia and South Pacific Design Automation Conference, Chiba,Japan, pp 25 – 30.
[30] Andrew Caldwell, et al., (1999), “Generic Hypergraph Formats, rev. 1.1”, from
http://vlsicad.ucsd.edu/GSRC/bookshelf/Slots/Fundamental/HGraph/HGraph1.1.html.
[31] Saurabh Adya & Igor Markov, (2005), “Executable Placement Utilities” from
http://vlsicad.eecs.umich.edu/BK/PlaceUtils/bin.
[32] Saurabh N. Adya, et al., (2003), "Benchmarking For Large-scale Placement and Beyond",
International Symposium on Physical Design (ISPD), Monterey, CA, pp. 95-103.
[33] Saurabh Adya and Igor Markov, (2003), “On Whitespace and Stability in Mixed-size Placement and
Physical Synthesis”, International Conference on Computer Aided Design (ICCAD), San Jose, pp 311-
318.
[34] Saurabh Adya and Igor Markov, (2002), "Consistent Placement of Macro-Blocks using Floorplanning
and Standard-Cell Placement", International Symposium of Physical Design (ISPD), San Diego, pp.12-
17.
[35] Sentovich, E.M., et al., (1992), “SIS: A System for Sequential Circuit Synthesis”, Memorandum No.
UCB/ERL M92/41, Electronics Research Laboratory, University of California, Berkeley, CA 94720.
[36] Jason Cong, et al, (2007), “UCLA Optimality Study Project”, from http://cadlab.cs.ucla.edu/~pubbench/.
[37] C. Chang, J. Cong, et al., (2004), "Optimality and Scalability Study of Existing Placement Algorithms",
IEEE Transactions on Computer-Aided Design, Vol.23, No.4, pp.537 – 549.
AUTHORS
K. Santeppa obtained B.Tech. in Electronics and Communication engineering from
J N T U and M Sc (Engg) in Computer Science and Automation (CSA) from Indian
Institute of Science, Bangalore. He worked in Vikram Sarabhai Space Centre, Trivandrum
from 1982 to 1988 in the field of microprocessor based real-time computer design. From
1988 onwards, he has been working in the field of VLSI design at ANURAG, Hyderabad.
He received DRDO Technology Award in 1996, National Science Day Award in 2001 and
“Scientist of the Year Award" in 2002. He is a Fellow of IETE and a Member of IMAPS
and ASI. A patent has been granted to him for the invention of a floating point processor device for high speed
floating point arithmetic operations in April 2002.
K.S.R. Krishna Prasad received B.Sc degree from Andhra University, DMIT in electronics
from MIT, M.Tech. in Electronics and Instrumentation from Regional Engineering College,
Warangal and PhD from Indian Institute of Technology, Bombay. He is currently working as
Professor at Electronics and Communication Engineering Department, National Institute of
Technology, Warangal. His research interests include analog and mixed signal IC design,
biomedical signal processing and image processing.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
109 Vol. 1, Issue 5, pp. 109-117
FUNCTIONAL COVERAGE ANALYSIS OF OVM BASED
VERIFICATION OF H.264 CAVLD SLICE HEADER DECODER
Akhilesh Kumar and Chandan Kumar
Department of E&C Engineering, NIT Jamshedpur, Jharkhand, India
ABSTRACT
Commercial chip design verification is a complex activity involving many abstraction levels (such as
architectural, register transfer, gate, switch, circuit, fabrication), many different aspects of design (such as
timing, speed, functional, power, reliability and manufacturability) and many different design styles (such as
ASIC, full custom, semi-custom, memory, cores, and asynchronous). In this paper, functional coverage analysis
of verification of RTL (Register Transfer Level) design of H.264 CAVLD (context-based adaptive variable
length decoding) slice header decoder using SystemVerilog implementation of OVM (open verification
methodology) is presented. The methodology used for verification is OVM which has gathered very positive
press coverage, including awards from magazines and industry organizations. There is no doubt that the OVM
is one of the biggest stories in recent EDA (electronic design automation) history. The SystemVerilog language is at the heart of the OVM; it inherited features from Verilog HDL, VHDL, C and C++ and was adopted by the IEEE as a hardware description and verification language in 2005. The verification environment developed in OVM
provides multiple levels of reuse, both within projects and between projects. Emphasis is put onto the actual
usage of the verification components and functional coverage. The whole verification is done using
SystemVerilog hardware description and verification language. We are using QuestaSim 6.6b for simulation.
KEYWORDS: Functional coverage analysis, RTL (Register Transfer Level) design, CAVLD (context-based
adaptive variable length decoding), slice header decoder, OVM (open verification methodology),
SystemVerilog, EDA (electronic design automation).
I. INTRODUCTION
Verification is a process that proceeds in parallel with design creation. The goal of verification is not only to find bugs but to prove or disprove the correctness of a system with respect to its strict specifications [2].
Design verification is an essential step in the development of any product. Today, designs can no
longer be sufficiently verified by ad-hoc testing and monitoring methodologies. More and more
designs incorporate complex functionalities, employ reusable design components, and fully utilize the
multi-million gate counts offered by chip vendors. To test these complex systems, too much time is
spent constructing tests as design deficiencies are discovered, requiring test benches to be rewritten or
modified, as the previous test bench code did not address the newly discovered complexity. This
process of working through the bugs causes defects in the test benches themselves. Such difficulties
occur because there is no effective way of specifying what is to be exercised and verified against the
intended functionality [11]. Verification of RTL design using SystemVerilog implementation of OVM
dramatically improves the efficiency of verifying correct behavior, detecting bugs and fixing bugs
throughout the design process. It raises the level of verification from RTL and signal level to a level
where users can develop tests and debug their designs closer to design specifications. It encompasses
and facilitates abstractions such as transactions and properties. Consequently, design functions are
exercised efficiently (with minimum required time) and monitored effectively by detecting hard-to-
find bugs [7]. This technique addresses current needs of reducing manpower and time and the
anticipated complications of designing and verifying complex systems in the future.
1.1. Importance of Verification
• When designers verify their own designs, they verify their own interpretation of the design, not the specification.
• Verification consumes 50% to 70% of the effort of the design cycle.
• Projects typically need twice as many verification engineers as RTL designers.
• A bug found in the customer’s environment can cost hundreds of millions.
1.2. Cost of the Bugs
Bugs found early in the design have little cost. Finding a bug at chip/system level has moderate cost: it requires more debug and isolation time, and it could require a new algorithm, which could affect the schedule and cause board rework. Finding a bug in System Test (on the test floor) requires a re-spin of the chip. Finding a bug after customer delivery can cost millions.
Figure 1. Cost of bugs over time.
II. SLICE HEADER
2.1. THE H.264 SYNTAX
H.264 provides a clearly defined format or syntax for representing compressed video and related
information [1]. Fig. 2 shows an overview of the structure of the H.264 syntax. At the top level, an
H.264 sequence consists of a series of ‘packets’ or Network Adaptation Layer Units, NAL Units or
NALUs. These can include parameter sets containing key parameters that are used by the decoder to
correctly decode the video data and slices, which are coded video frames or parts of video frames.
Figure 2. H.264 Syntax [1]
2.2. SLICE
A slice represents all or part of a coded video frame and consists of a number of coded macro blocks,
each containing compressed data corresponding to a 16 × 16 block of displayed pixels in a video
frame.
2.3. SLICE HEADER
Supplemental data placed at the beginning of slice is Slice Header.
III. SLICE HEADER DECODER
An H.264 video decoder carries out the complementary processes of decoding, inverse transform and
reconstruction to produce a decoded video sequence [1].
Slice header decoder is a part of H.264 video decoder. Slice header decoder module takes the input bit
stream from Bit stream parser module.
Figure 3. H.264 video coding and decoding process [1]
The slice header decoder module parses the slice header RBSP (raw byte sequence payload) bit stream to generate the first MB in slice, the slice type, etc., and sends the decoded syntax elements to the controller.
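The leading slice-header fields (first_mb_in_slice, slice_type, pic_parameter_set_id) are unsigned Exp-Golomb ue(v) codes in the H.264 syntax. The sketch below models that parsing in Python for illustration only; the class and variable names are ours, and this is not the RTL design under verification.

```python
# Minimal ue(v) (unsigned Exp-Golomb) decoder: count leading zero bits,
# skip the '1', then read that many suffix bits.
class BitReader:
    def __init__(self, bits):      # bits: string of '0'/'1'
        self.bits, self.pos = bits, 0
    def read(self, n):
        v = int(self.bits[self.pos:self.pos + n] or "0", 2)
        self.pos += n
        return v
    def ue(self):
        zeros = 0
        while self.read(1) == 0:
            zeros += 1
        return (1 << zeros) - 1 + self.read(zeros)

# "1" decodes to 0, "010" decodes to 1:
# first_mb_in_slice = 0, slice_type = 1 (a B slice)
r = BitReader("1" + "010")
print(r.ue(), r.ue())  # 0 1
```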
IV. CAVLD
Context-adaptive variable-length decoding (CAVLD) is a form of entropy decoding used in
H.264/MPEG-4 AVC video decoding. Like almost all entropy decoders, it is an inherently lossless decompression technique.
V. SYSTEM VERILOG
SystemVerilog is a combined Hardware Description Language (HDL) and Hardware Verification Language (HVL) based on extensions to Verilog HDL. It became an official IEEE standard in 2005 as an extension of IEEE Verilog 2001, with features inherited from Verilog HDL, VHDL, C and C++. One of its most important features is that it is an object-oriented language [4]. SystemVerilog is rapidly being accepted as the next-generation HDL for system design, verification and synthesis, and as a single unified design and verification language it has garnered tremendous industry interest and support [9].
VI. OVM (OPEN VERIFICATION METHODOLOGY)
The Open Verification Methodology (OVM) is a documented methodology with a supporting
building-block library for the verification of semiconductor chip designs [8].
The OVM was announced in 2007 by Cadence Design Systems and Mentor Graphics as a joint effort
to provide a common methodology for SystemVerilog verification. After several months of extensive
validation by early users and partners, the OVM is now available to everyone. “Everyone” means just that: anyone, even an EDA competitor, can go to the OVM World site and download the library, documentation, and usage examples for the methodology [7].
OVM provides the best framework to achieve coverage-driven verification (CDV). CDV combines
automatic test generation, self-checking testbenches, and coverage metrics to significantly reduce the
time spent verifying a design [2]. The purpose of CDV is to:
• Eliminate the effort and time spent creating hundreds of tests.
• Ensure thorough verification using up-front goal setting.
• Receive early error notifications and deploy run-time checking and error analysis to simplify debugging.
VII. OVM TESTBENCH
A testbench is a virtual environment used to verify the correctness of a design. The OVM testbench is
composed of reusable verification environments called OVM verification components (OVCs). An
OVC is an encapsulated, ready-to-use, configurable verification environment for an interface
protocol, a design submodule, or a full system. Each OVC follows a consistent architecture and
consists of a complete set of elements for stimulating, checking, and collecting coverage information
for a specific protocol or design.
Figure 4. Testbench [2]
VIII. DEVELOPMENT OF OVM VERIFICATION COMPONENTS
SystemVerilog OVM Class Library:
Figure 5. OVM Class Library [3]
The SystemVerilog OVM Class Library provides all the building blocks needed to quickly develop well-constructed, reusable verification components and test environments [3]. The library consists of base classes, utilities, and macros, and verification components are developed by deriving from these. The OVM class library lets users create sequential constrained-random stimulus, collect and analyze the resulting functional coverage information, and include assertions as members of configurable testbench environments.
The OVM Verification Components (OVCs), written in SystemVerilog, are structured as follows [4]:
— Interface to the design-under-test
— Design-under-test (or DUT)
— Verification environment (or testbench)
— Transaction (Data Item)
— Sequencer (stimulus generator)
— Driver
— Top-level of verification environment
— Instantiation of sequencer
— Instantiation of driver
— Response checking
— Monitor
— Scoreboard
— Top-level module
— Instantiation of interface
— Instantiation of design-under-test
— Test, which instantiates the verification environment
— Process to run the test
Interface: An interface is a bundle of wires used for communication between the DUT (Design Under Test) and the verification environment (testbench). The clock can be part of the interface or a separate port [2].
Figure 6. Interface [2]
Here, all the Slice Header Decoder signals are mentioned along with their correct data types. A
modport is defined showing connections with respect to the verification environment.
Design Under Test (DUT): The DUT is the working model of the Slice Header Decoder, written in a hardware description language, which is to be tested and verified.
Transaction (Data Item): Data items represent the input to the DUT. The sequencer creates random transactions, which are retrieved by the driver and used to stimulate the pins of the DUT. Since a sequencer is used, the transaction class is derived from the ovm_sequence_item class, a subclass of ovm_transaction. By intelligently randomizing data item fields using SystemVerilog constraints, one can create a large number of meaningful tests and maximize coverage.
Sequencer: A sequencer is an advanced stimulus generator that controls the items provided to the driver for execution. By default, a sequencer behaves like a simple stimulus generator and returns a random data item upon request from the driver. It allows constraints to be added to the data item class to control the distribution of randomized values.
Driver: The DUT’s inputs are driven by the driver, which executes single commands such as a bus read or write. A typical driver repeatedly receives a data item and drives it onto the DUT by sampling and driving the DUT signals.
Monitor: The DUT’s outputs drive the monitor, which takes signal transitions and groups them into commands. A monitor is a passive entity that samples DUT signals but never drives them; it collects coverage information and performs checking.
Agent: An agent encapsulates a driver, sequencer, and monitor. Agents can emulate and verify DUT devices, and an OVC can contain more than one agent. Some agents (for example, master or transmit agents) initiate transactions to the DUT, while others (slave or receive agents) react to transaction requests.
Scoreboard: The scoreboard is a crucial element of a self-checking environment. Typically, it verifies proper operation of the design at a functional level.
Environment: The environment (env) is the top-level component of the OVC. The environment class
(ovm_env) is architected to provide a flexible, reusable, and extendable verification component. The
main function of the environment class is to model behaviour by generating constrained-random
traffic, monitoring DUT responses, checking the validity of the protocol activity, and collecting
coverage.
Test: The test configures the verification environment to apply a specific stimulus to the DUT. It creates and invokes an instance of the environment.
Top-level module: A single top-level module connects the DUT to the verification environment through the interface instance; the global clock is also generated here. run_test is used to run the verification process, and global_stop_request is used to stop it after a specified period of time, a number of iterations, or a threshold value of coverage.
IX. FUNCTIONAL COVERAGE ANALYSIS
9.1. Coverage
As designs become more complex, the only effective way to verify them thoroughly is with constrained-random testing (CRT). This approach avoids the tedium of writing individual directed tests, one for each feature in the design. If the testbench takes a random walk through the space of all design states, progress can be gauged using coverage.
Coverage is a generic term for measuring progress towards complete design verification. Coverage tools gather information during a simulation and then post-process it to produce a coverage report. This report is used to look for coverage holes, and existing tests are modified or new ones created to fill them. The iterative process continues until the desired coverage level is reached.
Figure 7. Coverage convergence [2]
9.2. Functional Coverage
Functional coverage is a measure of which design features have been exercised by the tests.
Functional coverage is tied to the design intent and is sometimes called “specification coverage”. One
can run the same random testbench over and over, simply by changing the random seed, to generate
new stimulus. Each individual simulation generates a database of functional coverage information. By
merging all this information together, overall progress can be measured using functional coverage.
Functional coverage information is only valid for a successful simulation. When a simulation fails
because of a design bug, the coverage information must be discarded. The coverage data measures
how many items in the verification plan are complete, and this plan is based on the design
specification. If the design does not match the specification, the coverage data is useless. Striving for
100% functional coverage forces one to think about what needs to be observed and how to direct the
design into those states.
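The per-seed merging described above can be sketched in Python; the run data and bin names here are made-up illustrations, not output from the actual testbench:

```python
# Sketch: merging per-seed functional-coverage databases (hypothetical data).
# Each simulation run (one random seed) yields a map: coverage bin -> hit count.
# Bins from failed runs are discarded before merging, as described above.

def merge_coverage(runs):
    """Merge bin hit counts from all passing runs into one database."""
    merged = {}
    for passed, bins in runs:
        if not passed:            # coverage from a failing simulation is invalid
            continue
        for name, hits in bins.items():
            merged[name] = merged.get(name, 0) + hits
    return merged

runs = [
    (True,  {"hdr_min": 3, "hdr_max": 0}),
    (False, {"hdr_min": 9, "hdr_max": 9}),   # failed run: coverage discarded
    (True,  {"hdr_min": 1, "hdr_max": 2}),
]
merged = merge_coverage(runs)
coverage = 100.0 * sum(1 for h in merged.values() if h > 0) / len(merged)
```

Individually, neither passing run hits every bin, but the merged database shows full coverage of both bins.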
9.3. Cover Points
A cover point records the observed values of a single variable or expression.
9.4. BINS
Bins are the basic units of measurement for functional coverage. When one specifies a variable
or expression in a cover point, SystemVerilog creates a number of "bins" to record how many
times each value has been seen. For a 3-bit variable, SystemVerilog creates a maximum of
eight bins.
9.5. Cover Group
Multiple cover points that are sampled at the same time (such as when a transaction
completes) are placed together in a cover group.
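As a language-neutral illustration (a Python model of the idea, not SystemVerilog syntax), a cover point over a 3-bit variable and its automatically created bins can be mimicked like this:

```python
# Sketch: modeling a cover point's auto-bins. SystemVerilog would create
# 2**width bins automatically; here we count hits per value and report the
# fraction of bins covered.

class CoverPoint:
    def __init__(self, width):
        self.bins = {v: 0 for v in range(2 ** width)}  # one bin per value

    def sample(self, value):
        self.bins[value] += 1

    def coverage(self):
        hit = sum(1 for n in self.bins.values() if n > 0)
        return 100.0 * hit / len(self.bins)

cp = CoverPoint(width=3)          # 2**3 = 8 bins, as in the text above
for v in [0, 1, 1, 7, 4]:         # sampled values (illustrative)
    cp.sample(v)
```

After these samples, four of the eight bins have been hit, so the cover point reports 50% coverage.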
X. SIMULATION RESULT OF OVM BASED VERIFICATION OF H.264
CAVLD SLICE HEADER DECODER
We use QuestaSim 6.6b for simulation. The sequencer produces sequences of data (transactions)
which are sent to the DUT through the driver, which converts the transactions into pin-level activity.
The monitor keeps track of the stimulus applied to the DUT and its response, and records the
coverage achieved by the test. Figure 8 shows the simulation result of coverage with cover groups.
The verification of the Slice Header Decoder uses thirty cover groups in total. A cover group
contains a number of cover points, and a cover point contains a number of bins. We consider the
cover group CV_CAVLD_SH_09.
Figure 8. Simulation result of coverage
Figure 9. Simulation result of coverage with coverpoints and bins
Figure 9 shows the cover point (FI_SH_09) and the bins inside the cover group CV_CAVLD_SH_09.
The full coverage report is too large to include in this paper; we include only the part related to the
cover group CV_CAVLD_SH_09.
Coverage report:
COVERGROUP COVERAGE:
---------------------------------------------------------------------------------
Covergroup Metric Goal/ Status
At Least
---------------------------------------------------------------------------------
TYPE /CV_CAVLD_SH_09 100.0% 100 Covered
Coverpoint CV_CAVLD_SH_09::FI_SH_09 100.0% 100 Covered
covered/total bins: 3 3
missing/total bins: 0 3
bin pic_order_cnt_lsb_min 263182 1 Covered
bin pic_order_cnt_lsb_max 3811 1 Covered
bin pic_order_cnt_lsb_between 36253 1 Covered
The number (Metric) shown for each bin represents how many times that bin was hit during
simulation.
XI. CONCLUSION
We presented the verification of an H.264 CAVLD Slice Header Decoder using a SystemVerilog
implementation of OVM. We analyzed the functional coverage with cover groups, cover points, and
bins, and achieved 100 percent functional coverage for the Slice Header Decoder module. Since
coverage is 100%, the RTL design meets the desired specification of the slice header decoder.
REFERENCES
[1] 'The H.264 Advanced Video Compression Standard', Second Edition, by Iain E. Richardson.
[2] 'SystemVerilog for Verification: A Guide to Learning the Testbench Language Features' by Chris Spear, Springer, 2006.
[3] OVM User Guide, Version 2.1.1, March 2010.
[4] http://www.doulos.com/knowhow.
[5] http://www.ovmworld.org.
[6] H.264: International Telecommunication Union, Recommendation ITU-T H.264: Advanced Video Coding for Generic Audiovisual Services, ITU-T, 2003.
[7] 'Open Verification Methodology: Fulfilling the Promise of SystemVerilog' by Thomas L. Anderson, Product Marketing Director, Cadence Design Systems, Inc.
[8] O. Cadenas and E. Todorovich, 'Experiences applying OVM 2.0 to an 8B/10B RTL design', IEEE 5th Southern Conference on Programmable Logic, 2009, pp. 1-8.
[9] P. D. Mulani, 'SoC Level Verification Using SystemVerilog', IEEE 2nd International Conference on Emerging Trends in Engineering and Technology (ICETET), 2009, pp. 378-380.
[10] G. Gennari, D. Bagni, A. Borneo and L. Pezzoni, 'Slice header reconstruction for H.264/AVC robust decoders', IEEE 7th Workshop on Multimedia Signal Processing, 2005, pp. 1-4.
[11] C. Pixley et al., 'Commercial design verification: methodology and tools', IEEE International Conference on Test Proceedings, 1996, pp. 839-848.
Authors
Akhilesh Kumar received his B.Tech degree from Bhagalpur University, Bihar, India in 1986
and his M.Tech degree from Ranchi, Bihar, India in 1993. He has been working in teaching and
research since 1989. He is now Head of the Department of Electronics and Communication
Engineering at N.I.T. Jamshedpur, Jharkhand, India. His field of research interest is digital
circuit design.
Chandan Kumar received his B.E. degree from Visveswarya Technological University,
Belgaum, Karnataka, India in 2009. He is currently pursuing his M.Tech project work under the
guidance of Prof. Akhilesh Kumar in the Department of Electronics & Communication
Engineering, N.I.T. Jamshedpur. His fields of interest are ASIC design & verification and image
processing.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
118 Vol. 1, Issue 5, pp. 118-125
COMPARISON BETWEEN GRAPH BASED DOCUMENT
SUMMARIZATION METHOD AND CLUSTERING METHOD
Prashant D. Joshi1, S. G. Joshi2, M. S. Bewoor3, S. H. Patil4
1, 3, 4Department of Computer Engineering, Bharati Vidyapeeth University, CoE, Pune, India
2Department of Computer Engineering, A.I.S.S.M.S. CoE, Pune, India
ABSTRACT
Document summarization and clustering are two techniques that can be used to access text files on a computer
within a short period of time. In the graph-based document summarization method, a document graph of each
text file is generated: each paragraph is treated as an individual node, and node scores and edge scores are
calculated using mathematical formulas. An input query is applied to the document, and a summary of the text
file is generated accordingly. The ROCK clustering algorithm can also be used for summarization. Here each
paragraph is considered an individual cluster, the link score between two paragraphs is calculated, and on
that basis two clusters are merged. The input query is applied to the merged clusters as well as the individual
clusters, and a summary is generated accordingly. Taking various results into consideration, we conclude that
the ROCK algorithm requires less time than the graph-based method for document summarization. The ROCK
clustering algorithm can be used on a standalone machine, a LAN, or the Internet for retrieving text
documents with a small retrieval time.
KEYWORDS: Input Query, Document Summarization, Document Graph, Clustering, Link, Robust
Hierarchical Clustering Algorithm
I. INTRODUCTION
Today every human with basic computer knowledge is connected with the world through the Internet.
The WWW provides features like communication, chatting, and information retrieval. A huge amount
of data is available on a large number of servers in the form of files such as text files and document
files. Text summarization is the process of identifying the most salient information in a document or
text file. Earlier, query summarization was done through the bag-of-words (BOW) approach, in which
both the query and the sentences were represented as word vectors. This approach has a drawback:
it considers only lexical elements (words) in the documents and ignores semantic relations among
sentences. [6]
The graph method is very important in document summarization, providing an effective way to study
local, system-level properties at the component level. The following examples show the importance
of graphs. In biological networks, a protein-interaction network is represented by a graph with
proteins as vertices; an edge exists between two vertices if the proteins are known to interact, based
on two-hybrid analysis and other biological experiments [3]. In a stock-market graph, vertices
represent stocks, and an edge between two vertices exists if they are positively correlated above some
threshold value [3]. In Internet applications, an Internet graph has vertices representing IP addresses,
while a web graph has vertices representing websites [3]. In this paper we compare the ROCK
clustering algorithm with a graph-based document summarization algorithm for generating a
summary from a text file.
Even though there is an increasing interest in the use of clustering methods in pattern recognition
[Anderberg 1973], image processing [Jain and Flynn 1996] and information retrieval [Rasmussen
1992; Salton 1991], clustering has a rich history in other disciplines [Jain and Dubes 1988] such as
biology, psychiatry, psychology, archaeology, geology, geography, and marketing. [4]
Currently, clustering algorithms can be categorized into partition-based, hierarchical, density-based,
grid-based, and model-based [7]. In clustering, related documents should contain the same or similar
terms, so one can expect a good document cluster to contain a large number of matching terms. In
reality, when a document cluster is large there is no single term that occurs in all documents of the
cluster; in contrast, when a cluster is small one can expect certain terms to occur in all its documents
[8]. Clustering and data summarization are two techniques present in data mining. Data mining is the
umbrella term for all methods and techniques that allow very large data sets to be analyzed in order to
extract and discover previously unknown structures and relations out of such huge heaps of detail.
This information is filtered, prepared, and classified so that it becomes a valuable aid for decisions
and strategies. [5]
II. RELATED WORK FOR DOCUMENT GRAPH METHOD
2.1 Document Summarization
Query-oriented summarization is primarily concerned with synthesizing an informative and well-
organized summary from a collection of text documents by applying an input query. Today most
successful multi-document summarization systems follow the extractive summarization framework:
they first rank all the sentences in the original document set and then select the most salient sentences
to compose summaries with a good coverage of the concepts. To create more concise and fluent
summaries, some intensive post-processing approaches are also applied to the extracted sentences.
Denote the input query as q and the collection of documents as D. The goal of query summarization
(QS) is to generate a summary that best meets the information needs expressed by q. To do this, a QS
system generally takes two steps: first, stop words are removed from the documents as well as from
the input query; second, sentences are selected until the target length of the summary is reached. To
build the document graph, node weights and edge weights must be known.
Nodes correspond to paragraphs. Node weights are calculated after applying the input query; the
following formula is used to calculate the node score. [1]
NScore(v) = Σ_{w ∈ q ∩ t(v)} ln( (N − df + 0.5) / (df + 0.5) ) · ( (k1 + 1)·tf / (k1·((1 − b) + b·dl/avdl) + tf) ) · ( (k3 + 1)·qtf / (k3 + qtf) )   ….(1) [1]
where N is the total number of text files present on the system,
df is the number of text files that contain the input term,
tf is the count of the input keyword in the text file,
qtf is the number of times the keyword occurs in the input query,
k1, b, k3 are constants; here k1 is assumed to be 1, b = 0.5, k3 = 2,
dl is the total text file length, and avdl is the average document length, assumed to be 120.
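A minimal Python sketch of the node score in Eq. (1), assuming the Okapi-style form implied by the variable definitions and using the constants stated above (k1 = 1, b = 0.5, k3 = 2, avdl = 120); the df/tf/qtf/dl values in the example call are hypothetical illustration data:

```python
import math

# Sketch of the node score of Eq. (1). Each tuple describes one query word
# present in the node: (df, tf, qtf, dl) as defined in the text above.

def node_score(terms, N, avdl=120.0, k1=1.0, b=0.5, k3=2.0):
    score = 0.0
    for df, tf, qtf, dl in terms:
        idf = math.log((N - df + 0.5) / (df + 0.5))      # rarity of the term
        tf_part = ((k1 + 1) * tf) / (k1 * ((1 - b) + b * dl / avdl) + tf)
        qtf_part = ((k3 + 1) * qtf) / (k3 + qtf)
        score += idf * tf_part * qtf_part
    return score

# One query word: df = 5, tf = 3, qtf = 1, dl = 100; N = 57 files (Section VI).
s = node_score([(5, 3, 1, 100.0)], N=57)
```

Rare terms (small df) and frequent in-node occurrences (large tf) raise the score, while long files (large dl) lower it.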
2.2 Problem Definition for Document Summarization using a Graph-based Algorithm
Let there be n documents d1, d2, …, dn. The size of a document, size(di), is its total number of words.
The term frequency tf(d, w) is the number of occurrences of word w in document d. The inverse
document frequency idf(w) is the inverse of the number of documents that contain word w. A
keyword query is a set of words Q(w1, w2, …, wn).
The document graph G(V, E) of a document d is defined as follows:
• d is split into a set of non-overlapping text fragments t(v), each corresponding to a node v ∈ V.
• An edge e(u, v) ∈ E is added between nodes u, v if there is an association between t(u) and t(v) in d.
Two nodes can be connected by an edge; the edge weight is calculated by the following formula,
where t(u) is the first paragraph and t(v) is the second paragraph. In this way the edge weights
between all paragraphs are calculated and stored in the database. size(t(u)) is the number of keywords
in the first paragraph and size(t(v)) is the number of keywords in the second paragraph. Edge weights
can be calculated before the input query is applied, because the text files are already present on the
system. [1]
EScore(e) = ( Σ_{w ∈ t(u) ∩ t(v)} tf(t(u), w) · tf(t(v), w) · idf(w) ) / ( size(t(u)) + size(t(v)) )   …(2) [1]
w ∈ t(u) ∩ t(v) denotes a word present in both paragraphs; the count of common keywords is
assigned through w. In this fashion the edge scores of all text files are calculated and permanently
stored in the database. When a new file is added, this module is run by the administrator to store its
edge weights in the database.
The summary module uses the concept of a spanning tree on the document graph, because multiple
nodes may contain the input query and the question is which nodes should be selected. Different
combinations from the graph are identified, and their scores are generated using the following
formula.
Score(T) = a · Σ_{v ∈ T} NScore(v) + b · Σ_{e ∈ T} EScore(e)   ….(3) [1]
Equation (3) calculates the score of a spanning tree in the document graph [1]. From the spanning-tree
table, the spanning tree with the minimum score is selected and its paragraphs are displayed as the
summary.
III. CLUSTERING
Clustering can be considered the most important unsupervised learning problem, and various
techniques can be applied for forming the groups. A loose definition of clustering could be "the
process of organizing objects into groups whose members are similar with respect to a certain
property". A common similarity criterion is distance: two or more objects belong to the same cluster
if they are "close" according to a given distance (in this case geometrical distance). This is called
distance-based clustering. [4]
Another kind of clustering is conceptual clustering: two or more objects belong to the same cluster if
the cluster defines a concept common to all those objects. In other words, objects are grouped
according to their fit to descriptive concepts, not according to simple similarity measures.
3.1 Example:
The clustering concept is commonly used in a library, where we have books on different subjects.
These books are arranged in a proper structure to reduce access time: books on operating systems are
kept on the operating-systems shelf, and shelves are assigned numbers so that books can be managed
efficiently. Likewise, the books of all subjects are arranged in cluster form.
Clustering algorithms can be applied in many fields, for example:
• City planning: houses are grouped by considering house type, value, and geographical location;
• Earthquake studies: clustering is applied when identifying danger zones;
• World Wide Web: clustering is applied for document classification and document summary
generation;
• Marketing: finding groups of customers who purchase similar things within a huge amount of data;
• Biology: classification of plants and animals given their features;
• Libraries: organizing books in an efficient order to reduce access delay;
• Insurance: identifying groups of motor insurance policy holders with a high average claim cost, and
identifying frauds. [4]
Problem definition: Assume n text documents, each of size p paragraphs. Generate the summary from
the text files when the input query q is applied. This paper follows the system architecture below for
implementing text-file summarization using both the clustering and the graph-based method. Fig. 1.1
shows the system architecture of this implementation.
IV. SYSTEM ARCHITECTURE
This system is developed in a network environment. Its main goal is to retrieve the relevant text file
from the server without going through all text files; the user's time is saved by reading only the
summary of the text file relevant to the input query. Here the user's input query is compared with all
text files, and the text file most relevant to the input query is generated as an output on the user
machine. The user can use the graphical summarization method or the clustering algorithm for
generating the summary.
Fig 1.1 System architecture for the document summarization and clustering method
V. ROCK ALGORITHM FOR CLUSTERING
procedure cluster(S, k)
begin
1. link := compute_links(S)
2. for each s ∈ S do
3.   q[s] := build_local_heap(link, s)
4. Q := build_global_heap(S, q)
5. while size(Q) > k do {
6.   u := extract_max(Q)
7.   v := max(q[u])
8.   delete(Q, v)
9.   w := merge(u, v)
10.  for each x ∈ q[u] ∪ q[v] do {
11.    link[x, w] := link[x, u] + link[x, v]
12.    delete(q[x], u); delete(q[x], v)
13.    insert(q[x], w, g(x, w)); insert(q[w], x, g(x, w))
14.    update(Q, x, q[x])
15.  }
16.  insert(Q, w, q[w])
17.  deallocate(q[u]); deallocate(q[v])
18. }
end [2]
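A simplified Python sketch of the loop above, assuming string cluster names; instead of the paper's local and global heaps and its goodness measure g(u, v), it rescans raw link counts each iteration, which is slower but keeps the merge bookkeeping of lines 9-11 visible:

```python
# Simplified sketch of cluster(S, k): repeatedly merge the pair of clusters
# with the most links, propagating link[x, w] = link[x, u] + link[x, v].

def rock_merge(links, clusters, k):
    """links: dict[frozenset({a, b})] -> link count; merge until k clusters."""
    clusters = set(clusters)
    while len(clusters) > k:
        best = max(
            ((u, v) for u in clusters for v in clusters if u < v),
            key=lambda p: links.get(frozenset(p), 0),
        )
        if links.get(frozenset(best), 0) == 0:
            break                                # nothing left worth merging
        u, v = best
        w = u + "+" + v                          # name of the merged cluster
        for x in clusters - {u, v}:              # lines 10-11 of the procedure
            links[frozenset({x, w})] = (
                links.get(frozenset({x, u}), 0) + links.get(frozenset({x, v}), 0)
            )
        clusters -= {u, v}
        clusters.add(w)
    return clusters

# Link counts matching the paper example below (P1-P2: 2, P1-P4: 1, P2-P4: 1).
links = {frozenset({"P1", "P2"}): 2, frozenset({"P1", "P4"}): 1,
         frozenset({"P2", "P4"}): 1}
out = rock_merge(links, ["P1", "P2", "P3", "P4"], k=3)
```

With these counts, P1 and P2 (the best-linked pair) are merged first, and the merged cluster inherits links to P4 from both parents.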
5.1 Link Computation
For calculating the link score, the following algorithm is used [2]:
procedure compute_links(S)
begin
1. compute nbrlist[i] for every point i in S
2. set link[i, j] to zero for all i, j
3. for i := 1 to n do {
4.   N := nbrlist[i]
5.   for j := 1 to |N| − 1 do
6.     for l := j + 1 to |N| do
7.       link[N[j], N[l]] := link[N[j], N[l]] + 1
8. }
end [2]
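The same procedure can be sketched in Python (the neighbour lists below are toy data assumed for illustration):

```python
# Sketch of compute_links: link[{i, j}] counts the common neighbours of
# points i and j, accumulated by walking each point's neighbour list.

from collections import defaultdict

def compute_links(nbrlist):
    link = defaultdict(int)
    for i, N in nbrlist.items():       # each point i contributes its list
        for j in range(len(N) - 1):
            for l in range(j + 1, len(N)):
                link[frozenset({N[j], N[l]})] += 1   # share neighbour i
    return link

nbrlist = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B"],
    "D": ["A", "B"],
}
link = compute_links(nbrlist)
```

Here A and B gain one link through C and another through D, so their link count is 2, while the other pairs have a single common neighbour.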
The following example illustrates the clustering concept and how it is applied to a text file. Assume
we have a "brainchip" text file containing four paragraphs:
1. Brain chip offers hope for paralyzed.
2. A team of neuroscientists have successfully implanted a chip into the brain of a quadriplegic man,
allowing him to control a computer.
3. Since the insertion of the tiny device in June, the 25-year-old has been able to check email and play
computer games simply using thoughts. He can also turn lights on and off and control a television, all
while talking and moving his head.
4. The chip, called BrainGate, is being developed by Massachusetts-based neurotechnology company
Cyberkinetics, following research undertaken at Brown University, Rhode Island.
When the ROCK algorithm is applied to this text file, the following steps are performed and the result
is generated.
Count the number of paragraphs in the file and remove the stop words. Assume each paragraph is an
individual cluster; the file above contains 4 paragraphs, i.e. P1, P2, P3, P4. Starting with P1, compare
P1 with all remaining paragraphs and find the value of the link. The link score is calculated by
comparing the keywords of each paragraph, and the results are stored in an array.
Table 1.1 Keywords of each individual paragraph
Keywords of C1: Brain, Chip, Offers, Hope, paralyzed
Keywords of C2: Team, neuroscientists, successfully, implanted, chip, brain, quadriplegic, man, allowing, Control, computer
Keywords of C3: insertion, Tiny, device, June, 25-year, old, check, Email, play, computer, games, simply, thoughts, turn, Lights, control, television, talking, moving, head
Keywords of C4: Chip, BrainGate, Developed, Massachusetts_based, neurotechnology, Company, Cyberkinetics, Research, Undertaken, Brown, University, Rhode, Island
Table 1.2 Local heap, link results for P1-P4
Paragraphs   Link Result   Common words
P1,P2        02            Chip, brain
P1,P3        00            Nil
P1,P4        01            Chip
Table 1.3 Local heap, link results for P2-P4
Paragraphs   Link Result   Common words
P2,P3        02            Control, Computer
P2,P4        01            Chip
Table 1.4 Local heap, link results for P3-P4
Paragraphs   Link Result   Common words
P3,P4        00            Nil
From Table 1.2 it can easily be seen that the P1-P2 link score is the maximum, so P1 and P2 can be
merged into one new cluster. From Table 1.3 the P2-P3 link score is the maximum, i.e. 2, so P2 and
P3 can be merged into one new cluster. In Table 1.4 the link score of P3-P4 is zero, so there is no
need to form a cluster.
Now we have four clusters C1, C2, C3, C4, where C1 holds the merged keywords of P1-P2, C2 holds
the merged keywords of P2-P3, and C3 is the individual paragraph P3, which does not match any
other paragraph. Likewise C4 is paragraph P4, which has a single keyword in common with P1, but
the link score of P1-P4 is less than that of P1-P2. Here P4 is kept as an individual cluster because the
input query may be present in this paragraph as well; even though two paragraphs do not match, we
keep them as separate clusters. Now apply the query "Brain Chip Research" to the merged clusters as
well as the individual clusters.
The "brain chip" part of the input query is present in both C1 and C2. In C3 there is no keyword of
the input "Brain Chip Research". In cluster C4 the keywords 'Chip' and 'Research' are present. The
keyword count of the input query on a cluster, as well as the size of the cluster, is considered while
selecting the final cluster as the output.
Since the complete query "brain chip research" is not matched by any individual cluster, the
clustering algorithm is applied again on C1, C2, and C4; the link scores between C1-C2, C1-C4, and
C2-C4 are calculated and stored in the database.
Both C1-C4 and C2-C4 cover all parts of the input query. C1-C4 gives a keyword count of 18,
whereas C2-C4 gives a keyword count of 24. Since C1-C4 gives the smaller count, the summary is
generated from clusters C1 and C4.
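The link scores in Tables 1.2-1.4 can be reproduced with a short Python sketch over the keyword sets of Table 1.1 (keywords lowercased for comparison):

```python
# Reproducing the link scores of Tables 1.2-1.4 from the Table 1.1 keyword
# sets; link(a, b) is the number of keywords the two paragraphs share.

P1 = {"brain", "chip", "offers", "hope", "paralyzed"}
P2 = {"team", "neuroscientists", "successfully", "implanted", "chip", "brain",
      "quadriplegic", "man", "allowing", "control", "computer"}
P3 = {"insertion", "tiny", "device", "june", "25-year", "old", "check",
      "email", "play", "computer", "games", "simply", "thoughts", "turn",
      "lights", "control", "television", "talking", "moving", "head"}
P4 = {"chip", "braingate", "developed", "massachusetts_based",
      "neurotechnology", "company", "cyberkinetics", "research", "undertaken",
      "brown", "university", "rhode", "island"}

def link(a, b):
    return len(a & b)                 # count of common keywords
```

The set intersections give exactly the table entries: link(P1, P2) = 2, link(P1, P3) = 0, link(P1, P4) = 1, link(P2, P3) = 2, link(P2, P4) = 1, and link(P3, P4) = 0.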
VI. EXPERIMENTAL RESULTS
We have implemented the above system with the following hardware and software configuration.
Hardware: Pentium IV processor, 160 GB hard disk, 1 GB RAM.
Software: Windows XP, Visual Studio .NET 2008, SQL Server 2005.
We stored 57 text files in the database; the memory required for these text files was 122 KB.
Table 1.5 Clustering and graph-based algorithm results
Sr.No.  File Name  Input Query                  Rock Algo (time in ms)  Graph Algo (time in ms)
1       F1.txt     eukaryotic organisms         218                     234
2       F2.txt     woody plant                  249                     280
3       F4         Bollywood film music         296                     439
4       F6         personal computers           327                     592
5       F7         Taj Mahal monument           390                     852
6       F8         computer programs software   468                     1216
7       F13        wireless local area network  390                     758
8       F15        Mobile WiMAX                 780                     1060
9       F16        system development           670                     724
10      F22        remote procedure calls       546                     1482
The first query applied to the system is "eukaryotic organisms": the ROCK algorithm requires 218
milliseconds, whereas graph-based summarization requires 234 milliseconds. The second query is
"woody plant": here the ROCK algorithm requires 249 milliseconds, whereas the document-graph
algorithm requires 280 milliseconds. After observing the execution times of all input queries, we
conclude that the ROCK clustering algorithm performs better than graph-based document
summarization. However, when the input query is not present in any of the text files, graph-based
summarization gives output faster than the ROCK algorithm.
VII. CONCLUSION
In this paper we have compared the performance of the graph-based document summarization
method with the clustering method, and the performance of the ROCK algorithm is better than that of
the graph-based document summarization algorithm. This system can be applied on a standalone
machine, a LAN, or a WAN for retrieving text files within a short period of time. Further, this system
can be improved to work on DOC as well as PDF files, which contain a huge amount of textual data.
ACKNOWLEDGEMENT
I am thankful to Professor & H.O.D. Dr. S. H. Patil, Associate Professor M. S. Bewoor, and Prof.
Shweta Joshi for their continuous guidance. I also thank all my friends who directly or indirectly
supported me in completing this system.
REFERENCES
[1]. Ramakrishna Varadarajan, School of Computing and Information Sciences, Florida International
University, "A System for Query-Specific Document Summarization".
[2]. Sudipto Guha, Stanford University, Stanford, CA 94305; Rajeev Rastogi, Bell Laboratories, Murray
Hill, NJ 07974; Kyuseok Shim, Bell Laboratories, Murray Hill, NJ 07974, "A Robust Clustering
Algorithm for Categorical Attributes".
[3]. Balabhaskar Balasundaram, "A Cohesive Subgroup Model for Graph-based Text Mining", 4th IEEE
Conference on Automation Science and Engineering, Key Bridge Marriott, Washington DC, USA,
August 23-26, 2008.
[4]. A. K. Jain, Michigan State University; M. N. Murty, Indian Institute of Science; and P. J. Flynn, The
Ohio State University, a review on "Data Clustering".
[5]. Johannes Grabmeier, University of Applied Sciences, Deggendorf, Edlmaierstr 6+8, D-94469
Deggendorf, Germany; Andreas Rudolph, Universitat der Bundeswehr Munchen, Werner-Heisenberg-
Weg 39, D-85579 Neubiberg, Germany, "Techniques of Cluster Algorithms in Data Mining".
[6]. Prashant D. Joshi, M. S. Bewoor, S. H. Patil, "System for document summarization using graphs in
text mining", International Journal of Advances in Engineering & Technology (IJAET).
[7]. Bao-Zhi Qiu, Xiang-Li Li, and Jun-Yi Shen, "Grid-Based Clustering Algorithm Based on Intersecting
Partition and Density Estimation".
[8]. Jacob Kogan, Department of Mathematics and Statistics, and Marc Teboulle, "The Entropic Geometric
Means Algorithm: An Approach to Building Small Clusters for Large Text Datasets".
Authors
Prashant D. Joshi is currently working as an Assistant Professor and pursuing an M.Tech
degree at Bharati Vidyapeeth Deemed University College of Engineering, Pune. He has five
and a half years of teaching experience and six months of software development experience.
He completed his B.E. in Computer Science at Dr. Babasaheb Ambedkar University,
Aurangabad (MH), in 2005 with distinction. He has published 2 papers in national
conferences, 2 papers in international conferences, and 1 paper in an international journal. His
areas of interest are Data Mining, Programming Languages, and Microprocessors.
S. G. Joshi is currently working as a Lecturer at A.I.S.S.M.S. College of Engineering, Pune.
She has a total of 2 years of teaching experience in a polytechnic college. She completed her
B.E. in Computer Science Engineering at Swami Ramanand Teerth Marathwada University,
Nanded, with distinction. Her research interests are Data Mining, Operating Systems, and
Data Structures.
M. S. Bewoor is currently working as an Assistant Professor at Bharati Vidyapeeth Deemed
University College of Engineering, Pune. She has a total of 10 years of teaching experience in
engineering colleges and 3 years of industry experience. She is involved in research activity,
having presented 07 papers in national conferences and 08 in international conferences, and
published 07 papers in international journals. Her areas of interest are Data Structures, Data
Mining, and Artificial Intelligence.
S. H. Patil is working as a Professor and Head of the Computer Department at Bharati
Vidyapeeth Deemed University College of Engineering, Pune, with a total of 24 years of
teaching experience. He has published more than 100 papers in national conferences,
international conferences, national journals, and international journals. His areas of interest
are Operating Systems, Computer Networks, and Database Management Systems.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
126 Vol. 1, Issue 5, pp. 126-132
IMPROVED SEARCH ENGINE USING CLUSTER ONTOLOGY
Gauri Suresh Bhagat, Mrunal S. Bewoor, Suhas Patil
Computer Department, Bharati Vidyapeeth Deemed University College of Engineering, Pune,
Maharashtra, India
ABSTRACT
Search engines such as Google and Yahoo return a list of web pages that match the user query, and it is very
difficult for the user to find the relevant web pages. A cluster-based search engine can provide a significantly
more powerful model for searching a user query. Clustering is the process of forming groups (clusters) of
similar objects from a given set of inputs. When applied to web search results, clustering can be perceived as a
way of organising the results into a number of easily browsable thematic groups. In this paper, we propose a
new approach that applies background knowledge during pre-processing in order to improve clustering results
and allow for selection between results. We preprocess our input data by applying ontology-based heuristics
for feature selection and feature aggregation. Inexperienced users, who may have difficulty formulating a
precise query, can thus be helped in identifying the actual information of interest. Cluster labels are readable
and unambiguous descriptions of the thematic groups; they provide the users with an overview of the topics
covered in the results and help them identify the specific group of documents they were looking for.
KEYWORDS: Cluster, stemming, stop words, cluster label induction, frequent phrase extraction, cluster
content discovery.
I. INTRODUCTION
With the enormous growth of the Internet it has become very difficult for users to find relevant
documents. In response to a user's query, currently available search engines return a ranked list of
documents along with their partial content. If the query is general, it is extremely difficult to identify
the specific document the user is interested in, and users are forced to sift through a long list of
off-topic documents. For example, when the query "java map" is submitted to a cluster-based search
engine, the result set spans two categories, namely the Java map collection classes and maps of the
Indonesian island Java. Generally speaking, a computer science student would most likely be
interested in the Java map collection classes, whereas a geography student would be interested in
locating maps of the Indonesian island Java. The solution is that, for each such web page, the search
engine could determine which real entity the page refers to. This information can be used to provide a
clustered-search capability where, instead of a list of web pages of (possibly) multiple entities with
the same name, the results are clustered by associating each cluster with a real entity. The clusters can
be returned in a ranked order determined by aggregating the ranks of the web pages that constitute
each cluster.
II. RELATED WORK
Kalashnikov et al. developed a disambiguation algorithm and studied its impact on people search [1].
The proposed algorithm uses extraction techniques to extract entities such as names, organizations,
and locations from each web page. It analyses several types of information, such as attributes and
the interconnections that exist among entities in the entity-relationship graph. If web pages for
multiple people with the same name are merged into one cluster, it is difficult for the
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
127 Vol. 1, Issue 5, pp. 126-132
user to find relevant web pages. A novel algorithm is therefore developed for disambiguating people
who share the same name.
Kalashnikov et al. also discuss a Web People Search approach based on collecting co-occurrence
information from the web to make clustering decisions [2]. A skyline-based classification technique
is used to classify the collected co-occurrence information.
Bekkerman and Zilberstein proposed a framework that makes heuristic search viable in the vast
domain of the WWW and applicable to clustering of web search results and to web appearance
disambiguation [3].
Chen and Kalashnikov presented a graphical approach to entity resolution [4]. The overall idea is
to use relationships, looking at the direct and indirect (long) relationships that exist between
specific pairs of entity representations in order to make a disambiguation decision. In terms of
the entity-relationship graph, this means analysing the paths that exist between various pairs of
nodes.
III. DESIGN OF PREPROCESSING OF WEB PAGES
The preprocessing of the web pages includes two steps: stemming and stop-word removal. Stemming
algorithms transform the words in texts into their grammatical root form, and are mainly used to
improve the Information Retrieval System's efficiency. To stem a word is to reduce it to a more
general form, possibly its root. For example, stemming the terms interesting and interested may
produce the term interest. Though the stem of a word might not be its root, we want all words that
have the same stem to have the same root. The effect of stemming on searches of English document
collections has been tested extensively, and several algorithms with different techniques exist. The
most widely used is the Porter stemming algorithm; in some contexts, stemmers such as the Porter
stemmer improve precision/recall scores. After stemming, it is necessary to remove unwanted words. There are
400 to 500 types of stop words such as “of”, “and”, “the,” etc., that provide no useful information
about the document’s topic. Stop-word removal is the process of removing these words. Stop-words
account for about 20% of all words in a typical document. These techniques greatly reduce the size of
the search engine’s index. Stemming alone can reduce the size of an index by nearly 40%. To
compare a webpage with another webpage, all unnecessary content must be removed and the text put
into an array.
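The two preprocessing steps can be sketched in a few lines of Python. This is a deliberately crude suffix stripper, not the five-phase Porter algorithm, and STOP_WORDS is a tiny illustrative sample of the roughly 400 to 500 real stop words:

```python
# Sketch of the preprocessing stage: stop-word removal followed by a crude
# suffix-stripping stemmer (NOT the full Porter algorithm; illustration only).
STOP_WORDS = {"of", "and", "the", "a", "an", "in", "to", "is"}

def crude_stem(word):
    """Strip one common English suffix, longest match first."""
    for suffix in ("ational", "ization", "fulness", "ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def preprocess(text):
    """Lowercase, drop stop words, stem the remainder; return a token array."""
    tokens = [t for t in text.lower().split() if t.isalpha()]
    return [crude_stem(t) for t in tokens if t not in STOP_WORDS]

tokens = preprocess("The stemming of words improves the indexing and searching")
```

Note how the stop words vanish and the remaining tokens collapse to shared stems, which is exactly what shrinks the index before pages are compared.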
When designing cluster-based web search, special attention must be paid to ensuring that both the
content and the descriptions (labels) of the resulting groups are meaningful to humans. As stated, "a
good cluster—or document grouping—is one which possesses a good, readable description". There are
various algorithms, such as k-means and k-medoids, but these require the number of clusters as
input. A Correlation Clustering (CC) algorithm that utilizes supervised learning can be employed
instead. The key feature of CC is that it derives the number of clusters from the labeling itself,
so the number need not be given as input; however, it is best suited to queries that are person
names [9]. For general queries, the candidate algorithms are Query Directed Web Page Clustering
(QDC), Suffix Tree Clustering (STC), Lingo, and Semantic Online Hierarchical Clustering (SHOC) [5].
The focus here is on Lingo, because QDC considers only single words, while STC tends to remove
longer high-quality phrases, leaving only shorter, less informative ones. As a result, if a document
does not include any of the extracted phrases it will not appear in the results, although it may
still be relevant. To overcome STC's low-quality-phrase problem, SHOC introduces two novel
concepts: complete phrases and a continuous cluster definition. The drawback of SHOC is that it
relies on a vague threshold value to describe the resulting clusters, and in many cases it produces
unintuitive continuous clusters. The majority of open text clustering algorithms follows a scheme
where cluster content discovery is performed first, and then, based on the content, the labels are
determined. But very often intricate measures of similarity among documents do not correspond well
with plain human understanding of what a cluster’s “glue” element has been. To avoid such problems
Lingo reverses this process—first attempt to ensure that we can create a human-perceivable cluster
label and only then assign documents to it. Specifically, extract frequent phrases from the input
documents, hoping they are the most informative source of human-readable topic descriptions. Next,
by performing reduction of the original term-document matrix using Singular Value Decomposition
(SVD), try to discover any existing latent structure of diverse topics in the search result. Finally,
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
128 Vol. 1, Issue 5, pp. 126-132
match group descriptions with the extracted topics and assign relevant documents to them. The detail
description of Lingo algorithm is in [4].
IV. FREQUENT PHRASE EXTRACTION
The frequent phrases are defined as recurring ordered sequences of terms appearing in the input
documents. Intuitively, when writing about something, we usually repeat the subject-related keywords
to keep a reader’s attention. Obviously, in a good writing style it is common to use synonymy and
pronouns and thus avoid annoying repetition. The Lingo can partially overcome the former by using
the SVD-decomposed term document matrix to identify abstract concepts—single subjects or groups
of related subjects that are cognitively different from other abstract concepts.
A complete phrase is a complete substring of the collated text of the input documents, defined in the
following way. Let T be a sequence of elements (t1, t2, t3, ..., tn). S is a complete substring of T
when S occurs in k distinct positions p1, p2, p3, ..., pk in T such that ∃ i, j ∈ {1..k}: t(pi−1) ≠ t(pj−1)
(left-completeness) and ∃ i, j ∈ {1..k}: t(pi+|S|) ≠ t(pj+|S|) (right-completeness). In other words, a
complete phrase cannot be "extended" by adding preceding or trailing elements, because at least one of
these elements is different from the rest. An efficient algorithm for discovering complete phrases was
proposed in [11].
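A naive, quadratic sketch of complete-phrase discovery under this definition follows; efficient implementations such as the one in [11] use suffix structures instead. The sentinel tokens for the sequence boundaries are an implementation convenience, not part of the definition:

```python
from collections import defaultdict

def complete_phrases(tokens, min_freq=2):
    """Naive discovery of complete phrases: a recurring n-gram is kept only if
    its occurrences differ in at least one preceding element (left-complete)
    and at least one following element (right-complete)."""
    n = len(tokens)
    positions = defaultdict(list)
    for length in range(1, n):
        for i in range(n - length + 1):
            positions[tuple(tokens[i:i + length])].append(i)
    result = []
    for phrase, occ in positions.items():
        if len(occ) < min_freq:
            continue
        lefts = {tokens[i - 1] if i > 0 else "<s>" for i in occ}
        rights = {tokens[i + len(phrase)] if i + len(phrase) < n else "</s>"
                  for i in occ}
        if len(lefts) > 1 and len(rights) > 1:   # cannot be extended either way
            result.append(" ".join(phrase))
    return result

phrases = complete_phrases(
    ["java", "map", "classes", "use", "java", "map", "island", "java"])
```

Here "java map" survives as a complete phrase, while "map" alone is discarded because every occurrence is preceded by "java", so it could always be extended leftwards.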
V. CLUSTER LABEL INDUCTION
Once frequent phrases (and single frequent terms) that exceed term frequency thresholds are known,
they are used for cluster label induction. There are three steps to this: term-document matrix building,
abstract concept discovery, phrase matching and label pruning.
The term-document matrix is constructed out of single terms that exceed a predefined term frequency
threshold. The weight of each term is calculated using the standard term frequency-inverse document
frequency (tfidf) formula [12]; terms appearing in document titles are additionally scaled by a
constant factor. In abstract concept discovery, the Singular Value Decomposition method is applied to the
term-document matrix to find its orthogonal basis. As discussed earlier, vectors of this basis (SVD’s
U matrix) supposedly represent the abstract concepts appearing in the input documents. It should be
noted, however, that only the first k vectors of matrix U are used in the further phases of the
algorithm. We estimate the value of k by comparing the Frobenius norms of the term-document matrix
A and its k-rank approximation Ak. Let threshold q be a percentage-expressed value that determines to
what extent the k-rank approximation should retain the original information in matrix A; k is then
chosen as the smallest value for which ||Ak||F / ||A||F ≥ q.
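The choice of k can be sketched with NumPy. Because the squared Frobenius norm equals the sum of squared singular values, Ak never has to be formed explicitly; the threshold q = 0.9 and the toy term-document matrix below are illustrative assumptions:

```python
import numpy as np

def choose_k(A, q=0.9):
    """Smallest rank k whose k-rank approximation A_k retains at least a
    fraction q of ||A||_F, computed from the singular values alone."""
    s = np.linalg.svd(A, compute_uv=False)
    retained = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(retained, q) + 1)

# Toy term-document matrix: 4 terms x 5 documents with two rough topics.
A = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 1, 1, 1]], dtype=float)
k = choose_k(A, q=0.9)
```

Raising q keeps more abstract concepts (larger k); lowering it keeps only the dominant topics.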
VI. CLUSTER CONTENT DISCOVERY
In the cluster content discovery phase, the classic Vector Space Model is used to assign the input
documents to the cluster labels induced in the previous phase. In a way, we re-query the input
document set with all induced cluster labels. The assignment process resembles document retrieval
based on the VSM model. Let us define matrix Q, in which each cluster label is represented as a
column vector, and let C = QᵀA, where A is the original term-document matrix for the input documents. This
way, element cij of the C matrix indicates the strength of membership of the j-th document to the i-th
cluster. A document is added to a cluster if cij exceeds the Snippet Assignment Threshold, yet another
control parameter of the algorithm. Documents not assigned to any cluster end up in an artificial
cluster called others.
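A minimal sketch of this assignment step, assuming unit-length label and document vectors so that each c_ij behaves like a cosine score; the threshold value and the tiny matrices are illustrative only:

```python
import numpy as np

# Columns of Q are cluster-label vectors, columns of A are document vectors;
# c_ij then scores how strongly document j belongs to cluster i.
SNIPPET_ASSIGNMENT_THRESHOLD = 0.5   # illustrative control parameter

Q = np.array([[1.0, 0.0],            # label 1 built from term 1
              [0.0, 1.0],            # label 2 built from term 2
              [0.0, 0.0]])
A = np.array([[0.9, 0.0, 0.1],       # document 1 is mostly about term 1, etc.
              [0.1, 0.8, 0.2],
              [0.0, 0.2, 0.1]])
C = Q.T @ A                          # cluster-membership strengths

assignments = {j: [i for i in range(C.shape[0])
                   if C[i, j] > SNIPPET_ASSIGNMENT_THRESHOLD]
               for j in range(C.shape[1])}
others = [j for j, clusters in assignments.items() if not clusters]
```

Document 3 matches neither label strongly enough, so it falls into the artificial "others" cluster.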
VII. FINAL CLUSTER FORMATION
Clusters are sorted for display based on their score, calculated using the following simple formula:
Score = label score × ||C||, where ||C|| is the number of documents assigned to cluster C. The scoring
function, although simple, prefers well-described and relatively large groups over smaller, possibly
noisy ones.
VIII. ONTOLOGY
Let tf(d, t) be the absolute frequency of term t ϵ T in document d ϵ D, where D is the set of documents
and T = {t1, ..., tm} is the set of all different terms occurring in D. We denote the term vector of
document d by t⃗d = (tf(d, t1), ..., tf(d, tm)). Later on, we will need the notion of the centroid of a
set X of term vectors, defined as the mean value t⃗X = (1/|X|) Σ_{t⃗d ∈ X} t⃗d. As an initial approach we
have produced this standard representation of the texts by term vectors. The initial term vectors are
further modified as follows.
Stopwords are words which are considered non-descriptive within a bag-of-words approach.
Following common practice, we removed stopwords from T.
We have processed our text documents using the Porter stemmer, and used the stemmed terms to
construct a vector representation t⃗d for each text document. Then, we investigated how pruning
rare terms affects the results: depending on a pre-defined threshold δ, a term t is discarded from
the representation (i.e., from the set T) if Σ_{d∈D} tf(d, t) ≤ δ. We have used the values 0, 5 and 30 for δ.
The rationale behind pruning is that infrequent terms do not help in identifying appropriate clusters.
Tfidf weighs the frequency of a term in a document with a factor that discounts its importance when
it appears in almost all documents [14]. The tfidf (term frequency-inverted document frequency) of
term t in document d is defined by tfidf(d, t) = tf(d, t) × log(|D| / df(t)), where df(t) is the
document frequency of term t, counting in how many documents term t appears. If tfidf weighting is
applied, we replace the term vectors t⃗d = (tf(d, t1), ..., tf(d, tm)) by t⃗d = (tfidf(d, t1), ...,
tfidf(d, tm)) [13]. A core ontology is a tuple O := (C, ≤C) consisting of a set C
whose elements are called concept identifiers, and a partial order ≤C on C, called concept hierarchy
or taxonomy. This definition allows for a very generic approach towards using ontologies for
clustering.
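The pruning and weighting steps above can be sketched directly; the log-damped tfidf form is the standard one assumed here, and the three-document toy corpus is illustrative:

```python
import math

# Toy corpus; each document is a list of (already stemmed) tokens.
docs = [["java", "map", "class"],
        ["java", "island", "map"],
        ["island", "volcano"]]

def term_set(docs, delta=0):
    """Keep only terms whose total corpus frequency exceeds delta."""
    totals = {}
    for d in docs:
        for t in d:
            totals[t] = totals.get(t, 0) + 1
    return {t for t, n in totals.items() if n > delta}

def tfidf(d, t, docs):
    """tf-idf with the usual log damping: tf(d,t) * log(|D| / df(t))."""
    df = sum(1 for doc in docs if t in doc)
    return d.count(t) * math.log(len(docs) / df)

T = term_set(docs, delta=1)        # prunes terms occurring only once
w = tfidf(docs[0], "java", docs)   # "java" appears in 2 of 3 documents
```

With δ = 1 the rare terms "class" and "volcano" are pruned, and the rarer a surviving term, the higher its tfidf weight.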
IX. RESULTS AND DISCUSSION
The system was implemented using NetBeans 6.5.1 as the development tool and JDK 1.6 as the
development platform. It was tested for a variety of queries under the following four categories,
and the results obtained were satisfactory.
9.1 Web page retrieval for the query
This module provides the facility for submitting queries to the middleware. Figure 1 shows the user
interface through which the user enters a query. Along with the query, the user can select the
number of results (50/100/150/200) to be fetched from the source. The user issues a query to the
middleware, which sends it to a search engine such as Google and retrieves the top-k returned web
pages; this is a standard step performed by most current systems. Figure 1 shows the 200 results
fetched from Google for the query "mouse". Input: query "mouse" and k = 50/100/150/200 pages.
Output: web pages for the query "mouse".
The system was assessed on a number of real-world queries, and the results were analysed with
respect to certain characteristics of the input data. The queries fall into four main categories:
ambiguous queries, general queries, compound queries, and people names. The system was tested for
all of these, and the results obtained were satisfactory.
Figure 1. Clustering results for the ambiguous query "mouse" with k = 200 results
X. QUALITY OF GROUP IDENTIFICATION
Tables 1, 2, and 3 present the overall disambiguation quality results on the WWW 2005 and WePS data
sets. We also compare the results with the top runners in the WePS challenge [6]. The first runner
in the challenge reports 0.78 for Fp and 0.70 for the B-cubed measure. The proposed algorithm outperforms all
of the WePS challenge algorithms. The improvement is achieved because the proposed disambiguation
method is capable of analyzing more information hidden in the data sets, which [8] and [7] do not
analyze. The algorithm outperforms [7] by 11.8 percent in F-measure, as illustrated in Table 1 and
Table 3. In this experiment, the F-measure is computed in the same way as in [7]. The field "#W" in
Table 1 is the number of to-be-found web pages related to the namesake of interest. The field "#C"
is the number of web pages found correctly, and the field "#I" is the number of pages found
incorrectly in the resulting groups. The baseline algorithm also outperforms the algorithm proposed
in [7].

Table 1. F-Measures Using WWW'05 Algo.
Name #W #C #I F-measure
Adam cheyer 96 62 0 78.5
William cohen 6 6 4 75.0
Steve hardt 64 16 2 39.0
David Israel 20 19 4 88.4
Leslie kaelbling 88 84 1 97.1
Bill Mark 11 6 9 46.2
Mouse 54 54 2 98.2
Apple 15 14 5 82.4
David Mulford 1 1 0 100.0
Java 32 30 6 88.2
Jobs 32 21 14 62.7
Gauri 1 0 1 0.0
Overall 455 313 47 80.3
F-measure: let Si be the set of the correct web pages for cluster i and Ai be the set of web pages
assigned to cluster i by the algorithm. Then Precision_i = |Ai ∩ Si| / |Ai|, Recall_i = |Ai ∩ Si| / |Si|,
and F is their harmonic mean [10]. Fp is referred to as Fα=0.5 [8].
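The per-cluster F-measure can be reproduced directly from this definition; with the counts of Table 1's first row (96 relevant pages, 62 found correctly, 0 incorrectly) it yields the reported 78.5:

```python
def f_measure(correct, assigned):
    """Harmonic mean of per-cluster precision and recall.
    correct = the set S_i of relevant pages, assigned = the set A_i produced."""
    inter = len(correct & assigned)
    if inter == 0:
        return 0.0
    precision = inter / len(assigned)
    recall = inter / len(correct)
    return 2 * precision * recall / (precision + recall)

# Table 1, first row: 96 relevant pages, 62 found correctly, 0 incorrectly.
S = set(range(96))      # the 96 relevant pages
A = set(range(62))      # 62 of them retrieved, nothing spurious
f = round(100 * f_measure(S, A), 1)
```

Here precision is 1.0 (no incorrect pages) and recall is 62/96, giving F ≈ 78.5 as in the table.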
Table 2. F-Measures Using Baseline Algo

Name #W #C #I F-measure
Adam cheyer 96 75 1 87.2(+8.7)
William cohen 6 5 0 90.9(+15.9)
Steve hardt 64 40 7 72.1(+33.1)
David Israel 20 14 2 77.8(-10.6)
Leslie kaelbling 88 66 0 85.7(-11.4)
Bill Mark 11 9 17 48.6(+2.4)
Mouse 54 52 0 98.1(-0.1)
Apple 15 15 2 93.8(+11.4)
David Mulford 1 0 1 0.0(-100.0)
Java 32 27 1 90.0(+1.8)
Jobs 32 23 17 63.9(+1.2)
Gauri 1 1 0 100.0(+100.0)
Overall 455 327 47 82.4(+2.1)
Table 3. F-Measures Using Cluster-Based Algo

Name #W #C #I F-measure
Adam cheyer 96 94 0 98.9(+20.4)
William cohen 6 4 0 80.0(+5.0)
Steve hardt 64 51 2 87.2(+48.2)
David Israel 20 17 2 87.8(-1.2)
Leslie kaelbling 88 88 1 99.4(+2.3)
Bill Mark 11 8 1 80.0(+33.8)
Mouse 54 54 1 99.1(+0.9)
Apple 15 12 5 75.0(-7.4)
David Mulford 1 1 0 100.0(+0.0)
Java 32 25 1 86.2(-2.0)
Jobs 32 25 11 73.5(+10.8)
Gauri 1 0 0 0.0(+0.0)
Overall 455 379 24 92.1(+11.8)
XI. CONCLUSION
The number of results processed for a single query is likely to have an impact on two major aspects
of the results: the quality of the groups' descriptions and the time spent on clustering. The focus
here is on evaluating the usefulness of the generated clusters. The term usefulness involves very
subjective judgment of the clustering results. For each created cluster we decided, based on its
label, whether the cluster is useful or not. Useful groups would most likely have concise and
meaningful labels, while useless ones would have been given either ambiguous or senseless labels.
For each snippet in each cluster, we judged the extent to which the result fits its group's
description. A very well matching result would contain exactly the information suggested by the
cluster label.
ACKNOWLEDGEMENTS
We would like to acknowledge and extend our heartfelt gratitude to the people who made the
completion of this paper possible: our guide, Prof. M. S. Bewoor, and our H.O.D., Dr. Suhas H.
Patil, for their vital encouragement and support; and most especially to our family and friends,
and to God, who made all things possible!
REFERENCES
[1] D.V. Kalashnikov, S. Mehrotra, R. Nuray-Turan and Z. Chen, "Web People Search via Connection
Analysis," IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 11, November 2008.
[2] D.V. Kalashnikov, S. Mehrotra, Z. Chen, R. Nuray-Turan, and N.Ashish, “Disambiguation Algorithm
for People Search on the Web,” Proc. IEEE Int’l Conf. Data Eng. (ICDE ’07), Apr. 2007.
[3] R. Bekkerman, S. Zilberstein, and J. Allan, “Web Page Clustering Using Heuristic Search in the Web
Graph,” Proc. Int’l Joint Conf. Artificial Intelligence (IJCAI), 2007.
[4] Z. Chen, D.V. Kalashnikov, and S. Mehrotra, “Adaptive Graphical Approach to Entity Resolution,”
Proc. ACM IEEE Joint Conf. Digital Libraries (JCDL), 2007.
[5] O.E. Zamir, "Clustering Web Documents: A Phrase-Based Method for Grouping Search Engine
Results," PhD thesis, University of Washington, 1999.
[6] J. Artiles, J. Gonzalo, and S. Sekine, “The SemEval-2007 WePSEvaluation: Establishing a Benchmark
for the Web People Search Task,” Proc. Int’l Workshop Semantic Evaluations (SemEval ’07), June
2007.
[7] R. Bekkerman and A. McCallum, "Disambiguating Web Appearances of People in a Social Network,"
Proc. Int'l World Wide Web Conf. (WWW), 2005.
[8] J. Artiles, J. Gonzalo, and F. Verdejo, “A Testbed for People Searching Strategies in the WWW,” Proc.
SIGIR, 2005.
[9] N. Bansal, A. Blum, and S. Chawla, “Correlation Clustering,”Foundations of Computer Science, pp.
238-247, 2002.
[10] D.V. Kalashnikov, S. Mehrotra, R. Nuray-Turan and Z. Chen, "Web People Search via Connection
Analysis," IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 11, November 2008.
[11] Zhang Dong. Towards Web Information Clustering. PhD thesis, Southeast University, Nanjing, China,
2002.
[12] Gerard Salton. Automatic Text Processing — The Transformation, Analysis, and Retrieval of
Information by Computer. Addison–Wesley, 1989.
[13] G. Amati, C. Carpineto, and G. Romano, "FUB at TREC-10 Web Track: A probabilistic framework
for topic relevance term weighting," in The Tenth Text Retrieval Conference (TREC 2001), National
Institute of Standards and Technology (NIST), online publication, 2001.
[14] A. Hotho, S. Staab and G. Stumme, "WordNet improves text document clustering," Proc. of the
SIGIR 2003 Semantic Web Workshop, pp. 541-544, 2003.
Authors
Gauri S. Bhagat is a student of M.Tech in Computer Engineering, Bharati Vidyapeeth Deemed
University College of Engg, Pune-43.
M. S. Bewoor is working as an Associate Professor in Computer Engineering, Bharati Vidyapeeth
Deemed University College of Engg, Pune-43. She has a total of 10 years of teaching experience.

S. H. Patil is working as a Professor and Head of Department in Computer Engineering, Bharati
Vidyapeeth Deemed University College of Engg, Pune-43. He has a total of 22 years of teaching
experience and has been working as HOD for the last ten years.
COMPARISON OF MAXIMUM POWER POINT TRACKING
ALGORITHMS FOR PHOTOVOLTAIC SYSTEM
J. Surya Kumari1 and Ch. Sai Babu2
1Asst. Professor, Dept. of Electrical and Electronics, RGMCET, Nandyal, India.
2Professor, Dept. of Electrical and Electronics, J.N.T. University, Kakinada, India.
ABSTRACT
Photovoltaic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver
the highest possible power to the load when variations in insolation and temperature occur. Photovoltaic
(PV) generation is becoming increasingly important as a renewable source since it offers many advantages,
such as incurring no fuel costs, not being polluting, requiring little maintenance, and emitting no noise,
among others. PV modules still have relatively low conversion efficiency; therefore, controlling maximum
power point tracking (MPPT) for the solar array is essential in a PV system. MPPT is a technique used in
power electronic circuits to extract maximum energy from PV systems. To improve energy efficiency, it is
important to always operate the PV system at its maximum power point. Many MPPT techniques have been
proposed for obtaining the maximum power point but, among the available techniques, a sufficient
comparative study, particularly under variable environmental conditions, has not been done. This paper is
an attempt to study and evaluate two main types of MPPT techniques, namely the open-circuit voltage and
short-circuit current methods. A detailed comparison of each technique is reported, and SIMULINK
simulation results of the open-circuit voltage and short-circuit current methods with changing radiation
and temperature are presented.
KEYWORDS: Photovoltaic system, modelling of PV arrays, open-circuit voltage algorithm, short-circuit
current algorithm, boost converter, simulation results
I. INTRODUCTION
Renewable sources of energy are acquiring growing importance due to the enormous consumption and
exhaustion of fossil fuels. Solar energy is the most readily available source of energy, and it is
free. Moreover, solar energy is the best among all the renewable energy sources, since it is
non-polluting. The energy supplied by the sun in one hour is equal to the amount of energy required
by humans in one year. Photovoltaic arrays are used in many applications, such as water pumping,
street lighting in rural towns, battery charging and grid-connected PV systems.
A maximum power point tracker is used with PV modules to extract maximum energy from the sun [1].
Typical characteristics of the PV module, shown in Fig. 1, clearly indicate that the operating point
of the module (the intersection of the load line and the I-V characteristic) is not the same as the
maximum power point of the module. To remove this mismatch, a dc-to-dc power electronic converter is
accompanied with the PV system as shown in Fig. 1. The electrical characteristics of a PV module
depend on the intensity of solar radiation and the operating temperature; increased radiation with
reduced temperature results in higher module output. The aim of the tracker is to always derive
maximum power against variations in sunlight, atmosphere, local surface reflectivity and
temperature.
Figure 1: PV Module Characteristics
Since a PV array is expensive to build, and the cost of electricity from PV array systems is higher
than the price of electricity from the utility grid, the user of such an expensive system naturally
wants to use all of the available output power. Therefore, PV array systems should be designed to
operate at their maximum output power level for any temperature and solar irradiation level at all
times. The performance of a PV array system depends on the operating conditions as well as on the
solar cell and array design quality. Multilevel converters are particularly interesting for
high-power applications. The main tasks of the system control are to maximize the energy transferred
from the PV arrays to the grid and to generate a near-sinusoidal current as well as voltage with
minimum harmonic distortion under all operating conditions [2], [3].
The paper is organized in the following way. Section II presents the entire system configuration.
Section III discusses the mathematical modeling of the PV array, the maximum power point tracking
methods, the boost converter, and the concept of the multilevel inverter with a five-level H-bridge
cascaded multilevel inverter. In Section IV, simulation results for the multilevel inverter system
under consideration are discussed. Finally, conclusions are drawn in Section V.
II. SYSTEM CONFIGURATION
The system configuration is shown in Figure 2. Here the PV array is a combination of series and
parallel solar cells. The array develops power directly from solar energy, and this power changes
depending upon the temperature and solar irradiance [1], [2].
Fig. 2. System Configuration of PV System
To maintain maximum power at the output side, the voltage is boosted by controlling the array
current with a PI controller. Depending upon the boost converter output voltage, the resulting AC
voltage may change; finally, it connects to the utility grid, which acts as a load for various
applications. A five-level H-bridge cascaded multilevel inverter is used to obtain the AC output
voltage from the DC boost output voltage.
III. PROPOSED MPPT ALGORITHM FOR PHOTOVOLTAIC SYSTEM
3.1. Mathematical Modeling of PV Array
The PV cell receives energy from the sun and converts the sunlight into DC power. The simplified
equivalent circuit model is shown in Figure 3.

Figure 3. Simplified equivalent circuit of a photovoltaic cell

The PV cell output voltage is a function of the photocurrent, which is mainly determined by the
load current and the solar irradiation level during operation, as given in equation (1):

Vc = (A k Tc / q) ln((Iph + I0 − Ic) / I0) − Rs Ic    (1)
where the symbols are defined as follows:
q: electron charge (1.602 × 10⁻¹⁹ C)
k: Boltzmann constant (1.38 × 10⁻²³ J/K)
A: diode ideality factor
Ic: cell output current, A
Iph: photocurrent, a function of irradiation level and junction temperature (5 A)
I0: reverse saturation current of the diode (0.0002 A)
Rs: series resistance of the cell (0.001 Ω)
Tc: reference cell operating temperature (25 °C)
Vc: cell output voltage, V
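Equation (1) can be evaluated numerically with the parameter values listed above. The diode ideality factor A is not stated in the text, so A = 1.0 is an assumption here:

```python
import math

# Parameter values from the symbol list above.
q = 1.602e-19        # electron charge, C
k = 1.38e-23         # Boltzmann constant, J/K
A = 1.0              # diode ideality factor (assumed; not given in the text)
Tc = 25.0 + 273.15   # cell temperature, K
Iph, I0, Rs = 5.0, 0.0002, 0.001

def cell_voltage(Ic):
    """Equation (1): Vc = (A k Tc / q) ln((Iph + I0 - Ic) / I0) - Rs Ic."""
    return (A * k * Tc / q) * math.log((Iph + I0 - Ic) / I0) - Rs * Ic

v_oc = cell_voltage(0.0)    # open circuit: no load current
```

As expected for a single cell, the open-circuit voltage comes out at a few tenths of a volt, and the terminal voltage drops as the load current rises.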
Both k and Tc should be expressed in consistent units, with Tc in kelvin when k is in J/K. A method
to include the temperature and irradiation effects in the PV array model is given in [4]. These
effects are represented in the model by the temperature coefficients CTV and CTI for the cell output
voltage and cell photocurrent, respectively, as in equations (2) and (3):
CTV = 1 + βT (Ta − Tx)    (2)
CTI = 1 + (γT / SC) (Tx − Ta)    (3)
where βT = 0.004 and γT = 0.06 for the cell used, and Ta = 20 °C is the ambient temperature during
cell testing. If the solar irradiation level increases from Sx1 to Sx2, the cell operating
temperature and the photocurrent will also increase, from Tx1 to Tx2 and from Iph1 to Iph2,
respectively. CSV and CSI are the correction factors for the changes in cell output voltage Vc and
photocurrent Iph, respectively, as in equations (4) and (5):

CSV = 1 + βT αS (Sx − SC)    (4)
CSI = 1 + (1 / SC) (Sx − SC)    (5)
where SC is the benchmark reference solar irradiation level during cell testing, used to obtain the
modified cell model. The temperature change ΔTC occurs due to the change in the solar irradiation
level and is obtained from equation (6):

ΔTC = αS (Sx − SC)    (6)
The constant αS represents the slope of the change in the cell operating temperature due to a change
in the solar irradiation level [1], and is equal to 0.2 for the solar cells used. Using the
correction factors CTV, CTI, CSV and CSI, the new values of the cell output voltage VCX and
photocurrent IphX are obtained for the new temperature Tx and solar irradiation Sx as in equations
(7) and (8):

VCX = CTV CSV VC    (7)
IphX = CTI CSI Iph    (8)
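The correction-factor chain of equations (2)-(8) can be sketched as follows. βT, γT, αS and Ta follow the text; the benchmark irradiation SC = 100 and the reference cell values are assumed purely for illustration:

```python
# Correction-factor chain, equations (2)-(8).
beta_T, gamma_T, alpha_S = 0.004, 0.06, 0.2
Ta, SC = 20.0, 100.0           # ambient temperature (C), benchmark irradiation (assumed)
Vc_ref, Iph_ref = 20.7, 5.0    # assumed reference module voltage / photocurrent

def corrected(Sx):
    """Return (VCX, IphX) at irradiation level Sx."""
    Tx = Ta + alpha_S * (Sx - SC)            # eq (6): new operating temperature
    Ctv = 1 + beta_T * (Ta - Tx)             # eq (2)
    Cti = 1 + (gamma_T / SC) * (Tx - Ta)     # eq (3)
    Csv = 1 + beta_T * alpha_S * (Sx - SC)   # eq (4)
    Csi = 1 + (Sx - SC) / SC                 # eq (5)
    return Ctv * Csv * Vc_ref, Cti * Csi * Iph_ref   # eqs (7) and (8)

V1, I1 = corrected(100.0)   # at the benchmark level every factor equals 1
V2, I2 = corrected(120.0)   # brighter: photocurrent rises, voltage sags slightly
```

At the benchmark irradiation all four factors reduce to 1, recovering the reference values, which is a quick sanity check on the signs in equations (2)-(5).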
VC and Iph are the benchmark reference cell output voltage and reference cell photocurrent,
respectively. The resulting I-V and P-V curves for various temperature and solar irradiation levels
were discussed and shown in [3, 4, 5], and are therefore not repeated here. The output power from
the PV module is the product of the PV terminal voltage and the PV output current, as obtained from
equations (9) and (10):

Pc = Iph Vc − I0 Vc [exp(q Vc / (A k Tc)) − 1]    (9)
Ic = Iph − I0 [exp(q Vc / (A k Tc)) − 1]    (10)

3.2 MPPT Methods
The tracking algorithm is based on the fact that the derivative of the output power P with respect
to the panel voltage V is equal to zero at the maximum power point, as shown in the P-V
characteristic of Fig. 3. The derivative is greater than zero to the left of the peak point and less
than zero to the right.
Figure 3: P-V Characteristics of a module
∂P/∂V = 0 for V = Vmp    (11)
∂P/∂V > 0 for V < Vmp    (12)
∂P/∂V < 0 for V > Vmp    (13)
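Conditions (11)-(13) suggest a simple hill-climbing rule on a single-peak P(V) curve: move the operating voltage right while dP/dV is positive and left while it is negative. The toy cubic power curve and the step size below are illustrative assumptions, not the paper's model:

```python
# Hill-climbing driven by the sign of dP/dV, per conditions (11)-(13).
def P(V):
    return 5.0 * V - 0.05 * V ** 3   # toy power curve with one peak at sqrt(100/3)

def dP_dV(V, h=1e-6):
    """Central finite-difference estimate of the derivative."""
    return (P(V + h) - P(V - h)) / (2 * h)

V = 1.0                               # arbitrary starting operating voltage
for _ in range(200):                  # step right if dP/dV > 0, else step left
    V += 0.05 if dP_dV(V) > 0 else -0.05
```

The operating point converges to, and then oscillates within one step of, the voltage where the derivative changes sign (about 5.77 V for this curve).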
Various MPPT algorithms are available to improve the performance of a PV system by effectively
tracking the MPP. Two of the most widely used MPPT algorithms are considered here:
a) Open Circuit Voltage
b) Short Circuit Current
A. Open-Circuit Voltage
The open-circuit voltage algorithm is the simplest MPPT control method. This technique is also known
as the constant voltage method. VOC is the open-circuit voltage of the PV panel and depends on the
properties of the solar cells. A commonly used VMPP/VOC ratio is 76%. This relationship can be
described by equation (14):

VMPP = k1 × VOC    (14)

Here the factor k1 is always less than unity. The method looks very simple, but determining the best
value of k1 is difficult; k1 varies from 0.71 to 0.8, and the common value used is 0.76, hence this
algorithm is also called the 76% algorithm. The operating point of the PV array is kept near the MPP
by regulating the array voltage and matching it to a fixed reference voltage Vref. The Vref value is
set equal to the VMPP of the characteristic PV module or to another calculated best open-circuit
voltage. This method assumes that individual insolation and temperature variations on the array are
insignificant, and that the constant reference voltage is an adequate approximation of the true MPP.
Figure 4. Flow chart of the open-circuit voltage algorithm
The open-circuit voltage method does not require any input. It is important to observe that when
the PV panel is under low-insolation conditions, the open-circuit voltage technique is more
effective. A detailed flowchart of the open-circuit voltage algorithm is depicted in Figure 4.
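Equation (14) amounts to a one-line reference computation, plus a regulation loop that matches the array voltage to the fixed reference. K1 = 0.76, the 21 V module and the step size below are illustrative values:

```python
# Fractional open-circuit voltage method, equation (14).
K1 = 0.76   # commonly used VMPP/VOC ratio

def v_ref_from_voc(v_oc):
    """Equation (14): VMPP is approximated as k1 * VOC."""
    return K1 * v_oc

def regulate(v_panel, v_ref, step=0.1):
    """One control step: nudge the operating voltage toward Vref."""
    if abs(v_panel - v_ref) <= step:
        return v_ref
    return v_panel + step if v_panel < v_ref else v_panel - step

v_ref = v_ref_from_voc(21.0)   # 15.96 V reference for a 21 V open-circuit module
v = 18.0                       # arbitrary starting operating voltage
for _ in range(50):
    v = regulate(v, v_ref)
```

Because Vref is fixed, the loop converges regardless of where the true MPP actually sits; that gap is exactly the method's known limitation.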
B. Short-Circuit Current
The short-circuit current algorithm is a similarly simple MPPT control method, also known as the constant current method. ISC is the short-circuit current of the PV panel and depends on the properties of the solar cells, as shown in figure.3. This relationship is described by equation (15),

IMPP = k2 * ISC (15)

Here the factor k2 is always less than unity. The method looks very simple, but determining the best value of k2 is difficult; k2 varies between 0.78 and 0.92. When the PV array output current is approximately 90% of the short-circuit current, the solar module operates at its MPP; in other words, the common value of k2 is 0.9. Measuring ISC during operation is problematic, and an additional switch usually has to be added to the power converter. Here a boost converter is used, where the switch in the converter itself can be used to short the PV array. Power output is reduced not only while finding ISC but also because the MPP is never perfectly matched. A way of compensating k2 has been proposed such that the MPP is better tracked while atmospheric conditions change. To guarantee proper MPPT in the presence of multiple local maxima, the method periodically sweeps the PV array voltage from short circuit to update k2. A detailed flowchart of the short-circuit current algorithm is depicted in Figure.5.
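The periodic ISC re-measurement and the current regulation around k2·ISC can be sketched as below. The hardware hooks (short_array, read_current), the sweep period and the command strings are hypothetical placeholders, not the paper's implementation:

```python
# Sketch of the fractional short-circuit current MPPT with periodic ISC
# re-measurement. short_array/read_current stand in for converter hardware.
K2 = 0.9            # common value of k2 from the text
SWEEP_PERIOD = 100  # control cycles between ISC re-measurements (assumed)

def make_isc_tracker(short_array, read_current):
    """Return a step() function implementing I_MPP = k2 * ISC, eq. (15)."""
    state = {"cycle": 0, "i_sc": None}

    def step(i_panel, tol=0.05):
        if state["i_sc"] is None or state["cycle"] % SWEEP_PERIOD == 0:
            short_array(True)            # momentarily short the PV array...
            state["i_sc"] = read_current()   # ...and sample ISC
            short_array(False)
        state["cycle"] += 1
        i_ref = K2 * state["i_sc"]
        if i_panel < i_ref - tol:
            return "increase_current"
        if i_panel > i_ref + tol:
            return "decrease_current"
        return "hold"

    return step

# Drive the tracker with fake hardware that always reports ISC = 10 A:
step = make_isc_tracker(lambda on: None, lambda: 10.0)
assert step(8.0) == "increase_current"   # i_ref = 0.9 * 10 = 9 A
assert step(9.0) == "hold"
assert step(9.5) == "decrease_current"
```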
Figure.5. Flow chart of the short-circuit current MPPT algorithm
3.3 MPPT Methodology
Without a control algorithm, a PV system delivers only approximately 20 to 65% of its available output. The control algorithm drives the dc-to-dc converter and performs all control functions required for the MPP tracking process. The MPP of a module varies with radiation and temperature, and this variation of the MPP position under changing conditions demands an optimized algorithm, which in turn controls the dc-to-dc converter operation to increase the PV efficiency. Table 1 shows a detailed comparison of the above two methods. Each MPPT algorithm has its own merits and barriers in view of changing environmental conditions. The open-circuit voltage and short-circuit current methods are simple and easy to implement; however, it is very tedious to find the optimal value of the k factor for changing temperature and irradiance. The open-circuit voltage algorithm suffers from a low efficiency of about 92%, as it is very tedious to identify the exact MPP, and it fails to find the MPP when a partially shaded PV module or damaged cells are present. The short-circuit current algorithm has a higher efficiency of about 96%. Its advantage is a quick response, as ISC is linearly proportional to IMP, so this method also responds faster to changing conditions; when rapidly changing site conditions are present, the efficiency depends on how well the method is optimised at the design stage. The implementation cost of this method is relatively low. The open-circuit voltage method is easy to implement, as few parameters are to be measured, and gives moderate efficiencies of about 92%.
Table 1: Comparison of MPPT methods

  Specification             Open Circuit Voltage               Short Circuit Current
  Efficiency                Low, about 90%                     High, about 94%
  Complexity                Very simple, but very difficult    Very simple, but very difficult
                            to get optimal k1                  to get optimal k2
  Realization               Easy to implement with             Easy to implement, as few
                            analog hardware                    parameters are measured
  Cost                      Relatively lower                   Relatively lower
  Reliability               Not accurate; may not operate      Accurate; operates exactly
                            exactly at MPP (below it)          at MPP
  Rapidly changing          Slower response, as Vmp is         Faster response, as Imp is
  atmospheric conditions    proportional to VOC; may not       proportional to ISC; locates
                            locate the correct MPP             the correct MPP
  k factor                  0.73 < k1 < 0.8, k1 ≈ 0.76;        0.85 < k2 < 0.9, k2 ≈ 0.9;
                            varies with temperature            varies with temperature
                            and irradiance                     and irradiance
The implementation cost of the open-circuit voltage method is relatively low. The problems with this method are that it gives erratic performance, with oscillations around the MPP, particularly under rapidly changing conditions, and that its response is slow. Sometimes this method is not reliable, as it is difficult to judge whether the algorithm has located the MPP or not. The short-circuit current method offers high efficiencies of about 96%. It has several advantages: it is more accurate, highly efficient, and operates at the maximum power point. It operates very soundly under rapidly changing atmospheric conditions, as it automatically adjusts the module's operating voltage to track the exact MPP with almost no oscillations.
3.4 Boost Converter
The boost converter steps up the voltage to keep the maximum output voltage constant under all conditions of temperature and solar irradiance variation. A simple boost converter is shown in figure.6.
Figure.6. Boost Topology
For steady state operation, the average voltage across the inductor over a full period is zero as given in
equation (16), (17) and (18).
Vin*ton – (Vo-Vin)*toff = 0 (16)

Therefore,

Vin*D*T = (Vo-Vin)*(1-D)*T (17)

and

Vo/Vin = 1/(1-D) (18)
By designing this circuit we can also investigate the performance of converters fed from solar energy. A boost regulator can step up the voltage without a transformer, and because it uses a single switch it has a high efficiency.
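The ideal relation (18) can be exercised numerically. The voltages below are illustrative, not the paper's design point:

```python
# Numeric check of the ideal (lossless) boost relation (18): Vo = Vin / (1 - D).
def boost_vout(v_in, duty):
    """Output voltage of an ideal boost converter for a given duty cycle."""
    assert 0.0 <= duty < 1.0, "duty cycle must be in [0, 1)"
    return v_in / (1.0 - duty)

def duty_for(v_in, v_out):
    """Duty cycle rearranged from (18): D = 1 - Vin/Vo."""
    return 1.0 - v_in / v_out

# D = 0.5 doubles the input voltage:
assert abs(boost_vout(120.0, 0.5) - 240.0) < 1e-9
assert abs(duty_for(117.0, 234.0) - 0.5) < 1e-9
```

In the MPPT loop, the controller effectively adjusts the duty cycle D so that the array stays at its maximum-power voltage while Vo is held up at the dc-link level.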
3.5 Multilevel Inverter topology
DC-AC converters have undergone great evolution in the last decade due to their wide use in uninterruptible power supplies and industrial applications. Conventional voltage source inverters produce an output voltage or a current with levels of either 0 or ±Vdc and are therefore known as two-level inverters. To obtain a quality output voltage (230.2 V rms) or current (4.2 A rms) waveform with a minimum amount of ripple content, more than two levels are required, which leads to the multilevel topology of Figure.7.
Figure.7. Five-level H-Bridge Cascade Multilevel inverter circuit
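How two cascaded H-bridge cells produce the five output levels of the topology in Figure.7 can be sketched as below; the per-cell DC-link voltage is an illustrative value:

```python
# Sketch of the five output levels of a cascaded H-bridge inverter with
# two cells. Each cell contributes -Vdc, 0 or +Vdc depending on its
# switch state, so the series connection yields 0, ±Vdc, ±2Vdc.
VDC = 100.0  # per-cell DC-link voltage (illustrative)

def cell_output(state):
    # state: +1 (positive), 0 (bypass), -1 (negative polarity)
    assert state in (-1, 0, 1)
    return state * VDC

def cascade_output(s1, s2):
    # Series connection of the two cells sums their outputs.
    return cell_output(s1) + cell_output(s2)

levels = sorted({cascade_output(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)})
assert levels == [-200.0, -100.0, 0.0, 100.0, 200.0]  # the five levels
```

Stepping through these levels over a fundamental period gives the staircase waveform of Figure 22, which approximates a sinusoid with far less ripple than a two-level output.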
IV. SIMULATION RESULTS
The converter circuit topology is designed to be compatible with a given load so as to achieve maximum power transfer from the solar arrays. The boost converter output is given as the input to the five-level H-bridge multilevel inverter. We observed that the designed five-level H-bridge cascade multilevel inverter successfully followed the variations in solar irradiation and temperature. The power is maintained at its maximum value, and the boost converter boosts the voltage under the control of the MPPT. In this way the PV array and boost converter output voltages are converted to AC voltages, which are supplied to the grid using the five-level H-bridge cascade multilevel inverter, whose characteristics are also presented here. The photovoltaic array V-I and P-V characteristics obtained under varying temperature and varying irradiance conditions are shown in Figs. 8, 9, 10 and 11.
Fig.8. Variations of V-I Characteristics of PV system with varying irradiance
Fig.9. Variations of P-V characteristics of PV system with varying irradiance
Fig.10. V-I characteristics of PV system at three different temperatures
Fig.11. P-V characteristics of PV system with varying temperature
Fig.12. Voltage curve of PV system with Open circuit voltage control
Fig.13. Current curve of PV system with Open circuit voltage MPPT control
Fig.14. Power curve of PV system with Open circuit voltage MPPT control
Fig.15. Voltage curve of PV system with Short circuit current MPPT control
Fig.16. Current curve of PV system with Short circuit current MPPT control
Fig.17. Power curve of PV system with Short circuit current MPPT control
The efficiency of the maximum power point tracker is defined as

ηMPPT = ∫₀^t1 Pactual(t) dt / ∫₀^t1 Pmax(t) dt (19)
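The tracking-efficiency definition (19) can be evaluated numerically by approximating the two integrals, here with the trapezoidal rule; the sample data are made up for illustration:

```python
# Numerical form of the MPPT tracking efficiency (19): the ratio of the
# integral of the actually extracted power to the integral of the
# available maximum power over the same time window.
def mppt_efficiency(p_actual, p_max, dt):
    """Trapezoidal approximation of eta_MPPT for uniformly sampled power."""
    def trapz(samples):
        return sum((a + b) * dt / 2.0 for a, b in zip(samples, samples[1:]))
    return trapz(p_actual) / trapz(p_max)

p_max    = [1000.0, 1000.0, 1000.0, 1000.0]   # available MPP power (W)
p_actual = [940.0, 960.0, 960.0, 980.0]       # tracker slightly below MPP
eta = mppt_efficiency(p_actual, p_max, dt=0.1)
assert 0.95 < eta < 0.97   # about 96% for this made-up trace
```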
Fig.12, 13 and 14 show the simulation results for the voltage, current and power of the open-circuit voltage method with radiation of 1000 W/m2 and a temperature of 25 °C, whereas Fig.15, 16 and 17 show the corresponding results for the short-circuit current method. The results clearly indicate that the short-circuit current method is comparatively better at tracking the peak power point in that particular situation. At STC (1000 W/m2, 25 °C), the efficiency of the open-circuit voltage method calculated using Eqn.(19) is 91.95%, and that of the short-circuit current method is 96%. These values are relatively high and validate the algorithms of the two methods. The maximum power is 1 kW at these solar irradiation and temperature levels. Fig. 18, 19, 20 and 21 show the gate pulses of the boost converter from the short-circuit current MPPT algorithm and the current, output voltage and power response of the boost converter. Fig. 22 and 23 show the output voltage and the voltage harmonic spectrum (THD = 11.59%) from the five-level H-bridge multilevel inverter.
Fig. 18. Gate pulse response
Fig. 19. Current response of boost converter
Fig. 20. Voltage response of boost converter
Fig. 21. Power response of boost converter
Fig. 22. Five-level output voltage of inverter.
Figure.23. Output Voltage with Harmonic Spectrum (THD = 11.59%)
Table 2: Comparison evaluation of MPPT methods

  MPPT method     Open circuit voltage    Short circuit current
  Voltage (V)     136.4                   117
  Current (A)     7.88                    9.76
  Power (W)       1075                    1132
  Efficiency      90.4%                   93.4%
Table 3 Comparison Evaluation of various parameters of Photovoltaic systems with MPPT methods
Irradiance
W/m2
Open circuit
voltage(V)
Short Circuit
current (A)
Maximum
Voltage(V)
Maximum
current(A)
Maximum
Power(W)
1000 152.4 10 125 9.352 1169
800 150.1 8 122.7 7.436 912.39
600 147.2 6 122.5 5.445 667.01
400 143 4 116.4 3.694 429.98
V. CONCLUSIONS
The derivative of the output power P with respect to the panel voltage V is equal to zero at the maximum power point (∂P/∂V = 0). Employing control algorithms improves flexibility and gives fast response. The methodologies of the two major techniques, open-circuit voltage and short-circuit current, have been discussed. The open-circuit voltage method is easy to implement and offers relatively moderate efficiencies, but gives unpredictable performance under rapidly changing conditions. The short-circuit current method is complex and expensive compared to the open-circuit voltage method; however, it gives very high efficiencies of about 96% and performs well with changing radiation and temperature. It can be concluded that, if cost is not a constraint and rapidly changing site conditions must be handled, the short-circuit current method is the best choice of the two methods discussed. A comprehensive evaluation of these two methods with the simulation results has also been presented. The principles of operation of the five-level H-bridge cascade multilevel inverter topology suitable for photovoltaic applications have been presented in this paper. The cost savings are further enhanced with the proposed cascade multilevel inverter because it requires the fewest components to achieve the same number of voltage levels. These configurations may also be applied in distributed power generation involving photovoltaic cells. Solar cells in a PV array work only on the part of the volt-ampere characteristic near the working point where maximum voltage and maximum current can be obtained, so the photovoltaic system works most of the time at maximum efficiency with minimum ripple and harmonics. The P&O and incremental conductance algorithms are easy to implement and offer relatively higher efficiencies under rapidly changing conditions than the above algorithms, and employing microcontrollers or DSP processors improves flexibility and response.
ACKNOWLEDGEMENT
We express our sincere thanks to RGMCET for providing good laboratory facilities, and heartfelt gratitude to our beloved supervisor, Professor Ch. Sai Babu Garu, for his tremendous motivation and moral support.
REFERENCES
[1] J Surya Kumari, Ch Sai Babu et.al, “An Enhancement of Static Performance of Multilevel
Inverter for Single Phase Grid Connected Photovoltaic modules”, International journal of Recent Trends in Engineering, Academy Publishers, Finland, Vol. 3, No. 3, May 2010, pp.20-24.
[2] J Surya Kumari, Ch Sai Babu et.al, “ Design and Investigation of Short Circuit Current Based
Maximum Power Point Tracking for Photovoltaic System” International Journal of Research and
Reviews in Electrical and Computer Engineering (IJRRECE) Vol. 1, No. 2, June 2011 ISSN:
2046-5149.
[3] J Surya Kumari, Ch Sai Babu et.al, Mathematical Model of Photovoltaic System with Maximum
Power Point Tracking (MPPT) International Conference on Advances in Engineering and Technology, (ICAET-2011), May 27-28, 2011.
[4] Balakrishna S, Thansoe, Nabil A, Rajamohan G, Kenneth A.S., Ling C. J.’, “The Study And
Evaluation Of Maximum Power Point Tracking Systems”, Proceedings Of International
Conference On Energy And Environment 2006 (ICEE 2006), Organized by University Tenaga
Nasional, Bangi, Selangor, Malaysia; 28-30 August 2006, pp.17-22.
[5] Jawad Ahmad, “A Fractional Open Circuit Voltage Based Maximum Power Point Tracker for
Photovoltaic Arrays”, Proceedings of 2nd
IEEE International Conference on Software Technology
and Engineering, ICSTE 2010, pp. 287-250.
[6] R. Faranda, S. Leva, V. Maugeri, “MPPT techniques for PV systems: energetic and cost
comparison.” Proceedings of IEEE Power and Energy Society General Meeting- Conversion
and Delivery of Electrical Energy in the 21st Century, 2008, pp-1-6.
[7] I.H. Altas; A.M. Sharaf, “A Photovoltaic Array Simulation Model for Matlab-Simulink GUI
Environment”. Proceedings of IEEE, IEEE 2007.
[8] Abu Tariq, M.S. Jamil, “Development of analog maximum power point tracker for photovoltaic
panel.” Proceedings of IEEE International Conference on Power Electronic Drive Systems, 2005,
PEDS 2005, pp-251-255.
[9] M.A.S. Masoum, H. Dehbonei, “Theoretical and experimental analysis of photovoltaic systems
with voltage and current based maximum power point trackers”, IEEE Transactions on Energy
Conversion, vol. 17, No. 4, pp 514-522, Dec 2002.
[10] J.H.R. Enslin, M.S. Wolf, D.B. Snyman and W. Swiegers, “Integrated photovoltaic maximum power point tracking converter”, IEEE Transactions on Industrial Electronics, Vol. 44, pp-769-
773, December 1997.
[11] D.Y. Lee, H.J. Noh, D.S. Hyun and I. Choy, "An improved MPPT converter using current
compensation methods for small scaled PV applications", Proceedings of APEC, 2003, pp. 540-545.
[12] A.K. Mukerjee, Nivedita Dasgupta, "DC power supply used as photovoltaic simulator for testing
MPPT algorithms", Renewable Energy, vol. 32, no. 4, pp. 587-592, 2007.
[13] Katsuhiko Ogata, "Modern Control Engineering", Prentice Hall of India Private Limited.
[14] Chihching Hua and ChihmingShen “Study of Maximum Power Tracking Techniques and control
of DC/DC Converters for Photovoltaic Power Systems” IEEE 1998
[15] Gui-Jia Su “Multilevel DC-Link Inverter” IEEE Transactions on Energy Conversion, vol. 41, No.
3, IEEE-2005
[16] Martina Calais Vassilios G “A Transformer less Five Level Cascaded Inverter Based Single –
Phase Photovoltaic Systems” IEEE-2000.
[17] D.P. Hohm, D.P, M.E. Ropp, “Comparative Study of Maximum Power Point Tracking
Algorithms, Journal of Progress in Photovoltaic: Research and Applications, Wiley Interscience,
vol. 11, no. 1, pp. 47-62, 2003.
[18] D.P Hohm, M.E. Ropp, Comparative Study of Maximum Power Point Tracking Algorithm Using
an Experimental, Programmable, Maximum Power Point Tracking Test Bed. [Online], Available:
IEEE Explore Database [12th July 2006]
[19] V. Salas, E. Olias, A. Barrado, and A. Lazaro, “review of maximum Power Point Tracking
Algorithms for Standalone Photovoltaic systems.” Solar Matter, Solar Cells, vol. 90, no. 11, pp.
1555-1578, July 2006.
[20] Mohammad A.S. Masoum, Hooman Dehbonei and Ewald F.Fuchs “Theoretical and
Experimental Analysis of Photovoltaic System With Voltage –and Current-Based Maximum –
Power- Point –Tracking IEEE Transactions on Energy conversion.Vol.17, No.4, December. 2002.
[21] Yang Chen, Jack brouwer, “A New Maximum –Power- Point –Tracking Controller for
Photovoltaic Power Generation” IEEE 2003.
[22] Yeong –Chau Kuo, Tsorng-Juu Liang, Jiann-Fuh Chen “Novel Maximum –Power- Point –
Tracking Controller for Photovoltaic Energy Conversion System” IEEE Transactions on Industrial Electronics.Vol.48, No.3, June 2001.
[23] K.H.Hussein, IMuta, T.Hoshino, M.Osakada, “Maximum Photovoltaic Power : an algorithm for
rapidly changing atmospheric conditions” IEEE Transactions on Industrial Electronics.Vol.142,
No.1, January 1995.
[24] T.J.Liang J.F.Chen, T.C.Mi, Y.C.Kuo and C.A Cheng “Study and Implemention of DSP- based
Photovoltaic Energy Conversion System”2001 IEEE.
[25] Chihchiang Hua, Jongrong lin and Chihming Shen “Implemention of a DSP-Controlled
Photovoltaic System with Peak Power Tracking” IEEE Transactions on Industrial
Electronics.Vol.45, No.1, February
J. Surya Kumari was born in Kurnool, India in 1981. She received the B.Tech (Electrical and
Electronics Engineering) degree from S.K University, India in 2002 and the M.Tech (High
voltage Engineering) from J.N.T University, Kakinada in 2006. In 2005 she joined the Dept.
Electrical and Electronics Engineering, R.G.M. College of Engineering and Technology, Nandyal,
as an Assistant Professor. She has published several papers in national and international
journals and conferences. Her fields of interest include power electronics, photovoltaic systems,
power systems and high-voltage engineering.
Ch. Sai Babu received the B.E from Andhra University (Electrical & Electronics Engineering),
M.Tech in Electrical Machines and Industrial Drives from REC, Warangal and Ph.D in
Reliability Studies of HVDC Converters from JNTU, Hyderabad. Currently he is working as a
Professor in the Dept. of EEE at JNTUCEK, Kakinada. He has published several papers in national and
international journals and conferences. His areas of interest are power electronics and drives,
power system reliability, HVDC converter reliability, optimization of electrical systems and
real-time energy management.
POWER QUALITY DISTURBANCE ON PERFORMANCE OF
VECTOR CONTROLLED VARIABLE FREQUENCY INDUCTION
MOTOR
A. N. Malleswara Rao1, K. Ramesh Reddy2, B. V. Sanker Ram3
1Research Scholar, JNT University Hyderabad, Hyderabad, India
2G.Narayanamma Institute of Science and Technology, Hyderabad, India
3JNTU College of Engineering, JNTUH, Hyderabad, India
ABSTRACT
Sensitive equipment and non-linear loads are now more common in both the industrial/commercial sectors and
the domestic environment. Because of this a heightened awareness of power quality is developing among
electricity users. Therefore, power quality is an issue that is becoming increasingly important to electricity
consumers at all levels of usage. Continuous variation of single-phase loads on the power system network leads
to voltage variation and unbalance; most importantly, the three-phase voltages tend to become asymmetrical.
Application of asymmetrical voltages to induction motor driven systems severely affects its working
performance. Simulation of an Induction Motor under various voltage sag conditions using Matlab/Simulink is
presented in this paper. Variation of input current, speed and output torque for vector controlled variable
frequency induction motor-drive is investigated. Simulation results show that the variation of speed and current
in motor-drive system basically depends on the size of the dc link capacitor. It is shown that the most reduction
of dc-link voltage happens during voltage sag. It is also observed that as the power quality become poor, the
motor speed decreases, causing significant rise in power input to meet the rated load demand.
KEYWORDS: Power quality disturbance, Sag, Vector Control Induction Drive
I. INTRODUCTION
Electric power quality (PQ) has captured much attention from utility companies as well as their
customers. The major reasons for growing concern are the continued proliferation of sensitive
equipment and the increasing application of power electronics devices, which results in power supply
degradation [1]. PQ has recently acquired intensified interest due to the widespread use of
microprocessor-based devices and controllers in a large number of complicated industrial processes [2].
The proper diagnosis of PQ problems requires a high level of engineering ability. The increased
requirements on supervision, control and performance in modern power systems make power quality
monitoring a common practice for utilities [3].
In general, the main PQ issues can be identified as voltage variation, voltage imbalance, voltage
fluctuations, low frequency, transients, interruptions, harmonic distortion, etc. The consequences of
one or more of these non-ideal conditions may include thermal effects, reduced life expectancy,
loss of dielectric strength and mis-operation of equipment. Furthermore, PQ can have a direct
economic impact on technical as well as financial aspects by increasing power consumption
and the electricity bill [4]. The PQ problems affecting induction motor performance are harmonics, voltage
unbalance, voltage sags, interruptions, etc. Voltage sags are mainly caused by faults on transmission or
distribution systems, and it is normally assumed that they have a rectangular shape [5]. This
assumption is based on neglecting a change in the fault impedance during the fault progress.
However, this assumption does not hold in case of the presence of induction motors and longer
duration faults since the shape of voltage sags in such cases gets deformed due to the motors’ dynamic
responses [6]. When voltage sags appear at the terminals of an induction motor, the torque and speed
of the motor will decrease to levels lower than their nominal values. When the voltage sags are over,
the induction motor attempts to re-accelerate, resulting in drawing an excessive amount of current from
the power supply.
In this paper, first, various types of voltage sag are simulated in the Matlab/Simulink environment.
Thereafter, the performance of a vector-controlled variable-frequency induction motor (VCVF IM)
drive system is simulated, and the results are analyzed in order to identify the parameters affecting the
drive-motor performance.
II. TYPES OF SAGS
Due to different kinds of faults in power systems, different types of voltage sag can be produced.
The different types of transformer connections in the power grid play a significant role in determining
the voltage sag type [7]. Voltage sags are divided into seven groups, types A, B, C, D, E, F and G, as
shown in Table I. In this table "h" indicates the sag magnitude. Type A is symmetrical; the other
types are known as unsymmetrical voltage sags.
There are different power quality problems that can affect the induction motor behaviors such as
voltage sag (affecting torque, power and speed), harmonics (causing losses and affecting torque),
voltage unbalance (causing losses), short interruptions (causing mechanical shock), impulse surges
(affecting isolation), overvoltage (reducing expected life time), and under voltage (causing
overheating and low speed) . There are several power quality issues which until today were normally
not included in motor protection studies. However, they should be taken into consideration due to
their increasing influence. Other actual power quality problems have been considered for many years
now, such as voltage imbalance, under voltages, and interruptions [8].
These problems are intensified today because the power requirements of sensitive equipment and
voltage-frequency pollution have increased drastically in recent years, a trend anticipated to
continue in the near future. Principally, voltage amplitude variations cause the present power
quality problems. Voltage sags cause voltage amplitude reduction together with phase-angle shift
and waveform distortion, and thus have different effects on sensitive equipment. Voltage sags,
voltage swells, overvoltages, and undervoltages are classified as amplitude variations [8].
New power quality requirements have a great effect on motor protection, due to the increasingly
popular fast reconnection to the same source or to an alternative source. The characteristics of both
the motor and supply system load at the reconnection time instant are critical for the motor behavior.
Otherwise harmless voltage sags can be the origin of great load loss (load drop) due to protection
device sensitivity.

TABLE I: Types of sags
Type A:
  Va = hV
  Vb = -(1/2)hV - (1/2)j√3·hV
  Vc = -(1/2)hV + (1/2)j√3·hV

Type B:
  Va = hV
  Vb = -(1/2)V - (1/2)j√3·V
  Vc = -(1/2)V + (1/2)j√3·V
2.1 Symmetrical Faults
The voltage during the fault at the point-of-common coupling (pcc) between the load and the fault can
be calculated from the voltage-divider model shown in Figure 1.
Figure 1. Voltage divider model for voltage sags due to faults.
For three-phase faults, the following expression holds:
V = ZF+/(ZS+ + ZF+) · E ----(1)

where ZS+ and ZF+ are the positive-sequence source impedance at the pcc and the impedance
between the pcc and the fault point, including the fault impedance itself. From this relation it can be
concluded that the current through the faulted feeder is the main cause of the voltage drop [8].
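The voltage-divider relation (1) can be exercised numerically; the complex impedance values below are illustrative, not from a real network study:

```python
# Sketch of the voltage-divider model of eq. (1): sag magnitude at the pcc
# for a three-phase fault, in per unit of the source voltage E.
def sag_voltage(z_s, z_f, e=1.0):
    """V = Z_F / (Z_S + Z_F) * E; impedances may be complex."""
    return z_f / (z_s + z_f) * e

# A fault electrically close to the pcc (small Z_F) gives a deep sag:
deep = abs(sag_voltage(complex(0.1, 1.0), complex(0.01, 0.1)))
# A remote fault (large Z_F) barely depresses the pcc voltage:
shallow = abs(sag_voltage(complex(0.1, 1.0), complex(1.0, 10.0)))
assert deep < 0.2 and shallow > 0.8
```

This matches the observation in the text: the fault current through the feeder impedance is what drags the pcc voltage down, so the sag depth is set by the relative sizes of ZS+ and ZF+.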
2.2 Non-Symmetrical Faults
(Table I, continued)

Type C:
  Va = V
  Vb = -(1/2)V - (1/2)j√3·hV
  Vc = -(1/2)V + (1/2)j√3·hV

Type D:
  Va = hV
  Vb = -(1/2)hV - (1/2)j√3·V
  Vc = -(1/2)hV + (1/2)j√3·V

Type E:
  Va = V
  Vb = -(1/2)hV - (1/2)j√3·hV
  Vc = -(1/2)hV + (1/2)j√3·hV

Type F:
  Va = hV
  Vb = -(1/3)j√3·V - (1/2)hV - (1/6)j√3·hV
  Vc = +(1/3)j√3·V - (1/2)hV + (1/6)j√3·hV

Type G:
  Va = (2/3 + h/3)V
  Vb = -(1/6)(2+h)V - (1/2)j√3·hV
  Vc = -(1/6)(2+h)V + (1/2)j√3·hV

where 0 ≤ h < 1 (h = sag magnitude)
For non-symmetrical faults the expressions are similar but slightly more complicated, leading to the
characterization of unbalanced dips due to non-symmetrical faults. For two-phase-to-ground
and phase-to-phase faults the characteristic voltage is found from (2); for single-phase faults the
zero-sequence quantities also affect the result:
V = [ZF1 + (1/2)(ZF0 + ZS0)] / [ZF1 + ZS1 + (1/2)(ZF0 + ZS0)] · E ----(2)
where ZS0 and ZF0 are the zero-sequence source impedance at the pcc and the zero-sequence
impedance between the fault and the pcc, respectively [9]. For two-phase-to-ground faults it can also
be obtained from:
V = [ZF1 + 2(ZF0 + ZS0)] / [ZF1 + ZS1 + 2(ZF0 + ZS0)] · E ----(3)
The main assumptions behind these equations are that the positive-sequence and negative-sequence
impedances are equal and that all impedances are constant and time independent. They lead to a
“rectangular dip” with a sharp drop in rms voltage, a constant rms voltage during the fault, and a
sharp recovery. Under the assumption of constant impedance, all load impedances can be included in
the source voltage and impedance equivalent, and the voltages at the motor terminals are equal to the
voltages at the PCC.
III. BEHAVIOUR OF AN INDUCTION MOTOR SUPPLIED WITH NON-SINUSOIDAL VOLTAGE
When induction motors are connected to a distorted supply voltage, their losses increase. These losses
can be classified into four groups:
1) Losses in the stator and rotor conductors, known as copper losses or Joule Effect losses.
2) Losses in the terminal sections, due to harmonic dispersion flows.
3) Losses in the iron core, including hysteresis and Foucault (eddy-current) effects; these increase
with the order of the harmonic involved and can reach significant values when feeding motors with
skewed rotors with waveforms which contain high-frequency harmonics [7,8,9].
4) Losses in the air gap. Pulsing harmonic torques are produced by the interaction of the fluxes in
the air gap with those of the rotor harmonic currents, causing an increase in the energy consumed.
These increased losses reduce the motor’s life. Further information on each of the groups is given
below. The effect of the copper losses intensifies in the presence of high frequency harmonics, which
augment the skin effect, reducing the conductors’ effective section and so increasing their physical
resistance [10].
3.1 Induction Motor Behaviour
The study can be done experimentally or analytically by using dynamic load models mainly designed
for stability analysis, but these are rather complicated, requiring precise system data and high-level
software [11-13]. Therefore, in this investigation a simplified study is adopted as a preliminary step. When a
temporary interruption or voltage sag takes place, with time duration between 3 seconds and 1 minute,
the whole production process will be disrupted. Keeping the motor running is useless because most of
the sensitive equipment will drop out. The induction motor should be disconnected, and the restart
process should begin at the supply recovery, taking into account the reduction and control of the hot
load pickup phenomenon.
Keeping the motor connected to the supply during voltage sags and short interruptions, rather than
disconnecting and restarting it, is advantageous from the system’s stability point of view. It is
necessary to avoid the electromagnetic contactor drop out during transients. This scheme improves the
system ride-through ability due to the reduction of the reacceleration inrush [14]. Such problems
result in the initial reduction of the motor speed, keeping for a while a higher voltage supplied by its
internal, or back electromotive force (emf). The voltage reduction is governed by the stored energy
dissipation through the available closed circuits, which are the internal rotor circuit (including the
magnetizing inductance) and the external circuit composed of the load (paralleled by the faulted path
in case of fault-originated voltage sags.) The whole circuit time-constant determines the trend which
the decaying voltage will follow until the final voltage magnitude is reached or the event is ended.
When the transient ends, the motor speed increases demanding more energy from the supply until the
steady state speed is reached. The load torque in this case shows very different characteristics as
compared to normal start up conditions, due to several reasons such as the motor generated voltage
that might be out of phase, heavily loaded machinery, and a rigorous hot-load pickup [15].
As mentioned above, the single line-to-ground fault is the most probable type of fault; through a
∆Y transformer it is transferred as a two-phase voltage sag, in which case normal and extremely deep
voltage sags should be considered as cases of transient unbalanced supply. The effect of voltage
unbalance is the decrease of the developed torque and increase of the copper loss due to the negative-
sequence currents. The thermal effect of the short duration considered can be neglected. Besides,
three-phase voltage events represent the worst stability condition. Therefore, only balanced
phenomena were experimentally studied here, leaving the unbalanced behavior for future
investigation [16],[17].
IV. CASE STUDY AND SIMULATION RESULTS
This paper also investigates the impact of power quality on sensitive devices. At this stage, the focus
is on the operation characteristics of a Vector Controlled Variable Frequency Induction Motor Drive
(as shown in Fig. 2) in the presence of sag events. The motor under consideration is a 50 HP, 460 V,
60 Hz asynchronous machine. A DC voltage of 780 V average is obtained at the DC link from the
diode bridge rectifier, which takes a nominal three-phase (star-connected) input of 580 V rms line-to-line.
Voltage sags are normally described by magnitude variation and duration. In addition to these
quantities, sags are also characterized by unbalance, non-sinusoidal wave shapes, and phase-angle
shifts.
Fig. 2: Vector Controlled Variable Frequency Induction Motor Drive
Fig. 3: Waveforms of three-phase currents and Vdc during an LG fault
Fig. 4: Waveforms of Vabc, Iabc, speed and torque during an LG fault
Fig. 5: Waveforms of three-phase currents and Vdc during an LLG fault
Fig. 6: Waveforms of Vabc, Iabc, speed and torque during an LLG fault
Fig. 7: Waveforms of three-phase currents and Vdc during a three-phase fault
Fig. 8: Waveforms of Vabc, Iabc, speed and torque during a three-phase fault
Figs. 3-8 illustrate the disturbance inputs, the fall in DC link voltage and the change in rotor speed for Case C,
corresponding to the sag event that occurs at time t = 3 seconds when Phase A and Phase B experience
a line to ground fault. The fall in DC link voltage, and the rotor speed are observed for the period of
the event. When normal supply resumes, the DC link voltage stabilises at 780 Volts and the rotor
speed at 120 radians per second. There might be different kinds of short circuit faults on the network
resulting in voltage sags such as single phase-to-ground, phase-to-phase, 2 phase-to-ground and 3
phase-to-ground faults. Studying the speed variation waveform of the induction motor due to the
different voltage sags caused by such faults at a specific place in the network as shown in Figure 5, it
is proved that single phase-to-ground fault causes the least variation in speed profile but a 3 phase-to-
ground fault the highest variations. Also, the ability of the drive to ride-through a voltage sag event is
dependent upon the energy storage capacity of the DC link capacitor, the speed and inertia of the load,
the power consumed by the load, and the trip point settings of the drive. The control system of the
drive has a great impact on the behaviour of the drive during sag and after recovery. The trip point
settings can be adjusted to greatly improve many nuisance trips resulting from minor sags which may
not affect the speed of the motor. Table II shows three cases of inputs “A” to “C” supplied as
unbalanced sags to the above system, and the corresponding outputs observed.

TABLE II: SIMULATION RESULTS

INPUT                                       LG        LLG       3-phase
Sag magnitude (p.u.):      Phase A          0.1       1         0.1
                           Phase B          1         0.1       0.1
                           Phase C          1         0.1       0.1
Start time of sag (sec)                     4         4         4
Duration of sag (sec)                       1         1         1
Phase angle shift (rad):   Phase A          0         0         0
                           Phase B          -1.047    0         0
                           Phase C          1.047     0         0
Load torque (N-m)                           50        50        50
Start time of load (sec)                    0         0         0
Duration of load (sec)                      4         4         4
Reference rotor speed (rad/s)               120       120       120

OBSERVATIONS
Nominal DC link voltage (V)                 780       780       780
DC link voltage during event (V)            450       370       250
Change in DC link voltage (%)               42.3      52.6      68
Rotor speed during event (rad/s)            120       93        25
Change in rotor speed (%)                   0         22.5      79.7
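The percentage changes reported in Table II follow directly from the nominal and during-event values; a quick arithmetic check in Python:

```python
def pct_drop(nominal, during):
    """Percentage change of a quantity relative to its nominal value."""
    return (nominal - during) / nominal * 100.0

# DC link voltage: nominal 780 V, during-event values from Table II.
for case, v in (("LG", 450), ("LLG", 370), ("3-phase", 250)):
    print(f"{case:8s} DC link drop: {pct_drop(780, v):.1f} %")

# Rotor speed: nominal reference 120 rad/s.
for case, w in (("LG", 120), ("LLG", 93), ("3-phase", 25)):
    print(f"{case:8s} speed drop:   {pct_drop(120, w):.1f} %")
```

This reproduces 42.3 %, 52.6 % and (rounded) 68 % for the DC link voltage, and 0 %, 22.5 % and about 79 % for the rotor speed, in line with the table.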
V. CONCLUSIONS
Voltage sags and short-time interruptions are a major power quality problem for the induction motors
utilized in industrial networks. Such events can also lead to unbalanced network voltages; their result
is an effect on the torque, power and speed characteristics of the motor and an increase in the losses.
In this paper, the effects of short interruptions and voltage sags on motor behaviour were studied
through simulations performed in MATLAB, investigating the different behaviours of induction
motors under voltage sags of different origins and other related problems. In addition, the magnitude
of the effect of the different fault sources leading to balanced and unbalanced voltage sags was
observed. The behaviour of a Vector Controlled Variable Frequency Induction Motor Drive in the
presence of sag events has been simulated as our initial investigation of the impact of power quality
on sensitive equipment.
REFERENCES
[1] C. Sankaran, Power Quality, CRC Press, 2002.
[2] M. H. J. Bollen, “The influence of motor reacceleration on voltage sags,” IEEE Trans. on Industry Applications,
Vol. 31, pp. 667–674, July/Aug. 1995.
[3] J. W. Shaffer, “Air conditioner response to transmission faults,” IEEE Trans. on Power System, Vol. 12, pp. 614–
621, May 1997.
[4] E. W. Gunther and H. Mehta, “A survey of distribution system power quality—Preliminary results,” IEEE Trans. on
Power Delivery, Vol. 10, pp. 322–329, Jan. 1995.
[5] L. Tang, J. Lamoree, M. McGranagham, and H. Mehta, “Distribution system voltage sags: Interaction with motor
and drive loads,” in Proc. IEEE Transmiss. Distribut. Conf., pp. 1–6, Chicago, IL, 1994.
[6] D. S. Dorr, M. B. Hughes, T. M. Gruzs, R. E. Jurewicz, and J. L. McClaine, “Interpreting recent power quality
surveys to define the electrical environment,” IEEE Trans. Industry Applicat., vol. 33, pp. 1480–1487, Nov./Dec.
1997.
[7] C. Y. Lee, “Effects of unbalanced voltage on the operation performance of a three-phase induction motor,” IEEE
Trans. Energy Conv., vol. 14, pp. 202–208, June 1999.
[8] M.H.J. Bollen, M. Hager, C. Roxenius, “Effect of induction motors and other loads on voltage dips: Theory and
measurement”, Proc. IEEE PowerTech Conf., June 2003, Italy.
[9] W. H. Kersting, “Causes and Effects of Unbalanced Voltages Serving an Induction Motor”, IEEE Trans. on Industry
Applications, Vol. 37, No. 1, pp. 165-170, January/February 2001.
[10] G. Yalcinkaya, M. J. Bollen, P. A. Crossley, “Characterization of Voltage Sags in Industrial Distribution Systems”,
IEEE Trans. on Industry Applications, Vol. 34, No. 4, pp. 682-688, July 1998.
[11] S. S. Mulukutla and E. M. Gualachenski, “A critical survey of considerations in maintaining process continuity
during voltage dips while protecting motors with reclosing and bus-transfer practices,” IEEE Trans. Power Syst.,
vol. 7, pp. 1299–1305, Aug. 1992.
[12] J. C. Das, “Effects of momentary voltage dips on the operation of induction and synchronous motors,” IEEE Trans.
Industry Applicat., vol. 26, pp. 711–718, July/Aug. 1990.
[13] T. S. Key, “Predicting behavior of induction motors during service faults and interruptions,” IEEE Industry
Applicat. Mag., vol. 1, pp. 6–11, Jan. 1995.
[14] J.C. Gomez, M.M. Morcos, C.A. Reineri, G.N.Campetelli, “Behaviour of Induction Motor Due to Voltage Sags
and Short Interruptions”, IEEE Trans. on Power Delivery, Vol. 17, No. 2, pp. 434-440, April 2002.
[15] J.C. Gomez, M.M. Morcos, C. Reineri, G. Campetelli, “Induction motor behaviour under short interruptions and
voltage sags: An experimental study,” IEEE Power Eng. Rev., Vol. 21, pp. 11–15, Feb. 2001.
[16] A. N. Malleswara Rao, K. Ramesh Reddy and B. V. Sanker Ram, “A new approach to diagnosis of power
quality problems using Expert system”, International Journal of Advanced Engineering Sciences and
Technologies, Vol. 7, No. 2, pp. 290–297.
[17] A. N. Malleswara Rao, K. Ramesh Reddy and B. V. Sanker Ram, “Effects of Harmonics in an Electrical
System”, International Journal of Advances in Science and Technology, Vol. 3, No. 2, pp. 25–30.
AUTHORS
A. N. Malleswara Rao received B.E. in Electrical and Electronics Engineering from Andhra
University, Visakhapatnam, India in 1999, and M.Tech in Electrical Engineering from JNT
University, Hyderabad, India. He is a Ph.D student at the Department of Electrical Engineering, JNT
University, Hyderabad, India. His research and study interests include power quality and power
electronics.
K. Ramesh Reddy received B.Tech. in Electrical and Electronics Engineering from Nagarjuna
University, Nagarjuna Nagar, India in 1985, M.Tech in Electrical Engineering from the National
Institute of Technology (formerly Regional Engineering College), Warangal, India in 1989, and
Ph.D from SV University, Tirupathi, India in 2004. Presently he is Head of the Department and
Dean of PG Studies in the Department of Electrical & Electronics Engineering, G. Narayanamma
Institute of Technology & Science (for Women), Hyderabad, India. Prof. Ramesh Reddy is the
author of 16 journal and conference papers and of two textbooks. His research and study interests
include power quality, harmonics in power systems and multi-phase systems.
B. V. Sanker Ram received B.E. in Electrical Engineering from Osmania University,
Hyderabad, India in 1982, M.Tech in Power Systems from Osmania University, Hyderabad,
India in 1984, and Ph.D from JNT University, Hyderabad, India in 2003. Presently he is
professor in Electrical & Electronics Engineering, JNT University, Hyderabad, India. Prof.
Sanker Ram is an author of about 25 journal and conference papers. His research and study
interests include power quality, control systems and FACTS.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
158 Vol. 1, Issue 5, pp. 158-169
INTELLIGENT INVERSE KINEMATIC CONTROL OF
SCORBOT-ER V PLUS ROBOT MANIPULATOR
Himanshu Chaudhary and Rajendra Prasad
Department of Electrical Engineering, IIT Roorkee, India
ABSTRACT
In this paper, an Adaptive Neuro-Fuzzy Inference System (ANFIS) method based on the Artificial Neural
Network (ANN) is applied to design a controller for the inverse kinematical control of the SCORBOT-ER V
Plus. The proposed ANFIS controller combines the advantages of a fuzzy controller with the quick response
and adaptability of an Artificial Neural Network (ANN). The ANFIS structures were trained using the database
generated by the fuzzy controller of the SCORBOT-ER V Plus. The performance of the proposed system has
been compared with the experimental setup prepared with the SCORBOT-ER V Plus robot manipulator.
Computer simulation is conducted to demonstrate the accuracy of the proposed controller in generating the
appropriate joint angles for reaching a desired Cartesian state, without any error. The entire system has been
modeled using MATLAB 2011.
KEYWORDS: DOF, BPN, ANFIS, ANN, RBF, BP
I. INTRODUCTION
The inverse kinematic solution plays an important role in the modelling of a robotic arm. As the DOF (Degrees
of Freedom) of a robot increases, it becomes a difficult task to find the solution through inverse kinematics.
The three traditional methods used for calculating the inverse kinematics of a robot manipulator are the
geometric [1][2], algebraic [3][4][5] and iterative [6] methods. Algebraic methods cannot guarantee closed-form
solutions. Geometric methods must have closed-form solutions for the first three joints of the manipulator
geometrically. The iterative methods converge only to a single solution, and this solution depends on the
starting point.
The architecture and learning procedure underlying ANFIS, a fuzzy inference system implemented in the
framework of adaptive networks, was presented in [7]. By using a hybrid learning procedure, the proposed
ANFIS was able to construct an input-output mapping based on both human knowledge (in the form of fuzzy
if-then rules) and stipulated input-output data pairs.
A neuro-genetic approach for solving the inverse kinematics problem of robotic manipulators was proposed
in [8]. A multilayer feed-forward network was applied to the inverse kinematic problem of a 3-degrees-of-freedom (DOF) spatial manipulator robot in [9] to obtain an algorithmic solution.
To solve the inverse kinematics problem for three different cases of a 3-DOF manipulator in 3D space, a
solution using feed-forward neural networks was proposed in [10]. This brings the fault-tolerance and
high-speed advantages of neural networks to the inverse kinematics problem.
A three-layer partially recurrent neural network was proposed in [11] for trajectory planning and for solving
the inverse kinematics as well as the inverse dynamics problems in a single processing stage for the
PUMA 560 manipulator.
A hierarchical control technique for controlling a robotic manipulator was proposed in [12]. It was based on
the establishment of a non-linear mapping between Cartesian and joint coordinates using fuzzy logic in order
to direct each individual joint. A commercial Microbot with three degrees of freedom was utilized to evaluate
this methodology.
A structured neural-network based solution that could be trained quickly was suggested in [13]. The proposed
method yields multiple and precise solutions and is suitable for real-time applications.
To overcome the discontinuity of the inverse kinematics function, a novel modular neural network system
consisting of a number of expert neural networks was proposed in [14].
A neural-network based inverse kinematics solution of a robotic manipulator was suggested in [15]. In this
study, three-joint robotic manipulator simulation software was developed and a designed neural network was
then used to solve the inverse kinematics problem.
An Artificial Neural Network (ANN) using the backpropagation algorithm was applied in [16] to solve
inverse kinematics problems of an industrial robot manipulator.
The inverse kinematic solution of the MOTOMAN manipulator using an Artificial Neural Network was
implemented in [17]. Radial basis function (RBF) networks were used to represent the nonlinear mapping
between the joint space and the operation space of the robot manipulator, which in turn showed better
computation precision and faster convergence than back propagation (BP) networks.
The Bees Algorithm was used to train multi-layer perceptron neural networks in [18] to model the inverse
kinematics of an articulated robot manipulator arm.
This paper is organized into five sections. In the next section, the kinematics analysis (forward as well as
inverse kinematics) of the SCORBOT-ER V Plus is derived with the help of the DH algorithm as well as
conventional techniques such as the geometric [1][2], algebraic [3][4][5] and iterative [6] methods. The basics
of ANFIS are introduced in section 3, which also explains the input selection for ANFIS modeling. Simulation
results are discussed in section 4. Section 5 gives concluding remarks.
II. KINEMATICS OF SCORBOT-ER V PLUS
The SCORBOT-ER V Plus [19] is a vertically articulated robot with five revolute joints: a stationary
base, shoulder, elbow, tool pitch and tool roll. Figure 1.1 identifies the joints and links of the
mechanical arm.
2.1. SCORBOT–ER V PLUS STRUCTURE
All joints are revolute, and with an attached gripper the arm has six degrees of freedom. Each joint is
restricted by its mechanical rotation limits, shown below.
Joint Limits:
Axis 1: Base Rotation: 310°
Axis 2: Shoulder Rotation: +130° / −35°
Axis 3: Elbow Rotation: ±130°
Axis 4: Wrist Pitch: ±130°
Axis 5: Wrist Roll: mechanically unlimited (electrically 570°)
Maximum Gripper Opening: 75 mm (3") without rubber pads, 65 mm (2.6") with rubber pads
The length of the links and the degree of rotation of the joints determine the robot’s work envelope.
Figure 1.2 and 1.3 show the dimensions and reach of the SCORBOT-ER V Plus. The base of the robot
is normally fixed to a stationary work surface. It may, however, be attached to a slide base, resulting
in an extended working range.
2.2. FRAME ASSIGNMENT TO SCORBOT–ER V PLUS
For the kinematic model of the SCORBOT we first have to assign a frame to each link, starting from the base
(frame 0) to the end-effector (frame 5). The frame assignment is shown in figure 1.4.
In this model, frame 3 and frame 4 coincide at the same joint, and frame 5 is the end-effector
position in space.
Joint i   α_i      a_i (mm)   d_i (mm)   θ_i          Operating range
1         −π/2     16         349        θ_1          −155° to +155°
2         0        221        0          θ_2          −35° to +130°
3         0        221        0          θ_3          −130° to +130°
4         π/2      0          0          π/2 + θ_4    −130° to +130°
5         0        0          145        θ_5          −570° to +570°
2.3. FORWARD KINEMATIC OF SCORBOT–ER V PLUS
Once the DH coordinate system has been established for each link, a homogeneous transformation
matrix can easily be developed considering frame i-1 and frame i. This transformation consists of
four basic transformations.
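Such a link transform can be assembled directly from the four standard DH parameters (θ_i, d_i, a_i, α_i). A minimal sketch in Python (illustrative, not the authors' MATLAB code; it assumes the standard DH convention used in the table above):

```python
import math

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform from frame i-1 to frame i for the
    standard DH parameters (theta, d, a, alpha)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

# Link 1 of the SCORBOT table: alpha = -pi/2, a = 16 mm, d = 349 mm.
T01 = dh_transform(0.0, 349.0, 16.0, -math.pi / 2)
for row in T01:
    print(row)
```

At θ_1 = 0 this reproduces the structure of the first link transform: the x-translation a_1 = 16, the z-offset d_1 = 349, and the −90° twist that maps y_1 onto −z_0.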
$^{0}T_{5} = {}^{0}T_{1} \cdot {}^{1}T_{2} \cdot {}^{2}T_{3} \cdot {}^{3}T_{4} \cdot {}^{4}T_{5}$    (1)
$^{0}T_{1} = \begin{bmatrix} C_1 & 0 & -S_1 & a_1 C_1 \\ S_1 & 0 & C_1 & a_1 S_1 \\ 0 & -1 & 0 & d_1 \\ 0 & 0 & 0 & 1 \end{bmatrix}$    (2)
$^{1}T_{2} = \begin{bmatrix} C_2 & -S_2 & 0 & a_2 C_2 \\ S_2 & C_2 & 0 & a_2 S_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$    (3)
$^{2}T_{3} = \begin{bmatrix} C_3 & -S_3 & 0 & a_3 C_3 \\ S_3 & C_3 & 0 & a_3 S_3 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$    (4)
$^{3}T_{4} = \begin{bmatrix} -S_4 & 0 & C_4 & 0 \\ C_4 & 0 & S_4 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$    (5)
$^{4}T_{5} = \begin{bmatrix} C_5 & -S_5 & 0 & 0 \\ S_5 & C_5 & 0 & 0 \\ 0 & 0 & 1 & d_5 \\ 0 & 0 & 0 & 1 \end{bmatrix}$    (6)
Finally, the overall transformation matrix is as follows:

$^{0}T_{5} = T = \begin{bmatrix} -S_1 S_5 - C_1 C_5 S_{234} & -S_1 C_5 + C_1 S_5 S_{234} & C_1 C_{234} & C_1 (a_1 + a_2 C_2 + a_3 C_{23} + d_5 C_{234}) \\ C_1 S_5 - S_1 C_5 S_{234} & C_1 C_5 + S_1 S_5 S_{234} & S_1 C_{234} & S_1 (a_1 + a_2 C_2 + a_3 C_{23} + d_5 C_{234}) \\ -C_5 C_{234} & S_5 C_{234} & -S_{234} & d_1 - a_2 S_2 - a_3 S_{23} - d_5 S_{234} \\ 0 & 0 & 0 & 1 \end{bmatrix}$    (7)

where $C_i = \cos(\theta_i)$, $S_i = \sin(\theta_i)$, $C_{ijk} = \cos(\theta_i + \theta_j + \theta_k)$ and $S_{ijk} = \sin(\theta_i + \theta_j + \theta_k)$. T is the overall transformation
matrix of the kinematic model of the SCORBOT-ER V Plus; the extraction from it of the position and
orientation of the end-effector with respect to the base is done in the following section.
2.4. OBTAINING POSITION IN CARTESIAN SPACE
The values of X, Y and Z are found from the last column of the transformation matrix as:

$X = C_1 (a_1 + a_2 C_2 + a_3 C_{23} + d_5 C_{234})$    (8)
$Y = S_1 (a_1 + a_2 C_2 + a_3 C_{23} + d_5 C_{234})$    (9)
$Z = d_1 - a_2 S_2 - a_3 S_{23} - d_5 S_{234}$    (10)
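The position equations above can be checked numerically. A short Python sketch using the link values from the DH table (a_1 = 16, a_2 = a_3 = 221, d_5 = 145, d_1 = 349, all in mm):

```python
import math

A1, A2, A3, D1, D5 = 16.0, 221.0, 221.0, 349.0, 145.0

def fk_position(t1, t2, t3, t4):
    """End-effector position from the closed-form position equations."""
    t234 = t2 + t3 + t4
    r = A1 + A2 * math.cos(t2) + A3 * math.cos(t2 + t3) + D5 * math.cos(t234)
    x = math.cos(t1) * r
    y = math.sin(t1) * r
    z = D1 - A2 * math.sin(t2) - A3 * math.sin(t2 + t3) - D5 * math.sin(t234)
    return x, y, z

# Home position (all joint angles zero) should give (603, 0, 349),
# matching the home-position matrix derived in section 2.5.
print(fk_position(0.0, 0.0, 0.0, 0.0))  # -> (603.0, 0.0, 349.0)
```

The 603 mm horizontal reach is simply a_1 + a_2 + a_3 + d_5, and the 349 mm height is the base offset d_1.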
For the orientation of the end-effector, frame 5 should coincide with the base frame axes, but in our
model it does not; so a rotation of −90° of frame 5 about the y_5 axis is taken, and the overall
rotation matrix is multiplied by $R_y(-90°)$ as follows:

$R_y(-90°) = \begin{bmatrix} \cos(-90°) & 0 & \sin(-90°) \\ 0 & 1 & 0 \\ -\sin(-90°) & 0 & \cos(-90°) \end{bmatrix} = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}$    (11)
The rotation matrix is then:

$R = R_y(-90°) \times \begin{bmatrix} -S_1 S_5 - C_1 C_5 S_{234} & -S_1 C_5 + C_1 S_5 S_{234} & C_1 C_{234} \\ C_1 S_5 - S_1 C_5 S_{234} & C_1 C_5 + S_1 S_5 S_{234} & S_1 C_{234} \\ -C_5 C_{234} & S_5 C_{234} & -S_{234} \end{bmatrix} = \begin{bmatrix} C_5 C_{234} & -S_5 C_{234} & S_{234} \\ C_1 S_5 - S_1 C_5 S_{234} & C_1 C_5 + S_1 S_5 S_{234} & S_1 C_{234} \\ -S_1 S_5 - C_1 C_5 S_{234} & -S_1 C_5 + C_1 S_5 S_{234} & C_1 C_{234} \end{bmatrix}$    (12)
Pitch: the pitch is the angle of rotation about the y_5 axis of the end-effector:

$\beta = \theta_2 + \theta_3 + \theta_4 = \theta_{234}$    (13)
$\theta_{234} = \operatorname{atan2}\!\left(r_{13},\; \pm\sqrt{r_{23}^2 + r_{33}^2}\right)$    (14)

Here atan2 is used because its range is $[-\pi, \pi]$, whereas the range of atan is $[-\pi/2, \pi/2]$.

Roll: the roll $\gamma = \theta_5$ is derived as follows:

$\theta_5 = \operatorname{atan2}\!\left(-r_{12}/C_{234},\; r_{11}/C_{234}\right)$    (15)

Yaw: for the SCORBOT the yaw is not free and is bound to $\theta_1$.
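The pitch and roll extraction can be sketched in Python: the rotation matrix of equation (12) is rebuilt from assumed joint angles and then decomposed with the atan2 formulas of equations (14) and (15) (taking the positive square-root branch):

```python
import math

def rotation(t1, t234, t5):
    """Rotation matrix of equation (12) built from the joint angles."""
    c1, s1 = math.cos(t1), math.sin(t1)
    c234, s234 = math.cos(t234), math.sin(t234)
    c5, s5 = math.cos(t5), math.sin(t5)
    return [
        [c5 * c234, -s5 * c234, s234],
        [c1 * s5 - s1 * c5 * s234, c1 * c5 + s1 * s5 * s234, s1 * c234],
        [-s1 * s5 - c1 * c5 * s234, -s1 * c5 + c1 * s5 * s234, c1 * c234],
    ]

def pitch_roll(r):
    """Pitch t234 and roll t5 recovered from R (positive-root branch)."""
    t234 = math.atan2(r[0][2], math.hypot(r[1][2], r[2][2]))
    c234 = math.cos(t234)
    t5 = math.atan2(-r[0][1] / c234, r[0][0] / c234)
    return t234, t5

R = rotation(0.4, 0.6, 0.7)
print(pitch_roll(R))  # recovers pitch 0.6 and roll 0.7
```

Note that $\sqrt{r_{23}^2 + r_{33}^2} = |C_{234}|$, which is why the positive branch recovers the pitch directly when $C_{234} > 0$.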
2.5. HOME POSITION IN MODELING
At the home position all angles are zero, so putting $\theta_1 = \theta_2 = \theta_3 = \theta_4 = \theta_5 = 0$ in equation (7), the
transformation matrix reduces to:

$T_{Home} = \begin{bmatrix} 0 & 0 & 1 & a_1 + a_2 + a_3 + d_5 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & d_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 603 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 349 \\ 0 & 0 & 0 & 1 \end{bmatrix}$    (16)

The home-position transformation matrix gives the orientation and position of the end-effector frame.
From the 3×3 rotation submatrix, frame 5 is rotated relative to frame 0 such that the $z_5$ axis is parallel
to, and in the same direction as, the $x_0$ axis of the base frame; $y_5$ is parallel to, and in the same
direction as, the $y_0$ axis; and $x_5$ is parallel to $z_0$ but in the opposite direction. The position is
given by the 3×1 displacement vector $[a_1 + a_2 + a_3 + d_5,\; 0,\; d_1]^T$.
2.6. INVERSE KINEMATICS OF SCORBOT-ER V PLUS
For the SCORBOT we have five parameters in Cartesian space: x, y, z, roll ($\gamma$) and pitch ($\beta$). For joint-parameter evaluation we have to construct the transformation matrix from these five Cartesian
parameters. For that, a rotation matrix is generated which depends only on the roll, pitch and yaw of
the robotic arm. For the SCORBOT there is no free yaw; it is the rotation of the first joint, $\theta_1$.
So the calculation of the yaw is as follows:

$\alpha = \theta_1 = \operatorname{atan2}(y, x)$    (17)
Now, for the rotation matrix, rotate frame 5 by an angle $-\gamma$ about its x axis, then rotate the new frame 5′
by an angle $\beta$ about its own principal axis $y'$, and finally rotate the new frame 5′′ by an angle $\alpha$ about
its own principal axis $z''$:

$R = R_x(-\gamma) \cdot R_y(\beta) \cdot R_z(\alpha) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & C_\gamma & S_\gamma \\ 0 & -S_\gamma & C_\gamma \end{bmatrix} \times \begin{bmatrix} C_\beta & 0 & S_\beta \\ 0 & 1 & 0 \\ -S_\beta & 0 & C_\beta \end{bmatrix} \times \begin{bmatrix} C_\alpha & -S_\alpha & 0 \\ S_\alpha & C_\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix}$

$= \begin{bmatrix} C_\alpha C_\beta & -C_\beta S_\alpha & S_\beta \\ C_\gamma S_\alpha - S_\gamma S_\beta C_\alpha & C_\gamma C_\alpha + S_\gamma S_\beta S_\alpha & S_\gamma C_\beta \\ -S_\gamma S_\alpha - C_\gamma S_\beta C_\alpha & -S_\gamma C_\alpha + C_\gamma S_\beta S_\alpha & C_\gamma C_\beta \end{bmatrix}$    (18)
Now rotate the matrix by −90° about the y axis:

$R_y(-90°) = \begin{bmatrix} \cos(-90°) & 0 & \sin(-90°) \\ 0 & 1 & 0 \\ -\sin(-90°) & 0 & \cos(-90°) \end{bmatrix} = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}$    (19)
After pre-multiplying equation (18) by equation (19), one gets the following rotation matrix:

$R = \begin{bmatrix} S_\gamma S_\alpha + C_\gamma S_\beta C_\alpha & S_\gamma C_\alpha - C_\gamma S_\beta S_\alpha & -C_\gamma C_\beta \\ C_\gamma S_\alpha - S_\gamma S_\beta C_\alpha & C_\gamma C_\alpha + S_\gamma S_\beta S_\alpha & S_\gamma C_\beta \\ C_\alpha C_\beta & -C_\beta S_\alpha & S_\beta \end{bmatrix}$    (20)
So the total transformation matrix is as follows:

$T = \begin{bmatrix} S_\gamma S_\alpha + C_\gamma S_\beta C_\alpha & S_\gamma C_\alpha - C_\gamma S_\beta S_\alpha & -C_\gamma C_\beta & X \\ C_\gamma S_\alpha - S_\gamma S_\beta C_\alpha & C_\gamma C_\alpha + S_\gamma S_\beta S_\alpha & S_\gamma C_\beta & Y \\ C_\alpha C_\beta & -C_\beta S_\alpha & S_\beta & Z \\ 0 & 0 & 0 & 1 \end{bmatrix}$    (21)
After comparing the transformation matrix in equation (7) with the matrix in equation (21), one can
deduce:

$\theta_1 = \alpha, \quad \theta_{234} = \beta, \quad \theta_5 = \gamma.$
Now we have $\theta_1$ and $\theta_5$ directly, but $\theta_2$, $\theta_3$ and $\theta_4$ are merged in $\theta_{234}$, so they have to be
separated; for this the geometric solution method shown in Figure 1.6 is used.
For finding $\theta_2$, $\theta_3$ and $\theta_4$ we have X, Y, Z in Cartesian coordinate space, from which we can take:

$X_1 = \sqrt{X^2 + Y^2} \quad \text{and} \quad Y_1 = Z$    (22)

We have the pitch of the end-effector, $\theta_{234} = \beta$; from that the point $(X_2, Y_2)$ is calculated as follows:

$X_2 = X_1 - d_5 \cos\theta_{234}$
$Y_2 = Y_1 + d_5 \sin\theta_{234}$    (23)
Now the distances $X_3$ and $Y_3$ can be found:

$X_3 = X_2 - a_1, \quad Y_3 = Y_2$

From the law of cosines applied to triangle ABC, we have:

$\cos\theta_3 = \frac{X_3^2 + Y_3^2 - a_2^2 - a_3^2}{2\, a_2 a_3}$

$\theta_3 = \operatorname{atan2}\!\left(\pm\sqrt{1 - \cos^2\theta_3},\; \cos\theta_3\right)$    (24)

From figure 1.6, $\theta_2 = -\phi - \psi$, or

$\theta_2 = -\operatorname{atan2}(Y_3, X_3) - \operatorname{atan2}\!\left(a_3 \sin\theta_3,\; a_2 + a_3 \cos\theta_3\right)$    (25)

Finally we get:

$\theta_4 = \theta_{234} - \theta_2 - \theta_3$    (26)
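The geometric solution can be sketched end-to-end in Python. One assumption is made explicit here: the vertical coordinate is taken relative to the shoulder height, i.e. Y_1 = Z − d_1, which the triangle construction of figure 1.6 implicitly requires; the link values are those of the DH table, and one elbow branch of equation (24) is chosen:

```python
import math

A1, A2, A3, D1, D5 = 16.0, 221.0, 221.0, 349.0, 145.0

def inverse_kinematics(x, y, z, pitch):
    """Geometric IK sketch; pitch = t2 + t3 + t4. Returns one elbow branch."""
    t1 = math.atan2(y, x)                       # yaw = theta1
    x1 = math.hypot(x, y)
    y1 = z - D1                                 # assumed: measured from shoulder
    x2 = x1 - D5 * math.cos(pitch)              # remove the tool offset
    y2 = y1 + D5 * math.sin(pitch)
    x3, y3 = x2 - A1, y2
    c3 = (x3 * x3 + y3 * y3 - A2 * A2 - A3 * A3) / (2 * A2 * A3)
    t3 = math.atan2(math.sqrt(1 - c3 * c3), c3)           # one branch
    t2 = -math.atan2(y3, x3) - math.atan2(A3 * math.sin(t3),
                                          A2 + A3 * math.cos(t3))
    t4 = pitch - t2 - t3
    return t1, t2, t3, t4

# Round trip against the closed-form position equations:
t = (0.4, 0.3, 0.5, -0.2)
t234 = t[1] + t[2] + t[3]
r = A1 + A2 * math.cos(t[1]) + A3 * math.cos(t[1] + t[2]) + D5 * math.cos(t234)
x, y = math.cos(t[0]) * r, math.sin(t[0]) * r
z = D1 - A2 * math.sin(t[1]) - A3 * math.sin(t[1] + t[2]) - D5 * math.sin(t234)
print(inverse_kinematics(x, y, z, t234))  # recovers (0.4, 0.3, 0.5, -0.2)
```

The round trip recovers the original joint angles, confirming that the distances, the law-of-cosines step, and the sign convention of the shoulder angle are mutually consistent under the stated assumption.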
III. INVERSE KINEMATICS OF SCORBOT-ER V PLUS USING ADAPTIVE
NEURO FUZZY INFERENCE SYSTEM (ANFIS)
The proposed ANFIS [7][20][21] controller is based on a Sugeno-type Fuzzy Inference System (FIS)
controller. The parameters of the FIS are governed by the neural-network back propagation method.
The ANFIS controller is designed by taking the Cartesian coordinates plus the pitch as the inputs, and the
joint angles of the manipulator needed to reach a particular coordinate in three-dimensional space as the
outputs. The output stabilizing signals, i.e., the joint angles, are computed using the fuzzy membership
functions depending on the input variables. The effectiveness of the proposed approach to the modeling is
implemented with the help of a program specially written for this purpose in MATLAB. The information
related to the training data is given in Table 1.2.
Table 1.2: ANFIS training data details

Sr.   Manipulator  No. of   No. of Parameters    Total No. of  No. of Training  No. of Checking  No. of
No.   Angle        Nodes    Linear   Nonlinear   Parameters    Data Pairs       Data Pairs       Fuzzy Rules
01.   Theta1       193      405      36          441           4500             4500             81
02.   Theta2       193      405      36          441           4500             4500             81
03.   Theta3       193      405      36          441           4500             4500             81
04.   Theta4       193      405      36          441           4500             4500             81
The procedure executed to train the ANFIS is as follows:
(1) Data generation: To design the ANFIS controller, the training data have been generated by using
an experimental setup with the SCORBOT-ER V Plus. A MATLAB program is written to govern the
manipulator to get the input-output data set. 9000 samples were recorded through the execution of the
program for the input variables, i.e., the Cartesian coordinates as well as the pitch. The Cartesian
coordinate combinations for all thetas are given in Fig. 1.7.
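The data-generation step can be illustrated in Python (the authors used the physical robot plus a MATLAB program; the sketch below instead samples random joint angles within the joint limits of the DH table and computes the corresponding Cartesian inputs through the forward-position equations):

```python
import math
import random

A1, A2, A3, D1, D5 = 16.0, 221.0, 221.0, 349.0, 145.0
LIMITS = [(-155, 155), (-35, 130), (-130, 130), (-130, 130)]  # degrees

def sample_pair(rng):
    """One (input, output) training pair: inputs are (x, y, z, pitch),
    outputs are the four joint angles (radians)."""
    t = [math.radians(rng.uniform(lo, hi)) for lo, hi in LIMITS]
    t234 = t[1] + t[2] + t[3]
    r = A1 + A2 * math.cos(t[1]) + A3 * math.cos(t[1] + t[2]) + D5 * math.cos(t234)
    x, y = math.cos(t[0]) * r, math.sin(t[0]) * r
    z = D1 - A2 * math.sin(t[1]) - A3 * math.sin(t[1] + t[2]) - D5 * math.sin(t234)
    return (x, y, z, t234), tuple(t)

rng = random.Random(0)                           # fixed seed for repeatability
data = [sample_pair(rng) for _ in range(9000)]   # 9000 samples, as in the paper
train, check = data[:4500], data[4500:]          # the 4500/4500 split of Table 1.2
print(len(train), len(check))
```

Each pair maps the four Cartesian inputs onto the four joint-angle outputs, which is exactly the input/output structure the ANFIS is trained on.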
(2) Rule extraction and membership functions: After generating the data, the next step is to estimate
the initial rules. A hybrid learning algorithm is used for training to modify the parameters after
obtaining the fuzzy inference system from subtractive clustering. This algorithm iteratively learns the
parameters of the premise membership functions and optimizes them with the help of back propagation
and least-squares estimation. The training is continued until the error is minimized. Triangular-shaped
membership functions were used for the inputs as well as the output. The final fuzzy inference
system chosen was the one associated with the minimum checking error, as shown in figure 1.8,
which shows the final membership functions for the thetas after training.
Figure 1.8: Final triangular membership functions (three per input, in1mf1-in4mf3) for the four inputs, shown for θ1, θ2, θ3 and θ4.
(3) Results: The ANFIS learning was tested on a variety of linear and nonlinear processes. The
ANFIS was trained initially with 2 membership functions per input on 9000 data samples for each input as
well as output. Later, this was increased to 3 membership functions for each input. To demonstrate the
effectiveness of the proposed combination, the results are reported for a system with 81 rules and for a
system with an optimized rule base. After reducing the rules, the computation becomes faster and it also
consumes less memory. The ANFIS architecture for θ1 is shown in Fig. 1.9.
Five angles have been considered for the representation of the robotic arm, but as θ5 is independent
of the other angles, only the remaining four angles were considered to calculate the forward kinematics.
Now, for every combination of θ1, θ2, θ3 and θ4 values, the x, y and z coordinates are deduced using
the forward kinematics formulas.
IV. SIMULATION RESULTS AND DISCUSSION
The plots displaying the root-mean-square error are shown in figure 1.10. The plot in blue represents
error1, the error for the training data. The plot in green represents error2, the error for the checking data.
From the figure one can see that there is almost no difference between the training error and the
checking error after the completion of the training of the ANFIS.
Figure 1.10: RMSE (root mean squared error) curves over 20 epochs for θ1, θ2, θ3 and θ4.
In addition to the above error plots, the plots showing the ANFIS thetas versus the actual thetas are given
in figures 1.11, 1.12, 1.13 and 1.14 respectively. The difference between the original theta values and
the values estimated using ANFIS is very small.
Figures 1.11 to 1.14: Experimental versus ANFIS-predicted values of θ1, θ2, θ3 and θ4.
The prediction errors for all thetas appear in figures 1.15, 1.16, 1.17 and 1.18 respectively, on a much finer
scale. The ANFIS was trained initially for only 10 epochs. After that, the number of epochs was increased to
20 to apply more extensive training and obtain better performance.
Figures 1.15 to 1.18: Prediction errors for θ1, θ2, θ3 and θ4.
V. CONCLUSION
From the experimental work one can see that the accuracy of the output of the ANFIS based inverse
kinematic model is nearly equal to the actual mathematical model output; hence this model can be
used as an internal model for solving trajectory-tracking problems of higher degree-of-freedom (DOF)
robot manipulators. A single camera has been used in the present work for the reverse mapping from
camera coordinates to real-world coordinates; if two cameras are used, stereo vision can be achieved
and providing the height of an object as an input parameter would not be required. The methodology
presented here can be extended for trajectory planning and quite a few tracking applications
with real-world disturbances. The present work did not make use of color image processing; making
use of color image processing can help differentiate objects according to their colors along with their
shapes.
ACKNOWLEDGEMENTS
As in almost all areas of human endeavour, development in the field of robotics has been carried forward by engineers and scientists all over the world. It is a duty to express appreciation for the relevant, interesting and outstanding work to which ample reference is made in this paper.
Authors
Himanshu Chaudhary received his B.E. in Electronics and Telecommunication from Amravati University, Amravati, India in 1996 and his M.E. in Automatic Controls and Robotics from M.S. University, Baroda, Gujarat, India in 2000. Presently he is a research scholar in the Electrical Engineering Department, IIT Roorkee, India. His areas of interest include industrial robotics, computer networks and embedded systems.
Rajendra Prasad received the B.Sc. (Hons.) degree from Meerut University, India in 1973. He received the B.E., M.E. and Ph.D. degrees in Electrical Engineering from the University of Roorkee, India in 1977, 1979 and 1990 respectively. He served as an Assistant Engineer in the Madhya Pradesh Electricity Board (MPEB) from 1979 to 1983. Currently, he is a Professor in the Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee, India. He has more than 32 years of teaching and industry experience. He has published 176 papers in various journals and conferences and received eight awards for his publications in national and international journals and conference proceedings. He has guided seven Ph.D. students, and six more are currently in progress. His research interests include control, optimization, system engineering, model order reduction of large-scale systems and industrial robotics.
FAST AND EFFICIENT METHOD TO ASSESS AND ENHANCE TOTAL TRANSFER CAPABILITY IN PRESENCE OF FACTS DEVICE
K. Chandrasekar1 and N. V. Ramana2
1Department of EEE, Tagore Engineering College, Chennai, TN, India
2Department of EEE, JNTUHCEJ, Nachupally, Karimnagar Dist, AP, India
ABSTRACT
This paper presents the application of a Genetic Algorithm (GA) to assess and enhance Total Transfer Capability (TTC) using Flexible AC Transmission System (FACTS) devices during power system planning and operation. Conventionally, TTC is assessed using Repeated Power Flow (RPF), Continuation Power Flow (CPF) or Optimal Power Flow (OPF) based methods, which normally use the Newton-Raphson (NR) method, and TTC is enhanced by optimally locating FACTS devices using an optimization algorithm. This increases the CPU time and also limits the search space, resulting in a locally optimal TTC value. To eliminate this drawback, this paper proposes a novel procedure in which the optimization algorithm (GA) simultaneously assesses and enhances TTC in the presence of FACTS devices. In addition, the power flow is performed using Broyden's method with the Sherman-Morrison formula instead of the NR method, which further reduces the CPU time without compromising accuracy. To validate the proposed method, simulation tests are carried out on the WSCC 9 bus and IEEE 118 bus test systems. The results indicate that the proposed method enhances TTC effectively with higher computational efficiency than the conventional method.
KEYWORDS: FACTS Device, Genetic Algorithm, Power System Operation and Control, Total Transfer Capability
I. INTRODUCTION
According to NERC report [1], Total Transfer Capability (TTC) is defined as the amount of electric
power that can be transferred over the interconnected transmission network in a reliable manner while
meeting all defined pre and post contingencies. Available Transfer Capacity (ATC) is a measure of
transfer capability remaining in the physical transmission network for further commercial activity
over and above already committed uses. It is well known that FACTS devices are capable of controlling voltage magnitude, phase angle and circuit reactance. By controlling these, the load flow can be redistributed and bus voltages regulated; therefore FACTS devices provide a promising means to improve TTC [2-7].
The optimal location and settings of FACTS devices for the enhancement of TTC is a combinatorial problem. The best solutions to such problems can be obtained using heuristic methods. The basic approach is to combine a heuristic method with the RPF [8], CPF (Continuation Power Flow) [9-10] or OPF (Optimal Power Flow) [11] method to assess and enhance TTC. From the available literature it is understood that in all these approaches heuristic methods are used only for finding the optimal location and/or settings of the FACTS devices, while the TTC value itself is computed using conventional methods such as CPF, RPF or OPF based methods [12-20], which take much computational time. TTC should be computed accurately as well as with less computational time for the following reasons:
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
171 Vol. 1, Issue 5, pp. 170-180
First, from [21] it is evident that in power system operation the ATC or TTC calculation is performed for a week ahead, and each hour of that week has a new base case power flow. A typical TTC calculation frequency according to the western interconnection report [22] is:
• Hourly TTC for the next 168 hours: once per day
• Daily TTC for the next 30 days: once per week
• Monthly TTC for months 2 through 13: once per month.
Second, due to uncertainty in the contingency list, forecasted load demand etc., even after a careful planning study the optimally located FACTS devices and their settings may not remain optimal under different power system operating conditions. Once these FACTS devices are installed their locations cannot be changed, but their settings can be adjusted to obtain a maximum TTC for different operating conditions. This is again a combinatorial problem, given the number of FACTS devices present in the system and the wide range of their operating parameters.
Hence, for the above reasons, the known solution methods [12-20] to assess and enhance TTC in the presence of FACTS require very high computational time, which may not be a drawback during power system planning but has an adverse effect in the operation stage.
In [23-24] TTC is computed with an OPF based Evolutionary Programming (EP) method, in which EP is used to find the location and settings of the FACTS devices and simultaneously searches the real power generations, generation voltages and real power loads. This method can be used in both planning and operation of a power system, but its major drawback is the length of the chromosome, which grows with the power system size, thereby increasing the computational time needed to obtain globally optimal results. Further, the load distribution factor and the power factor of the loads in the system have not been maintained constant.
In this paper a Genetic Algorithm with power flow using Broyden's method [25-26] with the Sherman-Morrison formula (GABS) is proposed to assess and enhance TTC in the presence of FACTS devices; it effectively enhances TTC and greatly reduces the computational time during both planning and operation of a power system. The results are compared with the conventional method, a Genetic Algorithm with Repeated Power Flow using the NR method (GARPFNR).
The remainder of the paper is organized as follows: Section 2 deals with FACTS device modelling and the TTC problem formulation using GARPFNR. Section 3 describes the proposed method. Section 4 presents the results and discussion, and finally conclusions are drawn in Section 5.
II. FACTS DEVICES AND TTC FORMULATION USING GARPFNR
In this paper the mathematical formulation of TTC with and without FACTS devices using the RPFNR method from [2] is combined with GA, i.e. GARPFNR [18], to enhance TTC. Though there are many heuristic methods which can be combined with RPFNR to enhance TTC using FACTS, GA is used in this paper because it is well suited to optimization problems which do not possess qualities such as continuity, differentiability etc. It works on the principle that the best members of a generation participate in reproduction and their children, called offspring, move on to the next generation, based on the concept of "survival of the fittest". Hence in this paper GARPFNR is compared with the proposed method GABS. The TTC level in the normal or contingency state is given by:
TTC = \sum_{i \in k_{sink}} P_{D_i}(\lambda_{max})    (1)

and ATC, neglecting TRM and ETC, is given by

ATC = \sum_{i \in k_{sink}} P_{D_i}(\lambda_{max}) - \sum_{i \in k_{sink}} P_{D_i}^{0}    (2)

where \sum_{i \in k_{sink}} P_{D_i}(\lambda_{max}) is the sum of the load in the sink area when \lambda = \lambda_{max}, and \sum_{i \in k_{sink}} P_{D_i}^{0} is the sum of the load in the sink area when \lambda = 0.

Therefore the objective function is

maximize TTC = \sum_{i \in k_{sink}} P_{D_i}(\lambda_{max})    (3)

subject to

P_{G_i} - P_{D_i} - \sum_{j=1}^{n} P_{loss_{ij}} = 0    (4)

Q_{G_i} - Q_{D_i} - \sum_{j=1}^{n} Q_{loss_{ij}} = 0    (5)

V_i^{min} \le V_i \le V_i^{max}    (6)

S_{ij} \le S_{ij}^{max}    (7)

P_{G_i} \le P_{G_i}^{max}    (8)

Fig 1. Representation of Chromosome
2.1. Power Flow in GARPFNR
In the GARPFNR method the power flow equations are solved repeatedly using the NR method, increasing the complex load at every load bus in the sink area and the injected real power at the generator buses in the source area until a limit is reached, so the computational time is high. In general the NR method iteratively finds x such that

F(x) = 0    (9)

In the iterative process, say in the m-th iteration, x is updated as

x^{m+1} = x^m - \Delta x    (10)

where

\Delta x = [J^m]^{-1} F(x^m)    (11)

and J^m is the Jacobian matrix.
Since the power flow equations are solved repeatedly, for every step increment of \lambda_{ttc} there is more than one iteration, and for every iteration a Jacobian matrix of size n x n is computed and then inverted. For n nonlinear equations, computing the Jacobian matrix elements requires n^2 partial derivatives and n component functions, so n^2 + n functional evaluations need to be done. In addition, inversion of an n x n Jacobian matrix using Gauss-Jordan elimination requires n^3 arithmetic operations. The representation of the chromosome in GARPFNR, assuming one TCSC and one SVC at a time, is shown in Fig 1.
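The per-iteration cost described above can be made concrete with a small numeric sketch. This is illustrative Python, not the authors' MATLAB power-flow code: the two-equation system F and the starting point are invented for the demo, and a counter verifies the n^2 + n scalar evaluations per Newton step when the Jacobian is built by finite differences.

```python
import numpy as np

# Toy system (hypothetical, n = 2): F(x) = [x0^2 + x1^2 - 4, x0 - x1] = 0,
# with root x0 = x1 = sqrt(2). A counter tracks scalar component evaluations.
evals = {"count": 0}

def F(x):
    evals["count"] += x.size          # each call evaluates n scalar components
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def fd_jacobian(x, h=1e-7):
    # n^2 partial derivatives by forward differences: one base call plus
    # n perturbed calls, i.e. n*n + n scalar evaluations per Jacobian build.
    Fx = F(x)
    J = np.empty((x.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - Fx) / h
    return J, Fx

x = np.array([1.0, 0.5])
for _ in range(20):
    before = evals["count"]
    J, Fx = fd_jacobian(x)
    assert evals["count"] - before == x.size**2 + x.size  # n^2 + n per step
    x = x - np.linalg.solve(J, Fx)    # one n x n linear solve per iteration
    if np.linalg.norm(F(x)) < 1e-10:
        break
```

A linear solve is used instead of an explicit inverse, but the cubic cost per iteration that the text attributes to Gauss-Jordan inversion applies either way.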
2.2. Computational Time in GARPFNR
For example, consider a case in which GARPFNR has a population size of 30 and 100 generations. Suppose each chromosome takes 10 steps, in increments of 1 MW of load and generation, to compute the loading factor \lambda_{max}, and for each increment an NR power flow of 3 iterations takes 1.5 sec. Then for 30 chromosomes and 100 generations with 10 contingency conditions the total time required to complete one transfer will be approximately 125 hrs. The accuracy of the results can be improved by decreasing the step size at the cost of increased computational time; i.e. if the step size is decreased by a factor of 10 (from 1 MW to 0.1 MW), the computation time increases by the same factor of 10.
III. DESCRIPTION OF THE PROPOSED METHOD
In this method the power flow model of the FACTS devices and the mathematical formulation of TTC are the same as in the GARPFNR method, but the chromosome representation and the power flow procedure differ, as discussed below:
3.1. Power Flow in GABS
In GABS, Broyden's method with the Sherman-Morrison formula is used to solve the power flow. Broyden's method is a quasi-Newton method. Its starting point is the same as that of the NR method: an initial approximation x^0 is chosen to find F(x^0), and x^1 is calculated using the Jacobian J^0. From the second iteration onwards the method departs from NR by replacing the Jacobian matrix with an equivalent matrix A given by

A^m = A^{m-1} + \frac{[F(x^m) - F(x^{m-1}) - A^{m-1}(x^m - x^{m-1})]\,\Delta x^T}{\Delta x^T \Delta x}    (12)

and

x^{m+1} = x^m - [A^m]^{-1} F(x^m)    (13)

hence the number of functional evaluations is reduced from n^2 + n to n. Further, the n^3 arithmetic operations for computing the inverse of A^m can be reduced to n^2 operations using the Sherman-Morrison formula:

[A^m]^{-1} = [A^{m-1}]^{-1} + \frac{U}{\Delta x^T [A^{m-1}]^{-1} \Delta F}    (14)

where

U = (\Delta x - [A^{m-1}]^{-1} \Delta F)\,\Delta x^T [A^{m-1}]^{-1}    (15)

with \Delta x = x^m - x^{m-1} and \Delta F = F(x^m) - F(x^{m-1}).
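As a sanity check on the update equations, here is a small Python sketch (illustrative only; the two-equation toy system and starting point are invented stand-ins for the power-flow mismatch equations) of Broyden's method in which the inverse of A is carried along and refreshed with the Sherman-Morrison rank-one update, so the full matrix inversion happens only once.

```python
import numpy as np

def F(x):
    # Hypothetical test system with root x0 = x1 = sqrt(2).
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def jacobian(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

x = np.array([1.0, 0.5])
Ainv = np.linalg.inv(jacobian(x))   # full inverse computed once, first iteration
Fx = F(x)
for _ in range(100):
    x_new = x - Ainv @ Fx           # quasi-Newton step with the explicit inverse
    F_new = F(x_new)
    if np.linalg.norm(F_new) < 1e-10:
        x, Fx = x_new, F_new
        break
    dx, dF = x_new - x, F_new - Fx
    # Sherman-Morrison rank-one update of the inverse:
    # O(n^2) work instead of the O(n^3) of a fresh inversion.
    u = dx - Ainv @ dF
    Ainv = Ainv + np.outer(u, dx @ Ainv) / (dx @ Ainv @ dF)
    x, Fx = x_new, F_new
```

The convergence is superlinear rather than quadratic, matching the trade-off discussed in Section 3.3.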
3.2. Modified Chromosome Representation
As in the GARPFNR method, the population is initialized randomly and each chromosome consists of the decision variables for the FACTS device location, the device settings and the objective function value; in addition, it contains the \lambda_{ttc} value. The value of \lambda_{ttc} for each chromosome is fixed within a range of 0 to 1 (for an increase of up to 100 % in loading factor) or 0 to 2 (for an increase of up to 200 % in loading factor), which holds good for any complicated power system, since no power system, even in the worst case, is under-utilized by more than 200 %. The objective function is designed such that the GA maximizes the value of \lambda_{ttc} subject to the satisfaction of the equality and inequality constraints. This eliminates the use of RPF or CPF methods to calculate the loading factor \lambda_{ttc}. This is shown in Fig 2.

Fig. 2. Modified representation of Chromosome
3.3. Computational Time in GABS
The computational time for assessing and enhancing TTC using GABS in the presence of FACTS devices is far less than that of GARPFNR for two main reasons.
First, unlike the GARPFNR method, GABS simultaneously finds the optimal location and settings of the FACTS devices and the loading factor \lambda_{max} for the TTC computation by representing all this information in the chromosome.
Second, in the power flow using Broyden's method with the Sherman-Morrison formula the Jacobian inverse is computed only once, during the first iteration for a given network topology; for the remaining iterations a rank-one update is done to compute an approximate Jacobian inverse. As a result the quadratic convergence of the Newton-Raphson method is replaced by superlinear convergence, which is faster than linear but slower than quadratic. For a large scale system, computing the Jacobian inverse for n iterations with many transfer directions in a single contingency case is time consuming when compared to the superlinear convergence of Broyden's method. Hence the total time required to compute TTC with Broyden's method is less than with the NR method.
For example, consider the same case as for GARPFNR, with a population size of 30 and 100 generations. Since each chromosome directly carries the loading factor \lambda_{max}, no load-step increments are needed; suppose the power flow in GABS using Broyden's method with the Sherman-Morrison formula takes 4 iterations for a total time of 2 sec. Then for 30 chromosomes and 100 generations with 10 contingency conditions the total time required to complete one transfer will be approximately 17 hrs, which is only 13.6 % of the computational time of GARPFNR. This approach can also be applied during operation of the power system by removing the FACTS location information from the chromosome.
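The two back-of-envelope estimates above can be checked with a few lines of arithmetic (figures taken from the worked examples in Sections 2.2 and 3.3; Python used purely as a calculator):

```python
# Assumed figures from the text's worked examples.
pop, gens, contingencies = 30, 100, 10
steps = 10            # GARPFNR load/generation increments per chromosome
nr_time = 1.5         # seconds per NR power flow (3 iterations)
broyden_time = 2.0    # seconds per Broyden power flow (4 iterations)

garpfnr_hrs = pop * gens * contingencies * steps * nr_time / 3600
# GABS carries lambda_max in the chromosome, so no step increments are needed:
gabs_hrs = pop * gens * contingencies * broyden_time / 3600
ratio = gabs_hrs / garpfnr_hrs
# garpfnr_hrs = 125.0, gabs_hrs ~ 16.7 (quoted as ~17 hrs);
# ratio ~ 0.133 (the text's 13.6 % uses the rounded 17 hrs figure)
```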
3.4. Algorithm for GABS
The algorithm for the proposed method GABS is given below.
Step 1: Set the population size and the number of generations.
Step 2: Read the bus data, line data, objectives, decision variables, and the minimum and maximum values of the decision variables.
Step 3: Initialize the population.
Step 4: Obtain the TCSC and SVC settings and/or locations, with \lambda_{max}, from the decision variables of the GA and make the corresponding changes in the power flow data.
Step 5: Run the power flow using Broyden's method with the Sherman-Morrison formula.
Step 6: Check for convergence of the power flow and for any limit violations. IF there are any violations, penalize the corresponding chromosome with a very low fitness value, say 1 x 10^{-5}; ELSE evaluate the fitness of the chromosome as defined in (3). This process is repeated for all chromosomes.
Step 7: Apply the genetic operators to perform reproduction and replace the population.
Step 8: Check for the maximum generation. IF reached, go to Step 9; ELSE go to Step 4.
Step 9: From the final solution, identify the settings and/or locations of the TCSC and SVC and \lambda_{max}, and calculate TTC.
The flow chart for GABS is shown in Fig 3.
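Steps 1-9 can be sketched as a minimal GA loop. This is a hypothetical Python stand-in for the MATLAB GA toolbox the authors use: the chromosome is [Xtcsc, Qsvc, lambda], the Broyden power flow of Step 5 is mocked by an invented feasibility rule, and infeasible chromosomes receive the penalty fitness of 1e-5 from Step 6.

```python
import random

random.seed(0)                     # reproducible demo run
POP, GENS = 20, 60
PENALTY = 1e-5                     # Step 6 penalty for violated constraints

def random_chromosome():
    # [Xtcsc in (-0.5, 0.5), Qsvc in (-100, 100), lambda in (0, 2)] -- Step 3
    return [random.uniform(-0.5, 0.5),
            random.uniform(-100.0, 100.0),
            random.uniform(0.0, 2.0)]

def fitness(ch):
    xtcsc, qsvc, lam = ch
    # Mock stand-in for Step 5's power flow: the chromosome is "feasible"
    # only while lambda stays below a device-dependent cap (invented rule).
    cap = 1.2 + 0.4 * abs(xtcsc) + 0.002 * abs(qsvc)
    return lam if lam <= cap else PENALTY   # objective (3) or penalty

pop = [random_chromosome() for _ in range(POP)]
for _ in range(GENS):                       # Steps 4-8
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    children = []
    while len(parents) + len(children) < POP:
        a, b = random.sample(parents, 2)    # crossover (arithmetic mean)
        child = [(x + y) / 2 for x, y in zip(a, b)]
        # mutate the loading-factor gene, clipped to its allowed range
        child[2] = min(2.0, max(0.0, child[2] + random.gauss(0.0, 0.05)))
        children.append(child)
    pop = parents + children                # Step 7: replace population
best = max(pop, key=fitness)                # Step 9: read off settings, lambda
```

In the real method the fitness evaluation would call the Broyden power flow and check the constraints (4)-(8) instead of the mocked cap.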
[Fig 3. Flow chart for GABS (boxes omitted): population initialization; obtaining the TCSC and SVC locations and/or set values with \lambda_{max} from the decision variables; load flow using Broyden's method with the Sherman-Morrison formula; penalizing chromosomes with violations by assigning a very low fitness, otherwise calculating fitness; applying crossover and mutation and replacing the population; and, on reaching the maximum generation or convergence, calculating TTC.]
IV. RESULTS AND DISCUSSION
GABS and GARPFNR are implemented in the MATLAB environment using the Genetic Algorithm and Direct Search toolbox and a modified MATPOWER [27] simulation package, on an Intel Core 2 Duo T5500 @ 1.66 GHz processor under the Windows XP Professional operating system. The standard WSCC 9 bus and IEEE 118 bus test systems [27-28] are considered to test the performance of the proposed method. Only the transaction from Area 1 to Area 2 is considered. The base values, voltage limits and SVC and TCSC limits are taken from [20]. For GABS and GARPFNR, a population size of 20 and 200 generations are considered, with a stall generation of 20. For each test system two cases are considered. The first case represents a planning problem, in which the optimal settings and location of the FACTS devices are found to enhance TTC, while the second case represents an operational problem (such as a change in load, unexpected contingencies etc.): assuming the FACTS devices are already located in the system, new optimal settings alone are found to enhance TTC.
4.1. WSCC 9 Bus Test System
The WSCC 9 bus test system is divided into two areas. Area 1 has buses 3, 6, 8 and 9; Area 2 has buses 1, 2, 4, 5 and 7. Only one FACTS device of each type (TCSC and SVC) is considered for placement.
4.1.1. Power System Planning (WSCC 9 Bus Test System)
The base case (without FACTS device) load in area 2 is 190 MW; the TTC value, limiting condition and CPU time computed using the RPFNR method are shown in Table 1. Similarly, with a FACTS device, the optimal location and settings, TTC value, limiting condition and computational time using GARPFNR and GABS are also shown in Table 1. It is evident that with the proposed method GABS the computational time is 98.69 % less and the TTC value is 0.653 % higher than with the conventional method GARPFNR.
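The percentage figures quoted above follow directly from the Table 1 entries (Python used purely as a calculator; the TTC gain computes to roughly 0.66 %, slightly above the quoted 0.653 %, presumably a rounding difference):

```python
# Values taken from Table 1 (WSCC 9-bus, planning case).
ttc_garpfnr, ttc_gabs = 486.4, 489.6        # MW
cpu_garpfnr, cpu_gabs = 549.328, 7.167      # seconds

time_saving = (cpu_garpfnr - cpu_gabs) / cpu_garpfnr * 100.0   # ~98.7 %
ttc_gain = (ttc_gabs - ttc_garpfnr) / ttc_garpfnr * 100.0      # ~0.66 %
```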
Table 1. WSCC 9 Bus for Transfer of power from Area 1 to Area 2 (Planning)
• Without FACTS (RPFNR): TTC = 410.4 MW; limiting condition: Vmin at Bus 5; CPU time = 1.182 s.
• With FACTS (GARPFNR): SVC at Bus 5, Qsvc = 85.45; TCSC in line 4-5, Xtcsc = -0.3658; TTC = 486.4 MW; limiting condition: MVA limit of line 1-4; CPU time = 549.328 s.
• With FACTS (GABS): SVC at Bus 4, Qsvc = 96.27; TCSC in line 6-7, Xtcsc = 0.0845; TTC = 489.6 MW; limiting condition: MVA limit of line 1-4; CPU time = 7.167 s.

4.1.2. Power System Operation (WSCC 9 Bus Test System)
In this case the FACTS device locations obtained with GABS in Section 4.1.1 are taken as the base case. For the operational problem, the corresponding TTC values and CPU times, with and without change in the FACTS device settings, are tabulated in Table 2. Using GABS, the TTC values for a 10 % increase in load, a 10 % decrease in load, outage of line 6-7 and a generator outage at bus 3 are 0.3 %, 0.157 %, 0.412 % and 0.608 % higher respectively, and the corresponding CPU times are far lower than those of the GARPFNR method, as shown in Table 2.
Table 2. WSCC 9 Bus for Transfer of power from Area 1 to Area 2 (Operation)

Change in MVA load (+10 %) at all load buses:
• RPFNR (without change in FACTS settings): TTC = 440.99 MW; limiting condition: MVA limit of line 1-4; CPU time = 1.196 s.
• GARPFNR (Qsvc = 95.39, Xtcsc = 0.3234): TTC = 440.99 MW; limiting condition: MVA limit of line 1-4; CPU time = 416.911 s.
• GABS (Qsvc = 97.88, Xtcsc = 0.0934): TTC = 442.34 MW; limiting condition: MVA limit of line 1-4; CPU time = 3.547 s.

Change in MVA load (-10 %) at all load buses:
• RPFNR (without change in FACTS settings): TTC = 490.77 MW; limiting condition: Vmin at Bus 5; CPU time = 1.801 s.
• GARPFNR (Qsvc = 99.62, Xtcsc = -0.0212): TTC = 495.9 MW; limiting condition: Vmin at Bus 5; CPU time = 691.923 s.
• GABS (Qsvc = 99.07, Xtcsc = -0.0943): TTC = 496.68 MW; limiting condition: Vmin at Bus 5; CPU time = 4.089 s.

Line 6-7 outage:
• RPFNR (without change in FACTS settings): TTC = 288.8 MW; limiting condition: Vmin at Bus 7; CPU time = 0.66 s.
• GARPFNR (Qsvc = 83.66, Xtcsc = -0.4602): TTC = 357.2 MW; limiting condition: MVA limit of line 5-6; CPU time = 283.886 s.
• GABS (Qsvc = 57.75, Xtcsc = -0.4830): TTC = 358.68 MW; limiting condition: MVA limit of line 1-4; CPU time = 7.327 s.

Outage of generator at Bus 3:
• RPFNR (without change in FACTS settings): TTC = 279.3 MW; limiting condition: MVA limit of line 1-4; CPU time = 0.634 s.
• GARPFNR (Qsvc = 100.00, Xtcsc = 0.2705): TTC = 279.3 MW; limiting condition: MVA limit of line 1-4; CPU time = 173.769 s.
• GABS (Qsvc = 85.77, Xtcsc = 0.2704): TTC = 281.01 MW; limiting condition: MVA limit of line 1-4; CPU time = 5.553 s.
4.2. IEEE 118 Bus Test System
The IEEE 118 bus test system is divided into two areas as shown in Table 3, and transfer of power from Area 1 to Area 2 is considered, with only one FACTS device of each type (TCSC and SVC) considered for placement.

Table 3. Area classification of the IEEE 118 bus test system
Area 1: buses 1-23, 25-37, 39-64, 113-115, 117
Area 2: buses 24, 38, 65-112, 116, 118

4.2.1. Power System Planning (IEEE 118 Bus Test System)
The total load in area 2 is 1937 MW. The TTC value without a FACTS device using the RPFNR method is 2111.3 MW. The TTC values with a FACTS device using GARPFNR and GABS are 2202.8 MW and 2224.3 MW respectively, and the corresponding computation times are shown in Table 4. Hence with the proposed method GABS the computation time is nearly 96.77 % less and the TTC value is 0.966 % higher than with the conventional method GARPFNR.
Table 4. IEEE 118 Bus for Transfer of power from Area 1 to Area 2 (Planning)
• Without FACTS (RPFNR): TTC = 2111.3 MW; limiting condition: MVA limit of line 89-92; CPU time = 1.259 s.
• With FACTS (GARPFNR): SVC at Bus 44, Qsvc = 57.65; TCSC in line 89-92, Xtcsc = -0.4908; TTC = 2202.8 MW; limiting condition: MVA limit of line 65-68; CPU time = 308 s.
• With FACTS (GABS): SVC at Bus 86, Qsvc = -61.58; TCSC in line 89-92, Xtcsc = 0.1483; TTC = 2224.3 MW; limiting condition: MVA limit of line 65-68; CPU time = 9.937 s.

4.2.2. Power System Operation (IEEE 118 Bus Test System)
In this case the FACTS device locations obtained with GABS in Section 4.2.1 are taken as the base case. Operational problems of a ±5 % change in load, outage of line 23-24 and outage of the generator at Bus 61 are considered, and the corresponding TTC values with and without change in the FACTS device settings are tabulated in Table 5, which shows that the proposed method GABS is more efficient in assessing and enhancing TTC.

Table 5. IEEE 118 Bus for Transfer of power from Area 1 to Area 2 (Operation)

Change in MVA load (+5 %) at all load buses:
• RPFNR (without change in FACTS settings): TTC = 2359.3 MW; limiting condition: Pg max at Bus 89; CPU time = 1.59 s.
• GARPFNR (Qsvc = 52.89, Xtcsc = 0.5): TTC = 2359.3 MW; limiting condition: Pg max at Bus 89; CPU time = 402.008 s.
• GABS (Qsvc = 54.79, Xtcsc = 0.1421): TTC = 2368.8 MW; limiting condition: MVA limit of line 89-92; CPU time = 5.992 s.

Change in MVA load (-5 %) at all load buses:
• RPFNR (without change in FACTS settings): TTC = 1987.4 MW; limiting condition: MVA limit of line 65-68; CPU time = 0.8 s.
• GARPFNR (Qsvc = -100.00, Xtcsc = 0.2908): TTC = 1987.4 MW; limiting condition: MVA limit of line 65-68; CPU time = 223.012 s.
• GABS (Qsvc = 52.04, Xtcsc = 0.0876): TTC = 2000.8 MW; limiting condition: MVA limit of line 65-68; CPU time = 5.355 s.

Line 23-24 outage:
• RPFNR (without change in FACTS settings): TTC = 1995.1 MW; limiting condition: MVA limit of line 90-91; CPU time = 0.541 s.
• GARPFNR (Qsvc = -33.76, Xtcsc = -0.2061): TTC = 2150.1 MW; limiting condition: MVA limit of line 65-68; CPU time = 270.317 s.
• GABS (Qsvc = 36.15, Xtcsc = -0.2179): TTC = 2151.6 MW; limiting condition: MVA limit of line 65-68; CPU time = 5.977 s.

Outage of generator at Bus 61:
• RPFNR (without change in FACTS settings): TTC = 2246.9 MW; limiting condition: Pg max at Bus 89; CPU time = 1.256 s.
• GARPFNR (Qsvc = 27.28, Xtcsc = 0.2890): TTC = 2246.9 MW; limiting condition: Pg max at Bus 89; CPU time = 405.674 s.
• GABS (Qsvc = 99.23, Xtcsc = 0.4955): TTC = 2256 MW; limiting condition: Pg max at Bus 89; CPU time = 6.812 s.
V. CONCLUSION
A fast and efficient method, GABS, has been presented to assess and enhance TTC in the presence of FACTS devices. Simulation tests are carried out on the WSCC 9 bus and IEEE 118 bus test systems, and the results are compared with the conventional GARPFNR method. From the results it is evident that the search space in the conventional method is limited by the step increment in the loading factor, which results in a locally optimal TTC value, and that the use of the NR method for power flow increases the CPU time due to the repeated Jacobian inversions. On the other hand, GABS searches for the loading factor instead of incrementing it, which yields a near globally optimal TTC value, and the power flow is performed using Broyden's method with the Sherman-Morrison formula, which reduces the CPU time compared to the NR method. The percentage reduction in CPU time increases further in GABS as the system size grows or when the system is lightly loaded. Hence the GABS method proves to be a promising alternative to the conventional method.
REFERENCES
[1] "Available Transfer Capability Definitions and Determination", NERC report, June 1996.
[2] Ou, Y. and Singh, C. “Improvement of total transfer capability using TCSC and SVC”, Proceedings of
the IEEE Power Engineering Society Summer Meeting. Vancouver, Canada, July 2001, pp. 15-19.
[3] Farahmand, H. Rashidi-Nejad, M. Fotuhi-Firoozabad, M., “Implementation of FACTS devices for
ATC enhancement using RPF technique”, IEEE Power Engineering conference on Large Engineering
Systems, July 2004, pp. 30-35.
[4] Ying Xiao, Y. H. Song, Chen-Ching Liu, Y. Z. Sun, “ Available Transfer Capability Enhancement Using
FACTS Devices”, IEEE Trans. Power Syst., 2003,18, (1), pp. 305 – 312.
[5] T Masuta, A Yokoyama, “ATC Enhancement considering transient stability based on OPF control by
UPFC”, IEEE International conference on power system technology, 2006, pp. 1-6.
[6] K.S. Verma, S.N. Singh and H.O. Gupta “FACTS device location for enhancement of Total Transfer
Capacity” IEEE PES Winter Meeting, Columbus, OH, 2001, 2, pp. 522-527.
[7] Xingbin Yu, Sasa Jakovljevic and Gamg Huang, “Total Transfer capacity considering FACTS and
security constraints”, IEEE PES Transmission and Distribution Conference and Exposition, Sep 2003, 1,
pp. 73-78.
[8] Gravener, M.H. and Nwankpa, C. “Available transfer capability and first order sensitivity”, IEEE Trans.
Power Syst., 1999, 14, (2), pp. 512-518.
[9] H. Chiang, A. J. Flueck, K. S. Shah, and N. Balu, “CPFLOW: A practical tool for tracing power system
steady-state stationary behavior due to load and generation variations,” IEEE Trans. Power Syst., 1995,
10, (2) pp. 623–634.
[10] G. C. Ejebe, J. Tong, J. G. Waight, J. G. Frame, X. Wang, and W. F. Tinney, “Available transfer
capability calculations,” IEEE Trans. Power Syst., 1998, 13, (4) pp. 1521–1527.
[11] Ou, Y. and Singh, C. “Assessment of available transfer capability and margins”, IEEE Trans. Power
Syst., 2002, 17, (2), pp. 463-468.
[12] Leung, H.C., Chung, T.S., “Optimal power flow with a versatile FACTS controller by genetic algorithm
approach”, IEEE PES Winter Meeting, Jan 2000, 4, pp 2806-2811.
[13] S. Gerbex, R. Cherkaoui, A.J. Germond, “Optimal Location of Multitype FACTS Devices in a Power
System by Means of Genetic Algorithms”, IEEE Trans. Power Syst., 2001, 16, (3), pp. 537-544.
[14] S. Gerbex, R. Cherkaoui, and A. J. Germond, “Optimal Location of FACTS Devices to Enhance Power
System Security”, IEEE Bologna Power Tech Conference, Bologna, Italy, June 2003, 3, pp. 23-26.
[15] Wang Feng, and G. B. Shrestha, “Allocation of TCSC devices to optimize Total Transfer capacity in a
Competitive Power Market”, IEEE PES Winter Meeting, Feb 2001, 2, pp. 587 -593.
[16] Sara Molazei, Malihe M. Farsangi, Hossein Nezamabadi-pour, “Enhancement of Total Transfer
Capability Using SVC and TCSC”, 6th
WSEAS International Conference on Applications of Electrical
Engineering, Istanbul, Turkey, May 27-29, 2007. pp 149-154.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
180 Vol. 1, Issue 5, pp. 170-180
[17] Hossein farahmand, Masoud rashidinejad and Ali akbar gharaveisi, “A Combinatorial Approach of Real
GA & Fuzzy to ATC Enhancement”, Turkish Journal Of Electrical Engineering, 2007, 1, (4), pp. 77-88.
[18] Fozdar, M., “GA based optimisation of thyristor controlled series capacitor”, 42nd
International
Universities Power Engineering Conference, Brighton, Sept. 2007, pp. 392 – 396.
[19] X. Luo, A. D. Patton, and C. Singh, “ Real power transfer capability calculations using multi-layer feed-
forward neural networks,” IEEE Trans. Power Syst., 2000, 15, (2), pp. 903–908.
[20] N. V. Ramana, K.Chandrasekar, “Multi Objective Genetic Algorithm to mitigate the composite problem
of Total transfer capacity, Voltage stability and Transmission loss minimization”, IEEE 39th North
American Power Symposium, New Mexico, 2007, pp 670-675.
[21] Peter. W. Sauer, “Technical challenges of Computing ATC in Electric Power System”, 30th
Hawaii
International conference on system sciences, Wailea, HI, USA, Jan 1997, 5, pp. 589-593.
[22] “Determination of ATC within the Western Interconnection”, WECC RRO Document MOD -003-0, June
2001.
[23] Ongsakul, W. Jirapong, P. “Optimal allocation of FACTS devices to enhance total transfer capability
using evolutionary programming”, IEEE International Symposium on Circuits and System, ISCAS, May
2005, 5, pp 4175- 4178.
[24] Peerapol Jirapong and Weerakorn Ongsakul, “Optimal Placement of Multi-Type FACTS Devices for
Total Transfer Capability Enhancement Using Hybrid Evolutionary Algorithm”, Journal of Electric
Power Components and Systems, 2007, 35, (9) pp. 981 – 1005.
[25] C. G. Broyden, “A class of methods for solving Non Linear Simultaneous Equations” Mathematics of
Computation, 1965 , 19, (92), pp. 577-593.
[26] Asif Selim, “An Investigation of Broyden’s Method in Load Flow Analysis”, MS thesis report, Ohio
University, March 1994.
[27] R. D. Zimmermann and Carlos E. Murillo-Sánchez, Matpower a Matlab® power system simulation
package, User’s Manual, Version 3.2, 2007.
[28] http://www.ee.washington.edu/research/pstca/.
Authors
K. Chandrasekar received his B.E. (EEE) from the University of Madras, Madras, India, in 1997 and his M.E. (Power Systems) from Madurai Kamaraj University, Madurai, India, in 2001. He is currently an Associate Professor in the Department of EEE, Tagore Engineering College, Chennai, and is pursuing a Ph.D. at J.N.T. University, Hyderabad, A.P., India. His research interests are power system optimization and the application of FACTS devices. He is a member of the IEEE.
N. V. Ramana graduated in 1986 and post-graduated in 1991 from S.V. University, Tirupati, and obtained his Ph.D. in 2005 from J.N.T. University, Hyderabad, A.P., India. He is currently Professor and Head of the EEE Department, JNTUH College of Engineering, Nachupally, Karimnagar Dist., A.P., India. He has publications in international journals and conferences and has presented papers at IEEE conferences held in the USA, Canada and Singapore. His research interests are the design of intelligent systems for power systems using fuzzy logic control, genetic algorithms and cluster algorithms.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
181 Vol. 1, Issue 5, pp. 181-188
ISSUES IN CACHING TECHNIQUES TO IMPROVE SYSTEM
PERFORMANCE IN CHIP MULTIPROCESSORS
H. R. Deshmukh1, G. R. Bamnote2
1Associate professor, B.N.C.O.E., Pusad, M.S., India
2Associate professor & Head, PRMIT&R, Badnera, M.S., India
ABSTRACT
Cache management in chip multiprocessors (CMPs) has become more critical because of diverse workloads, the growing working sets of many emerging applications, increasing memory latency, and the shrinking share of cache available to each core as the number of cores on a single chip increases. This paper identifies caching techniques for managing the last-level cache in chip multiprocessors to reduce off-chip accesses and improve system performance, discusses the important issues in these techniques, and suggests some future directions to address the identified issues.
KEYWORDS: Multiprocessors, Partitioning, Compression, Fairness, QoS.
I. INTRODUCTION
Over the past two decades, processor speed has increased at a much faster rate than DRAM speed. As a result, the number of processor cycles it takes to access main memory has also increased. Current high-performance processors have a memory access latency of well over a hundred cycles, and trends indicate that this number will only increase in the future. The growing disparity between processor speed and memory speed is popularly referred to in the architecture community as the Memory Wall [1]. Main memory accesses affect processor performance adversely; therefore, current processors use caches to reduce the number of memory accesses. A cache hit provides fast access to recently accessed data. However, if there is a miss at the last-level cache, a memory access is initiated and the processor is stalled for hundreds of cycles [1]. To sustain high performance, it is therefore important to reduce cache misses.
The importance of cache management has become even more critical because of diverse workloads, the growing working sets of many emerging applications, increasing memory latency, and the shrinking share of cache available to each core as the number of cores on a single chip increases.
Improvements in silicon process technology have facilitated the integration of multiple cores into modern processors, and it is anticipated that the number of cores on a single chip will continue to increase in future chip multiprocessors. Multiprogrammed workloads, which are attractive for utilising multi-core processors, put significant pressure on the memory system [2]. This motivates the need for more efficient use of the cache in order to minimize requests to off-chip memory, which are expensive in terms of both latency and bandwidth. This paper discusses the existing approaches to caching in chip multiprocessors available in the literature, their limitations, and the important open issues in this area.
II. REPLACEMENT TECHNIQUE
Different workloads and program phases exhibit diverse access patterns: sequential access patterns, in which all blocks are accessed one after another and never re-accessed, as in file scanning; looping access patterns, in which all blocks are accessed repeatedly at a regular interval; temporally-clustered access patterns, in which the blocks accessed most recently are the ones most likely to be accessed in the near future; and probabilistic access patterns, in which each block has a stationary reference probability and all blocks are accessed independently with their associated probabilities.
Previous researchers [3]-[11] have shown that a replacement policy that performs efficiently under a workload with one kind of access pattern may perform badly once the access pattern of the workload changes. For example, the MRU (Most Recently Used) replacement policy performs well on sequential and looping patterns, the LRU policy performs well on temporally-clustered patterns, and the LFU policy performs well on probabilistic patterns. A study of existing replacement policies shows that no single cache replacement policy performs efficiently for mixed access patterns: sequential, looping, temporally-clustered and probabilistic references may all occur simultaneously in one workload during execution. Some policies require additional data structures to hold information about non-resident pages. Others require a data update on every memory access, which necessarily increases memory and time overhead and degrades performance as a result.
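The sensitivity of a policy to the access pattern can be seen in a minimal simulation (an illustrative sketch, not taken from any of the surveyed papers; the `simulate` helper and the trace are assumptions):

```python
from collections import OrderedDict

def simulate(trace, capacity, policy):
    """Count hits for a tiny fully-associative cache under LRU or MRU eviction."""
    cache = OrderedDict()            # ordered oldest -> most recently used
    hits = 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)                   # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=(policy == "MRU"))  # MRU evicts newest, LRU oldest
            cache[block] = True
    return hits

loop = list(range(5)) * 10           # loop over 5 blocks; the cache holds only 4
print(simulate(loop, 4, "LRU"))      # 0  -- LRU thrashes on a loop larger than the cache
print(simulate(loop, 4, "MRU"))      # 34 -- MRU keeps most of the loop resident
```

On the same looping trace, LRU gets no hits at all while MRU retains most of the working set, which is exactly the mismatch described above.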
Kaveh Samiee et al. (2008, 2009) [3][4] suggested a weighted replacement policy (WRP). The basic idea of this policy is to rank pages based on their recency, frequency and reference rate, so that pages that are more recent and more frequently used are ranked higher. The policy behaves like both LRU and LFU, replacing pages that were not recently used as well as pages that were used only once. WRP needs three elements per object in the buffer, which adds space overhead to the system: a recency counter, a frequency counter and a weight value. Calculating the weighting function for each object after every access to the cache also causes a time overhead. This policy fails for sequential and looping access patterns.
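The ranking idea can be sketched as follows (an illustrative weighting formula, not WRP's exact one; the time/count bookkeeping is an assumption):

```python
def wrp_victim(cache, now):
    """Pick the eviction victim: rank each object by a weight combining
    recency and frequency; evict the lowest-ranked object."""
    def weight(block):
        last_access, count = cache[block]
        recency = 1.0 / (now - last_access + 1)   # more recent -> closer to 1
        return recency * count                    # recent AND frequent ranks high
    return min(cache, key=weight)

# block -> (last access time, access count)
cache = {"a": (9, 5), "b": (2, 1), "c": (8, 2)}
print(wrp_victim(cache, now=10))  # "b" -- old and used only once
```

Because the weight must be recomputed on every access, exactly the time overhead noted above appears even in this toy version.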
Mansard Jargh et al. (2004) [5] describe an improved replacement policy (IRP) which makes some key modifications to the LRU algorithm, combines it with a significantly enhanced version of the LFU algorithm, and takes spatial locality into account in the replacement decision. Because it uses spatial locality, IRP efficiently expels only blocks that are not likely to be accessed again. The algorithm requires memory overhead to store a recency count ‘rc’, a frequency count ‘fc’ and a block address ‘ba’ for each block; time and processor overhead to search for the smallest ‘fc’ value and the largest ‘rc’ value; and further overhead to update ‘fc’ and ‘rc’ on every access to a block. The algorithm does not perform well for looping and sequential access patterns.
Jiang et al. (2002) [6] presented the low inter-reference recency set policy (LIRS). Its objective is to overcome the deficiencies of LRU using an additional criterion named IRR (Inter-Reference Recency), the number of distinct pages accessed between the last two consecutive accesses to the same page. The algorithm assumes some behavioural inertia and, according to the collected IRRs, replaces the page that will take the longest to be referenced again. This means that LIRS does not simply replace the page that has not been referenced for the longest time; rather, it uses access recency information to predict which pages are most likely to be accessed in the near future. LIRS divides the cached blocks into two sets, a high inter-reference recency (HIR) block set and a low inter-reference recency (LIR) block set; each block with history information has LIR or HIR status. The cache is divided into a major part, used to store LIR blocks, and a minor part, used to store HIR blocks. An HIR block is replaced when the cache is full. The LIRS stack may grow arbitrarily large, however, and hence may require a large memory overhead. This policy does not perform well for sequential access patterns.
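The IRR criterion itself is easy to state precisely; a minimal sketch of computing it over a reference trace (the helper name and trace are assumptions, not from [6]):

```python
def irr_trace(trace):
    """Inter-Reference Recency per access: the number of *distinct* other
    blocks touched between two consecutive accesses to the same block
    (None on a block's first access), the quantity LIRS uses to rank blocks."""
    last_seen, out = {}, []
    for i, block in enumerate(trace):
        if block in last_seen:
            out.append(len(set(trace[last_seen[block] + 1 : i]) - {block}))
        else:
            out.append(None)
        last_seen[block] = i
    return out

print(irr_trace(["a", "b", "c", "a", "b"]))  # [None, None, None, 2, 2]
```

A block that keeps showing a small IRR is a good LIR candidate; a real implementation maintains this incrementally in the LIRS stack rather than rescanning the trace.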
Zhan-Sheng et al. (2008) [7] proposed CRFP, a novel adaptive replacement policy that combines the LRU and LFU policies. CRFP is self-tuning: it can switch between different cache replacement policies adaptively and dynamically in response to access pattern changes. Memory overhead is required to store the cache directory, recency value, frequency value, hit value, miss value, switch time and switch ratio. The policy also requires time overhead to search the cache directory and computational time to switch from LRU to LFU. Moreover, it fails in the case of accesses inside loops with a working set slightly larger than the available cache.
E. J. O’Neil et al. (1993) [8] presented the LRU-K policy, which makes its replacement decision based on the time of the Kth-to-last reference to a block, i.e. the reference density observed during the past K references. When K is large, the policy discriminates well between frequently and infrequently referenced blocks. When K is small, it can remove cold blocks quickly, since such blocks have a wide span between the current time and the Kth-to-last reference time. The time complexity of the algorithm is O(log n); however, the policy does not perform well for looping and sequential access patterns.
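The victim-selection rule of LRU-K can be written down directly (a sketch; the history structure is an assumption, and a real implementation would use a priority queue to get the O(log n) behaviour):

```python
def lru_k_victim(history, K=2):
    """Evict the block whose K-th most recent reference is oldest; blocks with
    fewer than K references count as infinitely old, so they are evicted first."""
    def kth_most_recent(block):
        times = history[block]
        return times[-K] if len(times) >= K else float("-inf")
    return min(history, key=kth_most_recent)

# block -> list of access times, most recent last
history = {"a": [1, 9], "b": [8], "c": [3, 7]}
print(lru_k_victim(history))  # "b": touched recently, but only once
```

Note how plain LRU (K = 1) would have kept "b" and evicted "a"; looking at the second-to-last reference is what filters out one-touch blocks.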
Zhu Xu-Dong et al. (2009) [9] proposed a spatial-locality-based, block-correlations-directed cache replacement policy (BCD), which uses both history and runtime access information to predict spatial locality; the prediction results are used to improve the utilization of the cache and to reduce the penalty incurred by incorrect predictions. For most real system workloads, BCD can reduce the cache miss ratio by 11% to 38% compared with LRU.
Y. Smaragdakis et al. (1999) [10] described the early eviction LRU policy (EELRU), proposed as an attempt to mix LRU and MRU based only on the positions in the LRU queue that concentrate most of the memory references. This queue is a representation of main memory under the LRU model, ordered by the recency of each page. EELRU detects potential sequential access patterns by analyzing the reuse of pages. One important feature of this policy is the detection of non-numerically-adjacent sequential memory access patterns. The policy does not perform well for looping access patterns.
Andhi Janapsatya et al. (2010) [11] proposed a new adaptive cache replacement policy called Dueling CLOCK (DC). The DC policy was developed to have low overhead cost, to capture recency information in memory accesses, to exploit the frequency pattern of memory accesses and to be scan resistant. The paper proposes a hardware implementation of the CLOCK algorithm for use within an on-chip cache controller to ensure low overhead cost. DC is an adaptive policy that alternates between the CLOCK algorithm and a scan-resistant version of the CLOCK algorithm, and it reduces the maintenance cost of the LRU policy. The research issue here is to explore how a replacement policy can perform efficiently under diverse workloads (mixed access patterns) and how the processor and memory overhead of a novel replacement policy can be reduced.
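The classic CLOCK algorithm that DC builds on approximates LRU with one reference bit per frame instead of a full recency ordering; a minimal sketch (illustrative software model of the hardware scheme, not the paper's implementation):

```python
class Clock:
    """Minimal CLOCK replacement: a sweeping hand gives frames with the
    reference bit set a second chance instead of maintaining LRU order."""
    def __init__(self, capacity):
        self.frames = [None] * capacity
        self.ref = [0] * capacity
        self.hand = 0

    def access(self, block):
        if block in self.frames:                 # hit: just set the reference bit
            self.ref[self.frames.index(block)] = 1
            return True
        while self.ref[self.hand]:               # miss: sweep, clearing set bits
            self.ref[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = block           # evict / fill this frame
        self.ref[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.frames)
        return False

c = Clock(3)
print([c.access(b) for b in ["a", "b", "a", "c", "d", "a"]])
# [False, False, True, False, False, False]
```

The attraction for an on-chip controller is clear from the state involved: one bit per frame and one hand pointer, versus the per-access reordering that true LRU requires.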
III. PARTITIONING TECHNIQUE
Chip multiprocessors (CMPs) have been widely adopted and are commercially available as the building blocks of future computer systems. A CMP contains multiple cores, enabling multiple applications (or threads) to execute concurrently on a single chip. As the number of cores on a chip increases, so does the pressure on the memory system to sustain the memory requirements of all the concurrently executing applications (or threads). An important question in CMP design is how to use the limited area resources on chip to achieve the best possible system throughput for a wide range of applications. The keys to obtaining high performance from multicore architectures are to provide fast data accesses (reduced latency) for on-chip computation resources and to manage the last-level on-chip cache efficiently so that off-chip accesses are reduced. Limited off-chip bandwidth, increasing latency, destructive inter-thread interference, uncontrolled contention and sharing, increasing pollution, decreasing harmonic-mean performance and diverse workload characteristics pose key design challenges. To address these challenges many researchers [12]-[24] have proposed cache partitioning schemes to share on-chip cache resources among different threads, but not all of these challenges are addressed properly.
Cho and Jin (2006) [12] proposed a software-based mechanism for L2 cache partitioning based on physical page allocation. However, the major focus of their work is on how to distribute data in a Non-Uniform Cache Architecture (NUCA) to minimize overall data access latencies; they do not address the problem of uncontrolled contention on a shared L2 cache.
David Tam et al. (2007) [13] demonstrated a software-based cache partitioning mechanism that allows flexible management of the shared L2 cache resource and showed some of its potential gains in a multiprogrammed computing environment. This work neither supports the dynamic determination of optimal partitions nor dynamically adjusts the number of partitions.
Stone et al. (1992) [14] investigated optimal (static) partitioning of cache resources between multiple applications when the change in misses for varying cache size is known for each of the competing applications. However, such information is non-trivial to obtain dynamically for all applications, as it depends on the input set of each application.
Suh et al. (2004) [15] described dynamic partitioning of a shared cache that measures the utility of cache space for each application by counting hits to the recency positions in the cache, and used way partitioning to enforce partitioning decisions. The problem with way partitioning is that it requires core-identifying bits with each cache entry, which requires changing the structure of the tag-store entry. Way partitioning also requires that the associativity of the cache be increased to partition the cache among a large number of applications.
Qureshi et al. (2006) [16] placed the cache monitoring circuits outside the cache so that the information computed for one application is not polluted by other concurrently executing applications. They provide a set-sampling-based utility monitoring circuit that requires a storage overhead of 2KB per core, and they use way partitioning to enforce partitioning decisions. TADIP-F (a thread-aware adaptive insertion policy) is better able than UCP to respond to workloads whose working sets are greater than the cache size.
Chang et al. (2007) [17] used time slicing as a means of cache partitioning so that each application is guaranteed cache resources for a certain time quantum. Their scheme is still susceptible to thrashing when the working set of an application is greater than the cache size.
Suh et al. (2002) [18] described a way of partitioning a cache for multithreaded systems by estimating the best partition sizes. They counted hits in the LRU position of the cache to predict the number of extra misses that would occur if the cache size were decreased; a heuristic combined this number with the number of hits in the second LRU position to estimate the number of cache misses avoided if the cache size were increased.
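This hit-counting estimate follows from the LRU stack property: hits at the LRU-most positions of a set become misses if those ways are removed. A minimal sketch (the counter values are invented for illustration):

```python
def extra_misses(stack_hits, ways_removed):
    """stack_hits[i] = hits observed at LRU stack position i (0 = MRU).
    Hits in the ways_removed LRU-most positions turn into misses when those
    ways are taken away, giving the marginal cost of shrinking a partition."""
    return sum(stack_hits[len(stack_hits) - ways_removed:])

hits_per_position = [90, 40, 12, 5]        # counters for a 4-way cache
print(extra_misses(hits_per_position, 1))  # 5 extra misses if one way is lost
print(extra_misses(hits_per_position, 2))  # 17 if two ways are lost
```

Comparing these marginal costs across the co-running threads is what drives the partition-sizing heuristic described above.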
Dybdahl et al. (2006) [19] presented a method that adjusts the size of the cache partitions within a shared cache, but the work considered neither a shared partition of variable size nor a combination of private and shared caches.
Kim et al. (2004) [20] presented cache partitioning in a shared cache for a two-core CMP using a trial-and-fail algorithm. Trial and fail as a partitioning method does not scale well with an increasing number of cores, since the solution space grows quickly.
Z. Chishti et al. (2005) [21] described spilling evicted cache blocks to a neighbouring cache. They considered neither constraints on sharing nor methods for protection from pollution, and no mechanism was described for optimizing partition sizes.
Chiou et al. (2000) [22] suggested a mechanism for protecting cache blocks within a set. Their proposal was to let software control which blocks can be replaced in a set, in order to reduce conflicts and pollution. The scheme was intended for a multi-threaded core with a single cache.
Dybdahl et al. (2007) [23] presented an approach in which the amount of cache space that can be shared among the cores is controlled dynamically, so that uncontrolled sharing of resources is also controlled effectively. The adaptive scheme continuously estimates the effect of increasing or decreasing the shared partition size on overall performance. The paper describes a NUCA organization in which blocks in a local partition can spill over to neighbouring core partitions. The approach suffers from pollution and harmonic-mean performance problems.
Dimitris Kaseridis et al. (2009) [24] proposed a dynamic partitioning strategy based on realistic last-level cache designs of CMP processors. The proposed scheme provides on average a 70% reduction in misses compared to non-partitioned shared caches, and a 25% reduction in misses compared to statically, equally partitioned (private) caches. This work highlights the problem of sharing the last level of cache in CMP systems and motivates the need for low-overhead, workload-feedback-based hardware/software mechanisms, scalable with the number of cores, for monitoring and controlling L2 cache capacity partitioning.
The research issue here is to explore cost-effective solutions for future caching requirements, including thrashing avoidance, throughput improvement, fairness improvement and QoS guarantees, under the key design challenges above.
IV. COMPRESSION TECHNIQUE
Chip multiprocessors (CMPs) combine multiple processor cores on a single die; however, the increasing number of cores on a chip increases the demand on two critical resources, the shared L2 cache capacity and the off-chip pin bandwidth. Cache compression helps satisfy the demand on both: from the existing research [25][26][27][28][29][30][31] it is well known that compression can reduce the cache miss ratio, by increasing the effective shared cache capacity, and improve off-chip bandwidth utilisation, by transferring data in compressed form. Jang-Soo Lee et al. (1999) [25] proposed a selective compressed memory system based on the selective compression technique, a fixed space allocation method, and several techniques for reducing the decompression overhead. The proposed system provides on average a 35% decrease in the on-chip cache miss ratio as well as on average a 53% decrease in data traffic. However, the authors could not control the problems of long DRAM latency and limited bus bandwidth.
Charles Lefurgy et al. (2002) [26] presented a method of decompressing programs using software. It relies on a software-managed instruction cache under control of the decompressor, achieved by employing a simple cache management instruction that allows explicit writing into a cache line. The work also considers selective compression (determining which procedures in a program should be compressed) and shows that selection based on cache miss profiles can substantially outperform the usual execution-time-based profiles for some benchmarks. The technique achieves high performance in part through this cache management instruction, which writes decompressed code directly into an instruction cache line. The study focuses on designing a fast decompressor (rather than generating the smallest code size) in the interest of performance, and shows that a simple, highly optimized dictionary compression performs even better than CodePack, but at a cost of 5 to 25% in compression ratio.
Prateek Pujara et al. (2005) [27] investigated restrictive compression techniques for the level-one data cache that avoid an increase in the cache access latency. The basic technique, all words narrow (AWN), compresses a cache block only if all the words in the block are narrow. An extension (AHS) stores a few additional upper half-words in a cache block to accommodate a small number of normal-sized words. The authors further make the AHS technique adaptive, allocating the additional half-word space to cache blocks adaptively, and propose techniques to limit the increase in tag space that is inevitable with compression. Overall, the techniques increase the average L1 data cache capacity (in terms of the average number of valid cache blocks per cycle) by about 50% compared to a conventional cache, with little or no impact on cache access time, and have the potential to reduce the average L1 data cache miss rate by about 23%.
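The AWN test itself is a simple per-word width check; a sketch (illustrative widths and helper name, not the paper's exact hardware logic):

```python
def awn_compressible(block_words, narrow_bits=16, word_bits=32):
    """All Words Narrow: the block may be stored compressed only if every word
    sign-extends from the narrow width, i.e. its upper bits are all zeros or
    all ones."""
    upper_width = word_bits - narrow_bits
    all_ones = (1 << upper_width) - 1
    def is_narrow(word):
        upper = (word & ((1 << word_bits) - 1)) >> narrow_bits
        return upper == 0 or upper == all_ones
    return all(is_narrow(w) for w in block_words)

print(awn_compressible([3, 0xFFFFFFF0, 42, 7]))  # True: every word is narrow
print(awn_compressible([3, 0x00010000, 42, 7]))  # False: one wide word spoils it
```

Because a single wide word disqualifies the whole block, AWN keeps the decompression path trivial; AHS relaxes exactly this restriction by reserving space for a few upper half-words.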
Martin et al. (2008) [28] showed that it is possible to use larger block sizes without increasing off-chip memory bandwidth by applying compression techniques to cache/memory block transfers. Since bandwidth is reduced by up to a factor of three, the work proposes using larger blocks. Although compression/decompression ends up on the critical memory access path, the work finds its negative impact on memory access latency to be limited. The proposed scheme dynamically chooses a larger cache block when advantageous, given the spatial locality in combination with compression, and consistently improves performance, on average by 19%.
Xi Chen et al. (2009) [29] presented a lossless compression algorithm designed for fast on-line data compression, and cache compression in particular. The algorithm has a number of novel features tailored to this application, including combining pairs of compressed lines into one cache line and allowing parallel compression of multiple words using a single dictionary without degradation in compression ratio. The algorithm is based on pattern matching and partial dictionary coding, and its hardware implementation permits parallel compression of multiple words without degrading the dictionary match probability. The proposed algorithm yields an effective system-wide compression ratio of 61% and permits a hardware implementation with a maximum decompression latency of 6.67 ns.
Martin et al. (2009) [30] present and evaluate FPC, a lossless, single-pass, linear-time compression algorithm targeting streams of double-precision floating-point values. FPC uses two context-based predictors to sequentially predict each value in the stream. It delivers a good average compression ratio on hard-to-compress numeric data, and it employs a simple algorithm that is very fast and easy to implement with integer operations. The authors claim that FPC compresses and decompresses 2 to 300 times faster than special-purpose floating-point compressors, delivering the highest geometric-mean compression ratio and the highest throughput on hard-to-compress scientific data sets, with individual compression ratios between 1.02 and 15.05.
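The core idea of predictor-based float compression can be sketched in one step (a simplification: the last value stands in for FPC's context-based predictors, and the helper name is an assumption):

```python
import struct

def xor_residue(predicted, actual):
    """XOR the actual value's bit pattern with the predicted one and count
    leading zero bytes: a good predictor leaves mostly zeros, so only the
    low-order residue bytes (plus the count) need to be stored."""
    p = struct.unpack("<Q", struct.pack("<d", predicted))[0]
    a = struct.unpack("<Q", struct.pack("<d", actual))[0]
    residue = p ^ a
    zero_bytes = 0
    for shift in range(56, -8, -8):          # walk bytes from most significant
        if (residue >> shift) & 0xFF:
            break
        zero_bytes += 1
    return zero_bytes, residue

print(xor_residue(1.0, 1.0))        # (8, 0): a perfect prediction stores nothing
print(xor_residue(1.0, 1.0000001))  # several leading zero bytes, small payload
```

Decompression simply reverses the XOR against the same predictor state, which is why the scheme needs only integer operations on either side.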
David Chen et al. (2003) [31] propose a scheme that dynamically partitions the cache into sections of different compressibilities; compression is applied repeatedly on smaller, cache-line-sized blocks so as to preserve the random access requirement of a cache. When a cache line is brought into the L2 cache, or a cache line is to be modified, the line is compressed using a dynamic LZW dictionary and, depending on its compressibility, placed into the relevant partition. The partitioning is dynamic in that the ratio of space allocated to compressed and uncompressed data varies depending on the actual performance. The compressed L2 cache shows an 80% reduction in L2 miss rate compared to an uncompressed L2 cache of the same area.
A research issue here is that when the processor requests a word within a compressed data block stored in the compressed cache, the whole block has to be decompressed on the fly before the requested word can be transferred to the processor. Compression ratio, compression time and decompression overhead critically affect the memory access time and can offset the compression benefits; these issues are interesting and challenging for future research. Another issue associated with a compressed memory system is that compressed blocks are generated with different sizes depending on the compression efficiency; in the worst case, a compressed block can be longer than its source block, which adversely affects system performance.
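The trade-off between the extra hits compression buys and the decompression latency it charges can be made concrete with a back-of-the-envelope average-access-time model (all latencies and hit rates below are illustrative assumptions, not measured values):

```python
def avg_access_time(hits_per_1000, hit_ns, miss_ns, decomp_ns=0):
    """Average access time in ns over 1000 accesses; decompression latency is
    charged on every hit in the compressed cache (a pessimistic simplification,
    since uncompressed lines would not pay it)."""
    misses = 1000 - hits_per_1000
    return (hits_per_1000 * (hit_ns + decomp_ns) + misses * miss_ns) / 1000

base = avg_access_time(900, 10, 300)               # uncompressed: 90% hit rate
comp = avg_access_time(940, 10, 300, decomp_ns=5)  # compressed: more, slower hits
print(base, comp)  # 39.0 32.1 -- here the extra hits outweigh the decompression cost
```

Shrink the hit-rate gain or grow the decompression latency and the comparison flips, which is precisely why compression ratio and decompression overhead are the critical parameters identified above.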
V. CONCLUSION
From the above discussion, the following conclusions can be drawn to address the identified research issues in caching techniques in chip multiprocessors to improve system performance:
• Develop a low-overhead novel replacement policy that performs efficiently under diverse workloads, different cache sizes and varying working sets.
• Develop efficient cache partitioning schemes for chip multiprocessors with different optimization objectives, including throughput, fairness and guaranteed quality of service (QoS).
• Develop low-overhead cache compression/decompression schemes for chip multiprocessors to increase shared cache capacity and off-chip bandwidth.
REFERENCES
John L. Hennessy and David A. Patterson, “Computer Architecture: A Quantitative Approach”, 3rd Edition, Elsevier, 2003.
[2] Konstantinos Nikas, Matthew Horsnell, Jim Garside, “An Adaptive Bloom Filter Cache Partitioning
Scheme for Multicore Architectures”, International Conference on, Embedded Computer Systems:
Architectures, Modelling, and Simulation, July 21-24 2008, SAMOS 2008, pp. 21-24.
[3] Kaveh Samiee, GholamAli Rezai Rad, “WRP: Weighting Replacement Policy to Improve Cache
Performance”, Proceeding of the International Symposium on Computer Science and its Applications,
2008, pp. 38-41.
Kaveh Samiee, “A Replacement Algorithm Based on Weighting and Ranking Cache Objects”, International Journal of Hybrid Information Technology, Vol. 2, No. 2, April 2009.
Mansard Jargh, Ahmed Hasswa, “Implementation, Analysis and Performance Evaluation of the IRP-Cache Replacement Policy”, IEEE International Conference on Computer and Information Technology Workshops, 2004.
[6] S. Jiang and X. Zhang, “LIRS: An Efficient Low Inter-reference Recency Set Replacement Policy to
Improve Buffer Cache Performance”, Proceedings of the ACM SIGMETRICS Conference on
Measurement and Modelling of computer Systems, pp. 31–42, 2002.
Li Zhan-sheng, Liu Da-wei, Bai Hui-juan, “CRFP: A Novel Adaptive Replacement Policy Combined the LRU and LFU Policies”, IEEE 8th International Conference on Computer and Information Technology Workshops, 2008.
E. J. O’Neil, P. E. O’Neil, and Gerhard Weikum, “The LRU-K Page Replacement Algorithm for Database Disk Buffering”, Proceedings of the 1993 ACM SIGMOD Conference, pp. 297–306, 1993.
[9] Zhu Xu-Dong, Ke Jian, Xu Lu, “BCD: To Achieve the Theoretical Optimum of Spatial Locality Based
Cache Replacement Algorithm”, IEEE International Conference on Networking, Architecture, and
Authors

H. R. Deshmukh received his M.E. (CSE) degree from SGB Amravati University, Amravati, in 2008, and has been a research scholar since 2009. He works as an associate professor in the Department of CSE, B.N.C.O.E., Pusad (India), and is a life member of the Indian Society for Technical Education, New Delhi.

G. R. Bamnote is Professor and Head of the Department of Computer Science & Engineering at Prof. Ram Meghe Institute of Technology & Research, Badnera-Amravati. He received his B.E. (Computer Engg.) in 1990 from Walchand College of Engineering, Sangli, his M.E. (Computer Science & Engg.) from PRMIT&R, Badnera-Amravati, in 1998, and his Ph.D. in Computer Science & Engineering from SGB Amravati University, Amravati, in 2009. He is a life member of the Indian Society for Technical Education and the Computer Society of India, and a Fellow of The Institution of Electronics and Telecommunication Engineers and The Institution of Engineers (India).
KANNADA TEXT EXTRACTION FROM IMAGES AND VIDEOS FOR VISION IMPAIRED PERSONS

Keshava Prasanna1, Ramakanth Kumar P2, Thungamani M3, Manohar Koli4
1, 3 Research Assistant, Tumkur University, Tumkur, India.
2 Professor and HOD, R.V. College of Engineering, Bangalore, India.
4 Research Scholar, Tumkur University, Tumkur, India.
ABSTRACT
We propose a system that reads Kannada text encountered in natural scenes, with the aim of assisting the visually impaired persons of Karnataka state. This paper describes the system design and a standard-deviation-based Kannada text extraction method. The proposed system contains three main stages: text extraction, text recognition and speech synthesis. This paper concentrates on text extraction from images/videos. An efficient algorithm is presented which can automatically detect, localize and extract Kannada text from images (and digital videos) with complex backgrounds. The proposed approach is based on the application of a color reduction technique, a standard-deviation-based method for edge detection, and the localization of text regions using new connected-component properties. The outputs of the algorithm are text boxes with a simple background, ready to be fed into an OCR engine for subsequent character recognition. Our proposal is robust with respect to different font sizes, font colors, orientations, alignments and background complexities. The performance of the approach is demonstrated by presenting promising experimental results for a set of images taken from different types of video sequences.
KEYWORDS: SVM, OCR, AMA, CCD Camera, Speech synthesis.
I. INTRODUCTION
Recent studies in the field of computer vision and pattern recognition show a great amount of interest in content retrieval from images and videos. Text embedded in images contains large quantities of useful semantic information, which can be used to fully understand images. Indeed, most objects in the world can be analyzed and identified by reading the text information present on them.
Automatic detection and extraction of text in images has been used in many applications such as document retrieval, address block location, content-based image/video indexing, mobile robot navigation using text-based landmarks, vehicle license plate detection/recognition, and object identification. A document image analysis system is one that can handle text documents in Kannada, the official language of the south Indian state of Karnataka. The input to such a system is the scanned image of a page of Kannada text; the output is an editable computer file containing the information in the page. The system is designed to be independent of the size of characters in the document and hence can be used with any kind of Kannada document. The task of separating lines and words in a document is fairly independent of the script and can be achieved with standard techniques. However, due to the peculiarities of the Kannada script, we make use of a novel segmentation scheme whereby words are first segmented to a sub-character level; the individual pieces are recognized and then put together to effect recognition of individual aksharas, or characters. The Kannada alphabet of 50 letters is classified into two main categories, 16 vowels and 34 consonants, as shown in Figure 1 and Figure 2. Words in Kannada are composed of aksharas [13], which are analogous to characters in English words. We use a novel feature vector to characterize each segment and employ a classifier based on the recently developed concept of Support Vector Machines (SVM) [14]. Blind people are
almost entirely dependent on others; they cannot read and analyze objects on their own. Extraction of textual information therefore plays a vital role in making text accessible to blind people, helping them in tasks such as identifying objects and reading text books, newspapers, current and electricity bills, sign boards, personal letters, etc.
OCR systems are available for handling English documents with reasonable levels of accuracy. (Such systems are also available for many European languages, as well as some Asian languages such as Japanese and Chinese.) However, there are not many reported efforts at developing OCR systems for Indian languages. The work reported here is motivated by the fact that there are no reported efforts at developing document analysis systems for the south Indian language Kannada. In most OCR [13] systems the final recognition accuracy is higher than the raw character recognition accuracy. To obtain higher recognition accuracy, language-specific information such as co-occurrence frequencies of letters, a word corpus [14], and a rudimentary model of the grammar is used. This allows the system to automatically correct many of the errors made by the OCR subsystem. In our current implementation we have not incorporated any such post-processing, the main reason being that at present we do not have a word corpus for Kannada. Even with a word corpus the task is still difficult because of the highly inflexional nature of Kannada grammar. The grammar also allows for combinations of two or more words. Even though these combinations follow well-defined rules of grammar, the number of rules is large, and incorporating them into a good spell-checking application for Kannada is a challenging task.
Figure 1: Vowels in Kannada [13]
Figure 2: Consonants in Kannada [13]
II. RELATED WORK
Due to the variety of font sizes, styles, orientations and alignments, as well as the complexity of the background, designing a robust general algorithm that can effectively detect and extract text from
both types of images is full of challenges. Various methods have been proposed in the past for detection and localization of text in images and videos. These approaches take into consideration different properties related to text in an image, such as color, intensity, connected components and edges. These properties are used to distinguish text regions from their background and/or other regions within the image.
[1]. Xiaoqing Liu et al. [1, 2]: The algorithm proposed is based on edge density, strength and orientation. The input image is first pre-processed to remove any noise present. Then horizontal, vertical and diagonal edges are identified with the help of Gaussian kernels, and text regions are identified based on edge density, strength and orientation. This approach is based on the fact that edges are the most reliable features of text.
[2]. Julinda Gllavata et al. [3]: The algorithm proposed is a connected-component-based method, exploiting the fact that text is a collection of characters that usually come in a group. The input image is first pre-processed to remove any noise present. The image is then converted from the RGB to the YUV model, the Y-channel is processed, and horizontal and vertical projections are calculated. Text regions are then identified with the help of horizontal and vertical thresholds.
[3]. Wang and Kangas [4]: The algorithm proposed is based on color clustering. The input image is first pre-processed to remove any noise present. The image is then grouped into different color layers and a gray component. This approach utilizes the fact that the color of text characters usually differs from the color of the background. Potential text regions are localized using connected-component-based heuristics on these layers. An aligning-and-merging analysis (AMA) method is also used, in which each row and column value is analyzed. The experiments conducted show that the algorithm is robust in locating mostly Chinese and English characters in images; sometimes false alarms occurred due to uneven lighting or reflections in the test images.
[4]. K. C. Kim et al. [5]: This text detection algorithm is also based on color continuity. In addition, it uses multi-resolution wavelet transforms and combines low-level as well as high-level image features for text region extraction, a hierarchical feature combination method for implementing text extraction in natural scenes. However, the authors admit that this method cannot handle large text very well, due to the use of local features that represent only local variations of image blocks.
[5]. Victor Wu et al. [6]: The text finder algorithm proposed is based on the frequency, orientation and spacing of text within an image. Texture-based segmentation is used to distinguish text from its background. Further, a bottom-up 'chip generation' process is carried out, which uses the spatial cohesion property of text strokes and edges. The results show that the algorithm is robust in most cases, except for very small text characters, which are not properly detected. Also, in the case of low contrast in the image, misclassifications occur in the texture segmentation.
[6]. Qixiang Ye et al. [7, 8]: The approach used in [7, 8] utilizes a support vector machine (SVM) classifier to segment text from non-text in an image or video frame. Initially, text is detected in multi-scale images using non-edge-based techniques, morphological operations and projection profiles of the image. The detected text regions are then verified using wavelet features and an SVM. The algorithm is robust with respect to variance in color and size of font as well as language.
[7]. Sanjeev Kunte et al. [11]: This Kannada character detection algorithm is based on neural networks. The input image is first pre-processed to remove any noise present. Neural classifiers are effectively used for the classification of characters based on moment features.
[8]. Teófilo E. de Campos et al. [12]: This character detection algorithm is based on SVMs. It evaluates six different shape- and edge-based features, such as Shape Context, Geometric Blur and SIFT, as well as features used for representing texture, such as filter responses, patches and Spin Images.
III. PROPOSED WORK
In this proposed work, a robust system for automatically extracting Kannada text appearing in images and videos with complex backgrounds is presented. Standard-deviation-based edge detection is performed to detect edges present in all directions.
Identification of the script used can help in improving the segmentation results and in increasing the accuracy of OCR by choosing the appropriate algorithms. Thus, a novel technique for Kannada script recognition in complex images is presented. Figure 3 shows the general configuration of the proposed system. The building elements are the TIE module, the CCD camera and the voice synthesizer.
Figure 3. System configuration (walk-around mode)
The proposed system contains three main steps after acquiring an image with the help of the CCD camera:
1. Textual information extraction.
2. Optical character recognition.
3. Speech synthesis.
As the first step in the development of this system, a simple standard-deviation-based method for Kannada text detection is proposed.
The different steps of our approach are as follows:
1. Image preprocessing.
2. Calculation of the standard deviation of the image.
3. Detection of text regions.
Step 1: Image Preprocessing. If the image data is not represented in the HSV color space, it is converted to this color space by means of appropriate transformations. Our system only uses the intensity data (the V channel of HSV, Figure 5) during further processing. A median filtering operation is then applied to the V (intensity) band to reduce noise, before contrast-limited adaptive histogram equalization is applied for contrast enhancement.
Figure 4. Original image. Figure 5. V channel.
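The preprocessing chain of Step 1 (V-channel extraction, median filtering, contrast enhancement) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: it uses plain global histogram equalization where the paper applies the contrast-limited adaptive (CLAHE) variant, and the 3x3 median window is an assumed size.

```python
import numpy as np

def preprocess(rgb):
    """V channel of HSV + median filter + global histogram equalization."""
    # The V channel of HSV is the per-pixel maximum over R, G and B.
    v = rgb.max(axis=2).astype(np.uint8)

    # 3x3 median filter to suppress impulse noise (assumed window size).
    padded = np.pad(v, 1, mode="edge")
    windows = np.stack([padded[dy:dy + v.shape[0], dx:dx + v.shape[1]]
                        for dy in range(3) for dx in range(3)])
    v_med = np.median(windows, axis=0).astype(np.uint8)

    # Global histogram equalization: map each grey level through the
    # normalized cumulative histogram (the paper uses the contrast-limited
    # adaptive variant; plain equalization keeps the sketch short).
    hist = np.bincount(v_med.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    lut = (cdf * 255).astype(np.uint8)
    return lut[v_med]
```

The result corresponds to the enhanced intensity image that the standard-deviation step then operates on.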
Step 2: Edge Detection. This step focuses attention on areas where text may occur. We employ a simple method for converting the gray-level image into an edge image. Our algorithm is based on the fact that characters possess a high standard deviation compared to their local neighbors:

Std(x) = 1/(N-1) * Σ_{i ∈ W(x)} (V(i) - µ(x))^2        ...... (1)
where x is the set of all pixels in a sub-window W(x), N is the number of pixels in W(x), and µ(x) is the mean value of V(i) for i ∈ W(x). A window size of 3x7 pixels was used in this step.
Figure 6. Standard deviation image
Step 3: Detection of Text Regions. The steps used in Kannada text localization differ from those used for English text localization because the features of the two scripts are different. The height/width ratio, centroid difference and orientation calculations used in English text extraction are not suitable for Kannada text extraction.
Normally, text embedded in an image appears in clusters, i.e., it is arranged compactly. Thus, characteristics of clustering can be used to localize text regions. Since the intensity of the feature map represents the possibility of text, simple global thresholding can be employed to highlight regions with high text possibility, resulting in a binary image. A morphological dilation operator can easily connect very close regions together while leaving those whose positions are far from each other isolated. In our proposed method, we apply a morphological dilation operator with a 7x7 square structuring element to the previously obtained binary image to get joint areas referred to as text blobs. Two constraints are used to filter out blobs which do not contain text [1, 2]: the first constraint filters out all very small isolated blobs, whereas the second filters out blobs whose widths are much smaller than their corresponding heights. The remaining blobs are enclosed in boundary boxes. The four pairs of coordinates of each boundary box are determined by the maximum and minimum coordinates of the top, bottom, left and right points of the corresponding blob. In order to avoid missing character pixels which lie near or outside the initial boundary, the width and height of the boundary box are padded by small amounts, as in Figure 7.
Figure 7. Final results for the example given in Figure 5
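The thresholding, 7x7 dilation and blob-filtering pipeline of Step 3 can be prototyped as below. The numeric thresholds (minimum blob area, width-to-height ratio, padding) are illustrative placeholders, since the paper does not state the values of its two constraints.

```python
import numpy as np

def detect_text_boxes(feature_map, thresh, min_area=20, pad=2):
    """Threshold the std-dev feature map, dilate with a 7x7 square,
    label blobs by flood fill, filter them, and return padded boxes."""
    binary = feature_map > thresh
    h, w = binary.shape

    # 7x7 morphological dilation: a pixel is set if any neighbour
    # within +/-3 rows/cols is set.
    padded = np.pad(binary, 3)
    dilated = np.zeros_like(binary)
    for dy in range(7):
        for dx in range(7):
            dilated |= padded[dy:dy + h, dx:dx + w]

    # Connected-component labelling via iterative flood fill (4-connectivity).
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if dilated[sy, sx] and labels[sy, sx] == 0:
                next_label += 1
                labels[sy, sx] = next_label
                stack, pixels = [(sy, sx)], []
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           dilated[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                bh, bw = max(ys) - min(ys) + 1, max(xs) - min(xs) + 1
                # Constraint 1: drop very small isolated blobs.
                # Constraint 2: drop blobs much narrower than they are tall.
                if len(pixels) >= min_area and bw >= 0.5 * bh:
                    boxes.append((max(min(ys) - pad, 0),
                                  max(min(xs) - pad, 0),
                                  min(max(ys) + pad, h - 1),
                                  min(max(xs) + pad, w - 1)))
    return boxes
```

Each returned tuple is (top, left, bottom, right) of a padded boundary box, ready to be cropped and handed to the OCR stage.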
IV. EXPERIMENTAL EVALUATION
The proposed approach has been evaluated using datasets containing different types of images (Figures 8, 9 and 10). The whole test set consists of 300 images, 100 of which were extracted from various MPEG videos.
Figure 8. Results of House Boards
Figure 9. Results of Wall Boards
Figure 10. Results of Banners.
The precision and recall rates (Equations (2) and (3)) have been computed based on the number of correctly detected words in an image, in order to further evaluate the efficiency and robustness. The precision rate is defined as the ratio of correctly detected words to the sum of correctly detected words plus false positives. False positives are those regions in the image which are not actually characters of text but have been detected by the algorithm as text regions.

Precision rate = Correctly detected words / (Correctly detected words + False positives) * 100%  ...... (2)

The recall rate is defined as the ratio of correctly detected words to the sum of correctly detected words plus false negatives. False negatives are those regions in the image which are actually text characters but have not been detected by the algorithm.

Recall rate = Correctly detected words / (Correctly detected words + False negatives) * 100%  ...... (3)
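Equations (2) and (3) translate directly into code; the two functions below simply restate them for counts of correct detections, false positives and false negatives.

```python
def precision_rate(correct, false_pos):
    """Eq. (2): correctly detected words vs. all reported detections."""
    return 100.0 * correct / (correct + false_pos)

def recall_rate(correct, false_neg):
    """Eq. (3): correctly detected words vs. all ground-truth words."""
    return 100.0 * correct / (correct + false_neg)
```

For example, 90 correct detections with 10 false positives gives a precision rate of 90.0%, and 80 correct detections with 20 missed words gives a recall rate of 80.0%.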
Table 1. Analysis of precision rate and recall rate

Test data     No. of images   Precision rate (%)   Recall rate (%)
From images   200             92.2                 88.6
From videos   100             78.8                 80.2
Total         300             80.5                 84.4
V. CONCLUSION
Text extraction is a critical step, as it determines the quality of the final recognition result. It aims at segmenting text from the background, i.e., isolating text pixels from background pixels. In this paper we presented the design of a Kannada scene-text detection module for visually impaired persons. As the first step in the development of this system, a simple standard-deviation-based method for Kannada text detection has been implemented and evaluated.
VI. FUTURE WORK
The main challenge is to design a system as versatile as possible, able to handle all the variability of daily life: variable targets with unknown layout, scene text, several character fonts and sizes, and variability in imaging conditions such as uneven lighting, shadowing and aliasing. Variations in font style, size, orientation and alignment, together with the complexity of the background, make text segmentation a challenging task in text extraction.
We plan to employ an OCR system to check the recognition performance on the text images produced by the proposed algorithm, and also to employ a speech synthesizer to spell the recognized text to vision impaired persons. Finally, work will focus on new methods for extracting Kannada text characters with higher accuracy.
REFERENCES
[1] Xiaoqing Liu and Jagath Samarabandu, "An edge-based text region extraction algorithm for indoor mobile robot navigation", Proceedings of the IEEE, July 2005.
[2] Xiaoqing Liu and Jagath Samarabandu, "Multiscale edge-based text extraction from complex images", IEEE, 2006.
[3] Julinda Gllavata, Ralph Ewerth and Bernd Freisleben, "A robust algorithm for text detection in images", Proceedings of the 3rd International Symposium on Image and Signal Processing and Analysis, 2003.
[4] Kongqiao Wang and Jari A. Kangas, "Character location in scene images from digital camera", Pattern Recognition, March 2003.
[5] K. C. Kim, H. R. Byun, Y. J. Song, Y. W. Choi, S. Y. Chi, K. K. Kim and Y. K. Chung, "Scene text extraction in natural scene images using hierarchical feature combining and verification", Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), IEEE.
[6] Victor Wu, Raghavan Manmatha and Edward M. Riseman, "TextFinder: an automatic system to detect and recognize text in images", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 11, November 1999.
[7] Qixiang Ye, Qingming Huang, Wen Gao and Debin Zhao, "Fast and robust text detection in images and video frames", Image and Vision Computing 23, 2005.
[8] Qixiang Ye, Wen Gao, Weiqiang Wang and Wei Zeng, "A robust text detection algorithm in images and video frames", IEEE, 2003.
[9] Rainer Lienhart and Axel Wernicke, "Localizing and segmenting text in images and videos", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 4, April 2002.
[10] Keechul Jung, Kwang In Kim and Anil K. Jain, "Text information extraction in images and video: a survey", Pattern Recognition, 2004.
[11] Sanjeev Kunte and R. D. Sudhaker Samuel, "A simple and efficient optical character recognition system for basic symbols in printed Kannada text".
[12] Nobuo Ezaki, Marius Bulacu and Lambert Schomaker, "Text detection from natural scene images: towards a system for visually impaired persons", Proc. of 17th Int. Conf. on Pattern Recognition (ICPR 2004), IEEE Computer Society, Vol. II, pp. 683-686, 23-26 August 2004, Cambridge, UK.
[13] T. V. Ashwin and P. S. Sastry, "A font and size-independent OCR system for printed Kannada documents using support vector machines", Sādhanā, Vol. 27, Part 1, February 2002, pp. 35-58.
[14] Department of Computer Sciences, University of Texas at Austin, "Support Vector Machines", www.cs.utexas.edu/~mooney/cs391L/svm.ppt.
Keshava Prasanna received his B.E. from Bangalore University and M.Tech in Information Technology in the year 2005. He has around 13 years of experience in academics. He is currently pursuing a Ph.D. and working as a Research Assistant at Tumkur University, Tumkur. He is a life member of the Indian Society for Technical Education (ISTE).

Ramakanth Kumar P completed his Ph.D. at Mangalore University in the area of Pattern Recognition. He has around 16 years of experience in academics and industry. His areas of interest are Image Processing, Pattern Recognition and Natural Language Processing. He has to his credit 03 national journal papers, 15 international journal papers and 20 conference papers. He is a member of the Computer Society of India (CSI) and a life member of the Indian Society for Technical Education (ISTE). He has completed a number of research and consultancy projects for DRDO.

Thungamani M received her B.E. from Visvesvaraya Technological University and M.Tech in Computer Science and Engineering in the year 2007. She has around 08 years of experience in academics. She is currently pursuing a Ph.D. and working as a Research Assistant at Tumkur University, Tumkur. She is a life member of the Indian Society for Technical Education (MISTE) and The Institution of Electronics and Telecommunication Engineers (IETE).

Manohar Koli received his B.E. from Visvesvaraya Technological University and M.Tech in Computer Science and Engineering. He has around 08 years of experience in academics. He is currently pursuing a Ph.D. as a Research Scholar at Tumkur University, Tumkur.
COVERAGE ANALYSIS IN VERIFICATION OF TOTAL ZERO
DECODER OF H.264 CAVLD
Akhilesh Kumar and Mahesh Kumar Jha
Department of E&C Engineering, NIT Jamshedpur, Jharkhand, India
ABSTRACT
The H.264 video standard achieves high video quality and high data compression when compared to other existing video standards. H.264 uses context-based adaptive variable length coding (CAVLC) to code residual data in the Baseline profile. The H.264 bitstream consists of zeros and ones. At one of the decoding stages of the context-based adaptive variable length decoder (CAVLD), the Total Zeros decoder is used to calculate total zeros, the number of zeros before the last non-zero coefficient. H.264 specifies different lookup tables to decode total zeros, chosen depending on the number of non-zero coefficients. In this paper, coverage analysis in the verification of the Total Zeros decoder of a CAVLD ASIC using the open verification methodology (OVM) is presented.
KEYWORDS: H.264, CAVLC/CAVLD, OVM
I. INTRODUCTION
Today, verification engineers outnumber design engineers on the most complex designs. Studies have revealed that about 70% of all respins of ICs are due to functional errors, and verification has become the bottleneck in a project's time-to-profit goal [1]. According to the International Technology Roadmap for Semiconductors (ITRS), in many application domains the verification of the design has become the predominant component of a project's development in terms of time, cost, and the human resources dedicated to it [2].
H.264 was jointly developed by the ITU and ISO/IEC. It has better compression efficiency than previous coding standards, and it is also network-friendly, which makes it suitable for many kinds of networks [3]. This paper concerns the verification of the VLSI design of the Total Zero decoder of the H.264 CAVLD decoder. The verification environment is built with OVM by developing verification components using SystemVerilog and the OVM class library, which provides suitable building blocks for designing the test environment. OVM is an open-source verification methodology library intended to run on multiple platforms and to be supported by multiple EDA vendors. OVM is used for functional verification with SystemVerilog, together with a supporting library of SystemVerilog code [4]. Test benches in OVM are composed of reusable verification components that are complete verification environments. The methodology is vendor-independent and can interoperate with several languages and simulators. It is completely open, and includes a strong class library and source code [4].
The work embodied in this paper presents the verification of the RTL Total Zero decoder of CAVLD using OVM. Coverage analysis is a vital part of the verification process; it indicates to what degree the source code of the DUT (Design Under Test) has been exercised. The design and analysis are carried out in QuestaSim-6.6b from Mentor Graphics.
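OVM functional coverage is written as SystemVerilog covergroups; as a language-neutral illustration of the underlying idea, the Python sketch below tracks named value bins and reports the percentage of bins hit. The class and bin names are hypothetical, not part of OVM.

```python
class CoverGroup:
    """Minimal functional-coverage tracker in the spirit of a
    SystemVerilog covergroup: each bin is a named set of values,
    and coverage is the percentage of bins hit at least once."""

    def __init__(self, bins):
        self.bins = {name: set(values) for name, values in bins.items()}
        self.hit = set()

    def sample(self, value):
        # Record which bins the sampled value falls into.
        for name, values in self.bins.items():
            if value in values:
                self.hit.add(name)

    def coverage(self):
        return 100.0 * len(self.hit) / len(self.bins)
```

For instance, binning a 5-bit coefficient-count signal into "one", "mid" and "high" bins and sampling the values 1 and 5 would hit two of the three bins, i.e. about 66.7% coverage, telling the verifier that no stimulus with a high coefficient count has been applied yet.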
II. PROPOSED INTERFACE DIAGRAM OF TOTAL ZERO DECODER
2.1 Interface Diagram
The proposed interface diagram of the Total Zero decoder is shown in Figure 1.
Figure 1. Interface diagram of total zero decoder
Inputs to this process are the bit stream, the total coefficients and the maximum number of coefficients. The process calculates the number of total zeros using the total coefficients, the maximum number of coefficients and the bit stream. Total zeros is the number of zeros before the last quantized coefficient of the block. The process is basically a probability model in which total zeros is derived from the bit stream by VLC models, which are selected using the total coefficients and the maximum number of coefficients as specified in the standard.
The maximum number of coefficients and the total coefficients are used to select the model that derives the coefficient token. After decoding the coefficient token, total zeros is derived from the lookup tables (H.264 standard Table 9-7, Table 9-8 and Table 9-9) [5] provided in the ROM. The output of this process is total zeros.
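The table-driven decode can be illustrated with a toy prefix-free VLC table. The codewords below are hypothetical, not the entries of H.264 Tables 9-7/9-8/9-9; the point is the mechanism: match a table codeword at the head of the incoming bit stream and report both the decoded total zeros value and the number of bits consumed (the role of shift_length_tz_o).

```python
# Hypothetical miniature VLC table mapping codewords to total_zeros.
# The real standard defines one table per TotalCoeff value; these
# entries are illustrative only.
TOY_TABLE = {
    "1": 0,
    "011": 1,
    "010": 2,
    "0011": 3,
    "0010": 4,
}

def decode_total_zeros(bits, table=TOY_TABLE):
    """Consume one prefix-free codeword from the front of a bit string
    and return (total_zeros, bits_consumed)."""
    max_len = max(len(code) for code in table)
    for length in range(1, max_len + 1):
        prefix = bits[:length]
        if prefix in table:
            return table[prefix], length
    raise ValueError("no codeword matched the bit stream")
```

Because the table is prefix-free, scanning prefixes of increasing length is unambiguous; in hardware the same lookup is done by addressing the ROM with the leading bitstream bits.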
2.2 Port Description
The port description of the proposed interface diagram of the Total Zero decoder is given in Table 1.
Table 1. Port description

Signal name        I/O  Width  Description                               Allowable values
-- System I/F --
clk1               I    1      Operative clock (dedicated to CAVLC)      NA
nreset             I    1      Asynchronous reset                        0 - reset; 1 - no reset
sreset             I    1      Synchronous reset                         1 - reset; 0 - no reset
-- Decode sequence control I/F --
dec_brk            I    1      Request IP to stop the decoding process   0 - IP continues decoding; 1 - IP stops decoding
-- Bit stream parser I/F --
bitstream_i        I    9      Input bit stream from Getbits             0 - (2^9 - 1)
-- TCTO I/F --
tcoeff_i           I    5      Tcoeff of 4x4 block                       0 - 16
tcoeff_vld_i       I    1      Valid signal for Tcoeff of 4x4 block      0 - not valid; 1 - valid
-- Level decoder I/F --
start_tz_i         I    1      Start signal from controller              0 - wait; 1 - start total zeros module
-- Slice dec controller I/F --
cavld_ceb_i        I    1      Read clock enable to ROM                  0 - don't enable clock; 1 - enable clock
-- CAVLD controller I/F --
maxcoeff_i         I    5      Maximum coefficients of the block         0 - 16
shift_length_tz_o  O    4      Number of bits to be skipped              0 - 9
shift_en_tz_o      O    1      Valid signal for skip length              0 - disable; 1 - enable
-- Run before decoder I/F --
tz_valid_o         O    1      Valid signal for total zeros              0 - not valid; 1 - valid
total_zeros_o      O    4      Total zeros of 4x4 block                  0 - 15
2.3 Micro Architecture
The micro-architecture of the decoder is shown in Figure 2. The total zero decoder operates as follows:
1. Pipeline Stage 1:
The value of the maximum coefficients of a block is taken as input. From the maximum number of coefficients and the total coefficients, the ROM address at which the total zero value of that particular block is located is calculated. The ROM table is organised as follows:
• For chroma DC values, addresses range from 0x00h to 0x17h
• For chroma 422 where tc = 1, addresses range from 0x18h to 0x20h
• For chroma 422 where tc > 1, addresses range from 0x21h to 0x58h
• For luma values where tc = 1, addresses range from 0x59h to 0x68h
• For luma values where tc > 1, addresses range from 0x69h to 0x427h
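The address-region selection of pipeline stage 1 can be sketched as follows. The region boundaries are taken from the list above; the function name and the tc classification argument are illustrative assumptions, not the actual RTL.

```python
# Sketch of the stage-1 ROM address-region selection, using the layout
# listed above. The per-entry offset arithmetic inside each region is
# omitted; this only picks the base address of the correct region.
ROM_REGIONS = {
    # (block_type, tc_class) -> (base_address, last_address)
    ("chroma_dc", None):  (0x00, 0x17),
    ("chroma422", "eq1"): (0x18, 0x20),
    ("chroma422", "gt1"): (0x21, 0x58),
    ("luma", "eq1"):      (0x59, 0x68),
    ("luma", "gt1"):      (0x69, 0x427),
}

def rom_base(block_type: str, tc: int) -> int:
    """Return the ROM region base address for a block type and
    total-coefficient count tc."""
    if block_type == "chroma_dc":
        return ROM_REGIONS[("chroma_dc", None)][0]
    tc_class = "eq1" if tc == 1 else "gt1"
    return ROM_REGIONS[(block_type, tc_class)][0]

assert rom_base("luma", 1) == 0x59
```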
2. Pipeline Stage 2:
In this stage the total zero value is read from the TZ ROM, registered, and sent as output along with tz_end.
2.4 Timing Diagram
The timing diagram of the Total Zero Decoder is shown in Figure 3.
Figure 2. Total zeros decoder architecture diagram
Figure 3. Timing diagram of the Total Zero Decoder
2.5 Applying OVM to the Total Zero Decoder
A verification plan was developed to verify the Total Zero Decoder in the OVM environment. The proposed decoder is taken as the DUT and interfaced with the OVM environment. The DUT itself is written in Verilog. The open verification environment is created by joining different components written in SystemVerilog: Transaction, Sequence, Sequencer, Driver, Coverage, Assertion, Interface, Monitor, Scoreboard, Agent, Environment and finally the Top module. The clock signal for the DUT is generated in the top module. The top module contains the usual HDL constructs and SystemVerilog interfaces, and in it the DUT is connected to the test environment through the interface. The compilation and verification analysis is carried out in QuestaSim 6.6b from Mentor Graphics.
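As a rough behavioural analogy of this flow (the real environment is written in SystemVerilog on the OVM class library; the Python classes below are only a conceptual stand-in with made-up names and a trivial stand-in DUT), the sequencer-driver-scoreboard loop can be sketched as:

```python
# Conceptual Python model of the component flow described above. This is
# NOT the OVM base-class API: Transaction, dut and run_test are invented
# stand-ins used only to illustrate stimulus -> DUT -> scoreboard checking.
import random

class Transaction:
    def __init__(self, maxcoeff, tcoeff):
        self.maxcoeff = maxcoeff
        self.tcoeff = tcoeff

def reference_model(tx):
    # Golden model the scoreboard compares against. Stand-in behaviour:
    # total zeros cannot exceed maxcoeff - tcoeff.
    return max(tx.maxcoeff - tx.tcoeff, 0)

def dut(tx):
    # Stand-in for the Verilog DUT response (kept identical here).
    return max(tx.maxcoeff - tx.tcoeff, 0)

def run_test(n=100, seed=1):
    """Sequencer generates random transactions, the driver applies them to
    the DUT, and the monitor hands responses to the scoreboard check."""
    rng = random.Random(seed)
    mismatches = 0
    for _ in range(n):
        tx = Transaction(maxcoeff=rng.randint(1, 16), tcoeff=rng.randint(1, 16))
        if dut(tx) != reference_model(tx):  # scoreboard comparison
            mismatches += 1
    return mismatches

assert run_test() == 0
```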
III. SIMULATION RESULTS
To measure the coverage of the decoder, the code was compiled and then simulated to obtain the decoded output. The simulated output is shown in Figure 4 and Figure 5.
Figure 4. Simulation result when maxcoeff_i is 8
Figure 5. Simulation result when maxcoeff_i is 16
IV. COVERAGE ANALYSIS
The coverage summary and coverage report give the details of the functional coverage. When the complete analysis was done for the decoder and the coverage report shown in Figure 6 was generated, it was found that the coverage was less than 100%.
Figure 6. Coverage results
Figure 7. Coverage results
V. CONCLUSION AND FUTURE SCOPE
H.264/AVC is a public and open standard. Every manufacturer can build encoders and decoders in a competitive market. This will bring prices down quickly, making the technology affordable to everybody. There is no dependency on proprietary formats, as on the Internet today, which is of utmost importance for the broadcast community. OVM is clearly simulation-oriented. Test benches in OVM are composed of reusable verification components that form complete verification environments. The methodology is vendor-independent and can interoperate with several languages and simulators. It is completely open, and includes a strong class library and source code. In this work an OVM based Total Zero Decoder VIP (verification intellectual property) has been developed. The decoder was subjected to various analyses and verified for functional coverage using QuestaSim. After compilation and simulation it is observed that the verification environment responds accurately with no errors, and the coverage report of the Total Zero Decoder reaches 100%. This work can be extended to verify various IPs in the OVM environment and to minimise the bugs generated, particularly in corner cases, thus reducing the verification time of a design.
ACKNOWLEDGEMENT
This work was supported by TATA ELXSI, Bangalore.
REFERENCES
[1] J. Bergeron, "What is verification?" in Writing Testbenches: Functional Verification of HDL Models, 2nd ed. New York: Springer Science, 2003, ch. 1, pp. 1-24.
[2] International Technology Roadmap for Semiconductors [Online]. Available: http://www.itrs.net/Links/2006Update
[3] R. Schafer, T. Wiegand and H. Schwarz, "EBU Technical Review of the emerging H.264/AVC standard", Heinrich Hertz Institute, Berlin, Germany, January 2003.
[4] http://www.doulos.com/knowhow/sysverilog/ovm/tutorial_0
[5] ITU-T Rec. H.264, ITU-T Study Group, March 2009. Available: http://www.itu.int/rec/T-REC-H.264-200903-S/en
[6] http://www.testbench.co.in
[7] Chris Spear, SystemVerilog for Verification, New York: Springer, 2006.
[8] OVM User Guide, Vers. 2.1, OVM World, December 2009. Available: www.ovmworld.org
[9] Iain E. Richardson, The H.264 Advanced Video Compression Standard, 2nd ed. UK: Wiley, 2010, pp. 81-85.
[10] "VLSI Design of H.264 CAVLC Decoder", China-Papers, February 16, 2010 [Online]. Available: http://mt.china-papers.com/4/?p=25415
[11] "The Algorithm Study on CAVLC Based on H.264/AVC and Its VLSI Implementation", China-Papers, May 31, 2010 [Online]. Available: http://mt.china-papers.com/4/?p=75976
[12] "Design of CAVLC Codec for H.264", China-Papers, March 24, 2010 [Online]. Available: http://mt.china-papers.com/4/?p=76424
[13] Wu Di, Gao Wen, Hu Mingzeng and Ji Zhenzhou, "A VLSI architecture design of CAVLC decoder", ASIC, 2003.
[14] Tien-Ying Kuo and Chen-Hung Chan, "Fast Macroblock Partition Prediction for H.264/AVC", in IEEE International Conference on Multimedia and Expo (ICME 2004), pp. 675-678, 2004.
[15] Y. L. Lee, K. H. Han, and G. J. Sullivan, "Improved lossless intra coding for H.264/MPEG-4 AVC", IEEE Trans. Image Processing, vol. 15, no. 9, pp. 2610-2615, Sept. 2006.
[16] http://www.ovmworld.org/white_papers.php
[17] OVM Golden Reference Guide, Vers. 2.0, Doulos, September 2008. Available: www.doulos.com
[18] Mythri Alle, J. Biswas and S. K. Nandy, "High performance VLSI architecture design for H.264 CAVLC Decoder", in Proceedings of Application-specific Systems, Architectures and Processors, 2006.
[19] "An Introduction to SystemVerilog", Asic [Online]. Available: http://www.asic.co.in/Index_files/tutorials/SystemVerilog_veriflcation.ppt
[20] N. Keshaveni, S. Ramachandran and K. S. Gurumurthy, "Implementation of Context Adaptive Variable Length Coder for H.264 Video Encoder", International Journal of Recent Trends in Engineering, Vol. 2, No. 5, pp. 341-345, November 2009.
[21] Mihaela E. Radhu and Shannon M. Sexton, "Integrating Extensive Functional Verification into Digital Design Education," IEEE Trans. Educ., vol. 51, no. 3, pp. 385-393, Aug. 2008.
[22] Donghoon Yeo and Hyunchul Shin, "High Throughput Parallel Decoding Method for H.264/AVC CAVLC", ETRI Journal, Vol. 31, no. 5, pp. 510-517, October 2009.
Authors
Akhilesh Kumar received the B.Tech degree from Bhagalpur University, Bihar, India in 1986 and the M.Tech degree from Ranchi University, Bihar, India in 1993. He has been working in the teaching and research profession since 1989. He is now working as H.O.D. of the Department of Electronics and Communication Engineering at N.I.T. Jamshedpur, Jharkhand, India. His fields of research interest are analog circuits and VLSI design.
Mahesh Kumar Jha received the B.Tech. degree from Biju Patnaik University of Technology, Orissa, India in 2007. He is now pursuing the M.Tech in the Department of Electronics and Communication Engineering at N.I.T. Jamshedpur, Jharkhand, India. His field of research interest is VLSI design.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
204 Vol. 1, Issue 5, pp. 204-217
DESIGN AND CONTROL OF VOLTAGE REGULATORS FOR
WIND DRIVEN SELF EXCITED INDUCTION GENERATOR
Swati Devabhaktuni1 and S. V. Jayaram Kumar2
1Assoc. Prof., Gokarajurangaraju Institute of Engg. and Tech., Hyderabad, India
2Professor, J.N.T. University, Hyderabad, India
ABSTRACT
This paper deals with the performance analysis of a static compensator (STATCOM) based voltage regulator for self excited induction generators (SEIGs) supplying balanced/unbalanced and linear/non-linear loads. A three-phase insulated gate bipolar transistor (IGBT) based current controlled voltage source inverter (CC-VSI), known as a STATCOM, is used for harmonic elimination. It also provides the reactive power the SEIG needs to maintain a constant terminal voltage under varying loads. A set of voltage regulators is designed and their performance is simulated using SIMULINK to demonstrate their capabilities as a voltage regulator, a harmonic eliminator, a load balancer and a neutral current compensator. The paper also discusses the merits and demerits of each topology, so that a suitable voltage regulator topology can be selected for a given self excited induction generator. The simulated results show that by using a STATCOM based voltage regulator the SEIG terminal voltage can be maintained constant and free from harmonics under linear/non-linear and balanced/unbalanced loads.
KEYWORDS: Self-excited induction generator, static compensator, voltage regulation, load balancing.
I. INTRODUCTION
The rapid depletion and the increased cost of conventional fuels have given a thrust to research on the self excited induction generator as an alternative power source driven by various prime movers based on nonconventional energy sources [5]. These energy conversion systems are based on constant speed prime movers, constant power prime movers and variable power prime movers [6][15]. In generating systems based on constant speed prime movers (biogas, biomass, biodiesel, etc.), the speed of the turbine is almost constant, therefore the frequency of the generated voltage remains constant. An externally driven induction machine operates as a self-excited induction generator (SEIG), with its excitation requirements being met by a capacitor bank connected across its terminals. The SEIG has advantages [1][12][16][25] such as simplicity, freedom from maintenance, absence of a DC excitation source and brushless construction, as compared to a conventional synchronous generator [8][11][13]. A major disadvantage of an SEIG is its poor voltage regulation [14][24][18]: it requires a variable capacitance bank to maintain constant terminal voltage under varying loads.
Attempts have been made to maintain constant terminal voltage using fixed capacitors and thyristor controlled reactors (TCR), saturable-core reactors and short-shunt connections [6][9][19][21]. The voltage regulation provided by these schemes is discrete, and they inject harmonics into the generating system. However, with the advent of solid state commutating devices, it is possible to build a static, noiseless voltage regulator able to supply continuously variable reactive power to keep the terminal voltage of an SEIG constant under varying loads. This system, called a STATCOM, has specific benefits compared to conventional SVCs [2][23][17].
The basic topology of a STATCOM consists of a three-phase current controlled voltage source converter (VSC) and an electrolytic capacitor at its DC bus. The DC bus capacitor self-supports the DC bus of the STATCOM, drawing only a very small active power from the SEIG to cover its internal losses, while the converter provides reactive power as required [3][10]. The STATCOM is thus a source of leading or lagging current and can be designed to maintain constant voltage across the SEIG terminals under varying loads. In this paper various STATCOM based VR topologies are presented, based on a two leg VSC and a three leg VSC for a three phase three wire SEIG system [4][7][20].
An SEIG is an isolated system, small in size, in which injected harmonics pollute the generated voltage. The STATCOM eliminates the harmonics, provides load balancing and supplies the required reactive power to the load and the generator. In this paper, the authors present a simple mathematical model for the transient analysis of the SEIG-STATCOM system under balanced/unbalanced conditions. Simulated results show that the SEIG-STATCOM system behaves as an ideal generating system under these conditions.
The rest of this paper is organised as follows. Section 2 discusses the various STATCOM controllers used in this paper, with diagrams. Section 3 presents the design of the STATCOM techniques included in this paper along with their control strategies. Section 4 discusses the results obtained from the MATLAB/SIMULINK models for the various STATCOM techniques applied to a self excited induction generator connected to a grid. Section 5 gives the conclusions of this paper.
The system we tested has the following components:
• a wind turbine
• a three-phase, 3-hp, slip ring induction generator driven by the wind turbine
• various sets of capacitors at the stator terminals to provide reactive power to the induction generator
• various three-phase STATCOM devices
• a three phase balanced/unbalanced grid
II. SYSTEM STATCOM CONTROLLERS
The VRs are classified as three phase three wire VRs and three phase four wire VRs. These VRs are based on the two leg VSC, three leg VSC, four leg VSC, three single phase VSCs, three leg VSC with midpoint capacitor and transformer based VRs. In the following sections, a detailed system description is presented for the different STATCOM based voltage regulators.
2.1. Three Phase 3-wire Voltage Regulators
Mainly two VR topologies are discussed for the three phase 3-wire self excited induction generator (SEIG). The first is based on a three leg voltage source converter (VSC) and the second on a two leg VSC with midpoint capacitors.
2.1.1. Two Leg Voltage Source Converter (VSC) Based Voltage Regulator
Figure 1 shows an isolated generating system consisting of a constant speed wind turbine and a self excited induction generator along with a two leg VSC based VR. The two legs of the VSC are connected to the phases of the generator through interfacing inductors, while the third phase of the generating system is connected to the midpoint of the capacitors. The midpoint capacitors require equal voltage distribution across both capacitors, and the voltage rating at the DC link of the VSC is comparatively higher than in the 3-leg VSC based topology. However, the switch count is reduced in this topology of VR.
Fig. 1: Two leg VSC based VR for SEIG system feeding three phase three wire loads.
2.1.2 Three Leg Voltage Source Converter (VSC) Based Voltage Regulator
Figure 2 shows an isolated generating system based on an asynchronous generator along with a three leg VSC based STATCOM voltage regulator. The VR consists of a three-leg IGBT (Insulated Gate Bipolar Transistor) based current controlled voltage source converter, a DC bus capacitor and AC inductors. The output of the VSC is connected through the AC filtering inductors to the SEIG terminals. The DC bus capacitor is used as an energy storage device and provides the self-supporting DC bus of the VR. This DC side capacitor supplies the real power difference between the load and the SEIG during transient periods. In the steady state the real power supplied by the SEIG should equal the real power demand of the load plus a small power to compensate for the losses of the VR. Thus the DC capacitor voltage is maintained at a reference value for satisfactory operation.
Fig. 2: Three leg VSC based VR for SEIG system feeding three phase three wire loads.
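The regulation of the DC capacitor voltage to its reference can be illustrated with a minimal discrete PI loop. This is a toy numerical sketch: the gains, sample time and first-order plant below are hypothetical and stand in for the actual converter dynamics.

```python
# Minimal discrete PI loop of the kind used to hold the DC bus at its
# reference. All numbers (gains, sample time, toy plant) are hypothetical.
def make_pi(kp, ki, ts):
    state = {"integral": 0.0}
    def step(error):
        state["integral"] += error * ts
        return kp * error + ki * state["integral"]
    return step

vdc, vdc_ref = 600.0, 700.0            # sensed and reference DC bus voltage (V)
pi = make_pi(kp=0.5, ki=20.0, ts=1e-3)
for _ in range(5000):                   # 5 s of 1 ms steps
    idm = pi(vdc_ref - vdc)             # active-current command I_dm
    # Toy first-order stand-in for the bus: command charges the capacitor,
    # losses discharge it.
    vdc += 1e-3 * (idm * 50.0 - (vdc - 600.0) * 0.1)
# vdc settles at the 700 V reference
```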
III. MODELING OF SEIG-STATCOM SYSTEM
The mathematical model of the SEIG-STATCOM system comprises the modelling of the SEIG and of the STATCOM, as follows.
3.1. Modeling of the Control Scheme of the STATCOM
The different components of the SEIG-STATCOM system are modelled as follows.
3.1.1. Control Scheme of the Two Leg Voltage Source Converter (VSC) Based Voltage Regulator
The block diagram of the control scheme for the two leg VSC based voltage regulator for a SEIG system is shown in Fig. 3.
The control strategy of the two-leg VSC based VR is realised similarly to that of the three-leg VSC, through the derivation of reference source currents (i*sa, i*sb); the main difference between the two topologies lies in the derivation of the active component of current, as shown in Figure 3. The reference source currents consist of two components: an in-phase or active power component (i*da, i*db) for the self supporting DC bus of the VSC, and a quadrature or reactive power component (i*qa, i*qb) for regulating the terminal voltage. The amplitude of the active power component of the source current (Idm) is estimated using two PI controllers, of which one controls the voltage of the DC bus of the VSC while the other maintains equal voltage distribution across the midpoint DC bus capacitors. The output of the first PI controller is obtained by comparing the reference DC bus voltage (Vdcref) with the sensed DC bus voltage (Vdc). The output of the second PI controller is obtained by comparing the voltages across the two capacitors (Vdc1) and (Vdc2); this voltage error signal is processed by the second PI controller. The sum of the outputs of both PI controllers, (Idm1) and (Idm2), gives the active power current component (Idm) of the reference source current. The multiplication of Idm with the in-phase unit amplitude templates (uad, ubd) yields the in-phase component of the instantaneous reference source currents. These templates are sinusoidal functions derived from unit templates in phase with the line voltages (uab, ubc, uca), which are obtained by dividing the AC voltages Vab, Vbc, Vca by their amplitude Vt:
Vt = [(2/3)(Vab^2 + Vbc^2 + Vca^2)]^(1/2) (1)
To generate the quadrature component of the reference source currents, another set of sinusoidal quadrature unit amplitude templates (uaq, ubq, ucq) is obtained from the in-phase unit templates. The multiplication of these templates (uaq, ubq) with the output of the PI (Proportional Integral) AC voltage controller (Iqm) gives the quadrature, or reactive power, component of the reference source currents. The sum of the instantaneous quadrature and in-phase components gives the reference source currents (i*sa, i*sb), and each phase source current is compared with the corresponding reference source current to generate the PWM switching signals for the VSC of the controller.
Fig. 3. Block diagram of the control scheme for the two leg VSC based voltage regulator for a SEIG system
3.1.1.1. Design of the Two Leg Voltage Source Converter (VSC) Based Voltage Regulator
This section presents the detailed design of the two-leg VSC based VR for a SEIG driven by a constant speed wind turbine. The two leg VSC and its voltage waveforms are shown in Figure 4.
Fig. 4. Two leg VSC
The design procedure focuses on determining the values of the interfacing inductors, the DC link capacitors and the voltage across the DC link capacitors, along with the ratings of the devices. The design of the inductor and capacitor depends upon the allowable voltage and current ripples.
3.1.1.2. Design of the Interfacing Inductor
In PWM switching of the converter, VcontrolA, VcontrolB and VcontrolC can be assumed constant during one switching period. At the zero crossing of VcontrolA, therefore,
VcontrolA = 0
VcontrolB = ma Vtri sin(120°) = (√3/2) ma Vtri (2)
VcontrolC = ma Vtri sin(240°) = −(√3/2) ma Vtri
where ma is the modulation index. The converter AC terminal voltage vector is defined from the line to neutral voltages Van, Vbn and Vcn, which can be calculated as follows:
Van = VaN − VNn = VaN − (1/3)(VaN + VbN + VcN)
Vbn = VbN − VNn = VbN − (1/3)(VaN + VbN + VcN) (3)
Vcn = VcN − VNn = VcN − (1/3)(VaN + VbN + VcN)
where VaN, VbN and VcN are the converter pole voltages with respect to the midpoint of the DC capacitor and VNn is the voltage between the neutral point (n) and the midpoint of the DC capacitor (N).
The peak to peak inductor current ripple is
iL,ripple(pp) = (1/L) ∫ vL dt (4)
and the interfacing inductors Lan, Lbn and Lcn are calculated from it as
Lan = Lbn = (Vdc/2)(1 − √3/2) / (4 fs iL,ripple(pp)) (5)
where fs is the switching frequency. Substituting the values of all parameters, the values of the inductors are obtained as given in the table shown in Fig. 5.
Parameter                             Calculated   Selected
Lan                                   8.8 mH       8 mH
Lbn                                   8.8 mH       8 mH
Lcn                                   5.2 mH       5 mH
Vdc1, Vdc2                            677 V        700 V
Cdc1, Cdc2                            1655 µF      4000 µF
Vsw = Vdc + Vd                        1833 V       3300 V
Isw = 1.25(iripple(pp) + Is(peak))    30 A         60 A
Fig. 5. Calculation and selection of various components of the two leg VSC based VR
3.1.1.3. Design of the Midpoint DC Link Capacitor and its Voltage
The voltage across each capacitor must be more than the peak voltage for satisfactory PWM control:
Vdc1 = Vdc2 = 2√2 V / (√3 ma) (6)
where ma is the modulation index, normally with a maximum value of 1. The current which flows through the phase connected to the midpoint is the current which flows through the capacitors, therefore the ripple in the capacitor voltage can be estimated as:
Vdc1,ripple = (1/Cdc1) ∫ ic dt = Iavg / (ω Cdc1) (7)
Iavg ≈ 0.9 Is (8)
where Iavg is the average current which flows through the DC bus capacitor (Cdc1) and Is is the required rms compensator current. The voltage rating (Vsw) of the device under dynamic conditions can be calculated as:
Vsw = Vdc + Vd (9)
where Vd is a 10% overshoot in the DC link voltage under dynamic conditions.
The rated current which flows through the two leg VSC is Is. Considering the peak value of the current through the VR and a safety factor of 1.25, the maximum device current can be calculated as:
Isw = 1.25(iripple(pp) + Is(peak)) (10)
From these, the voltage (Vsw) and current (Isw) ratings of the IGBT switches can be estimated. A design example for a two-leg VSC based VR is carried out for feeding a 0.8 p.f. lagging reactive load. The SEIG requires reactive power of 140-160% of its rated generated power; therefore additional reactive power is required from no load to full load at 0.8 lagging p.f., calculated as:
Additional VAR (QAR) = √3 V Is (11)
where V is the SEIG line voltage and Is is the VR line current.
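As a worked instance of equation (11), using the 440 V machine rating quoted in the results section and an assumed compensator line current:

```python
# Worked example of equation (11), Q = sqrt(3) * V * Is. The line voltage
# is the machine rating quoted later in the paper; the compensator current
# Is is an assumed figure for illustration only.
import math

V_line = 440.0   # SEIG line voltage (V)
Is = 7.5         # assumed VR line current (A)

Q_var = math.sqrt(3) * V_line * Is
# ~5.7 kVAR of additional reactive power capability
```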
3.1.2. Control Scheme of the Three Leg Voltage Source Converter (VSC) Based Voltage Regulator
The block diagram of the control scheme for the three leg VSC based voltage regulator for a SEIG system is shown in Fig. 6.
Fig. 6. Block diagram of the control scheme for the three leg VSC based voltage regulator for a SEIG system
3.1.2.1. Modelling of the Control Scheme of the STATCOM
The different components of the SEIG-STATCOM system shown in Fig. 6 are modelled as follows.
From the three-phase voltages at the SEIG terminals (Va, Vb and Vc), their amplitude (Vt) is computed as:
Vt = [(2/3)(Va^2 + Vb^2 + Vc^2)]^(1/2) (12)
The unit vectors in phase with Va, Vb and Vc are derived as:
ua = Va/Vt ; ub = Vb/Vt ; uc = Vc/Vt (13)
The unit vectors in quadrature with Va, Vb and Vc may be derived using a quadrature transformation of the in-phase unit vectors ua, ub and uc as:
wa = (uc − ub)/√3
wb = (√3 ua)/2 + (ub − uc)/(2√3) (14)
wc = −(√3 ua)/2 + (ub − uc)/(2√3)
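The in-phase and quadrature templates can be checked numerically on a balanced three-phase set. The sketch below uses the standard form of the quadrature transformation, which is an assumption where the printed equations are illegible; each quadrature template then comes out as the corresponding in-phase template shifted by 90 degrees.

```python
# Numeric check of the in-phase and quadrature unit templates on a
# balanced three-phase set (formula shapes assumed, as noted above).
import math

def templates(va, vb, vc):
    vt = math.sqrt(2.0 / 3.0 * (va**2 + vb**2 + vc**2))  # amplitude Vt
    ua, ub, uc = va / vt, vb / vt, vc / vt               # in-phase unit vectors
    wa = (uc - ub) / math.sqrt(3)                        # quadrature unit vectors
    wb = math.sqrt(3) * ua / 2 + (ub - uc) / (2 * math.sqrt(3))
    wc = -math.sqrt(3) * ua / 2 + (ub - uc) / (2 * math.sqrt(3))
    return (ua, ub, uc), (wa, wb, wc)

wt, Vm = 0.7, 359.0                      # arbitrary instant and peak voltage
va = Vm * math.sin(wt)
vb = Vm * math.sin(wt - 2 * math.pi / 3)
vc = Vm * math.sin(wt + 2 * math.pi / 3)
(ua, ub, uc), (wa, wb, wc) = templates(va, vb, vc)
# For this balanced set ua = sin(wt) and wa = cos(wt): a 90-degree shift.
```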
3.1.2.2. Quadrature Component of Reference Source Currents
The AC voltage error Ver(n) at the nth sampling instant is:
Ver(n) = Vtref(n) − Vt(n) (15)
where Vtref(n) is the amplitude of the reference AC terminal voltage and Vt(n) is the amplitude of the sensed three-phase AC voltage at the SEIG terminals at the nth instant. The output of the PI controller (I*smq(n)) is used for maintaining a constant AC terminal voltage at the nth sampling instant.
The quadrature components of the reference source currents are computed as:
i*saq = I*smq wa ; i*sbq = I*smq wb ; i*scq = I*smq wc (16)
3.1.2.3. In-phase Component of Reference Source Currents
The error in the DC bus voltage of the STATCOM (Vdcer(n)) at the nth sampling instant is:
Vdcer(n) = Vdcref(n) − Vdc(n) (17)
where Vdcref(n) is the reference DC voltage and Vdc(n) is the sensed DC link voltage of the STATCOM. The output of the PI controller (I*smd) is used for maintaining the DC bus voltage of the STATCOM at the nth sampling instant.
The in-phase components of the reference source currents are computed as:
i*sad = I*smd ua
i*sbd = I*smd ub (18)
i*scd = I*smd uc
3.1.2.4. Total Reference Source Currents
The total reference source currents are the sum of the in-phase and quadrature components of the reference source currents:
i*sa = i*saq + i*sad
i*sb = i*sbq + i*sbd (19)
i*sc = i*scq + i*scd
3.1.2.5. PWM Current Controller
The total reference currents (i*sa, i*sb and i*sc) are compared with the sensed source currents (isa, isb and isc). The ON/OFF switching patterns of the gate drive signals to the IGBTs are generated by the PWM current controller. The current errors are computed as:
isaerr = i*sa − isa
isberr = i*sb − isb (20)
iscerr = i*sc − isc
These current error signals are amplified and then compared with the triangular carrier wave. If the amplified current error signal is greater than the triangular wave signal, switch S4 (lower device) is ON, switch S1 (upper device) is OFF, and the value of the switching function SA is set to 0. If the amplified current error signal corresponding to isaerr is less than the triangular wave signal, switch S1 is ON, switch S4 is OFF, and the value of SA is set to 1. Similar logic applies to the other phases.
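The comparison rule above can be captured in a few lines. The error gain and the carrier samples are illustrative; the real controller compares against a continuously running triangular carrier.

```python
# Sketch of the carrier-comparison rule of section 3.1.2.5: the amplified
# current error is compared with a triangular carrier sample to set the
# switching function SA. Gain and carrier values are illustrative.
def switching_function(i_ref, i_sensed, carrier, gain=10.0):
    """Amplified error above the carrier turns the lower device on
    (SA = 0); below it, the upper device is on (SA = 1)."""
    err = gain * (i_ref - i_sensed)
    return 0 if err > carrier else 1

assert switching_function(i_ref=5.0, i_sensed=4.0, carrier=2.0) == 0   # error 10 > 2
assert switching_function(i_ref=5.0, i_sensed=4.95, carrier=2.0) == 1  # error 0.5 < 2
```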
3.1.2.6. Modeling of the STATCOM
The STATCOM is a current controlled VSI and is modeled as follows. The derivative of its DC bus voltage is defined as:
pVdc = (ica SA + icb SB + icc SC) / Cdc (21)
where SA, SB and SC are the switching functions for the ON/OFF positions of the VSI bridge switches S1-S6. The DC bus voltage reflects the output of the inverter in the form of the three-phase PWM AC line voltages eab, ebc and eca. These voltages may be expressed as:
eab = Vdc(SA − SB)
ebc = Vdc(SB − SC) (22)
eca = Vdc(SC − SA)
The volt-ampere equations for the output of the voltage source inverter (STATCOM) are:
Va = Rf ica + Lf pica + eab − Rf icb − Lf picb
Vb = Rf icb + Lf picb + ebc − Rf icc − Lf picc (23)
ica + icb + icc = 0 (24)
The value of icc from eqn. (24) is substituted into eqn. (23), which results in:
Vb = Rf icb + Lf picb + ebc + Rf ica + Lf pica + Rf icb + Lf picb (25)
Rearranging the equations results in:
Lf pica − Lf picb = Va − eab − Rf ica + Rf icb (26)
Lf pica + 2 Lf picb = Vb − ebc − Rf ica − 2 Rf icb (27)
Hence, the STATCOM current derivatives are obtained by solving eqns. (26) and (27) as:
pica = (2Va + Vb − 2eab − ebc − 3Rf ica) / (3Lf) (28)
picb = (Vb − Va + eab − ebc − 3Rf icb) / (3Lf) (29)
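The closed-form derivatives can be cross-checked numerically against equations (26) and (27); the closed forms used below are obtained by eliminating one derivative from that pair, and the numerical values are arbitrary test inputs.

```python
# Numeric cross-check: solve equations (26)-(27) for the STATCOM current
# derivatives and confirm the closed forms satisfy the original pair
# (arbitrary test values; any consistent units).
Lf, Rf = 5e-3, 0.1
Va, Vb = 310.0, -155.0
eab, ebc = 290.0, -140.0
ica, icb = 4.0, -2.5

A = Va - eab - Rf * ica + Rf * icb       # RHS of eq. (26) = Lf(p ica - p icb)
B = Vb - ebc - Rf * ica - 2 * Rf * icb   # RHS of eq. (27) = Lf(p ica + 2 p icb)

# Closed forms obtained by eliminating one derivative from the pair:
p_ica = (2 * A + B) / (3 * Lf)
p_icb = (B - A) / (3 * Lf)

# They must satisfy the original equations:
assert abs(Lf * (p_ica - p_icb) - A) < 1e-9
assert abs(Lf * (p_ica + 2 * p_icb) - B) < 1e-9
```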
3.1.2.7. AC Line Voltage at the Point of Common Coupling
The direct and quadrature axis currents of the SEIG (ids and iqs) are converted into three phases (a, b and c). The derivative of the AC terminal voltage of the SEIG is defined as:
pVa = [2(ia − ira − ica) + (ib − irb − icb)] / (3C)
pVb = [2(ib − irb − icb) + (ic − irc − icc)] / (3C) (30)
Va + Vb + Vc = 0 (31)
where ia, ib and ic are the SEIG stator line currents, ira, irb and irc are the three phase load line currents, and ica, icb and icc are the STATCOM currents. C is the per phase excitation capacitance connected across the SEIG terminals.
IV. RESULTS AND DISCUSSIONS
The SEIG-STATCOM system feeding linear/non-linear and balanced/unbalanced loads is simulated and the results are shown in Figs. 7-8. For this study, a 3.5 kW, 440 V, 7.5 A, 4-pole machine was used as the generator; the parameters of the generator are given in the Appendix.
4.1. Performance of the Two Leg Voltage Regulator for a SEIG System
Here the performance of the two leg voltage source converter with midpoint capacitor based VR topology has been simulated using MATLAB/SIMULINK and verified for a self excited induction generator driven by a wind turbine.
Fig. 7. Performance of the two leg VSC based VR for a SEIG system feeding a 3-phase balanced/unbalanced grid (panels: generated voltage, line current, rotor speed, electromagnetic torque, DC voltage, terminal voltage, load voltage and load current)
Figure 7 shows the transient waveforms of the three-phase generator voltages (vabc), generator currents (iabc), speed, electromagnetic torque, DC voltage, terminal voltage, load voltage and load current, demonstrating the response while regulating the SEIG terminal voltage and supplying a grid.
At 0.3 seconds, a three-phase nonlinear load is applied, and it is found that with the application of the sudden load the generator currents, load currents and STATCOM currents increase while the supply voltage decreases, due to the supply of active and reactive power to the load. The voltage is 75 V and the current is 10 A.
In addition, a short circuit occurs at 0.4 seconds, with which the generator currents, load currents and STATCOM currents increase further and the supply voltage decreases further, again due to the supply of active and reactive power to the load. The voltage is 20 V and the current is 15 A.
At 0.5 seconds the STATCOM is connected to the system, and the voltage recovers to the required value. It is observed that the generator voltage remains constant under balanced and even unbalanced lagging p.f. loads. Variations in generator speed are observed with the change in load due to the drooping characteristic of the wind turbine.
The STATCOM is disconnected from the system at 0.55 seconds, after the voltage reaches the required value. At 0.6 seconds the short circuit is removed and at 0.7 seconds the load is removed. The machine then operates under steady state conditions.
The total harmonic distortion (THD) of the generator voltage and current for the three-phase balanced case is observed to be less than 5%.
With the application of three phase nonlinear loads and short circuits it is found that the voltage regulator responds in a desirable manner and maintains a constant voltage at the generator terminals. Along with this, the DC link voltage and the voltages across both midpoint capacitors of the voltage regulator remain equal and constant.
The STATCOM eliminates harmonics, so the generator voltages and currents are free from harmonics, as can be observed.
4.2. Performance of three Leg voltage Regulator for a SEIG System
Here performance of three leg voltage source converter with a capacitor based VR topology has been
simulated and verified for self excited induction generator driven by wind turbine.
Figure 8 shows the transient waveforms of three-phase generator voltages (vabc), generator currents
(iabc), speed, electromagnetic torque, D.C voltage, terminal voltage, load voltage and load current
respectively demonstrating the response regulating the SEIG terminal voltage while supplying a grid.
At 0.3 s a three-phase nonlinear load is applied; with this sudden load the generator currents, load currents and STATCOM currents increase and the supply voltage decreases because active and reactive power are supplied to the load. The voltage is 75 V and the current is 10 A.
Along with this, a short circuit occurs at 0.4 s; the generator currents, load currents and STATCOM currents increase further and the supply voltage decreases. The voltage falls to 20 V and the current rises to 15 A.
At 0.5 s the STATCOM is connected to the system and the voltage recovers to the required value. It is observed that the generator voltage remains constant under balanced and even unbalanced lagging power-factor loads. Variations in generator speed are observed with the change in load, owing to the drooping characteristic of the wind turbine.
The DC voltage obtained here has fewer ripples than with the two-leg voltage regulator for a SEIG system. These transients also produce variations in the speed and the torque.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
[Figure 8 panels: generated voltage, line current, rotor speed, electromagnetic torque, DC voltage, terminal voltage, load voltage and load current]
Fig. 8. Performance of the three-leg VSC based VR for a SEIG system feeding a 3-phase balanced/unbalanced grid
V. CONCLUSIONS
A set of VRs has been designed and their performance studied for the SEIG system. For the three-phase three-wire SEIG system, two topologies of VR have been demonstrated: one based on the three-leg VSC and the other on the two-leg VSC. The topology based on the two-leg VSC requires a higher voltage rating of the switches and an equally distributed DC-link voltage, but fewer switching devices than the three-leg VSC based topology. Beyond the three-phase three-wire SEIG system, a number of VR configurations exist for a three-phase four-wire SEIG system. It is observed that the developed dynamic model of the three-phase SEIG–STATCOM is capable of simulating its performance while feeding linear/non-linear,
215 Vol. 1, Issue 5, pp. 204-217
balanced/unbalanced loads under transient conditions. From these results, it is found that the SEIG terminal voltage remains constant and sinusoidal under a three-phase or a single-phase rectifier load. When a single-phase rectifier load is connected, the STATCOM balances the unbalanced load currents so that the generator currents and voltages remain sinusoidal, balanced and constant; thus, the STATCOM acts as a load balancer. A rectifier-based non-linear load generates harmonics, which are also eliminated by the STATCOM. Therefore, it is concluded that the STATCOM acts as a voltage regulator, a load balancer and a harmonic eliminator. Although different aspects of uncontrolled rectifiers have been modelled as non-linear loads here, the developed model can easily be modified to simulate a controlled rectifier as a nonlinear load. Future work may develop further STATCOM techniques by considering the neutral line, extending to three-leg and four-leg four-wire systems.
APPENDIX
1. STATCOM Control Parameters
Lf = 1.2 mH, Rf = 0.045 Ω and Cdc = 4000 µF.
AC voltage PI controller: Kpa = 0.05, Kia = 0.04.
DC bus voltage PI controller: Kpd = 0.7, Kid = 0.1.
Carrier frequency = 20 kHz.
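The two PI loops above can be sketched in a few lines of code. This is purely illustrative: the gains are the appendix values, but the discrete form, sample time, per-unit errors and the mapping of loop outputs to current references are my assumptions, not the paper's implementation.

```python
class PI:
    """Discrete PI controller: u[k] = Kp*e[k] + Ki*sum(e)*Ts."""

    def __init__(self, kp, ki, ts):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.acc = 0.0  # integral accumulator

    def step(self, error):
        self.acc += error * self.ts
        return self.kp * error + self.ki * self.acc

ts = 1.0 / 20e3                          # one update per 20 kHz carrier period (assumed)
ac_loop = PI(kp=0.05, ki=0.04, ts=ts)    # AC terminal-voltage loop (Kpa, Kia)
dc_loop = PI(kp=0.7, ki=0.1, ts=ts)      # DC bus-voltage loop (Kpd, Kid)

# one control step with hypothetical per-unit voltage errors:
iq_ref = ac_loop.step(1.0 - 0.97)        # reactive current demand
id_ref = dc_loop.step(1.0 - 0.99)        # active current demand for the DC bus
```

In a STATCOM of this kind the AC loop typically sets the reactive current reference and the DC loop the active current reference; the details of the inner current control are outside this sketch.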
2. Parameters of Rectifier Load
Three-phase rectifier: LSL = 0.1 mH, RSL = 1 Ω, RRL = 22 Ω and CRL = 470 µF.
Single-phase rectifier: LSL = 0.1 mH, RSL = 1 Ω, RRL = 75 Ω and CRL = 150 µF.
3. Machine Parameters
The parameters of the 3.5 kW, 440 V, 7.5 A, 50 Hz, 4-pole induction machine are given below.
Rs = 0.69 Ω, Rr = 0.74 Ω, Lls = Llr = 1.1 mH, J = 0.23 kg·m²,
Lss = Lls + Lm and Lrr = Llr + Lm.
4. Terminal Capacitor
C = 15 µF/phase
5. Air gap voltage:
The piecewise linearization of the magnetization characteristic of the machine is given by:
E1 = 0                    for Xm ≥ 260
E1 = 1632.58 − 6.2 Xm     for 233.2 ≤ Xm ≤ 260
E1 = 1314.98 − 4.8 Xm     for 214.6 ≤ Xm ≤ 233.2
E1 = 1183.11 − 4.22 Xm    for 206 ≤ Xm ≤ 214.6
E1 = 1120.4 − 3.92 Xm     for 203.5 ≤ Xm ≤ 206
E1 = 557.65 − 1.144 Xm    for 197.3 ≤ Xm ≤ 203.5
E1 = 320.56 − 0.578 Xm    for Xm ≤ 197.3
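The piecewise characteristic above translates directly into a lookup function; a small sketch (the fourth-segment slope is assumed to be 3.92, which keeps the adjacent segments roughly consistent at their breakpoints):

```python
def air_gap_voltage(xm):
    """Air-gap EMF E1 as a piecewise-linear function of the
    magnetizing reactance Xm (appendix, item 5)."""
    segments = [              # (upper Xm bound, intercept, slope)
        (197.3, 320.56, 0.578),
        (203.5, 557.65, 1.144),
        (206.0, 1120.4, 3.92),
        (214.6, 1183.11, 4.22),
        (233.2, 1314.98, 4.8),
        (260.0, 1632.58, 6.2),
    ]
    for upper, a, b in segments:
        if xm <= upper:
            return a - b * xm
    return 0.0                # machine fails to excite for Xm >= 260

print(round(air_gap_voltage(100), 2))   # deep in saturation
print(air_gap_voltage(300))             # above the excitation limit -> 0.0
```

Such a lookup is what a simulation would evaluate at each step to update the air-gap voltage from the instantaneous magnetizing reactance.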
Authors
Swati Devabhaktuni received the B.Tech. degree in electrical and electronics engineering from V. R. Siddhartha Engineering College, Andhra University, India, in 2001, and the M.Tech. degree in control systems from J.N.T.U. University in 2004. Currently, she is an Associate Professor at Gokaraju Rangaraju Institute of Engineering and Technology, Hyderabad, and a Research Scholar at J.N.T.U. University, Hyderabad. Her research interests are power electronics, AC motor drives, and control systems.
S. V. Jayaram Kumar received the M.E. degree in electrical engineering from Andhra University, Vishakapatnam, India, in 1979, and the Ph.D. degree in electrical engineering from the Indian Institute of Technology, Kanpur, in 2000. Currently, he is a Professor at Jawaharlal Nehru Technological University, Hyderabad. His research interests include FACTS, power system dynamics, and AC drives.
218 Vol. 1, Issue 5, pp. 218-226
LITERATURE REVIEW OF FIBER REINFORCED POLYMER
COMPOSITES
Shivakumar S1, G. S. Guggari2
1,2Faculty, Dept. of I&PE, Gogte Institute of Technology, Belgaum, Karnataka, India
ABSTRACT
Polymer-matrix composites (PMCs) have been used for a variety of structural members in chemical plants and airplanes, since they offer outstanding performance, such as light weight and good fatigue properties. To ensure long-term durability and to estimate the residual life of composites under hostile environments, it is important to clarify the fracture and/or failure mechanism under each service condition. Degradation of components made from polymeric materials occurs in a wide variety of environments and service conditions, and very often limits the service lifetime. Degradation occurs as the result of environment-dependent chemical or physical attack, often caused by a combination of degradation agents, and may involve several chemical and mechanical mechanisms. The main concern of this review is to examine the causes of degradation of polymeric components from the completion of fabrication to ultimate failure.
KEYWORDS: Degradation, Oxidation, Hydrolysis, Moulding
I. INTRODUCTION
Many polymers are prone to degradation caused by weathering, in which photo-chemical reactions involving ultraviolet solar photons and atmospheric oxygen lead to chain scission. The chemical reactions may be accelerated by elevated temperatures caused by the warming effect of the sun. Alternatively, or additionally, they may be accelerated by the presence of stress that may be applied externally, or may be present in the form of moulding stress, or as the result of a temperature gradient or of differences in thermal expansion coefficient at different locations within the moulding. Failure is often simply taken as the fracture of the component, but degradation of some other property, such as transparency or surface gloss, may render a component unserviceable.
In broad terms, the majority of failures that are the consequence of polymer degradation can be attributed to one of three types of source: 1] molecular degradation caused during processing, usually due to elevated temperatures (as in melt processing) and often in combination with an oxidizing atmosphere; 2] degradation in service caused by the natural environment; and 3] attack by an aggressive chemical, again during the service lifetime. The third type includes, as the major problem, environment-sensitive fracture, in which contact with a liquid chemical leads to craze initiation and growth. This can be a particular problem with consumer goods, where the service conditions are not under the control of the supplier; the end-user may employ an inappropriate cleaning fluid, for example. Significant research has been conducted in this area over the past 20 years and several test procedures have been developed.
It will be necessary to examine the mechanisms of failure and the features of the environment that control them, and then to look for possible remedies. The methods of testing are discussed with reference to their application in establishing ranking orders for plastics with respect to their weather resistance; in determining the effectiveness of additives such as anti-oxidants; in providing data for lifetime prediction; and in research into the mechanisms of failure and the development of improved materials. There are elements of degradation behaviour that are common to all polymers and elements that are peculiar to a particular polymer. Much research has been
conducted on the important commodity polymers poly(vinyl chloride) (PVC), polyethylene, and polypropylene, and these materials are used by way of example in this review.
II. POLYMER DEGRADATION
2.1. Chemical mechanisms of degradation:
In an aggressive chemical environment polymer molecules break (chain scission), cross-link, or suffer substitution reactions. Substitution is the least common, causes the smallest property changes, and will not be considered further in this review. Scission and cross-linking both occur under natural weathering conditions, and molecular degradation can also take place during processing. There is general agreement that molecular degradation occurs almost exclusively at defects in the molecule. Much research has been conducted into the chemical reactions involved and there are many papers and several reviews on this topic [1-7].
2.1.1 Degradation Studies
A Universal Testing Machine (UTM) is an instrument used for the measurement of loads and the associated test-specimen deflections, such as those encountered in tensile, compressive or flexural modes. It is used to test the tensile, flexural and Inter-Laminar Shear Strength (ILSS) properties of materials. The flexural strengths of the specimens were determined for different alkali-exposure durations using the three-point bending test as per ASTM D790. The specimens (80 × 8 × 3 mm) were tested with a span length of 50 mm in air using an instrumented 10-ton-capacity UTM (M/s Kalpak, Pune).
Table 1 Degradation of Flexural strength at T=70ºC
[Chart: flexural strength (MPa) vs. number of hours of exposure for Carbon-Epoxy, Carbon-Vinylester and Carbon-Isopolyester]
Fig 1 Degradation of Flexural strength at T=70°C
The specimens were tested for tensile strength as per ASTM D638, with specimen dimensions of length 216 mm, thickness 3 mm and width 19 mm, at a cross-head speed of 1 mm/min.
Table 1 data (flexural strength, MPa):
Hours of exposure   Carbon-Epoxy   Carbon-Vinylester   Carbon-Isopolyester
0                   834.452        432                 370
120                 765.92         380                 304
248                 732            348                 264
365                 684            320                 232
480                 648            300                 216
600                 636            294                 208
Table 2 Degradation of Ultimate tensile strength at T=70°C
[Chart: ultimate tensile strength (MPa) vs. number of hours of exposure for Carbon-Epoxy, Carbon-Vinylester and Carbon-Isopolyester]
Fig 2 Degradation of Ultimate tensile strength at T=70°C
A three-point bend test was carried out to determine the ILSS values of the specimens in accordance with ASTM D2344. The testing was done at a cross-head speed of 1 mm per minute.
Table 3 Degradation of Inter laminar shear strength at T=70°C
[Chart: inter-laminar shear strength (MPa) vs. number of hours of exposure for Carbon-Epoxy, Carbon-Vinylester and Carbon-Isopolyester]
Fig 3 Degradation of Inter laminar shear strength at T=70°C
Table 2 data (ultimate tensile strength, MPa):
Hours of exposure   Carbon-Epoxy   Carbon-Vinylester   Carbon-Isopolyester
0                   508.014        358.666             295.183
120                 468            336.667             256
240                 446            314                 225
360                 430            298                 203
480                 415            273                 180
600                 393            260                 172
Table 3 data (inter-laminar shear strength, MPa):
Hours of exposure   Carbon-Epoxy   Carbon-Vinylester   Carbon-Isopolyester
0                   51.2396        22.4003             16.7829
120                 50             21.7016             15.1
240                 48             20.921              14.7
360                 46.6           20.1743             14.3
480                 45.3           19.2967             13.4
600                 44             19.009              12.9
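The three strength series can be compared on a common footing by converting each to percentage retention of its initial strength; a small sketch using the ILSS values above:

```python
def retention(values):
    """Percent of the initial strength retained at each exposure time."""
    initial = values[0]
    return [round(100.0 * v / initial, 1) for v in values]

hours = [0, 120, 240, 360, 480, 600]
ilss_carbon_epoxy = [51.2396, 50, 48, 46.6, 45.3, 44]          # MPa
ilss_carbon_isopolyester = [16.7829, 15.1, 14.7, 14.3, 13.4, 12.9]

for h, r in zip(hours, retention(ilss_carbon_epoxy)):
    print(h, r)
```

On this basis carbon-epoxy retains roughly 86% of its ILSS after 600 h of exposure, against roughly 77% for carbon-isopolyester, so the normalized comparison makes the ranking of the matrix systems explicit.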
2.2. Photo-Oxidation:
Of major importance is the process of photo-oxidation, which proceeds by a radical chain process initiated either by dissociation caused by the collision of a photon of sufficient energy with a polymer molecule, or as the result of some impurity present, for example trace metals from the polymerization catalyst. Once initiation has occurred, converting the long-chain polymer molecule, PH, into a radical, P•, the reactions are as listed by Davis and Sims [8]. Termination is then normally through the reaction of pairs of radicals. The reaction schemes are affected by trace metal impurities such as polymerization-catalyst residues or contaminants from processing machinery, for these may catalyse some of the degradation reactions, for example Reaction 4 [11]. Degradation can still occur slowly in the dark through the formation of hydroperoxides by intermolecular back-biting hydrogen abstraction by peroxy radicals [12].
The reactions listed above would not cause serious degradation of the engineering properties of the material as they stand, because the long-chain nature of the polymer molecules is preserved almost unchanged. Degradation occurs because the radicals are unstable and may undergo scission reactions. A discussion of scission reactions for polypropylene is given in a recent paper by Severini et al. [13]. Hydroperoxides produced by Reaction 2, or by other means, can be decomposed by UV radiation with wavelength below 360 nm, giving a PO• radical, as shown in Reaction 3. The decomposition of hydroperoxides is generally acknowledged to be a key feature in the degradation of polyolefins, though their behaviour differs in polyethylene, in which they do not accumulate [14]. (Note that hydroperoxides accumulate in both polyethylene and polypropylene on thermal oxidation [15].) The presence of carbonyl groups in a degraded polymer indicates that oxidation has taken place and also warns that the material is vulnerable to further deterioration, because they are photo-labile.
Aldehyde and ketone carbonyl groups are common products during processing, and the effect of processing on the subsequent degradation behaviour has been identified as of significant importance [15]. Although most studies of photo-oxidation have centred on UV radiation, the need for information on the behaviour of polymers for insulation (polyethylene) and jacketing (PVC) in nuclear installations has stimulated study of the effect of γ-radiation. Clough and Gillen [16, 17] found that radiation dose and temperature act synergistically in promoting degradation.
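The numbered reaction list referred to in this section is not reproduced in this extract. As a hedged reconstruction, the standard polymer auto-oxidation chain usually cited in this context (the common textbook form, not necessarily Davis and Sims' exact list or numbering) is:

```latex
\begin{align*}
&\text{Initiation:}   & \mathrm{PH} &\;\xrightarrow{h\nu}\; \mathrm{P^{\bullet}} \\
&\text{Propagation:}  & \mathrm{P^{\bullet}} + \mathrm{O_2} &\;\longrightarrow\; \mathrm{PO_2^{\bullet}} \tag{1}\\
&                     & \mathrm{PO_2^{\bullet}} + \mathrm{PH} &\;\longrightarrow\; \mathrm{POOH} + \mathrm{P^{\bullet}} \tag{2}\\
&\text{Branching:}    & \mathrm{POOH} &\;\xrightarrow{h\nu}\; \mathrm{PO^{\bullet}} + {}^{\bullet}\mathrm{OH} \tag{3}\\
&\text{Metal-catalysed:} & \mathrm{POOH} + \mathrm{M}^{n+} &\;\longrightarrow\; \mathrm{PO^{\bullet}} + \mathrm{OH^{-}} + \mathrm{M}^{(n+1)+} \tag{4}\\
&\text{Termination:}  & \mathrm{P^{\bullet}},\ \mathrm{PO_2^{\bullet}} &\;\longrightarrow\; \text{inert products}
\end{align*}
```

Reactions 2-4 here are consistent with how the text uses them: Reaction 2 generates hydroperoxides, Reaction 3 is their UV decomposition to PO•, and Reaction 4 is the metal-catalysed path.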
2.3 Thermal decomposition and oxidation:
Thermal degradation is of relevance here because damage suffered by the polymer during processing at elevated temperature can lead subsequently to further deterioration under the conditions of photo-oxidation. Thermal degradation is a serious problem with PVC and has been the subject of much research. The initial step in the process is dehydrochlorination: hydrogen and chlorine atoms on adjacent chain carbon atoms strip off to form HCl, leaving behind a double bond in the polymer backbone; adjacent sites become less stable, more HCl may be stripped off, and a conjugated polyene structure develops. This causes yellowing of the material. HCl catalyses the reaction, which is therefore auto-accelerating unless steps are taken to remove the HCl. The process is accelerated in oxygen but can occur in the absence of oxygen at temperatures above 120 °C [18]. Troitskii and Troitskaya [19] conclude that abnormal unstable fragments have a major influence on the thermal degradation of PVC. Mechano-chemical degradation may occur during processing, producing free radicals that may then initiate dehydrochlorination in PVC [20, 21]. It is expected that dehydrochlorination will initiate at pre-existing defect sites in the polymer, though there is evidence that it may not be restricted exclusively to them [20]. The small amount of oxygen present during processing allows the formation of hydroperoxides by reaction with radicals. After thermal degradation the polymer will suffer further degradation during a later stage in processing, or under other conditions favouring thermal oxidation, or under conditions of photo-oxidation [22]. Even though the shearing action during processing is generally believed to promote molecular damage, the inclusion of lubricants to reduce the viscosity during processing does not produce any significant reduction in the vulnerability of the product PVC to oxidation [23].
The susceptibility to further degradation will depend on the amount of HCl present, the degree of unsaturation, and the hydroperoxide content [20]. Although generally regarded as a lesser problem than with PVC, degradation of polyolefins also occurs during processing. Mellor et al. [24] found that the lifetime under UV exposure was very
sensitive to the degree of oxidation that took place during processing on a two-roll mill, and that the rate of UV degradation was related to the increase in melt flow index that occurred in the material. Billiani and Fleischmann [25] used the weight-average molecular weight, Mw, to assess molecular degradation during injection moulding of polypropylene and found that it was more sensitive to increases in melt temperature than to increases in shear rate. There was no significant difference between the molecular weight of material taken from the skin and from the core, and they deduced that degradation occurs in the plasticizing system and/or in the sprue. Amin et al. [26] claimed that processing low-density polyethylene (LDPE) at 160 °C produces hydroperoxides that have a photo-initiating effect, whereas those produced by thermal oxidation in the range 85-95 °C do not [26]. This has been examined further by Lemaire and co-workers [27, 28] and by Gugumus [29], who discuss the chemistry of oxidation and the nature of the oxidation products. Gugumus further claims that the mechanisms may be adapted to other polymers, including non-olefinic polymers such as polystyrene and polyamides [29], though this may not be so, because Ginhac et al. [27] report that hydroperoxides which initiate new oxidation reactions form in polypropylene under thermal-oxidation conditions that do not cause the formation of active hydroperoxides in polyethylene.
2.4 Hydrolysis:
Hydrolytic attack can cause chain scission in some polymers, leading inevitably to deterioration in properties. A general hydrolysis scheme can be summarized as follows. Polymers susceptible to this kind of attack include polycarbonate. The reaction can be unacceptably fast at elevated temperature and can be a problem with articles that need to be sterilized. Some polymers absorb water, leading to other problems: nylons become plasticized and their Young's modulus can fall by as much as an order of magnitude. Some relevant references are given in a recent paper by Paterson and White [30]. When water is absorbed in polycarbonate in sufficient quantity it can form disc-shaped defects that act as stress-concentrating flaws and cause a serious fall in toughness. A review of the literature and some new results have been presented recently by Qayyum and White [31].
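The hydrolysis scheme announced in this section is missing from this extract; as a hedged reconstruction, for a carbonate (or ester) linkage it takes the generic chain-scission form:

```latex
\mathrm{{\sim}R{-}O{-}CO{-}O{-}R'{\sim}} \;+\; \mathrm{H_2O}
  \;\longrightarrow\;
\mathrm{{\sim}R{-}OH} \;+\; \mathrm{HO{-}CO{-}O{-}R'{\sim}}
```

Each such event cuts the backbone once, which is why hydrolysis translates directly into molecular-weight loss and the property deterioration described above.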
2.5 Attack by pollutants:
The attack of polymers by pollutants has been reviewed by Ränby and Rabek [32]. Some of the pollutants are themselves photolytic, leading to further products that may cause degradation. For example, SO2 photo-oxidizes and reacts with water to produce H2SO4.
2.6 Mechanical degradation:
If a chemical bond is placed under sufficient stress it will break. It may not always be easy to apply such a stress, because deformation mechanisms intervene. For a polymer chain bond to be broken, the segment in which it is contained must not be able to uncoil (i.e. it must already be extended between entanglements or cross-links), nor slip. Such constraints may be present in a cross-linked polymer, where the short chain segments become fully extended at fairly low extension; in a highly oriented polymer; or possibly at the tip of a growing crack. Molecular fracture has been shown to occur in this way using electron spin resonance to detect the free radicals that are produced when chain scission occurs.
2.7 Stress-aided chemical degradation:
The phenomenon of mechano-chemical degradation (or sometimes, more specifically, "mechano-oxidative" degradation) has been known to occur in rubbers for many years [33]. The effect of stress on the rate of chemical degradation in a much wider range of polymers has been reviewed by Terselius et al. [34] and Popov et al. [35]. Unlike the case of mechanical degradation dealt with in the previous section, in which very high stresses are needed to break a chain bond, a more modest stress may accelerate scission caused by chemical reaction. The most highly stressed bonds will still be the most likely to react [36, 37], so that bonds contained within short segments, or highly strained bonds near entanglements, will be most vulnerable. Highly oriented polymers are generally more resistant to this type of attack than when in more randomly oriented form, because the molecules tend to share the load evenly, so that the chance of overstressing is much less. Nevertheless, the rate of oxidation of oriented polypropylene at 130 °C was found to increase with load at high loads [38, 39].
III. EFFECTS OF PROCESSING:
Much of the discussion of thermal degradation relates to the problem of molecular degradation during processing, when the temperature required to produce the desired flow properties for a moulding operation is often high enough to promote significant degradation, especially if oxygen is present. There will often be circumstances during processing operations in which stress-aided chemical degradation occurs, so some degradation of this kind may already be present before the formed product enters service.
There is a further aspect of processing not yet dealt with: the morphology of the moulded or formed polymer. The rate of cooling is often quite high in moulding operations and varies considerably from one position within the moulding to another. As a consequence, the morphology of a semi-crystalline polymer varies substantially within an injection moulding, which normally contains equiaxed spherulites in the core and an oriented structure near the surface. The important point to note here is that degradation reactions occur almost exclusively in the amorphous phase, because it takes up oxygen much more readily than the crystal phase [64], so the morphology can exercise a strong influence. It is further suggested that oxidation may occur preferentially at the crystal-amorphous boundary, where the effects will be most damaging [65-67]. Nishimoto et al. [68] found that the crystal structure of their polypropylene samples varied with the quenching conditions and that there was a marked variation in property deterioration even though the (γ) radiation-induced oxidation did not differ. The diffusion rates of the various reactants are very different in the crystal and non-crystal regions of most polymers. Another morphological feature is molecular orientation, which can occur in either crystalline or amorphous regions.
There have been several studies of the effect of orientation on the degradation of polymers, and some of them are referred to in section 2.7, in which the effect of orientation on stress-aided chemical degradation was discussed. Photo-degradation is slower in oriented polyethylene, both in the unstressed state and when an external stress is applied [69]. Some of these topics are discussed further by Slobodetskaya [70], who observed that hydroperoxides accumulated at a lower rate in oriented polypropylene than in unoriented material.
IV. CREEP FRACTURE
The failure mechanism of a polymer-matrix composite is more complicated than in other materials, since it can fail under a constant load that is significantly lower than its static strength, even at room temperature, and its degradation mechanism has not been fully discussed yet. McLean [4] described the creep behavior of unidirectional composites. It was assumed that the fiber was elastic and the matrix viscoelastic: the matrix stress transfers to the fiber with time, making the fiber strain increase to equal the composite strain. Curtin [5] predicted the rupture strain and the maximum fiber stress of unidirectional composites with a view to estimating the probability of fiber breakage in a given cross-section. Du and McMeeking [6], Sofronis and McMeeking [7] and Ohno et al. [8] also predicted the creep rupture time of unidirectional composites under tensile loads. They discussed the relaxation of the interfacial shear stress, which can decrease a unidirectional composite's strength. In the above studies only fiber breakage was considered as the fatal damage; the interfacial debondings that are likely to progress even in normal PMCs were not examined. This time-dependent failure would promote fiber breakage and degrade the mechanical properties of composites [6-10]. From this point of view, Beyerlein and co-workers [11, 12] investigated interfacial debonding propagation and verified that the interface fails with time in a single-fiber composite under constant strain. In this paper, fragmentation tests were conducted with a single-fiber composite to examine the interfacial debonding.
V. PREDICTIONS OF FATIGUE LIFE
FRP laminates in service are subjected to variable amplitude loading. The linear cumulative damage rule (the Palmgren-Miner rule) has been used to predict fatigue life under variable amplitude loads. However, the linear cumulative damage rule is not sufficient to describe the complicated fracture mechanisms of these materials [13-15]. Therefore, cumulative damage has been evaluated using residual strength or residual stiffness as the damage parameter [13, 15, 16]. Recently, Yao and
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
224 Vol. 1, Issue 5, pp. 218-226
Himmel [17] assumed that the cumulative damage was proportional to the decrease in strength, and they modified the analysis by considering the residual strength reduction caused by fatigue damage in FRP. In this paper, two-stage variable amplitude cyclic loading tests were conducted with quasi-isotropic [45/0/-45/90]S CFRP laminates.
Regarding the stress-corrosion cracking problem of FRP, the fragmentation test using a single fiber model specimen was employed, considering the degradation of a fiber embedded in a composite near the crack tip caused by solution diffusion. Fragmentation tests were conducted to investigate the degradation mechanism using a single fiber composite. The specimens consisted of ECR-glass/vinylester and E-glass/vinylester. The effect of environmental solution diffusion into the matrix on the interfacial shear strength has been evaluated as a function of immersion time.
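For context, the Palmgren-Miner rule cited above simply sums the fractional damage contributed by each load block, D = sum(n_i / N_i), and predicts failure once D reaches 1. A minimal sketch for a two-stage test follows; the cycle counts are illustrative numbers, not data from the paper:

```javascript
// Palmgren-Miner linear damage accumulation: D = sum(n_i / N_i),
// where n_i is the number of cycles applied at stress level i and
// N_i is the cycles-to-failure at that level. Failure is predicted at D >= 1.
function minerDamage(blocks) {
  // blocks: [{ applied: cycles run at a stress level, life: cycles-to-failure there }]
  return blocks.reduce((d, b) => d + b.applied / b.life, 0);
}

const D = minerDamage([
  { applied: 5000, life: 20000 },  // stage 1 consumes 25% of life
  { applied: 30000, life: 60000 }, // stage 2 consumes 50% of life
]);
console.log(D);      // 0.75
console.log(D >= 1); // false: failure not yet predicted
```

As the text notes, this linear rule ignores load-sequence effects, which is why residual strength or stiffness is used as the damage parameter instead.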
VI. FRAGMENTATION TEST AND INTERFACIAL SHEAR STRENGTH
The specimen consisted of an E-glass fiber as the reinforcement and a vinylester resin as the matrix; the geometry of a specimen is shown in Fig. 2. The interface exposed to the solution was sealed at the end of the specimen in order to reduce water uptake through the interface. The maximum interfacial shear strength was calculated by the Cox formula, which assumes that the fiber/matrix interface is perfectly bonded. Fig. 3 shows the maximum interfacial shear strength as a function of the water absorption rate: the interfacial shear strength decreased with increasing water absorption. The maximum interfacial shear strength was influenced by the matrix Young's modulus. Therefore, the interfacial shear strength decreased as a function of the water absorption rate, and this decrease depended on the mechanical degradation of the matrix.
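For reference, one common textbook statement of the Cox shear-lag model is given below (this is not reproduced from the paper itself; symbols: fiber modulus E_f, applied strain epsilon, fiber radius r and length l, matrix shear modulus G_m, effective matrix radius R):

```latex
% Interfacial shear stress along a perfectly bonded fiber under applied strain \varepsilon:
\tau(x) = \frac{E_f\,\varepsilon\, r\, \beta}{2}\,
          \frac{\sinh(\beta x)}{\cosh(\beta l/2)},
\qquad
\beta = \sqrt{\frac{2\,G_m}{E_f\, r^{2}\,\ln(R/r)}}
% The maximum occurs at the fiber ends, x = \pm l/2:
\tau_{\max} = \frac{E_f\,\varepsilon\, r\, \beta}{2}\,
              \tanh\!\left(\frac{\beta l}{2}\right)
```

Since beta grows with the matrix shear modulus G_m, which is tied to the matrix Young's modulus, this form is consistent with the observation above that the maximum interfacial shear strength falls as the matrix degrades with water uptake.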
VII. RESULTS AND DISCUSSIONS
The effects of alkali exposure on neat castings of epoxy, vinylester and isopolyester at 70 °C are shown in Figures 1, 2 and 3. The specimens degrade substantially with increasing exposure time. The percentage drops in UTS for epoxy/carbon, vinylester/carbon and isopolyester/carbon after 600 hours of exposure were 24.81, 26.73 and 63 % respectively. The percentage drops in flexural strength for epoxy/carbon, vinylester/carbon and isopolyester/carbon after 600 hours of exposure were 28.73, 41.17 and 71.29 % respectively at room temperature, and 31.20, 46.93 and 77.88 % respectively at 70 °C. Carbon/epoxy shows better performance than the others, and carbon/isopolyester exhibits the lowest ILSS.
VIII. CONCLUSIONS
Composite materials have great potential for application in structures subjected primarily to compressive loads. They offer attractive properties such as relatively high compressive strength, good adaptability to fabricating thick composite shells, low weight and corrosion resistance. However, material characterization and failure evaluation of thick composite materials in compression are still open research topics. Glass reinforced plastics have wide application in naval and other vessels, accompanied by conservative design safety factors owing to limited durability data and the need to account for underwater shock loading. Increasingly, GRP is being proposed for critical marine components such as masts, submarine control surfaces, transmission shafts, propellers, superstructures, submarine casings and radomes.
REFERENCES
[1]. H. Kawada, A. Kobiki, J. Koyanagi, A. Hosoi, "Long-term durability of polymer matrix composites under hostile environments", Materials Science and Engineering A, Vol. 412, (2005), 159-164.
[2]. Jin-Chul Yun, Seong-Il Heo, Kyeong-Seok Oh, Kyung-Seop Han, "Degradation of graphite reinforced polymer composites for PEMFC bipolar plate after hygrothermal ageing", 16th International Conference on Composite Materials.
[3]. Deanna N. Busick and Mahlon S. Wilson, "Low-cost composite material for PEFC bipolar plates", Fuel Cells Bulletin, Vol. 2, No. 5, pp. 6-8, 2006.
[4]. S.L. Bai, V. Djafari, "Interfacial properties of microwave cured composites", Composites, Vol. 26, (1995), 645-651.
[5]. Viral Mehta and Joyce Smith Cooper, "Review and analysis of PEM fuel cell design and manufacturing", Journal of Power Sources, Vol. 114, pp. 32-53, 2003.
[6]. C.Y. Yue, H.C. Looi, "Influence of thermal and microwave processing on the mechanical and interfacial properties of glass/epoxy composites", Composites, Vol. 26, (1995), 767-773.
[7]. L.H. Chia, J. Jacob, F.Y.C. Boey, "Radiation curing of PMMA using a variable power microwave source", Journal of Materials Processing Technology, Vol. 48, (1995), 445-449.
[8]. Shuangjie Zhou and Martin C. Hawley, "A study of microwave reaction rate enhancement effect in adhesive bonding of polymers and composites", Composite Structures, Vol. 61, (2003), 303-309.
[9]. S.L. Bai, V. Djafari, M. Andreani and D. Francois, "A comparative study of the mechanical behaviour of an epoxy resin cured by microwaves with one cured thermally", Journal of Polymers, Vol. 31, (1995), 875-884.
[10]. C.-H. Shen and G.S. Springer, "Moisture absorption and desorption of composite materials", Composites, Vol. 8, Issue 1, January 1977, p. 63.
[11]. S.M. Bishop, "Effect of moisture on the notch sensitivity of carbon fibre composites", Composites, Vol. 14, Issue 3, July 1983, pp. 201-205.
[12]. D.J. Boll, W.D. Bascom and B. Motiee, "Moisture absorption by structural epoxy-matrix carbon-fiber composites", Composites Science and Technology, Vol. 24, Issue 4, 1985, pp. 253-273.
[13]. John W. Lane, "Dynamic modeling of the curing process", Polymer Engineering and Science, Vol. 26, (1986), 346-353.
[14]. Woo Lee, G.S. Springer, "Interactions of electromagnetic radiation with organic matrix composites", Journal of Composite Materials, Vol. 18, (1984), 357-386.
[15]. Jian Zhou, Chun Shi, Bingchu Mei, Runzang Yuan, Zhengyi Fu, "Research on the technology and mechanical properties of the microwave processing of the polymer", Journal of Materials Processing Technology, Vol. 137, (2003), 156-158.
[16]. Dariusz Bogdal, Jaroslaw Gorczyk, "Synthesis and characterization of epoxy resins prepared under microwave irradiation", Journal of Applied Polymer Science, Vol. 94, (2004), 1969-1975.
[17]. Henri Jullien and Henri Valot, "Polyurethane curing by a pulsed microwave field", Polymer, Vol. 26, (1985), 506-510.
[18]. Liming Zong, Leo C. Kempel and Martin C. Hawley, "Dielectric studies of three epoxy resin systems during microwave cure", Polymer, Vol. 46, (2005), 2638-2645.
[19]. Varaporn Tanrattanakul, Kaew SaeTiaw, "Comparison of microwave and thermal cure of epoxy-anhydride resins: mechanical properties and dynamic characteristics", Journal of Applied Polymer Science, Vol. 97, (2005), 1442-1461.
[20]. Yumin Liu, Y. Xiao, X. Sun, D.A. Scola, "Microwave irradiation of nadic-end-capped polyimide resin (RP-46) and glass-graphite-RP-46 composites: cure and process studies", Journal of Applied Polymer Science, Vol. 73, (1999), 2391-2411.
[21]. C. Antonio and R.T. Deam, "Comparison of linear and non-linear sweep rate regimes in variable frequency microwave technique for uniform heating in materials processing", Journal of Materials Processing Technology, Vol. 169, (2005), 234-241.
[22]. F.Y.C. Boey, W.L. Lee, "Microwave radiation curing of thermosetting composite", Journal of Materials Science Letters, Vol. 9, (1990), 1172-1173.
[23], [24]. Jian Zhou, Chun Shi, Bingchu Mei, Runzhang Yuan, Zhengyi Fu, "Research on the technology and the mechanical properties of the microwave processing of polymer", Journal of Materials Processing Technology, Vol. 137, (2003), 156-158.
[25]. Michel Delmotte, Henri Jullien, Michel Ollivon, "Variations of the dielectric properties of epoxy resins during microwave curing", European Polymer Journal, Vol. 27, (1991), 371-376.
[26]. H.S. Ku, J.A.R. Ball, E. Siores, B. Horsfield, "Microwave processing and permittivity measurement of thermoplastic composites at elevated temperature", Journal of Materials Processing Technology, Vol. 89-90, (1990), 419-424.
[27]. Cleon Davis, Ravindra Tanikella, Taehyun Sung, Paul Kohl and Gary May, "Optimization of variable frequency microwave curing using neural networks and genetic algorithms", Electronic Components and Technology Conference, (2003).
[28]. Vittorio Frosini, Enzo Butta and Mario Calamia, "Dielectric behaviour of some polar high polymers at ultra-high frequencies (microwaves)", Journal of Applied Polymer Science, Vol. 11, (1967), 527-551.
[29]. F.Y.C. Boey and S.W. Lye, "Void reduction in autoclave processing of thermoset composites. Part 2: void reduction in a microwave curing process", Composites, Vol. 23, (1992), 266-270.
[30]. A. Livi, G. Levita and P.A. Rolla, "Dielectric behavior at microwave frequencies of an epoxy resin during crosslinking", Journal of Applied Polymer Science, Vol. 50, (1993), 1583-1590.
[31]. Quang Le Van and Albert Gourdenne, "Microwave curing of epoxy resins with diaminophenylmethane: general features", European Polymer Journal, Vol. 23, (1987), 777-780.
[32]. Claire Hedruel, Jocelyne Galy, Jerome Dupuy, Michel Delmotte and Claude More, "A study of the morphology of a linear polymer modified epoxy-amine formulation cured by microwave energy at different heating rates", Journal of Applied Polymer Science, Vol. 82, (2001), 1118-1128.
[33]. Claire Hedruel, Jocelyne Galy, Jerome Dupuy, Michel Delmotte and Claude More, "Kinetics modeling of a rubber modified epoxy-amine formulation cured by thermal and microwave energy", Journal of Applied Polymer Science, Vol. 68, (1998), 543-552.
[34]. X.Q. Liu, Y.S. Wang, J.H. Zhu, "Epoxy resin/polyurethane functionally graded material prepared by microwave irradiation", Journal of Applied Polymer Science, Vol. 94, (2004), 994-999.
[35]. Nadir Beldjoudi and Albert Gourdenne, "Microwave curing of epoxy resins with diaminophenylmethane - 4. Average electrical power and pulse length dependence in pulsed irradiation", European Polymer Journal, Vol. 24, (1988), 265-270.
[36]. G. Levita, A. Livi, P.A. Rolla and C. Culicchi, "Dielectric monitoring of epoxy cure", Journal of Polymer Science Part B: Polymer Physics, Vol. 34, 2731-2737.
[37]. J.S. Salafsky, H. Kerp, R.E.I. Schropp, "Microwave photoconductivity, photovoltaic properties and architecture of a polymer-semiconductor nanocrystal composite", Synthetic Metals, Vol. 102, (1999), 1256-1258.
[38]. Wooil Lee, George S. Springer, "Microwave curing of composites", Journal of Composite Materials, Vol. 18, (1984), 387-409.
[39]. Liming Zong, Shuangjie, Rensheng Sun and Leo C. Kempel, "Dielectric analysis of a crosslinking epoxy resin at a high microwave frequency", Journal of Polymer Science Part B: Polymer Physics, Vol. 42, (2004), 2871-2877.
[40]. Shuangjie Zhou, Martin C. Hawley, "A study of microwave reaction rate enhancement effect in adhesive bonding of polymers and composites", Composite Structures, Vol. 61, (2003), 303-309.
[41]. C. Nightingale, R.J. Day, "Flexural and interlaminar shear strength properties of carbon fibre/epoxy composites cured thermally and by microwave radiation", Composites Part A, Vol. 33, (2002), 1021-1030.
Authors Biographies
Shivakumar S was born on 17th January 1966. He completed his B.E. from Mysore University (1988, V Rank, FCD) and his M.Tech in Industrial Engineering from IIT Bombay (1994, FCD). He has been pursuing a PhD in Mechanical Engineering at UVCE, Bangalore since 2007. He is currently working as Associate Professor, Dept. of Industrial and Production Engineering, Gogte Institute of Technology, Belgaum, Karnataka. He also worked as Special Officer, Visvesvaraya Technological University from 2002 to 2006, and is currently PG Coordinator, Dept. of IPE, GIT, Belgaum, and BOE, VTU, Belgaum.
Geetanjali S Guggari completed her B.E. in Industrial & Production Engineering from Karnataka University (1992, First Class) and her M.Tech in Machine Design from BEC Bagalkot (2010, First Class with Distinction). She is currently working as Lecturer in the Dept. of Industrial and Production Engineering, Gogte Institute of Technology, Belgaum, Karnataka.
IMPLEMENTATION RESULTS OF SEARCH PHOTO AND
TOPOGRAPHIC INFORMATION RETRIEVAL AT A LOCATION
1Sukhwant Kaur, 2Sandhya Pati, 3Trupti Lotlikar, 4Cheryl R., 5Jagdish T., 6Abhijeet D.
1Sr. Lecturer, Dept. of Computer Engineering, Manipal University, Dubai
2, 3Asst. Prof., Dept. of Computer Engg., Fr. CRIT, Vashi, Navi Mumbai, Maharashtra, India
4, 5, 6Dept. of Computer Engineering, Fr. CRIT, Vashi, Navi Mumbai, Maharashtra, India
ABSTRACT
Tourism is one of the strongest and largest industries in the global economy. It has played a significant role in boosting city economies and social employment, and there has been a large increase in the number of people on tours for recreation and entertainment. In the traditional tourism industry, tourist information is obtained mainly through newspapers, magazines, friends and other simple channels. Such traditional sources are user-friendly, but they have some serious limitations. First, suggestions from friends are limited to the places they have visited before. Second, information from travel agencies is sometimes biased, since agents tend to recommend businesses they are associated with. Moreover, the information available on the Internet is overwhelming, and users have to spend a long time finding what they are interested in. To address this difficulty, SPATIAL employs geo-tagged images to show the interesting scenes of different places. Detailed texts, images, paths and other guidance information are provided, so people can better understand the tourist attractions and make their decisions objectively.
In this paper, we present the successful implementation of a photo and topographic information search. A user can provide a keyword describing the place of interest, and the system will look into its database for places that share the visual characteristics. One can select two locations on the map; the latitude and longitude of the selected area, the path and the distance between the two places then appear. From the multiple paths, the user can select any path, and images of famous places are displayed. These images are broadly classified into categories such as holy places, universities, historical monuments, nature-driven places and wildlife. One can also see detailed information on the selected place as well as on the selected image.
KEYWORDS – Latitude, Longitude, Path, Tourist Place
I. INTRODUCTION
The system ‘Search Photo And Topographic Information At A Location’ (SPATIAL) is an application wherein the user can provide a keyword describing the place of interest, and the system will look into its database for places that share the visual characteristics. Within India, on selection of two locations in a particular state, the latitude and longitude of the selected place, the path or multiple paths, and the distance between the two places appear. On selection of a particular path, images of famous places are displayed. We have broadly classified these images into categories such as holy places, universities, historical monuments, nature-driven places and wildlife. We can also see detailed information on the selected place as well as on the selected image.
There is an Administrator Control Panel in which only the administrator has the access rights to add a region, city, images and their descriptions, as well as to edit them. He can also define paths between various cities and edit or delete them.
The system is named ‘SPATIAL’, an abbreviation of Search Photo and Topographic Information at a Location. To get started, the user clicks two locations on the displayed map to define the geographical area for which image results should appear. When a location is clicked on the map, the respective latitude and longitude are automatically entered in the boxes displayed. The search can be further refined using specific searches based on the different image categories: holy places, historical monuments, nature-driven places, universities and wildlife. Thus, a collection of interesting images that contain both user tags and geolocations is needed [2].
The features of the system ‘SPATIAL’ are as follows [3]:
i) It provides detailed information about the image and its location.
ii) It provides the user with more efficient and easy ways to find tourism recommendations
which can save time and efforts.
iii) It suggests tourist destinations based on his/her interest.
iv) Latitude, Longitude, multiple paths and distance between the selected locations will be
displayed.
Figure 1: Design of software
In order to fulfill these criteria, the system is divided into the following modules, as shown in Figure 1.
1.1 Module 1 (Selection).
1.2 Module 2 (Displaying Images).
1.3 Module 3(Image Processing and Image Search)
1.4 Module 4 (Information Retrieval).
The detailed descriptions of these modules are as follows:
1.1 Module 1 (Selection)
In Module 1, SPATIAL provides the user the ability to switch between Interstate and Intrastate navigation. Interstate navigation covers South India, traversing the states of Karnataka, Kerala, Tamil Nadu and Andhra Pradesh. Intrastate navigation is limited to Maharashtra and Uttar Pradesh. One can select two locations on the map; the latitude and longitude of the selected place and the distance between the two places then appear.
1.2 Module 2 (Displaying Images)
From the selected points, multiple paths are displayed and all famous or user-desired images appear. User-desired images include temples, historical places, nature-driven places, universities, wildlife and other famous places. Moreover, the display of images comprises the following:
i. Air Route - A straight line is drawn connecting the two location points and images are displayed. A route that appears as a straight line on the map is actually shorter than the road distance between the two locations.
ii. Rail Route – This is based on the rail network within the particular location. Depending on
the rail network for the selected location, the associated path would appear on the map. Then,
all famous or user-desired images would be displayed.
iii. Road Route – The exact road route appears between the two selected locations. Images of all well-known places, such as temples, beaches and monuments, that are present on that route are displayed.
Since the road route is the practical way to reach famous places, we display images of such places according to the road route.
1.3 Module 3 (Image Processing and Image Searching)
Image Processing - Here, we perform image enlargement on the selected image. Searching for specific images based on category within the map is also possible; for example, we can display all temples in a specific area.
1.4 Module 4 (Image Information Retrieval)
Image and Location Information - All information about an image and its location is displayed to the user. Such information includes the distance calculated along the road route and details about the places worth visiting in that location. It is a basic form of information retrieval and will help users, especially tourists, to learn about a tourist spot.
II. IMPLEMENTATION ISSUES
A few issues were encountered while implementing the project. They are listed as follows:
2.1. Language of Implementation
To develop a web application, different tools are available, such as ASP.NET and PHP. We implemented our project in ASP.NET because ASP.NET is a web application framework developed and marketed by Microsoft to allow programmers to build dynamic web sites, web applications and web services. ASP.NET makes development simpler and easier to maintain with an event-driven, server-side programming model [7]. The connectivity of ASP.NET with SQL Server is also very fast and secure, and SQL Server can store extremely large amounts of data. We used jQuery for scripting because jQuery is a fast and concise JavaScript library that simplifies HTML document traversing, event handling, animation and Ajax interactions for rapid web development [8].
2.2. Database Considerations
In order to enable tourists to know about the famous places in a particular area, it was necessary to obtain the images from a database. Here, we faced the following questions:
i) What type of database should be used?
SQL databases are designed and optimized to serve large numbers of records quickly and efficiently. With simple SQL queries, one can retrieve complex information from millions of records, and storing data in a SQL database is relatively secure [9].
ii) What will be stored in the database?
The database stores the relative paths of the images according to their image categories.
iii) How are the images and information retrieved?
Images are retrieved by running queries against SQL Server.
2.3. Ease in Image Searching
Tourists desire more efficient ways to find tourism recommendations that save time and effort. If they want to see specific images at particular locations, unstructured searching can take too much time. For this purpose, we have stored images by category, so that searching for a specific category, such as temples or nature-driven places, is easy.
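The category-wise storage described above can be sketched as follows. This is a hedged illustration, not the paper's source code: the field names `path` and `category` are assumed, and plain objects stand in for SQL Server rows that a query such as `SELECT path FROM images WHERE category = @cat` would return:

```javascript
// In-memory stand-in for the images table: each row holds an image's relative
// path plus its category tag, so one category's images are found in a single pass
// instead of scanning every stored image.
const images = [
  { path: "img/holy/temple1.jpg", category: "holy" },
  { path: "img/univ/university1.jpg", category: "university" },
  { path: "img/holy/temple2.jpg", category: "holy" },
];

function pathsByCategory(rows, category) {
  return rows.filter((r) => r.category === category).map((r) => r.path);
}

console.log(pathsByCategory(images, "holy")); // the two "holy" paths, in stored order
```

The same filter-by-tag idea applies whether the predicate runs in the application or, as in SPATIAL, inside the SQL Server query.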
III. IMPLEMENTATION LIBRARIES
Two libraries were used in implementing the project; they are discussed below.
3.1 Scalable Vector Graphics (SVG)
Scalable Vector Graphics is a family of specifications of an XML-based file format for describing two-dimensional vector graphics, both static and dynamic (i.e. interactive or animated). SVG images and their behaviors are defined in XML text files. This means that they can be searched, indexed, scripted and, if required, compressed. Since they are XML files, SVG images can be created and edited with any text editor, but drawing programs that support the SVG file format are also available. We have used the SVG 1.1 specification, which defines important functional areas or feature sets such as paths, basic shapes, text, colour, interactivity, linking, scripting, animation and fonts.
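As a small illustration of the Paths feature set mentioned above (our example, not code from the paper; the attribute choices are assumptions), an SVG path boils down to a `d` attribute built from move-to/line-to commands, which is how a route overlay on a map can be generated programmatically:

```javascript
// Build SVG <path> markup for a route polyline from a list of [x, y] map points.
// "M" moves the pen to the first point; "L" draws a line segment to each
// subsequent point, producing a connected route overlay.
function routePathMarkup(points) {
  const d = points
    .map(([x, y], i) => `${i === 0 ? "M" : "L"}${x} ${y}`)
    .join(" ");
  return `<path d="${d}" fill="none" stroke="red" stroke-width="2"/>`;
}

console.log(routePathMarkup([[10, 20], [30, 40], [50, 25]]));
// → <path d="M10 20 L30 40 L50 25" fill="none" stroke="red" stroke-width="2"/>
```

Because the markup is plain text, such generated paths remain searchable and scriptable, which is exactly the property of SVG the section highlights.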
3.2 jQuery
jQuery is a cross-browser JavaScript library designed to simplify the client-side scripting of HTML.
The jQuery library can be added to a web page with a single line of markup. jQuery is a library of
JavaScript Functions. jQuery's syntax is designed to navigate a document, select DOM elements,
create animations, handle events, and develop Ajax applications. jQuery also provides capabilities for
developers to create plugins on top of the JavaScript library. Using these facilities, developers are able
to create abstractions for low-level interaction and animation, advanced effects and high-level, themeable widgets. This contributes to the creation of powerful and dynamic web pages [6].
jQuery contains the following features:
i. DOM element selections using the cross-browser open source selector engine Sizzle, a spin-off of the jQuery project
ii. DOM traversal and modification (including support for CSS 1-3)
iii. Events
iv. CSS manipulation
v. Effects and animations
vi. Ajax
vii. Extensibility through plug-ins
viii. Utilities, such as browser version detection and the each() function
ix. Cross-browser support
The jQuery library is stored in a single JavaScript file containing all the jQuery functions, and it can be added to a web page.
IV. IMPLEMENTATION SCREENSHOTS
Figure 2 shows the Home page, wherein the user can get an overview of SPATIAL. The Home page also contains a brief view of hot spots and the image gallery. From this home page, the user can choose navigation via Interstate or Intrastate. The Interstate navigation mainly deals with South India.
Figure 2: Home Page
Figure 3 gives the overview of all the images stored in the image gallery. The images in the image
gallery have been broadly classified into categories such as holy places, universities, historical
monuments, nature-driven places and wildlife. On moving the mouse over a particular image, the
detailed information of that image will appear as shown in the figure below.
Figure 3: Image Gallery
In the Intrastate navigation, the user can select two locations, that is, a source and a destination in a particular state. Thereby the latitude, longitude, city details and the available multiple paths between the two locations will be displayed. On selection of a particular path, images of famous places appear as shown in figure 4.
Figure 4: Overview of Intrastate
The Interstate navigation covers South India and consists of navigating through four states: Andhra Pradesh, Karnataka, Kerala and Tamil Nadu, as shown in figure 5.
Navigation from one state to another is similar to that of the intrastate navigation. The user selects two
locations as source and destination; its details along with the latitude and longitude are then displayed.
Multiple paths then appear; the user can select his desired path, and the images available on that path are displayed below.
Figure 5: Overview of Interstate
Clicking on a particular image in the image gallery gives an enlarged form of the image. One can then see the enlarged form and the detailed information of that image, as shown in figure 6.
Figure 6: Enlarged Image with details
Figure 7 shows the Login page for Administrator from where he can access all the administrator
rights. Here the administrator needs to enter a username and an authenticated password. These
administrator rights are the access rights which are given only to the administrator and not to the end user.
Figure 7: Login Page
This is the Administrator Home page, from where the administrator can exercise the administrator rights. These rights include adding a city, adding images and their descriptions, as well as editing and deleting the same, as shown in figure 8.
Figure 8: Home Page for Administrator
This is the Administrator Control Panel where the administrator can add a region as shown in figure 9.
Adding a region means defining a new state, which comprises specifying the state name, state description and other related parameters.
Figure 9: Add State
The Administrator Control Panel also enables the administrator to add a new city, add images and
define paths between the cities. Figure 10 shows where and how the administrator can add a city,
images as well as their description. He can define multiple paths between two locations. In case of a
change or modification, the administrator can also edit and delete the same.
Figure 10: Add Panel
V. CONCLUSION
The system SPATIAL has been successfully implemented as a photo and topographic information search at a location. Topography here means determining the position of any feature, or more generally any point, in terms of a coordinate system such as latitude and longitude. Images have been organized by the specified categories. We have also used geotagged images to show interesting scenes of different places in the world and to help users find the destinations that best match their interests. Detailed information on the selected place can also be seen. The system aims to suggest tourist destinations based on the user's interests, and it makes image search more efficient, specific, easy and thus more interesting. SPATIAL can be accessed by many people, especially tourists, making it popular among them.
REFERENCES
[1] “Tour-Guide: Providing Location-Based Tourist Information”, a white paper by Xiaoyu Shi, Ting Sun, Yanming Shen, Keqiu Li and Wenyu Qu.
[2] “Geo-location Inference from Image Content and User Tags”, a white paper by Andrew Gallagher, Dhiraj Joshi, Jie Yu and Jiebo Luo.
[3] “Determining Photo and Topographic Information at a Location”-published in the proc. of National
Conference on ETCSIT-2011 organized by K.K.Wagh Institute of Engineering Education and Research, Nashik.
[4] “Exploring User Image Tags for Geo-Location Inference” – a white paper by Dhiraj Joshi, Andrew
Gallagher, Jie Yu and Jiebo Luo.
[5] http://en.wikipedia.org/wiki/Scalable_Vector_Graphics
[6] http://en.wikipedia.org/wiki/JQuery
[7] http://en.wikipedia.org/wiki/ASP.NET
[8] http://en.wikipedia.org/wiki/JQuery
[9] http://en.wikipedia.org/wiki/Microsoft_SQL_Server
Authors biography
Sukhwant Kaur is currently working with Manipal University, Dubai Campus in the
Department of Engineering. She has done B.Tech (Computer Science and Engineering) in
1999 from Punjab Technical University. She completed her M.S. (Software Systems) in 2001 from BITS, Pilani. She worked at Fr. C. Rodrigues Institute of Technology, Vashi, Mumbai as Assistant Professor in the Computer Department for 10 years. Her research areas are Wireless Communication, Mobile Communication, Image Processing and Software Engineering. She has published 4 papers in International Conferences and 11 papers in National Conferences.
Sandhya Pati is currently working as Assistant Professor in Fr. C. Rodrigues Institute of
Technology, Vashi, Mumbai in the Department of Computer Engineering. She has done
B.Tech (Computer Science and Engineering) in 2002 from Sree Vidyanikethan College of Engineering, Tirupati, affiliated to Jawaharlal Nehru Technological University, Anantapur.
She has completed M.E from Sathyabama Deemed University in 2006. She has worked in
Gokula Krishna College of Engineering, Andhra Pradesh for 4 years. She has published 2
papers in International Conference and 3 papers in National Conference.
Cheryl Rodrigues completed B.E. (Computer Engineering) from Fr. C. Rodrigues Institute
of Technology, Vashi, Mumbai. She has published 1 paper in a National Conference.
Jagdish Talekar completed B.E.(Computer Engineering) from Fr. C. Rodrigues Institute of
Technology, Vashi, Mumbai. He has published 1 paper in a National Conference.
Abhijeet Dhere completed B.E. (Computer Engineering) from Fr. C. Rodrigues Institute of
Technology, Vashi, Mumbai. He has published 1 paper in a National Conference.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
236 Vol. 1, Issue 5, pp. 236-247
QUALITY ASSURANCE EVALUATION FOR PROGRAMS USING
MATHEMATICAL MODELS
Murtadha M. Hamad and Shumos T. Hammadi
Faculty of Computers, Department of Computer Science, Al-Anbar University, Iraq
ABSTRACT
This paper is based on comprehensive quality standards that have been developed for program measurement. It
adopts four measures to evaluate program performance (time complexity, reliability, modularity and
documentation), each evaluated on the basis of a mathematical model. The measures were applied to a sample
of text files containing programs written in C++, so that each text file holds a program to be evaluated. The data
obtained were analyzed using the proposed evaluation algorithms, which rely primarily on mathematical
analysis, using mathematical functions to evaluate each program. C# was used as the environment in which the
program-evaluation software was implemented. The results show that the assessment depends on the structure
and the way the program is written.
KEYWORDS: Quality assurance, time complexity, reliability, modularity, documentation.
I. INTRODUCTION
With increasing importance placed on standard quality assurance methodologies by large companies
and government organizations, many software companies have implemented rigorous QA processes
to ensure that these standards are met. The use of standard QA methodologies cuts maintenance costs,
increases reliability, and reduces cycle time for new distributions. Modelling systems differ from most
software systems in that a model may fail to solve to optimality without the modelling system being
defective. This additional level of complexity requires specific QA activities. To make software
quality assurance (SQA) more cost-effective, the focus is on reproducible and automated techniques
[1].
In software quality, the definition should be as follows: software quality characterizes all attributes relating
to the excellence of a computer system, such as reliability, maintainability and usability. In terms of
practical application, software quality can be defined by three points of consistency: consistency
with the determined function and performance; consistency with documented development standards;
and consistency with the implicit characteristics anticipated of all professionally developed software [2].
Software quality is concerned with assuring that quality is built into the software products. It assures
the creation of complete, correct, workable, consistent, and verifiable software plans, procedures,
requirements, designs, and verification methods. Software quality assurance (SQA) assures the
adherence of successive products to those software requirements, plans, procedures, and standards.
The software quality discipline consists of product assurance and process assurance activities that are
performed by the functions of SQA, software quality engineering, and software quality control [3].
Software quality assurance consists of the systematic activities that provide evidence of the fitness for
use of the total software product. SQA is achieved through the use of established guidelines for
quality control to ensure the integrity and prolonged life of software. SQA involves [4]:
• Establishing a quality assurance group that has the required independence.
• Participation of SQA in establishing the plans, standards and procedures for the project.
• Reviewing and auditing the software products and activities to ensure that they comply with
the applicable procedures and standards.
• Escalating unresolved issues to an appropriate level of management.
This paper explains the effects of software quality evaluation and the measurements adopted for
software performance analysis. Finally, the results and conclusions are discussed.
II. RELATED WORK
Several studies in the field of QA evaluation have been carried out. A number of researchers
have used modelling methods based on mathematical techniques to ensure the quality of the
assessment. Some of these studies are summarized below:
Stefan Wagner, Florian Deissenboeck, and Sebastian Winter propose that managing
requirements on quality aspects is an important issue in the development of software systems.
Difficulties arise in expressing such requirements appropriately, which in turn results from the difficulty of the
concept of quality itself. Building and using quality models is an approach to handle the complexity of
software quality. A novel kind of quality models uses the activities performed on and with the
software as an explicit dimension. These quality models are a well-suited basis for managing quality
requirements from elicitation over refinement to assurance. The paper proposes such an approach and
shows its applicability in an automotive case study [5].
Manju Lata and Rajendra Kumar presented an approach to optimize the cost of SQA. It
points out how to optimize the investment in various SQA techniques and software quality. The
detection and removal of defects through software inspection provides technical support for the defect
detection activity, and the large volume of documentation related to software inspection makes the
development of SQA cost-effective. The value of an inspection is that it improves quality and
saves defect cost. The paper describes an optimization model for selecting the best commercial off-the-shelf
(COTS) software product among alternatives for each module; the objective function of the model is
to maximize quality within a budgetary constraint, while standard quality assurance (QA) methodologies
cut maintenance costs, increase reliability, and reduce cycle time for new distributions of the modelling
system [6].
Holmqvist and Karlsson aimed to improve the quality of software testing in a
large company developing real-time embedded systems. Software testing is a very important part of
software development: by performing comprehensive software testing, the quality and validity of a
software system can be assured. One of the main issues with software testing is being sure that the
tests themselves are correct; knowing what to test, but also how to perform testing, is of the utmost
importance. Their thesis explores different ways to increase the quality of real-time testing by
introducing new techniques in several stages of the software development model. The proposed
methods are validated by implementing them in an existing and completed project on a subset of
the software development process [7].
III. SOFTWARE QUALITY EVALUATION
Software quality directly affects the application and maintenance of software, so how to evaluate
software quality objectively and scientifically has become a hot spot in the software engineering field.
Software quality evaluation involves the following tasks throughout the software life cycle, based on
a software quality evaluation standard implemented during the software development process:
continuously measure software quality throughout the development process, reveal the current status
of the software, predict the follow-up development trend of software quality, and provide effective means for
the buyer, developer and evaluator. A set of evaluation activities may generally include review, appraisal,
test, analysis and examination, etc. Such activities are performed to determine whether
software products and processes are consistent with technical demands, and finally to determine product
quality. These activities vary with the phase of development and may be performed by several
organizations. A set of evaluation activities may generally be defined in the software quality
specifications of the project plan, of a special project, and in related software quality specifications [8].
IV. SOFTWARE PERFORMANCE ANALYSIS
For software qualification, it is highly desirable to have an estimate of the remaining errors in a
software system. It is difficult to determine such an important figure without knowing what the
initial errors are. Research activities in software reliability engineering have been conducted over the past
30 years, and many statistical models and various techniques have been developed for estimating and
predicting the reliability of software and the number of residual errors in it. From historical data on
programming errors, there are likely to be about 8 errors per 1000 program statements after the unit
test. This, of course, is just an average and does not take into account any tests on the program [9].
4.1 Time complexity
An important factor in measuring the efficiency or effectiveness of any algorithm is its execution
time, the time it takes to run the algorithm. There are no simple rules to determine this time
exactly, so we make an a priori estimate of the execution time using mathematical
techniques, after identifying a number of important factors related to the problem addressed
by the algorithm. We identify a function that gives the expected execution time,
depending on some variables related to the steps of the algorithm. Suppose the algorithm includes
the following statement [10].
X=X+1;
Here we must account for the time required to execute this statement once, and then we must
determine how many times it is executed (the so-called frequency count). This count differs according to the
input data, and by multiplying the two amounts (the time per statement and the
frequency) we obtain the expected total execution time.
Calculating the execution time of every instruction with the required accuracy needs the following
information:
• The type of computer hardware that executes the algorithm.
• The programming language used on the computer.
• The execution time of each instruction.
• The kind of compiler or interpreter.
That information could be obtained for a chosen real machine, or a default machine could be defined;
in both cases, the calculated time may not be accurate or appropriate for other
computers, since the language translator may vary from one computer to another, along with other
factors. These considerations make us focus our a priori estimate of the execution
time on the number of iterations of the code statements. Take the following three examples:
(A)
   ………
   X=X+1;
   ………

(B)
   for(i=1;i<=n; i++)
      X=X+1;

(C)
   for(i=1;i<=n; i++)
      for(J=1;J<=n; J++)
         X=X+1;
In example (A), the statement (X=X+1) is not contained within any loop, so the number of
times it is executed is 1 (frequency count = 1).
In example (B), the statement is repeated (n) times.
In example (C), the statement is repeated (n²) times.
If we assume n = 10, these frequencies are 1, 10 and 100; this corresponds to riding a bicycle, riding
in a car, and boarding a plane, compared with the distance each vehicle covers per unit of time
(an hour, for example). Here the expression "order of magnitude of an algorithm" means
the frequency of execution of its statements, and the "order of magnitude of a statement" is the sum of
all the iteration terms under which it executes, giving a pre-determined assessment of the execution time.
The example above shows that algorithm (A) executes faster than algorithm (B),
which in turn is faster than (C).
Example: Given a matrix (A) of dimensions (n * n), we are required to sum each row, store the row sums
in another array (Sum), and then calculate the grand total of all the elements of the matrix (A).
The problem can be solved in two ways.
The first way:
Grandtotal=0;
for(i=1;i<=n; i++)
{
   Sum[i]=0;
   for(j=1;j<=n; j++)
   {
      Sum[i]=Sum[i]+A[i][j];
      Grandtotal= Grandtotal+ A[i][j];
   }
}
The second way:
Grandtotal=0;
for(i=1;i<=n; i++)
{
   Sum[i]=0;
   for(j=1;j<=n; j++)
      Sum[i]=Sum[i]+A[i][j];
   Grandtotal= Grandtotal+ Sum[i];
}
We note here that the number of additions in the first algorithm (2 additions in the inner loop,
repeated N·N times, i.e. 2N²) is greater than the number in the second algorithm (N² additions in the
inner loop plus N in the outer loop, i.e. N² + N), so the first takes longer than the second.
The following discussion considers the various statement types that can appear in a program and states
the complexity of each in terms of the number of steps [10]:
• Declarative statements: these count as zero steps, as they are not executable.
• Comments: these count as zero steps, as they are not executable.
• Expression and assignment statements: most expressions have a step count of one. The
exceptions are expressions that contain function calls, in which case we need to determine the
cost of invoking the function.
• Iteration statements: this class of statements includes the for, while and do…while
statements. We consider the step counts only for the control part of these statements. The
step count for each execution of the control part of a for statement is one.
• Switch statement: this statement consists of a header followed by one or more sets of
condition and statement pairs. The header (the switch expression) is given a cost equal to
that assignable to the expression. The cost of each following condition-statement pair is the
cost of its condition plus that of all preceding conditions plus that of its statement.
• If-then-else statement: it consists of three parts:
If (exp)
Statement1 block of statements
else Statement2 block of statements
Each part is assigned the number of steps corresponding to <exp>, <Statement1> and
<Statement2>, respectively. Note that if the else clause is absent, then no cost is assigned to it.
• Function invocation: all invocations of procedures and functions count as one step, unless the
invocation involves value parameters whose size depends on the instance characteristics.
• Function statements: these count as zero steps, as their cost has already been assigned to the
invoking statement.
4.2 Reliability
There is no doubt that the reliability of a computer program is an important element of its overall
quality. If a program repeatedly and frequently fails to perform, it matters little whether other software
quality factors are acceptable.
Software reliability, unlike many other quality factors, can be measured directly and estimated using
historical and developmental data. Software reliability is defined in statistical terms as "the probability
of failure-free operation of a computer program in a specified environment for a specified time". To
illustrate, program X is estimated to have a reliability of 0.96 over eight elapsed processing hours. In
other words, if program X were to be executed 100 times and require eight hours of elapsed
processing time (execution time), it would be likely to operate correctly (without failure) 96 times out of 100.
Whenever software reliability is discussed, a pivotal question arises: What is meant by the term
failure? In the context of any discussion of software quality and reliability, failure is non-
conformance to software requirements. Yet, even within this definition, there are gradations. Failures
can be only annoying or catastrophic. One failure can be corrected within seconds while another
requires weeks or even months to correct. Complicating the issue even further, the correction of one
failure may in fact result in the introduction of other errors that ultimately result in other failures [11].
4.3 Modularity
Modular programming is subdividing your program into separate subprograms such as functions and
subroutines. For example, if your program needs initial and boundary conditions, use subroutines to
set them. Then if someone else wants to compute a different solution using your program, only these
subroutines need to be changed. This is a lot easier than having to read through a program line by line,
trying to figure out what each line is supposed to do and whether it needs to be changed. And in ten
years from now, you yourself will probably no longer remember how the program worked.
Subprograms make your actual program shorter, hence easier to read and understand. Further, the
arguments show exactly what information a subprogram is using. That makes it easier to figure out
whether it needs to be changed when you are modifying your program. Forgetting to change all
occurrences of a variable is a very common source of errors. Subprograms make it simpler to figure
out how the program operates. If the boundary conditions are implemented using a subroutine, your
program can be searched for this subroutine to find all places where the boundary conditions are used.
This might include some unexpected places, such as in the output, or in performing a numerical check
on the overall accuracy of the program.
Subprograms reduce the likelihood of bugs. Because subprograms can use local variables, there is less
chance that the code in the subroutine interferes with that of the program itself, or with that in other
subprograms. The smaller size of the individual modules also makes it easier to understand the global
effects of changing a variable [12].
4.4 Documentation
The system test is also concerned with the accuracy of the user documentation. The principal way of
accomplishing this is to use the documentation to determine the representation of the prior system test
cases. That is, once a particular stress case is devised, the documentation is used as a guide for
writing the actual test case. The user documentation should also be the subject of an inspection
(similar to the concept of the code inspection), checking it for accuracy and clarity. Any examples
illustrated in the documentation should be encoded into test cases and fed to the program [13].
V. PROPOSED ALGORITHMS FOR EVALUATION
To determine whether a software product has quality, there must be assessment standards that
describe the product. In this software evaluation, mathematical models were used
which are easy to measure, and on that basis the values of four measures of the software were evaluated
(time complexity, reliability, modularity, documentation). Each measure is explained
separately below.
5.1. The Time Complexity Measure
The measure of time is the performance, or execution, time. Measuring time adopts several measures
of a program's execution time, as described in Section IV, on which the evaluation is founded.
The following algorithm describes the steps for finding the time:
Algorithm 1 Time complexity measures of program.
Input: Text file of the program.
Output: Report of the Time complexity program.
___________________________________________
Step1: - Read Text file.
Step2: - Determine (Len Length of text file).
• Let t is two-dimension array
• k =0, is pointer on current state
Step3: - for (i =1; i < Len; i++).
Step4: - Determine (aa Token).
Step5: - Check aa
   - Case aa = "}" then
      - if (t[0, k] == 1) then
         - t[1, k - 1] = t[1, k - 1] + t[1, k];
      - else if (t[0, k] == 2) then
         - t[1, k - 1] = t[1, k - 1] + (t[1, k] * n);
      - else
         - k = k + 1;
         - t[1, k] = 0;
      - k = k - 1;
   - Case aa = "for" OR aa = "while" OR aa = "do" then
      - k = k + 1;
      - t[0, k] = 2;
      - while (aa != "{")
         - i = i + 1;
      - i = i - 1;
   - Case aa = "{" then
      - if (t[0, k] != 2) then
         - k = k + 1; t[0, k] = 1;
   - Case aa = ";" then
      - t[1, k] = t[1, k] + 1;
• End
Example:
#include <iostream.h> // sequence is 0, 1, 1, 2, 3, 5, 8, 13, ...
int fib (int n)
{
    int pred, result, temp;
    pred = 1;
    result = 0;
    while (n > 0)
    {
        temp = pred + result;
        result = pred;
        pred = temp;
        n = n - 1;
    }
    return (result);
}
int main ()
{
    int n;
    cout << "Enter a natural number: ";
    cin >> n;
    while (n < 0)
    {
        cout << "Please re-enter: ";
        cin >> n;
    }
    cout << "fib(" << n << ") = " << fib(n) << endl;
    return (0);
}
It is easy to see that in the loop the value of the step count will increase by a total of 6n. If the count is
zero to start with, then it will be 6n+9 on termination, so each invocation executes a total of 6n+9
steps.
5.2. The Reliability Measure
To obtain reliable software, the number of errors in the program must be brought to the lowest
possible value, as must the negative consequences resulting from them. The first attempts to build
quality standards for software concerned the reliability of the software product. The reason is the
clarity of this attribute and the ease of measuring it, since it relates to the probability of the failures
and faults that occur in a software system during prolonged effective operation. The reliability
measure here is based on two types of mathematical error, division by zero and a negative value
under the root; the following algorithm shows the reliability measurement:
Algorithm 2 Reliability measures of program.
Input: Text file of the program.
Output: Report of the reliability program.
_______________________________________________
Step1: - Read Text file.
Step2: - Determine ( Len Length of text file).
Step3: - for ( i =1; i < Len; i++ ).
Step4: - Determine (aa Mathematical expression).
Step5: - Check aa
   - Case aa[i] = "/" or aa[i] = "%" then      // "/" is division and "%" is mod in C++
      - n = i + 1
      - while (aa[n] != ";" AND aa[n] != ")")
         - if (aa[n] != "0") then
            - n = n + 1
      - endwhile
Step6: - If a zero divisor was found then
      " The program is not reliable ".
   Else
      " The program is reliable ".
Step7: - Case aa[i] = "sqrt" then              // "sqrt" is the root function in C++
      - n = i + 1; while (aa[n] != ")")
         - if (aa[n] != "-") then
            - n = n + 1
      - endwhile
      - Repeat Step6 (for a negative value under the root).
• End.
5.3. The Modularity Measure
Most programs consist of a number of functions which are called when they are needed. A function
is a set of instructions that can be called from anywhere in the main function to perform a specific
task. Sub-functions (subprograms) have the same general structure as the main function in terms of
defining variables and writing instructions. Among the benefits of sub-functions is that they simplify
the problem to be solved by dividing it into partial tasks (sub-functions). In some cases, a program
repeats a section a number of times, so subprograms (sub-functions) help reduce these repetitions by
calling that section each time in one step only. The evaluation adopted here is based on the number of
functions present, as explained in the following algorithm:
Algorithm 3 Modularity measures of program.
Input: Text file of the program.
Output: Report of the modularity program.
_______________________________________________
Step1: - Read Text file.
Step2: - Determine ( Len Length of text file).
Step3: - for ( i =1; i < Len; i++ )
   - Count = 0, the number of reserved expressions.
Step4: - Determine (aa Expressions reserved).
Step5: - Check aa
   - if ( aa = " void " or aa = " return " ) then
      - Count = Count + 1
   - EndIf
Step6: - if (Count = 0) then
      " The program is not modular ".
   Else if (Count = 1) then
      " The program is medium modularity ".
   Else if (Count >= 2) then
      " The program is high modularity ".
• End.
5.4. The Documentation Measure
Documentation is an important stage of building a software system. It documents the internal
construction of the program for the purposes of maintenance and development. Without documentation,
the software factory is no longer able to follow up its maintenance and development, which
increases the financial cost and time of the program beyond expected limits; in other words, it results
in a failure to build software with high quality and a long life cycle.
There is more than one way to document. For example, in programmer documentation it is possible
to add comments within the source code; in analyst documentation, personal documents explain the
program cycle; and in laboratory system documentation, the points of imbalance in the program are
recorded. In this work, programmer documentation is adopted. The following algorithm describes the
documentation ratio:
Algorithm 4 Documentation measures of program.
Input: Text file of the program.
Output: Report of the Documentation program.
_______________________________________________
Step1: - Read Text file.
Step2: - Determine ( Len Length of text file).
Step3: - for ( i =1; i < Len; i++ ).
   - Count = 0, the number of symbolic expressions.
Step4: - Determine ( aa Symbolic expressions ).
Step5: - if (aa[i] = "//" or aa[i] = "/*") then      // "//" and "/*" denote comments in C++
      Count = Count + 1
   EndIf
Step6: - if (Count = 0) then
      " The program is not documented ".
   Else if (Count = 1) then
      " The program is medium documentation ".
   Else if (Count >= 2) then
      " The program is high documentation ".
• End.
VI. EXPERIMENTAL RESULTS
The proposed evaluation algorithms were implemented on programs written in the C++ language,
stored in text files (Appendix A), using mathematical analysis of these programs. In this paper, six
programs were evaluated using the four measures (time complexity, reliability, modularity,
documentation). Appendix A includes some sections of the code used to implement the algorithms.
Table 1 shows the evaluation of the software in accordance with the QA standards adopted for
each program (time complexity, reliability, modularity and documentation); the evaluation was found
using mathematical models.
After implementation, prog.2 had the highest time complexity and prog.6 the lowest. It is clear that
prog.4 suffers from an arithmetic error and consequently appears as not reliable.
Table 1. Evaluation of the software

Name of program   Time Complexity   Reliability    Modularity          Documentation
prog.1            333               Reliable       High modularity     Highly documented
prog.2            2872              Reliable       High modularity     Highly documented
prog.3            49                Reliable       High modularity     Medium documented
prog.4            21                Not reliable   High modularity     Highly documented
prog.5            39                Reliable       Medium modularity   Highly documented
prog.6            14                Reliable       Medium modularity   Medium documented
VII. CONCLUSIONS
The study aimed to shed light on the concept of TQM in the evaluation of software by discussing the
different intellectual visions that deal with overall quality standards and models. Mathematical
analysis was used for the evaluation, depending on a standard model; this model adopts four measures
of evaluation. For the time an algorithm takes, there are no simple rules to determine it exactly, so the
execution time was estimated using mathematical techniques after identifying a number of important
factors.
Reliability depends on the conceptual correctness of algorithms and the minimization of programming
mistakes, such as logic errors (for example, division by zero or off-by-one errors). The modular
benefit of using sub-functions is to simplify the problem to be solved by dividing it into partial tasks
(sub-functions). Documentation is used as a guide for writing the actual test cases, checking them for
accuracy and clarity. In the future, other linear models can be used to evaluate software, and more
complex software can be handled.
Appendix A:
Multiplying a vector by a square matrix many times
#include <iostream>
#include <iomanip>
#include <fstream>
#include <cmath>
using namespace std;
void mat_vec (int, double[][50], double[], double[]);
int main()
{
    int n, i, j, norm;
    double b[50], c[50], a[50][50];
    cout << endl;
    cout << " Normalize the vector after each projection?" << endl;
    cout << " Enter 1 for yes, 0 for no" << endl;
    cout << " -------------------------" << endl;
    cin >> norm;
    //--- Read the matrix and the vector:
    ifstream input_data;
    input_data.open("matrix_v.dat");
    input_data >> n;
.
.
.
.
.
.
    for (i=1;i<=n;i++)
        b[i] = c[i];
    if (norm == 1)
    {
        double rnorm = 0;
        for (i=1;i<=n;i++)
            rnorm = rnorm + b[i]*b[i];
        rnorm = sqrt(rnorm);
        for (i=1;i<=n;i++)
            b[i] = b[i]/rnorm;
    }
    cout << " Projected vector at stage: " << icount;
    cout << "\n\n";
    for (i=1;i<=n;i++)
    {
        cout << setprecision(5) << setw(10);
        cout << b[i] << endl;
    }
    icount = icount + 1;
    cout << " One more projection? " << endl;
    cin >> more;
    return 0;
}
/* -------------------------------------------------
   function mat_vec performs matrix-vector
   multiplication: c_i = a_ij * b_j
   ------------------------------------------------- */
void mat_vec (int n, double a[][50], double b[], double c[])
{
    int i, j;
    for (i=1;i<=n;i++)
    {
        c[i] = 0;
        for (j=1;j<=n;j++)
            c[i] = c[i] + a[i][j]*b[j];
    }
}
Time complexity: 333
Reliability: The program is reliable
Modularity: The program is highly modular
Documentation: The program is highly documented
for (int i = 0; i < len; i++)
{
    if (aa.Substring(i, 1) == "}")
    {
        if (t[0, k] == 1)
            t[1, k - 1] = t[1, k - 1] + t[1, k];
        else if (t[0, k] == 2)
            t[1, k - 1] = t[1, k - 1] + (t[1, k] * n);
        else
        {
            k = k + 1;
            t[1, k] = 0;
        }
        k = k - 1;
    }
    else
    if ((aa.Substring(i, 3) == "for") || (aa.Substring(i, 5) == "while") || (aa.Substring(i, 2) == "do"))
    {
        k = k + 1;
        t[0, k] = 2;
        while (aa.Substring(i, 1) != "{")
            i = i + 1;
    }
}
Authors
Murtadha Mohammad Hamad received his MSc degree in computer science from the
University of Baghdad, Iraq, in 1991, and his PhD degree in computer science from the
University of Technology in 2004; he received the title of Assistant Professor in 2005.
Currently, he is the Dean of the College of Computer, University of Anbar. His research
interests include data warehousing, software engineering, and distributed databases.
Shumos Taha Hammadi graduated from the Department of Computer Science, College of
Computer, University of Anbar, Iraq. Currently, she is a master's student in the final phase
of her research.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
248 Vol. 1, Issue 5, pp. 248-253
NEAR SET AN APPROACH AHEAD TO ROUGH SET: AN
OVERVIEW
Kavita R Singh, Shivanshu Singh
Department of Computer Technology, YCCE, Nagpur (MS), India.
ABSTRACT
Rough Set Theory is a fairly new concept that has found application in various soft computing techniques. It
offers a set-theoretic approach to managing uncertainty in data systems. It has been used for the discovery of
data dependencies, the assessment of the importance of features, the discovery of patterns in sample data,
feature space dimensionality reduction, and the classification of objects. Objects can be classified by means of
their attributes when considered in the context of an approximation space. Near Sets represent a generalization
of Rough Sets and present a nearness approach to classifying objects. In this paper we present an overview of
the basics of Rough Sets and Near Sets, along with their application to the face recognition problem.
KEYWORDS: Rough Sets, Near Sets.
I. INTRODUCTION
Rough set theory [2, 3, 4, 5], introduced by Z. Pawlak in the early 1980s, is one of the newer
approaches to soft computing and finds wide application today. Rough Set Theory manages the
vagueness in a data system and has been successfully used to formulate rules. These rules can be used
to discover the hidden patterns in data. In addition, Rough Set methods can be used to classify new
samples based on what is already known. Unlike other computational intelligence techniques, Rough
Set analysis requires no external parameters and uses only the information present in the given data.
Briefly, Pawlak suggested that, with Rough Sets used as a classifier, objects can be classified by
means of their attributes [1]. By way of extension of Pawlak’s approach to classification, Near Set
theory is an approach to solving the problem of what it means for objects with common features to be
near each other qualitatively but not necessarily spatially.
Near Sets present a nearness approach to classifying objects. The approach harkens back to the
original 1981 paper by Z. Pawlak, who pointed out that exact classification of objects is often
impossible [1]. Thus Near Sets represent a generalization of the approach to the classification of
objects introduced by Z. Pawlak.
From a Rough Sets point of view, the main focus is on the approximation of sets with non-empty
boundaries. In contrast, in the Near Set approach to set approximation, the focus is on the discovery of
Near Sets in the case where the approximation boundary is either non-empty or empty. Object
recognition problems, especially in images [10, 11 and 22], using the nearness of objects have
motivated the introduction of Near Sets.
In this paper we provide an overview of Rough Sets and of the general theory of the nearness of
objects in the Near Set approach to set approximation.
The paper is organized as follows. Section 2 presents an overview of Rough Set theory. Section 3
presents an overview of the concept of Near Sets. Section 4 describes the use of both Rough Set
theory and Near Sets in feature selection. Section 5 outlines the application of the set approximation
approach from Rough Sets and Near Sets to face recognition, followed by the conclusion.
II. ROUGH SETS
An approach put forth by the mathematician Z. Pawlak in the beginning of the eighties, Rough Sets
have emerged as a mathematical tool to treat vague and imprecise data. Rough Set Theory is similar to
Fuzzy Set Theory in many respects. However, uncertainty and imprecision are expressed by the
boundary region of a set, as opposed to partial membership as in Fuzzy Set Theory. The Rough Set
concept is generally defined by means of interior and closure topological operations known as
approximations [1].
Fuzzy Sets are defined by employing a fuzzy membership function, which involves advanced
mathematical structures, numbers and functions. Rough Sets are defined by topological operations
called approximations, so the definition does not require advanced mathematical concepts.
Moreover, unlike other computational intelligence techniques, Rough Set analysis requires no external
parameters and uses only the information present in the given data. An attractive feature of Rough
Set theory is that it can indicate whether the data is complete, based on the data itself. If the data
is incomplete, it suggests that more information about the objects is required. On the other hand, if the
data is complete, Rough Sets can determine whether there are any redundancies in the data and find
the minimum data needed for classification. This property of Rough Sets is very important for
applications where domain knowledge is very limited or data collection is expensive or laborious,
since it ensures that the data collected is just sufficient to build a good classification model without
sacrificing accuracy and without wasting time and effort gathering extra information about the
objects [3, 4 and 5].
Uncertainty and imprecision are expressed by the boundary region of a set. The theory deals with the
approximation of an arbitrary subset of a universe by two definable or observable subsets called the
lower and upper approximations of a Rough Set. By using the concepts of lower and upper
approximations in Rough Set theory, the knowledge hidden in information systems can be explored
and correct decisions can be derived.
In RST, information about the real world is expressed in the form of an information table. An
information table can be represented as a pair IS = (U, A), where U is a non-empty finite set of
objects called the universe and A is a non-empty finite set of attributes such that the information
function a : U → V_a is defined for every a ∈ A. The set V_a is called the value set of a. Furthermore,
a decision system is any information table of the form DS = (U, A ∪ {d}), where d ∉ A is a decision
attribute. For every set of attributes B ⊆ A, an indiscernibility relation IND(B) is defined in the
following way: two objects x and y are indiscernible by the set of attributes B ⊆ A if a(x) = a(y) for
every a ∈ B. An equivalence class of IND(B) is called an elementary set in B because it represents the
smallest discernible group of objects. For any element x of U, the equivalence class of x in the relation
IND(B) is represented as [x]_B. The family of all equivalence classes, which partitions the universe
for a given B, is denoted by U/B. The partitions induced by an equivalence relation can be used to
build new subsets of the universe. The construction of equivalence classes is the first step in
classification with Rough Sets.
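As a concrete illustration, the partition U/B induced by IND(B) can be computed directly from an attribute table. The sketch below uses a toy decision table; the object names, attributes and values are ours, not data from the paper:

```python
# Sketch of the indiscernibility relation IND(B): objects x, y fall into the
# same elementary set iff a(x) == a(y) for every attribute a in B.
def equivalence_classes(table, B):
    """Return the partition U/B of the table's objects as a list of sets."""
    classes = {}
    for x, row in table.items():
        signature = tuple(row[a] for a in B)      # the B-attribute values of x
        classes.setdefault(signature, set()).add(x)
    return list(classes.values())

# Hypothetical information table: objects u1..u4 with attributes colour, size.
U = {
    "u1": {"colour": "red",  "size": "big"},
    "u2": {"colour": "red",  "size": "big"},
    "u3": {"colour": "blue", "size": "big"},
    "u4": {"colour": "blue", "size": "small"},
}

print(equivalence_classes(U, ["colour"]))          # two elementary sets
print(equivalence_classes(U, ["colour", "size"]))  # three elementary sets
```

Adding attributes to B can only refine the partition: here {colour} yields two classes, while {colour, size} splits {u3, u4} apart.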
Rough Membership Function
Rough sets can also be defined by rough membership functions instead of approximations. A rough
membership function (rmf) makes it possible to measure the degree to which any object with given
attribute values belongs to a given decision set X. Let B ⊆ A and let X be a set of observations of
interest. The degree of overlap between X and the class [x]_B containing x can be quantified with an
rmf given by:

μ_B^X : U → [0, 1]    (1)

μ_B^X(x) = |[x]_B ∩ X| / |[x]_B|    (2)

where |·| denotes the cardinality of a set. The rough membership value μ_B^X(x) may be interpreted as
the conditional probability that an arbitrary element x belongs to X given B. The decision set X is
called a generating set of the rough membership function μ_B^X. Thus the rough membership function
quantifies the degree of relative overlap between the decision set X and the equivalence class to which
x belongs.
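Read as a conditional probability, the rmf of (2) is immediate to compute once the partition U/B is available. A minimal sketch with an invented toy partition and decision set:

```python
# Sketch of the rough membership function (2); the partition and decision
# set below are toy values of ours, not data from the paper.
def rough_membership(x, X, classes):
    """mu_B^X(x) = |[x]_B intersect X| / |[x]_B|; classes is the partition U/B."""
    block = next(c for c in classes if x in c)    # the equivalence class [x]_B
    return len(block & X) / len(block)

classes = [{"u1", "u2"}, {"u3", "u4"}]            # U/B for some attribute set B
X = {"u1", "u3"}                                  # hypothetical decision set
print(rough_membership("u1", X, classes))         # 0.5: half of [u1]_B lies in X
```

A value of 1 means the class lies entirely inside X (lower approximation), a value in (0, 1) places it in the boundary region.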
III. NEAR SET
Near Set theory is a special theory about the nearness of objects. It was first presented by James F.
Peters in 2006 and was formally defined in 2007. It represents a generalization of the approach to the
classification of objects introduced by Z. Pawlak during the early 1980s. Like Fuzzy Sets and Rough
Sets, which complement rather than contradict each other, Near Sets and Rough Sets are two sides of
the same coin. The various domains where Near Sets have been successfully applied are: feature
selection [14], object recognition in images [11 and 24], image processing [10], granular computing
[13 and 19], face recognition [20 and 21] and various forms of machine learning [1, 12, 13, 16, 15, 17
and 18].
In Near Set theory, each object is described by a list of feature values. The word feature corresponds
to an observable property of physical objects in our environment. For instance, for a feature like the
nose of a human face, the feature values could be nose length or nose width. By comparing these lists
of feature values, the similarity between objects can be determined, and similar objects can be grouped
together in a set, called a Near Set. Thus Near Set theory provides a formal basis for the observation,
comparison and recognition/classification of objects. The nearness of objects can be approximated
using Near Sets.
Approximation can be considered in the context of information granules (neighbourhoods). Any
approximation space is a tuple given in equation (3):

AS = (U, ℱ, ν)    (3)

where ℱ is a covering of the finite universe of objects U, i.e., ⋃ℱ = U, and ν : P(U) × P(U) → [0, 1]
maps a pair of sets to a number in [0, 1] representing the degree of overlap between the sets, P(U)
being the power set of U [4]. For a given approximation space AS = (U, ℱ, ν) and any X ⊆ U, the
ℱ-lower approximation of X and the ℱ-upper approximation of X are defined by (4) and (5),
respectively:

ℱ∗X = ⋃ {F ∈ ℱ : ν(X, F) = 1}    (4)

ℱ*X = ⋃ {F ∈ ℱ : ν(X, F) > 0}    (5)
The lower approximation of a set X is the set of all objects which can be classified with certainty as
belonging to X. The upper approximation of a set X is the set of all objects which can possibly be
classified as belonging to X. The lower and upper approximations of a set lead naturally to the notion
of the boundary region of an approximation. Thus, the lower and upper approximations result in an
increase in the number of neighbourhoods used to assess the nearness of a classification [2].
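The lower and upper approximations of (4) and (5) can be sketched over a covering, taking ν to be a standard rough-inclusion overlap; the covering, set names and the choice of ν below are our illustrative assumptions:

```python
# Sketch of (4) and (5): union of the covering sets F that nu judges to be
# fully (lower) or partially (upper) overlapping with X.
def nu(X, Y):
    """Degree of overlap: |X & Y| / |Y|, taken as 1.0 when Y is empty."""
    return 1.0 if not Y else len(X & Y) / len(Y)

def lower_approx(cover, X):
    """Union of covering sets F with nu(X, F) = 1, i.e. F entirely inside X."""
    return set().union(*([F for F in cover if nu(X, F) == 1.0] or [set()]))

def upper_approx(cover, X):
    """Union of covering sets F with nu(X, F) > 0, i.e. F meeting X at all."""
    return set().union(*([F for F in cover if nu(X, F) > 0.0] or [set()]))

cover = [{"u1", "u2"}, {"u2", "u3"}, {"u4"}]   # a toy covering of U
X = {"u1", "u3"}
print(lower_approx(cover, X))   # empty: no covering set lies wholly inside X
print(upper_approx(cover, X))   # every covering set that meets X contributes
```

With this X the lower approximation is empty while the upper is not, so the entire upper approximation is boundary region, the situation Near Sets are designed to handle.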
Overlap Function
Earlier we saw the rough membership function, used in the context of Rough Sets to measure the
degree of overlap. In Near Sets it is likewise possible to formulate a basis for measuring the degree of
overlap between Near Sets. Let X and Y be defined in terms of a family of neighbourhoods Nr(B).
There are two forms of the overlap function:

ν_Nr(B)(X, Y) = |X ∩ Y| / |Y| if Y ≠ ∅, and 1 otherwise.    (6)
ν_Nr(B)(X, Y) = |X ∩ Y| / |X| if X ≠ ∅, and 1 otherwise.    (7)
Coverage ν_Nr(B)(X, Y) is used in the case where it is known that X ⊆ Y. For example, coverage can
be used to measure the degree to which a class [x]_Br is covered by the lower approximation
Nr(B)∗X, as in

ν_Nr(B)([x]_Br, Nr(B)∗X) = |[x]_Br ∩ Nr(B)∗X| / |Nr(B)∗X|    (8)

which is called the lower coverage.
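The lower coverage of (8) is a single set ratio; the sketch below uses invented toy sets for the class and the lower approximation:

```python
# Sketch of the lower coverage (8), using the overlap form of (6); the class
# and lower approximation below are invented toy sets, not data from the paper.
def coverage(X, Y):
    """nu(X, Y) = |X & Y| / |Y|, taken as 1.0 when Y is empty."""
    return 1.0 if not Y else len(X & Y) / len(Y)

cls = {"u2", "u3"}                  # a hypothetical class [x]_Br
lower = {"u1", "u2", "u3"}          # a hypothetical lower approximation Nr(B)*X
print(coverage(cls, lower))         # 2 of the 3 lower-approximation objects
```

The value 2/3 says two thirds of the lower approximation is shared with the class; a value of 1 would mean the lower approximation lies entirely within it.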
IV. FEATURE SELECTION
Feature selection is one of the practical outcomes of this family of soft computing tools. In Rough
Sets the task of feature selection requires choosing the smallest subset of conditional features such
that the resulting reduced dataset remains consistent with respect to the decision feature. The
reduction of attributes is achieved by comparing the equivalence relations generated by sets of
attributes. Attributes are removed so that the reduced set provides the same predictive capacity for the
decision feature as the original. A reduct is defined as a subset of minimal cardinality Rmin of the
conditional attribute set C such that

R = {X : X ⊆ C, γ_X(D) = γ_C(D)}    (9)

Rmin = {X : X ∈ R, ∀Y ∈ R, |X| ≤ |Y|}    (10)

CORE(Rmin) = ⋂ Rmin    (11)

The intersection of all the sets in Rmin is called the core, the elements of which are those attributes
that cannot be eliminated from the set without changing the original classification of the dataset.
Clearly, each object can then be uniquely classified according to the attribute values remaining.
Feature selection is also one of the important aspects of the Near Set approach. Here each partition
ξ_B contains classes defined by the relation ∼_B. The classes in each partition with information
content greater than or equal to some threshold th are of interest. The basic idea is to identify probe
functions that lead to partitions with the highest information content, which occurs in partitions with
high numbers of classes. In effect, as the number of classes in a partition increases, there is a
corresponding increase in the information content of the partition.
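The reduct equations (9)-(11) can be sketched by brute-force enumeration, taking γ as the usual dependency degree (the fraction of objects whose B-class is consistent with the decision attribute). The decision table and all names below are invented for illustration:

```python
# Brute-force sketch of (9)-(11): enumerate attribute subsets by size, keep
# the smallest ones whose dependency degree matches that of the full set C.
from itertools import combinations

def partition(table, B):
    blocks = {}
    for x, row in table.items():
        blocks.setdefault(tuple(row[a] for a in B), set()).add(x)
    return list(blocks.values())

def gamma(table, B, d):
    """Dependency degree gamma_B(d): fraction of objects whose B-class maps
    to a single value of the decision attribute d (the positive region)."""
    pos = sum(len(blk) for blk in partition(table, B)
              if len({table[x][d] for x in blk}) == 1)
    return pos / len(table)

def minimal_reducts(table, C, d):
    target = gamma(table, C, d)
    for r in range(1, len(C) + 1):           # smallest size first: (10)
        found = [set(B) for B in combinations(C, r)
                 if gamma(table, list(B), d) == target]
        if found:
            return found                     # all reducts of minimal size
    return []

# Toy decision table; here the decision d happens to depend on c alone.
U = {
    "u1": {"a": 0, "b": 0, "c": 0, "d": "no"},
    "u2": {"a": 0, "b": 1, "c": 1, "d": "yes"},
    "u3": {"a": 1, "b": 0, "c": 1, "d": "yes"},
    "u4": {"a": 1, "b": 1, "c": 0, "d": "no"},
}
rmin = minimal_reducts(U, ["a", "b", "c"], "d")
core = set.intersection(*rmin)               # CORE(Rmin), equation (11)
print(rmin, core)
```

On this table the single attribute c already preserves γ, so Rmin = [{c}] and the core is {c}; real reduct computation is NP-hard, and this exhaustive search is only viable for small attribute sets.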
V. FACE RECOGNITION WITH ROUGH SET AND NEAR SET
Rough Set theory has been employed by K. Singh et al. [20] for face recognition using only
geometrical features. The ADNN rough neural network employed in [20] is built from approximation
and decider neurons using the concepts of Rough Sets.
The literature shows that Rough Sets have been successfully used with other theories to build hybrid
systems. Yun et al. [6] used rough set-support vector machine integration and developed the Improved
Support Vector Machine (ISVM) algorithm to classify digital mammography images, where Rough
Sets are applied to reduce the original feature sets and the support vector machine is used to classify
the reduced information.
Based on geometric features and appearance features, a few works have been done on facial
expression recognition using Rough Sets and support vector machines. Chen et al. [7] proposed a
novel approach based on Rough Set theory and SVM, considering only geometric features.
Later, S. Gupta et al. [21] extended the ADNN [20] for face recognition, with Near Sets used for
facial feature selection. The algorithm is used to find a partition selection and then to select the best
features, which can be fed to the SVM classifier. Using Near Sets, the authors show how the chosen
features can affect the accuracy of a face recognition system. Results show that the number of support
vectors and the margin are maximum when the feature with the largest average near coverage (ν̄) is
chosen for face recognition. It has also been shown that better recognition accuracy can be achieved
with nose width as the selected feature [21].
VI. CONCLUSION
An overview of different approaches to dealing with uncertainty has been provided in this paper.
While Rough Sets provide a powerful tool for classifying objects by means of their attributes, Near
Sets present a nearness approach to classifying objects. We have also seen how feature selection can
be achieved with these two approaches. Both theories have found rapidly increasing application in
many areas. We explored the implementation of the two approaches in a face recognition system.
Both approaches will find more applications in intelligent systems. On this basis we present a study
that helps the reader understand and clearly differentiate between the two approaches.
REFERENCES
[1]. Pawlak, Z.: Classification of objects by means of attributes, Research Report PAS 429, Institute of
Computer Science, Polish Academy of Sciences, ISSN 138-0648, January (1981).
[2]. Z. Pawlak, “Rough sets”, International J. Comp. Inform. Science, vol. 11, pp.341-356, 1982.
[3]. Z. Pawlak,”Rough sets – Theoretical aspects of reasoning about data”. Kluwer, 1991.
[4]. Z. Pawlak, J. Grzymala-Busse, R. Slowinski and W. Ziarko, ”Rough Sets”. Communications of the
ACM, vol.38, no.11, pp.88-95, 1995.
[5]. L. Polkowski, “Rough Sets: Mathematical Foundations”. Physica-Verlag, 2003.
[6]. Y. Jiang, Z. Li, L. Zhang, P. Sun, “An Improved SVM Classifier for Medical Image Classification”, in
M. Kryszkiewicz et al., Eds., Int. Conf. on Rough Sets and Emerging Intelligent Systems Paradigms,
LNAI, vol. 4585, pp. 764-773, 2007.
[7]. P. Chen, G. Wang, Y. Yang and J. Zhou, “Facial Expression Recognition Based on Rough Set Theory
and SVM” Lecture Notes in Computer Science, Springer Berlin / Heidelberg, Rough Sets and
Knowledge Technology, Volume 4062/2006, pp.772-777, 2006
[8]. S.K. Pal, J.F. Peters, L. Polkowski, A. Skowron: “Rough Neural Computing. An Introduction”. In Pal
et al
[9]. J.F.Peters and Marcin S. Szczuka: “Rough Neurocomputing: A Survey of Basic Models of
Neurocomputation” J. J. Alpigni et al. (Eds): RSCTC 2002.
[10]. M. Borkowski, J.F. Peters, Matching 2D image segments with genetic algorithms and approximation
spaces, Transactions on Rough Sets, V, LNCS 4100 (2006), 63-101.
[11]. C. Henry, J.F. Peters, Image Pattern Recognition Using Approximation Spaces and Near Sets, In:
Proceedings of Eleventh International Conference on Rough Sets, Fuzzy Sets, Data Mining and
Granular Computing (RSFDGrC 2007), Joint Rough Set Symposium (JRS 2007), Lecture Notes in
Artificial Intelligence, 4482 (2007), 475-482.
[12]. D. Lockery, J.F. Peters, Robotic target tracking with approximation space-based feedback during
reinforcement learning, Springer Best Paper Award, In: Proceedings of Eleventh International
Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing (RSFDGrC 2007), Joint
Rough Set Symposium (JRS 2007), Lecture Notes in Artificial Intelligence, 2007, 483-490.
[13]. J.F. Peters, Perceptual granulation in ethology-based reinforcement learning. In: Pedrycz, W.,
Skowron, A., Kreinovich, V. (Eds.), Handbook on Granular Computing, Wiley, NY, 2007.
[14]. J.F. Peters, S. Ramanna, Feature Selection: Near Set Approach. In: Z.W. Ras, S. Tsumoto, D.A. Zighed
(Eds.), 3rd Int. Workshop on Mining Complex Data (MCD’08), ECML/PKDD-2007, LNAI, Springer
(2007), in press.
[15]. J.F. Peters, M. Borkowski, C. Henry, D. Lockery, D.S. Gunderson, Line-Crawling Bots that Inspect
Electric Power Transmission Line Equipment. In: Proc. Third Int. Conference on Autonomous Robots
and Agents (ICARA 2006), Palmerston North, New Zealand (2006), 39-44.
[16]. J.F. Peters, M. Borkowski, C. Henry, D. Lockery; “Monocular vision system that learns with
approximation spaces,” In: Ella, A., Lingras, P., Slezak, D., Suraj, Z. (Eds.), Rough Set Computing:
Toward Perception Based Computing, Idea Group Publishing, Hershey, 1-22,2006.
[17]. J.F. Peters, C. Henry; “Reinforcement learning with approximation spaces,” Fundamenta
Informaticae, 71, nos. 2-3,323-349, 2006.
[18]. J.F. Peters, S. Shahfar, S. Ramanna, T. Szturm; “ Biologically-inspired adaptive learning: A Near Set
approach,” In: Proc. Frontiers in the Convergence of Bioscience and Information Technologies
(FBIT07), IEEE, NJ, 11 October 2007, in press.
[19]. A. Skowron, J.F. Peters; “Rough granular computing,” In: Pedrycz, W.,Skowron, A., Kreinovich, V.
(Eds.), Handbook on Granular Computing, Wiley, NY, 2007.
[20]. K. R. Singh and M.M. Raghuwanshi; “Face Recognition with Rough-Neural Network: A Rule Based
Approach”, International Workshop on Machine Intelligence Research, pp. 123-129, 24 Jan 2009.
[21]. S. Gupta, K.S.Patnaik; “Enhancing performance of face recognition system by using Near Set approach
for selecting facial features” Journal of Theoretical and Applied Information Technology, pp.433-441,
2008.
[22]. J.F.Peters;”Near Sets. General Theory about Nearness of Objects” Applied Mathematical Sciences,
Vol. 11, no.53, 2609-2629, 2007.
Authors’ Biographies
Kavita R Singh received the B.E degree in Computer Technology in 2000 from RGCERT,
Nagpur University, Nagpur, India, the M.Tech. degree in Computer Science and Engineering from
Birla Institute of Technology, Ranchi, India, in 2007, and now she is pursuing her Ph. D. degree in
Computer Science from the Sardar Vallabhbhai National Institute of Technology, Surat, India. She
is currently a Lecturer in the Computer Technology Department, YCCE, Nagpur. Her current research
interests include data structures, databases, Rough Sets, image processing, pattern recognition and
Near Sets.
Shivanshu Singh is a final year student in Computer Technology at YCCE, Nagpur University,
Nagpur, India. He completed his higher education at St. Xavier’s High School, Ranchi, Jharkhand,
in 2006. His interests include programming, Rough Sets and Near Sets.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
254 Vol. 1, Issue 5, pp. 254-266
MEASUREMENT OF CARBONYL EMISSIONS FROM EXHAUST
OF ENGINES FUELLED USING BIODIESEL-ETHANOL-DIESEL
BLEND AND DEVELOPMENT OF A CATALYTIC CONVERTER
FOR THEIR MITIGATION ALONG WITH CO, HC’S AND NOX.
Abhishek B. Sahasrabudhe1, Sahil S. Notani2, Tejaswini M. Purohit3, Tushar U. Patil4
and Satishchandra V. Joshi5
1,2,3,4 Student, B.E., Deptt. of Mech. Engg., Vishwakarma Institute of Technology, Bibwewadi,
Pune, Maharashtra, India.
5 Prof., Deptt. of Mech. Engg., Vishwakarma Institute of Technology, Bibwewadi, Pune,
Maharashtra, India.
ABSTRACT
The research work is divided into three parts: (1) a portable sample collection technique, (2) development of a
suitable catalyst combination and (3) manufacture of a catalytic converter with a novel design. Taking into
account the hazards of aldehydes in ambient air and of carbonyl emission compounds, an effort has been made
to investigate the carbonyl compounds and measure their concentrations for diesel engines using Biodiesel-
Ethanol (BE)-diesel fuel. From this analysis, development of a potential catalytic converter is envisioned in
order to reduce these emissions along with carbon monoxide, hydrocarbons and nitrogen oxides. A catalytic
converter is specially manufactured for the reduction of carbonyl emissions from the BE-diesel fuelled engine,
and its comparison with, and integration into, conventional three-way catalysts is discussed. The retention time
of the raw sample peak is comparable to the retention time of the formaldehyde standard solution. A solitary
formaldehyde peak is obtained. Peaks of acetaldehyde and acetone are not obtained because their
concentrations are below the limit of detection at the given loading condition. The retention time of each
arrangement is close to that of the formaldehyde standard. It is observed that CO, HC and NOx conversion
efficiencies remain constant irrespective of combination with the specially designed ZrO2 catalyst. The
formaldehyde concentration obtained for one ZrO2 catalyst sample is significantly lower than in the raw
emissions. An added ZrO2 catalyst showed further reduction. Thus, with optimum loading and surface area
improvement methods, better results are achievable. The Pt-Rh catalyst shows better carbonyl reduction than
the Pd-Rh catalyst. However, each of the three-way catalysts is less efficient than the ZrO2 catalyst. The ZrO2
catalyst used in series with the Pt-Rh catalyst shows the highest percentage reduction in formaldehyde
concentration. The Pt-Rh catalyst pair is more effective in CO mitigation than the Pd-Rh pair. The percentage
reduction for HC and NOx is comparable for both. Pt-Rh also shows better carbonyl reduction ability. ZrO2 is
a better choice than noble metals in terms of availability and cost. Moreover, it features a selective nature
towards oxidation of aldehydes. Thus, Pt-Rh in combination with ZrO2 becomes a technologically effective and
economically viable choice.
KEYWORDS: Biodiesel-Ethanol-Diesel, carbonyl emissions, catalytic converter.
I. INTRODUCTION
The global energy crisis, increasing prices of conventional fuels and increasingly stringent pollution
norms have led to a greater focus on the development of alternative fuels. A study has shown (Jian-
wei, Shah, and Yun-shan, 2009) that, due to several benefits in terms of fuel economy, power output
and emissions, diesel engines dominate the fields of commercial transportation, construction and
agriculture [3]. Biodiesel-Ethanol-Diesel (BE-diesel) blends have been considered as potential
alternative fuels for diesel engines due to their renewability, environmental friendliness and energy
values comparable to fossil fuels. Studies have revealed (Panga et al., 2006b) that biodiesel can be
used successfully as an amphiphile to stabilize ethanol in diesel, and that the biodiesel-ethanol-diesel
(BE-diesel) blend fuel can remain stable well below sub-zero temperatures [1]. It was found that
(Panga et al., 2006c; Ren et al., 2008a) particulate matter (PM), total hydrocarbons (THC) and CO
were substantially reduced for BE-diesel in comparison with fossil diesel [1, 4, 15]. However, the
unregulated carbonyl emissions (aldehydes and ketones) due to the use of these blends have seldom
been investigated. Partial oxidation of hydrocarbons and alcohols in the blend is considered the
major cause of carbonyl emissions (Wagner and Wyszynski, 1996a) [5, 15].
Atmospheric carbonyls in urban areas are mainly emitted by vehicular exhaust (Panga et al., 2006d;
Ren et al., 2008b) [1, 4]. Some carbonyls, such as formaldehyde, acetaldehyde, acrolein and methyl
ethyl ketone, are mutagenic, and even carcinogenic, to the human body, as listed by the US
Environmental Protection Agency (EPA) (Roy, 2008) [6]. Furthermore, carbonyls play a critical role
in tropospheric chemistry. They are important precursors of free radicals (HOx), ozone, and
peroxyacetyl nitrates (PAN) (Panga et al., 2006e; Ren et al., 2008c) [1, 4]. Even short-term exposure
to aldehyde vapours causes eye and respiratory tract irritation in humans. Also, the report on the
health effects of aldehydes in ambient air (2000b) states that inhaled aldehydes are likely to cause
teratogenic effects [2]. Thus, mitigation of the carbonyls emitted from diesel engines fuelled using BE-
diesel blends is vital. To establish a reduction system, it is important to develop a technique for
effective measurement of carbonyl emissions.
Among all the engine hardware auxiliary systems, the catalytic converter is considered to have the
highest aldehyde reduction potential (Wagner, Wyszynski, 1996b) [5]. Relevant literature regarding
the use of catalytic converters for carbonyl emission reduction is reviewed here. The aldehyde
reduction potential of oxidation catalysts with gasoline fuelled engines is 97-100 per cent. Three-way
catalysts have nearly the same reduction potential (90-100 per cent). Aromatic aldehydes were
completely removed by the catalyst, while the highest inefficiency was shown for formaldehyde
(three-way catalyst and oxidation catalyst) (Cooper, 1992) [7]. According to Weaver (1989), the
catalytic converters applied to natural gas fuelled engines reduced formaldehyde by 97-98% [8]. As
per the study by Colden and Lipari (1987), for methanol fuelled engines the conversion efficiency has
been 98 per cent for a three-way catalyst (platinum-palladium-rhodium) and 96 per cent for an
oxidation catalyst (copper chrome base metal) [9]. Catalyst efficiencies for methanol fuelled diesel
engines were analyzed by McCabe et al. (1990a) [10]. The use of a platinum-palladium oxidation
catalyst resulted in an increase in aldehyde emissions instead of a reduction. The effect was attributed
to the oxidation of methanol to aldehyde caused by the platinum-palladium catalyst. The substitution
of palladium with silver resulted in a reduction of aldehyde emissions owing to the high selectivity of
the catalyst in converting formaldehyde to carbon dioxide and water (McCabe et al., 1990b) [10].
Conventional noble metal catalysts in general have an oxidative action on both alcohols and
carbonyls, and thus there is a strong likelihood of the exhaust leaving the catalytic converter still
containing significant aldehydes produced through partial oxidation of the alcohol (Wagner,
Wyszynski, 1996c) [5]. Thus the need for amendments in existing catalytic converter technology for
engines using alcohol fuels is realized. In the present paper, focus has been laid on the development
and testing of a catalytic converter which would enable the reduction of carbonyl emissions for a
diesel engine using BE-diesel fuel. Mitigation of CO, HCs and NOx has also been considered along
with aldehydes and ketones.
High performance liquid chromatography (HPLC) followed by spectroscopy has been used by several
researchers for the measurement of carbonyl compounds in engine exhaust [11]. A trapping method
using a bubbler and a suitable solvent was used by Lipari and Swarin (1982). Trapping methods using
dinitrophenylhydrazine (DNPH) cartridges have also been reported [12]. The cartridge is eluted using
a suitable solvent and the sample is available as a liquid to be injected for HPLC (Yacoub, 1999). In
the present research, a bag sampling method suggested by Roy (2008) [6] is used to trap carbonyls in
engine exhaust. The HPLC-UV technique is implemented for sample analysis.
II. CATALYTIC CONVERTER DEVELOPMENT
In the case of ethanol blended fuels, the oxidation of ethanol, the formation of aldehydes and the
oxidation of aldehydes proceed according to the following formulae:

C2H5OH (from BE-diesel) + O2 → Aldehydes + H2O    (1)

Aldehydes + O2 → H2O + CO2    (2)
Conventional noble metal catalysts progress reactions (1) and (2) side by side, but reaction (1) may
occur rapidly, leading to a large amount of aldehyde. It was realized that zirconium oxide (ZrO2) is
highly selective in its action and progresses the reaction according to formula (2) considerably faster
than reaction (1). Thus selective oxidation of aldehyde relative to alcohol is attained, and excellent
removal of aldehydes is achievable. When the proposed catalyst is placed downstream of a
conventional three-way catalyst, HC, CO and NOx can be controlled at the upstream side of the
arrangement and the carbonyls can be selectively controlled at the downstream side, as shown in
figure 1.0.
Fig 1.0 Schematic - catalytic converter arrangement.
Metallic substrates (stainless steel) having a volume of 150 cc are used for the zirconium oxide
catalyst. The following properties of ZrO2 are considered during catalytic converter development.
Table 1.0 Properties of ZrO2
Sintering temperature: >1000 °C
Melting point: ~2700 °C
Chemical inertness and corrosion resistance: up to ~2000 °C
Loading the powder at 200 g/litre of substrate volume is intended. Zirconium oxide powder, which
lacks the adhesiveness required for coating on the substrate, is mixed with Al2O3 and other suitable
binding agents and chemicals to prepare a slurry, which is then coated onto the mesh surface. The
slurry composition is as depicted in table 2.0.
Table 2.0 Slurry Composition
Component Name Function Weight %
Zirconium oxide powder Catalyst 30
Alumina Binding agent 20
Concentrated HNO3 Control over pH and viscosity 0.5
Reverse osmosis water Solvent 50
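As an illustrative worked example of the loading target above (my own arithmetic, not a procedure from the paper): the 200 g/l loading on the 150 cc substrate fixes the ZrO2 mass to be deposited, and the 30 wt% catalyst fraction from Table 2.0 then fixes the slurry batch size.

```python
# Illustrative slurry batch sizing from the loading target and Table 2.0.
# Assumed inputs: 150 cc substrate, 200 g/l ZrO2 loading, weight fractions
# as listed in Table 2.0 (function and variable names are mine).

def slurry_batch(substrate_cc=150.0, loading_g_per_l=200.0, wt_frac=None):
    """Return (zro2_g, total_slurry_g, per_component_g) for one substrate."""
    if wt_frac is None:
        # Weight fractions from Table 2.0
        wt_frac = {"ZrO2": 0.30, "Al2O3": 0.20, "HNO3": 0.005, "RO water": 0.50}
    zro2_g = loading_g_per_l * substrate_cc / 1000.0   # catalyst mass to deposit
    total_g = zro2_g / wt_frac["ZrO2"]                 # slurry mass containing that ZrO2
    components = {k: round(total_g * f, 2) for k, f in wt_frac.items()}
    return zro2_g, total_g, components

zro2, total, comps = slurry_batch()
print(zro2, total, comps)   # 30 g ZrO2 -> 100 g slurry batch
```

So one 150 cc substrate needs roughly a 100 g slurry batch: 30 g ZrO2, 20 g alumina, 0.5 g concentrated HNO3 and 50 g RO water.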
The components are mixed and the slurry is agitated to attain the desired particle size distribution
while the pH and viscosity are monitored continuously. The slurry thus obtained is used to wash-coat
the substrate. The substrate is then subjected to a vacuuming process to attain a uniform coating
thickness. The coated substrate is taken through a drying cycle involving temperatures of the order of
400 °C and finally subjected to an adhesion loss test at temperatures up to 1000 °C. The adhesion loss
is recorded as 2.13%, which is within allowable limits.
Readily available conventional three-way catalytic converters loaded with platinum-rhodium (Pt-Rh)
and palladium-rhodium (Pd-Rh) on metallic stainless steel substrates (150 cc) are used for HC, CO and
NOx mitigation. Fig 2.0 shows the catalytic converters used.
Fig 2.0 Pt-Rh catalytic converter (Left), ZrO2 catalytic converter (Right)
III. EXPERIMENTAL SET-UP AND SAMPLING PROCEDURE
Fig 3.0: Schematic- Sampling setup
Fig. 3.0 shows a schematic of the experimental set-up, modelled in CATIA, for collection of the
exhaust sample. Direction control valves and non-return valves are used to guide the exhaust flow.
Low-temperature engine exhaust (at low load conditions) can be collected directly, without passing
through the heat exchanger. High-temperature engine exhaust (at high load conditions) is cooled
below the temperature sustainable by the equipment in the setup by a coiled heat exchanger before its
collection in the sampling bag. The volume flow rate of exhaust is measured by a gas rotameter.
Tedlar gas sampling bags, made from a special inert material, are used for sample collection. The
collected sample is subjected to chemical treatment to stabilize the carbonyls in the collected exhaust.
The stabilized solution is then analyzed for carbonyl concentration using the HPLC-UV technique.
Particulate filters are used in the actual set-up to prevent clogging of the ancillary equipment. RTD
and K-type thermocouples are used to measure the exhaust gas temperature at different locations in
the set-up. An image of the actual setup is shown in figure 4.0.
Fig 4.0: Image of sampling setup
The catalytic converter substrates are 64 mm in diameter and 60 mm in length. The diameter of the
exhaust pipe from the engine is 40 mm. Converging and diverging nozzles are manufactured to connect
the exhaust pipe to the catalytic converters. The catalytic converters are fitted inside an insulated tin
canning of diameter slightly greater than the outer diameter of the substrate. Two or more converters,
when used, are placed end to end in the canning with a small gap (around 2 mm) between them. The
arrangement is shown in figure 5.0.
Fig 5.0: Image- catalytic converter arrangement in exhaust line
A ‘Kirloskar’ make naturally aspirated DI diesel Gen-set/Tractor engine (shown in fig 6.0) is used for
the trials.
Fig 6.0: Image-Engine Set-up
Its specifications are given in table 3.0.
Table 3.0 Engine Specifications
Rated power 27.9 kW
Rated speed 1500 rpm
Cooling system Water cooled
Dynamometer Hydraulic controlled
Lubrication Inbuilt engine-operated pump
No. of cylinders 3
Compression ratio 18.1
Nozzle diameter 0.20 mm
Injection timing 21° BTDC
Fuel injection pressure 500 bar
The BE-Diesel fuel used has the following composition and properties (refer Table 4.0).
Table 4.0 Fuel Composition and Properties
Ethanol 30% by volume
Bio-diesel 20% by volume
Diesel 50% by volume
Density at 15 °C 835 kg/m³
Kinematic viscosity at 40 °C 2.4 cSt
Calorific value 38965 kJ/kg
The engine exhaust, after flowing through the catalytic converter arrangement, is partly diverted
towards the gas sampling line. All trials are conducted at 20 kg dynamometer load (the mean load in
the five-mode test) and 1500 rpm engine speed. Different catalytic converter arrangements using Pt-Rh
or Pd-Rh and the specially manufactured ZrO2 catalyst are used. Gas is sampled in Tedlar bags of
5 litres capacity to measure the concentration of carbonyls. A solution of 0.5 g of DNPH in 400 ml of
ACN with a few drops of perchloric acid is used as the absorbing solution; 20 ml of this DNPH
solution is placed in the bag before sampling. A flow rate of 1 LPM is maintained at the bag inlet
using a flow control valve and sampling is carried out for 4 minutes, so that 4 litres of exhaust gas is
collected in the bag, as shown in fig 7.0. It is ensured that the ambient conditions during sample
collection are constant across trials. The collected sample is thoroughly shaken to homogenize the
mixture and accelerate the formation of the respective hydrazones. To stabilize the mixture, it is
cooled at −30 °C for around 30 minutes in a refrigerator. The stabilized condensate is then collected in
small vials for further analysis. An AVL gas analyzer (refer Fig 8.0) is used to observe and record the
concentrations of HC, CO and NOx.
Fig 7.0: Exhaust collection in Tedlar Bags Fig 8.0: Gas Analyzer
IV. SAMPLE ANALYSIS
The exhaust sample collected is analyzed using the High Performance Liquid Chromatography
(HPLC-UV) technique [13, 14]. The HPLC system used is shown in Fig 9.0. The chromatographic
conditions are given in table 5.0.
Fig 9.0 HPLC-UV apparatus
Table 5.0 HPLC Chromatographic Conditions
Parameter Description
Column Analytical column: Inertsil C-18 (ODS), 4.60 mm × 250.00 mm, 5 micron particle size
Column oven temperature Ambient temperature, 32 °C
Detector UV-Visible, 360 nm wavelength (lambda maximum)
Sample volume 20 microlitres
Flow rate 1 ml/min
Mobile phase ACN, HPLC grade
HPLC system LC-10AT vp, Shimadzu make (Japan)
Data acquisition PC-controlled Spinchrome software
The results obtained from the HPLC-UV apparatus consist of a chromatogram showing peaks (Fig
10.0) and a numerical data sheet giving the retention time and peak area (table 6.0).
Table 6.0 Numerical Data Sheet - HPLC
Sr. No. | Retention time [min] | Area [mV·s] | Height [mV] | Area [%] | Height [%] | W05 [min]
1 | 2.823 | 46.014 | 7.35 | 1 | 1.2 | 0.11
2 | 3.010 | 3899.952 | 428.713 | 99 | 98.8 | 0.12
Total | | 3945.996 | 433.921 | 100 | 100 |
The retention time is characteristic of the compound, and the peak area is a direct measure of its
concentration.
4.1 Preparation of standards:
The standard solution of formaldehyde hydrazone is analyzed using HPLC-UV to obtain its retention
time and concentration. Based on the molar equilibrium of the chemical reaction, a standard solution
with a concentration of 500 µg/ml (w/v) of formaldehyde hydrazone in acetonitrile is prepared and
analyzed. The results obtained are of the following nature (refer fig. 10.0):
Fig. 10.0 Formaldehyde standard chromatogram (voltage [mV] vs. time [min]; peaks at 2.823 min and 3.010 min)
Similarly, peaks were obtained for acetaldehyde and acetone standard solutions.
4.2 Analysis of samples:
The samples collected in the vials after refrigeration are injected into the HPLC-UV apparatus for
analysis. Three injections of the same sample are carried out to verify the reproducibility of the peaks.
The arithmetic mean of the corresponding three areas is used when calculating the concentration, and
the mean retention time is used to identify the compound.
For the standard solution:
Formaldehyde + DNPH → Hydrazone + Water
1 mole (30 g) + 1 mole (198 g) → 1 mole (210 g) + 1 mole (18 g)
The molar mass ratio mf = 30/210 = 1/7
Mean area Am = 3751.6395 mV·s; mean retention time Tm = 3.0065 min
For the sample:
The concentration of formaldehyde in the liquid sample is related to the concentration of hydrazone as
follows:

Concentration of hydrazone in sample / Concentration of standard solution
= Area of peak in sample chromatogram / Area of peak in standard solution chromatogram

Concentration of formaldehyde in sample
= (Concentration of hydrazone in sample × Volume of absorbing solution × mf) / Volume of exhaust sample collected

Depending on the volume of exhaust sampled, the concentration (weight per litre of exhaust) is
calculated.

V. RESULTS AND DISCUSSION
The results of the raw sample from the HPLC analysis are shown in Fig 11.0.
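The two concentration relations above can be sketched numerically (a sketch only; function and variable names are mine, the standard concentration of 500 µg/ml and mf = 1/7 are from the text, and the raw-sample peak area is from Table 7.0; the paper's exact dilutions are not fully specified, so this reproduces the order of magnitude of Table 7.0 rather than its exact figures):

```python
# Sketch of the two concentration relations for the raw exhaust sample.

MF = 30.0 / 210.0          # molar mass ratio, formaldehyde : hydrazone

def hydrazone_conc(area_sample, area_std=3751.6395, conc_std=500.0):
    """ug/ml of hydrazone in the liquid sample, by peak-area proportionality."""
    return area_sample / area_std * conc_std

def hcho_per_litre_exhaust(c_hydrazone, v_absorb_ml=20.0, v_exhaust_l=4.0):
    """ug of formaldehyde per litre of exhaust collected."""
    return c_hydrazone * v_absorb_ml * MF / v_exhaust_l

c_h = hydrazone_conc(5333.993)                # raw-sample mean peak area (Table 7.0)
print(round(hcho_per_litre_exhaust(c_h), 1))  # same order as the 580 ug/l in Table 7.0
```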
Fig. 11.0 Raw sample chromatogram (voltage [mV] vs. time [min]; solvent peak, then peaks at 2.237 min and 2.937 min)
The retention time of this peak is comparable to that of the formaldehyde standard solution; the minor
deviation is due to slight variation in back pressure. A solitary formaldehyde peak is obtained. Peaks
for acetaldehyde and acetone are not obtained because, at the given loading condition, their
concentrations are below the limit of detection. The data are nevertheless sufficient to understand the
qualitative change in carbonyl concentration with the use of the various catalytic converter
arrangements. Exhaust samples with different catalytic converter arrangements are collected and
analyzed. The retention time for each arrangement is close to that of the formaldehyde standard.
Sample chromatograms for two arrangements are given in figures 12.0 and 13.0.
Fig. 12.0 Chromatogram - one ZrO2 catalytic converter sample (voltage [mV] vs. time [min]; peaks at 1.953 min and 2.933 min)
Fig. 13.0 Chromatogram - ZrO2 CATCON in series with Pt-Rh catalytic converter (voltage [mV] vs. time [min]; peaks at 1.933 min, 2.243 min and 2.957 min)
Chromatograms for seven different arrangements are obtained and the formaldehyde concentrations
are examined (Table 7.0 and Fig 14.0). The percentage reduction for each arrangement is then
analyzed (refer fig. 15.0). CO, HC and NOx conversion efficiencies for the conventional three-way
catalysts (Pt-Rh and Pd-Rh) are determined (refer fig. 16.0). It is observed that their efficiencies
remain constant irrespective of combination with the specially designed ZrO2 catalyst.
Table 7.0 HCHO concentrations for various catalytic converter arrangements
Sr. No. | Catalytic converter arrangement of the injected sample | Mean retention time [min] | Mean peak area [mV·s] | Concentration of formaldehyde [µg/l of exhaust]
1 | Raw (No CATCON) | 2.9434 | 5333.993 | 580.3169
2 | Pd-Rh | 2.93 | 2897.93 | 315.2831
3 | Pt-Rh | 2.9266 | 2645.003 | 287.7656
4 | Pd-Rh/ZrO2 | 2.9543 | 2350.9683 | 255.7758
5 | ZrO2 | 2.931 | 2283.46 | 248.4312
6 | 2*ZrO2 | 2.932 | 2272.859 | 247.2779
7 | Pt-Rh/ZrO2 | 2.9276 | 2166.016 | 235.6538
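The percentage reductions plotted in Fig. 15.0 follow directly from the concentrations in Table 7.0; a small sketch of that arithmetic (variable names are mine, all values are from the table):

```python
# Percentage reduction in HCHO relative to the raw (no CATCON) sample,
# using the concentrations in ug/l of exhaust from Table 7.0.

conc = {
    "Raw": 580.3169, "Pd-Rh": 315.2831, "Pt-Rh": 287.7656,
    "Pd-Rh/ZrO2": 255.7758, "ZrO2": 248.4312,
    "2*ZrO2": 247.2779, "Pt-Rh/ZrO2": 235.6538,
}

reduction = {k: round(100.0 * (conc["Raw"] - v) / conc["Raw"], 1)
             for k, v in conc.items() if k != "Raw"}
print(reduction)   # Pt-Rh/ZrO2 shows the largest reduction
```

This reproduces the ordering discussed below: the ZrO2 catalyst in series with Pt-Rh gives the highest reduction, at roughly 59%.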
Fig. 14.0 Concentration of formaldehyde for different CATCON arrangements (bar chart; y-axis: concentration of HCHO in µg/l of exhaust, 0-600)
Fig. 15.0 Percentage reduction of formaldehyde in the exhaust (bar chart; % reduction in HCHO concentration w.r.t. raw for each CATCON arrangement)
Fig 16.0: Pt-Rh and Pd-Rh HC, CO and NOx reduction (bar charts comparing raw exhaust against Pt-Rh and Pd-Rh: HC in ppm, CO in % volume, NOx in ppm)
The formaldehyde concentration obtained with a single ZrO2 catalyst is significantly lower than in the
raw emissions. The effect is attributed to the selective oxidation of carbonyls by the ZrO2 catalyst.
Adding a second ZrO2 catalyst showed further reduction; thus, with optimum loading and surface area
improvement methods, better results are achievable.
The Pt-Rh catalyst shows better carbonyl reduction than the Pd-Rh catalyst. However, both three-way
catalysts are less efficient than the ZrO2 catalyst. This could be due to the non-selective oxidative
nature of these catalysts, wherein alcohol vapours in the exhaust are partially oxidized to carbonyls,
thereby increasing their concentration.
The ZrO2 catalyst used in series with the Pt-Rh catalyst shows the highest percentage reduction in
formaldehyde concentration. Pt-Rh is more effective in CO mitigation than Pd-Rh, while the
percentage reductions for HC and NOx are comparable for both; Pt-Rh also exhibits better carbonyl
reduction. The better overall catalytic performance of Pt-Rh relative to Pd-Rh may be due to
contamination of the palladium catalyst by some elements in the exhaust of the BE-diesel fuelled
engine (Johnson Matthey website, 2011) [16]. The remaining carbonyls are selectively handled by the
ZrO2 catalyst downstream. Increasing the platinum loading in the conventional catalyst could give
better carbonyl mitigation; however, ZrO2 is a better choice than noble metals in terms of availability
and cost, and moreover is selective towards oxidation of aldehydes. Thus, Pt-Rh in combination with
ZrO2 is a technologically effective and economically viable choice.
VI. CONCLUSIONS
All the significant contributions listed in section 2 were experimentally verified and results are
reported. Formaldehyde was the most dominant carbonyl at the given loading conditions. Other
aldehydes and ketones could be detected using HPLC equipment with a better limit of detection or by
collecting a larger volume of exhaust. Zirconium oxide shows effective catalytic activity towards
carbonyl mitigation, better than conventional three-way catalysts. The combination of platinum-
rhodium and zirconium oxide catalysts enables significant reduction of carbon monoxide,
hydrocarbons and nitrogen oxides together with aldehydes and ketones: the Pt-Rh catalyst mitigates
CO, HC and NOx, while the aldehydes are handled by the ZrO2 downstream. This catalyst
combination is an important development in catalytic converter technology, both for its technological
and its economic features, especially for alcohol-fuelled engines, and it promotes the use of renewable
fuels such as BE-diesel, methanol-diesel blends and methanol. Improved catalyst efficiencies are
achievable with optimum catalytic converter design; the ZrO2 catalyst in series with the Pt-Rh catalyst
shows the highest percentage reduction in formaldehyde concentration. Future work will examine
different series and parallel arrangements of catalytic converter modules (currently used in series).
The insulation of the converter and the leakage at joints that arise when standard-length converter
units, currently available for two-wheelers, are connected in series for four-wheeler use need further
investigation. In this work the test parameters were selected from existing test methods and the
catalyst combinations from experience; a systematic selection procedure also needs to be investigated.
REFERENCES
[1] Xiaobing Pang, Xiaoyan Shi, Yujing Mu, Hong He, Shijin Shuai, Hu Chen, Rulong Li,
Characteristics of carbonyl compounds emission from a diesel-engine using biodiesel-ethanol-diesel as fuel,
Atmospheric Environment, Volume 40, Issue 14, pp 2567-2574, May 2006.
[2] Report on the Health Effects of Aldehydes in Ambient Air, prepared for COMEAP - the Department of
Health Committee on the Medical Effects of Air Pollutants, Government of the United Kingdom, December 2000.
[3] Asad Naeem Shah, Ge Yun-shan, Tan Jian-wei, Carbonyl emission comparison of a turbocharged diesel
engine fuelled with diesel, biodiesel, and biodiesel-diesel blend, Jordan Journal of Mechanical and Industrial
Engineering, Vol. 3, Number 2, pp 111-118, 2009.
[4] Y. Ren, Z-H. Huang, D-M. Jiang, W. Li, B. Liu, X-B. Wang, Effects of the addition of ethanol and cetane
number improver on the combustion and emission characteristics of a compression ignition engine, Journal of
Automobile Engineering, Vol. 222, Issue 6, pp 1077-1087, 2008.
[5] T. Wagner, M. L. Wyszynski, Aldehydes and ketones in engine exhaust emissions - a review, Journal of
Automobile Engineering, Vol. 210, Issue D2, pp 109, 1996.
[6] Murari Mohan Roy, HPLC analysis of aldehydes in automobile exhaust gas, Energy Conversion and
Management, 49, pp 1111-1118, 2008.
[7] Cooper, B., The future of catalytic systems, Automotive Engineering, Volume 100, Number 4, 1992.
[8] Weaver, C. S., Natural gas vehicles - a review of the state of the art, SAE paper 891233, 1989.
[9] Lipari, F. and Colden, F. L., Aldehyde and unburned fuel emissions from developmental methanol-fuelled
2.5L vehicles, SAE paper 872051, 1987.
[10] McCabe, R. W., King, E. T., Watkins, W. L. and Gandhi, H. S., Laboratory and vehicle studies of aldehyde
emissions from alcohol fuels, SAE paper 900708, 1990.
[11] Lipari, F. and Swarin, S., Determination of formaldehyde and other aldehydes in automobile exhaust with
the 2,4-DNPH method, Journal of Chromatography, 247, pp 297-306, 1982.
[12] Y. Yacoub, Method procedures for sampling aldehydes and ketones using 2,4-DNPH - a review, Journal of
Automobile Engineering, Volume 213, Issue 5, pp 503-507, 1999.
[13] Ronald K. Beasley, Sampling of formaldehyde in air with coated solid sorbent and determination by
high performance liquid chromatography, Analytical Chemistry, Volume 52, No. 7, pp 1111, June 1980.
[14] Xianliang Zhou, Measurement of sub-parts-per-billion levels of carbonyl compounds in marine air by a
simple cartridge trapping procedure followed by liquid chromatography, Environmental Science & Technology,
24, pp 1482-1485, 1990.
[15] D. B. Hulwan, Study on properties improvement and performance benefit of diesel-ethanol-biodiesel
blends with higher percentage of ethanol in a multi-cylinder IDI diesel engine, IJAET, Volume I, Issue II,
July-Sept. 2010, pp 248-273.
[16] Technical discussions, Johnson Matthey website - http://www.matthey.com/, 2011.
Author’s Biographies:
Abhishek Balkrishna Sahasrabudhe: Bachelor of Mechanical Engineering, University of Pune
(Vishwakarma Institute of Technology); departmental academic topper. Currently pursuing graduate
studies in Mechanical Engineering at Stanford University, California, as a Graduate Research Assistant
at the High Temperature Gasdynamics Laboratory (HTGL). Research interests: engines and energy
systems, pollution mitigation, alternative and renewable energy, combustion and kinetics,
computational fluid flow and heat transfer, mechatronics/design.

Sahil Shankar Notani: Bachelor of Mechanical Engineering, University of Pune (Vishwakarma
Institute of Technology). Presently working at Emerson Innovation Center as an R&D engineer.
Interested in computational methods for design and optimization of mechanical systems, and aspires
to pursue a Masters in Engineering in that domain.

Tejaswini Milind Purohit: Bachelor of Mechanical Engineering, University of Pune (Vishwakarma
Institute of Technology). Presently working as a Graduate Engineering Trainee at Mahindra &
Mahindra Ltd; wishes to pursue a Masters in the field of mechanical engineering.

Tushar Patil: B.E. Mechanical from the University of Pune (Vishwakarma Institute of Technology).
Currently working in engineering services at Jubilant Life Sciences; aims to pursue a Masters in
Business Administration.

Satishchandra V. Joshi: Professor of Mechanical Engineering at Vishwakarma Institute of
Technology, Pune, Maharashtra, India. He earned his Ph.D. from the Indian Institute of Technology
Bombay at Mumbai, India. Professor Joshi has vast experience in industry, teaching and research, has
published 15 papers in international and national journals and conferences, and has worked on World
Bank and Government of India projects on energy aspects.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
267 Vol. 1, Issue 5, pp. 267-277
IMPACT OF REFRIGERANT CHARGE OVER THE
PERFORMANCE CHARACTERISTICS OF A SIMPLE VAPOUR
COMPRESSION REFRIGERATION SYSTEM
J. K. Dabas1, A. K. Dodeja2, Sudhir Kumar3, K. S. Kasana4
1Research Scholar, Department of Mechanical Engineering, National Institute of Technology, Kurukshetra, India
2Dairy Engineering Division, National Dairy Research Institute, Karnal, India
3,4Department of Mechanical Engineering, National Institute of Technology, Kurukshetra, India
ABSTRACT
An experimental investigation was conducted to determine the role of capillary tube length and refrigerant
charge on the overall heat transfer coefficients in the condenser and evaporator and the actual COP of a simple
vapour compression refrigeration system. It was concluded that increasing the refrigerant charge in the system
largely enhances the overall heat transfer coefficient in the evaporator by increasing the fraction of evaporator
space occupied by liquid refrigerant. The capillary tube length is important not only because it directly decides
the evaporator temperature and pressure, but also because it affects the tendency of the evaporator to refill with
liquid refrigerant after initial start-up and alters the optimum charge of the system. A simple refrigeration system
should therefore be designed with the minimum capillary tube length that satisfies the refrigeration conditions
and the maximum refrigerant charge, limited by the unwanted condition of liquid refrigerant entering the
compressor.
KEYWORDS: Vapour Compression Refrigeration, Refrigerant charge, Capillary tube, heat transfer
coefficient, coefficient of performance
I. INTRODUCTION
A simple vapour compression refrigeration system with the simplest expansion device, a capillary
tube, is used in numerous small and medium refrigeration applications such as domestic refrigerators,
deep freezers, water coolers, room air conditioners and cooling cabinets all over the world. These
small-scale refrigeration machines are produced in large numbers and contribute substantially to
energy consumption [1], so energy conservation in refrigeration, air conditioning and heat pump
systems has large potential. The working conditions of a refrigerating system in steady operation
depend on several factors: boundary conditions (ambient temperature, cold room temperature,
compressor speed and control settings), refrigerant type and refrigerant charge, system architecture
and size, and thermal loads [2]. The performance is influenced by the matching of all these factors.
The theoretical performance of the system deteriorates in real conditions due to internal and external
irreversibility in the system [3, 4, 5]. Internal irreversibility is due to non-isentropic compression,
friction and entropy generation in the system components [6, 7].
NOMENCLATURE
A surface area of tubes
c specific heat
COP coefficient of performance
i specific enthalpy
Greek Symbols
∆ difference
ρ density
Subscripts
m mass flow rate
Q heat transfer rate
t temperature
U overall heat transfer coefficient
v specific volume
V total volume
VCR vapour compression refrigeration
W power consumption of compressor
ac actual
c condenser
e evaporator
i inlet/inside
isen isentropic
liq liquid
m mean
o outlet/outer
r refrigerant
th theoretical
vap vapour
w water
Minimization of internal irreversibility depends mainly on the design and selection of the compressor,
which is not within the scope of this study. External irreversibility losses occur over the condenser and
evaporator due to the finite rate of heat exchange against finite temperature differences and heat
capacities of the external fluids. These losses can be minimized by maximizing the heat transfer
coefficients over the condenser and evaporator [8, 9, 10]. Considering the internal and external
irreversibility, a vapour compression refrigeration system can be theoretically optimized and balanced
using finite time thermodynamics [11, 12, 13], but a correct estimate of the parameters causing
irreversibility, i.e. the finite values of the heat transfer coefficients, is a real challenge.
The heat transfer coefficient on the external fluid (air/water) side in the evaporator and condenser can
be enhanced, optimized and managed easily. But the condensation heat transfer coefficient on the
refrigerant side in the condenser and the boiling heat transfer coefficient on the refrigerant side in the
evaporator are quite difficult to estimate and manage, because they are associated with a change of
phase of the refrigerant, and the two-phase flow behaviour inside the condenser and evaporator is
difficult to estimate owing to the non-availability of exact void fraction correlations [14, 15, 16]. The
boiling coefficient in the evaporator is even more difficult to estimate than the condensing coefficient
in the condenser [17, 18].
The condensing coefficient depends on how the condensate film forms, flows and is pierced by
condensing vapours, and finally accumulates at the bottom section of the condensing coil under the
influence of gravity, the mean velocity of the refrigerant vapours and the geometry of the condensing
coil. The boiling heat transfer characteristics on the refrigerant side in the evaporator are quite
different from those in the condenser. In small refrigeration systems, a dry-expansion tubular
evaporator without any accumulator is generally used, in which one portion serves for boiling of the
refrigerant (where nucleate boiling dominates) and the rest for superheating of the vapours (where
forced convection dominates) [19, 20, 21]. Superheating is necessary to safeguard the compressor
from damage by suction of incompressible refrigerant liquid [22]. The heat transfer coefficient in the
boiling zone before the dry-out point is much higher than in the superheating zone beyond it. Thus a
correct analytical estimate of the average refrigerant-side heat transfer coefficient in both the
evaporator and the condenser is not possible, and a mostly empirical approach is used. Simulation
techniques have been used by researchers for the design of vapour compression refrigeration systems
under steady-state conditions [23, 24, 25]. The design of the evaporator and condenser depends
mainly on two design parameters, the heat transfer coefficients and the corresponding pressures,
which in turn depend on other conditions in the system. Among these, the two main conditions are the
size and length of the capillary tube and the refrigerant charge, which can also most easily be altered
in a given system. The capillary tube length is very important as it directly decides the pressures of the
system [26]. The present experimental study aims to find the actual values of the refrigeration rate,
the overall heat transfer coefficients in the evaporator and condenser, and the COP of a simple vapour
compression refrigeration system under real steady-state conditions for different
combinations of capillary tube size and refrigerant charge, and to find the impact of refrigerant charge
with different lengths of capillary tube on the performance of the system under the same constant
boundary conditions.
In the following sections 2 and 3, the experimental set-up and the detailed procedure adopted are
described. The results are then plotted as bar charts and analysed in detail in section 4. Conclusions
and the future scope of work are given in section 5.
II. EXPERIMENTAL SET-UP AND PROCEDURES
The experimental facility, shown in “figure 1”, consists of a simple vapour compression refrigeration
system charged with HFC-134a refrigerant. The evaporator and condenser are shell-and-tube type
adiabatic heat exchangers. Refrigerant flows through copper tubes of 9.5 mm outside and 8.5 mm
inside diameter throughout the condenser, evaporator and connecting lines. All refrigerant connecting
tubes are well insulated with polyurethane cellular foam. Water can flow through the insulated shell of
both the evaporator and the condenser, and there is an arrangement for control and measurement of
the water flow rate through each. The compressor is a Kirloskar Copeland model KCE444HAG
(1/3 HP). Hand-operated valves and connectors are provided before and after the capillary tube to
facilitate its replacement. The temperature of the refrigerant at various points is measured with RTDs
(Pt 100 Ω at 0 °C) strongly insulated along the tube length with polyurethane cellular foam (axial heat
conduction was hence neglected). The refrigerant pressure is measured and indicated by separate dial
gauges at four points, before and after each of the evaporator and condenser. The mass flow rate of
refrigerant liquid after the condenser is indicated by a glass tube rotameter fitted in the refrigerant line
after the condenser. A digital wattmeter gives the instantaneous power consumption of the compressor
as well as the total energy consumed during the whole trial.
The total inside space of the closed refrigeration system is calculated as 1825 cm³, of which 673 cm³
is the evaporator, 777 cm³ the condenser, and 200 cm³ the liquid line from the condenser to the
evaporator. The total mass of refrigerant charged in the system is given by equation (1):

m_r = ρ_liq · V_liq + V_vap / v_vap    Eq. (1)

V_liq + V_vap = V    Eq. (2)

From equations (1) and (2), V_liq and V_vap (the volumes occupied by the liquid and vapour phases
of the refrigerant charge) can be calculated if the total weight of refrigerant charged in the system
(m_r) and ρ_liq and v_vap at the corresponding pressures are known. Equations (1) and (2) can also be
applied separately to the evaporator and condenser.
It is difficult to determine the exact inventory of liquid and vapour refrigerant in the different
components of the system during operation. As an approximation, made only to estimate an appropriate
refrigerant charge, it may be assumed that during operation liquid refrigerant occupies 10% of the
total condenser volume, the full liquid line and 40% of the evaporator space, with the rest of the
inside space occupied by the vapour phase. With this approximation, the total charge calculated from
equations (1) and (2) is 700 g. Trials were nevertheless conducted with three different amounts of
refrigerant charge, 500 g, 700 g and 1000 g, i.e. the estimated correct value, one value below it and
one above it. The refrigerant charge filled into the system was weighed by keeping the charging
cylinder on a weighing balance and taking readings before and after each fresh charging. From previous
experience, the three capillary tube sets chosen were twin capillary tubes, each of diameter 0.044”
(1.1176 mm) but with lengths of 30” (0.762 m), 42” (1.067 m) and 54” (1.372 m). In this way, with
three different capillary tubes and three different amounts of charge, a total of 9 trials (3×3) were
conducted, each in repetition.
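The charge estimate from equations (1) and (2), combined with the liquid-fraction approximation above, can be sketched as follows. The volumes and fractions are from the text; the liquid and vapour densities are assumed round values for a typical refrigerant at the working pressures, not data from the paper:

```python
# Estimate of total refrigerant charge from Eqs. (1) and (2), using the
# paper's volumes and liquid-fraction approximation. The densities are
# ASSUMED representative values, not measurements from the trials.

V_EVAP, V_COND, V_LINE = 673.0, 777.0, 200.0   # internal volumes, cm^3
V_TOTAL = V_EVAP + V_COND + V_LINE             # 1825 cm^3 in total

RHO_LIQ = 1.20   # assumed saturated-liquid density, g/cm^3
RHO_VAP = 0.02   # assumed saturated-vapour density, g/cm^3

def charge_estimate(f_evap=0.40, f_cond=0.10, f_line=1.00):
    """Eq. (1): m_c = rho_liq*V_liq + rho_vap*V_vap, with V_liq + V_vap = V
    (Eq. 2). f_* are the fractions of each volume occupied by liquid."""
    v_liq = f_evap * V_EVAP + f_cond * V_COND + f_line * V_LINE
    v_vap = V_TOTAL - v_liq                    # Eq. (2)
    return RHO_LIQ * v_liq + RHO_VAP * v_vap   # Eq. (1), grams

if __name__ == "__main__":
    print(f"Estimated charge: {charge_estimate():.0f} g")
```

With these assumed densities the estimate comes out in the neighbourhood of the 700 g figure quoted in the text; the exact number depends on the saturation densities used.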
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
270 Vol. 1, Issue 5, pp. 267-277
[Schematic omitted: compressor, condenser shell and evaporator shell with pressure gauges at state points 1-4, rotameter, capillary tube, wattmeter, temperature indicator and cooling-water inflow/outflow connections.]
Figure 1. Experimental set up of simple vapour compression refrigeration system
Figure 2. Theoretical Vapour Compression Refrigeration Cycle
Figure 3. Actual Vapour Compression Refrigeration Cycle
III. DATA REDUCTION
The pressure and temperature readings of the refrigerant were taken at the four strategic points 1, 2,
3 and 4 indicated in “figure 1” and “figure 3”. The temperatures of the cooling water at the inlet and
outlet of the condenser shell and the evaporator shell were recorded in the same way. The mass flow
rates of the refrigerant liquid at the condenser outlet and of the water entering the condenser and
the evaporator were recorded with the corresponding glass tube rotameters. The actual wattmeter
reading was also recorded regularly. All these data were entered in MS Excel worksheets, and the
refrigerant properties were calculated for each observation using computer subroutines for refrigerant
properties [27]. The data were then reduced to the useful performance parameters described below:
Refrigeration rate of evaporator:

Qe = mw,e·cw·(tw,e,i − tw,e,o) = mr·(i1 − i4)        Eq. (3)

Heat transfer rate in condenser:

Qc = mw,c·cw·(tw,c,o − tw,c,i) = mr·(i2 − i3)        Eq. (4)

Theoretical COP:

COPth = (i1 − i4) / Δiisen        Eq. (5)

Actual COP:

COPac = Qe / Wac        Eq. (6)

Overall heat transfer coefficient over evaporator:

Ue = Qe / (Δtm,e·Ae,o)        Eq. (7)

where

Δtm,e = (tw,e,i − tw,e,o) / log[(tw,e,i − tr,e) / (tw,e,o − tr,e)]

Overall heat transfer coefficient over condenser:

Uc = Qc / (Δtm,c·Ac,o)        Eq. (8)

where

Δtm,c = (tw,c,o − tw,c,i) / log[(tr,c − tw,c,i) / (tr,c − tw,c,o)]
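As a concrete illustration of the data reduction above, the sketch below evaluates Eqs. (3), (6) and (7) for one set of readings. All numerical values (water flow rate, temperatures, power, area) are invented for illustration, not measurements from the trials:

```python
import math

CW = 4.186  # specific heat of water, kJ/(kg*K)

def refrigeration_rate(m_w, t_in, t_out):
    """Eq. (3), water side: Q_e = m_w * c_w * (t_w,e,i - t_w,e,o), in kW."""
    return m_w * CW * (t_in - t_out)

def cop_actual(q_e, w_ac):
    """Eq. (6): COP_ac = Q_e / W_ac."""
    return q_e / w_ac

def lmtd(t_in, t_out, t_ref):
    """Log-mean temperature difference between the water stream and the
    (assumed constant) refrigerant temperature, as used with Eqs. (7)/(8)."""
    d1, d2 = t_in - t_ref, t_out - t_ref
    return (d1 - d2) / math.log(d1 / d2)

def u_overall(q, dt_m, area):
    """Eqs. (7)/(8): U = Q / (dt_m * A)."""
    return q / (dt_m * area)

if __name__ == "__main__":
    # Hypothetical evaporator-side readings:
    q_e = refrigeration_rate(m_w=0.05, t_in=20.0, t_out=12.0)  # kW
    dt_m = lmtd(20.0, 12.0, t_ref=5.0)                         # K
    print(f"Q_e = {q_e:.3f} kW, LMTD = {dt_m:.2f} K")
    print(f"COP_ac = {cop_actual(q_e, w_ac=0.8):.2f}")
    print(f"U_e = {u_overall(q_e, dt_m, area=0.5):.3f} kW/(m^2*K)")
```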
IV. ANALYSIS AND RESULTS
During the pressure-equalization period, most of the refrigerant in liquid form accumulates in the
evaporator, whereas the condenser and capillary tube contain superheated gas only. A typical course of
events at compressor start-up is as follows: when the compressor starts, boiling of the liquid
refrigerant in the evaporator begins at a fast pace, and the initial mass flow rate through the
compressor is high owing to the higher evaporator pressure and temperature. On the other side, the
mass flow rate through the capillary tube is initially at its lowest, because of the superheated gas
and the minimal pressure difference across it. As a result, within a very short period refrigerant
mass is displaced towards the condenser and the evaporator becomes more or less starved of liquid
refrigerant. The evaporator pressure therefore falls and the condenser pressure rises. The
displaced mass of refrigerant condenses in the condenser, forming a liquid layer inside the condensing
tubes, and more liquid accumulates at the inlet of the capillary tube. With this, the evaporator
starts to refill with liquid refrigerant. The refilling process is accelerated by sub-cooling of the
liquid backed up in the condenser and the consequent increase in the mass flow rate through the
capillary tube. At start-up, the mass flow rate of refrigerant through the compressor is highest and
that through the capillary tube is lowest; the difference is accommodated by the initial displacement
of refrigerant from the evaporator to the condenser. Thus, most of the liquid is first displaced to
the condenser, but it then starts returning to the evaporator as condensation becomes effective and
the pressure difference across the capillary tube increases. This refilling of the evaporator with
liquid refrigerant reactivates the heat exchange and evaporation process in the evaporator and opposes
the decline of evaporator pressure. A natural balance between the individual workings of the
components of the system is established after some time if the boundary conditions of the system do
not change. Under these steady-state conditions, the impact of different combinations of capillary
tube length and refrigerant charge on the various performance parameters is analysed as follows:
4.1 General working parameters of VCR system:
As shown in “Table 1”, both the evaporator pressure and the condenser pressure are higher for a larger
refrigerant charge in the system. The greater the initial charge, the more liquid there is in the
evaporator activating the heat exchange and evaporation process, which raises the evaporator pressure;
this in turn results in a higher condenser pressure owing to the increased compressor discharge. The
rise in condenser pressure is, however, smaller, because condensation simultaneously becomes more
effective as the compressor discharge increases. The pressure ratio is therefore lowest for the
highest refrigerant charge at a given capillary tube length, and obviously lowest for the shortest
capillary tube. Superheating of the vapours at the compressor suction is greater for the longer
capillary tubes because of the lower evaporator temperature. For a given capillary tube length,
however, the superheating decreases as the refrigerant charge increases, because the dry-out point
moves downstream in the evaporator with the larger liquid inventory. Sub-cooling of the refrigerant
liquid in the condenser shows the opposite trend: with more superheating at the compressor suction,
the vapours enter the condenser at a higher temperature, so there is less sub-cooling, and vice versa.
Table 1. Actual conditions of VCR system under steady state conditions

Double capillary tube length (m) | Mass of refrigerant charge (g) | Condenser pressure (bar) | Evaporator pressure (bar) | Pressure ratio of compressor | Superheating of vapours at compressor suction (°C) | Subcooling of liquid in condenser (°C)
0.762 | 500  | 10.103 | 4.69  | 2.15 | 13.1 | 6.5
0.762 | 700  | 10.586 | 5.345 | 1.98 | 8.2  | 6.3
0.762 | 1000 | 11.413 | 5.828 | 1.96 | 1.7  | 10.4
1.067 | 500  | 9.69   | 3.897 | 2.49 | 19.4 | 5.9
1.067 | 700  | 10.241 | 4.793 | 2.14 | 12.4 | 6.7
1.067 | 1000 | 11.62  | 5.62  | 2.07 | 2.49 | 12.5
1.372 | 500  | 8.655  | 1.862 | 4.65 | 41.2 | 3.9
1.372 | 700  | 8.724  | 2.276 | 3.83 | 35.1 | 4.2
1.372 | 1000 | 11.275 | 3.207 | 3.52 | 27.8 | 11.2
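The pressure-ratio column of Table 1 is simply the ratio of the two measured pressures; a quick consistency check over the transcribed rows confirms this:

```python
# Rows of Table 1: (condenser pressure, evaporator pressure, reported ratio),
# all in bar except the dimensionless ratio.
ROWS = [
    (10.103, 4.690, 2.15), (10.586, 5.345, 1.98), (11.413, 5.828, 1.96),
    (9.690, 3.897, 2.49), (10.241, 4.793, 2.14), (11.620, 5.620, 2.07),
    (8.655, 1.862, 4.65), (8.724, 2.276, 3.83), (11.275, 3.207, 3.52),
]

def check_ratios(rows, tol=0.01):
    """Verify reported pressure ratio = p_cond / p_evap for every trial."""
    for p_cond, p_evap, reported in rows:
        assert abs(p_cond / p_evap - reported) < tol
    return True

print("all pressure ratios consistent:", check_ratios(ROWS))
```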
4.2 Refrigerant mass flow rate and refrigeration capacity:
The mass flow rate of refrigerant through the system under steady-state conditions shows a clear trend
with capillary tube length and refrigerant charge, as shown in “figure 4”. It increases sharply as the
capillary tube length decreases, and moderately as the refrigerant charge increases, because a greater
filling of the evaporator with liquid refrigerant raises the evaporation rate. The refrigeration rate
is directly proportional to the refrigerant mass flow rate and hence follows the same trend, as shown
in “figure 5”.
Figure 4. Mass flow rate of refrigerant (mr) for different combinations of “capillary tube length” and “amount of
refrigerant charged in the system”
Figure 5. Rate of refrigeration (Qe) for different combinations of “capillary tube length” and “amount of
refrigerant charged in the system”
4.3 Overall heat transfer coefficient in the evaporator:
For the same capillary tube, the overall heat transfer coefficient in the evaporator rises
considerably as the refrigerant charge in the system is increased, as is clear from “figure 6”. This
is solely because of the increase in the refrigerant-side heat transfer coefficient due to the larger
liquid fraction present in the evaporator. As the evaporator fills with liquid, pool-boiling and
nucleate-boiling conditions prevail over most of the evaporator, which enhances the heat transfer
severalfold. A large decrease in the heat transfer coefficient occurs as the capillary tube length
increases, owing to the reduced mass flow capacity of the compressor at the higher pressure ratio and
the reduced tendency of the evaporator to refill with liquid refrigerant through a longer capillary
tube. A wide
variation in the overall heat transfer coefficient data was noted, while the water-side coefficient
and the conduction resistance of the wall were approximately constant in each trial; the variation
therefore lies solely in the refrigerant-side heat transfer coefficient. The highest overall heat
transfer coefficient occurs with the shortest capillary tube and the highest refrigerant charge,
because the evaporator is then expected to be most filled with liquid refrigerant. The lowest value
(30 times smaller than the highest) occurs with the longest capillary tube and the minimum refrigerant
charge, because the evaporator is then expected to be at its driest.
Figure 6. Overall heat transfer coefficient in the evaporator (Ue) for different combinations of “capillary tube length” and “amount of refrigerant charged in the system”
Figure 7. Overall heat transfer coefficient in the condenser (Uc) for different combinations of “capillary tube length” and “amount of refrigerant charged in the system”
4.4 Overall heat transfer coefficient in the condenser:
For the same refrigerant charge, the overall heat transfer coefficient in the condenser decreases as
the capillary tube length increases, as shown in “figure 7”, evidently because of the sharp decrease
in the mass flow rate through the condenser. The decrease in the coefficient is, however, not sharp,
because of the opposing effect of the simultaneous fall in condenser pressure: as the condenser
pressure drops, the temperature difference across the condensing film decreases and the latent heat
increases, both of which raise the condensing coefficient according to Nusselt's well-known equation
for film-wise condensation. For the same capillary tube length, the value of the condensing
coefficient rises on increase
of the refrigerant charge in the system from 500 g to 700 g, but falls on increasing the charge from
700 g to 1000 g. The first rise is due to the increase in mass flow rate through the condenser (as
discussed above), which enhances the heat transfer coefficient. Simultaneously, however, the condenser
pressure also rises steeply, which has the opposite effect and reduces the condensation coefficient in
the case of the 1000 g charge. The correct combination of capillary tube length and refrigerant charge
is therefore what matters, rather than selecting either one individually.
4.5 Coefficient of Performance:
The COP of a vapour compression refrigeration system is the single most important parameter to be
optimized in a given refrigeration application for maximum conservation of energy. The highest COP
occurs with the shortest capillary tube, of length 0.762 m, and 700 g of refrigerant charge, as shown
in “figure 8”. The capillary tube length directly determines the evaporator pressure and temperature:
the shorter the capillary tube, the higher the evaporator temperature and pressure, and hence the COP,
provided these simultaneously satisfy the required refrigeration capacity. But the role of the
refrigerant charge is also very important: more charge means more filling of the evaporator with
liquid and hence more refrigeration capacity, up to the limiting condition at which liquid is sucked
into the compressor. The refrigerant charge is thus very critical, and its optimum value depends
primarily on the length of the capillary tube in a refrigeration system.
Figure 8. Coefficient of performance (COP) for different combinations of “capillary tube length” and “amount of refrigerant charged in the system”
V. CONCLUSION
This study offers some insight into the role of capillary tube length and refrigerant charge in the
performance characteristics of a simple vapour compression refrigeration system. It was found that
when the compressor is started there is a fast shift of charge from the evaporator to the condenser
via the compressor, followed shortly afterwards by a comparatively slow refilling of the evaporator
through the capillary tube. Refilling the evaporator with more and more liquid refrigerant causes a
multifold increase in the heat transfer coefficient, which ultimately enhances the overall COP of the
system, but this is limited by the undesirable condition of liquid being sucked into the compressor.
The refilling of the evaporator and the inventory of liquid charge in the evaporator and condenser
during operation depend greatly on the refrigerant charged into the system and the length of the
capillary tube. The dry-out point in the evaporator can be shifted downstream by allowing more liquid
to stay in the evaporator, either by increasing the refrigerant charge or by shortening the capillary
tube. At the same time, however, the capillary tube length also determines the required evaporator
pressure and other dependent parameters of the system, and so cannot be altered much merely for a gain
in heat transfer. Thus, by managing the distribution of refrigerant liquid between the condenser and
evaporator through the choice of an optimum value of
refrigerant charge, the refrigerant-side heat transfer coefficients can be optimized. Based on the
optimum values of the overall heat transfer coefficients and the required refrigeration capacity at
given conditions, the evaporator and condenser can be designed by fixing appropriate values of the
evaporator and condenser pressures. Thereafter, the capillary tube should be designed and the right
compressor selected based on the design values of the pressures and the refrigerant mass flow rate.
Finally, the correct amount of refrigerant should be charged into the system to ensure optimum values
of the heat transfer coefficients and overall performance of the system. Further research can be
extended towards correct estimation of the refrigerant liquid inventory, separately in the evaporator
and condenser, for a given initial charge during operation of the system, and towards establishing
appropriate correlations for the average refrigerant-side heat transfer coefficients based on the
known fractions of liquid and vapour refrigerant in the evaporator and condenser.
ACKNOWLEDGEMENTS
This work was financially supported by Development Grant Head 2049/3009, National Dairy Research
Institute (Deemed University), Karnal (Haryana), India.
REFERENCES
[1] Pramod Kumar (2002) “Finite time thermodynamic analysis of refrigeration and air conditioning and
heat pump systems” PhD thesis, Indian Institute of Technology, Delhi, N D.
[2] Rossi F.De, Mauro A.W., Musto M., Vanoli G.P. (2010) “Long period food storage household vertical
freezer: Refrigerant charge influence on working conditions during steady operation”, International
Journal of Refrigeration, Vol. 34, pp 1305-1314.
[3] Grazzini G. (1993). “Irreversible Refrigerators with isothermal heat exchangers”, International Journal
of Refrigeration, Vol 16, No. 2, pp 101-106
[4] Bejan A. (1996), “Entropy generation minimization: The new thermodynamics of finite size devices
and finite time processes”, Journal of Applied Physics, Vol. 79, No.3, pp 1191-1215
[5] Sahin B. and Kodal A. (1999) “Finite time thermoeconomic optimization for endoreversible
refrigerators and heat pumps”, Energy Conv. & Mgmt., Vol. 40, pp 951-960.
[6] Wu C., Kiang R.L. (1992) “Finite time thermodynamic analysis of a Carnot engine with internal
irreversibility”, Energy, Vol. 17, No. 12, pp 1173-1178.
[7] Kodal A., Sahin B. and Yilmaz T. (2000) “Effect of internal irreversibility and heat leakage on the
finite time thermoeconomic performance of refrigerators and heat pumps”, Energy Conv. and Mgmt,
Vol.41, pp 607-619
[8] Chen L., Wu C., Sun F. (1996) “Influence of heat transfer law on the performance of a Carnot engine”,
Journal of Applied Thermal Engineering, Vol. 17, No. 3, pp 277-282.
[9] Chen L., Wu C., Sun F. (1998) “Influence of internal heat leak on the performance of refrigerators”,
Energy Conv. and Mgmt, Vol.39, No.1/2, pp 45-50
[10] Chen L., Wu C., Sun F. (2001) “Effect of Heat transfer law on the finite time exergoeconomic
performance of a Carnot Refrigerator”, Exergy, An International Journal, Vol.1, No. 4, pp 295-302.
[11] Goktun S. (1996) “Coefficient of performance for an irreversible combined refrigeration cycle”,
Energy, Vol. 21, No.7/8, pp 721-724
[12] Kaushik S.C. (2001) “Application of finite time thermodynamics to thermal energy conversion
systems: A review”, Internal Report on C.E.S., I.I.T. Delhi (India).
[13] Sanaye S, Malekmohammadi H.R. (2004) “Thermal and economical optimization of air conditioning
units with vapour compression refrigeration system”, Journal of Applied Thermal Engineering, Vol.
24, pp 1807-1825.
[14] Eckels S.J. and Pate M.B. (1990) “An experimental comparison of evaporation and condensation heat
transfer coefficients for HFC-134a and CFC-12”, International Journal of Refrigeration, Vol 14, pp 70-
77
[15] Ding C.Z., Wen T.J. and Wen Q.T. (2007) “Condensation heat transfer of HFC 134a on horizontal low
thermal conductivity tubes”, International Communications in Heat and Mass Transfer, Vol.34, pp 917-
923
[16] Takamatsu H., Momoki S., Fujii T. (1992) “A correlation for forced convective boiling heat transfer of
pure refrigerants in a horizontal smooth tube”, International Journal of Heat Mass Transfer, Vol. 36,
No. 13, pp 3351-3360.
[17] Hambraeus K. (1991) “Heat transfer coefficient during two phase flow boiling of HFC-134a”,
International Journal of Refrigeration, Vol. 14, pp 357-362
[18] Hambraeus K. (1994) “Heat transfer of oil contaminated HFC-134a in a horizontal evaporator”,
International Journal of Refrigeration, Vol. 18, No.2, pp 87-99
[19] Dongsoo J., Youngil K., Younghwan K. and Kilhong S. (2003) “Nucleate boiling heat transfer
coefficients of pure halogenated refrigerants”, International Journal of Refrigeration, Vol. 26, pp 240-
248
[20] Wongwises S., Disawas S., Kaewon J. and Onurai C. (2000) “ Two phase evaporative heat transfer
coefficients of refrigerant HFC-134a under forced flow conditions in a small horizontal tube”, Int.
Comm. Heat Mass Transfer, Vol.27, No.1, pp 35-38
[21] Seung W.Y., Jinhee J, Yong T.K. (2008) “Experimental correlation of pool boiling heat transfer for
HFC 134a on enhanced tubes: Turbo-E”, International Journal of Refrigeration, Vol. 31, pp 130-137.
[22] Arora C. P. (1981) “Refrigeration and Air Conditioning”, Tata McGraw Hill Publishing Co., New
Delhi, pp.91-101.
[23] Koury R.N.N., Machado L., Ismail K.A.R. (2001) “Numerical simulation of a variable speed
refrigeration system”, International Journal of Refrigeration, Vol. 24, pp 192-200.
[24] Guo-liang Ding (2007) “Recent developments in simulation techniques for vapour-compression
refrigeration systems”, International Journal of Refrigeration, Vol. 30, No. 7, pp 1119-1133.
[25] Joaquim M.G., Claudio M., Christian J.L. (2009) “A semi-empirical model for steady-state
simulation of household refrigerators”, Journal of Applied Thermal Engineering, Vol. 29, No. 8-9, pp 1622-1630.
[26] Stoecker W.F., Jones J.W. (1983) “Refrigeration and Air Conditioning”, Tata McGraw Hill Publishing
Co., New Delhi, pp.260-271
[27] Cleland A.C. (1992) “Polynomial curve-fits for refrigerant thermodynamic properties: extension to
include R134a”, International Journal of Refrigeration, Vol. 17, No. 4, pp 245-249.
Authors Biographies
Jitender Kumar Dabas was born on 5th July 1971. He has obtained his M.Tech. Degree in
“Mechanical Engineering” from Regional Engineering College, Kurukshetra University,
Kurukshetra in 2002. He is pursuing part-time PhD at NIT, Kurukshetra. He is also a Member of
the Institution of Engineers (India), Calcutta. His areas of interest are thermal science,
refrigeration and air-conditioning and heat transfer. At present, he is in the faculty of Dairy
Engineering Division of National Dairy Research Institute, Karnal in Haryana, India.
A. K. Dodeja was born on 16th July 1953. He has obtained his PhD in “Application of thin film
scraped surface heat exchanger for processing of milk and milk products” from Kurukshetra
University, Kurukshetra in 1988. His areas of interest are heat transfer in dairy products and
design of dairy equipments. Presently, he is Head of Dairy Engineering Division of National
Dairy Research Institute, Karnal in Haryana, India.
Sudhir Kumar was born on 8th December 1951. He has obtained his PhD in the field of comfort
conditioning using passive measures from Kurukshetra University, Kurukshetra in 1995. His
areas of interest are energy conservation and new technologies of energy conversions outside
thermodynamic regime. At present he is Professor in Mechanical Engg. Deptt. of NIT,
Kurukshetra in Haryana, India.
K. S. Kasana was born on 8th June 1945. He has obtained his PhD in the field of air-conditioning
from Kurukshetra University, Kurukshetra in 1985. His areas of interest are thermal
science, refrigeration and air-conditioning and heat transfer. He has retired from service as Head
of Mechanical Engg. Deptt., NIT, Kurukshetra in Haryana, India. At present he is working as
Director in Shri Krishna Institute of Engg. and Technology, Kurukshetra in Haryana, India.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
278 Vol. 1, Issue 5, pp. 278-289
AGC CONTROLLERS TO OPTIMIZE LFC REGULATION IN
DEREGULATED POWER SYSTEM
S. Farook1, P. Sangameswara Raju2
1Research Scholar, S.V.U College of Engineering, S.V University, Tirupathi, Andhra Pradesh, India.
2Professor, EEE Department, S.V.U College of Engineering, S.V University, Tirupathi, Andhra Pradesh, India.
ABSTRACT
This paper presents AGC controllers that regulate the system frequency and hold the power generation
of the various GENCOs at scheduled levels in a deregulated power system, with the controller
parameters optimized using an evolutionary real-coded genetic algorithm (RCGA). The performance of the
controllers is investigated on a two-area interconnected power system consisting of a hydro-thermal
unit in one area and a thermal-gas unit in the second area. The main goal of the optimization is to
improve the dynamics of LFC, i.e. to improve the transient response of the frequency and tie-line
power oscillations and to optimize the power generated by the various GENCOs according to the
bilateral contracts scheduled between GENCOs and DISCOs in an interconnected multi-area deregulated
power system. In the present paper, an optimal feedback controller and a proportional-integral-
derivative (PID) controller are used. The simulation results show that the PID controller tuned by the
proposed algorithm exhibits improved dynamic performance over the optimally tuned feedback controller.
KEYWORDS: AGC controllers, Bilateral Contracts, Deregulated Power System, Real Coded Genetic
algorithm (RCGA).
I. INTRODUCTION
In the deregulated scenario, automatic generation control is one of the most important ancillary
services to be maintained: it minimizes frequency deviations and the imbalance between generation and
load demand, regulates tie-line power exchange, facilitates bilateral contracts spanning several
control areas, and maintains reliable operation of the interconnected transmission system. The need to
improve the efficiency of power production and delivery, together with the intense participation of
independent power producers, motivates restructuring of the power sector. In the deregulated scenario,
new organizations and market operators, such as independent system operators (ISOs), are responsible
for maintaining the real-time balance of generation and load, minimizing frequency deviations,
regulating tie-line flows and facilitating bilateral contracts spanning various control areas. Since
the demand is constantly fluctuating and increasing, there is a need to expand generation by
introducing new potential generating plants, such as gas-fired power plants, which are usually
operated as peak power plants, into the power market. Recent trends in combined-cycle gas turbine
power plants, with high efficiency and generation capacities of more than 100 MW, make them suitable
for providing peak loads; they can also be operated as base-load power plants.
The paper is organized as follows: Section II presents the detailed concepts of the deregulated power
system and its model on the SIMULINK platform. In Section III, the controllers used for maintaining
LFC regulation are discussed. Section IV presents an overview of the real-coded genetic algorithm and
its implementation aspects. Section V focuses on the simulation of the controllers with
the proposed algorithm in a two-area deregulated power system. Finally, the conclusions are presented
in Section VI.
II. MULTI-AREA DEREGULATED POWER SYSTEM
The electrical industry over the years has been dominated by an overall authority known as the
vertically integrated utility (VIU), having authority over generation, transmission and distribution
of power within its domain of operation [1]-[3], [11]. The emergence of various independent power
producers (IPPs) in the power market motivates deregulation of the power system, whereby power can be
sold at a competitive price while all the functions involved in generation, transmission, distribution
and retail sales are performed. With restructuring, the ancillary services are no longer an integral
part of the electricity supply, as they used to be in the vertically integrated power industry
structure. In a deregulated environment, the provision of these services must be carefully managed so
that the power system requirements and market objectives are adequately met. The first step in
deregulation is to unbundle the generation of power from transmission and distribution; however, the
common LFC goals, i.e. restoring the frequency and the net interchanges to their desired values for
each control area, remain the same. Thus, in a deregulated scenario, generation, transmission and
distribution are treated as separate entities [1], [6]-[11]. As there are several GENCOs and DISCOs in
the deregulated structure, agreements/contracts must be established between the DISCOs and GENCOs,
within an area or with interconnected GENCOs and DISCOs, to supply the regulation. A DISCO has the
liberty to contract with any available GENCO in its own or other areas; thus, there can be various
combinations of possible contracted scenarios between DISCOs and GENCOs. A DISCO having contracts with
GENCOs in another control area is said to make “bilateral transactions”, while contracts within the
same area are known as “POOL transactions”.
Figure 1. Block diagram representation of the two-area deregulated power system
The concept of the DISCO Participation Matrix (DPM) [1], [2], [11] is introduced to express these
possible contracts in the generalized model. The DPM is a matrix with as many rows as there are
GENCOs and as many columns as there are DISCOs in the overall system. The entries of the DPM are the
contract participation factors (cpfij), each of which corresponds to the fraction of the total load
contracted by DISCOj from GENCOi:
DPM = [ cpf11  cpf12  ...  cpf1j  ...  cpf1n
        cpf21  cpf22  ...  cpf2j  ...  cpf2n
        ...    ...    ...  ...    ...  ...
        cpfi1  cpfi2  ...  cpfij  ...  cpfin
        ...    ...    ...  ...    ...  ...
        cpfn1  cpfn2  ...  cpfnj  ...  cpfnn ]        (1)
The sum of all entries in each column of DPM is unity.
∑i cpfij = 1        (2)

Under steady state, the power equations in the deregulated environment are:

ΔPd,i = ΔPLOC,i + ΔPUC,i        (3)

where ΔPLOC,i = ∑j ΔPL,ij        (4)

The scheduled contracted power exchange is given by:

ΔPtie12,sch = (demand of DISCOs in area 2 from GENCOs in area 1) − (demand of DISCOs in area 1 from GENCOs in area 2)        (5)

The actual power exchanged over the tie-line is given by:

ΔPtie12,act = (2πT12/s)·(Δf1 − Δf2)        (6)

At any time, the tie-line power error is given by:

ΔPtie12,error = ΔPtie12,act − ΔPtie12,sch        (7)

ΔPtie12,error vanishes in the steady state as the actual tie-line power flow reaches the scheduled
power flow. This error signal is used to generate the respective ACE signals, as in the traditional
scenario:

ACE1 = B1·Δf1 + ΔPtie12,error        (8)
ACE2 = B2·Δf2 + a12·ΔPtie12,error        (9)

where a12 = −Pr1/Pr2.

The total power supplied by the ith GENCO of area k is given by:

ΔPg,ki = ΔPsch,ki + apfki·ΔPUC,k        (10)

where ΔPsch,ki = ∑j cpfij·ΔPL,j        (11)

ΔPg,ki is the desired total power generation of GENCOi in area k and must, in the steady state, track
the contracted and un-contracted demands of the DISCOs in contract with it.
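The contract bookkeeping of Eqs. (2) and (11) can be sketched as below for a 4-GENCO, 4-DISCO system like the two-area model of Fig. 1. The cpf values and DISCO demands are invented for illustration, not data from the paper:

```python
# Scheduled GENCO generation from a DISCO Participation Matrix (DPM).
# The cpf values and DISCO demands below are ILLUSTRATIVE only.

DPM = [  # rows: GENCO1..4, columns: DISCO1..4
    [0.50, 0.25, 0.00, 0.30],
    [0.20, 0.25, 0.00, 0.00],
    [0.00, 0.25, 1.00, 0.70],
    [0.30, 0.25, 0.00, 0.00],
]
DISCO_DEMAND = [0.10, 0.10, 0.10, 0.10]  # contracted loads, pu MW

def check_dpm(dpm):
    """Eq. (2): each column of the DPM must sum to unity."""
    for j in range(len(dpm[0])):
        col = sum(row[j] for row in dpm)
        assert abs(col - 1.0) < 1e-9, f"column {j} sums to {col}"

def scheduled_generation(dpm, demand):
    """Eq. (11): dP_sch,i = sum_j cpf_ij * dP_L,j for each GENCO i."""
    return [sum(cpf * d for cpf, d in zip(row, demand)) for row in dpm]

if __name__ == "__main__":
    check_dpm(DPM)
    sched = scheduled_generation(DPM, DISCO_DEMAND)
    for i, p in enumerate(sched, start=1):
        print(f"GENCO{i} scheduled: {p:.3f} pu")
    # Total scheduled generation equals total contracted demand.
    print(f"total: {sum(sched):.3f} pu")
```

Because every DPM column sums to one, the scheduled generations always add up to the total contracted demand, which is what makes the tie-line schedule of Eq. (5) consistent.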
III. AGC CONTROLLERS
Several control strategies, such as integral control, optimal control and variable structure control,
have been used to control the frequency and to maintain the scheduled regulation between
interconnected areas. One major advantage of the integral controller is that it reduces the
steady-state error to zero, but it does not perform well under varying operating conditions and
exhibits poor dynamic performance [6]-[8]. Controllers based on optimal control and variable structure
control need feedback of most of the state variables of the system, which are practically difficult to
access and measure in a large interconnected system. This paper focuses on the optimization of a
feedback controller and a proportional-integral-derivative (PID) controller.
3.1. Optimal Feedback Controller
An optimal AGC strategy based on linear state regulator theory requires feedback of all the state
variables of the system for its implementation, and the optimal feedback control law is obtained by
solving the non-linear Riccati equation using a suitable computational technique. In a practical
environment, access to all the variables is limited and measuring all of them is impossible [3]. To
overcome this, some of the measurable variables are selected for the feedback control law. The
two-area power system shown in Fig. 1 can be described by the following controllable and observable
time-invariant state space representation:

Ẋ = A·X + B·U        (12)
Y = C·X        (13)

where X is the state vector and U is the vector of contracted and un-contracted power demands of the
DISCOs:

X = [Δf1 Δf2 ΔPg1 ΔPg2 ΔPg3 ΔPg4 ∫ACE1 ∫ACE2 ΔPtie12,act]T        (14)
U = [ΔPL1 ΔPL2 ΔPL3 ΔPL4 ΔPd1 ΔPd2]T        (15)

For the system defined by Eqs. (12) and (13), the feedback control law is given as:

U = −K·Y        (16)

where K is the feedback gain matrix. In this paper, with ITAE as the performance criterion to be
optimized, the feedback gains of the controller are tuned using the evolutionary real-coded genetic
algorithm.
3.2. PID Controller
The most popular approach adopted for AGC in an inter-connected power system is the use of
Proportional-Integral-Derivative (PID) controller [7]. In LFC problem the frequency deviations and
the deviations in the tie-line are weighted together as a linear combination to a single variable called
the Area control error (ACE), and is used as a control signal that applies to governor set point in each
area. By taking ACE as the system output, the control vector for a PID controller is given by:
5 = − 678$%& + 79 : $%& ;< + 7(=>!?)
@ (17)
where Kp, Kd and KI are the proportional, derivative and integral gains of the PID controller. It is well known
that the conventional way of tuning the PID gains by numerical analysis is tedious and
time consuming. In this strategy, with the ITAE as the performance criterion to be optimized, the PID gains
are tuned using a real-coded genetic algorithm to improve the dynamics of LFC in a deregulated
power system.
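Eq. (17) can be sketched in discrete time as below. The single-area plant, its inertia/damping parameters M and D, and the gains are all illustrative assumptions, not the paper's two-area model or its GA-tuned values; in a single area the ACE reduces to the frequency deviation.

```python
# Discrete-time sketch of the PID law of Eq. (17):
#   u = -(Kp.ACE + KI*integral(ACE) + Kd*d(ACE)/dt)
# applied to a crude single-area model M.df/dt = -D.f + u - dPL (assumed values).
def pid_step(ace, integ, prev_ace, dt, kp, ki, kd):
    integ += ace * dt                    # running integral of the ACE
    deriv = (ace - prev_ace) / dt        # backward-difference derivative
    u = -(kp * ace + ki * integ + kd * deriv)
    return u, integ

M, D = 10.0, 1.0             # illustrative inertia and damping
kp, ki, kd = 2.0, 1.0, 0.1   # illustrative gains (GA-tuned in the paper)
dt, dPL = 0.01, 0.1          # step load disturbance of 0.1 pu
f, integ, prev = 0.0, 0.0, 0.0
for _ in range(6000):        # simulate 60 s
    ace = f                  # single area: ACE is just the frequency deviation
    u, integ = pid_step(ace, integ, prev, dt, kp, ki, kd)
    prev = ace
    f += dt * (-D * f + u - dPL) / M
# the integral action removes the steady-state frequency error
```

The point of the sketch is the structure of Eq. (17): the integral term is what drives the steady-state ACE to zero after a step load change.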
IV. EVOLUTIONARY ALGORITHMS
In the traditional sequential optimization approach, several iterations are required to determine the
optimal parameters of the objective function to be optimized. When the number of parameters to be
optimized is large, the classical techniques require a large number of iterations and a long computation time
[5]. Evolutionary algorithms such as genetic algorithms emerge as an alternative for optimizing
the controller gains of a multi-area AGC system more effectively than the traditional methods [9], [17].
4.1. Real Coded Genetic Algorithm
Genetic algorithm (GA) is an optimization method based on the mechanics of natural selection. In
nature, weak and unfit species within their environment are faced with extinction by natural selection.
The strong ones have greater opportunity to pass their genes to future generations. In the long run,
species carrying the correct combination in their genes become dominant in their population.
Sometimes, during the slow process of evolution, random changes may occur in genes. If these
changes provide additional advantages in the challenge for survival, new species evolve from the old
ones. Unsuccessful changes are eliminated by natural selection. In a real-coded genetic algorithm
(RCGA), a solution is directly represented as a vector of real-valued decision variables, a
representation that is very close to the natural formulation of the problem [4], [9], [17]. The
use of floating-point numbers in the GA representation has a number of advantages over binary
encoding: the efficiency of the GA is increased because there is no need to encode and decode the
solution variables to and from a binary representation.
4.1.1 Chromosome structure
In GA terminology, a solution vector is known as an individual or a chromosome. Chromosomes are
made of discrete units called genes. Each gene controls one or more features of the chromosome [9],
[17]. The chromosome is modeled with the gains (K) of the feedback controller and the gains (Kp, Kd
and KI) of the PID controller as its genes.
4.1.2 Fitness: objective function evaluation
The objective here is to minimize the deviation in the frequency and the deviation in the tie-line power
flows; these variations are weighted together into a single variable called the ACE. The fitness
function is taken as the integral of time multiplied absolute value (ITAE) of the ACE [1], [2]. An optional penalty term is added to take care of the transient response specifications, viz. settling time,
overshoot, etc. The ITAE criterion is given by:
ITAE = ∫0^Tsim t.|e(t)| dt (18)
where e(t) is the error signal considered.
The fitness function to be minimized is given by:
J = ∫0^Tsim t.(β1|∆f1| + β2|∆f2| + |∆Ptie12,Error|) dt + FD (19)
where
FD = α1.OS + α2.TS (20)
and the overshoot (OS) and settling time (TS) for a 2% band of the frequency deviation in both areas
are considered for the evaluation of FD [10].
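Eqs. (18)-(20) can be sketched numerically as follows. The response signals below are synthetic damped oscillations standing in for simulator output, and the bias/penalty weights are illustrative assumptions.

```python
import numpy as np

# Sketch of the fitness evaluation of Eqs. (18)-(20) on synthetic responses.
def itae(t, e):
    """Eq. (18): ITAE = integral over [0, Tsim] of t*|e(t)| dt (uniform grid)."""
    dt = t[1] - t[0]
    return float(np.sum(t * np.abs(e)) * dt)

def fitness(t, df1, df2, dptie_err, beta1, beta2, overshoot, t_settle, a1, a2):
    """Eq. (19): time-weighted integral of the deviations plus FD of Eq. (20)."""
    dt = t[1] - t[0]
    j = float(np.sum(t * (beta1 * np.abs(df1)
                          + beta2 * np.abs(df2)
                          + np.abs(dptie_err))) * dt)
    fd = a1 * abs(overshoot) + a2 * t_settle   # Eq. (20)
    return j + fd

t = np.linspace(0.0, 30.0, 3001)
df1 = -0.1 * np.exp(-0.3 * t) * np.cos(2.0 * t)    # synthetic ∆f1
df2 = -0.08 * np.exp(-0.3 * t) * np.cos(2.0 * t)   # synthetic ∆f2
dptie = -0.05 * np.exp(-0.4 * t)                   # synthetic tie-line error
J = fitness(t, df1, df2, dptie, beta1=0.532, beta2=0.495,
            overshoot=float(df1.min()), t_settle=10.0, a1=1.0, a2=0.1)
```

A more slowly damped response accumulates a larger time-weighted error, so the GA is pushed toward gains that settle the system quickly, which is exactly what the t-weighting in ITAE is for.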
4.1.3 Selection
Selection is the method of choosing the individuals that survive and move on to the next generation,
based on the fitness function, from a population of individuals in a genetic algorithm. In this paper
tournament selection is adopted [8], [9], [17]. The basic idea of the tournament selection scheme is to select a group of individuals randomly from the population. The individuals in this group
are then compared with each other, with the fittest among the group becoming the selected individual.
4.1.4 Crossover
The crossover operation is also called recombination. This operator manipulates a pair of individuals
(called parents) to produce two new individuals (called offspring or children) by exchanging
corresponding segments of the parents' coding [9], [11], [17]. In this paper simple arithmetic
crossover is adopted.
4.1.5 Mutation
By modifying one or more of the gene values of an existing individual, mutation creates new
individuals and thus increases the variability of the population [9], [17]. In the proposed work uniform
mutation is adopted.
4.1.6 Elitism
Elitism is a technique to preserve and use previously found best solutions in subsequent generations of
an EA [9], [17]. In an elitist EA, the population's best solutions cannot degrade from one generation to the next.
4.2. Pseudo code for the proposed RCGA
Step 1: Initialization
Set gen=1. Randomly generate N solutions to form the first population, Pinitial. Evaluate the fitness of
solutions in Pinitial. Initialize the probabilities of crossover (pc) and mutation (pm).
While (gen ≤ Max number of generations)
Step 2: Selection
Select the individuals, called parents, that contribute to the population at the next generation. In the proposed GA, tournament selection is used.
Step 3: Crossover
Generate an offspring population Child,
if pc >rand,
3.1. Choose the best solution x from Pinitial based on the fitness values and a random solution y from
the population for the crossover operation.
3.2. Using a crossover operator, generate offspring and add them back into the population.
Child1= r parent1 + (1 − r) parent2;
Child2= r parent2 + (1 − r) parent1;
end if
Step 4: Mutation
Mutation alters an individual, parent, to produce a single new individual, child.
if pm >rand,
Mutate the selected solution with a predefined mutation rate.
end if
Step 5: Fitness assignment
The fitness function defined by Eq. (19) is minimized for each feasible solution.
Step 6: Elitism
The selected number of Elite solutions (best solutions) is preserved in subsequent generations in
the population.
Step 7: Stopping criterion
If the maximum number of generations has been reached, terminate the search and return the
current population; else, set gen = gen + 1 and go to Step 2.
end while
The values of the GA operators used for optimization are presented in Appendix B.
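The pseudo code above can be condensed into a compact runnable routine. The sphere function below is a stand-in for the simulation-based fitness of Eq. (19) (evaluating the real fitness would require the Simulink model), and the operator settings follow Appendix B.

```python
import random

# Compact sketch of the RCGA of Section 4.2: tournament selection, simple
# arithmetic crossover, uniform mutation and elitism. Minimization problem.
def sphere(x):
    """Toy fitness standing in for the ITAE-based objective of Eq. (19)."""
    return sum(v * v for v in x)

def rcga(fitness, dim, bounds=(-5.0, 5.0), pop_size=100, gens=30,
         pc=0.95, pm=0.1, n_elite=2, seed=1):
    random.seed(seed)
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = [list(c) for c in sorted(pop, key=fitness)[:n_elite]]  # elitism
        children = []
        while len(children) < pop_size - n_elite:
            # tournament selection: the fitter of two random individuals wins
            p1 = min(random.sample(pop, 2), key=fitness)
            p2 = min(random.sample(pop, 2), key=fitness)
            c1, c2 = list(p1), list(p2)
            if random.random() < pc:              # simple arithmetic crossover
                r = random.random()
                c1 = [r * a + (1 - r) * b for a, b in zip(p1, p2)]
                c2 = [r * b + (1 - r) * a for a, b in zip(p1, p2)]
            for c in (c1, c2):                    # uniform mutation
                for i in range(dim):
                    if random.random() < pm:
                        c[i] = random.uniform(lo, hi)
            children.extend([c1, c2])
        pop = elite + children[:pop_size - n_elite]
    return min(pop, key=fitness)

best = rcga(sphere, dim=3)
```

The elite copies guarantee the best-so-far solution is never lost, so the best fitness is non-increasing across generations, matching the elitism property stated in Section 4.1.6.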
V. SIMULATION
To investigate the performance of the proposed RCGA, a two-area power system consisting of a hydro-
thermal system in one area and a thermal-gas system in the second area is considered. In each area
two GENCOs and two DISCOs are considered, with each DISCO demanding a load of
0.1 pu MW contracted from the GENCOs according to the bilateral contracts established between the
various GENCOs and DISCOs. The concept of a "DISCO participation matrix" (DPM) is used for the
simulation of contracts between GENCOs and DISCOs. In a restructured AGC system, a DISCO
asks a particular GENCO or GENCOs, within its own area or in the interconnected area, for
load power. Thus, as a particular set of GENCOs is supposed to follow the load demanded by a
DISCO, information signals must flow from a DISCO to a particular GENCO specifying the
corresponding demands. The demands are specified by the contract participation factors and the pu MW
load of a DISCO. These signals carry information as to which GENCO has to follow a load
demanded by which DISCO. Using the integral of time multiplied absolute error (ITAE) criterion, the
gains of the feedback controller and of the PID controller are tuned by the evolutionary
real-coded genetic algorithm. The simulation is done on the MATLAB/Simulink platform, and the
power system parameters used for the simulation are presented in Appendix A.
The GENCOs in each area participate in the ACE according to the following ACE participation factors (apfs):
apf1= 0.5; apf3= 0.5;
apf2= 1-apf1=0.5; apf4= 1-apf3=0.5;
5.1. Scenario I: Bilateral transactions
In this scenario, DISCOs have the freedom to contract with any GENCO in their own or other
areas. Consider that all the DISCOs contract with the available GENCOs for power as per the following
DPM. All GENCOs participate in the LFC task. It is assumed that a step load of 0.1 pu is
demanded by each DISCO in areas 1 and 2.
DPM=[ 0.4 0.25 0.0 0.3
0.3 0.25 0.0 0.0
0.1 0.25 0.5 0.7
0.2 0.25 0.5 0.0];
The frequency deviations of the two areas, the GENCOs' power generation, the tie-line power flow and the area
control error for the given operating conditions are depicted in Fig. 2 to Fig. 6:
Figure: 2. Frequency deviation in Area 1 and Area 2
a. GENCO 1: Thermal power plant b. GENCO 2: Hydro power plant
Figure: 3. Power generated by GENCOs in Area 1
a. GENCO 3: Thermal power plant b. GENCO 4: Gas power plant
Figure: 4. Power generated by GENCOs in Area 2
Figure: 5. Area Control Error (ACE) in Area 1 and Area 2
Figure: 6. Tie-line power Del Ptie-line12- Scheduled
From the simulation results shown in Fig. 6, owing to the bilateral contracts existing between the GENCOs
and DISCOs of area 1 and area 2, the tie-line power converges to the steady-state value
∆Ptie12,scheduled = -0.05 pu MW. At steady state the total generation should match the total demand
contracted by the DISCOs; thus the generation in area 1 and area 2 converges to the scheduled values
governed by the ISO, as tabulated in Table 1:
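The scheduled column of Table 1 follows directly from the DPM: each GENCO's contracted generation is the DPM-weighted sum of the DISCO demands, and the scheduled tie-line flow is area 1's export to the area-2 DISCOs minus its import from the area-2 GENCOs. A minimal sketch (GENCOs 1-2 and DISCOs 1-2 belong to area 1, GENCOs 3-4 and DISCOs 3-4 to area 2, as in the text):

```python
import numpy as np

# Scheduled generation from the DISCO participation matrix (DPM) of Scenario I:
#   dPg_sched(i) = sum_j DPM(i, j) * dPL(j)
DPM = np.array([[0.4, 0.25, 0.0, 0.3],
                [0.3, 0.25, 0.0, 0.0],
                [0.1, 0.25, 0.5, 0.7],
                [0.2, 0.25, 0.5, 0.0]])
dPL = np.array([0.1, 0.1, 0.1, 0.1])   # 0.1 pu MW demanded by each DISCO

dPg_sched = DPM @ dPL                   # scheduled generation of GENCOs 1-4
# scheduled tie-line flow: area-1 GENCOs -> area-2 DISCOs, minus
#                          area-2 GENCOs -> area-1 DISCOs
dPtie12_sched = (DPM[:2, 2:] @ dPL[2:]).sum() - (DPM[2:, :2] @ dPL[:2]).sum()
```

This reproduces the scheduled values 0.095, 0.055, 0.155 and 0.095 pu MW and the tie-line schedule of -0.05 pu MW quoted above.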
Table: 1. Power generated by GENCOs

GENCOs Generation   Scheduled   Uncontrolled   Feedback controller   PID controller
del Pg1             0.095       0.0949         0.0949                0.095
del Pg2             0.055       0.055          0.0548                0.055
del Pg3             0.155       0.154          0.1549                0.155
del Pg4             0.095       0.095          0.0949                0.095
del Ptie12          -0.050      -0.048         -0.050                -0.050
The time domain specifications, such as the overshoot and settling time of the frequency and tie-line
dynamics for the given operating conditions, are tabulated in Table 2.
Table: 2. Time domain specifications

             Uncontrolled              Feedback controller       PID controller
             Max        Settling       Max        Settling       Max        Settling
             Overshoot  Time (sec)     Overshoot  Time (sec)     Overshoot  Time (sec)
del f1       -0.253     20.963         -0.162     10.103         -0.108     10.671
del f2       -0.250     23.855         -0.093     11.752         -0.057     9.865
del Ptie12   -0.108     22.168         -0.100     10.927         -0.061     4.026
5.2. Scenario II: Contract violation by DISCOs in area 1
It may happen that a DISCO violates a contract by demanding more power than that specified in the
contract. This un-contracted power must be supplied by the GENCOs in the same area as the DISCO.
Consider Scenario I with the modification that the DISCOs in area 1 demand an additional 0.05 pu MW
of un-contracted power; let ∆PL,uc1 = 0.05 pu. This excess power should be supplied by the GENCOs in
area 1, and the generation in area 2 remains unchanged. The frequency and tie-line deviations, the power
generated by the GENCOs and the area control error are depicted in Fig. 7 to Fig. 11:
Figure: 7. Frequency deviation in Area 1 and Area 2
a. GENCO 1: Thermal power plant b. GENCO 2: Hydro power plant
Figure: 8. Power generated by GENCOs in Area 1
a. GENCO 3: Thermal power plant b. GENCO 4: Gas power plant
Figure: 9. Power generated by GENCOs in Area 2
Figure: 10. Area Control Error (ACE) in Area 1 and Area 2
Figure: 11. Tie-line power Del Ptie-line12- Scheduled
From the simulation results shown in Figs. 8-9, in the event of contract violation by the DISCOs in area
1, it is observed that the excess power demand is contributed by the GENCOs in the same area, while
the generation in area 2 and the scheduled tie-line power remain unchanged. At steady state the total
generation should match the total demand contracted by the DISCOs; thus the generation in areas 1
and 2 converges to the scheduled values governed by the ISO, as tabulated in Table 3.
Table: 3. Power generated by GENCOs

GENCOs Generation   Scheduled   Uncontrolled   Feedback controller   PID controller
del Pg1             0.095       0.1070         0.1328                0.1320
del Pg2             0.055       0.0668         0.0674                0.0672
del Pg3             0.155       0.1667         0.1549                0.1550
del Pg4             0.095       0.1061         0.0947                0.0950
del Ptie12          -0.050      -0.0723        -0.0498               -0.0500
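The area-1 pickup of the un-contracted demand can be sanity-checked against the apf-based sharing rule. This is the idealized steady-state view (the excess enters the ACE of area 1 and is shared in proportion to the apfs); the individual steady-state values in Table 3 also reflect the controller dynamics, but the total extra generation in area 1 must equal the un-contracted 0.05 pu.

```python
# Idealized apf-based sharing of the un-contracted demand of Scenario II.
# The excess dPL_uc1 appears in area 1's ACE and, under ideal ACE-proportional
# sharing, is picked up by the area-1 GENCOs according to their apfs.
apf_area1 = {"GENCO1": 0.5, "GENCO2": 0.5}   # apfs from Section V
dPL_uc1 = 0.05                                # un-contracted demand, pu MW

pickup = {g: a * dPL_uc1 for g, a in apf_area1.items()}
total_area1_pickup = sum(pickup.values())     # all 0.05 pu stays in area 1;
                                              # area 2 and the scheduled
                                              # tie-line flow are unaffected
```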
The time domain specifications, such as the overshoot and settling time of the frequency and tie-line dynamics for the given operating conditions, are tabulated in Table 4.
Table: 4. Time domain specifications

             Uncontrolled              Feedback controller       PID controller
             Max        Settling       Max        Settling       Max        Settling
             Overshoot  Time (sec)     Overshoot  Time (sec)     Overshoot  Time (sec)
del f1       -0.3078    26.747         -0.199     5.928          -0.116     8.961
del f2       -0.322     24.096         -0.141     10.598         -0.056     8.766
del Ptie12   -0.157     22.409         -0.0889    15.089         -0.059     3.701
5.3. Scenario III: Contract violation by DISCOs in area 2
Consider Scenario I with the modification that the DISCOs in area 2 demand an additional 0.05 pu MW
of un-contracted power; let ∆PL,uc2 = 0.05 pu. The frequency and tie-line deviations, the power
generated by the GENCOs and the area control error are depicted in Fig. 12 to Fig. 16:
Figure: 12. Frequency deviation in Area 1 and Area 2
a. GENCO1: Thermal power plant b. GENCO 2: Hydro power plant
Figure: 13. Power generated by GENCOs in Area 1
a. GENCO 3: Thermal power plant b. GENCO 4: Gas power plant
Figure: 14. Power generated by GENCOs in Area 2
Figure: 15. Area Control Error (ACE) in Area 1 and Area 2
Figure: 16 Tie-line power Del Ptie-line12-Scheduled
From the simulation results shown in Figs. 13-14, in the event of contract violation by the DISCOs in
area 2, it is observed that the excess power demand is contributed by the GENCOs in the same area,
while the generation in area 1 and the scheduled tie-line power remain unchanged. At steady state the
total generation should match the total demand contracted by the DISCOs; thus the generation in areas
1 and 2 converges to the scheduled values governed by the ISO, as tabulated in Table 5.
Table: 5. Power generated by GENCOs

GENCOs Generation   Scheduled   Uncontrolled   Feedback controller   PID controller
del Pg1             0.095       0.1071         0.095                 0.095
del Pg2             0.055       0.0668         0.0548                0.055
del Pg3             0.155       0.1667         0.180                 0.180
del Pg4             0.095       0.1059         0.120                 0.120
del Ptie12          -0.050      -0.0237        -0.0498               -0.050
The time domain specifications, such as the overshoot and settling time of the frequency and tie-line
dynamics for the given operating conditions, are tabulated in Table 6.
Table: 6. Time domain specifications

             Uncontrolled              Feedback controller       PID controller
             Max        Settling       Max        Settling       Max        Settling
             Overshoot  Time (sec)     Overshoot  Time (sec)     Overshoot  Time (sec)
del f1       -0.298     21.448         -0.193     11.744         -0.108     8.092
del f2       -0.295     24.096         -0.216     9.446          -0.051     8.684
del Ptie12   -0.079     16.385         -0.0837    19.914         -0.055     3.947
The convergence characteristic of the objective function given in Eq. (19) under the proposed algorithm is
shown in Fig. 17.
Figure.17. Convergence characteristic of the objective function
VI. CONCLUSIONS
From the dynamic responses obtained for the various operating conditions, it is inferred
that the implementation of the PID controller optimized by the evolutionary real-coded genetic algorithm
results in an appreciable reduction in the magnitude of the overshoot, convergence to steady state without
steady-state error, and an acceptable settling time for ∆f1, ∆f2 and ∆Ptie12. The PID controller
tuned by the algorithm also successfully tracks the generation of the individual GENCOs in accordance with the schedule laid down by the ISO, and ensures zero area control error at steady state in both areas over a
wide range of operating conditions. From the convergence characteristics it is inferred that the
proposed algorithm converges rapidly to the optimal solution within a small number of iterations. The
overall performance of the PID controller tuned by the proposed algorithm exhibits improved dynamic
performance over the optimally tuned feedback controller for the different operating conditions considered.
REFERENCES
[1] Elyas Rakhshani, Javad Sadeh, (2010) "Practical viewpoints on load frequency control problem in a deregulated power system", Energy Conversion and Management, 51, pp. 1148-1156.
[2] Y. L. Karnavas, K. S. Dedousis, (2010) "Overall performance evaluation of evolutionary designed conventional AGC controllers for interconnected electric power system studies in a deregulated market environment", International Journal of Engineering, Science and Technology, Vol. 2.
[3] Prabhat Kumar, Safia A. Kazmi, Nazish Yasmeen, (2010) "Comparative study of automatic generation control in traditional and deregulated power environment", World Journal of Modelling and Simulation, Vol. 6, No. 3.
[4] Pingkang Li, Xiuxix Du, (2009) "Multi-Area AGC System Performance Improvement Using GA Based Fuzzy Logic Control", The International Conference on Electrical Engineering.
[5] Janardan Nanda, S. Mishra, Lalit Chandra Saikia, (2009) "Maiden application of bacterial foraging based optimization technique in multiarea automatic generation control", IEEE Transactions on Power Systems, Vol. 24, No. 2, pp. 602-609.
[6] Hassan Bevrani, Takashi Hiyama, (2007) "Multiobjective PI/PID Control Design Using an Iterative Linear Matrix Inequalities Algorithm", International Journal of Control, Automation, and Systems, Vol. 5, No. 2, pp. 117-127.
[7] Hossein Shayeghi, Heidar Ali Shayanfar, Aref Jalili, (2007) "Multi Stage Fuzzy PID load frequency controller in a restructured power system", Journal of Electrical Engineering, Vol. 58, No. 2, pp. 61-70.
[8] Reza Hemmati, Sayed Mojtaba Shirvani Boroujeni, Hamideh Delafkar, Amin Safarnezhad Boroujeni, (2011) "PID Controller Adjustment using PSO for Multi Area Load Frequency Control", Australian Journal of Basic and Applied Sciences, 5(3), pp. 295-302.
[9] A. Konak et al., (2006) "Multi-objective optimization using genetic algorithms: A tutorial", Reliability Engineering and System Safety.
[10] Hassan Bevrani, Yasunori Mitani, Kiichiro Tsuji, (2003) "Robust Load-Frequency Regulation in a New Distributed Generation Environment", in Proceedings of the IEEE Power Engineering Society General Meeting.
[11] V. Donde, M. Pai, I. Hiskens, (2001) "Simulation and Optimization in an AGC System after Deregulation", IEEE Transactions on Power Systems, 16(3), pp. 481-488.
[12] Aimin Zhou, Bo-Yang Qu, Hui Li, Shi-Zheng Zhao, Ponnuthurai Nagaratnam Suganthan, Qingfu Zhang, "Multiobjective evolutionary algorithms: A survey of the state of the art".
[13] V. Donde, M. A. Pai, I. A. Hiskens, "Simulation of Bilateral Contracts in an AGC System After Restructuring".
[14] K. S. S. Ramakrishna, T. S. Bhatti, (2006) "Load frequency control of interconnected hydro-thermal power systems", International Conference on Energy and Environment (ICEE 2006).
[15] Preghnesh Bhatt, S. P. Ghoshal, Ranjit Roy, (2010) "Automatic Generation Control of Two-area Interconnected Hydro-Hydro Restructured Power System with TCPS and SMES", Proc. of Int. Conf. on Control, Communication and Power Engineering 2010.
[16] Preghnesh Bhatt, S. P. Ghoshal, Ranjit Roy, (2010) "Optimized multi area AGC simulation in restructured power systems", International Journal of Electrical Power & Energy Systems, Vol. 32, Issue 4, pp. 311-322.
[17] D. Goldberg, (1989) Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley.
[18] P. Kundur, Power System Stability and Control, McGraw-Hill.
APPENDIX A: Parameter values of the power system
Area 1
GENCO 1: Tg11=0.0875; Tt11=0.4; Kr11=.33; Tr11=10; R1=3;
GENCO 2: Tg12=0.1; T21=0.513; T31=10; Tw1=1; R2=3.125;
B1=0.532
Area 2
GENCO 3: Tg21=0.075; Tt21=0.375; Kr2=.33; Tr2=10; R3=3.125;
GENCO 4: Tggas=1.5; Ttgas=0.1; Tlgas=5; R4=3.375;
B2=0.495;
T12=0.543.
APPENDIX B: Genetic Algorithm parameters
No of population : 100
Maximum no of generations : 30
Crossover : Arithmetic
Crossover probability (pc) : 0.95
Mutation : Uniform
Mutation probability (pm) : 0.1
Elitism : Yes
No. of Elite solutions : 2
AUTHORS BIOGRAPHY
S. Farook received B.Tech degree in Electrical & Electronics engineering from SVNEC,
Tirupathi in 2001 and M.Tech degree in Power systems and High voltage Engineering from
JNTU, Kakinada in the year 2004. He presently is working towards his Ph.D degree in S.V.
University, Tirupathi. His areas of interest are in soft computing techniques in power system
operation & control and stability.
P. Sangameswararaju received Ph.D from S.V. University, Tirupathi, Andhra Pradesh.
Presently he is working as Professor in the Department of Electrical and Electronics
Engineering, S.V. University. Tirupathi, Andhra Pradesh. He has about 50 publications in
National and International Journals and Conferences to his credit. His areas of interest are in
power system Operation & control and Stability.
AUTOMATIC DIFFERENTIATION BETWEEN RBC AND
MALARIAL PARASITES BASED ON MORPHOLOGY WITH
FIRST ORDER FEATURES USING IMAGE PROCESSING
Jigyasha Soni1, Nipun Mishra2, Chandrashekhar Kamargaonkar3
1 Dept. of SSCET, Bhilai, India
2 Research Scholar, IITDM, Jabalpur, India
3 Associate Professor, Dept. of ETC, SSCET, Bhilai, India
ABSTRACT
Malaria is the most important parasitic infection of humans and is associated with a huge burden of morbidity
and mortality in many parts of the tropical world. The World Health Organization estimates 300-500 million malaria
cases and more than 1 million deaths per year. The definitive diagnosis of malaria infection is done by
searching for parasites in blood slides (films) through a microscope. However, this is a routine and time
consuming task, and a recent field study shows that the agreement rates among clinical experts on
the diagnosis are surprisingly low. Hence, it is very important to produce a common standard tool which is able
to perform the diagnosis with the same ground criteria uniformly everywhere. Techniques proposed earlier
make use of thresholding or morphology to segment an image. Here we present a technique that takes
advantage of morphological operations and thresholding at the appropriate positions in the whole process to
maximize the effectiveness of the algorithm and differentiate between simple RBCs and malaria parasites. The
approach presented here detects red blood cells with subsequent classification into parasite-infected and
uninfected cells for the estimation of parasitaemia.
KEYWORDS: parasites, morphology, segmentation, diagnosis, thresholding
I. INTRODUCTION
Malaria cannot be passed directly from one human to another; it is transmitted by a mosquito [2]. The incubation period for malaria varies considerably: for the most serious form of malaria it is eight to twelve days, while in some rare forms it can be as long as ten months [3]. A lot of research has been carried out on the automatic processing of infected blood cells. Jean-Philippe Thiran [4] described a method, based on mathematical morphology, for the automatic recognition of cancerous tissues from an image of a microscopic section; this method pays no special attention to the speed of the algorithms. An accurate technique for the determination of parasitaemia has been suggested by Selena W. S. Sio [7]; the algorithm has four stages, namely edge detection, edge linking, clump splitting and parasite detection. The PPV reported by Sio is 28-81%, and it takes 30 seconds to process a single image. F. Boray Tek [8] transforms the images to match the colour characteristics of a reference image; the parasite detector utilizes a Bayesian pixel classifier to mark stained pixels. The reported sensitivity is 74% and the PPV 88%, and again no special attention is paid to the speed of the algorithms. The objective of our work is to develop a fully automated image classification system to positively identify malaria parasites present in thin blood smears and differentiate the species. The effort of the algorithm is to detect the presence of the parasite at any stage, so if this algorithm is incorporated into routine tests, the presence of the malaria parasite can be detected.
II. MORPHOLOGICAL FEATURES OF RBC AND MALARIA PARASITES
2.1 Morphological features of RBC
RBCs are among the smallest cells in the body (the smallest is the sperm cell): normally disc-shaped, soft and flexible, red in colour, and the most numerous type of cell present in the blood. A typical RBC has a diameter of 7.7 µm (micrometres) and a maximum thickness of roughly 2.6 µm, narrowing to about 0.8 µm at the centre. The total surface area of the RBCs in the blood of a typical adult is roughly 3800 square metres, about 2000 times the total surface area of the body.
2.2 Morphological features of malarial parasites
There are four species of human malaria parasite: Plasmodium falciparum, P. vivax, P. malariae and P. ovale. P. falciparum and P. vivax are the most common, and P. falciparum is by far the most deadly type of malaria infection.
(a) (b) (c) (d) (e)
Figure 1: (a) Simple RBC (b) Plasmodium falciparum (c) P. vivax (d) P. malariae (e) P. ovale
Table 1: Morphological features of the host red blood cell by species of Plasmodia in stained thin blood film

Age:
  P. falciparum: young and old erythrocytes infected
  P. vivax: young erythrocytes infected
  P. ovale: young erythrocytes infected
  P. malariae: older erythrocytes infected
Dimensions:
  P. falciparum: normal
  P. vivax: enlarged
  P. ovale: enlarged, sometimes assuming oval shape
  P. malariae: normal
Colour:
  P. falciparum: normal to dark
  P. vivax: normal to pale
  P. ovale: normal
  P. malariae: normal
Granules:
  P. falciparum: unusual coarse scattered red stippling in mature trophozoites or schizonts (Maurer's clefts)
  P. vivax: frequent fine red diffuse stippling in all stages of the erythrocytic developmental cycle (Schuffner's dots)
  P. ovale: frequent fine red diffuse stippling in all stages of the erythrocytic developmental cycle (Schuffner's dots, also called James' dots)
  P. malariae: none
Pigment:
  P. falciparum: dark brown and usually compact
  P. vivax: golden brown and usually loose
  P. ovale: brown coarse pigment granules
  P. malariae: brown coarse scattered pigment granules
Leucocytes: the presence of malaria pigment in neutrophils and monocytes is a prognostic marker of severe disease
III. THE STEPS OF ALGORITHM
Visual quantification of parasitemia in thin blood films is a very tedious, subjective and time-consuming task. This selected algorithm presents an original method for enumeration and
classification of erythrocytes in stained thin blood films infected with malarial parasites. The process is given below:
1. Image Acquisition (done using a high-resolution digital camera)
2. Image Analysis
3. Image Segmentation
4. Feature Generation
5. Classification of Parasite and result verification
3.1 Image acquisition and database collection
Oil immersion views (10 × 1000) of Giemsa-stained blood films were captured using a binocular microscope mounted with a digital camera. Captured images were 460 × 307 pixel bitmap images.
3.2 Image analysis
Image analysis usually starts with a pre-processing stage, which includes operations such as noise reduction. The Canny edge detector, which has become one of the most widely used edge-finding algorithms, is found to be about ten times slower than the SUSAN approach.
3.2.1 Non-linear filtering: the SUSAN approach
For a real-time system using time-varying image sequences, speed is an important criterion. There also has to be a compromise between maximizing signal extraction and minimizing output noise: the so-called "uncertainty principle" of edge detection. We have implemented a new approach to low-level image processing, the SUSAN (Smallest Univalue Segment Assimilating Nucleus) principle [10], which performs edge and corner detection and structure-preserving noise reduction.
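The USAN idea at the core of SUSAN can be sketched minimally as follows. Simplifying assumptions: a square 3×3 mask instead of SUSAN's 37-pixel circular mask, and a hard brightness threshold instead of the smooth similarity function of the original; it is an illustration of the principle, not the full detector.

```python
import numpy as np

# USAN sketch: for each pixel (the "nucleus"), count the neighbours inside a
# small mask whose brightness is within a threshold t of the nucleus. Pixels
# whose USAN area falls below the geometric threshold g lie on edges; the
# response grows as the USAN shrinks. Border pixels are left at zero.
def usan_edge_response(img, t=27, radius=1):
    img = img.astype(float)
    h, w = img.shape
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)]
    g = 3 * len(offsets) / 4          # geometric threshold: 3/4 of the mask area
    resp = np.zeros_like(img)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            area = sum(abs(img[y + dy, x + dx] - img[y, x]) <= t
                       for dy, dx in offsets)
            resp[y, x] = max(g - area, 0)
    return resp

# synthetic step edge: left half dark, right half bright
img = np.zeros((10, 10))
img[:, 5:] = 100
resp = usan_edge_response(img)        # nonzero only along the brightness step
```

Because the response requires no derivative computation, this scheme tolerates noise well and is cheap per pixel, which is the source of the speed advantage over Canny noted above.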
3.3 Image segmentation:
Segmentation should be done before the actual recognition stage, to extract only the part of the image that carries useful information. The goal of the segmentation process is to define areas within the image that have properties that make them homogeneous. After segmentation, the discontinuities in the image that correspond to the boundaries between regions can be easily established.
3.3.1 Segmentation using morphology
The most commonly used morphological procedure for estimating the size distribution of image components is the granulometry [9]. The size and eccentricity of the erythrocytes are also required for the calculation of some feature values (as these can be indicative of infection). The shape of the objects (circular erythrocytes) is known a priori, but the image must be analyzed to determine the size distribution of the objects and the average eccentricity of the erythrocytes present. Here grey-scale granulometries based on openings with disk-shaped elements are used: non-flat disk-shaped structuring elements to enhance the roundness and compactness of the red blood cells, and flat disk-shaped structuring elements to segment overlapping cells. The object to be segmented differs greatly in contrast from the background image. Changes in contrast can be detected by operators that calculate the gradient of an image: the gradient image is calculated and a threshold is applied to create a binary mask containing the segmented cell. The binary gradient mask is dilated using the vertical structuring element followed by the horizontal structuring element. The cell of interest has then been successfully segmented, but it is not the only object found; any objects connected to the border of the image can be removed. Finally, an outline is placed around the segmented cell.
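A granulometry can be sketched as a pattern spectrum: the foreground area remaining after openings with structuring elements of increasing size, where blobs smaller than the element vanish. The sketch below is a deliberately simplified binary, square-element version of the grey-scale disk openings described above, run on a synthetic image of two "cells" of different sizes.

```python
import numpy as np

# Binary granulometry sketch: opening = erosion followed by dilation with a
# k x k square element; the pattern spectrum records the surviving area.
def erode(img, k):
    h, w = img.shape
    out = np.zeros_like(img)          # border pixels stay background
    r = k // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y, x] = img[y - r:y + r + 1, x - r:x + r + 1].min()
    return out

def dilate(img, k):
    h, w = img.shape
    out = np.zeros_like(img)
    r = k // 2
    for y in range(h):
        for x in range(w):
            out[y, x] = img[max(0, y - r):y + r + 1,
                            max(0, x - r):x + r + 1].max()
    return out

def opening(img, k):
    return dilate(erode(img, k), k)

def pattern_spectrum(img, sizes):
    """Foreground area remaining after opening with each element size."""
    return [int(opening(img, k).sum()) for k in sizes]

# synthetic "cells": two square blobs of different sizes on a 40x40 field
img = np.zeros((40, 40), dtype=int)
img[5:10, 5:10] = 1      # small 5x5 blob
img[20:33, 20:33] = 1    # large 13x13 blob
spec = pattern_spectrum(img, sizes=[3, 7, 15])
```

The drops in the spectrum reveal the dominant object sizes: the small blob disappears once the element exceeds 5 pixels and the large one once it exceeds 13, which is how the average erythrocyte size is estimated before segmentation.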
IV. FEATURE GENERATION AND CLASSIFICATION
4.1 Feature Generation
Two sets of features are used for development. The first set will be based on image characteristics that have been used previously in biological cell classifiers, which include geometric features (shape and size), colour attributes and grey-level textures.
It will be advantageous to apply expert, a priori knowledge to a classification problem. This will be done with the second set of features, where measures of parasite and infected erythrocyte morphology that are commonly used by technicians for manual microscopic diagnosis are used. It’s desirable to focus on these features, because it is already known that they are able to differentiate between species of malaria.
4.2 Feature Classification
The final classification of an erythrocyte as infected with malaria or not, and if so, the species of the parasite, falls to the classifier. The classifier is a two-stage tree classifier, with an infection classified as positive or negative at the first node, and the species assigned at the second node. The design of a tree classifier has the following steps: the design of a tree structure (which has already been assigned), the selection of features to be used at every node, and the choice of decision rule at each node [12]. The same type of classifier is used at both nodes.
Figure 2: Structure of the tree classifier
The features selected for the first classifier are those that describe the colour and texture of the possible parasites. The features used by microscopists to differentiate malaria species are selected for the second classifier. The training goal is to minimize squared errors, and training is stopped when the error of a validation set increases. This is done to avoid overtraining.
V. RESULTS FOR RBC AND MALARIA PARASITE AFFECTED BLOOD
(a) (b) (c) (d) (e) (f)
Figure 3:The comparison between output ofof simple RBC(c)Susan output of simple affected blood cell(f)SUSAN output of parasite affected
We can see in Figure 3,Canny edge detector is the powerful edge edges of object, broken edges are mixed with background, very few junction involving more than two edges are correctly connected. Some sharper detection, good localization, has a single response to a single at junction is complete, the reported edges lie exactly on the image the brightness ramp are correctly found and no false edges are reported as faster.
International Journal of Advances in Engineering & Technology, Nov 2011.
ISSN: 2231
It will be advantageous to apply expert, a priori knowledge to a classification problem. This will be done with the second set of features, where measures of parasite and infected erythrocyte morphology
y used by technicians for manual microscopic diagnosis are used. It’s desirable to focus on these features, because it is already known that they are able to differentiate between species
an erythrocyte as infected with malaria or not, and if so, the species of the parasite, falls to the classifier. The classifier is a two-stage tree classifier, with an infection classified as positive or negative at the first node, and the species assigned at the second node. The design of a tree classifier has the following steps: the design of a tree structure (which has already been assigned), the selection of features to be used at every node, and the choice of decision rule at
e type of classifier is used at both nodes.
Figure 2: Structure of the tree classifier
The features selected for the first classifier are those that describe the colour and texture of the possible parasites. The features used by microscopists to differentiate malaria species are selected for
The training goal is to minimize squared errors, and training is stopped when the t increased. This is done to avoid overtraining.
AND MALARIA PARASITE AFFECTED BLOOD
(a) (b) (c) (d) (e) (f)
output of CANNY and SUSAN algorithm-(a)simple rbc (b)(c)Susan output of simple RBC (d)parasite affected blood cell(e)CANNY output of parasite
output of parasite affected blood cell
,Canny edge detector is the powerful edge detector, but they cannotedges of object, broken edges are mixed with background, very few junction involving more than two edges are correctly connected. Some sharper corners have broken edges. SUSAN provides good
a single response to a single edge. We can see the edge connectivity reported edges lie exactly on the image edges, the edges around and inside
the brightness ramp are correctly found and no false edges are reported as faster.
International Journal of Advances in Engineering & Technology, Nov 2011.
ISSN: 2231-1963
It will be advantageous to apply expert, a priori knowledge to a classification problem. This will be done with the second set of features, where measures of parasite and infected erythrocyte morphology
y used by technicians for manual microscopic diagnosis are used. It’s desirable to focus on these features, because it is already known that they are able to differentiate between species
an erythrocyte as infected with malaria or not, and if so, the species of the stage tree classifier, with an infection classified
The design of a tree classifier has the following steps: the design of a tree structure (which has already been assigned), the selection of features to be used at every node, and the choice of decision rule at
and texture of the possible parasites. The features used by microscopists to differentiate malaria species are selected for
The training goal is to minimize squared errors, and training is stopped when the
LOOD CELL
(a)simple rbc (b)CANNY output output of parasite
cannot connect the edges of object, broken edges are mixed with background, very few junction involving more than two
corners have broken edges. SUSAN provides good can see the edge connectivity
edges around and inside
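The two-stage tree classifier of Section 4.2 can be sketched as follows; the feature names and thresholds here are hypothetical placeholders, not the trained values from the paper:

```python
# Minimal sketch of a two-stage tree classifier: node 1 decides
# infected/not-infected, node 2 assigns a species. The features
# ("stain_ratio", "ring_forms", "enlarged_rbc") and the 0.15
# threshold are made-up placeholders for illustration.

def stage1_infected(features):
    """Node 1: positive/negative based on parasite colour/texture cues."""
    return features["stain_ratio"] > 0.15  # hypothetical threshold

def stage2_species(features):
    """Node 2: assign a species using morphology-style features."""
    if features["ring_forms"]:
        return "P. falciparum"
    if features["enlarged_rbc"]:
        return "P. vivax"
    return "P. malariae"

def classify(features):
    if not stage1_infected(features):
        return "negative"
    return stage2_species(features)

print(classify({"stain_ratio": 0.02, "ring_forms": False, "enlarged_rbc": False}))  # negative
print(classify({"stain_ratio": 0.30, "ring_forms": True, "enlarged_rbc": False}))   # P. falciparum
```

In the paper both nodes use the same type of trained classifier; simple threshold rules stand in for it here only to show the tree's decision flow.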
(g) (h)
Figure 4: (g) shows the sum of pixel values in the opened image as a function of radius for simple RBC (h) the sum of pixel values in the opened image as a function of radius for parasite affected blood cell
(i) (j)
Figure 5: (i) graph shows the size distribution of simple RBC (j) graph shows the size distribution of parasite affected blood cell
In figure 4 above, the graphs for RBC and malaria parasite show the sum of pixel values in the opened image as a function of radius. Granulometry estimates the intensity surface area distribution of objects (parasite affected RBCs) as a function of size. Granulometry likens image objects to RBCs whose sizes can be determined by sifting them through screens of increasing size and collecting what remains after each pass. Image objects are sifted by opening the image with a structuring element of increasing size and counting the remaining intensity surface area (summation of pixel values in the image) after each opening. We choose a counter limit so that the intensity surface area goes to zero as we increase the size of the structuring element. In figure 5, the graphs for RBC and malaria parasites show the size distribution of RBCs. A significant drop in intensity surface area between two consecutive openings indicates that the image contains objects of comparable size to the smaller opening. This is equivalent to the first derivative of the intensity surface area array, which contains the size distribution of the objects in the image.
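The opening-and-summing procedure described above can be sketched in pure Python on a toy image (illustrative only; the paper uses disk-shaped elements, while this sketch uses flat square ones for brevity):

```python
# Granulometry sketch: open a toy grayscale image with flat square
# structuring elements of increasing size, record the remaining
# intensity surface area, and take its first difference as the size
# distribution. The image and element shapes are illustrative.

def local_extremum(img, k, op):
    """Apply a (2k+1)x(2k+1) min or max filter (flat structuring element)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - k), min(h, y + k + 1))
                    for nx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = op(vals)
    return out

def opening(img, k):
    """Grayscale opening: erosion (min) followed by dilation (max)."""
    return local_extremum(local_extremum(img, k, min), k, max)

def surface_area(img):
    return sum(sum(row) for row in img)

# Toy image: one small and one larger bright blob.
image = [[0.0] * 12 for _ in range(12)]
for y in range(1, 3):
    for x in range(1, 3):
        image[y][x] = 100.0        # 2x2 blob
for y in range(5, 10):
    for x in range(5, 10):
        image[y][x] = 100.0        # 5x5 blob

areas = [surface_area(opening(image, k)) for k in range(0, 4)]
size_distribution = [a - b for a, b in zip(areas, areas[1:])]
print(areas)              # drops occur where blobs disappear
print(size_distribution)  # peaks reveal the 2x2 and 5x5 blob sizes
```

The drops in the area curve occur exactly when a structuring element outgrows a blob, which is the principle behind the plots in Figures 4 and 5.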
(a) (b)
Figure 6: (a) Histogram plot of simple RBC (b) Histogram plot of parasite affected blood cell
In figure 6 above, the threshold gray level for extracting objects from their background is determined. Two threshold levels need to be determined from the histogram: one for erythrocytes and one for parasites. The histogram shows the intensity distribution in the image.
(a) (b)
Figure 7 :(a)Size distribution of simple RBC cell (b)Size distribution of parasite affected blood cell
In figure 7, a histogram containing 10 bins shows the distribution of different RBC sizes. The histogram shows the most common size for parasite affected RBCs in the image. We extract new areas of the image and update the distribution; here we plot the number of data values that occur in a specified data range, displaying the data in a Cartesian coordinate system.
(a) (b)
Figure 8: (a)RBC after segmentation (b)Parasite affected blood cell after segmentation
Figure 8 represents the final detected cell of the original image. After comparing the first-order statistics, we finally indicate the segmented image.
VI. CALCULATION CHART FOR SENSITIVITY AND POSITIVE PREDICTIVE VALUE
Observation: The test results of 25 blood images consisting of 502 red blood cells are included in the table below. The values are tabulated and compared with manual counting.
Table-2

Test    Algorithm 1        Algorithm 2        Manual counting
images  RBC   Parasites    RBC   Parasites    RBC   Parasites
1       12    2            11    3            12    2
2       12    4            12    3            12    4
3       27    2            27    2            27    2
4       51    7            39    13           51    7
5       0     1            16    2            15    1
6       15    1            11    3            21    1
7       21    1            21    0            17    1
8       0     1            4     3            16    1
9       37    2            12    4            12    2
10      12    2            12    4            12    2
11      24    1            24    0            24    1
12      0     1            11    5            40    2
13      31    2            44    20           12    1
14      0     1            13    0            15    6
15      0     1            48    8            14    1
16      11    6            15    0            17    1
17      0     1            7     1            21    2
18      0     2            17    0            9     2
19      13    1            12    1            25    3
20      25    3            25    7            21    1
21      14    1            14    2            14    0
22      0     2            9     1            14    2
23      21    1            20    0            16    1
24      0     1            10    3            57    2
25      11    1            17    0            8     1
VII. RESULTS FOR SENSITIVITY AND POSITIVE PREDICTIVE VALUE
The performance and accuracy of the algorithm are analyzed using two measures: sensitivity, the ability of the algorithm to detect a parasite that is present; and positive predictive value (PPV), the success of the algorithm at excluding non-infected cells. These values are expressed in terms of true positives (TP), false positives (FP) and false negatives (FN):
Sensitivity = TP / (TP + FN)

PPV = TP / (TP + FP)

According to our results, the sensitivity comes to 98% and the positive predictive value comes to 96%, over the 25 test images.
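As a quick check of the definitions, sensitivity and PPV can be computed directly; the counts below are hypothetical examples, not the paper's raw tallies:

```python
# Sensitivity and positive predictive value from TP/FN/FP counts,
# following the definitions above. The counts are made up to show
# how 98% and 96% would arise.

def sensitivity(tp, fn):
    return tp / (tp + fn)

def ppv(tp, fp):
    return tp / (tp + fp)

# Hypothetical tallies: 49 parasites found, 1 missed, 2 spurious.
tp, fn, fp = 49, 1, 2
print(round(sensitivity(tp, fn), 2))  # 0.98
print(round(ppv(tp, fp) * 100))       # 96
```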
Results of the first-order features of simple RBC and parasite affected blood cells (P. falciparum, P. vivax, P. malariae, P. ovale): in this section we can see that the first-order features are different for each parasite.

Table-3

ORIGINAL IMAGE   MEAN     SKEWNESS   ENTROPY
Simple RBC       6.8315   -0.6923    1.5342
P. falciparum    7.2492   -1.0077    1.2036
P. vivax         7.8151   -0.4231    1.3199
P. malariae      7.1696   -0.9730    1.4989
P. ovale         7.0041   -0.5615    1.6735
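The first-order statistics in Table 3 can be computed from a grey-level histogram using the standard definitions; a minimal sketch with a made-up sample histogram:

```python
# First-order statistics (mean, skewness, entropy) of a grey-level
# histogram, using the standard textbook definitions. The sample
# histogram is illustrative, not taken from the paper's images.
import math

def first_order_stats(hist):
    total = sum(hist)
    p = [h / total for h in hist]  # normalised histogram
    mean = sum(i * pi for i, pi in enumerate(p))
    var = sum((i - mean) ** 2 * pi for i, pi in enumerate(p))
    skew = sum((i - mean) ** 3 * pi for i, pi in enumerate(p)) / (var ** 1.5)
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return mean, skew, entropy

hist = [2, 3, 10, 40, 30, 10, 3, 2]  # counts per grey-level bin
mean, skew, entropy = first_order_stats(hist)
print(mean, skew, entropy)
```

Because these three numbers summarise the shape of the intensity distribution, they differ between uninfected cells and each parasite species, as Table 3 shows.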
VIII. CONCLUSION
The proposed automated parasite detection algorithm avoids the problems associated with rapid methods, such as being species-specific and having high per-test costs, while retaining many of the traditional advantages of microscopy, viz. species differentiation, determination of parasite density, explicit diagnosis and low per-test costs. On the basis of these results we can differentiate simple RBCs from parasite affected blood cells and also differentiate the species of malaria parasites. The proposed algorithm is optimized to overcome limitations of image processing algorithms used in the past. Among the tested algorithms, the SUSAN edge detection technique gave good localization of edges but formed a thick border, making cell separation difficult. Even if the staining of the RBC is not properly done, the edge of a parasite affected RBC can still be easily detected with the help of the SUSAN algorithm; this is an important property of SUSAN. Otsu's algorithm gave accurate separation of RBCs, whereas local and global thresholding segmented the parasites. Granulometry provides the size distribution of objects in the image. The first-order features provide mathematical ranges for simple RBCs and parasite affected RBCs; these values are different for the different malarial parasites. Results show that the algorithm developed in this project has better sensitivity than that of F. Boray Tek and a better positive predictive value than those of Selena W.S. Sio and F. Boray Tek, and is applicable to many other blood cell abnormalities besides malaria, in contrast to the algorithm developed by Jean-Philippe Thiran. This is because the percentage of pathological differences in various diseases has very little effect on this robust algorithm. The algorithm detects the species of parasite with a sensitivity of 98% and a positive predictive value of 96%.
IX. FUTURE SCOPE
After successful implementation, the algorithm can be modified to provide additional facilities in routine blood check-up, such as a differential white blood cell count, detection of any other parasite causing a major disease, etc.
REFERENCES
[1] World Health Organization. What is malaria? Fact sheet no. 94. http://www.who.int/mediacentre/factsheets/fs094/en/.
[2] Foster S, Phillips M, Economics and its contribution to the fight against malaria. Ann Trop Med Parasitol 92:391-398, 1998.
[3] F. Castelli, G. Carosi, Diagnosis of malaria, chapter 9, Institute of Infectious and Tropical Diseases, University of Brescia (Italy).
[4] Jean-Philippe Thiran, Benoit Macq, Morphological Feature Extraction for the Classification of Digital Images of Cancerous Tissues. IEEE Transactions on Biomedical Engineering, Vol. 43, no. 10, October 1996.
[5] C. Di Ruberto, A. Dempster, S. Khan, and B. Jarra. Automatic thresholding of infected blood images using granulometry and regional extrema. In ICPR, pages 3445-3448, 2000.
[6] Silvia Halim, Timo R. Bretschneider, Yikun Li, Estimating Malaria Parasitaemia from Blood Smear Images. ©IEEE, ICARCV 2006.
[7] Selena W.S. Sio, Malaria Count: An image analysis-based program for the accurate determination of parasitaemia, Laboratory of Molecular and Cellular Parasitology, Department of Microbiology, Yong Loo Lin School of Medicine, National University of Singapore, May 2006.
[8] F. Boray Tek, Andrew G. Dempster and Izzet Kale, Malaria Parasite Detection in Peripheral Blood Images, Applied DSP & VLSI Research Group, London, UK, Dec 2006.
[9] Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing, 2nd Edition, Prentice Hall, 2006.
[10] S. M. Smith, J. M. Brady, SUSAN—A New Approach to Low Level Image Processing, International Journal of Computer Vision, Volume 23, Issue 1, pages 45-78, May 1997.
[11] Di Ruberto C, Dempster A, Khan S, Jarra B, Analysis of infected blood cell images using morphological operators. Image Vis Comput 20(2):133-146, 2002.
[12] Mui JK, Fu K-S, Automated classification of nucleated blood cells using a binary tree classifier. IEEE Trans Pattern Anal Machine Intell 2(5):429-443, 1980.
Authors
Jigyasha Soni is an Electronics & Communication Engineer and Head of the Department of Electronics & Communication at B.I.T.M.R. Rajnandgaon, India. She has more than 4 years of experience in teaching. She is a postgraduate student at S.S.C.E.T. Bhilai, India. Her area of work is Image Processing.
Nipun Kumar Mishra is an Assistant Professor in the Department of Electronics & Communication Engineering at G.G.V. Bilaspur and a research scholar at PDPM Indian Institute of Information Technology, Design and Management, Jabalpur, India. He has more than 9 years of experience in teaching. His current areas of research include Signal Processing, Wireless Communication and Antennas. He is presently working on Smart Antennas at PDPM IIITDM, Jabalpur, India. He is a Life Member of IETE, a Life Member of ISTE and an Associate Member of IE (India).
Chandrashekhar Kamargaonkar is an associate professor in the Department of Electronics & Communication Engineering at S.S.C.E.T. Bhilai, India, where he is also the M.E. Coordinator. He has more than 7 years of experience in teaching. He received his Master's degree (M.E.) from S.S.G.M. College of Engineering, Shegaon, India. His current areas of research include Image Processing and Digital Communication.
REAL ESTATE APPLICATION USING SPATIAL DATABASE
1M. Kiruthika, 2Smita Dange, 3Swati Kinhekar, 4Girish B, 5Trupti G, 6Sushant R
1Assoc. Prof., Deptt. of Comp. Engg., Fr. CRIT, Vashi, Navi Mumbai, Maharashtra, India
2,3Asstt. Prof., Deptt. of Comp. Engg., Fr. CRIT, Vashi, Navi Mumbai, Maharashtra, India
4,5,6Deptt. of Comp. Engg., Fr. CRIT, Vashi, Navi Mumbai, Maharashtra, India
ABSTRACT
Real estate can be defined as rights and improvements to own or use land. Most real estate applications provide features such as specification based searching, agent notification, adding property for sale, loan information, etc. This paper presents a system which has all the features of a real estate application but uses spatial databases, thus incorporating the flexibility and strength provided by spatial databases.
KEYWORDS: Spatial Database, Real Estate.
I. INTRODUCTION
Whenever a new house is chosen or searched for, the main focus is on the location. As location is a spatial entity, we use the advantages offered by spatial databases in our application. The application allows the user to select any particular location and get related information appropriately.
Spatial data is data about location and space. This data can be represented in 2-dimensional or 3-dimensional form. Spatial data is primarily used in geographical information systems. Many examples of spatial data exist, but the prominent example is the satellite image, for which the Earth acts as the reference system. Another example of spatial data is medical imaging, in which the human body acts as the spatial frame of reference.
A spatial database is a collection of spatial data, and a spatial database system is a collection of spatial data and software which helps us store, retrieve, modify and search spatial data efficiently. R. Güting has defined the spatial database system as follows:
• A spatial database system is a database system.
• It offers spatial data types (SDTs) in its data model and query language.
• It supports spatial data types in its implementation, providing at least spatial indexing and efficient algorithms for spatial join.
The above definition is sound: it tells us that a spatial database system is like a traditional database system, but since spatial data is complex and different from non-spatial data, it needs different data type support and a different query language for retrieval of data.

A road map is a common example of a spatial database. It is represented with two-dimensional objects: it consists of cities, roads and boundaries, which can be represented as points, lines and polygons respectively. While representing these objects, their relative positions with respect to the Earth are preserved [1].
II. LITERATURE SURVEY
2.1 Need for Spatial Databases
The Geographic Information System (GIS) is the main motivating factor behind the development of Spatial Database Management Systems. It provides different techniques for analysis and visualization of
geographic data. GIS used to store spatial data and non-spatial data separately: for spatial data it used a file management system, and for non-spatial data a traditional RDBMS. Because of this separation, maintaining the integrity of the data was a difficult task. The solution to this problem is to use a single database system for storing and managing spatial as well as non-spatial data. Several benefits can be achieved by combining spatial and non-spatial data; a few are listed as follows:
- It provides better data management for spatial data.
- Reduces the complexity as don’t have to deal with different systems.
A GIS provides a rich set of operations over few objects and layers, whereas an SDBMS provides simpler operations on sets of objects and sets of layers. For example, a GIS can list the neighbouring countries of a given country (e.g. India) given the political boundaries of all countries. However, it would be fairly tedious to answer set queries like "list the countries with the highest number of neighbouring countries" or "list countries which are completely surrounded by another country". Set-based queries can be answered in an SDBMS [3].
2.2 Spatial Query
A traditional selection query accessing non-spatial data uses the standard comparison operators: >, <, <=, >=, !=. A spatial selection is a selection on spatial data that may use other selection comparison operations. The types of spatial comparators that could be used include near, north, south, east, west, contained in, and overlap or intersect. Many basic spatial queries can assist in data mining activities. Some of these queries include:
Some of these queries include:
1. A region query or range query is a query asking for objects that intersect a given region
specified in the query.
2. A nearest neighbor query asks to find objects that are close to an identified object.
3. A distance scan finds objects within a certain distance of an identified object, but the
distance is made increasingly larger. [4,5]
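Assuming simple point data, the three query types above can be sketched as follows (the helper names are illustrative, not from the paper):

```python
# Toy sketch of a range query, a nearest-neighbour query, and a
# distance scan over plain (x, y) coordinate pairs, standing in
# for a real spatial database. All names are illustrative.

def range_query(points, xmin, ymin, xmax, ymax):
    """Region/range query: points intersecting the given rectangle."""
    return [p for p in points if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax]

def nearest_neighbor(points, q):
    """Nearest-neighbour query: the point closest to q."""
    return min(points, key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def distance_scan(points, q, step, limit):
    """Yield points in rings of increasingly larger distance from q."""
    def dist(p):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    r = step
    while r <= limit:
        yield r, [p for p in points if r - step < dist(p) <= r]
        r += step

pts = [(0, 0), (1, 1), (5, 5), (9, 9)]
print(range_query(pts, 0, 0, 2, 2))      # [(0, 0), (1, 1)]
print(nearest_neighbor(pts, (4, 4)))     # (5, 5)
```

A real SDBMS answers these same queries through its spatial operators and index rather than by linear scans.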
2.3 Spatial Indexing
Spatial indexes are used by spatial databases (databases which store information related to objects in space) to optimize spatial queries. Indexes used by non-spatial databases, such as the B-tree, cannot effectively handle features such as how far two points differ and whether points fall within a spatial area of interest. A number of structures have been proposed for handling multi-dimensional point data. Cell methods are not good for dynamic structures because the cell boundaries must be decided in advance. Quad trees and k-d trees do not take paging of secondary memory into account. K-D-B trees are designed for paged memory but are useful only for point data. We have used R-tree indexing, which is supported by the Oracle database [2].
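As a toy illustration of the primitive at the heart of R-tree style indexing (not Oracle's implementation), objects are summarised by minimum bounding rectangles (MBRs) and queries test rectangle overlap:

```python
# Minimal sketch of the R-tree's core primitive: summarise an object
# by its minimum bounding rectangle, then test rectangle overlap
# against a query window. Data values are made up for illustration.

def mbr(points):
    """Minimum bounding rectangle (xmin, ymin, xmax, ymax) of points."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def intersects(a, b):
    """True if two rectangles (xmin, ymin, xmax, ymax) overlap."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

road = mbr([(2, 2), (6, 3)])     # a line feature reduced to its MBR
query_window = (0, 0, 3, 3)      # a rectangular region query
print(intersects(road, query_window))  # True
```

An R-tree arranges such MBRs hierarchically so that whole subtrees whose bounding rectangles miss the query window can be skipped.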
III. RELATED WORK
Real estate is a field that has widely expanded and has provided huge scope both to users looking for desirable properties and to entrepreneurs. The users need appropriate properties, and the entrepreneurs who hold this information help the users make a correct selection. With the immense profitability this concept holds for both parties involved, the idea has caught on.
Initially, the overall real estate process was manual. But due to the increasing reach of the Internet and the popularity of the concept, many web sites have come up which provide real-estate information to users. These web sites guide the user through various properties and help the user find the needed and available estates as per his/her requirements.
Examples of traditional web sites:
1. www.99acres.com
2. www.makaan.com
3. www.indiaproperties.com
4. www.realestateindia.com
5. www.realestateonline.in
These websites provide features like property search and property addition, and give different offers which are beneficial to the user. But even with these features, there are certain missing aspects which make these sites limited:
1. No search gives correct information about the basic services available from a chosen location, such as displaying the distance to the nearest bus stop, railway station, hospital, etc.
2. There is no flexibility in information retrieval, e.g. listing houses that are within a 2 km radius of a location.
The above and many more factors have to be addressed.
IV. PROPOSED SYSTEM
4.1 Proposed system
Our proposed system provides all the features provided by the traditional existing systems, but instead
of working only with non-spatial data, the system also works with spatial data. The system will have
the following prominent features:-
1) Specification based searching
This feature provides related information to users according to the specifications they have provided to the site. For example, if a user is looking for a 1 BHK house at 9 lakhs in Thane, then only those properties which satisfy the aforementioned requirements will be returned to the user.
2) Agent Notification
Once the user is interested in a particular property and clicks the "Confirm" button, a mail type message is automatically sent to the agent who manages the corresponding area, informing the agent of the user's name, contact number and email address.
3) Adding property for sale
A user can add a property that he is willing to sell so that it can be viewed by other potential clients interested in similar property. For this purpose the client is supposed to enter not only the address but also pictures and the cost at which he is willing to sell that property.
4) Notifying interested users
Whenever a new property is added, a mail type notification is automatically sent to all those clients who were interested in or were searching for a similar property, thereby notifying those users about the availability of that property.
5) Allowing users to put interesting property finds in cart
The cart is an added database advantage for the users. The users are given the feature of adding interesting properties to a cart before making a final decision. This helps the user to separate interesting property finds and thus helps in the final decision making.
6) Providing user with map based search
Once a particular region is selected, the user can gain related information on the basis of geographical factors. For example, requesting information about a particular location and getting information about regions which lie within a particular boundary of that location (e.g. within a radius of 2 km from Thane railway station).
The features that are based upon geographical factors have to be implemented using spatial databases.
Spatial databases provide functions that help in finding distance between two points in a spatial
domain. Using these functionalities, we can very efficiently perform spatial mining and provide
advanced and flexible features to the users. Relational databases prove to be somewhat limited in these aspects, and thus the use of the spatial domain is evident in the application.
4.2 Modules of the system
The following are the modules considered in our proposed system:
(1) Specification Based Search:
This search allows the user to scrutinize properties based upon property details such as "City", "Cost range", "BHK" and "Buy/Rent". The "Search" then provides the user with all the available properties from the database which satisfy the requirements as specified. On clicking any of the results, the website provides the user with that property's details, along with its location pinpointed on a map and the nearest services from that property.
(2) Map based Search:
Along with the standard search, 'Propsdeal' provides a special "Map-Based Search" to the user. In this search, the user can select properties based upon their geographical location. The user can pinpoint areas on the map and then specify the radius in kilometers within which to search for properties.
(3) Add property for sale Module:
This feature allows the user to add his/her own property to the site's database, by which it will be enlisted as an available property for sale to the other users. The main advantage 'Propsdeal' has over other traditional sites in this case is that it only requests the obvious details from the user and calculates the nearest features from that house dynamically. Thus the user does not need to go through the tedious process of adding all the nearby services information himself/herself.
(4) Notification Module:
This feature, as the name suggests, is a mail type service which provides notifications to the user about properties that were added to the site's database while that user was offline. When the user is online, his history of searched records is maintained. Then, when this user is offline, if any other user adds a property similar to what the earlier user was looking for, the offline user is appropriately notified about the new property addition through a notification mail. This notification mail is sent to the aspiring user even if he is online.
(5) Cart Module:
Adding an interesting search result to the cart is a feature which has been provided for user personalization. Herein, the user can add his essential searches to the cart for shortlisting them. Each cart is stored separately for each individual user. Moreover, the status of the cart is maintained when the user logs out and is reproduced for that user when he/she logs in again.
V. DESIGN
5.1 UML Diagram:
Our system has been thoroughly analyzed using the UML approach. The use case diagram and component diagram are shown in Fig. 1 and Fig. 2.
Fig 1 Use Case Diagram
Fig 2 Component Diagram
5.2 Data flow diagram:
Data flow diagrams are shown in Figs. 3, 4, 5, 6 and 7.
Fig 3 : DFD for Specification based search
Fig 4 DFD for Map based search
Fig 5 DFD for Add Property for sale
Fig 6 DFD for Notification
Fig 7 DFD for Cart
VI. IMPLEMENTATION
6.1 Specification based search
Herein, we provide the user with drop down selection box to select “City”, “Cost range”, “BHK” and
we provide two option buttons for the user to select whether he/she wants to buy or rent that property.
This form gets submitted, when user clicks the “Search” button. The action of this form submits these
fields to the search program written in java. This java program takes the inputs and fires a query onto
the database for it to retrieve all those properties from the database. These results are stored in an
array and this array is passed to the JSP file which is responsible for showing the search result. The
Search result JSP page receives the array containing the search results and prints them as an output to
the user.
Figure 8 is the home page for our website 'Propsdeal'. Figure 9 shows the available results for a specified cost range, city, BHK and property type. Figure 10 shows the property details for the selected property along with its location on the map. Figure 11 shows the nearby services for the selected property.
Fig. 8 Home Page
Fig 9 Specification Based Search
Fig 10. Property Details along with its location on map
Fig 11. Nearby Services for the selected property
6.5 Map based search
Here the user is provided with three drop-down selection boxes: to select the region where he wants to re-centre the map, to select what kind of properties (Buy/Rent) are to be displayed on the map, and to select the kilometer radius for the search, respectively. Whenever the user makes a selection in any of these selection boxes, the appropriate functions are called, which then give the desired results. On clicking any point on the map, the coordinates of that point are retrieved. These coordinates are then passed to the Java program, which fires a spatial SDO_Distance query onto the database to retrieve properties whose latitude-longitude coordinates are in the user's selected range (in km) from that point. The properties satisfying these requirements are then displayed to the user on the right side of the map. Figure 12 shows the results for the map based search.
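The radius filter behind this search can be sketched in plain Python; this mimics the spatial distance query with a great-circle distance formula, and the property listings and coordinates below are made up:

```python
# Sketch of the map-based radius filter: compute the great-circle
# distance between the clicked point and each stored property, and
# keep those within the chosen radius. Listings are hypothetical.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (Earth radius ~6371 km)."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = rlat2 - rlat1
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

properties = [  # (name, lat, lon) -- hypothetical listings
    ("Flat A", 19.186, 72.975),
    ("Flat B", 19.196, 72.978),
    ("Flat C", 19.076, 72.877),  # far away, outside the radius
]

clicked = (19.186, 72.975)  # e.g. a point clicked near a station
radius_km = 2.0
hits = [name for name, lat, lon in properties
        if haversine_km(clicked[0], clicked[1], lat, lon) <= radius_km]
print(hits)  # ['Flat A', 'Flat B']
```

In the actual system the same filtering is done inside the database by the spatial distance operator over the R-tree index, rather than by scanning every row in application code.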
Fig 12 Map Based Search
6.6 Adding Property onto the site's database
Initially, the user specifies the location on the map where the property he/she desires to sell/rent is located. Then the user is redirected to another page where he fills in the obvious details of that property, such as "BHK", "Address", "City", "Area", "Price" and certain facilities (parking, gym, garden, lift, etc.). Then the user is provided with 4 drop-down selection lists that allow him/her to add pictures of his/her property. After filling in all these details, the user submits them; the respective property then gets added to the site's database and is displayed on the map. Figure 13 shows the screen shot for the property details to be filled in.
Fig 13 Add property for sale
6.7 Notification
Every distinct search of each registered user is maintained separately. This historical record of the user's searches is then used for the notification feature. Herein, whenever a new property is added to the site's database, the users' historical search records are checked to see whether anyone had ever searched for a similar property before. If so, the respective user is individually notified to that effect by a mail type service. On the next login, that user will be notified of the new property addition in which he might have interest. Figure 14 shows the screen shot for the notification.
Fig 14 Notification
6.8 Cart
The cart allows registered users to shortlist their searches. Here the user is allowed to add any of the searched properties to his/her cart. The user can then review those particular properties later, whenever he/she finds time, thus saving the time needed to search from the start. Also, he/she can delete items from the cart as required.
Figure 15 shows a screenshot of the cart.
Fig 15 Cart
VII. CONCLUSION
Real estate has always been a field containing a mass of information that can be viewed from many angles. Each user has his/her own perspective on this data and moves through it according to his/her own needs. The users' desires and requirements are the most complex part of this information, so accommodating those requirements while letting the user navigate the real-estate data is of crucial importance. Unfortunately, existing real-estate web applications have failed to treat this as a valid issue, leaving the user unsatisfied: he/she is equipped with only a blunt tool to dig a vast field.
To solve these problems and better equip the user, this paper discusses a system, "Propsdeal", which makes efficient use of spatial databases. Through the features of these databases, we provide the user with an efficient tool to search specifically for properties. Our map-based search is an excellent way for the user to search for properties by geographical location, so the user's requirements and desires can be fed to the search mechanism far better than with a standard, inflexible search.
We chose spatial databases for our application because they are designed to support location-based search directly. Their built-in features eliminate many complex calculations that we would otherwise have had to implement ourselves had we used relational databases to design the same system. Our system gives users an efficient and extremely user-friendly way to search the available properties.
Author biography
Kiruthika M. is currently working with Fr. C. Rodrigues Institute of Technology, Vashi, Navi Mumbai as Associate Professor in the Computer Engineering Department. Her total teaching experience is 16 years. Her research areas are data mining, web mining and databases. She completed her B.E. (Electronics and Communication Engineering) in 1992 from Bharathidasan University and her M.E. (CSE) in 1997 from NIT, Trichy. She has published 5 papers in international journals, 11 papers in international conferences and 9 papers in national conferences.
Smita Dange is currently working with Fr. C. Rodrigues Institute of Technology, Vashi, Navi Mumbai as Assistant Professor in the Computer Engineering Department. Her total teaching experience is 9.5 years. Her research areas are spatial databases and data mining. She completed her B.Tech (Computer Engineering) in 2001 from Dr. Babasaheb Ambedkar Technological University, Lonere, and her M.Tech (Computer Technology) in 2011 from VJTI, Mumbai. She has published 1 paper in an international journal, 4 papers in international conferences and 6 papers in national conferences.
Swati Kinhekar is currently working with Fr. C. Rodrigues Institute of Technology, Vashi, Navi Mumbai as Assistant Professor in the Computer Engineering Department. Her total teaching experience is 4.5 years. Her research areas are databases and algorithms. She completed her B.E. (Computer Engineering) in 2002 from Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal. She has published 1 paper in an international conference.
Girish Bhole has graduated from Fr. C. Rodrigues Institute of Technology, Vashi, Navi Mumbai.
Trupti Gadakh has graduated from Fr. C. Rodrigues Institute of Technology, Vashi, Navi Mumbai.
DESIGN AND VERIFICATION ANALYSIS OF APB3 PROTOCOL
WITH COVERAGE
Akhilesh Kumar and Richa Sinha
Department of E&C Engineering, NIT Jamshedpur, Jharkhand, India
ABSTRACT
In today's era of modern technology, microelectronics plays a vital role in every aspect of an individual's life. The increasing use of microelectronic equipment raises the demand for manufacturing its components and their availability while shrinking manufacturing time, which in turn raises the failure rate of finished products. To overcome this problem, engineers rely on verification, a process that is an integral part of manufacturing microelectronic products: roughly 30% of the effort on an average project is consumed by design and 70% by verification. For this reason, methods that improve the efficiency and accuracy of hardware design and verification are immensely valuable. The current VLSI design scenario is characterised by high performance, complex functionality and short time-to-market, and a reuse-based methodology for SoC design has become essential to meet these challenges. The work embodied in this paper presents the design of the APB3 protocol and the verification of the APB3 slave. Coverage analysis is a vital part of the verification process; it indicates to what degree the source code of the DUT has been exercised. Functional coverage analysis increases verification efficiency by enabling the verification engineer to isolate areas of untested functionality. The design and verification IP is built by developing verification components using Verilog and SystemVerilog respectively, with relevant tools such as Riviera, which provides suitable building blocks to construct the test environment.
KEYWORDS: AMBA (Advanced Microcontroller Bus Architecture), APB (Advanced Peripheral Bus), functional coverage analysis, RTL (Register Transfer Level) design, SystemVerilog, SoC (System on Chip), DUT (Design Under Test), design intellectual property (DIP), verification intellectual property (VIP).
I. INTRODUCTION
Intellectual Property (IP) cores are the first line of choice in the development of systems-on-chip (SoC). Typically, a SoC is an interconnection of different pre-verified IP blocks which communicate using complex protocols. Approaches adopted to facilitate plug-and-play style IP reuse include the development of a few standard on-chip bus architectures such as CoreConnect [11] from IBM and AMBA [9] from ARM, among others, and the work of the VSI Alliance [8] and the OCP-IP [10] consortium. Designers are usually provided with voluminous specifications of the protocols used by the IP blocks and the underlying bus architecture. IP cores are register transfer level (RTL) codes which achieve certain desired functionality. Today the foundation of digital systems design rests on hardware description languages (HDLs) rather than schematic diagrams. These RTL codes are well tested and must be ready for any use in SoC development.
Modern computer systems rely more and more on highly complex on-chip communication protocols to exchange data. The enormous complexity of these protocols results from tackling high-performance requirements: protocol control can be distributed, and there may be non-atomicity or speculation. The electronics industry has entered the era of multi-million-gate chips, and there is no turning back. This technology promises new levels of integration on a single chip, called the system-on-a-chip (SoC) design, but also presents significant challenges to the chip designer. Processing cores on a single chip
may number well into the high tens within the next decade, given the current rate of advancement [1]. The important aspect of a SoC is not only which components or blocks it houses, but also how they are interconnected. The current VLSI design scenario is characterised by high performance, complex functionality and short time-to-market, and a reuse-based methodology for SoC design has become essential to meet these challenges. AMBA is a solution for the blocks to interface with each other.
In the present paper, the discussion covers the design intellectual property (DIP) of the master and slave of the APB3 protocol and the verification intellectual property (VIP) of the slave, with coverage analysis.
II. OBJECTIVE OF THE AMBA
The objectives of the AMBA specification [1] are to:
1. facilitate right-first-time development of embedded microcontroller products with one or more CPUs, GPUs or signal processors,
2. be technology independent, to allow reuse of IP cores, peripherals and system macrocells across diverse IC processes, and encourage modular system design to improve processor independence and the development of reusable peripheral and system IP libraries,
3. minimize silicon infrastructure while supporting high-performance and low-power on-chip communication.
2.1 History of AMBA
AMBA was introduced by ARM in 1996 and is widely used as the on-chip bus in SoC designs. AMBA is a registered trademark of ARM. The first AMBA buses were ASB and APB. In its second version, AMBA 2, ARM [2] added AMBA AHB, a single clock-edge protocol. In 2003, ARM introduced the third generation, AMBA 3 [3], including AXI, to reach even higher-performance interconnect, and the Advanced Trace Bus (ATB) as part of the CoreSight on-chip debug and trace solution. In 2010, ARM introduced the fourth generation, AMBA 4 [1], including AMBA 4 AXI4, AXI4-Lite, and the AXI4-Stream protocol. The AMBA 4.0 protocol defines five buses/interfaces:
• Advanced eXtensible Interface (AXI) - a high-performance, flexible protocol
• Advanced High-performance Bus (AHB) - retained for compatibility and to ease the transition
• Advanced System Bus (ASB) - no longer actively supported
• Advanced Peripheral Bus (APB) - retained for support of simple, low-bandwidth peripherals
• Advanced Trace Bus (ATB)
Figure 1.Protocols of AMBA
2.2. AMBA Protocol Family
AHB (Advanced High-performance Bus) is for high-performance, high-clock-frequency system modules and is suitable for medium-complexity connectivity solutions. It supports multiple masters.
AHB-Lite is the subset of the full AHB specification intended for use where only a single master is present.
APB (Advanced Peripheral Bus) is mainly used for ancillary or general-purpose register-based peripherals such as timers, interrupt controllers, UARTs, I/O ports, etc. It is connected to the system bus via a bridge, which helps reduce system power consumption. It is also easy to interface to, with little logic involved and few corner cases to validate.
III. ABOUT APB 3 PROTOCOL
3.1 An AMBA APB 3 Typical System [1][15]
Figure 2.AMBA APB3 Typical System
Figure 2 illustrates a typical AMBA system. Several master and slave devices are connected via the AHB, which is often used as the system bus; data transfer between memory modules and peripheral devices can also be done over it. The bridge sits between the system bus and the peripheral bus. When transferring data from the processor to peripheral devices such as a UART, timer, peripheral I/O or keyboard, the bridge converts the transferred signals from one type to the other to satisfy the differing performance and protocol requirements.
APB3 provides a low-cost interface that is optimized for minimal power consumption and reduced interface complexity. The APB interfaces to peripherals that are low-bandwidth and do not require the high performance of a pipelined bus interface; the APB protocol is unpipelined.
All signal transitions are related only to the rising edge of the clock, to enable easy integration of APB peripherals into any design flow. Every transfer takes at least two cycles.
The APB can interface with the AMBA Advanced High-performance Bus Lite (AHB-Lite) and the AMBA Advanced eXtensible Interface (AXI). It can be used to provide access to the programmable control registers of peripheral devices.
3.2. AHB VS APB [2][16]
Table1. AHB vs APB
3.3. When to use AHB OR APB [17][18]
AHB uses full-duplex parallel communication. It is used for external memory interfaces, for high-bandwidth peripherals with FIFO interfaces, and for on-chip memory blocks, i.e. where massive memory and I/O accesses occur.
The APB, by contrast, is mainly intended for connecting simple, low-power peripherals. This bus can be used in union with either version of the system bus; it groups narrow-bus peripherals together to avoid loading the system bus.
Separating the bus address decoding into two levels makes it easier (in most cases) to do the timing budget, and the address decoding logic becomes easier to design as well. Usually, the AHB decoder decodes the larger memory blocks, and the I/O space (small memory blocks) is then decoded by the APB decoder (inside the APB bus bridge).
For example, you might have 4 memory blocks and 20 I/O devices. Putting them all into one level of address decoding might yield a big bus multiplexer that operates at a lower clock frequency. By separating the I/O devices into the APB memory map, you get a smaller and faster AHB interconnect and a second level of APB interconnect that might take one or two extra cycles to access.
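The two-level decode just described can be sketched as follows. The memory map values, block names and function are invented for illustration only; a real design would implement this in RTL.

```python
# Hypothetical memory map: large AHB blocks are decoded first, then the
# APB bridge's region is sub-decoded into small peripheral windows.
AHB_BLOCKS = {
    "rom":        (0x0000_0000, 0x000F_FFFF),
    "sram":       (0x2000_0000, 0x200F_FFFF),
    "apb_bridge": (0x4000_0000, 0x4000_FFFF),
}
APB_PERIPHERALS = {
    "timer": (0x4000_0000, 0x4000_0FFF),
    "uart":  (0x4000_1000, 0x4000_1FFF),
    "gpio":  (0x4000_2000, 0x4000_2FFF),
}

def decode(addr):
    """Return ('ahb', block) or ('apb', peripheral) for an address."""
    for name, (lo, hi) in AHB_BLOCKS.items():
        if lo <= addr <= hi:
            if name != "apb_bridge":
                return ("ahb", name)
            # Second-level decode, performed inside the APB bus bridge.
            for pname, (plo, phi) in APB_PERIPHERALS.items():
                if plo <= addr <= phi:
                    return ("apb", pname)
    return (None, None)
```

The first-level decoder only distinguishes a handful of large regions, so it stays small and fast; the APB decoder handles the many small I/O windows behind the bridge.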
IV. APB3 FSM DIAGRAM
Figure 3 shows the finite state diagram of peripheral bus activity of the APB [14].
IDLE: This is the default state of the APB.
SETUP: When a transfer is required, the bus moves into the SETUP state, where the appropriate select signal, PSELx, is asserted. The bus remains in the SETUP state for only one clock cycle and always moves to the ACCESS state on the next rising edge of the clock.
ACCESS: The enable signal, PENABLE, is asserted in the ACCESS state. The address, write, select, and write data signals must remain stable during the transition from the SETUP to the ACCESS state. Exit from the ACCESS state is controlled by the PREADY signal from the slave:
• If PREADY is held LOW by the slave, the peripheral bus remains in the ACCESS state.
• If PREADY is driven HIGH by the slave, the ACCESS state is exited and the bus returns to the IDLE state if no more transfers are required. Alternatively, the bus moves directly to the SETUP state if another transfer follows.
Figure 3. FSM diagram of APB3
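A minimal software model of this state machine can make the transitions concrete. This is a Python sketch rather than RTL; `transfer_pending` stands in for a pending bus request and `pready` for the slave's PREADY signal.

```python
# Next-state function for the APB3 operating states (IDLE/SETUP/ACCESS),
# evaluated once per rising clock edge.
def next_state(state, transfer_pending, pready):
    if state == "IDLE":
        return "SETUP" if transfer_pending else "IDLE"
    if state == "SETUP":
        # The bus spends exactly one cycle in SETUP.
        return "ACCESS"
    if state == "ACCESS":
        if not pready:
            return "ACCESS"  # slave inserts wait states
        # Transfer completes: chain into the next transfer or go idle.
        return "SETUP" if transfer_pending else "IDLE"
    raise ValueError("unknown state: " + state)
```

Running the function over a few cycles reproduces the diagram: one SETUP cycle, at least one ACCESS cycle, and wait states whenever PREADY stays low.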
V. MICRO ARCHITECTURE OF APB3
Figure 4 shows the micro-architecture of the APB3 protocol [1].
Figure 4. Interfacing of APB Master and Slave
5.1 APB3 Master Description
There is a single bus master on the APB, so there is no need for an arbiter. The master drives the address and write buses and also performs a combinatorial decode of the address to decide which PSELx signal to activate. It is also responsible for driving the PENABLE signal to time the transfer, and it drives APB data onto the system bus during a read transfer.
5.2 APB3 Slave
APB slaves have a very simple yet flexible interface. The exact implementation of the interface depends on the design style employed, and many different options are possible. Two signals, PSLVERR and PREADY, are the main protection against data loss while a transfer is taking place.
VI. SIMULATION RESULTS OF DESIGN OF APB3
6.1. Master of APB3
Figure 5. Read Operation Figure 6. Write Operation
Figure 5 and Figure 6 show the simulated results of the master APB3 read and write operations, respectively. The main observation for the APB3 master is that the data read via the PRDATA signal (an input of the master) is written out via the PWDATA signal (an output of the master) after a certain number of clock pulses, for transfer purposes. Figure 6 shows that the data written is the same data that was read in Figure 5.
6.2. Slave of APB3
Figure 7. Write and Read Operation
The slave's task is to return on the PRDATA signal (an output of the slave) the data that was written via the PWDATA signal (an input of the slave). Figure 7 shows the simulated result of the slave DIP, in which PRDATA is the same as PWDATA.
VII. SIMULATION RESULT OF VERIFICATION OF APB 3
This paper shows the simulated result of the VIP of the APB3 slave.
7.1 SLAVE VERIFICATION
Figure 8.Write Operation Figure 9. Read Operation
Figure 8 shows a number of signals, among them PWDATA, which receives data from the master. We have to verify that the data received on PWDATA can be read back on PRDATA. Figure 9 shows that the data written on the PWDATA signal appears on the PRDATA signal.
VIII. COVERAGE ANALYSIS
The coverage summary and coverage report give the details of the functional coverage. When a complete analysis was done for the decoder and the coverage report shown in Figure 10 was generated, it was found that the coverage is less than 100%.
Figure 10. Coverage Result
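To illustrate what a functional-coverage percentage means, the toy tracker below counts hits on a handful of bins (transfer direction crossed with PSLVERR) in the spirit of a SystemVerilog covergroup. The bins and sampled values are invented for illustration and are not the covergroup the paper actually used.

```python
from collections import Counter

class Coverage:
    """Tiny functional-coverage model: direction x PSLVERR bins."""
    BINS = [("read", 0), ("read", 1), ("write", 0), ("write", 1)]

    def __init__(self):
        self.hits = Counter()

    def sample(self, direction, pslverr):
        # Called once per observed transfer, like covergroup sampling.
        self.hits[(direction, pslverr)] += 1

    def percent(self):
        covered = sum(1 for b in self.BINS if self.hits[b] > 0)
        return 100.0 * covered / len(self.BINS)

cov = Coverage()
cov.sample("read", 0)
cov.sample("write", 0)
cov.sample("write", 1)
```

Three of the four bins were hit, so coverage reports 75%; the unhit ("read", 1) bin points the engineer at the untested scenario, which is exactly how a below-100% report is used.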
IX. CONCLUSION
This paper has presented the APB3 protocol with an emphasis on its flexibility and compatibility. We described a study of the AMBA3 APB SoC bus protocol and its performance, and discussed the design and verification of this low-speed peripheral data transfer protocol, including how errors are reduced so that no data is lost during transfer.
ACKNOWLEDGEMENT
This work was supported by CVC PVT LTD, Bangalore.
REFERENCES
[1] ARM, “AMBA Specification Overview”, available at http://www.arm.com/.
[2] ARM, “AMBA Specification (Rev 2.0)”, available at http://www.arm.com.
[3] ARM, “AMBA AXI Protocol Specification”, available at http://www.arm.com.
[4] Samir Palnitkar, “Verilog HDL”.
[5] Chris Spear, “SystemVerilog for Verification”, New York: Springer, 2006.
[6] http://www.testbench.co.in
[7] http://www.doulos.com/knowhow/sysverilog/ovm/tutorial_0
[8] Virtual Socket Interface Alliance, http://www.vsi.org.
[9] ARM, “Advanced Microcontroller Bus Architecture Specification”, http://www.arm.com/armtech/AMBA spec, 1999.
[10] Open Core Protocol Int'l Partnership Association Inc., “Open Core Protocol Specification”, http://www.ocpip.org, Release 1.0, 2001.
[11] IBM, “32-bit Processor Local Bus, Architecture Specifications”, http://www-3.ibm.com/chips/products/coreconnect/, Version 2.9.
[12] J. Bergeron, “What is verification?” in Writing Testbenches: Functional Verification of HDL Models, 2nd ed., New York: Springer Science, 2003, ch. 1, pp. 1-24.
[13] International Technology Roadmap for Semiconductors [Online]. Available: http://www.itrs.net/Links/2006Update
[14] infocenter.arm.com/help/topic/com.arm.doc.ihi0024b/index.html
[15] nthur.lib.nthu.edu.tw/bitstream/987654321/7242/9/630208.pdf
[16] http://en.wikipedia.org/wiki/Advanced_Microcontroller_Bus_Architecture
[17] http://www.differencebetween.net/technology/difference-between-ahb-and-apb/
[18] http://groups.google.com/group/comp.sys.arm/msg/55e6c80bfd9f99ce?pli=1
Authors
Akhilesh Kumar received his B.Tech degree from Bhagalpur University, Bihar, India in 1986 and his M.Tech degree from Ranchi, Bihar, India in 1993. He has been working in the teaching and research profession since 1989. He is now Head of the Department of Electronics and Communication Engineering at N.I.T. Jamshedpur, Jharkhand, India. His field of research interest is analog and digital circuit design in VLSI.
Richa Sinha received her B.E. degree from Rajarambapu Institute of Technology, Shivaji University, Kolhapur, Maharashtra, India in 2007. She is currently pursuing her M.Tech project work under the guidance of Prof. Akhilesh Kumar in the Department of Electronics & Communication Engineering, N.I.T. Jamshedpur. Her field of interest is ASIC design and verification.
IMPLEMENTATION OF GPS ENABLED CAR POOLING
SYSTEM
Smita Rukhande, Prachi G, Archana S, Dipa D
Department of Information Technology, Mumbai University, Navi Mumbai, Maharashtra, India
ABSTRACT
Carpooling, commonly known as car-sharing or ride-sharing, is a concept in which commuters share a car while travelling. Participants in carpooling share journey expenses such as fuel and tolls, which reduces the cost borne by each participant. Carpooling helps to cut down road traffic, carbon emissions and the overall parking space required, hence proving to be environmentally friendly. The application discussed in this paper is a mobile client built with J2ME, which allows it to work on any Java-enabled phone with a GPRS connection. Car pooling using GPS is thus a real-time mobile application that mainly aims at facilitating carpooling amongst travellers. It allows users to book their journey with a person travelling on the same route beforehand, to locate their travel partners on a map displayed on their mobile screen, and to make changes to their itinerary accordingly. The implementation of the system is discussed in the paper with the help of results.
KEYWORDS: GPS, Carpooling, GPRS, Google map
I. INTRODUCTION
With the advances in mobile technology, mobiles are proving to be the next generation of computers. This application adds to the pool of already existing, useful software. It runs on a mobile phone and uses GPS technology to enable carpooling in a more efficient and flexible manner. It is a Java application that runs on a GPS-enabled mobile phone, interacts with a central server, and provides processed information to the users. Being a mobile application, it is portable and requires low maintenance. Thus, it reduces the cost of travel, traffic on the road, pollution and ultimately global warming.
1.1 What is Car Pooling?
Carpooling is a concept in which people who travel to the same destination share a vehicle with others, which reduces fuel cost, reduces traffic on the road and ultimately reduces pollution and global warming. With the ever-increasing population worldwide, it is necessary to carpool to preserve the world for our descendants.
1.2 Need for a mobile Application for GPS enabled Car Pooling System
1) Cars travelling at peak hours generally carry office-goers, often with a single person driving to his/her office.
2) This increases the fuel cost and the traffic on the road. A better way is to club up with travellers headed to the same place, which reduces fuel cost and traffic jams.
3) The main impediment to carpooling is finding out who travels to the same destination every day, or who is interested in carpooling.
4) If the regular poolers don't work on the days you do (e.g. Saturday), how do you find new members for Saturday?
5) Websites exist that let you find information about interested poolers, but they are not handy when you have to go to a new destination and need information in real time, or when you are somewhere where public transport is hard to find. You can't carry your laptop with you all the time.
1.3 Benefits of GPS enabled Car Pooling System
The advantages of developing a mobile application for a GPS-enabled car pooling system are as follows:
1) Portable - As it is a mobile application, portability is one of its most noticeable benefits. Mobiles are handy and can be carried anywhere easily.
2) Real time - The application provides real-time data about the users interested in carpooling and their locations.
3) Flexibility - The application notifies users in case a participant is running late, letting users continue their work if a fellow user cannot reach on time.
4) Low cost - As it runs on a mobile, it requires little cost and maintenance; all that is needed is a cell phone with a GPRS connection.
5) Easy to use - The user only fills in some information about the source and destination of his journey, and the relevant data is delivered to his cell phone in an understandable manner.
II. PROPOSED SYSTEM
GPS Enabled Car Pooling System is a real-time mobile application that mainly aims at providing carpool services to commuters by making them aware of the users interested in carpooling, and also at providing security to carpooling participants. The system helps the user set up an account, for which he/she needs to provide identity proof for security purposes; the same login id and password are used on every login. The application can be divided into the following phases:
A. In case of synchronized pooling:
Step 1: User A logs in and enters his current location (or it is retrieved directly using GPS), his destination and the time he wants to start his journey.
Step 2: User A is presented with a processed list of all available users from the database server travelling to the same destination at the same time as entered by user A.
Step 3: User A selects the most convenient user, and the selected user B is notified. User B may accept or reject the pool proposal.
Step 4: User A receives the reply, indicating whether his request has been accepted, along with the meeting point.
Step 5: Once user A's request is accepted, he starts the application before beginning the journey to check whether user B is on time and where B currently is.
Step 6: Depending on that location, user A can decide the appropriate time to leave for the meeting point.
Figure 1. Start of journey situation
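Step 2's matching query can be sketched as follows. The booking fields, function name and 15-minute time window are assumptions for illustration; the paper does not specify the exact matching criteria.

```python
from datetime import datetime, timedelta

def candidate_drivers(bookings, source, destination, start, window_min=15):
    """Drivers on the same route starting within +/- window_min of `start`."""
    window = timedelta(minutes=window_min)
    return [b["driver"] for b in bookings
            if b["source"] == source
            and b["destination"] == destination
            and abs(b["start"] - start) <= window]

# Hypothetical booked drives stored on the server.
rides = [
    {"driver": "B", "source": "Vashi", "destination": "Andheri",
     "start": datetime(2011, 11, 7, 9, 0)},
    {"driver": "C", "source": "Vashi", "destination": "Andheri",
     "start": datetime(2011, 11, 7, 10, 30)},
]
matches = candidate_drivers(rides, "Vashi", "Andheri",
                            datetime(2011, 11, 7, 9, 10))
```

For a 09:10 departure only driver B (09:00) falls inside the 15-minute window, so B is the candidate presented to user A.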
B. In case of real time ad hoc pooling:
Step 1: User X logs in and enters his current location, or it is retrieved directly using GPS.
Step 2: The query submitted to the database displays on a map all the GPS Enabled Car Pooling System users near user X.
Step 3: User X selects user Z from the nearby users on the map and sends him a request to pick him up on the way to the destination.
Step 4: User Z checks user X's location on the map, replies to user X's request and picks him up.
Figure 2. Once the request is accepted
C. Payment Mode:
Payment in both cases is made in cash to the driver. If the owner is not driving the vehicle and has hired a driver, the driver collects the fare and passes it on to the owner. The owner can see the service log for further details.
III. DESIGN OF THE GPS ENABLED CAR POOLING SYSTEM
3.1 Architectural block diagram:-
The Architecture of the proposed system is as shown below in Figure3.
Figure 3. Architectural block diagram of the system
Sequence of steps in proposed system is explained below:-
1) Users can register themselves through website using registration module.
2) Once registered, a user can login through their mobile and perform various functions like:
a. Get nearest car location using get user location module
b. Schedule a drive, i.e. the user books his drive.
c. Check the car schedule, i.e. the user can check the schedule of the booked car using the journey scheduling module.
d. Track the car's location on the Google map.
3) The mobile application performs the functions mentioned above using the car pooling server and Google Maps.
IV. IMPLEMENTATION DETAILS
The implementation of the system is explained below step by step with the help of results.
4.1 System Architecture
Figure 4 shows the client-server application, in which the server is made up of the servlets and the SQL server, while the client is the J2ME or JSP application.
Figure 4. System architecture
It consists of following components:-
1) MS-SQL 2005 Database.
2) Website frontend in JSP.
3) Mobile frontend in J2ME.
4) Backend in Servlet.
4.1.1 MS-SQL 2005 Database.
The MS-SQL database serves as a common information repository for both the mobile application and the website. The application's database stores the user data, car details, journey details and user locations.
Figure 5. Database snapshot of the Pool up system
The system database consists of seven tables shown in Figure 5.
Seven tables are as follows:-
1) userAccount – to store user information.
2) Driver – to store car details
3) Bookdriveradhoc – to store driver adhoc mode data.
4) Bookdriversync – to store driver synchronized mode data.
5) Bookpassadhoc – to store passenger adhoc mode data.
6) Bookpasssync – to store passenger synchronized mode data.
7) Userloc – to store users current location.
4.1.2 Website frontend in JSP
The website consists of various tabs such as home, register, login, book a ride, check ride status, fare, contact us, help and download, which are shown below in Figure 6.
Figure 6. Application home page
4.1.3 Mobile Frontend in J2ME
The mobile frontend is a J2ME application which helps users log in, book a ride, get their current location, check status, etc., to manage their car pooling. The application's splash screen is shown in Figure 7.
Figure 7. J2ME application splash screen
4.1.4 Backend in Servlet
The backend processing for the J2ME client is done using servlet pages. The J2ME application requires the servlet to connect to and access the MS-SQL database through HTTP connections.
V. EXPERIMENTAL RESULTS
Stepwise results of the application are explained below with the help of screenshots. GPS Enabled Car Pooling System users can log in using the login form shown in Figure 8.
Figure 8. Login form
Users can register with the system by entering their details into the registration form, shown in Figure 9, to create an account for car pooling with other users.
Figure 9. User registration form
If the user is a driver, details about the car such as the car model, capacity and number plate are submitted to the system's database through the driver registration form shown in Figure 10.
Figure 10. Car registration form
While booking a journey, the first step is to select the role and the mode: the role can be driver or passenger, and the mode can be synchronized or ad-hoc. Figure 11 shows the case where the user is a driver and opts for the synchronized mode.
Figure 11. Mode and user type selection
Figure 12 shows the form where the user, after opting for the driver role and synchronized mode, enters the journey details. The same form applies if the user's role is passenger or the mode is ad-hoc.
Figure 12. Book a journey form
After filling in the journey form, the user calculates the distance between the source and destination, and from it the approximate fare based on the rates provided by the system, as shown in Figure 13.
Figure 13. Fare calculation form
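The fare calculation described above can be sketched as follows. This is a hypothetical illustration only: the paper does not specify the distance formula or the rates, so the straight-line haversine distance and the `rate_per_km` parameter are assumptions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def approximate_fare(lat1, lon1, lat2, lon2, rate_per_km=8.0):
    """Approximate fare = straight-line distance x system-provided rate
    (rate_per_km is an illustrative assumption)."""
    return round(haversine_km(lat1, lon1, lat2, lon2) * rate_per_km, 2)
```

A real deployment would use road distance from a routing service rather than the straight-line distance.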
J2ME Application Snapshots
The login page for the mobile application, similar to that of the website, is shown in Figure 14.
Figure 14. Login form
Figure 15 shows the screen where the user selects the mode, i.e. synchronized or ad-hoc.
Figure 15. User mode selection
Figure 16 shows the screen where the user selects a role, i.e. driver or passenger.
Figure 16. User type selection
The screen shown in Figure 17 lets users fill in the journey details and submit them to the server for processing.
Figure 17. Book a journey form
VI. CONCLUSION
The GPS Enabled Car Pooling System is a user-friendly mobile application that not only offers portability but can also be used easily by a novice user familiar with basic mobile functionality. It displays the locations of carpooling participants on the mobile screen using Google Maps, which makes it easy to interpret a participant's exact position instead of providing hard-to-interpret information such as raw longitude and latitude. It also tells the user the time remaining for a participant to reach the meeting point, avoiding unnecessary waiting. It thus provides information about a participant to the other participants in real time, helping them find the exact location on the Google map displayed on their mobile screens. It addresses security by making a photo identity mandatory for registration and participation, and it reduces the cost of travel by sharing fuel and toll expenses among participants. The system imposes no territorial boundaries and thus does not restrict users in where it can be used.
REFERENCES
[1] Li, Sing; Knudsen, Jonathan (April 25, 2005). Beginning J2ME: From Novice to Professional (3rd ed.). pp. 480. ISBN 1590594797.
[2] Herbert Schildt (August 13, 2002). Java 2: The Complete Reference (5th ed.). McGraw-Hill Osborne Media.
[3] http://news.thewherebusiness.com/content/dynamic-carpooling-your-mobile.
[4] http://code.google.com/apis/maps/documentation/staticmaps/
[5] http://code.google.com/apis/maps/documentation/staticmaps/index.html#StyledMaps
Authors Biography
Smita Rukhande is working as an Assistant Professor in the Information Technology department at Fr. C.R.I.T College, Vashi, Navi Mumbai. She completed her Bachelors in Engineering from Amravati University. Her areas of interest are Mobile Technology and Object Oriented Analysis and Design.
Prachi Goel is working as an Assistant Professor in the Information Technology department at Fr. C.R.I.T College, Vashi, Navi Mumbai. She completed her Bachelors in Engineering from Mumbai University. Her areas of interest are Mobile Technology and Game Programming.
Dipa Dixit is working as an Assistant Professor in the Information Technology department at Fr. C.R.I.T College, Vashi, Navi Mumbai. She completed her ME from Mumbai University. Her areas of interest are Mobile Technology, Data Mining and Web Mining.
Archana Shirke is working as an Assistant Professor in the Information Technology department at Fr. C.R.I.T College, Vashi, Navi Mumbai. She completed her MTech from VJTI, Mumbai. Her areas of interest are Mobile Technology, Data Mining and Web Ontology.
329 Vol. 1, Issue 5, pp. 329-336
APPLICATION OF MATHEMATICAL MORPHOLOGY FOR THE
ENHANCEMENT OF MICROARRAY IMAGES
Nagaraja J1, Manjunath S.S2, Lalitha Rangarajan3, Harish Kumar N4
1, 2 & 4 Department of CSE, Dayananda Sagar College of Engineering, Bangalore, India
3 Department of CSE, Mysore University, Mysore, India
ABSTRACT
DNA microarray technology has driven a rapid acceleration of research in recent years. It has numerous applications, including clinical diagnosis and treatment, drug design and discovery, tumour detection, and environmental health research. Enhancement is a major pre-processing step in microarray image analysis: microarray images corrupted with noise may drastically affect the subsequent stages of image analysis and ultimately the gene expression profile. In this paper a fully automatic technique to enhance microarray images using mathematical morphology is presented. Experiments on Stanford and TBDB images illustrate the robustness of the proposed approach in the presence of noise, artifacts and weakly expressed spots. Experimental results and analysis compare the performance of the proposed method with contemporary methods from the literature.
KEYWORDS: Microarray, Dilation, Erosion, Adaptive Threshold and Noisy microarray images.
I. INTRODUCTION
DNA microarray technology [1] has a large impact in many application areas, such as the diagnosis and treatment of human diseases (determination of risk factors, monitoring disease stage and treatment progress, etc.), agricultural development (plant biotechnology), quantification of genetically modified organisms, and drug discovery and design. In cDNA microarrays, a set of genetic DNA probes (from several hundred to some thousands) is spotted on a slide. Two populations of mRNA, tagged with fluorescent dyes, are then hybridized with the slide spots, and finally the slide is read with a scanner. This process produces two images, one for each mRNA population, each of which varies in intensity according to the level of hybridization, represented as the quantity of fluorescent dye contained in each spot.
Microarray image processing consists of a sequence of three stages: (1) gridding, the separation of spots by assigning image coordinates to them [2]; (2) segmentation, the separation of foreground from background pixels; and (3) intensity extraction, the computation of the average foreground and background intensities for each spot of the array [3]. A microarray image may contain different sources of error: electronic noise, dust on the slide, photon noise and other sources cause a high level of noise which can propagate through the later stages of image analysis, making it difficult to identify the genes each type of cell is expressing and to draw accurate biological conclusions. Spot recognition is a complicated task, as the microarray image is corrupted by noise sources during image acquisition, and bright artifacts may be incorrectly detected as spots. It is therefore essential to remove the noise present in the image. Image enhancement improves the interpretability of the information in images and provides better input for higher-level image processing applications, so low-quality images must be enhanced by appropriate methods before accurate expression levels can be interpreted.
Image enhancement improves image quality by refining the image with respect to structural content, statistical content, edges, textures and the presence of noise, and supports accurate measurement of gene expression profiles.
The rest of the paper is organized as follows: Section 2 surveys the literature on microarray image enhancement. Section 3 presents the morphological approach, which uses the top-hat and bottom-hat transforms to enhance microarray images. Section 4 highlights the results of extensive experimentation on benchmark images. Finally, the conclusion is presented.
II. REVIEW OF LITERATURE
The literature survey reveals that a fair amount of research has gone into microarray image enhancement. X. H. Wang, Robert S. H. Istepanian and Yong Hua Song [4] have proposed a new approach based on wavelet theory that provides denoising to eliminate noise sources and ensure better gene expression estimates; their method applies the stationary wavelet transform to pre-process microarray images and remove random noise. Rastislav Lukac and Bogdan Smolka [5] have proposed a novel method of noise reduction capable of attenuating both impulse and Gaussian noise while preserving, and even enhancing, the sharpness of image edges. R. Lukac et al. [6] have proposed a vector fuzzy filtering framework to denoise cDNA
microarray images. This method adaptively determines the weights in the filtering structure and provides different filter structures. Noise removal by smoothing the coefficients of the highest sub-bands in the wavelet domain is described by Mario Mastriani and Alberto E. Giraldez [7]. A denoising switching scheme based on an impulse detection mechanism using the peer-group concept is discussed by K. N. Plataniotis et al. [8]. A two-stage approach to noise removal that processes the additive and multiplicative noise components, decomposing the signal by a multiresolution transform, is described by Hara Stefanou, Thanasis Margaritis, Dimitris Kafetzopoulos, Konstantinos Marias and Panagiotis Tsakalides [9]. Guifang Shao, Hong Mi, Qifeng Zhou and Linkai Luo [10] have proposed a new noise-reduction algorithm with two parts: edge noise reduction and high-fluorescence noise reduction. Ali Zifan, Mohammad Hassan Moradi and Shahriar Gharibzadeh [11] have proposed an approach using decimated and undecimated multiwavelet transforms. Denoising of microarray images using the standard maximum a posteriori and linear minimum mean squared error estimation criteria is discussed by Tamanna Howlader et al. [12]. J. K. Meher et al. [13] have proposed pre-processing techniques, optimized spatial resolution (OSR) and spatial domain filtering (SDF), for reducing noise in microarray data and reducing error during the quantification process, so as to estimate microarray spots accurately and determine the expression levels of genes. Weng Guirong has proposed a novel filtering method to denoise microarray images using an edge-
enhancing diffusion method [14]. Factorial analysis on simulated microarray images to study the effects and interaction of noise types at different noise levels is discussed by Yoganand Balagurunathan et al. [15]. Chaitra Gopalappa et al. [16] have proposed a novel methodology for removing hybridization and scanning noise from microarray images using a dual-tree complex wavelet transform. A two-phase scheme for removing impulse noise from microarray images while preserving the features of interest is discussed by Ram Murugesan et al. [17]. Arunakumari Kakumani et al. [18] have proposed a method to denoise microarray images using independent component analysis.
An enhancement approach that uses principles of fuzzy logic in conjunction with a data-adaptive filter to enhance noisy microarray images is presented by Rastislav Lukac et al. [19]. Wang Li-qiang et al. [20] present a novel method to reduce impulse noise employing a switching scheme that uses the differences between the standard deviation of the pixels within the filter window and the current pixel of concern. Nader Saffarian et al. [21] have proposed an approach implemented as conditional sub-block bi-histogram equalization (CSBE), which improves the gridding results in DNA microarray analysis.
Most of the methods proposed by researchers have either assumed high-SNR (signal-to-noise ratio) images or made various assumptions about factors such as the type of thresholding used, parametric models and decomposition levels, which in turn lead to misclassification of foreground pixels as background pixels in the segmentation process and finally affect the gene expression profile. Also, some of the methods have addressed only impulse, Gaussian and fluorescent noise. A method has to
be proposed that works with low-SNR images and estimates other types of noise so as to denoise the image accurately. This is essential at the pre-processing stage because in microarray image analysis each stage affects the subsequent one, and only then can an accurate biological conclusion be drawn. Denoising of microarray images is a challenging task in the pre-processing step of microarray image analysis, so techniques without the above-mentioned constraints, which depend exclusively on the image characteristics, are in demand. Figure 1 shows a subgrid of a microarray image.
Figure 1. Subgrid of microarray image (ID: 32040)
III. ENHANCEMENT MODEL
Image enhancement is the process of improving the interpretability of the information in images to provide better input for higher-level image processing applications. The enhancement model, illustrating the phases involved in enhancing microarray images, is shown in Figure 2.
Figure 2. Enhancement model
Mathematical morphology is used to remove artifacts and insignificant spots in the subgrid. In the pre-processing stage the noisy RGB image is converted to gray level; the result is the pre-processed image P(x,y). The top-hat and bottom-hat transforms are then computed from P(x,y): the top-hat transform is the difference between P(x,y) and its morphological opening (erosion followed by dilation), and the bottom-hat transform is the difference between the morphological closing (dilation followed by erosion) and P(x,y). Dilation grows or thickens objects in a gray-scale image; the specific manner and extent of thickening is controlled by a shape referred to as the structuring element. Erosion shrinks or thins objects in a gray-scale image, again under the control of a structuring element. The structuring element is the key factor in morphological operations: structuring elements of different radii lead to different analyses of the geometric characteristics, so the structuring element determines the effect and performance of the morphological transformation. The structuring element used for dilation and erosion is shown in Figure 3.
Figure 3. Structuring element with radius 5
The top-hat transformed image Th(x,y) is added to the pre-processed image P(x,y), giving P'(x,y); this improves the quality of the image. The bottom-hat transformed image Bh(x,y) is then subtracted from P'(x,y) to remove artifact pixels in the microarray image.
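This enhancement step can be sketched in pure Python on a 2-D list of 8-bit gray levels. This is an illustrative sketch, not the authors' implementation: a flat disk structuring element is assumed (as in Figure 3), border handling simply ignores out-of-range neighbours, and a production system would use an image processing library instead.

```python
def disk(radius):
    """Offsets of a flat disk-shaped structuring element."""
    return [(dy, dx) for dy in range(-radius, radius + 1)
                     for dx in range(-radius, radius + 1)
                     if dy * dy + dx * dx <= radius * radius]

def _morph(img, se, op):
    """Apply max (dilation) or min (erosion) over the structuring element."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            vals = [img[y + dy][x + dx] for dy, dx in se
                    if 0 <= y + dy < rows and 0 <= x + dx < cols]
            out[y][x] = op(vals)
    return out

def dilate(img, se): return _morph(img, se, max)
def erode(img, se):  return _morph(img, se, min)

def enhance(img, radius=5):
    """P' = P + tophat(P) - bottomhat(P), clipped to [0, 255]."""
    se = disk(radius)
    opening = dilate(erode(img, se), se)   # erosion then dilation
    closing = erode(dilate(img, se), se)   # dilation then erosion
    return [[max(0, min(255, p + (p - o) - (c - p)))
             for p, o, c in zip(pr, orow, crow)]
            for pr, orow, crow in zip(img, opening, closing)]
```

The top-hat term (P minus its opening) boosts bright spots smaller than the disk, while the bottom-hat term (closing minus P) suppresses dark artifacts, which is why their combination sharpens spot contrast.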
To eliminate insignificant spots, an adaptive threshold is used. Thresholds on spot size are first computed on segments of the image, and insignificant spots are filtered using these thresholds. The gray-level image is converted to a binary image with a low threshold, so as to retain the information available in the image. The binary image is divided into n segments; the number of segments can be increased depending on the level of noise. In the proposed approach the subgrid is divided into 4 segments, as follows:
1st segment: rows 0 to r/2, columns 0 to c/2
2nd segment: rows 0 to r/2, columns c/2+1 to c
3rd segment: rows r/2+1 to r, columns 0 to c/2
4th segment: rows r/2+1 to r, columns c/2+1 to c
where r is the number of rows and c is the number of columns of the skew-corrected image.
The total number of connected components is computed, and the threshold on spot size for each segment is calculated using equation (1):

T(i) = (number of pixels in the i-th segment) / (total number of connected components)    (1)

where i ranges from 1 to 4.
For example, in Figure 4 (ID: 32040) the numbers of bright pixels in the four segments are 5523, 5090, 6075 and 2031, and the total number of connected components is 894. The thresholds are therefore 5523/894 ≈ 6, 5090/894 ≈ 6, 6075/894 ≈ 7 and 2031/894 ≈ 2.
The results of the proposed filtering process in removing insignificant spots using these threshold values, together with the execution time (τf), are reported in Table 1. The execution time of the filtering process is proportional to the number of spots in a noisy microarray image. The adaptive thresholds obtained in the previous step are used to filter insignificant noisy spots in the segments: if the number of pixels in a component is less than the threshold value T(i) for its segment, the spot is removed (as insignificant) by setting the intensity of all pixels in that component to zero. The rationale for an adaptive threshold is that if a few successive columns or rows of a subarray contain only tiny spots, filtering with a global threshold would eliminate all of them.
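The threshold-and-filter step above can be sketched as follows. This is an illustrative sketch under stated assumptions: 4-connected components, quadrant segmentation as described, and a component assigned to the segment of its first-scanned pixel; the helper names are hypothetical.

```python
from collections import deque

def connected_components(binary):
    """Label 4-connected foreground components; return a list of pixel lists."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = []
    for y in range(rows):
        for x in range(cols):
            if binary[y][x] and not seen[y][x]:
                q, comp = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def filter_insignificant(binary):
    """Equation (1): per-quadrant threshold = bright pixels / total components."""
    rows, cols = len(binary), len(binary[0])
    comps = connected_components(binary)
    total = len(comps) or 1
    def quadrant(y, x):  # which of the 4 segments a pixel falls in
        return (0 if y <= rows // 2 else 2) + (0 if x <= cols // 2 else 1)
    bright = [0, 0, 0, 0]
    for y in range(rows):
        for x in range(cols):
            if binary[y][x]:
                bright[quadrant(y, x)] += 1
    thresh = [b / total for b in bright]
    # zero out components smaller than their segment's threshold
    for comp in comps:
        qy, qx = comp[0]
        if len(comp) < thresh[quadrant(qy, qx)]:
            for y, x in comp:
                binary[y][x] = 0
    return binary
```

On the example above this yields exactly the ratios 5523/894, 5090/894, 6075/894 and 2031/894 before rounding.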
Table 1. Estimated threshold values and execution time (τf) of the proposed filtering process
IV. RESULTS AND PERFORMANCE ANALYSIS
In this section, the performance of the proposed approach is evaluated on real noisy microarray images drawn from SMD (the Stanford Microarray Database), UNC (the University of North Carolina microarray database) and the TB database; the images are available for free download [22, 23]. Figure 4 shows a noisy microarray image, and Figure 5 shows the image enhanced using the proposed approach.
Enhancement is essential because it helps biologists take decisions on gene expression analysis, gene discovery, drug analysis, etc.; with clear spots the accuracy of the analysis improves. The application of mathematical morphology yields a high-quality image and reveals most of the previously unidentified spots clearly.
Figure 4. Noisy subgrid, Image ID: 32040    Figure 5. Enhanced subgrid, Image ID: 32040
Figure 6 shows one subgrid of a noisy microarray image. As discussed in Section 3, morphological dilation, erosion and adaptive thresholding are used to perform the filtering. Figure 7 shows the enhanced
image; from this it can be observed that most of the contaminated (insignificant, noisy) pixels are removed.
Figure 6. Noisy subgrid, Image ID: 39119, Database: TBDB    Figure 7. Enhanced subgrid, Image ID: 39119, Database: TBDB
Figure 8. Noisy subgrid, Image ID: 35964, Database: TBDB    Figure 9. Enhanced subgrid, Image ID: 35964, Database: TBDB
To quantify both the degree of filtering and the improvement due to the enhancement algorithm, various performance measures are used, such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). The higher the PSNR, the higher the quality of the image; the lower the MSE, the higher the image quality. We have compared the performance of different filters; the bilateral filter also performs well at removing noise content from the image. The performance analysis is shown in Figures 10 and 11, and Tables 2 and 3 give comparative results of the proposed method against existing filters.
Table 2. Numerical values of peak signal-to-noise ratio (in dB) for denoising methods

Image Id            Wiener    Median    Gaussian  Bayes     Proposed
34133 (TBDB)        79.00     79.06     77.69     82.03     87.73
32070 (TBDB)        67.8758   68.5429   67.9479   80.7468   86.7109
422471 (Stanford)   72.2693   72.5676   71.7460   83.2749   87.1175
400311 (UNC)        70.1570   70.6426   69.5335   82.1275   87.2587

Table 3. Numerical values of mean square error for denoising methods

Image Id            Wiener    Median    Gaussian  Bayes     Proposed
34133 (TBDB)        23.36     23.96     27.4681   17.8004   10.0690
32070 (TBDB)        20.23     22.13     20.12     18.14     11.1429
422471 (Stanford)   18.1872   19.1612   22.1912   15.7216   10.7058
400311 (UNC)        17.1614   18.2532   14.5212   13.1714   10.556
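The two measures used in Tables 2 and 3 follow the standard definitions; a minimal sketch, assuming 8-bit gray-scale images with peak value 255:

```python
import math

def mse(original, processed):
    """Mean squared error between two equal-sized gray-scale images."""
    n = len(original) * len(original[0])
    return sum((p - q) ** 2
               for ro, rp in zip(original, processed)
               for p, q in zip(ro, rp)) / n

def psnr(original, processed, peak=255):
    """Peak signal-to-noise ratio in dB; higher means better quality."""
    e = mse(original, processed)
    return float("inf") if e == 0 else 10 * math.log10(peak ** 2 / e)
```

Since PSNR is a monotone decreasing function of MSE, the two tables rank the filters consistently: the proposed method has the lowest MSE and therefore the highest PSNR on every test image.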
Figure 10. Comparison chart of PSNR of different denoising methods for microarray images
Figure 11. Comparison chart of MSE of different denoising methods for microarray images
V. CONCLUSION
In this work an automatic technique for the enhancement of microarray images is presented. Noise removal is performed through the top-hat and bottom-hat transforms, implemented using morphological dilation and erosion. An adaptive threshold is then applied to the morphologically processed image to eliminate insignificant spots. The experimental results show that most of the contaminated pixels are removed from the image. The entire process is robust in the presence of noise, artifacts and weakly expressed spots. The proposed work can be used in the pre-processing phase of microarray image analysis, before any of its subsequent stages, resulting in accurate gene expression profiling.
REFERENCES
[1] Yuk Fai Leung and Duccio Cavalieri, (2003) “Fundamentals of cDNA microarray data analysis” in
TRENDS in Genetics Vol.19 No.11 pp 649-659.
[2] P. Bajcsy, (2004), “Gridline: Automatic grid alignment in DNA microarray scans,” IEEE Trans. Image
Process., vol. 13, no. 1, pp. 15–25.
[3] M. Steinfath, W. Wruck, H. Seidel, H. Lehrach, U.Radelof, and J. O’Brien, (2001), “Automated
image analysis for array hybridization experiments,” Bioinformatics, vol. 17, no. 7, pp. 634–641, Jul.
1,
[4] X. H.Wang, R. S. H. Istepanian, and Y. H. Song, (2003),“Microarray image enhancement by
denoising using stationary wavelet transforms”. IEEE Transactions Nanobiosciene , 2(4):184 –
189.
[5] Rastislav Lukac and Bogdan Smolka, (2003), “Application of the adaptive center weighted vector
median framework for the enhancement of cDNA microarray images”, International Journal of
applied mathematics and Computer Science (amcs), Vol. 13, No. 3, 369–383.
[6] Rastislav Lukac, Konstantinos N. Plataniotis, Bogdan Smolka and Anastasios N. Venetsanopoulos, (2005), “cDNA microarray image processing using fuzzy vector filtering framework”, Fuzzy Sets and Systems, ELSEVIER, 17-35.
[7] Mario Mastriani and Alberto E. Giraldez, (2005), “Microarray denoising via smoothing of coefficients in wavelet domain”, International Journal of Biological, Biomedical and Medical Sciences, pp. 7-14.
[8] B. Smolka, R. Lukac, K.N. Plataniotis, (2006) ,“Fast noise reduction in cDNA microarray images”,
IEEE, 23rd Biennial Symposium on Communications, pp.348-351.
[9] Hara Stefanou, Thanasis Margaritis, Dimitri Kafetzopoulos, Konstantinos Marias and Panagiotis
Tsakalides, (2007), “ Microarray image denoising using a two- stage multiresolution technique”, IEEE
International Conference on Bioinformatics and Biomedicine ,pp.383- 389.
[10] Guifang Shao, Hong Mi, Qifeng Zhou and Linkai Luo, (2009), “ Noise estimation and reduction in
microarray images “, IEEE, World Congress on Computer Science and Information
Engineering,pp.564-568.
[11] Ali Zifan, Mohammad Hassan Moradi and Shahriar Gharibzadeh, (2010), “Microarray image enhancement using decimated and undecimated wavelet transforms”, SIViP, pp. 177-185.
[12] Tamanna Howlader and Yogendra P. Chaubey, (2010),”Noise reduction of cDNA microarray images
using complex wavelets”, IEEE Transactions on Image Processing, Vol.19.pp.1953-1967.
[13] J.K.Meher, P.K.Meher and G.N.Dash, (2011), “Preprocessing of microarray by integrated OSR and
SDF approach for effective denoising and quantification”, IPCSIT, Vol.4.pp.158- 163.
[14] Weng Guirong, (2009), “cDNA microarray image processing using morphological operator and edge-enhancing diffusion”, 3rd International Conference on Bioinformatics and Biomedical Engineering, pp. 1-4.
[15] Yoganand Balagurunathan, Naisyin Wang, Edward R. Dougherty, Danh Nguyen, Yidong Chen, (2004),
“Noise factor analysis for cDNA microarrays”, Journal of Biomedical Optics, Vol. 9 , No. 4 , pp. 663-
678.
[16] Chaitra Gopalappa, Tapas K. Das, Steven Enkemann, and Steven Eschrich , (2009)“Removal of
Hybridization and Scanning Noise from Microarrays”, IEEE Transactions on NanoBioscience, Vol. 3,
pp. 210-218.
[17] Ram Murugesan, V.Thavavel, (2007), “ A Two-phase Scheme for Microarray Image Restoration”
Journal of Information and Computing Science, Vol. 2, No. 4, pp. 317-320.
[18] Arunakumari Kakumani, Kaustubha A. Mendhurwar, Rajasekhar Kakumani, (2010), “Microarray
Image Denoising using Independent Component Analysis”, Vol. 1, No. 11, pp. 87-95.
[19] Rastislav Lukac, Konstantinos N. Plataniotis, Bogdan Smolka, Anastasios N. Venetsanopoulos, (2005),
“A Data-Adaptive Approach to cDNA Microarray Image Enhancement”, ICCS 2005, pp. 886–893.
[20] Wang LQ, Ni XX, Lu ZK, Zheng XF, Li YS, (2004), “Enhancing The Quality Metric Of Protein
Microarray Image”, Journal of Zhejiang University Science, Vol. 5, No. 12, pp. 1621- 1628.
[21] Nader Saffarian, Ju Jai Zou, (2006), “DNA Microarray Image Enhancement Using Conditional Sub-Block Bi-Histogram Equalization”, IEEE International Conference on Video and Signal Based Surveillance, pp. 86.
[22] https://genome.unc.edu
[23] http://www.tbdb.org/cgi-in/data/clickable.pl.html
Authors
Nagaraja J received his B.E degree in 2007 and M.Tech degree in 2009 from VTU University, Belgaum, Karnataka, India. He is currently working as a Lecturer at Dayananda Sagar College of Engineering, Karnataka, India, and has been teaching since 2009. He is pursuing a PhD at VTU University. His areas of interest include microarray image processing, medical image segmentation and clustering algorithms.
Manjunath S.S received his B.E degree in 2000 from Mysore University, Mysore, and M.Tech degree in 2005 from VTU University, Belgaum, Karnataka, India. He is currently working as an Assistant Professor at Dayananda Sagar College of Engineering, Karnataka, India, and has been teaching since 2000. He is pursuing a PhD at Mysore University. His areas of interest include microarray image processing, medical image segmentation and clustering algorithms.
Lalitha Rangarajan received a master's degree in Mathematics from Madras University, India, and a further degree from the Department of Industrial Engineering, Purdue University. She completed her PhD in Computer Science at the University of Mysore, India. She has taught mathematics, operations research and computer science courses to master's students for more than 25 years. She is presently a Reader in the Department of Computer Science, University of Mysore, India. Her current research interests are image retrieval, feature reduction and bioinformatics. She has more than 40 publications in reputed journals and conferences.
Harish Kumar N received his B.E degree in 2009 and M.Tech degree in 2011 from VTU University, Belgaum, Karnataka, India. He is currently working as a Lecturer at Dayananda Sagar College of Engineering, Karnataka, India, and has been teaching since 2011. His areas of interest include microarray image processing, medical image segmentation and clustering algorithms.
337 Vol. 1, Issue 5, pp. 337-341
SECURING DATA IN AD HOC NETWORKS USING
MULTIPATH ROUTING
R. Vidhya1 and G. P. Ramesh Kumar2
1 Research Scholar, SNR Sons College, Coimbatore, India
2 Prof & Head, Department of Computer Science, SNR Sons College, Coimbatore, India
ABSTRACT
The development of handheld devices and mobile telephony has made Ad hoc networks widely adopted, but security remains a complicated issue. Several solutions have recently been proposed treating authentication, availability, secure routing, intrusion detection, etc., in Ad hoc networks. In this paper we introduce a protocol for securing data in Ad hoc networks, the SDMP protocol. This solution increases the robustness of transmitted data confidentiality by exploiting the existence of multiple paths between nodes in an Ad hoc network. The paper also includes an overview of current solutions, vulnerabilities and attacks in Ad hoc networks.
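The intuition behind multipath confidentiality can be illustrated with a simple secret-splitting sketch. This is a hypothetical illustration, not the actual SDMP encoding: a message is split into n XOR shares sent over n disjoint paths, so an eavesdropper who observes fewer than all n paths learns nothing about the message.

```python
import secrets

def split_shares(message: bytes, n: int):
    """Split a message into n XOR shares; all n are needed to reconstruct."""
    shares = [secrets.token_bytes(len(message)) for _ in range(n - 1)]
    last = bytes(message)
    for s in shares:  # last share = message XOR all random shares
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine_shares(shares):
    """XOR the shares together to recover the original message."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

Each share would travel over a different node-disjoint route; any subset of fewer than n shares is statistically independent of the message, which is the property a multipath scheme exploits.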
I. INTRODUCTION
WLANs (Wireless Local Area Networks) provide an alternative to traditional LANs in which users can access shared data or exchange information without looking for a place to plug in. In recent years, demands for greater mobility and the military's need for sensor networks have popularized the notion of infrastructure-less or Ad hoc networks.
Mobile Ad hoc networks are self-organizing network architectures in which a collection of mobile nodes with wireless network interfaces may form a temporary network without the aid of any established infrastructure or centralized administration. According to the IETF definition [1], a mobile Ad hoc network is an autonomous system of mobile routers connected by wireless links, the union of which forms an arbitrary graph. The routers are free to move randomly and organize themselves arbitrarily; thus, the network's wireless topology may change rapidly and unpredictably [2]. This allows for greater mobility and dynamic allocation of node structures. Ad hoc networks are becoming popular because of the fast development of mobile hand-held and portable devices. Many research projects are studying this domain to develop it further, and some of the proposals have been introduced into the mobile and wireless device industry. The nodes in an Ad hoc network communicate among themselves without wired connections by creating a network "on the fly". While tactical military communication was the first application of Ad hoc networks, there is a growing number of non-military applications, such as search-and-rescue, conferencing, and home networking. Ad hoc networks have several characteristics: dynamic topology, lack of infrastructure, variable-capacity links, and energy-constrained operation.
From these characteristics we can deduce the issues that exist in this kind of network [3]. Because of their specific characteristics, Ad hoc networks present many problems for which solutions must be found and which require much further study: limited bandwidth, energy constraints, high cost, security, and incompatibility between different proposed standards are some of the problems encountered in this type of network. One important issue that must attract researchers' attention is security.
In wireless mobile Ad hoc networks, security depends on several parameters (authentication, confidentiality, integrity, non-repudiation and availability) and concerns two aspects: routing security and data security. Both aspects are exposed to many vulnerabilities and attacks. The rest of this paper is organized as follows. In the next section we describe the most important vulnerabilities and attacks faced in Ad hoc networks.
II. VULNERABILITIES AND ATTACKS IN AD HOC NETWORKS
In the security domain, new vulnerabilities appear with Ad hoc technology. Nodes are easier to steal
since they are mobile, and their computing capacity is limited, which makes heavyweight solutions
such as PKI [4][5] impractical. Moreover, Ad hoc network services are provisional and batteries are
a limited energy resource, which makes a Denial of Service attack by energy consumption very
feasible [6].
Ad hoc networks are exposed to many possible attacks. These attacks can be classified into two
kinds: passive attacks and active attacks [7].
In passive attacks [8], attackers do not disrupt the operation of the routing protocol but only
attempt to discover valuable information by listening to the routing traffic. Defending against such
attacks is difficult, because it is usually impossible to detect eavesdropping in a wireless
environment. Furthermore, routing information can reveal relationships between nodes or disclose
their IP addresses.
If a route to a particular node is requested more often than to other nodes, the attacker might expect
that the node is important for the functioning of the network, and disabling it could bring the entire
network down. While passive attacks are rarely detectable, active ones can often be detected.
Active attacks mainly include the following:
Black hole attacks [9]. A malicious node uses the routing protocol to advertise itself as having
the shortest path to the node whose packets it wants to intercept.
Wormhole attacks. In this type of attack, an attacker records packets at one location in the
network, tunnels them to another location, and retransmits them there into the network. This attack
is possible even if the attacker has not compromised any hosts and even if all communication
provides authenticity and confidentiality.
Routing table overflow attacks [8]. Here the attacker attempts to create routes to nonexistent
nodes. The goal is to create enough routes to prevent new routes from being created or to overwhelm
the protocol implementation. Proactive algorithms appear more vulnerable to table overflow attacks
than reactive algorithms, because they attempt to discover routing information continuously.
Sleep deprivation attacks [11]. Because battery life is a critical parameter in Ad hoc networks,
devices try to conserve energy by transmitting only when necessary. An attacker can attempt to
drain batteries by requesting routes, or by forwarding unnecessary packets to the node using, for
example, a black hole attack.
Location disclosure attacks. This attack reveals information about the location of nodes or the
structure of the network. It can be as simple as using an equivalent of the traceroute command on
UNIX systems, which tells the attacker which nodes are situated on the route to the target node.
Denial of service attacks [6]. Such attacks generally flood the network, crashing it or congesting
it. Wormhole, routing table overflow and sleep deprivation attacks may also fall into this
category.
Impersonation attacks [12]. If authentication is not supported, compromised nodes may be able to
send false routing information, masquerade as other nodes, and so on.
III. RELATED WORK
Recently, there has been considerable research on many security aspects of Ad hoc networks.
Examples include IPsec [13], WEP (Wired Equivalent Privacy) [14], the Distributed Trust model [15],
the Key Agreement model [16], the Resurrecting Duckling solution, and the use of threshold
cryptography as in the solution cited in [18]. As secure routing solutions, we can cite SAODV and
SRP. Intrusion detection is an important research area in Ad hoc security as well.
There is no global solution for all kinds of Ad hoc networks, and none is resistant to all
important vulnerabilities; there are only partial solutions for specific issues.
We can classify existing approaches into four principal categories:
1. Trust Models
2. Key Management Models
3. Routing Protocols Security
4. Intrusion Detection Systems
We present some important proposals from each category:
3.1 Distributed Trust Model
This proposal is based on the concept of trust. It adopts a decentralized approach to trust
management, generalizes the notion of trust, reduces ambiguity by using explicit trust statements,
and eases the exchange of trust-related information via a Recommendation Protocol [15]. Trust
categories and values are assigned to entities. There is no absolute trust in this model: an
entity's trust degree or value can be changed by a new recommendation. The Recommendation Protocol
is used to exchange trust information, and entities able to execute it are called agents. With
decentralization, each agent takes responsibility for its own fate and chooses its own trusted
recommenders. Trust relationships exist only within each agent's own database. Agents use trust
categories to express trust towards other agents, and store reputation records in their private
databases in order to generate recommendations for other agents.
In this solution, the memory requirements for storing reputations and the behavior of the
Recommendation Protocol are issues that have not been treated.
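The trust mechanics described above can be sketched as follows. This is an illustrative toy, not
the model's specification: the category name, the 0-4 value scale, and the averaging rule for
combining recommendations are all assumptions.

```python
# Toy sketch of the Distributed Trust Model (illustrative; the value
# scale and the averaging rule for recommendations are assumptions).

class TrustAgent:
    def __init__(self, name):
        self.name = name
        # Trust relationships exist only within each agent's own database.
        self.db = {}  # (entity, category) -> trust value

    def set_trust(self, entity, category, value):
        self.db[(entity, category)] = value

    def recommend(self, entity, category):
        # Recommendation Protocol: answer a query from another agent.
        return self.db.get((entity, category))

    def update_from_recommendations(self, entity, category, recommenders):
        # A new recommendation can change the stored value,
        # so no trust is absolute in this model.
        values = [r.recommend(entity, category) for r in recommenders]
        values = [v for v in values if v is not None]
        if values:
            self.db[(entity, category)] = sum(values) / len(values)

a, b, c = TrustAgent("A"), TrustAgent("B"), TrustAgent("C")
b.set_trust("X", "routing", 4)   # B trusts X highly for routing
c.set_trust("X", "routing", 2)   # C trusts X less
a.update_from_recommendations("X", "routing", [b, c])
print(a.recommend("X", "routing"))  # 3.0
```

Note how A's trust in X is derived entirely from its chosen recommenders; A stores the result in
its private database, as the decentralized model requires.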
3.2 Resurrecting Duckling Security Policy
This policy was presented in [11] and then extended in [17]. The basic concept in this approach is
that a master/slave relation can exist between two devices. Master and slave share a common secret,
and the association can be broken only by the master. The duckling (slave) recognizes as its mother
the first entity that sends it a secret key over a protected channel; this procedure is called
imprinting. The slave then always obeys its mother, which tells it with whom it may speak by giving
it an access control list. If the master breaks the link with one of its slaves, or if a network
anomaly occurs, the slave enters the dead state; it can be resurrected by accepting a new imprinting
operation. There is a hierarchy of masters and slaves, because a slave has the right to become a
master itself, and the root is a person who controls all the devices. This solution is only suited
to devices with weak processors and limited capacity.
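The imprint/kill/resurrect life cycle described above can be sketched as a small state machine.
This is an illustrative sketch; the method names, state labels, and the shape of the access control
list are assumptions, not part of the published policy.

```python
# Minimal state machine for the Resurrecting Duckling policy
# (illustrative; state names and the ACL representation are assumptions).

class Duckling:
    def __init__(self):
        self.state = "imprintable"   # "dead"/newborn: no mother yet
        self.mother_key = None
        self.acl = set()             # entities the slave may talk to

    def imprint(self, secret_key, acl):
        # The first entity to send a secret key over a protected
        # channel becomes the mother.
        if self.state != "imprintable":
            return False
        self.mother_key = secret_key
        self.acl = set(acl)
        self.state = "bound"
        return True

    def kill(self, key):
        # Only the mother can break the association; the slave then
        # returns to the imprintable state and can be "resurrected"
        # by a new imprinting operation.
        if self.state == "bound" and key == self.mother_key:
            self.state = "imprintable"
            self.mother_key = None
            self.acl = set()
            return True
        return False

d = Duckling()
d.imprint("k1", {"A", "B"})
print(d.state)        # bound
d.kill("wrong")       # rejected: only the mother can break the bond
d.kill("k1")
print(d.state)        # imprintable: ready for resurrection
```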
3.3 Password-Based Key Agreement
The work developed in [16] considers the scenario of a group wishing to establish a secure session
in a conference room without the support of any infrastructure. The protocol has the following
properties:
The shared secret. Only the entities that know an initial password, called the Weak Password, are
able to derive the Session Key. Even if an attacker compromises a member of the group and obtains
all its secret information, it must not be able to recover the session key.
Key agreement. The session key is generated by the contribution of all the entities.
Tolerance to interruption attempts. The protocol should not be vulnerable to an attack which tries
to introduce a message. It is assumed that the possibility of modifying or removing a message in
such a network is very improbable.
In this approach, the entire group shares a Weak Password (for example by writing it on a board);
each member then contributes a part of the session key and signs this data with the weak password.
The resulting session key makes it possible to establish a secure channel without any centralized
trust or infrastructure. This solution is therefore suited to conferences and meetings, where the
number of nodes is small. It is a rather strong solution, even though it does not rely on a strong
shared key. But this model is not sufficient for more complicated environments: imagine a group of
people who do not all know each other and who want to communicate confidentially only among
themselves; the model becomes invalid in that case. Another problem emerges if the nodes are
located in various places, since distributing the Weak Password is then no longer possible.
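The contributory idea above can be sketched in code. This is a deliberately simplified toy, not the
protocol of [16]: here each member MACs a random contribution with the shared weak password, and
the session key is simply a hash over all verified contributions.

```python
# Toy sketch of password-authenticated group key agreement
# (illustrative only; the real protocol in [16] differs).
import hashlib
import hmac
import os

WEAK_PASSWORD = b"conference-room"   # the shared Weak Password

def make_contribution():
    # Each member contributes random data, authenticated ("signed")
    # with the weak password.
    share = os.urandom(16)
    tag = hmac.new(WEAK_PASSWORD, share, hashlib.sha256).digest()
    return share, tag

def session_key(contributions):
    # The session key is derived from ALL contributions; sorting makes
    # the result independent of the order members received them in.
    h = hashlib.sha256()
    for share, tag in sorted(contributions):
        expected = hmac.new(WEAK_PASSWORD, share, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("contribution not authenticated")
        h.update(share)
    return h.hexdigest()

members = [make_contribution() for _ in range(4)]
k1 = session_key(members)
k2 = session_key(members)   # every member derives the same key
assert k1 == k2
```

A contribution whose MAC does not verify under the weak password is rejected, which is how the toy
models "only entities knowing the Weak Password participate in the key".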
3.4 Distributed Public Key Management
Among the few security schemes suggested for Ad hoc networks, there is a method based on a
cryptographic principle that appeared in the seventies: Threshold Cryptography [22]. The principle
is purely mathematical and was combined with other techniques to obtain a security model for Ad hoc
networks; the method discussed here is the one proposed in [18]. Since an Ad hoc network has no
centralized entity and no pre-established trust relations between nodes, this solution proposes a
key management scheme that distributes trust over an aggregate of nodes.
In this model, the key management service, with an (n, t+1) configuration (n ≥ 3t+1), consists of n
special nodes called Servers. The n servers share the ability to sign certificates. The service can
tolerate t compromised servers, which is why it is said to employ an (n, t+1) threshold
cryptography scheme. The private key k of the service is divided into n shares (s1, s2, …, sn),
assigning one share to each server. To sign a certificate, each server generates a partial signature
using its private key share and submits the partial signature to a Combiner which is able to compute
the signature for the certificate. A compromised server could generate an incorrect partial signature.
Use of this partial signature would yield an invalid signature. Fortunately, a combiner can verify
the validity of a computed signature using the service public key. If verification fails, the combiner
tries another set of partial signatures. This process continues until the combiner constructs the
correct signature from at least t+1 correct partial signatures.
Besides threshold signing, this key management service employs share refreshing to tolerate mobile
adversaries and to adapt its configuration to network changes. New shares do not depend on old
ones, so an adversary cannot combine old shares with new ones to recover the private key of the
service; it is instead forced to compromise t+1 servers between two periodic refreshes. The basis
of this method is solid, but it deals only with the problem of certificate signing and the
distribution of the certification authority. With this method one is sure that no adversary can
generate correct certificates: authentication is well addressed, but confidentiality needs further
strengthening. In addition, the method is costly, since every secured exchange requires calling
upon at least t+1 servers, in addition to the Combiner process.
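The (n, t+1) sharing idea underlying this service can be illustrated with Shamir secret sharing
over a prime field. This is a sketch of the threshold principle only: the scheme in [18] shares a
signing key and produces partial signatures, whereas here we merely split and reconstruct an
integer secret.

```python
# Sketch of the (n, t+1) threshold idea via Shamir secret sharing
# (illustrative; [18] applies this to a certificate-signing key).
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret, n, t):
    # Random degree-t polynomial with constant term = secret.
    # Any t+1 shares reconstruct it; t or fewer reveal nothing.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

t = 2                                   # tolerate t compromised servers
shares = split(123456789, n=3 * t + 1, t=t)   # n >= 3t+1 servers
assert reconstruct(shares[:t + 1]) == 123456789    # any t+1 suffice
assert reconstruct(shares[-(t + 1):]) == 123456789
```

In the real service each server would use its share to produce a partial signature, and the
Combiner would assemble t+1 correct partial signatures into the full certificate signature instead
of ever reconstructing k in one place.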
3.5 Secure Routing Protocol for Mobile Ad Hoc Networks
An important aspect of Ad hoc network security is routing security. The Secure Routing Protocol
(SRP) discussed here counters malicious behavior that targets the discovery of topological
information. SRP provides correct routing information, i.e. factual, up-to-date, and authentic
connectivity information regarding a pair of nodes that wish to communicate securely. SRP discovers
one or more routes whose correctness can be verified: route requests propagate verifiably to the
sought, trusted destination, and route replies are returned strictly over the reversed route, as
accumulated in the route request packet. The protocol interacts with the IP-layer functionality.
The reported path is the one placed in the reply packet by the destination, and the corresponding
connectivity information is correct, since the reply is relayed along the reverse of the discovered
route. In the same paper, Papadimitratos and Haas suggest protecting data transmission with their
Secure Message Transmission Protocol (SMT), which provides, according to them, a flexible
end-to-end secure data forwarding scheme that naturally complements SRP. They prove the
authentication correctness of their protocol, and a performance evaluation of SRP under different
kinds of attacks is available in [26]. They ensure that attackers cannot impersonate the
destination and redirect data traffic, cannot respond with stale or corrupted routing information,
are prevented from broadcasting forged control packets to obstruct the later propagation of
legitimate queries, and are unable to influence the topological knowledge of benign nodes. However,
other authors have analyzed SRP and proved, using BAN logic, that the source cannot guarantee that
the identified route is uncorrupted, contrary to Papadimitratos and Haas's claim. They introduce an
attack that demonstrates SRP's vulnerabilities and propose a solution based on the watchdog scheme
to make SRP more robust.
IV. INTRUSION DETECTION
Researchers have examined the vulnerabilities of wireless networks and argued that intrusion
detection is a very important element of the security architecture for mobile computing
environments. They developed such an architecture and evaluated a key mechanism within it, anomaly
detection for mobile ad-hoc networks, through simulation experiments. Intrusion prevention
measures, such as encryption and authentication, can be used in Ad hoc networks to reduce
intrusions, but cannot eliminate them; for example, encryption and authentication cannot defend
against compromised mobile nodes, which often carry the private keys. In this architecture,
intrusion detection and response are both distributed and cooperative, to suit the needs of mobile
Ad hoc networks: every node participates in intrusion detection and response, with an individual
IDS (Intrusion Detection System) agent placed on each node that detects intrusions from local
traces and initiates responses.
If an anomaly is detected in the local data, neighboring IDS agents cooperatively participate in
global intrusion detection actions. For their experimental results, the authors used the Dynamic
Source Routing (DSR), Ad hoc On-Demand Distance Vector (AODV), and Destination-Sequenced
Distance-Vector (DSDV) routing protocols. They demonstrated that this anomaly detection approach
works well on different Ad hoc networks, although detection capability is limited by factors such
as the mobility level. In this paper, we propose a solution to ensure data confidentiality: we
focus on the data transmission security aspect of Ad hoc networks and detail the Secured Data based
MultiPath routing protocol.
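The distributed, cooperative detection scheme described in this section can be sketched as
follows. Everything concrete here is an assumption for illustration: the anomaly metric, the local
threshold, and the majority-vote rule among neighbors are not taken from the cited architecture.

```python
# Sketch of a distributed, cooperative IDS agent (illustrative;
# the metric, threshold, and majority rule are assumptions).

LOCAL_THRESHOLD = 0.8   # assumed cutoff for local anomaly evidence

class IDSAgent:
    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbors = []

    def anomaly_score(self, local_trace):
        # Placeholder for a real anomaly-detection model: here, the
        # fraction of observed routing packets that failed validation.
        bad = sum(1 for pkt in local_trace if not pkt["valid"])
        return bad / len(local_trace) if local_trace else 0.0

    def detect(self, local_trace):
        score = self.anomaly_score(local_trace)
        if score < LOCAL_THRESHOLD:
            return False                 # no local evidence of intrusion
        # Local evidence found: trigger cooperative, global detection
        # by asking neighboring agents to vote.
        votes = [n.vote() for n in self.neighbors]
        return sum(votes) > len(votes) // 2

    def vote(self):
        return True  # stub: each neighbor would run its own detection

a = IDSAgent("A")
a.neighbors = [IDSAgent("B"), IDSAgent("C"), IDSAgent("D")]
trace = [{"valid": False}] * 9 + [{"valid": True}]
print(a.detect(trace))  # True: local score 0.9, confirmed by neighbors
```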
V. CONCLUSION
In this paper, we proposed a solution that treats the data confidentiality problem by exploiting a
very important characteristic of Ad hoc networks: multipath routing. Our proposal improves data
security robustly without being heavyweight; it profits from existing Ad hoc network
characteristics and does not modify existing lower-layer protocols. The solution can be combined
with other solutions that ensure security aspects other than confidentiality. We are carrying out
tests and evaluations to assess its performance.
REFERENCES
[1] B. Shrader. May 2002. A Proposed Definition of Ad hoc. Royal Institute of Technology (KTH),
Stockholm, Sweden.
[2] M. M. Lehmus. May 2000. Requirements of Ad hoc Network Protocols. Technical report, Electrical
Engineering, Helsinki University of Technology.
[3] A. Qayyum. Nov 2000. Analysis and Evaluation of Channel Access Schemes and Routing Protocols
for Wireless Networks. Ph.D. thesis, Department of Computer Science, Paris XI, Paris Sud
University.
[4] W. Diffie and M. Hellman. November 1976. New Directions in Cryptography. IEEE Transactions on
Information Theory. 22(6): 644-654.
[5] P. Gutmann. August 2002. PKI: It's Not Dead, Just Resting. IEEE Computer. 41-49.
[6] H. Li, Z. Chen, X. Qin, C. Li, and H. Tan. April 2002. Secure Routing in Wired Networks and
Wireless Ad Hoc Networks. Technical Report, Department of Computer Science, University of Kentucky.
BIOGRAPHY
R. Vidhya received an M.Sc (IT) from SNR SONS College, Coimbatore, an MBA from Pondicherry
University, and an M.Phil from Bharathiar University, Coimbatore. She has published 2 papers in
international journals, as well as 11 papers in national conferences and 5 in international
conferences. Her areas of interest are network security and information security.
G. P. Ramesh Kumar received an MCA and an M.Phil, and is pursuing a Ph.D. under the guidance of Dr.
Antony Selvadoss Thanamani at VMRF University, Chennai. He has 17 years of teaching experience. He
has published 5 papers in national journals and 2 in international journals, as well as 22 papers
in national conferences and 3 in international conferences. His areas of interest are network
security and information security. He is a member of ISTE and CSI.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
342 Vol. 1, Issue 5, pp. 342-350
COMPARATIVE STUDY OF DIFFERENT SENSE AMPLIFIERS IN
SUBMICRON CMOS TECHNOLOGY
Sampath Kumar1, Sanjay Kr Singh2, Arti Noor3, D. S. Chauhan4 & B. K. Kaushik5
1J.S.S. Academy of Technical Education, Noida, India
2IPEC, Ghaziabad, India
3Centre for Development of Advanced Computing, Noida, India
4UTU, Dehradun, India
5IIT Roorkee, India
ABSTRACT
A comparison of different sense amplifiers is presented for SRAM memories in 250 nm and 180 nm
technology. Results for the sensing delay time at different bit-line capacitance values and
different power supply voltages are given, considering worst-case process corners and high
temperatures. The effect of various design parameters on the different sense amplifiers is
discussed and reported.
KEYWORDS: CMOS, SRAM, CTSA, CONV, CBL, DLT
I. INTRODUCTION
The performance of an embedded memory and its peripheral circuits can adversely affect the speed
and power of the overall system. The sense amplifier is one of the most vital circuits in the
periphery of a CMOS memory, as its function is to sense or detect the stored data of a
read-selected memory cell. The performance of sense amplifiers [1] strongly affects both memory
access time and overall memory power dissipation. A fallout of increased memory capacity is
increased bit-line capacitance, which in turn makes the memory slower and more energy hungry.
A sense amplifier is an active circuit that reduces the time of signal propagation from an accessed
memory cell to the logic circuit located at the periphery of the memory cell array and converts the arbitrary logic levels occurring on a bit line to the digital logic levels of the peripheral Boolean
circuits.
The memory cell being read produces a current IDATA that removes some of the charge dQ stored on
the pre-charged bit lines. Since the bit lines are very long and are shared by other similar cells,
the parasitic resistance RBL and capacitance CBL are large. Thus, the resulting bit-line voltage
swing caused by the removal of dQ from the bit line is very small: dVBL = dQ/CBL. Sense amplifiers
are used to translate this small voltage signal into a full logic signal that can be used by
digital logic.
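The relation dVBL = dQ/CBL can be put in numbers. The cell current and sensing interval below are
representative assumptions, not values from the paper; only the formula itself comes from the text.

```python
# Numeric illustration of the bit-line swing dVBL = dQ / CBL
# (the current and timing values are assumed, not from the paper).

I_DATA = 50e-6    # cell read current: 50 uA (assumed)
T_SENSE = 2e-9    # time the cell discharges the bit line: 2 ns (assumed)
C_BL = 1e-12      # bit-line capacitance: 1 pF

dQ = I_DATA * T_SENSE    # charge removed from the pre-charged bit line
dV_BL = dQ / C_BL        # resulting bit-line voltage swing
print(dV_BL)             # 0.1 V, far below full logic levels
```

A 0.1 V swing on a 2.5 V supply illustrates why the sense amplifier must amplify the bit-line
signal before digital logic can use it, and why a larger CBL (bigger memory) makes the swing, and
hence sensing, worse.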
To improve the speed and performance of the memory, and to provide signals that conform to the
requirements of the peripheral circuits within the memory, it is necessary to understand and
analyze the circuit design of the different sense amplifier types and the other substantial
elements of sense circuits. Sense amplifiers may be classified by circuit type, such as
differential and non-differential, and by operation mode, such as voltage, current and charge
amplifiers. A differential sense amplifier can distinguish smaller signals from noise than its
non-differential counterpart, so signal detection can start sooner. Although differential sensing
costs some silicon area, in most designs the use of a differential amplifier allows very high
packaging density to be combined with reasonable access time and low power consumption. The rest of
the paper is organized as follows. Section II describes the different sense amplifiers, Section III
presents the comparative study of different current sense amplifiers, and Section IV concludes the
paper.
II. DIFFERENTIAL SENSE AMPLIFIER
Differential sense amplifier may be classified as:
1. Voltage sense amplifier
2. Current sense amplifier
3. Charge transfer sense amplifier (CTSA)
The simplest voltage sense amplifier [2] is the differential couple. When a cell is being read, a
small voltage swing appears on the bit line, which is amplified by the differential couple and used
to drive digital logic. However, as the bit-line voltage swing becomes smaller and approaches the
magnitude of the bit-line noise, the voltage sense amplifier becomes unusable.
The fundamental reason for applying current mode sense amplifier in sense circuit is their small input
impedances. Benefits of small input and output impedances are reductions in sense circuit delays,
voltage swings, cross-talking, substrate currents and substrate voltage modulations.
The operation of the CTSA is based on a charge redistribution mechanism between the very high
bit-line capacitance and the low output capacitance of the sense amplifier. A differential charge
transfer amplifier takes advantage of the increased bit-line capacitance and also offers low-power
operation without sacrificing speed.
2.1 Voltage sense amplifier
The voltage sense amplifier can be classified as follows
1. Basic differential voltage amplifier.
2. Simple differential voltage sense amplifier.
3. Full complementary differential voltage sense amplifiers
4. Positive feedback differential voltage sense amplifiers.
5. Full complementary positive feedback voltage sense amplifiers.
1. Basic differential voltage amplifier
The basic MOS differential voltage amplifier circuit contains all the elements required for
differential sensing. A differential amplifier takes a small differential input signal and
amplifies it into a large single-ended output signal. The effectiveness of a differential amplifier
is characterized by its ability to reject common-mode noise and amplify the true difference between
the signals. Because of its rather slow operational speed, considerable power dissipation and
inherently high offset, the basic differential voltage amplifier is not used in memories.
2. Simple differential voltage sense amplifier
It has less power dissipation and offset than the basic differential voltage amplifier. The
simultaneous switching of the load devices is the fundamental drawback of the differential voltage
sense amplifier in obtaining fast sensing operation.
3. Full complementary differential voltage sense amplifiers
The full complementary sense amplifier [3] reduces the duration of signal transients by using
active loads in large-signal switching, and improves small-signal amplification and the common-mode
rejection ratio (CMRR) by providing virtually infinite load resistances and an approximately
constant source current at the inception of signal sensing. The full complementary differential
sense amplifier is able to combine high initial gain, a high common-mode rejection ratio, a large
input impedance and a small output impedance. The operation can be made even faster by using
positive feedback.
Figure 1. Basic differential voltage amplifier; Figure 2. Simple differential voltage sense
amplifier; Figure 3. Full complementary differential voltage sense amplifier; Figure 4. Positive
feedback differential voltage sense amplifier; Figure 5. Full complementary positive feedback
voltage sense amplifier.
4. Positive feedback differential voltage sense amplifiers
Positive feedback in differential sense amplifiers [4] makes it possible to restore the data in a
DRAM cell simply, increases the differential gain of the amplifier, and reduces switching times and
delays in the sense circuit.
5. Full complementary positive feedback voltage sense amplifiers
The full complementary positive feedback sense amplifier improves on the simple positive feedback
amplifier by using an active circuit constructed of devices MP4, MP5 and MP6 in a positive feedback
configuration.
There are many ways of enhancing the performance of the different voltage-mode sense amplifiers by
adding a few devices to the differential voltage sense amplifier. Among them are:
1. Temporary decoupling of the bit lines from the sense amplifier.
2. Separating the input and output in feedback sense amplifiers.
3. Applying a constant current source to the source devices.
4. Optimizing the output signal amplitude.
Approaches (1) and (2) decrease the capacitive load of the sense amplifier. With approach (3) the
sense amplifier source resistance is virtually increased to achieve high gain, and with approach
(4) the amount of switched charge is decreased.
2.2 Current sense amplifier
Current sense amplifier can be broadly classified as:
1. Conventional current mode sense amplifier
2. Conventional current mirror sense amplifier
3. Clamped bit line sense amplifier
4. Simple 4T sense amplifier
5. PMOS bias type sense amplifier
6. Differential latch type sense amplifier.
7. Hybrid current sense amplifier
1) Conventional current mode sense amplifier
The conventional current mode sense amplifier [6] (CONV) is illustrated in Figure 6. The design of
the sensor is based on the classic cross-coupled latch structure (M4-M7) with extra circuitry for
sensor activation (M8) and bit-line equalisation (M1-M3). The operation of the sense amplifier has
two phases: precharge and sense-signal amplification. In the precharge phase the EQ signal is low
and the bit lines are precharged to Vdd. In the sensing phase the EQ and EN signals go high, which
activates the cross-coupled structure and drives the outputs to the appropriate values.
Figure 6. Conventional current mode (CONV) sense amplifier
This structure is suitable for realizing high-speed and large-size memories. It is also suitable
for low-voltage operation, as no large voltage swing on the bit line is needed. However, the
performance of this sense amplifier structure is strongly dependent on the bit-line capacitance
CBL, because the output node is loaded with it. The performance is also degraded at low-voltage
operation (<1.5 V).
2) Conventional current mirror current mode sense amplifier
This architecture includes two current-mirror cells, shown in Figure 7, that copy the bit-line
currents and then subtract them; the outputs are complementary. This conventional sense amplifier
uses a simple current mirror cell, which has a strong dependence of the output current on the
output voltage. To minimize the effect of the finite output impedance, a cascode configuration can
be used. The improved Wilson mirror cell can also be used in a current sense amplifier [7]; this
type of sense amplifier has increased output impedance compared to the conventional configuration.
To minimize the loading effect, the input impedance can be decreased with an active gain element in
the feedback loop of a conventional current mirror cell [8].
3) Clamped bit line sense amplifier
Figure 8 presents the clamped bit-line sense amplifier (CBL). The circuit is able to respond very
rapidly, as the output nodes of the sense amplifier are no longer loaded with the bit-line
capacitance. The input nodes of the sense amplifier are low-impedance, current-sensitive nodes.
Because of this, the voltage swing on the highly capacitive bit lines is very small.
Figure 7. Conventional current mirror sense amplifier
Figure 8.Clamped bit-line (CBL) sense amplifier
The driving ability [9] of the output nodes is improved by positive feedback, and a small
difference can be detected and translated to full logic. The circuit is almost insensitive to
technology and temperature variations. The main limitation of this circuit is that the bit lines
are pulled down considerably from their precharged state through the low-impedance NMOS
termination, which results in a significant amount of energy consumed in charging and discharging
the highly capacitive bit lines. Also, the presence of two NMOS transistors in series with the
cross-coupled amplifier reduces the speed of amplification.
4) Simple 4T current sense amplifier
The simple four-transistor (SFT) current mode sense amplifier [10] is shown in Figure 9. This SA
consists of only four equal-sized PMOS transistors. This configuration consumes the lowest silicon
area and is a most promising solution for low-power design.
Figure 9. Simple four transistor (SFT) sense amplifier
Figure 10. PMOS bias type (PBT) sense amplifier
In many cases it can fit in the column pitch, avoiding the need for column select devices and thus
reducing propagation delay. This type of sense amplifier presents a virtual short circuit across
the bit lines, so the potential of the bit lines is independent of the current distribution. The
sensing delay is unaffected by the bit-line capacitance, since no differential capacitor
discharging is required to sense the cell data; the discharging current from the bit-line
capacitors effectively precharges the sense amplifier. However, the performance is strongly
affected at lower-voltage operation: at lower power supply voltages the SFT is more sensitive than
the CBL.
5) PMOS bias type sense amplifier
The PMOS bias type (PBT) current mode sense amplifier is shown in Figure 10. In the operation of this
current sense amplifier, the voltage swing on the bit-lines or the common data lines does not play an
important role in obtaining the voltage swing in the sense amplifier output. This means that the current
sense amplifier can be used with a very small bit-line voltage swing, which shortens the bit-line signal
delay without pulsed bit-line equalisation. In the sensing circuitry, a normally-on equalizer is used in the
read cycle to make the bit-line voltage swing small enough to attain a fast bit-line signal transition.
Omitting the pulsed bit-line equalisation is also a power-saving factor.
6) Differential latch type sense amplifier
The differential latch type sense amplifier (DLT) is shown in Figure 11. This sense amplifier also has
separated inputs and outputs for low voltage operation and for the acceleration of the sensing speed.
The DLT can satisfactorily operate with low voltages, even under worst-case and high temperature
conditions, with no significant speed degradation. This sense amplifier provides the most promising
solutions in low power designs.
Figure 11. Differential latch type (DLT) sense amplifier
7) Hybrid current sense amplifier
A hybrid current sense amplifier is shown in Figure 12. It introduces a completely different way of
sizing the aspect ratio of the transistors on the data-path, hence realizing a current-voltage hybrid
mode Sense Amplifier.
Figure 12. Hybrid current sense amplifier
It introduces a new read scheme that creatively combines the current- and voltage-sensing schemes
to maximize the utilization of Icell, hence offering much better performance in terms of both
sensing speed and power consumption, since only one of the BLs and one of the DLs are discharged
below Vdd while their complementary lines are kept at Vdd. The new SA is insensitive to the
difference between CDL and CBL; this feature helps it cope with the increasing fluctuation of these
parasitic capacitances due to the layout and fabrication processes. The new design can operate over
a wide supply voltage range, from 1.8 V down to 0.9 V, with minimal performance degradation.
III. COMPARATIVE STUDY OF DIFFERENT CURRENT SENSE AMPLIFIERS
Table I presents the sensing delay time for different capacitance values of the bit-line. The CBL and
DLT circuits exhibit a performance independent of the bit-line capacitance (CBL), while the
performance of the remaining sense amplifier circuits depends strongly on CBL.
Table II shows the worst sensing delay time for different values of the power supply voltage. The DLT
can operate satisfactorily at low voltages, even under worst-case and high-temperature conditions,
with no significant speed degradation. The performance of the CBL design is limited down to 1.5 V,
while for lower Vdd values its delay time increases significantly. The sensing delay time of the PBT is
not seriously affected by the Vdd reduction.
TABLE I: The sensing delay-time for different capacitance values of the bit-line
Structures Sensing delay-time for different CBL (ns)
CBL=1pF CBL=2pF CBL=3pF CBL=4pF CBL=5pF
CONV_CSA 8 16 21 23 25
CM_CSA* 1 4 8 11 14
CBL_CSA 0.5 0.6 0.6 0.6 0.6
SFT_CSA 2 2.5 2.8 3 4
PBT_CSA 3 5 7 9 11
DLT_CSA 0.6 0.8 0.8 0.8 0.8
HBD_CSA* 0.3 0.3 0.3 0.3 0.3
Channel length=0.25µm; Vdd=2.5V; Temp.=27°C. *Channel length=0.18µm; Vdd=1.8V.
TABLE II : The sensing delay-time for different values of power supply
Structures Sensing delay-time (ns) for different Vdd
Vdd=1.1V Vdd=1.4V Vdd=1.7V Vdd=2.0V Vdd=2.3V Vdd=2.6V
CONV_CSA 14 11 9 8.5 8.5 8
CM_CSA* 13.4 13.39 13.42 13.4 13.4 13.43
CBL_CSA 5 2 1.5 1 0.8 0.5
SFT_CSA 7 6.8 6 2.5 2 2
PBT_CSA 5 4.5 4 3.5 3 3
DLT_CSA 2 1.5 1 1 1 1
HBD_CSA* 0.6 0.5 0.3 0.2 0.2 0.2
Channel length=0.25µm; CBL=1pF; Temp.=27°C. *Channel length=0.18µm
IV. CONCLUSION
A comparative study of various proposed sense amplifiers has been carried out. These sense
amplifiers have been designed in 250 nm and 180 nm CMOS technology. According to the results,
the CBL and DLT circuits exhibit a performance independent of the bit-line capacitance (CBL), and the
performance of the CBL design is limited down to 1.5 V. Future work can analyse silicon area
utilization without compromising on performance.
REFERENCES
[1] M. Sinha, S. Hsu, A. Alvandpour, W. Burleson, R. Krishnamurthy, and S. Borkar, “High-Performance and
Low-Voltage Sense-Amplifier Techniques for sub-90nm SRAM,” Department of Electrical and Computer
Engineering, University of Massachusetts, Amherst, USA, and Microprocessor Research Labs, Intel
Corporation, Hillsboro, OR 97124, USA, pp. 113-117, IEEE 2003.
[2] F. F. Offner, “Push-Pull Resistance Coupled Amplifiers,” Review of Scientific Instruments, Vol. 8, pp. 20-21,
January 1937.
[3] T. Doishi, et al., “A Well-Synchronized Sensing/Equalizing Method for Sub-1 .0-VOperating Advanced
DRAMs,” IEEE Journal of Solid-State Circuits, Vol. 29, No. 4, pp. 432-440, April 1994.
[4] N. N. Wang, “On the Design of MOS Dynamic Sense Amplifiers,” IEEE Transactions on Circuits and
Systems, Vol. CAS-29, No. 7, pp. 467-477, July 1982.
[5] E. Seevinck, P. van Beers, and H. Ontrop, “Current-mode techniques for high-speed vlsi circuits with
application to current sense amplifier for CMOS SRAM’s,” IEEE J. Solid-State Circuits, vol. 26, no. 4, pp.
525–536, Apr. 1991.
[6] N. Shibata, “Current sense amplifiers for low-voltage memories,” IEICE Trans. Electron, vol. 79, pp.
1120–1130, Aug. 1996.
[7] E. Seevinck, P. van Beers, and H. Ontrop, “Current-mode techniques for high-speed vlsi circuits with
application to current sense amplifier for CMOS SRAM’s,” IEEE J. Solid-State Circuits, vol. 26, no. 4, pp.
525–536, Apr. 1991
[8] A. Hajimiri and R. Heald, Design Issues in Cross-Coupled Inverter Sense Amplifier. New York, 1998, pp.
149–152.
[9] A.-T. Do, S. J. L. Yung, K. Zhi-Hui, K.-S. Yeo, and L. J. L. Yung, “A full current-mode sense amplifier for
low-power SRAM applications,” in Proc. IEEE Asia Pacific Conf. on Circuits Syst., 2008, pp. 1402–1405.
[10] A. Chrysanthopoulos, Y. Moisiadis, Y. Tsiatouhas, and A. Arapoyanni, “Comparative study of different
current mode sense amplifiers in submicron CMOS technology,” IEE Proc.-Circuits Devices Syst.,
Vol. 149, No. 3, pp. 154-159, June 2002.
About the Authors
Sampath Kumar V. is a Ph.D. scholar at UPTU, Lucknow (Uttar Pradesh), India. He is an
Associate Professor in the Department of Electronics and Communication Engineering at J.S.S.
Academy of Technical Education, Noida, India. He received his M.Tech. in VLSI Design
and his B.E. in Electronics and Communication Engineering in 2007 and 1998, respectively.
His main research interest is reconfigurable memory design for low power.
Sanjay Kr. Singh is a Ph.D. scholar at Uttarakhand Technical University, Dehradun
(Uttarakhand), India. He is an Associate Professor in the Department of Electronics and
Communication Engineering at Indraprastha Engineering College, Ghaziabad (Uttar Pradesh),
India. He received his M.Tech. in Electronics & Communication and his B.E. in Electronics and
Telecommunication Engineering in 2005 and 1999, respectively. His main research interest is
deep-submicron memory design for low power.
Arti Noor completed her Ph.D. at the Department of Electronics Engineering, IT BHU, Varanasi, in
1990. She started her career as Scientist-B in the IC Design Group, CEERI, Pilani, from 1990 to
1995 and subsequently served there as Scientist-C from 1995 to 2000. In 2001 she joined the Speech
Technology Group, CEERI Centre, Delhi, and served there as Scientist-EI up to April 2005. In
May 2005 she joined CDAC Noida, where she is presently working as Scientist-E and HOD of the
M.Tech. (VLSI) Division. She has supervised more than 50 postgraduate theses in the area of VLSI
design, examined more than 50 M.Tech. theses, and is supervising three Ph.D. students in the area of
microelectronics. Her main research interests are VLSI design of semi- or full-custom chips for the
implementation of specific architectures, low-power VLSI design, and digital design.
D. S. Chauhan received his B.Sc. Engg. (1972) in Electrical Engineering at I.T. B.H.U., his M.E.
(1978) at R.E.C. Tiruchirapalli (Madras University), and his Ph.D. (1986) at IIT Delhi. He did his
post-doctoral work at Goddard Space Flight Centre, Greenbelt, Maryland, USA (1988-91). He was
Director of KNIT Sultanpur in 1999-2000 and founder Vice-Chancellor of U.P. Technical
University (2000-2006). Later, he served as Vice-Chancellor of Lovely Professional
University (2006-07) and Jaypee University of Information Technology (2007-2009).
He is currently serving as Vice-Chancellor of Uttarakhand Technical University for the 2009-12 tenure.
B. K. Kaushik received his B.E. degree in Electronics and Communication Engineering from C. R.
State College of Engineering, Murthal, Haryana, in 1994, and his M.Tech. in Engineering Systems
from Dayalbagh, Agra, in 1997. He obtained his Ph.D. under the AICTE-QIP scheme from IIT
Roorkee, India. He has published more than 70 papers in national and international journals and
conferences. His research interests are electronic simulation and low-power VLSI design. He is
serving as an Assistant Professor in the Department of Electronics and Computer Engineering, Indian
Institute of Technology, Roorkee, India.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
351 Vol. 1, Issue 5, pp. 351-360
CHARACTER RECOGNITION AND TRANSMISSION OF
CHARACTERS USING NETWORK SECURITY
Subhash Tatale¹ and Akhil Khare²
¹Student & ²Assoc. Prof., Deptt. of Info. Tech., Bharti Vidyapeeth Deemed Uni., Pune, India.
ABSTRACT
This paper deals with the recognition of the characters on vehicle number plates and the transmission of the
recognized characters through a secure network channel using encryption and decryption techniques. It covers
the implementation of automatic number plate recognition, which comprises number plate detection, character
segmentation, normalization and recognition, and explains the implementation of the respective algorithms.
Automatic Number Plate Recognition (ANPR) is a real-time embedded system which automatically recognizes
the license numbers of vehicles. After the characters are recognized from the number plate by the various
algorithms, they are transmitted through a secure channel. For secure transmission of the recognized
characters, i.e. the vehicle number, steganography techniques are used. The recognized characters are first
embedded into an image, and that data is encrypted using a private key at the sender’s end. At the
receiving end, the data is extracted from the image using the corresponding decryption technique.
KEYWORDS: artificial intelligence, optical character recognition, encryption, decryption, KNN
I. INTRODUCTION
Automatic Number Plate Recognition is a mass surveillance system that captures images of
vehicles and recognizes their license numbers. This project consists of two modules. The first module
covers the implementation of recognizing the vehicle number from the vehicle number plate. For this,
a process of number plate detection, character segmentation, normalization and recognition is used,
and the implementation of the respective algorithms is explained.
The second module covers the transmission of the recognized characters, i.e. the vehicle number,
through a secure network channel. The application of this concept is security and information hiding
of the recognized data. For secure transmission of the recognized characters, steganography
techniques are used. The recognized characters are first embedded into an image; the OutGuess
algorithm is used to embed the characters. This embedded data is encrypted at the sender’s end and
transmitted over the network. At the receiver’s end, a decryption technique is used to extract the
original data. The DES algorithm is used for the encryption and decryption of the data.
II. IMPLEMENTATION OF CHARACTER RECOGNITION
The first step in the process of character recognition of a number plate is the detection of the number
plate area. After detecting the number plate area, the plate is segmented using horizontal projection.
Once the plate is segmented, characters are extracted from the horizontal segments. The extracted
characters are normalized by calculating parameters such as brightness and are recognized using the
KNN algorithm. The following sections describe the implementation of character recognition from
the number plate of a vehicle.
2.1 Edge detection of number plate
Let us define the number plate as a rectangular area with an increased occurrence of horizontal and
vertical edges. The high density of horizontal and vertical edges in a small area is in many cases
caused by the contrasting characters of a number plate, but not in every case. This process can
sometimes detect a wrong area that does not correspond to a number plate. Because of this, we often
detect several candidates for the plate with this algorithm, and then choose the best one by further
heuristic analysis.
2.1.1 Convolution matrices
Each image operation is defined by a convolution matrix. The convolution matrix defines how a
specific pixel is affected by its neighboring pixels in the process of convolution. Individual cells in
the matrix represent the neighbors of the pixel situated in the centre of the matrix. The pixel
represented by the cell y is affected by the pixels x0…x8 according to the formula:
y = x0·m0 + x1·m1 + x2·m2 + x3·m3 + x4·m4 + x5·m5 + x6·m6 + x7·m7 + x8·m8 (1)
where m0…m8 are the coefficients of the convolution matrix, x0…x8 are the values of the neighboring
pixels, and y is the resulting value of the central pixel.
2.1.2 Horizontal and vertical edge detection
To detect horizontal and vertical edges, we convolve the source image with the matrices mhe and mve.
The convolution matrices are usually much smaller than the actual image; bigger matrices can also be
used to detect rougher edges.
In this section, the technique of number plate detection has been explained: the edges of the number
plate are detected horizontally and vertically.
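To make the operation concrete, the convolution of equation (1) applied over a whole image can be sketched as follows. The paper does not list the coefficients of mhe and mve, so a Sobel-style vertical-edge kernel is assumed here purely for illustration:

```python
# Sketch of edge detection by 3x3 convolution (equation (1) at every
# interior pixel). The kernel values are an assumption; the paper does
# not give the actual coefficients of mhe and mve.

def convolve3x3(image, m):
    """Apply equation (1) at every interior pixel of a 2-D list `image`."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            acc = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    acc += image[r + dr][c + dc] * m[dr + 1][dc + 1]
            out[r][c] = acc
    return out

m_ve = [[-1, 0, 1],   # assumed vertical-edge kernel: responds to
        [-2, 0, 2],   # left-to-right intensity changes
        [-1, 0, 1]]

# A tiny image with a vertical edge between columns 1 and 2:
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
edges = convolve3x3(img, m_ve)
```

Running this gives a strong response along the vertical edge; a real detector would additionally take absolute values and threshold the result.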
2.2 Horizontal and Vertical Image Projection
After the series of convolution operations, we can detect the area of the number plate according to
statistics of the snapshot. There are various methods of statistical analysis. One of them is the
horizontal and vertical projection of the image onto the axes x and y.
The vertical projection of the image is a graph which represents the overall magnitude of the image
along the axis y. If we compute the vertical projection after the application of the vertical edge
detection filter, the magnitude at a certain point represents the occurrence of vertical edges at that
point. The vertical projection of the so-transformed image can then be used for the vertical
localization of the number plate. The horizontal projection represents the overall magnitude of the
image mapped onto the axis x.
Let an input image be defined by a discrete function f(x, y). Then, the vertical projection py of the
function f at a point y is the sum of all pixel magnitudes in the y-th row of the input image. Similarly,
the horizontal projection at a point x of that function is the sum of all magnitudes in the x-th column.
We can mathematically define the horizontal and vertical projections as:
px(x) = Σ j=0…h−1 f(x, j);  py(y) = Σ i=0…w−1 f(i, y) (2)
where w and h are dimensions of the image.
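Equation (2) translates directly into code; the small binary image used here is assumed purely for illustration:

```python
# Sketch of equation (2): horizontal and vertical projections of an image
# given as an h x w grid of pixel magnitudes.

def projections(image):
    h, w = len(image), len(image[0])
    p_x = [sum(image[j][x] for j in range(h)) for x in range(w)]  # column sums
    p_y = [sum(image[y][i] for i in range(w)) for y in range(h)]  # row sums
    return p_x, p_y

img = [[0, 1, 1, 0],
       [0, 1, 0, 0],
       [0, 1, 1, 0]]
p_x, p_y = projections(img)
```

Peaks in `p_x` mark columns dense with (edge) pixels, which is exactly what the band- and plate-clipping steps below analyse.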
The detection of the number plate area consists of a “band clipping” and a “plate clipping”.
Band clipping is an operation used to detect and clip the vertical area of the number plate
(the so-called band) by analysing the vertical projection of the snapshot. Plate clipping is a
subsequent operation used to detect and clip the plate from the band (not from the whole
snapshot) by a horizontal analysis of that band.
In this section, the horizontal and vertical projection technique has been explained. This technique is
used for detecting the edges of the number plate.
2.3 Segmentation of plate using a horizontal projection
Since the detected plate is deskewed, we can segment it by detecting spaces in its horizontal
projection. We often apply an adaptive thresholding filter to enhance the area of the plate before
segmentation. Adaptive thresholding is used to separate the dark foreground from the light
background under non-uniform illumination. The number plate area after thresholding can be seen in
figure 1.
Figure 1: Number plate after application of the adaptive thresholding
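A minimal sketch of such an adaptive thresholding filter, using a local mean over a square window; the window radius and offset are assumptions, since the paper does not specify them:

```python
# Sketch of adaptive thresholding: a pixel becomes foreground (1) when it
# is darker than the mean of its local neighbourhood minus an offset.
# radius and offset are assumed values, not taken from the paper.

def adaptive_threshold(image, radius=1, offset=10):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [image[rr][cc]
                    for rr in range(max(0, r - radius), min(h, r + radius + 1))
                    for cc in range(max(0, c - radius), min(w, c + radius + 1))]
            mean = sum(vals) / len(vals)
            out[r][c] = 1 if image[r][c] < mean - offset else 0
    return out

# Dark character stroke (middle column) on a light background:
img = [[200, 50, 200],
       [200, 40, 200],
       [200, 60, 200]]
bw = adaptive_threshold(img)
```

Because the threshold is computed per neighbourhood, the stroke survives even if the background brightness drifts across the plate, which is the point of using the adaptive rather than the global variant.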
After the thresholding, we compute the horizontal projection px(x) of the plate f(x, y). We use this
projection to determine the horizontal boundaries between segmented characters. These boundaries
correspond to peaks in the graph of the horizontal projection (figure 2).
Figure 2: Horizontal projection of plate with detected peaks
The goal of the segmentation algorithm is to find the peaks which correspond to the spaces between
characters. At first, there is a need to define several important values in the graph of the horizontal
projection px(x):
Vm - The maximum value contained in the horizontal projection px(x): Vm = max 0≤x≤w px(x),
where w is the width of the plate in pixels.
Va - The average value of the horizontal projection px(x): Va = (1/w) Σ x=0…w−1 px(x).
Vb - The value used as a base for the evaluation of peak height, always calculated as
Vb = 2·Va − Vm. By construction, Va lies on the vertical axis between the values Vb and Vm.
The segmentation algorithm iteratively finds the maximum peak in the graph of the horizontal
projection. The peak is treated as a space between characters if it meets additional conditions, such as
a minimum height of the peak. The algorithm then zeroizes the peak and iteratively repeats this
process until no further space is found. This principle can be illustrated by the following steps:
1. Determine the index of the maximum value of the horizontal projection: xm = argmax 0≤x≤w px(x).
2. Detect the left and right foot of the peak as: xl = max { x | x ≤ xm ∧ px(x) ≤ cx·px(xm) };
xr = min { x | x ≥ xm ∧ px(x) ≤ cx·px(xm) }, where cx is a constant.
3. Zeroize the horizontal projection px(x) on the interval <xl, xr>.
4. If px(xm) < cw·vm, go to step 7 (cw is a constant determining the minimum peak height).
5. Divide the plate horizontally at the point xm.
6. Go to step 1.
7. End.
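The seven steps above can be sketched as follows; the constants cx and cw are assumptions, since the paper does not give their values:

```python
def segment(p, cx=0.7, cw=0.3):
    """Steps 1-7 above: find cut points (spaces) in a horizontal projection p.
    cx and cw are assumed constants; the paper does not give their values."""
    p = list(p)                       # work on a copy that we can zeroize
    w = len(p)
    vm = max(p)                       # overall maximum of the projection
    if vm == 0:
        return []                     # empty projection: nothing to cut
    cuts = []
    while True:
        xm = max(range(w), key=lambda x: p[x])   # step 1: index of the maximum
        if p[xm] < cw * vm:                      # step 4 (tested before the
            break                                # zeroize): step 7, end
        limit = cx * p[xm]                       # step 2: find the peak feet
        xl = xm
        while xl > 0 and p[xl] > limit:
            xl -= 1
        xr = xm
        while xr < w - 1 and p[xr] > limit:
            xr += 1
        cuts.append(xm)                          # step 5: cut point at xm
        for x in range(xl, xr + 1):              # step 3: zeroize the peak
            p[x] = 0
    return sorted(cuts)

# Three high peaks (spaces) separated by two low valleys (characters):
spaces = segment([9, 1, 1, 8, 1, 1, 9])
```

Each returned index is a column where the plate is divided; the low-projection runs between successive cuts are the character segments.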
In this section, the segmentation of the number plate has been explained. The number plate is
segmented using horizontal projection.
2.4 Extraction of characters from horizontal segments
The segment of plate contains besides the character also redundant space and other undesirable
elements. We understand under the term “segment” the part of a number plate determined by a
horizontal segmentation algorithm. Since the segment has been processed by an adaptive thresholding
filter, it contains only black and white pixels. The neighboring pixels are grouped together into larger
pieces, and one of them is a character. Our goal is to divide the segment into the several pieces, and
keeps only one piece representing the regular character. This concept is illustrated in figure 3.
Figure 3: Horizontal segment of the number plate contains several pieces of neighboring pixels.
In this section, how the characters are extracted from horizontal segments are explained once the
number plate is segmented.
2.5 Normalization of Characters
To recognize a character from its bitmap representation, there is a need to extract feature
descriptors from the bitmap. As the extraction method significantly affects the quality of the whole
OCR process, it is very important to extract features which are invariant to the various light
conditions, the font type used, and deformations of characters caused by a skew of the image.
The first step is the normalization of the brightness and contrast of the processed image segments.
In the second step, the characters contained in the image segments are resized to uniform
dimensions. In the third step, the feature extraction algorithm extracts appropriate descriptors from
the normalized characters.
2.5.1 Normalization of brightness and contrast
The brightness and contrast characteristics of the segmented characters vary due to different light
conditions during capture. Because of this, it is necessary to normalize them. There are many
different ways to do so, but the three most used are histogram normalization, global thresholding and
adaptive thresholding.
Through histogram normalization, the intensities of the character segments are redistributed over the
histogram to obtain normalized statistics. Techniques of global and adaptive thresholding are used to
obtain monochrome representations of the processed character segments. The monochrome (or
black & white) representation of an image is more appropriate for analysis, because it defines clear
boundaries of the contained characters.
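As an illustration of the first approach, brightness normalization of a character segment can be sketched as a linear histogram stretch; this is one simple variant, assumed here, and not necessarily the exact method used in the paper:

```python
# Sketch of histogram (min-max) normalization: linearly rescale the pixel
# intensities of a segment so they span the full [lo, hi] range.

def stretch(segment, lo=0, hi=255):
    flat = [v for row in segment for v in row]
    vmin, vmax = min(flat), max(flat)
    if vmax == vmin:                      # flat segment: nothing to stretch
        return [[lo] * len(row) for row in segment]
    scale = (hi - lo) / (vmax - vmin)
    return [[round(lo + (v - vmin) * scale) for v in row] for row in segment]

# A low-contrast segment captured in dim light:
seg = [[100, 150], [120, 180]]
norm = stretch(seg)
```

After stretching, two segments captured under different illumination share the same intensity statistics, which is what the subsequent feature extraction relies on.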
2.5.2 Normalization of dimensions and resampling
Before extracting feature descriptors from the bitmap representation of a character, it is necessary to
normalize it to unified dimensions. “Resampling” is the process of changing the dimensions of the
character. As the original dimensions of unnormalized characters are usually higher than the
normalized ones, the characters are in most cases downsampled. When we downsample, we reduce
the information contained in the processed image.
There are several methods of resampling, such as pixel resize, bilinear interpolation and
weighted-average resampling. We cannot determine which method is the best in general, because the
success of a particular method depends on many factors. For example, the use of weighted-average
downsampling in combination with detection of character edges is not a good solution, because this
type of downsampling does not preserve sharp edges. Because of this, the problem of character
resampling is closely associated with the problem of feature extraction.
In this section, methods for the normalization of characters have been explained. The extracted
characters of the number plate can be normalized by adjusting brightness and contrast, and by
normalizing the dimensions and resampling the characters.
2.6 Character Recognition
After normalization, the characters are recognized using text classification. The KNN algorithm is
used for text classification. Text categorization, also called text classification, is the process of
identifying the class to which a text document belongs. This generally involves learning, for each
class, its representation from a set of characters that are known to be members of that class. The KNN
algorithm is used to achieve this task. The simplicity of this algorithm makes it efficient with respect
to its computation time, but also with respect to the ability of non-expert users to use it efficiently,
that is, in terms of its prediction rate and the interpretability of the results. This section presents a
simple KNN algorithm adapted to text categorization that performs aggressive feature selection. This
feature selection method allows the removal of features that add no new information because some
other feature highly interacts with them (which would otherwise lead to redundancy), as well as
features with weak prediction capability. Redundancy and irrelevancy can harm a KNN learning
algorithm by giving it an unwanted bias and by adding complexity.
2.6.1 KNN algorithm
The main idea of the KNN algorithm is that, given a testing sample, we use a certain similarity
measure to calculate the degrees of similarity between the testing sample and the training samples,
and then classify it with the label of its K nearest neighbors; if the K nearest neighbors carry several
labels, the sample is assigned to the majority class among them.
The following is a description of the KNN algorithm.
a) Describe the training text as a vector according to the characteristics set; the weights are usually
calculated with the TF-IDF method.
b) Perform word segmentation for the new text according to the feature words, and then describe the
vector of the new text.
c) Find the K most similar neighbors of the new text among the training documents. To measure the
similarity efficiently, we make use of the cosine distance as follows:
sim(di, dj) = ( Σ k=1…M wik·wjk ) / √( (Σ k=1…M wik²)·(Σ k=1…M wjk²) ) (3)
where di denotes the feature vector of the test text, dj denotes the centre vector of class j, M denotes
the dimension of the feature vectors, and wk denotes the k-th component of a feature vector. So far,
there is no good way to determine the value of K. In general, it is given an initial value, which is then
adjusted according to the results of experiments.
d) Among the K nearest neighbors of the new text, calculate the weight of each category as follows:
p(x, cj) = Σ di∈KNN sim(x, di)·y(di, cj) (4)
where x denotes the feature vector of the new text, sim(x, di) denotes the similarity formula above,
and y(di, cj) denotes the class-membership function (1 if di belongs to category cj, and 0 otherwise).
e) To compare the weight of each category, move the new text to the category of the maximum
weight.
In this section, the algorithm for character recognition has been explained. To recognize the
characters, the k-nearest neighbor (KNN) algorithm is used.
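The classification procedure of equations (3) and (4) can be sketched as follows; the toy feature vectors and the value of K are illustrative assumptions:

```python
import math

def cosine_sim(a, b):
    """Equation (3): cosine similarity of two M-dimensional weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(x, training, k=3):
    """Equation (4): weight each category by the summed similarity of the
    K nearest training samples, then pick the heaviest category."""
    neighbours = sorted(training, key=lambda t: cosine_sim(x, t[0]),
                        reverse=True)[:k]
    weights = {}
    for vec, label in neighbours:
        weights[label] = weights.get(label, 0.0) + cosine_sim(x, vec)
    return max(weights, key=weights.get)

# Toy TF-IDF-like vectors (assumed for illustration), labelled 'A' or 'B':
train = [([1.0, 0.0, 0.1], 'A'), ([0.9, 0.1, 0.0], 'A'),
         ([0.0, 1.0, 0.2], 'B'), ([0.1, 0.9, 0.0], 'B')]
label = knn_classify([0.8, 0.2, 0.1], train, k=3)
```

Weighting by similarity, rather than counting votes, means a near neighbour contributes more than a borderline one, which matches step d) above.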
III. CHARACTER TRANSMISSION USING NETWORK SECURITY
In this module, the recognized characters of the number plate are transmitted over the network using
network security. Steganography techniques are used to hide the information, i.e. the recognized
characters. First, the recognized characters are embedded into an image using an encryption
technique: an image is selected from a source location, and the recognized characters are hidden in
the selected image. A private key is used for encryption at the sender’s side. After this image
containing the characters is sent to the receiver, at the receiver’s end a decryption technique is used to
extract the characters from the image.
Steganography works by replacing bits of useless or unused data in regular computer files (such as
graphics, sound, text, HTML, or even floppy disks) with bits of different, invisible information. This
hidden information can be plain text, cipher text, or even images and sound waves. In the field of
steganography, some terminology has developed. The adjectives cover, embedded and stego were
defined at the Information Hiding Workshop held in Cambridge, England. The term “cover” is used
to describe the original, innocent message, data, audio, still image, video and so on. When referring to
audio signal steganography, the cover signal is sometimes called the “host” signal. The information to
be hidden in the cover data is known as the “embedded” data. The “stego” data is the data containing
both the cover signal and the “embedded” information. Logically, the process of putting the hidden
or embedded data into the cover data is sometimes known as embedding. Occasionally, especially
when referring to image steganography, the cover image is known as the container.
3.1 Hiding text message inside image
The following steps show in detail the procedure of hiding secret text inside a cover image (block
diagram in Figure 4).
3.1.1 Preparing the container image
1. Convert the cover image to streams of binary bits.
2. Use two adjacent pixels to hide one character.
3.1.2 Preparing the secret text message
1. Convert each character of the secret message to a decimal number. Example: H = (72)10 = (0100 1000)2.
(a) We take the 4 least significant bits alone; we can do that by performing an AND operation:
(72)10 AND (15)10 = (0100 1000)2 AND (0000 1111)2 = (0000 1000)2 = (8)10.
(b) We take the 4 most significant bits alone; we can do that by performing a shift operation by 4:
(72)10 shifted to the right by 4 = (0000 0100)2 = (4)10.
2. Now we can add the secret message to the cover image by applying an OR operation.
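The preparation steps can be sketched as follows. Clearing the low nibble of each carrier pixel before the OR is an added safety step; the text implicitly assumes those bits are free:

```python
def hide_char(ch, pixel_hi, pixel_lo):
    """Embed one character into the low nibbles of two adjacent pixels."""
    code = ord(ch)                    # e.g. 'H' -> 72 = 0b0100_1000
    hi_nibble = code >> 4             # upper 4 bits: 0b0100 = 4
    lo_nibble = code & 0b1111         # lower 4 bits: 0b1000 = 8
    # Clear the 4 low bits of each carrier pixel, then OR the nibbles in.
    return ((pixel_hi & 0b11110000) | hi_nibble,
            (pixel_lo & 0b11110000) | lo_nibble)

# Two adjacent carrier pixels (values assumed for illustration):
p1, p2 = hide_char('H', 0b11100000, 0b01010000)
```

The resulting stego pixels are (1110 0100)2 and (0101 1000)2, the same values used in the extraction example of section 3.1.3.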
Figure 4: Block diagram to hide text into Image
As shown in the block diagram (figure 4), to hide each character of the secret message we need two
pixels. So the number of characters that we can hide in an (n × n) image is given by the following
equation:
Number of characters < (n · n) ÷ 2 − n (5)
In equation (5), we subtract n pixels because we do not put secret text in the first row of the cover
image; we start writing data from the second row. The first row of the cover image is used to store
specific data, such as the position of the last pixel in the cover image that contains secret data. The
following two equations show how to calculate the pixel position that marks the end of the secret
text data:
Ypos = (length(secret message) × 2) mod length(1st row of image) (6)
Xpos = (length(secret message) × 2 − Ypos) ÷ length(1st row of image) (7)
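Equation (5) can be expressed as a short helper; integer division is assumed here, since half a character cannot be stored:

```python
def capacity(n):
    """Maximum characters storable in an n x n cover image per equation (5):
    two carrier pixels per character, minus the first row (n pixels), which
    is reserved for bookkeeping data such as the last-pixel position."""
    return (n * n) // 2 - n

room = capacity(100)   # capacity of a 100 x 100 cover image
```

So a 100 × 100 cover image can carry at most 4900 hidden characters under this scheme.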
3.1.3 Reconstructing the secret text file
Figure 5: Extracting the secret message from the image
Reconstruction of the secret text message is performed by reversing the process used to insert the
secret message into the container image. The following steps describe the details of reconstructing the
hidden text file (Figure 5):
1. Take two adjacent pixels from the stego image.
2. Shift the first pixel to the left by 4 (within 8 bits): (1110 0100)2 shifted left by 4 = (0100 0000)2.
3. Perform an AND operation with 15 on the second pixel:
(0101 1000)2 AND (0000 1111)2 = (0000 1000)2.
4. Add the results of steps 2 and 3 together: (0100 0000)2 + (0000 1000)2 = (0100 1000)2 = (72)10 = H.
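The four extraction steps can be sketched directly, reusing the stego pixel values from the worked example above:

```python
def extract_char(pixel_hi, pixel_lo):
    """Recover one character from the low nibbles of two adjacent pixels."""
    hi = (pixel_hi << 4) & 0b11110000   # step 2: shift the nibble into the high half
    lo = pixel_lo & 0b1111              # step 3: mask off the low nibble
    return chr(hi + lo)                 # step 4: recombine the two halves

# The stego pixels from the example: (1110 0100)2 and (0101 1000)2
ch = extract_char(0b11100100, 0b01011000)
```

Masking with 0b11110000 after the shift keeps the value within 8 bits, matching the 8-bit arithmetic implied by the worked example.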
In this section, the embedding, encryption and decryption methods used for security and information
hiding of the characters have been explained.
IV. RESULTS
According to the results, this system gives good responses only to clear plates; skewed plates and
plates in difficult surrounding environments cause significant degradation of recognition ability.
Figure 6: Example of plate recognition.
The ANPR solution has been tested on static snapshots of vehicles, which have been divided into
several sets according to difficulty.
Figure 7: Example of plate detection.
Sets of blurry and skewed snapshots give worse recognition rates than the set of snapshots which
have been captured clearly. The objective of the tests was not to find a one-hundred-percent
recognizable set of snapshots, but to test the invariance of the algorithms on random snapshots
systematically classified into sets according to their properties.
Figure 8: Example of character recognition.
V. CONCLUSIONS
The objective of this paper was to study and resolve algorithmic and mathematical aspects of
automatic number plate recognition systems, such as problems of machine vision, pattern
recognition, OCR and the KNN algorithm. This paper also contains a demonstration of ANPR
software, which comparatively demonstrates all the described algorithms. The various algorithms for
recognizing the characters from the number plate have been explained.
This paper also covers the steganography techniques which are used for information hiding and
security. The recognized characters are embedded into images, and encryption and decryption
techniques are used for network security.
REFERENCES
[1] Peter M. Roth, Martin Köstinger, Paul Wohlhart, Horst Bischof, and Josef A. Birchbauer (2010):
Automatic Detection and Reading of Dangerous, 2010 Seventh IEEE International Conference on
Advanced Video and Signal Based Surveillance.
[2] Zhiyong Yan, Congfu Xu: Combining KNN Algorithm and Other Classifiers
[3] Ping Dong, Jie-hui Yang, Jun-jun Dong (2006): The Application and Development Perspective of
Number Plate Automatic Recognition Technique.
[4] Wenqian Shang, Haibin Zhu, Houkuan Huang, Youli Qu, and Yongmin Lin(IEEE 2006): The
Improved ontology kNN Algorithm and its Application
[5] W. K. I. L. Wanniarachchi, D. U. J. Sonnadara and M. K. Jayananda (2007): License Plate
Identification Based on Image Processing Techniques, Second International Conference on Industrial
and Information Systems.
[6] Zhang Yunliang, Zhu Lijun, Qiao Xiaodong, Zhang Quan: Flexible KNN Algorithm for Text
Categorization by Authorship based on Features of Lingual Conceptual Expression, 2009 World
Congress on Computer Science and Information Engineering
[7] Ankush Roy Debarshi Patanjali Ghoshal(2011): Number Plate Recognition for Use in Different
Countries Using an Improved Segmentation, IEEE.
[8] Yang Jun Li Na Ding Jun: A design and implementation of high-speed 3DES algorithm system, 2009
Second International Conference on Future Information Technology and Management Engineering
[9] YU WANG, ZHENG-OU WANG: A FAST KNN ALGORITHM FOR TEXT CATEGORIZATION ,
Proceedings of the Sixth International Conference on Machine Learning and Cybernetics, Hong Kong,
19-22 August 2007.
[10] Ping Dong, Jie-hui Yang, Jun-jun Dong (2006): The Application and Development Perspective of
Number Plate Automatic Recognition Technique, IEEE.
[11] Tingyuan Nie, Teng Zhang: A Study of DES and Blowfish Encryption Algorithm, IEEE 2009.
[12] Muhammad Tahir Qadri, Muhammad Asif (2009): Automatic Number Plate Recognition System For
Vehicle Identification Using Optical Character Recognition, International Conference on Education
Technology and Computer.
[13] Chen-Chung Liu, Zhi-Chun Luo(2010): An Extraction Algorithm of Vehicle License Plate Numbers
Using Pixel Value Projection and License Plate Calibration, International Symposium on Computer,
Communication, Control and Automation.
[14] Pletl Szilveszter, Gálfi Csongor(2010): Parking surveillance and number plate recognition application,
IEEE 8th International Symposium on Intelligent Systems and Informatics.
[15] Zhihua Chen, Xiutang Geng, Jin Xu: Efficient DNA Sticker Algorithms for DES, IEEE 2008.
[16] C. Sanchez-Avilaf , R. Sanchez-Reillot: The Rijndael Block Cipher (AES Proposal): A Comparison
with DES, IEEE 2001.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
360 Vol. 1, Issue 5, pp. 351-360
[17] B. Raveendran Pillai, Prot: (Dr). Sukesh Kumar. A (2008): A Real-time system for the automatic
identification of motorcycle - using Artificial Neural Networks, International Conference on
Computing, Communication and Networking.
[18] Mohamed El-Adawi, Hesham Abd el Moneim Keshk, Mona Mahmoud Haragi: Automatic license plate
recognition.
[19] Luis Salgado, Jose' M. Mene'ndex, Enrique Renddn and Narciso Garcia (1999): Automatic Car Plate
Detection and Recognition through Intelligent Vision Engineering, IEEE.
[20] Hwajeong Lee, Daehwan Kim, Daijin Kim, Sung Yang Bang (2003): Real-Time Automatic Vehicle
Management System Using Vehicle Tracking and Car Plate Number Identification, IEEE.
[21] B. Raveendran Pillai, Prot: (Dr). Sukesh Kumar. A (2008): A Real-time system for the automatic
identification of motorcycle - using Artificial Neural Networks, International Conference on
Computing, Communication and Networking.
[22] Mohamed El-Adawi, Hesham Abd el Moneim Keshk, Mona Mahmoud Haragi: Automatic license plate
recognition.
[23] A. S. Johnson B. M. Bird, Department of Elect. & Electron. Engineering, University of Bristol:
Number-plate Matching for Automatic Vehicle Identification.
[24] Maged M. M. FAHMY: Toward Low Cost Traffic Data collection: Automatic Number-Plate
Recognition, The University of Newcastle Upon Tyne Transport Operations Research Group.
Authors Biographies

Subhash Tatale is an M.Tech student with 4 years of experience, of which 2 years are in industry and 2 years in academia. His research area is image processing.

Akhil Khare is an Associate Professor in the Department of Information Technology. He has completed his M.Tech. and is pursuing a Ph.D. in the field of software engineering.
IMPACT ASSESSMENT OF SHG LOAN PATTERN USING
CLUSTERING TECHNIQUE
Sajeev B. U1, K. Thankavel2
1Research Scholar, Center for Research and Development PRIST University, Thanjavoor, T.N., India.
2Prof. & Head, Department of Computer Science, Periyar University, Salem, T.N., India.
ABSTRACT
The Indian micro-finance sector, dominated by self help groups (SHGs), addresses issues like actualizing equitable
gains from development and fighting poverty. A number of financial institutions provide micro-finance
services to the poor through banking and NGOs. Clustering analysis is a key and easy tool in data mining and
pattern recognition. We have applied the K-Means and Fuzzy C-Means algorithms to study in detail the data
collected through field work and questionnaires from SHG members in 9 districts of Kerala state. The study
reveals that the rate of interest on SHG loans from various government agencies typically ranges from 12 to
15%. Of the total members availing loans, 56% take loans from banks. District-wise studies on the rate of
interest were also carried out. A study of the relationship between education and savings among SHG members
shows that members with higher education exhibit stronger saving habits.
KEYWORDS: Data mining, Clustering, K-Means, Fuzzy C-Means, self help groups
I. INTRODUCTION
With the increased and widespread use of technology, interest in data mining has grown rapidly. Companies now utilize data mining techniques to examine their databases for trends, relationships, and outcomes, both to enhance their overall operations and to discover new patterns that may allow them to better serve their customers. Data mining provides numerous benefits to businesses, government, society, and individual persons [1-5].

For many years, statistics have been used to analyze data in an effort to find correlations, patterns, and dependencies. However, with advances in technology, more and more data have become available, greatly exceeding the human capacity to analyze them manually. Before the 1990s, data collected by bankers, credit card companies, department stores and so on saw little use. But in recent years, as computational power has increased, the idea of data mining has emerged. Data mining is a term used to describe the "process of discovering patterns and trends in large data sets in order to find useful decision-making information." With data mining, the information obtained from bankers, credit card companies, and department stores can be put to good use.

Data mining is a component of a wider process called "knowledge discovery from databases". It involves scientists and statisticians, as well as those working in other fields such as machine learning, artificial intelligence, information retrieval and pattern recognition. Before a data set can be mined, it first has to be "cleaned". This cleaning process removes errors, ensures consistency and takes missing values into account. Next, computer algorithms are used to "mine" the cleaned data, looking for unusual patterns. Finally, the patterns are interpreted to produce new knowledge [6-7].

Clustering is a very popular descriptive data mining technique that aids in describing the characteristics of data sets. The goal of clustering is to form groups of objects with similar characteristics [8].
Clustering analysis is an important but challenging task in unsupervised learning. Data clustering is a common technique for statistical data analysis and has been used in a variety of engineering and scientific
disciplines. The biggest innovation in microfinance in the past five years is the advent of data mining, that is, the analysis of data to inform practical responses to business challenges. In this paper we discuss the role of Self Help Groups in India, using the K-Means and Fuzzy C-Means algorithms to evaluate SHG loan patterns.
1.1 About Self Help Groups
An SHG, or self help group, is a small group of rural poor who have voluntarily come forward to form a group for the improvement of the social and economic status of its members. The core of SHG bank linkage in India has been built around an important aspect of human nature - the feeling of self worth. Over the last ten years, it has come to symbolize an enduring relationship between the financially deprived and the formal financial system, forged through a socially relevant tool known as Self Help Groups (SHGs) [9-10]. An amazingly large number of formal and non-formal bodies have partnered with NABARD (National Bank for Agriculture and Rural Development) in this unique process of socio-economic engineering. What started off in 1992 as a modest pilot testing of linking around 500 SHGs with branches of half a dozen banks across the country, with the help of a few NGOs, today involves about 20,000 rural outlets of more than 440 banks, with an advance portfolio of more than Rs. 1,200 crore ($240 m.) in micro-finance lending to SHGs. Financial services have reached the doorsteps of over 8 million very poor people through 500,000 SHGs, hand-held by over 2,000 development partners.

India is fiercely diverse as a nation, and most communities are also diverse in caste, opinion and religion. Indians are also known for their sense of personal independence, which is often translated into indiscipline, whether on the roads, in political assemblies or elsewhere. The SHG system reflects this independence and diversity. It allows people to save and borrow according to their own timetable, not as the bank requires. SHGs can also play a part in a whole range of social, commercial or other activities. They can be vehicles for social and political action as well as for financial intermediation.

A most notable milestone in the SHG movement came when NABARD launched the pilot phase of the SHG Bank Linkage programme in February 1992.
This was the first instance of mature SHGs being directly financed by a commercial bank. The informal thrift and credit groups of the poor were recognized as bankable clients. Soon after, the RBI advised commercial banks to consider lending to SHGs as part of their rural credit operations, thus creating SHG Bank Linkage [11-13]. The linking of SHGs with the financial sector was good for both sides. The banks were able to tap into a large market, namely low-income households; transaction costs were low and repayment rates were high. The SHGs were able to scale up their operations with more financing, and they had access to more credit products.

There are a number of criteria for SHG members to obtain loans. For an SHG to get a loan from a bank, the SHG should open an account, operate the account regularly, maintain a healthy relationship with the bank, and repay loans regularly. The loans initially taken are usually for education, consumption, health, house repair, and repaying of old loans. Apart from this, loans are taken for the purchase of seeds and fertilizers and for the development of small businesses (petty shops, vegetable vending, flower vending, hotels, saree business, animal husbandry activities, etc.). The sanction of loans to SHGs by banks is based on the quantum of savings mobilized by the SHGs. Loans may be granted by the SHG to its members for various purposes; the bank does not decide the purpose for which the SHG gives loans to its members. A repayment schedule is drawn up with the SHG, and the loan is to be repaid regularly. Small and frequent installments are better than large installments covering a long period.

Problems in the repayment of loans by SHGs were quite widespread. Since the amounts involved in these loans at the individual level were not of much significance to the banks, there was a tendency not to take serious note of irregularities in the repayment schedules of SHGs.
However, as loans to SHGs also had a tendency to slip into the irregular mode more often than not, bankers need to exercise the same care and caution while dealing with SHGs as they would with other borrowers. These facts were supported by a report in the Indian national daily THE HINDU on Friday, September 11, 2010 at Chennai under the heading "Fall in SHGs' loan repayment rate: NABARD chief": "The repayment by SHGs is not 100 per cent. It stood at 88 per cent the year before last and is falling further. Two reasons were attributed to it. Firstly, bank managers are not in touch with the SHGs and the loan members are not attending the monthly meetings. Bank managers should visit the
borrowers at least once in three or six months to find out their problems," said K. G. Karmakar, Managing Director, NABARD.
II. BACKGROUND
Earlier SHG data evaluations were done using statistical tools [14]. As research methods, a mix of quantitative and qualitative tools is applied. Quantitative data are collected through a questionnaire. The qualitative information enables verification of the quantitative findings as well as giving more insight into the reasons behind those findings. The survey has been conducted through structured questionnaires related to the socio-economic status of SHG members. Since the purpose of the study is to understand trends within groups, the survey focused on group-level information. At the individual member level, data was collected regarding the following aspects:
• Loan taken and purpose of loan
• Savings and credit related activities of the group
• Socio-economic composition of the groups
• Social issues taken up by the groups
• Linkage between the groups and bank
• Assets before and after being a member
• Literacy and education status of group members
There are 14 districts in Kerala state; this study has been restricted to 9 districts, namely Kannur, Calicut, Malappuram, Palakkad, Wayanad, Trichur, Kottayam, Alleppy and Trivandrum. The above-mentioned data has been collected from 3500 SHG members with 51 attributes/parameters. The majority of members are female. For better understanding, the financial status, utilization of loans, purpose of loans, savings, education, and loan repayment status of the SHG members before and after availing loans have been studied in detail by applying clustering techniques, namely the K-Means and Fuzzy C-Means algorithms. Among the various clustering algorithms, K-Means (KM) and Fuzzy C-Means are the most popular methods used in data analysis due to their good computational performance. However, it is well known that KM might converge to a local optimum, and its result depends on the initialization process, which randomly generates the initial clustering. The main objectives of this study are as follows:
• To find the rate of interest paid by SHG members to various financial institutions
• To study the various financial institutions which provide the most loans to SHG members
• To study the types of loans availed by SHG members
• To carry out a district-wise study of the rate of interest for loans provided by SHGs
• To study the education status of SHG members in Kerala
• To find the relationship between education and savings
III. MATERIALS AND METHODS
3.1 K-means Algorithm
The K-Means [15-18] clustering technique creates a one-level partition of the data objects. We first choose K initial centroids, where K is a user-specified parameter, namely the number of clusters desired. Each point is then assigned to the closest centroid, and each collection of points assigned to a centroid forms a cluster. The centroid of each cluster is updated based on the points assigned to the cluster. We repeat the assignment and update steps until no point changes clusters or, equivalently, until the centroids remain the same. The K-Means algorithm is given below.
K-Means algorithm

Require: set of input items, x, in Euclidean space; desired number of clusters, k.
1:  for 1 ≤ i ≤ k do
2:      kmeans[i] ← random item from data
3:      centroid[i] ← 0
4:      count[i] ← 0
5:  repeat
6:      for all x ∈ items do
7:          mindist ← 1
8:          for 1 ≤ i ≤ k do
9:              if ||x - kmeans[i]||² < ||x - kmeans[mindist]||² then
10:                 mindist ← i
11:         cluster[x] ← mindist
12:         centroid[mindist] ← centroid[mindist] + x
13:         count[mindist] ← count[mindist] + 1
14:     for 1 ≤ i ≤ k do
15:         kmeans[i] ← centroid[i]/count[i]
16:         centroid[i] ← 0
17:         count[i] ← 0
18: until no item is reclassified or the repetition count is exceeded
19: each x ∈ items is now classified by cluster[x]

3.2 Fuzzy C-Means Algorithm

The Fuzzy C-Means algorithm (FCM) [19-20], which is the best known unsupervised fuzzy clustering algorithm, is also used in analyzing the SHG data. However, FCM algorithms have considerable trouble in a noisy environment, and they are inaccurate when there are many clusters of different sample sizes. It is based on minimization of the following objective function:

J_m = Σ_{i=1..N} Σ_{j=1..C} u_ij^m || x_i - c_j ||², 1 ≤ m < ∞ ........ (1)

where m is any real number greater than 1, u_ij is the degree of membership of x_i in the cluster j, x_i is the i-th of the N d-dimensional measured data, c_j is the d-dimensional center of cluster j, and ||*|| is any norm expressing the similarity between any measured data and the center. The algorithm is composed of the following steps:

1. Initialize the membership matrix U = [u_ij], U(0).
2. At step k: calculate the center vectors C(k) = [c_j] with U(k):
   c_j = ( Σ_{i=1..N} u_ij^m x_i ) / ( Σ_{i=1..N} u_ij^m )
3. Update U(k) to U(k+1):
   u_ij = 1 / Σ_{l=1..C} ( ||x_i - c_j|| / ||x_i - c_l|| )^{2/(m-1)}
4. If || U(k+1) - U(k) || < ε then STOP; otherwise return to step 2.

3.3 Data Cleaning

As data sets are not perfect, one can expect missing values for some attributes, some errors in transcription or data input, and duplicate entries [21-23]. Dealing with these issues is a topic of major study in itself. Sometimes, a received data set has already been 'cleaned'. Perhaps 'scrubbed' is a better term: missing values are sometimes filled in with average values, or with values copied from similar-looking records.
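As a concrete illustration of this cleaning step, a minimal pandas sketch (the column names and values are hypothetical, not taken from the survey):

```python
import numpy as np
import pandas as pd

# Hypothetical survey extract: one duplicate record and two missing values.
df = pd.DataFrame({
    "loan_amount":   [28000, 20000, np.nan, 20000, 31000],
    "interest_rate": [14.0, 14.0, 12.0, 14.0, np.nan],
})

df = df.drop_duplicates()                    # remove duplicate entries
df = df.fillna(df.mean(numeric_only=True))   # 'scrub': fill missing values with column averages

print(df.shape)                    # (4, 2): the duplicate row is gone
print(int(df.isna().sum().sum()))  # 0: no missing values remain
```

This mirrors the description above: duplicates are removed first, then each remaining gap is filled with the average of its column.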
3.4 Feature Selection

Feature selection is a preprocessing method of choosing a subset of features from the original ones. It has proven effective in reducing dimensionality, improving mining efficiency, increasing mining accuracy, and enhancing result comprehensibility [24]. Feature selection methods broadly fall into the wrapper model and the filter model [25]. The wrapper model uses the predictive accuracy of a predetermined mining algorithm to determine the goodness of a selected subset; it is computationally expensive for data with a large number of features. The filter model separates feature selection from classifier learning and relies on general characteristics of the training data to select feature subsets that are independent of any mining algorithm. We have chosen the filter method for the present study.
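A minimal sketch of a filter-model selector in Python (variance is used as the relevance score purely for illustration; the paper does not state which filter criterion was applied, and the attribute names below are hypothetical):

```python
import numpy as np

def filter_select(X, names, k):
    """Filter model: score each feature independently of any classifier
    (here by variance) and keep the k highest-scoring features."""
    scores = X.var(axis=0)
    top = np.argsort(scores)[::-1][:k]   # indices of the k largest variances
    keep = sorted(int(i) for i in top)
    return keep, [names[i] for i in keep]

# Hypothetical 6-member, 4-attribute extract; the last attribute is constant
# and therefore carries no information.
names = ["loan_amount", "interest_rate", "savings", "member_flag"]
X = np.array([
    [28000, 14, 72, 1],
    [20000, 14, 37, 1],
    [30000, 13, 67, 1],
    [20500, 14, 36, 1],
    [31000, 12, 92, 1],
    [19000, 15, 33, 1],
], dtype=float)

idx, kept = filter_select(X, names, k=2)
print(kept)  # ['loan_amount', 'savings']: the constant flag and near-constant rate are dropped
```

Because the score is computed from the data alone, no mining algorithm has to be re-run per candidate subset, which is exactly what makes the filter model cheaper than the wrapper model.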
IV. RESULTS AND DISCUSSION
Surveys were carried out among 3500 SHG members across 9 districts in Kerala. Detailed questionnaires were prepared. Qualitative information was gathered through semi-structured interviews with SHG members, SHG leaders, federation leaders, bank officials, moneylenders and government officials. The selected SHG groups were found to have been very stable for more than 3 years. From these groups we collected 3500 objects with 51 attributes. The procedures adopted before clustering include data cleaning: the collected data were cleaned with the help of domain experts, and the feature selection method was applied. Finally, the data set was fixed at 3434 objects with 12 attributes. The selected attributes are given in Table I.

Table I. Selected attributes for the study

1. Loan amount from SHG (Loan I)
2. Interest rate (%)
3. Loan period / month
4. Loan repayment / month
5. Balance loan in the book
6. Loan taken from other sources (Loan II)
7. Amount taken
8. Interest rate (%)
9. Economic benefits gained
10. Savings / month
11. Assets increased after joining SHG
12. Savings outside the group
The K-Means and Fuzzy C-Means algorithms discussed in sections 3.1 and 3.2 were applied to the SHG data collected from the 9 districts in Kerala. The K-Means algorithm was applied to the 3434 members with 12 attributes for different values of k (the number of clusters), and it was found that the best value for k is 2. After analysing the activities and functioning of the two different groups of members, one group was identified as performing and the other as non-performing. The clusters obtained by the K-Means algorithm are dominated by the selection of the initial seed, or centroid. Hence the K-Means algorithm was run with different sets of initial seeds, and the results are tabulated in Table II.
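Such seeded runs can be imitated with a short numpy sketch (the data here are synthetic and the seeding scheme is an assumption; in the paper the seed values Z index actual survey records):

```python
import numpy as np

def kmeans(X, k, seed, iters=100):
    """Plain Lloyd iteration; `seed` only controls which items are picked
    as the initial centroids, mirroring runs with different initial seeds."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every item to its closest centroid
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):   # no centroid moved: converged
            break
        centers = new
    return labels

# Synthetic stand-in for the 3434 x 12 survey matrix: two unequal groups,
# echoing the performing / non-performing split reported in the study.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (300, 12)), rng.normal(8, 1, (40, 12))])

for seed in (10, 110, 3000):              # different initial seeds, as in the tabulated runs
    sizes = np.bincount(kmeans(X, k=2, seed=seed), minlength=2)
    print(seed, sorted(sizes))            # cluster sizes per run
```

Repeating the run with several seeds and comparing the resulting cluster sizes is a simple way to see how strongly the partition depends on initialization.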
Table II. Results of K-Means for different centroids

Run (Ri)   Patterns in cluster I (C1)   Patterns in cluster II (C2)   Seed values
R1         246                          3188                          Random seed
R2         268©                         3166**                        Z=10 and Z=100
R3         3166**                       268©                          Z=10 and Z=3000
R4         145                          3289                          Z=110 and Z=2000
R5         3166**                       268©                          Z=300 and Z=3000
R6         3166**                       268©                          Z=500 and Z=510
R7         268©                         3166**                        Z=644 and Z=844
R8         268©                         3166**                        Z=1400 and Z=1800
R9         3166**                       268©                          Z=3000 and Z=3010

The symbols ** and © indicate the same number of patterns in the clusters. Each time we applied the K-Means algorithm we obtained two clusters, of which one is the performing cluster and the other the non-performing cluster. Since the numbers of objects in some clusters are the same, which shows their stability, we selected 5 clusters out of the 18 for further study, as the selected clusters have different numbers of patterns. The following clusters were taken for study: R1C1, R1C2, R3C1, R3C2 and R4C1. By applying the K-Means algorithm for different centroids, we obtained the patterns of each parameter: Loan amount from SHG (Loan I), Interest rate (%), Loan period / month, Loan repayment / month, Balance loan in the book, Loan taken from other sources (Loan II), Amount taken, Interest rate (%), Economic benefits gained, Savings / month, Assets increased after joining SHG, and Savings outside the group. Table III shows the patterns obtained for different runs of the K-Means algorithm.

Table III. Patterns obtained for different runs of the K-Means algorithm

Pattern                                              R1C1    R1C2    R3C1       R3C2       R4C1       R4C2
Loan amount from SHG in Rs                           28717   20248   20019.35   30725.43   30440.76   20432.31
Interest rate (%)                                    14      14      14.35218   14.01512   13.87652   14.3457
Loan period in months                                10      11      10.61592   10.66433   10.44877   10.62724
Loan repayment per month in Rs                       305     288     288.0054   305.6946   305.5318   288.6744
Balance loan in the book in Rs                       6133    4812    4781.629   6381.619   6335.819   4843.49
Loans taken from other sources                       1       1       0.591282   1.171644   1.089659   0.616601
Amount taken in Rs                                   89593   3252    3028.747   85146.01   116803     4703.866
Interest rate (%)                                    13      5       4.81396    13.19781   12.58633   5.154454
Economic benefits gained in Rs                       1574    1414    1410.676   1600.021   1499.722   1422.18
Savings per month in Rs                              72      37      36.79659   67.50094   92.41822   36.84646
Assets increased after joining SHG (credit points)   8       7       6.675616   8.190371   8.75899    6.707206
Savings outside the group per month in Rs            216     118     117.4037   214.4172   232.2939   120.2439

To perform a comparative study we applied the Fuzzy C-Means algorithm to the SHG data for different values of m (the weight exponent in the fuzzy membership), and the results are tabulated in Table IV.

Table IV. Results of Fuzzy C-Means for different values of m

Run    Number of iterations   Members in cluster I   Members in cluster II   m
R1     60                     133                    3301                    1.25
R2     100*                   3211                   223                     1.5
R3     100*                   302                    3132                    1.75
R4     100*                   773                    2661                    2
R5     100*                   870                    2564                    2.25
R6     95                     870                    2564                    2.5
R7     89                     2499                   935                     2.75
R8     71                     959                    2475                    3
R9     77                     2453                   981                     3.25
R10    67                     991                    2443                    3.5
R11    63                     2433                   1001                    3.75
R12    90                     1010                   2424                    4
R13    48                     1156                   2278                    10
R14    46                     1166                   2268                    20
R15    33                     1174                   2260                    30
R16    27                     1179                   2255                    40
R17    3                      2164                   1270                    50

* indicates the maximum number of iterations.

We applied different values of m, and a total of 17 runs were performed. For further study we selected the clusters that reached the maximum number of iterations (100): R2C1, R2C2, R3C1, R3C2, R4C1, R4C2, R5C1, and R5C2. Table V shows the patterns obtained for these runs of the Fuzzy C-Means algorithm.

Table V. Patterns obtained for different runs of Fuzzy C-Means

Feature                                     R2C1    R2C2    R3C1    R3C2    R4C1    R4C2    R5C1    R5C2
Loan amount from SHG in Rs                  19446   32785   35241   18013   46208   12748   11906   45325
Interest rate (%)                           14      14      14      14      14      14      14      14
Loan period in months                       11      11      11      11      11      11      11      11
Loan repayment per month in Rs              288     310     309     286     323     278     276     324
Balance loan amount in the book in Rs       4575    7298    8103    4217    10295   3187    3048    9947
Loans taken from other sources              1       1       1       1       1       1       1       1
Amount taken in Rs                          3440    91490   71201   2719    27238   3236    3285    22887
Interest rate (%)                           5       13      12      5       8       4       4       8
Economic benefits gained                    1402    1599    1682    1364    2170    1152    1104    2191
Savings per month in Rs                     36      78      68      36      56      33      33      54
Assets increased after joining SHG          7       8       8       7       8       6       6       10
Savings per month outside the group in Rs   117     226     211     114     184     105     80      179
We took cluster R3C2 (from Table III) and cluster R4C1 (from Table V) for further study, since the SHG loan amount is maximum for these clusters. Analysis of these clusters shows that the majority of SHG loan interest rates lie between 12 and 15%, and the maximum interest rate is 25%. In cluster R3C2 (using K-Means), of all members who have taken loans, 88% took loans from a bank, 9% from a society and only 3% depend on money lenders. In R3C2 the highest loan amount is Rs 15,000, and it was taken from a bank. This group is found to have higher savings and deposits, and its balance loan amount is nominal. In cluster R4C1 (using Fuzzy C-Means), of all members who have taken loans, 77% took loans from a bank, 10% from a society and 11% from money lenders. In R4C1 the highest loan amount is Rs 3,00,000, and it was taken from a bank. But in R3C1 (from Table III, using K-Means) 47% of members have taken loans from a bank, 8% from a society and 27% from money lenders, and this group shows less savings compared to R3C2. Analysis of R4C2 (from Table V, using Fuzzy C-Means) shows that 48% of members have taken loans from a bank, 9% from a society and 29.8% from money lenders. Hence studies using both algorithms explain the same facts, and the results are almost the same. This shows that bank linkage is most important for the smooth functioning of SHGs. Bank linkage can be made more effective by:
1. Providing financial counseling services through face-to-face interaction.
2. Educating people in rural and urban areas about the various financial products available from the financial sector.
3. Making SHG members aware of the advantages of being connected with the formal financial sector.

The study reveals that the selected members in the SHG clusters took loans from the following financial institutions; the range of interest rates is shown in Table VI.
Table VI. Financial institutions and the range of interest

No.   Financial institution     Range of interest
1     Bank loan                 10%-16%
2     Society                   12%-18%
3     Money lenders (blade)     12%-40%-60%
4     Other SHG groups          12%
5     Friends                   0-15%
The rate of interest from banks is low compared to the interest charged by money lenders, so it is necessary that the bank linkage [26-27] with SHG members be made effective, so that members can gain maximum benefit. A majority of members are not availing loan facilities; this may be due to a lack of awareness about the different types of loans available from standard financial institutions, or because the rules and regulations for getting loans are too difficult. This indicates that banks and standard financial institutions should take the necessary steps to provide more loans to SHG members. Our study reveals that, of the total members availing loans, 56% took loans from banks, 22% from money lenders, 8% from societies and 13% from other SHG groups.
For ensuring the long-term sustainability of SHGs, it is suggested that resource centers be set up in different parts of the country. The SHG-Bank Linkage Programme is now more than 20 years old. To achieve effective linkages with various financial institutions, resource centers can play a vital role. Resource centers can be set up by various stakeholders such as NGOs, banks, government departments and NABARD at the state/district level to play an important role in preparing training modules, developing a cadre of trainers, conducting field studies and promoting interface between SHG members and service providers. The specific role of resource centers could be to:
• Work towards a comprehensive capacity building of SHGs,
• Share innovative ideas and models that can be replicated elsewhere,
• Enhance functional literacy among SHG members,
• Support livelihood interventions among SHG members,
• Facilitate availability of all services to SHG members under one roof.
4.1 SHG Loan interest in each district
SHG members are taking loans from the SHGs at the following rates of interest. Wayanad district offers the minimum rate of interest, followed by Malappuram, Calicut and Trichur. The district-wise information on the minimum and maximum rates of interest charged, and on the different sources of loans, is shown in Tables VII and VIII respectively.
Table VII. District-wise minimum and maximum rates of interest

District      Minimum rate of interest   Maximum rate of interest
Kannur        12%                        24%
Calicut       11%                        24%
Malappuram    11%                        12%
Palakkad      15%                        24%
Wayanad       9%                         18%
Trichur       11%                        24%
Kottayam      12%                        24%
Alleppy       12%                        24%
Trivandrum    12%                        25%
Table VIII. District-wise information on different sources of loans

District      No. of loans taken   Bank loan   Society loan   Blade (money lenders)   Other SHG groups   Friends
Kannur        335                  117         9              38                      62                 0
Calicut       179                  42          6              8                       0                  0
Malappuram    215                  85          4              23                      29                 1
Palakkad      148                  20          4              8                       31                 0
Wayanad       232                  128*        2              29                      1                  1
Trichur       177                  27          24             3                       22                 2
Kottayam      175                  69          2              19                      0                  0
Alleppy       288                  42          14             48                      0                  0
Trivandrum    552                  110         27             74                      2                  0
District-wise analysis of the results shows that the maximum number of bank loans is availed by the SHG groups of Wayanad district, while the maximum number of society loans is availed by the SHG groups of Trivandrum and Trichur.
4.2 Relationship between Education of SHG members and Savings
In India the majority of SHG members are illiterate and do not have access to formal education, though this is not true in the case of Kerala state, where formal education is received by almost all individuals. The handicap of illiteracy is a hurdle to achieving many desired results. For example, illiterate members are unable to follow the accounts maintained by the group, and hence remain ignorant of the amounts pooled individually and in the group, and they are unable to draft an application to represent their case. It is therefore essential to provide them education through specially designed distance-education modules that are directly useful to them as SHG members. At this stage they do not need school or university certificates, diplomas or degrees; they need improvement in their professional skills and in solving day-to-day problems in the working and functioning of SHGs. They should be shown the advantages of group-based strategies in poverty alleviation, the importance of savings and of opening a bank account, the marketing of products, timely repayment and repeat loaning. It is important to explain to each member that she is not alone and that such problems are faced universally; only by self-help may they fight against their misfortune and improve the fate of their family and children.

Hence a detailed study was carried out on the role of education and the related savings among SHG members. For this study we took 3434 objects with 3 attributes: educational level, savings/month and savings/month outside the group. The K-Means and Fuzzy C-Means algorithms were applied to this data. The K-Means algorithm was performed for different values of k, and it was found that the best value for k is 2. Fuzzy C-Means was run with different values of m. Tables IX and X show the clusters and patterns obtained by applying K-Means.
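A compact Python sketch of Fuzzy C-Means on a three-attribute matrix of this kind (the member records are synthetic and the `fcm` helper with m = 2 is an illustrative assumption, not the authors' implementation):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, eps=1e-5, seed=0):
    """Fuzzy C-Means: alternate the centre and membership updates
    until the membership matrix changes by less than eps."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships of each item sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]          # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)          # membership update
        if np.abs(U_new - U).max() < eps:
            return U_new, centers
        U = U_new
    return U, centers

# Hypothetical members: (education level, savings/month, savings outside) --
# a low-saving group and a better-educated, high-saving group.
X = np.array([[1.8, 17, 80], [1.8, 20, 75], [1.9, 25, 90],
              [2.1, 230, 520], [2.2, 210, 530], [2.0, 250, 500]], dtype=float)
U, centers = fcm(X)
print(U.argmax(axis=1))   # defuzzified assignment: first three members vs last three
```

Taking the argmax of each membership row "hardens" the fuzzy partition into the two-cluster split reported in the tables.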
Table IX Clusters obtained by applying K-Means in Education and savings
*, © — these symbols indicate the same number of patterns in the clusters. Since the numbers of members are the same in certain clusters, we consider six clusters for our studies.
Table X Patterns obtained for different runs of the K-Means algorithm
Run (Ri) | No. of patterns in cluster I (C1) | No. of patterns in cluster II (C2) | Seed values
R1 | 3091 | 343 | Random seed
R2 | 3355* | 79© | Z=10 and Z=100
R3 | 3355* | 79© | Z=10 and Z=3000
R4 | 79© | 3355* | Z=110 and Z=2000
R5 | 3355* | 79© | Z=300 and Z=3000
R6 | 3355* | 79© | Z=500 and Z=510
R7 | 79© | 3355* | Z=644 and Z=844
R8 | 79© | 3355* | Z=1000 and Z=1500
R9 | 3355* | 79© | Z=1400 and Z=1800
R10 | 79© | 3355* | Z=3000 and Z=3010

Attribute | R1C1 | R1C2 | R3C1 | R3C2 | R4C1 | R4C2
Educational level | 1.81 | 2.081 | 1.8 | 2.1 | 2.1 | 1.83
Savings per month in Rs | 17 | 230 | 27.6 | 527.2 | 527 | 27
Savings per month outside the group in Rs | 80 | 528 | 107 | 873 | 873 | 107
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
371 Vol. 1, Issue 5, pp. 361-374
Tables XI and XII show the clusters and patterns obtained by applying Fuzzy C-Means, respectively.
Table XI Clusters obtained by applying Fuzzy C-Means in Education and savings
*Indicates the maximum number of iterations. We have selected the clusters with the maximum number of iterations (100); the selected clusters are R2C1, R2C2, R3C1, R3C2, R4C1, R4C2, R5C1 and R5C2. The patterns obtained using Fuzzy C-Means are shown in Table XII.
Table XII Patterns obtained for different runs of the Fuzzy C-Means algorithm
Attribute | R2C1 | R2C2 | R3C1 | R3C2 | R4C1 | R4C2 | R5C1 | R5C2
Educational level | 1.810681 | 2.053234 | 2.037019 | 1.804108 | 1.798286 | 2.023985 | 2.014288 | 1.792751
Savings per month in Rs | 17.03678 | 172.2002 | 149.3767 | 15.28171 | 13.59459 | 135.1113 | 125.8471 | 12.12893
Savings per month outside the group in Rs | 71.65053 | 462.8756 | 428.3122 | 64.67036 | 59.0495 | 404.3528 | 386.7046 | 54.76946
The analysis clearly depicts that there is a relationship between the educational level of SHG members and their savings. The tables show that where the educational level is high, savings per month within the group and outside the group are also high.
R1C2 has the maximum savings across the 17 runs of the Fuzzy C-Means algorithm. Since R1C2 shows maximum savings and maximum savings outside the group, we have taken the clusters R1C1 and R1C2, which together cover the total domain, for further studies. The study reveals that the educational level of members is high in cluster R1C2. This shows that there is a relationship between education and savings: members with higher education show higher savings. Figure 1 shows the relationship between the percentage of members in the clusters and educational level.
Different runs | No. of iterations | No. of members in cluster I | No. of members in cluster II | m
R1 | 60 | 376 | 3058 | 1.25
R2 | 100* | 3025 | 409 | 1.5
R3 | 100* | 516 | 2918 | 1.75
R4 | 100* | 2907 | 527 | 2
R5 | 100* | 637 | 2797 | 2.25
R6 | 95 | 2771 | 663 | 2.5
R7 | 89 | 717 | 2717 | 2.75
R8 | 100 | 2714 | 720 | 3
R9 | 100 | 727 | 2707 | 3.25
R10 | 98 | 2702 | 732 | 3.5
R11 | 81 | 733 | 2701 | 3.75
R12 | 71 | 2697 | 737 | 4
R13 | 48 | 945 | 2489 | 10
R14 | 41 | 2481 | 953 | 20
R15 | 3 | 954 | 2480 | 30
R16 | 1 | 2326 | 1108 | 40
R17 | 60 | 1114 | 2320 | 50
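The Fuzzy C-Means runs with varying fuzzifier m (Table XI) can be sketched as follows. This is an illustrative implementation on synthetic data, not the study's code; the `fcm` helper and all parameter values are assumptions for the example.

```python
import random

def fcm(points, c, m, iters=100, eps=1e-5):
    """Fuzzy C-Means: returns the membership matrix U and the centroids."""
    dim, n = len(points[0]), len(points)
    U = []
    for _ in range(n):                       # random memberships, rows sum to 1
        row = [random.random() for _ in range(c)]
        s = sum(row)
        U.append([u / s for u in row])
    for _ in range(iters):
        cents = []
        for j in range(c):                   # centroid = mean weighted by u^m
            w = [U[i][j] ** m for i in range(n)]
            tw = sum(w)
            cents.append([sum(w[i] * points[i][d] for i in range(n)) / tw
                          for d in range(dim)])
        newU = []
        for i in range(n):                   # membership update from distances
            dist = [max(sum((points[i][d] - cents[j][d]) ** 2
                            for d in range(dim)) ** 0.5, 1e-12)
                    for j in range(c)]
            newU.append([1.0 / sum((dist[j] / dist[k]) ** (2.0 / (m - 1.0))
                                   for k in range(c)) for j in range(c)])
        shift = max(abs(newU[i][j] - U[i][j])
                    for i in range(n) for j in range(c))
        U = newU
        if shift < eps:                      # stop when memberships stabilise
            break
    return U, cents

random.seed(1)
# Synthetic (educational level, saving/month, saving/month outside) records
data = [(random.choice([1, 2, 3]), random.uniform(0, 200), random.uniform(0, 500))
        for _ in range(100)]
U, cents = fcm(data, c=2, m=2.0)             # the paper varies m from 1.25 to 50
labels = [row.index(max(row)) for row in U]  # defuzzified cluster assignment
```

Larger m makes memberships fuzzier, which is consistent with Table XI: as m grows, the two cluster sizes move closer together.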
Figure 1 Educational level Vs. % of members in clusters

Further analysis of clusters R1C2, R3C2 and R4C1 (from Table X) reveals that the majority of members are at the plus-2 educational level. Figure 2 shows the relationship between educational level and % of members. District-wise analysis revealed that the SHG members with minimum school education are from Wayanad and Alleppy.

Figure 2 Educational level Vs % of members

It is therefore necessary that the Government take effective measures to enroll the members of SHGs in schemes like Open Schooling. It is observed that open education at present mainly caters to the needs of elites in urban areas, and it has to make inroads into the rural areas where India lives. Policy planners must think of integrating economic benefits with education. Economic incentives and effective NGO participation will definitely turn women's empowerment from the distant dream it is at present into a reality.

V. CONCLUSIONS

Surveys were carried out among 3500 SHG members across 9 districts in Kerala with 51 attributes. For this study we have selected 3434 valid records with 12 attributes: loan amount from the SHG, interest rate, loan period, loan repayment, balance loan in the book, loan taken from other sources, amount taken, interest rate, economic benefit gained, savings, assets increased after joining the SHG, and savings outside the group. Data analysis was carried out using the K-Means and Fuzzy C-Means algorithms. Both algorithms explain the same facts and the results are almost the same.

This study reveals that the average rate of interest on SHG loans from various government agencies ranges from 12 to 15%, while the rate of interest from money lenders ranges from 25-40%. Of the total members availing loans, 56% is taken from the bank and 22% from money lenders, 8% from
society and 13% from other SHG groups. But the majority of members are not availing loan facilities; this may be due to a lack of awareness about the different types of loans available from standard financial institutions, or because the rules and regulations for getting loans are too difficult. This can be rectified to a great extent by providing a financial counseling service through face-to-face interaction, to educate people in rural and urban areas about the various financial products available from the financial sector, and to make SHG members aware of the advantages of being connected with the formal financial sector. District-wise studies on the rate of interest of SHG loans showed that the minimum rate of interest was given in Wayanad district, followed by Mallapuram, Calicut and Thrichur. Maximum bank loans are availed by the SHG members of Wayanad for agricultural purposes. The study of the relationship between education and savings among SHG members shows that members with higher education show increased saving habits: 97% of members are literate, and of the total members 68.2% are high-school educated. It is therefore essential to provide them technical education and financial literacy through specially designed modules delivered through workshops, seminars, open schooling and distance education that are directly useful to a member of an SHG. Only by self-help and training may they fight against their misfortune and improve the fate of their family and children. Other parameters, such as loan metrics and socio-economic factors, may be considered for further research.
REFERENCES
[1]. Hand, D., Mannila, H., Smyth, P. (2001). "Principles of Data Mining", MIT Press, Cambridge, MA.
[2]. Fayyad, U. M., Piatetsky-Shapiro, G., Smyth, P., & Uthurusamy, R. (Eds.) (1996). "Advances in Knowledge Discovery and Data Mining", AAAI/MIT Press, Boston.
[3]. Fayyad, U., Grinstein, G. G., Wierse, A. (2002). "Information Visualization in Data Mining and Knowledge Discovery", Morgan Kaufmann, San Diego, CA.
[4]. Kusiak, A., Smith, M. (2007). "Data mining in design of products and production systems", Annual Reviews in Control, 31, pp. 147-156.
[5]. Witten, I. H., & Frank, E. (2005). "Data Mining: Practical Machine Learning Tools and Techniques", Elsevier, New York.
[6]. Rundensteiner, E. (Ed.) (1999). "Special Issue on Data Transformation", IEEE Techn. Bull. Data Engineering, 22(1).
[7]. Parent, C., Spaccapietra, S. (1998). "Issues and Approaches of Database Integration", Comm. ACM, 41(5), pp. 166-178.
[8]. Jiamthapthaksin, R., Eick, C. F., Rinsurongkawong, V. (2009). "An Architecture and Algorithms for Multi-Run Clustering", IEEE, 978-1-4244-2765-9/09.
[9]. Chakrabarti, R. (2004). "The Indian Microfinance Experience – Accomplishments and Challenges", www.microfinacegateway.org.
[10]. Wilson, K. (2002). "The new microfinance: an essay on the self-help group movement in India", Journal of Microfinance, Vol. 4, No. 2, pp. 21-245.
[11]. Bansal, H. (1998). "Self-Help Group–NGO–Bank Linkage Programmes in India: A Case Study", M.S. University of Baroda (www.prism.gatech.edu).
[12]. Wilson, K. (2002). "The Role of Self Help Group–Bank Linkage Programme in Preventing Rural Emergencies in India", NABARD (www.nabard.org).
[13]. Puhazhendhi, V., & Badaty, K. C. (2002). "SHG–Bank Linkage Programme: An Impact Evaluation", NABARD (www.nabard.org).
[14]. Narayana Swamy, B., Narayana Gowda, K., & Nagaraj, G. N. (2007). "Performance of Self Help Groups of Karnataka in Farm Activities", Karnataka J. Agric. Sci., 20(1), pp. 85-88.
[15]. Pena, J., Lozano, J., & Larranaga, P. (1999). "An Empirical Comparison of Four Initialization Methods for the K-Means Algorithm", Pattern Recognition Letters, Vol. 20, No. 10, pp. 1027-10.
[16]. Yedla, M., Pathkota, S. R., Srinivasa, T. M. (2010). "Enhancing K-means Clustering Algorithm with Improved Initial Center", International Journal of Computer Science and Information Technologies, Vol. 1, pp. 121-125.
[17]. Ohn Mar San, Van-Nam Huynh, Voshiteru Nakamori (2004). "An alternative extension of the K-Means algorithm for clustering categorical data", Int. J. Appl. Math. Computer Science, Vol. 14, No. 2, pp. 241-247.
[18]. MacQueen, J. (1967). "Some Methods for Classification and Analysis of Multivariate Observations", Proc. Fifth Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, pp. 281-297.
[19]. Yang, M. S., Ko, C. H. (1997). "On cluster-wise fuzzy regression analysis", IEEE Trans. Systems, Man, Cybern., 27, pp. 1-13.
[20]. Wu, K.-L., Yang, M.-S. (2002). "Alternative C-Means clustering algorithms", Pattern Recognition, 35, pp. 2267-2278.
[21]. Galhardas, H., Florescu, D., Shasha, D., Simon, E., Saita, C. (2001). "Declarative Data Cleaning: Language, Model, and Algorithms", Proc. 2001 Very Large Data Bases (VLDB) Conf.
[22]. Hernandez, M., Stolfo, S. (1995). "The Merge/Purge Problem for Large Databases", Proc. ACM SIGMOD Int'l Conf. Management of Data, pp. 127-138.
[23]. Lee, M. L., Ling, T. W., Low, W. L. (2000). "IntelliClean: A Knowledge-Based Intelligent Data Cleaner", Proc. Sixth ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining.
[24]. Yu, L., Liu, H. (2003). "Feature selection for high-dimensional data: a fast correlation-based filter solution", Proc. 20th International Conference on Machine Learning, Washington DC.
[25]. Kohavi, R., John, G. (1997). "Wrappers for feature subset selection", Artificial Intelligence, Vol. (1-2), pp. 273-324.
[26]. Kumar, B. (2005). "Impact of Microfinance through SHG-Bank Linkage in India: A Micro Study", Vilakshan, XIMB Journal of Management, July 9.
[27]. Mahendra Varman, P. (2005). "Impact of Self-Help Groups on Formal Banking Habits", Economic and Political Weekly, April 23, pp. 1705-13.
AUTHOR BIOGRAPHIES
SAJEEV B.U is pursuing a Ph.D. in Computer Science and Engineering under the guidance of Dr. Thangavel K, from the Center for Research and Development, PRIST University, Thanjavoor, Tamil Nadu, India. He received his Master's degree in Mathematics from Calicut University in 1984, M.C.A. from Mahatma Gandhi University, and M.Tech in Computer Science from Allahabad Agricultural Institute (Deemed University), Allahabad in 2006. Currently he is working as HOD, Department of Computer Applications at KVM, CE & IT, Cherthala, Kerala, India. His research interests include Data Mining, Clustering and Pattern Recognition.
THANGAVEL KUTTIANNAN received the Master of Science from Department of Mathematics, Bharathidasan University in 1986, and Master of Computer Applications Degree from Madurai Kamaraj University, India in 2001. He obtained his Ph. D. Degree from the Department of Mathematics, Gandhigram Rural University in 1999. He worked as Reader in the Department of Mathematics, Gandhigram Rural University, up to 2006. Currently he is working as Professor and Head, Department of Computer Science, Periyar University, Salem, Tamilnadu, India. His areas of interest include medical image processing, artificial intelligence, neural network, fuzzy logic, data mining, pattern recognition and mobile computing. He is the recipient of Tamilnadu Scientist Award for the year 2009.
CASCADED HYBRID FIVE-LEVEL INVERTER WITH DUAL
CARRIER PWM CONTROL SCHEME FOR PV SYSTEM
R. Seyezhai Associate Professor, Department of EEE, SSN College of Engineering, Kalavakkam
ABSTRACT
Cascaded Hybrid MultiLevel Inverter (CHMLI) is an attractive topology for high voltage DC-AC conversion.
This paper focuses on a single-phase five-level inverter with reduced number of switches. The inverter consists
of a full bridge inverter and an auxiliary circuit with four diodes and a switch. The inverter produces output
voltage in five levels: zero, +0.5Vdc, +Vdc, -0.5Vdc and -Vdc. A novel dual carrier modulation technique has
been proposed for the CHMLI. The dual carrier modulation technique uses two identical inverted sine carrier
signals, each with amplitude exactly half of the amplitude of the sinusoidal reference signal, to generate PWM
signals for the switches. Using the Perturb and Observe (P&O) algorithm, the Maximum Power Point (MPP) has been tracked
for the PV inverter. A Proportional Integral (PI) control algorithm is implemented to improve the dynamic response of the
inverter. Performance evaluation of the proposed PWM strategy for Multilevel Inverter (MLI) has been carried
out using MATLAB and it is observed that it gives reduced Total Harmonic Distortion (THD). An experimental
five-level hybrid inverter test rig has been built to implement the proposed algorithm. Gating signals are
generated using PIC microcontroller. The performance of the inverter has been analyzed and compared with
the result obtained from theory and simulation.
KEYWORDS: Multilevel inverter, dual carrier modulation, PI, PV and switching losses
I. INTRODUCTION
Due to the depletion of fossil energy and the environmental issues caused by conventional power
generation, renewable energy sources such as wind and solar have been widely used for a few decades. PV
sources are used today in many applications, as they have the advantage of being maintenance- and
pollution-free and are distributed throughout the earth. Solar electric energy demand has grown consistently by
20% - 25% per annum over the past 20 years, mainly due to decreasing costs and prices.
The PV inverter, which is the heart of a PV system, is used to convert the DC power obtained from PV modules into AC power to be fed into the load. In recent years, multilevel inverters have been of special
interest in the distributed energy sources area because several batteries, fuel cell, solar cell and wind
turbine can be connected through multilevel inverter to feed a load without voltage balance problems.
There are several topologies of multilevel inverter but the one considered in this paper is the hybrid
multilevel full-bridge five-level inverter employing reduced number of switches [1]. A five-level
inverter is employed as it provides improved output waveforms, smaller filter size, reduced EMI and
lower THD compared to the three-level PWM inverter.
This paper presents a single-phase five-level PV inverter which consists of a DC-DC boost converter
connected to two capacitors in series, a full-bridge inverter and an auxiliary circuit with four diodes
and a switch, as shown in Fig.1. This paper employs a dual carrier modulation technique to generate
PWM signals for the switches and to produce five output voltage levels: zero, +0.5Vdc, +Vdc, -0.5Vdc
and -Vdc, where Vdc is the supply voltage [2]. As the number of output levels increases, the harmonic
content can be reduced. The modulation technique uses two identical inverted sine carrier signals, each
with amplitude exactly half the amplitude of the sinusoidal reference signal.
Sinusoidal PWM is obtained by comparing a high-frequency carrier with a low-frequency sinusoidal
reference signal [3]. In this paper, dual carrier modulation is employed, which consists of two
carrier signals Vcarrier1 and Vcarrier2 that take turns being compared with the sinusoidal reference
signal Vref to produce the switching signals. Since the inverter is used in a PV system, a proportional-
integral (PI) controller scheme is employed to keep the output current sinusoidal and to obtain better dynamic performance.
II. CASCADED FIVE LEVEL INVERTER
The basic operational principle of the five-level cascaded multilevel inverter is to generate a five-level
output voltage, i.e. zero, +0.5Vdc, +Vdc, -0.5Vdc and -Vdc, where Vdc is the supply voltage. The auxiliary
circuit, which consists of four diodes and switch S1, is placed between the DC-bus capacitor and the full-
bridge inverter. Proper switching of the auxiliary circuit generates the half levels of the supply
voltage. The full-bridge inverter configuration together with the auxiliary circuit is shown in Fig.1.
Table I illustrates the output level Vinv for the on and off states of switches S1-S5.
Fig.1.Full-bridge inverter configuration together with an auxiliary circuit
The circuit operation is explained as follows: The switches S1 ,S2 and S3 will be switching at the
rate of the carrier signal frequency while S4 and S5 will operate at a frequency equivalent to the
fundamental frequency. The circuit operation is divided into four modes:
Mode 1: Switches S1 and S5 conduct and the diodes D1 and D4 are forward biased. The output voltage equals +0.5Vdc.
Mode 2: Switches S2 and S5 conduct. The output voltage equals +Vdc.
Mode 3: Switches S1 and S4 conduct and the diodes D2 and D3 are forward biased. The output voltage equals -0.5Vdc.
Mode 4: Switches S3 and S4 conduct. The output voltage equals -Vdc.
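The four modes above, plus the zero state, amount to a lookup from the desired output level to switch states. A minimal sketch (the dictionary encoding is illustrative, following Table I):

```python
# Gate pattern (1 = ON) for each output level in units of Vdc, mirroring the
# four modes and the zero state of the conduction sequence in Table I.
SWITCH_STATES = {
    +0.5: {"S1": 1, "S2": 0, "S3": 0, "S4": 0, "S5": 1},  # Mode 1
    +1.0: {"S1": 0, "S2": 1, "S3": 0, "S4": 0, "S5": 1},  # Mode 2
     0.0: {"S1": 0, "S2": 0, "S3": 1, "S4": 1, "S5": 1},  # zero state
    -0.5: {"S1": 1, "S2": 0, "S3": 0, "S4": 1, "S5": 0},  # Mode 3
    -1.0: {"S1": 0, "S2": 0, "S3": 1, "S4": 1, "S5": 0},  # Mode 4
}

def gates_for_level(level_in_vdc):
    """Return the switch pattern producing the requested output level."""
    return SWITCH_STATES[level_in_vdc]
```

Note how S1 (the auxiliary switch) is ON only for the half levels, which is exactly the auxiliary circuit's role of producing 0.5Vdc.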
Table I: Conduction sequence of switches

S1  | S2  | S3  | S4  | S5  | Vinv
ON  | OFF | OFF | OFF | ON  | +0.5Vdc
OFF | ON  | OFF | OFF | ON  | +Vdc
OFF | OFF | ON  | ON  | ON  | 0
ON  | OFF | OFF | ON  | OFF | -0.5Vdc
OFF | OFF | ON  | ON  | OFF | -Vdc

III. DUAL CARRIER MODULATION OF MLI

There are many control techniques employed for the cascaded five-level inverter [4]. This paper presents
the dual carrier inverted sine modulation technique. The inverted sine PWM has a better spectral
quality and a higher fundamental voltage compared to triangle-based PWM. Two carrier signals
Vcarrier1 and Vcarrier2, each with amplitude exactly half of the amplitude of the sinusoidal reference signal,
are considered as shown in Fig.2. Vcarrier2 is compared with the sinusoidal reference signal, and pulses
are generated whenever the amplitude of the reference signal is greater than the amplitude of the carrier
signal. If Vref exceeds the peak amplitude of Vcarrier2, then Vcarrier1 is compared with Vref.
This leads to the switching pattern shown in Fig.3. The switches S2 and S3 switch at the carrier
signal frequency, while the switches S4 and S5 operate at a frequency equal to the fundamental frequency.
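A sketch of the dual-carrier comparison is given below. The exact carrier construction here (a rectified inverted-sine carrier in the band [0, 0.5] with a second, level-shifted copy in [0.5, 1.0]) is an assumption for illustration; it realizes the "take turns" comparison described in the text in a band-wise fashion.

```python
import math

def five_level_pwm(t, f_ref=50.0, f_carrier=2250.0, m_a=1.0):
    """
    Dual inverted-sine carrier comparison: returns the instantaneous output
    level in units of Vdc (one of -1.0, -0.5, 0.0, +0.5, +1.0).
    """
    v_ref = m_a * math.sin(2 * math.pi * f_ref * t)
    # Inverted (rectified) sine carrier spanning [0, 0.5]; the second carrier
    # occupies the upper band [0.5, 1.0] - each half the reference amplitude.
    v_c2 = 0.5 * (1.0 - abs(math.sin(2 * math.pi * f_carrier * t)))
    v_c1 = v_c2 + 0.5
    mag = abs(v_ref)
    if mag > v_c1:        # reference above the upper carrier: full level
        level = 1.0
    elif mag > v_c2:      # reference inside the lower band: half level
        level = 0.5
    else:
        level = 0.0
    return level if v_ref >= 0 else -level

# One fundamental cycle sampled at 100 kHz
samples = [five_level_pwm(n / 100000.0) for n in range(2000)]
```

Over one fundamental cycle the output visits all five levels, with the dwell time at each level tracking the reference amplitude, which is what yields the staircase-with-PWM waveform of Fig.3.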
Fig.2. Carrier and reference sine waveform for dual carrier modulation technique
(a)PWM switching signals for S1
(b)PWM switching signals for S2
(c)PWM switching signals for S3
(d)PWM switching signals for S4
(e)PWM switching signals for S5
Fig.3.Switching pattern for single phase five level inverter
IV. PV MODELLING
Recently, photovoltaic (PV) systems have been recognized to be at the forefront of renewable electric power
generation. The PV module is the fundamental power conversion unit of a PV generator system.
The output characteristic of a PV module depends on the solar insolation, the cell temperature and the
output voltage of the PV module. Since the PV module has non-linear characteristics, it is necessary to
model it for the design and simulation of Maximum Power Point Tracking (MPPT) for PV system
applications [5,6]. The equivalent circuit of a PV cell is shown in Fig.4. The current source Iph represents the
cell photocurrent. Rsh and Rs are the shunt and series resistances of the cell, respectively. The Simulink
model of the PV module is shown in Fig.5.
Fig.4.Equivalent circuit of PV cell
The current output of the PV module is

Ipv = Np*Iph - Np*Io*[exp(q*(Vpv + Ipv*Rs)/(Ns*A*k*T)) - 1]    (1)

where q is the electron charge, k is the Boltzmann constant, T is the cell temperature in Kelvin, A is the diode ideality factor, and Ns and Np are the numbers of cells in series and parallel, respectively.
Fig.5 Simulink model of PV module
The I-V output characteristic of the PV module at 1000 W/m2 irradiation is shown in Fig.6, and the P-V
characteristic of the PV module at 25°C is shown in Fig.7.
Fig.6.I-V Characteristics of PV module
Fig.7.P-V Characteristics of PV module
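Equation (1) is implicit in Ipv (the unknown appears inside the exponential), so it is usually solved iteratively. A hedged sketch follows; A, Io and Rs are chosen only for illustration and are not the paper's module parameters:

```python
import math

# Physical constants and illustrative module parameters; A, Io and Rs are
# assumptions, while Ns, Np and Iph follow Table II (Iph ~ short-circuit current).
q  = 1.602e-19     # electron charge (C)
k  = 1.381e-23     # Boltzmann constant (J/K)
T  = 298.15        # cell temperature (K)
A  = 1.3           # diode ideality factor (assumed)
Ns, Np = 36, 1     # cells in series / parallel
Iph = 2.55         # photo current (A)
Io  = 1e-7         # diode saturation current (A, assumed)
Rs  = 0.05         # series resistance (ohm, assumed)

def pv_current(Vpv, iters=50):
    """Solve Eq. (1) for Ipv by fixed-point iteration (Ipv is on both sides)."""
    Ipv = Np * Iph
    for _ in range(iters):
        Ipv = Np * Iph - Np * Io * (
            math.exp(q * (Vpv + Ipv * Rs) / (Ns * A * k * T)) - 1.0)
    return Ipv

# Trace the I-V curve from 0 V toward open circuit (0.1 V steps)
curve = [(v / 10.0, pv_current(v / 10.0)) for v in range(211)]
```

The resulting curve reproduces the familiar shape of Fig.6: current nearly constant at low voltage, then collapsing sharply near the open-circuit voltage.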
As the irradiance level is inconsistent throughout the day, the amount of electric power generated by
the solar module is always changing with weather conditions. To overcome this problem, a Maximum
Power Point Tracking (MPPT) algorithm is used [7]. It tracks the operating point on the I-V curve at
which the output power is maximum. Therefore, the MPPT algorithm ensures that maximum power is delivered from
the solar modules under any particular weather condition. In the proposed inverter, the Perturb & Observe
(P&O) algorithm is used to extract maximum power from the modules [8]. The flowchart for MPPT
is shown in Fig.8 and the Simulink model for P&O is shown in Fig.9.
Fig.8.Flowchart for Perturb and Observe method
Fig.9.Simulink model for MPPT
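The P&O logic of Fig.8 can be sketched as follows; the `measure_power` callback and the toy P-V curve are stand-ins for the simulated module, not the paper's model:

```python
def perturb_and_observe(measure_power, v_start=12.0, dv=0.2, steps=60):
    """
    Perturb & Observe: nudge the operating voltage, keep the perturbation
    direction while power rises, and reverse it when power falls.
    """
    v, direction = v_start, +1
    p_prev = measure_power(v)
    for _ in range(steps):
        v += direction * dv
        p = measure_power(v)
        if p < p_prev:           # power dropped -> we stepped past the MPP
            direction = -direction
        p_prev = p
    return v

# Toy P-V curve with a single maximum at 16.5 V (roughly the module's Vmp)
def power(v):
    return max(0.0, 37.0 - 0.5 * (v - 16.5) ** 2)

v_mpp = perturb_and_observe(power)
```

The algorithm settles into a small oscillation around the maximum power point, which is the characteristic (and well-known) steady-state behavior of P&O; a smaller `dv` reduces the ripple at the cost of slower tracking.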
V. CONTROLLER DESIGN
The feedback controller used in this algorithm is a PI controller. As shown in Fig.11, for
a grid-connected system, the current injected into the load, also known as the load current Il, is
sensed and fed back to a comparator which compares it with the reference current Iref. Iref is
obtained from the constant m, which is derived from the MPPT algorithm [9]. The instantaneous
current error is fed to a PI controller. The PI controller is tuned using the Ziegler-Nichols tuning
method [10]. Ziegler and Nichols (refer to Fig.10) proposed rules for determining the values of the
proportional gain Kp and integral time Ti based on the transient response characteristics of the
given plant. There are two methods available; the first Ziegler-Nichols method gives, for a PI controller:

Kp = 0.9 (T/L),   Ti = L / 0.3

where T and L are the time constant and delay time of the plant, respectively.
Fig.10 Ziegler- Nichols first method
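A minimal sketch of the tuning rule and the resulting discrete PI loop; the plant values T and L below are illustrative, not the paper's measured reaction curve:

```python
def zn_pi_gains(T_const, L_delay):
    """Ziegler-Nichols first (reaction-curve) method for a PI controller:
    Kp = 0.9*T/L, Ti = L/0.3; Ki = Kp/Ti."""
    Kp = 0.9 * T_const / L_delay
    Ti = L_delay / 0.3
    return Kp, Kp / Ti

class PI:
    """Discrete PI controller: u = Kp*e + Ki * (running integral of e)."""
    def __init__(self, Kp, Ki, dt):
        self.Kp, self.Ki, self.dt = Kp, Ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        return self.Kp * error + self.Ki * self.integral

# Illustrative plant reaction-curve values (assumed): T = 20 ms, L = 2 ms
Kp, Ki = zn_pi_gains(T_const=0.02, L_delay=0.002)
ctrl = PI(Kp, Ki, dt=1e-4)
u = ctrl.update(0.5)             # one control step for a 0.5 A current error
```

The integral term accumulates the residual error, which is what drives the steady-state current error to zero in the paper's current loop.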
Using the Ziegler-Nichols method, the proportional gain Kp is set at 0.89 and the integral gain Ki is set at 88. The
integral term in the PI controller improves the tracking by reducing the instantaneous error between
the reference and the actual current. The resulting error signal u forms the sinusoidal reference signal,
which is compared with the two carrier signals Vcarrier1 and Vcarrier2 to produce the PWM signals for the
inverter switches.
Fig.11.Five level inverter with control algorithm
VI. SIMULATION RESULTS
Simulation was performed using MATLAB/Simulink to verify that the proposed inverter can be
practically implemented in a PV system and to confirm the PWM switching strategy for the five-
level inverter. The scheme consists of two carrier signals and a reference signal; both carrier signals are
compared with the reference signal to produce the PWM switching signals for the switches. The DC-DC
boost converter output waveform and the five-level PV inverter output are shown in Fig.12 and
Fig.13. Table II shows the specifications of the inverter, boost converter and PI controller.
Fig.12.Output voltage ripple waveform of boost converter
Fig.13.Five level output of PV inverter under open-loop condition
The inverter voltage and grid voltage are in phase and this is shown in Fig.14.
Fig.14 Grid voltage and inverter voltage in phase
Fig.15 FFT analysis of load voltage of five level inverter (closed loop, THD = 5.45 %)
Fig.16 FFT analysis of load current of five level inverter (closed loop, THD = 3.46 %)
TABLE II : Specifications of PV Module, Boost Converter, and Inverter
PV MODULE
Rated Power : 37.08 W
Voltage at Maximum Power(Vmp) :16.56 V
Current at Maximum Power(Imp) : 2.25 A
Open circuit voltage(Voc) : 21.24 V
Short circuit current (Isc) : 2.55 A
Total number of cells in series(Ns) : 36
Total number of cells in parallel(Np) : 1
MULTI-LEVEL INVERTER
C1-C2 : 1000 uF
Switching frequency : 2250 Hz
Fig.17 THD Vs ma graph for conventional SPWM & Dual carrier PWM Technique
VII. EXPERIMENTAL RESULTS
To experimentally validate the hybrid cascaded MLI using the proposed modulation, a prototype five-
level inverter has been built using FGA25N120 Si IGBT for the full bridge inverter as shown in Fig.1.
The gating signals are generated using PIC18F4550 microcontroller. The hardware implementation of
hybrid MLI is shown in Fig.18.
Fig.18 Photograph for hardware implementation of Hybrid MLI
The experimental load voltage of the five-level inverter for an R-load (R = 30 ohms) is shown in Fig.19.
Fig .19 Five-level voltage of hybrid MLI
VIII. CONCLUSION
This paper has presented a single phase multilevel inverter for PV application. A dual carrier
modulation technique has been proposed for the multilevel inverter. The circuit topology, modulation
strategy and the operating principle of the proposed inverter have been analyzed. It is found that dual
carrier modulation gives a reduced THD compared to dual reference modulation as reported in the
literature. The inverter has been simulated using PV as a source. Using P&O algorithm, maximum
power point has been tracked. A PI current control algorithm is implemented to optimize the
performance of the inverter. The proposed strategy has been verified through MATLAB simulation.
By employing this technique, the Total Harmonic Distortion is reduced.
ACKNOWLEDGEMENT
The author wishes to express her gratitude to the management of SSN Institutions, Chennai, India for
providing the laboratory and computational facilities to carry out this work.
REFERENCES
[1]. J. Selvaraj, N. A. Rahim, “Multilevel Inverter for Grid- Connected PV System Employing Digital PI
Controller,”IEEE Trans. on Industrial Electronics, vol.56,issue.1,2009,pp.149-158.
[2]. R. Seyezhai, M. Dhasna, R. Anitha, "Design and Simulation of Dual Carrier Modulation Technique for
Five Level Inverter", International Journal of Power System Operation and Energy Management,
Vol. 1, Issue 2, July 2011, pp. 88-93.
[3]. Muhammad H. Rashid, Power electronics: Circuits, Devices, and Applications, 3rd ed. Pearson
Prentice Hall ,2004.
[4]. M. Calais, L. J. Borle, V. G. Agelidis, “Analysis of Multicarrier PWM Methods for a Single-Phase
Five-Level Inverter,” IEEE Power Electronics Specialists Conference, 2001,Vol.3, pp.1351-356.
[5]. H. Altas and A.M. Sharaf, “A Photovoltaic Array Simulation Model for Matlab-Simulink GUI
Environment,” IEEE, Clean Electrical Power, International Conference on Clean Electrical Power
(ICCEP‘07), June14-16,2007,Ischia,Italy.
[6]. Cameron, Christopher P.; Boyson, William E.; Riley Daniel M.;” Comparison of PV system
performance model predictions with measured PV system performance” IEEE Photovoltaic Specialists
Conference, 2008, pp. 1-6.
[7]. Chee Wei Tan, Green, T. C., Hernandez-Aramburo, C. A., "Analysis of perturb and observe maximum power
point tracking algorithm for photovoltaic applications", Power and Energy Conference, 2008,
PECon 2008, pp. 237-242.
[8]. Villalva, M.G.; Ruppert F, E.; “Analysis and simulation of the P&O MPPT algorithm using a
linearized PV array model”IEEE Conference on Industrial Electronics, 2009. IECON '09. , pp: 231 –
236.
[9]. Park S. J., Kang F. S., Lee M. H. and Kim C. U., 2003. A New Single-Phase Five-Level PWM Inverter
Employing a Deadbeat Control Scheme. IEEE Transactions on Power Electronics, 18 (18), 831-843.
[10]. Elena Villanueva, Pablo Correa, José Rodríguez and Mario Pacas, “Control of a Single-Phase
Cascaded H-Bridge Multilevel Inverter for Grid-Connected Photovoltaic Systems”, IEEE Transactions
on Industrial Electronics, Vol. 56, No: 11, 2009.
BIOGRAPHY
R. Seyezhai obtained her B.E. in Electronics & Communication Engineering from Noorul Islam
College of Engineering, Nagercoil, in 1996, her M.E. in Power Electronics & Drives from
Shanmugha College of Engineering, Thanjavur, in 1998, and her Ph.D. from Anna University, Chennai.
She has been teaching for about 13 years and has published several papers in international
journals and international conferences in the area of power electronics and drives.
Her areas of interest include SiC power devices, multilevel inverters, modeling of fuel cells,
design of interleaved boost converters, multiport DC-DC converters and control techniques for
DC-DC converters.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
387 Vol. 1, Issue 5, pp. 387-393
A REVIEW ON: DYNAMIC LINK BASED RANKING
D. Nagamalleswary, A. Ramana Lakshmi
Department of Computer Science & Engineering, PVP Siddhartha Institute of Technology
Vijayawada, Andhra Pradesh, India
ABSTRACT
Dynamic authority-based ranking methods, such as personalized PageRank and ObjectRank, rank nodes in a data
graph dynamically using an expensive matrix-multiplication method, so their online execution time rapidly
increases as the size of the data graph grows. ObjectRank spends 20-40 seconds to compute query-specific
relevance scores, which is unacceptable for interactive search. We review BinRank, a novel approach that
approximates dynamic link-based ranking scores efficiently. BinRank partitions a dictionary into bins of
relevant keywords and then constructs a materialized subgraph (MSG) per bin in a preprocessing stage. At query
time, to produce highly accurate top-k results efficiently, BinRank uses the MSG corresponding to the given
keyword instead of the original data graph. The BinRank system thus employs a hybrid approach in which query
time can be traded off for preprocessing time and storage: it closely approximates ObjectRank scores by running
the same ObjectRank algorithm on a small subgraph, instead of the full data graph.
KEYWORDS: Online keyword search, ObjectRank, scalability, approximation algorithms
I. INTRODUCTION
The PageRank algorithm [1] utilizes the Web graph link structure to assign global importance to Web
pages. It works by modeling the behavior of a “random Web surfer” who starts at a random Web page
and follows outgoing links with uniform probability. The PageRank score is independent of a
keyword query. Recently, dynamic versions of the PageRank algorithm have become popular. They are
characterized by a query-specific choice of the random walk starting points.
In particular, two algorithms have received a lot of attention: Personalized PageRank (PPR) for Web graph
data sets [2], [3], [4], [5] and ObjectRank for graph-modeled databases [6], [7], [8], [9], [10]. PPR is a
modification of PageRank that personalizes search based on a preference set containing Web
pages that a user likes. For a given preference set, PPR performs a very expensive fixpoint iterative
computation over the entire Web graph while it generates personalized search results [3], [4], [5].
Therefore, the issue of scalability of PPR has attracted a lot of attention. ObjectRank [6] extends
(personalized) PageRank to perform keyword search in databases. ObjectRank uses a query term
posting list as a set of random walk starting points and conducts the walk on the instance graph of the
database. The resulting system is well suited for “high recall” search, which exploits different
semantic connection paths between objects in highly heterogeneous data sets.
ObjectRank has successfully been applied to databases that have social networking components, such
as bibliographic data and collaborative product design. However, ObjectRank suffers from the same
scalability issues as personalized PageRank, as it requires multiple iterations over all nodes and links
of the entire database graph. The original ObjectRank system has two modes: online and offline. The
online mode runs the ranking algorithm once the query is received, which takes too long on large
graphs. For example, on a graph of articles of the English Wikipedia with 3.2 million nodes and 109
million links, even a fully optimized in-memory implementation of ObjectRank takes 20-50 seconds
to run. In the offline mode, ObjectRank precomputes top-k results for a query workload in advance.
This precomputation is very expensive and requires a lot of storage space for precomputed results.
Moreover, this approach is not feasible for all terms outside the query workload that a user may
search for, i.e., for all terms in the data set dictionary. For example, on the same Wikipedia data set,
the full dictionary precomputation would take about a CPU-year.
II. EXISTING SYSTEM
• The PageRank algorithm utilizes the Web graph link structure to assign global importance to Web
pages. It works by modeling the behavior of a “random Web surfer” who starts at a random
Web page and follows outgoing links with uniform probability.
• The PageRank score is independent of a keyword query.
• Personalized PageRank (PPR) for Web graph data sets and ObjectRank for graph-modeled
databases perform an expensive iterative computation while generating query-specific
results. Therefore, the issue of scalability of PPR has attracted a lot of attention.
• ObjectRank extends (personalized) PageRank to perform keyword search in databases.
ObjectRank uses a query term posting list as a set of random walk starting points and
conducts the walk on the instance graph of the database.
III. PROPOSED SYSTEM
• In this project, we present a BinRank system that employs a hybrid approach in which query time
can be traded off for preprocessing time and storage. BinRank closely approximates ObjectRank
scores by running the same ObjectRank algorithm on a small subgraph, instead of the full
data graph.
• BinRank query execution easily scales to large clusters by distributing the subgraphs between
the nodes of the cluster.
• We propose the BinRank algorithm to trade preprocessing time for search time. It addresses
the time-consuming step of query execution: query time is reduced through cached subgraph
storage and a redundant-query handling method.
IV. BIN CONSTRUCTION
As outlined above, we construct a set of MSGs for terms of a dictionary or a workload by partitioning
the terms into a set of term bins based on their co-occurrence. We generate an MSG for every bin
based on the intuition that a subgraph that contains all objects and links relevant to a set of related
terms should have all the information needed to rank objects with respect to any one of these terms.
There are two main goals in constructing term bins. The first is controlling the size of each bin to ensure
that the resulting subgraph is small enough for ObjectRank to execute in a reasonable amount of
time. The second is minimizing the number of bins to save preprocessing time; after all, we know that
precomputing ObjectRank for all terms in our corpus is not feasible. To achieve the first goal, we
introduce a maxBinSize parameter that limits the size of the union of the posting lists of the terms in
the bin, called the bin size. As discussed above, ObjectRank uses a convergence threshold that is
inversely proportional to the size of the base set, i.e., the bin size in the case of subgraph construction.
Thus, there is a strong correlation between the bin size and the size of the materialized subgraph. The
value of maxBinSize should be determined by the quality and performance requirements of the system.
The problem of minimizing the number of bins is NP-hard. In fact, if all posting lists are disjoint, this
problem reduces to the classical NP-hard bin packing problem [12]. We apply a greedy algorithm that
picks an unassigned term with the largest posting list to start a bin and then loops, adding the term with the
largest overlap with the documents already in the bin. We use a number of heuristics to minimize the
required number of set intersections, which dominate the complexity of the algorithm. The tight upper
bound on the number of set intersections that our algorithm needs to perform is the number of pairs of
terms that co-occur in at least one document. To speed up the execution of set intersections for larger
posting lists, we use KMV synopses [13] to estimate the size of set intersections.
The algorithm in Fig. 1 works on term posting lists from a text index. As the algorithm fills up a bin,
it maintains a list of document IDs that are already in the bin, and a list of candidate terms that are
known to overlap with the bin (i.e., their posting lists contain at least one document that was already
placed into the bin). The main idea of this greedy algorithm is to pick the candidate term whose posting
list overlaps the most with the documents already in the bin, without the size of the posting list union
exceeding the maximum bin size.
V. ALGORITHM
Input: a set of workload terms P with their posting lists
Output: a set of bins B
(1) while P is not empty do
(2)     create a new empty bin b and an empty cache C of candidate terms
(3)     pick the term t ∈ P with the largest posting list
(4)     while t is not null do
(5)         add t to b and remove it from P
(6)         compute the set T of terms that co-occur with t
(7)         for each t′ ∈ T do insert the mapping <t′, null> into C end for
(8)         bestI := 0; bestT := null
(9)         for each mapping <c, i> ∈ C do
(10)            if i = null then    // |b ∩ c| has not been computed yet
(11)                i := |b ∩ c|; update the mapping <c, i> in C
(12)            end if
(13)            union := |b| + |c| − i
(14)            if union > maxBinSize then remove <c, i> from C
(15)            else if i > bestI then bestI := i; bestT := c
(16)            end if
(17)        end for
(18)        if bestI > 0 then t := bestT
(19)        else pick any t ∈ P with |t| ≤ maxBinSize − |b|; if no such t exists, t := null
(20)        end if
(21)    end while
(22)    add the completed bin b to B
(23) end while
Fig. 1: The bin construction algorithm
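The greedy packing described above can be sketched in Python. This is an illustrative reimplementation, not the authors' code: exact set intersections stand in for the KMV estimates, and the function name and tie-breaking rules are our assumptions.

```python
def pack_terms_into_bins(posting, max_bin_size):
    """Greedy bin construction sketch: posting maps term -> set of doc IDs."""
    unassigned = set(posting)
    bins = []
    while unassigned:
        # Start a new bin with the unassigned term owning the largest posting list.
        t = max(unassigned, key=lambda w: len(posting[w]))
        bin_docs, bin_terms = set(), []
        while t is not None:
            bin_terms.append(t)
            unassigned.discard(t)
            bin_docs |= posting[t]
            # Candidate with the largest overlap whose union still fits the bin.
            best, best_overlap = None, 0
            for c in unassigned:
                overlap = len(bin_docs & posting[c])
                if overlap == 0:
                    continue  # not a co-occurring candidate
                if len(bin_docs) + len(posting[c]) - overlap > max_bin_size:
                    continue  # union would exceed maxBinSize
                if overlap > best_overlap:
                    best, best_overlap = c, overlap
            if best is None:
                # No overlapping candidate: take any remaining term that still fits.
                fits = [c for c in unassigned
                        if len(posting[c]) <= max_bin_size - len(bin_docs)]
                best = max(fits, key=lambda w: len(posting[w])) if fits else None
            t = best
        bins.append(bin_terms)
    return bins
```

Every iteration of the inner loop removes one term from the unassigned set, so the sketch always terminates, and each bin's posting-list union stays within the size limit.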
While it is more efficient to prepare bins for a particular workload that may come from a system
query log, it is dangerous to assume that a query term that has not been seen before will not be seen in
the future. We demonstrate that it is feasible to use the entire data set dictionary as the workload, in
order to be able to answer any query.
Due to the caching of candidate intersection results in C, the upper bound on the number of set
intersections performed by this algorithm is the number of pairs of co-occurring terms in the data set.
Indeed, in the worst case, for every term t that has just been placed into the bin, we need to intersect
the bin with every term t′ that co-occurs with t, in order to check whether t′ is subsumed by the bin
completely and can be placed into the bin “for free.”
For example, consider N terms with posting lists of size X each that all co-occur in one document d0,
with no other co-occurrences. If the maximum bin size is 2(X − 1), a bin will have to be created for every
term. However, to get to that situation, our algorithm will have to check intersections for every pair of
terms. Thus, the upper bound on the number of intersections is tight.
In fact, it is easy to see from the above example that no algorithm that packs the bins based on
maximum overlap can do so with fewer than N(N − 1)/2 set intersections in the worst case.
Fortunately, real-world text databases have structures that are far from the worst case.
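The set intersections that dominate this cost are estimated with KMV synopses [13]. A minimal sketch of such an estimator follows, assuming only that item hashes are roughly uniform in (0, 1); the function names are ours, not from the paper.

```python
import hashlib

def h(x):
    """Hash an item to a pseudo-uniform float in (0, 1]."""
    digest = hashlib.sha1(str(x).encode()).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / 2**64

def kmv_synopsis(items, k):
    """KMV synopsis of a set: its k smallest distinct hash values."""
    return sorted({h(x) for x in items})[:k]

def estimate_intersection(syn_a, syn_b, k):
    """Estimate |A ∩ B| from the KMV synopses of A and B (Beyer et al. style)."""
    union = sorted(set(syn_a) | set(syn_b))[:k]  # synopsis of A ∪ B
    if len(union) < k:
        # Sets are small enough that the synopses are exact.
        return len(set(syn_a) & set(syn_b))
    d_union = (k - 1) / union[-1]                # distinct-count estimate of |A ∪ B|
    both = set(syn_a) & set(syn_b)
    in_both = sum(1 for v in union if v in both) # Jaccard numerator on the union synopsis
    return round(in_both / k * d_union)
```

With synopsis size k, the relative error shrinks roughly as 1/√k, so a few hundred hash values per term suffice for bin packing decisions.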
VI. SYSTEM ARCHITECTURE
Fig. 2: BinRank system architecture
Fig. 2 shows the architecture of the BinRank system. During the preprocessing stage (left side of the
figure), we generate MSGs. During the query processing stage (right side of the figure), we execute the
ObjectRank algorithm on the subgraphs instead of the full graph and produce high-quality
approximations of top-k lists at a small fraction of the cost. In order to save preprocessing cost and
storage, each MSG is designed to answer multiple term queries. We observed in the Wikipedia data
set that a single MSG can be used for 330-2,000 terms, on average.
6.1 Preprocessing
The preprocessing stage of BinRank starts with a set of workload terms W for which MSGs will be
materialized. If an actual query workload is not available, W includes the entire set of terms found in
the corpus. We exclude from W all terms with posting lists longer than a system parameter
maxPostingList. The posting lists of these terms are deemed too large to be packed into bins. We execute
ObjectRank for each such term individually and store the resulting top-k lists. Naturally, maxPostingList
should be tuned so that there are relatively few of these frequent terms. In the case of Wikipedia,
we used maxPostingList = 2,000, and only 381 terms out of about 700,000 had to be precomputed
individually. This process took 4.6 hours on a single CPU.
For each term w ∈ W, BinRank reads a posting list T from the Lucene index and creates a KMV
synopsis of T that is used to estimate set intersections. The bin construction algorithm, PackTermsInto-
Bins, partitions W into a set of bins composed of frequently co-occurring terms. The algorithm takes a
single parameter maxBinSize, which limits the size of a bin posting list, i.e., the union of the posting
lists of all terms in the bin. During bin construction, BinRank stores the bin identifier of each term
in the Lucene index as an additional field. This allows us to map each term to the corresponding bin
and MSG at query time.
The ObjectRank module takes as input a set of bin posting lists B and the entire graph G(V, E), together
with a set of ObjectRank parameters: the damping factor d and the threshold value ε. The threshold
determines the convergence of the algorithm as well as the minimum ObjectRank score of MSG
nodes.
Our ObjectRank implementation stores a graph as a row-compressed adjacency matrix. In this format,
the entire Wikipedia graph consumes 880 MB of storage and can be loaded into main memory for
MSG generation. In case the entire data graph does not fit in main memory, we can apply parallel
PageRank computation techniques such as the hypergraph partitioning schemes described in [14].
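The core iteration described above, with damping factor d, convergence threshold ε, and restarts at the base set, can be sketched as a power iteration. This is an illustrative sketch using a dense NumPy matrix for brevity; the real system stores the graph in row-compressed (CSR) form.

```python
import numpy as np

def object_rank(adj, base_set, d=0.85, eps=1e-8):
    """Power iteration for ObjectRank / personalized PageRank (sketch).

    adj: row-stochastic adjacency matrix, adj[i, j] = probability of the
         random surfer stepping from node i to node j.
    base_set: node indices used as random-walk restart (base set) points.
    Iterates r = d * adj^T r + (1 - d) * q until the L1 change in the
    score vector drops below the convergence threshold eps.
    """
    n = adj.shape[0]
    q = np.zeros(n)
    q[list(base_set)] = 1.0 / len(base_set)  # restart distribution over the base set
    r = q.copy()
    while True:
        r_new = d * adj.T @ r + (1 - d) * q
        if np.abs(r_new - r).sum() < eps:
            return r_new
        r = r_new
```

Running the same iteration on an MSG instead of the full graph is exactly the BinRank approximation: only the matrix shrinks, the algorithm is unchanged.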
6.1.1 Steps
I. User Registration
II. Authentication Module
III. Search - Query Submission
IV. Index Creation
V. BinRank Algorithm Implementation and Graph Based on Rank
6.1.2 Data Flow Diagram
Fig. 3: Data flow processing of BinRank
6.2 Query Processing
For a given keyword query q, the query dispatcher retrieves from the Lucene index the posting list
bs(q) (used as the base set for the ObjectRank execution) and the bin identifier b(q). Given a bin
identifier, the MSG mapper determines whether the corresponding MSG is already in memory. If it is
not, the MSG deserializer reads the MSG representation from disk. The BinRank query processing
module uses all available memory as an LRU cache of MSGs.
For smaller data graphs, it is possible to dramatically reduce MSG storage requirements by storing
only the set of MSG nodes and generating the corresponding set of edges at query time.
However, on our Wikipedia data set that would introduce an additional delay of 1.5-2 seconds, which
is not acceptable in a keyword search system.
The ObjectRank module gets the in-memory instance of the MSG, the base set, and a set of ObjectRank
calibrating parameters: 1) the damping factor d; 2) the convergence threshold ε; and 3) the number of
top-k list entries k. Once the ObjectRank scores are computed and sorted, the resulting document IDs
are used to retrieve and present the top-k objects to the user.
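The LRU caching of MSGs mentioned above can be illustrated with an OrderedDict. This is a hypothetical sketch: the real system evicts by memory footprint rather than entry count, and `load_fn` stands in for the MSG deserializer.

```python
from collections import OrderedDict

class MSGCache:
    """Minimal LRU cache of materialized subgraphs, keyed by bin identifier."""

    def __init__(self, capacity, load_fn):
        self.capacity = capacity      # number of MSGs kept in memory
        self.load_fn = load_fn        # deserializes an MSG from disk by bin id
        self._cache = OrderedDict()

    def get(self, bin_id):
        if bin_id in self._cache:
            self._cache.move_to_end(bin_id)      # mark as most recently used
        else:
            self._cache[bin_id] = self.load_fn(bin_id)
            if len(self._cache) > self.capacity:
                self._cache.popitem(last=False)  # evict the least recently used MSG
        return self._cache[bin_id]
```

Because each term maps to exactly one bin, cache hits serve all queries whose terms share a recently used bin without touching disk.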
VII. CONCLUSION
In this paper, we proposed BinRank as a practical solution for scalable dynamic authority-based
ranking. It is based on partitioning and approximation using a number of materialized subgraphs. We
showed that our tunable system offers a nice trade-off between query time and preprocessing cost.
We introduced a greedy algorithm that groups co-occurring terms into a number of bins for which we
compute materialized subgraphs; note that the number of bins is much smaller than the number of terms.
The materialized subgraphs are computed offline by using ObjectRank itself. The intuition behind the
approach is that a subgraph that contains all objects and links relevant to a set of related terms should
have all the information needed to rank objects with respect to any one of these terms. Our extensive
experimental evaluation confirms this intuition. For future work, we want to study the impact of other
keyword relevance measures besides term co-occurrence, such as thesauri or ontologies, on the
performance of BinRank. By increasing the relevance of keywords in a bin, we expect the quality of
materialized subgraphs, and thus both the top-k quality and the query time, to improve. We also want to
study better solutions for queries whose random surfer starting points are provided by Boolean
conditions. Finally, although our system is tunable, its configuration, ranging
from the number and size of bins to the tuning of the ObjectRank algorithm itself (edge weights and
thresholds), is quite challenging, and a wizard to aid users is desirable.
VIII. FUTURE WORK
To further improve the performance of BinRank, we plan to integrate BinRank and HubRank [8] by
executing HubRank on the MSGs that BinRank generates. Currently, we use the ObjectRank algorithm on
MSGs at query time. Even though HubRank is not as scalable as BinRank, it performs better than
ObjectRank on smaller graphs such as MSGs. In this way, we can leverage the synergy between
BinRank and HubRank.
REFERENCES
[1] J. Cho and U. Schonfeld, “RankMass Crawler: A Crawler with High PageRank Coverage Guarantee,” Proc.
Int’l Conf. Very Large Data Bases (VLDB), 2007.
[2] R. Fagin, R. Kumar, M. Mahdian, D. Sivakumar, and E. Vee, “Comparing and Aggregating Rankings with
Ties,” Proc. ACM PODS, 2004.
[3] H. Hwang, A. Balmin, B. Reinwald, and E. Nijkamp, “BinRank: Scaling Dynamic Authority-Based Search
Using Materialized Subgraphs,” Proc. ICDE, 2009, pp. 66-77.
[4] G. Jeh and J. Widom, “Scaling Personalized Web Search,” Proc. Int’l World Wide Web Conf. (WWW), 2003,
pp. 271-279.
[5] L. Ding, R. Pan, T.W. Finin, A. Joshi, Y. Peng, and P. Kolari, “Finding and Ranking Knowledge on the
Semantic Web,” Proc. Int’l Semantic Web Conf. (ISWC), 2005, pp. 156-170.
[6] A. Hogan, A. Harth, and S. Decker, “ReConRank: A Scalable Ranking Method for Semantic Web Data with
Context,” Proc. Second Int’l Workshop on Scalable Semantic Web Knowledge Base Systems, Athens, GA,
USA, Nov. 2006.
[7] S. Brin and L. Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Computer
Networks, vol. 30, nos. 1-7, pp. 107-117, 1998.
[8] T.H. Haveliwala, “Topic-Sensitive PageRank,” Proc. Int’l World Wide Web Conf. (WWW), 2002.
[9] G. Jeh and J. Widom, “Scaling Personalized Web Search,” Proc. Int’l World Wide Web Conf. (WWW),
2003.
[10] D. Fogaras, B. Rácz, K. Csalogány, and T. Sarlós, “Towards Scaling Fully Personalized PageRank:
Algorithms, Lower Bounds, and Experiments,” Internet Math., vol. 2, no. 3, pp. 333-358, 2005.
[11] K. Avrachenkov, N. Litvak, D. Nemirovsky, and N. Osipova, “Monte Carlo Methods in PageRank
Computation: When One Iteration Is Sufficient,” SIAM J. Numerical Analysis, vol. 45, no. 2, pp. 890-904,
2007.
[12] A. Balmin, V. Hristidis, and Y. Papakonstantinou, “ObjectRank: Authority-Based Keyword Search in
Databases,” Proc. Int’l Conf. Very Large Data Bases (VLDB), 2004.
[13] Z. Nie, Y. Zhang, J.-R. Wen, and W.-Y. Ma, “Object-Level Ranking: Bringing Order to Web Objects,”
Proc. Int’l World Wide Web Conf. (WWW), pp. 567-574, 2005.
[14] S. Chakrabarti, “Dynamic Personalized PageRank in Entity-Relation Graphs,” Proc. Int’l World Wide
Web Conf. (WWW), 2007.
[15] H. Hwang, A. Balmin, H. Pirahesh, and B. Reinwald, “Information Discovery in Loosely Integrated Data,”
Proc. ACM SIGMOD, 2007.
[16] V. Hristidis, H. Hwang, and Y. Papakonstantinou, “Authority-Based Keyword Search in Databases,” ACM
Trans. Database Systems, vol. 33, no. 1, pp. 1-40, 2008.
[17] M. Kendall, Rank Correlation Methods. Hafner Publishing Co., 1955.
[18] M.R. Garey and D.S. Johnson, “A 71/60 Theorem for Bin Packing,” J. Complexity, vol. 1, pp. 65-106,
1985.
[19] K.S. Beyer, P.J. Haas, B. Reinwald, Y. Sismanis, and R. Gemulla, “On Synopses for Distinct-Value
Estimation under Multiset Operations,” Proc. ACM SIGMOD, pp. 199-210, 2007.
[20] J.T. Bradley, D.V. de Jager, W.J. Knottenbelt, and A. Trifunovic, “Hypergraph Partitioning for Faster
Parallel PageRank Computation,” Proc. Second European Performance Evaluation Workshop (EPEW), pp.
155-171, 2005.
[21] L. Page, S. Brin, R. Motwani, and T. Winograd, “The PageRank Citation Ranking: Bringing Order to the
Web,” Technical Report 1999-66, Stanford InfoLab, 1999.
Authors Biographies
D. Nagamalleswary is pursuing her M.Tech at P.V.P. Siddhartha Institute of Technology and received
her B.Tech from Nimra Engineering College. She is currently working as an Assistant Professor at
K.L. University.
A. Ramana Lakshmi is pursuing her Ph.D. and is currently working as an Associate Professor at PVP
Siddhartha Institute of Engineering and Technology, Kanuru.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
394 Vol. 1, Issue 5, pp. 394-400
MODELING AND SIMULATION OF A SINGLE PHASE
PHOTOVOLTAIC INVERTER AND INVESTIGATION OF
SWITCHING STRATEGIES FOR HARMONIC MINIMIZATION
B. Nagaraju¹, K. Prakash²
¹Assistant Professor, Vaagdevi College of Engineering, Warangal, India
²Professor, Vaagdevi College of Engineering, Warangal, India
ABSTRACT
The aim of this paper is to build an EMTDC model of a single-phase photovoltaic inverter and to investigate
switching strategies for harmonic minimization. For the simulation of this model, the PSCAD/EMTDC software
package was used and the waveforms of interest were taken for further examination and discussion on the
performance of the model. A low-rating, mains-connected device was designed and was later used to demonstrate
that real and reactive power can flow in the desired direction just by changing the phase shift or the voltage
magnitude. The inverter device is intended for domestic use and will allow users to exploit voltage from photovoltaic
cells. This a.c. converted voltage will be useful for feeding small house appliances or, by employing appropriate
techniques, real and reactive power exported from the inverter can reinforce the main power stream in the
“Distribution Grid”.
KEYWORDS: Single-phase photovoltaic inverter, EMTDC model, harmonic minimization
I. INTRODUCTION
In recent years the need for renewable energy has become more pressing. Among the renewable sources, the
photovoltaic (PV) system based on solar cells is the most promising [1]. In the literature, several models have
been developed for the modeling and simulation of the different components of PV power systems [2-5], based
on simulation approaches implemented in various programming environments such as PSpice,
Matlab Simulink and LabVIEW [6, 7].
The aim of this work is to build an EMTDC model of a single-phase photovoltaic inverter and to
investigate switching strategies for harmonic minimization. The inverter device was intended for
domestic use and would allow users to exploit voltage from photovoltaic cells.
For the simulation of this model, the PSCAD/EMTDC software package [8, 9] was used and the
waveforms of interest were taken. A low-rating, mains-connected device was designed and was later used
to demonstrate that real and reactive power can flow in the desired direction just by changing the phase
shift or the voltage magnitude. An inverter model that converts the d.c. voltage supplied from a
battery into an a.c. voltage was designed, offering the capability of feeding this into the grid through an
inductance.
II. TECHNICAL BACKGROUND INFORMATION
An inverter is a d.c. to a.c. converter i.e. it can convert d.c. voltage into a.c. for feeding into an a.c. utility
network. It is possible to obtain a single-phase, or a three-phase output from such a device, but in this
work only the behaviour of a single-phase inverter was studied. An inverter system consists of the d.c.
input, the power circuit and the control circuit. The inverter finds very useful applications in standby
power supplies or uninterruptible power supplies (UPS) and also in a.c. motor control.
The d.c. input voltage into an inverter can be obtained in various ways. In UPS systems, it is almost
invariably obtained from a storage battery. In a.c. motor control, the d.c. link voltage is obtained from
rectified mains. For the case described in this work, the voltage-source inverter (VSI) was powered from a
stiff, low impedance d.c. voltage source provided in the form of a battery. The choice of the main devices
depends on factors such as the d.c. link voltage, the load current, the maximum operating frequency, etc.
The devices need to be force-commutated devices with high switching frequencies, for example Insulated
Gate Bipolar Transistors (IGBTs), power MOSFETs or Gate-Turn-Off thyristors (GTOs), which
provide gate-controlled turn-off facilities.
III. SIMULATION PACKAGE PSCAD/ EMTDC
EMTDC and PSCAD [8, 9] are a group of related software packages which provide the user with a very
flexible power systems electromagnetic transients tool. PSCAD enables the user to design the circuit that
is going to be studied. EMTDC enables the user to simulate the circuit performance under any conditions
or disturbances of a complicated or non-linear model or process. The operation of such a model can be
tested by subjecting it to disturbances and parameter variations and the stability of its response can be
observed.
EMTDC provides the facility to interface already available models with an electric circuit
or control system. It cannot alone provide the user with a complete analysis of the power system under
study, so the analysis is assisted by some auxiliary programs. Graphic plotting of any desired output
quantity is provided in the package. Fourier analysis of any desired output is possible using an
auxiliary program known as EMTFS. Another capability of the EMTFS program is the synthesizing of an
EMTDC output, representing the response of some complicated model by up to a fourth-order linear function,
using an optimization technique.
IV. SIMULATION RESULTS
4.1 Inverter design procedure
The whole design of the inverter circuit was implemented using Gate-Turn-Off thyristor (GTO) models.
These GTO models are normally used as controlling switches in H.V. devices with large power ratings,
whereas in this design they are just used to provide the switching pulses and finally produce the output.
The inverter circuit is given in Fig. 1.
Fig. 1: The inverter circuit
4.2 Representation and generation of the grid voltage
Once the correct output from the inverter was obtained, a sinusoidal wave representing the grid
voltage had to be generated. It was simple and easy to represent this grid voltage using the output of a low
impedance a.c. source. This a.c. source was to be used as the supply for obtaining a 50 Hz, 230 V rms
sinusoid that would represent the grid voltage. The initial parameters of this source, i.e. magnitude and
frequency were respectively set to 230 V rms and 50 Hz. Once this output was generated it was coupled
in the circuit as the grid voltage.
4.3 Coupling of the two circuits
After designing and implementing the inverter device and the a.c. source equivalent circuit, the two
circuits were coupled together through an inductance. An inductance of value 67.35 mH was used to
couple the two circuits together.
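The claim that real and reactive power can be steered by the phase shift and the voltage magnitude follows from the standard power-transfer relations across the coupling inductance. These are textbook equations, not taken from this paper, and the numerical values below are illustrative.

```python
import math

# Power transfer between two a.c. voltages V1 (inverter) and V2 (grid) linked
# by a reactance X = 2*pi*f*L, with phase shift delta (seen from the inverter):
#   P = V1 * V2 * sin(delta) / X
#   Q = (V1**2 - V1 * V2 * cos(delta)) / X
f, L = 50.0, 67.35e-3                  # the 67.35 mH coupling inductance at 50 Hz
X = 2 * math.pi * f * L                # approximately 21.2 ohm

def power_flow(v1, v2, delta_deg):
    """Real and reactive power exported by the inverter across reactance X."""
    d = math.radians(delta_deg)
    p = v1 * v2 * math.sin(d) / X
    q = (v1**2 - v1 * v2 * math.cos(d)) / X
    return p, q

# A small leading phase shift exports real power; raising V1 above V2 exports
# reactive power, matching the behaviour demonstrated in the simulations.
p, q = power_flow(250.0, 230.0, 2.0)
```

This is why the experiments below only need to vary the phase shift by a few degrees, or the voltage magnitude slightly, to reverse the direction of each power flow.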
Another adjustment needed to be considered was “locking” the phase of the inverter output voltage onto
that of the grid voltage. This means that the phase of the inverter voltage had to be made equal to the
phase of the grid voltage. It is possible to achieve this task in various ways such as using a Phase-Locked-
Loop (PLL), but in this work a much simpler implementation technique was employed. This technique
used a duplicate of the grid voltage source and passed its output through a Zero-Crossing-Detector
(ZCD) to trigger the thyristors in the inverter device. The ZCD, as its name implies, detects zero
crossings on the input waveform and triggers at each zero crossing. In this way a sinusoidal
input is easily converted into a square wave.
The ZCD output was used as the input to the triggering block. Applying a square-pulse generated from
the grid voltage sinusoid at the input of the triggering block, the triggering pulses obtained will eventually
produce a square-wave output that will be in phase with the grid voltage. This phase compatibility is
shown in Fig. 2 but in order to have the two voltages in phase the triggering pulses had to be swapped
around.
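The ZCD behaviour described above, turning the grid sinusoid into a square wave in phase with it, can be illustrated numerically. This is a hypothetical sketch: the sample step and array names are ours, not from the PSCAD model.

```python
import numpy as np

# Two cycles of the 50 Hz, 230 V rms grid voltage, sampled every 10 us.
t = np.arange(0.0, 0.04, 1e-5)
grid = 230 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)

# ZCD output: switches sign at every zero crossing of the sinusoid, giving a
# +/-1 gate signal in phase with the grid voltage.
square = np.where(grid >= 0, 1.0, -1.0)
```

The sign of `square` flips every half-cycle (every 10 ms at 50 Hz), which is exactly the trigger pattern applied to the thyristor pairs.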
4.4 Power measurements
With appropriate manipulation of the phase between the two voltages and of the voltage magnitude, the
respective transfer of real and reactive power is feasible. In order to measure real and reactive power, the
complex power (S) had to be measured first. The complex power at any point in the system can be found
by multiplying the corresponding voltage (V) and current (I) at that point.
Fig. 2: Inverter output and grid voltage waveforms
The current was measured using an ammeter connected in series in the circuit, and the voltage was
measured likewise. By graphically multiplying the waveforms of these two quantities, the waveform
corresponding to the complex power was derived, and from that an rms value for the complex power
could be deduced.
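Numerically, this graphical multiplication amounts to forming the instantaneous product of the two sampled waveforms; a small Python sketch (the 230 V and 1.3 A rms values follow the text, while the 0.5 rad current lag is an assumed example):

```python
import numpy as np

t = np.linspace(0.0, 0.04, 4000, endpoint=False)                 # two 50 Hz cycles
v = 230.0 * np.sqrt(2.0) * np.sin(2.0 * np.pi * 50.0 * t)        # voltage waveform
i = 1.3 * np.sqrt(2.0) * np.sin(2.0 * np.pi * 50.0 * t - 0.5)    # lagging current
p = v * i               # instantaneous power waveform (the graphical product)
p_avg = p.mean()        # average over whole cycles gives the real power P
s_rms = 230.0 * 1.3     # apparent power from the two rms values
```

Averaged over whole cycles the product settles to V·I·cos(phi), which is why a long-time-constant filter applied to this waveform yields the real power.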
First, with the d.c. supply set to 250 V rms, the current was limited within the acceptable limits and
actually had an rms value of 1.3 A. The current waveform was seen to be very distorted, containing all
orders of harmonics. The inverter output waveform also changed since the load became inductive, and
a "step" was observed in the waveform.
The complex power was measured using the current and voltage values. A two-input, one-output
multiplier was used to obtain the complex power waveform simply by multiplying the voltage and
current waveforms. The complex power waveform was seen to be distorted due to the contribution
from the current waveform. The real power was measured by passing the complex power waveform
through a first-order transfer function of the form G/(1 + τs), where G is the gain introduced between
the input and the output and τ is the time constant of the system. This transfer function has no zeroes and
only one pole, at s = -1/τ.
The gain was set to 1 and the time constant τ was set to 1 sec. The value of the time constant needed
to be as large as possible so that the instantaneous values would not be taken into account, and the output
waveform indicates that the real power had reached a steady-state value. For these measurements the
magnitude of the fundamental of the inverter output voltage was set to 250 V rms, resulting in a current
flow of 1.3 A through the circuit.
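A discrete-time version of this averaging filter can be sketched as follows (illustrative Python; the backward-Euler discretisation is an assumption, not from the paper):

```python
import numpy as np

def first_order_lag(x, dt, gain=1.0, tau=1.0):
    """Apply G/(1 + tau*s) sample by sample (backward-Euler discretisation).
    With a large tau the output settles to the average of the input."""
    y = np.zeros_like(x, dtype=float)
    a = dt / (tau + dt)
    for k in range(1, len(x)):
        y[k] = y[k - 1] + a * (gain * x[k] - y[k - 1])
    return y
```

Feeding the complex power waveform through this block with G = 1 and τ = 1 s suppresses the double-frequency ripple and leaves only the steady real-power value.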
The real power flow was monitored and graphs showing the voltage waveform V2, the current Ia,
the complex power waveform and the real power waveform were plotted. Measurements were taken with
Vd.c. = 250 V rms and phase shifts of +2 degrees and -2 degrees, and the above waveforms were recorded
each time. Fig. 3 gives the waveforms obtained for the leading mode of operation.
Fig. 3: Leading mode waveforms, Vd.c. = 250 V
4.5 Voltage magnitude manipulation - reactive power flow
The magnitude of the fundamental of the inverter output voltage was set to 250 V rms and the magnitude
of the grid voltage to 230 V rms. This resulted in a current of rms value 1.3 A flowing through the
circuit. The current flow was due to the voltage difference between the a.c. side and the d.c. side, and it
was expected that a reactive power flow occurred in the same direction. There was no easy way to
measure the reactive power Q, so the flow of reactive power was demonstrated by inspection of the
current wave shapes for different supply voltages that would increase or decrease the magnitude of the
fundamental of the inverter output.
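The underlying relations for power exchange between two voltage sources across a mainly inductive coupling can be illustrated numerically; the reactance value below is an assumed figure, not taken from the paper:

```python
import math

def transfer_pq(v1, v2, delta_deg, x):
    """Real and reactive power sent from source 1 towards source 2 through a
    pure reactance x (ohm), with source 1 leading by delta_deg degrees."""
    d = math.radians(delta_deg)
    p = v1 * v2 * math.sin(d) / x
    q = (v1 * v1 - v1 * v2 * math.cos(d)) / x
    return p, q

p, q = transfer_pq(250.0, 230.0, 2.0, 3.0)   # +2 deg leading; x = 3 ohm assumed
```

With V1 greater than V2 the reactive term q is positive, matching the observation that raising the inverter fundamental above the grid voltage exports reactive power, while the small phase shift mainly sets the real power p.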
One set of measurements and graphs was obtained using a supply voltage of 250 V rms and a phase shift
of +2 degrees leading. These graphs were given in Fig. 4 but they are given again in Fig. 5 to support the
reactive power flow demonstration. Another set of graphs was taken this time using a supply voltage of
230 V rms and a phase shift of two degrees leading.
Comparing the two current waveforms obtained for supply voltages Vd.c. = 250 V rms and Vd.c. = 230 V
rms, it is concluded that in the second case, where the supply voltage was reduced, the current spikes
seem to have reduced in magnitude. The rms value of the current was increased.
Fig. 4: Leading mode waveforms for Vd.c. = 250 V
4.6 Harmonic injection into the grid voltage
The waveforms in Fig. 5 were obtained to demonstrate the effect that an increase of the series inductance
of the a.c. voltage source had on the grid voltage V2. This inductance was increased from a value of
0.001 H to a value of 0.01 H, i.e. by a factor of 10, and harmonic injection was evident on the grid voltage
waveform V2. Fig. 5 shows waveform V2 containing harmonics, alongside the current waveform and the
complex and real power waveforms for Vd.c. = 250 V rms.
The reason for this harmonic injection is that the a.c. source is active only at a frequency of 50 Hz, the
pre-defined frequency of the pure sinusoid generated by this source. At a higher frequency, when trying
to simulate the circuit at the second harmonic, the only "source" present would be the inverter, whose
output contains this 2nd harmonic. At this frequency the a.c. source becomes short-circuited and the
remaining circuit acts as a voltage divider, dividing the square inverter output between the series
inductance and the coupling inductance. The larger the series inductance, the more harmonic voltage
will appear across it as a voltage drop.
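At a harmonic frequency, with the a.c. source short-circuited, the divider ratio is simply L_series/(L_series + L_coupling); a quick sketch (the coupling inductance value is assumed purely for illustration):

```python
def harmonic_voltage_share(l_series, l_coupling):
    """Fraction of the inverter's harmonic voltage dropped across the series
    inductance when the circuit reduces to an inductive voltage divider."""
    return l_series / (l_series + l_coupling)

share_small = harmonic_voltage_share(0.001, 0.01)   # L_coupling = 0.01 H assumed
share_large = harmonic_voltage_share(0.01, 0.01)    # series L increased tenfold
```

Raising the series inductance from 0.001 H to 0.01 H lifts the share from about 9% to 50% under this assumption, consistent with more harmonics appearing on V2.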
Fig. 5: Leading mode waveforms, harmonics injection in grid voltage V2; Vd.c. = 250 V
V. CONCLUSIONS
In this paper a model of a single photovoltaic voltage inverter was designed and simulated using the
PSCAD/EMTDC simulation package. This inverter model was used in conjunction with an a.c.
voltage source to show real and reactive power flow.
The operation of the inverter device showed the model's ability to both absorb and generate
reactive power. When the d.c. supply was increased, the magnitude of the fundamental of the
inverter output increased with respect to the grid voltage magnitude and the inverter exported
reactive power; decreasing Vdc led to exactly the opposite effect, i.e. absorption of reactive power
by the inverter.
REFERENCES
[1] Y. Sukamongkol, S. Chungpaibulpatana, W. Ongsakul, "A simulation model for predicting the performance of a solar photovoltaic system with alternating current loads", Renewable Energy, 2002, No. 27, pp. 237-258.
[2] E. Koutroulis, K. Kalaitzakis, et al., "Development of a microcontroller-based, photovoltaic maximum power point tracking control system", IEEE Trans. Power Electronics, 2001, Vol. 16, No. 1, pp. 46-54.
[3] F. Valenciaga, P.F. Puleston, P.E. Battiaiotto, "Power control of a photovoltaic array in a hybrid electric generation system using sliding mode techniques", IEE Proc. Control Theory Appl., 2001, Vol. 148, No. 6, pp. 448-455.
[4] T. Noguchi, S. Togashi, R. Nakamoto, "Short-current pulse-based maximum power point tracking method for multiple photovoltaic and converter module system", IEEE Trans. Industrial Electronics, 2002, Vol. 49, No. 1, pp. 217-223.
[5] D.P. Hohm, M.E. Ropp, "Comparative study of maximum power point tracking algorithms using an experimental, programmable, maximum power point tracking test bed", Photovoltaic Specialists Conference, 2000, pp. 1699-1702.
[6] D.F. Hasti, "Photovoltaic power system application", IEEE Power Engineering Review, Sandia National Laboratories, 1994, pp. 8-19.
[7] E. Koutroulis, K. Kalaitzakis, "Development of an integrated data-acquisition system for renewable energy systems monitoring", Renewable Energy, 2003, Vol. 28, pp. 139-152.
[8] PSCAD/EMTDC Power System Simulation Software, User's Manual, Manitoba HVDC Research Centre, Winnipeg, Canada, EMTDC version 2, 1994 release.
[9] Manitoba HVDC Research Centre, PSCAD/EMTDC Power System Simulation Software User's Manual, Version 3, 1998 release.
Authors Information:
B. Nagaraju received his M.Tech (Power Electronics, 2009) from Vaagdevi College of
Engineering, Warangal. He is currently working as an Assistant Professor in the Department of EEE,
Vaagdevi College of Engineering. His areas of interest include power quality maintenance in
smart grids using renewable energy sources.
K. Prakash received his M.Tech (Power Systems, 2003) from the National Institute of Technology,
Warangal. He is currently pursuing his Ph.D (Electrical Engineering) at the National Institute of
Technology, Warangal. His areas of interest include distribution system studies, economic
operation of power systems, artificial intelligence techniques and meta-heuristic techniques.
ENHANCEMENT OF POWER TRANSMISSION CAPABILITY OF
HVDC SYSTEM USING FACTS CONTROLLERS
M. Ramesh1, A. Jaya Laxmi2
1Assoc. Prof. and HOD, Dept. of EEE, Medak College of Engg. and Tech., Kondapak, Medak;
Research Scholar, EEE Dept., Jawaharlal Nehru Technological Univ., Anantapur, A. P., India
2Associate Professor, Dept. of EEE, Jawaharlal Nehru Technological Univ., College of
Engg., Kukatpally, Hyderabad, A. P., India
ABSTRACT
The necessity to deliver cost-effective energy in the power market has become a major concern in this emerging
technology era. Establishing a desired power condition at given points is therefore best achieved using
power controllers such as the well-known High Voltage Direct Current (HVDC) and Flexible Alternating
Current Transmission System (FACTS) devices. HVDC is used to transmit large amounts of power over long
distances; the factors to be considered are cost, technical performance and reliability. A FACTS is a system
composed of static equipment used for the AC transmission of electrical energy, meant to enhance
controllability and increase the power transfer capability of the network, and it is generally a power
electronics-based system. A Unified Power Flow Controller (UPFC) is a FACTS device for providing
fast-acting reactive power compensation on high-voltage electricity transmission networks. The UPFC is a
versatile controller which can be used to control active and reactive power flows in a transmission line. The
focus of this paper is to identify the improved power transmission capability through a control scheme and
comprehensive analysis for a UPFC on the basis of theory and computer simulation. The conventional control
scheme cannot attenuate the power fluctuation, and so the time constant of damping is independent of the
active- and reactive-power feedback gains integrated in its control circuit. The model was analyzed for
different types of faults at different locations, keeping the location of the UPFC fixed at the receiving end of
the line. With the addition of the UPFC, the magnitude of the fault current and the oscillations of the
excitation voltage reduce. The series and shunt parts of the UPFC provide series and shunt injected voltages
at certain different angles.
KEYWORDS: Flexible ac transmission system (FACTS), High-voltage dc transmission (HVDC), FACTS
devices, Power transfer controllability, PWM, Faults in HVDC System
I. INTRODUCTION
The rapid development of power systems generated by increased demand for electric energy, initially
in industrialized countries and subsequently in emerging countries, led to different technical problems
in the systems, e.g., stability limitations and voltage problems. Breakthrough innovations in
semiconductor technology then enabled the manufacture of powerful thyristors and, later, of new
elements such as the gate turn-off thyristor (GTO) and insulated gate bipolar transistor (IGBT).
Development based on these semiconductor devices first established high-voltage dc transmission
(HVDC) technology as an alternative to long-distance ac transmission. HVDC technology, in turn, has
provided the basis for the development of flexible ac transmission system (FACTS) equipment
which can solve problems in ac transmission. As a result of deregulation, however, operational
problems arise which create additional requirements for load flow control and needs for ancillary
services in the system. This paper discusses flexible ac transmission systems (FACTS), high-voltage
dc transmission (HVDC), FACTS devices, power transfer controllability and faults in HVDC systems,
to explain how greater performance of power network transmission with various line reactances can be
achieved [1, 2]. The expected benefits include:
(a) Reduced maintenance (b) Better availability
(c) Greater reliability (d) Increased power
(e) Reduced losses (f) Cost-effectiveness
During the state of power exchange in interconnected lines to a substation under variable or constant
power, the HVDC converters perform the power conversion and later stabilize the voltage
through the lines, giving a break-even margin in the power transmission. The first large-scale thyristors
for HVDC were developed decades ago. HVDC became a conventional technology in the area of
back-to-back and two-terminal long-distance and submarine cable schemes [3]. However, only few
multi-terminal schemes have been realized up to now, although further multi-terminal HVDC
schemes are planned for the future (Fig. 1). The main application area for HVDC is the interconnection
between systems which cannot be interconnected by AC because of different operating frequencies or
different frequency controls. This type of interconnection is mainly represented by back-to-back
stations or long-distance transmissions when a large amount of power, produced by a hydropower
plant, for instance, has to be transmitted by overhead line or by submarine cable. HVDC schemes to
increase power transmission capability inside a system have been used only in a few cases in the
past. However, more frequent use of such HVDC applications can be expected in the future to fulfil
the requirements of deregulated power markets [4, 6].
Fig. 1: Various types of HVDC connections
Static var compensators control only one of the three important parameters (voltage, impedance, phase
angle) determining the power flow in ac power systems: the amplitude of the voltage at selected
terminals of the transmission line. Theoretical considerations and recent system studies (1) indicate
that high utilization of a complex, interconnected ac power system, meeting the desired objectives for
availability and operating flexibility, may also require real-time control of the line impedance and
the phase angle. Hingorani (2) proposed the concept of flexible ac transmission systems, or FACTS,
which includes the use of high-power electronics, advanced control centers, and communication links,
to increase the usable power transmission capacity to its thermal limit [5].
When using carrier-based Pulse Width Modulation (PWM), the switching frequency has to be
increased (typically, 33 times the fundamental frequency or even higher) [17], which causes considerable
power losses. This reduces the total efficiency and economy of the UPFC-HVDC project, and such
losses are also impediments for equipment aimed at the green, renewable sector. Therefore, with regard to
PWM technology suited for UPFC-HVDC, how to reduce the switching frequency while simultaneously
retaining good harmonic performance and excellent transient control capability becomes critical, and
this is exactly the aim of the paper. The paper presents an innovative hybrid PWM technology, which
comprises a combination of a first PWM with a first switching pattern and a second PWM with a
second switching pattern. During a first mode of operation, which may be steady-state
operation, the converter is controlled by the first PWM, and during a second mode of operation, which
may be transient operation, the converter is controlled by the second PWM. An intelligent detection
function enables the modulation and the corresponding control system to switch smoothly
from the first PWM to the second PWM and vice-versa when a disturbance causing a transient occurs.
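The detection-and-switchover logic can be caricatured in a few lines; the threshold value and the two pattern names below are hypothetical, purely to show the structure of such a selector:

```python
def select_pwm_pattern(disturbance_level, threshold=0.1):
    """Pick the first (steady-state) switching pattern normally, and the
    second (transient) pattern when the detected disturbance is large.
    Both pattern names and the threshold are illustrative assumptions."""
    if abs(disturbance_level) > threshold:
        return "pattern_2_transient"
    return "pattern_1_steady"
```

In a real controller the disturbance detector would monitor, for example, voltage or current error signals, and hysteresis would be added so the modulator does not chatter between patterns.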
The development of FACTS-devices has started with the growing capabilities of power electronic
components. Devices for high power levels have been made available in converters for high and even
highest voltage levels. The overall starting points are network elements influencing the reactive power
or the impedance of a part of the power system. The series devices are compensating reactive power.
With their influence on the effective impedance on the line they have an influence on stability and
power flow. The UPFC provides power flow control together with independent voltage control [7].
The main disadvantage of this device is the high cost level due to the complex system setup. The
relevance of this device is given especially for studies and research to figure out the requirements and
benefits for a new FACTS-installation. All simpler devices can be derived from the UPFC if their capability is sufficient for a given situation [8].
II. HVDC AND FACTS
2.1 HVDC Converters and Functionalities for Power Transmission Enhancements.
During the state of power exchange in interconnected lines to a substation under variable or constant
power, the HVDC converters perform the power conversion and later stabilize the voltage
through the lines, giving a break-even margin in the power transmission [9, 4]. The operation of
HVDC filters any system harmonics developed in the network and improves the power transmission
to the receiving end by independently adjusting the real and reactive power control. The significance
of the HVDC controller, considered as part of the FACTS family of devices, is a structure of the
back-to-back converter that governs the ac-dc-ac conversion, like FACTS [9, 12, 14]. HVDC is assigned for
frequency- and phase-independent short- or long-distance overhead or underground bulk power
transmission with high-speed controllability [9, 4]. This provides greater real power transmission and
less maintenance, and it reduces the difficulties of installing power cables, especially in transmission
paths that travel under water [4, 10]. By making use of back-to-back converters, power transmission
between non-synchronous ac systems is easily adaptable. The installation includes a smoothing reactor
for the DC current, reactive power compensation at the sending and receiving ends, and AC harmonics
filters, as shown in Fig. 1. The installation of HVDC also depends on the dc voltage and current
ratings desired in the network that yield the optimum converter cost. The converters terminate
the DC overhead lines or cables that are linked to AC buses and the network [9]. HVDC used
for submarine cable connections will normally have 12-pulse converters, as shown in Fig. 1 and Fig. 3.
The bridge converter circuit contains delta- and wye-type transformers. The transformer windings filter
out system harmonics that occur by using the 6-pulse Graetz bridge converter [10]. Passive filters
involving components like reactors, capacitors and resistors are the ones that remove the harmonics [9].
For harmonics filtration, Insulated Gate Bipolar Transistors (IGBTs) or gate-turn-off thyristors (GTOs)
are used alongside the passive filters in an HVDC connection [9].
Fig. 2 HVDC terminal station in cable transmission [1]
Fig.3 Schematic diagram of HVDC back-to-back converter station [9].
The operation of HVDC is restricted when the network system has low short-circuit ratios.
Therefore, insulation in the HVDC is essential in such cases; however, this does not
restrict the converter stations' operation. The HVDC insulation must withstand the stresses produced by
the ac and dc voltages to allow full operation of HVDC in the lines. In addition, Graetz's theory is
applied to measure the system harmonics occurring in the system to further allow energy
conversion in the HVDC system.
Fig. 4 Transformers and valve in 12-pulse bridge converter
2.2 Operation Condition of HVDC converter
Rectification of voltage and current using the sending-end converter, pole 1, filters the system harmonics
and 'noises' occurring in the transmission. When power is filtered, the conversion from DC is made
directly back into the AC line at the receiving end of the HVDC, pole 2 (Fig. 2). This sequence operates
instantaneously, matching the AC and DC voltages during the conversion process. Requirements for this
conversion include adequate impedance on either the AC or the DC side of the HVDC [10], see Fig. 3.
Smoothing inductors are provided to control the pulses of constant current flowing into the transformer's
secondary windings, because the transmission current pulses travel from the primary side of the
transformer, which has a specific type of connection and ratio [9]. Thyristor schemes are more feasible
in the converters; HVDC and FACTS use this scheme to generate automated switching for close
accuracy in their voltage conversion. The HVDC rectifier produces commutation effects when power is
fired into the pulses from the thyristor. The rectified power is only then sent to the inverter for power
inversion back to the AC line with the required frequency at the receiving end.
For optimal converter utilization and low peak inverse voltage across the converter valves, a typical
3-phase bridge converter is normally used. Simple transformers installed in the lines resist voltage
variation and high direct voltages when insulated. The assumptions and representation of the
HVDC block-set are expressed in equations (1) to (13) for MATLAB.
dI_dc/dt = (V_Rdc - V_Idc - R_dc I_dc) / L_dc ---------------------------- (1)

dx_r/dt = K_I (I_RO - I_dc) ---------------------------- (2)

dx_I/dt = K_I (I_RO - I_dc) ---------------------------- (3)

P_k = (V_ndc I_ndc / S_n) V_Rdc I_dc ---------------------------- (4)

Q_k = (V_ndc I_ndc / S_n) sqrt(S_r^2 - (V_Rdc I_dc)^2) ---------------------------- (5)

P_m = (V_ndc I_ndc / S_n) V_Idc I_dc ---------------------------- (6)

Q_m = (V_ndc I_ndc / S_n) sqrt(S_I^2 - (V_Idc I_dc)^2) ---------------------------- (7)

The assumptions for the algebraic equations are then

cos α = x_R + K_P (I_RO - I_dc) ---------------------------- (8)

V_Rdc = (3√2/π) m_R V_k cos α - (3/π) X_tR I_dc ---------------------------- (9)

I_RO = V_k / m_R ---------------------------- (10)

V_Idc = (3√2/π) m_I V_m cos γ - (3/π) X_tI I_dc ---------------------------- (11)

S_I = (3√2/π) (V_ndc I_ndc / S_n) V_m I_dc ---------------------------- (12)

I_IO = V_m / m_I ---------------------------- (13)
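Equation (1) is an ordinary differential equation for the DC-link current and can be checked by simple forward-Euler integration (all numeric values below are illustrative, not the paper's data):

```python
def dc_link_current(v_rdc, v_idc, r_dc, l_dc, dt=1e-4, steps=200000, i0=0.0):
    """Integrate dI_dc/dt = (V_Rdc - V_Idc - R_dc*I_dc)/L_dc by forward Euler.
    The current settles towards the steady state (V_Rdc - V_Idc)/R_dc."""
    i = i0
    for _ in range(steps):
        i += dt * (v_rdc - v_idc - r_dc * i) / l_dc
    return i

# Illustrative values: 5 V net driving voltage, 5 ohm, 0.78 H smoothing reactance
i_ss = dc_link_current(505.0, 500.0, 5.0, 0.78)
```

With a DC-side time constant of L_dc/R_dc = 0.156 s, the 20 s of simulated time is ample for the current to settle to its steady-state value of 1 A.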
TABLE 1: HVDC data format in MATLAB

S.No   Variable   Description                          Unit
1      k          Sending-end bus (SE)                 int
2      m          Receiving-end bus (RE)               int
3      S_n        Power rating                         MVA
4      V_nk       Voltage rating at SE                 kV
5      V_nm       Voltage rating at RE                 kV
6      f_n        Frequency rating                     Hz
7      V_ndc      DC voltage rating                    kV
8      I_ndc      DC current rating                    kA
9      X_tr       Transformer reactance (rectifier)    p.u.
10     X_ti       Transformer reactance (inverter)     p.u.
11     m_r        Tap ratio (rectifier)                p.u.
12     m_i        Tap ratio (inverter)                 p.u.
13     K_I        Integral gain                        1/s
14     K_p        Proportional gain                    p.u./p.u.
15     R_dc       Resistance of the DC connection      ohm
16     L_dc       Inductance of the DC connection      H
17     α_rmax     Max. firing angle                    deg
18     α_rmin     Min. firing angle                    deg
19     γ_Imax     Max. extinction angle                deg
20     γ_Imin     Min. extinction angle                deg
21     I_romax    Max. reference current (rectifier)   p.u.
22     I_romin    Min. reference current (rectifier)   p.u.
23     I_iomax    Max. reference current (inverter)    p.u.
24     I_iomin    Min. reference current (inverter)    p.u.
These expressions represent a single DC line circuit with two AC/DC converters connected as an RL
circuit. The MATLAB model has PI controllers to control the extinction angle and also the firing angle
of the HVDC [6]. The type of HVDC model used and available in MATLAB is thyristor based.
2.3 Flexible AC Transmission System (FACTS)
The objectives of incorporating FACTS into power system lines are similar to those of HVDC, but
greater flexibility is involved, such as improving real power transfer capability in the lines, prevention
of sub-synchronous resonance (SSR) oscillations and damping of power swings [9]. FACTS devices have
four well-known types which are used in many power systems in the world [9, 4, 10]. A 'single' type
controller is a FACTS device installed in series or shunt in an AC transmission line, while 'unified'
type controllers are the combined-converter type of FACTS controllers, like the UPFC and HVDC. The
size of a controller depends on the requirements of the network and the desired power transmission at
the loading point. A Voltage Source Converter (VSC) produces a sinusoidal voltage and is used in
power systems and other applications; the quality of the sine wave depends on the size or amount
of power electronics installed. The following types of FACTS devices are VSC-based controllers:
(a) Shunt controller: an example device, the STATCOM, emulates a variable inductor or capacitor in
shunt (parallel) connection with the transmission line. This type of device is capable of imitating
inductive or capacitive reactance in turn to regulate the line voltage at the point of coupling. A shunt
controller in general controls voltage injection [4].
(b) Series controller: an example device, the SSSC, emulates a variable inductor or capacitor in series
with a transmission line and imitates inductive or capacitive reactance in turn to regulate the effective
line reactance between the two ends. A series controller in general controls current injection [4].
(c) Shunt-series controller: can be formed from standalone controllers such as the STATCOM and SSSC.
This type of controller is a reactive compensator with the exception of producing its own losses. It is
also recognized as a "unified" controller and requires a small amount of power for the DC circuit
exchange occurring between the shunt and series converters [4]. See Fig. 5 for the shunt-series controller.
Fig. 5 Series-shunt compensator, UPFC
III. SIMULATION RESULTS
The rectifier and the inverter are 12-pulse converters using two Universal Bridge blocks connected in
series. The converters are interconnected through a 110-km line and 0.78 H smoothing reactors, as
shown in Fig. 5(a). The converter transformers (Wye grounded/Wye/Delta) are modeled with Three-
Phase Transformer (Three-Winding) blocks. The transformer tap changers are not simulated; the tap
position is instead fixed, determined by a multiplication factor applied to the primary
nominal voltage of the converter transformers (0.90 on the rectifier side, 0.96 on the inverter side).
The HVDC transmission link uses 12-pulse thyristor converters. Two sets of 6-pulse converters are
needed for the implementation stage. AC filters and DC filters are also required to minimize
harmonics.
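The harmonic benefit of the 12-pulse arrangement follows from the characteristic harmonic orders h = kp ± 1 of an ideal p-pulse converter; a short sketch of this textbook rule:

```python
def characteristic_harmonics(pulse_number, k_max=4):
    """Return the a.c.-side characteristic harmonic orders h = k*p +/- 1
    of an ideal p-pulse converter, for k = 1..k_max."""
    orders = set()
    for k in range(1, k_max + 1):
        orders.add(k * pulse_number - 1)
        orders.add(k * pulse_number + 1)
    return sorted(orders)
```

For a 6-pulse bridge the list starts [5, 7, 11, 13, ...], while for a 12-pulse converter it starts [11, 13, ...]: the 5th and 7th harmonics produced by each 6-pulse bridge cancel in the 12-pulse arrangement, which is why smaller AC filters suffice.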
Fig. 5(a) Simulink diagram of the HVDC Circuit
The firing-angle control system is configured based on two 6-pulse converters in series, one of which
is operated as a modified HVDC bridge. The HVDC power converters with thyristor valves will be
assembled in a converter bridge of twelve pulse configuration. This is accomplished by star-star
connection and star-delta connection. Reduction of harmonic effects is another factor of investigation.
Here, the MATLAB/SIMULINK program is used as the simulation tool.
Two 6-pulse Graetz bridges are connected in series to form a 12-pulse converter. The two 6-pulse
bridges are 275 kV, 60 Hz, and totally identical except for a phase shift of 30° in the AC supply
voltages. Some of the harmonic effects are cancelled out by the presence of the 30° phase shift, and
further harmonic reduction can be done with the help of filters. The firing angles are always maintained at
almost constant or as low as possible so that voltage control can be carried out. Six or eight bridges of
equal rating are the best way to control the DC voltage; more series bridges than this are not preferable
due to the increase in harmonic content. The control of power can be achieved in two ways, i.e., by
controlling the current or by controlling the voltage. It is crucial to maintain the voltage in the DC link
constant and only adjust the current, to minimize the power loss. The rectifier station is responsible for
current control and the inverter is used to regulate the DC voltage. The firing angle at the rectifier
station and the extinction angle at the inverter station are varied to examine the system performance and
the characteristics of the HVDC system. Both AC and DC filters act as large capacitors at fundamental
frequency. Besides, the filters provide reactive power compensation for the rectifier consumption
because of the firing angle. The main circuit of the UPFC is rated at 10 kVA and its circuit parameters
are represented in Fig. 5.
The main circuit of the series device consists of three single-phase H-bridge voltage-fed Pulse Width
Modulation (PWM) inverters. A PWM control circuit compares the reference voltage VC with a
triangular carrier signal of fsw = 1 kHz in order to generate twelve gate signals. The equivalent switching
frequency is 2 kHz, which is twice as high as fsw because three H-bridge PWM inverters are used. The
AC terminals of the PWM inverters are connected in series through matching transformers with a turns
ratio of 1:12. Since the rms voltage of the series device is 12 V, its kilovoltampere rating is 11% of the
controllable active power of 10 kW flowing between Vs and Vr.
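The carrier comparison described above can be sketched numerically (illustrative Python; the 0.8 modulation index is an assumed value, and only a single comparator out of the twelve gate signals is shown):

```python
import numpy as np

def triangular_carrier(t, f):
    """Unit-amplitude triangular wave of frequency f."""
    phase = t * f - np.floor(t * f + 0.5)     # sawtooth phase in [-0.5, 0.5)
    return 2.0 * np.abs(2.0 * phase) - 1.0    # fold into a triangle in [-1, 1]

t = np.linspace(0.0, 0.02, 20000, endpoint=False)   # one 50 Hz fundamental cycle
v_ref = 0.8 * np.sin(2.0 * np.pi * 50.0 * t)        # reference voltage V_C
gate = v_ref > triangular_carrier(t, 1000.0)        # comparator output (one gate)
duty = gate.mean()                                  # averages near 0.5
```

The comparator output is a pulse train whose local duty cycle follows the reference sinusoid, which is the essence of carrier-based PWM.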
Fig. 5(a) shows the HVDC system with UPFC; the real power output in the line is controlled to obtain
the steady-state condition when system harmonics are introduced. The weak power transmission normally
occurring in long transmission lines was studied using MATLAB. The diagram given in Fig. 5 shows
the computational layout of the HVDC, which is simulated for damping system harmonics and
rectification as well as power inversion in its converters. Simulation of the HVDC system was carried
out using MATLAB/SIMULINK with UPFC, and simulation results are presented showing the
oscillations in the line current and power waveforms during the power transmission. Fig. 7 to Fig.
14 show the simulation results of the HVDC system for three-phase, line-to-ground and double-
line-to-ground faults, with and without UPFC. From the simulation results, it is observed that when
different types of faults, i.e. three-phase, line-to-ground and double-line-to-ground, occur, the system
has more oscillations and takes more time to reach steady-state operation. By using the
UPFC the system reduces the oscillations and thereby enhances the power transfer capability of the
HVDC system.
Fig. 6: Simulation result of the HVDC system
In Fig. 6, a fault is created in phase A of the rectifier bus at t = 0.03 sec; it results in unbalancing of the
phase voltages and generates harmonic oscillations in the DC voltages and currents. The DC voltages and
currents of the rectifier are distorted and attain peak values up to 0.9 per unit and 0.016 per unit
respectively at time t = 0.12 sec.
Fig. 7: Simulation result of the HVDC system when a three-phase fault occurs on the inverter
In Fig. 7, it is observed that a 3-phase fault is created on the inverter side of the HVDC system. The PWM
controller activates and clears the fault. The fault clearing can be seen first as a straight line of '0'
voltage between t = 0.03 sec and t = 0.08 sec. Before the fault, Vabc = 0.17 pu and Iabc = 0.15 pu. After the
fault is cleared at t = 0.3 sec, the recovery is slow and there are oscillations in the DC voltage and current of
magnitude 0.13 pu and 0.1 pu respectively. The rectifier DC voltage and current oscillate and
settle to the prefault values in about 3 cycles after the fault is cleared.
Fig 8 Simulation result of HVDC system when three-phase fault occurs on inverter with UPFC
From Fig 8, it is observed that different types of faults, i.e. three-phase, line-to-ground and double-line-to-ground, are created on the inverter side of the HVDC system at t=0.15 sec. When these faults occur, the system takes more time to reach steady-state operation. The PWM controller activates and clears the fault. Further, with the addition of the UPFC the system reduces oscillations and a pure sinusoidal waveform is obtained, with voltage Vabc=0.9 p.u and current Iabc=0.95 p.u at time t=0.15 sec.
Fig 9 Simulation Result for steady state operation of HVDC system on rectifier side.
At the rectifier side, when the fault is applied at time t=0.03sec, the voltage and current magnitudes are of the order of 1pu and 1.5pu respectively and the alpha angle is equal to 7 degrees, as shown in Fig 9. If the alpha angle is changed to a higher value, the system takes a longer time to reach steady state; as the alpha value increases, the current value decreases. The waveforms obtained at the rectifier side are the same for different types of faults.
Fig 10 Simulation Result for steady state operation of HVDC system on Inverter side
At the inverter side, when the fault is applied at time t=0.02sec, the voltage and current magnitudes are of the order of 0.03pu and 0.8pu respectively and the extinction angle is equal to 180 degrees, as shown in Fig. 10. The waveforms obtained at the inverter side are the same for different types of faults.
Fig 11 Simulation Result for Injected active and reactive powers of HVDC system
Fig 12 Simulation Result for line active and reactive powers of HVDC system
In Fig 12, a fault is created at time t=0.21sec; the active and reactive power are maintained at 800 kW and 400 kVAR respectively from time t=0sec to t=0.21sec. At time t=0.27sec both the active and reactive power attain stability and reach steady state. It is observed that no power fluctuations occur in P and Q after t=0.27sec. By trial and error, the integral gain is set to 5, so that the steady-state errors in P and Q are reduced.
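The role of the integral gain in removing the steady-state P/Q error can be illustrated with a minimal discrete-time PI tracking loop; the first-order plant, sample time and gains below are illustrative assumptions for demonstration, not the paper's HVDC model.

```python
# Minimal discrete-time PI tracking loop: the integral term drives the
# steady-state error to zero, mirroring the role of the integral gain above.
def simulate(kp, ki, ref=1.0, dt=1e-3, steps=5000):
    y, integ = 0.0, 0.0
    for _ in range(steps):
        e = ref - y                  # tracking error
        integ += e * dt              # accumulated (integral) error
        u = kp * e + ki * integ      # PI control law
        y += dt * (-y + u)           # assumed first-order plant: dy/dt = -y + u
    return ref - y                   # remaining steady-state error

p_only_error = abs(simulate(kp=2.0, ki=0.0))   # nonzero offset without integral action
pi_error     = abs(simulate(kp=2.0, ki=5.0))   # integral action removes the offset
```

With proportional action alone the loop settles with a constant offset; adding the integral term drives the remaining error to essentially zero, which is the effect of the integral-gain tuning described above.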
Fig.13 Simulation result of HVDC system when line-to-ground fault occurs on inverter side
In Fig 13, it is observed that a line-to-ground fault is created on the inverter side of the HVDC system at time t=0.025sec. The PWM controller activates and clears the fault. Before the fault, Vabc=0.14pu and Iabc=0.013pu. After the fault is cleared at t=0.08sec, the recovery is slow and there are oscillations in the DC voltage and current of magnitude 0.2pu and 0.05pu respectively.
Fig 14 Simulation result of HVDC system when line-to-ground fault occurs on inverter side with UPFC
From Fig 14, it is observed that different types of faults, i.e. three-phase, line-to-ground and double-line-to-ground, are created on the inverter side of the HVDC system at t=0.15 sec. When these faults occur, the system takes more time to reach steady-state operation. The PWM controller activates and clears the fault. Further, with the addition of the UPFC the system reduces oscillations and a pure sinusoidal waveform is obtained, with voltage Vabc=0.9 p.u and current Iabc=0.95 p.u at time t=0.15 sec.
Fig.15 Simulation result of HVDC system when double-line-to-ground fault occurs on inverter side
In Fig 15, it is observed that a double-line-to-ground fault is created on the inverter side of the HVDC system at time t=0.02sec. The PWM controller activates and clears the fault. Before the fault, Vabc=0.17pu and Iabc=0.15pu. After the fault is cleared at t=0.33sec, the recovery is slow and there are oscillations in the DC voltage and current of magnitude 0.33pu and 0.1pu respectively.
Fig 16 Simulation result of HVDC system when double-line-to-ground fault occurs on inverter side with UPFC
IV. CONCLUSION
The results show that the UPFC improves the system performance under both transient and normal conditions. Moreover, it can control the power flow in the transmission line effectively. With the addition of the UPFC, the magnitude of the fault current reduces and oscillations of the excitation voltage also reduce. The "current margin" is essential to prevent misfire of the thyristor valves. DC and AC filters not only eliminate the harmonic effects but also reduce the total harmonic distortion (THD). The current waveform in the case of a conventional controller has many crests and dents and
suffers from prolonged oscillations, whereas with the PWM controller the DC current quickly returns to its nominal value. The overshoot in the case of the PWM controller is slightly less than with conventional controllers. It is more economical for the HVDC transmission system to transfer more power, as the power factor is almost near unity and the energy loss is low. The UPFC, moreover, has shown its flexibility in easing line congestion and promoting more controllable flow in the lines. HVDC can be very useful for long transmission lines. It is recommended in networks or interconnected lines that have high variation of power demands and complicated network connections with different power frequencies. The UPFC in general is good for promoting line loadability and moving power through interconnected network buses more effectively. The UPFC can be very useful in a deregulated energy market as an alternative to additional power generation in the load area.
REFERENCES
[1]. E.M. Yap, Student Member, IEEE School of Electrical and Computer Engineering, RMIT University,
Melbourne, AU
[2]. Hideaki Fujita, Member, IEEE, Yasuhiro Watanabe, and Hirofumi Akagi, Fellow, IEEE, “Control and
Analysis of a Unified Power Flow Controller” IEEE TRANSACTIONS ON POWER ELECTRONICS,
VOL. 14, NO. 6, NOVEMBER 1999
[3]. Lee Wei Sheng, Ahmad Razani and Neelakantan Prabhakaran, Senior Member, IEEE “Control of High
Voltage Direct Current (HVDC) Bridges for Power Transmission Systems” Proceedings of 2010 IEEE
Student Conference on Research and Development (SCOReD 2010), 13 - 14 Dec 2010, Putrajaya, Malaysia
[4]. N.G. Hingorani, L. Gyugyi, "Understanding FACTS," New York: IEEE Press, 2000, pp. 2, 29, 135, 300
and 417.
[5]. Padiyar, HVDC Power Transmission System. New Delhi, India:Wiley Eastern, 1993
[6]. W.Kimbark, Direct Current Transmission. New York: Wiley,1971, vol. I.
[7]. Gyugyi, L., "A Unified Power Flow Control Concept for Flexible AC Transmission Systems," IEE
PROCEEDINGS-C, Vol. 139, No.4,july 1992.
[8]. M. H. Haque, Senior Member, IEEE “ Application of UPFC to Enhance Transient Stability Limit”.
[9]. J.W. Evan, “Interface between automation and Substation,” Electric Power substations engineering, J.D.
Macdonald, Ed. USA: CRC Press,2003, pp. 6-1 (Chapter 6).
[10]. J. Arillaga, "Flexible AC Transmission technology," Y. H. Songs, and A.T. Johns, Ed. UK: Stevenage,
Herts IEE., 1999, pp. 99.
[11]. S.H. Hosseini, A. Sajadi and M.Teimouri, “Three phase harmonic load flow in an unbalanced AC
system. Including HVDC link,” Power Electronics and Motion Control Conference, IPEMC 2004. The 4th
International, vol. 3, pp. 1726-1730, Aug. 2004.
[12]. E.M. Yap, M. Al-Dabbagh and P.C Thum, “Using UPFC Controller in Mitigating Line Congestion for
Cost-efficient Power Delivery, “submitted at the Tencon 2005, IEEE conference, May 2005.
[13]. E.M. Yap, M. Al-Dabbagh, “Applications of FACTS Controller for Improving Power Transmission
Capability,” submitted at the IPEC 2005, IEEE conference, May 2005.
[14]. R. S. Tenoso, L.K. Dolan, and S. A. Yari, “Flexible AC Transmission systems benefits study,” Public
Interest Energy Research (PIER), Resource energy, Trn: P600-00-037, California, Oct 1999.
[15]. X.-P. Zhang, "Multiterminal Voltage-Sourced Converter Based HVDC Models for Power Flow
Analysis", IEEE Transactions on Power Systems, vol. 18, no. 4, 2004, pp.1877-1884.
[16]. D J Hanson, C Horwill, B D Gemmell, D R Monkhouse, "A STATCOM-Based Relocatable SVC Project in the UK for National Grid", in Proc. 2002 IEEE PES Winter Power Meeting, New York City, 27-31 January 2002.
[17]. F. Schettler, H. Huang, and N. Christl, “HVDC Transmission Systems Using Voltage Sourced
Converters: Design and Application.” Conference Proceedings, IEEE Summer Meeting 2000, Paper No.
2000 SM-260, vol. 2, pp. 716-720.
[18]. Sheng Li, Jianhua Zhang, Guohua Zhang, Jingfu Shang, Mingxia Zhou, Yinhui Li "Design of Integrative Fuzzy Logic Damping Controller of VSC-HVDC" IEEE/PES Power Systems Conference and Exposition (PSCE '09), pp 1-6.
[19]. A. K. Moharana, Ms. K. Panigrahi, B. K. Panigrahi and P. K. Dash, Senior Member, IEEE “VSC
Based HVDC System for Passive Network with Fuzzy Controller” International Conference on Power
Electronics, Drives and Energy Systems,(PEDES '06), pp 1 – 4.
[20]. Guo-Jie Li, T. T. Lie, Yuan-Zhang Sun, Si-Ye Ruan, Ling Peng, Xiong Li "Applications of VSC-Based HVDC in Power System Stability Enhancement" 7th Int. Conf. Power Engineering (IPEC 2005), pp 1-6.
[21]. K.H. Chan, J.A. Parle, N. Johnson, E. Acha "Real-Time Implementation of a HVDC-VSC Model for Application in a Scaled-down Wind Energy Conversion System (WECS)" Seventh International Conference on AC-DC Power Transmission, 2001, pp 169-174.
[22]. H. C. Lin "Intelligent Neural Network based Dynamic Power System Harmonic Analysis" International Conference on Power System Technology (POWERCON 2004), Nov. 2004, pp 244-248.
[23]. Miguel Torres, Jose Espinoza, and Romeo Ortega "Modeling and Control of a High Voltage Direct Current Power Transmission System based on Active Voltage Source Converters" 30th IEEE Annual Conf. of the Industrial Electronics Society 2004, pp 816-821.
[24]. Ruihua, Song Chao, Zheng Ruomei, Li Xiaoxin, Zhou “VSCs based HVDC and its control strategy”
IEEE/PES Transmission and Distribution Conference & Exhibition,2005,pp 1-6.
[25]. Yongsheng Alan Wang, John T Boys, and Aiguo Patrick Hu, Senior Member, IEEE “Modelling and
Control of an Inverter for VSC-HVDC Transmission System with Passive Load” IEEE Int. Joint Conf. on
Power System Technology and Power India Conference (POWERCON 2008), pp 1-6.
[26]. Sheng Li, Jianhua Zhang, Jingfu Shang, Ziping Wu, Mingxia Zhou "A VSC-HVDC Fuzzy Controller for Improving the Stability of AC/DC Power System" Int. Conf. on Electrical Machines and Systems (ICEMS 2008), pp 1-6.
[27]. Ke Li, and Chengyong Zhao,“New Technologies of Modular Multilevel Converter for VSC-HVDC
Application” Asia-Pacific Power and Energy Engineering Conference (APPEEC 2010), pp 1-4.
[28]. Hua Weng, Zheng Xu, Member, IEEE, Zhendong Du "Inverter Location Analysis for Multi-infeed HVDC Systems" International Conference on Power System Technology, 2010, pp 1-6.
[29]. Guanjun Ding, Ming Ding and Guangfu Tang “An Innovative Hybrid PWM Technology for VSC in
Application of VSC-HVDC Transmission System” IEEE Electrical Power & Energy Conference 2008, pp
1-8.
[30]. Hua Li, Fuchang Lin, Junjia He, Yuxin Lu, Huisheng Ye, Zhigang Zhang "Analysis and Simulation of Monopolar Grounding Fault in Bipolar HVDC Transmission System" IEEE Power Engineering Society General Meeting, 2007, pp 1-5.
[31]. Jie Yang, Jianchao Zheng, Guangfu Tang and Zhiyuan He "Characteristics and Recovery Performance of VSC-HVDC DC Transmission Line Fault" Asia-Pacific Power and Energy Engineering Conference (APPEEC 2010), pp 1-4.
[32]. H. Jm, V.K. Sood, and W. Chen “Simulation Studies of A HVDC System with Two DC Links” IEEE
Region 10 Conf. on Computer, Communication, Control and Power (TENCON'93),pp 259-262
[33]. Jing Yong, Wu Xiaochen, Du Zhongming, Jin Xiaoming, Wang Yuhong, D. H. Zhang and J. Rittiger
“Digital Simulation of ACDC Hybrid Transmission System” 9th IET International Conference on
AC and DC Power Transmission,(ACDC 2010), pp 1-5.
[34]. Yong Chang, Hairong Chen, Gaihong Cheng and Jiani Xie "Design of HVDC Supplementary Controller Accommodating Time Delay of the WAMS Signal in Multi-Machine System" IEEE Power Engineering Society General Meeting, 2006.
[35]. Paulo Fischer de Toledo, Jiuping Pan, Kailash Srivastava, WeiGuo Wang, and Chao Hong "Case Study of a Multi-Infeed HVDC System" IEEE Joint International Conference on Power System Technology and Power India (POWERCON 2008), pp 1-7.
[36]. Chengyong Zhao, Chunyi Guo “Complete-Independent Control Strategy of Active and Reactive
Power for VSC Based HVDC System” IEEE Power & Energy Society General Meeting (PES '09), pp 1-6.
[37]. E. Chiodo, D. Lauria, G. Mazzanti, and S. Quaia “Technical Comparison among Different Solutions
for Overhead Power Transmission Lines” Int. Symposium on Power Electronics, Electrical Drives,
Automation and Motion (SPEEDAM 2010), pp 68-72.
[38]. Hui Ding,Yi Zhang,Aniruddha M. Gole, Dennis A. Woodford, Min Xiao Han, and Xiang Ning
Xiao“Analysis of Coupling Effects on Overhead VSC-HVDC Transmission Lines From AC Lines With
Shared Right of Way” IEEE Trans on Power Delivery, Vol. 25, No. 4, Oct 2010, pp 2976-2986.
[39]. Jinliang Kang, Haifeng Liang,Gengyin Li,Ming Zhou, Member, and Hua Yang “Research on Grid
Connection of Wind Farm Based on VSC-HVDC” International Conference on Power System Technology
2010, pp 1-6.
Authors
M Ramesh is working as Associate Professor and HOD, EEE Dept., Medak College of Engineering and Technology, Kondapak, Medak Dist., and is pursuing a Ph.D. at JNT University, Anantapur. He received his B.Tech. in Electrical & Electronics Engineering and M.Tech. in Advanced Power Systems from JNTU Kakinada. He has many research publications in various international and national journals and conferences. His current research interests are in the areas of HVDC and power systems.
A. Jaya laxmi, B.Tech. (EEE) from Osmania University College of Engineering, Hyderabad in
1991, M. Tech.(Power Systems) from REC Warangal, Andhra Pradesh in 1996 and completed
Ph.D.(Power Quality) from JNTU, Hyderabad in 2007. She has five years of Industrial
experience and 12 years of teaching experience. Presently she is working as Associate
Professor, JNTU College of Engineering, JNTUH, Kukatpally, Hyderabad. She has 10
International Journals to her credit. She has 50 international and 10 national papers published in various conferences held in India and abroad. Her research interests are Neural Networks, Power Systems & Power Quality. She was awarded the "Best Technical Paper Award" for Electrical Engineering by the Institution of Electrical Engineers in the year 2006.
EIGEN VALUES OF SOME CLASS OF STRUCTURAL
MATRICES THAT SHIFT ALONG THE GERSCHGORIN CIRCLE
ON THE REAL AXIS
T. D. Roopamala1 and S. K. Katti2
1Deptt. of Comp. Sc. and Engg., S.J.C.E Mysore University, Mysore City, Karnataka India
2Research Supervisor, S.J.C.E. Mysore University, Mysore City, Karnataka India
ABSTRACT
In this paper, we have presented a simple approach for determining the eigenvalues of some classes of structural matrices. It has been shown that if all the principal diagonal elements of a given structural matrix are increased by ±ε, it is equivalent to shifting the Gerschgorin circle drawn for the given matrix by an amount ±ε with respect to the origin. The main advantage of the proposed method is that there is no need to use a time-consuming iterative numerical technique for determining the eigenvalues. The proposed approach is expected to be applicable in various computer science areas such as pattern recognition, face recognition and identification of geometrical figures, and also in control system applications for determining the stability of a system.
KEYWORDS: Eigenvalues, Gerschgorin theorem, structural matrices, trace of the matrix.
I. INTRODUCTION
The concept of stability plays a very important role in the analysis of systems. A system can be modeled in state-space form [1]. In this state-space form, stability can be determined by computing the eigenvalues of the system matrix A. There exist several methods in the literature for the computation of eigenvalues [2, 3]. Moreover, in engineering applications, some structural matrices are used, and the computation of their eigenvalues is also important. In the mathematical literature there exists the Gerschgorin theorem [4-6], which gives the bounds within which all eigenvalues lie. Nowadays eigenvalues can be calculated easily using MATLAB, but the proposed method computes eigenvalues without involving iterative numerical techniques. In this paper a simple formula has been derived that helps in the computation of the eigenvalues and is faster than MATLAB for this class of structural matrices.
II. GERSCHGORIN THEOREM [4-6]
For a given matrix A of order (n×n), let P_k be the sum of the moduli of the elements along the kth row excluding the diagonal element a_kk. Then every eigenvalue of A lies inside or on the boundary of at least one of the circles

|λ − a_kk| ≤ P_k        (1)
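As an illustrative numerical check (not part of the original paper), the discs of eq. (1) can be computed with NumPy and compared against directly computed eigenvalues; the 4×4 test matrix below is the structural matrix used in the paper's worked example.

```python
import numpy as np

def gerschgorin_discs(A):
    """Return (centers, radii): disc k is |lambda - a_kk| <= P_k (eq. 1)."""
    A = np.asarray(A, dtype=float)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)  # row sums minus diagonal
    return centers, radii

# 4x4 structural matrix from the paper's worked example
A = np.array([[ 4., -1., -1., -1.],
              [-1.,  4., -1., -1.],
              [-1., -1.,  4., -1.],
              [-1., -1., -1.,  4.]])
centers, radii = gerschgorin_discs(A)       # every disc: center 4, radius 3
for lam in np.linalg.eigvals(A):            # each eigenvalue lies in some disc
    assert any(abs(lam - c) <= r + 1e-9 for c, r in zip(centers, radii))
```

Here every disc is centered at 4 with radius 3, so all eigenvalues lie in the real interval [1, 7].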
III. DETERMINATION OF EIGENVALUES OF THE STRUCTURAL MATRICES
For the given matrix [A] of the form
A = \begin{bmatrix} a & -b & \cdots & -b \\ -b & a & \cdots & -b \\ \vdots & \vdots & \ddots & \vdots \\ -b & -b & \cdots & a \end{bmatrix}        (2)

where a > 0, b > 0, a = (n − 1)b        (3)

The Gerschgorin circle of the above matrix is given below.

Fig (1):- Gerschgorin bound [0, 2(n − 1)b]

The eigenvalues of the above matrix are

λ = 0, λ = (a + b), (a + b), …, (a + b)   ((n − 1) times)        (4)
Step (1):- In the above matrix [A], if all the principal diagonal elements are changed by ε, we obtain the following matrix [B]:

B = \begin{bmatrix} (a+\varepsilon) & -b & \cdots & -b \\ -b & (a+\varepsilon) & \cdots & -b \\ \vdots & \vdots & \ddots & \vdots \\ -b & -b & \cdots & (a+\varepsilon) \end{bmatrix}        (5)

such that (a + ε) > b, a > 0, b > 0 and a = |(n − 1)b|        (6)

The Gerschgorin circle of the above matrix is

Fig (2):- Gerschgorin bound [ε, 2(n − 1)b + ε]
Applying above Gerschgorin theorem to the above matrix [B] , we get
|λ − a_ii| ≤ Σ_{j≠i} |a_ij| = r_i        (7)

In the matrix [B] we substitute

a_ii = (a + ε),  r_i = |(n − 1)b|        (8)

From eq. (7) and eq. (8) we get

|λ − (a + ε)| ≤ |(n − 1)b|        (9)
|λ − (a + ε)| ≤ a        (10)

By removing the modulus in the above equation, we get

±(λ − (a + ε)) ≤ a        (11)

So λ − (a + ε) ≤ a        (12)
or −(λ − (a + ε)) ≤ a        (13)

Now consider equation (13):

−λ + (a + ε) ≤ a        (14)
ε ≤ λ        (15)

Here λ < ε is rejected, since the Gerschgorin bounds are positive. So, from eq. (15), ε is one of the eigenvalues of the structural matrix. Thus, for the above matrix [B], one of the eigenvalues is at ε. Let us take this eigenvalue to be λ_n = ε. Now we calculate the remaining eigenvalues as given below in the following steps.
Applying the Gerschgorin theorem to the matrix [B], we get

|λ_1 − (a + ε)| ≤ |(n − 1)b|        (16)
|λ_2 − (a + ε)| ≤ |(n − 1)b|        (17)
⋮
|λ_{n−1} − (a + ε)| ≤ |(n − 1)b|        (18)

By subtracting eq. (17) from eq. (16) we get

|λ_1 − λ_2| ≤ 0        (19)

In eq. (19), |λ_1 − λ_2| < 0 is rejected, since the absolute value of any number is always non-negative. Thus we get |λ_1 − λ_2| = 0, i.e., λ_1 = λ_2        (20)

Similarly, by subtracting the remaining equations, we can show that
λ_1 = λ_2 = λ_3 = … = λ_{n−1} = k, where k is the repeated eigenvalue.
Step 2: Using the definitions of the trace of a matrix, i.e.,

Trace(A) = Σ_{i=1}^{n} a_ii        (21)
Trace(A) = Σ_{i=1}^{n} λ_i        (22)

we calculate

Trace(B) = n(a + ε)        (23)
n(a + ε) = λ_1 + λ_2 + λ_3 + … + λ_{n−1} + λ_n        (24)

Since λ_n = ε, we get        (25)

n(a + ε) = ε + k(n − 1)        (26)
(n(a + ε) − ε)/(n − 1) = k        (27)
Thus, the eigenvalues of the system matrix [B] are k_1, k_2, …, k_{n−1}, ε, where k_1 = k_2 = … = k_{n−1} = k.

Remark: In matrix [A], if we replace (a + ε) by −(a + ε) and −b by b, then the eigenvalues of the matrix [A] are k_1, k_2, …, k_{n−1}, −ε, where k_1 = k_2 = … = k_{n−1} = k and

(−n(a + ε) + ε)/(n − 1) = k        (28)
IV. EXAMPLES AND FIGURES
Consider the matrix [A] as

A = \begin{bmatrix} 4 & -1 & -1 & -1 \\ -1 & 4 & -1 & -1 \\ -1 & -1 & 4 & -1 \\ -1 & -1 & -1 & 4 \end{bmatrix}

The Gerschgorin circle of the above matrix is

Fig (3): - Gerschgorin bound is [1, 7]

The above structural matrix is of the same form as the matrix in eq. (5). In this matrix, n = 4, (a + ε) = 4, a = 3, ε = 1, so we can directly determine its eigenvalues. From the derivation above, it has one eigenvalue at ε, and the remaining eigenvalues can be calculated using eq. (27) as follows:

k = (4(3 + 1) − 1)/3 = 5

Thus the eigenvalues of the matrix B are 1, 5, 5, 5.
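As an illustrative cross-check (not part of the original paper), the closed-form eigenvalues ε and k can be compared with a direct numerical eigenvalue computation, assuming NumPy:

```python
import numpy as np

n, a, b, eps = 4, 3.0, 1.0, 1.0            # a = (n-1)b, as required by eq. (3)
# Matrix [B]: diagonal entries (a + eps), off-diagonal entries -b
B = (a + b + eps) * np.eye(n) - b * np.ones((n, n))
k = (n * (a + eps) - eps) / (n - 1)        # repeated eigenvalue from the trace formula
eigs = np.sort(np.linalg.eigvals(B).real)  # numerically: eps once, k repeated n-1 times
```

For n = 4 this reproduces the eigenvalues 1, 5, 5, 5 obtained above, with the repeated value agreeing with the closed-form k.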
V. CONCLUSIONS
In this paper, we have proposed a simple technique for calculating the eigenvalues of structural matrices. It has been observed that as the Gerschgorin circle moves a distance ε along the real axis, one of the eigenvalues of the structural matrix will be ε and the other eigenvalues are repeated. The proposed method needs no iterative methods; instead, a simple formula is derived using Gerschgorin's theorem to calculate the repeated eigenvalues. Computation of eigenvalues has many applications in computer engineering and control systems.
REFERENCES
[1] Nagrath, I.J. and Gopal, M., "Control System Engineering", Wiley Eastern Limited.
[2] Jain, M.K., Iyengar, R.K., and Jain, R.K., "Numerical Methods for Scientific Computations", Wiley Eastern Limited, 1983.
[3] Shastry, S.S., "Introduction to Numerical Analysis", Prentice Hall of India, 1989.
[4] Gerschgorin, S., "Über die Abgrenzung der Eigenwerte einer Matrix", Izv. Akad. Nauk. USSR Otd. Fiz.-Mat. Nauk 7, pp. 749-754, 1931.
[5] Pusadkar, V.S., and Katti, S.K., "A New Computational Technique to Identify Real Eigenvalues of the System Matrix via Gerschgorin Theorem", Journal of Institution of Engineers (India), Vol. 78, pp. 121-123, 1997.
[6] Hote, Y.V., Choudhury, D. Roy, and Gupta, J.R.P., "Gerschgorin Theorem and its Applications in Control System Problems", IEEE Conference on Industrial Technology, pp. 2438-2443, 2006.
AUTHORS BIOGRAPHIES
T. D. Roopamala was born in Mysore. She graduated from Mysore University with a B.Sc (Electronics, 1984), M.Sc (Mathematics, 1986), PGDCA (6th rank, 1991) and MS (Software Systems, BITS Pilani, 1998). She is presently working in the Department of Computer Science and Engg., S.J.C.E., Mysore, with a teaching experience of 23 years. Her areas of interest are computational techniques and computer engineering.
S K. Katti was born in 1941 in India. He has graduated in B.E.(Tele-com –(1964)) ,
B.E.(Elect- (1965)) and M.E. (in control systems-(1972)) from Pune University (India). He
has obtained his Ph.D degree in.’ Systems Science’, from Indian Institute of Science
Bangalore in 1984. He has a teaching experience of 42 years. He has worked as a Professor of
Electrical Engineering at Pune Engineering College during 1994-1999 and finally he has
retired from the college. Presently he has been working as the Professor of Computer science
and Engineering at S.J.C.E, Mysore since 2001. His areas of Research interest are:
Multivariable control system designs, Artificial intelligence, Digital signal processing, Cognitive Science, Fuzzy logic and Speech Recognition via HMM models. He has 7 International publications, 2 International
Conferences and 7 papers at National level. He has worked as a Reviewer for IEEE transaction on Automatic
Control and also he was reviewer for Automatica. He has worked as external examiner for few Ph.D thesis in
Computer Science. Presently, two Research Scholars are working under him in the area of Computer science for
Ph.D studies.
TYRE PRESSURE MONITORING AND COMMUNICATING
ANTENNA IN THE VEHICULAR SYSTEMS
K. Balaji1, B. T. P. Madhav1, P. Syam Sundar1, P. Rakesh Kumar2, N. Nikhita3, A. Prudhvi Raj3, M. Mahidhar3
1Department of ECE, K L University, Guntur DT, AP, India
2Department of ECE, LBRC (Autonomous), Mylavaram, AP, India
3Project Students, K L University, Guntur DT, AP, India
ABSTRACT
Modern vehicles come with advanced gadgets and luxurious inbuilt devices: satellite audio radio communication devices, tyre pressure monitoring systems, accident avoidance systems, weather reports, route maps, etc. A tyre pressure monitoring system gives the driver the indication and assurance that the tyres are operating as expected. The vehicle handling characteristics will be affected if the tyre pressure is low, which may cause accidents. The tyre pressure monitoring system, with the support of an antenna, sensor, control unit and indicators, helps the driver know the condition of the tyres instantly and avoid many related problems. The radio transmitters, with the help of the sensors, provide an alarm or other indication to the driver regarding the tyre pressure. This paper presents the design and simulation of a compact patch antenna for the communication purposes related to these functions. The complete simulation of the antenna is carried out in HFSS.
KEYWORDS: Tyre pressure Monitoring System (TPMS), Sensors, Accident avoidance systems.
I. INTRODUCTION
TPMS systems measure the actual tyre pressure using sensors which incorporate radio transmitters. The radio signals are picked up by a receiver unit which provides an alarm signal to the driver. Various types of information can be provided to the driver (alarm lamp, actual pressure, audible alarm, voice), and the sensors are either internally wheel-mounted or may be externally fitted on the tyre valve in place of the valve cap [1-3].
More advanced TPMS show the actual Tyre pressure on a display/receiver unit inside the vehicle.
Actual Tyre pressure is measured by miniature sensors in each wheel which each transmit an encoded
radio signal. The receiver/display is a digital back-lit display unit which recognizes your vehicle's pre-
coded radio signals and sounds an alarm at high or low pressure conditions. Some also indicate and
monitor tyre temperature. Most work with no external aerial fitted to the receiver; others require an aerial laid along the car underbody. Models are available for various types of vehicle (2-wheeled, 4/5/6-wheeled, or even 24-wheeled installations). For the motorcyclist, simple operation and weatherproofing are more important; for the car user, style may be important. Some TPMS wheel sensors transmit adverse pressure conditions immediately, while others that power off when parked only wake up after the vehicle has achieved a minimum speed (usually 15 mph). For the racing specialist, RS232 links are available to enable conditions to be sent via computer telemetry to the pit [4-7].
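The two sensor reporting policies described above can be sketched as a small decision function; this is an illustration only (not from the paper), and the pressure thresholds are assumed values.

```python
# Illustrative sketch (not from the paper) of the two TPMS sensor reporting
# policies described above: immediate transmission on adverse pressure, or a
# park/wake policy gated by a minimum speed threshold.
WAKE_SPEED_MPH = 15.0
LOW_PSI, HIGH_PSI = 26.0, 40.0   # assumed alarm thresholds, for illustration only

def should_transmit(pressure_psi, speed_mph, policy="wake_on_speed"):
    adverse = pressure_psi < LOW_PSI or pressure_psi > HIGH_PSI
    if policy == "immediate":
        return adverse                            # report as soon as pressure is bad
    # wake_on_speed: sensor sleeps while parked, reports only once moving
    return adverse and speed_mph >= WAKE_SPEED_MPH
```

With these assumptions, a 20 psi tyre triggers an immediate-policy sensor even at rest, while a wake-on-speed sensor stays silent until the vehicle exceeds 15 mph.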
The receiver/display typically requires either a 12 V or 24 V DC supply, usually switched with the ignition. Options include a combined display and receiver, or separate display and receiver modules with an interconnecting cord [8-9].
The TPM system consists of the following major components:
• Sensor/Transmitter Device
• RF Receiver Module with Antenna
• Low-Frequency (LF) Commander Device
• Control Unit
• Pressure Vessel (Tyre)
Figure (1) Tyre Pressure Monitoring system schematic diagram
Figure (2) TPMS fixed at car Tyre
The TPM system primarily monitors the internal temperature and pressure of an automobile’s Tyre.
An auto-location system can dynamically detect the position of a specific sensor, which is useful
when tyres are rotated [10]. The heart of the TPM system is the Sensor/Transmitter (S/TX) device, which is based on a Microchip device.
Figure (1) shows the complete schematic of the tyre pressure monitoring system circuitry. Figure (2) shows an overview of the tyre pressure monitoring system fixed to a car tyre. We concentrate on the design of the antenna for transmission of the sensor data and its processing. A typical compact low-profile antenna was designed and simulated using Ansoft HFSS software, and the antenna output parameters are presented in this paper. Moreover, from the simulation results, the applicability of the antenna was estimated. This antenna can be used to pass signals regarding the tyre pressure to the nearest automobile workshops, alerting people so that the problem can be solved in less time.
II. SIMULATION RESULTS AND DISCUSSION
Figure (3) Loop Antenna Model
The tyre pressure monitoring system communication antenna operates at 435 MHz. Figure (3) shows the loop antenna model. Figure (4) shows the return loss curve for the loop antenna; a return loss of -15.45 dB is obtained at the desired frequency.
Figure (4) Return Loss Vs Frequency
Figure (5) shows the input impedance Smith chart for the antenna. An rms value of 0.822 and an input impedance bandwidth of 0.92% are achieved for the current model.
Figure (5) Input Impedance Smith Chart
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
425 Vol. 1, Issue 5, pp. 422-428
Figure (6) shows the two-dimensional gain curve for the antenna; a maximum gain of 8 dB is attained for the current model.
Figure (6) 2D-Gain
Figure (7) shows the VSWR vs frequency curve, which gives a VSWR of 1.406 at the desired frequency. The result maintains the 2:1 VSWR ratio as per the standards. These results show the applicability of this antenna for the proposed operation.
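The quoted return loss and VSWR values are mutually consistent through the standard relations VSWR = (1 + |Γ|)/(1 − |Γ|) with |Γ| = 10^(−RL/20); a quick check (an illustration, not from the paper):

```python
import math

def vswr_from_return_loss(rl_db):
    """VSWR from return loss (positive dB): |Gamma| = 10**(-RL/20)."""
    gamma = 10 ** (-rl_db / 20.0)          # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

vswr = vswr_from_return_loss(15.4557)      # ~1.406, matching the simulated marker
```

A return loss of 15.46 dB corresponds to |Γ| ≈ 0.169 and hence a VSWR of about 1.406, in agreement with the simulation.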
Figure (7) VSWR Vs Frequency
Figures (8) and (9) show the radiation patterns of the antenna. The far-zone electric field lies in the E-plane and the far-zone magnetic field lies in the H-plane; the patterns in these planes are referred to as the E- and H-plane patterns respectively. Figure (8) shows the radiation pattern in the E-plane (y-z plane) in 3-dimensional view. Figure (9) shows the radiation pattern in the H-plane (x-z plane) in 3-dimensional view.
Figure (8) Radiation pattern in Phi direction
Figure (9) Radiation pattern in Theta direction
Figure (10) gives the polarization plot of the antenna in a three-dimensional view. The axial ratio is a
parameter that measures the purity of a circularly polarized wave; it becomes larger than unity as the
frequency deviates from f0. Figure (11) shows the axial ratio for the current model in a 3-dimensional
view.
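The axial ratio can be computed from two orthogonal far-field components and their phase difference. The sketch below uses the standard polarization-ellipse formulation with illustrative field values, not the simulated fields of this antenna:

```python
import math

def axial_ratio_db(ex, ey, delta_deg):
    """Axial ratio (dB) of the polarization ellipse formed by orthogonal
    far-field components ex, ey with phase difference delta_deg.
    AR = 0 dB for a perfectly circularly polarized wave."""
    d = math.radians(delta_deg)
    s = math.sqrt(ex**4 + ey**4 + 2 * ex**2 * ey**2 * math.cos(2 * d))
    major = math.sqrt(0.5 * (ex**2 + ey**2 + s))   # semi-major axis
    minor = math.sqrt(0.5 * (ex**2 + ey**2 - s))   # semi-minor axis
    if minor == 0:
        return float("inf")                        # linear polarization
    return 20 * math.log10(major / minor)

print(round(axial_ratio_db(1.0, 1.0, 90.0), 6))   # 0.0 dB: ideal circular polarization
```

Equal-amplitude components in phase quadrature give an axial ratio of unity (0 dB), while in-phase components degenerate to linear polarization (infinite axial ratio).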
Figure (10) Polarization ratio
Figure (11) Axial Ratio
Table (1) and Table (2) give the antenna parameters and maximum field data. From Table (1) it is
clear that the antenna has a peak gain of 6.5 dB and a radiation efficiency close to one. Table (2)
shows the antenna's maximum field data with respect to the x, y and z coordinates. The LHCP and
RHCP values are also presented for the proposed antenna. All values show good agreement with the
expected values, which supports the applicability of this antenna in a real-time system.
III. CONCLUSION
A tyre pressure monitoring system antenna was simulated at 435 MHz and the results are presented in
this work. If a stronger signal from the tyre pressure sensors is needed, a booster antenna can be
added to the receiver station. The antenna can also be mounted outside the car, which allows the
controller board to be placed anywhere in the vehicle. Unlike a regular wire antenna on the controller,
the TPMS antenna projects its signal horizontally, disregarding signals at other elevations, and as a
result the transmission is stronger and much cleaner. Two mounting arrangements are possible. In the
first, the antenna is mounted externally on top of the vehicle, where the signal is not blocked by metal
walls; in the second, it is fitted in the interior of the vehicle to improve the reception of the tyre
pressure USB unit, transmitting the signal with the help of the sensor and communication devices.
ACKNOWLEDGMENTS
The authors would like to express their thanks to the management of K L University and the
Department of ECE for their continuous encouragement during this work. Madhav also expresses his
thanks to his family members for their support during this work.
REFERENCES
[1] Jiaming Zhang, Quan Liu, Yi Zhong, "A Tire Pressure Monitoring System Based on Wireless Sensor
Networks Technology", Proceedings of MMIT '08, 2008 International Conference on MultiMedia and
Information Technology.
[2] Tianli Li, Hong Hu, Gang Xu, Kemin Zhu and Licun Fang, "Pressure and Temperature Microsensor Based
on Surface Acoustic Wave in TPMS", Acoustic Waves, pp. 466, September 2010.
[3] http://www.rospa.com/roadsafety/info/tyre_pressure_mon.pdf
[4] IRU Position on the Tyre Pressure Monitoring Systems (TPMS), unanimously adopted by the IRU
International Technical Commission on 9 March 2010.
[5] Brendan D. Pell, Edin Sulic, Wayne S. T. Rowe, Kamran Ghorbani and Sabu John, "Advancements in
Automotive Antennas", New Trends and Developments in Automotive System Engineering, Aug 2010.
[6] Joseph J. Carr, "Using the Small Loop Antenna", Joe Carr's Radio Tech-Notes, Universal Radio Research.
[7] Michael A. Jensen and Yahya Rahmat-Samii, "Electromagnetic characteristics of superquadric wire loop
antennas", IEEE Transactions on Antennas and Propagation, vol. 42, no. 2, Feb 1994.
[8] A. V. Kudrin, M. Yu. Lyakh, E. Yu. Petrov, T. M. Zaboronkova, "Whistler-wave radiation from an
arbitrarily oriented loop antenna located in a cylindrical density duct in a magnetoplasma", Journal of
Electromagnetic Radiation Systems, vol. 3, no. 2, 2001.
[9] Langley, R. J., Batchelor, J. C., "Hidden antennas for vehicles", Electronics & Communication Engineering
Journal, vol. 14, issue 16, 2004.
[10] Nicholas DeMinco, "Modeling Antennas on Automobiles in the VHF and UHF Frequency Bands", ACES,
2005.
Authors' Biographies:
B. T. P. Madhav was born in A.P., India, in 1981. He received the B.Sc, M.Sc, MBA and M.Tech
degrees from Nagarjuna University, A.P., India, in 2001, 2003, 2007 and 2009 respectively.
From 2003 to 2007 he worked as a lecturer, and since 2007 he has been working as an Assistant
Professor in Electronics Engineering. He has published more than 55 papers in international and
national journals. His research interests include antennas, liquid crystal applications and
wireless communications.
P. Syam Sundar received his B.Tech. from JNTU College of Engineering, Ananthapur, in
1999 and his M.Tech. from JNTU College of Engineering, Ananthapur, in 2006. He is currently
working as an Associate Professor in the Department of ECE at K L University, Vaddeswaram,
Guntur District. His fields of interest include digital communication, antennas and signal
processing. He has one international journal paper publication.
K. Balaji was born on 14-12-1963 in India. He received his B.Tech in 1988 from VRSEC and his M.S.
from BITS Pilani in 1994. He has 22 years of teaching experience and is currently
working as an Associate Professor in the Department of ECE, K L University. His research areas
include antennas and communication systems.
P. Rakesh Kumar was born in India in 1984. He received his B.Tech from CR Reddy Engineering
College and his M.Tech from KLC Engineering College. He is presently working as an Assistant
Professor in the Department of ECE at LBRC Engineering College, Mylavaram. He has two
years of teaching experience and has 8 international journal papers to his credit. His
fields of interest include antennas and signal processing.
N. Nikhita, A. Prudhviraj and M. Mahidhar are pursuing their B.Tech at K L University. Their fields of
interest include antennas and communication systems.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
429 Vol. 1, Issue 5, pp. 429-436
DEEP SUB-MICRON SRAM DESIGN FOR DRV ANALYSIS
AND LOW LEAKAGE
Sanjay Kr Singh¹, Sampath Kumar², Arti Noor³, D. S. Chauhan⁴ & B. K. Kaushik⁵
¹IPEC, Ghaziabad, India.
²J.S.S. Academy of Technical Education, Noida, India.
³Centre for Development of Advanced Computing, Noida, India.
⁴UTU, Dehradun, India.
⁵IIT Roorkee, India.
ABSTRACT
This paper deals with design opportunities for Static Random Access Memory (SRAM) with lower power
consumption and propagation delay. Initially, existing SRAM architectures are investigated, and thereafter a
suitable basic 6T SRAM structure is chosen. The key to low power dissipation in the SRAM data path is to
reduce the signal swings on highly capacitive nodes such as the bit and data lines. While designing the SRAM,
techniques such as circuit partitioning, divided word lines and low-power layout methodologies are reviewed to
minimize the power dissipation.
KEYWORDS: SRAM, SNM, DRV, SOC, CMOS, DIBL
I. INTRODUCTION
Ever since the early days of semiconductor electronics, there has been a desire to miniaturize the
components, improve their reliability and reduce the weight of the system. All of these goals can be
achieved by integrating more components on the same die to include increasingly complex electronic
functions on a limited area with minimum weight. Another important factor of successful proliferation
of integration is the reduced system cost and improved performance.
SRAM cell design considerations are important for the following reasons:
1. The design of an SRAM cell is key to ensuring stable and robust SRAM operation.
2. The continuous drive to enhance on-chip storage capacity motivates SRAM designers to increase the
packing density. Therefore, an SRAM cell must be as small as possible while meeting the stability,
speed, power and yield constraints.
3. Near-minimum-size cell transistors exhibit higher susceptibility to process variations.
4. The cell layout largely determines the SRAM critical area, which is the chip yield limiter.
5. In scaled technologies the cell stability is of paramount significance; the Static Noise Margin (SNM)
of a cell is a measure of its stability.
A significantly large segment of modern SoCs is occupied by SRAMs, and the SRAM content in the ASIC
domain is also increasing. Therefore, understanding SRAM design and operation is crucial for enhancing
various aspects of chip design and manufacturing. Memory leakage power [13] has been increasing
dramatically and is becoming one of the main challenges in future system-on-a-chip (SoC) design.
For mobile applications, low standby power [4] [16] is crucial. A mobile device often operates in the
standby mode; as a result, the standby leakage power has a large impact on the device battery life.
Memory leakage suppression [18] is important for both high-speed and low-power SoC designs. A
large variety of circuit design techniques is available to reduce the leakage power of SRAM cells and the
memory peripheral circuits.
In recent years, significant progress has been made in the design and development of low-power
electronic circuits. Power dissipation has become a topic of intense research in the development of
portable electronic devices and systems. In VLSI chips, with higher levels of integration, the packaging
density of transistors is increasing. As a result, at high levels of integration power dissipation
becomes the dominant factor. CMOS technology is known for its low power consumption at low frequency
together with high integration density. Two main components determine the power dissipation of a
CMOS gate: static power dissipation [9] due to leakage current, and dynamic power
dissipation [10] due to switching transient current and the charging/discharging of load capacitance.
To accurately determine the heat produced in a chip, one must determine the power dissipated by the
gates and by the off-chip drivers and receivers.
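These two components can be estimated with a first-order model. The parameter values below are illustrative placeholders, not figures from the paper:

```python
def cmos_power(c_load_f, vdd, freq_hz, activity, i_leak_a):
    """First-order CMOS gate power model: dynamic switching term plus
    static leakage term (illustrative, not tied to a specific process)."""
    p_dyn = activity * c_load_f * vdd**2 * freq_hz   # charging/discharging C_load
    p_stat = i_leak_a * vdd                          # leakage current at VDD
    return p_dyn, p_stat

# Hypothetical numbers: 10 fF load, 1.2 V, 1 GHz, 10% activity, 1 nA leakage
p_dyn, p_stat = cmos_power(10e-15, 1.2, 1e9, 0.1, 1e-9)
print(p_dyn, p_stat)   # roughly 1.44 uW dynamic, 1.2 nW static
```

The quadratic dependence of the dynamic term on VDD is what makes supply-voltage scaling the most effective lever on active power, while the static term motivates the standby-leakage techniques discussed below.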
The need for low-power design [11] [10] is becoming a major issue in high-performance digital
systems such as portable communication devices, microprocessors, DSPs and embedded systems.
Hence, low-power design of digital integrated circuits has emerged as a very active and developing field.
As integrated chip designers accelerate their adoption of today's deep sub-micron (DSM) semiconductor
technologies, squeezing the maximum transistor count and performance into, and the
minimum power and noise out of, their high-performance designs, increasing importance is placed on
the accuracy of cell characterization systems. The common traits of high-performance chips are
high integration density and high clock frequency, and the power dissipation of the chip increases with
the clock frequency. In most real-time applications, the requirement for low power
consumption must be met along with the high chip density.
In this paper, a circuit-level leakage technique is adapted for the core cell to minimize the leakage while
maintaining good data stability. In Section II, the SRAM cell design opportunities are explained and the
corresponding design trade-offs are listed. The existing leakage techniques are investigated and an
optimal VDD value is fixed with the help of the SNM and the DRV in Section III. Simulation results
comparing the stability and optimal VDD are given in Section IV, and the conclusion is given in
Section V.
II. SRAM DESIGN OPPORTUNITIES
Modern SRAMs strive to increase bit counts while maintaining low power consumption [6] and high
performance. These objectives require continuous scaling of CMOS transistors. The supply voltage
must scale down accordingly to control the power consumption and maintain the device reliability.
Scaling the supply voltage and the minimum transistor dimensions used in SRAM cells challenges
the process and design engineers to achieve reliable data storage in SRAM arrays. This task is
particularly difficult in large SRAM arrays, which can contain millions of bits. Random fluctuations in
the number and location of the doping atoms in the channel induce large threshold-voltage [5]
fluctuations in scaled-down transistors.
Figure 1. Schematic of SRAM cell
Other factors affecting the repeatability of the threshold voltage and introducing VTH mismatches, even
between neighboring transistors in SRAM cells, are line-edge roughness, variations of the
poly critical dimensions and short-channel effects. The SRAM stability margin, or Static Noise
Margin (SNM), is projected to reduce by 4X as scaling progresses from 250 nm CMOS technology
down to 50 nm technology [3]. Since the stability of SRAM cells reduces with technology
scaling, accurate estimation of SRAM data-storage stability in the pre-silicon design stage and
verification of SRAM stability in the post-silicon testing stage are increasingly important steps in
SRAM design.
III. EXISTING AND PROPOSED WORK
A large variety of circuit design techniques is used to reduce the leakage power of SRAM cells and the
memory peripheral circuits (decoding circuitry, I/O, etc.). The leakage of the peripheral circuits can be
effectively suppressed by turning off the leakage paths with a switched source impedance (SSI) during
idle periods. Our work focuses on the leakage control of the 6T-structure SRAM core cell of Fig. 1 during
standby mode. The existing SRAM cell leakage reduction techniques include novel SRAM cell
design, dynamic biasing [1], and VDD gating. Memory operation at such a low voltage effectively
reduces both the active and standby power.
reduce both the active and standby power. The dynamic-biasing techniques use dynamic control on transistor gate-source and substrate-source
bias to enhance the driving strength of active operations and create low leakage paths during standby
period. At the current technology nodes (130nm and 90nm), the above dynamic-biasing schemes
typically achieve 5-7X leakage power reduction. This power saving becomes less as the technology
scales, because the worsening short-channel effects cause the reverse body bias effect on leakage
suppression to diminish [12]. In order to design for a higher (>30X) and sustainable leakage power
reduction [7], an SRAM designer needs to integrate multiple low-power design techniques, rather than
using dynamic-biasing only.
The VDD-gating techniques either gate-off the supply voltage of idle memory sections, or put less
frequently used sections into a low-voltage standby mode. There are three types of leakage
mechanisms in an SRAM cell: sub-threshold leakage, gate leakage and junction leakage. A lower
VDD reduces all of these leakages effectively. The reduction ratio in leakage power is even higher
because both the supply voltage and leakage current are reduced. In recent years as the need of
leakage reduction in high-utilization memory structures increases, there have been many research
activities on low-voltage SRAM standby techniques.
Although the available techniques can be very effective in enhancing the efficiency of low-voltage
memory standby operation, an important parameter needed by all of these schemes is the value of the
SRAM standby VDD. A high standby VDD preserves the memory data but produces high
leakage current, while a very low standby VDD effectively reduces the leakage power but does not
guarantee reliable data retention [8]. An optimal standby VDD is needed to maximize the leakage
power saving while satisfying the data preservation requirement. This is the main focus of our work.
To determine the optimal standby VDD of an SRAM, it is important to understand the voltage
requirement for SRAM data retention. Based on an in-depth study of SRAM low-voltage data-
retention behavior, this work defines the boundary condition of the SRAM data retention voltage (DRV),
and then derives both the theoretical and practical limits of the DRV as functions of design and
technology parameters. The DRV analysis and results provide insights to SRAM designers and facilitate
the development of low-power memory standby schemes. In addition to the analytical DRV
study, a design technique was developed that aggressively reduces the SRAM standby leakage.
In a typical 6T SRAM design, the bit line voltages are connected to VDD during standby mode. The
cell can be represented as a flip-flop comprised of two cross-coupled inverters, accessed through the
access transistors M5 and M6. When VDD is reduced to the DRV during standby operation, all six
transistors in the SRAM cell are in the sub-threshold region. Thus, the capability of SRAM data retention
strongly depends on the sub-threshold current conduction behavior.
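A minimal sub-threshold current sketch shows why retention depends so strongly on this regime: the drain current changes by roughly a decade for every ~77 mV of gate drive. The expression below is a generic simplified model with illustrative fitting parameters (i0, n), not values fitted to the paper's 130 nm process, and it omits DIBL:

```python
import math

def i_subthreshold(vgs, vt, n=1.3, i0=1e-7, temp_k=300.0):
    """Simplified sub-threshold drain current, I = i0 * exp((Vgs - Vt)/(n*VT)).
    i0 and n are illustrative parameters, not from the paper; DIBL is ignored."""
    v_thermal = 1.380649e-23 * temp_k / 1.602176634e-19   # kT/q ~ 25.85 mV at 300 K
    return i0 * math.exp((vgs - vt) / (n * v_thermal))

# Lowering Vgs by n*VT*ln(10) ~ 77 mV reduces the current by about a decade
ratio = i_subthreshold(0.0, 0.3) / i_subthreshold(-0.077, 0.3)
print(round(ratio, 1))   # 9.9, i.e. close to one decade per 77 mV
```

This exponential sensitivity is precisely why the balance of the six sub-threshold currents, rather than strong-inversion drive strength, sets the retention limit at the DRV.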
As the minimum VDD required for data preservation, the DRV of an SRAM cell is a measure of its state-
retention capability at very low voltage. In order to reliably preserve data in an SRAM cell, the
cross-coupled inverters must have a loop gain greater than one. The stability of an SRAM cell is also
indicated by the static noise margin (SNM) [14] [17]. As shown in Fig. 2, the SNM can be graphically
represented as the largest square that fits between the voltage transfer characteristic (VTC) curves of the
internal inverters.
Figure 2. VTC of SRAM Cell Inverters
The noise margin can be defined using the input-to-output voltage transfer characteristic (VTC).
In general, the Noise Margin (NM) is the maximum spurious signal that can be accepted by the device,
when used in a system, while still maintaining correct operation. If the consequences of the noise
applied to a circuit node are not latched, such noise will not affect the correct operation of the system
and can thus be deemed tolerable. It is assumed that the noise is present long enough for the circuit to
react, i.e. the noise is "static" or DC; a Static Noise Margin is implied if the noise is a DC source. In
the case when a long noise pulse is applied, the situation is quasi-static and the noise margin
asymptotically approaches the SNM.
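The graphical largest-square extraction can be sketched numerically using the classic 45-degree rotation of the butterfly plot. The logistic VTC below is an idealized stand-in for a simulated inverter characteristic, used only to illustrate the method:

```python
import numpy as np

def butterfly_snm(vtc, vdd=1.2, n=2001):
    """Estimate SNM as the side of the largest square between the VTC of
    one inverter and the mirrored VTC of the other, via the 45-degree
    rotation of the butterfly plot. `vtc` maps Vin -> Vout."""
    v = np.linspace(0.0, vdd, n)
    x1, y1 = v, vtc(v)          # curve 1: (Vin, Vout)
    x2, y2 = vtc(v), v          # curve 2: mirrored about the 45-degree line
    s2 = np.sqrt(2.0)
    # Rotate both curves by 45 degrees: u = (x - y)/sqrt2, w = (x + y)/sqrt2
    u1, w1 = (x1 - y1) / s2, (x1 + y1) / s2
    u2, w2 = (x2 - y2) / s2, (x2 + y2) / s2
    # Resample on a common u-grid (np.interp needs increasing abscissae)
    u = np.linspace(max(u1.min(), u2.min()), min(u1.max(), u2.max()), n)
    g1 = np.interp(u, np.sort(u1), w1[np.argsort(u1)])
    g2 = np.interp(u, np.sort(u2), w2[np.argsort(u2)])
    d = g1 - g2
    # The max gap in each lobe is the diagonal of that lobe's largest
    # square; the SNM is the smaller lobe's square side
    return min(d.max(), (-d).max()) / s2

# Idealized logistic VTC, for illustration only (not a fitted transistor model)
snm = butterfly_snm(lambda v: 1.2 / (1.0 + np.exp(12.0 * (v - 0.6))))
print(round(float(snm), 3))   # roughly 0.3 V for this idealized curve
```

The same routine applied to simulated VTC pairs would reproduce the graphical extraction of Fig. 3.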
When VDD scales down to the DRV [19], the VTCs of the cross-coupled inverters degrade to such a level
that the loop gain reduces to one and the SNM of the SRAM cell falls to zero. If VDD is reduced below the
DRV, the inverter loop switches to the other bias state determined by the deteriorated inverter VTC
curves, and loses the capability to hold the stored data.
Since the DRV is a function of the SRAM circuit parameters, design optimization can be used to reduce the
DRV. At a fixed SNM, a lower DRV reduces the minimum standby VDD and the leakage power; when the
VDD is fixed, a lower DRV improves the SNM and enhances the reliability of SRAM data retention.
Traditionally, a standard SRAM cell is designed with a performance-driven design methodology,
which does not optimize the data-retention reliability. For example, using a large NMOS pull-down device
and a small PMOS pull-up device reduces the data access delay but causes a degraded SNM at low
voltage. In order to gain a larger SNM and lower the DRV, the P/N strength ratio needs to be
improved during standby operation.
A global variation in Vt or L has a much weaker impact on the DRV, because a global variation
affects both inverters in the same direction and does not cause significant SNM degradation. The
leakage current increases substantially at high VDD. This is caused by the DIBL (Drain-Induced
Barrier Lowering) effect in short-channel transistors. In the DRV analysis of a typical SRAM cell, the
DIBL effect can be ignored because all the SRAM transistors operate in weak inversion; but
when VDD is significantly higher than the DRV, the DIBL effect causes a rapid increase in leakage
current. This phenomenon reflects the importance of low-voltage standby leakage control in CMOS
technologies, where the short-channel effect increases.
A memory structuring method is adopted to minimize the power consumption. Memory squaring is one
such structural method, but with it, the larger the number of words in a row, the larger the
power consumption; for this reason, as long as area is not an issue, memory squaring is not an
optimal solution. A divided word line structure is a better alternative, in which the number of cells on the
WL (word line) equals the number of bits per word; however, because the length of the WL varies, this
structure cannot be expanded into large memories. The structural method used here is a partitioned
structure, which is a superior solution [19] to the hierarchical word line structure. The partitions can be
seen as independent parts that may be placed where required, without the bounds imposed by the
hierarchical word line structure.
hierarchical word line structure. The partitioning is implemented on 64 Kb SRAM architecture, which is an asynchronous design. The
entire SRAM can be divided into four blocks. Each block is of 32x32 columns, where each word is 16
bits. A sense amplifier is placed with each column, and the column circuitry is placed below the sense
amplifier. The typical specification of the RAM is an access time of 10 ns; therefore the sense
amplifier is placed before the column circuitry.
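For the organization described above (64 Kb in four blocks, each holding 32 rows of 32 sixteen-bit words, 4096 words total), a word address splits naturally into block, row and column fields. The exact bit-field assignment below is an assumption for illustration, not the paper's decoder:

```python
def decode_word_address(addr):
    """Split a 12-bit word address for a 64 Kb SRAM organized as
    4 blocks x 32 rows x 32 columns of 16-bit words (4096 words).
    The bit-field assignment is illustrative, not from the paper."""
    assert 0 <= addr < 4096
    block = (addr >> 10) & 0x3    # 2 bits select one of 4 blocks
    row = (addr >> 5) & 0x1F      # 5 bits drive the selected word line
    col = addr & 0x1F             # 5 bits select a column group
    return block, row, col

print(decode_word_address(0xABC))   # (2, 21, 28)
```

Decoding the block bits first means only one quarter of the array is activated per access, which is exactly how a partitioned structure limits the switched capacitance and hence the dynamic power.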
IV. RESULTS AND DISCUSSIONS
Core Cell SNM
The Static Noise Margin (SNM) serves as a figure of merit in the stability evaluation of SRAM cells.
Fig. 3 shows the simulated SNM result for the designed SRAM, while Figs. 4 and 5 present the read and
write margin [15] simulation results, respectively. After the layout and schematic designs, the DRC
and LVS procedures were verified for the designs.
Figure 3. Static noise margin
Fig. 3 plots the voltage transfer characteristic (VTC) of Inverter 2 of Fig. 1 and the inverse VTC of
Inverter 1. The resulting two-lobed curve is called a "butterfly curve" and is used to determine the
SNM. The internal node of the bit cell that stores a zero gets pulled upward through the access
transistor due to the voltage-dividing effect across the access transistor and the drive transistor. This
increase in voltage severely degrades the SNM during the read operation (read SNM).
Figure 4: Read margin
Figure 5: Write margin
TABLE I: CR vs. SNM (130 nm technology)
CR     SNM (mV)
0.8    38
1.0    44
1.2    48
1.4    54
1.6    58
The SRAM cell ratio (CR), i.e. the ratio of the driver transistor's W/L to the access transistor's W/L,
was introduced to simplify consideration of SNM optimization. Table I shows the variation of
SNM with CR. From the graph of Fig. 7 (cell ratio vs. static noise margin), the value of the static noise
margin increases with the cell ratio of the SRAM cell in the 130 nm technology. As the cell ratio is
increased, the average value of the SNM increases because the driver transistor now has higher drive
strength and is less susceptible to noise. At the same time, the variation in SNM reduces with
increasing cell ratio. This is expected because a wider driver transistor contains a larger number of
dopants, so a small variation in the number or location of these dopants has a smaller effect on the
overall device characteristics.
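The trend in Table I can be checked directly from its data:

```python
# Table I data: cell ratio vs. SNM at 130 nm (values taken from the paper)
cr = [0.8, 1.0, 1.2, 1.4, 1.6]
snm = [38, 44, 48, 54, 58]            # SNM in mV

# SNM grows monotonically with CR, at roughly 25 mV per unit of cell ratio
assert all(b > a for a, b in zip(snm, snm[1:]))
slope = (snm[-1] - snm[0]) / (cr[-1] - cr[0])
print(round(slope, 1))   # 25.0 mV per unit CR
```

The monotone, nearly linear trend over this CR range is what Fig. 7 plots graphically.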
Figure 7. SNM vs. CR (130 nm)
V. CONCLUSION
This paper proposes a method to investigate the optimal VDD with the help of the SNM and the size of the
cell (CR). It also addresses the critical issues in designing a low-power static RAM in a deep sub-micron
(DSM) 130 nm technology. The bit cell operates properly with a static noise margin of 0.466 V, a
read margin of 0.3985 V and a write margin of 0.5028 V. Future work can extend this to minimizing
leakage at the architecture level and to reconfigurable cells.
REFERENCES
[1] K. Zhang, U. Bhattacharya, Z. Chen, F. Hamzaoglu, D. Murray, N. Vallepali, Y. Wang, B. Zheng, and M. Bohr, "A 3-GHz 70-Mb SRAM in 65-nm CMOS technology with integrated column-based dynamic power supply," IEEE Journal of Solid-State Circuits (JSSC), vol. 41, pp. 146-151, January 2006.
[2] A. Bhavnagarwala, X. Tang, and J. Meindl, "The impact of intrinsic device fluctuations on CMOS SRAM cell stability," IEEE Journal of Solid-State Circuits (JSSC), vol. 36, pp. 658-665, April 2001.
[3] F. Lai and C. Lee, "On-chip voltage down converter to improve SRAM read-write margin and static power for sub-nano CMOS technology," IEEE Journal of Solid-State Circuits (JSSC), vol. 42, issue 9, pp. 2061-2070, Aug 2007.
[4] M. Horiguchi, T. Sakata, and K. Itoh, "Switched-source-impedance CMOS circuit for low standby subthreshold current giga-scale LSI's," IEEE Journal of Solid-State Circuits, vol. 28, issue 11, pp. 1131-1135, Nov. 1993.
[5] B. H. Calhoun and A. Chandrakasan, "A 256kb sub-threshold SRAM in 65nm CMOS," IEEE International Solid-State Circuits Conference, pp. 628, Feb 2005.
[6] A. Carlson, Z. Guo, S. Balasubramanian, R. Zlatanovici, T.-J. King Liu, and B. Nikolic, "FinFET-based SRAM design," in Proc. ISLPED '05, Piscataway, NJ: IEEE, 2005, pp. 2-7.
[7] H. Mizuno and T. Nagano, "Driving source-line (DSL) cell architecture for sub-1-V high-speed low-power applications," Digest of Technical Papers, Symposium on VLSI Circuits, pp. 25-26, June 1995.
[8] H. Kawaguchi, Y. Iataka, and T. Sakurai, "Dynamic leakage cut-off scheme for low-voltage SRAM's," Digest of Technical Papers, Symposium on VLSI Circuits, pp. 140-141, June 1998.
[9] F. Li, D. Chen, L. He, and J. Cong, "Architecture evaluation for power-efficient FPGAs," in Proceedings of the ACM International Symposium on Field Programmable Gate Arrays, pp. 175-184, Feb 2003.
[10] L. Shang, A. S. Kaviani, and K. Bathala, "Dynamic power consumption in Virtex-II FPGA family," in FPGA '02: Proceedings of the 2002 ACM/SIGDA Tenth International Symposium on Field-Programmable Gate Arrays, pp. 157-164.
[11] Avant! Star-Hspice Manual, Volume III: MOSFET Models, 1999-2000; A. Keshavarzi, S. Ma, S. Narendra, B. Bloechel, K. Mistry, T. Ghani, S. Borkar, and V. De, "Effectiveness of reverse body bias for leakage control in scaled dual Vt CMOS ICs," Proceedings of the International Symposium on Low Power Electronics and Design (ISLPED), Huntington Beach, CA, August 2001, pp. 207-212.
[12] K. Flautner et al., "Drowsy caches: simple techniques for reducing leakage power," International Symposium on Computer Architecture, pp. 148-157, May 2002.
[13] J. Lohstroh, E. Seevinck, and J. D. Groot, "Worst-case static noise margin criteria for logic circuits and their mathematical equivalence," IEEE Journal of Solid-State Circuits, vol. SC-18, no. 6, pp. 803-807, Dec 1983.
[14] K. Takeda, H. Ikeda, Y. Hagihara, M. Nomura and H. Kobatake, "Redefinition of write margin for next-generation SRAM and write-margin monitoring circuit," International Solid-State Circuits Conference, vol. 42, no. 1, pp. 161-169, 2007.
[15] A. Kumar, et al., "Fundamental bounds on power reduction during SRAM standby data-retention," IEEE International Symposium on Circuits and Systems, 2007 (in press).
[16] E. Seevinck, F. J. List, and J. Lohstroh, "Static-noise margin analysis of MOS SRAM cells," IEEE Journal of Solid-State Circuits, vol. 22, pp. 748-754, 1987.
[17] H. Qin, R. Vattikonda, T. Trinh, Y. Cao and J. Rabaey, "SRAM cell optimization for ultra-low power standby," Journal of Low Power Electronics, 2(3), pp. 401-411, Dec. 2006.
[18] B. S. Amrutur, Design and Analysis of Fast Low Power SRAMs, Ph.D. dissertation, Stanford University, 1999.
Authors
Sanjay Kr Singh is a Ph.D. scholar at U.K. Technical University, Dehradun (Uttarakhand),
India. He is an Associate Professor in the Department of Electronics and Communication
Engineering at Indraprastha Engineering College, Ghaziabad (Uttar Pradesh), India. He
received his M.Tech. in Electronics & Communication and his B.E. in Electronics and
Telecommunication Engineering in 2005 and 1999, respectively. His main
research interest is deep sub-micron memory design for low power.
Sampath Kumar V. is a Ph.D. scholar at UPTU, Lucknow (Uttar Pradesh), India. He is an
Associate Professor in the Department of Electronics and Communication Engineering at J.S.S.
Academy of Technical Education, Noida, India. He received his M.Tech. in VLSI
Design and his B.E. in Electronics and Communication Engineering in 2007 and
1998, respectively. His main research interest is reconfigurable memory design for low
power.
Arti Noor completed her Ph.D. at the Department of Electronics Engineering, IT BHU, Varanasi, in
1990. She started her career as Scientist-B in the IC Design Group, CEERI, Pilani, from
1990-95 and subsequently served there as Scientist-C from 1995-2000. In 2001 she joined the
Speech Technology Group, CEERI Centre, Delhi, and served there as Scientist-EI up to April
2005. In May 2005 she joined CDAC Noida, where she is presently working as Scientist-E and HOD of the
M.Tech (VLSI) Division. She has supervised more than 50 postgraduate theses in the area of VLSI
design, has examined more than 50 M.Tech theses, and is supervising three Ph.D.
students in the area of microelectronics. Her main research interests are VLSI design of
semi- or full-custom chips for implementation of specific architectures, low-power VLSI design, and digital design.
D. S. Chauhan received his B.Sc. Engg. (1972) in electrical engineering at I.T. B.H.U., his M.E.
(1978) at R.E.C. Tiruchirapalli (Madras University) and his Ph.D. (1986) at IIT Delhi. He
did his postdoctoral work at Goddard Space Flight Centre, Greenbelt, Maryland, USA
(1988-91). He was Director of KNIT Sultanpur in 1999-2000 and founder Vice-Chancellor
of U.P. Technical University (2000-2006). Later, he served as Vice-Chancellor of
Lovely Professional University (2006-07) and Jaypee University of Information Technology
(2007-2009). He is currently serving as Vice-Chancellor of Uttarakhand Technical
University for the 2009-12 tenure.
B. K. Kaushik received his B.E. degree in Electronics and Communication Engineering
from C. R. State College of Engineering, Murthal, Haryana, in 1994, and his M.Tech. in
Engineering Systems from Dayalbagh, Agra, in 1997. He obtained his Ph.D. under the AICTE-QIP scheme
from IIT Roorkee, India. He has published more than 70 papers in national and international
journals and conferences. His research interests are in electronic simulation and low-power
VLSI design. He is serving as an Assistant Professor in the Department of Electronics and
Computer Engineering, Indian Institute of Technology, Roorkee, India.
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
437 Vol. 1, Issue 5, pp. 437-440
SAG/SWELL MITIGATION USING A MULTI-CONVERTER
UNIFIED POWER QUALITY CONDITIONER
SaiRam. I¹, Amarnadh. J², K. K. Vasishta Kumar³
¹Assoc. Prof., ²Prof. & HOD, ³Asst. Prof., Department of Electrical and Electronics
Engineering, Dhanekula Institute of Engineering & Technology, Vijayawada.
ABSTRACT
This paper presents a new unified power-quality conditioning system (MC-UPQC), capable of
simultaneous compensation for voltage and current in multibus/multifeeder systems. In this configuration, one
shunt voltage-source converter (shunt VSC) and two or more series VSCs exist. The system can be applied to
adjacent feeders to compensate for supply-voltage and load current imperfections on the main feeder and full
compensation of supply voltage imperfections on the other feeders. In the proposed configuration, all converters
are connected back to back on the dc side and share a common dc-link capacitor. Therefore, power can be
transferred from one feeder to adjacent feeders to compensate for sag/swell and interruption. The performance
of the proposed configuration has been verified through simulation studies using MATLAB/Simulink on a
two-bus/two-feeder system and results are presented.
KEYWORDS: Power quality (PQ), unified power-quality conditioner (UPQC), voltage-source converter
(VSC).
I. INTRODUCTION

Power quality is the quality of the electrical power supplied to electrical equipment. Poor power quality can result in mal-operation of the equipment. The electrical utility may define power quality as reliability and state that the system is 99.5% reliable.
The MC-UPQC is a new connection for a unified power quality conditioner (UPQC), capable of simultaneous compensation for voltage and current in multibus/multifeeder systems. An MC-UPQC consists of one shunt voltage-source converter (shunt VSC) and two or more series VSCs; all converters are connected back to back on the dc side and share a common dc-link capacitor. Therefore, power can be transferred from one feeder to adjacent feeders to compensate for sag/swell and interruption. The aims of the MC-UPQC are:
A. To regulate the load voltage (ul1) against sag/swell, interruption, and disturbances in the system, to protect the nonlinear/sensitive load L1.
B. To regulate the load voltage (ul2) against sag/swell, interruption, and disturbances in the system, to protect the sensitive/critical load L2.
C. To compensate for the reactive and harmonic components of the nonlinear load current (il1).
As shown in Figure 1, two feeders connected to two different substations supply the loads L1 and L2. The MC-UPQC is connected to the two buses BUS1 and BUS2, with voltages ut1 and ut2, respectively. The shunt part of the MC-UPQC is also connected to load L1, which draws current il1. Supply voltages are denoted by us1 and us2, and load voltages by ul1 and ul2. Finally, feeder currents are denoted by is1 and is2, and load currents by il1 and il2.

Bus voltages ut1 and ut2 are distorted and may be subjected to sag/swell. The load L1 is a nonlinear/sensitive load which needs a pure sinusoidal voltage for proper operation, while its current is non-sinusoidal and contains harmonics. The load L2 is a sensitive/critical load which needs a purely sinusoidal voltage and must be fully protected against distortion, sag/swell and interruption. These types of loads primarily include production industries and critical service providers, such as medical centers, airports, or broadcasting centers, where a voltage interruption can result in severe economic losses or harm to human life.
Fig.1: Typical MC-UPQC used in a distribution system.
A Unified Power Quality Conditioner (UPQC) can perform the functions of both a D-STATCOM and a DVR. The UPQC consists of two voltage-source converters (VSCs) that are connected to a common dc bus. One of the VSCs is connected in series with a distribution feeder, while the other is connected in shunt with the same feeder. The dc links of both VSCs are supplied through a common dc capacitor.
It is also possible to connect two VSCs to two different feeders in a distribution system; this configuration is called an Interline Unified Power Quality Conditioner (IUPQC). This paper presents a new unified power quality conditioning system called the Multi-Converter Unified Power Quality Conditioner (MC-UPQC).
II. MC-UPQC TO CONTROL POWER QUALITY
The back-to-back connection of series- and shunt-connected VSCs forms the basic principle of operation of the UPQC. If the UPQC device is connected between two feeders fed from different substations, it is called an Interline Unified Power Quality Conditioner (IUPQC). If the device is connected between multiple buses/feeders fed from different substations, it is called a Multi-Converter Unified Power Quality Conditioning system (MC-UPQC). The MC-UPQC can improve power quality by injecting voltage into any feeder from the dc-link capacitor.

The whole operation is controlled by controlling the three voltage-source converters (VSCs) connected between the two feeders in the electrical distribution system.
III. DISTORTION AND SAG/SWELL ON THE BUS VOLTAGE IN FEEDER-1 AND FEEDER-2
Let us consider that the power system in Fig. 1 consists of two three-phase, three-wire, 380 V (RMS, L-L), 50 Hz utilities. The BUS1 voltage (ut1) contains a seventh-order harmonic with a value of 22%, and the BUS2 voltage (ut2) contains a fifth-order harmonic with a value of 35%. The BUS1 voltage contains a 25% sag between 0.1 s < t < 0.2 s and a 20% swell between 0.2 s < t < 0.3 s. The BUS2 voltage contains a 35% sag between 0.15 s < t < 0.25 s and a 30% swell between 0.25 s < t < 0.3 s.

The nonlinear/sensitive load L1 is a three-phase rectifier load which supplies an RC load of 10 Ω and 30 µF. The Simulink model of the distribution system with the MC-UPQC is shown in Figure 2.
Figure 2: Simulink model of distribution system with MC-UPQC
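For reference, the distorted BUS1 voltage described in this section can be reproduced numerically outside Simulink. The sketch below is a hypothetical helper (not part of the authors' model) that builds the phase-a waveform with the quoted 22% seventh harmonic, 25% sag (0.1 s-0.2 s) and 20% swell (0.2 s-0.3 s):

```python
import math

F = 50.0                                 # system frequency (Hz)
V_LL = 380.0                             # line-to-line RMS voltage (V)
V_PEAK = V_LL * math.sqrt(2.0 / 3.0)     # peak of the phase voltage

def bus1_voltage(t):
    """BUS1 phase-a voltage: fundamental plus 22% seventh harmonic,
    with a 25% sag in 0.1 s < t < 0.2 s and a 20% swell in 0.2 s < t < 0.3 s."""
    scale = 1.0
    if 0.1 < t < 0.2:
        scale = 0.75      # 25% sag
    elif 0.2 < t < 0.3:
        scale = 1.20      # 20% swell
    w = 2.0 * math.pi * F
    return scale * V_PEAK * (math.sin(w * t) + 0.22 * math.sin(7.0 * w * t))
```

Sampling this function over 0 s-0.3 s yields a test waveform comparable in shape to the uncompensated BUS1 trace discussed in the simulation results.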
IV. SIMULATION RESULTS
The critical load L2 contains a balanced RL load of 10 Ω and 100 mH. The MC-UPQC is switched on at t = 0.02 s. The BUS1 voltage, the corresponding compensation voltage injected by VSC1, and finally the load L1 voltage are shown in Figure 3.
Figure 3: BUS1 voltage, series compensating voltage, and load voltage in Feeder-1.
Similarly, the BUS2 voltage, the corresponding compensation voltage injected by VSC3, and
finally, the load L2 voltage are shown in figure 4.
Figure 4: BUS2 voltage, series compensating voltage, and load voltage in Feeder-2.
As shown in these figures, the distorted voltages of BUS1 and BUS2 are satisfactorily compensated across the loads L1 and L2, with very good dynamic response.

The nonlinear load current, its corresponding compensation current injected by VSC2, the compensated Feeder-1 current, and finally the dc-link capacitor voltage are shown in Fig. 5. The distorted nonlinear load current is compensated very well, and the total harmonic distortion (THD) of the feeder current is reduced from 28.5% to less than 5%. Also, the dc voltage regulation loop functions properly under all disturbances, such as sag/swell in both feeders.
Fig 5: Nonlinear load current, compensating current, Feeder1 current, and capacitor voltage.
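The THD figures quoted above follow the standard definition: the RMS of all harmonic components divided by the RMS of the fundamental. A minimal sketch (a hypothetical helper, not taken from the paper's simulation):

```python
import math

def thd(component_rms):
    """Total harmonic distortion, given RMS magnitudes ordered as
    [fundamental, 2nd harmonic, 3rd harmonic, ...].
    Returns THD as a fraction of the fundamental."""
    fundamental, *harmonics = component_rms
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Example: a current with a single harmonic at 28.5% of the fundamental
print(round(thd([1.0, 0.285]), 3))  # → 0.285
```

Applying this to the spectra of the feeder current before and after compensation reproduces the reported drop from 28.5% toward a few percent.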
V. CONCLUSIONS

The present topology illustrates the operation and control of the Multi-Converter Unified Power Quality Conditioner (MC-UPQC). The system is extended by adding a series VSC in an adjacent feeder. The device is connected between two or more feeders coming from different substations. A nonlinear/sensitive load L1 is supplied by Feeder-1, while a sensitive/critical load L2 is supplied through Feeder-2. The performance of the MC-UPQC has been evaluated under voltage sag/swell in either feeder. In the case of voltage sag, the phase angle of the bus voltage to which the shunt VSC (VSC2) is connected plays an important role, as it gives a measure of the real power required by the load. The MC-UPQC can mitigate long-duration voltage sags in Feeder-1 and Feeder-2. The performance of the MC-UPQC has been evaluated under sag/swell conditions, and it is shown that the proposed MC-UPQC offers the following advantages:

1. Power transfer between two adjacent feeders for sag/swell and interruption compensation;
2. Compensation for interruptions without the need for a battery storage system and, consequently, without storage capacity limitation;
3. Sharing of power compensation capabilities between two adjacent feeders which are not otherwise connected.
REFERENCES

[1]. H. R. Mohammadi, A. Y. Varjani, and H. Mokhtari, "Multiconverter unified power-quality conditioning system: MC-UPQC," IEEE Transactions on Power Delivery, vol. 24, no. 3, July 2009.
[2]. R. Rezaeipour and A. Kazemi, "Review of novel control strategies for UPQC," International Journal of Electric and Power Engineering, vol. 2, no. 4, pp. 241-247, 2008.
[3]. S. Ravi Kumar and S. Siva Nagaraju, "Simulation of D-STATCOM and DVR in power systems," ARPN Journal of Engineering and Applied Sciences, vol. 2, no. 3, June 2007, ISSN 1819-6608.
[4]. M. V. Kasuni Perera, "Control of a dynamic voltage restorer to compensate single phase voltage sags," Master of Science thesis, Stockholm, Sweden, 2007.
[5]. M. Basu, S. P. Das, and G. K. Dubey, "Comparative evaluation of two models of UPQC for suitable interface to enhance power quality," Electric Power Systems Research, pp. 821-830, 2007.
[6]. A. K. Jindal, A. Ghosh, and A. Joshi, "Interline unified power quality conditioner," IEEE Transactions on Power Delivery, vol. 22, no. 1, pp. 364-372, Jan. 2007.
[7]. K. Çalatay Bayındır, "Modeling of custom power devices," Ph.D. thesis, Adana, 2006.
[8]. O. Anaya-Lara and E. Acha, "Modeling and analysis of custom power systems by PSCAD/EMTDC," IEEE Transactions on Power Delivery, vol. 17, no. 1, January 2002.
[9]. G. Ledwich and A. Ghosh, "A flexible DSTATCOM operating in voltage and current control mode," Proc. Inst. Elect. Eng., Gen., Transm. Distrib., vol. 149, no. 2, pp. 215-224, 2002.
[10]. M. K. Mishra, A. Ghosh, and A. Joshi, "Operation of a DSTATCOM in voltage control mode," IEEE Transactions on Power Delivery, vol. 18, no. 1, pp. 258-264, Jan. 2003.
[11]. Cai Rong, "Analysis of STATCOM for voltage dip mitigation," Master of Science thesis, December 2004.
[12]. P. Boonchiam and N. Mithulananthan, "Understanding of dynamic voltage restorers through MATLAB simulation," Thammasat Int. J. Sc. Tech., vol. 11, no. 3, July-September 2006.
Authors Biography:

I. Sai Ram is currently working as Associate Professor in the EEE Department, Dhanekula Institute of Engineering & Technology, Vijayawada. His research areas include power systems, electrical machines and control systems.

J. Amarnadh is currently working as Professor in the EEE Department, University College of Engineering, JNTU, Hyderabad. His research areas include high voltage and gas-insulated substations.

K. K. Vasishta Kumar is currently working as Assistant Professor in the EEE Department, Dhanekula Institute of Engineering & Technology, Vijayawada. His research areas include power systems, power quality and electrical machines.
A NOVEL CLUSTERING APPROACH FOR EXTENDING THE
LIFETIME FOR WIRELESS SENSOR NETWORKS
Puneet Azad1,3, Brahmjit Singh2, Vidushi Sharma3

1Department of Electronics & Communication Engineering, Maharaja Surajmal Institute of Technology, GGSIP University, Delhi, India
2Department of Electronics and Communication Engineering, NIT, Kurukshetra, India
3School of Information & Communication Technology, Gautam Buddha University, Gr. Noida, India
ABSTRACT
A new energy efficient clustering algorithm based on the highest residual energy is proposed to improve the
lifetime of wireless sensor network (WSN). In each cycle, a fixed number of cluster heads are selected based on
maximum residual energy of the nodes. Each cluster head is associated with a group of nodes based on the
minimum distance among them. With such scheduling, all the nodes dissipate energy uniformly and subsequently remain alive for a long time. The simulation results show that our proposed clustering approach is more effective in prolonging the network lifetime compared with existing protocols such as Low-Energy Adaptive Clustering Hierarchy (LEACH) and Distributed Hierarchical Agglomerative Clustering (DHAC).
KEYWORDS: Wireless Sensor Networks, Homogeneous, Clustering.
I. INTRODUCTION
Recent advances in micro-electromechanical systems and low power digital electronics have led to the
development of micro-sensors having sensing, processing and communication capabilities equipped
with a power unit. These sensors are randomly deployed down in a remote location for sensing the
ambient conditions such as temperature, humidity, lightening conditions, pressure, noise levels etc.
[1,2]. They are also used for a wide variety of applications such as multimedia surveillance [3], storage of potential relevant activities such as thefts, car accidents, traffic violations and health and
home applications. The wireless sensor network consists of a large number of sensor nodes with
limited power capacity and a base-station which is responsible for collecting data from the nodes.
One of the major issues in wireless sensor networks is to minimize energy loss while collecting data from the environment and transmitting it to the base-station. In this context, various methodologies and protocols have been proposed and found to be efficient [4]. However, further improvement is required in order to enhance the lifetime of the wireless sensor network. We have made an attempt to design an efficient clustering protocol for extending the lifetime of the network. Clustering of nodes is found to be an effective way to increase network lifetime. Clustering is the classification of objects into groups of relatively similar objects [5]. A variety of clustering methods have been effectively used in many science and technology fields. In a WSN, the sensor nodes are classified into clusters based on their attributes (e.g. location, signal strength, connectivity, etc.) [6]. In this article, a different methodology for selecting cluster heads is discussed.
II. BACKGROUND
Several protocols have been developed to improve the lifetime of the network using clustering techniques. The main goal is to use the energy of the nodes efficiently and to perform data aggregation, decreasing both the number of messages transmitted to the base-station and the transmission distance of the sensor nodes. In this context, Low-Energy Adaptive Clustering Hierarchy (LEACH) [7,8] is one of the most popular distributed cluster-based routing protocols in wireless sensor networks. LEACH randomly selects a few nodes as cluster heads and rotates this role to balance the energy dissipation of
the sensor nodes in the network. The cluster head nodes fuse and aggregate data arriving from the nodes in each cluster and send the aggregated data to the base-station, in order to reduce the amount of data and the transmission of duplicated data. Data collection is centralized at the base-station and performed periodically. When clusters are being created, each node decides whether or not to become a cluster head depending upon a probability. In LEACH, the optimal number of cluster heads is estimated to be about 5% of the total number of nodes. All the nodes find their nearest cluster head and send their data in their time slot in each round.
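The probabilistic head election in LEACH uses the standard stochastic threshold: in round r, an eligible node becomes a head if a uniform random draw falls below T(n) = P / (1 − P · (r mod 1/P)). A minimal sketch (variable names are illustrative):

```python
import random

P = 0.05  # desired fraction of cluster heads (~5% of nodes, as in LEACH)

def is_cluster_head(r, served_recently):
    """LEACH election for one node in round r. Nodes that have served as
    cluster head within the last 1/P rounds are ineligible; other nodes
    compare a uniform random draw against the round-dependent threshold."""
    if served_recently:
        return False
    epoch = round(1.0 / P)                 # rounds per rotation epoch (20)
    threshold = P / (1.0 - P * (r % epoch))
    return random.random() < threshold
```

By round r = 19 the threshold reaches 1, so every still-eligible node is elected; this guarantees each node serves as head exactly once per 20-round epoch.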
Another method, reported as the Adaptive Decentralized Re-clustering Protocol (ADRP) [9,10], is a clustering protocol for wireless sensor networks in which the cluster heads and next heads are elected based on the residual energy of each node and the average energy of each cluster. The selection of cluster heads and next heads is weighted by the remaining energy of the sensor nodes and the average energy of each cluster. The sensor nodes with the highest energy in the clusters can become cluster heads at different cycles of time, so the role of cluster head can be switched dynamically. Attea et al. [11] alleviate the undesirable behavior of the evolutionary algorithm when dealing with the cluster routing problem in WSNs by formulating a new fitness function that incorporates two clustering aspects, viz. cohesion and separation error. Their simulation results in a heterogeneous environment show that the evolutionary-based clustered routing protocol (ERP) increases the network lifetime and preserves more energy than earlier protocols.
The Energy Efficient Heterogeneous Clustered scheme (EEHC) [12] adopts heterogeneity of the nodes in terms of their initial energy, i.e. a percentage of nodes are equipped with more energy than the others. In order to improve the lifetime and performance of the network, this scheme uses a weighted probability for the election of cluster heads, calculated as a function of the increased energy. Performance is evaluated against LEACH using the ns-2 simulator, and it shows that the lifetime of the network is extended by 10% as compared with LEACH in the presence of the same setting of powerful nodes in the network. DHAC [13] is a hierarchical agglomerative clustering algorithm which adopts a bottom-up clustering approach by grouping similar nodes together before the cluster head is selected. This algorithm avoids re-clustering and achieves uniform energy dissipation through the whole network. The clusters are formed on the basis of quantitative (location of nodes, received signal strength) as well as qualitative (connectivity) data. After the formation of clusters using well-known hierarchical methods such as SLINK, CLINK, UPGMA, and WPGMA, the node with the minimum ID in each group is selected as cluster head. The simulation results show improved network lifetime as compared to the LEACH protocol.

An energy-efficient protocol [14] is designed to improve the clustering scheme, in which the cluster
head selection is based on a method of energy dissipation forecast and clustering management (EDFCM). EDFCM considers the residual energy and energy consumption rate of all nodes. Simulation results in MATLAB show that EDFCM balances the energy consumption better than conventional routing protocols and noticeably prolongs the lifetime of the network. An energy-efficient multi-hop clustering algorithm [15] is designed to reduce energy consumption and prolong the system lifetime using an analytical clustering model with one-hop distance and clustering angle. The cluster head continues to act as the local control center and is not replaced by another node until its continuous working time reaches the optimum value. With this mechanism, the frequency of updating the cluster head and the energy consumption for establishing a new cluster head can be reduced. The simulation results in MATLAB demonstrate that the clustering algorithm can effectively reduce the energy consumption and increase the system lifetime. DEEC [16] is an energy-efficient clustering protocol in which the cluster heads are elected by a probability based on the ratio between the residual energy of each node and the average energy of the network. The nodes with high initial and residual energy have more chances of becoming cluster heads than the nodes with low energy. The simulation results show that DEEC achieves a longer lifetime and more effective messages than current important clustering protocols in heterogeneous environments.
Another scheme considers strategic deployment [17] for selecting the cluster head. The clusters are formed as multiple-sized fixed grids, taking into account the arbitrary-shaped area sensed by the sensor nodes. The simulation results show that the proposed scheme alleviates the high energy consumption and short lifetime of wireless sensor networks under existing schemes. Soro et al. [18] presented a unique approach to the cluster head election problem,
concentrating on applications where the maintenance of full network coverage is the main requirement. This approach to cluster-based network organization is based on a set of coverage-aware cost metrics that favor nodes deployed in densely populated network areas as better candidates for cluster head nodes, active sensor nodes, and routers.
III. METHOD AND RESULTS
The present results are analyzed for a homogeneous network, where all the nodes are equipped with the same initial energy before they begin to transmit their data in the clustered network. The nodes keep sensing the environment and transmit the information to their respective cluster heads. We describe our system model of a homogeneous sensor network in a 100 m x 100 m sensor field with 100 nodes placed [19] as shown in Figure 1. The whole network is divided into a fixed number of clusters (ten clusters are considered in this study). Each cluster contains a cluster head, which is responsible for data collection from all the nodes within the cluster and for finally sending it to the base-station. The cluster heads are selected on the basis of the highest residual energy of the nodes. After each round of data transmission, the ten nodes with maximum residual energy are selected as the new cluster heads in the entire network. Clusters are re-formed for each cluster head based on the relative distances between the nodes. In this way, every node is associated with one of the maximum-residual-energy nodes (cluster heads) and sends data in its respective TDMA schedule.
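The per-round procedure above — pick the ten highest-energy alive nodes as heads, then attach every other node to its nearest head — can be sketched as follows. The node dictionaries and field names are illustrative assumptions, not the authors' implementation:

```python
import math

NUM_CLUSTERS = 10  # fixed number of clusters per round, as in this study

def select_cluster_heads(nodes):
    """Return the NUM_CLUSTERS alive nodes with the highest residual energy."""
    alive = [n for n in nodes if n["energy"] > 0.0]
    return sorted(alive, key=lambda n: n["energy"], reverse=True)[:NUM_CLUSTERS]

def form_clusters(nodes, heads):
    """Attach every alive non-head node to its nearest cluster head."""
    clusters = {id(h): [] for h in heads}
    for n in nodes:
        if n["energy"] <= 0.0 or any(n is h for h in heads):
            continue
        nearest = min(heads,
                      key=lambda h: math.dist((n["x"], n["y"]), (h["x"], h["y"])))
        clusters[id(nearest)].append(n)
    return clusters
```

Each round, the member nodes would then transmit in their TDMA slots, the heads forward the aggregates to the base-station, and node energies are decremented according to the radio model below.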
Figure 1. Node placement in Homogeneous Model
It is to be noted that distance plays an important role in the overall energy dissipation. As per the radio energy dissipation model [20] (shown in Figure 2), in order to achieve an acceptable signal-to-noise ratio (SNR) in transmitting a k-bit message over a distance d, the energy expended by the radio is given by

E_TX(k, d) = k*Eelec + k*εfs*d^2,  if d ≤ do
E_TX(k, d) = k*Eelec + k*εmp*d^4,  if d > do        (1)
where Eelec is the energy dissipated per bit to run the transmitter or the receiver circuit, εfs and εmp
depend on the transmitter amplifier, and d the distance between the sender and the receiver. By
equating the two expressions at d = do, one can get
do = sqrt(εfs / εmp)        (2)
Figure 2. Radio Energy Model
To receive a k-bit message, the radio expends E_RX = k * Eelec. Ultimately, the total energy consumption per round is calculated, and the lifetime of the network is plotted in terms of the number of alive nodes per round.
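With the parameter values of Table 1, Eqs. (1) and (2) can be evaluated directly; a minimal sketch of the first-order radio model:

```python
import math

E_ELEC = 50e-9        # J/bit, transmit/receive electronics (Table 1)
EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier energy
EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier energy
D0 = math.sqrt(EPS_FS / EPS_MP)   # crossover distance from Eq. (2), ~87.7 m

def e_tx(k, d):
    """Energy to transmit a k-bit message over distance d metres, Eq. (1)."""
    if d <= D0:
        return k * E_ELEC + k * EPS_FS * d ** 2
    return k * E_ELEC + k * EPS_MP * d ** 4

def e_rx(k):
    """Energy to receive a k-bit message."""
    return k * E_ELEC
```

For the 2000-bit messages used here and the worst-case 70 m hop to the base-station, d < do, so the free-space branch of Eq. (1) applies.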
We have considered a first-order radio model similar to LEACH, and the simulation parameters for our model are listed in Table 1. The base-station is at the center, so the maximum distance of any node from the base-station is approximately 70 m. The size of the message that nodes send to their cluster heads, as well as the size of the (aggregated) message that a cluster head sends to the base-station, is set to 2000 bits. The performance of the proposed protocol is measured in terms of network lifetime, which represents the number of alive nodes versus time. The extension of the lifetime under our protocol is compared with LEACH and DHAC in Figure 3. It is clear that the present method of selecting cluster heads works more efficiently than the reported protocols (DHAC and LEACH) for similar input parameters.
Figure 3. Number of alive nodes vs. time (rounds) for CAEL (proposed protocol), DHAC and LEACH.
Table 1. Transmission parameter values

Description                                                Symbol   Value
Number of nodes in the system                              N        100
Energy consumed by the amplifier to transmit
at a short distance                                        εfs      10 pJ/bit/m²
Energy consumed by the amplifier to transmit
at a longer distance                                       εmp      0.0013 pJ/bit/m⁴
Energy consumed in the electronics circuit to
transmit or receive the signal                             Eelec    50 nJ/bit
Data aggregation energy                                    EDA      5 nJ/bit/report
IV. CONCLUSIONS
We have proposed an energy-efficient clustering scheme for wireless sensor networks. A fixed number of nodes with the highest residual energy in the whole network are selected as cluster heads, and the role of cluster head is switched dynamically between nodes on the basis of residual energy. Simulations in MATLAB show that our protocol extends the lifetime of the network as compared with LEACH and DHAC for the same input parameters of the nodes in the network. The performance of the proposed system is better in terms of lifetime, which is 28% higher than DHAC and 70% higher than LEACH. Further study is required to improve the WSN by including multiple criteria for cluster head selection, such as the distance between nodes and cluster head and between base-station and cluster head. Also, the optimal number of cluster heads needs to be derived using optimization techniques.
REFERENCES
[1] I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci (2002) “Wireless sensor networks: A survey”,
Computer Networks, vol. 38, pp 393–422
[2] Jane Y. Yu, Peter H. J. Chong, (2005) “A survey of clustering schemes for mobile ad hoc networks”, IEEE
Communications Surveys & Tutorials, Vol. 7, No.1, pp 32-48,
[3] Ian F. Akyildiz, Tommaso Melodia, Kaushik R. Chowdhury, (2007) “A survey on wireless multimedia
sensor networks”, Computer Networks, Vol. 51, pp 921–960
[4] A. Abbasi, M. Younis, (2007) “A survey on clustering algorithms for wireless sensor networks”, Computer
Communications, vol. 30, pp 2826–2841
[5] H. Charles Romesburg, (1990) “Cluster Analysis for Researchers”, Lifetime Learning Publications,
Belmont, California
[6] Y. Wang, T.L.X. Yang, D. Zhang, (2009) “An energy efficient and balance hierarchical unequal clustering
algorithm for large scale sensor network”, Information Technololgy Journal, vol. 8, no.1, pp. 28–38
[7] W.B. Heinzelman, A. Chandrakasan, H. Balakrishnan, (2000) “Energy-efficient communication protocol for
wireless microsensor networks”, Proceedings of 33rd Hawaii International Conference on System Sciences
(HICSS), Wailea Maui, Hawaii, USA, vol.2
[8] W.B. Heinzelman, Anantha P. Chandrakasan, Hari Balakrishnan, (2002) “An application-specific protocol
architecture for wireless microsensor networks”, IEEE Transactions on Wireless Communications, Vol. 1, No.
4, pp 660-670
[9] F. Bajaber, I. Awan, (2008) “Dynamic/static clustering protocol for wireless sensor network” Proceedings of
the 2nd European Symposium on Computer Modeling and Simulation, pp. 524–529.
[10] F. Bajaber, I. Awan, (2011) “Adaptive decentralized re-clustering protocol for wireless sensor networks”,
Journal of Computer and System Sciences, vol. 77, pp. 282-292
[11] Bara’a A. Attea, E. A. Khalil, (2011) “A new evolutionary based routing protocol for clustered
heterogeneous wireless sensor networks”, Applied Soft Computing, doi:10.1016/j.asoc.2011.04.007
[12] D. Kumar, T. C. Aseri, R. B. Patel, (2009) “EEHC: Energy efficient heterogeneous clustered scheme for
wireless sensor networks”, Computer Communications, vol. 32, pp. 662-667
[13] C.H. Lung, C. Zhou, (2010) “Using hierarchical agglomerative clustering in wireless sensor networks: An
energy-efficient and flexible approach”, Ad Hoc Networks, Elsevier, vol. 8, pp. 328–344
[14] H. Zhou, Y. Wu, Y. Hu, G. Xie, (2010) “A novel stable selection and reliable transmission protocol for
clustered heterogeneous wireless sensor networks”, Computer Communications, vol. 33, pp.1843-1849
[15] X. Min, Shi Wei-ren, J.Chang-jiang, Z. Ying, (2010) “Energy efficient clustering algorithm for maximizing
lifetime of wireless sensor networks”, International Journal of Electronics and Communications, vol. 64, pp.
289–298
[16] Li Qing, Q. Zhu, M. Wang, (2006) “Design of a distributed energy-efficient clustering algorithm for
heterogeneous wireless sensor networks”, Computer Communications, Elsevier, vol. 29, pp. 2230-2237
[17] T. Kaur, J. Baek, (2009) “A strategic deployment and cluster-header selection for wireless sensor
networks”, IEEE Transactions on Consumer Electronics, vol. 55
[18] S. Soro, W. B. Heinzelman, (2009) “Cluster head election techniques for coverage preservation in wireless
sensor networks” Ad Hoc Networks, vol. 7, pp. 955–972
[19] S. S. Dhillon, K. Chakrabarty, (2003) “Sensor placement for effective coverage and surveillance in
distributed sensor networks” Conference on Wireless Communications and Networking, vol.3, pp.1609-1614
[20] T. Rappaport, (1996) “Wireless communications: Principles and practice”, IEEE Press, Piscataway, NJ,
USA
BIOGRAPHIES

Puneet Azad received his B.E. degree from Bhilai Institute of Technology, Durg (M.P.) in 1999 and his M.E. degree from Delhi Technological University (formerly Delhi College of Engineering), Delhi, in 2000. He started his career as a Software Engineer with TCG Software Service, Calcutta, and is presently working as Assistant Professor (Reader) in the Department of Electronics & Communication Engineering at Maharaja Surajmal Institute of Technology, New Delhi. His research interests are lifetime maximization and data fusion in wireless sensor networks, optimization, and simulation.
Brahmjit Singh received B.E. degree in Electronics Engineering from Malaviya National
Institute of Technology, Jaipur in 1988, M.E. degree in Electronics and Communication
Engineering from Indian Institute of Technology, Roorkee in 1995 and Ph.D. degree from
Guru Gobind Singh Indraprastha University, Delhi in 2005 (India). He started his career as
a lecturer at Bundelkhand Institute of Engineering and Technology, Jhansi (India).
Currently, he is Professor & Chairman of the Department of Electronics & Communication
Engineering at National Institute of Technology, Kurukshetra (India). He
teaches post-graduate and graduate level courses on Wireless communication and CDMA
systems. His research interests include mobility management in cellular / wireless
networks, Planning, Designing and optimization of Cellular Networks & Wireless network
security. He has a large number of publications in International / National journals and
Conferences. He also received the Best Research Paper Award from ‘The Institution of
Engineers (India)’ in 2006.
Vidushi Sharma received her Ph.D. in Computer Science and is presently working as Assistant Professor at Gautam Buddha University. She teaches postgraduate and graduate level courses, has a large number of international and national publications, and has also written a book on Information Technology. Her research interests include IT applications in management and the performance evaluation of information systems, including wireless systems, application software, and e-commerce systems.
SOLAR HEATING IN FOOD PROCESSING
N. V. Vader1 and M. M. Dixit2

1Department of Electrical Power System, V.P.M.'s Polytechnic, Thane, India.
2Department of Electrical Power System, B. L. Patil Polytechnic, Khopoli, India.
ABSTRACT
In the conventional method of food processing, hot air (thermal energy) is used to dry food products such as grapes, fish and bananas, using fuels like kerosene, firewood, diesel or electricity. High moisture content is one of the reasons for food spoilage during storage and preservation. Though popular, the conventional heating methods have some problems. A solar air heating system makes maximum use of the air-heating potential of sunlight. A special solar heat absorber is used for food processing applications, absorbing the heat and using it for hot air generation. Solar collectors like the parabolic dish and the Scheffler system can be used. The trials carried out with parabolic systems show not only fuel saving but also great value addition because of better product quality in terms of color, aroma and taste.
KEYWORDS: Food processing, solar heating system, solar collector
I. INTRODUCTION
In the conventional method, hot air (thermal energy) is used to dry food products such as grapes,
fish and banana, using fuels like kerosene, firewood, diesel or electricity. The present energy scenario
indicates that these sources are costly and depleting day by day. They also pollute the environment and
are responsible for hazards like global warming. Renewable energy bridges the gap between mounting
energy demand and the diminishing supply of conventional energy sources. The need for a cleaner
environment and the increasing demand for healthier and more hygienic food products encourage the
use of renewable energy in agro-industrial production processes.
Solar energy, the mother of renewable energy sources, is an inexhaustible, clean and cheap source of
energy. Lying between 8° and 36° north, India has 2500 to 3200 hours of sunshine per year, providing
5.4 to 5.8 kWh of energy per m² per day (at about 1 kJ/s/m²). Utilizing even a small portion of this immense
resource would save our fossil fuels and forests without sacrificing our energy consumption. Solar hot air
generation systems are reliable, durable and cost-effective for agricultural and industrial processes; they
are efficient, easily adaptable from existing fuel-driven systems, environmentally friendly and hygienic. [1]
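As a quick illustration of the resource figures above, the daily heat yield of a collector can be sketched; the collector area and thermal efficiency below are illustrative assumptions, not values from this paper.

```python
# Rough daily heat yield of a solar air heater, using the insolation
# range quoted above for India (5.4-5.8 kWh per m^2 per day).
# Collector area and efficiency are assumed, illustrative values.

def daily_heat_output_kwh(area_m2, insolation_kwh_m2_day=5.5, efficiency=0.5):
    """Thermal energy (kWh) delivered per day by a solar air heater."""
    return area_m2 * insolation_kwh_m2_day * efficiency

# e.g. a hypothetical 2 m^2 flat-plate air heater at 50% efficiency:
print(daily_heat_output_kwh(2.0))  # 5.5 kWh/day
```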
II. BACKGROUND
Food preservation or processing is done by drying or heating. High moisture content is one of
the reasons for food spoilage during storage and preservation. The conventional approach uses
direct or indirect heating. These methods, though popular, have several problems:
• High fuel cost and the requirement of bulk quantities of fuel
• Depletion of conventional fuels
• Environmental impact through emission of CO2
• Cost of electricity and load shedding
Considering these difficulties, new methods need to be adopted. As far as food processing and
preservation are concerned, solar energy, known as green energy, is the best available option. A solar air
heating system makes maximum use of the air-heating potential of sunlight. A special solar heat absorber is
used for absorbing the heat and using it for hot air generation. Collectors such as the parabolic dish or
the Scheffler system can be used. [2]
2.1 Conventional Drying Process
Traditionally, people in India have used solar energy for centuries, mainly for agricultural
purposes such as drying grains and spices, drying fish, and preserving food products. The drying
process removes moisture and helps preserve the product. Open drying (direct solar heating) of food
products under the sun, by spreading them on open ground or on a base plate, is common practice at
various places [3]. This method is cheap but has several disadvantages:
• Possible contamination of the food product by dirt, insects, rodents and birds, which makes it
unhygienic.
• Exposure of the product to the elements, such as rain and wind, which causes spoilage and
losses.
• Loss of nutritional value and natural appearance (colour, texture, etc.).
• The process is slow and takes a long time.
• Heating or drying may be uneven.
Figure 1 Solar Indirect Heating
Indirect heating (drying) uses a solar heater that furnishes hot air to a separate drying unit. This
arrangement suits big industries that require hot air. The system consists of an air heater, a drying
chamber and a thermal storage device. The solar collector collects radiation, which heats the air that is
then blown into the drying chamber. Air, thermal liquids or water can be used as the heating medium.
Thermal liquid is limited in quantity, whereas water suffers from uncertainty and low thermal efficiency.
Air is the ideal medium, as it is free, easily available in bulk, and requires no extra auxiliary equipment.
Parabolic dish collectors, flat-plate collectors and the Scheffler system can be used for collecting solar
radiation. Figure 1 shows the block diagram of indirect heating; its basic components are a) solar
collector, b) solar heating chamber, c) drying chamber and d) inlet fan [4].
III. CASE STUDY 1: SOLAR DRYER FOR BANANA SLICES USING PARABOLIC
SOLAR COLLECTOR DISH
3.1 Working principle
The basic principle of a solar dryer is to use solar energy to heat the air that dries the products.
When air is heated, its relative humidity decreases and it can hold more moisture. Warm, dry air
flowing through the dryer carries away the moisture that evaporates from the surface of the food.
Banana contains about 80% water; when heated up to 70 °C its moisture content reduces to 10% [5].
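The drop in relative humidity on heating can be sketched with the Magnus approximation for saturation vapour pressure. The ambient conditions below are assumed for illustration: 28 °C matches the inlet temperature recorded later, while the 60% RH is hypothetical.

```python
import math

def p_sat_kpa(t_c):
    # Magnus approximation for saturation vapour pressure of water (kPa)
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def rh_after_heating(rh_initial, t_initial_c, t_final_c):
    # Sensible heating leaves the vapour content unchanged, so RH scales
    # with the ratio of saturation pressures at the two temperatures.
    return rh_initial * p_sat_kpa(t_initial_c) / p_sat_kpa(t_final_c)

# Ambient air at 28 C and an assumed 60% RH, heated to 70 C:
print(round(rh_after_heating(60.0, 28.0, 70.0), 1))  # about 7% RH
```

This is why heated air absorbs moisture from the food so much faster than ambient air.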
3.2 Solar dryer system
The system consists of a solar collector, heat absorber, drying chamber and control unit.
1) Solar collector: solar parabolic dish collector (aperture diameter 1.4 m; focal length 0.28 m)
2) Heating cabinet: a number of copper tubes wound in a coil, placed in a black box that
absorbs maximum heat energy (tube diameter 1 cm; length 33 cm; width 4 cm; no. of turns 18)
3) Drying chamber: length 33 cm; width 4 cm; no. of plates 5
4) Control unit: three control circuits are required.
a) Orifice plate - for air control at the inlet valve
b) RTD (PT-100) - to regulate the temperature of the drying chamber
c) Load cells - to register the end point of the process (reduction in weight of banana) in
millivolts. This output voltage is amplified, and the amplified signal operates a relay
that sounds an alarm.
Figure 2 Solar Dryer
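The end-point logic of the load-cell circuit described above can be sketched as follows. The amplifier gain and the millivolt set-point are hypothetical values, since the paper does not give the load-cell calibration (10 mV per kg is assumed here).

```python
# Sketch of the end-point detection: the load-cell millivolt output
# (proportional to tray weight) is amplified and compared against a
# set-point corresponding to the target dry weight (approx. 778 g).
# Gain and calibration below are illustrative assumptions.

AMPLIFIER_GAIN = 100.0       # amplification factor, assumed
TARGET_DRY_WEIGHT_MV = 7.78  # assumed reading at 778 g (10 mV per kg)

def relay_should_trip(load_cell_mv):
    """Return True when the amplified signal falls to the dry set-point."""
    amplified = load_cell_mv * AMPLIFIER_GAIN
    setpoint = TARGET_DRY_WEIGHT_MV * AMPLIFIER_GAIN
    return amplified <= setpoint

print(relay_should_trip(10.0))  # fresh 1 kg load: False
print(relay_should_trip(7.7))   # dried below set-point: True
```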
3.3 Observations
Table 1. Drying observations
Before drying
Initial weight of banana            1 kg (1000 g)
Initial temperature of banana       28 °C
After drying
Final weight of banana              approximately 778 g
Expected reduction in moisture      20%
Drying time (theoretical)           3 hr 53 min
Actual drying time                  4 to 5 hr
3.4 Conclusion
• The average drying rate is found to be 0.17 kg/hr
• Drying rate can be increased by controlling inlet air temperature and air velocity at drying
chamber.
• Drying time can be reduced by combined mode of free and forced convection.
• Outlet air can be recycled in heating unit to increase the efficiency, drying rate as well as to
reduce drying time.
3.5 Applications
• The system can be used for processing grain and other food products such as spices, tea
leaves and fish, and for dehydrating fruits and vegetables.
• The system can also be used in industry for producing paper and board, supplying hot air to
boilers, space heating at hill stations, processing leather and hides, etc.
• The same system can be used for heating thermal liquid, which can then serve as a heat source.
3.6 Advantages
Compared to conventional methods, the solar system:
• Makes the product more uniform, healthy and hygienic
• Preserves colour, texture and natural appearance, and retains nutrients like beta-carotene
• Gives the product a longer shelf life
• Maintains moisture at the optimum level
• Can easily be retrofitted to fossil-fuel systems
• Functions consistently and efficiently for 15-20 years.
3.7 Improvement
• To generate a larger quantity of hot air, a Scheffler solar dish can be used, which also
increases the drying rate and reduces the drying time.
• For small quantities of foodstuff, a solar cooker can also be used.
IV. CASE STUDY 2: SOLAR DRYER (OVEN) FOR CASHEW NUT ROASTING
The proposed arrangement installs the cabinet for loading the material on the rooftop, while the
collector panels are laid on the south side towards the ground. This saves the cost of a fabricated
support structure. As the cabinet sits at a higher elevation than the collector panels, with a uniform
slope, natural draught assists the induced draught created by the fan; because of this combined
draught, the overall auxiliary power consumption of the fan is reduced.
The solar collectors were constructed from powder-coated M.S. sheets instead of aluminium sheets,
which reduced the cost of the collector panels by around 50%. The outer shell of each panel is
constructed from a single sheet without any joints, which eliminates the possibility of hot-air leakage.
The loading cabinet was constructed with plastic sheets on three sides and a plywood door on the rear
side. The cabinet contributes a large share of the cost of a conventional solar or other mechanized
dryer, as it must normally be constructed in stainless steel and properly insulated; replacing this
envelope with plastic sheets saves 85% of the cabinet cost, and no insulation is required [6].
The cabinet design permits even distribution of hot air throughout its cross-section, which gives
uniform drying rates. Moisture can easily be maintained at the desired level, even an unskilled worker
can operate the unit, and the running cost is negligible. A mechanized unit requires 8 kWh of auxiliary
power and 50 kg of coal per day for a 100 kg/day capacity, while the solar dryer requires less than
2 kWh of auxiliary power for the fan at the same capacity [7].
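The auxiliary-energy figures above can be compared directly. The coal heating value used below (about 20 MJ/kg) is a typical assumed number, not a value from the source.

```python
# Daily auxiliary-energy comparison for a 100 kg/day unit, using the
# figures quoted above (8 kWh + 50 kg coal vs. under 2 kWh for solar).
COAL_HEATING_VALUE_MJ_PER_KG = 20.0  # assumed typical value
MJ_PER_KWH = 3.6

mech_kwh = 8.0 + 50.0 * COAL_HEATING_VALUE_MJ_PER_KG / MJ_PER_KWH
solar_kwh = 2.0
# total mechanized input (kWh) and the ratio against the solar dryer
print(round(mech_kwh, 1), round(mech_kwh / solar_kwh, 1))  # 285.8 142.9
```

Even ignoring the coal entirely, the solar unit uses a quarter of the mechanized unit's electrical input.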
4.1 Trials and results
• In cashew processing, the shelled kernel is covered with the testa; to facilitate its removal, i.e.
to peel the kernel and produce the blanched kernel, the shelled kernel is dried. The moisture
content is approximately 6% before drying and 3% after. The same unit was used successfully
for drying shelled kernels.
• In cashew nut processing, roasting the nut in box ovens gives excellent-quality nuts.
Breakage of nuts was reduced by 50% and roasting was uniform. Nuts roasted in box ovens
and then dried in solar dryers not only save energy cost but also fetch a handsome
Rs. 50 per kg more than nuts produced by electrical boilers and dryers.
• Roasting with a solar concentrator requires great skill, and there were incidences of
food burning, especially with cashew nuts, soybean and groundnut. It is observed that solar
ovens are better suited for baking and roasting applications than concentrators. Uniform
baking and roasting is observed in solar ovens. Even an unskilled worker can work well with
ovens, but not with the concentrators.
• The moisture removal rate was observed to be around 3 kg per square metre of panel area in
a dry climate.
• Apart from fossil-fuel savings, the main advantages are quality improvement of the food
product and better process control.
V. GOVERNMENT SUPPORT
The Ministry of Food Processing Industries is the nodal agency of the Government of India for
processed foods and is responsible for developing a strong and vibrant food processing sector. In this
era of economic liberalization, where the private, public and co-operative sectors all play their
rightful roles in the development of the food processing sector, the Ministry acts as a catalyst:
bringing greater investment into the sector, guiding and helping the industry in the proper direction,
encouraging exports, and creating a conducive environment for the healthy growth of the food
processing industry. The Ministry, or its nominated nodal agencies in the concerned State
Governments, is responsible for implementing programmes relating to this sector. The Ministry also
interacts with various promotional organizations such as:
• Agricultural Products Export Development Authority (APEDA)
• Marine Products Export Development Authority (MPEDA)
• Coffee Board and Cashew Board
• National Research Development Corporation (NRDC)
• National Cooperative Development Corporation
• National Horticulture Board (NHB)
The growth of the food processing industry will bring immense benefits to the economy: raising
agricultural yields, improving productivity, creating employment and raising the standard of living of
a very large number of people throughout the country, especially in rural areas. Economic
liberalization and rising consumer prosperity are opening up new opportunities for diversification in
the food processing sector. [8]
5.1 MOFPI Schemes
• Scheme for Infrastructure Development: setting up of Mega Food Parks, cold chain
infrastructure, and modernization of abattoirs
• Scheme for Technology Upgradation, Establishment and Modernization of Food
Processing Industries
• Scheme for Quality Assurance, Codex Standards, Research & Development and Other
Promotional Activities
Table 2. Projects Assisted by MOFPI
State-wise Financial Assistance Extended under the Plan Scheme for Technology
Up-gradation/Establishment/Modernization of Food Processing Industries in India
(2002-03 to 2006-07) (Rs. in lakh)
States/UTs 2002-03 2003-04 2004-2005 2005-2006 2006-07
Andhra Pradesh 124.74 465.57 797.67 689.80 504.21
Maharashtra 239.95 529.03 778.67 1251.94 721.80
Karnataka 41.85 151.49 425.32 419.73 199.65
West Bengal 163.54 132.96 325.74 400.14 271.08
5.2 NHB Schemes
National Horticultural Board (NHB) is providing schemes related to technology development and
transfer, Introduction of New Technologies, Domestic visit of farmers, Technology Awareness. It is
also releasing up to 100% financial assistance as under
a) Up to 25.00 lakh
b) As per actuals
c) Up to Rs. 50,000 per seminar
NHB also provides a market information service for horticulture crops:
a) General information on wholesale prices, arrivals and trends in various markets for
horticulture produce
b) Dissemination of information through media and publications
c) Assistance to farmers, exporters, dealers, research organizations, etc.
5.3 Government Schemes and Policies related to Solar Energy
The Ministry of New and Renewable Energy (MNRE) supports and promotes the use of renewable
energy in different areas of application. Various schemes and programmes have been launched by
MNRE to spread awareness of renewable energy applications and products, and it also provides
subsidies for installing renewable-energy applications in different areas. For solar energy, MNRE has
launched a programme called the Jawaharlal Nehru National Solar Mission, under which various
solar air heating schemes have been introduced. [9]
Salient features of the scheme are:
• To promote solar air heating/steam generating systems, financial support of 50% of the
system cost, subject to a maximum of Rs. 5000 per sq. m of dish area for solar concentrating
systems and Rs. 2500 per sq. m of collector area for FPC-based solar air heating
systems/dryers, will be provided to non-profit institutions/organizations.
• 35% of the system cost, subject to a maximum of Rs. 3500 per sq. m of dish area for solar
concentrating systems and Rs. 1750 per sq. m of collector area for FPC-based solar air
heating systems/dryers, will be provided to commercial/industrial organizations (profit-making
and claiming depreciation).
• Proposals can be generated directly by the beneficiaries in association with suppliers and
State Nodal Agencies (SNAs) and submitted to the Ministry through implementing agencies,
which will be paid service charges at 3% of the MNRE support.
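The support rules above can be summarised in a small sketch; the example system cost and area are hypothetical.

```python
# MNRE support as listed above: 50% of system cost for non-profit bodies,
# 35% for commercial users, each capped per square metre of dish (solar
# concentrating) or FPC collector area.
CAPS_RS_PER_M2 = {
    ("non_profit", "dish"): 5000, ("non_profit", "fpc"): 2500,
    ("commercial", "dish"): 3500, ("commercial", "fpc"): 1750,
}
SHARE = {"non_profit": 0.50, "commercial": 0.35}

def mnre_subsidy(system_cost_rs, area_m2, user, collector):
    """Subsidy = share of cost, capped at the per-m^2 ceiling."""
    return min(SHARE[user] * system_cost_rs,
               CAPS_RS_PER_M2[(user, collector)] * area_m2)

# e.g. a hypothetical Rs 3,00,000 dish system of 20 m^2, commercial user:
print(mnre_subsidy(300000, 20, "commercial", "dish"))  # 70000 (capped)
```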
VI. CONCLUSION
Conventional heating methods used in food processing are costly and energy-intensive. The need for a
cleaner environment and the increasing demand for healthier, more hygienic food products encourage
the use of renewable energy in agro-industrial production. To promote solar energy applications on a
large scale in the food processing industry, it is very important to integrate knowledge of food
processing with the capabilities of different solar devices. Great quality improvement in solar-processed
food was observed in terms of retention of colour, aroma and taste.
REFERENCES
[1] G. N. Tiwari, "Solar Energy: Fundamentals, Design, Modeling", pp. 220-223.
[2] S. P. Sukhatme, "Solar Energy: Principles of Thermal Collection and Storage", pp. 38-48.
[3] G. D. Rai, "Solar Energy Utilization", pp. 180-185.
[4] Ajay Chandak, Sham Patil, Vilas Shah, "Solar Energy for Quality Improvement in Food Processing
Industry".
[5] Proceedings of the International Conference on Advances in Energy Research, 2007, IIT Bombay.
[6] Deepak Gadhia, Shirin Gadhia, "Parabolic Solar Concentrators for Cooking, Food Processing and
Other Applications", Gadhia Solar Energy Systems Pvt. Ltd.
[7] R. D. Jilte, "Performance Analysis of Solar Dryer with Electrical Food Drier: A Case Study of
Mumbai".
[8] Government schemes of the Ministry of Food Processing Industries, www.mofpi.nic.in
[9] Solar energy schemes of the Ministry of New and Renewable Energy, www.mnre.org
Authors
N. V. Vader received her B.E. in Electrical Engineering from B.V.B. College of Engineering,
Hubli, Karnataka University, in 1984. She has 27 years of teaching experience and is working
as Head of the Electrical Power System Department at V.P.M.'s Polytechnic, Thane. She is an
energy manager certified by the Bureau of Energy Efficiency, Ministry of Power, Government
of India, a life member of ISTE and a member of IEEE. She has published 07 papers at the
national level and 02 at the international level, and has published 3 books in the electrical
engineering field. She took special efforts in receiving an AICTE grant of Rs. 5 lakh for a
MODROB project and in establishing a "District Level Renewable Energy Park" with a grant received from
MNRE on her institute's premises. She is an active member of several activities conducted by the Maharashtra
State Board of Technical Education, Mumbai, such as the Curriculum Revision Project, Lab Manual Project,
Development of Sample Question Papers and Development of Question Bank. She takes active efforts in
conducting extracurricular activities for the development of staff and students.
M. M. Dixit is presently working as Head of the Electrical Power System Department at B. L. Patil
Polytechnic, Khopoli, and has 11 years of teaching experience. He received his B.Tech. in Electrical
Engineering from Dr. Babasaheb Ambedkar Technological University, Lonere, in 1999 and is
presently pursuing an M.E. in Power Systems from Pune University. He is a life member of ISTE
and a member of RENET and IEEE. He has published 07 papers at the national level and 01 at the
international level, and has written 2 books in the electrical engineering field. His areas of interest
are power systems, renewable energy and energy conservation. He is an active member of
several activities conducted by the Maharashtra State Board of Technical Education, Mumbai, such as the
Curriculum Revision Project, Lab Manual Project, Development of Sample Question Papers and Development
of Question Bank. He takes active efforts in conducting extracurricular activities for the development of staff
and students.
EXPERIMENTAL STUDY ON THE EFFECT OF METHANOL -
GASOLINE, ETHANOL-GASOLINE AND N-BUTANOL-
GASOLINE BLENDS ON THE PERFORMANCE OF 2-STROKE
PETROL ENGINE
Viral K Pandya1, Shailesh N Chaudhary2, Bakul T Patel3, Parth D Patel4
1, 2, 3Assistant Professor, Department of Mechanical Engineering,
Laljibhai Chaturbhai Institute of Technology, Bhandu, Gujarat, India
4Research Scholar, Department of Mechanical Engineering,
Shri Sakalchand Patel College of Engineering, Visnagar, Gujarat, India
ABSTRACT
This experimental study investigates the effect of blending unleaded gasoline with alcohol additives on
spark-ignition (SI) engine performance. A two-stroke, single-cylinder SI engine was used for the study.
Performance tests measured fuel consumption, brake thermal efficiency, brake power, engine power, indicated
thermal efficiency and brake specific fuel consumption, using unleaded gasoline and additive blends with
different percentages of alcohol, at varying engine load and constant engine speed. The results show that
blending unleaded gasoline with the additives increases the brake power, indicated and brake thermal
efficiencies, fuel consumption and mechanical efficiency. The addition of 5% methanol, 5% ethanol and
5% n-butanol to gasoline gave the best results for all measured parameters at all engine torque/power values.
KEYWORDS: Fuel additive; Gasoline-additive blend; Methanol; Ethanol; n-Butanol
I. INTRODUCTION
Alcohols have been suggested as engine fuels almost since the automobile was invented [1]. Alcohol
can be used either to modify the present fuel, i.e. gasoline, or in the search for new alternatives. In this
study the first approach was selected, with the aim of improving the combustion characteristics of
gasoline, which is reflected in improved engine performance; this is done by blending in methanol,
ethanol and n-butanol. It is the dream of engineers and scientists to increase engine performance, yet
very few techniques are available that do so safely. Additives are an integral part of today's fuels.
Together with a carefully formulated base fuel composition, they contribute to efficiency and long life.
They are chemicals added in small quantities either to enhance fuel performance or to correct a
deficiency, and they can have surprisingly large effects even when added in small amounts [2].
In recent years several studies have investigated the influence of methanol and ethanol on the
performance of spark-ignition engines. Alvydas Pikunas, Saugirdas Pukalskas and Juozas Grabys
[3] presented the influence of the composition of gasoline-ethanol blends on the parameters of internal
combustion engines. The study showed that when ethanol is added, the heating value of the blended
fuel decreases while its octane number increases. The engine tests also indicated that when
ethanol-gasoline blended fuel is used, the engine power and the specific fuel consumption of the
engine slightly increase.
The effect of ethanol-unleaded gasoline blends on engine performance and exhaust emissions was
studied by M. Al-Hasan [4]. A four-stroke, four-cylinder SI engine (TOYOTA TERCEL-3A)
was used for conducting the study. The study showed that blending unleaded gasoline with ethanol
increases the brake power, torque, volumetric and brake thermal efficiencies and fuel consumption,
while it decreases the brake specific fuel consumption and the equivalence air-fuel ratio. The blend
with 20 vol.% ethanol gave the best results for all measured parameters at all engine speeds.
M. Abu-Zaid, O. Badran and J. Yamin [5] presented an experimental study of the effect of methanol
addition to gasoline on the performance of spark-ignition engines. The performance tests were carried
out at variable speed, over the range of 1000 to 2500 rpm, using various methanol-gasoline blends. It
was found that methanol has a significant effect on increasing the performance of the gasoline engine.
The addition of methanol to gasoline increases the octane number; engine performance therefore
increases, and methanol-gasoline blends can operate at higher compression ratios.
An experimental study of the exhaust emissions and performance of a multi-cylinder SI engine with
methanol as an additive was carried out by M. V. Mallikarjun and Venkata Ramesh Mamilla [6].
Methanol was added to gasoline in various percentages in a four-cylinder SI engine, with slight
modifications to the various engine subsystems, under different load conditions. For the various
methanol blends (0-15%) it was observed that the octane rating of the gasoline increases, along with
increases in brake thermal efficiency and indicated thermal efficiency and a reduction in knocking.
D. Balaji [7] examined the influence of an isobutanol blend on the performance of a spark-ignition
engine operated with gasoline and ethanol. A four-stroke, single-cylinder SI engine was used for the
study. Performance tests were conducted for fuel consumption, volumetric efficiency, brake thermal
efficiency, brake power, engine torque and brake specific fuel consumption, using unleaded gasoline
and additive blends with different percentages of fuel, at varying engine torque and constant engine
speed. The results showed that blending unleaded gasoline with the additives increases the brake
power, volumetric and brake thermal efficiencies and fuel consumption; the addition of 5% isobutanol
and 10% ethanol to gasoline gave the best results for all measured parameters at all engine torque
values. In this paper we study the effects of methanol-gasoline, ethanol-gasoline and n-butanol-gasoline
blends, and also compare them.
Considering environmental and financial factors, an attempt has been made to increase the
performance of the engine using alcohol additives. The engine performance was measured while
running the engine at varying load and constant speed. Promising results were obtained, and the work
carried out is presented here.
1.1 Statement of the problem
Two-stroke engines run on different fuels such as petrol, diesel and gas. Nowadays the use of
two-stroke petrol engines has declined because of the emission of harmful gases, high fuel
consumption and low efficiency. To overcome these difficulties, methanol, ethanol and n-butanol
are used as additives with gasoline to increase engine performance and minimize fuel consumption.
1.2 Objective of the study
The objective of the study is to analyze the performance of a two-stroke petrol engine using
methanol, ethanol and n-butanol as additives with gasoline, so as to overcome the above stated
difficulties.
1.3 Scope of the study
To increase the performance of the two-stroke petrol engine, methanol, ethanol and n-butanol have
been used as additives with gasoline. The readings obtained from the tests have been evaluated,
and the results and graphs are compared.
II. EXPERIMENTAL SET UP AND PROCEDURE
The engine is a 150 cc, two-stroke, single-cylinder SI engine loaded by a rope-brake dynamometer.
Table 1 lists some important specifications of the engine under test. The schematic layout of the
experimental set-up is shown in Figure 1. Fuel consumption was measured using a calibrated
burette and a stopwatch with an accuracy of 0.2 s.
Table 1. Engine specifications
Sr. No.  Description            Data
1        Type of engine         Two-stroke cycle, single-acting, air-cooled petrol engine
2        No. of cylinders       Single cylinder
3        Max B.P.               7.48 HP (5.93 kW)
4        Max speed              5200 rpm
5        Direction of rotation  Clockwise
6        Bore diameter          57 mm
7        Stroke length          57 mm
8        Cubic capacity         145.45 cc
2.1 Specifications of other devices and fluids used in the experiment
1. Coefficient of discharge of orifice = 0.6
2. Orifice diameter = 20 mm
3. Density of petrol = 720 kg/m3
4. Density of water = 1000 kg/m3
5. Calorific value of petrol = 48000 kJ/kg
6. Calorific value of methanol = 22700 kJ/kg
7. Calorific value of ethanol = 29700 kJ/kg
8. Calorific value of n-butanol = 33075 kJ/kg
Figure 1 Experimental setup for the effect of methanol -gasoline, ethanol-gasoline and n-butanol-gasoline
blends
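From the calorific values listed above, the heating value of each 5% blend can be estimated. Treating the 5% as a mass fraction is a simplifying assumption; the experiments blend by volume.

```python
# Lower heating value of a 5% alcohol blend, from the calorific values
# listed in Section 2.1. The blend percentage is treated here as a mass
# fraction, which is a simplifying assumption.
CV_KJ_PER_KG = {"petrol": 48000, "methanol": 22700,
                "ethanol": 29700, "n-butanol": 33075}

def blend_cv(additive, fraction=0.05):
    """Mass-weighted calorific value (kJ/kg) of a petrol-alcohol blend."""
    return (1 - fraction) * CV_KJ_PER_KG["petrol"] \
         + fraction * CV_KJ_PER_KG[additive]

for a in ("methanol", "ethanol", "n-butanol"):
    print(a, blend_cv(a))  # 46735.0, 47085.0, 47253.75 kJ/kg
```

The lower heating value of every blend explains the higher fuel consumption reported in Section 4.1.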
The engine was started and allowed to warm up for 15-20 min. The fuel quantity was kept
constant at 10 cc for each measurement. Engine tests were performed at constant speed with varying
load for each individual fuel. Before running the engine on a new fuel blend, it was allowed to run
long enough to consume the remaining fuel from the previous experiment. For each experiment, four
runs were performed to obtain average values of the experimental data.
III. EXPERIMENTAL DATA
For Petrol
Wt (kg)  Speed N (rpm)  Time for 10 cc of fuel (s)  H1 (cm)  H2 (cm)  Hw = H1 - H2 (m)
2        2950           55                          15.1     14.8     0.003
4        2470           53                          15.1     14.8     0.003
6        2325           49                          15.1     14.8     0.003
8        2200           44                          15.1     14.8     0.003
For M-5
Wt (kg)  Speed N (rpm)  Time for 10 cc of fuel (s)  H1 (cm)  H2 (cm)  Hw = H1 - H2 (m)
2        2745           49                          15.1     14.8     0.003
4        2150           44                          15.1     14.8     0.003
6        1975           41                          15.1     14.8     0.003
8        1850           38                          15.1     14.8     0.003
For E-5
Wt (kg)  Speed N (rpm)  Time for 10 cc of fuel (s)  H1 (cm)  H2 (cm)  Hw = H1 - H2 (m)
2        2700           59                          15.1     14.8     0.003
4        2450           56                          15.1     14.8     0.003
6        2340           52                          15.1     14.8     0.003
8        2150           49                          15.1     14.8     0.003
For B-5
Wt (kg)  Speed N (rpm)  Time for 10 cc of fuel (s)  H1 (cm)  H2 (cm)  Hw = H1 - H2 (m)
2        2550           61                          15.1     14.8     0.003
4        2100           57                          15.1     14.8     0.003
6        2000           54                          15.1     14.8     0.003
8        1950           51                          15.1     14.8     0.003
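The performance quantities plotted below follow from these tables and the constants in Section 2.1. A sketch of the calculations for the first petrol data point is given here; the effective rope-brake drum radius is not stated in the paper, so the value used is a hypothetical placeholder.

```python
import math

# Sketch of the performance calculations, using the measured data and
# device constants from Section 2.1. DRUM_RADIUS_M is NOT given in the
# paper; it is a hypothetical placeholder for illustration only.
RHO_PETROL = 720.0     # kg/m^3 (given)
CV_PETROL = 48000.0    # kJ/kg (given)
DRUM_RADIUS_M = 0.083  # assumed effective rope-brake radius
G = 9.81

def fuel_mass_flow(time_for_10cc_s, rho=RHO_PETROL):
    """Fuel mass flow rate (kg/s) from the 10 cc burette measurement."""
    return 10e-6 * rho / time_for_10cc_s

def brake_power_kw(load_kg, speed_rpm, radius_m=DRUM_RADIUS_M):
    torque = load_kg * G * radius_m                  # N*m
    return 2 * math.pi * speed_rpm * torque / 60000  # kW

def brake_thermal_eff(load_kg, speed_rpm, time_for_10cc_s, cv=CV_PETROL):
    bp = brake_power_kw(load_kg, speed_rpm)
    return 100 * bp / (fuel_mass_flow(time_for_10cc_s) * cv)

# First petrol data point: 2 kg at 2950 rpm, 10 cc consumed in 55 s
print(round(fuel_mass_flow(55) * 1e6, 1), "mg/s")    # 130.9 mg/s
print(round(brake_thermal_eff(2, 2950, 55), 1), "%")
```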
IV. RESULTS AND DISCUSSION
The effect of methanol, ethanol and n-butanol addition to unleaded gasoline on SI engine performance
at various engine powers was investigated.
4.1 Fuel consumption
The effect of methanol-, ethanol- and n-butanol-unleaded gasoline blends on fuel consumption is
shown in Figure 2, which shows that fuel consumption increases as the engine power increases at
constant engine speed. This behaviour is attributed to the lower heating value (LHV) per unit mass of the alcohol
fuel, which is distinctly lower than that of unleaded gasoline. Therefore, for a given desired fuel
energy input, the amount of fuel introduced into the engine cylinder has to be greater with the
alcohol fuel.
[Line chart: mass of fuel consumed against brake power (kW), for pure petrol and petrol with 5% methanol, 5% ethanol and 5% n-butanol]
Figure 2 Fuel Consumption Vs Brake Power at various loads
4.2 Brake thermal efficiency
Figure 3 presents the effect of methanol-, ethanol- and n-butanol-unleaded gasoline blends on brake
thermal efficiency. As shown in the figure, brake thermal efficiency increases as the engine torque
increases. The maximum brake thermal efficiency is recorded with 5% ethanol in the fuel blend at
constant engine speed.
[Line chart: brake thermal efficiency (%) against brake power (kW), for pure petrol and the 5% methanol, ethanol and n-butanol blends]
Figure 3 Brake Thermal Efficiency Vs Brake Power at various loads
4.3 Specific fuel consumption
The effect of using methanol-, ethanol- and n-butanol-unleaded gasoline blends on brake specific fuel
consumption (BSFC) is shown in Figure 4. As shown in the figure, BSFC decreases as the engine
torque increases. This is a normal consequence of the behaviour of the engine's brake thermal efficiency.
[Line chart: BSFC (kg/kW·s) against brake power (kW), for pure petrol and the 5% methanol, ethanol and n-butanol blends]
Figure 4 Specific Fuel Consumption (BSFC) Vs Brake Power at various loads
4.4 Mechanical Efficiency
The effect of using methanol, ethanol and n-butanol -unleaded gasoline blends on Mechanical
efficiency is shown in Figure 5. As shown in the figure efficiency increases as the engine torque
increases. The comparison of efficiency after adding the additive is given below. As the percentage of
additives increases in the gasoline, the performance of the engine increases.
[Plot: mechanical efficiency (%) against brake power (kW) for pure petrol and the 5% methanol, ethanol and butanol blends]
Figure 5 Mechanical Efficiency Vs Brake Power at various loads
4.5 Indicated thermal efficiency
Figure 6 presents the effect of methanol, ethanol and n-butanol -unleaded gasoline blends on indicated
thermal efficiency. As shown in the figure indicated thermal efficiency increases as the engine torque
increases. The minimum brake thermal efficiency is recorded with 5% n-butanol in the fuel blend at
engine speed.
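Mechanical efficiency ties the brake and indicated quantities of Sections 4.2-4.5 together: ηmech = BP/IP, and ηith = ηbth/ηmech, which is why the indicated thermal efficiency curve sits above the brake thermal efficiency curve at every load. The numbers below are illustrative assumptions, not the paper's measurements.

```python
# eta_mech = BP / IP links brake and indicated power; indicated thermal
# efficiency then follows as eta_ith = eta_bth / eta_mech. The BP, IP
# and eta_bth values below are assumed for illustration.

def mechanical_efficiency(brake_power_kw, indicated_power_kw):
    """eta_mech = BP / IP."""
    return brake_power_kw / indicated_power_kw

def indicated_thermal_efficiency(eta_brake, eta_mech):
    """eta_ith = eta_bth / eta_mech."""
    return eta_brake / eta_mech

eta_mech = mechanical_efficiency(1.54, 2.20)    # assumed BP and IP in kW
eta_ith = indicated_thermal_efficiency(0.22, eta_mech)
print(f"eta_mech = {eta_mech:.1%}, eta_ith = {eta_ith:.1%}")
```

Since ηmech < 1 by definition, ηith is always larger than ηbth, matching the relative levels of Figures 3 and 6.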
[Plot: indicated thermal efficiency (%) against brake power (kW) for pure petrol and the 5% methanol, ethanol and butanol blends]
Figure 6. Indicated Thermal Efficiency Vs Brake Power at various loads
V. CONCLUSION
From the results of the study, the following conclusions can be deduced:
1. Using methanol, ethanol and n-butanol as fuel additives to unleaded gasoline causes an
improvement in engine performance.
2. Methanol, ethanol and n-butanol addition to gasoline results in an increase in brake power,
brake thermal efficiency, volumetric efficiency, and fuel consumption.
3. The addition of 5% methanol, 5% ethanol and 5% n-butanol to the unleaded gasoline was
achieved in our experiments without any problems during engine operation.
ACKNOWLEDGEMENT
The author would like to thank the technical staff of the Internal Combustion Engine laboratory at the
Mechanical Engineering Department of L. C. Institute of Technology.
REFERENCES
[1] T. O. Wagner, D. S. Gray, B. Y. Zarah, A. A. Kozinski, "Practicality of alcohols as motor fuel," SAE
Technical Paper 790429, pp. 1591-1607, 1979.
[2] L. O. Gulder, "Technical aspects of ethanol and ethanol-gasoline blends as automotive fuel," The
Scientific and Technical Research Council of Turkey, Project No. 526, 1979.
[3] Alvydas Pikunas, Saugirdas Pukalskas & Juozas Grabys, "Influence of composition of gasoline-ethanol
blends on parameters of internal combustion engines," Journal of KONES Internal Combustion Engines,
vol. 10, no. 3-4, 2003.
[4] M. Al-Hasan, "Effect of ethanol-unleaded gasoline blends on engine performance and exhaust emission,"
Energy Conversion and Management 44, pp. 1547-1561, 2003.
[5] M. Abu-Zaid, O. Badran, and J. Yamin, "Effect of methanol addition to gasoline on the performance of
spark ignition engines," Energy & Fuels 18, pp. 312-315, 2004.
[6] S. Y. Liao, D. M. Jiang, Q. Cheng, Z. H. Huang, and K. Zeng, "Effect of methanol addition into
gasoline on the combustion characteristics at relatively low temperatures," Energy & Fuels 20, pp. 84-90,
2006.
[7] M. V. Mallikarjun and Venkata Ramesh Mamilla, "Experimental study of exhaust emissions &
performance analysis of multi cylinder S.I. engine when methanol used as an additive," Volume 1,
Number 3, pp. 201-212, 2009.
[8] D. Balaji, "Influence of isobutanol blend in spark ignition engine performance operated with gasoline
and ethanol," Vol. 2(7), pp. 2859-2868, 2010.
Authors Biographies
Viralkumar K Pandya was born in Chanasma, India in 1982. He graduated in Mechanical
Engineering from Visvesvaraya Technological University, Belgaum in 2007. In 2008 he
joined the Department of Mechanical Engineering, L. C. Institute of Technology Bhandu, Gujarat,
as a Lecturer. His areas of interest include Internal Combustion Engines, Thermal Engineering,
Alternative fuels and Design.
Shailesh N Chaudhar was born in Mehsana, India in 1986. He graduated in Mechanical
Engineering from Ganpat University, Kherva in 2008. In 2009 he joined the Department of
Mechanical Engineering, L. C. Institute of Technology Bhandu, Gujarat, as a Lecturer. His areas of
interest include Machine Design, Dynamics of Machines and Alternate Energy Sources.
Bakul T Patel was born in Gandhinagar, India in 1985. He graduated in Mechanical Engineering
from Hemchandracharya North Gujarat University, Patan in 2007. He has one year of industrial
experience at Suzlon Industries. In 2010 he completed his Master's degree from Gujarat
University in IC/Auto at L. D. Engineering College, Ahmedabad. In 2010 he joined the
Department of Mechanical Engineering, L. C. Institute of Technology Bhandu, Gujarat, as an
Assistant Professor. His areas of interest include Machine Design, IC/Auto, Dynamics of
Machines and Alternate Energy Sources.
Parth D Patel was born in Patan, India in 1983. He graduated in Mechatronics Engineering from
Ganpat University, Kherva in 2007. In 2009 he joined as a research scholar for M. Tech at S. P.
College of Engineering, Visnagar. He has also worked as a Lecturer at L. C. Institute of Technology
Bhandu, Gujarat. His areas of interest include CAD/CAM and Control Engineering.
IMPLEMENTATION OF MOBILE BROADCASTING USING
BLUETOOTH/3G
Dipa Dixit, Dimple Bajaj and Swati Patil
Department of IT Engineering, Fr. C.R.I.T., Vashi, Mumbai University, Maharashtra, India
ABSTRACT
Mobile-PC multimedia broadcasting aims at developing an application which mainly focuses on live streaming
of images and video from mobile devices to desktops/laptops using 3G technology and Bluetooth. Bluetooth is
used for one-to-one connections (i.e. from mobile to PC) and 3G is used for one-to-many connections (i.e. from
mobile to many PCs and/or other mobile handsets). The Mobile-to-PC solution offers a new level of 3G service
to both enterprise and consumer markets. This application can also be used as a built-in feature in mobile
phones for entertainment purposes. This paper focuses on the architecture and implementation of broadcasting
images and live video streams to desktops or laptops using Bluetooth/3G technology.
KEYWORDS: Multimedia Broadcasting, Wireless communication, 3G, Bluetooth
I. INTRODUCTION
Wireless mobile communications has emerged as the most popular and convenient form of
communication in the past decade. Mobile networks are increasingly being used to connect to the
Internet, and the demand for faster technologies has never been greater. Over the years, mobile technologies
have evolved rapidly to meet users' demands, equip them with advanced tools and
provide stronger connectivity. GPRS remains a popular service, and with the emergence of 3G, users
have been provided with strong and convenient mobile connectivity with enhanced features. Bluetooth
is also one of the leading wireless technologies: an open technology standard for exchanging data over
short distances (using short-wavelength radio transmissions) between fixed and mobile devices, creating
personal area networks (PANs) with high levels of security. It can connect several devices, overcoming
problems of synchronization.
The number of wireless mobile devices is increasing globally. As wireless mobile devices, such as
personal digital assistants, smart cellular phones, and mobile media players are getting very popular and
computationally powerful, watching TV on the move has become a reality. At the same time, wireless
systems are achieving higher data rates to support Internet and other data-related applications.
The various technologies analyzed for the implementation of the above system are discussed briefly below:
1.1. Multimedia Broadcasting
Multimedia broadcasting [1], or datacasting, refers to the use of the existing broadcast infrastructure to
transport digital information to a variety of devices (not just PCs).
The essential characteristics of multimedia broadcasting include:
1. Digital data stream
2. Asynchronous
3. Bandwidth asymmetry
4. Downstream backbone
5. High speed (up to 20 Mbps)
6. Universal access
7. Low cost
8. Layered architecture
9. Wireless
10. Mobile and fixed service
11. Existing infrastructure
1.2. Bluetooth
Bluetooth [2] is an open wireless technology standard for exchanging data over short distances using
short-wavelength radio transmissions between fixed and mobile devices, creating personal area networks
(PANs) with high levels of security. Created by the telecoms vendor Ericsson in 1994, it was originally
conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming
problems of synchronization.
1.3. 3G
International Mobile Telecommunications-2000 (IMT-2000), better known as 3G or 3rd Generation, is a
generation of standards for mobile phones and mobile telecommunications services fulfilling
specifications by the International Telecommunication Union. Application services include wide-area
wireless voice telephony, mobile Internet access, video calls and mobile TV, all in a mobile environment.
Compared to the older 2G and 2.5G standards, a 3G system must allow simultaneous use of speech and
data services, and provide peak data rates of at least 200 kbit/s according to the IMT-2000 specification.
Recent 3G releases, often denoted 3.5G and 3.75G, also provide mobile broadband access of several
Mbit/s to laptop computers and smartphones.
1.4. Wireless communication
Mobile computers require wireless network access, although sometimes they may physically attach to the
network for a better or cheaper connection. Wireless communication is much more difficult to achieve
than wired communication because the surrounding environment interacts with the signal, blocking signal
paths and introducing noise and echoes. As a result, wireless connections have a lower quality than wired
connections: lower bandwidth, less connection stability, higher error rates and, moreover, highly
varying quality.
1.4.1. Issues in networked wireless multimedia systems
Issues identified in networked wireless multimedia systems are listed below:
1. The need to maintain quality of service (throughput, delay, bit error rate, etc.) over time-varying
channels.
2. The need to operate with limited energy resources.
3. The need to operate in a heterogeneous environment.
4. The need for pre-configuration of the system.
5. Firewall blocking.
Hence, all of the above problems with wireless communications can be solved using 3G technology, and
energy efficiency can also be obtained. Thus, Bluetooth/3G technologies are used to implement one-to-one
as well as one-to-many connections between mobiles and laptops/desktops. The rest of the paper is
organized as follows: Section 2 describes the proposed system and its applications in various fields,
Section 3 describes the design considerations for mobile broadcasting using Bluetooth/3G, Sections 4 and
5 describe the implementation and step-wise results, and finally Section 6 summarizes the paper.
II. PROPOSED SYSTEM
The main aim of the application is to stream live videos and images from any camera compatible mobile
device supporting wireless technologies. Steps for broadcasting the images and videos are explained
below:
1. First the mobile handset is connected to the PC using 3G or Bluetooth and then the video is
transmitted as and when it is captured and simultaneously shown on a PC. This is an exciting step
forward in the development of 3G and offers an easy solution for mobile operators to offer 3G visual
communications to subscribers on desktops and laptops as well as on 3G handsets. PC-to-Mobile
allows operators to immediately increase the critical mass of 3G enabled handsets, encouraging and
developing a larger community of 3G users and expanding the boundaries of peer-to-peer visual
communications 3G networks.
2. Secondly, the PC to which this live video is transmitted then broadcasts the video to other PCs or
mobile handsets, depending upon the user's choice, over the Internet. The Mobile-to-PC solution leverages
the power of both PCs and 3G mobile devices to fuel 3G proliferation and enable subscribers to use
any PC with a broadband Internet connection as an extension of their 3G mobile handsets,
subscriptions and accounts.
2.1. Applications of the system
Applications of the system in different fields are given below:
1. Conferences - Broadcasting of business conferences, news conferences and educational conferences
in minutes without setting up anything.
2. Premium content - Expressing interest in recent movies, premium sporting events, and other
programming on a subscription or pay-per-view basis.
3. Advertisements - Consumers are increasingly willing to view ads as part of a mobile media experience,
highlighting the potential for a smooth transition of local broadcasting's free-to-air value proposition to
mobile. The potential for subscription-based services is also strong, with almost 50% of viewers saying
they would prefer ads on mobiles.
4. Critical Delivery of Live, Local Information and Emergency Alerts on Mobile Devices -
The key strength of any broadcaster is its ability to respond quickly to live events and to reach millions
of viewers with a single digital broadcast transmission -- a system designed to enable fast, easy, and
robust reception on mobile.
5. Non Real-Time Services - Enables delivery of content for local storage for playback/display at a later
time. For example, local advertiser locations and sales could be sent in advance; when a device
determined that it was close, a promo could be displayed. Another example might involve the Mobile
receiver in the vehicle gathering content for playback on a trip.
6. Social Networking Sites - It can allow users to stream video directly to any social networking site; for
example, they can broadcast videos directly to their Facebook wall.
III. DESIGN ARCHITECTURE
3.1. Architectural block diagram
The Architectural Block Diagram for the application is as shown in fig.1
Figure1: Architectural Block Diagram for the system
CLIENT DATABASE SERVER
LINK
CREATION
UPLOAD CAPTURE
VIDEO
MAP
DISPLAY
Various blocks which are involved in the application are explained below:
1. Client: This block represents the user interface of a mobile device through which the video is
captured. The user is authenticated through this interface with the help of user id and password.
2. Database: Database contains user ids and passwords of all the users that have been registered.
It also contains videos which were captured from the mobile device.
3. Capture Video: The video is captured through the mobile device.
4. Upload: The video captured through the mobile device is then uploaded on server. Through server, the
video is transmitted to other devices.
5. Server: The videos are uploaded on server. Server then links the video to other devices with the help
of the database.
6. Link Creation: After uploading the video on server, a link is created for each video in order to map
the video with other devices on which the video is to be broadcasted.
7. Map Display: After mapping, the video is broadcast to multiple devices such as computers, mobiles
and other devices.
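The upload, link creation and map/broadcast blocks described above can be sketched as a minimal in-memory model. This is an illustration only: the real system moves frames over Bluetooth or 3G and stores them in a server-side database, and all class and method names here (BroadcastServer, upload, subscribe, broadcast) are hypothetical, not taken from the paper's implementation.

```python
# Minimal in-memory sketch of the upload / link-creation / broadcast flow.
# "Devices" are plain callbacks; in the real system they are PCs or
# handsets reached over Bluetooth/3G.

class BroadcastServer:
    def __init__(self):
        self.videos = {}      # link -> list of frames (the "database")
        self.viewers = {}     # link -> subscriber callbacks ("map display")
        self._next_id = 0

    def upload(self, frames):
        """Store an uploaded video and create a link for it."""
        link = f"video/{self._next_id}"
        self._next_id += 1
        self.videos[link] = list(frames)
        self.viewers[link] = []
        return link

    def subscribe(self, link, on_frame):
        """Map a viewing device (a callback) onto an existing link."""
        self.viewers[link].append(on_frame)

    def broadcast(self, link):
        """Push every stored frame to all devices mapped to the link."""
        for frame in self.videos[link]:
            for on_frame in self.viewers[link]:
                on_frame(frame)

server = BroadcastServer()
link = server.upload([b"frame1", b"frame2"])   # capture video -> upload
received = []
server.subscribe(link, received.append)        # one-to-many: add callbacks
server.broadcast(link)
print(link, received)
```

Adding more subscribers to the same link models the one-to-many case; the one-to-one Bluetooth case is simply a link with a single subscriber.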
IV. IMPLEMENTATION DETAILS
The application is a J2ME application taking advantage of Bluetooth in mobile phones. Bluetooth allows
devices to communicate wirelessly and J2ME allows you to write custom applications and deploy them
on mobile devices. The implementation details for the system are explained below:
4.1. User Interface
1. The User interface is deployed on the client phone. The application starts with a splash screen
“Mobile Broadcasting”.
2. The next screen is displayed after a short delay and gives the user the options to choose from:
Image using Bluetooth/3G or Video using 3G.
3. The user can select from the options depending on whether he has to broadcast images or video, or
chat via Bluetooth.
The User Interface with options is shown in the following figure 2
Figure2: Start Screen of the Application
Options from the above figure 2 are explained as follows:
4.1.1. Image using Bluetooth
If the user selects the Image using Bluetooth option, the user is directed to perform the following operations:
1. Start Bluetooth Device Enquiry
2. Match the service with the device where server is running
3. Transmit the live images to the server
The working of the above procedure is explained in the following part. For the client to match service
with the server, the service has to be first started on the server. The live images are taken immediately
after the services are matched and the transmit button is clicked.
4.1.2. Video using Bluetooth/3G
If the user selects the Video using 3G option, the user is directed to perform the following operations:
1. Start Bluetooth Device Enquiry
2. Match the service with the device where server is running.
3. Transmit the video to the server
The working of the above procedure is explained in the following part. For the client to match service
with the server, the service has to be first started on the server.
V. RESULTS AND DISCUSSION
Implementation results of the system using Bluetooth and 3G are explained below:
The important features provided by the application are:
1. Broadcast of Images
2. Broadcast of Video
The results for the above procedure are explained in the following part. For the client to match service
with the server, the service has to be first started on the server.
5.1. BLUETOOTH
The client side results for the system are illustrated in the following figures:
1. Splash Screen : As the application starts, the splash screen displays “Mobile Broadcasting” at the
client side.
Figure 3: Client UI
2. Option Screen: Option screen displays the various options which a client can choose for
broadcasting of images using Bluetooth/3G and video using 3G. Figure 4 shows that the user has
selected the option Image using Bluetooth
Figure 4: Option Screen
3. Search for Bluetooth Devices: Then the application on the client mobile starts searching for
devices which are connected through Bluetooth.
Figure 5: Service Search Screen
4. Starting Device Inquiry: Start device Inquiry helps in identifying the devices and initiates the
process.
Figure 6: Service Starting Screen
5. Listing of Devices and Matching Service with Devices Found: The application then displays
the list of devices connected through Bluetooth and matches the service with the device
where the application server is running.
Figure 7: Service Search Completed Screen
6. Display of Image on the Mobile: Finally, the image which is to be transferred to the server is captured
on the mobile.
Figure8: Display Image UI on Client
The server side results are being illustrated by the following figures:
1. Bluetooth Receiver: To transmit the images, the matched server connected through Bluetooth in the
above steps is selected as the destination.
Figure 9: Server UI
2. Starting service: For the client to match service with the server, the service has to be first started on
the server. The live images are taken immediately after the services are matched and the transmit
button is clicked.
Figure10: Start Service
3. Display of images on the server: Thus the image is transferred to the server.
Figure 11: Display Image UI on Server
4. Broadcast to multiple computers: In the same way, the image can be broadcast to multiple
computers.
Figure 12: Image UI on Multiple Devices
5.2. 3G
The client side results for transferring images using 3G are illustrated in the following figures:
1. Option Screen: Again Option screen displays the various options which a client can choose for
broadcasting of images using Bluetooth/3G and video using 3G.The option selected is Image
transfer using 3G.
Figure 13: Option Screen
2. Capture image on mobile: The image is live captured through mobile
Figure 14: Capture Image
3. Transfer image to the server: Using the same procedure as above, the image is transferred to the
server.
Figure 15: Transfer Image Screen
The server side results are being illustrated by the following figures:
1. Display the Image or the Video on the server side: Thus the image/video is finally transferred to the
server as shown.
Figure 16: Display Image UI on Server
VI. CONCLUSION
Mobile broadcasting using Bluetooth/3G is a mobile application which can broadcast images and
videos to multiple devices, such as computers or other mobiles, using Bluetooth/3G.
This application can be used as a built-in feature in mobile phones for entertainment and
other personal uses. It can also offer advertisers and companies new opportunities to reach
mobile consumers. Most mobile devices are equipped with a camera and are enabled with
Bluetooth and 3G, which helps users capture and broadcast live images and video; this works
to the application's advantage.
REFERENCES
[1] http://voip.about.com/od/mobilevoip/p/3G.htm
[2] "Bluetooth traveler," http://www.hoovers.com/business-information/--pageid__13751--/global-hoov-index.xhtml.
Retrieved 9 April 2010.
[3] http://en.wikipedia.org/wiki/3G, 22 Oct 2010 11:35:10 GMT
[4] http://en.wikipedia.org/wiki/Bluetooth, 17 Oct 2010 12:47:52 GMT
[5] Clint Smith, Daniel Collins, "3G Wireless Networks," p. 136, 2000.
[6] "Cellular Standards for the Third Generation," ITU, 1 December 2005. http://www.itu.int/osg/spu/imt-
2000/technology.html#Cellular%20Standards%20for%20the%20Third%20Generation.
[7] William Stallings, "Wireless Communications & Networks," Upper Saddle River: Pearson Prentice Hall.
[8] Borko Furht, Syed A. Ahson (Eds.), "Handbook of Mobile Broadcasting: DVB-H, DMB, ISDB-T and
MediaFLO (Internet and Communications)," Auerbach Publications, 2008.
Authors Biographies
Dipa Dixit is working as an Assistant Professor in the Information Technology department at
Fr. C.R.I.T. College, Vashi, Navi Mumbai. She completed her ME from Mumbai University. Her
areas of interest are Mobile Technology, Data Mining and Web Mining.
Dimple Bajaj is working as an Assistant Professor in the Information Technology department at
Fr. C.R.I.T. College, Vashi, Navi Mumbai. She completed her Bachelor's in Engineering from
Mumbai University. Her areas of interest are Mobile Technology, Internet Programming and
Networking.
Swati Patil is working as a Lecturer in the Information Technology department at Fr. C.R.I.T.
College, Vashi, Navi Mumbai. She completed her Bachelor's in Engineering from Mumbai
University. Her areas of interest are Mobile Technology, Object Oriented Analysis and Design,
and Networking.
IMPROVED DIRECT TORQUE CONTROL OF INDUCTION
MOTOR USING FUZZY LOGIC BASED DUTY RATIO
CONTROLLER
Sudheer H1, Kodad S. F.2 and Sarvesh B3
1Research Scholar, JNTU Anantapur, Hyderabad, India
2Principal, Krishna Murthy Institute of Technology and Engineering, Hyderabad, India
3HoD, Electrical and Electronics Dept., JNTU Anantapur, India
ABSTRACT
Classical DTC has inherent disadvantages such as problems during starting resulting from the null states, the
compulsory requirement of torque and flux estimators, and torque ripple. In this paper, an improved response of
DTC is achieved by varying the duty ratio of the selected voltage vector during each switching period
according to the magnitude of the torque error and the position of the stator flux, using fuzzy logic. A duty ratio
control scheme for an inverter-fed induction machine using the DTC method is presented. Fuzzy logic
control is used to implement the duty ratio controller. The effectiveness of the duty ratio method was verified by
simulation using MATLAB/Simulink.
KEYWORDS: DTC, Fuzzy Logic, Duty Ratio controller, Membership function (MF), Fuzzy controller,
switching table.
I. INTRODUCTION
In recent years, much research has been devoted to finding simpler control schemes for
induction motors that meet requirements such as low torque ripple, low harmonic distortion and quick
response [1]. The induction motor (IM) offers several features which make it attractive for use in electric drive
systems. Among the various proposals, Direct Torque Control (DTC) has found wide acceptance. In the 1980s,
Takahashi proposed direct torque control for an induction machine drive [2-3]. DTC
provides a very quick response with a simple control structure and hence this technique is gaining
popularity in industry. In DTC it is possible to control the stator flux and the torque directly by
selecting the appropriate inverter state [2-4].
The main advantages of DTC are the absence of coordinate transformations, current regulators and a separate
voltage modulation block. However, common disadvantages of conventional DTC are high torque and
stator flux ripple, the requirement of torque and flux estimators (implying the consequent parameter
identification) and sluggish speed response during start-up and abrupt changes in the torque command.
Many methods have been proposed to reduce the torque ripple, such as multilevel inverters [19] and
matrix converters. Several solutions mentioned in the literature include: (a) a hysteresis band with variable
amplitude based on fuzzy logic [20]; (b) an optimal switching instant during one switching cycle
calculated for torque ripple minimization [21]; (c) duty ratio control, increasing the number of voltage
vectors beyond the available eight discrete ones without any increase in the number of semiconductor
switches in the inverter [23]; (d) fuzzy logic control to implement the duty ratio during each switching
cycle using the torque and flux errors as input [25]; (e) a space vector based hybrid pulse width
modulation (HPWM) method for direct torque controlled induction motor drives to reduce steady-state
ripple [26]. To overcome these disadvantages, new artificial intelligence techniques such as neural
networks and fuzzy logic can be employed.
Based upon the literature, duty ratio control of DTC is a promising method for minimizing torque and
flux ripple. In classical DTC, a voltage vector is applied for the entire switching period, and this
causes the stator current and electromagnetic torque to increase over the whole switching period. Thus,
for small errors, the electromagnetic torque exceeds its reference value early in the switching
period and continues to increase, causing a high torque ripple. The duty ratio technique is based on
applying the selected active states to the inverter just long enough to achieve the torque and flux
reference values. For the rest of the switching period a null state is selected, which leaves
both the torque and the flux almost unchanged [14].
This paper deals with the development of an improved fuzzy logic based duty ratio controller for
DTC of an induction motor. The main improvement is torque ripple reduction. The suggested technique
is based on applying the selected active switching state to the inverter just long enough to
achieve the torque and flux reference values. Therefore, a duty ratio (δ) has to be determined in each
switching period. By varying the duty ratio between its extreme values (0 up to 1), it is possible to
apply any voltage to the motor [16-17]. Elements of space phasor notation are introduced and
used to develop a compact notation. All simulations are obtained using MATLAB/Simulink.
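The effect of the duty ratio on the applied volt-seconds can be sketched as follows: the active vector is applied for δ·T and a null state for the remainder, so the average stator voltage over one switching period is δ·V_active. The simple linear error-to-δ map below is only a stand-in for the paper's fuzzy controller; the function names and numbers are illustrative assumptions.

```python
# The active vector is applied for delta*T and a null vector for the
# rest of the period, so the mean applied voltage is delta * V_active.
# A linear map from normalized torque error to delta stands in here
# for the fuzzy duty ratio controller -- an illustration, not the
# proposed design.

def duty_ratio(torque_error, torque_error_max):
    """Clamp |error| / max to [0, 1]; large errors get the full vector."""
    return min(abs(torque_error) / torque_error_max, 1.0)

def average_voltage(v_active, delta):
    """Mean stator voltage over one switching period."""
    return v_active * delta

delta = duty_ratio(torque_error=0.5, torque_error_max=2.0)
print(delta, average_voltage(311.0, delta))   # 0.25 77.75
```

Small torque errors thus receive only a fraction of the active vector's volt-seconds, which is exactly the mechanism that shrinks the torque ripple relative to classical full-period application.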
The paper is organized as follows. Section 2 gives the theoretical and mathematical analysis of
conventional direct torque control of an induction motor, its basic block diagram and the switching
table; the need for a duty ratio controller to overcome the drawbacks of conventional DTC is discussed
at the end of the section. Section 3 gives the design of the fuzzy logic based duty ratio controller,
specifying the procedure used to develop the controller from its membership functions and fuzzy
rules. Section 4 presents the results of conventional DTC and the proposed model. Finally,
conclusions are drawn in Section 5.
II. DIRECT TORQUE CONTROL
In a DTC drive, flux linkage and electromagnetic torque are controlled independently by the selection
of optimum inverter switching modes. The selection is made to restrict the flux linkages and
electromagnetic torque errors within the respective flux and torque hysteresis bands, to obtain fast
torque response, low inverter switching frequency and low harmonic losses.
The basic Functional block diagram of classical DTC scheme is shown in Figure 1. The instantaneous
values of the stator flux and torque are calculated from stator variable by using a closed loop
estimator. Stator flux and torque can be controlled directly and independently by properly selecting
the inverter switching configuration.
Figure 1. Schematic of Classical stator-flux-based DTC [5]
In a voltage-fed three-phase inverter, the switching commands of each inverter leg are complementary,
so for each leg a logic state Ci (i = a, b, c) can be defined: Ci is 1 if the upper switch is commanded to be
closed and 0 if the lower one is commanded to be closed. Since there are three independent legs,
there are eight different states and hence eight different voltage vectors, obtained by applying the vector
transformation

Vs = sqrt(2/3) · Vdc · (Ca + Cb·e^(j2π/3) + Cc·e^(j4π/3))    (1)
Out of the eight voltage vectors, six are non-zero voltage vectors and two are zero voltage vectors, which
correspond to (Ca, Cb, Cc) = (111)/(000), as shown in Figure 2.
Figure 2. Partition of the d-q plane into six angular sectors
As shown in Figure 2, eight switching combinations can be selected in a voltage source inverter, two
of which determine zero voltage vectors and the others generate six equally spaced voltage vectors
having the same amplitude.
The switching logic block receives the input signals Xλ, XT and θ and generates the appropriate control
voltage vector (switching states) for the inverter via the lookup table shown in Table 1. The
inverter voltage vectors (six active and two zero states) and a typical Ψs are shown in Figure 1.
Neglecting the stator resistance of the machine, we can write

Vs = d(λs)/dt    (2)

or

Δλs = Vs · Δt    (3)
Which means that λs can be changed incrementally by applying stator voltage Vs for time increment
∆t. The flux in machine is initially established to at zero frequency (dc) along the trajectory. With the
rated flux, the command torque is applied and the *
sλ vector starts rotating.
Considering the total and incremental torque, the stator flux vector λs changes quickly by Δλs, but λr changes very sluggishly because of the large rotor time constant Tr. Since λr is more filtered, it moves uniformly at frequency ωe, whereas the movement of λs is jerky. The average speed of both, however, remains the same in the steady-state condition.
According to the principle of operation of DTC, the selection of a voltage vector is made to maintain the torque and stator flux within the limits of two hysteresis bands. The switching selection table for the stator flux vector lying in each sector of the d-q plane is given in Table 1.
Table 1: Switching table of inverter voltage vectors

Hλ   HTe   S(1)  S(2)  S(3)  S(4)  S(5)  S(6)
 1    1     V2    V3    V4    V5    V6    V1
 1    0     V0    V7    V0    V7    V0    V7
 1   -1     V6    V1    V2    V3    V4    V5
-1    1     V3    V4    V5    V6    V1    V2
-1    0     V7    V0    V7    V0    V7    V0
-1   -1     V5    V6    V1    V2    V3    V4
Torque is increased by the V2, V3 and V4 vectors, but decreased by the V1, V5 and V6 vectors. The zero vectors (V0 or V7) short-circuit the machine terminals and keep the flux and torque unaltered.
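Table 1 translates directly into a lookup, sketched below together with simple dead-band comparators standing in for the hysteresis controllers (a simplification of true hysteresis; the band widths and function names are illustrative assumptions):

```python
# Rows of Table 1, keyed by (H_lambda, H_Te); columns are sectors S(1)..S(6).
SWITCHING_TABLE = {
    (1, 1):   [2, 3, 4, 5, 6, 1],
    (1, 0):   [0, 7, 0, 7, 0, 7],
    (1, -1):  [6, 1, 2, 3, 4, 5],
    (-1, 1):  [3, 4, 5, 6, 1, 2],
    (-1, 0):  [7, 0, 7, 0, 7, 0],
    (-1, -1): [5, 6, 1, 2, 3, 4],
}

def flux_comparator(flux_err):
    """Two-level output: demand flux increase (+1) or decrease (-1)."""
    return 1 if flux_err >= 0.0 else -1

def torque_comparator(torque_err, band):
    """Three-level dead-band output (+1 / 0 / -1), a simplified stand-in
    for the true torque hysteresis comparator."""
    if torque_err > band:
        return 1
    if torque_err < -band:
        return -1
    return 0

def select_vector(flux_err, torque_err, sector, torque_band=0.1):
    """Return k such that voltage vector Vk is applied (sector in 1..6)."""
    h_lam = flux_comparator(flux_err)
    h_te = torque_comparator(torque_err, torque_band)
    return SWITCHING_TABLE[(h_lam, h_te)][sector - 1]
```

For example, with the flux vector in sector 1 and both flux and torque increases demanded, the lookup returns V2, matching the sentence above.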
A major concern in DTC of induction motor drives is torque and flux ripple, since none of the inverter switching vectors is able to generate the exact stator voltage required to produce the desired changes in torque and flux. Possible solutions involve the use of a high switching frequency or alternative inverter topologies. An increased switching frequency is desirable since it reduces the harmonic content of the stator currents and reduces torque ripple. However, a high switching frequency results in significantly increased switching losses, leading to reduced efficiency and increased stress on the inverter semiconductor devices; furthermore, a fast processor is required since the available control processing time becomes small. When an alternative inverter topology is used [16], it is possible to use an increased number of switches, but this also increases the cost.
However, if a voltage vector is applied for only a portion of the switching period instead of the entire period, the ripple can be reduced. This is known as duty ratio control: the ratio of the portion of the switching period for which a non-zero voltage vector is applied to the complete switching period is the duty ratio. In duty ratio control, the selected inverter switching state is applied for a fraction δ of the sample period, and the zero switching state is applied for the remainder [7, 9]. The duty ratio is chosen to give an average voltage vector that produces the desired torque change with reduced ripple. The fuzzy controller has two inputs (the torque error Δτ and the position of the stator flux linkage within its sector) and one output (the duty ratio δ).
The duty ratio controller thus provides an optimal average voltage vector for the fuzzy DTC. The fuzzy controller outputs a number between 0 and 1, i.e. the fraction of the switching period (0 to 100%) for which the active vector is applied.
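The effect of the duty ratio on Eq. (3) can be illustrated numerically; the voltage magnitude and sample time below are arbitrary illustrative values, not parameters of the paper's drive.

```python
def flux_increment(v_active, delta, t_sample):
    """Average stator-flux increment over one sample period when the
    active vector (magnitude v_active) is applied for a fraction delta
    and the zero vector (no flux change, Rs neglected) for the rest:
    delta_lambda = delta * Vs * Ts."""
    return v_active * delta * t_sample

full = flux_increment(400.0, 1.0, 1e-4)  # active vector for the whole period
half = flux_increment(400.0, 0.5, 1e-4)  # duty ratio 0.5 halves the increment
```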
III. DESIGN OF THE DUTY RATIO FUZZY CONTROLLER
The fuzzy logic based duty ratio controller, which generates the optimal voltage vector, is designed using the Matlab fuzzy toolbox. Two Mamdani-type fuzzy controllers are developed: one for stator flux above the reference value and another for stator flux below it. Each fuzzy controller has two inputs (torque error and angle) and one output (duty ratio). Figure 3 shows the membership functions of the inputs and outputs; as shown there, Gaussian membership functions are employed. The rule base comprises two groups of rules, each of which contains nine rules, as shown in Table 2. The centroid method is employed for defuzzification.
Figure 3. Fuzzy membership functions
Table 2. Rules for fuzzy duty ratio controllers

                                Position of stator flux error
                  Torque error   Small    Medium   Large
Stator flux <     small          Medium   Small    Small
Ref. value        medium         Medium   Medium   Medium
                  large          Large    Large    Large
Stator flux >     small          Small    Small    Medium
Ref. value        medium         Medium   Medium   Large
                  large          Large    Large    Large
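A minimal Mamdani sketch of one rule group of Table 2 (stator flux below the reference) is given below. The Gaussian membership centers and widths and the normalized input ranges are illustrative guesses, not the paper's tuned values; only the min/max inference and centroid defuzzification match the described design.

```python
import math

def gauss(x, center, sigma):
    return math.exp(-0.5 * ((x - center) / sigma) ** 2)

# Illustrative membership parameters on normalized [0, 1] universes.
IN_MF = {"small": (0.0, 0.2), "medium": (0.5, 0.2), "large": (1.0, 0.2)}
OUT_MF = {"small": (0.2, 0.15), "medium": (0.5, 0.15), "large": (0.8, 0.15)}

# Rule group of Table 2 for stator flux below the reference value:
# (torque error, flux-position error) -> duty ratio.
RULES = {
    ("small", "small"): "medium", ("small", "medium"): "small",
    ("small", "large"): "small",
    ("medium", "small"): "medium", ("medium", "medium"): "medium",
    ("medium", "large"): "medium",
    ("large", "small"): "large", ("large", "medium"): "large",
    ("large", "large"): "large",
}

def duty_ratio(torque_err, pos_err, n=201):
    """Mamdani inference: min for AND, max for aggregation, centroid
    defuzzification over the duty-ratio universe [0, 1]."""
    ys = [i / (n - 1) for i in range(n)]
    agg = [0.0] * n
    for (te, pe), out in RULES.items():
        w = min(gauss(torque_err, *IN_MF[te]), gauss(pos_err, *IN_MF[pe]))
        for i, y in enumerate(ys):
            agg[i] = max(agg[i], min(w, gauss(y, *OUT_MF[out])))
    return sum(y * a for y, a in zip(ys, agg)) / sum(agg)
```

With these rules, a larger torque error yields a larger duty ratio, i.e. the active vector is held for a larger fraction of the switching period.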
Figure 4. Flux response
Figure 5. Speed response
Figure 6. Electrical Torque Response
IV. RESULTS AND ANALYSIS
In order to study the performance of the conventional DTC and the fuzzy duty ratio controller based DTC, their Simulink models were developed in the Matlab 7.1 environment for an 11 kW, 400 V, 4-pole, 50 Hz, 3-phase induction motor. The Simulink model of the fuzzy logic duty ratio controlled DTC is used to obtain the results, with the fuzzy controllers developed using the fuzzy toolbox. The sampling period of the system is 0.001 s. To compare conventional DTC with fuzzy duty ratio DTC, the load torque is varied in a step: it starts at 0 N-m and is increased to 4 N-m at t = 0.2 s. The reference speed is initially kept at 0 rad/sec and is increased to 50 rad/sec at t = 0.03 s.
Figures 4(a) and 4(b) show the stator flux trajectory for classical DTC and the proposed duty ratio DTC. The classical DTC flux path contains more ripple than the proposed scheme, whose trajectory is smoother and has fewer ripples.
Figures 5(a) and 5(b) show the speed response of conventional DTC and the proposed fuzzy logic duty ratio DTC. The reference speed is subjected to a step change from 0 to 50 rad/sec to study the dynamic response of both schemes. From the results we can observe that the peak overshoot of the duty ratio DTC is decreased, and it shows an improved steady-state response compared to conventional DTC.
Figures 6(a) and 6(b) show the electric torque response of classical DTC and the proposed duty ratio DTC, respectively. To study both the dynamic and steady-state behaviour, the drive is initially run at no load, and the load is then suddenly increased to 4 N-m. As shown in Figure 6(a), classical DTC produces a high torque ripple, between 3 N-m and 5 N-m, when subjected to the 4 N-m load. The torque ripple in the proposed DTC scheme is reduced, as shown in Figure 6(b), where the steady-state torque is in line with the reference value. The peak overshoot when the torque is subjected to a sudden perturbation from 0 to 4 N-m is 6.4 N-m in conventional DTC, whereas in the proposed scheme it is reduced to 4.8 N-m, representing a better dynamic response.
The simulation results suggest that the proposed fuzzy logic duty ratio controlled DTC of the induction machine can achieve precise control of the stator flux and torque. Comparison of the simulation results shows that fuzzy logic duty ratio DTC is superior to conventional DTC and minimizes the torque ripple to a large extent.
V. CONCLUSIONS
In this paper a fuzzy logic duty ratio controller based DTC has been proposed. Improved torque and speed responses are obtained using the fuzzy duty ratio controlled DTC for the induction motor. The simulation results suggest that fuzzy logic duty ratio controlled DTC of the induction machine can achieve precise control of the stator flux and torque. Compared to conventional DTC, the presented method is easily implemented, and the steady-state ripple of both torque and flux is considerably improved. The main improvements shown are:
• Considerable reduction in torque and speed ripples.
• Simulation results show the validity of the proposed method in achieving a considerable reduction in torque and speed ripples while maintaining good performance and reducing the energy consumption from the supply mains.
• The method of selecting the duty ratio between active and null states is promising and easier to implement.
• Reduction of overshoots and undershoots in the speed and torque responses.
• Smoother flux trajectory path.
As future work, a duty ratio controller can be developed for SVPWM-based fuzzy direct torque control of the induction machine, along with an adaptive fuzzy controller suitable for any type of motor.
REFERENCES
[1] B.K.Bose, Power electronics and variable frequency drives, IEEE Press, New York, 1996.
[2] Takahashi I, Noguchi T. “A new quick-response and high-efficiency control strategy of an induction motor”, IEEE Transactions on Industry Applications [ISSN 0093-9994], Vol. 22, No. 5, pp. 820-827, 1986.
[3] Takahashi and Y. Ohmori, "High-Performance Direct Torque Control of an Induction Motor", IEEE Trans.
On Industry Applications, vol. 25, no. 2, Mar./Apr. 1989, pp.257-264
[4] P. Tiitinen, P. Pohjalainen, and J. Lalu, “The next generation motor control method: Direct Torque Control (DTC)”, EPE Journal, Vol. 5, No. 1, March 1995, pp. 14-18.
[5] D. Casadei, F. Profumo, G. Serra, A. Tani. “FOC and DTC: Two variable schemes for induction motors
torque control”, Proc. of the IEEE Trans. Power Electronics, Vol.17, No. 5, 2002.
[6] J. Kang, S. Sul, New direct torque control of induction motor for minimum torque ripple and constant
switching frequency, IEEE Trans. Ind. Applicat., vol. 35, Sept./Oct. 1999, pp. 1076–1082.
[7] R. Toufouti, S. Meziane, H. Benalla, “Direct Torque Control of Induction motor using fuzzy logic”, ACSE Journal, Volume (6), Issue (2), June 2006.
[8] D. Casadei, G. Grandi, G. Serra, A. Tani. “Effects of flux and torque hysteresis band amplitude in direct torque control of induction machines”, Proc. IEEE-IECON-94, pp. 299-304, 1994.
[9] R.Toufouti S.Meziane ,H. Benalla, “Direct Torque Control for Induction Motor Using Fuzzy Logic”
ICGST Trans. on ACSE, Vol.6, Issue 2, pp. 17-24, June, 2006.
[10] Ji-Su Ryu, In-Sic Yoon, Kee-Sang Lee and Soon-Chan Hong, “Direct Torque Control of Induction Motors
Using Fuzzy Variable switching Sector”, Industrial Electronics, 2001. Proceedings. ISIE 2001. IEEE
International Symposium on Volume 2, Issue , 2001 Page(s):901 - 906 vol.2
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
479 Vol. 1, Issue 5, pp. 473-479
[11] Thomas G. Habetler, Deepakraj M. Divan: “Control strategies for Direct Torque Control using Discrete pulse modulation”, IEEE Transactions on Industry Applications, Vol. 27, No. 5, Sept/Oct 1991.
[12] D. Casadei, G.Serra, A. Tani, and L. Zarri, “Assessment of direct torque control for induction motor drives”
, Bulletin of the Polish Academy of Technical Sciences, Vol. 54, No. 3, 2006
[13] Hui-Hui Xiao, Shan Li, Pei-Lin Wan, Ming-Fu Zhao, "Study on Fuzzy Direct Torque Control System", Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Beijing, 4-5 August 2002.
[14] G. Escobar, A.M. Stankovic, E. Galvan, J.M. Carrasco, and R.A. Ortega, “A family of switching control
strategies for the reduction of torque ripple in DTC”, IEEE Trans. on Control Systems Technology 11 (6),
933–939 (2003).
[15] TANG, L. et al: “A New Direct Torque Control Strategy for Flux and Torque Ripple Reduction for
Induction Motors Drive by Space Vector Modulation”, Conf. Rec. IEEE-PESC’2001, Vol. 2, pp. 1440–
1445, 2001.
[16] Milan Zalman, Ivica Kuric, “Direct Torque and flux control of Induction machine and fuzzy controller”, Journal of Electrical Engineering, Vol. 56, No. 9-10, 2005, 278-280.
[17] Shahbazi, M. Moghani, J.S Mirtalaei, S.M.M, “An improved direct torque control scheme for a matrix
converter-fed induction motor”, Universities of power Engineering conference AUPEC 2007, Australia.
[18] Dal.Y.Ohm, “Dynamic model of Induction motors for vector control” Drivetech, Inc., Blacksburg, Virginia
[19] Cascone V. 1989. Three Level Inverter DSC control strategy for traction drives. Proc. of 5th European Conference on Power Electronics and Applications. 1(377): 135-139.
[20] Fatiha Zidani, Rachid Nait said. 2005. Direct Torque Control of Induction Motor with Fuzzy Minimization
Torque Ripple. Journal of Electrical Engineering. 56(7-8): 183-188.
[21] Kang J. K., Sul. S. K. 1998. Torque Ripple Minimization Strategy for Direct Torque Control of Induction
Motor. IEEE-IAS annual meeting. pp. 438-443
[22] Lascu C., Boldea. I, Blaabjerg. 1998. A Modified Direct Torque Control (DTC) for Induction Motor
Sensorless Drive. IEEE-IAS Annual Meeting. pp. 415-422.
[23] Pengcheng Zhu, Yong Kang and Jian Chen. 2003. Improve Direct Torque Control Performance of
Induction Motor with Duty Ratio Modulation. Conf. Rec. IEEE-IEMDC’03. 1: 994-998.
[24] Sayeed Mir and Malik E. Elbuluk. 1995. Precision Torque Control in Inverter-Fed Induction Machines
using Fuzzy Logic. IEEE-IAS annual meeting. pp. 396-401.
[25] Malik E. Elbuluk, “Torque Ripple Minimization in Direct Torque Control of Induction Machines,”IEEE-
IAS annual meeting, Vol. 1, pp. 12-16, oct 2003.
[26] T. Brahmananda Reddy, J. Amarnath and D. Subba Rayudu “Direct Torque Control of Induction Motor
Based on Hybrid PWM Method for Reduced Ripple:A Sliding Mode Control Approach” ACSE Journal,
Volume (6), Issue (4), Dec., 2006
Authors
Sudheer Hanumanthakari received the B.Tech degree in EEE from JNTU, Hyderabad and the M.Tech degree in Power Electronics from NTU, Hyderabad, and is currently pursuing a PhD in Electrical Engineering at JNTU, Anantapur. He has nearly 8 years of teaching experience. He is currently working as Asst. Professor in FST-IFHE (ICFAI University), Hyderabad. His areas of interest are neural network and fuzzy logic applications in power electronic drives such as FOC and DTC.
Kodad S. F. received the B.E. degree in EEE from Karnataka University and the M.Tech degree in Energy Systems Engg. from JNTU, Hyderabad. He received his Ph.D. degree in Electrical Engg. from JNTU, Hyderabad, India in 2004. He has nearly 20 years of teaching experience. Currently, he is working as Principal of Krishna Murthy Institute of Tech. and Engineering. His areas of interest are neural networks, fuzzy logic, power electronics, power systems, artificial intelligence, Matlab, renewable energy sources, etc.
Sarvesh Botlaguduru received the B.Tech degree in EEE from JNTU, Anantapur and the M.Tech in Instrumentation and Control from SV University, Tirupathi. He received his Ph.D. degree in Electrical Engg. from IIT, Kharagpur, India in 1995. He has nearly 30 years of teaching experience. Currently, he is working as Professor and Head of EEE at JNTUA, Anantapur, Andhra Pradesh, India. His areas of interest are Instrumentation and Control, and Control Systems.
INFLUENCE OF ALUMINUM AND TITANIUM ADDITION ON
MECHANICAL PROPERTIES OF AISI 430 FERRITIC
STAINLESS STEEL GTA WELDS
G. Mallaiah¹, A. Kumar² and P. Ravinder Reddy³
¹Department of Mechanical Engineering, KITS, Huzurabad, A.P., India
²Department of Mechanical Engineering, NIT, Warangal, A.P., India
³Department of Mechanical Engineering, CBIT, Hyderabad, A.P., India
ABSTRACT
An attempt has been made to study the influence of grain refining elements such as aluminium (Al) and titanium (Ti) on the mechanical properties of AISI 430 ferritic stainless steel welds made through the gas tungsten arc welding (GTAW) process. Aluminium (Al) and titanium (Ti) powders of -100 µm mesh were added in the range from 1 g to 3 g between the butt joint of the ferritic stainless steel. The effect of post-weld annealing at 830°C with 30 min holding followed by water quenching on the microstructure and mechanical properties of AISI 430 ferritic stainless steel welds was also studied. From this investigation, it is observed that the joints fabricated by the addition of 2 g Al (2.4 wt %) and 2 g Ti (0.7 wt %) showed improved strength and ductility compared to all other joints. The observed mechanical properties have been correlated with the microstructure and fracture features.
KEYWORDS: AISI 430 Ferritic Stainless Steel, Gas Tungsten Arc Welding, Aluminium, Titanium, Mechanical Properties
I. INTRODUCTION
Ferritic stainless steels (FSS) contain 16-30 wt. % Cr depending on the alloy. Since this steel class is easy to form and resistant to atmospheric corrosion, it is commonly used in architecture, interior and exterior decoration, the food industry, drying machines and the chemical industry. Ferritic stainless steels are increasingly used for automotive exhaust systems [1] because of their excellent resistance to stress corrosion cracking and good toughness, ductility and weldability compared with conventional austenitic stainless steels [2, 3]. In certain applications, such as the production of titanium by the Kroll process, where titanium tetrachloride (TiCl4) is reduced by magnesium, austenitic stainless steels are used for the reduction retorts with an inner lining of ferritic stainless steel to mitigate the problem of leaching of the nickel by molten magnesium. Gas tungsten arc welding (GTAW) is generally used for welding these alloys because it produces very high quality welds; its lower heat input and lower current density reduce the arc temperature and arc forces [4].
The principal weldability issue with ferritic stainless steels is maintaining adequate toughness and ductility in the weld zone (WZ) and heat affected zone (HAZ) of weldments. This is due to the large grain size in the fusion zone [5, 6], because these steels solidify directly from the liquid to the ferrite phase without any intermediate phase transformation. Normally, FSS has a fine-grained, ductile ferrite structure. In fusion welding, however, grain coarsening and intergranular carbide precipitation negatively affect the mechanical characteristics of the welded joint, and such grain coarsening results in lower toughness [7-9]. Pronounced grain growth takes place in the HAZ and carbide precipitation occurs at the grain boundaries, which makes the weld more brittle and decreases its corrosion resistance. According to the literature, all stainless steels with carbon content above 0.001% are susceptible to carbide precipitation [10, 11]. Chromium carbide precipitation may be
responsible for embrittlement and intergranular corrosion, and may reduce resistance to pitting corrosion. Furthermore, cracks can occur in the weld metal as it cools. For this reason, the application of this group of alloys is limited [12]. The problem of grain coarsening in the weld zone of ferritic stainless steel welds is addressed by limiting the heat input through low heat input welding processes [13-16]. The formation of fine equiaxed grains in the weld fusion zone helps in reducing solidification cracking and also in improving the mechanical properties [17, 18]. It has also been suggested that nitride and carbide formers such as B, Al, V and Zr can be added to FSS to suppress grain growth during welding [19]. Studies have been conducted on grain refinement of ferritic stainless steel welds by electromagnetic stirring and current pulsing [20, 21], as well as through liquid metal chilling [22]. Current pulsing reduces the overall heat input without any spatter [23]. Earlier, attempts have been made to grain refine the welds of these steels by the addition of elements such as titanium, aluminium and copper [24, 25].
From the reported literature it is observed that grain refinement in the weld zone of ferritic stainless steel welds by the addition of grain refining elements such as aluminium (Al) and titanium (Ti) at specified weight percentages, with the aim of improving the mechanical properties, has not been studied. The objective of the present study is to investigate the influence of Al and Ti addition on the microstructure and mechanical properties of AISI 430 ferritic stainless steel welds.
II. EXPERIMENTAL PROCEDURE
Rolled plates of 5 mm thick AISI 430 ferritic stainless steel were cut to the required dimensions. The chemical composition and mechanical properties of the base material (AISI 430 ferritic stainless steel) are presented in Tables 1 and 2, respectively. GTA welding was carried out using a Master TIG AC/DC 3500W welding machine (make: Kemppi). The GTAW process is well suited for joining thin and medium-thickness materials such as aluminium alloys and steels, and for applications where metallurgical control is critical. The advantages of the GTAW process are low heat input, less distortion, resistance to hot cracking and better control of the fusion zone, thereby improving the mechanical properties. A single ‘V’ butt-joint configuration (Fig. 1) was selected to fabricate the weld joints. Prior to welding, the base metal plates were wire brushed, degreased using acetone and preheated to 100°C. All necessary care was taken to avoid joint distortion during welding. A filler material conforming to the composition given in Table 1 was used.
Table 1. Chemical composition of the base material and filler material (wt. %)

Material                         C      Mn     Si     P      S      Ni     Cr     Fe
Base material (AISI 430 FSS)     0.044  0.246  0.296  0.023  0.002  0.164  17.00  balance
Filler material (AISI 430 FSS)   0.044  0.246  0.296  0.023  0.002  0.164  17.00  balance
Table 2. Mechanical properties of base material

Material                       UTS (MPa)  YS (MPa)  Elongation (%)  Impact toughness (J)  Fusion zone hardness (Hv)
Base material (AISI 430 FSS)   424        318       13              22                    220
Figure 1 Schematic sketch of the weld joint (All dimensions are in ‘mm’)
Al and Ti were added as powders of -100 µm mesh (99% purity) in the range from 1 g to 3 g between the butt joint of the ferritic stainless steel. The weld joint was completed in three passes. The welding parameters are given in Table 3. In order to investigate the influence of post-weld heat treatment on the microstructure and mechanical properties of the welds, post-weld annealing at 830°C with 30 min holding followed by water quenching was adopted [26].
Table 3. GTA welding parameters
Parameter Value
Welding current (Amps) 120
Welding speed (mm/min) 50
Electrode polarity DCSP
Arc voltage (V) 10-13
Arc gap (mm) 2
Filler wire diameter (mm) 1.6
Electrode 2% Thoriated tungsten
Number of passes 3
Shielding gas (Argon), flow rate (L/min) 10
Purging gas(Argon) flow rate (L/ min) 5
Preheat temperature (°C) 100
2.1. Metallography
The objective of this section is to carry out detailed microstructural examinations of the ferritic stainless steel weldments using an optical microscope and a scanning electron microscope (SEM).
In order to observe the microstructure under the optical microscope, specimens were cut from the welds, prepared according to standard procedures, and etched using aqua regia (1 part HNO3, 3 parts HCl). Microstructures of the welds in the as-welded and post-weld annealed conditions were studied and recorded. The scanning electron microscope was used for fractographic examination.
2.2. Mechanical Testing
The objective of this section is to evaluate the transverse tensile properties (tensile strength, yield strength and percentage elongation) of the FSS weldments in the as-welded and post-weld annealed conditions by conducting tensile tests, and to measure the fusion zone hardness of all the weldments. FSS are high in chromium (16-30%), and their carbon (up to 0.12%) tends to form chromium carbides at grain boundaries in the weld heat affected zone. Refinement of the grains in the weldment and an increase in weld ductility and toughness are the major requirements for FSS weldments. In order to assess the toughness of the weld joints, Charpy impact tests were performed.
The tensile test specimens were made as per ASTM standards by cutting the weld joints and machining them by EDM wire cut to the required dimensions. The configuration of the tensile test specimen adopted is given in Fig. 2. The tensile test was conducted on a computer controlled universal testing machine (model: TUE-C-600) at a crosshead speed of 0.5 mm/min. During the tensile tests, all the weld specimens failed within the weld region. Micro-hardness tests were carried out using a Vickers digital micro-hardness tester in the transverse direction of the weld joint. A load of 300 g was applied for a duration of 10 s. The micro-hardness was measured at intervals of 0.1 mm across the weld and 0.5 mm across the heat-affected zone (HAZ) and the unaffected base metal.
Charpy impact test specimens were prepared to the dimensions shown in Fig. 3 to evaluate the impact toughness of the weld metal. Since the thickness of the plate was small, subsize specimens [27] were prepared. The impact test was conducted at room temperature using a pendulum-type Charpy impact testing machine.
III. RESULTS
3.1. Mechanical properties
Mechanical properties of all the weld joints in as-welded and post-weld annealed conditions were
evaluated and the results are presented in Tables 4 and 5 respectively.
Figure 2 Configuration of tensile test specimen (All dimensions are in ‘mm’)
Figure 3 Configuration of Charpy V-notch impact test specimen
(All dimensions are in ‘mm’)
Table 4. Mechanical properties of AISI 430 ferritic stainless steel weldments in as-welded condition

Joint condition                                             UTS (MPa)  YS (MPa)  El (%)  Impact toughness (J)  Fusion zone hardness (Hv)
1g Al (1.7 wt %) addition                                   455        346       3.6     2                     200
2g Al (2.4 wt %) addition                                   468        357       6.0     4                     230
3g Al (6.2 wt %) addition                                   440        328       2.7     4                     210
1g Ti (0.3 wt %) addition                                   419        335       2.7     4                     210
2g Ti (0.7 wt %) addition                                   424        356       4.6     4                     245
3g Ti (0.9 wt %) addition                                   414        330       2.5     3                     232
Filler material (AISI 430 FSS) addition without Al and Ti   385        325       2.3     3                     195
Table 5. Mechanical properties of AISI 430 ferritic stainless steel weldments in post-weld annealed condition

Joint condition                                             UTS (MPa)  YS (MPa)  El (%)  Impact toughness (J)  Fusion zone hardness (Hv)
1g Al (1.7 wt %) addition                                   467        355       12      4                     215
2g Al (2.4 wt %) addition                                   478        385       14      6                     240
3g Al (6.2 wt %) addition                                   450        346       8       4                     220
1g Ti (0.3 wt %) addition                                   421        340       8       4                     225
2g Ti (0.7 wt %) addition                                   484        365       15      6                     255
3g Ti (0.9 wt %) addition                                   415        334       10      4                     240
Filler material (AISI 430 FSS) addition without Al and Ti   393        330       7.8     4                     200
From the results it is observed that the addition of 2 g Al (2.4 wt %) or 2 g Ti (0.7 wt %) to the weld pool led to an increase in strength and ductility compared to all other joints. This can be attributed to the fine-grained microstructure and to the formation of precipitates, aluminium carbide (Al4C3) and titanium carbide (TiC) respectively, in the weld zone of the ferritic stainless steel weldments, which are believed to be responsible for the grain refinement.
3.2 Microstructure studies
Microstructures of all the joints were examined in the weld region of the ferritic stainless steel welds in the as-welded and post-weld annealed conditions, and the results are presented in Figs. 4, 5 and 6. From the results it is observed that the joints fabricated by the addition of 2 g Al (2.4 wt %) and 2 g Ti (0.7 wt %) exhibited fine equiaxed grains compared to all other joints. The grain size in the weld zone of the ferritic stainless steel weldments was measured using the line intercept method [28], and the results are presented in Table 6. The chemical composition of all the weld metals (wt %) is given in Table 7. Scanning electron microscopy (SEM) was used to observe the distribution of precipitates in the fusion zone of the weldments made by the addition of 2 g Al (2.4 wt %) and 2 g Ti (0.7 wt %). SEM micrographs of the precipitates are shown in Fig. 7.
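The line intercept estimate behind Table 6 can be sketched as below: the mean grain size is the total test-line length divided by the total number of grain-boundary intercepts. The line length and intercept counts in the example are hypothetical, chosen only to illustrate the arithmetic.

```python
def mean_intercept_grain_size(line_length_um, intercept_counts):
    """Mean lineal intercept grain size.

    line_length_um: true length (in micrometres, corrected for
    magnification) of each test line drawn on the micrograph.
    intercept_counts: number of grain-boundary crossings counted
    on each test line."""
    total_intercepts = sum(intercept_counts)
    total_length = line_length_um * len(intercept_counts)
    return total_length / total_intercepts

# Hypothetical example: five 1000 um test lines, 5 intercepts each,
# gives a 200 um mean grain size (the order reported for the 2 g additions).
size = mean_intercept_grain_size(1000.0, [5, 5, 5, 5, 5])
```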
Table 6. Grain size in the weld zone of AISI 430 ferritic stainless steel weldments

Joint condition                                             Grain size (µm)
1g Al (1.7 wt %) addition                                   300
2g Al (2.4 wt %) addition                                   200
3g Al (6.2 wt %) addition                                   300
1g Ti (0.3 wt %) addition                                   250
2g Ti (0.7 wt %) addition                                   200
3g Ti (0.9 wt %) addition                                   360
Filler material (AISI 430 FSS) addition without Al and Ti   380
Figure 4 Microstructure of the weld region of AISI 430 ferritic stainless steel welds in the as-welded condition
(a) 1g Al (1.7 wt %) addition (b) 2g Al (2.4 wt %) addition
(c) 3g Al (6.2 wt %) addition (d) 1g Ti (0.3 wt %) addition
(e) 2g Ti (0.7 wt %) addition (f) 3g Ti (0.9 wt %) addition
Figure 5 Microstructure of the weld region of AISI 430 ferritic stainless steel welds in the post-weld annealed condition
(a) 1g Al (1.7 wt %) addition (b) 2g Al (2.4 wt %) addition
(c) 3g Al (6.2 wt %) addition (d) 1g Ti (0.3 wt %) addition
(e) 2g Ti (0.7 wt %) addition (f) 3g Ti (0.9 wt %) addition
Figure 6 Microstructure of the weld region of AISI 430 ferritic stainless steel welds made by the addition of filler material without Al and Ti
(a) As-welded condition (b) Post-weld annealed condition
Figure 7 SEM micrographs of the precipitates in the fusion zone of ferritic stainless steel weldments
(a) 2g Al (2.4 wt %) addition (b) 2g Ti (0.7 wt %) addition
3.3 Fractography
The fractured surfaces of the tensile and impact specimens of the AISI 430 ferritic stainless steel weldments in the as-welded and post-weld annealed conditions were analyzed using SEM to understand the fracture surface morphology. Figs. 8 and 9 and Figs. 10 and 11 display the fractographs of the tensile and impact specimens of the weldments made by the addition of Al, Ti and filler material (AISI 430 FSS) without Al and Ti in the as-welded and post-weld annealed conditions, respectively.
Figure 8 Fractographs of tensile (a, b, c, d) and impact specimens (e, f, g, h)
of ferritic stainless steel weldments in as-welded condition
(a) 1g Al (1.7 wt %) addition (b) 2g Al (2.4 wt %) addition
(c) 3g Al (6.2 wt %) addition (d) filler material (AISI 430 FSS)
addition without Al
(e) 1g Al (1.7 wt %) addition (f) 2g Al (2.4 wt %) addition
(g) 3g Al (6.2 wt %) addition (h) filler material (AISI 430 FSS)
addition without Al
Figure 9 Fractographs of tensile (a, b, c, d) and impact specimens (e, f, g, h)
of ferritic stainless steel weldments in as-welded condition
(a) 1g Ti (0.3 wt %) addition (b) 2g Ti (0.7 wt %) addition
(c) 3g Ti (0.9 wt %) addition (d) filler material (AISI 430 FSS)
addition without Ti
(e) 1g Ti (0.3 wt %) addition (f) 2g Ti (0.7 wt %) addition
(g) 3g Ti (0.9 wt %) addition (h) filler material (AISI 430 FSS)
addition without Ti
Figure 10 Fractographs of tensile (a, b, c, d) and impact specimens (e, f, g, h) of ferritic
stainless steel weldments in post-weld annealed condition
(a) 1g Al (1.7 wt %) addition (b) 2g Al (2.4 wt %) addition
(c) 3g Al (6.2 wt %) addition (d) filler material (AISI 430 FSS)
addition without Al
(e) 1g Al (1.7 wt %) addition (f) 2g Al (2.4 wt %) addition
(g) 3g Al (6.2 wt %) addition (h) filler material (AISI 430 FSS)
addition without Al
Figure 11 Fractographs of tensile (a, b, c, d) and impact specimens (e, f, g, h)
of ferritic stainless steel weldments in post-weld annealed condition
(a) 1g Ti (0.3 wt %) addition (b) 2g Ti (0.7 wt %) addition
(c) 3g Ti (0.9 wt %) addition (d) filler material (AISI 430 FSS)
addition without Ti
(e) 1g Ti (0.3 wt %) addition (f) 2g Ti (0.7 wt %) addition
(g) 3g Ti (0.9 wt %) addition (h) filler material (AISI 430 FSS)
addition without Ti
Table 7. Chemical composition of all weld metals (wt. %)

Joint condition                  C      Mn    Si    P      S      Ni     Cr     Al     Ti     Fe
1g Al addition                   0.040  0.11  0.27  0.006  0.028  0.261  17.02  1.7    0.01   balance
2g Al addition                   0.029  0.25  0.30  0.004  0.030  0.330  17.09  2.4    0.01   balance
3g Al addition                   0.035  0.18  0.25  0.002  0.027  0.235  17.20  6.2    0.02   balance
1g Ti addition                   0.035  0.05  0.70  0.021  0.005  0.164  17.04  0.03   0.3    balance
2g Ti addition                   0.023  0.36  0.31  0.024  0.005  0.342  17.21  0.06   0.7    balance
3g Ti addition                   0.024  0.28  0.29  0.021  0.006  0.322  17.40  0.09   0.9    balance
Filler material (AISI 430 FSS)
addition without Al and Ti       0.036  0.38  0.41  0.007  0.030  0.241  16.23  0.036  0.013  balance
IV. DISCUSSION
From this investigation it is observed that the addition of Al to the weld pool up to 2g (2.4 wt %)
resulted in improved mechanical properties; this can be attributed to the formation of precipitates
such as aluminium carbide (Al4C3). Increasing the Al content beyond 2g (2.4 wt %) resulted in
reduced mechanical properties, which may be attributed to the strong detrimental effect of ferrite
promotion outweighing the beneficial effect of precipitation. The addition of Ti to the weld pool up to
2g (0.7 wt %) also resulted in improved mechanical properties; this may be attributed to solid-solution
strengthening and the formation of titanium carbides (TiC), which are believed to be responsible for
grain refinement. Increasing the Ti content beyond 2g (0.7 wt %) resulted in reduced mechanical
properties, which can be attributed to the Ti addition being in excess of that required for the formation
of TiC and to the effect of ferrite promotion.
The tensile and impact fracture surfaces of ferritic stainless steel weldments with Al addition and with
filler material (AISI 430 FSS) addition without Al in the as-welded condition (Fig. 8 a-h) show
cleavage fracture, indicating brittle failure. The tensile and impact fracture surfaces of weldments
made by the addition of 1g Al (1.7 wt %) and 3g Al (6.2 wt %) in the post-weld annealed condition
(Fig. 10 (a), (c), (e) & (g)) show quasi-cleavage fracture, indicating mixed ductile and brittle fracture.
Likewise, the tensile and impact fracture surfaces of weldments with Ti addition and with filler
material (AISI 430 FSS) addition without Ti in the as-welded condition (Fig. 9 a-h) show cleavage
fracture, indicating brittle failure, while those made by the addition of 1g Ti (0.3 wt %) and 3g Ti
(0.9 wt %) in the post-weld annealed condition (Fig. 11 (a), (c), (e) & (g)) show quasi-cleavage
fracture. In contrast, the tensile and impact fracture surfaces of weldments made by the addition of
2g Al (2.4 wt %) and 2g Ti (0.7 wt %) in the post-weld annealed condition (Fig. 10 (b) & (f) and
Fig. 11 (b) & (f), respectively) exhibit fine dimples in the joints. Since fine dimples are the
characteristic feature of ductile fracture, the joints made by the addition of 2g Al (2.4 wt %) and 2g Ti
(0.7 wt %) in the post-weld annealed condition show higher ductility than all other joints and the base
material. This is attributed to the martensite formed in the HAZ being tempered during post-weld
annealing, which reduces embrittlement and hence improves ductility.
V. CONCLUSIONS
The influence of Al and Ti addition in the range from 1g Al (1.7wt%) to 3g Al (6.2 wt %) and 1g Ti
(0.3 wt %) to 3g Ti (0.9 wt %) and filler material (AISI 430 ferritic stainless steel) addition without Al
and Ti on microstructure and mechanical properties of AISI 430 ferritic stainless steel welds have
been analyzed in detail and the following conclusions are derived.
1. The addition of 2g Al (2.4 wt %) and 2g Ti (0.7 wt %) resulted in better tensile properties (ultimate
tensile strength, yield strength and percentage elongation) compared to all other joints. This is due to
the fine-grained microstructure and the formation of aluminium carbides (Al4C3) and titanium
carbides (TiC), respectively, in the weld zone of the ferritic stainless steel weldments, which are
believed to be responsible for grain refinement.
2. There is a marginal improvement in the ductility of the ferritic stainless steel weldments made by
the addition of 2g Al (2.4 wt %) and 2g Ti (0.7 wt %) in the post-weld annealed condition compared
to all other joints. This is attributed to the formation of fine dimples and ductile voids in the weld zone
of the ferritic stainless steel weldments.
3. The hardness was highest in the fusion zone of the ferritic stainless steel weldments made by the
addition of 2g Ti (0.7 wt %) compared to all other joints. This can be explained by the presence of
fine Ti-based carbides (TiC) and solid-solution strengthening by the element Ti during welding.
ACKNOWLEDGEMENTS
The authors are thankful to Dr. G. Madhusudhan Reddy, Defence Metallurgical Research Laboratory,
Hyderabad, India for his support and continued encouragement of this work. The authors are also
thankful to the authorities of NIT, Warangal for providing the facilities to carry out this work. One of
the authors (G. Mallaiah) is thankful to the principal and the management of KITS, Huzurabad for
their constant support during this work.
REFERENCES
[1] Wang X, Ishii H, Sato K. Fatigue and microstructure of welded joints of metal sheets for automotive
exhaust system. JSAE Review 2003; 24(3):295-301.
[2] Fujita N, Ohmura K, Yamamoto A. Changes of microstructures and high temperature properties during high
temperature service of Niobium added ferritic stainless steels. Mat.Sci.Engg: A 2003; 351(1-2): 272-281.
[3] The Iron and steel Institute of Japan, Ferrum 2006; 11(10):2-6. [in Japanese].
[4] Balasubramanian V, Lakshminarayana AK. Mechanical Properties of GMAW, GTAW and FSW Joints of
RDE-40 Aluminium Alloy [J]. International Journal of Microstructure and Materials Properties. 2008; 3(6):
837.
[5] Hedge J.C., Arc Welding Chromium Steel and Iron, Metal Progress. 27(4), 1935, pp.33-38.
[6] Miller W.B., “Welding of Stainless and Corrosion Resistant alloys”, Metal Progress.20 (12), 1931, pp.68-
72.
[7] Lippold JC, Kotecki DJ. Welding metallurgy and weldability of stainless steels. A John Wiley &
Sons,Inc.,Publication 2005;pp.88-135.
[8] Moustafa IM, Moustafa MA, Nofal AA. Carbide formation mechanism during solidification and annealing
of 17% Cr-ferritic steel. Mater Lett. 2000; 42(6):371-379.
[9] Ghosh PK, Gupta SR, Randhawa HS.Characteristics of a pulsed-current, vertical-up gas metal arc weld in
steel. Metall Mater Trans A2000; 31A:2247-2259.
[10] Folkhard E. Welding metallurgy of stainless steels. New York: Springer-Verlag Wien; 1988.
[11] Kou S. Welding metallurgy. New York: John Wiley & Sons; 1987.
[12] Parmar R S. Welding Processes and Technology [M]. Khanna Publishers, New Delhi, 2003.
[13] Madhusudhan Reddy G, Mohandas T. Welding aspects of ferritic stainless steels, Indian welding journal.
27(2).1994, p7.
[14] Dorschu K.E., “Weldability of a new ferritic stainless steel, weld”. J., 50(9), 1971, p 408s.
[15] Kah, Weldability of Ferritic Stainless Steels, Weld. J., 1981, p 135s.
[16] Brando W.S., Avoiding Problems when welding AISI 430 Ferritic Stainless Steel, Welding International, 6,
1992, p713.
[17] Kou S and Y. Le, Metall. Trans., 16A, 1345 -1352(1985).
[18] Kou S and Y. Le, Welding Journal, 65,305s – 313(1986).
[19] Martin van Warmelo, David Nolan, John Norrish. Mitigation of Sensitization Effects in Unstabilised
12% Cr Ferritic Stainless Steel Welds [J]. Materials Science and Engineering. 2007; 464a (1-2):157.
[20] Villafuerte J.C. and Kerr. H.W., Electromagnetic stirring and grain refinement in stainless steel GTA
welds, Weld. J., 69(1), 1990, p 1s.
[21] Madhusudhan Reddy G. and Mohandas T., in Proceedings of Symposium on Journal of Materials,
International Journal of Advances in Engineering & Technology, Nov 2011.
©IJAET ISSN: 2231-1963
491 Vol. 1, Issue 5, pp. 480-491
Welding Research Institute, Tiruchirapalli, India, September,1996, edited by Venkatraman G., B105-
B108.(1996)
[22] Villafuerte J.C, Kerr H.W, David S.A, Material Science & Engineering. A, 194,187-191(1995).
[23] Thamodharan M, Beck HP and Wolf A. Steady and pulsed direct current welding with a single converter.
Weld J 1999; 78(3):75-79.
[24] Villafuerte J.C, Pardo E, Kerr H.W., Metall. Trans.21A, 2090(1990).
[25] Mohandas T, Reddy G.M, and Mohammad Naveed. Journal of Materials Processing Tech., 94,133(1999).
[26] Pollard B, Welding. J. 51(1972) 222s-230s.
[27] Annual Book of ASTM Standards (2004) American Society for Testing of Materials. Philadelphia, PA.
[28] ASTM E112-96. Standard test methods for determining average grain size; 2004.
AUTHORS
G. MALLAIAH was born in 1969 and received his B.Tech (Mech. Engg) from
Kakatiya University, Warangal, and his M.Tech (CAD/CAM) from JNTU,
Hyderabad. He is working as an Associate Professor at the Kamala Institute of
Technology & Science, Huzurabad, Karimnagar. He has published 6 papers in
various national/international conferences. His areas of interest are welding,
CAD/CAM and FEA. He has guided 6 B.Tech student projects. He is a life
member of ISTE, ISME, IWS and MIE.
A. KUMAR was born in 1969 and received his B.Tech (Mech. Engg) from Kakatiya
University, Warangal, his M.Tech from Sri Venkateshwara University, Tirupati, and
his Ph.D from Osmania University, Hyderabad. He is working as an Assistant
Professor at NIT, Warangal. He has published 25 papers in various
national/international journals and conferences. His areas of interest are welding,
unconventional machining processes and optimization techniques. He is a life
member of ISTE, IWS and SAQR.
P. RAVINDER REDDY was born in 1965 and received his B.Tech (Mech. Engg)
from Kakatiya University, his ME (Engg Design) from the PSG College of
Technology, Coimbatore, and his Ph.D from Osmania University, Hyderabad. He is
working as Professor and Head of Mechanical Engineering, Chaitanya Bharathi
Institute of Technology, Hyderabad. He has 22 years of teaching, industrial and
research experience and has taught postgraduate and undergraduate engineering
subjects. He has published over 132 research papers in international and national
journals and conferences, guided 5 Ph.Ds (with 6 further Ph.D scholars having
submitted their theses), guided over 250 M.E/M.Tech projects, and carried out
research and consultancy to a tune of Rs. 1.9 Cr sponsored by BHEL, AICTE, UGC, NSTL and other
industries. He has organized 23 refresher courses/STTPs/workshops and one international conference,
and has delivered 63 invited/keynote/special lectures. He received the UGC Fellowship award from
UGC (1999); the Raja Rambapu Patil National Award for Promising Engineering Teacher from ISTE
for the year 2000, in recognition of his outstanding contribution in the area of engineering and
technology; an Excellence "A" Grade from the AICTE monitoring committee for the MODROB
project sponsored by AICTE in 2002; the "Engineer of the Year Award-2004" for his outstanding
contribution in academics and research from the Govt. of Andhra Pradesh and the Institution of
Engineers (India), AP State Centre, on 15th September 2004 on the occasion of the 37th Engineer's
Day; and the Best Technical Paper Award in Dec. 2008 from the National Governing Council of the
Indian Society for Non Destructive Testing. He is a life member of ISTE, ISME, ASME, IEEE and a
Fellow of the Institution of Engineers.
ANOMALY DETECTION ON USER BROWSING BEHAVIORS
FOR PREVENTION APP_DDOS
Vidya Jadhav1 and Prakash Devale2
1Student, Department of Information Technology, Bharti Vidyapeeth Deemed University, Pune, India
2Professor & Head, Department of Information Technology, Bharti Vidyapeeth Deemed University, Pune, India
ABSTRACT
Some of the hardest distributed denial of service (DDoS) attacks to mitigate are those targeting the
application layer. Over time, researchers have proposed many solutions that prevent DDoS attacks at
the IP and TCP layers rather than at the application layer. New application-layer DDoS attacks that
utilize legitimate HTTP requests to overwhelm victim resources are harder to detect, and they may be
more serious when they mimic, or occur during, a flash crowd event on the website. This paper
presents a new application-layer anomaly detection and filtering scheme based on Web user browsing
behavior, exploiting hyperlink characteristics such as the request sequences of web pages, as a
defense against distributed denial of service (DDoS) attacks. A large-scale hidden semi-Markov
model (HsMM) is used to describe Web access behavior, and in an online implementation of the
model, how well the observed browsing-behavior sequence fits the model serves as the measure of a
user's normality.
KEYWORDS: Hidden Semi Markov Model, APP_DDOS, user’s normality detection, browsing behavior.
I. INTRODUCTION
In the last couple of years, attacks against the Web application layer have required increased attention
from security professionals. One of the main APP_DDOS attack techniques in use is to exploit the
HTTP GET request by requesting the home page of the victim server repeatedly: without needing the
URL of any particular web page of the victim website, attackers can easily find out the domain name
of the victim web site. Many statistical and dynamical techniques have been used to defend against
distributed denial of service (DDoS) attacks on web applications.
Statistical detection can catch automated attacks launched with tools such as Nikto, Whisker or
Nessus, attacks that probe for server misconfiguration, HTML hidden-field attacks (only when the
data is sent via GET, which is rare), authentication brute-forcing attacks and, possibly, order-ID
brute-forcing attacks, although order IDs sent as POST data cannot be seen. Static detection fails
against attacks that overflow various HTTP header fields and against web application attacks carried
in a POST form, and statistical methods can hardly distinguish a malicious HTTP request from a
normal one [12].
To overcome these issues, we use an anomaly detection system based on web browsing behavior,
which supports the detection of new APP_DDOS attacks. This paper presents a model that captures
the browsing patterns of web users using a hidden semi-Markov model (HsMM) and uses it to detect
APP_DDOS attacks.
II. RELATED WORK
Most current research has focused on the network layer (TCP/IP) rather than the application layer. IP
addresses and time-to-live (TTL) values have been used to detect DDoS attacks [1][2]. C. Douligeris
and A. Mitrokotsa [3] classify DDoS defense mechanisms by the activity deployed and the location of
deployment. Cabrera [4] showed that statistical tests applied to the time series of MIB (Management
Information Base) traffic at the target and the attacker are effective in extracting the correct variables
for monitoring on the attacker machine.
To the best of our knowledge, little existing work addresses the detection of APP_DDOS attacks.
S. Ranjan [5] deployed a counter-mechanism that assigns a suspicion measure to a session according
to its deviation from legitimate behaviour and uses a DDoS scheduler to decide when and whether the
session is serviced. C. Kruegel introduced a novel approach to performing anomaly detection using
HTTP query parameters (e.g. the string length of an attribute value) [6].
The existing work on web user behavior can be summarized as follows: 1) approaches based on
probabilistic models, e.g. a double Pareto/lognormal distribution and link choice for revisiting [9];
2) approaches based on click streams and web contents, e.g. data mining [10] to capture web users'
usage patterns from page content and click-stream data sets; 3) approaches based on Markov chains,
e.g. a Markov chain modelling the URL access patterns observed in navigation logs, conditioned on
the previous state [11]; and 4) user-behaviour-based anomaly detection, e.g. using system-call data
sets generated by programs to detect anomalous access to a UNIX system, based on data mining [13].
The existing systems have several disadvantages:
1) They do not take into account the user's sequence of operations, e.g. which page will be
requested next. They cannot explain the browsing behavior of a user, because the next page
the user will browse is primarily determined by the current page he is browsing.
2) They omit the dwell time that the user spends reading a page, and they do not consider the
cases where a user does not follow the hyperlinks provided by the current page.
3) From the network perspective, protection at the victim is considered ineffective: attack flows
can still incur congestion along the attack path.
4) It is very hard to identify DDoS attack flows at the sources, since the traffic there is not yet
aggregated.
Thus a new system is designed that takes the user's sequence of operations into account. It avoids the
intensive computation required for page-content processing and data mining, which makes it suitable
for online detection; it models the dwell time that the user stays on a page while reading; and it
handles the cases where a user does not follow the hyperlinks provided by the current page.
III. APP_DDOS ATTACKS
APP_DDOS attacks may exhaust limited server resources such as CPU cycles, network bandwidth,
DRAM space, databases, disks or specific protocol data structures, causing service degradation or
outages in the computing infrastructure for clients [7]. System downtime resulting from DDoS attacks
can lead to losses of millions of dollars. APP_DDOS attacks may therefore pose an even more serious
threat on the high-speed Internet because, with the increasing computational complexity of Internet
applications and larger network bandwidth, server resources may become the bottleneck of those
applications.
The first characteristic of APP_DDOS attacks is that attackers targeting popular websites are
increasingly moving away from pure bandwidth flooding to more surreptitious attacks that hide in the
normal flash crowds of the website. As such websites face ever greater demands for information
broadcast and e-commerce, the challenge for network security is how to detect and respond to
APP_DDOS attacks when they occur during a flash crowd event.
The second characteristic of APP_DDOS attacks is that application-layer requests originating from
compromised hosts on the Internet are indistinguishable from those generated by legitimate users:
APP_DDOS attacks can be mounted with legitimate requests from legitimately connected network
computers. To launch the attacks, APP_DDOS attacks exploit the weakness created by the standard
practice of opening services such as HTTP and HTTPS (TCP port 80) through most firewalls. Many
protocols and applications, both legitimate and illegitimate, can use these openings to tunnel through
firewalls by connecting over a standard TCP port 80. Legitimate users may request services from the
website, but these clients are unable to complete their transactions because the website is kept busy
responding to the zombie processes. In this paper, APP_DDOS attacks are identified using the
browsing behavior of users; the elements of a user's browsing behaviour are the HTTP request rate,
the page viewing time and the page requesting sequence.
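As an illustration of how these three behavioural elements might be extracted in practice, the sketch
below derives the request rate, page viewing (dwell) times and page request sequence per user from a
simplified access log. The (user_id, timestamp, url) record format, the browsing_features name and
the 30-second dwell cap are assumptions for illustration, not part of the paper.

```python
from collections import defaultdict

def browsing_features(log, dwell_cap=30.0):
    """Per-user browsing-behavior features from a simplified access log
    given as (user_id, timestamp, url) tuples. The dwell cap truncates
    long idle gaps between consecutive requests (an assumption)."""
    by_user = defaultdict(list)
    for user, ts, url in log:
        by_user[user].append((ts, url))
    features = {}
    for user, events in by_user.items():
        events.sort()
        times = [ts for ts, _ in events]
        span = max(times[-1] - times[0], 1e-9)
        features[user] = {
            "rate": len(events) / span,                  # HTTP request rate
            "dwell": [min(b - a, dwell_cap)              # page viewing times
                      for a, b in zip(times, times[1:])],
            "sequence": [url for _, url in events],      # request sequence
        }
    return features
```

These per-user records are the raw material that the HsMM of Section VI models.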
IV. PROBLEMS WITH APP_DDOS DETECTION
The main aim of a DDoS defense system is to relieve the victim's resources from the high volume of
counterfeit packets sent by attackers from distributed locations, so that these resources can be used to
serve legitimate users. There are four approaches to combat DDoS attacks, as proposed by Douligeris
et al. [3]: prevention; detection and characterization; traceback; and tolerance and mitigation. Attack
prevention aims to fix security holes, such as insecure protocols, weak authentication schemes and
vulnerable computer systems, which can be used as stepping stones to launch a DoS attack. This
approach aims to improve the global security level and is, in theory, the best solution to DoS attacks.
Attack detection aims to detect DDoS attacks while an attack is in progress, and characterization
helps to distinguish attack traffic from legitimate traffic. Traceback aims to locate the attack sources
regardless of spoofed source IP addresses, either during the attack (active) or after it (passive).
Tolerance and mitigation aim to eliminate or curtail the effects of an attack and to maximize the
Quality of Service (QoS) under attack. Carl et al., Douligeris et al. and Mirkovic et al. have reviewed
many research schemes based on these approaches, but no comprehensive solution to tackle DDoS
attacks yet exists. One of the main reasons is the lack of comprehensive knowledge about DDoS
incidents. Furthermore, the design and implementation of a comprehensive solution which can defend
the Internet from the variety of APP_DDOS attacks is hindered by the following challenges:
1. Large number of unwitting participants.
2. No common characteristics of DDoS streams.
3. Use of legitimate traffic models by attackers.
4. No administrative domain cooperation.
5. Automated DDoS attack tools.
6. Hidden identity of participants because of source addresses spoofing.
7. Persistent security holes on the Internet.
8. Lack of attack information.
9. APP_DDOS attacks utilize high-layer protocols to pass through most current anomaly detection systems, which are designed for lower layers, and arrive at the victim website.
10. Flooding is not the only form of APP_DDOS. There are many others, such as consuming
the resources of the server, shaping the malicious traffic to mimic the average request rate
of legitimate users, or utilizing a large-scale botnet to produce low-rate attack flows.
11. APP_DDOS attacks usually depend on successful TCP connections, which makes general
defense schemes based on the detection of spoofed IP addresses useless.
V. WEB BROWSING BEHAVIOR
The browsing behavior of a web user is mainly influenced by the structure of the website (e.g. its
hyperlinks and web documents) and by the way the user accesses web pages; it can be abstracted and
profiled from the user's request sequences. A user can access web pages in two ways. First, the user
clicks a hyperlink pointing to a page, the browser sends a number of requests for the page and its
inline objects, and the user may then follow a series of hyperlinks provided by the pages he is
browsing to complete his access. Second, the user jumps from one page to another by typing URLs in
the address bar, selecting from the browser's favorites or using navigation tools.
Fig. 1 shows the web browsing model. Each webpage clicked by a web user is uniquely represented
by a semi-Markov state (S). The state transition probability matrix A represents the hyperlink
relations between different webpages. The duration of a state represents the number of HTTP requests
received by the webserver, and the output sequence of each state throughout its duration represents
those requests for the clicked page which pass through all proxies and arrive at the webserver. A
simple example (Fig. 1) illustrates these relations. The unseen page sequence is page1, page2, page3.
Excluding requests answered by caches or proxies, the HTTP request sequence received by the
webserver is (r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11). When the observed request sequence is input to
the HsMM, the algorithm may group it into three clusters (r1,r2,r3,r4), (r5,r6,r7), (r8,r9,r10,r11) and
denote them by the state sequence (1,2,3). The state transition probability a12 represents the
probability that page2 is accessed after the current page1. The duration of the first state is d=4, which
means that 4 HTTP requests for page1 arrived at the webserver. The frequency of the user's clicking
behavior across multiple page requests is calculated using the HsMM.
Figure 1: Web browsing behavior
VI. TECHNIQUE USED OR ALGORITHMS USED
To achieve early detection and filtering of application-layer DDoS attacks, an extended hidden
semi-Markov model is used to describe the browsing behaviors of web surfers. To reduce the
computational load introduced by the model's large state space, a novel forward algorithm is derived
for the online implementation of the model, based on the M-algorithm. The entropy of a user's HTTP
request sequence fitting the model is used as the criterion to measure the user's normality.
6.1 Hidden Semi-Markov Model
HsMM is an extension of the hidden Markov model with explicit state durations. It is a stochastic
finite state machine, specified by (S, π, A, P) where:
1. S is a discrete set of hidden states with cardinality N, i.e. S = {1, …, N};
2. π is the initial state probability distribution, πm ≡ Pr[s1 = m], where st denotes the state the
system is in at time t and m ∈ S; it satisfies Σm πm = 1;
3. A is the state transition matrix with probabilities amn ≡ Pr[st = n | st-1 = m], m, n ∈ S; the
state transition coefficients satisfy Σn amn = 1;
4. P is the state duration matrix with probabilities pm(d) ≡ Pr[rt = d | st = m], where rt denotes
the remaining (or residual) time of the current state st, m ∈ S, d ∈ {1, …, D}, and D is the
maximum interval between any two consecutive state transitions; the state duration
coefficients satisfy Σd pm(d) = 1.
Consider a semi-Markov chain of M states, denoted s1, s2, …, sM, with the probability of transition
from state sm to state sn denoted amn (m, n = 1, 2, …, M) and the initial state probability distribution
given by πm. Let ot stand for the observable output at time t and let qt denote the state of the semi-
Markov chain at time t, where t = 1, 2, …, T. The observables and the states are related through the
conditional probability distribution bm(vk) = Pr[ot = vk | qt = sm], where vk is one of the K distinct
values that may be assumed by the observation ot. When "conditional independence" of the outputs is
assumed, bm(oa|b) = Π t=a..b bm(ot), where oa|b = {ot : a ≤ t ≤ b} represents the observation sequence
from time a to time b. If the pair process (qt, rt) takes the value (sm, d), the semi-Markov chain
remains in the current state sm until time t+d-1 and transits to another state at time t+d, where d ≥ 1.
Let λ stand for the complete set of model parameters, λ = (amn, πm, bm(vk), pm(d)).
Figure 2: Markov Chain
We first define the forward and backward variables. The forward variable is defined by
αt(m,d) = Pr [o1|t, (qt,rt) = (sm,d)] (1)
A transition into state (qt,rt) = (sm,d) takes place either from (qt-1,rt-1) = (sm, d+1) or from (qt-1,rt-1)=
(sn,1) for n ≠ m . Therefore , we readily obtain the following forward recursion formula
αt(m, d) = αt-1(m, d + 1) bm(ot) + (Σn≠m αt-1(n, 1) anm) bm(ot) pm(d), d ≥ 1 (2)
for a given state sm and time t > 1, with the initial condition
α1 (m, d) = πmbm (o1)pm(d). (3)
We define the backward variable by
βt(m,d) = Pr[ot+1|T| (qt, rt) = (sm, d)]. (4)
By examining the possible states that follow (qt ,rt) = (sm, d), we see that when d > 1 the next state
must be (qt+1, rt+1) = (sm, d-1), and when d=1 it must be (qt+1 ,rt+1) =(sn, d’) for some n ≠ m and d’ ≥ 1.
We thus have the following recursion formula:
βt(m,d) = bm(ot+1)βt+1(m,d-1) for d > 1 (5)
and
βt(m, 1) = ∑n ≠ m amn bn(ot+1) (∑d ≥ 1 pn (d) βt+1 (n,d)) (6)
for a given state sm and time t < T, with the initial condition (in the backward recursive steps)
βT(m, d) = 1 d ≥ 1 (7)
the algorithm of HsMM can be found in [15] & [16].
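Eqs. (1)-(3) above can be read as the following sketch of the forward recursion, where array index 0
corresponds to duration d = 1. This is an illustrative implementation under stated assumptions, not the
authors' code: it assumes a zero-diagonal transition matrix A and discrete observations, and it
performs no log-space scaling, which real traces would require.

```python
import numpy as np

def hsmm_forward(obs, pi, A, P, B):
    """Forward variable alpha[t, m, d-1] = Pr[o_1..t, (q_t, r_t) = (s_m, d)].
    pi: (N,) initial state probabilities; A: (N, N) transition matrix with a
    zero diagonal; P: (N, D) duration probabilities; B: (N, K) emission
    probabilities; obs: sequence of symbol indices in 0..K-1."""
    N, D = P.shape
    T = len(obs)
    alpha = np.zeros((T, N, D))
    alpha[0] = pi[:, None] * B[:, obs[0]][:, None] * P      # Eq. (3)
    for t in range(1, T):
        b = B[:, obs[t]]
        # first term of Eq. (2): stay in the same state, residual time - 1
        stay = np.zeros((N, D))
        stay[:, :-1] = alpha[t - 1, :, 1:]
        # second term: enter state m from any n != m whose time just expired
        enter = alpha[t - 1, :, 0] @ A                      # shape (N,)
        alpha[t] = (stay + enter[:, None] * P) * b[:, None]
    return alpha  # summing alpha[T-1] over (m, d) gives Pr[o_1..T]
```

Summing the final slice over states and durations yields the sequence likelihood, from which the
average entropy used later for normality detection follows as the negative mean log-likelihood per
request.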
6.2 M-Algorithm for Normality Detection
The M-algorithm is widely adopted in decoding for digital communications because it requires far
fewer computations than the Viterbi algorithm. Its aim is to find a path whose distortion or likelihood
metric is as good as possible (i.e., to minimize the distortion criterion between the symbols associated
with the path and the input sequence).
The M-algorithm works as follows:
I. At time t, retain only the best M paths.
II. Each path is associated with a value called the path metric, which acts as the distortion
measure of the path and is the accumulation of transition metrics.
III. The transition metric is the distance between the symbol associated with a trellis transition
and the input symbol.
IV. The path metric is the criterion used to select the best M paths.
V. Move to the next time instant t+1 by extending the M retained paths to generate N.M new
paths.
VI. Compare all terminal branches to the input data and update the path metrics.
VII. Delete the (N-1).M poorest paths.
VIII. Repeat this process until the whole input sequence has been processed.
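Steps I-VIII amount to a breadth-first beam search. The sketch below shows the pruning idea (keep
only the best M paths per step); the function names, the tuple-based path representation and the
distance metric are illustrative assumptions, not the paper's decoder.

```python
import heapq

def m_algorithm(obs, init_paths, extend, metric, M=8):
    """Breadth-first beam search: at every step keep only the M paths with
    the lowest accumulated path metric instead of the full Viterbi trellis.
    extend(path) yields the candidate successor states of a path;
    metric(state, symbol) is the transition metric (a distance)."""
    paths = [(0.0, tuple(p)) for p in init_paths]
    for symbol in obs:
        candidates = []
        for cost, path in paths:
            for state in extend(path):
                candidates.append((cost + metric(state, symbol),
                                   path + (state,)))
        # retain the best M paths, delete the poorest ones (steps V-VII)
        paths = heapq.nsmallest(M, candidates, key=lambda c: c[0])
    return min(paths, key=lambda c: c[0])   # best surviving path
```

Because only M paths survive each step, the work per symbol is O(N.M) rather than growing with
the full state space, which is what makes the online implementation feasible.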
VII. ANOMALY DETECTION
Anomaly detection relies on detecting behaviors that are abnormal with respect to some normal
standard, and many anomaly detection systems and approaches have been developed to detect the
faint signs of DDoS attacks. Owing to constraints on computing power, the detector and filter cannot
adapt their policy rapidly; because web access behavior is stable in the short term [14], the filter
policy need only be fixed for a short period of time. Define Td as the length of the request sequence
used for anomaly detection. For a given HTTP request sequence of the l-th user, we calculate the
deviation of its average entropy from the mean entropy of the model. If the deviation is larger than a
predefined threshold, the user is regarded as abnormal and his request sequence will be discarded by
the filter when resources are scarce; otherwise the user's requests pass through the filter and arrive at
the victim smoothly. When the given time slot expires, the model is updated online by the
self-adaptive algorithm proposed in [15].
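The detection rule just described, flagging a user whose average entropy deviates from the model
mean by more than a threshold and filtering flagged users only when resources are scarce, can be
sketched as follows. The function names, the symmetric deviation test and the per-user
(id, log-likelihood, length) records are illustrative assumptions.

```python
def flag_abnormal(seq_loglik, seq_len, model_mean, threshold):
    """Decide whether one user's HTTP request sequence is anomalous.
    seq_loglik: log-likelihood of the user's length-seq_len sequence under
    the trained HsMM (e.g. log of the summed forward variable);
    model_mean: mean per-request entropy of normal traffic."""
    avg_entropy = -seq_loglik / seq_len   # average entropy of the fit
    return abs(avg_entropy - model_mean) > threshold

def filter_requests(users, model_mean, threshold, resources_scarce):
    """Admit or drop users; drop flagged users only under scarcity.
    users: iterable of (user_id, seq_loglik, seq_len) records."""
    admitted = []
    for uid, loglik, n in users:
        if resources_scarce and flag_abnormal(loglik, n, model_mean, threshold):
            continue                      # request sequence is filtered out
        admitted.append(uid)
    return admitted
```

When resources are plentiful the filter admits everyone, matching the policy that abnormal users are
dropped only when the victim's resources are scarce.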
Figure 3: Algorithm for anomaly detection
VIII. PROPOSED SYSTEM
1) Monitor browsing behavior of web surfer.
2) The HsMM will be used to model the behavior of the system and detect abnormal user browsing,
which will be done by maintaining the state transitions.
3) Train the system to distinguish between normal and abnormal user browsing, which can be done by
the normality detection and filter policies. A detector and filter placed between the Internet and the
victim accepts each HTTP request and decides whether or not to admit it.
4) Use an efficient algorithm to reduce the heavy computation required for anomaly detection; the
M-algorithm will be used for this purpose.
Figure 4: Anomaly detection based on behavior model
IX. RESULTS
We insert APP_DDOS attack requests into normal traffic, shown in Fig. 5(a). In order to
generate a stealthy attack that is not easily detected by traditional methods, each attack node's
output traffic is shaped to approximate the average request rate of a normal user. The APP_DDOS attack
aggregated from this low-rate malicious traffic is shown in Fig. 5(b).
Figure 5(a): Arrival rate vs. time of traffic without attack
Figure 5(b): Arrival rate vs. time of traffic with attack
X. CONCLUSION AND FUTURE SCOPE
This paper focuses on protecting Web servers from APP_DDOS attacks by using the web browsing
behavior of users. We presented a novel algorithm, based on a large hidden semi-Markov model, that
distinguishes users with normal behavior from those with deviant behavior. A set of real traffic data
was collected from an educational website, and the M-algorithm was applied to differentiate normal
and abnormal behaviors.
Several issues need further research: 1) if all clients obtain service through one proxy and a
zombie hides behind that proxy among the legitimate clients, blocking the proxy's IP denies and delays
service to the legitimate users as well; 2) applying this model to other schemes for detecting
App-DDoS attacks, such as FTP attacks.
REFERENCES
[1]. C. Jin, H. Wang, and K. G. Shin, "Hop-count filtering: An effective defense against spoofed DDoS traffic," in
Proc. ACM Conf. Computer and Communications Security, 2003, pp. 30–41.
[2]. T. Peng, K. Ramamohanarao, and C. Leckie, "Protection from distributed denial of service attacks using
history-based IP filtering,” in Proc. IEEE Int. Conf. Communications, May 2003, vol. 1, pp. 482–486.
[3]. C. Douligeris and A. Mitrokotsa, "DDoS attacks and defense mechanisms: Classification and state-of-
the-art," Computer Networks: The Int. J. Computer and Telecommunications Networking, vol. 44, no.
5, pp. 643–666, Apr. 2004.
[4]. J. B. D. Cabrera et al., “Proactive detection of distributed denial of service attacks using MIB traffic
variables a feasibility study,” in Proc. IEEE/IFIP Int. Symp. Integrated Network Management, May
2001, pp. 609–622.
[5]. S. Ranjan, R. Swaminathan, M. Uysal, and E. Knightly, “DDoS-resilient scheduling to counter
application layer attacks under imperfect detection,” in Proc. IEEE INFOCOM, Apr. 2006 [Online].
Available: http://www- ece.rice.edu/~networks/papers/dos-sched.pdf
[6]. C. Kruegel and G. Vigna, "Anomaly detection of Web-based attacks," in Proc. ACM CCS'03, Oct. 27–31,
2003, Washington, DC, USA.
[7]. S. Ranjan, R. Karrer, and E. Knightly, "Wide area redirection of dynamic content by Internet data
centers," in Proc. 23rd Ann. Joint Conf. IEEE Comput. Commun. Soc., Mar. 7–11, 2004, vol. 2, pp.
816–826.
[8]. S.-Z. Yu and H. Kobayashi, “An efficient forward-backward algorithm for an explicit duration hidden
Markov model,” IEEE Signal Process. Lett., vol. 10, no. 1, pp. 11–14, Jan. 2003.
[9]. S. Z. Yu, Z. Liu, M. Squillante, C. Xia, and L. Zhang, “A hidden semi-Markov model for web
workload self-similarity,” in Proc. 21st IEEE Int. Performance, Computing, and Communications
Conf. (IPCCC 2002), Phoenix, AZ, Apr. 2002, pp. 65–72.
[10]. S. Bürklen et al., “User centric walk: An integrated approach for modeling the browsing behavior of
users on the web,” in Proc. 38th Annu. Simulation Symp. (ANSS’05), Apr. 2005, pp. 149–159.
[11]. J. Velásquez, H. Yasuda, and T. Aoki, “Combining the web content and usage mining to understand
the visitor behavior in a web site,” in Proc. 3rd IEEE Int. Conf. Data Mining (ICDM’03), Nov. 2003,
pp. 669–672.
[12]. D. Dhyani, S. S. Bhowmick, and W.-K. Ng, “Modelling and predicting web page accesses using
Markov processes,” in Proc. 14th Int. Workshop on the Database and Expert Systems Applications
(DEXA’03), 2003, pp. 332–336.
[13]. J. Mirkovic, G. Prier, and P. L. Reiher, “Attacking DDoS at the source,” in Proc. 10th IEEE Int. Conf.
Network Protocols, Sep. 2002, pp. 312–321.
[14]. X. D. Hoang, J. Hu, and P. Bertok, “A multi-layer model for anomaly intrusion detection using
program sequences of system calls,” in Proc. 11th IEEE Int. Conf. Networks, Oct. 2003, pp. 531–536.
[15]. M. Kantardzic, Data Mining: Concepts, Models, Methods, and Algorithms. New York: IEEE Press, 2002.
[16]. X. Yi and Y. Shunzheng, “A dynamic anomaly detection model for web user behavior based on
HsMM,” in Proc. 10th Int. Conf. Computer Supported Cooperative Work in Design (CSCWD 2006),
Nanjing, China, May 2006, vol. 2, pp. 811–816.
[17]. S.-Z. Yu and H. Kobayashi, “An efficient forward-backward algorithm for an explicit duration hidden
Markov model,” IEEE Signal Process. Lett., vol. 10, no. 1, pp. 11–14, Jan. 2003.
Biography: Vidya Jadhav is a PG scholar in Information Technology at Bharati Vidyapeeth Deemed University,
Pune. Her fields of interest are computer networking, operating systems, and anomaly detection.
Prakash Devale is presently working as a Professor and Head of the Department of Information
Technology at Bharati Vidyapeeth Deemed University College of Engineering, Pune. He received
his ME from Bharati Vidyapeeth University and is pursuing a Ph.D. degree in natural language
processing.
DESIGN OF LOW POWER LOW NOISE BIQUAD GIC NOTCH
FILTER IN 0.18 µM CMOS TECHNOLOGY
Akhilesh Kumar1, Bhanu Pratap Singh Dohare2 and Jyoti Athiya3
1Department of E&C Engineering, NIT Jamshedpur, Jharkhand, India
2Department of E&C Engineering, BACET, Jamshedpur, Jharkhand, India
3Department of E&C Engineering, NIT Jamshedpur, Jharkhand, India
ABSTRACT
In the design of analog circuits, not only gain and speed are important; power dissipation, supply voltage,
linearity, noise, and maximum voltage swing are important as well. In this paper a low-power biquad GIC
notch filter is designed. The design and VLSI implementation of an active analog filter, based on the
Generalized Impedance Converter (GIC) circuit, are presented [1]. The circuit is modeled and simulated
using the Cadence Design Tools software package. Active filters are implemented using a combination of
passive and active (amplifying) components, and require an outside power source. Operational amplifiers
are frequently used in active filter designs; they can achieve a high Q factor and resonance without the
use of inductors. This paper presents a new biquad GIC notch filter topology for image rejection in
heterodyne receivers and front-end receiver applications. The circuit uses a two op-amp, resistor-capacitor
topology for testing purposes and is implemented in a standard 0.18 µm CMOS technology. From a 1.8 V power
supply the circuit consumes 0.54 mW of power, with an open-loop gain of 0 dB, a 1 dB compression point of
+7.5 dBm at 1.1 kHz, and a 105-degree phase response [2].
KEYWORDS: Opamp, GIC, Notch filter, low power.
I. INTRODUCTION
Low-power design has revolutionized our life style, and designers continue to push for lower power
together with better performance.
The design of analog circuits has evolved together with the technology and the performance
requirements. As device dimensions shrink, the supply voltage of integrated circuits drops, and
analog and digital circuits are fabricated on one chip, many design issues arise that were unimportant
only a few decades ago. In the design of analog circuits, not only gain and speed are important, but
also power dissipation, supply voltage, linearity, noise, and maximum voltage swing.
Active filters are implemented using a combination of passive and active (amplifying) components,
and require an outside power source. Operational amplifiers are frequently used in active filter
designs. In circuit theory, a filter is an electrical network that alters the amplitude and/or phase
characteristics of a signal with respect to frequency. Ideally, a filter will not add new frequencies
to the input signal, nor will it change the component frequencies of that signal, but it will change
the relative amplitudes of the various frequency components and/or their phase relationships. Filters
are often used in electronic systems to emphasize signals in certain frequency ranges and reject
signals in other frequency ranges; such a filter has a gain which is dependent on signal frequency.
II. THE GIC TOPOLOGY
The integrated-circuit manufacture of resistors and inductors is fraught with difficulty, exhibits
poor tolerances, is prohibitively expensive, and is, as a result, not suitable for large-scale
implementation. Through the use of active components, the Generalized Impedance Converter (GIC) design
allows for the elimination of resistors and inductors by simulating their respective impedances.
The generalized impedance converter (GIC) is highly insensitive to component variation. The GIC
filter design was introduced by Mikhael and Bhattacharyya and proved to be very insensitive to non-
ideal component characteristics and variations in component values. Figure 1 shows the general
topology of the GIC filter. GIC biquads use two op-amps and offer good high-frequency performance. All
but the even-notch stages are tunable. The high-pass, low-pass and band-pass stages are gain
adjustable. The notch and all-pass stages have a fixed gain of unity. All GIC stages have equal
capacitor values, unless a capacitor is required to adjust the gain. Notch stages do not rely on
element-value subtractions for notch quality and are thus immune to degradations in notch quality due
to element-value error [3].
Analog circuits such as audio and radio amplifiers have been in use since the early days of electronics.
Analog systems carry the signals in the form of physical variables such as voltages, currents, or
charges, which are continuous functions of time. The manipulation of these variables must often be
carried out with high accuracy. On the other hand, in digital systems the link of the variables with the
physical world is indirect, since each signal is represented by a sequence of numbers. Clearly, the
types of electrical performance that must be achieved by analog and digital electronic circuits are
quite different. Nowadays, analog circuits continue to be used for direct signal processing in some
very-high-frequency or specialized applications, but their main use is in interfacing computers to the
analog world. The development of the very-large-scale-integration (VLSI) technology has led to
computers being pervasive in telecommunications, consumer electronics, biomedicine, robotics, the
automotive industry, etc. As a consequence, the analog circuits needed around them are also
pervasive. Interfacing computers or digital signal processors to the analog world requires various
analog functions, among them amplification, filtering, sampling, (de)multiplexing, and analog-to-
digital (A/D) and digital-to-analog (D/A) conversions. Since analog circuits are needed together with
digital ones in almost any complex chip and the technology for VLSI is the complementary metal–
oxide–circuits. Semiconductors (CMOS), most of the current analog circuits are CMOS.[4]
Figure 1. Generalized Biquad GIC Schematic
It has been shown that, in order to implement all possible filter types using passive components, a
circuit network must contain resistors, capacitors, and inductors. Modern IC manufacturing
techniques allow for the accurate construction of capacitors, and for the elimination of resistors by
using switched capacitors. However, we are still left with the problem of inductors. Discrete
inductors of suitable impedance values are available for use in circuits, but these inductors tend to
be large and costly. Additionally, modern electronics focuses on fully integrated circuits, and the
integrated-circuit manufacture of suitable inductors is very difficult, if not impossible.
IC inductors take up vast quantities of valuable chip area, and suffer from terrible tolerances. How
then can we develop the full range of filter types in light of the problems involving inductors? It was
recognized in the 1950s that size and cost reductions, along with performance increases, could be
achieved by replacing the large, costly inductors used in circuits with active networks. This is not to
say that the need for inductive impedance was obviated; rather, a suitable replacement, or means of
simulation, was necessary. A variety of methods for the simulation of inductances have been
developed. One of the most important and useful of these methods is the Generalized Impedance
Converter (GIC) developed by Antoniou et al.
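The inductance simulated by an Antoniou-style GIC follows a standard textbook result. The sketch below assumes the usual element numbering (Z1, Z3, Z5 resistive, Z4 capacitive), which may not match the element labels in this paper's figure:

```python
# Standard result for an Antoniou GIC (textbook element numbering assumed):
#   Zin = Z1 * Z3 * Z5 / (Z2 * Z4)
# With Z4 a capacitor C4 and the remaining elements resistors,
#   Zin = s * C4 * R1 * R3 * R5 / R2, i.e. a simulated inductor
#   L = C4 * R1 * R3 * R5 / R2.
def gic_inductance(C4, R1, R2, R3, R5):
    return C4 * R1 * R3 * R5 / R2

# Example: equal 10 kOhm resistors and a 1 nF capacitor simulate L = 0.1 H.
print(gic_inductance(1e-9, 10e3, 10e3, 10e3, 10e3))
```

This is how the GIC replaces a physical inductor: a capacitor and four resistors set an inductive input impedance without the area and tolerance penalties of an on-chip coil.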
III. DESIGN OF TWO STAGE DIFFERENTIAL OPERATIONAL AMPLIFIER
The most commonly used configuration for CMOS operational amplifiers is the two stage amplifier.
There is a differential front end which converts a differential voltage into a current and a common
source output stage that converts the signal current into an output voltage. An important criterion of
performance for these op amps in many applications is the settling time of the amplifier.
Figure 2. Schematic of two stage op-amp
In a never-ending effort to reduce power consumption and gate-oxide thickness, the integrated-circuit
industry is constantly developing smaller power supplies. Today's analog circuit designer is faced
with the challenge of building analog circuit blocks with sub-1 V supplies with little or no reduction
in performance. Furthermore, in an effort to reduce costs and integrate analog and digital circuits
onto a single chip, the analog designer must often face the above challenges using plain CMOS
processes. A schematic diagram of the two-stage op-amp with output buffer is shown in Figure 2. The
first stage is a differential-input, single-ended-output stage. The second stage is a common-source gain stage that
has an active load. Capacitor Cc is included to ensure stability when the op-amp is used with
feedback; it is a Miller capacitance. The third stage is a common-drain buffer stage, which can be
omitted if the op-amp is intended to drive only a small, purely capacitive load. An operational
amplifier, often referred to as an 'op-amp', is a DC-coupled electronic differential voltage
amplifier, usually of very high gain, with one inverting and one non-inverting input.
Design of the op-amp: designing the operational amplifier carefully is very important to get accurate
results. The op-amp is characterized by various parameters such as open-loop gain, bandwidth, slew
rate, and noise. These performance measures are fixed by design parameters such as transistor sizes
and bias currents. This op-amp is designed using UMC 0.18 µm technology with a supply voltage of
1.8 V. The value of the load capacitance is taken as 1 pF. The main constraint in the design is the
requirement of low power consumption. The open-loop gain obtained is 70.49 dB, which confirms the
design parameters we chose at the start of the design; the open-loop gain should be greater than
70 dB (Figure 5).
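For reference, the simulated 70.49 dB open-loop gain converts to a linear gain of roughly 3.3 × 10³ V/V, a simple arithmetic check:

```python
# Convert the simulated open-loop gain from dB to a linear (V/V) value and
# confirm it clears the > 70 dB design target.
gain_db = 70.49
gain_linear = 10 ** (gain_db / 20)   # roughly 3.3e3 V/V
print(gain_db > 70, round(gain_linear))
```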
IV. EQUATION
The first goal will be to develop the transfer function of the circuit in terms of the generic admittance
values. Then we can substitute in values for the admittances in order to realize the various filter types.
T(s) = V2/V1 = [s²(2a − c) + s(ω0/Q)(2b − c) + cω0²] / [s² + s(ω0/Q) + ω0²]
We observe that the above equation can realize an arbitrary transfer function with zeros anywhere in
the s-plane.
V. DESIGN OF ACTIVE BIQUAD GIC NOTCH FILTER
We design the notch filter with the GIC biquad of Figure 1, to eliminate the frequency component at
f0 = 1 kHz from a signal. The low- and high-frequency gains must be 0 dB, and the attenuation must not
be larger than 1 dB in a band of width 100 Hz around f0. The transfer function of this filter is
Figure 3. Schematic design of CMOS biquad GIC notch filter
To design the schematic of the notch filter, we have chosen C = 0.1 µF, R = 1/(ω0C) = 1.918 kΩ, and
Q = 16.3.
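As a side check on the chosen Q: for a unity-gain notch the 3 dB rejection bandwidth equals f0/Q, which a quick numerical sweep confirms (a sketch with the values above, not the Cadence simulation):

```python
import numpy as np

# For the unity-gain notch T(s) = (s^2 + w0^2) / (s^2 + s*w0/Q + w0^2),
# the -3 dB rejection bandwidth is f0/Q; verify numerically by sweeping.
f0, Q = 1000.0, 16.3
w0 = 2 * np.pi * f0

def mag(f):
    s = 1j * 2 * np.pi * f
    return abs((s**2 + w0**2) / (s**2 + s * (w0 / Q) + w0**2))

f = np.linspace(f0 - 200, f0 + 200, 400001)     # 0.001 Hz grid around f0
inside = f[mag(f) < 1 / np.sqrt(2)]             # band attenuated by > 3 dB
bw = inside[-1] - inside[0]
print(bw, f0 / Q)  # both close to 61.3 Hz
```

The sweep recovers the closed-form bandwidth f0/Q ≈ 61.3 Hz, so a higher Q narrows the rejection band around the 1 kHz notch.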
This is the schematic of the CMOS biquad GIC notch filter using the AM biquad topology. The design of
this CMOS biquad GIC notch filter is done using the Cadence tool. The simulation results are obtained
in the Cadence Spectre environment with UMC 0.18 µm CMOS technology.
VI. SIMULATION RESULT OF ACTIVE NOTCH FILTER AND OP-AMPLIFIER
Figure 4. Simulation result of gain and phase response
The open-loop gain obtained is 0 dB, which conforms to the design parameters we chose at the start of
the design. The simulation also shows the phase response of the filter, which is 105 degrees; this
value is obtained by adjusting the capacitance values.
Figure 5. Gain and phase response of CMOS Op-amp
Figure 6. Simulation result of PSRR+ response (notch filter)
Figure 7. Simulation result of PSRR- response (notch filter)
The figures above show the simulation results for the power supply rejection ratio (PSRR). In this
method we apply a common-mode DC potential to the input transistors, and a ±1.8 V AC signal is
inserted between the Vdd supply and the Vdd port of the circuit. The power supply rejection ratios
obtained are 74 dB for PSRR+ and 70 dB for PSRR-.
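The PSRR arithmetic implied by this measurement is a ratio of gains, i.e. a difference in dB. The numbers below are illustrative placeholders, not the paper's simulation results:

```python
# PSRR = (differential gain) / (supply-to-output gain), expressed as a
# difference in dB. Values here are illustrative, not the paper's results.
def psrr_db(diff_gain_db, supply_gain_db):
    return diff_gain_db - supply_gain_db

print(psrr_db(70.0, -4.0))  # 74.0 dB
```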
VII. CONCLUSION
In this work, a low-voltage CMOS biquad notch filter is designed using a Generalized Impedance
Converter topology. The proposed techniques can be used to design low-voltage, low-power biquad GIC
notch filters in a standard CMOS process. To demonstrate the proposed techniques, a ±1.8 V
second-order filter is implemented in a standard 0.18 µm CMOS process. The design focuses mainly on
low power, linearity, and phase response. The active-RC biquadratic cell exploits the frequency
response of the op-amp to synthesize a complex pole pair, reducing the unity-gain bandwidth
requirements of the op-amp in closed-loop topologies. A proper bias circuit is used to fix the
operating point of the biquad. The design also exploits the source-follower principle. Very low power
consumption (0.54 mW) is achieved at a ±1.8 V supply voltage with a 1 kHz cut-off frequency.
REFERENCES
[1] Akhilesh Kumar, Bhanu Pratap Singh Dohare and Jyoti Athiya, "Design and noise analysis of biquad
GIC notch filter in 0.18 µm CMOS technology", IJAET, vol. 1, issue 3, pp. 138–144.
[2] Kubicki, A. R., The Design and Implementation of a Digitally Programmable GIC Filter, Master’s Thesis,
Naval Postgraduate School, Monterey, California, September 1999.
[3]A. Bevilacqua, A. Vallese, C. Sandner, M. Tiebout, A. Gerosa, and A. Neviani, “A 0.13µm CMOS LNA with
integrated balun and notch filter for 3-to-5GHz UWB receivers,” in IEEE ISSCC
[4] M. De Matteis, S. D'Amico, and A. Baschirotto, "Advanced Analog Filters for Telecommunications,"
IEEE Journal of Solid-State Circuits, vol. 65, pp. 6–12, Sept. 2008.
[5] Yael Nemirovsky, "1/f Noise in CMOS Transistors for Analog Applications," IEEE Transactions on
Electron Devices, vol. 48, no. 5, May 2001.
[6] John W. M. Rogers and Calvin Plett, "A Completely Integrated 1.8 V 5 GHz Tunable Image Reject
Notch Filter," IEEE, 2001.
[7] Milne, Paul R., The Design, Simulation, and Fabrication of a BiCMOS VLSI Digitally Programmable GIC
Filter, Master’s Thesis, Naval Postgraduate School, Monterey, California, September 2001.
[8] G. Cusmai, M. Brandolini, P. Rossi, and F. Svelto, “A 0.18-µm CMOS selective receiver front-end for UWB
applications,” IEEE Journal of Solid-State Circuits, vol. 41, no. 8, pp. 1764–1771, 2006
[9]. Fouts, D. J., VLSI Systems Design: Class Notes, Naval Postgraduate School, Monterey, California, 2004.
[10] Geiger, Randall L., Allen, Phillip E. and Strader, Noel R., VLSI Design Techniques for Analogy and Digital
Circuit, McGraw–Hill, 1990.
[11] Mead, Carver and Conway, Lynn, Introduction to VLSI systems, Addition–Wesley, Inc., 1980.
[12] Alessio Vallese and Andrea Bevilacqua, "An Analog Front-End with Integrated Notch Filter for
3–5 GHz UWB Receivers in 0.13 µm CMOS," IEEE Journal of Solid-State Circuits, 2007.
Authors
Akhilesh Kumar received his B.Tech degree from Bhagalpur University, Bihar, India in 1986
and his M.Tech degree from Ranchi, Bihar, India in 1993. He has been working in the teaching
and research profession since 1989. He is now working as H.O.D. of the Department of
Electronics and Communication Engineering at N.I.T. Jamshedpur, Jharkhand, India. His
field of research interest is digital circuit design.
Bhanu Pratap Singh Dohare received his B.E. degree from R.G.P.V. University, Madhya
Pradesh, India in 2008 and his M.Tech degree from S.G.S.I.T.S., Indore, Madhya Pradesh,
India in 2010. He is now working as Assistant Professor in the Department of Electronics and
Communication Engineering at B.A.C.E.T., Jamshedpur, Jharkhand, India. His field of
research interest is analog filter design.
Jyoti Athiya received her B.E. degree from R.G.P.V. University, Madhya Pradesh, India in
2007 and her M.Tech degree from S.G.S.I.T.S., Indore, Madhya Pradesh, India in 2010. She is
now working as Assistant Professor in the Department of Electronics and Communication
Engineering at N.I.T. Jamshedpur, Jharkhand, India. Her field of research interest is
FPGA-based digital circuit design.
MEMBERS OF IJAET FRATERNITY
Editorial Board Members from Academia
Dr. P. Singh,
Ethiopia.
Dr. A. K. Gupta,
India.
Dr. R. Saxena, India.
Dr. Natarajan Meghanathan,
Jackson State University, Jackson.
Dr. Rahul Vaish,
School of Engineering, IIT Mandi, India.
Dr. Syed M. Askari,
University of Texas, Dallas.
Prof. (Dr.) Mohd. Husain, A.I.E.T, Lucknow, India.
Dr. Vikas Tukaram Humbe,
S.R.T.M University, Latur, India.
Dr. Mallikarjun Hangarge,
Bidar, Karnataka, India.
Dr. B. H. Shekar,
Mangalore University, Karnataka, India.
Dr. A. Louise Perkins,
University of Southern Mississippi, MS.
Dr. Tang Aihong,
Wuhan University of Technology, P.R.China.
Dr. Rafiqul Zaman Khan, Aligarh Muslim University, Aligarh, India.
Dr. Abhay Bansal, Amity University, Noida, India.
Dr. Sudhanshu Joshi, School of Management, Doon University, Dehradun, India.
Dr. Su-Seng Pang, Louisiana State University, Baton Rouge, LA,U.S.A.
Dr. Avanish Bhadauria, CEERI, Pilani,India.
Dr. Dharma P. Agrawal University of Cincinnati, Cincinnati.
Dr. Rajeev Singh University of Delhi, New Delhi, India.
Dr. Smriti Agrawal JB Institute of Engineering and Technology, Hyderabad, India
Prof. (Dr.) Anand K. Tripathi
College of Science and Engg.,Jhansi, UP, India.
Prof. N. Paramesh University of New South Wales, Sydney, Australia.
Dr. Suresh Kumar Manav Rachna International University, Faridabad, India.
Dr. Akram Gasmelseed Universiti Teknologi Malaysia (UTM), Johor, Malaysia.
Dr. Umesh Kumar Singh Vikram University, Ujjain, India.
Dr. A. Arul Lawrence Selvakumar Adhiparasakthi Engineering College,Melmaravathur, TN, India.
Dr. Sukumar Senthilkumar
Universiti Sains Malaysia,Pulau Pinang,Malaysia.
Dr. Saurabh Pal VBS Purvanchal University, Jaunpur, India.
Dr. Jesus Vigo Aguiar University Salamanca, Spain.
Dr. Muhammad Sarfraz Kuwait University,Safat, Kuwait.
Dr. Xianbo Qui Xiamen University, P.R.China.
Dr. C. Y. Fong
University of California, Davis.
Prof. Stefanos Gritzalis
University of the Aegean, Karlovassi, Samos, Greece.
Dr. Hong Hu
Hampton University, Hampton, VA, USA.
Dr. Donald H. Kraft Louisiana State University, Baton Rouge, LA.
Dr. Veeresh G. Kasabegoudar COEA,Maharashtra, India.
Dr. Nouby M. Ghazaly Anna University, Chennai, India.
Dr. Paresh V. Virparia Sardar Patel University, V V Nagar, India.
Dr.Vuda Srinivasarao
St. Mary’s College of Engg. & Tech., Hyderabad, India.
Dr. Pouya Derakhshan-Barjoei Islamic Azad University, Naein Branch, Iran.
Dr. Sanjay B. Warkad Priyadarshini College of Engg., Nagpur, Maharashtra, India.
Dr. Pratyoosh Shukla Birla Institute of Technology, Mesra, Ranchi,Jharkhand, India.
Dr. Mohamed Hassan Abdel-Wahab El-Newehy King Saud University, Riyadh, Kingdom of Saudi Arabia.
Dr. K. Ramani
K.S.Rangasamy College of Tech.,Tiruchengode, T.N., India.
Dr. J. M. Mallikarjuna Indian Institute of Technology Madras, Chennai, India.
Dr. Chandrasekhar Dr.Paul Raj Engg. College, Bhadrachalam, Andhra Pradesh, India.
Dr. V. Balamurugan Einstein College of Engineering, Tirunelveli, Tamil Nadu, India.
Dr. Anitha Chennamaneni Texas A&M University, Central Texas, U.S.
Dr. Sudhir Paraskar S.S.G.M.C.E. Shegaon, Buldhana, M.S., India.
Dr. Hari Mohan Pandey Middle East College of Information Technology, Muscat, Oman.
Dr. Youssef Said Tunisie Telecom / Sys'Com Lab, ENIT, Tunisia.
Dr. Mohd Nazri Ismail University of Kuala Lumpur (UniKL), Malaysia.
Dr. Gabriel Chavira Juárez Autonomous University of Tamaulipas,Tamaulipas, Mexico.
Dr.Saurabh Mukherjee Banasthali University, Banasthali,Rajasthan,India.
Prof. Smita Chaudhry Kurukshetra University, Kurukshetra, Harayana, India.
Dr. Raj Kumar Arya Jaypee University of Engg.& Tech., Guna, M. P., India.
Dr. Prashant M. Dolia Bhavnagar University, Bhavnagar, Gujarat, India.
Editorial Board Members from Industry/Research Labs.
Tushar Pandey,
STEricsson Pvt Ltd, India.
Ashish Mohan,
R&D Lab, DRDO, India.
Amit Sinha,
Honeywell, India.
Tushar Johri,
Infosys Technologies Ltd, India.
Dr. Om Prakash Singh ,
Manager, R&D, TVS Motor Company, India.
Dr. B.K. Sharma
Northern India Textile Research Assoc., Ghaziabad, U.P., India.
Advisory Board Members from Academia & Industry/Research Labs.
Prof. Andres Iglesias, University of Cantabria, Santander, Spain.
Dr. Arun Sharma,
K.I.E.T, Ghaziabad, India.
Prof. Ching-Hsien (Robert) Hsu,
Chung Hua University, Taiwan, R.o.C.
Dr. Himanshu Aggarwal,
Punjabi University, Patiala, India.
Prof. Munesh Chandra Trivedi,
CSEDIT School of Engg.,Gr. Noida,India.
Dr. P. Balasubramanie,
K.E.C.,Perundurai, Tamilnadu, India.
Dr. Seema Verma,
Banasthali University, Rajasthan, India.
Dr. V. Sundarapandian,
Dr. RR & Dr. SR Technical University,Chennai, India.
Mayank Malik,
Keane Inc., US.
Prof. Fikret S. Gurgen, Bogazici University Istanbul, Turkey.
Dr. Jiman Hong Soongsil University, Seoul, Korea.
Prof. Sanjay Misra, Federal University of Technology, Minna, Nigeria.
Prof. Xing Zuo Cheng, National University of Defence Technology, P.R.China.
Dr. Ashutosh Kumar Singh Indian Institute of Information Technology Allahabad, India.
Dr. S. H. Femmam University of Haute-Alsace, France.
Dr. Sumit Gandhi Jaypee University of Engg.& Tech., Guna, M. P., India.
Dr. Hradyesh Kumar Mishra JUET, Guna , M.P., India.
Dr. Vijay Harishchandra Mankar Govt. Polytechnic, Nagpur, India.
Prof. Surendra Rahamatkar Nagpur Institute of Technology, Nagpur, India.
Dr. B. Narasimhan Sankara College of Science And Commerce, Coimbatore, India.
Dr. Abbas Karimi Islamic Azad University,Arak Branch, Arak,Iran.
Dr. M. Munir Ahamed Rabbani Qassim University, Saudi Arabia.
Dr. Prasanta K Sinha Durgapur Inst. of Adva. Tech. & Manag., Durgapur, W. B., India.
Dr. Tole H. Sutikno Ahmad Dahlan University(UAD),Yogyakarta, Indonesia.
Research Volunteers from Academia
Mr. Ashish Seth,
Ideal Institute of Technology, Ghaziabad, India.
Mr. Brajesh Kumar Singh,
RBS College,Agra,India.
Prof. Anilkumar Suthar,
Kadi Sarva Viswavidhaylay, Gujarat, India.
Mr. Nikhil Raj,
National Institute of Technology, Kurukshetra, Haryana, India.
Mr. Shahnawaz Husain,
Graphic Era University, Dehradun, India.
Mr. Maniya Kalpesh Dudabhai
C.K.Pithawalla College of Engg.& Tech.,Surat, India.
Dr. M. Shahid Zeb
Universiti Teknologi Malaysia(UTM), Malaysia.
Mr. Brijesh Kumar
Research Scholar, Indian Institute of Technology, Roorkee, India.
Mr. Nitish Gupta
Guru Gobind Singh Indraprastha University,India.
Mr. Bindeshwar Singh
Kamla Nehru Institute of Technology, Sultanpur, U. P., India.
Mr. Vikrant Bhateja
SRMGPC, Lucknow, India.
Mr. Ramchandra S. Mangrulkar
Bapurao Deshmukh College of Engineering, Sevagram,Wardha, India.
Mr. Nalin Galhaut
Vira College of Engineering, Bijnor, India.
Mr. Rahul Dev Gupta
M. M. University, Mullana, Ambala, India.
Mr. Navdeep Singh Arora
Dr B R Ambedkar National Institute of Technology, Jalandhar, Punjab, India.
Mr. Gagandeep Singh
Global Institute of Management and Emerging Tech.,Amritsar, Punjab, India.
Ms. G. Loshma
Sri Vasavi Engg. College, Pedatadepalli,West Godavari, Andhra Pradesh, India.
Mr. Mohd Helmy Abd Wahab
Universiti Tun Hussein ONN Malaysia, Malaysia.
Mr. Md. Rajibul Islam
University Technology Malaysia, Johor, Malaysia.
Mr. Dinesh Sathyamoorthy
Science & Technology Research Institute for Defence (STRIDE), Malaysia.
Ms. B. Neelima
NMAM Institute of Technology, Nitte, Karnataka, India.
Mr. Mamilla Ravi Sankar
IIT Kanpur, Kanpur, U.P., India.
Dr. Sunusi Sani Adamu
Bayero University, Kano, Nigeria.
Dr. Ahmed Abu-Siada
Curtin University, Australia.
Ms. Shumos Taha Hammadi
Al-Anbar University, Iraq.