
IJITCE Feb 2011


UK: Managing Editor

International Journal of Innovative Technology and Creative Engineering

1a park lane,

Cranford

London

TW59WA

UK

E-Mail: [email protected] Phone: +44-773-043-0249

USA: Editor

International Journal of Innovative Technology and Creative Engineering

Dr. Arumugam

Department of Chemistry

University of Georgia

GA-30602, USA.

Phone: 001-706-206-0812

Fax:001-706-542-2626

India: Editor

International Journal of Innovative Technology & Creative Engineering

Dr. Arthanariee. A. M

Finance Tracking Center India

261 Mel quarters

Labor colony,

Guindy,

Chennai -600032.

Mobile: 91-7598208700

www.ijitce.co.uk


INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING (ISSN:2045-8711) VOL.1 NO.2 FEBRUARY 2011

IJITCE PUBLICATION

INTERNATIONAL JOURNAL OF

INNOVATIVE TECHNOLOGY & CREATIVE

ENGINEERING

Vol.1 No.2

February 2011

www.ijitce.co.uk


From Editor's Desk

Greetings!

Our digital era has shown tremendous progress in research, technology design and creative/innovative thinking. Computer games, such as multiplayer interactive games and interactive media, have emerged as some of the most vibrant elements of today's entertainment and military industries respectively. The following are some of the areas being worked on in the current technological market:

i) Expanding our computer technology in terms of hardware and software for different media.

ii) Validating innovative procedures, including algorithms and architectures, for technological advancements.

iii) Exploring novel applications of computer gaming technology for entertainment.

Apart from media and entertainment, when it comes to technology and the workplace, we need to admit that there is always a perceived gap in the way skill sets map to workplace requirements. We must have an integrated approach that blends computing competencies with those of arts and management so as to improve employability. We need a connecting factor that ties computer technology and the arts/management disciplines together, to produce technically grounded management people whose profile suits any workplace.

This journal has many Ph.D.-qualified members who can help in promoting research on a continuous basis.

This issue has many innovative and creative research papers that help readers gain more knowledge and lead them along the right path.

Editorial Team

IJITCE


Editorial Members

Dr. Chee Kyun Ng Ph.D, Department of Computer and Communication Systems, Faculty of Engineering, Universiti Putra Malaysia, UPM Serdang, 43400 Selangor, Malaysia.

Dr. Simon SEE Ph.D, Chief Technologist and Technical Director at Oracle Corporation, Associate Professor (Adjunct) at Nanyang Technological University, Professor (Adjunct) at Shanghai Jiaotong University, 27 West Coast Rise #08-12, Singapore 127470.

Dr. sc.agr. Horst Juergen SCHWARTZ Ph.D, Humboldt-University of Berlin, Faculty of Agriculture and Horticulture, Asternplatz 2a, D-12203 Berlin, Germany.

Dr. Marco L. Bianchini Ph.D, Italian National Research Council; IBAF-CNR, Via Salaria km 29.300, 00015 Monterotondo Scalo (RM), Italy.

Dr. Nijad Kabbara Ph.D, Marine Research Centre / Remote Sensing Centre / National Council for Scientific Research, P. O. Box 189, Jounieh, Lebanon.

Dr. Aaron Solomon Ph.D, Department of Computer Science, National Chi Nan University, No. 303, University Road, Puli Town, Nantou County 54561, Taiwan.

Dr. Arthanariee. A. M M.Sc., M.Phil., M.S., Ph.D, Director - Bharathidasan School of Computer Applications, Ellispettai, Erode, Tamil Nadu, India.

Dr. Takaharu KAMEOKA Ph.D, Professor, Laboratory of Food, Environmental & Cultural Informatics, Division of Sustainable Resource Sciences, Graduate School of Bioresources, Mie University, 1577 Kurimamachiya-cho, Tsu, Mie, 514-8507, Japan.

Mr. M. Sivakumar M.C.A., ITIL., PRINCE2., ISTQB., OCP., ICP, Project Manager - Software, Applied Materials, 1a Park Lane, Cranford, UK.

Dr. Bulent Acma Ph.D, Anadolu University, Department of Economics, Unit of Southeastern Anatolia Project (GAP), 26470 Eskisehir, TURKEY.

Dr. Selvanathan Arumugam Ph.D, Research Scientist, Department of Chemistry, University of Georgia, GA-30602, USA.


Contents

1. A Web Based Information & Advisory System for Agriculture ……….[1]

2. Performance Evaluation on the Basis of Energy in NoCs ……….[6]

3. Implementation of Authentication and Transaction Security based on Kerberos …..[10]

4. Cultural Issues and Their Relevance in Designing Usable Websites…[20]

5. Software Cost Regression Testing Based Hidden Markov Model …[30]

6. Handoff scheme to enhance performance in SIGMA……[40]

7. A Fast Selective Video Encryption Using Alternate Frequency Transform….[45]

8. Impact of Variable Speed Wind Turbine driven Synchronous Generators in Transient Stability of Power Systems………….[54]


A Web Based Information & Advisory System for Agriculture

Shrikant G. Jadhav #1, G.N. Shinde *2 #1 Department of Computer Science, Yeshwant Mahavidyalaya, Nanded-431601 [MS], INDIA,

*2 Principal, Indira Gandhi College, CIDCO, Nanded-431605 [MS], INDIA

Abstract: The business of farming has entered a new era, an age where the key to success is perfect, timely information and careful decision-making. Now that production is stagnating, it has become essential for farmers to collect important and updated information about any crop and to get proper advice regarding farming.

This paper introduces IT initiatives in India for agriculture, such as AGMARKNET and DACNET, and also discusses a web based information and advisory system for agriculture implemented using HTML and JavaScript. The paper focuses on the development methodology used and on the system functions, constraints and obstacles for the system.

Keywords: Agriculture, AGMARKNET, DACNET, advisory service, farmers' guide, software engineering

I INTRODUCTION

Agriculture is one of the most important sectors for human beings all over the world. In India, nearly 70% of the population depends on agriculture. The credit for the increased production of agricultural products in the past can be given to the efforts of farmers. Now that production is stagnating, it has become essential for farmers to collect important and updated information about any crop and to get proper advice regarding farming [1]. Keeping this in view, there is a need for a farmer advisory system for farm entrepreneurs which could help them in farming.

The business of farming has entered a new era, i.e. an age where the key to success is perfect, timely information and careful decision-making. International competition has resulted in continued pressure on profit margins. Moreover, the farmer has to decide about various production options utilizing the results of the latest developments in research and technology. Informed and quick decision-making is therefore required to ensure profitable performance by farmers [1,2].

II IT INITIATIVES IN INDIA FOR AGRICULTURE

In the era of IT and globalization, different government bodies, NGOs and leading business entities have come forward with IT initiatives that support agricultural business and related activities. Some of these are introduced below.

1) Agricultural Marketing Information System –AGMARKNET: (http://www.agmarknet.nic.in)

This initiative was taken by the Department of Agriculture & Cooperation, Ministry of Agriculture, Govt. of India. As a step towards globalization of agriculture, the Directorate of Marketing & Inspection (DMI) has embarked upon an IT project, the NICNET based Agricultural Marketing Information System Network (AGMARKNET), during the Ninth Plan, for linking all important APMCs (Agricultural Produce Market Committees), State Agricultural Marketing Boards/Directorates and DMI regional offices located throughout the country, for effective information exchange on market prices. The advantages of the AGMARKNET database accrue to the farmers, as they have choices to sell their produce in the nearest market at remunerative prices [3].

2) DACNET: (http://www.dacnet.nic.in)

The Department of Agriculture and Cooperation (DAC), Ministry of Agriculture, and the National Informatics Centre (NIC) have implemented this project. The aim of this project is to strengthen the ICT infrastructure in all the Directorates, Regional Directorates and their field units.


DACNET is an e-governance project to facilitate Indian 'Agriculture-on-line'. It was built using key criteria such as ease of use, speed of delivery, simplicity of procedure and single-window access [4].

3) iKisan Project : (http://www.ikisan.com/default.asp)

iKisan is the ICT initiative of the Nagarjuna group of companies, the largest private entity supplying farmers’ agricultural needs. iKisan was set up with two components, the iKisan.com website, to provide agricultural information online, and technical centers at village level. The project operates in Andhra Pradesh and Tamil Nadu[5].

4) Warana Wired Village project:

The Warana cooperative complex in Maharashtra has become famous as a fore-runner of successful integrated rural development emerging from the cooperative movement. The Warana cooperative sugar factory, registered in 1956, has led this movement, resulting in the formation of over 25 successful cooperative societies in the region. The total turnover of these societies exceeds Rs. 60 million. Warana Nagar has an electronic telephone exchange, connecting nearly 50 villages, which has permitted dial-up connections from village kiosks to the servers, located at Warana Nagar. There are many infrastructure facilities in and around Warana Nagar. About 80% of the population is agriculture-based and an independent agricultural development department has been established by the cooperative society. The region is considered to be one of the most agriculturally prosperous in India[6].

III OBJECTIVE OF THE STUDY

India possesses valuable agricultural knowledge and expertise. However, a wide information gap exists between the research level and practice. Indian farmers need timely expert information to make them more productive and competitive.

Given the widespread nature of India in terms of weather and culture, it is better practice to establish farmer advisory systems region-wise. Such a system will be beneficial for a particular region as it contains local information rather than global information [7]. The objectives of this study are:

• To make an effort to present a solution to bridge the information gap by exploiting advances in Information Technology.

• To propose a framework of a cost-effective agricultural information system to disseminate expert agriculture knowledge to the farming community to improve the crop productivity.

• To develop a web based farmer advisory system for farmers in Nanded, Marathwada region for Maharashtra state.

(http://www.farmersguide.info)

IV THE METHODOLOGY

Software engineering's classic life cycle method is used for developing the proposed farmer advisory system. The classic life cycle is also called the linear sequential model and is a widely used paradigm for such system development [8].

Figure 1: Linear Sequential Model

As shown in figure-1, the linear sequential model encompasses the following activities:

System/information engineering and modelling: System engineering and analysis encompass requirements gathering at the system level with a small amount of top-level design and analysis.

Software requirements analysis: The requirements gathering process is intensified and focused specifically on software. To understand the nature of the program(s) to be built, the software engineer ("analyst") must


understand the information domain for the software, as well as required function, behavior, performance, and interface. Requirements for both the system and the software are documented and reviewed.

Design: Software design is actually a multistep process that focuses on four distinct attributes of a program: data structure, software architecture, interface representations, and procedural (algorithmic) detail.

Code generation: The design must be translated into a machine-readable form. The code generation step performs this task.

Testing: Once code has been generated, program testing begins. The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals; that is, conducting tests to uncover errors and ensure that defined input will produce actual results that agree with required results.

Support: Software will undoubtedly undergo change after it is delivered to the customer. Support is the phase in which the required changes are carried out. Software support/maintenance reapplies each of the preceding phases to an existing program rather than to a new one.

V THE SYSTEM DEVELOPMENT

1 The Information Gathering Phase:

The information gathering phase is an important step in any system development, as it establishes the foundation for the new system. For our system development we have gathered information from different sources, which include:

• Information Gathering through different Web Resources

• By visiting the local APMC, Nanded

• By interacting with the farmers in the region

• By collecting historical data from the Tahasil Office, Nanded

2 The Analysis Phase:

The analysis phase bridges the gap between the system engineering and system design phases. In this phase we have defined the scope of work by specifying the functions and constraints of the proposed system.

a) System Functions:

• The System should provide the fundamental geographical information of the region

• The system should provide the information about agricultural products for the region

• The information should contain basic product information, suitable conditions for the product and crop management and protection

• The system should be able to provide the information to queries asked by end user.

• The system should provide the other supporting information and links to the useful resources.

b) System Constraints:

There are several constraints on the system. The performance of the system depends on the advisor: it is necessary for the advisor to always check the user queries and provide a timely response, which makes the information useful for the end user. Regular updating of information such as rainfall, climate changes and market prices is essential for the system administrator. If these constraints are met, the system will be very useful for the farmers in the region.

3 The Design Phase:

The design phase focuses on the development of the framework and establishing the architecture of the system. The proposed system is an integrated system of humans and technology, so it becomes essential to understand the role and place of these components in the system. Figure 2 shows the schematic outline of the structure of the system.

The proposed system has the following components:

• Farmer: should have easy access to information and convenient facilities to post queries.

• System administrator: should continuously update the system and act as an interface between the farmer and the agricultural experts.

• Agricultural experts: continuously receive feedback, update information from their sources and provide responses to the administrator.


Figure 2: Schematic Outline of Structure of System

Figure 3: Schematic Working of System

Figure 3 above shows the schematic working model of the system. The farmer (user) interacts with the system by using the URL of the system. The home page of the system provides various options for the user and in turn contains the different types of farming information. The system interface is expected to be user-friendly.

The user can download useful information if required. The user can also use the query interface to post a query and ask for advice. A query posted by the user is received in the administrator's mailbox. The administrator then forwards the query to an agricultural expert. The agricultural expert provides a suggestion for the query and sends it back to the administrator, and finally the administrator forwards this reply to the user.

4 The Source Code Development:

Figure 4: Snap shot of Home page

Figure 5: Snap shot of page where user can select the crop

The system is developed using HTML and JavaScript. The main interface is the 'index.html' file, which is the home page of the system. From this home page, links are given to various functions such as accessing information and posting queries. Figures 4, 5 and 6 show a few snapshots of the proposed system.

Figure 6 : Snap shot of page providing crop protection information


VI SYSTEM EVALUATION

India is expected to become a "knowledge society" in the coming few years, in which any farmer in a remote village can access information using IT resources [9]. To achieve a "knowledge society" in the agricultural sector, it is necessary that there be an agricultural information centre in each village, but there are certain barriers to achieving this expectation [10].

Significant obstacles are as follows:
• Poor literacy rate.
• Language barriers.
• Unawareness of technology.
• Unavailability of technical resources.
• Unavailability of skilled human resources.
• Electricity problems.

All of the above are foundational problems. There is a need for government organizations, NGOs, researchers and educational institutions to come forward, decide on uniform policies and apply efforts to solve these problems [7]. As long as such problems remain, it is very difficult to make efficient use of IT for agricultural development.

1. Efforts should be made to increase the literacy rate.

2. It has been seen that skilled people are not interested to work in rural areas, such people should be encouraged and promoted to work in the area.

3. Necessary funds for resources should be made available.
4. Efforts should be made to incorporate IT in all endeavors related to agricultural development.
5. The organizations and departments concerned with agricultural development need to realize the potential of IT for the speedy dissemination of information to farmers.

VII CONCLUSION

The business of farming has entered a new era, an age where the key to success is perfect, timely information and careful decision-making. In this era, now that production is stagnating, it has become essential for farmers to collect important and updated information about any crop and to get proper advice regarding farming.

From Indian farming perspective, farming community is facing a multitude of problems to maximize crop productivity. In spite of successful research on new agricultural practices concerning various areas in farming, the majority of farmers are not getting upper-bound yield due to several reasons.

One of the reasons is that expert/scientific information is not reaching farming community. Indian farmers need timely expert information to make them more productive and competitive.

Here an attempt has been made by developing 'farmersguide.info', a web based farmer advisory system for farmers in Nanded, in the Marathwada region of Maharashtra state. Given the widespread nature of India in terms of weather and culture, it is better practice to establish farmer advisory systems region-wise. Such a system will be beneficial for a particular region as it contains local rather than global information. It will also be useful for removing the information gap that exists between the research level and actual business practice.

VIII REFERENCES

1. I.V.Subba Rao (2002), “Indian agriculture – Past laurels and future challenges, Indian agriculture: Current status, Prospects and challenges” Volume, 27th Convention of Indian agricultural Universities Association, December 9-11, 2002, pp. 58-77

2. J.C.Katyal, R.S.Paroda, M.N.Reddy, Aupam Varma, N.Hanumanta Rao (2000), “Agricultural scientists perception on Indian agriculture: scene scenario and vision”, National Academy of Agricultural Science, New Delhi, 2000.

3. Agmarknet Documentation (2008) “Marketing research and information network Revised operational guidelines for agmarknet” Retrieved on July 11, 2010 from http://www.agmarknet.nic.in

4. DACNET Brochure (2009), Retrieved on July 11, 2010 from http://www.dacnet.nic.in

5. Deepak Kumar (2005) “ Private Sector Participation in Indian Agriculture An Overview” Business Environment, July 2005, PP -19-24.

6. Shaik. N. Meera, anita jhamtani, (2004), “Information and communication technology in agricultural development: a comparative analysis of Three projects from india” , Agricultural Research & Extension Network, Network paper no 135 Retrieved on July 11, 2010 from Http://www.odi.org.uk/

7. P.Krishna Reddy (2004), “A Framework of Information Technology Based Agriculture Information Dissemination System to Improve Crop Productivity” , Proceedings of 22nd Annual Conference of Andhra Pradesh Economic Association, 2004, D.N.R. College, Bhimavaram, 14-15, Februaray 2004.

8. Pressman, R. (2000). “Software engineering: A practitioner’s approach” (5th ed.) McGraw-Hill Publications.

9. Rita Sharma (2002), "Reforms in Agricultural Extension: New Policy Framework", Economic and Political Weekly, July 27, 2002, pp. 3124-3131; Shinde G.N., Jadhav S.G. (2008), "The Role of ICT in Agricultural Development - An Indian Perspective", paper presented at the National Conference on "Advances in Information Communication Technology", organized by the Computer Society of India, Allahabad Chapter, Allahabad, Mar. 15-16, 2008.

10. Vayyavuru Sreenivasulu and H.B. Nandwana (2001), “networking of agricultural information systems and services in india” , INSPEL (2001) Vol-4, pp 226-235.


Performance Evaluation on the Basis of Energy in NoCs

Lalit Kishore Arora #1, Rajkumar *2

# MCA Dept, AKG Engg College,Ghaziabad, UP, India. *CSE Dept, Gurukul Kangri Vishva Vidyalaya, Haridwar, UK, India.

Abstract— The classical interconnection network topologies, such as point-to-point and bus-based, have recently been replaced by the new Network-on-Chip (NoC) approach. NoCs can consume significant portions of a chip's energy budget, so analyzing their energy consumption early in the design cycle becomes important for architectural design decisions. Although numerous studies have examined NoC implementation and performance, few have examined energy. This paper determines the energy efficiency of some of the basic network topologies of NoCs. We compared them, and the results show that the CMesh topology consumes less energy than the Mesh topology.

Keywords: Network-on-Chip, Interconnection Networks, Topologies, Multi-core processor.

I. INTRODUCTION

In the design cycle of system-on-chip (SoCs) [20], the main emphasis is on the computational aspect. However, as the number of components on a single chip and their performance continue to increase, the design of the communication architecture plays a major role in defining the area, performance, and energy consumption of overall system. Furthermore, with technology scaling, the global interconnects cause severe on-chip synchronization errors, unpredictable delays, and high power consumption[27]. To remove these effects, the network-on-chip (NoC) approach emerged recently as a promising alternative to classical bus-based and point-to-point (P2P) communication architectures [31],[1],[25].

The remainder of this paper is organized as follows. Section 2 explains the related work and the motivation behind this work. Section 3 gives an overview of the topologies used in this experiment. In Section 4 we describe the results of the energy-consumption experiments for both topologies.

II. MOTIVATION

To connect the increasing number of cores in a scalable way, researchers are evaluating packet-switched networks-on-chip (NoCs) [9], [10], [23]. The increasing disparity between wire and transistor delay [11] and the dependence between interconnect and

memory system performance suggest that the relative importance of NoCs will increase in future CMP designs. As a result, there has been significant research in topologies [7], [16], [28], router microarchitecture [15], [21], wiring schemes [4], and power optimizations [32]. Nevertheless, there is a great need for further understanding of interconnects for large-scale systems at the architectural level. Previous studies have focused on CMPs [18], used synthetic traffic patterns [7], [15], [21] or traces [28], or do not model the other components of the memory hierarchy [16].

In a previous paper [33] we determined the network energy efficiency for the Fat Tree and Mesh, and the results showed that the Mesh consumes less energy than the Fat Tree topology.

Here we determine the network energy efficiency (in pJ/bit) as a function of network bandwidth for networks with a fixed size of 64 nodes running different traffic patterns. We also changed the network bandwidth by changing the channel width. The four data points for each topology correspond to channel widths of 16, 24, 48 and 72 bits.

III. TOPOLOGIES FOR EVALUATION

The topology defines how routers are connected with each other and with the network endpoints. For a large-scale system, the topology has a major impact on the performance and cost of the network. Our study aims to determine the energy consumed by network topologies across a range of network parameters including network bandwidth, traffic pattern and network frequency. In the experiments we study two realistic topologies: the Mesh and the concentrated Mesh (CMesh).

A. Mesh Topology

Linear arrays are called 1-D meshes and they are incrementally scalable. When dealing with a mesh, we usually assume that its dimension n is fixed. If we want to change its size, we change the side lengths. The most practical meshes are, of course, 2-D and 3-D ones [6].

In a mesh network, the nodes are arranged in a k-dimensional lattice of width w, giving a total of w^k nodes (usually k = 1, a linear array, or k = 2, a 2-D array,


e.g. the ICL DAP). Communication is allowed only between neighboring nodes. All interior nodes are connected to 2k other nodes.

The most important mesh-based parallel computers are Intel's Paragon (2-D mesh) [14] and MIT J-Machine (3-D mesh). Also transputers used 2-D mesh interconnection. Processors in mesh-based machines are allocated by submeshes and the submesh allocation strategy must handle possible dynamic fragmentation and compaction of the global mesh network, similarly to hypercube machines [30].

Figure-1. Mesh

B. CMesh Topology

The 2D mesh is a popular interconnect choice in large-scale CMPs [5], [14]. Each of the T/C routers connects to its four neighbouring routers and to C source or destination nodes, where T is the number of sources and destinations in the network and C is the degree of concentration.

Figure-2. CMesh

The major advantage of the mesh is its simplicity. All links are short and balanced and the overall layout is very regular. The routers are low radix with up to C + 4 input and output ports, which reduces their area footprint, power overhead, and critical path. The major disadvantage is the large number of hops that flits potentially have to go through to reach their final destination (proportional to √N for N routers). Each

router imposes a minimum latency (e.g., 3 cycles) and is a potential point of contention. A large number of hops has a direct impact on the energy consumed in the interconnect for buffering, transmission, and control. Hence, meshes could face performance and power scalability issues for large-scale systems. To address this shortcoming, researchers have proposed meshes with physical [8] or virtual [17] express links.
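To make the hop-count and router-count trade-off concrete, the following is a minimal, illustrative Java sketch (not taken from the paper) that counts routers and the average dimension-order hop distance for a 64-node Mesh and CMesh; the concentration factor C = 4 used here is an assumed, illustrative value.

// Illustrative comparison of router count and average hop distance for a Mesh
// and a concentrated Mesh (CMesh). The 64-node size matches the paper's setup;
// the concentration factor C = 4 is an assumed value.
public class MeshHops {

    // Average Manhattan (dimension-order) distance between all ordered pairs
    // of routers in a side x side grid.
    static double averageHops(int side) {
        long total = 0, pairs = 0;
        for (int x1 = 0; x1 < side; x1++)
            for (int y1 = 0; y1 < side; y1++)
                for (int x2 = 0; x2 < side; x2++)
                    for (int y2 = 0; y2 < side; y2++) {
                        total += Math.abs(x1 - x2) + Math.abs(y1 - y2);
                        pairs++;
                    }
        return (double) total / pairs;
    }

    public static void main(String[] args) {
        int nodes = 64;                                // network size used in the paper
        int meshSide = (int) Math.sqrt(nodes);         // 8x8 mesh, one node per router
        int c = 4;                                     // assumed concentration factor
        int cmeshSide = (int) Math.sqrt(nodes / c);    // 4x4 CMesh, C nodes per router

        System.out.printf("Mesh : %d routers, avg %.2f router-to-router hops%n",
                meshSide * meshSide, averageHops(meshSide));
        System.out.printf("CMesh: %d routers (radix up to C+4 = %d), avg %.2f hops%n",
                cmeshSide * cmeshSide, c + 4, averageHops(cmeshSide));
    }
}

Under these assumptions the 8x8 Mesh has 64 low-radix routers while the 4x4 CMesh has only 16 higher-radix routers and a correspondingly lower average hop count, which is exactly the trade-off described above.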

IV. EVALUATION

Our network-on-chip (NoC) topology study aims to determine the energy efficiency of network topologies across a range of network parameters including network bandwidth, traffic pattern and network frequency. In the experiments we compared the Mesh and CMesh topologies. We used an RTL-based router model and a SPICE-based channel model to obtain the energy results. The router RTL was placed and routed using a commercial 45 nm low-power library running at 200 MHz. The channel model uses technology parameters from the same library.

Figure 3 shows network energy efficiency (in pJ/bit) as a function of network bandwidth for networks with a fixed size of 64 nodes running uniform random traffic. We change the network bandwidth by changing the channel width. The four data points for each topology correspond to channel widths of 16, 24, 48 and 72 bits. For each channel-width configuration, the network runs at 50% of its saturation bandwidth.
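As a minimal sketch of how an energy-per-bit figure of this kind can be computed from simulation counters, the Java fragment below divides the total network energy by the bits delivered; the event counts and per-event energy costs are placeholder values, not measurements from the paper (in the study they would come from the RTL router model and the SPICE-based channel model).

// Illustrative calculation of network energy per bit (pJ/bit) from counters.
public class EnergyPerBit {
    public static void main(String[] args) {
        long flitsRouted    = 2_000_000;        // assumed router traversals in a run
        long linkTraversals = 1_500_000;        // assumed channel traversals
        long bitsDelivered  = 64L * 1_000_000;  // assumed payload bits delivered

        double routerEnergyPerFlitPj = 1.1;     // assumed pJ per flit per router hop
        double linkEnergyPerFlitPj   = 0.6;     // assumed pJ per flit per channel hop

        double totalEnergyPj = flitsRouted * routerEnergyPerFlitPj
                             + linkTraversals * linkEnergyPerFlitPj;
        double energyPerBit = totalEnergyPj / bitsDelivered;

        System.out.printf("Network energy: %.3e pJ, efficiency: %.3f pJ/bit%n",
                totalEnergyPj, energyPerBit);
    }
}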

Figures 3-5 show the effect of varying traffic patterns on the energy efficiency of both network topologies. Each network configuration runs at 50% of its saturation throughput under the test traffic pattern. Both the Mesh and the CMesh topology use dimension-order routing. In Figure 4, using dimension-order routing under transpose traffic, much of the network infrastructure is idle except for a few heavily loaded channels. As a result, the energy per bit of the Mesh topology increases.

In Figure 5, nearest-neighbour traffic heavily favours the Mesh topology. Each node in the Mesh has a dedicated channel to each of its immediate neighbours, which results in very high network bandwidth. For the other topology, nearest-neighbour traffic underutilizes network resources such as the longer channels, and these underutilized resources decrease the energy efficiency of the CMesh topology when compared to the Mesh.


Fig. 3. Network energy per bit sent under uniform random traffic vs. network bandwidth

Fig. 4. Network energy per bit sent under transpose

traffic vs. network bandwidth

Fig. 5. Network energy per bit sent under nearest neighbour traffic vs. network bandwidth

V. CONCLUSION AND FUTURE SCOPE

As discussed in [33], we have shown that the Mesh gives better energy efficiency than the Fat Tree. Here we compared two other popular interconnection network topologies, the Mesh and the CMesh. After evaluating the Mesh and CMesh under different traffic patterns, we found that the CMesh topology consumes less energy than the Mesh topology, as shown in the charts. In future work we plan to evaluate the CMesh and FBFly topologies with the above traffic patterns.

REFERENCES
[1] Hemani, et al., "Network on Chip: An Architecture for Billion Transistor Era," In Proc. IEEE NorChip Conf., Nov. 2000.
[2] ARG Database CD, http://amalfi.dis.unina.it/; T. Messmer, H. Bunke, "A Decision Tree Approach to Graph and Subgraph Isomorphism Detection," Pattern Recognition, Dec. 1999.
[3] Balasubramonian, R., Muralimanohar, N., Ramani, K., and Venkatachalapathy, V., "Microarchitectural wire management for performance and power in partitioned architectures," In Proceedings of the 11th International Symposium on High-Performance Computer Architecture, IEEE, Los Alamitos, CA, 2005.
[4] Bell, S., Edwards, B., Amann, J., Conlin, R., Joyce, K., Leung, V., MacKay, J., Reif, M., Bao, L., et al., "TILE64 processor: A 64-core SoC with mesh interconnect," In Proceedings of the International Solid-State Circuits Conference, IEEE, Los Alamitos, CA, 2008.
[5] Benini, L., De Micheli, G., "Networks on chips: a new SoC paradigm," IEEE Computer, pp. 70-78, Jan 2002.
[6] Bononi, L., Concer, N., Grammatikakis, M., Coppola, M., and Locatelli, R., "NoC topologies exploration based on mapping and simulation models," In Proceedings of the 10th Conference on Digital System Design Architectures, Methods and Tools, IEEE, Los Alamitos, CA, 2007.
[7] Dally, W., "Express cubes: Improving the performance of k-ary n-cube interconnection networks," IEEE Trans. Comput. 40, 9, pp. 1016-1023, 1991.
[8] Dally, W. J. and Towles, B., "Route packets, not wires: On-chip interconnection networks," In Proceedings of the 38th Conference on Design Automation, ACM, New York, 2001.
[9] De Micheli, G. and Benini, L., "Networks on chip: A new paradigm for systems on chip design," In Proceedings of the Conference on Design, Automation and Test in Europe, ACM, New York, 2002.
[10] Ho, R., Mai, K., and Horowitz, M., "The future of wires," Proc. IEEE 89, 4, 2001.
[11] http://csrc.nist.gov/CryptoToolkit/aes/rijndael/
[12] http://vlado.fmf.uni-lj.si/pub/networks/pajek/
[13] Intel Tera-scale Computing Research Program, http://www.intel.com/go/terascale, 2008.
[14] Kim, J., Park, D., Theocharides, T., Vijaykrishnan, N., and Das, C. R., "A low latency router supporting adaptivity for on-chip interconnects," In Proceedings of the 42nd Annual Conference on Design Automation, ACM, New York, 2005.
[15] Kim, M. M., Davis, J. D., Oskin, M., and Austin, T., "Polymorphic on-chip networks," In Proceedings of the 35th Annual International Symposium on Computer Architecture, ACM, New York, 2008.
[16] Kumar, A., Peh, L.-S., and Jha, N. K., "Token flow control," In Proceedings of the 41st Annual International Symposium on Microarchitecture, IEEE, Los Alamitos, CA, 2008.
[17] Kumar, R., Zyuban, V., and Tullsen, D. M., "Interconnections in multi-core architectures: Understanding mechanisms, overheads and scaling," In Proceedings of the 32nd Annual International Symposium on Computer Architecture, ACM, New York, 2005.
[18] Leiserson, C. E., "Fat-trees: Universal networks for hardware-efficient supercomputing," IEEE Transactions on Computers, Vol. C-34, pp. 892-901, Oct. 1985.



[19] M. Kreutz, et al., "Communication Architectures for System-On-Chip," In 14th Symp. on Integrated Circuits and Systems Design, Sep. 2001.
[20] Mullins, R., West, A., and Moore, S., "Low-latency virtual-channel routers for on-chip networks," In Proceedings of the 31st Annual International Symposium on Computer Architecture, ACM, New York, 2004.
[21] Ohring, S.R., Ibel, M., Das, S.K., Kumar, M.J., "On generalized fat trees," In Proceedings of the Parallel Processing Symposium, 1995.
[22] Owens, J. D., Dally, W. J., Ho, R., Jayasimha, D. N., Keckler, S. W., and Peh, L.-S., "Research challenges for on-chip interconnection networks," IEEE Micro 27, 5, pp. 96-108, 2007.
[23] P. Foggia, et al., "A Performance Comparison of Five Algorithms for Graph Isomorphism," In Proc. 3rd IAPR TC-15 Workshop on Graph-based Representations in Pattern Recognition, May 2001.
[24] P. Guerrier, A. Greiner, "A Generic Architecture for On-Chip Packet Switched Interconnections," In Proc. DATE, March 2000.
[25] Pierre Guerrier, Alain Greiner, "A generic architecture for on-chip packet-switched interconnections," Proceedings of the Conference on Design, Automation and Test in Europe, pp. 250-256, 2000.
[26] S. Murali, G. De Micheli, "SUNMAP: A Tool for Automatic Topology Selection and Generation for NoCs," In Proc. 41st DAC, June 2004.
[27] Tota, S., Casu, M. R., and Macchiarulo, L., "Implementation analysis of NoC: a MPSoC trace-driven approach," In Proceedings of the 16th Great Lakes Symposium on VLSI, ACM, New York, 2006.
[28] V.S. Adve, M.K. Vernon, "Performance analysis of mesh interconnection networks with deterministic routing," IEEE Transactions on Parallel and Distributed Systems, vol. 5, no. 3, pp. 225-246, 1994.
[30] V.S. Adve, M.K. Vernon, "Performance analysis of multiprocessor mesh interconnection networks with wormhole routing," Computer Sciences Technical Report #1001a, June 1992.
[31] W. Dally and B. Towles, "Route packets, not wires: On-chip interconnection networks," In Proc. 38th DAC, June 2001.
[32] Wang, H., Peh, L.-S., and Malik, S., "Power-driven design of router microarchitectures in on-chip networks," In Proceedings of the 36th Annual International Symposium on Microarchitecture, IEEE, Los Alamitos, CA, 2003.
[33] Lalit K. Arora, Rajkumar, "Network-on-Chip Evaluation Study for Energy," Proceedings of the International Conference on Reliability, Infocom Technology and Optimization, Lingaya's University, India, pp. 314-320, Nov. 2010.


Implementation of Authentication and Transaction Security based on Kerberos

Prof. R.P. Arora, Head of the Department, Computer Science and Engineering, Dehradun Institute of Technology, Dehradun

Ms. Garima Verma, Assistant Professor, MCA Department, Dehradun Institute of Technology, Dehradun

Abstract— Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography.

Kerberos was created by MIT as a solution to network security problems. The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection. After a client and server have used Kerberos to prove their identity, they can also encrypt all of their communications to assure privacy and data integrity as they go about their business.

In this paper we try to implement authentication and transaction security in a network using Kerberos. The project is embedded with an authentication server application and is used to derive a 64-bit key from the user's password. This key is used by the authentication server to encrypt the ticket-granting ticket plus session key. The key generated by the authentication server is used by the client at the time of a transaction through the transaction server, to authenticate whether the transaction client is valid or not.

Keywords: secret key, cryptography, authentication, ticket, session key

I. INTRODUCTION

With the advent of the computer, the need for automated tools for protecting files and other information stored on the computer became evident [14]. This is especially the case for a shared system, such as a time-sharing system, and the need is even more acute for systems that can be accessed over a public telephone network, data network, or the Internet. Computer and network security is important for the following reasons [16].

• To protect company assets: One of the primary goals of computer and network security is the protection of

company assets. By "assets" we mean the hardware and software that constitute the company's computers and networks, together with the "information" that is housed on them.

• To gain a competitive advantage: Developing and maintaining effective security measures can provide an organization with a competitive advantage over its

competition. Network security is particularly important in the arena of Internet financial services and e-commerce.

• To comply with regulatory requirements and fiduciary responsibilities: Corporate officers of every company have a responsibility to ensure the safety and soundness of the organization. Part of that responsibility includes ensuring the continuing operation of the organization. Accordingly, organizations that rely on computers for their continuing operation must develop policies and procedures that address organizational security requirements.

• To keep your job: Finally, to secure one's position within an organization and to ensure future career prospects, it is important to put into place measures that protect organizational assets. Security should be part of every network or systems administrator's job. Failure to perform adequately can result in termination. One thing to keep in mind is that network security costs money: It costs money to hire, train, and retain personnel; to buy hardware and software to secure an organization's networks; and to pay for the increased overhead and


degraded network and system performance that result from firewalls, filters, and intrusion detection systems (IDSs). As a result, network security is not cheap.

1.1 KERBEROS

Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography [2][10].

The Internet is an insecure place. Many of the protocols used in the Internet do not provide any security. Tools to "sniff" passwords off of the network are in common use by malicious hackers. Thus, applications which send an unencrypted password over the network are extremely vulnerable. Worse yet, other client/server applications rely on the client program to be "honest" about the identity of the user who is using it. Other applications rely on the client to restrict its activities to those which it is allowed to do, with no other enforcement by the server. Some sites attempt to use firewalls to solve their network security problems. Unfortunately, firewalls assume that "the bad guys" are on the outside, which is often a very bad assumption. Most of the really damaging incidents of computer crime are carried out by insiders. Firewalls also have a significant disadvantage in that they restrict how your users can use the Internet. (After all, firewalls are simply a less extreme example of the dictum that there is nothing more secure than a computer which is not connected to the network --- and powered off!) In many places, these restrictions are simply unrealistic and unacceptable.

Kerberos was created by MIT as a solution to these network security problems. The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection [13]. After a client and server have used Kerberos to prove their identity, they can also encrypt all of their communications to assure privacy and data integrity as they go about their business.

1.1.1 Basic Concepts

The Kerberos protocol relies heavily on an authentication technique involving shared secrets [14]. The basic concept is quite simple: if a secret is known by only two people, then either person can verify the identity of the other by confirming that the other person knows the secret. For example, let's suppose that Alice often sends messages to Bob and that Bob needs to be sure that a

message from Alice really has come from Alice before he acts on its information. They decide to solve their problem by selecting a password, and they agree not to share this secret with anyone else. If Alice's messages can somehow demonstrate that the sender knows the password, Bob will know that the sender is Alice.

The only question left for Alice and Bob to resolve is how Alice will show that she knows the password. She could simply include it somewhere in her messages, perhaps in a signature block at the end (Alice, Our$ecret). This would be simple and efficient and might even work if Alice and Bob can be sure that no one else is reading their mail. Unfortunately, that is not the case. Their messages pass over a network used by people like Carol, who has a network analyzer and a hobby of scanning traffic in the hope that one day she might spot a password. So it is out of the question for Alice to prove that she knows the secret simply by saying it. To keep the password secret, she must show that she knows it without revealing it.

The Kerberos protocol solves this problem with secret-key cryptography. Rather than sharing a password, communication partners share a cryptographic key, and they use knowledge of this key to verify one another's identity. For the technique to work, the shared key must be symmetric: a single key must be capable of both encryption and decryption. One party proves knowledge of the key by encrypting a piece of information, the other by decrypting it.
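As a minimal, illustrative Java sketch of this shared-key idea (not part of the paper's implementation; the class and variable names are chosen here purely for illustration), one party encrypts a challenge with the shared DES key and the other decrypts it and compares:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

// Minimal illustration of verifying identity with a shared symmetric key:
// the prover encrypts a challenge with the shared key, and the verifier
// decrypts the response and compares it with the original challenge.
public class SharedKeyDemo {
    public static void main(String[] args) throws Exception {
        // In a Kerberos-like setting the key would be distributed securely in
        // advance; here we simply generate one DES key used by both parties.
        SecretKey sharedKey = KeyGenerator.getInstance("DES").generateKey();

        byte[] challenge = "random-nonce-1234".getBytes("UTF-8");

        // Prover: encrypt the challenge with the shared key
        Cipher enc = Cipher.getInstance("DES");
        enc.init(Cipher.ENCRYPT_MODE, sharedKey);
        byte[] response = enc.doFinal(challenge);

        // Verifier: decrypt the response and compare with the original challenge
        Cipher dec = Cipher.getInstance("DES");
        dec.init(Cipher.DECRYPT_MODE, sharedKey);
        byte[] recovered = dec.doFinal(response);

        System.out.println(Arrays.equals(challenge, recovered)
                ? "Prover knows the shared secret"
                : "Authentication failed");
    }
}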


Figure 1: Functional block diagram of Kerberos


2. Literature Review

K. Aruna et al. (2010): The aim of this paper is to establish a collaborative trust-enhanced security model for distributed systems in which a node, either local or remote, is trustworthy. They also proposed a solution with trust policies as authorization semantics. Kerberos, a network authentication protocol, is also used to ensure the security aspect when a client requests certain services. In the proposed solution, they also considered the issue of performance bottlenecks.

Steve Mallard (2010) defined various authentication methods for protecting the assets on a network, such as username and password, biometric systems and Kerberos.

Dr. Mohammad N. Abdullah & May T. Abdul-Hadi (2009) tried to establish secure communication between clients and a mobile-bank application server, in which clients can use their mobile phones to securely access their bank accounts, make and receive payments, and check their balances.

Hongjun Liu et al. (2008) discussed the potential server bottleneck problem when the Kerberos model is applied in large-scale networks, because the model uses centralized management. They proposed an authentication model based on Kerberos which tries to overcome the potential server bottleneck problem and can balance the load automatically.

Frederick Butler, Iliano Cervesato, Aaron D. Jaggard, Andre Scedrov and Christopher Walstad (2006) analysed the Kerberos 5 protocol and concluded that Kerberos supports the expected authentication and confidentiality properties and that it is structurally sound; these results rely on a pair of intertwined inductions.

I. Cervesato, A. D. Jaggard, A. Scedrov and C. Walstad (2004) presented a formalization of Kerberos 5 cross-realm authentication in MSR, a specification language based on multiset rewriting. They also adapted the Dolev-Yao intruder model to the cross-realm setting and proved an important property for a critical field in a cross-realm ticket. They also documented several failures of authentication and confidentiality in the presence of compromised intermediate realms.

2.1 Objective of the study

Looking at the overall functioning of Kerberos, there are various modules that need to be built to implement Kerberos as a whole for any network. For authentication of any client there is a centralized authentication server, which generates a ticket for the client from its password by applying an encryption technique. Simultaneously, the authentication server passes a copy of the ticket to the respective data server. The ticket is unique for every data server and is valid for only one session. Whenever a client wants to perform a transaction through the server, it has to send a message with that ticket; the server then checks whether the client's ticket is right or wrong, and if the ticket is right it accepts the message or data sent by the client.

3. Research Methodology

To implement this project we have used Java and NetBeans 5.5, because we found Java the most suitable language for network programming. We have divided the whole project into three modules: the client, a user who wants to access the data server; the authentication server, the module used to generate a ticket and return it to the client so that the data server can easily check whether a client presenting a ticket is genuine; and the data server, the site where data is stored and can be utilized by the clients. We have used socket programming to implement the client, the authentication server and the data server.


For ticket generation, the authentication server uses the Data Encryption Standard (DES), which operates on 64-bit plaintext blocks with a 56-bit key.
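The paper does not show how the 64-bit key is derived from the user's password; the following is a minimal, assumed sketch using the standard javax.crypto API, in which the first eight bytes of the password form the DES key material (DES keys are 64 bits on the wire, of which 56 bits are effective).

import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESKeySpec;
import java.util.Arrays;

// Illustrative derivation of a DES key from a user's password. The padding
// scheme here is an assumption; the paper does not specify its derivation.
public class PasswordToDesKey {
    static SecretKey deriveKey(String password) throws Exception {
        // Take the first 8 bytes of the password, zero-padding if it is shorter
        byte[] keyBytes = Arrays.copyOf(password.getBytes("UTF-8"), 8);
        DESKeySpec spec = new DESKeySpec(keyBytes);
        return SecretKeyFactory.getInstance("DES").generateSecret(spec);
    }

    public static void main(String[] args) throws Exception {
        SecretKey k = deriveKey("examplePassword");   // illustrative password
        System.out.println("Derived DES key, algorithm: " + k.getAlgorithm());
    }
}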

4. Implementation

The whole project is divided into three modules: the client site, the authentication server and the data server.

4.1 Client Module

A client is any user who can apply to any data server for service. The obvious security risk is that of impersonation: an opponent client can pretend to be another client and obtain unauthorized privileges on the data server sites. To counter this threat, data servers must be able to confirm the identities of clients who request service. We followed these steps:

• The client logs on to its own terminal using a user name and password. These user names and passwords are predefined and assigned to every client on the network. Every client has a unique user name with two passwords: one password is used to log on to the client terminal, and the other, called the transaction password, is submitted to the authentication server.

Code Section

Class.forName("com.mysql.jdbc.Driver").newInstance();

Connection con = DriverManager.getConnection("jdbc:mysql://localhost/test?"+"user=root&password=garima");

Statement stmt=con.createStatement();

ResultSet rs=stmt.executeQuery("select * from user where userid='"+usertxt.getText()+"'");

String u,p;

rs.next();

u=rs.getString(1);

p=rs.getString(2);

if(p.compareTo(passtx.getText())!=0)

{

JOptionPane.showMessageDialog(this,"Password Wrong");

}

• After a successful login, the client submits its details together with the transaction password to the authentication server. The details include the user name, the transaction password and the name of the data server.

• The entered transaction password is checked again against the client database and then finally sent to the authentication server.

Code Section

Class.forName("com.mysql.jdbc.Driver").newInstance();

Page 21: IJITCE Feb 2011

INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING (ISSN:2045-8711) VOL.1 NO.2 FEBRUARY 2011

15

Connection con = DriverManager.getConnection("jdbc:mysql://localhost/test?"+ "user=root&password=garima");

Statement stmt=con.createStatement();

ResultSet rs=stmt.executeQuery("select * from user where userid='"+txtuser.getText()+"'");

String u,p;

if(rs==null)

{ JOptionPane.showMessageDialog(this,"User Name is Wrong");

}

rs.next();

//writing data to authentication serveru=rs.getString(1);

p=rs.getString(3);

if(p.compareTo(txtpas.getText())!=0)

{ JOptionPane.showMessageDialog(this,"Password Wrong");

}

else

{

try

{

Socket clientSocket = new Socket("192.168.10.133", 6789); PrintStream ps=new PrintStream(clientSocket.getOutputStream());

DataInputStream dis=new DataInputStream(clientSocket.getInputStream());

ps.println(u);

ps.println(p);

ps.println(serverip.getText());

s=dis.readLine().toString();

JOptionPane.showMessageDialog(this,s);

// clientSocket.close();

}catch(Exception e)

{ JOptionPane.showMessageDialog(this,e.toString()+" :he");

}

}

}catch(Exception e)

{

JOptionPane.showMessageDialog(this,"User Name is Wrong");

}

}

• After receiving the ticket from the authentication server, the client sends the message plus the ticket to the data server.

Code section

try
{
    // Connect to the data server and send the message together with the ticket
    clientSocket2 = new Socket("192.168.10.216", 7211);
    JOptionPane.showMessageDialog(this, "sending");
    PrintStream pserver = new PrintStream(clientSocket2.getOutputStream());
    DataInputStream diserver = new DataInputStream(clientSocket2.getInputStream());

    // s holds the ticket previously returned by the authentication server
    pserver.println(servermsg.getText());   // the message for the data server
    pserver.println(s);                      // the ticket

    // Show the data server's verdict (authenticated or not)
    JOptionPane.showMessageDialog(this, diserver.readLine());
    clientSocket2.close();
} catch (Exception e) {}   // errors are silently ignored in this fragment

4.2 Authentication Server Module

The authentication server (AS) is a central authority that knows the passwords of all clients and stores them in a centralized database. In addition, the AS shares a unique secret key with each server [14]. These keys have been distributed physically or in some other secure manner. For example, the user logs on to a workstation and requests access to server V: the client module C in the user's workstation requests the user's password and then sends a message to the AS that includes the user's ID, the server's ID and the user's password. The AS checks its database to see if the user has supplied the proper password for this user ID and whether this user is permitted access to server V. If both tests are passed, the AS accepts the user as authentic and creates a ticket. This ticket is then sent back to C. For the encryption we have used the DES algorithm.

The Following steps are included in this module:

• Start the authentication server as well as the data server.

After being started, the authentication server can accept any request arriving at its port address from the client side.

Code Section

// Listen on port 6789 and block until a client connects
welcomeSocket = new ServerSocket(6789);
connectionSocket = welcomeSocket.accept();


• Generate the ticket using the DES algorithm, store a copy of the ticket in the authentication server's own database, and send a copy to the data server site.

Code Section

// Generate a fresh DES session key
key = KeyGenerator.getInstance("DES").generateKey();
System.out.println("Client :" + connectionSocket);

// pserver writes to the data server socket (cs); ps/dis talk to the client
PrintStream pserver = new PrintStream(cs.getOutputStream());
PrintStream ps = new PrintStream(connectionSocket.getOutputStream());
DataInputStream dis = new DataInputStream(connectionSocket.getInputStream());

// Read the client's name, transaction password and requested data server
clientname = dis.readLine();
clientSentence = dis.readLine();
servername = dis.readLine();

// Encrypt the transaction password with the DES key to form the ticket
DesEncrypter ds = new DesEncrypter(key, clientSentence);
String enc = ds.encrypt(clientSentence);

// Store a copy of the ticket in the AS database (pstm) and in the data server's
// table (pstm1); both PreparedStatements are assumed to be prepared earlier
pstm.setString(1, clientname);
pstm.setString(2, enc);
pstm.setString(3, servername);
pstm.executeUpdate();
pstm1.setString(1, clientname);
pstm1.setString(2, enc);
pstm1.executeUpdate();

// Return the ticket to the client
ps.println(enc);
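The DesEncrypter class used above is not listed in the paper; the following is a minimal, hypothetical sketch of what such a helper might look like, assuming DES in its default mode with the ciphertext Base64-encoded so the ticket can be stored and transmitted as a text line.

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import java.util.Base64;

// Hypothetical sketch of the DesEncrypter helper referenced above.
public class DesEncrypter {
    private final Cipher encryptCipher;

    public DesEncrypter(SecretKey key, String ignoredPassword) throws Exception {
        // The second constructor argument mirrors the call site in the paper;
        // only the key is needed for encryption in this sketch.
        encryptCipher = Cipher.getInstance("DES");
        encryptCipher.init(Cipher.ENCRYPT_MODE, key);
    }

    public String encrypt(String plaintext) throws Exception {
        byte[] cipherBytes = encryptCipher.doFinal(plaintext.getBytes("UTF-8"));
        return Base64.getEncoder().encodeToString(cipherBytes);
    }
}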

4.3 Data Server Module

After receiving the ticket, the client can apply to the data server for service. The client sends a message to the server containing its ID and the ticket. The server decrypts the ticket and matches it against the ticket stored in its database. If the two match, the server considers the user authenticated and serves the requested service.

The following steps are included in this module:

• Client will send a message with ticket to the data server after receiving ticket from authentication server.

Code Section

// Connect to the data server, send the message and the ticket, and display the reply
clientSocket2 = new Socket("192.168.10.216", 7211);
JOptionPane.showMessageDialog(this, "sending");
PrintStream pserver = new PrintStream(clientSocket2.getOutputStream());
DataInputStream diserver = new DataInputStream(clientSocket2.getInputStream());

pserver.println(servermsg.getText());   // the message for the data server
pserver.println(s);                      // the ticket received from the authentication server

JOptionPane.showMessageDialog(this, diserver.readLine());
clientSocket2.close();

• The data server verifies the ticket; after verification it sends a message to the client indicating whether the client is authentic or not.

Code Section

msg = dclient.readLine();
k1 = dclient.readLine();

// Look up the received ticket in the data server's table.
ResultSet rs = stmt.executeQuery("select * from dser where keyid='" + k1 + "'");
if (rs.next())
    ps.println("authenticated client");
else
    ps.println("Not authorized");

System.out.println(msg);
new job1(consoc);
ps.close();
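The module description above says the data server decrypts the ticket and matches it against the stored copy, whereas the listing only looks up the encrypted string. A minimal, hypothetical sketch of such a decrypt-and-compare check, reusing the DesEncrypter sketch given earlier and assuming storedTicket is read from the dser table, could be:

// Hypothetical sketch (not part of the paper's listing): decrypt both tickets
// with the shared DES key and compare the plaintexts.
public class TicketVerifier {
    private final DesEncrypter des;   // helper sketched above

    public TicketVerifier(DesEncrypter des) {
        this.des = des;
    }

    public boolean verify(String receivedTicket, String storedTicket) throws Exception {
        String receivedPlain = des.decrypt(receivedTicket); // ticket sent by the client
        String storedPlain = des.decrypt(storedTicket);      // copy issued by the AS
        return receivedPlain.equals(storedPlain);             // authentic only if both match
    }
}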

5. Conclusion

Authentication is critical for the security of computer systems. Without knowledge of the identity of a principal requesting an operation, it is difficult to decide whether the operation should be allowed. Traditional authentication methods are not suitable for use in computer networks where attackers monitor network traffic to intercept passwords. The use of strong authentication methods that do not disclose passwords is imperative. The Kerberos authentication system is well suited for authentication of users in such environments.

In an unprotected environment, any client can apply to any server for service, which carries a security risk of impersonation: an opponent can pretend to be another client and obtain unauthorized privileges on server machines. In the above scheme the transaction is highly secured in the sense that the authentication server creates a ticket that is encrypted using the secret key shared by the server and the authentication server. This ticket is then sent back to the client. Because the ticket is encrypted, it cannot be altered by the client or by an opponent.


References

[1] K. Aruna et al. (2010), "A new collaborative trust enhanced security model for distributed systems", International Journal of Computer Applications, No. 26.
[2] Steve Mallard (2010), "Methods of authentication", Bright Hub.
[3] Hongjun Liu et al. (2008), "A distributed expansible authentication model based on Kerberos", Journal of Network and Computer Applications, Vol. 31, Issue 4.
[4] Mohammad N. Abdullah & May T. Abdul-Hadi (2009), "A Secure Mobile Banking Using Kerberos Protocol", Engineering & Technology Journal, Vol. 27, No. 6.
[5] "How Kerberos Authentication Works", Network online magazine, Jan 2008.
[6] "How Kerberos Authentication Works", Learn Networking online magazine, Jan 2008.
[7] Frederick Butler, Iliano Cervesato, Aaron D. Jaggard, Andre Scedrov and Christopher Walstad, "Formal Analysis of Kerberos 5", Sep 2006.
[8] Rong Chen, Yadong Gui and Ji Gao, "Modification on Kerberos Authentication Protocol in Grid Computing Environment", Vol. 3032, 2004.
[9] I. Cervesato, A. D. Jaggard, A. Scedrov, C. Walstad, "Specifying Kerberos 5 cross-realm authentication", Vol. 3032, 2004.
[10] "Security of Network Identity: Kerberos or PKI", System News (2002), Vol. 56, Issue II.
[11] Ian Downnard, "Public-key cryptography extensions into Kerberos", IEEE Potentials, 2002.
[12] B. Clifford Neuman and Theodore Ts'o, "Kerberos: An Authentication Service for Computer Networks", IEEE Communications 32 (1994), No. 9, pp. 33-38.
[13] MIT Kerberos Website, http://web.mit.edu/kerberos/www.
[14] William Stallings, "Cryptography and Network Security", Third Edition.
[15] Ravi Ganesan, "Yaksha: Augmenting Kerberos with Public Key Cryptography".
[16] John E. Canavan, "Fundamentals of Network Security".
[17] Chris Brenton with Cameron Hunt, "Active Defence: A Comprehensive Guide to Network Security".


Cultural Issues and Their Relevance in Designing Usable Websites

Alao Olujimi Daniel1, Awodele Oludele2, Rehema Baguma3, and Theo van der Weide4

1. Computer Science & Mathematics Department, Babcock University, Illishan-Remo, Nigeria*

2. Computer Science & Mathematics Department, Babcock University, Illishan-Remo, Nigeria*

3. Faculty of Computing & Information Technology, Makerere University, Kampala, Uganda

4. Radboud University, Institute for Computing and Information Sciences. Nijmegen, The Netherlands.

Abstract— Cultural characteristics of users play a significant role in their interactions with and understanding of web based systems. Hence, consideration of cultural issues in the design of a web based system can improve the usability of such a system. The relation between culture and the internet is symbiotic, that is, experience obtained from using the internet (with its rich cultural diversity) can also have an influence on the local culture. This makes culture a moving target. However, to date not much research has been done on what cultural issues influence the usability of websites and on the level of that influence. This paper examines theoretically the cultural issues that influence web design/usability and the significance of this influence for the general usability of a website, and also establishes how culture can be utilized to develop more usable websites. Thus the main contribution of this study is to identify what characterizes usable websites with reference to the cultural needs of the user, and specific web features applicable to cultural dimensions that can enhance cultural understanding and help web designers customize web sites to specific cultures.

Keywords: Human Computer Interaction (HCI), Web Usability, Culture/User Centered Design, Cultural dimensions.

1 INTRODUCTION

As the World Wide Web spreads across countries, it has become increasingly important for designers to respect and understand cultural differences in how people communicate and use the Internet. This knowledge is particularly crucial for people in international business, technology professions, and other work areas that require people from different cultures to interact online (Sapienza, 2008).

According to the International Telecommunications Union, as of December 31, 2009 the number of users interacting with the internet had increased 399.3 percent since the year 2000. A survey by Forrester Research indicated that North American consumers alone spent $172 billion shopping online in 2005, up from $38.8 billion in 2000. By 2010, consumers were expected to spend $329 billion each year online.

With the number of online consumers on the Web steadily increasing, there is a need to seek a better understanding of user cultural preferences in the design elements. The results of an on-line experiment that exposed American and Chinese users to sites created by both Chinese and American designers indicated that users perform information-seeking tasks faster when using web content created by designers from their own cultures (Faiola and Matei, 2005).

Evers and Day (1997), in examining user satisfaction, found that 67.9% of users would be satisfied using an interface with technology adapted to their culture.


Web site usability is to a large extent affected by the culture of the user; there is a relationship between culture and usability, termed "culturability" by Barber and Badre (2001). They argue that the success of an interface is achievable when the user interface design reflects the cultural characteristics of the target audience. Ease of use with cultural acceptability has become the pre-eminent requirement of designing software and other computer applications. To meet this necessity, "culturability" has emerged as a serious field of research. According to Nantel and Glaser (2008), a "culturally adapted website results in greater ease of navigation and a more positive attitude towards the site", thus indicating ease of use.

Presently, few information systems such as application software with graphical user interfaces, government websites, online shopping sites and even corporate websites satisfy usability and cultural criteria, resulting in a lot of frustration among users. The reason for this is that the design of these information systems is technology-centered, so that the cultural needs of the users have not been taken into consideration during the development process. Interacting with a website is a form of communication. For a website to achieve successful communication with its users, two variables need to be considered: the language in which it is coded and the context in which the information is embedded. If these are not shared by the system designer and the users, their meanings will differ, and efficient communication will not be achieved (Mantovani, 2001). While language can be easily determined, context identification can be a complex task. Language does not mention what is commonly known; culture, at least, provides extra context, namely that which is commonly known by people sharing that culture. Furthermore, we communicate by using symbols, but symbols are very culture dependent. Finally, how to go about this has a cultural component: the look and feel of a website is derived from the common strategies used to solve a problem. A way to capture this is based on culture, because it allows clustering people into groups that share common characteristics and traits.

This paper discusses cultural issues that influence web usability and how culture can be utilized to develop more usable websites. It explores the meaning of usability and culture, and investigates in which ways objective and subjective cultural issues affect the usability and design of websites.

"No longer can issues of culture and usability remain separate in design for the World Wide Web. Usability must be re-defined in terms of a cultural context, as what is user-friendly for one culture can be vastly different for another culture, and usability must therefore take on a cultural context."

Wendy Barber and Albert Badre, Graphics, Visualization & Usability Centre, Georgia Institute of Technology, Atlanta.

The merging of culture and usability in website design, or culturability as termed by Barber and Badre (2001), challenges the idea of usability as being culturally neutral by claiming that cultural values such as thought patterns and customs are expected to directly affect the way a user interacts with a website, and therefore its usability.

1.1 Objectives of this study

The major goals of this paper are as follows:
• To find out cultural issues that influence Web usability
• To establish how websites can be adapted to meet the cultural needs of users
• To establish how culture can be utilized to develop more usable websites.

2 Web Usability and Culture

2.1 Website Usability

There are many definitions of usability proposed by various individuals, but there is no common definition of usability that is generally accepted within the HCI community. Preece et al. (1994) defined usability as "a measure of the ease with which a system can be learned or used, its safety, effectiveness and efficiency, and attitude of its users towards it". Nielsen (1993) defined the usability of a computer system in terms of the following attributes: learnability, efficiency, memorability, errors, and satisfaction. On the other hand, ISO 9241-11 defines usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use". From the above definitions, it can be concluded that usability of a website is generally concerned with making website interfaces that are easy to use or user friendly.


2.2 Culture

There is a wide range of culture definitions that vary across authors and time. As Kluckhohn (1962) states, culture is a set of definitions of reality, including language, values and rules that set the limits for behavior, held in common by people who share a distinctive way of life. Evers and Day (1997) affirm that culture shapes the way people behave, view the world, communicate and think; it is formed by historical experiences and values, traditions and surroundings. Hall (1959) holds that culture stands for a frame of reference developed by a group of people and used to understand each other. For him, key issues for developing this frame are ways of life, behavioral patterns, attitudes and material objects. When a group of people, no matter its scale, starts sharing common ways of thinking, feeling and living, culture emerges (Keiichi Sato & Kuosiang Chen, 2008). The word culture also comes from the Latin word "colere" (to inhabit, cultivate). The original meaning was used in the biological sciences (for example, a bacterial culture). In the mid-to-late 19th century, the term came to be applied to the social development of humans (Sapienza, 2008). Ernest Gellner (1997) gave the most commonly accepted meaning, calling culture "the socially transmitted and sometimes transformed bank of acquired traits". Although culture is a social phenomenon, biological characteristics are often connected to it. For example, we see people of a particular gender, age, skin color, or body type (height, weight, etc.) and we assume they must belong to a particular culture (Sapienza, 2008).

2.3 Classification of Culture

Culture can be broadly categorized into objective and subjective culture, as shown in Figure 1. Objective culture is the visible, easy to examine and tangible aspect of culture, represented in terms of text orientation, metaphor, date and number formats, page layout, color and language, while subjective culture is "the psychological feature of a culture, including assumptions, beliefs, values, and patterns of thinking" (Hoft, 1996).

Fig 1: Classification of Culture

2.4 Cultural Models

Cultural models consist of cultural variables, which can focus on easy-to-research objectives like political and economic contexts, reading directions and formats for dates and numbers. Cultural variables can also focus on subjective information, like value-systems and behavioral patterns.

Table 1. Cultural models and their dimensions (adapted from Hoft, 1996)

Hofstede: Power Distance; Masculinity vs. Femininity; Individualism vs. Collectivism; Uncertainty Avoidance; Time Orientation.
Trompenaars: Universalism vs. Particularism; Neutral vs. Emotional; Individualism vs. Collectivism; Specific vs. Diffuse; Achievement vs. Ascription; Time; Environment.
Victor: Language; Environment and Technology; Social Organisation; Contexting; Authority Conception; Nonverbal Behaviour; Temporal Conception.
Hall: Speed of Messages; Context; Space; Time; Information Flow; Action Chains.


Hoft (1996) identified four models of culture developed by Hofstede, Hall, Trompenaars and Victor.

Hofstede's model: concerned with the patterns of thinking, feeling, and acting that form a culture's mental model.

Edward T. Hall's model: concerned with determining what releases the right responses for effective communication.

Fons Trompenaars' model: a model of culture developed with the purpose of determining the way in which a group of people solves problems.

David A. Victor's model: concerned with the aspects of culture that affect communication in a business setting.

These models identify a number of cultural dimensions that are used to illustrate the various models of culture. Due to space limitations, and because some of the dimensions are common to several of the models, the cultural models and their dimensions are shown in Table 1 above, and a description of a few of the cultural dimensions and their definitions is given in Table 2 below.

Table 2. Cultural dimensions and their definitions

Power Distance PD (Hofstede): the extent to which people accept unequal power distribution in a society.
Individualism vs. Collectivism IC (Hofstede): the extent to which people prioritize their individuality versus their willingness to submit to the goals of the group.
Masculinity vs. Femininity MASFEM (Hofstede): the extent to which a culture exhibits traditionally masculine or feminine values.
Uncertainty Avoidance UA (Hofstede, Trompenaars): the extent to which a society willingly embraces or avoids the unknown.
Time Orientation (Hofstede, Trompenaars, Hall, Victor): present in all four models; concerns people's concern for the past, present and future, and stands for the fostering of virtues oriented towards future rewards, in particular perseverance and thrift.
Universalism vs. Particularism (Trompenaars): the degree to which people in a country weigh generalist rules about what is right against more situation-specific relationship obligations and unique circumstances.
Neutral vs. Emotional Relationship Orientations (Trompenaars): the degree to which people in a country favour 'objective' and 'detached' interactions over interactions where emotion is more readily expressed.
Achievement vs. Ascription (Trompenaars): the degree to which cultural groups in a country make their judgments of others on actual individual accomplishments (achievement-oriented societies) rather than ascribing status on grounds of birth, group membership or similar criteria.
Specific vs. Diffuse Orientations (Trompenaars): the degree to which people in a country engage in business relationships in which private and work encounters are demarcated and 'segregated out'.
Context (Hall, Victor): the amount of information given explicitly in a message. In a high-context culture much of the information is implicit in the context, while in a low-context culture the information is stated explicitly and in detail.


This paper adopts Hofstede's five dimensions of culture for the investigation of the subjective cultural aspect of this study.

Hofstede's dimensions of culture are often quoted in relation to cultural usability. The model has gained wide acceptance among anthropologists and has been proposed as a framework for cross-cultural HCI design (Vöhringer-Kuhnt, 2001). Hofstede viewed culture as 'programming of the mind' in the sense that certain reactions are more likely in certain cultures than in others, based on differences between the basic values of the members of different cultures. Hofstede proposed that all cultures can be defined through the following dimensions: Power Distance (PD), Individualism vs. Collectivism (IC), Masculinity vs. Femininity (MASFEM), Uncertainty Avoidance (UA) and Long-term vs. Short-term Orientation (LTO). (See Table 2 above for an explanation of the dimensions.)

3 Related research

Marcus and Gould (2001) in their paper on cultural dimensions and global web design discussed the impact of culture on websites design. They examined how Hofstede’s five dimensions of culture might affect user interface design. By drawing from the Internet sites of several corporate and non-corporate entities of differing nationalities (e.g., Britain, Belgium, Malaysia, the Netherlands, Costa Rica, and Germany), the authors concluded that societal levels of power distance, individualism, masculinity, uncertainty avoidance, and long-term orientation are reflected in several aspects of user-interface and web design.

Barber and Badre (2001) posited the existence of prevailing interface design elements and Web site features within a given culture, called cultural markers. These are interface design elements and features such as color preference, fonts, shapes, icons, metaphors, language, flags, sounds, motion, preferences for text vs. graphics, the directionality of how language is written, help features, and navigation tools that are prevalent, and possibly preferred, within a particular cultural group. Such markers signify a cultural affiliation. The authors examined the cultural markers of web sites from different nations and cultures by grouping several web sites according to their language, nation and genre and manually inspecting each cluster for recurrent design preferences. They concluded that web sites that contain the cultural markers of their target audience are considered more acceptable by users of the underlying culture.

Evers and Day (1997), in a more comprehensive study of usability and culture, found culture to be an important factor in the perceptions of efficiency, effectiveness and satisfaction, and in user behavior, when using a software application. They discovered that there is a difference between Chinese and Indonesian users in terms of user interface acceptance. They concluded that culture is likely to influence many elements affecting the usability of a product.

Nantel and Glaser (2008) demonstrated that perceived usability of a website increased when the website was originally conceived in the native language of the user. Translation, even of excellent quality, created a cultural distance which impacted users’ evaluation of site usability. A similar result from Information Retrieval is that documents are best searched in the language in which they were written. While evaluating the quality of an offer on the web, however, language had little or no impact on the evaluation.

Vöhringer-Kuhnt (2001) investigated cultural influences on the usability of globally used software products. The survey was conducted online over the internet. The overall results revealed differences in the attitude towards usability across members of different national groups. The study concluded that only Hofstede's Individualism/Collectivism was significantly connected to the attitude towards product usability, but further research is needed to deepen the value of Hofstede's culture-specific variables for cultural design and evaluation of software and web applications.

Andy Smith et al. (2003) posited the concept of cultural attractors to define the interface design elements of a website that reflect signs and their meanings so as to match the expectations of a local culture. These cultural attractors are colours, banner adverts, trust signs, use of metaphor, navigation controls and similar visual elements that together create a look and feel matching the cultural expectations of the users for that particular domain.


Shen et al. (2006) suggested Culture-Centered Design (CCD), in which the design process is concentrated around the target user and his/her specific cultural conditions. The design process needs to be characterized by iterative analyses, which check the design choices in each phase of the design process for cultural appropriateness, relevance, semiotics, functionality and usability. They also introduced the idea of a 'cultural filter', derived from the book 'Psychoanalysis and Zen Buddhism' by Erich Fromm (German philosopher, 1900–1980).

The main gaps found in these previous studies are:

• Most of the studies could not conclude whether the various dimensions of culture applied in their research have an influence on the overall usability of a website or an interface.

• The results of the existing research on culture and web design did not recommend how culture can be utilized to develop usable websites.

The next section discusses cultural issues that influence Web usability and how understanding the culture of a given community can be utilized to develop more usable websites.

4 Cultural Issues in Web Design and Usability

Several frameworks (Barber & Badre, 2001; Sapienza, 2008; Tanveer et al., 2009; Smith et al., 2003), to mention a few, exist to show that there is a linkage between culture and web design/usability.

Over the last few years, more and more localized versions of websites have been developed in order to address target national or cultural user groups. Culture is a huge consideration when designing websites. Not everybody reads or understands information the same way, and culture especially plays a very big role in how we view websites. Even the most basic understanding of this principle is needed before designing sites that may be viewed by people from different cultures. When designing a website the culture of the target audience is a major factor in the design process.

4.1 Influence of Objective Culture on Web Design and Usability

Objective culture is the visible, easy to examine and tangible, aspect of culture represented in terms of text orientation, metaphor, date and number formats, page layout, color and language (Hoft, 1996).

The impact of objective cultural design elements such as language, color, metaphor, and page layout is discussed next, as it is not possible to discuss all aspects of objective cultural elements in the present study.

4.1.1 Color

An objective cultural factor that should be considered when designing a website is the use of color. Color is connected to people's feelings and has different meanings in different cultures. Colors also have important meanings in web design: color can be used for the background, frames, images, hyperlinks, etc. Website designers need to take into consideration the color preferences and the meanings of various colors for the targeted audience. Barber and Badre (2001) gave examples of color-culture associations in different countries. For example, the color red means different things to different people: for the Chinese it means happiness; for the Japanese, anger/danger; for Egyptians, death; and for Americans, danger/stop. The use of color can also be associated with religion. For example, the Judeo-Christian tradition is associated with red, blue, white, and gold; Buddhism with saffron yellow; and Islam with green. Therefore, when designing a large-scale website, it is very helpful to conduct a survey and an analysis of the color preferences of the target audience and the meanings of colors for the target market before designing the website.

4.1.2 Metaphor

One of the most important aspects in designing a culturally relevant interface is the accurate and deliberate use of metaphor. The metaphor is a powerful tool for translating the technical happenings that take place beyond the interface into a concept that makes sense to the average user, appearing on the interface itself. The majority of software is developed in, or contracted by, the USA, and its interfaces have therefore been based primarily on American metaphors (Shen et al., 2006). Often a metaphor applied out of context is open to misinterpretation. For example, the 'my computer' icon of MS Windows has proved to lead to much confusion, as it suggests ownership, which often is not the case. In some cultures the idea of something that can be retrieved from the trash bin after it has been deleted seems illogical and degrading (Shen et al., 2006). Successful interface metaphors should be developed or adapted to cultural requirements by, or with reference to, representatives of the culture for which they are intended (Shen et al., 2006).

4.1.3 Language

The most distinctive cultural symbol is language, which indicates the speech used by a particular group of people, including dialect, syntax, grammar and letterform (Tong and Robertson, 2008). Language is the building block from which users gain information from a website (Cyr and Trevor-Smith, 2003). Even though most website users can speak English, they are almost always more comfortable in their native languages. In a study conducted by Marlow et al. (2007) on the multilingual needs of website visitors to Tate Online, the web site for Britain's Tate art galleries, they found that many individuals would appreciate having more content available in their own language, either out of necessity or out of preference. However, the best means of providing this content depends on a variety of factors, including the pragmatic consideration of the resources available for translation. While some countries, especially Asian or developing countries, like to display their English speaking abilities, other countries prefer to maintain their own native language for reasons of national pride. This is especially true in some European countries. Because English is one of the most popular languages all over the world, it is advisable to design a site in English and then incorporate a translator to translate it into the local language of the intended users.

4.1.4 Page Layout

Page layout is the physical arrangement of text elements and graphical elements on a web page; this also varies from one culture to another and can therefore be described as a cultural component. The flow direction of a page, either horizontal or vertical, also varies from one culture to another.

A good layout enhances understanding and hence the usability of a website. For example, France has a centered orientation, suggesting that features on a French site would most likely be centered on the page (Cyr and Trevor-Smith, 2003), while in Islamic countries the page layout will flow from top to bottom. The design of a website must also take into account text flow, which again varies from one culture to another. The direction in which text in some languages is written can be unidirectional, such as English, or bi-directional, such as Arabic. Also, some languages are read from left to right and others right to left; this must be taken into consideration when designing a web page layout.

4.2 Influence of Subjective Culture on Web Design and Usability

Hoft (1996) defined subjective culture as "the psychological feature of a culture, including assumptions, beliefs, values, and patterns of thinking". Its influence on usability is a contentious issue in the field of Human Computer Interaction (HCI), as some members of the discipline regard the lack of accommodation of subjective culture in the design of interfaces as an important cause of decreased usability (Ford, 2005). Most research on the influence of subjective culture on usability has been inconclusive or without adequate results. The subjective culture aspect of this study is based on Hofstede's framework as applied by Marcus and Gould (2001) to web and user interface design. Marcus and Gould (2001) considered each of Hofstede's five cultural dimensions and the aspects of user interface design that can be influenced by that particular dimension, resulting in specific design recommendations for each dimension that can affect usability; due to space limitations, see Marcus and Gould (2001) for details. The influence of each of Hofstede's cultural dimensions on web design and usability is as follows:

4.2.1 Power Distance

Marcus and Gould (2001) found that members of high Power Distance (PD) cultures, such as the Chinese, generally prefer a clear hierarchical navigational structure and exhibit a strong preference for symmetry in web design. Their study also pointed out evidence of high power distance on a Malaysian university web site: the site concentrates on the power structure of the university, with prominent areas devoted to the university's seal and graphics of items such as faculty, buildings, and administration. In comparison, the web site of a university in the Netherlands, a low power distance culture, displayed pictures of students rather than leaders and revealed a stronger use of asymmetrical layout, implying a less-structured power hierarchy.

4.2.2 Individualism/Collectivism

According to Sudhair et al. (2007), in individualist societies such as the US and Australia, "I consciousness" prevails and the individual tends to have fairly weak ties with others; such users place great salience on website personalization. In collectivist societies such as Taiwan and Pakistan, people regard themselves as part of a larger group such as the family or clan and are more favorably disposed towards websites that make references to the appropriate in-groups or use slogans that emphasize a national agenda.

4.2.3 Masculinity/Feminism

Masculine societies such as Japan and Austria tend to be hero worshippers, whereas feminine societies such as Sweden and the Netherlands tend to sympathize with the underdogs (Sudhair et al., 2007). Web documents in a masculine society should therefore contain references to characteristics such as success, winning, strength, and assertiveness, whereas in a feminine society web documents will contain information on charitable causes and family-oriented images.

4.2.4 Uncertainty Avoidance

Low UA societies like Denmark and Sweden condition their members to handle uncertainty and ambiguity with relative ease and little discomfort (Sudhair et al., 2007), while members of high UA cultures (such as New Zealanders) prefer web site navigation that prevents the user from getting lost (Marcus and Gould, 2001). This can also be seen in high uncertainty avoidance societies like Japan and Belgium, which attempt to create as much certainty as possible in people's day to day lives through the imposition of procedures, rules and structure. Web documents in a high UA society will therefore contain precise and detailed information and references to relevant rules and regulations.

4.2.5 Time Orientation

Long Term Orientation is about being thrifty and sparing with resources and persevering towards slow results, whereas short term orientation societies live in the present with little or no concern for tomorrow. Long Term Orientation (LTO) societies such as China and Hong Kong tend to save more and exhibit more patience in reaping the results of their actions, whereas Short Term Orientation (STO) societies, like most West African nations and Norway, want to maximize present rewards and are relatively less prone to saving or anticipating long term rewards (Sudhair et al., 2007). Web documents in LTO cultures will emphasize perseverance, future orientation, conservation of resources and respect for the demands of virtue, and will de-emphasize truth and falsity as a strictly binary, black-and-white relationship (Zahedi et al., 2001), while web documents from STO societies like Nigeria will show clean functional design aimed at achieving goals quickly.

5 Recommendations for designing to meet cultural needs

1. Understand the local culture
Study the culture-specific demands on a website for the target culture. Identify culturally specific metaphors and the visual and representational aspects of the local culture.

2. Language factor
Even though most website users can speak English, they are almost always more comfortable in their native languages. It is advisable to design a site in English and then incorporate a translator to translate it into the local language of the intended users.

3. Basic web design elements (visual)
Simple symbols or icons that are commonly understood in the U.S. may confuse, or even insult, visitors from other regions. Icons and other visual elements are very specific to each country, so when using visual elements on web pages, country-specific understanding is needed. An example is the mailbox with a raised flag conveying "email": many local users may not recognize this little mailbox, while an envelope would convey the same message to them. Symbols can also have "unintended" or "hidden" meanings in other cultures.

4. Contact information
Names, postal addresses, phone numbers, fax numbers, etc. are important pieces of contact information. Website forms need to accommodate longer names, addresses, phone numbers, fax numbers and postal codes to satisfy the local needs of website users.

5. Currency
If a website offers any product or service for purchase, currency issues may arise with local visitors. When targeting a product to a specific audience, it is good practice to give a rough estimate of the price in the local currency.

6. Dates, time, and place
Dates are often critical pieces of information to be communicated online, and the American convention of month-day-year is not universally accepted, as day-month-year is used in many parts of the world. Time can be referenced internationally by the 24-hour system, so that 8:52 p.m. becomes a standardized 20:52. Time references, such as the hours of office operation, should be accompanied by the appropriate time zone or a reference to Greenwich Mean Time. A locale-aware formatting sketch illustrating points 5 and 6 follows this list.
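To make recommendations 5 and 6 concrete, the following illustrative Java sketch (not taken from any of the cited works) renders a price and a date according to the visitor's locale and an explicit time zone, using the standard java.text and java.time APIs. The hard-coded locales and the ZoneId are placeholder assumptions.

import java.text.NumberFormat;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.FormatStyle;
import java.util.Locale;

// Illustrative sketch: format currency and dates per locale instead of
// assuming the designer's own conventions.
public class LocaleAwareFormatting {
    public static void main(String[] args) {
        double price = 49.99;
        // Currency formatted for a German and a US audience.
        System.out.println(NumberFormat.getCurrencyInstance(Locale.GERMANY).format(price));
        System.out.println(NumberFormat.getCurrencyInstance(Locale.US).format(price));

        // Dates rendered in locale-specific order (day-month-year vs month-day-year),
        // anchored to an explicit time zone so office hours are unambiguous.
        ZonedDateTime now = ZonedDateTime.now(ZoneId.of("Europe/London"));
        DateTimeFormatter uk = DateTimeFormatter.ofLocalizedDateTime(FormatStyle.MEDIUM).withLocale(Locale.UK);
        DateTimeFormatter us = DateTimeFormatter.ofLocalizedDateTime(FormatStyle.MEDIUM).withLocale(Locale.US);
        System.out.println(now.format(uk));
        System.out.println(now.format(us));
    }
}

In practice the Locale would normally be resolved from the visitor's browser settings or an explicit site preference rather than hard-coded as in this sketch.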

6 CONCLUSION

Cultural characteristics of website users are a key factor in determining user acceptance of a website, yet current design practice takes little account of cultural issues during the design process. It is evident from the views presented in this paper that culture has a significant impact on how the user perceives a website.

Incorporating cultural factors in the web design process is critical to achieving high-quality interaction between users and websites. That is why a better approach to designing websites should involve taking into consideration the cultural and usability needs of the users.

REFERENCES

1. Mantovani, G (2001) “The psychological construction of the Internet: From information foraging to social gathering to cultural mediation,” CyberPsychology & Behavior vol. 4 pp.47 – 56.

2. Dianne Cyr, Joe Ilsever, Carole Bonanni, and John Bowes (2004?): Website Design and Culture: An Empirical Investigation. http://www.diannecyr.com/cyr2005_webdesign_culture.pdf

3. Barber, W., & Badre, A.N. 2001. Culturability: The merging of culture and usability. 4th Conference on Human Factors and the Web. Basking Ridge, New Jersey, USA Conference Proceedings http://research.microsoft.com/en-us/um/people/marycz/hfweb98/barber/

4. Gellner, Ernest (1997). Nationalism. New York: New York University Press.

5. Hall, E.T. (1976). Beyond Culture. Garden City, NY: Doubleday.

6. Hofstede, G. (1980). Culture's Consequences: International Differences in Work-Related Values. Beverly Hills, CA: Sage Publications.

7. Sapienza, F (2008) Culture and Context: A Summary of Geert Hofstede’s and Edward Hall’s Theories of Cross-Cultural Communication for Web Usability. (Usability Bulletin, Issue No. 19). http://www.usabilityprofessionals.ru .

8. Evers, V. and Day, D. (1997): The role of culture in interface acceptance. In S.Howard, J. Hammond and G. Lindegaard (Ed), Human Computer Interaction INTERACT'97. Chapman and Hall, London.

9. Hall, E. (1959): The Silent Language. Doubleday, New York.

10. Tanveer Ahmed, Haralambos Mouratidis, David Preston (2009): Website Design Guidelines: High Power Distance and High-Context Culture. International Journal of Cyber Society and Education. Pages 47-60, Vol. 2, No. 1, June 2009

11. Aaron Marcus and Emilie W. Gould (2001): Cultural Dimensions and Global Web Design: What? So What? Now What? Proceedings of the 6th Conference on Human Factors and the Web, Austin, Texas.


12. Andy Smith, Lynne Dunckley, Tim French, Shailey Minocha, Yu Chang (2003): A process model for developing usable cross-cultural websites. Interacting with Computers, pp 63-91. Elsevier.

13. Del Galdo, E. (1990). Internationalisation and translation: some guidelines for the design of human - computer interfaces. In J. Nielsen (Ed), Designing User Interfaces for International Use, 1-10. New York: Elsevier.

14. D. Fink and R. Laupase, ‘Perceptions of Web site design characteristics: a Malaysian/Australian comparison’, Internet Research: Electronic Networking Applications and Policy, 10, 2000, pp. 44–55.

15. Vanessa Evers (2001): Cultural Aspects of User Interface Understanding, PhD Thesis.

16. C. Kluckhohn, Culture and Behaviour, University of Arizona Press: Tucson, 1962.

17. Keiichi Sato & Kuosiang Chen, "Special Issue Editorial: Cultural Aspects of Interaction Design", Vol. 2, No. 2, 2008.

18. Steve Wallace and Hsiao-Cheng Yu: The effect of culture on usability: Comparing the Perceptions and Performance of Taiwanese and North American MP3 Player Users:Journal of Usability Studies, Volume 4, Issue 3, May 2009, pp. 136-146.

19. Ford and Kotzé (2003): Designing Usable Interfaces with Cultural Dimensions. Retrieved on June,27 2010 from http://www.hufee.meraka.org.za

20. Nantel, J., Glaser, E., 2008. The impact of language and culture on perceived website usability, Journal of Engineering and Technology Management 25(1/2), 112-122.

21. Faiola, A., and Matei, S. A. (2005). Cultural cognitive style and web design: Beyond a behavioral inquiry into computer-mediated communication. Journal of Computer-Mediated Communication, 11(1), article 18. http://jcmc.indiana.edu/vol11/issue1/faiola.html

22. Vöhringer-Kuhnt, T. (2001). The influence of culture on usability. Master's thesis (paper draft). URL: http://userpage.fu-berlin.de/~kuhnt/thesis/results.pdf [retrieved July 24, 2009].

23. Zahedi, F. M., Van Pelt, W. V., & Song, J. (2001). A Conceptual Framework for International Web Design. IEEE Transactions On Professional Communication, 44(2), 83-103. Retrieved June 23, 2010, from http://tc.primaryspaces.com/zahedi.pdf

24. Sudhair H. Kale et al. (2007): Cultural Adaptation on the Web. Working Paper, Global Development Working Center, Bond University, Australia.

25. Nielsen, J. (1993). Usability engineering. New York: Academic Press.

26. Ali H. Al-Badi and Pam J. Mayhew: A Framework for Designing Usable Localised Business Websites. Journal of Communications of the IBIMA from http://www.ibimapublishing.com/journals/CIBIMA/cibima.html

27. Ford G (2005): Researching the effects of culture on usability Msc Thesis.

28. Siu-Tsen S, Woolley M, Prior S (2006): Towards culture-centred design. Interacting with Computers xx p.1–33 Published by Elsevier B.V.

29. Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S. & Carey, T. (1994). Human-Computer Interaction. Wokingham, England: Addison-Wesley.

30. Hoft, N 1996: Developing a cultural model. In E del Galdo and J Nielsen (Eds.), International User Interfaces. New York: John Wiley and Sons.

31. Tong. M. C., & Robertson, K. (2008). Political and cultural representation in Malaysian websites. International Journal of Design, 2(2), 67-79.

32. Marlow, Jennifer; Clough, Paul; Dance, Katie (2007): Multilingual needs of cultural heritage website visitors: A case study of Tate Online, International Cultural Heritage Informatics Meeting - ICHIM07, Toronto, Ontario, Canada.


Software Cost Regression Testing Based on Hidden Markov Model

1Mrs. P. Thenmozhi, 2Dr. P. Balasubramanie

1Assistant Professor, Kongu Arts and Science College, Erode – 638 107, Tamil Nadu, India,

2Professor & Head, Department of Computer Science and Engineering, Kongu Engineering College,Perundurai – 638052, Tamil Nadu,

Abstract— Maintenance of a software system accounts for much of the total cost associated with developing software. The highly error-prone nature of modifying software is the main reason for this cost. Correcting a fault by changing the software, or adding new functionality, can cause existing functionality to regress, introducing new faults. To avoid such defects, one can re-test the software after modifications, a task commonly known as regression testing. Re-execution of test cases developed for previous versions is typically called a regression test. However, it is often costly and sometimes even infeasible due to time and resource constraints. Re-running test cases that do not exercise changed or change-impacted parts of the program carries extra cost and gives no benefit. This paper presents a novel framework for optimizing regression testing activities, based on a probabilistic view of regression testing. The proposed framework is built around predicting the probability that each test case finds faults in the regression testing phase, and optimizing the test suites accordingly. To predict such probabilities, we model regression testing using a Hidden Markov Model Network (HMMN), a powerful probabilistic tool for modeling uncertainty in systems. We build this model using information measured directly from the software system. The results show that the proposed framework can outperform other techniques in some cases and performs comparably in the others. This paper shows that the proposed framework can help testers improve the cost effectiveness of their regression testing tasks.

Keywords: Software testing, Testing tools, Regression testing, Software maintenance

1. Introduction

It is the nature of software systems to evolve with time, especially as a result of maintenance tasks. Software maintenance is defined as "the modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a modified environment".

The presence of a costly and long maintenance phase in most software projects, especially those involving large systems, has persuaded engineers that software evolution is an inherent attribute of software development. Moreover, maintenance activities are reported to account for high proportions of total software costs, with estimates varying from 50% in the 80s to 90% in recent years. Reducing such costs has motivated many advancements in software engineering in recent decades. The objective of maintenance is "to modify the existing software product while preserving its integrity". The latter part of the stated objective, preserving integrity, refers to an important issue raised as a result of software evolution: one needs to ensure that the modifications made to the product for maintenance have not damaged the integrity of the product.

It is known from theory and practice that changing a system in order to fix bugs or make improvements can affect its functionality in unintended ways. These potential side effects can cause the software system to regress from its previously tested behavior, introducing defects called regression bugs. Although rigorous development practices can help isolate modifications, the inherent complexity of modern software systems prevents us from accurately predicting the effects of a change. Practitioners recognize this phenomenon and hence are reluctant to change their programs for fear of introducing new defects. Researchers have tried to find ways of analyzing the impact of a change on different parts of a system and predicting its effects. In the absence of formal representations of software systems, however, such attempts, although helpful, will not provide the required confidence levels.

Unless we are able to find regression bugs once they occur, software maintenance remains a risky task. Despite the introduction and adaptation of other verification methods (such as model checking and peer reviews), testing remains the main tool for finding defects in software systems. Naturally, retesting the product after modifying it is the most common way of finding regression bugs. Such a task is very costly and requires a great deal of organizational effort. This has motivated a great deal of research to understand and improve this crucial aspect of software development and quality assurance.

This paper is organized as follows. The literature survey is given in section 2. Section 3 discusses probabilistic modeling and reasoning in detail. Conclusions are drawn in section 5.

2. Literature survey

In this section, research areas related to the topic of this paper are elaborated. The subject to start with is the problem in question, software regression testing. There exists an extensive body of research addressing this problem using many different approaches. This section takes a critical look at this line of research, trying to find strong points and ideas as well as the gaps. Through this examination, many terms and concepts related to the software testing area are introduced as well.

2.1 Software Regression Testing

Research in regression testing spans a wide range of topics. Earlier work in this area investigated different environments that can assist regression testing. Such environments particularly emphasize automation of test case execution in the regression testing phase. For example, techniques such as capture-playback have been proposed to help achieve such automation. Furthermore, test suite management and maintenance have been addressed by much research. Measurement of the regression testing process has also been researched extensively, and many models and metrics have been proposed for it. Most of the research work in this area, however, has focused on test suite optimization.

Test suite optimization for regression testing consists of altering the existing test suite from a previous version to meet the needs of regression testing. Such an optimization intends to satisfy an objective function, which is typically concerned with reducing the cost of retesting and increasing the chance of finding bugs (reliability). There exists a variety of techniques addressing this problem. Most of these techniques can be categorized into two families: test case selection and test case prioritization. Regression test selection techniques reduce testing costs by including only a subset of test cases in the new test suite. These techniques are typically not concerned with the order in which test cases appear in the test suite. Prioritization techniques, on the other hand, include all test cases in the new test suite but change their order in order to optimize a score function, typically the rate of fault detection. These two approaches can be used together; one can start by selecting a subset of test cases and then prioritize those selected test cases for faster fault detection. The rest of this section first looks into test case selection approaches from the literature and then touches upon existing techniques for test case prioritization.

2.1.1 Test Case Selection

Test case selection, as the main mechanism of selective regression testing, has been widely studied using a variety of approaches. In a survey of techniques proposed up to 1996, Rothermel and Harrold [12] propose an approach for the comparison of selection techniques and discuss twelve different families of techniques from the literature accordingly. They evaluate each technique based on four criteria: inclusiveness (the extent to which it selects modification-revealing tests), precision (the extent to which it omits tests that are not modification-revealing), efficiency (its time and space requirements), and generality (its ability to function on different programs and in different situations). These four criteria, in principle, capture what we expect from a good test case selection approach. They inherently impose a trade-off situation in which proposed techniques usually satisfy one of the criteria at the expense of the others.

Main Approaches

An early trend in test selection research evolved around minimizing the test cases selected for regression. This approach, often called test case minimization, is based on a system of linear equations to find test suites that cover modified segments of code. Linear equations are used to formulate the relationship between test cases and program segments (portions of code through which test execution can be tracked, e.g., basic blocks or routines). This system of equations is formed based on matrices of test-segment coverage, segment-segment reachability and (optionally) definition-use information about the segments. A 0-1 integer programming algorithm is used to solve the equations (an NP-hard problem) and find a minimum set of test cases that satisfies the coverage conditions. This approach is called minimization in the sense that it selects a minimum set of test cases to achieve the desired coverage criteria. In doing so, test cases that do cover modified parts of the code can be omitted because other selected test cases cover the same segments of the code.
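The techniques above formulate minimization as a 0-1 integer program; as a rough illustration only, the following Java sketch uses a greedy approximation to pick a small set of test cases covering all modified segments. The coverage map and segment identifiers are hypothetical inputs, not artifacts of the cited approaches.

import java.util.*;

// Hedged sketch: greedy approximation of test case minimization.
// Repeatedly pick the test that covers the most still-uncovered modified segments.
public class TestSuiteMinimization {
    public static List<String> minimise(Map<String, Set<Integer>> coverage, Set<Integer> modified) {
        Set<Integer> uncovered = new HashSet<>(modified);
        List<String> selected = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            String best = null;
            int bestGain = 0;
            for (Map.Entry<String, Set<Integer>> e : coverage.entrySet()) {
                Set<Integer> gain = new HashSet<>(e.getValue());
                gain.retainAll(uncovered);
                if (gain.size() > bestGain) { best = e.getKey(); bestGain = gain.size(); }
            }
            if (best == null) break;                 // remaining segments are covered by no test
            selected.add(best);
            uncovered.removeAll(coverage.get(best));
        }
        return selected;
    }

    public static void main(String[] args) {
        Map<String, Set<Integer>> coverage = new HashMap<>();
        coverage.put("T1", new HashSet<>(Arrays.asList(1, 2)));
        coverage.put("T2", new HashSet<>(Arrays.asList(2, 3)));
        coverage.put("T3", new HashSet<>(Arrays.asList(3)));
        // Prints a small subset of tests covering the modified segments 1 and 3.
        System.out.println(minimise(coverage, new HashSet<>(Arrays.asList(1, 3))));
    }
}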

A different set of approaches has focused on developing safe selection techniques. Safe techniques aim to select a subset of test cases which can guarantee, given certain preconditions, that the left-out test cases are irrelevant to the changes and hence will pass. Informally speaking, these conditions, as described in the literature, are:

• the expected results for the test cases have not changed from the last version to the current version;

• test cases execute deterministically (i.e., different executions result in identical execution paths).

Safe techniques first perform change analysis to find which parts of the code can possibly be affected by the modifications. Then, they select any test case that covers any of the modification-affected areas of the code. Safe techniques are inherently different from minimization techniques in that they select all test cases that have a chance of revealing faults. In comparison, safe techniques usually result in a larger number of selected test cases but also achieve much better accuracy.

Many techniques are neither minimizing nor safe. These techniques typically use a certain coverage requirement on modified or modification-affected parts of the code to decide whether a test case should be selected. For example, the so-called dataflow-coverage-based techniques select test cases that exercise data interactions (such as definition-use pairs) that have been affected by modifications. These selection techniques differ in two aspects: the coverage requirement they target and the mechanism they use to identify modification-affected code. For example, Kung et al. [10] propose a technique which accounts for the constructs of object-oriented languages. In performing change analysis, their approach takes into account object-oriented notions such as inheritance. The relative performance of these selection techniques tends to vary from program to program, a phenomenon that can be understood only through empirical studies.

Cost Effectiveness

Many empirical studies have evaluated the performance of test case selection algorithms. In general, these empirical studies show that there is an efficiency-effectiveness (or, in the terminology above, inclusiveness-precision) tradeoff between different approaches to selection. Some techniques (such as safe ones) reduce the size of the test suite by a small factor but find most (or all) bugs detectable with the existing test cases. Others (such as minimization techniques) reduce the size dramatically but can potentially leave out many test cases that can in fact reveal faults. Other techniques are somewhere in between; they may miss some faults but they reduce the test suite size significantly. The presence of such a tradeoff situation renders the direct comparison of techniques hard.

A meaningful comparison between regression testing techniques requires answering one fundamental question: is the regression effort resulting from the use of a technique justified by the gained benefit? To answer such a question one needs to quantify the notions of costs and benefits associated with each technique. To that end, researchers have proposed models of cost-benefit analysis. These models try to capture the cost incurred as a result of missing faults, running test cases, running the technique itself including all the necessary analysis, etc. The most recent of all these models is that of Do et al. [5]. Their approach computes costs directly in dollars and hence is heavily dependent on good estimations of real costs from the field. An important feature of their model is that it can compare not only test case selection but also prioritization techniques. Most interestingly, it can compare selection techniques against prioritization techniques.
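The sketch below is a deliberately simplified cost comparison in the spirit of such models, not Do et al.'s actual formulation; every cost term and numeric value here is a hypothetical placeholder.

def technique_cost(analysis_cost, execution_cost_per_test, selected_tests,
                   missed_faults, cost_per_missed_fault):
    """Total cost of applying one regression-testing technique (simplified)."""
    return (analysis_cost
            + execution_cost_per_test * selected_tests
            + cost_per_missed_fault * missed_faults)

# Hypothetical comparison: a safe technique versus a minimization technique.
safe = technique_cost(analysis_cost=50, execution_cost_per_test=2,
                      selected_tests=400, missed_faults=0,
                      cost_per_missed_fault=500)
minimized = technique_cost(analysis_cost=20, execution_cost_per_test=2,
                           selected_tests=60, missed_faults=2,
                           cost_per_missed_fault=500)
print(safe, minimized)  # the smaller total indicates the preferable technique here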


The existence of the mentioned trade-off has also encouraged researchers to seek multi-objective solutions to the test selection problem. Yoo and Harman [15] have proposed Pareto-efficient multi-objective test case selection. They use genetic algorithms to find the set of Pareto-optimal solutions to two different formulations of the problem: a two-dimensional problem of minimizing execution time while maximizing code coverage, and a three-dimensional problem of minimizing time while maximizing both code coverage and historical fault coverage. The authors compare their solutions to those of greedy algorithms and observe that, surprisingly, greedy algorithms can outperform genetic algorithms in this domain.
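In a Pareto formulation, one solution dominates another if it is at least as good on every objective and strictly better on at least one. The helper below is a minimal sketch of that dominance test for the two-objective case (execution time to minimize, coverage to maximize); the candidate tuples are hypothetical.

def dominates(a, b):
    """True if candidate a Pareto-dominates candidate b.

    Each candidate is a (time, coverage) pair: time is minimized, coverage maximized.
    """
    no_worse = a[0] <= b[0] and a[1] >= b[1]
    strictly_better = a[0] < b[0] or a[1] > b[1]
    return no_worse and strictly_better

def pareto_front(candidates):
    """Keep only the candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Hypothetical (time, coverage) values for four candidate test-case subsets.
print(pareto_front([(10, 0.9), (12, 0.95), (15, 0.9), (11, 0.85)]))  # [(10, 0.9), (12, 0.95)]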

Coverage information, a necessary input to most existing techniques, can be measured only if the underlying code is available and its instrumentation is cost-effective. To address more complex systems, where those conditions do not hold, some recent techniques have shifted their focus to artifacts other than code, such as software specifications and component models. These techniques typically substitute code-based coverage information with information gathered from formal (or semi-formal) representations of the software. Orso et al. [11], for example, use component metadata to analyze modifications across large component-based systems. The trend in current test case selection research seems to be that of using new sources of information or formalizations of a software system to understand the impact of modifications.

2.1.2 Test Case Prioritization

The Regression Test Prioritization (RTP) problem seeks to re-order test cases such that an objective function is optimized. Different objective functions yield different instances of the problem, a handful of which have been investigated by researchers. Besides the targeted objective function, existing prioritization techniques typically differ in the type of information they exploit. The algorithm employed to optimize the targeted objective function is another source of difference between the techniques.

Conventional Coverage-based Techniques

Test case prioritization was introduced in [16] by Wong et al. as a flexible method of selective regression testing. In their view, RTP differs from test case selection and minimization in that it provides a means of controlling the number of test cases to run. They propose a coverage-based prioritization technique and specify cost per additional coverage as the objective function of prioritization. Given the coverage information recorded from a previous execution of the test cases, this coverage-based technique orders test cases in terms of the coverage they achieve according to a specific coverage criterion (such as the number of covered statements, branches, or blocks). Because the purpose of RTP in their work is selective regression testing, they compare its performance against minimization and selection techniques. The coverage-based approach to prioritization is built upon by Rothermel et al. in [13]. They refer to early fault detection as the objective of test case prioritization. They argue that RTP can speed up fault detection, an advantage besides the flexibility it provides for selective regression testing. Early detection makes faults less costly and hence is beneficial to the testing process. They introduce the Average Percentage of Faults Detected (APFD) metric to measure how fast a particular test suite finds bugs.
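For a suite of n test cases whose execution exposes m faults, APFD is commonly computed as 1 − (TF1 + … + TFm)/(n·m) + 1/(2n), where TFi is the position of the first test case that reveals fault i. The sketch below computes this value for a hypothetical ordering; the fault matrix used is made up purely for illustration and assumes every fault is revealed by at least one test in the order.

def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a given test-case order.

    order: list of test names in prioritized order.
    fault_matrix: dict mapping a test name to the set of faults it detects.
    """
    faults = set().union(*fault_matrix.values())
    n, m = len(order), len(faults)
    # Position (1-indexed) of the first test that reveals each fault.
    first_reveal = {
        f: next(i for i, t in enumerate(order, start=1) if f in fault_matrix[t])
        for f in faults
    }
    return 1 - sum(first_reveal.values()) / (n * m) + 1 / (2 * n)

# Hypothetical data: three tests, two faults.
faults = {"t1": set(), "t2": {"f1"}, "t3": {"f1", "f2"}}
print(apfd(["t3", "t2", "t1"], faults))  # 0.833... (faults found early)
print(apfd(["t1", "t2", "t3"], faults))  # 0.333... (faults found late)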


They also introduce many variations of coverage-based techniques, using different coverage criteria such as branch coverage, statement coverage, and fault-exposing potential. These coverage-based techniques differ not only in the coverage information they use, but also in their optimization algorithm. When ordering test cases according to their coverage, a feedback mechanism can be used. Here, feedback means that each test case is placed in the prioritized order taking into account the effect of the test cases already added to the order. A coverage-based technique with feedback prioritizes test cases in terms of the number of additional (not-yet-covered) entities they cover, as opposed to the total number of entities. This is done using a greedy algorithm that iteratively selects the test case covering the most not-yet-covered entities until all entities are covered, then repeats this process until all test cases have been

prioritized. For example, assume we have a system with six elements e1, …, e6, and the coverage relations between test cases and elements are as follows: t1 → {e2, e5}, t2 → {e1, e3}, t3 → {e4, e5, e6}. According to a coverage-based technique, the first chosen test case is t3 because it covers three elements, while the others cover two elements each. After selecting t3, two test cases are left, both of which cover two elements. In the absence of feedback, we would choose randomly between the remaining two. However, we know that e5 is already covered by t3; therefore t1 contributes merely one additional covered element, whereas t2 contributes two. After adding t3, we can update the coverage data so that already-covered elements do not affect subsequent selections. This allows choosing t2 before t1 based on its additional coverage. This notion of additional coverage is what the feedback mechanism provides; techniques employing feedback are often called additional.
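The following sketch is one minimal way to implement the additional (feedback-based) greedy ordering just described; it is an illustration rather than any particular published tool, and it reproduces the t1–t3 example above.

def additional_prioritize(coverage):
    """Greedy 'additional' prioritization: repeatedly pick the test covering
    the most not-yet-covered elements, resetting once everything is covered.

    coverage: dict mapping a test name to the set of elements it covers.
    """
    remaining = dict(coverage)
    uncovered = set().union(*coverage.values())
    order = []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        order.append(best)
        uncovered -= remaining.pop(best)
        if not uncovered:                      # feedback reset
            uncovered = set().union(*coverage.values())
    return order

coverage = {"t1": {"e2", "e5"}, "t2": {"e1", "e3"}, "t3": {"e4", "e5", "e6"}}
print(additional_prioritize(coverage))  # ['t3', 't2', 't1']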

Many empirical studies have been conducted to evaluate the performance of the coverage-based approach [13], most of which use the APFD measure for comparison. These studies show that coverage-based techniques can outperform control techniques (including random and original ordering) in terms of APFD but still leave significant room for improvement compared to optimal solutions. They also indicate that, in many cases, feedback-employing techniques tend to outperform their non-feedback counterparts, an observation which could not be generalized to all cases. Indeed, an important finding of all these studies is that the relative performance of different coverage-based techniques depends on the program under test and the characteristics of its test suite. Inspired by this observation, Elbaum et al. [6] have attempted to develop a decision support system (using decision trees) to predict which technique works better for which product/process characteristics. Many research works have enhanced the idea of coverage-based techniques by utilizing new sources of information. Srivastava et al. [1] propose the Echelon framework for change-based prioritization. Echelon first computes the basic blocks modified from the previous version (using the binary code) and then prioritizes test cases based on the number of additional modified basic blocks they cover. A similar coverage criterion used in the context of the aviation industry, called Modified Condition/Decision Coverage (MCDC), has also been utilized. Elbaum et al. [6] use metrics of fault-proneness, called fault indices, to guide their coverage-based approach towards the parts of the code more prone to containing faults. Recently, Jeffrey and Gupta [4] proposed incorporating into prioritization a concept extensively used in test selection called relevant slices, i.e., modified sections of the code which also impact the outcome of a test case. Their approach prioritizes test cases according to the number of relevant slices they cover. Most recently, Zhang et al. [17] propose a technique which can incorporate


varying test coverage requirements and prioritize accordingly. Their work also takes into account the different costs associated with test cases.

Recent Approaches

Walcott et al. [15] formulate a time-aware version of the prioritization problem in which a limited time is available for regression testing and the execution times of the test cases are known. Their optimization problem is to find a sequence of test cases that can be executed within the time limit and that maximizes the speed of code coverage. They use genetic algorithms to find solutions to this optimization problem. Their objective function is based on summations of the coverage achieved, weighted by execution times. Their approach can be thought of as a multi-objective optimization problem in which the most coverage in the minimum time is sought.

All the code coverage-based techniques

assume the availability of source/byte code. They also

assume that the available code can be instrumented to

gather coverage information. These conditions do not

always hold. The code could be unavailable or

excessively expensive to instrument. Hence,

researchers have explored using other sources of information for test case prioritization.

Srikanth et al. [7] have proposed the PORT framework, which uses four different requirement-related factors for prioritization: customer-assigned priority, volatility, implementation complexity, and fault-proneness. Although the use of these factors is conceptually justifiable and based on solid assumptions, their subjective nature (especially the first and third factors) makes the outcome dependent on the perceptions of customers and developers. While it is hard to evaluate or rely on such approaches, it should be understood that it is the subjective nature of requirements engineering that imposes such properties. Also, their framework is not concerned with the specifics of regression testing but with prioritization in general.

Bryce et al. have proposed a prioritization technique for Event-Driven Software (EDS) systems. In their approach, the criterion of t-way interaction coverage is used to order test cases. The concept of interactions is defined in terms of events, and the approach is tested on GUI-based systems against traditional coverage-based techniques. Based on a similar approach, Sampath et al. [1] target prioritization of test cases developed for web applications. Their technique prioritizes test cases based on different criteria such as test lengths, frequency of appearance of request sequences, and systematic coverage of parameter values and their interactions. Taking a different approach from coverage-

based techniques, Kim and Porter [9] propose using history information to assign to each test case a probability of finding bugs and to prioritize accordingly. Their approach, inspired by statistical quality control techniques, can be adjusted to account for different history-based criteria such as history of execution, history of fault detection, and history of covered entities. These criteria, respectively, give precedence to test cases that have not been executed recently, have recently found bugs, and have not been covered recently. From a process point of view, the history-based approach makes the most sense when regression testing is performed frequently, as opposed to as a one-time activity. Kim and Porter evaluate their approach in such a process model (i.e., considering a sequence of regression testing sessions) and maintain that, compared to selection techniques and in the presence of time/resource constraints, it finds bugs faster.
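As a rough sketch of the history-based idea (one simple exponential-smoothing instantiation, not Kim and Porter's exact formulation), a test's standing can be boosted when it recently found faults and decayed when it was recently executed without finding any; the names and the smoothing factor below are hypothetical.

def update_priority(prev_priority, executed, found_fault, alpha=0.5):
    """Exponentially smoothed, history-based priority for one test case.

    prev_priority: priority carried over from earlier regression sessions.
    executed: whether the test ran in the latest session.
    found_fault: whether it revealed a fault in that session.
    """
    if not executed:
        return prev_priority            # unexecuted tests keep their standing
    observation = 1.0 if found_fault else 0.0
    return alpha * observation + (1 - alpha) * prev_priority

# Hypothetical three-session history for one test case.
p = 0.5
for executed, found in [(True, False), (True, True), (False, False)]:
    p = update_priority(p, executed, found)
print(p)  # tests that recently found faults float toward the front of the order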


Most recently, Qu et al. [2] use the history of test execution for black-box testing and build a relation matrix between test cases. This matrix is used to move test cases up or down in the final order. Their approach also includes algorithms for building and updating such a matrix based on the outcomes of test cases and the types of revealed faults. In addition to research addressing the stated prioritization problem directly, there is research closely related to this area but from different perspectives. Saff and Ernst use behavior modeling to infer developers' beliefs and propose a test reordering scheme based on their models. They propose running test cases continuously in the background while the software is being modified. They claim their approach reduces the wasted development time by approximately 90%. Leon and Podgurski [3] compare coverage-based regression testing techniques with another family called distribution-based techniques. Distribution-based approaches look at the execution profiles of test cases and use clustering techniques to locate test cases that can reveal faults better. The experiments indicate that the distribution-based approach can be as efficient as, or more efficient than, the coverage-based one. Leon and Podgurski then suggest combining the two approaches and report improvements achieved using that strategy.

3. Probabilistic Modeling and Reasoning

Probability theory provides a powerful way of modeling systems. It is especially useful in situations where the effects of events in a system are not fully predictable and a level of uncertainty is involved. The behavior of large, complex software systems is sometimes hard to model precisely, and hence probabilistic approaches to software measurement have gained attention.

At the center of modeling a system with probability theory is identifying the events that can happen in the system and modeling them as random variables. Moreover, the distributions of these random variables also need to be estimated. The events in real systems, and hence the corresponding random variables, can be dependent on each other. Bayes' theorem provides a basis for modeling the dependency between variables through the concept of conditional probability. The probability distribution of a random variable can be conditioned on others. This makes system models more expressive but also more complex. Different modeling techniques have been developed to facilitate such a complex task.

Probabilistic graphical models are one family of such modeling techniques. A probabilistic graphical model aims to make the modeling of system events more comprehensible by representing independencies among random variables. A probabilistic graphical model is a graph in which each node is a random variable, and the missing edges between nodes represent conditional independencies. Different families of graphical models have different graph structures. One well-known family of graphical networks, used in this research work, is Hidden Markov Model Networks.

3.1 Hidden Markov Model Networks

Hidden Markov Model Networks (HMMN) are a special type of probabilistic graphical model. In an HMMN, as in all graphical models, nodes represent random variables and arcs represent probabilistic dependencies among those variables. Edges missing from the graph hence indicate that two variables are conditionally independent. Intuitively, two events (e.g., variables) are conditionally independent if knowing the values of some other variables makes the outcomes of those events independent. Conditional independence is a fundamental notion here because the idea behind graphical models is to capture these independencies. What differentiates an HMMN from other types of


graphical models (such as Markov nets) is that it is a Directed Acyclic Graph (DAG). That is, each edge has a direction and there must be no cycles in the graph. In an HMMN, in addition to the graph structure, the Conditional Probability Distribution (CPD) of each variable given its parent nodes must be specified. These probability distributions are often called the "parameters" of the model. The most common way of representing CPDs is a table, called a Conditional Probability Table (CPT), for each variable (node). Each possible outcome of the variable forms a row, where each cell gives the conditional probability of observing that outcome given a combination of the outcomes of the node's parents. That is, these tables contain the probabilities of the outcomes of a variable given the values of its parents. The inference problem can get very hard in complex networks. There are two types of inference: forward (causal) inference, in which the observed variables are parents of the query nodes, and backward (diagnostic) inference, from symptoms to causes. Inference algorithms typically perform both types of inference to propagate the probabilities from the observed variables to the query variables. Researchers have studied the inference problem in depth. It is known that in the general case the problem is NP-hard. Therefore, researchers have sought algorithms that perform better for special cases. For example, if the network is a polytree (a graph with at most one undirected path between any two vertices), inference algorithms exist that run in time linear in the size of the network. Approximate algorithms have also been proposed, which use iterative sampling to estimate the probabilities. The sampling algorithms sometimes run faster but do not give exact answers; their accuracy depends on the number of samples and iterations, a factor which in turn increases the running time.
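As a minimal illustration of forward (causal) inference with a CPT, consider a toy two-node network in which a binary parent Modified influences a binary child TestFails; the structure, variable names, and probabilities below are invented purely for illustration and are not part of the framework described in this paper.

# Toy forward inference in a two-node network: P(child) = sum_p P(child | p) * P(p).
# All variable names and probabilities are hypothetical.
p_modified = {True: 0.3, False: 0.7}            # prior of the parent node

# CPT of the child: P(TestFails = True | Modified = p)
p_fail_given_modified = {True: 0.6, False: 0.05}

p_fail = sum(p_fail_given_modified[p] * p_modified[p] for p in (True, False))
print(p_fail)  # 0.6*0.3 + 0.05*0.7 = 0.215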

Designing an HMMN model is not a trivial task. There are two facets to modeling an HMMN: designing the structure and computing the parameters. Regarding the first facet, the first step is to identify the variables involved in the system. Then, the included and excluded edges should be determined. Here, the notions of conditional independence and causal relation can be of great help. It is important to make sure that conditionally independent variables are not connected to each other. One way to achieve that is to design based on causal relations: an edge from one node to another is added if and only if the former is a cause of the latter. For computing the parameters, expert knowledge, probabilistic estimation, and statistical learning can be used. The learning approach has gained much attention in the literature due to its automatic nature. Here, learning means using an observed history of variable values to automatically build the model (either the parameters or the structure). Numerous algorithms have been proposed in the literature to learn an HMMN from history data.

One situation faced frequently when designing an HMMN is that one knows the conditional distribution of a variable given each of its parents separately, but does not have its distribution conditioned on all parents jointly. In these situations, the Noisy-OR assumption can be helpful. The Noisy-OR assumption gives the interaction between the parents and the child a causal interpretation and assumes that all causes (parents) are independent of each other in terms of their influence on the child.
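Under Noisy-OR, if cause i alone triggers the child with probability p_i, the child's probability given a set of active causes is 1 − Π(1 − p_i) over the active causes. The snippet below is a small sketch of that computation with invented per-cause probabilities.

def noisy_or(cause_probs, active):
    """Noisy-OR combination: P(child = True | active causes).

    cause_probs: dict mapping a cause name to the probability that this cause
                 alone produces the effect.
    active: iterable of cause names that are currently True.
    """
    p_none = 1.0
    for cause in active:
        p_none *= (1.0 - cause_probs[cause])   # probability that no active cause fires
    return 1.0 - p_none

# Hypothetical per-cause probabilities for two parents of a child variable.
probs = {"c1": 0.8, "c2": 0.5}
print(noisy_or(probs, ["c1"]))        # 0.8
print(noisy_or(probs, ["c1", "c2"]))  # 1 - 0.2*0.5 = 0.9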

4. CONCLUSION

This paper presented a novel framework for regression testing of software using Hidden Markov Model Networks (HMMN). The problem of software regression test


optimization is targeted using a dynamic Bayesian network. The framework models regression fault detection as a set of random variables that interact through conditional dependencies. Software measurement techniques are used to quantify those interactions, and Hidden Markov Model Networks are used to perform probabilistic inference on the distributions of those random variables. The inference gives the probability of each test case finding faults; this information can then be used to optimize the test suite for regression.

References

1. Amitabh Srivastava and Jay Thiagarajan, "Effectively Prioritizing Tests in Development Environment", Proceedings of the International Symposium on Software Testing and Analysis, pp. 97–106, 2002.
2. Bo Qu, Changhai Nie, Baowen Xu and Xiaofang Zhang, "Test Case Prioritization for Black Box Testing", 31st Annual International Computer Software and Applications Conference (COMPSAC 2007), 2007.
3. David Leon and Andy Podgurski, "A Comparison of Coverage-Based and Distribution-Based Techniques for Filtering and Prioritizing Test Cases", Proc. Int'l Symp. Software Reliability Engineering, pp. 442–453, 2003.
4. Dennis Jeffrey and Neelam Gupta, "Test Case Prioritization Using Relevant Slices", Proceedings of the 30th Annual International Computer Software and Applications Conference, Vol. 01, pp. 411–420, 2006.
5. Do, H., Rothermel, G., and Kinneer, A., "Empirical Studies of Test Case Prioritization in a JUnit Testing Environment", Proc. of the 15th ISSRE, pp. 113–124, 2004.
6. Elbaum, S., Malishevsky, A. G., and Rothermel, G., "Prioritizing Test Cases for Regression Testing", Proc. Int'l Symp. Software Testing and Analysis, ACM Press, pp. 102–112, 2000.
7. Hema Srikanth and Laurie Williams, "Requirements-Based Test Case Prioritization", North Carolina State University, ACM SIGSOFT Software Engineering Notes, pp. 1–3, 2005.
8. Jung-Min Kim, Adam Porter and Gregg Rothermel, "An Empirical Study of Regression Test Application Frequency", ICSE 2000, 2000.
9. Jung-Min Kim and Adam Porter, "A History-Based Test Prioritization Technique for Regression Testing in Resource Constrained Environments", Proceedings of the International Conference on Software Engineering (ICSE), ACM Press, pp. 119–129, 2002.
10. Kung, D., Suchak, N., Hsia, P., Toyoshima, Y., and Chen, C., "On Object State Testing", Proceedings of COMPSAC '94, IEEE Computer Society Press, 1994.
11. Orso, A., Harrold, M. J., Rosenblum, D., Rothermel, G., Soffa, M. L., and Do, H., "Using Component Metadata to Support the Regression Testing of Component-Based Software", Proceedings of the International Conference on Software Maintenance (ICSM 2001), Florence, Italy, pp. 716–725, November 2001.
12. Rothermel, G., Untch, R. H., Chu, C., and Harrold, M. J., "Test Case Prioritization: An Empirical Study", Proceedings of ICSM 1999, pp. 179–188, September 1999.
13. Rothermel, G., et al., "On Test Suite Composition and Cost-Effective Regression Testing", ACM Trans. Software Eng. and Methodology, vol. 13, no. 3, pp. 277–331, 2004.
14. Shin Yoo and Mark Harman, "Pareto Efficient Multi-Objective Test Case Selection", Proceedings of the 2007 International Symposium on Software Testing and Analysis, ISBN 978-1-59593-734-6, 2007.
15. Walcott, K. R., Soffa, M. L., Kapfhammer, G. M., and Roos, R. S., "Time-Aware Test Suite Prioritization", Proceedings of the International Symposium on Software Testing and Analysis, pp. 1–12, 2006.
16. Wong, W. E., Horgan, J. R., London, S., and Agrawal, H., "A Study of Effective Regression Testing in Practice", Proc. 8th Int'l Symp. Software Reliability Engineering, pp. 264–274, 1998.
17. Xiaofang Zhang, Changhai Nie, Baowen Xu and Bo Qu, "Test Case Prioritization Based on Varying Testing Requirement Priorities and Test Case Costs", Proceedings of the Seventh International Conference on Quality Software (QSIC '07), 2007.

Brief Bio-data of P. Thenmozhi

P. Thenmozhi completed her M.Phil degree at Mother Teresa Women's University, Kodaikanal, in 2004. She has completed 9 years of service in teaching. Currently she is an Assistant Professor in the Department of Computer Science, Kongu Arts and Science College, Tamil Nadu, India. She has guided 2 M.Phil students and has presented 5 papers at various conferences.

Brief Bio-data of Dr. P. Balasubramanie

Dr. P. Balasubramanie completed his M.Phil degree at Anna University, Chennai, in 1990. He qualified in the national-level eligibility test conducted by the Council of Scientific and Industrial Research (CSIR) and joined Anna University, Chennai, as a Junior Research Fellow (JRF). He completed his Ph.D degree in Theoretical Computer Science in 1996. He has completed 15 years of service in teaching. Currently he is a Professor in the Department of Science & Engineering, Kongu Engineering College, Tamil Nadu, India. He received the Best Staff Award for two consecutive years at Kongu Engineering College, as well as the Cognizant Technology Solutions (CTS) Best Faculty Award 2008 for outstanding performance. He has published more than 80 research articles in international and national journals and has authored 7 books with reputed publishers. He has guided 6 part-time Ph.D scholars, and a number of scholars are working under his guidance on topics such as image processing, data mining, and networking. He has organized several AICTE-sponsored national seminars and workshops.


Handoff scheme to enhance performance in SIGMA

B. Jaiganesh 1, Dr. R. Ramachandran 2
1 Research Scholar, ECE Department, Sathyabama University, Chennai
2 Principal, Sri Venkateswara College of Engineering, Chennai

Abstract — Mobile Internet Protocol (MIP), an industry standard for handling mobility, suffers from high handover latency and packet loss, in addition to requiring changes to the network infrastructure. To overcome these problems, we proposed a new approach called Seamless IP diversity based Generalized Mobility Architecture (SIGMA). Although SIGMA achieved a low-latency handoff, the use of IP diversity resulted in some instability during handoff. In this paper, we propose a new handoff policy, called HANSIG-HR, to solve the instability problem of SIGMA. HANSIG-HR is based on Signal-to-Noise Ratio (SNR), hysteresis and route cache flushing. Our experimental results show that HANSIG-HR improves the stability of SIGMA.

Keywords: Handoff Latency, MIP, SIGMA, Throughput, SNR, HANSIG, HANSIG-H, and HANSIG-HR.

I. INTRODUCTION

Mobile IP, proposed by Perkins [1], is the IETF standard for handling the mobility of Internet hosts for mobile data communication. Mobile IP suffers from a number of problems, such as high handover latency and high packet loss, and it requires changes in the network infrastructure. To solve these problems, we earlier proposed a transport layer based mobility management scheme called Seamless IP diversity based Generalized Mobility Architecture (SIGMA). SIGMA exploits the multiple addresses available to most mobile hosts to perform a seamless handoff. Stream Control Transmission Protocol (SCTP) [2], a transport layer protocol being standardized by the IETF, was used to validate and test the concepts and performance of SIGMA. The use of multiple interface cards in our previous studies on SIGMA resulted in some instability during handoff due to the handoff latency. The instability was due to an excessive number of handoffs in the overlapping region.

There is previous work on reducing the number of handoffs and handoff latencies for Cellular IP, Mobile IP, and Layer 2 handoffs. For example, work on Cellular IP [3], [4] used average receiving power, receiving window, bit error ratio and signal strengths. Portoles et al. [7] reduced Layer 2 handoff latency by using signal strength and buffering techniques. Aust et al. [8] used the Signal-to-Noise Ratio for Mobile IP handoffs. It should be noted that the above works deal either with link layer handoffs or are designed for specific architectures (like Cellular IP and Mobile IP). The authors are not aware of any work which has studied handoff schemes for transport layer based mobility management schemes.

The objective of this paper is to remove the instability observed in previous studies of SIGMA by proposing a handoff scheme for SIGMA. Initiation of handoff, also known as the handoff trigger, is a crucial part of any handoff policy. The Signal-to-Noise Ratio, Signal-to-Interference Ratio, Bit Error Rate and Frame Error Rate (FER) are generally used as link layer handoff triggers [9]. Since our experimental environment has noise and negligible interference, we use the Signal-to-Noise Ratio (SNR) as the handoff trigger in our proposed handoff policy for SIGMA. We designed three HANdoff schemes for SIGMA: (i) HANSIG, with SNR alone; (ii) HANSIG-H, with SNR and hysteresis; and (iii) HANSIG-HR, with SNR, hysteresis and route cache flush. Results from the experimental testbed of SIGMA are collected for these three schemes and compared.

The rest of this paper is organized as follows. Sec. II is a brief introduction to SIGMA. The instability of SIGMA, the motivation for this work, is illustrated in Sec. III. Previous work on handoff schemes, their methods, advantages, and disadvantages is described in Sec. IV. Our proposed handoff scheme is described in Sec. V. The experimental setup for testing the proposed handoff schemes is described in Sec. VI, followed by experimental results and concluding remarks in Secs. VII and VIII, respectively.

II. INTRODUCTION TO SIGMA

SIGMA is a transport layer based seamless handoff scheme that uses the IP diversity offered by multiple interfaces in mobile nodes to carry out a soft handoff. Stream Control Transmission Protocol's (SCTP) multi-homing feature is used to illustrate the concepts of SIGMA. SCTP allows an association (see Fig. 1) between two end points to span multiple IP addresses of multiple network interface cards. Addresses can be dynamically added to and deleted from an association by using ASCONF chunks of SCTP's dynamic address reconfiguration feature [2]. One of the addresses is designated as the primary, while the others can be used as backups in the case of failure of the primary address. In Fig. 1, a multi-homed Mobile Node (MN) is connected to a Correspondent Node (CN) through two wireless networks. The various steps of SIGMA (see Fig. 1) are given below.

1) STEP 1: Obtain new IP address : The handoff procedure begins when the MN moves into the overlapping radio coverage area of two adjacent subnets. Once the MN receives the router advertisement from the new access point (Access Point 2), it should initiate the procedure of obtaining a new IP address (IP2 in Fig. 1).


2) STEP 2: Add IP addresses to association: When the SCTP association was initially set up, only the CN's IP address and the MN's first IP address (IP1) were exchanged between CN and MN. After the MN obtains another IP address (IP2 in STEP 1), MN binds IP2 into the association (in addition to IP1) and notifies CN about the availability of the new IP address.

3) STEP 3: Redirect data packets to new IP address: When MN moves further into the coverage area of Wireless Network 2, Data Path 2 becomes increasingly more reliable than Data Path 1. CN can then redirect data traffic to IP2 to increase the possibility of data being delivered successfully to the MN. MN accomplishes this task by sending an ASCONF chunk with the Set Primary Address parameter, which results in CN setting its primary destination address for MN to IP2. The MN's routing table is also changed so that packets leaving MN are routed through IP2.

4) STEP 4: Updating the location manager: Location management of SIGMA is implemented by a location manager that maintains a database of the correspondence between the MN's identity and its current primary IP address. MN can use any unique information as its identity, such as the home address (as in MIP), a domain name, or a public key defined in the Public Key Infrastructure (PKI).

5) STEP 5: Delete or deactivate obsolete IP address: When MN moves out of the coverage of Wireless Network 1, no new or retransmitted data packets are directed to IP1. MN notifies CN that IP1 is out of service for data transmission by sending an ASCONF chunk to CN. Once received, CN deletes IP1 from its local association control block and acknowledges to MN, indicating successful deletion.

The actual handoff takes place in STEP 3; the handoff scheme for SIGMA has to consider the exact time at which MN should send Set Primary, the objective being to reduce the number of handoffs and avoid instability.

Fig. 1. Experimental testbed (the multi-homed MN moves through the overlapping region of Wireless Network 1 and Wireless Network 2; Access Point 1 and Access Point 2 connect through Gateway 1 and Gateway 2 to the CS network and the Correspondent Node, forming one SCTP association over Data Path 1 and Data Path 2).

III. INSTABILITY OF SIGMA

In this section, we illustrate the instability of SIGMA using the timeline shown in Fig. 2. When the MN moves between the regions of the wireless networks, it is in one of two states: (i) the stable state, where the MN receives data and sends SACKs through the same IP address; (ii) the unstable state, where the MN receives data through one IP address and sends SACKs through another IP address. In SIGMA, a Set Primary is sent for each handoff (see Sec. II, STEP 3). In Fig. 2, the initial Set Primary is sent from the MN to the CN during handoff. This Set Primary request is processed by the CN, and the CN starts sending data to the new IP. At the same time, the MN changes its routing table, but the change takes effect only later because of the route cache (see Sec. V). In the meantime, a large number of Set Primaries are issued and the routing table is changed repeatedly due to the ping pong effect. So even once the CN sends data to the new IP, the MN might not route through the new IP, since the routing table has already changed again; the MN therefore sends SACKs through the old IP while the CN sends data to the new IP. Moreover, after the initial Set Primaries, the subsequent Set Primary requests are ignored by the CN, because many Set Primaries arrive within a short interval of time. Only when the last routing table change finally takes effect do the data and SACKs both go through the new IP. We therefore call the intervening period the unstable state, in which the MN uses one IP to receive data and another IP to send SACKs.

Fig. 2. Timeline for SIGMA explaining the unstable region (Set Primary exchanges and routing table changes between the Mobile Node and the Correspondent Node).

To illustrate the instability of SIGMA in real data transfer, we use Fig. 3, which shows the throughput of SIGMA in our experimental setup (given in Sec. VI) with the HANSIG scheme.


Fig. 3 shows the throughput as a function of time. The figure is divided into five regions, showing that the MN alternates between the stable and unstable states as given below.

1) From time 0 to 23 seconds, the MN is in Wireless Network 1, during which data are received and SACKs are sent through IP1; this is the stable state for the MN.

2) From 23 to 36 seconds, the MN is in the unstable state, where data are received through IP2 and SACKs are sent through IP1, which is due to the excessive number of handoffs and the route cache.

3) The MN then enters Wireless Network 2 completely, where it is again in the stable state.

4) When the MN moves back from Wireless Network 2 to Wireless Network 1, it is in an unstable state between 38 and 52 seconds.

5) From 52 seconds onwards, the MN is completely under Wireless Network 1 and in the stable state.

We can see from Fig. 3 that the MN is in the unstable state for a long period, which is due to the number of Set Primaries being sent to the CN (discussed in Sec. II) because of the large number of handoffs, and due to the route cache (see Sec. V).

The unstable state of SIGMA can also be called handoff latency, because for other schemes, such as Mobile IP which uses a single interface, handoff latency has been defined in previous work as the time taken by the MN to completely switch between networks. In SIGMA, the MN is completely under the new network, i.e., uses the new IP for both data and SACKs, only after the unstable state. So our aim is to reduce the time during which the MN is in the unstable state, thus reducing the handoff latency. Reducing the unstable state is important because packet losses will occur if an access point becomes unavailable while the MN is using both interfaces. We remove this unstable state by using an efficient handoff scheme. In the next section, we discuss previous work on reducing handoff latency.

IV. PREVIOUS WORK

One of the first works to reduce the number of handoffs [9] describes various criteria that can be used to trigger Layer 2 handoffs. The criteria include Relative Signal Strength (RSS), RSS with Threshold (T), RSS with Hysteresis (H), and RSS with both Threshold (T) and Hysteresis (H). There is much previous work on reducing handoff latency. Most of it depends on architectural features of schemes such as Mobile IP and Cellular IP. Hua et al. [10] have designed a scheme for Mobile IP which makes use of a concept called multi-tunnel, where the HA copies an IP packet destined to the MN and sends the copies to multiple destinations through the multi-tunnel. Belghoul et al. [10] present pure IPv6 soft handover mechanisms, based on IPv6 flow duplication and merging, in order to offer pure IP-based mobility management over heterogeneous networks by using a Duplication & Merging Agent (D&M). In POLIMAND (policy based Mobile IP handoff decision), Aust et al. [7] reduce handoff latency and accelerate the handoff process through a combination of MIP signaling and link layer hints obtained from a generic link layer.

Portoles et al. [7] try to reduce Layer 2 handoff latency by buffering Layer 2 frames in the driver and card of AP1 and forwarding them to AP2. Shin et al. [11] reduce the MAC layer handoff latency by selective scanning (a well-selected subset of channels is scanned, reducing the probe delay) and caching (a cache table is built which uses the MAC address of the current AP as the key).

RSS and BER based algorithms have been reported by Chia et al. [5] for Cellular IP. They compiled a radio propagation and BER database for handover simulation in typical city microcellular radio systems, so as to provide realistic data for handover simulation, thus minimizing inaccuracies due to inadequacies in propagation modeling.

Austin et al. [4] studied a velocity adaptive algorithm for Cellular IP. They use average receiving power, i.e., they calculate signal strength time averages from N neighboring base stations and reconnect the mobile subscriber to an alternate BS whenever the signal strength of the alternate BS exceeds that of the serving BS by at least H dB.

As discussed above, most of the previous work focused on techniques to reduce the number of handoffs and the handoff latency. The techniques used to reduce the number of handoffs [9] can be applied to SIGMA, since SIGMA can obtain the Layer 2 information.

However, previous work on reducing the latency per handoff is not applicable to SIGMA, because works such as [5], [11] reduce handoff latency at Layer 2, whereas SIGMA operates at Layer 4. Other works such as [9], [10] are based on architectures like Mobile IP and Cellular IP, which are different from the SIGMA architecture.

Considering the above facts, we develop our own handoff scheme to enhance stability in SIGMA by making use of the architectural features of SIGMA.

V. HANDOFF SCHEME TO ENHANCE PERFORMANCE IN SIGMA

The instability of SIGMA described in Sec. III depends on two factors: 1) Fluctuation of the signal strength, which increases the number of handoffs due to the ping pong effect. The ping pong


effect can be reduced by using one of the techniques to reduce the number of handoffs discussed in Sec. IV. 2) The route cache effect, where the kernel first searches the route cache for a matching entry for the destination of a packet, followed by a search in the main routing table (also called the Forwarding Information Base (FIB)). If the kernel finds a matching entry during the route cache lookup, it forwards the packet immediately and stops traversing the routing tables. Because the routing cache is maintained by the kernel separately from the routing tables, manipulating the routing tables may not have an immediate effect on the kernel's choice of path for a given packet.

We use ip route flush cache to avoid a non-deterministic lag between the time that a new route is entered into the kernel routing tables and the time that a new lookup in those route tables is performed. Once the route cache has been flushed, new route lookups (if not triggered by a packet, then manually with ip route get) will result in a new lookup in the kernel routing tables.

Our proposed handoff scheme, called HANSIG-HR, is designed to remove both the ping pong and route cache effects as described below.

HANSIG-HR: Our proposed HANSIG-HR scheme makes use of the Signal-to-Noise Ratio (SNR) (discussed in Sec. I), hysteresis to reduce the number of handoffs (discussed in Sec. IV), and a route cache flush (discussed in Sec. V). The pseudo code for HANSIG-HR is given below, where SNR1 and SNR2 are the Signal-to-Noise Ratios of AP1 and AP2, respectively, and Hyst is the hysteresis value.

while (1) {
    Calculate SNR1 = (SignalStrength / NoiseStrength) for AP1
    Calculate SNR2 = (SignalStrength / NoiseStrength) for AP2
    If (SNR2 > SNR1) and (SNR2 - SNR1 > Hyst)
        Issue Set_Primary to set IP2 as the primary address in CN
    If (SNR1 > SNR2) and (SNR1 - SNR2 > Hyst)
        Issue Set_Primary to set IP1 as the primary address in CN
    Change routing table of MN and flush route cache
}

The pseudo code for HANSIG and HANSIG-H is similar to that of HANSIG-HR, but for HANSIG-H there is no route cache flush; similarly, for HANSIG the hysteresis value is zero and there is no route cache flush.

Optimum value of hysteresis: Based on the signal strength fluctuations, we now determine an optimum hysteresis value. Fig. 4 shows the variation of the SNR, as measured in our testbed, as the MN moves at a uniform speed from Wireless Network 1 to Wireless Network 2. In Fig. 4, we can see that the maximum difference between the access points' SNRs is 3 dB in the ping pong region. For example, if the hysteresis value were less than 3 dB, many unnecessary handoffs would have taken place between 45 and 46 seconds in Fig. 4. We therefore assigned a hysteresis value of 4 in our experimental testbed.

HANSIG, HANSIG-H, and HANSIG-HR are implemented in the MN, and results are obtained using the experimental setup discussed in the next section.

VI. EXPERIMENTAL SETUP

The HANSIG, HANSIG-H and HANSIG-HR schemes discussed in Sec. V were implemented in the testbed shown in Fig. 1. The testbed consists of the MN, the CN and gateways (used to form Wireless Network 1 and Wireless Network 2).

The gateways and CN are Dell desktops running RedHat Linux 9 with kernels 2.4.20 and 2.6.6, respectively. The MN is a Dell Inspiron 1100 laptop with two wireless NIC cards (Avaya PCMCIA and Netgear USB wireless cards) running RedHat Linux 9 with kernel 2.6.6.

VII. RESULTS FOR THE HANDOFF SCHEME

In this section, we present results to demonstrate the effectiveness of the different handoff schemes we proposed, using the experimental testbed described in Sec. VI. The effectiveness of the HANSIG, HANSIG-H and HANSIG-HR handoff schemes is presented and compared. We use throughput and handoff frequency as measures of the effectiveness of our proposed handoff schemes.

A. Effect of hysteresis on number of handoffs

We observed the number of handoffs for different values of hysteresis. It was observed that for hysteresis values of 0, 1, 2, 3 and 4, the average numbers of handoffs were 15, 11, 6, and 1, respectively. Therefore, for the rest of the results, we used a hysteresis value of 4.

B. Effect of hysteresis on data flow

The effect of hysteresis on the throughput of SIGMA when implementing HANSIG-H is shown in Fig. 5.

As shown in Fig. 5, the graph is divided into five regions in which the MN is in the two states alternately. From time 0 to 20.57 seconds, the MN is in Wireless Network 1, during which data are received and SACKs are sent through IP1. From time 20.57 to 22.18 seconds, the MN is in the unstable state, where data are received from IP2 and SACKs are sent from IP1 due to the excessive number of


handoffs due to the ping pong effect resulting from signal strength variation. Between 22.18 and 73.97 seconds the MN is completely inside Wireless Network 2, during which it is in the stable state, receiving data and sending SACKs through a single IP, i.e., IP2. When the MN moves back from Wireless Network 2 to Wireless Network 1, it again enters the unstable state; from time 74.97 to 75.50 seconds data are received from IP1 and SACKs are sent from IP2, which is again due to the number of handoffs resulting from the ping pong effect. The MN is then completely under Wireless Network 1 and is in the stable state from 75.50 seconds onwards.

Fig. 5. Throughput for HANSIG-H.

The MN is unstable during the periods from 20.57 to 22.18 seconds and from 74.97 to 75.50 seconds even with hysteresis implemented. The instability is due to the caching effect of the routing table, even though there was only one handoff.

C. Effect of hysteresis and route cache flush on data flow

The number of handoffs in the overlapping region and the throughput were measured for HANSIG and HANSIG-HR. The throughput for HANSIG is shown in Fig. 3. The durations of the unstable state are 5 seconds and 20 seconds. These are due to the excessive number of handoffs which take place without hysteresis when the MN is in the overlapping region. So we can see that the duration of time for which the MN receives data from one IP and sends SACKs through another IP depends on the number of handoffs taking place when the MN moves between the wireless networks. Packets will be lost if the MN loses contact with one of the access points during the period in which the MN is using both interfaces (one for receiving data and another for sending SACKs). The throughput of HANSIG-HR is shown in Fig. 6, with only three regions: from 0 to 19 seconds, the MN sends and receives data through IP1; from 19 to 39 seconds the MN receives and sends data through IP2; and from 39 seconds onwards it receives and sends data through IP1. These three regions were identified by analyzing the Ethereal captures during the data transfers. From this we can infer that at any point of time the MN is always in a stable state. We can therefore see that the MN is unstable for a longer time when no hysteresis is used (Fig. 3) than when hysteresis is used (Fig. 6), i.e., hysteresis and route cache flushing (HANSIG-HR) improve the performance of SIGMA.

Fig. 6. Throughput for HANSIG-HR.

VIII. CONCLUSION AND FUTURE WORK

We have proposed a new handoff policy for SIGMA and analyzed the effect of the policy on the enhancement of the stability of SIGMA. We observed that the new handoff policy, HANSIG-HR, which is based on the signal-to-noise ratio, hysteresis and a route cache flush, significantly improved the performance of SIGMA. Future work consists of improving the handoff policy by using a dwell timer and a threshold, and by dynamically determining the value of the hysteresis based on the characteristics of the signal fluctuations.

REFERENCES

[1] C.E. Perkins, “Mobile Networking Through Mobile IP,” IEEE Internet Computing, vol. 2, no. 1, pp. 58 – 69, January - February 1998.

[2] R. Stewart, "Stream Control Transmission Protocol (SCTP) dynamic address configuration," IETF draft, draft-ietf-tsvwg-addip-sctp-12.txt, June 2005.

[3] M.D. Austin and G.L. Stuber, “Velocity adaptive handoff algorithm for microcellular systems,” IEEE Trans. Veh. Technol., vol. 43, no. 3, pp. 549 – 561, August 1994.

[4] S. Chia and R.J. Warburton, “Handover criteria for city microcellular radio systems,” Proc. IEEE Veh. Tech. Conf., Orlando, FL USA, pp. 276– 281, 6 - 9 May 1990.

[5] M. Portoles, Z. Zhong, S. Choi, and C.T. Chou, “IEEE 802.11 link-layer forwarding for smooth handoff,” Proc. 14th IEEE Personal, Indoor and Mobile Radio Communications, Beijing, China, pp. 1420 – 1424, 7 - 10 September 2003.

[6] S. Aust, D. Proetel, N.A. Fikouras, C. Pampu, and C. Gorg, "Policy based mobile IP handoff decision (POLIMAND) using generic link layer information," 5th IEEE International Conference on Mobile and Wireless Communication Networks, Singapore, 27 – 29 October 2003.

[7] A. Festag, “Optimization of handover performance by link layer trig- gers in ip-based networks: Parameters, protocol extensions and APIs for implementation,” tech. rep., Telecommunication Networks Group, Technische University, Berlin, July 2002.

[8] G.P. Pollini, “Trends in handover design,” IEEE Communications Mag- azine, vol. 34, no. 3, pp. 82 – 90, March 1996.

[9] Y.M. Hua, L. Yu, and Z. Hui-min, "The Mobile IP handoff between hybrid networks," The 13th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Portugal, pp. 265 – 269, 15 – 18 September 2002.

[10] F. Belghoul, Y. Moret, and C. Bonnet, “Performance analysis on IP- based soft handover across ALL-IP wireless networks,” IWUC, PORTO, Portugal, pp. 83 – 93, 13 - 14 April 2004.

[11] S. Shin, A. Forte, A. Rawat, and H. Schulzrinne, "Reducing MAC layer handoff latency in IEEE 802.11 Wireless LANs," ACM MobiWac 2004, Philadelphia, PA, USA, pp. 19 – 26, September 26 – October 1, 2004.


A Fast Selective Video Encryption Using Alternate Frequency Transform

Ashutosh Kharb
Department of ECE, USIT, New Delhi, India. 110006

Seema
*Department of CSE, BMIET, Sonipat, Haryana, India. 131001

Ravindra Purwar
GGSIPU, Sonipat, New Delhi, India. 110006

Abstract — With the commercialization of multimedia data over public networks, the security of multimedia data is a challenging issue. Furthermore, multimedia data are generally very large and therefore require efficient compression to save transmission cost. In this manuscript, a modified 4-point butterfly method is proposed to compute the DCT for encoding frames in video data. It has been experimentally compared with an existing technique based on parameters such as PSNR, compression ratio, execution time per frame, and the time taken to evaluate the DCT. It has also been shown theoretically that the proposed technique takes less time than the existing method.

Keywords: DCT, motion estimation, selective encryption, spatial compression, video encoding.

I. INTRODUCTION

With the commercialization of multimedia data over public networks, the security of multimedia data is a challenging issue. Furthermore, multimedia data are generally very large and therefore require efficient compression to save transmission cost. In this manuscript, a modified 4-point butterfly method is proposed to compute the DCT for encoding frames in video data. It has been experimentally compared with an existing technique based on parameters such as PSNR, compression ratio, execution time per frame, and the time taken to evaluate the DCT. It has also been shown theoretically that the proposed technique takes less time than the existing method.

Nowadays, public networks like the Internet are heavily used for various multimedia-based applications such as video on demand, video conferencing, pay-per-view TV, etc. As the data size in such applications is very large in comparison to text data, it is necessary to compress the data before transmission. Digital video signals are compressed using coding standards such as MPEG-1 to MPEG-4 and H.264/AVC before transmission over a wired or wireless channel. These standards do not provide security to the multimedia data, so various encryption schemes have been proposed to secure the data. The traditional solution [1,2] for providing confidentiality is to scramble the data in the frequency or temporal domain, but these techniques are nowadays vulnerable to attacks. Another way is to encrypt either the uncompressed data or the compressed data (at the bit-stream level) using conventional cryptosystems like DES and AES, which work on blocks of data and are therefore known as block ciphers. These procedures provide the highest security but also require high processing time, which is undesirable for real-time applications. Also, video data are more voluminous than text data, so this results in a decrease in speed. Moreover, the information density is lower in multimedia data than in text data, so encrypting the whole video data is unnecessary. Hence, the focus shifts from complete encryption schemes to partial or selective encryption schemes that provide lower computational costs and increase speed by reducing the processing time. The basic concept of partial encryption is to select the most important coefficients and encrypt them with conventional cryptographic ciphers. The non-selected coefficients are sent over the transmission channel with no encryption. Since the selected coefficients are protected, it is impossible for an attacker to recover any information from these coefficients.

The rest of the paper is organized as follows. In Section 2, we discuss the basic concept of video compression. Section 3 introduces the partial video encryption technique. In Section 4, the proposed modified technique is discussed. The results of experiments are detailed in Section 5, where we present comparison results with Yeng et al.'s algorithm [1].


Finally, in Section 6, conclusions are drawn and future studies are explored.

II. BASIC CONCEPT OF VIDEO COMPRESSION

A brief introduction to the process of video compression is given in this section. Video compression comprises two levels. Firstly, spatial compression exploits the high correlation between pixels (samples) of the same frame and is equivalent to JPEG compression. Then, temporal compression is used to remove the temporal redundancy between adjacent frames by using the concept of motion estimation. A video sequence is a collection of groups of pictures (GOPs) of still images called frames. There are three types of frames. I frame (intra frame): this is the first frame, representing the beginning of a scene, and is followed by P and B frames. The spatial compression process is applied only to I frames. P frame (predicted frame): this frame is predicted from the past reconstructed frame. B frame (bidirectional frame): these frames are predicted from the I frame and P frames. The general sequence of frames in a GOP is illustrated in Figure 1.

Figure 1: A sequence of GOP

The overall video compression process can be depicted as in Figure 2. The main components of compression are:

• Transform encoding
• Quantization
• Motion compensation and estimation
• Zigzag reordering and RLE (Run Length Encoding)
• Entropy encoding

Figure 2: General Block Diagram of Video Compression [4]

Pixels in a video exhibit a certain level of correlation with the adjacent or neighboring pixels in the same frame and in the neighboring frames. The correlation between consecutive frames within a video is high. In the transform encoding phase, therefore, a transformation from the spatial (correlated) domain to an uncorrelated domain takes place. This phase produces a representation that maintains the relative relationship between the pixels while revealing the redundancies. Transforms that can be used [3] include image-based transforms (e.g. the DWT, which is best suited for still images) and block-based transforms (DCT, KLT, etc.). The choice of transform depends on the following factors:

• The data in the transformed domain should be uncorrelated and compact (most of the energy should be concentrated in a small number of values).
• The transform should be reversible.
• The transform should be computationally tractable.

Block-based transforms are best suited for compressing block-based motion-compensated residuals. The 1-D DCT (a unitary transform) is applied to a sequence of N sample values and can be evaluated using the formula

F(x) = c(x)\sqrt{2/N}\,\sum_{n=0}^{N-1} f(n)\,\cos\!\left[\frac{(2n+1)x\pi}{2N}\right]    (1)

where c(x) = 1/\sqrt{2} for x = 0, and c(x) = 1 for x = 1, 2, ..., N-1.


The IDCT (inverse DCT) can be evaluated as

f(n) = \sqrt{2/N}\,\sum_{x=0}^{N-1} c(x)\,F(x)\,\cos\!\left[\frac{(2n+1)x\pi}{2N}\right]    (2)

for n = 0, 1, 2, ..., N-1.

The first value, at x = 0, is known as the DC coefficient and is proportional to the average value of the pixels, since at x = 0,

F(0) = \frac{1}{\sqrt{N}}\,\sum_{n=0}^{N-1} f(n)    (3)

All other coefficients are known as AC coefficients.

Similarly, the 2-D DCT (DCT-II) is used for transforming a 2-D sample array and is given in equation (4):

F(u,v) = c(u)\,c(v)\,\frac{2}{N}\,\sum_{m=0}^{N-1}\sum_{n=0}^{N-1} f(m,n)\,\cos\!\left[\frac{(2m+1)u\pi}{2N}\right]\cos\!\left[\frac{(2n+1)v\pi}{2N}\right]    (4)

In matrix form,

Y = A\,X\,A^{T}    (5)

where X is a block of N x N samples, Y is the block of transform coefficients and A is known as the transform matrix. Equation (4) can be viewed as applying the 1-D DCT twice in succession, once to the columns and then to the rows (or vice versa). This property of the DCT is known as separability.

Quantization: After transform encoding, the transformed coefficients are quantized to reduce the number of bits required for encoding. A quantizer maps a signal with a range of values X to a quantized signal with a reduced range of values Y. Quantizers can be broadly classified as scalar or vector quantizers. A scalar quantizer maps one sample of the input signal to one quantized output value, and a vector quantizer maps a group of input samples (a 'vector') to a group of quantized values.
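As an illustration of the transform and quantization steps just described, the following is a minimal NumPy sketch (not the paper's MATLAB implementation) of the separable 2-D DCT of equation (5) followed by a uniform scalar quantizer; the function names and the quantization step size are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal N-point DCT-II matrix A; row k holds the k-th basis vector of equation (1)."""
    a = np.zeros((n, n))
    for k in range(n):
        c = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        for i in range(n):
            a[k, i] = c * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    return a

def dct2(block):
    """Separable 2-D DCT of an N x N block, equation (5): Y = A X A^T."""
    a = dct_matrix(block.shape[0])
    return a @ block @ a.T

def quantize(coeffs, step=16.0):
    """Uniform scalar quantizer: one coefficient in, one quantized level out."""
    return np.round(coeffs / step).astype(int)

if __name__ == "__main__":
    x = np.random.randint(0, 256, (4, 4)).astype(float)   # a toy 4x4 block of samples
    levels = quantize(dct2(x))
    print(levels)   # high-frequency levels are typically quantized to zero
```

On a smooth block this typically leaves only a few nonzero low-frequency levels, which is exactly what the reordering stage described later exploits.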

Motion estimation and compensation: This phase is the heart of temporal compression, in which the encoder estimates the motion in the current frame with respect to a previous or future frame. A motion-compensated image for the current frame is then created from blocks of the reference frame. The motion vectors of the blocks used for motion estimation are transmitted, and the difference between the compensated image and the current frame is also encoded. The main purpose of motion-estimation-based video compression is to save bits by sending encoded difference images, which have less energy and can be compressed much more than a full frame. This is the most computationally expensive operation in the entire compression process. The matching of one block with another is based on the output of a cost function; the block that yields the least cost is the one that matches the current block most closely. Among the various cost functions, the most popular and least computationally expensive is the Mean Absolute Difference (MAD) given by equation (6). Another cost function is the Mean Squared Error (MSE) given by equation (7).

MAD = \frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N} \left| C_{ij} - R_{ij} \right|    (6)

MSE = \frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N} \left( C_{ij} - R_{ij} \right)^{2}    (7)

where N is the side of the macro block, Cij and Rij are the pixels being compared in current macro block and reference macro block, respectively.

The Peak Signal-to-Noise Ratio (PSNR), given by equation (8), characterizes the motion-compensated image that is created from the motion vectors and macroblocks of the reference frame:

PSNR = 10\,\log_{10}\!\left[\frac{(2^{n} - 1)^{2}}{MSE}\right]    (8)

where n is the number of bits per sample.
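A minimal sketch of these block-matching cost functions and of PSNR follows, under the assumption of 8-bit samples (peak value 255); the function names are illustrative only.

```python
import numpy as np

def mad(current, reference):
    """Mean Absolute Difference between two N x N macroblocks, equation (6)."""
    return np.mean(np.abs(current.astype(float) - reference.astype(float)))

def mse(current, reference):
    """Mean Squared Error between two N x N macroblocks, equation (7)."""
    diff = current.astype(float) - reference.astype(float)
    return np.mean(diff ** 2)

def psnr(original, compensated, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB, equation (8), assuming 8-bit samples."""
    err = mse(original, compensated)
    if err == 0:
        return float("inf")            # identical blocks: error-free reconstruction
    return 10.0 * np.log10(peak ** 2 / err)
```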

Zigzag reordering and RLE: Quantized transform coefficients are required to be encoded as compactly as possible prior to storage and transmission. In a transform-based image or video encoder, the output of the quantizer is a sparse array containing a few nonzero coefficients and a large number of zero-valued coefficients. Reordering (to group together nonzero coefficients) and efficient representation of zero coefficients are applied prior to entropy encoding.

The significant DCT coefficients of a block of image or residual samples are typically the 'low-frequency' positions around the DC (0, 0) coefficient. The nonzero DCT coefficients are clustered around the top-left (DC) coefficient and the distribution is roughly symmetrical in the horizontal and vertical directions. After quantization, the DCT coefficients for a block are reordered to group together nonzero coefficients, enabling efficient representation of the remaining zero-valued quantized coefficients. The optimum reordering path (scan order) depends on the distribution of nonzero DCT coefficients. For a typical frame block the scan order is a zigzag starting from the DC (top-left) coefficient, as shown in figure 3.

Figure 3: Zig Zag Reordering

Starting with the DC coefficient, each quantized coefficient is copied into a one-dimensional array. Nonzero coefficients tend to be grouped together at the start of the reordered array, followed by long sequences of zeros. The output of the reordering process is an array that typically contains one or more clusters of nonzero coefficients near the start, followed by strings of zero coefficients. Higher-frequency DCT coefficients are very often quantized to zero and so a reordered block will usually end in a run of zeros.
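The following sketch illustrates this reordering and run-level coding for a generic n x n block; the zigzag path matches the pattern of figure 3, while the (run, level) pair format and the end-of-block marker are illustrative simplifications of what an entropy coder would actually consume.

```python
def zigzag_order(n):
    """Return the (row, col) visiting order for an n x n block, starting at the DC (0, 0) position."""
    order = []
    for s in range(2 * n - 1):                      # s = row + col indexes each anti-diagonal
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()                          # walk the even diagonals bottom-left to top-right
        order.extend(diag)
    return order

def run_length_encode(block):
    """Reorder a quantized block in zigzag order and encode it as (run_of_zeros, level) pairs."""
    n = len(block)
    scan = [block[i][j] for (i, j) in zigzag_order(n)]
    pairs, run = [], 0
    for level in scan:
        if level == 0:
            run += 1
        else:
            pairs.append((run, level))
            run = 0
    pairs.append((0, 0))                            # end-of-block marker (illustrative convention)
    return pairs
```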

III. PARTIAL VIDEO ENCRYPTION USING ALTERNATE TRANSFORMS [3]

This scheme works on 4x4 blocks of data. It incorporates several transforms rather than only the single general DCT explained in Section 2. These new transforms are as efficient as the DCT for encoding residual frames. The new unitary transforms can be derived from the 1-D DCT for N = 4 sample values using equation (1). For N = 4,

F(x) = \frac{c(x)}{\sqrt{2}}\,\sum_{n=0}^{3} f(n)\,\cos\!\left[\frac{(2n+1)x\pi}{8}\right]    (8)

and, ………(9)

………………. (10)

…………. (11)

Due to the symmetric property of the cosine function,

…………………………. (12)

………………………… (13)

Using the above relations, the 1-D DCT can be represented in a flow-graph structure known as the butterfly approach, shown in figure 4. A junction represents an addition operation and a number on a line represents a multiplication by that constant.

Figure 4: 1-D 4 POINT DCT METHOD [3]
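For reference, a sketch of the standard orthonormal 4-point DCT-II factored into a butterfly (sum/difference) stage followed by rotations is given below. It reproduces the outputs of the 4-point form given in equation (8) above; the exact stage ordering and constants of the flow graph in figure 4 and in [3] may differ from this generic factorization.

```python
import math

C1 = math.cos(math.pi / 8) / math.sqrt(2)       # rotation constants of the odd part
C3 = math.cos(3 * math.pi / 8) / math.sqrt(2)

def dct4_butterfly(x):
    """Orthonormal 4-point DCT-II via a butterfly (sum/difference) stage and rotations."""
    x0, x1, x2, x3 = x
    # stage of butterfly additions/subtractions
    s0, s1 = x0 + x3, x1 + x2
    d0, d1 = x0 - x3, x1 - x2
    # even part (trivial +/- combination) and odd part (pi/8 and 3*pi/8 rotation)
    X0 = 0.5 * (s0 + s1)
    X2 = 0.5 * (s0 - s1)
    X1 = C1 * d0 + C3 * d1
    X3 = C3 * d0 - C1 * d1
    return [X0, X1, X2, X3]
```

The point of the factorization is that the even and odd coefficients reuse the four butterfly sums and differences, which is what makes the fast structures discussed below cheaper than a direct matrix evaluation.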

The flow graph consists of three stages [3]: a plane-based rotation at stage 1, a pair of plane-based rotations at stage 2, and a permutation at stage 3. New unitary transforms can be created by keeping stages 1 and 3 unchanged and changing the rotation angles at stage 2, as shown in figure 5, by varying the two rotation angles a1 and a2 over the ranges specified in [3].


Figure 5: 4 POINT DCT INCORPORATING ROTATION ANGLES

The scheme shows the highest EPE (Energy Packing Efficiency) for highly correlated data (I frames) when both a1 and a2 are set to zero [3]. For weakly correlated data (P and B frames), the maximum EPE occurs at nonzero values of a1 and a2 (with a2 negative), as reported in [3].

An encryption algorithm consists of two parts: key generation and encryption using that key. This process is applied to the residual data only. For key generation, the RC4 key generator is used. The steps for partial video encryption using ADE are as follows (a structural sketch of steps 3-5 is given after the list):

1. Design 2^(M-1) transform tables.
2. For each frame, initialize the RC4 key generator with a random 128-bit key.
3. For each input residual block of size 4x4, obtain M bits from RC4.
4. Choose a transform table based on M-1 of the bits and apply it to the input block.
5. Use the M-th bit to encrypt the sign of the DC component: flip the sign of the DC component if the M-th bit is '1'.
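A structural sketch of the per-block operation is shown below. The identity matrices are only placeholders for the 2^(M-1) alternate transform tables derived in [3], and next_bits stands in for the RC4 keystream generator; both are assumptions made purely for illustration.

```python
import numpy as np

M = 3                                              # illustrative: 2 table-selection bits + 1 sign bit
TRANSFORM_TABLES = [np.eye(4) for _ in range(2 ** (M - 1))]   # placeholders for the alternate transforms of [3]

def encrypt_block(block, next_bits):
    """Transform-and-encrypt one 4x4 residual block.

    next_bits(k) stands in for the RC4 keystream generator and must return a list of k key bits.
    The first M-1 bits select a transform table; the M-th bit flips the sign of the DC coefficient.
    """
    bits = next_bits(M)
    table_index = int("".join(str(b) for b in bits[:M - 1]), 2)
    A = TRANSFORM_TABLES[table_index]
    coeffs = A @ block @ A.T                       # separable 2-D transform with the selected table
    if bits[M - 1] == 1:
        coeffs[0, 0] = -coeffs[0, 0]               # sign encryption of the DC coefficient
    return coeffs
```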

IV. MODIFIED ALTERNATE FAST DCT METHOD

The ADE scheme described in Section 3 results in an increase in computational time for transform encoding compared to the general DCT method described in Section 2, and hence a decrease in speed. Comparing equation (1) with the butterfly structure of figure 4, the general DCT requires three additions and five multiplications to compute the DCT of 4 elements, which is less than the ADE transform, which requires one addition, two subtractions and six multiplications. So, for a 4x4 block of data, the general DCT requires 128 (64x2) multiplication operations and 24 (8x3) addition operations, while ADE requires 192 (96x2) multiplication operations and 96 (48x2) addition operations. We therefore modify the above scheme and propose an alternative set of alternating transforms to reduce the computations and hence increase the speed. This is achieved by interchanging stage 1 and stage 3 of the ADE scheme, as illustrated in figure 6.

Figure 6: 4-POINT DCT FOR MAFD

Figure 7: 4-POINT FAST DCT INCORPORATING ROTATION ANGLES

From figure 7 it can be concluded that the MAFD scheme requires 96 (48x2) addition operations and 96 (48x2) multiplication operations to compute the transform of a 4x4 block of data, resulting in a total reduction in computations of 25% compared with the general DCT and 50% compared with the ADE scheme.

V. RESULTS


In this section, experimental results are presented to demonstrate the effectiveness of the proposed MAFD scheme over ADE. For this purpose four equi-spaced rotation angles are used, and four test video streams in grayscale mode are considered: the Miss America video (13 frames) and the Akiyo video (30 frames), both with resolution 176 x 144; the Bear video (15 frames) with resolution 720 x 480; and the Susie video (119 frames) with resolution 352 x 240. Both procedures, as explained in Sections 3 and 4, are applied to 4 x 4 blocks of all the above video sequences using the Image Processing Toolbox of MATLAB 7.0. The PSNR values, total number of bits required per pixel for each frame, time taken to compute the DCT method, total execution time per frame, and quality factor (PSNR / average bits per pixel) are computed within the limitations of the hardware and software. The results are summarized in Tables I to V below.

Table I: AVERAGE PSNR VALUES (dB) PER ENCRYPTED FRAME

VIDEO SEQUENCE                         ADE        MAFD
MISS AMERICA (176 x 144, 13 frames)    60.04938   60.01092308
AKIYO (176 x 144, 30 frames)           55.984     55.92803333
SUSIE (352 x 240, 119 frames)          55.20551   55.17861345
BEAR (720 x 480, 15 frames)            55.872     55.8416

Table I above shows the comparison between the average PSNR values for the encrypted frames of the different video sequences. It can be observed that both the ADE and MAFD schemes result in approximately the same average PSNR values.

Table II: AVERAGE BITS REQUIRED PER PIXEL PER FRAME

VIDEO SEQUENCE    ADE        MAFD
MISS AMERICA      1.230169   1.105838462
AKIYO             1.30983    1.14553
SUSIE             1.273624   1.068459664
BEAR              1.314787   1.212766667

Table II above shows the average bits required per pixel per frame for the different video sequences. It can be observed that the MAFD scheme reduces the number of bits required per pixel by approximately 15% compared with the ADE scheme.

Table III: EXECUTION TIME TAKEN BY DCT METHOD

VIDEO SEQUENCE    ADE        MAFD
MISS AMERICA      1.450846   1.041923077
AKIYO             1.509333   1.072233333
SUSIE             4.931202   3.441420168
BEAR              21.57433   14.97227

Table III above shows the experimental results for the execution time of the DCT method for the different video sequences under both schemes. It can be observed that the MAFD scheme reduces the execution time of the DCT by approximately 40% compared with the ADE scheme.

Table IV: TOTAL EXECUTION TIME PER FRAME

VIDEO SEQUENCE    ADE        MAFD
MISS AMERICA      2.241538   1.832615385
AKIYO             2.358      1.9209
SUSIE             12.27923   10.78944538
BEAR              137.9821   131.38

Table IV above shows the experimental results for the average total execution time per frame for the different video sequences under both schemes. It can be observed that the MAFD scheme reduces the total execution time by approximately 22% compared with the ADE scheme.

Table V: AVERAGE QUALITY FACTOR PER FRAME

VIDEO SEQUENCE    ADE        MAFD
MISS AMERICA      49.06731   54.93738462
AKIYO             43.0224    49.4201
SUSIE             43.38891   51.78489916
BEAR              43.31273   47.31186667

Table V above shows the experimental results for the average quality factor per frame for the different video sequences under both schemes. It can be observed that the MAFD scheme increases the quality factor by approximately 12% compared with the ADE scheme.

Figure 8: COMPARISON OF PSNR VALUES OBTAINED BY APPLYING BOTH METHODS FOR ENCRYPTED MISS AMERICA VIDEO

Figure 8 above shows the comparison between the PSNR values of the encrypted frames of the Miss America video obtained by the two schemes (ADE and MAFD). It can be observed that the PSNR values of the encrypted frames obtained by the MAFD scheme are slightly lower, except for the first frame (I frame).

Figure 9: COMPARISON OF NUMBER OF BITS REQUIRED PER PIXEL OBTAINED BY BOTH METHODS FOR MISS AMERICA VIDEO

Figure 9 above shows the comparison between the number of bits required after entropy encoding (Huffman encoding) by the encrypted frames of the Miss America video obtained by the two schemes (ADE and MAFD). It can be observed that the number of bits required by the encrypted frames obtained by the MAFD scheme is lower for the P and B frames, while it is approximately the same for the first frame (I frame).

Figure 10: COMPARISON OF EXECUTION TIME TAKEN BY DCT METHOD IN BOTH SCHEMES FOR MISS AMERICA VIDEO

Figure 10 above shows the comparison of the execution time of the DCT method for the Miss America video under the two schemes (ADE and MAFD). It can be observed that the time taken by the DCT method in the MAFD scheme is lower; this is due to the reduction in computations of the modified scheme compared with the ADE scheme.


Figure 11: COMPARISON OF TOTAL EXECUTION TIME PER FRAME FOR MISS AMERICA VIDEO

Figure 11 above shows the comparison of the total execution time per frame for the Miss America video under the two schemes (ADE and MAFD). It can be observed that the total execution time per frame obtained with the MAFD scheme is lower.

Figure 12: COMPARISON OF QUALITY FACTOR VALUES FOR MISS AMERICA VIDEO

Figure 12 above shows the comparison of the quality factor, i.e. the ratio of PSNR to the average bits required per pixel per frame, for the Miss America video under the two schemes (ADE and MAFD). It can be observed that the quality factor is higher in the case of the modified scheme (MAFD). Figures 13 to 16 below display screenshots of the original frames, the reconstructed encrypted frames and the predicted frames of the Akiyo, Miss America and Susie video sequences under both methods, i.e. ADE and MAFD.

a) b) c)

Figure 13 : AKIYO VIDEO (FRAME 3 ADE scheme) a) ORIGINAL FRAME b) ENCRYPTED RECONSTRUCTED FRAME c) PREDICTED FRAME

a) b) c) Figure 14 : AKIYO VIDEO (FRAME 29 ADE scheme) a) ORIGINAL FRAME b) ENCRYPTED RECONSTRUCTED FRAME c) PREDICTED FRAME

a) b) c) Figure 15 : AKIYO VIDEO MODIFIED (FRAME 3 MAFD method) a) ORIGINAL FRAME b) ENCRYPTED RECONSTRUCTED FRAME c) PREDICTED FRAME


a) b) c) Figure 16 : AKIYO VIDEO (FRAME 29 MAFD method) a) ORIGINAL FRAME b) ENCRYPTED RECONSTRUCTED FRAME c) PREDICTED FRAME

VI. CONCLUSION AND FUTURE SCOPE
The procedures explained in Sections 3 and 4 were implemented in MATLAB and compared on the basis of PSNR, number of bits required per pixel, execution time taken per frame to evaluate the DCT method, total execution time per frame, and quality factor (the ratio of PSNR to the number of bits required per pixel). It is shown both theoretically and experimentally that the MAFD scheme requires less computational time than ADE. It can be concluded from the results that both procedures provide approximately the same average PSNR of about 56 dB; a higher PSNR value represents less error. In terms of average bits required per pixel per frame, the MAFD scheme reduces the bit requirement by approximately 15% compared with the ADE scheme. The MAFD scheme also reduces the execution time of the DCT by approximately 40% compared with ADE. The reason can be explained theoretically by examining figures 4 to 7: evaluating a 1-D sequence of 4 samples with the ADE structure requires 24 multiplications and 12 additions, which is a considerable computational overhead, whereas the modified scheme obtained by interchanging stage 1 and stage 3 requires 12 multiplications and 12 additions, halving the number of multiplications. This reduces the time taken to evaluate the DCT function and hence the overall execution time. The experimental results for the average total execution time show that the MAFD scheme reduces the total execution time by approximately 22% compared with ADE, and the average quality factor per frame increases by approximately 12%. Since only selective sign encryption of the DC coefficients is used, the overhead due to the encryption process itself is very small.

This work is carried out for 4x4 input blocks, which makes the reconstructed frames more accurate; on the other hand, decreasing the block size reduces the energy packed per block and increases the computations and complexity compared with 8x8 blocks. In this paper the two schemes (ADE and MAFD) are compared on the basis of parameters such as PSNR, average bits per pixel and execution time; in future work, the schemes can also be analysed against the various attacks to which such a system may be vulnerable.

REFERENCES
1. I. Agi and L. Gong, "An Empirical Study of Secure MPEG Video Transmission," Proceedings of the Symposium on Network and Distributed Systems Security, pp. 137-144, IEEE, 1996.
2. L. Qiao and K. Nahrstedt, "Comparison of MPEG Encryption Algorithms," International Journal on Computer and Graphics, Special Issue on Data Security in Image Communication and Network, 22(3), pp. 437-438, 1998.
3. Siu-Kei Au Yeung, Shuyuan Zhu and Bing Zeng, "Partial Video Encryption Based on Alternating Transforms," IEEE Signal Processing Letters, Vol. 16, No. 10, pp. 893-896, October 2009.
4. I. Richardson, H.264 and MPEG-4 Video Compression. Hoboken, NJ: Wiley, 2003.
5. Jian Zhao, "Applying Digital Watermarking Techniques to Online Multimedia Commerce," Proc. of the International Conference on Imaging Science, Systems, and Applications (CISSA97), June 30-July 3, 1997, Las Vegas, USA.
6. http://trace.eas.asu.edu/


Impact of Variable Speed Wind Turbine Driven Synchronous Generators on the Transient Stability of Power Systems

Dr. D. Devaraj, R. Jeevajothi
Department of EEE, Kalasalingam University, Virudhunagar District, Tamil Nadu, India, PIN-626190

Abstract—With the scenario of wind power constituting up to 20% of the electric grid capacity in the future, the need for systematic studies of the impact of wind power on the transient stability of the grid has increased. This paper investigates possible improvements in grid transient stability while integrating a large-scale variable speed wind turbine driven synchronous generator. A dynamic model and simulation of a grid connected variable speed wind turbine (VSWT) driven synchronous generator with controllable power inverter strategies suitable for the study was developed, tested and verified. This dynamic model with its control scheme can regulate real power, maintain reactive power, and control the generated voltage and speed at different wind speeds. For this paper, studies were conducted on a standard IEEE 9-bus system augmented by a radially connected wind power plant (WPP) which contains 28 variable speed wind turbines with controllable power inverter strategies. The scheme also has the potential to control the rotor angle deviation and increase the critical clearing time during grid disturbances with the help of the controllable power inverter strategy.

Keywords: Variable speed wind turbine, direct drive synchronous generator, rotor angle deviation, critical clearing time, transient stability, grid connected.

I. INTRODUCTION
Installed wind power generation capacity is continuously increasing. Wind power is the most quickly growing electricity generation source, with a 20% annual growth rate over the past five years. Variable speed operation yields 20 to 30 percent more energy than fixed speed operation, reduces power fluctuations and improves reactive power supply [1]. A stable grid interface requires a reliable tool, PSAT/Matlab, for simulating and assessing the dynamics of grid connected variable speed wind turbine driven synchronous generators [2]. Many papers are dedicated to the dynamic model development of variable speed wind turbine driven synchronous generators [3,7]. Taking the IEEE three-machine, nine-bus system [4], the WPP system is attached radially through a transmission system and transformers at bus 1 in Fig. 2. The equivalent WPP has a set of 28 turbines connected in daisy-chain fashion within the collector system. The direct driven synchronous generator is operated at variable speed with the capability to control the voltage at the regulated bus, at constant power factor, or at constant reactive power. In this study, the wind turbines are set to operate at constant unity power factor. The 28 wind turbine generators have a combined rating of 100 MW. The impact of wind-generation technology on power system transient stability is also shown in [5,6].


II. MODELING OF VSWT DRIVEN SYNCHRONOUS GENERATOR

Fig. 1 presents a schematic diagram of the proposed VSWT driven synchronous generator connected to the grid.

A. Wind Turbine
The wind turbine is described by the following equations (1), (2) and (3):

\lambda = \frac{\omega_{M} R}{V_{W}}    (1)

P_{M} = \frac{1}{2}\,\rho\,\pi R^{2}\,C_{P}\,V_{W}^{3}    (2)

T_{M} = \frac{P_{M}}{\omega_{M}} = \frac{1}{2}\,\rho\,\pi R^{5}\,\frac{C_{P}}{\lambda^{3}}\,\omega_{M}^{2}    (3)

where λ = tip speed ratio

ωM=Mechanical speed of wind turbine [rad/s]

R= Blade radius [m]

VW=wind speed [m/s]

PM =Mechanical power from wind turbine [kW]

ρ =Air density [kg/m3]

CP= Power coefficient

TM= Mechanical torque from wind turbine [N · m]

The mechanical torque obtained from equation (3) is the input torque driving the synchronous generator. CP may be expressed as a function of the tip speed ratio (TSR) λ as given by equation (4).

C_{P} = (0.44 - 0.0167\beta)\,\sin\!\left[\frac{\pi(\lambda - 2)}{13 - 0.3\beta}\right] - 0.00184\,(\lambda - 2)\,\beta    (4)

where β is the blade pitch angle. For a fixed-pitch turbine, the value of β is set to a constant value of 4.5°.
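A minimal Python sketch of equations (1)-(4) for the turbine model is given below; the air density value is an assumed standard figure and the function names are illustrative, not from the paper.

```python
import math

RHO = 1.225          # air density [kg/m^3] (assumed standard value)
BETA = 4.5           # fixed blade pitch angle [degrees], as stated above

def power_coefficient(lam, beta=BETA):
    """Cp as a function of tip speed ratio and pitch angle, equation (4)."""
    return (0.44 - 0.0167 * beta) * math.sin(math.pi * (lam - 2) / (13 - 0.3 * beta)) \
           - 0.00184 * (lam - 2) * beta

def turbine_outputs(wind_speed, rotor_speed, radius):
    """Tip speed ratio, mechanical power and mechanical torque, equations (1)-(3).

    wind_speed in m/s, rotor_speed in rad/s, radius in m.
    """
    lam = rotor_speed * radius / wind_speed                            # (1)
    cp = power_coefficient(lam)
    p_m = 0.5 * RHO * math.pi * radius ** 2 * cp * wind_speed ** 3     # (2)
    t_m = p_m / rotor_speed                                            # (3)
    return lam, p_m, t_m
```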

Figure 1. Schematic diagram of the proposed VSWT driven synchronous generator connected to the grid

(Diagram labels: fixed pitch angle = 4.5°; wind → VSWT → synchronous generator (1.5 MVA, 600 V, 50 Hz) → transformer → rectifier → DC link (2.5 kV) → VSI → transformer (1.5 MVA, 2 kV/130 kV) → grid.)


B. Synchronous Generator
The synchronous generator is equipped with an exciter identical to the IEEE type 1 model [8]. The exciter helps the dc link to meet the adequate level of inverter output voltage as given in (5) below:

V_{dc} = \frac{2\sqrt{2}\,V_{AC\_RMS}}{D_{MAX}}    (5)

where V_AC_RMS is the RMS line-to-neutral voltage of the inverter and D_MAX is the maximum duty cycle. The exciter plays a role in meeting the dc-link voltage requirement.
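Equation (5) is simple enough to state directly in code; the following helper is only an illustration of the relation (the function name is not from the paper).

```python
import math

def dc_link_voltage(v_ac_rms, d_max):
    """Dc-link voltage required by the inverter, equation (5): Vdc = 2*sqrt(2)*V_AC_RMS / D_MAX."""
    return 2.0 * math.sqrt(2.0) * v_ac_rms / d_max
```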

C. Power Electronics Control
The power conversion system is composed of a six-diode rectifier and a six-MOSFET voltage source inverter (VSI), which is simple, cost-effective and widely used for industrial applications [9]. The VSI includes an LC harmonic filter at its terminal to reduce the harmonics it generates. The rectifier converts the ac power generated by the wind generator into dc power in an uncontrolled way; therefore, power control has to be implemented by the VSI. A current-controlled VSI can transfer the desired real and reactive power by generating an ac current with a desired reference waveform.

The maximum power available from the VSWT driven synchronous generator is given by (6):

P_{MAX} = \frac{1}{2}\,\rho\,\pi R^{5}\,\frac{C_{P}^{MAX}}{\lambda_{OPT}^{3}}\,\omega_{M}^{3}    (6)

The desired real power reference Pref values are calculated by (7)

P_{ref} = \eta_{M}\,P_{MAX}    (7)

The desired reactive power reference Qref values are calculated by (8)

Q_{ref} = P_{ref}\,\frac{\sqrt{1 - PF^{2}}}{PF}    (8)
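A minimal sketch of the reference calculations of equations (6)-(8) is shown below. The numerical values of CP_MAX, LAMBDA_OPT and the efficiency term are illustrative placeholders, not values taken from the paper; at the unity power factor used in this study, Qref evaluates to zero.

```python
import math

RHO = 1.225          # air density [kg/m^3] (assumed)
CP_MAX = 0.44        # illustrative peak power coefficient
LAMBDA_OPT = 7.0     # illustrative optimum tip speed ratio
PF = 1.0             # unity power factor, as used in this study
ETA = 0.95           # illustrative efficiency term for equation (7)

def power_references(rotor_speed, radius, pf=PF, eta=ETA):
    """Real and reactive power references from equations (6)-(8)."""
    p_max = 0.5 * RHO * math.pi * radius ** 5 * (CP_MAX / LAMBDA_OPT ** 3) * rotor_speed ** 3   # (6)
    p_ref = eta * p_max                                                                          # (7)
    q_ref = p_ref * math.sqrt(1.0 - pf ** 2) / pf                                                # (8)
    return p_ref, q_ref
```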

By using proportional-integral-derivative (PID) control gains, the errors between Pref and Pinv (the measured real power of the inverter) and between Qref and Qinv (the measured reactive power of the inverter) are processed into the q- and d-axis reference currents Iq ref and Id ref, respectively, which are transformed into the a-, b- and c-axis reference currents Ia ref, Ib ref and Ic ref by the dq-to-abc transformation block. Once the desired currents in the a-b-c frame are set, a pulse width modulation (PWM) technique is applied: the error signal is compared with a carrier signal and the switching signals are created for the six MOSFETs of the VSI.
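The dq-to-abc transformation block mentioned above is, in most implementations, the inverse Park transform; a minimal sketch is given below under the assumption of an amplitude-invariant transform with the d-axis aligned to the grid voltage angle. The exact convention used in the paper's model is not specified, so this is illustrative only.

```python
import math

def dq_to_abc(i_d, i_q, theta):
    """Inverse Park transform: map d-q reference currents to a-b-c reference currents.

    theta is the electrical angle of the grid voltage in radians (amplitude-invariant form assumed).
    """
    i_a = i_d * math.cos(theta) - i_q * math.sin(theta)
    i_b = i_d * math.cos(theta - 2 * math.pi / 3) - i_q * math.sin(theta - 2 * math.pi / 3)
    i_c = i_d * math.cos(theta + 2 * math.pi / 3) - i_q * math.sin(theta + 2 * math.pi / 3)
    return i_a, i_b, i_c
```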

III. ASSESSMENT OF TRANSIENT STABILITY
Analysis of the transient stability of power systems involves the computation of their nonlinear dynamic response to large disturbances, usually a transmission network fault, followed by the isolation of the faulted element by protective relaying. In these studies, two methods are used for assessing the dynamic performance of the power system following a large disturbance:

• Calculation of critical fault clearing times for faults on the power system; and

• Examination of the rotor angle deviation of generators following a large disturbance.

A. Critical Clearing Time
The critical clearing time (CCT) is the maximum time interval by which the fault must be cleared in order to preserve system stability.

Generating units may lose synchronism with the power system following a large disturbance and be disconnected by their own protection systems if a fault persists on the power system beyond a critical period. The critical period depends on a number of factors:

• The nature of the fault (e.g. a solid three-phase bus fault or a line-to-ground fault midway on a transmission circuit);

• The location of the fault with respect to the generation; and


• The capability and characteristics of the generating unit.

The critical clearing time for a generating unit and a particular fault is determined by carrying out a set of time-domain simulations in which the fault is allowed to persist on the power system for increasing amounts of time before being removed, as sketched below.
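A minimal sketch of this procedure as a bracketing search is shown below; is_stable is a stand-in for a complete PSAT/Matlab-style time-domain simulation and is an assumption made purely for illustration.

```python
def critical_clearing_time(is_stable, t_low=0.0, t_high=1.0, tol=1e-3):
    """Bracket-and-bisect search for the critical clearing time (CCT) in seconds.

    is_stable(t) stands in for a full time-domain simulation that applies the fault,
    clears it after t seconds, and reports whether all machines stay in synchronism.
    Assumes the system is stable at t_low and unstable at t_high.
    """
    while t_high - t_low > tol:
        t_mid = 0.5 * (t_low + t_high)
        if is_stable(t_mid):
            t_low = t_mid       # a fault cleared at t_mid is still survivable
        else:
            t_high = t_mid
    return t_low
```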

B. Rotor Angle Deviation
Rotor angle deviation assessment of wind power generators is one of the main issues in power system security and operation.

IV. SIMULATION RESULTS AND DISCUSSION

Fig. 2 represents the power system network with a fault near bus-7, with only conventional synchronous generators.

Figure 2. Power system network used in the study (IEEE 9-bus system with a fault near bus-7, with only conventional synchronous generators)

• Bus-1 – 100 MVA, 16.5 kV
• Bus-2 – 100 MVA, 18 kV
• Bus-3 – 100 MVA, 13.8 kV
• Tr-1 – 16.5 kV/230 kV
• Tr-2 – 18 kV/230 kV
• Tr-3 – 13.8 kV/230 kV
• Load at bus 5 – 125 MW, 50 MVar
• Load at bus 6 – 90 MW, 30 MVar
• Load at bus 8 – 100 MW, 30 MVar

The capacity of the VSWT driven synchronous generator is chosen to be 1.5 MVA with a real power of 1.5 MW. The rated speed of the rotor is chosen to be 40 rpm. The rated wind speed is 15 m/s; the cut-in and cut-out speeds are 4 m/s and 23 m/s respectively. The switching frequency of the grid interface inverter is 1.040 kHz. The capacitor value of the grid interface rectifier is 2500 µF and the dc link voltage is 2.5 kV. The generated voltage of the synchronous generator is 0.6 kV. The transformer rating on the grid-connected side is 2 kV/130 kV. The per-unit voltage magnitude at the primary of the transformer is 0.99 p.u. The grid voltage is 130 kV. Figures 3-8 represent the simulation waveforms of the modeled VSWT driven synchronous generator.

Figure 3. Simulation waveform of real power (in MW) of the variable speed wind turbine versus time (in seconds)

Figure 4. Simulation waveform of reactive power (in MVar) of the variable speed wind turbine versus time (in seconds)


Figure 5. Simulation waveform of generated phase voltage Va (in p.u.) of the variable speed wind turbine driven synchronous generator versus time (in seconds)

Figure 6. Simulation waveform of the 2.5 kV dc link voltage (in volts) of the variable speed wind turbine driven synchronous generator versus time (in seconds)

Figure 7. Simulation waveform of real power (1.5 MW) on the grid side, in p.u., of the variable speed wind turbine driven synchronous generator

Figure 8. Simulation waveform of injected 0.25 MVAR reactive power in grid side in p.u of variable speed wind turbine driven synchronous generator

Figures 9-12 represent the voltage, real power, reactive power and rotor angle deviation for a line fault near bus 7 with only conventional synchronous generators.

Figure 9. Voltages for line fault near bus 7 with only conventional synchronous generators

Figure 10. Real power for line fault near bus 7 with only conventional synchronous generators

Figure 11. Reactive power for line fault near bus 7 with only conventional synchronous generators

Figure 12. Rotor angle deviation for line fault near bus 7 with only conventional synchronous generators

A WPP consisting of 28 wind turbine generators, each of capacity 1.5 MVA, 600 V, 50 Hz, is connected at bus-1. Figures 13-16 represent the voltage, real power, reactive power and rotor angle deviation for a line fault near bus 7 with the conventional synchronous generators replaced by the wind turbine generators.

Figure 13. Voltages for line fault near bus 7 with a WPP at bus 1

Figure 14. Real power for line fault near bus 7 with a WPP at bus 1

Figure 15. Reactive power for line fault near bus 7 with a WPP at bus 1

Figure 16. Rotor angle deviation for line fault near bus 7 with a WPP at bus 1

The results obtained show that, for the IEEE 9-bus system considered in this paper, the critical fault clearing time of the generator increased by three cycles when the modeled variable speed wind turbine driven synchronous generator was connected at one of the generation buses.

The rotor angle deviation was reduced by nearly 30° when the modeled variable speed wind turbine driven synchronous generator was connected at one of the generation buses.

V. CONCLUSION
A dynamic model of a VSWT driven synchronous generator with a power electronic interface was proposed for computer simulation studies and was implemented in a reliable power system transient analysis program. This paper has mainly focused on the modeling and the assessment of rotor angular stability and critical clearing time (CCT). This was done by observing the behavior of the test system with only conventional synchronous generators and then connecting the modeled VSWT driven synchronous generator to the test system when a three-phase fault is applied. Comprehensive impact studies are necessary before adding wind turbines to real networks. In addition, users or system designers who plan to install or design wind turbines in networks must ensure that their systems perform well while meeting the requirements for grid interface. The work illustrated in this study may provide a reliable tool for evaluating the performance of VSWT driven synchronous generators and their impact on power networks in terms of dynamic behavior, and can therefore serve as a preliminary analysis for actual applications. The fault tests carried out show that the integration of this model can enhance transient stability.

REFERENCES

[1] "20% Wind Energy by 2030 – Increasing Wind Energy's Contribution to U.S. Electricity Supply," U.S. Department of Energy, May 2008, DOE/GO-102008-2567, http://www1.eere.energy.gov/windandhydro/pdfs/41869.pdf
[2] F. Milano, "PSAT, Matlab-based Power System Analysis Toolbox," 2002, available at http://thunderbox.uwaterloo.ca/_fmilano.
[3] Slootweg, H., "Wind Power: Modeling and Impact on Power System Dynamics," Ph.D. Thesis, Technical University Delft, Delft, the Netherlands, 2003.


[4] Sauer, P. W.; Pai, M. A., "Power System Dynamics and Stability," ISBN 1-58874-673-9, Stipes Publishing L.L.C., Champaign, IL, 2006.
[5] E. Muljadi, T. B. Nguyen, M. A. Pai, "Impact of Wind Power Plants on Voltage and Transient Stability of Power Systems," IEEE Energy 2030, Atlanta, Georgia, USA, 17-18 November 2008.
[6] Samarasinghe, C.; Ancell, G., "Effects of large scale wind generation on transient stability of the New Zealand power system," IEEE Power and Energy Society General Meeting, July 20-24, 2008, Pittsburgh, PA.
[7] Petersson, A.; Thiringer, T.; Harnefors, L.; Petru, T., "Modeling and Experimental Verification of Grid Interaction of a DFIG Wind Turbine," IEEE Transactions on Energy Conversion, Vol. 20, Issue 4, December 2005, pp. 878-886.
[8] J. G. Slootweg, S. W. H. de Haan, H. Polinder, and W. L. Kling, "General model for representing variable speed wind turbines in power system dynamics simulations," IEEE Trans. Power Systems, vol. 18, no. 1, pp. 144-151, Feb. 2003.
[9] Tande, J.O.G.; Muljadi, E.; Carlson, O.; Pierik, J.; Estanqueiro, A.; Sørensen, P.; O'Malley, M.; Mullane, A.; Anaya-Lara, O.; Lemstrom, B., "Dynamic models of wind farms for power system studies - status by IEA Wind R&D Annex 21," European Wind Energy Conference & Exhibition, November 22-25, 2004, London, U.K.
