
ET Journal of Engineering & Technology, Autumn 2010

Performance Evaluation of Information Systems

C. S. Yadav¹ and Raghuraj Singh²

Abstract

This paper discusses the performance evaluation of Information Systems and its utility during selection studies of all the components of the Information System; improvement studies, carried out to improve performance and decrease costs; and design studies, when an Information System is being designed and different implementation alternatives are being evaluated [1, 2].

Benchmarking is the only tool available for studying the performance of existing information systems.

The performance metrics of information systems are described in this paper.

The term benchmarking is widely used in a general context as a systematic method by which organizations measure their deliverables against the best in the industry. The information system (IS) development activity in large organizations is a source of increasing cost and concern to management. Information system development projects are often over budget, late, costly to maintain and not done to the satisfaction of the requesting user [1, 2].

Professionals in the computer industry have used the term benchmarking since the early 1960s [3]. Initially, benchmarking meant comparing the processing power of products produced by competing manufacturers within a realistic business or scientific environment.

Key words: Performance metrics, maintainability, cyclomatic complexity, configuration management and customer satisfaction.

¹Department of Computer Science and Engineering, NIET, Gr. Noida, India, [email protected]

²Department of Computer Science and Engineering, HBTI, Kanpur, India, [email protected]


Introduction

Tracking cost and schedule data typically requires the implementation and use of a project management system devoted to the task. Developers need to record their time spent, and such actual data must be matched against previously budgeted milestones in order to generate the appropriate management information. There are four factors for performance evaluation in IS development: initial development cost, maintainability, timeliness and effectiveness. In an IS development context the costs and benefits have both long-term and short-term components. In the short term, the emphasis is on initial systems development costs, most prominently labor costs. However, there are also longer-term maintenance costs associated with each system. If a system is delivered on time, this corresponds to the notion of timeliness. The value of the system is due to the provision of user-desirable functionality which improves organizational performance. This is the notion of effectiveness [1].

Performance Metrics of IS

The process of software development is influenced by the customer, business and development environments. Once the size of the Information System (FPC or LOC) is estimated, the next important parameter is the cost of development. The costs to be considered are: personnel cost, hardware cost, software cost, communication, travel and stay costs, training cost, outsourcing cost, marketing cost and administrative cost. The COCOMO cost model is a method to first estimate effort in man-months and then, from that, the cost of development [5, 6].
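As a rough illustration, the sketch below applies the basic COCOMO effort equation, Effort = a × (KLOC)^b, with the standard organic-mode constants (a = 2.4, b = 1.05); the project size and cost-per-man-month figures are assumed examples, not values from this paper.

# Basic COCOMO effort and cost estimate (organic mode), a minimal sketch.
# All input figures below are assumed example values.

def cocomo_effort(kloc, a=2.4, b=1.05):
    """Estimated effort in man-months for a project of `kloc` thousand lines of code."""
    return a * kloc ** b

if __name__ == "__main__":
    size_kloc = 32.0                       # assumed size of the information system in KLOC
    effort_pm = cocomo_effort(size_kloc)   # effort in man-months
    cost_per_pm = 5000.0                   # assumed fully loaded cost per man-month
    print(f"Estimated effort: {effort_pm:.1f} man-months")
    print(f"Estimated development cost: {effort_pm * cost_per_pm:,.0f} currency units")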

It is estimated that maintenance cost is about 40 to 60% of the cost incurred in the information system development life cycle. The cost depends on the effort required to maintain the information system. COCOMO II computes maintenance effort using a size variable computed as follows, as suggested by Boehm (1995).

Size = ASLOC (AA + SU + 0.4 DM + 0.3 CM + 0.3 IM) / 100

where

ASLOC = number of source code lines to be adapted

DM = % of design to be modified

CM = % of code to be modified

IM = % of external code to be integrated

SU = rating on software understanding

AA = rating on assessment and assimilation
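A minimal sketch of this size computation is given below; the variable names mirror the definitions above, and all input values are assumed example figures rather than data from the paper.

# COCOMO II adapted-size computation (Boehm 1995), a minimal sketch.
# The parameter values below are assumed purely for illustration.

def adapted_size(asloc, aa, su, dm, cm, im):
    """Size = ASLOC (AA + SU + 0.4 DM + 0.3 CM + 0.3 IM) / 100."""
    return asloc * (aa + su + 0.4 * dm + 0.3 * cm + 0.3 * im) / 100.0

if __name__ == "__main__":
    size = adapted_size(
        asloc=20000,  # number of source code lines to be adapted
        aa=4,         # rating on assessment and assimilation
        su=20,        # rating on software understanding
        dm=15,        # % of design to be modified
        cm=25,        # % of code to be modified
        im=30,        # % of external code to be integrated
    )
    print(f"Equivalent size for maintenance effort: {size:.0f} SLOC")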

Maintainability of the information system depends on the quality of code and also on specifications, design, reviews, tests and documentation [6]. Maintainability further depends on the quality and skill set of the maintenance team, an external factor in relation to the software. Another important factor is the structural complexity of the source code, an internal factor in relation to the software. Structural complexity is measured by the 'cyclomatic number'. A low number indicates low structural complexity.

Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. Cyclomatic complexity defines the number of independent paths in the basis set of a program and provides an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.

Cyclomatic complexity, V(G), for a flow graph G is defined as

V(G) = E - N + 2

where E is the number of flow graph edges and N is the number of flow graph nodes [5].
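As an illustration, the sketch below applies V(G) = E - N + 2 to a small, assumed control-flow graph (an if/else inside a loop); the graph itself is not taken from the paper.

# Cyclomatic complexity V(G) = E - N + 2 for a control-flow graph, a minimal sketch.
# The flow graph below is an assumed example: a branch at node 2 and a loop at node 5.

def cyclomatic_complexity(edges):
    """Compute V(G) from a list of directed edges given as (from_node, to_node) pairs."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

if __name__ == "__main__":
    flow_graph = [
        (1, 2), (2, 3), (2, 4),  # decision at node 2
        (3, 5), (4, 5),          # branches rejoin at node 5
        (5, 2), (5, 6),          # loop back to node 2 or exit to node 6
    ]
    # 7 edges - 6 nodes + 2 = 3 independent paths
    print("V(G) =", cyclomatic_complexity(flow_graph))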

Maintainability is measured by Mean Time to Repair (MTR), the mean time between the time at which a problem is reported and the time the problem is resolved, with updates in documentation. A record of the various data inputs is kept to compare the MTR. A good, professional maintenance team attempts to reduce MTR over a period of time by providing knowledge of problems, failures, solutions, costs, and reasons for failures to the development team. A close liaison between the maintenance team and the development team improves the quality of information system development and reduces MTR over a period of time [5, 6].
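A minimal sketch of computing MTR from such problem records follows; the reported and resolved timestamps are assumed example data, not figures from the paper.

# Mean Time to Repair (MTR) from problem records, a minimal sketch.
# Each record holds (reported timestamp, resolved timestamp); the data is assumed.

from datetime import datetime

records = [
    ("2010-03-01 09:00", "2010-03-02 17:00"),
    ("2010-03-05 10:30", "2010-03-05 15:30"),
    ("2010-03-09 08:00", "2010-03-11 12:00"),
]

fmt = "%Y-%m-%d %H:%M"
repair_hours = [
    (datetime.strptime(done, fmt) - datetime.strptime(reported, fmt)).total_seconds() / 3600
    for reported, done in records
]
mtr = sum(repair_hours) / len(repair_hours)
print(f"MTR: {mtr:.1f} hours over {len(records)} problems")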

Configuration management is very important in maintenance. Keeping track of changes and their effects on the rest of the Information System is a difficult but critical task. The maintenance team follows change management systems to ensure that only the right changes are made and information system integrity is maintained.

Change management control is a key step in configuration management. The following are the factors in change management control:

Timing: When is the change made?

Change agent: Who made the change?

Components: Which components of the IS were changed?

Process: Has the change been effected properly through the change management process?

Change: Has the change occurred due to system failure, enhancement or extension?

Nature of maintenance: Which maintenance strategy is used?

Update: Has the knowledge database been updated for future benefits?

There are six factors that help to assess the impact of the change on the information system:

1. Risk of the change
2. Efforts to effect the change
3. Benefits of the change
4. Resource requirement and availability
5. Scheduling the change for its effectiveness
6. Cost of the change

Maintenance is a team activity where the team is a cohesive work group working in collaboration to ensure the desired quality and to keep that quality from declining. People skills are important in maintenance. Maintenance involves technical as well as people-related problems.

When creating an information system schedule, the planner begins with a set of tasks. Effort, duration and start date are inputs for each task. A timeline chart or Gantt chart can be developed for the entire information system development life cycle.

Earned value is a measure of progress. It enables us to assess the "percent of completeness" of a project using quantitative analysis rather than relying on a gut feeling.

To determine the earned value, the following steps are performed:

1. The budgeted cost of work scheduled (BCWS) is determined for each work task represented in the schedule. During estimation, the work (in person-hours or person-days) of each software engineering task is planned. Hence, BCWSi is the effort planned for work task i. The value of BCWS is the sum of the BCWSi values for all work tasks that should have been completed by that point in time on the project schedule.

2. The BCWS values for all work tasks are summed to derive the budget at completion, BAC. Hence, BAC = Σ (BCWSk) for all tasks k.

3. Next, the value for the budgeted cost of work performed (BCWP) is computed.

Schedule performance index, SPI = BCWP / BCWS

Schedule variance, SV = BCWP - BCWS

SPI is an indication of the efficiency with which the project is utilizing scheduled resources. An SPI value close to 1.0 indicates efficient execution of the project schedule. SV is simply an absolute indication of variance from the planned schedule.

Percent scheduled for completion = BCWS / BAC provides an indication of the percentage of work that should have been completed by time t.

Percent complete = BCWP / BAC provides a quantitative indication of the percent of completeness of the project at a given point in time, t.


It is also possible to compute the actual cost of work performed, ACWP. The value of ACWP is the sum of the effort actually expended on work tasks that have been completed by a point in time on the project schedule. It is then possible to compute:

Cost performance index, CPI = BCWP / ACWP

Cost variance, CV = BCWP - ACWP

A CPI value close to 1.0 provides a strong indication that the project is within its defined budget. CV is an absolute indication of cost savings or shortfall at a particular stage of a project [5].
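Pulling the formulas above together, the following sketch computes the schedule and cost indicators for an assumed set of work tasks; the planned and actual effort figures are illustrative only.

# Earned value indicators (SPI, SV, CPI, CV, percent complete), a minimal sketch.
# Each tuple: (planned person-days, actual person-days spent, due by now?, completed?)
tasks = [
    (10, 11, True,  True),   # task 1
    (8,  7,  True,  True),   # task 2
    (12, 14, True,  True),   # task 3
    (6,  0,  True,  False),  # task 4: scheduled by now but not yet finished
    (9,  0,  False, False),  # task 5: scheduled for later
]

bac  = sum(p for p, _, _, _ in tasks)             # budget at completion (all tasks)
bcws = sum(p for p, _, due, _ in tasks if due)    # budgeted cost of work scheduled to date
bcwp = sum(p for p, _, _, done in tasks if done)  # budgeted cost of work performed
acwp = sum(a for _, a, _, done in tasks if done)  # actual cost of work performed

spi = bcwp / bcws   # schedule performance index
sv  = bcwp - bcws   # schedule variance
cpi = bcwp / acwp   # cost performance index
cv  = bcwp - acwp   # cost variance

print(f"SPI = {spi:.2f}, SV = {sv}, CPI = {cpi:.2f}, CV = {cv}")
print(f"Percent scheduled for completion = {bcws / bac:.0%}")
print(f"Percent complete = {bcwp / bac:.0%}")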

Effectiveness

Customer satisfaction, a business term, is a measure of how products and services supplied by a company meet or surpass customer expectations.

In a competitive marketplace where businesses compete for customers, customer satisfaction is seen as a key differentiator and increasingly has become a key element of business strategy.

The term customer delight means that the customer received something more than he bargained for; for example, when you stayed in a hotel, you found that your breakfast was free. Many airlines now give tiny presents as free gifts when you travel on their flights. This is to "delight" you as a customer. Keep in mind, however, that any delight factor, over a period of time, turns into a product specification.

If the functionality of an information system is in accordance with user requirements, the information system is effective.

Functional testing of an information system or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Functional testing falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic [www.wikipedia.com].

In order to test the satisfaction level of users, two tests are proposed:

• Usability testing

• User satisfaction testing


Usability testing measures the ease of use and comfort that users have while using the information system. Here the major focus should be on the design and development of the UI/GUI. Usability tests are conducted using use cases. The process of conducting a usability test is simple, like reviews and walkthroughs, carried out along with the users on typical use cases. Also, users' experience of using the information system is collected to decide on usability.

The user satisfaction test is another test that measures satisfaction through attributes such as usability, functions, features and cost. Usability can be measured by defining measurable goals that confirm a high level of usability. Table 1.0 below summarizes the responses of 100 users on 10 parameters of satisfaction and on the overall information system.

Parameters           |  1  |  2  |  3  |  4  |  5
---------------------+-----+-----+-----+-----+-----
Ease of Use          |     |     |     |     |
Ease of Learning     |     |     |     |     |
Scope Coverage       |     |     |     |     |
SRS Coverage         |     |     |     |     |
Incidence of Change  |     |     |     |     |
Reliability          |     |     |     |     |
Cost                 |     |     |     |     |
Performance          |     |     |     |     |
Security             |     |     |     |     |
UI quality           |     |     |     |     |
Information System   |     |     |     |     |

A tick (√) is placed in the column of the score on satisfaction given to each parameter: 1 = Unacceptable, 2 = Low, 3 = Medium, 4 = High, 5 = Highly Acceptable.

Table 1.0: User Satisfaction Test Template

User Satisfaction Index (USI)

USI = Actual Score / Total Highest Score
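A minimal sketch of computing the USI from such responses is given below; the individual scores are assumed example data for the parameters of Table 1.0, not survey results from the paper.

# User Satisfaction Index: USI = actual score / total highest score, a minimal sketch.
# The 1-5 scores below are assumed example ratings for the Table 1.0 parameters.

responses = {
    "Ease of Use": 4, "Ease of Learning": 3, "Scope Coverage": 4,
    "SRS Coverage": 5, "Incidence of Change": 3, "Reliability": 4,
    "Cost": 3, "Performance": 4, "Security": 5, "UI quality": 4,
    "Information System": 4,
}

actual_score = sum(responses.values())
highest_score = 5 * len(responses)   # every parameter rated "Highly Acceptable"
usi = actual_score / highest_score
print(f"USI = {actual_score}/{highest_score} = {usi:.2f}")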

User satisfaction testing can be conducted during the information system development life cycle by exposing users to the SRS, design and architecture.

In the end, performance testing is done to uncover performance problems that can result from a lack of server-side resources, inappropriate bandwidth, inadequate database capabilities, faulty or weak operating system capabilities, poorly designed web application functionality and other hardware or software issues that can lead to degraded client-server performance. Here the intent is: (i) to understand how the system responds to loading (i.e. number of users, number of transactions or overall data volume) and (ii) to collect metrics that will lead to design modifications to improve performance. Two different performance tests are conducted: load testing, in which real-world loading is tested at a variety of load levels and in a variety of combinations, and stress testing, in which loading is increased to the breaking point to determine how much capability the web application environment can handle.
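A minimal load-testing sketch along these lines is shown below; the target URL, user counts and request counts are all assumed, and a real evaluation would normally use a dedicated load-testing tool.

# Load testing an information system endpoint, a minimal sketch.
# The URL and load levels are assumed; the point is only to show how response
# times can be measured under increasing numbers of concurrent users.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://example.com/"   # assumed endpoint of the system under test

def one_request(_):
    start = time.time()
    with urlopen(TARGET_URL, timeout=10) as resp:
        resp.read()
    return time.time() - start       # response time in seconds

def load_test(users, requests_per_user):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(one_request, range(users * requests_per_user)))
    return sum(times) / len(times), max(times)

if __name__ == "__main__":
    for users in (1, 5, 10):         # increase the load level by level
        avg, worst = load_test(users, requests_per_user=5)
        print(f"{users:>2} users: avg {avg:.2f}s, worst {worst:.2f}s")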

Conclusion

This paper has suggested several performance metrics for the evaluation of information systems. It provides a theoretically grounded formal model which defines criteria that predict the choice of performance metrics in information systems organizations. For future research, a formal empirical validation of the proposed set of performance evaluation metrics could be performed.


References

1. Rajiv D. Banker, Chris F. Kemerer, "Performance Evaluation Metrics for Information Systems Development: A Principal-Agent Model", The Institute of Management Sciences.

2. Domenico Ferrari, "Computer Systems Performance Evaluation", Prentice Hall Inc., 1978.

3. Besterfield D. H. et al., "Total Quality Management", Prentice Hall of India, Third Edition, 2007, pp. 207-222.

4. Raj Jain, "The Art of Computer Systems Performance Analysis", John Wiley and Sons, 1991.

5. Roger S. Pressman, "Software Engineering: A Practitioner's Approach", Sixth Edition, McGraw-Hill International Edition.

6. Waman S. Jawadekar, "Software Engineering: Principles and Practice", The McGraw-Hill Companies.
