
Notes On

System Analysis and Design

For Master of Computer Application

Semester-I

According to University of Mumbai Syllabus

By: Pravin Prabhakar Chalke [email protected]


Index

1. Introduction

a. Systems & computer based systems, types of information system

b. System analysis & design

c. Role, task & Attribute of the system analyst

2. Approaches to system development

a. SDLC

b. Explanation of the phases

c. Different models their advantages and disadvantages

i. Waterfall approach

ii. Iterative approach

iii. Extreme programming

iv. RAD model

v. Unified process

vi. Evolutionary software process model

1. Incremental model

2. Spiral model

vii. Concurrent development model

3. Analysis: investigating system requirements

a. Activities of the analysis phase

b. Fact finding methods

i. Review existing reports, forms and procedure descriptions

ii. Conduct interviews

iii. Observe & document business processes

iv. Build prototypes

v. Questionnaires

vi. Conduct JAD sessions

c. Validate the requirements

i. Structured walkthroughs

4. Feasibility Analysis

a. Feasibility study and cost estimates

b. Cost benefit analysis

c. Identification of list of deliverables

5. Modeling system requirements

a. Data flow diagrams logical and physical

b. Structured English

c. Decision tables

d. Decision trees

e. Entity relationship diagram

6. Design

a. Design phase activities


b. Develop system flowchart

c. Structure chart

i. Transaction Analysis

ii. Transform analysis

d. Software design and documentation tools

i. HIPO chart

ii. Warnier-Orr diagram

e. Designing databases

i. Entities

ii. Relationships

iii. Attributes

iv. Normalization

7. Designing input, output & user interface

a. Input design

b. Output design

c. User interface design

8. Testing

a. Strategic approach to software testing.

b. Test strategies for conventional software.

c. Test strategies for object-oriented software

d. Validation testing

e. System testing

f. Debugging

9. Implementation & maintenance

a. Activities of the implementation & support phase

10. Documentation

a. Use of CASE tools

b. Documentation Importance

c. Types of Documentation


Chapter 1- Introduction

System

• A system is a group of elements working together to achieve a common goal.

• These elements are related to each other in the work that they carry out.

• They communicate with each other in order to coordinate and control the delivery of the

total work of the system.

Further, every system is itself part of a larger system.

Need of Learning Systems Concepts

• To identify the objective of the system - what the system is aimed at achieving

• To identify the components of the system

• To identify the role each component performs in helping the overall system achieve its common goal

• To identify how these components communicate with each other to work as a team.

• To relate the knowledge of this system to other similar systems and thereby gain knowledge of those systems faster

• To identify errors in new systems much faster

Characteristics of a System

As per the definition of a system, we have the following characteristics:

• The system works to achieve a common goal. E.g., the goal of a traffic system is to offer road use with high safety and speed to all road users equally, in a fair manner.

• The system has several components working together, each contributing its respective part to meet the overall objective of the system. Only when they all work together is the system said to be working. E.g., a traffic system has vehicles, drivers, signals, traffic police, etc. as elements of the system.

• If any of the components is missing or not working as desired, it affects the performance of the system as a whole. Thus one can identify all the necessary tasks that the components together are required to perform so that the entire system meets its goal.

• The system components communicate with each other, and one can identify what they communicate (DATA), when (EVENT), and the purpose of the communication, such as coordinating the work or controlling the work (see the sketch below).
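To make the idea concrete, here is a minimal, illustrative Python sketch (not from the syllabus text; the TrafficSignal and Vehicle names are invented) of components exchanging DATA on an EVENT to serve a common goal:

    # Illustrative sketch only: components of a system communicate data on
    # events in order to coordinate and achieve a common goal.
    class TrafficSignal:
        def __init__(self):
            self.state = "RED"

        def on_timer(self):                    # EVENT: a timer tick
            self.state = "GREEN" if self.state == "RED" else "RED"
            return self.state                  # DATA: the new signal state

    class Vehicle:
        def __init__(self, name):
            self.name = name
            self.moving = False

        def on_signal(self, state):            # EVENT: signal change, carrying DATA
            self.moving = (state == "GREEN")

    signal = TrafficSignal()
    vehicles = [Vehicle("car-1"), Vehicle("car-2")]
    state = signal.on_timer()                  # the signal coordinates the vehicles,
    for v in vehicles:                         # serving the goal of safe, fair
        v.on_signal(state)                     # movement for all road users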

Computer-Based System

The word system is possibly the most overused and abused term in the technical lexicon.


We speak of political systems and educational systems, of avionics systems and manufacturing

systems, of banking systems and subway systems. The word tells us little. We use the adjective

describing system to understand the context in which the word is used. Webster's Dictionary

defines system in the following way:

1. a set or arrangement of things so related as to form a unity or organic whole;

2. a set of facts, principles, rules, etc., classified and arranged in an orderly form so as to

show a logical plan linking the various parts;

3. a method or plan of classification or arrangement;

4. an established way of doing something; method; procedure . . .

Five additional definitions are provided in the dictionary, yet no precise synonym is

suggested. System is a special word. Borrowing from Webster's definition, we define a computer-based system as: a set or arrangement of elements that are organized to accomplish some predefined goal by processing information.

The goal may be to support some business function or to develop a product that can

be sold to generate business revenue. To accomplish the goal, a computer-based system

makes use of a variety of system elements:

• Software. Computer programs, data structures, and related documentation that serve to

effect the logical method, procedure, or control that is required.

• Hardware. Electronic devices that provide computing capability, the interconnectivity

devices (e.g., network switches, telecommunications devices) that enable the flow of

data, and electromechanical devices (e.g., sensors, motors, pumps) that provide external

world function.

• People. Users and operators of hardware and software.

• Database. A large, organized collection of information that is accessed via software.

• Documentation. Descriptive information (e.g., hardcopy manuals, on-line help files, Web

sites) that portrays the use and/or operation of the system.

• Procedures. The steps that define the specific use of each system element or the

procedural context in which the system resides.

The elements combine in a variety of ways to transform information. For example, a

marketing department transforms raw sales data into a profile of the typical purchaser of a

product; a robot transforms a command file containing specific instructions into a set of control

signals that cause some specific physical action. Creating an information system to assist the

marketing department and control software to support the robot both require system

engineering.

One complicating characteristic of computer-based systems is that the elements constituting

one system may also represent one macro element of a still larger system. The macro element is

a computer-based system that is one part of a larger computer-based system. As an example,


we consider a "factory automation system" that is essentially a hierarchy of systems. At the

lowest level of the hierarchy we have a numerical control machine, robots, and data entry

devices. Each is a computer-based system in its own right. The elements of the numerical control

machine include electronic and electromechanical hardware (e.g., processor and memory,

motors, sensors), software (for communications, machine control, interpolation), people (the

machine operator), a database (the stored NC program), documentation, and procedures.

A similar decomposition could be applied to the robot and data entry device. Each is a

computer-based system.

At the next level in the hierarchy, a manufacturing cell is defined. The manufacturing

cell is a computer-based system that may have elements of its own (e.g., computers, mechanical

fixtures) and also integrates the macro elements that we have called numerical control

machine, robot, and data entry device.

To summarize, the manufacturing cell and its macro elements each are composed of

system elements with the generic labels: software, hardware, people, database, procedures,

and documentation. In some cases, macro elements may share a generic element. For example,

the robot and the NC machine both might be managed by a single operator (the people

element). In other cases, generic elements are exclusive to one system.

The role of the system engineer is to define the elements for a specific computer-based


system in the context of the overall hierarchy of systems (macro elements). In the sections that

follow, we examine the tasks that constitute computer system engineering.

Classification of Systems

There are various types of systems. To have a good understanding of these systems, they can be categorized in many ways. Some of the categories are open or closed, physical or abstract, and natural or man-made information systems, which are explained next.

Physical or Abstract System

Physical systems are tangible entities that we can feel and touch. These may be static or

dynamic in nature. For example, take a computer center. Desks and chairs are the static parts,

which assist in the working of the center. Static parts don't change. The dynamic systems are

constantly changing. Computer systems are dynamic systems. Programs, data, and applications

can change according to the user's needs.

Abstract systems are conceptual. These are not physical entities. They may be formulas, representations, or models of a real system.

Open or Closed System

Systems interact with their environment to achieve their targets. Things that are not part

of the system are environmental elements for the system. Depending upon the interaction with

the environment, systems can be divided into two categories, open and closed.

Open systems: Systems that interact with their environment. Practically most of the

systems are open systems. An open system has many interfaces with its environment. It can also

adapt to changing environmental conditions. It receives inputs from, and delivers outputs to, the outside of the system. An information system is an example of this category.

Closed systems: Systems that don't interact with their environment. Closed systems exist

in concept only.

Man-Made Information Systems

The main purpose of information systems is to manage data for a particular

organization. Maintaining files and producing information and reports are a few of its functions. An information system produces customized information depending upon the needs of the organization. These are usually classified as formal, informal, or computer-based.

Formal Information Systems: It deals with the flow of information from top management to

lower management. Information flows in the form of memos, instructions, etc. But feedback can

be given from lower authorities to top management.

Informal Information Systems: Informal systems are employee-based. These are made to solve day-to-day work-related problems.

Computer-Based Information Systems: This class of systems depends on the use of computers for managing business applications.


Information System

In business we mainly deal with information systems, so we'll explore these systems further.

We will be talking about different types of information systems prevalent in the industry.

Information systems deal with the data of organizations. The purposes of an information system are to process input, maintain data, produce reports, handle queries, handle on-line transactions, and generate other output. These systems maintain huge databases, handle hundreds of queries, etc. The transformation of data into information is the primary function of an information system.

These types of systems depend upon computers for performing their objectives. A

computer based business system involves six interdependent elements. These are hardware

(machines), software, people (programmers, managers or users), procedures, data, and

information (processed data). All six elements interact to convert data into information. System

analysis relies heavily upon computers to solve problems. For these types of systems, the analyst

should have a sound understanding of computer technologies.

In the following section, we explore the three most important information systems, namely

transaction processing system, management information system and decision support system,

and examine how computers assist in maintaining Information systems.

Types of Information System

The information system aims at providing detailed information on a timely basis

throughout the organisation so that the top management can take proper and effective

decisions. The information system cuts across departmental lines and helps achieve overall optimisation for the organisation.

The organisation is viewed as a network of inter-related sub-systems rather than as a

hierarchy of manager-subordinate relationship. The information system can be of two types:

• Integrated information system

• Distributed information system

(a) Integrated Information System

The integrated information system is based on the presumption that the data and information

are used by more than one system in the organisation and accordingly, the data and

information are channeled into a reservoir or database. All the data processing and provision of

information is derived and taken from this common database. The development of an

integrated information system requires a long-term overall plan, commitment from

management at all levels, highly technical personnel, availability of sufficient funds, and

sophisticated technology. It also requires adequate standby facilities, without which the system


is doomed to failure. Because of its integrated component, the modification to the system is

quite difficult and the system development takes a fairly long time.

(b) Distributed Information System

There is a view that the development of an integrated information system is beset with several practical problems and is, therefore, not feasible. This view has been reinforced by the

failure of integrated systems in various large organisations. The concept of a distributed

information system has emerged as an alternative to the integrated information system. In the

distributed information system, there are information sub-systems that form islands of

information systems. The distributed information system aims at establishing relatively

independent sub-systems, which are, however, connected through communication interfaces.

Following are the advantages of the distributed information system:

• The processing equipment as well as database are dispersed, bringing them closer to the

users.

• It does not involve huge initial investment as is required in an integrated system.

• It is more flexible and changes can be easily taken care of as per user’s requirements.

• The problem of data security and control can be handled more easily than in an integrated

system.

• There is no need of standby facilities because equipment breakdowns are not as severe as

in an integrated system.

The drawbacks of the distributed system are:

• It does not eliminate duplication of activities and redundancy in maintaining files.

• Coordination of activities becomes a problem.

• It needs more channels of communication than in an integrated system.

It is possible to consider several alternative approaches, which fall between the two

extremes - a completely integrated information system and a totally independent sub-system. The degree of integration required for developing an information system must be studied carefully. It depends on how the management wants to manage the organisation, and the level of

diversity within the organisation.

(c) Transaction Processing Systems

TPS processes the business transactions of the organization. A transaction can be any activity

of the organization. Transactions differ from organization to organization. For example, take a

railway reservation system. Booking, canceling, etc are all transactions. Any query made to it is

a transaction. However, there are some transactions which are common to almost all organizations, such as adding new employees, maintaining their leave status, maintaining employees' accounts, etc.

A TPS provides high-speed, accurate processing and record keeping of basic operational processes. These include calculation, storage, and retrieval.


Transaction processing systems provide speed and accuracy, and can be programmed to follow the routine functions of the organization.

(d) Decision Support Systems

These systems assist higher management in making long-term decisions. These types of systems handle unstructured or semi-structured decisions. A decision is considered unstructured

if there are no clear procedures for making the decision and if not all the factors to be

considered in the decision can be readily identified in advance.

These decisions are not of a recurring nature. Some recur infrequently or occur only once. A decision support system must be very flexible. The user should be able to produce customized reports by giving particular data and formats specific to particular situations.

(e) Management Information Systems

These systems assist lower management in problem solving and making decisions. They

use the results of transaction processing and some other information also. It is a set of

information processing functions. It should handle queries as quickly as they arrive. An

important element of an MIS is its database.

A database is a non-redundant collection of interrelated data items that can be

processed through application programs and available to many users.

Role, Tasks & Attributes of the System Analyst

Role and Tasks of the System Analyst

The primary objective of any system analyst is to identify the needs of the organization by

acquiring information by various means and methods. Information acquired by the analyst can

be either computer-based or manual. Collection of information is a vital step, as it indirectly influences all the major decisions taken in the organization. The system analyst has to coordinate with the system users, computer programmers, managers, and a number of people who are involved with the use of the system. Following are the tasks performed by the system analyst:

Defining Requirements: The basic step for any system analyst is to understand the requirements

of the users. This is achieved by various fact finding techniques like interviewing, observation,

questionnaires, etc. The information should be collected in such a way that it is useful in developing a system which can provide additional features to the users beyond those desired.

Prioritizing Requirements: A number of users use the system in the organization. Each one has a

different requirement and retrieves different information. Due to certain limitations in

computing capacity it may not be possible to satisfy the needs of all the users. Even if the


computing capacity is good enough, it is necessary to take up some tasks first and update the rest as per the changing requirements. Hence it is important to create a list of priorities according to users' requirements. The best way to overcome the above limitations is to have a common formal or

informal discussion with the users of the system. This helps the system analyst to arrive at a

better conclusion.

Gathering Facts, Data and Opinions of Users: After determining the necessary needs and

collecting useful information the analyst starts the development of the system with active

cooperation from the users of the system. From time to time, the users update the analyst with the necessary information for developing the system. The analyst, while developing the system,

continuously consults the users and acquires their views and opinions.

Evaluation and Analysis: As the analyst maintains continuous contact with the users, he constantly changes and modifies the system to make it better and more user-friendly for the users.

Solving Problems: The analyst must provide alternate solutions to the management and should make an in-depth study of the system to avoid future problems. The analyst should provide some flexible alternatives to the management, which will help the manager pick the system which provides the best solution.

Drawing Specifications: The analyst must draw up specifications which will be useful for the manager. The analyst should lay out the specifications so that they can be easily understood by the manager, and they should be purely non-technical. The specifications must be detailed and well presented.

System Design: The logical design of the system is prepared, and the design must be modular.

Evaluating Systems: Evaluate the system after it has been used for some time, plan the periodicity of evaluation, and modify the system as needed.

Attributes of a System Analyst

1. Knowledge of Organizations

A system analyst must understand the way in which various organizations

function. He must understand the management structure and the relationship between

the departments in organizations. He must also find out how day-to-day operations are performed in the organization for which the information system is being developed. As many systems are built for marketing, accounting, and materials management, he must have a working knowledge of these areas. As a generalist he must try to familiarize

himself with a variety of organizations.


2. Knowledge of Computer Systems and software

The analyst must know about recent developments in computer systems and

software. He need not be an expert programmer or computer manager, but must know enough technical detail to interact effectively with program designers and computer managers. He acts as an intermediary in understanding user needs and advising an

application programmer on how the needs can be realized on a computer system. He

must also advise on the information presentation techniques, appropriate use of

packaged software, newer computer languages and environments.

An analyst’s knowledge of computer system must be deep enough to determine

the feasibility of developing the required systems on a given hardware configuration.

Conversely, an analyst must be able to advise on the hardware configuration needed to develop the required applications. Sometimes the analyst may also advise on augmentation of existing hardware to implement new systems.

3. Good Inter-Personal Relations

An analyst must be able to interpret and sharpen fuzzily stated user needs, must

be a good listener, a good diplomat and win users over as friends. He must be able to

resolve conflicting requirements and arrive at a consensus. He must understand people

and be able to influence them to change their minds and attitudes. He must understand

their needs and motivate them to work under stressful conditions such as meeting

deadlines.

4. Ability To Communicate

An analyst is also required to orally present his design to groups of users. Such

oral presentations are often made to non-technical management personnel. He must

thus be able to organize his thoughts and present them in a language easily understood

by users. Good oral presentations and satisfactory replies to questions are essential to

convince management about the usefulness of computer-based information system.

5. An Analytical Mind

Analysts are required to find solutions to problems. A good analyst must be able

to perceive the core of a problem and discard redundant data and information in the

problem statement. Any practical solution is normally non-ideal and requires

appropriate trade-offs. A good analyst would use appropriate analytical tools as

necessary, along with common sense.

6. Breadth of Knowledge

A systems analyst has to work with persons performing various jobs in an

organization. He may have to interact with accountants, sales persons, clerical staff,

production supervisors, stores officers, purchase officers, directors, etc. A system analyst must understand how they do their jobs and design systems to enable them to do their


jobs better. During his career he will be working with a variety of organizations such as

hospitals, hotels, departmental stores, transport companies, educational institutions,

etc. Thus a general, broad-based education is very useful.


Chapter 2 - Approaches to System Development

System Development Life Cycle

The process of reaching the state of computerisation is called systems

development. The systems development process consists of 7 steps called phases. These

phases are collectively known as SYSTEMS DEVELOPMENT LIFE CYCLE (SDLC).

Systems development process should be:

• Cost effective (economical)

• Efficient

• Controllable

• Flexible

• Universally applicable

• Correctable mid-course.

PHASES IN SDLC:

1. CONCEPTION (or selection) This phase initiates the process of examining whether an

organisation should opt for computerisation. The 4 major areas investigated during

the phase are:

• What are the problems?

• What are the end-objectives?

• What are the benefits?

• What are the areas to be covered?

The Proposal Form or the Project Request Form will contain details of these 4 areas.

• Ideas are conceived in the conception phase. Sources of ideas can be:

• Request from users

• Opportunities created by technological advances

• Ideas from previous systems studies.

• Outside sources.

2. INITIATION: The objectives of this phase are

• to indicate the possible ways of accomplishing the project objectives

• to find out the feasibility of each of the solutions

• to recommend the best feasible solution.

During this phase, the Systems Analyst meets the user and gathers the following

information:


• What are the current systems?

• How do they function?

• Drawbacks of the present system.

• Additional requirements of the system.

The Systems Analyst must take decisions on:

• What are the proposed solutions?

• What are the alternative solutions?

• What are the benefits?

• What are the time frames and what are the resources required?

The Systems Analyst must prepare a feasibility report regarding operational,

technical and economic feasibility of the project.

3. DETAILED ANALYSIS PHASE: The objective is to carry out a detailed analysis of every aspect

of the existing system and prepare systems specifications of the proposed system.

Main activities in this phase are:

• Determine the objectives of the current system.

• Study the current system to see how far it meets the objectives.

• Analyse user and company requirements to develop objectives of the new

system.

• Identify constraints.

• Identify user responsibilities for data input.

• Examine the interaction of the proposed system with other systems.

• Prepare systems specifications consisting of inputs, outputs, processes, storage,

controls, checks, and security procedures.

• Prepare a detailed Analysis Phase Report, the contents of which are:

a) Objectives definition

b) Definition of constraints

c) Names of inputs, outputs and procedures

d) Storage requirements

e) Controls, checks, and security procedures

f) Plan for design, development and implementation phases

g) Definitions of responsibilities for data entry, data updation, error control,

etc.

h) Definitions of time constraints.

4. DESIGN PHASE: In this phase we try to find out how to achieve the automated solution.

The following must be considered:


• How to accept input details

• How to generate output details

• How and where to store the data

• What is the format of the data to be stored

• What controls and checks are to be provided to ensure accurate running of the

system

• What are the requirements in respect of manpower, computer systems, etc.

Steps in Design Phase:

The following steps are involved in the design phase:

i. Prepare output specifications - the nature and type of output. The output may be human-readable or machine-readable.

ii. Prepare input specifications - The volume, frequency and user involvement must

be considered.

iii. Prepare file specifications - for input and output files (see the sketch after this list). This includes:

• Name of the files

• Method of creation - by data entry or processing

• Type of file

• File layout - field names, types of fields, length of fields, etc

• Frequency of creation or updation

• File organisation or volume

• Retention period.

iv. Develop overall system design logic - i.e. systems flow charts, data flow

diagrams, etc

v. Develop program specifications - inputs, outputs, processing steps.

vi. Revise the plan for development phase and implementation phase, if necessary.

vii. Prepare a DESIGN PHASE REPORT, get it approved by the user and freeze the

design specifications.
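As referenced in step iii, a file specification is essentially a structured record. A minimal sketch in Python (key names follow the list in step iii; the values are invented examples):

    # Minimal sketch of a file specification record; values are invented.
    employee_file_spec = {
        "name": "EMPLOYEE_MASTER",
        "method_of_creation": "data entry",       # or created by processing
        "type": "master",
        "layout": [                               # field name, field type, length
            ("emp_no", "numeric", 6),
            ("emp_name", "char", 40),
            ("dept", "char", 10),
        ],
        "frequency_of_updation": "daily",
        "organisation": "indexed",
        "retention_period": "7 years",
    }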

5. DEVELOPMENT PHASE: The objective of this phase is to obtain an operational system, fully

documented. The main activities in this phase are:

a) Coding the programs - i.e. writing programs based on user specifications in the

design phase.

b) Test programs for errors - individually.

c) Debug the programs - identify errors and correct them.

d) Test the system as a whole unit.

e) Test with historical data.

f) Document the system. Documents are in the form of three manuals:

• User Manual for User Dept.

• Systems Manual for Analysts / Programmers

• Operations manual for Operators.


6. IMPLEMENTATION PHASE: In this phase, the old system is replaced by the new system. The

activities required are: training the users and operators, and conversion from old system

to new system.

User training involves

• Document preparation - training how to prepare documents

• Training on data correction - what sort of errors can come up and how to correct

them.

• Interpretation of output

Operator's training involves

• Data entry

• Installation of a new computer system, terminals, and data entry equipment

• Using the equipment

• Training on data handling

• Training on common malfunctions

• Training on systems maintenance like formatting disks, equipment cleaning, etc.

7. EVALUATION PHASE: This phase allows us to assess the actual performance of the

system. The purposes of this phase are:

a) To examine the efficiency of the system

b) Compare actual achievements with plans

c) Provide feedback to systems analyst

d) Assess the performance of the system in the following aspects:

• Actual development cost and operating cost

• Realised benefits

• Timings achieved

• User satisfaction

• Error rates

• Problem areas

• Ease of use

Different Models for Software Development and Their Advantages and Disadvantages

Waterfall Model or Classic Life Cycle Model

This model is also known as the waterfall or linear sequential model. This model

demands a systematic and sequential approach to software development that begins at the


system level and progresses through analysis, design, coding, testing, and maintenance. Figure 1.1 shows a diagrammatic representation of this model. The life-cycle paradigm incorporates the following activities:

System engineering and analysis: Work on software development begins by establishing the

requirements for all elements of the system. System engineering and analysis involves gathering

of requirements at the system level, as well as basic top-level design and analysis. The

requirement gathering focuses especially on the software. The analyst must understand the

information domain of the software as well as the required function, performance and

interfacing. Requirements are documented and reviewed with the client.

Design: Software design is a multi-step process that focuses on data structures, software

architecture, procedural detail, and interface characterization. The design process translates

requirements into a representation of the software that can be assessed for quality before

coding begins. The design phase is also documented and becomes a part of the software

configuration.

Coding: The design must be translated into a machine-readable form. Coding performs this task.

If the design phase is dealt with in detail, the coding can be done mechanically.

Testing: Once code is generated, it has to be tested. Testing focuses on the logic as well as the

function of the program to ensure that the code is error-free and that the output matches the

requirement specifications.

Maintenance: Software undergoes change with time. Changes may occur on account of errors

encountered, to adapt to changes in the external environment or to enhance the functionality

and/or performance. Software maintenance reapplies each of the preceding life cycle phases to the existing program.
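The strict sequencing can be pictured as a simple pipeline in which each phase consumes the artifact produced by the previous one. A minimal Python sketch (the phase function is a placeholder, not a real process):

    # Sketch of the linear sequential flow; run_phase is a placeholder
    # for the real work done in each phase.
    def run_phase(phase, artifact):
        return artifact + [phase]              # each phase adds to the artifact

    artifact = ["requirements"]                # output of system engineering
    for phase in ["analysis", "design", "coding", "testing", "maintenance"]:
        # strictly sequential: a phase begins only after the previous one ends
        artifact = run_phase(phase, artifact)
    print(artifact)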

The classic life cycle is one of the oldest models in use. However, there are a few associated

problems. Some of the disadvantages are given below.

Disadvantages:


• Real projects rarely follow the sequential flow that the model proposes. Iteration always

occurs and creates problems in the application of the model.

• It is difficult for the client to state all requirements explicitly. The classic life cycle

requires this and it is thus difficult to accommodate the natural uncertainty that occurs

at the beginning of any new project.

• A working version of the program is not available until late in the project time span. A

major blunder may remain undetected until the working program is reviewed which is

potentially disastrous.

In spite of these problems the life-cycle method has an important place in software

engineering work. Some of the reasons are given below.

Advantages:

• The model provides a template into which methods for analysis, design, coding, testing

and maintenance can be placed.

• The steps of this model are very similar to the generic steps that are applicable to all

software engineering models.

• It is significantly preferable to a haphazard approach to software development.

Iterative Approach

The iterative enhancement life cycle model counters the third limitation of the waterfall

model and tries to combine the benefits of both prototyping and the waterfall model. The basic

idea is that the software should be developed in increments, where each increment adds some

functional capability to the system until the full system is implemented. At each step extensions

and design modifications can be made. An advantage of this approach is that it can result in

better testing, since testing each increment is likely to be easier than testing the entire system as in the waterfall model. Furthermore, as in prototyping, the increments provide feedback to the

client which is useful for determining the final requirements of the system.

In the first step of iterative enhancement model, a simple initial implementation is done

for a subset of the overall problem. This subset is the one that contains some of the key aspects

of the problem which are easy to understand and implement, and which forms a useful and

usable system. A project control list is created which contains, in order, all the tasks that

must be performed to obtain the final implementation. This project control list gives an idea of

how far the project is at any given step from the final system.

Each step consists of removing the next task from the list, designing the implementation for the selected task, coding and testing the implementation, and performing an analysis of the


partial system obtained after this step and updating the list as a result of the analysis. These

three phases are called the design phase, implementation phase and analysis phase. The

process is iterated until the project control list is empty, at which time the final implementation of the system will be available. The process involved in the iterative enhancement model is shown in

the figure below.

The project control list guides the iteration steps and keeps track of all tasks that must

be done. The tasks in the list can include redesign of defective components found during

analysis. Each entry in that list is a task that should be performed in one step of the iterative

enhancement process, and should be simple enough to be completely understood. Selecting

tasks in this manner will minimize the chances of errors and reduce the redesign work.
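The control-loop mechanism described above can be summarised in a short Python sketch; the task names and the analysis step are invented placeholders for the real engineering work:

    # Sketch of the iterative enhancement control loop.
    project_control_list = ["login", "reports", "search"]   # tasks, in order
    system = []                                             # the partial system

    while project_control_list:                # iterate until the list is empty
        task = project_control_list.pop(0)     # remove the next task
        system.append(f"designed+coded+tested:{task}")      # design, code, test
        # analyse the partial system; analysis may add tasks to the list,
        # e.g. redesign of a defective component found during analysis
        if task == "reports":
            project_control_list.append("redesign:reports-layout")

    # when the loop ends, 'system' holds the final implementation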

Advantages

The advantages of the Iterative Model are:

• Faster coding, testing, and design phases

• Facilitates the support for changes within the life cycle

Disadvantages

The disadvantages of the Iterative Model are:

• More time spent in review and analysis

• A lot of steps that need to be followed in this model

• Delay in one phase can have a detrimental effect on the software as a whole

Extreme Programming

The first Extreme Programming project was started March 6, 1996. Extreme

Programming is one of several popular Agile Processes. It has already been proven to be very

successful at many companies of all different sizes and industries worldwide.

Extreme Programming is successful because it stresses customer satisfaction. Instead of

delivering everything you could possibly want on some date far in the future this process

delivers the software you need as you need it. Extreme Programming empowers your developers

to confidently respond to changing customer requirements, even late in the life cycle.


Extreme Programming emphasizes teamwork. Managers, customers, and developers are

all equal partners in a collaborative team. Extreme Programming implements a simple, yet

effective environment enabling teams to become highly productive. The team self-organizes

around the problem to solve it as efficiently as possible.

Extreme Programming improves a software project in five essential ways;

communication, simplicity, feedback, respect, and courage. Extreme Programmers constantly

communicate with their customers and fellow programmers. They keep their design simple and

clean. They get feedback by testing their software starting on day one. They deliver the system

to the customers as early as possible and implement changes as suggested. Every small success

deepens their respect for the unique contributions of each and every team member. With this

foundation Extreme Programmers are able to courageously respond to changing requirements

and technology.

The most surprising aspect of Extreme Programming is its simple rules. Extreme

Programming is a lot like a jigsaw puzzle. There are many small pieces. Individually the pieces

make no sense, but when combined together a complete picture can be seen. The rules may

seem awkward and perhaps even naive at first, but are based on sound values and principles.

Our rules set expectations between team members but are not the end goal themselves.

You will come to realize that these rules define an environment that promotes the team collaboration and empowerment that is your goal. Once achieved, productive teamwork will continue even as

rules are changed to fit your company's specific needs.

This flow chart shows how Extreme Programming's rules work together. Customers enjoy

being partners in the software process, developers actively contribute regardless of experience

level, and managers concentrate on communication and relationships. Unproductive activities

have been trimmed to reduce costs and frustration of everyone involved.


Advantages

• Customer focus increases the chance that the software produced will actually meet the

needs of the users

• The focus on small, incremental releases decreases the risk on your project:

o by showing that your approach works and

o by putting functionality in the hands of your users, enabling them to provide

timely feedback regarding your work.

• Continuous testing and integration helps to increase the quality of your work

• XP is attractive to programmers who normally are unwilling to adopt a software process,

enabling your organization to manage its software efforts better.

Disadvantages

• XP is geared toward a single project, developed and maintained by a single team.

• XP is particularly vulnerable to "bad apple" developers who:

o don't work well with others

o who think they know it all, and/or

o who are not willing to share their "superior" code

• XP will not work in an environment where a customer or manager insists on a complete

specification or design before they begin programming.

• XP will not work in an environment where programmers are separated geographically.

• XP has not been proven to work with systems that have scalability issues (new

applications must integrate into existing systems).


Rapid Application Development Method (RAD Model)

Rapid Application Development is an incremental software development process model that

emphasizes an extremely short development cycle. The RAD model is a high-speed adaptation of

the linear sequential model in which rapid development is achieved by using component-based

construction. If requirements are well understood and project scope is constrained, the RAD

model enables a development team to create a fully functional system within 60-90 days. Used

primarily for information system applications, the RAD approach encompasses the following

phases:

• Business modeling: The information flow among business functions is modeled so as to

understand the following:

i) The information that drives the business process

ii) The information generated

iii) The source and destination of the information generated

iv) The processes that affect this information

• Data modeling: The information flow defined, as a part of the business-modeling phase

is refined into a set of data objects that are needed to support the business. The

attributes of each object are identified and the relationships between these objects are

defined.

• Process modeling: The data objects defined in the previous phase are transformed to

achieve the information flow necessary to implement a business function. Processing

descriptions are created for data manipulation.

• Application generation: RAD assumes the use of fourth generation techniques. Rather

than using third generation languages, the RAD process works to reuse existing

programming components whenever possible or create reusable components. In all


cases, automated tools are used to facilitate construction.

• Testing and turnover: Since RAD emphasizes reuse, most of the components have

already been tested. This reduces overall testing time. However, new components must

be tested and all interfaces must be fully exercised.

In general, if a business function can be modularized in a way that enables each function

to be completed in less than three months, it is a candidate for RAD. Each major function can be

addressed by a separate RAD team and then integrated to form a whole.

Advantages:

• Modularized approach to development

• Creation and use of reusable components

• Drastic reduction in development time

Disadvantages:

• For large projects, sufficient human resources are needed to create the right number of

RAD teams.

• Not all types of applications are appropriate for RAD. If a system cannot be modularized,

building the necessary components for RAD will be difficult.

• Not appropriate when the technical risks are high. For example, when an application

makes heavy use of new technology or when the software requires a high degree of

interoperability with existing programs.

Evolutionary Software Process Model

Incremental Model

This model combines elements of the linear sequential model with the iterative

philosophy of prototyping. The incremental model applies linear sequences in a staggered

fashion as time progresses. Each linear sequence produces a deliverable increment of the

software. For example, word processing software may deliver basic file management, editing

and document production functions in the first increment; more sophisticated editing and document production in the second increment; spelling and grammar checking in the third increment; advanced page layout in the fourth increment; and so on. The process flow for any

increment can incorporate the prototyping model.

When an incremental model is used, the first increment is often a core product. Hence,

basic requirements are met, but supplementary features remain undelivered. The client uses the

core product. As a result of his evaluation, a plan is developed for the next increment. The plan


addresses improvement of the core features and addition of supplementary features. This

process is repeated following delivery of each increment, until the complete product is produced.

As opposed to prototyping, incremental models focus on the delivery of an operational product

after every iteration.
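Using the word-processing example above, a minimal Python sketch of staggered increments (the feature lists simply restate that example):

    # Each increment delivers an operational product; later increments
    # improve the core and add supplementary features.
    increments = [
        ["file management", "basic editing", "document production"],  # core
        ["sophisticated editing and document production"],
        ["spelling and grammar checking"],
        ["advanced page layout"],
    ]

    delivered = []
    for n, features in enumerate(increments, start=1):
        delivered.extend(features)
        # the client evaluates each delivered increment, and a plan is
        # developed for the next one
        print(f"Increment {n} delivered; product now offers: {delivered}")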

Advantages:

• Particularly useful when staffing is inadequate for a complete implementation by the

business deadline.

• Early increments can be implemented with fewer people. If the core product is well

received, additional staff can be added to implement the next increment.

• Increments can be planned to manage technical risks. For example, the system may

require availability of some hardware that is under development. It may be possible to

plan early increments without the use of this hardware, thus enabling partial

functionality and avoiding unnecessary delay.

Disadvantages

• Each phase of an iteration is rigid, and phases do not overlap one another.

• Problems may arise pertaining to system architecture because not all requirements are

gathered up front for the entire software life cycle.


Spiral Model

The spiral model in software engineering has been designed to incorporate the best features of both the classic life cycle and the prototype models, while at the same time adding an element of risk analysis that is missing in those models. The model, represented in figure 1.3, defines four major activities corresponding to the four quadrants of the figure:

• Planning: Determination of objectives, alternatives and constraints.

• Risk analysis: Analysis of alternatives and identification or resolution of risks.

• Engineering: Development of the next level product.

• Customer evaluation: Assessment of the results of engineering.

An interesting aspect of the spiral model is the radial dimension as depicted in the figure.

With each successive iteration around the spiral, progressively more complete versions of the

software are built. During the first circuit around the spiral, objectives, alternatives and

constraints are defined and risks are identified and analyzed. If risk analysis indicates that there

is an uncertainty in the requirements, prototyping may be used in the engineering quadrant to

assist both the developer and the client.

The client now evaluates the engineering work and makes suggestions for improvement. At

each loop around the spiral, the risk analysis results in a 'go / no-go' decision. If risks are too

great the project can be terminated.

In most cases however, the spiral flow continues outward toward a more complete model of

the system, and ultimately to the operational system itself. Every circuit around the spiral

requires engineering that can be accomplished using the life cycle or the prototype models. It

should be noted that the number of development activities increases as activities move away

from the center of the spiral.
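The circuit-by-circuit flow, including the go / no-go decision, can be sketched as follows (the risk figures are invented; real risk analysis is a substantial activity in its own right):

    # Sketch of successive circuits around the spiral; risk values invented.
    circuits = [0.6, 0.4, 0.2]          # assessed risk on each circuit
    RISK_LIMIT = 0.8                    # beyond this, the project is terminated

    for n, risk in enumerate(circuits, start=1):
        # planning quadrant: objectives, alternatives and constraints
        # risk analysis quadrant: ends in a go / no-go decision
        if risk > RISK_LIMIT:
            print(f"circuit {n}: no-go, project terminated")
            break
        # engineering quadrant: build the next, more complete version
        print(f"circuit {n}: go, built version {n}")
        # customer evaluation quadrant: assess results, feed the next circuit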


Like all other models, the spiral model too has a few associated problems, which are

discussed below.

Disadvantages:

• It may be difficult to convince clients that the evolutionary approach is controllable.

• It demands considerable risk assessment expertise and relies on this for success.

• If a major risk is not uncovered, problems will undoubtedly occur.

• The model is relatively new and has not been as widely used as the life cycle or the

prototype models. It will take a few more years to determine the efficiency of this process

with certainty.

This model, however, is one of the most realistic approaches available for software

engineering. It also has a few advantages, which are discussed below.

Advantages:

• The evolutionary approach enables developers and clients to understand and react to

risks at each evolutionary level.

• It uses prototyping as a risk reduction mechanism and allows the developer to use this

approach at any stage of the development.

• It uses the systematic approach suggested by the classic life cycle method but

incorporates it into an iterative framework that is more realistic.


• This model demands an evaluation of risks at all stages and should reduce risks before

they become problematic, if properly applied.

Prototype Model

Often a customer has defined a set of objectives for software, but not identified the

detailed input, processing or output requirements. In other cases, the developer may be unsure

of the efficiency of an algorithm, the adaptability of the operating system or the form that the

human-machine interaction should take. In these situations, a prototyping approach may be the

best option. Prototyping is a process that enables the developer to create a model of the

software that must be built. The sequence of events for the prototyping model is illustrated in

the figure. Prototyping begins with requirements gathering. The developer and the client meet and

define the overall objectives for the software, identify the requirements, and outline areas

where further definition is required. In the next phase a quick design is created. This focuses on

those aspects of the software that are visible to the user (e.g. i/p approaches and o/p formats).

The quick design leads to the construction of the prototype. This prototype is evaluated by the

client / user and is used to refine requirements for the software to be developed. A process of

iteration occurs as the prototype is 'tuned' to satisfy the needs of the client, while at the same

time enabling the developer to more clearly understand what needs to be done.
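The tune-and-refine cycle can be sketched as a loop; here client_satisfied() is an invented stand-in for a real evaluation meeting with the client:

    # Sketch of the prototyping cycle; the helpers are placeholders.
    requirements = ["overall objectives"]          # initial, incomplete requirements

    def build_quick_prototype(reqs):
        return f"prototype covering {reqs}"        # quick design: visible aspects only

    def client_satisfied(prototype, round_no):
        return round_no >= 3                       # placeholder evaluation

    round_no = 0
    while True:
        round_no += 1
        prototype = build_quick_prototype(requirements)
        if client_satisfied(prototype, round_no):
            break                                  # requirements are now understood
        requirements.append(f"refinement {round_no}")   # evaluation refines them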

The prototyping model has a few associated problems. These are discussed below.

Disadvantages:

• The client sees what is apparently a working version of the software unaware that in the

rush to develop a working model, software quality and long-term maintainability are not

considered. When informed that the system must be rebuilt, most clients demand that

the existing application be fixed and made a working product. Often software developers

are forced to relent.


• The developer often makes implementation compromises to develop a working model

quickly. An inappropriate operating system or language may be selected simply because

of availability. An inefficient algorithm may be used to demonstrate capability.

Eventually the developer may become familiar with these choices and incorporate them

as an integral part of the system.

Although problems may occur, prototyping may be an effective model for software

engineering. Some of the advantages of this model are enumerated below.

Advantages:

• It is especially useful in situations where requirements are not clearly defined at the beginning and are not fully understood by either the client or the developer.

• Prototyping is also helpful in situations where an application is built for the first time

with no precedents to be followed. In such circumstances, unforeseen eventualities may

occur which cannot be predicted and can only be dealt with when encountered.

Component Assembly Model

Object-oriented technologies provide the technical framework for a component-based process model for software engineering. This model emphasizes the creation of classes that

encapsulate both data and the algorithms used to manipulate the data. The component-based

development (CBD) model incorporates many characteristics of the spiral model. It is

evolutionary in nature, thus demanding an iterative approach to software creation.

However, the model composes applications from pre-packaged software components called

classes. The engineering begins with the identification of candidate classes. This is done by

examining the data to be manipulated, and the algorithms that will be used to accomplish this

manipulation. Corresponding data and algorithms are packaged into a class. Classes created in

past applications are stored in a class library. Once candidate classes are identified the class

library is searched to see if a match exists. If it does, these classes are extracted from the library

and reused. If it does not exist, it is engineered using object-oriented techniques. The first

iteration of the application is then composed. Process flow moves to the spiral and will

ultimately re-enter the CBD during subsequent passes through the engineering activity.
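The look-up-then-engineer step can be sketched as follows (the library contents and class names are invented for illustration):

    # Sketch of class reuse in component-based development.
    class_library = {"Invoice": "class Invoice: ...", "Customer": "class Customer: ..."}

    def obtain_class(name):
        if name in class_library:           # candidate class found in the library:
            return class_library[name]      # extract it and reuse it
        new_class = f"class {name}: ..."    # otherwise engineer it with OO techniques
        class_library[name] = new_class     # and store it for future reuse
        return new_class

    # candidate classes are identified from the application's data and algorithms
    for candidate in ["Invoice", "Order"]:
        obtain_class(candidate)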

Advantages:

• The CBD model leads to software reuse, and reusability provides software engineers with

a number of measurable benefits.

• This model leads to a 70% reduction in development cycle time and an 84% reduction in project cost.

Disadvantages:


• The results mentioned above are inherently dependent on the robustness of the

component library.


Fourth Generation Techniques (4GT)

The term 'fourth generation techniques' (4GT) encompasses a broad array of software tools that have one thing in common: each enables the software engineer to specify some characteristic of the software at a higher level. The tool then automatically generates source code based on the developer's specifications.

Currently, a software development environment that supports the 4GT model includes

some or all of the following tools: nonprocedural languages for database query, report

generation, data manipulation, screen interaction and definition, code generation, high-level

graphics capability, spreadsheet capability, automated generation of HTML, etc. Initially, many of these tools were available only for very specific application domains, but today 4GT environments have been extended to address most software application categories.
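As a rough illustration of "specify at a higher level, generate source code automatically", the following Python sketch turns a nonprocedural report specification into SQL. The spec format and the generate_report_sql function are invented for this example; real 4GT environments are far more capable.

# A toy "fourth generation" report generator: the developer states *what*
# is wanted (a nonprocedural spec) and the tool emits source code (SQL).

def generate_report_sql(spec):
    cols = ", ".join(spec["columns"])
    sql = f"SELECT {cols} FROM {spec['table']}"
    if spec.get("filter"):
        sql += f" WHERE {spec['filter']}"
    if spec.get("order_by"):
        sql += f" ORDER BY {spec['order_by']}"
    return sql + ";"


if __name__ == "__main__":
    spec = {
        "table": "orders",
        "columns": ["customer", "total"],
        "filter": "total > 1000",
        "order_by": "total DESC",
    }
    print(generate_report_sql(spec))
    # SELECT customer, total FROM orders WHERE total > 1000 ORDER BY total DESC;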

Like all other models, 4GT begins with a requirements gathering phase. Ideally, the

customer would describe the requirements, which are directly translated into an operational

prototype. Practically, however, the client may be unsure of the requirements, may be

ambiguous in his specs or may be unable to specify information in a manner that a 4GT tool can

use. Thus, the client/developer dialog remains an essential part of the development process.

For small applications, it may be possible to move directly from the requirements

gathering phase to the implementation phase using a nonprocedural fourth generation

language. However for larger projects a design strategy is necessary. Otherwise, the same

difficulties are likely to arise as with conventional approaches. As with all other models, the 4GT

model has both merits and demerits.

These are enumerated below:

Advantages:

• Dramatic reduction in software development time.

• Improved productivity for software developers.

Disadvantages:

• 4GT tools are not always much easier to use than programming languages.

• Resultant source code produced by such tools is sometimes inefficient.

• The maintainability of large software systems built using 4GT is open to question.

The Concurrent Development Model

The concurrent process model can be represented schematically as a series of major

technical activities, tasks, and their associated states. For example, the engineering activity

defined for the spiral model is accomplished by invoking the following tasks: prototyping and/or

analysis modeling, requirements specification, and design.

Figure provides a schematic representation of one activity with the concurrent process model. The activity—analysis—may be in any one of the states noted at any given time.

Similarly, other activities (e.g., design or customer communication) can be represented in an

analogous manner. All activities exist concurrently but reside in different states. For example,

early in a project the customer communication activity (not shown in the figure) has completed

its first iteration and exists in the awaiting changes state. The analysis activity (which existed in

the none state while initial customer communication was completed) now makes a transition

into the under development state. If, however, the customer indicates that changes in

requirements must be made, the analysis activity moves from the under development state into

the awaiting changes state.

The concurrent process model defines a series of events that will trigger transitions from

state to state for each of the software engineering activities. For example, during early stages of

design, an inconsistency in the analysis model is uncovered. This generates the event analysis

model correction which will trigger the analysis activity from the done state into the awaiting

changes state.

The concurrent process model is often used as the paradigm for the development of

client/server applications. A client/server system is composed of a set of functional components. When applied to client/server, the concurrent process model defines activities in two dimensions: a system dimension and a component dimension. System level issues are

addressed using three activities: design, assembly, and use. The component dimension is

addressed with two activities: design and realization. Concurrency is achieved in two ways:

(1) system and component activities occur simultaneously and can be modeled using the state-

oriented approach described previously;

(2) a typical client/server application is implemented with many components, each of which can

be designed and realized concurrently.

In reality, the concurrent process model is applicable to all types of software

development and provides an accurate picture of the current state of a project. Rather than

confining software engineering activities to a sequence of events, it defines a network of

activities. Each activity on the network exists simultaneously with other activities. Events

generated within a given activity or at some other place in the activity network trigger

transitions among the states of an activity.
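The state-and-event behaviour described above can be sketched as a small state machine. The state names follow the text; the event names and the transition table are simplified assumptions for illustration.

# A minimal state machine for one activity (e.g. analysis) in the concurrent
# process model. States follow the text; the transitions are simplified.

TRANSITIONS = {
    ("none", "customer communication complete"): "under development",
    ("under development", "requirements change requested"): "awaiting changes",
    ("under development", "analysis complete"): "done",
    ("done", "analysis model correction"): "awaiting changes",
    ("awaiting changes", "changes incorporated"): "under development",
}


class Activity:
    def __init__(self, name):
        self.name = name
        self.state = "none"

    def on_event(self, event):
        next_state = TRANSITIONS.get((self.state, event))
        if next_state:
            print(f"{self.name}: {self.state} -> {next_state} ({event})")
            self.state = next_state


if __name__ == "__main__":
    analysis = Activity("analysis")
    analysis.on_event("customer communication complete")
    analysis.on_event("analysis complete")
    # An inconsistency found during design generates this event:
    analysis.on_event("analysis model correction")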

V-Shaped Model

Just like the waterfall model, the V-Shaped life cycle is a sequential path of execution of processes. Each phase must be completed before the next phase begins. Testing, however, is emphasized in this model more than in the waterfall model. The testing procedures are developed early in the life cycle, before any coding is done, during each of the phases preceding implementation.

Requirements begin the life cycle model just like the waterfall model. Before

development is started, a system test plan is created. The test plan focuses on meeting the

functionality specified in the requirements gathering.

The high-level design phase focuses on system architecture and design. An integration test plan is created in this phase as well, in order to test the ability of the pieces of the software system to work together.

The low-level design phase is where the actual software components are designed, and

unit tests are created in this phase as well.

The implementation phase is, again, where all coding takes place. Once coding is

complete, the path of execution continues up the right side of the V where the test plans

developed earlier are now put to use.
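The pairing of each left-leg phase with the test plan written during it can be summarized in a few lines of Python; the table below simply restates the phases described above.

# The V-shape pairs each development phase (left leg) with the test plan
# written during it and executed later (right leg).

V_MODEL = [
    # (phase, test plan written in that phase)
    ("requirements gathering", "system test plan"),
    ("high-level design",      "integration test plan"),
    ("low-level design",       "unit test plans"),
    ("implementation",         None),   # coding; plans are now executed
]

for phase, plan in V_MODEL:
    if plan:
        print(f"{phase:22s} -> prepares {plan}")
    else:
        print(f"{phase:22s} -> code, then execute the plans above in reverse")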

Advantages

• Simple and easy to use.

• Each phase has specific deliverables.

• Higher chance of success over the waterfall model due to the development of test plans

early on during the life cycle.

• Works well for small projects where requirements are easily understood.

Disadvantages

• Very rigid, like the waterfall model.

• Little flexibility and adjusting scope is difficult and expensive.

• Software is developed during the implementation phase, so no early prototypes of the

software are produced.

• Model doesn’t provide a clear path for problems found during testing phases.

Chapter 3- Analysis: Investigating System Requirements

Activities of the Analysis Phase

The requirement analysis task is a process of discovery, refinement, modeling and

specification. The software scope is refined in detail. Models of the required information, control

flow, operational behavior and data content are created. Alternative solutions are analyzed and

allocated to various software elements.

Both the developer and the customer take an active role in requirements analysis and

specification. The customer attempts to reformulate a sometimes, unclear concept of software

function and performance into concrete detail. The developer acts as interrogator, consultant

and problem-solver.

• Requirement analysis is a software engineering task that bridges the gap between

system level software allocation and software design.

• It enables the system engineer to specify software function and performance, indicate software's interface with other system elements, and establish design constraints that the software must meet.

• It allows the software engineer to refine the software allocation and build models of the

process, data and behavioral domains that will be treated by software.

• It provides the software designer with a representation of information and function that

can be translated into data, architectural and procedural design.

• It also provides the developer and the client with the means to assess quality once the

Software is built.

The principles of requirement analysis call upon the analyst to systematically approach

the specification of the system to be developed. This means that the analysis has to be done

using the available information. Generally, all computer systems are looked upon as information

processing systems, since they process data input and produce a useful output.

The logical view of a system gives the overall feel of how the system operates. Any

system performs three generic functions: input, output and processing. The logical view focuses

on the problem-specific functions. This helps the analyst to identify the functional model of the

system. The functional model begins with a single context level model. Over a series of

iterations, more and more functional detail is provided, until all system functionality is

represented.
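This leveled refinement can be pictured as a small tree; a minimal Python sketch follows, using invented payroll functions as the example.

# Leveled decomposition of a functional model: a single context-level
# function is refined, iteration by iteration, into sub-functions.
# The payroll functions are invented for illustration.

functional_model = {
    "Process payroll": {                 # context level (level 0)
        "Validate timesheets": {},       # level 1 refinements
        "Calculate pay": {
            "Compute gross pay": {},     # level 2 refinements
            "Compute deductions": {},
        },
        "Produce payslips": {},
    }
}

def show(model, depth=0):
    for function, subfunctions in model.items():
        print("  " * depth + function)
        show(subfunctions, depth + 1)

show(functional_model)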

The physical view of the system focuses on the operations being performed on the data

that is either taken as input or generated as output. This view determines the actions to be

performed on the data under specific conditions. This helps the analyst to identify the behavioral

model of the system. The analyst can determine an observable mode of behavior that changes

only when some event occurs. Examples of such events are:

i) An internal clock indicating some specified time has passed.

ii) A mouse movement.

iii) An external time signal.
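A behavioral model of this kind can be sketched as an object whose observable mode changes only when an event arrives. The modes and event names below are illustrative assumptions, not part of any standard notation.

# A sketch of an observable mode of behavior that changes only when an
# event occurs; the modes and events here are illustrative assumptions.

class MonitoredDevice:
    def __init__(self):
        self.mode = "idle"          # the observable mode of behavior

    def handle(self, event):
        if event == "timer expired" and self.mode == "idle":
            self.mode = "polling"
        elif event == "mouse moved":
            self.mode = "interactive"
        elif event == "external time signal":
            self.mode = "synchronizing"
        return self.mode


if __name__ == "__main__":
    device = MonitoredDevice()
    for event in ["timer expired", "mouse moved", "external time signal"]:
        print(f"{event:22s} -> mode: {device.handle(event)}")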

Analysis Tasks

All analysis methods are related by a set of fundamental principles:

• The information domain of the problem must be represented and understood.

• Models that depict system information, function and behavior should be developed.

• The models and the problem must be partitioned in a manner that uncovers detail in a

layered or hierarchical fashion.

• The analysis process should move from essential information to implementation detail.

Software requirement analysis may be divided into five areas of effort:

i) Problem recognition

Initially, the analyst studies the system specification and the software project plan. Next

communication for analysis must be established so that problem recognition is ensured. The

analyst must establish contact with management and the technical staff of the user/customer

organization and the software development organization. The project manager can serve as

a coordinator to facilitate establishment of communication paths. The objective of the

analyst is to recognize the basic problem elements as perceived by the client.

ii) Evaluation and synthesis

Problem evaluation and synthesis is the next major area of effort for analysis. The

analyst must evaluate the flow and content of information, define and elaborate all software

functions, understand software behavior in the context of events that affect the system,

establish interface characteristics and uncover design constraints. Each of these tasks serves

to define the problem so that an overall approach may be synthesized.

iii) Modeling

We create models to gain a better understanding of the actual entity to be built. The

software model must be capable of modeling the information that software transforms, the

functions that enable the transformation to occur and the behavior of the system during

transformation. Models created serve a number of important roles:

• The model aids the analyst in understanding the information, function and behavior

of the system, thus making the requirement analysis easier and more systematic.

• The model becomes the focal point for review and the key to determining the completeness, consistency and accuracy of the specification.

• The model becomes the foundation for design, providing the designer with an

essential representation of software that can be mapped into an implementation

context.

iv) Specification

There is no doubt that the mode of specification has much to do with the quality of the

solution. The quality, timeliness and completeness of the software may be adversely

affected by incomplete or inconsistent specifications.

Software requirements may be analyzed in a number of ways. These analysis techniques

lead to a paper or computer-based specification that contains graphical and natural language

descriptions of the software requirements.

v) Review

Both the software developer and the client conduct a review of the software

requirements specification. Because the specification forms the foundation of the development

phase, extreme care is taken in conducting the review.

The review is first conducted at a macroscopic level. The reviewers attempt to ensure

that the specification is complete, consistent and accurate. In the next phase, the review is

conducted at a detailed level. Here, the concern is on the wording of the specification. The

developer attempts to uncover problems that may be hidden within the specification content.

Fact Finding Methods

Fact-finding takes place from the start of the project – during the feasibility study right

through to the final implementation and review. Although progressively lower levels of detail

are required by analysts during the logical design, physical design and implementation phases,

the main fact-finding activity is during analysis. This fact-finding establishes what the existing

system does and what the problems are, and leads to a definition of a set of options from which

the users may choose their required system. In this chapter we consider the second stage of our

systems analysis model – asking questions and collecting data. Fact-finding interviewing, an

important technique used by analysts during their investigation, is described in detail, and a

number of other methods used to collect data are also described.

We know the importance of planning and of preparing thoroughly

before embarking on detailed analysis. If this has been done, you will already have collected

quite a lot of background information about the project and the client’s organisation. You may

also have been involved in carrying out a feasibility study. You will have some facts about the

current system: these may include details of staff and equipment resources, manual and

computer procedures, and an outline of current problems. This background information should

be checked carefully before beginning the detailed fact-finding task. You should now be ready to

begin asking questions, and, as a result of your planning, these questions will be relevant,

focused and appropriate.

In carrying out your investigation you will be collecting information about the current

system, and, by recording the problems and requirements described by users of the current

system, building up a picture of the required system. The facts gathered from each part of the

client’s organisation will be concerned primarily with the current system and how it operates,

and will include some or all of the following: details of inputs to and outputs from the system;

how information is stored; volumes and frequencies of data; any trends that can be identified;

and specific problems, with examples if possible, that are experienced by users.

In addition, you may also be able to gather useful information about:

• departmental objectives;

• decisions made and the facts upon which they are based;

• what is done, to what purpose, who does it, where it is done and the reason why;

• critical factors affecting the business;

• staff and equipment costs.

In order to collect this data and related information, a range of fact-finding methods can

be used. Those most commonly used include interviewing, questionnaires, observation,

searching records and document analysis. Each of these methods is described in this chapter,

but it is important to remember that they are not mutually exclusive. More than one method

may be used during a fact-finding session: for example, during a fact-finding interview a

questionnaire may be completed, records inspected and documents analysed.

Review existing reports, forms and procedure descriptions / Record Inspection

Time constraints can prevent systems analysts from making as thorough an investigation

of the current system as they might wish. Another approach, which enables conclusions to be

drawn from a sample of past and present results of the current system, is called record

searching. This involves looking through written records to obtain quantitative information, and

to confirm or quantify information already supplied by user staff or management. Information

can be collected about:

• the volume of file data and transactions, frequencies, and trends;

• the frequency with which files are updated;

• the accuracy of the data held in the system;

• unused forms;

• exceptions and omissions.

Using this information, an assessment of the volatility of the information can be made,

and the usefulness of existing information can be questioned if it appears that some file data is

merely updated, often inaccurate or little used. All of the information collected by record

searching can be used to cross-check information given by users of the system. This doesn’t

imply that user opinion will be inaccurate, but discrepancies can be evaluated and the reasons

for them discussed.

Where there are a large number of documents, statistical sampling can be used. This will

involve sampling randomly or systematically to provide the required quantitative and qualitative

information. This can be perfectly satisfactory if the analyst understands the concepts behind

statistical sampling, but is a very hazardous exercise for the untrained. One particularly common

fallacy is to draw conclusions from a non-representative sample. Extra care should be taken in

estimating volumes and data field sizes from the evidence of a sample. For instance, a small

sample of cash receipts inspected during a mid-month slack period might indicate an average of

40 per day, with a maximum value of £1500 for any one item. Such a sample used

indiscriminately for system design might be disastrous if the real-life situation was that the

number of receipts ranged from 20 to 2000 per day depending upon the time in the month, and

that, exceptionally, cheques for over £100,000 were received. It is therefore recommended that

more than one sample is taken and the results compared to ensure consistency.
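The advice to take more than one sample and compare the results can be illustrated with a few lines of Python; the receipt figures below are invented, loosely following the example in the text.

# Comparing two samples of daily cash-receipt counts before trusting either
# for design estimates. The figures are invented for illustration.

import statistics

slack_period = [38, 41, 40, 39, 42]           # mid-month sample
busy_period = [1800, 1950, 2100, 1700, 2000]  # month-end sample

for name, sample in [("slack", slack_period), ("busy", busy_period)]:
    print(f"{name}: mean={statistics.mean(sample):.0f}, max={max(sample)}")

# The two samples disagree wildly, so neither can be used alone:
ratio = statistics.mean(busy_period) / statistics.mean(slack_period)
print(f"busy/slack mean ratio: {ratio:.0f}x -> sample again, design for peaks")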

Conduct Interview

Interviewing is the most commonly used, and normally most useful, fact-finding technique. Interviews are used to collect information from individuals face-to-face. There can be

several objectives to using interviewing, such as finding out facts, verifying facts, clarifying facts,

generating enthusiasm, getting the end-user involved, identifying requirements, and gathering

ideas and opinions. However, using the interviewing technique requires good communication

skills for dealing effectively with people who have different values, priorities, opinions,

motivations, and personalities. As with other fact-finding techniques, interviewing is not always

the best method for all situations. The advantages and disadvantages of using interviewing as a

fact-finding technique are listed in Table.

There are two types of interview: unstructured and structured. Unstructured interviews

are conducted with only a general objective in mind and with few, if any, specific questions. The

interviewer counts on the interviewee to provide a framework and direction to the interview.

This type of interview frequently loses focus and, for this reason, it often does not work well for

database analysis and design.

In structured interviews, the interviewer has a specific set of questions to ask the

interviewee. Depending on the interviewee’s responses, the interviewer will direct additional

questions to obtain clarification or expansion. Open-ended questions allow the interviewee to

respond in any way that seems appropriate. An example of an open-ended question is: ‘Why are

you dissatisfied with the report on client registration?’ Closed-ended questions restrict answers

to either specific choices or short, direct responses. An example of such a question might be:

‘Are you receiving the report on client registration on time?’ or ‘Does the report on client

registration contain accurate information?’ Both questions require only a ‘Yes’ or ‘No’ response.

Ensuring a successful interview includes selecting appropriate individuals to interview, preparing extensively for the interview, and conducting the interview in an efficient and effective manner.

Table: Advantages & Disadvantages of Interviewing

Advantages:
i. Allows interviewee to respond freely and openly to questions
ii. Allows interviewee to feel part of the project
iii. Allows interviewer to follow up on interesting comments made by the interviewee
iv. Allows interviewer to adapt or re-word questions during the interview
v. Allows interviewer to observe the interviewee's body language

Disadvantages:
i. Very time-consuming and costly, and therefore may be impractical
ii. Success is dependent on the communication skills of the interviewer
iii. Success can be dependent on the willingness of interviewees to participate in interviews

Observe & Document Business Processes

Observation

The systems analyst is constantly observing, and observations often provide clues about

why the current system is not functioning properly. Observations of the local environment

during visits to the client site, as well as very small and unexpected incidents, can all be

significant in later analysis, and it is important that they are recorded at the time.

The analyst may also be involved in undertaking planned or conscious observations when

it is decided to use this technique for part of the study. This will involve watching an operation

for a period to see exactly what happens. Clearly, formal observation should only be used if

agreement is given and users are prepared to cooperate. Observation should be done openly, as

covert observation can undermine any trust and goodwill the analyst manages to build up. The

technique is particularly good for tracing bottlenecks and checking facts that have already been

noted.

A checklist, highlighting useful areas for observation, is shown as Figure.

A related method is called systematic activity sampling. This involves making

observations of a particular operation at predetermined times. The times are chosen initially by

some random device, so that the staff carrying out the operation do not know in advance when

they will next be under observation.
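Choosing observation times "by some random device" might look like the following minimal Python sketch; the working day and the number of observations are assumptions made for the example.

# Systematic activity sampling: pick random observation times in advance so
# staff cannot predict them. The working day and count are assumptions.

import random

def observation_times(n, start_hour=9, end_hour=17, seed=None):
    rng = random.Random(seed)
    minutes = rng.sample(range(start_hour * 60, end_hour * 60), n)
    return sorted(f"{m // 60:02d}:{m % 60:02d}" for m in minutes)

print(observation_times(5, seed=42))   # e.g. ['09:46', '11:21', ...]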

Document Analysis

When investigating the data flowing through a system another useful technique is to

collect documents that show how this information is organised. Such documents might include

reports, forms, organisation charts or formal lists. In order to fully understand the purpose of a

document and its importance to the business, the analyst must ask questions about how, where,

why and when it is used. This is called document analysis, and is another fact-finding technique

available. It is particularly powerful when used in combination with one or more of the other

techniques.

Figure: Observation Checklist

Build Prototypes

Prototyping:

a) Prototyping is a technique for quickly building a functioning but incomplete model of the

information system.

b) A prototype is a small, representative or working model of users' requirements or of a proposed design for an information system.

c) The Development of the prototype is based on the fact that the requirements are seldom

fully known at the beginning of the project. The idea is to build a first simplified version

of the system and seek feedback from the people involved in order to then design a

better subsequent version. This process is repeated until the system meets the client's conditions of acceptance.

d) Any given prototype may omit certain functions or features until such time as the prototype has sufficiently evolved into an acceptable implementation of the requirements.

e) There are two types of prototyping models

• Throw-away prototyping - In this model, the prototype is discarded once it has served

the purpose of clarifying the system requirements.

• Evolutionary prototyping - In this model, the prototype evolves into the main system.

Need for Prototyping:

a) Information requirements are not always well defined. Users may know only that certain business areas need improvement or that the existing procedures must be changed, or they may know that they need better information for managing certain activities but are not sure what that information is.

b) The user's requirements may be too vague to even begin formulating a design.

c) Developers may have neither information nor experience of some unique situations or

some high-cost or high-risk situations, in which the proposed design is new and untested.

d) Developers may be unsure of the efficiency of an algorithm, the adaptability of an

operating system, or the form that human-machine interaction should take.

e) In these and many other situations, a prototyping approach may be the best choice.

Steps in Prototyping:

a) Requirement Analysis:

Identify the user's information and operating requirements.

b) Prototype creation or modification :

Develop a working prototype that focuses on only the most important functions using a

basic database.

c) Customer Evaluation:
Allow the customers to use the prototype and evaluate it. Gather the feedback, reviews, comments and suggestions.

d) Prototype refinement :

Integrate the most important changes into the current prototype on the basis of

customer feedback.

e) System development and implementation:
Once the refined prototype satisfies all the client's conditions of acceptance, it is transformed into the actual system (a minimal sketch of this loop follows).
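The sketch below walks the steps above as a loop in Python. build_prototype, get_feedback and the max_iterations guard are hypothetical stand-ins; the guard reflects the warning below that uncontrolled prototyping can become a never-ending process.

# The prototype refine-and-evaluate loop from the steps above, as a sketch.
# get_feedback() stands in for a real customer evaluation session.

def build_prototype(requirements):
    return {"features": sorted(requirements)}      # simplified "working model"

def develop_system(prototype):
    return f"production system with {prototype['features']}"

def prototyping_process(initial_requirements, get_feedback, max_iterations=10):
    requirements = set(initial_requirements)
    prototype = build_prototype(requirements)      # a) + b)
    for _ in range(max_iterations):
        feedback = get_feedback(prototype)         # c) customer evaluation
        if not feedback:                           # conditions of acceptance met
            return develop_system(prototype)       # e) build the actual system
        requirements |= set(feedback)              # d) refine the prototype
        prototype = build_prototype(requirements)
    raise RuntimeError("prototyping left uncontrolled: never-ending changes")


if __name__ == "__main__":
    rounds = iter([["print invoice"], []])         # one change, then acceptance
    print(prototyping_process(["enter order"], lambda p: next(rounds)))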

Advantages :

• Shorter development time.

• More accurate user requirements.

• Greater user participation and support

• Relatively inexpensive to build as compared with the cost of a conventional system.

Disadvantages :

• An inappropriate operating system or programming language may be used simply because it is available.

• The completed system may not contain all the features and final touches. For instance, headings, titles and page numbers in reports may be missing.

• File organization may be temporary and record structures may be left incomplete.

• Processing and input controls may be missing, and documentation of the system may have been avoided entirely.

• Development of the system may become a never-ending process as changes keep happening.

• Adds to the cost and time of developing the system if left uncontrolled.

Application :

• This method is most useful for unique applications where developers have little

information or experience or where risk or error may be high.

• It is useful to test the feasibility of the system or to identify user requirements.

Questionnaires

The use of a questionnaire might seem an attractive idea, as data can be collected from

a lot of people without having to visit them all. Although this is true, the method should be used

with care, as it is difficult to design a questionnaire that is both simple and comprehensive. Also,

unless the questions are kept short and clear, they may be misunderstood by those questioned,

making the data collected unreliable.

A questionnaire may be the most effective method of fact-finding to collect a small

amount of data from a lot of people: for example, where staff are located over a widely spread

geographical area; when data must be collected from a large number of staff; when time is

short; and when 100% coverage is not essential. A questionnaire can also be used as a means of

verifying data collected using other methods, or as the basis for the question-and-answer

section of a fact-finding interview. Another effective use of a questionnaire is to send it out in

advance of an interview. This will enable the respondent to assemble the required information

before the meeting, which means that the interview can be more productive.

When designing a questionnaire, there are three sections to consider:

• a heading section, which describes the purpose of the questionnaire and contains the

main references – name, staff identification number, date, etc.;

• a classification section for collecting information that can later be used for analysing and

summarising the total data, such as age, sex, grade, job title, location;

• a data section made up of questions designed to elicit the specific information being

sought by the analyst.

These three sections can clearly be observed in the questionnaire in Figure: the heading

section is at the top (name and date), the next line down is for recording classification

information (job title, department and section), and the data section is designed to gather

information about the duties associated with a particular job.
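The three sections can also be pictured as a simple data structure; the sketch below uses invented questions and fields, not those of the figure.

# The three questionnaire sections as a data structure; the questions shown
# are invented examples.

questionnaire = {
    "heading": {               # purpose and main references
        "purpose": "Survey of duties associated with your job",
        "fields": ["name", "staff id", "date"],
    },
    "classification": {        # used later to analyse and summarise the data
        "fields": ["job title", "department", "section"],
    },
    "data": [                  # questions that elicit the specific information
        {"q": "Which tasks take most of your day?", "type": "open"},
        {"q": "Do you receive the weekly report on time?", "type": "closed",
         "choices": ["Yes", "No"]},
    ],
}

for section, content in questionnaire.items():
    print(section, "->", content)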

The questionnaire designer aims to formulate questions in the data section that are not

open to misinterpretation, that are unbiased, and which are certain to obtain the exact data

required. This is not easy, but the principles of questioning described earlier in this chapter

provide a useful guide. A mixture of open and closed questions can be used, although it is useful

to restrict the scope of open questions so that the questionnaire focuses on the area under

investigation. To ensure that your questions are unambiguous, you will need to test them on a

sample of other people, and if the questionnaire is going to be the major fact-finding instrument

the whole document will have to be field tested and validated before its widespread use.

One way of avoiding misunderstanding and gaining cooperation is to write a covering

letter explaining the purpose of the questionnaire and emphasising the

date by which the questionnaire should be returned. Even with this explanation, an expectation

that all the questionnaires will be completed and returned is not realistic. Some people object to

filling in forms, some will forget, and others delay completing them until they are eventually

lost! To conclude, then, a questionnaire can be a useful fact-finding instrument, but care must

be taken with the design. A poor design can mean that the form is difficult to complete, which

will result in poor-quality information being returned to the analyst.

Conduct JAD Session

JAD (Joint Application Development) is a technique used to expedite the investigation of system requirements. Usually,

the analysts first meet with the users and document the discussion through notes & models

(which are later reviewed). Unresolved issues are placed on an open-items list, and are

eventually discussed in additional meetings.

The objective of this technique is to compress all these activities into a shorter

series of JAD sessions with users and project team members. During a session, all of the fact-

finding, model-building, policy decisions, and verification activities are completed for a

particular aspect of the system.

The success of a JAD session depends on the presence of all key stakeholders and their

contribution and decisions.

Validate the Requirements

Structured Walkthroughs

A structured walkthrough is a planned review of a system or its software by persons

involved in the development effort. Sometimes a structured walkthrough is called a peer walkthrough because the participants are colleagues at the same level in the organization.

Purpose:

1. The purpose of a structured walkthrough is to find areas where improvement can be made in the system or the development process.

2. Structured walkthroughs are often employed to enhance quality and to provide guidance to system analysts and programmers.

3. A walkthrough should be viewed by the programmers and analysts as an opportunity to receive assistance, not as an obstacle to be avoided or merely tolerated.

4. The structured walkthrough can be used as a constructive and cost-effective management tool after detailed investigation, following design, and during program development.

PROCESS OF STRUCTURED WALKTHROUGH:

1. The walkthrough concept recognizes that system development is a team process. The individuals who formulated the design specifications or created the program code are part of the review team.

2. A moderator is chosen to lead the review.

3. A scribe or recorder is also needed to capture the details of the discussions and the ideas that are raised.

4. Maintenance should be addressed during walkthrough.

5. Generally no more than seven persons should be involved including the individuals who

actually developed the product under review, the recorder and the review leader.

6. Structured reviews rarely exceed 90 minutes in length.

REQUIREMENT REVIEW:

1. A requirement review is a structured walkthrough conducted to examine the requirement

specifications formulated by the analyst.

2. It is also called a specification review.

3. It aims at examining the functional activities and processes that the new system will handle.

4. It includes documentation that participants read and study prior to the actual walkthrough.

DESIGN REVIEW

1. A design review focuses on the design specifications for meeting previously identified system requirements.

2. The purpose of this type of structured walkthrough is to determine whether the

proposed design will meet the requirement effectively and efficiently.

3. If the participants find discrepancies between the design and the requirements, they will point them out and discuss them.

CODE REVIEW:

1. A code review is a structured walkthrough conducted to examine the program code developed in a system, along with its documentation.

2. It is used for new systems and for systems under maintenance

3. A code review does not deal with the entire software but rather with individual modules

or major components in a program.

4. When programs are reviewed, the participants also assess execution efficiency, the use of standard data names and modules, and program errors.

Chapter 4- Feasibility Analysis

Feasibility Study

A feasibility study is a preliminary study undertaken before the real work of a project

starts to ascertain the likelihood of the project's success. It is an analysis of possible solutions to

a problem and a recommendation on the best solution to use. It involves evaluating how the

solution will fit into the corporation. For example, it can determine whether order processing can be carried out more efficiently by a new system than by the previous one.

A feasibility study is defined as an evaluation or analysis of the potential impact of a

proposed project or program. A feasibility study is conducted to assist decision-makers in

determining whether or not to implement a particular project or program. The feasibility study

is based on extensive research on both the current practices and the proposed project/program

and its impact on the selected organization's operation. The feasibility study will contain

extensive data related to financial and operational impact and will include advantages and

disadvantages of both the current situation and the proposed plan.

Why Prepare Feasibility Studies?

Developing any new business venture is difficult. Taking a project from the initial idea

through the operational stage is a complex and time-consuming effort. Most ideas, whether

from a cooperative or investor-owned business, do not develop into business operations. If

these ideas make it to the operational stage, most fail within the first 6 months. Before the

potential members invest in a proposed business project, they must determine if it can be economically viable and then decide if the investment advantages outweigh the risks involved.

Many cooperative business projects are quite expensive to conduct. The projects involve

operations that differ from those of the members’ individual business. Often, cooperative

businesses’ operations involve risks with which the members are unfamiliar. The study allows

groups to preview potential project outcomes and to decide if they should continue. Although

the costs of conducting a study may seem high, they are relatively minor when compared with

the total project cost. The small initial expenditure on a feasibility study can help to protect

larger capital investments later.

Feasibility studies are useful and valid for many kinds of projects. Evaluations of a new

business venture both from new groups and established businesses, are the most common, but

not the only usage. Studies can help groups decide to expand existing services, build or remodel

facilities, change methods of operation, add new products, or even merge with another

business. A feasibility study assists decision makers whenever they need to consider alternative

development opportunities.

Feasibility studies permit planners to outline their ideas on paper before implementing

them. This can reveal errors in project design before their implementation negatively affects the

project. Applying the lessons gained from a feasibility study can significantly lower the project

costs. The study presents the risks and returns associated with the project so the prospective

members can evaluate them. There is no "magic number" or correct rate of return a project

needs to obtain before a group decides to proceed. The acceptable level of return and

appropriate risk rate will vary for individual members depending on their personal situation.

Cooperatives serve the needs and enhance the economic returns of their members, and not

outside investors, so the appropriate economic rate of return for a cooperative project may be

lower than those required by projects of investor-owned firms. Potential members should

evaluate the returns of a cooperative project to see how it would affect the returns of all of their

business operations.

The proposed project usually requires both risk capital from members and debt capital

from banks and other financiers to become operational. Lenders typically require an objective

evaluation of a project prior to investing. A feasibility study conducted by someone without a

vested interest in the project outcome can provide this assessment.

What Is a Feasibility Study?

This analytical tool used during the project planning process shows how a business

would operate under a set of assumptions — the technology used (the facilities, equipment,

production process, etc.) and the financial aspects (capital needs, volume, cost of goods, wages

etc.). The study is the first time in a project development process that the pieces are assembled

to see if they perform together to create a technical and economically feasible concept. The

study also shows the sensitivity of the business to changes in these basic assumptions.

Feasibility studies contain standard technical and financial components, as discussed in

more detail later in this report. The exact appearance of each study varies. This depends on the

industry studied, the critical factors for that project, the methods chosen to conduct the study,

and the budget. Emphasis can be placed on various sections of an individual feasibility study

depending upon the needs of the group for whom the study was prepared. Most studies have

multiple potential uses, so they must be designed to serve everyone’s needs.

The feasibility study evaluates the project’s potential for success. The perceived

objectivity of the evaluation is an important factor in the credibility placed on the study by

potential investors and financiers. Also, the creation of the study requires a strong background

both in the financial and technical aspects of the project. For these reasons, outside consultants

conduct most studies.

Feasibility studies for a cooperative are similar to those for other businesses, with one

exception. Cooperative members use it to be successful in enhancing their personal businesses,

so a study conducted for a cooperative must address how the project will impact members as

individuals in addition to how it will affect the cooperative as a whole.

The feasibility study is conducted to assist the decision-makers in making the decision

that will be in the best interest of the organization. The extensive research,

conducted in a non-biased manner, will provide data upon which to base a decision.

A feasibility study could be used to assess a proposed new working system, which might be needed because:

• The current system may no longer suit its purpose,

• Technological advancement may have rendered the current system redundant,

• The business is expanding, and a new system would allow it to cope with the extra workload,

• Customers are complaining about the speed and quality of work the business provides,

• Competitors are winning a bigger market share due to their effective integration of computerized systems.

Although few businesses would not benefit from a computerized system at all, the

process of carrying out this feasibility study makes the purchaser/client think carefully about

how it is going to be used.

After request clarification, the analyst proposes some solutions. Then, for each solution, it is checked whether it is practical to implement that solution.

This is done through a feasibility study, which examines various aspects, such as whether the solution is technically or economically feasible. Depending upon the aspect on which the feasibility is being assessed, it can be categorized into four classes:

• Technical Feasibility

• Economic Feasibility

• Operational Feasibility

• Legal Feasibility

The outcome of the feasibility study should be very clear. It should answer the following issues.

• Is there an alternate way to do the job in a better way?

• What is recommended?

STEPS IN FEASIBILITY ANALYSIS:

Feasibility study involves eight steps:

1. Form a project team and appoint a project leader

• The concept behind a project team is that future system users should be involved in its

design and implementation.

• Their knowledge & experience in the operations area are essential to the success of the

system.

• For small projects, the analyst and an assistant usually suffice.

• The team consists of analysts and user staff with enough collective expertise to devise a solution to the problem.

2. Prepare system flowcharts

• Prepare generalized system flowcharts for the system.

• Information-oriented charts and data flow diagrams prepared in the initial investigation are also reviewed at this time.

• The charts bring up the importance of inputs, outputs and data flow among key points in

the existing system.

• All other flowcharts needed for detailed evaluation are completed at this point.

3. Enumerate potential candidate systems.

• This step identifies the candidate systems that are capable of producing the output

included in the generalized flowcharts.

• The above requires a transformation from logical to physical system models.

• Another aspect of this step is consideration of the hardware that can handle the total

system requirement.

• An important aspect of hardware is processing and main memory.

• There are a large number of computers with differing processing sizes, main memory

capabilities and software support.

• The project team may contact vendors for information on the processing capabilities of

the system available.

4. Describe and identify characteristics of candidate systems.

• From the candidate systems considered the team begins a preliminary evaluation in an

attempt to reduce them to a manageable number.

• Technical knowledge and expertise in the hardware/ software area are critical for

determining what each candidate system can and cannot do.

5. Determine and Evaluate performance and cost effectiveness of each candidate system.

• Each candidate system's performance is evaluated against the system performance requirements set prior to the feasibility study.

• Whatever the criteria, there has to be as close a match as practicable, although trade-offs are often necessary to select the best system.

• The cost encompasses both designing and installing the system.

• It includes user training, updating the physical facilities, and documentation.

• System performance criteria are evaluated against the cost of each system to determine

which system is likely to be the most cost effective and also meets the performance

requirements.

• Costs are most easily determined when the benefits of the system are tangible and

measurable.

• An additional factor to consider is the cost of the study design and development.

• In many respects the cost of the study phase is a "sunk cost" – money already spent that cannot be recovered.

• Including it in the project cost estimate is optional.

6. Weight system performance and cost data

• In some cases, the performance and cost data for each candidate system show which

system is the best choice.

• This outcome terminates the feasibility study.

• Many times, however, the situation is not so clear-cut.

• The performance/cost evaluation matrix does not clearly identify the best system.

• The next step is to weight the importance of each criterion by applying a rating figure.

• Then the candidate system with the highest total score is selected.

7. Select the best candidate system

• The system with the highest total score is judged the best system.

• This assumes the weighting factors are fair and the rating of each evaluation criterion is accurate (a scoring sketch follows these steps).

• In any case, management should not make the selection without having the experience to do so.

• Management co-operation and comments, however, are encouraged.

8. Feasibility Report

• The feasibility report is a formal document for management use.

Brief enough and sufficiently non-technical to be understandable, yet detailed enough to

provide the basis for system design.

• The report contains the following sections:

o Cover letter – general findings and recommendations to be considered.

o Table of contents – specifies the location of the various parts of the report.

o Overview – a narrative explanation of the purpose & scope of the project, the reason for undertaking the feasibility study, and the department(s) involved or affected by the candidate system.

o Detailed findings – outline the methods used in the present system; the system's effectiveness & efficiency as well as operating costs are emphasized. This section

also provides a description of the objectives and general procedures of the

candidate system.

o Economic justification – details point-by-point cost comparisons and preliminary

cost estimates for the development and operation of the candidate system. A

return on investment (ROI) analysis of the project is also included.

o Recommendations & conclusions

o Appendix – documents all memos and data compiled during the investigation; they are placed at the end of the report for reference.
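Steps 6 and 7 above (weighting criteria and selecting the highest-scoring candidate) can be illustrated with a short Python sketch; the criteria, weights and ratings are invented for the example.

# Weighting each evaluation criterion and scoring candidate systems; the
# criteria, weights and ratings are invented for illustration.

WEIGHTS = {"performance": 0.4, "cost": 0.35, "ease of use": 0.25}

candidates = {
    "System A": {"performance": 8, "cost": 6, "ease of use": 7},
    "System B": {"performance": 7, "cost": 9, "ease of use": 6},
}

def total_score(ratings):
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

for name, ratings in candidates.items():
    print(f"{name}: {total_score(ratings):.2f}")

best = max(candidates, key=lambda n: total_score(candidates[n]))
print("Selected:", best)   # highest total score wins (step 7)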

Technical Feasibility, Economic Feasibility, Operational Feasibility, Legal

Feasibility

Technical Feasibility

In technical feasibility the following issues are taken into consideration.

• Whether the required technology is available or not

• Whether the required resources are available

o Manpower- programmers, testers & debuggers

o Software and hardware

Once the technical feasibility is established, it is important to consider the monetary

factors also, since it might happen that developing a particular system is technically possible but requires huge investment while the benefits are comparatively low. For evaluating this,

economic feasibility of the proposed system is carried out.

Economic Feasibility

For any system if the expected benefits equal or exceed the expected costs, the system

can be judged to be economically feasible. In economic feasibility, cost benefit analysis is done

in which expected costs and benefits are evaluated. Economic analysis is used for evaluating the

effectiveness of the proposed system.

In economic feasibility, the most important aspect is cost-benefit analysis. As the name suggests, this is an analysis of the costs to be incurred in the system and the benefits derivable out of the system.

Operational Feasibility

Operational feasibility is mainly concerned with issues like whether the system will be

used if it is developed and implemented, and whether there will be resistance from users that will affect the possible application benefits. The essential questions that help in testing the

operational feasibility of a system are following.

• Does management support the project?

• Are the users not happy with current business practices? Will it reduce the time

(operation) considerably? If yes, then they will welcome the change and the new system.

• Have the users been involved in the planning and development of the project? Early

involvement reduces the probability of resistance towards the new system.

• Will the proposed system really benefit the organization? Does the overall response

increase? Will accessibility of information be lost? Will the system affect the customers in a considerable way?

Legal Feasibility

It includes a study concerning contracts, liability, violations, and other legal traps frequently unknown to the technical staff.

Schedule Feasibility study

This involves questions such as how much time is available to build the new system,

when it can be built (e.g. during holidays), whether it interferes with normal business operation,

etc.

Organizational Feasibility study

This involves questions such as whether the system has enough support to be

implemented successfully, whether it brings an excessive amount of change, and whether the

organization is changing too rapidly to absorb it.

Market Feasibility

Another concern is market variability and impact on the project. This area should not be

confused with economic feasibility. The market needs to be analyzed to assess the potential impacts of market demand, competitive activities, etc., and the "divertible" market share available. Price war

activities by competitors, whether local, regional, national or international, must also be

analyzed for early contingency funding and debt service negotiations during the start-up, ramp-

up, and commercial start-up phases of the project.

Environmental Feasibility

Environmental feasibility is often a killer of projects, through long, drawn-out approval processes and outright active opposition by those claiming environmental concerns. This is an aspect worthy of real attention

in the very early stages of a project. Concern must be shown and action must be taken to

address any and all environmental concerns raised or anticipated. A perfect example was the

recent attempt by Disney to build a theme park in Virginia. After a lot of funds and efforts,

Disney could not overcome the local opposition to the environmental impact that the Disney

project would have on the historic Manassas battleground area.

Political Feasibility

A politically feasible project may be referred to as a "politically correct project." Political

considerations often dictate direction for a proposed project. This is particularly true for large

projects with national visibility that may have significant government inputs and political

implications. For example, political necessity may be a source of support for a project regardless

of the project's merits. On the other hand, worthy projects may face insurmountable opposition

simply because of political factors. Political feasibility analysis requires an evaluation of the

compatibility of project goals with the prevailing goals of the political system.

Safety Feasibility

Safety feasibility is another important aspect that should be considered in project

planning. Safety feasibility refers to an analysis of whether the project is capable of being

implemented and operated safely with minimal adverse effects on the environment.

Unfortunately, environmental impact assessment is often not adequately addressed in

complex projects. As an example, the North American Free Trade Agreement (NAFTA) between

the U.S., Canada, and Mexico was temporarily suspended in 1993 because of the legal

consideration of the potential environmental impacts of the projects to be undertaken under the

agreement.

Social Feasibility

Social feasibility addresses the influences that a proposed project may have on the social

system in the project environment. The ambient social structure may be such that certain

categories of workers may be in short supply or nonexistent. The effect of the project on the

social status of the project participants must be assessed to ensure compatibility. It should be

recognized that workers in certain industries may have certain status symbols within the society.

Cultural Feasibility

Cultural feasibility deals with the compatibility of the proposed project with the cultural

setup of the project environment. In labor-intensive projects, planned functions must be

integrated with the local cultural practices and beliefs. For example, religious beliefs may

influence what an individual is willing to do or not do.

Financial Feasibility

Financial feasibility should be distinguished from economic feasibility. Financial

feasibility involves the capability of the project organization to raise the appropriate funds

needed to implement the proposed project. Project financing can be a major obstacle in large

multi-party projects because of the level of capital required. Loan availability, credit worthiness,

equity, and loan schedule are important aspects of financial feasibility analysis.

Cost Benefit Analysis

Developing an IT application is an investment, since after development the application provides the organization with profits. Profits can be monetary or in the form of an improved working environment. However, it also carries risks, because in some cases an estimate can be wrong and the project might not actually turn out to be beneficial.


Cost benefit analysis helps to give management a picture of the costs, benefits and risks.

It usually involves comparing alternate investments.

Cost benefit analysis determines the benefits and savings that are expected from the system and

compares them with the expected costs.

The cost of an information system involves the development cost and maintenance cost.

The development costs are one time investment whereas maintenance costs are recurring. The

development cost is basically the costs incurred during the various stages of the system

development.

Each phase of the life cycle has a cost. Some examples are:

• Personnel

• Equipment

• Supplies

• Overheads

• Consultants' fees

Sources of benefits:

Benefits usually come from two major sources: decreased costs or increased revenues.

Cost savings come from greater efficiency in the operations of the company. Areas to look for reduced costs include the following:

1. Reducing staff due to automating manual functions or increasing efficiency.

2. Maintaining constant staff with increasing volumes of work.

3. Decreasing operating expenses, such as shipping charges for emergency shipments.

4. Reducing error rates due to automated editing or validation.

5. Reducing bad accounts or bad credit losses.

6. Collecting accounts receivables more quickly.

7. Reducing the cost of goods through volume discounts on purchases.

Cost and Benefit Categories

In performing cost benefit analysis (CBA) it is important to identify the cost and benefit factors. Costs and benefits can be categorized as follows.

There are several cost factors/elements. These are hardware, personnel, facility,

operating, and supply costs.

In a broad sense the costs can be divided into two types:


1. Development costs:

Development costs are incurred during the development of the system and are a one-time investment, e.g.:

• Wages

• Equipment

2. Operating costs:

Operating costs recur throughout the life of the system, e.g.:

• Wages

• Supplies

• Overheads

Another classification of the costs can be:

Hardware/software costs:

This includes the cost of purchasing or leasing computers and their peripherals, together with the cost of the required software.

Personnel costs:

It is the money, spent on the people involved in the development of the system. These

expenditures include salaries, other benefits such as health insurance, conveyance allowance,

etc.

Facility costs:

Expenses incurred during the preparation of the physical site where the system will be

operational. These can be wiring, flooring, acoustics, lighting, and air conditioning.

Operating costs:

Operating costs are the expenses required for the day to day running of the system. This

includes the maintenance of the system. That can be in the form of maintaining the hardware or

application programs or money paid to professionals responsible for running or maintaining the

system.

Supply costs:

These are variable costs that vary proportionately with the amount of use of paper, ribbons,

disks, and the like. These should be estimated and included in the overall cost of the system.

Benefits

We can define benefit as

Profit or Benefit = Income – Costs

Benefits can be accrued by :

• Increasing income, or

• Decreasing costs, or

• both


The system will provide some benefits also. Benefits can be tangible or intangible, direct

or indirect. In cost benefit analysis, the first task is to identify each benefit and assign a

monetary value to it.

The two main benefits are improved performance and minimized processing costs.

Further costs and benefits can be categorized as

Tangible or Intangible Costs and Benefits

Tangible costs and benefits are those that can be measured. Hardware costs, salaries for professionals, and software costs are all tangible: they can be identified and measured. The purchase of hardware or software, personnel training, and employee salaries are examples of tangible costs.

Costs whose value cannot be measured are referred to as intangible costs. For example, the breakdown of an online system during banking hours will cause the bank to lose deposits, a loss that is hard to quantify.

Benefits are also tangible or intangible. For example, greater customer satisfaction and improved company status are intangible benefits, whereas improved response time and error-free output such as accurate reports are tangible benefits. Both tangible

and intangible costs and benefits should be considered in the evaluation process.

Direct or Indirect Costs and Benefits

From the cost accounting point of view, the costs are treated as either direct or indirect.

Direct costs have a monetary value directly associated with them, and direct benefits are directly attributable to a given project. For example, if the proposed system can handle, say, 25% more transactions than the present system, that is a direct benefit.

Indirect costs result from the operations that are not directly associated with the system.

Insurance, maintenance, heat, light, air conditioning are all indirect costs.

Fixed or Variable Costs and Benefits

Some costs and benefits are fixed. Fixed costs do not change with use: depreciation of hardware and insurance are examples. Variable costs are incurred on a regular basis (the recurring period may be weekly or monthly depending upon the system); they are proportional to the work volume and continue as long as the system is in operation. Likewise, fixed benefits do not change, while variable benefits are realized on a regular basis.

Performing Cost Benefit Analysis (CBA)

Example:

Cost for the proposed system (figures in USD thousands)


Benefit for the proposed system

Profit = Benefits - Costs

= 300,000 - 154,000

= USD 146,000

Since the benefits exceed the costs, the system is feasible.

Steps of CBA can briefly be described as:

• Estimate the development costs, operating costs and benefits.

• Determine the life of the system:

– When will the benefits start to accrue?

– When will the system become obsolete?

• Determine the interest rate; this should reflect a realistic low-risk investment rate.

Select Evaluation Method

When all the financial data have been identified and broken down into cost categories,

the analyst selects a method for evaluation.

There are various analysis methods available. Some of them are following.

1. Present value analysis

2. Payback analysis

3. Net present value

4. Net benefit analysis


5. Cash-flow analysis

6. Break-even analysis

Present value analysis:

It is used for long-term projects where it is difficult to compare present costs with future benefits. In this method costs and benefits are calculated in terms of today's value of the investment. To compute the present value P of a future amount F, we use the formula

P = F / (1 + i)^n

where i is the rate of interest and n is the time in years.

Example:

The present value of $3000 received at the end of the 5th year, with a 15% interest rate, is calculated as

P = 3000 / (1 + 0.15)^5 = 1491.53

The table below shows the present value analysis for 5 years:

Year   Estimated Future Value   Present Value   Cumulative Present Value of Benefits
 1             3000                2608.69                  2608.69
 2             3000                2268.43                  4877.12
 3             3000                1972.54                  6849.66
 4             3000                1715.25                  8564.91
 5             3000                1491.53                 10056.44
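As a quick check, the calculation can be scripted. A minimal Python sketch (the 3000-per-year benefit and 15% rate come from the example above; everything else is illustrative) reproduces the table:

def present_value(future_value, rate, years):
    # P = F / (1 + i)**n, the present value formula given above
    return future_value / (1 + rate) ** years

benefit, rate = 3000.0, 0.15
cumulative = 0.0
for year in range(1, 6):
    pv = present_value(benefit, rate, year)
    cumulative += pv
    print(f"Year {year}: PV = {pv:8.2f}, cumulative = {cumulative:9.2f}")

# Year 5 prints PV = 1491.53, matching the worked example.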

Net Present Value: NPV

The net present value is equal to benefits minus costs. It can also be expressed as a percentage of the investment.

Net Present Value = Benefits - Costs

% = Net Present Value / Investment

Example: Suppose the total investment is $50,000 and the benefits are $80,000.

Then Net Present Value = $(80,000 - 50,000)

= $30,000


% = 30,000 / 50,000

= 0.6, i.e. 60% of the investment

Break-even Analysis:

Once we have estimated the costs and benefits of the system, it is also essential to know when the benefits will be realized. For that, break-even analysis is done.

Break-even is the point where the cost of the proposed system equals that of the current one. The break-even method compares the costs of the current and candidate systems. In developing any candidate system, the costs initially exceed those of the current system; this is the investment period. When both costs are equal, that is break-even. Beyond that point, the candidate system provides greater benefit than the old one; this is the return period.

Fig. 3.1 (not reproduced here) is a break-even chart comparing the costs of the current and candidate systems, plotting processing cost against processing volume. Straight lines show the model's relationships in terms of the variable, fixed, and total costs of the two processing methods and their economic benefits. Point B' is the break-even point; the area after B' is the return period, and the area A'AB' is the investment area. From the chart it can be concluded that when transactions number fewer than 70,000 the present system is more economical, while above 70,000 transactions the candidate system is preferred.
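To make the comparison concrete, here is a minimal Python sketch of the break-even computation. The fixed and variable cost figures are invented assumptions, chosen only so that the two cost lines cross at the 70,000-transaction volume mentioned above:

# Cost model: total cost = fixed cost + variable cost per transaction.
current_fixed, current_var = 10_000.0, 1.00      # current system (assumed)
candidate_fixed, candidate_var = 45_000.0, 0.50  # candidate system (assumed)

def total_cost(fixed, variable, volume):
    return fixed + variable * volume

# Solve fixed1 + v1*x = fixed2 + v2*x for the break-even volume x.
break_even = (candidate_fixed - current_fixed) / (current_var - candidate_var)
print(break_even)  # 70000.0: beyond this volume the candidate system is cheaper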

Cash-flow Analysis:


Some projects, such as those carried out by computer and word processing services,

produce revenues from an investment in computer systems. Cash-flow analysis keeps track of

accumulated costs and revenues on a regular basis.

Payback analysis:

The payback method is a common measure of the relative time value of a project. It determines the time it takes for the accumulated benefits to equal the initial investment. It is easy to calculate and allows two or more activities to be ranked. The payback period may be computed by a formula (not reproduced here) whose terms are:

A = Capital investment

B = Investment credit

C = Cost of investment

D = Company's income tax

E = State and local taxes

F = Life of capital

G = Time to install the system

H = Benefits and savings
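In its simplest form, ignoring the tax and credit terms above, the payback period is just the time until accumulated benefits cover the initial investment. A minimal Python sketch with invented figures:

def payback_period(investment, annual_benefits):
    # Years until accumulated benefits equal the initial investment.
    accumulated = 0.0
    for year, benefit in enumerate(annual_benefits, start=1):
        accumulated += benefit
        if accumulated >= investment:
            # Interpolate within the final year for a fractional answer.
            return year - (accumulated - investment) / benefit
    return None  # benefits never cover the investment

print(payback_period(50_000, [20_000, 20_000, 20_000]))  # 2.5 (years)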

Return on Investment analysis:

The ROI analysis technique compares the lifetime profitability of alternative solutions and projects. The ROI for a solution or project is a percentage rate that measures the relationship between the amount the business gets back from an investment and the amount invested. In line with that definition, the ROI is calculated as:

ROI = (estimated lifetime benefits - estimated lifetime costs) / estimated lifetime costs

Procedure for cost benefit determination:

The determination of costs and benefits entails the following steps:

1. Identify the cost and benefits pertaining to a given project

2. Categorize the various costs and benefits for analysis

3. Select a method for evaluation

4. Interpret the results of the analysis

5. Take action


LIST OF DELIVERABLES

When the design of an information system is complete, the specifications are documented in a form that outlines the features of the application.

These specifications are termed the ‘deliverables’ or the ‘design book’ by system analysts.

No design is complete without the design book, since it contains all the details that must

be included in the computer software, datasets & procedures that the working information

system comprises.

The deliverables include the following:

1. Layout charts:

Input & output descriptions showing the location of all details shown on reports, documents, &

display screens.

2. Record layouts:

Descriptions of the data items in transaction & master files, as well as related database

schematics.

3. Coding systems:

Descriptions of the codes that explain or identify types of transactions, classification, &

categories of events or entities.

4. Procedure Specification:

Planned procedures for installing & operating the system when it is constructed.

5. Program specifications:

Charts, tables & graphic descriptions of the modules & components of computer

software & the interaction between each as well as the functions performed &

data used or produced by each.

6. Development plan:

Timetables describing elapsed calendar time for development activities; personnel

staffing plans for systems analysts, programmers, & other personnel; preliminary

testing & implementation plans.

7. Cost Package:

Anticipated expenses for development, implementation and operation of the new

system, focusing on such major cost categories as personnel, equipment,

communications, facilities and supplies.


Chapter 5-Modeling System Requirements

Decision Trees, Decision Tables, and Structured English are tools used to represent

process logic.

Data Flow Diagram

Data flow diagrams (DFDs) reveal relationships among and between the various

components in a program or system. DFDs are an important technique for modeling a system’s

high-level detail by showing how input data is transformed to output results through a sequence

of functional transformations. DFDs consist of four major components: entities, processes, data

stores, and data flows. The symbols used to depict how these components interact in a system

are simple and easy to understand; however, there are several DFD models to work from, each

having its own symbology. DFD syntax does remain constant by using simple verb and noun

constructs. Such a syntactical relationship of DFDs makes them ideal for object-oriented analysis

and parsing functional specifications into precise DFDs for the systems analyst.

DEFINING DATA FLOW DIAGRAMS (DFDs)

When it comes to conveying how information data flows through systems (and how that

data is transformed in the process), data flow diagrams (DFDs) are the method of choice over

technical descriptions for three principal reasons.

• DFDs are easier to understand by technical and nontechnical audiences

• DFDs can provide a high level system overview, complete with boundaries and

connections to other systems

• DFDs can provide a detailed representation of system components

DFDs help system designers and others during initial analysis stages visualize a current

system or one that may be necessary to meet new requirements. Systems analysts prefer

working with DFDs, particularly when they require a clear understanding of the boundary

between existing systems and postulated systems. DFDs represent the following:

1. External devices sending and receiving data

2. Processes that change that data

3. Data flows themselves

4. Data storage locations

The hierarchical DFD typically consists of a top-level diagram (Level 0) underlain by

cascading lower level diagrams (Level 1, Level 2…) that represent different parts of the system.

Purpose/Objective:


The purpose of data flow diagrams is to provide a semantic bridge between users and systems

developers. The diagrams are:

• graphical, eliminating thousands of words;

• logical representations, modeling WHAT a system does, rather than physical models

showing HOW it does it;

• hierarchical, showing systems at any level of detail; and

• jargonless, allowing user understanding and reviewing.

The goal of data flow diagramming is to have a commonly understood model of a system. The

diagrams are the basis of structured systems analysis. Data flow diagrams are supported by

other techniques of structured systems analysis such as data structure diagrams, data

dictionaries, and procedure-representing techniques such as decision tables, decision trees, and

structured English.

Data flow diagrams have the objective of avoiding the cost of:

• user/developer misunderstanding of a system, resulting in a need to redo systems or in

not using the system.

• having to start documentation from scratch when the physical system changes since the

logical system, WHAT gets done, often remains the same when technology changes.

• systems inefficiencies because a system gets "computerized" before it gets

"systematized".

• being unable to evaluate system project boundaries or degree of automation, resulting

in a project of inappropriate scope.

Symbols Used In Data Flow Diagram

Data Flow Diagrams are composed of the four basic symbols described below, plus a resource flow symbol (e.g. < Cash >) for physical flows.

The External Entity symbol represents sources of data to the system or destinations of data from

the system.

The Data Flow symbol represents movement of data.


The Data Store symbol represents data that is not moving (delayed data at rest).

The Process symbol represents an activity that transforms or manipulates the data (combines,

reorders, converts, etc.).

Any system can be represented at any level of detail by these four symbols.

Defining DFD Components

DFDs consist of four basic components that illustrate how data flows in a system: entity,

process, data store, and data flow.

External Entity

An entity is the source or destination of data and lies outside the context of the system. Entities either provide data to the system

(referred to as a source) or receive data from it (referred to as a sink). Entities are often

represented as rectangles (a diagonal line across the right-hand corner means that this entity is

represented somewhere else in the DFD). Entities are also referred to as agents, terminators, or

source/sink.

Process

The process is the manipulation or work that transforms data, performing computations,

making decisions (logic flow), or directing data flows based on business rules. In other words, a

process receives input and generates some output. Process names (simple verbs and dataflow

names, such as “Submit Payment” or “Get Invoice”) usually describe the transformation, which

can be performed by people or machines. Processes can be drawn as circles or a segmented

rectangle on a DFD, and include a process name and process number.

Data Store

A data store is where a process stores data between processes for later retrieval by that

same process or another one. Files and tables are considered data stores. Data store names

(plural) are simple but meaningful, such as “customers,” “orders,” and “products.” Data

stores are usually drawn as a rectangle with the right-hand side missing and labeled by the name

of the data storage area it represents, though different notations do exist.

Data Flow

Data flow is the movement of data between the entity, the process, and the data store.

Data flow portrays the interface between the components of the DFD. The flow of data in a DFD

is named to reflect the nature of the data used (these names should also be unique within a

specific DFD). Data flow is represented by an arrow, where the arrow is annotated with the data

name.

Resource Flow

A resource flow shows the flow of any physical material from its source to its destination.

For this reason they are sometimes referred to as physical flows.

The physical material in question should be given a meaningful name. Resource flows are


usually restricted to early, high-level diagrams and are used when a description of the physical

flow of materials is considered to be important to help the analysis.

Steps To Create Data Flow Diagram

The procedure for producing a data flow diagram is to:

1. Identify and list external entities providing inputs/receiving outputs from system;

2. Identify and list inputs from/outputs to external entities;

3. Create a context diagram with system at center and external entities sending and

receiving data flows;

4. Identify the business functions included within the system boundary;

5. Identify the data connections between business functions;

6. Confirm through personal contact sent data is received and vice-versa;

7. Trace and record what happens to each of the data flows entering the system (data

movement, data storage, data transformation/processing)

8. Attempt to connect any diagram segments into a rough draft;

9. Verify all data flows have a source and destination;

10. Verify that data flowing into a data store also flows out, and vice versa (a mechanical check for steps 9 and 10 is sketched after this list);

11. Redraw to simplify--ponder and question result;

12. Review with "informed" users;

13. Explode and repeat above steps as needed.
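Steps 9 and 10 lend themselves to a mechanical check once the flows are written down. A minimal Python sketch, assuming a simple list-of-tuples representation (the names echo the inventory-system example discussed below):

# Each flow is (source, destination, data name); values are illustrative.
flows = [
    ("CUSTOMER", "2 Sell appliance", "order"),
    ("2 Sell appliance", "D1 SALES", "sale record"),
    ("D1 SALES", "9 Report cash flow", "sales data"),
    ("9 Report cash flow", "FINANCIAL SYSTEM", "cash flow report"),
]
stores = {"D1 SALES"}

for src, dst, name in flows:  # step 9: every flow has a source and destination
    assert src and dst, f"flow '{name}' is missing a source or destination"

for store in stores:          # step 10: data going into a store also comes out
    has_in = any(dst == store for _, dst, _ in flows)
    has_out = any(src == store for src, _, _ in flows)
    assert has_in and has_out, f"data store {store} needs an inflow and an outflow"

print("basic DFD consistency checks passed")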

The context (level 0) diagram

A context (level 0) diagram documents the system’s boundaries by highlighting its

sources and destinations. Documenting the system’s boundaries by drawing a context diagram

helps the analyst, the user, and the responsible managers visualize alternative high-level logical

system designs.

For example, consider a context diagram for a typical inventory system. The system itself is

shown as a single process. It provides data to the FINANCIAL SYSTEM. It both provides data to

and gets data from MANAGER, SUPPLIER, and CUSTOMER. Note that the data flows are labeled

with (at this level) composite names.

Moving the boundaries significantly changes the system, and the ability to visualize the

implications of different boundary assumptions is a powerful reason for creating a context

diagram. For example, as drawn, the financial system and the inventory system are independent. An

alternative logical design might move the financial system inside the inventory system (or vice

versa), effectively integrating them. The result would be a somewhat more complex (but

perhaps more efficient) system.


The level 1 data flow diagram

A level 1 data flow diagram shows the system’s primary processes, data stores, sources,

and destinations linked by data flows. Generally, a system’s primary processes are independent,

and thus, separated from each other by intermediate data stores that suggest the data are held

in some way between processes.

For example, consider a level 1 data flow diagram for an inventory system. Start at

the upper left with source/destination FINANCIAL SYSTEM. Data flows to FINANCIAL SYSTEM

from process 9, Report cash flow. Data enter process 9 from data store D1, SALES. Data enter D1

from process 2, Sell appliance. Process 2 gets its data from CUSTOMER and from data stores D1,

D3, D5, and D6, and so on. Note how intermediate data stores serve to insulate the primary

processes from each other and thus promote process independence.

A level 1 process is a composite item that might incorporate related programs, routines, manual

procedures, hardware-based procedures, and other activities. For example, process 2, Sell

appliance might imply (in one alternative) a set of sales associate’s guidelines, while another

alternative might include a point-of-sale terminal equipped with a bar code scanner and

necessary support software. In effect, the level 1 process Sell appliance represents all the

hardware, software, and procedures associated with selling an appliance. As the data flow

diagram is decomposed, the various sub-processes are eventually isolated and defined.

Data Flow Diagram Types

1. Physical Data Flow Diagram: Physical data flow diagrams are implementation-dependent and show the actual devices, departments, people, etc., involved in the current system.

2. Logical or Conceptual Data Flow Diagram: A logical data flow diagram represents business functions or processes. It


describes the system independently of how it is actually implemented, focusing on what activities are accomplished rather than on how they are carried out.

Advantages of data flow diagrams

• It gives further understanding of the interrelatedness of the system and its sub-systems

• It is useful for communicating current system knowledge to the user

• It is used as part of the system documentation files

• A data flow diagram helps to substantiate the logic underlying the data flow of the organization

• It gives a summary of the system

• A DFD makes it easy to trace errors and is a useful quick reference for the development team when locating and controlling errors

Disadvantages of data flow diagram

• A DFD is likely to go through many alterations before agreement with the user is reached

• Physical considerations are usually left out

• It can appear ambiguous to users who have little or no technical knowledge, making it difficult for them to understand

Structured English

This is another method of overcoming ambiguity in defining conditions and actions. This

method uses narrative statements to describe a procedure. It works well for decision analysis

and can be carried over into programming and software development. Structured English uses

three basic types of statements to describe a process. They are:

• Sequence structures

A sequence structure is a single step or action included in a process. It does not depend

on the existence of any condition, and when encountered, is always taken.

• Decision structures

The action sequences are often included within decision structures that identify

conditions. Decision structures occur when two or more actions can be taken,

depending on the value for a certain condition.

• Iteration structures

In routine operating activities, it is common to find that certain activities are repeated

while a certain condition exists or until a condition occurs. Iteration instructions permit

analysts to describe these cases.

The following example illustrates all these structures:


DO WHILE still examining more books
    Read title of book.
    IF title is interesting
        THEN Pick up book and thumb through it.
             Look at price.
             IF you decide you want book
                 THEN put in wanted book stack.
                 ELSE put back on shelf.
             END IF
        ELSE continue.
    END IF
END DO
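For comparison, here is a rough Python rendering of the same logic. The book representation, and the price test standing in for "you decide you want the book", are invented for illustration:

def browse_books(books, budget):
    wanted = []
    for book in books:                    # DO WHILE still examining more books
        if book["interesting"]:           # IF title is interesting
            if book["price"] <= budget:   # IF you decide you want book
                wanted.append(book)       # THEN put in wanted book stack
            # ELSE put back on shelf (do nothing)
        # ELSE continue
    return wanted

books = [{"title": "SAD Notes", "interesting": True, "price": 150}]
print(browse_books(books, budget=200))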

The two building blocks of Structured English are (1) structured logic or instructions

organized into nested or grouped procedures, and (2) simple English statements such as add,

multiply, move, etc. (strong, active, specific verbs).

Five conventions to follow when using Structured English:

• Express all logic in terms of sequential structures, decision structures, or iterations.

• Use and capitalize accepted keywords such as: IF, THEN, ELSE, DO, DO WHILE, DO UNTIL, PERFORM.

• Indent blocks of statements to show their hierarchy (nesting) clearly.

• When words or phrases have been defined in the Data Dictionary, underline those words or phrases to indicate that they have a specialized, reserved meaning.

• Be careful when using "and" and "or" as well as "greater than" and "greater than or equal to" and other logical comparisons.

Decision tables

Decision tables are a precise yet compact way to model complicated logic. Decision

tables, like if-then-else and switch-case statements, associate conditions with actions to

perform. But, unlike the control structures found in traditional programming languages,

decision tables can associate many independent conditions with several actions in an elegant

way.

Structure

Decision tables are typically divided into four quadrants, as shown below:

Conditions  |  Condition alternatives
Actions     |  Action entries


Each decision corresponds to a variable, relation or predicate whose possible values are

listed among the condition alternatives. Each action is a procedure or operation to perform, and

the entries specify whether (or in what order) the action is to be performed for the set of

condition alternatives the entry corresponds to. Many decision tables include in their condition

alternatives the don't care symbol, a hyphen. Using don't cares can simplify decision tables,

especially when a given condition has little influence on the actions to be performed. In some

cases, entire conditions thought to be important initially are found to be irrelevant when none of

the conditions influence which actions are performed.

Aside from the basic four quadrant structure, decision tables vary widely in the way the

condition alternatives and action entries are represented. Some decision tables use simple

true/false values to represent the alternatives to a condition (akin to if-then-else), other tables

may use numbered alternatives (akin to switch-case), and some tables even use fuzzy logic or

probabilistic representations for condition alternatives. In a similar way, action entries can

simply represent whether an action is to be performed (check the actions to perform), or in more

advanced decision tables, the sequencing of actions to perform (number the actions to perform).

Example

The limited-entry decision table is the simplest to describe. The condition alternatives are

simple Boolean values, and the action entries are check-marks, representing which of the

actions in a given column are to be performed. A technical support company writes a decision

table to diagnose printer problems based upon symptoms described to them over the phone

from their clients.

Printer troubleshooter (decision table not reproduced here)

Of course, this is just a simple example (and it does not necessarily correspond to the

reality of printer troubleshooting), but even so, it is possible to see how decision tables can scale

to several conditions with many possibilities.
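A limited-entry decision table also maps naturally onto code. The following minimal Python sketch uses invented printer-style conditions and actions (not the original table above) and shows the completeness check that every combination of conditions has a rule:

conditions = ("does_not_print", "red_light_flashing", "unrecognised")

decision_table = {
    # (does_not_print, red_light_flashing, unrecognised): actions (illustrative)
    (True,  True,  True):  ["check power cable", "check ink"],
    (True,  True,  False): ["check ink"],
    (True,  False, True):  ["check power cable", "check printer cable"],
    (True,  False, False): ["check for paper jam"],
    (False, True,  True):  ["check ink", "reinstall driver"],
    (False, True,  False): ["check ink"],
    (False, False, True):  ["reinstall driver"],
    (False, False, False): [],  # nothing reported, nothing to do
}

# All 2**3 = 8 combinations are present, so no rule is missing.
assert len(decision_table) == 2 ** len(conditions)

symptoms = (True, False, False)
print(decision_table[symptoms])  # ['check for paper jam']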

Software engineering benefits


Decision tables make it easy to observe that all possible conditions are accounted for. In

the example above, every possible combination of the three conditions is given. In decision

tables, when conditions are omitted, it is obvious even at a glance that logic is missing. Compare

this to traditional control structures, where it is not easy to notice gaps in program logic with a

mere glance --- sometimes it is difficult to follow which conditions correspond to which actions!

Just as decision tables make it easy to audit control logic, decision tables demand that a

programmer think of all possible conditions. With traditional control structures, it is easy to

forget about corner cases, especially when the else statement is optional. Since logic is so

important to programming, decision tables are an excellent tool for designing control logic. In

one incredible anecdote, after a failed 6 man-year attempt to describe program logic for a file

maintenance system using flow charts, four people solved the problem using decision tables in

just four weeks. Choosing the right tool for the problem is fundamental.

Decision tree

In operations research, specifically in decision analysis, a decision tree is a decision

support tool that uses a graph or model of decisions and their possible consequences, including

chance event outcomes, resource costs, and utility. A decision tree is used to identify the

strategy most likely to reach a goal. Another use of trees is as a descriptive means for

calculating conditional probabilities. In data mining and machine learning, a decision tree is a

predictive model; that is, a mapping from observations about an item to conclusions about its

target value.

More descriptive names for such tree models are classification tree or reduction tree. In

these tree structures, leaves represent classifications and branches represent conjunctions of

features that lead to those classifications. The machine learning technique for inducing a

decision tree from data is called decision tree learning, or (colloquially) decision trees.

Four major steps in building Decision Trees:

1. Identify the conditions

2. Identify the outcomes (condition alternatives) for each decision

3. Identify the actions

4. Identify the rules.
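As a small illustration of these steps, a decision tree can be represented as nested dictionaries and walked from the root to a leaf. The discount policy below is invented purely for illustration:

# Internal nodes test a condition; leaves name the action to take.
tree = {
    "condition": "order_total > 1000",
    "yes": {"condition": "repeat_customer",
            "yes": "give 10% discount",
            "no":  "give 5% discount"},
    "no": "no discount",
}

def decide(node, facts):
    while isinstance(node, dict):           # descend until we reach a leaf
        branch = "yes" if facts[node["condition"]] else "no"
        node = node[branch]
    return node                             # the leaf is the chosen action

print(decide(tree, {"order_total > 1000": True, "repeat_customer": False}))
# -> give 5% discount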

Data Dictionary

A data dictionary is a set of metadata that contains definitions and representations of

data elements. Within the context of a DBMS, a data dictionary is a read-only set of tables and

views. Amongst other things, a data dictionary holds the following information:

• Precise definition of data elements

• Usernames, roles and privileges


• Schema objects

• Integrity constraints

• Stored procedures and triggers

• General database structure

• Space allocations

One benefit of a well-prepared data dictionary is a consistency between data items

across different tables. For example, several tables may hold telephone numbers; using a data

dictionary the format of this telephone number field will be consistent.

When an organization builds an enterprise-wide data dictionary, it may include both semantics

and representational definitions for data elements. The semantic components focus on creating

precise meaning of data elements. Representation definitions include how data elements are

stored in a computer structure such as an integer, string or date format (see data type). Data

dictionaries are one step along a pathway of creating precise semantic definitions for an

organization. Initially, data dictionaries are sometimes simply a collection of database columns

and definitions of the meanings and types the columns contain. Data dictionaries are

more precise than glossaries (terms and definitions) because they frequently have one or more

representations of how data is structured. Data dictionaries are usually separate from data

models since data models usually include complex relationships between data elements.

Data dictionaries can evolve into a full ontology (in the computer-science sense) when discrete logic has been

added to data element definitions.
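As a toy illustration of an entry that carries both a semantic and a representational definition, here is a minimal Python sketch; the field names are assumptions for illustration, not a standard:

from dataclasses import dataclass

@dataclass
class DataElement:
    # One data-dictionary entry: semantic plus representational definition.
    name: str
    meaning: str       # semantic component: the precise meaning
    data_type: str     # representational component: how it is stored
    format: str = ""   # e.g. a display or validation pattern

# Every table that stores a telephone number refers to this single entry,
# which keeps the format consistent across the database.
telephone = DataElement(
    name="telephone_number",
    meaning="Voice contact number for a person or organisation",
    data_type="string",
    format="+99-999-9999999",
)
print(telephone)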

Example of a Data Dictionary

Library Management System (data dictionary for a book record; figure not reproduced here)

Entity Relationship Diagram

The Entity Relationship (ER) data model allows us to describe the data involved in a real world

enterprise in terms of objects and their relationships and is widely used to develop an initial

data base design. Within the larger context of the overall design process, the ER model is used in

a phase called “Conceptual database design”.


Database design and ER Diagrams:

The database design process can be divided into six steps. The ER model is most relevant

to the first three steps.

1. Requirement Analysis:

The very first step in designing a database application is to understand what data is to

be stored in the database, what applications must be built on top of it, and what operations are

most frequent and subject to performance requirements. In other words, we must find out what

the users want from the database.

2. Conceptual database Design:

The information gathered in the requirements analysis step is used to develop a high-

level description of the data to be stored in the database, along with the constraints known to

hold over this data. The ER model is one of several high-level, or semantic, data models used in

database design.

3. Logical Database Design:

We must choose a DBMS and convert the conceptual database design into a database schema in the data model of the chosen DBMS. Normally we will consider a relational DBMS,

and therefore, the task in the logical design step is to convert an ER schema into a relational

database schema.

Beyond ER Design

4. Schema Refinement:

This step is to analyze the collection of relations in our relational database schema to

identify potential problems, and refine it.

5. Physical Database Design:

This step may simply involve building indexes on some table and clustering some tables

or it may involve substantial redesign of parts of database schema obtained from the earlier

steps.

6. Application and security Design:

Any software project that involves a DBMS must consider aspects of the application that

go beyond the database itself. We must describe the role of each entity (users, user group,

departments)in every process that is reflected in some application task, as part of a complete

workflow for the task. A DBMS Provides several mechanisms to assist in this step.

Entity Types, Attributes and Keys:


Entities are specific objects or things in the mini-world that are represented in the database.

For example, Employee or Staff, Department or Branch, and Project are entities.

Attributes are properties used to describe an entity.

For example an EMPLOYEE entity may have a Name, SSN, Address, Sex, BirthDate and

Department may have a Dname, Dno, DLocation.

A specific entity will have a value for each of its attributes.

For example a specific employee entity may have Name='John Smith', SSN='123456789',

Address='731 Fondren, Houston, TX', Sex='M', BirthDate='09-JAN-55'.

Each attribute has a value set (or data type) associated with it – e.g. integer, string,

subrange, enumerated type, …

Types of Attributes:

• Simple (Atomic) Vs. Composite

• Single Valued Vs. Multi Valued

• Stored Vs. Derived

• Null Values

Simple Vs. Composite Attribute:

• Simple

– Attributes that are not divisible are called simple or atomic attributes.

For example, Street_name or Door_number (the components into which a composite attribute is divided are themselves simple attributes).

• Composite


– The attribute may be composed of several components.

For example, Address (Apt#, House#, Street, City, State, ZipCode, Country) or Name

(FirstName, MiddleName, LastName). Composition may form a hierarchy where

some components are themselves composite.

Single Valued Vs. Multi Valued:

• Single Valued

– An attribute having only one value.

For example, Age, Date of birth, Sex, SSN.

• Multi-valued

– An entity may have multiple values for that attribute.

For example, Color of a CAR, Previous Degrees of a STUDENT, or the phone numbers of an employee. Denoted as {Color} or {PreviousDegrees}.

Stored Vs. Derived

In some cases two or more attribute values are related – for example, the age and date

of birth of a person. For a particular person entity, the value of age can be determined from the

current (today’s) date and the value of that person’s Birthdate.

The Age attribute is hence called a derived attribute and is said to be derivable from the

Birthdate attribute , which is called stored attribute.

Null Values

In some cases a particular entity may not have an applicable value for an attribute. For

example, a college degree attribute applies only to persons with college degrees. For such

situations, a special value called null is created.
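A rough Python sketch of a derived attribute and a null value, reusing the BirthDate example above (the helper function is illustrative):

from datetime import date

def age(birth_date, today):
    # Age is derived: computed from the stored BirthDate, never stored itself.
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

birth = date(1955, 1, 9)               # stored attribute: BirthDate = '09-JAN-55'
print(age(birth, date(2010, 1, 1)))    # 54: the 2010 birthday has not yet passed

college_degree = None                  # null: attribute not applicable here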

Key attributes of an Entity Type:

An important constraint on the entities of an entity type is the Key or uniqueness

constraint on attributes. An entity type usually has an attribute whose values are distinct for

each individual entity in the entity set. Such an attribute is called a Key attribute, and its values

can be used to identify each entity uniquely.

For example, the Name attribute is a key of the COMPANY entity type, because no two companies are

allowed to have the same name. For the person entity type, a typical key attribute is

socialSecurityNumber (SSN). In ER diagrammatic notation, each key attribute has its name

underlined inside the oval.

Relationships and Relationship sets:


A relationship is an association among two or more entities. For example we may have

the relationship that Antony works in the Marketing department. A relationship type R among the n entity types E1, E2, ..., En defines a set of associations, or a relationship set, among entities from these entity types.

Informally each relationship instance ri in R is an association of entities, where the

association includes exactly one entity from each participating entity type. Each such

relationship instance ri represents the facts that the entities participating in ri are related in

some way in the corresponding mini world situation. For example consider a relationship type

Works_for between the two entity types Employee and Department, which associates each

employee with the department for which the employee works. Each relationship instance in the

relationship set Works_for associates one employee entity and one department entity, as a figure (not reproduced here) illustrates.

Degree of a Relationship:

The degree of a relationship is the number of participating entity types. Hence the above

Works_for relationship is of degree two. A relationship type of degree two is called Binary, and

one of degree three is called Ternary. An example of Ternary relationship is given below.


Relationship of degree:

o two is binary

o three is ternary

o four is quaternary

Another Example for Binary Relationship

Example for Ternary Relationship.


Example for quaternary Relationship

Constraints on relationship Types:

Relationship types usually have certain constraints that limit the possible combinations

of entities that may participate in the corresponding relationship set. These constraints are

determined from the mini-world situation that the relationship represents. For example, in the Works_for relationship, if the company has a rule that each employee must work for exactly one department, then we would like to describe this constraint in the schema. There are two types:

1. Cardinality Ratio

Describes the maximum number of possible relationship occurrences for an entity

participating in a given relationship type.

2. Participation:

Determines whether all or only some entity occurrences participate in a relationship.

● Cardinality ratio (of a binary relationship): 1:1, 1:N, N:1, or M:N

SHOWN BY PLACING APPROPRIATE NUMBER ON THE LINK.

● Participation constraint (on each participating entity type): total (called existence

dependency) or partial.

SHOWN BY DOUBLE LINING THE LINK

NOTE: These are easy to specify for Binary Relationship Types.

Example for 1:1 relationship


Example for M:N relationships

Alternative (min, max) notation for relationship structural constraints:

● Specified on each participation of an entity type E in a relationship type R

● Specifies that each entity e in E participates in at least min and at most max relationship

instances in R

● Default(no constraint): min=0, max=n

● Must have min≤max, min≥0, max ≥1

● Derived from the knowledge of mini-world constraints

Examples:

● A department has exactly one manager and an employee can manage at most one

department.


– Specify (0,1) for participation of EMPLOYEE in MANAGES

– Specify (1,1) for participation of DEPARTMENT in MANAGES

● An employee can work for exactly one department but a department can have any

number of employees.

– Specify (1,1) for participation of EMPLOYEE in WORKS_FOR

– Specify (0,n) for participation of DEPARTMENT in WORKS_FOR
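As a hedged sketch of where such constraints end up, the following Python snippet uses sqlite3 to create one plausible relational schema for the WORKS_FOR (1:N) and MANAGES (1:1) examples above; the table and column names are illustrative assumptions:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE department (
    dno   INTEGER PRIMARY KEY,
    dname TEXT NOT NULL,
    -- MANAGES is (1,1) for DEPARTMENT and (0,1) for EMPLOYEE, so the manager
    -- is recorded on the department row and must be unique across departments.
    manager_ssn TEXT NOT NULL UNIQUE
);
CREATE TABLE employee (
    ssn  TEXT PRIMARY KEY,
    name TEXT NOT NULL,
    -- WORKS_FOR is (1,1) for EMPLOYEE, so dno is NOT NULL; a department may
    -- have any number of employees (0,n).
    dno  INTEGER NOT NULL REFERENCES department(dno)
);
""")
print("schema created")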

Types of Entity:

1. Strong Entity

a. Entity which has a key attribute in its attribute list.

2. Weak Entity

a. An entity which does not have a key attribute of its own.

ER Diagram

Notations for ER Diagram


A sample ER diagram for a company schema with structural constraints is shown in a figure (not reproduced here).


Identifying Relationship:

It is a relationship between a strong entity and a weak entity.

An ER Diagram for a Bank database Schema


Data Dictionary

The data dictionary (or data repository) or system catalog is an important part of the

DBMS. It contains data about data (or metadata). It means that it contains the actual database

descriptions used by the DBMS. In most DBMSs, the data dictionary is active and integrated. It

means that the DBMS checks the data dictionary every time the database is accessed. The data

dictionary contains the following information.

• Logical structure of the database

• Schemas, mappings and constraints

• Descriptions of application programs

• Descriptions of record types, data item types, and data aggregates in the database

• Descriptions of physical database design, such as storage structures, access paths, etc.

• Descriptions of the users of the DBMS and their access rights

A data dictionary may also be a separate part of the DBMS (a non-integrated data dictionary). It may be a commercial product or a simple file developed and maintained by a designer. This type of data dictionary is referred to as a freestanding data dictionary. This type of data dictionary is

useful in the initial stage of design for collecting and organizing information about data. For


example, each data item, its source, its name, its uses, its meaning, and its relationship to other items is stored in the data dictionary.

Analysts use data dictionaries for five important reasons:

1. To manage the detail in large systems.

2. To communicate a common meaning.

3. To document the features of the system.

4. To facilitate analysis of the details in order to evaluate

characteristics and determine where system changes should be

made.

5. To locate errors and omissions in the system.

A data dictionary is a catalog - a repository - of the elements in the system: a list of all the elements composing the data flowing through the system. The major elements are data flows, data stores, and processes.

Analysts use the data dictionary for five important reasons:

1. To manage the details in large systems:- Large systems have huge volumes of data flowing

through them in the form of documents, reports, and even conversations. Similarly, many

different activities take place that use existing data or create new details. All systems are

ongoing all of the time, and managing all the descriptive details is a challenge; the best way to meet it is to record the information.

2. To communicate a common meaning for all system elements:- Data dictionaries assist in

ensuring common meanings for system elements and activities.

For example: Order processing (Sales orders from customers are processed so that specified

items can be shipped) has invoice as one of its data elements. This is a common business term, but does it mean the same thing to all the people referring to it?

Does invoice mean the amount owed by the supplier?

Does the amount include tax and shipping costs?

How is one specific invoice identified among others?

Answers to these questions will clarify and define systems requirements by more completely

describing the data used or produced in the system. Data dictionaries record additional details

about the data flow in a system so that all persons involved can quickly look up the description

of data flows, data stores, or processes.

3. To document the features of the system :- Features include the parts or components and the

characteristics that distinguish each. We want to know about the processes and data stores, but we also need to know under what circumstances each process is performed and how often the

circumstances occur. Once the features have been articulated and recorded, all participants in

the project will have a common source for information about the system.


For example: Payment voucher - the vendor details include a vendor telephone field, in which the area code can be optional for local phones. Item details can be repeated for each item. A Purchasing Authorisation field may be added after the invoice arrives if it is a special order.

4. To facilitate analysis of the details in order to evaluate characteristics and determine where

system changes should be made:- The fourth reason for using data dictionaries is to determine

whether new features are needed in a system or whether changes of any type are in order.

Purpose of a Data Dictionary:

To DOCUMENT for all posterity the data elements, structures, flows, stores, processes, and

external entities in the system to be designed.

Optimum Format:

The structures and elements are on the ERD. The data flows, processes, and external entities are from the DFDs. Number the objects on the diagrams and refer to those objects in the data dictionary by their numbers, using "dot notation." Example: Diagram 1 has 4 processes, numbered 1 to 4. The explosion of process 1 will be Diagram 1.1, 1.2, etc., depending on how much detail can be derived from your Diagram 1. Explosions of 1.1 would be 1.1.1, 1.1.2, etc. The listings of your objects give the page numbers of the object descriptions.

Here are the definitions of these items. Keep in mind that a data dictionary can be created

INDEPENDENT of how the system will be implemented (i.e., a DBMS such as Sybase or Oracle, a spreadsheet, or a yellow legal pad). Please refer to the Casebook Appendix on page 79 for each of

the following.

Data Stores: An inventory of data; the whole of the data in a small system; a database. Very

large systems may have more than one database. Data Store Listing: The A# page is a reference

to the page number of each data store description. More than one data store description on

each page is OK. Data Store Description:

Brief description: A description which distinguishes this data store from others. In FSS, I

would anticipate only one data store.

Organization/Volume/Physical Implementation Info: How is the data store organized, ie

geographically, functionally. How big (MB) the data store is expected to be. What

implementation considerations are there? If this is a proposed data dictionary, keep your

mind open; if it’s an existing data dictionary, go ahead and note what DBMS (or color of legal

pad) you will be using. Note any other implementation issues.

Contents: List the structures in the data store.

Inflows and outflows: List these.

Data Structures: Specific arrangements of data attributes (elements) that define the

organization of a single instance of a data flow. Data structures belong to a particular data

store. Think about this... Remember that this could be manifested in any way from a yellow legal

pad to an advanced DBMS. One instance of data flow could be the answers on a form. The


answers would be put into an organized structure. However, I’d like for you to keep your mind open to the possibility that the "answers" on your form may actually populate more than one "table" in a database. For example, you may desire to collect information about an instance of an "employee." Employees have payroll information, perhaps work address information, and

human resources information. Because the purpose of this information is to serve several

systems or data flows, you might want to store the whole of the employee information in

several data structures. Data Structure Listing: Same as Data Store Listing above. Data Structure

Description:

Brief Description: Same as above.

Contents: List the data element (or attribute) names. Just list them, no need to give

details.

Used in Data Store(s), Data Structure(s), Data Flows(s): List these.

Data Elements: The descriptive property or characteristic of an entity. In database terms, this is

a "attribute" or a "field." In Microsoft and other vendor terms, this is a "property." Therefore,

the data element is part of a data structure, which is part of a data store. It has a source, and

there may be data entry rules you wish to enforce on this element. Data Element Listing: Same

as above. Data Element Description:

Brief Description: Same as above. Must distinguish from other elements.

Value Range/Interpretation: Has to do with the data type or range of acceptable values. Example: a data element for zip code is any group of 5 digits (or 5+4); it must be numeric and must be 5 digits. Or, another example, a code must be "A" through "E."

Source/Policy/Control/Editing Info: Where does the data element come from? Example:

Source: "from order form line 12" What are some business rules about this data? Must

this element also exist in another structure before it can exist in this one?

Used in Data Store(s), Data Structure(s), Data Flows(s): List these.

Data Flows: Represent an input of data to a process, or the output of data (or information) from

a process. A data flow is also used to represent the creation, deletion, or updating of data in a

file or database (called a "data store" in the Data Flow Diagram (DFD)). Note that a data flow is

not a process, but the passing of data only. These are represented in the DFD. Each data flow is

a part of a DFD Explosion of a particular process. Data Flow Listing: Same as above. Data Flow

Description: Note the DFD Explosion #.

Brief Description: Same as above.

Volume/Physical Implementation Info: What is the volume of this data flow per day,

week, month, whatever is appropriate? Does this dataflow require an Internet

connection? A car?

Source and Destination: Fill out as appropriate. What are the sources and destinations of

this data flow?


Processes: Work performed on or in response to, incoming data flows or conditions. The FSS

Context Diagram includes "FSS System" and all the inputs and outputs of the FSS system. The

DFD Diagram 0 (if you’d included everything, just as an example) would be 1. Accounts Payable,

2. Order Processing, 3. Catering Processing, 4. Payroll Processing, 5. Accounts Payable

Processing, 6. Warehouse Processing, 7. Inventory Processing, and 8. Store Processing. Each of

these processes has inputs and outputs that are independent of the other. Process Listing: Same

as above. Process Description: Note the DFD Explosion #.

Brief Description: Same as above.

Purpose/Physical Implementation Info: Why do we need this process? State a specific

purpose. Indicate any implementation issues. Does the process need to accept certain

types of data? A certain type of DBMS or ODBC? What are the inflows and outflows? (If

inflows and outflows are the same... then you have a data flow and not a process.)

External Entities: A person, organization unit, other system, or other organization that lies

outside the scope of the project but that interacts with the system being studied. External

entities provide the net inputs into a system and receive net outputs from the system. This and

the others are definitions from the book. Basically, external entities are those entities that are

outside the scope of the project or system at hand. External Entities Listing: Same as above.

External Entities Description:


Chapter 6- Design

Design is the first step in the development phase of any engineering product or system. It

may be defined as the process of applying various techniques and principles for the purpose of

defining a device, a process or a system in sufficient detail to permit its physical realization.

The designer’s goal is to produce a model of an entity that will later be built. The process by

which the model is developed combines:

• Intuition and judgment based on experience in building similar entities

• A set of principles and/or heuristics that guide the way in which the model evolves

• A set of criteria that enables the quality to be judged.

Using one of a number of design methods, the design step produces a data design, an

architectural design and a procedural design. The data design transforms the information

domain model created during analysis into the data structures that will be required to

implement the software. The architectural design defines the relationship between major

structural components of the program. The procedural design transforms structural components

into a procedural description of the software. Source code is generated and testing is conducted to

integrate and validate the software. Software design is important primarily to create quality

software. Design is the place where quality is fostered in the software development. Design

provides us with a representation of the software that can be assessed for quality. Design is the

only way one can accurately translate a customer's requirements into a finished software

product or a system.

Develop System Flowchart

A system flowchart is a concrete, physical model that documents, in an easily visualized,

graphical form, the system’s discrete physical components (its programs, procedures, files,

reports, screens, etc.).

A system flowchart is a valuable presentation aid because it shows how the system’s

major components fit together and interact. In effect, it serves as a system roadmap. During the

information gathering stage, a system flowchart is an excellent tool for summarizing a great

deal of technical information about the existing system. A system flowchart can also be used to

map a hardware system.

System flowcharts are valuable as project planning and project management aids. Using

the system flowchart as a guide, discrete units of work (such as writing a program or installing a

new printer) can be identified, cost estimated, and scheduled. On large projects, the

components suggest how the work might be divided into subsystems.

Historically, some analysts used system flowcharts to help develop job control language

specifications. For example, IBM’s System/370 job control language requires an EXEC statement

for each program and a DD statement for each device or file linked to each program.


Consequently, each program symbol on the system flowchart represents an EXEC statement and

each file or peripheral device symbol linked to a program by a flowline implies a need for one DD

statement. Working backward, preparing a system flowchart from a JCL listing is a good way to

identify a program’s linkages.

A system flowchart’s symbols represent physical components, and the mere act of

drawing one implies a physical decision. Consequently, system flowcharts are poor analysis tools

because the appropriate time for making physical decisions is after analysis has been

completed.

A system flowchart can be misleading. For example, an on-line storage symbol might

represent a diskette, a hard disk, a CD-ROM, or some combination of secondary storage devices.

Given such ambiguity, two experts looking at the same flowchart might reasonably envision two

different physical systems. Consequently, the analyst’s intent must be clearly documented in an

attached set of notes.

Flowcharting symbols and conventions

On a system flowchart, each component is represented by a symbol that visually

suggests its function (Figure 37.1). The symbols are linked by flowlines. A given flowline might

represent a data flow, a control flow, and/or a hardware interface. By convention, the direction

of flow is from the top left to the bottom right, and arrowheads must be used when that

convention is not followed. Arrowheads are recommended even when the convention is

followed because they help to clarify the documentation.


Predefined processes

The flowchart for a complex system can be quite large. An off-page connector symbol

(resembling a small home plate) can be used to continue the flowchart on a subsequent page,

but multiple-page flowcharts are difficult to read. When faced with a complex system, a good

approach is to draw a high-level flowchart showing key functions as predefined processes and

then explode those predefined processes to the appropriate level on subsequent pages.

Predefined processes are similar to subroutines.

Figure 37.2: Use predefined processes to simplify a complex system flowchart.

For example, Figure 37.2 shows a system flowchart for processing a just-arrived

shipment into inventory. Note that the shipment is checked (in a predefined process) against the

Shipping documents (the printer symbol) and recorded (the rectangle) in both the Inventory file

and the Vendor file using data from the Item ordered file.

Figure 37.3 is a flowchart of the predefined process named Check shipment. When a shipment

arrives, it is inspected manually and either rejected or tentatively accepted. The appropriate

data are then input via a keyboard/display unit to the Record shipment program (the link to

Figure 37.2). Unless the program finds something wrong with the shipment, the newly arrived

stock is released to the warehouse. If the shipment is rejected for any reason, the Shipping

documents are marked and sent to the reorder process.

Structure Chart

A structure chart is a hierarchy chart with arrows showing the flow of data and control

information between the modules. Structure charts are used as design tools for functionally

decomposing structured programs.


A structure chart graphically highlights tightly coupled, excessively dependent modules

that can cause debugging and maintenance problems. A structure chart is an extension of a

hierarchy chart, so the core of this tool is consistent with other tools. A complete structure chart

that shows all the data and control flows for a program can be very difficult to read, however.

A structure chart is a hierarchy chart that shows the data and control information flows

between the modules. (Figure shows a partial structure chart.) Each module is represented as a

rectangle. Each data flow (or data couple) is shown as an arrow with an open circle at the origin

end. A control couple (a flow of control information such as a flag or a switch setting) is shown

as an arrow with a solid circle at the origin end, see the control couple labeled Reorder flag

between Process sale and Process transaction in Figure. (Note: In this program design, Initiate

reorder is an independent (not shown) level-2 module called by Process transaction when the

Reorder flag is set.) As appropriate, the names of the data elements, data composites, and/or

control fields are written alongside the arrows.

A structure chart does not show the program’s sequence, selection, or repetitive logical

structures; those details are inside the modules, which are viewed as black boxes. However,

some designers identify high-level case structures by adding a transaction center to a key

control module. For example, the solid diamond (the transaction center symbol) at the bottom

of the Process transaction module indicates that, based on the transaction type, either Process

sale, Process customer return, or Process shipment arrival is called.

A data couple might list a composite item; for example, Get data passes a complete transaction

and the associated master record back to Process transaction. Higher-level modules generally

select substructures or specific data elements from a composite and pass them down to their

children. At the bottom of the structure, the detailed computational modules accept and return

data elements.

The structured program designer’s objective is to define independent, cohesive, loosely

coupled modules. Coupling is a function of the amount of data and control information flowing

between two modules, and the structure chart graphically shows the data and control flows. An

excessive number of data or control flows suggests a poor design or a need for further

decomposition.
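As an illustration of data and control couples, here is a minimal Python sketch of the example above; all names are assumed for illustration and do not come from the figure itself:

    # Process sale receives data couples (a transaction and its master record)
    # and returns a control couple (the reorder flag) to its caller.
    def process_sale(transaction, master_record):
        master_record["stock"] -= transaction["quantity"]
        reorder_flag = master_record["stock"] < master_record["reorder_point"]
        return master_record, reorder_flag

    def initiate_reorder(master_record):
        # Level-2 module invoked only when the control couple is set.
        print("Reorder placed for", master_record["item"])

    def process_transaction(transaction, master_record):
        master_record, reorder_flag = process_sale(transaction, master_record)
        if reorder_flag:
            initiate_reorder(master_record)

    process_transaction({"quantity": 4},
                        {"item": "widget", "stock": 5, "reorder_point": 2})

Passing only the data each module needs keeps the modules loosely coupled, which is exactly what the structure chart is meant to expose.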


Software Design and Documentation Tools

The concept of using tools in software engineering is similar to that of tools used in other
engineering production technologies, such as the automobile industry.

• A software tool is general-purpose software that carries out a highly specific human task,
completely or partially eliminating the need for human skill in that task.

• Though there are various types of software tools available, we study here only those

which can automate some or many of the software system development tasks.

These are broadly termed as Software development (design) tools. CASE (Computer Aided

Software Engineering) tool is another broader set of tools.


The basic purposes of using software tools are as follows:

• Increased Speed and Accuracy.

• High-level of automation brings in lot of Standardization.

• Less dependence on human skills.

• For high volume of work, the costs are considerably low.

• Tasks can be iterated several times.

• Versatile because of high programmability.

Thus the use of tools increases the development productivity several times.

GUI Design

GUI stands for Graphical User Interface. Since the PC was invented, the

computer systems and application systems have started considering the GUI as one of the most

important aspect of system design. The GUI design component in every application system will

vary depending upon the importance of the user interactions for the success of the system.

However, by now, we have definitely shifted successfully from purely non-interactive,
batch-oriented application systems, having almost no GUI component, to present-day
applications such as games, animations, and Internet-based application systems, where the
success of the application rests almost completely on the effectiveness of the GUI.

The Goals of a good GUI:


1. Provide the best way for people to interact with computers. This is commonly known as Human-Computer Interaction (HCI).

2. The presentation should be user-friendly. A user-friendly interface is helpful; e.g., it should not only tell the user that he/she has committed an error, but also provide guidance on how to rectify it quickly.

3. It should provide information on what the error is and how to fix it.

4. User-friendliness also includes being tolerant and adaptable.

5. A good GUI makes the user more productive.

6. A good GUI is more effective because it helps find the best solution to a problem.

7. It is also efficient because it helps the user find such a solution more quickly and with the least error.

8. For a user using a computer system, the workspace is the computer screen. The goal of a good GUI is to make the best possible use of the user's workspace.

9. A good GUI should be robust. High robustness implies that the interface should not fail because of some action taken by the user. Also, any user error should not lead to a system breakdown.

10. The usability of a GUI is expected to be high. Usability is measured in various terms.

11. A good GUI has high analytical capability: most of the information needed by the user appears on the screen.

12. With a good GUI, the user finds the work easier and more pleasant.

13. The user should be happy and confident using the interface.

14. A good GUI demands a low cognitive workload; i.e., the mental effort required of the user to use the system should be the least. In fact, the GUI closely approximates the user's mental model of, or reactions to, the screen.

15. For a good GUI, user satisfaction is HIGH.

These are the characteristics, or goals, of a good GUI.

HIPO (Hierarchy plus Input-Process-Output)

The HIPO technique is a tool for planning and/or

documenting a computer program. A HIPO model consists of a hierarchy chart that graphically

represents the program’s control structure and a set of IPO (Input-Process-Output) charts that

describe the inputs to, the outputs from, and the functions (or processes) performed by each

module on the hierarchy chart.

1. HIPO is a commonly used method for developing software. An acronym for Hierarchical
Input-Process-Output, this method was developed by IBM for its large, complex
operating systems.


2. Purpose: The assumption on which HIPO is based is that one often loses track of the
intended function of a large system. This is one reason why it is difficult to compare
existing systems against their original specifications, and therefore why failure can occur
even in systems that are technically well formulated. From the user's view, a single
function can often extend across several modules. The concern of the analyst is
understanding, describing, and documenting the modules and their interactions in a way
that provides sufficient detail but does not lose sight of the larger picture.

3. HIPO diagrams are graphic, rather than prose or narrative, descriptions of the system.

4. They assist the analyst in answering three questions:

a. What does the system or module do?

b. How does it do it?

c. What are the inputs and outputs?

5. A HIPO description for a system consists of:

a. a visual table of contents (VTOC), and

b. functional diagrams.

6. Advantages:

a. HIPO diagrams are effective for documenting a system.

b. They also aid designers and force them to think about how specifications will be

met and where activities and components must be linked together.

7. Disadvantages:

a. They rely on a set of specialized symbols that require explanation, an extra
concern when compared to the simplicity of, for example, data flow diagrams.

b. HIPO diagrams are not as easy to use for communication purposes as many people
would like. And, of course, they do not guarantee error-free systems. Hence, their
greatest strength is the documentation of a system.

Warnier-Orr Diagrams

A Warnier-Orr diagram, a graphical representation of a horizontal hierarchy with

brackets separating the levels, is used to plan or document a data structure, a set of detailed

logic, a program, or a system.

Warnier-Orr diagrams are excellent tools for describing, planning, or documenting data

structures. They can show a data structure or a logical structure at a glance. Because only a

limited number of symbols are required, specialized software is unnecessary and diagrams can

be created quickly by hand. The basic elements of the technique are easy to learn and easy to

explain. Warnier-Orr diagrams map well to structured code.

The structured requirements definition methodology and, by extension, Warnier-Orr

diagrams are not as well known as other methodologies or tools. Consequently, there are

relatively few software tools to create and/or maintain Warnier-Orr diagrams and relatively few

systems analysts or information system consultants who are proficient with them.


In-out diagrams

A Warnier-Orr diagram shows a data structure or a logical structure as a horizontal

hierarchy with brackets separating the levels. Once the major tasks are identified, the systems

analyst or information system consultant prepares an in-out Warnier-Orr diagram to document

the application’s primary inputs and outputs.

For example, Figure shows an in-out diagram for a batch inventory update application.

Start at the left (the top of the hierarchy). The large bracket shows that the program, Update

Inventory, performs five primary processes ranging from Get Transaction at the top to Write

Reorder at the bottom. The letter N in parentheses under Update Inventory means that the

program is repeated many (1 or more) times. The digit 1 in parentheses under Get Transaction

(and the next three processes) means the process is performed once. The (0, 1) under Write

Reorder means the process is repeated 0 or 1 times, depending on a run-time condition. (Stock

may or may not be reordered as a result of any given transaction.)

Figure 33.1 An in-out Warnier-Orr diagram.

Data flow into and out from every process. The process inputs and outputs are identified

to the right of the in-out diagram. For example, the Get Transaction process reads an Invoice

and passes it to a subsequent process. The last column is a list of the program’s primary input

and output data structures. Note how the brackets indicate the hierarchical levels.


Data structures

After the in-out diagram is prepared, the data structures are documented. For example,

Figure 33.2 shows the data structure for an invoice.

Figure 33.2 A Warnier-Orr diagram of the Invoice data structure.

The highest level composite, Invoice, is noted at the left. The N in parentheses under the

data name means that there are many (one or more) invoices. Moving to the right of the first

bracket are the components that make up an invoice. Invoice number, Date-of-sale, Customer

telephone, Subtotal, Sales tax, and Total due are data elements, while Customer name,

Customer address, and Item purchased are composite items that are further decomposed.

Consider the composite item Customer name. The composite name appears at the left

separated from its lower-level data elements by a bracket. Three of the data elements that

make up Customer name are conditional; Customer title (Dr, Mr., Ms.), Customer middle (not

everyone has a middle name), and Customer suffix (Sr., Jr., III) may or may not be present on a

given Invoice. The entry (0, 1) under a data element name indicates that it occurs 0 or 1 times.

A given sales transaction might include several different products, so Item purchased is a

repetitive data structure that consists of one or more sets of the data elements Stock number,

Description, Units, Unit price, and Item total. The letter M in parenthesis under Item purchased

indicates that the substructure is repeated an unknown number of times. (Note: M and N are

different values.) The composite item, Units, can hold either Weight or Quantity, but not both.

The “plus sign in a circle” is an exclusive or symbol.
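The same structure can be expressed in code. Below is a minimal Python sketch of the Invoice structure, with illustrative field names assumed: Optional fields model the (0, 1) elements, the list models the (M) repetition, and a comment marks the exclusive-or between Weight and Quantity.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class CustomerName:
        first: str
        last: str
        title: Optional[str] = None    # (0,1): Dr., Mr., Ms.
        middle: Optional[str] = None   # (0,1): not everyone has one
        suffix: Optional[str] = None   # (0,1): Sr., Jr., III

    @dataclass
    class ItemPurchased:
        stock_number: str
        description: str
        unit_price: float
        item_total: float
        weight: Optional[float] = None   # exclusive or: exactly one of
        quantity: Optional[int] = None   # weight / quantity is present

    @dataclass
    class Invoice:
        invoice_number: str
        date_of_sale: str
        customer_name: CustomerName
        customer_address: str
        customer_telephone: str
        subtotal: float
        sales_tax: float
        total_due: float
        items_purchased: List[ItemPurchased] = field(default_factory=list)  # (M)

    inv = Invoice("1001", "2011-06-01", CustomerName("Asha", "Patel", title="Ms."),
                  "12 Main St", "555-0101", 100.0, 5.0, 105.0,
                  [ItemPurchased("S-42", "Widget", 10.0, 30.0, quantity=3)])
    print(inv.customer_name.title, len(inv.items_purchased))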


Program (logic) structures

A key principle of the Warnier-Orr methodology is that the structure of a well-written

program is tied to the structure of its data. For example, because the number of invoices is

unknown, the primary structure of an inventory update program designed to process the data

described in Figure 33.2 will be a repetitive loop. At the second level, the number of items

purchased is unknown, suggesting another loop structure to compute and accumulate the item

costs. Finally, the exclusive or and the conditional items at the data element level suggest

selection logic.

1. The ability to show the relationship between processes and steps in a process is not
unique to Warnier/Orr diagrams, nor is the use of iteration or the treatment of individual
cases.

2. Both structured flowcharts and structured-English methods do this equally well.
However, the approach used to develop system definitions with Warnier/Orr diagrams
is different and fits well with those used in logical system design.

3. To develop a Warnier/Orr diagram, the analyst works backwards, starting with the
system's output and using an output-oriented analysis.

On paper, the development moves from left to right. First, the intended output or results
of processing are defined.

4. At the next level, shown by inclusion with a bracket, the steps needed to produce the

output are defined. Each step in turn is further defined. Additional brackets group the

processes required to produce the result on the next level.

5. A complete Warnier/Orr diagram includes both process groupings and data

requirements. Both elements are listed for each process or process component. These

data elements are the ones needed to determine which alternative or case should be

handled by the system and to carry out the process.

6. The analyst must determine where each data element originates, how it is used, and
how individual elements are combined. When the definition is completed, the data
structure for each process is documented. It, in turn, is used by the programmers, who
work from the diagrams to code the software.

Advantages:

1. Warnier/Orr diagrams offer some distinct advantages to system experts. They are simple

in appearance and easy to understand. Yet they are powerful design tools.

2. They have the advantage of showing grouping of processes and the data that must be

passed from level to level.

3. In addition, the sequence of working backwards ensures that the system will be result-

oriented.

4. This method is also a natural process. With structured flowcharts, for example, it is often
necessary to determine the innermost steps before iterations and modularity.


Designing Databases


Chapter 7- Designing Input, Output & User Interface

Input Design

Goals of Input Design:

1. The data which is input to a computerized system should be correct. Carelessly entered
data will lead to erroneous results. The system designer should ensure that the system
prevents such errors.

2. The volume of data should be considered, since if any error occurs, it will take a long
time to find the source of the error. The design of inputs involves the following four tasks:

a. Identify data input devices and mechanisms.

b. Identify the input data and attributes

c. Identify input controls

d. Prototype the input forms

a. Identify data input devices and mechanisms.

To reduce input error:

1. Automate the data entry process for reducing human errors.

2. Validate the data completely and correctly at the location where it is entered.

Reject the wrong data at its source only.

b. Identify the input data and attributes

1. It involves identifying information flow across the system boundary.

2. When the input form is designed, the designer ensures that all these data
elements are provided for entry, validation, and storage.

c. Identify input controls

1. Input integrity controls are used with all kinds of mechanisms, which help
reduce data entry errors at the input stage and ensure completeness.

2. Various error detection and correction techniques are applied.

d. Prototype the input forms

1. The users should be provided with the form prototype and its related
functionality, including validation, help, and error messages.
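As a sketch of tasks (c) and (d), the following Python fragment validates a record at its point of entry and returns guidance for each error; the field names and rules are assumptions for illustration only:

    def validate_order_entry(record):
        # Reject wrong data at its source and explain how to fix it.
        errors = []
        if not record.get("customer_id", "").isdigit():
            errors.append("customer_id must be numeric")
        try:
            if int(record.get("quantity", 0)) <= 0:
                errors.append("quantity must be a positive whole number")
        except ValueError:
            errors.append("quantity must be a whole number")
        return errors  # an empty list means the record may be accepted

    print(validate_order_entry({"customer_id": "A7", "quantity": "0"}))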

Output Design

Goals of Output Design:

In order to select an appropriate output device and to design a proper output format,

it is necessary to understand the objectives the output is expected to serve.

The following questions can address these objectives:

1. Who will use the report?

2. What is the proposed use of the report?

3. What is the volume of the output?


4. How often is the output required?

a. If the report is for top management, it must be summarized, highlighting important results.

Graphical outputs, e.g., bar charts and pie charts, convey information in a form useful for

decision making.

b. If the report is for middle management, it should highlight exceptions. For example, in a stores

management system, information on whether items are rapidly consumed or not consumed

for a longer period should be highlighted. Exceptions convey information for tactical

management.

c. At operational level all relevant information needs to be printed. For example, in a Payroll

system, pay slips for all employees have to be printed monthly.

d. The system should provide the volume of information appropriate for user requirement. The

total volume of printed output should be minimal. Irrelevant reports should be avoided.

User Interface Design

User interface design is the specification of a conversation between the system user
and the computer. It generally results in the form of input or output. There are several types of
user interface styles, including menu selection, instruction sets, question-answer dialogue, and
direct manipulation.

1. Menu selection: A strategy of dialogue design that presents a list of alternatives or
options to the user. The system user selects the desired alternative or option by keying in
the number associated with it (see the sketch after this list).

2. Instruction sets: A strategy where the application is designed using a dialogue syntax
that the user must learn. There are three types of syntax: structured English, mnemonic
syntax, and natural language.

3. Question-answer dialogue: A style that was primarily used to supplement either
menu-driven or syntax-driven dialogues. The system asks questions that call for yes-or-no
answers. It was also popular in developing interfaces for character-based screens in
mainframe applications.

4. Direct manipulation: Allows graphical objects to appear on a screen. Essentially, this
user interface style focuses on using icons (small graphical images) to suggest functions
to the user.
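A minimal Python sketch of the menu-selection style follows; the options shown are assumed for illustration:

    options = {"1": "Enter new order", "2": "Modify order", "3": "Exit"}

    def show_menu():
        # Present the list of alternatives with their associated numbers.
        for number, label in options.items():
            print(number, label)

    def select(choice):
        # The user keys in the number associated with the desired option.
        return options.get(choice, "Invalid selection - please enter 1, 2 or 3")

    show_menu()
    print(select("2"))  # -> Modify order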


Chapter 8- Testing

Testing is a process of executing a program with the intent of finding an error. A good test

case is one that has a high probability of finding an as yet undiscovered error. A successful test is

one that uncovers previously undiscovered errors. The main objective of testing, thus, is to

design tests that uncover different classes of errors, and to do so with minimum time and effort

spent.

Two classes of input are provided to the test process:

• A software configuration that includes a Software Requirement Specification, Design

Specification and source code.

• A test configuration that includes a test plan and procedure, any testing tools to be used

and testing cases with their expected results.

Tests are conducted and the results are compared to the expected results. When erroneous

data are uncovered, an error is implied and debugging commences. The results of testing give a

qualitative indication of the software quality and reliability. If severe errors requiring design

modification are encountered regularly, the reliability is suspect. On the other hand, uncovering

of errors that can be easily fixed implies either that the software quality is good, or that the test

plan is inadequate.

Testing Objectives

In an excellent book on software testing, Glen Myers [MYE79] states a number of rules

that can serve well as testing objectives:

1. Testing is a process of executing a program with the intent of finding an error.

2. A good test case is one that has a high probability of finding an as-yet-undiscovered

error.

3. A successful test is one that uncovers an as-yet-undiscovered error.

These objectives imply a dramatic change in viewpoint. They move counter to the commonly

held view that a successful test is one in which no errors are found. Our objective is to design

tests that systematically uncover different classes of errors and to do so with a minimum

amount of time and effort.

If testing is conducted successfully (according to the objectives stated previously), it will

uncover errors in the software. As a secondary benefit, testing demonstrates that software

functions appear to be working according to specification, that behavioral and performance

requirements appear to have been met. In addition, data collected as testing is conducted

provide a good indication of software reliability and some indication of software quality as a

whole. But testing cannot show the absence of errors and defects, it can show only that

software errors and defects are present. It is important to keep this (rather gloomy) statement

in mind as testing is being conducted.


Testing Principles

Before applying methods to design effective test cases, a software engineer must

understand the basic principles that guide software testing.

• All tests should be traceable to customer requirements.

As we have seen, the objective of software testing is to uncover errors. It follows that the

most severe defects (from the customer’s point of view) are those that cause the program to fail

to meet its requirements.

• Tests should be planned long before testing begins.

Test planning can begin as soon as the requirements model is complete. Detailed

definition of test cases can begin as soon as the design model has been solidified. Therefore, all

tests can be planned and designed before any code has been generated.

• The Pareto principle applies to software testing.

Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during

testing will likely be traceable to 20 percent of all program components. The problem, of course,

is to isolate these suspect components and to thoroughly test them.

• Testing should begin “in the small” and progress toward testing “in

the large.”

The first tests planned and executed generally focus on individual components. As

testing progresses, focus shifts in an attempt to find errors in integrated clusters of components

and ultimately in the entire system.

• Exhaustive testing is not possible.

The number of path permutations for even a moderately sized program is exceptionally

large. For this reason, it is impossible to execute every combination of paths during testing. It is

possible, however, to adequately cover program logic and to ensure that all conditions in the

component-level design have been exercised.

• To be most effective, testing should be conducted by an independent third party.

By most effective, we mean testing that has the highest probability of finding errors (the

primary objective of testing). The software engineer who created the system is not the best

person to conduct all tests for the software.

Strategic approach to software testing


Testing is a set of activities that can be planned in advance and conducted

systematically. For this reason a template for software testing -- a set of steps into which we can

place specific test case design techniques and testing methods -- should be defined for the

software process.

A number of software testing strategies have been proposed in the literature. All provide

the software developer with a template for testing and all have the following generic

characteristics.

• Testing begins at the component level and works 'outward' toward the integration of

the entire computer-based system.

• Different testing techniques are appropriate at different points in time.

• Testing is conducted by the developer of the software and (for large projects) an

independent test group.

• Testing and debugging are different activities, but debugging must be accommodated in

any testing strategy.

A strategy for software testing must accommodate low-level tests that are necessary to

verify that a small source code segment has been correctly implemented as well as high-level

tests that validate major system functions against customer requirements. A strategy must

provide guidance for the practitioner and a set of milestones for the manager. Because the steps

of the test strategy occur at a time when deadline pressure begins to rise, progress must be

measurable and problems must surface as early as possible.

White Box Testing

This testing is also referred to as glass box testing or code testing. It is a test case design

that uses the control structure of the procedural design to derive test cases. Using white-box

testing methods, the software engineer can derive test cases that do all of the following:

1. Exercise all independent paths within the module at least once.

2. Exercise all logical decisions for both true and false scenarios.

3. Execute all loops at their boundaries and within their operational bounds.

4. Exercise all internal data structures to ensure their validity.

Even though code testing seems like an ideal method for testing software, it does not

guarantee against software failures due to faulty data. Also, it is realistically not possible to

perform such extensive and exhaustive testing in large organizations with complex systems.

Basis path testing


Basis path testing is a white box testing technique. The basis path enables the test case

designer to derive a logical complexity measure of a procedural design and use this measure as

a guide for defining the basis set of execution paths. Test cases derived to exercise the basis set

are guaranteed to execute every statement in the program at least once during testing.

1. Flow graph notation

A simple notation for the representation of control flow must be introduced. The flow

graph depicts logical control flow using the notations illustrated in the figure.

Referring to the figure, we define the following:

i. Node: Each circle in the flow graph represents one or more procedural statements and is

referred to as a flow graph node.

ii. Edge: The arrows on the flow graph are called edges or links. They represent the flow of

control and are analogous to flow chart arrows. An edge must always terminate on a

node.

iii. Region: An area bounded by edges and nodes is called a region. The area outside the

graph is also considered a region.

iv. Predicate node: Each node that contains a condition is called a predicate node and is

characterized by two or more edges emerging from it.

2. Cyclomatic complexity

Cyclomatic complexity is a software metric that provides a quantitative measure of the

logical complexity of a program. It defines the number of independent paths in the basis set of a


program, and hence establishes the number of tests that must be conducted to ensure that

every statement is executed at least once.

An independent path is defined as any path in the program that introduces at least one

new processing statement or condition. When stated in terms of a flow graph, an independent

path must traverse at least one edge that has not been followed before.

For example, in figure 11.2 (B), the set of independent paths is:

Path 1: 1-11

Path 2: 1-2-3-4-5-10-1-11

Path 3: 1-2-3-6-8-9-10-1-11

Path 4: 1-2-3-6-7-9-10-1-11

Note that each of these paths introduces a new edge. Paths 1, 2, 3 and 4 thus constitute a set of

basis paths for the flow graph in figure 11.2 (B). That is, if tests can be designed to force

execution along these paths, every statement in the program will be executed at least once and

every condition will be evaluated on its true and false side.

The number of paths is determined by the cyclomatic complexity. Complexity is computed in one

of three ways.

i. The number of regions of the flow graph corresponds to the cyclomatic complexity.

ii. Cyclomatic complexity V(G) for a flow graph, G, is also defined as:

V(G) = E - N + 2

where E is the number of edges and N is the number of nodes in the graph.

iii. Cyclomatic complexity V(G) for a flow graph, G, is also defined as:


V(G) = P + 1

where P is the number of predicate nodes contained in the flow graph G.

Thus, for the flow graph in figure 11.2 (B),

i. The flow graph has 4 regions.

ii. V(G) = 11 edges - 9 nodes + 2 = 4.

iii. V(G) = 3 predicate nodes + 1 = 4.

Hence the cyclomatic complexity of the flow graph is 4.
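The same three computations can be checked mechanically. The Python sketch below encodes an adjacency list whose topology is reconstructed (as an assumption) from the path listing above, with sequential statements 2, 3 and 4, 5 merged into single nodes so that the counts match the figure:

    flow_graph = {
        "1":   ["2,3", "11"],   # loop test: predicate node
        "2,3": ["4,5", "6"],    # predicate node
        "4,5": ["10"],
        "6":   ["7", "8"],      # predicate node
        "7":   ["9"],
        "8":   ["9"],
        "9":   ["10"],
        "10":  ["1"],
        "11":  [],
    }

    nodes = len(flow_graph)                                             # 9
    edges = sum(len(out) for out in flow_graph.values())                # 11
    predicates = sum(1 for out in flow_graph.values() if len(out) > 1)  # 3

    print(edges - nodes + 2)   # V(G) = E - N + 2 -> 4
    print(predicates + 1)      # V(G) = P + 1     -> 4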

3. Deriving test cases

The basis set can be derived using the following steps:

i) Using the design or code as a foundation, draw a corresponding flow graph.

ii) Determine the cyclomatic complexity of the resultant flow graph.

iii) Determine a basis set of linearly independent paths.

iv) Prepare test cases that will force execution of each path in the basis set.

4. Control structure testing

The basis path testing technique described is one of a number of white box testing

techniques. Although this method is simple and effective, it is not sufficient in itself. In this

section, other variations on control structure testing are discussed. These broaden the testing

coverage and improve the quality of white box testing.

1) Condition testing

Condition testing is a test case design method that exercises the logical

conditions contained in a program module. A simple condition is a Boolean variable or a

relational expression, possibly preceded with one NOT operator. A relational expression

takes the form:

E1 <relational operator> E2

where E1 and E2 are arithmetic expressions and <relational operator> is one of the

following:

<, >, =, != (not equal), <=, >=.

A compound condition is composed of two or more simple conditions, Boolean

operators and parentheses. We assume that Boolean operators allowed in a compound

condition include OR (|), AND (&) and NOT (-). A condition without relational expressions

is referred to as a Boolean expression.

If a condition is incorrect, then at least one component of the condition is

incorrect. Thus, types of

errors in a condition include the following:


• Boolean operator error

• Boolean variable error

• Boolean parentheses error

• Relational operator error

• Arithmetic expression error

The condition testing method focuses on testing each condition in the program.

The purpose of condition testing is to detect not only errors in the conditions of a

program but also other errors in the program.
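A minimal sketch of exercising a compound condition: every combination of truth values of its simple conditions is generated, so each Boolean operator and operand is evaluated on both its true and false side. The condition itself is an assumed example:

    from itertools import product

    # Assumed compound condition under test: a AND (NOT b).
    def compound(a, b):
        return a and not b

    for a, b in product([False, True], repeat=2):
        print(f"a={a}, b={b} -> {compound(a, b)}")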

2) Data flow testing

The data flow testing method selects test paths of a program according to the

locations of definition and use of variables in the program. A number of data flow testing

strategies exist.

These strategies are useful for selecting test paths of a program containing

nested-if and nested-loop statements. Since the statements in a program are related to

each other according to the definitions and uses of variables, the data flow testing

approach is effective for error detection.

3) Loop testing

Loop testing is another white-box testing technique that focuses exclusively on

the validity of loop constructs. Four different classes of loops can be defined:

a) Simple loops. These are tested by applying the following tests (see the sketch after this list):

• Skip the loop entirely

• Only one pass through the loop

• m passes through the loop where m < n (n is the maximum allowable number of

passes through the loop)

• n-1, n, n+1 passes through the loop.

b) Nested loops. If we were to extend the same strategy used for simple loops to

nested loops, the number of tests would geometrically increase with the level of

nesting. A different method is used to test these loops, which helps to reduce the

number of tests:

• Start at the innermost loop. Set all other loops to their minimum value.

• Conduct simple tests for the inner loop while holding other loops at their

minimum iteration value. Conduct additional tests for out-of-range and excluded values.

• Work outward, conducting tests for the next loop, while holding all outer loops at

their minimum iteration value.

• Continue until all loops are tested.


c) Concatenated loops

Concatenated loops can be tested using the approach defined for simple loops, if the

two loops are independent of each other. If, however, the loop counter for the first
loop is used as the initial value for the second loop, then the two loops are
dependent. In this case, the approach recommended for nested loops is suggested.

d) Unstructured loops

Whenever possible, this class of loops should be redesigned to reflect the use of

structured programming constructs.
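For class (a), the pass counts to exercise can be generated mechanically. A minimal Python sketch, assuming n is the loop's maximum allowable number of passes:

    def simple_loop_pass_counts(n, m=None):
        # Guidelines: skip the loop; one pass; m passes (m < n);
        # and n-1, n, n+1 passes around the maximum.
        if m is None:
            m = n // 2
        return [0, 1, m, n - 1, n, n + 1]

    print(simple_loop_pass_counts(10))  # [0, 1, 5, 9, 10, 11]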

Black box testing

In this method of testing, the analyst examines the specifications to see what the

program should do under various conditions. Test cases are then developed for each condition

or combination of conditions and submitted for processing. By examining results the analyst can

determine if the program performs according to the specified requirements. Black box testing

attempts to find errors in the following categories:

i) Incorrect or missing functions

ii) Interface errors

iii) Errors in data structures or external database access

iv) Performance errors

v) Initialization and termination errors

In this method, the system is treated as a black box, i.e. the analyst does not look into

the code to see if each path is followed. Due to this, specification testing is not complete.

However, it is a complementary approach to white box testing that is likely to uncover a

different class of errors than the white-box methods. Some of the black-box testing methods are

dealt with in the following subsections.

1. Equivalence partitioning

This is a black box testing method that divides the input domain of a program into

classes of data from which test cases can be derived. An ideal test case uncovers a class of

errors that might otherwise require many cases to be executed before the general error is

observed. Equivalence partitioning attempts to define a test case that uncovers classes of errors,

thus reducing the total number of test cases that must be developed.

Test case design for equivalence partitioning is based on an evaluation of equivalence

classes for an input condition. An equivalence class represents a set of valid or invalid states for

input conditions. Typically, an input condition is either a specific numeric value, a range of

values, a set of related values or a Boolean condition. Equivalence classes may be defined

according to the following guidelines:


i) If an input condition specifies a range, one valid and two invalid equivalence classes

are defined.

ii) If an input condition requires a specific value, one valid and two invalid equivalence

classes are defined.

iii) If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.

iv) If an input condition is Boolean, one valid and one invalid equivalence class are

defined.

For example, consider an application that requires the following data:

• Area code: blank or a 3-digit number

• Prefix: 3-digit number not beginning with 0 or 1

• Suffix: 4-digit number

• Password: 6-digit alphanumeric string

• Commands: check, deposit, etc.

The input conditions associated with the above data elements can be specified as follows:

• Area code: Input condition, Boolean: the area code may or may not be present.
Input condition, range: values defined between 000 and 999.

• Prefix: Input condition, range: values defined between 200 and 999.

• Suffix: Input condition, range: values defined between 0000 and 9999.

• Password: Input condition, Boolean: the password may or may not be present.
Input condition, value: 6-character string.

• Commands: Input condition, set: containing the commands noted previously.

Applying the guidelines for the derivation for equivalence classes, test cases for each

domain data item can be developed and executed. Test cases are developed so that the largest

number of attributes of an equivalence class are exercised at once.
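A minimal sketch of representative values derived from these classes (the concrete values are assumptions chosen to land inside each valid or invalid class):

    test_values = {
        # Area code: valid (blank or 3-digit), invalid (wrong lengths).
        "area_code": {"valid": ["", "212"], "invalid": ["21", "2125"]},
        # Prefix: valid 200-999; invalid below 200 or wrong length.
        "prefix": {"valid": ["555"], "invalid": ["199", "042", "55"]},
        # Suffix: valid 4-digit; invalid lengths.
        "suffix": {"valid": ["1234"], "invalid": ["123", "12345"]},
        # Password: valid 6-character alphanumeric; invalid lengths.
        "password": {"valid": ["ab12cd"], "invalid": ["", "abc"]},
        # Commands: member of the defined set; invalid otherwise.
        "command": {"valid": ["check", "deposit"], "invalid": ["xfer!"]},
    }

    for field, classes in test_values.items():
        print(field, classes)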

2. Boundary value analysis

It is generally observed that a greater number of errors occur at the boundaries of the

input domain, rather than at the center. For this reason, boundary value analysis (BVA) has been

developed as a testing technique. This technique leads to the development of test cases that

exercise boundary values. BVA is a test case technique that complements equivalence

partitioning. Rather than selecting an element of the equivalence class, BVA leads to the

selection of test cases at the edge of the class.

Rather than focusing solely on input conditions, BVA derives test cases from the output

domain as well.

Guidelines for BVA are similar in many respects to those provided for equivalence

partitioning:

i) If an input condition specifies a range bounded by values a and b, test cases should be

designed with values a and b, and just above and just below a and b.


ii) If an input condition specifies a number of values, test cases should be developed that

exercise the minimum and maximum numbers. Values just above and below minimum

and maximum are also tested.

iii) Apply guidelines (i) and (ii) to output conditions; i.e. test cases should be designed to

generate the maximum and minimum number of possible values as output.

iv) If internal data structures have prescribed boundaries, test the data structure at its

boundary (e.g., an array has a defined limit of 100 entries).
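Guideline (i) can be expressed directly in code. A minimal Python sketch:

    def boundary_values(a, b):
        # Test at the boundaries a and b, and just below and just above each.
        return [a - 1, a, a + 1, b - 1, b, b + 1]

    # e.g., for a prefix defined over the range 200..999:
    print(boundary_values(200, 999))  # [199, 200, 201, 998, 999, 1000]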

3. Comparison testing

There are some situations where the reliability of the system is absolutely critical. In

some applications, redundant hardware and software are often used to minimize the possibility

of errors. When redundant software is developed, separate software engineering teams develop

independent versions of an application using the same specifications. In such situations, both

versions can be tested using the same test data to ensure that they provide identical outputs.

Then all the versions are executed in parallel with a real-time comparison of results to ensure

consistency.

If the output is different, each of the applications is investigated to determine if a defect

in one or more versions is responsible for the difference. In most cases, this comparison can be

done with automated tools.

This method is not foolproof. If the specifications are incorrect, it is likely that all versions

of the software reflect the error. In addition, if the comparison provides identical, but incorrect

results, comparison testing fails to detect the error.

Verification and Validation

Software testing is one element of a broader topic that is often referred to as verification

and validation. Verification refers to the set of activities that ensure that software correctly

implements a specific function. Validation refers to a different set of activities that ensure that

the software meets specification requirements.

The definition of verification and validation encompasses many of the activities that we

refer to as software quality assurance:

• Software engineering activities provide the foundation from which quality is built.

• Analysis, design and coding methods act to enhance the quality by providing uniform

techniques and predictable results.

• Formal technical reviews help to ensure the quality of the products.

• Throughout the process measures and controls are applied to every element of a

software configuration. These help to maintain uniformity.

• Testing is the last phase in which quality can be assessed and errors can be uncovered.

Unit testing


Unit testing focuses verification effort on the smallest unit of software design: the

software module. Using the component level design description as a guide, important control

paths are tested to uncover errors within the boundary of the module. The unit test is white box

oriented, and the steps can be conducted in parallel for multiple modules.

• The module interface is tested to ensure that information flows properly into and out of

the unit under test.

• The local data structure is tested to ensure that data stored temporarily maintains its

integrity during all the steps in the execution.

• Boundary conditions are tested to establish that the module operates properly at the

established boundaries.

• All independent paths through the module are exercised to ensure that all statements

have been executed at least once.

• Finally, all error-handling paths are also tested.

Unit testing is normally considered an adjunct to the coding step. After source-level code is

developed, reviewed and verified for syntax, unit test case design begins. Because a module is

not a stand-alone program, driver and/or stub software must be developed. A driver program

is no more than a main program that accepts test case data and passes it to the component to

be tested and prints the relevant results. Stubs serve to replace subordinate modules to the

module under test. Drivers and stubs represent overhead, as it is additional software that must

be developed (not formally designed) but not delivered with the final software. Unit testing

becomes simpler when a component is designed with high cohesion. When only one function is

addressed by a component, the number of test cases is reduced and errors are more easily

predicted and uncovered.
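A minimal sketch of a driver and a stub, using Python's unittest; the component under test and all names are assumptions for illustration:

    import unittest

    # Component under test: computes a reorder quantity. Its subordinate
    # "quantity on order" module is not yet integrated, so it is passed in.
    def reorder_quantity(stock_level, reorder_point, lookup_on_order):
        on_order = lookup_on_order()
        if stock_level + on_order < reorder_point:
            return reorder_point - (stock_level + on_order)
        return 0

    class ReorderDriver(unittest.TestCase):
        # The test class acts as the driver; each lambda below stands in as a
        # stub for the missing subordinate module.
        def test_below_reorder_point(self):
            self.assertEqual(reorder_quantity(3, 10, lambda: 2), 5)

        def test_at_or_above_reorder_point(self):
            self.assertEqual(reorder_quantity(8, 10, lambda: 2), 0)

    if __name__ == "__main__":
        unittest.main()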

Integration Testing

Interfacing of various modules can cause problems. Data can be lost across an interface,

one module may affect the other, and individually acceptable imprecision may be magnified

when combined.

Integration testing is a systematic technique for constructing the program structure

while at the same time conducting tests to uncover errors associated with interfacing. The

objective is to take unit tested components and build a program structure that has been

dictated by design.

There is often a tendency to attempt a non-incremental approach. All components are

combined in advance. The entire program is tested as a whole. This results in an abundance of

errors. Correction becomes difficult because isolation of causes is complicated.

Incremental integration is the antithesis of the above approach. The program is

constructed and tested in small increments, where errors are easier to isolate and correct,

interfaces are tested more completely and a systematic test approach is applied. The following

subsections deal with the different approaches to incremental integration.


Top-Down Testing

Top-down integration testing is an incremental approach to construction of program

structure. Modules are integrated by moving downward through the control hierarchy,

beginning with the main control module. Modules subordinate to the main module are

incorporated into the structure in steps.

The integration process is performed in five steps:

i) The main control module is used as a test driver and stubs are substituted for all

components directly subordinate to the main module.

ii) Depending on the integration approach selected, subordinate stubs are replaced one

at a time with actual components, in breadth-first or depth-first order.

iii) Tests are conducted as each component is integrated.

iv) On completion of each set of tests, another stub is replaced by the real component.

v) Regression testing may be conducted to ensure that new errors have not been

introduced.

The top-down strategy verifies major control or decision points early in the test process.

In a well-factored program structure, decision-making occurs at higher levels in the hierarchy

and is thus encountered first.

Top-down strategy sounds relatively uncomplicated, but in practice, logistical problems

can arise. The most common of these problems occurs when processing at low levels in the

hierarchy is required to adequately test upper levels.

In such cases another approach to integration testing is used: the bottom-up method discussed

in the next subsection.

Bottom-Up Testing

Bottom-up integration testing, as the name implies, begins construction and testing with

atomic modules (i.e. components at the lowest levels in the program structure). Because

components are integrated from the bottom up, processing required for components

subordinate to a given level is always available and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented using the following steps:

i) Low-level components are combined into clusters that perform a specific sub-function.

ii) A driver is written to co-ordinate test case input and output.

iii) The cluster is tested.

iv) Drivers are removed and clusters are combined moving upward in the program structure.

Integration test documentation


An overall plan for integration of the software and a description of specific tests are

documented in a test specification. This document contains a test plan and a test procedure. It is

a work product of the software process, and becomes a part of the software configuration.

Validation Testing

Software validation is achieved through a series of black box tests that demonstrate

conformity with requirements. A test plan outlines the classes of tests to be conducted and a

test procedure defines specific test cases that will be used to demonstrate conformity with

requirements.

After each validation test case has been conducted, one of two possible conditions exist:

i) The function or performance characteristics conform to the specifications and are

accepted.

ii) A deviation from specifications is discovered and a deficiency list is created.

An important element of validation testing is configuration review. The intent of the

review is to ensure that all elements of the software configuration have been properly

developed and are well documented. The configuration review is sometimes called an audit.

If software is developed for the use of many customers, it is impractical to perform formal

acceptance tests with each one. Many software product builders use a process called alpha

and beta testing to uncover errors that only the end-user is able to find.

The alpha test is conducted at the developer's site by the customer. The software is used in a

natural setting with the developer recording errors and usage problems. Alpha tests are

performed in a controlled environment.

Beta tests are conducted at one or more customer sites by the end-users of the software.

Unlike alpha testing, the developer is generally not present. The beta test is thus a live

application of the software in an environment that cannot be controlled by the developer.

The customer records all the errors and reports these to the developer at regular intervals.

As a result of problems recorded or reported during these tests, the developers make

modifications and prepare to release the software to the entire customer base.

System Testing

Software is only one element of a larger computer-based system. Ultimately, software is

incorporated with other system elements and a series of system integration and validation tests

are conducted. These tests fall outside the scope of the software process and are not conducted

solely by software engineers. However, steps taken during software design and testing can

greatly improve the probability of successful software integration in a larger system.

System testing is actually a series of tests whose purpose is to fully exercise the computer-based

system. Although each test has a different purpose, all work to verify that system elements have

been properly integrated and perform allocated functions.


Some types of common system tests are:

i) Recovery testing

Many computer-based systems must recover from faults and resume processing within a

pre-specified time. In some cases, a system must be fault-tolerant, i.e. processing faults

must not cause overall system function to cease. In other cases, a system failure must be

corrected within a specified period of time or severe economic damage will occur.

Recovery testing is a system test that forces the software to fail in a variety of ways and

verifies that recovery is properly performed. If recovery is automatic, re-initialization,

check-pointing mechanisms, data recovery and restart are evaluated for correctness. If

recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to

determine whether it is within acceptable limits.

ii) Security testing

Any computer-based system that manages sensitive information or causes actions that

can harm individuals is a target for improper or illegal penetration. Security testing

attempts to verify that protection mechanisms built into a system will, in fact, protect it

from improper penetration. During security testing, the tester plays the role of the hacker

who desires to penetrate the system. Given enough time and resources, good security

testing will ultimately penetrate a system. The role of the system designer is to make

penetration cost more than the value of the information that will be obtained.

iii) Stress testing

During earlier testing steps, white box and black box techniques result in a thorough

evaluation of normal program functions and performance. Stress tests are designed to

confront programs with abnormal situations. Stress testing executes a system in a manner

that demands resources in abnormal quantity, frequency or volume. Essentially, the tester

attempts to break the program.

A variation of stress testing is a technique called sensitivity testing. In some situations, a

very small range of data contained within the bounds of valid data for a program may

cause extreme and even erroneous processing or performance degradation. Sensitivity

testing attempts to uncover data combinations within valid input classes that may cause

instability or improper processing.

iv) Performance testing

Software that performs the required functions but does not conform to performance

requirements is unacceptable. Performance testing is designed to test run-time

performance of software within the context of an integrated system. Performance testing

occurs through all the steps in the testing process. However, it is not until all system

elements are fully integrated that the true performance of a system can be ascertained (a combined stress-and-timing sketch follows this list).
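As a rough sketch (not from the notes), the fragment below first stresses an invented function with an abnormal input volume and then times it against an assumed performance limit; both the function and the one-second limit are hypothetical:

```python
import time

# Invented unit of work: summarise a batch of transaction amounts.
def summarise(amounts):
    return {"count": len(amounts), "total": sum(amounts)}

# Stress: demand resources in abnormal quantity (ten million items).
big_batch = [1.0] * 10_000_000
start = time.perf_counter()
result = summarise(big_batch)
elapsed = time.perf_counter() - start

assert result["count"] == 10_000_000          # the program did not break
# Performance: check run time against an assumed (hypothetical) limit.
assert elapsed < 1.0, f"too slow: {elapsed:.2f}s for 10M items"
print(f"processed 10M items in {elapsed:.3f}s")
```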


Test data

The type of data that is used for testing is called test data. There are two very important

sources of data each with their own sets of advantages and disadvantages. They are:

i) Live test data

Live test data is actually extracted from organization files. After a system is partially

constructed, analysts may ask the users to key in data from their normal activities. The

analyst uses this set of data to partially test the system.

It is difficult to obtain live data in sufficient amounts to perform extensive testing.

Although this data is realistic, i.e. typical for regular processing activities, it does not help

to test for unusual occurrences, thus ignoring factors that may cause system failure.

ii) Artificial test data

Since it may be difficult to produce large volumes of live data, or to easily obtain it,

artificial data is created solely for testing purposes. Artificial test data and testing tools are

maintained in a testing library. Test data can be generated to test all combinations of

formats and values. Since they are generated artificially, large volumes can be generated for

extensive testing. The test data should be capable of testing all the functionality and

revealing the bugs in the system. For this reason the library must contain the following classes of data (a generation sketch follows the list):

• Correct test data

• Erroneous test data

• Boundary condition test data
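A minimal sketch (invented, not prescribed by the notes) of how the three classes of test data might be generated for a field that must be an integer in the range 1 to 999:

```python
# Hypothetical field under test: a quantity that must be an integer
# between 1 and 999. Each class of artificial test data from the list
# above gets its own small generator.
def correct_data():
    return [1, 57, 500, 999]             # valid, typical values

def erroneous_data():
    return [-5, 0, 1000, "abc", None]    # values the system must reject

def boundary_data():
    return [1, 2, 998, 999]              # values at the edges of validity

for name, gen in [("correct", correct_data),
                  ("erroneous", erroneous_data),
                  ("boundary", boundary_data)]:
    print(name, gen())
```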



Chapter 9-Implementation And Maintenance

Maintenance

Not all jobs run successfully. Sometimes an unexpected boundary condition or an

overload causes an error. Sometimes the output fails to pass controls. Sometimes program bugs

may appear. No matter what the problem, a previously working system that ceases to function,

requires emergency maintenance. Isolating operational problems is not always an easy task,

particularly when combinations of circumstances are responsible. The ease with which a

problem can be corrected is directly related to how well a system has been designed and

documented.

Changes in environment may lead to maintenance requirement. For example, new

reports may need to be generated, competitors may alter market conditions, a new manager

may have a different style of decision-making, organization policies may change, etc.

Information systems should be able to accommodate changing needs. The design should be flexible to

allow new features to be added with ease.

Although software does not wear out like hardware, integrity of the program, test data

and documentation degenerate as a result of modifications. Hence, the system will need

maintenance.

Maintenance covers a wide range of activities such as correcting code and design errors,

updating documentation and upgrading user support. Software maintenance can be classified

into four types:

i) Corrective maintenance

It means repairing processing or performance failures, or making changes because of

previously uncorrected problems or false assumptions. It involves changing the software to

correct defects.

For example:

• Debugging and correcting errors or failures and emergency fixes that are required when

newly developed software is installed for the first time.

• Fixing errors due to incomplete specifications, which may result in erroneous assumptions, such as assuming an employee code is 5 numeric digits instead of 5 characters (see the sketch at the end of this chapter).

ii) Adaptive maintenance

Over time the environment for which the software was developed is likely to change.

Adaptive maintenance results in modifications to the software to accommodate changes in

the external environment.

For example:

• Report formats may have been changed.

• New hardware may have been installed (changing from 16-bit to 32-bit environment)

iii) Perfective maintenance (Enhancement)

This implies changing the performance or modifying the program to improve or enhance


the system. It extends the software beyond its original functional requirements. This type of

maintenance involves more time and money than both corrective and adaptive

maintenance.

For example:

• Automatic generation of dates and invoice numbers.

• Reports with graphical analysis such as pie charts, bar charts, etc.

• Providing on-line help system

iv) Preventive maintenance (Re-engineering)

Preventive maintenance is conducted to enable the software to serve the needs of the end-user. It is done to reduce the need for future maintenance of the application, keeping future requirements in focus. Changes are made to the software so that it can be corrected, adapted and enhanced more easily.

For example:

• Application rebuilding from one platform to another.

• Changing from a single-user to a multi-user environment.

In general software maintenance can be reduced by keeping the following points in mind:

• A system should be planned keeping the future in mind.

• User specs should be accurate.

• The system design should be modular.

• Documentation should be complete.

• Proper steps must be followed during the development cycle.

• Testing should be thorough.
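As a toy illustration of corrective maintenance (type i above), the hedged sketch below fixes the employee-code assumption mentioned earlier: the code was validated as 5 numeric digits when the specification actually allows any 5 characters. Everything in it is invented:

```python
# Before the fix (the defect): the validator assumed a 5-digit numeric code.
def is_valid_employee_code_old(code):
    return len(code) == 5 and code.isdigit()

# After corrective maintenance: the specification allows any 5
# alphanumeric characters, so the erroneous assumption is removed.
def is_valid_employee_code(code):
    return len(code) == 5 and code.isalnum()

assert not is_valid_employee_code_old("AB123")   # defect: valid code rejected
assert is_valid_employee_code("AB123")           # corrected behaviour
print("corrective fix verified")
```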


Chapter 10-Documentation

CASE tools
Today software engineers use tools that are analogous to the computer-aided design and

engineering tools used by hardware engineers. There are several different tools with different

functionalities. They are:

i) Business systems planning tools

By modeling the strategic information requirements of an organization, these tools

provide a meta-model from which specific information systems are derived. Business

systems planning tools help software developers to create information systems that route

data to those that need the information. The transfer of data is improved and decision-making is expedited.

ii) Project management tools

Using these management tools, a project manager can generate useful estimates of cost,

effort and duration of a software project, plan a work schedule and track projects on a

continuous basis. In addition, the manager can use these tools to collect metrics that will

establish a baseline for software development quality and productivity.

iii) Support tools

This category encompasses document production tools, network system software,

databases, electronic mail, bulletin boards and configuration management tools that are

used to control and manage the information that is created as the software is developed.

iv) Analysis and design tools

These enable the software engineer to model the system that is being built. These tools

assist in the creation on\f the model and an assessment of the model.s quality. By

performing consistency and validity tests on each model, these tools provide the engineer

with the insight and help to eliminate errors before they propagate into the program.

v) Programming tools

System software utilities, editors, compilers, debuggers are a legitimate part of CASE

tools. In addition to these, new and powerful tools can be added. Object-oriented

programming tools, 4G languages, advanced database query systems all fall into this tool

category.

vi) Integration and testing tools

Testing tools provide a variety of different levels of support for the software testing steps

that are applied as part of the software engineering process. Some tools provide direct

support for the design of test cases and are used in the early stages of testing. Other tools,

such as automatic regression testing and test data generation tools, are used during

integration and validation testing and can help reduce the amount of effort required for

testing.

vii) Prototyping and simulation tools

These tools span a wide range, from simple screen painters to simulation


products for timing and sizing analysis of real-time embedded systems. At their most

fundamental, prototyping tools focus on the creation of screens and reports that will

enable a user to understand the input and output domain of an information system.

viii) Maintenance tools

These tools can help to decompose an existing program and provide the engineer with

some insight. However, the engineer must use intuition, design sense and intelligence to

complete the reverse engineering process and / or to re-engineer the application.

ix) Framework tools

These tools provide a framework from which an integrated project support environment

(IPSE) can be created. In most cases, framework tools actually provide database

management and configuration management capabilities along with utilities that enable

tools from different vendors to be integrated into the IPSE.

Types of Documentation
Documentation is an important part of software engineering that is often overlooked.

Types of documentation include:

• Architecture/Design - Overview of software. Includes relations to an environment and

construction principles to be used in design of software components.

• Technical - Documentation of code, algorithms, interfaces, and APIs.

• End User - Manuals for the end-user, system administrators and support staff.

• Marketing - Product briefs and promotional collateral

Architecture/Design Documentation

Architecture documentation is a special breed of design documents. In a way,

architecture documents are third derivative from the code (design documents being

second derivative, and code documents being first). Very little in the architecture documents is

specific to the code itself. These documents do not describe how to program a particular

routine, or even why that particular routine exists in the form that it does, but instead merely

lay out the general requirements that would motivate the existence of such a routine. A good

architecture document is short on details but thick on explanation. It may suggest approaches

for lower level design, but leave the actual exploration trade studies to other documents.

Technical Documentation

This is what most programmers mean when using the term software documentation.

When creating software, code alone is insufficient. There must be some text along with it to

describe various aspects of its intended operation. This documentation is usually embedded

within the source code itself so it is readily accessible to anyone who may be traversing it.
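For instance, in Python such embedded technical documentation takes the form of docstrings and comments kept next to the code they describe; the function below is invented for illustration:

```python
def monthly_interest(balance, annual_rate):
    """Return one month of simple interest on `balance`.

    Embedded technical documentation: it records the algorithm
    (the annual rate divided evenly over 12 months) and the units
    (rate as a fraction, e.g. 0.12 for 12%) for anyone who
    traverses this source code later.
    """
    return balance * annual_rate / 12

print(monthly_interest(1000.0, 0.12))   # 10.0
```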

User Documentation

Unlike code documents, user documents are usually far divorced from the source


code of the program, and instead simply describe how it is used.

In the case of a software library, the code documents and user documents could be effectively

equivalent and are worth conjoining, but for a general application this is not often true. On the

other hand, the Lisp machine grew out of a tradition in which every piece of code had an

attached documentation string. In combination with strong search capabilities (based on a Unix-

like apropos command), and online sources, Lispm users could look up documentation and paste

the associated function directly into their own code. This level of ease of use is unheard of in

putatively more modern systems.

Marketing Documentation

For many applications it is necessary to have some promotional materials to encourage

casual observers to spend more time learning about the product. This form of documentation

has three purposes:-

1. To excite the potential user about the product and instill in them a desire for becoming more

involved with it.

2. To inform them about what exactly the product does, so that their expectations are in line

with what they will be receiving.

3. To explain the position of this product with respect to other alternatives.


Question And Answers

1. How can a data dictionary be used during software development? (May-06, 5 Marks)

Ans:

Data Dictionary

a) Data dictionaries are integral components of structured analysis, since data flow diagrams by themselves do not fully describe the subject of the investigation.
b) A data dictionary is a catalog, a repository of the elements in the system.
c) In a data dictionary one will find a list of all the elements composing the data flowing through a system. The major elements are -

• Data flows

• Data stores

• Processes

d) The dictionary is developed during data flow analysis and assists the analysts involved in determining system requirements; its contents are used during system design as well.

e) The data dictionary contains the following descriptions about the data (a sample entry appears at the end of this answer):-

1. The name of the data element.

2. The physical source/destination name.
3. The type of the data element.
4. The size of the data element.
5. Usage, such as input or output or update, etc.
6. Reference(s) of the DFD process nos. where used.
7. Any special information useful for system specification, such as validation rules, etc.

f) This is appropriately called a system-development data dictionary, since it is created during system development, facilitates the development functions, is used by the developers and is designed with the developers' information needs in mind.

g) For every single data element mentioned in the DFD there should be one and only one unique data element in the data dictionary.

h) The type of a data element is considered as numeric, textual, image, audio, etc.
i) Usage should specify whether the referenced DFD process uses the element as input data (read only), creates data output (e.g. insert) or updates it.
j) A data element can be mentioned with reference to multiple DFD processes, but in that case, if the usages are different, there should be one entry for each usage.
k) The data dictionary is used as important basic information during the development stages.

l) Importance of data dictionary:

• To manage the details in large system.

• To communicate a common meaning for all system elements.

• To document the features of the system.

• To facilitate analysis of the details in order to evaluate characteristics and determine

where system changes should be made.

• To locate errors and omissions in the system.


m) Points to be considered while constructing a data dictionary

• Each unique data flow in the DFD must have one data dictionary entry. There is also

a data dictionary entry for each data store and process.

• Definition must be readily accessible by name.

• There should be no redundancy or unnecessary definition in the data definitions. It

must also be simple to make updates.

• The procedures for writing definitions should be straightforward but specific: there should be only one way of defining words.
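Returning to the descriptions listed under point e), one system-development data dictionary entry could be recorded as a simple structure; the sketch below is hypothetical, not a prescribed format:

```python
# Hypothetical data dictionary entry carrying the seven descriptions
# listed under point e) of this answer.
entry = {
    "name": "customer_code",                 # 1. name of the data element
    "source_destination": "CUSTOMER file",   # 2. physical source/destination
    "type": "numeric",                       # 3. type of the data element
    "size": 6,                               # 4. size of the data element
    "usage": "input",                        # 5. input / output / update
    "dfd_processes": ["1.2", "3.1"],         # 6. DFD process nos. where used
    "validation": "must be greater than 0",  # 7. special info, e.g. validation rules
}
print(entry["name"], "is used by DFD processes", entry["dfd_processes"])
```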

2. Under what circumstances or for what purposes would one use an interview rather than other data collection methods? Explain. (May-06, 12 Marks)

OR

Discuss how the interview technique can be used to determine the need of a new system. (May-03)

Ans

a) An interview is a fact-finding technique whereby the system analyst collects information from individuals through face-to-face interaction.

b) There are two types of interviews:
i. Unstructured interviews: This is an interview that is conducted with only a general goal or subject in mind and with few, if any, specific questions.
ii. Structured interviews: This is an interview in which the interviewer has a specific set of questions to ask the interviewee.
c) Unstructured interviews tend to involve asking open-ended questions, while structured interviews tend to involve asking more close-ended questions.

d) Advantages:

• Interviews give the analyst an opportunity to motivate the interviewee to respond freely and openly to questions.

• Interviews allow the system analyst to probe (penetrating investigation) for more

feedback from the interviewee.

• Interviews permit the system analyst to adapt or reword questions for each

individual.

• A good system analyst may be able to obtain information by observing the

interviewee’s body movements and facial expressions as well as listening to verbal

replies to questions.

e) This technique is advised in the following situations:-

• Where the application system under consideration is highly specific to the user

organization.

• When the application system may not be very specialized, but the practices followed by the organization may be specific.


• The organization does not have any documentation in which part of the information requirements is documented, or any such documentation is irrelevant, unavailable, or cannot be shared with developers due to privacy issues, etc.
• The organization has not decided on the details of the practices the new application system would demand, or would use the system to implement new practices, but wants to decide while responding to the information requirements determination.

f) A structured interview meeting is useful in the following situations:-
• When the development team and the user team members know the broad system environment with high familiarity. This reduces the amount of communication significantly, to just a few words or a couple of sentences.
• When responding to the questions involves collecting data from different sources or persons and/or analyzing it and/or making the analysis ready for discussion at the meeting, in order to save on the duration of the meeting.
• When costly communication media, such as international phone/conference calls, are to be used.
• When some or all of the members of the user team represent top management, external consultants or specialists.

g) A semi-structured interview meeting is useful in the following situations:-
• Usually in the initial stage of information requirements determination, the users and the developers need to exchange a significant amount of basic information. Structured interview meetings may not be effective here, since there is not enough information to ask questions of importance.

• Also, very early on or with a new organization, personal meetings are important because they help the development team not only in knowing the decisions but also in understanding the decision-making process of the organization, the role of every user team member in the decisions, and their organizational and personal interests. These observations during the meetings can help the software development team significantly in future communications.

• When the new application system is not a standard one and/or the users follow radically different or highly specialized business practices, then it is very difficult to predict which questions are important from the point of view of determining the information requirements.

• When the members of the development team expect to participate in formulating some new information requirements or freezing them, as advisors say, then the interview meeting has to be personal and therefore semi-structured.

• If the users are generally available, located very close, and the number of top-management users or external consultants is nil or negligible, then personal meetings are conducted.
• When there are no warranted reasons that only structured interviews are to be conducted.


3. What are CASE tools? Explain some CASE tools used for prototyping. (May-06, 15 Marks; Nov-03; M-05; Dec-04)

Ans: Computer assisted software engineering (CASE)

• Computer assisted software engineering (CASE) is the use of automated software tools

that support the drawing and analysis of system models and associated specifications.

Some tools also provide prototyping and code generation facilities.

• At the center of any true CASE tool’s architecture is a developer’s database called a CASE

repository where developers can store system models, detailed descriptions and specifications, and other products of systems development.

• A CASE tool enables people working on a software project to store data about the project, its plan and schedules, to track its progress and make changes easily, to analyze and store data about users, and to store the design of a system, all through automation.

• A CASE environment makes system development economical and practical. The automated tools and environment provide a mechanism for systems personnel to capture, document and model an information system.

• A CASE environment is a number of CASE tools which use an integrated approach to support the interaction between the environment's components and the users of the environment.

CASE Components

CASE tools generally include five components - diagrammatic tools, an information

repository, interface generators, code generators, and management tools.

1. Diagrammatic Tools

• Typically, they include the capabilities to produce data flow diagrams, data structure

diagrams, and program structure charts.

• These high-level tools are essential for support of the structured analysis methodology, and

CASE tools incorporate structured analysis extensively.

• They support the capability to draw diagrams and charts and to store the details internally. When changes must be made, the nature of the changes is described to the system, which can then redraw the entire diagram automatically.
• The ability to change and redraw eliminates an activity that analysts find both tedious and undesirable.

2. Centralized Information Repository

• A centralized information repository or data dictionary aids the capture, analysis, processing and distribution of all system information.

• The dictionary contains the details of system components such as data items, data flows and processes, and also includes information describing the volumes and frequency of each activity.


• While dictionaries are designed so that the information is easily accessible, they also include built-in controls and safeguards to preserve the accuracy and consistency of the system details.

• The use of authorization levels, process validation, and procedures for testing consistency of the descriptions ensures that access to definitions, and the revisions made to them in the information repository, occur properly according to the prescribed procedures.

3. Interface Generators

• System interfaces are the means by which users interact with an application, both to

enter information and data and to receive information.

• Interface generator provides the capability to prepare mockups and prototypes of

user interfaces.

• Typically they support the rapid creation of demonstration system menus,

presentation screens and report layouts.

• Interface generators are an important element for application prototyping, although they are useful with all development methods.

4. Code Generators:

• Code generators automate the preparation of computer software (a toy generation sketch follows this answer).
• They incorporate methods that allow the conversion of system specifications into

executable source code.

• The best generators will produce approximately 75 percent of the source code for an application. The rest must be written by hand; "hand coding", as this process is termed, is still necessary.

• Because CASE tools are general-purpose tools not limited to any specific area such as

manufacturing control, investment portfolio analysis, or accounts management, the

challenge of fully automating software generation is substantial.

• The greatest benefits accrue when the code generators are integrated with the central information repository; such a combination achieves the objective of creating reusable computer code.

• When specifications change, code can be regenerated by feeding details from the data dictionary through the code generators. The dictionary contents can be reused to

prepare the executable code.

5. Management Tools:

CASE systems also assist project managers in maintaining efficiency and effectiveness

throughout the application development process.

• The CASE components assist the development manager in scheduling the analysis and design activities and in allocating resources to different project activities.


• Some CASE systems support the monitoring of project development schedules against actual progress, as well as the assignment of specific tasks to individuals.

• Some CASE management tools allow project managers to specify custom elements.

For example, they can select the graphic symbols to describe process, people,

department, etc.
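As a toy sketch of the code-generation idea from component 4 above (not any real CASE product), the fragment below regenerates a record class from a dictionary-style specification, so re-running the generator after the specification changes reproduces the code; the specification format is invented:

```python
# Hypothetical specification, as if pulled from a central repository.
spec = {"record": "Customer",
        "fields": [("code", "int"), ("name", "str"), ("credit_limit", "float")]}

def generate_record(spec):
    # Emit Python source for a record class from the specification.
    args = ", ".join(f"{n}: {t}" for n, t in spec["fields"])
    lines = [f"class {spec['record']}:",
             f"    def __init__(self, {args}):"]
    lines += [f"        self.{n} = {n}" for n, _ in spec["fields"]]
    return "\n".join(lines)

source = generate_record(spec)
print(source)      # regenerate at will whenever the specification changes
exec(source)       # the generated source is itself executable code
c = Customer(1, "Acme", 5000.0)
```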


4. What is cost-benefit analysis? Describe any two methods of performing the same. (May-06, May-04)

Ans:

Cost-benefit analysis:
Cost-benefit analysis is a procedure that gives a picture of the various costs, benefits, and rules associated with each alternative system.
The cost-benefit analysis is a part of the economic feasibility study of a system.
The basic tasks involved in cost-benefit analysis are as follows:
• To compute the total costs involved.
• To compute the total benefits from the project.
• To compare the two, to decide whether the project provides net gains or not.

Procedure for cost and benefit determination:

• The determination of costs and benefits entails the following steps-

• Identify the costs and benefits pertaining to a given project.

• Categorize the various costs and benefits for analysis.

• Select a method for evaluation.

• Interpret the result of the system.

• Take action.

Costs and benefits are classified as follows:

i. Tangible or intangible cost and benefits:

Tangibility refers to the ease with which costs or benefits can be measured. The following

are the examples of tangible costs and benefits:

• Purchase of hardware or software.(tangible cost)

• Personnel training.(tangible cost)

• Reduced expenses.(tangible benefit)

• Increased sales. (tangible benefit)

Costs and benefits that are known to exist but whose financial value cannot be accurately

measured are referred to as intangible costs and benefits. The following are the examples of

intangible costs and benefits:

• Employee morale problems caused by a new system.(intangible cost)

• Lowered company image. (intangible cost)

• Satisfied customers. (intangible benefit)

• Improved corporate image. (intangible benefit)

ii. Direct or Indirect costs and benefits:

Direct costs are those with which a money figure can be directly associated in a project.

They are directly applied to a particular operation. Direct benefits can also be specifically

attributable to a given project.


The following are the examples:

• The purchase of a box of diskettes for $35 is a direct cost.

• A new system that can handle 25% more transaction per day is a direct benefit.

Indirect costs are the results of operations that are not directly associated with a given system or

activity. They are often referred to as overhead.

Indirect benefits are realized as a by-product of another activity or system.

iii. Fixed or variable costs and benefits:

Fixed costs are sunk costs. They are constant and do not change. Once encountered, they will

not recur.

For example: straight-line depreciation of hardware, exempt employee salary.

Fixed benefits are also constant and do not change.

For example: A decrease in the number of personnel by 20% resulting from the use of a new

computer.

Variable costs are incurred on a regular basis. They are usually proportional to work volume and

continue as long as the system is in operation.

For example: The costs of computer forms vary in proportion to amount of processing or the

length of the reports required.

Variable benefits are realized on a regular basis.

For example: Consider a safe-deposit tracking system that saves 20 minutes of preparation per customer compared with the manual system.

The following are the methods of performing cost-benefit analysis (a sketch of two of them follows the list):

1. Net benefit analysis.

2. Present value analysis.

3. Payback analysis.

4. Break-even analysis.

5. Cash flow analysis.

6. Return on investment analysis.
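The notes only list the methods; as a hedged sketch of two of them, the fragment below computes a present value (discounting each year's net benefit) and a simple payback period for invented cash flows; all figures and the 10% discount rate are assumptions:

```python
# Invented figures: an initial cost of 50,000 and five years of net benefits.
initial_cost = 50_000.0
yearly_benefits = [15_000.0] * 5
discount_rate = 0.10

# Present value analysis: PV = benefit / (1 + r) ** year, summed over years.
present_value = sum(b / (1 + discount_rate) ** year
                    for year, b in enumerate(yearly_benefits, start=1))
net_present_value = present_value - initial_cost

# Payback analysis: years of (undiscounted) benefits needed to recover the cost.
remaining, payback_years = initial_cost, 0
for b in yearly_benefits:
    payback_years += 1
    remaining -= b
    if remaining <= 0:
        break

print(f"NPV = {net_present_value:,.0f}; payback in {payback_years} years")
```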

5. Explain the concept of Normalization with examples. Why would you denormalize? (May-04)

Ans:

Normalization:

Normalization is a process of simplifying the relationship between data elements in a record.

Through normalization a collection of data in a record structure is replaced by successive record

structures that are simpler and more predictable and therefore more manageable.

Normalization is carried out for four reasons:

1. To structure the data so that any important relationships between entities can be

represented.


2. To permit simple retrieval of data in response to query and report requests.

3. To simplify the maintenance of the data through updates, insertions, and deletions.

4. To reduce the need to restructure or reorganize data when new application requirements arise.

A great deal of research has been conducted to develop methods for carrying out the

normalization. Systems Analysts should be familiar with the steps in normalization, since this

process can improve the quality of design for an application.

1. Decompose all data groups into two- dimensional records.

2. Eliminate any relationships in which data elements do not fully depend on the primary

key of the record.

3. Eliminate any relationships that contain transitive dependencies.

There are three normal forms. Research in database design has also identified other forms, but

they are beyond what analysts use in application design.

First Normal Form:

First normal form is achieved when all repeating groups are removed so that a record is

of fixed length. A repeating group, the reoccurrence of a data item or group of data items within

a record is actually another relation. Hence, it is removed from the record and treated as an

additional record structure, or relation. Consider the information contained in a customer order:

order number, customer name, customer address, order date, as well as the item number, item

description, price and quantity of the item ordered. Designing a record structure to handle an

order containing such data is not difficult. The analyst must consider how to handle the order.

The order can be treated as four separate records with the order and customer information

included in each record. However it increases complexity of changing the details of any part of

the order and uses additional space.

Another alternative is to design the record to be of variable length. So when four

items are ordered, the item details are repeated four times. This portion is termed as

repeating group.

First normal form is achieved when a record is designed to be of fixed length.

This is accomplished by removing the repeating group and creating a separate file or relation containing it. The original record and the new records are

interrelated by a common data item.
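A minimal sketch (structures invented) of the order example above: the repeating item group is removed from the order record and placed in a separate relation, linked back by the common data item order_no:

```python
# Unnormalized: item details repeat inside one variable-length record.
order_unnormalized = {
    "order_no": 101, "customer": "Acme", "date": "2006-05-12",
    "items": [("I-1", "bolt", 2.0, 100), ("I-2", "nut", 1.0, 200)],
}

# First normal form: fixed-length records; the repeating group becomes
# a second relation, interrelated by the common data item order_no.
orders = [{"order_no": 101, "customer": "Acme", "date": "2006-05-12"}]
order_items = [
    {"order_no": 101, "item_no": "I-1", "description": "bolt", "price": 2.0, "qty": 100},
    {"order_no": 101, "item_no": "I-2", "description": "nut", "price": 1.0, "qty": 200},
]
print(len(orders), "order record;", len(order_items), "item records")
```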

Second Normal Form:

Second normal form is achieved when a record is in first normal form and each item

in the record is fully dependent on the primary record key for identification. In other words, the

analyst seeks functional dependency.

For example: State motor vehicle departments go to great lengths to ensure that only one

vehicle in the state is assigned a specific license tag number. The license number uniquely identifies a specific vehicle; a vehicle's serial number is associated with one and only one state


license number. Thus if you know the serial number of a vehicle, you can determine the state

license number. This is functional dependency.

In contrast, if a motor vehicle record contains the name of all individuals who drive the

vehicle, functional dependency is lost. If we know the license number, we do not know who the

driver is -there can be many. And if we know the name of the driver, we do not know the specific

license number or vehicle serial number, since a driver can be associated with more than one

vehicle in the file. Thus to achieve second normal form, every data item in a record that is not

dependent on the primary key of the record should be removed and used to form a separate

relation.

Third Normal Form:

Third normal form is achieved when transitive dependencies are removed from the record design. The following example shows a transitive dependency:

1. A, B, C are three data items in a record.

2. If C is a functionally dependent on B and

3. B is functionally dependent on A,

4. Then C is functionally dependent on A

5. Therefore, a transitive dependency exists.

In data management, transitive dependency is a concern because data can inadvertently

be lost when the relationship is hidden. In the above case, if A is deleted, then B and C are

deleted also, whether or not this is intended. This problem is eliminated by designing the record

for third normal form. Conversion to third normal form removes the transitive dependency by

splitting the relation into two separate relations.

Denormalization

Performance needs dictate very quick retrieval capability for data stored in relational

databases. To accomplish this, sometimes the decision is made to denormalize the physical

implementation. Denormalization is the process of putting one fact in numerous places. This

speeds data retrieval at the expense of data modification. Of course, a normalized set of

relational tables is the optimal environment and should be implemented whenever possible.

Yet, in the real world, denormalization is sometimes necessary. Denormalization is not

necessarily a bad decision if implemented wisely. You should always consider these issues before

denormalizing:

• Can the system achieve acceptable performance without denormalizing?

• Will the performance of the system after denormalizing still be unacceptable?

• Will the system be less reliable due to denormalization?

If the answer to any of these questions is "yes," then you should avoid denormalization

because any benefit that is accrued will not exceed the cost. If, after considering these issues,

you decide to denormalize be sure to adhere to the general guidelines that follow.


The reasons for Denormalization

Only one valid reason exists for denormalizing a relational design - to enhance

performance. However, there are several indicators which will help to identify systems and

tables which are potential denormalization candidates. These are:

• Many critical queries and reports exist which rely upon data from more than one table.

Often times these requests need to be processed in an on-line environment.

• Repeating groups exist which need to be processed in a group instead of individually.

• Many calculations need to be applied to one or many columns before queries can be

successfully answered.

• Tables need to be accessed in different ways by different users during the same timeframe.

• Many large primary keys exist which are clumsy to query and consume a large amount of

DASD when carried as foreign key columns in related tables.

• Certain columns are queried a large percentage of the time. Consider 60% or greater to be a cautionary number flagging denormalization as an option.
Be aware that each new RDBMS release usually brings enhanced performance and improved access options that may reduce the need for denormalization. However, most of the popular RDBMS products will on occasion require denormalized data structures. There are many different types of denormalized tables which can resolve the performance problems caused when accessing fully normalized data. The following topics will detail the different types and give advice on when to implement each of the denormalization types.

6. Write a detailed note about the different levels and methods of testing software. (May-06)

Ans:

A test case is a set of data that the system will process as normal input. However, the data are created with the express intent of determining whether the system will process them correctly. There are two logical strategies for testing software: the strategies of code testing and specification testing.

CODE TESTING:

The code testing strategy examines the logic of the program. To follow this testing method, the analyst develops test cases that result in executing every instruction in the program or module; that is, every path through the program is tested.

This testing strategy does not indicate whether the code meets its specification, nor does it determine whether all aspects are even implemented. Code testing also does not check the range of data that the program will accept, even though, when software failures occur in actual use, it is frequently because users submitted data outside of expected ranges (for example, a sales order for $1 million, the largest in the history of the organization).

SPECIFICATION TESTING:


To perform specification testing, the analyst examines the specifications stating what the program should do and how it should perform under various conditions. Then test cases are developed for each condition or combination of conditions and submitted for processing. By examining the results, the analyst can determine whether the program performs according to its specified requirements.

LEVELS OF TESTING:

Systems are not designed as entire systems, nor are they tested as single systems. The analyst must perform both unit and system testing.

UNIT TESTING:

In unit testing the analyst tests the programs making up a system (for this reason, unit testing is sometimes also called program testing).
Unit testing focuses first on the modules, independently of one another, to locate errors. This enables the tester to detect errors in coding and logic that are contained within that module alone. Unit testing can be performed only from the bottom up, starting with the smallest and lowest-level modules and proceeding one at a time. For each module in bottom-up testing, a short program executes the module and provides the needed data, so that the module is asked to perform the way it will when embedded within the larger system. When the bottom-level modules are tested, attention turns to those on the next level that use the lower ones. They are tested individually and then linked with the previously examined lower-level modules.
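As a small sketch (module and cases invented), unit testing in Python is commonly done with the standard unittest module, exercising one module independently of the rest of the system:

```python
import unittest

# The module under test, invented for illustration.
def net_price(price, tax_rate):
    return round(price * (1 + tax_rate), 2)

class NetPriceTest(unittest.TestCase):
    # Each case exercises the module alone, so errors in its own
    # coding and logic are isolated from the rest of the system.
    def test_typical_value(self):
        self.assertEqual(net_price(100.0, 0.18), 118.0)

    def test_zero_tax(self):
        self.assertEqual(net_price(50.0, 0.0), 50.0)

if __name__ == "__main__":
    unittest.main()
```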

SYSTEM TESTING:

System testing does not test the software itself but rather the integration of each module in the system. It also tests to find discrepancies between the system and its original objective, current specifications, and system documentation. The primary concern is the compatibility of individual modules. Analysts try to find areas where modules have been designed with different specifications for data length, type and data element name. For example, one module may expect the data item for customer identification number to be a character data item, while another module may treat it as numeric.

7. Distinguish between reliability and validity. How are they related? (May-06, Nov-03)

Ans:

Reliability and validity are two faces of information gathering .the term reliability is

synonymous with dependability,consistency and accuracy.Concern for reliability comes from the

necesisity for dependability in measurement whereas validity is concerned on what is being

measured rather than consistency and accuracy.

Reliability can be approached in three ways:
1. It is defined as stability, dependability, and predictability.
2. It focuses on the accuracy aspect.


3. Errors of measurement-they are random errors stemming from fatigue or fortuitous

conditions at a given time.

The most common question that defines validity is: does the instrument measure what it is supposed to measure? It refers to the notion that the questions asked are worded to produce the information sought. In validity, the emphasis is on what is being measured.

8. Security and Disaster Recovery

Ans:

The system security problem can be divided into four related issues:

1. System Security:

System security refers to the technical innovations and procedures applied to the

hardware and operating systems to protect deliberate or accidental damage from a defined

threat.

2. System Integrity:

System integrity refers to the proper functioning of hardware and programs, appropriate

physical security, and safety against external threats such as eavesdropping and wiretapping.

3. Privacy:

Privacy defines the rights of the users or organizations to determine what information

they are willing to share with or accept from others and how the organization can be protected

against unwelcome, unfair, or excessive dissemination of information about it.

4. Confidentiality:

The term confidentiality is a special status given to sensitive information in a database to

minimize the possible invasion of privacy. It is an attribute of information that characterizes its

need for protection.

Disaster/Recovery Planning:

1. Disaster/recovery planning is a means of addressing the concern for system availability

by identifying potential exposure, prioritizing applications, and designing safeguards

that minimize loss if disaster occurs. It means that no matter what the disaster, the

system can be recovered. The business will survive because a disaster/recovery plan

allows quick recovery under the circumstances.

2. In disaster/recovery planning, management's primary role is to accept the need for contingency planning, select an alternative measure, and recognize the benefits that can be derived from establishing a disaster/recovery plan. Top management should


establish a disaster/recovery policy and commit corporate support staff to its implementation.

3. The user’s role is also important. The user’s responsibilities include the following:

a. Identifying critical applications, why they are critical, and how computer

unavailability would affect the department.

b. Approving data protection procedures and determining how long and how well

operations will continue without the data.

c. Funding the costs of backup.

9. What are the roles of the system analyst in system analysis and design? (Nov-03, May-01)

Ans:

The various roles of system analyst are as follows:

1 Change Agent:

The analyst may be viewed as an agent of change. A candidate system is designed to introduce change and reorientation in how the user organization handles information or makes decisions. In the role of a change agent, the system analyst may select various styles to introduce change to the user organization. The styles range from that of the persuader to the imposer. In between, there are the catalyst and confronter roles. When the user appears to have a tolerance for change, the persuader or catalyst (helper) style is appropriate. On the other hand, when drastic changes are required, it may be necessary to adopt the confronter or even the imposer style. No matter what style is used, however, the goal is the same: to achieve acceptance of the candidate system with a minimum of resistance.

2 Investigator and Monitor.

In defining a problem, the analyst pieces together the information gathered to determine why the present system does not work well and what changes will correct the problem. In one respect, this work is similar to that of an investigator. Related to the role of investigator is that of monitoring programs in relation to time, cost, and quality. Of these resources, time is the most important: if time gets away, the project suffers from increased costs and wasted human resources.

3 Architect:

Just as an architect relates the client's abstract design requirements to the contractor's detailed building plan, an analyst relates the user's logical design requirements to the detailed physical system design. As an architect, the analyst also creates a detailed physical design of the candidate system.

4 Psychologist

In system development, systems are built around people. The analyst plays the role


of psychologist in the way he/she reaches people, interprets their thoughts, assesses their

behaviour and draws conclusions from these interactions. Understanding interfunctional relationships is important. It is important that the analyst be aware of people's feelings and be

prepared to get around things in a graceful way. The art of listening is important in evaluating

responses and feedback.

5 Salesperson:

Selling change can be as crucial as initiating change. Selling the system actually

takes place at each step in the system life cycle. Sales skills and persuasiveness are crucial to the

success of the system.

6 Motivator:

The analyst's role as a motivator becomes obvious during the first few weeks after implementation of the new system and during times when turnover results in new people being trained to work with the candidate system. The amount of dedication it takes to motivate the users often taxes the analyst's abilities to maintain the pace.

7 Politician:
Related to the role of motivator is that of politician. In implementing a candidate system, the analyst tries to appease all parties involved. Diplomacy and finesse in dealing with people can improve acceptance of the system. Inasmuch as a politician must have the support of his/her constituency, it is a good analyst's goal to have the support of the users' staff.

10. What are the requirements of a good system analyst? (M-04, May-01, M-06)

Ans: Requirements of a good System Analyst:

An analyst must possess various skills to effectively carry out the job. The skills required by a system analyst can be divided into the following categories:
i) Technical Skills:
Technical skills focus on procedures and techniques for operational analysis, systems analysis and computer science. The technical skills relevant to systems include the following:

a. Working Knowledge of Information Technologies:

The system analyst must be aware of both existing and emerging information technologies. They should also stay current through disciplined reading and participation in appropriate professional societies.

b. Computer programming experience and expertise:

System analysts must have some programming experience. Most system analysts need to be proficient in one or more high-level programming languages.

ii) Interpersonal Skills:


Interpersonal skills deal with relationships and the interface of the analyst with people in the business. The interpersonal skills relevant to systems include the following:

a. Good interpersonal Communication skills:

An analyst must be able to communicate, both orally and in writing. Communication is not just reports, telephone conversations and interviews. It is people talking, listening, feeling and reacting to one another; their experiences and reactions. Open communication channels are a must for system development.

b. Good interpersonal Relations skills:

An analyst interacts with all stakeholders in a system development project. This interaction requires effective interpersonal skills that enable the analyst to deal with group dynamics, business politics, conflict and change.

iii) General knowledge of business processes and terminology:
System analysts must be able to communicate with business experts to gain an understanding of their problems and needs. They should avail themselves of every opportunity to complete basic business literacy courses such as financial accounting, management or cost accounting, finance, marketing, manufacturing or operations management, quality management, economics and business law.

iv) General Problem solving skills:

The system analyst must be able to take a large business problem, break it down into its parts, determine its causes and effects and then recommend a solution. Analysts must avoid the tendency to suggest a solution before analyzing the problem.

v) Flexibility and Adaptability:

No two projects are alike. Accordingly, there is no single, magical approach or standard that is equally applicable to all projects. Successful system analysts learn to be flexible and to adapt to unique challenges and situations.

vi) Character and Ethics:

The nature of the systems analyst's job requires a strong character and a sense of right and wrong. Analysts often gain access to sensitive and confidential facts and information that are not meant for public disclosure.

11. Discuss prototyping as a way to test a new idea. (M-03)

Ans:

1. Prototyping is a technique for quickly building a functioning but incomplete model of the

information system.

2. A prototype is a small, representative, or working model of users requirements or a

proposed design for an information system.

3. The development of the prototype is based on the fact that the requirements are seldom

fully known at the beginning of the project. The idea is to build a first, simplified version of

the system and seek feedback from the people involved, in order to then design a better


subsequent version. This process is repeated until the system meets the client's conditions of acceptance.

4. Any given prototype may omit certain functions or features until such a time as the prototype has sufficiently evolved into an acceptable implementation of requirements.

Reasons for Prototyping:

1. Information requirements are not always well defined. Users may know only that certain business areas need improvement or that the existing procedures must be changed. Or, they may know that they need better information for managing certain activities but are not sure what that information is.
2. The users' requirements may be too vague to even begin formulating a design.
3. Developers may have neither information nor experience of some unique situations or some high-cost or high-risk situations, in which the proposed design is new and untested.
4. Developers may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human-machine interaction should take.
5. In these and many other situations, a prototyping approach may offer the best approach.

Advantages:

1. Shorter development time.
2. More accurate user requirements.
3. Greater user participation and support.
4. Relatively inexpensive to build as compared with the cost of a conventional system.

This method is most useful for unique applications where developers have little information

or experience or where risk of error may be high. It is useful to test the feasibility of the system

or to identify user requirements.

12. Discuss features of good user interface design, using the login context. (M-03)

Ans:

The features of a good user interface design are as follows (a login sketch follows the list):

1. Provide the best way for people to interact with computers. This is commonly known as

Human Computer Interaction. (HCI).

2. The presentation should be user-friendly. A user-friendly interface is helpful: e.g. it should not only tell users that they have committed an error, but also provide guidance as to how they can rectify it quickly.

3. It should provide information on what the error is and how to fix it.

4. User-friendliness also includes being tolerant and adaptable.

5. A good GUI makes the User more productive.


6. A good GUI is more effective, because it helps the User find the best solution to a problem.

7. It is also efficient, because it helps the User find such a solution more quickly and with the fewest errors.

8. For a User using a computer system, the workspace is the computer screen. The goal of a good GUI is to make the best use of a User's workspace.

9. A good GUI should be robust. High robustness implies that the interface should not fail because of some action taken by the User. Also, a User error should not lead to a system breakdown.

10. The Usability of the GUI is expected to be high. Usability is measured in various terms.

11. A good GUI is of high Analytical capability; most of the information needed by the User

appears on the screen.

12. With a good GUI, User finds the work easier and more pleasant.

13. The User should be happy and confident to use the interface.

14. A good GUI imposes a low cognitive workload, i.e. the mental effort required of the User to use the system should be the least. In fact, the GUI should closely approximate the User's mental model of, and reactions to, the screen.

15. For a good GUI, the User satisfaction is high.
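
The login screen is a convenient context to illustrate several of these features. The following minimal Python sketch is only an illustration (the in-memory user store and the message texts are invented assumptions, not part of any standard library); it shows an interface that reports what went wrong and how to rectify it, instead of merely rejecting the input:

# Hypothetical login check illustrating user-friendly error reporting.
USERS = {"asmith": "s3cret"}   # assumed user store, for illustration only

def login(username, password):
    if not username:
        return "Error: Username is empty. Please type your username and try again."
    if username not in USERS:
        return ("Error: Unknown username '%s'. "
                "Check the spelling or contact the administrator." % username)
    if USERS[username] != password:
        return ("Error: Incorrect password. Passwords are case-sensitive; "
                "check that Caps Lock is off and try again.")
    return "Login successful."

print(login("asmith", "S3CRET"))   # tells the user what to check, not just 'failed'

Note how each message satisfies features 2 and 3 above: it states the error and guides the user towards fixing it.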

13. Discuss the use and abuse of multi-window displays. (M-03)

Multiple Window Design:-

1. Designers use multiple screens to give users the information they need. The first screen gives general information.

2. By depressing a specified key, the user retrieves a second screen containing details.

3. It allows users to browse through the details quickly and identify each item about which

more detail is needed. At the same time, the “explosion” into detail for one specific item

on a second (or even a third) screen maintains the readability of the first screen by

requiring display of only enough detail to identify the item wanted.

4. Display different data or report sets simultaneously.

5. Switch between several programs, alternatively displaying the output from each.

6. Move information from one window to the other (within the same or different programs).

14. Short Note On Design of input and control

Ans:

The design of inputs involves the following four tasks:-

1. Identify the data input devices and mechanisms.

2. Identify the input data and attributes.

3. Identify input controls.

4. Prototype the input forms.


We study the detailed activities as follows.

1) Identify the data input devices and mechanisms:

To reduce input errors, use the following guidelines:

• Apart from entering data through the electronic form, new ways of entering data are through scanning, reading and transmitting devices. They are faster, more efficient and less error prone.

• Capture the data as close to where it originates as possible. These days application systems eliminate the use of paper forms and encourage entry through laptops etc., which the user can carry to the origin of the data and enter data directly through. This reduces errors in data entry dramatically.

• Automate data entry and avoid human intervention in the data capture as much as possible. This will give users less chance to make typing errors and will increase the speed of data capture.

• If the information is already available in electronic form, prefer it over entering the data manually. This will reduce errors further.

• Validate the data completely and correctly at the location where it is entered. Reject

the wrong data at its source only.

2) Identify the input data and attributes :

The activities involved in this step are as follows:-

• This is to ensure that all system inputs are identified and have been specified correctly.

• Basically, it involves identifying the information flows across the system boundary.

• Using the DFD models, at the lowest levels of the system, the developers mark the system boundary considering what part of the system is to be automated and what is not.

• Each input data flow in the DFD may translate into one or more of the physical inputs

to the system. Thus it is easy to identify the DFD processes, which would input the

data from outside sources.

• Knowing now the details of what data elements to input, the GUI designer prepares

a list of input data elements.

• When the input form is designed the designer ensures that all these data elements

are provided for entry, validations and storage.

3) Identify input controls :

• Input integrity controls are used with all kinds of input mechanisms. They help reduce

the data entry errors at the input stage. One more control is required on the input

data to ensure the completeness.

• Various error detection and correction methods are employed these days. Some of them are listed as follows:-


i. Data validation controls introduce an additional digit called a 'check digit', which is computed from the other digits by some mathematical formula. The data verification step recalculates it; if there is a data entry error, the check digit will not match and an error is flagged (see the sketch at the end of this subsection).

ii. A range of acceptable values for a data item can also be used to check for defects going into the data. If the data value being entered is beyond the acceptable range, an error is flagged.

iii. References to master data tables already stored can be used to validate codes, such as customer codes, product codes, etc. If there is an error, then either the reference would not be available or it will be wrong.

iv. Some of the controls are used in combination, depending upon the business knowledge.

• Transaction logging is another technique of input control. It logs important database update operations by users. The transaction log records the user id, date, time and location of the user. This information is useful to control fraudulent transactions, and also provides a recovery mechanism for erroneous transactions.
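
The notes above do not fix any particular check-digit formula. As a hedged illustration, the following Python sketch uses the Luhn algorithm (one widely used check-digit scheme) together with a simple range check; the account number and the acceptable range are invented for the example:

# Check-digit validation (Luhn scheme) and a range check, as input controls.
def luhn_valid(number):
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9          # same as summing the two digits of d
        total += d
    return total % 10 == 0      # valid when the checksum ends in zero

def in_range(value, low, high):
    return low <= value <= high

print(luhn_valid("79927398713"))   # True: check digit matches
print(luhn_valid("79927398710"))   # False: a keying error is flagged at the source
print(in_range(150, 1, 100))       # False: value outside the acceptable range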

4) Prototype input forms for better design :

• Use a paper form for recording the transaction mainly if evidence is required or direct data capture is not feasible.

• Provide help to every data element to be entered.

• It should be easy to use, appear natural to the user and should be complete.

• The data elements asked for should be related in a logical order. Typically, the left-to-right and top-to-bottom order of entry is more natural.

• A form should not have too many data elements in one screen display.

The prototype form can be provided to the users. The steps are as follows :

a) The users should be provided with the form prototype and related functionality including

validations, help, error messages.

b) The users should be invited to use the prototype forms and the designers should observe

them.

c) The designers should observe the users' body language in terms of ease of use, comfort levels, satisfaction levels, help asked for, etc. They should also note the errors in the data entry.

d) The designers should ask the users for their critical feedback.

e) The designers should improve the GUI design and associated functionality and run a second round of tests with the users in the same way.


f) This exercise should be continued until the input form delivers expected levels of

usability.

15. Summarize the procedure for developing a DFD. (M-05) Using your own example, illustrate. (M-06)

Ans: Developing a DFD

Step1. Make a list of business activities and use it to determine:

a. External entities i.e. source and sink

b. Data flows

c. Processes

d. Data stores

Step2. Draw a context level diagram:

The context level diagram is a top level diagram and contains only one process, representing the entire system. It determines the boundaries of the system. Anything that is not inside the diagram will not be part of the system study.

Step3. Develop process chart

It is also called a hierarchy chart or decomposition diagram. It shows the top-down functional decomposition of the system.

Step4. Develop the first level DFD:

It is also known as diagram 0 or the level-0 diagram. It is the explosion of the context level diagram. It includes data stores and external entities. Here the processes are numbered.

Step5. Draw more detailed levels:

Each process in diagram 0 may in turn be exploded to create a more detailed DFD. New data flows and data stores are added. There is further decomposition/leveling of processes.

16. What cost elements are considered in the cost/benefit analysis? Which do you think is most difficult to estimate? Why?

Ans: The cost-benefit analysis is a part of the economic feasibility study of a system. The basic elements of cost-benefit analysis are as follows.

I. To compute the total costs involved.

II. To compute the total benefits from the project.

III. To compare the two, to decide whether the project provides net gains.

The costs are classified under two heads as follows.

i. Initial Costs- They are mainly the development costs.

ii. Recurring Costs- They are the costs incurred in running the application system or

operating the system (usually per month or per year)


The initial costs include the following major heads of expenses:-

• Salary of development staff

• Consultant fees

• Costs of software Development tools-Licensing Fees.

• Infrastructure development costs

• Training and Training material costs

• Travel costs

• Development hardware and networking costs

• Salary of support staff

The development costs are incurred over the entire duration of the software development. Therefore, although they are initial costs, they are spread over a longer period. To keep these costs lower, reducing the development time is the key.

The Recurring costs include the following major costs:-

1 Salary of operation users

2 License fees for software tools used for running the software systems, if any.

3 Hardware/Networking Maintenance charges

4 Costs of hard discs, magnetic tapes, etc. used as media for data storage.

5 The furniture and fixture leasing charges

6 Electricity and other charges

7 Rents and Taxes

8 Infrastructure Maintenance charges

9 Spare Parts and Tools

The benefits are classified under two heads, as follows:-

i. Tangible benefits - benefits that can be measured in terms of rupee value.

ii. Intangible benefits - benefits that cannot be measured in terms of rupee value.

The common heads of tangible benefits vary from organisation to organisation, but some of

them are as follows:-

i. Benefits due to reduced business cycle times, i.e. production cycle, marketing cycle, etc.

ii. Benefits due to increased efficiency

iii. Savings in the salary of operational users

iv. Savings in the space rent and taxes

v. Savings in the cost of stationery, telephone and other communication costs, etc.

vi. Savings in the costs due to the use of storage media

vii. Benefits due to overall increase in profits before tax.

The common heads of intangible benefits vary from organisation to organisation, but some of

them are as follows:-

i. Benefits due to increased working satisfaction of employees.


ii. Benefits due to increased quality of products and/or services provided to the end-customer.

iii. Benefits in terms of business growth due to better customer services.

iv. Benefits due to increased brand image.

v. Benefits due to captured errors, which could not be captured in the current system.

vi. Savings on costs of extra activities that were carried out earlier and are not required to be carried out in the new system.

Of all these elements, the intangible benefits are usually the most difficult to estimate, precisely because they cannot be measured in rupee terms and must be judged qualitatively.
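
As a worked illustration of the comparison in steps I-III above, the following Python sketch uses invented figures (all rupee amounts are assumptions for the example); intangible benefits are deliberately left out of the arithmetic because they cannot be expressed in rupee value:

# Hypothetical cost-benefit arithmetic over a five-year horizon.
initial_cost   = 1_000_000   # Rs: development staff, tools, training, etc.
recurring_cost =   200_000   # Rs per year: operations, maintenance, rents
annual_benefit =   600_000   # Rs per year: tangible benefits only
years = 5

total_cost    = initial_cost + recurring_cost * years
total_benefit = annual_benefit * years
net_gain      = total_benefit - total_cost                        # Rs 1,000,000
payback_years = initial_cost / (annual_benefit - recurring_cost)  # 2.5 years

print("Net gain over %d years: Rs %d" % (years, net_gain))
print("Payback period: %.1f years" % payback_years)

A positive net gain (step III) argues for the project; a long payback period argues against it.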

17. Describe the concept and procedure used in constructing DFDs, using an example of your own to illustrate. [Apr-04]

Data flow diagram (DFD):

1. DFD is a process model used to depict the flow of data through a system & work or

processing performed by the system.

2. It is also known as a bubble chart, transformation graph or process model.

3. A DFD is a graphic tool to describe and analyze the movement of data through a system, using the processes, stores of data and delays in the system.

4. DFDs are of two types:

A) Physical DFD

B) Logical DFD

Physical DFD:

Represents an implementation-dependent view of the current system and shows what tasks are carried out and how they are performed. Physical characteristics are:

1. names of people

2. form & document names or numbers

3. names of department

4. master & transaction files

5. locations

6. names of procedures

Logical DFD:

They represent an implementation-independent view of the system and focus on the flow of data through the system rather than on the specific devices, storage locations, or people in the system. They do not specify the physical characteristics listed above for physical DFDs.

The most useful approach to developing an accurate and complete description of the system begins with the development of a physical DFD, which is then converted to a logical DFD.

Procedure:

Step 1: Make a list of business activities & use it to determine:

1 External entities


2 Data flows

3 Processes

4 Data stores

Step 2: Draw a context level diagram:

The context level diagram is a top level diagram & contains only one process, representing the entire system. Anything that is not inside the context diagram will not be part of the system study.

Step 3: Develop a process chart:

It is also called a hierarchy chart or decomposition diagram. It shows the top-down functional decomposition of the system.

Step 4: Develop the first level DFD:

It is also known as diagram 0 or the level-0 diagram. It is the explosion of the context level diagram. It includes data stores & external entities. Here the processes are numbered.

Step 5: Draw more detailed levels:

Each process in diagram 0 may in turn be exploded to create a more detailed DFD. New data flows & data stores are added, and processes are further decomposed/leveled, e.g. for a library management system as sketched below.
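
As an assumed illustration, a context level diagram for a library management system can be sketched in text form as a single process with the external entities around it:

             book request                      issue/return slip
  MEMBER -------------------> ( 0. LIBRARY  ) -------------------> MEMBER
                              ( MANAGEMENT  )
  LIBRARIAN ----------------> (   SYSTEM    ) -------------------> LIBRARIAN
             new book details                      reports

At level 0 this single bubble would be exploded into numbered processes such as 1.0 Register Member, 2.0 Issue Book, 3.0 Return Book and 4.0 Generate Reports, with data stores like D1 Members and D2 Books; each of these can be leveled further.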

19. What is decoupling? What is its role in system handling?

Ans:DECOUPLING AND ITS ROLE IN SYSTEM HANDLING

1. Decoupling facilitates each module of the system to work independently of others.

2. Decoupling enhances the adaptability of the system by helping to isolate the impact of potential changes, i.e. the more the decoupling, the easier it is to make changes to a subsystem without affecting the rest of the system.

3. Concept of decoupling is also used in :

• Separation of functions between human beings and machines.

• Defining the human-computer interfaces.

4. Decoupling is achieved by:

• Defining the subsystems such that each performs a single complete function.

• Minimizing the degree of interconnection (exchange of data and control

parameters).

5. Key decoupling mechanisms in system designs are:

• Inventory, buffer, and waiting line.

• Provision of slack resources.

• Extensive use of standards.

20. Discuss the six special system tests. Give specific examples.

Ans:


Special System Tests:

The tests which do not focus on the normal running of the system, but come under a special category and are used for performing specific tests related to specific tasks, are termed Special System Tests.

They are listed as follows:

1. Peak Load Test:

This is used to determine whether the system will handle the volume of activities that

occur when the system is at peak of the processing demand. For instance when all terminals are

active at the same time. This test applies mainly for on-line systems.

For example, in a banking system, the analyst wants to know what will happen if all the tellers sign on at their terminals at the same time before the start of the business day. Will the system handle them one at a time without incident, or will it attempt to handle all of them at once and be so confused that it "locks up" and must be restarted, or will terminal addresses be lost? The only sure way to find out is to test for it.
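
A minimal sketch of the idea in Python, assuming a hypothetical sign_on() routine standing in for a real terminal sign-on request (the routine body and the count of 100 terminals are assumptions for illustration):

import threading

def sign_on(terminal_id, results):
    # Stand-in for sending a sign-on request to the server.
    try:
        results[terminal_id] = "ok"
    except Exception as exc:
        results[terminal_id] = "failed: %s" % exc

results = {}
threads = [threading.Thread(target=sign_on, args=(t, results)) for t in range(100)]
for t in threads:
    t.start()   # all tellers sign on 'at the same time'
for t in threads:
    t.join()
print(sum(1 for v in results.values() if v != "ok"), "sign-ons failed")

Against a real system, the stand-in body would issue the actual sign-on request, and the failure count (plus response times) would show whether the system copes with the peak.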

2. Storage Testing:

This test is carried out to determine the capacity of the system to store transaction data on a disk or in other files. Capacities here are measured in terms of the number of records that a disk will handle or a file can contain. If this test is not carried out, then one may discover during installation that there is not enough storage capacity for transactions and master file records.

3. Performance Time Testing:

This test refers to the response time of the system being installed. Performance time testing is conducted prior to implementation to determine how long it takes to receive a response to an inquiry, make a backup copy of a file, or send a transmission and receive a response. It also includes test runs to time the indexing or restoring of large files of the size the system will have during a typical run, or to prepare a report. A system that runs well with only a handful of test transactions may be unacceptably slow when fully loaded. This testing should be done using the entire volume of live data.

4. Recovery Testing:

The analyst must never be too sure of anything and must always be prepared for the worst. One should assume that the system will fail and data will be damaged or lost. Even though plans and procedures are written to cover these situations, they also must be tested.

5. Procedure Testing:

Documentation and manuals telling the user how to perform certain functions are tested quite easily by asking the user to follow them exactly through a series of events. Omitting instructions about aspects such as when to press the Enter key, or removing the diskettes before


switching off the power, and so on, could cause problems. This type of testing brings out what is not mentioned in the documentation, and also the errors in it.

6. Human Factors:

If, during processing, the screen goes blank, the operator may start to wonder what is happening and may just do things like pressing the Enter key a number of times, or switching off the system. If a message is displayed saying that processing is in progress and asking the operator to wait, these types of problems can be avoided.

Thus, during this test we determine how users will use the system when processing data or preparing reports.

As we have noticed, these special tests are used for some special situations, hence the name Special System Tests.

21. There are two ways of debugging program software: bottom-up and top-down. How do they differ?

Ans:

It is a long-standing principle of programming style that the functional elements of a program should not be too large. If some component of a program grows beyond the stage where it is readily comprehensible, it becomes a mass of complexity which conceals errors. Such software will be hard to read, hard to test, and hard to debug.

In accordance with this principle, a large program must be divided into pieces, and the

larger the program, the more it must be divided. How do you divide a program? The traditional

approach is called top-down design: you say "the purpose of the program is to do these seven

things, so I divide it into seven major subroutines. The first subroutine has to do these four

things, so it in turn will have four of its own subroutines," and so on. This process continues until

the whole program has the right level of granularity-- each part large enough to do something

substantial, but small enough to be understood as a single unit.

As well as top-down design, some programmers follow a principle which could be called bottom-up design: changing the language to suit the problem. It is worth emphasizing that bottom-up design does not mean just writing the same program in a different order. When you work bottom-up, you usually end up with a different program. Instead of a single, monolithic program, you will get a larger language with more abstract operators, and a smaller program written in it. Instead of a lintel, you'll get an arch.

This brings several advantages:

1. By making the language do more of the work, bottom-up design yields programs which

are smaller and more agile. A shorter program doesn't have to be divided into so many

components, and fewer components means programs which are easier to read or

modify. Fewer components also means fewer connections between components, and


thus less chance for errors there. As industrial designers strive to reduce the number of

moving parts in a machine, experienced Lisp programmers use bottom-up design to

reduce the size and complexity of their programs.

2. Bottom-up design promotes code re-use. When you write two or more programs, many

of the utilities you wrote for the first program will also be useful in the succeeding ones.

Once you've acquired a large substrate of utilities, writing a new program can take only

a fraction of the effort it would require if you had to start with raw Lisp.

3. Bottom-up design makes programs easier to read. An instance of this type of abstraction

asks the reader to understand a general-purpose operator; an instance of functional

abstraction asks the reader to understand a special-purpose subroutine.

4. Because it causes you always to be on the lookout for patterns in your code, working

bottom-up helps to clarify your ideas about the design of your program. If two distant

components of a program are similar in form, you'll be led to notice the similarity and

perhaps to redesign the program in a simpler way.

Top-Down

Advantages

1. Design errors are trapped earlier

2. A working (prototype) system

3. Enables early validation of design

4. No test drivers are needed

5. The control program plus a few modules forms a basic early prototype

6. Interface errors discovered early

7. Modular features aid debugging

Disadvantages

1. Difficult to produce a stub for a complex component

2. Difficult to observe test output from top-level modules

3. Test stubs are needed

4. Extended early phases dictate a slow manpower buildup

5. Errors in critical modules at low levels are found late

Bottom-Up

Advantages

1. Easier to create test cases and observe output

2. Uses simple drivers for low-level modules to provide data and the interface

3. Natural fit with OO development techniques

4. No test stubs are needed

5. Easier to adjust manpower needs

6. Errors in critical modules are found early



Disadvantages

1. No 'program' until all modules are tested

2. High-level errors may cause changes in lower modules

3. Test drivers are needed

4. Many modules must be integrated before a working program is available

5. Interface errors are discovered late
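
The contrast can be made concrete with a small assumed Python example (the module names are invented): top-down testing exercises a high-level module early against a stub for an unwritten lower module, while bottom-up testing exercises the real low-level module first through a simple driver:

# Top-down: test process_order() early, using a stub for the unwritten tax module.
def compute_tax_stub(amount):
    return 0.0                      # stub: canned answer until the real module exists

def process_order(amount, compute_tax=compute_tax_stub):
    return amount + compute_tax(amount)

assert process_order(100.0) == 100.0   # validates high-level control flow early

# Bottom-up: the real low-level module is written first and tested via a driver.
def compute_tax(amount):
    return amount * 0.18

def driver():
    # driver: feeds test data to the low-level module and checks its output
    assert abs(compute_tax(100.0) - 18.0) < 1e-9

driver()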

22. Explain how you would expect documentation to help analysts and designers.

Introduction:

Documentation is not a step in the SDLC. It is an activity ongoing in every phase of the SDLC. It is about developing a document initially as a draft, later the reviewed document, and then a signed-off document. The document is born either after it is signed off by an authority or after its review. It carries an initial version number. However, the document also undergoes changes, and then the only way to keep the document up to date is to incorporate these changes.

Software Documentation helps Analysts and Designers in the following ways:

1. The development of software starts with abstract ideas in the minds of the Top

Management of User organization, and these ideas take different forms as the software

development takes place. The Documentation is the only link between the entire

complex processes of software development.

2. The documentation is a written communication, therefore, it can be used for future

reference as the software development advances, or even after the software is

developed, it is useful for keeping the software up to date.

3. The documentation carried out during a SDLC stage, say system analysis, is useful for the

respective system developer to draft his/her ideas in the form which is shareable with

the other team members or Users. Thus it acts as a very important media for

communication.

4. The document reviewer(s) can use the document for pointing out the deficiencies in

them, only because the abstract ideas or models are documented. Thus, documentation

provides facility to make abstract ideas, tangible.

5. When the draft document is reviewed and recommendations incorporated, the same is

useful for the next stage developers, to base their work on. Thus documentation of a

stage is important for the next stage.

6. Documentation is very important because it records important decisions about freezing the system requirements, the system design and implementation decisions, agreed between the Users and Developers or amongst the developers themselves.

7. Documentation provides a lot of information about the software system. This makes it

very useful tool to know about the software system even without using it.


8. Since team members keep getting added to a software development team as the project goes on, the documentation acts as an important source of detailed and complete information for the newly joined members.

9. Also, the User organization may spread the implementation of a successful software system to a few other locations in the organization. The documentation will help the new Users to learn the operation of the software system. The same advantage applies when a new User joins the existing team of Users. Thus documentation makes the Users productive on the job, very fast and at low cost.

10. Documentation is live and important as long as the software is in use by the User

organization.

11. When the User organization starts developing a new software system to replace this one, even then the documentation is useful, e.g. the system analysts can refer to it as a starting point for discussions on the new system requirements.

Hence, we can say that Software documentation is a very important aspect of SDLC.

23. What are the major threats to system security? Which one is the most serious and important, and why?

System Security Threats

ComAir’s system crash on December 24, 2004, was just one example showing that the

availability of data and system operations is essential to ensure business continuity. Due to

resource constraints, organizations cannot implement unlimited controls to protect their

systems. Instead, they should understand the major threats, and implement effective controls

accordingly. An effective internal control structure cannot be implemented overnight, and

internal control over financial reporting must be a continuing process.

The term “system security threats” refers to the acts or incidents that can and will affect

the integrity of business systems, which in turn will affect the reliability and privacy of business

data. Most organizations are dependent on computer systems to function, and thus must deal

with systems security threats. Small firms, however, are often understaffed for basic

information technology (IT) functions as well as system security skills. Nonetheless, to protect a

company’s systems and ensure business continuity, all organizations must designate an

individual or a group with responsibility for system security. Outsourcing system security functions may be a less expensive alternative for small organizations.

Top System Security Threats

The 2005 CSI/FBI Computer Crime and Security Survey of 700 computer security

practitioners revealed that the frequency of system security breaches has been steadily

decreasing since 1999 in almost all threats except the abuse of wireless networks.


Viruses

A computer virus is a software code that can multiply and propagate itself. A virus can

spread into another computer via e-mail, downloading files from the Internet, or opening a

contaminated file. It is almost impossible to completely protect a network computer from virus

attacks; the CSI/FBI survey indicated that virus attacks were the most widespread attack for six

straight years since 2000.

Insider Abuse of Internet Access

Annual U.S. productivity growth was 2.5% during the second half of the 1990s, as

compared to 1.5% from 1973 to 1995, a jump that has been attributed to the use of IT (Stephen

D. Oliner and Daniel E. Sichel, “Information Technology and Productivity: Where Are We Now

and Where Are We Going?," Federal Reserve Bank of Atlanta Economic Review, Third Quarter 2002).

Unfortunately, IT tools can be abused. For example, e-mail and Internet connections are

available in almost all offices to improve productivity, but employees may use them for personal

reasons, such as online shopping, playing games, and sending instant messages to friends

during work hours.

Laptop or Mobile Theft

Because they are relatively expensive, laptops and PDAs have become the targets of

thieves. Although the percentage has declined steadily since 1999, about half of network

executives indicated that their corporate laptops or PDAs were stolen in 2005 (Network World

Technology Executive Newsletter, 02/21/05). Besides being expensive, they often contain proprietary corporate data and access codes.

Denial of Service

A denial of service (DoS) attack is specifically designed to interrupt normal system

functions and affect legitimate users’ access to the system. Hostile users send a flood of fake

requests to a server, overwhelming it and making a connection between the server and

legitimate clients difficult or impossible to establish. The distributed denial of service (DDoS)

allows the hacker to launch a massive, coordinated attack from thousands of hijacked (zombie)

computers remotely controlled by the hacker. A massive DDoS attack can paralyze a network

system and bring down giant websites. For example, the 2000 DDoS attacks brought down

websites such as Yahoo! And eBay for hours. Unfortunately, any computer system can be a

hacker’s target as long as it is connected to the Internet.

Unauthorized Access to Information

To control unauthorized access to information, access controls, including passwords and

a controlled environment, are necessary. Computers installed in a public area, such as a

conference room or reception area, can create serious threats and should be avoided if possible.

Any computer in a public area must be equipped with a physical protection device to control


access when there is no business need. The LAN should be in a controlled environment accessed

by authorized employees only. Employees should be allowed to access only the data necessary

for them to perform their jobs.

Abuse of Wireless Networks

Wireless networks offer the advantage of convenience and flexibility, but system security

can be a big issue. Attackers do not need to have physical access to the network. Attackers can

take their time cracking the passwords and reading the network data without leaving a trace.

One option to prevent an attack is to use one of several encryption standards that can be built

into wireless network devices. One example, wired equivalent privacy (WEP) encryption, can be

effective at stopping amateur snoopers, but it is not sophisticated enough to foil determined

hackers. Consequently, any sensitive information transmitted over wireless networks should be

encrypted at the data level as if it were being sent over a public network.

System Penetration

Hackers penetrate systems illegally to steal information, modify data, or harm the

system.

Telecom Fraud

In the past, telecom fraud involved fake use of telecommunication (telephone) facilities. Intruders often hacked into a company's private branch exchange (PBX) and its administration or maintenance port for personal gain, including making free long-distance calls, stealing (or changing) information in voicemail boxes, diverting calls illegally, wiretapping, and eavesdropping.

Theft of Proprietary Information

Information is a commodity in the e-commerce era, and there are always buyers for

sensitive information, including customer data, credit card information, and trade secrets. Data

theft by an insider is common when access controls are not implemented. Outside hackers can

also use "Trojan" viruses to steal information from unprotected systems. Beyond installing firewall and anti-virus software to secure systems, a company should encrypt all of its important data.

Financial Fraud

The nature of financial fraud has changed over the years with information technology.

System-based financial fraud includes trick e-mails, identity theft, and fraudulent transactions.

With spam, con artists can send scam e-mails to thousands of people in hours. Victims of the so-called 419 scam are often promised a lottery win or a large sum of unclaimed money sitting in an offshore bank account, but they must pay a "fee" first to get their share. Anyone who gets this kind of e-mail is advised to forward a copy to the U.S. Secret Service.

Misuse of Public Web Applications

The nature of e-commerce (convenience and flexibility) makes Web applications vulnerable and easily abused. Hackers can circumvent traditional network firewalls and


intrusion-prevention systems and attack web applications directly. They can inject commands

into databases via the web application user interfaces and surreptitiously steal data, such as

customer and credit card information.

Website Defacement

Website defacement is the sabotage of webpages by hackers inserting or altering information. The altered webpages may mislead unknowing users and represent negative publicity that could affect a company's image and credibility. Web defacement is in essence a system attack, and the attackers often take advantage of undisclosed system vulnerabilities or unpatched systems.

Of all these threats, virus attacks can be argued to be the most serious and important: the CSI/FBI survey cited above found them to be the most widespread attack for six straight years, and it is almost impossible to completely protect a networked computer from them, so a single infection can compromise the integrity, availability and privacy of business data.

24. What do you mean by structured analysis? Describe the various tools used for structured analysis with the pros and cons of each.

STRUCTURED ANALYSIS:-

A SET OF TECHNIQUES & GRAPHICAL TOOLS THAT ALLOW THE ANALYST TO DEVELOP A NEW KIND OF SYSTEM SPECIFICATION THAT IS UNDERSTANDABLE TO THE USER.

Tools of Structured Analysis are:

1: DATA FLOW DIAGRAM (DFD) -- [Bubble Chart]

Modular Design;

Symbols:

i. External Entities: a rectangle or cube -- represents a SOURCE or DESTINATION of data.

ii. Data Flows/Arrows: show the MOTION of data.

iii. Processes/Circles/Bubbles: transform INCOMING data flows into OUTGOING data flows.

iv. Open Rectangles: a file or data store (data at rest or a temporary repository of data).

- A DFD describes the data flow logically rather than how it is processed.

- Independent of H/W, S/W, Data Structure or File Organization.

ADV- Ability to represent data flow.

Useful in HIGH & LOW level Analysis.

Provides Good System Documentation.

DIS ADV- Weak on input & output details.

Confusing initially.

Requires several iterations.

2: DATA DICTIONARY-- It is a STRUCTURED REPOSITORY of DATA (METADATA: data about data).


3 items of Data present in Data Dictionary are:

i. DATA ELEMENT.

ii. DATA STRUCTURE.

iii. DATA FLOW & DATA STORES.

ADV- Supports Documentation.

Improves Communication b/w Analyst & User.

Provides common database for Control.

Easy to locate error.

DIS ADV- No Functional Details.

Not Acceptable by non-technical users.
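
For example, an assumed data dictionary entry for an order might read:

Customer_Order = Order_No + Order_Date + Customer_Code + {Item_Code + Quantity}
Order_No       = 6-digit number, unique, generated by the system
Quantity       = integer, 1 to 999

Here the braces { } denote repetition, so one order can carry several item lines.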

3: DECISION TREE--

ADV- Used to Verify Logic.

Used where Complex Decisions are Involved.

Used to Calculate Discount or Commissions in Inventory Control System.

DIS ADV- A large no. of branches with many THROUGH PATHS will make structured analysis difficult rather than easy.

4: DECISION TABLE-- A matrix of rows & columns that shows conditions & actions.

LIMITED/EXTENDED/MIXED ENTRY DECISION TABLES.

CONDITION STUB | CONDITION ENTRY

ACTION STUB | ACTION ENTRY

ADV- The condition statements identify the relevant conditions.

The condition entries tell which value applies for any particular condition.

Actions are based on the above condition statements & entries.

DIS ADV- Drawing tables becomes cumbersome if there are many conditions and respective entries. A small example is sketched below.
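
A small assumed example for an order-discount policy shows the four quadrants:

                      |  R1   R2   R3
Order amount > 10000  |   Y    Y    N      <- condition stub | condition entries
Regular customer      |   Y    N    -
----------------------+---------------
Give 10% discount     |   X    -    -      <- action stub | action entries
Give 5% discount      |   -    X    -
No discount           |   -    -    X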

5: STRUCTURED ENGLISH-- 3 kinds of statements are used:

i. Sequence- All statements written in sequence get executed sequentially. The execution does not depend upon the existence of any other statement.

ii. Selection- Makes a choice from given options. The choice is made based on conditions. Normally the condition is written after 'IF' & the action after 'THEN'. For a 2-way selection, 'ELSE' is used.

iii. Iteration- When a set of statements is to be performed a number of times, the statements are put in a loop. (A short example follows the pros and cons below.)

ADV- Easy to understand.


DIS ADV- Ambiguity can creep into the language used to describe the logic.
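
A short assumed example, for the same kind of discount policy, using all three constructs:

FOR each order in the day's order file              (Iteration)
    Read the order amount                           (Sequence)
    IF order amount > 10000 THEN                    (Selection)
        Apply 10% discount
    ELSE
        Apply 5% discount
    Print the invoice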


26. Build a current MCA Admission system. Draw a context level diagram, a DFD up to two levels, an ER diagram, and identify the data flows, data stores and processes. Draw the input and output screens. (May-04)

EVENT LIST FOR THE CURRENT MCA ADMISSION SYSTEM

1. Administrator enters the college details.

2. Issue of admission forms.

3. Administrator enters the student details into the system.

4. Administrator verifies the details of the student.

5. System generates the hall tickets for the student.

6. Administrator updates the CET score of student in the system.

7. System generates the score card for the students.

8. Student enters his list of preference of college into the system.

9. System generates college-wise student list according to CET score.

10. System sends the list to the college as well as student.

Data Store used in MCA admission:

1) Student_Details :

i) Student_id

ii) Student Name


iii) Student Address

iv) Student_contactNo

v) Student Qualification

vi) Studentmarks10th

vii) Studentmarks12th

viii) StudentmarksDegree

2) Stud_cet details:

i) Student_id

ii) Student_rollNo

iii) Student_cetscore

iv) Student_percentile

3) Stud_preference List:

i) Student_id

ii) Student_rollNo

iii) Student_preference1

iv) Student_preference2

v) Student_preference3

vi) Student_preference4

vii) Student_preference5

4) College List:

i) College_id

ii) college_name

iii) college_address

iv) seats available

v) Fee

Input Files

1) Student Details Form:

Student Name: ___________________________

Student Address:

Student Contact NO:

Student Qualification:

Students Percentage 12th:

Students Percentage 10th:

Students Degree Percentage:

(optional)

2) Student preference List:

Student_rollNo: _________

Preference No 1:

Preference No 2:

Preference No 3:


Preference No 4:

Preference No 5:

3) Student CET Details:

Student id :

Student rollno:

Student Score:

Student Percentile:

4) College List :

College Name:

College Address:

Seats Available:

Fees :

OUTPUT FILES

1) Student ScoreCard

Student RollNo :

Student Name :

Student Score:

Percentile :

2) Student List to College

RollNo Name Score Percentile

1

2

3

4

5

6

7

8

9

26. RAD

Rapid Application Development (RAD) is an incremental software development process model that emphasizes an extremely short development cycle. If requirements are well understood and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a very short time period (e.g. 60 to 90 days).

The RAD approach encompasses the following phases.


1. Business modeling:

The information flow among business functions is modeled in a way that answers the

following questions:

What information drives the business process?

What information is generated?

Who generates it?

Where does the information go?

Who processes it?

2. Data Modeling:

The information flow defined as part of the business modeling phase is refined into a set

of data objects that are needed to support the business.

The characteristics (called attributes) of each object are identified and the relationships

between these objects defined.

3. Process Modeling:

The data objects defined in the data modeling phase are transformed to achieve the

information flow necessary to implement a business function.

Processing descriptions are created for adding, modifying, deleting or retrieving a data object.

4. Application generation:

RAD assumes the use of fourth generation techniques. Rather than creating software

using conventional third generation programming languages the RAD process works to

reuse existing program components (when necessary). In all cases, automated tools are

used to facilitate construction of the software.

5. Testing and Turnover:

Since the RAD process emphasizes reuse, many of the program components have already

been tested. This reduces overall testing time. However new components must be tested

and all interfaces must be fully exercised.

The time constraints imposed on a RAD project demand “scalable scope”. If a business

application can be modularized in a way that enables each major function to be completed in

less than three months, it is a candidate for RAD. Each major function can be addressed by a

separate RAD team and then integrated to form a whole.

The RAD approach has the following Drawbacks:

• For large but scalable projects RAD requires sufficient human resources to create the right

number of RAD teams.

• RAD requires developers and customers who are committed to the rapid-fire activities necessary to get a system complete in a much abbreviated time frame. If commitment is lacking from either constituency, RAD projects will fail.

• Not all types of applications are appropriate for RAD. If a system cannot be properly

modularized, building the components necessary for RAD will be problematic.


• If high performance is an issue and performance is to be achieved through tuning the interfaces to system components, the RAD approach may not work.

• RAD is not appropriate when technical risks are high. This occurs when a new application

makes heavy use of new technology or when the new software requires a high degree of

interoperability with existing computer programs.


For More Notes Join Us on Orkut: - M.C.A. Part –Time (MU)

http://www.orkut.co.in/Main#Community?cmm=103405711