
February 2011

Master of Computer Application (MCA) – Semester 5

MC0084 – Software Project Management &

Quality Assurance – 4 Credits

(Book ID: B0958 & B0959)

Assignment Set – 1 (60 Marks)

Answer all questions. Each question carries TEN marks.

Book ID: B0958

1. What is project management? Explain various activities involved in project management.

2. What is COCOMO? Explain COCOMO model in detail.

3. What is project scheduling? Explain different techniques for project scheduling.

Book ID: B0959

4. What is testing? Explain the following testing techniques:

a) White Box testing

b) Black Box testing

5. What is debugging? Explain the basic steps in debugging.

6. What is a fishbone diagram? How is it helpful in project management?


February 2011

Master of Computer Application (MCA) – Semester 5

MC0084 – Software Project Management &

Quality Assurance – 4 Credits

(Book ID: B0958 & B0959)

Assignment Set – 2 (60 Marks)

Answer all questions. Each question carries TEN marks.

Book ID: B0958

1. Explain different project planning methods.

2. Explain the following:

a. Risk

b. Software risk management.

3. Explain the following management control structures:

a. Decentralized

b. Centralized

c. Mixed mode control.

Book ID: B0959

4. What is CMM? Explain all the phases in it.

5. What are the different Box Specifications? Explain in detail.

6. Explain the use of mathematics in software development, along with its mathematical preliminaries.


Assignment Set – 1(ANSWER-1)

Project management is a systematic method of defining and achieving targets with optimized use of resources such as time,

money, manpower, material, energy, and space. It is an application of knowledge, skills, resources, and techniques to meet

project requirements. Project management involves various activities, which are as follows:

Work planning

Resource estimation

Organizing the work

Acquiring resources such as manpower, material, energy, and space

Risk assessment

Task assigning

Controlling the project execution

Reporting the progress

Directing the activities

Analyzing the results

Assignment Set – 1(ANSWER-2)

The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. The

model uses a basic regression formula, with parameters that are derived from historical project data and current project

characteristics.

COCOMO was first published in 1981 in Barry W. Boehm's book Software Engineering Economics [1] as a model for estimating

effort, cost, and schedule for software projects. It drew on a study of 63 projects at TRW Aerospace where Barry Boehm was

Director of Software Research and Technology in 1981. The study examined projects ranging in size from 2,000 to 100,000

lines of code, and programming languages ranging from assembly to PL/I. These projects were based on the waterfall model of

software development which was the prevalent software development process in 1981.

References to this model typically call it COCOMO 81. In 1997 COCOMO II was developed, and it was finally published in 2000 in

the book Software Cost Estimation with COCOMO II [2]. COCOMO II is the successor of COCOMO 81 and is better suited for

estimating modern software development projects. It provides more support for modern software development processes and an

updated project database. The need for the new model came as software development technology moved from mainframe and

overnight batch processing to desktop development, code reusability and the use of off-the-shelf software components. This

answer refers to COCOMO 81.

COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level, Basic COCOMO, is good

for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is limited due to its lack of factors to

account for differences in project attributes (Cost Drivers). Intermediate COCOMO takes these Cost Drivers into account, and

Detailed COCOMO additionally accounts for the influence of individual project phases.


Basic COCOMO computes software

development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of lines of code

(KLOC).

COCOMO applies to three classes of software projects:

* Organic projects - "small" teams with "good" experience working with "less than rigid" requirements

* Semi-detached projects - "medium" teams with mixed experience working with a mix of rigid and less than rigid

requirements

* Embedded projects - developed within a set of "tight" constraints (hardware, software, operational, ...)

The basic COCOMO equations take the form

Effort Applied = a_b * (KLOC)^(b_b)  [man-months]

Development Time = c_b * (Effort Applied)^(d_b)  [months]

People Required = Effort Applied / Development Time  [count]

The coefficients a_b, b_b, c_b and d_b are given in the following table.

Software project    a_b    b_b     c_b    d_b

Organic             2.4    1.05    2.5    0.38

Semi-detached       3.0    1.12    2.5    0.35

Embedded            3.6    1.20    2.5    0.32
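As a worked illustration, the following Python sketch evaluates the three Basic COCOMO equations. The coefficients are the ones tabulated above; the helper name and the 32 KLOC example figure are invented for the example:

# Basic COCOMO coefficients from the table above.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class="organic"):
    """Return (effort in man-months, schedule in months, average staff)."""
    a, b, c, d = COEFFICIENTS[project_class]
    effort = a * kloc ** b            # Effort Applied [man-months]
    time = c * effort ** d            # Development Time [months]
    people = effort / time            # People Required [count]
    return effort, time, people

# Example: a 32 KLOC organic-mode project.
effort, time, people = basic_cocomo(32, "organic")
print(f"effort={effort:.1f} MM, schedule={time:.1f} months, staff={people:.1f}")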


Basic COCOMO is good for a quick estimate of software costs. However, it does not account for differences in hardware

constraints, personnel quality and experience, use of modern tools and techniques, and so on.

Intermediate COCOMO computes software development effort as a function of program size and a set of "cost drivers" that

include subjective assessment of product, hardware, personnel and project attributes. This extension considers four categories of

cost drivers, each with a number of subsidiary attributes:

* Product attributes

Required software reliability

Size of application database

Complexity of the product

* Hardware attributes

Run-time performance constraints

Memory constraints

Volatility of the virtual machine environment

Required turnaround time

* Personnel attributes

Analyst capability

Software engineering capability

Applications experience

Virtual machine experience

Programming language experience

* Project attributes

Use of software tools

Application of software engineering methods

Required development schedule
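As a hedged sketch of how these drivers enter the estimate: in Intermediate COCOMO each driver rating maps to an effort multiplier, and their product (the effort adjustment factor, EAF) scales the nominal effort. The multiplier values below are invented for illustration, not Boehm's published tables:

import math

def intermediate_effort(kloc, a=3.0, b=1.12, multipliers=()):
    # Nominal effort a * KLOC^b, scaled by the effort adjustment factor
    # (EAF), the product of the chosen cost-driver multipliers.
    eaf = math.prod(multipliers) if multipliers else 1.0
    return a * kloc ** b * eaf

# Example: high required reliability (1.15) offset by very capable
# analysts (0.86); both multiplier values are illustrative.
print(intermediate_effort(32, multipliers=(1.15, 0.86)))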

Assignment Set – 1(ANSWER-3)

Project Scheduling

Project scheduling is concerned with the techniques that can be employed to manage the activities that need to be undertaken

during the development of a project.

Scheduling is carried out in advance of the project commencing and involves:

• identifying the tasks that need to be carried out;

• estimating how long they will take;

• allocating resources (mainly personnel);

• scheduling when the tasks will occur.

Once the project is underway control needs to be exerted to ensure that the plan continues to represent the best prediction of

what will occur in the future:

• based on what occurs during the development;

• often necessitates revision of the plan.

Effective project planning will help to ensure that the systems are delivered:


• within cost;

• within the time constraint;

• to a specific standard of quality.

Two project scheduling techniques will be presented, the Milestone Chart (or Gantt Chart) and the Activity Network.

Milestone Charts

Milestones mark significant events in the life of a project, usually critical activities which must be achieved on time to avoid

delay in the project.

Milestones should be truly significant and be reasonable in terms of deadlines (avoid using intermediate stages).

Examples include:

• installation of equipment;

• completion of phases;

• file conversion;

• cutover to the new system.

Gantt Charts

A Gantt chart is a horizontal bar or line chart which will commonly include the following features:

• activities identified on the left hand side;

• time scale is drawn on the top (or bottom) of the chart;

• a horizontal open oblong or a line is drawn against each activity indicating estimated duration;

• dependencies between activities are shown;

• at a review point the oblongs are shaded to represent the actual time spent (an alternative is to represent actual and

estimated by 2 separate lines);

• a vertical cursor (such as a transparent ruler) placed at the review point makes it possible to establish activities which

are behind or ahead of schedule.

Activity Networks

The foundation of the approach came from the Special Projects Office of the US Navy in 1958. It developed a technique for

evaluating the performance of large development projects, which became known as PERT - Program Evaluation and Review

Technique. Other variations of the same approach are known as the critical path method (CPM) or critical path analysis (CPA).


The heart of any PERT chart is a network of tasks needed to complete a project, showing the order in which the tasks need to

be completed and the dependencies between them. This is represented graphically:

[Figure: example of an activity network]

The diagram consists of a number of circles, representing events within the development lifecycle, such as the start or

completion of a task, and lines, which represent the tasks themselves. Each task is additionally labelled by its time duration.

Thus the task between events 4 & 5 is planned to take 3 time units. The primary benefit is the identification of the critical path.

The critical path is the path whose total activity time is greater than that of any other path through the network (a delay in any task on

the critical path leads to a delay in the whole project).

Tasks on the critical path therefore need to be monitored carefully.

The technique can be broken down into 3 stages:

1. Planning:

• identify tasks and estimate duration of times;

• arrange in feasible sequence;

• draw diagram.

2. Scheduling:

• establish timetable of start and finish times.

3. Analysis:

• establish float;

• evaluate and revise as necessary.
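The planning and analysis stages above can be sketched in a few lines of Python. This is a minimal illustration with an invented four-task network (A precedes B and C, which both precede D); it computes earliest and latest finish times and the float of each task:

# Each task maps to (duration, list of predecessor tasks); data is invented.
tasks = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

# Forward pass: earliest finish time of each task
# (tasks are listed so that predecessors come first).
earliest = {}
for name, (duration, preds) in tasks.items():
    earliest[name] = duration + max((earliest[p] for p in preds), default=0)

project_end = max(earliest.values())

# Backward pass: latest finish time of each task.
latest = {name: project_end for name in tasks}
for name in reversed(list(tasks)):
    duration, preds = tasks[name]
    for p in preds:
        latest[p] = min(latest[p], latest[name] - duration)

# Float = latest - earliest; zero-float tasks form the critical path.
for name in tasks:
    slack = latest[name] - earliest[name]
    print(name, "float:", slack, "(critical)" if slack == 0 else "")

For this data the zero-float tasks A, B and D form the critical path, giving a 12-unit project duration.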

Assignment Set – 1(ANSWER-4a)

White-box testing (a.k.a. clear box testing, glass box testing, transparent box testing, or structural testing) is a method of testing

software that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In

white-box testing an internal perspective of the system, as well as programming skills, are required and used to design test

cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to

testing nodes in a circuit, e.g. in-circuit testing (ICT).

While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually

done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a

system level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented

parts of the specification or missing requirements.


White-box test design techniques include:

Control flow testing

Data flow testing

Branch testing

Path testing
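A minimal branch-testing sketch in Python (the classify function and its tests are invented for illustration): the test cases are chosen by reading the code and exercising each branch outcome, which is the white-box perspective described above.

import unittest

def classify(n):
    # Two branches: n < 0 and n >= 0.
    if n < 0:
        return "negative"
    return "non-negative"

class TestClassify(unittest.TestCase):
    def test_negative_branch(self):          # exercises the n < 0 branch
        self.assertEqual(classify(-1), "negative")

    def test_non_negative_branch(self):      # exercises the n >= 0 branch
        self.assertEqual(classify(0), "non-negative")

if __name__ == "__main__":
    unittest.main()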

Assignment Set – 1(ANSWER-4b)

Black-box testing is a method of software testing that tests the functionality of an application as opposed to its internal

structures or workings (see white-box testing). Specific knowledge of the application's code/internal structure and programming

knowledge in general is not required. Test cases are built around specifications and requirements, i.e., what the application is

supposed to do. It uses external descriptions of the software, including specifications, requirements, and design to derive test

cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid

inputs and determines the correct output. There is no knowledge of the test object's internal structure.

This method of test can be applied to all levels of software testing: unit, integration, functional, system and acceptance. It

typically comprises most if not all testing at higher levels, but can also dominate unit testing.

Typical black-box test design techniques include:

Decision table testing

All-pairs testing

State transition tables

Equivalence partitioning

Boundary value analysis
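For contrast, a minimal black-box sketch (an invented requirement: "accept ages 18 to 65 inclusive"): here the test cases come from equivalence partitioning and boundary value analysis on the specification alone, without reading the implementation.

import unittest

def is_eligible(age):
    # Implementation under test; its internals are irrelevant to the tests.
    return 18 <= age <= 65

class TestEligibility(unittest.TestCase):
    def test_boundary_values(self):
        # Boundary value analysis: values at and just outside each limit.
        for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
            self.assertEqual(is_eligible(age), expected)

    def test_equivalence_partitions(self):
        # One representative per partition: below, within, above the range.
        self.assertFalse(is_eligible(5))
        self.assertTrue(is_eligible(40))
        self.assertFalse(is_eligible(90))

if __name__ == "__main__":
    unittest.main()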

Assignment Set – 1(ANSWER-5)

Debugging is a methodical process of finding and reducing the number of bugs, or defects, in a computer program or a piece of

electronic hardware, thus making it behave as expected. Debugging tends to be harder when various subsystems are tightly

coupled, as changes in one may cause bugs to emerge in another. Many books have been written about debugging,

as it involves numerous aspects, including interactive debugging, control flow, integration testing, log files,

monitoring (application, system), memory dumps, profiling, Statistical Process Control, and special design tactics to improve

detection while simplifying changes.

Step 1. Identify the error.

This is an obvious step but a tricky one: a bad identification of an error can cause a lot of wasted development time. Production

errors reported by users are often hard to interpret, and sometimes the information we get from

them is misleading.

A few tips to make sure you identify the bug correctly:


See the error. This is easy if you spot the error yourself, but not if it comes from a user; in that case, see if you can get the user to send

you a few screen captures, or even use a remote connection to see the error for yourself.

Reproduce the error. You should never say that an error has been fixed if you were not able to reproduce it.

Understand what the expected behavior should be. In complex applications it can be hard to tell what the expected

behavior should be, but that knowledge is essential to fixing the problem, so we may have to talk with the product owner or

check the documentation to find this information.

Validate the identification. Confirm with the person responsible for the application that the error is actually an error and that the

expected behavior is correct. The validation can also reveal situations where it is not necessary, or not worth it, to fix the error.

Step 2. Find the error.

Once we have an error correctly identified, it is time to go through the code to find the exact spot where the error is located. At

this stage we are not interested in understanding the big picture of the error; we are just focused on finding it. A few

techniques that may help to find an error are:

Logging. It can be to the console or to a file; it should help you trace the error in the code (see the sketch after this list).

Debugging. Debugging in the most technical sense of the word, meaning turning on whatever debugger you are using

and stepping through the code.

Removing code. I discovered this method a year ago when we were trying to fix a very challenging bug. We had an

application which, a few seconds after performing an action, was causing the system to crash, but only on some computers and

only from time to time. When debugging, everything seemed to work as expected, and when the machine crashed it happened

with many different patterns. We were completely lost, and then the removing-code approach occurred to us. It worked more or

less like this: we took out half of the code from the action causing the crash and executed it hundreds of times, and the

application crashed; we did the same with the other half of the code and the application didn't crash, so we knew the error was

in the first half. We kept splitting the code until we found that the error was in a third-party function we were using, so we

decided to rewrite it ourselves.
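As referenced in the list above, a minimal logging sketch using Python's standard library (the payments module name and apply_discount function are invented examples):

import logging

# Log to both the console and a file, as mentioned above.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    handlers=[logging.StreamHandler(), logging.FileHandler("debug.log")],
)
log = logging.getLogger("payments")

def apply_discount(price, rate):
    log.debug("apply_discount(price=%r, rate=%r)", price, rate)
    result = price * (1 - rate)
    log.debug("apply_discount -> %r", result)
    return result

apply_discount(100.0, 0.15)  # the log now traces inputs and outputs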

Step 3. Analyze the error.

This is a critical step. Use a bottom-up approach from the place the error was found and analyze the code so you can see the big

picture of the error. Analyzing a bug has two main goals: to check that there aren't any other errors to be found around it

(the iceberg metaphor), and to establish the risks of introducing any collateral damage with the fix.

Step 4. Prove your analysis

This is a straightforward step. After analyzing the original bug you may have come up with a few more errors that may appear in

the application; this step is all about writing automated tests for these areas (it is better to use a test framework from the

xUnit family, as sketched below).

Once you have your tests, you can run them, and you should see all of them failing; that proves that your analysis is right.
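A minimal sketch of this step with Python's unittest (parse_amount and its bug are a hypothetical example): the test is written from the analysis and fails until the error is actually fixed.

import unittest

def parse_amount(text):
    return int(text)  # bug found in the analysis: rejects values like "3.50"

class TestParseAmount(unittest.TestCase):
    def test_decimal_amounts_are_accepted(self):
        # Written before the fix; it fails (here with a ValueError),
        # which proves the analysis.
        self.assertEqual(parse_amount("3.50"), 3.5)

if __name__ == "__main__":
    unittest.main()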

Step 5. Cover lateral damage.

At this stage you are almost ready to start coding the fix, but you have to protect yourself before you change the code, so you

create or gather (if already created) all the unit tests for the code around the area you will change, so that after completing

the modification you can be sure you won't have broken anything else. If you run these unit tests, they all should

pass.

Step 6. Fix the error.

That’s it, finally you can fix the error!

Step 7. Validate the solution.

Run all the test scripts and check that they all pass.


Assignment Set – 1(ANSWER-6)

FISH BONE DIAGRAM

Ishikawa diagrams (also called fishbone diagrams, cause-and-effect diagrams or Fishikawa) are causal diagrams that show the

causes of a certain event; they were created by Kaoru Ishikawa (1990). [1] Common uses of the Ishikawa diagram are product design

and quality defect prevention, to identify potential factors causing an overall effect. Each cause or reason for imperfection is a

source of variation. Causes are usually grouped into major categories to identify these sources of variation. The categories

typically include:

People: Anyone involved with the process

Methods: How the process is performed and the specific requirements for doing it, such as policies, procedures, rules,

regulations and laws

Machines: Any equipment, computers, tools etc. required to accomplish the job

Materials: Raw materials, parts, pens, paper, etc. used to produce the final product

Measurements: Data generated from the process that are used to evaluate its quality

Environment: The conditions, such as location, time, temperature, and culture in which the process operates

Ishikawa diagrams were proposed by Ishikawa [2] in the 1960s; he pioneered quality management processes in the Kawasaki

shipyards, and in the process became one of the founding fathers of modern management.

The technique was first used in the 1940s, and is considered one of the seven basic tools of quality control. [3] It is known as a fishbone

diagram because of its shape, similar to the side view of a fish skeleton.

Mazda Motors famously used an Ishikawa diagram in the development of the Miata sports car, where the required result was

"Jinba Ittai" or "Horse and Rider as One". The main causes included such aspects as "touch" and "braking" with the lesser

causes including highly granular factors such as "50/50 weight distribution" and "able to rest elbow on top of driver's door".

Every factor identified in the diagram was included in the final design.

Causes

Causes in the diagram are often categorized, such as to the 8 M's, described below. Cause-and-effect diagrams can reveal key

relationships among various variables, and the possible causes provide additional insight into process behavior.


Causes can be derived from brainstorming sessions and sorted into groups, which can then be labeled as categories of the fishbone. They will

typically be one of the traditional categories mentioned above but may be something unique to the application in a specific

case. Causes can be traced back to root causes with the 5 Whys technique.

Typical categories are:

The 8 Ms (used in manufacturing)

The 8 Ps (used in service industry)

The 4 Ss (used in service industry)

One may find it helpful to use the Fishbone diagram in the following cases:

To analyze and find the root cause of a complicated problem

When there are many possible causes for a problem

If the traditional way of approaching the problem (trial and error, trying all possible causes, and so on) is very time

consuming

When the problem is very complicated and the project team cannot identify the root cause

When not to use it

Of course, the Fishbone diagram isn't applicable to every situation. Here are just a few cases in which you should not use the

Fishbone diagram because the diagrams either are not relevant or do not produce the expected results:

The problem is simple or is already known.

The team size is too small for brainstorming.

There is a communication problem among the team members.

There is a time constraint, or sufficient headcount is not available for brainstorming.

The team has experts who can fix any problem without much difficulty.

Assignment Set – 2(ANSWER-1)

Here are examples and explanations of four commonly used tools in project planning and project management, namely:

Brainstorming, Fishbone Diagrams, Critical Path Analysis Flow Diagrams, and Gantt Charts. Additionally and separately see

business process modelling and quality management, which contain related tools and methods aside from the main project

management models shown below.

The tools here each have their strengths and particular purposes, summarised below.


brainstorming

Brainstorming is usually the first crucial creative stage of the project management and project planning process. See the

brainstorming method, explained separately and in detail, because it has many other useful applications outside of project

management.

Unlike most project management skills and methods, the first stage of the brainstorming process is ideally a free-thinking and

random technique. Consequently it can be overlooked or under-utilized, because it is not a natural approach for many people

whose main strengths are in systems and processes. As a result, this stage of the project planning process can benefit from

being facilitated by a team member able to manage such a session, specifically to help very organised people to think randomly

and creatively.

fishbone diagrams

Fishbone diagrams are chiefly used in quality management fault-detection, and in business process improvement, especially in

manufacturing and production, but the model is also very useful in project management planning and task management

generally.

Within project management fishbone diagrams are useful for early planning, notably when gathering and organising factors, for

example during brainstorming.

Fishbone diagrams are very good for identifying hidden factors which can be significant in enabling larger activities, resources

areas, or parts of a process.

Fishbone diagrams are not good for scheduling or showing interdependent time-critical factors.

Fishbone diagrams are also called 'cause and effect diagrams' and Ishikawa diagrams, after Kaoru Ishikawa (1915-89), a

Japanese professor specialising in industrial quality management and engineering who devised the technique in the 1960s.

Ishikawa's diagram became known as a fishbone diagram, obviously, because it looks like a fishbone. A fishbone diagram has a

central spine running left to right, around which is built a map of factors which contribute to the final result (or problem).

For each project the main categories of factors are identified and shown as the main 'bones' leading to the spine.

Into each category can be drawn 'primary' elements or factors, and into these can be drawn

secondary elements or factors. This can be done for every category, and extended to third- or fourth-level factors

if necessary. Typically fishbone diagrams have six or more main bones feeding into

the spine. Other main category factors can include Environment, Management, Systems, Training, Legal, etc.

The categories used in a fishbone diagram should be whatever makes sense for the project. Various standard category sets exist

for different industrial applications, however it is important that your chosen structure is right for your own situation, rather

than taking a standard set of category headings and hoping that it fits.

At a simple level the fishbone diagram is a very effective planning model and tool - especially for 'mapping' an entire

operation.

Where a fishbone diagram is used for project planning of course the 'Effect' is shown as an aim or outcome or result, not a

problem.

The 'Problem' term is used in fault diagnosis and in quality management problem-solving. Some fishbone diagrams can become

very complex indeed, which is common in specialised quality management areas, especially where systems are computerised.

This model and the critical path analysis diagram are similar to the even more complex diagrams used in business process

modelling within areas of business planning and business process improvement.

project critical path analysis (flow diagram or chart)

'Critical Path Analysis' sounds very complicated, but it's a very logical and effective method for planning and managing

complex projects. A critical path analysis is normally shown as a flow diagram, whose format is linear (organised in a line),

and specifically a time-line.


Critical Path Analysis is also called Critical Path Method - it's the same thing - and the terms are commonly abbreviated, to

CPA and CPM. A commonly used tool within Critical Path Analysis is PERT (Program/Programme/Project Evaluation and

Review Technique) which is a specialised method for identifying related and interdependent activities and events, especially

where a big project may contain hundreds or thousands of connected elements. PERT is not normally relevant in simple

projects, but any project of considerable size and complexity, particularly when timings and interdependency issues are crucial,

can benefit from the detailed analysis enabled by PERT methods. PERT analysis commonly feeds into Critical Path Analysis

and to other broader project management systems, such as those mentioned here.

Critical Path Analysis flow diagrams are very good for showing interdependent factors whose timings overlap or coincide.

They also enable a plan to be scheduled according to a timescale. Critical Path Analysis flow diagrams also enable costings and

budgeting, although not quite as easily as Gantt charts (below), and they also help planners to identify causal elements,

although not quite so easily as fishbone diagrams (above).

This is how to create a Critical Path Analysis. As an example, the project is a simple one - making a fried breakfast.

Assemble crockery and utensils, assemble ingredients, prepare equipment, make toast, fry sausages and eggs, grill bacon and

tomatoes, lay table, warm plates, serve.

Note that some of these activities must happen in parallel - and crucially they are interdependent. That is to say, if you tried to

make a fried breakfast by doing one task at a time, and one after the other, things would go wrong. Certain tasks must be started

before others, and certain tasks must be completed in order for others to begin. The plates need to be warming while other

activities are going on. The toast needs to be toasting while the sausages are frying, and at the same time the bacon and

sausages are under the grill. The eggs need to be fried last. A Critical Path Analysis is a diagrammatical representation of what

needs to be done and when. Timescales and costs can be applied to each activity and resource. Here's the Critical Path Analysis for

making a fried breakfast:

This Critical Path Analysis example below shows just a few activities over a few minutes. Normal business projects would see

the analysis extending several times wider than this example, and the time line would be based on weeks or months. It is

possible to use MS Excel or a similar spreadsheet to create a Critical Path Analysis, which allows financial totals and time

totals to be planned and tracked. Various specialised project management software packages enable the same thing. Beware, however,

of spending weeks on the intricacies of computer modelling when, in the early stages especially, a carefully hand-drawn diagram -

which requires no computer training at all - can put 90% of the thinking and structure in place.

[Figure: project critical path analysis flow diagram example]


gantt charts

Gantt Charts (commonly wrongly called gant charts) are extremely useful project management tools. The Gantt Chart is named

after US engineer and consultant Henry Gantt (1861-1919) who devised the technique in the 1910s.

Gantt charts are excellent models for scheduling and for budgeting, and for reporting and presenting and communicating

project plans and progress easily and quickly, but as a rule Gantt Charts are not as good as a Critical Path Analysis Flow

Diagram for identifying and showing interdependent factors, or for 'mapping' a plan from and/or into all of its detailed causal or

contributing elements.

You can construct a Gantt Chart using MS Excel or a similar spreadsheet. Every activity has a separate line. Create a time-line

for the duration of the project (the breakfast example shows minutes, but normally you would use weeks, or months for very big

long-term projects). You can colour-code the time blocks to denote the type of activity (for example, intense, watching brief,

directly managed, delegated and left-to-run, etc.). You can schedule reviews and insert break points. At the end of each line you

can show as many cost columns for the activities as you need. The breakfast example shows just the capital cost of the

consumable items and a revenue cost for labour and fuel. A Gantt chart like this can be used to keep track of progress for each

activity and how the costs are running. You can move the time blocks around to report on actuals versus planned, to

re-schedule, and to create new plan updates. Cost columns can show plan and actuals and variances, and calculate whatever

totals, averages, ratios, etc., that you need. Gantt Charts are probably the most flexible and useful of all project management

tools, but remember they do not very easily or obviously show the importance and inter-dependence of related parallel

activities, and they won't obviously show the necessity to complete one task before another can begin, as a Critical Path

Analysis will do, so you may need both tools, especially at the planning stage, and almost certainly for large complex projects.

[Figure: Gantt chart example]
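A Gantt chart of this kind can also be sketched in a few lines of Python with matplotlib's horizontal bars (the activities, start weeks and durations below are invented for illustration):

import matplotlib.pyplot as plt

activities = ["Analysis", "Design", "Coding", "Testing"]
start_week = [0, 2, 4, 7]   # planned start, in weeks from project start
duration = [2, 3, 4, 2]     # planned duration, in weeks

fig, ax = plt.subplots()
ax.barh(activities, duration, left=start_week)  # one bar per activity line
ax.invert_yaxis()           # first activity at the top, Gantt-style
ax.set_xlabel("Week")
plt.show()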

A wide range of computerised systems/software now exists for project management and planning, and new methods continue

to be developed. It is an area of high innovation, with lots of scope for improvement and development. Many organizations

develop or specify particular computerised

tools, so it's a good idea to seek local relevant advice and examples of best practice before deciding the best computerised

project management system(s) for your own situation.

Project planning tools are naturally also used for subsequent project reporting, presentations, etc., and you will make life

easier for everyone if you use formats that people recognize and find familiar.

Assignment Set – 2(ANSWER-2a)

DEFINING RISK

So, what are risks? Risks are simply potential problems. For example, every time we cross the street, we run the risk of


being hit by a car. The risk does not start until we make the commitment, until we step in the street. It ends when the

problem occurs (the car hits us) or the possibility of risk is eliminated (we safely step onto the sidewalk on the other

side of the street).

A software project may encounter various types of risks:

· Technical risks include problems with languages, project size, project functionality, platforms, methods,

standards, or processes. These risks may result from excessive constraints, lack of experience, poorly

defined parameters, or dependencies on organizations outside the direct control of the project team.

· Management risks include lack of planning, lack of management experience and training, communications

problems, organizational issues, lack of authority, and control problems.

· Financial risks include cash flow, capital and budgetary issues, and return on investment constraints.

· Contractual and legal risks include changing requirements, market-driven schedules, health & safety

issues, government regulation, and product warranty issues.

· Personnel risks include staffing lags, experience and training problems, ethical and moral issues, staff

conflicts, and productivity issues.

· Other resource risks include unavailability or late delivery of equipment & supplies, inadequate tools,

inadequate facilities, distributed locations, unavailability of computer resources, and slow response times.

Assignment Set – 2(ANSWER-2b)

There are many risks involved in creating high quality software on time and within budget. However, in order for it to

be worthwhile to take these risks, they must be compensated for by a perceived reward. The greater the risk, the

greater the reward must be to make it worthwhile to take the chance. In software development, the possibility of

reward is high, but so is the potential for disaster. The need for software risk management is illustrated in Gilb’s risk

principle. “If you don’t actively attack the risks, they will actively attack you" [Gilb-88]. In order to successfully

manage a software project and reap our rewards, we must learn to identify, analyze, and control these risks. This

discussion focuses on the basic concepts, processes, and techniques of software risk management.

There are basic risks that are generic to almost all software projects. Although there is a basic component of risk

management inherent in good project management, risk management differs from project management in the following


way:

Within risk management the “emphasis is shifted from crisis management to anticipatory management” [Down-94].

Boehm defines four major reasons for implementing software risk management [Boehm-89]:

1. Avoiding software project disasters, including runaway budgets and schedules, defect-ridden software

products, and operational failures.

2. Avoiding rework caused by erroneous, missing, or ambiguous requirements, design or code, which typically

consumes 40-50% of the total cost of software development.

3. Avoiding overkill with detection and prevention techniques in areas of minimal or no risk.

4. Stimulating a win-win software solution where the customer receives the product they need and the vendor

makes the profits they expect.
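A standard way to make the identify-analyze-control loop concrete is to rank risks by exposure, the product of probability and loss. The sketch below is an illustration with invented risks and numbers, not data from the text:

# (description, probability of occurrence, loss if it occurs, in person-days)
risks = [
    ("Key developer leaves mid-project", 0.2, 60),
    ("Requirements change late",         0.5, 30),
    ("Third-party API proves unstable",  0.3, 10),
]

# Risk exposure = probability * loss; attack the largest exposures first.
for desc, p, loss in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{desc}: exposure = {p * loss:.1f} person-days")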

Assignment Set – 2(ANSWER-3a)

Decentralized

In a decentralized-control team organization, decisions are made by consensus, and all work is considered group work. Team

members review each other’s work and are responsible as a group for what every member produces. Figure 8.1 shows the

patterns of control and communication among team members in a decentralized-control organization. The ringlike

management structure is intended to show the lack of a hierarchy and that all team members are at the same level.

Such a "democratic" organization leads to higher morale and job satisfaction and, therefore, to less turnover. The engineers feel

more ownership of the project and responsibility for the problem, resulting in higher quality in their work.

A decentralized-control organization is more suited for long-term projects, because the amount of intragroup communication

that it encourages leads to a longer development time, presumably accompanied by lower life cycle costs.

The proponents of this kind of team organization claim that it is more appropriate for less understood and more complicated

problems, since a group can invent better solutions than a single individual can. Such an organization is based on a technique

referred to as "egoless programming," because it encourages programmers to share and review one another’s work.

On the negative side, decentralized-control team organization is not appropriate for large teams, where the communication

overhead can overwhelm all the engineers, reducing their individual productivity.


Assignment Set – 2(ANSWER-3b)

Centralized

Centralized-control team organization is a standard management technique in well understood disciplines. In this mode of

organization, several workers report to a supervisor who directly controls their tasks and is responsible for their

performance. Centralized control is based on a hierarchical organizational structure in which a number of supervisors report to a

"second-level" manager and so on up the chain, to the president of the enterprise.

One way to centralize the control of a software development team is through a chief-programmer team. In this kind of

organization, one engineer, known as the chief programmer, is responsible for the design and all the technical details of the

project.

The chief programmer reports to a peer project manager who is responsible for the administrative aspects of the project. Other

members of the team are a software librarian and programmers who report to the chief programmer and are added to the team

on a temporary basis when needed. Specialists may be used as consultants to the team. The need for programmers and

consultants, as well as what tasks they perform is determined by the chief programmer, who initiates and controls all decisions.

The software library maintained by the librarian is the central repository for all the documentation and decisions made by the

team. Figure 8.0 is a graphical representation of the patterns of control and communication supported by this kind of

organization…

Assignment Set – 2(ANSWER-3c)

A mixed-control team organization attempts to combine the benefits of centralized and decentralized control, while minimizing

or avoiding their disadvantages.

Rather than treating all members the same, as in a decentralized organization, or treating a single individual as the chief, as in a

centralized organization, the mixed organization differentiates the engineers into senior and junior engineers. Each senior

engineer leads a group of junior engineers and reports, in turn, to a project manager. Control is vested in the project manager

and senior programmers, while communication is decentralized among each set of individuals, peers, and their immediate

supervisors. The patterns of control and communication in mixed-control organizations are shown in Figure 8.2.

A mixed-mode organization tries to limit communication to within a group that is most likely to benefit from it. It also tries to

realize the benefits of group decision making by vesting authority in a group of senior programmers or architects. The

mixed-control organization is an example of the use of a hierarchy to master the complexity of software development as well as

organizational structure.


Assignment Set – 2(ANSWER-4)

Business Process Re-Engineering services offered by Encore update the architecture of the technical applications and initiate

the new business process based upon the requirements of the new business model. Such modifications in both technical and

functional processes are a requisite to remaining competitive and profitable.

We are able to provide you with the required project management, business analysis and technical expertise to perform efficient

process re-engineering. We understand the reason for business process re-engineering and will work closely with your staff to

improve the intricate business rules of large enterprise systems in a way that is consistent with industry standards.

Encore has a successful history of performing a Capability Maturity Model (CMM) Mini Assessment Process (MAP) to

improve a process in accordance with the CMM phases as defined by the Software Engineering Institute (SEI).

The CMM phases are:

Initial

Repeatable

Defined

Managed

Optimizing

Using certified CMM analysts and following our PMTech and Technology Process Framework methodologies, we are able to

provide a proven performance by delivering reliable, consistent and high-quality results.

To deliver a successful CMM MAP, we execute the following high-level phases and associated deliverable processes:

Define the enterprise appraisal objectives and critical success factors of the mini assessment

Conduct an opening briefing to summarize the process maturity concepts and the MAP methodology


Review corporate documents (policies, procedures, process flows and performance metrics)

Review project-relevant documents and exhibits

Interview corporate managers and stakeholders with the support of a project liaison

Map the findings to the requisites of the criteria for CMM-Level compliance

Assignment Set – 2(ANSWER-5)

The term black box is a metaphor for a specific kind of abstraction. Black-box abstraction means that none of the internal

workings are visible, and that one can only observe output as a reaction to some specific input (Fig. 1). Black-box testing, for

instance, works this way. Test cases are selected without knowledge about the implementation. They are run, and the delivered

results are checked for correctness.

Figure 1: Black-box component

For black-box specification of software components, pre- and postconditions are adequate. They describe the circumstances

under which a component can be activated and the result of this activation.

Unfortunately, black-box specifications are insufficient for more interactive components. The result of an operation may

depend not only on the input, but also on results of operations external to the specified component, which are called during

execution (Fig. 2).

Figure 2: Black-box component, which makes an external call

The external operation, activated by the component, needs to be specified, too. This specification is not needed to see how the

operation is to be applied, but to see how it needs to be implemented. It is a specification of duties, sometimes referred to as

required interfaces [AOB97]. Often, such an external operation will depend on the state of the component calling it. As an

example of this, in [Szy97] the Observer Pattern from [JV95] is analyzed. To perform its task, the observer needs to request

state information from the observed object. If the observed object is a text and the observer a text view component, the observer

needs to know whether it sees the old or the new version of the text. In other words, one needs to know whether the observer is

called before or after the data is changed. Specifying the text manipulation operations, such as delete, as black boxes does not

reveal when the observer is called and what the state of the caller is at that time (Fig. 3). In this simple example, the

intermediate state of the black-box at the time of the call could be described verbally, but in more involved cases this approach

usually fails.

Figure 3: Black-box specification of observer pattern


One may think that the above problems occur only if the intermediate state of a black-box operation can be observed, but this is not

true. If an operation involves more than one external call, the order of these calls may be important when implementing the called

components. Consider the copying of a view with a separate model in the model-view-controller pattern: Has the model already

been copied when the view is asked to copy its own state?

Sometimes, programmers will try to retrieve information in addition to the specification by experimenting with the component

[PS94]. Of course, such behavioral assumptions might not actually hold and depending on them ruins the possibility of later

substitution. However, black-box descriptions often force this additional information gathering to be able to use a component.

The black box, state box and clear box views are distinct usage perspectives which are effective in defining the behaviour of

individual components but they provide little in the way of compositionality. Combining specifications cannot make statements

about the behaviour as a whole [1]. Cleanroom provides no means of describing concurrent (dynamic) behaviours and

analysing pathological problems, such as deadlocks, livelocks, race conditions and so on.

Box Structured Development Method. The box structured development method outlines a hierarchy of concerns by a box

structure, which allows software specifications to be divided, conquered, and connected. The box structured development method

originates from Cleanroom. Cleanroom defines three views of the software system, referred to as the black box, state box and

clear box views. An initial black box is refined into a state box and then into a clear box. A clear box

can be further refined into one or more black boxes, or it closes a hierarchical branch as a leaf box providing a control

structure. This hierarchy of views allows for a stepwise refinement and verification as each view is derived from the previous.

The clear box is verified for equivalence against its state box, and the state box against its black box. The box structure should

specify all requirements for the component, so that no further specification is logically required to complete the component. We

have slightly altered the box structured development method to make it more beneficial.

Sequence-Based Specification Method. The sequence-based specification method also originates from Cleanroom. The

sequence-based specification method describes the causality between stimuli and responses using a sequence enumeration

table. Sequence enumerations describe the responses of a process after accepting a history of stimuli. Every mapping of an

input sequence to a response is justified by explicit reference to the informal specifications. The sequence-based specification

method is applied to the black box and state box specifications. Each sequence enumeration can be tagged with a requirement

reference. The tagged requirement maps a stimuli-response causality of the system to the customer or derived requirements.

Architecture Specification: The purpose is to define the 3 key dimensions of architecture:


Conceptual architecture, module architecture and execution architecture. The Cleanroom aspect of architecture specification is

in decomposition of the history-based black box Function Specification into state-based state box and procedure-based clear

box descriptions. It is the beginning of a referentially transparent decomposition of the function specification into a box

structure hierarchy, and will be used during increment development.

Increment Design: The purpose is to design and code a software increment that conforms to Cleanroom design principles. Increments are designed and implemented as usage hierarchies through box structure decomposition, and are expressed in procedure-based clear box forms that can introduce new black boxes for further decomposition. The design is performed in such a way that it is provably correct using mathematical models; treating a program as a mathematical function makes this possible. Note that specification and design are developed in parallel, resulting in a box structure hierarchy affording complete traceability.

Correctness Verification: The purpose is to verify the correctness of a software increment using mathematically based techniques. Black box specifications are verified to be complete, consistent, and correct. State box specifications are verified with respect to black box specifications, and clear box procedures are verified with respect to state box specifications. A set of correctness questions is asked during functional verification. Correctness is established by group consensus and/or by formal proof techniques. Any part of the work changed after verification must be reverified.

Cleanroom Software Specification and Design begins with an external view (black box), which is transformed into a state machine view (state box), and is fully developed into a procedure (clear box). The process of box structure development is as follows:

1) Define the system requirements.

2) Specify and validate the black box.

• Define the system boundary and specify all stimuli and responses

• Specify the black box mapping rules

• Validate the black box with owners and users

3) Specify and verify the state box

• Specify the state data and initial state values

• Specify the state box transition function

• Derive the black box behavior of the state box and compare the derived black box with the original black box for equivalence

4) Design and verify the clear box

• Design the clear box control structures and operations

• Embed uses of new and reused black boxes as necessary

• Derive the state box behavior of the clear box and compare the derived state box to the original state box for

equivalence

5) Repeat the process for new black boxes

Figure 3.1 shows the three-tier hierarchy of box structures, namely the black box, state box, and clear

box forms. Referential transparency is ensured, which means traceability to requirements.


The theory of sequence-based specification is used to develop the specifications. In the sequence-based specification process,

all possible sequences of stimuli are enumerated systematically in a strict order, as stimulus sequences of length zero, length

one, length two, and so on. As each sequence is mapped to its correct response, equivalent sequences are identified by applying

a reduction rule, and the enumeration process terminates when the system has been defined completely and consistently.
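To make the black box / state box relationship concrete, here is a minimal Python sketch (an invented counter component, not an example from the source): the black-box view maps a stimulus history to a response, the state-box view re-expresses the same behavior with state data and a transition function, and the final assertion performs the equivalence check described above.

# Black-box view: the response is a function of the stimulus history alone.
def black_box_count(history):
    return len([s for s in history if s == "increment"])

# State-box view: the same behavior with state data and a transition function.
class StateBoxCounter:
    def __init__(self):
        self.count = 0          # state data and its initial value

    def stimulus(self, s):
        if s == "increment":    # the state box transition function
            self.count += 1
        return self.count       # the response

# Derive the black-box behavior of the state box and compare for equivalence.
history = ["increment", "noop", "increment"]
box = StateBoxCounter()
assert [black_box_count(history[:i + 1]) for i in range(len(history))] == \
       [box.stimulus(s) for s in history]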

Assignment Set – 2(ANSWER-6)

Mathematics in Software Development

Mathematics has many useful properties for the developers of large systems. One of its most useful properties is that it is

capable of succinctly and exactly describing a physical situation, an object or the outcome of an action. Ideally, the software

engineer should be in the same position as the applied mathematician. A mathematical specification of a system should be

presented, and a solution developed in terms of a software architecture that implements the specification should be produced.

Another advantage of using mathematics in the software process is that it provides a smooth transition between software

engineering activities. Not only functional specifications but also system designs can be expressed in mathematics, and of

course, the program code is a mathematical notation – albeit a rather long-winded one.

The major property of mathematics is that it supports abstraction and is an excellent medium for modeling. As it is an exact

medium there is little possibility of ambiguity: Specifications can be mathematically validated for contradictions and

incompleteness, and vagueness disappears completely.

In addition, mathematics can be used to represent levels of abstraction in a system specification in an organized way.

Mathematics is an ideal tool for modeling. It enables the bare bones of a specification to be exhibited and helps the analyst and

system specifier to validate a specification for functionality without intrusion of such issues as response time, design directives,

implementation directives, and project constraints. It also helps the designer, because the system design specification exhibits

the properties of a model, providing only sufficient details to enable the task in hand to be carried out. Finally, mathematics

provides a high level of validation when it is used as a software development medium. It is possible to use a mathematical

proof to demonstrate that a design matches a specification and that some program code is a correct reflection of a design. This

is preferable to current practice, where often little effort is put into early validation and where much of the checking of a

software system occurs during system and acceptance testing.

Mathematical Preliminaries

To apply formal methods effectively, a software engineer must have a working knowledge of the mathematical notation

associated with sets and sequences and the logical notation used in predicate calculus. The intent of the section is to provide a

brief introduction. For a more detailed discussion the reader is urged to examine books dedicated to these subjects

Sets and Constructive Specification

A set is a collection of objects or elements and is used as a cornerstone of formal methods. The elements contained within a set

are unique (i.e., no duplicates are allowed). Sets with a small number of elements are written within curly brackets (braces)

with the elements separated by commas. For example, the set {C++, Pascal, Ada, COBOL, Java} contains the names of five

programming languages. The order in which the elements appear within a set is immaterial. The number of items in a set is

known as its cardinality. The # operator returns a set's cardinality. For example, the expression #{A, B, C, D} = 4 implies that


the cardinality operator has been applied to the set shown with a result indicating the number of items in the set. There are two

ways of defining a set. A set may be defined by enumerating its elements (this is the way in which the sets just noted have been

defined). The second approach is to create a constructive set specification. The general form of the members of a set is

specified using a Boolean expression. Constructive set specification is preferable to enumeration because it enables a succinct

definition of large sets. It also explicitly defines the rule that was used in constructing the set. Consider the following

constructive specification example: {n : ℕ | n < 3 . n}. This specification has three components: a signature, n : ℕ; a predicate, n <

3; and a term, n. The signature specifies the range of values that will be considered when forming the set, the predicate (a

Boolean expression) defines how the set is to be constructed, and, finally, the term gives the general form of the items of the set.

In the example above, ℕ stands for the natural numbers; therefore, natural numbers are to be considered. The predicate indicates

that only natural numbers less than 3 are to be included; and the term specifies that each element of the set will be of the form

n.

Therefore, this specification defines the set {0, 1, 2}. When the form of the elements of a set is obvious, the term can be

omitted. For example, the preceding set could be specified as {n : ℕ | n < 3}. All the sets that have been described here have

elements that are single items. Sets can also be made from elements that are pairs, triples, and so on. For example, the set

specification {x, y : ℕ | x + y = 10 . (x, y²)} describes the set of pairs of natural numbers that have the form (x, y²) and where

the sum of x and y is 10. This is the set {(1, 81), (2, 64), (3, 49), ...}. Obviously, a constructive set specification required to

represent some component of computer software can be considerably more complex than those noted here. However, the basic

form and structure remain the same.
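Constructive set specification maps almost directly onto set comprehensions in a programming language. The Python sketch below mirrors the two examples above (the finite range standing in for ℕ is an implementation convenience, not part of the notation):

naturals = range(100)  # a finite stand-in for the natural numbers

# {n : N | n < 3 . n}  ->  signature, predicate, term
simple = {n for n in naturals if n < 3}
assert simple == {0, 1, 2}

# {x, y : N | x + y = 10 . (x, y^2)}  ->  pairs (x, y*y) with x + y == 10
pairs = {(x, y * y) for x in naturals for y in naturals if x + y == 10}
assert {(1, 81), (2, 64), (3, 49)} <= pairs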