Assignment


Requirement Changes in Software Development Projects: Analysis and Solutions

Abstract: Once a requirement change appears during a software development project, both the project manager and the developers start to feel the headache. Requirements change is even treated as a topic in its own right in project-management consultants' courseware and in books and tutorials on software project management. This article looks at why requirement changes occur in software development projects, how to control them, and how to handle them when they do occur.

Part One: Requirement changes are annoying

As a project manager in the middle of development you will meet this problem: one phone call from the customer overturns the requirements that you, the customer, and your development team had finally settled after repeated discussion. You then have to start a new round of talks with the customer and the team, talks that seem endless, and you may even have to redesign the existing architecture. Facing this situation, a project manager may say, "We cannot refuse the customer, but we cannot satisfy the new requirement immediately either, so I had to push it back and complete it later." Some reactions are more extreme: the customer can never be satisfied; the customer's demands are technically impossible to achieve. With each new statement of the requirements you begin to doubt the value of requirements confirmation. You communicated with the customer repeatedly at the beginning and got clear answers with no objections, yet as the project evolves and the customer's understanding of the system deepens, the customer ends up reversing the requirements. You then conclude that the requirements were only collected, never truly confirmed. Because of requirement changes the project has been extended again and again, and the customer still says this is not what they want. You keep complaining that customer requirements change like the weather; in the end, whether it is your complaining or the customer's changes, the project team and its developers are left exhausted and confused.

Before your next project you and your team members may therefore think: in software development, would it not be better to eliminate requirement changes altogether and never discuss any change at all? That kind of thinking is wrong. Requirement changes in software development cannot be completely eliminated, and whether you are the project manager or a developer, it is best to abandon that idea before the project starts. It is not requirement change that can be eliminated; it is the idea of "eliminating requirement change" that needs to be removed. Effort spent trying to eliminate every change is usually thankless. In the course of project development, requirement change is inevitable.

Under normal circumstances the project manager spends a great deal of effort trying to avoid tedious requirement changes, yet changes still come in the end. That does not mean the work is pointless. Like software testing, the right attitude for both project managers and developers is to reduce, before a change happens, the chance that it will happen, and to reduce to a minimum the risk created by the changes that do occur.

Part Two: Why requirement changes occur

In a software development project, requirement changes may come from the service provider, from the customer or suppliers, and of course from within the project team itself. Analysed carefully, the causes come down to at least the following.

1. The scope was not delineated before refinement began. Detailed requirements work is done by the requirements analysts, who generally start from a brief descriptive summary submitted by the user, refine it, extract the functions, and write descriptions of them (the normal flow and the exception flow). When refinement has reached a certain depth and system design has begun, a change of scope occurs, and then many details of the use-case descriptions may have to change. For example, data that was originally entered manually is now to be calculated from another information system, or something originally described as an attribute becomes an entity.

2. No requirements baseline was specified. The requirements baseline is the line that determines which changes are allowed. As the project progresses, the baseline itself changes. What may change is decided by the contract and by cost; for example, once the overall architecture of the software has been designed, changes that enlarge the scope of the requirements are no longer allowed, because the overall architecture ties into the progress of the whole project and the initial cost budget. As the project advances, the baseline is set higher and higher, and fewer and fewer changes are allowed.

3. There is no good software architecture to absorb change. A component-based architecture adapts quickly to requirement changes: the data layer encapsulates data-access logic, the business layer encapsulates business logic, and the presentation layer presents to the user. Adaptation, however, must follow the principle of loose coupling, otherwise the layers remain entangled, and interfaces should be designed so that changes to their entry parameters are minimised. If the business logic is well encapsulated, then adding or removing some of the information requested at the interface layer is very easy to accommodate; and if the interfaces are defined sensibly, even changes to the business process can be adapted to quickly. In this way, within the limits that the cost impact permits, the requirements baseline can be lowered and customer satisfaction improved.

Part Three: Controlling requirement changes

As already mentioned, the idea that "requirement changes must not be allowed to happen" should be eliminated before the project starts. When a change does occur during the project, do not just complain and do not blindly satisfy the customer's "new needs"; instead, manage and control the change.

1. Manage customer requirements by level. In a software development project, "the customer is always right" and "the customer is God" are not entirely true: the contract has already been signed, and any new change or added requirement affects not only the project itself but also the customer's return on investment, so sometimes the project manager must actually act in the customer's interest. Requirements (and changes) can be classified so that change is controlled and managed.

Level 1 requirements (or changes) are critical. If such a requirement is not met the project cannot be delivered properly and all the earlier work is negated, which would mean that the effort of every team member was wasted; rate these as "Urgent". Remedial, fire-fighting debugging usually belongs here.

Level 2 requirements (or changes) are follow-on critical needs. They do not affect delivery of the work already done, but if they are not met the new content cannot be submitted or the project cannot continue; rate these as "Necessary". Key components on which new modules depend generally fall into this category.

Level 3 requirements are important follow-on needs. If they are not met the overall value of the project decreases; meeting them shows the value of the project and also demonstrates the technical value of the developers; rate these as "Needed". Development of valuable new modules generally falls here.

These first three levels should all be implemented, but their timing can be arranged by priority.

Level 4 requirements are improvements. Failing to meet them does not affect the use of existing features, but meeting them would make the product better; rate these as "Better". Interface and usability requirements usually sit at this grade.

Level 5 requirements are optional; they are more an idea or a possibility, usually just a personal preference of the customer; rate these as "Maybe".

For level 4 requirements, if time and resources allow, by all means do them. For level 5 requirements, as the label says, whether to do them at all is a "Maybe".

2. Manage changes across the whole project life cycle. Software projects of every size and type have a life cycle that can be divided into three stages: project initiation, project implementation, and project closeout. Requirements change management and control does not happen only in the implementation stage; it runs through the entire project life cycle. Change must be managed from an overall perspective, and change control needs an integrated approach.
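The five grades described in point 1 above can be captured in a small data structure. The sketch below is a minimal illustration in Python: the level names follow the labels used in the text ("Urgent", "Necessary", "Needed", "Better", "Maybe"), while the class names, fields and example entries are hypothetical.

```python
from dataclasses import dataclass
from enum import IntEnum


class Priority(IntEnum):
    """The five requirement/change grades described above (1 = most critical)."""
    URGENT = 1      # the project cannot be delivered without it
    NECESSARY = 2   # new work cannot be submitted or continued without it
    NEEDED = 3      # the overall value of the project drops without it
    BETTER = 4      # existing features work, but this would improve them
    MAYBE = 5       # a preference or an idea; optional


@dataclass
class ChangeRequest:
    title: str
    requested_by: str
    priority: Priority

    def must_be_implemented(self) -> bool:
        # Levels 1-3 should all be implemented; only their timing differs.
        return self.priority <= Priority.NEEDED


# Example: sort a backlog so the most critical changes surface first.
backlog = [
    ChangeRequest("Fix failed invoice export", "customer", Priority.URGENT),
    ChangeRequest("New reporting module", "customer", Priority.NEEDED),
    ChangeRequest("Rearrange toolbar icons", "customer", Priority.MAYBE),
]
for request in sorted(backlog, key=lambda r: r.priority):
    print(request.priority.name, "-", request.title)
```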

(1) Preventing change in the project initiation stage. As emphasised above, requirement changes are inevitable and unavoidable for any software project; all the project manager and developers can do is respond actively, and that response should begin in the requirements analysis phase. The better the requirements analysis is done and the more detailed and clear the scope defined in the baseline documents, the better the project manager's odds when the customer wants a change. If the requirements work is not done well and the reference documents leave large ambiguities in scope, the customer will find plenty of "room for new demands", and the project team will often have to pay a great deal of unnecessary cost. If the analysis is done well, the documents are clear, and the customer has signed off, then later changes that go beyond the scope of the contract can require additional fees. At that point the project manager must argue the case; this is not about deliberately squeezing money out of the customer, but about not letting the customer develop the habit of frequent change, otherwise there will be no end of trouble.

(2) Requirement changes in the project implementation stage. The difference between a successful software project and a failed one is whether the whole process is controllable. The project manager should establish the idea that "requirement change is inevitable, controllable, and useful". Change control in the implementation stage means analysing the change request, assessing the potential risk the change brings, and revising the baseline documents. To keep requirements creep under control, note the following:

Requirements must be tied to investment. If the cost of changes requested by the demand side is borne entirely by the developer, change becomes inevitable. So at the start of the project both the funding side and the development side must be clear on one point: when requirements change, the investment in development changes too. Changes must be confirmed by the funder; once changes are seen in terms of cost, both sides treat them prudently.

Small changes must also go through the formal requirements management process, otherwise they add up. In practice people often do not want to apply the formal process to small changes, feeling that it reduces development efficiency and wastes time; but it is precisely because of that attitude that requirements gradually become uncontrollable and projects fail.

Defining requirements precisely will not stop change. It is not true that the more detailed the requirements definition, the less requirements creep there will be; these are two different dimensions, and an over-fine requirements definition has no effect on creep. Requirement change is eternal; requirements do not stay fixed just because they were written down.

Pay attention to communication skills.

In the actual development process, users and developers do recognise the points above, but because changes may come from the customer side or from the development side, the project manager, as the person managing the requirements, needs to use a variety of communication skills so that every party to the project gets what it needs.

(3) Summing up in the project closeout stage. Capability usually comes not from successful experience but from the lessons of failure. Many project managers pay no attention to collecting lessons learned; even after being badly beaten in the course of a project they only complain about luck, the environment, or poor teamwork, rarely analyse and summarise systematically, or simply do not know how, and so the same problems recur. In fact the project summary should be treated as an important part of continuous improvement for the current project and for future ones, and also as the point at which the contract, the design content, and the targets are identified and validated. The summary should cover the risks identified in advance and the unforeseen changes that actually occurred, together with the measures taken in response, and it should include a statistical summary and analysis of the problems found in the changes that occurred during the project.

3. Principles of requirements change management. Although requirement changes vary in content and type, the principles for managing them are the same. Requirements change management should follow these principles:

(1) Establish a requirements baseline. The baseline is the basis against which requirements are changed. During development, once the requirements have been identified and reviewed (with the users taking part in the review), the first requirements baseline can be established. After each change and the review that follows it, a new baseline should be established.

(2) Develop a simple, effective change control process and record it as a document. After the baseline is established, every proposed change must pass through this control process. The process is also general: it can serve as a reference for later development in this project and for other projects.

(3) Set up a project Change Control Board (CCB) or a similar body responsible for deciding which changes to accept. The CCB is made up jointly of people involved in the project and should include users and developers, among them those with policy and decision-making authority.

(4) A requirement change must first be applied for, then evaluated, and finally confirmed through an assessment appropriate to the size of the change.

(5) When a requirement changes, the affected software plans, products, and activities must be changed accordingly so that they stay consistent with the updated requirements.

(6) Keep proper documentation of everything the change produces.

Part Four: How to handle requirement changes

Requirements change control generally goes through four steps: change application, change assessment, decision, and reply. If the change is accepted, two further steps are added, implementing the change and verifying it, and sometimes there is a step for cancelling the change. Around this control process, several ways of responding help.

Mutual cooperation. It is hard to imagine a project succeeding against the user's resistance. When discussing requirements, developers and users should try to understand each other and deal with issues in a spirit of cooperation as far as possible. Even when a user puts forward what looks to the developer like an "excessive" demand, the developer should analyse the reasons carefully and propose a feasible alternative.

Full communication. Requirements change management is to a large extent a process of communication between users and developers. Developers must learn to listen carefully to the user's requirements, considerations, and ideas, and then analyse and organise them. At the same time, developers should explain to the user what impact and what negative consequences a further requirement change will bring once the design stage has been entered.

Assign dedicated staff to requirements change management. When the workload is heavy, developers easily neglect ongoing communication with users, so dedicated requirements-change staff are needed to keep exchanging information with users in good time.

Use the contract as a constraint. The impact of requirement changes on development is plain for all to see, so the contract with the user can include relevant terms: for example, limiting the time within which the user may propose changes, allowing changes to be accepted, rejected, or partially accepted according to the circumstances, and stipulating that requirement changes must go through the change control process.

Distinguish between requirements. As development progresses, some users keep raising requirements that the project group has not yet achieved or that carry a large workload and significantly affect progress. In such cases the developers can explain to the user that the project was launched on the premise of the initial basic requirements; if new requirements are added substantially (what the user regards as mere refinement is in fact new work that increases the load), the project will not be completed on time. If the user insists on the new requirements, suggest grading them by importance and urgency as the basis for assessing the change, and at the same time keep the frequency of new requirements under control.

Choose a suitable development model. A prototyping model is better suited to projects whose requirements are not yet clear. The developers first build a system prototype from the user's description of the requirements and then discuss it with the user. Users generally explain their requirements in much more detail once they can see something real, and the developers can then refine the prototype according to those explanations. Repeating this several times brings the prototype gradually closer to the user's final requirements and fundamentally reduces the appearance of requirement changes. The iterative development methods now popular in the industry are very effective for change control on projects with tight schedules.

Involve users in requirements reviews. As the authors of the requirements, users are naturally regarded as among their most authoritative spokespeople.
In fact, during requirements reviews users can often put forward many valuable comments. A review is also the user's opportunity to confirm the requirements one final time, which can effectively reduce the incidence of requirement changes.
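As a closing illustration, the change-control flow described in Part Four (application, assessment, decision and reply, plus implementation and verification when a change is accepted, and cancellation along the way) can be sketched as a small state machine. The state names and the transition table below are an illustrative assumption, not a prescribed process; the "reply" step is treated here as the notification that follows a decision rather than as a separate state.

```python
# Allowed transitions of a change request through the control process sketched above.
ALLOWED_TRANSITIONS = {
    "applied":     {"assessed", "cancelled"},
    "assessed":    {"accepted", "rejected", "cancelled"},
    "accepted":    {"implemented", "cancelled"},
    "rejected":    set(),             # reply sent; nothing further happens
    "implemented": {"verified"},
    "verified":    set(),
    "cancelled":   set(),
}


class ChangeRecord:
    def __init__(self, summary: str):
        self.summary = summary
        self.state = "applied"
        self.history = ["applied"]

    def move_to(self, new_state: str) -> None:
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)


change = ChangeRecord("Add export-to-PDF option")
for step in ("assessed", "accepted", "implemented", "verified"):
    change.move_to(step)
print(change.history)
```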

Introduction
A major computer software company has retained the services of an attorney to represent it in litigation alleging that an up-and-coming software firm has pirated its software. The copyright, patent, trade secret, and software piracy issues associated with this litigation are complex and difficult for the attorney, the judge, and the jury to grasp. In order to adequately represent the client, an attorney requires the assistance of a computer expert to properly assess and evaluate the complex technical evidence. Regardless of the resolution of the matter through settlement, arbitration or litigation, a technical expert is necessary to properly evaluate the case and to deftly reduce complex technical concepts to simple terms so that attorneys, arbitrators, the parties, judges, and juries fully understand the issues. Technical experts are typically required to prove or defend issues arising from patent infringement, copyright infringement, trade secret misappropriation, and software piracy. This article examines the critical role of the computer expert. The selection of a computer expert is crucial. Computer experts can be used by attorneys to help resolve computer-related intellectual property disputes without costly, time-consuming litigation. If litigation proves necessary, the services of a computer expert are essential during pretrial proceedings and at the trial itself.

What is the role of a Computer Expert?A computer expert makes the technical aspects of a computer-related intellectual property dispute understandable to laypersons, including lawyers and their clients. For example, a technical expert may evaluate whether a competitor's software program is "substantially similar" to another's in a potential copyright infringement suit. At times, an expert may conclude that a threatened claim is weak or even baseless. As a result, a party may refrain from suit and, possibly, avoid serious Rule 11 sanctions. If a lawsuit is filed, the technical expert's assistance will be important to pre-filing preparation, pretrial discovery and the presentation of evidence at trial. A technical expert may be essential in a non-jury trial by presenting a case in terms understandable to the judge so that the judge can adequately assess the case.

Computer-Related Intellectual Property Disputes Which Require Technical Experts
Technical experts are typically required to prove or defend issues arising from patent infringement, copyright infringement, trade secret misappropriation, and software piracy.

Copyright Infringement
A technical expert must first investigate exactly what the plaintiff's copyright protects and what the defendant has infringed upon. These elements usually fall within three general categories:
An exact copy of the plaintiff's software.
A derivative work with many elements exactly the same or similar.
Similarity in design, which extends protection beyond copying of program code; protection extends to similarities in the program structure, sequence and organization. See Whelan Assocs., Inc. v. Jaslow Dental Lab, Inc., 797 F.2d 1222 (3d Cir. 1986), cert. denied, 479 U.S. 1031 (1987); compare Computer Assocs. Inter., Inc. v. Altai, Inc., 982 F.2d 693 (2d Cir. 1992).

The expert must examine the original software and look for a copyright notice, as the software must clearly state that it is a copyrighted work, who owns the work, and the creation date. Even though copyright registration is not necessary for copyright ownership, in order to claim copyright infringement the owner must have a valid copyright registration in the computer software. The expert must then examine the owner's version control system. He or she must determine whether the software was published or was in the public domain prior to the copyright date. Moreover, the expert must determine whether the defendant had sufficient access to enable him or her to copy the plaintiff's software. The most important element of the expert's investigation is an examination of the defendant's software. If the source code is available, the expert should launch a full-scale investigation. Otherwise, the expert should examine the software for similarities in the overall design, by looking at screens, reports, menus and the software's logic hierarchy. The objective is to determine whether probable cause exists for a copyright infringement lawsuit. Once a lawsuit is instituted, the defendant's source code can be obtained during discovery, perhaps subject to the terms of a confidentiality order.

Patent Infringement
As in the case of copyright infringement, the technical expert must first investigate exactly what is protected by the patent and what the defendant infringed. An examination of the patent claims and specifications is essential to this investigation. While patent protection is more difficult to obtain, it can be broader than copyright protection. A patent can protect a:
Process
Device
Methodology (in some cases)
Format type

In the case of computer software, a pure mathematical algorithm, without any specific end use, is not patentable. However, a new format type (e.g., a new spreadsheet concept) could be patentable. A patentable claim could include computer software that controls industrial processes or devices, even though such software utilizes mathematical algorithms. Diamond v. Diehr, 450 U.S. 175 (1981). The technical expert must examine the patent to determine specifically what the claims and specifications protect. He or she must then look for public domain similarities to ascertain the validity of the claims. Finally, he or she must examine the defendant's software and determine the areas of infringement. Sometimes, examination of source code is not necessary, but it would not hurt.

Misappropriation of a Trade SecretTrade secret law may provide the broadest protection against copying or misappropriation. This protection extends not only to the software itself but also to any derivative work. In this case, the expert must determine whether the software was sufficiently novel and whether it was treated by the plaintiff as a trade secret. The expert must determine whether the defendant knew that the software was a trade secret, had access to the secret, and used the secret in an unauthorized manner. The expert must examine the defendant's software to uncover areas of violation. However, to be thorough, the expert must also search the public domain, because if the software exists therein through no fault of the defendant, then the defendant did not violate the plaintiff's confidence.

Elements of Discovery Required by a Technical ExpertIn order to complete a forensic investigation in an intellectual property dispute involving software piracy, a computer expert must have access to the following information: Copyright, patent or trade secret information on both plaintiff's and defendant's software; Copies of all agreements that were entered into between plaintiff and defendant; All information necessary to create a complete chronology of events pertaining to the matter, including any and all documentation created during development of plaintiff's and defendant's software; Complete working magnetic copies (object code, executables, and databases) of both plaintiff's and defendant's software; and, Complete program source code for both plaintiff's and defendant's software.

Software PiracyIn order to establish software piracy, the computer expert must launch a full-scale forensic investigation. There are at least seven different instances of software piracy which would usually be investigated: Defendant's software was created as a direct (exact) duplicate of plaintiff's object code; Defendant's software was created as an updated derivative of plaintiff's software from original source code using the same programming language;
Defendant's software was created as a direct (exact) translation from plaintiff's original source code into another programming language; Defendant's software was created as an updated derivative of plaintiff's software from translated source code using a different programming language; Defendant's software was copied from plaintiff's software using a fourth generation language (4GL); Defendant's software was created as an updated derivative of plaintiff's software which was generated using a 4GL; and, Defendant created software by copying only the design of plaintiff's software.

The forensic investigation is made by the expert using both object and source code. A comparison of source code is extremely difficult. Usually, software systems are very large. Often, a software system contains several hundred thousand lines of source code. In these cases, locating copied sections is very time-consuming. As computer-related litigation can be very expensive, an attorney should carefully direct the expert's efforts in order to ensure that the expert produces the most useful work and does not waste the client's money. One tool that can be very useful in a forensic investigation is HIPO (Hierarchy plus Input-Process-Output). This is a documentation technique developed by IBM during the 1970s. It was developed as a structured analysis tool. It was intended that HIPO diagrams be created prior to actual software development. This would impose a structure upon the software created from these diagrams, thereby ensuring maintainability. However, it is possible to develop HIPO diagrams from already existing software using the source code. The hierarchy chart shows the relationship between various programs and modules. It appears similar to a corporate organization chart. One IPO diagram is then generated for each program or module on the hierarchy diagram. In other words, each box on the hierarchy chart generates its own IPO diagram. The IPO diagram shows the Input, Processing, and Output portions of each programming step within the program or module. Using HIPO enables an expert to see the forest through the trees, and makes his or her forensic investigation more manageable. It is important to remember that an individual who develops software similar to existing software, even where the functionality is similar, is not necessarily guilty of software piracy. Copyright laws do not protect computer algorithms. Even where it can be shown that the individual had access to the original software, the new software may not have been copied. Similar functionality may have been created merely from the marketing needs of a particular industry or profession. What follows is a methodology that a computer expert can use to establish software piracy:

Direct (Exact) Duplication of Object CodeDirect (exact) duplication of object code is the most common form of software piracy. The program is produced by making an exact magnetic copy of the original. It is very simple to accomplish using standard computer utilities. This type of piracy is prevalent among personal computer users. However, this type of copying can be performed on any computer. It is so widespread because it does not require the defendant to use the plaintiff's source code. To establish software piracy resulting from direct duplication of object code, the technical expert would compare the file sizes and creation dates of both the plaintiff's and defendant's programs. If they are identical, the expert then performs a byte-by-byte comparison of the defendant's and the plaintiff's object code. If defendant's software was produced by direct duplication, then the object files would be identical. Another clue would be to look at a character dump of both object files. Most programmers put some character information into their programs. While object code is not usually understandable, the character information contained therein can often be recognized. If the defendant's software was copied from plaintiff's object code, the identifying character information should be recognizable. During discovery, source code should be demanded, as the defendant probably cannot produce source code.
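The checks described above lend themselves to simple tooling. The sketch below, a minimal illustration in Python, covers the three steps named in this section: comparing sizes and timestamps, comparing the files byte by byte, and pulling printable strings out of the object code as a rough "character dump". File paths, thresholds and function names are hypothetical; on a real matter the expert would use whatever binary-analysis tools the platform provides.

```python
import os
import string


def same_size_and_timestamp(path_a: str, path_b: str) -> bool:
    # Modification timestamps stand in here for the "creation dates" in the text.
    sa, sb = os.stat(path_a), os.stat(path_b)
    return sa.st_size == sb.st_size and int(sa.st_mtime) == int(sb.st_mtime)


def identical_bytes(path_a: str, path_b: str, chunk: int = 64 * 1024) -> bool:
    """Byte-by-byte comparison of two object files."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a, b = fa.read(chunk), fb.read(chunk)
            if a != b:
                return False
            if not a:                     # both files ended at the same point
                return True


def printable_strings(path: str, min_len: int = 4) -> set[str]:
    """Rough equivalent of a character dump: runs of printable characters."""
    printable = set(string.printable.encode()) - set(b"\t\n\r\x0b\x0c")
    found, run = set(), bytearray()
    with open(path, "rb") as f:
        for byte in f.read():
            if byte in printable:
                run.append(byte)
            else:
                if len(run) >= min_len:
                    found.add(run.decode("ascii", "ignore"))
                run.clear()
    if len(run) >= min_len:
        found.add(run.decode("ascii", "ignore"))
    return found
```

Identical sizes and identical byte content point to a direct magnetic copy, while a large overlap in the extracted strings (author names, error messages, copyright notices) is the "character dump" clue mentioned above.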

Updated Derivative Software From Original Source CodeIf the defendant has access to plaintiff's source code, he would be able to modify and improve the software to make it more marketable. He would want to make modifications to the original software to enable it to run on a different computer or with
a different operating system. In addition, by making such modifications, he is able to disguise the software so as to make piracy less detectable. The defendant would produce the new software by modifying the plaintiff's original source code. Probably, the original formats of the screens, reports and menus will also have been changed. New screens and reports will have been generated. Often, new functions will have been added. Possibly, some of the main logic will have been modified. However, there are limits to the logic modification, since severe modification would make a complete re-write more cost effective. To establish this type of software piracy, a computer expert must compare source code of the defendant's software with that of the plaintiff's software. First, since the programming languages are the same, the expert should search for copied segments of program code (exact duplication). Next, he should examine the data file structures of both systems. They should be identical or substantially similar. Duplication of file structure is one of the telltale indications of software piracy. The expert should then examine the program logic. In this type of software piracy, large segments of logic would be identical. Variables would have the same or similar names, and identical constants would be used.
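Part of that comparison can be mechanised. The fragment below is a minimal sketch, not a real plagiarism-detection tool: it normalises whitespace to find literally copied lines and measures the overlap in identifier names between two source files. The file names are hypothetical, and the overlap measure is an illustrative choice rather than an accepted forensic standard.

```python
import re


def normalised_lines(source: str) -> set[str]:
    """Collapse whitespace and drop blank lines so trivial reformatting is ignored."""
    return {re.sub(r"\s+", " ", line).strip()
            for line in source.splitlines() if line.strip()}


def identifiers(source: str) -> set[str]:
    """All identifier-like tokens: variable, constant and function names."""
    return set(re.findall(r"[A-Za-z_][A-Za-z_0-9]*", source))


def overlap(a: set[str], b: set[str]) -> float:
    """Share of the smaller set that also appears in the other set (0.0 to 1.0)."""
    return len(a & b) / max(1, min(len(a), len(b)))


plaintiff = open("plaintiff_module.src").read()    # hypothetical file names
defendant = open("defendant_module.src").read()

print("shared source lines :", overlap(normalised_lines(plaintiff), normalised_lines(defendant)))
print("shared identifiers  :", overlap(identifiers(plaintiff), identifiers(defendant)))
```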

Direct (Exact) Translation from Original Source Code Into Another Programming LanguageSoftware pirates are usually very clever. Translation of the original source code into another programming language accomplishes three things. First, it can disguise the final product. Second, it can permit the software to run more efficiently on a different computer or operating system. Third, it can enable the software to be produced from original source code by translating from one programming language into another. This is usually performed on a line-by-line basis. To establish software piracy in this instance, the expert must run both the plaintiff's and defendant's software to demonstrate identical operation. In addition, he or she must compare both plaintiff's and defendant's source code. If the defendant's software was produced by direct translation of plaintiff's source code, then the screens, reports and menus should be identical, the file structure should be identical, the constants should be identical, and there should be a one-to-one correspondence between the variables across both systems. To further establish this type of software piracy, the expert should develop HIPO charts from both plaintiff's and defendant's source code. The hierarchical charts should be identical. The IPO charts should show that the same logic was used to create both systems.
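HIPO charts are normally drawn rather than computed, but the first step, recovering a rough "which routine calls which" hierarchy from existing source, can be mechanised for some languages. The sketch below is a minimal illustration only, and it assumes the source under examination happens to be Python so that the standard-library ast module can parse it; the embedded example source is invented. For other languages the expert would need a parser for that language, and the actual HIPO diagrams would still be drawn from the recovered hierarchy by hand or with a diagramming tool.

```python
import ast
from collections import defaultdict


def call_hierarchy(source: str) -> dict[str, set[str]]:
    """Map each function defined in the source to the names of the functions it calls."""
    tree = ast.parse(source)
    calls = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    calls[node.name].add(inner.func.id)
    return dict(calls)


example = """
def load(path):
    return open(path).read()

def report(path):
    data = load(path)
    print(len(data))
"""
print(call_hierarchy(example))   # {'load': {'open'}, 'report': {'load', 'len', 'print'}}
```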

Updated Derivative Software from Translated Source CodeAfter a software pirate has translated software into a new programming language, he would probably perform modifications to the software. This would be done to further disguise the software and to improve the software following translation. Extensive modification to translated software could make software piracy virtually undetectable. Once again, screens, reports and menus would be changed significantly. New screens and reports as well as new functions would be added. There might also be some modification of the main logic. Software piracy can be established by a computer expert both from examination of source code of both systems and from observing software operation of both systems. The expert should examine the file structures of both systems. They should be identical or, at least, very similar. He or she should search the source code for constants. Many would be the same. Finally, the expert should develop HIPO charts. He or she should find that large sections of the hierarchy are identical or similar. In those cases where the hierarchy is identical, the expert should examine the corresponding IPO diagrams. They should also be identical or similar.

Exact Duplication of Software Using a 4GLThe late 1970's and early 1980's witnessed the development of fourth generation software development systems. With such 4GL systems, a software analyst could potentially create entire software systems at a terminal without having to write a single line of program code. Thus, software piracy acquired a new dimension. Rather than copying object modules or translating original source code, the pirate could easily duplicate the exact external functionality of someone else's software. Copying using a 4GL is much simpler than translation, and it provides ease of subsequent modification. It can be done with or without the pirate having original source code available. If he has the source code available, he would duplicate the original data file structure as well as the data flow. On the other hand, he could derive a new file structure without source
code that would function just as well. The pirate can copy the software merely by reading the user manuals and by observing software operation. Essentially, he duplicates the design of the original software. The technical expert can establish this type of software piracy both by observation of software operation and from examination of the source code. If the plaintiff's source code was available to the defendant when he copied the software, the data file structures should be identical or extremely similar. If the source code was not available to him, the file structure should contain the same elements (fields) which have the same specifications, but which are in a different order. Screens, reports and menus should be identical. This can be observed from software operation as well as source code. Finally, the expert should prepare HIPO diagrams. They should be identical.

Updated Derivative Software from 4GL TranslationOnce a program has been duplicated using a 4GL, a software pirate would probably update and modify the software. Piracy in the resulting software would be extremely difficult to detect. Such modifications would be simple to generate. A technical expert would have difficulty establishing software piracy in this instance. Where sufficient modification has been performed, the resulting software is virtually new and original. The expert should examine the source code and observe software operation. He or she should examine the data file structures for similarities. The expert should examine the design of the system for investigating screens, reports, menus and program logic. Finally, he or she should develop HIPO diagrams and search for similarities in logic.

Newly Developed Software Where Only the Design Was CopiedThere have been many instances of software copyright infringement where the program code was completely new (not copied), but where there was a deliberate effort to duplicate the design of an existing system. This was usually done to enhance the marketability of a newly developed product, especially when the original software was very popular among consumers. This issue demands that an expert be able to show striking similarities in structure, function and organization. Menus, screens, reports and logic should be similar. Specific methods of accomplishing certain tasks should be identical. An example of this would be the use of all the same function keys to accomplish specific tasks. Copyright infringement is demonstrated by observation of software operation. Nothing would be gained by examination of source code. If the software has been sufficiently modified, proving copyright infringement would be very difficult.

Using an Expert to Resolve Computer-Related Intellectual Property Claims Without LitigationMost computer-related intellectual property claims never go to court. Due to the high cost of litigation and the uncertainty of outcome, they are either settled or abandoned. A technical expert can be used to increase the chance of reaching a favorable settlement quickly. With an enhanced technical perspective, an expert will work with an attorney to help him or her prepare an imposing argument of the merits of his case and the weaknesses of the opposition's case. In such instances, the attorney and the expert, working as a team, often convince the opposing side that litigation would accomplish nothing.

Use of Computer Experts in Pretrial ProceedingsComputer-related intellectual property litigation requires one or more independent technical experts. An expert's report is usually submitted to the adversary during pretrial discovery. At a minimum, the report is required to set forth the opinions that the expert will offer at trial and the basis for these opinions. However, many reports are more substantial. They often become treatises that attempt to prove whether or not infringement occurred. At times, because of the overwhelming nature of an expert's technical report, the opposition offers to settle or withdraw from the proceedings. If the attorney hopes to settle the matter without a trial, this type of report is desirable. However, where the attorney knows that a settlement is unlikely, only a minimal report should be produced. Why should an expert over-prepare the opposition for trial? During pretrial preparation, an expert should review pleadings for factual accuracy and suggest changes. A computer expert should assist in preparing the complaint or, where required, counterclaim. The expert should also help prepare interrogatories, document requests, and requests for admission addressing the technical aspects of the case.

With intellectual property litigation, the expert should attend depositions of all technical witnesses. In preparation for such depositions, the expert prepares questions for the attorney to ask. At depositions the expert can provide ad hoc information to an attorney that could make the depositions more meaningful or less damaging. Normally, the expert is deposed as a part of pretrial discovery. An expert helps to evaluate the technical reports generated by opposing technical experts. Often, an expert is asked to investigate an opposing expert in an effort to impeach that expert's credibility. Sometimes, just prior to a trial, an attorney will ask for a computer expert's help with jury selection. The expert assists the attorney in preparing a list of questions to ask prospective jurors during the voir dire. Such questions should reveal a potential juror with expert knowledge of computers so that he or she may be challenged and excluded. This is important because other jurors would look to this expert juror for guidance during deliberations.

Use of Technical Experts in Trial ProceedingsA technical expert is essential at trial. This individual is the most important witness in the case. Establishment of software piracy is difficult because judges and juries have insufficient knowledge of the technical elements required to prove the case. During direct examination, the expert must educate the fact finder. He or she must effectively explain complex technical evidence to lay people. The expert normally uses exhibits and materials prepared before trial. It is important that exhibits be presented because they are placed in the jury room at the end of the trial, and remain as a constant explanation and reminder to the jury during its deliberations. Sometimes, an expert witness provides a hands-on demonstration of the software to the court during direct examination. In any matter of this type, both litigants present expert witnesses. This is very confusing to judges and juries since their testimony will invariably conflict. The jury does not know which one to believe. Consequently, during cross-examination, attorneys challenge every fact and every opinion of the opposing expert. One expert will always state that a sufficient number of similarities exist between two software products so as to establish copying or derivation. The other expert will always testify that the first expert's investigation was inconclusive. An attorney must sort out the logic that separates these two witnesses and create a technical position that would be clear to the fact finder. This can only be done with the expert's assistance. A technical expert must be able to anticipate the answers of the opposing expert. This can usually be accomplished if he or she is familiar with the opposing expert's deposition. The expert then establishes a series of questions or question categories designed to prove the point.

SummaryTechnical experts are essential in computer-related intellectual property litigation. They are needed because the complex technical issues are beyond the knowledge and understanding of the average layperson. Initially, the expert performs a forensic investigation. He or she helps with discovery. Establishment of infringement is only possible with expert assistance. In order to establish software piracy, an expert must examine and analyze both the software of the plaintiff and defendant. This software is normally very large, and similarities are very difficult to find. This article presents a methodology that simplifies the task of the expert. Not only are experts essential in cases that go to trial, but they can be valuable in attaining satisfactory pretrial settlements. Experts are as important during pretrial proceedings as they are during the trial itself.

Cash flow forecasting
Cash flow forecasting is (1) in a corporate finance sense, the modeling of a company's or asset's future financial liquidity over a specific timeframe. Cash usually refers to the company's total bank balances, but often what is forecast is the treasury position, which is cash plus short-term investments minus short-term debt. Cash flow is the change in cash or treasury position from one period to the next. (2) In the context of the entrepreneur or manager, it means forecasting what cash will come into the business or business unit in order to ensure that outgoing cash can be managed so as to avoid it exceeding the cash flow coming in. If there is one thing entrepreneurs learn fast, it is to become very good at cash flow forecasting.


Methods (corporate finance)
The direct method of cash flow forecasting schedules the company's cash receipts and disbursements (R&D). Receipts are primarily the collection of accounts receivable from recent sales, but also include sales of other assets, proceeds of financing, etc. Disbursements include payroll, payment of accounts payable from recent purchases, dividends, debt service, etc. This direct, R&D method is best suited to the short-term forecasting horizon of 30 days or so, because this is the period for which actual, as opposed to projected, data is available (de Caux, 2005).

The three indirect methods are based on the company's projected income statements and balance sheets. The adjusted net income (ANI) method starts with operating income (EBIT or EBITDA) and adds or subtracts changes in balance sheet accounts such as receivables, payables and inventories to project cash flow. The pro-forma balance sheet (PBS) method looks straight at the projected book cash account; if all the other balance sheet accounts have been correctly forecast, cash will be correct, too. Both the ANI and PBS methods are best suited to the medium-term (up to one year) and long-term (multiple years) forecasting horizons. Both are limited to the monthly or quarterly intervals of the financial plan, and need to be adjusted for the difference between accrual-accounting book cash and the often significantly different bank balances (Association for Financial Professionals, 2006).

The third indirect approach is the accrual reversal method (ARM), which is similar to the ANI method. But instead of using projected balance sheet accounts, large accruals are reversed and cash effects are calculated based upon statistical distributions and algorithms. This allows the forecasting period to be weekly or even daily. It also eliminates the cumulative errors inherent in the direct, R&D method when it is extended beyond the short-term horizon. But because the ARM allocates both accrual reversals and cash effects to weeks or days, it is more complicated than the ANI or PBS indirect methods. The ARM is best suited to the medium-term forecasting horizon (Bort, 1990).
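As a concrete illustration of the direct (receipts and disbursements) method described first, the short sketch below rolls a bank balance forward over four weekly periods. All figures, category names and the opening balance are invented for the example; a real forecast would be driven by the company's actual receivables and payables schedules.

```python
# Illustrative direct (receipts-and-disbursements) forecast over four weeks.
weekly_receipts = [
    {"collections": 120_000, "asset_sales": 0},
    {"collections": 95_000,  "asset_sales": 0},
    {"collections": 110_000, "asset_sales": 25_000},
    {"collections": 90_000,  "asset_sales": 0},
]
weekly_disbursements = [
    {"payroll": 60_000, "payables": 45_000, "debt_service": 0},
    {"payroll": 0,      "payables": 50_000, "debt_service": 20_000},
    {"payroll": 60_000, "payables": 40_000, "debt_service": 0},
    {"payroll": 0,      "payables": 55_000, "debt_service": 0},
]

cash = 80_000   # invented opening bank balance
for week, (r, d) in enumerate(zip(weekly_receipts, weekly_disbursements), start=1):
    cash += sum(r.values()) - sum(d.values())
    print(f"week {week}: closing cash {cash:,}")
```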

Methods (entrepreneurial)
The simplest method is to have a spreadsheet that shows cash coming in from all sources out to at least 90 days, and all cash going out for the same period. This requires that the quantity and timing of receipts of cash from sales are reasonably accurate, which in turn requires judgement honed by experience of the industry concerned, because it is rare for cash receipts to match sales forecasts exactly, and it is also rare for all debtors to pay on time. These principles remain constant whether the cash flow forecasting is done on a spreadsheet, on paper, or on some other IT system. A danger of leaning too heavily on theoretical corporate-finance methods when forecasting cash flow to manage a business is that the cash flow reported under financial accounting standards can contain non-cash items. This goes to the heart of the difference between financial accounting and management accounting.

Uses (corporate finance)
A cash flow projection is an important input into valuation of assets, budgeting and determining appropriate capital structures in LBOs and leveraged recapitalizations.

Uses (entrepreneurial)
The point of making the forecast of incoming cash is to manage the outflow of cash so that the business remains solvent. The section of the spreadsheet that shows cash out is thus the basis for what-if modeling, for instance: "what if we pay our suppliers 30 days later?"
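A minimal sketch of that kind of what-if question follows, with invented amounts and dates: supplier payments are shifted 30 days later and the lowest projected balance over a 90-day horizon is recomputed. A payment pushed beyond day 90 simply drops out of the window, which is exactly the effect the question is probing.

```python
# "What if we pay our suppliers 30 days later?" - a toy what-if comparison.
receipts = {0: 5_000, 20: 12_000, 45: 9_000, 70: 15_000}        # day -> cash in
supplier_payments = {10: 8_000, 40: 10_000, 65: 7_000}          # day -> cash out
opening_balance = 3_000


def lowest_balance(cash_in, cash_out, opening, horizon=90):
    balance = lowest = opening
    for day in range(horizon + 1):
        balance += cash_in.get(day, 0) - cash_out.get(day, 0)
        lowest = min(lowest, balance)
    return lowest


delayed = {day + 30: amount for day, amount in supplier_payments.items()}
print("lowest balance, paying on time     :", lowest_balance(receipts, supplier_payments, opening_balance))
print("lowest balance, paying 30 days late:", lowest_balance(receipts, delayed, opening_balance))
```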

References

Cash Forecasting, Tony de Caux, The Treasurer's Companion, Association of Corporate Treasurers, 2005
Cash Flow Forecasting, Association for Financial Professionals, 2006
Medium-Term Funds Flow Forecasting, Corporate Cash Management Handbook, Richard Bort, Warren Gorham & Lamont, 1990

Cost estimation in software engineering
The ability to accurately estimate the time and/or cost taken for a project to come to its successful conclusion is a serious problem for software engineers. The use of a repeatable, clearly defined and well understood software development process has, in recent years, shown itself to be the most effective method of gaining useful historical data that can be used for statistical estimation. In particular, the act of sampling more frequently, coupled with the loosening of constraints between parts of a project, has allowed more accurate estimation and more rapid development times.

Methods
Popular methods for estimation in software engineering include:

Parametric Estimating
Wideband Delphi
COCOMO
SLIM
SEER-SEM Parametric Estimation of Effort, Schedule, Cost, Risk. Minimum time and staffing concepts based on Brooks's law
Function Point Analysis
Proxy-based estimating (PROBE) (from the Personal Software Process)
The Planning Game (from Extreme Programming)
Program Evaluation and Review Technique (PERT)
Analysis Effort method
PRICE Systems Founders of Commercial Parametric models that estimate the scope, cost, effort and schedule for software projects
Evidence-based Scheduling Refinement of typical agile estimating techniques using minimal measurement and total time accounting
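One of the methods listed above can be shown in miniature. Basic COCOMO estimates effort from delivered code size alone, using published coefficients for three project classes; the sketch below uses those textbook coefficients (Boehm, 1981) but is a simplified illustration, not a substitute for a calibrated estimation model.

```python
# Basic COCOMO: effort = a * KLOC**b person-months, schedule = c * effort**d months.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}


def basic_cocomo(kloc: float, mode: str = "organic") -> tuple[float, float]:
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b          # person-months
    schedule = c * effort ** d      # calendar months
    return effort, schedule


effort, months = basic_cocomo(32, "semi-detached")   # a hypothetical 32 KLOC project
print(f"{effort:.1f} person-months over {months:.1f} months")
```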

Cost-benefit analysis
Cost-benefit analysis is a term that refers both to:

helping to appraise, or assess, the case for a project, programme or policy proposal;
an approach to making economic decisions of any kind.

Under both definitions the process involves, whether explicitly or implicitly, weighing the total expected costs against the total expected benefits of one or more actions in order to choose the best or most profitable option. The formal process is often referred to as either CBA (Cost-Benefit Analysis) or BCA (Benefit-Cost Analysis). Benefits and costs are often expressed in money terms, and are adjusted for the time value of money, so that all flows of benefits and flows of project costs over time (which tend to occur at different points in time) are expressed on a common basis in terms of their present value. Closely related, but slightly different, formal techniques include cost-effectiveness analysis, economic impact analysis, fiscal impact analysis and Social Return on Investment (SROI) analysis. The latter builds upon the logic of cost-benefit analysis, but differs in that it is explicitly designed to inform the practical decision-making of enterprise managers and investors focused on optimizing their social and environmental impacts.


Theory
Cost-benefit analysis is often used by governments to evaluate the desirability of a given intervention, and it is heavily used in government today. It is an analysis of the cost effectiveness of different alternatives in order to see whether the benefits outweigh the costs. The aim is to gauge the efficiency of the intervention relative to the status quo. The costs and benefits of the impacts of an intervention are evaluated in terms of the public's willingness to pay for them (benefits) or willingness to pay to avoid them (costs). Inputs are typically measured in terms of opportunity costs - the value in their best alternative use. The guiding principle is to list all parties affected by an intervention and place a monetary value on the effect it has on their welfare as it would be valued by them. The process involves weighing the monetary value of initial and ongoing expenses against the expected return. Constructing plausible measures of the costs and benefits of specific actions is often very difficult. In practice, analysts try to estimate costs and benefits either by using survey methods or by drawing inferences from market behavior. For example, a product manager may compare manufacturing and marketing expenses with projected sales for a proposed product and decide to produce it only if he expects the revenues to eventually recoup the costs.

Cost-benefit analysis attempts to put all relevant costs and benefits on a common temporal footing. A discount rate is chosen, which is then used to compute all relevant future costs and benefits in present-value terms. Most commonly, the discount rate used for present-value calculations is an interest rate taken from financial markets (R.H. Frank 2000). This can be very controversial; for example, a high discount rate implies a very low value on the welfare of future generations, which may have a huge impact on the desirability of interventions to help the environment. Empirical studies suggest that in reality, people's discount rates do decline over time. Because cost-benefit analysis aims to measure the public's true willingness to pay, this feature is typically built into studies.

During cost-benefit analysis, monetary values may also be assigned to less tangible effects such as the various risks that could contribute to partial or total project failure, such as loss of reputation, market penetration, or long-term enterprise strategy alignments. This is especially true when governments use the technique, for instance to decide whether to introduce business regulation, build a new road, or offer a new drug through the state healthcare system. In this case, a value must be put on human life or the environment, often causing great controversy. For example, the cost-benefit principle says that we should install a guardrail on a dangerous stretch of mountain road if the dollar cost of doing so is less than the implicit dollar value of the injuries, deaths, and property damage thus prevented (R.H. Frank 2000). Cost-benefit calculations typically involve using time value of money formulas. This is usually done by converting the future expected streams of costs and benefits into a present value amount.

Application and history
The practice of cost-benefit analysis differs between countries and between sectors (e.g., transport, health) within countries. Some of the main differences include the types of impacts that are included as costs and benefits within appraisals, the extent to which impacts are expressed in monetary terms, and differences in the discount rate between countries. Agencies across the world rely on a basic set of key cost-benefit indicators, including the following:

NPV (net present value)
PVB (present value of benefits)
PVC (present value of costs)
BCR (benefit-cost ratio = PVB / PVC)
Net benefit (= PVB - PVC)
NPV/k (where k is the level of funds available)
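The indicators above can all be computed from a pair of discounted streams. The sketch below is a minimal illustration with invented yearly benefit and cost figures and an assumed 5% discount rate; it is not tied to any particular agency's appraisal guidance.

```python
def present_value(stream, rate):
    """Discount a list of yearly amounts (year 0 first) back to present value."""
    return sum(amount / (1 + rate) ** year for year, amount in enumerate(stream))


benefits = [0, 40_000, 45_000, 45_000, 45_000]     # invented yearly benefits
costs    = [100_000, 5_000, 5_000, 5_000, 5_000]   # invented yearly costs
rate = 0.05

pvb = present_value(benefits, rate)
pvc = present_value(costs, rate)
print("PVB        :", round(pvb))
print("PVC        :", round(pvc))
print("NPV        :", round(pvb - pvc))        # net present value
print("Net benefit:", round(pvb - pvc))        # PVB - PVC
print("BCR        :", round(pvb / pvc, 2))     # benefit-cost ratio
```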

The concept of CBA dates back to an 1848 article by Dupuit and was formalized in subsequent works by Alfred Marshall. The practical application of CBA was initiated in the US by the Corps of Engineers, after the Federal Navigation Act of 1936 effectively required cost-benefit analysis for proposed federal waterway infrastructure.[1] The Flood Control Act of 1939 was instrumental in establishing CBA as federal policy. It specified the standard that "the benefits to whomever they accrue [be] in excess of the estimated costs."[2]

Subsequently, cost-benefit techniques were applied to the development of highway and motorway investments in the US and UK in the 1950s and 1960s. An early and often-quoted, more developed application of the technique was made to London Underground's Victoria Line. Over the last 40 years, cost-benefit techniques have gradually developed to the extent that substantial guidance now exists on how transport projects should be appraised in many countries around the world. In the UK, the New Approach to Appraisal (NATA) was introduced by the then Department for Transport, Environment and the Regions. This brought together cost-benefit results with those from detailed environmental impact assessments and presented them in a balanced way. NATA was first applied to national road schemes in the 1998 Roads Review but subsequently rolled out to all modes of transport. It is now a cornerstone of transport appraisal in the UK and is maintained and developed by the Department for Transport.[11] The EU's 'Developing Harmonised European Approaches for Transport Costing and Project Assessment' (HEATCO) project, part of its Sixth Framework Programme, has reviewed transport appraisal guidance across EU member states and found that significant differences exist between countries. HEATCO's aim is to develop guidelines to harmonise transport appraisal practice across the EU.[12][13][3] Transport Canada has also promoted the use of CBA for major transport investments since the issuance of its Guidebook in 1994.[4] More recent guidance has been provided by the United States Department of Transportation and several state transportation departments, with discussion of available software tools for application of CBA in transportation, including HERS, BCA.Net, StatBenCost, CalBC, and TREDIS. Available guides are provided by the Federal Highway Administration[5][6], Federal Aviation Administration[7], Minnesota Department of Transportation[8], California Department of Transportation (Caltrans)[9], and the Transportation Research Board Transportation Economics Committee.[10]

In the early 1960s, CBA was also extended to assessment of the relative benefits and costs of healthcare and education in works by Burton Weisbrod.[11][12] Later, the United States Department of Health and Human Services issued its CBA Guidebook.[13]

Net present value

In finance, the net present value (NPV) or net present worth (NPW)[1] of a time series of cash flows, both incoming and outgoing, is defined as the sum of the present values (PVs) of the individual cash flows. In the case when all future cash flows are incoming (such as the coupons and principal of a bond) and the only outflow of cash is the purchase price, the NPV is simply the PV of the future cash flows minus the purchase price (which is its own PV). NPV is a central tool in discounted cash flow (DCF) analysis and is a standard method for using the time value of money to appraise long-term projects. Used for capital budgeting, and widely throughout economics, finance, and accounting, it measures the excess or shortfall of cash flows, in present value terms, once financing charges are met. The NPV of a sequence of cash flows takes as input the cash flows and a discount rate or discount curve and outputs a price; the converse process in DCF analysis, taking as input a sequence of cash flows and a price and inferring as output a discount rate (the discount rate which would yield the given price as NPV), is called the yield and is more widely used in bond trading.
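A minimal sketch of this definition in code, assuming a constant discount rate and integer periods (the function name and example figures are illustrative, not taken from the original):

def npv(rate, cash_flows):
    # Net present value: each cash flow discounted back to t = 0, then summed.
    # cash_flows[0] is the flow at t = 0, typically the (negative) outlay.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Example: pay 1000 now, receive 500 at the end of each of the next three
# years, discounted at 10% per year -> NPV of roughly 243.4
print(round(npv(0.10, [-1000, 500, 500, 500]), 1))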


Return On Investment (ROI)

What does Return On Investment (ROI) mean? A performance measure used to evaluate the efficiency of an investment or to compare the efficiency of a number of different investments. To calculate ROI, the benefit (return) of an investment is divided by the cost of the investment; the result is expressed as a percentage or a ratio. The return on investment formula:
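ROI = (Gain from Investment - Cost of Investment) / Cost of Investment

(This is the standard simple form of the ratio described in the paragraph below.)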

In the above formula, "gains from investment" refers to the proceeds obtained from selling the investment of interest. Return on investment is a very popular metric because of its versatility and simplicity. That is, if an investment does not have a positive ROI, or if there are other opportunities with a higher ROI, then the investment should not be undertaken.


Investopedia explains Return On Investment (ROI): Keep in mind that the calculation for return on investment, and therefore the definition, can be modified to suit the situation; it all depends on what you include as returns and costs. The definition of the term in the broadest sense simply attempts to measure the profitability of an investment and, as such, there is no one "right" calculation. For example, a marketer may compare two different products by dividing the gross profit that each product has generated by its respective marketing expenses. A financial analyst, however, may compare the same two products using an entirely different ROI calculation, perhaps by dividing the net income of an investment by the total value of all resources that have been employed to make and sell the product. This flexibility has a downside, as ROI calculations can be easily manipulated to suit the user's purposes, and the result can be expressed in many different ways. When using this metric, make sure you understand what inputs are being used.

Return on Investment (ROI) analysis is one of several commonly used approaches for evaluating the financial consequences of business investments, decisions, or actions. ROI analysis compares the magnitude and timing of investment gains directly with the magnitude and timing of investment costs. A high ROI means that investment gains compare favorably to investment costs. In the last few decades, ROI has become a central financial metric for asset purchase decisions (computer systems, factory machines, or service vehicles, for example), approval and funding decisions for projects and programs of all kinds (such as marketing programs, recruiting programs, and training programs), and more traditional investment decisions (such as the management of stock portfolios or the use of venture capital).

The ROI Concept

Most forms of ROI analysis compare investment returns and costs by constructing a ratio, or percentage. In most ROI methods, an ROI ratio greater than 0.00 (or a percentage greater than 0%) means the investment returns more than its cost. When potential investments compete for funds, and when other factors between the choices are truly equal, the investment (or action, or business case scenario) with the higher ROI is considered the better choice, or the better business decision. One serious problem with using ROI as the sole basis for decision making is that ROI by itself says nothing about the likelihood that expected returns and costs will appear as predicted. ROI by itself, that is, says nothing about the risk of an investment. ROI simply shows how returns compare to costs if the action or investment brings the results hoped for. (The same is also true of other financial metrics, such as net present value or internal rate of return.) For that reason, a good business case or a good investment analysis will also measure the probabilities of different ROI outcomes, and wise decision makers will consider both the ROI magnitude and the risks that go with it. Decision makers will also expect practical suggestions from the ROI analyst on ways to improve ROI by reducing costs, increasing gains, or accelerating gains.

Simple ROI for Cash Flow and Investment Analysis

Return on investment is frequently derived as the return (incremental gain) from an action divided by the cost of that action. That is simple ROI, as used in business case analysis and other forms of cash flow analysis. For example, what is the ROI for a new marketing program that is expected to cost $500,000 over the next five years and deliver an additional $700,000 in increased profits during the same time?
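Applying the simple ROI ratio to those figures: ($700,000 - $500,000) / $500,000 = 0.40, i.e. an ROI of 40% over the five-year period.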

Simple ROI is the most frequently used form of ROI and the most easily understood. With simple ROI, incremental gains from the investment are divided by investment costs. Simple ROI works well when both the gains and the costs of an investment are easily known and where they clearly result from the action. In complex business settings, however, it is not always easy to match specific returns (such as increased profits) with the specific costs that bring them (such as the costs of a marketing program), and this makes ROI less trustworthy as a guide for decision support. Simple ROI also becomes less trustworthy as a useful metric when the cost figures include allocated or indirect costs, which are probably not caused directly by the action or the investment.

Competing Investments: ROI From Cash Flow Streams

ROI and other financial metrics that take an investment view of an action compare investment returns to investment costs. However, each of the major investment metrics (ROI, internal rate of return IRR, net present value NPV, and payback period) approaches the comparison differently, and each carries a different message. This section illustrates ROI calculation from a cash flow stream for two competing investments, and the next section (ROI vs. NPV, IRR, and Payback Period) compares the differing and sometimes conflicting messages from different financial metrics. Consider two five-year investments competing for funding, Investment A and Investment B. Which is the better business decision? Analysts will look first at the net cash flow streams from each investment.
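A minimal sketch, using hypothetical net cash flow streams rather than the original data, of one common way to derive simple ROI from a stream (total inflows less total outflows, divided by total outflows):

def simple_roi(net_cash_flows):
    # Treat negative entries as investment costs and positive entries as gains.
    gains = sum(cf for cf in net_cash_flows if cf > 0)
    costs = -sum(cf for cf in net_cash_flows if cf < 0)
    return (gains - costs) / costs

# Hypothetical five-year streams for two competing investments
investment_a = [-100, 20, 40, 60, 60, 40]   # slower, larger total return
investment_b = [-100, 60, 50, 40, 20, 10]   # faster, smaller total return
print(simple_roi(investment_a))  # 1.2 -> 120% simple ROI
print(simple_roi(investment_b))  # 0.8 ->  80% simple ROI

Simple ROI ranks Investment A higher, but it ignores the timing of the flows; a discounted metric such as NPV, or the payback period, could rank the same two streams differently.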

Payback period


Payback period in capital budgeting refers to the period of time required for the return on an investment to "repay" the sum of the original investment. For example, a $1000 investment which returned $500 per year would have a two-year payback period. The time value of money is not taken into account. Payback period intuitively measures how long something takes to "pay for itself." All else being equal, shorter payback periods are preferable to longer payback periods. Payback period is widely used because of its ease of use despite recognized limitations, described below.

The term is also widely used in other types of investment areas, often with respect to energy efficiency technologies, maintenance, upgrades, or other changes. For example, a compact fluorescent light bulb may be described as having a payback period of a certain number of years or operating hours, assuming certain costs. Here, the return to the investment consists of reduced operating costs. Although primarily a financial term, the concept of a payback period is occasionally extended to other uses, such as energy payback period (the period of time over which the energy savings of a project equal the amount of energy expended since project inception); these other terms may not be standardized or widely used.

Payback period as a tool of analysis is often used because it is easy to apply and easy to understand for most individuals, regardless of academic training or field of endeavour. When used carefully or to compare similar investments, it can be quite useful. As a stand-alone tool to compare an investment to "doing nothing," payback period has no explicit criteria for decision-making (except, perhaps, that the payback period should be less than infinity). The payback period is considered a method of analysis with serious limitations and qualifications for its use, because it does not account for the time value of money, risk, financing, or other important considerations such as the opportunity cost. Whilst the time value of money can be rectified by applying a weighted average cost of capital discount, it is generally agreed that this tool for investment decisions should not be used in isolation. Alternative measures of "return" preferred by economists are net present value and internal rate of return. An implicit assumption in the use of payback period is that returns to the investment continue after the payback period. Payback period does not specify any required comparison to other investments or even to not making an investment.

There is no closed-form formula for the payback period, except in the simple and unrealistic case of an initial cash outlay followed by constant or constantly growing cash inflows. To calculate the payback period in general an algorithm is needed, and it is easily applied in spreadsheets. The typical algorithm reduces to the calculation of cumulative cash flow and the moment at which it turns from negative to positive. Additional complexity arises when the cash flow changes sign several times, i.e., it contains outflows in the midst or at the end of the project lifetime. The modified payback period algorithm may be applied then: first, the sum of all of the cash outflows is calculated; then the cumulative positive cash flows are determined for each period; the modified payback period is calculated as the moment at which the cumulative positive cash flow exceeds the total cash outflow.
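A minimal sketch of the cumulative cash flow algorithm described above (the function name is illustrative; interpolating within the crossover period is one common refinement):

def payback_period(cash_flows):
    # cash_flows[0] is the initial (negative) outlay; each later entry is the
    # net cash flow for one period. Returns the point at which the cumulative
    # cash flow first turns from negative to non-negative.
    cumulative = 0.0
    for period, cf in enumerate(cash_flows):
        previous = cumulative
        cumulative += cf
        if cumulative >= 0 and cf > 0:
            # interpolate within the period that recovers the remainder
            return (period - 1) + (-previous / cf)
    return None  # the investment never pays back

# Example from the text: a $1000 investment returning $500 per year
print(payback_period([-1000, 500, 500, 500]))  # 2.0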


COCOMO

The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. The model uses a basic regression formula, with parameters that are derived from historical project data and current project characteristics. COCOMO was first published in Barry W. Boehm's 1981 book Software Engineering Economics[1] as a model for estimating effort, cost, and schedule for software projects. It drew on a study of 63 projects at TRW Aerospace, where Barry Boehm was Director of Software Research and Technology in 1981. The study examined projects ranging in size from 2,000 to 100,000 lines of code, and programming languages ranging from assembly to PL/I. These projects were based on the waterfall model of software development, which was the prevalent software development process in 1981. References to this model typically call it COCOMO 81. In 1997 COCOMO II was developed and finally published in 2000 in the book Software Cost Estimation with COCOMO II[2]. COCOMO II is the successor of COCOMO 81 and is better suited for estimating modern software development projects. It provides more support for modern software development processes and an updated project database. The need for the new model came as software development technology moved from mainframe and overnight batch processing to desktop development, code reusability and the use of off-the-shelf software components. This article refers to COCOMO 81. COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is limited due to its lack of factors to account for differences in project attributes (cost drivers). Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases.


Basic COCOMO

Basic COCOMO computes software development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of lines of code (KLOC).

COCOMO applies to three classes of software projects:

Organic projects - "small" teams with "good" experience working with "less than rigid" requirements
Semi-detached projects - "medium" teams with mixed experience working with a mix of rigid and less than rigid requirements
Embedded projects - developed within a set of "tight" constraints (hardware, software, operational, ...)

The basic COCOMO equations take the form:

Effort Applied = a_b * (KLOC)^(b_b)  [man-months]
Development Time = c_b * (Effort Applied)^(d_b)  [months]
People Required = Effort Applied / Development Time  [count]

The coefficients a_b, b_b, c_b and d_b are given in the following table:

Software project    a_b    b_b     c_b    d_b
Organic             2.4    1.05    2.5    0.38
Semi-detached       3.0    1.12    2.5    0.35
Embedded            3.6    1.20    2.5    0.32

Basic COCOMO is good for quick estimates of software costs. However, it does not account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and so on.
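A minimal sketch of these equations in code (names are illustrative):

BASIC_COCOMO = {
    # project class: (a_b, b_b, c_b, d_b) from the table above
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class):
    a, b, c, d = BASIC_COCOMO[project_class]
    effort = a * kloc ** b          # effort applied, in man-months
    dev_time = c * effort ** d      # development time, in months
    people = effort / dev_time      # average staffing level
    return effort, dev_time, people

# Example: a 32 KLOC organic project
effort, dev_time, people = basic_cocomo(32, "organic")
print(round(effort, 1), round(dev_time, 1), round(people, 1))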

Intermediate COCOMO

Intermediate COCOMO computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel and project attributes. This extension considers four categories of cost drivers, each with a number of subsidiary attributes:

Product attributes
  - Required software reliability
  - Size of application database
  - Complexity of the product
Hardware attributes
  - Run-time performance constraints
  - Memory constraints
  - Volatility of the virtual machine environment
  - Required turnabout time
Personnel attributes
  - Analyst capability
  - Software engineering capability
  - Applications experience
  - Virtual machine experience
  - Programming language experience
Project attributes
  - Use of software tools
  - Application of software engineering methods
  - Required development schedule

Each of the 15 attributes receives a rating on a six-point scale that ranges from "very low" to "extra high" (in importance or value). An effort multiplier from the table below applies to the rating. The product of all effort multipliers results in an effort adjustment factor (EAF). Typical values for EAF range from 0.9 to 1.4.

Cost Drivers                                    Very Low   Low    Nominal   High   Very High   Extra High
Product attributes
  Required software reliability                 0.75       0.88   1.00      1.15   1.40
  Size of application database                             0.94   1.00      1.08   1.16
  Complexity of the product                     0.70       0.85   1.00      1.15   1.30        1.65
Hardware attributes
  Run-time performance constraints                                1.00      1.11   1.30        1.66
  Memory constraints                                              1.00      1.06   1.21        1.56
  Volatility of the virtual machine environment            0.87   1.00      1.15   1.30
  Required turnabout time                                  0.87   1.00      1.07   1.15
Personnel attributes
  Analyst capability                            1.46       1.19   1.00      0.86   0.71
  Applications experience                       1.29       1.13   1.00      0.91   0.82
  Software engineer capability                  1.42       1.17   1.00      0.86   0.70
  Virtual machine experience                    1.21       1.10   1.00      0.90
  Programming language experience               1.14       1.07   1.00      0.95
Project attributes
  Application of software engineering methods   1.24       1.10   1.00      0.91   0.82
  Use of software tools                         1.24       1.10   1.00      0.91   0.83
  Required development schedule                 1.23       1.08   1.00      1.04   1.10

Blank cells indicate that no multiplier is defined for that rating.

The Intermediate COCOMO formula now takes the form:

E = a_i * (KLoC)^(b_i) * EAF

where E is the effort applied in person-months, KLoC is the estimated number of thousands of delivered lines of code for the project, and EAF is the factor calculated above. The coefficient a_i and the exponent b_i are given in the next table:

Software project    a_i    b_i
Organic             3.2    1.05
Semi-detached       3.0    1.12
Embedded            2.8    1.20

The Development time D calculation uses E in the same way as in the Basic COCOMO.
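A sketch of the intermediate effort calculation, assuming the EAF is supplied as the product of the rated effort multipliers from the table above (the driver ratings shown in the example are illustrative):

INTERMEDIATE_COCOMO = {
    # project class: (a_i, b_i)
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def intermediate_effort(kloc, project_class, effort_multipliers):
    a, b = INTERMEDIATE_COCOMO[project_class]
    eaf = 1.0
    for multiplier in effort_multipliers:   # one value per rated cost driver
        eaf *= multiplier
    return a * kloc ** b * eaf               # effort in person-months

# Example: a 32 KLOC semi-detached project rated "high" for required software
# reliability (1.15) and "high" for analyst capability (0.86); all other
# drivers nominal (1.00) and therefore omitted.
print(round(intermediate_effort(32, "semi-detached", [1.15, 0.86]), 1))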

Detailed COCOMO

Detailed COCOMO incorporates all characteristics of the intermediate version, together with an assessment of the cost drivers' impact on each step (analysis, design, etc.) of the software engineering process. The detailed model uses different effort multipliers for each cost driver attribute; these phase-sensitive effort multipliers are used to determine the amount of effort required to complete each phase.


Software Quality Factors

Until now we have been talking about software quality in general and what it means to produce a quality product; we also looked briefly at CMM. We now need to know the various quality factors on which the quality of the software produced is evaluated. These factors are given below. The various factors that influence the software are termed software quality factors. They can be broadly divided into two categories, classified on the basis of measurability. The first category contains factors that can be measured directly, such as the number of logical errors, and the second category groups those factors that can be measured only indirectly, for example maintainability; each of the factors must nevertheless be measured for content and quality control. The main quality factors are listed below.

Correctness - extent to which a program satisfies its specification and fulfills the client's objective.
Reliability - extent to which a program can be expected to perform its function with the required precision.
Efficiency - amount of computing resources and code required by a program to perform its function.
Integrity - extent to which access to software and data is denied to unauthorized users.
Usability - labor required to understand, operate, prepare input for, and interpret output of a program.
Maintainability - effort required to locate and fix an error in a program.
Flexibility - effort needed to modify an operational program.
Testability - effort required to test a program for its functionality.
Portability - effort required to move a program from one platform or hardware environment to another.
Reusability - extent to which the program or its parts can be used as building blocks or as prototypes for other programs.
Interoperability - effort required to couple one system to another.

Considering the factors above, it becomes obvious that measuring all of them directly as discrete values is practically impossible. Therefore, another method was evolved to measure quality. A set of metrics is defined and used to develop an expression for each of the factors, of the form:

Fq = c1*m1 + c2*m2 + ... + cn*mn

where Fq is the software quality factor, the cn are regression coefficients, and the mn are the metrics that influence the quality factor. The metrics used in this arrangement are listed below.

Auditability - ease with which conformance to standards can be verified.
Accuracy - precision of computations and control.
Communication commonality - degree to which standard interfaces, protocols and bandwidth are used.
Completeness - degree to which full implementation of the required functionality has been achieved.
Conciseness - the program's compactness in terms of lines of code.
Consistency - use of uniform design and documentation techniques throughout the software development.
Data commonality - use of standard data structures and types throughout the program.
Error tolerance - damage done when a program encounters an error.
Execution efficiency - run-time performance of a program.
Expandability - degree to which the architectural, data and procedural design can be extended.
Hardware independence - degree to which the software is decoupled from the hardware on which it operates.
Instrumentation - degree to which the program monitors its own operation and identifies errors that do occur.
Modularity - functional independence of program components.
Operability - ease of a program's operation.
Security - control and protection of programs and databases from unauthorized users.
Self-documentation - degree to which the source code provides meaningful documentation.
Simplicity - degree to which a program is understandable without much difficulty.
Software system independence - degree to which the program is independent of nonstandard programming language features, operating system characteristics and other environmental constraints.
Traceability - ability to trace a design representation or actual program component back to initial objectives.
Training - degree to which the software is user-friendly to new users.
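As a purely hypothetical illustration of the Fq expression above: if maintainability were modelled from two of these metrics, say auditability and simplicity, with regression coefficients of 0.4 and 0.6 derived from historical data, the factor would be computed as Fq = 0.4*m(auditability) + 0.6*m(simplicity). The coefficients and the choice of metrics here are invented for illustration only.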

There are various checklists for software quality. One of them, given by Hewlett-Packard, goes by the acronym FURPS: Functionality, Usability, Reliability, Performance and Supportability. Functionality is measured via the evaluation of the feature set and the program capabilities, the generality of the functions that are delivered, and the overall security of the system.

Usability is assessed by considering human factors, overall aesthetics, consistency, and documentation. Reliability is evaluated by measuring the frequency and severity of failure, the accuracy of output results, the mean time between failures (MTBF), the ability to recover from failure, and the predictability of the program. Performance is measured in terms of processing speed, response time, resource consumption, throughput, and efficiency. Supportability combines the ability to extend the program, adaptability, and serviceability (in other terms, maintainability), as well as testability, compatibility, configurability, and the ease with which a system can be installed.

Software Quality Attributes

September 7, 2006

Chapters 4 and 5 of Software Architecture in Practice are about "software quality attributes". This is what they call non-functional requirements like performance, security, reliability, modifiability, testability and usability. These are, in fact, the main ones they talk about (though the book says "availability" when it should say "reliability"). The book claims that one of the main purposes of architecture is to ensure these attributes. I go along with that, because most of these are global properties of systems. Chapter 4 talks about how to specify these attributes, and chapter 5 talks about how to achieve them. It does this by describing, for each attribute, tactics for achieving the attribute. Of course, most of these are big topics. UIUC has courses on most of them. Moreover, these topics can be specialized by problem domain. Performance means a different thing for programming scientific applications on supercomputers than it does for distributed business systems, or real-time control systems. So, the few pages that SAIP gives to each quality attribute are not nearly enough. Nevertheless, what the book says is important. The book says that patterns bundle tactics. In other words, patterns are concrete examples of how to use a few tactics together. Certainly some patterns are like this. But I've seen people write patterns that were the same thing as one of the tactics. I think that tactics are patterns, too. Even though the book doesn't explain how to use any tactic, the lists that it gives should be useful for people who want to document patterns, because they give an outline of possible patterns.

Capability Maturity Model versus ISO 9000: An Assessment
John R. Snyder, March 2003

Abstract

This paper serves as a general guideline for those who wish to implement a process improvement model but are unsure of which governing framework to select. How the process model has become the dominant framework for software engineering activities is investigated, as is the distinction of a process model from a lifecycle model. The two dominant process improvement models in use today, the Capability Maturity Model and the ISO 9000 standard, will be illustrated, contrasted, and analyzed for applicability to software development environments. The focus is on the ISO guidelines in the areas most relevant to software engineering, that is, ISO 9000-3. The recent ISO 9000:2000 updates and revisions are discussed at a high level. Previous publications on this topic are analyzed for relevance to today's environment.

Some general conclusions are developed as to the applicability of each of the process model standards for different types of software development organizations and business environments.

Introduction

The two most common process models in use today for software engineering are the Capability Maturity Model (CMM) and the International Organization for Standardization (ISO) ISO 9000 standard. To borrow the classic "To be or not to be" phrase of the thespian Shakespeare: "To CMM, or ISO--that is the question". Indeed, choosing a process improvement framework is a daunting prospect for the uninitiated. The "alphabet soup" of acronyms and labyrinth of clauses can be confusing to interpret. Developing analytic comparisons between the two models can be problematic because of interpretation issues. Mark C. Paulk of the Software Engineering Institute, in his 1994 paper "A Comparison of ISO 9001 and the Capability Maturity Model for Software" (Paulk, 1994), developed an analysis of the relationships between ISO 9001 and the CMM model. However, since the publication of that document, ISO 9001 has undergone a major revision: the 1994 standard, which consisted of a twenty-clause structure, now consists of only five clauses. In this document, the most recent version of the ISO 9000 standard will be compared and contrasted with the current CMM standard. This paper will attempt to analyze and assess how these two process models compare and contrast, and how applicable each respective model might be in your organization.

Process Model Defined

The Historical Perspective

Why is the topic of "process" synonymous with the creation of high-quality software by professionals? According to Dampier and Glickstein (2000, p. 4), "The quality of a software system is highly influenced by the quality of the process used to develop and maintain it". Software and the hardware systems that run the software have become increasingly complex. When computers began to get a foothold in academia and business in the 1970s, only the rare mission-critical software project was managed with any type of methodology framework. The early computers and their software were certainly not "simple", but perhaps more "straightforward" to manipulate. The smaller collection of configurations, permutations, and people who understood them made the creation and maintenance of software easier to manage. In addition, the rudimentary nature of the tools in use through the early 1980s dictated that the pace of software construction was extremely slow and methodical--and therefore less prone to error. Are you old enough to remember punch cards?

In the last ten to twenty years, the advent of high-level programming languages and the personal computer brought the ability to create software to a much larger group. The responsibilities that accompanied this new ability were not always considered. Amateurish programming and get-rich-quick software schemes unleashed a Pandora's box of software issues on a naive public. Horror stories of malfunctioning software are rampant today. Software projects are described as a "Death March" (Yourdon, 1997) more often than not. One could easily attribute many of the difficulties faced by software developers to the immaturity of the science--we do not have the luxury of hundreds of years of empirical experience. However, from a historical perspective, the emerging field of Software Engineering may be doing better than the constant negativity in the news media would lead us to believe. Schulz, speaking to the failure rate of Information Technology projects, states, "IT is performing just as well as other disciplines". He goes on to make the assertion, "Perhaps the problem is that IT is just newer, more active and being studied and reported more frequently" (2000, para 2).

Complexity

Certainly, the complexity of software is one major factor that contributes to software project failures and to products that are laden with defects. "Software entities are more complex for their size than perhaps any other human construct" (Dorfman & Thayer, 1997, p. 14). Are quality control mechanisms that have been successful in other genres of manufacturing applicable to the creation of software? Many in the industry believe that the application of "engineering discipline to the development and maintenance of software" (Paulk, 2002) would bring Total Quali