
BCA123 CU-BCA-SEM-II-Software Engineering-26.10.2020-converted

Published by Teamlease Edtech Ltd (Amita Chitroda), 2021-04-20 17:47:00


• Plans should be iterative and allow adjustments as time passes and more details become known.

General Project Estimation Approach

The most widely used project estimation approach is the decomposition technique. Decomposition techniques take a divide-and-conquer approach: size, effort, and cost estimation are performed stepwise by breaking a project down into major functions or related software engineering activities.

Step 1 − Understand the scope of the software to be built.

Step 2 − Generate an estimate of the software size.
• Start with the statement of scope.
• Decompose the software into functions that can each be estimated individually.
• Calculate the size of each function.
• Derive effort and cost estimates by applying the size values to your baseline productivity metrics.
• Combine function estimates to produce an overall estimate for the entire project.

Step 3 − Generate an estimate of the effort and cost. You can arrive at the effort and cost estimates by breaking down a project into related software engineering activities.
• Identify the sequence of activities that need to be performed for the project to be completed.
• Divide activities into tasks that can be measured.
• Estimate the effort (in person-hours/days) required to complete each task.
• Combine the effort estimates of the tasks of an activity to produce an estimate for the activity.
• Obtain cost units (i.e., cost/unit effort) for each activity from the database.
• Compute the total effort and cost for each activity.

49 CU IDOL SELF LEARNING MATERIAL (SLM)
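As a rough sketch of Steps 2 and 3, the following code applies per-function size estimates to a baseline productivity metric and a labor cost rate. All function names, sizes, and rates below are hypothetical illustrations, not data from this text:

```python
# Decomposition-based estimation sketch: function sizes in LOC are applied
# to a baseline productivity metric (LOC per person-month) and a labor
# rate. All numbers are hypothetical.

functions = {              # function -> estimated size in LOC
    "user interface": 2300,
    "database layer": 5300,
    "report module": 1800,
}

PRODUCTIVITY = 620         # hypothetical baseline: LOC per person-month
COST_PER_PM = 8000         # hypothetical labor cost per person-month

total_loc = sum(functions.values())
effort_pm = total_loc / PRODUCTIVITY       # effort in person-months
cost = effort_pm * COST_PER_PM

print(f"size   : {total_loc} LOC")
print(f"effort : {effort_pm:.1f} person-months")
print(f"cost   : {cost:,.0f}")
```

In practice the productivity baseline would come from your organization's historical data rather than a single assumed constant.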

• Combine effort and cost estimates for each activity to produce an overall effort and cost estimate for the entire project.

Step 4 − Reconcile estimates: Compare the resulting values from Step 3 to those obtained from Step 2. If both sets of estimates agree, then your numbers are highly reliable. Otherwise, if widely divergent estimates occur, conduct further investigation into whether:
• The scope of the project is not adequately understood or has been misinterpreted.
• The function and/or activity breakdown is not accurate.
• The historical data used for the estimation techniques is inappropriate for the application, obsolete, or has been misapplied.

Step 5 − Determine the cause of divergence and then reconcile the estimates.

Estimation Accuracy

Accuracy is an indication of how close something is to reality. Whenever you generate an estimate, everyone wants to know how close the numbers are to reality. You will want every estimate to be as accurate as possible, given the data you have at the time you generate it, and of course you do not want to present an estimate in a way that inspires a false sense of confidence in the numbers. Important factors that affect the accuracy of estimates are:
• The accuracy of all the estimate's input data.
• The accuracy of any estimate calculation.
• How closely the historical data or industry data used to calibrate the model matches the project you are estimating.
• The predictability of your organization's software development process.
• The stability of both the product requirements and the environment that supports the software engineering effort.
• Whether or not the actual project was carefully planned, monitored, and controlled, with no major surprises that caused unexpected delays.

Following are some guidelines for achieving reliable estimates:
• Base estimates on similar projects that have already been completed.
• Use relatively simple decomposition techniques to generate project cost and effort estimates.
• Use one or more empirical estimation models for software cost and effort estimation.
• Refer to the section on Estimation Guidelines in this chapter.
• To ensure accuracy, you are always advised to estimate using at least two techniques and compare the results.

Estimation Issues

• Often, project managers resort to estimating schedules while skipping size estimation. This may be because of the timelines set by top management or the marketing team. Whatever the reason, if this is done, it becomes difficult at a later stage to adjust the schedule to accommodate scope changes.
• While estimating, certain assumptions may be made. It is important to note all of these assumptions in the estimation sheet, as some estimators still fail to document them. Even good estimates have inherent assumptions, risks, and uncertainty, and yet they are often treated as though they are accurate. The best way of expressing an estimate is as a range of possible outcomes, for example, saying the project will take 5 to 7 months instead of stating that it will be complete on a particular date or in a fixed number of months. Beware of committing to a range that is too narrow, as that is equivalent to committing to a definite date. You could also include uncertainty as an accompanying probability value, for example, a 90% probability that the project will complete on or before a definite date.
• Organizations do not collect accurate project data. Since the accuracy of the estimates depends on the historical

data, it would be an issue.
• For any project, there is a shortest possible schedule that will allow you to include the required functionality and produce quality output. If there is a schedule constraint from management and/or the client, you can negotiate on the scope and functionality to be delivered.
• Agree with the client on how scope creep will be handled, to avoid schedule overruns.
• Failure to accommodate contingency (e.g., meetings, organizational events) in the final estimate causes issues.
• Resource utilization should be taken as less than 80%, because resources are productive for only about 80% of their time. If you assign resources at more than 80% utilization, slippage is bound to occur.

Estimation Guidelines

Keep the following guidelines in mind while estimating a project:
• During estimation, ask other people about their experiences, and put your own experience to work as well.
• Assume resources will be productive for only 80 percent of their time; hence, during estimation, take resource utilization as less than 80%.
• Resources working on multiple projects take longer to complete tasks because of the time lost switching between them.
• Include management time in any estimate.
• Always build in contingency for problem solving, meetings, and other unexpected events.
• Allow enough time to do a proper project estimate. Rushed estimates are inaccurate, high-risk estimates. For large development projects, the estimation step should really

be regarded as a mini project.
• Where possible, use documented data from your organization's similar past projects; it will result in the most accurate estimate. If your organization has not kept historical data, now is a good time to start collecting it.
• Use developer-based estimates, as estimates prepared by people other than those who will do the work will be less accurate.
• Use several different people to estimate, and use several different estimation techniques.
• Reconcile the estimates. Observe the convergence or spread among the estimates. Convergence means that you have a good estimate. The Wideband Delphi technique can be used to gather and discuss estimates with a group of people, the intention being to produce an accurate, unbiased estimate.
• Re-estimate the project several times throughout its life cycle.

SUMMARY

• A project includes a number of activities that must be completed in some particular order, or sequence.
• Projects have a specified completion date.
• The customer, or the recipient of the project's deliverables, expects a certain level of functionality and quality from the project.
• Software projects are notoriously hard to define.
• Software is said to be an intangible product. Software development is a relatively new stream in world business, and there is very little experience in building software products. Most software products are tailor-made to fit the client's requirements. Most importantly, the underlying technology changes and advances so frequently and rapidly that experience with one product may not apply to another. All

such business and environmental constraints bring risk into software development; hence it is essential to manage software projects efficiently.
• Estimation is an integral part of the software development process and should not be taken lightly. A well-planned and well-estimated project is likely to be completed on time. Incomplete and inaccurate documentation may pose serious hurdles to the success of a software project during development and implementation.
• Software cost estimation is an important part of the software development process. Metrics are important tools to measure software products and processes. Metrics should be selected carefully so that they provide a measure for the intended process/product. Models are used to represent the relationship between effort and a primary cost factor such as software product size. Cost drivers are used to adjust the preliminary estimate provided by the primary cost factor. Models have been developed to predict software cost based on available empirical data, but many suffer from some common problems: the structure of most models is based on empirical results rather than theory, and models are often complex and rely heavily on size estimation. Despite these problems, models are still important to the software development process. A model can be used most effectively to supplement and corroborate other methods of estimation.
• Software process and project metrics are quantitative measures that enable software engineers to gain insight into the efficiency of the software process and the projects conducted using the process framework. In software project management, we are primarily concerned with productivity and quality metrics. There are four reasons for measuring software processes, products, and resources: to characterize, to evaluate, to predict, and to improve.
• Factors assessing software quality come from three distinct points of view: product operation, product revision, and product modification.
• Software quality factors requiring measures include:
• Correctness (defects per KLOC).

• Maintainability (mean time to change).
• Integrity (threat and security).
• Usability (easy to learn, easy to use, productivity increase, user attitude).
• Software project management is perhaps the most important factor in the outcome of a project. Without proper project management, a project will fail. Many organizations have evolved effective project management processes. At the top level, the project management process consists of three phases: planning, execution, and closure. When creating a project schedule, managers will find both Program Evaluation and Review Technique (PERT) and Gantt charts to be essential tools for successfully completing the project at hand. Both types of charts provide tools for managers to analyze projects through visualization, helping divide tasks into manageable parts.
• Four organizational paradigms for software development teams:
• Closed paradigm: a traditional hierarchy of authority; works well when producing software similar to past efforts, but members are less likely to be innovative.
• Random paradigm: depends on the individual initiative of team members; works well for projects requiring innovation or technological breakthroughs.
• Open paradigm: a hybrid of the closed and random paradigms; works well for solving complex problems requiring collaboration, communication, and consensus among members.
• Synchronous paradigm: relies on the natural compartmentalization of a problem; team members work on pieces of the problem with little active communication among themselves.

KEY WORDS/ABBREVIATIONS

• Blueprint: an exact or detailed plan or outline. Contrast with graph.
• Calibration: ensuring continuous adequate performance of sensing, measurement, and actuating equipment with regard to specified accuracy and precision requirements. See: accuracy, bias, precision.

• Code audit: an independent review of source code by a person, team, or tool to verify compliance with software design documentation and programming standards. Correctness and efficiency may also be evaluated. Contrast with code inspection, code review, code walkthrough. See: static analysis.
• Data corruption: a violation of data integrity. Syn: data contamination.
• Functional analysis: verifies that each safety-critical software requirement is covered and that an appropriate criticality level is assigned to each software element.

LEARNING ACTIVITY

1. Write down the responsibilities that a project manager shoulders.
2. Describe how the software process is carried out.

UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)

A. Descriptive Type Questions
1. Estimation finds an approximation, a value that can be used for some purpose even if the input data is incomplete, uncertain, or unstable. Identify its steps.
2. Which technique is widely used as a decomposition technique? Illustrate its procedure.
3. Accuracy is an indication of how close something is to reality. State the important factors that affect the accuracy of estimates.
4. Justify the use of metrics to evaluate the state of the product, trace risks, and uncover prospective problem areas.
5. State and explain the factors that decide the success of a project.

B. Multiple Choice Questions

1. Which of the following is not a project management goal?
(a) Keeping overall costs within budget
(b) Delivering the software to the customer at the agreed time
(c) Maintaining a happy and well-functioning development team
(d) Avoiding customer complaints

2. Which of the following is not a project management goal?
(a) Keeping overall costs within budget
(b) Delivering the software to the customer at the agreed time
(c) Maintaining a happy and well-functioning development team
(d) Avoiding customer complaints

3. The process each manager follows during the life of a project is known as
(a) Project Management
(b) Manager life cycle
(c) Project Management Life Cycle
(d) All of the mentioned

4. A 66.6% risk is considered
(a) very low
(b) low
(c) moderate
(d) high

5. Which of the following is/are the main parameters that you should use when computing the costs of a software development project?
(a) Travel and training costs
(b) Hardware and software costs
(c) Effort costs (the costs of paying software engineers and managers)
(d) All of the mentioned

Answers
1. (d), 2. (c), 3. (c), 4. (d), 5. (d)

REFERENCES

• Pressman, R. S. (2009). Software Engineering: A Practitioner's Approach. New Delhi: MGH Publications.
• Mall, R. (2003). Fundamentals of Software Engineering. New Delhi: PHI.
• Jalote, P. (2019). An Integrated Approach to Software Engineering. New Delhi: Narosa Publications.
• Sommerville, I. (2013). Software Engineering. New Delhi: Pearson Education.
• Sommerville, I. (2001). Software Engineering (Sixth Edition). Pearson Education.
• Jones, C. Software Engineering Best Practices: Lessons from Successful Projects in Top Companies. McGraw-Hill.
• Moulla Donatien Koulla. COCOMO model for software based on Open Source: Application to the adaptation of TRIADE to the university system.
• http://www.rspa.com
• http://www.ieee.org
• http://www.ncst.ernet.in
• http://en.wikipedia.org/wiki/Software_project_management
• http://www.comp.glam.ac.uk/staff/dwfarthi/projman.htm

UNIT 4 SOFTWARE PROJECT PLANNING

Structure
Learning Objectives
Introduction
Decomposition Technique
S/W Sizing
Problem-Based Estimation
Process-Based Estimation
Differences between Process-Based and Problem-Based Estimation
Summary
Key Words/Abbreviations
Learning Activity
Unit End Questions (MCQ and Descriptive)
References

LEARNING OBJECTIVES

After studying this unit, you will be able to:
⚫ Explain the objectives of software project planning.
⚫ Discuss decomposition techniques.
⚫ Outline the differences between process-based and problem-based estimation.

INTRODUCTION

Software is said to be an intangible product. Software development is a relatively new

stream in world business, and there is very little experience in building software products. Most software products are tailor-made to fit the client's requirements. Most importantly, the underlying technology changes and advances so frequently and rapidly that experience with one product may not apply to another. All such business and environmental constraints bring risk into software development; hence it is essential to manage software projects efficiently.

Fig 4.1 Planning

The figure above shows the triple constraints for software projects. It is an essential part of a software organization to deliver a quality product, keeping the cost within the client's budget constraint, and to deliver the project on schedule. There are several factors, both internal and external, which may impact this triple-constraint triangle, and any one of the three factors can severely impact the other two. Therefore, software project management is essential to incorporate user requirements along with budget and time constraints.

A software project manager is a person who undertakes the responsibility of executing the software project. The software project manager is thoroughly aware of all the phases of the SDLC that the software will go through. The project manager may never be directly involved in producing the end product, but he controls and manages the activities involved in production. A project manager closely monitors the development process, prepares and executes various plans, arranges necessary and adequate resources, and maintains communication among all team members in order to address issues of cost, budget, resources, time, quality, and customer satisfaction.

DECOMPOSITION TECHNIQUES

S/w sizing, problem-based estimation, process-based estimation.

S/W Sizing

Estimation of the size of software is an essential part of software project management. It helps the project manager to further predict the effort and time that will be needed to build the project. Various measures are used in project size estimation. Some of these are:
• Lines of Code
• Number of entities in the ER diagram
• Total number of processes in the detailed data flow diagram
• Function points

1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source code in a project. The units of LOC are:
• KLOC: thousand lines of code
• NLOC: non-comment lines of code
• KDSI: thousands of delivered source instructions

The size is estimated by comparing it with existing systems of the same kind. Experts use it to predict the required size of the various components of the software and then add them to get the total size.

Advantages:
• Universally accepted and used in many models, such as COCOMO.
• Estimation is closer to the developer's perspective.
• Simple to use.

Disadvantages:

• Different programming languages produce different numbers of lines for the same functionality.
• No proper industry standard exists for this technique.
• It is difficult to estimate size using this technique in the early stages of a project.

Software project estimation is a form of problem solving, and in most cases the problem to be solved (i.e., developing a cost and effort estimate for a software project) is too complex to be considered in one piece. For this reason, we decompose the problem, re-characterizing it as a set of smaller (and hopefully, more manageable) problems. Before an estimate can be made, the project planner must understand the scope of the software to be built and generate an estimate of its size. The accuracy of a software project estimate is predicated on a number of things:
(1) The degree to which the planner has properly estimated the size of the product to be built;
(2) The ability to translate the size estimate into human effort, calendar time, and dollars (a function of the availability of reliable software metrics from past projects);
(3) The degree to which the project plan reflects the abilities of the software team; and
(4) The stability of product requirements and the environment that supports the software engineering effort.

Because a project estimate is only as good as the estimate of the size of the work to be accomplished, sizing represents the project planner's first major challenge. In the context of project planning, size refers to a quantifiable outcome of the software project. If a direct approach is taken, size can be measured in LOC. If an indirect approach is chosen, size is represented as FP. Putnam and Myers suggest four different approaches to the sizing problem:

"Fuzzy logic" sizing. This approach uses the approximate reasoning techniques that are the cornerstone of fuzzy logic.
To apply this approach, the planner must identify the type of application, establish its magnitude on a qualitative scale, and then refine the magnitude

within the original range. Although personal experience can be used, the planner should also have access to a historical database of projects so that estimates can be compared to actual experience.

Problem-Based Estimation

Lines of code and function points were described as measures from which productivity metrics can be computed. LOC and FP data are used in two ways during software project estimation: (1) as an estimation variable to "size" each element of the software, and (2) as baseline metrics collected from past projects and used in conjunction with estimation variables to develop cost and effort projections.

LOC and FP estimation are distinct estimation techniques, yet both have a number of characteristics in common. The project planner begins with a bounded statement of software scope and from this statement attempts to decompose the software into problem functions that can each be estimated individually. LOC or FP (the estimation variable) is then estimated for each function. Alternatively, the planner may choose another component for sizing, such as classes or objects, changes, or business processes affected.

Baseline productivity metrics (e.g., LOC/pm or FP/pm) are then applied to the appropriate estimation variable, and the cost or effort for the function is derived. Function estimates are combined to produce an overall estimate for the entire project. It is important to note, however, that there is often substantial scatter in productivity metrics for an organization, making the use of a single baseline productivity metric suspect. In general, LOC/pm or FP/pm averages should be computed by project domain. That is, projects should be grouped by team size, application area, complexity, and other relevant parameters. Local domain averages should then be computed. When a new project is estimated, it should first be allocated to a domain, and then the appropriate domain average for productivity should be used in generating the estimate.
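The domain-average recommendation above can be sketched in code. The domains, project history, and new-project size below are hypothetical illustrations:

```python
# Group historical projects by domain and compute a local LOC/pm average;
# a new project is then estimated with the average of its own domain.
# All project data below is hypothetical.

history = [                     # (domain, LOC delivered, person-months)
    ("information systems", 24000, 40),
    ("information systems", 18000, 36),
    ("embedded", 9000, 30),
    ("embedded", 12000, 48),
]

def domain_productivity(history, domain):
    """Average LOC per person-month across past projects in one domain."""
    rates = [loc / pm for d, loc, pm in history if d == domain]
    return sum(rates) / len(rates)

# Estimate effort for a new 20,000-LOC information-systems project.
rate = domain_productivity(history, "information systems")
effort = 20000 / rate
print(f"domain rate : {rate:.0f} LOC/pm")
print(f"effort      : {effort:.1f} person-months")
```

Using the embedded domain's average instead would give a very different answer, which is exactly the scatter the text warns about when a single organization-wide baseline is used.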
The LOC and FP estimation techniques differ in the level of detail required for decomposition and the target of the partitioning. When LOC is used as the estimation variable, decomposition is absolutely essential and is often taken to considerable levels of detail. The following decomposition approach has been adapted from Phillips:
• define product scope;
• identify functions by decomposing scope;

• do while functions remain
      select function_j
      assign function_j to the subfunctions list;
      do while subfunctions remain
            select subfunction_k
            if subfunction_k resembles subfunction_d described in a historical database
                  then note historical cost, effort, size (LOC or FP) data for subfunction_d;
                       adjust historical cost, effort, size data based on any differences;
                       use adjusted cost, effort, size data to derive partial estimate, Ep;
                       project estimate = sum of {Ep};
                  else if cost, effort, size (LOC or FP) for subfunction_k can be estimated
                       then derive partial estimate, Ep;
                            project estimate = sum of {Ep};
                       else subdivide subfunction_k into smaller subfunctions;
                            add these to the subfunctions list;
                       endif
            endif
      enddo
  enddo

This decomposition approach assumes that all functions can be decomposed into subfunctions that will resemble entries in a historical database. If this is not the case, then

another sizing approach must be applied. The greater the degree of partitioning, the more likely it is that reasonably accurate estimates of LOC can be developed.

For FP estimates, decomposition works differently. Rather than focusing on function, each of the information domain characteristics (inputs, outputs, data files, inquiries, and external interfaces) as well as the 14 complexity adjustment values are estimated. The resultant estimates can then be used to derive an FP value that can be tied to past data and used to generate an estimate.

Regardless of the estimation variable that is used, the project planner begins by estimating a range of values for each function or information domain value. Using historical data or (when all else fails) intuition, the planner estimates an optimistic, most likely, and pessimistic size value for each function or count for each information domain value. An implicit indication of the degree of uncertainty is provided when a range of values is specified.

A three-point or expected value can then be computed. The expected value for the estimation variable (size), S, can be computed as a weighted average of the optimistic (Sopt), most likely (Sm), and pessimistic (Spess) estimates. For example,

S = (Sopt + 4Sm + Spess) / 6

gives heaviest credence to the "most likely" estimate and follows a beta probability distribution. We assume that there is a very small probability that the actual size result will fall outside the optimistic or pessimistic values.

Process-Based Estimation

The most common technique for estimating a project is to base the estimate on the process that will be used. That is, the process is decomposed into a relatively small set of tasks and the effort required to accomplish each task is estimated. Like the problem-based techniques, process-based estimation begins with a delineation of software functions obtained from the project scope. A series of software process activities must be performed for each function.
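Stepping back to the three-point formula above, it can be sketched as a small calculation. The optimistic, most likely, and pessimistic LOC figures below are hypothetical:

```python
def expected_size(s_opt, s_m, s_pess):
    """Three-point (beta distribution) estimate: S = (Sopt + 4*Sm + Spess) / 6."""
    return (s_opt + 4 * s_m + s_pess) / 6

# Hypothetical sizing of one function, in LOC.
s = expected_size(4600, 6900, 8600)
print(f"expected size: {s:.0f} LOC")   # → expected size: 6800 LOC
```

Note how the result sits closer to the most likely value (6,900) than a simple average of the three inputs would, reflecting the 4x weight on Sm.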
Functions and related software process activities may be represented as part of a table. Once problem functions and process activities are melded, the planner estimates the effort (e.g., in person-months) that will be required to accomplish

each software process activity for each software function. These data constitute the central matrix of the table. Average labor rates (i.e., cost/unit effort) are then applied to the effort estimated for each process activity. It is very likely that the labor rate will vary for each task: senior staff heavily involved in early activities are generally more expensive than junior staff involved in later design tasks, code generation, and early testing. Costs and effort for each function and software process activity are computed as the last step.

If process-based estimation is performed independently of LOC or FP estimation, we now have two or three estimates for cost and effort that may be compared and reconciled. If both sets of estimates show reasonable agreement, there is good reason to believe that the estimates are reliable. If, on the other hand, the results of these decomposition techniques show little agreement, further investigation and analysis must be conducted.

4.2.4 Differences between Process-Based and Problem-Based

Matter

Project-based learning tends to be associated with K-12 instruction. Problem-based learning is also used in K-12 classrooms, but has its origins in medical training and other professional preparation practices (Ryan et al., 1994).

Start

Project-based learning typically begins with an end product or "artifact" in mind, the production of which requires specific content knowledge or skills and typically raises one or more problems which students must solve. Projects vary widely in scope and time frame, and end products vary widely in the level of technology used and sophistication. Problem-based learning, as the name implies, begins with a problem for students to solve or learn more about. Often these problems are framed in a scenario or case-study format. Problems are designed to be "ill-structured" and to imitate the complexity of real-life cases.
As with project-based learning, problem-based learning assignments vary widely in scope and sophistication.

Approach, Model

Project-based learning: production model
• Students define the purpose for creating the end product and identify their audience. They research their topic, design their product, and create a plan for project management.
• Students then begin the project, resolve problems and issues that arise in production, and finish their product. Students may use or present the product they have created, and ideally are given time to reflect on and evaluate their work (Crawford, Bellnet website; Autodesk website; Blumenfeld et al.).

The entire process is meant to be authentic, mirroring real-world production activities and utilizing students' own ideas and approaches to accomplish the tasks at hand. Though the end product is the driving force in project-based learning, it is the content knowledge and skills acquired during the production process that are important to the success of the approach.

Problem-based learning: inquiry model
• Students are presented with a problem.
• They begin by organizing any previous knowledge on the subject, posing any additional questions, and identifying areas where they need more information.
• Students devise a plan for gathering more information, then do the necessary research and reconvene to share and summarize their new knowledge.
• Students may present their conclusions, and there may or may not be an end product.
• Again, students ideally have adequate time for reflection and self-evaluation (Duch, 1995; Delisle, Hoffman and Ritchie, 1997; Stephan and Gallagher, 1993).

All problem-based learning approaches rely on a problem as their driving force, but may focus on the solution to varying degrees. Some problem-based approaches intend for students to clearly define the problem, develop hypotheses, gather information, and arrive at clearly stated solutions (Allen, 1998). Others design the problems as learning-embedded cases which may have no solution but are meant to engage students in learning and information gathering (Wang, 1998).

Two Approaches, Sometimes Complementary

In practice, the line between project- and problem-based learning is frequently blurred, and the two are used in combination and play complementary roles. Fundamentally, problem- and project-based learning have the same orientation: both are authentic, constructivist approaches to learning. The differences between the two approaches may lie in subtle variations. There are at least two possible continua of variation in these types of learning approaches.

The extent to which the end product is the organizing center of the project. On one end of this continuum, end products are elaborate and shape the production process, such as a computer animation piece which requires extensive planning and labor. On the other end, end products are simpler and more summative, such as a group's report on their research findings. The former example is best described as project-based learning, where the end product drives the planning, production, and evaluation process. The latter example, where the inquiry and research (rather than the end product) is the primary focus of the learning process, is a better example of problem-based learning.

The extent to which a problem is the organizing center of the project. On one end of this continuum are projects in which it is implicitly assumed that any number of problems will arise and students will require problem-solving skills to overcome them. On the other end of this continuum are projects that begin with a clearly stated problem or problems and require a set of conclusions or a solution in direct response, where "the problematic situation is the organizing center for the curriculum." Here again, the former example typifies project-based learning, while the latter is best described as problem-based learning.

Function point sizing. The planner develops estimates of the information domain characteristics.

Standard component sizing.
Software is composed of a number of different “standard components” that are generic to a particular application area. For example, the standard components for an information system are subsystems, modules, screens, reports, interactive programs, batch programs, files, LOC, and object-level instructions. The project planner estimates the number of occurrences of each standard component and then uses historical project data to determine the delivered size per standard component. To illustrate, consider an information systems application. The planner estimates that 18 reports will be generated. Historical data indicate that 967 lines of COBOL are required per report. This enables the planner to estimate that approximately 17,000 LOC will be required for the reports component. Similar estimates and computations are made for other standard components, and a combined size value (adjusted statistically) results.
Change sizing. This approach is used when a project encompasses the use of existing software that must be modified in some way as part of a project. The planner estimates the number and type (e.g., reuse, adding code, changing code, and deleting code) of modifications that must be accomplished. Using an “effort ratio” for each type of change, the size of the change may be estimated.
Problem-based estimation - Lines of code and function points were described as measures from which productivity metrics can be computed. LOC and FP data are used in two ways during software project estimation: (1) as an estimation variable to “size” each element of the software and (2) as baseline metrics collected from past projects and used in conjunction with estimation variables to develop cost and effort projections. LOC and FP estimation are distinct estimation techniques, yet both have a number of characteristics in common. The project planner begins with a bounded statement of software scope and from this statement attempts to decompose the software into problem functions that can each be estimated individually. LOC or FP (the estimation variable) is then estimated for each function. Alternatively, the planner may choose another component for sizing, such as classes or objects, changes, or business processes affected.
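The standard-component calculation described above can be sketched as a short script. Only the reports figure (18 reports at 967 LOC per report) comes from the text; the other component names, counts, and historical sizes are invented for illustration:

```python
# Standard component sizing: multiply the estimated number of occurrences
# of each component type by the historical delivered size per occurrence,
# then sum the components for a combined size value.
# Only "report": 18 x 967 LOC is from the text; the rest is hypothetical.

historical_loc_per_unit = {"report": 967, "screen": 520, "batch_program": 1350}
estimated_occurrences = {"report": 18, "screen": 24, "batch_program": 6}

component_sizes = {
    name: count * historical_loc_per_unit[name]
    for name, count in estimated_occurrences.items()
}
total_loc = sum(component_sizes.values())

print(component_sizes["report"])  # 17406 -- roughly the 17,000 LOC quoted
print(total_loc)
```

The reports component alone comes out at 17,406 LOC, which matches the "approximately 17,000 LOC" figure in the text once rounded.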
Baseline productivity metrics (e.g., LOC/pm or FP/pm) are then applied to the appropriate estimation variable, and cost or effort for the function is derived. Function estimates are combined to produce an overall estimate for the entire project. It is important to note, however, that there is often substantial scatter in productivity metrics for an organization, making the use of a single baseline productivity metric suspect. In general, LOC/pm or FP/pm averages should be computed by project domain. That is, projects should be grouped by team size, application area, complexity, and other relevant parameters. Local domain averages should then be computed. When a new project is estimated, it should first be allocated to a domain, and then the appropriate domain average for productivity should be used in generating the estimate.
The LOC and FP estimation techniques differ in the level of detail required for decomposition and the target of the partitioning. When LOC is used as the estimation variable, decomposition is absolutely essential and is often taken to considerable levels of detail. The decomposition approach described here has been adapted from Phillips.
Process-based estimation - The most common technique for estimating a project is to base the estimate on the process that will be used. That is, the process is decomposed into a relatively small set of tasks, and the effort required to accomplish each task is estimated. Like the problem-based techniques, process-based estimation begins with a delineation of software functions obtained from the project scope. A series of software process activities must be performed for each function. Functions and related software process activities may be represented as part of a table. Once problem functions and process activities are melded, the planner estimates the effort (e.g., person-months) that will be required to accomplish each software process activity for each software function. These data constitute the central matrix of the table. Average labor rates (i.e., cost/unit effort) are then applied to the effort estimated for each process activity. It is very likely the labor rate will vary for each task. Senior staff, heavily involved in early activities, are generally more expensive than junior staff involved in later design tasks, code generation, and early testing. Costs and effort for each function and software process activity are computed as the last step. If process-based estimation is performed independently of LOC or FP estimation, we now have two or three estimates for cost and effort that may be compared and reconciled.
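The central effort matrix just described can be sketched as follows; the function names, per-activity efforts, and labor rates are all invented, and only the method (effort per function per activity, then a per-activity labor rate) follows the text:

```python
# Process-based estimation: decompose the project into functions and
# process activities, estimate effort (person-months) per cell of the
# matrix, then apply an average labor rate (cost per person-month) per
# activity. All the numbers here are hypothetical.

activities = ["analysis", "design", "code", "test"]
effort_matrix = {            # person-months, one row per function
    "user interface": [0.5, 2.5, 0.4, 5.0],
    "database":       [0.7, 4.0, 0.6, 6.0],
}
labor_rate = {"analysis": 10000, "design": 9000, "code": 7000, "test": 8000}
# Senior staff on early activities (analysis, design) cost more per
# person-month than junior staff on coding and testing.

total_effort = sum(sum(row) for row in effort_matrix.values())
total_cost = sum(
    effort * labor_rate[act]
    for row in effort_matrix.values()
    for act, effort in zip(activities, row)
)
print(total_effort, total_cost)
```

The resulting totals (here 19.7 person-months and a cost of 165,500 in whatever currency unit the rates use) are the figures that would then be compared against an independent LOC- or FP-based estimate.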
If both sets of estimates show reasonable agreement, there is good reason to believe that the estimates are reliable. If, on the other hand, the results of these decomposition techniques show little agreement, further investigation and analysis must be conducted.

Table 4.1 Differences between process-based and problem-based estimation

Process based:
• The process is decomposed into a relatively small set of tasks, and the effort required to accomplish each task is estimated.
• Process-based estimation begins with a delineation of software functions obtained from the project scope.
• Estimate the effort to complete each software function.
• Apply average labor rates, compute the total cost, and compare the estimates.

Problem based:
• A series of software process activities must be performed for each function.
• Begins with a statement of scope.
• Estimate FP or LOC.
• Combine those estimates and produce an overall estimate.

SUMMARY
• Software project management is an umbrella activity within software engineering. It begins before any technical activity is initiated and continues throughout the definition, development, and maintenance of computer software.
• Three P’s have a substantial influence on software project management: people, problem, and process.
• People must be organized into effective teams, motivated to do high-quality software work, and coordinated to achieve effective communication.
• The problem must be communicated from customer to developer, partitioned into its constituent parts, and positioned for work by the software team.
• The process must be adapted to the people and the problem. A common process framework is selected, an appropriate software engineering paradigm is applied, and a set of work tasks is chosen to get the job done.
• The pivotal element in all software projects is people; software engineers can be organized in a number of different team structures that range from traditional control hierarchies to “open paradigm” teams.
• A variety of coordination and communication techniques can be applied to support the work of the team. In general, formal reviews and informal person-to-person communication have the most value for practitioners.
• Software project management begins with a set of activities that are collectively called project planning. The manager and the software team must estimate the work that is to be done, the resources required, and the time that will be taken to complete the project.
• Estimates should always be made with future needs in mind, taking into account the various degrees of uncertainty. Process and project metrics provide a historical perspective and a powerful input for the generation of quantitative estimates.
• As estimation lays a foundation for all other project planning activities, project planning paves the way for successful software engineering.
• Estimation: The process of approximating a value that can be used even if the data may be incomplete or unstable is referred to as estimation.
• Problem-based estimation:
• Begins with a statement of scope.
• The software is decomposed into problem functions.
• Estimating FP or LOC.
• Combine those estimates and produce an overall estimate.
• Process-based estimation:
• The functions of the software are identified.
• The framework is formulated.
• Estimate the effort to complete each software function.
• Apply average labor rates and compute the total.
• Lines of code and function points were described as measures from which productivity metrics can be computed. LOC and FP data are used in two ways during software project estimation: (1) as an estimation variable to “size” each element of the software and (2) as baseline metrics collected from past projects and used in conjunction with estimation variables to develop cost and effort projections.
• LOC and FP estimation are distinct estimation techniques, yet both have a number of characteristics in common. The project planner begins with a bounded statement of software scope and from this statement attempts to decompose the software into problem functions that can each be estimated individually. LOC or FP (the estimation variable) is then estimated for each function. Alternatively, the planner may choose another component for sizing, such as classes or objects, changes, or business processes affected.
• It is important to note, however, that there is often substantial scatter in productivity metrics for an organization, making the use of a single baseline productivity metric suspect. In general, LOC/pm or FP/pm averages should be computed by project domain. That is, projects should be grouped by team size, application area, complexity, and other relevant parameters. Local domain averages should then be computed. When a new project is estimated, it should first be allocated to a domain, and then the appropriate domain average for productivity should be used in generating the estimate.

KEY WORDS/ABBREVIATIONS
• Module interface table: a table which provides a graphic illustration of the data elements whose values are input to and output from a module.
• Project plan: a management document describing the approach taken for a project. The plan typically describes the work to be done, the resources required, the methods to be used, the configuration management and quality assurance procedures to be followed, the schedules to be met, the project organization, etc. Project in this context is a generic term. Some projects may also need integration plans, security plans, test plans, quality assurance plans, etc. See: documentation plan, software development plan, test plan, software engineering.
• Pseudo code: a combination of programming language and natural language used to express a software design. If used, it is usually the last document produced prior to writing the source code.
• Qualification, product performance: establishing confidence through appropriate testing that the finished product produced by a specified process meets all release requirements for functionality and safety.
• Quality control: the operational techniques and procedures used to achieve quality requirements.

LEARNING ACTIVITY
1. Write about the various team structures in a company.
2. List down the steps involved in project planning.
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Type Questions
1. Justify how a software organization can deliver a quality product, keep the cost within the client’s budget constraint, and deliver the project as per schedule.
2. Identify how a software organization can deliver a quality product, keep the cost within the client’s budget constraint, and deliver the project as per schedule. Explain with its measures.
3. State how estimation is a form of problem solving, and what problem is to be solved.
4. Illustrate the process of approximating a value that can be used even if the data may be incomplete or unstable.
5. With detailed estimation, compare problem-based estimation and process-based estimation.
B. Multiple Choice Questions
1. Why is the decomposition technique required?
(a) Software project estimation is a form of problem solving
(b) Developing a cost and effort estimate for a software project is too complex
(c) All of the mentioned
(d) None of the mentioned
2. If a direct approach to software project sizing is taken, size can be measured in
(a) LOC
(b) FP
(c) LOC and FP
(d) None of the mentioned
3. Which software project sizing approach develops estimates of the information domain characteristics?
(a) Function point sizing
(b) Change sizing
(c) Standard component sizing
(d) Fuzzy logic sizing
4. The expected value for the estimation variable (size), S, can be computed as a weighted average of the optimistic (Sopt), most likely (Sm), and pessimistic (Spess) estimates, given as
(a) EV = (Sopt + 4Sm + Spess)/4
(b) EV = (Sopt + 4Sm + Spess)/6
(c) EV = (Sopt + 2Sm + Spess)/6
(d) EV = (Sopt + 2Sm + Spess)/4
5. Who suggested the four different approaches to the sizing problem?
(a) Putnam
(b) Myers
(c) Boehm
(d) Putnam and Myers
Answers
1. (c), 2. (a), 3. (a), 4. (b), 5. (d)

REFERENCES
• Pressman, R. S. (2009). Software Engineering: A Practitioner’s Approach. New Delhi: McGraw-Hill.
• Mall, R. (2003). Fundamentals of Software Engineering. New Delhi: PHI.
• Jalote, P. (2019). An Integrated Approach to Software Engineering. New Delhi: Narosa Publications.
• Sommerville, I. (2013). Software Engineering. New Delhi: Pearson Education.
• Sommerville, I. (2001). Software Engineering (Sixth Edition). Pearson Education.
• Jones, C. Software Engineering Best Practices: Lessons from Successful Projects in Top Companies. McGraw-Hill.
• https://www.computing.dcu.ie/~renaat/ca421/LWu1.html
• https://www.knowledgehut.com/tutorials/project-management
• https://mrcet.com/downloads/digital_notes/CSE/IV%20Year/SOFTWARE%20PROJECT%20MANAGEMENT.pdf
• “Development Effort Prediction: A Software Science Validation”, IEEE Transactions on Software Engineering, November 1983.
• Lederer, A. L., and Prasad, J. “Nine Management Guidelines for Better Cost Estimating”, Communications of the ACM, Vol. 35, No. 2, February 1992.
UNIT 5 COST ESTIMATION MODELS

Structure
Learning Objectives
Introduction
COCOMO Model
The S/W Equation
Summary
Key Words/Abbreviations
Learning Activity
Unit End Questions (MCQ and Descriptive)
References

LEARNING OBJECTIVES
After studying this unit, you will be able to:
• Discuss the basic concepts of cost estimation models.
• Explain the COCOMO model.

INTRODUCTION
Cost estimation simply means a technique that is used to find out cost estimates. The cost estimate is the financial spend on the effort to develop and test software in software engineering. Cost estimation models are mathematical algorithms or parametric equations that are used to estimate the cost of a product or project. Various techniques or models are available for cost estimation, also known as cost estimation models, as shown below.
Fig 5.1 Estimation Model

Empirical Estimation Technique – Empirical estimation is a technique or model in which empirically derived formulas are used for predicting the data that are a required and essential part of the software project planning step. These techniques are usually based on data collected previously from projects, and also on some guesses, prior experience with the development of similar types of projects, and assumptions. It uses the size of the software to estimate the effort. In this technique, an educated guess of project parameters is made; hence, these models are based on common sense. However, as there are many activities involved in empirical estimation, the technique is formalized. Examples are the Delphi technique and the Expert Judgment technique.
Heuristic Technique – The word heuristic is derived from a Greek word that means “to discover”. The heuristic technique is a technique or model used for solving problems, learning, or discovery through practical methods aimed at achieving immediate goals. These techniques are flexible and simple, allowing quick decisions through shortcuts and good-enough calculations, particularly when working with complex data. But the decisions made using this technique are not necessarily optimal. In this technique, the relationship among different project parameters is expressed using mathematical equations. A popular heuristic technique is the Constructive Cost Model (COCOMO). This technique is also used to speed up analysis and investment decisions.
Analytical Estimation Technique – Analytical estimation is a type of technique that is used to measure work. In this technique, the task is first divided or broken down into its basic component operations or elements for analysis. Second, if standard times are available from some other source, they are applied to each element or component of work. Third, if no such times are available, the work is estimated based on experience of the work. In this technique, results are derived by making certain basic assumptions about the project; hence, the analytical estimation technique has some scientific basis. Halstead’s software science is based on an analytical estimation model.
There is no simple way to make an accurate estimate of the effort required to develop a software system. You may have to make initial estimates on the basis of a high-level user requirements definition. The software may have to run on unfamiliar computers or use new development technology. The people involved in the project and their skills will probably not be known. All of these mean that it is impossible to estimate system development costs accurately at an early stage in a project. Furthermore, there is a fundamental difficulty in assessing the accuracy of different approaches to cost-estimation techniques. Project cost estimates are often self-fulfilling. The estimate is used to define the project budget, and the product is adjusted so that the budget figure is realized. I do not know of any controlled experiments with project costing where the estimated costs were not used to bias the experiment. A controlled experiment would not reveal the cost estimate to the project manager. The actual costs would then be compared with the estimated project costs. However, such an experiment is probably impossible because of the high costs involved and the number of variables that cannot be controlled. Nevertheless, organizations need to make software effort and cost estimates. To do so, one or more of the techniques described may be used (Boehm, 1981). All of these techniques rely on experience-based judgements by project managers who use their knowledge of previous projects to arrive at an estimate of the resources required for the project.
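As a small illustration of the empirical flavor of these techniques, a three-point size estimate can combine optimistic, most likely, and pessimistic guesses into an expected value using the weighting EV = (Sopt + 4Sm + Spess)/6, the formula referenced later in this material. The sample LOC figures are invented:

```python
# Three-point (PERT-style) size estimate: combine optimistic, most
# likely, and pessimistic LOC guesses into an expected value with
# EV = (s_opt + 4*s_m + s_pess) / 6. The sample numbers are hypothetical.

def expected_size(s_opt, s_m, s_pess):
    return (s_opt + 4 * s_m + s_pess) / 6

ev = expected_size(4600, 6900, 8600)  # LOC for one decomposed function
print(ev)  # 6800.0
```

The heavier weight on the most likely value reflects the educated-guess character of empirical estimation: the extremes temper the estimate without dominating it.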
However, there may be important differences between past and future projects. Many new development methods and techniques have been introduced in the last 10 years. Some examples of the changes that may affect estimates based on experience include:
1. Distributed object systems rather than mainframe-based systems
2. Use of web services
3. Use of ERP or database-centered systems
4. Use of off-the-shelf software rather than original system development
5. Development for and with reuse rather than new development of all parts of a system
6. Development using scripting languages such as TCL or Perl (Ousterhout, 1998)
7. The use of CASE tools and program generators rather than unsupported software development.
If project managers have not worked with these techniques, their previous experience may not help them estimate software project costs. This makes it more difficult for them to produce accurate cost and schedule estimates.
You can tackle the approaches to cost estimation using either a top-down or a bottom-up approach. A top-down approach starts at the system level. You start by examining the overall functionality of the product and how that functionality is provided by interacting sub-functions. The costs of system-level activities such as integration, configuration management and documentation are taken into account. The bottom-up approach, by contrast, starts at the component level. The system is decomposed into components, and you estimate the effort required to develop each of these components. You then add these component costs to compute the effort required for the whole system development.
The disadvantages of the top-down approach are the advantages of the bottom-up approach and vice versa. Top-down estimation can underestimate the costs of solving difficult technical problems associated with specific components, such as interfaces to nonstandard hardware, and there is no detailed justification of the estimate that is produced. By contrast, bottom-up estimation produces such a justification and considers each component. However, this approach is more likely to underestimate the costs of system activities such as integration. Bottom-up estimation is also more expensive: there must be an initial system design to identify the components to be costed. Each estimation technique has its own strengths and weaknesses.
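A minimal sketch of the bottom-up approach, paired with a simple check of whether two independent estimates (for instance, a bottom-up total and a top-down figure) agree; all component names, effort figures, and the tolerance are hypothetical:

```python
# Bottom-up estimation: decompose the system into components, estimate
# each one, and sum. reconcile() then checks whether several independent
# estimates fall within a fractional tolerance of their mean, mirroring
# the advice to compare techniques and investigate large disagreements.

def reconcile(estimates, tolerance=0.25):
    """Return (mean, agree): agree is True when every estimate lies
    within `tolerance` (as a fraction) of the mean of all estimates."""
    mean = sum(estimates) / len(estimates)
    agree = all(abs(e - mean) / mean <= tolerance for e in estimates)
    return mean, agree

bottom_up = sum({"ui": 9, "db": 14, "reports": 7}.values())  # 30 pm
top_down = 60                                                # pm
mean, agree = reconcile([bottom_up, top_down])
print(mean, agree)  # 45.0 False -- the estimates diverge
```

When `agree` comes back False, as here, the estimator probably lacks information about the product or process and should refine the inputs and repeat the costing until the estimates converge.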
Each uses different information about the project and the development team, so if you use a single model and this information is not accurate, your final estimate will be wrong. For large projects, therefore, you should use several cost estimation techniques and compare their results. If these predict radically different costs, you probably do not have enough information about the product or the development process. You should look for more information about the product, process or team and repeat the costing process until the estimates converge. These estimation techniques are applicable where a requirements document for the system has been produced. This should define all user and system requirements. You can therefore make a reasonable estimate of the system functionality that is to be developed. In general, large systems engineering projects will have such a requirements document.
However, in many cases, the costs of projects must be estimated using only incomplete user requirements for the system. This means that the estimators have very little information with which to work. Requirements analysis and specification is expensive, and the managers in a company may need an initial cost estimate for the system before they can have a budget approved to develop more detailed requirements or a system prototype. Under these circumstances, “pricing to win” is a commonly used strategy. The notion of pricing to win may seem unethical and unbusinesslike. However, it does have some advantages. A project cost is agreed on the basis of an outline proposal. Negotiations then take place between client and customer to establish the detailed project specification. This specification is constrained by the agreed cost. The buyer and seller must agree on what is acceptable system functionality. The fixed factor in many projects is not the project requirements but the cost. The requirements may be changed so that the cost is not exceeded.
For example, say a company is bidding for a contract to develop a new fuel delivery system for an oil company that schedules deliveries of fuel to its service stations. There is no detailed requirements document for this system, so the developers estimate that a price of $900,000 is likely to be competitive and within the oil company’s budget. After they are granted the contract, they negotiate the detailed requirements of the system so that basic functionality is delivered; then they estimate the additional costs for other requirements.
The oil company does not necessarily lose here because it has awarded the contract to a company that it can trust. The additional requirements may be funded from a future budget, so that the oil company’s budgeting is not disrupted by a very high initial software cost.

COCOMO MODEL
A number of algorithmic models have been proposed as the basis for estimating the effort, schedule and costs of a software project. These are conceptually similar but use different parameter values. The model that I discuss here is the COCOMO model. The COCOMO model is an empirical model that was derived by collecting data from a large number of software projects. These data were analyzed to discover formulae that were the best fit to the observations. These formulae link the size of the system and product, project and team factors to the effort to develop the system. I have chosen to use the COCOMO model for several reasons:

Fig 5.1 Estimate uncertainty

1. It is well documented, available in the public domain and supported by public domain and commercial tools.
2. It has been widely used and evaluated in a range of organizations.
3. It has a long pedigree, from its first instantiation in 1981 (Boehm, 1981), through a refinement tailored to Ada software development (Boehm and Royce, 1989), to its most recent version, COCOMO II, published in 2000 (Boehm et al., 2000).
The COCOMO models are comprehensive, with a large number of parameters that can each take a range of values. They are so complex that I cannot give a complete description here. Rather, I simply discuss their essential characteristics to give you a basic understanding of algorithmic cost models.
The first version of the COCOMO model (COCOMO 81) was a three-level model where the levels corresponded to the detail of the analysis of the cost estimate. The first level (basic) provided an initial rough estimate; the second level modified this using a number of project and process multipliers; and the most detailed level produced estimates for different phases of the project. Table 5.1 shows the basic COCOMO formula for different types of projects. The multiplier M reflects product, project and team characteristics. COCOMO 81 assumed that the software would be developed according to a waterfall process (see Chapter 4) using standard imperative programming languages such as C or FORTRAN. However, there have been radical changes to software development since this initial version was proposed. Prototyping and incremental development are commonly used process models, and software is now often developed by assembling reusable components.

Table 5.1 The basic COCOMO 81 model

COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e. the number of lines of code. It is a procedural cost estimate model for software projects, often used as a process of reliably predicting the various parameters associated with a project, such as size, effort, cost, time and quality. It was proposed by Barry Boehm in 1981 and is based on the study of 63 projects, which makes it one of the best-documented models.
The key parameters which define the quality of any software product, and which are also an outcome of COCOMO, are primarily effort and schedule:
Effort: The amount of labor that will be required to complete a task. It is measured in person-month units.
Schedule: The amount of time required for the completion of the job, which is, of course, proportional to the effort put in. It is measured in units of time such as weeks or months.
Different models of COCOMO have been proposed to predict the cost estimation at different levels, based on the amount of accuracy and correctness required. All of these models can be applied to a variety of projects, whose characteristics determine the values of the constants to be used in subsequent calculations. These characteristics pertaining to different system types are mentioned below.
Boehm’s definition of organic, semidetached, and embedded systems:
Organic – A software project is said to be of the organic type if the team size required is adequately small, the problem is well understood and has been solved in the past, and the team members have nominal experience regarding the problem.
Semi-detached – A software project is said to be of the semi-detached type if vital characteristics such as team size, experience, and knowledge of the various programming environments lie in between those of organic and embedded. Projects classified as semi-detached are comparatively less familiar and more difficult to develop than organic ones and require more experience, better guidance and creativity. E.g.: compilers or different embedded systems can be considered of the semi-detached type.
Embedded – A software project requiring the highest level of complexity, creativity, and experience falls under this category. Such software requires a larger team size than the other two types, and the developers need to be sufficiently experienced and creative to develop such complex models.
All the above system types use different values of the constants in the effort calculations.
Types of Models: COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. Any of the three forms can be adopted according to our requirements. These are the types of COCOMO model:
• Basic COCOMO Model
• Intermediate COCOMO Model
• Detailed COCOMO Model
The first level, Basic COCOMO, can be used for quick and slightly rough calculations of software costs. Its accuracy is somewhat restricted due to the absence of sufficient factor considerations. Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases; i.e., in the detailed case it accounts for both the cost drivers and performs the calculations phase-wise, producing a more accurate result.
Appraisal of the Model - Applications of COCOMO
• Medium and Large Projects: For small projects, the effort of an estimation according to intermediate and detailed COCOMO is too high, but the results from basic COCOMO alone are not sufficiently exact.
• Technical Applications: For software projects developing commercial applications, COCOMO usually comes up with overstated effort estimation values; therefore COCOMO is only applied for the development of technical software. This circumstance is due to the fact that the ratio of DSI to man-months implemented in the COCOMO estimation equation fits the efficiency rate in technical development; with regard to commercial software development, a higher productivity rate (DSI/man-month) can be assumed.
Modes - COCOMO can be applied to the following software project categories.
Organic Mode - These projects are very easy and have a small team size. The team has good application experience and works to a set of less than rigid requirements. A thermal analysis program developed for a heat transfer group is an example of this.
Semi-detached Mode - These are intermediate in size and complexity. Here the team has mixed experience to meet a mix of rigid and less than rigid requirements. A transaction processing system with fixed requirements for terminal hardware and database software is an example of this.
Embedded Mode - Software projects that must be developed within a set of tight hardware, software, and operational constraints. For example, flight control software for aircraft.

THE S/W EQUATION
Estimation of Effort: Calculations – Basic Model – The effort is measured in person-months and, as evident from the formula, is dependent on kilo-lines of code; the development time is measured in months:

Effort = a × (KLOC)^b PM
Tdev = c × (Effort)^d months

where the constants a, b, c and d depend on the project mode (organic, semi-detached, or embedded). These formulas are used as such in the Basic Model calculations; as not much consideration of different factors such as reliability and expertise is taken into account, the estimate is rough. Some insight into the basic COCOMO model can be obtained by plotting the estimated characteristics for different software sizes. Fig 5.2 shows a plot of estimated effort versus product size. From the figure, we can observe that the effort is somewhat superlinear in the size of the software product. Thus, the effort required to develop a product increases very rapidly with project size.

Fig 5.2 Graph effort vs. Estimated cost

The development time versus the product size in KLOC is plotted in Fig 5.3. From the figure it can be observed that the development time is a sublinear function of the size of the product, i.e., when the size of the product increases by two times, the time to develop the product does not double but rises moderately. This can be explained by the fact that for larger products, a larger number of activities that can be carried out concurrently can be identified. These parallel activities can be carried out simultaneously by the engineers, which reduces the time to complete the project. Further, it can be observed that the development time is roughly the same for all three categories of products. For example, a 60 KLOC program can be developed in approximately 18 months, regardless of whether it is of organic, semidetached, or embedded type.
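The observation that a 60 KLOC program takes roughly 18 months in every mode is easy to check numerically. The sketch below uses the standard basic-model coefficients to compute the development time of a 60 KLOC product in each mode:

```python
# Development time for a 60 KLOC product under each basic-COCOMO mode.
# Coefficients are the standard (a, b, c, d) basic-model constants.
modes = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

for name, (a, b, c, d) in modes.items():
    effort = a * 60 ** b        # person-months
    tdev = c * effort ** d      # months
    print(f"{name:13s}: effort = {effort:6.1f} PM, tdev = {tdev:4.1f} months")
```

All three modes come out at roughly 18 months (about 17.9, 18.3 and 18.1 respectively), even though the required effort differs widely across modes.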

Fig 5.3 Graph development time vs nominal development time

From the effort estimation, the project cost can be obtained by multiplying the required effort by the manpower cost per month. Implicit in this computation, however, is the assumption that the entire project cost is incurred on account of manpower alone. In addition to manpower cost, a project incurs costs for the hardware and software required for the project and for company overheads such as administration and office space.

It is important to note that the effort and duration estimates obtained using the COCOMO model are called the nominal effort estimate and nominal duration estimate. The term nominal implies that if anyone tries to complete the project in a time shorter than the estimated duration, then the cost will increase drastically. But if anyone completes the project over a longer period of time than estimated, then there is almost no decrease in the estimated

cost value.

Example 1: Suppose a project was estimated to be 400 KLOC. Calculate the effort and development time for each of the three models, i.e., organic, semi-detached and embedded.

Solution: The basic COCOMO equations take the form:

Effort = a × (KLOC)^b PM
Tdev = c × (Effort)^d Months

Estimated size of project = 400 KLOC

(i) Organic Mode
E = 2.4 × (400)^1.05 = 1295.31 PM
D = 2.5 × (1295.31)^0.38 = 38.07 Months

(ii) Semidetached Mode
E = 3.0 × (400)^1.12 = 2462.79 PM
D = 2.5 × (2462.79)^0.35 = 38.45 Months

(iii) Embedded Mode
E = 3.6 × (400)^1.20 = 4772.81 PM
D = 2.5 × (4772.81)^0.32 = 38 Months

Example 2: A project of size 200 KLOC is to be developed. The software development team has average experience on similar types of projects. The project schedule is not very tight. Calculate the effort, development time, average staff size, and productivity of the project.

Solution: The semidetached mode is the most appropriate, keeping in view the size, schedule and experience of the development team. Hence

E = 3.0 × (200)^1.12 = 1133.12 PM
D = 2.5 × (1133.12)^0.35 = 29.3 Months

Average staff size = E/D = 1133.12/29.3 ≈ 38.67 persons

P = KLOC/E = 200,000/1133.12 ≈ 176 LOC/PM

2. Intermediate Model: The basic COCOMO model assumes that the effort is only a function of the number of lines of code and some constants chosen according to the software system. The intermediate COCOMO model refines the initial estimates obtained through the basic COCOMO model by using a set of 15 cost drivers based on various attributes of software engineering.

Classification of Cost Drivers and their attributes:

Product attributes -
• Required software reliability extent
• Size of the application database
• The complexity of the product

Hardware attributes -
• Run-time performance constraints
• Memory constraints
• The volatility of the virtual machine environment
• Required turnaround time

Personnel attributes -
• Analyst capability
• Software engineering capability

• Applications experience
• Virtual machine experience
• Programming language experience

Project attributes -
• Use of software tools
• Application of software engineering methods
• Required development schedule

The cost drivers are divided into four categories:

Table 5.2 Cost driver

Intermediate COCOMO equation:

E = a × (KLOC)^b × EAF

D = c × (E)^d

Coefficients for intermediate COCOMO

3. Detailed COCOMO Model: Detailed COCOMO incorporates all qualities of the standard version with an assessment of the cost drivers' effect on each step of the software engineering process. The detailed model uses various effort multipliers for each cost driver attribute. In detailed COCOMO, the whole software is divided into multiple modules; COCOMO is then applied to each module to estimate effort, and the efforts are summed. The six phases of detailed COCOMO are:

• Planning and requirements
• System design
• Detailed design
• Module code and test
• Integration and test
• Cost constructive model

The effort is determined as a function of program size, and a set of cost drivers is given according to every phase of the software lifecycle.

SUMMARY

• COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the number of Lines of Code. It is a procedural cost estimation model for software projects, often used as a process of reliably predicting the various parameters associated with a project such as size, effort, cost, time and quality. It was proposed by Barry Boehm in 1981 and is based on the study of 63 projects, which makes it one of the best-documented models.

• The key parameters that define the quality of any software product, and that are also an outcome of COCOMO, are primarily Effort and Schedule:

• Effort: The amount of labor that will be required to complete a task. It is measured in person-month units.

• Schedule: The amount of time required for the completion of the job, which is, of course, proportional to the effort put in. It is measured in units of time such as weeks or months.

• Different models of COCOMO have been proposed to predict the cost estimation at different levels, based on the amount of accuracy and correctness required. All of these models can be applied to a variety of projects, whose characteristics determine the value of the constants used in subsequent calculations. These characteristics for the different system types are mentioned below.

• Boehm's definition of organic, semidetached, and embedded systems:

• Organic – A software project is said to be of organic type if the required team size is adequately small, the problem is well understood and has been solved in the past, and the team members have nominal experience with the problem.

• Semi-detached – A software project is said to be of semi-detached type if vital characteristics such as team size, experience, and knowledge of the various programming environments lie between those of organic and embedded. Projects classified as semi-detached are comparatively less familiar and more difficult to develop than organic ones and require more experience, better guidance and creativity.
E.g.: Compilers or different Embedded Systems can be considered of Semi-Detached type. 94 CU IDOL SELF LEARNING MATERIAL (SLM)

• Embedded – A software project with requiring the highest level of complexity, creativity, and experience requirement fall under this category. Such software requires a larger team size than the other two models and also the developers need to be sufficiently experienced and creative to develop such complex models. • All the above system types utilize different values of the constants used in Effort Calculations. • Types of COCOMO model: o Basic COCOMO Model o Intermediate COCOMO Model o Detailed COCOMO Model • Intermediate COCOMO takes these Cost Drivers into account and Detailed COCOMO additionally accounts for the influence of individual project phases, i.e in case of Detailed it accounts for both these cost drivers and also calculations are performed phase wise henceforth producing a more accurate result • Intermediate Model –The basic COCOMO model assumes that the effort is only a function of the number of lines of code and some constants evaluated according to the different software system. However, in reality, no system’s effort and schedule can be solely calculated on the basis of Lines of Code. For that, various other factors such as reliability, experience, Capability. These factors are known as Cost Drivers and the Intermediate Model utilizes 15 such drivers for cost estimation. • Detailed Model –Detailed COCOMO incorporates all characteristics of the intermediate version with an assessment of the cost driver’s impact on each step of the software engineering process. The detailed model uses different effort multipliers for each cost driver attribute. In detailed COCOMO, the whole software is divided into different modules and then we apply COCOMO in different modules to estimate effort and then sum the effort. • The Six phases of detailed COCOMO are: 95 CU IDOL SELF LEARNING MATERIAL (SLM)

• Planning and requirements • System design • Detailed design • Module code and test • Integration and test • Cost Constructive model The effort is calculated as a function of program size and a set of cost drivers are given according to each phase of the software lifecycle. KEY WORDS/ABBREVIATIONS • Requirements analysis: (IEEE) (1) The process of studying user needs to arrive at a definition of a system, hardware, or software requirements. (2) The process of studying and refining system, hardware, or software requirements. See: prototyping, software engineering. • Retention period: (ISO) The length of time specified for data on a data medium to be preserved. • Side effect: An unintended alteration of a program's behavior caused by a change in one part of the program, without taking into account the effect the change has on another part of the program. See: regression analysis and testing. • Sizing: (IEEE) The process of estimating the amount of computer storage or the number of source lines required for a software system or component. Contrast with timing. • Software design description: (IEEE) A representation of software created to facilitate analysis, planning, implementation, and decision making. The software design description is used as a medium for communicating software design information, and may be thought of as a blueprint or model of the system. See: structured design, design description, specification. 96 CU IDOL SELF LEARNING MATERIAL (SLM)

• COCOMO: Constructive Cost Model • EI: External Inputs • EIF's: External Interface Files • EO: External Outputs • EQ: External Inquiry • IFPUG: International Function Point User Group LEARNING ACTIVITY 1. Write how Cost Estimation Models plays an important role in software engineering 2. Examine how Constructive Cost Model (COCOMO) is a method for assessing the cost of a software package? UNIT END QUESTIONS (MCQ AND DESCRIPTIVE) A. Descriptive Types Questions 1. Project work estimation has three phases, explain each in detail. 2. Project size of 200 KLOC is to be developed. Software development team has average experience on similar type of projects. The project schedule is not very tight. Calculate the Effort, development time, average staff size, and productivity of the project? 3. Suppose a project was estimated to be 400 KLOC. Calculate the effort and development time for each of the three model i.e., organic, semi-detached & embedded? 97 CU IDOL SELF LEARNING MATERIAL (SLM)

4. Identify technique or model in which empirically derived formulas are used for predicting the data that are a required and essential part of the software project planning step? 5. Identify model uses various effort multipliers for each cost driver property? Explain the phases of it. B. Multiple Choice Questions 1. Which of the following are parameters involved in computing the total cost of a software development project? (a) Hardware and software costs (b) Effort costs (c) Travel and training costs (d) All of the mentioned 2. Which of the following costs is not part of the total effort cost? (a) Costs of networking and communications (b) Costs of providing heating and lighting office space (c) Costs of lunch time food (d) Costs of support staff 3. What is related to the overall functionality of the delivered software? (a) Function-related metrics (b) Product-related metrics (c) Size-related metrics (d) None of the mentioned 4. A is developed using historical cost information that relates some software metric to the project cost. (a) Algorithmic cost modelling (b) Expert judgement (c) Estimation by analogy (d) Parkinson’s Law 98 CU IDOL SELF LEARNING MATERIAL (SLM)

