Software Project Planning

BASIC COCOMO 

Basic COCOMO computes software development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of source lines of code (SLOC, KLOC).

COCOMO applies to three classes of software projects:

  • Organic projects – “small” teams with “good” experience working with “less than rigid” requirements
  • Semi-detached projects – “medium” teams with mixed experience working with a mix of rigid and less than rigid requirements
  • Embedded projects – developed within a set of “tight” constraints (hardware, software, operational, …); these combine characteristics of organic and semi-detached projects

The basic COCOMO equations take the form

Effort Applied (E) = a_b(KLOC)^(b_b) [man-months]
Development Time (D) = c_b(Effort Applied)^(d_b) [months]
People required (P) = Effort Applied / Development Time [count]

where KLOC is the estimated number of delivered lines of code for the project, expressed in thousands. The coefficients a_b, b_b, c_b and d_b are given in the following table (note: the values listed below are from the original analysis; a modern reanalysis produced different values):

Software project    a_b    b_b     c_b    d_b
Organic             2.4    1.05    2.5    0.38
Semi-detached       3.0    1.12    2.5    0.35
Embedded            3.6    1.20    2.5    0.32

Basic COCOMO is good for a quick estimate of software costs. However, it does not account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and so on.
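As a quick illustration, here is a minimal Python sketch of the three basic equations, using the original coefficient table above; the 33 KLOC project size is an arbitrary example value.

```python
# Basic COCOMO: effort, schedule, and staffing from program size alone.
# Coefficients (a_b, b_b, c_b, d_b) are the original values from the table above.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class):
    a_b, b_b, c_b, d_b = COEFFICIENTS[project_class]
    effort = a_b * kloc ** b_b      # man-months
    time = c_b * effort ** d_b      # months
    people = effort / time         # average staff count
    return effort, time, people

# Example: a 33 KLOC organic project (size chosen arbitrarily).
effort, time, people = basic_cocomo(33, "organic")
print(f"Effort: {effort:.1f} man-months, "
      f"schedule: {time:.1f} months, staff: {people:.1f}")
```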

Intermediate COCOMO 

Intermediate COCOMO computes software development effort as a function of program size and a set of “cost drivers” that include subjective assessments of product, hardware, personnel and project attributes. This extension considers four categories of cost drivers, each with a number of subsidiary attributes:

  • Product attributes
    • Required software reliability
    • Size of application database
    • Complexity of the product
  • Hardware attributes
    • Run-time performance constraints
    • Memory constraints
    • Volatility of the virtual machine environment
    • Required turnabout time
  • Personnel attributes
    • Analyst capability
    • Software engineering capability
    • Applications experience
    • Virtual machine experience
    • Programming language experience
  • Project attributes
    • Use of software tools
    • Application of software engineering methods
    • Required development schedule

Each of the 15 attributes receives a rating on a six-point scale that ranges from “very low” to “extra high” (in importance or value). Each rating corresponds to an effort multiplier. The product of all effort multipliers is the effort adjustment factor (EAF). Typical values for EAF range from 0.9 to 1.4.

The Intermediate COCOMO formula now takes the form:

E = a_i(KLoC)^(b_i)(EAF)

where E is the effort applied in person-months, KLoC is the estimated number of thousands of delivered lines of code for the project, and EAF is the factor calculated above. The coefficient a_i and the exponent b_i are given in the next table.

Software project    a_i    b_i
Organic             3.2    1.05
Semi-detached       3.0    1.12
Embedded            2.8    1.20
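As a sketch of how the EAF enters the calculation: the full multiplier table is not reproduced here, so the cost-driver multiplier values below are illustrative assumptions, not Boehm's published figures.

```python
from math import prod

# Intermediate COCOMO: E = a_i * (KLoC ** b_i) * EAF.
# Coefficients a_i, b_i from the table above.
COEFFICIENTS = {
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def intermediate_cocomo(kloc, project_class, multipliers):
    a_i, b_i = COEFFICIENTS[project_class]
    eaf = prod(multipliers)           # effort adjustment factor
    return a_i * kloc ** b_i * eaf    # person-months

# Hypothetical multipliers for three rated cost drivers (values assumed);
# all unlisted drivers are treated as nominal (multiplier 1.0).
multipliers = [1.15,  # required software reliability: high
               0.91,  # analyst capability: high
               1.08]  # memory constraints: high
print(f"{intermediate_cocomo(33, 'semi-detached', multipliers):.1f} person-months")
```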

Detailed COCOMO 

Detailed COCOMO incorporates all characteristics of the intermediate version with an assessment of the cost driver’s impact on each step (analysis, design, etc.) of the software engineering process.

The detailed model uses different effort multipliers for each cost driver attribute. These phase-sensitive effort multipliers are used to determine the amount of effort required to complete each phase. In Detailed COCOMO, the whole software is divided into modules, COCOMO is applied to each module to estimate its effort, and the module efforts are then summed (see the sketch after the phase list below).

In Detailed COCOMO, the effort is calculated as a function of program size and a set of cost drivers given for each phase of the software life cycle.

A detailed project schedule is never static.

The six phases of Detailed COCOMO are:

  • planning and requirements
  • system design
  • detailed design
  • module code and test
  • integration and test
  • cost constructive model
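A minimal sketch of the module-by-module approach described above: each module is estimated separately and the efforts are summed. The intermediate formula stands in for the full phase-sensitive model here, and the module names, sizes, and EAFs are illustrative assumptions.

```python
# Detailed COCOMO, simplified: estimate each module, then sum the efforts.
# The intermediate formula is used as a stand-in for the phase-sensitive
# multipliers; module names, sizes, and EAFs are assumed for illustration.
A_I, B_I = 3.2, 1.05                  # organic-mode coefficients

modules = [
    ("user interface", 8.0, 0.95),    # (name, KLOC, module EAF)
    ("business logic", 12.0, 1.10),
    ("persistence",    6.0, 1.00),
]

total = sum(A_I * kloc ** B_I * eaf for _name, kloc, eaf in modules)
print(f"Total estimated effort: {total:.1f} person-months")
```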

 

Putnam model

The Putnam model is an empirical software effort estimation model. As a group, empirical models work by collecting software project data (for example, effort and size) and fitting a curve to the data. Future effort estimates are made by supplying a size and calculating the associated effort from the equation that fitted the original data (usually with some error).

Equation 

While managing R&D projects for the Army and later at GE, Putnam noticed software staffing profiles followed the well-known Rayleigh distribution. 

Putnam used his observations about productivity levels to derive the software equation:

B^(1/3) · Size / Productivity = Effort^(1/3) · Time^(4/3)

where:

  • Size is the product size (whatever size estimate is used by your organization is appropriate). Putnam uses ESLOC (Effective Source Lines of Code) throughout his books.
  • B is a scaling factor and is a function of the project size. 
  • Productivity is the Process Productivity, the ability of a particular software organization to produce software of a given size at a particular defect rate.
  • Effort is the total effort applied to the project in person-years.
  • Time is the total schedule of the project in years.

In practical use, when making an estimate for a software task the software equation is solved for effort:

Effort = [Size / (Productivity · Time^(4/3))]^3 · B

An estimated software size at project completion and the organization's process productivity are used. Plotting effort as a function of time yields the Time-Effort Curve. The points along the curve represent the estimated total effort to complete the project at some time. One of the distinguishing features of the Putnam model is that total effort decreases as the time to complete the project is extended. This is normally represented in other parametric models with a schedule relaxation parameter.

[Figure: Time-Effort Curve]
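The sketch below evaluates the effort equation over a range of schedules; the size, process productivity, and B values are illustrative assumptions, but the shape of the result (effort falling steeply as time grows) is the model's characteristic behavior.

```python
# Putnam model: Effort = (Size / (Productivity * Time**(4/3)))**3 * B.
# Size, productivity, and B below are assumed for illustration.
SIZE = 100_000          # ESLOC
PRODUCTIVITY = 12_000   # process productivity (calibrated per organization)
B = 0.39                # scaling factor; depends on project size

def putnam_effort(time_years):
    return (SIZE / (PRODUCTIVITY * time_years ** (4 / 3))) ** 3 * B

# Stretching the schedule sharply reduces total effort (person-years).
for t in (1.5, 2.0, 2.5, 3.0):
    print(f"{t:.1f} years -> {putnam_effort(t):5.1f} person-years")
```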

This estimating method is fairly sensitive to uncertainty in both size and process productivity. Putnam advocates obtaining process productivity by calibration: 

Process Productivity = Size / ([Effort / B]^(1/3) · Time^(4/3))

Putnam makes a sharp distinction between “conventional productivity” (size divided by effort) and process productivity.
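Calibration inverts the same relationship: given size, effort, and time from a completed project (the historical figures below are assumed for illustration), process productivity can be recovered and reused in future estimates.

```python
# Calibrating process productivity from one completed project.
# The historical size/effort/time values are illustrative assumptions.
B = 0.39                # scaling factor (assumed, size-dependent)

def process_productivity(size, effort_person_years, time_years):
    return size / ((effort_person_years / B) ** (1 / 3) * time_years ** (4 / 3))

# A finished project: 75,000 ESLOC, 30 person-years, 2.2 calendar years.
pp = process_productivity(75_000, 30, 2.2)
print(f"Calibrated process productivity: {pp:,.0f}")
```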