In Making it Big in Software I discuss 11 reasons why software projects run late (see Chapter 13). In this post I’ll mention just one – my favorite. It’s my favorite because it’s ubiquitous, subtle, and fundamentally about programmer psychology.
Software developers are notoriously poor judges of how long things will take. Fred Brooks (author of The Mythical Man-Month) found that the following rule of thumb allocates time for software development with reasonable accuracy: 1/3 specification and design, 1/6 programming, 1/4 function and integration testing, and 1/4 system test. Over the past 30 years those ratios have held true (of course, they vary from project to project). Over the past ~15 years, the more mature teams in our industry have finally come to terms with the reality that testing is half the effort of a software development cycle, and that’s probably the biggest reason project success rates have been improving. Testing, in this context, is the entire test cycle, which includes both the effort to plan and execute the tests and the effort to fix the problems it surfaces. We still have major problems on the front end, and these are the areas that most significantly damage individual careers. Why is that happening?
For a wide range of emotional reasons, software developers refuse to believe that coding represents only 1/6 of the software development cycle. They refuse to believe this because software programming is fun, and it’s what they want to spend their time on. It’s just too depressing to believe that the very thing they jump out of bed excited to do, the very thing they planned to spend their professional careers on, represents so little of what they will actually be doing.
Programmers like to believe that programming is relatively hard (which it is) and that other tasks, such as testing and writing, are comparatively easy (highly questionable). So when you ask developers how long a piece of work will take, they are thinking predominantly about the coding effort. However, what the project manager is usually asking for is an estimate of the total effort, including specification, design, and testing – or at least all the work up until testing. One tactic to avoid these issues is to only let programmers estimate what they are good at estimating: the coding effort. The ratios are well-known and hold true across a wide range of projects. If you can get a reasonable estimate of the coding effort from the engineering team, you can extrapolate easily to determine the time for design, specification, and testing. The coding estimates are usually the most accurate sizing a programmer will produce:
Design and spec = 2 x coding estimate
Function and integration test = 1.5 x coding estimate
System test = 1.5 x coding estimate
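The multipliers above can be applied mechanically. As a minimal sketch (the function name and week-based units are my own, purely for illustration):

```python
def extrapolate_schedule(coding_weeks: float) -> dict:
    """Extrapolate a full-cycle estimate from a coding estimate,
    using the rule-of-thumb multipliers above."""
    estimate = {
        "design_and_spec": 2.0 * coding_weeks,              # 2 x coding
        "coding": coding_weeks,
        "function_and_integration_test": 1.5 * coding_weeks,  # 1.5 x coding
        "system_test": 1.5 * coding_weeks,                    # 1.5 x coding
    }
    estimate["total"] = sum(estimate.values())
    return estimate

# A 4-week coding estimate implies a 24-week total cycle:
# 8 + 4 + 6 + 6 = 24, so coding is 1/6 of the total,
# consistent with Brooks's ratios.
print(extrapolate_schedule(4))
```

Note that the multipliers reproduce Brooks's allocation exactly: coding comes out to 1/6 of the total, design and spec to 1/3, and the two test phases to 1/4 each.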
These guidelines are simplistic, and, of course, well-known exceptions exist. Features that add performance value (speed) but otherwise leave externals unchanged require performance quality assurance but functionally may require little testing beyond regression testing (to ensure that what used to work still works). Dependencies between features under development always add delays as teams spend more time collaborating to understand a broader set of requirements. Although I don’t recommend applying these guidelines naïvely to all projects, they make a good starting point for most. Then you can revise the estimates for the outliers as a (sometimes major) refinement.
Whether you let the programmer estimate some, most, or all of the required work, it’s worthwhile to sanity check the estimates they provide to make sure they’re in a reasonable ballpark. If testing is less than 25% of the total, or design and specification are less than 15%, it should raise a large red flag for you.
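That sanity check is easy to automate. A minimal sketch, assuming the effort estimates come in as three numbers in the same unit (the function name and thresholds-as-constants are my own framing of the red flags above):

```python
def sanity_check(design_spec: float, coding: float, testing: float) -> list:
    """Return a list of red flags for an effort breakdown whose
    proportions fall outside the rule-of-thumb thresholds."""
    total = design_spec + coding + testing
    flags = []
    if testing / total < 0.25:
        flags.append("testing is under 25% of total effort")
    if design_spec / total < 0.15:
        flags.append("design and spec are under 15% of total effort")
    return flags

# A coding-dominated estimate trips both flags:
# total = 15, testing = 20%, design and spec ~13%.
print(sanity_check(design_spec=2, coding=10, testing=3))

# A Brooks-shaped estimate (1/3 design, 1/6 coding, 1/2 testing) passes.
print(sanity_check(design_spec=8, coding=4, testing=12))
```

The first call illustrates the failure mode the post describes: a developer who estimated mostly the coding effort produces a breakdown where testing and design are implausibly thin.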
Chapter 13 of Making it Big in Software: Get the job. Work the org. Become great. has the full story, along with the other 10 causes for software development overruns and how to handle them.