How Much Failure?
Surveys of project “failure” rates have been going on for about 15 years. The 1994 Standish CHAOS Report is usually the first one cited. Since then, CHAOS Reports have come out every other year, and there is also data from sources such as Capers Jones, Computer Weekly, and KPMG. The CHAOS Reports tend to be the most widely quoted and are certainly the most regular reporting of such data. Unfortunately, reports from any of these sources cannot easily be verified, since their raw data, data sources, and methodologies are not generally available to other researchers and analysts.
A few years ago, Robert Glass [Glass] raised the question of just how much project failure there actually was. He noted the substantial reliance on the CHAOS Reports by many who discuss failure rates and criticized how people used those numbers. He did note, however, that the percentages of projects falling into the three CHAOS categories (i.e., cancelled, challenged, and successful) had been improving:

Year | Cancelled | Challenged | Successful
1994 |    31%    |    53%     |    16%
2000 |    23%    |    49%     |    28%
2006 |    19%    |    46%     |    35%

Things do seem to be getting better but, as Glass pointed out, these are “not figures to be proud of.”
Glass also makes a point about how “failure” is usually defined in such reports: large cost or schedule overruns, late discovery of quality problems, or cancellation (for any reason). A “functionally brilliant” project, Glass says, that “misses its cost or schedule targets by 10 percent” could be categorized as a failure. (A more recent examination of such data [El Emam] repeats many of Glass’ observations, including the improvement trend in project results.)
For the purposes of this series, I’d like to set aside questions of how much failure there is and where one would place any given project among the CHAOS categories. Indeed, my title is intended to do away with the whole concept of “failure,” not because I don’t believe it happens, but because, like Glass, I think it is a relative term whose meaning depends on your definition.
In discussing a Robbins-Gioia Survey (2001), IT Cortex [Cortex] says:
Project failure is not defined by objective criteria but by the perception of the respondents. The advantage of a perception is that it naturally integrates multiple aspects. Its obvious disadvantage is that it is inevitably partial: if the respondent has taken an active role in the project it will inevitably embellish the reality, whereas if the project has been "forced down his throat" he might cast a grimmer look at the project outcome.
Thus, I’m not here to tell you what you, or your management, or your organization, or your customer(s) should think constitutes “failure.”
I do want to note, though, that lists of potential project risks are even longer than lists of reasons for project failure. To all this, Capers Jones points out [Jones 96], “There are myriad ways to fail. … There are only a very few ways to succeed.” And though it is not mentioned directly in any of the lists of project failure reasons, Tim Lister points out [Lister] that, “The biggest risk an organization faces is lost opportunity, the failure to choose the right projects. So, value is every bit as important as cost (the plusses matter as much as the minuses) and your process for deciding what projects to do is more important than your process for how to do them.” But that’s a topic for another blog.
What I want to do in this series is list the things that typically challenge projects, increasing risk and tending toward less satisfactory results than might otherwise have been achieved. Since the various sources do not list problems in the same order, I won’t try to preserve any one source’s ordering. Instead, I have taken all the reasons given and categorized them in my own way. The frequency with which an issue is mentioned does offer some indication of its importance, so I am listing my categories in order of the overall frequency with which their issues are mentioned across the sources (a toy sketch of that tally-and-sort idea follows the list below). Briefly, these areas are:
• Requirements Related Issues
• Planning Related Issues
• Technology Related Issues
• Project Management Related Issues
• Estimation Related Issues
• Quality Related Issues
• Stakeholder Related Issues
• Risk Management Related Issues
• Management Related Issues
• People Related Issues
• Schedule Related Issues
• Communication Related Issues
• Resource Related Issues
• Lessons Learned Related Issues
• Process Related Issues
• Testing Related Issues
• Vendor Related Issues
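To be clear about the mechanics of that ordering: it is nothing more than a frequency count. Here is a minimal sketch in Python, using entirely hypothetical per-source mention lists (the real inputs are the failure-reason lists in the references at the end of this post):

```python
from collections import Counter

# Hypothetical mention lists for illustration only; in reality, each
# source's failure reasons were mapped by hand into the categories above.
mentions_by_source = {
    "Jones 96":  ["Requirements", "Planning", "Estimation", "Quality"],
    "McConnell": ["Planning", "Schedule", "People", "Requirements"],
    "Evans":     ["Project Management", "Requirements", "Stakeholder"],
}

# Tally how often each category appears across all the source lists,
# then order categories from most- to least-frequently mentioned.
tally = Counter(cat for cats in mentions_by_source.values() for cat in cats)
for category, count in tally.most_common():
    print(f"{category}: mentioned in {count} source list(s)")
```

The judgment, of course, is all in how the raw reasons get mapped into categories; the counting itself is trivial.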
Having addressed these problem categories briefly, I’ll point out what aspects of an Agile approach I believe could (help) minimize each of them.
Part 2 of this series will begin the discussion of the categories listed above.
References
- [Armour] Armour, Phillip G. “Twenty Percent,” Communications of the ACM, June 2007 (vol. 50, no. 6), pp. 21-23.
- [Cortex] IT Cortex. http://www.it-cortex.com/Stat_Failure_Rate.htm and http://www.it-cortex.com/Stat_Failure_Cause.htm
- [El Emam] El Emam, Khaled and A. Güneş Koru. “A Replicated Survey of IT Software Project Failures,” IEEE Software, Sept/Oct 2008 (vol. 25, no. 5), pp. 84-90.
- [Evans] Evans, Michael W., Alex M. Abela, and Thomas Beltz. “Seven Characteristics of Dysfunctional Software Projects,” Crosstalk (The Journal of Defense Software Engineering), April 2002, pp. 16-20.
- [Fairley] Fairley, Richard E. and Mary Jane Wilshire. “Why the Vasa Sank: 10 Problems and Some Antidotes for Software Projects,” IEEE Software, Mar/Apr 2003 (vol. 20, no. 2), pp. 18-25.
- [Glass] Glass, Robert L. “IT Failure Rates – 70% or 10-15%?” IEEE Software, May/June 2005 (vol. 22, no. 3), pp. 112, 110-111.
- [Jones 96] Jones, Capers. Patterns of Software Systems Failure and Success, International Thomson Computer Press, Boston, MA, 1996.
- [Jones 06] Jones, Capers. “Social and Technical Reasons for Software Project Failure,” Crosstalk (The Journal of Defense Software Engineering), June 2006, pp. 4-9.
- [Lister] Lister, Tim. From slides for a talk.
- [May] May, Lorin J. “Major Causes of Software Project Failures,” Crosstalk (The Journal of Defense Software Engineering), July 1998, pp. 9-12.
- [McConnell] McConnell, Steve. “The Nine Deadly Sins of Project Planning,” IEEE Software, Sept/Oct 2001 (vol. 18, no. 5), pp. 5-7.
- [Reifer] Reifer, Donald J. “Software Management’s Seven Deadly Sins,” IEEE Software, Mar/Apr 2001 (vol. 18, no. 2), pp. 12-15.
- [Rost] Rost, Johann. “Political Reasons for Failed Software Projects,” IEEE Software, Nov/Dec 2004 (vol. 21, no. 6), pp. 104, 102-103.
- [SD Times] Rubinstein, David. “Standish Group Report: There’s Less Development Chaos Today,” SD Times, March 1, 2007, http://www.sdtimes.com/content/article.aspx?ArticleID=30247.
- [SPC] Software Productivity Center, Inc. “Root Causes of the Most Common Project Problems,” http://www.spc.ca/resources/process/problems.htm
- [Standish 94] The Standish Group Report, 1995, http://www.cs.nmt.edu/~cs328/reading/Standish.pdf
- [Thayer] Thayer, Richard H., Arthur Pyster, and Roger C. Wood. “The Challenge for Software Engineering Project Management,” IEEE Computer, August 1980 (vol. 13, no. 8), pp. 51-59.