Coming from a more traditional quality background, I view the burndown chart as a simplified form of the classic control chart. The baseline is like the center line on a control chart. We have no upper or lower control limits since we are not doing statistical sampling, of course; we are tracking the actual data directly. But there are similarities between how a control chart is used and how we can view the actual iteration progress line relative to the baseline.
If the actual progress line hovers or fluctuates right around the baseline, the team is on track to complete the iteration goal. If the actual progress line stays consistently above the baseline, it could mean the team is not on course to complete their Sprint commitment. If it stays consistently below the baseline, it may mean the team is headed to complete their Sprint commitment somewhat early. Being not too far above or below the line is likely nothing to worry about if the trend is consistent; if a team’s ability to estimate and commit is effective, they should not be too far above or below the line, or stay there for too long.
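To make the comparison concrete, here is a minimal sketch of comparing actual remaining task hours against the ideal baseline. The iteration length, total hours, and daily actuals are all hypothetical numbers, not taken from any real team:

```python
# Minimal sketch: compare actual remaining task hours to the ideal baseline.
# All numbers below are hypothetical, for illustration only.

def ideal_baseline(total_hours: float, iteration_days: int) -> list[float]:
    """Ideal remaining hours at the end of each day, burning down linearly to zero."""
    return [total_hours * (1 - day / iteration_days) for day in range(iteration_days + 1)]

# Hypothetical ten-day iteration starting with 120 task hours.
baseline = ideal_baseline(120, 10)

# Actual remaining hours recorded at the end of each day so far (made-up data).
actual = [120, 116, 110, 100, 94, 85]

for day, (planned, remaining) in enumerate(zip(baseline, actual)):
    delta = remaining - planned          # positive: above the line (behind)
    status = "above" if delta > 0 else "below" if delta < 0 else "on"
    print(f"Day {day}: ideal {planned:5.1f}h, actual {remaining:5.1f}h -> {status} the baseline ({delta:+.1f}h)")
```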
On the other hand, being below the baseline and heading even further below it, or being above the baseline and heading even further above it, should be cause to consider taking some action (a small sketch for spotting these diverging trends follows the list):
- A progress line that is below the baseline and increasingly heading down means the team is ahead of schedule for completing tasks and getting further ahead. This may or may not be good news. If things are going very well during the iteration, the team might discuss with the customer/Product Owner/etc. the possibility of taking on more work. However, this pattern could also suggest some tasks are being skipped or having their estimates reduced. That ought to be looked into as well, to be sure everyone understands what the pattern means.
- A progress line that is above the baseline and increasingly heading up means the team is behind schedule for completing tasks and getting further behind. This is not good news, as this pattern suggests tasks are being added or growing beyond their estimates. This ought to be looked into to find out why work is not converging on the iteration goal.
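Here is one way such a widening gap could be detected, as a sketch only: the daily gap between actual and ideal remaining hours is checked for growing steadily in the same direction over the last few days. The function name, the three-day window, and the data are all assumptions made for the example:

```python
# Sketch: flag the two diverging patterns described above, i.e. the gap between
# actual and ideal remaining hours growing in the same direction for several days.
# The window size and data are hypothetical.

def diverging_trend(baseline: list[float], actual: list[float], days: int = 3) -> str:
    """Return 'behind', 'ahead', or 'steady' based on the last `days` daily gaps."""
    gaps = [a - b for a, b in zip(actual, baseline)]
    if len(gaps) < days + 1:
        return "steady"          # not enough history to call a trend
    recent = gaps[-(days + 1):]
    widening_up = all(later > earlier for earlier, later in zip(recent, recent[1:]))
    widening_down = all(later < earlier for earlier, later in zip(recent, recent[1:]))
    if widening_up and recent[-1] > 0:
        return "behind"          # above the baseline and heading further above
    if widening_down and recent[-1] < 0:
        return "ahead"           # below the baseline and heading further below
    return "steady"

# Using the hypothetical data from the earlier sketch:
print(diverging_trend([120, 108, 96, 84, 72, 60], [120, 116, 110, 100, 94, 85]))  # -> "behind"
```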
Earlier I said the task burndown chart is not the complete story and used the word “helps.” Like a control chart, the burndown chart is an indicator of whether further consideration needs to be given to team progress. Because of the statistical nature of control charts, certain kinds of deviation from the center line, in either direction, are reasons to investigate their cause. The assumption is that the baseline represents the expected results of sampling the production process; deviations either way could be good or bad news, but, in either case, they are cause to look deeper into what is happening.
The same goes for the burndown chart, but there is certainly more to know about iteration progress, since completing task hours does not, by itself, mean completing stories, which is the iteration goal. One could be completing many hours of task effort, and even be below the baseline, without having completed a single story. This can happen if a mini-Waterfall is occurring within the iteration, with testing tasks bunching up toward the end.
One thing a couple of teams I’ve worked with have done is to put story completion targets along their baseline, then note when each story actually gets completed. This gives both a daily progress indication based on the tasks and an iteration goal indication based on when stories are completed. If teams size stories effectively, a story should be completed at least every few days.
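A sketch of that idea might look like the following, where each story has a target day along the baseline and an actual completion day noted against it. The story names, target days, and current day are hypothetical:

```python
# Sketch: story completion targets along the baseline vs. actual completion days.
# Story names, days, and the current day are made-up illustration data.

# Planned day by which each story should complete, and the actual day it did
# (None means not yet completed).
story_targets = {"Story A": 3, "Story B": 5, "Story C": 8, "Story D": 10}
story_actuals = {"Story A": 3, "Story B": 6, "Story C": None, "Story D": None}

today = 7  # current day of the iteration

for story, target in story_targets.items():
    actual = story_actuals.get(story)
    if actual is not None:
        note = "on target" if actual <= target else f"late by {actual - target} day(s)"
    elif today > target:
        note = "overdue"
    else:
        note = "pending"
    print(f"{story}: target day {target}, {note}")
```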
Now, teams whose members communicate well usually know all these things without the chart. But the visual representation of what the team “knows” is a useful reminder and lets people outside the team see what is happening. The visibility and transparency provided by the burndown chart are important and, for me, are its basic value, since it offers everyone the opportunity to understand iteration progress and discuss it objectively.