Sunday, November 8, 2009

Burndown & Control Charts

The teams I’ve worked with have used burndown charts to track task hours remaining during their iterations. For them, the burndown baseline represents the optimal pace a team would need to be on to complete all work for the iteration. Assuming, of course, that all the work contributes to completing committed stories, the burndown chart helps indicate how well the team is doing in meeting the iteration goal. I say “helps” as the burndown is not the entire truth. (Some teams have tracked story points in a burndown, but, as that produces a stair-step chart, most teams have used a task hours chart for their iterations. Later I’ll mention how some teams track story completion as well.)
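
To make the baseline idea concrete, here is a minimal sketch in Python; the committed hours, iteration length, and the day-by-day "remaining" figures are entirely hypothetical, not data from any of the teams mentioned.

```python
# Minimal sketch: a straight-line task-hours baseline for one iteration,
# compared with the hours reported as remaining at each daily stand-up.
# All numbers here are hypothetical.

committed_hours = 120          # total task hours committed at planning
iteration_days = 10            # working days in the iteration

# Ideal baseline: a straight line from committed_hours down to zero.
baseline = [committed_hours * (1 - day / iteration_days)
            for day in range(iteration_days + 1)]

# Actual hours remaining, as reported so far (hypothetical readings).
actual = [120, 116, 110, 101, 95, 88, 80]

for day, remaining in enumerate(actual):
    delta = remaining - baseline[day]
    print(f"Day {day}: remaining={remaining:5.1f}  "
          f"baseline={baseline[day]:5.1f}  delta={delta:+6.1f}")
```

A positive delta means the actual line sits above the baseline (more hours remain than the optimal pace would allow); a negative delta means it sits below.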

Coming originally from a more traditional quality background, I view the burndown chart as a simplistic form of a classic control chart. The baseline is like the central line on a control chart. We have no upper or lower control limits, of course, since we are not doing statistical sampling; we are tracking the complete, actual data. But there are similarities between how a control chart is used and how we can view the actual iteration progress line compared to the baseline.

If the actual progress line hovers or fluctuates right around the baseline, the team is on track to complete the iteration goal. If the actual progress line stays consistently above the baseline, it could mean the team is not headed toward completing their Sprint commitment. If the actual progress line stays consistently below the baseline, it may mean the team is headed toward completing their Sprint commitment somewhat early. However, staying a little above or below the line is likely nothing to worry about if the trend is consistent. If a team’s ability to estimate and commit is effective, they should not be too far (or too long) above or below the line.

On the other hand, being below the baseline and heading even further below it, or being above the baseline and heading even further above it, should be cause to consider taking some action (a small sketch of reading these patterns follows the list):

  • A progress line that is below the baseline and increasingly headed down means the team is ahead of their schedule for completing tasks and getting further ahead. This may or may not be good news. If things are going very well during the iteration, the team might discuss with the customer/Product Owner/etc. the possibility of taking on more work. However, this pattern could suggest some tasks being skipped or downgraded in time. That ought to be looked into as well to be sure everyone understands what the pattern means.
  • A progress line that is above the baseline and increasingly headed up means the team is behind their schedule for completing tasks and getting further behind. This is not good news as this pattern suggests some tasks being added or increased in time. This ought to be looked into to find out why work is not converging on the iteration goal.
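
Here is one way, sketched in Python with hypothetical readings and a hypothetical tolerance, to read those two patterns from the last few daily differences between actual hours remaining and the baseline:

```python
# Minimal sketch: classify progress from the daily deltas, where
# deltas[i] = actual hours remaining on day i minus the baseline for day i.
# The tolerance and the sample readings are hypothetical.

def classify_progress(deltas, tolerance=4.0):
    if len(deltas) < 3:
        return "too early to tell"
    recent = deltas[-3:]                        # the last few daily readings
    widening = abs(recent[-1]) > abs(recent[0])
    if all(d > tolerance for d in recent):
        return ("behind and falling further behind: find out why work "
                "is not converging" if widening
                else "behind, but holding steady")
    if all(d < -tolerance for d in recent):
        return ("ahead and getting further ahead: consider more work, "
                "or check for skipped tasks" if widening
                else "ahead, but holding steady")
    return "hovering around the baseline: on track"

# Hypothetical case where the gap above the baseline keeps growing.
print(classify_progress([1.0, 3.5, 6.0, 9.5, 14.0]))
```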

Earlier I said the task burndown chart is not the complete story and used the word “helps.” Like a control chart, the burndown chart is an indicator of whether or not further consideration needs to be given to team progress. Because of the statistical nature of control charts, certain kinds of deviations from the center line, in either direction, are reasons to investigate the cause of that deviation. The assumption is that the center line represents the expected results of sampling the production process, and deviations either way could be good or bad news but, in either case, are cause to look deeper into what is happening.

The same goes for the burndown chart, but there is certainly more to know about iteration progress, since completion of task hours does not, by itself, mean completion of stories, which is the iteration goal. One could be completing many hours of task effort, even be below the baseline, and not have completed a single story. This can happen if a mini-Waterfall is occurring in the iteration, with testing tasks bunching toward the end.

One thing a couple teams I’ve worked with have done is to put story completion targets along their baseline, then note when each story actually gets completed. This gives both a daily progress indication based on the tasks and an iteration goal indication based on when stories show completion. If teams size stories effectively, a story should be getting completed every few days, at least.
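
As an illustration of that idea (the stories, point sizes, and completion days below are hypothetical), target completion days can be spaced along the baseline in proportion to the committed story points and then compared with the days stories actually finish:

```python
# Minimal sketch: story completion targets spaced along the iteration in
# proportion to story points, compared with actual completion days.
# All stories, sizes, and days here are hypothetical.

iteration_days = 10
stories = {"Story A": 3, "Story B": 5, "Story C": 2, "Story D": 3}  # points
actually_done = {"Story A": 4, "Story B": 7}   # day each story was accepted

total_points = sum(stories.values())
points_so_far = 0
for name, points in stories.items():
    points_so_far += points
    target_day = round(iteration_days * points_so_far / total_points)
    done = actually_done.get(name, "not yet done")
    print(f"{name}: target day {target_day}, actually done: {done}")
```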

Now teams that are communicating well among their members usually know all these things without the chart. But the visual representation of what the team “knows” is a useful reminder and lets people outside the team see what is happening. The visibility/transparency provided by the burndown chart is important and, for me, is its basic value since it offers everyone the opportunity to understand iteration progress and discuss it objectively.

7 comments:

  1. As a tool provider, we get a lot of pressure to add an "ideal line" to the Sprint Burndown Chart. From what I've seen people doing, this would cause more harm than good. I believe an ideal line adds a subtle pressure to fudge a bit, and that people will tend to do this without being aware of it. A lot of people still expect a smooth predictable progression even though they're doing complex, creative work.

    --mj

  2. I agree with Michael James about the "ideal line."

    Also, I've found using task hours to be less than reliable. It's so easy to fool yourself as to how much work is left in a task (or how many tasks left in a story) when you think you know the answer from previous estimates. It's easy to get to the point where you've done 90% of the work, but the remaining 10% takes 90% of the time. (That's an o-l-d engineering joke.) It's better to track the growth of functionality that can be shown to work or to not work.

    My article on burn charts that was printed in Better Software magazine talks about some of the things you can read in a Sprint burndown chart. See http://bit.ly/ZL3Dp

  3. If fudging goes on, then it would seem the team fudges as a team, unless people are off doing their "own" work rather invisibly. I have seen that latter issue and baseline or no baseline wouldn't have mattered much there.

    I've always encouraged teams to view the baseline as a way to consider their own estimation accuracy. If people are having a tough time estimating accurately for 2-4 hour chunks of time, then that's an opportunity to get better at estimating in that narrow time-frame. The old engineering joke should not be an Agile reality. If teams are finding it easy to fall into that pattern, I would focus them more on what is affecting the estimates so they become more realistic.

    Now management misusing/misunderstanding burndown charts is another matter, and I tend to spend more time on this topic with them than with teams.

    George... In your article, you say "if the rate of progress is at all consistent, then we can easily predict when the work will be finished." So it sounds like there is an implicit trend line that is used to see if this remains true. Using task hours or story points, there is still an expected trend from upper left to lower right. So, okay, don't draw the actual baseline, but there is some implicit expected slope in the line against which to assess progress.

    While I agree the goal is to produce software, not estimates, making a commitment to produce a certain amount of work each iteration is an estimate. I have found that, after a while, when teams get comfortable in an iteration rhythm, less detailed tracking certainly works. But I do not agree that estimation, in general, is not worth worrying about.

    (I did like your article, though.)

  4. Scott, yes, the commitment to an iteration is an estimate. All the more reason, in my opinion, to avoid practices that tempt us to fool ourselves.

    Drawing the baseline is one such tool, in my experience. It exerts a subtle but constant pressure to view the reality in a way that matches our prediction. We do that to reduce our cognitive dissonance--it being easier to adjust our view of the situation than to question our initial estimate.

    If we don't draw it, it's easier to eyeball the trendline and use it to see where we'll hit the baseline. With it, it's easier to see our daily variance from "ideal." Only it's not really ideal; it's just a straight line.

    I'm not just worried about management pressure placed on the team, but the pressure the team places on itself. That pressure can result in inaccurate estimates not being visible until later than otherwise, or in corners being cut to "catch up" with the estimate. I try very hard to remove such pressures.

  5. Part of my preference for omitting ideal lines is the realization that self-deception is the *main* way we humans operate, not the exception. The ScrumMaster's responsibilities include helping people be more honest with themselves. For an unshackled team, I'm likely to see the line go *up* before it goes down. (Grab me at AYE and I'll show you my favorite example.) This is perfectly fine, as the team should commit to PBIs/stories, not task hours, at the Sprint Planning Meeting.

    --mj

  6. Michael and George,

    All I can say is that it appears I have been lucky, or blessed with unusually sophisticated teams. The problems of dissonance and self-deception you both describe have not been characteristics of those teams. Some inability to judge their capability to complete work and deliver in small increments has been. But the reality of short iterations, working closely with a PO/customer, and transparency regarding just about everything associated with the work makes it relatively easy to improve on that quickly. It has taken about 3 iterations or so for this to happen.

    Now the problem has usually been that there is less patience outside the team for this period of growth/experience to occur.

    I would say that, because I have worked a good bit with organizations using offshore/onshore contract resources in distributed teams, getting the contract organizations on board with agile practices has been harder than getting the contracting organization on board. For example, I have heard contract developers judged by how few defects they generate, with the claim that this is the only way to gauge developer capability. And, for contracted testing staff, the problem has been measuring them by the defects they find. You can imagine what that does when both are on the same project.

    So I have had far bigger problems to help teams deal with than their misunderstanding/misuse of burndown baselines.

  7. Thanks for sharing such a wonderful post!!
