“Fail fast” is a commonly heard idea in Agile development: if there is some doubt about a design or implementation approach, try something and find out quickly whether it works. That way, you don’t wait a long time to discover there is going to be a problem.
Now at the coding level, this can be tried without too much resistance from others. But what about this approach in a larger context? On Twitter, one of the ASQ (American Society for Quality) daily quality quotes came from Charles Knight: “You need the ability to fail. You cannot innovate unless you are willing to accept mistakes.” The question is, at what level can failure be accepted?
With a coding spike, you’d refactor or try something different and, hopefully, find an approach that would work. You may even leave yourself some slack in the iteration commitment for just such trial-and-error. A similar application of slack is usually recommended when accepting stories into the iteration. This is normally done by committing to only about 75-80% of the available person-days for everyone on the team, keeping the rest as a contingency. Failure here may be a story (or two). The goal would be to discuss why this happened at the retrospective and work to eliminate the causes going forward.
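As a back-of-the-envelope illustration of the contingency idea above, a team can plan against only 75-80% of its raw capacity. This is a hypothetical sketch (the function name and numbers are mine, not from the post):

```python
def iteration_capacity(team_size, iteration_days, commitment_level=0.8):
    """Person-days a team commits to, leaving slack for trial-and-error.

    commitment_level is the 75-80% fraction discussed above.
    """
    raw_person_days = team_size * iteration_days
    return raw_person_days * commitment_level

# Example: a 6-person team in a two-week (10 working-day) iteration
# has 60 raw person-days but commits to only ~45 of them.
planned = iteration_capacity(6, 10, commitment_level=0.75)
print(planned)  # 45.0
```

The uncommitted person-days absorb spikes, rework, and the inevitable surprises, so a failed experiment costs a story or two rather than the iteration.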
These kinds of failures could be absorbed and viewed as something to be expected. What about failure of an entire iteration, in particular, one in which few or no stories were completed? Could that sort of failure be absorbed? Probably not well into an organization’s release effort. But what about at the very beginning? What expectation for successful iteration performance exists when an organization first starts out?
I think it is very important to employ a “fail fast” approach to the entire adoption effort. Failing fast is about learning, of course, not really failing in the sense of “nothing achieved.” So I usually recommend to an organization that they start with, at most, two-week iterations. In this way, teams gain experience with every aspect of iteration behavior and can repeat this a few times – three at least – to get used to the rhythm of an Agile approach.
This is not to say there would be acceptance for delivering no stories, but everyone involved should agree that the learning which goes on in these early iterations is as important as delivery of stories. Identification and elimination of impediments is very important at this early stage in adoption, so effective retrospectives are crucial.
The goal, of course, is to get better and better at iteration delivery capability, but this is achieved not by quotas, “win one for the Gipper” attitudes, etc. It’s achieved by people on, and related to, the teams seeing how Agile Values and Principles, as well as a specific method’s practices and techniques, will function in the organization’s environment. This will lead, hopefully, to adjustments in that environment that allow all these Agile concepts to become understood, accepted, and practiced by everyone.
Small, co-located teams might be able to begin with one-week iterations. But I believe even larger (up to 10-15 people) teams that are distributed can benefit from sticking to no more than two weeks. As long as everyone involved understands the importance of the learning that will go on and works to apply that learning each iteration, I believe two weeks is advisable. After the teams have a good feeling for how individuals (and teams) interact (including with management), what estimation and delivery commitment makes sense, and what technology support they can reasonably expect, iteration length could be increased.
I do not believe these early iterations should be considered “practice” efforts, though. That may be what they are at one level, but the results should not be treated as throw-aways. That is, the delivered functionality should be held to production-level quality expectations and expected to be the basis for future iteration work. That the early iterations may produce very few such production-level results should not be the concern, however.
Finally, it’s important not to set these early iterations up as a “failure” cushion, i.e., to allow teams to think they can afford not to do the best they can. The same commitment and accountability should be expected of them as would be expected later on; however, besides evidence of delivered functionality, evidence of important lessons learned should be considered a valuable result at this point.
It is likely everyone associated with the Agile adoption will learn things, not just those delivering functionality. If everyone feels everyone else is, indeed, developing important experience in Agile concepts and behaviors, I believe early “failures” of functionality delivery can be handled positively and without “panic.”
What are your experiences with early team “failures” and the response to them?