Wednesday, September 30, 2009

Common Project Risks and Agile Mitigations, Part 1, Introduction

There’s a lot of literature on software project failure rates and reasons. The intent of this series is to identify the various reasons cited for project problems and suggest how elements of an Agile approach might minimize the risk of them occurring.

How Much Failure?

Surveys of project “failure” rates have been going on for about 15 years. The 1994 Standish CHAOS Report is usually the first one cited in such discussions. Since then, CHAOS Reports have come out every other year, and there is data from sources such as Capers Jones, Computer Weekly, and KPMG. The CHAOS Reports tend to be the most widely quoted and certainly the most regular reporting of such data. Unfortunately, reports from any of these sources cannot easily be verified, since their raw data, data sources, and methodologies are not generally available to other researchers/analysts.

A few years ago, Robert Glass [Glass] raised the question of just how much project failure there actually was. He noted the substantial reliance on the CHAOS Reports by many who discuss failure rates and criticized how people used those numbers. However, he did note that the three categories in the CHAOS Reports (i.e., cancelled, challenged, and successful) had seen improvement in the percentages of projects falling into each. In 1994, the reported rates were 31%, 53%, and 16%; in 2000: 23%, 49%, and 28%; in 2006: 19%, 46%, and 35%. Things clearly seem to be getting better, but, as Glass pointed out, they are “not figures to be proud of.”
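For a quick arithmetic check of that trend, here is a small Python sketch using only the figures quoted above, showing each category’s shift in percentage points from the 1994 baseline:

```python
# CHAOS Report percentages cited above: (cancelled, challenged, successful).
chaos = {1994: (31, 53, 16), 2000: (23, 49, 28), 2006: (19, 46, 35)}

# Percentage-point change from the 1994 baseline for each category.
base = chaos[1994]
for year in (2000, 2006):
    deltas = [now - then for now, then in zip(chaos[year], base)]
    print(year, dict(zip(("cancelled", "challenged", "successful"), deltas)))
# 2000: cancelled -8, challenged -4, successful +12
# 2006: cancelled -12, challenged -7, successful +19
```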

Glass also makes a point about how “failure” is usually defined in such reports. Project failure means large cost or schedule overruns, late discovery of quality problems, or cancellation (for any reason). A “functionally brilliant” project, Glass says, that “misses its cost or schedule targets by 10 percent” could be categorized as a failure. (A recent examination of such data [El Emam] repeats many of Glass’ observations, including the improvement trend in project results.)

For the purposes of this series, I’d like to set aside the question of how much failure there is and where one would place any given project among the CHAOS categories. Indeed, my title is intended to do away with the whole concept of “failure,” not because I don’t believe it happens, but because, like Glass, I think it is a relative term depending on your definition.

In discussing a Robbins-Gioia Survey (2001), IT Cortex [Cortex] says:

Project failure is not defined by objective criteria but by the perception of the respondents. The advantage of a perception is that it naturally integrates multiple aspects. Its obvious disadvantage is that it is inevitably partial: if the respondent has taken an active role in the project it will inevitably embellish the reality, whereas if the project has been "forced down his throat" he might cast a grimmer look at the project outcome.

Thus, I’m not here to tell you what you, or your management, or your organization, or your customer(s) should think constitutes “failure."

I do want to note, though, that lists of potential project risks are even larger than lists of project failure reasons. To all this, Capers Jones points out [Jones 96], “There are myriad ways to fail. … There are only a very few ways to succeed.” And though it is not mentioned directly in any of the lists of project failure reasons, Tim Lister points out [Lister] that, “The biggest risk an organization faces is lost opportunity, the failure to choose the right projects. So, value is every bit as important as cost (the plusses matter as much as the minuses) and your process for deciding what projects to do is more important than your process for how to do them.” But that’s a topic for another blog.

What I want to do in this series is list the things that typically challenge projects, increasing risk and tending toward less satisfactory results than might otherwise have been achieved. Since the various sources do not list problems in the same order, I won’t try to preserve any one source’s ordering. Instead, I have taken all the reasons given and categorized them in my own way. The frequency with which things are mentioned does offer some indication of their importance, so I am listing the problems, by my categories, in order of the overall frequency with which issues are mentioned across the sources (a small sketch after the list illustrates the tallying). Briefly, these areas are:

• Requirements Related Issues
• Planning Related Issues
• Technology Related Issues
• Project Management Related Issues
• Estimation Related Issues
• Quality Related Issues
• Stakeholder Related Issues
• Risk Management Related Issues
• Management Related Issues
• People Related Issues
• Schedule Related Issues
• Communication Related Issues
• Resource Related Issues
• Lessons Learned Related Issues
• Process Related Issues
• Testing Related Issues
• Vendor Related Issues
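To illustrate the kind of tallying behind that ordering, here is a minimal Python sketch. The source-to-category assignments in it are invented purely for illustration; they are not my actual tally:

```python
from collections import Counter

# Hypothetical (invented) example of which problem categories each source
# mentions; the real tally covers all the references listed below.
mentions_by_source = {
    "Jones 06": ["Requirements", "Planning", "Estimation", "Quality"],
    "May": ["Requirements", "Stakeholder", "Planning"],
    "McConnell": ["Planning", "Estimation", "Requirements"],
    "Reifer": ["Project Management", "Planning", "People"],
}

# Count how many times each category is mentioned, most-mentioned first.
counts = Counter(cat for cats in mentions_by_source.values() for cat in cats)
for category, n in counts.most_common():
    print(f"{category}: mentioned {n} time(s)")
```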

Having addressed these problem categories briefly, I’ll point out what aspects of an Agile approach I believe could (help) minimize each of them.

Part 2 of this series will begin the discussion of the categories listed above.

References
  1. [Armour] Armour, Phillip G. “Twenty Percent,” Communications of the ACM, June 2007 (vol. 50, no. 6), pp. 21-23.
  2. [Cortex] IT Cortex. http://www.it-cortex.com/Stat_Failure_Rate.htm and http://www.it-cortex.com/Stat_Failure_Cause.htm
  3. [El Emam] El Emam, Khaled and A. Güneş Koru. “A Replicated Survey of IT Software Project Failures,” IEEE Software, Sept/Oct 2008 (vol. 25, no. 5), pp. 84-90.
  4. [Evans] Evans, Michael W., Alex M. Abela, and Thomas Beltz. “Seven Characteristics of Dysfunctional Software Projects,” Crosstalk (The Journal of Defense Software Engineering), April 2002, pp. 16-20.
  5. [Fairley] Fairley, Richard E. and Mary Jane Willshire. “Why the Vasa Sank: 10 Problems and Some Antidotes for Software Projects,” IEEE Software, Mar/Apr 2003 (vol. 20, no. 2), pp. 18-25.
  6. [Glass] Glass, Robert L. “IT Failure Rates – 70% or 10-15%?” IEEE Software, May/June 2005 (vol. 22, no. 3), pp. 112, 110-111.
  7. [Jones 96] Jones, Capers. Patterns of Software Systems Failure and Success, International Thomson Computer Press, Boston, MA, 1996.
  8. [Jones 06] Jones, Capers. “Social and Technical Reasons for Software Project Failure,” Crosstalk (The Journal of Defense Software Engineering), June 2006, pp. 4-9.
  9. [Lister] Lister, Tim. From slides from a talk.
  10. [May] May, Lorin J. “Major Causes of Software Project Failures,” Crosstalk (The Journal of Defense Software Engineering), July 1998, pp. 9-12.
  11. [McConnell] McConnell, Steve. “The Nine Deadly Sins of Project Planning,” IEEE Software, Sept/Oct 2001 (vol. 18, no. 5), pp. 5-7.
  12. [Reifer] Reifer, Donald J. “Software Management’s Seven Deadly Sins,” IEEE Software, Mar/Apr 2001 (vol. 18, no. 2), pp. 12-15.
  13. [Rost] Rost, Johann. “Political Reasons for Failed Software Projects,” IEEE Software, Nov/Dec 2004 (vol. 21, no. 6), pp. 104, 102-103.
  14. [SD Times] Rubinstein, David. “Standish Group Report: There’s Less Development Chaos Today,” SD Times, March 1, 2007, http://www.sdtimes.com/content/article.aspx?ArticleID=30247.
  15. [SPC] Software Productivity Center, Inc. “Root Causes Of The Most Common Project Problems,” http://www.spc.ca/resources/process/problems.htm
  16. [Standish 94] The Standish Group. The CHAOS Report (1994), http://www.cs.nmt.edu/~cs328/reading/Standish.pdf
  17. [Thayer] Thayer, Richard H., Arthur Pyster, and Roger C. Wood. “The Challenge for Software Engineering Project Management,” IEEE Computer, August 1980 (vol. 13, no. 8), pp. 51-59.

Wednesday, September 23, 2009

What If Software Quality Got (Lots) Better?


I have been reading various things about testing and the QA role on Agile teams, and I believe that reading prompted this question the other day: just what would we do, or how would things be different, in software development, if the overall quality of software increased by a lot? There are a number of aspects to how one could answer this question.

Presumably, relationships with customers would improve significantly, and people might be inclined to spend more money on software expecting that they would not be as disappointed in the results. Note that I did say “as disappointed,” since other things beyond raw quality would affect a customer’s view of the results. For most, improving “quality” would go beyond defects, but let’s just stick with defects for now. Let’s assume the results include acceptable usability and coverage of functionality, and that what we are concerned with is failures of the software to perform that functionality. I don’t want to get too hung up on a definition of “quality” at this point.

Of course, another question is just how large the improvement I am thinking of would be (i.e., what does “lots” mean)? That’s one of the things I’m going to leave up to those who comment on this post. What would the change have to be for you to think it is significant compared to today?

One thing that I would expect to be different would be some change(s) in process. Indeed, such change(s) would likely be required to even get to “lots.” What change(s) would you see occurring to get there or as a consequence of having gotten there?

My next thought was, what would happen to our view of the role of test staff? Would they just be able to do more comprehensive testing (since more software would work correctly at a basic checking/testing level)? Would it mean they could do different testing than is required when confidence in the likely quality is lower? Or would your definition of “lots” mean so few defects would exist that (at least some portion of) test staff would be moved to some other kind(s) of work, either related to quality control or quality assurance, or to an actual career change/move?

Finally, and this is certainly not anything new to think about, what would we need to make a big enough improvement in quality to call it “lots”? Now there are many things being done today related to quality improvement, but what new thing(s) might have to occur to make a leap to “lots”? Would it be just more diligent practice of what we already know and (try to) do? Or is there something significant in the nature of how we create and check/test software that is needed?

I’d be most interested in what people think.

Sunday, September 20, 2009

The Problem Solving PM and Agile

I had been thinking about this topic and making notes periodically. Then, earlier today, I was reviewing a chapter from a book (in progress) on coaching. Some of what it says reminded me of what a well-regarded Project Manager may face in becoming involved with an Agile effort. The chapter describes, among other things, the need for Agile coaches to restrain themselves from immediately trying to solve team problems.

Now the chapter seems to be written mostly from the perspective of how a coach would view their own behavior, having been a “go to” person when problems needed solving. Such problem-solving skill is something valued in project management. Indeed, success in solving problems helps make a Project Manager valuable and well-respected. So what happens when someone whose problem-solving, fix-it ability helps define their organizational value becomes involved with an Agile team/project?

One thought I have heard expressed is that, of many traditional roles on projects, PMs can be good candidates to become ScrumMasters or Agile coaches. The book chapter deals very nicely with advice to a person finding themselves in such a role when it comes to their own behavior. It provides good examples of occasions when a coach might be tempted to try to fix a problem right away based on their experience and their view of how the issue should be addressed. The “take it to the team” theme is covered nicely in the chapter.

That addresses the inward-looking aspect of the “fix-it” approach. What about the outward-looking one, that is, how those around and above the new coach in the organizational hierarchy view them? If part of a coach’s organizational value has been based on their problem-solving initiative, what happens when they begin not to dive in and solve problems themselves but, in most cases, work to help the team(s) do so? Will they be viewed as failing to meet expectations, even “dogging it” in their new role, by not having ready answers when issues arise in the team or are brought to them (or the team) from outside?

One of the things a person moving into a coach role needs to realize is that they have to coach in both directions. Coaching outwardly to the rest of the organization seems to me to be at least as hard, if not harder, than coaching a team. Presumably, teams are expecting to get coaching and, if trained reasonably effectively in Agile concepts, have heard that they, as a team, will be expected to identify impediments and do what they can, first, to address them before expecting others to do so. So a coach that holds back a bit and tries to point the team toward their own solution is probably not going to be looked upon too strangely by the team.

What about coaches who have been viewed as the “fix it” people and now must deal with people used to bringing them problems from outside the team? Now the problems are really the team’s to address; hopefully the coach is aware of them (and so is the team). If not, that’s an issue the coach should be addressing. But what if the problems aren’t team issues but organizational ones thrown their way, which will impinge on the new Agile approach, yet which others expect the coach and team to “adapt” to somehow and “fix”? (By the way, I have heard folks latch on to the “adapt” word and assume it means the team figures out how to deal with whatever gets thrown at them.)

What will it be like for the new coach to have to coach those who were previously their peers or managers? When those folks, used to bringing the coach problems to solve, find the coach using an Agile approach with them, it may be a shock initially. This may be more true if the other people are not expecting to be involved in any Agile process. There may even be resentment and a feeling that the new coach is trying to “manage” them. (I know there’s a lot of material on “managing your boss,” etc., but the openness of an Agile approach seems somewhat more direct than much of this advice often suggests.)

Most of my experience in coaching has involved being brought in from the outside, so I have had no prior PM fix-it expectations about my role. However, I will say that one thing that has helped me when I have coached other coaches who are in this position has been to work at least as much with the management and peers of these other coaches to help them understand the Agile approach. This includes the roles and responsibilities of teams and coaches (and management) as well as how issues and impediments are handled.

In particular, I try to get peers and managers to understand two things:

1) to get the most value out of a team’s effort, those outside the team should be prepared to become involved in more problem-solving (i.e., impediment clearing) than they may have previously expected, so teams can focus on meeting iteration functionality expectations;

2) an important goal of an Agile approach is to build teams that can effectively solve their own problems (but not all the organizational ones thrown at them), developing larger numbers of people who are, in fact, problem-solvers.

Indeed, this last point is a key reason the coach does not jump in, even if they can solve the problem. Having the team do it is usually valuable experience for a team learning to manage its own environment (to the extent it can). Agile or otherwise, relying on single individuals to be “problem-solvers” may mean you set up a number of single points of failure should such people leave, transfer, etc.

None of what I have said, though, should be interpreted to mean that having individuals who can “get things done” is a bad thing. But my belief is that an Agile approach hopes such people will, if they do not already, come to achieve this through the teams. This can multiply the number of people who develop problem-solving skills, spreading those skills further through the organization. A benefit of taking an Agile approach is uncovering/developing people who are, after all, the kind companies say they want in the first place. Agile helps do that, but only if teams are given the chance to learn it. The temptation to find the “fix it” people and, in the name of efficiency, expect them to get it done faster, rather than developing teams who can, will at the very least retard Agile’s contribution to organizational improvement/growth and may, indeed, kill it.

Thursday, September 17, 2009

Yet More Quotes from Twitter

A third installment of interesting things I've captured from Twitter (including some of my own), said either individually or as part of a discussion thread, again sorted by first name:

Alistair Cockburn (not from Twitter but mentioned on it) - Management tells the workers to mutiny. The workers refuse. (A koan for agile development)

Andrew Meyer - The People, Process and Technology triangle, to me, is about balancing a path to successful execution of a project. Value is about determining if a project should be done.

Andrew Morgan - Be fearless with your day. Show up bold. Show up on purpose. Love every minute of life today. (Repeat daily until death)

Ben Rady - The first rule of productivity is accepting the fact that there are valuable things that you will just not have time for.

Benjamin Mitchell - Challenge of #kanban in s/w dev is to manage overlapping activities (analysis, dev) that may start work with partial information. Vasco Duarte - That's the exact same challenge that all processes face. #kanban is no exception. In #waterfall that was just ignored. #agile Benjamin Mitchell - Good point. It's also a difference when applying 'Lean' approaches to Prod Dev and Manufacturing. Goal of Prod Dev is to generate info. (which requires variation) to assist in making best future economic decisions so 'rework' can be ok. Goal of manufacturing is to produce within acceptable variation. Rework is generally waste. So depending on your view of s/w dev (manufact or prod dev) you'll take a diff view of managing 'rework'. My #kanban board surfaces this.

Bob Marshall - Is there any chance we can forego the "waterfall" epithet in favour of the more descriptive, objective, term "batch and queue"?

Bob Martin - to be a professional, you must not rush. keep your code clean. So clean it barely needs comments.

Brian Button - Is the most important skill in facilitating retrospective the ability to hear what isn't being said?

Dan Whelan – The Fifth Discipline and #systemsthinking has me thinking that agile/lean focuses too much on short-term value. Need to build learning orgs.

David Anderson - I find risk management is poorly executed. This is why I am going after it next as a root cause of #agile adoption failure. J.B. Rainsberger - Do you find risk management a sizable bottleneck? From here, it looks like not focusing on value limits everything. David Anderson - value is a complex notion. Board room wants predictable ROI more than revenue or profit maximization. if you ask "would you like more value?" answer: "yes". if you ask "would you like more value at risk of less predictable outcome?" answer: "No!" #agile appears to give less predictable outcome as traditional gives (false) impression of predictability. Hence #agile == risky! to remove this impediment we must show that #agile manages risk better than traditional to deliver more predictable outcome. J.B. Rainsberger - For most teams, most of the time, volatility of the cost of features depends on the design. Improve the design, less volatile. David Anderson - if you are using design to include analysis classification and architecture then i agree with you. Lean product design uses analysis classification & product architecture to design for variability/volatility. J.B. Rainsberger - I consider "architecture" simply to be design-in-the-large, and I don't know the term "analysis classification". I more mean that the crappier the design, the more the marginal cost of a feature depends on the state of the design.... Dan Whelan - I think of architecture as design decisions that are expensive to change. David Anderson - agree architecture is design-in-the-large. good analysis can classify functionality into areas of the design.

Deborah Hartmann Preuss - Toyota competency levels: Assisted, Independent, Can make changes, Coach. #AgileOpen Thx! I like those levels!

Declan Whelan – I've had negative comments from mgrs and teams be de-motivated by unrealistic diagonal lines on burn-down charts. I do see the value of line in highlighting continuous flow. I find the shape of real actual curve sufficient for this, i.e. focus team on making curve flatter rather than having curve track to some ideal line. Deborah Hartmann Preuss - I have teams reflect on their own sprint "signatures" over time - improving? repeating same mistakes? Aim is consistency. I find that sprint burndown w/o task board is much less useful (& less used). If had to choose: task board. Lisa Crispin - I'm not a fan of burndown charts. Prefer task or kanban type board, visually you can see what tasks/types remain to be done. Scott Duncan - "focus team on making curve flatter" Flatter relative to what? Wouldn't that be some baseline even if it isn't drawn on the graph? "unrealistic diagonal line" I spend time coaching/training on what lines means, use of burndown, etc. Haven't had the issue. Declan Whelan - Flatter in that tangent of curve remains constant - i.e. only relative to itself. Perhaps splitting hairs ;).

Elizabeth Hendrickson – (On exercises about learning to learn) Simple & meta: have groups work together to solve a puzzle of some kind & debrief how they learned?

Elizabeth Hendrickson - Many have said this before. I'll say it again anyway: Source code, like inventory, is a liability, not an asset. Dave Rooney - Not sure I agree. Source code that hasn't been shipped to production is indeed inventory. Afterwards, it's documentation. Brian Foote - Tell us more. It would follow then that you feel that reusing source is like reusing diapers, noble sounding, but impractical. Chet Hendrickson - Good code can be a capital resource. 1 that allows us to build things of value. But it must be treated as one otherwise... Elizabeth Hendrickson - Consider: mgr insists on keeping 1/2-done feats in code base despite drag on productivity b/c they're "too valuable" to toss. Chet Hendrickson - reminds me of my grandfather's box of broken electric drills. He didn't need a new one, he had 5 already. I don't think it's all that audacious. Consider inventory as liability from a manufacturing perspective: http://bit.ly/YvQ1d.

Hillel Glazer - Consultant <> Contractor. Ensure you're being hired as a consultant, not a contractor, are you there to create outcomes or outputs?

James Bach - The way to kill curiosity is to *force* other people to learn what you think they will someday need to know.

John C. Maxwell - People change when they: HURT enuf that they have to; LEARN enuf that they want to; & RECEIVE enuf that they're able to.

Joshua Kerievsky - Low-grade sausages contain stuff you don't want to know about just like low-quality software.

Kathy Sierra - "self-promotion" need not be literal. Want people to see you're good at X? Promote "learning X" or "others I helped do X" or simply BE... X

Kathy Sierra - Clarification on word "useful": it does NOT rule out entertainment or tweeting your lunch menu. Making my day even .0001% better? Useful.

Kathy Sierra - Good directions to your house include how to know I'm on the right road and how to recognize when I'm not (& what to do about THAT).

Lao-tzu (via Bob McNeal) - Different perception?... What the caterpillar calls the end, the rest of the world calls a butterfly.

Larry Weidel -There are LAWS, PRINCIPLES, and PREFERENCES. LAWS are ALWAYS true. PRINCIPLES are USUALLY true. PREFERENCES are up 2 U.

Michael Bolton - @cory_foy "OH: 'We just need to go faster'" No, no! Work smarter! No, no! Work harder!

Michael Bolton - If the last question is "Am I okay with having no more questions?", then you're done. If you're not okay with that, you're not.

Michael Bolton - Testing is what you do when designing a check, interpreting the result of a check, and learning. Spell checkers *help you test* spelling. J.B. Rainsberger - Here's a clue: automated "tests" replaced the error-prone "guru checks output" anti-pattern. The clue is right there. Michael Bolton - Not always. You DO need a guru (programmer, tester, business person) to design and interpret the results of the check. James Marcus Bach - Non-sapient work can, but not necessarily should be performed by computer. Checking is a poor substitute for testing. In general, when checking substitutes for testing, bad outcomes happen. But it has its place. Regression CHECKING != regression TESTING. Latter /investigates risk/ of change-related failure, requires *new* tests. Michael Bolton - You can call unit tests and ATDD "tests" if you like; I don't mind. But if you only *check* your product, you may not be *testing* it well. Testing is explorative (probing) & learning oriented. Checking is confirmative (verification & validation of what we know).

Michael Bolton - The problem with maturity models: they assess "maturity" based on conformance, instead of independence and adaptability.

Michael Bolton "Don't mistake requirements document for the requirement. Don't mistake process manual for the process".

Michelle Sliger - The importance of facing the truth and saying Yes 2 reality, then no 2 denial.

Mitch Kapor - "Is the good enough the enemy of the barely ok?" Brian Foote - Nope. The barely ok is the enemy of everything.

Paul Dyson (via William W. (Woody) Williams) - "Scrum Masters remove impediments, Project Managers prevent them occurring."

Serge Beaumont - Agile is techniques at the Shu level, a framework at the Ha level, and a culture at the Ri level.

Skip Angel - Coaching = Not afraid 2 pull off band-aid of workarounds 2 expose infection of problems under surface. May hurt but needed 4 org's health!

Steve Keating - When we throw mud at others, not only do we lose a lot of ground, we also get our hands dirty.

Steven Keating - People don't buy your product for the value you put into it. They buy it for the value they get out of it. Sell that way!

Steven Keating - People quit leaders not companies. If you lose employees you don't have an employee problem, you have a leadership problem.

Tanmay Vora - One of the biggest challenges for "Human Resources" is to do justice to "Human" part of it and not get into routine policies/processes!

Tim Ottinger - An expert is a person who knows what all the mistakes look like, and that you don't have to make them.

Vadim Zaytsev - Optimist: the glass is half full. Pessimist: the glass is half empty. Engineer: the glass is twice the required size. Tim Ottinger - _COST_ACCOUNTANT_: glass is too large, Engineer: glass has 100% safety margin.

Wednesday, September 16, 2009

My Thoughts on Certification (and some related topics)

Recently, there has been increased talk in the Agile community about certifications, pro and con, though it appears mostly the latter. It has also been noted that the IEEE Computer Society will work on an exam for state licensing of software engineers. Since I have had some experience/contact with certifications (e.g., CSQE, CSM/CSP, PMP, CSDP, my wife's PA-C), I have some idea as to how they work and feel that any certification effort should be able to explain how it addresses some common certification characteristics.

What It Means to “Certify”

First, though, it seems important to define what it means to certify something or somebody, as there are implications just in this definition. Common definitions for “certify” related to professional credentialing include:
  1. a declaration by some individual, group, or organization (i.e., the certifier) that
  2. some other individual, group, or organization possesses/has demonstrated some
  3. quality, characteristic, knowledge, ability, skill, or combination of these.
Thus, it requires a certifier to grant the certification to an applicant. Of course, the value of any certification depends on the credibility of the certifier, e.g., a professional association of some sort. For this reason, there are even certifiers who certify other certifiers. For example, auditing firms (“registrars”) are themselves audited by certification bodies. These latter bodies attest to the fact that a registrar conducts audits according to some standard, usually created by a Standards Development Organization such as ISO.

So, for any certification of any kind to have any meaning/value whatsoever, there must be trust and confidence in the certifier. What has to be trusted is that the certifier has actually been able to confirm that the individual, group, or organization being certified has met the criteria established for the certification. There are a few ways in which this trust can be established:
  1. the certification criteria are public and easy to understand (at least by those familiar with the scope of the certification);
  2. the certifier states and can demonstrate how they verify that the criteria have been met;
  3. those certified are recognized as actually possessing the quality, characteristic, knowledge, ability, skill, or combination covered by the certification.
In the latter case, this means that no significant question/evidence is raised after the fact suggesting that those certified do not, in fact, meet the criteria, e.g., that people certified to some skill/craft capability actually cannot apply that skill/craft as defined by the certification scope and criteria.

Scope

As mentioned above, one important matter is the scope covered by the certification.

The scope really defines what it is that is being certified. This is often defined by what is known as a “body of knowledge” which represents the domain covered by the certification. Some certifications have “guides” or outlines to perform such scope definition since actual bodies of knowledge are usually considered to be the entire corpus of published material available for the domain (e.g., the Project Management Institute’s PMBoK or the IEEE Computer Society’s SWEBoK for software engineering).

For certain very narrow certifications, a body of knowledge might be contained within some single, perhaps large, published source. But, one way or the other, a scope and body of knowledge need to be defined and made publicly available so others can assess just what the scope of a certification would mean.

Another aspect of scope is whether the certification is largely based on demonstration of knowledge alone or on application of that knowledge. This is where a number of people complain about some (levels of) certification that currently exist in the software field: they can be achieved mostly by passing a test (plus perhaps some minimal years of employment). Any professional certification (from medicine through hairdressing) usually involves some actual demonstration, before already certified professionals, that a person can apply the knowledge that may have been demonstrated through a test (or series of them).

Controversy will arise because folks will disagree on the boundaries of that scope, especially in a skills-based certification. For example, the SWEBoK and ASQ’s Certified Software Quality Engineer BoK both note that there are important areas relative to being an effective employee and/or professional that are not covered by their BoKs. They even admit there is technical and domain knowledge their exams don’t cover that can and/or will matter in given work situations. So you cannot cover everything that could conceivably be important in working in a given situation.

Finally, it would also be necessary to indicate whether there are levels of certification, e.g., entry, experienced, expert, or some such scale. Each level could also be established as an independent certification on its own, of course. But it will need to be part of the definition of scope. Having different levels may help address the difference between demonstrating knowledge and applying it.

Nonetheless, it is important to be clear as to what scope any certification would cover.

Criteria

While not without controversy and not automatically simple to do, it can be much easier to define the scope than to actually certify someone against it. Therefore, a second important matter is defining the criteria to be met to become certified. Different certifications have different criteria, but most professional certifications contain some form of the following types of criteria:
  1. Evidence of (a) training, such as industry courses or formal school classes, (b) an educational degree based on some curriculum of classes having been completed, and/or (c) experience gained, often under supervision of already certified individuals.
  2. Examination/testing independent of that associated with educational achievement, since people in most fields can get educational credentials from many sources. Developing (and maintaining) such exams takes substantial time and effort.
  3. Possible observational input about the applicant from existing, certified holders of the certification. Not all certifications do this; however, most “professional” ones do.
The extent of any or all of these criteria may depend on the level of certification as well.

Now, item 2 is where most controversy exists in current software-based certifications. Most tests turn out to be multiple-choice efforts and focus on factual knowledge, not application of that knowledge. I do not see how the latter can be avoided if a test is to be of real value. Of course, multiple test types could be used for multiple certification levels, e.g., a “trainee” exam could be largely or completely knowledge-based while more experienced levels carry more applied expectations.

The key here is what out of all the Body of Knowledge should be covered in any test and how answers can be made "objective." Of course, even “objective” tests are “subjective” in that they represent someone’s opinion about what someone else should know. Can this even be avoided? And in judgment-based professions, can answers even be “objective” except in very narrow, technical ways?

The observational component of 3 could be like residency in medicine. Someone gets through medical school (which requires no small practical demonstration of ability) but is then observed by those who are already MDs. That is, for some period of time, applicants must be under the observation of, or otherwise associated with, someone who is already fully certified to practice the profession independently. Engineering has a somewhat different model, as not everyone graduating from an engineering school ends up being licensed as a Professional Engineer. But, in one sense, this is not dramatically different from there being other medical professionals certified as something other than a full MD, e.g., various nursing licenses or Physician Assistants.

Validating Scope & Criteria

Having determined what the scope and criteria will be, the next step is to validate them. This requires that there be openness in the definition of, and rationale for, the scope and criteria, and review by the professional community. The latter would be people who are currently believed to represent those who would deserve to be certified, i.e., expected to be able to meet all the criteria. (For an update to an existing certification, this would be people already certified under the prior criteria.)

The job of these people is to ensure the criteria are truly relevant to the scope the certification claims to cover such that most people who pass deserve to do so and few people who deserve to pass end up failing to do so. This can be done through peer review of the certification materials and knowledge base as well as these peer reviewers taking sample exams and assessing whether the results suggest the tests effectively cover the scope.
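As a crude sketch of how that review might be quantified (the names and results below are entirely hypothetical), one could tabulate sample-exam outcomes for peer reviewers, all of whom are presumed to deserve certification, so every failure is a false one:

```python
# Hypothetical sample-exam results for peer reviewers, each presumed to
# deserve certification; any "False" is therefore a false failure.
results = {"Reviewer A": True, "Reviewer B": True, "Reviewer C": False,
           "Reviewer D": True, "Reviewer E": True}

passed = sum(results.values())
false_fail_rate = (len(results) - passed) / len(results)
print(f"{passed}/{len(results)} qualified reviewers passed "
      f"({false_fail_rate:.0%} falsely failed)")
```

A high false-failure rate among people already believed to deserve certification would suggest the exam does not effectively cover the scope.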

Verifying Criteria are Met

After this is done, the next step is to actually carry out the certification effort and verify whether the criteria are met by individual applicants or not. Concerns here are:
  1. How is training, education and/or experience to be verified? Copies of certificates of class attendance? School Transcripts? Letters from institutions?
  2. What pass/fail rate will exams have? Must 100% of all answers, results, etc. be “right,” or is some rate less than 100% okay? And what is the correct less-than-100% rate?
  3. What kind of “objectivity” can be brought to bear on observational input? Multiple inputs rather than just one observer’s? Indeed, can this be (and do we want) purely “objective” input for this criterion? Is any input beyond the body of knowledge relevant for observational input?
For example, PMI actually audits some portion of applications on point 1.

Ongoing Certification Responsibility

Once an individual is certified, the question remains how long that certification is considered valid. For some certifications, there is yearly recertification required. Others require it every 2, 3, 5, or 6 years. This can take the form of accumulating continuing educational credits, demonstrating continuous practice in the profession, as well as, at one of the longer periods of time, taking another examination. All this needs to be defined as part of the (re)certification criteria so applicants, and others, will know what ongoing expectations exist.

------------------------------------------------------------------------------

There is a lot to consider, and not all of it may be possible or make sense at this point. But I think these ideas cover most of what passes for meeting "certification" requirements in most fields. However, some other topics deserve a bit of mention as they are definitely related to certification.

Where Licensing Fits In

Licensing is a government activity. It is where some governmental authority grants some individual, group, or organization a “right” of some sort. For example, in the USA, states license people to practice law, medicine, etc. This is usually a pro forma event, accompanied by licensing fees paid to the state, given that the state trusts the certifier. For example, in the USA, this would be the AMA for doctors or the ABA for lawyers. Licensing is not an issue I want to discuss here in any depth. I felt it was important, though, to make the distinction, since not all certifications necessarily lead to licensing concerns. One criterion for when they do is whether the health, welfare, or safety of the public can be affected by activities covered under the certification. The IEEE effort noted above is occurring because matters of software development in areas such as aerospace, medical devices, financial transactions, and the like can have significant impact on the public.

Training Separated from Certification

For true objectivity and ethical reasons, a certification body should not mandate/control its own training/education as the only source to meet its own criteria. In any certification program, money will always end up having to be involved, one way or the other. This is why a certification body should not have some vested interest in consulting/training related to the certification.

The same idea applies to a company that would both perform audits to some standard and supply training/consulting to that same standard. What are the odds that, if you pay for the latter, you're likely to fail the former? At least, that's the conflict of interest question. Can a group claim to effectively train/consult with a company on some standard, then turn around and fail them because they followed the training/consulting? You get the idea. Most audit certification bodies look dimly on registrars that have too close a relationship with training/consulting firms (or arms of the registrar firm).

Value of Certification: Who Cares?

Certainly, those who would make use of the products/services of organizations employing/using certified individuals could care about how effectively certification of those individuals is conducted or, indeed, whether certification exists at all. Individuals might care about certification as a distinguishing characteristic for themselves compared to those who are not certified. Naturally, as noted above, the credibility of the certification will matter. As some note, people with too many certifications may raise questions with potential clients/employers, who may wonder how valid such certifications are if a person can maintain so many of them concurrently.

If the goal of certification is to ensure (a loaded word itself) people can be (not even "are") more effective in a given domain covered by the certification scope, perhaps there are other ways to make people more effective rather than worrying about how to judge if they are? Not that the latter isn't important, just that the former might be easier to accomplish, i.e., designing/promoting excellence in education/training could be easier than excellence in certification.

[And as a final note, almost everything said above regarding certification of individuals could apply to assessment programs related to, for example, “how agile” individuals or organizations are.]

Tuesday, September 15, 2009

So why the plural?

Last month, I noted that I had started a blog over 2 years ago that was cut very short. I reposted a somewhat revised version of one of the posts. Here's the other initial post, also somewhat revised, explaining why I use the plural form, "Qualities."

--------------------------------------------------------------------

A lot of websites and blogs use the singular and, at the time, I was fishing around for something that wasn't already in use. One day, I was also working on something related to the ISO standard (ISO 9126) on software product quality characteristics. At one point I was writing something along the lines of "the non-functional qualities against which software can be compared" and, then, later, "these software qualities." That was even longer ago, and I had carried the idea of using "software qualities" for a website ever since.

Since I mentioned ISO 9126, and since it was the "inspiration" for the title of the original blog (and then this one), let me say a few things about this standard in case you are not familiar with it. It addresses what are usually called categories of "non-functional requirements" related to software, which are often known, for short, as the "ilities" because of how many of their names end, e.g., maintainability, reliability, portability, etc.

Now you can certainly argue with how they are organized or what terms are used -- I've collected a list of over 100 such terms that I use on one slide in one presentation I give just to point out that the names are not the really important thing. However, having some sort of "model" of quality attributes seems to me to be quite valuable in considering how you plan to achieve quality in software.

Very often, non-functional requirements do find their way into formal requirements specifications, but not always. Sometimes they come up in less formal discussion. Sometimes, and this is the dangerous part, they do not come up at all as they are assumed by a customer. Well, they do come up, but usually after the customer gets their first substantive look at or experience with the software and they say, "What do you mean it doesn't....?"

Being explicit about such characteristics when the requirements are being discussed can avoid a lot of anger and frustration (not to mention cost) later on. So having a "model" that addresses the kinds of non-functional characteristics important to a specific product (or release) is a way to address them in an organized fashion. It can also demonstrate some proactiveness on your part as a software developer.
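As a minimal sketch of what using such a model in a requirements discussion might look like, here is ISO 9126's top level turned into a simple checklist. The prompt questions are my own illustrative wording, not the standard's:

```python
# ISO 9126's six top-level quality characteristics, each paired with an
# illustrative discussion prompt (the prompts are mine, not the standard's).
quality_model = {
    "Functionality": "Does it do what the customer needs, including security?",
    "Reliability": "How often may it fail, and how must it recover?",
    "Usability": "How easily can the intended users learn and operate it?",
    "Efficiency": "What response-time and resource limits apply?",
    "Maintainability": "How easily must we be able to change and re-test it?",
    "Portability": "Which environments must it install and run in?",
}

# Walk the model so no assumed "ility" goes undiscussed.
for characteristic, prompt in quality_model.items():
    print(f"{characteristic}: {prompt}")
```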

--------------------------------------------------------------------

That's what I said back then and it's still behind my thinking these days, though, as I noted in my first blog post, I've adopted a more Kano model approach to the "Qualities" ideas which covers functional and non-functional requirements expectations.

Monday, September 14, 2009

There is No Definition of Agile – Hooey!

Periodically, I’m at meetings or conferences and hear someone saying something like: “There is no definition of ‘Agile’. It’s a bunch of practices and techniques and methods, but no real definition exists.” To that, I always say, “Hooey!”

Well, I don’t use that word. Instead I point out that before the Snowbird meeting in 2001, there was no definition for "Agile" with the capital "A". The Values in the Manifesto that came out of that meeting and the formulation of the Principles that followed shortly thereafter are, to me, what define "Agile." Matters of method vs practice vs technique are another issue, but the Values & Principles are what represent a definition to me.

Now you may say these are too vague. Indeed, I have had folks tell me they (or their clients) consider the Manifesto to be “content-free.” That is, the four Values are statements that hardly anyone could disagree with as generally desirable, but they have no substantive meaning. When I ask further about this, it usually traces back to the Values & Principles not saying how to do anything, i.e., no specific practices or techniques are included.

Now, lots of definitions don’t tell you how to do what the word(s) define. But the better explanation is simply that the Manifesto came together based on the common beliefs of many practicing individuals – each with their own (often related or similar) practices – about how software should be developed. As Alistair Cockburn has noted, this means Agile has no “center”: “There is no ‘center’ to agile development. There’s only proximity of similar-but-different personal value systems coincidentally producing similar recommendations.”

This does not, in my estimation, mean the Values and Principles do not serve to “define” Agile. They are, to me, a touchstone against which to measure what is done, not a description of how to do it. Practices and behaviors that more closely adhere to the Values and Principles are more “Agile” than those that drift further from them. Indeed, my belief in this is why one of my first posts on this blog was entitled “Agile Training - Values & Principles Are Essential”.

[A Side-Note on Agile Assessment Models --

It’s also for this reason that I believe any form of assessment model related to an organization’s readiness for or practice of an Agile approach must use the Values and Principles as the top-level concerns. If you are familiar with the (CMM®/)CMMI® model, you know there are (Key) Process Areas, followed by Goals, followed by Practices. From an Agile perspective, the Values and Principles would be, in effect, the (K)PAs. The various methods’ practices and techniques would be the Practice level, where what is stated are recommended, not required, ways to achieve the goals. What is missing in Agile is a clear statement of the goal level.

In performing an assessment, an organization would show that they achieve the Goals by following some identifiable practices/techniques. The Goals would need to be satisfied, but a variety of practices might be used to do so, though a given assessment model could offer examples of practices and techniques that would be considered effective in doing so.

I mention this because there are efforts to define and use such Agile assessment models already, and I am not sure they can work as well as hoped without such a set of Goals having been clearly defined. In some instances, it appears models have established practices as goals, i.e., if you do this or that practice, you get some sort of “points” for it. I would hope model creators spend time coming up with a clear statement of the Goal level.
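To make that layering concrete, here is a minimal sketch of how such a model might be structured. Every goal and practice named in it is invented for illustration; as I said, clearly defining the real Goal level is exactly the open problem.

```python
# Hypothetical layering: a Manifesto value on top, goals beneath it, and
# example (not required) practices that could satisfy each goal.
assessment_model = {
    "Customer collaboration over contract negotiation": {
        "Customer feedback shapes each iteration": [
            "On-site customer", "Sprint review with stakeholders"],
        "Priorities are revisited regularly": [
            "Backlog refinement with the product owner"],
    },
}

def assess(model, observed_practices):
    """Mark a goal satisfied if any example practice was observed; a real
    assessment would accept other practices as evidence, too."""
    for value, goals in model.items():
        print(value)
        for goal, examples in goals.items():
            met = any(p in observed_practices for p in examples)
            print(f"  Goal: {goal} -> {'satisfied' if met else 'not shown'}")

assess(assessment_model, {"Sprint review with stakeholders"})
```

The point of the structure is that the Goals, not any particular practices, are what an assessment would require.]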

Saturday, September 12, 2009

Why Organizations May be Uncomfortable Getting the Kind of Employees They Say They Want

Organizations often say they want individuals to have the characteristics Agile attributes to self-managing/organizing teams, so why do organizations sometimes doubt/resist teams behaving that way? To me, an Agile approach would seem to encourage developing exactly the kinds of people organizations claim they want: empowered, skilled, motivated, responsible, concerned with quality, responsive to customers, etc.

I asked about this on Twitter. Here are some of the responses:

@skirk Because they (the resistors) don't understand the changes involved and are afraid. Change has to come from within.

@madwilliamflint Because in those instances organizations want 'self motivated' but NOT 'independent' which they see as unpredictable. Last thing they want is what they see as "rogue programmers." They mean "works w/o badgering."

@estherderby Managers lack knowledge of how to set appropriate boundaries & negotiate explicit decision-making authority, which is part of the reason they freak out. no boundaries --> unpredictable behavior.

@tottinge I think they want self-SUPERVISING members, not self-MANAGING members. They confuse the terms.

@YvesHanoulle I want my kids also to have an opinion and be independent. I hope to raise them so that they end up with a strong personality. But in the morning rush, when we are juggling to get them on time to school and ourself to work, it does not feel like the right time for me that they practice that. Managers also want their teams to become self-organizational because they heard it is better. But if they never had the experience of such a team, the first time feels very frustrating. Just like how I feel in the morning with my kids.

One idea I think came through the discussion was that statements about the kind of employees desired are more corporate in nature while resistance often comes individually. That is, what is said collectively and what people are comfortable doing individually are different things. It’s also likely that, while there is a desire for more initiative from workers, there are also concerns over when that occurs, e.g., a general license to make decisions about everything isn't desired.

I’m reminded of an Agile 2009 session I attended on “Boundary, Authority, Role and Tasks” which you might want to read.

What do you think are reasons why what Agile offers as potential people development might not be accepted readily?

Monday, September 7, 2009

An Accountability "Scale" and Agile Teams

Over the weekend, I found some things from many years ago regarding "rules" for corporate survival. One of them was a list of phrases related to being accountable for one's actions and situation (and, at the other end of the "scale," acting the part of a victim). Now "motivational" posters have never been popular with me, but this list was interesting mainly because of the "scale" it represented. The list was seen on the office wall of one of the owning companies of a company I worked for at the time.

Make It Happen
Find Solutions
"Own It"
Acknowledge Reality
--- [Accountability starts above this point.] ---
Wait and Hope
--- [Being a "victim" starts below this point.] ---
"I can't" Excuses
Blame Others
Unaware and/or Unconscious

Again, the theory was that there was some motivational value in posting a list like this with its implied "scale" from worst (at the bottom) to best (at the top) attitude toward accountability.

So I offer it, not as motivational material, but perhaps as something useful in thinking about individual and team accountability postures as well as something around which an interesting discussion could occur. I know that this worked, in some instances, many years ago when I (and others) first saw and heard of it.

Of course, to do so means getting beyond the platitudinous responses, i.e., agreeing with the best and rejecting the worst along the scale. Then, there can be a useful discussion of the reasons an agile team (or members of the team) could/would adopt each of the positions along the scale.

And, years ago, one group of people where I worked created their own poster on a magnetic surface, put little pictures of each of themselves on individual magnets, and placed their magnets where they felt that day, moving them during the day depending on circumstances. Seeing where other people placed their magnets was a kind of invitation for others to ask them about why they were feeling that way. They did not keep this up for a long time, but it did open up people to talk more about how they were feeling and why.

Friday, September 4, 2009

More Quotes from Twitter

Back on August 20th, I posted a long list of quotes from things people have posted through Twitter. Today, because I can tell it's going to be a lazy day for me, and for no other reason, I'm posting a second, shorter, set of such Twitter material. Again, listed in alphabetical order by first names and with some things by me.

?? (can’t remember where I heard it) – “If you fence people in, they become sheep.”

Alistair Cockburn - "Plans are great till they meet people"

Angelo Anolin - No matter how much bashing waterfall method gets, it has its share of successes prior to agile. Scott Duncan - Re: bashing waterfall & its "successes prior to agile" - 'course agile really suggests it isn't the method we should rely on.

anonymous (via Armond Mehrabian) - When there is an elephant in the room, introduce him.

Bob Marshall - Adopt the perspective that all sw development is a subset of product development and learn from PD folks like Reinertsen.

Bob Marshall - Limit work-in-process - not just development work but all work. Get Little’s Law working for you rather than against.

Bob Marshall - When I say (or think) "I have an idea" what I mean is "please look at the world again, from this new perspective, folks".

Brian Marick - Greater ease requires first moving in the direction of less ease. Shame, that.

Dave Rooney - Easier to make something small bigger, but not opposite. Scott Duncan - Yes...one of my main methodology pts! And also encourages "what do I need to do better" thinking as opposed to "what do I want to throw out or get away without doing". Dave Rooney - Absolutely! By nature we're methodology "pack rats", loathe to get rid of any artifact or procedure "in case we need it"!

Dave Rooney - Yup - that's my business model!! :) Beginners are in the "shu" stage - I get them to "ha" and they fire me when they attain "ri".

David Anderson - @flowchainsensei the problem with Deming is folks refuse to believe that he's relevant to the knowledge work century. :-S Bob Marshall - @agilemanager True. Although I think that opinion's ltd to those (few) who've ever heard of him :-S And fewer want to own ways of working

Dee Hock (via Wally Bock)- "Haste never made time and waste never made abundance."

Dr. Seuss (via Mike Cottmeyer) - "Be who you are and say what you feel because those who mind don't matter and those who matter don't mind.”

Earl Everett - Scrum is not the name of the 'ball game', it's rugby. Played well, rugby is a very agile game, mentally & physically. Scott Duncan - True, but once you're asked where the word comes from, Rugby gets dragged into the discussion, unfortunately. But your point is good in that I think we should move the conversation from game to qualities as you describe. Earl Everett - Also, rugby is a highly collaborative and fluid game, and leadership comes from different people at different times. Scott Duncan - Unlike many sports we get to see in the USA, Rugby is very much a team game. Even more so than football/soccer, I think.

Esther Derby - having leadership strengths != being "the leader." teams need _leadership_ which can come from diff ppl at dif times.

Esther Derby - Myths about pay: Labor rate = labor cost. Labor rate is easy to count; labor cost, not always so easy. Definitely not the same thing.

George Dinwiddie - Carpenters don't argue whether a hammer or saw is the better tool. Why Lean vs Agile vs Kanban? Goal is to build a house. Tim Ottinger - Carpenters do argue about Case v. Cat v. Deere v. Holland. Also, saw v. hammer is clear-cut. flow v. pulse is harder. Scott Duncan – Carpenters don't argue hammer vs saw since they can't do the other's job. Individuals sure have their fav hammers, though.

Henry Ford (via Deborah Hartmann Preuss) - "If I'd asked my customers what they wanted, they'd have said 'a faster horse.'” Dale Emery - Maybe Ford didn't know to ask the next question: "If u had a faster horse, what would that do for u?"

J.B.Rainsberger (via David Hussman at Agile 2009, not Twitter) – “Drive the cost of failure to 0, so we can fail a million times.”

James Bach (and Michael Bolton) - We think a check [as opposed to a test] is "an observation with a decision rule that can be performed non-sapiently" example: junit asserts. So, a lot of what Agilists call testing, we would call checking. But to create checks and react to broken checks requires testing skill.

Joan Koerber-Walker - Why Do We Ignore "Best Practices"? Scott Duncan - 'Cause we don't think they are?

Michelle Sliger - Deming says don’t blame the people, it's the system. My question: who forged the sys? PEOPLE. Who's going to chg the sys? People. Let’s get busy. Dennis Stevens - I feel sorry for the system. People are always abusing it and gaming it. It just isn't nice. Skip Angel - One more: Many orgs expect mgmt to fix systems, but best ppl are those that are closest to problems system is trying to fix. Esther Derby - even when U can't chg the system, U can chg your response to the system from any point in org.

Mike Schubert - Agile Development isn't doing things faster - it's doing the right things sooner (which may make you appear ... faster).

Mitch Kapor - The perfect is the enemy of the good, & the good is the enemy of the good enough. Is the good enough the enemy of the barely ok?

Paula Thornton (via Grant Rule) - Toyota "not normal corp business model...they've learned how to learn. GM makes cars. Toyota makes people who make cars" Grant Rule - If Toyota has "learned how to learn" by "mak[ing] people who make cars" who in the software industry makes ppl who make effective systems?

Scott Duncan - And instead of a CSM test, what about a CSP-like statement that CSM "graduates" would have to submit and have judged?

Steven M Smith - A TEAM without the means to score product value is missing explicit agreement with their customer(s) about significance. A TEAM obsessed with product quality IME runs out of funds, which forces shipment, which results in a product with little value. A TEAM obsessed with time to market IME operates on hunches and ships quickly with the hope that its product has value.

Steven M Smith - Tell me my idea is wrong and I'll think that info is useful. Assist me to transform the idea so it's right and I'll feel helped.

Steven M Smith - When a TEAM reaches consensus, ideally IME each member has agreed to actively rather than passively support the decision. Most TEAMS don't use consensus to make decisions IME, which results in many members failing to actively support the decisions. A TEAM that uses consensus to make minor decisions IME wastes its time: empower individuals or sub-teams to make minor decisions. A TEAM member is guilty of fraud when they participate in a consensus decision and refuse to support it to an outsider.

Sydney J. Harris (via Chris Moy) – “The whole purpose of education is to turn mirrors into windows."

Tanmay Vora - In the game of excellence, if you have "anger" and "ego" on your side, you don't need an opponent! :)

Tim Ottinger - How many orgs consider management "the art of getting people to work more"?

Tim Ottinger (actually from a workshop at Agile 2009) – “My backlog is terribly frustrating. The users? Abusive, berating. I'd like it all first, but will work in small bursts, if you let me make money while waiting!”

Tom Gilb (via Benjamin Mitchell) - "A software programmer is not necessarily an engineer in the same way a bricklayer is not necessarily a construction engineer"

Vasco Duarte - Irony is having the first quiet moment of the day in a canteen with 200 other people, after a heavy workload in an office alone!

Thursday, September 3, 2009

A Mononumerosis Example (and Measurement Scales)

Mononumerosis, according to Wiktionary, is “The oversimplification of a metric by using a single numerical value to characterize a complex phenomenon or system.”

My example involves the very common practice of doing surveys, asking people to rate something on a 1 to N scale, then reporting results using an “average” value for that something.

There are really two bad things at work here, from a statistics and survey perspective, I believe. Let me first deal with the one that is not the main point of this post, but is still important for people to consider. Once again, Wikipedia covers the broad subject (measurement scales); I’ll just explain it briefly.

The Scale Problem

Most 1 to N scales have nothing numeric about them, except for the fact that they use numbers as symbols for points on the scale. It would be better to use words to indicate what the points on the scale mean, since the scale is really ordinal, at best.

That is, reading from left to right or right to left, each position is considered higher or lower (better or worse) in some “value” than those around it. But there is no guarantee the spacing between positions is mathematically equal (which would make it at least an interval scale), let alone that the scale has a true zero point (which would make it a ratio scale, where multiplying and dividing are legitimate operations).

And that’s the point: using numbers to represent an ordinal scale, then adding up values and dividing to get an average is, technically, meaningless. (What you can do is count the instances of each point on the scale and report how many responses each point received.)

Of course, this is done all the time on customer surveys, conference feedback forms, the ratings on Amazon, etc. And all such examples seem to end up with an average rating for the questions on the survey. (Amazon, at least, shows you the counts for each “star” value as well as the whole feedback statement from those doing the rating.)
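To make the distinction concrete, here is a minimal sketch in Python (the ten responses are made-up data) showing the count-per-point reporting that is legitimate for an ordinal scale alongside the “average” that, technically, is not:

    from collections import Counter

    # Ten hypothetical survey responses on a 1-to-5 scale.
    responses = [2, 4, 3, 3, 5, 1, 4, 2, 3, 3]

    # Legitimate for an ordinal scale: report how many responses
    # landed on each point, the way Amazon shows per-star counts.
    counts = Counter(responses)
    for point in range(1, 6):
        print(f"{point}: {'*' * counts[point]} ({counts[point]})")

    # Technically meaningless for an ordinal scale: treating the
    # labels as numbers and averaging them.
    print("'average':", sum(responses) / len(responses))

The counts preserve the shape of the responses; the single “average” throws that shape away.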

Another issue with ordinal scales is that there is no way to be sure one person’s 3 really means the same as another person’s, because surveys often place no substantive interpretation on the points to help you judge where your evaluation of that question would fit. But, enough of that…you get the idea.

The Results Representation Problem

This is the more serious issue with the mononumerosis question. Let’s even say you could legitimately add and divide to get an average. Does that tell you an accurate story about what all the respondents, taken together, felt? Or does it represent some imaginary respondent’s evaluation?

Here are a few examples.

Let’s say you have three data sets of 10 responses each, on a 1 to 5 scale, where the values below are the counts of 1s, 2s, 3s, 4s, and 5s in each set:

Bell - 1, 2, 4, 2, 1
Flat - 2, 2, 2, 2, 2
Camel - 0, 5, 0, 5, 0

This will give you an average of “3” for each.

I’m sure you can see from the data itself that “3” isn’t the same as “3” isn’t the same as “3” when it comes to the actual sense of what responses to the question would mean.

Here they are graphed in two ways (both showing the same thing, but one might be more meaningful for you than the other):



Each gives a very different impression of what the sets of responses might mean.
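For anyone who wants to reproduce this, here is a minimal sketch in Python using the counts above; it computes the identical “average” of 3 for all three sets and prints crude text histograms showing their very different shapes:

    # Counts of 1s through 5s for each data set from the post.
    data_sets = {
        "Bell":  [1, 2, 4, 2, 1],
        "Flat":  [2, 2, 2, 2, 2],
        "Camel": [0, 5, 0, 5, 0],
    }

    for name, counts in data_sets.items():
        # Weighted sum of scale points divided by number of responses.
        mean = sum(v * c for v, c in zip(range(1, 6), counts)) / sum(counts)
        print(f"{name}: average = {mean}")
        for value, count in zip(range(1, 6), counts):
            print(f"  {value}: {'#' * count}")

All three print an average of 3.0, while the histograms make the bell, flat, and two-humped shapes obvious at a glance.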

My point is that it matters how data is represented based on its scale and distribution. So watch out for mononumerosis (and scales) when you are given survey results.

Wednesday, September 2, 2009

Iterative Development Happens in Your Head, not on the Calendar

Just a quick note today that I actually posted as a "comment" on another site a week or so ago, but decided to update just a bit as I thought more about it.

One of the little things I note that holds groups back, technically, from being more effective in their Agile practices is their concept of what it means to do development iteratively and incrementally. I hear and read, quite often, that teams fall into mini-waterfalls during their iterations: they take days to complete individual pieces of functionality, resulting in testing bunching up at the end of the iteration. When I have asked developers or testers about this, their view of being iterative and incremental seems to be applied to the iteration as a whole, not to their own work. Thus, being iterative and incremental has a "calendar" focus.

What they are not doing is looking at the work as a series of daily "episodes" (as many people have described/called it). They are also not engaging in TDD and/or pairing, since the ~2 hour recommended pairing time frame and the test-first approach would drive them right to a shorter cycle of creating and testing software. However, getting people to implement either of these practices can take quite a bit of effort since they require a level of collaboration which people often have not experienced in the past.

So, if there is resistance to getting people to pair or practice TDD, I suggest another thing they can try which allows them to work more individually, but still in an iterative/incremental fashion of short duration. The goal is to get more frequent delivery, within the iteration, by having people, including testers, think in minutes or hours rather than days. I've done this by simply asking folks to consider the following as an approach to developing a "module" (a sketch in code follows the list):

1) Create a shell with just the interfaces to other modules and test that. (Ada, for example, used to be great (I'm guessing it still is) at being able to validate a whole system of modules like this, ensuring all interfaces pass the right number and types of parameters.)

2) Create the I/O "calls" and test them. (Yes, I am not presupposing use of OO development, as my first couple of experiences with Agile iterations involved COBOL and assembler on IBM mainframes.)

3) Create the "control structure" for the module and test that. (Perhaps the hardest part of the coding/testing effort because of the combinations involved, so I even suggest doing this in increments, usually from outside in.)

4) Add the sequential data manipulation operations and test them. (I also suggest doing this incrementally based on the control structure, usually from inside out, in this case.)
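To make the sequence concrete, here is a minimal sketch in Python of steps 1 through 4 applied to a small, hypothetical "summarize" module (the names, data, and checks are all illustrative, and, as noted, nothing about the approach presupposes OO or any particular language):

    import inspect

    # Step 1: the shell. Only the interface exists; the check just
    # confirms callers can bind to it with the expected parameters.
    def summarize(records, threshold):
        """Return the count of record values at or above threshold."""
        raise NotImplementedError

    assert list(inspect.signature(summarize).parameters) == ["records", "threshold"]

    # Step 2: the I/O "calls" -- here, pulling values out of input
    # records -- tested on their own before any control logic exists.
    def read_values(records):
        return [r["value"] for r in records]

    assert read_values([{"value": 3}, {"value": 7}]) == [3, 7]

    # Step 3: the control structure, from the outside in: the loop
    # and the branch, leaning on a comparison not yet written.
    def summarize(records, threshold):
        count = 0
        for value in read_values(records):
            if meets(value, threshold):
                count += 1
        return count

    # Step 4: the sequential data manipulation, from the inside out:
    # the comparison itself, then the whole module end to end.
    def meets(value, threshold):
        return value >= threshold

    assert meets(5, 5) and not meets(4, 5)
    assert summarize([{"value": 3}, {"value": 7}, {"value": 5}], 5) == 2

Each numbered step ends with something small that can be tested immediately, so the create-and-test cycle shrinks from days to minutes or hours.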

The result will be shorter and shorter periods of development and test, producing earlier confidence that something "works" and closer, more frequent, collaboration between developers and testers.

Now, I'll admit that a waterfall can still be said to exist, but it is more "mini" than before and is headed in the right direction.

This also takes a few iterations to get rolling, but people begin to see what being iterative and incremental, from an Agile perspective, can mean for their own daily work instead of just at the iteration level.

[PS: And, yes, I know about vertical slicing and Alistair Cockburn's "Walking Skeleton" as other approaches; they are also great ways to get across the idea of working in short bursts. Which is best may depend on the team and what they respond to. The idea is to get them to respond and adopt the shorter time frame as their mental model for the implementation cycle.]

Tuesday, September 1, 2009

Defining "Agile"

Yeah, I know...hasn't this been argued to death?

Folks have criticized agile ideas by saying the whole approach is vague because there is no definition of what "Agile" means. Other folks fall back on dictionary definitions along the lines of "quick and easy movement." Individuals known in the community have said there isn't, and likely shouldn't or can't be, a definition, e.g.:

Scott Ambler has said - “the agile community will never settle on a common definition for what agile is and more than likely are smart enough not to even try.”

Alistair Cockburn has said - "finding the 'true center' of agile can’t be done. There is no 'center' to agile development. There’s only proximity of similar-but-different personal value systems coincidentally producing similar recommendations."

Of course, people have tried, and are trying, to define agile (or, at least, how agile one is), sometimes through standards work, sometimes through assessment and capability scales, sometimes through tests and certifications.

Alistair's point comes from how the term came into existence as a description for what a number of people felt they shared in common with regard to software development ideals. Since what emerged when those people met came from many individual perspectives, each with its own practices and techniques, a "center" wasn't likely.

But, to an extent, I must disagree that there is no definition of "Agile" since what that group of people did in 2001 was precisely that. When someone says, or I read someone state, that there is no definition of "Agile," I must point to the Manifesto's Values and Principles. Before those people created the Manifesto and the subsequent principles, there was no definition for "Agile" as we use it in software (and elsewhere for some of us) today.

It is my belief that the Manifesto's Vs & Ps define what Agile is. Agile is an adherence to those Vs & Ps in getting work done.

Now there are many practices and techniques one can employ that may be closer to or farther from the Vs & Ps. But that's about the "how Agile are you" question, not what it is.

Whenever I have taught Agile intro classes, I start with the Vs & Ps and try to make sure attendees understand what the Vs & Ps say (and what I think they mean in practice). When talking about specific methods and practices, my goal is always to point back to the Vs & Ps to say, in effect, "and this is how method X [and/or practice Y] implements Z" (Z being one or more Vs & Ps).

Now there are surely many methods and practices that can be used to implement given Vs & Ps, but I think there is a "tether" back to the Vs & Ps. It may be made of bungee material, so if you stretch it really far, you'll get snapped back.

Some people may cut that tether, either accidentally or on purpose, of course. When trying to fit something new into an organization, "tailoring" methods and practices seems quite the "practical" approach. But if you don't honor the tether, i.e., you cut it, you can end up rather far from the intent of the Vs & Ps and miss the whole point and value of the methods and practices.

I don't want this to sound like some "if you failed at Agile, you didn't do it right" rant. I certainly think, like anything, Agile might not work in your environment. I do think that, if you start off focused on the Vs & Ps, you'll have a better idea, sooner, if that is likely to be true. And you'll know whether any "adaptations" or "tailorings" you make are likely to draw you further away from the intent and benefit of the Vs & Ps.

So, really study the Vs & Ps. Talk them over with people in your organization. Talk to people doing coaching and training about what they think the Vs and Ps mean/imply. When you've done this, then make a commitment to some method/practices and try them out. Doing this investigation shouldn't take long, and you will certainly save time and money later by doing so. You'll also avoid antagonizing and stressing a lot of people in your organization by making sure they understand what Agile means.