Sunday, November 22, 2009

"The System," "They" and "Policy"

This was such a funny/weird situation involving communication with a customer that I just had to pass it along.

First, some setup information.  In August of 2008 we changed cell phone companies to fit in with the last job I had.  So no contact from the prior provider since then.  That is, until two days ago...

A 4-page (2-sheet) bill from the provider (one of the big four) shows up saying it's from the Oct 10-Nov 9 Bill Period with a Nov 13 Bill Date.  The first page shows

Oct 13 Tax Adjustment ............................................. -$1.81
New Charges ...........................................................   $1.81
                                                                 Total Due    $0.00

On the payment remit slip it says "DO NOT SEND PAYMENT. This amount will be credited to your next bill.  $0.00."

On page 4 (after 2 pages of legal stuff and ads for buying more services) it says

Charges
Tolerance.................................................................  $1.81
                                                                        Total  $1.81
(and that is the only thing on the page except the "4 of 4" page number, the company logo, and the billing acct number and dates).

Now the fun begins since I wanted to know what this was all about after 14 months of not being with this provider.  So I call the customer service number from the first page of the bill and explain all this to the person who answers.  I tell them I'm trying to find out why I got this and what it means given I have not been a customer since August of 2008.

The person was quite nice but said her records of our account didn't show any such credit/charges.  But she said we didn't owe anything, so just ignore it.  But, again, I asked why I got this and what it meant.  Since she seemed to feel I should not care, I asked for a supervisor.

The customer service supervisor was also quite nice, but could not explain it, i.e., nothing showing on any records she could see.  She guessed it was a credit left over after we closed the account.  So why, I asked, did it take 14 months to contact us, and what was the charge that cancelled out the credit?  She did not know and put me on hold a few times trying to find someone who might know.  Finally, she sent me to their finance/accounting folks.

That person, also very nice, suggested "they" must have noticed this credit recently so "the system" sent me a letter letting me know.  I said it wasn't a letter, it was a bill, and asked what a "Tolerance" charge was.  I never got an answer to that last question.  But it seems, since we closed the account and paid the final bill, somehow "they" decided we were owed $1.81 for some overpayment of tax.  However, it is the provider's "policy" not to send out checks for less than $5.  The finance person could not really explain much beyond this either, and nothing in their records was specific to this "bill" being sent.

So, apparently, to clear the account, "the system" or some "they" decided to send a bill acknowledging the credit, plus a charge in the same amount, to bring the account to $0, since it was not the provider's intent to actually reimburse the credit given it was under $5.

This silliness, of course, was to satisfy some legal and accounting rules that I didn't know or care about.

But a couple things struck me about all this, besides the silliness:
  1. None of the people could access any system of information that would allow them to find out what this was all about.  (The Finance person was basically guessing what happened based on "policy" rules, not based on any information about the actual account.)
  2. I wonder if this was done to many people to clear out old accounts.  If so, the cost to do this, and then to deal with people like myself calling up to find out why, would clearly be a substantial waste of time, money and company credibility.
  3. It would be interesting to know how much the provider makes each year keeping all credits under $5 (and what legal loophole makes this possible) since they waste the postage and time sending out silly $0.00 balance bills anyway.
Like Arsenio used to say, makes you wanna go "Hmmmm."

Monday, November 16, 2009

Expectations Around Uncertainty

Back in September (10th), Mike Cottmeyer posted “Managing Expectations about Uncertainty” and noted that traditional project management views it as important to “manage uncertainty out of the project.” On the other hand, Agile efforts “[r]ather than managing OUT uncertainty” choose “managing FOR uncertainty.” I like that phrasing of the Agile approach. Mike did point out that “both worldviews have a place depending upon your context and problem domain” and that it is “up to us [as Project Managers] to recognize the nature of the projects we are working on and choose the strategy most likely [to] yield a desirable outcome.”

I am inclined to say that "uncertainty" early in a project can be divided into (a) things we cannot (now) know and (b) things we should be able to know. I believe the more traditional approach takes the latter as its view. That is, the traditional view is that “due diligence” should be able to reduce uncertainty to nearly zero, leaving few (or no) things unknown. Thus, Agile’s approach to "embrace uncertainty" suggests irresponsible risk-taking in the traditional view because insufficient effort is expended to eliminate all the uncertainty possible. In this view, uncertainty is lack of knowledge that should be corrected by better initial effort. I believe the Agile approach looks at (a) as its view of early uncertainty. That is, there are things we really cannot know early on, may not be able to know until work has been done and feedback collected, or may not end up needing to know by the time we get there.

Now the word “uncertain” suggests being aware of something but not totally sure about it. If you are totally unaware of something (or it is something that is truly unknowable), talking of being “certain” or “uncertain” makes little sense. The traditional view of “uncertainty” carries a lot of weight, then, within that worldview if it means things we are aware of but do not understand deeply enough. Consider the large number of lists of potential project risks and failure causes that have been compiled over the years. In effect, they say, “Look, all of these things have been noted in the past and could impact your project. You need to explore them and become ‘certain’ about whether or not they have meaning/impact on your work.” Hence, “due diligence” involves being thorough about considering all these factors since we can and “should have known” this early on during planning for our project.

The Agile view is that it is wasteful to try to drive out all uncertainty early because it cannot be done. This appears to the more traditional view as irresponsible for the reasons noted above. An Agile approach relies more on short delivery cycles than on detailed up-front planning to address uncertainty. As with many other things, an Agile approach advocates an incremental, iterative way of addressing uncertainty, increasing detail as the events requiring it loom closer and moving from an implication of "early" to merely "before." From an Agile perspective, “due diligence” includes avoiding wasteful anticipation of risks/problems as much as responsible consideration of them.

This is not to say all early consideration of risk/uncertainty is to be avoided. However, from an Agile perspective, the details regarding how certain issues should be addressed can be delayed until more knowledge is available to the project. Agile projects move ahead with what is known while information on what isn’t known is developed.

In the end, of course, if an Agile project goes bad because of an unplanned-for issue, the traditional view can say “See, we told you.” Equally, if a traditional project never encounters issues it expends effort to make plans for, the Agile view can say “See, we told you.” I am reminded of a talk Kent Beck gave at XP2006 in Oulu, Finland, where he discussed “responsible development.” I think “certainty” and “due diligence” are things which walk that line of what is and is not “responsible.”

Thursday, November 12, 2009

Notes on the ASQ Software Division’s ICSQ 2009

Each year, the Software Division of the American Society for Quality (ASQ) holds their International Conference on Software Quality. This year, it was held at the Hilton in Northbrook, Illinois on November 10-11 (with a tutorial day on the 9th).

What follows are my notes on the sessions I attended and the Agile “debate” in which I represented the “pro” Agile side. Other than keynotes, sessions were run in parallel with 4 tracks going on at the same time. My notes, therefore, represent the one session I attended of four going on simultaneously.

One thing to be aware of is that attendees at ICSQs are often from regulated industries and firms doing government-related contracting where formal, standards-driven quality approaches are the rule.

Tuesday, November 10, 2009

Keynote – Bill Curtis on “Quality in Multi-Tiered IT Applications”

Bill Curtis has been a researcher, practitioner and chief scientist in software methods and individual performance for many decades. He has worked at ITT, MCC (Austin, TX research consortium), SEI (as head of the Process program), TeraQuest (process assessment and improvement), and now at CAST Software. I have known Bill over the years during his time at MCC, SEI and TeraQuest, in particular coordinating (and applying the results of) his research activity in software design expertise for the company where I was working at that time.

Curtis started saying, “We’re moving beyond just what smarts and knowledge can handle.” By this, he meant the systems and their interactions have evolved (and continue to evolve) to where product (code) quality ideas are not enough to manage the desired results. Expertise in application level quality, i.e., how all the components interact, is what has the largest impact on system quality today. Quoting an ACM article (Jackson, “A Direct Path to Dependable Software,” CACM v. 52, no. 4), “The correctness of code is rarely the weakest link.”

Curtis pointed to problems with design choices that “pass (functional) tests,” but are (at best) inadvisable practice when scaled and must address non-functional production requirements. Presaging the end of day keynote by Joe Jarzombek, Curtis said that we need to be able to make dependability, assurance, etc. “cases” about our systems. That is, we should be able to provide evidence to support arguments that justify belief in claims about such non-functional requirements.

Curtis offered a few other ideas such as:

  • addressing people’s ability to understand a system when changes must be made since he said 50% of the change effort in maintenance is devoted just to figuring out what the system, not an individual module, is doing;
  • allowing (and training) testing groups to do true QA, indeed do Quality Engineering, which would require a broader involvement of personnel from testing organizations in the full lifecycle of work as well as not regarding test groups as “entry-level” positions;
  • understanding the COO’s view on the need to standardize ways of doing things an organization does not compete on.
Finally, Curtis mentioned the Consortium for IT Software Quality “sponsored by a partnership between the Software Engineering Institute (SEI) at Carnegie Mellon University and the Object Management Group (OMG) to combine their industry-leading strengths in developing software-related standards and appraiser licensing programs.” As one of its activities, Curtis said, it will work to create more operational definitions of software (non-functional) quality characteristics (i.e., the “ilities”). The ISO 25000 series, which supplanted ISO 9126, has definitions, but the CISQ’s work suggests they are not viewed as operational enough.

Tom Roth – Psychology of Software Quality

Roth’s background is in embedded avionics software quality assurance where he is in charge of overseeing reviews, internal audits, testing, etc.

Roth started by saying we should “think of QA as people trying to influence the behavior of other people developing software.” Hence, his talk was about influencing people with regard to quality and the importance of QA knowing how to develop trust in developers since not everything can be reviewed. (Interestingly, an article in this month's issue of ASQ’s main magazine, Quality Progress, is entitled “Trust, But Verify.”) But Roth cautioned against “enjoying the power you have” as an arbiter of quality, urging the use of knowledge of psychology to establish a collaborative relationship with development groups/management.

In discussing inspections and reviews, Roth noted that software engineering formalities, such as Fagan inspections, impart a level of discipline and, potentially, a group sharing of responsibility, which may not exist with individuals alone. Indeed, inspections turn the deliverable being inspected over to the group and the individual does not bear the full brunt of making sure quality is present in that deliverable. From an Agile perspective, I was thinking that, after a while, such discipline should become more internalized and less dependent on external rigor.

Some of the things Roth touched on were how:

  • in relationships, differences often attract while similarities comfort, but trying to force sameness can destroy the attraction;
  • inappropriate habits exert tremendous influence on further (and, perhaps even worse, expanded) inappropriate behavior;
  • we are not 2, 3 or 4 different people, despite external appearances under different social circumstances, i.e., there is a congruence in behavior which, especially under stress, will reveal itself;
  • people working alone can spend 2/3 of their time evaluating alternatives and 1/3 implementing a chosen alternative while two people working together reverse the balance, effectively quadrupling the productivity of one person alone;
  • behavior [engineering] leads attitude [morality] - you can tell people what to do but not how/what to think, so work on behaviors/practices and allow the thinking to come along on its own.
The last two struck me as quite interesting, of course, from an Agile perspective.

Ed Weller – Getting Management to Listen

There are many talks that have been given over the years about how to talk to management. Ed Weller covered some of the same terrain in terms of speaking from a cost/dollars perspective. However, he did offer some specific ideas related to managers who are:

  • used to technical “change agents” (1) underestimating the cost of implementation, (2) overestimating improvement benefits, and (3) introducing risk that management is not comfortable with;
  • faced with “Problem Saturation”, i.e., “consumed solving today’s problems” with “no time for next week’s (month’s/year’s) problem.”
Weller’s suggestion was to focus on data on the cost of rework, pre/post ship defects, and, in general, poor quality. From a lean/agile perspective, this means showing management how they can reduce waste in the software process.

Rebecca Staton-Reinstein – Using A Cost of Quality Model to Drive Improvement

This was a fairly standard talk on CoQ model elements. Some of the audience interaction and comments were of interest, especially regarding the difficulties in doing CoQ calculations:

  • collecting data accepted as “accurate” enough to truly prove/show improvement ROI is very difficult for an organization that does not have some level of process discipline and decent data capture capability;
  • such models, on the cost avoidance side, are talking about things that haven’t happened, yet, requiring (accepted) historical data to show prior trends that could be reasonably extrapolated to the future;
  • belief in quality as a matter of personal “morality” or “will” (i.e., we have problems because people just don’t try hard enough to do the job “right”) rather than something addressable through an engineering approach;
  • being able to take quality data and relate it to schedule and budget impact.
Then, at some point during the talk, the following thought struck me: if you do things with urgency, you won’t have to do them in a rush.

Keynote – Joe Jarzombek, National Software [Security] Assurance effort from DHS

Joe Jarzombek directs the Department of Homeland Security’s program on Software Assurance and has been doing this, with little budget, for over 4-1/2 years. I met Joe through my activities on the IEEE Software and Systems Engineering Standards Committee when he was still consulting within the Dept. of Defense. (Before that, he was an active duty Lt. Colonel with the Army serving at the Pentagon.) Joe’s job is to promote an interest in and work actively toward securing the infrastructure of the USA from cyber attack. To do this over the years he has brought together academic institutions, government agencies (DHS, Dept. of Commerce, Dept. of Energy, and DoD), non-profit agencies, and commercial organizations to work on a variety of efforts in tools, techniques, guidance, educational programs, standards (working with IEEE and ISO), etc.

Joe’s talk is one I have heard over his many years with DHS. He updates it regularly with the latest status on the efforts noted above. And, in case it is not otherwise obvious, the “assurance” focus of this work is on writing secure software and securing infrastructure computing.

As most of the printed materials which arise from the efforts of the participants he has brought together are produced under government funding, they are freely available at the Build Security In website under the “DHS SwA Web Site” link. Another good source of material on Software Assurance is articles from issues of Crosstalk (the Journal of Defense Software Engineering) which are also freely available. And, though a few years old, a July 31, 2007 “State of the Art Report on Software Security Assurance,” is also available.

Wednesday, November 11, 2009

Keynote – Edy Liongosari “Everything’s Elastic”

Liongosari directs work at Accenture Technology Labs and spoke about the changing landscape of computing as it moves from traditional computers to mobile devices. Most of the trends he noted (e.g., cloud computing) were not new, however, some of the implications and data were interesting.

For example, 30% of “smart” phones are owned by people with family incomes at or below $30,000. For them, this was their computing platform in the sense that they did their “computing” through internet access to sources of information, data, and applications. (On the latter point, Liongosari noted that there were some 100,000 iPhone applications available.) From a third-world perspective, Liongosari noted that, despite widespread cell-phone use in the developed countries, cell technology was even more prevalent in the third world where land-line phones, computers, bank accounts, etc. were not at all common or available. Indeed, there were places, he said, where people “barely able to feed themselves” had cell phones.

Liongosari also spent some time talking about how large organizations were beginning to use cloud capability to get work done in fractions of the time it would have taken them to set up in-house infrastructure to handle the same level of computing. He even noted an insurance firm (unnamed) that uploaded data to the cloud, performed massive analysis, and downloaded the data and results a few hours later, “renting” the time and resources.

From a social computing perspective, he talked about how companies were starting to use such ideas (if not the most well-known social sites) in “harnessing the power of the crowd” to collect ideas and trends. Some examples were IBM’s Bluehouse, Adobe’s Cocomo, and Dell’s Ideastorm.

Another point made was how people in the workforce from teens to late twenties had a view of free access to computing resources and what this means when they are in company environments. Liongosari also noted the relative lack of concern people in this age group have for the idea of privacy, which is a concern (and mentioned in Joe Jarzombek’s talk) with regard to cloud computing.

While I listened to this keynote, another thought came to me: IT is moving from meaning “institutional” to “individual” technology, even more than the PC represented such a move, because it is not just owning your own computing resources now, but having an individual “right” to information, that is starting to dominate thinking.

Tim Olson – Lean Principles and Process Models

Tim Olson has considerable background in process (improvement) consulting, having worked at the SEI early on in the CMM program, being involved with the Juran Institute, having Six Sigma experience, and, most recently, working with Lean.

Olson started by relating to an example from Deming about the danger of trying to replicate success by others through simply copying their apparent behavior without understanding the context and principles behind the practices. This resonated greatly with me because it is an issue I have seen with Agile adoption when companies learn practices and techniques without an understanding of Agile Values and Principles.

For the most part, Olson’s talk was about basic Value-Stream and Value-Stream Mapping ideas. However, he noted the lack of automated tools to make the mapping easier. He did note that, in discussion with Toyota, he learned they used walls and boards, without automated tools. But Olson noted that his process definition approach, focused on diagrammatic, rather than textual, process description, has led him to apply process modeling/mapping tools to value-stream work.

He did caution, however, that simply trying to transplant Lean manufacturing ideas to software has been a problem in various cases since this has resulted in removing “non-value-added” activities such as configuration management.

Siegfried Zopf – Pitfalls in Globally Distributed Projects

Zopf is from Austria and discussed issues and problems he has observed working in multi-national, multi-cultural project situations.

Zopf began by making a distinction between distributed and outsourced situations, then between minimal and large-scale outsourcing. Overall, he seemed to be emphasizing awareness of the level of control appropriate to the different combinations of situations. For example, in a minimally responsible outsourcing situation – where the work is confined to low-level design and coding – risk is low and pulling the work back in-house is not a problem, but the financial savings are lower. On the other hand, there is great financial advantage in outsourcing greater parts of the work, but much more local project management is required in the outsourced locations. Zopf suggested allowing for 15-20% of cost for such a “beachhead” operation.

Zopf also, in connection with any larger outsourcing effort, noted how planning must account for extra costs in project management, communication, travel, documentation, and knowledge transfer that would not exist in a fully local project. Thus, a company cannot take an estimation done assuming in-house work, then simply portion out parts of the work, “transplanting” the estimate and plans without adjusting for the differences.

And, for distribution matters in general, whether outsourcing or not, there are still issues related to national and cultural differences regardless of whether or not it is the “same” company. A couple examples of what he discussed are:

  • two groups, for each of whom English is a second language, and the problems they can have trying to communicate using English, problems which do not arise when at least one group is made up of native English speakers;
  • monochronic and polychronic cultures, where the former view time as linear and compartmentalized and value punctuality, while the latter view time as more fluid, schedule multiple things at the same time, and even expect people/things to be late.
One final point in distributing work (with or without outsourcing) is process mismatch. Specifically, a high maturity organization (on the CMMI scale, Level 5) will find it difficult working with another organization that is not at least Level 3. In the reverse direction, a low maturity organization may find it frustrating working with the expectations and pace of a high maturity one.

Ron McClintic – Performance Testing

Ron was the “con” side in the Agile “debate” (which I describe below) and has a lot of years of testing/QA experience working for and in the GE Capital environment. He currently works with applications that collect and analyze data on truck fleets using a combination of hardware, embedded software, and more traditional application software. However, the multiple vendor networked environment he works in matches quite well with the multi-tiered issues Bill Curtis discussed.

His talk could be considered a “case study” since he went into detail about the various points for performance testing that his efforts need to address, from the lowest levels of (vendor-supplied) software doing network and database optimization and capacity adjustments, to customer GUIs and response expectations. On the latter, he noted work done to determine thresholds for response time, from the minimum one would ever need, which matched the time for a person to perceive a screen change, up to the upper limit before a customer would abandon a (web) application and go elsewhere. It turns out the latter is about 4x the tolerated time frame. The problem is that, for different client situations, the tolerated times vary.

As an example of a difficult performance/response issue to address, one client reported significant performance problems before 10am each day, but which seemed to go away at or about 10am. After a few weeks of probing and testing, it was discovered that 10am local time was when the company’s European offices, mostly in the UK, ended their day, a fact not previously revealed to Ron’s group. The moral being not all issues are really technical ones even though they have technical impacts.

One statement Ron made rang very true to me, especially in light of one of the points Bill Curtis made, and that had to do with using testing and test tools in an “investigative,” rather than “reactive,” manner. I think, overall, this is an important way to view the real value testing and test staff can have for an organization, as opposed to the “last ditch” quality control function often performed. QA (testing and other true QA functions) should supply information to the rest of the organization about the status of progress and quality of work.

The Agile “Debate” – Ron McClintic & Scott Duncan

The other talks/keynotes described above occurred in the order I’ve listed them. This “debate” actually occurred Tuesday afternoon just before Joe Jarzombek’s keynote. I’ve saved it for last since I am commenting on my own (and Ron’s) session. Ron had proposed this and I had been recommended to him by the Conference Chair, Mark Neal, whom I have worked with before on Software Division activities.

We started the session with me going over, briefly, the Agile Values and Principles and stating that I felt this is what “defines” Agile. The various methods preceded the Snowbird meeting; the term “Agile,” applied to software development, originated at that meeting and with regard to the Vs & Ps. So, for me, that means the Vs & Ps are what define “Agile” while practices and techniques are examples of possible implementation approaches. Ron had no real problem with this as he noted he agreed with these ideas. His objection to Agile came from two sources, he felt:

  • it resulted in overall “suboptimization” of a project because it focused only on optimization of the development piece, and
  • it did not focus on the actual profitability of a company that sells to the marketplace, defining value as just the delivery of the working software.
Thus, his argument was that a more traditional approach to projects that accounted for the full product lifecycle, including longer-term maintenance and support costs, was more appropriate. He also felt there had been no appropriate trials of similar project situations that collected data to show the benefit of a well-conducted traditional effort compared to an Agile one.

He and the audience had stories to tell about “purist” insistence from consultants arguing that Agile teams bear no responsibility for such issues since they were matters for the business beyond the development team. What I was hearing were stories of:

  • “teams” without appropriate business collaboration or all the skills needed to do the work or
  • projects where the organization, itself, isolated groups outside of development from trying to pursue/accommodate an Agile approach, insisting on more formal handoffs or
  • developers insisting that going back to fix work not done completely or well in the first place constituted “refactoring” or
  • the usual litany of refusal to document, measure, and/or plan.
Indeed, in one case that Ron noted, developers were writing code without requirements. I had to ask him how they got the okay to be developing anything with “no requirements” and how anyone could then suggest this was an “Agile” approach.

A couple audience members also brought up the book “eXtreme Programming Refactored” and its claims of failure for the C3 project.

What I found was that people were exceedingly receptive to an explanation of the Values and Principles and accepted practices, seeing how wrong it was to characterize many of these behaviors as “Agile,” rather than merely ad hoc.

Of course, throughout the Conference, there were stories and discussions about this same sort of thing happening to other ideas once they had “crossed the chasm.” Mark Paulk, for example, was there discussing various process improvement ideas as well as his research work at Carnegie Mellon University with Scrum. He and I sat at lunch with a number of people who were at this “debate” or had other Agile contact and discussed how similar things were after a while for the CMM (and now for the CMMI) with people ascribing “requirements” status to guidance material and pushing their own idea of process “rightness” in assessing companies.

So I have left this “debate” topic until the end, and am not going into great detail about what was said because, overall, it demonstrated the attraction the Manifesto’s Values and associated Principles have for people. It also demonstrated, to me at least, the need that exists for people to understand how practices are intended to implement those Vs & Ps and not simply copy (or think they are copying) them without such understanding.

Monday, November 9, 2009

And Still More Quotes from Twitter

Alan Shalloway - Scrum's power lies in its collaboration, co-location & feedback. thinking it lies in its practices is what ossifies/limits it.

Alan Shalloway - The Leader must be pulling the org thru the change process, not pushing it thru. When a leader is pushing, the org's people don't know if they are being pushed toward something better or off a cliff - they do not have that concern if leaders are out in front and pulling.

Alistair Cockburn - Crystal = frequent delivery + close communication + reflective improvement => self-awareness; Ladder: techniques -> principles -> metaskills

Bob MacNeal - Maybe "self-direction" is one characteristic that distinguishes a team from a group. Scott Duncan - Collaboration would be another. Been in groups that related well, but each did their own thing and were structured that way.

Carl Sagan (via Kevlin Henney) - Intellectual brilliance is no guarantee against being dead wrong.

Chris McMahon - to improve performance, what is THE ONE THING we can do right now? Ron Jeffries - they already know the one thing. Everyone does.

Dale Emery - Passion and respect are not inherently at odds, though they can seem to be if you don't know how to express both at once.

Daryl Kulak - Project estimating is just wishing to two decimal places.

Dave Ramsey (via Carlton Matthews and @E-Mealz) - When your outgo exceeds your income, your upkeep becomes your downfall.

David Hussman - Sometimes Twitter seems like the world's first and largest virtual fortune cookie.

Elizabeth Hendrickson - It is hard to Speak the Truth, and speak it diplomatically enough to be heard, when people want Comforting Lies.

Elizabeth Hendrickson (reported by Chris Sterling at PNSQC) - "The definition of Agile is in the results" (deliver value frequently at sustainable pace) "Flexibility is a side-effect of being agile."

Glyn Lumley - Anybody's job should not merely be to "do it right" but to do it better.

Hillel Glazer - Real engineers *do* like GOOD processes. Those that don't are posers. If you made it through college/university w/out any processes you probably weren't in an engineering school. Check your diploma.

J.B. Rainsberger - I find it hard to value Customer Collaboration without valuing whatever the customer decides is value.

Jacob Yip - Stop the line != stop and fix. Stop and contain. Fix when you understand why, which may require longer analysis.

Jason Yip - A car is a system; individual parts are not. Extreme Programming is a system; individual practices are not.

Jason Yip - Paradoxically, if you truly value people, you tend to focus on the work, not on the people.

Jeff Patton - agile people: is iteration process iteration- repeat the same steps in cycles, or product iteration: reconsider and rework prod decisions?

Jeff Patton - Agile-resistant teams hate all the meetings. Experienced agile teams love all the collaborative work.

Jeff Patton - Requirements are the boundary between what I get to decide and what you get to decide. It's a fuzzy discussion, or DMZ.

Jim Highsmith - Agility is the ability to think and learn rather than blindly following a recipe. Brian Marick - No, agility is not "the ability to think and learn rather than blindly following a recipe". Let's not equate "agility" with "good". The ability to think and learn is part of Agile and 8 zillion other things. Joshua Kerievsky - @marick You've focused on the think/learn part while the important part is not "blindly following a recipe." (even an Agile one). Brian Marick - @JoshuaKerievsky I suppose it needs saying. But I think it would be good if Agile thought leaders thought more about software.

Jim Knowlton (at PNSQC via Matt Dressman) - "date, but don't marry, your framework."

John D. Cook (actually from his blog pointed to by Jurgen Appelo) - I’ve said that someone has too much time on their hands, but not since I read Meyer’s post. I see now that the phrase is often a sour grapes response to creativity. I don’t want to do that anymore.

John Goodsen - Iterations are the dogma and waste of Agile. Flow and pull make iterations irrelevant. Are you certified to manage waste?

John Seddon (via Bob Marshall) - Less of the wrong thing is not the right thing.

John Seddon (via Bob Marshall) - Measure what is important to customers, not auditors.

Mike Sutton - The larger the organisation the thinner the thread that connects work to value.

Naresh Jain - Code is not an asset, it's a liability. Tacit knowledge gained building the product is the asset.

Nat Pryce - New manifesto: while we value valuing things other than value, we value valuing value more.

Pat Reed (via George Dinwiddie) - Projects fail at the beginning, not at the end.

Payson Hall - On reflection, key takeaway from #Risk Conference last month was definition of #project risk as "Uncertainty that matters".

Peter R. Scholtes (via Glyn Lumley) - Why do you hire dead wood? Or, why do you hire live wood and kill it! (From The Leader's Handbook: Making Things Happen, Getting Things Done).

Peter Scholtes (via Glyn Lumley) - Bureaucracy is a form of waste... people with no real work to do impose needless tasks on those with real work to do.

Scott Duncan - I see "giving" offense as different from "taking" it. I can avoid the former, but likely not the latter.

Scott Duncan - In general we do not expect perfection, but in particular, we do.

Scott Duncan - Waste: Write on board; copy to paper; transcribe to elec doc; have a bunch of people review to check for mistakes, etc.

Tim Ottinger - Between the placebo effect and the Hawthorne effect, it is hard to know anything about anything involving human beings.

Tim Ottinger - Biggest tip for remote pairing: Latitude hurts, longitude kills. Common hours are helpful. [Actually, a general comment about distributed teams that I’ve heard various places.]

Virginia Satir - Understanding and clarity, not agreement, is what's important in dialogue.

“What do you think of …?”

Long before I became involved with Agile ideas, I was doing internal (and some external) consulting in more traditional (software) quality and process improvement. And many years before that, even before I got involved with software, I taught some at the college level. My style, I came to learn, was “Socratic” in that I would use questions to encourage (self) learning by my classes, clients, and audiences rather than be too direct, too often with “answers.”

I think this has been helpful as an Agile coach/trainer since, over the years, I do believe leading a horse to water will get most of them to drink. That is, most people are reasonable enough to arrive at decent conclusions if given the information that will allow them to do this. This does not mean people will not have years of experience pointing them in directions other than those my questions may be hoping to encourage. I also find that it is easier, in a group, to get more useful learning to occur using this question-driven approach since the group learns from itself in a sense.

In another context, in a Twitter post, I stated you can control what you give, but not what others take. With at least some common base of agreed upon information, I do think you can encourage people to take, and even experience them asking you to give more. And what I do give, I try to give by offering alternatives when I think they exist.

The title of this post suggests what I typically do that seems to work. I’ll ask people to direct attention to some data or situation and ask them what they think of it. In general with any data related to quality/process, I suggest people take an “it’s ten o’clock, do you know where your children are”* approach. That is, I encourage them to know what’s going on to the greatest extent reasonable at that time, then consider whether they are or are not okay with that being the situation.

That’s it. Nothing dramatic. Just something that I find has worked for me over the years in many different contexts.

(By the way, I have even used this approach to teach 8-year-olds, and up, about Newton’s laws of motion, where they derive F=MA and eventually come to understand why satellites stay up in the air. It takes about one 30-45 minute class. And, no, they don’t all remember everything, but they do end up learning about learning and that they, in fact, can learn about some seemingly complex things.)


*For those not familiar with this phrase, it was used in a US public service announcement on TV to urge parents to know what their kids are up to and where they are late at night.

Sunday, November 8, 2009

Burndown & Control Charts

The teams I’ve worked with have used burndown charts to track task hours remaining during their iterations. For them, the burndown baseline represents the optimal pace a team would need to be on to complete all work for the iteration. Assuming, of course, that all the work contributes to completing committed stories, the burndown chart helps indicate how well the team is doing in meeting the iteration goal. I say “helps” as the burndown is not the entire truth. (Some teams have tracked story points in a burndown, but, as that produces a stair-step chart, most teams have used a task hours chart for their iterations. Later I’ll mention how some teams track story completion as well.)

Coming from a more traditional quality background originally, I view the burndown chart as a simplistic form of a classic control chart. The baseline is like the central line on a control chart. We have no upper or lower limits since we are not doing statistical sampling, of course. We are tracking actual data completely. But there are some similarities in how a control chart is used and how we can view the actual iteration progress line compared to the baseline.

If the actual progress line hovers/fluctuates right around the baseline, the team is on track to complete the iteration goal. If the actual progress line is above the baseline constantly, it could mean the team is not headed to complete their Sprint commitment. If the actual progress line is below the baseline constantly, it may mean the team is headed to complete their Sprint commitment somewhat early. However, being not too far above or below the line is likely nothing to worry about if the trend is consistent. If a team’s ability to estimate and commit is effective, they should not be too far (or for too long) above, or below, the line.

On the other hand, being below the baseline and heading even further below, or being above the baseline and heading even further above it, should be cause to consider taking some action (a small sketch of such a check follows this list):

  • A progress line that is below the baseline and increasingly headed down means the team is ahead of their schedule for completing tasks and getting further ahead. This may or may not be good news. If things are going very well during the iteration, the team might discuss with the customer/Product Owner/etc. the possibility of taking on more work. However, this pattern could suggest some tasks being skipped or downgraded in time. That ought to be looked into as well to be sure everyone understands what the pattern means.
  • A progress line that is above the baseline and increasingly headed up means the team is behind their schedule for completing tasks and getting further behind. This is not good news as this pattern suggests some tasks being added or increased in time. This ought to be looked into to find out why work is not converging on the iteration goal.
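
To make those two divergence patterns concrete, here is a minimal sketch, in Python, of one way to compute the ideal baseline and flag a diverging trend. It is an illustration only: the function names, the three-day window, and the sample numbers are my own assumptions, not taken from any particular tool or team.

    # Minimal burndown sketch: ideal baseline vs. actual hours remaining.
    # All names, windows, and numbers here are illustrative assumptions.

    def baseline(total_hours: float, days: int) -> list[float]:
        """Ideal hours remaining at the end of each day of the iteration."""
        return [total_hours * (days - d - 1) / days for d in range(days)]

    def trend(actual: list[float], ideal: list[float]) -> str:
        """Compare the last few actual points against the baseline."""
        deltas = [a - i for a, i in zip(actual, ideal)][-3:]  # last 3 days
        if all(d > 0 for d in deltas) and deltas[-1] > deltas[0]:
            return "above baseline and diverging: behind; look for added/growing tasks"
        if all(d < 0 for d in deltas) and deltas[-1] < deltas[0]:
            return "below baseline and diverging: ahead; confirm no tasks were skipped"
        return "hovering near the baseline: on track"

    ideal = baseline(total_hours=100, days=10)  # 10-day iteration, 100 task hours
    actual = [95, 92, 88, 85, 83, 80]           # hours remaining after each day so far
    print(trend(actual, ideal[:len(actual)]))   # -> above baseline and diverging...

In this made-up data, the team burned only 20 of the 60 hours the baseline calls for by day six, and the gap is growing, which is exactly the second bullet's "look into why" situation.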

Earlier I said the task burndown chart is not the complete story and used the word “helps.” Like a control chart, the burndown chart is an indicator of whether or not further consideration needs to be given to team progress. Because of the statistical nature of control charts, deviations of certain kinds from the center line, in either direction, are reasons to investigate the cause of that deviation. The assumption is that this baseline represents expected results of sampling the production process and deviations either way could be either good or bad news, but, in either case, cause to look deeper into what is happening.

The same goes for the burndown chart, but there is certainly more to know about iteration progress since completion of task hours does not, by itself, mean completion of stories, which is the iteration goal. One could be completing many hours of task effort, even be below the baseline, and not have completed a single story. This can happen if a mini-Waterfall is occurring in the iteration, with testing tasks bunching toward the end of the iteration.

One thing a couple teams I’ve worked with have done is to put story completion targets along their baseline, then note when each story actually gets completed. This gives both a daily progress indication based on the tasks and an iteration goal indication based on when stories show completion. If teams size stories effectively, a story should be getting completed every few days, at least.
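
As a rough illustration of that practice, the sketch below pairs each story's planned completion day with its actual one. The story names and day numbers are invented for the example, not taken from any team's data.

    # Sketch: story-completion targets noted along the burndown baseline.
    # Story names and day numbers are invented for illustration.
    targets   = {"story-A": 3, "story-B": 6, "story-C": 9}  # planned "done" day
    completed = {"story-A": 4, "story-B": 6}                # actual "done" day

    for story, planned in targets.items():
        actual = completed.get(story)
        status = f"done on day {actual}" if actual is not None else "still open"
        late = " (late)" if actual is not None and actual > planned else ""
        print(f"{story}: target day {planned}, {status}{late}")

Read alongside the task-hours line, this shows at a glance whether stories are actually closing every few days or bunching up at the end of the iteration.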

Now teams that are communicating well among their members usually know all these things without the chart. But the visual representation of what the team “knows” is a useful reminder and lets people outside the team see what is happening. The visibility/transparency provided by the burndown chart is important and, for me, is its basic value since it offers everyone the opportunity to understand iteration progress and discuss it objectively.