Wednesday, December 2, 2009

New Set of Quotes from Twitter

[Been occupied with packing and moving since my last post until a couple of days ago, though more packing and moving remain.  Hopefully I'll get on a daily posting schedule before too long.]

As with the others, this list of Twitter tweets is sorted by first name.  (Hmmm...a thought just struck me.  By posting these collections, am I engaging in mass retweeting?  Guess not since it isn't on Twitter.  But there must be a name for this...or should be, I guess.)

Alan Shalloway - people are often confusing predictability with control. if you do, you won't believe you can control your dev process. being in control is not a bad thing. being controlled by someone else is.

Alan Shalloway - the more i work w/ lrg orgs (300+ devs) the more i believe that any approach that focuses solely on the team or is bottom up is doomed. top down does not mean top down control. but rather top down enabling of working on the right things in the right way.

Alan Shalloway (series of quotes) - Issue isn't "when does agile work?", rather "how agile can i be?" This implies a continuum. Going to iterations is a break. Scrum is evolutionary in perspective but revolutionary in implementation (making it hard). Kanban is revolutionary in perspective but evolutionary in implementation. Many (most?) people in Agile community think Agile==Scrum. So when Scrum won't work they think Agile won't. At conferences people say "we're doing agile" when they mean they are learning scrum.

Andreas Boes at XP Days Germany 09 on “The New Industrialization” (via Alistair Cockburn) – “Industrialization” converted subjective process to objective process (remove individual's capabilities). Taylor reduced 'processes' to individual activities and actions. Taylor was a micro-methodologist (studied microtechniques). Alistair Cockburn - I think modern agile ppl are doing Taylorist work: study top performers, decompose, teach to others. That's exactly what we're doing! (I disagree with [Boes] - he says programmers are a black box, but nano-TDD opens that box. I think agilists are whitewashing a secret Taylorist agenda, and it worries me.) What does he mean by “separate subjectivity from individuality”? I need a translation of Taylor’s 3 ground rules. Boes - agile brings in a "living process", which lean brings in, too. Diff: Taylor = optimize before announcing the process; Lean = optimize after deploying the process. Agile brings self-organization; Lean brings repeatable processes. Agile = collective learning & knowledge capture; this makes the "objectivity" and visibility needed. What does agile mean for the average worker? We haven't thought it through (bad things to follow?) Alistair Cockburn - Boes/audience: Taylor tried to diminish the human, sw tries to augment. (I worry that parts-replaceable programmers goes the wrong way.)

Bob MacNeal (more on being self-directing) - 1) Responsibility - given or takes responsibility for results and 2) Autonomy (free to determine, plan & schedule activities).

Chris Sterling - scrum is a learning system; modify tools, practices, & process when your team learns something.

Dave Rooney - Agile is a collection of those 'things that work' put together in a synergistic way... whole is greater than sum of parts.

David Anderson - Kaizen done right is statistically based systems analysis often using SPC. Diana Larsen - Retros done right get at diff probs than SPC - esp human issues & + deviance. Well-run retrospectives incl subjective & objective data, as relevant to retro focus, + analysis & action. David Anderson - retros tend to project level focused. higher maturity orgs will have an organization level focus across teams/projects. This is not guidance or opinion. This is field reported cases. You need to be open to ask why? and challenge your beliefs. Paul Dyson (via Rachel Davies) - Like all agile practices (or practices in general), there is a risk of 'cargo cult execution' with retrospectives.

Gerald Weinberg - Definition of Bureaucracy: Each thing is in control, but everything is out of control.

James Bach - Saying ISTQB certification makes a tester better is like saying a chef's hat makes you a good cook. I had a woman in my class who studied for two years to be a baker. Then she studied for a few days to get a tester certification. Why are donuts worth two years of school, and testing worth no training at all? Because testing is easier to fake than donuts.

Jason Yip - Crisis is required to provoke deep change only if top management has a monopoly on setting strategy.

Jean Tabaka asked about 100-130 chars to describe Kanban and some responses were: Karl Scotland - Map value stream, visualize, limit WIP, establish cadence. Reduce WIP to improve value flow & individual fulfillment. David Anderson - visualize flow, limit WIP to encourage evolutionary change towards lean outcome, high maturity culture.

Jurgen Appelo - Introverts are not shy. Introverts just prefer low-noise communication. [Added by Dave Rooney in a retweet - Introvert == shy is common misconception]

Karl Scotland - If your process is designed to expose dysfunction, what do you do when your process becomes the dysfunction?

Karl Scotland (asked) - Are retrospectives a form of Shewhart/Deming Cycle? David Anderson (replied) - I find most retrospective guidance to suggest subjective, anecdotal feedback rather than objective data-based Deming style info. SEI classifies OID based on subjective, anecdotal evidence as low maturity even though OID is a ML5 process area. Deming's method would be considered high maturity OID/CAR ML5 by the SEI. Typical retros are a low maturity precursor to PDSA. meanwhile, @kjscotland and I are only reporting facts from field. High maturity kanban teams tend to drop retros as waste.

Vasco Duarte - Ppl talk a lot about business value, but they forget that for most people business value is totally subjective! (i.e. unquantifiable)

Vince Lombardi (via Jason Yip) - Perfection is not attainable, but if we chase perfection we can catch excellence.

Sunday, November 22, 2009

"The System," "They" and "Policy"

This was such a funny/weird situation involving communication with a customer that I just had to pass it along.

First, some setup information.  In August of 2008 we changed cell phone companies to fit in with the last job I had.  So there had been no contact from the prior provider since then.  That is, until two days ago...

A 4-page (2 sheet) bill from the provider (one of the big four) shows up saying it's from the Oct 10-Nov 9 Bill Period with a Nov 13 Bill Date.  The first page shows

Oct 13 Tax Adjustment ............................................. -$1.81
New Charges ...........................................................   $1.81
                                                                 Total Due    $0.00

On the payment remit slip it says "DO NOT SEND PAYMENT. This amount will be credited to your next bill.  $0.00."

On page 4 (after 2 pages of legal stuff and ads for buying more services) it says

Tolerance.................................................................  $1.81
                                                                        Total  $1.81
(and that is the only thing on the page except the "4 of 4" page number, the company logo, and the billing acct number and dates).

Now the fun begins since I wanted to know what this was all about after 14 months of not being with this provider.  So I call the customer service number from the first page of the bill and explain all this to the person who answers.  I tell them I'm trying to find out why I got this and what it means given I have not been a customer since August of 2008.

The person was quite nice but said her records of our account didn't show any such credit/charges.  But she said we didn't owe anything, so just ignore it.  But, again, I asked why I got this and what it means. Since she seemed to feel I should not care, I asked for a supervisor.

The customer service supervisor was also quite nice, but could not explain it, i.e., nothing was showing on any records she could see.  She guessed it was a credit left over after we closed the account.  So why, I asked, did it take 14 months to contact us, and what was the charge that cancelled out the credit?  She did not know and put me on hold a few times trying to find someone who might.  Finally, she sent me to their finance/accounting folks.

That person, also very nice, suggested "they" must have noticed this credit recently so "the system" sent me a letter letting me know.  I said it wasn't a letter, it was a bill, and asked what a "Tolerance" charge was.  I never got an answer to that last question.  But it seems, since we closed the account and paid the final bill, somehow "they" decided we were owed $1.81 for some overpayment of tax.  However, it is the provider's "policy" not to send out checks for less than $5.  But the finance person could not really explain much beyond this and did not show anything in their records specific to this "bill" being sent.

So, apparently, to clear the account, "the system" or some "they" decided to send a bill to acknowledge the credit and the charge in that credit amount to make the account $0 since it was not the provider's intent to actually reimburse the credit amount given it was under $5.
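If my guess is right, the "system" logic amounts to something like the following sketch. To be clear: this is my reconstruction of the inferred policy, not the provider's actual code; all names, the threshold, and the structure are hypothetical.

```python
# Hypothetical sketch of the inferred account-closing logic.
# The under-$5 threshold comes from what the finance person said;
# everything else is my guess at how "the system" behaves.
REFUND_THRESHOLD = 5.00

def close_out_credit(credit):
    """Return (refund_issued, bill_lines) for a leftover account credit."""
    if credit >= REFUND_THRESHOLD:
        # Large enough: actually cut a refund check, no bill needed.
        return True, []
    # Otherwise zero the account with an offsetting charge and
    # mail a $0.00-balance "bill" to document the write-off.
    return False, [("Tax Adjustment", -credit),
                   ("Tolerance", credit),
                   ("Total Due", 0.00)]

refunded, lines = close_out_credit(1.81)  # the $1.81 case from the bill
```

Run against the $1.81 credit, this produces exactly the offsetting line items on the bill I received, which is what makes me think some rule like this is at work.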

This silliness, of course, was to satisfy some legal and accounting rules that I didn't know or care about.

But a couple things struck me about all this, besides the silliness:
  1. None of the people could access any system of information that would allow them to find out what this was all about.  (The Finance person was basically guessing what happened based on "policy" rules, not based on any information about the actual account.)
  2. I wonder if this was done to many people to clear out old accounts.  If so, the cost to do it, and then to deal with people like me calling up to find out why, would clearly be a substantial waste of time, money and company credibility.
  3. It would be interesting to know how much the provider makes each year keeping all credits under $5 (and what legal loophole makes this possible) since they waste the postage and time sending out silly $0.00 balance bills anyway.
Like Arsenio used to say, makes you wanna' go "Hmmmm."

Monday, November 16, 2009

Expectations Around Uncertainty

Back in September (10th), Mike Cottmeyer posted “Managing Expectations about Uncertainty” and noted that traditional project management views it as important to “manage uncertainty out of the project.” On the other hand, Agile efforts “[r]ather than managing OUT uncertainty” choose “managing FOR uncertainty.” I like that phrasing of the Agile approach. Mike did point out that “both worldviews have a place depending upon your context and problem domain” and that it is “up to us [as Project Managers] to recognize the nature of the projects we are working on and choose the strategy most likely [to] yield a desirable outcome.”

I am inclined to say that "uncertainty" early in a project can be divided into (a) things we cannot (now) know and (b) things we should be able to know. I believe the more traditional approach takes the latter as its view. That is, the traditional view is that “due diligence” should be able to reduce uncertainty to nearly zero, leaving few (or no) things unknown. Thus, Agile’s approach to "embrace uncertainty" suggests irresponsible risk-taking in the traditional view because insufficient effort is expended to eliminate all the uncertainty possible. In this view, uncertainty is lack of knowledge that should be corrected by better initial effort. I believe the Agile approach looks at (a) as its view of early uncertainty. That is, there are things we really cannot know early on, may not be able to know until work has been done and feedback collected, or may not end up needing to know by the time we get there.

Now the word “uncertain” suggests being aware of something but not totally sure about it. If you are totally unaware of something (or it is something that is truly unknowable), talking of being “certain” or “uncertain” makes little sense. The traditional view of “uncertainty” carries a lot of weight, then, within that worldview if it means things we are aware of but do not understand deeply enough. Consider the large number of lists of potential project risks and failure causes that have been compiled over the years. In effect, they say, “Look, all of these things have been noted in the past and could impact your project. You need to explore them and become ‘certain’ about whether or not they have meaning/impact on your work.” Hence, “due diligence” involves being thorough about considering all these factors since we can and “should have known” this early on during planning for our project.

The Agile view is that it is wasteful to try to drive out all uncertainty early because it cannot be done. This appears to the more traditional view as irresponsible for the reasons noted above. An Agile approach relies more on short delivery cycles, than detailed up front planning, to address uncertainty. As with many other things, an Agile approach advocates an incremental, iterative way of addressing uncertainty, increasing detail as the events requiring it loom closer and moving from an implication of "early" to merely "before." From an Agile perspective, “due diligence” includes avoiding wasteful anticipation of risks/problems as much as responsible consideration of them.

This is not to say all early consideration of risk/uncertainty is to be avoided. However, from an Agile perspective, the details regarding how certain issues should be addressed can be delayed until more knowledge is available to the project. Agile projects move ahead with what is known while information on what isn’t known is developed.

In the end, of course, if an Agile project goes bad because of an unplanned for issue, the traditional view can say “See, we told you.” Equally, if a traditional project never encounters issues it expends effort to make plans for, the Agile view can say “See, we told you.” I am reminded of a talk Kent Beck gave at XP2006 in Oulu, Finland where he discussed “responsible development.” I think “certainty” and “due diligence” are things which walk that line of what is and is not “responsible.”

Thursday, November 12, 2009

Notes on the ASQ Software Division’s ICSQ 2009

Each year, the Software Division of the American Society for Quality (ASQ) holds their International Conference on Software Quality. This year, it was held at the Hilton in Northbrook, Illinois on November 10-11 (with a tutorial day on the 9th).

What follows are my notes on the sessions I attended and the Agile “debate” in which I represented the “pro” Agile side. Other than keynotes, sessions were run in parallel with 4 tracks going on at the same time. My notes, therefore, represent the one session I attended of four going on simultaneously.

One thing to be aware of is that attendees at ICSQ’s are often from regulated industries and firms doing government related contracting where formal, standards-driven quality approaches are the rule.

Tuesday, November 10, 2009

Keynote – Bill Curtis on “Quality in Multi-Tiered IT Applications”

Bill Curtis has been a researcher, practitioner and chief scientist in software methods and individual performance for many decades. He has worked at ITT, MCC (Austin, TX research consortium), SEI (as head of the Process program), TeraQuest (process assessment and improvement), and now at CAST Software. I have known Bill over the years during his time at MCC, SEI and TeraQuest, in particular coordinating (and applying the results of) his research activity in software design expertise for the company where I was working at that time.

Curtis started by saying, “We’re moving beyond just what smarts and knowledge can handle.” By this, he meant the systems and their interactions have evolved (and continue to evolve) to where product (code) quality ideas are not enough to manage the desired results. Expertise in application level quality, i.e., how all the components interact, is what has the largest impact on system quality today. Quoting a CACM article (Jackson, “A Direct Path to Dependable Software,” CACM v. 52, no. 4), “The correctness of code is rarely the weakest link.”

Curtis pointed to problems with design choices that “pass (functional) tests,” but are (at best) inadvisable practice when scaled and must address non-functional production requirements. Presaging the end of day keynote by Joe Jarzombek, Curtis said that we need to be able to make dependability, assurance, etc. “cases” about our systems. That is, we should be able to provide evidence to support arguments that justify belief in claims about such non-functional requirements.

Curtis offered a few other ideas such as:

  • addressing people’s ability to understand a system when changes must be made since he said 50% of the change effort in maintenance is devoted just to figuring out what the system, not an individual module, is doing;
  • allowing (and training) testing groups to do true QA, indeed do Quality Engineering, which would require a broader involvement of personnel from testing organizations in the full lifecycle of work as well as not regarding test groups as “entry-level” positions;
  • understanding the COO’s view on the need to standardize ways of doing things an organization does not compete on.
Finally, Curtis mentioned the Consortium for IT Software Quality “sponsored by a partnership between the Software Engineering Institute (SEI) at Carnegie Mellon University and the Object Management Group (OMG) to combine their industry-leading strengths in developing software-related standards and appraiser licensing programs.” As one of its activities, Curtis said, it will work to create more operational definitions of software (non-functional) quality characteristics (i.e., the “ilities”). The ISO 25000 series, which supplanted ISO 9126, has definitions, but the CISQ’s work suggests they are not viewed as operational enough.

Tom Roth – Psychology of Software Quality

Roth’s background is in embedded avionics software quality assurance where he is in charge of overseeing reviews, internal audits, testing, etc.

Roth started by saying we should “think of QA as people trying to influence the behavior of other people developing software.” Hence, his talk was about influencing people with regard to quality and the importance of QA knowing how to develop trust in developers since not everything can be reviewed. (Interestingly, an article in the ASQ’s main magazine, Quality Progress, for this month, is entitled “Trust, But Verify.”) But Roth cautioned against “enjoying the power you have” as an arbiter of quality, using knowledge of psychology to establish a collaborative relationship with development groups/management.

In discussing inspections and reviews, Roth noted that software engineering formalities, such as Fagan inspections, impart a level of discipline and, potentially, a group sharing of responsibility, which may not exist with individuals alone. Indeed, inspections turn the deliverable being inspected over to the group and the individual does not bear the full brunt of making sure quality is present in that deliverable. From an Agile perspective, I was thinking that, after a while, such discipline should become more internalized and less dependent on external rigor.

Some of the things Roth touched on were how:

  • in relationships, differences often attract while similarities comfort, but trying to force sameness can destroy the attraction;
  • inappropriate habits exert tremendous influence on further (and, perhaps even worse, expanded) inappropriate behavior;
  • we are not 2, 3 or 4 different people, despite external appearances under different social circumstances, i.e., there is a congruence in behavior which, especially under stress, will reveal itself;
  • people working alone can spend 2/3 of their time evaluating alternatives and 1/3 implementing a chosen alternative while two people working together reverse the balance, effectively quadrupling the productivity of one person alone;
  • behavior [engineering] leads attitude [morality] - you can tell people what to do but not how/what to think, so work on behaviors/practices and allow the thinking to come along on its own.
The last two struck me as quite interesting, of course, from an Agile perspective.
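Roth's pairing arithmetic (the fourth bullet above) works out as follows, if I read "quadrupling" as comparing the pair's combined implementing time against one person's. The fractions are his; the interpretation is mine.

```python
# Rough arithmetic behind the "quadrupling" claim (my reading of it):
# a solo worker implements 1/3 of the time; each member of a pair
# implements 2/3 of the time.
solo_implementing = 1 / 3
pair_implementing_each = 2 / 3

# Two people, each implementing 2/3 of the time:
pair_combined = 2 * pair_implementing_each   # 4/3 "person-time" implementing

ratio = pair_combined / solo_implementing    # 4.0 -- the claimed quadrupling
```

Whether the underlying 1/3 vs. 2/3 split holds in practice is, of course, the empirical question; the arithmetic just shows the claim is internally consistent.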

Ed Weller – Getting Management to Listen

There are many talks that have been given over the years about how to talk to management. Ed Weller covered some of the same terrain in terms of speaking from a cost/dollars perspective. However, he did offer some specific ideas related to managers who are:

  • used to technical “change agents” (1) underestimating the cost of implementation, (2) overestimating improvement benefits, and (3) introducing risk that management is not comfortable with;
  • faced with “Problem Saturation”, i.e., “consumed solving today’s problems” with “no time for next week’s (month’s/year’s) problem.”
Weller’s suggestion was to focus on data on the cost of rework, pre/post ship defects, and, in general, poor quality. From a lean/agile perspective, this means showing management how they can reduce waste in the software process.

Rebecca Staton-Reinstein – Using A Cost of Quality Model to Drive Improvement

This was a fairly standard talk on CoQ models elements. Some of the audience interaction and comments were of interest, especially regarding the difficulties in doing CoQ calculations:

  • collecting data accepted as “accurate” enough to truly prove/show improvement ROI is very difficult for an organization that does not have some level of process discipline and decent data capture capability;
  • such models, on the cost avoidance side, are talking about things that haven’t happened, yet, requiring (accepted) historical data to show prior trends that could be reasonably extrapolated to the future;
  • belief in quality as a matter of personal “morality” or “will” (i.e., we have problems because people just don’t try hard enough to do the job “right”) rather than something addressable through an engineering approach;
  • being able to take quality data and relate it to schedule and budget impact.
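A basic CoQ model of the kind discussed groups costs into prevention, appraisal, internal failure, and external failure (the classic PAF breakdown). Here is a minimal rollup sketch; the category names are standard, but the figures are invented for illustration:

```python
# Minimal Cost of Quality rollup using the classic PAF categories.
# All dollar figures are invented, purely for illustration.
costs = {
    "prevention": 40_000,         # training, process definition
    "appraisal": 60_000,          # reviews, inspections, testing
    "internal_failure": 120_000,  # rework on defects found before ship
    "external_failure": 180_000,  # defects found by customers
}

cost_of_good_quality = costs["prevention"] + costs["appraisal"]
cost_of_poor_quality = costs["internal_failure"] + costs["external_failure"]
total_coq = cost_of_good_quality + cost_of_poor_quality

# Share of quality spend going to failure rather than prevention/appraisal:
poor_quality_share = cost_of_poor_quality / total_coq
```

The audience's point about data accuracy applies directly here: the rollup is trivial, but each input number requires process discipline and decent data capture to be believable.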
Then, at some point during the talk, the following thought struck me: if you do things with urgency, you won’t have to do them in a rush.

Keynote – Joe Jarzombek, National Software [Security] Assurance effort from DHS

Joe Jarzombek directs the Department of Homeland Security’s program on Software Assurance and has been doing this, with little budget, for over 4-1/2 years. I met Joe through my activities on the IEEE Software and Systems Engineering Standards Committee when he was still consulting within the Dept. of Defense. (Before that, he was an active duty Lt. Colonel with the Army serving at the Pentagon.) Joe’s job is to promote an interest in and work actively toward securing the infrastructure of the USA from cyber attack. To do this over the years he has brought together academic institutions, government agencies (DHS, Dept. of Commerce, Dept. of Energy, and DoD), non-profit agencies, and commercial organizations to work on a variety of efforts in tools, techniques, guidance, educational programs, standards (working with IEEE and ISO), etc.

Joe’s talk is one I have heard over his many years with DHS. He updates it regularly with the latest status on the efforts noted above. And, in case it is not otherwise obvious, the “assurance” focus of this work is on writing secure software and securing infrastructure computing.

As most of the printed materials which arise from the efforts of the participants he has brought together are produced under government funding, they are freely available at the Build Security In website under the “DHS SwA Web Site” link. Another good source of material on Software Assurance is articles from issues of Crosstalk (the Journal of Defense Software Engineering) which are also freely available. And, though a few years old, a July 31, 2007 “State of the Art Report on Software Security Assurance,” is also available.

Wednesday, November 11, 2009

Keynote – Edy Liongosari “Everything’s Elastic”

Liongosari directs work at Accenture Technology Labs and spoke about the changing landscape of computing as it moves from traditional computers to mobile devices. Most of the trends he noted (e.g., cloud computing) were not new, however, some of the implications and data were interesting.

For example, 30% of “smart” phones are owned by people with family incomes at or below $30,000. For them, this was their computing platform in the sense that they did their “computing” through internet access to sources of information, data, and applications. (On the latter point, Liongosari noted that there were some 100,000 iPhone applications available.) From a third-world perspective, Liongosari noted that, despite wide-spread cell-phone use in the developed countries, cell technology was even more prevalent in the third-world where land-line phones, computers, bank accounts, etc. were not at all common or available. Indeed, there were places, he said, where people “barely able to feed themselves” had cell phones.

Liongosari also spent some time talking about how large organizations were beginning to use cloud capability to get work done in fractions of the time it would have taken them to set up in house infrastructure to handle the same level of computing. He even noted an insurance firm (unnamed) that uploaded data to the cloud, performed massive analysis and downloaded the data and results a few hours later, “renting” the time and resources.

From a social computing perspective, he talked about how companies were starting to use such ideas (if not the most well-known social sites) in “harnessing the power of the crowd” to collect ideas and trends. Some examples were IBM’s Bluehouse, Adobe’s Cocomo, and Dell’s Ideastorm.

Another point made was how people in the workforce from teens to late twenties had a view of free access to computing resources and what this means when they are in company environments. Liongosari also noted the relative lack of concern people in this age group have for the idea of privacy, which is a concern (and mentioned in Joe Jarzombek’s talk) with regard to cloud computing.

While I listened to this keynote, another thought came to me: IT is moving from meaning “institutional” to “individual” technology, even more than the PC represented such a move, since it is not just owning your own computing resources now but having an individual “right” to information that is starting to dominate thinking.

Tim Olson – Lean Principles and Process Models

Tim Olson has considerable background in process (improvement) consulting, having worked at the SEI early on in the CMM program, being involved with the Juran Institute, having Six Sigma experience, and, most recently, working with Lean.

Olson started by relating to an example from Deming about the danger of trying to replicate success by others through simply copying their apparent behavior without understanding the context and principles behind the practices. This resonated greatly with me because it is an issue I have seen with Agile adoption when companies learn practices and techniques without an understanding of Agile Values and Principles.

For the most part, Olson’s talk was about basic Value-Stream and Value-Stream Mapping ideas. However, he noted the lack of automated tools to make the mapping easier. He did note that, in discussions with Toyota, he learned they used walls and boards, without automated tools. But Olson’s process definition approach, focused on diagrammatic, rather than textual, process description, has led him to apply process modeling/mapping tools to value-stream work.

He did caution, however, that simply trying to transplant Lean manufacturing ideas to software has been a problem in various cases since this has resulted in removing “non-value-added” activities such as configuration management.

Siegfried Zopf – Pitfalls in Globally Distributed Projects

Zopf is from Austria and discussed issues and problems he has observed working in multi-national, multi-cultural project situations.

Zopf began by making a distinction between distributed and outsourced situations, then, between minimally and large-scale outsourcing. Overall, it seemed as though he was emphasizing awareness of the level of control appropriate to the different combinations of situations. For example, in a minimally responsible outsourcing situation – the work is confined to low-level design and coding – risk is low and pulling the work back in-house is not a problem, but the financial savings are lower. On the other hand, there is great financial advantage in outsourcing greater parts of the work, but much more local project management is required in the outsourced locations. Zopf suggested allowing for a 15-20% cost for such a “beachhead” operation.

Zopf also, in connection with any larger outsourcing effort, noted how planning must account for extra costs in project management, communication, travel, documentation, and knowledge transfer that would not exist in a fully local project. Thus, a company cannot take an estimation done assuming in-house work, then simply portion out parts of the work, “transplanting” the estimate and plans without adjusting for the differences.

And, for distribution matters in general, whether outsourcing or not, there are still issues related to national and cultural differences regardless of whether or not it is the “same” company. A couple examples of what he discussed are:

  • two groups, for each of whom English is a second language, and the problems they can have trying to communicate in English, problems which do not arise when at least one group is made up of native English speakers;
  • monochronistic and polychronistic cultures where the former view time as linear, compartmentalized, and value punctuality and the latter view time as more fluid, schedule multiple things at the same time, and even expect people/things to be late.
One final point in distributing work (with or without outsourcing) is process mismatch. Specifically, a high maturity organization (on the CMMI scale, Level 5) will find it difficult working with another organization that is not at least Level 3. In the reverse direction, a low maturity organization may find it frustrating working with the expectations and pace of a high maturity one.

Ron McClintic – Performance Testing

Ron was the “con” side in the Agile “debate” (which I describe below) and has a lot of years of testing/QA experience working for and in the GE Capital environment. He currently works with applications that collect and analyze data on truck fleets using a combination of hardware, embedded software, and more traditional application software. However, the multiple vendor networked environment he works in matches quite well with the multi-tiered issues Bill Curtis discussed.

His talk could be considered a “case study” since he went into detail about the various points for performance testing that his efforts need to address, from the lowest levels of (vendor-supplied) software doing network and database optimization and capacity adjustments up to customer GUIs and response expectations. On the latter he noted work done to determine thresholds for response time, from the minimum one would ever need (which matched the time for a person to perceive a screen change) up to the upper limit before a customer would abandon a (web) application and go elsewhere. It turns out the latter is about 4x the tolerated time frame. The problem is that, for different client situations, the tolerated times vary.
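If the 4x relationship Ron described holds, a client's abandonment threshold is easy to derive from its tolerated response time. A small sketch of that relationship follows; the multiplier is from his talk, but the client names and baseline times are invented:

```python
# Illustrative only: Ron's observation was that users abandon a web
# app at roughly 4x the response time they normally tolerate, and
# that the tolerated baseline varies by client.
ABANDON_MULTIPLIER = 4

def abandonment_threshold(tolerated_seconds):
    """Estimated response time (s) at which a user gives up and leaves."""
    return ABANDON_MULTIPLIER * tolerated_seconds

# Hypothetical per-client baselines, each yielding its own threshold:
tolerated = {"fleet_a": 2.0, "fleet_b": 0.5}
thresholds = {client: abandonment_threshold(t)
              for client, t in tolerated.items()}
```

The per-client dictionary is the point: a single global response-time target would miss the variation Ron emphasized.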

As an example of a difficult performance/response issue to address, one client reported significant performance problems before 10am each day, but which seemed to go away at or about 10am. After a few weeks of probing and testing, it was discovered that 10am local time was when the company’s European offices, mostly in the UK, ended their day, a fact not previously revealed to Ron’s group. The moral being not all issues are really technical ones even though they have technical impacts.

One statement Ron made rang very true to me, especially in light of one of the points Bill Curtis made, and that had to do with using testing and test tools in an “investigative,” rather than “reactive,” manner. I think, overall, this is an important way to view the real value testing and test staff can offer an organization, as opposed to the “last ditch” quality control function often performed. QA (testing and other true QA functions) should supply information to the rest of the organization about the status of progress and the quality of work.

The Agile “Debate” – Ron McClintic & Scott Duncan

The other talks/keynotes described above occurred in the order I’ve listed them. This “debate” actually occurred Tuesday afternoon just before Joe Jarzombek’s keynote. I’ve saved it for last since I am commenting on my own (and Ron’s) session. Ron had proposed this and I had been recommended to him by the Conference Chair, Mark Neal, whom I have worked with before on Software Division activities.

We started the session with me going over, briefly, the Agile Values and Principles and stating that I felt this is what “defines” Agile. The various methods preceded the Snowbird meeting, but the term “Agile,” as applied to software development, originated at that meeting and with regard to the Vs & Ps. So, for me, that means the Vs & Ps are what define “Agile,” while practices and techniques are examples of possible implementation approaches. Ron had no real problem with this, as he noted he agreed with these ideas. His objection to Agile, he felt, came from two sources:

  • it resulted in overall “suboptimization” of a project because it focused only on optimization of the development piece, and
  • it did not focus on the actual profitability of a company that sells to the marketplace, defining value as just the delivery of the working software.
Thus, his argument was that a more traditional approach to projects that accounted for the full product lifecycle, including longer-term maintenance and support costs, was more appropriate. He also felt there had been no appropriate trials of similar project situations that collected data to show the benefit of a well-conducted traditional effort compared to an Agile one.

He and the audience had stories to tell about “purist” insistence from consultants arguing against any responsibility for Agile teams to be concerned with such issues, as these were matters for the business beyond the development team. What I was hearing were stories of:

  • “teams” without appropriate business collaboration or all the skills needed to do the work or
  • projects where the organization, itself, isolated groups outside of development from trying to pursue/accommodate an Agile approach, insisting on more formal handoffs or
  • developers insisting that going back to fix work not done completely or well in the first place constituted “refactoring” or
  • the usual litany of refusals to document, measure, and/or plan.
Indeed, in one case Ron noted, developers were writing code without requirements. I had to ask him how they got the okay to develop anything with “no requirements” and then how anyone could suggest this was an “Agile” approach.

A couple audience members also brought up the book “eXtreme Programming Refactored” and its claims of failure for the C3 project.

What I found was that people were exceedingly receptive to an explanation of the Values and Principles and accepted practices, seeing how wrong it was to characterize many of these behaviors as “Agile,” rather than merely ad hoc.

Of course, throughout the Conference, there were stories and discussions about this same sort of thing happening to other ideas once they had “crossed the chasm.” Mark Paulk, for example, was there discussing various process improvement ideas as well as his research work at Carnegie Mellon University with Scrum. He and I sat at lunch with a number of people who were at this “debate” or had other Agile contact and discussed how similar things were after a while for the CMM (and now for the CMMI) with people ascribing “requirements” status to guidance material and pushing their own idea of process “rightness” in assessing companies.

So I have left this “debate” topic until the end, and am not going into great detail about what was said because, overall, it demonstrated the attraction the Manifesto’s Values and associated Principles have for people. It also demonstrated, to me at least, the need that exists for people to understand how practices are intended to implement those Vs & Ps and not simply copy (or think they are copying) them without such understanding.

Monday, November 9, 2009

And Still More Quotes from Twitter

Alan Shalloway - Scrum's power lies in its collaboration, co-location & feedback. thinking it lies in its practices is what ossifies/limits it.

Alan Shalloway - The Leader must be pulling the org thru the change process, not pushing it thru. When a leader is pushing, the org's, people don't know if they are being pushed toward something better or off a cliff- they do not have the concern if leaders are out in front and pulling.

Alistair Cockburn - Crystal = frequent delivery + close communication + reflective improvement => self-awareness; Ladder: techniques -> principles -> metaskills

Bob MacNeal - Maybe "self-direction" is one characteristic that distinguishes a team from a group. Scott Duncan - Collaboration would be another. Been in groups that related well, but each did their own thing and were structured that way.

Carl Sagan (via Kevlin Henney) - Intellectual brilliance is no guarantee against being dead wrong.

Chris McMahon - to improve performance, what is THE ONE THING we can do right now? Ron Jeffries - they already know the one thing. Everyone does.

Dale Emery - Passion and respect are not inherently at odds, though they can seem to be if you don't know how to express both at once.

Daryl Kulak - Project estimating is just wishing to two decimal places.

Dave Ramsey (via Carlton Matthews and @E-Mealz) - When your outgo exceeds your income, your upkeep becomes your downfall.

David Hussman - Sometimes Twitter seems like the world's first and largest virtual fortune cookie.

Elizabeth Hendrickson - It is hard to Speak the Truth, and speak it diplomatically enough to be heard, when people want Comforting Lies.

Elizabeth Hendrickson (reported by Chris Sterling at PNSQC) - "The definition of Agile is in the results" (deliver value frequently at sustainable pace) "Flexibility is a side-effect of being agile."

Glyn Lumley - Anybody's job should not merely be to "do it right" but to do it better.

Hillel Glazer - Real engineers *do* like GOOD processes. Those that don't are posers. If you made it through college/university w/out any processes you probably weren't in an engineering school. Check your diploma.

J.B. Rainsberger - I find it hard to value Customer Collaboration without valuing whatever the customer decides is value.

Jason Yip - Stop the line != stop and fix. Stop and contain. Fix when you understand why, which may require longer analysis.

Jason Yip - A car is a system; individual parts are not. Extreme Programming is a system; individual practices are not.

Jason Yip - Paradoxically, if you truly value people, you tend to focus on the work, not on the people.

Jeff Patton - agile people: is iteration process iteration- repeat the same steps in cycles, or product iteration: reconsider and rework prod decisions?

Jeff Patton - Agile-resistant teams hate all the meetings. Experienced agile teams love all the collaborative work.

Jeff Patton - Requirements are the boundary between what I get to decide and what you get to decide. It's a fuzzy discussion, or DMZ.

Jim Highsmith - Agility is the ability to think and learn rather than blindly following a recipe. Brian Marick - No, agility is not "the ability to think and learn rather than blindly following a recipe". Let's not equate "agility" with "good". The ability to think and learn is part of Agile and 8 zillion other things. Joshua Kerievsky - @marick You've focused on the think/learn part while the important part is not "blindly following a recipe." (even an Agile one). Brian Marick - @JoshuaKerievsky I suppose it needs saying. But I think it would be good if Agile thought leaders thought more about software.

Jim Knowlton (at PNSQC via Matt Dressman) - "date, but don't marry, your framework."

John D. Cook (actually from his blog pointed to by Jurgen Appelo) - I’ve said that someone has too much time on their hands, but not since I read Meyer’s post. I see now that the phrase is often a sour grapes response to creativity. I don’t want to do that anymore.

John Goodsen - Iterations are the dogma and waste of Agile. Flow and pull make iterations irrelevant. Are you certified to manage waste?

John Seddon (via Bob Marshall) - Less of the wrong thing is not the right thing.

John Seddon (via Bob Marshall) - Measure what is important to customers, not auditors.

Mike Sutton - The larger the organisation the thinner the thread that connects work to value.

Naresh Jain - Code is not an asset its a liability. Tacit knowledge gained building the product is the asset.

Nat Pryce - New manifesto: while we value valuing things other than value, we value valuing value more.

Pat Reed (via George Dinwiddie) - Projects fail at the beginning, not at the end.

Payson Hall - On reflection, key takeaway from #Risk Conference last month was definition of #project risk as "Uncertainty that matters".

Peter R. Scholtes (via Glyn Lumley) - Why do you hire dead wood? Or, why do you hire live wood and kill it! (From The leader's handbook: making things happen, getting things done‎).

Peter Scholtes (via Glyn Lumley) - Bureaucracy is a form of waste.. people with no real work to do impose needless tasks on those with real work to do.

Scott Duncan - I see "giving" offense as different from "taking" it. I can avoid the former, but likely not the latter.

Scott Duncan - In general we do not expect perfection, but in particular, we do.

Scott Duncan - Waste: Write on board; copy to paper; transcribe to elec doc; have a bunch of people review to check for mistakes, etc.

Tim Ottinger - Between the placebo effect and the hawthorne effect it is hard to know anything about anything involving human beings.

Tim Ottinger - Biggest tip for remote pairing: Latitude hurts, longitude kills. Common hours are helpful. [Actually, a general comment about distributed teams that I’ve heard various places.]

Virginia Satir - Understanding and clarity, not agreement, is what's important in dialogue.

“What do you think of …?”

Long before I became involved with Agile ideas, I was doing internal (and some external) consulting in more traditional (software) quality and process improvement. And many years before that, even before I got involved with software, I taught some at the college level. My style, I came to learn, was “Socratic” in that I would use questions to encourage (self) learning by my classes, clients, and audiences rather than be too direct, too often with “answers.”

I think this has been helpful as an Agile coach/trainer since, over the years, I do believe leading a horse to water will get most of them to drink. That is, most people are reasonable enough to arrive at decent conclusions if given the information that will allow them to do this. This does not mean people will not have years of experience pointing them in directions other than those my questions may be hoping to encourage. I also find that it is easier, in a group, to get more useful learning to occur using this question-driven approach since the group learns from itself in a sense.

In another context, in a Twitter post, I stated that you can control what you give, but not what others take. With at least some common base of agreed-upon information, I do think you can encourage people to take, and you may find them asking you to give more. And what I do give, I try to offer as alternatives when I think alternatives exist.

The title of this post suggests what I typically do that seems to work. I’ll ask people to direct attention to some data or situation and ask them what they think of it. In general with any data related to quality/process, I suggest people take an “it’s ten o’clock, do you know where your children are”* approach. That is, I encourage them to know what’s going on to the greatest extent reasonable at that time, then consider whether they are or are not okay with that being the situation.

That’s it. Nothing dramatic. Just something that I find has worked for me over the years in many different contexts.

(By the way, I have even used this approach to teach 8yr olds, and up, about Newton’s laws of motion where they derive F=MA and eventually come to understand why satellites stay up in the air. Takes about one 30-45 minute class. And, no, they don’t all remember everything, but they do end up learning about learning and that they, in fact, can learn about some seemingly complex things.)

*For those not familiar with this phrase, it was used in a US public service announcement on TV to urge parents to know what their kids are up to and where they are late at night.

Sunday, November 8, 2009

Burndown & Control Charts

The teams I’ve worked with have used burndown charts to track task hours remaining during their iterations. For them, the burndown baseline represents the optimal pace a team would need to be on to complete all work for the iteration. Assuming, of course, that all the work contributes to completing committed stories, the burndown chart helps indicate how well the team is doing in meeting the iteration goal. I say “helps” as the burndown is not the entire truth. (Some teams have tracked story points in a burndown, but, as that produces a stair-step chart, most teams have used a task hours chart for their iterations. Later I’ll mention how some teams track story completion as well.)

Coming from a more traditional quality background originally, I view the burndown chart as a simplistic form of a classic control chart. The baseline is like the central line on a control chart. We have no upper or lower limits since we are not doing statistical sampling, of course. We are tracking actual data completely. But there are some similarities in how a control chart is used and how we can view the actual iteration progress line compared to the baseline.

If the actual progress line hovers/fluctuates right around the baseline, the team is on track to complete the iteration goal. If the actual progress line is above the baseline constantly, it could mean the team is not headed to complete their Sprint commitment. If the actual progress line is below the baseline constantly, it may mean the team is headed to complete their Sprint commitment somewhat early. However, being not too far above or below the line is likely nothing to worry about if the trend is consistent. If a team’s ability to estimate and commit is effective, they should not be too far (or for too long) above, or below, the line.

On the other hand, being below the baseline and heading even further below, or being above the baseline and heading even further above it, should be cause to consider taking some action:

  • A progress line that is below the baseline and increasingly headed down means the team is ahead of their schedule for completing tasks and getting further ahead. This may or may not be good news. If things are going very well during the iteration, the team might discuss with the customer/Product Owner/etc. the possibility of taking on more work. However, this pattern could suggest some tasks being skipped or downgraded in time. That ought to be looked into as well to be sure everyone understands what the pattern means.
  • A progress line that is above the baseline and increasingly headed up means the team is behind their schedule for completing tasks and getting further behind. This is not good news as this pattern suggests some tasks being added or increased in time. This ought to be looked into to find out why work is not converging on the iteration goal.
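The interpretation rules above can be sketched in a few lines of code. This is only an illustration of the idea, not something from the teams described in the post; the function names, the three-day window, and the messages are all invented for the example.

```python
# Minimal sketch of reading a task-hours burndown against its baseline.
# All names and thresholds here are illustrative assumptions.

def baseline(total_hours: float, num_days: int) -> list[float]:
    """Ideal hours remaining at the end of each day (straight-line burn)."""
    return [total_hours * (1 - day / num_days) for day in range(num_days + 1)]

def trend(actual: list[float], ideal: list[float], window: int = 3) -> str:
    """Classify the recent gap between actual and ideal remaining hours."""
    gaps = [a - i for a, i in zip(actual, ideal)]
    recent = gaps[-window:]
    widening = all(abs(recent[k + 1]) > abs(recent[k])
                   for k in range(len(recent) - 1))
    if recent[-1] > 0 and widening:
        return "behind and falling further behind - investigate"
    if recent[-1] < 0 and widening:
        return "ahead and pulling further ahead - check for skipped tasks"
    return "hovering near baseline - on track"

ideal = baseline(100, 10)           # 100 task hours over a 10-day iteration
actual = [100, 95, 92, 88, 86, 85]  # hours remaining through day 5
print(trend(actual, ideal[:len(actual)]))
# → behind and falling further behind - investigate
```

Like the chart itself, this tells a team only that something deserves a look, not what the cause is.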

Earlier I said the task burndown chart is not the complete story and used the word “helps.” Like a control chart, the burndown chart is an indicator of whether or not further consideration needs to be given to team progress. Because of the statistical nature of control charts, deviations of certain kinds from the center line, in either direction, are reasons to investigate the cause of that deviation. The assumption is that this baseline represents expected results of sampling the production process and deviations either way could be either good or bad news, but, in either case, cause to look deeper into what is happening.

The same goes for the burndown chart, but there is certainly more to know about iteration progress since completion of task hours does not, by itself, mean completion of stories which is the iteration goal. One could be completing many hours of task effort, even be below the baseline, and not have completed a single story. This can happen if a mini-Waterfall is occurring in the iteration, with testing tasks bunching toward the end of the iteration.

One thing a couple teams I’ve worked with have done is to put story completion targets along their baseline, then note when each story actually gets completed. This gives both a daily progress indication based on the tasks and an iteration goal indication based on when stories show completion. If teams size stories effectively, a story should be getting completed every few days, at least.
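One way to sketch that technique is to space the target completion days along the baseline in proportion to each story's share of the total points. The function name and the round-up rule below are assumptions for illustration, not how any particular team did it.

```python
# Hypothetical sketch of placing story completion targets on the baseline.

def story_targets(story_points: list[int], num_days: int) -> list[int]:
    """Target day for each story, in the order stories will be worked,
    spaced by the cumulative share of points each completion represents."""
    total = sum(story_points)
    targets, done = [], 0
    for points in story_points:
        done += points
        # round up: a story's target is the day its share of work completes
        targets.append(-(-done * num_days // total))
    return targets

# Five stories (20 points) in a 10-day iteration:
# a story should be completing every few days, as the post suggests.
print(story_targets([3, 5, 2, 5, 5], 10))  # → [2, 4, 5, 8, 10]
```

Marking actual completion dates next to these targets then gives the day-by-day task view and the story-goal view on one chart.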

Now teams that are communicating well among their members usually know all these things without the chart. But the visual representation of what the team “knows” is a useful reminder and lets people outside the team see what is happening. The visibility/transparency provided by the burndown chart is important and, for me, is its basic value since it offers everyone the opportunity to understand iteration progress and discuss it objectively.

Tuesday, October 27, 2009

Common Project Risks and Agile Mitigations, Part 8, Epilogue

Part 7 closed out the discussion of issues that the various sources identified as contributing to project failure. As promised at the end of that Part, this Part looks back on the first 7 parts and offers some summary comments. I’ve also added some further bibliography entries that I encountered while doing this series. While they were not used to compile the list of failure issues, they do discuss project failure issues.

As implied in Part 1, the series was inspired by my having reviewed a number of surveys and articles on why project failures occur. It seemed to me that the Agile Values and Principles, if adopted using a number of generally accepted Agile practices and techniques, could mitigate many of these issues. While an Agile approach cannot pretend to solve all the identified problems, its communication and feedback model for team-based work is far more likely to surface and deal directly with such problems when they are usually less severe and less costly to address.

However, these project failure risks have existed, in some cases, for decades. People have been dealing with them long before the Agile Manifesto was created. Over the years, the failure rate has been declining, as some of the source surveys have noted. The recommended solutions have been codified in various bodies of knowledge (e.g., PMIBoK, SWEBoK, CSQE BoK), models (e.g., CMM®/CMMI®), and standards (e.g., ISO 9001, IEEE S2ESC, ISO/IEC JTC1/SC7). These approaches to the problems have been (often unfairly) characterized by voluminous documentation, significant cost, and bureaucracy. Agile methods have (equally unfairly) been characterized by applicability only to small, low-risk efforts. The BoKs, models, and standards approach is a reflection of what people believe to be currently understood “best” practices derived from those used in other (non-software) engineering and project domains. Agile methods reflect the view that a less complex, less prescriptive approach is possible which relies less on detailed process guidance and more on process transparency and person-to-person communication.

What is interesting, however, is the position often taken by both approaches with regard to why projects employing them continue to have problems. From a phase-based, sequential project perspective, the view is that such problems can be avoided if only project personnel would follow the existing, accepted, and well-defined project management and software engineering BoKs, models, and standards. That is, projects fail because people do not follow known methods and procedures effectively, i.e., they do not carry out such guidance “right.” Interestingly, when an Agile project is not successful, we often hear the same things, i.e., it failed because people did not do Agile (or a specific method) “right.”

So the question, in either case, is: can it merely be a matter of doing what is predetermined to be “right,” whether traditional or Agile? And, why is it so hard to do what is “right” to achieve the results expected from doing so in either case? Proponents of an Agile approach say the BoK/model/standards approach suffers from too much overhead, inapplicable detail, and inflexibility causing an inability to respond promptly to the circumstances and changes that lead to failures. Proponents of the BoK/model/standards approach say Agile methods suffer from ad hoc behavior, short-term thinking, and transfer of significant risk responsibility to the business and customers.

What may be most true of either approach is that a formulaic application of their practices and techniques, with insufficient knowledge and understanding of their basic principles can produce an inflexibility (or inability) to respond when circumstances require. It may be that the “best” approach would be to ensure that the guidance in the BoKs, models, and standards is understood by those working on projects, but that the Agile Values and Principles are used as the structure within which to apply that knowledge.

Related Bibliographical Items

Sunday, October 25, 2009

Common Project Risks and Agile Mitigations, Part 7, Everything Else

Part 6 addressed the topics of quality and stakeholders. This part will address the remaining categories of issues that the various sources identified as contributing to project failure. They involve issues with and between people, availability of information, failure to apply lessons learned, the testing process, and vendor performance.

Remaining Issues

While a number of the issues noted below have impacts in many other categories, there are some specific instances of problems that were not assigned to other areas and were mentioned on their own in the various surveys of project failure risks. The remaining categories/issues are:

  • tension in social relationships; people (often respected staff) who, for one reason or another, block progress.
  • missing, or delayed access to, information resulting in untimely decision-making. (Shari Pfleeger [May] notes the common situation that “there isn’t one person who has an overview of the whole project.”)
  • not learning from past problems and/or ignoring the need to act on what has been learned.
  • inadequate procedures, strategies, and tools to be used in testing; late involvement of the test team in the project.
  • poor vendor performance delivering on a contract; for vendors of contract staff, a concern was offshore-outsourcing relationships as engineers sometimes must train colleagues who do the same work for much less pay. (Vendor contract issues were not mentioned much, but were a high % item when they were.)
Applicable Agile Values and Principles

Regarding people issues, Agile’s emphasis on transparency, regular improvement, and a team-based approach, while not automatically solving problems, will surface them sooner and in a more constructive atmosphere when it can be less costly to address them.

Agile’s expectation of frequent, direct communication between development team, business staff, and stakeholders, while not solving all information availability problems, can reduce them significantly and surface them earlier. Agile’s approach can also mitigate “stonewalling” around information that can occur in phase-based, sequential approaches.

The inspection and adaptation potential in daily meetings and regular retrospectives greatly increases the opportunity for, and expectation of, improvement. Reflection occurs close to, if not immediately after, an opportunity to learn and improve. Improvements, then, take place or are identified almost immediately rather than waiting, sometimes months, before consideration of them and any action is taken in a traditional project cycle.

Testing is given first-class status by most Agile methods and test staff are very much expected to be a part of the development team. In this way a quality focus not only drives development work but confirms its success in meeting customer functionality expectations. Agile’s expectation of working software as the primary measure of project progress and success, backed by a strong regression test suite, not only moves concern for quality to the forefront of the project but helps ensure that changes can be made with confidence. There is also significant emphasis on test automation in Agile methods.

[A side note here regarding the latter. In 1989-90, I conducted interviews with a number of large, technology-based companies in the US regarding the technologies they found most important in addressing quality. I asked what 3 things, if they were denied use of them, would have the most negative impact on their development work/quality. Two of the things mentioned most frequently were automated source code control and automated builds. Today, these two are almost always found in development organizations. The third thing mentioned was automated regression testing. Interestingly, today, test automation is still a subject of much concern, even after 20 years.]

With regard to vendor performance, Agile Values and Principles do not directly address the issues noted. However, Agile’s team structure, communication expectations, collaboration emphasis, and success-focused contracting all provide a basis for working more effectively with vendors as with other participants and stakeholders. At the very least, an Agile approach will surface the issues of everyone’s performance from the very beginning of the project and offer the potential for a more constructive, early solution to problems when they are far less costly to address.

[One thing I have personally noted in working on some large, distributed efforts was the lack of Agile experience of, and training offered to, off-shore teams. I believe this was because, at least in those instances, the on-shore company making use of off-shore contract staff had been working with the off-shore companies in traditional project situations for many years and just stayed with them. The problem was the lack of training off-shore folks were given, though many on-shore people went to some level of Agile training. More than once, as a consultant myself, I was asked to train off-shore personnel when their lack of experience/familiarity with Agile methods became clear.]

A final Part 8 will be posted within the next few days where I look back on the first 7 parts and make some summary comments.

Friday, October 23, 2009

Common Project Risks and Agile Mitigations, Part 6, Quality & Stakeholders

Part 5 addressed the category of technology. This part will address the next two most frequently mentioned categories of issues that can lead to project failure: quality and stakeholders.

Quality Related Issues

The particular issues related to quality mentioned in the various survey sources include:

  • general lack of focus on quality (i.e., assurance and control), leading to critical problems where quality and reliability end up being unacceptably poor.
  • no practical way to show software meets non-functional criteria (i.e., “ilities”) or that the delivered software will “work” for the user.
  • misunderstanding of the role of quality assurance, giving it a secondary status to other activities and viewing it as just being about testing.
Applicable Agile Values and Principles

Agile’s major contributions to addressing quality issues are:

  • the insistence on working software as the actual measure of progress (and success) on a project;
  • early and frequent demonstration of functionality to the customer;
  • clear acceptance criteria that define what it means for functionality to be “done.”
All of these work together to help reduce the possibility that what is built diverges from what the customer needs/expects and to increase the likelihood that what is built functions acceptably. Various development practices are used to implement these three main goals (e.g., TDD, continuous integration, pair programming, shared code ownership).
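As one small illustration of the third point, acceptance criteria that define “done” can be written as executable checks rather than prose. The shopping-cart rule, names, and values below are entirely invented for the example; they are not from the post or any particular team.

```python
# Illustrative sketch only: making acceptance criteria executable, so that
# "done" is decided by a passing check rather than by opinion.

def apply_discount(total: float, code: str) -> float:
    """Apply a discount code to an order total (assumed business rule)."""
    discounts = {"SAVE10": 0.10}
    return round(total * (1 - discounts.get(code, 0.0)), 2)

# Acceptance criteria for the story "customer can use a discount code":
assert apply_discount(100.00, "SAVE10") == 90.00   # valid code reduces total
assert apply_discount(100.00, "BOGUS") == 100.00   # unknown code changes nothing
print("story 'discount code' meets its acceptance criteria")
```

When such checks exist before the code does, they also serve the early and frequent demonstration goal: the customer can see the criteria pass.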

The brevity of this discussion on quality is not meant to trivialize its importance, just to indicate that an Agile approach has fairly straightforward quality-related goals and techniques to meet those goals.

Stakeholder Related Issues

It might have been expected that stakeholder related issues would be a bit higher, but some of the more frequently mentioned items have stakeholder impacts associated with them. The particular issues related to stakeholder attitudes and behaviors mentioned in the various survey sources include:

  • Lack of sufficient user input/involvement leading to or as a result of most requirements coming from management or other user “proxies” who do not (or rarely) use the existing system.
  • Stakeholders unable to agree on answers related to requirements, resources, etc. and the consequences of these things, who then “paper over their differences” early on, pushing problems into development where they are revealed when software implementation concerns demand a specific answer.
  • Fear that project threatens their job: their influence (even working conditions) in the organization because many software projects result in one group assuming power and another losing it.
  • Stakeholders who are partners on this project may be competitors in other areas.
Applicable Agile Values and Principles

Quite frankly, in my view, there aren’t significant Agile ideas that specifically address the last two of these issues. Certainly, the Value of “customer collaboration” applies here, but does not, in itself or in combination with any of the Principles, offer a solution to serious stakeholder personal or competitive differences.

As to the second issue, Agile advice is to give the development team a single individual to “represent” the customer(s) and deal with the differences before the team gets its direction on functionality and priorities. That person (e.g., Product Owner) deals with the stakeholder diversities. Not really a solution in itself as it just hands the issue to the business, saying it needs to handle this if the development team is going to be able to be as “agile” as desired.

With multiple, and perhaps disparate, stakeholders, there is no Agile “magic” to wipe away the problems. Early and frequent demonstration of working software, though, can help focus stakeholder attention on the work, giving them concrete functionality to consider. Being concrete often makes it easier to overcome, or at least discuss somewhat more openly, the concerns of and differences between stakeholders.

The first issue is best addressed through Agile’s goal that there be regular, frequent involvement by the customer in discussing functionality and reviewing iteration output. However, this will not prevent the “customer” view from being dominated more by management than actual system user perspectives. Again, it is left up to the business side of the relationship to ensure the information the development team receives represents the actual needs of the business in a way that will allow the most benefit to be derived from the work produced.

[Note: In the case of both quality and stakeholder negotiation matters, there is a great deal of existing wisdom and practice that is not Agile-specific in nature, but certainly consistent with Agile Values and Principles. Indeed, one could say the same about all of the issues mentioned in this series. Certainly, the goal of the series has not been to suggest only Agile ideas exist to address them, just that Agile has something to contribute in the way it suggests conducting product development.]

Thursday, October 22, 2009

My First “Agile” Experience in the Early 80s

Of course, it wasn’t actually Agile according to any currently understood practices and techniques. But it did have aspects of some of the ideas now viewed as Agile and was the experience that came to mind when I first read Kent Beck’s 1st edition of eXtreme Programming Explained. I’ll describe the working environment, the product (a bit) and how we interacted. You can judge for yourself where the experience fit on any scale of being or not being Agile.

We were doing a commercial 4GL product (non-procedural language and associated database) known as RAMISII. It was written in FORTRAN and IBM 370 assembler so, of course, ran on IBM mainframes. There were 9 of us in the development group. Two focused mainly on the database code (and an access module to IMS DB/DC so the “language” could do reporting from those databases as if they were our own database). Two did “support” and non-development (regression) testing. So five of us worked on the language/reporting side. There were also a couple documentation folks who worked under the Marketing group which had 4-5 other folks in it. Marketing was our internal “customer,” though we never used that term or really thought of them that way formally.

One year, after relational databases started catching on but before DB2 was big in the IBM environment, it was decided we would redo the reporting/language side of the product to add substantial functionality, including language constructs to allow relational operations for reporting extract purposes. Since we were going to change some 70% of the system, it was also decided we’d get rid of all the FORTRAN and replace it with Assembler. That’ll give you an idea of the technical scope of what the five of us would tackle over the course of 9 months or so. The 9 month target was because it was early summer and the next major customer gathering/conference was the next Spring, so we needed to have a major release ready then as we did each Spring.

[An aside on why assembler and not C, Pascal, even PL/I. We had to stick with languages that IBM was currently supporting on mainframes and an IBM C compiler was many years away at that point. But we could have used PL/I except that, at least back then, there were dynamic libraries associated with it that the customer would already have had to have, i.e., they had to be using PL/I, to then run our delivered system. Or the customer would have to license those libraries separately from IBM. Or we would have had to license them and pay IBM for each copy of our system that we released with them. Company senior management did not want to get involved with any of these concerns. So no PL/I.]

As to our environment, beyond saying it was assembler and IBM mainframes: our (two) machines were mid-sized mainframes running VM/CMS (a virtual machine system that let each of us feel we had a machine all to ourselves). We worked on, wait for it, TI Silent 700s, i.e., thermal printer terminals. No CRTs, as this was before even 3270s were widely used. (A year or so later, we had to get into supporting 3270 full-screen, but, at that time, it was command-line interfacing.) Each of us had our own private office in a nice building in the suburbs of Princeton, NJ. Of the five of us, 3 were relatively new to the company (<1 year), one was just a bit over a year, and I had been there between 2-3 years. (The database access and support folks were more senior than I.) Everyone, save one very junior person, had several years of experience at that point (i.e., 5+ at least) in IBM or Honeywell mainframe environments with assembler (and other languages).

Our immediate bosses were really fine folks with development experience from an operations research background. There was no obvious hierarchy among us. (Indeed, while I was there, I was "promoted" 3 times and never knew it. It never affected how I related to, or was noticeably viewed by, anyone, at least as far as I could see.) We each, and our managers, valued good ideas and enjoyed one another’s ideas, as there was always lots of room for people to contribute. The product was thousands of lines of code. I can’t remember exactly how big, but we had some 50-60 uniquely named FORTRAN and assembler modules that ranged from a couple hundred to a thousand lines each. Back then, it was a large-footprint product that employed dynamically loaded overlays to limit memory use.

That summer, those of us in development spent a month or so determining every module we felt we’d have to touch based on some very high-level specs from Marketing. These specs came from the last Spring’s meeting where customers would annually “vote” on the features they wanted from a long list compiled from everything they all had been submitting for the 6 months before that. As this was pre-32-bit addressing, we also had some decisions to make about how to restructure our pointer approach for our report record sorting. One of the features desired was the ability to print summary data before all the data actually printed on the page, which meant we had look-ahead selection and calculation issues to deal with in order to create, carry, and sort summary data records within the detail records.

I have no recollection of how this started, but near the end of the summer, we started having regular meetings with the Marketing and documentation folks to flesh out functionality in more detail. We never had a formal requirements spec, but the user/support manuals became our “spec” as we interactively decided on features, new language syntax, etc. We would not have new functionality to show at each meeting, but we had ideas to present and questions to cover, so Marketing and documentation knew, throughout, what we were up to and what it was going to look like. (Being syntax and command-line focused helped a lot since we did not have GUIs and full-screen formatting to worry about.)

After a couple months, though, we did have features to show since we had updated/rewritten enough to have threads of functionality able to work. We could also start testing functionality, at least within development, though we were also sharing what we were doing (as was Marketing and documentation) with the support group so they could work on tests at their level to incorporate into the test suite. (We would give them ours, but they’d always enhance them and then move them into their large test suite where they thought the new tests would work best with existing test cases.)

As we saw how our design ideas were working out, we began to offer Marketing ideas we had on features, related to ones clients had requested, that we felt could “fall out” of work we were doing with little or no additional effort. If Marketing liked them and management accepted our evidence that they would not impact the Spring client meeting demo date, we’d put them in. Examples of this were various summary calculations and totals we saw we could add if Marketing felt they’d be useful to clients. In some cases, we suggested ideas the clients had literally submitted but which were lower on the voting list than ones Marketing identified as committed to the release. So, as we began to announce what features would be in the release, end customers were really happy to see everything they had voted for making it, as well as other things they had not expected we would deliver.

A couple months before the meeting, all the functionality was done, as was all the user documentation. Actually, the documentation had been settled upon a month before that. The doc folks were just doing their own formatting and example insertion work. Examples often came from our tests. Indeed, tests we used often came from ideas Marketing had for typical user request streams. During the last two months, we worked more on performance improvements and internal structural changes. No new functionality was being worked on. All defects found during that time were due to internal issues and had no functionality impacts, i.e., no missing or misunderstood requirements.

At the Spring meeting, customers seemed really excited by the work we had done. They loved all the functionality as well as hearing, despite all that was added, that both memory footprint had been reduced and performance had been improved. There was also an explosion of functionality ideas from clients coming out of the meeting. (As the formal release date was early summer each year, we were able to incorporate a number of these successfully before the actual release, so clients did not have to wait a year for them.)

In the process of making all these changes, we introduced no major defects and, in fact, found two serious defects that had been plaguing us on large reports and on customer system configuration changes, i.e., whether they allocated dynamic memory above or below the base load address of the main system modules.

We never had a name for how we operated. It clearly wasn’t “Agile” in many aspects, but I think we hit every Manifesto Value dead on and several of the Principles.

Monday, October 19, 2009

Common Project Risks and Agile Mitigations, Part 5, Technology

Part 4 addressed project and risk management. This part will address a category of issues that can lead to project failure which was surprisingly high in the number of times it was mentioned: technology.

Technology Related Issues

The particular issues related to technology mentioned in the various survey sources include two types. The first is application technology, revealing itself through

  • Complex technical details (often hard to recognize or address early in a project)
  • Lack of sufficient technology competence to address the complexity.
The second is development process technology, revealing itself through

  • Poor up-front (inflexible, inadequate) architectural planning
  • Lack of effective, practical ways to show one approach is better than another.
Another item also mentioned in connection with these was that technical decisions may be made by people without expertise in the domain, but with the managerial authority to make those decisions. One could argue that this is not a technology matter, per se, as non-technical decisions could just as easily be made under the same circumstances. But it was mentioned, so I wanted to note it.

Applicable Agile Values and Principles

The Agile Values and Principles that seem to apply are those associated with individual & team performance and design approaches as well as frequent software delivery.

With regard to application technology, if a commitment has to be made early to such directions before other work occurs, such commitments, if “wrong,” will unquestionably have a strong, negative impact on project success. The Agile approach is characterized by two phrases often seen in Agile literature: making decisions at “the last responsible moment” and focusing on “the simplest thing that could possibly work.” Key here, of course, are “responsible” and “could possibly work.” While Agile “embraces change” and seeks to “inspect and adapt,” that does not mean things are done ad hoc. It would be irresponsible not to make decisions using the best information possible, at that moment. However, to make progress and gain more information usually requires moving ahead, incrementally, at the point when a decision must be made (“the last responsible moment”) using that information in the most direct, clear manner known at that time (“simplest thing that could possibly work”). (Naturally, in a very complex domain, “the simplest thing” is still going to be complex, but likely better to deal with than a more complicated complex thing.)

Regarding technical competence, especially in software, it can be quite difficult to assess because the evidence is often buried within the code and not obvious, especially in phased, sequential efforts, until very late in the project schedule when various pieces are integrated into the larger system. Agile techniques such as collective code ownership, pairing, and general visibility within the team are ways this information can be revealed sooner rather than later. Also, the Principles of commitment to continuing technical growth and regular reflection on how to work better provide a path toward increasing individual, team, and project competence on a regular basis.

These latter Agile techniques and Principles apply equally as much to the issue of development process technology, including architectural planning. There is always the question of how much up front work is necessary before useful progress can begin. The answer is usually less than might otherwise be expected if working software is delivered early and frequently thereafter since there will be concrete evidence of what “works” and what does not. In a complex project, what this will require is close communication between any architecture groups and development teams and a willingness of the former to work iteratively and incrementally rather than in a purely phased, sequential manner.

This latter point is often the most challenging organizational issue faced early in an Agile adoption effort since adoption efforts often start within development teams, perhaps with some test participation, but often without direct involvement of other technology-based teams. Architectural teams are one; configuration and change control teams are another; very often build and deployment teams are a third. Failure to incorporate these areas into the Agile effort can produce, at best, delays and limitations on the velocity achieved by teams since other groups usually work in a less iterative manner. They will engage in more up-front planning, expect less frequent changes and follow well-defined timelines for availability later in a typical project cycle as they do for non-Agile projects.

Saturday, October 17, 2009

And More Quotes From Twitter

A fourth installment of interesting things I've captured from Twitter that people have said (including some of my own) either individually or as part of a thread of discussion (again sorted by first name):

Alan Shalloway - biggest difference between adopting Lean now from adopting agile 10 yrs ago is customers r now more open-minded than th consultant community

Bob Marshall - Common insight from Reinertsen, Goldratt, Seddon: Manage *queues*, not schedules, capacity, efficiency or costs.

David Hussman - Detailed requirements are poorly written tests or what I call "tests in disguise".

David Hussman: Project communities are bonded by common goals not by percentage of availability on org charts.

Declan Whelan - I tire of people saying we were successful w/o agile as if success is binary. Agility fosters the ability to expand success.

Eric Hoffer (via Steve Freeman) - “Every great cause begins as a movement, becomes a business, and eventually degenerates into a racket.”

Gloria Steinem (via Suhas Walanjoo) - The truth will set you free. But first, it will piss you off.

James Bach - 3-word critical thinking process: Huh? Really? So? (question meaning, then question fact, then question significance)

Jason Gorman - 1. Hire good programmers. 2. Give them clear goals. 3. Give regular constructive feedback. 4. Stay out of the way!

Jason Yip - Isn't it kind of weird that Japan has a Deming prize and the US has a Shingo prize?

Jean Tabaka - Scrum vs Kanban vs other systems/certification debates distract vs focus. Continuous systems improvement jazzes me. Pick & focus.

Marcin Niebudeck - What I find the most difficult in #agile transition is not the legacy code, but the legacy people. For legacy code we have already good engineering techniques from #xp. What do we have for legacy people?

Mark Twain (via Will Green) - Never argue with an idiot. He will drag you down to his level and beat you with his experience. Scott Duncan - A variation "Never argue with an idiot. Onlookers may not be able to tell the difference." Both from Twain, I think.

Niklas Bjørnerstedt (via Kent Beck blog) - A good team can learn a new domain much faster than a bad one can learn good practices.

Scott Duncan - I am just not convinced the way to combat dysfunction is with more easily dismissed forms of dysfunction.

Scott Duncan (From the movie "Nightwatch") - "It's easier for a man to destroy the light within himself than to defeat the darkness all around him." (Anna Nachesa said the actual translation from the original Russian is: “It's always easier to put out the light within yourself than to cast away the darkness outside.”)

Tim Ottinger - I think that the agile motto should be "Building a better next week."

Timothy L. Johnson (via Josh Nankivel from Twitter)- Changing the world, surprisingly, looks a lot like living your life... day to day... with purpose... with focus... and with love. And there are days when looking at yourself in the mirror at the end of it all... and smiling... is really the best accomplishment. (28 September 2009)

Tobias Mayer - Scrum & XP are incomparable. Scrum is a framework for organizational change, XP for individual craftsmanship.

Unknown (via Kim Coles) - "The world is so fast that there are days when the person who says it can't be done is interrupted by the person doing it.”

Vasco Duarte - My def: "a method scales iff the effort needed to manage "things" grows at a slower rate than the number of "things"."

Will Rogers (via Ainsley Nies) - “Even if you're on the right track, you'll get run over if you just sit there.”

William W. (Woody) Williams - Highly motivated, productive people working in the wrong direction do huge damage in a short time.

Willie Colon (via Roy Atkinson) - The capacity to learn is a gift; The ability to learn is a skill; The WILLINGNESS to learn is a choice.

Yves Hanoulle - "A shared vision is about shared state, not about a shared statement."

Monday, October 12, 2009

Common Project Risks and Agile Mitigations, Part 4, Project & Risk Management

Part 3 addressed the topic of creating an initial plan (i.e., planning and estimation). This part will address related categories of issues that can lead to project failure: project and risk management.

Project Management Related Issues

To hearken back a bit to the last part, consider some thoughts from Tom DeMarco [May] in noting how a “Lean and Mean” emphasis in companies has “goaded” managers and staff into unreasonable estimates and the need for overtime work. “Any failure will be viewed as a direct result of underperformance,” DeMarco says. However, underperformance is “not even a significant [project failure] factor” for most projects compared to simply having initially unattainable goals.

Now consider some remarks from Ed Yourdon, quoted in [May] saying, “Nobody seems to acknowledge that disaster is approaching” even when they recognize it. “There is no early warning signal.” May goes on to state Yourdon’s belief that, “Until more organizations abandon waterfall-style development in favor of processes that demand early working code or prototypes…this scenario will continue to be familiar.”

The particular issues related to project and risk management mentioned in the various survey sources include:
  • Failure to apply (or understand) essential project management techniques and/or project control systems (e.g., tracking, measurement).
  • Inadequate visibility of progress (not just resources used) generally due to status reporting being misleading (e.g., generally more optimistic than pessimistic) or not having effective means of tracing a software development from requirements to completed code.
  • Not proactively managing risk or basing actions on “catching up” later (e.g., counting on overtime to handle contingencies).
  • Lack of knowledge of the actual probability of success, late failure warning signals, and reduction in resources needed for the project.
The first of the issues noted above argues for traditional project management capabilities being needed. Yet, though such capabilities might, on the surface, be present, the latter three problems could still occur because of the things DeMarco and Yourdon mention. Part 3 mentioned some ways to try to address DeMarco’s concerns. In this part, Yourdon’s concern should also be addressed.

Applicable Agile Values and Principles

Again, all the Agile Values and most of the Principles seem to apply in one way or the other.

The first problem noted above could just as easily be an issue in an Agile project if the techniques and principles for how to conduct one are either not applied or understood. The key in an Agile project is that these techniques are very near-term, direct and based on visible indicators of what is happening every day rather than at weekly/monthly status reporting occasions. This promotes the Agile Principle of simplicity, especially in process and tools, applied to project management by taking advantage of close, regular communication among project stakeholders and regular reflection on and adjustment to plans and project conduct. Because of all this, an Agile project will not engage in “Project Management Chicken” where everyone waits to see who will be the first to have to admit they are behind schedule.

Another important concept is how Agile projects measure progress based on working software rather than tasks accomplished. While it is true that an iteration burndown may track tasks, the ultimate measure of success will be completion of the committed-to functionality and its acceptance by the customer. This is Agile’s “highest priority” and, as in planning, employs short timeframes (e.g., a few weeks) as the cycle for ensuring project direction meets customer expectations. And, as the demonstration of achievement of iteration goals is a very visible event, surprises are very unlikely if stakeholders maintain involvement with the team(s).
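The distinction between tracking tasks and tracking accepted functionality can be made concrete with a minimal sketch. All story names and point values below are invented for illustration, not taken from any real project:

```python
# Progress measured by accepted working software, not tasks completed.
# Story names and point values are illustrative assumptions only.

def remaining_points(backlog):
    """Points still 'in play': stories the customer has not yet accepted."""
    return sum(story["points"] for story in backlog if not story["accepted"])

iteration_backlog = [
    {"story": "login",  "points": 5, "accepted": True},
    {"story": "search", "points": 8, "accepted": True},
    # All coding tasks may be finished here, but until the customer
    # accepts the working software, the story still counts as remaining.
    {"story": "export", "points": 3, "accepted": False},
]

print(remaining_points(iteration_backlog))  # 3
```

A task burndown could show "export" as nearly done; measuring by acceptance keeps it fully on the board until the customer says otherwise.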

Finally, as noted in Part 3, planning is organized to make changes easier, less costly, and very visible. Agile project management, then, is focused on how to best respond to change and address risks right away. Indeed, a risk-based approach to managing projects is very much in keeping with Agile principles. However, rather than try to defend against change and risk, Agile projects take advantage of the near-term focus and visibility to accommodate change and surface risks early. (Jim Highsmith has some good advice related to what he calls a “Planning and Scanning” approach to managing projects, encouraging us to “expand our anticipation practices to include both planning (working with what we know), and scanning (looking ahead to learn the unknown as quickly as possible).”)

Regarding surfacing issues early, I have experienced managers responding to an Agile approach with the feeling, at least initially, that all they hear about day after day are the impediments the team is dealing with and the issues that need to be addressed. However, after a while, they see that these are issues that usually get dealt with immediately and which go on all the time, but are often not made visible in traditional projects. It is the visibility of the issues that initially produces the discomfort but, as time progresses, then produces greater confidence that teams are managing work well and ensuring stakeholders are kept informed. In this way, if large issues emerge, they are recognized earlier and addressed more promptly. This does not necessarily make them automatically easier to deal with, but does mean you’ll be dealing with them earlier, rather than later, increasing the likelihood of dealing with them at the least cost to the project.

Thursday, October 8, 2009

Common Project Risks and Agile Mitigations, Part 3, Planning & Estimation

Part 2 covered issues related to requirements as the number one issue plaguing projects. This part will address two of the next most frequently mentioned categories of issues that can lead to project failure: planning and estimation. Planning was the next most frequently mentioned category, but estimation, which is highly related to planning, was also high on the list. Therefore, the two have been combined and are discussed together.

Planning & Estimation Related Issues

As with requirements, the fact that planning is a problem area should come as no surprise. Making plans, getting agreement to them, managing the project using the plan, and making changes to a plan all involve substantial effort. Later, project management will be discussed. But, for this post, the focus will be on creating a plan.

The particular issues related to plan creation/initiation mentioned in the various survey sources include:

  • Allowing a plan to diverge from project reality, or to be created which diverges from that reality in the first place, due to lack of estimation experience, estimation under pressure, rejection of reasonable estimates (usually resulting in underestimation), ignoring the obvious (e.g., back-of-the-envelope calculations), and/or not revising plans if scope changes.
  • Failure to plan adequately or at all, to consider all project activities, or to address risk management, and inadequate documentation of decisions and commitments (leading to later disagreements, disappointments, and costly rework).
  • Infrequent project milestones (i.e., project "chunks" too large), allowing too much time to elapse between opportunities to validate that work done meets intended needs.
A telling comment from Watts Humphrey (as quoted in [May]) is that “any plan [project managers] put together won’t meet the [desired release] date, so they can’t plan.” Additionally, May states, “It is unfair to call a project a failure if it fails to meet budget and schedule goals that were inherently unattainable. Attempts to circumvent a project’s natural minimum limits will backfire. This problem occurs any time someone ‘makes up a number and won’t listen to anyone about how long other projects took,’ said Capers Jones.”

There are also famous quotes (mostly from the military) about planning and what happens when plans start to be executed such as “a battleplan never survives contact with the enemy” and “plans are not important, but planning is everything.” Interestingly, the formality, rigor, and hierarchical control structure of the military is classic, but the military also realizes how important it is for the people on the front lines to be prepared to inspect and adapt. Remember Clint Eastwood’s comment as Gunny Highway in “Heartbreak Ridge” regarding plans and response to situations in the field: “Improvise. Adapt. Overcome.”

Applicable Agile Values and Principles

As with the requirements category, most of the Values and Principles also seem to apply to planning and estimation.

The key in Agile-based planning (as with requirements specification) is simplicity born of close collaboration with the stakeholder(s) and frequent validation that work is meeting needs. Because of the iterative and incremental nature of an Agile approach, detail is reserved for the near-term with progressively less detail the further out the planning goes. This approach is taken to reduce the initial cost of detailed plans which, in all likelihood, will be changing anyway.

Because of this willingness to change plans if stakeholder needs and priorities change, Agile planning (and validation of the planning), as an activity, occurs frequently (i.e., daily, every few weeks). In this way, the plan will not diverge far from reality, can be adapted easily when changes occur, and will not consume significant effort to maintain. Most importantly, all planning is based on the regular, frequent delivery of working functionality as the basis for assessing the validity of the development activity.
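One way to see how a plan can be kept cheap to maintain yet close to reality is to re-derive the release forecast every iteration from the remaining backlog and recent velocity, rather than maintaining a detailed long-range schedule. This is a hedged sketch; the point totals and velocities are assumptions, not figures from this post:

```python
import math

# Rolling forecast: re-derived each iteration from recent velocity,
# so the plan never drifts far from reality. All numbers are invented.

def iterations_remaining(remaining_points, recent_velocities):
    """Forecast iterations left using the average of recent velocities."""
    avg_velocity = sum(recent_velocities) / len(recent_velocities)
    # Round up: a partially filled final iteration is still an iteration.
    return math.ceil(remaining_points / avg_velocity)

# 120 points remain; the last three iterations delivered 18, 22, and 20.
print(iterations_remaining(120, [18, 22, 20]))  # 6
```

Because the inputs (backlog size and demonstrated velocity) change every iteration, the forecast adjusts automatically when stakeholder priorities change, with essentially no plan-maintenance cost.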

Saturday, October 3, 2009

Common Project Risks and Agile Mitigations, Part 2, Requirements

As noted in Part 1, this series is intended to “identify the various reasons cited for project problems and suggest how elements of an Agile approach might minimize the risk of them occurring.” That first part set some background on where the categories of project difficulty came from and identified a list of some 17 such categories. A number of the latter categories were mentioned far less often in the sources of failure data and observations. So I’m going to cover the categories from most to least frequently cited, in the belief that’s how people would like to see the parts in this series emerge.

This part will address the most frequently mentioned category of issues that can lead to project failure: requirements.

Requirements Related Issues

It probably comes as no surprise that requirements related issues are at the top of the list. After all, there would be no project unless someone felt a need to have some work done; however, problems eliciting and defining requirements “completely” are likely no surprise. The traditional phased-sequential process expects someone to explain everything they want, analysts to translate that into formats (often text-based) understandable to all others who need to know, and rigorous control over changes. This is aggravated by the fact that many “someones” (i.e., stakeholders) may be involved. (Stakeholder issues related to project problems are covered later in this series and, no doubt, contribute to requirements being at the top.)

The particular issues related to requirements mentioned in the various survey sources seem to fall into two groups:

First, late discovery of information that was assumed to be true or was not known when work began.

  • Discovery that there is no more need for certain requirements (or the system as a whole) to be developed.
  • Late recognition of the seriousness of requirements repercussions.
  • Mid/late-development changes in requirements and scope.
Second, misperception in what could be expected and how satisfaction of expectations would be determined.

  • Inadequate acceptance criteria, which results in "poor-quality" delivered software.
  • Incomplete, ambiguous, inconsistent, and/or unmeasurable requirements (and other project objectives).
  • Unrealistic Expectations.
One could look at this list and also say the items are communication problems, that is, an inability to communicate what is needed early enough to avoid the problems described. That would be true enough. But it is also clear that change constitutes a good bit of the concern over the role of requirements in project problems. Finally, it seems that “knowledge” is another possible theme. All of these are focused here on requirements and contribute to project instability of one sort or another.

A couple of things are suggested by the above two groups of issues. In the first, much effort is usually expended based on assuming, one way or the other, that the information available after “due diligence” has occurred is, in fact, accurate. In the second, even without significant change, if criteria for assessing satisfaction of requirements are inadequate or applied only after much effort has been expended, disagreements are likely as to whether the result satisfies what was intended.

A large, up-front investment in very detailed descriptions of functionality, together with stringent change control procedures, is often not justified by later project occurrences and leads to both of the difficulties noted above. We ask much of stakeholders when we insist they can, and should, be fully explicit up front and then incur large costs when changes become necessary. Knowing this, however, stakeholders must acknowledge that avoiding such problems will require both regular, direct communication and effective prioritization of requirements on their part.

An interesting remark comes from Watts Humphrey [May] who has said, “You can’t design a process that assumes [requirements] are stable” so “learning what the requirements really are while building the product” should be expected.

Applicable Agile Values and Principles

When it comes to this category of issues, every Agile Value and most of the Principles apply.

From a change perspective, an Agile approach seeks a development process designed to adapt to, rather than resist, change. It does this by focusing on working software rather than documentation as its measure of achievement. This does not mean all documentation is rejected, just that it be kept as simple and direct as possible in leading to working software.

Up-front specifications are kept simple (e.g., stories), with stakeholder involvement used to replace extensive documentation passed from one person/group to another. Given short iterations and frequent demonstrations of working software, simplicity and brevity in specifications are possible. Simplicity in requirements is also achieved through emphasis on the importance of test cases as executable requirements specifications. Hence, less is written in static text/diagrams and more in executable scripts.
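To make the idea of "test cases as executable requirements specifications" concrete, here is a minimal, hypothetical sketch. The business rule (orders over $100 receive a 10% discount), the `Order` class, and the test names are all invented for illustration; the point is only that the requirement lives in runnable assertions rather than in static text.

```python
# Hypothetical example: a requirement written as executable tests.
# The Order class and the discount rule are invented for illustration.

class Order:
    """Minimal order model for the sketch: totals over $100 earn 10% off."""

    def __init__(self, subtotal):
        self.subtotal = subtotal

    def total(self):
        # Apply the (assumed) business rule: 10% discount over $100.
        if self.subtotal > 100:
            return round(self.subtotal * 0.90, 2)
        return self.subtotal


# Each test states one requirement; running the tests *is* checking
# the specification against the working software.
def test_discount_applies_over_100():
    # Requirement: orders over $100 receive a 10% discount.
    assert Order(200).total() == 180.00

def test_no_discount_at_or_under_100():
    # Requirement: orders of $100 or less pay full price.
    assert Order(100).total() == 100
    assert Order(80).total() == 80
```

Run under a test runner such as pytest, these assertions serve as acceptance criteria that can be re-validated every iteration, which is exactly the early, frequent feedback the surrounding paragraphs describe.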

Given that individuals on a project (e.g., developers, testers, product owners, etc.) engage in effective, direct interaction with one another on a daily basis, collaboration can replace extensive written specifications. Then, with software demonstrated and delivered within a few weeks of the start of the work (and every few weeks after that), there is significantly less chance that stakeholder expectations and development work will diverge greatly before a correction can be made.

This last point is an important one in reducing the cost of change and addressing the concern that full, up-front specification is needed to prevent disappointments at the end of the work. An Agile approach seeks validation of all aspects of the work on a frequent basis: every few hours, every day, every few weeks.

All of this, combined, does not guarantee that costly changes will never be required. Nothing can do that unless change is prohibited entirely. In some cases that may be possible, but usually only if the time from the beginning of the work to the end is short anyway. What an Agile approach can ensure is that change impacts, possible requirements misinterpretations, and dissatisfaction with functionality are identified early and resolved as inexpensively as possible, before significant cost has been incurred.