Each year, the Software Division of the American Society for Quality (ASQ) holds its International Conference on Software Quality. This year, it was held at the Hilton in Northbrook, Illinois, on November 10-11 (with a tutorial day on the 9th).
What follows are my notes on the sessions I attended and on the Agile “debate” in which I represented the “pro” Agile side. Other than keynotes, sessions ran in four parallel tracks, so my notes cover only the one session I attended of the four going on at any given time.
One thing to be aware of is that ICSQ attendees are often from regulated industries and firms doing government-related contracting, where formal, standards-driven quality approaches are the rule.
Tuesday, November 10, 2009
Keynote – Bill Curtis on “Quality in Multi-Tiered IT Applications”
Bill Curtis has been a researcher, practitioner, and chief scientist in software methods and individual performance for many decades. He has worked at ITT, MCC (the Austin, TX research consortium), the SEI (as head of the Process program), TeraQuest (process assessment and improvement), and now CAST Software. I have known Bill over the years from his time at MCC, the SEI, and TeraQuest, in particular coordinating (and applying the results of) his research on software design expertise for the company where I was working at the time.
Curtis started by saying, “We’re moving beyond just what smarts and knowledge can handle.” By this, he meant that systems and their interactions have evolved (and continue to evolve) to the point where product (code) quality ideas are not enough to manage the desired results. Expertise in application-level quality, i.e., how all the components interact, is what has the largest impact on system quality today. Quoting a CACM article (Jackson, “A Direct Path to Dependable Software,” CACM v. 52, no. 4), “The correctness of code is rarely the weakest link.”
Curtis pointed to problems with design choices that “pass (functional) tests” but are (at best) inadvisable practice once a system must scale and meet non-functional production requirements. Presaging the end-of-day keynote by Joe Jarzombek, Curtis said that we need to be able to make dependability, assurance, etc. “cases” about our systems. That is, we should be able to provide evidence to support arguments that justify belief in claims about such non-functional requirements.
Curtis offered a few other ideas such as:
- addressing people’s ability to understand a system when changes must be made, since he said 50% of the change effort in maintenance is devoted just to figuring out what the system (not an individual module) is doing;
- allowing (and training) testing groups to do true QA, indeed do Quality Engineering, which would require a broader involvement of personnel from testing organizations in the full lifecycle of work as well as not regarding test groups as “entry-level” positions;
- understanding the COO’s view on the need to standardize ways of doing things an organization does not compete on.
Finally, Curtis mentioned the Consortium for IT Software Quality “sponsored by a partnership between the Software Engineering Institute (SEI) at Carnegie Mellon University and the Object Management Group (OMG) to combine their industry-leading strengths in developing software-related standards and appraiser licensing programs.” As one of its activities, Curtis said, it will work to create more operational definitions of software (non-functional) quality characteristics (i.e., the “ilities”). The ISO 25000 series, which supplanted ISO 9126, has definitions, but the CISQ’s work suggests they are not viewed as operational enough.
Tom Roth – Psychology of Software Quality
Roth’s background is in embedded avionics software quality assurance, where he oversees reviews, internal audits, testing, etc.
Roth started by saying we should “think of QA as people trying to influence the behavior of other people developing software.” Hence, his talk was about influencing people with regard to quality and about the importance of QA knowing how to develop trust in developers, since not everything can be reviewed. (Interestingly, an article in this month’s issue of the ASQ’s main magazine, Quality Progress, is entitled “Trust, But Verify.”) But Roth cautioned against “enjoying the power you have” as an arbiter of quality, urging instead that knowledge of psychology be used to establish a collaborative relationship with development groups/management.
In discussing inspections and reviews, Roth noted that software engineering formalities, such as Fagan inspections, impart a level of discipline and, potentially, a group sharing of responsibility, which may not exist with individuals alone. Indeed, inspections turn the deliverable being inspected over to the group and the individual does not bear the full brunt of making sure quality is present in that deliverable. From an Agile perspective, I was thinking that, after a while, such discipline should become more internalized and less dependent on external rigor.
Some of the things Roth touched on were how:
- in relationships, differences often attract while similarities comfort, but trying to force sameness can destroy the attraction;
- inappropriate habits exert tremendous influence on further (and, perhaps even worse, expanded) inappropriate behavior;
- we are not 2, 3 or 4 different people, despite external appearances under different social circumstances, i.e., there is a congruence in behavior which, especially under stress, will reveal itself;
- people working alone can spend 2/3 of their time evaluating alternatives and 1/3 implementing a chosen alternative, while two people working together reverse that balance, effectively quadrupling the productivity of one person alone (see the back-of-envelope arithmetic below);
- behavior [engineering] leads attitude [morality] - you can tell people what to do but not how/what to think, so work on behaviors/practices and allow the thinking to come along on its own.
The last two struck me as quite interesting, of course, from an Agile perspective.
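As a quick check of that “quadrupling” claim, here is my own back-of-envelope reading of the arithmetic (not Roth’s math, and it assumes “productivity” here simply means implementation effort delivered per unit of elapsed time):

```python
# Back-of-envelope reading of the "quadrupling" claim (my interpretation, not Roth's math).
# Assumption: "productivity" means implementation effort delivered per unit of elapsed time.

solo_implementing = 1 / 3          # a person alone: 2/3 evaluating, 1/3 implementing
pair_implementing = 2 / 3          # a pair: the balance reverses
people_in_pair = 2

solo_output = solo_implementing                     # ~0.33 person-units of implementation per unit time
pair_output = people_in_pair * pair_implementing    # ~1.33 person-units of implementation per unit time

print(pair_output / solo_output)   # 4.0 -- roughly four times what one person produces alone
```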
Ed Weller – Getting Management to Listen
Many talks have been given over the years about how to talk to management, and Ed Weller covered some of the same terrain in terms of speaking from a cost/dollars perspective. However, he did offer some specific ideas related to managers who are:
- used to technical “change agents” (1) underestimating the cost of implementation, (2) overestimating improvement benefits, and (3) introducing risk that management is not comfortable with;
- faced with “Problem Saturation”, i.e., “consumed solving today’s problems” with “no time for next week’s (month’s/year’s) problem.”
Weller’s suggestion was to focus on data on the cost of rework, pre/post ship defects, and, in general, poor quality. From a lean/agile perspective, this means showing management how they can reduce waste in the software process.
Rebecca Staton-Reinstein – Using A Cost of Quality Model to Drive Improvement
This was a fairly standard talk on the elements of CoQ models. Some of the audience interaction and comments were of interest, especially regarding the difficulties in doing CoQ calculations:
- collecting data accepted as “accurate” enough to truly prove/show improvement ROI is very difficult for an organization that does not have some level of process discipline and decent data capture capability;
- such models, on the cost-avoidance side, are talking about things that haven’t happened yet, requiring (accepted) historical data to show prior trends that could reasonably be extrapolated to the future;
- belief in quality as a matter of personal “morality” or “will” (i.e., we have problems because people just don’t try hard enough to do the job “right”) rather than something addressable through an engineering approach;
- being able to take quality data and relate it to schedule and budget impact.
Then, at some point during the talk, the following thought struck me: if you do things with urgency, you won’t have to do them in a rush.
Keynote – Joe Jarzombek, National Software [Security] Assurance effort from DHS
Joe Jarzombek directs the Department of Homeland Security’s program on Software Assurance and has been doing this, with little budget, for over 4-1/2 years. I met Joe through my activities on the IEEE Software and Systems Engineering Standards Committee when he was still consulting within the Dept. of Defense. (Before that, he was an active duty Lt. Colonel with the Army serving at the Pentagon.) Joe’s job is to promote an interest in and work actively toward securing the infrastructure of the USA from cyber attack. To do this over the years he has brought together academic institutions, government agencies (DHS, Dept. of Commerce, Dept. of Energy, and DoD), non-profit agencies, and commercial organizations to work on a variety of efforts in tools, techniques, guidance, educational programs, standards (working with IEEE and ISO), etc.
Joe’s talk is one I have heard over his many years with DHS. He updates it regularly with the latest status on the efforts noted above. And, in case it is not otherwise obvious, the “assurance” focus of this work is on writing secure software and securing infrastructure computing.
Since most of the printed materials arising from the efforts of the participants he has brought together are produced under government funding, they are freely available at the Build Security In website under the “DHS SwA Web Site” link. Another good source of material on Software Assurance is articles from issues of Crosstalk (the Journal of Defense Software Engineering), which are also freely available. And, though a few years old, a July 31, 2007 “State of the Art Report on Software Security Assurance” is also available.
Wednesday, November 11, 2009
Keynote – Edy Liongosari, “Everything’s Elastic”
Liongosari directs work at Accenture Technology Labs and spoke about the changing landscape of computing as it moves from traditional computers to mobile devices. Most of the trends he noted (e.g., cloud computing) were not new; however, some of the implications and data were interesting.
For example, 30% of “smart” phones are owned by people with family incomes at or below $30,000. For them, this was their computing platform, in the sense that they did their “computing” through internet access to sources of information, data, and applications. (On the latter point, Liongosari noted that there were some 100,000 iPhone applications available.) From a third-world perspective, Liongosari noted that, despite widespread cell-phone use in developed countries, cell technology was even more prevalent in the third world, where land-line phones, computers, bank accounts, etc. were not at all common or available. Indeed, there were places, he said, where people “barely able to feed themselves” had cell phones.
Liongosari also spent some time talking about how large organizations were beginning to use cloud capability to get work done in a fraction of the time it would have taken them to set up in-house infrastructure to handle the same level of computing. He even noted an insurance firm (unnamed) that uploaded data to the cloud, performed massive analysis, and downloaded the data and results a few hours later, “renting” the time and resources.
From a social computing perspective, he talked about how companies were starting to use such ideas (if not the most well-known social sites) in “harnessing the power of the crowd” to collect ideas and trends. Some examples were IBM’s Bluehouse, Adobe’s Cocomo, and Dell’s Ideastorm.
Another point made was how people in the workforce from their teens to late twenties have grown up expecting free access to computing resources, and what this means when they are in company environments. Liongosari also noted the relative lack of concern people in this age group have for the idea of privacy, which is a concern (and was mentioned in Joe Jarzombek’s talk) with regard to cloud computing.
While I listened to this keynote, another thought came to me: IT is moving from meaning “institutional” to “individual” technology, even more than the PC represented such a move, since it is not just owning your own computing resources now, but having an individual “right” to information, that is starting to dominate thinking.
Tim Olson – Lean Principles and Process Models
Tim Olson has considerable background in process (improvement) consulting, having worked at the SEI early on in the CMM program, being involved with the Juran Institute, having Six Sigma experience, and, most recently, working with Lean.
Olson started by relating an example from Deming about the danger of trying to replicate others’ success by simply copying their apparent behavior without understanding the context and principles behind the practices. This resonated greatly with me because it is an issue I have seen with Agile adoption when companies learn practices and techniques without an understanding of Agile Values and Principles.
For the most part, Olson’s talk was about basic Value-Stream and Value-Stream Mapping ideas. However, he noted the lack of automated tools to make the mapping easier. He did note that, in discussion with Toyota, he learned they used walls and boards, without automated tools. But Olson’s own process definition approach, focused on diagrammatic rather than textual process description, has led him to apply process modeling/mapping tools to value-stream work.
He did caution, however, that simply trying to transplant Lean manufacturing ideas to software has been a problem in various cases since this has resulted in removing “non-value-added” activities such as configuration management.
Siegfried Zopf – Pitfalls in Globally Distributed Projects
Zopf is from Austria and discussed issues and problems he has observed working in multi-national, multi-cultural project situations.
Zopf began by making a distinction between distributed and outsourced situations and then between minimal and large-scale outsourcing. Overall, he seemed to be emphasizing awareness of the level of control appropriate to the different combinations of situations. For example, in a minimally responsible outsourcing situation – where the work is confined to low-level design and coding – risk is low and pulling the work back in-house is not a problem, but the financial savings are lower. On the other hand, there is great financial advantage in outsourcing greater parts of the work, but much more local project management is required in the outsourced locations. Zopf suggested allowing for 15-20% of cost for such a “beachhead” operation.
Zopf also noted, in connection with any larger outsourcing effort, how planning must account for extra costs in project management, communication, travel, documentation, and knowledge transfer that would not exist in a fully local project. Thus, a company cannot take an estimate done assuming in-house work and then simply portion out parts of the work, “transplanting” the estimate and plans without adjusting for the differences.
And, for distribution matters in general, whether outsourcing is involved or not, there are still issues related to national and cultural differences, regardless of whether or not it is the “same” company. A couple of examples of what he discussed are:
- two groups, for each of whom English is a second language, can have problems communicating in English that do not arise when at least one group consists of native English speakers;
- monochronistic and polychronistic cultures, where the former view time as linear and compartmentalized and value punctuality, while the latter view time as more fluid, schedule multiple things at the same time, and even expect people/things to be late.
One final point in distributing work (with or without outsourcing) is process mismatch. Specifically, a high-maturity organization (Level 5 on the CMMI scale) will find it difficult to work with another organization that is not at least Level 3. In the reverse direction, a low-maturity organization may find it frustrating to work with the expectations and pace of a high-maturity one.
Ron McClintic – Performance Testing
Ron was the “con” side in the Agile “debate” (which I describe below) and has many years of testing/QA experience working for and in the GE Capital environment. He currently works with applications that collect and analyze data on truck fleets using a combination of hardware, embedded software, and more traditional application software. The multiple-vendor, networked environment he works in matches quite well with the multi-tiered issues Bill Curtis discussed.
His talk could be considered a “case study,” since he went into detail about the various points for performance testing his efforts need to address, from the lowest levels of (vendor-supplied) software doing network and database optimization and capacity adjustments up to customer GUIs and response expectations. On the latter, he noted work done to determine response-time thresholds, from the minimum one would ever need (which matched the time it takes a person to perceive a screen change) up to the upper limit beyond which a customer would abandon a (web) application and go elsewhere. It turns out that upper limit is about 4x the tolerated time frame. The problem is that, for different client situations, the tolerated times vary.
As an example of a difficult performance/response issue to address, one client reported significant performance problems before 10am each day, problems which seemed to go away at or about 10am. After a few weeks of probing and testing, it was discovered that 10am local time was when the company’s European offices, mostly in the UK, ended their day, a fact not previously revealed to Ron’s group. The moral: not all issues are really technical ones, even though they have technical impacts.
One statement Ron made rang very true to me, especially in light of one of the points Bill Curtis made: that testing and test tools should be used in an “investigative,” rather than “reactive,” manner. I think, overall, this is an important way to view the real value testing and test staff can offer an organization, as opposed to the “last ditch” quality control function they often perform. QA (testing and other true QA functions) should supply information to the rest of the organization about the status of progress and the quality of work.
The Agile “Debate” – Ron McClintic & Scott Duncan
The other talks/keynotes described above occurred in the order I’ve listed them. This “debate” actually occurred Tuesday afternoon just before Joe Jarzombek’s keynote. I’ve saved it for last since I am commenting on my own (and Ron’s) session. Ron had proposed this and I had been recommended to him by the Conference Chair, Mark Neal, whom I have worked with before on Software Division activities.
We started the session with me going over, briefly, the Agile Values and Principles and stating that I felt these are what “define” Agile. The various methods preceded the Snowbird meeting; the term “Agile,” as applied to software development, arose at that meeting along with the Vs & Ps. So, for me, that means the Vs & Ps are what define “Agile,” while practices and techniques are examples of possible implementation approaches. Ron had no real problem with this, as he noted he agreed with these ideas. His objection to Agile came from two sources; he felt:
- it resulted in overall “suboptimization” of a project because it focused only on optimization of the development piece, and
- it did not focus on the actual profitability of a company that sells to the marketplace, defining value as just the delivery of working software.
Thus, his argument was that a more traditional approach to projects, one that accounted for the full product lifecycle, including longer-term maintenance and support costs, was more appropriate. He also felt there had been no appropriate trials of similar project situations that collected data to show the benefit of a well-conducted traditional effort compared to an Agile one.
He and the audience had stories to tell about “purist” insistence from consultants that Agile teams bear no responsibility for such issues, since these were matters for the business beyond the development team. What I was hearing were stories of:
- “teams” without appropriate business collaboration or all the skills needed to do the work or
- projects where the organization, itself, isolated groups outside of development from trying to pursue/accommodate an Agile approach, insisting on more formal handoffs or
- developers insisting that going back to fix work not done completely or well in the first place constituted “refactoring” or
- the usual litany of refusal to document, measure, and/or plan.
Indeed, in one case that Ron noted, developers were writing code without requirements. I had to ask him how they got the okay to be developing anything with “no requirements” and could then suggest this was an “Agile” approach.
A couple audience members also brought up the book “eXtreme Programming Refactored” and its claims of failure for the C3 project.
What I found was that people were exceedingly receptive to an explanation of the Values and Principles and accepted practices, seeing how wrong it was to characterize many of these behaviors as “Agile,” rather than merely ad hoc.
Of course, throughout the Conference, there were stories and discussions about this same sort of thing happening to other ideas once they had “crossed the chasm.” Mark Paulk, for example, was there discussing various process improvement ideas as well as his research work at Carnegie Mellon University with Scrum. He and I sat at lunch with a number of people who were at this “debate” or had other Agile contact and discussed how similar things were after a while for the CMM (and now for the CMMI) with people ascribing “requirements” status to guidance material and pushing their own idea of process “rightness” in assessing companies.
So I have left this “debate” topic until the end, and am not going into great detail about what was said because, overall, it demonstrated the attraction the Manifesto’s Values and associated Principles have for people. It also demonstrated, to me at least, the need that exists for people to understand how practices are intended to implement those Vs & Ps and not simply copy (or think they are copying) them without such understanding.