Chapter 5 Transitioning Typical Processes
There are many processes in a traditional project that don’t transition well to agile, either because they require heavyweight documentation or because they are an inherent part of a phased and gated process, with sign-offs required at the end of each stage.
Like anything else, there are no hard and fast rules for transitioning your processes to a more agile or lightweight process. In this chapter, we discuss a few of those processes, and give you alternatives and guidance on how to work with them in an agile project. You’ll find more examples and details about these alternatives in Parts III, IV, and V.
Seeking Lightweight Processes
When teams are learning how to use agile processes, some of the more traditional processes can be lost in the shuffle. Most testers who are used to working with traditional phased and gated development methodologies are accustomed to producing and using metrics, recording defects in a formal defect tracking system, and writing detailed test plans. Where do those fit in agile development?
Many software organizations must comply with audit systems or quality process models. Those requirements don’t usually disappear just because you start using agile development practices. In fact, some people worry that agile development will be incompatible with such models and standards as CMMI and ISO 9000.
It might be more fun to talk about everything that’s new and different when testing on an agile project, but we still need ways to measure progress, track defects, and plan testing. We also need to be prepared to work with our organization’s quality models. The key is to keep these processes lightweight enough to help us deliver value in a timely manner. Let’s start by looking at metrics.
Metrics
Metrics can be controversial, and we spend a lot of time talking about them. Metrics can be a pit of wasted effort, numbers for the sake of numbers. They are sometimes used in harmful ways, although they don’t have to be bad. They can guide your team and help it measure progress toward its goals. Let’s take a look at how metrics can help agile testers and their teams.
Lean Measurements
Lean software development practitioners look for ways to reduce the number of measurements and find measurements that will drive the right behaviors. Implementing Lean Software Development: From Concept to Cash, by Mary and Tom Poppendieck, is an excellent resource that teaches how to apply lean initiatives to your testing and development efforts.
According to the Poppendiecks [2007], a fundamental lean measurement is the time it takes to go “from concept to cash,” from a customer’s feature request to delivered software. They call this measurement “cycle time.” The focus is on the team’s ability to “repeatedly and reliably” deliver new business value. Then the team tries to continuously improve their process and reduce the cycle time.
Measurements such as cycle time that involve the whole team are more likely to drive you toward success than are measures confined to isolated roles or groups. How long does it usually take to fix a defect? What can the team do to reduce that latency, the amount of time it takes? These types of metrics encourage collaboration in order to make improvements.
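Cycle time is simple enough that a team can compute it with almost no tooling. The sketch below is a hypothetical example, not taken from the Poppendiecks’ book: it averages the number of days from feature request to delivery over a few made-up dates.

```python
from datetime import date

# Hypothetical sample data: (feature requested, feature delivered) date pairs.
features = [
    (date(2008, 1, 7), date(2008, 2, 1)),
    (date(2008, 1, 14), date(2008, 2, 15)),
    (date(2008, 2, 4), date(2008, 2, 22)),
]

# Cycle time for each feature, in days, from request to delivery.
cycle_times = [(delivered - requested).days for requested, delivered in features]

average_cycle_time = sum(cycle_times) / len(cycle_times)
print(f"Average cycle time: {average_cycle_time:.1f} days")
```

Watching this one number trend downward over several releases tells the team more than any single snapshot of it.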
Another lean measurement the Poppendiecks explain in their book is financial return. If the team is developing a profitable product, it needs to understand how it can work to achieve the most profit. Even if the team is developing internal software or some other product whose main goal isn’t profit, it still needs to look at ROI to make sure it is delivering the best value. Identify the business goals and find ways to measure what the team delivers. Is the company trying to attract new customers? Keep track of how many new accounts sign on as new features are released.
Lean development looks for ways to delight customers, which ought to be the goal for all software development. The Poppendiecks give examples of simple ways you can measure whether your customers are delighted.
We like the lean metrics, because they’re congruent with our goal to deliver business value. Why are we interested in metrics at all? We’ll go into that in the next section.
Why We Need Metrics
There are good reasons to collect and track metrics. There are some really bad ones too. Anyone can use good metrics in terrible ways, such as using them as the basis for an individual team member’s performance evaluation. However, without metrics, how do you measure your progress?
When metrics are used as guideposts—telling the team when it’s getting off track or providing feedback that it’s on the right track—they’re worth gathering. Is our number of unit tests going up every day? Why did the code coverage take a dive from 75% to 65%? There might be a good reason—maybe we got rid of unused code that was covered by tests. Metrics can alert us to problems, but in isolation they don’t usually provide value.
Metrics that measure milestones along a journey to achieve team goals are useful. If our goal is to increase unit test code coverage by 3%, we might run the code coverage every time we check in to make sure we didn’t slack on unit tests. If we don’t achieve the desired improvement, it’s more important to figure out why than to lament whatever amount our bonus was reduced as a result. Rather than focus on individual measurements, we should focus on the goal and the trending toward reaching that goal.
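One lightweight way to treat coverage as a guidepost rather than a scorecard is a small trend check run at check-in time. This is a hypothetical sketch: it assumes a coverage tool has already produced the baseline and current percentages, and it reports the trend so the team can ask “why” rather than judge the raw number.

```python
# A hypothetical check-in gate: compare today's coverage against a baseline
# and flag any significant drop, so the team investigates the trend.
def coverage_alert(baseline_pct: float, current_pct: float, tolerance: float = 1.0) -> str:
    """Return a short message describing the coverage trend."""
    drop = baseline_pct - current_pct
    if drop > tolerance:
        return f"Coverage fell {drop:.1f}% (from {baseline_pct}% to {current_pct}%): find out why."
    if current_pct > baseline_pct:
        return f"Coverage rose to {current_pct}%: trending toward the goal."
    return "Coverage is holding steady."

print(coverage_alert(75.0, 65.0))  # the dive described above
```

A check like this answers “are we trending toward the goal?” without ever turning the percentage into a performance score for an individual.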
Metrics help the team, customers included, to track progress within the iteration and within the release or epic. If we’re using a burndown chart, and we’re burning up instead of down, that’s a red flag to stop, take a look at what’s happening, and make sure we understand and address the problems. Maybe the team lacked important information about a story. Metrics, including burndown charts, shouldn’t be used as a form of punishment or source of blame. For example, questions like “Why were your estimates too low?” or “Why can’t you finish all of the stories?” would be better coming from the team and phrased as “Why were our estimates so low?” and “Why didn’t we get our stories finished?”
Metrics, used properly, can be motivating for a team. Lisa’s team tracks the number of unit tests run in each build. Big milestones—100 tests, 1,000 tests, 3,000 tests—are a reason to celebrate. Having that number of unit tests go up every day is a nice bit of feedback for the development and customer teams. However, it is important to recognize that the number itself means nothing. The tests might be poorly written, or the product might need 10,000 tests to be well tested. Numbers don’t work in isolation.
Lisa’s Story
Pierre Veragen told me about a team he worked on that was allergic to metrics. The team members decided to stop measuring how much code their tests covered. When they decided to measure again after six months, they were stunned to discover that coverage had dropped from 40% to 12%.
How much is it costing you to not use the right metrics?
—Lisa
When you’re trying to figure out what to measure, first understand what problem you are trying to solve. When you know the problem statement, you can set a goal. These goals need to be measurable. “Reduce average response time on the XYZ application to 1.5 seconds with 20 concurrent users” works better than “Improve the XYZ application performance.” If your goals are measurable, the measurements you need to gather to track the metrics will be obvious.
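A measurable goal like the one above can be checked directly. The sketch below simulates 20 concurrent users against a stand-in request function—in practice you would call the real XYZ application or use a load-testing tool—and compares the average response time to the 1.5-second target.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real HTTP call to the XYZ application; swap in a real
# client or load-testing tool for actual measurements.
def make_request() -> float:
    start = time.perf_counter()
    time.sleep(0.05)  # simulate the application responding
    return time.perf_counter() - start

# Measure average response time with 20 concurrent users.
with ThreadPoolExecutor(max_workers=20) as pool:
    response_times = list(pool.map(lambda _: make_request(), range(20)))

average = sum(response_times) / len(response_times)
goal_met = average <= 1.5
print(f"Average response time: {average:.3f}s, goal met: {goal_met}")
```

Because the goal names a concrete number and a concrete load, the script can answer “did we achieve it?” with a simple yes or no.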
Remember to use metrics as a motivating force and not for beating down a team’s morale. This wisdom bears repeating: Focus on the goal, not the metrics. Maybe you’re not using the right metrics to measure whether you’re achieving your team’s objectives, or perhaps you’re not interpreting them in context. An increased number of defect reports might mean the team is doing a better job of testing, not that they are writing more buggy code. If your metrics aren’t helping you to understand your progress toward your goal, you might have the wrong metrics.
What Not to Do with Metrics
Mark Twain popularized the saying, which he attributed to Benjamin Disraeli, “There are three kinds of lies: lies, damned lies, and statistics.” Measurable goals are a good thing; if you can’t gauge them in some way, you can’t tell if you achieved them. On the other hand, using metrics to judge individual or team performance is dangerous. Statistics by themselves can be twisted into any interpretation and used in detrimental ways.
Take lines of code, a traditional software measuring stick. Are more lines of code a good thing, meaning the team has been productive, or a bad thing, meaning the team is writing inefficient spaghetti-style code?
What about number of defects found? Does it make any sense to judge testers by the number of defects they found? How does that help them do their jobs better? Is it safe to say that a development team that produces a higher number of defects per lines of code is doing a bad job? Or that a team that finds more defects is doing a good job? Even if that thought holds up, how motivating is it for a team to be whacked over the head with numbers? Will that make the team members start writing defect-free code?
Communicating Metrics
We know that whatever we measure is bound to change. How many tests are running and passing? How many days until we need a “build on the shelf”? Is the full build passing? Metrics we can’t see and easily interpret aren’t worth having. If you want to track the number of passing tests, make sure that metric is visible in the right way, to the right people. Big visible charts are the most effective way we know of displaying metrics.
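If your build already produces pass/fail counts, turning them into a big visible chart can take only a few lines of reporting code. This sketch uses made-up counts for the three test levels mentioned below and renders a small text chart suitable for a build email or wiki page.

```python
# Hypothetical daily build results: tally test counts by level and render
# a tiny text chart that could go into a build email or on a wiki page.
results = {
    "unit": {"passed": 2982, "failed": 3},
    "behind-the-GUI": {"passed": 415, "failed": 0},
    "GUI": {"passed": 88, "failed": 1},
}

lines = []
for level, counts in results.items():
    total = counts["passed"] + counts["failed"]
    bar = "#" * (counts["passed"] * 20 // total)  # scale to a 20-char bar
    lines.append(f"{level:>15}: {counts['passed']}/{total} passing {bar}")

chart = "\n".join(lines)
print(chart)
```

The point isn’t the tooling; it’s that the numbers are pushed in front of the whole team every day instead of sitting unread in a report.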
Lisa’s Story
My previous team had goals concerned with the number of unit tests. However, the number of unit tests passing wasn’t communicated to anyone; there were no big visible charts or build emails that referred to that number. Interestingly, the team never got traction on automating unit tests.
At my current company, everyone in the company regularly gets a report of the number of passing tests at the unit, behind-the-GUI, and GUI levels (see Tables 5-1 and 5-2 for examples). Business people do notice when that number goes down instead of up. Over time, the team has grown a huge number of useful tests.
—Lisa
Table 5-1 Starting and Ending Metrics
Table 5-2 Daily Build Results
Are your metrics worth the trouble? Don’t measure for the sake of producing numbers. Think about what you’ll learn from those numbers. In the next section, we consider the return on investment you can expect from metrics.
Metrics ROI
When you identify the metrics you need, make sure you can obtain them at a reasonable cost. If your continuous build delivers useful numbers, it delivers good value. You’re running the build anyway, and if it gives you extra information, that’s gravy. If you need to do a lot of extra work to get information, ask yourself whether it’s worth the trouble.
Lisa’s team went to a fair amount of trouble to track actual time spent per story versus estimated time. What did they learn, other than the obvious fact that estimates are just that? Not much. Some experienced teams find they can dispense with the sprint burndown chart because the task board gives them enough information to gauge their progress. They can spend the time they would have used estimating tasks and calculating remaining hours on more productive activities.
This doesn’t mean we recommend that you stop tracking these measurements. New teams need to understand their velocity and burndown rate, so that they can steadily improve.
Defect rates are traditional software metrics, and they might not have much value on a team that’s aiming for zero defects. There’s not much value in knowing the rate of bugs found and fixed during development, because finding and fixing them is an integral part of development. If a tester shows a defect to the programmer who’s working on the code, and a unit test is written and the bug is fixed right away, there’s often no need to log a defect. On the other hand, if many defects reach production undetected, there can be value in tracking the number to know if the team improves.
When it started to rewrite its buggy legacy application, Lisa’s team set a goal of no more than six high-severity bugs reported against new production code over a six-month period. Having a target that was straightforward and easy to track helped motivate the team to find ways to head off bugs during development and to exceed this objective.
Figure each metric’s return on investment and decide whether to track or maintain it. Does the effort spent collecting it justify the value it delivers? Can it be easily communicated and understood? As always, do what works for your situation. Experiment with keeping a particular metric for a few sprints and evaluate whether it’s paying off.
One common metric that relates to software quality is the defect rate. In the next section, we look at reasons to track defects, or to not track defects, and what we can learn from them.
Defect Tracking
One question asked by every new agile team is, “Do we still track bugs in a defect tracking system?” There’s no simple answer, but we’ll give you our opinion on the matter and offer some alternatives so that you can determine what fits your team.
Why Should We Use a Defect Tracking System (DTS)?
A lot of us testers have used defect tracking as the only way to communicate the issues we saw, and it’s easy to keep using the tools we are familiar with. A DTS is a convenient place to keep track of not only the defect itself but also its priority, severity, and status, and to see to whom it is assigned. Many agile practitioners say that we don’t need to do this anymore, that we can track defects on cards or with some other simple mechanism. We could write a test to show the failure, fix the code, and keep the test in our regression suite.
However, there are reasons to keep using a tool to record defects and how they were fixed. Let’s explore some of them now.
Convenience
One of the concerns about giving up a defect tracking system is that there is no place to keep all of the details of a bug. Testers are used to recording a bug with lots of information, such as how to reproduce it, what environment it was found in, or what operating system or browser was used. Not all of this information fits on a card, so how do you capture those details? If you are relying only on cards, you also need conversation. But with conversation, details get lost, and sometimes a tester forgets exactly what was done—especially if the bug was found a few days before the programmer tackles the issue.
A DTS is also a convenient place to keep all supplemental documentation, such as screen prints or uploaded files.
Knowledge Base
We have heard reasons to track defects such as, “We need to be able to look at old bug reports.” We tried to think of reasons why you would ever need to look at old bug reports, and as we were working on this chapter, Janet found an example.
Janet’s Story
When I was testing the pre-seating algorithm at WestJet, I found an anomaly. I asked Sandra, another tester, if she had ever come across the issue before. Sandra vaguely recalled something about it but not exactly what the circumstances were. She quickly did a search in Bugzilla and found the issue right away. It had been closed as invalid because the business had decided that it wasn’t worth the time it would take to fix it, and the impact was low.
Being able to look it up saved me from running around trying to ask questions or reentering the bug and getting it closed again. Because the team members sit close to each other, our talking led to another conversation with the business analyst on the team. This conversation sparked the idea of a FAQ page, an outstanding issues list, or something along that line that would provide new testers a place to find all of the issues that had been identified but for which the decision had been made not to address them.
—Janet
This story shows that although the bug database can be used as a knowledge base, there might be other mechanisms for keeping business decisions and their background information. If an issue is old enough to have been forgotten, maybe we should rewrite it and bring it up again. The circumstances may have changed, and the business might decide it is now worthwhile to fix the bug.
The types of bugs that are handy to keep in a DTS are the ones that are intermittent and take a long time to track down. These bugs present themselves infrequently, and there are usually gaps in time during which the investigation stalls for lack of information. A DTS is a place where information can be captured about what was figured out so far. It can also contain logs, traces, and so on. This can be valuable information when someone on the team finally has time to look at the problem or the issue becomes more critical.
The information in bug reports can be used later for several purposes. Here’s a story from Lisa’s team on how it uses its information.
Lisa’s Story
One developer from our team serves on a “production support” rotation for each iteration. Production support requests come in from the business side for manual fixes of past mistakes or production problems that need manual intervention. The “production support person” researches the problem and notes whatever was done to fix it in the bug report. These notes usually include a SQL statement and information about the cause. If anyone encounters the same error or situation later, the solution can be easily found in the DTS. If certain types of problems seem to occur frequently, the team can use the DTS for research and analysis. Even though our team is small, we deal with a lot of legacy code, and we can’t rely on people’s memory to keep track of every problem and fix.
—Lisa
Remembering the cause of defects or what was done to fulfill a special request is even harder when the team is particularly large or isn’t co-located. Customers might also be interested in the solutions to their problems.
Large or Distributed Teams
If projects are so large that defects found by one team might affect other teams, a DTS is probably a good choice. Of course, to be useful it needs to be accessible to all members of the team. Face-to-face communication is always our first choice, but when circumstances make that impractical, we need aids such as a DTS.
Customer Support
When customers report defects after a release, they usually want to know when those defects have been fixed. It’s invaluable for the help desk or technical support to know what was fixed in a given release. They can also find defects that are still outstanding at release time and let the customers know. A DTS makes it much simpler to pull this information together.
Metrics
There are reasons to track defect rates. There are also reasons why you wouldn’t track a particular defect. For example, we don’t think that a bug should be counted as a defect if it never makes it out of the iteration. This, of course, brings up another discussion about what we should track and why, but we won’t discuss that here.
Chapter 18, “Coding and Testing,” explores metrics related to defect rates.
Traceability
Another reason we’ve heard for having a DTS is traceability, linking defects to test cases. We’re not sure that this is a valid reason. Not all defects are linked to test cases, nor should they be. For example, errors like spelling mistakes might not need specific test cases. Maybe the product was not intuitive to use; this is a very real bug that often goes unreported. How do you write a test to determine if something is usable? Exploratory testing might find bugs in edge conditions that are not worth the effort of creating automated tests.
If an automated test caught the bug, the need to record that defect is further reduced, because the test will catch it again if the bug is ever reintroduced. The need for traceability is gone. So, maybe we don’t need to track defects.
Why Shouldn’t We Use a DTS?
Agile and Lean provide us with practices and principles that help reduce the need for a DTS. If the process is solid, and all of the people are committed to delivering a quality product, defects should be rare and very simply tracked.
As a Communication Tool
Defect tracking systems certainly don’t promote communication between programmers and testers. They can make it easy to avoid talking directly to each other.
Waste of Time and Inventory
We tend to put lots of information into the DTS in addition to all of the steps to reproduce the defect. Depending on the bug, it can take a long time to write these steps so that the programmer can reproduce it as well. Then there is the triage, and someone has to make comments, interpret the defect, attempt to reproduce it, (ideally) fix it, write more comments, and assign it back to the person who reported it. Finally, the fix can be verified. This whole cycle can double if the programmer misunderstood the problem in the first place. The cost of a single defect report can become exorbitant.
How much easier would it be if we as testers could just talk to the programmer and show what we found, with the developer then fixing the defect right away? We’ll talk more about that later.
In Chapter 18, “Coding and Testing,” we’ll explain how testers and programmers work together on bugs.
Defects in a DTS become a queue or a mini product backlog. According to lean principles, this inventory of defects is a waste. As a team, we should be thinking of ways to reduce this waste.
Janet’s Story
In 2004, Antony Marcano, author of TestingReflections.com, wrote a blog post about the idea of not using a bug-tracking system. When it was discussed on mailing lists, many testers flamed him as if he were introducing heresy. He gets a different reception now, because the idea is making its way into the mainstream of agile thinking.
He suggests that bug-tracking systems in agile teams are just “secret backlogs.”
—Janet
Antony will share his ideas about the hidden backlog when we cover iteration planning in Chapter 18, “Coding and Testing.”
Defect Tracking Tools
If you do decide to use a DTS, choose it carefully. Understand your needs and keep it simple. You will want everyone on the team to use it. If it becomes overhead or hard to use, people will find ways to work around it. As with all tools used by your agile development team, you should consider the whole team’s opinion. If anyone from the customer team enters bug reports, get his or her opinion too.
One of the simplest tools that Janet has used is Alcea’s FIT IssueTrack. It is configurable, does not make you follow a predefined process, and is easy to get metrics out of. Do your homework and find the tool that works for you. There are a variety of open source defect-tracking systems, hosted systems, and integrated enterprise systems available.
Whether or not you use a DTS, you want to make defects as visible as possible.
Lisa’s Story
We use a commercial DTS, but we find value in keeping bugs visible. We color-code bugs and include them as tasks on our story board, shown in Figure 5-1. Yellow cards denote normal bugs, and red cards denote either high-priority production bugs or “test stopper” development bugs—both categories need to be addressed right away. A quick look at the board lets us see how many bugs are in the To Do, WIP, Verify, and Done columns. Other cards are color-coded as well: blue for story cards, green for test task cards, and white for development tasks. Striped cards are for tasks added after iteration planning. Yellow and red bug cards stand out easily.
Figure 5-1 Story board with color-coded cards. Used with permission of Mike Thomas. Copyright 2008.
During the time we were writing this book, my team converted to a virtual story board because one of our team members began working remotely, but we retained this color-coding concept.
—Lisa
We usually recommend experimenting with different tools, using each one for a few iterations, but this is trickier with bug-tracking systems, because you need to port all of the bugs that are in one system to the new one that you’re trying on for size. Spend some time thinking about what you need in a DTS, what purposes it will serve, and evaluate alternatives judiciously.
Lisa’s Story
My team used a web-based DTS that was basically imposed upon it by management. We found it somewhat cumbersome to use, lacking in basic features such as time-stamping updates to the bug reports, and we chafed at the license restrictions. We testers were especially frustrated by the fact that our license limited us to three concurrent users, so sessions were set to time out quickly.
The team set aside time to evaluate different DTS alternatives. At first, the selection seemed mind-boggling. However, we couldn’t find one tool that met all our requirements. Every tool seemed to be missing something important, or we heard negative reports from people who had used the tool. We were concerned about the effort needed to convert the existing bug database into a new system.
The issue was forced when our DTS actually crashed. We had stopped paying for support a couple of years earlier, but the system administrator decided to see what enhancements the vendor had made in the tool. He found that a lot of shortcomings we had experienced had been addressed. For example, all updates were now time stamped. A client application was available that wasn’t subject to session timeouts and had enhanced features that were particularly valuable to the testers.
By going with our existing tool and paying for the upgrade and maintenance, plus a license allowing more concurrent users, we got help with converting our existing data to the new version and got a working system easily and at a low cost. A bonus was that our customers weren’t faced with having to learn a new system.
Sometimes the best tool is the one you already have if you just look to see how it has improved!
—Lisa
As with all your tool searches, look to others in your community, such as user groups and mailing lists, for recommendations. Define your criteria before you start looking, and experiment as much as you can. If you choose the wrong tool, cut your losses and start researching alternatives.
Keep Your Focus
Decisions about reporting and tracking defects are important, but don’t lose sight of your main target. You want to deliver the best-quality product you can, and you want to deliver value to the business in a timely manner. Projects succeed when people are allowed to do their best work. Concentrate on improving communication and building collaboration. If you encounter a lot of defects, investigate the source of the problem. If you need a DTS to do that, use it. If your team works better by documenting defects in executable tests and fixing them right away, do that. If some combination enables you to continually improve, go with it. The main thing to remember is that it has to work for your whole team.
Chapter 18, “Coding and Testing,” covers alternatives and shows you different ways to attack your bug problems.
Defect tracking is one of the typical quality processes that generate the most questions and controversy in agile testing. Another big source of confusion is whether agile projects need documents such as test plans or traceability matrices. Let’s consider that next.
Test Planning
Traditional phased software methodologies stress the importance of test plans as part of the overall documentation needs. They’re intended to outline the objectives, scope, approach, and focus of the software testing effort for stakeholders. The completed document is intended to help people outside the test group understand the “why” and “how” of product validation. In this section, we look at test plans and other aspects of preparing and tracking the testing effort for an agile project.
Test Strategy vs. Test Planning
In an agile project, teams don’t rely on heavy documentation to communicate what the testers need to do. Testers work hand in hand with the rest of the team so that the testing efforts are visible to all in the form of task cards. So the question often put to us is, “Is there still a need for test plans?” To answer that question, let’s first take a look at the difference between a test plan and a test strategy or approach.
The more information that is contained in a document, the less likely it is that someone is going to read it all. Consider what information is really necessary for the stakeholders. Think about how often it is used and what it is used for.
We like to think of a test strategy as a static document that seldom changes, while a test plan is created anew for, and is specific to, each project.
Test Strategy
A strategy is a long-term plan of action, the key word being “long-term.” If your organization wants documentation about your overall test approach to projects, consider taking this information and putting it in a static document that doesn’t change much over time. There is a lot of information that is not project specific and can be extracted into a Test Strategy or Test Approach document.
This document can then be used as a reference and needs to be updated only if processes change. A test strategy document can be used to give new employees a high-level understanding of how your test processes work.
Janet’s Story
I have had success with this approach at several organizations. Processes that were common to all projects were captured into one document. Using this format answered most compliance requirements. Some of the topics that were covered were:
• Testing Practices
• Story Testing
• Solution Verification Testing
• User Acceptance Testing
• Exploratory Testing
• Load and Performance Testing
• Test Automation
• Test Results
• Defect Tracking Process
• Test Tools
• Test Environments
—Janet
Test Plan
The power of planning is to identify possible issues and dependencies, to bring risks to the surface to be talked about and to be addressed, and to think about the big picture. Test planning is no different. A team should think about risks and dependencies and the big picture for each project before it starts.
Whether your team decides to create a test plan document or not, the planning should be done. Each project is different, so don’t expect that the same solution will fit all.
In Chapter 15, “Tester Activities in Release or Theme Planning,” we show examples and discuss alternatives you can use when you are planning the release.
Sometimes our customers insist on a test plan document. If you’re contracting to develop an application, a test plan might be part of a set of deliverables that also include items such as a requirements document and a design document.
Talk of test plans often leads to talk of traceability. Did someone execute all planned testing of the desired behavior on the delivered code? How do requirements and test plans relate to the actual testing and final functionality?
Traceability
In traditional projects, we used to need traceability matrices to determine whether we had actually tested all of the requirements. If a requirement changed, we needed to know that we had changed the appropriate test cases. With very large requirements documents, this was the only way that a test team knew it had good coverage.
In an agile project, we don’t have those restrictions. We build functionality in tiny, well-defined steps. We work with the team closely and know when something changes. If the programmers work test-first, we know there are unit tests for all of the small chunks of work. We can then collaborate with the customer to define acceptance tests. We test each story as the programmer works on it, so we know that nothing goes untested.
There might be requirements for some kind of traceability for regulated industries. If there is, we suggest that you really look at what problem management is trying to solve. When you understand what is needed, you should try to make the solution as simple as possible. There are multiple ways to provide traceability. Source code check-in comments can refer to the wiki page containing the requirements or test cases, or to a defect number. You can put comments in unit tests tying the test to the location or identifier of the requirement. The tests can be integrated directly with the requirements in a tool such as FitNesse. Your team can easily find the way that works best for your customers’ needs.
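The check-in-comment approach described above can be made auditable with a trivial script. This sketch assumes a made-up REQ-123 convention for requirement identifiers and builds a simple traceability map from each requirement to the commits that mention it; the commit messages here are hypothetical.

```python
import re

# Hypothetical check-in comments that reference requirement IDs (REQ-123 form).
commit_messages = [
    "REQ-101: validate mileage balance on transfer",
    "Refactor seat-map rendering (no requirement)",
    "REQ-204, REQ-101: add unit tests for pre-seating rules",
]

req_pattern = re.compile(r"REQ-\d+")

# Build the traceability map: requirement ID -> commits that mention it.
trace: dict[str, list[str]] = {}
for message in commit_messages:
    for req_id in req_pattern.findall(message):
        trace.setdefault(req_id, []).append(message)

print(sorted(trace))           # which requirements have linked commits
print(len(trace["REQ-101"]))   # how many commits touched REQ-101
```

A report generated this way is usually enough to satisfy an auditor’s question of “show me which changes implemented this requirement,” without maintaining a separate matrix by hand.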
Documents such as traceability matrices might be needed to fulfill requirements imposed by the organization’s audit standards or quality models. Let’s consider how these directives get along with agile development.
Existing Processes and Models
This question is often asked: “Can traditional quality models and processes coexist with agile development methods?” In theory, there is no reason why they can’t. In reality, there is often not a choice. Quality models often fall into the domain of the traditional QA team, and they can follow testers into the new agile structure as well. It might not be easy to fit these into a new agile development model. Let’s look at a few typical quality processes and how testers and their teams might accommodate them.
Audits
Different industries have different audit requirements. Quality assurance teams in traditional development organizations are often tasked with providing information for auditors and ensuring compliance with audit requirements. The Sarbanes-Oxley Act of 2002, enacted in response to high-profile corporate financial scandals, sets out requirements for maintaining business records. Ensuring compliance usually falls to the IT departments. SAS 70 is another widely recognized auditing standard for service organizations. These are just a couple of examples of the type of audit controls that affect development teams.
Larger organizations have specialized teams that control compliance and work with auditors, but development teams are often asked to provide information. Examples include what testing has been performed on a given software release, or proving that different accounts reconcile. Testers can be tasked with writing test plans to evaluate the effectiveness of control activities.
Lisa’s Story
Our company undergoes regular SAS 70 audits. Whenever one is scheduled, we write a story card for providing support for the audit. Most of this work falls to the system administrators, but I provide support to the business people who work with the auditor. Sometimes we’re required to demonstrate system functionality in our demo environment. I can provide data for the demos and help if questions arise. I might also be asked to provide details about how we tested a particular piece of functionality.
Some of our internal processes are required to conform with SAS 70 requirements. For example, every time we release to production, we fill out a form with information about which build was released, how many tests at each level were run on it, who did the release, and who verified it.
—Lisa
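A release record like the one Lisa describes can be captured as simple structured data and archived with each deployment. The sketch below is one illustrative shape for such a record; the field names and values are hypothetical, not taken from any real audit form.

```python
from dataclasses import dataclass

@dataclass
class ReleaseRecord:
    """Audit-friendly record of one production release (fields are illustrative)."""
    build_id: str
    tests_run: dict   # test level -> number of tests executed at that level
    released_by: str
    verified_by: str

    def summary(self):
        total = sum(self.tests_run.values())
        return (f"Build {self.build_id}: {total} tests run, "
                f"released by {self.released_by}, verified by {self.verified_by}")

record = ReleaseRecord(
    build_id="2024.03.1",
    tests_run={"unit": 1200, "acceptance": 85, "exploratory sessions": 4},
    released_by="sysadmin",
    verified_by="tester",
)
```

Keeping the record as data rather than a free-form document makes it trivial to produce the same answers, release after release, when the auditor asks.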
Testers who are part of an agile team should be dedicated to that team. If their help is needed in providing information for an audit or helping to ensure compliance, write stories for this and plan them along with the rest of the team’s work. Work together with the compliance and internal audit teams to understand your team’s responsibilities.
Frameworks, Models, and Standards
There are many quality models, but we’ll look at two to show how you can adapt your agile process to fit within their constraints.
1. The Capability Maturity Model Integration (CMMI) aims to help organizations improve their process but doesn’t dictate specific development practices to accomplish the improvements.
2. Information Technology Infrastructure Library (ITIL) is a set of best practices for IT service management intended to help organizations develop an effective quality process.
Both of these models can coexist happily with agile development. They’re rooted in the same goal, making software development projects succeed.
Let’s look at CMMI, a framework for measuring the maturity of your process. Its maturity levels range from ad hoc and unmanaged at the bottom, through managed and defined, up to quantitatively managed and optimizing. Agile projects have a defined process, although not all teams document what they do. For example, managing your requirements with index cards on a release planning wall, with a single customer making the final decisions, is a defined process as long as you do it consistently.
Retrospectives are aimed at continuous process improvement, and teams should always be looking for ways to optimize their processes. If the only thing your team lacks is documentation, consider including your process in your test strategy documentation.
Ask yourself what the minimum amount of documentation you could give to satisfy the CMMI requirements would be. Janet has had success with using diagrams like the one in Figure 5-2.
Figure 5-2 Documenting the test strategy
See the bibliography for information about CMMI and agile development.
If ITIL has been introduced into your organization and affects change management, adapt your process to accommodate it. You might even find the new process beneficial.
Janet’s Story
When I worked in one organization that had a central call center to handle all of the customers’ support calls, management implemented ITIL for the service part of the organization. We didn’t think it would affect the development team until the change management team realized that the number of open problems was steadily increasing. No one understood why the number kept going up, so we held a series of problem-solving sessions. First, we mapped out the process currently in effect.
The call center staff reported an incident in their tracking system. They tried to solve the customer’s problem immediately. Often, that meant providing a work-around for a software defect. The call center report was closed, but a problem report in Remedy was then opened, and someone in the development team was sent an email. If the defect was accepted by the development team, a defect was entered into Bugzilla to be fixed.
There was no loop back to the problem issue to close it when the defect was finally fixed. We held several brainstorming sessions with all involved stakeholders to determine the best and easiest solution to that problem.
The problem statement to solve was, “How does the project team report back to the problem and change management folks to tell them when the bug was actually fixed?”
There were a couple of ways we could have solved the problem. One option was to reference the Remedy ticket in Bugzilla and put hooks into Remedy so that when we closed the Bugzilla defect, Remedy would detect it and close the Remedy ticket. Of course, some of the bugs were never addressed, which meant the Remedy tickets stayed open forever.
We actually found a better solution for the whole team, including the problem and change management folks. We brainstormed a lot of different ideas but decided that when a bug was opened in Bugzilla, we could close the Remedy ticket, because realistically we would never go back to the original complaint to tell the customer who reported it when the fix was done.
The change request that covered the release would automatically include all software fixes, so it followed the change management process as well.
—Janet
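The workflow Janet’s team settled on can be sketched as a small piece of glue logic. Everything below is hypothetical — real Remedy and Bugzilla integrations would go through those products’ own interfaces — but it shows the rule the team agreed on: accepting a defect closes the originating problem ticket, so nothing lingers forever in the change-management queue.

```python
# Hypothetical in-memory stand-ins for the two tracking systems.
problem_tickets = {}   # problem ticket id -> status ("open" or "closed")
defects = {}           # defect id -> linked problem ticket id

def report_problem(ticket_id):
    """Call center logs an incident that becomes a problem ticket."""
    problem_tickets[ticket_id] = "open"

def open_defect(defect_id, ticket_id):
    """Development team accepts the problem as a defect.

    Opening the defect immediately closes the originating problem
    ticket, since the team will never report back on the fix date.
    """
    defects[defect_id] = ticket_id
    problem_tickets[ticket_id] = "closed"

report_problem("RMD-4711")
open_defect("BUG-982", "RMD-4711")
```

The link from defect back to ticket is kept, so anyone who needs the history can still trace a fix to the complaint that triggered it.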
If your organization is using some kind of process model or quality standards management, educate yourself about it, and work with the appropriate specialists in your organization. Maintain the team’s focus on delivering high-quality software that provides real business value, and see how you can work within the model.
Process improvement models and frameworks emphasize discipline and conformance to process. Few software development methodologies require more discipline than agile development. Standards simply enable you to measure your progress toward your goal. Agile’s focus is on doing your best work and constantly improving. Agile development is compatible with achieving whatever standards you set for yourself or borrow from a process improvement measurement tool.
Separate your measurement goals and standards from your means to improve those measurements. Set goals, and know what metrics you need to measure success for areas that need improvement. Try using task cards for activities that provide the improvements in order to ensure they get the visibility they need.
Working with existing quality processes and models is one of the biggest cultural issues you may face as you transition to agile development. All of these changes are hard, but when your whole team gets involved, none are insurmountable.
Summary
In this chapter, we looked at traditional quality-oriented processes and how they can be adapted for an agile environment.
The right metrics can help you make sure your team is on track to achieve its goals, and can provide a good return on the effort of collecting them.
Metrics should be visible and should provide the milestones you need to make decisions.
The reasons to use a defect tracking system include convenience, use as a knowledge base, and traceability.
Defect tracking systems are too often used as a communication tool, and entering and tracking unnecessary bugs can be considered wasteful.
All tools, including the DTS, need to be used by the whole team, so consider all perspectives when choosing a tool.
A test strategy is a long-term overall test approach that can be put in a static document; a test plan should be unique to the project.
Think about alternatives before blindly accepting the need for specific documents. For example, the agile approach of developing in small, incremental chunks while working closely together might remove the need for formal traceability documents. Linking source code control comments to tests is another option.
Traditional quality processes and process improvement models, such as SAS 70 audits and CMMI standards, can coexist with agile development and testing. Teams need to be open to thinking outside the box and work together to solve their problems.