Chapter 20 Successful Delivery



In this chapter, we share what you as a tester can do to help your team and your organization successfully deliver a high-quality product. The same processes and tools can be used for shrink-wrapped products, customized solutions, or internally developed products. Agile testers can make unique contributions that help both the customer team and the developer team define and produce the value that the business needs.


What Makes a Product?

Many of the books on agile development talk about the actual development cycle but neglect what makes a product and what it takes to successfully deliver that product. It’s not enough to just code, test, and say you’re done. Think of buying something from a store: if great service comes with the purchase, you’re far more likely to go back and shop there again.

Janet’s Story

I was talking to my friend, Ron, who buys and sells coins. Over the years he has developed a very good reputation in the industry and has turned away prospective clients because he is so busy.

When I asked him his secret, he said, “It’s not a secret. I just work with my customers to make them feel comfortable and establish a trusting relationship with them. In the end, both I and my customer need to be happy with the deal. It only takes one unhappy customer to break my reputation.”

Agile teams can learn from Ron’s experience. If we treat our customers with respect and deliver a product they are happy with, we will have a good relationship with them, hopefully for many years.

—Janet

Our goal is to deliver value to the business in a timely manner. We don’t want just to meet requirements; we also want to delight our customers. Before we release, we want to make sure all of the deliverables are ready and polished appropriately. Hopefully, you started planning early, not only to meet the code requirements but also to provide the training, documentation, and everything else that goes into making a high-value product.

Fit and Finish

Coni Tartaglia, a software test manager with Primavera Systems, Inc., explains “fit and finish” deliverables.

It is helpful to have a “Fit and Finish” checklist. Sometimes fit and finish items aren’t ready to be included in the product until close to the end. It may be necessary to rebuild parts of the product to include items such as new artwork, license or legal agreements, digital signatures for executables, copyright dates, trademarks, and logos.

It is helpful to assemble these during the last full development iteration and incorporate them into the product while continuous integration build cycles are running so that extra builds are not needed later.

Business value is the goal of agile development, and it includes much more than the production code. Teams need to plan for all aspects of product delivery.

Imagine yourself in the middle of getting your release ready for production. You’ve just finished your last iteration and are wrapping up your last story test. Your automated regression suite has been running on every new build, or at least on every nightly build. What you do now will depend on how disciplined your process has been. If you’ve kept to a “zero tolerance” policy for bugs, you’re probably in pretty good shape.

If you’re one of those teams that thinks you can leave bugs until the end to fix, you’re probably not in such good shape and may need to introduce an iteration for “hardening” or bug fixes. We don’t recommend this, but if your team has a lot of outstanding bugs that have been introduced during the development cycle, you need to get those addressed before you go into the end game. We find that new teams tend to fall into this trap.

In addition, there are lots of varied components to any release, some in the software, some not. You have customers who need to install and learn to use the new features. Think about all those elements that are critical to a successful release, because it’s time to wrap up all those loose ends and hone your product.

Bob Galen, an agile coach and end-game expert, observes that agile development may not have seeped into every organizational nook and cranny. He notes, “Agile testers can serve as a conduit or facilitator when it comes to physical delivery of the software.”


Planning Enough Time for Testing

Because testing and coding are part of one process in agile development, we’d prefer not to make special plans for extra testing time, but in real life we sometimes need it.

Most teams accumulate some technical debt, despite the best intentions, especially if they’re working with legacy code. To maintain velocity, your team may need to plan a refactoring iteration at regular intervals to add tests, upgrade tools, and reduce technical debt. Lisa’s team conducts a refactoring sprint about every six months. While the business doesn’t usually receive any direct benefits at the end of a refactoring sprint, the business experts understand that these special sprints result in better test coverage, a solid base for future development, reduced technical debt, and a higher overall team velocity.

Some teams resort to “hardening” iterations, where they spend time only finding and fixing bugs, and they don’t introduce any new functionality. This is a last resort for keeping the application and its infrastructure solid. New teams may need an extra iteration to complete testing tasks, and if so, they budget time for that in the release plan.

Use retrospectives and other process improvement practices to learn ways to integrate testing and coding so that the code produced in each iteration is production-ready. When that goal is achieved, work to ensure that a stable build that could be released to production is available every day. Lisa’s team members thought that this was an unattainable goal in the days when they struggled to get any stable build before release, but it was only a couple of years before almost every build was release-worthy.

When your build is stable, you are ready to enter the “End Game.”


The End Game

What is the end game? We’ve heard people call the time right before delivery many things, but the “end game” seems to fit best. It’s the time when the team applies the finishing touches to the product. You’re dotting your i’s and crossing your t’s. It’s the last stretch before the delivery finish line. It’s not meant to be a bug-fix cycle, because you shouldn’t have any outstanding bugs by then, but that doesn’t mean you might not have one or two to fix.

You might have groups in your organization that you didn’t involve in your earlier planning. Now it’s time to work closely with the folks who administer the staging and production environments, the configuration managers, the database administrators outside of your team, and everyone else who plays a role in moving the software from development to staging and production. If you didn’t work with them early this time, consider bringing them into your next release planning sessions, and keep in touch with them throughout the development cycle.

Bob Galen tells us that the testers on his team have partnered with the operations group that manages the staging and production environments. Because the operations group is remote, it finds that having guidance from the agile team is particularly valuable.

There are always system-level tests that can’t be automated, or are not worth automating. More often than not, your staging environment is the only place where you can do some system-level integration tests or system-level load and stress testing. We suggest that you allot some time after development for these types of finishing tasks. Don’t code right up to the end.

Plan as much time for the end game as you need. Janet has found that the length of time needed for the end game varies with the maturity of the team and the size of the application. It may be that only one day is needed to finish the extra tasks, but it may be one week or sometimes as much as a whole two-week iteration. The team from the example used in Chapter 12, “Summary of Testing Quadrants,” scheduled two weeks, because it was a complex system that required a fair bit of setup and system testing.

Lisa’s Story

When I worked on a team developing applications for a client, we had to follow the client’s release schedule. Testing with other parts of the larger system was only possible during certain two-week windows, every six or eight weeks. Our team completed two or three iterations, finishing all of the stories for each as if they were releasing each iteration.

Then we entered a testing window where we could coordinate system testing with other development teams, assist the client with UAT, and plan the actual release. This constituted our end game.

—Lisa

If you have a large organization, you might have ten or fifteen teams developing software for individual products or for separate areas of functionality within the same application. These areas or products may all need to release together, so an integrated end game is necessary. This does not mean that you leave the integration until the very end. Coordination with the other teams will be critical all along your development cycle, and if you have a test integration system, make sure you have tried integrating there long before the end game.

You also may have considerations beyond your team, for example, working with software delivered by external teams at the enterprise level.

Use this end-game time to do some final exploratory testing. Step back and look at the whole system and do some end-to-end scenarios. Such testing will confirm that the application is working correctly, give you added confidence in the product, and provide information for the next iteration or release.


Testing the Release Candidate

We recommend that the automated regression testing be done against every release candidate. If you’re following our recommendation to run automated regression tests continually on each new build, or at least daily, you’ve already done this. If some of your regression tests are manual, you’ll need to plan time for those or they might not get done. A risk assessment based on changes made to each build will determine what tests need to be run if there is more than one release candidate.
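As a rough illustration of that kind of risk assessment, here is a minimal sketch in Python that maps changed components to the slower or manual suites worth repeating for a new candidate. The component and suite names are hypothetical placeholders, not part of any real tool; your own mapping would come from your team’s risk analysis.

# A sketch of risk-based test selection between release candidates.
# The component names and suite names are made up for illustration.

CHANGE_TO_SUITES = {
    "billing": ["payment_regression", "invoice_walkthrough"],
    "reporting": ["report_layout_review"],
    "login": ["security_smoke", "sso_manual_checks"],
}

def suites_to_rerun(changed_components):
    """Return the suites a new release candidate should repeat, based on
    which components changed since the previous candidate."""
    suites = set()
    for component in changed_components:
        # An unmapped component is treated as high risk: rerun everything.
        suites.update(CHANGE_TO_SUITES.get(component, ["full_manual_regression"]))
    return sorted(suites)

if __name__ == "__main__":
    # Example: only billing code changed between RC1 and RC2.
    print(suites_to_rerun(["billing"]))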


Test on a Staging Environment

Whether you are using traditional or agile development processes, a staging environment that mimics production is vital for final testing before release, as well as for testing the release process itself. As part of the end game, your application should be deployed to staging just as you would deploy it to production, or as your customers would deploy it in their environments. In many organizations Janet has seen, the staging environment is shared among multiple projects, and the deployment must be scheduled as part of the release planning. Consider ahead of time how to handle dependencies, integrating with other teams using the staging environment, and working with external third parties. It might feel like “traditional” test planning, but you might be dealing with teams that haven’t embraced agile development.

Although agile promotes continuous integration, it is often difficult to integrate with third-party products or other applications outside your project’s control. Staging environments usually have better controls, so external applications can connect to them and they can reach third-party test environments. Staging environments can also be used for load and performance testing, mock deploys, fail-over testing, manual regression tests, and exploratory functional testing. There are always configuration differences between environments, so your staging environment is a good place to test for the problems those differences can cause.


Final Nonfunctional Testing

Load testing should be scheduled throughout the project on specific pieces of the application that you are developing. If your staging environment is in high demand, you may not be able to do full system load testing until the end game.

By this time, you should be able to do long-running reliability tests on all product functionality. Check for crashes and degradation of performance with normal load. When done at release time, it should be a final confirmation only.

Fault tolerance and recovery testing is best done on your staging environment as well, because test environments usually don’t have the necessary setup. For the same reason, you may be able to test certain aspects of security only there. One example is HTTPS, an HTTP connection encrypted over secure sockets; some organizations choose to install the necessary certificates on their staging environment only. Other examples are clustering and data replication. Make sure you involve all parties who need to be included in this testing.
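If HTTPS is available only on staging, even a small automated check there can catch certificate problems before release. Here is a minimal sketch, assuming a placeholder host name, that confirms the staging server presents a certificate the default trust store accepts and reports when it expires.

# A minimal staging-only HTTPS check using the standard library.
# The host name "staging.example.com" is a placeholder.

import socket
import ssl

def check_https(host, port=443):
    """Open a TLS connection and return basic certificate details."""
    context = ssl.create_default_context()  # verifies cert and host name
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return {"subject": cert.get("subject"), "expires": cert.get("notAfter")}

if __name__ == "__main__":
    print(check_https("staging.example.com"))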


Integration with External Applications

Your team may be agile, but other product teams in your organization, or third parties your team works with, may not be.

Janet’s Story

In one organization that I worked with, the third-party partner that approved credit cards had a test account that could be used, but it was only accessible from the staging environment.

To test during development, test stubs were created to return specified results depending on the credit card number used. However, this wasn’t sufficient because the third party sometimes changed functionality on its end that we weren’t aware of. Testing with the actual third party was critical to the success of the project, and it is a key part of the end game.

—Janet

Coordinate well in advance with other product teams or outside partners that have products that need to integrate with your product. If you have identified these risks early and done as much up-front testing as possible, the testing done during the end game should be final verification only. However, there are always last-minute surprises, so you may need to be prepared to make changes to your application.

Tools like simulators and mock objects used for testing during development can help alleviate some of the risks, but the sooner you can test with external applications, the lower the risk.
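As a concrete illustration of those stubs and mocks, here is a minimal sketch of the kind of stand-in Janet describes: a fake credit card authorizer that returns canned results keyed on the card number. The card numbers, response codes, and class name are all invented for illustration; they are not the interface of any real payment provider.

# A stub for a third-party credit card authorizer, used only during
# development and testing. All values below are made up.

CANNED_RESPONSES = {
    "4111111111111111": {"approved": True, "code": "00"},
    "4000000000000002": {"approved": False, "code": "05"},  # declined
    "4000000000000119": {"approved": False, "code": "91"},  # issuer unavailable
}

class CreditCardAuthorizerStub:
    """Stand-in for the real third-party authorization service."""

    def authorize(self, card_number, amount_cents):
        response = CANNED_RESPONSES.get(
            card_number, {"approved": False, "code": "14"}  # invalid card
        )
        return {"amount_cents": amount_cents, **response}

if __name__ == "__main__":
    stub = CreditCardAuthorizerStub()
    print(stub.authorize("4111111111111111", 2500))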


Data Conversion and Database Updates

As we are developing an application, we change fields, add columns in the database, or remove obsolete ones. Different teams tackle this in different ways. Some teams re-create the database with each new build. This works for new applications, because there is no existing data. However, after an application exists in production and has associated data, this approach won’t work.

The data is part of the product, and the team needs to treat it that way. As with so much in agile development, a joint effort by the database experts, programmers, and testers on the team is required to ensure successful release of database changes. Janet has seen a couple of different tactics for dealing with data conversion and backward compatibility. Database scripts can be created by the developers or database administrators as the team makes changes; these scripts become part of the build and are continually tested. Another option is for the team to run “diffs” on the database after all of the database changes have been made.

If you’re a tester, ask your database administrator/developer to help your team ensure that schemas are kept consistent among the production, testing, and staging environments. Find a way to guarantee that all changes made in the test environments will be done in the staging and production environments during release. Keep the schemas matching (except for the new changes still under development) in terms of column names, triggers, constraints, indices, and other components. The same discipline applied to coding and testing also should be applied to database development and maintenance.

Lisa’s Story

We recently had a bug released to production because some of the test schemas, including the one used by regression tests, were missing a constraint. Without the constraint in place, the code didn’t fail. This triggered an effort to make sure the exact same update scripts get run against each schema to make changes for a given release.

It turned out that different test schemas had small differences, such as old columns still remaining in some or columns in different order in different schemas, so it wasn’t possible to run the same script in every environment. Our database administrator led a major effort to re-create all of the test schemas to be perfectly compatible with production. He creates one script in each iteration with all necessary database changes and runs that same script in the staging and production environment when we release. This seems simple, but it’s easy to miss subtle differences when you’re focused on delivering new features.

—Lisa

Automating data migrations enhances your ability to test them and reduces the chance of human error. Native database tools such as SQL scripts and stored procedures, data import utilities such as SQL*Loader and bcp, shell scripts, and Windows command files all work well for automation because they are easy to clone and alter.

No matter how the database update and conversion scripts are created or maintained, they need to be tested. One of the best ways to ensure all of the changes have been captured in the update scripts is to use the customer’s data if it is available. Customers have a habit of using the application in weird and wonderful ways, and the data is not always as clean as we would like it. If the development team cleans up the database and puts extra restrictions on a column, the application on the customer’s site might blow up as soon as a query touches a piece of data that does not match the new restrictions. You need to make sure that any changes you’ve made are still compatible with existing data.
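A simple way to apply that advice is to run the proposed restriction against a copy of real data before the update script enforces it. The sketch below is one hedged example: it lists the rows that would violate a new NOT NULL constraint so they can be cleaned up first. The table, column, and data are invented, and SQLite is used only to keep the example self-contained.

# Find existing rows that would break a proposed NOT NULL constraint.

import sqlite3

def rows_violating_not_null(conn, table, column):
    """Return rows that would violate a new NOT NULL constraint on column."""
    query = f"SELECT rowid, * FROM {table} WHERE {column} IS NULL"
    return conn.execute(query).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (name TEXT, email TEXT)")
    conn.executemany(
        "INSERT INTO customer VALUES (?, ?)",
        [("Ana", "ana@example.com"), ("Ben", None)],  # Ben has no email
    )
    offenders = rows_violating_not_null(conn, "customer", "email")
    print(f"{len(offenders)} row(s) need cleanup before adding NOT NULL")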

Lisa’s Story

My team uses the staging environment to test the database update scripts. After the scripts are run, we do manual testing to verify that all changes and data conversions completed correctly. Some of our GUI test scripts cover a subset of regression scenarios. This gives us confidence about releasing to production, where our ability to test is more limited.

—Lisa

When planning a data conversion, think about data cleanup as part of the mitigation strategy. You have the opportunity to take the data that was entered in some of the “weird and wonderful” ways we mentioned before and massage or manipulate it so it conforms to the new constraints. This type of job can take a long time to do but is often very worthwhile in terms of maintaining data integrity.

Not everyone can do a good enough simulation of production data in the staging environment. If a customer’s data is not available, a mitigation strategy is to have a UAT at the customer site. Another way to mitigate risk is to try to avoid large-scale updates and release in smaller stages. Develop new functionality in parallel with the old functionality and use a system property to “turn on” one or the other. The old functionality can continue to work in production until the new functionality is complete. Meanwhile, testing can be done on the new code at each iteration. New columns and tables can be added to production tables without affecting the old code so that the data migration or conversion for the final release is minimized.
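The “turn it on with a system property” idea can be as simple as a configuration flag that chooses between the old and new code paths. Here is a minimal sketch in Python, where an environment variable stands in for the system property; the flag name and the two shipping calculators are hypothetical.

# Old and new implementations live side by side; a toggle picks which one
# production actually runs, so the new code can ship dark until it is ready.

import os

def old_shipping_cost(weight_kg):
    return 5.00 + 1.50 * weight_kg          # existing production behavior

def new_shipping_cost(weight_kg):
    return 4.00 + 1.25 * weight_kg if weight_kg < 20 else 30.00  # under test

def shipping_cost(weight_kg):
    """Dispatch on a toggle read from the environment."""
    if os.environ.get("USE_NEW_SHIPPING", "false").lower() == "true":
        return new_shipping_cost(weight_kg)
    return old_shipping_cost(weight_kg)

if __name__ == "__main__":
    print(shipping_cost(10))  # old path unless USE_NEW_SHIPPING=true is set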


Installation Testing

Organizations often have a separate team that deploys to production or creates the product set. These team members should have the opportunity to practice the deployment exactly as they would for production. If they use the deployment to staging as their proving ground, they can work out any of the problems long before they release to the customer.

Testing product installations can also mean testing various installations of shrink-wrapped products to different operating systems or hardware. How does the product behave? Does it do what is expected? How long will the system need to be down for installation? Can we deploy without taking an outage? Can we make the user experience as pleasant as possible?

Janet’s Story

I had an experience a while ago that was not so pleasant, and it led me to wish that someone had tested and fixed the issue before I found it. I bought a new laptop and wanted to transfer my license for one of my applications to the new computer. It came with a trial version of the same application, so the transfer should have been easy, but the new PC did not recognize the product key—it kept saying it was invalid. I called the support desk and after a bit of diagnostics, I was informed they were considered different products, so the key wouldn’t work.

Two more hours of support time, and the issue was fixed. The trial version had to be removed, an old version had to be reinstalled, the key had to be reentered, and all updates since the original purchase had to be installed. How much easier would it have been for the development team to test that scenario and offer the customer an informative message saying, “The trial version is not compatible with your product key”? A message such as that would have let me figure out the problem and solve it myself rather than taking up the support person’s time.

—Janet

Take the time you need to determine what your requirements are for testing installation. It will be worth it in the end if you satisfy your customers.


Communication

Constant communication between different development team members is always important, but it’s especially critical as we wrap up the release. Have extra stand-up meetings, if needed, to make sure everything is ready for the release. Write cards for release tasks if there’s any chance some step might be forgotten.

Lisa’s Story

My team releases after each iteration. We usually have a quick stand-up on the last afternoon of the sprint to touch base and identify any loose ends. Before the team had a lot of practice with releases, we wrote release task cards such as “run database update script in staging” and “verify database updates in production.” With more experience at deploying, we no longer need those cards unless we have a new team member who might need an extra reminder. It never hurts to have cards for release tasks, though.

—Lisa

Reminders of tasks, whether they are in a full implementation plan or just written on task cards as Lisa’s team does, are often necessary. On simple implementations, a whiteboard works well.


What If It’s Not Ready?

By constantly tracking progress in many forms, such as builds, regression test suites, story boards, and burndown charts, a team usually knows well in advance when it’s in trouble on a release. There’s time to drop stories and readjust. Still, last-minute disasters can happen. What if the build machine breaks on the last day of the iteration? What if the test database crashes so that final testing can’t be completed? What if a showstopper bug isn’t detected until final functional testing?

We strongly advise against adding extra days to an iteration, because it will eat into the next iteration or release development. An experienced team might be flexible enough to do this, but it can derail a new team. Still, desperate times call for desperate measures. If you release every two weeks, you may simply be able to skip doing the actual release, budget time into the next iteration to correct the problems and finish up, and release on the next scheduled date. If testing tasks are being put off or ignored and the release goes ahead, bring up this issue with the team. Did the testing needs change, or is the team taking a chance and sacrificing quality to meet a deadline? The team should cut the release scope if the delivery date is fixed and in jeopardy.

If your release cycle is longer, more like three months, you should know in advance if your release is in jeopardy. You probably have planned an end game of at least two weeks, which will just be for final validation. When you have a longer release cycle, you have more time to determine what you should do, whether it’s dropping functionality or changing the schedule.

If your organization requires certain functionality to be released on a fixed day and last-minute glitches threaten the release, evaluate your alternatives. See if you can continue on your same development cycle but delay the release itself for a day or a week. Maybe the offending piece of code can be backed out temporarily and a patch done later. The customers have the ultimate say in what will work for the business.

Lisa’s Story

On the rare occasions when our team has faced the problem of last-minute showstoppers, we’ve used different approaches according to the situation. If there’s nothing critical that has to be released right now, we sometimes skip the release and release two iterations’ worth on the next release day. If something critical has to go in, we delay the release a day or two. Sometimes we can go ahead and release what we have and do a patch release the next day. On one occasion, we decided to have a special one-week iteration to correct the problems, release, and then go back to the normal two-week iteration schedule.

After more than four years of practicing agile development, we have a stable build almost 100% of the time, and we feel confident about being able to release whenever it’s necessary. We needed a lot of discipline and continual improvement to our process in order to feel that a more flexible approach could work for us. It’s also nice to be able to release a valuable bit of functionality early, if we can. What we’ve worked hard to avoid is falling into a death spiral where we can never release on schedule and we’re always playing catch-up.

Don’t beat yourself up if you can’t release on time. Your team is doing its best. Do spend time analyzing why you got behind schedule, or over-committed, and take action to keep it from happening again.

—Lisa

Work to prevent a “no go” situation with good planning, close collaboration, driving coding with tests, and testing as you code. If your tracking shows the release could be in jeopardy, remove the functionality that can’t be finished, if possible. If something bad and unexpected happens, don’t panic. Involve the whole team and the customer team, and brainstorm about the best solution.


Customer Testing

There are a couple of different ways to involve your customers and get their approval or feedback. User Acceptance Testing can be fairly formal, with sign-offs from the business; it signifies acceptance of a release. Alpha or beta testing is a way to get feedback on a product you intend to release but that is not quite ready.


UAT

User Acceptance Testing (UAT) is important for large customized applications as well as internal applications. It’s performed by all affected business departments to verify the usability of the system and to confirm its existing and new (emphasis on new) business functionality. Your customers are the ones who have to live with the application, so they need to make sure it works on their system and with their data.

In previous chapters we’ve often talked about getting the customers involved early, but at those times, the testing is done on specific features under development. UAT is usually done after the team decides the quality is good enough to release. Sometimes though, the timeline dictates the release cycle. If that is the case, then try moving the UAT cycle up to run parallel with your end game. The application should be stable enough so that your team could deploy to the customer’s test system at the same time as they deploy to staging.

Janet’s Story

In one team I joined, the customers were very picky. In fact, the pickiest I had ever seen. They always asked for a full week of UAT just to be sure they had the time to test it all. They had prepared test cases and checked them all, including all the content, both in English and in French. Showstopper bugs included spelling errors such as a missing accent in the French content. Over time, as they gained more confidence in our releases and found fewer and fewer errors, they relaxed their demands but still wanted a week, just in case they couldn’t get to it right away. Their business group was very busy.

One release came that pushed the timeline. We were being held to the release date but couldn’t get all the functionality in and leave two weeks for the end game. We talked with the business users and we decided to decrease the end game to one week; the business users would perform their UAT while the project team finished up their system testing and cleanup. The only reason we were able to do this was because of the trust the customer had in our team and the consistency of our releases.

The good news was that, once again, the UAT found no issues that could not wait until the next release.

—Janet

Figure 20-1 shows an example timeline with a normal UAT at the end of the release cycle. The team then moves on to the next release, doing release planning and starting the first iteration with all team members ready to go.

Figure 20-1 Release timeline with UAT

Work with customers so that they understand the process, their role, and what is expected of them. If the UAT is not smooth, chances are a high level of support will be needed. An experienced customer test team may have defined test cases, but most often its testing is ad hoc. Customers may approach their testing as if they were doing their daily job but will probably focus on the new functionality. This is an opportunity to observe how people use the system and to get feedback from them on what works well and what improvements would help them.

Testers can provide support to the customers who are doing the UAT by reviewing tests run and defects logged, and by tracking defects to completion. Both of us have found it helpful to provide customers involved in doing UAT with a report of all of the testing done during development, along with the results. That helps them decide where to focus their own testing.


Alpha/Beta Testing

If you are an organization that distributes software to a large customer base, you may not have a formal UAT. You are much more likely to incorporate alpha or beta testing. Your team will want feedback on new features from your real customers, and this is one mechanism for getting it. Alpha testing is early distribution of new versions of the software, and because there are likely to be some major bugs, you need to pick your customers wisely. If you choose this method of customer feedback, make sure your customers understand their role: alpha testing is meant to gather feedback on the features, not to report bugs.

Beta testing is closer to UAT. It is expected that the release is fairly stable and can actually be used. It may not be “ready for prime time” for most customers, but many customers may feel the new features are worth the risk. Customers should understand that it is not a formal release and that you are asking them to test your product and report bugs.

As a tester, it is important to understand how customers view the product, because it may affect how you test. Alpha and beta testing may be the only time you get to interact with end users, so take advantage of the chance to learn how well the product meets their needs.


Post-Development Testing Cycles

If you work in a large organization or are developing a component of a large, complex system, you may need to budget time for testing after development is complete. Sometimes the UAT, or the test coordination, isn’t as smooth as it could be, so the timeline stretches out. Test environments that include test versions of all production systems may only be available for small, scheduled windows of time. You may need to coordinate test sessions with teams working on other applications that interact with yours. Whatever the reason, you need extra testing time that does not involve the whole development team.

Lisa’s Story

I worked on a team developing components of both internal and external applications for a large telecom client. We could only get access to the complete test environment at scheduled intervals. Releases were also tightly scheduled.

The development team worked in two-week iterations. It could release to the test environment only after every third iteration. At that time, there was a two-week system integration and user acceptance test cycle, followed by the release.

Someone from my team needed to direct the post-development testing phase. Meanwhile, the developers were starting a new iteration with new features, and they needed a tester to help with that effort.

The team had to make a special effort to make sure someone in the tester role followed each release from start to finish. For example, I worked from start to finish on release 1. Shauna took over the tester role as the team started work on the first iteration of release 2, while I was coordinating system testing and UAT on release 1. Shauna stayed as primary tester for release 2, while I assumed that role for release 3.

—Lisa

Figure 20-2 shows an example timeline where the UAT was extended. This could happen for any number of reasons, and the issue may not always be UAT. Most of the team is ready to start working on the next release, but often a tester is still working with customers, completing final testing. Sometimes a programmer will be involved as well. There are a couple of options. If the team is large enough, you can probably start the next release while a couple of team members work with the existing release (Release 2—Alternative 2 in Figure 20-2). If you have a small team, you may need to consider an Iteration 0 with programmers doing refactoring or spikes (experiments) on new functionality so that the tester working with the customer does not get left behind (Release 2—Alternative 1 in Figure 20-2).

Figure 20-2 Release timeline—alternative approach with extended UAT

Be creative in dealing with circumstances imposed on your team by the realities of your project. While plans rarely work as expected, planning ahead can still help you make sure the right people are in place to deliver the product in a timely manner.


Deliverables

In the first section of this chapter we talked about what makes a product. The answer depends on the audience: who is accepting the product, and what are their expectations?

If your customers need to meet SOX (Sarbanes-Oxley) compliance requirements, there will be certain deliverables that are required. For example, one customer Janet has worked with felt test results should be thoroughly documented, and made test results one of their SOX compliance measurement points, while a different customer didn’t measure test results at all. Work with compliance and audit personnel to identify reporting needs as you begin a project.

How much documentation is enough? Janet always asks two questions before answering that question: “Who is it for?” and “What are they using it for?” If there are no adequate answers to those questions, then consider whether the documentation is really needed.

Deliverables are not always for the end customer, and they aren’t always in the form of software. There are many internal customers, such as the production support team members. What will they need to make their job easier? Workflow diagrams can help them understand new features. They would probably like to know if there are work-arounds in place so they can help customers through problems.

Janet often gets asked about test coverage of code, usually by management. How much of the application is being tested by the unit tests or regression tests? The problem is that the number by itself is just a number, and there are so many reasons why it might be high or low. Also, code coverage doesn’t tell you about features that might have been missed, for which no code exists yet. The audience for a deliverable such as code coverage should not be management, but the team itself. It can be used to see what areas of the code are not being tested.

Training could be considered a deliverable as well. Many applications require customized training sessions for customers; others may need only online help or a user manual. Training can determine the success of your product, so it’s important to consider it. Lisa’s team often writes task cards for either a tester or the product owner to make sure training materials and sessions are arranged. Some people may feel training isn’t the job of testers or anyone else on the development team. However, agile teams aim to work as closely as possible with the business. Testers often have the domain expertise to at least identify training that might be needed for new or updated features. Even if training isn’t the tester’s responsibility, she can raise the issue if the business isn’t planning training sessions.

Many agile teams include technical writers who produce online help or other electronic forms of documentation. One application even included training videos to help users get started, and different members of the team were the trainers. It is the responsibility of the whole team to create a successful product.

Nonsoftware Deliverables

Coni Tartaglia, software test manager at Primavera Systems, Inc., reflects on what has worked for her team in delivering items that aren’t code but are necessary for a successful release.

Aside from the software, what is the team delivering? It is helpful to have a conversation with the people outside of the development team who may be concerned with this question. Groups such as Legal, Product Marketing, Training, and Customer Support will want to contribute to the list of deliverables.

After there is agreement on what is being delivered, assembly of the components can begin, and the Release Management function can provide confirmation of the delivery through execution of a release checklist. If the release is an update to an existing product, testers can check the deliverables from previous releases to ensure nothing critical is left out of the update package. Deliverables can include legal notices, documentation, translations, and third-party software that is provided as a courtesy to the customers.

Agile teams are delivering value, not just software. We work together with the customer team to improve all aspects of the product.

There are no hard-and-fast rules about what should be delivered with the product. Think of deliverables as anything that adds value to your product. Who should be the recipient of each deliverable, and when does it make the most sense to deliver it?


Releasing the Product

When we talk about releasing the product, we mean making it available to the customer in whatever format that may take. Your organization might have a website that gets updated or a custom application that is delivered to a few large customers. Maybe the product is shrink-wrapped and delivered to millions of PCs around the world, or downloaded off the Internet.


Release Acceptance Criteria

How do you know when you’re done? Acceptance criteria are a traditional way of defining when to accept the product. Performance criteria may have to be met. We capture these for each story at the start of each iteration, and we may also specify them for larger feature sets when we begin a theme or epic. Customers may set quality criteria such as a certain percentage of code covered by automated tests, or that certain tests must pass. Line items such as having zero critical bugs, or zero bugs with serious impact to the system, are often part of the release criteria. The customers need to decide how they’ll know when there’s enough value in the product. Testers can help them define release criteria that accomplish their goals.

Agile teams work to attain the spirit of the quality goals, not just the letter. They don’t downgrade the severity of bugs to medium so they can say they achieved the criterion of no high-severity bugs. Instead, they frequently look at bug trends and think of ways to ensure that high-severity bugs don’t occur in production.

Your quality level should be negotiated with your customer up front so that there are no unpleasant surprises. The acceptance tests your team and your customers defined, using real examples, should serve as milestones for progress toward release. If your customer has a very low tolerance for bugs, and 100% of those acceptance tests must be passing, your iteration velocity should take that into consideration. If new features are more important than bug fixes, well, maybe you will be shipping with bugs.

A Tale of Multitiered “Doneness”

Bob Galen, agile coach and author of Software Endgames, explains how his teams define release acceptance criteria and evaluate whether they’ve been met.

I’ve joined several new agile teams over the past few years, and I’ve seen a common pattern within those teams. My current team does a wonderful job of establishing criteria at a user story or feature level—basically defining acceptance criteria. We’ve worked hard at refining our acceptance criteria. Initially they were developed from the Product Owners’ perspective, and often they were quite ambiguous and ill-defined. The testers decided they could really assist the customers in refining their tests to be much more relevant, clear, and testable. That collaboration proved to be a significant win at the story level, and the Product Owners really valued the engagement and help.

Quite often the testers would also automate the user story acceptance tests, running them during each sprint but also demonstrating overall acceptance during the sprint review.

One problem we had, though, was getting this same level of clarity for “doneness” at a story level to extend beyond the individual stories. We found that often, when we approached the end of a Sprint or the end game of a release, we would have open expectations of what the team was supposed to accomplish within the sprint. For example, we would deliver stories that were thoroughly tested “in the small”; that is, the functionality of those stories was tested but the stories were not integrated into our staging environment for broader testing. That wasn’t part of our “understanding,” but external stakeholders had that expectation of the teams’ deliverables.

The way the teams solved this problem was to look at our criteria as a multitiered set of guiding goals that wrap each phase, if you will, of agile development. An example of this is shown in Table 20-1.

Table 20-1 Different Levels of Doneness

Defining doneness at these individual levels has proven to work for our teams and has significantly improved our ability to quantify and meet all of our various customer expectations. Keep in mind that there is a connection among all of the criteria, so defining at one level really helps define the others. We often start at the Release Criteria level and work our way “backwards.”

Agile development doesn’t work if stories, iterations, or releases aren’t “done.” “Doneness” includes testing, and testing is often the thing that gets postponed when time is tight. Make sure your success criteria at every level includes all of the necessary testing to guide and validate development.

Each project, each team, each business is unique. Agile teams work with the business experts to decide when they’re ready to deliver software to production. If the release deadline is set in stone, the business will have to modify scope. If there’s enough flexibility to release when the software has enough value, the teams can decide when the quality criteria have been met and the software can go to production.

Challenging Release Candidate Builds

Coni Tartaglia’s team uses a checklist to evaluate each release candidate build. The checklist might specify that the release candidate build:

• Includes all features that provide business value for the release, including artwork, logos, legal agreements, and documentation

• Meets all build acceptance criteria

• Has proof that all agreed-upon tests (acceptance, integration, regression, nonfunctional, UAT) have passed

• Has no open defect reports

Coni’s team challenges the software they might ship with a final set of inspections and agreed-upon “release acceptance tests,” or “RATS.” She explains:

The key phrase is “agreed-upon tests.” By agreeing on the tests in advance, the scope for the release checklist is well defined. Include system-level, end-to-end tests in the RATS, and select from the compatibility roster tests, which will really challenge the release candidate build. Performance tests can also be included in RATs. Agree in advance on the content of the automation suites as well as a subset of manual tests for each RAT.

Agree in advance which tests will be repeated if a RAT succeeds in causing the failure of a release candidate build. If the software has survived several iterations of continuously run automated regression tests, passing these final challenges should be a breeze.

Defining acceptance criteria is ultimately up to the customers. Testers are in a unique position to help the customer and development teams agree on the criteria that optimize product quality.

Traditional software development works in long time frames, with deadlines set far in advance and hurdles to clear from one phase to the next. Agile development lets us produce quality software in small increments and release as necessary. The development and customer teams can work closely to define and decide what to release and when. Testers can play a critical role in this goal-setting process.


Release Management

Many organizations have a release management team, but if yours doesn’t, someone still does the work. In a small organization it is often the QA manager who fulfills this role. The person leading the release may hold a release readiness meeting with the stakeholders to evaluate readiness.

A release readiness checklist is a great tool to use to walk through what is important to your team. The intention of this checklist is to help the team objectively determine what was completed and identify the risks associated with not completing a task.

For example, if training is not required because the changes made to the product were transparent to the end user, then the risk is low. However, if there were significant changes to the process for how a new user is created in the system, the risk to the production support or help desk teams would be very high and might warrant a delay. The needs of all stakeholders must be considered.

Release notes are important for any product release. The formality of these depends on the audience. If your product is aimed at developers, then a “read me” text file is probably fine. In other cases, you may want to make them more formal. Whatever the media, they should address the needs of the audience. Don’t provide a lot of added information that isn’t needed.

When Janet gets a new release, one of the first things she does is check the version and all of the components. “Did I get what they said they gave me? Are there special instructions I need to consider before installing, such as dependencies or upgrade scripts?” Those are good simple questions to answer in release notes. Other things to include are the new features that the customer should look for.

Release notes should give special consideration to components that aren’t part of what your development team delivered, such as a help file or user manuals prepared by a different team. Sometimes old release notes get left on the release media, which may or may not be useful to the end user. Consider what is right for your team and your application.


Packaging

We’ve talked a lot about continual integration. We tend to take it for granted and forget what good configuration management means. “Build once, deploy multiple times” is part of what gives us confidence when we release. We know that the build we tested in staging is the same build that the customer tested in UAT and is the build we release to production. This is critical for a successful release.

If the product is intended for an external customer, the installation should be easy, because the installation may be the first look at the product that customer has. Know your audience and its tolerance level for errors. How will the product be delivered? For example, if it is to be downloaded off the Internet, then it should be a simple download and install. If it is a huge enterprise system, then maybe your organization needs to send a support person with the product to help with the install.


Customer Expectations

Before we spring new software on our customers, we’d better be certain they are ready for it. We must be sure they know what new functionality to expect and that they have some means to deal with problems that arise.


Production Support

Many organizations have a production or operations support team that maintains the code and supports customers after it’s in production. If your company has a production support team, that group is your first customer. Make it your partner as well. Production support teams receive defect reports and enhancement requests from the customers, and they can work with your team to identify high-risk areas.

Very often the production support team is the team that accepts the release from the development team. If your organization has this type of hand-off, it is important that your development team works closely with the production support team to make it a smooth transition. Make sure the production support team understands how to use the system’s log files and the messaging and monitoring systems in order to keep track of operations and identify problems quickly.


Understand Impact to Business

Every time a deployment to production requires an outage, the product is unavailable to your customer. If your product is a website, the impact may be huge; if your product is an independent application downloaded onto a PC, the impact is low. Agile teams release frequently to maximize value to the business, and small releases carry a lower risk of a large negative impact. It’s common sense to work with the business to schedule releases for periods that minimize disruption. Automate and streamline deployment processes as much as possible to keep downtime windows small. A quick deployment process is also helpful during development in short iterations, where we may deploy a dozen times in one day.

International Considerations

Markus Gärtner, an “agile affected” testing group lead, explains his team’s approach to timing its releases:

We build telecommunications software for mobiles, so we usually install our software at night, when no one is likely to make calls. This might be during our office hours, when we’re handling a customer in Australia, but usually it is during our nighttime.

My colleagues who do the actual installation—there are three within our team—are most likely to appear late during next day’s office hours because we don’t have a separate group for these tasks.

As businesses and development teams become more global, release timing gets more complicated. Fortunately, production configurations can make releases easier. If your production environment has multiple application servers, you may be able to bring them down one at a time for release without disrupting users.
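For teams in that situation, the release script often amounts to a rolling loop: take one server out of rotation, deploy, wait for its health check, put it back, and repeat. The sketch below shows the shape of such a loop; the host names and health-check URL are placeholders, and the deploy and load-balancer steps are left as functions your own tooling would supply.

# A rolling deployment loop: update one application server at a time so the
# remaining servers keep handling users.

import time
import urllib.request

SERVERS = ["app1.example.com", "app2.example.com", "app3.example.com"]

def healthy(host):
    """Return True once the node answers its health-check URL with HTTP 200."""
    try:
        with urllib.request.urlopen(f"http://{host}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def rolling_deploy(deploy_one, remove_from_lb, add_to_lb):
    """deploy_one, remove_from_lb, and add_to_lb come from your own tooling."""
    for host in SERVERS:
        remove_from_lb(host)          # drain the node so users aren't affected
        deploy_one(host)              # install the new build on this node only
        while not healthy(host):      # wait until it is serving again
            time.sleep(5)
        add_to_lb(host)               # return it to rotation, then move on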

New releases should be as transparent as possible to the customer. The fewer emergency releases or patches required after a release, the more confidence your customer will have in both the product and the development team.

Learn from each release and take actions to make the next one go more smoothly. Get all roles, such as system and database administrators, involved in the planning. Evaluate each release and think of ways to improve the next one.


Summary

This chapter covered the following points:

Successful delivery of a product includes more than just the application you are building. Plan the non-software deliverables such as documentation, legal notices, and training.

The end game is an opportunity to put the spit and polish, the final finishing touches, on your product.

Other groups may be responsible for environments, tools, and other components of the end game and release. Coordinate with them ahead of time.

Be sure to test database update scripts, data conversions, and other parts of the installation.

UAT is an opportunity for customers to test against their data and to build their confidence in the product.

Budget time for extra cycles as needed, such as post-development cycles to coordinate testing with outside parties.

Establish release acceptance criteria during release planning so that you can know when you’re ready to release.

Testers often are involved in managing releases and testing the packaging.

When releasing the product, consider the whole package—what the customer needs and expects.

Learn from each release, and adapt to improve your processes.

