Chapter 8 Business-Facing Tests that Support the Team



In the last chapter, we talked about programmer tests, those low-level tests that help programmers make sure they have written the code right. But how do they know the right thing to build? In phased and gated methodologies, we try to solve that problem by gathering requirements up front and putting as much detail into them as possible. In projects using agile practices, we put our faith in story cards and in tests that customers understand to help us code the right thing. These “understandable” tests are the subject of this chapter.


Driving Development with Business-Facing Tests

Yikes, we’re starting an iteration with no more information than what fits on an index card, something like what’s shown in Figure 8-1.

Figure 8-1 Story to set up conversation

That’s not much information, and it’s not meant to be. Stories are a brief description of desired functionality and an aid to planning and prioritizing work. On a traditional waterfall project, the development team might be given a wordy requirements document that includes every detail of the feature set. On an agile project, the customer team and development team strike up a conversation based on the story. The team needs requirements of some kind, and they need them at a level that will let them start writing working code almost immediately. To do this, we need examples to turn into tests that will confirm what the customer really wants.

These business-facing tests address business requirements; they provide the big picture and enough detail to guide coding. Business-facing tests express requirements based on examples, and they use a language and format that both the customer and development teams can understand. Examples form the basis of learning the desired behavior of each feature, and we use those examples as the basis for our story tests in Quadrant 2 (see Figure 8-2).

Figure 8-2 The Agile Testing Quadrants, highlighting Quadrant 2

Business-facing tests are also called “customer-facing,” “story,” “customer,” and “acceptance” tests. The term “acceptance test” is particularly confusing, because it makes some people think only of “user acceptance tests.” In the context of agile development, acceptance tests generally refer to the business-facing tests, but the term could also include the technology-facing tests from Quadrant 4, such as the customer’s criteria for system performance or security. In this chapter, we’re discussing only the business-facing tests that support the team by guiding development and providing quick feedback.

As we explained in the previous two chapters, the order in which we present these four quadrants isn’t related to the order in which we might perform activities from each quadrant. The business-facing tests in Quadrant 2 are written for each story before coding is started, because they help the team understand what code to write. Like the tests in Quadrant 1, these tests drive development, but at a higher level. Quadrant 1 activities ensure internal quality, maximize team productivity, and minimize technical debt. Quadrant 2 tests define and verify external quality, and help us know when we’re done.

Part V, “An Iteration in the Life,” examines the order in which we perform tests from the different quadrants.

The customer tests that drive coding are generally written in an executable format and automated, so that team members can run the tests as often as they like to see whether the functionality works as desired. These tests, or some subset of them, will become part of an automated regression suite so that future development doesn’t unintentionally change system behavior.

As we discuss the stories and examples of desired behavior, we must also define nonfunctional requirements such as performance, security, and usability. We’ll also make note of scenarios for manual exploratory testing. We’ll talk about these other types of testing activities in the chapters on Quadrants 3 and 4.

We hear lots of questions relating to how agile teams get requirements. How do we know what the code we write should do? How do we obtain enough information to start coding? How do we get the customers to speak with one voice and present their needs clearly? Where do we start on each story? How do we get customers to give us examples? How do we use those to write story tests?

This chapter explains our strategy for creating business-facing tests that support the team as it develops each story. Let’s start by talking more about requirements.


The Requirements Quandary

Just about every development team we’ve known, agile or not, struggles with requirements. Teams on traditional waterfall projects might invest months in requirements gathering only to have them be wrong or quickly get out of date. Teams in chaos mode might have no requirements at all, with the programmers making their best guess as to how a feature should work.

Agile development embraces change, but what happens when requirements change during an iteration? We don’t want a long requirements-gathering period before we start coding, but how can we be sure we (and our customers) really understand the details of each story?

In agile development, new features usually start out life as stories, or groups of stories, written by the customer team. Story writing is not about figuring out implementation details, although high-level discussions can have an impact on dependencies and how many stories are created. It’s helpful if some members of the technical team can participate in story-writing sessions so that they can have input into the functionality stories and help ensure that technical stories are included as part of the backlog. Programmers and testers can also help customers break stories down to appropriate sizes, suggest alternatives that might be more practical to implement, and discuss dependencies between stories.

Stories by themselves don’t give much detail about the desired functionality. They’re usually just a sentence that expresses who wants the feature, what the feature is, and why they want it. “As an Internet shopper, I need a way to delete items from my shopping cart so I don’t have to buy unwanted items” leaves a lot to the imagination. Stories are only intended as a starting point for an ongoing dialogue between business experts and the development team. If team members understand what problem the customer is trying to solve, they can suggest alternatives that might be simpler to use and implement.

In this dialogue between customers and developers, agile teams expand on stories until they have enough information to write appropriate code. Testers help elicit examples and context for each story, and help customers write story tests. These tests guide programmers as they write the code and help the team know when it has met the customers’ conditions of satisfaction. If your team has use cases, they can supplement the examples or coaching tests to clarify the needed functionality (see Figure 8-3).

Figure 8-3 The makeup of a requirement

In agile development, we accept that we’ll never understand all of the requirements for a story ahead of time. After the code that makes the story tests pass is completed, we still need to do more testing to better understand the requirements and how the features should work.

After customers have a chance to see what the team is delivering, they might have different ideas about how they want it to work. Often customers have a vague idea of what they want and a hard time defining exactly what that is. The team works with the customer or customer proxy for an iteration and might deliver just a kernel of a solution. The team keeps refining the functionality over multiple iterations until it has defined and delivered the feature.

Being able to iterate is one reason agile development advocates small releases and developing one small chunk at a time. If our customer is unhappy with the behavior of the code we deliver in this iteration, we can quickly rectify that in the next, if they deem it important. Requirements changes are pretty much inevitable.

We must learn as much as we can about our customers’ wants and needs. If our end users work in our location, or it’s feasible to travel to theirs, we should sit with them, work alongside them, and learn to do their jobs if we can. Not only will we understand their requirements better, but we might even identify requirements they didn’t think to state.

Tests need to include more than the customers’ stated requirements. We need to test for post conditions, impact on the system as a whole, and integration with other systems. We identify risks and mitigate those with tests as needed. All of these factors guide our coding.


Common Language

We can also use our tests to provide a common language that’s understood by both the development team and the business experts. As Brian Marick [2004] points out, a shared language helps the business people envision the features they want. It helps the programmers craft well-designed code that’s easy to extend. Real-life examples of desired and undesired behavior can be expressed so that they’re understood by both the business and technical sides. Pictures, flow diagrams, spreadsheets, and prototypes are accessible to people with different backgrounds and viewpoints. We can use these tools to find examples and then easily turn those examples into tests. The tests need to be written in a way that’s comprehensible to a business user reading them yet still executable by the technical team.

Business-facing tests also help define scope, so that everyone knows what is part of the story and what isn’t. Many of the test frameworks now allow teams to create a domain language and define tests using that language. Fit (Framework for Integrated Test) is one of those.
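To give a flavor of what such a domain language looks like in practice, here’s a sketch of a Fit-style test table for the shopping-cart domain. The fixture name, column headings, and values are ours, invented purely for illustration:

|DeleteItemFromCart|
|items in cart|item to delete|items remaining()|cart total()|
|Book $20.25, CD $5.38|CD|1|$20.25|

The first row names the fixture code behind the table, the plain columns are inputs, and the columns ending in () are expected outputs. A business expert can read, review, and even add rows like these without knowing anything about the code underneath.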

More on Fit in Chapter 9, “Toolkit for Business-Facing Tests that Support the Team.”

The Perfect Customer

Andy Pols allowed us to reprint this story from his blog [Pols, 2008]. In it, he shows how his customer demanded a test, wrote it, and realized the story was out of scope.

On a recent project, our customer got so enthusiastic about our Fit tests that he got extremely upset when I implemented a story without a Fit test. He refused to let the system go live until we had the Fit test in place.

The story in question was very technical and involved sending a particular XML message to an external system. We just could not work out what a Fit test would look like for this type of requirement. Placing the expected XML message, with all its gory detail, in the Fit test would not have been helpful because this is a technical artifact and of no interest to the business. We could not work out what to do. The customer was not around to discuss this, so I just went ahead and implemented the story (very naughty!).

What the customer wanted was to be sure that we were sending the correct product information in the XML message. To resolve the issue, I suggested that we have a Fit test that shows how the product attributes get mapped onto the XML message using XPath, although I still thought this was too technical for a business user.

We gave the customer a couple of links to explain what XPath was so that he could explore whether this was a good solution for him. To my amazement, he was delighted with XPath (I now know who to turn to when I have a problem with XPath) and filled in the Fit test.

The interesting bit for me is that as soon as he knew what the message looked like and how it was structured, he realized that it did not really support the business—we were sending information that was outside the scope of our work and that should have been supplied by another system. He was also skeptical about the speed at which the external team could add new products due to the complex nature of the XML.

Most agile people we tell this story to think we have the “perfect customer!”

Even if your customers aren’t perfect, involving them in writing customer tests gives them a chance to identify functionality that’s outside the scope of the story. We try to write customer tests that customers can read and comprehend. Sometimes we set the bar too low. Collaborate with your customers to find a tool and format for writing tests that works for both the customer and development teams.
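To make the XPath idea concrete, here’s a minimal sketch of the kind of check Andy’s customer was filling in, written against the standard Java XPath API. The XML structure, element names, and expected value are all invented for illustration; the real messages and the Fit fixture around them were specific to that project.

import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class ProductMessageCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical outbound message; the real XML was far more detailed.
        String xml = "<order><product><sku>AB-123</sku>"
                + "<price>20.25</price></product></order>";

        XPath xpath = XPathFactory.newInstance().newXPath();

        // The customer's test mapped each product attribute to an XPath
        // expression plus an expected value, much like this single check.
        String sku = xpath.evaluate("/order/product/sku",
                new InputSource(new StringReader(xml)));

        System.out.println("AB-123".equals(sku) ? "pass" : "fail");
    }
}

In a Fit test, the XPath expressions and expected values would come from table rows that the customer fills in, which is exactly what made the message’s contents visible to him.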

It’s fine to say that our customers will provide the examples we need to understand the value each story should deliver. But what if they don’t know how to explain what they want? In the next section, we’ll suggest ways to help customers define their conditions of satisfaction.


Eliciting Requirements

If you’ve ever been a customer requesting a particular software feature, you know how hard it is to articulate exactly what you want. Often, you don’t really know exactly what you want until you can see, feel, touch, and use it. We have lots of ways to help our customers get clarity about what they want.


Ask Questions

Start by asking questions. Testers can be especially good at asking a variety of questions because they are conscious of the big picture, the business-facing and technical aspects of the story, and are always thinking of the end user experience. Types of general questions to ask are:

Is this story solving a problem?

If so, what’s the problem we’re trying to solve?

Could we implement a solution that doesn’t solve the problem?

How will the story bring value to the business?

Who are the end users of the feature?

What value will they get out of it?

What will users do right before and right after they use that feature?

How do we know we’re done with this story?

One question Lisa likes to ask is, “What’s the worst thing that could happen?” Worst-case scenarios tend to generate ideas. They also help us consider risk and focus our tests on critical areas. Another good question is, “What’s the best thing that could happen?” This question usually generates our happy path test, but it might also uncover some hidden assumptions.


Use Examples

Most importantly, ask the customer to give you examples of how the feature should work. Let’s say the story is about deleting items out of an online shopping cart. Ask the customer to draw a picture on a whiteboard of how that delete function might look. Do they want any extra features, such as a confirmation step, or a chance to save the item in case they want to retrieve it later? What would they expect to see if the deletion couldn’t be done?

Examples can form the basis for our tests. Our challenge is to capture examples, which might be expressed in the business domain language, as tests that can actually be executed. Some customers are comfortable expressing examples using a test tool such as Fit or FitNesse as long as they can write them in their domain language.

People often confuse these two terms, so let’s explore the difference between an example and a test using a simple story (see Figure 8-4).

Figure 8-4 Story to use as a base for examples and tests

An example would look something like this:

There are 5 items on a page. I want to select item 1 for $20.25 and put it in the shopping cart. I click to the next page, which has 5 more items. I select a second item on that page for $5.38 and put it in my shopping cart. When I say I’m done shopping, it will show both the item from the first page and the item from the second page in my shopping cart, with a total of $25.63.

The test could be quite a bit different. We’ll use a Fit-type format in Table 8-1 to show you how the test could be represented.

Table 8-1 Test for Story PA-2
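Here is a sketch of how such a table might look as a Fit column fixture; the fixture name, column headings, and choice of inputs are illustrative:

|AddItemsToCart|
|page|item|price|cart total()|
|1|item 1|20.25|$20.25|
|2|item 7|5.38|$25.63|

The plain columns hold the inputs for each action, and the cart total() column holds the expected result after each row is executed.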

The test captures the example in an executable format. It might not use exactly the same inputs, but it encapsulates the sample user scenario. More test cases can be written to test boundary conditions, edge cases, and other scenarios.


Multiple Viewpoints

Each example or test has one point of view. Different people will write different tests or examples from their unique perspectives. We’d like to capture as many different viewpoints as we can, so think about your users.

Getting the requirements right is an area where team members in many different roles can jump in to help. Business analysts, subject matter experts, programmers, and various members of the customer team all have something to contribute. Think about other stakeholders too, such as your production support team; they have a unique perspective.

We often forget about nonfunctional requirements such as “How long does the system need to be up? What happens if it fails? If we have middleware that passes messages, do we expect messages to be large enough that we might need to consider loss during transmission? Or will they be a constant size? What happens if there is no traffic for hours? Does the system need to warn someone?” Testing for these types of requirements usually falls into Quadrants 3 and 4, but we still need to write tests to make sure they get done.

All of the examples that customers give to the team add up quickly. Do we really have to turn all of these into executable tests? Not if we have the customers there to tell us whether the code is working the way they want. With techniques such as paper prototyping, designs can be tested before a line of code is written.

Wizard of Oz Testing

Gerard Meszaros, a Certified ScrumMaster (Practicing) and Agile Coach, shared his story about Wizard of Oz Testing on Agile Projects. He describes a good example of how artifacts we generate to elicit requirements can help communicate meaning in an unambiguous form.

We thought we were ready to release our software. We had been building it one iteration at a time under the guidance of an on-site customer who had prioritized the functionality based on what he needed to enter into integration testing with his business partners. We consciously deferred the master data maintenance and reporting functionality to later iterations to ensure we had the functionality needed for integration testing ready. The integration testing went fine, with just a few defects logged (all related to missing or misunderstood functionality). In the meantime, we implemented the master data maintenance in parallel with integration testing in the last few iterations. When we went into acceptance testing with the business users, we got a rude shock: They hated the maintenance and reporting functionality! They logged so many defects and “must-have improvements” that we had to delay the release by a month. So much for coming up with a plan that would allow us to deliver early!

While we were reimplementing the master data maintenance, I attended the Agile 2005 conference and took a tutorial by Jeff Patton. One of the exercises was building paper prototypes of the UI for a sample application. Then we “tested” the paper prototypes with members of the other groups as our users and found out how badly flawed our UI designs were. Déjà vu! The tutorial resembled my reality.

On my return to the project back home, I took the project manager I was mentoring in agile development aside and suggested that paper prototyping and “Wizard of Oz” testing (the Wizard of Oz reference is to a human being acting as a computer—sort of the “man behind the curtain”) might have avoided our one-month setback. After a very short discussion, we decided to give it a try on our release 2 functionality. We stayed late a couple of evenings and designed the UI using screenshots from the R1 functionality overlaid with hand-drawn R2 functionality. It had been a long time since either of us had used scissors and glue sticks, and it was fun!

For the Wizard of Oz testing with users, we asked our on-site customers to find some real users with whom to do the testing. They also came up with some realistic sample tasks for the users to try to execute. We put the sample data into Excel spreadsheets and printed out various combinations of data grids to use in the testing. Some future users came to town for a conference. We hijacked pairs of them for an hour each and did our testing.

I acted as the “wizard,” playing the part of the computer (“it’s a 286 processor so don’t expect the response times to be very good”). The on-site customer introduced the problem and programmers acted as observers, recording the missteps the users made as “possible defects.” After just a few hours, we had huge amounts of valuable data about which parts of our UI design worked well and which parts needed rethinking. And there was little argument about which was which! We repeated the usability testing with other users when we had alpha versions of the application available and gained further valuable insights. Our business customer found the exercise so valuable that on a subsequent project the business team set about doing the paper prototyping and Wizard of Oz testing with no prompting from the development team. This might have been influenced somewhat by the first e-mail we got from a real user 30 minutes after going live: “I love this application!!!”

Developing user interfaces test-first can seem like an intimidating effort. The Wizard of Oz technique can be done before writing a single line of code. The team can test user interaction with the system and gather plenty of information to understand the desired system behavior. It’s a great way to facilitate communication between the customer and development teams.

Close, constant collaboration between the customer team and the developer team is key to obtaining examples on which to base customer tests that drive coding. Communication is a core agile value, and we talk about it more in the next section.


Communicate with Customers

In an ideal world, our customers are available to us all day, every day. In reality, many teams have limited access to their business experts, and in many cases, the customers are in a different location or time zone. Do whatever you can to have face-to-face conversations. When you can’t, conference calls, phone conversations, emails, instant messages, cameras, and other communication tools will have to substitute. Fortunately, more tools to facilitate remote communication are available all the time. We’ve heard of teams, such as Erika Boyer’s team at iLevel by Weyerhaeuser, that use webcams that can be controlled by the folks in the remote locations. Get as close as you can to direct conversation.

Lisa’s Story

I worked on a team where the programmers were spread through three time zones and the customers were in a different one. We sent different programmers, testers, and analysts to the customer site for every iteration, so that each team member had “face time” with the customers at least every third iteration. This built trust and confidence between the developer and customer teams. The rest of the time we used phone calls, open conference calls, and instant messages to ask questions. With continual fine-tuning based on retrospective discussions, we succeeded in satisfying and even delighting the customers.

—Lisa

Even when customers are available and lines of communication are wide open, communication needs to be managed. We want to talk to each member of the customer team, but they all have different viewpoints. If we get several different versions of how a piece of functionality should work, we won’t know what to code. Let’s consider ways to get customers to agree on the conditions of satisfaction for each story.


Advance Clarity

If your customer team consists of people from different parts of the organization, there may be conflicting opinions among them about exactly what’s intended by a particular story. In Lisa’s company, business development wants features that generate revenue, operations wants features that cut down on phone support calls, and finance wants features that streamline accounting, cash management, and reporting. It’s amazing how many unique interpretations of the same story can emerge from people who have differing viewpoints.

Lisa’s Story

Although we had a product owner when we first implemented Scrum, we still got different directives from different customers. Management decided to appoint a vice president with extensive domain and operations knowledge as the new product owner. He is charged with getting all of the stakeholders to agree on each story’s implications up front. He and the rest of the customer team meet regularly to discuss upcoming themes and stories, and to agree on priorities and conditions of satisfaction. He calls this “advance clarity.”

—Lisa

A Product Owner is a role in Scrum. He’s responsible not only for achieving advance clarity but also for acting as the “customer representative” in prioritizing stories. There’s a downside, though. When you funnel the needs of many different viewpoints through one person, something can be lost. Ideally, the development team should sit together with the customer team and learn how to do the customer’s work. If we understand the customer’s needs well enough to perform their daily tasks, we have a much better chance of producing software that properly supports those tasks.

Janet’s Story

Our team didn’t implement the product owner role at first and used the domain experts on the team to determine prioritization and clarity. It worked well, but achieving consensus took many meetings because each person had different experiences. The product was better for it, but there were trade-offs. The many meetings meant the domain experts were not always available for answering questions from the programmers, so coding was slower than anticipated.

There were four separate project teams working on the same product, but each one was focused on different features. After several retrospectives and a lot of problem-solving sessions, each project team appointed a Product Owner. The number of meetings was cut down significantly, because most business decisions were made by the domain experts on their particular project. Meetings with all of the domain experts were held only when there were differences of opinion, and the Product Owner facilitated consensus on the issue. Decisions were made much faster, and the domain experts were more available to answer the team’s questions and were able to keep up with the acceptance tests.

—Janet

However your team chooses to bring together varying viewpoints, it is important that there is only “one voice of the customer” presented to the team.

We said that product owners provide conditions of satisfaction. Let’s look more closely at what we mean.


Conditions of Satisfaction

There are conditions of satisfaction for the whole release as well as for each feature or story. Acceptance tests help define what it means for a story to be accepted. Your development team can’t successfully deliver what the business wants unless conditions of satisfaction for a story are agreed to up front. The customer team needs to “speak with one voice.” If you’re getting different requirements from different stakeholders, you might need to push back and put off the story until you have a firm list of business satisfaction conditions. Ask the customer representative to provide a minimum amount of information on each story so that you can start every iteration with a productive conversation.

The best way to understand the customer team’s requirements is to talk with the customers face to face. Because everyone struggles with “requirements,” there are tools to help the customer team work through each story. Conditions of satisfaction should include not only the features that the story delivers but also the impacts on the larger system.

Lisa’s product owner uses a checklist format to sort out issues such as:

Business satisfaction conditions

Impact on existing functions such as the website, documents, invoices, forms, or reports

Legal considerations

The impact on regularly scheduled processes

References to mock-ups for UI stories

Help text, or who will provide it

Test cases

Data migration (as appropriate)

Internal communication that needs to happen

External communication to business partners and vendors

The product owner uses a template to put this information on the team’s wiki so that it can be used as team members learn about the stories and start writing tests.

Chapter 9, “Toolkit for Business-Facing Tests that Support the Team,” includes example checklists as well as other tools for expressing requirements.

These conditions are based on key assumptions and decisions made by the customer team for a story. They generally come out of conversations with the customer about high-level acceptance criteria for each story. Discussing conditions of satisfaction helps identify risky assumptions and increases the team’s confidence in writing and correctly estimating all of the tasks that are needed to complete the story.


Ripple Effects

In agile development, we focus on one story at a time. Each story is usually a small component of the overall application, but it might have a big ripple effect. A new story drops like a little stone into the application water, and we might not think about what the resulting waves might run into. It’s easy to lose track of the big picture when we’re focusing on a small number of stories in each iteration.

Lisa’s team finds it helpful to make a list of all of the parts of the system that might be affected by a story. The team can check each “test point” to see what requirements and test cases it might generate. A small and innocent story might have a wide-ranging impact, and each part of the application that it touches might present another level of complexity. You need to be aware of all the potential impacts of any code change. Making a list is a good place to start. In the first few days of the iteration, the team can research and analyze affected areas further and see whether any more task cards are needed to cover them all.

Janet’s Story

In one project I was on, we used a simple spreadsheet that listed all of the high-level functionality of the application under test. During release planning, and at the start of each new iteration, we reviewed the list and thought about how the new or changing functionality would affect those areas. That became the starting point for determining what level of testing needed to be done in each functional area. This impact analysis was in addition to the actual story testing and enabled our team to see the big picture and the impact of the changes to the rest of the system.

—Janet

Stories that look small but that impact unexpected areas of the system can come back to bite you. If your team forgets to consider all dependencies, and if the new code intersects with existing functionality, your story might take much longer than planned to finish. Make sure your story tests include the less obvious fallout from implementing the new functionality.

Chapter 16, “Hit the Ground Running,” and Chapter 17, “Iteration Kickoff,” give examples of when and how teams can plan customer tests and explore the wider impact of each story.

Take time to identify the central value each story provides and figure out an incremental approach to developing it. Plan small increments of writing tests, writing code, and testing the code some more. This way, your Quadrant 2 tests ensure you’ll deliver the minimum value as planned.


Thin Slices, Small Chunks

Writing stories is a tricky business. When the development team estimates new stories, it might find some stories too big, so it will ask the customer team to go back and break them into smaller stories. Stories can be too small as well, and might need to be combined with others or simply treated as tasks. Agile development, including testing, takes on one small chunk of functionality at a time.

When your team embarks on a new project or theme, ask the product owner to bring all of the related stories to a brainstorming session prior to the first iteration for that theme. Have the product owner and other interested stakeholders explain the stories. You might find that some stories need to be subdivided or that additional stories need to be written to fill in gaps.

After you understand what value each story should deliver and how it fits in the context of the system, you can break the stories down into small, manageable pieces. You can write customer tests to define those small increments, while keeping in mind the impact on the larger application.

A smart incremental approach to writing customer tests that guide development is to start with the “thin slice” that follows a happy path from one end to the other. Identifying a thin slice, also called a “steel thread” or “tracer bullet,” can be done on a theme level, where it’s used to verify the overall architecture. This steel thread connects all of the components together, and after it’s solid, more functionality can be added.

See Chapter 10, “Business-Facing Tests that Critique the Product,” for more about exploratory testing.

We find this strategy works at the story level, too. The sooner you can build the end-to-end path, the sooner you can do meaningful testing, get feedback, start automating tests, and start exploratory testing. Begin with a thin slice of the most stripped-down functionality that can be tested. This can be thought of as the critical path. For a user interface, this might start with simply navigating from one page to the next. We can show this to the customer and see whether the flow makes sense. We could write a simple automated GUI test. For the free-shipping threshold story at the beginning of this chapter, we might start by verifying the logic used to sum up the order total and determine whether it qualifies for free shipping, without worrying about how it will look on the UI. We could automate tests for it with a functional test tool such as FitNesse.
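As a sketch of what driving that first thin slice with Fit might look like, here is a column fixture for the free-shipping logic. The class, the field names, and the $25 threshold are assumptions made up for illustration, not the book’s actual example:

import fit.ColumnFixture;

// Hypothetical fixture for the free-shipping thin slice; the threshold
// and all of the names here are invented for illustration.
public class FreeShippingFixture extends ColumnFixture {
    public double orderTotal;  // input column, set from each table row

    private static final double FREE_SHIPPING_THRESHOLD = 25.00;

    // Output column; Fit compares the return value to the expected cell.
    public boolean qualifiesForFreeShipping() {
        return orderTotal >= FREE_SHIPPING_THRESHOLD;
    }
}

A customer-readable table runs against the fixture, and boundary rows are exactly the kind of cases to add once the happy path passes:

|FreeShippingFixture|
|order total|qualifies for free shipping()|
|24.99|false|
|25.00|true|
|25.01|true|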

See Part IV, “Test Automation,” for more about regression test automation.

After the thin slice is working, we can write customer tests for the next chunk or layer of functionality, and write the code that makes those tests pass. Now we’ll have feedback for this small increment, too. Maybe we add the UI to display the checkout page showing that the order qualified for free shipping, or add the layer to persist updates to the database. We can add on to the automated tests we wrote for the first pass. It’s a process of “write tests—write code—run tests—learn.” If you do this, you know that all of the code your team produces satisfies the customer and works properly at each stage.

Lisa’s Story

My team has found that we have to focus on accomplishing a simple thin slice and add to it in tiny increments. Before we did this, we tended to get stuck on one part of the story. For example, if we had a UI flow that included four screens, we’d get so involved in the first one that we might not get to the last one, and there was no working end-to-end path. By starting with an end-to-end happy path and adding functionality a step at a time, we can be sure of delivering the minimum value needed.

Here’s an example of our process. The story was to add a new conditional step to the process of establishing a company’s retirement plan. This step allows users to select mutual fund portfolios, but not every user has access to this feature. The retirement plan establishment functionality is written in old, poorly designed legacy code. We planned to write the new page in the new architecture, but linking the new and old code together is tricky and error prone. We broke the story down into slices that might look tiny but that allowed us to manage risk and minimize the time needed to code and test the story. Figure 8-5 shows a diagram of incremental steps planned for this story.

Figure 8-5 Incremental steps

The #1 thin slice is to insert a new, empty page based on a property. While it’s not much for our customers to look at, it lets us test the bridge between old and new code, and then verify that the plan establishment navigation still works properly. Slice #2 introduces some business logic: If no mutual fund portfolios are available for the company, skip to the fund selection step, which we’re not changing yet. If there are fund portfolios available, display them on the new step 3. In slice #3, we change the fund selection step, adding logic to display the funds that make up the portfolios. Slice #4 adds navigational elements between various steps in the establishment process.

We wrote customer tests to define each slice. As the programmers completed each one, we manually tested it and showed it to our customers. Any problems found were fixed immediately. We wrote an automated GUI test for slice #1, and added to it as the remaining steps were finished. The story was difficult because of the old legacy code interacting with the new architecture, but the stepwise approach made implementation smooth, and saved time.

When we draw diagrams such as this to break stories into slices, we upload photos of them to our team wiki so our remote team member can see them too. As each step is finished, we check it off in order to provide instant visual feedback.

—Lisa

If the task of writing customer tests for a story seems confusing or overwhelming, your team might need to break the story into smaller steps or chunks. Finishing stories a small step at a time helps spread out the testing effort so that it doesn’t get pushed to the end of the iteration. It also gives you a better picture of your progress and helps you know when you’re done—a subject we’ll explore in the next section.

Check the bibliography for Gerard Meszaros’s article “Using Storyotypes to Split Bloated XP Stories.”


How Do We Know We’re Done?

We have our business-facing tests that support the team—those tests that have been written to ensure the conditions of satisfaction have been met. They start with the happy path and show that the story meets the intended need. They cover various user scenarios and ensure that other parts of the system aren’t adversely affected. These tests have been run, and they pass (or at least they’ve identified issues to be fixed).

Are we done now? We could be, but we’re not sure yet. The true test is whether the software’s user can perform the action the story was supposed to provide. Activities from Quadrants 3 and 4, such as exploratory testing, usability testing, and performance testing, will help us find out. For now, we just need to do some customer testing to ensure that we have captured all of the requirements. The business users or product owners are the right people to determine whether every requirement has been delivered, so they’re the right people to do the exploring at this stage.

When the tests all pass and any missed requirements have been identified, we are done for the purpose of supporting the programmers in their quest for code that does the “right thing.” It does not mean we are done testing. We’ll talk much more about that in the chapters that follow.

Another goal of customer tests is to identify high-risk areas and make sure the code is written to solidify those. Risk management is an essential practice in any software development methodology, and testers play a role in identifying and mitigating risks.


Tests Mitigate Risk

Customer tests are written not only to define expected behavior of the code but to manage risk. Driving development with tests doesn’t mean we’ll identify every single requirement up front or be able to predict perfectly when we’re done. It does give us a chance to identify risks and mitigate them with executable test cases. Risk analysis isn’t a new technique. Agile development inherently mitigates some risks by prioritizing business value into small, tested deliverable pieces and by having customer involvement in incremental acceptance. However, we should still brainstorm potential events, the probability they might occur, and the impact on the organization if they do happen so that the right mitigation strategy can be employed.

Coding to predefined tests doesn’t work well if the tests are for improbable edge cases. While we don’t want to test only the happy path, it’s a good place to start. After the happy path is known, we can define the highest risk scenarios—cases that not only have a bad outcome but also have a good possibility of happening.

In addition to asking the customer team questions such as “What’s the worst thing that could happen?,” ask the programmers questions like these: “What are the post conditions of this section of code? What should be persisted in the database? What behavior should we look for down the line?” Specify tests to cover potentially risky outcomes of an action.

Lisa’s Story

My team considers worst-case scenarios in order to help us identify customer tests. For example, we planned a story to rewrite the first step of a multistep account creation wizard with a couple of new options. We asked ourselves questions such as the following: “When the user submits that first page, what data is inserted in the database? Are any other updates triggered? Do we need to regression test the entire account setup process? What about activities the user account might do after setup?” We might need to test the entire life cycle of the account. We don’t have time to test more than necessary, so decisions about what to test are critical. The right tests help us mitigate the risk brought by the change.

—Lisa
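One way to pin down a post condition such as “what data is inserted in the database?” is a small fixture that queries for the expected row after the action runs. Everything in this sketch is hypothetical; the table, the column, and the connection URL would come from your own system:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import fit.ColumnFixture;

// Hypothetical post-condition check: after the first wizard step is
// submitted, an ACCOUNT row should exist. All names and the URL are invented.
public class AccountPersistedFixture extends ColumnFixture {
    public String accountName;  // input column

    // Output column: was the account row persisted by the wizard step?
    public boolean accountRowExists() throws SQLException {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:h2:mem:testdb");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT COUNT(*) FROM ACCOUNT WHERE NAME = ?")) {
            stmt.setString(1, accountName);
            try (ResultSet rs = stmt.executeQuery()) {
                rs.next();
                return rs.getInt(1) > 0;
            }
        }
    }
}

Checks like this let the team regression test the life cycle consequences of a change without manually inspecting the database each time.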

Programmers can identify fragile parts of the code. Does the story involve stitching together legacy code with a new architecture? Does the code being changed interact with another system or depend on third-party software? By discussing potential impacts and risky areas with programmers and other team members, we can plan appropriate testing activities.

There’s another risk. We might get so involved writing detailed test cases up front that the team loses the forest for the trees; that is, we can forget the big picture while we concentrate on details that might prove irrelevant.

Peril: Forgetting the Big Picture

It’s easy to slip into the habit of testing only individual stories or basing your testing on what the programmer tells you about the code. If you find integration problems between stories late in the release, or discover that a lot of requirements are missing after a story is “done,” take steps to mitigate this peril.

Always consider how each individual story impacts other parts of the system. Use realistic test data, use concrete examples as the basis of your tests, and have a lot of whiteboard discussions (or their virtual equivalent) in order to make sure everyone understands the story. Make sure the programmers don’t start coding before any tests are written, and use exploratory testing to find gaps between stories.

Remember the end goal and the big picture.

As an agile team, we work in short iterations, so it’s important to time-box the effort spent writing tests before coding starts. After each iteration is completed, take the time to evaluate whether more detail up front would have helped. Were there enough tests to keep the team on track? Was there a lot of wasted time because the story was misunderstood? Lisa’s team has found it best to write high-level story tests before coding, to write detailed test cases once coding starts, and then to do exploratory testing on the code as it’s delivered in order to give the team more information and help make needed adjustments.

Janet worked on a project that had some very intensive calculations. The time spent creating detailed examples and tests before coding started, in order to ensure that the calculations were done correctly, was time well spent. Understanding the domain, and the impact of each story, is critical to assessing the risk and choosing the correct mitigation strategy.

While business-facing tests can help mitigate risks, other types of tests are also critical. For example, many of the most serious issues are usually uncovered during manual exploratory testing. Performance, security, stability, and usability are also sources of risk. Tests to mitigate these other risks are discussed in the chapters on Quadrants 3 and 4.

Experiment and find ways that your team can balance using up-front detail and keeping focused on the big picture. The beauty of short agile iterations is that you have frequent opportunities to evaluate how your process is working so that you can make continual improvements.


Testability and Automation

When programmers on an agile team get ready to do test-driven development, they use the business-facing tests for the story in order to know what to code. Working from tests means that everyone thinks about the best way to design the code to make testing easier. The business-facing tests in Quadrant 2 are expressed as automated tests. They need to be clearly understood, easy to run, and provide quick feedback; otherwise, they won’t get used.

It’s possible to write manual test scripts for the programmers to execute before they check in code so that they can make sure they satisfied the customer’s conditions, but it’s not realistic to expect they’ll go to that much trouble for long. When meaningful business value has to be delivered every two weeks or every 30 days, information has to be direct and automatic. Inexperienced agile teams might accept the need to drive coding with automated tests at the developer test level more easily than at the customer test level. However, without the customer tests, the programmers have a much harder time knowing what unit tests to write.

Each agile team must find a process of writing and automating business-facing tests that drive development. Teams that automate only technology-facing tests find that they can have bug-free code that doesn’t do what the customer wants. Teams that don’t automate any tests will anchor themselves with technical debt.

Part IV, “Test Automation,” will guide you as you develop an automation strategy.

Quadrant 2 contains a lot of different types of tests and activities. We need the right tools to facilitate gathering, discussing, and communicating examples and tests. Simple tools such as paper or a whiteboard work well for gathering examples if the team is co-located. More sophisticated tools help teams write business-facing tests that guide development in an executable, automatable format. In the next chapter, we’ll look at the kinds of tools needed to elicit examples, and to write, communicate, and execute business-facing tests that support the team.


Summary

In this chapter, we looked at ways to support the team during the coding process with business-facing tests.

In agile development, examples and business-facing tests, rather than traditional requirements documents, tell the team what code to write.

Working on thin slices of functionality, in short iterations, gives customers the opportunity to see and use the application and adjust their requirements as needed.

An important area where testers contribute is helping customers express satisfaction conditions and create examples of desired, and undesired, behavior for each story.

Ask open-ended questions to help the customer think of all of the desired functionality and to surface hidden assumptions.

Help the customers achieve consensus on desired behavior for stories that accommodate the various viewpoints of different parts of the business.

Help customers develop tools (e.g., a story checklist) to express information such as business satisfaction conditions.

The development and customer teams should think through all of the parts of the application that a given story affects, keeping the overall system functionality in mind.

Work with your team to break feature sets into small, manageable stories and paths within stories.

Follow a pattern of “write tests—write code—run tests—learn” in a step-by-step manner, building on each pass through the functionality.

Use tests and examples to mitigate risks of missing functionality or losing sight of the big picture.

Driving coding with business-facing tests makes the development team constantly aware of the need to implement a testable application.

Business-facing tests that support the team must be automated for quick and easy feedback so that teams can deliver value in short iterations.

