Chapter 12 Summary of Testing Quadrants



In Chapter 6, we introduced the testing quadrants, and in the chapters that followed we talked about how to use the concepts in your agile project. In this chapter, we’ll bring it all together with an example of an agile team that used tests from all four quadrants.


Review of the Testing Quadrants

We’ve just spent five chapters talking about each of the quadrants (see Figure 12-1) and examples of tools you can use for the different types of testing. The next trick is to know which tests your project needs and when to do them. In this chapter, we’ll walk you through a real-life example of an agile project that used tests from all four agile testing quadrants.


Figure 12-1 Agile Testing Quadrants


A System Test Example

The following story is about one organization’s success in testing its whole system using a variety of home-grown and open source tools. Janet worked with this team, and Paul Rogers was the primary test architect. This is Paul’s story.


The Application

The system solves the problem of monitoring remote oil and gas production wells. The solution combines a remote monitoring device, which transmits data and receives adjustments, with a central monitoring station, the two communicating over a satellite channel using a proprietary protocol.

Figure 12-2 shows the architecture of the Remote Data Monitoring system. The measurement devices on the oil wells, Remote Terminal Units (RTUs), use a variety of protocols to communicate with the remote monitoring device. The data from each RTU is transmitted via satellite to servers located at the client's main office and is then made available to users through a web interface. A notification system, via email, fax, or phone, alerts users when a particular reading is outside of normal operational limits. A Java Message Service (JMS) feed and web services are also available to ease integration with clients' other applications.

Figure 12-2 Remote data monitoring system architecture

The software application was a huge legacy system that had few unit tests. The team was slowly rebuilding the application with new technology.


The Team and the Process

The team consisted of four software programmers, two firmware programmers, three to four testers, a product engineer, and an off-site manager. The “real” customer was in another country. The development team used XP practices, including pair programming and TDD. The customer team used the defect-tracking system for the backlog, but most of the visibility into the stories came through index cards. Story cards were used during iteration planning meetings, and a task board tracked progress.

Scrum was used as the outside reporting mechanism to the organization and the customers. The team worked in two-week iterations and released the product about every four months, although the release cadence varied depending on the functionality being developed. Retrospectives were held as part of every iteration planning session, and action was taken on the top three priority items discussed.

Continuous integration through CruiseControl provided constant builds for the testers and for the demonstrations held at the end of every iteration. Each tester had a local environment for testing the web application, but there were also three shared test environments. The first was used to test new stories and was updated as needed with the latest build. The second was for testing client-reported issues, because it ran the last version released to the clients. The third was a full stand-alone test environment for testing full deploys, communication links, and the firmware and hardware. It was on this environment that we ran our load and reliability tests.


Tests Driving Development

The tests driving development included unit tests and acceptance tests.


Unit Tests

Unit tests are technology-facing tests that support programming. Those that are developed as part of test-driven development not only help the programmer get the story right but also help to design the system.

Chapter 7, “Technology-Facing Tests that Support the Team,” explains more about unit testing and TDD.

The programmers on the Remote Data Monitoring project bought into TDD and pair programming wholeheartedly. All new functionality was developed and tested using pair programming. All stories delivered to the testers were supported by unit tests, and very few bugs were found after coding was complete. The bugs that were found were generally integration-related.

However, when the team first started, the legacy system had few unit tests to support refactoring. As process changes were implemented, the developers decided to start fixing the problem. Every time they touched a piece of code in the legacy system, they added unit tests and refactored the code as necessary. Gradually, the legacy system became more stable and was able to withstand major refactoring when it was needed. We experienced the power of unit tests!


Acceptance Tests

The product engineer (the customer proxy) took ownership of creating the acceptance tests. These tests varied in format depending on the actual story. Although he struggled at first, the product engineer got pretty good at giving the tests to the programmers before they started coding. The team created a test template, which evolved over time, that met both the programmers’ and the testers’ needs.

The tests were sometimes informally written, but they included data, required setup if it wasn’t immediately obvious, different variations that were critical to the story, and some examples. The team found that examples helped clarify the expectations for many of the stories.

The test team automated the acceptance tests as soon as possible, usually at the same time as the stories were being developed. Of course, the product engineer was available to answer any questions that came up during development.

These acceptance tests served three purposes. First, they were business-facing tests that supported development, because they were given to the team before coding started. Second, the test team used them as the basis of automation that fed into the regression suite and suggested ideas for future exploratory testing. Third, they confirmed that the implementation met the needs of the customer; the product engineer performed this solution verification.

See Chapter 8, “Business-Facing Tests that Support the Team,” for more about driving development with acceptance tests.


Automation

Automation involved the functional test structure, web services, and embedded testing.


The Automated Functional Test Structure

Ruby with Watir was the tool of choice for the functional automation framework, because it offered the flexibility and opportunity for customization that the system under test required.

The automated test code included three distinct layers, shown in Figure 12-3. The lowest layer, Layer 1, included Watir and other classes, such as loggers that wrote to the log files.

Figure 12-3 Functional test layers

The second layer, Layer 2, was the page access layer, where classes that contained code to access individual web pages lived. For example, in the application under test (AUT) there was a login page, a create user page, and an edit user page. Classes written in Ruby contained code that could perform certain functions in the AUT, such as a class that logs into the application, a class to edit a user, and a class to assign access rights to a user. These classes contained no data. For example, the log-in class didn’t know what username to log in with.

The third and top layer, Layer 3, was the test layer, and it contained the data needed to perform a test. It called Layer 2 classes, which in turn called Layer 1.

For example, the actual test would call LogIn, passing Janet as the username and Passw0rd as the password, which meant many different data sets could be fed in easily:

LogIn('Janet', 'Passw0rd')


Layer 2 also knew how to handle the error messages the application generated. For example, when an invalid username was entered on the login page, the login class detected the error message and then passed the problem back to the tests in Layer 3.

This meant the same Layer 2 classes could be used both for happy path testing and for negative testing. In the negative case, Layer 3 expected Layer 2 to return a failure and then checked that the test failed for the correct reason by accessing the error messages that Layer 2 scraped from the browser.
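To make the layering concrete, here is a minimal sketch in the spirit of that framework, written against present-day Watir rather than the version the team used; the LoginPage class, element IDs, error text, and URL are illustrative assumptions, not the project's actual code.

require 'watir' # Layer 1: Watir drives the browser through the DOM

# Layer 2: a page access class. It knows how to operate the login page
# but holds no test data of its own. (Element IDs are made up.)
class LoginPage
  def initialize(browser)
    @browser = browser
  end

  # Returns nil on success, or the scraped error message on failure.
  def log_in(username, password)
    @browser.text_field(id: 'username').set(username)
    @browser.text_field(id: 'password').set(password)
    @browser.button(id: 'login').click
    error = @browser.div(id: 'error')
    error.present? ? error.text : nil
  end
end

# Layer 3: the test supplies the data and interprets the outcome.
browser = Watir::Browser.new
browser.goto 'http://aut.example.com/login'
login = LoginPage.new(browser)

# Happy path: expect no error back from Layer 2.
raise 'login failed' unless login.log_in('Janet', 'Passw0rd').nil?

# Negative path: expect a failure, and check it failed for the right reason.
message = login.log_in('BadUser', 'wrong')
raise 'expected an error message' unless message =~ /invalid username/i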

The functional tests used Ruby with Watir to drive the browser through the DOM and could access almost all of the objects on the page. The automated test suite was run on nightly builds to give the team consistent feedback on high-level application behavior. This was a lifesaver as the team continued to build out the unit tests. This architecture efficiently accommodated the business-facing tests that support the team.


Web Services

Web services were used by clients to interface with some of their other applications. The development group used Ruby to write a client to test each service they developed. For these tests, Ruby’s unit testing framework, Test::Unit, was used.

The web services tests were expanded by the test team to cover more than 1,000 different test cases, and took just minutes to run. They gave the team an amazing amount of coverage in a short period of time.
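A Test::Unit test case for one of the services might have looked something like the following sketch. The WellReadingsClient here is a hypothetical stand-in for the team's hand-rolled Ruby client, stubbed out so the example runs on its own; the real client spoke to the web services over the wire.

require 'test/unit'

# Hypothetical stand-in for the team's hand-rolled service client.
class WellReadingsClient
  class NotFound < StandardError; end

  def initialize(endpoint)
    @endpoint = endpoint
  end

  # The real client would call the web service; this stub fakes one well.
  def latest_reading(well_id:)
    raise NotFound, "no such well #{well_id}" unless well_id == 'W-100'
    { well_id: well_id, value: 42.7 }
  end
end

class WellReadingsServiceTest < Test::Unit::TestCase
  def setup
    @client = WellReadingsClient.new('http://aut.example.com/services')
  end

  def test_returns_latest_reading_for_known_well
    reading = @client.latest_reading(well_id: 'W-100')
    assert_not_nil reading
    assert_equal 'W-100', reading[:well_id]
  end

  def test_rejects_unknown_well
    assert_raise(WellReadingsClient::NotFound) do
      @client.latest_reading(well_id: 'W-999')
    end
  end
end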

The team demonstrated the test client to the customers, who decided to use it as well. However, the customers subsequently decided it didn’t work for them, so they started writing their own tests, albeit in a much more ad hoc fashion using Ruby.

They used IRB, the interactive interface provided by Ruby, and fed values in an exploratory method. It gave the customer an interactive environment for discovering what worked and what didn’t. It also let them get familiar with Ruby and how we were testing, and it gave them much more confidence in our tests. Much of their User Acceptance Testing was done using IRB.
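An exploratory IRB session against a client like the one sketched above might have looked like this (the values and output are, again, illustrative):

$ irb -r ./well_readings_client
irb(main):001:0> client = WellReadingsClient.new('http://aut.example.com/services')
irb(main):002:0> client.latest_reading(well_id: 'W-100')
=> {:well_id=>"W-100", :value=>42.7}
irb(main):003:0> client.latest_reading(well_id: 'W-999')
WellReadingsClient::NotFound: no such well W-999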

Three different slants on the web services tests served three different purposes. The programmers used the test client to drive their development and verify each service. The testers used it to critique the product in a very efficient automated manner, and the customers were able to test the web services delivered to them using IRB.


Embedded Testing

In addition to the web interface, the Remote Data Monitoring (RDM) system included a small embedded device that communicated with measuring equipment using various protocols. The team developed various tests in Ruby for part of the device's administrative interface, a command-line system similar to FTP.

These data-driven tests were contained in an Excel spreadsheet. A Ruby script would read commands from Excel using the OLE interface and send them to the embedded device. The script would then compare the response from the device with the expected result, also held in the spreadsheet. Errors were highlighted in red. These automated tests took approximately one hour to run, while doing the same tests manually would take eight hours.
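The skeleton of such a script might look like the sketch below, which uses Ruby's real win32ole bridge to Excel; the spreadsheet path, column layout, and DeviceSession class are illustrative assumptions.

require 'win32ole' # Ruby's OLE bridge to Excel, available on Windows

RED = 3 # Excel's ColorIndex value for red

excel = WIN32OLE.new('Excel.Application')
book  = excel.Workbooks.Open('C:\\tests\\rtu_commands.xls')
sheet = book.Worksheets(1)

device = DeviceSession.open('10.0.0.5') # hypothetical command-line session

row = 2 # row 1 holds the column headers
until sheet.Cells(row, 1).Value.nil?
  command  = sheet.Cells(row, 1).Value
  expected = sheet.Cells(row, 2).Value
  actual   = device.send_command(command)
  sheet.Cells(row, 3).Value = actual
  # Highlight mismatches in red so failures stand out in the spreadsheet.
  sheet.Cells(row, 3).Interior.ColorIndex = RED unless actual == expected
  row += 1
end

book.Save
excel.Quit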

While this provided a lot of test coverage, it didn’t actually test the reason the device was used, which was to read data from RTUs. A simulator was written in Ruby with a FOX (FXRuby) GUI. This allowed mock data to be fed into the device. Because the simulator could be controlled remotely, it was incorporated into automated tests that exercised the embedded device’s ability to read data, respond to error conditions, and generate alarms when the input data exceeded a predetermined threshold.
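A simulator-driven test could then exercise the alarm path end to end. Every name in this sketch is hypothetical, since the simulator's actual API wasn't published:

# Feed an out-of-limits reading into the device and confirm it alarms.
simulator = RtuSimulator.connect('sim-host', 9000) # hypothetical API
device    = DeviceSession.open('10.0.0.5')

simulator.set_reading(channel: 1, value: 250.0) # assume the threshold is 200
sleep 5                                         # allow one poll cycle
alarms = device.send_command('SHOW ALARMS')
raise 'expected a high-value alarm' unless alarms =~ /CHANNEL 1 HIGH/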

Embedded testing is highly technical, but with the power provided by the simulator, the whole team was able to participate in testing the device. The simulator was written to support the test team, but the firmware programmer found it valuable and used it to help with his development efforts as well. That was a positive, unexpected side effect. Quadrant 2 tests that support the team may incorporate a variety of technologies, as they did in this project.


Critiquing the Product with Business-Facing Tests

The business-facing tests that critique the product are outlined in this section.


Exploratory Testing

The automated tests were simple and easy for everyone on the team to use. Individual test scripts could be run to set up specific conditions, allowing effective exploratory testing to be done without having to spend a lot of time manually entering data. This worked for all three test frameworks: functional, web services, and embedded.

Exploratory testing, usability testing, and other Quadrant 3 tests are discussed in Chapter 10, “Business-Facing Tests that Critique the Product.”

The team performed exploratory testing to supplement the automated test suites and get the best coverage possible. This human interaction with the system found issues that automation didn’t find.

Usability testing was not a critical requirement for the system, but the testers watched to make sure the interface made sense and flowed smoothly. The testers used exploratory testing extensively to critique the product. The product engineer also used exploratory testing for his solution verification tests.


Testing Data Feeds

As shown in Figure 12-2, the data from the system was available on a JMS queue as well as through the web browser. To test the JMS queue, the development group wrote a Java proxy that connected to a queue and printed any arriving data to the console. They also wrote a Ruby client that received this data via a pipe, which made it available to the Ruby automated test system.
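The pipe arrangement is easy to sketch: Ruby launches the Java proxy and reads whatever the proxy prints. The jar and class names here are invented for illustration:

readings = []
IO.popen('java -cp jms-proxy.jar JmsQueuePrinter') do |pipe|
  pipe.each_line do |line|
    readings << line.chomp
    break if readings.size >= 10 # collect enough data for the assertions
  end
end
raise 'no data arrived on the JMS queue' if readings.empty?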

Emails were automatically sent when alarm conditions were encountered. The alarm emails contained both plain text email and email with attachments. The MIME attachments contained data useful for testing, so a Ruby email client that supported attachments was written.
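The team wrote its own client, but the idea is easy to show with today's mail gem standing in for it; the captured filename and payload string below are illustrative:

require 'mail' # present-day gem standing in for the hand-written client

raw  = File.read('alarm_notification.eml') # a captured alarm email
mail = Mail.read_from_string(raw)

# The MIME attachment carries the data the tests assert against.
attachment = mail.attachments.first
raise 'alarm email had no attachment' if attachment.nil?
data = attachment.body.decoded
raise 'unexpected alarm payload' unless data.include?('CHANNEL 1 HIGH')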


The End-to-End Tests

Quadrant 3 includes end-to-end functional testing that demonstrates the desired behavior of every part of the system. From the beginning, it was apparent that correct operation of the whole Remote Data Monitoring system could only be determined when all components were used. Once the simulator, embedded device tests, web services tests, and application tests were written, it was a relatively simple matter to combine them to produce an automated test of the entire system. Once again, Excel spreadsheets were used to hold the test data, and Ruby classes were written to access the data and expected results.

The end-to-end tests were complicated by the unpredictable response of the satellite transmission path. A predefined timeout value was set, and if the test’s actual value did not match the expected value, the test would cycle until it matched or the timeout was reached. When the timeout expired, the test was deemed to have failed. Most transmission issues were found and eliminated this way; they would have been highly unlikely to surface in manual testing, because they were sporadic.
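The retry-until-timeout pattern reduces to a small helper. This is a minimal sketch; read_current_value stands in for whatever the test used to fetch the value from the web interface:

# Poll the block until it returns true or the timeout expires.
def eventually(timeout: 300, interval: 10)
  deadline = Time.now + timeout
  loop do
    return true if yield
    return false if Time.now >= deadline
    sleep interval
  end
end

# Keep re-reading the displayed value until it matches what the
# simulator fed in, or give up and fail the test.
ok = eventually(timeout: 600) { read_current_value('W-100') == 250.0 }
raise 'value never arrived over the satellite link' unless ok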

Because end-to-end tests such as these can be fragile, they may not be kept as part of the automated regression suite. If all of the components of the system are well covered with automated regression tests, automated end-to-end tests might not be necessary. However, due to the nature of this system, it wasn’t possible to do a full test without automation.


User Acceptance Testing

User Acceptance Testing (UAT) is the final critique of the product by the customer, who should have been involved in the project from the start. In this example, the real customer was in France, thousands of miles from the development team, so the team had to be inventive to make UAT succeed. The customer came to work with the team members a couple of times during the year and so was able to interact with the team a little more easily than if they’d never met.

After the team introduced agile development, Janet went to France to facilitate the first UAT at the customer site. It worked fairly well, and the release was accepted after a few critical issues were fixed. The team learned a lot from that experience.

The second UAT sign-off was done in-house. To prepare, the team worked with the customer to develop a set of tests the customer could perform to verify new functionality. The customer was able to test the application throughout the development cycle, so UAT didn’t produce any issues. The customer came, ran through the tests, and signed off in a day.

We cannot stress enough the importance of working with the customer. Even though the product engineer was the proxy for the customer, it was crucial to get face time with the actual customer. The relationship that had been built over time was critical to the success of the project. Janet strongly believes that the UAT succeeded because the customer knew what the team was doing along the way.


Reliability

Reliability, one of the “ilities” addressed by Quadrant 4 tests, was a critical attribute of this system because it monitored remote sites that were often inaccessible, especially in winter. The simulator that was developed for testing the embedded system was set up in a separate environment and run for weeks at a time, measuring the stability (yet another “ility”) of the whole system. Corrections to the system design could then be planned and coded as needed. This is a good example of why you shouldn’t wait until the end of the project to do the technology-facing tests that critique the product.

See Chapter 11, “Critiquing the Product Using Technology-Facing Tests,” for more about Quadrant 4 tests such as reliability testing.


Documentation

The approach taken to documentation is presented in this section.


Documenting the Test Code

During development, it became clear that a formal documentation system was needed for the test code. The simplest solution was to use RDoc, which is similar to Javadoc but for Ruby. RDoc extracted tagged comments from the source code and generated web pages with details of files, classes, and methods. The documents were generated every night using a batch file and were available to the whole team, which made it easy to find which test fixtures already existed.
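RDoc works from ordinary comments placed just above classes and methods, so documenting a fixture could be as simple as the snippet below (the class and method are illustrative). A nightly batch file then ran something like rdoc --op doc along with the source directories to regenerate the pages.

# Drives the edit-user page of the AUT.
#
# Example:
#   page = EditUserPage.new(browser)
#   page.assign_role('Janet', :administrator)
class EditUserPage
  # Grants +role+ to +username+; raises if the user doesn't exist.
  def assign_role(username, role)
    # ...
  end
end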

Documenting the test code in this way made it easy to find what we were testing and what each test did. It was powerful and easy to use.


Reporting the Test Results

Although comprehensive testing was being performed, there was little evidence of this outside of the test team. The logs generated during automated tests provided good information to track down problems but were not suitable for a wider audience.

Chapter 16, “Hit the Ground Running,” gives more examples of ways teams report test results.

To raise the visibility of the tests being performed, the test team developed a logging and reporting system using Apache, PHP, and MySQL. When a test ran, it logged the result into the database. A web front end allowed project stakeholders to see which tests were run, the pass/fail rate, and other information.
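The Ruby side of such a scheme is a one-line insert per test result. This sketch uses the present-day mysql2 driver rather than whatever the team used in-house, and the table schema is invented for illustration:

require 'mysql2'

DB = Mysql2::Client.new(host: 'reports', username: 'tests',
                        password: 'secret', database: 'test_results')

# Record one test outcome so the PHP front end can report on it.
def log_result(suite, name, passed, message = nil)
  DB.query(<<~SQL)
    INSERT INTO results (suite, test_name, passed, message, run_at)
    VALUES ('#{DB.escape(suite)}', '#{DB.escape(name)}',
            #{passed ? 1 : 0}, '#{DB.escape(message.to_s)}', NOW())
  SQL
end

log_result('web_services', 'test_returns_latest_reading', true)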

Chapter 18, “Coding and Testing,” also discusses uses of big visible charts.

We also believed in making our progress visible (good or bad) as much as possible. To this end we created charts and graphs along the way and posted them in common areas. Figure 12-4 shows some of the charts we created.


Figure 12-4 Big visible charts used by the remote monitoring system project team


Using the Agile Testing Quadrants

This example demonstrates how testing practices from all four agile testing quadrants are combined during the life of a complex development project to achieve successful delivery. The experience of this team illustrates many of the principles we have been emphasizing. The whole team, including programmers, testers, customer proxy, and the actual customer, contributed to efforts to solve automation problems. They experimented with different approaches. They combined their homegrown and open source tools in different ways to perform testing at all levels, from the unit level to end-to-end system testing and UAT. The project’s successful delivery speaks to the effectiveness of its testing approach.

As you plan each epic, release, or iteration, work with your customer team to understand the business priorities and analyze risks. Use the quadrants to help identify all of the different types of testing that will be needed and when they should be performed. Is performance the most important criterion? Is the highest priority the ability to interface with other systems? Is usability perhaps the most important aspect?

Invest in a test architecture that accommodates the complexity of the system under test. Plan to obtain necessary resources and expertise at the right time for specialized tests. For each type of test, your team should work together to choose tools that solve your testing problems. Use retrospectives to continually evaluate whether your team has the resources it needs to succeed and whether all necessary tests are being specified in time to serve their purpose and automated appropriately.

Does end-to-end testing seem impossible to do? Is your team finding it hard to write unit tests? As Janet’s team did, get everyone experimenting with different approaches and tools. The quadrants provide a framework for productive brainstorming on creative ways to achieve the testing that will let the team deliver value to the business.


Summary

In this chapter, we described a real project that used tests from all four agile testing quadrants to overcome difficult testing challenges. We used examples from this project to show how teams can succeed with all types of testing. Some important lessons from the Remote Data Monitoring System project are:

The whole team should choose or create tools that solve each testing problem.

Combinations of common business tools such as spreadsheets and custom-written test scripts may be needed to accomplish complex tests.

Invest time in building the right test architecture that works for all team members.

Find ways to keep customers involved in all types of testing, even if they’re in a remote location.

Report test results in a way that keeps all stakeholders informed about the iteration and project progress.

Don’t forget to document . . . but only what is useful.

Think about all four quadrants of testing throughout your development cycles.

Use the lessons learned while critiquing the product to drive development in subsequent iterations.

