Chapter 13 Why We Want to Automate Tests and What Holds Us Back



Why do we automate testing, the build process, deployment, and other tasks? Agile teams focus on always having working software, which enables them to release production-ready software as often as needed. Achieving this goal requires constant testing. In this chapter, we look at reasons we want to automate and the challenges that make it hard to get traction on automation.


Why Automate?

There are multiple reasons to automate beyond our simply telling you that you need automation to be successful with agile. Our list includes the following:

Manual testing takes too long.

Manual processes are error prone.

Automation frees people to do their best work.

Automated regression tests provide a safety net.

Automated tests give feedback early and often.

Tests and examples that drive coding can do more.

Tests provide documentation.

Automation can be a good return on investment.

Let’s explore each of these in a little more detail.


Manual Testing Takes Too Long

The most basic reason a team wants to automate is that it simply takes too long to complete all of the necessary testing manually. As your application gets bigger and bigger, the time to test everything grows longer and longer, sometimes exponentially, depending on the complexity of the AUT (application under test).

Agile teams are able to deliver production-ready software at the end of each short iteration by having production-ready software every day. Running a full suite of passing regression tests at least daily is an indispensable practice, and you can’t do it with manual regression testing. If you don’t have any automation now, you’ll have to regression test manually, but don’t let that stop you from starting to automate it.

If you execute your regression testing manually, it takes more and more time every day, every iteration. For testing to keep pace with coding, either the programmers have to take time to help with manual regression testing, or the team has to hire more testers. Inevitably, both technical debt and frustration will grow.

If the code doesn’t even have to pass a suite of automated unit-level regression tests, the testers will probably spend much of their time reproducing, researching, and reporting simple bugs that unit tests would have caught, and less time finding potentially serious system-level bugs. In addition, because the team isn’t doing test-first development, the code design is more likely to be less testable and may not provide the functionality desired by the business.

Manually testing a number of different scenarios can take a lot of time, especially if you’re keying inputs into a user interface. Setting up data for a variety of complex scenarios can be an overwhelming task if you have no automated way to speed it up. As a result, only a limited number of scenarios may be tested, and important defects can be missed.
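As a hedged sketch of the kind of automated data setup described above (the record fields and scenario names here are invented for illustration, not taken from any particular project), a small helper that builds variants from shared defaults makes complex scenarios cheap to create:

```ruby
# Hypothetical example: building test-data scenarios in code instead of
# keying each one into a user interface by hand.
DEFAULT_EMPLOYEE = {
  name:   "Pat Example",
  salary: 50_000,
  status: :active
}.freeze

# Build an employee record, overriding only the fields a scenario cares about.
def build_employee(overrides = {})
  DEFAULT_EMPLOYEE.merge(overrides)
end

# Each scenario variant is now a single line stating only what differs.
SCENARIOS = {
  terminated:  build_employee(status: :terminated),
  high_earner: build_employee(salary: 500_000),
  zero_salary: build_employee(salary: 0)
}
```

Each new scenario costs one line describing only what differs from the defaults, so widening the set of tested scenarios no longer means hours of manual data entry.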


Manual Processes Are Error Prone

Manual testing gets repetitive, especially if you’re following scripted tests, and manual tests get boring very quickly. It’s way too easy to make mistakes and overlook even simple bugs. Steps and even entire tests will be skipped. If the team’s facing a tight deadline, there’s a temptation to cut corners, and the result is a missed problem.

Because manual testing is slow, you might still be testing at midnight on the last day of the iteration. How many bugs will you notice then?

Automated builds, deployment, version control, and monitoring also go a long way toward mitigating risk and making your development process more consistent. Automating those scripted tests removes that source of error, because each test runs exactly the same way every time.

The adage of “build once, deploy to many” is a tester’s dream come true. Automating the build and deploy processes allows you to know exactly what you are testing in any given environment.


Automation Frees People to Do Their Best Work

Writing code test-first helps programmers understand requirements and design code accordingly. Having continual builds run all of the unit tests and the functional regression tests means more time to do interesting exploratory testing. Automating the setup for exploratory tests means even more time to probe into potentially weak parts of the system. Because you didn’t spend time executing tedious manual scripts, you have the energy to do a good job, thinking of different scenarios and learning more about how the application works.

If we’re thinking constantly about how to automate tests for a fix or new feature, we’re more likely to think of testability and a quality design rather than a quick hack that might prove fragile. That means better code and better tests.

Automating tests can actually help with consistency across the application.

Janet’s Story

Jason (one of my fellow testers) and I were working on some GUI automation scripts using Ruby and Watir, and were adding constants for button names to the tests. We quickly realized that the buttons on each page were not consistently named. We were able to get them changed, resolving those consistency issues very quickly, and we gained an easy way to enforce the naming conventions.

—Janet
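The idea behind Janet’s story can be sketched in plain Ruby, without a browser (a real Watir suite would drive the actual pages; the page and button names below are hypothetical):

```ruby
# Hypothetical button-name constants collected while automating GUI tests.
# Centralizing them makes naming inconsistencies jump out immediately.
BUTTON_NAMES = {
  "SearchPage"  => "btnSubmit",
  "ProfilePage" => "btnSubmit",
  "AdminPage"   => "submit_button" # inconsistent -- flagged below
}

# Enforce the (assumed) convention that buttons are named "btnXxx";
# returns the pages whose buttons break the convention.
def naming_violations(names)
  names.reject { |_page, name| name.start_with?("btn") }.keys
end
```

Here `naming_violations(BUTTON_NAMES)` returns `["AdminPage"]`, so the same constants that drive the tests double as an automated check on the naming convention.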


See Chapter 9, “Toolkit for Business-Facing Tests that Support the Team,” Chapter 12, “Summary of Testing Quadrants,” and Chapter 14, “An Agile Test Automation Strategy,” for more information about Ruby and Watir.

Books such as Pragmatic Project Automation [2004] can guide you in automating daily development chores and free your team for important activities such as exploratory testing.

Giving Testers Better Work

Chris McMahon described the benefits he’s experienced due to regression test automation in a posting to the agile-testing mailing list in November 2007:

Our UI regression test automation has grown 500% since April [of 2007]. This allows us to focus the attention of real human beings on more interesting testing.

Chris went on to explain, “Now that we have a lot of automation, we have the leisure to really think about what human tests need doing. For any testing that isn’t trivial, we have just about institutionalized a test-idea brainstorming session before beginning execution.” Usually, Chris and his teammates pair either two testers or one tester and a developer. Sometimes a tester generates ideas and gets them reviewed, via a mindmap, a wiki page, or a list in the release notes. Chris observed, “We almost always come up with good test ideas by pairing that wouldn’t have been found by either individual independently.”

Referring to their frequent releases of significant features, Chris says, “Thanks to the good test automation, we have the time to invest in making certain that the whole product is attractive and functional for real people. Without the automation, testing this product would be both boring and stupid. As it is, we testers have significant and interesting work to do for each release.”

We agree with Chris that the most exciting part of test automation is the way it expands our ability to improve the product through innovative exploratory testing.

Projects succeed when good people are free to do their best work. Automating tests appropriately makes that happen. Automated regression tests that detect changes to existing functionality and provide immediate feedback are a primary component of this.


Automated Regression Tests Provide a Safety Net

Most practitioners who’ve been in the software business for a few years know the feeling of dread when they’re faced with fixing a bug or implementing a new feature in poorly designed code that isn’t covered by automated tests. Squeeze one end of the balloon, and another part of it bulges out. Will it break?

Knowing the code has sufficient coverage by automated regression tests gives a great feeling of confidence. Sure, a change might produce an unexpected effect, but we’ll know about it within a matter of minutes if it’s at the unit level, or hours if at a higher functional level. Making the change test-first means thinking through the changed behavior before writing the code and writing a test to verify it, which adds to that confidence.

Janet’s Story

I recently had a conversation with one of the testers on my team who questioned the value of automated tests. My first answer was “It’s a safety net” for the team. However, he challenged that premise. Don’t we just become reliant on the tests rather than fixing the root cause of the problem?

It made me think a bit more about my answer. He was right in one sense; if we become complacent about our testing challenges and depend solely on automated tests to find our issues, and then just fix them enough for the test to pass, we do ourselves a disservice.

However, if we use the tests to identify problem areas and fix them the right way or refactor as needed, then we are using the safety net of automation in the right way. Automation is critical to the success of an agile project, especially as the application grows in size.

—Janet

When they don’t have an automated suite of tests acting as a safety net, the programmers may start viewing the testers themselves as a safety net. It’s easy to imagine that Joe Programmer’s thought process goes like this: “I ought to go back and add some automated unit tests for formatEmployeeInfo, but I know Susie Tester is going to check every page where it’s used manually. She’ll see if anything is off, so I’d just be duplicating her effort.”

It’s nice that a programmer would think so highly of the tester’s talents, but Joe is headed down a slippery slope. If he doesn’t automate these unit tests, which other tests might he skip? Susie is going to be awfully busy eyeballing all those pages.

Teams that have good coverage from automated regression tests can make changes to the code fearlessly. They don’t have to wonder, “If I change this formatEmployeeInfo module, will I break something in the user interface?” The tests will tell them right away whether they broke anything. They can go a lot faster than teams relying exclusively on manual testing.
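To make that concrete, here is a minimal sketch of the kind of unit tests Joe skipped, using Ruby’s bundled Minitest. The chapter only names the module, so the body of `format_employee_info` below is invented for illustration:

```ruby
require "minitest/autorun"

# Hypothetical implementation -- invented for illustration only.
def format_employee_info(name, department)
  "#{name.strip} (#{department.upcase})"
end

# Fast unit tests: once these run in the automated build, Susie no
# longer has to eyeball every page that uses the formatter.
class FormatEmployeeInfoTest < Minitest::Test
  def test_formats_name_and_department
    assert_equal "Ada Lovelace (ENGINEERING)",
                 format_employee_info("Ada Lovelace", "Engineering")
  end

  def test_strips_stray_whitespace_from_name
    assert_equal "Ada Lovelace (ENGINEERING)",
                 format_employee_info("  Ada Lovelace  ", "Engineering")
  end
end
```

A handful of tests like these run in seconds on every check-in, which is exactly the duplication-of-effort argument Joe got backwards.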


Automated Tests Give Feedback Early and Often

After an automated test for a piece of functionality passes, it must continue to pass until the functionality is intentionally changed. When we plan changes in the application, we change the tests to accommodate them. When an automated test fails unexpectedly, a regression defect may have been introduced by a code change. Running an automated suite of tests every time new code is checked in helps ensure that regression bugs will be caught quickly. Quick feedback means the change is still fresh in some programmer’s mind, so troubleshooting will go more quickly than if the bug weren’t found until some testing phase weeks later. Failing fast means bugs are cheaper to fix.

Automated tests, run regularly and often, act as your change detector. They give the team a way to know what has changed since the last build. For example, were there any negative side effects with the last build? If your automation suite has sufficient coverage, it can detect far-reaching effects that manual testers could never hope to find.

More often than not, if regression tests are not automated, they won’t get run every iteration, let alone every day. The problem arises very quickly during the end game, when the team needs to complete all of the regression tests. Bugs that would have been caught early are found late in the game. Many of the benefits of testing early are lost.


Tests and Examples that Drive Coding Can Do More

In Chapter 7, “Technology-Facing Tests that Support the Team,” we talked about using tests and examples to drive coding. We’ve talked about how important it is to drive coding with both unit and customer tests. We also want to stress that if these tests are automated, they become valuable for a different reason. They become the base for a very strong regression suite.

Lisa’s Story

After my team got a handle on unit tests, refactoring, continuous integration, and other technology-facing practices, we were able to catch regression bugs and incorrectly implemented functionality during development.

Of course, this didn’t mean our problems were completely solved; we still sometimes missed or misunderstood requirements. However, having an automation framework in place enabled us to start focusing on doing a better job of capturing requirements in up-front tests. We also had more time for exploratory testing. Over time, our defect rate declined dramatically, while our customers’ delight in the delivered business value went up.

—Lisa

TDD and SDD (story test-driven development) keep teams thinking test-first. During planning meetings, they talk about the tests and the best way to do them. They design code to make the tests pass, so testability is never an issue. The automated test suite grows along with the code base, providing a safety net for constant refactoring. It’s important that the whole team practices TDD and consistently writes unit tests, or the safety net will have holes.

The bibliography contains an article by Jennitta Andrea [2008] on team etiquette for TDD.

The team also doesn’t accrue too much technical debt, and their velocity is bound to be stable or even increase over time. That’s one of the reasons why the business managers should be happy to let software teams take the time to implement good practices correctly.


Tests Are Great Documentation

In Part III, we explained how agile teams use examples and tests to guide development. When tests that illustrate examples of desired behavior are automated, they become “living” documentation of how the system actually works. It’s good to have narrative documentation about how a piece of functionality works, but nobody can argue with an executable test that shows in red and green how the code operates on a given set of inputs.

It’s hard to keep static documentation up to date, but if we don’t update our automated tests when the system changes, the tests fail. We need to fix them to keep our build process “green.” This means that automated tests are always an accurate picture of how our code works. That’s just one of the ways our investment in automation pays off.


ROI and Payback

All of the reasons just presented contribute to the bottom line and the payback of automation. Automation provides consistency to a project and gives the team the opportunity to test differently and push the limits of the application. Automation means extra time for testers and other team members to concentrate on getting the right product out to market in a timely manner.

An important component of test automation payback is the way defects are fixed. Teams that rely on manual tests tend to find bugs long after the code containing the bug is written. They get into the mode of fixing the “bug of the day,” instead of looking at the root cause of the bug and redesigning the code accordingly. When programmers run the automated test suite in their own sandbox, the automated regression tests find the bugs before the code is checked in, so there’s time to correct the design. That’s a much bigger payback, and it’s how you reduce technical debt and develop solid code.


Barriers to Automation—Things that Get in the Way

Back in 2001, Bret Pettichord [2001] listed seven problems that plague automation. They are still applicable, but they describe teams that do not incorporate automation as part of their development process. And of course, because you are doing agile, you are doing that, right?

We would like to think that everyone has included automation tasks as part of each story, but the reality is that you probably wouldn’t be reading this section if you had it all under control. We’ve included Bret’s list to show what problems you probably have if you don’t include automation as part of the everyday project deliverables.


Bret’s List

Bret’s list of automation problems looks like this:

Only using spare time for test automation doesn’t give it the focus it needs.

There is a lack of clear goals.

There is a lack of experience.

There is high turnover, because you lose any experience you may have.

Automation is often chosen as a reaction to desperation, in which case it can be more of a wish than a realistic proposal.

There can be a reluctance to think about testing; the fun is in the automating, not in the testing.

Focusing on solving the technology problem can cause you to lose sight of whether the result meets the testing need.

We think there are some other problems that teams run into when trying to automate. Even if we do try to include automation in our project deliverables, there are other barriers to success. In the next section, we present our list of obstacles to successful test automation.


Our List

Our list of barriers to successful test automation is based on the experiences we’ve had with our own agile teams, as well as the experiences of other teams we know.

Programmers’ attitude

The “Hump of Pain”

Initial investment

Code that’s always in flux

Legacy code

Fear

Old habits


Programmers’ Attitude—“Why Automate?”

Programmers who are used to working in a traditional environment, where some separate, unseen QA team does all of the testing, may not even give functional test automation a lot of thought. Some programmers don’t bother to test much because they have the QA team as a safety net to catch bugs before release. Long waterfall development cycles make testing even more remote to programmers. By the time the unseen testers are doing their job, the programmers have moved on to the next release. Defects go into a queue to be fixed later at great expense, and nobody is accountable for having produced them. Even programmers who have adopted test-driven development and are used to automating tests at the unit level may not think about how acceptance tests beyond the unit level get done.

Lisa’s Story

I once joined an XP team of skilled programmers practicing test-driven development that had a reasonable suite of unit tests running in an automated build process. They had never automated any business-facing tests, so one day I started a discussion about what tools they might use to automate functional business-facing regression tests. The programmers wanted to know why we needed to automate these tests.

At the end of the first iteration, when everyone was executing the acceptance tests by hand, I pointed out that there would be all these tests to do again in the next iteration as regression tests, in addition to the tests for all of the new stories. In the third iteration, there would be three times as many tests. To a tester, it seems ridiculously obvious, but sometimes programmers need to do the manual tests before they understand the compulsion to automate them.

—Lisa

Education is the key to getting programmers and the rest of the team to understand the importance of automation.


The “Hump of Pain” (The Learning Curve)

It’s hard to learn test automation, especially to learn how to do it in a way that produces a good return on the resources invested in it. A term we’ve heard Brian Marick use to describe the initial phase of automation that developers (including testers) have to overcome is the “hump of pain” (see Figure 13-1). This phrase refers to the struggle that most teams go through when adopting automation.

Figure 13-1 Hump of pain of the automation learning curve

New teams are often expected to adopt practices such as TDD and refactoring, which are difficult to learn. Without good coaching, plenty of time to master new skills, and strong management support, they’re easily discouraged. If they have extra obstacles to learning, such as having to work with poorly designed legacy code, it may seem impossible to ever get traction on test automation.

Lisa’s Story

My team at ePlan Services originally tried to write unit tests for a legacy system that definitely wasn’t written with testing in mind. They found this to be a difficult, if not impossible, task, so they decided to code all new stories in a new, testable architecture. Interestingly, about a year later, they discovered it wasn’t really that hard to write unit tests for the old code. The problem was they didn’t know how to write unit tests at all, and it was easier to learn on a well-designed architecture. Writing unit-level tests became simply a natural part of writing code.

—Lisa

The hump of pain may occur because you are building your domain-specific testing framework or learning your new functional test tool. You may want to bring in an expert to help you get it set up right.

You know your team has overcome the “hump” when automation becomes, if not easy, at least a natural and ingrained process. Lisa has worked on three teams that successfully adopted TDD and functional test automation. Each time, the team needed lots of time, training, commitment, and encouragement to get traction on the practices.


Initial Investment

Even with the whole team working on the problem, automation requires a big investment, one that may not pay off right away. It takes time and research to decide on what test frameworks to use and whether to build them in-house or use externally produced tools. New hardware and software are probably required. Team members may take a while to ramp up on how to use automated test harnesses.

Many people have experienced test automation efforts that didn’t pay off. Their organization may have purchased a vendor capture-playback tool, given it to the QA team, and expected it to solve all of the automation problems. Such tools often sit on a shelf gathering dust. Or thousands of lines of GUI test scripts were generated, and now no one is left who knows what they do, or the scripts have become impossible to maintain and are no longer useful.

Janet’s Story

I walked into an organization as a new QA manager. One of my tasks was to evaluate the current automated test scripts and increase the test coverage. A vendor tool had been purchased a few years earlier, and the testers who had developed the initial suite were no longer with the organization. One of the new testers hired was trying to learn the tool and was adding tests to the suite.

The first thing I did was ask this tester to do an assessment of the test suite to see what the coverage actually was. She spent a week just trying to understand how the tests were organized. I started poking around as well and found that the existing tests were very poorly designed and had very little value.

We stopped adding more tests and instead spent a little bit of time understanding what the goal was for our test automation. As it turned out, the vendor tool could not do what we really needed it to do, so we cancelled the licenses and found an open source tool that met our needs.

We still had to spend time learning the new open source tool, but that investment would have been made if we’d stayed with the original vendor tool anyhow, because no one on the team knew how to use the original tool.

—Janet

Test design skills have a huge impact on whether automation pays off right away. Poor practices produce tests that are hard to understand and maintain, and may produce hard-to-interpret results or false failures that take time to research. Teams with inadequate training and skills might decide the return on their automation investment isn’t worth their time.

Good test design practices produce simple, well-designed, continually refactored, maintainable tests. Libraries of test modules and objects build up over time and make automating new tests quicker. See Chapter 14 for some hints on and guidelines for test design for automation.

We know it’s not easy to capture metrics. For example, trying to capture the time it takes to write and maintain automated tests versus the time it takes to run the same regression tests manually is almost impossible. Similarly, trying to capture how much it costs to fix defects within minutes of introducing them versus how much it costs to find and fix problems after the end of the iteration is also quite difficult. Many teams don’t make the effort to track this information. Without numbers showing that automating requires less effort and provides more value, it’s harder for teams to convince management that an investment in automation is worthwhile. A lack of metrics that demonstrate automation’s return on investment also makes it harder to change a team’s habits.


Code that’s Always in Flux

Automating tests through the user interface is tricky, because UIs tend to change frequently during development. That’s one reason that simple record and playback techniques are rarely a good choice for an agile project.

If the team is struggling to produce a good design on the underlying business logic and database access, and major rework is done frequently, it might be hard to keep up even with tests automated behind the GUI at the API level. If little thought is given to testing while designing the system, it might be difficult and expensive to find a way to automate tests. The programmers and testers need to work together to get a testable application.

Although the actual code and implementation, like the GUI, tend to change frequently in agile development, the intent of the code rarely changes. Organizing test code around the application’s intent, rather than its implementation, lets your tests keep up with development.
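One common way to organize by intent is a page-object-style wrapper: tests call an intent-level method such as “log in,” and the volatile UI details live in one place. This is a hypothetical sketch, with a fake driver standing in for a real browser-automation library such as Watir:

```ruby
# A stand-in driver that just records actions; a real suite would drive
# a browser. Everything here is hypothetical and for illustration only.
class FakeDriver
  attr_reader :actions
  def initialize; @actions = []; end
  def fill(field, value); @actions << [:fill, field, value]; end
  def click(name); @actions << [:click, name]; end
end

class LoginPage
  LOGIN_BUTTON = "btnLogin" # the volatile UI detail, isolated here

  def initialize(driver)
    @driver = driver
  end

  # Tests call this intent-level method, never the raw UI controls.
  def log_in(user, password)
    @driver.fill("username", user)
    @driver.fill("password", password)
    @driver.click(LOGIN_BUTTON)
  end
end
```

If the login button is renamed, only `LOGIN_BUTTON` changes; every test that logs in stays untouched, which is what keeps the suite maintainable while the implementation is in flux.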

In Chapter 14, “An Agile Test Automation Strategy,” we’ll look at ways to organize automated tests.


Legacy Code

In our experience, it’s much easier to get traction on automation if you’re writing brand new code in an architecture designed with testing in mind. Writing tests for existing code that has few or no tests is a daunting task at best. It seems virtually impossible to a team new to agile and new to test automation.

It is sometimes a Catch-22. You want to automate tests so you can refactor some of the legacy code, but the legacy code isn’t designed for testability, so it is hard to automate tests even at the unit level.

If your team faces this type of challenge and doesn’t budget plenty of time to brainstorm about how to tackle it, it’ll be tough to start automating tests effectively. Chapter 14 gives strategies to address these issues.


Fear

Test automation is scary to those who’ve never mastered it, and even to some who have. Programmers may be good at writing production code but not very experienced at writing automated tests. Testers may not have a strong programming background and may not trust their potential test automation skills.

Non-programming testers have often gotten the message that they have nothing to offer in the agile world. We believe otherwise. No individual tester should need to worry about how to do automation. It’s a team problem, and there are usually plenty of programmers on the team who can help. The trick is to embrace learning new ideas. Take one day at a time.


Old Habits

When iterations don’t proceed smoothly and the team can’t complete all of the programming and testing tasks by the end of an iteration, team members may panic. We’ve observed that when people go into panic mode, they fall into comfortable old habits, even if those habits never produced good results.

So we may say, “We are supposed to deliver on February 1. If we want to meet that date, we don’t have time to automate any tests. We’ll have to do whatever manual tests can be done in that amount of time and hope for the best. We can always automate the tests later.”

This is the road to perdition. Some manual tests can get done, but maybe not the important manual exploratory tests that would have found the bug that cost the company hundreds of thousands of dollars in lost sales. Then, because we didn’t finish our test automation tasks, those tasks carry over to the next iteration, reducing the amount of business value we can deliver. As iterations proceed, the situation continues to deteriorate.


Can We Overcome These Barriers?

The agile whole-team approach is the foundation to overcoming automation challenges. Programmers who are new to agile are probably used to being rewarded for delivering code, whether it’s buggy or not, as long as they meet deadlines. Test-driven development is oriented more toward design than testing, so business-facing tests may still not enter their consciousness. It takes leadership and a team commitment to quality to get everyone thinking about how to write, use, and run both technology-facing and business-facing tests. Getting the whole team involved in test automation may be a cultural challenge.

See Chapter 3, “Cultural Challenges,” for some ideas on making changes to the team culture in order to facilitate agile practices.

In the next chapter, we show how to use agile values and principles to overcome some of the problems we’ve described in this chapter.


Summary

In this chapter, we analyzed some important factors related to test automation:

We need automation to provide a safety net, provide us with essential feedback, keep technical debt to a minimum, and help drive coding.

Fear, lack of knowledge, negative past experiences with automation, rapidly changing code, and legacy code are among the common barriers to automation.

Automating regression tests, running them in an automated build process, and fixing root causes of defects reduces technical debt and permits growth of solid code.

Automating regression tests and tedious manual tasks frees the team for more important work, such as exploratory testing.

Teams with automated tests and automated build processes enjoy a more stable velocity.

Without automated regression tests, manual regression testing will continue to grow in scope and eventually may simply be ignored.

Team culture and history may make it harder for programmers to prioritize automation of business-facing tests than coding new features. Using agile principles and values helps the whole team overcome barriers to test automation.

