By George Pirocanac
A familiar question every software developer and team grapples with is, “How much testing is enough to qualify a software release?” A lot depends on the type of software, its purpose, and its target audience. One would expect a far more rigorous approach to testing a commercial search engine than a simple smartphone flashlight application. Yet no matter what the application, the question of how much testing is sufficient can be hard to answer in definitive terms. A better approach is to provide considerations, or rules of thumb, that can be used to define a qualification process and testing strategy best suited to the case at hand. The following tips provide a helpful rubric:
- Document your process or strategy.
- Have a solid base of unit tests.
- Don’t skimp on integration testing.
- Perform end-to-end testing for Critical User Journeys.
- Understand and implement the other tiers of testing.
- Understand your coverage of code and functionality.
- Use feedback from the field to improve your process.
Document your process or strategy
If you are already testing your product, document the entire process. This is essential both for repeating the tests for a later release and for analyzing them for further improvement. If this is your first release, it’s a good idea to have a written test plan or strategy. In fact, a written test plan or strategy should accompany any product design.
Have a solid base of unit tests
A great place to start is writing unit tests that accompany the code. Unit tests exercise the code as it is written, at the level of individual functional units. Dependencies on external services are either mocked or faked.
A mock has the same interface as the production dependency, but only checks that the object is used according to set expectations and/or returns test-controlled values, rather than having a full implementation of its normal functionality.
A fake, on the other hand, is a shallow implementation of the dependency, but should ideally have no dependencies of its own. Fakes provide a wider range of functionality than mocks and should be maintained by the team providing the production version of the dependency. That way, as the dependency evolves, so does the fake, and the unit-test writer can be confident that the fake mirrors the functionality of the production dependency.
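To make the distinction concrete, here is a minimal sketch in Python. The `BillingService` and `PaymentGateway` names, and the fake’s behavior, are invented for illustration; they are not from any particular codebase.

```python
import unittest
from unittest import mock


class BillingService:
    """Unit under test: charges a customer via an external gateway."""

    def __init__(self, gateway):
        self.gateway = gateway

    def charge(self, customer_id, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.submit(customer_id, amount)


class FakePaymentGateway:
    """A fake: a shallow, in-memory implementation of the dependency."""

    def __init__(self):
        self.submitted = []

    def submit(self, customer_id, amount):
        self.submitted.append((customer_id, amount))
        return "ok"


class BillingServiceTest(unittest.TestCase):
    def test_charge_with_mock(self):
        # A mock only records how it is used and returns test-controlled values.
        gateway = mock.Mock()
        gateway.submit.return_value = "ok"
        service = BillingService(gateway)
        self.assertEqual(service.charge("cust-1", 100), "ok")
        gateway.submit.assert_called_once_with("cust-1", 100)

    def test_charge_with_fake(self):
        # A fake actually behaves like the dependency, in a shallow way.
        gateway = FakePaymentGateway()
        service = BillingService(gateway)
        service.charge("cust-1", 100)
        self.assertEqual(gateway.submitted, [("cust-1", 100)])


if __name__ == "__main__":
    unittest.main()
```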
At many companies, including Google, it is a best practice to require that any code change have corresponding unit test cases that pass. As the code base expands, having a body of such tests that is executed before code is submitted is an important part of catching bugs before they creep into the code base. This saves time later in writing integration tests, debugging, and verifying fixes to existing code.
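As a rough illustration of running tests before code is submitted, a minimal pre-commit hook might simply run the unit-test suite and abort the commit on failure. This sketch assumes a Python project whose tests are discoverable by unittest; real presubmit systems are considerably more sophisticated.

```python
#!/usr/bin/env python3
# Minimal sketch of a .git/hooks/pre-commit script: run the unit tests
# and abort the commit if any of them fail.
import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "unittest", "discover"])
sys.exit(result.returncode)  # a non-zero exit aborts the commit
```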
Don’t skimp on integration testing
As the codebase grows and reaches a point where a number of functional units are available to test as a group, it’s time to build a solid base of integration tests. An integration test takes a small group of units, often only two, and tests their behavior as a whole, verifying that they work together coherently.
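As a sketch of what this looks like in practice, the following hypothetical example wires two real units together, an inventory store and an order processor, against an in-memory SQLite database. All names and the schema are invented for illustration.

```python
import sqlite3
import unittest


class InventoryStore:
    """First unit: a persistence layer over a SQL database."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS stock (sku TEXT PRIMARY KEY, qty INTEGER)"
        )

    def add(self, sku, qty):
        self.conn.execute(
            "INSERT INTO stock VALUES (?, ?) "
            "ON CONFLICT(sku) DO UPDATE SET qty = qty + excluded.qty",
            (sku, qty),
        )

    def quantity(self, sku):
        row = self.conn.execute(
            "SELECT qty FROM stock WHERE sku = ?", (sku,)
        ).fetchone()
        return row[0] if row else 0


class OrderProcessor:
    """Second unit: business logic that depends on the store."""

    def __init__(self, store):
        self.store = store

    def reserve(self, sku, qty):
        if self.store.quantity(sku) < qty:
            raise ValueError("insufficient stock")
        self.store.add(sku, -qty)


class OrderProcessorIntegrationTest(unittest.TestCase):
    """Integration test: both real units, backed by an in-memory database."""

    def test_reserve_decrements_stock(self):
        store = InventoryStore(sqlite3.connect(":memory:"))
        store.add("widget", 5)
        OrderProcessor(store).reserve("widget", 3)
        self.assertEqual(store.quantity("widget"), 2)


if __name__ == "__main__":
    unittest.main()
```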
Often developers think that integration tests can be deprioritized or even skipped in favor of full end-to-end tests. After all, the latter really test the product as the user would exercise it. Yet having a comprehensive set of integration tests is just as important as having a solid unit-test base (see the earlier Google Testing Blog article, Fixing a test hourglass).
The reason lies in the fact that integration tests have fewer dependencies than full end-to-end tests. As a result, integration tests, with smaller environments to bring up, will be faster and more reliable than full end-to-end tests with their full set of dependencies (see the earlier Google Testing Blog article, Test Flakiness - One of the Main Challenges of Automated Testing).
Perform end-to-end testing for Critical User Journeys
The discussion thus far covers testing the product at the component level, first as individual components (unit testing), then as groups of components and their dependencies (integration testing). Now it’s time to test the product end to end, as a user would use it. This is quite important because it’s not just independent features that should be tested but entire workflows incorporating a variety of features. At Google these workflows - the combination of a critical goal and the journey of tasks a user undertakes to achieve that goal - are called Critical User Journeys (CUJs). Understanding CUJs, documenting them, and then verifying them with end-to-end tests (hopefully in an automated fashion) completes the Testing Pyramid.
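What such a test looks like depends heavily on the product, but here is a hypothetical sketch of an automated CUJ test for an imaginary web shop, driving the journey through its HTTP API with the third-party requests library. The base URL, endpoints, and payloads are all invented for illustration.

```python
import requests

BASE = "https://shop.example.com/api"  # hypothetical service under test


def test_purchase_journey():
    """CUJ: a user signs in, adds an item to the cart, and checks out."""
    session = requests.Session()  # keeps cookies across the whole journey

    # Step 1: sign in.
    resp = session.post(f"{BASE}/login", json={"user": "test", "password": "pw"})
    assert resp.status_code == 200

    # Step 2: add an item to the cart.
    resp = session.post(f"{BASE}/cart", json={"sku": "widget", "qty": 1})
    assert resp.status_code == 200

    # Step 3: check out and verify the order was confirmed.
    resp = session.post(f"{BASE}/checkout")
    assert resp.status_code == 200
    assert resp.json().get("status") == "confirmed"
```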
Understand and implement the other tiers of testing
Unit, integration, and end-to-end testing address the functional level of your product. It is important to understand the other tiers of testing, including:
- Performance testing - Measuring the latency or throughput of your application or service (a minimal sketch follows this list).
- Load and scalability testing - Testing your application or service under increasing load.
- Fault-tolerance testing - Testing your application’s behavior as different dependencies fail or go down entirely.
- Security testing - Testing for known vulnerabilities in your service or application.
- Accessibility testing - Making sure the product is accessible and usable for everyone, including people with a wide range of disabilities.
- Localization testing - Making sure the product can be used in a particular language or region.
- Globalization testing - Making sure the product can be used by people all over the world.
- Privacy testing - Assessing and mitigating privacy risks in the product.
- Usability testing - Testing for user friendliness.

Again, it is important to have these testing processes occur as early as possible in your review cycle. Smaller performance tests can detect regressions earlier and save debugging time during the end-to-end tests.
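As one example of how small these tests can start, here is a minimal latency check in Python. The `handle_request` function and the latency budgets are placeholders; a real performance test would use a dedicated benchmarking harness and a controlled environment.

```python
import statistics
import time


def handle_request():
    """Stand-in for the code path being measured (hypothetical)."""
    time.sleep(0.001)


def test_latency_budget():
    # Measure wall-clock latency over repeated calls.
    samples = []
    for _ in range(100):
        start = time.perf_counter()
        handle_request()
        samples.append(time.perf_counter() - start)

    p50 = statistics.median(samples)
    p95 = sorted(samples)[int(len(samples) * 0.95)]

    # Assumed budgets; tune them to your own service's requirements.
    assert p50 < 0.010, f"median latency regressed: {p50:.4f}s"
    assert p95 < 0.050, f"tail latency regressed: {p95:.4f}s"
```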
Understand your coverage of code and functionality
So far, the question of how much testing is enough has been examined from a qualitative perspective: different types of tests were reviewed, and the argument was made that smaller and earlier is better than larger and later. Now the problem will be examined from a quantitative perspective, taking code coverage techniques into account.
Wikipedia has a great article on code coverage that outlines and discusses the different types of coverage, including statement, edge, branch, and condition coverage. There are several open source tools available for measuring coverage for most of the popular programming languages, such as Java, C++, Go, and Python. A partial list is included in the table below:
Table 1 - Open source coverage tools for different languages

Language | Example open source tool
Java     | JaCoCo
C/C++    | gcov
Go       | go tool cover (built into the Go toolchain)
Python   | Coverage.py
Most of these tools provide a summary in percentage terms. For example, 80% code coverage means that about 80% of the code is covered and about 20% is not. At the same time, it is important to understand that having coverage for a particular area of code does not guarantee it is free of bugs: covered code can still fail in ways the tests do not check.
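For instance, with Python’s Coverage.py the usual workflow is to run the tests under the tool and then ask for a report. The sketch below shows the command-line form in comments and a minimal programmatic equivalent; the tests/ directory is an assumed project layout.

```python
# Typical Coverage.py command-line workflow:
#   coverage run -m unittest discover   # run the test suite under coverage
#   coverage report -m                  # print a summary with missing lines
#
# A minimal programmatic equivalent:
import unittest

import coverage

cov = coverage.Coverage()
cov.start()

# Run the test suite while coverage is recording.
suite = unittest.TestLoader().discover("tests")  # assumed tests/ layout
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report(show_missing=True)  # per-file percentages plus uncovered lines
```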
Another concept in coverage is changelist coverage, which measures coverage of the changed or added lines in a given change. It is useful for teams that have accumulated technical debt and have low coverage across their entire codebase. Such teams can institute a policy that each change must be well covered, so that the incremental improvements add up to better overall coverage over time.
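The underlying arithmetic is simple enough to sketch directly: changelist coverage is the fraction of changed or added lines that the tests actually executed. The line sets below are invented inputs; in practice they would come from your version-control diff and your coverage report.

```python
def changelist_coverage(changed_lines, covered_lines):
    """Percentage of changed/added lines that are executed by tests.

    changed_lines: set of line numbers touched by the change (from the diff).
    covered_lines: set of line numbers executed by tests (from coverage data).
    """
    if not changed_lines:
        return 100.0
    covered_in_change = changed_lines & covered_lines
    return 100.0 * len(covered_in_change) / len(changed_lines)


# Hypothetical data: a change touched lines 10-14; tests executed 10-12.
print(changelist_coverage({10, 11, 12, 13, 14}, {1, 2, 10, 11, 12}))  # 60.0
```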
So far the coverage discussion has centered on coverage of the code by tests (functions, lines, etc.). Another type of coverage is feature coverage, or behavior coverage. For feature coverage, the emphasis is on identifying the features committed to a particular release and creating tests for their implementation. For behavior coverage, the emphasis is on identifying the CUJs and creating the appropriate tests to track them. Again, understanding your “uncovered” features and behaviors is a useful input when assessing risk.
Use feedback from the field to improve your process
A very important part of understanding and improving your qualification process is the feedback received from the field once the software has been released. Having a process that tracks outages, bugs, and other issues, in the form of action items to improve qualification, is critical for minimizing the risks of regressions in subsequent releases. Moreover, the action items should (1) emphasize filling the testing gap as early as possible in the qualification process, and (2) address strategic gaps, such as the lack of a particular type of testing like load or fault-tolerance testing. And again, this is why it is important to document your qualification process: so that you can reevaluate it in light of the data you obtain from the field.
Summary
Creating a comprehensive qualification process and testing strategy to answer the question “How much testing is enough?” can be a complex task. Hopefully the tips given here can help you with this. In summary:
- Document your process or strategy.
- Have a solid base of unit tests.
- Don’t skimp on integration testing.
- Perform end-to-end testing for Critical User Journeys.
- Understand and implement the other tiers of testing.
- Understand your coverage of code and functionality.
- Use feedback from the field to improve your process.
References

- Fixing a test hourglass, Google Testing Blog
- Test Flakiness - One of the Main Challenges of Automated Testing, Google Testing Blog
- Code coverage, Wikipedia