Testing Blog
The Inquiry Method for Test Planning
Monday, June 06, 2016
by Anthony Vallone
updated: July 2016
Creating a test plan is often a complex undertaking. An ideal test plan is achieved by applying basic principles of cost-benefit analysis and risk analysis, optimally balancing these software development factors:
Implementation cost: The time and complexity of implementing testable features and automated tests for specific scenarios will vary, and this affects short-term development cost.
Maintenance cost: Tests and test plans range from easy to difficult to maintain, and this affects long-term development cost. When manual testing is chosen, this also adds to long-term cost.
Monetary cost: Some test approaches may require billed resources.
Benefit: Tests are capable of preventing issues and aiding productivity to varying degrees. Also, the earlier they can catch problems in the development life-cycle, the greater the benefit.
Risk: The probability of failure scenarios may vary from rare to likely, and their consequences may vary from minor nuisance to catastrophic.
Effectively balancing these factors in a plan depends heavily on project criticality, implementation details, available resources, and team opinions. Many projects can achieve outstanding coverage with high-benefit, low-cost unit tests, but they may need to weigh options for larger tests and complex corner cases. Mission-critical projects must minimize risk as much as possible, so they will accept higher costs and invest heavily in rigorous testing at all levels.
This guide puts the onus on the reader to find the right balance for their project. Also, it does not provide a test plan template, because templates are often too generic or too specific and quickly become outdated. Instead, it focuses on selecting the best content when writing a test plan.
Test plan vs. strategy
Before proceeding, two common methods for defining test plans need to be clarified:
Single test plan: Some projects have a single "test plan" that describes all implemented and planned testing for the project.
Single test strategy and many plans: Some projects have a "test strategy" document as well as many smaller "test plan" documents. Strategies typically cover the overall test approach and goals, while plans cover specific features or project updates.
Either of these may be embedded in and integrated with project design documents. Both of these methods work well, so choose whichever makes sense for your project. Generally speaking, stable projects benefit from a single plan, whereas rapidly changing projects are best served by infrequently changed strategies and frequently added plans.
For the purpose of this guide, I will refer to both test document types simply as "test plans". If you have multiple documents, just apply the advice below across your set of documents.
Content selection
A good approach to creating content for your test plan is to start by listing all questions that need answers. The lists below provide a comprehensive collection of important questions that may or may not apply to your project. Go through the lists and select all that apply. By answering these questions, you will form the contents for your test plan, and you should structure your plan around the chosen content in any format your team prefers. Be sure to balance the factors as mentioned above when making decisions.
Prerequisites
Do you need a test plan?
If there is no project design document or a clear vision for the product, it may be too early to write a test plan.
Has testability been considered in the project design?
Before a project gets too far into implementation, all scenarios must be designed to be testable, preferably via automation. Both project design documents and test plans should comment on testability as needed.
Will you keep the plan up-to-date?
If so, be careful about adding too much detail; otherwise, the plan may become difficult to maintain.
Does this quality effort overlap with other teams?
If so, how have you deduplicated the work?
Risk
Are there any significant project risks, and how will you mitigate them?
Consider:
Injury to people or animals
Security and integrity of user data
User privacy
Security of company systems
Hardware or property damage
Legal and compliance issues
Exposure of confidential or sensitive data
Data loss or corruption
Revenue loss
Unrecoverable scenarios
SLAs
Performance requirements
Misinforming users
Impact to other projects
Impact from other projects
Impact to company’s public image
Loss of productivity
What are the project’s technical vulnerabilities?
Consider:
Features or components known to be hacky, fragile, or in great need of refactoring
Dependencies or platforms that frequently cause issues
Possibility for users to cause harm to the system
Trends seen in past issues
Coverage
What does the test surface look like?
Is it a simple library with one method, or a multi-platform client-server stateful system with a combinatorial explosion of use cases? Describe the design and architecture of the system in a way that highlights possible points of failure.
What platforms are supported?
Consider listing supported operating systems, hardware, devices, etc. Also describe how testing will be performed and reported for each platform.
What are the features?
Consider making a summary list of all features and describe how certain categories of features will be tested.
What will not be tested?
No test suite covers every possibility. It’s best to be up-front about this and provide rationale for not testing certain cases. Examples: low-risk areas that are a low priority, complex cases that are a low priority, areas covered by other teams, features not ready for testing, etc.
What is covered by unit (small), integration (medium), and system (large) tests?
Always test as much as possible in smaller tests, leaving fewer cases for larger tests. Describe how certain categories of test cases are best tested by each test size and provide rationale.
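To make the size distinction concrete, here is a minimal sketch, assuming a Python codebase using the standard unittest module; parse_port is a hypothetical helper invented for this example, not part of any real project.

import unittest

def parse_port(value):
    """Hypothetical helper under test: parse and validate a TCP port."""
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError("port out of range: %d" % port)
    return port

class ParsePortTest(unittest.TestCase):
    # Small (unit) tests like these exhaust edge cases cheaply and run in milliseconds.

    def test_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range_port_rejected(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

if __name__ == "__main__":
    unittest.main()

Corner cases like these are cheap to cover at the unit level, leaving the larger integration and system tests to verify only that the assembled pieces work together.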
What will be tested manually vs. automated?
When feasible and cost-effective, automation is usually best. Many projects can automate all testing. However, there may be good reasons to choose manual testing. Describe the types of cases that will be tested manually and provide rationale.
How are you covering each test category?
Consider:
accessibility
functional
fuzz
internationalization and localization
performance, load, stress, and endurance (aka soak)
privacy
security
smoke
stability
usability
Will you use static and/or dynamic analysis tools?
Both static analysis tools and dynamic analysis tools can find problems that are hard to catch in reviews and testing, so consider using them.
How will system components and dependencies be stubbed, mocked, faked, staged, or used normally during testing?
There are good reasons to do each of these, and they each have a unique impact on coverage.
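As an illustration of the trade-off, here is a minimal sketch using Python's standard unittest and unittest.mock; Notifier and FakeEmailClient are hypothetical names used only for this example.

import unittest
from unittest import mock

class Notifier:
    """Hypothetical production class that depends on an email client."""
    def __init__(self, email_client):
        self._email_client = email_client

    def alert(self, address, message):
        self._email_client.send(to=address, body=message)

class FakeEmailClient:
    """A fake: a lightweight working implementation that records sent mail."""
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

class NotifierTest(unittest.TestCase):
    def test_alert_with_fake(self):
        fake = FakeEmailClient()
        Notifier(fake).alert("oncall@example.com", "disk full")
        self.assertEqual(fake.sent, [("oncall@example.com", "disk full")])

    def test_alert_with_mock(self):
        # A mock verifies the interaction itself, with no real behavior behind it.
        client = mock.Mock()
        Notifier(client).alert("oncall@example.com", "disk full")
        client.send.assert_called_once_with(to="oncall@example.com", body="disk full")

if __name__ == "__main__":
    unittest.main()

A fake exercises more realistic behavior, a mock only checks the call contract, and using the real or staged dependency covers the most at the highest cost.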
What builds are your tests running against?
Are tests running against a build from HEAD (aka tip), a staged build, and/or a release candidate? If only from HEAD, how will you test release build cherry picks (selection of individual changelists for a release) and system configuration changes not normally seen by builds from HEAD?
What kind of testing will be done outside of your team?
Examples:
Dogfooding
External crowdsource testing
Public alpha/beta versions (how will they be tested before releasing?)
External trusted testers
How are data migrations tested?
You may need special testing to compare results from before and after the migration.
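One common approach is to snapshot the data before and after the migration and compare. Below is a minimal sketch of that idea in Python, assuming a migration that is expected to preserve row contents; table_fingerprint and the row-reader callables are hypothetical.

import hashlib

def table_fingerprint(rows):
    """Order-insensitive fingerprint of a table: (row count, digest of row digests)."""
    digests = sorted(
        hashlib.sha256(repr(row).encode("utf-8")).hexdigest() for row in rows
    )
    combined = hashlib.sha256("".join(digests).encode("utf-8")).hexdigest()
    return len(digests), combined

def check_migration(read_old_rows, read_new_rows):
    """Fail if row count or contents changed across the migration."""
    old_count, old_digest = table_fingerprint(read_old_rows())
    new_count, new_digest = table_fingerprint(read_new_rows())
    assert old_count == new_count, "row count changed: %d -> %d" % (old_count, new_count)
    assert old_digest == new_digest, "row contents differ after migration"

If the migration intentionally transforms data, apply the expected transformation to the old rows before fingerprinting them.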
Do you need to be concerned with backward compatibility?
You may own previously distributed clients or there may be other systems that depend on your system’s protocol, configuration, features, and behavior.
Do you need to test upgrade scenarios for server/client/device software or dependencies/platforms/APIs that the software utilizes?
Do you have line coverage goals?
Tooling and Infrastructure
Do you need new test frameworks?
If so, describe these or add design links in the plan.
Do you need a new test lab setup?
If so, describe it or add design links in the plan.
If your project offers a service to other projects, are you providing test tools to those users?
Consider providing mocks, fakes, and/or reliable staged servers for users trying to test their integration with your system.
For end-to-end testing, how will test infrastructure, systems under test, and other dependencies be managed?
How will they be deployed? How will persistence be set up and torn down? How will you handle required migrations from one datacenter to another?
Do you need tools to help debug system or test failures?
You may be able to use existing tools, or you may need to develop new ones.
Process
Are there test schedule requirements?
What time commitments have been made? Which tests will be in place (or test feedback provided) by what dates? Are some tests important to deliver before others?
How are builds and tests run continuously?
Most small tests will be run by continuous integration tools, but large tests may need a different approach. Alternatively, you may opt for running large tests as needed.
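One way to split the two, sketched below, is to tag tests by size so the continuous build runs only small tests on every change and large tests on a schedule. This assumes pytest with "small" and "large" markers registered in pytest.ini; the helper and test names are hypothetical.

import pytest

def parse_port(value):
    """Hypothetical helper, repeated here so the example is self-contained."""
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError("port out of range")
    return port

@pytest.mark.small
def test_parse_port_rejects_out_of_range():
    # Fast, hermetic test; run on every change, e.g. with: pytest -m small
    with pytest.raises(ValueError):
        parse_port("70000")

@pytest.mark.large
def test_checkout_flow_end_to_end():
    # Slow test against a deployed staging instance; run nightly or as needed: pytest -m large
    pytest.skip("placeholder for an end-to-end scenario")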
How will build and test results be reported and monitored?
Do you have a team rotation to monitor continuous integration?
Large tests might require monitoring by someone with expertise.
Do you need a dashboard for test results and other project health indicators?
Who will get email alerts and how?
Will the person monitoring tests simply communicate results to the team verbally?
How are tests used when releasing?
Are they run explicitly against the release candidate, or does the release process depend only on continuous test results?
If system components and dependencies are released independently, are tests run for each type of release?
Will a "release blocker" bug stop the release manager(s) from actually releasing? Is there agreement on the release-blocking criteria?
When performing canary releases (aka % rollouts), how will progress be monitored and tested?
How will external users report bugs?
Consider feedback links or other similar tools to collect and cluster reports.
How does bug triage work?
Consider labels or categories for bugs so that they land in a triage bucket. Also make sure the teams responsible for filing bugs and/or creating the bug report template are aware of this. Are you using one bug tracker, or do you need to set up an automatic or manual import routine?
Do you have a policy for submitting new tests before closing bugs that tests could have caught?
How are tests used for unsubmitted changes?
If anyone can run all tests against any experimental build (a good thing), consider providing a howto.
How can team members create and/or debug tests?
Consider providing a howto.
Utility
Who are the test plan readers?
Some test plans are only read by a few people, while others are read by many. At a minimum, you should consider getting a review from all stakeholders (project managers, tech leads, feature owners). When writing the plan, be sure to understand the expected readers, provide them with enough background to understand the plan, and answer all questions you think they will have - even if your answer is that you don’t have an answer yet. Also consider adding contacts for the test plan, so any reader can get more information.
How can readers review the actual test cases?
Manual cases might be in a test case management tool, in a separate document, or included in the test plan. Consider providing links to directories containing automated test cases.
Do you need traceability between requirements, features, and tests?
Do you have any general product health or quality goals, and how will you measure success?
Consider:
Release cadence
Number of bugs caught by users in production
Number of bugs caught in release testing
Number of open bugs over time
Code coverage
Cost of manual testing
Difficulty of creating new tests
GTAC 2016 Registration Deadline Extended
Thursday, June 02, 2016
by Sonal Shah on behalf of the GTAC Committee
Our goal in organizing GTAC each year is to make it a first-class conference, dedicated to presenting leading edge industry practices. The quality of submissions we've received for GTAC 2016 so far has been overwhelming. In order to include the best talks possible, we are extending the deadline for speaker and attendee submissions by 15 days. The new timelines are as follows:
June 1, 2016 → June 15, 2016 - Last day for speaker, attendee and diversity scholarship submissions.
June 15, 2016 → July 15, 2016 - Attendees and scholarship awardees will be notified of selection/rejection/waitlist status. Those on the waitlist will be notified as space becomes available.
August 15, 2016 → August 29, 2016 - Selected speakers will be notified.
To register, please fill out this form. To apply for a diversity scholarship, please fill out this form.
The GTAC website has a list of frequently asked questions. Please do not hesitate to contact gtac2016@google.com if you still have any questions.