Very useful, thanks.
Hi Anthony,
Thank you for this comprehensive article on test planning and the considerations for writing test plans.
We should also consider the source document from which the test plans are constructed. Source documents are usually functional specification documents. I have seen test teams place more emphasis on test plan templates and organization while ignoring the content and organization of the source document itself.
The source document is also used by the development teams, and it is important for any test team to understand it thoroughly before creating any test artifact (test plans, test approach, or strategy documents).
On some projects and programs, testing may rely more on the source document and less on test plans. Test plans tend to grow heavy release after release, especially in agile, and some test cases quickly lose their significance and relevance.
We should study the source document and, based on its organization, decide on the need for and the content of the test plans.
Apologies for the long comment. I hope this is relevant to the topic above.
Thank You.
Deepak K
Excellent article. Very informative. Thanks for the post.
-Sethu
Hi there, Anthony. I posted a few questions on Twitter yesterday as '@zacoid55' and was told to carry on the discussion here.
My main question is about the phrase "Many projects can automate all testing." Do you mean every bit of testing you've come up with, i.e. 100% of the tests you've posited, or are you actually saying you've automated absolutely everything?
Thank you in advance
Hi,
Most projects at Google have no manual testing and rely entirely on automated tests. This is particularly true for back-end/core/infrastructure projects. If I understand you correctly, you are asking if we have automated the tests we feel are necessary, or if we have literally automated every possible scenario and permutation of inputs/state. It is usually the former, because cost normally prohibits you from testing absolutely every possibility. However, on April 1st, we can manage this:
http://googletesting.blogspot.com/2015/04/quantum-quality.html
-Anthony
What about projects that involve a UI and use cases/user flows? Are those 100% automated at Google too? How are end-to-end tests carried out?
It varies from team to team. There are many teams with complicated UIs that rely entirely on automation. Some teams take a hybrid approach where most testing is automated but some complex scenarios are manual. When taking the hybrid approach, it helps to design the system such that most project code is easy to automate in isolation and is loosely coupled with the components that are hard to automate.
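A minimal sketch of what that isolation can look like (hypothetical names, not actual Google code): the hard-to-automate dependency sits behind a small interface, so most project logic is exercised against a fake.

# Minimal sketch -- hypothetical names, not Google code.
from dataclasses import dataclass
from typing import Protocol


class PaymentGateway(Protocol):
    """The hard-to-automate component (e.g. a third-party UI or service)."""
    def charge(self, account_id: str, cents: int) -> bool: ...


@dataclass
class CheckoutService:
    """Project code under test; it depends only on the small interface."""
    gateway: PaymentGateway

    def checkout(self, account_id: str, cents: int) -> str:
        if cents <= 0:
            return "rejected"
        return "charged" if self.gateway.charge(account_id, cents) else "declined"


class FakeGateway:
    """Test double used by automated tests instead of the real dependency."""
    def __init__(self, succeed: bool = True):
        self.succeed = succeed
        self.calls = []

    def charge(self, account_id: str, cents: int) -> bool:
        self.calls.append((account_id, cents))
        return self.succeed


def test_checkout_charges_positive_amounts():
    service = CheckoutService(gateway=FakeGateway())
    assert service.checkout("acct-1", 500) == "charged"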
We have internal systems, similar to continuous integration systems, that are dedicated to running end-to-end tests. These systems continuously build binaries, deploy the SUT, execute large tests against the SUT, monitor results, report on status changes, etc.
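A rough sketch of the shape of such a loop, using hypothetical helper functions rather than the actual internal tooling:

# Rough sketch only -- the callables passed in are hypothetical helpers.
import time

def e2e_cycle(build_binaries, deploy_sut, run_large_tests, report_results):
    """One iteration: build, deploy a fresh SUT, run the suites, report."""
    binaries = build_binaries()          # build from the latest source
    sut = deploy_sut(binaries)           # bring up the system under test
    try:
        results = run_large_tests(sut)   # end-to-end suites against the SUT
        report_results(results)          # surface status changes to the team
    finally:
        sut.tear_down()                  # always reclaim the environment

def run_continuously(cycle, interval_seconds=600):
    """Repeat the cycle so regressions surface soon after they land."""
    while True:
        cycle()
        time.sleep(interval_seconds)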
"Injury to people or animals", never thought !! But How ?
Consider software used by vehicles (land, water, air, or space), medical devices, heavy machinery, climate control, chemical factories, utility stations, etc.
How do you deal with receiving interesting feedback from your automated tests when this feedback needs to be explored - do you write another automated test case, or is it then cheaper to do it exploitative?
Sorry, I don't understand. Can you define/clarify "interesting feedback" and "do it exploitative"?
I think what Ard is asking is: how do you handle a result you receive from an automated test case - is it more cost-effective to write another automated test from that result, or just to do exploratory testing on it?
The result is simply pass or fail (along with logging and other artifacts). If the test fails, we need to determine the root cause. In many cases, the root cause can be determined from logging alone (see http://googletesting.blogspot.com/2013/06/optimal-logging.html). In other cases, we need to reproduce the issue and debug. Since this is a one-time effort, we may debug via ad hoc automation or manual experiments - whichever is easier. Once the root cause is determined, the test may be fixed, the SUT may be fixed, and/or new automated tests may be created to cover the scenario.
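As a hypothetical example (sut_client and its methods are made up), a test can record enough context that many failures can be triaged from the log alone:

# Hypothetical example: log enough context that most failures can be
# understood from the test output without reproducing the run.
import logging
import uuid

log = logging.getLogger("checkout_e2e")

def test_checkout_end_to_end(sut_client):
    request_id = uuid.uuid4().hex
    log.info("checkout start: request_id=%s sut_version=%s",
             request_id, sut_client.version())
    response = sut_client.checkout(request_id=request_id, cents=500)
    log.info("checkout response: request_id=%s status=%s body=%r",
             request_id, response.status, response.body)
    # The failure message carries the request id, so the matching server-side
    # logs can be located directly when the assertion fails.
    assert response.status == "charged", (
        "request_id=%s: expected 'charged', got %r"
        % (request_id, response.status))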
Great piece of content, Anthony. I am mostly curious about the application of the above on a mobile project, where you deal with constant changes in the market, feature variance across devices, coverage challenges, etc. What is your take on test planning best practices when dealing with a cross-platform mobile app?
I have written some thoughts about it on my personal blog (mobiletestingblog.com) but am looking to get your experienced POV.
Again, great blog.
Regards
Eran
Hi Eran,
You should identify the supported platforms in the plan and categorize the feature variance in some way. There are many good approaches, but a simple one is to set up a grid with platform rows and feature columns. A platform may be a combination of OS version and device model. Each cell can contain unique information and perhaps the testing status for that particular platform/feature combination. If the feature list is very long, create multiple grids, where each grid covers a general feature category.
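For illustration only (the platforms, features, and statuses below are made up), such a grid can be as simple as:

Platform \ Feature         Login    Checkout       Offline sync
Android 14 / Pixel 8       pass     pass           not supported
Android 13 / Galaxy S23    pass     in progress    not supported
iOS 17 / iPhone 15         pass     pass           in progress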
Also, thanks for asking that question. It made me realize that an important question was missing. The post has been updated to include "What platforms are supported?".
Thank you, Anthony Vallone, for sharing this great piece. Test planning is always an important factor; it helps testing executives implement effective testing techniques and remove bugs and vulnerabilities from the software under test.
Thank you, Anthony. Very informative and helpful.
Hi All,
Quick question: where do you manage your test plans and cases? I mean, you can do it in a test management tool, but isn't that kind of a waste of time? My approach is: if I have test automation engineers and we are using Cucumber + Selenium for the UI, why not write the scenarios in the code (the feature file)? Could you share how test plans and test cases are managed at Google? What do you find more efficient?
Thanks
Ronen
For test plans, most teams use Google Docs. Most of our test cases are automated, so the code repository and test comments serve as case management. For manual cases, we use an internal test case management tool.
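As a hypothetical example of what "test comments serve as case management" can look like, the case description lives in the test itself:

# Hypothetical example: the case description lives in the test docstring,
# so the code repository itself doubles as the case management system.
def clamp_price(cents):
    """Toy function standing in for real project code."""
    return max(cents, 0)

def test_clamp_price_rejects_negative_values():
    """Case: negative price input.

    Steps: pass a negative number of cents.
    Expected: the value is clamped to zero rather than propagated.
    """
    assert clamp_price(-100) == 0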
Thanks for the update
Hi Anthony, is this something that is still being done at Google? Curious if there is something new!