The key to this is ROI and metrics. It is currently fashionable to show efficiencies in execution and regression based on sheer volume of execution. The debate should instead start from risk mitigation, rather than volume of execution or regression.
This is most common, in my opinion, with outsourced testing efforts: the "shock and awe" effect of large volumes of test execution leads the customer (often an IT manager with little to no test acumen) to retain and extend the service provider, because it is believed that the volume of tests executed is directly proportional to the mitigation of risk.
Looking forward to following the blog. Best of luck.
This is not just about ROI or metrics (*shudder*), although those things will be in the minds of those paying the test team's wages. For me, discussions of this nature always overlook the fact that a good manual test cannot be automated.
In order to fully appreciate that last statement I suggest reading James Bach's blog post on the subject, where he points out that "[t]est automation cannot reproduce the thinking that testers do when they conceive of tests, control tests, modify tests, and observe and evaluate the product."
Automated testing removes (to a greater or lesser degree) the value that a trained human being adds to a test.
The trick to balancing manual with automated testing is to allow humans to do what they do best: feel their way through software, make decisions based on hunches and patterns that a machine cannot be programmed to detect.
Leave the repetitious checking to the machines by all means, and let good testers do what they do best: ask questions of the software that reveal information pertaining to its value.
James,
Great post.
You said "I think the issue is test design versus doing something because we can." I absolutely agree. Effective test design tools and methods are not given nearly enough attention by most testers, regardless of which side of the "manual v. automate" debate they support. Test design methods shoud be a part of every testers training and test design tools should be a part of every serious tester's tool box. Despite this, few testers, even in Fortune 100 firms that have otherwise sophisticated IT departments, are currently using test design tools and well-structured methods. Worse still, many testers have not even heard of them; as a result, testers generate and execute highly ineffective test cases week after week.
I recently worked on a study that spanned 10 projects. In each project, we objectively compared the defects found by "2 testers in a room with a test design tool" to "4 testers in a room with no test design tool". Both teams tested the same "real world" application at the same time. I am looking forward to publishing the empirical findings (which is scheduled to happen in October).
The results from those pilot projects were positive enough (consistently doubled tester productivity, consistently higher quality, etc.) that I decided to create my own test design software tool and help testing teams achieve more coverage in fewer tests. (Please see: http://www.hexawise.com).
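To make the "more coverage in fewer tests" idea concrete, here is a minimal greedy pairwise sketch in Python. The parameters and values are invented for illustration, and this is a toy of the general technique, not the algorithm behind any particular tool:

```python
from itertools import combinations, product

# Hypothetical parameters for a checkout form (invented for illustration).
parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "payment": ["card", "paypal", "invoice"],
    "locale": ["en-US", "de-DE"],
}

def pairwise_suite(params):
    """Greedily pick rows until every pair of parameter values is covered."""
    names = list(params)
    # All (param, value) pairs that must co-occur in at least one test.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    candidates = [dict(zip(names, row)) for row in product(*params.values())]
    suite = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered pairs.
        best = max(
            candidates,
            key=lambda r: sum(
                ((a, r[a]), (b, r[b])) in uncovered
                for a, b in combinations(names, 2)
            ),
        )
        uncovered -= {((a, best[a]), (b, best[b])) for a, b in combinations(names, 2)}
        suite.append(best)
    return suite

for test in pairwise_suite(parameters):
    print(test)
```

For these 3 x 3 x 2 parameters the full factorial is 18 tests, while the greedy pairwise suite covers every value pair in roughly half that; the study above measured the same effect at much larger scale.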
Thank you for emphasizing that identifying the right things to test is a critically important factor that too often gets overlooked in the manual testing v. automated testing debate.
- Justin Hunter
Founder and CEO of Hexawise
Jim,
I'd say you nailed it pretty much on the head. It is like building a house. You need to look at what you are trying to build (test), design it, build it, 'test' it, and then use it.
Manual testing does that, and when done you can (despite what some people say) use it as a blueprint for your automated test. I do agree that not all manual tests should, or even can, be automated, for various reasons. Automation is a tool, not an end-all solution.
And to agree with James Bach: there is no better tool than your own brain.
Have fun at Google and look forward to more posts.
Jim Hazen
So Ian thought it was funny and Wendy thought it was offensive. I intended the former and not the latter. But I suppose it will stick in the mind either way. Wendy, you have my apology!
Now to the meat: thinking is the key issue, but thinking is not the sole domain of the manual tester. Lots of thought should go into test design. If those tests then get executed by automation, that is very much OK. Like I said, all good automation starts its life as a manual test. Maybe I should have said all good testing begins its life as a good test design.
I've been tasked with setting up a QA department at my company.
What books would you recommend for learning what QA is and how to do it effectively?
Thanks,
Simon Johnson
Simon (Ckwop): you're asking an author what books he recommends? Take a guess? Actually, my books will teach you testing, not necessarily how to set up a QA shop. And although he works at my former employer, Alan Page's book is top drawer stuff.
I was talking to my friend the other day about this. It is very important to identify what exactly needs to be tested; the how is a separate problem that should not be mixed in. This helps us focus on designing better test cases.
The point under debate is explained clearly and simply.
Sachin
Hi James,
Good to see you online.
I confess that I'm struggling to frame my comment because your post seems to assume a context that I just don't live in anymore, a test-last rather than test-driven context in which manual testing & automation compete for time, attention, & resources after the software is implemented.
But the assertion "All good automation begins its life as manual test cases" strikes me as just wrong for any context.
Even in test-last contexts, I've automated tests that I never executed manually and that found bugs on their first run. Model-based testing FTW.
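For the curious, here is a minimal illustration of the kind of thing I mean. The model is a made-up login workflow, nothing from a real project; a random walk over it yields executable tests that no human ever wrote out by hand:

```python
import random

# A toy state model of a login session (states and actions invented here).
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_bad": "logged_out"},
    "logged_in": {"view_profile": "logged_in", "logout": "logged_out"},
}

def generate_test(model, start="logged_out", steps=6, seed=None):
    """Walk the model at random; each walk is a generated test sequence."""
    rng = random.Random(seed)
    state, path = start, []
    for _ in range(steps):
        action = rng.choice(sorted(model[state]))
        path.append((state, action))
        state = model[state][action]
    return path

# In practice each step would be replayed against the system under test,
# checking that the application's observed state matches the model's.
for state, action in generate_test(MODEL, seed=42):
    print(f"in {state}: perform {action}")
```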
Perhaps you meant to say, "All good automation involves test cases that would still be worth executing manually if that were necessary?"
Elisabeth
Great post James. I totally agree when it comes to the value of test design. How the test design is documented is up to the individual; the important thing is the process of designing the test - think about what you are doing before you do it :-)
Good to find you again; I was wondering how to find you when you left MS. Maybe we'll see you at EUROSTAR sometime.
Take care,
Gitte
Program Test Manager
Systematic
James ... I'm glad you wrote this. The obsession with the product/solution is becoming a big problem with budding testers.
I'd like to add that it's not just the testers: the clients and project managers on the client side also push for automation without any understanding of whether automation would help at all.
Automation, when used thoughtfully, does do wonderful things. The industry needs to stop creating a buzz around automation and take it at its real value.
-- Anup / Director QE - QA InfoTech
James, good article, and it's about time someone focused on design. People who are not testers don't understand when we talk about design; they think only developers design (code).
In my view this is part of the problem, because if those people don't have a "tester brain" they just don't get it. They don't understand why the product is still buggy after being fully automated, or why all those person-days of manual testing did not deliver a bug-free system... and when the opposite occurs and the system is near perfect, they still can't explain why this happened.
Maybe we should be blogging more about the "tester brain" (what is it and how come those who have it can spot it in others), and then we can understand more about the underlying designs.
Does this make sense? Anyone?
"Good test design" certainly is a topic not factored into what most testers call as "automation".
Automation that refers only to automated test execution perhaps needs to be more explicitly extended to include "automated test design" as well.
Pairwise testing, Markov chains and so on... and of course the tester brain as well.
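As a small illustration of the Markov-chain flavor of automated test design (states and transition weights invented here, purely a sketch), test paths can be drawn from a usage model so the generated suite mirrors how users actually move through the application:

```python
import random

# Hypothetical usage model: weights say how often users take each transition.
USAGE_MODEL = {
    "home":      [("search", 0.7), ("browse", 0.3)],
    "search":    [("open_item", 0.6), ("search", 0.3), ("home", 0.1)],
    "browse":    [("open_item", 0.5), ("home", 0.5)],
    "open_item": [("home", 0.8), ("search", 0.2)],
}

def weighted_walk(model, start="home", steps=8, seed=None):
    """Generate one test path whose step mix follows the usage profile."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        next_states, weights = zip(*model[state])
        state = rng.choices(next_states, weights=weights)[0]
        path.append(state)
    return path

print(" -> ".join(weighted_walk(USAGE_MODEL, seed=7)))
```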
Ashwin Palaparthi,
Founder, TestersDesk.com