Exploratory Testing on Chat
Testing Google Talk is challenging -- we have multiple client implementations across the Google Talk client, the Google Talk Gadget, and Gmail chat, while also managing new features and ongoing development. We rely heavily on automation, yet there's still a need to do manual testing before the release of the product to the public.
We've found that one of the best ways to unearth interesting bugs in the product is to use Exploratory Testing (http://www.satisfice.com).
To do this, we start with the definition of a Test Strategy. This is where we outline the approach we are taking to testing the product as a whole. It's not super-detailed -- instead it mentions the overarching areas that need to be tested, whether automation can be used to test each area, and what role manual testing needs to play. This information lets developers and PMs know what we think we need to test for the product, and allows them to add unit tests, etc., to cover more ground.
Some basic test case definitions go into the Test Plan. The aim of the test plan (and any test artifacts generated) is not to specify a set of actions to be followed in a rote manner, but instead to provide a rough guide that encourages creative exploration. The test plan also acts as a virtual test expert, providing a framework under which exploratory testing can be executed effectively by the team. The plan decomposes the application into different areas of responsibility, which are doled out to members of the team in sessions of one working day or less. By guiding people's thinking, we cover the basics and the fuzzy cases, and avoid a free-for-all, duplication, and missed areas.
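The decomposition into day-or-less sessions could be represented in code. Here is a minimal sketch in Python -- the `Charter` structure, the round-robin assignment, and all area and tester names are hypothetical illustrations, not the Google Talk team's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """One exploratory-testing session: an area of responsibility,
    a tester, and a time box of at most one working day."""
    area: str
    tester: str
    hours: int = 8  # one working day or less

def assign_sessions(areas, testers):
    """Dole out product areas across the team, round-robin."""
    return [Charter(area, testers[i % len(testers)])
            for i, area in enumerate(areas)]

# Hypothetical product areas and testers, for illustration only.
areas = ["login/auth", "presence", "message delivery", "file transfer"]
charters = assign_sessions(areas, ["alice", "bob"])
for c in charters:
    print(f"{c.tester}: {c.area} ({c.hours}h)")
```

Keeping the assignment explicit like this is one way to avoid the free-for-all, duplication, and missed areas mentioned above: every area appears in exactly one charter.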
Finally, we get a status report from the testers every day that describes the testing performed that day, any bugs raised, and any blocking issues identified. The reports act as an execution of the "contract" and give us traceability, plus the ability to steer exploratory testing that has drifted from where we've determined we need to concentrate our efforts. We can use these status reports, along with bug statistics, to gauge the effectiveness of the test sessions.
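Gauging effectiveness from daily reports can be as simple as tallying results per area. A minimal sketch, assuming a hypothetical report format (the tuples and their field names are illustrative, not the team's real report schema):

```python
from collections import Counter

# Hypothetical daily status reports: (tester, area, bugs_found, blocked).
reports = [
    ("alice", "presence", 3, False),
    ("bob", "file transfer", 0, True),
    ("alice", "message delivery", 5, False),
]

bugs_per_area = Counter()
blocked_areas = []
for tester, area, bugs, blocked in reports:
    bugs_per_area[area] += bugs
    if blocked:
        blocked_areas.append(area)

print(bugs_per_area.most_common(1))  # area yielding the most bugs so far
print(blocked_areas)                 # areas with blocking issues to unstick
```

Even rough tallies like these make it visible when a session has gone off track, or when one area is producing far more bugs than the attention it's getting would suggest.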
This approach is fairly simple, but sometimes simple works best. Using this method has allowed us to make the best use of our test engineers and to maximize the effectiveness of each test pass. It has proven itself a fruitful approach, balancing the need for reporting and accountability with the agility of exploratory testing.
Are you using a tool to track all this ET?
"The aim of the test plan (and any test artifacts generated) is not to specify a set of actions to be followed in a rote manner, but instead a rough guide that encourages creative exploration." --- How do you measure this creative exploration?
Sachin
Hi Erik,
We use a bunch of internal tools to track the ET we're doing. These include bug tracking, code coverage tools, and test case management -- yes, we do store interesting cases we come upon in the course of ET. In Google Talk, we rely on a broad set of quality metrics rather than just a couple to indicate release-readiness.
Joel
Hi Joel,
At my work we use Session-Based Exploratory Testing, but we follow Bach's definition of a session (www.satisfice.com). Why use a day-long session? Do you make a report for each session?
Hi nacho,
For our team, it worked better to have a broader definition of a session, and to have a daily milestone. The beauty of this form of testing is the ability to divide the work into whatever chunks make sense for your implementation. We don't formally report each session, but we collate all of the daily reports into the work packet for the day. It would be interesting to hear how your company fares using the pure method of Session-based ET.
Joel
Hey Sachin,
We actually make a point of not explicitly measuring the creative exploration; rather, we use tools like metrics reporting, coverage analysis, and auditing to ensure the right level of testing is done in the right areas. I find that testers find better bugs this way, and developers are encouraged to do better unit testing as a result.
Joel
Hi:
Exploratory testing is one of the best techniques for finding bugs in any given product. I do a lot of exploratory testing and have found around 100 crashes (high-severity issues) and around 300 bugs over the last 3 years.
In exploratory testing we can design many real-world use cases. It can be used on any product that has an immediate impact on customers.
Thanks and Regards,
sarath
How do you do regression testing of captured ET test cases? Automated (Selenium) or manual?