I love the Testing on the Toilet series. Any chance of a return?
Oh haha, I enjoyed the metrics post too!
Yes... we will flush our queue of ToTT (Testing on the Toilet) very soon. We have some new folks converting them for external release (removing internal references and such).
Can you comment on which tools you use for testing, mainly for load and performance testing?
Good idea on perf tools. We're thinking about doing a YouTube video, but maybe we can do a short blog post about them sooner.
A quick comment on performance tools: we tend to use open-source tools, mostly JMeter, and have done so for the past couple of years. We are currently investigating a new tool called FunkLoad because we feel it will fit nicely with some of our more complex projects. I am working on a blog article about performance testing at Google.
What would happen to your metrics if you only wrote up internal defects that survived more than (n) hours/days after being found? The idea is that quick fixes are not recorded as defects (less overhead, more lean, more agile). Do you capture the development (JUnit) fixes?
Learned about DDT at NFJS today. Do y'all do that at Google?
Michael Bachman (author of this post) replies:
Thanks, Al, for the post. I can definitely see the benefit of reducing the overhead on a team instead of constantly tracking all the small defects that are one-offs in production. These one-offs are typically filed as low-priority defects. We do have categories and priorities for bugs that are more widespread, both in the number of customers affected and in how long they have been aging in production.
I believe that if we have a way to categorize both large and small defects, we can (at least for now) de-prioritize some of the small ones and focus on fixing the larger ones. This avoids the overhead of managing all the bugs, while keeping both still builds a good knowledge base of all defects for possible future use, just in case the same issue pops up again and information about the previous defect and its fix could really help the situation.
So, in short, I definitely agree that trying to track and manage every defect will slow a team down, but with good triage and defect-management processes, everything can be logged and only the critical bugs escalated.
Regarding JUnit or other unit-test fixes, when those unit tests are submitted we do associate the check-in in our source control system with the defect it resolves (thus closing the loop on the fix). Not all defects are specifically regressed and verified by the test team; some are easily caught, tested, and pushed with just a solid unit test, as in the sketch below.
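To make that concrete, here is a minimal, hypothetical sketch of the test side of such a check-in (the class, defect number, and fixed behavior are all invented for illustration): a JUnit 4 regression test submitted together with the fix, with the defect ID referenced in the test name so the change can be linked back to the bug it closes.

    // Hypothetical sketch only: the normalize() helper stands in for the
    // production code that was fixed; in practice it would live in its own
    // source file, and the commit/changelist description would reference
    // the defect ID so the tracker links the fix, the test, and the bug.
    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class UrlNormalizerRegressionTest {

      // Stand-in for the fixed production code.
      static String normalize(String url) {
        String trimmed = url.trim();
        // The (hypothetical) defect: trailing slashes used to be kept,
        // producing duplicate cache keys. The fix strips them.
        while (trimmed.endsWith("/")) {
          trimmed = trimmed.substring(0, trimmed.length() - 1);
        }
        return trimmed.toLowerCase();
      }

      @Test
      public void normalizeStripsTrailingSlash_regressionForIssue1234() {
        // Before the fix these inputs mapped to different keys.
        assertEquals("http://example.com", normalize("http://example.com/"));
        assertEquals("http://example.com", normalize(" HTTP://EXAMPLE.COM// "));
      }
    }

With a test like this submitted in the same check-in as the fix, the defect can be closed with both the code change and its guard against regression in one place.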
I don't have a direct question related to this article; however, I would like to know about test management tools, apart from Quality Centre, that we can use to manage test cases, requirements, defects, etc., and from which we can also pull reports.