Testing Blog

The plague of repetitiveness

Wednesday, June 24, 2009
Labels: James Whittaker

12 comments:

  1. Flash Sheridan, June 24, 2009 at 7:56:00 PM PDT

    I mostly agree, especially about Beizer’s Pesticide Paradox and the false sense of thoroughness. (At a previous employer, I twice saw large test suites for connected products which would mostly pass even when disconnected.)

    But I do disagree with making no assumptions about what the developers did. To paraphrase and extend what you’ve said elsewhere, “You’ve got to know the territory.” The most important part of the territory is the customers’ needs and habits, of course; but it’s worth knowing developers’ strengths and weaknesses as well, at least when deciding what to test first.

    To take one sadly rare example, I once had the privilege of working with a developer who was very good at the hard part of his job. So I focussed on the easy, obvious stuff, in this case going down the feature list and turning it into high-level test cases. Sure enough, he’d let a couple of obscure missing features slip through. Obviously I was obliged to have some test cases for the hard stuff as well, but in retrospect I should have spent less time on them and more on the easy stuff.

    More mundanely, the Eighty/Twenty Rule applies to developers with a vengeance: Areas owned by your worst developers deserve a lot more attention than areas covered by the best. (One of static analysis’s many advantages is that it can make this painfully apparent.)

    Obviously you shouldn’t go overboard in following this advice. Even the best developers and the most solid areas of code need a reasonable amount of coverage, and your best developer may be replaced by somebody who needs careful scrutiny. But testing in a fallen world is all about prioritizing, and one (but only one) of the factors in deciding what to test soonest and most is whatever you can learn about your developers. Asking them their opinions is always worthwhile, but it’s not the only means of communication. I’ve gotten some of my best test cases from internal chat rooms, for instance. And watching their check-in comments can be informative as well as amusing.

  2. Simon Morley, June 25, 2009 at 4:45:00 AM PDT

    Hi,

    Agree with the pesticide principle. This is a real danger where test suites are built up over time, extended or modified without some kind of objective re-analysis.

    It's true that shaking up the order and input data will make a few bugs fall out of the tree, but more is needed.
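
    For illustration, a minimal sketch in Python of shaking up execution order and input data while recording the random seed so a failure can be replayed; the tests and inputs here are hypothetical.

    import random

    def run_suite(tests, seed):
        rng = random.Random(seed)
        ordered = list(tests)
        rng.shuffle(ordered)                 # vary execution order on every run
        for test in ordered:
            test(rng.randint(-1000, 1000))   # vary input data on every run

    # Hypothetical example tests.
    def test_string_roundtrip(value):
        assert int(str(value)) == value

    def test_absolute_value(value):
        assert abs(value) >= 0

    if __name__ == "__main__":
        seed = random.randrange(2**32)
        print("running with seed", seed)     # log the seed so failures can be replayed
        run_suite([test_string_roundtrip, test_absolute_value], seed)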

    A periodic re-assessment is required. These are costs that need to be taken into account when the longevity of a test case is considered (during the test design phase): if the test case is going to be useful across several projects (versions of the product/solution), then what budget should be assigned for its periodic review?

    Agree with some other comments that going over some territory that the developers have covered is necessary.

    But the bigger picture is that the test phases should be complementary - so you can (up-front) estimate/guesstimate how much overlap the project needs (and what that will cost...)

  3. Joe, June 25, 2009 at 4:58:00 AM PDT

    "Even worse, all that so-called successful testing will give us a false sense of thoroughness and make our completeness metrics a pack of very dangerous lies."

    Well said!

  4. Justin Hunter, June 25, 2009 at 5:39:00 AM PDT

    James,

    Very good post.

    It brings to mind an analogy I like even more than "pesticide" regarding the dangers of repeatedly testing the same things in the same way time after time:

    “Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else’s footprints minimizes the chance of being blown up by a land mine.” - James Bach

    (Shared with a tip of the hat to Matt Archer, who recently included this memorable quote on his blog.)


    - Justin

    __________________
    Justin Hunter
    Founder and CEO
    Hexawise
    http://www.hexawise.com
    "More coverage. Fewer Tests."

  5. Anonymous, June 25, 2009 at 8:12:00 AM PDT

    One of the main issues with repetitive testing is not realizing that tests expire. It comes back to test design. Well-designed tests will have a better chance of being useful in the long run; poorly designed tests should never see the light of day. An iterative process is needed to reevaluate existing test cases and retire those not providing value.

    An approach to software automation implemented by a team I worked with was to automate tests for high-priority bugs. They had a robust automation suite after a couple of years. Each time a test failed, it was reevaluated to verify its validity.

  6. BuddhaStar, June 25, 2009 at 9:27:00 AM PDT

    This comment has been removed by the author.

  7. BuddhaStar, June 25, 2009 at 9:33:00 AM PDT

    I agree with the pesticide paradox. However, on most projects, not many resources are going to be put into testing once the application is in the maintenance phase. Developing new tests takes resources, and the priority is testing bug fixes and new features. In risk-based testing, testers focus on high-risk areas. In the Six Sigma strategy, we create Risk Priority Numbers for tasks, then work on the important tasks and pay little attention to the low-priority tasks. Nevertheless, IMHO, convincing people to devote more resources to testing in the maintenance phase of a project is a tough row to hoe.
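
    For reference, the Risk Priority Number in FMEA-style Six Sigma work is typically the product of severity, occurrence, and detection ratings. A minimal sketch of ranking test areas by RPN, with hypothetical areas and ratings:

    # RPN = severity x occurrence x detection; the areas and ratings are hypothetical.
    areas = {
        "payment flow":  {"severity": 9, "occurrence": 4, "detection": 6},
        "login":         {"severity": 8, "occurrence": 2, "detection": 2},
        "report export": {"severity": 4, "occurrence": 3, "detection": 3},
    }

    def rpn(ratings):
        return ratings["severity"] * ratings["occurrence"] * ratings["detection"]

    # Spend test effort on the highest-RPN areas first; low-RPN areas get little attention.
    for name, ratings in sorted(areas.items(), key=lambda item: rpn(item[1]), reverse=True):
        print(name, rpn(ratings))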

  8. Unknown, June 28, 2009 at 11:05:00 AM PDT

    I agree with your comments; we need to evaluate our test cases from time to time, otherwise they will grow and become difficult to maintain. In our case we also have thousands of cases that we run and maintain daily. Testers may sometimes defer to the importance of the developers' unit tests and ignore this area, but that brings up another popular rule: don't assume anything. In the end the blame falls on the tester alone, so it is difficult to leave any area untested.

  9. Maura, June 30, 2009 at 6:37:00 PM PDT

    Hmmm - I would also venture to add to the pesticide example that there's a problem with testers using pesticides that don't target the bugs they are looking to find and kill.

    I can't tell you how many times I've seen people running tests that do not test what they are claiming they will.

  10. Regi John, July 5, 2009 at 8:58:00 PM PDT

    It's not that the bugs grow immune, but that the test technique has already cleared the area to which it has been applied.

    What now needs to happen is to continue to clear the other areas in the product.

    If a technique is not exposing new bugs, then one needs to dig into the quiver to pull out and try the next technique. Better yet, if you have the resources, use a multitude of techniques concurrently. You can clear wider swaths that way.

    And hey, if you've tried everything in your repertoire and are still not finding bugs, then maybe it's time to ship? :)

  11. Raghu, July 5, 2009 at 11:36:00 PM PDT

    Hi James, nice to hear you again :-)

    Even though I completely agree with keeping test suites updated, whenever I make an effort in this direction (especially with quite old suites) I run into a contrasting thought: say the suites have been updated and the 'new' test cases find 'new' bugs in the product under validation. But these so-called 'new' bugs are really 'base code' bugs: conceptually they are present in the previously shipped products as well, since the previous product's code is the base on which the new features are being added, and no customer has ever complained about such old-base bugs. So should the dev team fix them now, and should the test team even worry about finding them now? This situation is probably more common in a product-based company or work environment.

    Given that every fix is an expense, do we take an economic view and not go looking for such bugs (in other words, stop updating the test suites once they are 'stable'), or do we take a purist approach and find all the bugs we can, regardless of whether they will be fixed? Any thoughts, please?

  12. Peter Williams, July 11, 2009 at 7:50:00 PM PDT

    You can measure the ratio of bugs found by fixed and variant testing. I guess you do.

    When I have measured the fixed:variant bug-finding effectiveness, I have always found variant testing to be better at finding bugs on mature codebases.

    That seemed natural. The code paths and system states exercised by the fixed tests get debugged over time.
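
    A minimal sketch of the fixed:variant measurement described above, using hypothetical bug records:

    # Hypothetical bug records; "found_by" marks whether the finding test was
    # a fixed (scripted, unchanging) test or a variant (randomized/exploratory) one.
    bugs = [
        {"id": 101, "found_by": "fixed"},
        {"id": 102, "found_by": "variant"},
        {"id": 103, "found_by": "variant"},
        {"id": 104, "found_by": "fixed"},
        {"id": 105, "found_by": "variant"},
    ]

    fixed = sum(1 for bug in bugs if bug["found_by"] == "fixed")
    variant = sum(1 for bug in bugs if bug["found_by"] == "variant")
    print("fixed:variant bug-finding ratio =", fixed, ":", variant)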

    You know all that. So why the metaphor?
