Testing Blog

Testing on the Toilet: Tests Too DRY? Make Them DAMP!

Tuesday, December 03, 2019
Labels: Derek Snyder, Erik Kuefler, TotT

11 comments:

  1. Joshua, December 4, 2019 at 11:19:00 AM PST

    I get what the author is going for here, but I don't totally agree with them.

    I think the easiest and cleanest approach is to move the list of users out of setup and into the test (as the author recommends), but personally I would keep the loops and reorganize the code a bit to make it easier to read. So, kind of a DRY-DAMP hybrid?

    Pseudocode:

    def register_list_of_users(self, user_list):
        to_return = []
        for user in self.users:
            this_user = self.forum.Register(user)
            to_return.append(this_user)
        return to_return
        # There also needs to be better error handling here in case Register throws an error

    def testCanRegisterMultipleUsers(self):
        user_list = ['alice', 'bob', 'jack', 'sean']
        registered_users = self.register_list_of_users(user_list)
        for user in registered_users:
            self.assertTrue(self.forum.HasRegisteredUser(user))


    1. Peter, December 30, 2019 at 10:40:00 PM PST

      It's hard to adapt your design heuristics to different situations, and every design heuristic trades off against another one.

      A developer should know how patterns/principles work and should (try to) understand the motivation behind them.

      Did you ever delete a failing test because you didn't understand it? Rotten production code cannot be deleted; tests can.

      I try to make it easy for the next junior in my team:
      I write SOLID code and DAMP tests.

  2. Denys, December 16, 2019 at 1:40:00 AM PST

    I'd rather stay with the DRY principle and combine it with a testing-framework feature like parameterized tests.

    1. Anonymous, February 7, 2020 at 11:34:00 AM PST

      My thoughts exactly. The second version of this test is no better than the first; it will still abort after the first failed assert statement, robbing us of half the test's value. Test methods should generally contain only one assert statement and be parametrized with all the values to test, NOT iterate through a list of values. This also makes it much easier to expand the test coverage later by adding more values to the parametrization.
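
      For concreteness, a sketch of what that might look like using absl's parameterized test runner (the Forum class and its Register/HasRegisteredUser methods are carried over from the post's example and assumed to exist):

      from absl.testing import parameterized

      class ForumTest(parameterized.TestCase):

          def setUp(self):
              super().setUp()
              self.forum = Forum()  # Forum is the hypothetical class from the post

          # Each username becomes its own test case with a single assert,
          # so a failure for 'bob' is reported without hiding 'jack' or 'sean'.
          @parameterized.parameters('alice', 'bob', 'jack', 'sean')
          def testCanRegisterUser(self, username):
              user = self.forum.Register(username)
              self.assertTrue(self.forum.HasRegisteredUser(user))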

  3. I'm, December 17, 2019 at 10:37:00 AM PST

    The unit test above tries to test the functionality of the Forum class, i.e., we add users to the forum and then check that those users are present. But in your example I have to look inside register_list_of_users to understand what exactly we are testing. You also get a chance to do something wrong inside register_list_of_users. Actually, you already did: you ignore the input parameter 'user_list' :).
    Or someone will refactor/expand register_list_of_users (which is reused by other tests) in a way that changes its initial semantics, and you'll lose the initial 'test idea' of your test case.

  4. Karlo Smid, December 25, 2019 at 2:46:00 AM PST

    Hi Derek and Erik, good to see that ToT is not dead! I have two questions:

    1. How often are new ToT episodes published on this blog? I suspect that Google still has a lot of unpublished ToT goodies.

    2. Have you ever considered the heuristic of deleting a test that fails due to code changes and just writing a new one?

    Thanks!

    Regards, Karlo.

  5. Anonymous, February 28, 2020 at 11:53:00 AM PST

    Thank you for sharing; however, I don't necessarily agree with this approach. The "mental computation" argument could apply to production code too, because it is a pain to follow the execution path, yet there are several reasons why refactoring and breaking up the code adds benefits. I would argue that Fowler's principle of "separation between intention and implementation" still applies to test code, and that it is really a mindset of how you approach reading code. Can you work with abstractions, or do you need everything lined up next to each other in order to trust that the code will function? Speed of readability is one thing; speed of maintenance is another.

    1. Anonymous, August 11, 2020 at 1:50:00 PM PDT

      But you're missing the point that hard-to-follow / very abstract code in production is tested by tests. And the reason we enforce DRY in production code is so that we don't duplicate bugs and so that refactoring is straightforward.

      Hard-to-follow code in tests has to be run in devs' heads, because that's the only way to check that tests are right. And when a test fails, we want to know what went wrong quickly. We don't want to have to figure out what the test is testing.

  6. Samantha Wong, March 4, 2020 at 2:08:00 AM PST

    I went to the comments section and was surprised to find that the majority of commenters disagreed with the DAMP approach.

    I agree wholeheartedly with the DAMP approach.

    Most people don't want to spend too much time understanding tests, or why a test is failing. Whatever makes the test more understandable and readable to the team is best.

    Perhaps the example above isn't extreme enough for readers to understand why leaning towards DAMP is better; it's still relatively easy to understand that a test is looping through a list of items.

    I think the authors (and I) are pushing for the case where, when you have to make a choice between DAMP and DRY, DAMP is probably better than DRY. Being succinct is generally good, but you shouldn't sacrifice readability just because you want to reuse code.

    I have seen situations where test code is more complex than the actual code itself, and developers spent 80% of their time fixing the failing tests, trying to understand why they failed, especially when the tests span many classes/files/names and you can't understand their relation to the function of the app.

    For example, in this particular list example, if one of the tests in the list failed, I want to know what is failing in terms of the behaviour of the app. I don't want to spend time counting through the list, and I don't really care what the index of this test in the list is. I just want to know, in relation to user behaviour on the app, what is failing.

    Sometimes DAMP and DRY are the same approach, i.e. reusing code is also what makes the test clearest, in which case, do both.
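
    For illustration, the inline shape being argued for might look like this (a sketch reusing the hypothetical Forum names from the post's example):

    def testCanRegisterMultipleUsers(self):
        # DAMP: each user is registered inline, so a failing assertion
        # points at a named scenario rather than a loop index.
        user1 = self.forum.Register('alice')
        user2 = self.forum.Register('bob')
        self.assertTrue(self.forum.HasRegisteredUser(user1))
        self.assertTrue(self.forum.HasRegisteredUser(user2))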

  7. Tim L, April 17, 2020 at 7:01:00 AM PDT

    The readability point here is really important. I'll admit up front that I am a fairly strong believer in the DAMP approach. There are limits to being too religious about either approach: you can definitely "under-abstract" test code in a way that is verbose, distracting, hard to maintain, and ultimately defeats the goal of making the code readable. But that's not the trend I've seen in the codebases I've worked on in my career. Increasingly, over-abstraction tends to cause both maintenance and readability pain.

    First, readability: your test cases are often your best API docs. You can execute them. You can debug them. Being able to step through the docs is like someone reading them to you and making sure you nod in understanding at each point. When it comes to storytelling and efficiency, a linear narrative is much easier to follow than someone reciting the movie Memento to you. Abstractions are detours that require head space to keep up with.

    Now, on maintenance: it often seems that less code is good code. But more important than how many keystrokes it takes is understanding what you're changing. If you can't grok it, you can't make the right changes. If you rush it, you introduce bugs. If you have to take many detours through test code to understand the structure of the tests, you need to keep what happened on all of those tangents in your head as you make changes. Worse, you need to understand who else might depend on the abstractions, called from within the test cases, that you need to change. "Test helpers" and "test utils" are some of the worst offenders against a competing pair of principles in code complexity: maximizing cohesion while minimizing coupling. I can't count the number of times I've been five modules deep into an hour-long adventure trying to rework test code for a breaking API change, wishing I could have mindlessly moved my hands over more redundant cases for 15 minutes while vegging out on a YouTube video.

    It is important not to treat either concept as religion. Where they don't conflict and you can be both DRY and DAMP, great. Often they conflict, and we want to be aware of the conflict, size it up, and weigh the pros and cons of code reuse against verbosity and repetition. On the entire spectrum of approaches, all the way to a very flat, repetitive, looks-like-"data" approach to testing, nothing is evil; they're just tradeoffs worth exploring. Perhaps as an oversteer from a history in an art (programming) where verbosity was previously embraced more out of necessity, I find that programmers are frighteningly willing to write off a less-than-DRY approach without considering the tradeoffs as they pertain to the particular situation.

  8. Unknown, May 30, 2020 at 8:40:00 AM PDT

    I couldn't agree more with DAMP; I get lost in over-abstracted unit tests. I can't even figure out the workflow of tests I wrote myself when they use too many test utils and helpers.


New comments are not allowed.

  
