def setUp(self):
  self.users = [User('alice'), User('bob')]  # This field can be reused across tests.
  self.forum = Forum()

def testCanRegisterMultipleUsers(self):
  self._RegisterAllUsers()
  for user in self.users:  # Use a for-loop to verify that all users are registered.
    self.assertTrue(self.forum.HasRegisteredUser(user))

def _RegisterAllUsers(self):  # This method can be reused across tests.
  for user in self.users:
    self.forum.Register(user)
I get what the author is going for here, but I don't totally agree with them.
I think the easiest and cleanest approach is to move the list of users out of setUp and into the test (as the author recommends), but personally I would keep the loops and reorganize the code a bit to make it easier to read, so kind of a combined DRY/DAMP approach?
pseudo code:
def register_list_of_users(user_list):
  to_return = []
  for user in self.users:
    this_user = self.forum.Register(user)
    to_return.append(this_user)
  return to_return

# There also needs to be better error handling here in case Register throws an error.

def testCanRegisterMultipleUsers(self):
  user_list = ['alice', 'bob', 'jack', 'sean']
  registered_users = register_list_of_users(user_list)
  for user in registered_users:
    self.assertTrue(self.forum.HasRegisteredUser(user))
It's hard to adapt your design heuristics to different situations, and every design heuristic trades off against another one.
A developer should know how patterns/principles work and should (try to) understand the motivation behind them.
Have you ever deleted a failing test because you didn't understand it? Rotten production code cannot be deleted; tests can.
I try to make it easy for the next junior in my team:
I write SOLID code and DAMP tests.
I'd rather stick with the DRY principle and use it together with testing-framework features like parameterized tests.
My thoughts exactly. The second version of this test is no better than the first; it will still abort after the first failed "assert" statement, robbing us of half the test's value. Test methods should generally contain only one "assert" statement and be parameterized with all values to test, NOT iterate through a list of values. This also makes it much easier to expand the test coverage later on by adding more values to the parameterization.
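For illustration, a parameterized version might look something like this (a sketch only; it assumes the third-party 'parameterized' package, plus the Forum and User classes from the example above):

import unittest
from parameterized import parameterized

class ForumTest(unittest.TestCase):

  def setUp(self):
    self.forum = Forum()

  # Each tuple becomes a separate test case with its own pass/fail result,
  # so one failing user no longer hides the results for the others.
  @parameterized.expand([
      ('alice',),
      ('bob',),
  ])
  def testCanRegisterUser(self, username):
    user = User(username)
    self.forum.Register(user)
    self.assertTrue(self.forum.HasRegisteredUser(user))

Adding coverage then means adding a tuple, not another loop iteration.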
The unit test above tries to test the functionality of the Forum class. I.e., we add users to the forum and then we check that those users are present. But in your example I have to look into register_list_of_users to understand what exactly we are testing. Also, you have a chance to do something wrong inside register_list_of_users. Actually, you already did: you ignore the input parameter 'user_list' :).
Or someone will refactor/expand register_list_of_users (which is reused by other tests) in a way that changes its initial semantics, and you'll lose the initial 'test idea' of your test case.
Hi Derek and Erik, good to see that ToT is not dead! I have two questions:
1. How often are ToT episodes published on this blog? I suspect that Google still has a lot of unpublished ToT goodies.
2. Have you ever considered a heuristic of deleting a test that failed due to code changes and just writing a new one?
Thanks!
Regards, Karlo.
Thank you for sharing; however, I don't necessarily agree with this approach. The "Mental Computation" argument could apply to production code too, because it is a pain to follow the execution path, but there are several reasons why refactoring and breaking up the code adds benefits. I would argue that Fowler's principle of "separation between intention and implementation" still applies to test code, and that it is really a mindset of how you approach reading code. Can you work with abstractions, or do you need everything lined up next to each other in order to trust that the code will function? Speed of readability is one thing; speed of maintenance is another.
But you're missing the point that hard-to-follow / very abstract code in production is tested by tests. And the reason we enforce DRY in production code is so that we don't duplicate bugs and so that refactoring is straightforward.
Hard-to-follow code in tests has to be run in devs' heads, because that's the only way to check that the tests are right. And when a test fails, we want to know what went wrong quickly. We don't want to have to figure out what the test is testing.
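For comparison, the inlined DAMP version reads top to bottom with no helper to chase (my reconstruction of the shape the post advocates, not its exact code):

def testCanRegisterMultipleUsers(self):
  # All the setup is inline, so a reader can verify the test without
  # jumping to setUp() or a helper method.
  user1 = User('alice')
  user2 = User('bob')
  self.forum.Register(user1)
  self.forum.Register(user2)
  self.assertTrue(self.forum.HasRegisteredUser(user1))
  self.assertTrue(self.forum.HasRegisteredUser(user2))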
Went to the comments section and was surprised to find the majority of commenters disagreed with the DAMP approach.
I agree wholeheartedly with the DAMP approach.
Most people don't want to spend too much time understanding tests, or why a test is failing. Whatever makes the tests more understandable and readable to the team is best.
Perhaps the example above isn't extreme enough for readers to understand why leaning towards DAMP is better. It's still relatively easy to understand that a test is looping through a list of items.
I think the authors (and I) are pushing for the case where, when you have to make a choice between DAMP and DRY, DAMP is probably better. Being succinct is generally good, but you shouldn't sacrifice readability just because you want to reuse code.
I have seen situations where the test code is more complex than the actual code itself, and developers spent 80% of their time fixing failing tests, trying to understand why they failed. Especially when the tests span many classes/files/names and you can't see their relation to the function of the app.
For example, in this particular list example, if one of the cases in the list failed, I want to know what is failing in terms of the behaviour of the app. I don't want to spend time counting through the list, and I don't really care what the index of the failing case is. I just want to know, in relation to user behaviour in the app, what is failing.
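To make that concrete, one option (my own sketch, not from the post) is unittest's subTest, which keeps the loop but labels each iteration so a failure names the user rather than an index:

def testCanRegisterMultipleUsers(self):
  self._RegisterAllUsers()
  for user in self.users:
    # subTest continues past a failure and reports the failing user
    # in the output, so there's no counting through the list.
    with self.subTest(user=user):
      self.assertTrue(self.forum.HasRegisteredUser(user))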
Sometimes DAMP and DRY are the same approach, i.e. reusing code is what makes you clearest. In which case, do both.
The readability point here is really important. I'll admit up front I am a somewhat strong believer in the DAMP approach. There are limits to being too religious with either approach. You can definitely "under-abstract" in test code in a way that is verbose, distracting, hard to maintain, and ultimately loses the point of making the code readable. But that's not the trend I've seen in the codebases I've worked on in my career. Increasingly, over-abstraction tends to cause both maintenance and readability pain.
First, readability: your test cases are your best API docs in many cases. You can execute them. You can debug them. Being able to step through the docs is like someone reading them to you and ensuring you nod your head in understanding at each point. When it comes to storytelling and efficiency, a linear narrative is much easier to understand than someone reciting the movie Memento to you. Abstractions are detours that require head space to keep up with.
Now on maintenance: it often seems that less code is good code. But more important than how many keystrokes it takes is understanding what you're changing. If you can't grok it, you can't make the right changes. If you rush it, you introduce bugs. If you have to take many detours in test code to understand the structure of the tests, you need to keep what happened during all of those tangential conversations in your head as you're making changes. Worse, you need to understand who else might depend on the abstractions you need to change that are called from within the test cases. "Test helpers" and "test utils" are some of the worst offenders against a competing pair of principles in code complexity: maximizing cohesion while minimizing coupling. I can't count the number of times I've been 5 modules deep in an hour-long adventure trying to rework test code for a breaking API change, just wishing I could have mindlessly moved my hands over more redundant cases for 15 minutes while vegging out on a YouTube video.
It is important not to treat either concept as religion. Where they don't conflict and you can be both DRY and DAMP, great. Often they conflict. And I think we want to be aware of the conflict, be able to size it up, and then weigh the pros and cons of code reuse against verbosity and repetition when it's worth it. On the entire spectrum of approaches, all the way to a very flat, repeating, looks-like-"data" approach to testing, nothing is evil. They're just tradeoffs worth exploring. Maybe as an oversteer from a history in an art (programming) where verbosity was previously embraced more out of necessity, I find that programmers are frighteningly willing to write off a less-than-DRY approach without considering the tradeoffs as they pertain to the particular situation.
Can't agree more with DAMP; I get lost in over-abstracted unit tests. I can't even figure out the workflow of tests I wrote myself if they use too many test utils and helpers.