Looks reasonable, but I think it's OK to deviate from this rule if a whole test file uses the same initial setup of a complex object (perhaps a partial initialization), and each test case then changes certain parameters (or fully initializes the system under test) and asserts on the resulting state.
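For illustration, here is a minimal pytest-style sketch of that deviation; the Order object, its fields, and the prices are invented for the example, not taken from the article:

```python
import pytest
from dataclasses import dataclass, field

@dataclass
class Order:  # stand-in for the "complex object"
    customer: str = "any-customer"
    items: list = field(default_factory=list)
    discount: float = 0.0

    def total(self, prices):
        subtotal = sum(prices[name] for name in self.items)
        return subtotal * (1 - self.discount)

PRICES = {"book": 10.0, "pen": 2.0}

@pytest.fixture
def base_order():
    # Shared partial initialization used by the whole file.
    return Order(items=["book"])

def test_total_without_discount(base_order):
    assert base_order.total(PRICES) == 10.0

def test_total_with_discount(base_order):
    base_order.discount = 0.5  # each test changes only the parameter it cares about
    assert base_order.total(PRICES) == 5.0
```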
I think what you save by hiding common code away is much less than what you lose looking for the cause. I like tests to be explicit about what happens; any jump I have to make in the code while analysing it is a bad smell. Sure, for the person who wrote the tests the interdependence in the code may be clear, but not for others who just want to know what the test is doing, so I always try to put the full test sequence into the test scope. I also think that test code should not have the complexity of the product code, because test code is a means of documenting the functional area of the SUT. If the test flow is hard to follow, it's barely useful. Just my opinion.

My test code uses the page object pattern and has only one abstraction layer. The test flow consists of nothing but calls to functions on the page object controller classes, so the controller code is exactly one jump away from the place of its usage. This plain architecture has already saved me a lot of maintenance work. The test code doesn't look cool and is a little dull, but I can live with that. Code is for people, not for machines.
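As a rough sketch of what I mean (the page name, the driver wrapper, and its get/type/click/text_of methods are invented for illustration; the driver is assumed to come from a fixture):

```python
class LoginPage:
    """Single abstraction layer: wraps raw UI interactions and nothing else."""

    def __init__(self, driver):
        self.driver = driver  # hypothetical wrapper around the browser

    def open(self):
        self.driver.get("https://example.test/login")

    def log_in(self, user, password):
        self.driver.type("username", user)
        self.driver.type("password", password)
        self.driver.click("submit")

    def error_message(self):
        return self.driver.text_of("error")

def test_login_with_wrong_password(driver):
    # The whole flow stays in the test; every step is one jump away in LoginPage.
    page = LoginPage(driver)
    page.open()
    page.log_in("alice", "wrong-password")
    assert page.error_message() == "Invalid credentials"
```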
It shouldn't contradict the advice given in this article.
As long as the logical cause and consequence are kept immediately visible in the test, it helps to move logically irrelevant boilerplate out into either a helper or some initialization code.
In a nutshell, I use the following rule to help me decide what code or data stays in the test method and what's out:
Relevant? In (don't worry about DRY). Irrelevant? Out.
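A tiny sketch of the rule; the helper and the names (make_person, can_vote) are hypothetical, not from the article:

```python
def make_person(**overrides):
    """Holds the boilerplate no single test cares about (irrelevant -> out)."""
    person = {"id": 119, "name": "Alice", "country": "CH", "age": 30}
    person.update(overrides)
    return person

def can_vote(person):  # stand-in for the unit under test
    return person["age"] >= 18

def test_seventeen_year_old_cannot_vote():
    person = make_person(age=17)  # the relevant input stays in the test
    assert not can_vote(person)

def test_eighteen_year_old_can_vote():
    person = make_person(age=18)
    assert can_vote(person)
```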
This is a solid recommendation. My primary concern is that, at our shop, putting inputs and effects next to each other would require a large amount of duplicated "setup code". I don't think this is feasible for codebases that have been around for a while.
A large amount of setup code is a design smell. Each test should mention only the pieces that are relevant to it, ideally exercising only one aspect of the unit under test. If setting up your unit for each test is cumbersome, your tests are trying to tell you that you should rethink your design.
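As a hedged before/after sketch of that point (the Store and ShippingCalculator classes are invented for illustration):

```python
# Before: even a shipping-cost test has to wire up a full Store.
class Store:
    def __init__(self, catalog, warehouse, payment_gateway, mailer):
        self.catalog = catalog
        self.warehouse = warehouse
        self.payment_gateway = payment_gateway
        self.mailer = mailer

    def shipping_cost(self, weight_kg):
        return 5.0 + 2.0 * weight_kg  # never touches the other collaborators

# After: the rule lives in a small unit that is trivial to set up per test.
class ShippingCalculator:
    def cost(self, weight_kg):
        return 5.0 + 2.0 * weight_kg

def test_shipping_cost_scales_with_weight():
    assert ShippingCalculator().cost(3) == 11.0
```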
I absolutely agree with this post. I've seen so many pieces of code where this happens... Every setup/assert value is tucked away in variables (and that alone makes it harder to read: assert result == personid instead of assert result == 119), and, as in this example, the declaration ends up miles away from the test code. IMO that is what happens when DRY, the guideline, becomes DRY, the law.
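A small sketch of the difference, assuming a hypothetical repository fixture whose add/find_by_name methods are invented for the example:

```python
PERSON_ID = 119  # imagine this declared "miles away" from the test

def test_lookup_indirect(repository):
    repository.add(person_id=PERSON_ID, name="Alice")
    result = repository.find_by_name("Alice")
    assert result == PERSON_ID  # the reader has to jump to learn what 119 is

def test_lookup_inline(repository):
    repository.add(person_id=119, name="Alice")
    result = repository.find_by_name("Alice")
    assert result == 119  # the expected value sits right next to the effect
```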