Just a note, but the normally used spelling would be "Dr. Jekyll and Mr. Hyde".
I think that the right answer is a healthy dose of both of your positions.
While it is true that a true "test first" development process is insanely valuable in producing better architecture, validating the tests themselves, and getting full coverage, it is not sufficient.
It is my view that one should put together a full unit test suite using TDD (100% code coverage: produce a failing test, then write the minimum code to make it pass), but your Dr. Jekyll & Mr. Hyde approach is very valuable at the higher level of functional/workflow/black-box/whatever-you-want-to-call-it testing.
Then you will have the best of both worlds. If we can agree that different types of testing are necessary to adequately qualify a piece of software, it doesn't seem too unreasonable to state that these different types of testing may lend themselves more closely to different approaches.
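The red/green TDD loop described above can be sketched in a few lines; the function and test here are purely hypothetical, not from any real project:

```python
# A minimal sketch of "produce a failing test, then write the
# minimum code to make it pass".

# Step 1 (red): this test is written first. Before slugify() existed,
# running it failed -- that failure is what drives the next step.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): the minimum production code that makes it pass.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()
print("green: test passes")
```

The point of the loop is that every line of production code exists to satisfy a test that was already seen to fail, which is what gives the "validating the tests themselves" benefit.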
I think all of those approaches make quite a lot of sense, because they tackle the problem of testing from different sides.
As usual, there's no silver bullet; some approaches may be better than others, but there's always some limitation that blocks adoption in some contexts.
In my opinion, the main problem is that software development has evolved quite a lot, but the same does not apply to software testing. We now have tools and methodologies that enable much higher complexity in software, but testing is about the same.
I think the only solution so far is to change the way software is developed, and to build testability into the design.
Then continue with the existing approaches as well, and also use the "Dr. Jekyll and Mr. Hyde" approach in addition, to ensure an even better layer of testing.
I think of unit tests as a way to fix some behavior in place, thereby allowing us to refactor or otherwise enhance our code, constantly making it better without the fear of breaking an already achieved goal. They're like source control for behavior :)
This is why I find the maxim adequate for unit tests.
For other types of tests a Mr. Hyde personality is definitely a must.
Truly, the best QAs I know possess devilishly twisted minds...
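The "source control for behavior" idea above can be sketched with a characterization test; the functions here are illustrative stand-ins, not from any real code base:

```python
# A characterization test pins current behavior so the implementation
# can be refactored without fear: both versions must satisfy it.

def total_v1(prices):
    # Original implementation: explicit loop.
    total = 0
    for p in prices:
        total += p
    return total

def total_v2(prices):
    # Refactored implementation: same fixed behavior, better code.
    return sum(prices)

# The pinned behavior -- any refactor must keep these passing.
for impl in (total_v1, total_v2):
    assert impl([1, 2, 3]) == 6
    assert impl([]) == 0
print("behavior preserved across refactor")
```

The assertions play the role of a "commit" of behavior: the refactor from v1 to v2 is safe precisely because the same tests pass before and after.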
I don't think it is always enough to just think about the test. Sometimes, writing the test is the only way to expose the limitations of the test harness and know what testability hooks are required.
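A rough sketch of how writing the test (not just thinking about it) exposes a missing testability hook; the functions and the injected-clock hook are hypothetical examples, not from the discussion above:

```python
import datetime

def greeting_untestable():
    # Hard to test: the dependency on "now" is buried inside, so any
    # test of this function is nondeterministic.
    hour = datetime.datetime.now().hour
    return "good morning" if hour < 12 else "good afternoon"

# Attempting to actually write a test for the function above is what
# exposes the limitation -- and suggests the hook: inject the clock.
def greeting(now=None):
    now = now or datetime.datetime.now()
    return "good morning" if now.hour < 12 else "good afternoon"

# Deterministic tests, possible only because of the injected hook.
assert greeting(datetime.datetime(2020, 1, 1, 9)) == "good morning"
assert greeting(datetime.datetime(2020, 1, 1, 15)) == "good afternoon"
```

Merely thinking about testability rarely surfaces seams like this; the failing (or unwritable) test does.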
I think it is very difficult to create testable code if you are not actually writing tests. It is very easy to add some functionality where it does not belong because it is just "a small fix".
Others think that code and tests should not be thought of as one at all, but should be treated independently, ideally as adversaries: "I don't want code and tests to be too 'friendly'. Production code should not be changed or compromised to make the testing easier, and tests should not trust the hooks put in the code to make it more testable."
---
I preach and practice the exact opposite. I'm a huge advocate of merging test and production code for a variety of reasons: we get maximal code reuse, our test utilities can use the exact same code the production services use to create and modify data, and it promotes ownership between test and development in both directions. The Test Engineers develop ownership of production code and the Software Engineers develop ownership of test code.
I regularly look at three continuous builds. One is owned by developers (it's almost always red), one is owned by test engineers (it's almost always red), and one is co-owned (when it breaks, people fix it immediately).
A coworker and I have recently embedded ourselves with a developer group (we're Software Engineers in Test). In two weeks I learned more about our code base than I had in the previous 12 months. Our developers also immediately started thinking about test solutions as engineering problems, not quality assurance problems.
And we were able to deploy a fake version of a server for testing and integration before the real server was written. Because we have embraced merging test and production code rather than running screaming from it, the transition from fake to real will be 100% seamless, and we won't lose a single bit of automation or coverage (in fact, both will increase), all because we've embedded test code in production services.
I'm now convinced: MERGE MERGE MERGE!
Matthew Bensley
SET @ Google in Mountain View, CA
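The fake-before-real pattern Bensley describes can be sketched roughly as follows; all class names here are hypothetical, not his actual services:

```python
# The fake and the eventual real client implement the same interface,
# so automation written against the fake today carries over unchanged
# when the real server lands.

class UserStore:
    """Shared interface, agreed on before the real server exists."""
    def get(self, user_id):
        raise NotImplementedError

class FakeUserStore(UserStore):
    """In-memory stand-in, deployable before the real server is written."""
    def __init__(self):
        self._users = {}
    def put(self, user_id, name):
        self._users[user_id] = name
    def get(self, user_id):
        return self._users.get(user_id)

# Automation written today against the fake...
store = FakeUserStore()
store.put(42, "ada")
assert store.get(42) == "ada"
# ...keeps working when a RealUserStore(UserStore) replaces it later.
```

The seamlessness comes from the shared interface: swapping the fake for the real implementation changes one constructor call, not the test suite.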
Certainly thinking about testing while coding would help developers avoid some of the basic pitfalls that a tester might try. But having the Dr. Jekyll and Mr. Hyde approach is tough for a single person, unless of course you have MPD! :)
I don't quite think that coding and testing should be done in isolation either, because there are a number of things a tester might think of testing only if he knows the logic behind the code. That's why I think the approach of pairing a tester with a developer combines these two approaches: 1. having two different personalities, and 2. developing the test cases and the code in the same environment.
Although we are using two resources and this could be expensive, I believe it's effective.
Surya Dodd
www.coroware.com
@Jesus Freak: Thanks for pointing out that I misspelled BOTH Dr. JekYll and Mr. HYde. I fixed it now. Nice catch.
What are your thoughts? Is it enough to think about testability when designing or writing the code, or must you actually write and run some tests in parallel with the code?
1) I do not expect everyone to use TDD, but I do expect them to deliver the same results as if they had. So you don't have to 'test first', but don't bother coming to me claiming the feature is 'done' when there wasn't time to write unit tests. Incidentally, I've never had anyone not use TDD and produce code/tests of the same quality.
2) Do not focus on TDD to the detriment of what it hopes to accomplish. If some other approach achieves these goals better than TDD for what you're working on, then abandon TDD for that work. Do not forsake your goals for a path to said goals.
-Mark
Regarding Bensley's post: you hit on a great point, which is that too many times the silos of development and testing (QA) do not intersect until it's 'too late'. I worked with a developer on a project, and as he coded, I actually put comments in the code on how to test the functionality. Those comments were followed during unit testing and then used as a guide for writing test cases during the integration portion of testing. It was a bit time consuming, but I believe it was worth the trouble because it saved time in bug reporting and bug fixing down the line.
Like many things, this is about people working together, and if you have a good level of DEV/QA cohesion, it goes A LONG way.
Thanks for sharing The Way of Testivus. I think the maxims are good except for "Think of the code and test as one." I think that's a path to disaster.
Consider @bensley's comment: "our test utilities can use the exact same code the production services use".
This is a great way to ensure that your test code and your production code have the same bugs. Yes, tests themselves can have bugs.
Notwithstanding this, there is a good point here. Here's my suggested replacement:
Think of code and test as two sides of a coin
You can't have a coin with just one side.
When writing the code, think of the test.
When writing the test, think of the code.
A coin with two heads is designed to cheat.
Don't copy from the code to the test.
Don't copy from the test to the code.
Also posted in my Vroospeak blog with a bit more detail.
It seems to me that the approach to testability does not matter much *if* the team follows a discipline of continuous code/test review and refactoring. On one hand, some developers are very productive as long as they do not have to write tests at the same time (bad habit?), and they can usually follow some basic testability practices to help test engineers afterwards (e.g. think twice before using final or static keywords in Java). On the other hand, some developers follow a TDD-like approach and get some extent of built-in testability. In both cases, review and refactoring can help with aligning code and test. What is the most efficient path? I guess it really depends on the teams, although I have a preference for TDD.
Having written that, I think pushing people to adopt a bipolar Jekyll/Hyde approach is extremely hard, even fruitless in general, if they are not in this mindset themselves. The dissonance between creative development and destructive testing is a stumbling block for many people out there. There is also a conflict of interest, where some developers will build tests *so that* the code execution lights up green.
It can be rather difficult for us Jekyll/Hyde (JH) types to integrate into a TDD organization. When designing a piece of code, particularly frameworks, I try to focus on the usability of that framework rather than its testability. One possible fallacy is to assume that if you do not have a TDD mentality you will not write rigorous tests. Most arguments seem to imply this: that TDD is required to generate proper and full testing coverage. Type JH developers become frustrated at the thought of being spoon-fed, essentially being told "don't forget to look both ways before crossing" by the org. Ahhh, but a JH type should be clever enough to "fake it"; from an external point of view there's no difference except for the rigor of the tests themselves, and that's when you can spot a Jekyll/Hyde type.