If you would like to work for a company that takes testing seriously, or if you like what you read, why not send me your resume? We are always looking for sharp, energetic people. misko@google.com
Isn't the method testPopTooSlowForVeryLargeSets() on KeyedMultiStackTest inconsistent? Depending on the machine's speed it could fail or pass.
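Something like this pattern is what I have in mind; just a sketch with made-up names, not the actual code from KeyedMultiStackTest:

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class WallClockAssertionSketch {

    // A test of this shape asserts on elapsed wall-clock time, so the
    // outcome depends on CPU speed, machine load, JIT warm-up, and GC
    // pauses rather than only on the code under test.
    @Test
    public void popOnVeryLargeSetStaysUnderArbitraryBudget() {
        long start = System.nanoTime();
        popVeryLargeSet(); // stand-in for the real work
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        assertTrue("took " + elapsedMs + "ms", elapsedMs < 200); // flaky threshold
    }

    private void popVeryLargeSet() {
        // placeholder for the operation whose speed is being asserted
    }
}
```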
Misko, good post.
What kind of tests take under 1ms? And if you've got 100 functions, are 20 tests per function enough to do something useful? Probably yes, but then who actually writes 20 tests per function? (In another post you talk about people not knowing how to write tests, that is, how to plan and design them.)
But you're right that speed is essential in the IDE. One way to turn slow into faster is to have tests run in the background on spare machine cycles. It's like ADSL, which works because most browsing is receiving and viewing, with much less need for sending data.
Related are static analysis tools, which must be fast in the IDE, and some of which can run in the background. I believe that Riverblade's Visual Lint add-on for Visual Studio and PC-lint does that.
Hi Misko,
Very nice post. I will be joining the Google Testing team next fall. I am really looking forward to it.
I'm very worried by this post. Is the author really suggesting that the value of a test is determined by its speed?
Surely the value of a test is determined by the information it provides? If it runs in a millisecond - fine. If it takes 24 hours - so be it.
Is the author suggesting tests should be time-constrained? If a good test takes too long, should it be refactored? Emasculated?
A long time ago I heard a senior test manager present at a conference, extolling the sophistication of his automated test regime (on a mainframe, no less). Countless tests ran every evening, unattended. Lots of clapping in the audience.
In the break I met a guy who worked for the presenter. He asked, "Do you want to know why the tests run so fast?" "Sure," I said. "We took out all the comparison checks."
In response to Paul Gerrard.
Misko is not suggesting time as the only measure. The rest of the post clearly describes requirements on the content, and gives an example which shows that the tests are doing what unit tests should really do -- promote refactoring. My limited experience with unit tests shows that just writing the tests is often enough to make you realize what's wrong with the code.
The requirement for speed is so that the test will run inside the IDE. 24 hours, or even 30 seconds, will not do if your IDE goes away with an hourglass while it's running the tests.
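To make "fast" concrete: the tests that stay in the microsecond range are the ones that exercise only in-memory objects, with no file system, network, or database in the loop. A made-up example (the class and method are hypothetical):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical example of the kind of test that stays well under a
// millisecond: it constructs a plain object, calls a pure method, and
// asserts on the result -- no file system, network, database, or sleeps.
public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscountAboveThreshold() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90, calculator.discountedPrice(100));
    }

    // Trivial class under test, inlined here so the sketch is self-contained.
    static class PriceCalculator {
        int discountedPrice(int price) {
            return price >= 100 ? price - price / 10 : price;
        }
    }
}
```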
But yes, you're right to have a healthy concern. Whether automated tests run in a millisecond or in several hours, they always have to be re-reviewed to see if they test something useful. Passing tests are especially dangerous -- everyone thinks green is great so they don't look at the tests to discover that features have moved on and the tests don't test anything.
In short, automated tests are like automatic shift (does anyone still drive stick like I do?): very nice and we take it for granted, but when you press on the gas you still have to look where you're going so you don't cause an accident.
In response to Paul:
I think that if you have a test that takes 30 seconds, it's important to evaluate what it's doing that causes it to take that long, and based on that, to make sure it is running at the right time.
At my previous employer, the standard unit tests took upwards of an hour (for less than 2000 tests). This discouraged developers from running them, which resulted in broken builds for everyone. When we worked on fixing this, mock objects provided most of the salvation. However, a good portion of the tests were really integration tests (what Misko calls large here), and by correctly identifying them as such, we were able to remove them from the standard suite, and instead run them nightly.
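For anyone wanting to make the same split explicit, JUnit 4.8 and later ship a Categories runner that can exclude the slow tests from the suite developers run in the IDE, while a nightly job runs everything. The class names below are made up for illustration:

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.ExcludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Marker interface used purely as a category label.
interface SlowIntegrationTest {}

// A plain fast unit test; it carries no category, so the fast suite keeps it.
class StackOperationsTest {
    @Test
    public void popReturnsLastPushedValue() {
        Deque<String> stack = new ArrayDeque<String>();
        stack.push("a");
        assertEquals("a", stack.pop());
    }
}

// A test that talks to real infrastructure gets tagged as slow integration.
class AccountRepositoryIT {
    @Category(SlowIntegrationTest.class)
    @Test
    public void savesAndReloadsAccountFromDatabase() {
        // ...would hit a real database, so it belongs in the nightly run
    }
}

// The suite developers run from the IDE excludes the slow category;
// the nightly build can run everything instead.
@RunWith(Categories.class)
@ExcludeCategory(SlowIntegrationTest.class)
@SuiteClasses({ StackOperationsTest.class, AccountRepositoryIT.class })
public class FastSuite {}
```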
I think what Misko is missing from this post is how these test classifications go hand in hand with scheduling tests.