Very interesting post. I will be looking into implementing something similar in our system. I have one question: if the servers are receiving requests, I assume they are running inside a service container. Starting up the likes of Tomcat, WebSphere, JBoss, etc. is very slow. How do you manage to include these tests in a continuous build environment? They must be slow.
Hi Dave, yes, the tests are large tests and certainly some of our slower tests. We run them in a "continuous build" in the sense of running them automatically at a regular frequency, such as every 15 minutes or half hour. Running them at every changelist is definitely expensive. But running every few changelists means a binary search between the last passed run and the failed run will help us track down the problem changelist fairly fast.
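The bisection idea above could be sketched roughly like this (the `run_hermetic_tests` callable is a hypothetical helper standing in for an actual test run at a given changelist):

```python
def find_culprit(changelists, run_hermetic_tests):
    """Return the first changelist at which the tests fail.

    `changelists` is ordered oldest-to-newest; the tests are assumed to
    pass before the culprit changelist and fail from it onward.
    `run_hermetic_tests(cl)` returns True if the tests pass at `cl`.
    """
    lo, hi = 0, len(changelists) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if run_hermetic_tests(changelists[mid]):
            lo = mid + 1  # still green here: culprit is later
        else:
            hi = mid      # red here: culprit is this one or earlier
    return changelists[lo]
```

With a run every few changelists, only O(log n) extra test runs are needed to narrow a red range down to the single offending change.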
Have you used this to isolate and find non-network-related performance bottlenecks in sub-systems as well? (I don't see it listed as a potential use case, as you were writing about end-to-end testing.) I think this would be a really useful thing to do, especially for finding performance issues related to the various subsystems in the SUT. For example, badly written stored procedures, deadlocks, paging, issues within the middleware, etc. could be detected quite early in the development cycle. I suppose the biggest problem would be packaging the hermetic server itself for testing. Could you list some issues you have faced in doing so?
Hi Bharath, you are right that hermetic servers can be used in performance tests! We use them for micro-benchmarking tests and have been able to catch performance regressions in servers very early with such tests. In addition to the points we mentioned in the article for packaging the hermetic server, performance tests do need additional hooks in the package. One of them is the ability to inject simulated latencies into the request/response times between servers. We have found that useful for modeling real servers better.
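One simple shape such a latency hook could take (all names here are hypothetical, not the article's actual implementation) is a stub that wraps an in-memory fake backend and sleeps for a configurable, seeded-random delay before delegating:

```python
import random
import time

class LatencyInjectingStub:
    """Wraps a fake backend and adds a simulated per-call delay."""

    def __init__(self, backend, min_ms=5, max_ms=50, seed=0):
        self._backend = backend
        self._min_ms = min_ms
        self._max_ms = max_ms
        # Seeded RNG so the injected latencies are reproducible in tests.
        self._rng = random.Random(seed)

    def call(self, request):
        # Sleep for a random delay within the configured band, then
        # delegate to the in-memory fake backend.
        delay_ms = self._rng.uniform(self._min_ms, self._max_ms)
        time.sleep(delay_ms / 1000.0)
        return self._backend(request)
```

Because the delay band and seed are explicit, a benchmark run against the hermetic server can model realistic RPC latencies while staying deterministic.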
I was talking about a simpler system, wherein code that gets checked in is automatically benchmarked and compared against its previous runs as well. Or, at a component/system level, perhaps the logged times for the various tasks get compared with one another. I don't believe any hooks into the code are required at all, although logging may require some debugging capability within the system. Just by comparing the times as the product is being built out, we can see how the feature/component/system performs as the code base becomes larger.
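A minimal sketch of that comparison step (hypothetical names; timings are assumed to come from per-task logs, keyed by task name in seconds) might look like:

```python
def find_regressions(previous, current, tolerance=0.10):
    """Return tasks whose time grew by more than `tolerance` (a fraction).

    `previous` and `current` map task name -> elapsed seconds from the
    last run and the current run respectively.
    """
    regressions = {}
    for task, new_time in current.items():
        old_time = previous.get(task)
        if old_time is None:
            continue  # task is new in this run, nothing to compare against
        if new_time > old_time * (1 + tolerance):
            regressions[task] = (old_time, new_time)
    return regressions
```

Running this after each build against the stored timings from the previous run gives exactly the "compare as the code base grows" signal, with the tolerance absorbing normal run-to-run noise.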
Nice post and good solution :)
Very insightful article.
This is a nice one. We can easily check the performance of tests at server scale, something we had no visibility into when using SQL queries and sub-query tasks!
If you are searching for a mock server solution, please have a look at this open-sourced tool: https://github.com/epam/Wilma