How do you mark a test failure as flaky? Do you have an automated/intelligent system that flags a test run failure as flaky or do you do it manually?
Very interesting. The replay technique sounds like an interesting alternative/variation to contract tests. I gave a presentation last week that makes very similar recommendations to this article: https://skillsmatter.com/skillscasts/8567-testable-software-architecture
Sounds like the consumer driven contracts idea implemented here: https://docs.pact.io - cool in theory, but hard to write readable tests for in practice.
Yes, pacts were a strong influence on what we did. However, we never went quite as far as they did, and cut out some of the features that make pacts very powerful in theory but hard to write in practice. Most importantly, instead of writing the contracts in code, we simply store the exchanged data as protocol buffers (https://developers.google.com/protocol-buffers/). That has the advantage of being far simpler, but it also restricts what contracts can do, since you have a "passive" contract instead of code that gets executed.
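To illustrate the "passive contract" idea, here is a minimal sketch of replaying a recorded exchange in a unit test. Plain Python dicts stand in for the protocol buffers used in practice, and all names (`ReplayStub`, `GetUser`, the recorded fields) are made up for the example:

```python
# Sketch of a "passive" contract: the exchanged data is recorded once
# against the real service, then replayed in unit tests. Plain dicts
# stand in for the protocol buffers mentioned in the comment above.

RECORDED_EXCHANGE = {
    "request": {"method": "GetUser", "user_id": 42},
    "response": {"name": "Ada", "active": True},
}

class ReplayStub:
    """Serves recorded responses instead of calling the real backend."""

    def __init__(self, exchange):
        self._exchange = exchange

    def call(self, request):
        # A passive contract can only match data; unlike executable
        # contract code, it cannot run arbitrary logic.
        if request != self._exchange["request"]:
            raise AssertionError(f"Unexpected request: {request!r}")
        return self._exchange["response"]

# In a unit test, the stub replaces the real service:
stub = ReplayStub(RECORDED_EXCHANGE)
response = stub.call({"method": "GetUser", "user_id": 42})
assert response["name"] == "Ada"
```

The trade-off is exactly the one described above: the stored data is simple to capture and read, but any request the recording has not seen simply fails to match.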
This is really interesting, thanks for sharing. Will you be open sourcing your tool?
Understood that integration testing is now carried out as part of unit testing. Just wondering, is functional testing also being covered as part of unit tests? Wouldn't functional testing require some of the E2E tests to be retained?
Yes, functional and system testing do require some E2E tests to be retained. But these tests do not have to run during the developer cycle, whereas basic integration tests are quite important in SOA systems that change rapidly.
Any chance this will be open sourced? We would love to contribute!
This is an interesting approach. We are in nearly the same situation, but at the beginning. Could you share how you solved connecting to databases (and other services with different protocols)? Do you start DBs for tests, or do you mock them as well? Another thing we need to cope with is the order of returned data. Some of our methods are allowed to return items in an array in random order (this random order originates from the DB without an order specification). Did you see such problems?
So, generally speaking, at Google DBs are also services that "speak" protocol buffers. But for most tests and languages, we also have very lightweight in-memory implementations that are more convenient to use where a DB is needed. For sorted/unsorted data in arrays, that's a common problem. In the final version we opted for always treating arrays as unsorted, so our matching algorithm just checks that each element (and its duplicates) occurs, but not in which position. In a previous version we tried to add a markup language to the stored data to modify the way things are matched, but in this particular case it turned out that unordered lists work well practically always.
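The unordered matching described above, checking that each element and its duplicates occur regardless of position, amounts to a multiset comparison. A minimal sketch (not the actual implementation, which matches protocol buffer fields) could use `collections.Counter`:

```python
from collections import Counter

def matches_unordered(expected, actual):
    """Return True if both lists contain the same elements with the
    same multiplicities, ignoring position."""
    # Counter builds a multiset, so duplicates are counted rather than
    # merely membership-checked.
    return Counter(expected) == Counter(actual)

# Order doesn't matter...
assert matches_unordered([1, 2, 2, 3], [3, 2, 1, 2])
# ...but element counts do.
assert not matches_unordered([1, 2], [1, 2, 2])
```

Note that `Counter` requires hashable elements; for recorded protocol buffer messages one would first need a canonical representation, such as serialized bytes.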
There have been several questions about open-sourcing the implementation of the library we built. There are currently no plans to do that. The two main reasons are:
* A lot of what we did in the implementation is Google specific. Once we split off the parts that make sense in open source, there wouldn't be much left.
* There are very good implementations of these principles out there that work well with common languages and OS stacks. For example https://docs.pact.io/.
Hi, thank you for this post. I agree with you that tight coupling and insufficient abstraction made unit testing very hard, and that as a consequence, a lot of end-to-end tests served as functional tests of that code. Very useful information.