To be clear, except for the "API Infrastructure Service", every piece in the final diagram is part of the testing framework being described here? That seems like an impressive amount of frameworking, but you are solving, rather elegantly, a fairly complicated problem set. In terms of the framework itself, would you be able to estimate how the work breaks down to build and maintain it? Is it 50% Test API, 25% Test Library, etc? Did you have to build the whole thing before you started writing tests, or were you able to write some tests with only some pieces in place, and then iterate towards completion, evolving the tests along the way? Sorry to badger you, but given the generally vast resources, both in machines and people, available at Google, I'm very curious about the process through which something like this would evolve. Thanks!
Hi Alec, I just updated the final diagram with improved color coding and labels. The boundaries of the SUT, test case, and test infrastructure components should be clearer now. The work broke down roughly as: 80% Test API, 15% Abstraction Library, and 5% Adapter Server. We were able to iterate on the development, which is always a good thing. The first iteration had a few basic API features working, an adapter for one language and one platform, the client abstraction library, and a single test. This became the proof of concept. We were happy with the initial results, so we decided to proceed with the design.
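A minimal sketch of that layering (all names here are invented for illustration, not the actual Google framework): a test case drives a client abstraction library, which forwards calls to a per-language/platform adapter that would, in a real setup, exercise the client library against the API service.

    # Hypothetical sketch of the layering described above; names are assumptions.
    class FakeJavaAdapter:
        """Stands in for an adapter server that drives one client library on one platform."""
        def call(self, method, **params):
            # A real adapter would invoke the platform's client library here.
            return {"method": method, "params": params, "status": "OK"}

    class ApiClient:
        """Client abstraction library: one interface routed to many adapters."""
        def __init__(self, adapter):
            self.adapter = adapter

        def create_resource(self, name):
            return self.adapter.call("create_resource", name=name)

    def test_create_resource():
        # The single proof-of-concept-style test: one adapter, one call, one check.
        client = ApiClient(FakeJavaAdapter())
        response = client.create_resource("demo")
        assert response["status"] == "OK"

    test_create_resource()
    print("proof-of-concept test passed")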
This was an excellent read. Such an elegant solution to a complex problem. Thank you Anthony!
How are test cases organized, in code or in some other format? And what about the output/logs?
Hi Li, The tests are organized in code with clear comments. At Google, we have an internal service that stores, queries, and provides a UI for looking over the results of all tests, including the log output.
Really cool. Thanks for sharing.
Do you simulate the APIs, for purposes of either functional or performance testing? In other words, are you able to make requests from the client library to a simulator, without accessing the live system? I'm curious to learn how you do that, if you do. Thanks.
In this case, it was a large end-to-end test, so nothing was simulated. Smaller tests (unit and integration) should use mocks and fakes.
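A minimal sketch of the "mocks and fakes for smaller tests" advice, using Python's standard unittest.mock (the api_client and get_user names are assumptions for illustration): the code under test depends on an API client interface, and the unit test substitutes a mock so no live service is ever contacted.

    from unittest import mock

    def fetch_display_name(api_client, user_id):
        """Code under test: formats a display name from an API lookup."""
        user = api_client.get_user(user_id)
        return user["name"].title()

    def test_fetch_display_name():
        # The mock replaces the real API client entirely.
        fake_client = mock.Mock()
        fake_client.get_user.return_value = {"name": "ada lovelace"}
        assert fetch_display_name(fake_client, user_id=42) == "Ada Lovelace"
        fake_client.get_user.assert_called_once_with(42)

    test_fetch_display_name()
    print("unit test with a mock passed")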
Thanks Anthony. We're looking to simulate APIs in order to test, for example, client code before the real API is ready. The simulator would examine the request and send an appropriate reply back to the client. This provides functional testing. We could also do performance testing on the client side by firing off huge numbers of requests, again without affecting the real server. It gets complicated when you consider the fact that the requests can be HTTP, REST, EJB, etc, and there are multiple ways of creating the simulators themselves (request/response, WSDL, ...). There are a number of vendor products that will do this, in a variety of ways, but I'm interested in learning how large corporations perform simulations. Google is about the best example I could think of, given their size and client API library. Do you know where I can get more information on these best practices? Thank you.
It really depends on the focus of your testing. Just client functional testing: fake the server. Full system functional/performance testing: real server. There is rarely good reason to load test a client in a client-server system, as clients represent a single node instance of the system.
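A sketch of the "fake the server" option for client-only functional testing, using Python's standard http.server (the /v1/resource path and the canned JSON reply are assumptions for illustration, not any particular product or Google practice): a tiny local server returns a fixed response so client code can be exercised without touching a live system.

    import json
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class FakeApiHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Canned reply; a fuller simulator would inspect path, headers, and body.
            body = json.dumps({"status": "OK", "path": self.path}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # keep test output quiet

    # Bind to an ephemeral port and serve in the background.
    server = HTTPServer(("localhost", 0), FakeApiHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client-side functional check against the fake, never the live system.
    url = f"http://localhost:{server.server_port}/v1/resource"
    with urllib.request.urlopen(url) as resp:
        assert json.loads(resp.read())["status"] == "OK"

    server.shutdown()
    print("client functional test against the fake server passed")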
Anthony, this blog topic seems hot to me, so I am surprised there is so little discussion activity. I believe there has to be some knowledge available to leverage. Our company has around 20+ services in the Amazon cloud. They interact with and depend on each other, and deployment, upgrades, etc. have become very difficult to execute. What would you suggest we do for the testing environment? Developing testing infrastructure like WTT (Microsoft) is not practical; it would take years to implement. Please advise.