This is the second of a two-part blog series titled 'Taming the Beast: How to Test AJAX Applications'. In part one we discussed some philosophies around web application testing. In this part we walk through a real example of designing a test strategy for an AJAX application by going 'beyond the GUI'.
Application under test
The sample application we want to test is a simple inventory management system that allows users to increase or decrease the number of parts at various store locations. The application is built using
GWT (Google Web Toolkit) but the testing methodology described here could be applied to any AJAX application.
To quickly recap from
part one, here's our recipe for testing goodness:
- Explore the system's functionality
- Identify the system's architecture
- Identify the interfaces between components
- Identify dependencies and fault conditions
- For each function
  - Identify the participating components
  - Identify potential problems
  - Test in isolation for problems
  - Create a 'happy path' test
Let's look at each step in detail:
1. Explore the system's functionality
Simple as this sounds, it is a crucial first step to testing the application. You need to know how the system functions from a user's perspective before you can begin writing tests. Open the app, browse around, click on buttons and links and just get a 'feel' of the app. Here's what our example app looks like:
The app has a NavPane for filtering the inventory by location. Users can list the number of items at each location, increase or decrease the balance for an item, and sort the list by office and by product.
2. Identify the architecture
Learning about the system architecture is the next critical step. At this point think of the system as a set of components and figure out how they talk to each other. Design documents and architecture diagrams are helpful in this step. In our example we have the following components:
- GWT client: Java code compiled into JavaScript that lives in the user's browser. Communicates with the server via HTTP-RPC.
- Servlet: a standard Apache Tomcat servlet that serves "frontend.html" (the main page) with the injected JavaScript, and also handles the RPC calls from the client-side JavaScript.
- Server-side implementation of the RPC stubs: the servlet dispatches the RPC-over-HTTP calls to this implementation. The RPCImpl communicates with the RPC-Backend via protocol buffers over RPC.
- RPC backend: deals with the business logic and data storage.
- Bigtable: for storing data
It helps to draw a simple diagram representing the data flows between these components, if one doesn't already exist.
In our sample application, the RPC-Implementation is called "StoreService" and the RPC-Backend is called "OfficeAdministration".
3. Identify the interfaces between components
Some obvious ones are:
- gwt_module target in Ant build file
- "service" servlet of Apache Tomcat
- definition of the RPC-Interface
- Protocol buffers
- Bigtable
- UI (it is an interface, after all!)
4. Identify dependencies and fault conditions
With the interfaces correctly identified, we need to identify dependencies and figure out input values that are needed to simulate error conditions in the system.
In our case the UI talks to the servlet which in turn talks to StoreService (RPCImpl). We should verify what happens when the StoreService:
- returns null
- returns empty lists
- returns huge lists
- returns lists with malformed content (wrongly encoded, null or long strings)
- times out
- gets two concurrent calls
In addition, the RPCImpl (StoreService) talks to the RPC-Backend (OfficeAdministration). Again we want to make sure the proper calls are made, and to see what happens when the backend:
- returns malformed content
- times out
- receives two concurrent requests
- throws exceptions
To achieve these goals, we will want to replace the RPCImpl (StoreService) with a mock that we can control, and have the servlet talk to the mock. The same is true for the OfficeAdministration - we will want to replace the real RPCBackend with a more controllable fake, and have StoreService communicate with the mock instead.
To get a better overview, we will first look at individual use cases and see how the components interact. An example would be the filter function in the UI (only items at locations that are 'checked' in the NavPane will be displayed in the table).
Analyze the NavPane filter
- GWT client:
  - Gets all offices from RPC
  - On select, fetch items with RPC. On completion, update table.
  - On deselect, clear items from table.
- StoreService (RPCImpl):
  - Gets all offices from RPC-Backend
  - Fetches all stock for an office from RPC-Backend
- OfficeAdministration (RPC-Backend):
  - Scan Bigtable for all offices
  - Query stock for a given office from Bigtable.
Our next step is to figure out the "smallest test" that can give us confidence that each of the components works as expected.
Test client-side behavior
Make sure that de-selecting a location removes its items from the table. For that, we need to know exactly which items will be in the list. A fake RPCImpl can guarantee just that, independent of other tests that might use the same datasource.
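To illustrate, a fake RPCImpl can be just a few lines of code. The interface and method names below are hypothetical stand-ins for the application's real RPC interface:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for the application's real RPC interface.
interface StoreService {
    List<String> getOffices();
    List<String> getStock(String office);
}

// Fake RPCImpl that returns fixed, predictable data, so UI tests
// know exactly which items will appear in the table.
class MockStoreService implements StoreService {
    @Override
    public List<String> getOffices() {
        return Arrays.asList("Zurich", "London");
    }

    @Override
    public List<String> getStock(String office) {
        if ("Zurich".equals(office)) {
            return Arrays.asList("Widget", "Gadget");
        }
        return Collections.emptyList();
    }
}
```

A UI test that deselects "Zurich" can now assert that exactly "Widget" and "Gadget" disappear from the table, no matter what the real datasource contains.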
The task is to make the Servlet talk to the "MockStoreService" as RPCImpl. We have different possibilities to achieve that:
- Introduce a flag to switch
- Use the proxy-pattern
- Switch it at run time
- Add a different constructor to the servlet
- Introduce a different build-target that links to the fake implementation
- Use dependency injection to swap out real for fake implementations
Any one of these options could do the job, depending on the application. Solutions like adding a new constructor to the servlet would make production code depend on test code, which is obviously a bad idea. Switching implementations at run time (using class-loader trickery) is also an option, but could expose security holes. Dependency injection offers a flexible and efficient way to do the same job without polluting production code.
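Stripped of framework details, dependency injection here means the servlet receives its StoreService through its constructor, typed against the production interface, so no test code leaks into production. A minimal sketch with hypothetical names:

```java
import java.util.List;

// Hypothetical stand-in for the application's real RPC interface.
interface StoreService {
    List<String> getOffices();
}

// The servlet depends only on the interface; production wiring passes
// the real RPCImpl, while tests pass a controllable fake.
class StoreServlet {
    private final StoreService service;

    StoreServlet(StoreService service) {
        this.service = service;
    }

    List<String> handleGetOffices() {
        return service.getOffices();
    }
}
```

A DI framework then reduces the swap to a binding declaration, e.g. in Guice roughly `bind(StoreService.class).to(MockStoreService.class)` inside a test module.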
There are various frameworks to allow this form of dependency injection. We want to briefly introduce GuiceBerry as one of them.
Test the StoreService (RPCImpl)
The methods in StoreService (RPCImpl) deserve thorough unit testing. If we have written a good set of unit tests, we probably already have a MockOfficeAdministration (RPC-Backend) that we can reuse for our further testing efforts.
The main value we can add here is to verify that each interface method in the StoreService (1) behaves as expected and (2) behaves correctly even in the face of communication errors with the RPC-Backend. By using a MockOfficeAdministration as RPC-Backend, we don't have to worry about setting up the data (plus injecting faults is easy!).
Besides testing the basic functionality, e.g.
- Are all the records that we expect retrieved?
- Are records that shouldn't be retrieved kept from the caller?
- Does the application behave correctly even if no records are found?
... we can now also look at
- Malformed or Unexpected data
- Too much data
- Empty replies
- Exceptions
- Time-outs
- Concurrency problems
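To make those error cases concrete, here is a sketch of a simplified RPCImpl that degrades gracefully when the backend fails. Both interfaces and the fallback behavior are assumptions for illustration, not the application's actual contract:

```java
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for the RPC-Backend interface.
interface OfficeBackend {
    List<String> getAllOffices() throws Exception;
}

// Simplified RPCImpl: shields callers from backend failures by
// translating any exception into an empty result.
class StoreService {
    private final OfficeBackend backend;

    StoreService(OfficeBackend backend) {
        this.backend = backend;
    }

    List<String> getOffices() {
        try {
            return backend.getAllOffices();
        } catch (Exception e) {
            // Degrade gracefully; a real implementation might log and retry.
            return Collections.emptyList();
        }
    }
}
```

With a MockOfficeAdministration that throws on demand, a unit test can assert that getOffices() never propagates the exception up to the servlet.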
How can we replace our real RPC-Backend with the mock? That shouldn't be all that difficult, as using an RPC mechanism already forced us to define interfaces for the server. All we need to do is implement a mock-RPC-Backend and run that instead. You might want to consider running the mock-RPC-Backend on the same machine as the tests, to make your tests run faster.
Some example test cases at this level are:
- Retrieve the list of all offices. Let the mock-RPC-Backend:
  - return no office
  - return 100 offices, 1 with malformed encoding
  - return 100 offices, 1 null
  - ...
  - throw an exception
  - time out
- Retrieve product / stock for an office. Let the mock-RPC-Backend return similar variations.
- Retrieve a product for an office. Let the mock-RPC-Backend block, and:
  - issue a second query for the same product at the same time (and to make it more interesting, play with the results that the mock could return!)
  - ...
Let's see what we have found out so far: We know that
- the UI works in isolation as expected
- the StoreService (RPCImpl) appropriately invokes the RPC-Backend-Service
- the StoreService (RPCImpl) properly handles any error-conditions
- a little bit about the app's behavior under concurrency
We don't know whether
- the RPC-Backend-Service really expects the behavior that the StoreService (RPCImpl) exhibits.
It is easy to see that we can do the same exercise for OfficeAdministration (RPC-Backend), possibly using a MockBigtable implementation. After that, we would know that:
- Backend correctly reads from Bigtable
- Business logic in the backend works correctly
- Backend knows how to handle error-conditions
- Backend knows how to handle missing data
We don't know whether
- Backend is used correctly, i.e. in the way it is intended to be used
Test the OfficeAdministration (RPC-Backend) and StoreService (RPCImpl)
Now let us verify the interaction between OfficeAdministration (RPC-Backend) and StoreService (RPCImpl). This is an essential task, and not really a difficult one. The following properties make testing at this level quick and easy:
- Easy to test (through Java API)
- Easy to understand
- Ideally contains all the business logic
- Available early
- Executes fast (MockBigtable is an option here)
- Maintenance burden is low (because of stable interfaces)
- Potentially subset of tests as for StoreService (RPCImpl) alone
Let's see what we have found out so far: We know that
- the UI works in isolation as expected
- the OfficeAdministration (RPC-Backend) and the StoreService (RPCImpl) work together as expected
We don't know whether
- The results find their way to the user
Last but not least ... system test!
Now we need to plug all the components together and do the 'big' system test. In our case, a typical setup would be:
- Manipulate the "real" Bigtable and populate it with "good" data for our test:
  - 5 offices, each with 5 products, each with a stock of 5
- Use Selenium (with the hooks) to:
  - Navigate via the NavPane
  - Exclude an item
  - Add an item
  - ...
We now know that all components plugged together can handle one typical use case. We should repeat this test for each function that we can invoke through the UI.
The biggest advantage, however, is that we only need to look for communication issues between the three building blocks. We don't need to verify boundary cases, inject network errors, or test other failure modes, because we have already verified those earlier!
Conclusion
Our approach requires that we
- Understand the system
- Understand the platform
- Understand what can go wrong (and where)
- Start early with our tests
- Invest in infrastructure to run our tests (mocks, fakes, ...)
What we get in return is
- Faster test execution
- Less maintenance for the tests
- Shared ownership
- Early execution > early breakage > easy fix
- Shorter feedback loops
- Easier debugging / better localization of bugs due to fewer false negatives