Part 1 and Part 2 of this series provided how-tos and practical tips for creating acceptance tests for Web apps. This final post reflects on some of the broader topics surrounding our acceptance tests.
Aims and drivers of our tests
In my experience and that of my colleagues, there are clear drivers and aims for acceptance tests. They should act as ‘safety rails’, analogous to the crash barriers at the sides of roads, that keep us from straying too far from the right direction. Our tests need to ensure development doesn’t break essential functionality. They must also provide early warning, preferably within minutes of relevant changes being made to the code.
My advice for developing acceptance tests for Web applications: start simple, keep them simple, and find ways to build and maintain trust in your automation code. One of the maxims I use when assessing the value of a test is to think of ways to fool my test into giving erroneous results. Then I decide whether the test is good enough or whether we need to add safeguards to the test code to make it harder to fool. I’m pragmatic and realise that all my tests are imperfect; I prefer to make tests ‘good enough’ to be useful by embedding essential preconditions into the test. Preconditions should include checks for things that would invalidate the test’s assumptions (for example, that the logged-in account has administrative rights) and checks for the appropriate system state (for example, that the user is starting from the correct homepage and has several items in the shopping basket).
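As an illustration, here is a minimal sketch of how such preconditions might be embedded, using Selenium WebDriver from Python. The locators, the page title, and the ‘admin-menu’ element are hypothetical assumptions for illustration, not details of any real application:

    from selenium.webdriver.common.by import By

    def check_preconditions(driver):
        """Fail fast if the assumptions this test relies on do not hold."""
        # Assumption: the test starts from the shop's homepage.
        assert driver.title == "Example Shop - Home", \
            "Test must start from the homepage, got: " + driver.title
        # Assumption: the logged-in account has administrative rights,
        # indicated here by a hypothetical 'admin-menu' element.
        assert driver.find_elements(By.ID, "admin-menu"), \
            "Logged-in account lacks administrative rights"
        # Assumption: the shopping basket already contains several items.
        items = driver.find_elements(By.CSS_SELECTOR, "#basket .item")
        assert len(items) >= 2, "Expected several items in the shopping basket"

Running such checks at the start of a test converts a confusing downstream failure into an immediate, clearly labelled one.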
The value of the tests, and their ability to act as safety rails, depends on how rarely a failing test turns out to be a ‘false positive’. Too many false positives, and a team loses trust in its acceptance tests entirely.
Acceptance tests aren’t a ‘silver bullet.’ They don’t solve all our problems or provide complete confidence in the system being tested (real life usage generates plenty of humbling experiences). They should be backed up by comprehensive automated unit tests and tests for quality attributes such as performance and security. Typically, unit tests should comprise 70% of our functional tests, integration tests 20%, and acceptance tests the remaining 10%.
We need to be able to justify the benefits of the automated tests and understand both the return on investment (ROI) and the opportunity cost – the time we spend on creating automated tests is not available for other work, so we need to ask whether we could spend our time better. The intent here is to consider the effects and costs rather than produce detailed calculations; I typically spend a few minutes weighing these factors when deciding whether to create or modify an automated test. As code spends the vast majority of its time in maintenance mode, living on long after active work has ceased, I recommend assessing most costs and benefits over the life of the software. Opportunity cost, however, must be considered within the period I’m actively working on the project, as that’s all the time I have available.
Unlike tests for traditional web sites, where the content tends not to change once a page has loaded, tests for web applications need to cope with highly dynamic content that may change several times a second, sometimes in hard-to-predict ways, driven by factors outside our control.
As web applications are highly dynamic, the tests need to detect relevant changes, wait until the desired behaviour has occurred, and interrogate the application state before the system state changes again. Each test has a window of opportunity in which the system is in an appropriate state to query. The changes can be triggered by many sources: user input, such as a test script clicking a button; clock-based events, such as a calendar reminder that is displayed for one minute; and server-initiated changes, such as the arrival of a new chat message.
The tests can simply poll the application, looking for relevant changes until a timeout expires. However, if a test only looks for the expected behaviour, it may spend a long time waiting in vain when something goes wrong. We can improve both the speed and the reliability of the tests by also checking for signs of trouble, such as error messages.
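For example, here is a sketch of such a polling wait, using Selenium’s WebDriverWait in Python, that gives up immediately if a hypothetical error banner appears rather than waiting out the full timeout; the locators are assumptions:

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait

    def wait_for_result_or_error(driver, timeout=10):
        def result_or_error(drv):
            # Check for trouble first, so the test fails fast on errors.
            errors = drv.find_elements(By.CSS_SELECTOR, ".error-message")
            if errors:
                raise AssertionError("Application error: " + errors[0].text)
            # Hypothetical element that appears once the action completes.
            return drv.find_elements(By.ID, "result-panel")
        # Polls the condition until it returns a truthy value or times out.
        return WebDriverWait(driver, timeout).until(result_or_error)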
Browser-based UI tests are relatively heavyweight, particularly if each test has to start from a clean state, such as the login screen. Individual tests can take seconds to execute. While this is much faster than a human could perform the same steps, it’s much slower than a unit test (which takes milliseconds). There is a trade-off between optimizing tests by reducing the preliminary steps (such as bypassing the login screen by supplying an authentication cookie) and maintaining the independence of the tests, since the system or the browser may be affected by earlier tests. Fast tests make for happier developers, unless the test results prove to be erroneous.
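As a sketch of that shortcut, the snippet below pre-loads a session cookie instead of driving the login screen. The cookie name and the means of obtaining a valid session token are assumptions; both depend entirely on the application under test:

    def open_logged_in(driver, base_url, session_token):
        # Selenium requires visiting the domain before setting its cookies.
        driver.get(base_url)
        driver.add_cookie({"name": "session_id", "value": session_token})
        # Subsequent pages load as an already-authenticated user.
        driver.get(base_url + "/dashboard")

The cost is that the test no longer exercises, or re-establishes, a clean login state, so failures elsewhere can bleed into it.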
As with other software, automated tests need ongoing nurturing to retain their utility, especially when the application code changes. If each test contains the details of how to obtain information, such as an XPath expression that retrieves the count of unread email, then a change to the UI can affect many tests, each of which must be changed and retested. By applying good software design practices, we can encapsulate the ‘how’ and keep it out of the rest of our tests. That way, if the application changes, we only need to update how we get the email count in one place, instead of in every test.
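A minimal sketch of this encapsulation, in the style of a page object; the XPath expression is hypothetical:

    from selenium.webdriver.common.by import By

    class InboxPage:
        # The one and only place that knows *how* to find the unread count.
        UNREAD_COUNT = (By.XPATH, "//span[@id='unread-count']")

        def __init__(self, driver):
            self.driver = driver

        def unread_email_count(self):
            return int(self.driver.find_element(*self.UNREAD_COUNT).text)

Tests then express only the intent, for example asserting that InboxPage(driver).unread_email_count() equals 3; a UI change means editing the locator in one place.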
Practical tests
Lots of bugs are discovered by means other than automated testing – they might be reported by users, for example. Once these bugs are fixed, the fixes must be tested. The tests must establish whether the problem has been fixed and, where practical, show that the root cause has been addressed. Since we want to make sure the bug doesn’t resurface unnoticed in future releases, having automated tests for the bug seems sensible. Create the acceptance tests first, and make sure they expose the problem; then fix the bug and run the tests again to ensure the fix works. Antony Marcano is one of the pioneers of acceptance tests for bugs.
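Such a regression test might look like the following sketch; the bug number, the URL, and the behaviour being checked are invented purely for illustration:

    from selenium.webdriver.common.by import By

    def test_bug_1234_empty_basket_checkout(driver, base_url):
        # Written before the fix: this test should fail, exposing the bug,
        # and pass once the fix lands, guarding against regression.
        driver.get(base_url + "/checkout")  # reached with an empty basket
        message = driver.find_element(By.ID, "basket-empty-message").text
        assert "Your basket is empty" in message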
Although this article focuses on acceptance tests, I’d like to encourage you to consider creating smaller tests when practical. Smaller tests are more focused, run significantly faster, and are more likely to be run sooner and more often. We sweep through our acceptance tests from time to time and replace as many as we can with small or medium tests. The remaining acceptance tests are more likely to be maintained because we know they’re essential, and the overall execution time is reduced – keeping everyone happy!
Further information
A useful tutorial on XPath: http://www.zvon.org/xxl/XPathTutorial/General/examples.html
Google Test Automation Conference (GTAC) 2008: The value of small tests: http://www.youtube.com/watch?v=MpG2i_6nkUg
GTAC 2008: Taming the Beast - How to Test an AJAX Application: http://www.youtube.com/watch?v=5jjrTBFZWgk
Part 1 of this article contains an additional long list of excellent resources.