Very nice post!
I've also blogged on my experience in this area:
http://woftime.blogspot.com/2007/10/automated-acceptance-tests.html
Interesting reading.
ReplyDeleteI don't really agree when you suggest exposing internal APIs to UAT (user acceptance tests) or coupling tests to the database.
Schemas change almost as much as UIs do. If we allow UAT tests to read arbitrarily from the DB we will effectively be breaking any encapsulation we have put into our persistence components and other framework code.
There is also the fact that end-to-end UAT scripts are theoretically comprehensible by UI designers and customers. Coupling to the database would stop this.
You say not to test boundary conditions/edge cases through end-to-end UAT. I agree, you should use it to look for regressions of normal cases. In effect you are arguing for unit testing of APIs, which everyone should already be doing.
I think looking for races by multithreading tests is a good idea, but it applies to unit tests more readily than UAT.
Perhaps I've got the wrong end of the stick; could we have a more concrete example?
In Reply to "Return-Path" Markus says...
Thanks for your valuable comments. I think you have a few valid points there, but I don't agree with everything you said, either.
1) A schema change is a far bigger deal to everyone involved in system development than a change to the UI: there is usually a whole lot of code (that the dev teams own) that depends on the database schema. Additionally, the components that deal with the data store are usually finished earlier than the ones that manage the UI. Needless to say, you can also encapsulate DB dependencies in a layer of your testing framework (which is something we also need to do for UI automation).
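For illustration, a minimal sketch of what such a layer might look like (plain JDBC; the table and column names are made up for the example):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // A layer of the testing framework that hides the schema from the
    // individual test scripts. If the schema changes, only this class
    // has to be adapted, not every test that reads from the DB.
    public class AccountStore {
        private final DataSource dataSource;

        public AccountStore(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Reads the persisted balance for an account straight from the DB.
        public long balanceFor(String accountId) throws SQLException {
            try (Connection c = dataSource.getConnection();
                 PreparedStatement s = c.prepareStatement(
                     "SELECT balance FROM accounts WHERE account_id = ?")) {
                s.setString(1, accountId);
                try (ResultSet rs = s.executeQuery()) {
                    if (!rs.next()) {
                        throw new AssertionError("no row for account " + accountId);
                    }
                    return rs.getLong("balance");
                }
            }
        }
    }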
I don't think that our tests should be entirely 'black box'; a 'grey box' approach allows more valuable insights into the system under test. That might sometimes entail breaking encapsulation. However, encapsulation was not introduced to separate test code from the system under test, but to allow modules to keep 'secrets' from depending modules. That doesn't need to apply to tests. BTW, you can make the same point for the UI. Actions that deal with the UI should be encapsulated (as MVC patterns teach us), and we still need to drive the UI from the outside... the difference is only the frequency of change.
As you write, the end-to-end scripts are 'theoretically' comprehensible. In practice, what is comprehensible (if at all) is the DSL that is used for scripting. It is the responsibility of the DSL's designer how he implements the semantics of its commands, i.e. whether 'check balance' means invoking a function in the UI or doing a look-up in the database. That makes no difference to the user of the system.
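To make that concrete, a sketch of how 'check balance' could be given either semantics behind one DSL command (all names are invented for the example; the DB variant reuses the AccountStore layer sketched earlier):

    // The DSL exposes intent only; the DSL designer decides whether the
    // command drives the UI or looks the value up in the database.
    public interface BankingDsl {
        long checkBalance(String accountId);
    }

    // Hypothetical UI driver used by the UI-backed implementation.
    interface BalanceScreen {
        BalanceScreen open(String accountId);
        long readBalance();
    }

    class UiBankingDsl implements BankingDsl {
        private final BalanceScreen screen;
        UiBankingDsl(BalanceScreen screen) { this.screen = screen; }
        public long checkBalance(String accountId) {
            return screen.open(accountId).readBalance();   // goes through the UI
        }
    }

    class DbBankingDsl implements BankingDsl {
        private final AccountStore store;                   // DB layer from above
        DbBankingDsl(AccountStore store) { this.store = store; }
        public long checkBalance(String accountId) {
            try {
                return store.balanceFor(accountId);         // direct DB look-up
            } catch (Exception e) {
                throw new AssertionError(e);
            }
        }
    }

A script that says "check balance" reads exactly the same to the customer either way.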
For the storage system there is also an additional difference (sorry for the bad example; right now I don't have a better one that is fit for publishing). If you go through the UI cycle, how will you ever be sure that the new item has really been written back to the DB? Maybe it is just stored internally in a cache (I have seen that before). Maybe it was written to the DB, but not to the expected table? You might say this is OK, as long as the system reads from the correct table. But what if the same DB is used by different systems? The developer might not have been aware of it, and hence never have written a unit test for it.
In the latter case a change to the DB will break the test. True. But it will also break other systems that depend on the DB... so if encapsulation was not fully adhered to by the dev team (and there is evidence that the older your product, the more likely this is), our tests add important warning signs.
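A sketch of such a grey-box check, assuming a hypothetical UI-automation facade alongside the DB layer from above:

    // Run the scenario through the UI, then bypass the UI and assert against
    // the expected table. If the item only lives in a cache, or went to the
    // wrong table, this check fails where a pure UI round-trip would pass.
    public final class GreyBoxChecks {
        interface UiDriver {                        // hypothetical UI facade
            void deposit(String accountId, long amount);
        }
        interface BalanceReader {                   // e.g. the AccountStore above
            long balanceFor(String accountId) throws Exception;
        }

        static void assertDepositPersisted(UiDriver ui, BalanceReader db,
                String accountId, long amount) throws Exception {
            long before = db.balanceFor(accountId);
            ui.deposit(accountId, amount);          // the full UI cycle
            long after = db.balanceFor(accountId);  // read the table directly
            if (after != before + amount) {
                throw new AssertionError("write did not reach the expected table: "
                    + "expected " + (before + amount) + " but found " + after);
            }
        }

        private GreyBoxChecks() {}
    }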
I don't think that you usually 'unit-test' APIs. A unit test is just that: executing an encapsulated unit of code to make sure that it works. To achieve this, we often use techniques like mock objects together with dependency injection to get rid of external dependencies such as databases, third-party systems, ... In an integration-level API test, on the other hand, you will leave some of these dependencies in place, or inject faults into the mock behaviour, or ... I agree that this is not the classical UAT that you have in mind; still, it is something other than a typical unit test.
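Roughly, the difference looks like this (JUnit 4 and Mockito; the service and repository are invented for the example):

    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import org.junit.Test;

    public class TransferServiceTest {
        interface AccountRepository { long balanceOf(String id); }

        static class TransferService {
            private final AccountRepository repo;
            TransferService(AccountRepository repo) { this.repo = repo; }
            boolean canTransfer(String from, long amount) {
                return repo.balanceOf(from) >= amount;
            }
        }

        // Unit test: the DB dependency is mocked away via dependency injection.
        @Test
        public void sufficientBalanceAllowsTransfer() {
            AccountRepository repo = mock(AccountRepository.class);
            when(repo.balanceOf("ACC-1")).thenReturn(500L);
            assertTrue(new TransferService(repo).canTransfer("ACC-1", 100));
        }

        // An integration-level API test would construct TransferService with a
        // real repository instead, leaving the DB dependency in place, or keep
        // the mock but inject faults into its behaviour.
    }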
Mocking is one reason why it is sometimes hard to spot things like race conditions or memory leaks in unit tests. You make sure that the component works correctly, but you are not investigating whether it is used correctly. A higher-level API test can do that (as a UI test can; the question is only the cost of running and maintaining each of them).
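As a sketch, a higher-level check might hammer the real, unmocked component from many threads, which a single-threaded, mock-based unit test never does (the Counter API here is invented):

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ConcurrentApiCheck {
        interface Counter { void increment(); int value(); }  // hypothetical API

        static void hammer(Counter counter, int threads, int callsPerThread)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            CountDownLatch start = new CountDownLatch(1);
            for (int t = 0; t < threads; t++) {
                pool.execute(() -> {
                    try {
                        start.await();                         // maximise contention
                        for (int i = 0; i < callsPerThread; i++) {
                            counter.increment();
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
            start.countDown();
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            int expected = threads * callsPerThread;
            if (counter.value() != expected) {                 // lost updates => race
                throw new AssertionError("race detected: expected " + expected
                    + " increments but saw " + counter.value());
            }
        }
    }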
I agree with most of what is written here and like the approach to reducing time to find, fix and ultimately prevent QA issue impact. We take a similar approach from the developer's perspective. We get developers to write unit tests as close to the actor boundary (for use cases) as possible. Of course, if you have an MVC or similar architecture, this helps with the decoupling, but we still get as close as possible to the user boundary. One problem in avoiding the presentation layer is that there may be eventing mechanisms that occur as part of presentation. Additionally, one benefit is that this then occurs as part of the developer/continuous build and doesn't require explicit steps on the part of the QA team.
Anyway, overall, nice article, I'm in full agreement with you.
>>>>Scripting your manual tests this way takes far longer than just executing them manually.
I am curious to know about a (any) way in which scripting (I mean a machine-executable version of a manual test) takes LESS time than manual execution.
I believe that while it *typically* takes more time to script a manual test than merely to execute it, in some cases, if manual execution involves a great degree of observation of multiple parameters and analysis, scripting can actually pay off.
I am of the opinion that comparing the time and effort taken for scripting versus executing manual tests is highly dependent on the context and the testing mission.
Shrini
I for my part have a more pragmatic view of user acceptance tests: for one thing, not only the GUI is a target of user acceptance tests; APIs and service interfaces, as well as commonly used data formats, also have to be tested from this perspective. I also do not agree that it is useful to artificially expose interfaces if they, by themselves, do not provide any value to the customer. This should also be considered for the database as well as for other external resources: unless they are a shared resource (for example, a data model used by more than one project), they should be tested only through the interfaces that use them.
The other thing is, tests are tools: either to support and verify development, to validate functionality, or to help simulate exceptional conditions. As tools they have to match their purpose. So unit tests are useful as a safety net or as a design tool (TDD-like), up to component tests. Integration and system tests broaden the scope, and user acceptance tests validate the usability, whether for people or for other software systems. Every one of these tests should be automated as far as possible. It is true that scripting complex tests is cumbersome, but you have to balance that against the big advantage of automated tests: they are repeatable, which manual tests normally are not, and for that reason they can be easily measured.
Functional testing and user acceptance testing should be separated, because functionality by itself can be unacceptable to the user.
With your summary I can agree, with the exception that you should not artificially expose internal interfaces, and with the addition that testing runs through the whole project lifecycle, from analysis to deployment, and that different test types offer a unique view of the system (although most of them should be automated to whatever level is reasonably possible). And yes, testing has a lot in common with development, so most practices can be adapted.
I agree with every idea in the initial post. I recently gave a talk for our internal test organization and emphasized the exact same points.
Right now, our testing teams are really focused on UI testing, so our goal was to lower the cost of UI-script creation.
We were able to dramatically lower our scripting costs for one of our applications by doing two things:
+Implementing base classes for our test scripts (that manage environment info, data retrieval, etc.)
+Creating reusable screen objects to represent our application. This makes scripting incredibly fast and effective.
We created our own IE scripting tool in Java (because our dev teams are using Java) that allows incredible flexibility in how tests are constructed, and also allows very simple access to UI components: e.g. button("id=submit").click.
This approach has allowed us to crank out MANY scripts quickly, and also isolates application changes in our reusable layer, making maintenance of our scripts much easier.
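The screen-object idea, roughly (the Browser/Element facade below is an assumption standing in for their in-house IE tool):

    // A reusable screen object: tests express intent, and all locator details
    // live in one place, so a UI change touches only this class.
    public class LoginScreen {
        interface Browser {                 // stand-in for the in-house IE tool
            Element button(String locator);
            Element textField(String locator);
        }
        interface Element {
            void click();
            void type(String text);
        }

        private final Browser browser;

        public LoginScreen(Browser browser) { this.browser = browser; }

        public void loginAs(String user, String password) {
            browser.textField("id=username").type(user);
            browser.textField("id=password").type(password);
            browser.button("id=submit").click();   // same style as button("id=submit").click
        }
    }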
Now that we're getting our UI testing under control, our next step will be to work more closely with our dev teams to start "peeking under the covers," and looking for the right places to start testing the API.
Great post Markus!
> I figured out that a successful automation project needs:
> [...]
> to start at the same time as development
If possible, and usually for unit tests, I prefer to start with them BEFORE development :-) (a small sketch of what that looks like follows at the end of this comment)
Other points I would like to add:
- Good communication between project manager, development and testing. (e.g. new change requests)
> to use the same tools as the development team
- Better still if we also follow the rules and good practices of software development.
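For instance, in a test-first style the test is written before the production code exists (JUnit 4; the names and discount rule are invented for the example):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PriceCalculatorTest {
        // Step 1 (red): written before PriceCalculator exists; it pins down
        // the expected behaviour and fails until the code is implemented.
        @Test
        public void appliesTenPercentDiscountAtOrAboveThreshold() {
            PriceCalculator calc = new PriceCalculator(100);
            assertEquals(90, calc.priceFor(100));
        }

        // Step 2 (green): the simplest implementation that makes the test
        // pass, added only afterwards.
        static class PriceCalculator {
            private final int threshold;
            PriceCalculator(int threshold) { this.threshold = threshold; }
            int priceFor(int amount) {
                return amount >= threshold ? amount * 9 / 10 : amount;
            }
        }
    }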
Hi Markus,
Many good points. Congratulations!
I disagree on a few things (or I think that more explanation is needed):
- "execution is slow":
Tests are never fast enough; nevertheless, there are huge differences between tools here. Such a general statement is misleading.
- "test break for the wrong reasons" & "maintenance of the tests takes a significant amount of time"
Isn't this a sign of badly written scripts? It's a common practice for companies to use the "bad" developers (i.e. the guys who shouldn't touch the production code) to write tests. The result is that you often end up with bad tests.
- I'm missing something concerning the application's "testability". Good coordination between developers and testers helps to make the application easier to test, and therefore the tests easier to maintain.
Cheers,
Marc.
Couldn't agree more... I've been finding that many tests written against the domain layer run faster, are less prone to fragility, and expose more bugs.
Including the UI or persistence layer in the picture always muddies the waters a bit!
I think we need to distinguish between automated acceptance tests and integration tests.
When our tests access the domain model/business logic directly, without going through the fragile UI, they are integration tests.
When tests go through the UI by clicking on buttons and/or links (not to mention AJAX functionality), they are automated acceptance tests.
Integration tests definitely have some advantages, but we can't replace one with the other; I would rather say that we need both.
It's like unit and acceptance tests: they work on different levels, and only together are they beneficial for application testing in general.
The whole article is worth reading. Good example, and the points mentioned in the summary of a successful automation project at the end are helpful.
Sachin
The whole article is worth reading; great work. In particular, it is summarized in an excellent manner.
Sachin
This is the correct approach. This article clarifies the difference between Test Automation and Automated Testing. Although it is not an alternative to UI testing or its automation, it can reduce problems in manual testing and its automation.
Manual testing is a way to check how the system behaves under human interaction. Automated tests execute a sequence of actions without human intervention. This approach helps to eliminate human error and provides faster results. Since most products require tests to be run many times, automated testing generally leads to significant labor cost savings over time.
API testing is the right approach, specifically in an SOA, but if, during any transition, delegates are implemented or the UI is tightly coupled with the business logic, API testing is all but impossible. In such a case, we have to resort to conventional test methods and automation.
Regards,
Mandar Kulkarni
I really appreciate this post.
I run a test automation team, and have been struggling with this very distinction. I want my team to do real test automation, but most of the development organization, including the CTO, expects us to automate the existing tests.
You did an excellent job of laying out the differences. It's much clearer in my mind now what I need to communicate to the rest of the product development organization.
Very nice post!
I agree with most of what is written here and like the approach to reducing time to find, fix and ultimately prevent QA issue impact.
I had viewed another article on macrotesting http://www.macrotesting.com which had many more valid reasons and examples related to this. Thank you for the post, Markus Clermont...
How do you run a project multiple times in Microsoft Test Manager 2010?
Automation testing plays an important role in saving precious time when you are going through the testing process.
There are many companies offering automation testing to speed up your testing process as well.
It is good to have a quality article from your side, though.
Until my last project I was using mostly API-based unit testing (the kind you propose). But the last project allowed (forced) me to discover contract-based random integration tests.
The project was not designed to be fully compatible with unit tests (high coupling), thus unit-like tests were hardly an option here. UI-level tests were the only option to continuously measure system quality during development.
And they have been working very well, looking back over two years.
More on my experiences with DBC here:
http://blog.aplikacja.info/2012/03/whats-wrong-with-automated-integration-tests/
Hi, I agree with this post on test automation and software quality assurance. It is a great help that you describe test automation tools, and your API points are good. I know one great company that provides test automation services, but I really like your post. Nice job, keep it up.
Very nice post!
ReplyDelete"What to Automate?
• Automate those functionalities which have to be run repeatedly .
• Regression test suite is the best candidate for automation because whenever bugs gets fixed we have to run the test suite to verify that there is no impact on application due to those fixes.
When to Automate?
• Start automation when build get stabilized so that there is no need to make changes within the test scripts.
• Automate those functionalities which have to be verified in every new build. It provides more time to work on complex functionalities.
How to Automate?
There are many automation (Open source and licensed) tools available to automate the test cases such as QTP , Silk test , Selenium, tellurium etc. Automation tools provide us :
1). Record & Play: It helps to record the test cases for future use and run the same test script repeatedly whenever required.
2). Writing Scripts using VB scripting, Java, Perl etc.: Automation tools facilitate to write test scripts using scripting languages like VB scripting, Java, Perl etc. to automate test cases."
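For example, a hand-written script of the kind mentioned in point 2, using Selenium's Java API (the URL and element ids are placeholders, not from any real application):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // Unlike a record-and-play recording, a coded script can assert, loop,
    // and parameterise freely.
    public class LoginSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/login");
                driver.findElement(By.id("username")).sendKeys("tester");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("submit")).click();
                if (!driver.getTitle().contains("Dashboard")) {
                    throw new AssertionError("login did not reach the dashboard");
                }
            } finally {
                driver.quit();
            }
        }
    }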
Should complexity be a factor in deciding what to automate? For example, consider a test scenario that takes a lot of time to execute manually but is retested only sparingly. Is that a good candidate for automation?
Thanks,
CloudQA
This is not so. Automated tests are automated tests, but there is more to QA. Half the people do not know how to write a test, or what proper testing is even called, etc.
On my website, at the link below, is the first part of an exercise in automated tests:
http://www.adrian-stolarski.pl/exercices/Example-unit-test.html
Is there any standard for defining the test data for automation?