Testing Blog

Automating tests vs. test-automation

Wednesday, October 24, 2007
Labels: Markus Clermont

24 comments:

  1. Renat Zubairov, October 25, 2007 at 3:46:00 AM PDT

    Very nice post!
    I've also blogged on my experience in this area

    http://woftime.blogspot.com/2007/10/automated-acceptance-tests.html

  2. Unknown, October 25, 2007 at 4:33:00 AM PDT

    Interesting reading.

    I don't really agree when you suggest exposing internal APIs to UAT (user acceptance tests) or coupling tests to the database.

    Schemas change almost as much as UIs do. If we allow UAT tests to read arbitrarily from the DB we will effectively be breaking any encapsulation we have put into our persistence components and other framework code.

    There is also the fact that end-to-end UAT scripts are theoretically comprehensible by UI designers and customers. Coupling to the database would stop this.

    You say not to test boundary conditions/edge cases through end-to-end UAT. I agree, you should use it to look for regressions of normal cases. In effect you are arguing for unit testing of APIs, which everyone should already be doing.

    I think looking for races by multithreading tests is a good idea, but it applies to unit tests more readily than UAT.

    Perhaps I've got the wrong end of the stick; could we have a more concrete example?

  3. Patrick Copeland, October 25, 2007 at 11:06:00 AM PDT

    In reply, Markus says...

    Thanks for your valuable comments. I think you have a few valid points there, but I do not
    agree with everything you said, either.

    1) A schema change is a far bigger deal to everyone involved in system development
    than a change to the UI: there is usually a whole lot of code (that the dev teams own)
    that depends on the database schema. Additionally, the components that deal with
    the data store are usually done earlier than the ones that manage the UI. Needless to
    say, you can also encapsulate DB dependencies in a layer of your testing framework
    (which is something we also need to do for UI automation).

    I don't think that our tests should be entirely 'black box'; a 'grey box' approach allows more
    valuable insights into the system under test. That might sometimes entail breaking encapsulation.
    However, encapsulation was not introduced to separate test code from the system under
    test, but to allow modules to have 'secrets' from the depending modules. This doesn't need
    to apply to tests. By the way, you can make the same point for the UI. Actions to deal with the UI
    should be encapsulated (as MVC patterns teach us), and we still need to deal with the UI
    from the outside... the difference is only the frequency of change.

    As you write, the end-to-end scripts are 'theoretically' comprehensible. In practice, what is
    comprehensible (if anything) is the DSL that is used for scripting. It is the responsibility of
    the designer of the DSL how he implements the semantics of the DSL commands, i.e. whether
    'check balance' means going to a function in the UI or doing a look-up in the database. That
    doesn't make a difference to the user of the system.

    For the storage system there is also an additional difference (sorry for the bad example; right
    now I don't have a better one that is fit for publishing). If you go through the UI cycle, how will
    you ever be sure that the new item has really been written back to the DB? Maybe it is just stored
    internally in a cache (I have seen that before). Maybe it was written to the DB, but not to the
    expected table? You might say this is OK, as long as the system reads from the correct table.
    But what if the same DB is used by different systems? The developer might not have been
    aware of it, and hence never wrote a unit test for it.

    In the latter case a change to the DB will break the test. True. But it will also break other systems
    that depend on the DB... so if encapsulation was not fully adhered to by the dev team (and
    there is evidence that the older your product, the more likely this is), our tests add
    important warning signs.
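    The grey-box check described here (verifying that a write really reached the expected table, rather than a cache or the wrong table) could be sketched like this. This is a minimal illustration using an in-memory SQLite store; the table and column names are hypothetical:

```python
import sqlite3

def check_item_persisted(conn, item_id, table="items"):
    # Grey-box assertion: the row must exist in the expected table,
    # not merely appear correct through the UI (which might read a
    # cache or a different table).
    row = conn.execute(
        "SELECT COUNT(*) FROM %s WHERE id = ?" % table, (item_id,)
    ).fetchone()
    return row[0] == 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE items_staging (id INTEGER PRIMARY KEY, name TEXT)")

# A buggy persistence layer writes to the wrong table; a pure UI
# round-trip could still pass if the UI happens to read that table too.
conn.execute("INSERT INTO items_staging (id, name) VALUES (1, 'widget')")

print(check_item_persisted(conn, 1))                         # False
print(check_item_persisted(conn, 1, table="items_staging"))  # True
```

    A helper like this would typically live in the DB-encapsulation layer of the testing framework mentioned above, so the test scripts themselves never touch SQL.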

    I don't think that you usually 'unit-test' APIs. A unit test is just that: executing an encapsulated unit
    of code to make sure that it works. To achieve this, we often use techniques like
    mock objects together with dependency injection to get rid of external dependencies like
    databases, third-party systems, and so on. In an integration-level API test, on the other hand, you will
    leave some of these dependencies in place, or inject faults into the mock behaviour, etc. I agree
    that this is not the classical UAT that you have in mind; still, it is something other than a typical
    unit test.

    Mocking is one reason why it is sometimes hard to spot things like race-conditions or memory-leaks
    in unit-tests. You make sure that the component works correctly, but you are not investigating whether
    it is used correctly. A higher level API test can do that (as a UI test can - the question is only the
    cost of running and maintaining each of them).

  4. Unknown, October 26, 2007 at 1:34:00 AM PDT

    I agree with most of what is written here and like the approach of reducing the time to find, fix and ultimately prevent the impact of QA issues. We take a similar approach from the developer's perspective: we get developers to write unit tests as close to the actor boundary (for use cases) as possible. Of course, if you have MVC or a similar architecture, this helps with the decoupling, but it still gets as close as possible to the user boundary. One problem with avoiding the presentation layer is that there may be eventing mechanisms that occur as part of presentation. Additionally, one benefit is that this then also runs as part of the developer/continuous build and doesn't require explicit steps on the part of the QA team.
    Anyway, overall, nice article, I'm in full agreement with you.

  5. Shrini Kulkarni, October 27, 2007 at 9:16:00 PM PDT

    > Scripting your manual tests this way takes far longer than just executing them manually.

    I am curious to know about any way in which scripting (I mean a machine-executable version of a manual test) takes LESS time than manual execution.

    I believe that while *typically* it takes more time to script a manual test than merely to execute it, in some cases it may take less, for example when manual execution involves a great degree of observation of multiple parameters and analysis.

    I am of the opinion that comparing the time and effort taken to script versus execute manual tests is highly dependent on the context and the testing mission.
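    One way to make this context-dependence concrete is a back-of-the-envelope break-even model: scripting pays off once the effort saved across repeated runs exceeds the one-time scripting cost. A minimal sketch, with purely illustrative numbers:

```python
import math

def break_even_runs(script_cost, manual_cost_per_run, auto_cost_per_run=0.0):
    # Number of executions after which scripting pays for itself.
    # All costs are in the same unit, e.g. person-hours.
    saved_per_run = manual_cost_per_run - auto_cost_per_run
    if saved_per_run <= 0:
        return None  # automation never pays back
    return math.ceil(script_cost / saved_per_run)

# Illustrative numbers only: scripting takes 8 hours, a manual run
# costs 0.5h, an automated run costs 0.05h of human attention.
print(break_even_runs(8, 0.5, 0.05))  # 18
```

    The model ignores maintenance cost and the value of faster feedback, which is exactly why the comparison remains so dependent on context and mission.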

    Shrini

  6. rmeindl, October 28, 2007 at 4:28:00 PM PDT

    I, for my part, have a more pragmatic view of user acceptance tests. For one thing, the GUI is not the only target of user acceptance tests; APIs and service interfaces, as well as commonly used data formats, have to be tested from this perspective too. I also do not agree that it is useful to artificially expose interfaces if they, by themselves, do not provide any value to the customer. This should also be considered for the database as well as for other external resources: if they are not a shared resource (for example, a data model used by more than one project), they should be tested only through the interfaces that use the resource.
    The other thing is, tests are tools: to support and verify development, to validate functionality, or to help simulate exceptional conditions. As tools they have to match their purpose. So unit tests are useful as a safety net or as a design tool (TDD-like), up to component tests. Integration and system tests broaden the scope, and user acceptance tests validate usability, whether for people or for other software systems. Every one of these test types should be automated as far as possible. It is true that scripting complex tests is cumbersome, but you have to balance that against the big advantage of automated tests: they are repeatable (manual tests normally are not) and can therefore be easily measured.
    Functional testing and user acceptance testing should be separated, because functionality by itself can be unacceptable to the user.
    I can agree with your summary, with the exception that you should not artificially expose internal interfaces, and the addition that testing runs through the whole project lifecycle, from analysis to deployment, with different test types offering a unique view on the system (although most of them should be automated to the level that is reasonably possible). And yes, testing has a lot in common with development, so most practices can be adapted.

  7. Unknown, October 29, 2007 at 9:18:00 AM PDT

    I agree with every idea in the initial post. I recently gave a talk for our internal test organization and emphasized the exact same points.

    Right now, our testing teams are really focused on UI testing, so our goal was to lower the cost of UI-script creation.

    We were able to dramatically lower our scripting costs for one of our applications by doing two things:

    +Implementing base classes for our test scripts (that manage environment info, data retrieval, etc.)

    +Creating reusable screen objects to represent our application. This makes scripting incredibly fast and effective.

    We created our own IE scripting tool in Java (because our dev teams are using Java) that allows incredible flexibility in how tests are constructed, and also allows very simple access to UI components: e.g. button("id=submit").click.

    This approach has allowed us to crank out MANY scripts quickly, and also isolates application changes in our reusable layer, making maintenance of our scripts much easier.

    Now that we're getting our UI testing under control, our next step will be to work more closely with our dev teams to start "peeking under the covers," and looking for the right places to start testing the API.
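    The reusable screen-object idea described above could be sketched as follows. This is a minimal, self-contained illustration; the fake driver stands in for the commenter's real IE scripting tool, and all names are hypothetical:

```python
class FakeDriver:
    # Stand-in for a real browser driver so the sketch is self-contained.
    def __init__(self):
        self.clicked = []

    def click(self, locator):
        self.clicked.append(locator)

class LoginPage:
    # Reusable screen object: test scripts talk to the page, not to raw
    # locators, so a UI change is fixed in exactly one place.
    SUBMIT = "id=submit"

    def __init__(self, driver):
        self.driver = driver

    def submit(self):
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).submit()
print(driver.clicked)  # ['id=submit']
```

    Scripts written against `LoginPage` keep working when the submit button's locator changes, which is what isolates application changes in the reusable layer.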

  8. Diego C, October 30, 2007 at 5:40:00 AM PDT

    Great post Markus!

    > I figured out that a successful automation project needs:

    > [...]
    > to start at the same time as development

    If possible, and usually for unit tests, I prefer to start with them BEFORE development :-)

    Other points I would like to add:

    - Good communication between project manager, development and testing. (e.g. new change requests)

    > to use the same tools as the development team

    - Better still if we also follow the rules and good practices of software development.

  9. Unknown, October 30, 2007 at 7:59:00 AM PDT

    Hi Markus,

    many good points. Congratulation!

    I disagree on a few things (or I think that more explanation is needed):

    - "execution is slow":
    tests are never fast enough; nevertheless, there are huge differences between tools here, so such a general statement is misleading.

    - "test break for the wrong reasons" & "maintenance of the tests takes a significant amount of time"
    isn't this a sign of badly written scripts? It's a common practice for companies to use the "bad" developers (i.e. the guys who shouldn't touch the production code) to write tests. The result is that you often have bad tests.

    - I'm missing something concerning the application's "testability". Good coordination between developers and testers helps make the application easier to test and therefore the tests easier to maintain.

    Cheers,
    Marc.

  10. James Carr, October 30, 2007 at 8:45:00 PM PDT

    Couldn't agree more... I've been finding that many tests written against the domain layer run faster, are less prone to fragility, and expose more bugs.

    Including the UI or persistence layer in the picture always muddies the waters a bit!

  11. Renat Zubairov, October 31, 2007 at 12:15:00 AM PDT

    I think we need to distinguish between automated acceptance tests and integration tests.
    When our tests access the domain model/business logic directly, without going through the fragile UI, they are integration tests.
    When tests go through the UI by clicking on buttons and/or links (not to mention AJAX functionality), they are automated acceptance tests.

    Integration tests definitely have some advantages, but we can't replace one with the other; I would rather say that we need both.

    It's like unit and acceptance tests: they work on different levels, and only together are they beneficial for application testing in general.

  12. Sachin Dhall, December 18, 2007 at 9:04:00 PM PST

    The whole article is worth reading. Good example, and the points mentioned in the summary of a successful automation project at the end are helpful.

    Sachin

  13. Sachin Dhall, December 18, 2007 at 9:06:00 PM PST

    The whole article is worth reading. Great work; in particular, it is summarized in an excellent manner.

    Sachin

  14. Mandar Kulkarni, January 3, 2008 at 2:14:00 AM PST

    This is the correct approach. This article clarifies the difference between test automation and automated testing. Although it is not an alternative to UI testing or its automation, it can reduce problems in manual testing and its automation.

    Manual testing is a way to check how the system behaves under human interaction. Automated tests execute a sequence of actions without human intervention. This approach helps to eliminate human error and provides faster results. Since most products require tests to be run many times, automated testing generally leads to significant labor cost savings over time.

    API testing is the right approach, specifically in SOA, but whenever delegates are implemented during a transition, or the UI is tightly coupled with the business logic, API testing becomes all but impossible. In such cases, we have to resort to conventional test methods and automation.

    Regards,
    Mandar Kulkarni

  15. Pete, January 8, 2008 at 10:35:00 AM PST

    I really appreciate this post.

    I run a test automation team, and have been struggling with this very distinction. I want my team to do real test automation, but most of the development organization, including the CTO, expects us to automate the existing tests.

    You did an excellent job of laying out the differences. It's much clearer in my mind now what I need to communicate to the rest of the product development organization.

  16. Anonymous, June 22, 2009 at 6:31:00 AM PDT

    Very nice post!
    I agree with most of what is written here and like the approach to reducing time to find, fix and ultimately prevent QA issue impact.
    I viewed another article at macrotesting (http://www.macrotesting.com) which had many more valid reasons and examples related to this. Thank you for the post, Markus Clermont.

  17. Unknown, February 6, 2012 at 5:04:00 AM PST

    How do you run a project multiple times in Microsoft Test Manager 2010?

  18. Unknown, July 11, 2012 at 12:13:00 AM PDT

    Automation testing plays an important role in saving precious time as you go through the testing process.

    There are many companies offering automation testing to speed up your testing process as well.

    It is good to have a quality article from your side, though.

  19. Dariusz Cieslak, July 19, 2012 at 3:12:00 PM PDT

    Until my last project I was mostly using API-based unit testing (the kind you propose). But my last project allowed (forced) me to discover contract-based random integration tests.

    The project was not fully designed to be compatible with unit tests (high coupling), so unit-like tests were hardly an option here. UI-level tests were the only option for continuously measuring system quality during development.

    And they have been working very well, looking back over two years.

    More on my experiences with DBC here:

    http://blog.aplikacja.info/2012/03/whats-wrong-with-automated-integration-tests/

  20. Unknown, July 31, 2012 at 11:17:00 PM PDT

    Hi, I agree with this post on test automation and software quality assurance; it is a great help regarding test automation tools, and your points on API services are good. I know one great company that provides test automation services, but I really like your post. Nice job, keep it up.

  21. QA Testing methodology, October 25, 2012 at 1:02:00 AM PDT

    Very nice post.
    "What to Automate?
    • Automate those functionalities which have to be run repeatedly.
    • The regression test suite is the best candidate for automation, because whenever bugs get fixed we have to run the test suite to verify that those fixes have no impact on the application.

    When to Automate?
    • Start automation once the build has stabilized, so that there is no need to keep changing the test scripts.
    • Automate those functionalities which have to be verified in every new build. This provides more time to work on complex functionalities.

    How to Automate?
    There are many automation tools (open source and licensed) available to automate test cases, such as QTP, Silk Test, Selenium, Tellurium, etc. Automation tools provide us with:
    1) Record & Play: this helps to record test cases for future use and run the same test script repeatedly whenever required.
    2) Writing scripts using VB scripting, Java, Perl, etc.: automation tools make it possible to write test scripts in scripting languages like VB scripting, Java, or Perl to automate test cases."
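    A scripted (as opposed to record-and-play) regression suite can be as simple as a table of cases replayed against the function under test. A minimal sketch, with a hypothetical `normalize` function standing in for the system under test:

```python
def normalize(s):
    # Hypothetical function under test: collapse whitespace, lowercase.
    return " ".join(s.split()).lower()

# Regression cases: inputs paired with the outputs that must not change
# after a bug fix.
REGRESSION_CASES = [
    ("Hello  World", "hello world"),
    ("  trim me ", "trim me"),
    ("MiXeD", "mixed"),
]

def run_suite():
    # Returns the failing cases; an empty list means no regressions.
    return [(inp, expected, normalize(inp))
            for inp, expected in REGRESSION_CASES
            if normalize(inp) != expected]

print(run_suite())  # []
```

    Unlike a recorded script, the case table is data: adding a regression check after a bug fix is a one-line change, not a re-recording session.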

    1. Unknown, December 16, 2015 at 3:47:00 AM PST

      Should complexity be a factor in deciding what to automate? For example, consider a test scenario that takes a lot of time to execute manually and is retested sparingly. Is that a good candidate for automation?

      Thanks,
      CloudQA

  22. Unknown, February 13, 2013 at 4:19:00 PM PST

    This is not so. Automated tests are automated tests, but there is more to QA. Half the people do not know how to write a test, or what proper testing even means, etc.

    On my website, at this link is the first part of the exercise of automated tests:
    http://www.adrian-stolarski.pl/exercices/Example-unit-test.html

  23. Vishwa, November 25, 2014 at 1:40:00 AM PST

    Is there any standard for defining the test data for automation?
