Testing Blog

Tuesday, June 14, 2011
Labels: John Penix, Mark Ivey, Pooja Gupta

14 comments:

  1. John, June 14, 2011 at 3:00:00 PM PDT

    This is a great concept, and one that I've had a lot of questions about in my test role in a CI-focused team.

    Two specific questions:
    1) How well does this scale when you have a much larger and more complicated graph of tests (i.e., orders of magnitude bigger)?
    2) When running system-level rather than unit-level tests, can one really map tests to domains limited enough to significantly cut down the test fan-out for a given set of changes?

    Thanks for any feedback.

  2. John, June 14, 2011 at 3:38:00 PM PDT

    Great concept; one that has been discussed in my team more than once.

    Two questions:
    o Does this really work when the number of tests, the number of software modules, and the corresponding graph complexity get orders of magnitude bigger?
    o Any thoughts on whether/how this can apply when the tests are more system-oriented, and hence most tests have a transitive relationship with most software modules?

    Thanks for this post... I love reading the Google Testing Blog.

  3. Unknown, June 15, 2011 at 11:07:00 PM PDT

    That's amazing. But how do you maintain the dependencies? Is it each individual test owner's responsibility to set his/her tests' dependencies?

  4. PJayTycy, June 16, 2011 at 12:49:00 AM PDT

    Can you give more info about the dependency graph generation?
    Is it an automated process (i.e., based on the build system / makefiles / ...), or do the developers need to specify the dependencies manually?

  5. krisskross, June 16, 2011 at 2:29:00 AM PDT

    Interesting; I have been thinking about this idea for a long time. It could be useful for upgrades as well.

    How does the framework handle callback interfaces?

    -Kristoffer

  6. MP, June 17, 2011 at 3:08:00 AM PDT

    It's an interesting idea, applying dependency management to the test runs and not only to the builds.

    But I have a question: in the first example you can track down that both change #2 and change #3 are breaking gmail_server_tests independently.

    But how do you go back in the version control system and fix those changes individually? Do you go back to pre-change #2 and then serialize the (fixed) changes?

    Another question: you can now know earlier whether it was change #2 or #3 that broke the tests, but you will not know (until later) whether the combination of changes #2 and #3 (also) breaks the tests.

    Is this a tradeoff you accept, since individual changes breaking the build are perhaps more common than combinations of changes? (A small illustration of the combination case follows this comment.)

    Thanks for the interesting post!
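
    To illustrate MP's question about combined changes, here is a small invented Python example (not from the post): each change passes its tests against the base it was written on, yet their combination fails.

        # Invented illustration: change_2 renames helper() and updates its
        # existing call site; change_3, developed in parallel against base,
        # adds a new call to the old name. Each passes alone; merged, they fail.

        base = "def helper(): return 1\ndef main(): return helper()"

        # Change #2: rename helper -> compute (updates existing callers).
        change_2 = "def compute(): return 1\ndef main(): return compute()"

        # Change #3: add a feature that still calls helper().
        change_3 = base + "\ndef feature(): return helper() + 1"

        def passes(src):
            """Run the module's entry points; report whether they all succeed."""
            env = {}
            try:
                exec(src, env)
                env["main"]()
                if "feature" in env:
                    env["feature"]()
                return True
            except Exception:
                return False

        merged = change_2 + "\ndef feature(): return helper() + 1"
        print(passes(change_2), passes(change_3), passes(merged))  # True True False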

  7. Kesepian, June 17, 2011 at 10:34:00 PM PDT

    I do not quite understand it yet, but I'll study it. Thank you for this interesting information.

  8. PJ, June 20, 2011 at 12:04:00 PM PDT

    Does the dependency graph get updated by the compiler at compile time?

    This would work great for existing and new features tied to the current dependency graph, but how do the dependencies tie in with third-party or external systems (similar to the callback question in an earlier comment) that could be impacted by a change made within Google's code base? Are their dependencies also mapped into this graph?

    Honestly, do you guys ever miss any tests that should have been run? How often is that the result of machine error vs. human error?

  9. Pooja Gupta, June 21, 2011 at 11:20:00 PM PDT

    @John

    1. This scales pretty well at our current scale, as described in http://google-engtools.blogspot.com/2011/05/welcome-to-google-engineering-tools, where all of those numbers correspond to the same code repository. Yes, it works! :)
    2. System-level tests are the ones that tend to be flaky, so even though they cannot be cut down as much as unit tests, running them less often than at every change definitely helps.

    @Alex/@PJayTycy: Build rules are used to maintain dependencies; see the similar discussion in http://google-engtools.blogspot.com/2011/06/testing-at-speed-and-scale-of-google.html. The build rules are defined per the compiler's specifications, and the dependency graph generation is then automatic (a minimal sketch follows after this comment).

    @krisskross: Can you elaborate?

    @PJ: There are occasional instances when we do miss tests because of a software bug, but it is rare enough that teams can continue to rely on the system.
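
    To make the build-rule answer concrete, here is a minimal sketch (not Google's actual tooling; module names echo the post's example) of how a reverse dependency graph, derived from build rules, yields the set of tests affected by a change:

        # Sketch: reverse dependency edges (module -> things that depend on it),
        # as a build system could derive them from build rules. Names invented.
        REVERSE_DEPS = {
            "common_collections": ["gmail_client", "gmail_server", "youtube_server"],
            "gmail_client": ["gmail_client_tests"],
            "gmail_server": ["gmail_server_tests"],
            "youtube_server": ["youtube_server_tests"],
        }

        def affected_tests(changed_modules):
            """Transitively walk reverse deps and collect the dependent tests."""
            seen, stack = set(), list(changed_modules)
            while stack:
                for dependent in REVERSE_DEPS.get(stack.pop(), []):
                    if dependent not in seen:
                        seen.add(dependent)
                        stack.append(dependent)
            return sorted(t for t in seen if t.endswith("_tests"))

        # A change to a core library fans out to every dependent test, while a
        # change to gmail_server triggers only gmail_server_tests.
        print(affected_tests(["common_collections"]))  # all three *_tests
        print(affected_tests(["gmail_server"]))        # ['gmail_server_tests']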

  10. eduardo, July 10, 2011 at 4:02:00 AM PDT

    doit is an open-source project that was created to handle exactly this kind of "run only the affected tests for every change" workflow.

    For running Python unit tests, there is a py.test plugin that integrates with doit to automatically find the imports in Python files and keep track of dependencies.
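
    For reference, a task in doit declares its file dependencies explicitly. A minimal dodo.py sketch (the module and test paths are illustrative) looks like this; running `doit` then re-executes the test task only when one of its declared inputs has changed:

        # dodo.py -- minimal doit sketch; module and test paths are invented.
        # doit re-executes a task only when a file in file_dep has changed.

        def task_parser_tests():
            return {
                "actions": ["python -m pytest tests/test_parser.py"],
                "file_dep": ["parser.py", "tests/test_parser.py"],
            }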

  11. Adi, July 27, 2011 at 6:23:00 AM PDT

    It sounds like this is more about helping the RCA (root-cause analysis) process than about saving time/resources in running the tests. It's great to have quick feedback about how your single change passed/failed the regression suite, but as John mentioned, system-level tests never map to a single component, and so the graph becomes much more complex.

    In our previous build system we had a similar mechanism based on code coverage data: test selection was automatic, driven by the code each changeset touched.
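
    The coverage-based selection Adi describes can be sketched as an inverted map from source files to the tests that executed them (a hedged illustration; the file names, test names, and coverage data are all invented):

        # Sketch of coverage-based test selection. coverage_map records which
        # source files each test executed during a previous instrumented run.
        coverage_map = {
            "test_login": {"auth.py", "session.py"},
            "test_checkout": {"cart.py", "payment.py"},
            "test_profile": {"auth.py", "profile.py"},
        }

        def select_tests(changed_files, coverage_map):
            """Pick the tests whose recorded coverage intersects the changeset."""
            changed = set(changed_files)
            return sorted(t for t, covered in coverage_map.items() if covered & changed)

        # A changeset touching auth.py selects only the tests that executed it.
        print(select_tests(["auth.py"], coverage_map))  # ['test_login', 'test_profile']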

  12. Vanken, November 13, 2011 at 6:04:00 AM PST

    Nice, it's really useful. I have tried to make it into a personal barometer.

  13. Anonymous, May 18, 2017 at 9:07:00 AM PDT

    Images are broken. It would be really nice if these were fixed!

  14. Mark Nowacki, August 23, 2017 at 2:16:00 PM PDT

    With all the testing going on, how do you maintain your test results? Do you keep track of them somewhere that can be data-mined, to see trends or particularly unstable tests? What kind of auditability does it have? Do you have a requirement for "test evidence" that means you have to keep and store your results? Thanks.


The comments you read and contribute here belong only to the person who posted them. We reserve the right to remove off-topic comments.

  
