Testing Blog

How Google Tests Software - Part Three

Wednesday, February 16, 2011
Labels: James Whittaker

15 comments:

  1. Torsten, February 16, 2011 at 4:43:00 AM PST

    Thanks for sharing your experiences. I like the way you do things at Google :-)

  2. Shaun Hershey, February 16, 2011 at 5:31:00 AM PST

    Hey James,

    I haven't been in a testing position for very long, but I've been actively scouring blogs and reading up on different methodologies in my free time to supplement my hands-on experience. My lack of experience may mean the following point isn't really the case in some companies, but I'll mention it anyway:

    The one major issue I've noticed has been that developers who test their own code tend to overlook glaring mistakes. Even going back to when I was going to school to be a developer (and have since decided that development is not for me), we all constantly ended up peer-testing rather than testing our own code because of the inattentional blindness effect. We are so focused on what our code is supposed to do that we overlook side effects that could vastly affect a customer.

    How do you get around this? My current company tries to keep a 1 to 1 ratio (although it's not quite there, we definitely have more on the development side), and it's been working amazingly well for us. Here's what I tend to see on a daily basis:

    Dev: Hey Shaun, I checked in some code last night, it should be in this morning's build. I tested it myself, so you can probably just give it a passing glance and close the issue as completed.

    Me: Yeah okay. (At which point, even spending just 10 minutes with the feature, I find some issues with it; most are minor, but occasionally something major.)

    Me: Hey I thought you said you tested this...

    Dev: Yeah well one of the other devs must have done something to break my stuff/I don't see that issue on my machine/Are you sure you installed the latest build/That's not a bug it's a feature/etc.
    -----------------------

    Some devs are better than others about the thoroughness of their testing, but they still miss things. Without dedicated testers, I can't see a product shipping without some major bugs in it.

    Sorry this was kind of a long comment, I just got to work and saw the new blog post and had to mention this. This is actually the first blog I've ever commented on, so hopefully I provided some sort of usefulness to the discussion.

    Thanks for reading,
    Shaun Hershey

  3. Jilles, February 16, 2011 at 11:24:00 AM PST

    This makes a lot of sense. We have similar experience at Nokia, where I have worked on different (server-side Java) projects, and as you may be aware, Nokia has had to grow up in a hurry over the past three years regarding server-side development.

    A few things I have learned the hard way: dedicated test engineers are nice, until they leave your team, the build breaks, and nobody really knows how to fix things. I find it absolutely essential that every engineer can step into the role of writing any kind of test. It is vital that the team understands the test infrastructure and knows how to fix, utilize, and add to it. It is also essential that engineers feel the pain when quality degrades. You have to make it their problem: if you break it, you fix it.

    That being said, it is nice to have somebody around to dig in deeper where needed. Finding bugs is a skill and requires an attitude and attention to detail that not all engineers have. In my view, such people add the most value when the engineers do their best to cover as much as possible with tests themselves. This frees up the test engineer to focus on those areas that actually require their skills.

    So, unit tests, integration tests, acceptance tests, performance tests, etc. are not something I would put a dedicated engineer on.

    Regarding quality requirements: they seem abstract at first sight, but they don't have to be. You can almost always translate quality requirements into functional requirements. The best way to do this is to define them in terms of scenarios and then come up with acceptance tests for those.
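
The translation described above can be sketched as a scenario-style acceptance test. This is a minimal illustration, not anyone's actual code: the `search` function and the 200 ms budget are invented, standing in for a real system under test and a real agreed budget.

```python
import time

def search(query):
    # Stand-in for the real system under test.
    return ["result for " + query]

def test_search_returns_results_within_budget():
    # Scenario: a user searches for a common term. The abstract quality
    # requirement "results appear quickly" becomes a concrete, testable
    # budget attached to a functional scenario.
    start = time.monotonic()
    results = search("laptops")
    elapsed = time.monotonic() - start
    assert results, "expected at least one result"
    assert elapsed < 0.2, "budget of 200 ms exceeded"
```

The point is that once the scenario and budget are written down, the quality requirement is no more abstract than any functional one.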

    Additionally, quality requirements tend to come to the table once you already have a quality problem. As you mentioned, preventing a quality problem is better than fixing it. One way of preventing quality issues is to have enough good, experienced, senior engineers on the team. This sounds simplistic, but I've seen projects where test engineers were brought in to fix a project that had essentially only junior engineers and a lot of quality problems. It doesn't work. It's better to have seniors on the team to do the job properly from day one, and it costs much less in the end.

    Finally, monitoring is a great alternative to testing. If you have good monitoring in place, it will be fairly easy to see when you have a performance issue. I'd prefer a system that can tell me how it's performing over an extensive test suite that tells me how the system performs.
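
A system that "tells you how it's performing" can be sketched with a simple latency-recording decorator; real deployments would export to a metrics system, but the shape is the same. All names here (`monitored`, `render_page`, the 0.5 s budget) are invented for illustration.

```python
import time
from functools import wraps

latencies = {}  # metric name -> list of observed durations in seconds

def monitored(name):
    """Record how long each call takes, so the running system reports
    its own performance instead of relying only on offline test suites."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                latencies.setdefault(name, []).append(time.monotonic() - start)
        return wrapper
    return decorator

@monitored("render_page")
def render_page():
    return "<html>ok</html>"

render_page()
# An alerting rule would now compare max(latencies["render_page"])
# against an agreed budget, e.g. 0.5 s, and page someone on breach.
```
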

  4. Matthew, February 16, 2011 at 3:24:00 PM PST

    Shaun,

    I'm not from Google, but I can probably shed some light on what you are seeing. In the teams that I have led, I try to bring in a few key testing dimensions. A liberal sprinkling of all these dimensions strengthens not only the quality of the component getting the change, but also reduces the risk of integration-level issues.

    * Component Oriented Testing - testing the actual component itself.
    * Interdependency Testing - testing other components that consume or are consumed by the component being developed.
    * System Level Testing - testing broader system functionality that touches the component in question.
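
The three dimensions in the list above can be sketched against an imaginary component; everything here (`Cart`, `Checkout`, the tests) is invented purely to show the layering.

```python
class Cart:
    """The component under change."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)  # item is a (name, price) tuple

    def total(self):
        return sum(price for _, price in self.items)

class Checkout:
    """A component that consumes Cart, used for interdependency testing."""
    def __init__(self, cart):
        self.cart = cart

    def charge(self):
        return self.cart.total()

def test_cart_total():
    # Component-oriented: the component itself, in isolation.
    cart = Cart()
    cart.add(("book", 10))
    assert cart.total() == 10

def test_checkout_uses_cart():
    # Interdependency: a consumer of the component still works.
    cart = Cart()
    cart.add(("book", 10))
    assert Checkout(cart).charge() == 10

def test_purchase_flow():
    # System-level: a broader flow that touches the component.
    cart = Cart()
    cart.add(("book", 10))
    cart.add(("pen", 2))
    assert Checkout(cart).charge() == 12
```

A change to `Cart` that passes only the first test but breaks the other two is exactly the kind of side effect the upper dimensions are there to catch.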

    What this breeds in developers is an awareness not only of their change but also of the impact that their change has on the system and on other components. Most good engineers, when exposed to this information, begin to "get it" and are more careful within that context.

    My understanding is that Google is _very_ strong on ensuring the interdependencies between components are absolutely clear and absolutely visible to the developers.

    This, paired with a focus on automated testing, will lower the effort for the engineers while keeping them aware. That addresses the first part of your conversation.

    The second part is disassociating a breakage from a person. Complex systems have complex interactions, and sometimes things break. Investing in tools to detect, identify, and remove issues is critical. Centralized testing and automated identification of regressing changelists go a long way toward making it an engineering behaviour rather than a personal issue.

    My understanding is that Google has virtually institutionalized a lot of what I've said above, so engineers can take risks, find it hard to blame others, and take ownership not only of the changes they make but also of their impact on the system.

    (I could go on for pages... But I won't :)

  5. Unknown, February 16, 2011 at 9:55:00 PM PST

    Appreciate your sharing. My current BU definitely splits dev and test, and keeps the dev:test ratio at 3:1. The devs do almost no unit testing, which makes testers' lives very difficult. Your points give us very good inspiration.

  6. Unknown, February 16, 2011 at 11:42:00 PM PST

    Great post, James. I completely agree: dev testing is essential for good quality. I've been working on several projects with a lot of legacy (untestable) code, and building quality into those products was a huge challenge, one that required serious refactoring. From my experience, creating a single testing platform for SWEs, SETs, and TEs was one of the major success factors. Each role has different needs but still shares common ground; finding it and hooking everyone to the same platform and infrastructure was important to us. Having the TE work with the product owners and understand the customer was another key factor. Still, this change requires a lot of resources and isn't trivial.
    What about performance and security: are those treated the same way, as part of the SDLC, or just as a service?
    -Lior

  7. Stuart Taylor, February 17, 2011 at 2:00:00 AM PST

    Hi James (long time no see),

    I totally agree with the comment you make about the ownership of quality.

    The model we strive for is that of defect prevention rather than detection.

    Stuart Taylor

  8. Megha, February 17, 2011 at 4:49:00 PM PST

    I am a test engineer, and Google has been one of my dream companies. Reading your blog, I feel that testers are so unimportant at Google and can be easily laid off. It's sad.

  9. Gengodo, February 17, 2011 at 11:56:00 PM PST

    A sad day for testers around the world. Our own spokesman has turned his back on us. What happened to "devs can't test"? I always knew there was something fishy going on. At least the Bach brothers are still kicking it old school with a compelling, fun and efficient buccaneer twist.

  10. Anonymous, February 18, 2011 at 6:21:00 AM PST

    Wow. The comments are becoming larger than my posts! You guys are making my job really easy. Answers coming, I promise! Please stay tuned.

  11. zecarrera, February 18, 2011 at 10:19:00 AM PST

    It is very interesting the way you guys are working to blend dev and testing together... it is a real challenge, and your approach seems very good... I would like to see more details on the work of the TE...

  12. Peter Houghton, February 20, 2011 at 12:47:00 PM PST

    I agree that 'quality' cannot be 'tested in'. But the approach you describe appears to go ahead and attempt something just as, if not more, difficult. You suggest that a programmer will produce quality work just by coding 'better'. While a skilled and experienced programmer is capable of producing high-quality software, who will tell them when they don't, or can't? We are all potentially victims of the Dunning–Kruger effect, and as such we need co-workers to help.

    There are a host of biases that stop a programmer, product owner, or project manager from questioning their work: confirmation bias and congruence bias, to name just two. These are magnified by group-think, and without the input of a more independent, experienced, and skilled critical thinker, they soon allow mistakes to occur.

    Think of it this way: how do you know your products are good enough? How do you know they are not plagued by flaws? Flaws like a message that tells me my payment method is invalid before I've entered one, or the absence of a scale on the iPhone maps app.

    Thanks
    Pete

  13. Unknown, February 21, 2011 at 1:39:00 AM PST

    This will work if there is not much integration involved. If a team in geographic location A writes code that gets integrated with Team B's code in location B, the manager will face the problem of "it's not my issue." To add more complexity, who will do the system testing?
    Another issue is the mindset of developers. They think, "Yes, this is the way it is supposed to work; no need to test that." For example, if the web app works on IE7, no need to test on IE8. Sure, it may be a third-party issue, but we need to remember that it is our application going out to the user, and he or she won't care whether the issue is in a third party or in our app. For the user, it is an issue.

    We have an application developed for live streaming. The developer knows it is going to take 7 seconds if the bandwidth is, say, 1 Mbps, and hence ignores those 7 seconds. But the user will not accept this.

    Blending testers with developers is a good idea when the tester can push the developer to write "testable" code and provide hooks in the code for testers to automate the testing.
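
One way to read that last point: a sketch of the kind of hook that lets a tester automate the 7-second streaming scenario without a real slow link. All names here (`Player`, `FakeTransport`) are invented for illustration; the technique is plain dependency injection.

```python
import time

class Player:
    """Hypothetical streaming client. The transport and clock are
    injected 'hooks', so a tester can substitute fakes and simulate a
    slow network without real hardware or a real 1 Mbps connection."""
    def __init__(self, transport, clock=time.monotonic):
        self.transport = transport
        self.clock = clock

    def startup_delay(self):
        start = self.clock()
        self.transport.fetch_first_chunk()
        return self.clock() - start

class FakeTransport:
    """Test double standing in for the real network layer."""
    def fetch_first_chunk(self):
        pass  # the fake clock below supplies the simulated delay

def test_startup_delay_on_slow_link():
    ticks = iter([0.0, 7.0])  # simulate the 7 s startup from the comment
    player = Player(FakeTransport(), clock=lambda: next(ticks))
    assert player.startup_delay() == 7.0
```

With hooks like these, the "user won't accept 7 seconds" concern becomes an automated check rather than something only noticed on a tester's machine.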

  14. jikan lord, May 27, 2011 at 7:48:00 PM PDT

    Shaun Hershey is right.

    What you describe can only happen when developers have no deadline to meet and no personal interest in the code.

    And applications now interact with other applications in more depth than before.

    If a developer has time to work on testing his app, he will be more productive paying back the "loan" on all the temporary workarounds he has implemented in the past due to time constraints.

  15. Unknown, July 3, 2015 at 2:54:00 AM PDT

    Yeah... now I see the quality of Google and MS, which try to follow "devs own the quality and testing." The user experience is terrible and it is going down. Crashes and obvious UI bugs are not acceptable.
