How Google Tests Software - Part One
Tuesday, January 25, 2011
By James Whittaker
This is the first in a series of posts on this topic.
The one question I get more than any other is "How does Google test?" It's been explained in bits and pieces on this blog, but the explanation is due an update. The Google testing strategy has never changed, but the tactical ways we execute it have evolved as the company has evolved. We're now a search, apps, ads, mobile, operating system, and so on and so forth company. Each of these Focus Areas (as we call them) has to do things that make sense for its problem domain. As we add new FAs and grow the existing ones, our testing has to expand and improve. What I am documenting in this series of posts is a combination of what we are doing today and the direction we are trending toward in the foreseeable future.
Let's begin with organizational structure, and it's one that might surprise you. There isn't an actual testing organization at Google. Test exists within a Focus Area called Engineering Productivity. Eng Prod owns any number of horizontal and vertical engineering disciplines; Test is the biggest. In a nutshell, Eng Prod is made up of:
1. A product team that produces internal and open source productivity tools that are consumed by all walks of engineers across the company. We build and maintain code analyzers, IDEs, test case management systems, automated testing tools, build systems, source control systems, code review schedulers, bug databases... The idea is to make the tools that make engineers more productive. Tools are a very large part of the strategic goal of prevention over detection.
2. A services team that provides expertise to Google product teams on a wide array of topics including tools, documentation, testing, release management, training and so forth. Our expertise covers reliability, security, internationalization, etc., as well as product-specific functional issues that Google product teams might face. Every other FA has access to Eng Prod expertise.
3. Embedded engineers who are effectively loaned out to Google product teams on an as-needed basis. Some of these engineers might sit with the same product teams for years; others cycle through teams wherever they are needed most. Google encourages all its engineers to change product teams often to stay fresh, engaged and objective. Testers are no different, but the cadence of changing teams is left to the individual. I have testers on Chrome who have been there for several years and others who join for 18 months and cycle off. Keeping a healthy balance between product knowledge and fresh eyes is something a test manager has to pay close attention to.
So this means that testers report to Eng Prod managers but identify themselves with a product team, like Search, Gmail or Chrome. Organizationally they are part of both teams. They sit with the product teams, participate in their planning, go to lunch with them, share in ship bonuses and get treated like full members of the team. The benefit of the separate reporting structure is that it provides a forum for testers to share information. Good testing ideas migrate easily within Eng Prod giving all testers, no matter their product ties, access to the best technology within the company.
This separation of project and reporting structures has its challenges. By far the biggest is that testers are an external resource. Product teams can't place too big a bet on them and must keep their quality house in order. Yes, that's right: at Google it's the product teams that own quality, not testers. Every developer is expected to do their own testing. The job of the tester is to make sure they have the automation infrastructure and enabling processes that support this self-reliance. Testers enable developers to test.
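To make that concrete, here is a minimal, hypothetical sketch of what developer-owned testing can look like. The class, method, and behavior below are invented for illustration and are not from any real Google codebase; only the JUnit plumbing is standard. The point is simply that the product developer writes and maintains this kind of test against their own code, while the shared runners, build integration, and reporting around it are the kind of infrastructure the embedded testers provide.

// Hypothetical developer-owned unit test; UrlCanonicalizer and its behavior are invented.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class UrlCanonicalizerTest {

  // Tiny class under test, defined inline so the example is self-contained.
  static class UrlCanonicalizer {
    String canonicalize(String url) {
      // Drop an explicit default port and any trailing slashes.
      return url.replace(":80/", "/").replaceAll("/+$", "");
    }
  }

  @Test
  public void stripsDefaultPortAndTrailingSlash() {
    UrlCanonicalizer canonicalizer = new UrlCanonicalizer();
    assertEquals("http://example.com/path",
        canonicalizer.canonicalize("http://example.com:80/path/"));
  }
}

The test itself is the developer's job; the machinery that discovers, runs, and reports on thousands of tests like it is where the tester's effort goes.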
What I like about this strategy is that it puts developers and testers on equal footing. It makes us true partners in quality and puts the biggest quality burden where it belongs: on the developers who are responsible for getting the product right. Another side effect is that it allows us a many-to-one dev-to-test ratio. Developers outnumber testers. The better they are at testing, the more they outnumber us. Product teams should be proud of a high ratio!
OK, now we're all friends here, right? You see the hole in this strategy, I am sure. It's big enough to drive a bug through. Developers can't test! Well, who am I to deny that? No amount of corporate Kool-Aid could get me to deny it, especially coming off my GTAC talk last year where I pretty much made a game of developer vs. tester (spoiler alert: the tester wins).
Google's answer is to split the role. We solve this problem by having two types of testing roles at Google to solve two very different testing problems. In my next post, I'll talk about these roles and how we split the testing problem into two parts.
Thanks for the interesting post.
2 questions about: "So this means that testers report to Eng Prod managers but identify themselves with a product team"
1) Does this also apply to developers?
2) Where do Eng Prod managers sit? Meaning: are they also embedded in projects, or do they stay outside?
Thanks,
Laurent
Surreal!
Thanks for the insight. I can fully understand how testing is a very tricky area for a behemoth as large as Google, especially with the diversity of products you have.
Our company, Market Dojo, finds testing challenging enough, even though we have just the one piece of software!
Hi James,
Enjoy reading these posts. Keen to hear more about how Google tests, particularly what methods/processes are used in estimation.
With devs doing the lion's share of the testing, how are estimates gathered for testing? Is there a particular process used for estimation (or one you could recommend)?
Thanks.
Hi James,
This setup is very similar to how our company works, and, like at Google, it works very well for us.
One line, though, that I'd probably have been more careful about is this:
"The better they are at testing the more they outnumber us. Product teams should be proud of a high ratio!"
I think you'll find you'll have upset a lot of your testers at Google with that.
Cem Kaner wrote a very good paper around metrics; I'd say this falls into the category of unwritten management metrics that are probably best left unspoken.
http://www.kaner.com/pdfs/measurement_segue.pdf
Cheers,
Darren
I'd like to add a few comments.
1. It looks like there is a de facto matrix structure for the testers here, with dual reporting to the product manager and the core Eng Prod manager. It works well if the tester also has a long-term career option linked to the Eng Prod team - basically someone to take care of their career, skill-set development, etc.
2. There could also be problems if the product line suffers budget constraints - who insists on minimum quality then? Is there a minimum RMI that each product team has to sign up for before release?
3. As we split test engineering and testing roles - eagerly looking forward to this explanation in the next post - it might be worthwhile to look at engaging testing service vendors as a lower-cost option for the testing portion. It might be a good bridge between dogfooding and crowd testing, especially if you can throw in a few SLA adherence requirements.
Hi James,
Very interesting post. Looking forward to reading the rest of the series.
It would be interesting to see your Test Case and defect management systems and see how they stack up against ones that are commonly used throughout the industry.
I bet they're slicker and sexier than the majority of the ones available on the market!
Regards,
Adam Brown
http://www.gobanana.co.uk
Great post, already anxious for the others...
Really interested to know whether Google promotes TDD/BDD or whether they use a test-after approach. If TDD/BDD, how is it enforced/maintained?
Exactly the reason I have always felt that IT organisation structures are dysfunctional in nature. Google has pretty much the same structure.
So basically governance lies with the product teams and testing belongs to the service lines. If success comes, the huge piece of the pie goes to the product teams and only a small piece goes to the service lines, whereas in a real sense the actual execution of those products is done by the service lines. Just because a product team conceives a great product idea does not assure success; great ideas need strong implementation strategies, and to do that we need to make everyone accountable for their role and share the burden equally.
I do not have the right solution now, but the problem statement is pretty clear: you have an operational issue to deal with on your plate.
Your structure is one of the reasons why I feel Google lacks quality nowadays.
I hope you are not reading my comments in the wrong spirit; I often research the ways organisations are structured.
Thanks for the interesting post.
100 points
Hello James,
I'm reading the book "How Google Tests Software" and I'm going to share the information in this book with my friends in a workshop.
So could you please confirm whether the information in that book (like roles, testing process, testing philosophy) is still correct and still in use at Google now?
Thanks in advance,
Long Lee