Testing Blog
Testing 2.0
Thursday, August 30, 2012
By Anthony F. Voellm (aka Tony the perfguy)
It’s amazing what has happened in the field of test in the last 20 years... a lot of “art” has turned into “science”. Computer scientists, engineers, and many other disciplines have worked on provable systems and calculus, pioneered model-based testing, invented security fuzz testing, and even settled on a common pattern for unit tests called xUnit. The xUnit pattern shows up in open source software like JUnit as well as in Microsoft development test tools.
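To make the xUnit pattern concrete, here is a minimal sketch in Python, whose unittest module follows the same pattern JUnit popularized: a fixture set up before each test, test methods, and assertions. The Stack class is a made-up example.

```python
# A minimal xUnit-pattern test in Python; unittest follows the same
# pattern JUnit popularized. The Stack class is a made-up example.
import unittest

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

class StackTest(unittest.TestCase):
    def setUp(self):
        # Fixture setup: runs before every test method.
        self.stack = Stack()

    def test_pop_returns_most_recently_pushed_item(self):
        self.stack.push(1)
        self.stack.push(2)
        self.assertEqual(2, self.stack.pop())

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            self.stack.pop()

if __name__ == "__main__":
    unittest.main()
```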
With all this innovation in test, it’s no wonder test is dead. The situation is no different from the late 1800s, when patents were declared dead. Everything had been invented. So now that everything in test has been invented, it’s dead.
Well... if you believe everything in test has been invented then please stop reading now :)
As an aside: “Test is dead” was a keynote at the Google Test Automation Conference (GTAC) in 2011. You can watch that talk and many other GTAC test talks on YouTube, and I definitely recommend you check them out. The talks span a wide range of topics, from GUI automation to cloud.
What really excites me these days is that we have closed a chapter on test. A lot of the foundation of writing and testing great software has been laid (the examples at the beginning of this post, tools like WebDriver for UI, FIO for storage, and much more), which I think of as Testing 1.0. We all use Testing 1.0 day in and day out. In fact, at Google most of the developers (called Software Engineers, or SWEs) do the basic Testing 1.0 work, and we have a high bar on quality.
Knuth once said, "Be careful about using the following code -- I've only proven that it works, I haven't tested it."
This brings us to the current chapter in test, which I call Testing 1.5. This chapter is being written by computer scientists, applied scientists, engineers, developers, statisticians, and many other disciplines. These people come together in the Software Engineer in Test (SET) and Test Engineer (TE) roles at Google. SETs and TEs focus on developing software faster, building it better the first time, testing it in depth, releasing it quicker, and making sure it works in all environments. We often put deep test focus on security, reliability, and performance. I sometimes think of SETs and TEs as risk assessors whose role is to figure out the probability of finding a bug and then work to reduce that probability. These are super interesting computer science problems, and we take a solid engineering approach rather than a process-oriented, manual, people-intensive one. We always look to scale with machines wherever possible.
While Testing 1.0 is done and 1.5 is alive and well, it’s Testing 2.0 that gets me up early in the morning to start my day. Imagine if we could reinvent how we use and think about tests. What if we could automate the complex decisions on good and bad quality that humans are still so good at today? What would it look like if we had a system collecting all the “quality signals” (think: tests, production information, developer behavior, …) and could predict how good the code is today, and what it most likely will be tomorrow? That would be so awesome...
Google is working on Testing 2.0 and we’ll continue to contribute to Testing 1.0 and 1.5. Nothing is static... keep up or miss an amazing ride.
Peace.... Tony
Special thanks to Chris, Simon, Anthony, Matt, Asim, Ari, Baran, Jim, Chaitali, Rob, Emily, Kristen, Annie, and many others for providing input and suggestions for this post.
Testing Google's New API Infrastructure
Monday, August 20, 2012
By Anthony Vallone
If you haven’t noticed, Google has been launching many public APIs recently. These APIs are empowering mobile app, desktop app, web service, and website developers by providing easy access to Google tools, services, and data. In the past couple of years, we have invested heavily in building a new API infrastructure for our APIs. Before this infrastructure, our teams had numerous technical challenges to solve when releasing an API: scalability, authorization, quota, caching, billing, client libraries, translation from external REST requests to internal RPCs, etc. The new infrastructure generically solves these problems and allows our teams to focus on their service. Automating testing of these new APIs turned out to be quite a large problem. Our solution to this problem is somewhat unique within Google, and we hope you find it interesting.
System Under Test (SUT)
Let’s start with a simplified view of the SUT design:
A developer’s application uses a Google-supplied API Client Library to call Google API methods. The library connects to the API Infrastructure Service and sends the request. Part of the request identifies the particular API and version being used by the client. The service knows about every Google API, because each API is described by API configuration files created by the API-providing team. Configuration files declare API versions, methods, method parameters, and other API settings. Given an API request and information about the API, the API Infrastructure Service can translate the request to Google’s internal RPC format and pass it to the correct API Provider Service. This service then satisfies the request and passes the response back to the developer’s app via the API Infrastructure Service and API Client Library.
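To illustrate the flow, here is a minimal Python sketch of config-driven request translation. This is not Google’s actual implementation; the configuration shape, method names, and RPC names are invented for illustration.

```python
# Hypothetical sketch of the request flow described above -- the config
# format, method names, and RPC names are illustrative, not Google's.

# An API configuration file (created by the API-providing team) might
# declare versions, methods, and parameters roughly like this:
EXAMPLE_API_CONFIG = {
    "api": "urlshortener",
    "version": "v1",
    "methods": {
        "url.get": {
            "http_method": "GET",
            "params": {"shortUrl": {"type": "string", "required": True}},
            "rpc": "UrlService.Get",  # internal RPC the request maps to
        },
    },
}

def handle_external_request(config, method_name, params):
    """What the API Infrastructure Service does, in miniature: validate an
    external REST request against the config, then forward it as an
    internal RPC to the API Provider Service."""
    method = config["methods"][method_name]
    for name, spec in method["params"].items():
        if spec.get("required") and name not in params:
            raise ValueError("missing required parameter: %s" % name)
    return call_internal_rpc(method["rpc"], params)

def call_internal_rpc(rpc_name, params):
    # Stand-in for the internal RPC layer and API Provider Service.
    return {"rpc": rpc_name, "params": params, "status": "OK"}

print(handle_external_request(EXAMPLE_API_CONFIG, "url.get",
                              {"shortUrl": "http://goo.gl/fbsS"}))
```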
Now, the Fun Part
As of this writing, we have released 10 language-specific client libraries and 35 public APIs built on this infrastructure. Also, each of the libraries needs to work on multiple platforms. Our test space has three dimensions: API (35), language (10), and platform (varies by library). How are we going to test all the libraries on all the platforms against all the APIs when we only have two engineers on the team dedicated to test automation?
Step 1: Create a Comprehensive API
Each API uses different features of the infrastructure, and we want to ensure that every feature works. Rather than use the APIs to test our infrastructure, we create a Test API that uses every feature. In some cases where API configuration options are mutually exclusive, we have to create API versions that are feature-specific. Of course, each API team still needs to do basic integration testing with the infrastructure, but they can assume that the infrastructure features that their API depends on are well tested by the infrastructure team.
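For illustration only, a comprehensive Test API definition might be sketched like this; the real configuration format is internal, and the feature names below are invented.

```python
# Hypothetical sketch of a comprehensive Test API definition. The real
# configuration format is internal; the feature names here are invented.
TEST_API_VERSIONS = {
    # One version exercises every feature that can coexist...
    "v1": {
        "features": ["etag_caching", "quota", "batch_requests",
                     "partial_responses"],
    },
    # ...while mutually exclusive configuration options get their own
    # feature-specific versions, as described above.
    "v1-auth-required": {"features": ["oauth2_required"]},
    "v1-auth-open": {"features": ["no_auth"]},
}
```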
Step 2: Client Abstraction Layer in the Tests
We want to avoid creating library-specific tests, because this would lead to mass duplication of test logic. The obvious solution is to create a test library to be used by all tests as an abstraction layer hiding the various libraries and platforms. This allows us to define tests that don’t care about library or platform.
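A minimal Python sketch of the idea follows; the class name and wire format are invented for illustration. Tests depend only on this interface, never on a concrete client library or platform.

```python
# Sketch of the client abstraction layer: tests talk to one interface and
# never mention a concrete library or platform. Names are illustrative.
import json
import urllib.request

class ApiTestClient:
    """Hides which client library and platform actually execute the call."""

    def __init__(self, adapter_url):
        # Each (language, platform) pair runs an adapter server (see Step 3);
        # a test instance simply points at one of them.
        self.adapter_url = adapter_url

    def call(self, api, version, method, params):
        body = json.dumps({"api": api, "version": version,
                           "method": method, "params": params}).encode()
        with urllib.request.urlopen(self.adapter_url, body) as response:
            return json.loads(response.read())

# A test written once, runnable against any library/platform combination:
def test_echo(client):
    result = client.call("test_api", "v1", "echo", {"value": 42})
    assert result["value"] == 42
```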
Step 3: Adapter Servers
When a test library makes an API call, it should be able to use any language and platform. We solve this by setting up servers on each of our target platforms and creating a language-specific server for each target language. These servers receive requests from test clients; they need only translate those requests into actual library calls and return the response to the caller. The code for these servers is quite simple to create and maintain.
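Continuing the sketch, an adapter server might look roughly like this, assuming the JSON wire format from the previous snippet; the fake library call stands in for invoking the real client library under test.

```python
# Sketch of one language-specific adapter server, assuming the JSON wire
# format from the previous snippet. fake_client_library_call stands in for
# invoking the real client library under test.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def fake_client_library_call(api, version, method, params):
    # A real adapter would translate this into an actual library call;
    # this stand-in just echoes the parameters back.
    return params

class AdapterHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        request = json.loads(
            self.rfile.read(int(self.headers["Content-Length"])))
        result = fake_client_library_call(request["api"], request["version"],
                                          request["method"], request["params"])
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), AdapterHandler).serve_forever()
```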
Step 4: Iterate
Now, we have all the pieces in place. When we run our tests, they are configured to run over all supported languages and platforms against the Test API.
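Continuing the same sketch and reusing ApiTestClient and test_echo from the Step 2 snippet, the fan-out over the test matrix is just a loop; the hosts and ports below are made up.

```python
# Fanning the same suite out over the test matrix, reusing ApiTestClient
# and test_echo from the Step 2 sketch. Hosts and ports are made up.
ADAPTERS = {
    ("java", "linux"): "http://test-host-1:8080",
    ("python", "linux"): "http://test-host-1:8081",
    ("objective-c", "mac"): "http://test-host-2:8080",
    # ... one entry per supported (language, platform) combination
}

for (language, platform), url in ADAPTERS.items():
    client = ApiTestClient(url)  # identical test logic for every combination
    test_echo(client)
    print("PASS:", language, platform)
```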
Test Nirvana Achieved
We have a suite of straightforward tests that focus on infrastructure features. When the tests run, they are quick, reliable, and test all of our supported features, platforms, and libraries. When a feature is added to the API infrastructure, we only need to create one new test, update each adapter server to handle a new call type, and add the feature to the Test API.
Covering all your codebases: A conversation with a Software Engineer in Test
Saturday, August 11, 2012
Cross-posted from the Google Student Blog
Today we’re featuring Sabrina Williams, a Software Engineer in Test who joined Google in August 2011. Software Engineers in Test undertake a broad range of challenges on a daily basis, designing and building intelligent systems that can explore various use cases and scenarios for distributed computing infrastructure. Read on to learn more about Sabrina’s path to Google and what she works on now that she’s here!
Tell us about yourself and how you got to Google.
I grew up in rural Prunedale, Calif. and went to Stanford where I double-majored in philosophy and computer science. After college I spent six years as a software engineer at HP, working primarily on printer drivers. I began focusing on testing my last two years there—reading books, looking up information and prototyping test tools in my own time. By the time I left, I’d started a project for an automated test framework that most of our lab used.
I applied for a software engineering role at Google four years ago and didn’t do well in my interviews. Thankfully, a Google recruiter called last year and set me up for software engineer (SWE) interviews again. After a day of talking about testing and mocking for every design question I answered, I was told that there were opportunities for me in SWE and SET. I ended up choosing the SET role after speaking with the SET hiring manager. He said two things that convinced me. First, SETs spend as much time coding as SWEs, and I wanted a role where I could write code. Second, the SET’s job is to creatively solve testing problems, which sounded more interesting to me than writing features for a product. This seemed like a really unique and appealing opportunity, so I took it!
So what exactly do SETs do?
SETs are SWEs who are really into testing. We help SWEs design and refactor their code so that it is more testable. We work with test engineers (TEs) to figure out how to automate difficult test cases. We also write harnesses, frameworks and tools to make test automation possible. SETs tend to have the best understanding of how everything plays together (production code, manual tests, automated tests, tools, etc.) and we have to make that information accessible to everyone on the team.
What project do you work on?
I work on the Google Cloud Print team. Our goal is to make it possible to print anywhere from any device. You can use Google Cloud Print to connect home and work printers to the web so that you (and anyone you share your printers with) can access them from your phone, tablet, Chromebook, PC or any other supported web-connected device.
What advice would you give to aspiring SETs?
First, for computer science majors in general: if there’s any other field about which you are passionate, at least minor in it. CS is wonderfully chameleonic in that it can be applied to anything. So if, for example, you love art history, minor in art and you can write software to help restore images of old paintings.
For aspiring SETs, challenge yourself to write tests for all of the code you write for school. If you can get an internship where you have access to a real-world code base, study how that company approaches testing their code. If it’s well-tested, see how they did it. If it’s not well-tested, think about how you would test it. I don’t (personally) know of a CS program that has even a full course based on testing, so you’ll have to teach yourself. Start by looking up buzzwords like “unit test” and “test-driven development.” Look up the different types of tests (unit, integration, component, system, etc.). Find a code coverage tool (if a free/cheap one is available for your language of choice) and see how well you’re covering your code with your tests. Write a tool that will run all of your tests every time you build your code. If all of this sounds like fun...well...we need more people like you!
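For instance, the "run all of your tests every time you build" advice can start as small as this Python sketch, which uses only the standard library’s unittest discovery:

```python
# Run every test on each build using unittest's built-in discovery; wire
# the exit code into your build so a red test fails the build.
import sys
import unittest

def run_all_tests(start_dir="."):
    # Discover every test*.py file under start_dir and run the whole suite.
    suite = unittest.TestLoader().discover(start_dir)
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    sys.exit(0 if run_all_tests() else 1)
```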
If you’re interested in applying for a Software Engineer in Test position, please apply for our general Software Engineer position, then indicate in your resume objective line that you’re interested in the SET role.
Posted by Jessica Safir, University Programs
Welcome to the Next Generation of Google Testing
Saturday, August 04, 2012
By Anthony Vallone
Wow... it has been a long time since we’ve posted to the blog. This past year has been a whirlwind of change for many test teams as Google has restructured leadership with a focus on products. Now that the dust has settled, our teams are leaner, more focused, and more effective. We have learned quite a bit over the past year about how best to tackle and manage test problems at monumental scale. The next generation of test teams at Google are looking forward to sharing all that we have learned. Stay tuned for a revived Google Testing Blog that will provide deep insight into our latest testing technologies and strategies.