Testing Blog
Google at STAR West 2011
Tuesday, June 28, 2011
By James Whittaker
STAR West will feature something unprecedented this year: back-to-back tutorials by Googlers plus a keynote and track session.
The tutorials will be on Monday, October 3. I have the morning session on "How Google Tests Software" and my colleague Ankit Mehta has the afternoon session on "Testing Rich Internet AJAX-based Applications." You can spend the whole day in Google Test Land.
I highly recommend Ankit's tutorial. He is one of our top test managers and has spent years minding Gmail as it grew up from a simple cloud-based email system into the mass-scale, ubiquitous rich web app that it is today. Ankit now leads all testing efforts around our social offerings (which are already starting to appear). Anyone struggling to automate the testing of rich web apps will have plenty to absorb in his session. He's not spouting conjecture and generalities; he's speaking from the position of actual accomplishment. Bring a laptop.
Jason Arbon and Sebastian Schiavone are presenting a track talk on "Google's New Methodology for Risk Driven Testing" and will be demonstrating some of the latest tools coming out of Google Test Labs: tools born of real need and built to serve that need. I am expecting free samples! Jason was test lead for Chrome and Chrome OS before taking over Google Test Labs, where incredibly clever code is woven into useful test tools. Sebastian is none other than my TPM (technical program manager), who is well known for taking my vague ideas about how things should be done and making them real.
Oh and the keynote, well that's me again, something about testing getting in the way of quality. I wrote this talk while I was in an especially melancholy mood about my place in the universe. It's a wake-up call to testers: the world is changing and your relevance is calling ... will you answer the call or ignore it and pretend that yesterday is still today?
Lessons in a 21st Century Tech Career: Failing Fast, 20% Time and Project Mobility
Thursday, June 23, 2011
By James Whittaker
If your name is Larry Page, stop reading this now.
Let me first admit that as I write this I am sitting in a company lounge reminiscent of a gathering room in a luxury hotel with my belly full of free gourmet food waiting for a meeting with the lighthearted title "Beer and Demos" to start.
Let me secondly admit that none of this matters. It's all very nice, and I hope it continues in perpetuity, but it doesn't matter. Engineers don't need to be spoiled rotten to be happy. The spoiling of engineers has little to do with the essence of a 21st century tech career.
Now, what exactly does matter? What is the essence of a 21st century tech career that keeps employees loyal and engaged with productivity that would shame the most seasoned agile-ist? I don't yet have the complete story, but here are three important ingredients:
Failing Fast. Nothing destroys morale more than a death march. Projects going nowhere should do so with the utmost haste. How quickly a company can implode pet projects correlates directly with how good a place it is to work. Engineers working on these projects gain not only valuable engineering experience; they also experience first-hand the company's perception of what is important (and, in the case of their project, what is not). It's a built-in lesson on company priorities, and it ensures good engineers don't get monopolized by purposeless projects. You gotta like a company willing to experiment. You have to love a company willing to laugh at itself when the experiments don't pan out.
20% Time. Any company worth working for has any number of projects that are worth working on. It's frustrating for many super-sharp engineers to see cool work going on down the hall or in the next building and not be part of it. A day job that takes all day is tiresome. Enter 20% time, a concept meant to send a strong message to all engineers: you always have a spare day. Use it wisely.
Project Mobility. Staying fresh by changing projects is part of mobility. Continuous cycling of fresh ideas from new project members to existing projects is another part. The downside here is obviously projects with a steep learning curve, but I scoff in the general direction of this idea. Whose fault is it when a wicked-smart engineer can't learn the system fast enough to be useful in some (even a small) context? Only the weakest organization with the poorest documentation can use that excuse. The only good reason for keeping people on a project is that they have no desire to leave.
These three concepts are better than all the lounges and free food any company can provide. Here's an example, a real example, of how it worked recently for an employee I'll call Paul (because that happens to be his name!).
Paul joined Google a little over a year ago and spent two months on a project that was then cancelled. He learned enough to be useful anywhere but was new enough that he really didn't have great context on what project he wanted next. Solution: I assigned him to a project that was a good skill set match.
Less than a year later, his new project ships. He played an important role in making this happen, but in that time he also realized that the role was leaning toward feature development, and he was more interested in a pure test development role. However, he was steeped in post-ship duties and working on the next release. A cycle that, happily, can be broken pretty easily here.
Another project had a test developer opening that suited Paul perfectly. He immediately signed up for 20% time on this new project and spent his 80% ramping down on his old project. At some point these percentages will trade places, and he'll spend 20% of his time training his replacement on the old project. This is a frictionless process. His manager cannot deny him his day to do as he pleases, and now he can spend his time getting off the critical path of his old project and onto the critical path of his new one.
Mobility means a constant stream of openings on projects inside Google. It also creates a population of engineering talent with an array of project experiences and a breadth of expertise to fill those positions. 20% time is a mechanism for moving onto and off of projects without formal permissions, interviews and other make-work processes engineers deplore.
Let's face it, most benefits are transient. I enjoy a good meal for the time it is in front of me. I enjoy great medical when I am sick. I appreciate luxury when I have time for it. Even my paycheck comes with such monotonous regularity that it is an expectation that brings little joy apart from the brief moment my bank balance takes that joyful upward tick. But if I am unhappy the rest of the day, none of those islands of pampering mean squat. Empower me as an engineer during the much larger blocks of my time when I am doing engineering. Feed my creativity. Remove the barriers that prevent me from working on the things I want to work on.
Do these things and you have me. Do these things and you make my entire work day better. This is the essence of a 21st century tech career: make the hours I spend working better. Anything more is so dot com.
Ok, Larry you can start reading again.
Introducing DOM Snitch, our passive in-the-browser reconnaissance tool
Tuesday, June 21, 2011
By Radoslav Vasilev from Google Zurich
Every day modern web applications are becoming increasingly sophisticated, and as their complexity grows so does their attack surface. Previously we introduced open source tools such as Skipfish and Ratproxy to assist developers in understanding and securing these applications.
As existing tools focus mostly on testing server-side code, today we are happy to introduce DOM Snitch, an experimental* Chrome extension that enables developers and testers to identify insecure practices commonly found in client-side code. To do this, we have adopted several approaches to intercepting JavaScript calls to key and potentially dangerous browser infrastructure such as document.write or HTMLElement.innerHTML (among others). Once a JavaScript call has been intercepted, DOM Snitch records the document URL and a complete stack trace that will help assess if the intercepted call can lead to cross-site scripting, mixed content, insecure modifications to the same-origin policy for DOM access, or other client-side issues.
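To illustrate the general technique (a minimal sketch, not DOM Snitch's actual implementation), a script can wrap one of these calls and capture a stack trace at the point of use; the recording logic here is hypothetical:

```typescript
// Minimal sketch of call interception: wrap document.write so each use is
// recorded with the document URL and a stack trace. DOM Snitch's real hooks
// are more elaborate; this only shows the general idea.
const originalWrite = document.write.bind(document);

document.write = function (...args: string[]): void {
  const record = {
    url: document.location.href,
    call: "document.write",
    args,
    // new Error().stack captures where the intercepted call originated.
    stack: new Error().stack,
  };
  console.debug("Intercepted DOM call:", record); // hypothetical sink
  originalWrite(...args); // preserve the original behavior
};
```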
Here are the benefits of DOM Snitch:
Real-time: Developers can observe DOM modifications as they happen inside the browser without the need to step through JavaScript code with a debugger or pause the execution of their application.
Easy to use: With built-in security heuristics and nested views, both advanced and less experienced developers and testers can quickly spot areas of the application being tested that need more attention.
Easier collaboration: Enables developers to easily export and share captured DOM modifications while troubleshooting an issue with their peers.
DOM Snitch is intended for use by developers, testers, and security researchers alike. Click here to download DOM Snitch. To read the documentation, please visit this page.
*Developers and testers should be aware that DOM Snitch is currently experimental. We do not guarantee that it will work flawlessly for all web applications. More details on known issues can be found here or in the project’s issues tracker.
GTAC 2011 Keynotes
Thursday, June 16, 2011
By James Whittaker
I am pleased to confirm three of our keynote speakers for GTAC 2011 at the Computer History Museum in Mountain View, CA.
Google's own Alberto Savoia, aka Testivus.
Steve McConnell, the best-selling author of Code Complete and CEO of Construx Software.
Award-winning speaker ("the Jon Stewart of Software Security") Hugh Thompson.
This is the start of an incredible lineup. Stay tuned for updates concerning their talks, and continue to nominate additional speakers and keynotes. We're not done yet, and we're taking nominations through mid-July.
In addition to the keynotes, we're going to be giving updates on How Google Tests Software from teams across the company, including Android, Chrome, Gmail, YouTube, and many more.
Tuesday, June 14, 2011
(Cross-posted from the Google Engineering Tools blog)
By Pooja Gupta, Mark Ivey and John Penix
Continuous integration systems play a crucial role in keeping software working while it is being developed. The basic steps most continuous integration systems follow are:
1. Get the latest copy of the code.
2. Run all tests.
3. Report results.
4. Repeat 1-3.
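As a rough sketch, that loop might look like the following; the helper functions are hypothetical stand-ins for real version control, test runner, and reporting integrations:

```typescript
// Hypothetical stand-ins for real VCS, test-runner, and reporting hooks.
async function syncToLatest(): Promise<void> { /* fetch the latest code */ }
async function runAllTests(): Promise<string> { return "all tests passed"; }
async function report(results: string): Promise<void> { console.log(results); }

// The naive continuous integration loop from steps 1-4 above.
async function continuousIntegration(): Promise<void> {
  for (;;) {
    await syncToLatest();                // 1. get the latest copy of the code
    const results = await runAllTests(); // 2. run all tests
    await report(results);               // 3. report results
  }                                      // 4. repeat
}
```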
This works great while the codebase is small, code flux is reasonable, and tests are fast. As a codebase grows over time, the effectiveness of such a system decreases. As more code is added, each clean run takes much longer and more changes get crammed into a single run. If something breaks, finding and backing out the bad change is a tedious and error-prone task for development teams.
Software development at Google is big and fast. The code base receives 20+ code changes per minute, and 50% of the files change every month! Each product is developed and released from ‘head’, relying on automated tests to verify product behavior. Release frequency varies from multiple times per day to once every few weeks, depending on the product team.
With such a huge, fast-moving codebase, it is possible for teams to get stuck spending a lot of time just keeping their build ‘green’. A continuous integration system should help by providing the exact change at which a test started failing, instead of a range of suspect changes or a lengthy binary search for the offending change. To find the exact change that broke a test, we could run every test at every change, but that would be very expensive.
To solve this problem, we built a continuous integration system that uses dependency analysis to determine all the tests a change transitively affects and then runs only those tests for every change. The system is built on top of Google’s cloud computing infrastructure, enabling many builds to be executed concurrently and allowing the system to run affected tests as soon as a change is submitted.
Here is an example where our system can provide faster and more precise feedback than a traditional continuous build. In this scenario, there are two tests and three changes that affect these tests. The gmail_server_tests are broken by the second change; however, a typical continuous integration system will only be able to tell that either change #2 or change #3 caused this test to fail. By using concurrent builds, we can launch tests without waiting for the current build/test cycle to finish. Dependency analysis limits the number of tests executed for each change, so that in this example the total number of test executions is the same as before.
Let’s look deeper into how we perform the dependency analysis.
We maintain an in-memory graph of coarse-grained dependencies between various tests and build rules across the entire codebase. This graph, several GB in memory, is kept up to date with each change that gets checked in. This allows us to transitively determine all the tests that depend on the code modified in a given change and hence need to be re-run to know the current state of the build. Let’s walk through an example.
Consider two sample projects, each containing a different set of tests, where the build dependency graph looks like this:
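As a rough sketch (the node names follow the example below, but the exact edges are hypothetical), such a graph can be represented as a map from each node to the nodes that directly depend on it:

```typescript
// Hypothetical reverse-dependency view of the example graph: each node maps
// to the nodes that directly depend on it, so walking "downstream" from a
// changed node eventually reaches the affected *_tests leaves.
const reverseDeps: Record<string, string[]> = {
  common_collections_util: ["gmail_server", "youtube_client"], // shared library
  gmail_server: ["gmail_server_tests"],
  youtube_client: ["buzz_client_tests"],
};
```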
We will see how two isolated code changes, at different depths of the dependency tree, are analyzed to determine the affected tests, that is, the minimal set of tests that must be run to ensure that both the Gmail and Buzz projects are “green”.
Case 1: Change in a common library
For the first scenario, consider a change that modifies files in common_collections_util.
As soon as this change is submitted, we start a breadth-first search to find all tests that depend on it.
Once all the direct dependencies are found, we continue the breadth-first search to collect all transitive dependencies until we reach the leaf nodes. When done, we have all the tests that need to be run, and we can calculate the projects that will need to update their overall status based on the results from these tests.
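A minimal sketch of that breadth-first walk, reusing the hypothetical reverseDeps map above:

```typescript
// BFS over reverseDeps: starting from the changed node, collect everything
// that transitively depends on it, then keep the leaf test nodes (by this
// sketch's convention, names ending in "_tests").
function affectedTests(changedNode: string): string[] {
  const visited = new Set<string>();
  const queue: string[] = [changedNode];
  while (queue.length > 0) {
    const node = queue.shift()!;
    for (const dependent of reverseDeps[node] ?? []) {
      if (!visited.has(dependent)) {
        visited.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return [...visited].filter((name) => name.endsWith("_tests"));
}

// A change to common_collections_util affects tests in both projects:
console.log(affectedTests("common_collections_util"));
// -> ["gmail_server_tests", "buzz_client_tests"]
```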
Case 2: Change in a dependent project
When a change modifying files in youtube_client is submitted, we perform the same analysis and conclude that only buzz_client_tests is affected and that the status of the Buzz project needs to be updated.
The example above illustrates how we optimize the number of tests run per change without sacrificing the accuracy of end results for a project. Running fewer tests per change allows us to run all affected tests for every change that gets checked in, making it easier for a developer to detect and deal with an offending change.
The use of smart tools and cloud computing infrastructure makes the continuous integration system fast and reliable. While we are constantly working on improvements to this system, thousands of Google projects are already using it to launch and iterate quickly, making faster user-visible progress.