The idea for Testapalooza came out of discussions about how to build a vibrant testing community here at Google. Many diverse groups work daily on quality-related activities, but each group uses different tools and has different ideas for testing an application, so it can be difficult to find out what others are doing. So we decided to put on a conference!
We asked engineers from Testing, Development, User Experience, and other groups to submit conference sessions: tool presentations, tutorials, workshops, panels, and experience reports. We hoped to get 30-40 submissions from which we could select about 20. In typical conference fashion, the day before the submission deadline we had 12 submissions. The day after the deadline, we had more than 130! It was very impressive and fun to read what our engineers submitted. We had some of our most involved engineers on the reviewing committee, and even they were surprised by the breadth and depth of the proposed sessions. It was extremely hard to pick just 41 of these proposals, but we couldn't fit any more into a one-day conference.
We ran 11 tracks: Agile, Automation, Developer, Internationalization, Perf, QA, Release Engineering, Security, Reliability, SysOps, and User Experience. Registrations for the event filled up quickly and proved that there is indeed a great desire for more cross-specialty collaboration: software developers signed up to attend QA sessions, operations engineers learned more about unit testing, and QA engineers were everywhere.
The conference was a great success. We had sessions going the whole day, and people were discussing testing in the hallways. New ideas were generated and debated in every corner of the Googleplex. People appreciated the variety of topics from agile testing to micro-level unit testing to testing tools to usability testing. We also had a poster session, where internal groups could show other Googlers what they were doing, our equivalent of the conference expo.
Of course, this wouldn't be a true Google event without some great food, and we were fortunate to have enthusiastic participation from our chefs: Taste-a-Palooza!
We finished the day with an hour of lightning talks. Everybody left exhausted, but with new and interesting ideas to think about.
All Testapalooza sessions were video recorded (many were videoconferenced to other offices). We want to publish as many of these videos as possible, and over the coming weeks we will review them and release the sessions that don't contain confidential information. Watch this space for more information on the videos.
Posted by Allen Hutchison, Engineering Manager
Regardless of the amount of testing you do for an application, if the application doesn't scale, there is a good chance that no one will ever see it. As you can imagine, we at Google care a lot about scalability. In fact, it's rare to talk to another Googler about a new idea without the question, "How does it scale?" coming into the discussion. On June 23, our Seattle office will host a conference on scalable systems.
The team is currently accepting proposals for 45-minute talks. You can find out more from the Google Research Blog.
In software, as in life, there are things we notice that help us judge whether a result is satisfactory or unsatisfactory. Let's call these things that affect our judgment "factors." Some factors provide stronger indications than others, so when we use factors to rate results, we assign higher scores (or "weightings") to the stronger indicators, giving them a stronger influence on the overall outcome.
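To make the idea concrete, here is a minimal sketch in Java of one way to combine weighted factors into an overall rating. All of the names (Factor, ResultRater) are invented for this illustration and are not from any particular Google tool.

import java.util.List;

// A factor observed in a test result, with a weighting that reflects
// how strong an indicator it is. Hypothetical names for illustration.
class Factor {
  final String name;
  final double weight;  // higher weight = stronger indicator
  final double score;   // 0.0 (unsatisfactory) .. 1.0 (satisfactory)

  Factor(String name, double weight, double score) {
    this.name = name;
    this.weight = weight;
    this.score = score;
  }
}

class ResultRater {
  // Weighted average: stronger indicators pull the overall rating
  // further than weaker ones.
  static double rate(List<Factor> factors) {
    double weightedSum = 0.0;
    double totalWeight = 0.0;
    for (Factor f : factors) {
      weightedSum += f.weight * f.score;
      totalWeight += f.weight;
    }
    return totalWeight == 0.0 ? 0.0 : weightedSum / totalWeight;
  }
}

With a scheme like this, a strong indicator such as a crash (say, weight 10) drags the overall rating down far more than a weak one such as a cosmetic glitch (weight 1).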
Posted by Harry Robinson, Software Engineer in Test
Several readers have commented that our current blog slogan, "Life is too short for manual testing," implies that we don't value manual and exploratory testing. In fact, we are big fans of exploratory testing, and we intended the message to be liberating, not insulting.
Manual testing can find bugs quickly and with little overhead in the short run. But it can be expensive and exhausting in a long project. And manual testing is gated by how fast and long humans can work. Running millions of test sequences and combinations by hand would take longer than most people's lifetimes - life is literally too short for manual testing to reach all the bugs worth reaching.
We originally featured the "Life is too short ..." slogan on T-shirts at the Google London Test Automation Conference. One theme of that conference was that it makes sense to get machines to do some of the heavy lifting in software testing, leaving human testers free to do the kinds of testing that people do well.
If you'd like to find out more about computer-assisted testing, check out the LTAC videos as well as Cem Kaner's excellent STAR 2004 presentation on High Volume Test Automation. And if you can wait a bit, I will be giving a talk on "The Bionic Exploratory Tester" at CAST 2007 in Bellevue, Washington, in July.
I asked Jon Bach's opinion on the slogan, and he suggested that what we are really trying to say is:
Life's too short to only use an approach for testing that relies solely on a human's ability to execute a series of mouse clicks and keystrokes when the processing power that makes computers so useful can be leveraged to execute these tests, freeing testers from especially mundane or repetitive testing so that their brains can be used for higher order tests that computers can't do yet.
I agree, but it would've been a heck of a T-shirt. :-)
Future slogans:
Testing is about being willing to try different approaches and entertain different perspectives, so a single slogan can't do it justice. We are planning to feature different slogans on a regular basis, and already have a few of our favorites lined up. If you've got a slogan to share, we'd love to hear it. Post it in the comments below or email us.
For a class, try writing a corresponding set of test methods, each of which describes one responsibility of the object, with the name of the class under test as the implicit first word of the sentence. For example, in Java:
class HtmlLinkRewriterTest ... {
  void testAppendsAdditionalParameterToUrlsInHrefAttributes() {...}
  void testDoesNotRewriteImageOrJavascriptLinks() {...}
  void testThrowsExceptionIfHrefContainsSessionId() {...}
  void testEncodesParameterValue() {...}
}
This can be read as:
HtmlLinkRewriter appends additional parameter to URLs in href attributes.
HtmlLinkRewriter does not rewrite image or JavaScript links.
HtmlLinkRewriter throws exception if href contains session ID.
HtmlLinkRewriter encodes parameter value.
Benefits
The tests emphasize the object's responsibilities (or features) rather than its public methods and inputs/outputs. This makes it easier for future engineers to learn what the class does without having to delve into the code.
These naming conventions can help point out smells. For example, when it's hard to construct a sentence where the first word is the class under test, it suggests the test may be in the wrong place. And classes that are hard to describe in general often need to be broken down into smaller classes with clearer responsibilities.
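As a hypothetical illustration of that smell (all names here are invented for this example):

class HtmlLinkRewriterTest ... {
  // Try reading this as a sentence: "HtmlLinkRewriter session cookie
  // is valid"? It doesn't parse with HtmlLinkRewriter as the subject,
  // which hints that this test (and perhaps the behavior itself)
  // belongs in something like a hypothetical SessionValidatorTest.
  void testSessionCookieIsValid() {...}
}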
Additionally, tools can help engineers understand code more quickly:
(This example shows a class in IntelliJ with the TestDox plugin giving an overview of the test.)
Remember to download this episode of Testing on the Toilet, print it, and flyer your office.