Testing Blog
GTAC Coming to New York in the Spring
Tuesday, October 30, 2012
By Anthony Vallone on behalf of the GTAC Committee
The next and seventh GTAC (Google Test Automation Conference) will be held on April 23-24, 2013 at the beautiful Google New York office! We had a late start preparing for GTAC, but things are now falling into place. This will be the second time it is hosted at our second largest engineering office. We are also sticking with our tradition of changing the region each year, as the last GTAC was held in California.
The GTAC event brings together engineers from many organizations to discuss test automation. It is a great opportunity to present, learn, and challenge modern testing technologies and strategies. We will soon be recruiting speakers to discuss their innovations.
This year’s theme will be “Testing Media and Mobile”. In the past few years, substantial changes have taken place in both the media and mobile areas. Television is no longer the king of media. Over 27 billion videos are streamed in the U.S. per month. Over 1 billion people now own smartphones. HTML5 includes support for audio, video, and scalable vector graphics, which will liberate many web developers from their dependence on third-party media software. These are incredibly complex technologies to test. We are thrilled to be hosting this event in which many in the industry will share their innovations.
Registration information for speakers and attendees will soon be posted here and on the GTAC site (https://developers.google.com/gtac). Even though we will be focusing on “Testing Media and Mobile”, we will be accepting proposals for talks on other topics.
Why Are There So Many C++ Testing Frameworks?
Friday, October 26, 2012
By Zhanyong Wan - Software Engineer
These days, it seems that everyone is rolling their own C++ testing framework, if they haven't done so already. Wikipedia has a partial list of such frameworks. This is interesting because many OOP languages have only one or two major frameworks. For example, most Java people seem happy with either JUnit or TestNG. Are C++ programmers the do-it-yourself kind?
When we started working on Google Test (Google’s C++ testing framework), and especially after we open-sourced it, people began asking us why we were doing it. The short answer is that we couldn’t find an existing C++ testing framework that satisfied all our needs. This doesn't mean that these frameworks were all poorly designed or implemented. Rather, many of them had great ideas and tricks that we learned from. However, Google had a huge number of C++ projects that got compiled on various operating systems (Linux, Windows, Mac OS X, and later Android, among others) with different compilers and all kinds of compiler flags, and we needed a framework that worked well in all these environments and could handle many different types and sizes of projects.
Unlike Java, which has the famous slogan "Write once, run anywhere," C++ code is written in much more diverse environments. Due to the complexity of the language and the need to do low-level tasks, compatibility between different C++ compilers, and even between different versions of the same compiler, is poor. There is a C++ standard, but it's not well supported by compiler vendors. For many tasks you have to rely on unportable extensions or platform-specific functionality. This makes it hard to write a reasonably complex system that can be built with many different compilers and works on many platforms.
To make things more complicated, most C++ compilers allow you to turn off some standard language features in return for better performance. Don't like using exceptions? You can turn them off. Think dynamic_cast is bad? You can disable Run-Time Type Identification, the feature behind dynamic_cast and run-time access to type information. If you do any of these, however, code using these features will fail to compile. Many testing frameworks rely on exceptions, so they are automatically out of the question for us, since we turn off exceptions in many projects. (In case you are curious, Google Test doesn’t require exceptions or run-time type identification by default; when these language features are turned on, Google Test will try to take advantage of them and provide you with more utilities, like the exception assertions.)
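For example, when exceptions are enabled, the exception assertions let you verify that a statement throws (or doesn't throw) a given type. A minimal sketch, where widget and its At() method are hypothetical:

EXPECT_THROW(widget.At(999), std::out_of_range);  // must throw exactly this type
EXPECT_ANY_THROW(widget.At(999));                 // must throw something
EXPECT_NO_THROW(widget.At(0));                    // must not throw at all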
Why not just write a portable framework, then? Indeed, that's a top design goal for Google Test, and the authors of some other frameworks have tried this too. However, it comes at a cost. Cross-platform C++ development requires much more effort: you need to test your code with different operating systems, different compilers, different versions of each compiler, and different compiler flags (combine these factors and the task soon gets daunting); some platforms may not let you do certain things, so you have to find a workaround and guard the code with conditional compilation; different versions of compilers have different bugs, and you may have to revise your code to work around them all; and so on. In the end, it's hard unless you are happy with a bare-bones system.
So, I think a major reason that we have many C++ testing frameworks is that C++ is different in different environments, making it hard to write portable C++ code. John's framework may not suit Bill's environment, even if it solves John's problems perfectly.
Another reason is that some limitations of C++ make it impossible to implement certain features really well, and different people chose different ways to work around these limitations. One notable example is that C++ is a statically-typed language and doesn't support reflection. Most Java testing frameworks use reflection to automatically discover the tests you've written, so that you don't have to register them one by one. This is a good thing, as manually registering tests is tedious and you can easily write a test and forget to register it. Since C++ has no reflection, we have to do it differently. Unfortunately, there is no single best option. Some frameworks require you to register tests by hand, some use scripts to parse your source code to discover tests, and some use macros to automate the registration. We prefer the last approach and think it works for most people, but some disagree. Also, there are different ways to devise the macros and they involve different trade-offs, so the result is not clear cut.
Let’s see some actual code to understand how Google Test solves the test registration problem. The simplest way to add a test is to use the TEST macro (what else would we name it?):

TEST(Subject, HasCertainProperty) {
  … testing code goes here …
}
This defines a test method whose purpose is to verify that the given subject has the given property. The macro automatically registers the test with Google Test such that it will be run when the test program (which may contain many such TEST definitions) is executed.
Here’s a more concrete example that verifies that a Factorial() function works as expected for positive arguments:

TEST(FactorialTest, HandlesPositiveInput) {
  EXPECT_EQ(1, Factorial(1));
  EXPECT_EQ(2, Factorial(2));
  EXPECT_EQ(6, Factorial(3));
  EXPECT_EQ(40320, Factorial(8));
}
Finally, many C++ testing framework authors neglected extensibility and were happy just providing canned solutions, so we ended up with many solutions, each satisfying a different niche but none general enough. A versatile framework must have a good extension story. Let's face it: you cannot be all things to all people, no matter what. Instead of bloating the framework with rarely used features, we should provide good out-of-the-box solutions for maybe 95% of the use cases, and leave the rest to extensions. If I can easily extend a framework to solve a particular problem of mine, I will feel less motivated to write my own thing. Unfortunately, many framework authors don't seem to see the importance of extensibility. I think that mindset contributed to the plethora of frameworks we see today. In Google Test, we try to make it easy to expand your testing vocabulary by defining custom assertions that generate informative error messages. For instance, here’s a naive way to verify that an int value is in a given range:
bool IsInRange(int value, int low, int high) {
  return low <= value && value <= high;
}
...
EXPECT_TRUE(IsInRange(SomeFunction(), low, high));
The problem is that when the assertion fails, you only know that the value returned by SomeFunction() is not in the range [low, high]; you have no idea what that return value and the range actually are. This makes debugging the test failure harder.
You could provide a custom message to make the failure more descriptive:

EXPECT_TRUE(IsInRange(SomeFunction(), low, high))
    << "SomeFunction() = " << SomeFunction()
    << ", not in range [" << low << ", " << high << "]";
Except that this is incorrect as SomeFunction() may return a different answer each time. You can fix that by introducing an intermediate variable to hold the function’s result:
int result = SomeFunction();
EXPECT_TRUE(IsInRange(result, low, high))
    << "result (return value of SomeFunction()) = " << result
    << ", not in range [" << low << ", " << high << "]";
However, this is tedious and obscures what you are really trying to do, and it’s not a good pattern when you need to do the “is in range” check repeatedly. What we need is a way to abstract this pattern into a reusable construct.
Google Test lets you define a test predicate like this:
AssertionResult IsInRange(int value, int low, int high) {
  if (value < low)
    return AssertionFailure() << value << " < lower bound " << low;
  else if (value > high)
    return AssertionFailure() << value << " > upper bound " << high;
  else
    return AssertionSuccess() << value << " is in range ["
                              << low << ", " << high << "]";
}
Then the statement

EXPECT_TRUE(IsInRange(SomeFunction(), low, high))

may print (assuming that SomeFunction() returns 13):
Value of: IsInRange(SomeFunction(), low, high)
Actual: false (13 < lower bound 20)
Expected: true
The same IsInRange() definition also lets you use it in an EXPECT_FALSE context, e.g.
EXPECT_FALSE(IsInRange(AnotherFunction(), low, high))
could print:
Value of: IsInRange(AnotherFunction(), low, high)
Actual: true (25 is in range [20, 60])
Expected: false
This way, you can build a library of test predicates for your problem domain, and benefit from clear, declarative test code and descriptive failure messages.
In the same vein, Google Mock (our C++ mocking framework) allows you to easily define matchers that can be used exactly the same way as built-in matchers. Also, we have included an event listener API in Google Test for people to write plug-ins. We hope that people will use these features to extend Google Test/Mock for their own needs and contribute back extensions that might be generally useful.
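To give a flavor of how lightweight this is, here is a sketch using Google Mock's MATCHER macro to define a custom matcher; IsEven is our own name, not a built-in:

MATCHER(IsEven, "is an even number") {
  return (arg % 2) == 0;
}
...
EXPECT_THAT(SomeFunction(), IsEven());

On failure, the message incorporates the matcher's description, much like the AssertionResult example above.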
Perhaps one day we will solve the C++ testing framework fragmentation problem, after all. :-)
Hermetic Servers
Wednesday, October 03, 2012
By Chaitali Narla and Diego Salas
Consider a complex and rich web app. Under the hood, it is probably a maze of servers, each performing a different task and most talking to each other. Any user action navigates this server maze on its round-trip from the user to the datastores and back. A lot of Google’s web apps are like this, including GMail and Google+. So how do we write end-to-end tests for them?
The “End-To-End” Test
An end-to-end test in the Google testing world is a test that exercises the entire server stack from a user request to response. Here is a simplified view of the System Under Test (SUT) that an end-to-end test would assert. Note that the frontend server in the SUT connects to a third backend which this particular user request does not need.
One of the challenges to writing a fast and reliable end-to-end test for such a system is avoiding network access. Tests involving network access are slower than their counterparts that only access local resources, and accessing external servers might lead to flakiness due to lack of determinism or unavailability of the external servers.
Hermetic Servers
One of the tricks we use at Google to design end-to-end tests is Hermetic Servers.

What is a Hermetic Server? The short definition would be a “server in a box”. If you can start up the entire server on a single machine that has no network connection AND the server works as expected, you have a hermetic server! This is a special case of the more general “hermetic” concept, which applies to an isolated system not necessarily on a single machine.
Why is it useful to have a hermetic server? Because if your entire SUT is composed of hermetic servers, it could all be started on a single machine for testing; no network connection necessary! The single machine could be a physical or virtual machine.
Designing Hermetic Servers
The process for building a hermetic server starts early in the design phase of any new server. Some things we watch out for (a minimal sketch of the first and third points follows the list):
- All connections to other servers are injected into the server at runtime using a suitable form of dependency injection, such as commandline flags or Guice.
- All required static files are bundled in the server binary.
- If the server talks to a datastore, make sure the datastore can be faked with data files or in-memory implementations.
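As an illustration of the first and third points, here is a minimal C++ sketch; Datastore, InMemoryDatastore, and FrontendServer are hypothetical names, and a real server would inject many more dependencies:

#include <map>
#include <string>

// Abstract interface: the server never depends on a concrete
// network-backed datastore directly.
class Datastore {
 public:
  virtual ~Datastore() {}
  virtual std::string Get(const std::string& key) = 0;
};

// In-memory fake used by hermetic tests; no network access involved.
class InMemoryDatastore : public Datastore {
 public:
  virtual std::string Get(const std::string& key) { return data_[key]; }
  void Put(const std::string& key, const std::string& value) {
    data_[key] = value;
  }
 private:
  std::map<std::string, std::string> data_;
};

// The connection is injected at construction time, so a test can pass
// an InMemoryDatastore while production passes the real implementation.
class FrontendServer {
 public:
  explicit FrontendServer(Datastore* datastore) : datastore_(datastore) {}
  std::string HandleRequest(const std::string& key) {
    return datastore_->Get(key);
  }
 private:
  Datastore* datastore_;  // not owned
};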
Meeting the above requirements ensures we have a highly configurable server that has the potential to become a hermetic server. But it is not yet ready to be used in tests. We do a few more things to complete the package (the first point is sketched after the list):
- Make sure the connection points our test won't exercise have appropriate fakes or mocks to verify this non-interaction.
- Provide modules to easily populate datastores with test data.
- Provide logging modules that can help trace the request/response path as it passes through the SUT.
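For the first point, a Google Mock-based sketch of verifying non-interaction; Backend and its Fetch() method are hypothetical:

class MockBackend : public Backend {  // Backend is the hypothetical interface
 public:
  MOCK_METHOD1(Fetch, std::string(const std::string& query));
};
...
MockBackend mock_backend;
// This test's request should never reach the unused backend.
EXPECT_CALL(mock_backend, Fetch(testing::_)).Times(0);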
Using Hermetic Servers in tests
Let’s take the SUT shown earlier and assume all the servers in it are hermetic servers. Here is how an end-to-end test for the same user request would look:
The end-to-end test does the following steps (a skeletal version follows the list):
- starts the entire SUT, as shown in the diagram, on a single machine
- makes requests to the server via the test client
- validates responses from the server
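Such a test might look like the sketch below, which assumes hypothetical InProcessServer, TestClient, and Response helpers standing in for whatever your codebase provides:

class EndToEndTest : public ::testing::Test {
 protected:
  virtual void SetUp() {
    // Start the whole SUT in one process; every address is local.
    backend_.Start();
    frontend_.Start("--backend_address=" + backend_.Address());
  }
  InProcessServer backend_;   // hypothetical hermetic server wrapper
  InProcessServer frontend_;
};

TEST_F(EndToEndTest, UserRequestRoundTrips) {
  TestClient client(frontend_.Address());
  Response response = client.Get("/user/profile");
  EXPECT_EQ(200, response.status_code());
}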
One thing to note here is that the mock server connection for the backend is not needed in this test. If we wish to test a request that needs this backend, we would have to provide a hermetic server at that connection point as well.
This end-to-end test is more reliable because it uses no network connection. It is faster because everything it needs is available in memory or on the local hard disk. We run such tests in our continuous builds, so they run for each changelist affecting any of the servers in the SUT. If the test fails, the logging module helps track down where the failure occurred in the SUT.
We use hermetic servers in a lot of end-to-end tests. Some common cases include:
- Startup tests for servers using Guice, to verify that there are no Guice errors on startup.
- API tests for backend servers.
- Micro-benchmark performance tests.
- UI and API tests for frontend servers.
Conclusion
Hermetic servers do have some limitations. They will increase your test’s runtime, since you have to start the entire SUT each time you run the end-to-end test. If your test runs with limited resources such as memory and CPU, hermetic servers might push your test over those limits as the server interactions grow in complexity. And the dataset size you can use in the in-memory datastores will be much smaller than in a production datastore.
Hermetic servers are a great testing tool. Like all other tools, they need to be used thoughtfully where appropriate.