Sometimes you need to test client-side JavaScript code that uses setTimeout() to do some work in the future. jsUnit contains the Clock.tick() method, which simulates time passing without causing the test to sleep. For example, this function will set up some callbacks to update a status message over the course of four seconds:


function showProgress(status) {
  status.message = "Loading";
  for (var time = 1000; time <= 3000; time += 1000) {
    // Append a '.' to the message every second for 3 secs.
    setTimeout(function() {
      status.message += ".";
    }, time);
  }
  setTimeout(function() {
    // Special case for the 4th second.
    status.message = "Done";
  }, 4000);
}


The jsUnit test for this function would look like this:


function testUpdatesStatusMessageOverFourSeconds() {
  Clock.reset(); // Clear any existing timeout functions on the event queue.
  var status = {};
  showProgress(status); // Call our function.
  assertEquals("Loading", status.message);
  Clock.tick(2000); // Call any functions on the event queue that have
                    // been scheduled for the first two seconds.
  assertEquals("Loading..", status.message);
  Clock.tick(2000); // Same thing again, for the next two seconds.
  assertEquals("Done", status.message);
}


This test runs very quickly; it does not require four real seconds to complete.

Clock supports the functions setTimeout(), setInterval(), clearTimeout(), and clearInterval(). The Clock object is defined in jsUnitMockTimeout.js, which is in the same directory as jsUnitCore.js.
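Clock can drive interval-based code the same way. Here is a small sketch (the test and its counter are illustrative, not part of jsUnit itself) showing setInterval() and clearInterval() under the mock clock:

function testPollingStopsAfterClearInterval() {
  Clock.reset();
  var polls = 0;
  // Hypothetical polling code: count one poll per simulated second.
  var id = setInterval(function() {
    polls++;
  }, 1000);
  Clock.tick(3000); // Three simulated seconds: the callback fires three times.
  assertEquals(3, polls);
  clearInterval(id); // Cancel the interval...
  Clock.tick(3000); // ...so three more simulated seconds fire nothing.
  assertEquals(3, polls);
}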

Posted by Harry Robinson, Software Engineer in Test

Software testing is tough. It can be exhausting, and there is rarely enough time to find all the important bugs. Wouldn't it be nice to have a staff of tireless servants working day and night to make you look good? Well, those days are here. On Thursday, March 22, I'll give a lunchtime presentation titled "How to Build Your Own Robot Army" for the Quality Assurance SIG of the Software Association of Oregon.


Two decades ago, machine time was expensive, so test suites had to run as quickly and efficiently as possible. Today, CPUs are cheap, so it has become reasonable to move test creation onto the shoulders of a test machine army. But we're not talking about run-of-the-mill automated scripts that only do what you explicitly told them. We're talking about programs that create and execute tests you never thought of to find bugs you never dreamed of. From Orcs to Zergs to Droids to Cyborgs, this presentation will show how to create a robot test army using tools lying around on the Web. Most importantly, it will cover how to take appropriate credit for your army's work!

Posted by Harry Robinson, Software Engineer in Test

The first-ever industry Developer-Tester/Tester-Developer Summit was held at the Mountain View Googleplex on Saturday, February 24th. Hosted by Elisabeth Hendrickson and Chris McMahon, the all-day workshop consisted of experience reports and lightning talks, including:

  • Al Snow – Form Letter Generator Technique

  • Chris McMahon – Emulating User Actions in Random and Deterministic Modes

  • Dave Liebreich – Test Mozilla

  • David Martinez – Tk-Acceptance

  • Dave W. Smith – System Effects of Slow Tests

  • Harry Robinson – Exploratory Automation

  • Jason Reid – Not Trusting Your Developers

  • Jeff Brown – MBUnit

  • Jeff Fry – Generating Methods on the Fly

  • Keith Ray – ckr_spec

  • Kurman Karabukaev – Whitebox testing using Watir

  • Mark Striebeck – How to Get Developers and Testers to Work Closer Together

  • Sergio Pinon – UI testing + Cruise Control

There were also brainstorming exercises and discussions on the benefits that DT/TDs can bring to organizations and the challenges they face. Several participants have blogged about the Summit. The discussions continue at http://groups.google.com/group/td-dt-discuss.

If you spend your days coding and testing, try this opening exercise from the Summit. Imagine that:

T – – – – – – – – – – D

is a spectrum that has "Tester" at one end and "Developer" at the other. Where would you put yourself, and why?

Posted by Allen Hutchison, Engineering Manager

Some of the most difficult challenges in creating great software are guaranteeing that it works every time for every customer, ensuring that it scales well, and making it accessible to all users. Over the years, languages have become easier to work with, frameworks have become extensible enough to support many products, and integrated development environments have made software developers faster and more productive. But automation techniques, extensible testing frameworks, and easy-to-use test tools have lagged behind. While there are many good solutions for automated testing, there is plenty of room for innovation.

I'm happy to announce that Google will be hosting our 2nd Annual Google Test Automation Conference (GTAC) in our New York office on August 23 and 24, 2007. Our goal is to create a collegial atmosphere where participants can discuss challenges facing people on the cutting edge of test automation, evaluate solutions for meeting those challenges, and have a little fun.

Call for Proposals
We're looking for speakers with exciting ideas and new approaches to test automation. If you have a subject you'd like to talk about, please send an email to gtac-submission@google.com with a description of your 45-minute session in 500 words or less (no attachments, please). The deadline for submissions is April 6.

We're planning to have 10 people give presentations at the conference, followed by adequate time for discussion. If you'd like to attend as a non-speaker, please check back here on May 7, when we'll post our slate of speakers and information about how to attend.

Posted by Allen Hutchison, Engineering Manager and Jay Han, Software Engineer in Test

The testing world has a lot of terms for the activity that we undertake every day. You'll often hear the words QA, QC, and Test Engineering used interchangeably. While any of them is usually enough to get your point across with a developer, it is helpful to think about what these terms mean and how they apply to the world of software testing. In the classic definition, QC is short for Quality Control, a process of verifying predefined requirements for quality. In assembly-line terms, this might involve pulling manufactured units off the line and verifying each step of the assembly process. For software, the QC function may involve checking the product against a set of predefined requirements and verifying that it meets them.

Quality Assurance, on the other hand, is much more about continuously and consistently improving and maintaining the processes that enable the QC job. We use the QC process to verify that a product does what we think it does, and we use the QA process to give us confidence that the product will meet the needs of customers. To that end, the QA process can be considered a meta-process that includes aspects of the QC process. It also goes beyond QC to influence usability and design, verifying that functionality is not only correct but useful.

Here at Google, we tend to take a third approach that we call Test Engineering. We look at it as a bridge between the meta world of QA and the concrete world of QC. This approach ensures that we get the opportunity to think about customers and their needs while still providing the results that day-to-day engineering projects require.

Our teams certainly work with Software Engineers in QA and QC roles, but we also work with product teams to ensure that a product is testable, that it is adequately unit tested, and that it can be automated even further. We often review design documents and ask for more test hooks in a project, and we implement mock objects and servers to help developers with their unit testing and to allow our teams to test components individually.
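As a minimal sketch of that last point (MockTransport and Uploader are illustrative names, not actual Google code), a mock object can stand in for a real server so that a component can be unit tested in isolation:

function MockTransport() {
  this.requests = []; // Record everything the component sends.
  this.cannedResponse = "OK"; // Answer to hand back, with no network involved.
}
MockTransport.prototype.send = function(request, callback) {
  this.requests.push(request);
  callback(this.cannedResponse); // Respond immediately and deterministically.
};

function testUploaderSendsExactlyOneRequest() {
  var transport = new MockTransport();
  var uploader = new Uploader(transport); // Component under test takes its transport by injection.
  uploader.upload("some data");
  assertEquals(1, transport.requests.length);
}

Because the mock records its calls and answers deterministically, the test exercises only the component's own logic and never touches a server.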

We put an emphasis on building automated tests so that we can let people do what people are good at and have computers do what computers are good at. That doesn't mean we never do manual testing; rather, we do the "right" amount of manual testing with a more human-oriented focus (e.g., exploratory testing), and we try to ensure that we never do repetitive manual testing.