def checkout(cart):
    if not cart.items:
        raise Error("cart empty")
    return checkout_internal(cart)

def checkout(cart):
    if cart.items:
        raise Error("cart empty")
    return checkout_internal(cart)
log.Infof("Found %d (%.2f %%)!", e, float64(e)*100.0 / total)
log.Infof("Found %d (%.2f %%)!", e, float64(e)*100.0 - total)
Error.create((key + " disabled"));
Error.create(((key + " disabled") + 1));
showCart := len(cart.GetItems()) > 0
showCart := len(cart.GetItems()) < 0
if (__name__ == '__main__'):
if (__name__ != '__main__'):
message = ('id ' + run_idx)
message = ('id ' * run_idx)
slo = (20 * time.Second)
slo = (20 * time.Second) + 1
_TIMEOUT = (60 * 10)
_TIMEOUT = (60 / 10)
df = df.replace([numpy.inf, -numpy.inf], numpy.nan)
df = df.replace([numpy.inf, -numpy.inf - 1], numpy.nan)
if _, err := c.Del(req); err != nil { log.Errorf("cleanup failed: %v", cerr) }
if _, err := c.Del(req); err == nil { log.Errorf("cleanup failed: %v", cerr) }
if (!(!status.ok())) { LOG(WARNING) << "Updating dependency graph failed" << status; }
func (s *Serv) calculate(in int32) int {
    if val, ok := s.cache[in]; ok {
        return val
    }
    val := s.calc(in)
    s.cache[in] = val
    return val
}

func (s *Serv) calculate(in int32) int {
    val := s.calc(in)
    s.cache[in] = val
    return val
}
absl::Minutes(10) - elapsed;
absl::Minutes(-abs(10)) - elapsed;
| Reason for Flakiness | Tips for Triaging | Type of Remedy |
| --- | --- | --- |
| Improper initialization or cleanup. | Look for compiler warnings about uninitialized variables. Inspect initialization and cleanup code. Check that the environment is set up and torn down correctly. Verify that test data is correct. | Explicitly initialize all variables with proper values before their use. Properly set up and tear down the testing environment; consider an initial test that verifies the state of the environment (see the setup/teardown sketch after this table). |
| Invalid assumptions about the state of test data. | Rerun the test(s) independently. | Make tests independent of any state from other tests and previous runs (see the fresh-test-data sketch after this table). |
| Invalid assumptions about the state of the system, such as the system time. | Explicitly check for system dependency assumptions. | Remove or isolate the SUT's dependencies on aspects of the environment that you do not control (see the injected-clock sketch after this table). |
| Dependencies on execution time, expecting asynchronous events to occur in a specific order, waiting without timeouts, or race conditions between the tests and the application. | Log the times when accesses to the application are made. As part of debugging, introduce delays in the application to check for differences in test results. | Add synchronization elements to the tests so that they wait for specific application states (see the wait-with-timeout sketch after this table). Disable unnecessary caching to have a predictable timeline for the application responses. Note: do NOT add arbitrary delays, as these can become flaky again over time and slow down the test unnecessarily. |
| Dependencies on the order in which the tests are run (similar to the second case above). | | Make tests independent of each other and of any state from previous runs. |
| Failure to allocate enough resources for the SUT, thus preventing it from running. | Check the logs to see whether the SUT came up. | Allocate sufficient resources. |
| Improper scheduling of the tests, so that they “collide” and cause each other to fail. | Explicitly run the tests independently and in a different order. | Make tests runnable independently of each other. |
| Insufficient system resources to satisfy the test requirements (similar to the first case, but here resources are consumed while running the workflow). | Check the system logs to see whether the SUT ran out of resources. | Fix memory leaks or similar resource “bleeding.” Allocate sufficient resources to run the tests. |
| Race conditions. | Log accesses of shared resources. | Add synchronization elements to the tests so that they wait for specific application states. Note: do NOT add arbitrary delays, as these can become flaky again over time. |
| Uninitialized variables. | Look for compiler warnings about uninitialized variables. | Explicitly initialize all variables with proper values before their use. |
| Being slow to respond or being unresponsive to the stimuli from the tests. | Log the times when requests and responses are made. | Check for and remove any causes of delays. |
| Memory leaks. | Look at memory consumption during test runs; use tools such as Valgrind to detect leaks (see the tracemalloc sketch after this table). | Fix the programming error that causes the memory leak. The Wikipedia article on memory leaks has an excellent discussion of these types of errors. |
| Oversubscription of resources. | | |
| Changes to the application (or dependent services) out of sync with the corresponding tests. | Examine the revision history. | Institute a policy requiring code changes to be accompanied by tests. |
| Networking failures or instability. | Check for hardware errors in the system logs. | Fix the hardware errors or run the tests on different hardware. |
| Disk errors. | | |
| Resources being consumed by other tasks/services not related to the tests being run. | Examine system process activity. | Reduce the activity of other processes on the test system(s). |
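
A minimal sketch of the setup/teardown remedy from the first row of the table, using Python's unittest. The test class, the temporary working directory, and the `cart_items` state are hypothetical stand-ins for whatever environment your tests depend on.

```python
import shutil
import tempfile
import unittest


class CheckoutTest(unittest.TestCase):
    def setUp(self):
        # Build a fresh, fully initialized environment for every test.
        self.workdir = tempfile.mkdtemp()   # never reuse a previous run's directory
        self.cart_items = []                # explicit initial value, not leftovers

    def tearDown(self):
        # Tear the environment down so the next test starts from a known state.
        shutil.rmtree(self.workdir, ignore_errors=True)

    def test_environment_is_clean(self):
        # An initial test that verifies the state of the environment itself.
        self.assertEqual(self.cart_items, [])


if __name__ == '__main__':
    unittest.main()
```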
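
For the rows about state left behind by other tests or previous runs, one approach is to have each test build its own data rather than assume shared fixtures. The `make_cart` helper below is hypothetical, sketched only to show the pattern.

```python
import unittest
import uuid


def make_cart():
    # Hypothetical helper: every call returns brand-new test data,
    # so no test depends on what an earlier test (or run) left behind.
    return {"id": str(uuid.uuid4()), "items": []}


class CartTest(unittest.TestCase):
    def test_add_item(self):
        cart = make_cart()      # fresh data, not a module-level shared cart
        cart["items"].append("book")
        self.assertEqual(len(cart["items"]), 1)

    def test_empty_cart(self):
        cart = make_cart()      # independent of test_add_item and of run order
        self.assertEqual(cart["items"], [])
```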
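
For the system-time row, a common way to isolate the dependency is to inject the clock instead of reading it inside the SUT. The `is_expired` function and its `now_fn` parameter are assumptions made for illustration, not an API from the original text.

```python
import datetime
import unittest


def is_expired(deadline, now_fn=datetime.datetime.now):
    # The SUT takes the clock as a parameter instead of reaching for the
    # real system time, so tests control "now" explicitly.
    return now_fn() > deadline


class ExpiryTest(unittest.TestCase):
    def test_not_expired_before_deadline(self):
        fixed_now = datetime.datetime(2024, 1, 1, 12, 0, 0)
        deadline = datetime.datetime(2024, 1, 2)
        self.assertFalse(is_expired(deadline, now_fn=lambda: fixed_now))

    def test_expired_after_deadline(self):
        fixed_now = datetime.datetime(2024, 1, 3)
        deadline = datetime.datetime(2024, 1, 2)
        self.assertTrue(is_expired(deadline, now_fn=lambda: fixed_now))
```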
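
For the rows that recommend synchronization elements instead of arbitrary delays, the sketch below polls for a specific application state with an explicit deadline; the `job.status()` call in the usage comment is a hypothetical API.

```python
import time


def wait_for_state(get_state, wanted, timeout_s=30.0, poll_s=0.1):
    """Poll until get_state() returns `wanted`, or fail after `timeout_s` seconds.

    This replaces a bare time.sleep(N): the test proceeds as soon as the
    application reaches the state it needs, and fails loudly (reporting the
    last observed state) instead of hanging forever.
    """
    deadline = time.monotonic() + timeout_s
    state = get_state()
    while state != wanted:
        if time.monotonic() > deadline:
            raise AssertionError(
                f"timed out waiting for {wanted!r}, last state was {state!r}")
        time.sleep(poll_s)
        state = get_state()
    return state


# Usage in a test, assuming a hypothetical `job` object:
# wait_for_state(lambda: job.status(), "DONE", timeout_s=60)
```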
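
For the memory-leak row, Valgrind covers native code; for Python-level allocations, the standard-library tracemalloc module can compare snapshots taken before and after the workload, as in this sketch. The `run_workload` callable is a hypothetical placeholder for the code under test.

```python
import tracemalloc


def report_top_allocations(run_workload, limit=5):
    # Snapshot allocations before and after the workload and print the
    # call sites whose memory use grew the most.
    tracemalloc.start()
    before = tracemalloc.take_snapshot()
    run_workload()
    after = tracemalloc.take_snapshot()
    tracemalloc.stop()

    for stat in after.compare_to(before, "lineno")[:limit]:
        print(stat)


# Example: report_top_allocations(lambda: [bytearray(1024) for _ in range(1000)])
```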