I spent some time today looking at automated test failures via
CloudBees/Jenkins (https://wmf.ci.cloudbees.com), and a pretty
common theme among tests that fail inconsistently is
"Watir::Wait::TimeoutError". Here's an example of a recent
failure that falls into this category:

https://wmf.ci.cloudbees.com/job/browsertests-commons.wikimedia.beta.wmflabs.org-linux-chrome/463/testReport/%28root%29/UploadWizard/Navigate_to_Describe_page/

From previous experience working with SauceLabs, I know this is not
unusual: browser tests inherently generate a lot of network traffic,
and some latency is inevitable.

What I'm wondering is whether it might be a good idea to use the
page-object "wait_until" method more widely. For example, we
currently use it in aftv5_steps.rb
(https://github.com/wikimedia/qa-browsertests/blob/master/features/step_definitions/aftv5_steps.rb).

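To make this concrete, here's a rough sketch of the kind of step I have
in mind. The UploadPage class, its continue_button element, the step
text, and the 10-second timeout are all invented for illustration; the
point is just the page-object wait_until call:

When(/^I continue past the upload page$/) do
  on(UploadPage) do |page|
    # Wait (up to 10 seconds) for the button to be visible before
    # clicking, instead of assuming the page has finished rendering.
    page.wait_until(10, 'Continue button never became visible') do
      page.continue_button_element.visible?
    end
    page.continue_button
  end
end
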
I realize that adding any kind of sleep or wait behavior to a test
increases overall test execution time, but I'm thinking it's more
important to have fewer failing tests overall, so that folks can focus
their troubleshooting efforts on test failures that may be a
consequence of actual bugs (and not just timeouts).

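Worth noting, too, that a conditional wait is cheaper than a fixed
sleep in the common case: it returns as soon as the condition holds, so
the full timeout is only paid on runs that were going to be slow (or
broken) anyway. Roughly (element name and timeout invented for the
example):

# A fixed sleep always costs the full 10 seconds, even when the page
# rendered instantly.
sleep 10
page.continue_button

# A conditional wait costs at most 10 seconds, and usually far less:
# when_present returns the element as soon as it appears, then we click.
page.continue_button_element.when_present(10).click
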
I'd love to hear other opinions on this topic, so please speak up if
you have one ;)

Thanks,

Jeff