[QA] Flow browser tests: issues in improving them

Željko Filipin zfilipin at wikimedia.org
Mon Jul 7 14:28:11 UTC 2014


On Wed, Jul 2, 2014 at 10:52 PM, S Page <spage at wikimedia.org> wrote:

> I'd like to meet with the brains in QA about Flow browser tests. Chris
> McMahon is going on vacation and Željko might be difficult to schedule. So
> here are some topics for a non-meeting.  Thanks for ideas and comments, or
> we can meet later.
>

I am available for a pairing session this week, 8-9am San Francisco time
(5-6pm my time), Wednesday-Friday. If somebody is closer to my time zone
(UTC+2), we can pair earlier.


> Background runs once per scenario, when really I only need it once per
> feature (from Matt Flaschen)
>

What do you mean by that? Every scenario is a test in itself, and it
should be able to run in parallel with other scenarios. The most important
quality of a test is robustness, not speed. When possible we try to have
both, but when we have to choose, we always pick robustness over speed.


> * *generally useful?* Speed up the "Given I am logged in" by using an API
> call to login.
>

But that would log in the API client, not the browser, right? There are
some things that we could do to speed up the tests[1] (see #4, Pre-populate
the cookies).
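To make the cookie idea concrete, here is a rough sketch: log in once through the API, harvest the session cookies from the response headers, and seed them into the browser so "Given I am logged in" never touches the login form. Only the header-parsing helper below is self-contained; the client/browser wiring in the comments is an assumption about your setup (and the cookie names shown are illustrative, not guaranteed).

```ruby
# Parse an array of Set-Cookie header values into a { name => value } hash.
# Only the first "name=value" pair of each header matters; attributes such
# as Path and HttpOnly are dropped.
def parse_set_cookies(headers)
  headers.each_with_object({}) do |header, jar|
    name, value = header.split(';').first.split('=', 2)
    jar[name.strip] = value
  end
end

# Example usage (assumed wiring; adapt to your API client and driver):
# cookies = parse_set_cookies(api_response.headers['set-cookie'])
# browser.goto 'http://127.0.0.1/wiki/Special:BlankPage'
# cookies.each { |name, value| browser.cookies.add(name, value) }
```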


> If you look at e.g.
> https://saucelabs.com/jobs/465f257ffdeb422e95b029c5f8df3446 , after login
> the test does
>                         POST elements
>                                                using: "xpath"
>                                                value: ".//a"
> then spends the next 8 seconds getting dozens of attribute hrefs from the
> Main_Page. And then it seems to repeat this. What is it doing?
>

This sounds like a good topic for a pairing session. :)


> * Reuse existing content instead of always creating new topics and posts.
> E.g. for Load More/infinite scroll, instead of creating 11 topics, use
> "Pending there are 11 topics"; for sorting, use "Pending there are more
> than 2 topics" and add a reply to one of them; etc.
>

To make tests as robust as possible, every test should create everything it
needs and clean up after it is done. If speed is the problem,
creating/deleting should be done via the API[2].
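A sketch of that create-via-API pattern with the mediawiki_api gem [2]: each scenario gets its own uniquely named fixture, so parallel runs cannot collide, and teardown deletes exactly what setup created. The title helper below is self-contained; the client calls are left as comments because they need a live wiki, and the page/content names are only examples.

```ruby
require 'securerandom'

# Build a collision-proof fixture title, e.g. "Browser test topic 4f9c2a10",
# so scenarios running in parallel never touch each other's content.
def unique_title(prefix = 'Browser test topic')
  "#{prefix} #{SecureRandom.hex(4)}"
end

# Example usage against a test wiki:
# client = MediawikiApi::Client.new 'http://127.0.0.1/w/api.php'
# client.log_in ENV['MEDIAWIKI_USER'], ENV['MEDIAWIKI_PASSWORD']
# title = unique_title
# client.create_page "Talk:#{title}", 'fixture content'  # setup
# ... run the scenario against that page ...
# client.delete_page "Talk:#{title}", 'test cleanup'     # teardown
```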


> *generally useful?* Always check for ResourceLoader errors
>
>    - We want a way (@RLcheck annotation?) to assert
>         And page has no ResourceLoader errors
>    on start and end of every single page
>

If we have consensus that this is a good thing, we can make it so.
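One way such an assertion could look: the JavaScript below collects each module's state via mw.loader (getModuleNames and getState are real MediaWiki APIs), and a small Ruby helper filters that hash for failures so the step can fail with a readable message. The step wiring itself is an assumed sketch, not existing code.

```ruby
# JavaScript to run in the browser: collect every module's load state.
RL_STATES_JS = <<-JS
  var states = {};
  mw.loader.getModuleNames().forEach(function (m) {
    states[m] = mw.loader.getState(m);
  });
  return states;
JS

# Given a { module => state } hash, return only the modules that failed.
def resource_loader_errors(states)
  states.select { |_module, state| %w[error missing].include?(state) }
end

# In a Cucumber step definition (sketch):
# Then(/^page has no ResourceLoader errors$/) do
#   errors = resource_loader_errors(browser.execute_script(RL_STATES_JS))
#   expect(errors).to be_empty, "ResourceLoader failures: #{errors.keys.join(', ')}"
# end
```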


> *generally useful?* Always check for a pink errorbox on the page
> https://bugzilla.wikimedia.org/show_bug.cgi?id=61304
>
>    - Similarly this should be a default assertion.
>
>
The same as above.


> *Increase Flow browser test reliability*
>
> Problem: Typical test creates a topic or post on Talk:Flow_QA, then checks
> elements in the first topic or post on the board. But in Continuous
> Integration (and sometimes on shared labs servers), other browsers are
> simultaneously hitting the same page, and we get weird failures.
>
> Solutions:
>
>    - Start a new board for some tests. This would mean adding a dedicated
>    Flow_QA namespace on test hosts, and adding it to $wgFlowOccupyNamespaces.
>
>
>    - Change tests to target *the topic or post they just created.* Tests
>    know what the text of the topic is, so search for that and look within it
>    for elements.
>

I vote for tests creating things they need and cleaning up after they are
done.
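The "target what you just created" idea could look like this: since the test knows the exact title of the topic it posted, it can anchor every lookup to that topic instead of trusting "first on the board". The matcher below is self-contained; the `.flow-topic` markup in the usage comment is an assumption about Flow's HTML, so check the real class names before relying on it.

```ruby
# Anchor the title as a whole-string regexp so a substring match against
# somebody else's simultaneously created topic cannot succeed.
def exact_text_matcher(title)
  /\A#{Regexp.escape(title)}\z/
end

# Example usage with watir-webdriver (selectors are illustrative):
# topic_title = "Browser test topic #{SecureRandom.hex(4)}"
# ... create the topic ...
# heading = browser.h2(text: exact_text_matcher(topic_title))
# topic = heading.parent  # then assert only within this topic's element
```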


> Problem: Flow tests require MEDIAWIKI_USER to have admin and oversight
> rights because they assume block user, delete topic/post, and suppress
> topic/post are available. Maybe tests should skip if the MEDIAWIKI_USER
> doesn't have the right. Or could we grant the Selenium_user these rights
> but only in a dedicated Flow_QA namespace?
> https://bugzilla.wikimedia.org/show_bug.cgi?id=67158
>

We should run tests on machines/sites that are set up to run tests, so
problems like this do not happen.



> *New features for new tests*
>
> *generally useful*? To test Thank, mentions appearing in the
> thanked/mentioned user's notifications, and future subscription features,
> we need a second QA user.
>

This is doable. Is there a problem here? The most robust approach would be
for every test to create the user(s) it needs.
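If tests create the users they need, a throwaway second account could be provisioned roughly like this (a sketch: the mediawiki_api gem does provide create_account, but check your MediaWiki version's account-creation requirements; the password and prefix are placeholders).

```ruby
require 'securerandom'

# Usernames must be unique per run so parallel jobs never share state.
def unique_username(prefix = 'FlowTestUser')
  "#{prefix}#{SecureRandom.hex(3)}"
end

# Example usage:
# client = MediawikiApi::Client.new 'http://127.0.0.1/w/api.php'
# second_user = unique_username
# client.create_account second_user, 'testpass123'
# ... Thank/mention second_user, then log in as them to check notifications
```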

Željko
--
1: https://saucelabs.com/resources/selenium/speed-up-your-selenium-tests
2: https://rubygems.org/gems/mediawiki_api