On Sun, Apr 8, 2012 at 10:52 AM, Platonides <Platonides@gmail.com> wrote:
I wasn't able to make Selenium work (presumably Selenium 1.0). How hard would it be to make this work? Who would be responsible for sorting out any problems encountered doing that and ensuring all developers can run those tests locally? (Actually, I'd try this before creating too many tests on that platform.)
Because Selenium manipulates browsers at the UI level, whether a test passes or fails depends on the environment the test runs against. Ultimately I would like to have the labs beta wikis serve as the test environment of record, but we're not there yet. Until then, my approach would be to grow the number of these tests slowly and to keep them shallow (but still useful) as the test environment becomes more dependable.
As with Selenium in any language, running tests locally requires having some infrastructure in place. I have not yet written detailed setup instructions, but the prototype I mentioned earlier is fully functional, and a few people in the Ruby testing community have already tried it out and offered some critique.
Say I've fixed a bug and want to add a test. How would I do it? In which language? Do I need to know Ruby for that?
Assuming that the change is manifested in the UI, one would add a test according to a Page Object model: define the structure of the page in question if it is not already defined, then create a test to manipulate that structure.
In Ruby there is a standard way to define a Page Object with the 'page-object' gem, and a standard way to write a test with RSpec. Documentation for both is widely available.
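For illustration, the pattern itself (independent of any gem) can be sketched in plain Ruby. The names here (LoginPage, FakeBrowser, the field names) are hypothetical stand-ins, not part of any MediaWiki suite; a real test would use the 'page-object' gem driving an actual browser, with the assertions written in RSpec:

```ruby
# Stand-in for a Selenium/Watir browser object, so the sketch is
# runnable without a real browser.
class FakeBrowser
  attr_reader :fields, :clicked

  def initialize
    @fields  = {}
    @clicked = []
  end

  def set(name, value)
    @fields[name] = value
  end

  def click(name)
    @clicked << name
  end
end

# The page object: one class per page, encapsulating that page's
# structure so tests manipulate it through named methods rather
# than raw element locators scattered through the test code.
class LoginPage
  def initialize(browser)
    @browser = browser
  end

  def username=(value)
    @browser.set(:username, value)
  end

  def password=(value)
    @browser.set(:password, value)
  end

  def log_in
    @browser.click(:login_button)
  end
end

# A test then reads at the level of user actions:
browser = FakeBrowser.new
page    = LoginPage.new(browser)
page.username = 'Selenium_user'
page.password = 'secret'
page.log_in
```

The point of the pattern is that when the page's markup changes, only the page class needs updating; the tests that use it stay the same.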
This is harder in other languages, because there is no standard Page Object model or standard reporting structure in, say, Python. This is an issue for Mozilla, for example, where anyone who wants to contribute a test must first understand a custom Python implementation of the Page Object model and reporting structure.
Unlike unit tests, browser/UI tests are by their nature expensive to run and expensive to maintain. The Selenium test suite should have a tight focus on exercising high-risk/high-value paths through the application. I don't foresee having Selenium tests for every aspect of every feature ever developed, but rather having a manageable set of browser tests with a clear focus on value.
Suppose there's a test:
- Failing in Jenkins but passing locally.
Since Jenkins would be the test environment of record, a failure there would indicate either an issue with the code in the test environment or a need to fix the test itself.
OR
- Failing just for a single person.
Could indicate any number of things depending on circumstance.
OR
- Failing but it's apparently ok.
Browser tests need maintenance. A test that fails without indicating a real problem will be either fixed or removed.
OR
- Passing but should be failing.
This would indicate a useless test. Again, such a test would be either removed or altered to be useful.
How do you debug it? Do you need to know Ruby for that? To what extent?
My idea right now is that maintaining the Selenium test suite as run by Jenkins would be primarily a QA activity, with contributions from any other interested parties in the greater community or among the MediaWiki/Wikipedia dev/ops community. Contributions from developers would be welcome but not required.