On 03/12/2010 06:33 PM, Ryan Lane wrote:
> That said, this discussion isn't fruitful. Can we please move the
> discussion back to planning how this is going to work?
Sure. The main problem I see with the proposed setup [1] is that you
have no way of ensuring the MediaWiki you are hitting is in a consistent
state. Having tests fail because of edit conflicts, modified pages, or
users that already exist or have been blocked as a result of other
tests is very annoying: tests can't be relied on to clean up after
themselves, and one failure should not cause the rest of the suite to
fail until it is manually fixed.
To some extent this can be worked around by using carefully selected
random parameters for many things, but that is a horrible hack, and it
requires extra work when writing test scripts; though as I assume/hope
they will be written in PHP, that's not a huge difficulty, provided we
teach everyone how.
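To make the hack concrete, it would amount to something like the sketch below (the helper name and prefix are my own invention for illustration, not an existing MediaWiki function):

```php
<?php
// Hypothetical helper: build a throwaway page title (or username) that
// is unlikely to collide with anything left behind by other tests.
function randomTestTitle( $prefix = 'SeleniumTest' ) {
	// Timestamp plus a random number makes collisions unlikely --
	// but only unlikely, which is exactly why this is a hack, and
	// every test author has to remember to use it.
	return $prefix . '_' . time() . '_' . mt_rand( 0, mt_getrandmax() );
}

echo randomTestTitle(), "\n";
```

Every value a test creates (titles, usernames, summaries) would need this treatment, which is why declared fixtures are cleaner.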
Much cleaner is to have a setup like the current parser tests, where
each test can specify which articles it expects to exist and with what
content (Selenium tests may also wish to specify which users exist,
with what privileges/preferences), in addition to being able to tweak
configuration settings (otherwise we're going to need a fair few
MediaWiki instances even to test the configurations that are live at
Wikimedia).
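For comparison, the current parser tests already declare per-test fixtures in roughly this form (the `!! article` block is the existing parserTests.txt syntax; extending it, or something like it, with user and configuration declarations for Selenium tests is the hypothetical part):

```
!! article
Expected page
!! text
Content this test assumes is present.
!! endarticle

!! test
Example test
!! input
[[Expected page]]
!! result
(expected HTML output goes here)
!! end
```

Each test then starts from a declared, known state instead of whatever the previous test left behind.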
This is quite readily doable if you run a MediaWiki instance on the
same machine as the test runner; I imagine it would also be possible
by building a communication protocol between the two, though that
seems like a waste of effort.
This won't be a problem for local developers: the test runner, the
browser and MediaWiki are all on localhost; the tests can be written in
PHP (or exported from the Selenium IDE into PHP) and run with a wrapper
script (maintenance/seleniumTests.php - or w/e) that handles the
configuration; output is handled by PHPUnit. All happy.
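As a sketch of the configuration handling such a wrapper might do ($wgEnableUploads and $wgLanguageCode are real MediaWiki settings, but the merge helper and its calling convention are purely illustrative):

```php
<?php
// Hypothetical: merge a test suite's configuration overrides into a
// known-good baseline before the wrapper starts the PHPUnit run.
function applySuiteConfig( array $baseline, array $overrides ) {
	// array_merge lets a suite tweak only the settings it needs,
	// while the wrapper keeps everything else at the baseline.
	return array_merge( $baseline, $overrides );
}

$baseline  = array( 'wgEnableUploads' => false, 'wgLanguageCode' => 'en' );
$overrides = array( 'wgEnableUploads' => true ); // e.g. an upload test suite

$config = applySuiteConfig( $baseline, $overrides );
// $config now has uploads enabled but keeps the baseline language.
```

The per-suite overrides could live alongside the fixture declarations, so a test run is fully described in one place.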
For a Selenium Grid setup it's not so obvious how to do this. I'd
suggest that, instead of having developers run scripts against the grid
themselves, they simply request a run on a server designed for this
task, which runs the test through the grid using a hostname that
resolves back to the runner. This allows easy local control over the
MediaWiki instance, and makes it reasonably easy to write an interface
for normal developers who won't/can't run Selenium to run tests against
MediaWiki.
[developer]  (remote)
     |
"request run"  (could just be php seleniumTests.php over SSH)
     |
[selenium runner <-> MediaWiki]  (both in PHP, load balance at will)
     |                |
"run test"            |
     |                |
[selenium grid]  "HTTP requests"
     |                |
"run test"            |
     |                |
[browser virtual machines]
I don't think reconfiguring MediaWiki per test script, or per set of
test scripts, is an outrageous overhead: Selenium is a very "enterprise"
tool, and booting virtual machines with browsers in them is likely much
more costly than that. The advantages are obvious; tests should not
fail because of faults in the testing environment, as that just wastes
time.
Cleaning the state of the browsers is probably not as critical here,
but it's another gotcha: if one test leaves the user logged in and the
next test tries to click the "Log in" link, it explodes, and vice
versa. Selenium can help somewhat here (if you persuade it to, and to
varying extents in various browser versions), but it's likely easier to
cleanse the database.
[1]
http://usability.wikimedia.org/wiki/File:Selenium_architecture_diagram.svg
Hope this gives some food for thought; it's a bit longer than I intended.
Conrad