On Tue, Oct 27, 2009 at 12:58 PM, Naoko Komura nkomura@wikimedia.org wrote:
I thought I'd share the early concept of automating user interface testing using Selenium. The following plan was outlined by Ryan Lane. The goal is to have a central location for client testing, open up test case submission to MediaWiki authors, and allow test cases to be reused simultaneously across multiple users.
http://usability.wikimedia.org/wiki/Resources#Interaction_testing_automation
Feel free to add your comments and input to the discussion page (preferred over this email thread).
I will keep you all posted on the progress.
Thanks,
- Naoko
Usability Initiative
-- Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Didn't we just discuss this recently? :) Indeed, yes: improving our test suite is one of the perennial discussions on this list, along with rewriting the parser and trying to properly sort non-English locales. We're very good at beating the dead horse here. A couple of things come to mind from the last time we discussed this, so we don't have to rehash the obvious.
Test cases are no good if they're not used regularly. We've had some test cases sitting in the MediaWiki repository since the beginning of time, and they have largely gone untouched. They've been allowed to bitrot to the point of being pretty much useless. The problem is that they were never run regularly (or at all), so nobody cared whether they passed. The rare exception is the parser tests, which have been used very heavily since their inception and are now integrated into Code Review--any commit to core code causes the parser tests to be run to check for regressions.
If any new tests are to be successful, I think they should be _required_ to be integrated into the code review process. Running tests automatically and clearly displaying the results makes it easier to track and identify when regressions occurred, and so easier to fix them when they happen--which they do. I'd like to see a whole lot of other tests created to cover more aspects of the code, and keeping them as simple and straightforward as possible would be ideal. Making tests easy to write makes developers more likely to include them with their patches. If you make the learning curve too steep, developers won't bother, and you've shot yourself in the foot :)
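For what it's worth, the parser-test pattern being praised here--simple input/expected pairs fed through a runner that reports regressions--can be sketched in a few lines. The toy "parser", case names, and function names below are invented purely for illustration (MediaWiki's real runner is the PHP parser test harness in the maintenance scripts); the point is just how low the barrier to writing a new case can be:

```python
# Sketch of a parser-test-style runner: each case is (name, input, expected),
# and the runner reports which cases regressed. All names here are illustrative.

def toy_parse(text):
    """Stand-in parser: converts '''bold''' wiki markup into <b>...</b> tags."""
    out = []
    bold = False
    while "'''" in text:
        before, _, text = text.partition("'''")
        out.append(before)
        out.append("</b>" if bold else "<b>")
        bold = not bold
    out.append(text)
    return "".join(out)

# Adding a test is just adding a tuple -- no framework knowledge needed.
CASES = [
    ("plain text", "hello", "hello"),
    ("bold markup", "a '''b''' c", "a <b>b</b> c"),
]

def run_cases(cases, parse):
    """Run every case; return a list of (name, expected, got) failures."""
    failures = []
    for name, wikitext, expected in cases:
        got = parse(wikitext)
        if got != expected:
            failures.append((name, expected, got))
    return failures

if __name__ == "__main__":
    failed = run_cases(CASES, toy_parse)
    for name, expected, got in failed:
        print("FAIL %s: expected %r, got %r" % (name, expected, got))
    print("%d/%d passed" % (len(CASES) - len(failed), len(CASES)))
```

Hooking a runner like this into the commit pipeline is what keeps the cases from bitrotting: a red result on every commit is what makes people care.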
Just my 0.02USD
-Chad