Test cases are no good if they're not used regularly. We've had some
test cases sitting in the MediaWiki repository since the beginning of
time, and they've largely gone untouched since. They've been allowed
to bitrot to the point of being practically useless. The problem is
that they were never run regularly (or at all), so nobody cared
whether they passed. The rare exception to this is the parser tests,
which have been used very heavily since their inception, and are now
integrated into Code Review--any commit to core code causes the
parser tests to be run to check for regressions.
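For anyone unfamiliar with the parser tests, the basic shape is: each case pairs an input wikitext snippet with the HTML the parser is expected to produce, and the runner flags any mismatch as a regression. Here's a minimal sketch of that idea in Python--note that the real tests drive the actual (PHP) MediaWiki parser, and parse() below is just a hypothetical toy stand-in that handles bold markup:

```python
# Minimal sketch of a parser-test-style regression check.
# parse() is a toy stand-in for the real MediaWiki parser; it only
# converts '''bold''' markup, enough to show the input/expected shape.
import re

def parse(wikitext):
    """Toy parser: converts '''x''' to <b>x</b>."""
    return re.sub(r"'''(.+?)'''", r"<b>\1</b>", wikitext)

# Each case: (name, input wikitext, expected HTML)
CASES = [
    ("plain text", "hello", "hello"),
    ("bold", "'''hello'''", "<b>hello</b>"),
]

def run_cases(cases):
    """Run every case; return (name, passed) pairs for reporting."""
    return [(name, parse(src) == expected) for name, src, expected in cases]

for name, passed in run_cases(CASES):
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

Hooking a runner like this into Code Review is what makes it useful: a failing case points straight at the commit that broke it.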
If any new tests are to be successful, I think they should be
_required_ to be integrated into the code review process. Running
tests automatically and displaying the results prominently makes it
easy to track when regressions occur, so it's easier to fix them when
they happen--which they do. I'd like to see a
whole lot of other tests created to cover many aspects of the code,
and keeping them as simple and straightforward as possible would be
ideal. Making tests easy to write makes developers more likely to
include them with their patches. If you make the learning curve too
steep, developers won't bother, and you've shot yourself in the foot :)
There was also a comment on the talk page from Michael Dale mentioning
that this should integrate with Code Review. This is definitely a good
idea, and I'll try to work it into the plan.
As for the tests, simple ones seem fairly easy to create: you can use
Selenium IDE to record actions in the browser, then export the script
in any of several languages (I'm assuming we'll use PHP). I think the
hardest part is going to be keeping the tests up to date with the code.
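To give a feel for what a recorded test turns into, here's a sketch of the general shape in Python (again, a real exported test would likely be PHP and would drive an actual WebDriver; FakeDriver is a stand-in I made up so the sketch runs without a browser, and the form element ids are illustrative):

```python
# Sketch of the shape of a Selenium-IDE-style recorded test: a linear
# sequence of navigate/type/click steps plus assertions. FakeDriver is
# a hypothetical stand-in for a real browser driver.
import unittest

class FakeDriver:
    """Records actions instead of driving a browser."""
    def __init__(self):
        self.url = None
        self.actions = []
    def get(self, url):
        self.url = url
    def type(self, locator, text):
        self.actions.append(("type", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))

class TestEditPage(unittest.TestCase):
    def test_edit_and_save(self):
        driver = FakeDriver()
        # Recorded steps: open an edit page, type text, save.
        driver.get("http://localhost/wiki/index.php?title=Sandbox&action=edit")
        driver.type("id=wpTextbox1", "Hello from a recorded test")
        driver.click("id=wpSave")
        self.assertEqual(driver.actions, [
            ("type", "id=wpTextbox1", "Hello from a recorded test"),
            ("click", "id=wpSave"),
        ])
```

The fragility is visible even in a sketch this small: rename a form field in the skin and every recorded test that references it breaks, which is exactly the maintenance burden I'm worried about.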
Respectfully,
Ryan Lane