bhagya - QA engineer for Calcey Technologies, will be committing Selenium tests to /trunk/testing/selenium
janesh - same
Roan Kattouw (Catrope)
Should we define some sort of convention for extensions to develop selenium tests? I.e. perhaps some of the tests should live in a testing folder of each extension or in /trunk/testing/extensionName? So that it's easier to modularize the testing suite?
--michael
Roan Kattouw wrote:
bhagya - QA engineer for Calcey Technologies, will be committing Selenium tests to /trunk/testing/selenium
janesh - same
Roan Kattouw (Catrope)
On Wed, Mar 10, 2010 at 1:43 PM, Michael Dale mdale@wikimedia.org wrote:
Should we define some sort of convention for extensions to develop selenium tests? I.e. perhaps some of the tests should live in a testing folder of each extension or in /trunk/testing/extensionName? So that it's easier to modularize the testing suite?
--michael
Perhaps "/testing/extensions/<name>/" - since not all extensions will ever have tests, it would make the directory easier and cleaner to sort.
-Peachey
On 03/10/2010 09:28 AM, K. Peachey wrote:
On Wed, Mar 10, 2010 at 1:43 PM, Michael Dale mdale@wikimedia.org wrote:
Should we define some sort of convention for extensions to develop selenium tests? I.e. perhaps some of the tests should live in a testing folder of each extension or in /trunk/testing/extensionName? So that it's easier to modularize the testing suite?
--michael
Perhaps "/testing/extensions/<name>/" - since not all extensions will ever have tests, it would make the directory easier and cleaner to sort.
-Peachey
This should work the same way as extension parser tests, with the tests in the extension's directory, included using a config variable. Otherwise, extensions are split over two places: there's no easy way to just tarball them, and it's less clear from the SVN commit paths that all the work went to the extension alone. Keeping the tests with the extension also provides more flexibility if people want it.
If we were to write selenium tests, should we be using the "selenese" HTML format or should we use the PHP/Python client libraries? (I'd strongly prefer the latter given the deficiencies of Selenese and the recorder; shameless plug: we rewrote both at http://go-test.it)
Conrad
2010/3/10 Conrad Irwin conrad.irwin@googlemail.com:
This should work the same way as extension parser tests, with the tests in the extension's directory, included using a config variable. Otherwise, extensions are split over two places: there's no easy way to just tarball them, and it's less clear from the SVN commit paths that all the work went to the extension alone. Keeping the tests with the extension also provides more flexibility if people want it.
On the other hand, we intend to set up a server that runs these tests automatically. Instead of having to pull these tests from a million different places, it'd be nice if this server could just update from one dir and be done with it, which is exactly why I set it up the way I did. I agree that this is kind of the enemy of nice bundling of tests with extensions, but I think having an automatic test setup that tests stuff /before/ it's released is more important and more frequently used than providing easy access to tests to those .3% of downloaders that are actually gonna run them.
Roan Kattouw (Catrope)
Roan Kattouw wrote:
2010/3/10 Conrad Irwin:
This should work the same way as extension parser tests, with the tests in the extension's directory, included using a config variable. Otherwise, extensions are split over two places: there's no easy way to just tarball them, and it's less clear from the SVN commit paths that all the work went to the extension alone. Keeping the tests with the extension also provides more flexibility if people want it.
On the other hand, we intend to set up a server that runs these tests automatically. Instead of having to pull these tests from a million different places, it'd be nice if this server could just update from one dir and be done with it, which is exactly why I set it up the way I did. I agree that this is kind of the enemy of nice bundling of tests with extensions, but I think having an automatic test setup that tests stuff /before/ it's released is more important and more frequently used than providing easy access to tests to those .3% of downloaders that are actually gonna run them.
Roan Kattouw (Catrope)
Shouldn't that server have the extensions checked out and installed on a local mediawiki before running them? (I am assuming it's a single server testing against localhost)
2010/3/10 Platonides Platonides@gmail.com:
Shouldn't that server have the extensions checked out and installed on a local mediawiki before running them? (I am assuming it's a single server testing against localhost)
The point is it's not a single server testing them against localhost: it's a virtual machine testing against a number of other VMs, each running a different OS/browser.
Roan Kattouw (Catrope)
The point is it's not a single server testing them against localhost: it's a virtual machine testing against a number of other VMs, each running a different OS/browser.
That is how Wikimedia will test, but we should also write the scripts in a generic enough way that they can run against localhost as well. We can't let every developer use the tesla cluster for testing; I doubt we have enough capacity for that.
That said, I think it'll be difficult for people to run selenium tests against their own installs, as many tests are going to expect the wiki to be configured in a specific way. We plan on handling this on the automated test server by dynamically reconfiguring wikis to run tests against. There is probably a way to do this that makes it easy for people to run locally as well though.
Are we tracking the testing architecture plans anywhere? I have a bunch of notes written up that would be good to share. It'll also help clear up some of the confusion around the subject.
Respectfully,
Ryan Lane
On Wed, Mar 10, 2010 at 11:08 AM, Lane, Ryan Ryan.Lane@ocean.navo.navy.mil wrote:
That is how Wikimedia will test, but we should also write the scripts in a generic enough way that they can run against localhost as well. We can't let every developer use the tesla cluster for testing; I doubt we have enough capacity for that.
It would definitely be an extremely bad thing if it were hard to test on localhost. Ideally, it should be as easy to run as parser tests. We can't ask people (whether committers or not) to write tests along with their patches if they can't confirm that the tests pass.
That said, I think it'll be difficult for people to run selenium tests against their own installs, as many tests are going to expect the wiki to be configured in a specific way. We plan on handling this on the automated test server by dynamically reconfiguring wikis to run tests against. There is probably a way to do this that makes it easy for people to run locally as well though.
Parser tests just set a bunch of configuration variables to particular values when they start. From maintenance/parserTests.inc, line 507 ff.:
$settings = array(
    'wgServer' => 'http://localhost',
    'wgScript' => '/index.php',
    'wgScriptPath' => '/',
etc. This works fine, although when a new config variable is introduced, it might have to have a value added to the array if it causes test failures with some settings.
Are we tracking the testing architecture plans anywhere? I have a bunch of notes written up that would be good to share. It'll also help clear up some of the confusion around the subject.
I'm a little surprised by this altogether, actually. I had seen some mention of Selenium adoption, but nothing concrete. Was this discussed publicly anywhere, or has this kind of project started to migrate behind closed doors now that there are so many paid people? I'm subscribed to all major MediaWiki development discussion fora, as far as I know, but I've felt increasingly out of the loop lately. Having more paid developers is great (and so is a good testing framework!), but that shouldn't mean that decisions are made where volunteers can't weigh in.
2010/3/10 Aryeh Gregor Simetrical+wikilist@gmail.com:
I'm a little surprised by this altogether, actually. I had seen some mention of Selenium adoption, but nothing concrete. Was this discussed publicly anywhere, or has this kind of project started to migrate behind closed doors now that there are so many paid people? I'm subscribed to all major MediaWiki development discussion fora, as far as I know, but I've felt increasingly out of the loop lately. Having more paid developers is great (and so is a good testing framework!), but that shouldn't mean that decisions are made where volunteers can't weigh in.
We're initially setting up Selenium for use by the usability initiative, with the idea of extending it to the rest of the software when we have time/resources for that. You're right that this should probably have been communicated earlier, but until a few weeks ago the status was "Ryan's patiently waiting for the Selenium servers to arrive and get set up".
Roan Kattouw (Catrope)
On Wed, Mar 10, 2010 at 5:10 PM, Roan Kattouw roan.kattouw@gmail.com wrote:
We're initially setting up Selenium for use by the usability initiative, with the idea of extending it to the rest of the software when we have time/resources for that. You're right that this should probably have been communicated earlier, but until a few weeks ago the status was "Ryan's patiently waiting for the Selenium servers to arrive and get set up".
Okay, but it seems like several employees (including you) already knew this, and several volunteers (including me) did not. This implies that there's some communications channel that employees are reading, but not volunteers. Is it some public place that we just don't visit? If so, where? Or do you have internal face-to-face meetings, private mailing lists, something like that? Assuming Wikimedia intends to maintain a bazaar development model, it's quite important that interested volunteers can be on the same page as employees.
Okay, but it seems like several employees (including you) already knew this, and several volunteers (including me) did not. This implies that there's some communications channel that employees are reading, but not volunteers. Is it some public place that we just don't visit? If so, where? Or do you have internal face-to-face meetings, private mailing lists, something like that? Assuming Wikimedia intends to maintain a bazaar development model, it's quite important that interested volunteers can be on the same page as employees.
I was tasked to make a selenium cluster for the usability initiative. It wasn't necessarily for the developer community as a whole. The usability initiative wanted its QA engineers to be able to test in a quicker fashion. The tests weren't even necessarily going to become part of a normal set of MediaWiki tests.
Now that we are thinking about integrating selenium tests into the development process, we are opening up to the community to get ideas. We aren't trying to keep anything secret. This is essentially the first step in the process. We had a brief initial discussion on this at the tech meeting.
That said, this discussion isn't fruitful. Can we please move the discussion back to planning how this is going to work?
Respectfully,
Ryan Lane
On 03/12/2010 06:33 PM, Ryan Lane wrote:
That said, this discussion isn't fruitful. Can we please move the discussion back to planning how this is going to work?
Sure, the main problem I see with the proposed setup [1] is that you have no way of ensuring the MediaWiki you are hitting is in a consistent state; having tests fail because of edit conflicts, modified pages, users that already exist or have been blocked, etc. as a result of other tests is very annoying. (Tests can't be relied on to clean up after themselves, and one failure should not cause the rest of the suite to fail until it is manually fixed.)
To some extent this can be worked around by using carefully selected random parameters for many things, but that is a horrible hack, and requires extra work in writing test scripts; though as I assume/hope they will be written in PHP, it's not a huge difficulty, providing we teach everyone how.
Much cleaner is to have a setup like the current parser tests, where each test can specify which articles it expects to exist with what content (selenium tests may also wish to specify which users exist with what privileges/preferences as well), in addition to being able to tweak configuration settings (otherwise we're going to need a fair few MediaWikis even to test configurations that are live at Wikimedia). This is quite readily doable if you run a MediaWiki instance on the same machine as the test runner, and I imagine it would also be possible to do by building a communication protocol between the two, though that seems like a waste of effort.
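(For illustration only: a rough sketch of how such a per-test fixture could look if it followed the parser-test model. The helper methods here - setMwConfig(), addDBPage(), addTestUser() - are hypothetical, not existing APIs.)

<?php
require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

class WatchlistTest extends PHPUnit_Extensions_SeleniumTestCase {
    function setUp() {
        // Hypothetical fixture helpers: put the wiki into a known state
        // before the browser touches it.
        $this->setMwConfig( array(
            'wgEnableUploads' => false,
            'wgLanguageCode'  => 'en',
        ) );
        $this->addDBPage( 'Test page', 'Some known wikitext' );
        $this->addTestUser( 'Selenium user', 'secret', array( 'sysop' ) );

        $this->setBrowser( '*firefox' );
        $this->setBrowserUrl( 'http://localhost/' );
    }
}
?>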
This won't be a problem for local developers: the test runner, the browser and MediaWiki are all on localhost; the tests can be written in PHP (or exported from the Selenium IDE into PHP) and run with a wrapper script (maintenance/seleniumTests.php - or whatever) that handles the configuration; output is handled by PHPUnit; all happy.
For a selenium-grid setup it's not so obvious how to do it. I'd suggest that, instead of having developers run scripts against the grid themselves, they simply request a run on a server designed for this task, which runs the test through the grid using a hostname that will resolve back to the runner. This allows easy local control over the MediaWiki instance, and makes it reasonably easy to write an interface so that normal developers who won't/can't run Selenium can run tests against MediaWiki.
[developer] (remote)
    |
    | "request run" (could just be php seleniumTests.php over SSH)
    |
[selenium runner <-> MediaWiki] (both in PHP, load balance at will)
    |                               |
    | "run test"                    |
    |                               |
[selenium grid]                     | "HTTP requests"
    |                               |
    | "run test"                    |
    |                               |
[browser virtual machines] --------+
I don't think reconfiguring MediaWiki per test script, or per set of test scripts, is an outrageous overhead; Selenium is a very "enterprise" tool, and booting virtual machines with browsers in them is likely much more costly than that. The advantages it gives are obvious: tests should not fail because of faults in the testing environment, that just wastes time.
Cleaning the state of the browsers is probably not so critical here, but it's another "gotcha": if one test leaves the user logged in, and the next test tries to click the "Login" link, it explodes, and vice versa. Selenium can help somewhat here (if you persuade it to, and to varying extents in various browser versions), but it's likely easier to cleanse the database.
[1] http://usability.wikimedia.org/wiki/File:Selenium_architecture_diagram.svg
Hope this gives some food for thought, it's a bit longer than I intended.
Conrad
Sure, the main problem I see with the proposed setup [1] is that you have no way of ensuring the MediaWiki you are hitting is in a consistent state; having tests fail because of edit conflicts, modified pages, users that already exist or have been blocked, etc. as a result of other tests is very annoying. (Tests can't be relied on to clean up after themselves, and one failure should not cause the rest of the suite to fail until it is manually fixed.)
The diagram was created with manual testing against the usability prototypes in mind. I'll update it to include automated testing with a test runner and code review when we work out the plan.
To some extent this can be worked around by using carefully selected random parameters for many things, but that is a horrible hack, and requires extra work in writing test scripts; though as I assume/hope they will be written in PHP, it's not a huge difficulty, providing we teach everyone how.
Much cleaner is to have a setup like the current parser tests, where each test can specify which articles it expects to exist with what content (selenium tests may also wish to specify which users exist with what privileges/preferences as well), in addition to being able to tweak configuration settings (otherwise we're going to need a fair few MediaWikis even to test configurations that are live at Wikimedia). This is quite readily doable if you run a MediaWiki instance on the same machine as the test runner, and I imagine it would also be possible to do by building a communication protocol between the two, though that seems like a waste of effort.
This is what I was hoping for. I think the test runner should reconfigure the wiki for each test. If we want to be able to run multiple tests in parallel, we should have multiple wikis that can be reconfigured, and tested against independently.
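(A very rough sketch of that idea, to show the shape only; reconfigureWiki(), resetDatabase() and runSeleniumSuite() are hypothetical names, not existing functions.)

<?php
// Hypothetical runner loop: each suite gets one wiki out of a small pool,
// which is reconfigured and reset before the suite runs. With independent
// wikis in the pool, the same approach would allow suites to run in parallel
// without stepping on each other's state.
function runAllSuites( array $testSuites, array $wikiPool ) {
    foreach ( $testSuites as $i => $suite ) {
        $wiki = $wikiPool[ $i % count( $wikiPool ) ];
        reconfigureWiki( $wiki, $suite->getSettings() );  // LocalSettings overrides
        resetDatabase( $wiki, $suite->getFixtures() );    // known pages and users
        runSeleniumSuite( $wiki, $suite );                // dispatch to grid or localhost
    }
}
?>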
Do the parser tests only test core parser functionality, or do they also test extensions, like ParserFunctions, and SyntaxHighlight GeSHi? It is likely we'll have tests that will need to dynamically include extensions, and configure them dynamically as well.
This won't be a problem for local developers: the test runner, the browser and MediaWiki are all on localhost; the tests can be written in PHP (or exported from the Selenium IDE into PHP) and run with a wrapper script (maintenance/seleniumTests.php - or whatever) that handles the configuration; output is handled by PHPUnit; all happy.
For a selenium-grid setup it's not so obvious how to do it. I'd suggest that, instead of having developers run scripts against the grid themselves, they simply request a run on a server designed for this task, which runs the test through the grid using a hostname that will resolve back to the runner. This allows easy local control over the MediaWiki instance, and makes it reasonably easy to write an interface so that normal developers who won't/can't run Selenium can run tests against MediaWiki.
For the grid setup, we were exploring the possibility of a test runner that automatically tests commits, and reports them to Code Review, like the parser tests do now. For the most part, people shouldn't be hitting the grid, only bots, unless we have a QA team that is doing something special.
I don't think reconfiguring MediaWiki per test script, or per set of test scripts, is an outrageous overhead; Selenium is a very "enterprise" tool, and booting virtual machines with browsers in them is likely much more costly than that. The advantages it gives are obvious: tests should not fail because of faults in the testing environment, that just wastes time.
Yeah, depending on the browser, OS, and hardware specs of the machine, browsers can take 10-70 seconds to run even simple scripts. The overhead of re-configuring the wiki is nothing in comparison.
Cleaning the state of the browsers is probably not so critical here, but it's another "gotcha": if one test leaves the user logged in, and the next test tries to click the "Login" link, it explodes, and vice versa. Selenium can help somewhat here (if you persuade it to, and to varying extents in various browser versions), but it's likely easier to cleanse the database.
When selenium launches a browser, it does so using a clean profile. It launches a fresh browser from a new profile for every test it runs. This shouldn't be an issue.
Respectfully,
Ryan Lane
On 03/12/2010 07:48 PM, Ryan Lane wrote:
Do the parser tests only test core parser functionality, or do they also test extensions, like ParserFunctions, and SyntaxHighlight GeSHi? It is likely we'll have tests that will need to dynamically include extensions, and configure them dynamically as well.
Yes, if you look at extensions/Poem/Poem.php, you see
$wgParserTestFiles[] = dirname( __FILE__ ) . "/poemParserTests.txt";
I'd hope this would work in exactly the same way with $wgSeleniumTestFiles. The parserTests are currently in their own light-weight file format (which isn't great, but it's certainly not terrible). I imagine that selenium tests will be (initially at least) based on the PHP exuded by the selenium IDE [see below], with code added to the setUp() for MediaWiki configuration.
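(If $wgSeleniumTestFiles did come to exist and mirrored $wgParserTestFiles, extension registration might look like the sketch below; the variable and the file name are hypothetical.)

// In extensions/Poem/Poem.php, next to the parser-test line quoted above:
$wgSeleniumTestFiles[] = dirname( __FILE__ ) . "/poemSeleniumTests.php";

// A maintenance/seleniumTests.php wrapper would then loop over the registered
// files and hand each test case to PHPUnit's Selenium runner.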
For the grid setup, we were exploring the possibility of a test runner that automatically tests commits, and reports them to Code Review, like the parser tests do now. For the most part, people shouldn't be hitting the grid, only bots, unless we have a QA team that is doing something special.
Great, though obviously some cleverness is needed to avoid running all tests on all browsers on every commit, but nothing too challenging.
When selenium launches a browser, it does so using a clean profile. It launches a fresh browser from a new profile for every test it runs. This shouldn't be an issue.
That's what the docs say :), I did not find this to always be the case.
Conrad
<?php
require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

class Example extends PHPUnit_Extensions_SeleniumTestCase {
    function setUp() {
        $this->setBrowser("*chrome");
        $this->setBrowserUrl("http://en.wiktionary.org/");
    }

    function testMyTestCase() {
        $this->open("/wiki/idiom");
        $this->type("searchInput", "adsa");
        $this->click("searchGoButton");
        $this->waitForPageToLoad("30000");
    }
}
?>
On Fri, Mar 12, 2010 at 3:12 PM, Conrad Irwin conrad.irwin@googlemail.com wrote:
On 03/12/2010 07:48 PM, Ryan Lane wrote:
Do the parser tests only test core parser functionality, or do they also test extensions, like ParserFunctions, and SyntaxHighlight GeSHi? It is likely we'll have tests that will need to dynamically include extensions, and configure them dynamically as well.
Yes, if you look at extensions/Poem/Poem.php, you see
$wgParserTestFiles[] = dirname( __FILE__ ) . "/poemParserTests.txt";
I'd hope this would work in exactly the same way with $wgSeleniumTestFiles. The parserTests are currently in their own light-weight file format (which isn't great, but it's certainly not terrible). I imagine that selenium tests will be (initially at least) based on the PHP exuded by the selenium IDE [see below], with code added to the setUp() for MediaWiki configuration.
For the grid setup, we were exploring the possibility of a test runner that automatically tests commits, and reports them to Code Review, like the parser tests do now. For the most part, people shouldn't be hitting the grid, only bots, unless we have a QA team that is doing something special.
Great, though obviously some cleverness is needed to avoid running all tests on all browsers on every commit, but nothing too challenging.
When selenium launches a browser, it does so using a clean profile. It launches a fresh browser from a new profile for every test it runs. This shouldn't be an issue.
That's what the docs say :), I did not find this to always be the case.
Conrad
<?php
require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

class Example extends PHPUnit_Extensions_SeleniumTestCase {
    function setUp() {
        $this->setBrowser("*chrome");
        $this->setBrowserUrl("http://en.wiktionary.org/");
    }

    function testMyTestCase() {
        $this->open("/wiki/idiom");
        $this->type("searchInput", "adsa");
        $this->click("searchGoButton");
        $this->waitForPageToLoad("30000");
    }
}
?>
This is exactly how I think it should work. Our test cases should be very lightweight, with as much work as possible deferred to a common area (abstract MediaWiki_TestCase, anyone?). Each time we write a test we're reinventing the wheel, and I think that should be avoided. We need a common way to setUp()--with variants for specific test cases, of course--a working MediaWiki instance that is consistent every time.
We want writing testcases to be as easy as humanly possible, or people won't write them. This is exactly the situation we're in right now.
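(A minimal sketch of what such a shared base class might look like; the class name comes from the suggestion above, and resetWikiState() is a hypothetical hook, not an existing API.)

<?php
require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

// Common wiki/browser setup lives here once, so individual test cases only
// describe the interactions they actually test.
abstract class MediaWiki_TestCase extends PHPUnit_Extensions_SeleniumTestCase {
    function setUp() {
        $this->setBrowser( '*firefox' );
        $this->setBrowserUrl( 'http://localhost/' );
        $this->resetWikiState();
    }

    // Hypothetical hook: restore a known database/configuration state.
    protected function resetWikiState() {
        // e.g. reload fixture pages and users, reapply configuration overrides
    }
}

class LoginLinkTest extends MediaWiki_TestCase {
    function testLoginLinkPresent() {
        $this->open( '/index.php/Main_Page' );
        $this->assertElementPresent( 'link=Log in' );
    }
}
?>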
-Chad
On Fri, Mar 12, 2010 at 3:48 PM, Ryan Lane rlane32@gmail.com wrote:
For the grid setup, we were exploring the possibility of a test runner that automatically tests commits, and reports them to Code Review, like the parser tests do now. For the most part, people shouldn't be hitting the grid, only bots, unless we have a QA team that is doing something special.
Unless it's very easy to set up Selenium on localhost on all platforms (is it?), committers should be able to submit code to run on the test servers. Otherwise, there's no way to tell if your code causes no regressions without committing it -- and code should be tested *before* commit. (As well as an automatic run after commit, since we don't know if people actually did test before committing.) As far as I know, projects with test servers tend to also allow committers to submit code changes to them for testing.
Unless it's very easy to set up Selenium on localhost on all platforms (is it?), committers should be able to submit code to run on the test servers. Otherwise, there's no way to tell if your code causes no regressions without committing it -- and code should be tested *before* commit. (As well as an automatic run after commit, since we don't know if people actually did test before committing.) As far as I know, projects with test servers tend to also allow committers to submit code changes to them for testing.
It is not easy to set up Selenium to test all browsers on all platforms. I've been at it for a couple weeks now, and I'm running into a lot of bugs and strange configuration issues.
I'm not opposed to letting committers access the grid. Unfortunately, we may not have the resources to do it with the current system. After we do some initial testing, we should know how much load it can handle, and determine from there whether or not we can support direct access from committers.
Respectfully,
Ryan Lane
2010/3/12 Aryeh Gregor Simetrical+wikilist@gmail.com:
Okay, but it seems like several employees (including you) already knew this, and several volunteers (including me) did not. This implies that there's some communications channel that employees are reading, but not volunteers. Is it some public place that we just don't visit? If so, where? Or do you have internal face-to-face meetings, private mailing lists, something like that?
Most of this communication happened by internal e-mail (we don't have an internal mailing list, but we do have some group e-mail aliases that are mostly used for, well, internal stuff, mostly stuff that's not ready yet or just not interesting to the general public). Some of it happened in #wikipedia_usability, which is a public channel keeping public logs. Of course most of the usability team are in the same office, so obviously face-to-face conversations and meetings happen; these are not necessarily "internal" or "secret" or whatever, just a natural consequence of being in the same room.
Again, as Ryan pointed out, we started out exploring Selenium for internal usage only; he described that pretty well, so I won't elaborate on that.
Roan Kattouw (Catrope)
Aryeh Gregor wrote:
On Wed, Mar 10, 2010 at 5:10 PM, Roan Kattouw roan.kattouw@gmail.com wrote:
We're initially setting up Selenium for use by the usability initiative, with the idea of extending it to the rest of the software when we have time/resources for that. You're right that this should probably have been communicated earlier, but until a few weeks ago the status was "Ryan's patiently waiting for the Selenium servers to arrive and get set up".
Okay, but it seems like several employees (including you) already knew this, and several volunteers (including me) did not. This implies that there's some communications channel that employees are reading, but not volunteers. Is it some public place that we just don't visit? If so, where? Or do you have internal face-to-face meetings, private mailing lists, something like that? Assuming Wikimedia intends to maintain a bazaar development model, it's quite important that interested volunteers can be on the same page as employees.
I made an announcement about interaction testing automation back in October and received some feedback from some of you. http://lists.wikimedia.org/pipermail/wikitech-l/2009-October/045801.html
One of the suggestions was to start small and expand the scope later. I thought it was a brilliant and practical idea. The system is designed to offload manual testing for basic interaction regression tests and put the focus of manual testing on new features and things where visual confirmation is needed. We have one system up for OS X and another system is in the process of being built. We are still experimenting with how the whole system works with less manual intervention. Integrating the code review process, extension validation, central reporting and publishing, etc. will be nice enhancements to add on as we prove this automation system works in a small work group and mature the system and the process as a whole.
The intention is to expand this automation to the wider developer community, and hopefully creating test cases will become part of the development process. :-)
Cheers,
- Naoko
Parser tests just set a bunch of configuration variables to particular values when they start. From maintenance/parserTests.inc, line 507 ff.:
$settings = array(
    'wgServer' => 'http://localhost',
    'wgScript' => '/index.php',
    'wgScriptPath' => '/',
etc. This works fine, although when a new config variable is introduced, it might have to have a value added to the array if it causes test failures with some settings.
It would be good if all testing frameworks were fairly consistent. I believe a project is going to be set up to coordinate this.
I'm a little surprised by this altogether, actually. I had seen some mention of Selenium adoption, but nothing concrete. Was this discussed publicly anywhere, or has this kind of project started to migrate behind closed doors now that there are so many paid people? I'm subscribed to all major MediaWiki development discussion fora, as far as I know, but I've felt increasingly out of the loop lately. Having more paid developers is great (and so is a good testing framework!), but that shouldn't mean that decisions are made where volunteers can't weigh in.
As Roan mentioned, we are primarily setting up a selenium grid architecture for testing for the usability initiative. The long-term goal for this is to integrate with code review, and to have a general selenium test suite for MediaWiki. We are in the initial stages of planning how selenium will fit in with the MediaWiki development process, and we welcome all input. If you, or anyone, want to be included in the planning process, let us know.
I was asking if we had a planning page so that we can keep this public and get input from all of the developers.
Here's the small amount of documentation we currently have:
http://usability.wikimedia.org/wiki/Resources#Interaction_testing_automation
http://usability.wikimedia.org/wiki/File:Selenium_architecture_diagram.svg
Respectfully,
Ryan Lane
It would be good if all testing frameworks were fairly consistent. I believe a project is going to be set up to coordinate this.
That's good; as the parser tests are the most widespread, copying that configuration seems easiest (notwithstanding all the other advantages mentioned previously).
As Roan mentioned, we are primarily setting up a selenium grid architecture for testing for the usability initiative. The long-term goal for this is to integrate with code review, and to have a general selenium test suite for MediaWiki. We are in the initial stages of planning how selenium will fit in with the MediaWiki development process, and we welcome all input. If you, or anyone, want to be included in the planning process, let us know.
Yes, I am interested - this mailing list or a http://mediawiki.org/wiki/Selenium would be the most findable.
http://usability.wikimedia.org/wiki/Resources#Interaction_testing_automation
http://usability.wikimedia.org/wiki/File:Selenium_architecture_diagram.svg
Thanks for the links.
Conrad
Roan Kattouw wrote:
2010/3/10 PlatonidesPlatonides@gmail.com:
Shouldn't that server have the extensions checked out and installed on a local mediawiki before running them? (I am assuming it's a single server testing against localhost)
The point is it's not a single server testing them against localhost: it's a virtual machine testing against a number of other VMs, each running a different OS/browser.
Roan Kattouw (Catrope)
I see. Still, sharing the checkout read-only with the client VMs is probably a good idea, since that ensures no client will be using non-synced tests.
On Wed, Mar 10, 2010 at 8:20 AM, Roan Kattouw roan.kattouw@gmail.com wrote:
On the other hand, we intend to set up a server that runs these tests automatically. Instead of having to pull these tests from a million different places, it'd be nice if this server could just update from one dir and be done with it, which is exactly why I set it up the way I did.
The way parser tests work is you do $wgParserTestFiles[] = 'foo' in the extension setup file. That seems like a much tidier and more self-contained way of doing it. You'll need a full MediaWiki checkout with the extensions enabled in LocalSettings.php to run the tests to begin with, I assume.
2010/3/10 Aryeh Gregor Simetrical+wikilist@gmail.com:
On Wed, Mar 10, 2010 at 8:20 AM, Roan Kattouw roan.kattouw@gmail.com wrote:
On the other hand, we intend to set up a server that runs these tests automatically. Instead of having to pull these tests from a million different places, it'd be nice if this server could just update from one dir and be done with it, which is exactly why I set it up the way I did.
The way parser tests work is you do $wgParserTestFiles[] = 'foo' in the extension setup file. That seems like a much tidier and more self-contained way of doing it. You'll need a full MediaWiki checkout with the extensions enabled in LocalSettings.php to run the tests to begin with, I assume.
Good point. Since we're trying to make these things mirror the parser tests (see other thread), it's a good idea to make the directory structure the same as well.
Roan Kattouw (Catrope)
2010/3/10 K. Peachey p858snake@yahoo.com.au:
Perhaps "/testing/extensions/<name>/" - since not all extensions will ever have tests, it would make the directory easier and cleaner to sort.
Yes, I've actually already created /trunk/testing/UsabilityIntiative for usability tests, but that could easily be renamed to /trunk/testing/extensions/UsabilityInitiative of course.
Roan Kattouw (Catrope)
On Wed, Mar 10, 2010 at 8:18 AM, Roan Kattouw roan.kattouw@gmail.com wrote:
Yes, I've actually already created /trunk/testing/UsabilityIntiative for usability tests, but that could easily be renamed to /trunk/testing/extensions/UsabilityInitiative of course.
It makes more sense to me to have it be in trunk/extensions/UsabilityInitiative/testing/. This gives a consistent model that even unofficial extensions can follow. It's also closer to how parser tests currently work -- see trunk/extensions/Cite/citeParserTests.txt, for instance.
* Roan Kattouw roan.kattouw@gmail.com [Tue, 9 Mar 2010 13:55:27 +0100]:
bhagya - QA engineer for Calcey Technologies, will be committing Selenium tests to /trunk/testing/selenium
janesh - same
Roan Kattouw (Catrope)
That's really interesting. I hope that someone will have time to provide documentation for such tests at www.mediawiki.org, so the trunk will not be a hidden treasure :-) Dmitriy