I've just finished the SVN migration.
If you've logged into the new server via SSH before the migration,
you'll run into SSH host key problems. To fix this, edit
~/.ssh/known_hosts and remove the entries for formey and
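(If your OpenSSH provides it, running "ssh-keygen -R <hostname>" for
each affected host will remove the stale entries without hand-editing
the file.)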
Just a quick message to reintroduce myself to people who might be
wondering who this new committer is.
I am from France and discovered Wikipedia in 2002. Having gotten
interested in bug fixing, I was eventually granted commit access by Tim
or Brion back in 2003 or 2004.
I haven't contributed a lot of code but have an overall knowledge of
MediaWiki. I mostly fixed funny bugs, converted double quotes to single
quotes and occasionally synced stuff to live (read: blank page on the
live site).
I have been back around for a few weeks and am willing to contribute
again to MediaWiki development. I have no aim in particular besides
having fun and meeting some new people. My areas of interest are, in no
special order:
- parser (still have to understand Tim's preprocessing stuff)
- AJAX features
My secret project is to migrate to git.
I beg your pardon for my very basic English :^b
I have a short user page at:
My main page is on the French Wikipedia (French language only):
Ashar "hashar" Voultoiz
In the next hour or two we'll be migrating SVN to a new server.
Nothing is changing from the usage perspective. During this time SVN
may be inaccessible for a few minutes. Let me know if you are having
an access issue, and I'll fix it for you.
There have been a number of calls to make the release process more
predictable (or maybe just faster). There are plenty of examples of
projects that have very predictable release schedules, such as the
GNOME project or the Ubuntu Linux distribution. It's not at all
unreasonable to expect that we could achieve that same level of
predictability if we're prepared to make some tradeoffs, such as:
1. Is the release cadence more important (i.e. reverting features
if they pose a schedule risk), or is shipping a set of features more
important (i.e. slipping the date if one of the predetermined features
isn't ready)? For example, as pointed out in another thread and on IRC,
there was a suggestion to create a branch point prior to the
introduction of the Resource Loader. Is our priority going to be
about ensuring a fixed list of features is ready to go, or should we
be ruthless about cutting features to make a date, even if there isn't
much left on the feature list for that date?
2. Projects with generally predictable schedules also have a process
for deciding early in the cycle what is going to be in the release.
For example, in Ubuntu's most recently completed release schedule,
they allotted a little over 23 weeks for development (a little over 5
months). The release team slated a "Feature Definition Freeze" on
June 17 (week 7), with what I understand was a pretty high bar for
getting new features listed after that, and a feature freeze on August
12 (week 15). Many features originally slated in the feature
definition were cut. Right now, we have nothing approaching that
level of formality. Should we?
3. How deep is the belief that Wikimedia production deployment must
precede a MediaWiki tarball release? Put another way, how tightly are
Wikimedia deployment and tarball releases meant to be coupled?
Thoughts on these? Any other tradeoffs we need to consider? We're
going to have a number of conversations over the coming days on this
topic, so I wanted to add a little structure and get some (more)
initial impressions now.
MZMcBride's mail:
...which in turn references IRC from 2010-10-18 @ 14:08 or so:
Ubuntu Maverick Meerkat (10.10) release schedule:
Since the discussion about staff collaboration with volunteers started
a few weeks ago, actions and statements by staff members have
undergone an increasing amount of scrutiny and criticism. That in
itself is not a bad thing necessarily: staff members need to be kept
on their toes and not be allowed to get away with doing bad things,
and some scrutiny and criticism is needed to accomplish this.
In recent weeks, however, posts on this mailing list have gone way
beyond 'some' scrutiny and criticism, instead suggesting something
closer to distrust and paranoia. Statements made by staff members have
been picked apart, with anything that could be interpreted to suggest
an exclusive, disrespectful or otherwise negative attitude towards
volunteers being interpreted this way, along with the occasional
ominous warning about how the world will end if this attitude doesn't
change.
This extreme behavior comes from just a few people, but I'm seeing a
less extreme version of it in other people too. Unlike the former
group, the latter group doesn't seem to be particularly paranoid or
uncivil, but they seem to be getting increasingly critical of staff
members as well.
Quite understandably, staff members aren't going to be encouraged to be
more collaborative when they get the feeling that their attempts to do
so more often than not result in increased scrutiny, criticism or
drama and that their sometimes unfortunate but nevertheless good-faith
and well-intentioned actions or words backfire the way we've seen
happen a few times recently. Rather than feeling this environment
encourages them to collaborate (which it should), they'll feel this
environment is hostile and will be driven away from it if it continues
to feel hostile.
A crucial point that I think is being missed by a number of people
right now is that collaboration is a two-way street. Staffers and
volunteers are both responsible for making it work. While staff
members have to be open to, respectful of and collaborative with
volunteer developers, the reverse is also true: volunteers are
supposed to make staff members feel welcome and appreciated, and treat
them as their equals. Right now, the opposite seems to be happening,
which I fear will lead to a negative spiral.
A few weeks ago, staff members were called upon to adjust their
attitudes to do their part in fostering collaboration between staff
and volunteers. Volunteers, in turn, should be aware that they have a
part to play too. Also, both sides should realize behaviors don't
change overnight, and should give each other time to adapt and cut
each other some slack in the meantime.
Roan Kattouw (Catrope)
There seems to be some confusion about how ResourceLoader works, which
has been leading people to make commits like r73196 and report bugs like
#25362. I would like to offer some clarification.
ResourceLoader, if you aren't already aware, is a new system in
MediaWiki 1.17 which allows developers to bundle resources into
collections called *modules*. A module may represent any number of
scripts, styles and messages, which are read from the file system or
the database, or generated by software.
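For orientation, a file-backed module is registered through
$wgResourceModules; a minimal sketch (the extension name, file names
and message key here are made up) looks something like:

  $wgResourceModules['ext.myExtension'] = array(
      // Scripts, styles and messages bundled into a single module
      'scripts' => 'ext.myExtension.js',
      'styles' => 'ext.myExtension.css',
      'messages' => array( 'myextension-hello' ),
      // Where the files live on disk, and relative to the web path
      'localBasePath' => dirname( __FILE__ ),
      'remoteExtPath' => 'MyExtension',
  );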
When a request is made for one or more modules, the resources are
packaged together and sent back to the client as a response. The way in
which these requests and responses are performed depends on whether
debug is on or off.
When debug mode is off:
* Modules are requested in batches
* Resources are combined into modules
* Modules are combined into a response
* The response is minified
When debug mode is on:
* Modules are requested individually
* Resources are combined into modules
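To make that concrete, the requests look roughly like this (the module
names are just examples, and the usual lang/skin parameters are
omitted):

  load.php?debug=false&modules=mediawiki.util|mediawiki.user
      (one batched, minified response)

  load.php?debug=true&modules=mediawiki.util
  load.php?debug=true&modules=mediawiki.user
      (one unminified response per module)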
I think it's debatable whether debug=true mode goes far enough, since it
still combines resources into modules, and I am open to contributions
that can make debug=true mode even more debugging friendly by delivering
the resources to the client as unchanged as possible. I also think it's
debatable if debug=false mode goes far enough, since things like Google
Closure Compiler have been proven to even further reduce the size of
minified output, and I am likewise open to contributions that can make
debug=false mode even more production friendly by improving front-end
performance.
The commits and bugs that I'm contesting here are ones which aim
to dilute the optimized nature of debug=false mode, when debug=true mode
is really what they should be using or improving. These kinds of changes
and suggestions result in software that is optimized neither for
debugging nor for production, making the front-end performance of the
site in production slower without making it any easier to debug than it
would have been by using debug=true.
If you are a developer working on your localhost, you probably want to
set this in your LocalSettings.php:

  $wgResourceLoaderDebug = true;

...and then test that things work in debug=false mode before committing
your code. Debug mode results in more requests but less processing,
which will be much faster when developing on localhost.
I hope this helps clarify this situation.
A question from an IRC/Wikipedia newbie.
I've been experimenting with processing pubmsg events in
irc://irc.wikimedia.org/en.wikipedia (thanks for the channel btw) and
have been noticing some control characters that I wasn't expecting to
see in the message content. I've attached a raw line from the channel,
where you should be able to see a 0x03 byte (Ctrl-C) at position 60.
There are several others scattered throughout the line followed by
integers. Is this a character encoding of some kind that I need to
decode, or some artifact of the IRC protocol that I need to handle?
Any advice/tips would be greatly appreciated!
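(For anyone else who hits this: the 0x03 bytes followed by digits are
almost certainly mIRC-style color codes, a client formatting convention
layered on the IRC protocol rather than a character encoding. A minimal
PHP sketch for stripping them, along with the other common formatting
control bytes:

  // Strip mIRC-style formatting: a 0x03 byte with optional fg[,bg]
  // color numbers, plus bold (0x02), underline (0x1F), reverse (0x16)
  // and reset (0x0F).
  function stripIrcFormatting( $text ) {
      $text = preg_replace( '/\x03\d{0,2}(?:,\d{1,2})?/', '', $text );
      return preg_replace( '/[\x02\x1F\x16\x0F]/', '', $text );
  }
)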
I have been tasked with evaluating whether we can use the parserTests db code
for the selenium framework. I just looked it over and have serious
reservations. I would appreciate any comments on the following analysis.
The environment for selenium tests is different from that for
parserTests. It is envisioned that multiple concurrent tests could run
using the same MW code base. Consequently, each test run must:
+ Use a db that, if written to, will not destroy other test wikis'
data.
+ Switch in new images and math directories so any writes do not
interfere with other tests.
+ Maintain the integrity of the cache.
Note that tests would *never* run on a production wiki (it may be
possible to do so if they do no writes, but safety considerations suggest
they should always run on test data, not production data). In fact,
production wikis should always retain the setting $wgEnableSelenium =
false, to ensure selenium tests are disabled.
Given this background, consider the following (and feel free to comment
on it).

parserTests temporary table code:
A fixed set of tables are specified in the code. parserTests creates
temporary tables with the same name, but using a different static prefix.
These tables are used for the parserTests run.
Problems using this approach for selenium tests:
+ Selenium tests of extensions may require the use of extension-specific
tables, the names of which cannot be enumerated in the code.
+ Concurrent test runs of parserTests are not supported, since the
temporary tables have fixed names and therefore concurrent writes to them
by parallel test runs would cause interference.
+ Clean up from aborted runs requires dropping fossil tables. But, if a
previous run tested an extension with extension-specific tables, there is
no way for a test of some other functionality to figure out which tables
to drop.
For these reasons, I don't think we can reuse the parserTests code.
However, I am open to arguments to the contrary.
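For what it's worth, the kind of per-run isolation described above
would amount to something like the following sketch (the variable names
are hypothetical; $wgDBprefix, $wgUploadDirectory and $wgMathDirectory
are the real globals involved):

  // Give each selenium run its own table prefix and scratch
  // directories so that concurrent runs cannot interfere.
  $seleniumRunId = uniqid( 'selenium' );
  $wgDBprefix        = $seleniumRunId . '_';
  $wgUploadDirectory = "/tmp/$seleniumRunId/images";
  $wgMathDirectory   = "/tmp/$seleniumRunId/math";
  // Recording the run id somewhere persistent would let cleanup of an
  // aborted run find and drop every table carrying its prefix.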
-- Dan Nessett
Back in June the Selenium Framework had a local configuration file called
LocalSeleniumSettings.php. This was eliminated by Tim Starling in a 6/24
commit with the comment that it was an insecure concept. In that commit,
new globals were added that controlled test runs.
Last Friday, mah ripped out the globals and put the configuration
information into the execute method of RunSeleniumTests.php with the
comment "@todo Add an alternative where settings are read from an INI
file." So, it seems we have dueling developers with contrary ideas about
what is the best way to configure selenium framework tests. Should
configuration data be exposed as globals or hidden in a local
Either approach works. But the back and forth makes developing
functionality for the Framework difficult. I am working on code, not
yet submitted as a patch, that now requires reworking because the way
configuration data is referenced has changed. We need a decision on
which of the two approaches to use.
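(For what it's worth, the INI alternative from the @todo could be
fairly small; a sketch using PHP's parse_ini_file, with hypothetical
file and key names:

  // Read selenium settings from an INI file next to this script;
  // parse_ini_file() returns an array of settings, or false on error.
  $settings = parse_ini_file( dirname( __FILE__ ) . '/selenium.ini' );
  if ( $settings === false ) {
      exit( "Could not read selenium.ini\n" );
  }
  $seleniumServer  = $settings['server'];   // e.g. "localhost"
  $seleniumBrowser = $settings['browser'];  // e.g. "*firefox"
)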
-- Dan Nessett