I've decided to take a stab at hacking mediawiki (hacking at articles is '''so''' passé), and in the interest of starting small, here's a fix for special:Mycontributions, which is otherwise broken on a clean default install.
I'm not sure how to find out whether the FullURL already contains a ?, so I'm cheating and using the (much prettier) / syntax instead. I assume that doesn't give any other problems?
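For the curious, the two styles I mean are roughly these (exact paths depend on $wgArticlePath, and the target parameter is just how I'd guess the query form looks):

    /wiki/Special:Contributions/SomeUser                        (the / syntax)
    /w/index.php?title=Special:Contributions&target=SomeUser    (the ? syntax)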
Frank v Waveren wrote:
| I've decided to take a stab at hacking mediawiki (hacking at articles
| is '''so''' passé), and in the interest of starting small, here's a
| fix for special:Mycontributions, which is otherwise broken on a clean
| default install.
|
| I'm not sure how to find out whether the FullURL already contains a ?,
| so I'm cheating and using the (much prettier) / syntax instead. I
| assume that doesn't give any other problems?
Thanks! However, this patched version would still break on usernames containing either a & or a ? (depending on the $wgArticlePath style used), and may break on those containing a space or other characters that are illegal in URLs.
I've just made a couple of fixes to the original version:
* First, any additional URL parameters should be passed as the first parameter to Title::getLocalURL() or Title::getFullURL() instead of being appended afterwards. The method will use a & or ? as appropriate, so the calling code doesn't need to choose.
* Second, URL parameters always need to be escaped using the urlencode() function to ensure that they do not get mangled. (If you have a lot of parameters, the wfArrayToCGI() function will transform and encode an associative array of key => value pairs. If you have just one it's easier to do it manually.) Both points are sketched in the example after this list.
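Roughly, the pattern looks like this. This is a sketch rather than the actual patch; the surrounding code and the use of Special:Contributions' target/limit parameters are just for illustration:

    <?php
    # Sketch only; assumes a logged-in user in $wgUser.
    global $wgUser;
    $title = Title::makeTitle( NS_SPECIAL, 'Contributions' );

    # One parameter: pass the query string as the first argument and
    # urlencode() the value. getLocalURL() inserts the ? or & itself,
    # whichever $wgArticlePath calls for.
    $url = $title->getLocalURL( 'target=' . urlencode( $wgUser->getName() ) );

    # Several parameters: wfArrayToCGI() transforms and encodes an
    # associative array of key => value pairs in one go.
    $url = $title->getLocalURL( wfArrayToCGI( array(
        'target' => $wgUser->getName(),
        'limit' => 50
    ) ) );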
The Special:Pagename/Parameter format is more attractive for simple one-parameter cases, but it might not be totally reliable because the title length limit is enforced on the combined name: a very long username could push the whole thing over the limit and not go through at all. I'm not sure how big a problem this would really be, but remember that the limit is in bytes. Many Asian characters take 3 bytes each in UTF-8 encoding, and this is particularly onerous for Indic alphabets.
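To put rough numbers on it, assuming the standard 255-byte title limit: the "Special:Contributions/" prefix alone is 22 bytes, which leaves 233 for the name itself, or only about 77 characters if each takes 3 bytes in UTF-8.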
-- brion vibber (brion @ pobox.com)