I'm happy to announce the availability of the second beta release of the
new MediaWiki 1.19 release series.
Please try it out and let us know what you think. Don't run it on any
wikis that you really care about, unless you are both very brave and
very confident in your MediaWiki administration skills.
MediaWiki 1.19 is a large release that contains many new features and
bug fixes. This is a summary of the major changes of interest to users.
You can consult the RELEASE-NOTES-1.19 file for the full list of changes
in this version.
Five security issues were discovered.
It was discovered that the API had a cross-site request forgery (CSRF)
vulnerability in the block/unblock modules. It was possible for a user
account with block privileges to block or unblock another user without
providing a token.
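For illustration, the general shape of this class of fix is to require a
valid session token before performing the action. Here is a minimal sketch
using core methods of that era (User::matchEditToken() and
ApiBase::dieUsageMsg()); this is not the actual patch:

  // Inside an API module's execute(): refuse to block/unblock unless
  // the caller supplied a token matching the current session.
  global $wgUser;
  if ( !$wgUser->matchEditToken( $params['token'] ) ) {
      $this->dieUsageMsg( array( 'sessionfailure' ) );
  }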
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=34212
It was discovered that the resource loader can leak certain kinds of
private data across domain origin boundaries, by providing the data as an
executable JavaScript file. In MediaWiki 1.18 and later, this includes the
leaking of CSRF protection tokens. This allows compromise of the wiki's
user accounts, say by changing the user's email address and then
requesting a password reset.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=34907
Jan Schejbal of Hatforce.com discovered a cross-site request forgery (CSRF)
vulnerability in Special:Upload. Modern browsers (since at least as early as
December 2010) are able to post file uploads without user interaction,
violating previous security assumptions within MediaWiki.
Depending on the wiki's configuration, this vulnerability could lead to
further compromise, especially on private wikis where the set of allowed
file types is broader than on public wikis. Note that CSRF allows
compromise of a wiki from an external website even if the wiki is behind a
firewall.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=35317
George Argyros and Aggelos Kiayias reported that the method used to
generate password reset tokens was not sufficiently secure. MediaWiki now
uses more secure random number generators, depending on what is available
on the platform. Windows users are strongly advised to install either the
openssl extension or the mcrypt extension for PHP so that MediaWiki can
take advantage of the cryptographic random number facility provided by
Windows.
Any extension developers using mt_rand() to generate random numbers in
contexts where security is required are encouraged to instead make use of
the MWCryptRand class introduced with this release.
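As an illustration, a minimal sketch of the suggested change in extension
code (MWCryptRand::generateHex() and MWCryptRand::wasStrong() are the
method names shipped with this release; verify the exact signatures
against the 1.19 source before relying on them):

  // Before: mt_rand() is predictable and unsuitable for security tokens.
  $token = dechex( mt_rand( 0, 0x7fffffff ) );

  // After: draw 32 hex characters from the strongest available source.
  $token = MWCryptRand::generateHex( 32 );

  // Optionally check whether the source was cryptographically strong.
  if ( !MWCryptRand::wasStrong() ) {
      wfDebug( "Token was generated from a weak entropy source.\n" );
  }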
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=35078
A long-standing bug in the wikitext parser (bug 22555) was discovered to
have security implications. In the presence of the popular CharInsert
extension, it leads to cross-site scripting (XSS). XSS may be possible with
other extensions or perhaps even the MediaWiki core alone, although this is
not confirmed at this time. A denial-of-service attack (infinite loop) is
also possible regardless of configuration.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=35315
*********************************************************************
What's new?
*********************************************************************
MediaWiki 1.19 brings the usual host of bug fixes and new features. A
comprehensive list of what's new is in the release notes.
* Bumped MySQL version requirement to 5.0.2.
* Disabled the partial HTML and MathML rendering options for Math; PNG
rendering is now the default. The MathML mode was so incomplete that most
people thought it simply didn't work.
* New skins/common/*.css files usable by skins, instead of having to copy
piles of generic styles from MonoBook or Vector's CSS.
* The default user signature now contains a talk link in addition to the
user link.
* Searching for blocked usernames in the block log is now clearer.
* Better timezone recognition in user preferences.
* Extensions can now participate in the extraction of titles from URL
paths (see the sketch after this list).
* The command-line installer supports various RDBMSes better.
* The interwiki links table can now be accessed even when the interwiki
cache is in use (used by the API and the Interwiki extension).
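To illustrate the URL-path item above, here is a minimal sketch of how an
extension might register an extra path pattern. The hook and method names
('WebRequestPathInfoRouter', PathRouter::add()) are taken from the 1.19
path-routing work; treat the exact signatures as an assumption and check
the release notes:

  // Hypothetical extension snippet: route /topic/Foo to "Portal:Foo".
  $wgHooks['WebRequestPathInfoRouter'][] = 'wfExamplePathRouter';

  function wfExamplePathRouter( $router ) {
      // $1 is captured from the request path and substituted into
      // the title pattern.
      $router->add( '/topic/$1', array( 'title' => 'Portal:$1' ) );
      return true;
  }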
Internationalization
--------------------
* More gender support (for instance in user lists).
* Added language: Canadian English.
* The language converter has been improved; for example, it now takes the
page content language into account.
* Time and number-formatting magic words also now depend on the page
content language.
* Bidirectional support further improved after 1.18.
Release notes
-------------
Full release notes:
https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git;a=blob_plain;f=RELEASE-NOTES-1.19;hb=1.19.0beta2
https://www.mediawiki.org/wiki/Release_notes/1.19
Coinciding with these security releases, the MediaWiki source code
repository has moved from SVN (at
https://svn.wikimedia.org/viewvc/mediawiki/trunk/phase3) to Git
(https://gerrit.wikimedia.org/gitweb/mediawiki/core.git), so the relevant
commits for these releases will not be appearing in our SVN repository. If
you use SVN checkouts of MediaWiki for version control, you will need to
migrate them to Git.
If you are using tarballs, there should be no change in the process for
you.
Please note that all WMF-deployed extensions have also been migrated to
Git, along with some other non-WMF-maintained ones.
Please bear with us: some of the Git-related links for this release may
not work immediately, but they should do so soon.
To do a simple Git clone, the command is:
git clone https://gerrit.wikimedia.org/r/p/mediawiki/core.git
More information is available at https://www.mediawiki.org/wiki/Git
For more help, please visit the #mediawiki IRC channel on freenode.net
(irc://irc.freenode.net/mediawiki) or email the MediaWiki-l mailing list
at mediawiki-l(a)lists.wikimedia.org.
**********************************************************************
Download:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.tar.gz
Patch to previous version (1.19.0beta1), without interface text:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.patch.gz
Interface text changes:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-i18n-1.19.0beta2.patch.gz
GPG signatures:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.tar.gz.sig
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.patch.gz.sig
http://download.wikimedia.org/mediawiki/1.19/mediawiki-i18n-1.19.0beta2.patch.gz.sig
Public keys:
https://secure.wikimedia.org/keys.html
TL;DR: A few ideas follow on how we could possibly help legit editors
contribute from behind Tor proxies. I am just conversant enough with
the security problems to make unworkable suggestions ;-), so please
correct me, critique & suggest solutions, and perhaps volunteer to help.
The current situation:
https://en.wikipedia.org/wiki/Wikipedia:Advice_to_users_using_Tor_to_bypass…
We generally don't let anyone edit or upload from behind Tor; the
TorBlock extension stops them. One exception: a person can create an
account, accumulate lots of good edits, and then ask for an IP block
exemption, and then use that account to edit from behind Tor. This is
unappealing because then there's still a bunch of in-the-clear editing
that has to happen first, and because then site functionaries know that
the account is going to be making controversial edits (and could
possibly connect it to IPs in the future, right?). And right now
there's no way to truly *anonymously* contribute from behind Tor
proxies; you have to log in. However, since JavaScript delivery is hard
for Tor users, I'm not sure how much editing from Tor -- vandalism or
legit -- is actually happening. (I hope for analytics on this and thus
added it to https://www.mediawiki.org/wiki/Analytics/Dreams .) We know
at least that there are legitimate editors who would prefer to use Tor
and can't.
People have been talking about how to improve the situation for some
time -- see http://cryptome.info/wiki-no-tor.htm and
https://lists.torproject.org/pipermail/tor-dev/2012-October/004116.html
. It'd be nice if it could actually move forward.
I've floated this problem past Tor and privacy people, and here are a
few ideas:
1) Just use the existing mechanisms more leniently. Encourage the
communities (Wikimedia & Tor) to use
https://en.wikipedia.org/wiki/Wikipedia:Request_an_account (to get an
account from behind Tor) and to let more people get IP block exemptions
even before they've made any edits (< 30 people have gotten exemptions
on en.wp in 2012). Add encouraging "get an exempt account" language to
the "you're blocked because you're using Tor" messaging. Then if
there's an uptick in vandalism from Tor then they can just tighten up again.
2) Encourage people with closed proxies to re-vitalize
https://en.wikipedia.org/wiki/Wikipedia:WOCP . Problem: using closed
proxies is okay for people with some threat models but not others.
3) Look at Nymble - http://freehaven.net/anonbib/#oakland11-formalizing
and http://cgi.soic.indiana.edu/~kapadia/nymble/overview.php . It would
allow Wikimedia to distance itself from knowing people's identities, but
still allow admins to revoke permissions if people acted up. The user
shows a real identity, gets a token, and exchanges that token over Tor
for an account. If the user abuses the site, Wikimedia site admins can
blacklist the user without ever being able to learn who they were or
what other edits they made. More: https://cs.uwaterloo.ca/~iang/ . Ian
Goldberg's, Nick Hopper's, and Apu Kapadia's groups are all working on
Nymble or its derivatives. It's not ready for production yet, I bet,
but if someone wanted a Big Project....
3a) A token authorization system (perhaps a MediaWiki extension) where
the server blindly signs a token, and then the user can use that token
to bypass the Tor blocks. (Tyler mentioned he saw this somewhere in a
Bugzilla suggestion; I haven't found it.)
4) Allow more users the IP block exemption, possibly even automatically
after a certain number of unreverted edits, but with some kind of
FlaggedRevs integration; Tor users can edit but their changes have to be
reviewed before going live. We could combine this with (3); Nymble
administrators or token-issuers could pledge to review edits coming from
Tor. But that latter idea sounds like a lot of social infrastructure to
set up and maintain.
Thoughts? Are any of you interested in working on this problem? #tor on
the OFTC IRC server is full of people who'd be interested in talking
about this.
--
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation
As multilingual content grows, interlanguage link lists become longer on
Wikipedia articles. Articles such as "Barack Obama" or "Sun" have more than
200 links, and that becomes a problem for users who often switch among
several languages.
As part of the future plans for the Universal Language Selector, we were
considering the following:
- Show only a short list of the languages relevant to the user, based on
geo-IP, previous choices and the browser settings of the current user. The
language users are looking for will be there most of the time.
- Include a "more" option to access the rest of the languages for which
the content exists with an indicator of the number of languages.
- Provide a list of the rest of the languages that users can easily scan
(grouped by script and region so that alphabetical ordering is possible)
and search (allowing users to search for a language name in another
language, using ISO codes, or even making typos).
I have created a prototype <http://pauginer.github.io/prototype-uls/#lisa>
to illustrate the idea. Since it is not connected to the MediaWiki backend,
it lacks the advanced capabilities described above, but you can get the
idea.
If you are interested in the missing parts, you can check the flexible
search and the list of likely languages (the "common languages" section) in
the language selector used at http://translatewiki.net/ , which is
connected to the MediaWiki backend.
As part of the testing process for the ULS language settings, I included a
task to also test the compact interlanguage designs. Users seem to
understand their use (view the recording:
<https://www.usertesting.com/highlight_reels/qPYxPW1aRi1UazTMFreR>),
but I wanted to get some feedback for changes affecting such an important
element.
Please let me know if you see any possible concern with this approach.
Thanks
--
Pau Giner
Interaction Designer
Wikimedia Foundation
All,
The developer team at Wikimedia is making some changes to how accounts
work, as part of our ongoing efforts to provide new and better tools
for our users (like cross-wiki notifications). These changes will mean
users have the same account name everywhere, will let us give you new
features that will help you edit & discuss better, and will allow more
flexible user permissions for tools. One of the preconditions for
this is that user accounts will now have to be unique across all 900
Wikimedia wikis.[0]
Unfortunately, some accounts are currently not unique across all our
wikis, but instead clash with other users who have the same account
name. To make sure that all of these users can use Wikimedia's wikis
in the future, we will be renaming a number of accounts by appending
“~” and the name of their wiki to the end of the account name. This
change will take place on or around 27 May. For example, a user called
“Example” on the Swedish Wiktionary who is to be renamed would become
“Example~svwiktionary”.
All accounts will still work as before, and will continue to be
credited for all their edits made so far. However, users with renamed
accounts (whom we will be contacting individually) will have to use
the new account name when they log in.
It will now only be possible for accounts to be renamed globally. Since
all accounts must be globally unique, the RenameUser tool will no longer
work on a local basis, and it will therefore be withdrawn from
bureaucrats' tool sets. Once this takes place, it will still be possible
for users who do not like their new user name to ask on Meta for their
account to be renamed further.
A copy of this note is posted to meta [1] for translation. Please
forward this to your local communities, and help get it translated.
Individuals who are affected will be notified via talk page and e-mail
notices nearer the time.
[0] - https://meta.wikimedia.org/wiki/Help:Unified_login
[1] - https://meta.wikimedia.org/wiki/Single_User_Login_finalisation_announcement
Yours,
--
James D. Forrester
Product Manager, VisualEditor
Wikimedia Foundation, Inc.
jforrester(a)wikimedia.org | @jdforrester
Hi all,
I'd like to announce a recently created tool that might help the Wikimedia
technical community find stuff more easily. Sometimes relevant information
is buried in IRC chat logs, messages in any of several mailing lists, pages
in mediawiki.org, commit messages, etc. This tool (essentially a custom
Google search engine that filters results to a few relevant URL patterns)
is aimed at relieving this problem. Test it here: http://hexm.de/mw-search
The motivation for the tool came from a post by Niklas [1], specifically
the section "Coping with the proliferation of tools within your community".
In the comments section, Nemo announced his initiative to create a custom
Google search to fit at least some of the requirements presented in that
section, and I've offered to help him tweak it further. The URL list is
still incomplete and can be customized by editing the page
http://www.mediawiki.org/wiki/Wikimedia_technical_search (syncing with the
actual engine will still have to happen by hand, but should be quick).
Besides feedback on whether the engine works as you'd expect, I would like
to start some discussion about the ability for Google's bots to crawl some
of the resources that are currently included in the URL filters, but return
no results. For example, the IRC logs at bots.wmflabs.org/~wm-bot/logs/.
Some workarounds are used (e.g. using github for code search since gitweb
isn't crawlable) but that isn't possible for all resources. What can we do
to improve the situation?
--Waldir
1. http://laxstrom.name/blag/2013/02/11/fosdem-talk-reflections-23-docs-code-a…
Marc-Andre Pelletier discovered a vulnerability in the MediaWiki OpenID
extension for the case where MediaWiki is used as a “provider” and the wiki
allows renaming of users.
All previous versions of the OpenID extension used user-page URLs as
identity URLs. On wikis that use the OpenID extension as a “provider” and
allow user renames, an attacker with rename privileges could rename a user
and then create an account with the same name as the victim. This
would have allowed the attacker to steal the victim’s OpenID identity.
Version 3.00 fixes the vulnerability by using Special:OpenIDIdentifier/<id>
as the user’s identity URL, <id> being the immutable MediaWiki-internal
userid of the user. The user’s old identity URL, based on the user’s
user-page URL, will no longer be valid.
The user’s user page can still be used as OpenID identity URL, but will
delegate to the special page.
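For illustration, a sketch of the new scheme: the identity URL is derived
from the immutable numeric user ID rather than the mutable user name. The
core calls below (User::newFromName(), SpecialPage::getTitleFor(),
Title::getFullURL()) are standard MediaWiki; the exact construction inside
the extension is an assumption:

  // A rename changes the user-page URL but not the user ID, so an
  // identity URL keyed to the ID survives renames.
  $user = User::newFromName( 'Example' );
  $identityUrl = SpecialPage::getTitleFor(
      'OpenIDIdentifier', $user->getId()
  )->getFullURL();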
This is a breaking change, as it changes all user identity URLs. Providers
are urged to upgrade and notify users, or to disable user renaming.
Respectfully,
Ryan Lane
https://gerrit.wikimedia.org/r/#/c/52722
Commit: f4abe8649c6c37074b5091748d9e2d6e9ed452f2
How to load up high-resolution imagery on high-density displays has been an
open question for a while; we've wanted this for the mobile web site since
the Nexus One and Droid brought 1.5x, and the iPhone 4 brought 2.0x density
displays to the mobile world a couple years back.
More recently, tablets and a few laptops are bringing 1.5x and 2.0x density
displays too, such as the new Retina iPad and MacBook Pro.
A properly responsive site should be able to detect when it's running on
such a display and load higher-density image assets automatically...
Here's my first stab:
https://bugzilla.wikimedia.org/show_bug.cgi?id=36198#c6
https://gerrit.wikimedia.org/r/#/c/24115/
* adds a $wgResponsiveImages setting, defaulting to true, to enable the feature
* adds a jquery.hidpi plugin to check window.devicePixelRatio and replace
images with data-src-1-5 or data-src-2-0 depending on the ratio
* adds a mediawiki.hidpi RL script to trigger hidpi loads after main images load
* renders images from wiki image & thumb links at 1.5x and 2.0x and includes
data-src-1-5 and data-src-2-0 attributes with the targets
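For wiki operators who would rather wait until this stabilizes, the feature
can be switched off in LocalSettings.php (a minimal sketch; the setting
name and default are taken from the list above):

  // Opt out of the experimental high-density image loading;
  // $wgResponsiveImages defaults to true.
  $wgResponsiveImages = false;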
Note that this is a work in progress. There will be places where this
doesn't yet work because they output their imgs differently. If you move
from a low-DPI to a high-DPI screen on a MacBook Pro with Retina display,
you won't see the high-resolution images load until you reload.
Confirmed basic images and thumbs in wikitext appear to work in Safari 6 on
MacBook Pro Retina display. (Should work in Chrome as well).
The same code loaded on a MobileFrontend display should also work, but I
have not yet tried that.
Note this does *not* attempt to use native SVGs, which is another potential
tactic for improving display on high-density displays and zoomed windows.
This loads higher-resolution raster images, including rasterized SVGs.
There may be loads of bugs; this is midnight hacking code and I make no
guarantees of suitability for any purpose. ;)
-- brion
Hello! So I tried converting
https://github.com/wikimedia/qa-browsertests/pull/1 into a Gerrit changeset
(https://gerrit.wikimedia.org/r/#/c/54097/) , and was mostly successful. It
is also a relatively painless process - at least for single commits.
This assumes you (person doing the GitHub -> Gerrit bridge) have a Gerrit
account. I wrote a small script that sort of makes this easy:
https://gist.github.com/yuvipanda/5174162
This only does things one time - it moves a set of commits in a pull
request to a squashed single commit on gerrit, assuming your current
directory is a cloned version of the gerrit repo you want to commit to. It
should not be too hard to write an actual, idempotent sync script that
maintains a 1-to-1 correspondence between Pull Requests and Gerrit
Changesets, and I'll attempt to do that tomorrow.
Note that this is a shitty bash script (to put it mildly) - but that seems
to be all I can write at 5:30 AM :) I'll probably rewrite it to be a proper
python one soon. That should also allow me to use the GitHub API to also
mirror the GitHub Pull Request Title / Description to Gerrit.
I also offer to manually sync pull requests into gerrit as they come until
the automatic Gerrit integration is ready. Shall be writing another small
script tomorrow to have me 'watch' all the wikimedia/* GitHub repositories.
Thank you :) I'll update this thread as the script gets less shitty. Do let
me know if you have built a far more complete script :)
--
Yuvi Panda T
http://yuvi.in/blog
Hi everyone!
In Semantic MediaWiki we have great but very long documentation. I've tried
to write a small introduction to SMW. It's only in Russian for now (I know
that a number of Russian MediaWikers read this mailing list), but I plan
to write something similar in English (maybe for IBM DeveloperWorks).
http://habrahabr.ru/post/173877/
If you don't know Russian you can enjoy the pictures anyway ;)
Cheers,
-----
Yury Katkov, WikiVote
hi-
i'm hopeful this is the appropriate venue for this topic - i recently
had occasion to visit #mediawiki on freenode, looking for help. i found
myself a bit frustrated by the amount of bot activity there and wondered
if there might be value in giving this some consideration. it seems to
frequently drown out/dilute those asking for help, which can be a bit
discouraging/frustrating. additionally, from the perspective of those
who might help [based on my experience in this role in other channels],
constant activity can sometimes engender disinterest [e.g. the irc
client shows activity in the channel, but i'm less inclined to look as
it's probably just a bot].
to offer one possibility - i know there are a number of mediawiki and/or
wikimedia related channels - might there be one in which bot activity
might be better suited, in the context of less contention between the
two audiences [those seeking help vs. those interested in development,
etc]? one nomenclature convention that seems to be at least somewhat of
a de facto standard is #project for general help, and #project-dev[el]
for development topics. a few examples of this i've seen are android,
libreoffice, python, and asterisk. adding yet another channel to this
list might not be terribly welcome, but maybe the distinction would be
worth the addition?
as i'm writing this, i see another thread has begun wrt freenode, and i
also see a bug filed that relates at least to some degree
[https://bugzilla.wikimedia.org/show_bug.cgi?id=35427], so i may just be
repeating an existing sentiment, but i wanted to at least offer a brief
perspective.
regards
-ben