I'm happy to announce the availability of the second beta release of the
new MediaWiki 1.19 release series.
Please try it out and let us know what you think. Don't run it on any
wikis that you really care about, unless you are both very brave and
very confident in your MediaWiki administration skills.
MediaWiki 1.19 is a large release that contains many new features and
bug fixes. This is a summary of the major changes of interest to users.
You can consult the RELEASE-NOTES-1.19 file for the full list of changes
in this version.
Five security issues were discovered.
It was discovered that the API had a cross-site request forgery (CSRF)
vulnerability in the block/unblock modules. It was possible for a user
account with block privileges to block or unblock another user without
providing a token.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=34212
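For illustration, the general shape of the check that was missing in the
block/unblock case looks something like the following. This is a generic,
hedged PHP sketch with invented names, not MediaWiki's actual token code:

  <?php
  // Generic CSRF-token sketch (invented names; not MediaWiki's real API code).
  session_start();

  // When rendering a form or issuing API metadata, hand out a per-session token.
  if ( !isset( $_SESSION['csrfToken'] ) ) {
      $_SESSION['csrfToken'] = bin2hex( openssl_random_pseudo_bytes( 16 ) );
  }

  // Every state-changing request (such as block/unblock) must echo the token
  // back. A cross-site attacker cannot read the victim's token, so forged
  // requests fail this check.
  function checkCsrfToken( $submitted ) {
      return isset( $_SESSION['csrfToken'] )
          && $submitted === $_SESSION['csrfToken'];
  }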
It was discovered that the resource loader can leak certain kinds of private
data across domain origin boundaries, by providing the data as an executable
JavaScript file. In MediaWiki 1.18 and later, this includes the leaking of
CSRF protection tokens. This allows compromise of the wiki's user accounts,
say by changing the user's email address and then requesting a password
reset.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=34907
Jan Schejbal of Hatforce.com discovered a cross-site request forgery (CSRF)
vulnerability in Special:Upload. Modern browsers (since at least as early as
December 2010) are able to post file uploads without user interaction,
violating previous security assumptions within MediaWiki.
Depending on the wiki's configuration, this vulnerability could lead to
further compromise, especially on private wikis where the set of allowed
file types is broader than on public wikis. Note that CSRF allows compromise
of a wiki from an external website even if the wiki is behind a firewall.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=35317
George Argyros and Aggelos Kiayias reported that the method used to generate
password reset tokens was not sufficiently secure. Instead, MediaWiki now
uses various more secure random number generators, depending on what is
available on the platform. Windows users are strongly advised to install
either the openssl extension or the mcrypt extension for PHP so that
MediaWiki can take advantage of the cryptographic random number facility
provided by Windows.
Any extension developers using mt_rand() to generate random numbers in
contexts where security is required are encouraged to instead make use of
the MWCryptRand class introduced with this release (a brief usage sketch
follows the link below).
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=35078
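As a quick sketch of that advice (hedged: check the MWCryptRand class
documentation for the exact method signatures before relying on this):

  <?php
  // Before: mt_rand() is predictable and unsuitable for security tokens.
  $token = dechex( mt_rand() ) . dechex( mt_rand() );

  // After: MWCryptRand (new in 1.19) draws from the strongest random
  // source available on the platform; generateHex() returns a random
  // hex string of the requested length.
  $token = MWCryptRand::generateHex( 32 );

  // Optionally check whether a cryptographically strong source was used.
  if ( !MWCryptRand::wasStrong() ) {
      wfDebug( "Token was generated from a weak entropy source.\n" );
  }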
A long-standing bug in the wikitext parser (bug 22555) was discovered to
have security implications. In the presence of the popular CharInsert
extension, it leads to cross-site scripting (XSS). XSS may be possible with
other extensions or perhaps even the MediaWiki core alone, although this is
not confirmed at this time. A denial-of-service attack (infinite loop) is
also possible regardless of configuration.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=35315
*********************************************************************
What's new?
*********************************************************************
MediaWiki 1.19 brings the usual host of bugfixes and new features.
A comprehensive list of what's new is in the release notes.
* Bumped MySQL version requirement to 5.0.2.
* Disable the partial HTML and MathML rendering options for Math,
and render as PNG by default.
* MathML mode was so incomplete most people thought it simply didn't work.
* New skins/common/*.css files usable by skins instead of having to copy
piles of generic styles from MonoBook or Vector's css.
* The default user signature now contains a talk link in addition to the
user link.
* Searching for blocked usernames in the block log is now clearer.
* Better timezone recognition in user preferences.
* Extensions can now participate in the extraction of titles from URL paths.
* The command-line installer supports various RDBMSes better.
* The interwiki links table can now be accessed even when the interwiki
cache is used (used in the API and the Interwiki extension).
Internationalization
--------------------
* More gender support (for instance in user lists).
* Added language: Canadian English.
* Language converter improved, e.g. it now works depending on the page
content language.
* Time and number-formatting magic words also now depend on the page
content language.
* Bidirectional support further improved after 1.18.
Release notes
-------------
Full release notes:
https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git;a=blob_plain;f=RELEASE-NOTES-1.19;hb=1.19.0beta2
https://www.mediawiki.org/wiki/Release_notes/1.19
Coinciding with these security releases, the MediaWiki source code
repository has moved from SVN (at
https://svn.wikimedia.org/viewvc/mediawiki/trunk/phase3) to Git
(https://gerrit.wikimedia.org/gitweb/mediawiki/core.git), so the relevant
commits for these releases will not be appearing in our SVN repository. If
you use SVN checkouts of MediaWiki for version control, you need to migrate
these to Git. If you are using tarballs, there should be no change in the
process for you.
Please note that all WMF-deployed extensions have also been migrated to Git,
along with some other non-WMF-maintained ones.
Please bear with us: some of the Git-related links for this release may not
work immediately, but they should do so soon.
To do a simple Git clone, the command is:
git clone https://gerrit.wikimedia.org/r/p/mediawiki/core.git
More information is available at https://www.mediawiki.org/wiki/Git
For more help, please visit the #mediawiki IRC channel on freenode.net
(irc://irc.freenode.net/mediawiki) or email the mediawiki-l mailing list
at mediawiki-l(a)lists.wikimedia.org.
**********************************************************************
Download:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.tar.gz
Patch to previous version (1.19.0beta1), without interface text:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.patch.gz
Interface text changes:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-i18n-1.19.0beta2.patch.gz
GPG signatures:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.tar.gz.sig
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.patch.gz.sig
http://download.wikimedia.org/mediawiki/1.19/mediawiki-i18n-1.19.0beta2.patch.gz.sig
Public keys:
https://secure.wikimedia.org/keys.html
TL;DR: A few ideas follow on how we could possibly help legit editors
contribute from behind Tor proxies. I am just conversant enough with
the security problems to make unworkable suggestions ;-), so please
correct me, critique & suggest solutions, and perhaps volunteer to help.
The current situation:
https://en.wikipedia.org/wiki/Wikipedia:Advice_to_users_using_Tor_to_bypass…
We generally don't let anyone edit or upload from behind Tor; the
TorBlock extension stops them. One exception: a person can create an
account, accumulate lots of good edits, ask for an IP block exemption,
and then use that account to edit from behind Tor. This is
unappealing because then there's still a bunch of in-the-clear editing
that has to happen first, and because then site functionaries know that
the account is going to be making controversial edits (and could
possibly connect it to IPs in the future, right?). And right now
there's no way to truly *anonymously* contribute from behind Tor
proxies; you have to log in. However, since JavaScript delivery is hard
for Tor users, I'm not sure how much editing from Tor -- vandalism or
legit -- is actually happening. (I hope for analytics on this and thus
added it to https://www.mediawiki.org/wiki/Analytics/Dreams .) We know
at least that there are legitimate editors who would prefer to use Tor
and can't.
People have been talking about how to improve the situation for some
time -- see http://cryptome.info/wiki-no-tor.htm and
https://lists.torproject.org/pipermail/tor-dev/2012-October/004116.html
. It'd be nice if it could actually move forward.
I've floated this problem past Tor and privacy people, and here are a
few ideas:
1) Just use the existing mechanisms more leniently. Encourage the
communities (Wikimedia & Tor) to use
https://en.wikipedia.org/wiki/Wikipedia:Request_an_account (to get an
account from behind Tor) and to let more people get IP block exemptions
even before they've made any edits (< 30 people have gotten exemptions
on en.wp in 2012). Add encouraging "get an exempt account" language to
the "you're blocked because you're using Tor" messaging. Then if
there's an uptick in vandalism from Tor then they can just tighten up again.
2) Encourage people with closed proxies to revitalize
https://en.wikipedia.org/wiki/Wikipedia:WOCP . Problem: using closed
proxies is okay for people with some threat models but not others.
3) Look at Nymble - http://freehaven.net/anonbib/#oakland11-formalizing
and http://cgi.soic.indiana.edu/~kapadia/nymble/overview.php . It would
allow Wikimedia to distance itself from knowing people's identities, but
still allow admins to revoke permissions if people acted up. The user
shows a real identity, gets a token, and exchanges that token over Tor
for an account. If the user abuses the site, Wikimedia site admins can
blacklist the user without ever being able to learn who they were or
what other edits they did. More: https://cs.uwaterloo.ca/~iang/ . Ian
Goldberg's, Nick Hopper's, and Apu Kapadia's groups are all working on
Nymble or its derivatives. It's not ready for production yet, I bet,
but if someone wanted a Big Project....
3a) A token authorization system (perhaps a MediaWiki extension) where
the server blindly signs a token, and then the user can use that token
to bypass the Tor blocks. (Tyler mentioned he saw this somewhere in a
Bugzilla suggestion; I haven't found it.)
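To make 3a a bit more concrete, here is a minimal sketch of the classic RSA
blind-signature flow such a token system could build on. This is purely
illustrative PHP using the gmp extension; all function names are invented,
and a real deployment would need proper padding, hashing and key management:

  <?php
  // Server key: modulus $n, public exponent $e, private exponent $d.
  // $m is the user's token (hashed and encoded as a number).

  // 1. User blinds $m with a random factor $r (assumed coprime to $n):
  //    blinded = m * r^e mod n
  function blindToken( $m, $r, $e, $n ) {
      return gmp_mod( gmp_mul( $m, gmp_powm( $r, $e, $n ) ), $n );
  }

  // 2. Server signs the blinded value without ever seeing m:
  //    s' = blinded^d mod n
  function signBlinded( $blinded, $d, $n ) {
      return gmp_powm( $blinded, $d, $n );
  }

  // 3. User unblinds: s = s' * r^-1 mod n = m^d mod n, an ordinary RSA
  //    signature on m that the server never linked to the user's identity.
  function unblind( $sPrime, $r, $n ) {
      return gmp_mod( gmp_mul( $sPrime, gmp_invert( $r, $n ) ), $n );
  }

  // 4. When the token is redeemed over Tor, anyone with the public key
  //    can verify it: s^e mod n == m.
  function verifyToken( $s, $m, $e, $n ) {
      return gmp_cmp( gmp_powm( $s, $e, $n ), gmp_mod( $m, $n ) ) === 0;
  }

The point of the blinding step is that the server can later blacklist a
redeemed token without ever learning which identity it was issued to, which
is the property idea 3 is after.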
4) Allow more users the IP block exemption, possibly even automatically
after a certain number of unreverted edits, but with some kind of
FlaggedRevs integration; Tor users can edit but their changes have to be
reviewed before going live. We could combine this with (3); Nymble
administrators or token-issuers could pledge to review edits coming from
Tor. But that latter idea sounds like a lot of social infrastructure to
set up and maintain.
Thoughts? Are any of you interested in working on this problem? #tor on
the OFTC IRC server is full of people who'd be interested in talking
about this.
--
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation
Hi all,
I'd like to announce a recently created tool that might help the Wikimedia
technical community find stuff more easily. Sometimes relevant information
is buried in IRC chat logs, messages in any of several mailing lists, pages
in mediawiki.org, commit messages, etc. This tool (essentially a custom
Google search engine that filters results to a few relevant URL patterns)
is aimed at relieving this problem. Test it here: http://hexm.de/mw-search
The motivation for the tool came from a post by Niklas [1], specifically
the section "Coping with the proliferation of tools within your community".
In the comments section, Nemo announced his initiative to create a custom
Google search to fit at least some of the requirements presented in that
section, and I've offered to help him tweak it further. The URL list is
still incomplete and can be customized by editing the page
http://www.mediawiki.org/wiki/Wikimedia_technical_search (syncing with the
actual engine still will have to happen by hand, but should be quick).
Besides feedback on whether the engine works as you'd expect, I would like
to start some discussion about the ability for Google's bots to crawl some
of the resources that are currently included in the URL filters, but return
no results. For example, the IRC logs at bots.wmflabs.org/~wm-bot/logs/.
Some workarounds are used (e.g. using GitHub for code search since gitweb
isn't crawlable) but that isn't possible for all resources. What can we do
to improve the situation?
--Waldir
1.
http://laxstrom.name/blag/2013/02/11/fosdem-talk-reflections-23-docs-code-a…
Marc-Andre Pelletier discovered a vulnerability in the MediaWiki OpenID
extension for the case that MediaWiki is used as a “provider” and the wiki
allows renaming of users.
All previous versions of the OpenID extension used user-page URLs as
identity URLs. On wikis that use the OpenID extension as “provider” and
allow user renames, an attacker with rename privileges could rename a user
and then create an account with the same name as the victim. This would
have allowed the attacker to steal the victim’s OpenID identity.
Version 3.00 fixes the vulnerability by using Special:OpenIDIdentifier/<id>
as the user’s identity URL, <id> being the immutable MediaWiki-internal
userid of the user. The user’s old identity URL, based on the user’s
user-page URL, will no longer be valid.
The user’s user page can still be used as OpenID identity URL, but will
delegate to the special page.
This is a breaking change, as it changes all user identity URLs. Providers
are urged to upgrade and notify users, or to disable user renaming.
Respectfully,
Ryan Lane
https://gerrit.wikimedia.org/r/#/c/52722
Commit: f4abe8649c6c37074b5091748d9e2d6e9ed452f2
How to load up high-resolution imagery on high-density displays has been an
open question for a while; we've wanted this for the mobile web site since
the Nexus One and Droid brought 1.5x, and the iPhone 4 brought 2.0x density
displays to the mobile world a couple years back.
More recently, tablets and a few laptops are bringing 1.5x and 2.0x density
displays too, such as the new Retina iPad and MacBook Pro.
A properly responsive site should be able to detect when it's running on
such a display and load higher-density image assets automatically...
Here's my first stab:
https://bugzilla.wikimedia.org/show_bug.cgi?id=36198#c6
https://gerrit.wikimedia.org/r/#/c/24115/
* adds $wgResponsiveImages setting, defaulting to true, to enable the
feature
* adds jquery.hidpi plugin to check window.devicePixelRatio and replace
images with data-src-1-5 or data-src-2-0 depending on the ratio
* adds mediawiki.hidpi RL script to trigger hidpi loads after main images
load
* renders images from wiki image & thumb links at 1.5x and 2.0x and
includes data-src-1-5 and data-src-2-0 attributes with the targets
(example markup sketched below)
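For a rough idea of what that rendered output could look like, here is a
hypothetical sketch of the server-side half. The data attribute names come
from the change itself, but the function and URL scheme here are invented;
the real rendering goes through MediaWiki's image and thumb link code:

  <?php
  // Hypothetical sketch: emit an <img> that carries 1.5x and 2.0x variants
  // in data attributes, for jquery.hidpi to swap in client-side based on
  // window.devicePixelRatio.
  function hidpiImgTag( $src, $src15, $src20, $width, $height ) {
      return '<img src="' . htmlspecialchars( $src ) . '"' .
          ' data-src-1-5="' . htmlspecialchars( $src15 ) . '"' .
          ' data-src-2-0="' . htmlspecialchars( $src20 ) . '"' .
          ' width="' . intval( $width ) . '" height="' . intval( $height ) . '">';
  }

  // A 220px thumb also rendered at 330px (1.5x) and 440px (2.0x):
  echo hidpiImgTag(
      '/thumb/Example.jpg/220px-Example.jpg',
      '/thumb/Example.jpg/330px-Example.jpg',
      '/thumb/Example.jpg/440px-Example.jpg',
      220, 165
  );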
Note that this is a work in progress. There will be places where this
doesn't yet work because they output their imgs differently. If you move
from a low-DPI to a high-DPI screen on a MacBook Pro with Retina display,
you won't see the higher-resolution images load until you reload.
Confirmed basic images and thumbs in wikitext appear to work in Safari 6 on
MacBook Pro Retina display. (Should work in Chrome as well).
The same code loaded on a MobileFrontend display should also work, but I
have not yet attempted that.
Note this does *not* attempt to use native SVGs, which is another potential
tactic for improving display on high-density displays and zoomed windows.
This loads higher-resolution raster images, including rasterized SVGs.
There may be loads of bugs; this is midnight hacking code and I make no
guarantees of suitability for any purpose. ;)
-- brion
Hello! So I tried converting
https://github.com/wikimedia/qa-browsertests/pull/1 into a Gerrit changeset
(https://gerrit.wikimedia.org/r/#/c/54097/) , and was mostly successful. It
is also a relatively painless process - at least for single commits.
This assumes you (the person doing the GitHub -> Gerrit bridge) have a
Gerrit account. I wrote a small script that sort of makes this easy:
https://gist.github.com/yuvipanda/5174162
This only does things one time: it moves a set of commits in a pull
request to a single squashed commit on Gerrit, assuming your current
directory is a cloned version of the Gerrit repo you want to commit to. It
should not be too hard to write an actual, idempotent sync script that
maintains a 1-to-1 correspondence between Pull Requests and Gerrit
Changesets, and I'll attempt to do that tomorrow.
Note that this is a shitty bash script (to put it mildly) - but that seems
to be all I can write at 5:30 AM :) I'll probably rewrite it as a proper
Python one soon. That should also allow me to use the GitHub API to mirror
the GitHub pull request title / description to Gerrit.
I also offer to manually sync pull requests into Gerrit as they come in
until the automatic Gerrit integration is ready. I shall be writing another
small script tomorrow to have me 'watch' all the wikimedia/* GitHub
repositories.
Thank you :) I'll update this thread as the script gets less shitty. Do let
me know if you have built a far more complete script :)
--
Yuvi Panda T
http://yuvi.in/blog
hi-
i'm hopeful this is the appropriate venue for this topic - i recently
had occasion to visit #mediawiki on freenode, looking for help. i found
myself a bit frustrated by the amount of bot activity there and wondered
if it might be worth some consideration. it seems to
frequently drown out/dilute those asking for help, which can be a bit
discouraging/frustrating. additionally, from the perspective of those
who might help [based on my experience in this role in other channels],
constant activity can sometimes engender disinterest [e.g. the irc
client shows activity in the channel, but i'm less inclined to look as
it's probably just a bot].
to offer one possibility - i know there are a number of mediawiki and/or
wikimedia related channels - might there be one in which bot activity
might be better suited, in the context of less contention between the
two audiences [those seeking help vs. those interested in development,
etc]? one nomenclature convention that seems to be at least somewhat of
a de facto standard is #project for general help, and #project-dev[el]
for development topics. a few examples of this i've seen are android,
libreoffice, python, and asterisk. adding yet another channel to this
list might not be terribly welcome, but maybe the distinction would be
worth the addition?
as i'm writing this, i see another thread has begun wrt freenode, and i
also see a bug filed that relates at least to some degree
[https://bugzilla.wikimedia.org/show_bug.cgi?id=35427], so i may just be
repeating an existing sentiment, but i wanted to at least offer a brief
perspective.
regards
-ben
Hi folks,
Short version: This mail is fishing for feedback on proposed work on
Gerrit-Bugzilla integration to replace code review tags.
Long version:
One feature of our old code review system was a tagging system
that made it quick and easy to assign a keyword to a revision at any
time. There were a number of uses we had for the system, which are
documented here:
http://www.mediawiki.org/wiki/Subversion/Code_review/tags
Examples of tags that we miss:
scaptrap - this change requires special care at deploy time. It may
be that it requires two components to be in sync that aren't generally
deployed in sync, or it requires a configuration change, or something
else.
fixme - this change introduces a bug (weirdly, not in our old
documentation - go figure). In Gerrit, this would be a -1 or -2 if
the code hasn't been merged yet, but post-merge, there's no uniform
way to attach that metadata.
backcompat - backwards compatibility breakers.
This has been a frequently requested feature of the upstream Gerrit
developers. However, they've been reluctant to implement such a
feature until after some unscheduled major architectural work is
completed, so we shouldn't hold our breath waiting for that.
With that in mind, we have bug 38239, assigned to Chad:
https://bugzilla.wikimedia.org/38239
Chad worked up a hacky version of tagging as a MediaWiki extension
earlier this week, which would have kept the tags in MediaWiki instead
of Gerrit. That's only one half of what might be an acceptable
solution, since the other half would need to be some JavaScript in the
Gerrit user interface to allow for insertion into the MediaWiki tag
database. I discouraged Chad from continuing on this because it
seemed to me that it would have been a little *too* hacky. I
preferred that if we were going to have our own hacky solution, it
should at least be implemented as a Gerrit plugin, so that it would at
least stand a chance of becoming a well-integrated solution.
The problem with tagging generally, though, is that it ended up being
this weird parallel workflow to other systems. "fixme" was frequently
used as a substitute for filing a bug report. "scaptrap" was a
substitute for proper deployment notes, and "backcompat" was a
solution for proper developer notes. That said, it was lightweight,
which meant that it actually got used, as opposed to many "proper"
solutions, which are frequently enough work that it's difficult to
expect uniform followthrough.
The solution that Chad and I discussed is an addition to the
Bugzilla-Gerrit plugin that Christian is already working on. The idea
would be that, for any given revision, there would be a "file bug
about this revision" link. Following that link would throw to the
standard Bugzilla bug filing page, with as many fields prepopulated
based on Gerrit context as could sensibly be filled (including, at the
very minimum, a link back to the Gerrit rev, but probably also with
the assignee set to the developer who introduced the issue, and the
component set based on the repo).
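As a hedged sketch of how that prepopulation could work: Bugzilla's
enter_bug.cgi accepts query parameters such as product, component,
short_desc and comment, so the plugin mostly needs to build a URL from
Gerrit context. The function name and field mapping below are invented:

  <?php
  // Hypothetical: build a "file bug about this revision" link from Gerrit data.
  function makeBugFilingUrl( $repo, $changeUrl, $ownerEmail ) {
      $params = array(
          'product'     => 'MediaWiki',         // assumed mapping from the repo
          'component'   => $repo,               // e.g. derived from the repo name
          'short_desc'  => "Regression introduced by $changeUrl",
          'comment'     => "See Gerrit change: $changeUrl",
          'assigned_to' => $ownerEmail,         // the change's author
      );
      return 'https://bugzilla.wikimedia.org/enter_bug.cgi?' .
          http_build_query( $params );
  }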
A Bugzilla-based solution would be an ideal replacement for "fixme",
since fixmes are basically bugs anyway. It would work reasonably well
for "scaptrap", since they generally imply something that needs to be
done prior to deployment. It would be an awkward replacement for
"backcompat" and others.
Still, the nice thing about a Bugzilla-based solution is that it's
general-purpose enough that it may very well find use
outside of Wikimedia-land. The BZ-Gerrit work is actually being done
as part of a larger issue tracker plugin that the GerritForge folks
have written to support Jira. Filing issues based on revisions is
likely a common request for people integrating Gerrit with their issue
tracking system.
Is a Bugzilla-based solution worthwhile enough for our purposes for a
modest (but probably not insignificant) investment in this area, or
should we prioritize other Gerrit work higher (say, for example, a
native Gerrit tagging plugin)? Assuming we move forward with
development, anything we need to consider?
I've put the bulk of this email on mediawiki.org here:
http://www.mediawiki.org/wiki/Git/Tagging
We should evolve that page into a spec for the work that Chad and
Christian will be doing.
Rob
Hi,
Last November, I started to clean up the Glossary page on meta, as
an attempt to revive it and expand it to include many technical terms,
notably related to Wikimedia Engineering (see e-mail below).
There were (and are) already many glossaries spread around the wikis:
* one for MediaWiki: https://www.mediawiki.org/wiki/Manual:Glossary
* one for Wikidata: https://www.wikidata.org/wiki/Wikidata:Glossary
* one for Labs: https://wikitech.wikimedia.org/wiki/Help:Terminology
* two for the English Wikipedia:
https://en.wikipedia.org/wiki/Wikipedia:Glossary &
https://en.wikipedia.org/wiki/Wikipedia:WikiSpeak
* etc.
My thinking at the time was that it would be better to include tech
terms in meta's glossary, because fragmentation isn't a good thing for
glossaries: The user probably doesn't want to search a term through a
dozen glossaries (that they know of), and it would be easier if they
could just search in one place.
The fact is, though, that we're not going to merge all the existing
glossaries into one anytime soon, so overlap and duplication will
remain anyway. Also, it feels weird to have tech content on meta, and
the glossary is getting very long (and possibly more difficult to
maintain). Therefore, I'm now reconsidering the decision of mixing
tech terms and general movement terms on meta.
Below are the current solutions I'm seeing to move forward; I'd love
to get some feedback as to what people think would be the best way to
proceed.
* Status quo: We keep the current glossaries as they are, even if they
overlap and duplicate work. We'll manage.
* Wikidata: If Wikidata could be used to host terms and definitions
(in various languages), and wikis could pull this data using
templates/Lua, it would be a sane way to reduce duplication, while
still allowing local wikis to complement it with their own terms. For
example, "administrator" is a generic term across Wikimedia sites
(even MediaWiki sites), so it would go into the general glossary
repository on Wikidata; but "DYK" could be local to the English
Wikipedia. With proper templates, the integration between remote and
local terms could be seamless. It seems to me, however, that this
would require significant development work.
* Google custom search: Waldir recently used Google Custom Search to
create a search tool to find technical information across many pages
and sites where information is currently fragmented:
http://lists.wikimedia.org/pipermail/wikitech-l/2013-March/067450.html
. We could set up a similar tool (or a FLOSS alternative) that would
include all glossaries. By advertising the tool prominently on
existing glossary pages (so that users know it exists), this could
allow us to curate more specific glossaries, while keeping them all
searchable with one tool.
Right now, I'm inclined to go with the "custom search" solution,
because it looks like the easiest and fastest to implement, while
reducing maintenance costs and remaining flexible. That said, I'd love
to hear feedback and opinions about this before implementing anything.
Thanks,
guillaume
On Tue, Nov 20, 2012 at 7:55 PM, Guillaume Paumier
<gpaumier(a)wikimedia.org> wrote:
> Hi,
>
> The use of jargon, acronyms and other abbreviations throughout the
> Wikimedia movement is a major source of communication issues, and
> barriers to comprehension and involvement.
>
> The recent thread on this list about "What is Product?" is an example
> of this, as are initialisms that have long been known to be a barrier
> for Wikipedia newcomers.
>
> A way to bridge people and communities with different vocabularies is
> to write and maintain a glossary that explains jargon in plain English
> terms. We've been lacking a good and up-to-date glossary for Wikimedia
> "stuff" (Foundation, chapter, movement, technology, etc.).
>
> Therefore, I've started to clean up and expand the outdated Glossary
> on meta, but it's a lot of work, and I don't have all the answers
> myself either. I'll continue to work on it, but I'd love to get some
> help on this and to make it a collaborative effort.
>
> If you have a few minutes to spare, please consider helping your
> (current and future) fellow Wikimedians by writing a few definitions
> if there are terms that you can explain in plain English. Additions of
> new terms are very welcome as well:
>
> https://meta.wikimedia.org/wiki/Glossary
>
> Some caveats:
> * As part of my work, I'm mostly interested in a glossary from a
> technical perspective, so the list currently has a technical bias. I'm
> hoping that by sending this message to a wider audience, people from
> the whole movement will contribute to the glossary and balance it out.
> * Also, I've started to clean up the glossary, but it still contains
> dated terms and definitions from a few years ago (like the FundCom),
> so boldly edit/remove obsolete content.
--
Guillaume Paumier
Technical Communications Manager — Wikimedia Foundation
https://donate.wikimedia.org