I'm happy to announce the availability of the second beta release of the
new MediaWiki 1.19 release series.
Please try it out and let us know what you think. Don't run it on any
wikis that you really care about, unless you are both very brave and
very confident in your MediaWiki administration skills.
MediaWiki 1.19 is a large release that contains many new features and
bug fixes. This is a summary of the major changes of interest to users.
You can consult the RELEASE-NOTES-1.19 file for the full list of changes
in this version.
Five security issues were discovered.
It was discovered that the API had a cross-site request forgery (CSRF)
vulnerability in the block/unblock modules. It was possible for a user account
with block privileges to block or unblock another user without providing a
token.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=34212
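The general defence against this class of attack is to require a secret,
per-session token with every state-changing request. Here is a minimal sketch
of the idea in Python; the function and field names are hypothetical and this
is not the MediaWiki API's actual implementation:

```python
import hmac
import secrets

def new_session():
    # The token is generated server-side and stored in the session; an
    # external site mounting a CSRF attack cannot read it.
    return {"csrf_token": secrets.token_hex(16)}

def handle_block_request(session, params):
    # Reject any state-changing request that lacks the session's token.
    # compare_digest avoids leaking information through timing.
    supplied = params.get("token", "")
    if not hmac.compare_digest(supplied, session["csrf_token"]):
        raise PermissionError("missing or invalid CSRF token")
    return "blocked"

session = new_session()

# A forged cross-site request carries no token, so it is rejected.
try:
    handle_block_request(session, {"user": "Example"})
except PermissionError:
    print("rejected")

# A legitimate request includes the token fetched from the same origin.
print(handle_block_request(session, {"user": "Example",
                                     "token": session["csrf_token"]}))
```

The vulnerability above amounted to the block/unblock modules skipping the
token check entirely.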
It was discovered that the resource loader can leak certain kinds of private
data across domain origin boundaries, by providing the data as an executable
JavaScript file. In MediaWiki 1.18 and later, this includes the leaking of CSRF
protection tokens. This allows compromise of the wiki's user accounts, say by
changing the user's email address and then requesting a password reset.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=34907
Jan Schejbal of Hatforce.com discovered a cross-site request forgery (CSRF)
vulnerability in Special:Upload. Modern browsers (since at least as early as
December 2010) are able to post file uploads without user interaction,
violating previous security assumptions within MediaWiki.
Depending on the wiki's configuration, this vulnerability could lead to further
compromise, especially on private wikis where the set of allowed file types is
broader than on public wikis. Note that CSRF allows compromise of a wiki from
an external website even if the wiki is behind a firewall.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=35317
George Argyros and Aggelos Kiayias reported that the method used to generate
password reset tokens is not sufficiently secure. Instead, we now use various
more secure random number generators, depending on what is available on the
platform. Windows users are strongly advised to install either the openssl
extension or the mcrypt extension for PHP so that MediaWiki can take advantage
of the cryptographic random number facility provided by Windows.
Any extension developers using mt_rand() to generate random numbers in contexts
where security is required are encouraged to instead make use of the
MWCryptRand class introduced with this release.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=35078
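The underlying principle is language-independent: tokens that guard account
recovery must come from a cryptographically secure source, not a seeded PRNG.
A minimal sketch in Python, where the `secrets` module plays the role that
MWCryptRand plays on the PHP side (the token formats here are illustrative):

```python
import random
import secrets

# INSECURE (shown only for contrast): Mersenne Twister-style PRNGs such as
# PHP's mt_rand() have a small, recoverable internal state; an attacker who
# observes enough output can predict future "random" tokens.
weak_token = "%08x" % random.getrandbits(32)

# Secure: draw from the operating system's CSPRNG, which is what MWCryptRand
# arranges via openssl, mcrypt, or the Windows crypto facility.
strong_token = secrets.token_hex(16)  # 32 hex characters, 128 bits of entropy

print(strong_token)
```

The fix in 1.19 is exactly this substitution: reset tokens now come from the
best available CSPRNG on the platform rather than from mt_rand().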
A long-standing bug in the wikitext parser (bug 22555) was discovered to have
security implications. In the presence of the popular CharInsert extension, it
leads to cross-site scripting (XSS). XSS may be possible with other extensions
or perhaps even the MediaWiki core alone, although this is not confirmed at
this time. A denial-of-service attack (infinite loop) is also possible
regardless of configuration.
For more details, see https://bugzilla.wikimedia.org/show_bug.cgi?id=35315
*********************************************************************
What's new?
*********************************************************************
MediaWiki 1.19 brings the usual host of various bugfixes and new features.
A comprehensive list of what's new is in the release notes.
* Bumped MySQL version requirement to 5.0.2.
* Disabled the partial HTML and MathML rendering options for Math; PNG is now
  the default. (MathML mode was so incomplete most people thought it simply
  didn't work.)
* New skins/common/*.css files usable by skins instead of having to copy piles
  of generic styles from MonoBook or Vector's CSS.
* The default user signature now contains a talk link in addition to the
user link.
* Searching for blocked usernames in the block log is now clearer.
* Better timezone recognition in user preferences.
* Extensions can now participate in the extraction of titles from URL paths.
* The command-line installer supports various RDBMSes better.
* The interwiki links table can now also be accessed when the interwiki cache
  is used (used in the API and the Interwiki extension).
Internationalization
--------------------
* More gender support (for instance in user lists).
* Added language: Canadian English.
* Language converter improved, e.g. it now works depending on the page
content language.
* Time and number-formatting magic words also now depend on the page
content language.
* Bidirectional support further improved after 1.18.
Release notes
-------------
Full release notes:
https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git;a=blob_plain;f=RELEASE-NOTES-1.19;hb=1.19.0beta2
https://www.mediawiki.org/wiki/Release_notes/1.19
Coinciding with these security releases, the MediaWiki source code repository
has moved from SVN (at https://svn.wikimedia.org/viewvc/mediawiki/trunk/phase3)
to Git (https://gerrit.wikimedia.org/gitweb/mediawiki/core.git), so the
relevant commits for these releases will not be appearing in our SVN
repository. If you use SVN checkouts of MediaWiki for version control, you
need to migrate these to Git.
If you are using tarballs, there should be no change in the process for you.
Please note that all WMF-deployed extensions have also been migrated to Git,
along with some other non-WMF-maintained ones.
Please bear with us: some of the Git-related links for this release may not
work immediately, but should later on.
To do a simple Git clone, the command is:
git clone https://gerrit.wikimedia.org/r/p/mediawiki/core.git
More information is available at https://www.mediawiki.org/wiki/Git
For more help, please visit the #mediawiki IRC channel on freenode.net
(irc://irc.freenode.net/mediawiki) or email the MediaWiki-l mailing list
at mediawiki-l(a)lists.wikimedia.org.
**********************************************************************
Download:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.tar.gz
Patch to previous version (1.19.0beta1), without interface text:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.patch.gz
Interface text changes:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-i18n-1.19.0beta2.patch.gz
GPG signatures:
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.tar.gz.sig
http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.0beta2.patch.gz.sig
http://download.wikimedia.org/mediawiki/1.19/mediawiki-i18n-1.19.0beta2.patch.gz.sig
Public keys:
https://secure.wikimedia.org/keys.html
How to load up high-resolution imagery on high-density displays has been an
open question for a while; we've wanted this for the mobile web site since
the Nexus One and Droid brought 1.5x, and the iPhone 4 brought 2.0x density
displays to the mobile world a couple years back.
More recently, tablets and a few laptops are bringing 1.5x and 2.0x density
displays too, such as the new Retina iPad and MacBook Pro.
A properly responsive site should be able to detect when it's running on
such a display and load higher-density image assets automatically...
Here's my first stab:
https://bugzilla.wikimedia.org/show_bug.cgi?id=36198#c6
https://gerrit.wikimedia.org/r/#/c/24115/
* adds $wgResponsiveImages setting, defaulting to true, to enable the feature
* adds jquery.hidpi plugin to check window.devicePixelRatio and replace images
  with data-src-1-5 or data-src-2-0 depending on the ratio
* adds mediawiki.hidpi RL script to trigger hidpi loads after main images load
* renders images from wiki image & thumb links at 1.5x and 2.0x and includes
  data-src-1-5 and data-src-2-0 attributes with the targets
Note that this is a work in progress. There will be places where this doesn't
yet work because they output their imgs differently. If you move a window from
a low-DPI to a high-DPI screen on a MacBook Pro with Retina display, you won't
see the high-resolution images load until you reload.
Confirmed that basic images and thumbs in wikitext appear to work in Safari 6
on a MacBook Pro with Retina display. (Should work in Chrome as well.) The
same code loaded on a MobileFrontend display should also work, but I have not
yet attempted that.
Note this does *not* attempt to use native SVGs, which is another potential
tactic for improving display on high-density displays and zoomed windows.
This loads higher-resolution raster images, including rasterized SVGs.
There may be loads of bugs; this is midnight hacking code and I make no
guarantees of suitability for any purpose. ;)
-- brion
hi-
i'm hopeful this is the appropriate venue for this topic - i recently
had occasion to visit #mediawiki on freenode, looking for help. i found
myself a bit frustrated by the amount of bot activity there and wondered
if there might be value in some consideration for this. it seems to
frequently drown out/dilute those asking for help, which can be a bit
discouraging/frustrating. additionally, from the perspective of those
who might help [based on my experience in this role in other channels],
constant activity can sometimes engender disinterest [e.g. the irc
client shows activity in the channel, but i'm less inclined to look as
it's probably just a bot].
to offer one possibility - i know there are a number of mediawiki and/or
wikimedia related channels - might there be one in which bot activity
might be better suited, in the context of less contention between the
two audiences [those seeking help vs. those interested in development,
etc]? one nomenclature convention that seems to be at least somewhat of
a de facto standard is #project for general help, and #project-dev[el]
for development topics. a few examples of this i've seen are android,
libreoffice, python, and asterisk. adding yet another channel to this
list might not be terribly welcome, but maybe the distinction would be
worth the addition?
as i'm writing this, i see another thread has begun wrt freenode, and i
also see a bug filed that relates at least to some degree
[https://bugzilla.wikimedia.org/show_bug.cgi?id=35427], so i may just be
repeating an existing sentiment, but i wanted to at least offer a brief
perspective.
regards
-ben
On Tue, Jul 24, 2012 at 10:25 PM, Steven Walling <steven.walling <at>
gmail.com> wrote:
> But do we have a plan for improving Gerrit in a substantial way?
Hi everyone,
In my response to Steven at the time [1], I indicated that we have a
modest contractor budget for this work. The RFP is now posted here:
http://hire.jobvite.com/Jobvite/Job.aspx?j=o4gIWfwI&c=qSa9VfwQ
Please let me know if you're interested (and apply if you're really
interested). Also, please let me know if you have any questions.
Thanks!
Rob
[1] http://article.gmane.org/gmane.science.linguistics.wikipedia.technical/62630
On Fri, Jun 15, 2012 at 8:48 AM, Sumana Harihareswara
<sumanah(a)wikimedia.org> wrote:
> If you merge into mediawiki/core.git, your change is considered safe for
> inclusion in a wmf branch. The wmf branch is just branched out of
> master and then deployed. We don't review it again. Because we're
> deploying more frequently to WMF sites, the code review for merging into
> MediaWiki's core.git needs to be more like deployment/shell-level
> review, and so we gave merge access to people who already had deployment
> access. We have since added some more people. The current list:
> https://gerrit.wikimedia.org/r/#/admin/groups/11,members
Let me elaborate on this. As unclear as our process is for giving
access, it's even less clear what our policy is for taking it away.
If we can settle on a policy for taking access away/suspending access,
it'll make it much easier to loosen up about giving access.
Here's the situation we want to avoid: we give access to someone who
probably shouldn't have it. They continually introduce deployment
blockers into the code, making us need to slow down our frequent
deployment process. Two hour deploy windows become six hour deploy
windows as we need time to fix up breakage introduced during the
window. Even with the group we have, there are times where things
that really shouldn't slip through do. It's manageable now, but
adding more people is going to multiply this problem as we get back
into a situation where poorly conceived changes become core
dependencies.
We haven't had a culture of making a big deal about it when someone introduces
a breaking change, or does something that brings the db to its knees, or
introduces a massive security hole, or whatever. That means that if the
situation were to arise that we needed to revoke someone's access, we'd have
to wait until it got egregious and awful, and even then the person is likely
to be shocked that their rights are being revoked (if we even do it then). To
be less conservative about giving access, we also need to figure out how to be
less conservative about taking it away. We also want to be as reasonably
objective about it as possible. It's always going to be somewhat subjective,
and we don't want to completely eliminate the role of common sense.
It would also be nice if we didn't have to resort to the nuclear
option to get the point across. One low-stakes way we can use to make
sure people are more careful is to have some sort of rotating "oops"
award. At one former job I had, we had a Ghostbusters Stay Puft doll
named "Buster" that was handed out when someone broke the build that
they had to prominently display in their office. At another job, it
was a pair of Shrek ears that people had to wear when they messed
something up in production. In both cases, it was something you had
to wear until someone else came along. Perhaps we should institute
something similar (maybe as simple as asking people to append "OOPS"
to their IRC nicks when they botch something).
Rob
Hi all!
Since https://gerrit.wikimedia.org/r/#/c/21584/ got merged, people have been
complaining that they get tons of warnings. A great number of them seem to be
caused by the fact that MediaWiki will, if the DBO_TRX flag is set,
automatically start a transaction on the first call to Database::query().
See e.g. https://bugzilla.wikimedia.org/show_bug.cgi?id=40378
The DBO_TRX flag appears to be set by default in sapi (mod_php) mode. According
to the (very limited) documentation, it's intended to wrap the entire web
request in a single database transaction.
However, since we do not have support for nested transactions, this doesn't
work: the "wrapping" transaction gets implicitly committed when begin() is
called to start a "proper" transaction, which is often the case when saving new
revisions, etc.
So, DBO_TRX seems to be misguided, or at least broken, to me. Can someone
please explain why it was introduced? It seems the current situation is this:
* every view-only request is wrapped in a transaction, for no good reason I
can see.
* any write operation that uses an explicit transaction, like page editing,
watching pages, etc, will break the wrapping transaction (and cause a warning in
the process). As far as I understand, this really defies the purpose of the
automatic wrapping transaction.
So, how do we solve this? We could:
* suppress warnings if the DBO_TRX flag is set. That would prevent the logs from
being swamped by transaction warnings, but it would not fix the current broken
(?!) behavior.
* get rid of DBO_TRX (or at least not use it per default). This seems to be the
Right Thing to me, but I suppose there is some point to the automatic
transactions that I am missing.
* Implement support for nested transactions, either using a counter (this would
at least make DBO_TRX work as I guess it was intended) or using savepoints
(that would give us support for actual nested transactions). That would be the
Real Solution, IMHO.
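The counter variant is simple to sketch. Illustrative Python, not MediaWiki's
actual Database class (method and attribute names are made up here): only the
outermost begin() issues BEGIN, and only the matching outermost commit()
issues COMMIT, so an inner transaction no longer breaks the wrapping one.

```python
class NestingConnection:
    """Counter-based transaction nesting over a flat BEGIN/COMMIT model."""

    def __init__(self):
        self.depth = 0        # current nesting depth
        self.statements = []  # SQL actually sent, for demonstration

    def begin(self):
        # Only the outermost begin() starts a real transaction.
        if self.depth == 0:
            self.statements.append("BEGIN")
        self.depth += 1

    def commit(self):
        if self.depth == 0:
            raise RuntimeError("commit() without matching begin()")
        self.depth -= 1
        # Only the matching outermost commit() really commits.
        if self.depth == 0:
            self.statements.append("COMMIT")

db = NestingConnection()
db.begin()    # wrapping per-request transaction (what DBO_TRX wants)
db.begin()    # "proper" inner transaction, e.g. while saving a revision
db.commit()   # no-op at the SQL level
db.commit()   # commits the whole request
print(db.statements)  # ['BEGIN', 'COMMIT']
```

Note the trade-off: with a plain counter, an inner rollback would have to
discard the whole outer transaction; savepoints would be needed for true
partial rollback.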
So, can someone shed light on what DBO_TRX is intended to do, and how it is
supposed to work?
-- daniel
For a few years now, we have had several query [special] pages, also
called "maintenance reports" in the list of special pages, which are
never updated for performance reasons: 6 on all wikis and 6 more only on
en.wiki. <https://bugzilla.wikimedia.org/show_bug.cgi?id=39667#c6>
A proposal is to run them again and quite liberally on all "small wikis"
(to start with); another, to update them everywhere but one at a time
and with proper breathing time for servers.[1]
The problem is, which pages are safe to run an update on even on
en.wiki, and how frequently; and which would kill it? Or, at what point is
a wiki too big to run such updates carelessly?[2]
Can someone estimate it by looking at the queries, or maybe by running
them on some DB where it's not a problem to test?
We only know that originally pages were disabled if they took "more than
about 15 minutes to update". If such a page now took, say, four times that,
i.e. 60 minutes, would it be a problem to update one such page per
day/week/month? Etc.
Most updates seem to already rely on slave DBs, but maybe this should be
confirmed; on the other hand, writing huge sets of results to DB
shouldn't be a problem because those are limited as well.[3]
Nemo
[1] In (reviewed) puppet terms: <https://gerrit.wikimedia.org/r/#/c/33713/>
[2] Below that limit, a wiki should be "small" for
<https://gerrit.wikimedia.org/r/#/c/33694> and frequently updated for
the benefit of the editors' engagement.
[3] 'wgQueryCacheLimit' => array(
'default' => 5000,
'enwiki' => 1000, // safe to raise?
'dewiki' => 2000, // safe to raise?
),
---------- Forwarded message ----------
From: Erik Moeller <erik(a)wikimedia.org>
Date: Tue, Nov 27, 2012 at 6:49 PM
Subject: Wikimedia/mapping event in Europe early next year?
To: maps-l(a)lists.wikimedia.org
Hi folks,
it's been a long time coming, but we're finally gearing up for putting
some development effort into an OSM tileservice running in production
to serve Wikimedia sites. This is being driven by the mobile team but
obviously has lots of non-mobile use cases as well, including the
recent Wikivoyage addition to the Wikimedia family. This work will
probably not kick off before January/February 2013; before then, the
mobile team is working to finish up the GeoData extension (
https://www.mediawiki.org/wiki/Extension:Geodata ).
To get broader community involvement and sync up with existing
volunteer efforts in this area, it'd IMO be useful to plan a
face-to-face meetup/hackfest just focused on geodata/mapping related
development work sometime around Feb/March 2013.
WMF is not going to organize this, but we can help sponsor travel and
bring the key developers from our side who will work on this. Are
there any takers for supporting a 20-30 people development event in
Europe focused on mapping/geodata? I'm suggesting Europe because I
know quite a few of the relevant folks are there, but am open to other
options as well.
Cheers,
Erik
--
Erik Möller
VP of Engineering and Product Development, Wikimedia Foundation
Support Free Knowledge: https://wikimediafoundation.org/wiki/Donate
LevelUp is a mentorship program that will start in January 2013 and that
replaces the "20% time" policy
https://www.mediawiki.org/wiki/Wikimedia_engineering_20%25_policy for
Wikimedia Foundation engineers. Technical contributors, volunteer or
staff, have the opportunity to participate; see
https://www.mediawiki.org/wiki/Mentorship_programs/LevelUp for more details.
We started 20% time to ensure that Wikimedia Foundation engineers would
spend at least 20% of each week on tasks that directly serve the
Wikimedia developer and user community, including bug triage, code
review, extension review, documentation, urgent bugfixes, and so on. It
had various flaws. One day every week, it made people task-switch, it got in
the way of their deadlines, and it was perceived as a chore that always
needed doing.
It felt like enforcing a rota to do the dishes. So instead, let's build
a dishwasher. :-) We can cross-train each other and fill in the empty
rows on the maintainership table
https://www.mediawiki.org/wiki/Developers/Maintainers so our whole
community gains the capacity to get stuff done faster.
If you've been frustrated because of code review delays, I want you to
sign up for LevelUp -- by March 2013 you could be a comaintainer of a
codebase and be merging and improving other people's patchsets, which
will give them more time and incentive to merge yours. :-)
When I asked what people wanted to learn, I got a variety of responses
-- including "MediaWiki in general", "puppet", "networking", and "JS,
PHP, HTML, CSS, SQL" -- all of which you can learn through LevelUp.
When I asked how you wanted to learn, all of you said you wanted
real-life, hands-on work with mentors who could answer your questions.
Here you go. :-)
I won't be starting the matchmaking process in earnest till I come back
from the Thanksgiving break on Monday, but I will reply to talk page
messages and emails then. :-)
--
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation
Hello,
I attended a talk [1] by Elaine Weyuker [2] on Wed, 7 Nov 2012.
The talk, “Looking for Bugs In All the RIGHT Places”, discussed her work on
predicting where bugs would be found in the next release of a program product.
She and her collaborators have created a well-validated tool that predicts, in
under a minute, which 20% of the product's source files (frozen before the
next release) will contain about 80% of the faults that will be corrected in
that release.
The tool is not a silver bullet, but it is useful; especially because it
sometimes points attention to files that were not expected to have a lot of
problems.
The tool has two parts, a prediction front end and a back end interface to the
revision control system and bug tracker. As I remember it, the entire system
consisted of under 800 lines of Python and under 3000 lines of C++. Using it
would require adding a new back end.
I thought that this tool might be useful in MediaWiki development. She was
amenable to helping get it working if there was interest.
[1]http://www.ece.udel.edu/spotlight/WeyukerDLS.php
[2]http://en.wikipedia.org/wiki/Elaine_Weyuker
--
Jim Laurino
wican.x.jimlaur(a)dfgh.net
Please direct any reply to the list.
Only mail from the listserver reaches this address.