I would like to announce the release of MediaWiki Language Extension
Bundle 2015.02. This bundle is compatible with the MediaWiki 1.23.8 and
1.24.1 releases.
* Download: https://translatewiki.net/mleb/MediaWikiLanguageExtensionBundle-2015.02.tar…
* sha256sum: 536cf86e7080d8293a02cb59f99d96328c7009c8239e818556b28b77b02ff88d
* Installation instructions are at: https://www.mediawiki.org/wiki/MLEB
* Announcements of new releases will be posted to a mailing list:
* Report bugs to: https://phabricator.wikimedia.org/
* Talk with us at: #mediawiki-i18n @ Freenode
Release notes for each extension are below.
-- Kartik Mistry
== Babel, CLDR, CleanChanges and LocalisationUpdate ==
* Only localisation updates.
== Translate ==
* Improvements in Special:PagePreparation:
** T69591: Added 'Cancel' button.
** T68880: Categories are kept as part of the page template.
* T87503: Validate and normalize input file encoding in FFS.
* T54728: Split language details to subpage on Special:SupportedLanguages.
* Performance improvements by removing unneeded queries.
* T53410: Performance improvements in Special:MessageGroupStats and
== UniversalLanguageSelector ==
* Restored compatibility with IE8. If you still have any issues, please report them!
* Magnifying glass icon is now clickable!
* Localisation updates.
Kartik Mistry/કાર્તિક મિસ્ત્રી | IRC: kart_
---------- Forwarded message ----------
From: Yuvi Panda <yuvipanda(a)gmail.com>
Date: Fri, Feb 27, 2015 at 11:42 AM
Subject: Another labs outage - curse of the accursed hardware failure continues
To: Wikimedia Labs <labs-l(a)lists.wikimedia.org>
This is a repeat of the failure that happened a few days ago: the
underlying hardware is flaky, and andrewbogott is looking into it at
the moment.
== Why is everything so terrible? ==
Labs instances are Virtual Machines that run on physical hardware.
When the underlying hardware dies, the virtual machines on them also
die. This is similar to AWS or other cloud providers. We had one spare
machine (virt1012) in case any of the machines currently in use died
and needed a lifeboat.
A week or so ago one of the machines (virt1005) died, and we migrated
things to virt1012. This week, the new machine, virt1012, has been
having issues, and that's why the outages are all so similar. So the
current instability is basically caused by *two* different
hardware-related issues happening to two different machines.
IT IS A CURSE!
== Making things better? ==
We're adding more hardware. https://phabricator.wikimedia.org/T90783
is the ticket for that.
And specifically for toollabs, it would be awesome if it could survive
one virt* node being down. This is not an easy problem to
solve, but here's the tracking ticket for it:
Andrew is working through his night (again) to diagnose / fix this
issue (thanks!) and we'll keep you updated as things progress. Thank
you for your patience.
Yuvi Panda
The <gallery> tag generation has been updated to include srcset attributes
for high-density displays: <https://phabricator.wikimedia.org/T64709>
An unfortunate consequence is that if extensions have parser test cases
including a <gallery> they will need to be updated for the new HTML.
The only one I noticed on my setup was in Cite, so I submitted a patch
to fix that, but keep an eye out for surprise test failures elsewhere.
It's an easy fix -- run the parser tests, find the failing test, and
copy-paste the 'srcset' attribute from the generated HTML into the test
case expected output.
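For readers unfamiliar with the attribute: a srcset value is just a
comma-separated list of image URL / pixel-density pairs, which the browser
uses to pick a sharper thumbnail on high-density screens. A minimal sketch
of how such a value is assembled (the URLs below are hypothetical, not
actual MediaWiki thumbnail paths):

```python
# Hypothetical alternate thumbnail URLs for higher pixel densities;
# the real values come from MediaWiki's thumbnail generator.
candidates = [
    ("//upload.example.org/thumb/Foo.jpg/270px-Foo.jpg", "1.5x"),
    ("//upload.example.org/thumb/Foo.jpg/360px-Foo.jpg", "2x"),
]

# A srcset value is "URL descriptor" pairs joined by commas.
srcset = ", ".join(f"{url} {density}" for url, density in candidates)
print(srcset)
```

The string printed above is exactly the kind of attribute value you will
see in the regenerated HTML and can copy into the test's expected output.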
I've gone ahead and flipped a long-requested config change for
DismissableSiteNotice (which controls local, not Central, site notices) to
allow anons to dismiss them.
IIRC the only reason dismissal wasn't allowed for anons was that back in
2007 or so it would have been prohibitively expensive to deal with more
cookies, especially since I think local sitenotice was still being used for
fundraising banners and such.
My understanding of our current cookie-handling config in the caching layer
is that this shouldn't explode. But if it does, or it doesn't seem to be
working right, deployers please feel free to revert it and we'll poke at it
some more until it works better -- it's an easy config switch, just revert
and sync the settings file.
If there's a better process I should follow on this sort of thing do let me
know, I don't mean to step on any toes!
The day Extension:Newsletter is deployed, everybody will ask themselves why
we didn't have this feature before.
While this happens, I keep trying to find a GSoC / Outreach co-mentor,
and now a hackathon buddy in Lyon, just in case this alternative model
works better: someone willing to write a prototype of this MediaWiki
extension.
I have a clear idea of what we need and I can help sketching the UI,
writing strings, testing, discussing improvements, promoting the features,
perhaps even recruiting more people. If you are interested, please check
Possible Project for a Newsletter MediaWiki extension
Public / private questions and feedback are very welcome!
Engineering Community Manager @ Wikimedia Foundation
After 14 months of discussion, HTML templating is now live in core
MediaWiki. Currently only the server-side implementation has been merged. A
client-side implementation has also been submitted, but is stalled in
You can now use Mustache templates in your extensions and core code by
calling TemplateParser->processTemplate(). Full documentation can be found
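To give a feel for what a logic-less Mustache template does, here is a
minimal, purely illustrative substitution sketch in Python. This is not
the PHP TemplateParser API itself, only the core idea: variables in
{{ ... }} placeholders are replaced from a data context, with no logic in
the template.

```python
import re

def render(template, context):
    # Minimal Mustache-style {{ name }} substitution, for illustration only.
    # Real Mustache (and MediaWiki's TemplateParser) also handles sections,
    # partials, and HTML escaping.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context.get(m.group(1), "")),
        template,
    )

print(render("Hello, {{ name }}!", {"name": "wikitech-l"}))
```

The template stays pure markup while all data comes from the context
dictionary, which is the separation of concerns the points below are after.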
There are three main impetuses for this new feature:
1. Improving the sanity and readability of MediaWiki code. Ideally, our PHP
code should have little or no HTML in it. We should strive to keep our PHP
interfaces high-level and with clear separation of concerns. We are a long
way from conforming to anything like an MVC pattern, but this brings us one
step closer to being able to achieve that. The work on OOjs and related
interfaces is another important component of this.
2. Standardizing our templating implementations. There are currently six
different HTML templating implementations in various MediaWiki
extensions. Hopefully, we can now reduce that number.
3. Moving MobileFrontend into core. We are also a long way away from
achieving this goal, but now one step closer. MobileFrontend relies heavily
on HTML templating, so having this feature in core is a pre-requisite to
moving more MobileFrontend features over.
I know there is still disagreement about the specific implementation
details of this feature (such as the choice of Mustache), but this is just
the first iteration of this feature and I hope we can work together to
revise and improve it further.
Apparently the idea to rename the Priority field value from "Lowest" in
Bugzilla to "Needs Volunteer" in Phabricator created more confusion than
expected. I am sorry for that; it wasn't intended.
To fix that, there are two questions that welcome input:
1) Our Phabricator currently offers six levels of prioritization. See
Do project maintainers / devs really feel a need for planning to
differentiate between "low" priority and one level below that?
Or could we reduce our six levels to five?
2) If there is a need for a level below "low": Let's rename "Needs
Volunteer" back to "Lowest" (Looking at the proposed names in T78617,
"Lowest" seems to be the least confusing term / smallest evil.)
If you feel like helping make a decision by adding some *additional*
arguments based on your experience, please raise your voice in T78617
after reading the existing comments and arguments in that task.
Thank you for your help!
Andre Klapper | Wikimedia Bugwrangler
It is with a heavy heart that I must share the news of an upcoming Labs
outage.
The labs NFS store (which you probably know as /data/project) is filling
up rapidly and we need to add more drives. By weird coincidence the
actual physical space for that server in the datacenter is ALSO filling
up, so Chris Johnson has graciously agreed to spend his day re-shuffling
servers in order to make space for the new diskshelf. This involves
lots of unplugging and replugging and amounts to the fact that the NFS
server will need to be turned off for several hours.
During this window Chris will take care of another long-deferred
maintenance task -- he's putting more RAM into the labs puppet master.
What will break:
- Shared storage for all labs and tools instances. That includes
volumes like /data/project, /public/dumps, /data/scratch, /home
- Logins to all instances running Ubuntu Precise. (Trusty hosts will
/probably/ still support logins.)
- Login to wikitech and manipulation of instances.
What won't break:
- Labs instances will continue to run
- Tasks running on instances will continue to run; those that don't rely
on shared storage should be fine.
- Web proxies should keep working, if the services they support aren't
relying on shared storage.
What will get better:
- More storage space!
- Fewer problems with dumps filling up NFS (which is basically the same
as 'more storage space').
- More reliable puppet runs and fewer outages with miscellaneous
OpenStack services (which also run on virt1000)
I apologize in advance for this downtime. Don't hesitate to contact me
or Coren either here or on IRC with advice about how to harden your tool
against this upcoming outage. We will also be available on IRC during
and after the outage to help revive things that are angry about the