*TL;DR*: Reminder to please bike-shed at
Just when you thought it was safe, there's the next stage in our migration
of developer tools over to Phabricator: moving all our code into the
Diffusion module <https://www.mediawiki.org/wiki/Phabricator/Diffusion>.
This is *not* about doing code review in Phabricator; that task will be
left for another time. However, it does establish some immutable URLs and
so there's a lot of scope for discussion and verification about how exactly
we want to do things.
We currently use gitblit to provide our service at git.wikimedia.org; it's
a downstream, read-only, HTTPS service for browsing all our git repos.
We'd like to replace this service with the single platform of Phabricator
because (a) we need to make these decisions anyway for the code review
workstream, (b) fewer tools make for a simpler learning environment for
newbies, and (c) more integrated tools make for fewer hacky bots and
workarounds for everyone.
To explore what Diffusion looks like, compare:
- GitHub: https://github.com/wikimedia/visualeditor
- Diffusion: https://phabricator.wikimedia.org/diffusion/VE/
We need to agree on how we are going to name our repos and, much more
importantly because it can't change, what their "callsign" is. These will
be at the heart of e-mails, IRC notifications and git logs for a long time,
so it's important to get this right rather than regret it after the fact.
A handful of repos are so important and high-profile that we can use an
acronym without too much worry, like "MW" for MediaWiki or "VE" for
VisualEditor. For the rest, we need to make sure we've got a good enough
name that won't cause inconveniences or confusion, and doesn't repeat the
mistakes we've identified over time. We've learnt since the SVN to git
migration a few years ago that calling your repository "/core" is a bad
plan, for instance.
The proposed naming conventions, in particular the plan for what we'll call
the existing repos when we duplicate them, would benefit from more people
looking at them, if only to say that you don't care. :-) We've had these
under discussion since October, so you may well have seen them before.
We plan to declare the current list as "agreed" in a week's time (that is,
by the end of 1 December) unless there's significant on-going discussion.
James D. Forrester
Product Manager, Editing
Wikimedia Foundation, Inc.
jforrester@wikimedia.org | @jdforrester
TL;DR: jQuery will soon be upgraded from v1.8.3 to v1.11.x (the latest). This
major release removes deprecated functionality. Please migrate away from this
deprecated functionality as soon as possible.
It's been a long time coming but we're now finally upgrading the jQuery package
that ships with MediaWiki.
We used to regularly upgrade jQuery in the past, but got stuck at v1.8 a couple
of years ago due to lack of time and concern about disruption. Because of this,
many developers have needed to work around bugs that were already fixed in later
versions of jQuery. Thankfully, jQuery v1.9 (and its v2 counterpart) is the
first release in jQuery's history to need an upgrade guide: it's a major
release that cleans up deprecated and dubious functionality.
Migration of existing code in extensions, gadgets, and user & site scripts
should be trivial (swapping one method for another, maybe with a slight change
to the parameters passed). This is all documented in the upgrade guide.
The upgrade guide may look scary (it lists many of your favourite methods),
but most of the changes only address edge cases.
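As a sketch of what such a swap looks like (the selectors and handler names
below are invented for illustration; the method changes themselves are the
ones documented in the jQuery 1.9 upgrade guide):

```javascript
// Event delegation: .live() was removed in jQuery 1.9. The delegated
// form of .on() (available since jQuery 1.7) is a drop-in replacement:
//
//   Before:  $('.comment-link').live('click', onCommentClick);
//   After:   $(document).on('click', '.comment-link', onCommentClick);

// $.parseJSON was aligned with native JSON.parse in 1.9, so empty or
// null input now throws instead of quietly returning null. Code that
// relied on the old lenient behaviour can keep it with a small guard:
function parseJsonLoose(text) {
    if (text === null || text === undefined || text === '') {
        return null; // jQuery <= 1.8 behaviour
    }
    return JSON.parse(text);
}
```

The same pattern (a tiny local shim while call sites are migrated one by
one) applies to most entries in the guide.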
== Call to action ==
This is a call for you to:
1) Get familiar with http://jquery.com/upgrade-guide/1.9/.
2) Start migrating your code.
jQuery v1.9 is about removing deprecated functionality. The new functionality is
already present in jQuery 1.8 or, in some cases, earlier.
3) Look out for deprecation warnings.
Once instrumentation has begun, using "?debug=true" will log jQuery deprecation
warnings to the console. Look for ones marked "JQMIGRATE". You might also
find deprecation notices from mediawiki.js; for more about those, see the mail
from last October.
== Plan ==
1) Instrumentation and logging
The first phase is to instrument jQuery to work out all the areas which will
need work. I have started work on loading jQuery Migrate alongside the current
version of jQuery. I expect that to land in master this week, and roll out on
Wikimedia wikis the week after. This will enable you to detect usage of most
deprecated functionality through your browser console. Don't forget the upgrade
guide, as Migrate cannot detect everything.
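If you want to gather these warnings programmatically while testing a gadget,
one possible helper (invented for this mail, not part of Migrate itself) is to
wrap console.warn, since Migrate prefixes its messages with "JQMIGRATE":

```javascript
// Collect jQuery Migrate deprecation warnings during a test run.
// Migrate logs via console.warn with a "JQMIGRATE:" prefix, so we
// wrap console.warn and keep matching messages for later review.
function collectMigrateWarnings(console) {
    const collected = [];
    const originalWarn = console.warn.bind(console);
    console.warn = function (msg) {
        if (typeof msg === 'string' && msg.indexOf('JQMIGRATE') !== -1) {
            collected.push(msg);
        }
        originalWarn.apply(null, arguments); // still log normally
    };
    return collected;
}
```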
2) Upgrade and Migrate
After this, the actual upgrade will take place, whilst Migrate stays. This
should not break anything since Migrate covers almost all functionality that
will be removed. The instrumentation and logging will remain during this phase;
the only effective changes at this point are whatever jQuery didn't think was
worth covering in Migrate, plus the many small bug fixes.
3) Finalise upgrade
Finally, we will remove the migration plugin (both the Migrate compatibility
layer and its instrumentation). This will bring us to a clean version of latest
jQuery v1.x without compatibility hacks.
A rough timeline:
* 12 May 2014 (1.24wmf4): Phase 1 – Instrumentation and logging starts. This
will run for 4 weeks (until June 9).
* 19 May 2014 (1.24wmf5): Phase 2 – Upgrade and Migrate. This will run for 3
weeks (up to June 9). The instrumentation continues during this period.
* 9 June 2014 (1.24wmf8): Phase 3 – Finalise upgrade.
== FAQ ==
Q: The upgrade guide is for jQuery v1.9, what about jQuery v1.10 and v1.11?
A: Those are regular updates that only fix bugs and/or introduce non-breaking
enhancements. As with jQuery v1.7 and v1.8, we can upgrade to those without any
hassle. We'll be fast-forwarding straight from v1.8 to v1.11.
Q: What about the jQuery Migrate plugin?
A: jQuery developed a plugin that adds back some of the removed features (not
all; consult the upgrade guide for details). It also logs usage of these to
the console.
Q: When will the upgrade happen?
A: In the next few weeks, once we are happy that the impact is reasonably low.
An update will be sent to wikitech-l just before this is done as a final reminder.
This will be well before the MediaWiki 1.24 branch point for extension authors
looking to maintain compatibility.
Q: When are we moving to jQuery v2.x?
A: We are not currently planning to do this. Despite the name, jQuery v2.x
doesn't contain any new features compared to jQuery v1. The main difference
is the reduced support for older browsers and environments; most noticeably,
jQuery 2.x drops support for Internet Explorer 8 and below, which MediaWiki
still supports for now, so that is outside the scope of this work.
Both v1 and v2 continue to enjoy simultaneous releases for bug fixes and new
features. For example, jQuery released v1.11 and v2.1 together.
As noted in the server admin log, Phabricator is currently down due to
a network outage impacting one of our racks in the Ashburn data center.
We're investigating and will aim to restore service ASAP.
VP of Product & Strategy, Wikimedia Foundation
I would like to announce the release of MediaWiki Language Extension
Bundle 2014.11. This bundle is compatible with MediaWiki 1.23.7 and
MediaWiki 1.24.0 releases. It should also work with 1.22.14 but we no
longer test against it.
* Download: https://translatewiki.net/mleb/MediaWikiLanguageExtensionBundle-2014.11.tar…
* sha256sum: 39b397a05561f743962cfb499f59a58219338607ea13ebfcc7a8806105e7dedc
* Installation instructions are at: https://www.mediawiki.org/wiki/MLEB
* Announcements of new releases will be posted to a mailing list:
* Report bugs to: https://phabricator.wikimedia.org/
* Talk with us at: #mediawiki-i18n @ Freenode
Release notes for each extension are below.
-- Kartik Mistry
== Babel, CleanChanges and LocalisationUpdate ==
* Only localisation updates.
== CLDR ==
* Fixed some time displays if CLDR had only partial localisation of time units.
== Translate ==
* Translate WebAPI documentation is now localized. Only works in
MediaWiki 1.24 and newer.
* Fixed a bug which prevented bootstrapping of shared TTMServer
database with the ElasticSearch backend.
* If you are using the '''Solr backend''' for the translation memory
or the translation search, please let us know. If there are no users
for the Solr backend, we will deprecate and later remove it in favor
of the better maintained ElasticSearch backend.
== UniversalLanguageSelector ==
* ULS WebAPI documentation is now localized. Only works in MediaWiki
1.24 and newer.
* T67516: Removed font-size for ULS language selection panel buttons,
which caused tiny font sizes on the Monobook skin.
* Small compatibility fix when both ULS and VisualEditor are in use.
* About 20 new languages are now supported in the language selector,
and a couple of language names were changed.
* Added support for WOFF2 webfont format. Note that there are no WOFF2
webfonts in the font repository yet due to pending issues in WOFF2
Kartik Mistry/કાર્તિક મિસ્ત્રી | IRC: kart_
0) Use cases
Use cases for this project are simple (motivation stated in parentheses):
a) Originally I just wanted a mirror of the `enwiki' on my desktop
that I could browse when my Internet service provider went down;
b) Later I wanted a mirror of the `simplewiki' and `simplewiktionary'
on my laptop so I could move about (mobility);
c) Then came unhappy disclosures about domestic surveillance which
make it prudent to browse offline (privacy);
d) Still later I wanted mirrors of other projects, such as
`enwikisource' and `enwikiversity', because I like reading books offline,
usually keeping them open for days (availability); and
e) Now I want to generate ZIM files and all other dump files from
these mirrors to create a `WMF in a microcosm.' This is for use by the
offline community, and for archiving, experimenting, etc. For example:
o You can have a desktop where the `enwikinews' mirror updates
daily and generates a ZIM file daily. This ZIM file can be synced to your
handheld device at your convenience. (availability, mobility);
o You can periodically generate and archive an image thumbs
tarball (durability); and
o You can dump your mirror, conduct experiments that may trash
your database, and then rebuild your mirror (durability).
The recent release WP-MIRROR 0.7.4 delivers all but use case (e).
1) Road map
Dump file generating capability is planned for the next version series,
WP-MIRROR 0.8.x, which will be packaged for Debian 8 (jessie) and Ubuntu
On Sun, Nov 30, 2014 at 12:02 AM, Asaf Bartov <asaf.bartov@gmail.com> wrote:
> Thanks! (and \o/ LISP!)
> Could you tell us a little about the use case that drove you to develop
>> On Sat, Nov 29, 2014 at 6:42 PM, wp mirror <wpmirrordev@gmail.com> wrote:
>> Dear list members,
>> WP-MIRROR 0.7.4 is now available.
>> 0) Features
>> Configuration of MediaWiki has been greatly improved.
>> Incremental XML data dump files now used.
>> SSL enabled so that wikis are now protocol independent (may access via
>> either protocol).
>> URL fallback list increases reliability of downloading XML data dump files.
>> Wiki `talk' pages now installed.
>> 1) Updates
>> Dependencies have been brought up to date:
>> MediaWiki updated to 1.24.22.
>> MediaWiki extensions updated to 1.24.22.
>> XML Data Dump Schema updated to 0.10.
>> 2) MediaWiki extensions
>> Many extensions have been added for use with various WMF projects:
>> Wikinews: DynamicPageList;
>> Wikipedia: CommonsMetadata, JsonConfig, Mantle, MultimediaViewer,
>> Wikisource: DoubleWiki, Proofreadpage, RandomRootPage;
>> Wikiversity: Quiz; and
>> Wikivoyage: CustomData, GeoCrumbs, MapSources.
>> 3) Home pages updated
>> 4) Thanks
>> I would especially like to thank the following contributors:
>> Luiz Augusto for submitting bug reports, and for requesting features of
>> importance to the wikisource and wikiversity projects.
>> Guy Catagnoli for reading the WP-MIRROR code and submitting many
>> comments; for submitting bug reports with log files containing valuable
>> debug info, and for feature requests.
>> Sincerely Yours,
>> Offline-l mailing list
> Asaf Bartov <asaf.bartov@gmail.com>
I'm Rexford, and I'm posting here for the first time. Please let me know if
this isn't the right place to suggest features; I was encouraged to request
features here, so please correct me if I'm wrong.
It's about edit conflicts on Wikipedia and other projects. One happens when I
open an article to edit, but before I can save, someone else opens the
article, edits, and saves. It happens to me and to many others out there.
Sometimes many minutes' worth of changes can be lost.
The feature request is this: when a person starts editing an article and
another person tries to edit that same article, the second person gets a
message on screen that the article is already engaged. This suggestion is
similar to how WordPress informs the second person who tries to edit a page
while someone else is already editing it.
It's likely one wouldn't want to edit a page knowing someone else is already
editing it. I think that's much better than allowing multiple edits on the
page but letting only one person's edit go in per save.
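For what it's worth, the WordPress-style behaviour described above can be
sketched as an advisory lock with a timeout. All names here are invented for
illustration (this is not how MediaWiki works today); the timeout matters so
a page is not left "engaged" forever if the first editor walks away:

```javascript
// Advisory edit locks: the first editor takes the lock; later editors
// are warned (but not hard-blocked). A TTL releases abandoned locks.
class EditLockTable {
    constructor(ttlMs = 120 * 1000) {
        this.ttlMs = ttlMs;
        this.locks = new Map(); // pageTitle -> { user, expiresAt }
    }

    // Returns null if the lock was acquired (or refreshed by the same
    // user); otherwise returns the name of the user holding the lock.
    tryLock(pageTitle, user, now = Date.now()) {
        const lock = this.locks.get(pageTitle);
        if (lock && lock.expiresAt > now && lock.user !== user) {
            return lock.user; // page is already "engaged"
        }
        this.locks.set(pageTitle, { user, expiresAt: now + this.ttlMs });
        return null;
    }
}
```

A client would refresh the lock periodically while the edit form is open, so
the TTL only expires when the editor really has gone away.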
rexford | google.com/+Nkansahrexford | sent from smartphone
I'd like to get some input on a tricky problem regarding caching in memcached
(or accelerator cache). For Wikidata, we often need to look up the label (name)
of an item in a given language - on wikidata itself, as well as on wikis that
use wikidata. So, if something somewhere references Q5, we need to somehow look
up the label "Human" in English, "Mensch" in German, etc.
The typical access pattern is to look up labels for a dozen or so items in a
handful of languages in order to generate a single response. In some situations
we can batch that into a single lookup, but at other times, we do not have
sufficient context, and it will be one label at a time. Also, some items (and
thus, their labels) are referenced a lot more than others.
Anyway, to get the labels, we can either fetch the full data item (several KB of
data) from the page content store (external store). This is what we currently
do, and it's quite bad. Full data items are cached in memcached and shared
between wikis - but it's still a lot of data to move, and may swamp memcached.
Alternatively, we can fetch the labels from the wb_terms table - we have a
mechanism for that, but no caching layer.
And now, the point of this email: how do we make a caching layer for lookups
to the wb_terms table?
Naively, we could just give each label (one per item and language) a cache key,
and put it into memcached. I'm not sure this would improve performance much, and
it would mean massive overhead to memcached; also, putting *all* labels into
memcached would likely swamp it (we have on the order of 100 million labels).
We could put all "terms" for a given entity under one cache key, shared between
wikis. But then we'd still be moving a lot of pointless data around. Or we could
group using some hashing mechanism. But then we would not be able to take
advantage of the fact that some items are used a lot more often than others.
I'd like a good mechanism to cache just the 1000 or so most used labels,
preferably locally in the accelerator cache. Would it be possible to make a
"compartment" with LRU semantics in our caching infrastructure? As far as I
know, all of a memcached server, or all of an APC instance, acts as a single LRU
In order to make it less likely for rarely used labels to hog the cache, I can
think of two strategies:
1) Use a low expiry time. If set to 1 hour, only stuff accessed every hour stays
in the cache.
2) Use randomized writing: put things into the cache only 1/3 of the time; this
makes it more likely for frequently used labels to get into the cache... I'm no
good at probabilities and statistics, but I'd love to discuss this option with
someone who can actually calculate how well this might work.
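As a concrete sketch of strategy (2), here is a toy in-process version (class
and method names are invented for this mail; a real implementation would sit
in front of memcached or APC rather than a plain Map). The statistics are
friendlier than they might look: with admission probability p, a label is
fetched on average 1/p times before it becomes cache-resident, so a label
requested n times suffers only about 1/p misses no matter how large n is,
while a label requested once usually never enters the cache at all:

```javascript
// Randomized cache admission: an entry is only written to the cache
// on roughly 1 write in 3, so frequently requested ("hot") labels are
// far more likely to be resident than rarely requested ones.
class LabelCache {
    constructor(fetchFromDb, admitProbability = 1 / 3, ttlMs = 3600 * 1000) {
        this.fetchFromDb = fetchFromDb;      // fallback lookup (e.g. wb_terms)
        this.admitProbability = admitProbability;
        this.ttlMs = ttlMs;                  // strategy (1): low expiry time
        this.entries = new Map();            // key -> { value, expiresAt }
    }

    get(itemId, lang, now = Date.now()) {
        const key = itemId + '/' + lang;
        const hit = this.entries.get(key);
        if (hit && hit.expiresAt > now) {
            return hit.value;
        }
        const value = this.fetchFromDb(itemId, lang);
        // Admit to the cache only some of the time, so hot labels
        // (fetched repeatedly) come to dominate the cached set.
        if (Math.random() < this.admitProbability) {
            this.entries.set(key, { value, expiresAt: now + this.ttlMs });
        }
        return value;
    }
}
```

Combining this with a short TTL gives both properties at once: rare labels
mostly stay out of the cache, and stale hot labels still expire on schedule.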
So, which strategy should we use? We have:
* Full item data from external store + memcached
* Individual simple database queries, no cache
* DB query + memcached, low duration, one key per label
* DB query + memcached, randomized, one key per label
* Group cache entries by item (similar to caching full entities)
Are there other options, or other aspects that should be considered? Which
strategy would you recommend?