Next week we're going to switch the back-end system that hosts thumbnail
images. We have been using ms5 (a Sun server running Linux) to serve all
thumbnails. We are switching to
Swift <http://wikitech.wikimedia.org/view/Swift>, a clustered object
store. Though we have done testing and expect no problems, I want to
publicize the change so that if issues do appear, they are quickly directed
to the right place.
Here's the schedule:
* Monday Feb 6th: move 0.4% of all thumbnail traffic from ms5 to Swift.
Only thumbnails with "/thumb/a/a2/" in the URL will be affected.
* Tuesday: move 12.5% of traffic - thumbnails with "/thumb/a/" or
"/thumb/b/" will be affected.
* Wednesday: move 50% of traffic - thumbnails with "/thumb/(a-f or 0 or
1)/" will be affected.
* Thursday: move 100% of traffic - all thumbnails will be served from Swift.
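For context, these prefixes come from the hashed layout MediaWiki uses
for uploaded files: the two path components after /thumb/ are the first
hex digit and the first two hex digits of the MD5 of the file name, so
each first-level prefix covers 1/16 of all files (hence 12.5% for two of
them and 50% for eight) and each second-level prefix covers 1/256
(~0.4%). A rough PHP illustration (not the actual MediaWiki internals;
the function name is made up):

    // Illustrative only: derive the /x/xy/ shard prefix for a file name.
    // MediaWiki hashes the name with spaces replaced by underscores.
    function thumbShardPrefix( $fileName ) {
        $hash = md5( str_replace( ' ', '_', $fileName ) );
        return substr( $hash, 0, 1 ) . '/' . substr( $hash, 0, 2 ) . '/';
    }
    // A file whose name hashes to "a2..." gets the prefix "a/a2/", i.e.
    // its thumbnails live under .../thumb/a/a2/<name>/<size>px-<name>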
Potential symptoms that might be related to this change on Monday:
* a thumbnail image with /a/a2/ simply fails to load
** try changing the number preceding 'px' to generate a different size
* you delete an image and the thumbnail is still available
** try purging the cache and see if that makes it go away
** try changing the number preceding 'px' and see what happens
* you move an image to a new name and it's still available at the old name,
when either the new or old name has /a/a2/ in the URL
** please provide full URLs for both the new and old names
If any of these things happen, or if something else odd happens with images
or thumbnails and you feel it might be related, please join #wikimedia-tech
in IRC and ping me (maplebed) or Aaron (AaronSchulz). If neither of us is
available or you would prefer to just leave us a message, add a note to
https://www.mediawiki.org/wiki/SwiftMedia/Issues. :) If things are
severely broken, you can ask someone in #wikimedia-tech to page me.
Thanks for your help!
 In case you don't know what I mean by thumbnail images, I'm talking
about all URLs that start with http://upload.wikimedia.org/ and have
/thumb/ in the path. When an image is uploaded to a wiki (or Commons),
MediaWiki automatically generates scaled versions of the image. These are
what I call 'thumbnails'. Nearly every time an image is used in a wiki
page, it is actually a thumbnail being used. Try going to any wiki page
with an image, right-click on the image, and choose 'view image'. You'll
get something like this as the URL (this was taken from enwiki's featured
article today):
 Find the URL for the main image file (rearrange the URL to find the
original image - /wikipedia/commons -> commons.wikipedia.org) and add
"?action=purge" to the end of the URL.
2012/1/6 Platonides <platonides(a)gmail.com>:
> Integrating this into the ConfirmEdit extension shouldn't be hard. It's the
> extra features that make this tricky. This system is interesting for
> gathering translations, but doesn't work for verifying that the answer
> is right. How would you verify that?
> The approach that comes to my mind is to show the current captcha
> plus another, optional, captcha, with a note about how filling in that
> second captcha helps Wikisource, and that the answer will be logged with
> their username/IP.
ReCAPTCHA already works in a way similar to this. Two words are
presented, but only one is known and actually serves to filter
access. They then collect answers for both words, and if the test on
the first is passed (which indicates a human), the answer for the
second is recorded. When a certain number of people agree on the
transcription of a previously unknown word, that transcription is
taken as good and used in the future as a filter word.
We could say that we accept a transcription as "valid" after N people
agree on a given word, and put back on Wikisource only the validated
words (and use them as filters, too). This seems both a reliable
and easy-to-implement system to me.
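To make that concrete, here is a rough sketch of the tally logic (all
class and variable names are made up; this is not ConfirmEdit code):

    // Record an answer for an unknown word only when the user passed
    // the known control word; accept a transcription once N users agree.
    class CaptchaTally {
        private $tallies = array();   // word id => ( answer => count )
        private $validated = array(); // word id => accepted transcription
        private $threshold;

        public function __construct( $threshold = 3 ) {
            $this->threshold = $threshold;
        }

        public function record( $wordId, $answer, $passedKnownWord ) {
            if ( !$passedKnownWord || isset( $this->validated[$wordId] ) ) {
                return;
            }
            $answer = trim( mb_strtolower( $answer ) );
            if ( !isset( $this->tallies[$wordId][$answer] ) ) {
                $this->tallies[$wordId][$answer] = 0;
            }
            $this->tallies[$wordId][$answer]++;
            if ( $this->tallies[$wordId][$answer] >= $this->threshold ) {
                // Validated: put it back on Wikisource and reuse it
                // as a known filter word from now on.
                $this->validated[$wordId] = $answer;
            }
        }
    }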
Anyway, at the beginning we could use the system you describe, using
the current captcha and words from books.
I believe the trickiest part is creating a system to put results back
into Wikisource in a semi-automated way, but having "captcha reviewers"
We could also decorate our captcha with "this captcha helps
transcribing <BOOK TITLE> + link".
And this leads me to what I think is the real point: once we have a
basically working system we can think of whatever useful feature and
implement it; in principle we can have a modular system which can be
refined /ad libitum/.
Hi, I have installed the MobileFrontend extension in my pet project.
I only get to the mobile view manually, through the link at the
footer. Autodetection of mobile browsers doesn't seem to work, or I
have done / missed something in my installation. I have got feedback
from a bunch of users with a bunch of devices; nobody got the mobile
view by default, and all of them get it when visiting
Something that looks different in my installation compared to
Wikipedia's is the lack of a redirect to a ".m." URL. I'm actually a
bit confused about this. It is mentioned briefly at
http://www.mediawiki.org/wiki/Extension:MobileFrontend but it is not
clear to me whether this is something that is supposed to work
automagically when installing the extension, or whether there is
something I need to do with the WURFL database, LocalSettings.php, the
web server...
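For reference, this is the kind of LocalSettings.php setup I would
expect to need; the $wgMFAutodetectMobileView switch is a guess on my
part and may not exist in every version of the extension:

    require_once "$IP/extensions/MobileFrontend/MobileFrontend.php";
    // Guess: let the extension sniff the User-Agent and serve the
    // mobile view directly, without a separate ".m." domain:
    $wgMFAutodetectMobileView = true;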
Other than this the extension works great presenting the mobile view
of pages. Thank you very much for your work!
I'm using #switch more and more in templates; it's surprising how many
issues it can solve, and how large the arrays it can manage are. My questions:
1. Is there a reasonable upper limit for the number of #switch options?
2. Is there a difference in server load between a long flat list of #switch
options and a "tree" of options, i.e. something like a nested #switch? (A
rough model of what I mean follows below.)
3. Is there any drawback to heavy use of #switch-based templates?
I presume that my questions are far from original; please give me a link to
previous talks or help/doc pages if any.
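To clarify question 2, my mental model is that a flat #switch is a
linear scan over its cases, while a nested #switch works like a
two-level lookup. A rough PHP model (not actual parser code; function
names are made up):

    // Flat #switch: compare against each case in order.
    function flatLookup( array $cases, $key ) {
        foreach ( $cases as $case => $value ) {
            if ( (string)$case === (string)$key ) {
                return $value; // up to count($cases) comparisons
            }
        }
        return null;
    }

    // Nested #switch: branch on a prefix first, then scan a small bucket.
    function nestedLookup( array $tree, $key ) {
        $branch = substr( (string)$key, 0, 1 );
        if ( !isset( $tree[$branch] ) ) {
            return null;
        }
        return flatLookup( $tree[$branch], $key );
    }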
It’s with great pleasure that I announce the promotion of Howie Fung
to the position of Director of Product Development at the Wikimedia
Foundation, effective February 1.
Howie joined us in October 2009 as a consultant for usability
projects, and became a permanent staff member in May 2010. Prior to
Wikimedia, Howie was Senior Product Manager at Rhapsody, where he
helped grow the music site's traffic five-fold within the first
year on the basis of extensive customer research, including web
analytics, focus groups, user testing, and customer surveys. Prior to
that, Howie was Product Manager at eBay, prioritizing features based
on business objectives, usability studies, and economic impact. He
has an MBA from The Anderson School at UCLA and a Bachelor of Science
in Chemical Engineering from Stanford University.
I’m really proud of all the work Howie’s done for Wikimedia since he
joined, calmly and rationally introducing method where there was
madness, always challenging us to increase our understanding of our
communities and to use our limited resources for the projects that are
likely to have the highest impact. In addition to the work he’s done
on the Usability Initiative, he’s worked on a variety of projects,
including the Editor Trends Study, the Former Contributors Survey, the
Article Feedback Tool, Moodbar, and the Feedback Dashboard. We’re
very lucky to have him in this new role.
This announcement also means that we’re formally establishing a
Product Development department at Wikimedia, which is part of the
larger Engineering department. Product, in our context, means really
digging into what we want our projects to look like in a year, in two
years, in three years, and working together with software developers
and architects, as well as across Wikimedia, to make that vision a
reality. Our work will be organized along the following product
areas: Editor Engagement, Mobile, Analytics, and
The following staff and contractors will be part of the Product group,
going forward: Phil Chang, Brandon Harris, Fabrice Florin, Diederik
van Liere, Siebrand Mazeland, Dario Taraborelli, Oliver Keyes, and the
new Interaction Designer, when hired.
The Mobile team, which works on both mobile apps (such as the
Wikipedia Android app) and the mobile web experience, is a good
example of how this works in practice. It has Phil as a product owner
(reporting to Howie), Tomasz as a scrum master and engineering
director, and Patrick, Arthur, Max, and Yuvi as engineers (reporting
to Tomasz). The team itself is the most important unit here: it drives
the success of any given initiative. The connection into the Product
Development group helps to ensure we follow a consistent strategy and
coordinate efforts across the board. 
This is an important step in our organizational development and will
help us parallelize and coordinate product and engineering work more
effectively.
Please join me in congratulating Howie, and WMF. :-)
 In case you’d like to learn more about agile product development
and software engineering, this presentation is a good intro to scrum,
a specific methodology we've started to use on a couple of teams:
VP of Engineering and Product Development, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
Hi, as some of you may have already noticed, I've resurrected Trevor's
old creation, ArticleEmblems, which allows adding icons to the right
of the area below the article title. All the architectural problems that
caused its demise before have been resolved, so I'd like to invite more
people to participate. First of all, the way these templates are
output is flaky - but I think it's not a problem to fix.
Max Semenik ([[User:MaxSem]])
I’d like to welcome Chris McMahon to the Platform Engineering team as
our new QA Lead. Chris has a long history working in software
testing, coming to us most recently from Sentry Data Systems where he
was responsible for test automation. One particularly relevant bit of
experience from Chris’s past was his work at Socialtext on their wiki
product, expanding the Selenium-based automated test suite from 400
individual assertions to 10,000 over the span of two years.
Chris is also active in the testing community outside Wikimedia. He
leads the Writing About Testing group and annual conference, which he
founded in 2009. He also helped design and build the SeleNesse testing
framework, a wiki-based tool for building acceptance tests that get
executed through Selenium.
In his role as QA Lead here, Chris will be responsible for figuring
out what sorts of testing processes we can bring to MediaWiki
development. His first task will be to join in on the tail end of the
1.19 deployment process, helping us with whatever last-minute testing
makes sense at this stage, but then he'll have the much larger
task of looking at our release and deployment process generally and
figuring out which parts would most benefit from the injection of
testing rigor. He'll also be responsible for establishing a more
coordinated volunteer effort around testing.
Chris will be working remotely from his home in Durango, Colorado.
Is it supposed to be possible to transmit arbitrary nested data
structures in the API? Or do the format serializers make some
assumptions that limit what we can do?
I added a change yesterday which returns a complex parse tree via the
API. It works fine in JSON format, but in XML (depending on how I do it)
I either get empty elements, or I get Sanitizer warnings. I was using
the addPageSubItem() method, inherited from ApiQueryBase.
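For illustration, a simplified version of what I'm doing (the data
shape is made up; the point is that, as far as I can tell, the XML
formatter needs an explicit element name for every numerically indexed
array, while JSON does not):

    // Inside an ApiQueryBase subclass; $pageId comes from the query.
    $tree = array(
        'type' => 'template',
        'children' => array(
            array( 'type' => 'text', '*' => 'foo' ),
            array( 'type' => 'arg',  '*' => 'bar' ),
        ),
    );
    // Without this, the XML formatter has no tag name for the numeric
    // 'children' array and produces empty elements or warnings:
    $this->getResult()->setIndexedTagName( $tree['children'], 'node' );
    $this->addPageSubItem( $pageId, $tree, 'parsetree' );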
Some more details are in this bug:
Neil Kandalgaonkar <neilk(a)wikimedia.org>