Annoyed by the difficulties of tracking events in the Wikimedia tech
community? Or by the difficulties of announcing events in an effective way?
Check this out:
Consolidate the many tech events calendars in Phabricator's calendar
The hypothesis is that it is worth improving the current situation with
calendars in the Wikimedia tech community, and that Phabricator Calendar is
the best starting point. If we get a system that works for Wikimedia Tech,
I believe we can get a system that works for the rest of Wikimedia,
probably with some extra steps.
The Technical Collaboration team has some budget that we could use to fund
the Phabricator maintainers to prioritize some improvements in their
Calendar. If you think this is a bad idea and/or you have a better one,
please discuss in the task (preferred) or here. If you think this is a good
idea, your reassuring feedback is welcome too. ;)
Engineering Community Manager @ Wikimedia Foundation
We have more and more MediaWiki PHPUnit tests, which is great, but the
test runner is crippled by a lot of performance issues that make it
rather slow. Anyone who has run the 10k+ tests knows it is no fun.
What a surprise when this morning I noticed Ori Livneh (WMF Performance
team) sent a series of patches that should definitely speed up the test
run, ranging from removing a sleep() to implementing base32 in plain PHP.
The speed-up will benefit everyone and will get test reports back faster
when patchsets are proposed in Gerrit. Ori kindly grouped them under the
topic 'unit-tests-perf'. Please take some time to review them:
Antoine "hashar" Musso
Dear Wikimedia technical community,
If you have ever thought of organizing a small event, printing some
stickers, or doing any other activity for the good of Wikimedia that costs
money, Rapid Grants might be the simple solution you were looking for.
Details below.
---------- Forwarded message ----------
From: Alex Wang <awang(a)wikimedia.org>
Date: Wed, May 18, 2016 at 5:11 AM
Subject: [Wikimedia-l] Announcing Rapid Grants
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
We are excited to announce the launch of a new Wikimedia Foundation grants
program, Rapid Grants!
Rapid grants fund Wikimedia community members -- individuals, groups, or
organizations contributing to Wikimedia projects -- to organize projects
throughout the year for up to USD 2,000. Projects can include experiments
or standard needs that don't need broad review to get started. Applications
are reviewed weekly by WMF staff.
Read more about the new program and apply here:
Questions? Email rapidgrants(a)wikimedia.org
For more information about next steps and important dates for the grants
program redesign, please visit:
Wikimedia Foundation <http://wikimediafoundation.org/wiki/Home>
Wikimedia-l mailing list, guidelines at:
New messages to: Wikimedia-l(a)lists.wikimedia.org
Engineering Community Manager @ Wikimedia Foundation
I'm a student from Chennai, India, and my project is going to be related to
performing image processing on images on Wikimedia Commons to automate
categorization. DrTrigon made the script catimages.py a few years ago
in the old pywikipedia-bot framework. I'll be working
towards updating the script to the pywikibot-core framework, updating its
dependencies, and using newer techniques where possible.
catimages.py is a script that analyzes an image using various computer
vision algorithms and assigns categories to the image on Commons. For
example, it uses algorithms that detect faces, barcodes, etc. to
categorize images into Category:Unidentified People,
Category:Barcode, and so on.
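To make the idea concrete, here is a hypothetical sketch of the last step, mapping detector hits to category names. The detector names and category table here are illustrative, not the script's actual ones:

```python
# Hypothetical sketch: turn a list of detector hits ("face", "barcode",
# ...) into Commons category suggestions. The real catimages.py drives
# actual computer-vision detectors; this only shows the mapping step.

DETECTOR_CATEGORIES = {
    "face": "Category:Unidentified People",
    "barcode": "Category:Barcode",
}

def categories_for(detections):
    """Return suggested categories, preserving order, no duplicates."""
    suggested = []
    for name in detections:
        category = DETECTOR_CATEGORIES.get(name)
        if category and category not in suggested:
            suggested.append(category)
    return suggested
```

The real script will of course have many more detectors and smarter heuristics; the point is that each detector result feeds a simple lookup like this one.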
If you have any suggestions or categorizations you think might be useful,
drop in at #gsoc-catimages on freenode or on my talk page. You can
find out more about me and about the project here:
 - https://commons.wikimedia.org/wiki/User_talk:AbdealiJK
 - https://meta.wikimedia.org/wiki/User:AbdealiJK
 - https://phabricator.wikimedia.org/T129611
For the past few weeks I've been working on rewriting Linker::link()
to be non-static, use LinkTarget/TitleValue and some of the other fancy
new services stuff. Yay!
For the most part, you'd use it in a similar way. Old:
Linker::link( $title, $html, $attribs, $query );
New:
$linkRenderer = MediaWikiServices::getInstance()->getLinkRenderer();
$linkRenderer->makeLink( $title, $html, $attribs, $query );
And there are makeKnownLink() and makeBrokenLink() entry points as well.
Unlike Linker::link(), there is no $options parameter to pass in every
time a link needs to be made. Those options are set on the
HtmlPageLinkRenderer instance, and then applied to all links made using
it. MediaWikiServices has an instance using the default settings, but
other classes like Parser will have their own that should be used.
I'm also deprecating the two hooks called by Linker::link(), LinkBegin
and LinkEnd. They are being replaced by the mostly-equivalent
HtmlPageLinkRendererBegin and HtmlPageLinkRendererEnd hooks. More
details are in the commit message, and there is an example conversion
available as well.
The commit is still a WIP because I haven't gotten around to writing
specific tests for it (it passes all the pre-existing Linker and parser
tests, though!); I will be doing that in the next few days.
Regardless, reviews / comments / feedback are appreciated!
Adam noticed that I broke the installer when introducing MediaWikiServices
(see T135169). In particular, localization and CSS stopped working. Here's
the fix:
Senior Software Developer
Gesellschaft zur Förderung Freien Wissens e.V.
I am Sriharsh (I go by the nick darthbhyrava on IRC and on
Phabricator), and this is a rather late introduction of myself. :P
I am a second-year undergrad pursuing a dual degree (B.Tech in Computer
Science and MS in Computational Linguistics) at IIIT-Hyderabad. I
was fortunate enough to be selected as a Google Summer of Code intern at
Wikimedia this year, and I will be working on implementing pywikibot
support for the Thanks extension. :)
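Under the hood, a thank boils down to a single MediaWiki API call; here is a minimal sketch of the request pywikibot will ultimately have to send. The helper name is mine, while the parameter names are those of the Thanks extension's action=thank module:

```python
# Minimal sketch (helper name is hypothetical): the Thanks extension
# exposes action=thank, which takes a revision id, a CSRF token, and
# a free-form "source" tag saying where the thank was triggered from.

def build_thank_request(rev_id, csrf_token, source="diff"):
    """Build the POST parameters for an action=thank API call."""
    return {
        "action": "thank",
        "rev": str(rev_id),
        "source": source,
        "token": csrf_token,
        "format": "json",
    }
```

The actual project work is wiring this into pywikibot's existing API and login machinery rather than constructing raw dicts like this.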
I would like to thank my mentor jayvdb for helping me extensively since
February, and for being one of the primary reasons for my getting selected.
I would also like to thank polybuildr, my friend from college, who introduced me
to the world of Wikimedia, fellow intern AbdealiJK for his help, and all
the other people who replied to my doubts on IRC or reviewed my patches or
commented on my tasks. Thank you all, for giving me an opportunity to work
on Thanks! :)
We are well into the Community Bonding period now, and I have enjoyed it
so far. I have realized the demands of the project, and look forward to the
challenge of implementing Pywikibot-Thanks over the next two months. This
being my first GSoC experience, I look forward to learning a lot over the
summer and beyond!
Thank you for your time.
P.S.: You can take a look at my proposal and progress at
https://phabricator.wikimedia.org/T130585, or read my blog about my wonderful
Wikimedia experience so far at http://bhyrava.me/code . Thank you once
For the last decade we've supported uploading SVG vector images to
MediaWiki, but we serve them to browsers as rasterized PNGs. Display
resolutions keep going up, but so does concern about
low-bandwidth mobile users.
This means we'd like sharper icons and diagrams on high-density phone
displays, but we are leery of adding extra srcset entries with 3x or 4x
size PNGs, which could become very large. (In fact, MobileFrontend currently
strips even the 1.5x and 2x renderings we have now, making diagrams very
blurry on many mobile devices. See https://phabricator.wikimedia.org/T133496 -
a fix is in the works.)
Here's the base bug for SVG client side rendering:
I've turned it into an "epic" story tracking task and hung some blocking
tasks off it; see those for more details.
TL;DR stop reading here. ;)
One of the basic problems in the past was reliably showing SVGs natively
in browsers without breaking the HTML caching layer. This is neatly resolved
for current browsers by using the "srcset" attribute -- the same one we use
to specify higher-resolution rasterizations. If instead of PNGs at 1.5x and
2x density we specify an SVG at 1x, the SVG will be loaded instead of the
PNG.
Since all srcset-supporting browsers allow SVG in <img> this should "just
work", and will be more compatible than using the experimental <picture>
element or the classic <object> which deals with events differently. Older
browsers will still see the PNG, and we can tweak the jquery.hidpi srcset
polyfill to test for SVG support to avoid breaking on some older browsers.
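In markup terms, the proposal amounts to changing what srcset points at (file names here are illustrative):

```html
<!-- Today: higher-density PNG renderings alongside the 1x PNG -->
<img src="Diagram.svg.png" srcset="Diagram.svg.1.5x.png 1.5x, Diagram.svg.2x.png 2x">

<!-- Proposed: list the original SVG at 1x; every srcset-capable browser
     can render SVG in <img>, and older browsers fall back to the PNG -->
<img src="Diagram.svg.png" srcset="Diagram.svg 1x">
```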
This should let us start testing client-side SVG via a beta feature (with
the parser cache split on the user preference), at which point we can gather
more real-world feedback on performance and compatibility issues.
Rendering consistency across browser engines is a concern. Supposedly
modern browsers are more consistent than librsvg but we haven't done a
compatibility survey to confirm this or identify problematic constructs.
This is probably worth doing.
Performance is a big question. While clean simple SVGs are often nice and
small and efficient, it's also easy to make a HUGEly detailed SVG that is
much larger than the rasterized PNGs. Or a fairly simple small file may
still render slowly due to use of filters.
So we probably want to provide good tools to help our editors and image
authors optimize their files: show the renderings and the bandwidth balance
versus rasterization; maybe provide an in-wiki implementation of svgo or
other lossy optimizer tools; warn about things that are large or render
slowly. Maybe provide a switch to always run particular files through
rasterization.
And we'll almost certainly want to strip comments and whitespace to save
bandwidth on page views, while retaining them all in the source file for
download and re-editing.
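As a rough sketch of that split (a real minifier like svgo does much more; this only shows the shape of the page-view transformation, and the function name is mine):

```python
# Rough sketch only: strip comments and cosmetic whitespace from an SVG
# for page views, while the original source file stays untouched.
# A real optimizer (svgo, scour, ...) goes much further.
import xml.etree.ElementTree as ET

# Keep the default SVG namespace unprefixed when re-serializing.
ET.register_namespace("", "http://www.w3.org/2000/svg")

def minify_svg(svg_text):
    # ElementTree's default parser already discards XML comments.
    root = ET.fromstring(svg_text)
    for el in root.iter():
        # Drop whitespace-only text between tags; it is purely cosmetic.
        if el.text and not el.text.strip():
            el.text = None
        if el.tail and not el.tail.strip():
            el.tail = None
    return ET.tostring(root, encoding="unicode")
```

On a typical hand-edited diagram this removes the indentation and editor comments while leaving the geometry untouched.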
Feature parity also needs more work. Localized text in SVGs is supported
by our server-side rendering, but this won't be reliable in the client,
which means we'll want to perform a server-side transformation that creates
per-language "thumbnail" SVGs. Fonts for internationalized text are a big
deal, and may require similar transformations if we want to serve them...
which may mean additional complications and bandwidth usage.
And then there are long-term goals of taking more advantage of SVG's dynamic
nature -- making things animated or interactive. That's a much bigger
question, and has implementation and security issues!