A quick update from me about the work of the Asynchronous Content Fragments
working group. We're meeting each week to talk about adding async content
support to MediaWiki. This week we discussed possible upper-level use cases,
with initial thoughts documented on this page.
The overall goal <https://phabricator.wikimedia.org/T282585> is to explore,
decide on, and build a way to include asynchronously-available content
fragments in MediaWiki pages, to allow new forms of content (like
Wikifunctions) and a faster, less tightly-coupled design for MediaWiki
overall. The working group (Subbu and C. Scott from Content Transformation,
Tim from Platform, Moriel from Architecture, and me from Abstract
Wikipedia) exists to turn the Decision Statement Overview
agreed by the Technical Decision Forum into a set of options for
consideration (as will be finally agreed in a Decision Record).
*Work this week*
This week we discussed some use cases that I proposed. There was a lot of
discussion about the differing needs of readers, most API consumers, search
engines, *etc.* vs. what editors (and other logged-in users) will need to
be effective.
In particular, we considered the need for anti-abuse features,
Notifications, Recent Changes and Watchlist entries to all trigger
immediately (as is current behaviour) despite not having the full result
yet, and then needing the final renders to update wherever they're stored.
This will be complicated, and will vary by specific use case. In some
cases, the product need for immediacy will be very high and a second,
updated result is not wanted (e.g. talk page notification e-mails should be
sent immediately, not wait, and also not be duplicated later); in others,
the need is lower and things can wait a little (e.g. link notifications can
wait a few minutes).
We also talked about the need for default values, placeholders, timeouts,
and error handling, and whether those should be controlled by MediaWiki
centrally or whether each fragment provider could be called synchronously
to provide defaults/placeholders as needed.
Much more of this will be discussed next week.
Hope this is of interest. If you have thoughts or comments, please do let
us know on the discussion page
*James D. Forrester* (he/him <http://pronoun.is/he> or they/themself)
Wikimedia Foundation <https://wikimediafoundation.org/>
Last week, I spoke to a few of my Wikimedia Foundation colleagues about how
we deploy code, and I completely botched it.
At the end of the conversation, I was pretty sure I'd only succeeded in
making a complex process more opaque. I decided to write a blog post to
redeem myself: How We Deploy Code
My goal was to write a very high-level overview of the process we use to
deploy code to Wikimedia production.
Hopefully, this is helpful.
– Tyler Cipriani (he/him)
Engineering Manager, Release Engineering
It’s time for our third edition of the Coolest Tool Award!
Tools play an essential role at Wikimedia, and so do the many volunteer
developers who experiment with new ideas, develop & maintain local &
global solutions and enhance the experience for Wikimedia communities.
We’d like to invite you all to nominate your favorite & most used tools
and help us celebrate the people who create them!
As no one can possibly know all the cool tools out there, we’re looking
for some help and inspiration: please point us to the tools that you
think are great - for any reason you can think of!
Please go to https://meta.wikimedia.org/wiki/Coolest_Tool_Award
to recommend tools by October 27, 2021. You can nominate as many tools
as you want by filling out the form multiple times.
Thank you very much for your ideas & recommendation(s)!
The award is organized & selected by the Coolest Tool Academy 2021. We
plan to recognize the greatest tools in a variety of categories (for
examples, see last year’s categories). The award ceremony will take
place virtually again this year and we will provide more details soon
about the specific logistics and dates.
We will continue to spread the word over the next week, but if you get
the chance, please feel welcome to share this information with others.
Andre, for the Coolest Tool Academy 2021
Andre Klapper (he/him) | Bugwrangler / Developer Advocate
tl;dr: External shell-outs are now run via Shellbox. Any deployed code
needs to use Shellbox/BoxedCommand, and documentation is available.
To safely re-enable Score (LilyPond) on Wikimedia wikis, we developed
Shellbox, a way to run shell commands in a remote, isolated container.
This is (hopefully) a stronger level of isolation than we previously had
with firejail, since it's relying on Linux containers and Kubernetes to
do the isolation. At the same time, this helps us in moving towards
running MediaWiki on Kubernetes, as we don't want to include all these
external commands inside the MediaWiki container. For the most part, any
new shelling out to external commands needs to be done via Shellbox.
A lot of the design and rationale behind Shellbox is captured in the
In Wikimedia production, so far Score, Timeline, SyntaxHighlight and
Wikidata constraint regex checking are all using Shellbox. Details about
that and links to dashboards are available at
<https://wikitech.wikimedia.org/wiki/Shellbox>. The main things that are
left are media-handling code that extracts metadata: DjVu, PdfHandler
and PagedTiffHandler, which is tracked at
<https://phabricator.wikimedia.org/T289228>, and videoscaling.
Some work has to be done in MediaWiki to make code compatible with
Shellbox, specifically switching to "BoxedCommand", which now has its
own documentation page:
<https://www.mediawiki.org/wiki/Manual:BoxedCommand>. BoxedCommand works
transparently whether you have a separate Shellbox service set up or
not. This is the preferred way to write new shellouts going forward,
though Shell::command() isn't officially deprecated yet. So far all
shellouts that are used in Wikimedia production have already been
converted except for TimedMediaHandler.
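To illustrate the "works transparently" call-site idea in a neutral way (this is a toy Python sketch of the pattern, not MediaWiki's actual BoxedCommand API; the class and method names below are invented):

```python
import subprocess

class BoxedShell:
    """Toy sketch of the Shellbox pattern: the caller builds and runs a
    command the same way whether execution happens locally or in a
    remote isolated container. Illustrative only."""

    def __init__(self, service_url=None):
        # If a Shellbox-like service URL were configured, the command
        # (plus any input files) would be shipped there, e.g. over HTTP;
        # with no service configured, we fall back to a local subprocess.
        self.service_url = service_url

    def run(self, argv, stdin=b""):
        if self.service_url is not None:
            raise NotImplementedError("remote execution not sketched here")
        proc = subprocess.run(argv, input=stdin, capture_output=True)
        return proc.returncode, proc.stdout

# The call site is identical regardless of where execution happens:
rc, out = BoxedShell().run(["echo", "hello"])
```

The point of the pattern is that extension code never branches on deployment topology; only configuration decides whether isolation is local or remote.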
Looking forward, I think this also gives us a lot of flexibility in
using more external commands in the future. First, we're less tied to
whatever OS version MediaWiki is running on: as long as a command can be
built and shipped in a container, we can use it. And secondly, it's probably
OK if external commands aren't super well behaved (e.g. use too much
memory) since they're no longer sharing the same resources as an
appserver (this shouldn't be interpreted as a free pass for super
inefficient stuff of course).
I tried to keep this summary short, and am intending to write a longer
blog post that explains some more history in detail. But if you have any
questions or something isn't clear, please ask!
I'm confused about what the term "ltsrel" means. The way I understand it,
it *should* mean that the extension supports only Long Term Support
versions of MediaWiki. It would have a branch for every LTS version of
MediaWiki like REL1_31 and REL1_35, but not for REL1_32, REL1_33 or REL1_34.
However, on the Compatibility page,
the rel and ltsrel policies are described as:
* release branches (key: rel): For every MediaWiki release, there is a
corresponding branch in the extension. So e.g. if you use MediaWiki 1.36,
you should use the REL1_36 branch of the extension.
* long-term support release branches (key: ltsrel): For every MediaWiki
release, there is a corresponding branch in the extension, following the
Version lifecycle release policy.
These sound exactly alike to me.
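To make the distinction I'd *expect* concrete (this sketch reflects my reading above, not the Compatibility page; the LTS version set is just the examples I gave):

```python
# LTS releases from my examples above; 1.32-1.34 were not LTS.
LTS_VERSIONS = {"1.31", "1.35"}

def expected_branch(mw_version, policy):
    """Return the extension branch I'd expect a wiki on mw_version to
    use under each policy, per my understanding of the terms."""
    if policy == "rel":
        # One branch per MediaWiki release, e.g. 1.36 -> REL1_36.
        return "REL" + mw_version.replace(".", "_")
    if policy == "ltsrel":
        # My reading: branches exist only for LTS releases, so a
        # non-LTS version would have no matching branch.
        if mw_version in LTS_VERSIONS:
            return "REL" + mw_version.replace(".", "_")
        return None
    raise ValueError(policy)
```

If that reading were right, `ltsrel` would return nothing for 1.36, yet the page's wording implies it behaves just like `rel`. Hence my confusion.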
This email is a summary of the Wikimedia production deployment of
- Conductor: Jeena Huneidi
- Backup: Dan Duvall
- Blocker task: T281166 <https://phabricator.wikimedia.org/T281166>
- Current Status: Live everywhere <https://versions.toolforge.org/>
- 351 patches ▁▄▇██
- 1 rollback █▁█▁█
- 0 days of delay █▁█▁▁
- 8 blocking tasks ▆▃▃▁█
- Closest to the buzzer: 3.3 hours before branch cut
<https://gerrit.wikimedia.org/r/c/mediawiki/core/+/724202>, a nice catch
*🚂🌈 Trainbow love*
This train was bound for glory thanks to:
- Kosta Harlan
- Jon Robson
- Olga Vasileva
- Taavi Väänänen
- Petr Pchelko
Sincere trainbow appreciation to you all <3
Tyler Cipriani (he/him)
Engineering Manager, Release Engineering