I'm refactoring MonoBook, starting with MonoBookTemplate. The current
change gets rid of the entire immediate print/html soup approach and
instead assembles a giant string and prints that in one statement at the
end. See: https://gerrit.wikimedia.org/r/#/c/420154/
Pending reviews and assurances that I have indeed not totally broken
everything, we'll probably be merging this in the next week or so. If
anyone would like to point out particular reasons why this is a terrible
idea, please do so now.
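For illustration, the pattern looks roughly like this (a minimal sketch,
not the actual patch; the class and helper names below are made up):

```
<?php
// Sketch of the refactor: instead of echoing HTML piecemeal as each
// section is built (the old print/html soup), accumulate everything
// into one string and print it in a single statement at the end.
class MonoBookTemplateSketch {
    // Hypothetical helpers; previously each would have echoed directly.
    private function getHeader(): string { return "<header>...</header>\n"; }
    private function getContent(): string { return "<main>...</main>\n"; }
    private function getFooter(): string { return "<footer>...</footer>\n"; }

    public function execute(): void {
        $html = $this->getHeader()
            . $this->getContent()
            . $this->getFooter();
        echo $html; // the one print statement
    }
}

( new MonoBookTemplateSketch() )->execute();
```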
Future plans include putting that ridiculous getPortlet into core
BaseTemplate, making a less dumb BaseTemplate::getFooter without
breaking anything already using it so MonoBook can lose its silly
replication thereof, organising all the files in MonoBook better
(putting them in resources, includes, etc. according to standard skin
practices), and making MonoBook responsive.
There are also some problems that need addressing down the road: that
I'm not sure how safe it is for caching and the like to just go moving
images/css files around willy-nilly, that there are no 'standard' skin
practices as far as anyone can tell, and that some people seem to think
MonoBook is bad and not worth this. But MonoBook is not bad. It is a
delightful skin. We should preserve it, and not just in formaldehyde.
I'm a final year Mathematics student at the University of Bristol, and I'm
studying Wikipedia as a graph for my project.
I'd like to get data regarding the number of outgoing links on each page,
and the number of pages with links to each page. I have already
inquired about this with the Analytics Team mailing list, who gave me a few
suggestions. One of these was to run the code at this link https://quarry.wmflabs.org/
with these instructions:
"You will have to fork it and remove the "LIMIT 10" to get it to run on
all the English Wikipedia articles. It may take too long or produce
too much data, in which case please ask on this list for someone who
can run it for you."
I ran the code as instructed, but the query was killed as it took longer
than 30 minutes to run. I asked if anyone on the mailing list could run it
for me, but no one replied saying they could. The guy who wrote the code
suggested I try this mailing list to see if anyone can help.
I'm a beginner in programming and coding etc., so any and all help you can
give me would be greatly appreciated.
University of Bristol
I ask because someone put this revision in (which is now deleted): a
cryptocurrency miner in common.js! Obviously this is not going to be a
common thing, and common.js is closely watched (the above edit was
reverted in 7 minutes). But who can edit JavaScript pages on a wiki
running MediaWiki, outside one's own personal usage? And what permissions
are needed? I ask with threats like this in mind.
Sometimes I find adding assert() calls in my code very handy for various
reasons:
- catching failures in development mode in complex code where exposing all
the details to unit tests is sometimes hard and/or pointless
- readability of the code
But I worry about the perf implications of these lines of code. I don't
want these assertions to be used to track errors in production mode.
PHP 7 introduced expectations, which make it possible to have zero-cost
assert() calls in production. Looking at the MW codebase, we don't seem to
use assert() frequently (only 26 files).
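For reference, a minimal sketch of how this behaves (the function below is
made up; the ini settings are PHP's standard zend.assertions and
assert.exception knobs):

```
<?php
// With zend.assertions = 1 (development), the assert() runs and, if
// assert.exception = 1, throws an AssertionError with this message.
// With zend.assertions = -1 (production), the expression is never even
// compiled in, which is what makes PHP 7 expectations zero-cost.
function average( array $values ): float {
    // The expectation documents (and, in dev, enforces) the precondition.
    assert( count( $values ) > 0, 'average() requires a non-empty array' );
    return array_sum( $values ) / count( $values );
}
```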
Are there any existing discussions about this?
Is assert() a good practice for the MW code base?
If yes, would it make sense to benefit from zero-cost assertions in WMF
production?
Wikimedia Foundation’s Language team would like to invite you to an online
office hour session scheduled for Wednesday, March 21st, 2018 at 13:00 UTC.
This will be an open session to talk about our work, and in particular the
changes to interlanguage links, which were recently rolled out.
The new option shows a list of up to 9 languages instead of a long list
that can have more than 200 items, and a panel with all the links that can
be looked up in any language using a search box. The purpose of this
feature is to make articles in all languages easier to find. We recently
published a blog post about this feature and the thoughts behind it.
This session is going to be an online discussion over Google
Hangouts/YouTube with a simultaneous IRC conversation. Due to the
limitations of Google Hangouts, only a limited number of participation
slots are available, so please do let us know in advance if you would like
to join the Hangout. The IRC channel will be open for interaction during
the session.
Please read below for the event details, including local time and YouTube
session links, and do let us know if you have any questions.
== Details ==
# Event: Wikimedia Foundation Language office hour session
# When: March 21st, 2018 (Wednesday) at 13:00 UTC (check local time)
# Where: Google Hangouts/YouTube, and on IRC #wikimedia-office (Freenode)
# Agenda: Discussion about Compact Language Links, and Q & A.
Engineering Manager, Language (Contributors)
TL;DR: When using X-Wikimedia-Debug to profile web requests on Wikimedia
wikis, the generated profile information will now include details from
"w/index.php", MWMultiVersion, and things like
wmf-config/CommonSettings.php. Details at
The debug profiler provided on Wikimedia production wikis previously
could not cover the code that executes before MediaWiki core instantiates
ProfilerXhprof, which was in charge of calling `xhprof_enable`. This
normally happens within core's Setup.php.
While that point in Setup.php is before any important MediaWiki core logic,
it misses out on two other chunks of code:
1. Initialisation of MediaWiki core – This includes entry point code (eg.
index.php, PHPVersionCheck), but also the first steps of Setup before
Profiler, such as AutoLoader, vendor, and LocalSettings.php. At WMF,
LocalSettings.php loads wmf-config/InitialiseSettings.php and
CommonSettings.php.
2. Wrapping of MediaWiki entrypoint – At Wikimedia, the index.php
entrypoint is itself further wrapped in something called "multiversion".
Multiversion is what determines the wiki ID (eg. "enwiki") and MediaWiki
branch (eg. "1.31.0-wmf.25") associated with the current domain (eg.
"en.wikipedia.org").
Over the past weeks, I've been refactoring MediaWiki core, wmf-config and
Wikimedia's HHVM settings to make sure we can instrument the above code as
part
of our performance profiles.
This change happened in three phases:
## 1. Update wmf-config/StartProfiler to... actually start the profiler!
The file name is somewhat deceptive, because traditionally this file is
(and can) only be used to *configure* the profiler, by assigning
$wgProfiler. It
makes sense that we cannot instantiate the Profiler subclass from this
file, because the classes and run-time configuration are not and cannot be
available this early.
However, we don't need the Profiler class to record data. The Profiler classes
typically obtain their data from native PHP. The one used at WMF is XHProf.
Previously, we would assign $wgProfiler['class'] = 'ProfilerXhprof', and
then later MediaWiki core would instantiate ProfilerXhprof, which would
call xhprof_enable(). We now call xhprof_enable() directly from
StartProfiler.php.
This change enabled coverage of code in Setup.php between 'include
StartProfiler' and 'Profiler::instance()'. – Mainly: vendor, LocalSettings,
and wmf-config.
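In sketch form (simplified; the real wmf-config file has more conditional
and sampling logic than this):

```
<?php
// StartProfiler.php (sketch): start the native profiler the moment this
// file is included, rather than waiting for MediaWiki to instantiate
// ProfilerXhprof much later in Setup.php.
if ( function_exists( 'xhprof_enable' ) ) {
    xhprof_enable( XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY );
}
// The Profiler configuration then has to account for collection already
// being underway, so that it is not enabled a second time.
```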
## 2. Update MediaWiki core to include StartProfiler earlier.
It is now the first thing included by Setup.php.
This change enabled coverage of code in Setup.php that previously was
before 'include StartProfiler'. – Namely: AutoLoader.php, Defines.php.
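In Setup.php terms, the ordering change is roughly this (a sketch of the
effect, not the actual diff):

```
<?php
// Setup.php (sketch): the profiler bootstrap now comes first, so the
// includes after it, previously un-instrumented, are covered too.
// $IP is MediaWiki's install path, set by the entry point.
require_once "$IP/StartProfiler.php"; // previously included later
require_once "$IP/includes/AutoLoader.php";
require_once "$IP/includes/Defines.php";
```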
## 3. Configure WMF's PHP engine to use auto_prepend_file
This is the big one, and requires a PHP ini setting change. Third parties
can follow the same pattern in order to get the same benefits (see the
sketch after this list):
* Put `xhprof_enable( $flags )`, along with any sampling/conditional
logic, in a separate file.
* Use it from two places:
** In StartProfiler.php, include using require_once.
** In php.ini, set auto_prepend_file=path/to/profiler.php.
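Concretely, the shared file might look like this (a sketch; the path and
the gating on the debug header are placeholders to adapt):

```
<?php
// path/to/profiler.php (sketch): the single place that starts XHProf.
// Loaded twice on purpose: via php.ini's auto_prepend_file (covering
// multiversion and the entry point) and via require_once from
// StartProfiler.php. The latter is a no-op when the file was already
// auto-prepended, so profiling never starts twice.
if ( !function_exists( 'xhprof_enable' ) ) {
    return; // XHProf not available; profiling stays off.
}
// Placeholder gating: only profile explicitly flagged debug requests.
if ( isset( $_SERVER['HTTP_X_WIKIMEDIA_DEBUG'] ) ) {
    xhprof_enable( XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY );
}
```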
This change enabled coverage of all remaining code. – Namely: multiversion,
w/index.php and things like PHPVersionCheck.
$ curl -H 'X-Wikimedia-Debug: 1' 'https://en.wikipedia.beta.
Output now includes:
- main() # resembles the wrapper at
-- Timo Tijhof