For a few years now, we have had several query [special] pages, also
called "maintenance reports" in the list of special pages, which are
never updated for performance reasons: 6 on all wikis and 6 more only on
en.wiki.
One proposal is to run them again, quite liberally, on all "small wikis"
(to start with); another is to update them everywhere, but one at a time
and with proper breathing time for the servers.
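For context, the disabling is done through $wgDisableQueryPageUpdate in
the wmf configuration. A minimal sketch of what such an entry looks like
(the page names and the enwiki additions here are purely illustrative,
not the actual list):

    'wgDisableQueryPageUpdate' => array(
        // reports disabled on every wiki (names are examples only)
        'default' => array(
            'Ancientpages',
            'Deadendpages',
            'Lonelypages',
            'Mostlinked',
            'Wantedpages',
            'Fewestrevisions',
        ),
        // if I recall correctly, a '+' prefix merges with 'default',
        // so the biggest wikis disable some more on top of these
        '+enwiki' => array(
            'Mostcategories',
            // ...
        ),
    ),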
The problem is: which pages are safe to run an update on, and how
frequently, even on en.wiki; and which would kill it? Or, at what point
is a wiki too big to run such updates on carelessly?
Can someone estimate it by looking at the queries, or maybe by running
them on some DB where it's not a problem to test?
We only know that originally the pages were disabled if they took "more
than about 15 minutes to update". If such a page now took, say, four
times that, i.e. 60 min, would it be a problem to update one such page
per day?
Most updates already seem to rely on slave DBs, but maybe this should be
confirmed; on the other hand, writing huge result sets to the DB
shouldn't be a problem, because those are limited as well (see
$wgQueryCacheLimit below).
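If it helps the discussion, this is roughly the pattern such an update
follows (a simplified sketch, not the actual QueryPage::recache() code;
the querycache table and qc_* fields are real, the rest is illustrative):

    // Simplified sketch of one query page update, not the real QueryPage code.
    function updateOneReport( $name, $limit ) {
        // The expensive SELECT runs on a slave, capped at $limit rows
        // (i.e. $wgQueryCacheLimit for the wiki).
        $dbr = wfGetDB( DB_SLAVE );
        $res = $dbr->select(
            'page', // whatever tables/conditions the report actually needs
            array( 'page_namespace', 'page_title' ),
            array(),
            __METHOD__,
            array( 'LIMIT' => $limit )
        );

        // The cached rows are then replaced on the master; since at most
        // $limit rows come back, the write volume is bounded too.
        $rows = array();
        $i = 0;
        foreach ( $res as $row ) {
            $rows[] = array(
                'qc_type' => $name,
                'qc_value' => $i++,
                'qc_namespace' => $row->page_namespace,
                'qc_title' => $row->page_title,
            );
        }
        $dbw = wfGetDB( DB_MASTER );
        $dbw->delete( 'querycache', array( 'qc_type' => $name ), __METHOD__ );
        if ( $rows ) {
            $dbw->insert( 'querycache', $rows, __METHOD__ );
        }
    }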
In (reviewed) puppet terms: <https://gerrit.wikimedia.org/r/#/c/33713/>.
Below that limit, a wiki should count as "small" for
<https://gerrit.wikimedia.org/r/#/c/33694> and be updated frequently, for
the benefit of editors' engagement.
    'wgQueryCacheLimit' => array(
        'default' => 5000,
        'enwiki' => 1000, // safe to raise?
        'dewiki' => 2000, // safe to raise?
    ),
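For completeness: these per-wiki arrays are resolved by MediaWiki's
SiteConfiguration class, so any wiki without its own entry falls back to
'default'. A quick sketch (frwiki is just an arbitrary example):

    $conf = new SiteConfiguration;
    $conf->settings = array(
        'wgQueryCacheLimit' => array(
            'default' => 5000,
            'enwiki' => 1000,
            'dewiki' => 2000,
        ),
    );
    // enwiki keeps its own, smaller cap; any other wiki gets the default.
    var_dump( $conf->get( 'wgQueryCacheLimit', 'enwiki' ) ); // int(1000)
    var_dump( $conf->get( 'wgQueryCacheLimit', 'frwiki' ) ); // int(5000)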