Hi Community Metrics team,
This is your automatic monthly Phabricator statistics mail.
Accounts created in (2016-06): 240
Active users (any activity) in (2016-06): 836
Task authors in (2016-06): 471
Users who have closed tasks in (2016-06): 252
Projects which had at least one task moved from one column to another on
their workboard in (2016-06): 199
Tasks created in (2016-06): 2478
Tasks closed in (2016-06): 2141
Open and stalled tasks in total: 30571
Median age in days of open tasks by priority:
Unbreak now: 18
Needs Triage: 173
(How long tasks have been open, not how long they have had that priority)
TODO: Numbers which refer to closed tasks might not be correct, as
described in https://phabricator.wikimedia.org/T1003 .
Fab Rick Aytor
(via community_metrics.sh on iridium at Thu Jul 14 22:31:51 UTC 2016)
The plan for next week's ArchCom office hour is to discuss T126641:
[RFC] Devise plan for a cross-wiki watchlist back-end. There has already
been a fair amount of discussion on T126641, which tries to sort out the
following questions:
* Is keeping a central table of all recent changes for all wikis a good idea?
* Is doing separate queries for each wiki a better or worse idea?
* Are there ways that we could mitigate the impact of either of those options?
* Are there other options that we haven't even thought about yet?
#wikimedia-office: 2016-07-20 (Wednesday) 21:00 UTC (2pm PDT, 23:00 CEST)
More details about the meeting: <https://phabricator.wikimedia.org/E235>
More details about the topic: <https://phabricator.wikimedia.org/T126641>
Of course, no one needs to wait for the meeting to comment. Please
make your thoughts known on this list, or in Phab on T126641.
There are still 73 unclaimed projects listed on
https://wikitech.wikimedia.org/wiki/Purge_2016. Please claim your
projects today! (Or, better yet, mark them as unused so I can start
reclaiming them!)
On 7/8/16 11:09 AM, Andrew Bogott wrote:
> If you are exclusively a user of tool labs, you can ignore this email.
> If you use or administer another labs project, this email REQUIRES
> ACTION ON YOUR PART.
> We are reclaiming unused resources due to an ongoing shortage.
> Visit this page and add a signature under projects you know to be active:
> Associated wmflabs.org domains are included to
> identify projects by offered services.
> We are not investigating why projects are needed at this time. If one
> person votes to preserve then we will do so in this round of cleanup.
> In a month, projects and associated instances not claimed will be
> suspended or shut down. A month later, if no one complains, these
> projects will be deleted.
> - Andrew (on behalf of all Labs Admins everywhere)
It seems there is disagreement about what the correct interpretation of NULL in
the rev_content_model column is. Should NULL there mean
(a) "the current page content model, as recorded in page_content_model"
or should it mean
(b) "the default for this title, no matter what page_content_model says"?
Kunal and I have had an unintentional edit war about this question in Revision.php:
Kunal changed it from (a) to (b) in https://gerrit.wikimedia.org/r/#/c/222043/
I later changed it from (b) to (a) in https://gerrit.wikimedia.org/r/#/c/297787/
Kunal reverted me from (a) to (b) in https://gerrit.wikimedia.org/r/#/c/298239/
So, which way do we want it?
The conflict seems to arise from (at least) three competing use cases:
I) re-interpreting page content. For instance, a user may move a misnamed
User:Foo.jss to User:Foo.js. In this case, the content should be re-interpreted
(a), though it still works with (b), because the default model changes based on
the suffix ".js". I think it would however be better to only rely on title
parsing magic once, when creating the page, not later, when rendering old revisions.
II) converting page content. For instance, if a talk page gets converted to
using Flow, new revisions (and page_content_model) will have the Flow model,
while old revisions need to keep their original wikitext model (even though
their rev_content_model is null). That would need behavior (b).
III) changing a namespace's default content model. E.g. when installing an
extension that changes the default content model of a namespace (such as
Wikibase with Items in the main namespace, or Flow-per-default for Talk pages),
existing pages that were already in that namespace should still be readable.
With (b), this would fail: even though page_content_model has the correct model
for reading the page, rev_content_model is null, so the new namespace default is
used, which will fail. With (a), this would simply work: the page will be
rendered according to page_content_model.
In all cases it's possible to resolve the issue by replacing the NULL entries
for all revisions of a page with the current model id. The question is just when
and how we do that, and when and how we can even detect that this needs doing.
There is also an in-between option, let's call it a/b: fall back to
page_content_model for the latest revision (that should *always* be right), but
to ignore page_content_model for older revisions. That would cater to use case
III at least in so far as it would be possible to view the "misplaced" pages.
But viewing old revisions or diffs would still fail with a nasty error. This
option may look better on the surface, but I fear it will just add to the confusion.
There's another fix: never write null into rev_content_model. Always put the
actual model ID there. That's pretty wasteful, but it's robust and reliable.
When we decided to use null as a placeholder for the default, we assumed the
default would never change. But as we now see, it sometimes does...
So, what should it be, option (a) or (b)? And how do we address the use case
that is then broken? What should we write into rev_content_model in the future?
I personally think that option (a) makes more sense, because the resolution of
defaults is then local to the database. It could even be done within the SQL
query. It's easier to maintain consistency that way. For use case II, that would
require us to "fill in" all the rev_content_model fields in old revisions when
converting a page. I think it would be a good thing to do that. If we have the
content model change between revisions, it seems prudent to record it explicitly.
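For illustration, option (a)'s fallback can be expressed directly in SQL. This is only a sketch: it assumes the standard core schema, where revision.rev_page joins page.page_id, and the page ID in the WHERE clause is a hypothetical example.

```sql
-- Option (a): a NULL rev_content_model falls back to the page's current
-- model, as recorded in page_content_model.
SELECT rev_id,
       COALESCE(rev_content_model, page_content_model) AS effective_model
FROM revision
JOIN page ON rev_page = page_id
WHERE rev_page = 12345;  -- hypothetical page ID
```

This is what "local to the database" means in practice: no title-parsing or namespace-default logic is needed at read time, only the two columns already in the join.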
Senior Software Developer
Gesellschaft zur Förderung Freien Wissens e.V.
= 2016-07-13 =
== Product ==
=== Reading ===
==== Reading Infrastructure ====
* No update, but there will be a follow-up meeting on the Echo auth
implementation bug on 20 July 2016 (multi-stakeholder bug)
==== Reading Web ====
* No update, working on language switcher on mobile web
==== iOS native app ====
* 5.0.5 is currently in beta - heading to regression on Tuesday
* Set up a new plan to incrementally improve regression testing
* Development of 5.1 is in progress
* Planning of 5.2 is in progress
* Discovery implemented https://phabricator.wikimedia.org/T139378 to enable
iOS to do https://phabricator.wikimedia.org/T130889
* Filed a ticket to start work on a real-time trending API:
==== Android native app ====
==== Mobile Content Service ====
* Getting close to having public endpoints for the random and aggregated
feeds. Removing blockers for that.
* Service had been flapping quite a bit lately. Log files indicate heap
=== Community Tech ===
* Investigating cross-wiki watchlist, how to implement back-end
** Referred to ArchCom as an RFC
* Will be deploying PageAssessments to English Wikipedia soon
* Also will be switching English Wikipedia to UCA collation with numerical
sorting
=== Editing ===
==== Collaboration ====
* Blocked: None
* Blocking: No change
** Finished rolling out bundling and re-sorting changes. Had a couple of
issues:
*** We forgot to change Alerts/Messages to Alerts/Notices, so had to do a
SWAT for this. This had complications initially:
*** We learned yesterday there was a bug in Echo that was causing users to
have inconsistent state, which prevented login. This was fixed yesterday:
==== Multimedia ====
* Blocked: None?
* Blocking: None?
* Update: None?
==== Language ====
* Blocked: None?
* Blocking: None?
* Update: None?
==== Parsing ====
* Blocked: service-runner migration
* Blocking: None?
* Update: Services is helping with migrating Parsoid to the service-runner
framework. Parsoid deployments are on hold till that time. Migration to
node v4, jessie, scap3 will follow that migration.
==== VisualEditor ====
* Blocked: None.
* Blocking: None known.
* Update: Lots of work on bugs and regressions this week;
https://gerrit.wikimedia.org/r/#/c/298392/ / T129360 in MobileFrontend is
the remaining focus.
== Technology ==
=== Services ===
* Node security upgrade: RB, AQS and SCB on Nodejs 4.4.6, Parsoid pending
move to Jessie
* RESTBase Cassandra cluster upgraded to 2.2.6
* Preparing deploy of new feed API end points (ex: featured article /
image) with Reading. ETA likely this week.
* Parsoid move to service-runner -
** Currently testing in BetaCluster
** Ping for Ops: need to coordinate the transition in prod
=== Fundraising Tech ===
* Civi upgrade complete
* CentralNotice deployed
* Upgrading payments to MW 1.26
* Building out new servers
* Kafka sampling problem solved
* No blockers
=== TechOps ===
** https://phabricator.wikimedia.org/T135483 - HHVM crashes - raised to
UBN! after issue recurrence. Currently no one owns the ticket.
** About 25% of the MediaWiki clusters run on jessie; we will start
reimaging old servers during this quarter. A schedule will be announced.
** Enabled TCP Fast Open on some caching clusters
** Insecure POST traffic cutoff: All external traffic is prohibited, but
labs is still permitted (with a 20% failure rate)
=== Security ===
* Node.js dependencies are now being checked nightly for disclosed
vulnerabilities; Darian will be manually creating Phab tickets about this
initially, with automation to follow
* Request security reviews:
* A security release will be prepared soon, possibly July 20
* Reviews: Tool Labs Console (cont.)
===Wikidata / WMDE===
* Daniel is working on multi-content revisions
* Continuing work on structured data support for Commons
* Preparing deploy of RevisionSlider to test wikis and then as a beta
feature on German Wikipedia https://phabricator.wikimedia.org/T140232
=== Discovery ===
* '''Blocking''': none
* '''Blocked''': none
* Logstash upgrade postponed to Jul 18th, 19-22 UTC
* Initiated discussion on using ? in searches:
* Geosearch (near*:) is ready to roll out in production:
https://phabricator.wikimedia.org/T139378 in 1.28.0-wmf.10
* Started discussion on new wikipedia.org portal layout:
* Geoshapes in graphs and (soon) maps:
=== RelEng ===
** New gerrit update needs testing: https://gerrit-new.wikimedia.org/r/
** wmf.9 was reverted, wmf.10 will get pushed to group0 and group1 today on
a short schedule
*** Retrospective to come
=== Architecture ===
* ArchCom weekly meetings today:
** 1pm PDT Planning meeting (private): [[Phab:E227]]
** 2pm PDT Discussion (public) IRC #wikimedia-office: [[Phab:E228]]
*** Topic: [[Phab:T589]] - image and oldimage tables
* Anything stalled? Comment in Phab here: [[Phab:Z425]]
[Tried mediawiki-l, now trying wikitech-l]
What does it mean when I run MediaWiki unit tests on my server, and phpunit skips some tests with this error:
Extension mysql is required
This is on a fully working MediaWiki server with MySQL installed, running on Ubuntu 16.04.
$ mysql --version
mysql Ver 14.14 Distrib 5.7.12, for Linux (x86_64) using EditLine wrapper
$ dpkg -s php-mysql
Status: install ok installed
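The skip usually means phpunit checked for a PHP extension literally named "mysql", which was removed in PHP 7 (Ubuntu 16.04 ships PHP 7.0, and its php-mysql package provides mysqli and pdo_mysql instead). A quick way to see what is actually loaded in the CLI SAPI, assuming `php` is on your PATH, is:

```shell
# List the MySQL-related PHP extensions loaded in the CLI SAPI;
# on PHP 7 you should see mysqli and pdo_mysql, but not "mysql".
php -m | grep -i mysql

# The CLI may read a different php.ini than Apache/FPM, so also check:
php --ini
```

If `mysqli` is loaded but the legacy `mysql` extension is not, the skipped tests are likely ones that still require the removed extension rather than a problem with your MySQL server.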
For a while we've had the ability to test changes in production on a
single host (mw1017) using a special HTTP header (X-Wikimedia-Debug).
This has proved useful for many when deploying changes in production and
we are adding it to the SWAT deploy process.
See the steps at:
4. After merge, the SWAT team member fetches the patch(es) to tin and then
runs scap pull on mw1017
5. The submitter tests the change by using the instructions at
X-Wikimedia-Debug#Staging_changes AND the SWAT team member checks the logs
6. If there are no errors and the fix seems to work (if testable in that
manner), then the SWAT team member deploys the patch to the entire cluster
How to test on mw1017:
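As a sketch of the mechanism (the exact header value and the full instructions are on the wikitech X-Wikimedia-Debug page, which is authoritative), a request can be routed to the debug host by sending the header explicitly:

```shell
# Send a request with the X-Wikimedia-Debug header so it is served by
# the debug host (mw1017) instead of the regular app servers.
# The header value shown here is an assumption; see the wikitech page
# for the currently accepted values.
curl -H 'X-Wikimedia-Debug: 1' 'https://en.wikipedia.org/wiki/Special:Version'
```

Browser extensions that set the same header are also available, which is usually more convenient for interactive testing.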
To less exciting SWAT deploys,
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
What is the proper incantation for faking a successful login (for a given user) as part of a unit test? My old code was:
$context = RequestContext::getMain();
$specialPage = new LoginForm( $context->getRequest() );
What is the right way to perform this task with AuthManager?
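(Not an authoritative answer, but for many unit tests you may not need to drive AuthManager at all: if the test only needs "a logged-in user", setting the user on the context and its session is often enough. A sketch, where the user name is just a placeholder for whatever test fixture you use:)

```php
// Sketch: fake a logged-in user for a test without going through the
// full AuthManager authentication flow. 'UTSysop' is a placeholder
// for your own test fixture user.
$user = User::newFromName( 'UTSysop' );
$context = RequestContext::getMain();
$context->setUser( $user );
// Also attach the user to the session backing the request, so code
// that consults the session sees the same user.
$context->getRequest()->getSession()->setUser( $user );
```

If the test really must exercise the authentication flow itself (e.g. testing a provider), then it would need to go through AuthManager::beginAuthentication() with test providers instead.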
Thanks very much,