Problem: the apps can use the face detection that iOS and Android provide
to position and crop lead images, but mobile web has no equivalent. Even
in the apps, detection is quite slow, draining the battery and causing
noticeable slowdown, especially on low-end Android devices.
With Dmitry's help, I discovered the sources of Android's face detection
library (, separated out into a standalone library at ). This means we
can build a face detection service and supply its results to all
consumers, whether apps, web or third parties.
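One way such a service's output could be used (a sketch, not the actual planned API: the function name and the (x, y, w, h) bounding-box shape are assumptions) is to centre a lead-image crop on the detected face:

```python
def crop_around_face(img_w, img_h, face, crop_w, crop_h):
    """Return the (x, y) origin of a crop_w x crop_h window centred on
    the face, clamped to the image bounds.

    `face` is a hypothetical (x, y, w, h) bounding box as a face
    detection service might return it.
    """
    fx, fy, fw, fh = face
    # Centre the crop on the centre of the face box.
    x = fx + fw // 2 - crop_w // 2
    y = fy + fh // 2 - crop_h // 2
    # Clamp so the crop stays inside the image.
    x = max(0, min(x, img_w - crop_w))
    y = max(0, min(y, img_h - crop_h))
    return x, y

# A face near the image edge still yields a valid in-bounds crop.
print(crop_around_face(1000, 600, (950, 550, 40, 40), 400, 300))
```

The clamping step matters for exactly the lead-image case: faces often sit near the top edge of portraits, so a naive centred crop would fall outside the image.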
Max Semenik ([[User:MaxSem]])
I explored using user pages as the storage mechanism for the minimum
viable product of the collections work the mobile team is doing. The
goal is to prove the feature's success and then feed our findings
into the "multiple lists in core" RFC.
I completed a proof-of-concept patch for storing collections as lists.
Essentially it stores all the metadata associated with a user's
collections as a page at User:<username>/MobileWebCollections.json
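The thread doesn't spell out the schema of that page, but a hypothetical shape (field names and values are illustrative only) might look like:

```json
{
  "collections": [
    {
      "id": 1,
      "title": "Example collection",
      "description": "",
      "items": ["First page title", "Second page title"]
    }
  ]
}
```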
You can test it out by:
1) checking out: https://gerrit.wikimedia.org/r/#/c/188225/
2) visiting Special:MobileCollections
3) refreshing the page and seeing a collection with 2 items in it
Whilst doing this I discovered that race conditions could be an issue
with this approach: the sample code carries out various write
transactions, and the end state can differ depending on which finishes
first. I'm not much of a PHP expert, so I'm not sure how best to remedy
this. It may not be a problem in practice, since we only anticipate one
user managing these lists at a given time.
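For concreteness, here is a minimal illustration of the lost-update race described above (a stand-in sketch, not the actual MediaWiki code path): two saves read the same snapshot of the JSON page, each appends one item, and the slower writer clobbers the faster one's change.

```python
import json

page = json.dumps({"items": ["A"]})  # stand-in for the stored user page

def add_item(snapshot, item):
    """Read-modify-write against a snapshot taken earlier -- the unsafe
    pattern. A real fix would send the base revision along with the
    save and reject it if the page has changed (compare-and-swap)."""
    data = json.loads(snapshot)
    data["items"].append(item)
    return json.dumps(data)

# Both requests read the page before either one writes.
snap1 = page
snap2 = page
page = add_item(snap1, "B")  # first save lands: items are A, B
page = add_item(snap2, "C")  # second save overwrites the first
print(json.loads(page)["items"])  # "B" has been lost
```

MediaWiki's edit machinery already supports this kind of conflict detection for ordinary page edits, so one plausible remedy is to surface the base revision through the collections API in the same way.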
Apart from the race condition it seems to work nicely. I imagine the
API could also be used to handle watchlist watch and unwatch actions,
so that we wouldn't need nasty special-cased code for watching.
Currently the API is only designed to work on private lists for the
current user. I would expect a user parameter to be added later.
Let me know if you have any questions.
Ed Sanders, Trevor and I had an in-real-life conversation last Friday
where we spoke about the VisualEditor tablet (and future mobile)
offering. We've had a bunch of issues with it breaking so far, and a
lot of this is due to two conflicting technologies.
We're keen to bring the VE code and mobile code closer together, with
the ultimate goal of getting a good VisualEditor mobile experience going.
Since Damon has asked us all to get VisualEditor released this
quarter, between us we think mobile VE will become a high priority.
We'd thus like to spend this quarter making the VisualEditorOverlay
more OOJS UI-like.
Ideally all the existing code should live in VisualEditor and be based
on OOJS UI. In the future we could also imagine the wikitext editor in
mobile becoming part of the VisualEditor tool itself.
At a bare minimum this could be rewritten as an OOJS UI dialog, which
would require us to look at the mobile overlay manager and see whether
it can be consolidated with VisualEditor's seemingly similar window
manager, which I do not believe has any support for storing state
via the hash fragment [citation needed, please correct me].
As a result of this work, the mobile frontend code and VisualEditor's
frontend code should naturally become more aligned.
Ed, Trevor: is there anything you would like to add to the above? Any
corrections to my interpretation of this chat?