The Gerrit Cleanup Day is only two days away (Wed 23rd).
More info: https://phabricator.wikimedia.org/T88531
Do you feel prepared and all Reading team members know what to do?
If not, what are you missing and how can we help?
Some Gerrit queries for each team are listed under "Gerrit queries per
team/area" in https://phabricator.wikimedia.org/T88531
Are they helpful and a good start? Or do they miss some areas (or do
you have existing Gerrit team queries to use instead or to "integrate",
e.g. for parts of MediaWiki core you might work on)?
Also, which person will be the main team contact for the day (and
available in #wikimedia-dev on IRC) and help organize review work in
your areas, so other teams could easily reach out?
Some teams' plates are emptier than others', so they are wondering where and
how to lend a helping hand (they'd like to find out in advance, due to timezones).
Thanks for helping to make the Gerrit Cleanup Day a success!
Andre Klapper | Wikimedia Bugwrangler
The Reading team has been having a series of meetings as part of the
ongoing strategy process. We documented and clarified as many details as
possible in order to empower everyone to become part of the process, while
following the same methodology.
For example, instead of accepting a statement like "*The overall page view
numbers are declining and that's a problem we need to solve*", our process
questions it: is this a problem in itself, or is it the result of another
problem? If we pick one possible reason, what are our choices for solving
the problem, and what possibilities does each choice entail? What are the
concerns with each possibility, and which tests do we need to run to
justify those concerns?
Sounds complicated? :-)
Not really. The key is to ask the right questions and always remain
focused on the initial problem.
In our own exercise, we identified one problem that manifests itself across
different indicators: our core system's lack of optimization for emerging
platforms, experiences, and communities.
The team cannot do this alone. We need more people to join our exercise:
please check the documentation, make yourself familiar with the process,
and think about suggesting choices and designing tests.
Questions and comments are welcome on the talk page.
Let's get this done, together!
Cross posting to mobile-l as I think we have interested parties here
---------- Forwarded message ----------
From: Tomasz Finc <tfinc(a)wikimedia.org>
Date: Thu, Sep 17, 2015 at 12:26 PM
Subject: Announcing the launch of Maps
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Cc: Yuri Astrakhan <yastrakhan(a)wikimedia.org>, Max Semenik <
The Discovery Department has launched an experimental tile and static maps
service available at https://maps.wikimedia.org.
Using this service you can browse and embed map tiles into your own tools
using OpenStreetMap data. Currently, we handle traffic from *.wmflabs.org
and *.wikivoyage.org (the referrer header must be either missing or set to
these values), but we would like to open it up to Wikipedia traffic if we
see enough use. Our hope is that this service fits the needs of the
numerous maps developers and tool authors who have asked for a WMF hosted
tile service with an initial focus on WikiVoyage.
We'd love for you to try our new service, experiment with writing tools using
our tiles, and give us feedback <https://www.mediawiki.org/wiki/Talk:Maps>.
If you've built a tool using OpenStreetMap-based imagery then using our
service is a simple drop-in replacement.
Getting started is easy.
How can you help?
* Adapt your labs tool to use this service - for example, use Leaflet js
library and point it to https://maps.wikimedia.org
* File bugs in Phabricator
* Provide us feedback to help guide future features
* Improve our map style <https://github.com/kartotherian/osm-bright.tm2>
* Improve our data extraction
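To make the Leaflet suggestion above concrete, here is a minimal sketch. The `{z}/{x}/{y}.png` URL pattern and the `osm-intl` style name are assumptions based on the standard slippy-map scheme; verify them against https://www.mediawiki.org/wiki/Maps before relying on them.

```javascript
// Sketch: build arguments for Leaflet's L.tileLayer pointing at the
// Wikimedia tile service. The URL pattern and style name ('osm-intl')
// are assumptions, not confirmed endpoints.
function wikimediaTileLayerConfig(style) {
  return {
    url: 'https://maps.wikimedia.org/' + style + '/{z}/{x}/{y}.png',
    options: {
      maxZoom: 18,
      attribution: 'Wikimedia maps | Map data &copy; OpenStreetMap contributors'
    }
  };
}

// In the browser, with Leaflet loaded:
//   var cfg = wikimediaTileLayerConfig('osm-intl');
//   var map = L.map('map').setView([42, -3.14], 4);
//   L.tileLayer(cfg.url, cfg.options).addTo(map);
```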
Based on usage and your feedback, the Discovery team
<https://www.mediawiki.org/wiki/Discovery> will decide how to proceed.
We could add more data sources (both vector and raster), work on additional
services such as static maps or geosearch, work on supporting all
languages, switch to client-side WebGL rendering, etc. Please help us
decide what is most important.
https://www.mediawiki.org/wiki/Maps has more about the project and related links.
== In Depth ==
Tiles are served from https://maps.wikimedia.org, but can only be accessed
from subdomains of *.wmflabs.org and *.wikivoyage.org. Kartotherian
can produce tiles as images (png) and as raw vector data (PBF Mapbox
format or json).
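For readers new to tile services, here is the standard Web Mercator slippy-map math that maps a lat/lon to a tile at a given zoom level. This is generic OpenStreetMap-style math, not Kartotherian-specific, and the URL pattern in `tileUrl` is an assumption for illustration.

```javascript
// Convert lat/lon to slippy-map tile coordinates at a given zoom level
// (standard Web Mercator math used by OpenStreetMap-style tile servers).
function latLonToTile(lat, lon, zoom) {
  const n = Math.pow(2, zoom);
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x: x, y: y };
}

// Hypothetical URL builder for a tile as png or raw vector data;
// the path layout is an assumption, not a confirmed endpoint.
function tileUrl(style, zoom, tile, ext) {
  return 'https://maps.wikimedia.org/' + style + '/' + zoom + '/' +
    tile.x + '/' + tile.y + '.' + ext; // ext: 'png' or 'pbf'
}
```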
Additionally, Kartotherian can produce snapshot (static) images of any
location, scaling, and zoom level with
For example, to get an image centered at 42,-3.14, at zoom level 4, size
800x600, use https://maps.wikimedia.org/img/osm-intl,4,42,-3.14,800x600.png
(copy/paste the link, or else it might not work due to referrer
restrictions).
Do note that the static feature is highly experimental right now.
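The static-image URL layout can be read straight off the example above; a small helper makes the pieces explicit (the `style,zoom,lat,lon,WIDTHxHEIGHT.png` layout is taken from that example URL, nothing more):

```javascript
// Build a static-map image URL in the 'style,zoom,lat,lon,WxH.png'
// format shown in the example above.
function staticMapUrl(style, zoom, lat, lon, width, height) {
  return 'https://maps.wikimedia.org/img/' + style + ',' + zoom + ',' +
    lat + ',' + lon + ',' + width + 'x' + height + '.png';
}
```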
We would like to thank WMF Ops (especially Alex Kosiaris, Brandon Black,
and Jaime Crespo), the Services team, the OSM community and engineers, and
the Mapnik and Mapbox teams. The project would not have been completed so
fast without them.
Sending this to mobile-l in case you don't follow wikitech-l.
---------- Forwarded message ----------
From: Adam Baso <abaso(a)wikimedia.org>
Date: Thu, Sep 17, 2015 at 10:58 AM
Subject: A couple videos: Parsoid with Reading Engineering; Reading
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Just wanted to share a couple recent videos. Enjoy!
== Parsoid with Reading ==
C. Scott Ananian and Subbu Sastry from the Wikimedia Foundation provide an
overview of Parsoid, a Wikimedia technology that translates between HTML
and wikitext (in both directions) and produces richly annotated output
markup, allowing translation layers and other clients (e.g.,
Wikipedia-related technology) to query specific elements from wiki content.
You may have caught some of this material at Wikimania, or maybe this is
your first time. In any event, it's good stuff! This was an interactive
session with questions from Wikimedia Reading Engineering.
== Reading Showcase 20150824 ==
About every 4 weeks the Wikimedia Reading department gets together to
showcase experiments and works in progress and the like. This is the
session from 24-August-2015.
Sam asked me to write up my recent adventures with ServiceWorkers and
making requests for MediaWiki content super super fast so all our
lovely users can access information quicker. Right now we're trying to
make the mobile site ridiculously fast by using shiny new standard web
technologies.
The biggest issue we have on the mobile site right now is that we ship a
lot of content - HTML and images - since we ship the same content as on
desktop. On desktop it's not really a problem from a performance
perspective, but it may be an issue from a download perspective if you have
some kind of data limit on your broadband and you are addicted to Wikipedia.
The problem, however, is that on mobile, connection speeds are not
quite up to desktop standards. To take an example, the Barack Obama
article contains 102 image tags and 186KB of gzipped HTML (about
1MB uncompressed). If you're on your mobile phone just to look up his place
of birth (which is in the lead section) or to see the county results
of the 2004 U.S. Senate race in Illinois, that's a lot of
unnecessary stuff you are forced to load. You have to load all the
images and all the text! Ouch!
Gilles D said a while back: "The Barack Obama article might be a
bit of an extreme example due to its length, but in that case the API
data needed for section 0's text + the list of sections is almost 30
times smaller than the data needed for all sections' text (5.9kb
gzipped versus 173.8kb gzipped)."
Somewhat related, some experimenting with webpagetest.org has
suggested that disabling images on this page has a serious impact on
first paint (which we believe is due to too many simultaneous requests).
Given that ServiceWorker is here (in Chrome first, but hopefully in
others soon) I wrote a small proof of concept that lazy loads images
others soon) I wrote a small proof of concept that lazy loads images
to expose myself to this promising technology.
For those interested I've documented my idea here:
but basically what it does is:
1) intercepts network requests for HTML
2) rewrites the src and srcset attributes to data-src and data-srcset attributes
4) without JS the ServiceWorker doesn't run, so the web remains unbroken
(But as Jake Archibald points out, there are downsides to this approach.)
It doesn't quite work as a user script due to how scope works in
service workers, but if we want to use this in production we can use the
Service-Worker-Allowed header to allow a scope of '/', so there's no real
problem with doing this in production; we will just have to ensure we can
accurately measure it...
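To make steps 1 and 2 concrete, here is a minimal sketch of the rewrite, assuming a simple regex-based transform (function names are hypothetical; the actual proof of concept documented in the post may differ):

```javascript
// Rewrite src/srcset on <img> tags to data-src/data-srcset so the
// browser doesn't eagerly fetch images; page JS can restore them later.
function rewriteImageAttrs(html) {
  return html.replace(/<img([^>]*)>/g, function (match, attrs) {
    return '<img' + attrs.replace(/\s(src|srcset)=/g, ' data-$1=') + '>';
  });
}

// In the ServiceWorker (browser-only, shown as a comment since it
// won't run outside a SW context):
// self.addEventListener('fetch', function (event) {
//   if (event.request.mode === 'navigate') {       // HTML navigations only
//     event.respondWith(
//       fetch(event.request)
//         .then(function (resp) { return resp.text(); })
//         .then(function (body) {
//           return new Response(rewriteImageAttrs(body),
//             { headers: { 'Content-Type': 'text/html' } });
//         })
//     );
//   }
// });
```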
A more radical next step for ServiceWorkers would be to intercept
network requests for HTML and use an API to serve just the lead section.
This won't help first-ever loads from our users, but it might be
enough to get going quickly.
If we want to target that first page load we need to really rethink a
lot of our parser architecture.... fun times.
Would this be a good topic to bring up in January at the dev summit?
We're excited to present our latest update to the Wikipedia Android
app, available now on the Google Play store! Here are the major
highlights from this release:
- Link previews: tapping on a link will now show you a quick "preview" that
contains the first two sentences of the article, plus a swipeable gallery
of image thumbnails from the article. This lets you get the gist of the
link subject without losing your place in the article you were reading. You
can tap on the image thumbnails to view them full-screen, or tap on the
overflow menu (three dots) to save the article for later reading offline,
or share it with other apps. And of course, you can easily continue to the
linked article if you'd like to delve further into it.
- More options when pressing-and-holding links: In addition to opening a
link in a new tab, you can now save the linked article for offline reading,
share the link to another app, or copy the link to the clipboard.
And some further minor enhancements:
- Better search result ordering
- Improved handling and refreshing of saved pages
- Improved screen rotation behavior
- More Material Design components and styles
- Share link to the current article from main overflow menu
- Lots of bugs fixed and translations updated
Until next time, happy reading!
Product Owner (Android), Mobile Apps Team
Dear Greg, and anyone else who is involved in deployment,
This is a follow-up from Dan Duvall's talk today during the metrics
meeting about voting browser tests.
The Reading Web team, with the help of Dan Duvall, has made huge strides
in our QA infrastructure this quarter. The extensions Gather,
MobileFrontend, and now the new extension QuickSurveys are all running
browser tests on a per-commit basis. A selected set of MobileFrontend
@smoke tests (a subset of all the tests) runs in 15 minutes on every
commit, and the entire set of Gather browser tests runs in around
21 minutes. It marginally slows down getting patches deployed... but I
think this is a good thing. The results speak for themselves.
In the past month (August 4th-September 4th) only 3/33 builds failed
for MobileFrontend's daily smoke test build (all 3 due to issues
with the Jenkins infrastructure). For the full set of tests, only 10/33
failed in the Chrome daily build; 8 of these were due to tests
being flaky and needing improvement, or to issues with the Jenkins
infrastructure, and the other two were serious bugs [4,5] brought about by
work the performance team had been doing, which we were able to fix.
In Firefox there were only 6 failures and only 2 of these were
serious bugs, again caused by things outside MobileFrontend [4,6]. One
of these affected users with legacy browsers such as IE6. These were caught
prior to the daily builds when suddenly our MobileFrontend commits would
not merge.
Given this success:
1) I would like to see us run @integration tests on core, but I
understand that, given the number of bugs, this might not be feasible yet.
2) We should run @integration tests prior to deployments to the
cluster via the train, and communicate out when we have failures (and
make an explicit decision about whether to push broken code).
3) I'd like to see other extensions adopt browser test voting on their
extensions. Please feel free to reach out to me if you need help with
that. The more coverage across our extensions we have, the better.
We really have no excuse going forward to push broken code out to our
users and at the very least we need to be visible to each other when
we are deploying broken code. We have a responsibility to our users.
Thoughts? Reactions? Who's with me?!