Hi everyone,
We've released our latest update to the Wikipedia Android app[1][2],
available now on the Google Play store. This is mostly a maintenance
release that contains a few bug fixes and minor design updates since our
previous version.
More notably, as we announced earlier[3], this will be the last version of
the app that supports older Android devices, specifically devices older
than Ice Cream Sandwich MR1 (i.e. older than version 4.0.3, API 15). As we
continue to release future versions of the app, these older devices will
still be able to install our app from the Play Store, but the version that
they receive will be frozen at today's release. Newer devices will, of
course, continue to receive the latest version of the app, as before.
Best,
--
Dmitry Brant
Product Owner (Android), Mobile Apps Team
Wikimedia Foundation
https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
[1] https://play.google.com/store/apps/details?id=org.wikipedia&hl=en
[2]
https://releases.wikimedia.org/mobile/android/wikipedia/stable/wikipedia-2.…
[3] https://lists.wikimedia.org/pipermail/mobile-l/2015-August/009724.html
The reading web dashboard -
https://phabricator.wikimedia.org/dashboard/manage/125/ - has been
updated with panels that make it easy to find patches to work on (in
priority order) and code to review.
I'm using the "code to review" column as part of Gerrit Cleanup Day,
since it seems to be a more reliable mechanism for identifying what needs
reviewing.
Please add it to your bookmarks so we are all on the same page. I'm
going to be encouraging the Wikimedia reading web team to use this
frequently in our standups.
Happy coding/reviewing/submitting! :)
Hi all,
With this email we are starting a weekly look at our most important
readership metrics.
The main purpose is to raise awareness about how these are developing, call
out the impact of any unusual events in the preceding week (e.g. the media
promotion for the Android app this time) and facilitate thinking about core
metrics in general.
We will iterate on the presentation (e.g. to better take seasonality into
account, in particular by including year-over-year comparisons) and
eventually want to create dashboards for those metrics that are not
already available in that form. Feedback and discussion welcome.
(All numbers below are averages for September 7-13, 2015 unless otherwise
noted.)
Pageviews:
Total: 546 million/day
Context (April 2015-September 2015):
[image: Wikimedia daily pageviews, all vs. mobile (April 2015-).png]
After the drop in June, which by now looks not merely seasonal but likely
at least partially connected to the rollout of HTTPS-only access that
began on June 12, overall pageviews have recently been rising slightly again.
(Notably, the recent drop
<https://reportcard.wmflabs.org/graphs/unique_visitors> in comScore’s
numbers - also in their pageview estimates for Wikimedia sites, not
reproduced in that report card - is not consistent with our own traffic
data; these discrepancies are being looked into.)
Desktop: 58.8%
Mobile web: 39.9%
Apps: 1.2%
Global North ratio: 77.5% of total pageviews
Context (April 2015-September 2015):
[image: Wikimedia pageviews, Global South vs. Global North (April
2015-).png]
Unique app users
Android: 1.136 million/day
Context (January 2015-September 2015):
While this number is slightly higher than in the preceding week, there's
no noticeable effect yet from the media promotion for the Android app
(blog post
<https://blog.wikimedia.org/2015/09/10/wikipedia-app-new-navigation-features/>
on September 10, media coverage
<https://meta.wikimedia.org/wiki/Communications_committee/Press_clippings/20…>
on September 10, 11 and 13).
iOS: 290k/day
Context (January 2015-September 2015):
New app installations
Android: 40,168/day (Daily installs per device, from Google Play, September
7-11)
Context (September 2014-September 2015):
Unfortunately, the stats function of the Google Play developer console is
currently having issues, and data for recent days has been delayed. (They
“expect to resolve the issue shortly”, but the September 11 data only
became available this morning; as I'm sending this report out now, a
fuller assessment of the impact of the media campaign will need to wait.)
The first two days after the blog post went out on September 10 did not
yet show a marked increase:
20150911  40648
20150910  39862
20150909  39927
20150908  39981
20150907  40420
iOS: 5,262/day (download numbers from App Annie)
Context (September 2014-September 2015):
We seem to have a slight upward trend in recent weeks, but it's still way
below the download rate of a year ago.
----
For reference, the queries and source links used are listed below (access
is needed for each). I'll also see about uploading the charts to Commons.
hive (wmf)> SELECT SUM(view_count)/7000000 AS avg_daily_views_millions FROM
wmf.projectview_hourly WHERE agent_type = 'user' AND
CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0")) BETWEEN "2015-09-07"
AND "2015-09-13";
hive (wmf)> SELECT year, month, day,
CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0")) as date,
sum(IF(access_method <> 'desktop', view_count, null)) AS mobileviews,
SUM(view_count) AS allviews FROM wmf.projectview_hourly WHERE year=2015 AND
agent_type = 'user' GROUP BY year, month, day ORDER BY year, month, day
LIMIT 1000;
hive (wmf)> SELECT SUM(IF (FIND_IN_SET(country_code,
'AD,AL,AT,AX,BA,BE,BG,CH,CY,CZ,DE,DK,EE,ES,FI,FO,FR,FX,GB,GG,GI,GL,GR,HR,HU,IE,IL,IM,IS,IT,JE,LI,LU,LV,MC,MD,ME,MK,MT,NL,NO,PL,PT,RO,RS,RU,SE,SI,SJ,SK,SM,TR,VA,AU,CA,HK,MO,NZ,JP,SG,KR,TW,US')
> 0, view_count, 0))/SUM(view_count) FROM wmf.projectview_hourly WHERE
agent_type = 'user' AND
CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0")) BETWEEN "2015-09-07"
AND "2015-09-13";
hive (wmf)> SELECT access_method, SUM(view_count)/7 FROM
wmf.projectview_hourly WHERE agent_type = 'user' AND
CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0")) BETWEEN "2015-09-07"
AND "2015-09-13" GROUP BY access_method;
hive (wmf)> SELECT SUM(IF(platform = 'Android',unique_count,0))/7 AS
avg_Android_DAU_last_week, SUM(IF(platform = 'iOS',unique_count,0))/7 AS
avg_iOS_DAU_last_week FROM wmf.mobile_apps_uniques_daily WHERE
CONCAT(year,LPAD(month,2,"0"),LPAD(day,2,"0")) BETWEEN 20150907 AND
20150913;
hive (wmf)> SELECT CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0"))
as date, unique_count AS Android_DAU FROM wmf.mobile_apps_uniques_daily
WHERE platform = 'Android';
hive (wmf)> SELECT CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0"))
as date, unique_count AS iOS_DAU FROM wmf.mobile_apps_uniques_daily
WHERE platform = 'iOS';
https://play.google.com/apps/publish/?dev_acc=18062848292902787645#StatsPla…
https://www.appannie.com/dashboard/252257/item/324715238/downloads/?breakdo…
--
Tilman Bayer
Senior Analyst
Wikimedia Foundation
IRC (Freenode): HaeB
Hi Reading,
The Gerrit Cleanup Day is only two days away (Wed 23rd).
More info: https://phabricator.wikimedia.org/T88531
Do you feel prepared and all Reading team members know what to do?
If not, what are you missing and how can we help?
Some Gerrit queries for each team are listed under "Gerrit queries per
team/area" in https://phabricator.wikimedia.org/T88531
Are they helpful and a good start? Or do they miss some areas (or do
you have existing Gerrit team queries to use instead or to "integrate",
e.g. for parts of MediaWiki core you might work on)?
Also, which person will be the main team contact for the day (and
available in #wikimedia-dev on IRC) and help organize review work in
your areas, so other teams could easily reach out?
Some teams' plates are emptier than others', so they are wondering where
and how to lend a helping hand (and would like to find out in advance,
due to timezones).
Thanks for your help in making the Gerrit Cleanup Day a success!
andre
--
Andre Klapper | Wikimedia Bugwrangler
http://blogs.gnome.org/aklapper/
Greetings,
The reading team has been having a series of meetings as part of the
ongoing strategy process. We documented and clarified as many details as
possible here
<https://www.mediawiki.org/wiki/Reading/Strategy/Strategy_Process>, in
order to empower everyone to become part of the process while following
the same methodology.
For example, instead of saying "*The overall page view numbers are
declining and that's a problem that we need to solve*", by applying our
process the suggested statement is questioned: is this a problem in
itself, or is it the result of another problem? If we picked one possible
cause, what are our choices to solve the problem, and what possibilities
does each choice entail? What are the concerns with each possibility, and
what tests do we need to run to justify our concerns?
Sounds complicated? :-)
Not really. The key is to ask the right questions and always remain
focused on the initial problem.
In our own exercise, we identified one problem that manifests itself
across different indicators: our core system's lack of optimization for
emerging platforms, experiences, and communities.
The team cannot do this alone. We need more people to join our exercise:
please check the documentation
<https://www.mediawiki.org/wiki/Reading/Strategy/Strategy_Process>, make
yourself familiar with the process, and think of suggesting choices
<https://www.mediawiki.org/wiki/Reading/Strategy/Strategy_Process/Choices>,
generating possibilities
<https://www.mediawiki.org/wiki/Reading/Strategy/Strategy_Process/Possibilit…>,
and designing tests
<https://www.mediawiki.org/wiki/Reading/Strategy/Strategy_Process/Tests>.
Questions and comments are welcome on the talk page.
Let's get this done, together!
Happy weekend,
M
Cross-posting to mobile-l, as I think we have interested parties here
---------- Forwarded message ----------
From: Tomasz Finc <tfinc(a)wikimedia.org>
Date: Thu, Sep 17, 2015 at 12:26 PM
Subject: Announcing the launch of Maps
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Cc: Yuri Astrakhan <yastrakhan(a)wikimedia.org>, Max Semenik <
msemenik(a)wikimedia.org>
The Discovery Department has launched an experimental tile and static maps
service available at https://maps.wikimedia.org.
Using this service you can browse and embed map tiles into your own tools
using OpenStreetMap data. Currently we handle traffic from *.wmflabs.org
and *.wikivoyage.org (the referrer header must be either missing or set to
these values), but we would like to open it up to Wikipedia traffic if we
see enough use. Our hope is that this service fits the needs of the
numerous maps developers and tool authors who have asked for a WMF-hosted
tile service, with an initial focus on Wikivoyage.
We'd love for you to try our new service, experiment with writing tools
using our tiles, and give us feedback
<https://www.mediawiki.org/wiki/Talk:Maps>. If you've built a tool using
OpenStreetMap-based imagery, then switching to our service is a simple
drop-in replacement.
Getting started is as easy as
https://www.mediawiki.org/wiki/Maps#Getting_Started
How can you help?
* Adapt your labs tool to use this service - for example, use the Leaflet
JS library and point it to https://maps.wikimedia.org
* File bugs in Phabricator
<https://phabricator.wikimedia.org/tag/discovery-maps-sprint/>
* Provide us feedback to help guide future features
<https://www.mediawiki.org/wiki/Talk:Maps>
* Improve our map style <https://github.com/kartotherian/osm-bright.tm2>
* Improve our data extraction
<https://github.com/kartotherian/osm-bright.tm2source>
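For the first suggestion, a minimal Leaflet setup pointing at the tile service might look like the sketch below. Assumptions: Leaflet's global `L` is already loaded on the page, the `{z}/{x}/{y}` template follows the tile URL scheme described in this mail, and `osm-intl` is the source name used in the static-map example further down; `createWikimediaMap` is a hypothetical helper name.

```javascript
// Hypothetical Leaflet setup against the experimental Wikimedia tile service.
const WIKIMEDIA_TILE_TEMPLATE = 'https://maps.wikimedia.org/osm-intl/{z}/{x}/{y}.png';

function createWikimediaMap(containerId) {
  // Center on an arbitrary point at zoom 4; adjust for your tool.
  const map = L.map(containerId).setView([42, -3.14], 4);
  L.tileLayer(WIKIMEDIA_TILE_TEMPLATE, {
    maxZoom: 18,
    attribution: 'Map data &copy; OpenStreetMap contributors',
  }).addTo(map);
  return map;
}
```

This is a drop-in change for existing Leaflet tools: only the tile URL template needs to be swapped.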
Based on usage and your feedback, the Discovery team
<https://www.mediawiki.org/wiki/Discovery> will decide how to proceed.
We could add more data sources (both vector and raster), work on additional
services such as static maps or geosearch, work on supporting all
languages, switch to client-side WebGL rendering, etc. Please help us
decide what is most important.
https://www.mediawiki.org/wiki/Maps has more about the project and related
Maps work.
== In Depth ==
Tiles are served from https://maps.wikimedia.org, but can only be accessed
from subdomains of *.wmflabs.org and *.wikivoyage.org. Kartotherian can
produce tiles as images (png) and as raw vector data (PBF Mapbox format
or json):
.../{source}/{zoom}/{x}/{y}[@{scale}x].{format}
Additionally, Kartotherian can produce snapshot (static) images of any
location, scaling, and zoom level with
.../{source},{zoom},{lat},{lon},{width}x{height}[@{scale}x].{format}.
For example, to get an image centered at 42,-3.14, at zoom level 4, size
800x600, use https://maps.wikimedia.org/img/osm-intl,4,42,-3.14,800x600.png
(copy/paste the link, or else it might not work due to referrer
restriction).
Do note that the static feature is highly experimental right now.
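The two URL schemes can be assembled programmatically. A small sketch (the function names are hypothetical; the patterns are the ones documented above, with the optional scale suffix written as e.g. `@2x`):

```javascript
// Build Kartotherian tile and static-map URLs from the patterns above.
const MAPS_BASE = 'https://maps.wikimedia.org';

// .../{source}/{zoom}/{x}/{y}[@{scale}x].{format}
function tileUrl({ source, zoom, x, y, scale, format = 'png' }) {
  const scalePart = scale ? `@${scale}x` : '';
  return `${MAPS_BASE}/${source}/${zoom}/${x}/${y}${scalePart}.${format}`;
}

// .../{source},{zoom},{lat},{lon},{width}x{height}[@{scale}x].{format}
function staticMapUrl({ source, zoom, lat, lon, width, height, scale, format = 'png' }) {
  const scalePart = scale ? `@${scale}x` : '';
  return `${MAPS_BASE}/img/${source},${zoom},${lat},${lon},${width}x${height}${scalePart}.${format}`;
}

console.log(staticMapUrl({ source: 'osm-intl', zoom: 4, lat: 42, lon: -3.14, width: 800, height: 600 }));
// → https://maps.wikimedia.org/img/osm-intl,4,42,-3.14,800x600.png
```

The last line reproduces the example static-map URL given above.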
We would like to thank WMF Ops (especially Alex Kosiaris, Brandon Black,
and Jaime Crespo), the Services team, the OSM community and engineers, and
the Mapnik and Mapbox teams. The project would not have been completed so
fast without you.
Thank You
--tomasz
Sending this to mobile-l in case you don't follow wikitech-l.
-Adam
---------- Forwarded message ----------
From: Adam Baso <abaso(a)wikimedia.org>
Date: Thu, Sep 17, 2015 at 10:58 AM
Subject: A couple videos: Parsoid with Reading Engineering; Reading
Showcase 20150824
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Hi all,
Just wanted to share a couple recent videos. Enjoy!
== Parsoid with Reading ==
https://youtu.be/3WJID_WC7BQ?t=14
C. Scott Ananian and Subbu Sastry from the Wikimedia Foundation provide an
overview of Parsoid, a Wikimedia technology that translates between HTML
and wikitext (in both directions), producing rich annotated output markup
that allows translation layers and other clients (e.g., Wikipedia-related
technology) to query elements from wiki content more specifically.
You may have caught some of this material at Wikimania, or maybe this is
your first time. In any event, it's good stuff! This was an interactive
session with questions from Wikimedia Reading Engineering.
== Reading Showcase 20150824 ==
https://youtu.be/F9ExzUaRI0k
About every four weeks the Wikimedia Reading department gets together to
showcase experiments, works in progress, and the like. This is the
session from 24 August 2015.
-Adam
Sam asked me to write up my recent adventures with ServiceWorkers and
making requests for MediaWiki content super super fast so all our
lovely users can access information quicker. Right now we're trying to
make the mobile site ridiculously fast by using new shiny standard web
technologies.
The biggest issue we have on the mobile site right now is that we ship a
lot of content - HTML and images - because we ship the same content as on
desktop. On desktop it's not really a problem from a performance
perspective, but it may be an issue from a download perspective if you
have some kind of data limit on your broadband and you are addicted to
Wikipedia.
The problem, however, is that on mobile, connection speeds are not quite
up to desktop standards. To take an example, the Barack Obama article
contains 102 image tags and 186KB of gzipped HTML, which decompresses to
about 1MB of HTML. If you're on your mobile phone just to look up his
place of birth (which is in the lead section) or to see the County results
of the 2004 U.S. Senate race in Illinois [1], that's a lot of unnecessary
stuff you are forced to load. You have to load all the images and all the
text! Ouch!
Gilles D said a while back [2] "The Barack Obama article might be a
bit of an extreme example due to its length, but in that case the API
data needed for section 0's text + the list of sections is almost 30
times smaller than the data needed for all sections's text (5.9kb
gzipped versus 173.8kb gzipped)."
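The comparison Gilles describes comes from requesting only section 0's text plus the section list instead of the full article. As a hedged sketch, such a request could be built against MobileFrontend's action=mobileview API (the parameter names below follow that module; `leadSectionUrl` is a hypothetical helper):

```javascript
// Build an API URL that fetches only the lead section (section 0) of an
// article, plus the list of sections, instead of the full page HTML.
function leadSectionUrl(title) {
  const params = new URLSearchParams({
    action: 'mobileview',
    page: title,
    sections: '0',          // only the lead section
    prop: 'text|sections',  // section 0's text + the list of sections
    format: 'json',
  });
  return `https://en.wikipedia.org/w/api.php?${params}`;
}
```

A client that starts from this response can fetch the remaining sections lazily, which is where the ~30x difference quoted above comes from.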
Somewhat related, some experimenting with webpagetest.org has
suggested that disabling images on this page has a serious impact on
first paint (which we believe is due to too many simultaneous
connections) [3,4]
Given that ServiceWorker is here (in Chrome first [5] but hopefully
others soon) I wrote a small proof of concept that lazy loads images
to expose myself to this promising technology.
For those interested I've documented my idea here:
https://en.m.wikipedia.org/wiki/User:Jdlrobson/lazyloader
but basically what it does is:
1) Intercept network requests for HTML
2) Rewrite the src and srcset attributes to data-src and data-srcset attributes
3) Use JavaScript to lazy-load images when they appear in the viewport
4) Without JS the ServiceWorker doesn't run, so the web remains unbroken
(But as Jake Archibald points out though there are downsides to this
approach [6].)
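A minimal sketch of steps 2 and 3: in the actual ServiceWorker the rewrite would run on HTML responses intercepted in the fetch handler, but the transform itself can be shown as a pure string function. The names `deferImageAttributes` and `loadDeferredImage` are hypothetical, not from the proof of concept.

```javascript
// Step 2: rewrite src/srcset to data-src/data-srcset so the browser
// doesn't fetch images eagerly. Only attributes inside <img ...> tags
// are touched.
function deferImageAttributes(html) {
  return html.replace(/<img\b[^>]*>/g, (tag) =>
    tag.replace(/\s(src|srcset)=/g, ' data-$1=')
  );
}

// Step 3 (client side): when an image scrolls into the viewport, copy
// the data-* attributes back so the browser loads it.
function loadDeferredImage(img) {
  if (img.dataset.src) { img.src = img.dataset.src; }
  if (img.dataset.srcset) { img.srcset = img.dataset.srcset; }
}
```

Because the rewritten attributes are plain data-* attributes, a page served this way still contains all the original image URLs; only the loading is deferred.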
It doesn't quite work as a user script due to how scope works in service
workers, but if we want to use this in production we can use a header [7]
to allow scope: '/', so there's no real problem there - although we will
have to ensure we can accurately measure the impact... [8]
A more radical next step for ServiceWorkers would be to intercept network
requests for HTML and use an API to serve just the lead section [9]. This
won't help our users' first-ever loads, but it might be enough to get
going quickly.
If we want to target that first page load we need to really rethink a
lot of our parser architecture.... fun times.
Would this be a good topic to bring up in January at the dev summit?
[1] https://en.m.wikipedia.org/wiki/Barack_Obama#/media/File:2004_Illinois_Sena…
[2] https://phabricator.wikimedia.org/T97570
[3] https://phabricator.wikimedia.org/T110615
[4] https://phabricator.wikimedia.org/T105365#1477762
[5] https://jakearchibald.com/2014/using-serviceworker-today/
[6] https://twitter.com/jaffathecake/status/644168091216310273
[7] https://gerrit.wikimedia.org/r/#/c/219960/8/includes/resourceloader/Resourc…
[8] https://phabricator.wikimedia.org/T112588
[9] https://phabricator.wikimedia.org/T100258