Hi everyone,
*tl;dr: We'll be stripping all content contained inside brackets from the
first sentence of articles in the Wikipedia app.*
The Mobile Apps Team is focused on making the app a beautiful and engaging
reading experience, and on supporting use cases like quickly looking
something up to find out what it is. Unfortunately, there are several
aspects of Wikipedia at present that are actively detrimental to that goal.
One example is the lead sentences.
As mentioned in the other thread on this matter
<https://lists.wikimedia.org/pipermail/mobile-l/2015-March/008715.html>,
lead sentences are poorly formatted and contain information that is
detrimental to quickly looking up a topic. The team did a quick audit
<https://docs.google.com/a/wikimedia.org/spreadsheets/d/1BJ7uDgzO8IJT0M3UM2q…>
of the information available inside brackets in the first sentences;
typically it is pronunciation information, which is probably better placed
in the infobox rather than breaking up the first sentence. The other
problem is that this information was typically inserted and previewed on a
platform where space is not at a premium, and that calculation is different
on mobile devices.
In order to better serve the quick lookup use case, the team has reached
the decision to strip anything inside brackets in the first sentence of
articles in the Wikipedia app.
Stripping content is not a decision to be made lightly. People took the
time to write it, and that should be respected. We realise this is
controversial. That said, it's the opinion of the team that the problem is
pretty clear: this content is not optimised for users quickly looking
things up on mobile devices at all, and will take a long time to solve
through alternative means. A quicker solution is required.
The screenshots below are mockups of the before and after of the change.
These are not final; I just put them together quickly to illustrate what
I'm talking about.
- Before: http://i.imgur.com/VwKerbv.jpg
- After: http://i.imgur.com/2A5PLmy.jpg
If you have any questions, let me know.
Thanks,
Dan
--
Dan Garry
Associate Product Manager, Mobile Apps
Wikimedia Foundation
There are currently plans to deploy the Blueprint skin on mediawiki.org
<https://phabricator.wikimedia.org/T93613>. Besides my work on UI
Standardization I'll also continue to work on the skin. I think Blueprint
should be on that list.
Best,
Volker
On Thu, Jul 23, 2015 at 6:25 PM, Adam Baso <abaso(a)wikimedia.org> wrote:
> Would you please share this on the list?
>
> On Thursday, July 23, 2015, Volker Eckl <veckl(a)wikimedia.org> wrote:
>
>> Hi Adam,
>> there are currently plans on deploying skin Blueprint on mediawiki.org
>> <https://phabricator.wikimedia.org/T93613>. Besides my work on UI
>> Standardization I'll also continue to work on Blueprint. Although UI
>> Standardization is a "special case", formally we belong to Reading and
>> therefore I think Blueprint should be on that list.
>>
>>
>> Best,
>> Volker
>>
>> On Mon, Jul 20, 2015 at 10:58 PM, Adam Baso <abaso(a)wikimedia.org> wrote:
>>
>>> Hi all -
>>>
>>> I've been reviewing a list of extensions with Reading Engineering and
>>> Reading Infrastructure leads - props to James Forrester for promoting this
>>> discussion. Here's a list of extensions that we believe currently fall under
>>> Reading for triage (n.b., not all extensions will get active development
>>> support).
>>>
>>> https://www.mediawiki.org/wiki/User:ABaso_(WMF)/Extension_Responsibility
>>>
>>> Presuming no major issues with this, I think we should move the page to
>>> mw:Reading/Extension_Responsibility.
>>>
>>> One important outstanding question:
>>>
>>> Is MultimediaViewer appropriate for Reading given its
>>> consumption-oriented nature? Or is this actually better suited to Editing
>>> (where there exists a team named Multimedia)?
>>>
>>> Some other notes:
>>>
>>> * For skins with low utilization, we in time probably should coordinate
>>> handover to interested community members (or discuss with community members
>>> practical approaches for EOL).
>>>
>>> * Regarding the Nostalgia skin, we believe it's only used on
>>> https://nostalgia.wikipedia.org/wiki/HomePage, so maintenance would be
>>> updating for breaking skin changes or security issues only.
>>>
>>> * JsonConfig, ZeroBanner, ZeroPortal - we'll need to examine this more
>>> closely. Yuri (who has deepest PHP knowledge on extensions) is now over in
>>> Discovery, Jeff (JS & Lua) is in Reading, and now I'm managing instead of
>>> writing lots of code.
>>>
>>> * Collection probably belongs in Services
>>>
>>>
>>>
>>> _______________________________________________
>>> Mobile-l mailing list
>>> Mobile-l(a)lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/mobile-l
>>>
>>>
>>
+ mobile-l
Here's a rough summary of the discussion based on my understanding:
*Problem and background:*
While most parameters we pass to the PHP API action=mobileview endpoint are
constant, there are a couple of parameters which depend either on device
dimensions or on user preferences.
The questions revolve around trading off caching of requests (avoiding
too much request variance) against processing on clients when we move
to RESTBase services for page content. We want to be able to take advantage
of caching both on the edge cache side (Varnish) and on the
server side (RESTBase stores the results of each page revision).
In the first phase of using RESTBase, it won't pre-generate the results when
a new page revision gets created. Instead, it will generate and save the
results on demand. In a later phase we aim to get pre-generation enabled.
*1) leadImageWidth*: The Android app provides the desired lead image width
and passes that to the mobileview action API as "thumbsize".[1] The Android
app provides only one of three possible values: 640, 800, or 1024.[3] It only
uses the URL of the lead image, not the dimensions, since it gets those
when the actual image has finished downloading. The iOS app currently uses
"thumbwidth", which is somewhat similar to "thumbsize" but has its own
pros and cons.[4]
*2) noimages*: In the Android app settings, the user can choose not to show
any images. (The iOS app doesn't have this setting.) When this is the case
we add a noimages=true query parameter to the PHP mobileview request.[1]
The payload then has the <img> tags replaced with <span> tags. BTW, if the
client specifies noimages=true then the value of leadImageWidth does not
matter; in fact, we could then omit the whole lead image info from the
result as well.
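To make the two variable parameters concrete, here is a sketch of how such a mobileview request could be built (Python purely for illustration, not the apps' actual client code; endpoint and parameter names follow the example request in footnote [1]):

```python
from urllib.parse import urlencode

# Sketch of building a mobileview request with the two variable
# parameters discussed above.
def mobileview_url(page, thumbsize=800, noimages=False):
    params = {
        "action": "mobileview",
        "format": "json",
        "page": page,
        "thumbsize": thumbsize,  # one of the width buckets: 640, 800, 1024
    }
    if noimages:
        # the server then replaces <img> tags with <span> tags in the payload
        params["noimages"] = "true"
    return "https://en.m.wikipedia.org/w/api.php?" + urlencode(params)
```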
It's unclear to me what percentage of users actually use this setting.
*Possible solution alternatives:*
*1) leadImageWidth: *
*1A)* If the client uses a constant value, say 800px, for the thumbsize
parameter of action=mobileview, then it could replace the /800px-
portion of the resulting URL with the desired width, as long as the URL
structure stays predictable.[2] If the string replacement fails we could
still fall back to the 800px URL.
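A minimal sketch of this replacement, including the fallback (Python for illustration; the apps would do this in their own languages):

```python
import re

# Sketch of option 1A: rewrite the width bucket in a thumb URL to the
# desired width, falling back to the original URL if the /NNNpx- pattern
# isn't found (i.e. the URL structure changed).
def resize_thumb_url(url, width):
    new_url, n = re.subn(r"/\d+px-", "/%dpx-" % width, url, count=1)
    return new_url if n == 1 else url
```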
*1B)* The new RESTBase API could provide an array of leadImage URL values
to the client (instead of the thumb JSON object).
*2) noimages: *
*2A)* The clients could replace the <img> tags with <span> tags, to emulate
what the noimages flag of mobileview does. This would help caching by
reducing variability. OTOH this puts more burden on clients, since DOM
transformations are something clients want to avoid, particularly in this
case, where the setting is usually enabled because of bandwidth or CPU
constraints on the client side.
*2B)* We could provide a noimages=true query parameter with RESTBase as well. We
could keep this uncached or implement this as a transform on the cached
base version (ideally in the service).
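For option 2A, the client-side transformation could be sketched like this (a regex is a stand-in for what would really be a DOM operation on the clients):

```python
import re

# Sketch of option 2A: emulate mobileview's noimages flag on the client by
# swapping <img ...> tags for empty <span> placeholders. A real client
# would use a proper HTML/DOM API; the regex is only for illustration.
def strip_images(html):
    return re.sub(r"<img\b[^>]*>", "<span></span>", html)
```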
Thoughts, comments?
Cheers,
Bernd
[1]
https://en.m.wikipedia.org/w/api.php?action=mobileview&format=json&page=CER…
[2] Example:
"//
upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/800px-Ce…"
would become "//
upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/1024px-C…
".
[3] We don't want to allow arbitrary values; instead we follow certain bucket
sizes to improve the chances of cache hits and reduce the burden on servers.
Width buckets:
https://git.wikimedia.org/blob/mediawiki%2Fextensions%2FMultimediaViewer.git/f9e7bae91a8032fa13fc68114a0d57d190ea77f9/resources%2Fmmv%2Fmmv.ThumbnailWidthCalculator.js#L69
[4] The Android app wanted to move to thumbwidth as well but iOS
encountered issues with svg files: https://phabricator.wikimedia.org/T91144
+ https://phabricator.wikimedia.org/T98528
On Sun, Jul 26, 2015 at 11:01 PM, Bernd Sitzmann <bernd(a)wikimedia.org>
wrote:
>
>>
>> Correct me if I'm wrong, but the actual JPEG / PNG of the (lead) image
>>> will not be sent together with the first response, right? If so, simply
>>> adding the JSON with the three sizes adds an overhead of 100 or so bytes,
>>> while allowing us to cache/store the response correctly.
>>
>>
>> Yes, you are correct. The actual image is downloaded in a separate
>> request. This is just to get the URL of the lead image. Earlier I thought
>> we would also use the dimensions provided in the JSON output, but looking
>> at the Android code I don't see this used.
>> I'm now thinking that we could just provide one standard value (e.g.
>> 800px) for the mobileview request, and then the client could just adjust
>> the lead image URL.
>> Example:
>> "//upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/
>> *800px*-Cernfounders.png" would become "//
>> upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/
>> *1024px*-Cernfounders.png".
>> While this seems a bit hacky by not following hypermedia principles, it
>> would also avoid the thumbwidth issues.[1][2]
>>
>> Bernd
>>
>> [1] https://phabricator.wikimedia.org/T91144
>> [2] https://phabricator.wikimedia.org/T98528
>>
>>
>> On Sat, Jul 25, 2015 at 4:37 PM, Gabriel Wicke <gwicke(a)wikimedia.org>
>> wrote:
>>
>>>
>>>
>>> On Sat, Jul 25, 2015 at 5:50 AM, Marko Obrovac <mobrovac(a)wikimedia.org>
>>> wrote:
>>>
>>>> Correct me if I'm wrong, but the actual JPEG / PNG of the (lead) image
>>>> will not be sent together with the first response, right? If so, simply
>>>> adding the JSON with the three sizes adds an overhead of 100 or so bytes,
>>>> while allowing us to cache/store the response correctly.
>>>>
>>>> As for the options, I'd go with (1) as well. Mostly because external
>>>> requests will not be POSTs, but GETs, so we would still need some magic
>>>> translation in RESTBase hashing the query parameters and deducing the exact
>>>> storage request. I might be wrong here as well, though.
>>>>
>>>> Perhaps we should consider option (1a): RESTBase sends the request
>>>> together with the HTML to mangle right away. Hm, that looks more like
>>>> option (2) though, and still needs a specialised RESTBase module.
>>>>
>>>
>>> 2) should work without a special module once the post_request_storage
>>> stanza is implemented. We can point that to the main content storage
>>> bucket, and get the implicit data fetching that way.
>>>
>>>
>>>>
>>>> Cheers,
>>>> Marko
>>>>
>>>> On 24 July 2015 at 23:53, Gabriel Wicke <gwicke(a)wikimedia.org> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Fri, Jul 24, 2015 at 2:39 PM, Bernd Sitzmann <bernd(a)wikimedia.org>
>>>>> wrote:
>>>>>
>>>>>> Option 1 sounds interesting to me.
>>>>>> Not sure I fully understand option 2. (Sounds like pre-generation to
>>>>>> me.)
>>>>>>
>>>>>
>>>>> Yes, it would normally use the pre-generated content, but generate &
>>>>> save it on demand if needed. That's the case in both variants, though. Only
>>>>> difference is recursive GET back to RESTBase vs. RB POSTing the needed
>>>>> content directly.
>>>>>
>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Bernd
>>>>>>
>>>>>> On Fri, Jul 24, 2015 at 3:22 PM, Gabriel Wicke <gwicke(a)wikimedia.org>
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jul 24, 2015 at 2:05 PM, Bernd Sitzmann <bernd(a)wikimedia.org
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> Transforms on the cached base version sounds interesting for both
>>>>>>>> cases. How does that work?
>>>>>>>>
>>>>>>>
>>>>>>> I see three main options:
>>>>>>>
>>>>>>> 1) the app service provides a GET end point and, when called with
>>>>>>> the custom parameters, fetches the base version from RESTBase & returns a
>>>>>>> patched version corresponding to the custom settings. RESTBase just proxies
>>>>>>> the custom entry point.
>>>>>>>
>>>>>>> 2) is basically the same, except that RESTBase POSTs the base
>>>>>>> version to the service. We are just starting work on T105975 which might
>>>>>>> give us a way to do this without writing a custom module.
>>>>>>>
>>>>>>> 3) is to do the post-processing in a custom RESTBase module. I'm not
>>>>>>> in favor of this unless absolutely needed, which I don't think is the case
>>>>>>> here.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Bernd
>>>>>>>>
>>>>>>>> On Fri, Jul 24, 2015 at 2:48 PM, Gabriel Wicke <
>>>>>>>> gwicke(a)wikimedia.org> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Jul 24, 2015 at 1:28 PM, Bernd Sitzmann <
>>>>>>>>> bernd(a)wikimedia.org> wrote:
>>>>>>>>>
>>>>>>>>>> I tend to agree and I think we should try to take advantage of
>>>>>>>>>> the storage & caching capabilities as much as possible. Not just
>>>>>>>>>> on our servers but also on the edge-caches.
>>>>>>>>>>
>>>>>>>>>> I'd venture a guess that the *noimages* flag is rarely used
>>>>>>>>>> (<5%). Dmitry, do we have any data about the use of "Show images"
>>>>>>>>>> preference being turned off? If not then that would be another good one for
>>>>>>>>>> EL. I'm going out on a limb here saying that if my guess is correct then we
>>>>>>>>>> could potentially replace the <img> tags with the respective <span> tags to
>>>>>>>>>> emulate the noimages flag on the clients. It's not ideal since the <img>
>>>>>>>>>> tags have a bigger payload and post-processing the payload on the clients
>>>>>>>>>> is something we would like to avoid. It's really a tradeoff between caching
>>>>>>>>>> and pure payload size.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> We could keep this uncached or implement this as a transform on
>>>>>>>>> the cached base version (ideally in the service).
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> The *leadImageWidth* has currently three possible values:
>>>>>>>>>> * 640px for phones,
>>>>>>>>>> * 800px for 7" tablets/phablets,
>>>>>>>>>> * 1024px for 10" tablets.
>>>>>>>>>> So, it's not completely variable. We try to take the image size
>>>>>>>>>> buckets[1] into account to help the servers with caching. Here the
>>>>>>>>>> distribution is not so clear-cut. I'm not sure if there is a reasonable
>>>>>>>>>> default value. But the difference in the payload would be very minor. This
>>>>>>>>>> only affects the thumb JSON object at the top level of the JSON payload.
>>>>>>>>>>
>>>>>>>>>> Examples:
>>>>>>>>>> 640[2]:
>>>>>>>>>> "thumb": {
>>>>>>>>>> "url": "//
>>>>>>>>>> upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/635px-Ce…
>>>>>>>>>> ","width": 635,"height": 640},
>>>>>>>>>> 800:
>>>>>>>>>> "thumb": {"url": "//
>>>>>>>>>> upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/794px-Ce…
>>>>>>>>>> ","width": 794,"height": 800},
>>>>>>>>>> 1024:
>>>>>>>>>> "thumb": {"url": "//
>>>>>>>>>> upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/1017px-C…
>>>>>>>>>> ","width": 1017,"height": 1024},
>>>>>>>>>>
>>>>>>>>>> So, I'm thinking that before we enable pre-generation we could drop
>>>>>>>>>> the parameters and do something else instead, like:
>>>>>>>>>> Make "thumb" an (associative?) array so we always have all three
>>>>>>>>>> values included. I'm not a big fan of it, since this means we would
>>>>>>>>>> need to diverge the parsing code between action=mobileview and
>>>>>>>>>> RESTBase further, and we would again have more data in the payload
>>>>>>>>>> than the client is actually using.
>>>>>>>>>>
>>>>>>>>>> To summarize, I think we have some alternatives we could consider
>>>>>>>>>> but they come with a price.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> You could also include both the old & new dimensions in the PHP response
>>>>>>>>> for a transition period. That way you could eventually phase out the
>>>>>>>>> top-level width & height. Since the urls are all the same apart from the
>>>>>>>>> size, you could perhaps also use something more compact like
>>>>>>>>>
>>>>>>>>> thumb: {
>>>>>>>>> baseURL: "//
>>>>>>>>> upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/
>>>>>>>>> <http://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/6…>
>>>>>>>>> ",
>>>>>>>>> 640: {
>>>>>>>>> w: 635,
>>>>>>>>> h: 640,
>>>>>>>>> url: "635px-Cernfounders.png
>>>>>>>>> <http://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/6…>
>>>>>>>>> "
>>>>>>>>> },
>>>>>>>>> 800: {
>>>>>>>>> w: 794,
>>>>>>>>> h: 800,
>>>>>>>>> url: "794px-Cernfounders.png
>>>>>>>>> <http://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/6…>
>>>>>>>>> "
>>>>>>>>> },
>>>>>>>>> 1024: {
>>>>>>>>> w: 1017,
>>>>>>>>> h: 1024,
>>>>>>>>> url: "1017px-Cernfounders.png
>>>>>>>>> <http://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/6…>
>>>>>>>>> "
>>>>>>>>> }
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> or, if you really wanted to go super compact at the cost of
>>>>>>>>> readability:
>>>>>>>>>
>>>>>>>>> ["//
>>>>>>>>> upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/
>>>>>>>>> <http://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/6…>
>>>>>>>>> {size}px-Cernfounders.png
>>>>>>>>> <http://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Cernfounders.png/6…>
>>>>>>>>> ",
>>>>>>>>> [640,635,640,635],
>>>>>>>>> [800,794,800,794],
>>>>>>>>> [1024,1017,1024,1017]
>>>>>>>>> ]
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Bernd
>>>>>>>>>>
>>>>>>>>>> [1]
>>>>>>>>>> https://git.wikimedia.org/blob/mediawiki%2Fextensions%2FMultimediaViewer.gi…
>>>>>>>>>> [2]
>>>>>>>>>> https://en.m.wikipedia.org/w/api.php?action=mobileview&format=json&page=CER…
>>>>>>>>>> *size*=640
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Jul 24, 2015 at 11:13 AM, Gabriel Wicke <
>>>>>>>>>> gwicke(a)wikimedia.org> wrote:
>>>>>>>>>>
>>>>>>>>>>> This does complicate the storage & caching story. We likely
>>>>>>>>>>> won't want to pre-generate all permutations for each revision, which means
>>>>>>>>>>> that request performance will be worse than stored content.
>>>>>>>>>>>
>>>>>>>>>>> In the short term we can deploy this without storage and
>>>>>>>>>>> caching, but for the longer term we should really figure out a way to make
>>>>>>>>>>> this efficient. Could some of this processing be done on the client,
>>>>>>>>>>> perhaps by running a string replacement on HTML?
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Jul 24, 2015 at 7:27 AM, Marko Obrovac <
>>>>>>>>>>> mobrovac(a)wikimedia.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Bernd,
>>>>>>>>>>>>
>>>>>>>>>>>> On 24 July 2015 at 08:07, Bernd Sitzmann <bernd(a)wikimedia.org>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Marko,
>>>>>>>>>>>>>
>>>>>>>>>>>>> There are a couple of parameters we pass to the mobileview
>>>>>>>>>>>>> action which depend either on device dimensions or on user preferences.
>>>>>>>>>>>>> * leadImageWidth: We calculate the desired lead image width to
>>>>>>>>>>>>> download on the client and pass that to the mobileview action API as
>>>>>>>>>>>>> "thumbsize".[1]
>>>>>>>>>>>>> * noimages: The user can choose not to download any images.
>>>>>>>>>>>>> When this is the case we add a "noimages": true flag to the PHP request.[1] Then
>>>>>>>>>>>>> the payload returns no <img> tags.
>>>>>>>>>>>>>
>>>>>>>>>>>>> In the future there might be a few more. I could also see
>>>>>>>>>>>>> something similar to leadImageWidth, where we calculate the best size of
>>>>>>>>>>>>> images or videos to display.
>>>>>>>>>>>>>
>>>>>>>>>>>>> What do you recommend to accomplish the equivalent for
>>>>>>>>>>>>> RESTBase endpoints?
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> What you are describing seems like complementary information,
>>>>>>>>>>>> so I would recommend providing them as query parameters, with the
>>>>>>>>>>>> MobileApps service having some (sane) defaults in case these are missing.
>>>>>>>>>>>> The public API call would then be something like: https://
>>>>>>>>>>>> (en|m).
>>>>>>>>>>>> wikipedia.org/api/rest_v1/page/mobile-html-full/Foobar?thumbsize=200&noimag…
>>>>>>>>>>>> .
>>>>>>>>>>>>
>>>>>>>>>>>> Note that RESTBase needs the explicit list of query params and
>>>>>>>>>>>> headers that can be forwarded to back-end services, so if/when you do
>>>>>>>>>>>> implement this in the apps service, please notify us (phab, mail, irc, etc)
>>>>>>>>>>>> or try to include them in the RESTBase config concerning MobileApps[1]
>>>>>>>>>>>> yourselves.
>>>>>>>>>>>>
>>>>>>>>>>>> Cheers,
>>>>>>>>>>>> Marko
>>>>>>>>>>>>
>>>>>>>>>>>> [1]
>>>>>>>>>>>> https://github.com/wikimedia/restbase/blob/master/specs/mediawiki/v1/mobile…
>>>>>>>>>>>>
>>>>>>>>>>>> P.S. We are making really good progress on the deployment! Hope
>>>>>>>>>>>> to see it live soon :)
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Bernd
>>>>>>>>>>>>>
>>>>>>>>>>>>> [1]
>>>>>>>>>>>>> https://en.m.wikipedia.org/w/api.php?action=mobileview&format=json&page=CER…
>>>>>>>>>>>>> *noimages=true&thumbsize=640*
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Marko Obrovac, PhD
>>>>>>>>>>>> Senior Services Engineer
>>>>>>>>>>>> Wikimedia Foundation
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Gabriel Wicke
>>>>>>>>>>> Principal Engineer, Wikimedia Foundation
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Gabriel Wicke
>>>>>>>>> Principal Engineer, Wikimedia Foundation
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Gabriel Wicke
>>>>>>> Principal Engineer, Wikimedia Foundation
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Gabriel Wicke
>>>>> Principal Engineer, Wikimedia Foundation
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Marko Obrovac, PhD
>>>> Senior Services Engineer
>>>> Wikimedia Foundation
>>>>
>>>
>>>
>>>
>>> --
>>> Gabriel Wicke
>>> Principal Engineer, Wikimedia Foundation
>>>
>>
>>
>
Hi Mobilizers,
1. Do you think it would make sense to develop a Wikipedia app for Kindle,
or to encourage Amazon to do so or financially sponsor its development?
2. Same questions for Windows and Microsoft, especially considering the
Windows 10 launch.
Thanks,
Pine
Moving to mobile-l, and cc'ing Search & Discovery.
---------- Forwarded message ----------
From: Dmitry Brant <dbrant(a)wikimedia.org>
Date: Wed, Jul 29, 2015 at 3:38 PM
Subject: "Morelike" suggestions - the results are in!
To: Internal communication for WMF Reading team <
reading-wmf(a)lists.wikimedia.org>
Hi all,
For the last few weeks, we've had an A/B test in the Android app where we
measure user engagement with the "read more" suggestions that we show at
the bottom of each article. We display three suggestions for further
reading, generated by either (A) a plain full-text search query based on the
title of the current article, or (B) a query using the "morelike" feature
in CirrusSearch.
And the winner is... (perhaps not entirely surprisingly) "morelike"! Users
who saw suggestions based on "morelike" were over 20% more likely to click
on one of the suggestions.
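For reference, a "morelike" request against the public search action API can be sketched as follows (parameter names follow the action API's list=search module; treat this as an illustration rather than the app's exact query):

```python
from urllib.parse import urlencode

# Sketch of a CirrusSearch "morelike" query via the public action API.
def read_more_url(title, limit=3):
    params = {
        "action": "query",
        "format": "json",
        "list": "search",
        "srsearch": "morelike:" + title,  # CirrusSearch "more like this"
        "srlimit": limit,                 # the app shows three suggestions
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)
```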
Here's a quick analysis and chart of the data from the last 10 days:
https://docs.google.com/spreadsheets/d/1BFsrAcPgexQyNVemmJ3k3IX5rtPvJ_5vdYOyGgS5R6Y/edit?usp=sharing
-Dmitry
Hi all - we hold a Reading showcase every 4 weeks where people show off
cool work and experimental projects. Thought some people on the list may
have an interest in the following from the one earlier this week. For a
couple of the presentations the screensharing cut out, so that was trimmed
from the end.
https://www.youtube.com/watch?v=UM9YwsAMC_o
Here are the ones where video isn't available.
Kaity and Sherah showed the feeds prototype:
http://feeds-a.meteor.com/
http://feeds-b.meteor.com/
Baha showed off output from Barry the Browser Test Bot, which Jon
Robson has been rocking:
https://github.com/jdlrobson/Barry-the-Browser-Test-Bot
Enjoy!
-Adam
> Adam Baso <abaso(a)wikimedia.org> asked:
> > Hi there - is Schema:MobileWebClickTracking dead?
>
Jon Robson <jrobson(a)wikimedia.org> replied:
> Yes.
>
Then someone should have updated its talk page.
https://meta.wikimedia.org/wiki/Schema_talk:MobileWebClickTracking
See e.g. https://meta.wikimedia.org/wiki/Schema_talk:EchoPrefUpdate
Back in 2013 the talk pages for schemas were where we kept track of their
use. (Dario, is this still the case?) It's not ideal, but the alternative is
looking at the git logs of operations/mediawiki-config and the extension to
figure out when things were disabled or decommissioned.
I did it this time, but I think engineers should update the talk page when
they stop using a schema, unless there's a better approach.
> It got split into main-menu-daily, page-ui-daily and various other schemas.
>
I think you mean MobileWebMainMenuClickTracking and
MobileWebUIClickTracking. I pasted Baha's "break up MobileWebClickTracking"
commit message into the talk page.
FWIW https://meta.wikimedia.org/wiki/Special:PrefixIndex/Schema:MobileWeb
lists 18 schemas, the EventLoggingRegisterSchemas hook[1] registers 9
schemas, and the active schemas category [2] only has four MobileWeb
schemas.
[1]
https://phabricator.wikimedia.org/diffusion/EMFR/browse/master/includes/Mob…
[2] https://meta.wikimedia.org/wiki/Category:Schemas_(active)
--
=S Page WMF Tech writer
Oops, should have sent this on mobile-l to start with. Thanks. Do you think
we should remove that chart from the dashboard, or just prune stuff after
2015 and disable the job that keeps adding records to it?
Kevin and Marcel from Analytics on the CC.
-Adam
On Thu, Jul 30, 2015 at 11:57 AM, Jon Robson <jrobson(a)wikimedia.org> wrote:
> Yes. It got split into main-menu-daily, page-ui-daily and various other
> schemas.
>
> On Thu, Jul 30, 2015 at 11:54 AM, Adam Baso <abaso(a)wikimedia.org> wrote:
> > Hi there - is Schema:MobileWebClickTracking dead?
> >
> > On the ui-daily-historic chart in
> > https://mobile-reportcard.wmflabs.org/#other-graphs-tab it looks like
> stuff
> > pretty much stops at the beginning of 2015.
> >
> > I'm going to ask Analytics to prune the long tail of events in 2015 to
> make
> > the graph prettier without the big drop off if no one objects.
> >
> > -Adam
> >
> > _______________________________________________
> > reading-wmf mailing list
> > reading-wmf(a)lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/reading-wmf
> >
>
>
Hi everyone!
We're gearing up to release another minor update to the app. You can download it
by opening the TestFlight app on your device and downloading the *Wikipedia
Beta 4.1.7 (171)*. You'll see "What To Test" notes in TestFlight, but I've
also copied them below for convenience.
Please reply to me directly if you're not already a member of the beta
testing group or have issues installing the app on your device.
Thanks!
Brian, Corey, & Monte
- Make sure your favorite faces are nicely centered on the page's lead
> image. Someone besides President Obama, we've checked him already ;-)
> - Try to break the language picker! Both when changing languages of a
> specific page (language "A" button on bottom toolbar) and changing the
> search site (tap "W" > "More" > "Search ____ Wikipedia").
> - Try to break read more! Tap on the "Read More" suggestions at the bottom
> of a page.
> - Try to break saved pages! We had some issues where saving pages was
> crashing the app. We're not sure which pages (or series of events before
> the save) were causing the crash, so do your worst!
--
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
The reading team are currently focusing energy on speeding up the site
for all our users (https://phabricator.wikimedia.org/T98986 is the
tracking bug where this work can be followed)
Off the back of https://phabricator.wikimedia.org/T105361 I had a
quick chat with Ori to document how the performance team is currently
identifying problems with MediaWiki's code. I'm sharing here, so
anyone who is interested in helping us improve the time our users can
load our content can analyse our data, raise tasks, and submit
patches.
I'm hoping this will be useful for anyone who wants to get involved in
an effort to make our site faster for our users (this is not desktop
specific). If you have anything useful to add, please do; after some
discussion or nods I'd love to share some best practices on
mediawiki.org.
Tool 1) Use http://webpagetest.org (no credentials necessary)
* Use https://en.wikipedia.org/wiki/Facebook as an example wiki page
* Choose a region of the world and browser
* Select first view only since this is what we are currently
interested in (repeat view is when they load again - and it should be
quicker as it is from cache).
* Capture video can be turned off - I personally find the screenshots
more useful
To shout out some of the advanced settings, the more
interesting/useful features include:
* Chrome > capture dev tools timeline
* Setting speed 3G or 2G
* Script can be used to conditionally turn on things which are not yet
available to everyone e.g. VisualEditor
You can do a lot of this in your Chrome browser locally, but different
browsers may have different behaviours, and your local browser is in a
fixed location, so those differences won't be captured that way. The visual
screenshots also make it easier to see where things get blocked. With the
timeline from the advanced tools you can match up white screens with
blocking scripts/styles.
Tool 2) Add http://performance.wikimedia.org to your browser
bookmarks. The Navigation Timing section is probably the most interesting
right now. It points to https://grafana.wikimedia.org (no credentials
needed) which is powered by http://graphite.wikimedia.org (Access
graphite with your wikitech credentials). This data is sourced from
our users, so is a good representation of how we are doing.
If a graph is missing you can create a new one from data in graphite
by clicking "add row" or editing an existing graph.
Clicking edit on
https://grafana.wikimedia.org/#/dashboard/db/navigation-timing?panelId=12&f…
you'll be able to understand where the data comes from on graphite
e.g. metrics/frontend/navtiming/totalPageLoadTime
Note that for graphs, median data is less sensitive to outliers, so it's
best to use it as a more realistic indicator.
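If you want the raw data behind a graph, Graphite's render API can return the series as JSON; a sketch (the exact dot-separated metric path is an assumption mirroring the navtiming example above, so verify it against the graphite tree first):

```python
from urllib.parse import urlencode

# Sketch: build a Graphite render API URL that returns a series as JSON.
def render_url(metric="frontend.navtiming.totalPageLoadTime.median",
               since="-7d"):
    params = {"target": metric, "from": since, "format": "json"}
    return "http://graphite.wikimedia.org/render?" + urlencode(params)
```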
Folders in graphite are populated by scripts that live in:
https://github.com/wikimedia/operations-puppet/tree/production/modules/webp…
To create a graph, simply go to an existing workboard, save it under a
different name (this clones it) - don't worry you can't mess up and
delete existing workboards.
Tool 3) Speedcurve requires you to set up an account, but it gives you
an opinionated view of things you care about and is nicely presented, so
it could be a good source of inspiration for your own grafana
dashboard.
To oversimplify what it does: each day it will access a page, store the
result, and allow you to see historic data.
Note the performance team has plans to setup infrastructure to automate this.
Tool 4) is one we are not using - http://sitespeed.io. We might want
to use it for performance regression tests.
In the grand scheme of things it would be great to get to a place
where Jenkins complains if you cause a regression in firstPaint time.
We are a long way from that, but let's work in that direction :-)
Let's live up to the Hawaiian word after which we are named!
Apologies if this is oversimplified; please take this as an
opportunity to share how you/your team/your company test page
performance. I see this mailing list as a good place to share these
sort of things!