Reduced quality images are now live in production. To see it for yourself, compare the original http://upload.wikimedia.org/wikipedia/commons/thumb/c/cb/Buildings_of_Bedford_Road_Historic_District%2C_Armonk%2C_NY.jpg/1024px-Buildings_of_Bedford_Road_Historic_District%2C_Armonk%2C_NY.jpg with the low quality http://upload.wikimedia.org/wikipedia/commons/thumb/c/cb/Buildings_of_Bedford_Road_Historic_District%2C_Armonk%2C_NY.jpg/qlow-1024px-Buildings_of_Bedford_Road_Historic_District%2C_Armonk%2C_NY.jpg (253KB => 99.9KB, a 60% reduction).
The quality reduction is triggered by adding "qlow-" in front of the pixel size in the thumbnail file name.
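For example (a rough illustration of the naming convention only, not the actual thumbnail-handler code), the rewrite amounts to:

    // Illustrative only: prepend "qlow-" to the pixel-size prefix of the
    // last path segment of a thumb URL.
    function toLowQualityThumb(url: string): string {
      // ".../1024px-Foo.jpg" -> ".../qlow-1024px-Foo.jpg"
      return url.replace(/\/(\d+px-[^/]+)$/, '/qlow-$1');
    }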
Continuing our previous discussion, we now need to figure out how best to use this feature. As covered before, there are two main approaches:
- JavaScript rewrite - dynamically change the <img> tag based on network/device/user preference conditions. Issues may include multiple downloads of the same image (if the browser starts the download before JS runs) and parser cache fragmentation.
- Varnish-based rewrite - Varnish decides which image to serve under the same URL. This approach requires Varnish to know everything needed to make the decision.
Zero plans to go the first route, but if we make it mobile-wide, or even site-wide, all the better.
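To make the first option concrete, here is a minimal sketch of a client-side rewrite (the lowBandwidth flag is purely hypothetical; whatever real condition is used, this has to run before the browser starts fetching the images, or we pay for the double download mentioned above):

    // Hypothetical sketch of the JavaScript-rewrite option; the lowBandwidth
    // flag stands in for whatever network/device/user-preference signal is
    // actually available.
    function rewriteThumbsToLowQuality(lowBandwidth: boolean): void {
      if (!lowBandwidth) {
        return;
      }
      const thumbs = document.querySelectorAll('img[src*="/thumb/"]');
      for (let i = 0; i < thumbs.length; i++) {
        const img = thumbs[i] as HTMLImageElement;
        // ".../1024px-Foo.jpg" -> ".../qlow-1024px-Foo.jpg"
        img.src = img.src.replace(/\/(\d+px-[^/]+)$/, '/qlow-$1');
      }
    }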
Thanks Yuri,
CC'ing Multimedia team
Maryana, this could be something interesting for the Mobile Web team to look at to optimize image delivery.
Have you guys done any perf work around images?
--tomasz
On Thu, Jun 5, 2014 at 4:10 PM, Yuri Astrakhan yastrakhan@wikimedia.org wrote:
The reduced quality images are now live in production. To see it for yourself, compare original with low quality images (253KB => 99.9KB, 60% reduction).
The quality reduction is triggered by adding "qlow-" in front of the pixel size in the thumbnail file name.
Continuing our previous discussion, we now need to figure out how best to use this feature. As covered before, there are two main approaches:
- JavaScript rewrite - dynamically change the <img> tag based on network/device/user preference conditions. Issues may include multiple downloads of the same image (if the browser starts the download before JS runs) and parser cache fragmentation.
- Varnish-based rewrite - Varnish decides which image to serve under the same URL. This approach requires Varnish to know everything needed to make the decision.
Zero plans to go the first route, but if we make it mobile-wide, or even site-wide, all the better.
I'd rather we didn't serve poor quality images to all our users. This is a poorer user experience in my opinion. If I understand correctly, this change was mainly intended to encourage more providers to join Zero by offering the incentive that Zero users use less data. I'm still not convinced there is a huge benefit to the users themselves when you consider gzipping etc. Have you benchmarked and documented how this change affects load time? Pulling in Ori and Aaron since they should have expertise in this area.
A lot of browsers these days allow you to turn images off altogether, and I think that as a user you'd rather do this than receive poorer quality images. To me it's a binary switch - no images or images... On retina displays we actually go in the opposite direction and pull in better quality images. I would hazard a guess that the issue here is the number of HTTP requests rather than the size of the images.
I think if we wanted to invest any time in this sort of thing we should explore deferring the load of images until they are visible (we dabbled with this when we explored lazy loading sections) [1]. It would be interesting to rewrite any image after the first heading to be a link to the image and pull it in via JavaScript when it is scrolled into view. I think this would give us more bang for our buck...
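Not MobileFrontend code, just a minimal sketch of that idea - assuming the server has already rewritten post-first-heading thumbs into placeholders that carry the real URL in a hypothetical data-src attribute:

    // Illustrative sketch: load deferred images once they come within ~200px
    // of the viewport. The data-src attribute is an assumption, not something
    // the parser emits today.
    function loadVisibleThumbs(): void {
      const placeholders = document.querySelectorAll('img[data-src]');
      for (let i = 0; i < placeholders.length; i++) {
        const img = placeholders[i] as HTMLImageElement;
        if (img.getBoundingClientRect().top < window.innerHeight + 200) {
          img.src = img.getAttribute('data-src')!;
          img.removeAttribute('data-src');
        }
      }
    }
    window.addEventListener('scroll', loadVisibleThumbs);
    loadVisibleThumbs(); // catch images already in view on page load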
On a side note, I notice all images on mobile are missing a cache expiration - is that intended? Also, have we considered adding a Cache-Control: public header to them?
[1] http://24ways.org/2010/speed-up-your-site-with-delayed-content/
On 9 Jun 2014 11:45, "Tomasz Finc" tfinc@wikimedia.org wrote:
Thanks Yuri,
CC'ing Multimedia team
Maryana, this could be something interesting for the Mobile Web team to look at to optimize image delivery.
Have you guys done any perf work around images?
--tomasz
On Thu, Jun 5, 2014 at 4:10 PM, Yuri Astrakhan yastrakhan@wikimedia.org wrote:
The reduced quality images are now live in production. To see it for yourself, compare original with low quality images (253KB => 99.9KB, 60% reduction).
The quality reduction is triggered by adding "qlow-" in front of the pixel size in the thumbnail file name.
Continuing our previous discussion, we now need to figure out how best to use this feature. As covered before, there are two main approaches:
- JavaScript rewrite - dynamically change the <img> tag based on network/device/user preference conditions. Issues may include multiple downloads of the same image (if the browser starts the download before JS runs) and parser cache fragmentation.
- Varnish-based rewrite - Varnish decides which image to serve under the same URL. This approach requires Varnish to know everything needed to make the decision.
Zero plans to go the first route, but if we make it mobile-wide, or even site-wide, all the better.
CC'ing zero@ to include them in the conversation
On Mon, Jun 9, 2014 at 5:01 PM, Jon Robson jdlrobson@gmail.com wrote:
I'd rather we didn't serve poor quality images to all our users. This is a poorer user experience in my opinion. If I understand correctly, this change was mainly intended to encourage more providers to join Zero by offering the incentive that Zero users use less data. I'm still not convinced there is a huge benefit to the users themselves when you consider gzipping etc. Have you benchmarked and documented how this change affects load time? Pulling in Ori and Aaron since they should have expertise in this area.
Yuri, you mentioned that we're seeing a 2x decrease in payload traffic as a result of this change.
When you profile against a sample of our traffic data does this increase/decrease/stay the same?
A lot of browsers these days allow you to turn images off altogether, and I think that as a user you'd rather do this than receive poorer quality images. To me it's a binary switch - no images or images... On retina displays we actually go in the opposite direction and pull in better quality images. I would hazard a guess that the issue here is the number of HTTP requests rather than the size of the images.
While that's great for end users, it doesn't help the carrier, as it requires the user to opt into the experience. Our carrier partners have to pay the bill regardless of whether someone turns images on or off. In fact, a zero-rated customer will have even less incentive to turn off images if their network is fast enough and the data is free.
I think if we wanted to invest any time in this sort of thing we should explore deferring the load of images until they are visible (we dabbled with this when we explored lazy loading sections) [1]. It would be interesting to rewrite any image after the first heading to be a link to the image and pull it in via JavaScript when it is scrolled into view. I think this would give us more bang for our buck...
Zero team, what is our target device matrix these days and how robust is its JavaScript support?
On a side note, I notice all images on mobile are missing a cache expiration - is that intended? Also, have we considered adding a Cache-Control: public header to them?
This historically has been due to upstream caches not respecting purges and our ops team wanting to keep TTLs as low as possible. We can take it up to the ops list if we want to know more.
--tomasz
Yuri, you mentioned that we're seeing a 2x decrease in payload traffic as a result of this change.
When you profile against a sample of our traffic data does this increase/decrease/stay the same?
Best if Yuri speaks to this. That said, as I recall, Yuri ran random page samples (or maybe it was representative samples), looking at the impact of the preexisting image library compression on the images included in those pages versus the more aggressive image library compression (that is, what's now possible with an image quality parameter) on those images.
Zero team, what is our target device matrix these days and how robust is its JavaScript support?
My perspective is anything that will handle HTML. On some partner networks, 30% or more of the pageviews come from browsers that lack JavaScript support or are blacklisted by the ResourceLoader bootstrapping, so they don't run the JavaScript. (Sufficient) JavaScript support is definitely present on some devices - we saw that in an older ResourceLoader module used by the ZeroRatedMobileAccess extension - but it's far from universal.
Incidentally, we have discussed having a means of capturing the trendline for (sufficient) JS support; we should consider using EventLogging or a cookie setter, with cookies processed at Varnish and added into X-Analytics; this is most easily done as an RL module.
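A minimal sketch of the cookie-setter variant (the cookie name is made up for illustration, and the Varnish side that copies it into X-Analytics isn't shown):

    // If this RL-delivered snippet runs at all, the client has (sufficient)
    // JS support; record that in a cookie Varnish could fold into X-Analytics.
    // The cookie name "jsSupport" is hypothetical.
    document.cookie = 'jsSupport=1; path=/; max-age=' + 30 * 24 * 3600;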
Sumanah just ran the regular RFC review on this, and the go-forward plan is this:
1. Implement rewriting of the thumbnail image tags on http://en.m.wikipedia.org/wiki/Cats for Wikipedia Zero networks, starting in a week or so.
2. Then, the week following that, roll it out on a particular language for Wikipedia Zero networks.
3. Then, the week following that, roll it out across all languages for Wikipedia Zero networks.
As data roll in through this gradual rollout, I think we could re-open discussion on the feasibility of a hybrid approach for the mobile web in general:
1. Always rewrite thumbs.
2. On higher-JS support devices on non-Wikipedia Zero networks, as the user nears a thumbnail, fetch the higher quality version as well.
-Adam
On 06/11/2014 03:03 PM, Adam Baso wrote:
Incidentally, we have discussed having a means of capturing the trendline for (sufficient) JS support; we should consider using EventLogging or a cookie setter, with cookies processed at Varnish and added into X-Analytics; this is most easily done as an RL module.
Having information on JS support would be very valuable in general, for [desktop, mobile, zero] and [anonymous, logged-in]. So if you do this, then please try to do so in a way that works across areas.
Gabriel
Good point. Alright!
On Wed, Jun 11, 2014 at 3:22 PM, Gabriel Wicke gwicke@wikimedia.org wrote:
On 06/11/2014 03:03 PM, Adam Baso wrote:
Incidentally, we have discussed having a means of capturing the trendline for (sufficient) JS support; we should consider using EventLogging or a cookie setter, with cookies processed at Varnish and added into X-Analytics; this is most easily done as an RL module.
Having information on JS support would be very valuable in general, for [desktop, mobile, zero] and [anonymous, logged-in]. So if you do this, then please try to do so in a way that works across areas.
Gabriel
Just asked Toby when I ran into him in the kitchen. He thought there might be some info already; he'll ask his team.
On 06/11/2014 03:23 PM, Adam Baso wrote:
Good point. Alright!
On Wed, Jun 11, 2014 at 3:22 PM, Gabriel Wicke gwicke@wikimedia.org wrote:
On 06/11/2014 03:03 PM, Adam Baso wrote:
Incidentally, we have discussed having a means of capturing the trendline for (sufficient) JS support; we should consider using EventLogging or a cookie setter, with cookies processed at Varnish and added into X-Analytics; this is most easily done as an RL module.
Having information on JS support would be very valuable in general, for [desktop, mobile, zero] and [anonymous, logged-in]. So if you do this, then please try to do so in a way that works across areas.
Gabriel
Using the Top 25 articles (https://en.wikipedia.org/wiki/Wikipedia:Top_25_Report) and comparing the kB served for those pages pre- and post-enhancement - checked within 5-10 minutes of each other - here's an unscientific comparison of page weights (.ods format).
https://drive.google.com/file/d/0BxJX28FKLm78QjFkOFd2dDVtVTA/edit?usp=sharin...
Note: at the moment, the image rewriting only targets dimensioned images on non-File: namespace pages on m.wikipedia.org for Wikipedia Zero, under a test operator configuration. Additional enhancements to come would target non-dimensioned images on non-File: namespace pages on mdot for Wikipedia Zero, and should eke out a little extra bandwidth reduction. The bizdev team will be speaking with an operator or two to trial this, and the hope is to roll it out to Wikipedia Zero in general after that.
Nice work, Yuri!
-Adam
On Wed, Jun 11, 2014 at 3:03 PM, Adam Baso abaso@wikimedia.org wrote:
Yuri, you mentioned that we're seeing a 2x decrease in payload traffic as a result of this change.
When you profile against a sample of our traffic data does this increase/decrease/stay the same?
Best if Yuri speaks to this. That said, as I recall, Yuri ran random page samples (or maybe it was representative samples), looking at the impact of the preexisting image library compression on the images included in those pages versus the more aggressive image library compression (that is, what's now possible with an image quality parameter) on those images.
Zero team, what is our target device matrix these days and how robust is its JavaScript support?
My perspective is anything that will handle HTML. On some partner networks, 30% or more of the pageviews come from browsers that lack JavaScript support or are blacklisted by the ResourceLoader bootstrapping, so they don't run the JavaScript. (Sufficient) JavaScript support is definitely present on some devices - we saw that in an older ResourceLoader module used by the ZeroRatedMobileAccess extension - but it's far from universal.
Incidentally, we have discussed having a means of capturing the trendline for (sufficient) JS support; we should consider using EventLogging or a cookie setter, with cookies processed at Varnish and added into X-Analytics; this is most easily done as an RL module.
Sumanah just ran the regular RFC review on this, and the go-forward plan is this:
1. Implement rewriting of the thumbnail image tags on http://en.m.wikipedia.org/wiki/Cats for Wikipedia Zero networks, starting in a week or so.
2. Then, the week following that, roll it out on a particular language for Wikipedia Zero networks.
3. Then, the week following that, roll it out across all languages for Wikipedia Zero networks.
As data roll in through this gradual rollout, I think we could re-open discussion on the feasibility of a hybrid approach for the mobile web in general:
1. Always rewrite thumbs.
2. On higher-JS support devices on non-Wikipedia Zero networks, as the user nears a thumbnail, fetch the higher quality version as well.
-Adam
Tomasz Finc, 11/06/2014 22:51:
On a side note, I notice all images on mobile are missing a cache expiration - is that intended? Also, have we considered adding a Cache-Control: public header to them?
This historically has been due to upstream caches not respecting purges and our ops team wanting to keep TTLs as low as possible. We can take it up to the ops list if we want to know more.
Is mobile different in this? Just this week I stumbled upon https://bugzilla.wikimedia.org/show_bug.cgi?id=17577 which is quite depressing.
Nemo
Mobile serves the same images as desktop (from upload.wikimedia.org), and they are served without Cache-Control because we rely on ETags for expiry.
On Sun, Sep 14, 2014 at 12:59 PM, Federico Leva (Nemo) nemowiki@gmail.com wrote:
Tomasz Finc, 11/06/2014 22:51:
On a side note, I notice all images on mobile are missing a cache expiration - is that intended? Also, have we considered adding a Cache-Control: public header to them?
This historically has been due to upstream caches not respecting purges and our ops team wanting to keep TTLs as low as possible. We can take it up to the ops list if we want to know more.
Is mobile different in this? Just this week I stumbled upon https://bugzilla.wikimedia.org/show_bug.cgi?id=17577 which is quite depressing.
Nemo
On the bulk of user agents, gzipping does pretty drastically reduce the bandwidth consumed for non-thumbnail resources.
To follow up on our face-to-face discussions, would it be possible to do both? That is:
1. Always pare down the thumbnail file size. This cuts down the initial page footprint for everyone without sacrificing thumbnails altogether (i.e., without requiring the user to tap a "click to view" link) on low-JS devices.
2. If the user is on a non-zero-rated network and has higher-JS support, trigger retrieval of the more bandwidth-intensive image when the user nears the thumbnail (see the sketch below). Because step #1 saves bandwidth, the impact of #2 on bandwidth consumption is effectively minimized.
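A rough sketch of step #2, assuming step #1 has already put qlow URLs in the page output (the 200px "near the viewport" margin is an arbitrary choice for illustration):

    // Illustrative sketch: swap a qlow thumb for the full-quality version
    // once it comes within ~200px of the viewport. Assumes thumbs were
    // already rewritten to qlow URLs per step #1.
    function upgradeNearbyThumbs(): void {
      const thumbs = document.querySelectorAll('img[src*="/qlow-"]');
      for (let i = 0; i < thumbs.length; i++) {
        const img = thumbs[i] as HTMLImageElement;
        if (img.getBoundingClientRect().top < window.innerHeight + 200) {
          img.src = img.src.replace('/qlow-', '/');
        }
      }
    }
    window.addEventListener('scroll', upgradeNearbyThumbs);
    upgradeNearbyThumbs(); // handle thumbs already in view on page load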
-Adam
On Mon, Jun 9, 2014 at 5:01 PM, Jon Robson jdlrobson@gmail.com wrote:
I'd rather we didn't serve poor quality images to all our users. This is a poorer user experience in my opinion. If I understand correctly, this change was mainly intended to encourage more providers to join Zero by offering the incentive that Zero users use less data. I'm still not convinced there is a huge benefit to the users themselves when you consider gzipping etc. Have you benchmarked and documented how this change affects load time? Pulling in Ori and Aaron since they should have expertise in this area.
A lot of browsers these days allow you to turn images off altogether, and I think that as a user you'd rather do this than receive poorer quality images. To me it's a binary switch - no images or images... On retina displays we actually go in the opposite direction and pull in better quality images. I would hazard a guess that the issue here is the number of HTTP requests rather than the size of the images.
I think if we wanted to invest any time in this sort of thing we should explore deferring the load of images until they are visible (we dabbled with this when we explored lazy loading sections) [1]. It would be interesting to rewrite any image after the first heading to be a link to the image and pull it in via JavaScript when it is scrolled into view. I think this would give us more bang for our buck...
On a side note, I notice all images on mobile are missing a cache expiration - is that intended? Also, have we considered adding a Cache-Control: public header to them?
[1] http://24ways.org/2010/speed-up-your-site-with-delayed-content/
On 9 Jun 2014 11:45, "Tomasz Finc" tfinc@wikimedia.org wrote:
Thanks Yuri,
CC'ing Multimedia team
Maryana, this could be something interesting for the Mobile Web team to look at to optimize image delivery.
Have you guys done any perf work around images?
--tomasz
On Thu, Jun 5, 2014 at 4:10 PM, Yuri Astrakhan yastrakhan@wikimedia.org wrote:
The reduced quality images are now live in production. To see it for yourself, compare original with low quality images (253KB => 99.9KB, 60% reduction).
The quality reduction is triggered by adding "qlow-" in front of the pixel size in the thumbnail file name.
Continuing our previous discussion, we now need to figure out how best to use this feature. As covered before, there are two main approaches:
- JavaScript rewrite - dynamically change the <img> tag based on network/device/user preference conditions. Issues may include multiple downloads of the same image (if the browser starts the download before JS runs) and parser cache fragmentation.
- Varnish-based rewrite - Varnish decides which image to serve under the same URL. This approach requires Varnish to know everything needed to make the decision.
Zero plans to go the first route, but if we make it mobile-wide, or even site-wide, all the better.