On 04/24/2014 06:00 AM, Gilles Dubuc wrote:
> Instead of each image scaler server generating a thumbnail immediately when
> a new size is requested, the following would happen in the script handling
> the thumbnail generation request:

It might be helpful to consider this as a fairly generic request limiting /
load shedding problem. There are likely simpler and more robust solutions to
this using plain Varnish or Nginx, where you basically limit the number of
backend connections and let other requests wait.
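As a minimal sketch of the Nginx variant (the zone name, limit value, and upstream name here are all hypothetical, and the right limit would need tuning against real scaler capacity):

```nginx
http {
    # One shared zone; keying on $server_name makes the limit effectively
    # global rather than per-client.
    limit_conn_zone $server_name zone=thumbs:1m;

    server {
        location /thumb/ {
            # At most 50 in-flight thumbnail requests reach the scalers;
            # excess requests are rejected (503 by default) instead of
            # piling up on the backends.
            limit_conn thumbs 50;
            proxy_pass http://imagescalers;
        }
    }
}
```

Varnish can do something similar by capping connections per backend, so requests queue at the edge rather than overloading the scalers.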
Rate limiting without per-client keys is of limited use, though. It only
works around the root cause, which is that we let clients trigger very
expensive operations in real time.
A possible way to address the root cause might be to generate screen-sized
thumbnails in a standard size ('xxl') in a background process after upload,
and then scale all on-demand thumbnails from those. If the base thumb is not
yet generated, a placeholder can be displayed and no immediate scaling
happens. With the expensive operation of extracting reasonably-sized base
thumbs from large originals now happening in a background job, rate limiting
becomes easier and won't directly affect the generation of thumbnails of
existing images. Creating small thumbs from the smaller base thumb will also
be faster than starting from a larger original, and should still yield good
quality for typical thumb sizes if the 'xxl' thumb size is large enough.
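The request-path logic described above could be sketched roughly like this (all names, the 'xxl' width, and the storage/queue stand-ins are illustrative, not an actual MediaWiki API):

```python
# Sketch of the proposed flow: on-demand thumbs are scaled from a
# pre-generated screen-sized 'xxl' base thumb; if the base is missing,
# return a placeholder and enqueue background generation instead of
# scaling the large original in the request path.

XXL_WIDTH = 2048   # assumed base thumb width

base_store = {}    # stand-in for thumbnail storage (name -> base thumb)
job_queue = []     # stand-in for the background job queue

def on_upload(name):
    # The expensive extraction from the original happens once,
    # asynchronously, right after upload.
    job_queue.append(("generate_base", name, XXL_WIDTH))

def get_thumb(name, width):
    base = base_store.get(name)
    if base is None:
        # Base thumb not ready yet: serve a cheap placeholder and
        # (re-)enqueue generation; no real-time scaling of the original.
        job_queue.append(("generate_base", name, XXL_WIDTH))
        return "placeholder.png"
    # Cheap downscale from the screen-sized base, never from the original.
    return scale(base, width)

def scale(base, width):
    # Placeholder for the actual image scaling call.
    return f"{base}@{width}px"
```

With this split, only the cheap base-to-thumb scaling remains in the request path, which is what makes the rate limiting above tractable.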
The disadvantage for multi-page documents is that we'd create a lot of
screen-sized thumbs, some of which might never actually be used. Storage
space is relatively cheap, though; at least cheaper than service downtime or
a degraded user experience from slow responses to normal thumb scale requests.
Gabriel
_______________________________________________
Ops mailing list
Ops@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/ops