There've been some issues reported lately with image scaling, where resource usage on very large images has been huge (problematic for batch uploads from a high-resolution source). Even the scaling time for a typical several-megapixel JPEG photo can be slower than desired when loading it into something like the MMV extension.
I've previously proposed limiting the generatable thumb sizes and pre-generating those fixed sizes at upload time, but this hasn't been a popular idea: it lacks flexibility, and it risks poor client-side scaling or wasted network traffic from sending larger-than-needed fixed image sizes.
Here's an idea that blends the performance benefits of pre-scaling with the flexibility of our current model...
A classic technique in 3D graphics is mip-mapping <https://en.wikipedia.org/wiki/Mip-mapping>, where an image is pre-scaled to multiple resolutions, usually each 1/2 the width and height of the next level up.
When drawing a textured polygon on screen, the system picks the most closely-sized level of the mipmap to draw, reducing the resources needed and avoiding some classes of aliasing/moiré patterns when scaling down. If you want to get fancy you can also use trilinear filtering <https://en.wikipedia.org/wiki/Trilinear_filtering>, where the next-size-up and next-size-down mip-map levels are combined -- this further reduces artifacting.
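
To make the halving-per-level idea concrete, here's a rough sketch of generating such a chain with Pillow; the function name and the 64-pixel floor are just illustrative choices, not anything that exists today:

    from PIL import Image

    def build_mipmap_levels(path, min_dimension=64):
        """Yield successively half-sized copies of the source image."""
        level = Image.open(path)
        # Stop once the next level would drop below the minimum dimension.
        while min(level.size) >= min_dimension * 2:
            level = level.resize((level.width // 2, level.height // 2),
                                 Image.LANCZOS)
            yield level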
I'm wondering if we can use this technique to help with scaling of very large images (see the sketch after this list):

* at upload time, perform a series of scales to produce the mipmap levels
* _don't consider the upload complete_ until those are done! a web uploader or API-using bot should probably wait until it's done before uploading the next file, for instance...
* once upload is complete, keep on making user-facing thumbnails as before... but make them from the smaller mipmap levels instead of the full-scale original
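
As a very rough sketch of how that flow might look, again assuming Pillow and the build_mipmap_levels() helper above -- the store object and its save_level()/load_levels() methods are purely hypothetical stand-ins for whatever storage layer we'd really use:

    from PIL import Image, ImageFilter

    def handle_upload(path, store):
        # Render every mipmap level before reporting the upload as complete.
        levels = [Image.open(path)] + list(build_mipmap_levels(path))
        for i, level in enumerate(levels):
            store.save_level(i, level)   # hypothetical storage call
        return len(levels)               # only now is the upload "complete"

    def make_thumb(store, width, sharpen=True):
        # Pick the smallest pre-scaled level that's still at least as wide
        # as the requested thumbnail, so we never upscale.
        level = min((lv for lv in store.load_levels() if lv.width >= width),
                    key=lambda lv: lv.width)
        thumb = level.resize((width, round(level.height * width / level.width)),
                             Image.LANCZOS)
        if sharpen:
            # The usual post-resize sharpening pass still applies here.
            thumb = thumb.filter(ImageFilter.UnsharpMask())
        return thumb

Trilinear-style blending of the two nearest levels could slot into make_thumb() later if the extra quality turns out to matter.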
This would avoid changing our external model -- where server-side scaling can be used to produce arbitrary-size images that are well-optimized for their target size -- while reducing resource usage for thumbs of huge source images. We can also still do things like applying a sharpening effect to photos, something people sorely miss whenever it's absent.
If there's interest in investigating this scenario I can write up an RfC with some more details.
(Properly handling multi-page files like PDFs, DjVu, or paged TIFFs could complicate this by making the initial rendering/extraction pretty slow, though, so that needs consideration.)
-- brion