Hi opsen,
I'm currently working on this changeset: https://gerrit.wikimedia.org/r/#/c/157157/ whose goal is to pre-render commonly used thumbnail sizes at upload time, in order to avoid the delay experienced by users who are the first to view a particular image at a given size (particularly in Media Viewer).
So far I've implemented it as a job (in the job queue sense of the term), which implies that the server(s) picking up this job type would need to have the whole stack of image processing software installed on them. The idea being that we could provision the resources for this prerendering separately from the existing pool of on-demand image scalers. Does this approach make sense from an Ops perspective? Basically, having one or more servers with the same software as the image scalers installed on them, configured as job runners for that particular job type.
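To give a concrete idea of what such a dedicated job runner would end up doing, here's a rough sketch in Python (the real thing would of course be a MediaWiki job class in PHP; the widths, paths and function names below are purely illustrative, and it assumes ImageMagick's convert is installed locally like on the image scalers):

    # Illustrative sketch only: Python stand-in for what the prerender job
    # would do on a dedicated runner. Names and widths are made up.
    import subprocess
    from pathlib import Path

    # Example set of commonly requested thumbnail widths (e.g. Media Viewer buckets)
    PRERENDER_WIDTHS = [320, 640, 800, 1024, 1280, 1920]

    def prerender_thumbnails(original: Path, thumb_dir: Path) -> None:
        """Generate the common thumbnail sizes for a freshly uploaded file,
        using the same image processing stack the image scalers have installed."""
        thumb_dir.mkdir(parents=True, exist_ok=True)
        for width in PRERENDER_WIDTHS:
            thumb = thumb_dir / f"{width}px-{original.name}"
            # ImageMagick resize to the target width, preserving aspect ratio
            subprocess.run(
                ["convert", str(original), "-thumbnail", f"{width}x", str(thumb)],
                check=True,
            )

    if __name__ == "__main__":
        prerender_thumbnails(Path("Example.jpg"), Path("thumbs/Example.jpg"))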
The alternative is that the job would cURL the thumbnail URLs to hit the image scalers. I'm not sure that this is a desirable network path; it might not be the most future-proof thing to expect job runners to be able to hit our public-facing URLs. Not to mention that this makes it a very WMF-specific solution, whereas the job type approach is more generic. Maybe there's a better way for a job runner to make the image scalers do something, though. That alternative approach of hitting thumbnail URLs would mean the job runs on the regular pool of job runners, and we probably wouldn't be able to tell the resource usage of the prerendering apart from the regular on-demand thumbnailing that's happening at the moment.
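For comparison, the cURL alternative would amount to something like the following sketch (again illustrative only; the thumbnail URL pattern and widths are assumptions on my part, the point being that the existing scalers render and cache the thumbnail as a side effect of serving the GET):

    # Illustrative sketch only: warm the thumbnail cache by requesting the
    # public thumbnail URLs from a regular job runner. URL pattern and widths
    # are assumptions, not the actual configuration.
    import urllib.request

    THUMB_URL = "https://upload.wikimedia.org/wikipedia/commons/thumb/{hashpath}/{name}/{width}px-{name}"
    PRERENDER_WIDTHS = [320, 640, 800, 1024, 1280, 1920]

    def prerender_via_http(hashpath: str, name: str) -> None:
        """Request each common size so the on-demand image scalers render and
        cache the thumbnails."""
        for width in PRERENDER_WIDTHS:
            url = THUMB_URL.format(hashpath=hashpath, width=width, name=name)
            with urllib.request.urlopen(url, timeout=60) as response:
                response.read()

    if __name__ == "__main__":
        prerender_via_http("a/a9", "Example.jpg")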