On Fri, Feb 22, 2008 at 8:26 AM, Roan Kattouw <roan.kattouw(a)home.nl> wrote:
The job queue works fine for doing lots of small things (do one small thing per request, nobody notices the delay), but big things will just delay some random guy's request by 10 seconds because MW is busy resizing an image someone else uploaded. We should really do this in background processes that don't interfere with Apache/PHP, but even then the load might be too heavy to handle.
Um, we do, don't we? Wikimedia runs the job queue as a cron job,
using maintenance/runJobs.php, with $wgJobRunRate = 0. It would be
kind of silly for anyone to do otherwise, provided they have the
ability to run cron jobs in the first place. Image resizing is done
on totally different servers anyway, IIRC.
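For reference, the setup described above would look roughly like this; the path, schedule, and --maxjobs value are illustrative, not Wikimedia's actual configuration:

```shell
# Crontab entry to drain the job queue out-of-band every five minutes.
# Pair this with "$wgJobRunRate = 0;" in LocalSettings.php so ordinary
# Apache/PHP requests never execute jobs themselves.
*/5 * * * * php /srv/mediawiki/maintenance/runJobs.php --maxjobs 100 >/dev/null 2>&1
```

With $wgJobRunRate left at its default, MediaWiki instead runs one job opportunistically during a fraction of page requests, which is exactly the "random guy's request gets delayed" behavior Roan describes.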
On Fri, Feb 22, 2008 at 9:50 AM, Magnus Manske
<magnusmanske(a)googlemail.com> wrote:
Or, we could have one dedicated server for long-running jobs.

Pre-generate the "usual" thumbnail sizes for each large image.
Maybe generate smaller thumbnails from larger ones, or multiple in one
go, so it won't have to load a large image 10 times for 10 thumbnails.

Not sure what to do for "unusual" sizes, though. Could be a DOS attack vector.
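Magnus's cascading idea can be sketched as a planning step: decode the original once, render the largest "usual" thumbnail from it, then derive each smaller thumbnail from the next-larger one. Everything here (the width list, the function name) is a hypothetical illustration, not MediaWiki code:

```python
# Assumed set of "usual" thumbnail widths, largest first.
USUAL_WIDTHS = [800, 640, 320, 180, 120]

def plan_thumbnail_passes(original_width, widths=USUAL_WIDTHS):
    """Return (source_width, target_width) resize pairs.

    Each thumbnail is resized from the previous (larger) result rather
    than from the full-size original, so the big image is only decoded
    once per batch instead of once per size.
    """
    passes = []
    source = original_width
    for w in sorted(widths, reverse=True):
        if w >= original_width:
            continue  # never upscale past the original
        passes.append((source, w))
        source = w  # next, smaller thumbnail chains off this one
    return passes
```

For a 2048px original this yields (2048, 800), (800, 640), (640, 320), (320, 180), (180, 120): one expensive decode, then four cheap resizes of already-small images. The trade-off is minor quality loss from repeated resampling, which is usually acceptable at thumbnail sizes.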
Well, I'm not clear on exactly what the issue is with resizing. Is it
merely that it takes a minute per image? Or does it also take 45 GB
of RAM? If the former, we could of course just toss a couple of extra
servers at the problem. If the latter, not so easy. Domas' "No."
makes me think something closer to the latter, but I really don't
know.