https://scaleyourcode.com/interviews/interview/23

The author claims to resize JPEGs of "any size" in 25 ms or less on m3.medium AWS instances.

While this is closed source, and part of a worrying trend of closed SaaS frameworks one has to pay for by the hour, the author reveals enough technical detail to figure out how it works. Namely:

- the inspiration for the code is an unnamed Japanese paper which describes how to process the Y, U and V components of a JPEG in parallel
- it uses a technique similar to ImageMagick's jpeg:size option, whereby only the parts of the JPEG needed for the target thumbnail size are read, instead of every pixel
- it leverages "vector math" in the processor, which I assume means AVX instructions and registers
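The jpeg:size trick has a well-known basis: libjpeg can decode at 1/2, 1/4 or 1/8 scale by using only the low-frequency DCT coefficients of each 8x8 block, and at 1/8 scale that is just the DC coefficient, i.e. the block average. A minimal pure-Python sketch of that 1/8-scale case (no real JPEG parsing; a made-up 16x16 grayscale image stands in for the decoded blocks):

```python
# Sketch: decoding a JPEG at 1/8 scale amounts to keeping only the DC
# coefficient of each 8x8 DCT block, which (up to scaling) is the block's
# average value. No real JPEG parsing here -- a hypothetical 16x16
# grayscale image stands in for the decoded data.

def eighth_scale(pixels):
    """Downscale a grayscale image (list of rows) by 8x: one output
    pixel per 8x8 block, the information the DC coefficient carries."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for by in range(0, h, 8):
        row = []
        for bx in range(0, w, 8):
            block = [pixels[by + y][bx + x] for y in range(8) for x in range(8)]
            row.append(sum(block) // 64)  # block average ~ DC coefficient
        out.append(row)
    return out

# A 16x16 image: left half black (0), right half white (255).
img = [[0] * 8 + [255] * 8 for _ in range(16)]
print(eighth_scale(img))  # [[0, 255], [0, 255]]
```

The point is that a resizer aiming for a small thumbnail never needs to reconstruct the full-resolution pixels at all, which is where much of the speed comes from.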

Essentially, it's parallelized decoding and resizing of JPEGs, using hardware-specific instructions for optimization.
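The parallel part can be sketched with stdlib threads: the Y, U and V planes share no data, so each can be resized independently. The plane layout and the nearest-neighbour resampling below are my own simplifications, not details from the interview; a real implementation would apply SIMD ("vector math") within each plane.

```python
# Sketch: resizing the Y, U and V planes of a decoded JPEG in parallel.
# The three planes are independent, so each can be scaled on its own
# thread. Nearest-neighbour resampling on nested lists keeps this
# stdlib-only; it is a stand-in for the real per-plane kernel.
from concurrent.futures import ThreadPoolExecutor

def resize_plane(plane, new_w, new_h):
    """Nearest-neighbour resize of one component plane (list of rows)."""
    old_h, old_w = len(plane), len(plane[0])
    return [
        [plane[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

def resize_yuv(planes, new_w, new_h):
    """Resize all three planes concurrently; returns {name: plane}."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {name: pool.submit(resize_plane, p, new_w, new_h)
                   for name, p in planes.items()}
        return {name: f.result() for name, f in futures.items()}

# Toy 4x4 planes resized down to 2x2.
planes = {"Y": [[i * 4 + j for j in range(4)] for i in range(4)],
          "U": [[128] * 4 for _ in range(4)],
          "V": [[64] * 4 for _ in range(4)]}
small = resize_yuv(planes, 2, 2)
print(small["Y"])  # [[0, 2], [8, 10]]
```

In CPython the GIL limits the gain for pure-Python work like this; the real code presumably runs native per-plane kernels, where the three planes genuinely occupy separate cores.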

Of course, writing something similar would be a large undertaking. Let's hope that the folks who work on ImageMagick/GraphicsMagick take note and try to do just that :)

I have confirmation from pals at deviantArt (whose infrastructure is on AWS), who tried it out, that it's extremely fast, to the point that they're likely to stop storing intermediary resized images altogether.

I have a feeling that we'll be seeing more of this sort of hardware-optimized JPEG decoding/transcoding once Intel releases their first CPUs with integrated FPGAs, which is supposed to happen soon-ish. Unfortunately these Xeon CPUs will be released "in limited quantities, to cloud providers first". Here's that annoying trend again...