On Fri, Apr 24, 2009 at 12:31 AM, Wu Zhe wu@madk.org wrote:
An asynchronous daemon doesn't make much sense if the page purge happens on the server side, but what if we put off the page purge to the browser? It works like this:

1. the mw parser sends a request to the daemon
2. the daemon finds the work non-trivial and replies *immediately* with a best fit or just a placeholder
3. the browser renders the page, finds it's not final, and sends a request to the daemon directly using AJAX
4. the daemon replies to the browser when the thumbnail is ready
5. the browser replaces the temporary best fit / placeholder with the new thumb using Javascript
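Roughly, steps 3-5 on the browser side could look like the sketch below (modern TypeScript syntax for brevity; the /thumb/status endpoint, the data-pending-thumb attribute, the key format and the 1 s polling interval are all made up for illustration, not anything the daemon actually defines yet):

async function upgradePendingThumbs(): Promise<void> {
  // Find every placeholder image that was marked as "not final".
  const pending = document.querySelectorAll<HTMLImageElement>('img[data-pending-thumb]');
  for (const img of Array.from(pending)) {
    const key = img.dataset.pendingThumb!;   // e.g. "Example.jpg/220px" (hypothetical format)
    for (let attempt = 0; attempt < 10; attempt++) {
      const res = await fetch('/thumb/status?key=' + encodeURIComponent(key));
      const info: { ready: boolean; url?: string } = await res.json();
      if (info.ready && info.url) {
        img.src = info.url;                  // swap the placeholder for the real thumb
        img.removeAttribute('data-pending-thumb');
        break;
      }
      await new Promise(r => setTimeout(r, 1000));  // not ready yet, poll again shortly
    }
  }
}

document.addEventListener('DOMContentLoaded', () => { void upgradePendingThumbs(); });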
The daemon now has to deal with two kinds of clients: mw servers and browsers.
Letting the browser wait instead of the mw server reduces latency for users, while still giving them an acceptable page to read before the image replacement takes place and a perfect page after it. For most users, the replacement will likely happen as soon as page loading ends, since transferring the page takes some time and the daemon will usually have finished thumbnailing in the meantime.
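To make the two kinds of clients concrete, here is a rough daemon-side sketch (TypeScript/Node, same caveat: the /thumb/request and /thumb/status paths, the in-memory job table and renderThumb() are invented for illustration, not the real protocol):

import * as http from 'node:http';

type Job = { status: 'pending' | 'done'; url?: string };
const jobs = new Map<string, Job>();

async function renderThumb(key: string): Promise<string> {
  // Stand-in for the real scaler; pretend the work takes a couple of seconds.
  await new Promise(r => setTimeout(r, 2000));
  return '/thumbs/' + key;
}

function startJob(key: string): void {
  if (jobs.has(key)) return;
  jobs.set(key, { status: 'pending' });
  renderThumb(key).then(url => jobs.set(key, { status: 'done', url }));
}

http.createServer((req, res) => {
  const url = new URL(req.url ?? '/', 'http://daemon');
  const key = url.searchParams.get('key') ?? '';
  res.setHeader('Content-Type', 'application/json');

  if (url.pathname === '/thumb/request') {
    // mw parser: queue the real work, answer immediately with a best fit / placeholder.
    startJob(key);
    res.end(JSON.stringify({ placeholder: '/images/placeholder.png' }));
  } else if (url.pathname === '/thumb/status') {
    // browser (AJAX): report whether the final thumbnail is ready yet.
    const job = jobs.get(key);
    res.end(JSON.stringify({ ready: job?.status === 'done', url: job?.url }));
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);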
How long does it take to thumbnail a typical image, though? Even a parser cache hit (but Squid miss) will take hundreds of milliseconds to serve, plus hundreds more milliseconds of network latency. If we're talking about each image adding 10 ms to the latency, then it's not worth it to add all this fancy asynchronous stuff.
Moreover, in MediaWiki's case specifically, *very* few requests should actually require thumbnailing. Only the first request for a given size of a given image should ever require it: the result can then be cached more or less forever. So it's not a good case to optimize for. If the architecture can be simplified significantly at the cost of slight extra latency on 0.01% of requests, I think it's clear that the simpler architecture is superior.
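Put differently, the simpler architecture is just thumbnail-on-first-miss plus a long-lived cache. A minimal sketch (TypeScript again), with a hypothetical scale() and an in-memory Map standing in for the real scaler and thumbnail store:

const thumbCache = new Map<string, Uint8Array>();

async function scale(image: string, width: number): Promise<Uint8Array> {
  // Stand-in for the real scaler (ImageMagick or similar).
  return new TextEncoder().encode('thumb of ' + image + ' at ' + width + 'px');
}

async function getThumb(image: string, width: number): Promise<Uint8Array> {
  const key = image + '/' + width + 'px';
  const cached = thumbCache.get(key);
  if (cached) return cached;                // the overwhelmingly common case

  const thumb = await scale(image, width);  // only the very first request pays this cost
  thumbCache.set(key, thumb);               // cached more or less forever afterwards
  return thumb;
}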