On 04/17/2014 11:38 AM, Max Semenik wrote:
On Thu, Apr 17, 2014 at 11:21 AM, Gergo Tisza <gtisza@wikimedia.org> wrote:
Four of those could be combined, but that would complicate the code a lot even in its current state (and much more if we add some sort of caching and need to deal with invalidation, which differs for every API query). I am not sure there is much benefit: when cached, those queries should be fast anyway, and when not cached, the single combined query might actually be slower, since everything happens sequentially in PHP, while the independent JS requests would run in parallel to some extent. (We should probably measure this.)
Wrong. Every request has an overhead in MediaWiki, Apache and Varnish. See the nice spike in [1], for example, from when mobile was making 2 requests instead of 1. You're proposing to make 4.
The current PHP per-request overheads are indeed less than ideal and justify some application-level batching for small requests. With HHVM, SPDY, node.js, etc., things are moving toward lower per-request overheads, though. A cached response over SPDY will typically be faster than anything you can do in PHP, and will at the same time use fewer server-side resources.
Also, we need to carefully distinguish client-side latency (perceived 'performance') from efficiency. Performing several requests in parallel will typically result in lower latency for the client, but may cause higher load on the servers if those requests are not cached and per-request overheads are high.
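To make that latency-vs-efficiency distinction concrete, here is a small back-of-the-envelope sketch in Node.js. The overhead and payload numbers are made up for illustration, not measurements: each HTTP request is modeled as a fixed per-request overhead plus payload-specific work.

```javascript
// Illustrative per-request cost model (all numbers are made up):
// each HTTP request pays a fixed overhead, plus payload-specific work.
const OVERHEAD_MS = 20;
const payloads = [30, 50, 10, 40]; // four independent API queries

// Four parallel requests: client latency is bounded by the slowest one,
// but the servers pay the per-request overhead four times.
const sum = payloads.reduce((a, b) => a + b, 0);
const parallelLatency = OVERHEAD_MS + Math.max(...payloads);
const parallelServerWork = payloads.length * OVERHEAD_MS + sum;

// One combined request: the overhead is paid once, but the sub-queries
// run sequentially in PHP, so the client waits for the sum of them.
const combinedLatency = OVERHEAD_MS + sum;
const combinedServerWork = combinedLatency;

console.log({ parallelLatency, parallelServerWork,
              combinedLatency, combinedServerWork });
// With these numbers: parallel is faster for the client (70 ms vs 150 ms),
// but costs the servers more total work (210 ms vs 150 ms).
```

Which side wins depends entirely on the real overhead: as per-request overhead shrinks (SPDY, warm caches), the parallel approach keeps its latency win while its efficiency penalty fades, which is the point above.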
Gabriel