On Thu, Apr 17, 2014 at 8:53 AM, Gilles Dubuc gilles@wikimedia.org wrote:
Including the multimedia list, since the discussion is now broader. Gergo, Mark, I encourage you to read the backlog: https://lists.wikimedia.org/mailman/private/ops/2014-April/thread.html#31981
The ops archives are private.
On Thu, Apr 17, 2014 at 4:31 PM, Brad Jorsch (Anomie) <bjorsch@wikimedia.org> wrote:
When I tried it just now, I saw 6 queries: one to prop=imageinfo to fetch a number of different props, one to meta=filerepoinfo, one to list=imageusage, one to prop=globalusage, and two more to prop=imageinfo to fetch the URLs for two different sizes of the image.
There is a userinfo API call as well (you probably missed it because in some cases it goes via JSONP). We just got rid of the extra imageinfo calls (in most cases), so it is now 4 queries per image, plus one filerepoinfo query per page.
Four of those could be combined, but that would complicate the code a lot even in its current state (and much more if we add some sort of caching and need to deal with invalidation, which differs for every API query). I am not sure there is much benefit: when cached, those queries should be fast anyway, and when not cached, the single combined query might actually be slower, since everything happens sequentially in PHP, while the independent JS requests run in parallel to some extent. (We should probably measure this.)
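To illustrate the parallelism point, a minimal sketch with mock requests (the names and timings are made up, not measurements): when the requests are fired independently from the client, total wall time is roughly the slowest request, whereas a combined server-side query would pay the sum of its parts.

```javascript
// Mock of an independent API request that resolves after `ms` milliseconds.
// (Hypothetical names/timings; real requests would hit api.php.)
function mockRequest(name, ms) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(name); }, ms);
  });
}

// Fire the four per-image queries in parallel, as the viewer does today;
// the whole batch finishes when the slowest one does (~30ms here), not
// after the sum of all four (~85ms).
Promise.all([
  mockRequest('imageinfo', 30),
  mockRequest('userinfo', 10),
  mockRequest('imageusage', 20),
  mockRequest('globalusage', 25)
]).then(function (results) {
  console.log(results.join(','));
});
```

This is only a sketch of the scheduling behaviour, not of the actual MultimediaViewer code.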
Also, getting really off-topic here: the "guprop[]=url&guprop[]=namespace" and "&iunamespace[]=0&iunamespace[]=100" that I see in your original queries don't actually work; they give the same results as if guprop and iunamespace were omitted entirely. The API should give a warning about that (filed as bug 64057).
Probably also a bug in the mediawiki.api JS library, which produces such a URL when the argument is an array. Or is there a legitimate use case for that?
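For context, a minimal sketch of the difference (this is not the actual mediawiki.api code, just an assumed illustration of the serialization issue): the MediaWiki API expects multiple values for one parameter joined with "|", while PHP-style "key[]=" serialization, which jQuery's default $.param() produces for arrays, yields a parameter name the API does not recognize.

```javascript
// Hypothetical helper: serialize query parameters the way the MediaWiki
// API expects, pipe-joining array values into a single parameter
// (e.g. guprop=url|namespace) instead of emitting guprop[]=url&guprop[]=namespace.
function buildApiQuery(params) {
  return Object.keys(params)
    .map(function (key) {
      var value = params[key];
      if (Array.isArray(value)) {
        value = value.join('|'); // multi-value parameters are pipe-separated
      }
      return encodeURIComponent(key) + '=' + encodeURIComponent(value);
    })
    .join('&');
}

// buildApiQuery({ action: 'query', prop: 'globalusage', guprop: ['url', 'namespace'] })
// → "action=query&prop=globalusage&guprop=url%7Cnamespace"
// ("|" is percent-encoded as %7C in the query string; the API decodes it.)
```

The broken queries quoted above would be what you get by passing the array straight to a PHP-convention serializer instead.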