On Monday, February 11, 2013, Mark Bergsma wrote:
On Feb 9, 2013, at 11:21 PM, Asher Feldman <afeldman@wikimedia.org> wrote:
Whether or not it makes sense for mobile to move in the direction of splitting up article views into many API requests is something I'd love to see backed up by data. I'm skeptical for multiple reasons.
What is the main motivation here? Reducing article sizes/transfers at the expense of more latency?
In cases where most (if not all) sections end up being loaded, I'd expect it to increase the amount of data transferred beyond just the overhead of the additional requests. gzip might take a 30k article down to 4k, but it will be less efficient on individual sections. Text compresses really well, and round-trip latency is high on many cell networks.
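A quick way to see the effect (a minimal sketch; the sample text and section split are invented for illustration, real wikitext will differ somewhat):

import gzip

# Build a fake article of 10 identical sections of repetitive prose.
article = ("== Section ==\n" + "Wiki prose compresses well because it repeats words. " * 40) * 10
sections = article.split("== Section ==")[1:]  # crude split, illustration only

whole = len(gzip.compress(article.encode(), compresslevel=9))
per_section = sum(len(gzip.compress(("== Section ==" + s).encode(), compresslevel=9))
                  for s in sections)

print("whole article gzipped:  ", whole, "bytes")
print("sum of gzipped sections:", per_section, "bytes")
# Each section is compressed against a fresh dictionary, so the per-section
# total is typically larger, before counting per-request HTTP overhead.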
And then I'd wonder about the server-side implementation. How will frontend cache invalidation work? Are we going to need to purge every individual article section relative to /w/api.php on edit? Article HTML in memcached (parser cache), mobile-processed HTML in memcached... now individual sections in memcached as well? If so, should we calculate memcached space needs for article text as 3x the current parser cache utilization? More memcached usage is great; I'm not asking this to dissuade its use, but because it's better to capacity plan than to react.
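To make the purge fan-out and the rough 3x estimate concrete, here is a hypothetical sketch; the key names are made up and do not reflect MediaWiki's actual cache key formats:

def keys_to_purge(page_id, rev_id, num_sections):
    # Existing per-article cached copies (hypothetical key names).
    keys = [
        "pcache:idhash:%d-%d" % (page_id, rev_id),   # parser cache HTML
        "mobile:html:%d-%d" % (page_id, rev_id),     # mobile-processed HTML
    ]
    # One additional key per section if sections are cached individually.
    keys += ["mobile:section:%d-%d:%d" % (page_id, rev_id, i)
             for i in range(num_sections)]
    return keys

# A 40-section article goes from 2 purges to 42 on every edit, and the
# article text ends up stored roughly three times over (parser cache +
# mobile HTML + the per-section copies).
print(len(keys_to_purge(page_id=12345, rev_id=67890, num_sections=40)))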