Erik Moeller wrote:
> Actually, section *expanding* without caching is slower. You have benchmarked with collapse=false. That will only happen if an expansion is explicitly requested, which is a dynamic operation which may well be slow. How about benchmarking with the auto-collapsing (no URL parameters) versus the old behavior (set threshold to 0)? Surely the massively smaller pages far outweigh any benefits of caching?
With server-side collapsing, expanding a section requires a new request that re-transfers everything already on the page. That's an extra burden for users on slow connections, and it also costs parsing and rendering time on the server.
With client-side collapsing, the page needs to be transferred only once, and sections can be opened and closed instantaneously. The data is sent compressed if the browser supports it, as most browsers do, so it will be smaller than the raw size. A large page should also compress better as a whole than several page fragments compressed separately (each with its own copy of the header, footer, sidebar markup, TOC, and other parts of the page). Also, the page need only be parsed/rendered to HTML once, and will be served identically from the squid cache, reducing latency before the page is transferred over the net.
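To illustrate the compression point, here's a quick sketch (hypothetical page markup, with Python's zlib standing in for the HTTP gzip encoding): one whole page gzips smaller than the same sections served as separate fragments, each wrapped in its own copy of the boilerplate.

```python
# Sketch only, not MediaWiki code: compare compressed size of one
# full page vs. per-section fragments that each repeat the
# header/sidebar/footer boilerplate.
import zlib

header = ("<html><head><title>Page</title></head><body>"
          "<div id='sidebar'>" + "nav link " * 50 + "</div>")
footer = "<div id='footer'>" + "footer text " * 20 + "</div></body></html>"
sections = ["<h2>Section %d</h2>" % i + ("section text %d " % i) * 100
            for i in range(8)]

# One page containing every section, boilerplate included once.
whole = header + "".join(sections) + footer
# Separate fragments, each carrying its own boilerplate copy.
fragments = [header + s + footer for s in sections]

whole_gz = len(zlib.compress(whole.encode()))
frags_gz = sum(len(zlib.compress(f.encode())) for f in fragments)
print(whole_gz, frags_gz)  # whole page compresses smaller in total
```

The gap grows with the number of sections, since every extra fragment pays the boilerplate cost again, both raw and compressed.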
-- brion vibber (brion @ pobox.com)