On Fri, Jul 30, 2010 at 2:42 AM, Alex Brollo alex.brollo@gmail.com wrote:
Yes, but I presume that a big advantage could come from having a simplified, unique, JS-free version of the pages online, completely devoid of "user preferences", to avoid any need to parse them again when loaded by different users with different preference profiles.
This is exactly what we have when you're logged out. The request goes to a Squid, and it serves a static cached file, no dynamic bits (if it's already cached). When you log in, it can't be static, because we display your name in the upper right, etc.
On Fri, Jul 30, 2010 at 4:49 AM, John Vandenberg jayvdb@gmail.com wrote:
Could we add a logged-in-reader mode, for people who are infrequent contributors but wish to be logged in for the prefs?
As soon as you're logged in, you're missing Squid cache, because we have to add your name to the top, attach your user CSS/JS, etc. You can't be served the same HTML as an anonymous user. If you want to be served the same HTML as an anonymous user, log out.
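For illustration, here's a minimal sketch of that front-end split (Python, purely hypothetical; in reality it's Squid sitting in front of the app servers, not application code like this): any request carrying a session cookie has to skip the shared cache.

```python
# Hypothetical sketch of the caching split described above; the real
# setup is Squid in front of MediaWiki, not code like this.
cache = {}  # URL -> cached HTML, standing in for Squid's store

def handle_request(url, cookies, render):
    # Logged-in users get per-user chrome (their name in the corner,
    # their user CSS/JS), so their HTML can't be shared from the cache.
    if "session" in cookies:
        return render(url, cookies)

    # Anonymous users all get identical HTML, so serve the cached copy,
    # rendering it once on the first miss.
    if url not in cache:
        cache[url] = render(url, {})
    return cache[url]
```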
Fortunately, the major slowdown is parser cache misses, not Squid cache misses. To avoid parser cache misses, just make sure you don't change parser-affecting preferences to non-default values. (We don't say which these are, of course . . .)
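To make that concrete, here's a rough sketch of how such a cache key could be built (hypothetical Python; MediaWiki's real derivation, via ParserOptions, differs in detail, and the preference names below are invented):

```python
import hashlib

# Hypothetical list of parser-affecting preferences (names invented).
PARSER_AFFECTING = ("stub_threshold", "thumb_size", "date_format", "math_mode")

def parser_cache_key(page_id, prefs):
    # The key folds in every parser-affecting option, so two users share
    # a cache entry only if all of these values match exactly.
    opts = "|".join(f"{k}={prefs.get(k, 'default')}" for k in PARSER_AFFECTING)
    return f"{page_id}:{hashlib.md5(opts.encode()).hexdigest()}"

# A default-preferences reader and one who changed a single option land
# in different buckets, so neither can reuse the other's parse.
print(parser_cache_key(42, {}))
print(parser_cache_key(42, {"stub_threshold": 500}))
```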
They could be served a slightly old cached version of the page when one is available for their prefs. e.g. if the cached version is less than a minute old.
That would make no difference. If you've fiddled with your preferences nontrivially, there's a good chance that not a single other user has the exact same preferences, so you'll only hit the parser cache if you yourself have viewed the page recently. For instance, if you set your stub threshold to 357 bytes, you'll never hit anyone else's cache (unless someone else has that exact stub threshold). Even if you just fiddle with on/off options, there are several of them, and the number of combinations is exponential (n such options means 2^n possible cache entries per page).
Moreover, practically no page changes anywhere close to once per minute. If the threshold is set that low, you'll essentially never get extra parser cache hits. On the other hand, extra infrastructure would be needed to keep stale parser cache entries around, so it's a clear overall loss.
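To spell out what was proposed and why it doesn't pay off, here's a sketch of the stale-tolerant lookup (illustrative only; nothing like this exists in MediaWiki):

```python
import time

STALE_WINDOW = 60  # seconds, per the "less than a minute old" suggestion

def lookup(cache, key, current_rev):
    entry = cache.get(key)  # entry: (rendered_rev, parsed_at, html)
    if entry is None:
        return None
    rendered_rev, parsed_at, html = entry
    if rendered_rev == current_rev:
        return html  # ordinary parser cache hit, same as today
    # The proposed extra case: accept an entry parsed from an older
    # revision as long as it's under a minute old. Since pages almost
    # never change more than once a minute, and the key already encodes
    # your exact preferences, this branch would almost never fire, yet
    # the cache would have to keep superseded entries around for it.
    if time.time() - parsed_at < STALE_WINDOW:
        return html
    return None
```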
The downside is that if they see an error, it may already be fixed. OTOH, if the page is being revised frequently, the same is likely to happen anyway. The text could be stale before it hits the wire due to parsing delay.
However, in that case everyone will see the new contents at more or less the same time -- it won't be inconsistent.