Hi,
Just wanted to share some of the bits we've been doing this week - hopping around and analyzing our performance and application workflow from multiple angles (a kind of "Hello 2008!!!" systems performance review).
It all started with the application object cache - the caching arena was bumped up from 55GB to 160GB - and more work had to be done there to make our parser output cacheable. Any use of magic words (and most templates do use them) would drop cache TTLs to 1 hour, so the vast increase in caching space didn't help much on its own. Once this was fixed, pages are reparsed just once every few days. Additionally, we moved revision text caching for external storage to a global pool, instead of maintaining local caches on each of those nodes. That allows us to reuse the memory on old external store boxes for caching the more actively fetched revisions rather than the archived ones.
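For those curious about the mechanics, here is a tiny hypothetical sketch (class and method names are made up - this is not the actual MediaWiki code path) of the TTL clamping described above: the parser output carries a cache expiry, and any dynamic magic word can only lower it, never raise it back.

<?php
// Hypothetical sketch only - not MediaWiki's real ParserOutput.
class HypotheticalParserOutput {
	private $cacheExpiry = 604800; // default: cacheable for up to a week

	// Callers may only lower the expiry, never raise it.
	public function updateCacheExpiry( $seconds ) {
		$this->cacheExpiry = min( $this->cacheExpiry, $seconds );
	}

	public function getCacheExpiry() {
		return $this->cacheExpiry;
	}
}

$out = new HypotheticalParserOutput();

// Before the fix: a single time-dependent magic word in any template
// clamped the whole page to a one-hour TTL, so the enlarged object cache
// was full of entries that expired almost immediately.
$out->updateCacheExpiry( 3600 );

echo $out->getCacheExpiry(), "\n"; // 3600 - the page gets reparsed hourly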
Another major review was done on extension loading - by delaying or eliminating expensive initializations, especially for very-rarely-used extensions (relatively speaking :), we shaved at least 20ms off base site loading time (and the average service request). That also resulted in a huge reduction in CPU use. Special thanks go to the folks on #mediawiki (Aaron, Nikerabbit, siebrand, Simetrical, and others) who joined this effort of analysis, education and engineering :) There are still more difficult extensions to handle, but I hope they will evolve to be more adaptive performance-wise. This was a long-standing regression caused by the increasing quality of translations - which resulted in a bigger data set to handle on every page load.
A small but noticeable bit was the simplification of the mediawiki:pagecategories message on en.wikipedia.org. Logic as simple as "show Category: if there is just one category, and Categories: otherwise" requires the parser to be used, which adds a lot of overhead to every page served. Those few milliseconds needed for an absolutely grammatically correct label could be counted in thousands of dollars. :)
There were a few other victims in this unequal fight. TitleBlacklist didn't survive the performance audit - the current architecture of this feature does work in places it never should, and as the initial performance guidelines for it were not followed, it has been disabled for a while. Also, some CentralNotice functionality was not optimized for the work it was put to after the fundraiser, so for now that feature is disabled too. Of course, these features will be re-enabled - they just need more work before they can run live.
On another front - in the software core - the database connection flow was reviewed, and a few adjustments were made which reduce master server load quite a bit and cut down the communication done with all database servers (transaction coordination was too verbose before - now it is far more lax).
Here again, some of the application flow is still irrational - and may see quite a bit of refactoring/fixing in the future. Tim pointed out that my knowledge of the xdebug profiler is seriously outdated (my mind was stuck at 2.0.1 features, whereas 2.0.2 introduced quite significant changes that make life easier) ;-) Another shocking revelation was that the CPU microbenchmarks provided by MediaWiki's internal profiler were not accurate at all - the getrusage() call we use returns values rounded to 10ms, and most functions execute far faster than that. It was really amusing that I trusted numbers which looked rational and reasonable only because of the huge profiling scale and eventual statistical magic. This complicates profiling in general a bit - there's no easy way to determine which waits happened because of I/O blocking or context switches.
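To illustrate the getrusage() granularity problem, here is a rough standalone snippet - not MediaWiki's profiler itself, just a demonstration of the rounding on a kernel with a 100 Hz tick, where reported CPU time advances in ~10ms steps and sub-10ms functions often measure as zero:

<?php
function cpuTimeMs() {
	$ru = getrusage();
	return $ru['ru_utime.tv_sec'] * 1000 + $ru['ru_utime.tv_usec'] / 1000;
}

$cpuBefore  = cpuTimeMs();
$wallBefore = microtime( true );

for ( $i = 0; $i < 1000; $i++ ) {
	md5( str_repeat( 'x', 100 ) ); // work that takes microseconds per call
}

// CPU time typically reports 0 (or a multiple of ~10ms), while wall-clock
// time shows the real sub-millisecond cost - hence the misleading numbers.
printf( "cpu: %.3f ms, wall: %.3f ms\n",
	cpuTimeMs() - $cpuBefore,
	( microtime( true ) - $wallBefore ) * 1000 );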
A few images from the performance analysis work: http://flake.defau.lt/mwpageview.png http://flake.defau.lt/mediawikiprofile.png (somewhere here you should see why TitleBlacklist died)
This one made me giggle: http://flake.defau.lt/mwmodernart.png
Tim questioned whether people are using wikitext for scientific calculations, or whether that was just another case of the crazy over-templating we are used to seeing. Templates such as Commons' 'picture of the day' produce output like that =) Actually, the new parser code makes far nicer graphs (at least from a performance engineering perspective).
And one of the biggest changes happened on our Squid caching layer - because of how different browsers request data, we generally had different cache sets for IE, Firefox, Opera, Googlebot, KHTML, etc. Now we normalize the 'Accept-Encoding' header sent by browsers, which makes most connections fall into a single class. In theory this may at least double our caching efficiency. In practice, we will see - the change has been live on just one cluster for just a few hours. As a side effect we turned off the 'refresh' button on your browsers. Sorry - please let us know if anything is seriously wrong with that (if you feel offended about your constitutional refreshing rights - use purge instead :)
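For the curious, a conceptual sketch of what the normalization buys us - the real change lives in the Squid layer rather than in PHP, so this is only an illustration of the idea. With 'Vary: Accept-Encoding', every distinct header spelling gets its own cached copy; collapsing the many browser-specific variants into one canonical value lets most requests share a single cache entry:

<?php
// Illustration only - the actual normalization is done by the caching layer.
function normalizeAcceptEncoding( $header ) {
	// Two variants instead of dozens: gzip-capable clients and the rest.
	return preg_match( '/\bgzip\b/i', $header ) ? 'gzip' : '';
}

$examples = array(
	'gzip, deflate',           // typical Firefox
	'gzip,deflate',            // some other browsers
	'gzip, deflate, compress', // assorted bots
	'identity',                // clients without gzip support
);

foreach ( $examples as $ae ) {
	printf( "%-28s => '%s'\n", $ae, normalizeAcceptEncoding( $ae ) );
}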
Additionally, I've heard there has been quite a bit of development on the new parser, as well as networking in Amsterdam ;-)
Quite a few people also noticed the huge flamewar of 'oh noes, a dev enabled a feature despite our lack of consensus'. Now we're sending people to the board for all the minor changes they ask for :-)
Oh, and Mark changed the scale on our 'backend service time' graph, which we use to measure our health and performance - the upper limit is now 0.3s (which used to be our minimum a few years ago) instead of the old 1s: http://www.nedworks.org/~mark/reqstats/svctimestats-weekly.png
So, that's the kind of fun we've seen this week in site operations :)
Cheers, Domas
P.S. I'll spend next week in Disneyworld instead ;-)~~
On 1/13/08, Domas Mituzas midom.lists@gmail.com wrote:
As a side effect we turned off the 'refresh' button on your browsers. Sorry - please let us know if anything is seriously wrong with that (if you feel offended about your constitutional refreshing rights - use purge instead :)
This means what, exactly? That, e.g., Ctrl-F5 on Firefox will no longer bypass the caching layer? You'll still get the update if the Squid cache has been cleared in the interim by a change to the page, presumably.
Quite a few people also noticed the huge flamewar of 'oh noes, a dev enabled a feature despite our lack of consensus'. Now we're sending people to the board for all the minor changes they ask for :-)
Whoops, guess nobody told Tim that when he added a new group to mw.org for me last night. ;)
Simetrical wrote:
On 1/13/08, Domas Mituzas midom.lists@gmail.com wrote:
As a side effect we turned off 'refresh' button on your browsers.
This means what, exactly? That, e.g., Ctrl-F5 on Firefox will no longer bypass the caching layer?
Apparently so -- I just had a friend ask me if I knew why he was suddenly having trouble refreshing one of the Reference Desk pages.
I suspect this will cause a lot of confusion for talk and talk-like pages.
On Jan 13, 2008 8:45 PM, Steve Summit scs@eskimo.com wrote:
Simetrical wrote:
On 1/13/08, Domas Mituzas midom.lists@gmail.com wrote:
As a side effect we turned off 'refresh' button on your browsers.
This means what, exactly? That, e.g., Ctrl-F5 on Firefox will no longer bypass the caching layer?
Apparently so -- I just had a friend ask me if I knew why he was suddenly having trouble refreshing one of the Reference Desk pages.
I suspect this will cause a lot of confusion for talk and talk-like pages.
He shouldn't have to refresh. The page should have been purged from the wikimedia caches by mediawiki when it was changed.
Whatever caused the edit triggered purge to get missed needs to be fixed. I have no doubt that there are a few code pathways where some needed purges may be missed, but it seems that users have adapted by adding a fair amount of paranoid, pointless refreshing and purging. Better to make the problems more exposed so they can be found and fixed.
Otherwise most users (who haven't developed the habit of aggressively refreshing/purging) will continue to get stale data in that situation, and client-induced cache flushing will remain a cause of poor performance and a possible source of DoS vulnerabilities.
Greg Maxwell wrote:
On Jan 13, 2008 8:45 PM, Steve Summit scs@eskimo.com wrote:
Apparently so -- I just had a friend ask me if I knew why he was suddenly having trouble refreshing one of the Reference Desk pages.
He shouldn't have to refresh. The page should have been purged from the wikimedia caches by mediawiki when it was changed.
Whatever caused the edit triggered purge to get missed needs to be fixed...
Okay, thanks. He and I will keep an eye out for what that might have been.
I wrote:
Greg Maxwell wrote:
Whatever caused the edit triggered purge to get missed needs to be fixed...
Okay, thanks. He and I will keep an eye out for what that might have been.
Okay, it looks like the problem occurs only in lynx.
My friend reported that:
1. In lynx, http://en.wikipedia.org/wiki/Wikipedia:RD/H was displaying nothing more recent than 06:59 13 January 2008, even when refreshed.
2. Again in lynx, http://en.wikipedia.org/wiki/Wikipedia:Reference_desk/Humanities was behaving normally.
I've confirmed that:
3. Firefox is behaving normally, both for RD/H and Reference desk/Humanities.
4. In lynx, RD/H is displaying nothing more recent than 06:59 13 January 2008.
Very strange. I suspect that lynx is using a very different Accept-Encoding, and ends up receiving a rendition encoded differently than Firefox, and that the encoding which lynx receives is for some reason stalely cached. (Though why the behavior differs for "RD/H" versus "Reference desk/Humanities" is stranger still...)
On 1/13/08, Steve Summit scs@eskimo.com wrote:
(Though why the behavior differs for "RD/H" versus "Reference desk/Humanities" is stranger still...)
Well, they're cached separately. Squid is (by design) far from clever enough to insert the "redirected from" line.
Anyway, yeah, I can reproduce this. Return headers are fairly unremarkable:
HTTP/1.0 200 OK
Date: Sun, 13 Jan 2008 19:54:28 GMT
Server: Apache
X-Powered-By: PHP/5.1.2
Content-Language: en
Vary: Accept-Encoding,Cookie
Cache-Control: private, s-maxage=0, max-age=0, must-revalidate
Last-Modified: Sun, 13 Jan 2008 19:50:50 GMT
Content-Encoding: gzip
Content-Length: 101721
Content-Type: text/html; charset=utf-8
Age: 29338
X-Cache: HIT from sq19.wikimedia.org
X-Cache-Lookup: HIT from sq19.wikimedia.org:3128
X-Cache: MISS from sq26.wikimedia.org
X-Cache-Lookup: MISS from sq26.wikimedia.org:80
Via: 1.0 sq19.wikimedia.org:3128 (squid/2.6.STABLE16), 1.0 sq26.wikimedia.org:80 (squid/2.6.STABLE16)
Connection: close
For the non-stale page:
HTTP/1.0 200 OK
Date: Mon, 14 Jan 2008 04:06:32 GMT
Server: Apache
X-Powered-By: PHP/5.2.1
Content-Language: en
Vary: Accept-Encoding,Cookie
Cache-Control: private, s-maxage=0, max-age=0, must-revalidate
Last-Modified: Mon, 14 Jan 2008 03:25:37 GMT
Content-Encoding: gzip
Content-Length: 95947
Content-Type: text/html; charset=utf-8
X-Cache: MISS from sq24.wikimedia.org
X-Cache-Lookup: HIT from sq24.wikimedia.org:3128
X-Cache: MISS from sq21.wikimedia.org
X-Cache-Lookup: MISS from sq21.wikimedia.org:80
Via: 1.0 sq24.wikimedia.org:3128 (squid/2.6.STABLE16), 1.0 sq21.wikimedia.org:80 (squid/2.6.STABLE16)
Connection: close
I don't think this has to do with the browser at all, it's just the Squid you get. sq19 gave me an old copy, sq24 had to look it up fresh and so I got a new one. When I visit in Firefox I get (mangled a bit courtesy of Firebug, but still readable):
Date: Sat, 12 Jan 2008 06:40:38 GMT
Server: Apache
X-Powered-By: PHP/5.2.1
Content-Language: en
Vary: Accept-Encoding,Cookie
Cache-Control: private, s-maxage=0, max-age=0, must-revalidate
Last-Modified: Sat, 12 Jan 2008 06:37:46 GMT
Content-Encoding: gzip
Content-Length: 96660
Content-Type: text/html; charset=utf-8
Age: 157163
X-Cache: HIT from sq19.wikimedia.org, MISS from sq39.wikimedia.org
X-Cache-Lookup: HIT from sq19.wikimedia.org:3128, MISS from sq39.wikimedia.org:80
Via: 1.0 sq19.wikimedia.org:3128 (squid/2.6.STABLE16), 1.0 sq39.wikimedia.org:80 (squid/2.6.STABLE16)
Connection: keep-alive
I get it from sq19 again, this time even older (maybe I requested a slightly different URL?). Ctrl-F5 in Firefox gets me the same page consistently from sq19.
Actually, this ties into my last post, doesn't it? Do we purge the URLs of redirects to the edited page? Looking at the code, I very much don't see that anywhere. That would explain a fair percentage of outdated cache hits, if true, given the prevalence of redirects.
On 1/13/08, Simetrical Simetrical+wikilist@gmail.com wrote:
Actually, this ties into my last post, doesn't it? Do we purge the URLs of redirects to the edited page? Looking at the code, I very much don't see that anywhere. That would explain a fair percentage of outdated cache hits, if true, given the prevalence of redirects.
Ah, I guess it's in the job queue with page links and so on. If that's right, I think it would be a good idea to purge redirects immediately instead of deferring it. Surely linking pages' Squid purges only need to be deferred because there's no point in doing it until the Apaches and MySQLs have updated the linking pages -- this isn't relevant to redirects. Or is there a performance issue with sending possibly hundreds of simultaneous Squid purge requests synchronously on every edit made to some pages?
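To make concrete what I'm picturing, here's a rough sketch of purging redirect URLs at edit time. I'm guessing at the exact table and helper names (the redirect table, SquidUpdate::purge), so treat it as pseudocode rather than a patch:

<?php
// Sketch only - not an actual core change; names may not match trunk.
function purgeRedirectsTo( Title $title, $db ) {
	// Find all pages that redirect to the edited title.
	$res = $db->select(
		array( 'redirect', 'page' ),
		array( 'page_namespace', 'page_title' ),
		array(
			'rd_from = page_id',
			'rd_namespace' => $title->getNamespace(),
			'rd_title'     => $title->getDBkey(),
		),
		__METHOD__
	);

	$urls = array();
	foreach ( $res as $row ) {
		$redir  = Title::makeTitle( $row->page_namespace, $row->page_title );
		$urls[] = $redir->getInternalURL();
	}

	if ( $urls ) {
		SquidUpdate::purge( $urls ); // send a purge for each redirect URL
	}
}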
On 1/13/08, Gregory Maxwell gmaxwell@gmail.com wrote:
He shouldn't have to refresh. The page should have been purged from the wikimedia caches by mediawiki when it was changed.
Whatever caused the edit triggered purge to get missed needs to be fixed. I have no doubt that there are a few code pathways where some needed purges may be missed, but it seems that users have adapted by adding a fair amount of paranoid, pointless refreshing and purging. Better to make the problems more exposed so they can be found and fixed.
Does it take such a negligible amount of time for Squid cache purges to occur? How long is it from the time the purge code is run on the application servers until the Squids in all clusters have discarded the old cache?
I don't think there are any article update code paths where Title::purgeSquid isn't called. It's called at a very low level, Article::doEdit, which should be the only thing used to modify articles. Glancing over the code, though, I notice that the Squids purge by literal URL. The URLs purged are from $title->getInternalURL() and $title->getInternalURL( 'action=history' ), and variants for languages with variants.
Could there be some URL forms that are being missed? For instance, are long URLs purged? (Are they cached at all? It seems like it, from the HTTP headers.)
Simetrical wrote:
On 1/13/08, Gregory Maxwell gmaxwell@gmail.com wrote:
Whatever caused the edit triggered purge to get missed needs to be fixed. I have no doubt that there are a few code pathways where some needed purges may be missed, but it seems that users have adapted by adding a fair amount of paranoid, pointless refreshing and purging. Better to make the problems more exposed so they can be found and fixed.
Does it take such a negligible amount of time for Squid cache purges to occur? How long is it from the time the purge code is run on the application servers until the Squids in all clusters have discarded the old cache?
A fraction of a second, mostly determined by the propagation delay of the packet to reach the Squid.
On 13/01/2008, Domas Mituzas midom.lists@gmail.com wrote:
Quite a few people also noticed the huge flamewar of 'oh noes, a dev enabled a feature despite our lack of consensus'. Now we're sending people to the board for all the minor changes they ask for :-)
The whining will likely be channelled through en:wp ArbCom, who hate silly makework for the sake of bureaucracy at least as much as the devs do. Someone currently on the AC (not me) needs to get on #wikimedia-tech and work out what stuff sanely falls into the 'AC request first' category. This should avoid hindering simple stuff while supplying sufficient process to quell the querulous.
- d.
Bah, now I can't clear my cache whenever I edit a .js file of mine, the old version just gets stuck in the cache. Others have had this problem at least with .css.
I think refreshing should be enabled again for pages viewed in "raw" mode (.js and .css in particular).
On 1/14/08, Voice of All jschulz_4587@msn.com wrote:
Bah, now I can't clear my cache whenever I edit a .js file of mine, the old version just gets stuck in the cache. Others have had this problem at least with .css.
On Jan 14, 2008, at 5:37 PM, Huji wrote:
I think refreshing should be enabled again for pages viewed in "raw" mode (.js and .css in particular).
or .js/.css pages detected as such, and have their raw URLs purged... um...
On 15/01/2008, Domas Mituzas midom.lists@gmail.com wrote:
On Jan 14, 2008, at 5:37 PM, Huji wrote:
I think refreshing should be enabled again for pages viewed in "raw" mode (.js and .css in particular).
or .js/.css pages detected as such, and have their raw URLs purged... um...
Not just the js/css pages themselves but also the generated js/css files which are built by MediaWiki from one or more editable js/css pages.
Andrew Dunbar (hippietrail)
On Jan 14, 2008 10:37 AM, Huji huji.huji@gmail.com wrote:
I think refreshing should be enabled again for pages viewed in "raw" mode (.js and .css in particular).
Repeat after me: Refreshing is not the solution.
In the environment here squid is effectively part of MediaWiki. If mediawiki isn't flushing squid, it's broken. That you could refresh to skirt brokenness was itself a bug. Most users won't know to refresh and a culture of pervasive refreshing would be terrible for performance.
Having highly effective caching is essential for long-term performance. Without it, Wikimedia has to spend more money on webserver resources (which are VASTLY more expensive per object served than caches), bandwidth is wasted transferring data to remote clusters, and the ability to benefit from geographic distribution to reduce client latency is seriously reduced.
(and keep up the good reports of things which are not flushing on their own!)
So should I (or someone) adjust Title::purgeSquid so that it purges redirects to the page too? And figure out if generated CSS/JS pages are being correctly purged, and if not fix those?
On 16/01/2008, Steve Summit scs@eskimo.com wrote:
Simetrical wrote:
So should I (or someone) adjust Title::purgeSquid so that it purges redirects to the page too?
If my opinion counts, I would say, yes, certainly! (And even Please!)
With hindsight I would say that the raw & generated CSS and JavaScript should have been purged from the beginning. If there is some technical reason why they cannot be, I would consider this a major bug now that this new caching scheme is in operation.
Andrew Dunbar (hippietrail)
On 1/15/08, Steve Summit scs@eskimo.com wrote:
Simetrical wrote:
So should I (or someone) adjust Title::purgeSquid so that it purges redirects to the page too?
If my opinion counts, I would say, yes, certainly! (And even Please!)
I was pretty reluctant to try it myself, because I had never looked at that code and didn't know how to do it properly. But Tim has done it now, in r29893.
I wrote:
Simetrical wrote:
I was pretty reluctant to try it myself, because I had never looked at that code and didn't know how to do it properly. But Tim has done it now, in r29893.
Splendid! Thanks, all.
Hate to pester, but when is r29893 slated to go live on en.wp? (I assume it hasn't yet, because I and others are still seeing stale pages, at least when using shortcuts like WP:RD/S.)
On Jan 28, 2008 10:47 PM, Steve Summit scs@eskimo.com wrote:
I wrote:
Simetrical wrote:
I was pretty reluctant to try it myself, because I had never looked at that code and didn't know how to do it properly. But Tim has done it now, in r29893.
Splendid! Thanks, all.
Hate to pester, but when is r29893 slated to go live on en.wp? (I assume it hasn't yet, because I and others are still seeing stale pages, at least when using shortcuts like WP:RD/S.)
It went live days ago. Perhaps it's not working correctly, or perhaps you're seeing some other problem.
Simetrical wrote:
On Jan 28, 2008 10:47 PM, Steve Summit scs@eskimo.com wrote:
Hate to pester, but when is r29893 slated to go live on en.wp? (I assume it hasn't yet, because I and others are still seeing stale pages, at least when using shortcuts like WP:RD/S.)
It went live days ago. Perhaps it's not working correctly, or perhaps you're seeing some other problem.
Try loading http://en.wikipedia.org/wiki/WP:RD/S, in lynx. When I do it just now, I get a version dated 16:15, on January 25.
I have no idea why stale pages are more prevalent in lynx, but they really do appear to be.
Steve Summit wrote:
Simetrical wrote:
On Jan 28, 2008 10:47 PM, Steve Summit wrote:
Hate to pester, but when is r29893 slated to go live on en.wp? (I assume it hasn't yet, because I and others are still seeing stale pages, at least when using shortcuts like WP:RD/S.)
It went live days ago. Perhaps it's not working correctly, or perhaps you're seeing some other problem.
Try loading http://en.wikipedia.org/wiki/WP:RD/S, in lynx. When I do it just now, I get a version dated 16:15, on January 25.
I have no idea why stale pages are more prevalent in lynx, but they really do appear to be.
Using a different browser, you'll send different requests (Accept-Encoding?) and so access a different set of cached results. Having few people using lynx probably also means the stale pages hang around longer.
Platonides wrote:
Steve Summit wrote:
Try loading http://en.wikipedia.org/wiki/WP:RD/S, in lynx. When I do it just now, I get a version dated 16:15, on January 25.
I have no idea why stale pages are more prevalent in lynx, but they really do appear to be.
Using a different browser, you'll send different requests (Accept-Encoding?) and so access a different set of cached results.
Aha! I shouldn't have said "I have no idea", because that's what I kind of suspected. But I'd never managed to confirm that there's caching at that level of granularity and header visibility. Thanks.
Having few people using lynx probably also means the stale pages hang around longer.
Sure, but: why are the pages still stale at all?
Domas Mituzas wrote:
Just wanted to share some of bits we've been doing this week - it was hopping around and analyzing our performance and application workflow from multiple sides (kind of "Hello 2008!!!" systems performance review).
[snip]
Good work, men!
*pins medals on everybody*
-- brion