On Tuesday, Oct 21, 2003, at 11:38 US/Pacific, Poor, Edmund W wrote:
> The kerfuggle touched my what? I dunno :-)
>
> I was under the impression that EVERY TIME a user requests a page, the software has to double-check each internal link for the presence or absence of the linked page.
>
> My ancient Greek cry of jubilation only applies if this is the case...
Well, I generally recommend reading the software in detail to see what it does ;) but here's a summary for those who wish to remain partially sane:
* Grab page info from db
* Are you anonymous user? if so:
  * check for cached output html
  * compare its timestamp against cur_touched
  * if ok, send that out and leave
* Check if client told us it's got a locally-cached copy of the page
  * compare its timestamp against cur_touched
  * if ok, say to use the cached copy and leave
* Do a quick query to the link tables for all existing and non-existing links from this page
* Parse the wikitext
  * As we find each wikilink, look it up in the lists we already got to see if it exists
  * On the off chance the tables are corrupt, anything that wasn't listed will be checked individually against cur
* Output
* If anon, save output html in the server cache.
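For the terminally curious, here's the same flow as a little Python sketch, with in-memory dicts standing in for cur, the link tables, and the server-side html cache. The real thing is PHP talking to MySQL, so every name below (view_page, html_cache, and friends) is invented for illustration, not what's actually in the source:

    import re
    import time

    # In-memory stand-ins for the real MySQL tables; invented for illustration.
    cur = {}          # title -> {"text": wikitext, "touched": timestamp}
    links = {}        # title -> set of linked titles known to exist
    brokenlinks = {}  # title -> set of linked titles known to be missing
    html_cache = {}   # title -> (timestamp, html); used for anons only

    def view_page(title, is_anon, client_cache_time=None):
        page = cur[title]                           # grab page info from db

        # Anonymous users may get the server-cached html if it's still fresh
        if is_anon and title in html_cache:
            ts, html = html_cache[title]
            if ts >= page["touched"]:               # compare against cur_touched
                return html                         # send that out and leave

        # Client claims a locally-cached copy (think If-Modified-Since)
        if client_cache_time is not None and client_cache_time >= page["touched"]:
            return "304 Not Modified"               # say to use the cached copy

        # One quick query up front for all existing and non-existing links
        good = links.get(title, set())
        bad = brokenlinks.get(title, set())

        def render_link(match):
            target = match.group(1)
            if target in good:
                exists = True
            elif target in bad:
                exists = False
            else:
                # Link tables corrupt/stale: check this one against cur
                exists = target in cur
            css = "exists" if exists else "new"
            return '<a class="%s">%s</a>' % (css, target)

        # "Parse the wikitext": here just [[...]] links, nothing else
        html = re.sub(r"\[\[(.+?)\]\]", render_link, page["text"])

        if is_anon:
            html_cache[title] = (time.time(), html)  # save in server cache
        return html

    # e.g.
    cur["Foo"] = {"text": "See [[Bar]] and [[Baz]].", "touched": time.time()}
    cur["Bar"] = {"text": "...", "touched": time.time()}
    links["Foo"] = {"Bar"}
    brokenlinks["Foo"] = {"Baz"}
    print(view_page("Foo", is_anon=True))

The point being: on the hot path, link existence costs two set lookups per link rather than one database query per link; the individual cur check is only the paranoid fallback.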
On page creation and deletion, we update the cur_touched timestamp of all pages that link to the given page, thus forcing them to re-render, and we update the link tables to reflect the new existence or non-existence of the page.
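Continuing the same toy model (same invented dicts as above, not the real schema), the create/delete hooks would look roughly like:

    def backlinks(target):
        # Titles of pages whose link tables mention `target`
        return [t for t in cur
                if target in links.get(t, set())
                or target in brokenlinks.get(t, set())]

    def on_page_created(target):
        for t in backlinks(target):
            cur[t]["touched"] = time.time()              # force re-render
            brokenlinks.get(t, set()).discard(target)    # no longer broken
            links.setdefault(t, set()).add(target)       # link now exists

    def on_page_deleted(target):
        for t in backlinks(target):
            cur[t]["touched"] = time.time()              # force re-render
            links.get(t, set()).discard(target)          # no longer exists
            brokenlinks.setdefault(t, set()).add(target) # now broken

So the cost of keeping page views cheap is paid at edit time: one pass over the backlinks to bump cur_touched and shuffle entries between the two link tables.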
-- brion vibber (brion @ pobox.com)