Luca de Alfaro wrote:
Dear Platonides,
many thanks for pointing us to the dump!
This is extremely welcome. It does not seem so large, so we will start
experimenting with it immediately. I will report back once we get it
running, along with some stats on how long the analysis took.
How did you find it? At download.mediawiki.org I saw only the
interrupted dump in progress; how does one get hold of older dumps?
Visit the dump page;
while is_broken( dump )
    follow the link at the top, "previous dump from ..."
Or just remove the last URI component, which shows you a folder listing
with the archived dumps.
Yes, I know the distinction between computer-admins and wiki-admins
(sysops, bureaucrats, etc.).
Who are the latter for Wikibooks?
Just wanted to make it clear.
See
http://en.wikibooks.org/wiki/Special:ListUsers/sysop
Why would you need page hits?
If anything relies on page hits to do a background computation, it
should be using the job queue instead. On small systems the job queue
runs piggybacked on page hits; on bigger ones (e.g. Wikipedia) there
are boxes dedicated to it (maintenance/runJobs.php).
Front-end caching would interfere with it, anyway.
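To make the two modes concrete, here is a toy sketch of the idea in Python. This is not MediaWiki's actual JobQueue API; the class and method names are made up for illustration. The point is that jobs are deferred callables: a small wiki drains a few of them on each page hit, while a big installation has a dedicated runner that drains the whole queue in a loop.

```python
from collections import deque

class JobQueue:
    """Toy job queue: work is deferred as callables and drained later."""

    def __init__(self):
        self._jobs = deque()

    def push(self, job):
        """Enqueue a deferred computation instead of doing it inline."""
        self._jobs.append(job)

    def run_some(self, limit=1):
        """Small-wiki mode: drain a few jobs piggybacked on a page hit."""
        done = 0
        while self._jobs and done < limit:
            self._jobs.popleft()()
            done += 1
        return done

    def run_all(self):
        """Dedicated-runner mode: loop until empty, as a runJobs-style
        maintenance script would."""
        return self.run_some(limit=len(self._jobs))

# Usage: queue three pieces of background work instead of doing them
# during the request, then drain them in the two modes.
results = []
q = JobQueue()
for n in range(3):
    q.push(lambda n=n: results.append(n))
q.run_some(limit=1)   # one job runs on this "page hit"
q.run_all()           # a dedicated runner drains the rest
```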
I think you would need to restrict the expiry time for the cached page
to the time at which the editor trust is expected to differ. If the
page is edited before then, it is updated (and a new expiry time is
set). If the cache expires, the next hit regenerates it. You can
always set an upper bound on the caching time.
This is the mechanism used for some magic words such as
{{currentmonth}}. Note that I'm not talking only about the
browser/squid cache but also about the parser cache.
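The expiry mechanism above can be sketched as a small TTL cache. This is a toy model, not MediaWiki's ParserCache; the names (PageCache, MAX_TTL) are invented for illustration. It shows the three behaviors described: a per-entry expiry time, immediate invalidation on edit, and an upper bound on how long anything may stay cached.

```python
import time

MAX_TTL = 24 * 3600  # upper bound on how long a rendered page may stay cached

class PageCache:
    """Toy parser-cache model: entries expire when the stored value is
    expected to change, and are invalidated immediately on edit."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock           # injectable clock, handy for testing
        self._store = {}              # title -> (rendered_html, expires_at)

    def put(self, title, rendered, ttl):
        """Cache a rendered page; the caller picks a ttl matching when
        the content (e.g. a trust value) is expected to differ."""
        ttl = min(ttl, MAX_TTL)       # enforce the upper bound
        self._store[title] = (rendered, self._clock() + ttl)

    def get(self, title):
        """Return the cached rendering, or None if missing/expired
        (a miss means the next hit must regenerate the page)."""
        entry = self._store.get(title)
        if entry is None:
            return None
        rendered, expires_at = entry
        if self._clock() >= expires_at:
            del self._store[title]    # expired: force regeneration
            return None
        return rendered

    def invalidate(self, title):
        """Called on edit: drop the entry so the page is re-rendered
        and stored with a fresh expiry time."""
        self._store.pop(title, None)

# Usage with a fake clock so expiry is deterministic.
now = [0.0]
cache = PageCache(clock=lambda: now[0])
cache.put("Main_Page", "<html>v1</html>", ttl=60)
hit = cache.get("Main_Page")          # served from cache
now[0] = 61.0
miss = cache.get("Main_Page")         # expired: None, must regenerate
```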