The new template inclusion tracking code is now live. Here's the
description of it from RELEASE-NOTES in case you missed it:
* Added templatelinks table, to track template inclusions. User-visible effects will be:
* (inclusion) tag for inclusions in Special:Whatlinkshere
* More accurate list of used templates on the edit page
* More reliable cache invalidation when templates outside the template
namespace are changed
There were a few teething problems when it was introduced. The following
are now fixed:
* Random fatal errors from OutputPage.php line 230, BoardVote.php line 6
and LinksUpdate.php line 158, due to sync problems on srv11, srv50 and
some other servers. Root cause unclear.
* Save and some other functions broken on yaseo wikis (ja, th, ms and ko
Wikipedia); I had forgotten to apply the schema update.
* Missing page list from category pages, due to index.php being pinned
at version 1.111 in the live copy.
* Missing or broken category list on image pages, redirects and preview
pages, bug.
* Undelete broken, bug.
* Special:Whatlinkshere for certain templates showing only half the
requested number of pages, bug.
Please mark any related bug reports (on or off bugzilla) as resolved,
unless they can be reproduced at the present time.
The (inclusion) tag in [[Special:Whatlinkshere]] is currently based on
whether the target page is in the template namespace or not. This will
progressively be fixed over the next week or so, as refreshLinks.php
works its way through several million articles.
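Concretely, the interim test amounts to something like this (an
illustrative sketch, not the live code; the function name is invented):

// Interim rule: until refreshLinks.php has rebuilt the templatelinks
// rows, the (inclusion) tag is inferred from the target's namespace
// alone. $target is a Title object.
function wfShowInclusionTag( $target ) {
	// NS_TEMPLATE is MediaWiki's template namespace constant
	return $target->getNamespace() == NS_TEMPLATE;
}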
Here's the more technically-oriented commit message describing my changes:
* Added templatelinks table. The table currently represents a literal
list of templates included from each article. That is, the table
contains pages which were actually loaded during parsing, not the markup
which went into resolving their names.
* Ended the role of $wgLinkCache in link updates. Instead, links (and
related entities) are registered in the ParserOutput object during a
parse. The LinksUpdate constructor now takes a ParserOutput object as a
parameter; $wgLinkCache is still used, but only as a cache of article
IDs. (A sketch of the new flow follows this list.)
* Because the link list is now saved and restored in the parser cache,
meta tag keywords now work on parser cache hits. Some refactoring took
place in this area.
* Rendering of the HTML for category links has moved from Parser to
OutputPage.
* Did some general pottering around in Article.php, such as allowing an
Article object to be created with a specified revision ID, thereby
optionally removing the dependence on $wgRequest. Not used at the
current time.
* A few documentation tweaks.
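To make the second point concrete, the new flow looks roughly like this
(a sketch; check your checkout for the exact signatures):

// Given wikitext $text, a Title $title, and the global $wgUser.
// Links are collected into the ParserOutput during the parse, and
// LinksUpdate consumes that object instead of reading $wgLinkCache.
$parser = new Parser();
$options = ParserOptions::newFromUser( $wgUser );
$parserOutput = $parser->parse( $text, $title, $options );

// The ParserOutput carries the link, category and template lists, so
// it can be stored in the parser cache and replayed on cache hits.
$update = new LinksUpdate( $title, $parserOutput );
$update->doUpdate();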
-- Tim Starling
This comment on slashdot isn't ignorant:
http://slashdot.org/comments.pl?sid=172759&cid=14379365
"
I've donated to Wikipedia twice in a year. At this point, I've given
probably four times the amount of money that I would for, say,
Encarta. I love Wikipedia, but 1) I don't have a permanent copy of it
on a DVD, like I would for Encarta, and 2) I feel like I'm being
"forced" to buy the latest upgrade of Wikipedia when they set up these
pleas for donations, since the performance of my encyclopedia directly
depends on these fund drives.
"
I know there's been a lot of discussion about making a WP 1.0 of
reviewed articles, but what do people think of giving out a DVD of
current text for donations over, say, $50?
This would be whatever's current at a point in time, with no guarantee
about the content accuracy.
I know there'd be some initial hardware cost for a mass burner and
some admin overhead, but I can see the point that donating to a
service is something done a little skeptically.
Thanks,
Jeremy
Hi everyone,
after my latest f*ckup with Special:Newimages, I'm trying to do penance
by merging Newimages and Imagelist (which both do basically the same
thing) into a new special page.
Thus, I have uploaded SpecialFilelist /to the extension module/ (so
no one can say I'm contaminating the main module again :-)
It *does* contain my dreaded "images of user" function, but it is
deactivated until img_user gets an index.
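(For the curious, the lookup is roughly the query below; a sketch, not
the extension's literal code, with illustrative names. Without an index
on image.img_user it has to scan the whole image table, hence the
feature staying off.)

$dbr =& wfGetDB( DB_SLAVE );
// Find the uploads of the user with ID $userId; needs an index on
// img_user to avoid a full scan of the image table.
$res = $dbr->select( 'image',
	array( 'img_name', 'img_timestamp' ),
	array( 'img_user' => $userId ),
	'SpecialFilelist::imagesOfUser',
	array( 'ORDER BY' => 'img_timestamp DESC', 'LIMIT' => 50 ) );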
Please have a look (especially at the "previous" link, it's broken somehow).
Magnus
We are running MediaWiki on a Windows Server at
http://www.relisoft.com/wiki
- MediaWiki 1.5.4
- PHP 5.0.5
- MySQL 4.0.18-nt
First problem: When I access it from one of my home machines, I see a
completely blank main page. Other people can see the full page. I can see
some of the other pages if I specify the title.
Second problem: I get the error:
Undefined index: REQUEST_URI in includes\WebRequest.php, line 284.
This is the offending line of code:
/**
 * Return the path portion of the request URI.
 * @return string
 */
function getRequestURL() {
	return $_SERVER['REQUEST_URI'];
}
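(For context: IIS does not populate REQUEST_URI, which is an Apache
convention, so the index really is undefined there. A guarded fallback
along these lines might help; a sketch, untested on this setup:)

function getRequestURL() {
	if ( isset( $_SERVER['REQUEST_URI'] ) ) {
		return $_SERVER['REQUEST_URI'];
	}
	// IIS fallback: rebuild the path from SCRIPT_NAME plus the query
	// string, both of which IIS does provide.
	$url = $_SERVER['SCRIPT_NAME'];
	if ( isset( $_SERVER['QUERY_STRING'] ) && $_SERVER['QUERY_STRING'] != '' ) {
		$url .= '?' . $_SERVER['QUERY_STRING'];
	}
	return $url;
}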
Is this a problem with PHP/MediaWiki mismatch?
Bartosz
For those interested, the namespace manager from the WIKIDATA branch
(intended to ship with MW 1.6) now has basic user docs at:
http://meta.wikimedia.org/wiki/Help:Namespace_manager
Please comment on the talk page if you think any particular aspect needs
more explanation.
Erik
The site was offline recently for about 20-30 minutes, with some additional
downtime of uploads only, while our upload fileserver amane was broken.
Quick summary of affairs before I run off to dinner:
* amane's mount of zwinger:/home had broken in some way, such that accesses
were hanging
** amane's syslog shows a large number of RPC failures for zwinger's NFS for the
last few hours
* user ssh logins to amane failed due to the broken /home
* lighttpd ran out of connections, with lots of stuck php processes, likely
because thumbnail rendering used files on /home
* amane's nfs server still worked, so the site ran internally
* root ssh login worked, and I was able to kill lighty and remount /home
* however, shortly after I tried restarting lighty, it died more thoroughly:
I was unable to continue the ssh session (stuck) and new ssh sessions didn't
get past opening port 22
* at this point amane's nfs died too
* can't find anything in syslog relating to that
* from this point the whole site was broken
* there's a donation link on the error page, which points to a wiki page so it's
also broken
* tried to change the error page to link to the separate fundraising server, but
the update didn't quite take before we finished
* we had the colo reboot the machine
* they had to call us back for more info because the machine was not properly
labeled
* amane is not on the serial console server!
* after rebooting, things settled down after a few minutes
* site seems ok at the moment
Recommendations for future:
* make sure all servers are marked
* important machines *must* be on the serial console when installed
* the site should still work if images are offline. Check code that works with
image files to make it fail more gracefully (a sketch follows this list)
* check NFS mount settings; try to set them up in a more failure-friendly way
and of course
* try to get a backup image server online
* have a way to switch to it automatically
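As a rough illustration of the "fail more gracefully" point (a sketch,
not actual MediaWiki code; the function name is invented):

// The idea: a dead image mount should degrade the page, not hang the
// whole request.
function wfSafeImageSize( $path ) {
	// Note: file_exists() on a hung NFS mount can itself block, so a
	// real fix also needs mount-level timeouts (see the NFS point above).
	if ( !@file_exists( $path ) ) {
		return false; // caller renders the page without the image
	}
	return @getimagesize( $path );
}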
-- brion vibber (brion @ pobox.com)
On 1/3/06, Jan Vanoverpelt <jan.vanoverpelt(a)gmail.com> wrote:
> >
> > I would like to place a (hyper)link on an image so that when the reader
> > clicks on the image, he is redirected to a "normal" wiki-page instead of
> > to
> > the imagepage of the image. But I do NOT want to put a textual link on or
> > near the image (the link has to be placed on the image itself). Is it
> > possible to make such kind of hotspot on an image in a wiki and if
> > yes...how?
http://meta.wikimedia.org/wiki/User:Jporter/iconmap_extension
Icon images linking to MediaWiki pages.
Amgine
Is there any reason why gmane encrypts the e-mails on WikiMedia lists? I
would think that since the official archives don't do it, gmane
shouldn't either.
(I believe if you contact them about it, they can change it.)
-- Jamie
http://endeavour.zapto.org/astro73/
Hoi,
Often this question about ISO 639 pops up, and it is confused even more
by SIL codes. Today the question about the mch code was raised.
http://www.ethnologue.com/show_language.asp?code=mch clearly shows
that SIL now uses these codes on their website. Consequently, there is
no need for the old SIL codes anymore.
Thanks,
GerardM
Please consider my request at Wikipedia:Requests for arbitration/Developer help needed
The problem is that semi-protection as now coded relies on a percentage
of users rather than on a measurement of edits. This permits the
stockpiling of sleeper accounts, which can later be activated as
sockpuppets, defeating semi-protection.
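For illustration, an edit-count test might look like this (a
hypothetical sketch with invented names, not MediaWiki's actual code):

// A sleeper account with zero edits fails this check no matter how
// long ago it was registered.
function wfCanEditSemiProtected( $user, $minEdits = 10 ) {
	$dbr =& wfGetDB( DB_SLAVE );
	$edits = $dbr->selectField( 'revision', 'COUNT(*)',
		array( 'rev_user' => $user->getID() ),
		'wfCanEditSemiProtected' );
	return $edits >= $minEdits;
}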
Fred