Hi,
When I clicked on a link to [[Indian Institutes of Technology]] I
got a popup saying: "Alert: The file /wiki/Indian_Institutes_of_Technology
could not be found. Please check the name and try again later" or
something like that. Repeated several times, same result. Didn't
get this error on any other page. After a while I clicked on some
link which redirected to [[Indian Institutes of Technology]], and
it worked. Then I tried the original link and it worked too. Some
cache problem?
Arvind
--
It's all GNU to me
Experimentally we've loaded 2.6.3 kernel packages onto coronelli, one
of the squid caches. If it dies completely, we can swap the IP back to
browne to take over. If coronelli reboots, it should revert to the
2.4.x it had before (hopefully).
If things seem positive and more performant, we'll upgrade the others
in turn.
I'm using the test packages for Red Hat 9 from:
http://people.redhat.com/arjanv/2.5/
-- brion vibber (brion @ pobox.com)
Hi,
can somebody with access to the logs check if a user
in the Spanish wikipedia named Johnny H. is in fact
a banned user named Michael? Another user made the
accusation, but offered no proof.
AstroNomo/AstroNomer
Is there more to be done?
The following code in Parser.php, starting at line 165, has a problem:
the 'else' needs a left brace following it. But even after fixing that,
there are still execution errors. --Nick Pisarro
$doesexist = false ;
if ( $doesexist ) {
    $sql = "SELECT l_from FROM links WHERE l_to={$id}" ;
} else
    $sql = "SELECT cur_title,cur_namespace FROM cur,brokenlinks WHERE bl_to={$id} AND bl_from=cur_id" ;
}
$res = wfQuery ( $sql, DB_READ ) ;
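For reference, a minimal syntax-corrected version of that block, keeping the queries exactly as quoted, would read as follows. This only fixes the brace problem; the execution errors reported above would still need separate investigation.

```php
$doesexist = false ;
if ( $doesexist ) {
    $sql = "SELECT l_from FROM links WHERE l_to={$id}" ;
} else {   // the missing left brace belongs here
    $sql = "SELECT cur_title,cur_namespace FROM cur,brokenlinks " .
           "WHERE bl_to={$id} AND bl_from=cur_id" ;
}
$res = wfQuery ( $sql, DB_READ ) ;
```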
I guess coronelli is set up differently than the other machines?
In any event, a billion trillion messages per second are flashing
on the console there.
Second, it turns out not to be true that these messages appear only
on the first console. I can hit alt-f2, etc., to get to other consoles,
but on all the machines this stuff flashes by there too.
This is only a problem when I'm physically standing here in the
colo, but it sure sucks.
I'm going to be taking down some of the servers in turn, to unplug
them and put them on the power port, so that we'll have developer
access to power cycle without having the delay of contacting me.
I'll take each one down and back up in turn, so as not to disrupt
things any more than absolutely necessary. The db server is already
plugged in, so it won't be that one.
--Jimbo
"Erik Moeller" <erik_moeller(a)gmx.de> schrieb:
> The MediaWiki FT search has locking issues. It's very fast for a normal
> wiki, but for a huge one like Wikipedia it tends to get in a deadlock
> state. That's why we have to disable it on Wikipedia. Some other queries
> could also be made faster. Many of our queries have been optimized now and
> every single one should eventually return results with a response time of
> milliseconds.
Does this mean that we will NOT get full-text search back soon? That would be
a shame. If it is indeed the case that even with another DB server we will
still have locking problems with searches, we might want a different
architecture: a dedicated machine for search (plus several other functions,
such as special pages, SQL queries, and perhaps even Whatlinkshere) whose
copy of the database is updated only once per second/minute/hour/whatever
rather than in real time.
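The idea of serving searches from a deliberately stale copy can be sketched in a few lines of Python (all names here are illustrative, not MediaWiki code): queries run against a snapshot, and only an explicit refresh, which a timer or cron job would drive in practice, swaps in fresh data.

```python
import threading

class StaleSearchIndex:
    """Answers searches from a snapshot that is rebuilt only on
    refresh(), so readers never contend with live writes."""

    def __init__(self, fetch_titles):
        self._fetch_titles = fetch_titles  # callable returning current titles
        self._lock = threading.Lock()
        self._snapshot = frozenset(fetch_titles())

    def refresh(self):
        # Build the new snapshot outside the lock, then swap it in atomically.
        fresh = frozenset(self._fetch_titles())
        with self._lock:
            self._snapshot = fresh

    def search(self, term):
        term = term.lower()
        with self._lock:
            snapshot = self._snapshot
        return sorted(t for t in snapshot if term in t.lower())

titles = ["Main Page", "Indian Institutes of Technology"]
idx = StaleSearchIndex(lambda: list(titles))
print(idx.search("indian"))   # ['Indian Institutes of Technology']
titles.append("Indian Ocean")
print(idx.search("indian"))   # unchanged: the snapshot is still stale
idx.refresh()                 # simulated timer tick
print(idx.search("indian"))   # now includes 'Indian Ocean'
```

The snapshot swap is a single reference assignment under the lock, so searches stay fast no matter how expensive the rebuild is.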
Andre Engels
There is a Historical Event Markup and Linking Project
http://heml.mta.ca/heml-cocoon/
They try to address the issues mentioned here.
There is a demo of an animated map there as well.
They chose SVG as the medium, which seems a good choice.
I know there are (were?) problems with SVG viewing in Mozilla,
but I'm sure that will be fixed.
Erik Zachte
I notice that the latest Wikipedia home page does inclusions by using the
{{msg:...}} "transclusion" variable.
It seems to me that inclusions like this should come from the
$wgMetaNamespace or another namespace, not the MediaWiki namespace. The
MediaWiki namespace is really for running the software, and should not
contain site-specific, user-modifiable data.
Perhaps we need a new set of variables for these operations.
How about:
{{incl:article}} # include from the default namespace
{{incl:namespace:article}} # include an article from 'namespace'.
{{incl:language:namespace:article}}
{{inclnw: ...}} # use the nowiki tag
{{repl: ...}} # new version of {{subst:}}
This provides a more general mechanism without making the old one obsolete,
and it gives a migration path for the type of include used by Main_page.
It also uses the same naming convention as the [[...]] markup.
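To make the proposal concrete, a parser for the hypothetical {{incl:...}} body could split on colons and treat omitted parts as defaults. This is purely an illustration of the proposed syntax, not actual MediaWiki code:

```python
def parse_incl(inner):
    """Split the body of a hypothetical {{incl:...}} directive into
    (language, namespace, article); omitted parts come back as None."""
    parts = inner.split(":")
    if len(parts) == 1:        # {{incl:article}}
        return None, None, parts[0]
    if len(parts) == 2:        # {{incl:namespace:article}}
        return None, parts[0], parts[1]
    if len(parts) == 3:        # {{incl:language:namespace:article}}
        return parts[0], parts[1], parts[2]
    raise ValueError("too many ':' separators in " + repr(inner))

print(parse_incl("Main_page"))              # (None, None, 'Main_page')
print(parse_incl("Template:Main_page"))     # (None, 'Template', 'Main_page')
print(parse_incl("en:Template:Main_page"))  # ('en', 'Template', 'Main_page')
```

Filling in the defaults right-to-left matches how the [[...]] link syntax already reads, which keeps the two notations consistent.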
Nick Pisarro