Hi,
I want to add a Google search to my Wiki system, similar to the Google link
displayed by http://creatures.wikicities.com/wiki/Main_Page
What code do I need to do this?
Your help is very much appreciated,
Alison
Neil Harris wrote:
<snip>
> * one-click page-move reverting for admins
> * a page move log, working on the same principles as the deletion log
Hello,
The MediaWiki development version already logs moves in Special:Log;
a one-click revert still has to be implemented, though.
Note: you can move a page over an existing redirect pointing to it,
without deleting the redirect first, provided the redirect has no history:
[[Foo]] contains: #REDIRECT [[bar]]
Move [[bar]] to [[Foo]].
cheers,
--
Ashar Voultoiz - WP++++
http://en.wikipedia.org/wiki/User:Hashar
http://www.livejournal.com/community/wikitech/
Servers in trouble ? noc (at) wikimedia (dot) org
Hi,
I am in the process of creating an extension tag to suit my project
requirements.
Actually, we have an XML structure which should be processed using
templates inside this extension implementation.
Briefly, I would like to create the pieces of a table using different
templates.
Example:
If I need an HTML table with the header "Agents", as follows:
Agents
Agent1
Agent2
Agent3
.
.
AgentN
For the above table with one column and N rows, the wikitext would look
something like:
{| border="1" cellpadding="2"
! Agents
|-
| Agent1
|-
| Agent2
|-
| Agent3
|-
.
.
| AgentN
|-
|}
In my case, I don't know the value of N; it depends on the values from the
XML.
So I would like to create a template that processes each row based on the
input from the XML, applied iteratively for N rows.
Wikitext for this template would look something like:
| {{{parameterName}}}
|-
where parameterName's value would be Agent1, Agent2, ... AgentN.
But the current MediaWiki parser won't parse the above template properly,
because it expects complete table syntax, not pieces of it.
I am wondering if there is any workaround to achieve this.
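One workaround I am considering, in case it is useful to anyone: have the
extension read the XML itself and return a complete HTML table, so the
parser never sees a table fragment. A rough, untested sketch follows; the
<agenttable> tag name and the XML layout are invented for illustration:

$wgExtensionFunctions[] = 'wfAgentTableSetup';

function wfAgentTableSetup() {
    global $wgParser;
    # Register <agenttable>...</agenttable> as an extension tag.
    $wgParser->setHook( 'agenttable', 'wfRenderAgentTable' );
}

function wfRenderAgentTable( $input, $argv, &$parser ) {
    # $input is the raw XML between the tags, e.g.
    # <agents><agent>Agent1</agent><agent>Agent2</agent></agents>
    $rows = '';
    if ( preg_match_all( '!<agent>(.*?)</agent>!s', $input, $matches ) ) {
        foreach ( $matches[1] as $name ) {
            $rows .= '<tr><td>' . htmlspecialchars( trim( $name ) )
                . "</td></tr>\n";
        }
    }
    # Return finished HTML; the parser leaves extension output alone.
    return "<table border=\"1\" cellpadding=\"2\">\n"
        . "<tr><th>Agents</th></tr>\n"
        . $rows
        . "</table>";
}

This sidesteps templates entirely, though, so I would still be interested
in a template-based answer if one exists.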
Please let me know if this is not the right place to post questions about
mediawiki.
Suresh
Theresa Knott wrote:
>On Tue, 08 Mar 2005 13:55:04 +0000, David Gerard <dgerard(a)gmail.com> wrote:
>
>
>>
>>>Yeah. Possibly the isNewbie() function needs to be set to the last 2% or
>>>whatever, to minimise collateral inability to move pages.
>>>
>>>The thing is that's not a definite measure, but advertising the metric
>>>on the move page would only encourage Willy on Wheels! to work around it
>>>with account creations and leaving them dormant until it's time for a
>>>move attack.
>>>
>>>Although it's a prominent PITA, the page move vandalism is probably not
>>>presently bad enough to disable the last 10% of accounts from page moves
>>>as an emergency measure.
>>>
>>>
>
>What if we lowered it to 1%? How many people would that affect, and for
>roughly how long? (I take it that we are still growing
>exponentially?)
>
>Theresa
>
>
>
About a month per percent, given the current statistics.
I also think it's reasonable to work on the assumption that WoW is
reading this list. We will still need to rely on soft security in the
end. Currently, the technical advantage WoW has is the ability to use a
tabbed browser to generate vandalism "bursts" that defeat the normal
human processes of reverting and user blocking. (I've tried everything
apart from actually making the edits, and it's very quick to set up a
batch of page moves, ready to commit by clicking one after another.)
Page-move vandals know what they are doing, and deliberately choose that
form of vandalism for maximum annoyance: even when done manually, page
moves take longer to fix and tidy up after than to commit, so the
advantage is to the vandal.
Putting in a 2% block will hold off WoW and imitators for at least a
month or two, which has got to be a good thing, and gives us time to set
up better tools before the page-move vandals return. In the longer run,
what we need are three things:
* one-click page-move reverting for admins
* a page move log, working on the same principles as the deletion log
* page-move rate limiting for non-sysops (a rough sketch follows below)
With these three tools in place, the teeth of the page-move vandals are
more or less drawn: page-move vandalism need be no more annoying than any
other trivial edit vandalism. Simply block the user, call up their
page-move history, and click revert as many times as needed.
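For the rate-limiting piece, something along these lines might do. This is
only a sketch, with made-up function names and MediaWiki's memcached
wrapper assumed; it is not actual MediaWiki code:

# Hypothetical check, called before a page move is committed. Allows
# at most $max moves per $period seconds for users who are not sysops.
function wfMoveRateLimited( &$user, $max = 5, $period = 3600 ) {
    global $wgMemc;                       # the usual memcached wrapper
    if ( $user->isSysop() ) {
        return false;                     # never throttle sysops
    }
    $key = 'movelimit:' . $user->getID();
    $count = $wgMemc->get( $key );
    if ( !$count ) {
        $wgMemc->set( $key, 1, $period ); # first move in this window
        return false;
    }
    if ( $count >= $max ) {
        return true;                      # over the limit: refuse the move
    }
    $wgMemc->incr( $key );
    return false;
}

A fixed window like this is crude (a determined vandal can burst across
the window boundary), but it is cheap, and combined with the move log and
one-click revert it takes away most of the payoff.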
And I agree: we should reserve the right to redefine the heuristics for
is_newbie() without notice, to resist any attempts to "game" it. (I can
think of several ways right now, but I see no reason to make vandals'
lives any easier).
-- Neil
As some of you may know, a German company (DirectMedia) is currently
working on publishing a snapshot of the German Wikipedia on DVD. They
use XML as their internal format.
To help them out, I patched together a database-to-XML converter. Turns
out, though, that they've just written their own.
So, this converter is sitting there [1] without a job ;-)
If you have a Windoze installation running, with access to a MediaWiki
(pre-1.5) database, you might want to try it. It takes the database
parameters and generates a single xml file (out.xml in the installation
directory) containing the XML of all the articles in the main namespace
from that database. It can also automatically resolve templates, if
desired. Installation and running are dead easy.
It does *not* use the Flex/Bison parser (I have developed a slight Bison
allergy when exposed to it ;-) but it works quite well otherwise. It also
generates XML extremely similar to what the Bison parser would do. I ran
about 10000 articles from a de database dump, and the generated XML was
valid.
The software is GPL, but I didn't upload the source anywhere yet. I can
mail you source and instructions if desired.
A slight variation of this software might be quite useful for a number of
projects. For example, it could easily be altered to take a number of
article titles and generate XML only for these. The XML could then be
converted to PDF or whatever.
Just letting you know,
Magnus
[1] http://www.magnusmanske.de/stuff/StaticWikiInstaller.exe
Yes, I know I keep repeating this like a prayer wheel: but there is no
downloadable cur table newer than Feb 9.
Please understand that for us as a mirror operator (and Wikipedia
propagandist), a pretty recent database is important.
Sorry
jo
Hi there,
I'm trying the following in MediaWiki and not getting any maths output;
any suggestions?
<math>\sum_{n=0}^\infty \frac{x^n}{n!}</math>
<math>x3\phi</math>
I'm using the latest rc1 release.
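(I am guessing this needs the texvc binary compiled in the math/ directory
and something like the following in LocalSettings.php, but I am not sure:

$wgUseTeX = true;   # enable <math> rendering via texvc

Is that right, or is there more to it?)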
Thanks,
Julian
hello,
downtime during the last ~2 hours (13-15 UTC) was caused by a power strip in
the colo blowing. switch/router and some machines were down, but the database
is unaffected. the site is now mostly up but may be unreliable / slow while
we finish working on it.
we're currently discussing options WRT better management & redundancy to
prevent this happening again in the future...
thanks,
kate.
(i should point out that this problem was entirely our fault, not the colo's,
and they were very helpful in bringing the site back online.
please follow up technical queries to wikitech-l rather than wikipedia-l).
Hi there
There seems to be a problem with custom namespaces and SpecialAllpages.php
(tested: MediaWiki 1.4rc1). Custom namespaces do not show up in the select box.
This problem does not seem to be reported on Bugzilla, but it is also noted on:
http://meta.wikimedia.org/wiki/Help:Namespace
Under the section "custom namespaces":
Special:Allpages does not show custom namespaces (this seems to be a bug), but
the URLs work, e.g.
http://meta.wikimedia.org/w/index.php?title=Special:Allpages&namespace=100
Here's the patch for a working dropdown list:
# Patch for SpecialAllpages.php to show custom namespaces.
# By Joel Wiesmann aka tr0nix - www.secuserv.ch
38a39
>
40,41c41,43
< for ( $i = 0; $i < 14; $i++ ) {
< $namespacename = str_replace ( "_", " ", $arr[$i] );
---
> foreach( $arr as $i => $name )
> {
> $namespacename = str_replace ( "_", " ", $name );
usage:
patch -i <patchfile> ./includes/SpecialAllpages.php
Hope this helps and gets accepted :). For me it works fine! Unfortunately I'm
not very familiar with CVS, so this may already be fixed there.
Regards
Joel Wiesmann, www.secuserv.ch (patch can also be found on my homepage)
A question for you MySQL gurus out there:
Currently, the categorylinks table uses the index
cl_sortkey (cl_to, cl_sortkey(128)).
Note that the cl_sortkey field in the table is varchar(255).
In the CategoryPage query, there is an ORDER BY clause on cl_sortkey.
However, this query ends up using a filesort. I've found that if the
index is changed to use the full width of the cl_sortkey field, then
the ORDER BY does not need the filesort.
Does anyone know how to eliminate the need for a filesort without
having to expand the index to the full width of cl_sortkey? Or should
we just bite the bullet and use the full width?
I've tried ORDER BY LEFT(cl_sortkey,128), but that doesn't help
(probably no big surprise).
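For reference, here is roughly what I experimented with; the EXPLAIN query
is only an approximation of the shape of the CategoryPage query:

-- full-width variant of the index, replacing the 128-byte prefix
ALTER TABLE categorylinks
    DROP INDEX cl_sortkey,
    ADD INDEX cl_sortkey (cl_to, cl_sortkey);

-- then check the plan:
EXPLAIN SELECT cl_from, cl_sortkey
    FROM categorylinks
    WHERE cl_to = 'Some_category'
    ORDER BY cl_sortkey
    LIMIT 200;

With the full-width index, EXPLAIN no longer reports "Using filesort";
with the 128-byte prefix, it does.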
-Rich Holton
en.wikipedia:User:Rholton