Hi everyone,
I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing suitable turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
Kind Regards,
Hugo Vincent,
Bluewater Systems.
Dear all,
After installing the MediaWiki software, I changed the default
language from English to Dutch by changing $wgLanguageCode in
LocalSettings.php AND by setting $wgUseDatabaseMessages=false in
DefaultSettings.php (the latter had to be done in order to see the changes,
as indicated in the MediaWiki FAQ).
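For reference, the relevant settings look roughly like this (these are the standard MediaWiki variable names, but treat the snippet only as a sketch and check it against your own configuration files):

# Sketch of the two settings involved:
$wgLanguageCode = 'nl';            # switch the wiki's language to Dutch
$wgUseDatabaseMessages = false;    # read interface messages from the language files instead of the database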
My problem is that because $wgUseDatabaseMessages=false, I can no longer
display the page "Special:Allmessages", nor can I change the content of
the navigation bar. As a solution for this problem, the FAQ proposes
running the script rebuildMessages.php in the maintenance folder. I did this
by running an FTP program, selecting the remote file and pressing "execute",
but without any result.
Does anyone know how I can fix this problem so that I can see the page
Special:Allmessages and change the content of the navigation bar while
keeping my wiki site in Dutch?
Thanks in advance!
Good Wikitech-l Folk,
I write in the hope of getting some input or ideas about an extension
I've written that attempts to solve a particular maintenance and
navigation problem I think I have. I am most interested in whether it's
anything like The MediaWiki Way of doing things.
My problem: I have a lot of documents with a fairly straightforward a
priori order (these are digitized notebooks and papers of family
memorabilia), where "a lot" means hundreds of individual page-sides. I wanted
a couple of things:
To put them in a wiki so that they can be viewed, commented on,
linked, &c.
To lay down the original order in which the page-sides occurred.
Sure, my audience is going to come along and add other connections, but
this is the first path they can start with.
One way I could have done this was to have each page contain a link to
the next and previous pages explicitly. Straightforward, and fairly
easy to have a bot go through and programmatically add the links. But
this approach also struck me as a maintenance problem: suppose I got
something wrong in the first automated pass, and wrong in a way that is
tedious to fix? Suppose new information comes along later that makes us
want to re-arrange this order? Suppose I don't want to just write a bot
to do this for me?
If, on the other hand, instead of explicit links, something like
MediaWiki's Categories were added to each page, something that placed
the specific page in a broader context through code, then that would be
preferable. Categories, though, didn't quite answer: they are really
for grouping things that have some trait in common, not for
sequencing many things.
Suppose one page in the wiki contained a map of some other pages? If I
included a tag in a page that referred to some map page, then my
extension could look up the map, find the context of the referring page,
and print out suitable navigation links.
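In rough code terms the idea looks something like the sketch below. This is only an illustration, not the actual Nav.php; the 1.5-era hook and Title/Article calls are written from memory and should be treated as assumptions.

<?php
# Sketch only: a <nav>Map page title</nav> tag that reads an ordered list
# of page titles (one per line) from the named map page and emits
# previous/next links for the page containing the tag.

$wgExtensionFunctions[] = 'wfNavSketchSetup';

function wfNavSketchSetup() {
    global $wgParser;
    $wgParser->setHook( 'nav', 'wfNavSketchRender' );
}

function wfNavSketchRender( $input, $argv ) {
    global $wgTitle;

    $mapTitle = Title::newFromText( trim( $input ) );
    if ( !$mapTitle || $mapTitle->getArticleID() == 0 ) {
        return '';
    }
    $map = new Article( $mapTitle );
    # One page title per line on the map page defines the sequence.
    $sequence = array_map( 'trim', explode( "\n", $map->getContent() ) );

    $pos = array_search( $wgTitle->getPrefixedText(), $sequence );
    if ( $pos === false ) {
        return '';
    }

    $links = array();
    if ( $pos > 0 ) {
        $prev = Title::newFromText( $sequence[$pos - 1] );
        $links[] = '<a href="' . htmlspecialchars( $prev->getLocalURL() ) .
            '">&larr; ' . htmlspecialchars( $prev->getText() ) . '</a>';
    }
    if ( $pos < count( $sequence ) - 1 ) {
        $next = Title::newFromText( $sequence[$pos + 1] );
        $links[] = '<a href="' . htmlspecialchars( $next->getLocalURL() ) .
            '">' . htmlspecialchars( $next->getText() ) . ' &rarr;</a>';
    }
    return implode( ' | ', $links );
}

A map page would then be nothing more than a plain list of page titles, one per line, and each notebook page would carry a single <nav>Name of map page</nav> tag.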
Some crufting around later, Nav.php was born: at a little sandbox
MediaWiki 1.5.4 at
http://www.nigelk.org/wiksand/index.php?title=Nav_Map_Demonstration,
you will find a couple of demonstrations of this Nav.php extension I've written.
Is there a built-in or better way of solving this problem I
think I have? Would it be thought of differently by the MediaWiki
developers here? Is this kind of lash-up of interest to anyone? Is my
PHP completely horrible? Any thoughts or input, either here on the
list or in discussion at that sandbox above, are appreciated.
Respectfully submitted,
Nigel Kerr
nigelk(a)nigelk.org
We are running MediaWiki on a Windows Server at
http://www.relisoft.com/wiki
- MediaWiki 1.5.4
- PHP 5.0.5
- MySQL 4.0.18-nt
First problem: When I access it from one of my home machines, I see a
completely blank main page. Other people can see the full page. I can see
some of the other pages if I specify the title.
Second problem: I get the error:
Undefined index: REQUEST_URI in includes\WebRequest.php, line 284.
This is the offending line of code:
    /**
     * Return the path portion of the request URI.
     * @return string
     */
    function getRequestURL() {
        return $_SERVER['REQUEST_URI'];
    }
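(A side note on that lookup: IIS, unlike Apache, does not always populate the REQUEST_URI server variable, so the array access above can legitimately fail there. A fallback along the following lines is sometimes used; this is only an illustrative sketch, not MediaWiki's actual fix:)

<?php
# Rough sketch of a fallback when REQUEST_URI is missing (typical on IIS).
function getRequestURLWithFallback() {
    if ( isset( $_SERVER['REQUEST_URI'] ) ) {
        return $_SERVER['REQUEST_URI'];
    }
    # Reconstruct the path from other server variables.
    $url = $_SERVER['SCRIPT_NAME'];
    if ( isset( $_SERVER['QUERY_STRING'] ) && $_SERVER['QUERY_STRING'] !== '' ) {
        $url .= '?' . $_SERVER['QUERY_STRING'];
    }
    return $url;
}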
Is this a problem with PHP/MediaWiki mismatch?
Bartosz
I've been tinkering with an extension to provide for a captcha to reduce
automated linkspamming while still staying out of the way for common use.
My preliminary code is running now on test.leuksman.com; the actual
"captcha" part is a really primitive plain text hack which would take
all of a few minutes for a dedicated attacker to crack, but don't worry
about that -- I'm not testing the protection yet, just the framework it
plugs into.
By default the captcha prompt will only kick in if an edit adds new URLs
to the text. Most regular editing shouldn't trip this -- wiki links,
plain text, or just preserving existing links. But if you add new HTTP
links that weren't there before, it'll then make you pass the captcha
before it saves.
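In outline, the trigger is just a set difference on external links. The snippet below is only an illustration of that logic, not the extension's actual code; the function names and the URL regex are mine:

<?php
# Toy illustration: demand a captcha only when the edit introduces URLs
# that were not present in the previous text.
function wfSketchExtractUrls( $text ) {
    preg_match_all( '!https?://[^\s\[\]<>"]+!i', $text, $matches );
    return array_unique( $matches[0] );
}

function wfSketchEditNeedsCaptcha( $oldText, $newText ) {
    $added = array_diff( wfSketchExtractUrls( $newText ),
                         wfSketchExtractUrls( $oldText ) );
    return count( $added ) > 0;
}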
The captcha step can also be bypassed based on user group (eg registered
bots, sysop accounts, optionally all registered users), and can also be
set to skip for any user who has gone through confirmation of their
account e-mail address.
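The bypass rules are equally simple in outline. Again this is only a sketch, not the extension's real code; it assumes the User class's getGroups() and isEmailConfirmed() accessors, and the group names in the usage comment are illustrative:

<?php
# Sketch of the skip logic: exempt configured groups, and optionally
# anyone with a confirmed e-mail address.
function wfSketchCaptchaCanSkip( $user, $exemptGroups, $skipIfEmailConfirmed ) {
    if ( array_intersect( $exemptGroups, $user->getGroups() ) ) {
        return true;
    }
    if ( $skipIfEmailConfirmed && $user->isEmailConfirmed() ) {
        return true;
    }
    return false;
}

# e.g. wfSketchCaptchaCanSkip( $wgUser, array( 'bot', 'sysop' ), true );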
I haven't coded it yet, but it should also be possible to add a URL
whitelist, for instance for the site's own local URLs.
As for a 'real' captcha generator to put into this system: I'm not too
sure what code is already out there that's not awful. There's a Drupal
plugin which would be easy to rip GPL'd PHP code from, but it doesn't
seem very robust.
There's a set of samples of various captcha output and their weaknesses
here: http://sam.zoy.org/pwntcha/
Obviously it would be good to either find something on the 'hard
captchas' list rather than 'defeated captchas', or roll our own that
doesn't suck too bad.
There's also the question of whether we can feasibly provide an audio
alternative or whathaveyou.
-- brion vibber (brion @ pobox.com)
What is the current state of any technical solution to the problem with
conditional templates?
I know that at least one solution has been proposed
(http://meta.wikimedia.org/wiki/User:AzaToth/Logic which I personally think
is over-egging the pudding, but is a good first step) but I have heard
absolutely nothing as to whether anybody is ever going to actually do
anything about it.
In the meantime, various users (Netoholic and SnowSpinner chief amongst
them) are making vaguely threatening rumblings which imply simply destroying
any and all templates which use anything like conditional code: the dreaded
"meta-template" term is bandied about with wild abandon. Other users, in a
mad attempt to escape from the [[WP:AUM]] bludgeon, are driven to perpetrate
atrocities like this:
http://en.wikipedia.org/w/index.php?title=Template:Infobox_TV_channel&oldid…
(warning: obscene code alert).
The only response I have heard from anybody on the development side is "we
don't want it Turing complete", which I assume means that they don't want
sufficient bells and whistles to make it possible to write nasty little
virus thingies.
Well, neither do the users: we simply want something in MediaWiki which will
perform the same function as {{qif}}
(http://en.wikipedia.org/wiki/Template:Qif ) and {{switch}}
(http://en.wikipedia.org/wiki/Template:Switch ) without the strain these
templates are said to impose on the servers. Anything else (as I implied
earlier) would IMNSHO be overdoing it and unnecessary.
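For what it's worth, the semantics being asked for are tiny: in plain PHP terms {{qif}} amounts to roughly the following. This is an illustration of the behaviour, not a proposed implementation and not the template's real code:

<?php
# {{qif|test=...|then=...|else=...}} in miniature: render "then" when the
# test value is non-empty, otherwise render "else".
function qif( $test, $then, $else = '' ) {
    return ( trim( $test ) !== '' ) ? $then : $else;
}

# e.g. qif( $channelName, "Channel: $channelName", '' )   # $channelName is a made-up parameter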
HTH HAND
--
Phil
[[en:User:Phil Boswell]]
Due to a suggestion (I forget from whom, though...) the Tasks extension
can now automatically create a task for a newly created article. This
can be configured separately for article creations by logged-in users
and anons (should we allow that again).
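(Purely to illustrate what "configured separately" might look like in LocalSettings.php; these variable names are hypothetical, not the extension's real settings:)

# Hypothetical switches, named here only for illustration:
$wgTasksAutoCreateForLoggedIn = true;   # create a task for every article made by a logged-in user
$wgTasksAutoCreateForAnons    = false;  # no automatic task for anonymous creations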
Major advantages of this mechanism over the "new pages patrol" are
* tasks are not forgotten, even if no one has checked the page for days or
months
* tasks can be closed by logged-in users, so patrols can focus on
unchecked (new) articles
Magnus
P.S.: Thanks to Brion and Hashar for fixes and improvements!
On 12/31/05, Magnus Manske <magnus.manske(a)web.de> wrote:
> Ævar Arnfjörð Bjarmason wrote:
> > What's the point of allowing user agents to specify a user ID to limit
> > to?
> The point is to have a "my pictures" link to click on in the sidebar,
> similar to "my contributions"; also, possibly, "list images uploaded by
> userXYZ". I didn't get around to the interface part yet.
> > This should use ->getText() and User::newFromName() instead.
> Why?
Because it doesn't make sense to make users specify a numeric user ID
for the user whose uploads to show; we specify things by username
everywhere else (e.g. Special:Contributions/Ævar, not
Special:Contributions/123234). If you made the user specify a username
instead of an ID, you'd use those functions.
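In code terms the suggestion is roughly the following; the parameter name and surrounding code are illustrative only, not the extension's actual interface:

<?php
# Sketch: resolve the target by username, as other special pages do.
# 'target' is a hypothetical request parameter name.
$name = $wgRequest->getText( 'target' );
$user = User::newFromName( $name );
if ( $user && $user->getID() ) {
    $userId = $user->getID();   # the numeric ID, if the image query still needs one
}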
> > and wfMsg() instead of wfMsgForContent().
> >
>
> OK. It seems to work both ways. As this is open source, and you were
> looking at it anyway, you just fixed it, right?
No, I wanted to check with you why you did it because it seemed quite strange.
I'm currently collecting peer-reviewed academic articles that cite
Wikipedia at
http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_as_an_academic_source
and for whatever reason, several refer to Wikipedia using the address
en2.wikipedia.org. Currently that address does not work, which should
be fixed. These are printed articles that will sit in the world's
libraries forever.
Axel