I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing obvious turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
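One approach that might work: pull the raw wikitext for each page and hand it
to pandoc, which can read MediaWiki markup and write LaTeX. Below is a minimal
Python sketch; action=raw is standard, but the exact index.php path and the
page title are assumptions for the example.

    import subprocess
    import requests

    WIKI = "http://server.bluewatersys.com/w90n740"   # wiki from the post
    TITLE = "Main_Page"                               # example page title

    # Fetch the raw wikitext for one page
    wikitext = requests.get(
        f"{WIKI}/index.php",
        params={"title": TITLE, "action": "raw"},
    ).text

    # Convert MediaWiki markup to LaTeX with pandoc (must be installed)
    latex = subprocess.run(
        ["pandoc", "-f", "mediawiki", "-t", "latex"],
        input=wikitext, capture_output=True, text=True, check=True,
    ).stdout

    print(latex)

Looping this over the titles returned by the API's list=allpages module would
cover the whole wiki; templates and complex tables will likely need hand-tuning
afterwards.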
Sorry to bug the list about this, but can anyone please explain
the reason for not enabling the Interlanguage extension?
See bug 15607 -
I believe that enabling it will be very beneficial for many projects
and many people expressed their support of it. I am not saying that
there are no reasons not to enable it; maybe there is a good reason,
but I don't understand it. I also understand that there are many other
unsolved bugs, but this one seems to have a ready and rather simple solution.
I am only sending this to raise the issue. If you know the answer, you
may comment on the bug page.
Thanks in advance.
Amir Elisha Aharoni
heb: http://haharoni.wordpress.com | eng: http://aharoni.wordpress.com
cat: http://aprenent.wordpress.com | rus: http://amire80.livejournal.com
"We're living in pieces,
I want to live in peace." - T. Moore
I have already got the article data from Wikipedia and stored it
on my computer. Now I want to add the article to my local wiki. I have
done a lot of research, and I know there is a lot involved: if I add a
record to the page table, then the revision, recentchanges, text,
pagelinks tables and so on will also have to change. So I think maybe
there is an easier way to do it.
Can you tell me what I should do? Should I simply use
Sincerely looking forward to your help, thanks
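For what it's worth, writing to the page/revision/text tables by hand is
fragile; the usual routes are Special:Import, the maintenance/importDump.php
script for XML dumps, or the edit API. A minimal sketch of the API route in
Python, assuming a reasonably current MediaWiki at http://localhost/w/api.php
that allows the edit (otherwise log in with the same session before fetching
the token):

    import requests

    API = "http://localhost/w/api.php"   # assumed local wiki endpoint
    session = requests.Session()

    # Fetch a CSRF token (log in with this session first if the wiki requires it)
    token = session.get(API, params={
        "action": "query", "meta": "tokens", "format": "json",
    }).json()["query"]["tokens"]["csrftoken"]

    # Create the page through the API instead of touching the tables directly;
    # MediaWiki then updates revision, text, link tables and recent changes itself
    result = session.post(API, data={
        "action": "edit",
        "title": "Imported article",                      # example title
        "text": open("article.wiki", encoding="utf-8").read(),
        "summary": "Imported from Wikipedia",
        "token": token,
        "format": "json",
    }).json()
    print(result)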
I have an extension called BugSquish, which I have been happily using on
MediaWiki 1.6.10 for quite a long time. I am also aware of other people
using it on later versions, but cannot cite specific version numbers where
it is known to work. The code works by performing a regex replace on the text
passed into my ParserAfterStrip hook function, striking out links to bugs
that have been marked as fixed.
On MW 1.6 this correctly handles <nowiki> and <pre> tags, in that text
within these tags is not parsed by the extension.
On MW 1.14 and above the text within the <nowiki> tags is parsed and ends up
having the regex applied to it, though it is subsequently rendered as plain
text by the engine (so the page ends up being filled with HTML/CSS
gobbledygook, rendered literally).
I am not sure at which revision this change took place.
First question: Is this a bug or a deliberate change in functionality, or
have I been mis-using the hook all along?
Second question: Assuming this is not a bug, how should I rewrite the code
to make it behave as it used to?
The current code for the extension is available here, if you want to test:
- Mark Clements (HappyDog)
I was contacted by someone working for (or at?) Webzzle, who wants to meet
me (for unknown reasons as of today).
I googled Webzzle to get a better idea of who the guy is and what he is
likely to talk to me about, and what I found left me quite dubious.
Please have a look here: http://www.webzzle.com/explorer/home.kol
There is a presentation of the concept here:
I now kind of suspect what the meeting will be about, and I think I'll
settle for a phone meeting rather than an expensive and time-consuming
face-to-face meeting in Paris :)
But I'd like to have your opinion about this website and the use it
makes of Wikipedia.
I am asking here in particular because there are obvious technical
implications. But I am sure some of you could reflect on the less
technical implications ;)
I thought I would share the early concept of automating user interface testing
using Selenium. The following plan was outlined by Ryan Lane. The goal
is to have a central location for client testing, to open up test case
submission to MediaWiki authors, and to allow test cases to be reused
simultaneously by multiple users.
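As a rough illustration of what a submitted test case could look like, here is
a minimal sketch using the Selenium Python bindings; the wiki URL and the
element id are assumptions for the example.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        # Load the main page and do a basic sanity check on the title
        driver.get("http://localhost/wiki/Main_Page")
        assert "MediaWiki" in driver.title

        # Verify the search box is present and accepts input
        search = driver.find_element(By.ID, "searchInput")
        search.send_keys("Sandbox")
    finally:
        driver.quit()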
Feel free to add your comments and input to the discussion page.
(preferred over email thread)
Will keep you all posted on the progress.
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
As we are all very aware, it is long past time for branching 1.16 for release.
Tim, as our release manager, has stated previously that the current state of
trunk is in no condition to be a release; I'm inclined to agree.
It's just shy of 6 months since 1.15 was branched, and we've had over 8000
commits since then, including 3 branches completed and merged. That is a hell
of a lot of code, and a hell of a lot of code to review. I've been thinking,
and I've mentioned it in a few places to some people, that perhaps it's time
for a code freeze prior to branching.
If we keep committing new code at the pace we've been going, we're never going
to catch up on review, and the 1.16 final release is going to be even longer in
coming. If, however, we can at least freeze trunk to new development, it could
perhaps aid in the review/cleanup process associated with a release. Focusing
solely on bugfixing/cleanup rather than trying to push even more new things
into 1.16 would allow us to clean up 1.16 and get it into a state we're happy with.
Infoboxes in Wikipedia often contain information which is quite useful
outside Wikipedia but can be surprisingly difficult to data-mine.
I would like to find all Wikipedia pages that use
Template:Infobox_Language and parse the parameters iso3 and
But my attempts to find such pages using either the Toolserver's
Wikipedia database or the MediaWiki API have not been fruitful. In
particular, SQL queries on the templatelinks table are intractably
slow. Why are there no keys on tl_from or tl_title?
Andrew Dunbar (hippietrail)
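In case it helps while the Toolserver queries are slow: the API's
list=embeddedin module enumerates transcluding pages without touching
templatelinks directly. A minimal Python sketch; the endpoint, namespace
filter, and limit are assumptions for the example.

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def pages_embedding(template):
        """Yield titles of article-namespace pages that transclude a template."""
        params = {
            "action": "query", "list": "embeddedin",
            "eititle": template, "einamespace": 0,
            "eilimit": "max", "format": "json",
        }
        while True:
            data = requests.get(API, params=params).json()
            for page in data["query"]["embeddedin"]:
                yield page["title"]
            if "continue" not in data:
                break
            params.update(data["continue"])

    for title in pages_embedding("Template:Infobox_Language"):
        print(title)

The wikitext of each title can then be fetched the same way and the iso3
parameter pulled out with a template parser such as mwparserfromhell.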