I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing was apparent.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
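Not a recommendation for an existing tool, but the basic idea is simple enough to sketch. The following is a minimal, hypothetical Python converter that handles only headings, bold, and italics via regular expressions; real wikitext (templates, tables, links) would need a proper parser, and the rule set here is an assumption, not a complete mapping.

```python
import re

# Minimal wikitext -> LaTeX rules. Order matters: the bold rule
# (''') must run before the italic rule ('') so that bold markup
# is not half-consumed as italics.
RULES = [
    (re.compile(r"^==\s*(.*?)\s*==\s*$", re.M), r"\\section{\1}"),
    (re.compile(r"'''(.+?)'''"), r"\\textbf{\1}"),
    (re.compile(r"''(.+?)''"), r"\\textit{\1}"),
]

def wikitext_to_latex(text):
    """Apply each regex rule in turn and return the LaTeX result."""
    for pattern, repl in RULES:
        text = pattern.sub(repl, text)
    return text

print(wikitext_to_latex("== Intro ==\nThis is '''bold''' and ''italic''."))
# -> \section{Intro}
#    This is \textbf{bold} and \textit{italic}.
```

A real pipeline would more likely fetch page text through MediaWiki's api.php (action=query&prop=revisions) and feed it through a chain of such rules, one per markup construct.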
Sorry about bugging the list about it, but can anyone please explain
the reason for not enabling the Interlanguage extension?
See bug 15607 -
I believe that enabling it will be very beneficial for many projects
and many people expressed their support for it. I am not saying that
there are no reasons not to enable it; maybe there is a good reason,
but I don't understand it. I also understand that there are many other
unsolved bugs, but this one seems to have a ready and rather simple
solution. I am only sending this to raise the problem. If you know the
answer, you may comment on the bug page.
Thanks in advance.
Amir Elisha Aharoni
heb: http://haharoni.wordpress.com | eng: http://aharoni.wordpress.com
cat: http://aprenent.wordpress.com | rus: http://amire80.livejournal.com
"We're living in pieces,
I want to live in peace." - T. Moore
I am from Malayalam Wikipedia (ml.wikipedia - user:Praveenp), and my
language is Malayalam. Consider one big problem of ours.
Since the release of Unicode 5.1.0, there have been two kinds of
encoding for some characters of the Malayalam alphabet (because of
backward compatibility). This causes serious problems with linking,
searching, etc. in the MediaWiki software. Currently Windows 7 is the
only operating system which supports Unicode 5.1.0 (as far as I know),
but a lot of third-party tools for writing and reading Malayalam
support the new version, and by now a large quantity of data in
Wikimedia projects is in the new version. It is not possible to link to
or search for titles encoded in pre-Unicode 5.1.0 form from Unicode
5.1.0, or vice versa. Currently one of our namespaces, ????????
(Category), also has one such character, so it is possible to write
???????? as ??????, which renders the same as the first but differs in
encoding. This causes problems in categorization as well.
Is it possible to put some Unicode equivalence
<http://en.wikipedia.org/wiki/Unicode_equivalence> handling into the
MediaWiki software? We need urgent help.
No.  Visual          Representation in 5.0 and prior   Preferred in 5.1
1    CHILLU_NN.png   0D23, 0D4D, 200D                  0D7A
2    CHILLU_N.png    0D28, 0D4D, 200D                  0D7B
3    CHILLU_RR.png   0D30, 0D4D, 200D                  0D7C
4    CHILLU_L.png    0D32, 0D4D, 200D                  0D7D
5    CHILLU_LL.png   0D33, 0D4D, 200D                  0D7E
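One caveat worth noting: standard Unicode normalization (NFC/NFD) does not unify these pairs, because the atomic 5.1 chillu code points are not canonically equivalent to the old <consonant, virama, ZWJ> sequences. Any fix therefore needs an explicit folding table like the one above. A hedged Python sketch of such a title-folding step (the function name is my own, not anything in MediaWiki):

```python
import unicodedata

# Old <consonant, VIRAMA, ZWJ> sequences mapped to the atomic
# Unicode 5.1 chillu code points, mirroring the table in this mail.
CHILLU_MAP = {
    "\u0D23\u0D4D\u200D": "\u0D7A",  # CHILLU NN
    "\u0D28\u0D4D\u200D": "\u0D7B",  # CHILLU N
    "\u0D30\u0D4D\u200D": "\u0D7C",  # CHILLU RR
    "\u0D32\u0D4D\u200D": "\u0D7D",  # CHILLU L
    "\u0D33\u0D4D\u200D": "\u0D7E",  # CHILLU LL
}

def fold_chillu(title):
    """Normalize a title so both encodings compare equal.

    NFC handles ordinary canonical equivalences; the chillu
    folding must be done explicitly on top of it.
    """
    title = unicodedata.normalize("NFC", title)
    for old, new in CHILLU_MAP.items():
        title = title.replace(old, new)
    return title
```

Applying something like this at title-save and search time would make the two encodings of a page name collide onto one canonical form.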
Just a note to say that I didn't go ahead with my
planned implementation of revision suppression
for all administrators, because Aaron said that he
would rather I wait until bug 20928 is fixed. Once it
is, I will again look into deploying single-revision
deletion for administrators.
> Message: 3
> Date: Sat, 23 Jan 2010 13:33:20 +0100
> From: Sylvain Leroux <sylvain(a)chicoree.fr>
> Subject: [Wikitech-l] Managing group of pages in white-list
> To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
> Message-ID: <4B5AEC90.9080302(a)chicoree.fr>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> For a private wiki, I had the request to add groups of pages to the white-list.
> Contributors will regularly add (and possibly delete) pages in those groups. So
> manually editing $wgWhitelistRead appears to be a maintenance nightmare.
> So, is there a way to add regexp or namespace (or any other "collection" of
> pages) in $wgWhitelistRead?
> If not (as I think), is there a hook I could use to patch the white-list validation?
> Thanks in advance for your answers,
> Sylvain Leroux
I was involved with a wiki that had similar needs. I made a small
extension to whitelist a namespace - it might be useful to you.
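MediaWiki extensions are written in PHP, but the check such an extension performs is easy to illustrate. This Python sketch shows the logic only: treat any page whose namespace prefix is on a whitelist as readable, and fall through to $wgWhitelistRead otherwise. The namespace names are made-up examples.

```python
# Hypothetical per-namespace read whitelist. In MediaWiki proper
# this decision would live in a PHP permission hook; here we only
# model the title check itself.
WHITELISTED_NAMESPACES = {"Help", "Public"}

def is_whitelisted(title):
    """Return True if the title's namespace prefix is whitelisted."""
    ns, sep, _ = title.partition(":")
    return sep == ":" and ns in WHITELISTED_NAMESPACES

print(is_whitelisted("Help:Contents"))   # True
print(is_whitelisted("Private:Notes"))   # False
```

Compared with stuffing regexps into $wgWhitelistRead, a namespace check like this needs no maintenance as contributors add or delete pages, which was exactly the problem in the original question.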