I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing obvious turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
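For what it's worth, here is a minimal sketch of the kind of script I have in
mind, assuming the wiki allows fetching raw wikitext via index.php?action=raw;
the page list and the handful of markup-to-LaTeX rules are only illustrative,
not a complete converter:

    import re
    import urllib.parse
    import urllib.request

    WIKI = "http://server.bluewatersys.com/w90n740/index.php"  # base URL from above
    PAGES = ["Main_Page"]                                       # hypothetical page list

    def fetch_wikitext(title):
        # action=raw returns the page source as plain wikitext
        url = "%s?%s" % (WIKI, urllib.parse.urlencode({"title": title, "action": "raw"}))
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    def wikitext_to_latex(text):
        # Only a few illustrative rules; a real converter needs many more.
        text = re.sub(r"^=== *(.*?) *===$", r"\\subsection{\1}", text, flags=re.M)
        text = re.sub(r"^== *(.*?) *==$", r"\\section{\1}", text, flags=re.M)
        text = re.sub(r"'''(.*?)'''", r"\\textbf{\1}", text)
        text = re.sub(r"''(.*?)''", r"\\emph{\1}", text)
        text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", text)  # drop link targets
        return text

    if __name__ == "__main__":
        for title in PAGES:
            print(wikitext_to_latex(fetch_wikitext(title)))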
I have just joined; I am from Mumbai, India. I would like to get the
articles translated into Marathi, my mother tongue. Looking at the effort
and the number of volunteers, this will not be usable in any reasonable amount
of time. That has made me think of alternatives: machine translation. A state
funded institute has software available, but I don't have access to it.
Please comment on this approach. Has this been tried for any other language?
Thanks & regards,
As Wikipedia is slow at busy times, I propose getting some new servers for our cluster:
- Some new web servers (3 or 4), P4 2.8 GHz with 2 GB of RAM
- A server which could be a backup for the NFS server, zwinger, with a bigger disk; 80 GB is very low, maybe 200 or 250 GB
- Upgrading zwinger's disk to 200 or 250 GB (or adding a new one)
- A DB server in 64-bit mode with 4 GB of RAM (if we can't get geoffrin working), like this one:
  with a RAID 10 disk system, 4 or 6 drives in the array and 1 on standby. I would prefer 15,000 rpm disks, but I understand that they are more expensive.
- Maybe another Squid server
What do you think of that?
I have been on this list a while; when I originally joined I was
interested in the possibility of exporting the Wiktionary data in
.dict format. Now that the newest version of OS X 10.4 has a built-in
dictionary that uses the dict:// protocol to look up words, I was interested to
see if anyone on the technical side would like to explore the
possibility of either exporting the Wiktionary database in .dict
format, or running a dictionary daemon that would access the Wiktionary
database server and return dict entries. It would be read-only, but it
would be another interesting way to access the Wiktionary besides the
web interface.
Does anyone on the tech list know if this is even possible? I'm not
asking you to do it (I can write the export); I was wondering if there
is some sort of database schema available to extract the data into
dict format, or are the entries too fragmented to even attempt an
export?
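To make the idea concrete, here is a rough sketch of the export side, assuming
one works from the public pages-articles XML dump rather than the live
database; the dump file name is hypothetical, the namespace URI varies by dump
version, and the markup stripping is deliberately naive. The output is just
"headword TAB definition" lines that a dict-building tool could consume:

    import re
    import xml.etree.ElementTree as ET

    NS = "{http://www.mediawiki.org/xml/export-0.3/}"  # namespace varies by dump version
    DUMP = "enwiktionary-pages-articles.xml"            # hypothetical local dump file

    def plain_text(wikitext):
        # Extremely naive markup stripping, just for illustration.
        text = re.sub(r"\{\{[^}]*\}\}", "", wikitext)                  # drop templates
        text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", text)  # unlink links
        text = re.sub(r"'''?", "", text)                               # bold/italics
        return " ".join(text.split())

    for event, elem in ET.iterparse(DUMP):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            text = elem.findtext("%srevision/%stext" % (NS, NS)) or ""
            if ":" not in title:                  # skip non-article namespaces
                print("%s\t%s" % (title, plain_text(text)[:500]))
            elem.clear()                          # keep memory use bounded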
Sending this as a separate mail so as not to mess things up :)
At Les Trophées du Libre, MediaWiki won the following prizes:
* a nice trophy, of which I shall upload a picture at some point, and
which will have to make its way to one of you
* an HP dual-boot laptop. I guess you guys/gals decide what to do with it.
* a hosting offer, "pack premium", description at
Once again, I guess you decide whether to accept it or not and what to
do with it. I think the offer is for one year (but I can contact
someone to make sure, or to get more details)
* 4 subscriptions to DirectionPHP (
I'd like to request that selected parts of the user table be included
in the dumps for statistical purposes, namely the user_id, user_name
and user_options fields. It would be useful to have this data to
collect statistics on what parts of the preferences are actually being
used, and being able to compare this with the user_id (and user_name)
fields would make it possible to check what settings regular editors have,
how many change their default settings, and so on.
And before people get all up in arms about this: no, this does not
include your email or the hash of your password (those are in
user_email and user_password respectively).
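To illustrate the kind of analysis I have in mind, here is a rough sketch,
assuming the dump exposed these fields as tab-separated lines and that
user_options still uses the old newline-separated name=value format; the file
name, field order, and separator encoding are all made up for the example:

    from collections import Counter

    DUMP_FILE = "user_table_dump.tsv"  # hypothetical: user_id <TAB> user_name <TAB> user_options

    option_counts = Counter()
    users = 0

    with open(DUMP_FILE, encoding="utf-8") as f:
        for line in f:
            user_id, user_name, user_options = line.rstrip("\n").split("\t", 2)
            users += 1
            # user_options has historically been a newline-separated list of
            # name=value pairs; assume literal "\n" separators in this dump.
            for pair in user_options.split("\\n"):
                if "=" in pair:
                    name, value = pair.split("=", 1)
                    option_counts[(name, value)] += 1

    for (name, value), count in option_counts.most_common(20):
        print("%-25s = %-10s set by %d of %d users" % (name, value, count, users))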
I've added my latest LDAP Authentication patch to bugzilla:
I will update my corresponding documentation to match the current patch
This documentation is located here:
Is this still being considered for addition to MediaWiki 1.5? Almost
all of the changes to the core code that are required for all of my planned
functionality have been added. Almost all of the changes that were made were
hooks; the rest were for security. If there are any required changes
or security concerns, let me know.
At this time, the LDAP patch has support for:
* Simple authentication through SSL using direct binds, or proxy authentication
** Note: proxy authentication is not currently working using multiple domains.
Also, you will not be able to add LDAP users when using proxy authentication
yet. This will be added in the next version.
* Storage/Retrieval of some user preferences
* Ability to add new users to LDAP from MediaWiki
* Ability to change LDAP passwords through MediaWiki
* Ability to mail a temporary password so that users can change their LDAP passwords
* Ability to do all of the above on multiple domains (including the local database)
Future versions will eventually have the following functionality:
* A custom schema for LDAP
* Access control using security groups (Authentication only)
* Ability to use smart cards or CAC cards to log in to MediaWiki using
* Ability to use LDAP as a complete backend for user information using a single domain
or multiple domains (or a combination of LDAP and the local database as backends)
If anyone can think of other features that should be added, let me know.
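As a point of comparison for the "direct bind" mode, here is a rough sketch in
Python (using the ldap3 library, not the patch's own PHP code) of what a simple
authenticated bind over SSL looks like; the host, DN template, and port are
invented for the example:

    import ssl
    from ldap3 import Server, Connection, Tls, ALL

    # All names below are hypothetical; adjust to your directory layout.
    LDAP_HOST = "ldap.example.com"
    USER_DN_TEMPLATE = "uid=%s,ou=people,dc=example,dc=com"

    def check_credentials(username, password):
        """Return True if a direct (simple) bind over SSL succeeds for this user."""
        tls = Tls(validate=ssl.CERT_REQUIRED)
        server = Server(LDAP_HOST, port=636, use_ssl=True, tls=tls, get_info=ALL)
        try:
            conn = Connection(server, user=USER_DN_TEMPLATE % username,
                              password=password, auto_bind=True)
            conn.unbind()
            return True
        except Exception:
            # Bad credentials, unreachable server, etc. all land here in this sketch.
            return False

    if __name__ == "__main__":
        print(check_credentials("alice", "secret"))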
Hi, I would like to add some code to allow changing some options on the
gallery. In particular I would like to change the thumb size and
perhaps also the number of columns (and perhaps also add a caption or heading).
I have traced through the relevant bits of code of the parser to where
we extract the attributes, but it looks as though the parser right now
doesn't allow for attributes in the "<gallery>" tag.
For example if I wanted something like <gallery
thumb=150px;cols=1;header="My gallery heading"> then I don't think this
extra stuff is parsed out right now?
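Just to make the proposal concrete, here is a rough sketch in Python (not the
parser's actual PHP) of how such an attribute string could be split into
options; the semicolon-separated syntax is only my suggestion above, not
anything the parser supports today:

    import re

    def parse_gallery_options(attr_string):
        """Split a proposed 'name=value;name="quoted value"' string into a dict."""
        options = {}
        # Match either name="quoted value" or name=bareword, separated by semicolons.
        for match in re.finditer(r'(\w+)\s*=\s*("([^"]*)"|[^;]+)', attr_string):
            name = match.group(1)
            value = match.group(3) if match.group(3) is not None else match.group(2).strip()
            options[name] = value
        return options

    print(parse_gallery_options('thumb=150px;cols=1;header="My gallery heading"'))
    # {'thumb': '150px', 'cols': '1', 'header': 'My gallery heading'}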
Any suggestions on the cleanest way to implement this? Will it be
accepted for inclusion...?
at least, they will be when the dump is complete, which will take a few more
hours yet. once done, image dumps will be found at:
please read the readme files. (note: don't download dumps without an
"upload.tar" symlink, because that means the dump is still in progress and
the file will be incomplete!)
please let me know about any problems with these files, particularly if they
don't extract correctly.
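For anyone scripting the download, here is a tiny sketch of the kind of guard I
mean, with a hypothetical dump directory URL standing in for the real location
announced above; it only checks that the "upload.tar" symlink is visible before
fetching anything:

    import urllib.error
    import urllib.request

    # Hypothetical dump directory; substitute the real location announced above.
    DUMP_DIR = "http://example.org/image-dumps/enwiki/"

    def dump_is_complete():
        """Return True if the upload.tar symlink is visible, i.e. the dump has finished."""
        req = urllib.request.Request(DUMP_DIR + "upload.tar", method="HEAD")
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status == 200
        except urllib.error.URLError:
            return False

    if dump_is_complete():
        print("Dump looks complete; safe to download.")
    else:
        print("upload.tar not there yet; dump still in progress.")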
We'll create some new indexes that should improve site
performance. To do this, we need to set the wikis to
read-only at 3 a.m. UTC (5 a.m. Berlin/Paris, about
10 p.m. Chicago). The downtime will last about 2 hours.
Thanks for your understanding.