I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
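For context, the sort of thing I had in mind is sketched below. This is
only a rough sketch: it assumes a reasonably recent MediaWiki with
api.php enabled, and that pandoc (which can read MediaWiki markup and
write LaTeX) is on the PATH. The API endpoint and page title are my
guesses, not tested:

    import json
    import subprocess
    import urllib.parse
    import urllib.request

    # Assumed API endpoint for the wiki above; adjust if api.php lives elsewhere.
    API = "http://server.bluewatersys.com/w90n740/api.php"

    def fetch_wikitext(title):
        # Ask the MediaWiki API for the raw wikitext of one page.
        params = urllib.parse.urlencode({
            "action": "query", "prop": "revisions",
            "rvprop": "content", "rvslots": "main",
            "titles": title, "format": "json", "formatversion": "2",
        })
        with urllib.request.urlopen(API + "?" + params) as resp:
            data = json.load(resp)
        page = data["query"]["pages"][0]
        return page["revisions"][0]["slots"]["main"]["content"]

    def wikitext_to_latex(wikitext):
        # Hand the markup conversion off to pandoc.
        result = subprocess.run(
            ["pandoc", "--from", "mediawiki", "--to", "latex"],
            input=wikitext, capture_output=True, text=True, check=True)
        return result.stdout

    print(wikitext_to_latex(fetch_wikitext("Main Page")))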
Today we passed 10k HTTP requests per second (even with inter-squid
traffic eliminated). Special thanks to Mark and Tim, who've been
improving our caching, as well as doing lots of other work, and
achieved incredible results (while I was slacking). Really, thanks!
I've put together an extension for rating articles if anyone is
interested. It's just a first version and hasn't been tested much, but
the details can be found here:
You can see an example here on our development server:
(username/password: wikihow / wikihow2006) - scroll down to the bottom
of the page for the checkmarks.
I'd appreciate feedback if anyone has any. If someone wants to add
this to extensions in svn, that'd be great.
On 30/05/07, tstarling@svn.wikimedia.org <tstarling@svn.wikimedia.org> wrote:
> Revision: 22580
> Author: tstarling
> Date: 2007-05-30 14:02:32 -0700 (Wed, 30 May 2007)
> Log Message:
> Merged filerepo-work branch:
So, does Tim drink Foster's, like the ads would have us believe, or
should I have some other drink planned in case I ever run into him? ;)
AutoWikiBrowser is open source but Windows-only, being written to the
.NET 2 framework. Mono isn't up to .NET 2, and .NET 2 doesn't install
under Wine on Linux. But I've opened a Wine bug for it:
Others are invited to give their stack traces, relay traces, etc. Be
sure to use the current Wine; the .NET issue is being actively
worked on, and two weeks can make a difference.
(Darn, a reason to keep Winders around. AWB is just unbelievably cool,
and is a much nicer browser to *edit* Wikipedia in. See [[WP:AWB]].)
The related .NET 2 on Wine bug is http://bugs.winehq.org/show_bug.cgi?id=3972 .
If you have other useful .NET programs you would like to run under
Wine, give them a try on the current version, let wine-users know,
and possibly file a bug.
Some of you may have noticed that the Subversion server has been
particularly slow the last few days. For the year since we set it up,
we've been hosting our Subversion repository on my personal server
account, keeping it separate while we figured out the security issues
and for availability if our main data center was down.
Lately the traffic has been getting too high to manage though, to the
point of exceeding my personal site's bandwidth quota in release months. :)
So, Mark's set aside a machine in Amsterdam to move the repository to.
I'll be running the migration tomorrow if all the setup seems to be
going well. There should not be much disruption in service, though there
might be some interruption as DNS changes or if there's a problem.
What you *may* see is that the SSH host key for svn.wikimedia.org will
change, which will toss up warnings for developers using svn+ssh access.
For OpenSSH, remove the offending line from your ~/.ssh/known_hosts file
and let it recognize the new host key.
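(Newer versions of OpenSSH can also do this in one step: running
"ssh-keygen -R svn.wikimedia.org" removes the stale entry for you.)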
-- brion vibber (brion @ wikimedia.org)
[My first post - please let me know if I should be asking somewhere else.]
Almost two months ago, User:GMaxwell undertook to arrange for Google to
read the new 'coord' template:
when they parse the raw Wikipedia data for their Google Earth Wikipedia layer.
Unfortunately, he hasn't posted for some time (I do hope he's OK).
Can someone else pick up that baton, please, or advise me who to
contact? Or perhaps even confirm that that's been done?
* Say "NO!" to compulsory ID Cards: <http://www.no2id.net/>
* Free Our Data: <http://www.freeourdata.org.uk>
* Are you using Microformats, yet: <http://microformats.org/> ?
I use the HTML download of Wikipedia to extract a network of main and
sub-categories with the connected articles.
To achieve this, I parse all Category~*.* pages.
Now it happens that categories with more than 200 entries (e.g. more
than 200 subcategories) aren't represented completely in the HTML dump.
The page only contains the first 200 elements; further elements are
missing. The link "next 200" redirects to the page itself, and no page
with the next 200 elements can actually be found.
So I can only extract the first 200 elements. Can anything be done about this?
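One workaround might be to page through the live wiki's api.php instead
of the HTML dump: list=categorymembers hands back a continuation token
until a category is exhausted, so the 200-element cap doesn't apply.
A rough Python sketch, assuming a reasonably recent MediaWiki with the
API enabled (the endpoint URL and category name below are placeholders):

    import json
    import urllib.parse
    import urllib.request

    API = "http://example.org/w/api.php"  # placeholder; point at your wiki

    def category_members(category):
        # Page through list=categorymembers: each response carries a
        # "continue" block that must be echoed back to get the next batch.
        cont = {}
        while True:
            params = {"action": "query", "list": "categorymembers",
                      "cmtitle": "Category:" + category,
                      "cmlimit": "500", "format": "json"}
            params.update(cont)
            req = urllib.request.Request(
                API + "?" + urllib.parse.urlencode(params),
                headers={"User-Agent": "category-net-extractor/0.1"})
            with urllib.request.urlopen(req) as resp:
                data = json.load(resp)
            for member in data["query"]["categorymembers"]:
                yield member["title"]
            if "continue" not in data:
                break  # category exhausted
            cont = data["continue"]

    for title in category_members("Physics"):
        print(title)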
Thanks in advance,