Hi everyone,
I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but came up empty.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
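For what it's worth, pulling the raw wikitext out looks easy enough; it's the
wikitext-to-LaTeX conversion I'm missing. A rough Python sketch of the fetch
side (assuming the default index.php entry point with action=raw enabled):

import urllib  # Python 2 standard library

# Sketch only: fetch the raw wikitext of one page; the actual
# conversion to LaTeX would happen where the last comment is.
BASE = "http://server.bluewatersys.com/w90n740/index.php"

def fetch_wikitext(title):
    url = "%s?title=%s&action=raw" % (BASE, urllib.quote(title))
    return urllib.urlopen(url).read()

text = fetch_wikitext("Main_Page")
# ... convert 'text' to LaTeX here ...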
Kind Regards,
Hugo Vincent,
Bluewater Systems.
I've searched the archives for the past several months, and cannot find
the status of this project, other than a busy fellow saying that it was
the top priority in February. Anybody know?
Hi,
Apologies for this dupe of a wiki discussion page, but this appears to
be where all the action is.
It's fantastic news that the WikiMedia Foundation has been accepted as a
mentor for Google Summer of Code 2006! The suggestion of writing a
programmable REST/XML-RPC/SOAP API that exposes MediaWiki domain objects
is really quite exciting and I plan to apply with this project in mind.
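To make that a little more concrete, the sort of client-side usage I'm
imagining looks roughly like this (the endpoint URL and method names below
are pure invention on my part, just to illustrate the idea):

import xmlrpclib  # Python 2 standard library XML-RPC client

# Hypothetical endpoint and methods -- nothing like this exists yet.
wiki = xmlrpclib.ServerProxy("http://en.wikipedia.org/w/xmlrpc.php")
page = wiki.page.get("Main Page")      # fetch a page domain object
print page["title"], page["touched"]
wiki.page.save("User:Tola/Sandbox", "Hello from the API!", "testing")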
Do you think that this project suggestion is likely to make the final
cut? I'm starting to really set my heart on the idea but I don't want to
spend too much time thinking about it if it's unlikely to be used. I'm
not entirely clear on how the application process works (even after
reading the Google FAQ).
I commented on the wiki discussion page that there appears to be a third
party API run by Ontok (http://www.ontok.com/wiki/index.php/Wikipedia) -
I assume they have no direct affiliation with the WikiMedia Foundation?
Will there be any need to contact those people?
Also, someone has linked to the alpha version of a query interface that
directly accesses the wiki database
(http://en.wikipedia.org/w/query.php), but like "IndyGreg" on the
discussion page I don't think this constitutes an API in the same sense
described here, would you agree? Perhaps eventually this code could be
integrated into the same extension.
It would be great to work with the pywikipediabot, perlmediawikiclient,
and java mediawikiclient guys to create something that they could all
use too, and the possibilities for the use of this API are endless!
Best Wishes
Ben
--
Ben "tola" Francis
http://hippygeek.co.uk
Hey all,
http://meta.wikimedia.org/wiki/Summer_of_Code_2006#Upload_form_improvements
I would like to submit an application for that, but it seems to require
some JavaScript and AJAX, and it looks like the Wikimedia devs are
against AJAX, as stated at the top of the page (see "Hi folks, please don't
add "AJAX" etc" on top). So my question is: is this improvement approved by
the Wikimedia devs?
Thanks in advance
Pat
Please direct me to the appropriate place if this question should
not be here.
I'm editing my User:XXX/monobook.css because any LI in my navigation bar with
multiple A's is displayed over several lines, but I want them on one
line. I have tried these to no avail:
p-navigation li a { display: inline !important; }
p-navigation a { display: inline !important; }
They do not work. I also cannot pinpoint in which CSS file these are set to
display: block
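Could it be as simple as the portlet being matched by id, so that the
selectors need a leading '#'? i.e. something like this (just a guess on my
part, I have not been able to verify it):

#p-navigation li a { display: inline !important; }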
Apologies again if this is the wrong forum. I could not tell which was the right
forum from looking at the Wikipedia pages...
Andrew Dunbar (hippietrail)
--
http://linguaphile.sf.net
robchurch(a)svn.leuksman.com wrote:
> Revision: 13955
[snip]
> - $wgUser = $u;
> - $wgUser->setCookies();
[snip]
> + # Call hooks
> wfRunHooks( 'AddNewAccount', array( $u ) );
Calling the hook here, now before $wgUser is set, caused a privacy leak for a
few minutes. IP addresses of people registering new accounts were broadcast on
Recent Changes and the IRC feeds until the change was reverted.
I've removed the offending entries from the recentchanges tables.
-- brion vibber (brion @ pobox.com)
Sent to a Squid developer I just had a discussion with on IRC...
-------- Original Message --------
Subject: Squid: Conversion of disk cache reads into cache misses when
disk bandwidth is saturated
Date: Sun, 30 Apr 2006 14:07:29 +0200
From: Mark Bergsma <mark@...>
Hi. As discussed on IRC, a summary:
We are having some problems efficiently using disk caches in our
Wikimedia Squid cache cluster. When the disk cache is too big, disk
cache reads can easily saturate all of the available disk bandwidth on
all disks. This makes Squid very slow as it keeps queuing up the
requests. Therefore we deploy many Squids without disk caches, running
them memory-only, which results in much faster Squids (lower request
times with much shorter distribution tails), but the total hit rate suffers.
It would be neat if Squid could decide to convert a disk cache read to a
cache miss when the relevant cache_dir is overloaded. That would make
Squid more load-tolerant and reduce the need for administrators to tune and
readjust disk caches for each server over time.
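In rough pseudo-code, the decision I have in mind is something like the
following (the threshold is made up purely for illustration; in practice it
would presumably come from the existing cache_dir load accounting):

# Sketch of the proposed read path, not actual Squid source.
MAX_PENDING_DISK_OPS = 64  # invented threshold, for illustration only

def serve(object_on_disk, pending_disk_ops):
    if not object_on_disk:
        return "MISS"
    if pending_disk_ops > MAX_PENDING_DISK_OPS:
        # cache_dir saturated: answer as a miss instead of queueing
        # yet another disk read behind the backlog.
        return "MISS"
    return "DISK_HIT"

print serve(True, 10)    # DISK_HIT
print serve(True, 500)   # MISS: fetch from the backend instead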
As Squid already does this for cache *writes*, this should be fairly
easy to implement for reads too. Let me know when you have something,
I'll be happy to test patches on a test server in our cluster. I'll
stick around on IRC too...
Thanks,
--
Mark
mark(a)nedworks.org
An automated run of parserTests.php showed the following failures:
Running test Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test Magic Word: {{NUMBEROFFILES}}... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Running test Language converter: output gets cut off unexpectedly (bug 5757)... FAILED!
Passed 300 of 304 tests (98.68%) FAILED!
robchurch(a)svn.leuksman.com wrote:
> (reopened bug 5185) Match on two or more slashes on the protocol to prevent another blacklist workaround
Better to fix the parser so such illegal URLs aren't recognized, no?
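(Rough illustration of what I mean, in Python rather than the actual PHP
parser code: if link recognition insists on exactly "://", the extra-slash
trick stops producing a clickable link at all.)

import re

# Toy pattern, not the real parser regex.
strict = re.compile(r'\bhttps?://(?!/)\S+')

print bool(strict.search("http://example.com/page"))    # True: recognized
print bool(strict.search("http:////example.com/page"))  # False: not a link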
-- brion vibber (brion @ pobox.com)