Hi everyone,
I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing suitable turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
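For what it's worth, a minimal starting point in Python would be to pull a page's raw wikitext through the MediaWiki action API and run a rule-based pass over it. This is only a sketch: the conversion rules below cover just level-2 headings, bold, and italics, and a real converter would need a proper wikitext parser.

```python
import re
from urllib.parse import urlencode

def api_url(base, title):
    """Build a MediaWiki action-API request for a page's raw wikitext."""
    return base.rstrip("/") + "/api.php?" + urlencode({
        "action": "parse", "page": title,
        "prop": "wikitext", "format": "json",
    })

# A toy wikitext -> LaTeX pass covering only a few constructs.
# Order matters: the ''' rule must run before the '' rule.
RULES = [
    (re.compile(r"^== *(.*?) *==$", re.M), r"\\section{\1}"),
    (re.compile(r"'''(.+?)'''"), r"\\textbf{\1}"),
    (re.compile(r"''(.+?)''"), r"\\emph{\1}"),
]

def wikitext_to_latex(text):
    for pattern, repl in RULES:
        text = pattern.sub(repl, text)
    return text

print(wikitext_to_latex("== Intro ==\nThis is '''bold''' and ''italic''."))
# prints: \section{Intro}
#         This is \textbf{bold} and \emph{italic}.
```

Fetching the wikitext itself is one urllib call against the URL built by api_url(); the heavy lifting is entirely in growing the rule set (or switching to a real parser).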
Kind Regards,
Hugo Vincent,
Bluewater Systems.
Sorry about bugging the list about it, but can anyone please explain
the reason for not enabling the Interlanguage extension?
See bug 15607 -
https://bugzilla.wikimedia.org/show_bug.cgi?id=15607
I believe that enabling it would be very beneficial for many projects,
and many people have expressed their support for it. I am not saying
that there are no reasons not to enable it; maybe there is a good
reason, but I don't understand it. I also understand that there are
many other unsolved bugs, but this one seems to have a ready and
rather simple solution.
I am only sending this to raise the issue. If you know the answer, you
may comment on the bug page.
Thanks in advance.
--
Amir Elisha Aharoni
heb: http://haharoni.wordpress.com | eng: http://aharoni.wordpress.com
cat: http://aprenent.wordpress.com | rus: http://amire80.livejournal.com
"We're living in pieces,
I want to live in peace." - T. Moore
Added Jan Gerber ('j'). Jan is the developer of Firefogg, and will be
helping out with some of the open video player & sequencer work with
Michael Dale that Kaltura's sponsoring.
-- brion
I've been putting placeholder images on a lot of articles on en:wp.
e.g. [[Image:Replace this image male.svg]], which goes to
[[Wikipedia:Fromowner]], which asks people to upload an image if they
own one.
I know it's inspired people to add free content images to articles in
several cases. What I'm interested in is numbers. So what I'd need is
a list of edits where one of the SVGs that redirects to
[[Wikipedia:Fromowner]] is replaced with an image. (Checking which of
those are actually free images can come next.)
Is there a tolerably easy way to get this info from a dump? Any
Wikipedia statistics fans who think this'd be easy?
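One way to pull this from a pages-meta-history dump would be to stream consecutive revision texts per page and flag revisions where a placeholder image link disappears while some other image appears. A rough sketch of the comparison step (the second placeholder name is a guess; the real list would come from the redirects to [[Wikipedia:Fromowner]]):

```python
import re

# Illustrative, incomplete list of the placeholder SVGs; the full set
# would come from "What links here" on [[Wikipedia:Fromowner]].
PLACEHOLDERS = {
    "Replace this image male.svg",
    "Replace this image female.svg",
}

IMAGE_RE = re.compile(r"\[\[Image:([^\]|]+)", re.IGNORECASE)

def images_in(text):
    """All image names linked in a revision's wikitext."""
    return {name.strip() for name in IMAGE_RE.findall(text)}

def placeholder_replaced(old_text, new_text):
    """True if this edit removed a placeholder and added some other image."""
    old_imgs, new_imgs = images_in(old_text), images_in(new_text)
    removed = PLACEHOLDERS & (old_imgs - new_imgs)
    added = (new_imgs - old_imgs) - PLACEHOLDERS
    return bool(removed) and bool(added)
```

In practice you would feed this function pairs of consecutive revisions streamed out of the dump (e.g. with xml.etree.ElementTree.iterparse, so the whole history never sits in memory), and log the page title, revision id, and the added image name for the later free-content check.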
(If the placeholders do work, then it'd also be useful for convincing
some wikiprojects to encourage them. Not that there's ownership of
articles on en:wp, of *course* ...)
- d.
Steve Bennett wrote:
> On Tue, Jul 7, 2009 at 5:01 AM, <WJhonson(a)aol.com> wrote:
>
>> The reason BASIC was and still enjoys wide popularity is because it's
>> easier to learn.
>>
>> The example does not make the substantial point because it veers so
>> strongly to the opposite end of the spectrum as to be unrelated to the argument
>> whatsoever. I never suggested that a language should *mimic* English (or a
>> bizarre type of hyper-English).
>>
>> I welcome however, anyone who wants to actually conduct this argument, on
>> Earth.
>>
>
>
> The difference between this thread and the parallel one on wikitech-l:
> that thread quickly focussed on four genuine candidates: Lua, Python,
> JavaScript and PHP. People identified the basic requirements
> (security, speed...) and pointed out the pros and cons of each
> language, in terms of available interpreters, tried and tested
> experiments with sandboxing each, etc.
>
> Here, we're talking about bringing back BASIC because it's so much
> more readable. *yawn*
>
> Steve
>
>
Can we take this discussion back to wikitech-l now, please, and focus on
specific, concrete proposals for syntax reform and/or language replacement?
-- Neil
Hi,
with regard to the recent discussion on SSL, it would be
really nice to have the certificates issued by CAcert (whose
root certificate will not be included in many browsers for
some time) published on a trustworthy server (a footer on
<URI:https://www.wikimedia.org/> perhaps?). I'm primarily
thinking about the certificates for:
- wikitech.leuksman.com
- www.wikimedia.de
(Feel free to append if you encounter others.)
TIA,
Tim
A fundamental principle of medicine is "do no harm." It has a long history and you can find it in the Hippocratic oath with a slightly different wording.
This is also an important principle of software development. If you add a new feature or fix a bug, make sure the resulting code isn't worse off than before. "Do no harm" is the basic motivation behind regression testing.
I have been thinking about Brion's suggestion of fixing the bug in WebRequest::extractTitle(). It is a reasonable point: don't just whine about a problem, fix it. He even provided the best strategy for accomplishing this: make sure "$wgScriptPath gets properly escaped when initialized." I am sure doing this would not require a significant amount of coding. But how would changing the way $wgScriptPath is formatted affect the rest of the code base?
I decided to do a multi-file search for $wgScriptPath in phase3 and extensions [r53650]. There are 439 references to it in phase3 and extensions combined. In phase3 alone, there are 47 references. Roughly 1/3 of these are in global declarations, so phase3 has about 30 "active" references and in phase3 and extensions combined there are roughly 300. [By "active" I mean references in which the value of $wgScriptPath affects the code's logic.]
So, if I were to change the formatting of $wgScriptPath, there are potentially 30 places in the main code and 300 places in extensions where problems might occur. To ensure the change does no harm, I would have to observe its effect in at least 30 and up to 330 places in the distribution. This is a pretty onerous requirement. My guess is that very few developers would take the time to do it.
On the other hand, if there were regression tests for the main code and for the most important extensions, I could make the change, run the regression tests and see if any break. If some do, I could focus my attention on those problems. I would not have to find every place the global is referenced and see if the change adversely affects the logic.
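To make the point concrete in miniature (in Python rather than PHP, and escape_script_path is a hypothetical stand-in for whatever the real initialization code would do): a regression test pins the escaping behaviour down once, so a later change that breaks a consumer of the value fails loudly instead of silently.

```python
import unittest
from urllib.parse import quote

def escape_script_path(path):
    """Hypothetical stand-in for escaping $wgScriptPath at initialization:
    percent-encode everything except the path separators."""
    return quote(path, safe="/")

class ScriptPathRegressionTest(unittest.TestCase):
    def test_plain_path_unchanged(self):
        # The common case must keep working exactly as before.
        self.assertEqual(escape_script_path("/w"), "/w")

    def test_specials_are_escaped(self):
        # A hostile path must come out safe to embed in a URL.
        self.assertEqual(escape_script_path("/w iki"), "/w%20iki")
        self.assertNotIn("<", escape_script_path("/w<script>"))
```

Run with "python -m unittest". After any change to the escaping, one command answers the "did I break one of those 330 places?" question for every behaviour the suite covers.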
All,
I had this idea cross my mind earlier today, but I got so tied up
in meetings that I couldn't sit down and write out a proper e-mail
until this evening. I was curious whether we think FlaggedRevs
might be of use to MediaWiki.org, and if so, how exactly
we would use it.
The idea crossed my mind after noticing, over the past several days,
that quite a bit of the information on MediaWiki.org is either poorly
worded, outdated, or just plain wrong. Now, MW.org doesn't suffer from most
of the issues that are seen on other projects: we're small, we don't
really have anything to edit war over, and we don't seem to get (as
much :) spam and vandalism. I was curious as to whether we could
use FlaggedRevs as a quality control over our documentation.
A lot of the docs have been written by people other than developers,
and a lot of the docs have never been read by a developer. That being
said, using FlaggedRevs we might be able to deliver more solid docs
on MW.org by flagging them at two levels. One could be a basic
"has been looked over for glaring errors and basic readability" and
a second could be "has been thoroughly reviewed and is considered
the doc on the given subject."
Hopefully we can improve the overall quality of the docs on MW.org.
I'm certainly open to other ideas too.
-Chad
We're applying a Solaris kernel patch for the ZFS performance problem on
ms1, our main media file server.
While this is in progress, uploads will be temporarily disabled. Cached images
should still display, but you may see some missing images in the meantime.
-- brion vibber (brion @ wikimedia.org)