I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution but nothing suitable turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
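To give an idea of the kind of thing I'm after, here is a rough, untested
sketch: it pulls raw wikitext via index.php?action=raw and hands the
conversion to pandoc's mediawiki reader. The base URL and page list are
placeholders, and templates/extension tags won't convert cleanly, so it's a
skeleton rather than a solution:

    import subprocess
    import urllib.parse
    import urllib.request

    BASE = "http://server.bluewatersys.com/w90n740/index.php"  # placeholder
    PAGES = ["Main Page"]  # the pages to export

    for title in PAGES:
        url = BASE + "?action=raw&title=" + urllib.parse.quote(title)
        wikitext = urllib.request.urlopen(url).read().decode("utf-8")
        # pandoc does the heavy lifting: MediaWiki markup in, LaTeX out
        latex = subprocess.run(
            ["pandoc", "-f", "mediawiki", "-t", "latex"],
            input=wikitext, capture_output=True, text=True, check=True,
        ).stdout
        with open(title.replace(" ", "_") + ".tex", "w", encoding="utf-8") as out:
            out.write(latex)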
I've been putting placeholder images on a lot of articles on en:wp.
e.g. [[Image:Replace this image male.svg]], which goes to
[[Wikipedia:Fromowner]], which asks people to upload an image if they
have one. I know it's inspired people to add free content images to articles in
several cases. What I'm interested in is numbers. So what I'd need is
a list of edits where one of the SVGs that redirects to
[[Wikipedia:Fromowner]] is replaced with an image. (Checking which of
those are actually free images can come next.)
Is there a tolerably easy way to get this info from a dump? Any
Wikipedia statistics fans who think this'd be easy?
(If the placeholders do work, then it'd also be useful for convincing some
wikiprojects to encourage the things. Not that there's ownership of
articles on en:wp, of *course* ...)
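For reference, this is the kind of rough dump-scanning pass I have in mind
(untested; the dump filename, schema namespace and placeholder list are
assumptions/examples, and checking whether the replacement is actually a free
image would still be a separate step):

    import bz2
    import xml.etree.ElementTree as ET

    # Adjust to match the xmlns in the dump's <mediawiki ...> root element.
    NS = "{http://www.mediawiki.org/xml/export-0.10/}"
    # The SVGs that redirect to [[Wikipedia:Fromowner]]; extend as needed.
    PLACEHOLDERS = ["Replace this image male.svg"]

    def has_placeholder(text):
        return any(p in text for p in PLACEHOLDERS)

    title, prev_text = None, ""
    with bz2.open("enwiki-pages-meta-history.xml.bz2", "rb") as dump:
        for _, elem in ET.iterparse(dump):
            if elem.tag == NS + "title":
                title = elem.text
            elif elem.tag == NS + "revision":
                text = elem.findtext(NS + "text") or ""
                rev_id = elem.findtext(NS + "id")
                if has_placeholder(prev_text) and not has_placeholder(text):
                    print("placeholder removed: %s (revision %s)" % (title, rev_id))
                prev_text = text
                elem.clear()  # keep memory bounded on huge page histories
            elif elem.tag == NS + "page":
                prev_text = ""  # the next page's revisions start fresh
                elem.clear()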
We now have English Wikipedia fully migrated to new servers / new search
backend. We cannot fully migrate other wikis until we resolve some hardware
issues. In the meantime, here is an overview of the new features now deployed:
1) Did you mean... - we now have search suggestions. Care has been taken to
provide suggestions that are context-sensitive, i.e. they work on phrases,
proper names, and so on.
2) fuzzy and wildcard queries - a word can be made fuzzy by adding ~ to its
end, e.g. the query sarah~ thompson~ will give different spellings of, and
names similar to, sarah thompson. Wildcards can now be prefix and suffix,
e.g. *stan will give various countries in Central Asia.
3) prefix: - using this magic prefix, queries can be limited to pages
beginning with a certain prefix. E.g.
mwsuggest prefix:Wikipedia:Village Pump
will search all village pumps and archives for mwsuggest. This should be
especially useful for archive searching, in concert with inputbox.
4) intitle: - using this magic prefix, queries can be limited to titles only
5) generally improved quality of search results via usage of related
articles (based on co-occurrence of links), anchor text, text abstracts,
proximity within articles, sections, redirects, improved stemming and such.
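If you want to exercise the new syntax from a script, the stock api.php
search module should be enough; a quick sketch (nothing here is specific to
the new backend, it simply submits whatever query syntax you give it):

    import json
    import urllib.parse
    import urllib.request

    def search(query, limit=10):
        params = urllib.parse.urlencode({
            "action": "query",
            "list": "search",
            "srsearch": query,
            "srlimit": limit,
            "format": "json",
        })
        url = "https://en.wikipedia.org/w/api.php?" + params
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        return [hit["title"] for hit in data["query"]["search"]]

    print(search("sarah~ thompson~"))                         # fuzzy
    print(search("intitle:physics"))                          # titles only
    print(search("mwsuggest prefix:Wikipedia:Village pump"))  # prefix-limited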
I have written a new extension to embed music scores in MediaWiki pages:
unlike the Lilypond extension, this uses a simple input language (ABC) that is
much easier to validate for security. ABC is mostly used to transcribe Irish
trad and other simple tunes, but it recently gained support for more advanced
features, e.g. multiple staves and lyrics. These are supported in the extension
using the 'abcm2ps' tool.
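In case the notation is unfamiliar: a complete ABC tune is only a few lines of
plain text. A made-up minimal example (not taken from any existing page):

    X:1            % index
    T:Example Jig  % title
    M:6/8          % meter
    L:1/8          % default note length
    K:D            % key; the tune body follows
    ABc d2e | f2d e2c | d2B A2F | G3 A3 |]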
Unlike the existing ABC extension (AbcMusic), it doesn't support opening
arbitrary files as ABC input (which is a potential security issue), and it has
several additional features:
- The original ABC can be downloaded easily
- The score can be downloaded as PDF, PostScript, MIDI or Ogg Vorbis
- A media player can be embedded in the page to play the media file
I believe the ABC format is suitable for transcribing the majority of scores
currently on Wikimedia projects. Although it can't handle all of them, it is
better than the current situation. Plus, as ABC is simple, and existing ABC
scores are easily available, it's easier for novice users to contribute.
I would be interested to hear people's thoughts on enabling this extension on
Wikimedia projects.
Can someone explain why Wikimedia Commons accepts uploads of
printable PDF documents (e.g. brochures) but not the editable
source version in Open Document Format (e.g. .ODT)? This seems to
violate the open source principle.
This should be an FAQ, but the answer isn't obvious from the documentation.
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
On 10 Oct 2008, at 21:22, Erik Moeller wrote:
> 2008/10/10 Derbeth <derbeth(a)wp.pl>:
>> I wonder about the legal aspects. In my opinion, when you create a
>> ready-to-print version,
>> you have to attach the text of the GFDL license to it - directly, not
>> as a link. Like it is done in
As Erik wrote: This is already implemented (either a title of an
article or a URL to some license text can be set in
LocalSettings.php), but it's currently not configured.
>> Secondly, the current version of the tool commits plagiarism, because
>> it does not mention
>> image authors and does not provide any means (like making images
>> clickable) to check
>> these authors.
> Ouch, thanks for pointing that out. Tricky to do this automatically
> since it's all wiki-text with templates, but we'll investigate a
> solution here.
We'd highly appreciate input from the community regarding this topic!
The printed books from PediaPress contain a list of figures where the
license of each image is listed, together with the URL to the image
description page. As some kind of "hotfix" this solution could be
implemented in the PDF export of the Collection extension, too. But
this doesn't really solve the problem.
We think it's more of a technical/software thing, so I cross-posted
(and set Reply-To) to Wikitech-l.
In our opinion, license management/handling must be a core feature of
MediaWiki, because the software is explicitly developed for the
collaborative distribution of free content. Licenses of the contained
articles and images should not be represented via some agreed-upon
convention but via structured (and machine-readable) information,
available for each relevant object in the wiki.
Some information that would be desired:
- Full (official) name of the license(s).
- Whether the full text of the license has to be included or whether a
reference to it suffices.
- Reference to the full text of the license(s) (in some rigidly
defined format like wikitext).
- Whether attribution is required. If so: the list of required
attributions (e.g. author names).
So, basically, all the information that's required to check whether it's
possible to take some part of a MediaWiki wiki and use it somewhere else,
and all the information that has to be included in that other place.
This information could be made accessible via MediaWiki API, but
ideally it's contained in the wikitext and/or XHTML, too.
All this could be handled via microformats, even inside of templates,
but the main point is that any kind of new technique has to be
enforced, ideally by the MediaWiki software itself: in the Commons wikis
there are some conventions that can be used in software by
people/companies like us (although we have to work with hacks and
workarounds), but oftentimes, in wikis with smaller communities, this
information doesn't exist at all.
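To make this more concrete, here is a purely hypothetical sketch of how such
information might be embedded in the XHTML; the class names and structure are
invented for illustration only and are not an existing or proposed standard:

    <!-- hypothetical markup, invented for illustration only -->
    <div class="license-info">
      <span class="license-name">GNU Free Documentation License</span>
      <a class="license-fulltext" href="https://www.gnu.org/licenses/fdl.html">full license text</a>
      <span class="attribution-required">yes</span>
      <ul class="attribution-parties">
        <li>Example Author</li>
      </ul>
    </div>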
-- Johannes Beigel
I am an administrator, bureaucrat and checkuser on the Romanian
Wikipedia. I have contacted the current owner of domain wikipedia.ro
asking him to consider donating the domain to Wikimedia Foundation, Inc.
(there is no local chapter in Romania). He's considering the option, but
in the meanwhile he has offered to allow us use of the domain -- in
other words, he asked for the appropriate nameserver IP addresses he
should associate with that domain (obviously, that should lead to the
content currently served at ro.wikipedia.org). Could that be arranged?
If so, please provide the respective IP addresses so I can pass them on.
In a different train of thought, should he agree to donate the domain
altogether to WMF, can WMF take ownership, or is that against any policy?
Names with non-Latin characters in the donation comments are broken
and are being output as question marks. Some people are understandably
unhappy that their names are not appearing next to their donations.
For example, see <
(Thanks to [[ja:user:Aotake]] for pointing it out in #wikimedia.)
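For what it's worth, one common way this symptom arises is text being forced
through a single-byte charset with replacement somewhere in the pipeline; a
tiny illustration of the failure mode (just an illustration, not a diagnosis
of the donation code):

    # illustration only: forcing text through a single-byte charset with
    # replacement turns every non-Latin character into '?'
    name = "青木"                                  # e.g. a Japanese donor's name
    mangled = name.encode("latin-1", "replace").decode("latin-1")
    print(mangled)                                 # prints '??'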
Jesse Plamondon-Willard (Pathoschild)
On Sun, Nov 30, 2008 at 3:35 PM, Michael Peel <email(a)mikepeel.net> wrote:
> On 30 Nov 2008, at 20:11, Robert Rohde wrote:
> > On Sun, Nov 30, 2008 at 8:20 AM, Erik Zachte
> > <erikzachte(a)infodisiac.com> wrote:
> >> English -> English dump
> >>> Because myself and others have been frustrated by the lack of good
> >>> stats on the number of active editors on the English Wikipedia, I
> >>> have
> >>> compiled some stats on the editing frequency on enwiki:
> >> No worries: in only 176 days from now the English dump will be
> >> ready and I
> >> can run wikistats scripts on it.
> >> It just started 52 days ago, so let us be patient for a while ;)
> > Is there any reason at all to believe that it is more likely to finish
> > this time than all the previous attempts during the last two years?
> > I have virtually zero faith in a script that takes 230 days and where
> > any error wipes out all progress.
> > -Robert Rohde
> Hold on...what? There is no recent dump of the English Wikipedia, and
> there hasn't been for the last 2 years?
> Please tell me I'm misunderstanding things here.
(cc'd to wikitech-l)
I saw this the other day as well and found it odd. While enwiki dumps
do take the longest, this does seem like an _incredibly_ long time for
"All pages with complete page edit history (.bz2)" to finish (projected
completion: May 2009).