Hi everyone,
I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing apparent turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
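I don't know of a turnkey tool, but a minimal Python sketch of the two
steps involved (fetching raw wikitext via api.php, then mapping a small
subset of markup to LaTeX) might look like the following. The regex rules
cover only headings, bold and italics, and are illustrative assumptions,
not a full converter:

```python
import re
import urllib.parse


def api_url(base, title):
    """Build an api.php query URL that returns a page's raw wikitext as JSON."""
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "format": "json",
        "titles": title,
    }
    return base + "?" + urllib.parse.urlencode(params)


def wikitext_to_latex(text):
    """Translate a tiny subset of wikitext (headings, bold, italics) to LaTeX."""
    text = re.sub(r"^=== (.+?) ===$", r"\\subsection{\1}", text, flags=re.M)
    text = re.sub(r"^== (.+?) ==$", r"\\section{\1}", text, flags=re.M)
    text = re.sub(r"'''(.+?)'''", r"\\textbf{\1}", text)  # bold before italics
    text = re.sub(r"''(.+?)''", r"\\emph{\1}", text)
    return text
```

Templates, tables and links would need real parsing, so this only gets you
started; fetching the URL and feeding the result through the converter is
left to whatever HTTP library you prefer.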
Kind Regards,
Hugo Vincent,
Bluewater Systems.
Sorry to bug the list about this, but can anyone please explain
the reason for not enabling the Interlanguage extension?
See bug 15607 -
https://bugzilla.wikimedia.org/show_bug.cgi?id=15607
I believe that enabling it would be very beneficial for many projects,
and many people have expressed their support for it. I am not saying that
there are no reasons not to enable it; maybe there is a good reason,
but I don't understand it. I also understand that there are many other
unsolved bugs, but this one seems to have a ready and rather simple
solution.
I am only sending this to raise the issue. If you know the answer, you
may comment on the bug page.
Thanks in advance.
--
Amir Elisha Aharoni
heb: http://haharoni.wordpress.com | eng: http://aharoni.wordpress.com
cat: http://aprenent.wordpress.com | rus: http://amire80.livejournal.com
"We're living in pieces,
I want to live in peace." - T. Moore
Added Jan Gerber ('j'). Jan is the developer of Firefogg, and will be
helping out with some of the open video player & sequencer work with
Michael Dale that Kaltura's sponsoring.
-- brion
I've been putting placeholder images on a lot of articles on en:wp.
e.g. [[Image:Replace this image male.svg]], which goes to
[[Wikipedia:Fromowner]], which asks people to upload an image if they
own one.
I know it's inspired people to add free content images to articles in
several cases. What I'm interested in is numbers. So what I'd need is
a list of edits where one of the SVGs that redirects to
[[Wikipedia:Fromowner]] is replaced with an image. (Checking which of
those are actually free images can come next.)
Is there a tolerably easy way to get this info from a dump? Any
Wikipedia statistics fans who think this'd be easy?
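One rough way to flag candidate edits while walking consecutive revisions
from a dump might be the check below (a sketch only: the placeholder list
is an illustrative subset, not the full set of SVGs, and parsing the dump
into revision-text pairs is a separate step):

```python
import re

# Illustrative subset of the placeholder SVGs that redirect to
# [[Wikipedia:Fromowner]] -- not the full list.
PLACEHOLDERS = {
    "Replace this image male.svg",
    "Replace this image female.svg",
}


def placeholder_replaced(old_text, new_text):
    """True if an edit removed a placeholder and left some other image.

    old_text/new_text are the wikitext of two consecutive revisions.
    """
    removed = any(p in old_text and p not in new_text for p in PLACEHOLDERS)
    # Crude check for a remaining [[Image:...]] link that isn't a placeholder.
    images = re.findall(r"\[\[Image:([^|\]]+)", new_text)
    added_real = any(img.strip() not in PLACEHOLDERS for img in images)
    return removed and added_real
```

Checking whether the added images are actually free content would then be
the second pass, as suggested above.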
(If the placeholders do work, then it'd also be useful for convincing some
wikiprojects to encourage the things. Not that there's ownership of
articles on en:wp, of *course* ...)
- d.
Hi,
I have just seen two independent instances where people said that they
sent posts to foundation-l in the last +/- 12 hours, which never got
posted on the list. The emails do not show up in the moderation queue
either (nor are these two subscribers, or the entire list, moderated).
Are there any technical problems with the mail(inglist) server that
you are aware of?
Michael
--
Michael Bimmler
mbimmler(a)gmail.com
Hi all,
We now have the English Wikipedia fully migrated to the new servers and new
search backend. We cannot fully migrate the other wikis until we resolve some
hardware issues. In the meantime, here is an overview of the new features now
deployed on en.wiki:
1) Did you mean... - we now have search suggestions. Care has been taken to
provide suggestions that are context-sensitive, i.e. for phrases, proper
names, etc.
2) fuzzy and wildcard queries - a word can be made fuzzy by adding ~ to its
end, e.g. the query sarah~ thompson~ will give all the different spellings of
and names similar to sarah thompson. Wildcards can now be both prefix and
suffix, e.g. *stan will give various countries in Central Asia.
3) prefix: - using this magic prefix, queries can be limited to pages
beginning with a certain prefix. E.g.
mwsuggest prefix:Wikipedia:Village Pump
will search all village pumps and archives for mwsuggest. This should be
especially useful for archive searching in concert with inputbox or
searchbox
4) intitle: - using this magic prefix, queries can be limited to titles only
5) generally improved quality of search results via the use of related
articles (based on co-occurrence of links), anchor text, text abstracts,
proximity within articles, sections, redirects, improved stemming and so on
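To illustrate how the operators above combine, here is a toy helper for
assembling query strings (the terms and the page prefix are made up, and
the helper itself is not part of the search backend):

```python
def fuzzy(term):
    """Append ~ to make a term fuzzy (matches spelling variants)."""
    return term + "~"


def build_query(terms, prefix=None, intitle=False):
    """Assemble a search string using the operators described above."""
    q = " ".join(terms)
    if intitle:
        # intitle: limits matching to page titles only
        q = "intitle:" + q
    if prefix is not None:
        # prefix: limits results to pages whose titles start with `prefix`
        q += " prefix:" + prefix
    return q
```

For example, build_query([fuzzy("sarah"), fuzzy("thompson")]) yields
"sarah~ thompson~", and build_query(["mwsuggest"],
prefix="Wikipedia:Village Pump") reproduces the archive-search example above.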
Cheers, Robert
Hello,
I have written a new extension to embed music scores in MediaWiki pages:
https://secure.wikimedia.org/wikipedia/mediawiki/wiki/Extension:ABC
Unlike the Lilypond extension, this uses a simple input language (ABC) that is
much easier to validate for security. ABC is mostly used to transcribe Irish
trad and other simple tunes, but it recently gained support for more advanced
features, e.g. multiple staves and lyrics. This is supported in the extension
using the 'abcm2ps' tool.
Unlike the existing ABC extension (AbcMusic), it doesn't support opening
arbitrary files as ABC input (which is a potential security issue), and it has
several additional features:
- The original ABC can be downloaded easily
- The score can be downloaded as PDF, PostScript, MIDI or Ogg Vorbis
- A media player can be embedded in the page to play the media file
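For anyone who hasn't seen the format, a minimal ABC tune (an illustrative
example, not one taken from a Wikimedia page) looks like this:

```
X:1
T:Example Jig
M:6/8
L:1/8
K:D
|: DED F2A | d2f ecA | G2B F2A | E3 D3 :|
```

The headers give the reference number, title, metre, default note length
and key; abcm2ps renders this into engraved notation.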
I believe the ABC format is suitable for transcribing the majority of scores
currently on Wikimedia projects. Although it can't handle all of them, it is
better than the current situation. Plus, as ABC is simple, and existing ABC
scores are easily available, it's easier for novice users to contribute.
I would be interested to hear people's thoughts on enabling this extension on
Wikimedia.
- river.
Can someone explain why Wikimedia Commons accepts uploads of
printable PDF documents (e.g. brochures) but not the editable
source version in Open Document Format (e.g. .odt)? This seems to
violate the open source principle.
This should be an FAQ, but it isn't obvious from
http://commons.wikimedia.org/wiki/Commons:File_types
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
Two separate sites indicate potential sources of torrents for *.tar.gz
downloads of the en Wikipedia database material:
http://en.wikipedia.org/wiki/Wikipedia_database and
http://meta.wikimedia.org/wiki/Data_dumps#What_about_bittorrent.3F
(so far).
Can anyone point to more comprehensive lists of torrents/trackers than
these? Are there any plans for all the database download files to be
made available this way? (I imagine there would also be some PDF manual
to go along with these, explaining offline viewing, and potentially
more than this.)
J
On 4/15/09, Petr Kadlec <petr.kadlec(a)gmail.com> wrote:
> 2009/4/14 Platonides <Platonides(a)gmail.com>:
>> IMHO the benefits of separate files are similar to the disadvantages. A
>> side benefit would be that the hashes would be split, too. If
>> you were unlucky, knowing that 'something' (perhaps just a bit) in the
>> 150GB you downloaded is wrong is not that helpful.
>> So having hashes for file sections of the big ones, even if not
>> 'standard', would be an improvement.
>
> For that, something like Parchive would probably be better…
>
> -- [[cs:User:Mormegil | Petr Kadlec]]
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
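For reference, the per-section hashing idea in the quoted message is easy
to sketch (the chunk size and the choice of SHA-1 are arbitrary here, and
unlike Parchive this only locates corruption rather than repairing it):

```python
import hashlib


def chunk_hashes(path, chunk_size=64 * 1024 * 1024):
    """Yield (offset, sha1-hex) for each fixed-size chunk of a file,
    so a bad download can be narrowed to one chunk instead of 150GB."""
    offset = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield offset, hashlib.sha1(chunk).hexdigest()
            offset += len(chunk)
```

Publishing such a list alongside each dump would let a downloader refetch
only the chunks whose hashes mismatch.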