I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract its content and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
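For what it's worth, here is a minimal sketch of one possible approach in Python: pull the raw wikitext through MediaWiki's action API and let pandoc (which ships a "mediawiki" reader) do the wikitext-to-LaTeX conversion. The api.php path is an assumption based on your wiki URL, and pandoc must be installed; templates and extension tags will still need hand-tuning afterwards.

```python
import json
import subprocess
import urllib.parse
import urllib.request

# Assumed API endpoint, derived from the wiki URL in the post.
API = "http://server.bluewatersys.com/w90n740/api.php"

def api_url(title: str) -> str:
    """Build an action=parse query that returns a page's raw wikitext."""
    params = urllib.parse.urlencode({
        "action": "parse", "page": title, "prop": "wikitext",
        "format": "json", "formatversion": "2",
    })
    return API + "?" + params

def fetch_wikitext(title: str) -> str:
    """Fetch the raw wikitext of one page via the MediaWiki API."""
    with urllib.request.urlopen(api_url(title)) as resp:
        return json.load(resp)["parse"]["wikitext"]

def to_latex(wikitext: str) -> str:
    """Convert wikitext to LaTeX with pandoc's mediawiki reader.

    pandoc handles most core syntax; templates and extension tags
    will need manual cleanup in the resulting LaTeX.
    """
    result = subprocess.run(
        ["pandoc", "-f", "mediawiki", "-t", "latex"],
        input=wikitext, capture_output=True, text=True, check=True,
    )
    return result.stdout
```

Looping this over a list of page titles and concatenating the LaTeX fragments into a master document would get you most of the way to printable output.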
I've read on the techblog that the new UI goes live in April. I have a few questions:
1) What version? Acai, babaco, citron?
2) How/where could a wiki customize the special character insert menu,
and the inserted strings? And the embed file (picture) button inserts
this: "[[Example.jpg]]", without any "File:" or "Image:"!
3) The search and replace button is available in Firefox, but does not
appear at all in Opera. Why?
4) Currently the new navigable TOC does not work on FF/Opera at all
(I've tried those).
Isn't it too early for live deployment?
Akos Szabo (Glanthor Reviol)
Sorry for bugging the list about this, but can anyone please explain
the reason for not enabling the Interlanguage extension?
See bug 15607 -
I believe that enabling it would be very beneficial for many projects,
and many people have expressed their support for it. I am not saying that
there are no reasons not to enable it; maybe there is a good reason,
but I don't understand it. I also understand that there are many other
unsolved bugs, but this one seems to have a ready and rather simple
solution. I am only sending this to raise the problem. If you know the
answer, you may comment at the bug page.
Thanks in advance.
Amir Elisha Aharoni
heb: http://haharoni.wordpress.com | eng: http://aharoni.wordpress.com
cat: http://aprenent.wordpress.com | rus: http://amire80.livejournal.com
"We're living in pieces,
I want to live in peace." - T. Moore
I am from Malayalam Wikipedia (ml.wikipedia - user:Praveenp), and my
language is Malayalam. Please consider one big problem of ours.
After the release of Unicode 5.1.0, there are two kinds of encoding for
some characters of the Malayalam alphabet (because of backward
compatibility). This causes serious problems in linking, searching, etc.
in the MediaWiki software. Currently Windows 7 is the only operating system
which supports Unicode 5.1.0 (as far as I know), but a lot of
third-party tools for writing and reading Malayalam support the new
version. And now a large quantity of data in Wikimedia projects is in the new
version. It is not possible to link to or search for titles encoded in
pre-Unicode 5.1.0 from Unicode 5.1.0, or vice versa. Currently one of our
namespaces, വര്‍ഗ്ഗം (Category), also has one such character, so it is
possible to write വര്‍ഗ്ഗം as വർഗ്ഗം, which renders the same as the first
but differs in encoding. This causes problems in categorization as well.
Is it possible to implement some Unicode equivalence
<http://en.wikipedia.org/wiki/Unicode_equivalence> in the MediaWiki
software? We urgently need help.
  Visual          Representation in 5.0 and prior   Preferred in 5.1
1 CHILLU_NN.png   0D23, 0D4D, 200D                  0D7A
2 CHILLU_N.png    0D28, 0D4D, 200D                  0D7B
3 CHILLU_RR.png   0D30, 0D4D, 200D                  0D7C
4 CHILLU_L.png    0D32, 0D4D, 200D                  0D7D
5 CHILLU_LL.png   0D33, 0D4D, 200D                  0D7E
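As a stopgap on the client or bot side, one way to make the two encodings compare equal is to fold the 5.0-style sequences to the 5.1 atomic chillus before linking or searching. Note this is not standard Unicode normalization: NFC/NFD do not unify these forms, so a custom mapping is needed. A hedged Python sketch, using only the codepoints from the table above:

```python
# Fold Malayalam chillu characters encoded as <base, VIRAMA, ZWJ>
# (Unicode 5.0 style) into the atomic 5.1 codepoints, so that both
# spellings of a title compare equal. This is a sketch for a bot or
# search frontend; fixing it inside MediaWiki would need a deeper hook.
CHILLU_MAP = {
    "\u0D23\u0D4D\u200D": "\u0D7A",  # NNA + virama + ZWJ -> CHILLU NN
    "\u0D28\u0D4D\u200D": "\u0D7B",  # NA  + virama + ZWJ -> CHILLU N
    "\u0D30\u0D4D\u200D": "\u0D7C",  # RA  + virama + ZWJ -> CHILLU RR
    "\u0D32\u0D4D\u200D": "\u0D7D",  # LA  + virama + ZWJ -> CHILLU L
    "\u0D33\u0D4D\u200D": "\u0D7E",  # LLA + virama + ZWJ -> CHILLU LL
}

def fold_chillu(text: str) -> str:
    """Return text with all 5.0-style chillu sequences folded to 5.1."""
    for old, new in CHILLU_MAP.items():
        text = text.replace(old, new)
    return text

# Two visually identical spellings now compare equal after folding.
old_form = "\u0D35\u0D30\u0D4D\u200D"  # 5.0-style sequence
new_form = "\u0D35\u0D7C"              # 5.1 atomic chillu
assert fold_chillu(old_form) == fold_chillu(new_form)
```

Applying the same folding to both the stored titles and the user's query (or maintaining redirects from one form to the other) would make linking and searching behave consistently regardless of which encoding a tool produces.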
A short question about wikistats. Having had a look at http://stats.grok.se/ I
found that visitor numbers from 200712 onwards can be searched. So I
assume that the raw hourly visitor number log files (except for a few
missing ones) are present on grok.se (though not directly downloadable)
starting from 200712. At http://mituzas.lt/wikistats I could find the hourly
visitor log files only from 200910 onwards. Do you happen to know if someone
has these older wikistats log files (between 200712 and now)?
Wikimania is an annual global event devoted to Wikimedia projects
around the globe (including Wikipedia, Wikibooks, Wikinews,
Wiktionary, Wikispecies, Wikimedia Commons, and MediaWiki). The
conference is a community gathering, giving the editors, users
and developers of Wikimedia projects an opportunity to meet each
other, exchange ideas, report on research and projects, and
collaborate on the future of the projects. The conference is open
to the public, and is a chance for educators, researchers,
programmers and free culture activists who are interested in the
Wikimedia projects to learn more and share ideas about the projects.
This year's conference will be held JULY 9-11, 2010 in Gdansk,
Poland, at the Polish Baltic Philharmonic. For more information, please
visit the official Wikimania 2010 site:
Wikimania 2010 will be a mix of submitted talks, open space
meetings, birds of a feather groups, and lightning talks.
Submissions will be discussed and selected in an informal process
on the wiki. If your submission is not added to the schedule, you
will still have many opportunities to bring topics forward
* Deadline for submitting workshop, tutorial, panel and
presentation proposals: May 20
* Notification of acceptance: May 25 (workshops), May 31
(panels, tutorials, presentations)
* All proposals and presentations will be welcome in the
Open Space track of the conference, whether or not they
are accepted in this initial process.
Submissions will be reviewed informally by a team of volunteers.
This year Wikimania will offer three tracks for submissions for
members of wiki communities and interested observers to share
their own experiences and thoughts and to present new ideas:
People and Community
The People and Community track provides a unique forum for
discussing topics related to people using/building wikis.
Relevant topics include, but are not restricted to, the following:
* Wiki Community: Conflict resolution and community dynamics;
reputation and identity;
* Wiki Outreach: Promotion of wikis and Wikimedia projects among
the general public;
* North meets south, east meets west: How can people of a
different cultural background create an encyclopedia according
to common rules? Same subject in the eye of different cultures.
* Special: Wikipedia in Central/Eastern Europe: this theme will
provide a forum to present and discuss the latest progress of
Wikis in the central/eastern European community.
Knowledge and Collaboration
The Knowledge and Collaboration track aims to promote research
and find exciting ideas related to knowledge...
* Wiki Content: New ways to improve content quality, credibility;
legal issues and copyrights (is free knowledge free?); use of
the content in education, journalism, research;
* Semantic Wikis: The use of semantic web technologies, linked
data; semantic annotation and metadata (in particular manual
vs. automated approaches).
The Infrastructure track at Wikimania will provide a forum where
both researchers and practitioners can share new approaches,
applications, and explore how to make Wiki access ever more universal.
* MediaWiki development: issues related to MediaWiki development
* Moving beyond MediaWiki: what other Wiki-like platforms exist;
what tools and features do we need for collaboration on
different types of knowledge?
* Mobile Wikis: The Web is moving off the desktop and onto mobile
phones; how do we use wikis on mobile devices? Wiki-based
Augmented Reality (AR) applications, location-based services
* User Interface Design: Usability and user experience;
accessibility, adaptive interfaces and personalization; novel interface designs.
Please note that Wikimania 2010 is co-located with WikiSym, The
International Symposium on Wikis and Open Collaboration. More
information about WikiSym can be found on the conference website:
SUBMIT A PROPOSAL
To submit a proposal for a presentation, workshop, panel or
tutorial, please visit:
Thank you for helping make Wikimania 2010 a successful event. :-)
See you in Gdansk, July 9-11!
Wikimania 2010 Gdansk
In XML, named entity references like &nbsp; and &bull; (with the
special exceptions of &lt; &gt; &amp; &quot; &apos;) can be treated as
well-formedness errors across the board by conformant XML processors.
(Yes, this means that *any* XML document that uses *any* named entity
reference except the special five is not well-formed, if you ask these
XML processors.) Alternatively, if a DTD is provided, conformant XML
processors can retrieve the DTD, parse it, and treat the reference as
a well-formedness error if it doesn't occur in the DTD, otherwise
parse it as you'd expect. (Yes, processors can really pick whichever
behavior they want, as far as I understand it. As we all know, the
great thing about standards is how many there are to choose from.)
In practice, as far as I can tell, XML UAs that our users use do the
latter, retrieving the DTD. (Otherwise they'd instantly break, and
our users wouldn't use them!)  Thus we get away with using &nbsp; and
such, and still work in these UAs. But this means we have to provide
a doctype with a DTD, which means not just <!DOCTYPE html>. This is
the default behavior on trunk -- we output an XHTML Strict DTD when
the document is actually HTML5. This has a few disadvantages, in
addition to just being odd:
1) Validators treat the content as XHTML Strict, not HTML5, so it
fails validation unless you specifically ask for HTML5 validation.
I've already seen a couple of complaints about this, and we haven't
even released yet. Lots of people care about validation.
2) XML processors are still within their rights to reject the page,
declining to process the DTD and treating the page as non-well-formed.
3) For XML processors that do process the DTD, we force them to do a
network load as soon as they start parsing the page. Presumably this
slows down parsing (dunno how much in practice), and it also hurts the
W3C's poor servers:
The alternative is to simply not use any named character references --
replace them all by numeric ones. E.g., use &#160; instead of &nbsp;,
and &#8226; instead of &bull;. Then we can use <!DOCTYPE html> by default
and avoid these problems. In fact, we already do this for anything
that passes through the parser, as far as I can tell -- we convert them
to numeric references there.
The problem is that if we do this and then miss a few entities
somewhere in the source code, some pages will mysteriously become
non-well-formed and tools will break. Plus, of course, you have the
usual risks of breakage from mass changes. Overall, though, I'd
prefer that we do this, because the alternative is that I'd have to
pester the standards people and validator people for a means to let us
validate properly with an XHTML Strict doctype.
Are there any objections to me removing all named entity references
from MediaWiki output?
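No objection here; for reference, the replacement itself is mechanical. A rough Python sketch of the conversion (assuming Python's html.entities name table is a close enough stand-in for whatever mapping the output code actually uses), keeping only the five XML-predefined names:

```python
import re
from html.entities import name2codepoint

# The five references XML itself predefines; everything else becomes
# a numeric character reference so <!DOCTYPE html> output stays
# well-formed without a DTD.
XML_SAFE = {"lt", "gt", "amp", "quot", "apos"}

def numeric_entities(markup: str) -> str:
    """Replace named character references with numeric ones.

    Unknown names are left untouched rather than guessed at.
    """
    def repl(m: re.Match) -> str:
        name = m.group(1)
        if name in XML_SAFE:
            return m.group(0)
        cp = name2codepoint.get(name)
        return "&#%d;" % cp if cp is not None else m.group(0)

    return re.sub(r"&([A-Za-z][A-Za-z0-9]*);", repl, markup)

print(numeric_entities("A&nbsp;B &bull; &lt;tag&gt;"))
# -> A&#160;B &#8226; &lt;tag&gt;
```

A sweep like this over the source tree, plus a check in the output path, would catch most stragglers; anything missed would show up quickly as a well-formedness error in an XML validator.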
Google earlier today announced the selected students for Google Summer of
Code 2010. We're happy to report that six of our students were accepted, listed here:
We had a lot of really great proposals this year, and a really engaged
mentor group helping with the selection process. It was both wonderful to
have so many choices, and really sad that we couldn't pick them all, but in
the end, we had to narrow the list down.
To the students that weren't selected: do know that we were inspired by the
quality level of all of the proposals, and we had to turn down some really
exceptional proposals. Please don't be discouraged, and do consider us next year.
To the students selected: congratulations! Welcome aboard! We really look
forward to working with you to make sure you are successful and have a great
time in the process.
To everyone volunteering as a mentor who helped with the selection process:
thank you for your effort and dedication! There was a lot to sort through,
but I think we can all feel great that we have a group of very capable
students on the case this year thanks to your work.
I use WampServer and MediaWiki to set up a local Wikipedia. I have
imported the Wikipedia data.
At the very start, everything was fine. But a few days later, query speed
became slow. When I search for an article, it takes about 20 seconds.
Is there any way to solve this problem?