I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing obvious turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
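To sketch the extraction half of what I'm after (in PHP, since that's
what the wiki itself runs on): pull the raw wikitext for a page and map
the markup to LaTeX. The page name and the regexes below are only
illustrative; a real converter would need a proper parser for tables,
templates and nested markup.

<?php
// Fetch raw wikitext via MediaWiki's standard action=raw entry point.
$wiki = 'http://server.bluewatersys.com/w90n740/index.php';
$page = 'Main_Page'; // placeholder page name
$text = file_get_contents($wiki . '?title=' . urlencode($page) . '&action=raw');

// A few illustrative wikitext-to-LaTeX mappings (bold before italics,
// so ''' is not eaten by the '' rule).
$text = preg_replace("/'''(.+?)'''/s", '\\textbf{$1}', $text);
$text = preg_replace("/''(.+?)''/s", '\\textit{$1}', $text);
$text = preg_replace('/^== (.+?) ==$/m', '\\section{$1}', $text);

file_put_contents($page . '.tex', $text);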
Today we passed 10k HTTP requests per second (even with inter-squid
traffic eliminated). Especially thanks to Mark and Tim, who've been
improving our caching, as well as doing lots of other work, and
achieved incredible results (while I was slacking). Really, thanks!
I'm getting this both at home and at work, almost every time I try to
edit a page:
Sorry! We could not process your edit due to a loss of session data.
Please try again. If it still doesn't work, try logging out and
logging back in.
Normally, when I've had this message it's been after attempting to
save a page after extended inactivity - you click Save again and the
problem goes away. However, it's currently occurring as soon as I
click "Edit", but doesn't cause any problems when clicking Save.
Just wondering if it's the sort of thing that everyone is
experiencing, but no one thought of reporting? :)
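My understanding is that the message comes from an edit-token check
along these general lines; this is a simplified illustration, not the
actual MediaWiki code:

<?php
session_start();

// On "Edit", a token is stored in the session and embedded in the form.
// If the session is lost before "Save" (expired cookies, etc.), the
// posted token no longer matches and the warning above is shown.
if (!isset($_SESSION['edit_token'])) {
    $_SESSION['edit_token'] = md5(uniqid(mt_rand(), true));
}

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $posted = isset($_POST['wpEditToken']) ? $_POST['wpEditToken'] : '';
    if ($posted !== $_SESSION['edit_token']) {
        echo 'Sorry! We could not process your edit due to a loss of session data.';
    }
}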
Someone just found an issue with the "Cite this article" function.
For example, when we view the "Wikipedia" article as of 00:33, 16 June,
the URL in "Cite this article" links to the current
version of the article. It should cite the article as of
that modification time instead of the current revision, IMHO. Is
this really an issue with the Cite.php extension?
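What I'd expect is a permanent link built from the revision's oldid,
along these lines (the revision number here is made up):

<?php
// Sketch: a citation URL pinned to a specific revision via oldid,
// rather than to whatever the article looks like later.
$revId = 12345678; // hypothetical revision ID
echo 'http://en.wikipedia.org/w/index.php?title=Wikipedia&oldid=' . $revId;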
thanks and regards
I've got MediaWiki 1.6.6 running on a Windows XP SP2 system with Apache 2.2, PHP 5.1.4, PEAR's Mail package ver. 1.1.10, and MySQL 5.0.21. This PC is run behind a firewall and will be accessed only by internal users. The system is up and running and I can log in, create new discussions, edit, etc., but when I try to "Confirm your email address" by pushing the "Mail a confirmation code" button, all I get is a blank page and no email is sent.
My LocalSettings.php file contains the following:
$wgEnableEmail = true;
$wgEnableUserEmail = true;
$wgEmergencyContact = "wikiadmin@internaltest.com";
$wgPasswordSender = "wikiadmin@internaltest.com";
$wgSMTP = array(
    "host" => '192.168.1.2',
    "IDHost" => 'internaltest.com',
    "port" => "25",
    # 'auth' => false,
    "auth" => true,
    "username" => 'wikiadmin',
    "password" => 'BlahBlah'
);
For "host" I have tried the ip address of the system, the machine name, smtp.machine name, smpt.machine name.FQDN but no luck.
My php.ini has:
; For Win32 only.
SMTP = 192.168.1.2
smtp_port = 25
; For Win32 only.
sendmail_from = wikiadmin@internaltest.com
Any suggestions that could be offered would be appreciated.
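To isolate things I also plan to test PEAR Mail directly with the same
settings, taking MediaWiki out of the loop; roughly like this (the
recipient address is a placeholder):

<?php
require_once 'Mail.php';

// Same parameters as $wgSMTP above; if this fails too, the problem is
// in the SMTP setup rather than in MediaWiki.
$params = array(
    'host'     => '192.168.1.2',
    'port'     => 25,
    'auth'     => true,
    'username' => 'wikiadmin',
    'password' => 'BlahBlah'
);

$headers = array(
    'From'    => 'wikiadmin@internaltest.com',
    'To'      => 'me@internaltest.com',
    'Subject' => 'PEAR Mail SMTP test'
);

$mail   = Mail::factory('smtp', $params);
$result = $mail->send('me@internaltest.com', $headers, 'Test body.');

// send() returns a PEAR_Error object on failure.
if (PEAR::isError($result)) {
    echo $result->getMessage();
} else {
    echo 'Sent.';
}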
I've patched up some old problems with Special:Import and Special:Export:
* Import updates categories etc
* Imports are logged and reviewable in Special:Log/import
* Imported pages also get a null edit in the history indicating the import
* Export is fixed up to allow fetching history for shorter pages, while still
aborting to avoid bogging down the servers on longer pages (currently set to a
cutoff of 100 edits)
* Transwiki import allows selecting the import-with-history option
Wikis wishing transwiki import capability should let us know which wikis they
want to be able to import directly from (for instance, from a wikipedia ->
wiktionary) and we can enable it.
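For reference, enabling a source for a wiki comes down to a
LocalSettings.php entry along these lines (wiki names are examples):

# Interwiki prefixes that Special:Import may fetch from transwiki-style.
$wgImportSources = array( 'wikipedia', 'meta' );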
Please pass this notice on to wikis where it will be of interest.
-- brion vibber (brion @ pobox.com)
A very big update was made to the Swedish language
file some weeks ago. On the Swedish Wikimedia projects,
older un-updated messages in the MediaWiki namespace
were edited only by a user called MediaWiki_default
(some script run on the servers?). Many of those
messages are now old and "block" the updated messages
in the language files.
Can User:MediaWiki_default update or delete the
messages on the Swedish Wikibooks and Wikisource where
MediaWiki_default is the only contributor? (On the
other Swedish projects administrators have already
manually deleted those messages, but that seems to be
unnecessary work if a script can do it automatically.)
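If it helps, I imagine the affected pages could be found with a query
along these lines; a rough sketch against the core page/revision
tables (namespace 8 is the MediaWiki namespace, and the exact default
username may need adjusting):

<?php
// List MediaWiki-namespace pages whose only contributor is the
// default-message user, i.e. candidates for automatic deletion.
$db = mysql_connect('localhost', 'wikiuser', 'secret'); // placeholders
mysql_select_db('wikidb', $db);

$sql = "SELECT page_title FROM page
        WHERE page_namespace = 8
          AND NOT EXISTS (
                SELECT 1 FROM revision
                WHERE rev_page = page_id
                  AND rev_user_text <> 'MediaWiki default'
              )";

$res = mysql_query($sql, $db);
while ($row = mysql_fetch_assoc($res)) {
    echo $row['page_title'] . "\n";
}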
I've been directed here by Brion, Robchurch and others on #wikimedia-tech.
So I propose a new feature for Wikipedia which people on
#wikimedia-tech mostly refer to as a blame page or blame map. I would
prefer to call it something like "Track contributions mode" (because
of its similarity to MS Word's track-changes mode) or "Hall of fame",
but whatever. I have a live prototype written in PHP & MySQL at
http://18.104.22.168:9000/ An example of a "blame map" (two blame maps
compared) can be seen at http://22.214.171.124:9000/history::171
For some reason folks at #wikimedia-tech were mainly concerned with
speed and almost nothing else, so I'll try explaining the performance
issues as best I can.
First of all, I DO NOT propose to recalculate diffs for all the zillions
of edits Wikipedia already has. Diffs would only be calculated for
revisions created after the feature is enabled.
Next, I want to explain in detail how I see this working. First, I
propose to modify the revision table and add a flag with the following
possible values: "Revision is too old to be diffed", "Revision is
awaiting to be diffed", and "Revision has been diffed". Another table
should also be added that stores blame maps for each revision. The blame
map for each subsequent revision will be calculated incrementally, so
it doesn't really matter whether an article has 10 or 1000 revisions: we
would only need the last blame map.
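To make the incremental idea concrete, here is a toy sketch of how a
blame map could be carried forward from one revision to the next; the
data structures and function name are my own invention for this mail,
not the prototype's code, and a real implementation would use a proper
word-level diff rather than this greedy scan:

<?php
// Toy blame map: one author per word position. Updating it needs only
// the previous map plus one diff, no matter how many revisions exist.
function update_blame_map($prevMap, $prevWords, $newWords, $author) {
    $newMap = array();
    $i = 0; // cursor into the previous revision's words
    foreach ($newWords as $j => $word) {
        if ($i < count($prevWords) && $prevWords[$i] === $word) {
            $newMap[$j] = $prevMap[$i]; // unchanged word keeps its author
            $i++;
        } else {
            $newMap[$j] = $author;      // new or changed word is credited
        }
    }
    return $newMap;
}

$prevWords = array('the', 'quick', 'fox');
$prevMap   = array('Alice', 'Alice', 'Alice');
$newWords  = array('the', 'quick', 'brown', 'fox');
print_r(update_blame_map($prevMap, $prevWords, $newWords, 'Bob'));
// Bob is credited only for the inserted word "brown".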
I also propose to have separate dedicated diff server(s) whose sole job
is to calculate diffs in the background. I.e. the diff server grabs a
revision with the "Revision is awaiting to be diffed" flag and the last
blame map from the database, calculates the diff, and finally stores the
new blame map in the database and changes the revision flag to
"Revision has been diffed".
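The background worker could be as simple as the following loop; the
flag values and the fetch/store helpers are placeholders for whatever
schema is actually chosen, and update_blame_map() is the toy function
from the previous sketch:

<?php
// Hedged sketch of the dedicated diff server's main loop.
define('FLAG_AWAITING', 1);
define('FLAG_DIFFED',   2);

while (true) {
    $rev = fetch_revision_with_flag(FLAG_AWAITING);   // placeholder
    if ($rev === null) {
        sleep(5); // nothing queued; poll again shortly
        continue;
    }
    $prev = fetch_last_blame_map($rev['page_id']);    // placeholder
    $map  = update_blame_map($prev['map'], $prev['words'],
                             $rev['words'], $rev['user']);
    store_blame_map($rev['rev_id'], $map);            // placeholder
    set_revision_flag($rev['rev_id'], FLAG_DIFFED);   // placeholder
}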
In addition, the article display logic should be altered. The module that
displays an article should check the diff flag. If the flag is set to
"Revision is too old to be diffed", no further changes are needed. If the
flag is set to "Revision is awaiting to be diffed", then a Credits
section should be created that only contains the message "Calculation in
progress". If the flag is set to "Revision has been diffed", then a
Credits section should be created that contains the list of contributors
ordered by contribution size. The list of contributors in the correct
order can be generated with a single select on the blame map table, and
this select can be cached. A direct link to the blame map should be
displayed too; if the user clicks on it, the corresponding
blame map should be presented. Every blame map can be generated with a
single select and can be placed in the cache. Yawn.
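In display code this amounts to a three-way branch, roughly like the
sketch below (flag constants mirror the worker sketch, and the helper
is again a placeholder):

<?php
// Hedged sketch of the Credits section at article render time.
define('FLAG_TOO_OLD',  0);
define('FLAG_AWAITING', 1);
define('FLAG_DIFFED',   2);

function render_credits($rev) {
    switch ($rev['diff_flag']) {
        case FLAG_TOO_OLD:
            return ''; // old revision: no Credits section at all
        case FLAG_AWAITING:
            return "<h2>Credits</h2><p>Calculation in progress</p>";
        case FLAG_DIFFED:
            // One (cacheable) select against the blame map table;
            // contributors_by_size() is a placeholder helper.
            $list = contributors_by_size($rev['rev_id']);
            return "<h2>Credits</h2><ol><li>"
                 . implode('</li><li>', $list)
                 . "</li></ol><p><a href=\"?blamemap={$rev['rev_id']}\">"
                 . "View blame map</a></p>";
    }
    return '';
}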
If you are still awake by now, here are more thoughts on fault tolerance.
Should the diff server die, crash, fail or whatever, the only side effect
the end user will see is the "Calculation in progress" message right after
the article body. That's it. No slowdown or anything. If the user still
wants to see some kind of diff, he/she can still use the old diff engine.
Because blame maps aren't calculated in real time, this feature is an
impractical target for DoS attacks. However, I should point out that any
real-time diff algorithm is one big fat target for DoS attacks on other
wikis which run on a single server without some sort of acceleration.
There is also a small Unicode issue. Due to crappy UTF-8 support in PHP,
all non-Latin characters are currently ignored. I believe this could
be solved either by enabling proper Unicode support in PHP or by writing
custom code to separate words. But before that, I propose to test on the
English Wikipedia first, because if it works for English it should
work for other languages.
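For what it's worth, PCRE's Unicode mode may already be enough for the
word-splitting step, without full Unicode support elsewhere in PHP; a
one-line sketch:

<?php
$text = "Räksmörgås and кириллица mixed together";
// Split on runs of non-letter/non-digit characters; the /u modifier
// makes \p{L} match letters in any script, not just Latin.
$words = preg_split('/[^\p{L}\p{N}]+/u', $text, -1, PREG_SPLIT_NO_EMPTY);
print_r($words);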
So I offer the following practical steps. Dedicate one of the servers to
be a diff playground; I will need a shell account on this server. Install
MediaWiki on it along with the diff logic running in the background.
Create a read-only MySQL account on the live database server. As a result,
this diff server can grab new revisions from the live database, diff them
and store the results locally. This way we can find out how many edits a
single server can process and see how many servers this feature will
require in total (I don't think it will be more than 2-3, though).
In conclusion, I'd like to say that in my opinion this feature would be
useful and practical if implemented. It could also be a crucial building
block for other interesting features. However, I want to stress that
I'm not interested in doing this *unless* it is used on the English
Wikipedia and I'm given appropriate credit. I can give the reason why I
want that in private e-mail.
Thank you for reading this long and boring e-mail.
It has been asked multiple times at [[commons:Commons talk:File types]]
to enable OpenOffice.org 2.0 formats (=OASIS OpenDocument Format) for
file upload. Any objections? Can you please enable it?
Filename extensions are .odt (text), .ods (spreadsheet), .odp
(presentation), and .odg (graphics); then there are .odc (chart), .odf
(formula), .odb (database), .odi (image), the corresponding templates
.ott, .ots, .otp, .otg, .otc, .otf and .oti, and finally .odm (text
master) (see the OpenDocument specification, appendix C, page 697f.).
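Enabling them on a wiki should then come down to a LocalSettings.php
change along these lines (a sketch; whether every extension belongs on
the list is exactly the question being asked):

# Add the OpenDocument extensions to the allowed upload types.
$wgFileExtensions = array_merge( $wgFileExtensions, array(
    'odt', 'ods', 'odp', 'odg', 'odc', 'odf', 'odb', 'odi',
    'ott', 'ots', 'otp', 'otg', 'otc', 'otf', 'oti', 'odm'
) );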
I've been asked by some of the editors on dawiki about the possibility
of creating a set of questions that can lead to the correct image
license(s) for image uploads. Furthermore, there is a request for
enforcing a name for the photographer, so proper crediting can be done.
Looking at SpecialUpload.php, there are no hooks that are really
suitable for this, and I'm also a little unsure about the effects of
beginning to send people to new pages mid-upload, which is what would
happen if this were added to either of the two existing hooks.
So I propose to add two hooks: one for adding fields to the initial
form, and one for validating the contents before proceeding with the
upload. Is there any reason that validation of license information
should not be done in this way? Also, if a more suitable way of
"quizzing" people exists, I'd be happy to hear about that.