I haven't been receiving any emails notifying me of user talk page changes
for quite some time now. I do have it enabled in my preferences. There
doesn't seem to have been much point in enabling this for en.wiki if it
meant shutting it down for all wikis.
-- Adrignola
Hello,
During the Berlin hackathon (which was an awesome event), I added a
quick hack to MediaWiki, which is correctly marked as fixme. The
revision I am requesting comments for is r87992:
http://www.mediawiki.org/wiki/Special:Code/MediaWiki/87992
Copy-pasting the example from the commit message:
Example:
$db->makeList( array( 'field!' => array( 1, 2, 3 ) ) );
outputs:
'field' NOT IN ('1', '2', '3')
$db->makeList( array( 'foo!' => array( 777 ) ) );
outputs:
'foo' != 777
The exclamation mark is an easy marker for negating the condition.
Brion raised concerns: the syntax is not obvious and it lacks some
potentially useful features. He recommends using full operators:
'some_field IS NOT' => null,
'some_value !=' => 23,
'some_ts >=' => $db->timestamp($cutoff),
'some_ts <' => $db->timestamp($cutoff),
Thus the proposed diff is:
- 'field!'
+ 'field !='
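For illustration, here is a sketch of how the proposed operator syntax
might look at a call site (the field names and the $cutoff variable are
just hypothetical examples; this assumes the 'field <operator>' form is
adopted):

  $res = $db->select(
      'revision',
      array( 'rev_id' ),
      array(
          // becomes: rev_user != '0' (skip anonymous edits)
          'rev_user !=' => 0,
          // becomes: rev_timestamp >= <cutoff>
          'rev_timestamp >=' => $db->timestamp( $cutoff ),
          // becomes: rev_parent_id IS NOT NULL
          'rev_parent_id IS NOT' => null,
      ),
      __METHOD__
  );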
I would appreciate it if developers could comment on this feature,
either on the mailing list or in the revision's comments.
NB: the 'field!' syntax has already been used in the code since then;
those call sites will need fixing if we change the syntax.
--
Ashar Voultoiz
If you're going to a conference, here's a leaflet you can print and
give out to encourage people to contribute to MediaWiki:
http://www.mediawiki.org/wiki/File:MediaWiki_flyer_20110725-1.pdf
Also, if you'd like to teach a two-hour "How to customize/hack
MediaWiki" workshop at some upcoming conference, unconference,
barcamp, whatever, I can help you out. Guillaume Paumier and I are
turning http://www.mediawiki.org/wiki/How_to_become_a_MediaWiki_hacker/Workshop
into a syllabus that anyone can teach.
Some upcoming conferences where we could recruit more testers, users,
and developers:
* Ubuntu Global Jam, 2-4 Sept., 42 events around the world
https://wiki.ubuntu.com/UbuntuGlobalJam
* Open Video Conference, especially the open media developers' track,
Sept. 10-12, New York City, USA http://openvideoconference.org/
* ZendCon (PHP conference), Oct. 17-20, Santa Clara, USA
http://www.zendcon.com/
* PostgreSQL Europe, Oct. 18-21, Amsterdam, The Netherlands
http://2011.pgconf.eu
* PHPCon Poland, Oct. 21-23, http://www.phpcon.pl/
Wikia's Łukasz Garczewski is attending and giving the talk "Do you
speak English, yes I don't"; he will help me out, but more help is
welcome.
If you are going to one of those and you'd like Wikimedia stickers or
buttons to pass out, please let me know.
--
Sumana Harihareswara
Volunteer Development Coordinator
Wikimedia Foundation
I took a little time last week and wrapped NeilK's parserPlus library
into an extension. It implements a nice framework for grabbing
localized message strings, with limited but very useful PEG-based
wikitext parsing, all on the client side. This is code that's been in
use in UploadWizard for some time now, but the extension makes it
available to other projects as well.
tl;dr:
1. Enable the extension; it makes two functions available in your JS
(a minimal sketch follows this list).
2. string example: $( '.status' ).append( gM( 'mwe-upwiz-file-all-ok' ) );
3. chainable jQuery example: $( '.status' ).msg( 'mwe-upwiz-file-all-ok' );
4. It does lots more stuff, check the docs.
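As a minimal sketch of step 1, assuming the extension's entry point
follows the usual naming convention (check the extension page below for
the exact file name), enabling it in LocalSettings.php would look like:

  // Load the JQueryMsg extension (file name assumed, not confirmed).
  require_once( "$IP/extensions/JQueryMsg/JQueryMsg.php" );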
Why does this exist? In Neil's own words:
"In the course of writing UploadWizard, I started to rely on MwEmbed's
message library, which had limited wikitext parsing. This was a great help
to internationalization, and the PLURAL support was nice.
"MwEmbed was ultimately not accepted for integration into MediaWiki, so the
ResourceLoader framework was invented to replace that. But we had little or
no support for wikitext-parsed messages. Simple replacements were handled,
but not complicated or nested parsing.
"Michael Dale and NeilK (that's me) wrote another class (MwMessage.js) to
supply the needed features and some advanced ideas like dropping jQuery
nodes right into message strings. But I felt that it was still a bit too
hacky and had some annoying flaws. For every message, you needed to
instantiate another parser. Also, parameters like $1 were replaced before
the message was actually parsed, leading to some unnecessary convolutions
and code repetition for the advanced jQuery-oriented features that Michael
was exploiting heavily."
It's pretty cool. I can imagine this functionality being pulled into
core at some point, but for the moment the extension provides a
low-impact way for the rest of us to take advantage of it. When it
does get incorporated into core, the code changes required in
extensions should be minimal.
Read more: http://www.mediawiki.org/wiki/Extension:JQueryMsg
-Ian
What: Collection extension bug triage
When: Wednesday, August 24, 17:00 UTC
Time zone conversion: http://hexm.de/65
Where: #wikimedia-dev on freenode
Use http://webchat.freenode.net/ if you don't have an IRC
client
This week I'll be focusing on the Collection extension in week 2 of "The
Bugmeister and Tomasz Finc". If you've ever tried to create PDFs or
OpenZIM files using the Book Creator or wanted to try to adapt these
tools for your own site, this is the bug triage for you.
Following are the bugs I really want to focus on. But if you don't see
your bug here, then check out the etherpad: http://hexm.de/5l
If it isn't listed there, send me an email and I'll try to make sure
that it gets attention.
(FWIW, I hope to have a list of bugs ready for a sprint this weekend
based, mostly, on the Collection extension.)
http://bugzilla.wikimedia.org/30326 -- PDF export extension doesn't
support some characters in Arabic script
http://bugzilla.wikimedia.org/19830 -- PDF prints don't join Arabic
letters properly
http://bugzilla.wikimedia.org/28206 -- PDF generation does not support
Complex Script Wikis
http://bugzilla.wikimedia.org/30437 -- change the Hebrew default font to
Taamey Frank CLM
http://bugzilla.wikimedia.org/27462 -- <noinclude> showing in PDF
http://bugzilla.wikimedia.org/28060 -- Collection extension should not
add chapters in reverse order
http://bugzilla.wikimedia.org/30503 -- template exception for book maker
(pdf export)
http://bugzilla.wikimedia.org/26330 -- collection contents lost when
only loading js via https
http://bugzilla.wikimedia.org/24512 -- Collection uses curl_*()
functions instead of Http::*() functions
http://bugzilla.wikimedia.org/28118 -- The path to images
https://bugzilla.wikimedia.org/30511 -- Collection extension should
place time stamp of revision extracted into the offline file
http://bugzilla.wikimedia.org/30199 -- ZIM external links should
always be marked as external... or removed
Happy hacking!
Mark.
Hi!
Over the last year, I have been using the Wikipedia XML dumps
extensively. I used them to conduct the Editor Trends Study [0], and
the Summer Research Fellows [1] and I have used them during the last
three months of the Summer of Research. I am proposing some changes
to the current XML schema based on those experiences.
The current XML schema presents a number of challenges, both for the
people who create the dump files and for the people who consume them.
Challenges include:
1) The nested structure of the schema (a single <page> tag containing
multiple <revision> tags) makes it very hard to develop an incremental
dump utility.
2) A lot of post-processing is required.
3) Because the entire text of each revision is stored, the dump files
are getting so large that they become unmanageable for most people.
1. Denormalization of the schema
Instead of having a <page> tag with multiple <revision> tags, I
propose to have just <revision> tags. Each <revision> tag would
include <page_id>, <page_title>, <page_namespace> and <page_redirect>
tags. This denormalization would make it much easier to build an
incremental dump utility: you only need to keep track of the final
revision of each article at the moment of dump creation, and then you
can create a new incremental dump continuing from the last dump. It
would also be easier to restore a dump process that crashed. Finally,
tools like Hadoop would have a much easier time handling this XML
schema than the current one.
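To make the proposal concrete, a single denormalized entry might look
something like this (the element names follow the proposal above; the
values and the exact set of child elements are illustrative
placeholders, not a final design):

  <revision>
    <id>12345</id>
    <page_id>42</page_id>
    <page_title>Example page</page_title>
    <page_namespace>0</page_namespace>
    <page_redirect>0</page_redirect>
    <timestamp>2011-08-24T17:00:00Z</timestamp>
    <contributor>...</contributor>
    <text>...</text>
  </revision>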
2. Post-processing of data
Currently, a significant amount of time is required for
post-processing the data. Some examples include:
* The title includes the namespace (e.g. "Talk:Foo" rather than "Foo"
plus a separate namespace field), so excluding pages from a particular
namespace requires generating a separate namespace variable. In
particular, focusing on the main namespace is tricky, because that can
only be done by checking that a page does not belong to any other
namespace (see bug
https://bugzilla.wikimedia.org/show_bug.cgi?id=27775).
* The <redirect> tag is currently either true or false; the
article_id of the redirect target would be more useful.
* Revisions within a <page> are sorted by revision_id, but they should
be sorted by timestamp. The current ordering makes it even harder to
generate diffs between two revisions (see bug
https://bugzilla.wikimedia.org/show_bug.cgi?id=27112)
* Some useful variables in the MySQL database are not yet exposed in
the XML files. Examples include:
- Length of revision (part of MediaWiki 1.17)
- Namespace of article
3. Smaller dump sizes
The dump files continue to grow because the full text of each revision
is stored in the XML file. Currently, the uncompressed XML dump files
of the English Wikipedia are about 5.5 TB in size, and this will only
continue to grow. An alternative would be to replace the <text> tag
with <text_added> and <text_removed> tags. A page could still be
reconstructed by applying multiple <text_added> and <text_removed>
tags in sequence. We can provide a simple script / tool that
reconstructs the full text of an article up to a particular date /
revision id (a sketch follows the list below). This has two advantages:
1) The dump files will be significantly smaller
2) It will be easier and faster to analyze the types of edits: who is
adding a template, who is wikifying an edit, who is fixing spelling
and grammar mistakes.
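A minimal sketch of such a reconstruction tool (in PHP, assuming purely
for illustration that each revision's <text_added>/<text_removed> pair
has been combined into a unified diff held in a hypothetical text_patch
field, and that the PECL xdiff extension is available):

  $text = '';
  foreach ( $revisions as $rev ) { // revisions ordered by timestamp
      // Apply this revision's changes to the accumulated article text.
      $text = xdiff_string_patch( $text, $rev->text_patch );
      if ( $rev->id == $targetRevisionId ) {
          break; // stop once the requested revision is reconstructed
      }
  }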
4. Downsides
This suggestion is obviously not backwards compatible and it might
break some tools out there. I think that the upsides (incremental
backups, Hadoop-ready and smaller sizes) outweigh the downside of
being backwards incompatible. The current way of dump generation
cannot continue forever.
[0] http://strategy.wikimedia.org/wiki/Editor_Trends_Study,
http://strategy.wikimedia.org/wiki/March_2011_Update
[1] http://blog.wikimedia.org/2011/06/01/summerofresearchannouncement/
I would love to hear your thoughts and comments!
Best,
Diederik
Since I've seen it brought up in two places relevant to what I'm
working on with MediaWiki:Sidebar right now, I'd like to see any notes
people may have on a replacement for MediaWiki:Sidebar.
The consensus seems to be that we want to replace MediaWiki:Sidebar
with a Special: page interface, with the data stored directly in the
database instead of in the i18n system.
For now I'm thinking of calling it Special:EditNavigation/sidebar (I
plan to expand our support for navigation beyond just the sidebar,
which is why it's not Special:EditSidebar).
I'd like to see any comments or requirements people have of an
interface/system for editing the sidebar.
One question: what is our current policy on JavaScript for this kind
of thing? For something like this, where users are going to want to
drag things around to reorganize them, do we have to do it in a slow
way that works without JS?
--
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
Hello,
I would like to remove the checkbox that allows logged-in users to
mark changes as 'minor' when editing a page. Is this possible? I am
using MediaWiki 1.17.
Best regards,
Alex