Hi,
Gerrit change Id819246a9 proposes an implementation for a recent changes
stream broadcast via socket.io, an abstraction layer over WebSockets that
also provides long polling as a fallback for older browsers. Comment on
<https://gerrit.wikimedia.org/r/#/c/131040/> or on the mailing list.
Thanks,
Ori
I just stumbled across <https://github.com/wikimedia/mediawiki-core/pull/19>,
a small but useful contribution to core from an HHVM developer. It has gone
unnoticed for two months, which is a bit sad.
Is there a way to accept pull-requests from GitHub? According to
<https://github.com/wikimedia/mediawiki-core/settings/hooks> (may not be
visible to non-Wikimedians, sorry), the WebHook receiver
<http://tools.wmflabs.org/suchaserver/cgi-bin/receiver.py> is defunct.
Anyone know the story there?
It'd be good if some additional people were watching (that is, receiving
notifications for) <https://github.com/wikimedia/mediawiki-core/>.
I haven't responded yet, by the way, so feel free to respond if you know the
answers to these questions. I don't know what effect accepting the
pull-request will have on the code in master, and telling someone who has
already submitted a patch to go sign up for Gerrit seems impolite.
Ori
Just wanted to send out an update on the progress we made around MW-Vagrant
improvements at the Zürich Hackathon. Our primary goal was to make some key
production services available in MW-Vagrant in order to make local
development/testing easier/more reliable. We made some excellent headway,
focussing on a few key services: SSL, Varnish, CentralAuth/Multiwiki.
SSL:
I spent a majority of my time focussing on this and received a lot of
support/help from Ori. There is now an 'https' role in mw-vagrant which,
when enabled, will allow you to access your devwiki on port 4430 (forwarded
to 443 in Vagrant). There is one outstanding patchset which will make it
possible to use $wgSecureLogin in MW-Vagrant:
https://gerrit.wikimedia.org/r/#/c/132799/
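For the curious, the end state in LocalSettings.php might look roughly like
this (a minimal sketch, assuming the role keeps the 4430-to-443 forwarding;
the exact values the role applies may differ):

// Hedged sketch of secure-login settings under MW-Vagrant; the values
// below are assumptions for illustration, not the role's actual output.
$wgSecureLogin = true;   // redirect login/account pages to HTTPS
$wgHttpsPort = 4430;     // host port forwarded to the guest's port 443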
Varnish:
This is proving to be much more difficult than anticipated; however, some
progress was made and work is ongoing, spearheaded by Andrew Otto. The plan
is to set up Varnish VCLs for mw-vagrant similar to what is set up for the
text varnishes in production, with a frontend and a backend instance running
in Vagrant. Andrew is in the midst of refactoring the production Varnish
module to make it usable in Vagrant.
CentralAuth/Multiwiki:
Bryan Davis, Chris Steipp, and Reedy spent a lot of time hacking on this,
and we now have support for multiwiki/CentralAuth in Vagrant! There is
still some cleanup work being done for the role to remove kludges/hacks/etc.
(see https://gerrit.wikimedia.org/r/#/c/132691/).
Also of significant note, Matt Flaschen created an mw-vagrant ISO which can
be written to USB thumb drives, making it possible to set up mw-vagrant
without a network connection. There is still some work to be done here to
create a one-click installer as well as to update the documentation. Matt got
this done before the hackathon, and we brought a number of USB sticks imaged
with the ISO, which was instrumental in getting folks new to mw-vagrant up
and running at the hackathon. This was particularly useful during Bryan
Davis's vagrant bootcamp sessions.
I believe Katie Filbert from Wikidata did some mw-vagrant work at the
hackathon as well, although I'm not clear on the current status. Katie, can
you let us know where things stand with what you were working on?
All in all it felt like a very fruitful hack session, and we're closer than
ever to having a ready-to-go developer instance that mimics our production
environment. Big thanks to everyone involved in making our work successful.
--
Arthur Richards
Software Engineer, Mobile
[[User:Awjrichards]]
IRC: awjr
+1-415-839-6885 x6687
(CCing wikimedia-l as well; please send any replies to wikitech-l only)
The Wikimedia technical community wants to have another hackathon next year
in Europe. Who will organize it?
Interested parties, check https://www.mediawiki.org/wiki/Hackathons
We would like to confirm a host by Wikimania at the latest.
The same call goes for India and other locations with a good concentration
of Wikimedia contributors and software developers. Come on, step in. We
want to increase our geographical diversity of technical contributors.
--
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
Micru's Associated namespaces RfC is up for discussion this week.
https://www.mediawiki.org/wiki/Architecture_meetings/RFC_review_2014-04-23
(We also have room for 1 more RfC to discuss.)
Micru said about
https://www.mediawiki.org/wiki/Requests_for_comment/Associated_namespaces :
> The intended outcome would be:
> 1) find out if there is any objection against the "Namespace registry and
> association handlers" that Mark proposed
> 2) discuss possible problems with this approach
> 3) see if there would be any hands available to work on it; it is a
> delicate topic that might need someone with a deep understanding of MediaWiki
Micru also noted that we've had a previous suggestion for a namespace
manager https://www.mediawiki.org/wiki/Namespace_manager .
> AFAIK, it never materialized because back then there was not such a great
> need as there is now - or at least the current use cases didn't exist back
> then.
> I hope this RFC moves forward because it affects important upcoming and
> already deployed projects (Commons migration, templates, Visual editor, WD,
> etc).
Come to #wikimedia-office at 2100 UTC this Wednesday
http://www.worldtimebuddy.com/?qm=1&lid=2950159,2147714,5391959,100&h=29501…
or reply here with your comments and suggestions.
2300 Berlin
5pm New York
2pm California
7am Sydney
--
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation
Hi all.
We (the Wikidata team) ran into an issue recently with the value that gets
passed as $baseRevId to Content::prepareSave(); see bug 65831 [1]. This comes
from WikiPage::doEditContent(), and, for core, is nearly always set to "false"
(e.g. by EditPage).
We interpreted this rev ID to be the revision that is the nominal base revision
of the edit, and implemented an edit conflict check based on it. This works
with the way we use doEditContent() for Wikibase on Wikidata, and with most
stuff in core (which generally has $baseRevId = false). But as it turns out, it
does not work with rollbacks: WikiPage::commitRollback sets $baseRevId to the ID
of the revision we revert *to*.
Now, is that correct, or is it a bug? What does "base revision" mean?
The documentation of WikiPage::doEditContent() is unclear about this (yes, I
wrote this method when introducing the Content class - but I copied the
interface from WikiPage::doEdit(), and mostly kept the code as it was). And in the
code, $baseRevId is not used at all except for passing it to hooks and to
Content::prepareSave, which doesn't do anything with it for any of the Content
implementations in core. Only in Wikibase did we try to implement a conflict
check here, which should really be in WikiPage, I think.
So, what *does* $baseRevId mean? If you happen to know when and why $baseRevId
was introduced, please enlighten me. I can think of three possibilities:
1) It's the edit's reference revision, used to detect edit conflicts (this is
how we use this in Wikibase). That is, an edit is done with respect to a
specific revision, and that revision is passed back to WikiPage when saving, so
a check for edit conflicts can be done as close to the actual edit as possible
(ideally, in the same DB transaction). Compare bug 56849 [2].
2) The edit's "physical parent": that would be the same as (1), unless there is
a conflict that was detected early and automatically resolved by rebasing the edit.
E.g. if an edit is performed based on revision 11, but revision 12 was added
since, and the edit was successfully rebased, the "parent" would be 12, not 11.
This is what WikiPage::doEditContent() calls $oldid, and what gets saved in
rev_parent_id. Since WikiPage::doEditContent() makes the distinction between
$oldid and $baseRevId, this is probably not what $baseRevId was intended to be.
3) It could be the "logical parent": this would be identical to (2), except for
a rollback: if I revert revisions 15 and 14 back to revision 13, the new
revision's logical parent would be rev 13's parent. The idea is that you are
restoring rev 13 as it was, with the same parent rev 13 had. Something like this
seems to be the intention of what commitRollback() currently does, but the way
it is now, the new revision would have rev 13 as its logical parent (which, for
a rollback, would have identical content).
So what commitRollback currently does is none of the above, and I can't see
how it makes sense.
I suggest we fix it, define $baseRevId to mean what I explained under (1), and
implement a "late" conflict check right in the DB transaction that updates the
revision (or page) table. This might confuse some extensions, though; we
should double-check AbuseFilter, if nothing else.
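To illustrate what (1) would mean in code, here is a rough sketch of such a
late check (hypothetical: the function and its signature are my invention,
not existing core code; the point is reading page_latest with FOR UPDATE
inside the same transaction that writes the revision):

// Hypothetical sketch of a "late" edit conflict check, per option (1).
// $dbw is the primary DB connection inside the save transaction.
function checkLateEditConflict( $dbw, $pageId, $baseRevId ) {
	// Lock the page row so no concurrent save can slip in under us.
	$latestId = (int)$dbw->selectField( 'page', 'page_latest',
		array( 'page_id' => $pageId ), __METHOD__, array( 'FOR UPDATE' ) );
	if ( $baseRevId !== false && (int)$baseRevId !== $latestId ) {
		// Another revision was saved after the edit's base revision.
		return Status::newFatal( 'edit-conflict' );
	}
	return Status::newGood();
}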
Is that a good approach? Please let me know.
-- daniel
[1] https://bugzilla.wikimedia.org/show_bug.cgi?id=65831
[2] https://bugzilla.wikimedia.org/show_bug.cgi?id=56849
Hi,
a recent discussion in
https://bugzilla.wikimedia.org/show_bug.cgi?id=65724#c3
revealed that parts of the SVG standard are deliberately broken on Commons.
While I see some reasons not to adhere fully to the standard - e.g. external
resources might break over time if they are moved or deleted - I don't feel
it's good to break the standard as hard as it's done right now. It puts a
burden on creators and on the principle of sharing within the Wikimedia
environment, and overall it's technically inferior and leads, or might lead,
to useless duplication of content.
The SVG standard defines an image element. The image resource is linked to
using the xlink:href attribute. Optionally, the image can instead be embedded
into the SVG using a data URI
(https://en.wikipedia.org/wiki/Data_URI_scheme).
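To make the two variants concrete, a minimal sketch (the file URL and the
data URI payload are placeholders, not real resources):

<!-- variant 1: reference an external bitmap (currently disabled) -->
<image width="200" height="100"
       xlink:href="https://upload.wikimedia.org/example/Photo.jpg"/>

<!-- variant 2: embed the bitmap as a base64 data URI (currently required) -->
<image width="200" height="100"
       xlink:href="data:image/jpeg;base64,/9j/4AAQSk..."/>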
Combining SVGs with traditional bitmap images is useful in several ways: it
allows creators to share the way an image is manipulated, and it eases future
modifications that are hard or even impossible with traditional bitmap/photo
editing. It basically has the same advantages that mash-up web content has
over static content: each layer or element can be modified individually
without destroying the other elements. It's easy to see that a proper SVG
offers more to its potential users than a classic JPG or PNG with only one
layer, being the flattened result of all image operations.
These reasons point out the necessity for barrier-free access to the image element.
Currently, Commons cripples this access as laid out in the standard and
originally implemented by librsvg: it disables the handling of HTTP(S)
resources. Users needing the same bitmap in more than one SVG are forced to
base64-embed their source, and hence duplicate it, in each individual SVG.
Indeed, this puts quite some burden on creators and on the Wikimedia servers,
which store lots of duplicated data right now and potentially even more in
the future. Note that this duplication of data goes unnoticed by the routines
specifically in place for bitmaps right now, which check uploads for MD5
collisions and reject an upload when a duplicate is detected. Space might be
cheap as long as donations are flowing, but reverting bad practice once it is
common is harder than promoting good practice /now/ by adhering to the
standard as closely as possible.
Therefore I advocate changing librsvg in one of the two ways laid out in
comment 3 of the bug report given above, to (re)gain support for linking to
external bitmaps in SVGs. Two strategies come to mind to prevent the
disappearance of an external resource on the web:
1) cache external refs on thumbnail generation; check the external server
for updates on thumbnail re-generation
2) allow external refs only to images residing on Wikimedia servers
Point 2) should be considered the easier implementation; 1) is harder to
implement but gives even more freedom to SVG creators and would adhere more
closely to the SVG standard. However, another argument for 2) is the
licensing issue: it ensures that only images that have been properly licensed
by Commons users and vetted by the upload process can be linked to (and if a
license violation is detected and the linked-to bitmap removed from Commons,
the SVG using such a bitmap breaks gracefully).
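As a rough illustration of what 2) could mean in code (a hedged sketch: the
function name and the domain list are my assumptions, not existing MediaWiki
code):

// Hypothetical check for strategy 2: only allow xlink:href values that are
// embedded data URIs or that point at Wikimedia-hosted files.
function isAllowedSvgImageHref( $href ) {
	if ( preg_match( '/^data:/i', $href ) ) {
		return true; // base64-embedded images remain permitted
	}
	// Illustrative whitelist of Wikimedia-hosted bitmap sources.
	$allowedHosts = array( 'upload.wikimedia.org', 'commons.wikimedia.org' );
	$host = parse_url( $href, PHP_URL_HOST );
	return is_string( $host ) && in_array( strtolower( $host ), $allowedHosts, true );
}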
Regards,
Christian
Excerpt from the blog post:
https://blog.wikimedia.org/2014/05/27/request-for-proposals-mediawiki-relea…
--
Last year, the Wikimedia Foundation started to share the
responsibility[0] of the long term management of the MediaWiki software
project with the wider community. We are continuing the process with a
second Request for Proposals[1] to manage the third-party releases of
MediaWiki (PDF[2]).
The process for this RFP is a community-involved one. There is a
three-week period for organizations to prepare and submit their
proposals, after which the community can comment on and ask questions of
the proposers. The Wikimedia Foundation will take all of this feedback
into account when making the final decision for who will lead the
release management of MediaWiki for the next year.
The deadline for proposals is June 13.
Please do get involved if you are interested in the future of MediaWiki!
Greg
[0] https://blog.wikimedia.org/2013/05/21/request-for-proposals-mediawiki-relea…
[1] https://www.mediawiki.org/wiki/Release_Management_RFP
[2] https://commons.wikimedia.org/wiki/File:MediaWiki_Release_Request_For_Propo…
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
I've addressed the feedback I got in Zurich.
https://www.mediawiki.org/wiki/Performance_guidelines
I think this page is ready to have the {{draft}} tag removed. I believe
it now represents our consensus on what developers of MediaWiki core,
extensions, and gadgets should do to preserve high performance. On May 23,
I'd like to move forward with making a tutorial and a poster based on
this. So, please edit, speak up, and so on, within the next week.
--
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation
Hi, in response to bug 54607 [1], we've changed the semantics of the
mobileformat parameter of action=parse.
== Summary ==
Previously, it accepted the strings 'html' or 'wml' (later just 'html') and
modified the structure of the output (see below). This was problematic
because you needed to retrieve the HTML from the output in different ways,
depending on whether mobileformat was specified or not. Now, mobileformat is
a boolean parameter: if a 'mobileformat' parameter is present in the request,
it will be treated as "the output should be mobile-friendly", regardless of
its value, and the output structure will be the same as without it. For
compatibility with older callers, mobileformat=(html|wml) will be
special-cased to return the older structure for at least 6 months from now.
These changes will start being rolled out to the WMF sites tomorrow, Tuesday
October 24th, and the process will be complete by October 31st.
== Examples ==
=== Non-mobile parse ===
api.php?action=parse&format=json
{
"parse": {
"title": "...",
"text": {
"*": "foo"
}
}
}
api.php?action=parse&format=xml
<?xml version="1.0"?>
<api>
<parse title="..." displaytitle="...">
<text xml:space="preserve">foo</text>
</parse>
</api>
=== Parse that outputs mobile HTML, old style ===
api.php?action=parse&format=json&mobileformat=html
{
"parse": {
"title": "API",
"text": "foo"
}
}
api.php?action=parse&format=xml&mobileformat=html
<?xml version="1.0"?>
<api>
<parse title="..." text="foo" displaytitle="...">
</parse>
</api>
=== Parse that outputs mobile HTML, new style ===
api.php?action=parse&format=...&mobileformat
The output structure is the same as for non-mobile parses.
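For example, a new-style caller can read the HTML the same way whether or
not mobile output was requested (a hedged sketch; the endpoint and page name
are placeholders):

// New-style call: the mere presence of 'mobileformat' requests
// mobile-friendly HTML, and the output structure is unchanged.
$url = 'https://en.wikipedia.org/w/api.php'
	. '?action=parse&format=json&page=API&mobileformat=1';
$data = json_decode( file_get_contents( $url ), true );
echo $data['parse']['text']['*']; // same access path as a non-mobile parse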
== FAQ ==
Q: I didn't use mobileformat before, does anything change for me?
A: No.
Q: I use mobileformat=html, will my bot/tool be broken now?
A: No, you will have 6 months to switch to the new style.
Q: I'm only planning to use mobileformat, what should I do?
A: Just use the new style.
Q: How did this format discrepancy appear in the first place?
A: To err is human.
-----
[1] https://bugzilla.wikimedia.org/show_bug.cgi?id=54607
--
Best regards,
Max Semenik ([[User:MaxSem]])