Hi.
We have two requests for comment indices on mediawiki.org:
* https://www.mediawiki.org/wiki/Requests_for_comment
* https://www.mediawiki.org/wiki/Requests_for_comment/Archive
The current system requires manually updating the various lists, which is
kind of gross as the lists predictably fall out of date. I've proposed and
begun to implement a classification scheme that will allow us to automate
the generation of these indices. Details of this scheme are available
here: <https://www.mediawiki.org/wiki/Special:Permalink/1315007#Cleanup>.
One consequence of this change is that the "Updated" and "Status" table
columns will likely go away soon. Looking at the contents of these table
columns, I think ditching them is probably fine.
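Such an index could plausibly be generated by a bot from category membership. A minimal sketch using the MediaWiki API (the category name and wikitext output format here are illustrative assumptions, not the actual classification scheme):

```python
# Sketch: generate an RfC index from category membership via the
# MediaWiki API (action=query, list=categorymembers). The category
# name below is a hypothetical placeholder for illustration.
import json
from urllib.request import urlopen
from urllib.parse import urlencode

API = "https://www.mediawiki.org/w/api.php"

def category_members(category, api=API):
    """Yield page titles in the given category, following continuation."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmlimit": "500",
        "format": "json",
    }
    while True:
        with urlopen(api + "?" + urlencode(params)) as resp:
            data = json.load(resp)
        for member in data["query"]["categorymembers"]:
            yield member["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])

def build_index(titles):
    """Render a sorted wikitext bullet list from page titles."""
    return "\n".join("* [[%s]]" % t for t in sorted(titles))

# Hypothetical category name, for illustration only:
# print(build_index(category_members("Category:Archived requests for comment")))
```

A bot could run this periodically and overwrite the index page, so the lists never fall out of date by hand.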
Comments, criticisms, concerns, etc. are welcome.
MZMcBride
What about storing structured info about who actually owns a user account?
For example, bots -> maintainer(s), doppelgänger -> main account, WMF
account -> personal account.
This would lead to several advantages, such as the ability to block a
bot and its maintainer at once, as well as to prevent sockpuppets from
voting.
The relationship would have to be confirmed by both accounts, of course.
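One way to model this (all names here are illustrative, not an actual schema proposal): a typed link between two accounts that only takes effect once both sides have confirmed it.

```python
# Sketch of the proposed structure: a typed ownership link between two
# accounts, effective only after mutual confirmation. Field and kind
# names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AccountLink:
    kind: str          # e.g. "bot-maintainer", "doppelganger", "wmf-staff"
    owner: str         # controlling account (maintainer / main account)
    owned: str         # controlled account (bot / doppelganger / WMF account)
    confirmed_by: set = field(default_factory=set)

    def confirm(self, account):
        # Only the two parties to the link may confirm it.
        if account in (self.owner, self.owned):
            self.confirmed_by.add(account)

    @property
    def active(self):
        # Only a mutually confirmed link should have effects such as
        # cascading blocks or sockpuppet vote checks.
        return self.confirmed_by == {self.owner, self.owned}

link = AccountLink("bot-maintainer", owner="ExampleUser", owned="ExampleBot")
link.confirm("ExampleUser")
assert not link.active     # one-sided confirmation is not enough
link.confirm("ExampleBot")
assert link.active
```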
Please join us for the following tech talk:
*Tech Talk:* Phabricator for Wikimedia projects
*Presenters:* Quim Gil & Andre Klapper
*Date:* Dec 11
*Time:* 1800 UTC
<http://www.timeanddate.com/worldclock/fixedtime.html?msg=Phab+Tech+Talk&iso…>
Link to live YouTube stream <http://www.youtube.com/watch?v=_yr5z9Ix2f8>
*IRC channel for questions/discussion:* #wikimedia-office
Google+ page
<https://plus.google.com/u/0/b/103470172168784626509/events/cjb4le2ntnqogbbu…>,
another place for questions
*Talk description:*
Phabricator is a collaboration platform open to all Wikimedians. We focus
on bug reporting and software projects. Non-technical initiatives are
welcome as well. Did you know that the first reason we chose
Phabricator was to serve as a project management tool? Wikimedians have used
a variety of tools for project management: Trello, Mingle, Scrumbugz,
Bugzilla, Asana, Google Docs, and of course wiki pages too. In this
demo-based session we will explain how to organize your work with
Phabricator at the project, team, and individual level: how to
set up projects, priorities, and tags; how to manage workboards and
dashboards; and how to organize sprints.
Thanks!
Rachel
Hello and welcome to the latest edition of the WMF Engineering Roadmap
and Deployment update.
The full log of planned deployments next week can be found at:
<https://wikitech.wikimedia.org/wiki/Deployments#Week_of_December_15th>
REMINDER:
After next week, there will be no more normal deployments until the new
year. In other words, there will be no more scheduled updates to
MediaWiki rolled out to production until January 6th.
A quick list of notable items...
== All Week ==
* Fundraising on-going through the rest of the year
* HHVM: reimaging servers to HHVM, should be completed by end of year
== Tuesday ==
* MediaWiki deploy
** group1 to 1.25wmf12: All non-Wikipedia sites (Wiktionary, Wikisource,
Wikinews, Wikibooks, Wikiquote, Wikiversity, and a few other sites)
** <https://www.mediawiki.org/wiki/MediaWiki_1.25/wmf12>
== Wednesday ==
* Phabricator Maintenance (outage) from 4pm Pacific until midnight
** i.e. 00:00 - 08:00 UTC Thursday
** To migrate RT (internal ticketing system used by WMF Operations)
** See: <https://phabricator.wikimedia.org/T174>
* MediaWiki deploy
** group2 to 1.25wmf12 (all Wikipedias)
** group0 to 1.25wmf13 (test/test2/testwikidata/mediawiki)
Thanks and as always, questions and comments welcome,
Greg
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
Hello,
A good friend told me about yet another PHP profiler, which has a
fairly nice and friendly user interface.
Basically, you drop an extension on your server, add a plugin to your
Chromium browser, and you are all set to profile your backend straight from
the browser \O/
It was created by SensioLabs, the authors of the Symfony framework, and
looks gorgeous. I think it is worth taking a look to figure out whether it
works with HHVM.
Homepage:
https://blackfire.io/
Review by my friend:
http://blog.blackfire.io/pomm-a-two-hours-run-with-blackfire.html
--
Antoine "hashar" Musso
I am pleased to announce that Andrew Garrett is joining the Wikimedia
Foundation as a full-time Software Engineer.
Andrew has been contracting with the Wikimedia Foundation for over six
years now, and it's only fitting that, now that he's done with his
studies, he'll be going full time at the Wikimedia Foundation.
In the past six years, he's worked on a huge variety of projects,
ranging from user preferences and spam filtering to notifications and
two attempts to fix talk pages. He's passionate about making people’s
workflows and processes make sense, so in the coming months, he's
looking forward to having the time and energy to focus on software
that humanizes our projects, eliminates busy work, and makes life
easier for new and old contributors.
Right now Andrew is in the process of moving from Sydney to Maastricht
to be with his girlfriend.
He'll be working from Maastricht and, from early next year, Prague.
So if you find yourself in one of those cities, you should let him know!
When not working, doing assignments, or packing bags, Andrew is busy
travelling or homebrewing.
He's currently supporting the Flow team and we're exploring where
he'll help next.
Please join me in celebrating Andrew going full time!
--tomasz
So for a while now, I have been toying a bit with
TimedMediaHandler/MwEmbed/TimedText, with the long-term goal of making it
compatible with VE, live preview, Flow, etc.
There is a significant challenge here that we are sort of conveniently
ignoring, because stuff 'mostly works' currently and the MM team has
its plate full with plenty of other stuff:
1: There are many patches in our modules that have not been merged upstream
2: There are many patches upstream that were not merged in our tree
3: Upstream re-uses RL and much of MW's infrastructure, but is also
significantly behind. They still use PHP i18n, and their RL classes
themselves are also out of date (1.19 style?). This makes it difficult to
get 'our' changes merged upstream, because we need to bring any RL changes
etc. along with them as well.
4: No linting and code style checks are in place, making it difficult to
assess and maintain quality.
5: Old jQuery version used upstream
6: Lots of what we consider deprecated methodologies are still used
upstream.
7: Upstream has a new skin??
8: It uses loader scripts on every page, which really aren't necessary
anymore now that we can add modules to ParserOutput; but since I don't
fully understand upstream, I'm not sure what is needed to avoid breaking
upstream in this regard.
9: The JS modules arbitrarily add stuff to the mw. variables, no
namespacing there.
10: The RL modules are badly defined, overlap each other, and some script
files contain what should be in separate modules.
11: We have 5 'mwembed' modules, but upstream has about 20, so they have
quite a bit more code to maintain and migrate.
12: Brion is working on his ogvjs player which at some point needs to
integrate with this as well (Brion already has some patches for this [1]).
13: Kaltura itself seems very busy and doesn't seem to have much time
to help us out. However, since some of the code is highly specific to their
use cases, it is difficult to validate changes.
Oh, and the file trees are disjoint between us and upstream, making git
merging a lot more troublesome than it should be (anyone got tips?).
This is maintenance hell; we need to come up with a plan here, or we are
going to drift so far out of sync that the cheapest solution will
be to start from scratch...
So my questions:
1: Is there anything in upstream that we actually want? I've been hearing
about the 'update' that was still coming from there for over a year now,
but given how far the two trees are now out of sync, I'm not really holding
my breath for that anymore. The last 'proper sync' seems to have been
'Kaltura 1.7' in July 2012.[2] They are now at v2.21.9.
2: Who can think of a strategy to fix this?
3: Or should we just split off our modules and let upstream sort this out?
4: Should we consider starting something from scratch ?
DJ
I have done a bit of cleanup [3] with jshint and jscs on the modules that
we use. There are some remaining problems [4], some of which are true bugs
in the code. I don't intend to propose these changes for merging any time
soon, since they would probably make consolidation of the two variants even
more complicated, but I'll try to keep them up to date and maybe try to fix
some of these bugs upstream or in MW.
[1] https://gerrit.wikimedia.org/r/#/c/165477/ https://gerrit.wikimedia.org/r/#/c/165478/ https://gerrit.wikimedia.org/r/#/c/165479/
[2] https://gerrit.wikimedia.org/r/#/c/16468/
[3] https://github.com/hartman/mwEmbed/compare/jscleanup
[4] https://phabricator.wikimedia.org/P147
Hi Cornelius!
For images that it matches against the catalog, it should give accurate
information. If it doesn't, use the "report" link to let us know!
You're right, though, that for images it doesn't find in its catalog, we
don't provide any information. That's the equivalent of saying "this
picture may or may not be openly licensed, but right now we have no
information to tell either way."
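For background, this kind of matching relies on perceptual hashes: similar-looking images produce hashes that differ in only a few bits. A much-simplified sketch of the idea (an "average hash", not the actual Blockhash algorithm used by the service):

```python
# Illustrative average hash: each pixel of a small grayscale grid
# becomes one bit, set if the pixel is brighter than the grid's mean.
# Similar images yield hashes with a small Hamming distance. This is a
# simplified stand-in, not the real Blockhash algorithm.
def average_hash(grid):
    """grid: list of rows of grayscale values (0-255). Returns an int."""
    pixels = [p for row in grid for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

light = [[200, 200], [200, 10]]   # mostly bright, one dark corner
dark = [[10, 10], [10, 200]]      # inverted image
assert hamming(average_hash(light), average_hash(dark)) == 4
```

A lookup service can then index these hashes and answer "nearest hash within N bits" queries, which is cheap enough to compute client-side in a browser.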
Sincerely,
Jonas
On 11 Dec 2014 15:57, "Cornelius Kibelka" <cornelius.kibelka(a)wikimedia.de>
wrote:
> Wow, what a nice and interesting browser extension. Congrats!
>
> Just a question: as far as I can see, the tool doesn't give the complete
> and correct licensing information, as the source is missing. Or am I
> mistaken?
>
> Best
> Cornelius
>
> 2014-12-10 19:30 GMT+01:00 Jonas Öberg <jonas(a)commonsmachinery.se>:
>
>> Dear all,
>>
>> thanks for all your help with answering questions and giving feedback
>> over the last couple of months. I'm happy to say that we're finally at
>> a stage where we've hashed 22,452,638 images from Wikimedia Commons
>> and launched Elog.io in public beta: http://elog.io/
>>
>> Elog.io is an open API as well as browser plugins, that can query and
>> get information about images using a perceptual hash that's easy and
>> quick to calculate in a browser.
>>
>> What the browser extensions allow you to do is match an image you find
>> "in the wild" against Wikimedia Commons. If it can be matched against
>> an image from Commons, it'll show you the title, author, and license,
>> and give you links back to Wikimedia, the license, and a quick and
>> handy "Copy as HTML" to copy the image and attribution as a HTML
>> snippet for pasting into Word, LibreOffice, Wordpress, etc.
>>
>> Our API provides lookup functions to find information using a URL (the
>> Commons' page name URL) or using the perceptual hash. You get
>> information back as JSON in W3C Media Annotations format. Of course,
>> the information you get back is no better than the one provided by the
>> Commons API, so if you already have a page name URL, you may as well
>> query it directly, and rely on our API only for searching by
>> perceptual hashes.
>>
>> The algorithm we use for calculating perceptual hashes, which you'll
>> need to query our API, is at http://blockhash.io/
>>
>>
>> Sincerely,
>> Jonas
>>
>> _______________________________________________
>> Commons-l mailing list
>> Commons-l(a)lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/commons-l
>>
>
>
>
> --
> Cornelius Kibelka
>
> International Affairs
> Werkstudent | student trainee
>
> Wikimedia Deutschland e.V.
> Tempelhofer Ufer 23-24
> 10963 Berlin
>
> Tel.: +49 30 219158260
> http://wikimedia.de
>
> Imagine a world in which every human being has free access to the
> sum of all human knowledge. Help us achieve this!
> http://spenden.wikimedia.de/
>
> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
> Registered in the register of associations of the Amtsgericht
> Berlin-Charlottenburg under number 23855 B. Recognized as charitable by
> the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.
>
Because we are migrating most of RT to Phabricator, phabricator.wikimedia.org
will be down and NOT AVAILABLE for eight hours starting on Thursday 18
December 00:00 UTC (that is Wed 17 Dec 16:00 PST for people in San
Francisco).
During those eight hours, urgent issues which cannot wait should be brought
up either on IRC or at https://www.mediawiki.org/wiki/Project:Support_desk
Thank you for your understanding and sorry for the inconvenience.
andre
--
Andre Klapper | Wikimedia Bugwrangler
http://blogs.gnome.org/aklapper/
Hi all,
it's been quite a journey since we started working on HHVM, and last
week (November 25th) HHVM was finally enabled for all users who hadn't
opted in to the beta feature.
Starting on Monday, we began reinstalling all 150 remaining
servers that were running Zend's mod_php, upgrading them from Ubuntu
precise to Ubuntu trusty in the process. It seemed like an enormous task
that would take me weeks to complete, even with the improved
automation we have built lately.
Thanks to the incredible work by Yuvi and Alex, who helped me basically
around the clock, today around 16:00 UTC we removed the last of the
mod_php servers from our application server pool: all the non-API
traffic is now being served by HHVM.
This new PHP runtime has already halved our backend latency and page
save times, and it has also significantly reduced the load on our
cluster (as I write this email, the average CPU load on the application
servers is around 16%, while it was easily above 50% in the pre-HHVM era).
The API traffic is still being partially served by mod_php, but that
will not be for long!
Cheers,
Giuseppe
--
Giuseppe Lavagetto
Wikimedia Foundation - TechOps Team