> Message: 5
> Date: Wed, 8 Feb 2012 23:31:37 +0400
> From: Max Semenik <maxsem.wiki@gmail.com>
> To: MediaWiki API announcements & discussion
> <mediawiki-api@lists.wikimedia.org>, Wikimedia developers
> <wikitech-l@lists.wikimedia.org>
> Subject: [Wikitech-l] Proposed removal of some API output formats
> Message-ID: <568811305.20120208233137@gmail.com>
> Content-Type: text/plain; charset=utf-8
>
> Hi, this idea has floated around for quite some time, but now that
> bug 34257[1] was added to the long list of problems, I would like to
> step up and make some progress. We[2] propose to remove the following
> formats[3]:
>
> * WDDX - doesn't seem to be used by anyone. Doesn't look sane either.
> * YAML - we don't serve real YAML anyway; currently it's just a subset
> of JSON.
> * rawfm - was created for debugging the JSON formatter aeons ago, not
> useful for anything now.
> * txt, dbg, dump - the only reason they were added is that it was
> possible to add them; they don't serve the purpose of
> machine-to-machine communication.
>
> So, only 3 formats would remain:
> * JSON - *the* recommended API format
> * XML - evil and clumsy but sadly used too widely to be removed in the
> foreseeable future
> * php - this one is used by several extensions and probably by some
> third-party reusers, so we won't remove it this time. However,
> any new uses of it should be discouraged.
>
> We plan to remove the aforementioned formats as soon as MediaWiki 1.19
> is branched, so that these changes will take effect in 1.20, but we
> would like to hear from you first if there are good reasons why we
> shouldn't do this or should postpone it. Please have your say.
>
> ------
> [1] https://bugzilla.wikimedia.org/show_bug.cgi?id=34257
> [2] Me and Roan Kattouw, one of API's primary developers
> [3] https://www.mediawiki.org/wiki/API:Data_formats
>
> --
> Best regards,
> Max Semenik ([[User:MaxSem]])
>
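For scripts being migrated off the doomed formats, here is a minimal
sketch of requesting the recommended JSON format; the wiki URL is
illustrative, and the query just fetches general site info:

    import json
    from urllib.request import urlopen
    from urllib.parse import urlencode

    # Explicitly request JSON rather than relying on the default format.
    params = urlencode({
        "action": "query",
        "meta": "siteinfo",  # siprop defaults to "general"
        "format": "json",
    })
    url = "https://www.mediawiki.org/w/api.php?" + params
    data = json.load(urlopen(url))
    print(data["query"]["general"]["sitename"])
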
I have scripts that use the txt format. (Things that were originally
super hacky shell scripts, and that became more permanently run scripts
[but still hacky beyond belief].) txt is mostly easy to process with
grep (so was the older version of the YAML formatting). However,
hopefully not too many people were stupid enough to try to parse the
API output using grep. (In case anyone is wondering whether using shell
scripts to access the API is really as bad as it sounds, I assure you
it's even worse than you think.)
If JSON becomes the new recommended format, does that mean that not
passing a format parameter would result in jsonfm instead of xmlfm?
(Probably not, since the jsonfm pretty-printer isn't all that pretty at
the moment.)
> For a stable API, that's far too fast of a deprecation path. We don't even
> remove core functions that fast (or shouldn't). I'd suggest throwing some
> kinds of warnings in the output for at least one release (1.20) and then
> target them for removal in 1.21.
>
> -Chad
+1.
--
-bawolff
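If deprecation warnings along the lines Chad suggests do land in the
output, clients can already watch for them: JSON results carry any API
warnings in a top-level "warnings" object, keyed by module, with the
text under "*". A minimal client-side check (illustrative endpoint):

    import json
    from urllib.request import urlopen

    url = ("https://www.mediawiki.org/w/api.php"
           "?action=query&meta=siteinfo&format=json")
    result = json.load(urlopen(url))

    # Surface any warnings the API attached to the response.
    for module, warning in result.get("warnings", {}).items():
        print("API warning [%s]: %s" % (module, warning.get("*", warning)))
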
In December I wrote a script on the German toolserver to collect
statistics on external links. It works fine, but to be useful it must
collect data over time, so I set it up as a cron job to run each Monday
morning. While my attention was elsewhere, believing that this was
running, it turned out that the 256 MByte quota (!) had truncated all
my files to 0 bytes for all of January. I have now requested and been
granted an increased quota, but six weeks of data have been lost. And I
must devote time to checking my quota every week or two.
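A cheap guard against this failure mode is to have the cron job verify
its own output and fail noisily; a minimal sketch, with a hypothetical
output path:

    import os
    import sys

    # Hypothetical file written by the weekly statistics job.
    out_path = "/home/lars/linkstats/latest.tsv"

    # A missing or zero-byte result usually means the write failed
    # (e.g. disk quota exceeded), so complain instead of staying
    # silent; cron mails any output to the job's owner.
    if not os.path.exists(out_path) or os.path.getsize(out_path) == 0:
        sys.stderr.write("linkstats: %s missing or empty - quota "
                         "exceeded?\n" % out_path)
        sys.exit(1)
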
The /home disk is 600 GB, of which 88 GB is free. That's not per user,
but for all users together. It would probably come as a surprise to
most people who donate money to the Wikimedia Foundation that all of
its volunteer developers have to share a disk the size of what is found
in any laptop. According to an IRC discussion, some new disks that were
planned to arrive in mid-January have not yet been delivered. I have no
idea what amount of disk has been ordered, or whether the quota system
will be kept. I get the impression that this doesn't really matter to
anybody.
This is the development system for the world's 6th most visited
website in 2012. It doesn't quite live up to my expectations.
It feels more like some hobby project from 2002. I'm a great fan
of hobby projects, but with the current budgets of WMDE and WMF,
I thought we would have reached a higher ambition level by now.
--
Lars Aronsson (lars@aronsson.se)
Aronsson Datateknik - http://aronsson.se
Hi everyone,
We're hitting the home stretch. Check this out:
http://www.mediawiki.org/wiki/MediaWiki_1.19/Revision_report
Summary: 27 unreviewed ("new") revisions, 14 fixmes
It looks like we're getting close enough to the bottom of the review
queue that we could conceivably even hit zero unreviewed early next
week. Should we target Tuesday for the branch point?
The biggest risk for a February 13 deployment is the number of fixmes
left. 14 is quite a lot to get through in the short period of time we
have left. Please do not be bashful about fixing other people's old
fixmes at this point, or even the newer ones provided you coordinate
with the author.
Thanks
Rob
Two new committers with core access:
Christian Aistleitner (qchris) is a new Wikimedia Foundation contractor
working on the XML dump infrastructure, database access and testing
thereof, and Labs.
Elizabeth Smith (emsmith) works for OmniTI as a developer and is joining
the AFTv5 project. (She doesn't yet have a mediawiki.org account - I've
asked for one and will link when she does.)
Welcome, Elizabeth and Christian!
--
Sumana Harihareswara
Volunteer Development Coordinator
Wikimedia Foundation
Hi everyone,
Chris McMahon (cmcmahon) is our latest addition to the committer list.
Rob
On Tue, Jan 31, 2012 at 10:02 AM, Rob Lanphier <robla@wikimedia.org> wrote:
> Hi everyone,
>
> I’d like to welcome Chris McMahon to the Platform Engineering team as
> our new QA Lead. Chris has a long history working in software
> testing, coming to us most recently from Sentry Data Systems where he
> was responsible for test automation. One particularly relevant bit of
> experience from Chris’s past was his work at Socialtext on their wiki
> product, expanding the Selenium-based automated test suite from 400
> individual assertions to 10,000 over the span of two years.
>
> Chris is also active in the outside community. He leads the Writing
> About Testing group and annual conference, which he founded in 2009.
> He also helped design and build the SeleNesse testing framework, which
> is a wiki-based tool for building acceptance tests that get executed
> by Selenium.
>
> In his role as QA Lead here, Chris will be responsible for figuring
> out what sorts of testing process we can bring to MediaWiki
> development. His first task will be to join in on the tail end of the
> 1.19 deployment process, helping us with whatever last-minute testing
> makes sense at this stage, but then he'll have the much larger task of
> looking at our release and deployment process generally and figuring
> out which parts would most benefit from the injection of testing
> rigor. He'll also be responsible for establishing a more coordinated
> volunteer effort around testing.
>
> Chris will be working remotely from his home in Durango, Colorado.
>
> Welcome, Chris!
What: 1.19 Deployment blocker triage
When: Wednesday, February 8, 13:00 UTC
Time zone conversion: http://hexm.de/ej
Where: #wikimedia-dev on freenode
Use http://webchat.freenode.net/?channels=wikimedia-dev
if you don't have an IRC client
Notes: http://etherpad.wikimedia.org/BugTriage-2012-02
With the 1.19 deployment fast approaching, I'm scheduling one last bug
triage for 1.19 blocking issues.
There are currently only 4 blocking bugs, but there are some issues on
http://labs.wikimedia.beta.wmflabs.org/wiki/Problem_reports that may yet
make their way into blocking bugs.
If you haven't yet taken the time to try out the beta cluster, now is a
good time to give it a whirl, try out 1.19, and let us know of any bugs
you find.
Thanks,
Mark.
Hi MediaWiki users,
I have experienced a repeatable problem in which, after importing a
MediaWiki database dump, I need to export the language templates from
the live site and import them into my local instance. I use MWDumper
to build an SQL file which is read into my local MySQL instance. The
problem is that the language template pages are not included in the
dump.
I am referring to all template pages that have the following form:
Template:EN
where EN could be any language code.
This is not limited to any particular MediaWiki dump; they all seem to
have this problem. That being said, it is not much work to simply
import the missing templates manually; I was wondering if anyone has
experienced this problem or has a quicker solution than the manual
import/export.
Best Regards,
Nathan Day
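One way to semi-automate the workaround is to pull the missing
templates through Special:Export and feed the result to the
importDump.php maintenance script; a rough sketch, with hypothetical
language codes and an illustrative source wiki:

    from urllib.request import urlopen
    from urllib.parse import urlencode

    # Hypothetical language-code templates missing from the dump.
    titles = "\n".join("Template:" + c for c in ["EN", "DE", "FR", "SV"])

    # Special:Export accepts newline-separated titles and returns one
    # XML dump; curonly=1 limits it to the current revisions.
    params = urlencode({"pages": titles, "curonly": 1})
    url = "https://en.wikipedia.org/wiki/Special:Export?" + params
    with open("missing-templates.xml", "wb") as f:
        f.write(urlopen(url).read())

    # Then, on the local wiki:
    #   php maintenance/importDump.php missing-templates.xml
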
Magnus Manske just sent this message to another list and I thought you'd
want to know. Baglama is a tool to "View counts for pages using Commons
images in certain category trees." It's especially useful for GLAM
institutions: https://outreach.wikimedia.org/wiki/GLAM
--
Sumana Harihareswara
Volunteer Development Coordinator
Wikimedia Foundation
-- original message --
Due to popular demand, my "baglama" tool now has charts for all
projects on its main page, and a project chart on the project page,
with the selected month highlighted. The bars are scaled relative to the
maximum monthly view count in the project. Enjoy.
http://toolserver.org/~magnus/baglama.php
Magnus
Hi everyone,
This mail is primarily directed toward Wikimedia Foundation employees,
but isn't private, so I'm sending it here.
As many of you know, we have a policy for Wikimedia Foundation
engineers that states that they need to work 20% of their time on
maintenance and communication tasks which immediately serve the
Wikimedia developer and user community, beyond working on assigned
features or longer-term work.[1] Managing how that manifests itself
is the responsibility of Platform Engineering, which is why I'm the
one frequently prattling on about it. :-)
Up until last week or so, this wasn't too tough an activity to
coordinate. Code review was in pretty rough shape, so my frequent
advice was "grab a bucket and bail" (as in, start reviewing whatever
is in the queue).
As the 1.19 deployment starts up next week, coordination is both more
important and harder. We're shifting our focus away from code review,
and onto fixing blockers and fixmes. Even before recent times, there
have been some folks at WMF who have felt a little adrift during their
20% time.
So, starting this week, we're instituting explicit 15-minute meeting
times for the beginning of everyone's review 20% period, on the
#wikimedia-dev Freenode channel. The schedule for these is listed
below:
https://www.mediawiki.org/wiki/20_percent#IRC_checkins
Those of you who noticed me drop this on your calendar last week, and
wondered what this was...well, now you know.
I've tried to schedule these such that they happen as close to the
beginning of everyone's shifts as possible, without having them at
times that are crazy for anyone. That's going to let us coordinate on
the particulars of what everyone will be working on reasonably close
to the time that they'll be doing the work.
Right now, the focus of these is going to be on the 1.19 deployment,
mainly around ensuring we get all of the fixmes and blockers figured
out before the rollout begins. However, even after the dust settles
on 1.19, I think we'll still need this level of coordination to start
managing the flow of Git pull requests, making real progress on our
backlog of patches in Bugzilla, and resolving some of the 7000 or so
open bugs. We have a lot of work to get done,
and this 20% is going to be a key part of making it happen.
Rob
[1] https://www.mediawiki.org/wiki/20_percent