On Fri, May 29, 2015 at 2:07 AM, Greg Grossmeier <greg(a)wikimedia.org> wrote:
> <quote name="John Mark Vandenberg" date="2015-05-29" time="01:39:52 +0700">
> > It was reported by pywikibot devs almost as soon as we detected that
> > the test wikis were failing in our travis-ci tests. It was 12 hours
> > before a MediaWiki API fix was submitted to Gerrit, and it took four
> > additional *days* to get merged. The Phabricator task was marked
> > Unbreak Now! all that time.
> Which shows the tooling works, but not the social aspects. The backport
> process (eg SWAT and related things) will improve soon as well which
> should address much of this.
Your tooling depends on pywikibot developers (all volunteers) merging
a patch within your branch-deploy cycle, which fires off a Travis-CI
build of *pywikibot* unit tests, some of which run against
test.wikipedia.org and test.wikidata.org? And you're proposing to
shorten the window in which all of this can happen and still produce
useful bug reports.
A little crazy, but OK. The biggest problem with that approach is that
Travis-CI is not very reliable - it is often backlogged and tests are
not run for days. So I suggest you arrange to run the pywikibot tests
daily (or more frequently) on WMF test/beta servers, along with the
unit tests of any other client that is a critical part of processes on
the Wikimedia wikis.
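To make the suggestion concrete, a daily run could be as simple as a cron job wrapping a checkout-and-test script. This is only an illustrative sketch: the paths, log location, crontab schedule, and the assumption that `python setup.py test` drives the suite are mine, not an existing WMF setup.

```shell
#!/bin/bash
# Hypothetical wrapper for running the pywikibot test suite on a
# schedule against the Wikimedia test wikis. Paths and commands are
# illustrative assumptions.
#
# Example crontab entry (daily at 03:00 UTC):
#   0 3 * * * /srv/ci/run-pywikibot-tests.sh >> /var/log/pywikibot-ci.log 2>&1

set -eu

WORKDIR=/srv/ci/pywikibot   # assumed checkout location

# Fetch or update the pywikibot source from Gerrit.
if [ -d "$WORKDIR/.git" ]; then
    git -C "$WORKDIR" pull --ff-only
else
    git clone https://gerrit.wikimedia.org/r/pywikibot/core "$WORKDIR"
fi

cd "$WORKDIR"

# Run the unit tests; the suite includes tests that talk to
# test.wikipedia.org and test.wikidata.org, so a failure here can flag
# an API breakage on the test wikis before it reaches production.
python setup.py test
```

Because the job runs on infrastructure the WMF controls, it avoids the Travis-CI backlog problem entirely, and a non-zero exit status can feed whatever alerting the deploy process already uses.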
> Not-a-great-response-but: can you specifically ping me in phabricator
> (I'm @greg) for issues like that above?
That is a process problem. The MediaWiki ops & devs need to detect and
escalate massive API breakages themselves, especially once a fix exists
and is only waiting to be code reviewed.
--
John Vandenberg