On Tue, Mar 1, 2016 at 3:36 PM, David Strine <dstrine(a)wikimedia.org> wrote:
> We will be holding this brownbag in 25 minutes. The Bluejeans link has
I'm not familiar with Bluejeans and may have missed a transition
because I wasn't paying enough attention. Is this some kind of
experiment? Have all meetings transitioned to this service?
Anyway, my immediate question is: how do you join without
sharing your microphone and camera?
Am I correct in thinking that this is an entirely proprietary stack
that's neither gratis nor libre and has no on-premise (non-cloud)
hosting option? Are we paying for this?
Can someone update the list at https://phabricator.wikimedia.org/P10500,
which contains repositories that don't have mediawiki/mediawiki-codesniffer?
I found that many repositories in the list are empty, and some
aren't available on Gerrit.
So, could someone please update this list of repositories (in
mediawiki/extensions) that don't have mediawiki/mediawiki-codesniffer but
contain at least one PHP file? Or provide me with a command I can run to
update the list myself, so I don't need to request it every time.
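For the second option, here is a rough sketch of how such a list could be
regenerated. It assumes the extension repositories are already cloned under a
local directory (the "extensions" path and all names below are hypothetical,
not an existing tool), and it checks each repo's composer.json for the
codesniffer dependency:

```python
#!/usr/bin/env python3
"""Sketch: list repos that contain at least one PHP file but do not
require mediawiki/mediawiki-codesniffer in composer.json."""
import json
from pathlib import Path

# Hypothetical root where mediawiki/extensions repos are cloned.
EXTENSIONS_DIR = Path("extensions")

def has_codesniffer(repo: Path) -> bool:
    """True if the repo's composer.json requires the codesniffer."""
    composer = repo / "composer.json"
    if not composer.is_file():
        return False
    try:
        data = json.loads(composer.read_text())
    except json.JSONDecodeError:
        return False
    deps = {**data.get("require", {}), **data.get("require-dev", {})}
    return "mediawiki/mediawiki-codesniffer" in deps

def needs_codesniffer(repo: Path) -> bool:
    """True if the repo has PHP code but lacks the codesniffer dep."""
    has_php = any(repo.rglob("*.php"))
    return has_php and not has_codesniffer(repo)

if __name__ == "__main__":
    if EXTENSIONS_DIR.is_dir():
        for repo in sorted(EXTENSIONS_DIR.iterdir()):
            if repo.is_dir() and needs_codesniffer(repo):
                print(repo.name)
```

This skips empty repositories automatically (no PHP files means no output),
which would also address the empty-repo noise in the current paste.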
P. S.: Happy weekend! :)
// sorry for cross-posting
Here is another change from WMDE's Technical Wishes team concerning syntax
highlighting. Soon, line numbers will be shown in wikitext editors when you
have the syntax highlighting feature (CodeMirror)
enabled. The change will make it easier to detect line breaks and to
refer to a particular line in discussions. More information can be found on
this project page.
We plan to deploy this with this week's MediaWiki train, so it should be on
wikis from April 13-15. As a first step, it will be available in the
template namespace only. Deployment on other namespaces is planned for the future.
If you have any feedback, please let us know on the project’s talk page.
 We hope line numbering will be useful to you!
for the Technical Wishes team
Project Manager, Community Communication, Technical Wishlist
Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Tel. (030) 219 158 26-0
Our vision is a world in which all people can share in the knowledge of
humanity, use it, and add to it. Help us achieve this!
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 B. Recognized as a non-profit by the
Finanzamt für Körperschaften I Berlin, tax number 27/029/42207.
A few questions to provoke discussion/share knowledge better:
* Why does the train run Tue, Wed, Thu rather than Mon, Tue, Wed?
* Why do we only have two group 1 Wikipedias (Catalan and Hebrew)?
* Should there be a backport window Friday mornings for certain changes?
A few weeks ago a change I made led to a small but noticeable UI
regression. The site was perfectly usable, but looked noticeably off. It
was in a more obscure part of the UI so we missed it during QA/code review.
Late Wednesday a ticket was reported against Wikimedia Commons, but I only
became aware of it late Thursday when the regression rolled out to English
Wikipedia. A village pump discussion was started and several duplicate
tickets were created. While the site could still be used it didn't look
great and upset the experience of many editors.
Once we became aware of the problem, the issue was easy to fix, and a patch was written.
I understand Friday backports are possible, but my team tends to use them as
a last resort for fear of creating more work for fellow maintainers over the
weekend. As a result, given the site was still usable, the fix
wasn't backported until the first available backport window on Monday. This
is unfortunately a regular pattern, particularly for small UI regressions.
We addressed the issue on Monday, but I got feedback from several users
that this particular issue took too long to get backported. I mentioned the
no-Friday-deploys policy. One user asked me why we don't run the train
Monday-Wednesday and, to be honest, I wasn't sure. I couldn't find anything written down about it.
My team tries to avoid big changes on Mondays, as patches merged on Monday
are more likely to have issues: our dedicated QA engineer doesn't always
have time to put them through QA during the week.
So... Why don't we run the train Monday-Wednesday? Having a Thursday buffer
during which we can more comfortably backport fixes for issues not caught in
testing, particularly UI bugs, would be extremely helpful to my team, and I
don't think we'd lose much by giving up Monday as a day to rush in last-minute changes.
Assuming there are good reasons for the Tuesday-Thursday train, I think there
is another problem with our deploy process: the size of group 1.
Given the complexity of our interfaces (several skins, gadgets, multiple
special pages, user preferences, multiple extensions, and
different user rights), many obscure UI bugs get missed in QA by
people who don't use the software every day and so lack a clear mental model
of how it looks and behaves. My team mostly works on visible user interface
changes, and we rely heavily on users of Catalan and Hebrew Wikipedia - our
only group 1 Wikipedias - to notice UI errors before they go out to a wider
audience. Given the size of those audiences, that often doesn't work, and
it's often group 2 wikis that make us aware of issues. If we are going to
keep the existing Tue-Thu train, I think it's essential we have at
least one larger Wikipedia in our group 1 deploy to give us better
protection against UI regressions living over the weekend. My understanding
is that, for some reason, this is not a decision Release Engineering can
make, but one that requires an on-wiki RFC by the editors themselves. Is that
correct? While I can understand the reluctance of editors to experience
bugs, I'd argue that it's better to have a bug for a day than to have it
for an entire weekend, and this is definitely something we need to think more about.
I have been thinking of a way to organise data in Wiktionary that would allow
for words to automatically show translations to other languages with much less
work than is currently required.
Currently, translations to other languages have to be added manually, meaning
they are not automatically propagated across language pairs. What I mean by
this is showcased in the following example:
1. I create a page for word X in language A.
2. I create a page for word Y in language B.
3. I add a translation to the page for word X, and state that it translates to
word Y in language B.
4. If I want the page for word Y to show that it translates to word X in
language A, I have to do this manually.
Automating this seems a bit tricky. I think the key is acknowledging that
meanings can be separated from language and used as the links of translation.
In this view, words and their definitions are language-specific, but meanings
are shared across languages.
Because I may have done a bad job at explaining this context, I have created a
short example in the form of an sqlite3 SQL script that creates a small
dictionary database with two meanings for the word "desert"; one of the
meanings has been linked to the corresponding words in Spanish and in German.
The script mainly showcases how words can be linked across languages via
shared meanings.
You can find the script attached. To experiment with this, simply run it
within an interactive sqlite3 session. (There may be other ways of doing it,
but this is how I tested it.)
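Since the attachment may not reach everyone, here is a minimal sketch of the
same idea using Python's built-in sqlite3 module. The table and column names
are illustrative only, not necessarily those of the attached script:

```python
# Meaning-centric schema: words are language-specific, meanings are
# language-independent, and linking a word to a meaning automatically
# makes every other word linked to that meaning a translation of it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE meanings (
    id INTEGER PRIMARY KEY,
    gloss TEXT                -- language-neutral description of the sense
);
CREATE TABLE words (
    id INTEGER PRIMARY KEY,
    lang TEXT NOT NULL,       -- e.g. 'en', 'es', 'de'
    spelling TEXT NOT NULL
);
CREATE TABLE word_meanings (  -- links words to shared meanings
    word_id INTEGER REFERENCES words(id),
    meaning_id INTEGER REFERENCES meanings(id),
    PRIMARY KEY (word_id, meaning_id)
);
""")

# One sense of "desert" (arid region), linked across three languages.
conn.execute("INSERT INTO meanings VALUES (1, 'arid, barren region')")
conn.executemany("INSERT INTO words VALUES (?, ?, ?)", [
    (1, 'en', 'desert'), (2, 'es', 'desierto'), (3, 'de', 'Wüste'),
])
conn.executemany("INSERT INTO word_meanings VALUES (?, ?)", [
    (1, 1), (2, 1), (3, 1),
])

def translations(spelling, src_lang, dst_lang):
    """All dst_lang words sharing a meaning with the given src_lang word."""
    rows = conn.execute("""
        SELECT DISTINCT w2.spelling
        FROM words w1
        JOIN word_meanings wm1 ON wm1.word_id = w1.id
        JOIN word_meanings wm2 ON wm2.meaning_id = wm1.meaning_id
        JOIN words w2 ON w2.id = wm2.word_id
        WHERE w1.spelling = ? AND w1.lang = ? AND w2.lang = ?
    """, (spelling, src_lang, dst_lang))
    return [r[0] for r in rows]

print(translations('desert', 'en', 'es'))    # ['desierto']
print(translations('desierto', 'es', 'de'))  # ['Wüste'], no manual back-link
```

Note that the Spanish-to-German lookup works even though no es-de pair was
ever entered: the shared meaning row propagates translations in every
direction automatically, which is exactly the manual step it would remove.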
I believe this system can also be used to automate other word relations such as
hyponyms and hypernyms, meronyms and holonyms, and others. It can also allow
looking up words in other languages and getting definitions in the language of
choice. In short, it would allow Wiktionary to more effortlessly function as
a universal dictionary.
Has something like this been suggested before? I would be pleased to receive
feedback on this idea.
With kind regards,
wikibugs needs to be autovoiced in all the IRC channels it speaks in to
avoid being killed by antispam bots.
If wikibugs speaks in your channel and isn't voiced, please have a
channel founder autovoice it:
/msg chanserv flags <#channel> wikibugs +Vv
<https://phabricator.wikimedia.org/T283983> has a list of channels and
the people with appropriate permissions should've been pinged on it.
If your IRC channel isn't in use anymore please file a bug (or submit a
patch!) to have wikibugs removed from it.
Today we switched over most services and traffic caches from the eqiad
(Virginia) datacenter to codfw (Texas) as part of improving our
reliability. The goal is to have this procedure working and regularly
tested in case of an emergency when we actually need it.
We're only aware of one user-facing impact: for a short time, WDQS lag
detection was broken, affecting Wikidata bots that check it. This is
tracked as <https://phabricator.wikimedia.org/T285710>.
Users will experience a bit of a latency increase for now as most user
traffic will need to talk to both eqiad and codfw datacenters. This will
go away tomorrow once MediaWiki is switched over (keep reading).
Also, we were a bit delayed in starting today because of an issue
causing appservers to get stuck.
== Services ==
Started at 14:29 UTC, officially finished at 15:09.
The main issues we ran into were:
* the helm-charts service is unique and doesn't have a service IP,
causing the automatic switchover verification to break. This required us
to manually check the other services that come after it in the list, and
then re-run the cookbook while excluding it. Tracked as
* the restbase-async service has some special handling, which we debated
whether to follow or not; we opted not to special-case it. Figuring
out what to do long-term is <https://phabricator.wikimedia.org/T285711>.
* the WDQS issue mentioned earlier.
== Traffic ==
Started at 15:43, finished at 15:45.
It took until ~16:25 for eqiad to mostly depool. There's not much else
to report, it went very smoothly.
== Tomorrow's MediaWiki switchover ==
Scheduled for 14:00 UTC <https://zonestamp.toolforge.org/1624888854>.
It is our goal to minimize the read-only time and make this a non-event
from a user perspective.
All of the coordination will take place in the #wikimedia-operations IRC
channel on Libera Chat. You're more than welcome to follow along, but if
you have questions, please ask them in #wikimedia-tech so the operations
channel doesn't get disrupted. The procedure that we'll be following is documented at
I'm planning to do one more "live test" later today, and will announce it
on IRC when it gets started.