Forwarding per request to help reconcile certain discrepancies in
Messages*.php files.
---------- Forwarded message ----------
From: Rob Lanphier <robla(a)wikimedia.org>
Date: Wed, Feb 15, 2012 at 11:58 AM
Subject: Re: examples Re: i18n testing for 1.19 deployment
To: Chris McMahon <cmcmahon(a)wikimedia.org>
Cc: Tim Starling <tstarling(a)wikimedia.org>, "Siebrand Mazeland (WMF)" <smazeland(a)wikimedia.org>, Niklas Laxström <niklas.laxstrom(a)gmail.com>, Sumana Harihareswara <sumanah(a)wikimedia.org>
Hi Chris,
It's probably best to take this discussion out to the wikitech-l
mailing list at this point. I imagine there are people not on this
thread that will be helpful in solving this problem.
Thanks
Rob
On Wed, Feb 15, 2012 at 10:23 AM, Chris McMahon <cmcmahon(a)wikimedia.org> wrote:
> Here is the list of discrepancies between 1.18 and 1.19 for those files
> containing the troublesome arrays:
> number of files with discrepancies in $namespaceNames array is 16
> MessagesEn_ca
> MessagesEn_rtl
> MessagesFrp
> MessagesIg
> MessagesMk
> MessagesMzn
> MessagesNb
> MessagesNds_nl
> MessagesNo
> MessagesOr
> MessagesOs
> MessagesQug
> MessagesSa
> MessagesSr_ec
> MessagesWar
> MessagesYue
> number of files with discrepancies in $namespaceAliases array is 17
> MessagesCs
> MessagesEn_ca
> MessagesEn_rtl
> MessagesFrp
> MessagesIg
> MessagesKsh
> MessagesMk
> MessagesMzn
> MessagesNb
> MessagesNds_nl
> MessagesNo
> MessagesOr
> MessagesOs
> MessagesRu
> MessagesSa
> MessagesSr_ec
> MessagesYue
> number of files with discrepancies in $magicWords array is 24
> MessagesAr
> MessagesBe
> MessagesDe
> MessagesEn
> MessagesEo
> MessagesEs
> MessagesFa
> MessagesFi
> MessagesFrp
> MessagesHe
> MessagesId
> MessagesMk
> MessagesMl
> MessagesMzn
> MessagesNb
> MessagesNds_nl
> MessagesNl
> MessagesPl
> MessagesPt
> MessagesRm
> MessagesSq
> MessagesSr_el
> MessagesTr
> MessagesYi
> number of files with discrepancies in $specialPageAliases array is 9
> MessagesEn_ca
> MessagesEn_rtl
> MessagesFrp
> MessagesIs
> MessagesMg
> MessagesNb
> MessagesNds_nl
> MessagesNo
> MessagesOr
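The comparison script behind the counts above isn't shown in the thread. Purely for illustration, a check of this kind can be sketched in Python; the regex parsing is an assumption and only approximates reading real PHP localisation files:

```python
import re

def extract_array(php_source, name):
    """Crudely pull KEY => 'value' pairs out of a PHP array assignment
    such as $namespaceNames = array( ... );  (regex sketch only; it
    does not handle nested arrays or escaped quotes)."""
    m = re.search(r'\$' + name + r'\s*=\s*array\s*\((.*?)\);',
                  php_source, re.DOTALL)
    if not m:
        return {}
    pairs = re.findall(r"(\w+)\s*=>\s*'([^']*)'", m.group(1))
    return dict(pairs)

def diff_arrays(old_src, new_src, name):
    """Keys whose values differ between the 1.18 and 1.19 copies of a
    file, including keys present on only one side."""
    old, new = extract_array(old_src, name), extract_array(new_src, name)
    return {k for k in old.keys() | new.keys() if old.get(k) != new.get(k)}
```

Running a diff like this over each pair of Messages*.php files, once per array ($namespaceNames, $namespaceAliases, $magicWords, $specialPageAliases), would yield per-array file counts like the ones listed above.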
-------- Original Message --------
Subject: Revision tagging: use cases needed
Date: Tue, 14 Feb 2012 14:18:49 -0800
From: Dario Taraborelli <dtaraborelli(a)wikimedia.org>
We're getting to a point where we need to be able to flag specific
revisions as generated by specific tools. For example, if we generate
edits via AFT call-to-actions, we want to measure:
• their volume (compared to regular edits)
• their survival/revert rate
The same request is now emerging from the Article Creation Workflow
team, and having talked to many of you, it sounds like the community,
mobile, and other engineering teams would benefit from the ability to
say: "revision N was created with tool X [version Y]"
I started capturing some use cases on this etherpad:
http://etherpad.wikimedia.org/RevisionTags
I'd like your input to start building requirements and evaluating
possible solutions. Let me know off list if you have any questions or
concerns.
Dario
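To make the request concrete, a minimal model of tagging and the revert-rate metric might look like the sketch below. All names here are hypothetical and do not reflect any existing MediaWiki schema:

```python
from collections import defaultdict

# Hypothetical in-memory model: each revision ID maps to the set of
# tool tags attached to it ("revision N was created with tool X [version Y]").
revision_tags = defaultdict(set)

def tag_revision(rev_id, tool, version=None):
    """Attach a 'tool [version]' tag to a revision."""
    revision_tags[rev_id].add(f"{tool} {version}" if version else tool)

def revert_rate(tagged_revs, reverted_revs):
    """Fraction of tagged revisions that were later reverted
    (the survival rate is 1 minus this)."""
    tagged = set(tagged_revs)
    if not tagged:
        return 0.0
    return len(tagged & set(reverted_revs)) / len(tagged)
```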
_______________________________________
Hi everyone,
The git migration is proceeding well, but I have a quick question for anyone
who works on a branch. Right now, I'm not planning on migrating most of
the defunct feature branches over to git. It's too much work (we've got ~140
of them) for very little gain in most cases. The REL1_n branches *are*
being migrated since they're our release branches.
If you work on a feature branch for MediaWiki core and you'd like it moved
to git as well, please reply to me offlist so I can add it to the conversion
rules.
-Chad
Hi,
Since we have more and more users with access to Wikimedia Labs, I think it
would be great to hold an online conference on IRC, especially with new
users and people interested in getting access, to discuss how Labs works
and what needs to be done. I would be interested in feedback on what needs
to be set up right now. There is a bot project which is slowly moving
forward, so I would appreciate some feedback from people who run bots, so
that we know what needs to be implemented before opening "production" for
this part of Labs.
It would also be great to discuss the things we are about to set up, write
some new proposals, and comment on what we have done so far, including:
* Shared SQL server
* Monitoring for services - ganglia and nagios
* Bots cluster
* Deployment cluster
So, if there are people interested in Labs, please let me know and we can
hold a conference in #wikimedia-labs on freenode. I propose that it happen
this week, on Thursday (Friday is a day when most people are leaving the
office or heading somewhere far from their computers :)) or on Saturday
(though I don't know whether Ryan and the other key people are available
on Saturday). Preferably at 18:00 GMT or 22:00 GMT (I guess 22:00 is the
best time for people who live in either the US or Europe).
Let me know what you think, or if you would prefer a different date.
Another new committer: David Schoonover (dsc), new analytics staffer at
the Wikimedia Foundation.
Welcome to Subversion, David! (But don't get too used to it...
https://www.mediawiki.org/wiki/Git/Conversion#March_2012 )
--
Sumana Harihareswara
Volunteer Development Coordinator
Wikimedia Foundation
Hi everyone
Short version: Some people have reported issues with corrupted
thumbnail images in which they appear truncated. We’ve identified the
cause of the problem and believe we have a fix in place. It may take
a few days to fully propagate, during which you may continue to see
corrupted thumbnails. Manually purging the image page should repair
any broken thumbnail.
Longer version: Last week, we enabled Swift as a replacement for NFS
for storing images (see blog post [1]). Our goal with this project is
to replace a single point of failure, and increase fault tolerance and
capacity.
Recently, we discovered that images were sometimes getting corrupted.
After further investigation, Tim Starling and Ralf Schmitt both
independently figured out that whenever a client disconnected early
while fetching a thumbnail, the server would write out a partial
thumbnail to our Swift cluster and to the cache. Ben ran the numbers,
and estimated that roughly 1.6% of the thumbnails were corrupted, and
that roughly 4.5% of images had at least one corrupt thumbnail.
We've disabled Swift for the time being, going back to our old way of
serving thumbnails. Unfortunately, even though thumbnails are no
longer coming from our Swift cluster, there will still be images in
our Squid cache which we can't easily purge (without creating a large
performance problem), so that step only stops the problem from getting
worse rather than fixing it. Thankfully, we're reasonably confident
there won't be any new broken images.
Aaron Schulz came up with a pretty simple fix. We were writing
thumbnails to Swift while streaming them to the client, so when the
client disconnected, so did the process writing the file into Swift.
We have added an MD5 checksum of the generated image to the ETag
header when pushing it to Swift. Swift accepts the file for writing,
and if the MD5 checksum doesn’t match after the connection to Swift
closes, the partial thumbnail is deleted.
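The fix itself is in PHP, but the checksum idea can be modelled in a few lines of Python. The helper names below are invented for illustration; the underlying behaviour is Swift's: on an object PUT, a client-supplied ETag header is compared against the MD5 Swift computes over the body it actually received, and a mismatch causes the object to be rejected rather than stored.

```python
import hashlib

def checksum_headers(body):
    """Headers for a Swift object PUT: the MD5 of the full thumbnail
    goes in the ETag header before streaming begins."""
    return {"ETag": hashlib.md5(body).hexdigest()}

def swift_would_accept(received_body, claimed_etag):
    """Stand-in for Swift's server-side check: the object is stored
    only when the recomputed MD5 matches the claimed ETag."""
    return hashlib.md5(received_body).hexdigest() == claimed_etag
```

If the client disconnects mid-stream, the body Swift receives is shorter than the one the checksum was computed over, so the PUT fails and the partial thumbnail never lands in storage.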
The change consists of two small fixes: one in the thumbnail generation:
http://www.mediawiki.org/wiki/Special:Code/MediaWiki/111517
...and one in the process that writes images to Swift:
https://gerrit.wikimedia.org/r/#change,2598
We aren’t planning to deploy this right away. What we want to do
instead is repair the damage first. Ben will write a script that
crawls through Swift, searches for corrupt images, nukes them from
Swift and purges them from our Squid cache via HTCP. After that’s
complete, we’ll then re-enable the new improved thumbnail pipeline
with Swift, which *should* no longer keep partial images.
Sorry for any problems this might have caused.
Rob
[1] Ben’s announcement of our Swift deployment
http://blog.wikimedia.org/2012/02/09/scaling-media-storage-at-wikimedia-wit…
Hi everyone,
Stage 0 of our deployment of MediaWiki 1.19 is complete: the
deployment to "test2". This wiki is actually on the live cluster like
any other wiki, which is about as close as it gets to what things will
be like when we deploy 1.19 to the production wikis.
Here's the URL: http://test2.wikipedia.org/wiki/Main_Page
In fact, all 800 or so wikis run on the same set of Apache servers,
whether they are 1.18 or 1.19, which means this isn't a truly isolated
test. There's a small chance that we could accidentally break 1.18
wikis in the process of doing this testing. In particular, if you see
something to the effect of "Internal 500 error" on any wiki (1.18 or
1.19), please let us know using the instructions here:
https://meta.wikimedia.org/wiki/Wikimedia_maintenance_notice
(basically #wikimedia-tech or the talk page for the notice above)
If all goes well, we should be making our first wave of deployments to
1.19 wikis in about 27 hours. Go forth and test!
Thanks!
Rob