FYI
---------- Forwarded message ----------
From: Yuvi Panda <yuvipanda(a)gmail.com>
Date: Fri, Jun 19, 2015 at 11:33 PM
Subject: Labs (almost fully!) back up
To: labs-announce(a)lists.wikimedia.org
Hello everyone!
All projects (except maps and mwoffliner) should now be fully back up
(including tools), restored from a backup taken on June 9. Some have
had NFS disabled - but those mostly had no significant NFS usage, or
members of the project confirmed NFS is unused. This increases their
reliability significantly. If your project has something missing,
please file a bug or respond on list.
We have a fsck in progress on the old corrupted file system, and will
update if / when we can recover specific files.
https://wikitech.wikimedia.org/wiki/Incident_documentation/20150617-LabsNFS…
has updates coming as well.
Thank you.
--
Yuvi Panda T
http://yuvi.in/blog
This list seems more appropriate for this type of discussion.
--
Revi
https://www.revi.pe.kr
-- Sent from Android --
---------- Forwarded message ----------
From: "Yuri" <yuri(a)rawbw.com>
Date: Jun 20, 2015, 2:52 PM
Subject: [Wikimedia-l] What is the wikipedia http API address now?
To: <wikimedia-l(a)lists.wikimedia.org>
Cc:
Now all previously http URLs redirect to https.
https://www.mediawiki.org/wiki/API:Main_page also still mentions the old
http address that now redirects.
What is the new purely http API address?
I need to measure the performance hit of https on various processes.
Yuri
_______________________________________________
Wikimedia-l mailing list, guidelines at:
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l(a)lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
<mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
Hello!
MediaWiki-Codesniffer 0.3.0 is now available for use in your MediaWiki
extensions and other projects. Here are the notable changes since the
last release (0.2.0):
* Don't require "wf" prefix on functions that are namespaced (Kunal Mehta)
* Simplify PHPUnit bootstrap, require usage of composer for running tests
(Kunal Mehta)
* SpaceyParenthesis: Check for space before opening parenthesis (Vivek
Ghaisas)
* SpaceyParenthesesSniff: Search for extra/unnecessary space (Vivek Ghaisas)
* CharacterBeforePHPOpeningTagSniff: Support T_HASHBANG for HHVM
>=3.5,<3.7 (Kunal Mehta)
I have submitted patches which bump the dependencies for extensions
<https://gerrit.wikimedia.org/r/#/q/status:open+topic:bump-dev-deps,n,z>, however
some are failing due to the new sniffs. Please amend the patches to make
them pass and I can review them :)
-- Legoktm
Thought some people would find this stream of tweets from Charlie Kindel [0]
<http://ceklog.kindel.com/2015/06/18/what-it-means-to-be-great-product-manag…>
interesting. I'd also recommend perusing his other posts about leadership
& engineering culture. Here's my take on a few snippets, curious to hear
your thoughts as well:
*"the only work that truly matters is that of the engineers"*
While engineers might be responsible for "actually building things,"
Charlie himself admits that the quality (and relevance) of our work is
highly dependent on multiple factors leading up to the first engineer's
keystroke.
*"left to their own devices, engineers will do two things: 1) the most
complicated thing, 2) the thing they think is fun"*
Guilty as charged, but I think engineers who are "sold" on the team's
mission are capable of making good decisions about what to work on. Our
current situation in the Readership vertical is a live experiment on this
subject.
Finally, I wholeheartedly agree that I do my best work when it's crystal
clear *"who the customer is, where the customer is, why the customer cares,
why it’s important for the business, and when it’s relevant."*
Happy reading!
Brian
0:
http://ceklog.kindel.com/2015/06/18/what-it-means-to-be-great-product-manag…
--
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
One of the reasons we've always worried about using the open Ogg and WebM
formats on iPhones and iPads is that we don't get to make use of the
hardware MP4/H.264 codec... using the main CPU cores is presumed to drain
the battery faster.
I've done a first-pass test measuring energy usage of native-code WebM/Ogg
playback using the new energy reporting in the Xcode 7 / iOS 9 beta:
https://brionv.com/log/2015/06/19/webm-and-ogg-energy-usage-on-ios-9-beta-w…
The good news is that playback of buffered data at my target resolutions
(360p for 32-bit, 720p for 64-bit) is barely under the "Low" mark on the
energy drain meter! :D
The bad news is that H.264 playback with the native widget reports
post-buffer-download energy drain even lower, at the "Zero" mark... if you
can believe that!
(In both cases, reported drain is significantly higher during network
download, at least on my fast wifi.)
But "Low" sounds pretty good... If folks would like to see more concrete
measures, I can rig up my test to run continuously until the battery runs
out and time it.
-- brion
Hello wikitech-l!
This year we will be holding the MediaWiki Developers Summit 2016 between
*Monday* *January 4th and Wednesday January 6th* in San Francisco.
Due to popular demand, WMDS 2016 will be scheduled for three days
instead of two. The third day will be a more relaxed/informal hacking
day where people can work on projects, continue discussions from the
previous two days, and catch up with each other.
Registration is not even close to being open, so no action is needed
besides blocking off your calendars.
Looking forward to a great event - we are 7.5 months away!
https://www.mediawiki.org/wiki/MediaWiki_Developer_Summit_2016
I feel that bot operators should actively pay attention to the technical
aspects of the community and the mailing lists. So, the bot operator who
never updates their software, doesn't pay attention to the announcements,
and ignores API warnings should be blocked after the deadline. Bot
operators do not operate in a vacuum, and should never run bots just for
the sake of running them.
The community should always be able to find and communicate with the
bot operators.
Obviously we should not make sudden changes (except for security or
breaking issues), and we should try to make the process as easy as
possible. The rawcontinue param is exactly that: simply adding it
keeps the logic as before.
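For illustration, here is a minimal sketch (not from the original thread) of how a bot might opt in to the legacy continuation format by adding rawcontinue=1 to its query parameters. The fetch() callable is a hypothetical stand-in for the real HTTP request; the merging step follows the old "query-continue" response shape, where each module supplies the parameters for the next request.

```python
def query_all(fetch, params):
    """Yield each API response, following legacy query-continue pages.

    fetch(params) -> dict is a stand-in for the actual HTTP call to
    api.php; params is the base query (action=query, list=..., etc.).
    """
    # Opt in to the pre-change continuation behavior.
    params = dict(params, rawcontinue=1)
    while True:
        response = fetch(params)
        yield response
        cont = response.get("query-continue")
        if not cont:
            break
        # Merge continuation values into the next request, e.g.
        # {"allpages": {"apcontinue": "Next_title"}}
        for module_params in cont.values():
            params.update(module_params)
```

A framework would wrap this loop; the point is only that the opt-in is a single extra parameter, so existing request-building logic stays untouched.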
Lastly, I again would like to promote the idea discussed at the hackathon
-- a client side minimalistic library that bigger frameworks like pywikibot
rely on, and that is designed in part by the core developers. See the
proposal at
https://www.mediawiki.org/wiki/Requests_for_comment/Minimalistic_MW_API_Cli…
On Jun 3, 2015 2:29 PM, "John Mark Vandenberg" <jayvdb(a)gmail.com> wrote:
> On Wed, Jun 3, 2015 at 3:42 AM, Brad Jorsch (Anomie)
> <bjorsch(a)wikimedia.org> wrote:
> > ...
> > I've compiled a list of bots that have hit the deprecation warning more
> > than 10000 times over the course of the week May 23–29. If you are
> > responsible for any of these bots, please fix them. If you know who is,
> > please make sure they've seen this notification. Thanks.
>
> Thank you Brad for doing impact analysis and providing a list of the
> 71 bots with more than 10,000 problems per week. We can try to solve
> those by working with the bot operators.
>
> If possible, could you compile a list of bots affected at a lower
> threshold - maybe 1,000. That will give us a better idea of the scale
> of bots operators that will be affected when this lands - currently in
> one month's time.
>
> Will the deploy date be moved back if the impact doesn't diminish by
> bots being fixed?
>
> --
> John Vandenberg
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
---------- Forwarded message ----------
From: Yuvi Panda <yuvipanda(a)gmail.com>
Date: Thu, Jun 18, 2015 at 4:56 PM
Subject: Ongoing Labs Outage Update
To: labs-announce(a)lists.wikimedia.org
Yesterday, the filesystem used by many Labs tools suffered a
catastrophic failure, causing most tools to break. This was noticed
quickly but recovery is taking a long time because of the size of the
filesystem.
There has been file system corruption on the filesystem backing the
NFS setup that all of Labs uses, causing a prolonged outage. The
Operations team is currently attempting to restore a backup made on
June 9 at 16:00 UTC. Recovery of modifications made after that date is
potentially possible, but our first priority is getting the backup
restored. We will update the incident report page
https://wikitech.wikimedia.org/wiki/Incident_documentation/20150617-LabsNFS…
with notes on our progress. E-mails will also be sent to the
labs-announce (https://lists.wikimedia.org/mailman/listinfo/labs-announce)
and labs-l (https://lists.wikimedia.org/mailman/listinfo/labs-l) on
significant changes. We are not yet able to estimate when things will
be back up fully.
This also means that tools hosted on tools.wmflabs.org will not be
accessible until this is finished, and even then they might need some
more fiddling to work properly. We will update
https://wikitech.wikimedia.org/wiki/Incident_documentation/20150617-LabsNFS…
as well as soon as we have more information.
If you have a non-tools project on labs that does not depend on NFS
and is currently down, you can recover it by getting rid of NFS. (We
can help you with that.) For instructions, see
https://wikitech.wikimedia.org/wiki/Recover_instance_from_NFS . Join
us on #wikimedia-labs and we will assist you.
--
Yuvi Panda T
http://yuvi.in/blog
I just noticed that Article::getContent is deprecated now, and the code
says that WikiPage::getContent is now the preferred method. What's the
recommended way to get the current text content of a normal wikitext
article now? Would it be this?
$text = $article->getPage()->getContent()->getNativeData();
We have an (informal?) policy that deprecation warnings shouldn't be raised
in WMF production. Thus if a core patch is going to add deprecation
warnings we have to make sure that all extensions deployed on the cluster
are updated to not trigger the warning. We can accomplish this by carefully
managing the addition of the warnings, by detecting the core version from
the extension and changing behavior, or by simply updating both core and
extension at the same time.
With Gerrit change 134827,[1] the affected extensions are Flow and
CentralAuth. For neither of these extensions do we care that
extension-master works with non-master versions of MediaWiki core, and
patches to update the extensions are ready.[2][3] So normally we'd just
merge all three at once and be done with it.
The problem is unit tests: we can't merge the core change[1] first because
Flow's unit tests will fail on the deprecation warning, and we can't merge
the Flow change[2] first because it doesn't work without the core change.
There are various ways to work around this, but all are ugly:
1. Merge the core change over Jenkins's objections, then the Flow change
can be merged as normal. But overriding Jenkins sucks.
2. Split the core patch into two parts: part 1 does everything except
add the wfDeprecated() call, while part 2 adds just the wfDeprecated() call
and will be merged immediately after. The make-work here just to make
Jenkins happy sucks and slightly clutters the commit history.
3. Rewrite the Flow unit test to detect whether core has the core change
and alter behavior accordingly. This is even more make-work than option 2
when we're otherwise happy to just coordinate the merges.
Which ugly option do we as a development community prefer in this
situation? Personally I'd go for option 1 as the most expedient with the
fewest long-term consequences.
[1]: https://gerrit.wikimedia.org/r/#/c/134827/
[2]: https://gerrit.wikimedia.org/r/#/c/190023/
[3]: https://gerrit.wikimedia.org/r/#/c/190026/
P.S. Let's not side-track this into whether the "extension master only
needs to be compatible with core master" policy for Flow and CentralAuth is
good/bad, or that situations exist where options 2 or 3 are necessary
choices (e.g. #2 when the extension fixes aren't ready yet and #3 when an
extension involved doesn't have the "master is only compatible with core
master" policy), or whether allowing core changes to be merged that cause
non-WMF-deployed extensions to raise deprecation warnings is somehow
discriminating against non-WMF-deployed extensions. Start a new thread if
you want to discuss those, please.
--
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation