(I'm not sure if Brooke already had a chance to respond)

Right now things are in a bit of a failure mode for dumps hosting, which may be complicating the issue; it's hard to say what is sane at the moment. The sanest course of action would probably be to a) drop us a note in #wikimedia-cloud when you are going to kick off grabbing the files, and b) download from the web interface. In general, nothing you were doing should have been a problem, but we have a situation on our hands that is probably complicating things.

Best,

Chase
Begin forwarded message:

From: Stas Malyshev <smalyshev@wikimedia.org>
Subject: Re: heads up--we killed a copy job
Date: July 8, 2018 at 1:14:05 PM MST
To: Brooke Storm <bstorm@wikimedia.org>

Hi!

On 7/8/18 1:00 PM, Brooke Storm wrote:
> Hello Stas,
> We killed your cp job on wdqs-test.wikidata-query.eqiad.wmflabs today at
> 19:30-ish UTC. We wanted to give you a heads up. Somehow, it was
> causing huge load on the server, which we are trying to fix with some
> new traffic shaping settings, because that really shouldn’t cause that.
Sure. I was planning to do some experiments with dump
processing/compression, which involve moving large amounts of data
around. Most of this should happen locally on wdqs-test, but of course I
needed the initial file. Please tell me if you have any objections -
i.e. whether I should postpone this, not do it on labs at all, or
observe any other limitation. Is there a better way or place to run such
things? I can download the initial file from web storage instead of
using NFS if that's a problem.

Thanks,
--
Stas Malyshev
smalyshev@wikimedia.org




--
Chase Pettet
chasemp on phabricator and IRC