[Labs-l] Starting migration to new hardware today, expect some downtime on your instances
Ryan Lane
rlane32 at gmail.com
Fri Jul 6 21:20:35 UTC 2012
I noticed a few instances down after migration. After some
investigation, it seems that KVM block migration is corrupting the
images on the destination host. Any instance that was migrated is
likely corrupted and will probably need to be rebuilt. Needless to
say, I'm going to stop doing migrations via this method and will try
another, more annoying, approach instead.
If you need to retrieve data off an instance and cannot do so because
the instance is down or too badly corrupted, let me know and I'll
mount the disks and retrieve the data for you. Sorry for the
inconvenience.
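For the curious, recovery basically amounts to mounting the broken
image read-only and copying files out of it. Here's a rough sketch of
the idea in Python, wrapping libguestfs' guestmount; this is only an
illustration, not necessarily the exact tooling I'll use, and the
image path and guest path below are made-up placeholders:

# Rough illustration only: mount an instance's disk image read-only
# with guestmount (libguestfs-tools) and copy a directory out of it.
# The image path and guest path are placeholders.
import shutil
import subprocess
import tempfile

IMAGE = "/var/lib/nova/instances/instance-00000309/disk"  # placeholder
GUEST_PATH = "/home/someuser/important-data"              # placeholder

mountpoint = tempfile.mkdtemp(prefix="recover-")

# -a adds the disk image, -i inspects it and mounts its filesystems
# automatically, --ro keeps everything read-only.
subprocess.check_call(["guestmount", "-a", IMAGE, "-i", "--ro", mountpoint])
try:
    shutil.copytree(mountpoint + GUEST_PATH, "./recovered-data")
finally:
    # guestmount is FUSE-based, so unmount with fusermount.
    subprocess.check_call(["fusermount", "-u", mountpoint])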
Here's the list of instances that were migrated and are likely
corrupted (34 in total):
i-00000309
i-0000030a
i-0000030b
i-00000080
i-000000b7
i-000000c1
i-000000e7
i-000002dd
i-00000263
i-0000025a
i-000002bd
i-00000308
i-000000ae
i-0000006b
i-000002bb
i-000000b2
i-0000030c
i-00000264
i-000000e1
i-0000028c
i-000002d3
i-0000030d
i-00000302
i-00000289
i-00000170
i-000002d8
i-000000c2
i-000000e2
i-000002d4
i-00000118
i-0000009b
i-00000093
i-000000f8
i-00000105
- Ryan
On Fri, Jul 6, 2012 at 11:14 AM, Ryan Lane <rlane32 at gmail.com> wrote:
> I'll be migrating instances to the new hardware today. You should
> expect some downtime on your instances, as these are cold migrations
> (we're moving away from gluster for instance storage). I've seen very
> large instances have downtime of up to two hours. KVM's migration, in
> lucid at least, seems to be fairly inefficient.
>
> Migrations will continue until all instances are off the old hardware.
> I expect this process to take roughly one week.
>
> - Ryan