[Foundation-l] [Internal-l] Relocation Announcement

Mark Bergsma mark at wikimedia.org
Mon Sep 24 18:47:28 UTC 2007


Anthony wrote:
>>> It only takes light about 1/100th of a second to travel across the
>>> Atlantic Ocean.  Speed of light isn't the bottleneck.
>> It takes packets about 70ms to travel across the Atlantic Ocean and
>> back, determined by the speed of light through fiber. Since a typical
>> HTTP request requires a few of these round trips, the speed of light
>> *is* the bottleneck.
>>
> http://royal.pingdom.com/?p=143
> 
> My figure was off.  But 1.215 seconds still is nowhere near the
> theoretical maximum (about 133 milliseconds anywhere in the world).
> I'd say HTTP is the bigger bottleneck, with router forwarding probably
> equaling about the same as the speed of light.  (Even then, it's not a
> matter of physical distance.  I live in Tampa, and I just got 139ms
> pings, although a traceroute I just ran shows the packets going
> through Virginia).

You are correct: two points on the Internet are not necessarily 
connected by the shortest straight-line path. Fortunately, Amsterdam is 
a major Internet hub in Europe, where many transatlantic cables land. 
Tampa is not. Miami has some, but much of that traffic goes via the 
northern US east coast anyway.

And no, router forwarding doesn't account for half of that: those same 
routers forward in microseconds over much shorter distances. 
Amplifiers, optical/electrical converters and other transmission 
equipment needed on those long transatlantic links may indeed introduce 
some extra delay, but since they are necessary anyway over long 
distances it doesn't really help to factor them out.
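To put rough numbers on the point above, here is a back-of-the-envelope sketch. The distance and refractive index are assumed figures for illustration (a typical Amsterdam-to-US-east-coast cable path and ordinary optical fiber), not measurements of any actual Wikimedia link:

```python
# Fiber-only latency estimate: light in fiber moves at roughly c / n,
# where n ~ 1.47 is the refractive index of typical optical fiber.
C_KM_S = 299_792.458       # speed of light in vacuum, km/s
REFRACTIVE_INDEX = 1.47    # assumed value for silica fiber
DISTANCE_KM = 6_500        # assumed cable-path length across the Atlantic

one_way_s = DISTANCE_KM / (C_KM_S / REFRACTIVE_INDEX)
rtt_ms = 2 * one_way_s * 1000
print(f"fiber-only round trip: {rtt_ms:.0f} ms")

# A plain HTTP fetch over a new connection needs several such round
# trips (TCP handshake, then request/response), so they multiply:
round_trips = 3
print(f"{round_trips} round trips: {round_trips * rtt_ms:.0f} ms")
```

This lands in the same ballpark as the ~70 ms figure quoted earlier, before any routing or queueing overhead is added on top.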

> Anyway, my comment wasn't meant to be practical for the foundation's
> particular problem.  Sorry about that.
> 
>> And that's just one HTTP request. It would be way worse if we sent all
>> the very many DB/memcached/DNS/etc lookups necessary to build a single
>> wiki page over the Atlantic...

> If you didn't pipeline them.  But why in the world would you do that?

Because of the "update everywhere" problem. Data of which multiple 
copies exist either needs all copies updated atomically (impractical due 
to the latency we were talking about) or needs to be synced/merged 
later, with all the fun conflict resolution problems that brings.
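A toy sketch of that conflict problem, assuming nothing about MediaWiki's actual storage layer (the key names, timestamps, and last-writer-wins merge rule here are all invented for illustration):

```python
# Two replicas of the same record, one on each side of the Atlantic.
# Each accepts a write locally before any synchronization happens.
replica_eu = {"page:Foo": ("original text", 100)}   # (content, timestamp)
replica_us = {"page:Foo": ("original text", 100)}

# Concurrent edits, made independently on both replicas:
replica_eu["page:Foo"] = ("edit from Amsterdam", 105)
replica_us["page:Foo"] = ("edit from Tampa", 106)

def merge(a, b):
    # Crude last-writer-wins resolution by timestamp. Whichever rule
    # you pick, one user's edit is silently discarded -- that is the
    # conflict resolution problem in miniature.
    return a if a[1] >= b[1] else b

merged = merge(replica_eu["page:Foo"], replica_us["page:Foo"])
print(merged)  # the Amsterdam edit is lost
```

Atomic update-everywhere avoids this, but only by paying a transatlantic round trip on every write, which is exactly the latency discussed above.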

-- 
Mark Bergsma <mark at wikimedia.org>
System & Network Administrator, Wikimedia Foundation



More information about the foundation-l mailing list