On Sun, Oct 5, 2008 at 2:29 PM, Max Semenik <maxsem.wiki(a)gmail.com> wrote:
> On 05.10.2008, 21:00 Gregory wrote:
>> Probably pointless. It's small enough already that the load time is
>> going to be latency-bound for any user not sitting inside a Wikimedia
>> data center. For the ones that are above the latency-bound window (of
>> roughly 8 kB), gzipping should get them back under it.
> mwsuggest.js loses 10 kB that way, wikibits.js 11 kB.
The gzipped copy can't lose 11 kB, because it isn't even that large when
gzipped (it's 9,146 bytes gzipped). Compare the gzipped sizes: after
gzipping, the savings from whitespace removal and friends are much
smaller, yet minification makes the JS unreadable and makes debugging a
pain.
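The gzipped-size comparison is easy to reproduce. A quick sketch (the sample strings here are made-up stand-ins, not the actual wikibits.js contents):

```python
import gzip

def gzipped_size(data: bytes) -> int:
    """Size of data after gzip compression at the default level."""
    return len(gzip.compress(data))

# Made-up stand-ins for an original and a whitespace-stripped JS file.
original = b"function add( a, b ) {\n    return a + b;\n}\n" * 400
stripped = b"function add(a,b){return a+b;}\n" * 400

print("raw sizes:    ", len(original), "vs", len(stripped))
print("gzipped sizes:", gzipped_size(original), "vs", gzipped_size(stripped))
```

On real files the raw-size gap between the two variants shrinks dramatically once both are gzipped, and the gzipped numbers are the ones the wire actually sees.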
> For a logged-in user with monobook it's 33 kB vs. 106 kB - not that
> insignificant.
Logged-in is a mess of uncachability. You're worried about a
once-per-session object for logged-in users?
>> In any case, from the second page onwards pages typically display in
>> <100 ms for me, and the cold-cache (first page) load time for me looks
>> like it's about 230 ms, which is also not bad. The grade 'F' is hardly
>> deserved.
> Not everyone lives in the US and enjoys fast Internet.
You're missing my point. For small objects, *latency* dominates the
loading time even on a slow connection, because TCP never gets a chance
to open the congestion window up. The further you are from the Wikimedia
datacenters, the more significant that effect is.
Much of the poorly connected world suffers very high latencies, due to
congestion-induced queueing delay or service via satellite, in addition
to being far from Wikimedia. (And besides, the US itself lags much of
the world in terms of throughput.)
If it takes 75 ms to get to the nearest Wikimedia datacenter and back,
then a new HTTP GET cannot finish in less than 150 ms. If you want to
improve performance, you need to focus on shaving *round trips* rather
than bytes. Byte reduction only saves round trips if it reduces the
number of TCP windows' worth of data: the cost is quantized, and the
lowest threshold is about 8 kB.
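The quantization is easy to sketch numerically. A back-of-the-envelope model (my simplifications, not a real TCP simulation: one round trip per request plus one per extra congestion window, an assumed 8 kB initial window that doubles each round trip under slow start, ignoring connection setup and server time):

```python
def min_load_time_ms(size_bytes: int, rtt_ms: float,
                     init_window: int = 8 * 1024) -> float:
    """Lower bound on fetch time over a warm connection: one RTT for
    the request and the first window of the response, plus one extra
    RTT per additional congestion window during slow start."""
    round_trips = 1          # request out, first window back
    delivered = init_window
    window = init_window
    while delivered < size_bytes:
        window *= 2          # slow start doubles the window each RTT
        delivered += window
        round_trips += 1
    return round_trips * rtt_ms

print(min_load_time_ms(9146, 75))       # the gzipped wikibits.js
print(min_load_time_ms(33 * 1024, 75))  # the logged-in payload
```

Under this model, shaving the 9 kB gzipped file by a few hundred bytes changes nothing; only getting it under the ~8 kB window boundary would drop a round trip.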
Removing round trips helps everyone, while shaving bytes only helps
people with low delay and very low bandwidth, an increasingly uncommon
configuration. Getting JS out of the critical path also helps everyone:
the reader does not care how long a once-per-session object takes to
load when it doesn't block rendering, and the site already does really
well at this.