-----Original Message-----
From: wikitech-l-bounces@lists.wikimedia.org [mailto:wikitech-l-bounces@lists.wikimedia.org] On Behalf Of Brion Vibber
Sent: 06 October 2008 17:56
To: Wikimedia developers
Subject: Re: [Wikitech-l] Wikipedia & YSlow
Gregory Maxwell wrote:
On Sun, Oct 5, 2008 at 12:15 PM, howard chen <howachen@gmail.com> wrote:
The results are quite surprising: Grade F (47). Of course a lower mark
does not always mean bad, but there is some room for improvement, e.g.
[snip]
- Minify JS (e.g.
Probably pointless. It's small enough already that the load time is going
to be latency bound for any user not sitting inside a Wikimedia data
center. For files that are above the latency-bound window (roughly 8k),
gzipping should get them back under it.
Minification can actually decrease sizes significantly even with gzipping. Particularly for low-bandwidth and mobile use this could be a serious plus.
The big downside of minification, of course, is that it makes it harder to read and debug the code.
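As a toy illustration of that point (plain Python, nothing to do with
MediaWiki's actual code), stripping comments and collapsing whitespace
still shaves bytes off the gzipped size:

import gzip
import re

js = b"""
/* example script: comments and long whitespace runs compress well,
   but they still cost bytes even after gzip */
function toggleVisibility( elementId ) {
    var element = document.getElementById( elementId );
    if ( element.style.display === 'none' ) {
        element.style.display = '';
    } else {
        element.style.display = 'none';
    }
}
"""

# Crude "minification": drop block comments and collapse whitespace.
minified = re.sub(rb'/\*.*?\*/', b'', js, flags=re.DOTALL)
minified = re.sub(rb'\s+', b' ', minified).strip()

print('raw bytes      :', len(js))
print('gzip only      :', len(gzip.compress(js)))
print('minify + gzip  :', len(gzip.compress(minified)))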
- Enable GZip compression (e.g.
The page text is gzipped. CSS/JS are not. Many of the CSS/JS files are
small enough that gzipping would not be a significant win (see above),
but I don't recall the reason the CSS/JS are not. Is there a client
compatibility issue here?
CSS/JS generated via MediaWiki are gzipped. Those loaded from raw files are not, as the servers aren't currently configured to do that.
- Add expire header (e.g. http://upload.wikimedia.org/wikipedia/en/9/9d/Commons-logo-31px.png)
Hm. There are expire headers on the skin-provided images, but not on ones
from upload. It does correctly respond with 304 Not Modified, but a
not-modified is often as time-consuming as sending the image. Firefox
doesn't IMS (If-Modified-Since) these objects every time in any case.
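As a rough client-side sketch of that cost (Python, with a made-up cached
date -- not anything Wikimedia runs), a conditional GET pays the full
round trip even when the answer is 304:

import time
import urllib.error
import urllib.request

url = 'http://upload.wikimedia.org/wikipedia/en/9/9d/Commons-logo-31px.png'
req = urllib.request.Request(url, headers={
    # Pretend we already have a cached copy from this date; the server
    # compares it against the file's Last-Modified time.
    'If-Modified-Since': 'Mon, 06 Oct 2008 00:00:00 GMT',
})

start = time.time()
try:
    with urllib.request.urlopen(req) as resp:
        print(resp.status, len(resp.read()), 'bytes transferred')
except urllib.error.HTTPError as err:
    # urllib surfaces 304 as an "error"; the latency was paid either way.
    print(err.code, 'Not Modified -- no body, but same round trip')
print('round trip: %.3f s' % (time.time() - start))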
The primary holdup for serious expires headers on file uploads is not having unique per-version URLs. With a far-future expires header, things get horribly confusing when a file has been replaced, but everyone still sees the old cached version.
Anyway, these are all known issues.
Possible remedies for CSS/JS files:
- Configure Apache to compress them on the fly (probably easy)
- Pre-minify them and have Apache compress them on the fly (not very hard)
- Run them through MediaWiki to compress them (slightly harder)
- Run them through MediaWiki to compress them *and* minify them *and*
merge multiple files together to reduce the number of requests (funk-ay!
-- a rough sketch follows this list)
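A rough sketch of that last option, in Python purely for illustration
(input files come from the command line; the real MediaWiki code would of
course be PHP and rather more careful):

import gzip
import re
import sys

def combine_and_compress(paths):
    merged = []
    for path in paths:
        with open(path, 'rb') as f:
            text = f.read()
        # Crude minification: strip /* ... */ comments and collapse runs of
        # spaces/tabs (good enough for CSS; real JS needs a proper minifier).
        text = re.sub(rb'/\*.*?\*/', b'', text, flags=re.DOTALL)
        text = re.sub(rb'[ \t]+', b' ', text)
        merged.append(text.strip())
    blob = b'\n'.join(merged)
    return gzip.compress(blob)

if __name__ == '__main__':
    payload = combine_and_compress(sys.argv[1:])
    # A server would hand this out with "Content-Encoding: gzip" and the
    # appropriate Content-Type (text/css or text/javascript).
    sys.stdout.buffer.write(payload)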
Possible remedies for better caching of image URLs:
- Stick a version number on the URL in a query string (probably easy --
grab the timestamp from the image metadata and toss it on the URL? see
the sketch after this list)
- Store files with unique filenames per version (harder since it requires
migrating files around, but something I'd love us to do)
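Sketch of the query-string option (the helper name and local path are made
up; this is just to show the shape of it):

import os

def versioned_url(base_url, local_path):
    # Use the file's last-modified time as a cheap version stamp; any
    # re-upload changes the stamp and therefore the URL, so a far-future
    # Expires header becomes safe.
    version = int(os.path.getmtime(local_path))
    return '%s?%d' % (base_url, version)

# versioned_url('http://upload.wikimedia.org/wikipedia/en/9/9d/Commons-logo-31px.png',
#               '/srv/images/en/9/9d/Commons-logo-31px.png')
# -> '...Commons-logo-31px.png?1223300000'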
Wouldn't rollbacks waste space? I'd have thought you'd use content
addressing to store all the images, and put the content address in the
URL? Unless there is versioned metadata associated with images that would
affect how it's sent to the client.
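Something like this, say (hash choice and URL layout made up purely for
illustration):

import hashlib

def content_address(path):
    # Hash the file's bytes; identical content (e.g. a rollback that
    # restores the previous bytes) maps to the same address, so nothing
    # extra is stored.
    with open(path, 'rb') as f:
        return hashlib.sha1(f.read()).hexdigest()

def upload_url(path):
    digest = content_address(path)
    # Shard by the leading hex digits of the hash, mirroring the existing
    # /9/9d/ style layout (which hashes the filename instead).
    return 'http://upload.wikimedia.org/%s/%s/%s' % (digest[0], digest[:2], digest)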
Jared