On Tue, Sep 9, 2008 at 8:58 PM, Aryeh Gregor wrote:
On Tue, Sep 9, 2008 at 8:27 PM, Ilmari Karonen wrote:
I think the next step, after the log comparison
test we both suggested,
would be to set $wgLogo to a protocol-relative URL. A missing logo
wouldn't actually break anything, but you _bet_ people would notice it.
Now that's a simple, elegant, effective idea. It would require almost
no effort, hurt no one, and give immediate feedback. The only catch
with this, as with other image-based proposals, is that a client that
doesn't support images (as well as, in the case of the logo, CSS)
won't be picked up. But I don't see much help for that. There are
few enough of those anyway; lynx does support them, I just checked.
Probably every browser you can think to name supports them. I suspect
that if I added up the user agents known to support them, we'd be well
above 99% of all traffic.
My bigger concerns are with things like content-munging proxies,
anti-porno filters, SSL VPN appliances, and the like, breaking things
for clients, along with truly oddball clients (lynx is not oddball)
and mobile devices. Testing with the logo won't exercise those cases
well, but it wouldn't be a bad start.
Some of these may be obscure enough to ignore, some may not. But they
are going to be rare enough that further 'light' pre-deployment
testing isn't likely to find them.
One interesting catch, though, that I just noticed when testing: what
happens when someone downloads the HTML to their hard drive and views
it locally? lynx assumes FTP as the protocol, which is completely
wrong. Firefox, Opera, Konqueror, and Chrome all try to use the
*file://* protocol -- which is absolutely reasonable but absolutely
terrible for us.
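To make the failure mode concrete, here is a small sketch using Python's urllib.parse.urljoin, which follows the same RFC 3986 resolution rules the browsers are applying. The file paths and image URL are hypothetical, but the scheme inheritance is the point: a protocol-relative reference inherits whatever scheme the base document was loaded under, including file://.

```python
from urllib.parse import urljoin

# Served over HTTPS, a protocol-relative reference resolves sensibly.
print(urljoin("https://en.wikipedia.org/wiki/Main_Page",
              "//upload.wikimedia.org/logo.png"))
# -> https://upload.wikimedia.org/logo.png

# Viewed from a local copy, the base scheme is file://, so the same
# reference resolves to a nonsense location on no local disk.
print(urljoin("file:///home/user/Main_Page.html",
              "//upload.wikimedia.org/logo.png"))
# -> file://upload.wikimedia.org/logo.png
```

That second result is exactly the "absolutely reasonable but absolutely terrible" behavior: the resolution is correct per spec, but the resulting URL points nowhere useful.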
So this means that anyone who tries to save a Wikipedia page using
protocol-relative URLs to their hard drive will find that all the
relevant links are broken. This is, obviously, not a good thing. I
can't see any conceivable workaround, and if there is none I don't see
any way we (or anyone) can use protocol-relative URLs. Being able to
save web pages locally is pretty basic and important functionality
that a lot of people must be relying on.
Huh? Did you actually try saving a page?
A protocol-relative URL is a relative URL. We would use them where
sites normally use relative URLs, but where we currently use fully
qualified URLs because our 'site' spans many domain names (i.e. the
various Wikimedia domains).
If you take some relative URLs from a website and merely write them
into a file, of course it isn't going to work. Which is why browsers
do not do anything like that when you save a page.
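The mechanics above can be sketched with Python's urllib.parse.urljoin (the example image URL is illustrative): a protocol-relative reference resolves like any other relative reference, taking its scheme from the page it appears on, which is the whole reason it works across a multi-domain site.

```python
from urllib.parse import urljoin

# The same reference picks up whichever scheme the page was served over,
# with no server-side logic needed.
print(urljoin("http://en.wikipedia.org/wiki/Main_Page",
              "//upload.wikimedia.org/logo.png"))
# -> http://upload.wikimedia.org/logo.png
print(urljoin("https://en.wikipedia.org/wiki/Main_Page",
              "//upload.wikimedia.org/logo.png"))
# -> https://upload.wikimedia.org/logo.png
```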
Browsers support relative URLs in saved copies by rewriting them at
save time. Otherwise all relative URLs would break in saved
documents, and the overwhelming majority of anchors and images on
sites outside of Wikipedia are fully relative paths.
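A minimal sketch of that save-time rewriting, assuming a naive regex over href/src attributes (real browsers work from the parsed DOM, and the function name and sample markup here are invented for illustration):

```python
import re
from urllib.parse import urljoin

def absolutize(html: str, base: str) -> str:
    """Rewrite every href/src attribute to a fully qualified URL,
    roughly what a browser does when saving a page to disk."""
    def fix(match):
        attr, url = match.group(1), match.group(2)
        # urljoin handles path-relative, root-relative, and
        # protocol-relative references identically.
        return '%s="%s"' % (attr, urljoin(base, url))
    return re.sub(r'(href|src)="([^"]*)"', fix, html)

page = '<a href="/wiki/Foo">Foo</a> <img src="//upload.wikimedia.org/logo.png">'
print(absolutize(page, "https://en.wikipedia.org/wiki/Main_Page"))
# -> <a href="https://en.wikipedia.org/wiki/Foo">Foo</a>
#    <img src="https://upload.wikimedia.org/logo.png">
```

Since the rewriter only needs the page's own base URL, a protocol-relative reference costs it nothing extra over any other relative reference.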
I just tested several browsers and they rewrote protocol-relative URLs
just as they do any other kind of relative URL. Images get saved and
work fine. Links get fully qualified just fine. It just works.
I also tried this on a captured copy of the Wikipedia HTML. It works
just as I'd expect.
As far as I can tell there is no problem here, but perhaps I'm missing
something.