On Sat, 10 Jul 2004 12:44:20 -0700, Brion Vibber <brion@pobox.com> wrote:
> For Wikipedia, we briefly discussed the possibility a couple years ago but were stymied by the nasty virtual server problem: basically, HTTPS and name-based virtual servers don't mix.
HTTPS and IP-based virtualhosts work just fine, however. TLS can also be made to work with name-based virtualhosts, via the server_name (SNI) extension, although it isn't supported in all browsers.
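The name-based trick is that server_name lets the client announce which host it wants during the handshake, before any HTTP is exchanged, so the server can present the matching certificate. A rough sketch of server-side SNI in modern Python terms, with made-up hostnames and certificate paths:

    import socket, ssl

    # One SSLContext per virtualhost, each loaded with its own certificate.
    contexts = {}
    for host in ("en.wikipedia.org", "de.wikipedia.org"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(f"/etc/ssl/{host}.pem")  # hypothetical paths
        contexts[host] = ctx

    default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    default_ctx.load_cert_chain("/etc/ssl/default.pem")

    def pick_certificate(sslsock, server_name, initial_ctx):
        # Called mid-handshake with the name the client asked for;
        # swapping the context swaps the certificate that gets served.
        if server_name in contexts:
            sslsock.context = contexts[server_name]

    default_ctx.sni_callback = pick_certificate

    srv = socket.create_server(("", 443))
    with default_ctx.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()  # handshake runs pick_certificate

Clients that don't send server_name simply get the default certificate, which is exactly the "isn't supported in all browsers" caveat.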
> Additionally there's the issue that any HTTPS access won't be cached at the squid level (or perhaps even the client level); if we restrict this to logins only (perhaps even optionally) then this oughtn't to impact performance too much.
Can't squid be reconfigured to handle the SSL portion itself? In other words, can it treat all requests to the backend as plain HTTP, and simply serve out cached/fresh copies of pages over SSL? I understand why squid can't cache pages that IT has to retrieve via SSL, but that's not the case here.
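To make the idea concrete, here's a toy SSL terminator sketched in Python: it accepts HTTPS connections, relays the decrypted bytes to a plain-HTTP backend, and pumps the response back out over SSL. The addresses and certificate path are assumptions, and error handling is omitted; a real deployment would use squid's own SSL support (https_port, when built with SSL) or a dedicated frontend such as stunnel.

    import socket, ssl, threading

    BACKEND = ("127.0.0.1", 80)  # assumed plain-HTTP squid/webserver

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("/etc/ssl/wikipedia.pem")  # hypothetical cert

    def relay(src, dst):
        # Shovel bytes one way until the connection closes.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    def handle(tls_conn):
        backend = socket.create_connection(BACKEND)
        # One thread per direction: the client only ever sees SSL,
        # the backend only ever sees plain HTTP.
        threading.Thread(target=relay, args=(tls_conn, backend)).start()
        relay(backend, tls_conn)

    srv = socket.create_server(("", 443))
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        while True:
            conn, _ = tls_srv.accept()
            threading.Thread(target=handle, args=(conn,)).start()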
That said, I tend to think that only logins really need to be secure anyway.
> Further there's the certificate issue; would we be content with a self-signed certificate (BIG WARNINGS in your browser every time you login) or will we spend the foundation's money for a big fancy corporation's stamp of approval?
You can pick up "wildcard" certificates (e.g. *.wikipedia.org) for only a few hundred bucks from some providers. Most big e-commerce companies shell out the big bucks for one from Verisign for brand recognition, but since that's not a concern here, one of the cheaper providers should suffice.
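Note that a wildcard covers exactly one label: *.wikipedia.org matches en.wikipedia.org, but not wikipedia.org itself or login.en.wikipedia.org. A quick Python sketch of the matching rule (simplified from the HTTPS hostname checks in RFC 2818):

    def wildcard_match(pattern, hostname):
        # A wildcard matches exactly one DNS label; compare the rest
        # label-for-label, case-insensitively.
        p_labels = pattern.lower().split(".")
        h_labels = hostname.lower().split(".")
        if len(p_labels) != len(h_labels):
            return False
        return all(p in ("*", h) for p, h in zip(p_labels, h_labels))

    assert wildcard_match("*.wikipedia.org", "en.wikipedia.org")
    assert not wildcard_match("*.wikipedia.org", "wikipedia.org")
    assert not wildcard_match("*.wikipedia.org", "login.en.wikipedia.org")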
Even if we wanted to go the free self-signed route, it's possible to accept the certificate permanently so that the browser warns only the first time, not on every login.
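That's essentially trust-on-first-use: the browser stores the certificate when you accept it and only complains again if it later changes. The same idea in sketch form, in Python with a made-up cache file (and simplified to a single host):

    import hashlib, os, ssl

    PIN_FILE = os.path.expanduser("~/.wikipedia_cert_pin")  # hypothetical

    def fingerprint(host, port=443):
        # Fetch the server's certificate (PEM) and hash it.
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(pem.encode()).hexdigest()

    def check(host):
        fp = fingerprint(host)
        if not os.path.exists(PIN_FILE):
            # First visit: remember the certificate (the one-time warning).
            with open(PIN_FILE, "w") as f:
                f.write(fp)
            return "accepted on first use"
        with open(PIN_FILE) as f:
            stored = f.read()
        return "ok" if stored == fp else "CERTIFICATE CHANGED - warn loudly"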
-Bill Clark