On Wed, Jun 11, 2008 at 2:04 PM, Brion Vibber <brion(a)wikimedia.org> wrote:
Duplicate parsing honestly isn't much of an
impediment here; the primary
impediment is just configuring things properly for virtual hosts and SSL
proxies on the same IPs that we run non-SSL on.
I'd think that 2x the memory usage / disk usage in the caches would be
nothing to sneeze at... nor would the CPU cost of holding one cached copy
and replacing the URLs internally.
In any case, I've started testing protocol-relative URLs. If they turn
out to be reliable then it's just a further enhancement. I'll let
you know when I have some results.
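To make that concrete, here's a rough sketch (Python, purely illustrative, not how MediaWiki actually does it) of rewriting cached HTML so links to our own hosts become protocol-relative; the domain pattern and function name are made up for the example. The point is that a single cached copy then works for both HTTP and HTTPS readers, so you get neither the 2x cache cost nor per-request URL replacement:

```python
import re

# Hypothetical domain pattern, for illustration only; the real list of
# local wikis would come from configuration.
LOCAL_DOMAINS = r"(?:[a-z0-9-]+\.)*(?:wikipedia|wikimedia|wiktionary)\.org"

# Match an absolute http:// or https:// link to one of our own hosts.
ABSOLUTE_LINK = re.compile(r"https?://(" + LOCAL_DOMAINS + r")")

def make_protocol_relative(html):
    """Rewrite absolute links to local domains as protocol-relative
    (//host/path) so one cached copy serves both HTTP and HTTPS."""
    return ABSOLUTE_LINK.sub(r"//\1", html)

if __name__ == "__main__":
    sample = '<a href="http://en.wikipedia.org/wiki/Foobar">Foobar</a>'
    print(make_protocol_relative(sample))
    # <a href="//en.wikipedia.org/wiki/Foobar">Foobar</a>
```

The reliability question is presumably about the contexts where // links misbehave, e.g. some older clients, or URLs copied out of the HTML into feeds or email.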
eg, we want https://en.wikipedia.org/wiki/Foobar to work, which requires:
* SSL proxies in each data center
* wildcard certs for each second-level domain
* appropriate connection setup for the certs to work; eg one public IP
per data center per second-level domain
We did some experimentation in this direction last year, but haven't
really got the ball rolling yet.
Right, and the wildcard certs tend to be more expensive for who knows
what reason... :(
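For reference, here's a minimal sketch (Python, standard library only, placeholder cert path and addresses) of the kind of SSL termination being described: accept TLS on a dedicated IP with the wildcard cert and hand the decrypted traffic to the existing plain-HTTP layer. A real deployment would obviously be a proper proxy rather than a script like this; the sketch just shows why, without SNI support in clients, each second-level domain's cert ends up tied to its own public IP:

```python
import socket, ssl, threading

# Placeholder values for illustration: a combined cert+key PEM for the
# wildcard cert, and the existing plain-HTTP layer as the backend.
CERT_FILE = "wildcard.wikipedia.org.pem"
BACKEND = ("127.0.0.1", 80)
LISTEN = ("0.0.0.0", 443)  # pre-SNI, one listening IP per certificate

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while True:
            data = src.recv(8192)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def handle(client):
    """Proxy one decrypted client connection to the plain-HTTP backend."""
    backend = socket.create_connection(BACKEND)
    threading.Thread(target=pump, args=(client, backend)).start()
    pump(backend, client)

def main():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(CERT_FILE)
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN)
    srv.listen(64)
    # Wrapping the listening socket means accept() hands back TLS sockets.
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        while True:
            conn, _addr = tls_srv.accept()
            threading.Thread(target=handle, args=(conn,)).start()

if __name__ == "__main__":
    main()
```

One instance of this per second-level domain per data center is where the extra public IPs in the list above go.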
Cool enough.