Gregory Maxwell wrote:
> On Wed, Jun 11, 2008 at 2:04 PM, Brion Vibber <brion@wikimedia.org> wrote:
>> Duplicate parsing honestly isn't much of an impediment here; the
>> primary impediment is just configuring things properly for virtual
>> hosts and SSL proxies on the same IPs that we run non-SSL on.
>
> I'd think that 2x the memory usage / disk usage in caches would be
> nothing to sneeze at... or the CPU cost of holding one cached copy and
> replacing the URLs internally.

Ehh, wouldn't hurt in theory but I'm always suspicious. :)
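
Just to spell out that second option, the per-request rewrite would look
something like this (a throwaway Python sketch, not anything we actually
run; the pattern and helper name are made up):

    import re

    # Made-up sketch of "hold one cached copy, rewrite URLs per request".
    # Not actual Squid/MediaWiki code; the pattern is illustrative only.
    ABS_URL = re.compile(r'\bhttps?://([a-z]+\.wikipedia\.org)')

    def rewrite_for_scheme(cached_html, client_scheme):
        # Force every absolute wiki URL in the cached body to the
        # scheme ('http' or 'https') the client connected with.
        return ABS_URL.sub(lambda m: client_scheme + '://' + m.group(1),
                           cached_html)

Running that sub() over every page body on every cache hit is exactly the
CPU cost Gregory means.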
Consider also non-browser uses:
* search spiders
* RSS feed links
* screen-scraping goodies
* post-processing web tools such as online translators, kanji->furigana
  converters, etc.

Note also that the fully-qualified URL may be pulled by {{SERVERNAME}}
or {{FULLURL:}} in the middle of wikitext, and is used in the print
footer etc.
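
To make the non-browser point concrete: the scheme a protocol-relative
link ends up with depends entirely on the consumer's context. A quick
standard-library Python illustration (hypothetical URLs):

    from urllib.parse import urljoin

    link = '//en.wikipedia.org/wiki/Foo'

    # Resolved from a page served over SSL, it stays https:
    print(urljoin('https://en.wikipedia.org/wiki/Main_Page', link))
    # https://en.wikipedia.org/wiki/Foo

    # The same link consumed by some third-party tool over plain http:
    print(urljoin('http://translator.example/render', link))
    # http://en.wikipedia.org/wiki/Foo

A spider or feed parser handed the bare string has no base URL to borrow
a scheme from at all.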
> In any case, I've started testing protocol relatives. If they turn
> out to be reliable then it's just a further enhancement. I'll let
> you know when I have some results.

Sweet... :D
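
(For anyone who hasn't seen the trick: you drop the scheme and the
browser reuses whichever one the page was loaded over. A toy sketch in
Python, nothing like whatever is actually being tested:)

    import re

    def make_protocol_relative(html):
        # Strip 'http:'/'https:' off absolute wiki URLs, leaving
        # scheme-relative ones like //en.wikipedia.org/wiki/Foo.
        return re.sub(r'\bhttps?:(//[a-z]+\.wikipedia\.org/)', r'\1', html)

    print(make_protocol_relative('<a href="http://en.wikipedia.org/wiki/Foo">'))
    # <a href="//en.wikipedia.org/wiki/Foo">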
> * SSL proxies in each data center
> * wildcard certs for each second-level domain
> ...

Right, and the wildcard certs tend to be more expensive for who knows
what reason... :(
Otherwise people would buy one wildcard cert instead of two or three
individual-host certs, and the CAs would make less money... :D

-- brion