On Sat, Feb 12, 2011 at 10:26 PM, Chad innocentkiller@gmail.com wrote:
> Yeah, secure.wikimedia.org's URL scheme isn't really friendly to outsiders. Historically, this is because SSL certificates are expensive, and there just wasn't enough money in the budget to get more of them for the second-level domains. Maybe this isn't the case anymore.
There was a discussion about this a while back that started in #mediawiki_security and migrated to #wikimedia-operations. The problem right now is that it would require some reconfiguration work and no one's gotten around to it. Currently, secure.wikimedia.org just points to a different host from the rest of the site:
$ dig +short secure.wikimedia.org
208.80.152.134
$ dig +short wikipedia.org
208.80.152.2
To put secure and regular access on the same domain, they'd also need to be on the same machine (that is, the same load balancer, which can obviously forward to separate hosts per protocol). Then you'd have the secure site split across several domains, so you'd have the question of which certificate to serve. The best answer to that seems to be putting each second-level domain (wikipedia.org, wiktionary.org, etc.) on a separate IP address, then getting a wildcard certificate for each one and serving that. You'd also want to put each second-level domain itself on a separate IP from all its subdomains, like wikipedia.org on a separate IP from *.wikipedia.org, because a wildcard only matches one label, so the bare domain again needs its own cert.
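The wildcard behaviour is easy to check, by the way: openssl will show you exactly which names a server's certificate covers (using secure.wikimedia.org here just because it's what answers on 443 today):

$ echo | openssl s_client -connect secure.wikimedia.org:443 2>/dev/null \
    | openssl x509 -noout -text | grep -E 'Subject:|DNS:'

If the only name listed on a hypothetical *.wikipedia.org cert were the wildcard, en.wikipedia.org would validate but the bare wikipedia.org wouldn't.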
This scheme doesn't require too many IP addresses (they can all be assigned to the same interface anyway) and avoids having to mess with SNI or the like. But it would take some effort to set up; Ryan Lane said he'd be interested in working on it when he finds the time. Ideally we'd have all logged-in users (at least) on HTTPS by default. But anyway, the cost of certificates certainly hasn't been an issue for years!
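To spell out why one IP per certificate lets us skip SNI: a client that doesn't send the SNI extension (older browsers, lots of libraries) gives the server nothing but the destination IP to choose a cert by. You can compare the two handshakes yourself -- openssl's s_client, at least as of current versions, only sends SNI when you ask for it:

# handshake with no SNI -- the server can only pick a cert by destination IP:
$ openssl s_client -connect secure.wikimedia.org:443 </dev/null
# same handshake, explicitly sending SNI:
$ openssl s_client -connect secure.wikimedia.org:443 -servername secure.wikimedia.org </dev/null

With a dedicated IP per certificate, both of those get the right cert; with name-based selection, only the second one would.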
On Sun, Feb 13, 2011 at 10:14 AM, River Tarnell r.tarnell@ieee.org wrote:
> SSL certificates aren't that cheap, but only about 8 would be needed (one for each project, e.g. *.wikipedia.org), so the cost isn't prohibitive anymore.
You'd want two per project so that https://wikipedia.org/ works, right? Lots of sites get this wrong, and it's lame: https://amazon.com/
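You can see that kind of breakage from the command line, since curl verifies the certificate name by default (unless you pass -k):

$ curl -sSI https://amazon.com/ >/dev/null
# should fail with a certificate verification error if the cert
# only covers www.amazon.com and not the bare domain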
On Sun, Feb 13, 2011 at 10:23 AM, Maury Markowitz maury.markowitz@gmail.com wrote:
> Are there _no_ performance issues we should be concerned about here?
SSL adds an extra round-trip or two to each connection, and adds some server-side load. Currently we have much bigger client-side performance issues than this -- ResourceLoader is a first stab at fixing some of the worst of those -- so I don't think we need to worry too much about it for now.
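If anyone wants numbers rather than hand-waving, curl can show where the time goes; the gap between time_connect and time_appconnect is roughly the SSL handshake (this needs a reasonably recent curl for %{time_appconnect}):

$ curl -o /dev/null -s -w 'tcp: %{time_connect}s  ssl: %{time_appconnect}s\n' \
    https://secure.wikimedia.org/

Run it a few times from wherever your users are; the difference between the two figures is what SSL adds per connection before the first byte of the page moves.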
If enough users used SSL, the server-side computational load might be significant, compared to just serving stuff from Squid. (Google observed almost no load increase when enabling SSL by default for Gmail, but Gmail spends a lot of CPU cycles on each page anyway -- we usually spend almost no CPU for logged-out users, just serving the cached page from Squid's memory or disk.) But not many people will use it until we make it opt-out, so I don't think we have to worry about it for now.
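If someone wants to put a rough bound on that server-side cost: the expensive part of a full handshake is one RSA private-key operation, so openssl's built-in benchmark gives a crude ceiling (assuming 2048-bit keys and no session reuse, i.e. the worst case):

$ openssl speed rsa2048
# the sign/s figure approximates full handshakes per second per core;
# session caching and keep-alive make the real load considerably lower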
> I know local ISPs did (used to?) throttle all encrypted traffic. Would this fall into that category?
I'm not aware of any issue with this.