Someone posted a link to
https://secure.wikimedia.org/wikipedia/commons/wiki/Category:Hogtie_bondage
Delving further, we find https://secure.wikimedia.org/wikipedia/commons/wiki/Main_Page says "Welcome to Wikimedia Commons", when in fact the real site is http://commons.wikimedia.org/wiki/Main_Page. Or is it?
In fact on any page on either site, one cannot find any link to the corresponding page on the other site.
So now everybody will be passing around twice as many links to what is in fact the same material.
OK, I found the pattern for converting one to the other:
https://secure.wikimedia.org/wikipedia/commons/wiki/Category:Hogtie_bondage
-> http://commons.wikimedia.org/wiki/Category:Hogtie_bondage
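For instance, a rough shell one-liner for the Commons case (a sketch only; it assumes the secure scheme always nests Commons under /wikipedia/commons/):

  # Rewrite a secure.wikimedia.org Commons URL to its canonical form.
  echo 'https://secure.wikimedia.org/wikipedia/commons/wiki/Main_Page' |
    sed 's#^https://secure\.wikimedia\.org/wikipedia/commons/#http://commons.wikimedia.org/#'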
One would hope the owners of Wiki[pm]edia would redirect one to the other, to stem the proliferation of non-canonical links.
jidanni@jidanni.org wrote:
What in the Christ are you talking about?
Reading with squinted eyes and a cocked head, it sounds like you've discovered the secure site. It's documented here:
* http://en.wikipedia.org/wiki/Wikipedia:Secure_server
* http://wikitech.wikimedia.org/view/secure.wikimedia.org
I have no idea what this has to do with hogtie bondage or why a post to this mailing list (or the gendergap mailing list, for that matter) was necessary.
MZMcBride
I don't understand the email either... The secure site has been around for years...
He's complaining, in effect, that there is more than one URL for identical content, which is in fact generally a bad idea; but in this case, of course, he's wrong: different *access protocols* are being used, so it's not possible to unify the two...
Whether it is in fact still a Best Practice to make sure that they're the same is another matter; I understand *why* we have a separate domain name for https, architecturally, but I'm not sure I *like* it.
Cheers, -- jra
----- Original Message -----
From: "Huib Laurens" sterkebak@gmail.com To: "Wikimedia developers" wikitech-l@lists.wikimedia.org Sent: Saturday, February 12, 2011 10:18:33 PM Subject: Re: [Wikitech-l] secure.wikimedia.org commons sockpuppet site I don't understand the email also... The secure site has been arround for years...
On Sun, Feb 13, 2011 at 1:51 AM, Jay Ashworth jra@baylink.com wrote:
He's complaining, in effect, that there is more than one URL for identical content, which is in fact generally a bad idea; but in this case, of course, he's wrong: different *access protocols* are being used, so it's not possible to unify the two...
Whether it is in fact still a Best Practice to make sure that they're the same is another matter; I understand *why* we have a separate domain name for https, architecturally, but I'm not sure I *like* it.
This is something I'd very much like to fix. I had a fairly in-depth discussion with the other ops folks about this last week. I think I'm going to put it on my goal list; however, we have a lot of higher priority tasks, so I wouldn't expect anything too soon.
- Ryan Lane
----- Original Message -----
From: "Ryan Lane" rlane32@gmail.com
This is something I'd very much like to fix. I had a fairly in-depth discussion with the other ops folks about this last week. I think I'm going to put it on my goal list; however, we have a lot of higher priority tasks, so I wouldn't expect anything too soon.
Oh, I'm not, and secure.* is fine for me, for now. But see my other note to River.
Cheers, -- jra
When you finally retire https://secure.wikimedia.org/wikipedia/commons/... etc., have them HTTP 301 (permanent) redirect to https://commons.wikimedia.org/... etc.
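Once such a redirect exists, it would be easy to verify from the shell (a sketch; curl's --write-out variables are standard):

  # Expect a 301 status and the canonical commons.wikimedia.org location.
  curl -s -o /dev/null -w '%{http_code} %{redirect_url}\n' \
    'https://secure.wikimedia.org/wikipedia/commons/wiki/Main_Page'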
On Sat, Feb 12, 2011 at 10:02 PM, jidanni@jidanni.org wrote:
Someone posted a link to
https://secure.wikimedia.org/wikipedia/commons/wiki/Category:Hogtie_bondage
Mmmmm.
Delving further, we find https://secure.wikimedia.org/wikipedia/commons/wiki/Main_Page says "Welcome to Wikimedia Commons", when in fact the real site is http://commons.wikimedia.org/wiki/Main_Page. Or is it?
They're the same site. Served from two different domains.
In fact on any page on either site, one cannot find any link to the corresponding page on the other site.
Yeah, secure.wikimedia.org's URL scheme isn't really friendly to outsiders. Historically, this is because SSL certificates are expensive, and there just wasn't enough money in the budget to get more of them for the top-level domains. Maybe this isn't the case anymore.
So now everybody will be passing around twice as many links to what is in fact the same material.
If people are pasting double links, then they're being silly. I imagine a lot of stuff on Commons uses {{fullurl:}} so the links are properly generated by MediaWiki.
One would hope the owners of Wiki[pm]edia would redirect one to the other, to stem the proliferation of non-canonical links.
Redirection would be pointless. Serving them from the same domain (e.g. https://commons.wikimedia.org) would be great and is already filed as a bug[0]. I think this is your primary complaint, but as usual you spent half of your post insulting people and creating straw men.
-Chad
----- Original Message -----
From: "Chad" innocentkiller@gmail.com
In fact on any page on either site, one cannot find any link to the corresponding page on the other site.
Yeah, secure.wikimedia.org's URL scheme isn't really friendly to outsiders. Historically, this is because SSL certificates are expensive, and there just wasn't enough money in the budget to get more of them for the top-level domains. Maybe this isn't the case anymore.
Is that in fact the root cause, Chad? I assumed, myself, that it's because of the squid architecture.
So now everybody will be passing around twice as many links to what is in fact the same material.
If people are pasting double links, then they're being silly. I imagine a lot of stuff on Commons uses {{fullurl:}} so the links are properly generated by MediaWiki.
No, in fact the root cause of his complaint is pretty likely to be HTTPS Everywhere, which redirects users to the https site in case they're on an insecure wifi spot, so their credentials don't get stolen.
This is likely to markedly increase https traffic; I've been wondering myself whether that's been noticeable over the last month.
Redirection would be pointless. Serving them from the same domain (e.g. https://commons.wikimedia.org) would be great and is already filed as a bug[0]. I think this is your primary complaint, but as usual you spent half of your post insulting people and creating straw men.
Asperger's syndrome is a bitch.
Cheers, -- jra
In article 18849937.7157.1297583642909.JavaMail.root@benjamin.baylink.com, Jay Ashworth jra@baylink.com wrote:
Is that in fact the root cause, Chad? I assumed, myself, that it's because of the squid architecture.
LVS is in front of Squid, so it would be fairly simple to send SSL traffic (port 443) to a different machine; which is how secure.wm.o works now, except that instead of using LVS, it requires a different hostname.
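(As a sketch of what such an LVS rule might look like with ipvsadm; the backend address here is made up:)

  # Send port-443 traffic on the shared service IP to a dedicated SSL host.
  ipvsadm -A -t 208.80.152.2:443 -s wlc            # virtual HTTPS service
  ipvsadm -a -t 208.80.152.2:443 -r 10.0.0.50 -m   # hypothetical SSL terminator (NAT mode)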
However, I think the idea is not to start allowing https://en.wikipedia.org URLs until there's a better SSL infrastructure which can handle the extra load an easy-to-use, widely advertised SSL gateway is likely to create. secure.wm.o is currently a single machine and sometimes falls over, e.g. when Squid breaks for some reason and people notice that secure still works.
SSL certificates aren't that cheap, but only about 8 would be needed (one for each project, e.g. *.wikipedia.org), so the cost isn't prohibitive anymore.
- river.
Are there _no_ performance issues we should be concerned about here?
I know local ISPs did (used to?) throttle all encrypted traffic. Would this fall into that category?
Maury
AFAIK, server-side encryption doesn't carry any appreciable performance penalty. Clients might be a bit slower due to the additional round trips caused by the handshakes.
On Sun, Feb 13, 2011 at 9:28 AM, Leo diebuche@gmail.com wrote:
AFAIK, server-side encryption doesn't carry any appreciable performance penalty. Clients might be a bit slower due to the additional round trips caused by the handshakes.
The server side is definitely more the problem here. There is a fairly substantial performance hit for setting up/breaking down the SSL connections. To support this we'll need a cluster of systems dedicated to acting as SSL proxies.
- Ryan Lane
On Sun, Feb 13, 2011 at 9:23 AM, Maury Markowitz maury.markowitz@gmail.com wrote:
Are there _no_ performance issues we should be concerned about here?
I know local ISPs did (used to?) throttle all encrypted traffic. Would this fall into that category?
Well, there's nothing we can really do about this.
It's better to offer a secure site than not to. If people are having performance issues because of their ISP, they should complain to their ISP. They can also choose to use the http version if that is the case. Right now people get to choose between the crappy secure version and the fast insecure version of our site.
- Ryan
There are actually providers like www.startssl.com who issue free certificates (only validated by email address, though). StartSSL's root certificate is included in nearly all recent browsers.
Leo
In article CFB39D2BB60F411485D2BE6D9DB072F1@gmail.com, Leo diebuche@gmail.com wrote:
There are actually providers like www.startssl.com who issue free certificates (only validated by email address, though). StartSSL's root certificate is included in nearly all recent browsers.
I use StartSSL myself, but they don't offer wildcard certificates for free. Using non-wildcard certs would require us to request (and regularly renew) around 1,000 certs, as well as requiring a new cert for each new language project which opens.
StartSSL wildcard certs might be cheaper than another provider (it's not very clear from their website), but either way there's a cost involved.
- river.
Ah, right. That would obviously be too much hassle. Their website is not that clear, but I think it's $100 (per year?) for wildcard certificates. ("StartSSL™ identity and organization validation are available for only US $49.90 each, where organization validation implies prior identity validation. Once validated, certificates are freely available through the advanced StartSSL™ Control Panel and unlimited for 350 days of the validated identity/organization.")
Leo
----- Original Message -----
From: "River Tarnell" r.tarnell@IEEE.ORG
LVS is in front of Squid, so it would be fairly simple to send SSL traffic (port 443) to a different machine; which is how secure.wm.o works now, except that instead of using LVS, it requires a different hostname.
Got it.
However, I think the idea is not to start allowing https://en.wikipedia.org URLs until there's a better SSL infrastructure which can handle the extra load an easy-to-use, widely advertised SSL gateway is likely to create. secure.wm.o is currently a single machine and sometimes falls over, e.g. when Squid breaks for some reason and people notice that secure still works.
You did get the "EFF is pushing a Firefox plugin that has a rule that redirects all WP accesses to the secure site" part of that report, though, right? This curve has probably already started to ramp; now might be a good time for someone ops-y to be thinking about this.
Cheers, -- jra
You did get the "EFF is pushing a Firefox plugin that has a rule that redirects all WP accesses to the secure site" part of that report, though, right? This curve has probably already started to ramp; now might be a good time for someone ops-y to be thinking about this.
I'd be concerned about this if it were actually going to result in a substantial amount of traffic to secure, but I seriously doubt it will. I love the EFF, but they have a fairly shallow reach when it comes to things like this.
- Ryan Lane
On Sat, Feb 12, 2011 at 10:26 PM, Chad innocentkiller@gmail.com wrote:
Yeah, secure.wikimedia.org's URL scheme isn't really friendly to outsiders. Historically, this is because SSL certificates are expensive, and there just wasn't enough money in the budget to get more of them for the top-level domains. Maybe this isn't the case anymore.
There was a discussion about this a while back that started in #mediawiki_security and migrated to #wikimedia-operations. The problem right now is that it would require some reconfiguration work and no one's gotten around to it. Currently, secure.wikimedia.org just points to a different host from the rest of the site:
$ dig +short secure.wikimedia.org
208.80.152.134
$ dig +short wikipedia.org
208.80.152.2
To put secure and regular at the same domain, they'd also need to be put on the same machine (that is, the same load balancer, which can obviously forward to separate hosts per protocol). Then you'd have the secure site split across several domains, so you'd have the question of which certificate to serve. The best answer to that seems to be putting each second-level domain (wikipedia.org, wiktionary.org, etc.) on a separate IP address, then getting a wildcard certificate for each one and serving that. You'd also want to put each second-level domain itself on a separate IP from all its subdomains, like wikipedia.org on a separate IP from *.wikipedia.org, because those again require separate certs.
This doesn't require too many IP addresses (which can all be assigned to the same interface anyway) and avoids having to mess with SNI or such. But it would require some amount of effort to set up. Ryan Lane said he'd be interested in working on it when he finds the time. Ideally we'd have all logged-in users (at least) on HTTPS by default. But anyway, the cost of certificates certainly hasn't been an issue for years!
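(As a sanity check, stock openssl will show which certificate a given name actually serves, with no SNI involved -- a sketch:)

  # Print the subject of the certificate served on port 443.
  echo | openssl s_client -connect secure.wikimedia.org:443 2>/dev/null |
    openssl x509 -noout -subject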
On Sun, Feb 13, 2011 at 10:14 AM, River Tarnell r.tarnell@ieee.org wrote:
SSL certificates aren't that cheap, but only about 8 would be needed (one for each project, e.g. *.wikipedia.org), so the cost isn't prohibitive anymore.
You'd want two per project so that https://wikipedia.org/ works, right? Lots of sites fail at that, but it's lame: https://amazon.com/
On Sun, Feb 13, 2011 at 10:23 AM, Maury Markowitz maury.markowitz@gmail.com wrote:
Are there _no_ performance issues we should be concerned about here?
SSL adds an extra round-trip or two to each connection, and adds some server-side load. Currently we have much bigger client-side performance issues than this -- Resource Loader is a first stab at fixing some of the worst of those -- so I don't think we need to worry too much about it for now.
If enough users used SSL, the server-side computational load might be significant, compared to just serving stuff from Squid. (Google observed almost no load increase when enabling SSL by default for Gmail, but Gmail spends a lot of CPU cycles on each page anyway -- we usually spend almost no CPU for logged-out users, just serve the cached page from Squid's memory or disk.) But not many people will use it until we make it opt-out, so I don't think we have to worry about it for now.
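(The handshake overhead is easy to eyeball with curl's timing variables -- a sketch; the numbers will of course vary by network:)

  # time_connect is the TCP handshake; time_appconnect adds the SSL handshake.
  curl -s -o /dev/null -w 'tcp: %{time_connect}s  tcp+ssl: %{time_appconnect}s\n' \
    https://secure.wikimedia.org/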
I know local ISPs did (used to?) throttle all encrypted traffic. Would this fall into that category?
I'm not aware of any issue with this.
In article AANLkTikgDVs2zHMBzrd5dDkjsjadQVLmHjYpfjBhY+=n@mail.gmail.com, Aryeh Gregor Simetrical+wikilist@gmail.com wrote:
You'd want two per project so that https://wikipedia.org/ works, right? Lots of sites fail at that, but it's lame: https://amazon.com/
That's a good point, but there's no reason for it to be required... it really depends on whether a CA will issue an appropriate cert. A certificate that contains CN=*.wikipedia.org, subjectAltName:wikipedia.org would work fine. StartSSL does include the appropriate subjectAltName in their (non-wildcard) certs; RapidSSL does not. I don't have a wildcard StartSSL certificate around to check.
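(Checking a cert for that subjectAltName is a one-liner; a sketch, assuming a PEM file on disk:)

  # List the Subject Alternative Names in a certificate.
  openssl x509 -in cert.pem -noout -text | grep -A1 'Subject Alternative Name'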
On Sun, Feb 13, 2011 at 10:23 AM, Maury Markowitz maury.markowitz@gmail.com wrote:
I know local ISPs did (used to?) throttle all encrypted traffic. Would this fall into that category?
I'm not aware of any issue with this.
Not sure what "local" means (presumably USA? ;-) but I've never heard of this either -- which is not to say it doesn't happen, but there's a limit to how much ISP brokenness the WMF can reasonably work around.
- river.
On Sun, Feb 13, 2011 at 8:48 PM, Aryeh Gregor Simetrical+wikilist@gmail.com wrote:
SSL adds an extra round-trip or two to each connection, and adds some server-side load. Currently we have much bigger client-side performance issues than this -- Resource Loader is a first stab at fixing some of the worst of those -- so I don't think we need to worry too much about it for now.
I think most users use secure.wikimedia.org when they are behind an untrusted connection and don't want to reveal their password, their username, or the articles they are reading/editing. I once tried changing my default Wikipedia site to secure, but there were more performance issues, both client- and server-side, than with the unencrypted site, so I changed back. However, I would still prefer to send my login details over SSL whenever I need to re-login.
Currently, if you log in on secure you are not logged in on the unencrypted site, even if I allow setting third-party cookies in the browser settings. I assume the login session is common to both the unencrypted and encrypted sites, so would it be possible to transfer the session from secure.wikimedia.org? This way users could log in securely but choose to use the unencrypted site for normal tasks.
2011/2/13 Ville Stadista ville.stadista@gmail.com:
Currently, if you log in on secure you are not logged in on the unencrypted site, even if I allow setting third-party cookies in the browser settings. I assume the login session is common to both the unencrypted and encrypted sites, so would it be possible to transfer the session from secure.wikimedia.org? This way users could log in securely but choose to use the unencrypted site for normal tasks.
This is not a bug, it's a feature. If you were automatically logged in on the insecure sites when logging in on the secure site, someone could just trick you to visit wikipedia.org (e.g. by including an image from wikipedia.org on their web page, or through various other means) and your browser will happily send your session cookies to wikipedia.org unencrypted. If that someone happens to also be on the same public wifi and has Firesheep running, they can now hijack your login session.
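(The flip side is that cookies set on the secure site should carry the Secure attribute, so the browser never sends them in cleartext. A sketch of how one might check -- the URL is illustrative:)

  # Look for "secure" on the session cookies handed out over HTTPS.
  curl -sI 'https://secure.wikimedia.org/wikipedia/en/wiki/Special:UserLogin' |
    grep -i '^set-cookie'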
Roan Kattouw (Catrope)
Is that how Facebook™ or Google™ operate, sending every single component via HTTPS?
No. Only the vital personal settings, password stuff is done that way.
As for not letting people know what pages you are browsing, well, I don't know. Does Google™ offer a way to not let wiretapping people know what pages you are searching for? Probably. We Geritol™ Generation users aren't exactly sure, to tell you the truth.
Lots of people don't like to have their sessions stolen via Firesheep. That's one reason to do "all https all the time".
On Feb 16, 2011, at 10:21 AM, Brandon Harris wrote:
Lots of people don't like to have their sessions stolen via Firesheep. That's one reason to do "all https all the time".
I think y'all should chill and take the tone down from an argument to more of a discussion, where points of view back up their points with information.
Brandon, though perhaps a bit hostile in tone, is backed up by what many consider the best practice after Firesheep. See https://www.eff.org/pages/how-deploy-https-correctly for background and recommendations.
Now, in practice implementing this has challenges. I'm the lead developer on Kete, an open source Ruby on Rails app (http://kete.net.nz), and recently wanted to make the switch to fully HTTPS for a site and the Kete app when used with HTTPS.
I encountered the headache of mixed content warnings.
I found that using protocol-relative (//) links for the URLs I could control mostly did the trick, but external links were problematic. Specifically, the Google Maps API will answer over HTTPS, but delivers JavaScript with internal links that trigger the mixed content warning. The only workaround appeared to be to pay for premier service from Google.
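(Most of that was just grepping for hard-coded absolute URLs; a sketch, with illustrative Rails paths:)

  # Find views that still embed absolute http:// URLs for scripts or images.
  grep -rn 'src="http://' app/views/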
The organization running the site doesn't have the budget for this, is a non-profit and is using the Maps API non-commercially, and wants to continue to use the API. So...
On 2/15/11 1:09 PM, jidanni@jidanni.org wrote:
Is that how Facebook™ or Google™ operate, sending every single component via HTTPS?
No. Only the vital personal settings, password stuff is done that way.
I ended up falling back to the current "norm" as jidanni outlines. Not happy about it, but my client and my project make use of Maps extensively and it would have been a drag.
All this boils down to, yes full HTTPS is best practice, but if you make use of external APIs or services, it may be hard to achieve.
Cheers, Walter
----------------------------------------------------------------- Walter McGinnis Kete Project Lead (http://kete.net.nz) Katipo Communications, Ltd. (http://katipo.co.nz) http://twitter.com/wtem walter@katipo.co.nz
On Tue, Feb 15, 2011 at 2:06 PM, Ryan Lane rlane32@gmail.com wrote:
All this boils down to, yes full HTTPS is best practice, but if you make use of external APIs or services, it may be hard to achieve.
We don't use any external services for anything, I believe.
- Ryan Lane
There was some discussion on making secure.wikimedia.org full-https functional, as I recall, but it died out.
On Tue, Feb 15, 2011 at 4:36 PM, Walter McGinnis walter@katipo.co.nz wrote:
Now, in practice implementing this has challenges. I'm the lead developer on Kete, an open source Ruby on Rails app (http://kete.net.nz), and recently wanted to make the switch to fully HTTPS for a site and the Kete app when used with HTTPS.
I encountered the headache of mixed content warnings.
What problems does this present in practice? I notice Gmail sometimes serves mixed content without my browser complaining significantly. The UI changes a bit, but nothing worse than normal http:// UI.
All this boils down to, yes full HTTPS is best practice, but if you make use of external APIs or services, it may be hard to achieve.
Using an external API or service by including stuff from third-party sites would send users' IP addresses to those sites, which would violate Wikimedia's privacy policy, so this isn't an issue for us.
On 15 February 2011 15:43, Aryeh Gregor Simetrical+wikilist@gmail.com wrote:
Using an external API or service by including stuff from third-party sites would send users' IP addresses to those sites, which would violate Wikimedia's privacy policy, so this isn't an issue for us.
There are people loading javascript from toolserver: http://en.wikipedia.org/wiki/MediaWiki:Common.js/watchlist.js (I'm sure the toolserver can be fixed to make ssl work in this case, but it seems quite broken for https at the moment)
Conrad
On Feb 16, 2011, at 12:43 PM, Aryeh Gregor wrote:
What problems does this present in practice? I notice Gmail sometimes serves mixed content without my browser complaining significantly. The UI changes a bit, but nothing worse than normal http:// UI.
Many versions of Internet Explorer will throw up a dialog box with a warning.
All this boils down to, yes full HTTPS is best practice, but if you make use of external APIs or services, it may be hard to achieve.
Using an external API or service by including stuff from third-party sites would send users' IP addresses to those sites, which would violate Wikimedia's privacy policy, so this isn't an issue for us.
Fair enough. Every situation is different. As I had recently attempted to go full HTTPS with a project, I thought I would share my experience of what it takes in practice.
Cheers, Walter
Walter McGinnis wrote:
Well, as someone else somewhat noted in this thread, Aryeh isn't completely correct. The Toolserver has external APIs and services that are used via JavaScript from Wikimedia wikis. More information is available about the Toolserver here: https://wiki.toolserver.org/view/FAQ.
I appreciate you sharing your experience. Part of the value of this list is learning how others have implemented solutions, including understanding what worked well, what didn't, and why.
MZMcBride
On Tue, Feb 15, 2011 at 9:35 PM, MZMcBride z@mzmcbride.com wrote:
Well, as someone else somewhat noted in this thread, Aryeh isn't completely correct. The Toolserver has external APIs and services that are used via JavaScript from Wikimedia wikis. More information is available about the Toolserver here: https://wiki.toolserver.org/view/FAQ.
I had the toolserver in mind when I worded my post. It's run by Wikimedia Deutschland, which for our purposes is *not* an external site. If working HTTPS for everything on the toolserver were needed, we could arrange that easily.
I appreciate you sharing your experience. Part of the resourcefulness of this list is learning how others have implemented solutions, including understanding what worked well and what didn't and why.
Seconded.
On Wed, Feb 16, 2011 at 5:26 PM, Platonides Platonides@gmail.com wrote:
Wouldn't each page view mean a connection, and a ssl handshake? Or are you thinking on keep-alives?
As I understand it, both clients and servers will cache TLS handshakes across connections, because they're so expensive. TLS has the notion of sessions, and allows resuming from a session if both parties remember the shared secret from that session. I have no idea how good the cache hit rate is in practice. I doubt it would last thirty days, which is how often most regular users presumably log in, but I'd be surprised if it didn't last at least the length of a browsing session.
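(openssl can demonstrate resumption directly; a sketch -- the -reconnect flag redoes the connection five times with the same session ID:)

  # Resumed reconnects show up as "Reused," lines; full handshakes as "New,".
  echo | openssl s_client -connect secure.wikimedia.org:443 -reconnect 2>/dev/null |
    grep -E '^(New|Reused),'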
On 15 February 2011 22:09, jidanni@jidanni.org wrote:
Is that how Facebook™ or Google™ operate, sending every single component via HTTPS?
I can't really understand your email. As for having the ability to read Wikipedia through an HTTPS connection: I suppose it's useful when you don't want to broadcast your intentions to the world in cleartext.
Here's a map: http://en.wikipedia.org/wiki/File:World_homosexuality_laws.svg Homosexuality is illegal in the red and dark red areas. So here's a use case: a gay person in one of those countries searching for general information about homosexuality who doesn't want to broadcast their navigation through those links in cleartext.
But it can be for any reason, or none. Maybe some people are "communication sensitive" and just don't want to use cleartext, full stop. So here's a feature some people would want to use.
On Tue, Feb 15, 2011 at 1:09 PM, jidanni@jidanni.org wrote:
Is that how Facebook™ or Google™ operate, sending every single component via HTTPS?
No. Only the vital personal settings, password stuff is done that way.
As a point of counterargument:
My current mail session is https://mail.google.com/mail/?(deleted)
I have no mixed-mode warnings on the page.
So - Google *does* in fact do this for you, if you want, for apps which are sensitive enough.
On Tue, Feb 15, 2011 at 1:09 PM, jidanni@jidanni.org wrote:
Is that how Facebook™ or Google™ operate, sending every single component via HTTPS?
No. Only the vital personal settings, password stuff is done that way.
Actually, there's now a setting in Facebook to use https on every single page request (Account Settings | Account Security | Secure Browsing (https)), where there's a check box that says: "Browse Facebook on a secure connection (https) whenever possible".
Mark W.
On 15 February 2011 21:09, jidanni@jidanni.org wrote:
Is that how Facebook™ or Google™ operate, sending every single component via HTTPS?
No. Only the vital personal settings, password stuff is done that way.
Ok, so offering HTTPS for everything isn't essential. What harm does it do, though?
2011/2/15 Thomas Dalton thomas.dalton@gmail.com:
Ok, so offering HTTPS for everything isn't essential. What harm does it do, though?
Exactly. We're not gonna force users to use HTTPS for everything, but we should at least offer the possibility to those who want it.
Roan Kattouw (Catrope)
----- Original Message -----
From: "Thomas Dalton" thomas.dalton@gmail.com
Ok, so offering HTTPS for everything isn't essential. What harm does it do, though?
It imposes on your server cluster some requirements -- and some load -- with which it would otherwise not have to deal.
Cheers, -- jra
On 16/02/11 05:50, Jay Ashworth wrote:
Ok, so offering HTTPS for everything isn't essential. What harm does it do, though?
it imposes on your server cluster some requirements -- and some load -- with which it would otherwise not have to deal.
I think most load balancer appliances around also handle SSL off-loading.
On Tue, Feb 15, 2011 at 11:25 PM, Ashar Voultoiz hashar+wmf@free.fr wrote:
I think most load balancer appliances around also handle SSL off-loading.
Not particularly relevant to the Wikimedia Foundation (which is using Squid and will probably switch to another open source software web load balancer in the future). But relevant for many/most commercial organizations running large websites.
Not particularly relevant to the Wikimedia Foundation (which is using Squid and will probably switch to another open source software web load balancer in the future). But relevant for many/most commercial organizations running large websites.
We use LVS w/ pybal for load balancing, not Squid; otherwise, you are right. We don't use hardware load balancers, and likely will not in the foreseeable future.
To be a little more on subject: I don't think there'll be a major difference between using SSL just for login and using it for all logged-in user actions. One of the more expensive parts of https is setting up the connection. If we're logging users in over SSL, we're already doing the more expensive part, so why not continue on and make the site truly secure?
But really, I should have just ignored this thread since it was mostly a troll.
- Ryan Lane
Ryan Lane wrote:
To be a little more on subject: I don't think there'll be a major difference between using SSL just for login and using it for all logged-in user actions. One of the more expensive parts of https is setting up the connection. If we're logging users in over SSL, we're already doing the more expensive part, so why not continue on and make the site truly secure?
Wouldn't each page view mean a new connection, and an SSL handshake? Or are you thinking of keep-alives?
Wouldn't each page view mean a new connection, and an SSL handshake? Or are you thinking of keep-alives?
No. Once a connection is made, requests from the same user should go through the same connection.
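(curl makes the reuse visible when fetching two URLs in one invocation -- a sketch:)

  # The second request should log "Re-using existing connection".
  curl -sv -o /dev/null -o /dev/null \
    'https://secure.wikimedia.org/' 'https://secure.wikimedia.org/' 2>&1 |
    grep -i 're-using'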
- Ryan
----- Original Message -----
From: jidanni@jidanni.org
Is that how Facebook™ or Google™ operate, sending every single component via HTTPS?
No. Only the vital personal settings, password stuff is done that way.
Wrong.
Both Google and Facebook will be perfectly happy to let you conduct your entire session in https, these days.
As for not letting people know what pages you are browsing, well, I don't know. Does Google™ offer a way to not let wiretapping people know what pages you are searching for? Probably. We Geritol™ Generation users aren't exactly sure, to tell you the truth.
Quit trolling, dude.
Cheers, -- jra
jidanni@jidanni.org wrote:
Is that how Facebook™ or Google™ operate, sending every single component via HTTPS?
No. Only the vital personal settings, password stuff is done that way.
That would mean sending normal pages over http but pretty much all /w/index.php URLs over https. Seems odd at first, but it could work. Another option would be to digitally sign every token, which is more geek :)
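(Signing could be as simple as an HMAC over the token; a sketch with openssl, where both the token and the secret are of course hypothetical:)

  # The server would hand out token + MAC and verify the pair on submit.
  echo -n 'edit-token-1234' | openssl dgst -sha1 -hmac 'server-side-secret'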
As for not letting people know what pages you are browsing, well, I don't know. Does Google™ offer a way to not let wiretapping people know what pages you are searching for? Probably. We Geritol™ Generation users aren't exactly sure, to tell you the truth.
It does (https://www.google.com), so you only share your searches with Google Brother.