On Friday 09 July 2004 23:55, Timwi wrote:
Has anyone thought about a secure web implementation? If HTTPS were offered, I (and I am sure others too) would prefer it. :)
Do you mean the entire site? Do you mean you want to be able to submit edits completely anonymously? I can understand someone wanting complete anonymity if they're editing a controversial or sensitive topic, but isn't HTTPS a little bit overkill?
Well, sniffing passwords and user accounts is not so difficult, and I am often on insecure networks with my notebook. It is not about sensitive topics - it is about the security of the accounts.
Best regards,
da didi
Michael Diederich wrote:
On Friday 09 July 2004 23:55, Timwi wrote:
Has anyone thought about a secure web implementation? If HTTPS were offered, I (and I am sure others too) would prefer it. :)
Do you mean the entire site? Do you mean you want to be able to submit edits completely anonymously? I can understand someone wanting complete anonymity if they're editing a controversial or sensitive topic, but isn't HTTPS a little bit overkill?
Well, sniffing passwords and user accounts is not so difficult, and I am often on insecure networks with my notebook. It is not about sensitive topics - it is about the security of the accounts.
Are you asking about Wikipedia specifically, or about MediaWiki in general?
MediaWiki in general should work fine over HTTPS. If it doesn't, please send patches.
For Wikipedia, we briefly discussed the possibility a couple years ago but were stymied by the nasty virtual server problem: basically, HTTPS and name-based virtual servers don't mix.
In order to determine which hostname & configuration to use, the web server needs the Host: header sent by the client. BUT, before we get there an encrypted connection has to be set up. BUT, the certificate is verified based on the hostname. BUT, we don't know which hostname to use yet.
D'oh!
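For the curious, the ordering is visible right at the socket level. A minimal Python sketch (using the modern ssl module and en.wikipedia.org purely as an illustrative host): the handshake, and therefore the certificate, comes strictly before any HTTP bytes.

import socket
import ssl

HOST = "en.wikipedia.org"  # illustrative hostname only

context = ssl.create_default_context()
raw = socket.create_connection((HOST, 443))

# The TLS handshake runs here: the server must have already chosen
# which certificate to present, and the client verifies it against
# the hostname -- all before a single byte of HTTP is exchanged.
conn = context.wrap_socket(raw, server_hostname=HOST)

# Only now does the client send HTTP, including the Host: header a
# name-based virtual server would have needed to pick a certificate.
conn.sendall(b"GET / HTTP/1.1\r\n"
             b"Host: en.wikipedia.org\r\n"
             b"Connection: close\r\n\r\n")
print(conn.recv(300))
conn.close()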
A possible way around this is to rearrange everything to different paths on a single hostname, but this could be a big pain in the ass. Further, maintaining two different sets of paths or URLs might be a problem for [parser] cache consistency.
Additionally there's the issue that any HTTPS access won't be cached at the squid level (or perhaps even the client level); if we restrict this to logins only (perhaps even optionally) then this oughtn't to impact performance too much.
Further there's the certificate issue; would we be content with a self-signed certificate (BIG WARNINGS in your browser every time you login) or will we spend the foundation's money for a big fancy corporation's stamp of approval?
-- brion vibber (brion @ pobox.com)
On Sat, 10 Jul 2004 12:44:20 -0700, Brion Vibber brion@pobox.com wrote:
For Wikipedia, we briefly discussed the possibility a couple years ago but were stymied by the nasty virtual server problem: basically, HTTPS and name-based virtual servers don't mix.
HTTPS and IP-based virtualhosts work just fine, however. TLS also works with name-based virtualhosts (although it isn't supported in all browsers).
Additionally there's the issue that any HTTPS access won't be cached at the squid level (or perhaps even the client level); if we restrict this to logins only (perhaps even optionally) then this oughtn't to impact performance too much.
Can't squid be reconfigured to handle the SSL portion itself? In other words, can it simply treat all requests to the backend as if they were HTTP, and simply serve out cached/fresh copies of pages via SSL? I understand why squid can't cache pages that IT has to retrieve via SSL, but that's not the case here.
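To make the idea concrete, here's a rough Python sketch of such a TLS-terminating front end (hypothetical certificate paths and backend address; this illustrates the concept, not squid's actual capabilities). Clients speak HTTPS to the front end; it speaks plain HTTP to the backend, which can therefore serve cached copies as usual.

import socket
import ssl
import threading

BACKEND = ("127.0.0.1", 80)             # plain-HTTP backend (assumed)
CERT, KEY = "server.crt", "server.key"  # assumed certificate paths

def pipe(src, dst):
    # Shovel bytes one way until the peer closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def handle(tls_client):
    # Decrypt on this side, talk plain HTTP to the backend: the
    # backend caches and serves exactly as it does for plain HTTP.
    backend = socket.create_connection(BACKEND)
    threading.Thread(target=pipe, args=(tls_client, backend), daemon=True).start()
    pipe(backend, tls_client)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(CERT, KEY)

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("", 443))
listener.listen(16)

while True:
    conn, _ = listener.accept()
    try:
        tls = context.wrap_socket(conn, server_side=True)
    except ssl.SSLError:
        conn.close()
        continue
    threading.Thread(target=handle, args=(tls,), daemon=True).start()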
That said, I tend to think that only logins really need to be secure anyway.
Further there's the certificate issue; would we be content with a self-signed certificate (BIG WARNINGS in your browser every time you login) or will we spend the foundation's money for a big fancy corporation's stamp of approval?
You can pick up "wildcard" certificates (i.e. *.wikipedia.org) for only a few hundred bucks from some providers. Most big e-commerce companies shell out the big bucks for one from Verisign for brand recognition, but since that's not a concern here, one of the cheaper providers should suffice.
Even if we wanted to go the free self-signed route, it's possible to accept the certificate as valid so that it doesn't give a warning every time, only the first time.
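A script can make that "accept once" behaviour explicit by pinning the certificate on first use. A small Python sketch (hypothetical host and cache-file path; note real fingerprints are usually computed over the DER encoding, hashing the PEM here is just for brevity):

import hashlib
import os
import ssl

HOST, PORT = "secure.wikimedia.org", 443         # hypothetical host
CACHE = os.path.expanduser("~/.wikipedia_cert")  # hypothetical path

# Fetch the server's certificate as PEM text and fingerprint it.
pem = ssl.get_server_certificate((HOST, PORT))
fingerprint = hashlib.sha256(pem.encode()).hexdigest()

if not os.path.exists(CACHE):
    # First contact: trust-on-first-use, the scripted equivalent of
    # clicking "accept" on the browser warning exactly once.
    with open(CACHE, "w") as f:
        f.write(fingerprint)
    print("pinned new certificate:", fingerprint)
elif open(CACHE).read().strip() == fingerprint:
    print("certificate matches the pinned copy; no warning needed")
else:
    raise SystemExit("certificate changed since it was pinned!")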
-Bill Clark
Bill Clark wrote:
On Sat, 10 Jul 2004 12:44:20 -0700, Brion Vibber brion@pobox.com wrote:
For Wikipedia, we briefly discussed the possibility a couple years ago but were stymied by the nasty virtual server problem: basically, HTTPS and name-based virtual servers don't mix.
HTTPS and IP-based virtualhosts work just fine, however.
We have over 300 wikis, each with a virtual subdomain. Each "major" project which supports all languages will add about 150 wikis: right now that's Wikipedia and Wiktionary.
Our IP subnet is a /27, with 32 addresses available. Between Wikimedia's machines, a few secondary IPs for failover of the squids, and a few Bomis boxes, it's pretty near full. I don't know what it would cost to secure 300 more IP addresses, but that's not a sustainable route...
TLS also works with name-based virtualhosts (although it isn't supported in all browsers).
Can you give some pointers on setting this up with an Apache server, and providing a sane failure mode for clients that don't support it?
Can't squid be reconfigured to handle the SSL portion itself? In other words, can it simply treat all requests to the backend as if they were HTTP, and simply serve out cached/fresh copies of pages via SSL?
I don't know, can it?
That said, I tend to think that only logins really need to be secure anyway.
Right.
-- brion vibber (brion @ pobox.com)
On Sat, 10 Jul 2004 13:10:38 -0700, Brion Vibber brion@pobox.com wrote:
We have over 300 wikis, each with a virtual subdomain. Each "major" project which supports all languages will add about 150 wikis: right now that's Wikipedia and Wiktionary.
Oh right, I forgot that every language has its own subdomain.
Yeah, I guess IP-based virtualhosts aren't really an option then, are they? :)
Can you give some pointers on setting this up with an Apache server, and providing a sane failure mode for clients that don't support it?
I've never actually used TLS myself, but this seems as good an excuse as any to look into it. I'll get back to you on this.
Can't squid be reconfigured to handle the SSL portion itself? In other words, can it simply treat all requests to the backend as if they were HTTP, and simply serve out cached/fresh copies of pages via SSL?
I don't know, can it?
I'm not sure, and honestly I look for any excuse I can NOT to play with squid. IMHO that software is simply too flaky for production use, and I'm frankly astonished you have it working as well as you (apparently) do. When it works, it's great... but when it doesn't...
I watched squid flat-out lie to me about checking for more recent copies of a requested file once. I was sitting there watching the webserver logfile and squid logs simultaneously, and squid claimed it contacted the webserver when I could see for myself that it was completely full of shit. I uninstalled it immediately afterwards.
I play with squid every couple years, hoping that it will surprise me with how stable and reliable it's become... but I keep finding myself disappointed instead.
-Bill Clark
Bill Clark wrote:
On Sat, 10 Jul 2004 13:10:38 -0700, Brion Vibber brion@pobox.com wrote:
Can you give some pointers on setting this up with an Apache server, and providing a sane failure mode for clients that don't support it?
I've never actually used TLS myself, but this seems as good an excuse as any to look into it. I'll get back to you on this.
Cool, thanks.
-- brion vibber (brion @ pobox.com)
On Sat, 10 Jul 2004 16:23:43 -0400, Bill Clark wclarkxoom@gmail.com wrote:
I've never actually used TLS myself, but this seems as good an excuse as any to look into it. I'll get back to you on this.
Looks like I was wrong.
RFC 2817 claims that TLS should be capable of doing name-based virtualhosts:
http://www.faqs.org/rfcs/rfc2817.html
This has been implemented in Apache 2.x, but from what I've been able to find so far, it's not currently supported by any browsers. Those browsers that currently have TLS support don't have the "Upgrade TLS" option, which is what's necessary for name-based virtualhosting to work over SSL. (Basically, the initial handshake takes place over a standard HTTP connection so that Host information and such can be sent, and THEN the connection is upgraded to TLS for the actual request transfer.)
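For reference, the client side of that upgrade dance would look roughly like this (a Python sketch assuming a server that actually honours RFC 2817's Upgrade header, which, as noted, no browser sends):

import socket
import ssl

HOST = "de.wikipedia.org"  # illustrative name-based virtual host

sock = socket.create_connection((HOST, 80))  # plain HTTP at first

# Step 1: ask to upgrade over ordinary HTTP. The Host: header is in
# the clear, so the server knows which virtual host (and therefore
# which certificate) is wanted.
sock.sendall(b"OPTIONS * HTTP/1.1\r\n"
             b"Host: de.wikipedia.org\r\n"
             b"Upgrade: TLS/1.0\r\n"
             b"Connection: Upgrade\r\n\r\n")

# Step 2: the server signals agreement with 101 Switching Protocols.
reply = sock.recv(4096)
if not reply.startswith(b"HTTP/1.1 101"):
    raise SystemExit("server refused the TLS upgrade: %r" % reply[:60])

# Step 3: only now does the TLS handshake run, on the same
# connection, using the certificate matching the requested host.
context = ssl.create_default_context()
tls = context.wrap_socket(sock, server_hostname=HOST)
tls.sendall(b"GET / HTTP/1.1\r\n"
            b"Host: de.wikipedia.org\r\n"
            b"Connection: close\r\n\r\n")
print(tls.recv(300))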
So this doesn't appear to be an option (yet).
-Bill Clark
Thanks for the reply. I am not a technical person, but if I understood Bill correctly, it is possible. I only know this:
On Saturday 10 July 2004 21:44, Brion Vibber wrote:
Further there's the certificate issue; would we be content with a self-signed certificate (BIG WARNINGS in your browser every time you login) or will we spend the foundation's money for a big fancy corporation's stamp of approval?
The German Chaos Computer Club (CCC) signs SSL certificates for free.
Best regards,
da didi
On Sat, 10 Jul 2004 22:04:09 +0200, Michael Diederich dev-null@md-d.org wrote:
The German Chaos Computer Club (CCC) signs SSL certificates for free.
Yes, but the CCC probably isn't listed as a trusted signing authority in most browsers. We'd get a slightly different warning message that way, but there'd still be a warning message.
-Bill Clark
On Sat, 10 Jul 2004 12:44:20 -0700, Brion Vibber brion@pobox.com wrote:
Further there's the certificate issue; would we be content with a self-signed certificate (BIG WARNINGS in your browser every time you login) or will we spend the foundation's money for a big fancy corporation's stamp of approval?
Though arguably, those who will want to use the optional HTTPS login will be prepared to install the Wikimedia SSL certificate the first time they connect.
Having said that, Domas Mituzas' proposal sounded interesting.
Cheers, Philip