Hi, I'm a grad student at CMU studying network security in general and censorship / surveillance resistance in particular. I also used to work for Mozilla, some of you may remember me in that capacity. My friend Sumana Harihareswara asked me to comment on Wikimedia's plans for hardening the encyclopedia against state surveillance. I've read the discussion to date on this subject, but it was kinda all over the map, so I thought it would be better to start a new thread. Actually I'm going to start two threads, one for general site hardening and one specifically about traffic analysis. This is the one about site hardening, which should happen first. Please note that I am subscribed to wikitech-l but not wikimedia-l (but I have read the discussion over there).
The roadmap at https://blog.wikimedia.org/2013/08/01/future-https-wikimedia-projects/ looks to me to have the right shape, but there are some missing things and points of confusion.
The first step really must be to enable HTTPS unconditionally for everyone (whether or not logged in). I see on the roadmap that there is concern that this will lock out large groups of users, e.g. from China; a workaround simply *must* be found for this. Everything else that is worth doing is rendered ineffective if *any* application layer data is *ever* transmitted over an insecure channel. There is no point worrying about traffic analysis when an active man-in-the-middle can inject malicious JavaScript into unsecured pages, or a passive one can steal session cookies as they fly by in cleartext.
As part of the engineering effort to turn on TLS for everyone, you should also provide SPDY, or whatever they're calling it these days. It's valuable not only for traffic analysis' sake, but because it offers server-side efficiency gains that (in theory anyway) should mitigate the TLS overhead somewhat.
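For illustration, a client-side check of which application protocol a server will negotiate might look like the sketch below (Python stdlib; SPDY was historically negotiated via NPN rather than ALPN, so treat this as an approximation, and the hostname is just an example):

    # Sketch: ask a TLS server which application protocol it will speak.
    # Stdlib only; "en.wikipedia.org" is just an example host.
    import socket, ssl

    def negotiated_protocol(host, protocols=("spdy/3.1", "http/1.1")):
        ctx = ssl.create_default_context()
        ctx.set_alpn_protocols(list(protocols))       # offer SPDY and HTTP/1.1
        with socket.create_connection((host, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.selected_alpn_protocol()   # None if ALPN not used

    if __name__ == "__main__":
        print(negotiated_protocol("en.wikipedia.org"))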
After that's done, there's a grab bag of additional security refinements that are deployable immediately or with minimal-to-moderate engineering effort. The roadmap mentions HTTP Strict Transport Security; that should definitely happen. All cookies should be tagged both Secure and HttpOnly (which renders them inaccessible to accidental HTTP loads and to page JavaScript); now would also be a good time to prune your cookie requirements, ideally to just one which does not reveal via inspection whether or not someone is logged in. You should also do Content-Security-Policy, as strict as possible. I know this can be a huge amount of development effort, but the benefits are equally huge - we don't know exactly how it was done, but there's an excellent chance CSP on the hidden service would have prevented the exploit discussed here: https://blog.torproject.org/blog/hidden-services-current-events-and-freedom-hosting
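To make the header checklist concrete, a rough client-side audit sketch might look like this (Python stdlib only; the host and path are just examples, and the header names are the standard ones):

    # Sketch: fetch a page over HTTPS and report on the hardening headers
    # discussed above. Stdlib only; host/path are examples.
    import http.client

    def check_headers(host="en.wikipedia.org", path="/"):
        conn = http.client.HTTPSConnection(host, timeout=10)
        conn.request("GET", path, headers={"User-Agent": "header-check-sketch"})
        resp = conn.getresponse()
        headers = resp.getheaders()                   # list of (name, value)
        def find(name):
            return [v for k, v in headers if k.lower() == name]
        print("Strict-Transport-Security:", find("strict-transport-security") or "missing")
        print("Content-Security-Policy:  ", find("content-security-policy") or "missing")
        for cookie in find("set-cookie"):
            flags = cookie.lower()
            ok = "secure" in flags and "httponly" in flags
            print("cookie", cookie.split(";")[0], "OK" if ok else "MISSING Secure/HttpOnly")
        conn.close()

    if __name__ == "__main__":
        check_headers()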
Several people raised concerns about Wikimedia's certificate authority becoming compromised (whether by traditional "hacking", social engineering, or government coercion). The best available cure for this is called "certificate pinning", which is unfortunately only doable by talking to browser vendors right now; however, I imagine they would be happy to apply pins for Wikipedia. There's been some discussion of an HSTS extension that would apply a pin (http://tools.ietf.org/html/draft-evans-palmer-key-pinning-00) and it's also theoretically doable via DANE (http://tools.ietf.org/html/rfc6698); however, AFAIK no one implements either of these things yet, and I rate it moderately likely that DANE is broken-as-specified. DANE requires DNSSEC, which is worth implementing for its own sake (it appears that the wikipedia.org. and wikimedia.org. zones are not currently signed).
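For illustration, the pinning idea on the client side boils down to something like the sketch below; note that it pins a hash of the whole leaf certificate for simplicity, whereas the draft pins the SubjectPublicKeyInfo, and the fingerprint shown is a placeholder rather than Wikipedia's real one:

    # Sketch: refuse a TLS connection unless the leaf certificate matches a
    # known fingerprint. The draft pins hashes of the SubjectPublicKeyInfo;
    # hashing the whole DER certificate is a simpler stand-in here, and the
    # PINNED value is a placeholder, not a real fingerprint.
    import hashlib, socket, ssl

    PINNED = {"0000000000000000000000000000000000000000000000000000000000000000"}

    def connect_pinned(host):
        ctx = ssl.create_default_context()        # normal CA checks still apply
        with socket.create_connection((host, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
                fingerprint = hashlib.sha256(der).hexdigest()
                if fingerprint not in PINNED:
                    raise ssl.SSLError("certificate %s is not pinned" % fingerprint)
                # A real client would keep the connection open; the sketch
                # just reports the accepted fingerprint.
                return fingerprint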
Perfect forward secrecy should also be considered at this stage. Folks seem to be confused about what PFS is good for. It is *complementary* to traffic analysis resistance, but it's not useless in the absence of it. What it does is provide defense in depth against a server compromise by a well-heeled entity who has been logging traffic *contents*. If you don't have PFS and the server is compromised, *all* traffic going back potentially for years is decryptable, including cleartext passwords and other equally valuable info. If you do have PFS, the exposure is limited to the session rollover interval. Browsers are fairly aggressively moving away from non-PFS ciphersuites (see https://briansmith.org/browser-ciphersuites-01.html; all of the non-"deprecated" suites are PFS).
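A quick way to see whether a particular connection ended up on a forward-secret suite is to look at the negotiated cipher name for an ephemeral (EC)DHE key exchange; a minimal sketch (Python stdlib, example hostname):

    # Sketch: report the negotiated ciphersuite and whether it uses an
    # ephemeral (forward-secret) key exchange. Example hostname.
    import socket, ssl

    def forward_secret(host="en.wikipedia.org"):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                name, version, bits = tls.cipher()
        # TLS 1.3 suites don't name the key exchange but are always ephemeral.
        pfs = version == "TLSv1.3" or name.startswith(("ECDHE", "DHE"))
        print(name, version, bits, "-> PFS" if pfs else "-> no forward secrecy")
        return pfs

    if __name__ == "__main__":
        forward_secret()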
Finally, consider paring back the set of ciphersuites accepted by your servers. Hopefully we will soon be able to ditch TLS 1.0 entirely (all of its ciphersuites have at least one serious flaw). Again, see https://briansmith.org/browser-ciphersuites-01.html for the current thinking from the browser side.
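On the server side this amounts to handing the TLS library a restrictive cipher string and a protocol floor; the sketch below shows the shape of such a configuration, with an illustrative (not vetted) cipher string and placeholder certificate paths:

    # Sketch: a server-side TLS context limited to forward-secret AEAD suites.
    # The cipher string is illustrative, not a vetted recommendation, and the
    # cert/key paths are placeholders. Needs a reasonably recent Python/OpenSSL.
    import ssl

    def make_server_context(certfile="cert.pem", keyfile="key.pem"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2      # drop TLS 1.0/1.1
        ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!MD5:!RC4")
        ctx.load_cert_chain(certfile, keyfile)            # placeholder paths
        return ctx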
zw
I hope I'm not being rude, but everything you've suggested here has already been discussed. HTTPS and HSTS are already on the schedule once the China workaround is invented. Cookies are already tagged as Secure and HttpOnly when over HTTPS. Both the certificate pinning draft and the DANE RFC have been discussed for implementation.
With that said, I think the real problem with Wikimedia's security right now is a pretty big failure on the part of the operations team to inform anybody as to what the hell is going on. Why hasn't the TLS cipher list been updated? Why are we still using RC4 even though it's obviously a terrible option? Why isn't Wikimedia using DNSSEC, let alone DANE? I'm sure the operations team is doing quite a bit of work, but all of these things should be trivial configuration changes, and if they aren't, the community should be told why so we know why these changes haven't been applied yet.
Can somebody from ops comment on this? Or do I have to sign up for yet another mailing list to find this out?
-- Tyler Romeo | Stevens Institute of Technology, Class of 2016 | Major in Computer Science | www.whizkidztech.com | tylerromeo@gmail.com
On Fri, Aug 16, 2013 at 8:04 PM, Zack Weinberg zackw@cmu.edu wrote:
<snip>
On Fri, Aug 16, 2013 at 8:17 PM, Tyler Romeo tylerromeo@gmail.com wrote:
With that said, I think the real problem with Wikimedia's security right now is a pretty big failure on the part of the operations team to inform anybody as to what the hell is going on. Why hasn't the TLS cipher list been updated? Why are we still using RC4 even though it's obviously a terrible option? Why isn't Wikimedia using DNSSEC, let alone DANE? I'm sure the operations team is doing quite a bit of work, but all of these things should be trivial configuration changes, and if they aren't, the community should be told why so we know why these changes haven't been applied yet.
I believe the short answer to your questions is: because Wikipedia lives in the real world. Some of the trivial changes you describe would make the site inoperable for large numbers of users. Even switching to HTTPS-only potentially locks out a billion or so people.
That said, I'm not part of the operations team either so I can't answer definitively. I agree that it would probably be useful to have more formal progress reporting. "Can't disable RC4 in the cipher suite until more than N% of our readers are using <a set of known good browsers>" for example. There has been discussion elsewhere on wmf lists about metrics reporting. Once the blockers were quantified, it would be easier for interested people to 'count the days' until greater security could be enforced, or to bring pressure to bear on upstream providers (of the chrome browser, of DNS root zones, etc) where security fixes are needed.
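To make the "once the blockers were quantified" idea concrete, the math is just a weighted sum over client share; the numbers below are invented placeholders, not real Wikimedia metrics:

    # Sketch: how one might quantify a blocker like "can't disable RC4 until
    # enough readers are on known-good browsers". All shares are invented
    # placeholders, not real metrics.
    client_share = {
        "tls12_capable": 0.62,    # fine if RC4 is dropped
        "needs_rc4":     0.05,    # would break
        "other_legacy":  0.33,    # would also break
    }
    THRESHOLD = 0.01              # tolerate breaking at most 1% of readers

    broken = sum(share for name, share in client_share.items()
                 if name != "tls12_capable")
    print("would break %.1f%% of readers; %s" %
          (broken * 100, "go ahead" if broken <= THRESHOLD else "wait"))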
Zack: what would probably be useful is to compile your list of suggestions into a number of specific issues in bugzilla (enable DNS sec, disable RC4, etc). Some of these probably already exist in bugzilla, they should be uncovered. A tracking bug or wiki page could collect all the different bugzilla tickets for anyone who wants the big picture. --scott
On Fri, Aug 16, 2013 at 9:25 PM, C. Scott Ananian cananian@wikimedia.org wrote:
That said, I'm not part of the operations team either so I can't answer definitively. I agree that it would probably be useful to have more formal progress reporting. "Can't disable RC4 in the cipher suite until more than N% of our readers are using <a set of known good browsers>" for example. There has been discussion elsewhere on wmf lists about metrics reporting. Once the blockers were quantified, it would be easier for interested people to 'count the days' until greater security could be enforced, or to bring pressure to bear on upstream providers (of the chrome browser, of DNS root zones, etc) where security fixes are needed.
To be fair, I'm really only talking about non-restrictive changes. For example, right now we *only* have RC4. Rather than disable RC4 (which would have consequences), I'm saying why haven't other normal ciphers been enabled? I don't foresee us doing anything like "all HTTPS for everybody" anytime in the near future.
-- Tyler Romeo | Stevens Institute of Technology, Class of 2016 | Major in Computer Science | www.whizkidztech.com | tylerromeo@gmail.com
On Fri, Aug 16, 2013 at 9:47 PM, Tyler Romeo tylerromeo@gmail.com wrote:
To be fair, I'm really only talking about non-restrictive changes. For example, right now we *only* have RC4. Rather than disable RC4 (which would have consequences), I'm saying why haven't other normal ciphers been enabled?
Because the other TLS 1.0 ciphers are *even worse*. https://community.qualys.com/blogs/securitylabs/2013/03/19/rc4-in-tls-is-bro...
I believe the solution is to enable TLS 1.2, which has been discussed before and is on the roadmap AFAIK. --scott
On Fri, Aug 16, 2013 at 9:59 PM, C. Scott Ananian cananian@wikimedia.org wrote:
Because the other TLS 1.0 ciphers are *even worse*.
https://community.qualys.com/blogs/securitylabs/2013/03/19/rc4-in-tls-is-bro...
...except they're not (in all major browsers and the latest stable openssl and gnutls implementations).
https://bugzilla.mozilla.org/show_bug.cgi?id=665814
-- Tyler Romeo | Stevens Institute of Technology, Class of 2016 | Major in Computer Science | www.whizkidztech.com | tylerromeo@gmail.com
Please read the link I provided more carefully. Apple devices and browsers are still vulnerable. --scott

On Aug 16, 2013 10:14 PM, "Tyler Romeo" tylerromeo@gmail.com wrote:
<snip>
On Sat, Aug 17, 2013 at 10:13 AM, Tyler Romeo tylerromeo@gmail.com wrote:
On Fri, Aug 16, 2013 at 9:59 PM, C. Scott Ananian cananian@wikimedia.org wrote:
Because the other TLS 1.0 ciphers are *even worse*.
https://community.qualys.com/blogs/securitylabs/2013/03/19/rc4-in-tls-is-bro...
...except they're not (in all major browsers and the latest stable openssl and gnutls implementations).
I can't tell if your emails are trolling us or not, but you're being pretty aggressive. Things take time and you're oversimplifying issues. It's better to be calm and rational when implementing stuff like this.
I mentioned on wikimedia-l that I'd be enabling GCM ciphers relatively soon. You even opened a bug after I mentioned it. I didn't get a chance at Wikimania to do it and I'm currently on vacation. They'll be enabled when I get back on Monday or Tuesday.
We released a blog post about our plans and are having an ops meeting about this next week. We'll update https://wikitech.wikimedia.org/wiki/Https when we've more firmly set our plans.
To this specific email's point, though: RC4 still protects against BEAST for browsers that will always be vulnerable, and those that aren't will support TLS 1.2 soon enough (which is the correct solution). Let's not make old browsers vulnerable to make newer browsers slightly more secure for a short period of time.
- Ryan
On Sat, Aug 17, 2013 at 12:47 PM, Faidon Liambotis faidon@wikimedia.org wrote:
On Fri, Aug 16, 2013 at 08:04:24PM -0400, Zack Weinberg wrote:
Hi, I'm a grad student at CMU studying network security in general and censorship / surveillance resistance in particular. I also used to work for Mozilla, some of you may remember me in that capacity. My friend Sumana Harihareswara asked me to comment on Wikimedia's plans for hardening the encyclopedia against state surveillance.
<snip>
First of all, thanks for your input. It's much appreciated. As I'm sure Sumanah has already mentioned, all of our infrastructure is being developed in the open using free software and we'd be also very happy to accept contributions in code/infrastructure-as-code as well.
hi faidon, i do not think you personally, or WMF, are particularly helpful in accepting contributions, because you: * do not communicate the problems openly * do not report upstream publicly * do not ask for help, and even when it is offered you just ignore it, with quite some arrogance
let me give you an example as well. git.wikimedia.org broke, and you, faidon, did _absolutely nothing_ to give good feedback to upstream to improve the gitblit software. you and your colleagues did, though, adjust robots.txt to reduce the traffic arriving at git.wikimedia.org, which is, in my opinion, "paying half of the rent". see * our bug: https://bugzilla.wikimedia.org/show_bug.cgi?id=51769, which includes details on how to take a stack trace * upstream bug: https://code.google.com/p/gitblit/issues/detail?id=294, no stack trace reported
That being said, literally everything in your mail has been already considered and discussed multiple times :), plus a few others you didn't mention (GCM ciphers, OCSP stapling, SNI & split certificates, short-lived certificates, ECDSA certificates). A few have been discussed on wikitech, others are under internal discussion & investigation by some of us with findings to be posted here too when we have something concrete.
I don't mean this to sound rude, but I think you may be oversimplifying the situation quite a bit.
....
Is dedicating (finite) engineering time to write the necessary code for e.g. gdnsd to support DNSSEC, just to be able to support DANE for which there's exactly ZERO browser support, while at the same time breaking a significant chunk of users, a sensible thing to do?
i don't mean this to sound rude, but you give me the impression that you are handling the https and dns cases similarly to the gitblit case: you tried some approaches, but you leave me with the impression that you think only inside your wmf box. i'd really appreciate some love towards other projects here, and getting things fixed at the source as well, in the mid term (i.e. months, one or two years).
rupert
hi faidon, i do not think you personally, or WMF, are particularly helpful in accepting contributions, because you: <snip>
Let's not point fingers at specific people. It's really unhelpful and causes defensiveness.
In the case of gitblit, the problem at this point has been identified (web spiders accidentally DoSing us). It's really not surprising that creating ~100 MB zip files on the fly is expensive. It doesn't really seem likely that a stack trace would help solve such a problem, and really it's more a config issue on our end than a problem with gitblit.
I really don't see anything wrong with what any of the wmf folks did on that bug.
-bawolff
On Sat, Aug 17, 2013 at 2:19 PM, Brian Wolff bawolff@gmail.com wrote:
<snip>
To be honest, I don't feel like the git browser for WMF has ever been truly stable. The ops team does their best, but I think it's just the nature of the application. Git wasn't really made to be accessed that way, and here who knows how many people are using that service daily.
-- Tyler Romeo | Stevens Institute of Technology, Class of 2016 | Major in Computer Science | www.whizkidztech.com | tylerromeo@gmail.com
On 17.08.2013, 22:19 Brian wrote:
it's more a config issue on our end than a problem with gitblit.
Frankly, all web apps that allow anons to do crazy shit with GET requests should at least mark critical links with rel="nofollow", so at least part of the blame lies on Gitblit :)
On Sat, Aug 17, 2013 at 7:03 PM, Max Semenik maxsem.wiki@gmail.com wrote:
<snip>
I think a more important problem is the various cache prevention headers emitted by gitblit. Ops and Chad are well aware of that issue and have gotten upstream fixes for that (with public bugzilla bugs and google code issues!) and I guess are still working with upstream on further fixes for those headers.
But this is not constructive to the "site hardening" thread so let's either follow up on the other thread I just started or drop it entirely.
-Jeremy
On Aug 17, 2013, at 1:33 PM, rupert THURNER rupert.thurner@gmail.com wrote:
<snip>
Rupert, please don't call out or attack specific people. We're all on the same team, and I can assure you that Faidon, as well as Ryan and the rest of the team, are as eager as anyone else to provide more security and crypto capability to the WMF projects. Our goal is to share the sum of all knowledge with every human being, and right now, HTTPS means cutting out more than a few of them, so we need to determine a solution. We're not standing still, but it's not a simple problem with a simple answer, and it will take time.
Further, Ops in general, and Faidon in particular, routinely report issues upstream. Our recent bug reports or patches to Varnish and Ceph are two examples that easily come to mind. Faidon was (rightly) attempting to restore service first - we have a lot on our plates, and I assure you that if a specific bug had been identified, we'd report it - as we have for many other tools we use.
We all want the same things, and we're all on the same team here. Let's please stick to the issues at hand, and presume that we're all working in good faith to resolve these issues.
Thank you.
--Ken.
Interesting timing. DNSSEC came up on the NANOG list today -- someone sent out an FYI about a USENIX paper [1] which shows that the law of unintended consequences is still strong and active. In the case of DNSSEC, there is an increased chance of a user being unable to resolve a protected domain -- particularly in the APNIC region.
[1] https://www.usenix.org/conference/usenixsecurity13/measuring-practical-impact-dnssec-deployment
~Matt Walker Wikimedia Foundation Fundraising Technology Team
On Fri, Aug 16, 2013 at 6:25 PM, C. Scott Ananian cananian@wikimedia.org wrote:
<snip>
----- Original Message -----
From: "Zack Weinberg" zackw@cmu.edu
The first step really must be to enable HTTPS unconditionally for everyone (whether or not logged in). I see on the roadmap that there is concern that this will lock out large groups of users, e.g. from China; a workaround simply *must* be found for this. Everything else that is worth doing is rendered ineffective if *any* application layer data is *ever* transmitted over an insecure channel. There is no point worrying about traffic analysis when an active man-in-the-middle can inject malicious JavaScript into unsecured pages, or a passive one can steal session cookies as they fly by in cleartext.
I understand your goal, and your argument, but I've just this week been reminded that It Isn't Always China.
I found myself stuck on a non-rooted Android phone, and having to use a demo version of a tethering app ... which wouldn't pass HTTPS on purpose. Ironically, that's why it was the demo: I couldn't get through it to PayPal to buy it from them.
My point here, of course, is that you have to decide whether you're forcing HTTPS *for the user's good* or *for the greater good*... and if you think it's the former, remember that the user sometimes knows better than you do.
If it's the latter, well, you have to decide what percentage of false positives you're willing to let get away: are there any large populations of WP users *who cannot use HTTPS*? EMEA users on cheap non-smart phones that have a browser, but it's too old -- or the phone too slow -- to do HTTPS?
Cheers, -- jra
On 8/16/13, Zack Weinberg zackw@cmu.edu wrote:
<snip>
Thanks for taking the time to write these two emails. You raise an interesting point about having everything on one domain. I really don't think that's practical for political reasons (not to mention technical disruption), but it would allow people to be more lost in the crowd, especially for small languages. Some of the discussion about this stuff has taken place on bugzilla. Have you read through https://bugzilla.wikimedia.org/show_bug.cgi?id=47832?
Personally I think we need to make a more formal list of all the potential threats we could face, and then expand that list to include what we would need to do to protect ourselves from the different types of threats (or which threats we choose not to care about). Some kid who downloads a firesheep-type program is a very different type of threat than a state agent, and a state agent that is just trying to do broad spying is different from a state agent targeting a specific user. Lots of these discussions seem to end up being: let's do everything to try to protect against everything, which I don't think is the right mindset, as you can't protect against everything, and if you don't know what specifically you are trying to protect against, you end up missing things.
Tyler said:
With that said, I think the real problem with Wikimedia's security right now is a pretty big failure on the part of the operations team to inform anybody as to what the hell is going on. Why hasn't the TLS cipher list been updated? Why are we still using RC4 even though it's obviously a terrible option? Why isn't Wikimedia using DNSSEC, let alone DANE? I'm sure the operations team is doing quite a bit of work, but all of these things should be trivial configuration changes, and if they aren't, the community should be told why so we know why these changes haven't been applied yet.
While I would certainly love to have more details on the trials and tribulations of the ops team, I think it's a little unfair to consider deploying DNSSEC a "trivial config change". Changing things like our DNS infrastructure is not something (or at least I imagine it's not something) done on the spur of the moment. It needs testing and to be done cautiously, since if something goes wrong the entire site goes down. That said, it would be nice for ops to comment on what their thoughts on DNSSEC are (for reference, bug 24413).
--bawolff
----- Original Message -----
From: "Brian Wolff" bawolff@gmail.com
Thanks for taking the time to write these two emails. You raise an interesting point about having everything on one domain. I really don't think that's practical for political reasons (not to mention technical disruption), but it would allow people to be more lost in the crowd, especially for small languages. Some of the discussion about this stuff has taken place on bugzilla. Have you read through https://bugzilla.wikimedia.org/show_bug.cgi?id=47832 ?
I should think we might be able to run a proxy that would handle such hiding, no?
Personally I think we need to make a more formal list of all the potential threats we could face, and then expand that list to include what we would need to do to protect ourselves from the different types of threats (or which threats we choose not to care about). <snip>
Definitely: the potential attack surfaces need to be explicitly itemized.
Cheers, -- jra
On Fri, Aug 16, 2013 at 08:04:24PM -0400, Zack Weinberg wrote:
Hi, I'm a grad student at CMU studying network security in general and censorship / surveillance resistance in particular. I also used to work for Mozilla, some of you may remember me in that capacity. My friend Sumana Harihareswara asked me to comment on Wikimedia's plans for hardening the encyclopedia against state surveillance.
<snip>
First of all, thanks for your input. It's much appreciated. As I'm sure Sumanah has already mentioned, all of our infrastructure is being developed in the open using free software and we'd be also very happy to accept contributions in code/infrastructure-as-code as well.
That being said, literally everything in your mail has been already considered and discussed multiple times :), plus a few others you didn't mention (GCM ciphers, OCSP stapling, SNI & split certificates, short-lived certificates, ECDSA certificates). A few have been discussed on wikitech, others are under internal discussion & investigation by some of us with findings to be posted here too when we have something concrete.
I don't mean this to sound rude, but I think you may be oversimplifying the situation quite a bit.
Enabling HTTPS for everyone on a website of our scale isn't a trivial thing to do. Besides matters of policy - blocking Wikipedia in China isn't something that can be done lightly - there are significant technical restrictions. Just to lay out a few examples here: there is no software that can both do SPDY and take as input the key for encrypting SSL session tokens, something that's needed if you have a cluster of load balancers (you also need to rotate it periodically; a lot of people miss this). There is no software out there that supports both a shared SSL session cache and SPDY[1]. Etc.
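To illustrate what rotating and sharing the session-ticket key across a load-balancer cluster involves operationally, here is a toy sketch of the key-management side only (the interval and key count are illustrative; pushing the keys into the TLS terminator is exactly the part described above as missing from current software alongside SPDY):

    # Toy sketch: manage a rotating ring of session-ticket keys to be pushed
    # to every TLS terminator in a cluster. 48 bytes matches the common
    # OpenSSL ticket-key format; the interval and key count are illustrative.
    import os, time

    ROTATE_EVERY = 12 * 3600                 # seconds, illustrative

    class TicketKeyRing:
        def __init__(self):
            self.keys = [os.urandom(48)]     # newest first; encrypt with keys[0]
            self.last_rotated = time.time()

        def maybe_rotate(self):
            if time.time() - self.last_rotated >= ROTATE_EVERY:
                self.keys.insert(0, os.urandom(48))
                del self.keys[3:]            # keep two older keys for decryption
                self.last_rotated = time.time()

        def distribute(self):
            # In reality: push self.keys to every load balancer atomically
            # (config management, etc.) and have them reload; here we just return.
            return list(self.keys)

    ring = TicketKeyRing()
    ring.maybe_rotate()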
DANE is great and everything, but there's no software available that satisfies our GeoDNS requirements *and* supports DNSSEC. I know, I've tried them all. Traditional DNSSEC signing proxies (e.g. OpenDNSSEC) don't work at all with GeoDNS. (FWIW, we're switching to gdnsd, which has a unique set of characteristics and whose author, Brandon Black, was hired into the ops team shortly after we decided to switch to gdnsd.)
Plus, DNSSEC has only marginal adoption client-side (and DANE has none yet) and has important side effects. For example, you're more likely to be used as a source for DNS amplification attacks as your responses get larger. More importantly though, you're breaking users, something that needs to be carefully weighed against the benefits.
If you need numbers, this is from a paper from USENIX Security '13 last week titled "Measuring the Practical Impact of DNSSEC Deployment"[2]: "And we show, for the first time, that enabling DNSSEC measurably increases end-to-end resolution failures. For every 10 clients that are protected from DNS tampering when a domain deploys DNSSEC, approximately one ordinary client (primarily in Asia) becomes unable to access the domain."
Is dedicating (finite) engineering time to write the necessary code for e.g. gdnsd to support DNSSEC, just to be able to support DANE for which there's exactly ZERO browser support, while at the same time breaking a significant chunk of users, a sensible thing to do?
We'll keep wikitech -and blog, where appropriate- up to date with our plans as these evolve. In the meantime, feel free to dive in our puppet repository and see our setup and make your suggestions :)
Best, Faidon (wmf ops)
[1]: stud does SSL well (but not SPDY), but does not pass X-Forwarded-For, and Varnish doesn't support the PROXY protocol that stud provides, so one of the two would need to be coded (and we've already explored what it'd take to code it). nginx scales up and has some support for SPDY, but doesn't have a shared-across-systems session cache or session token key rotation support, nor does it support ECDSA. Apache 2.4 has all that, but we're not sure of its performance characteristics yet, plus mod-spdy won't cut it for us. Etc.
[2]: https://www.usenix.org/conference/usenixsecurity13/measuring-practical-impac...
On Sat, Aug 17, 2013 at 6:47 AM, Faidon Liambotis faidon@wikimedia.org wrote:
First of all, thanks for your input. It's much appreciated. As I'm sure Sumanah has already mentioned, all of our infrastructure is being developed in the open using free software and we'd be also very happy to accept contributions in code/infrastructure-as-code as well.
This type of email was what I was looking for. Thank you Faidon. All I need is somebody to come in and say "yes, we're looking into it, but we can't because of XXX".
Please read the link I provided more carefully. Apple devices and browsers are still vulnerable.
Aha, I indeed missed that part. Sorry about my misunderstanding.
-- Tyler Romeo | Stevens Institute of Technology, Class of 2016 | Major in Computer Science | www.whizkidztech.com | tylerromeo@gmail.com
On 08/17/2013 06:47 AM, Faidon Liambotis wrote:
<snip>
I don't mean this to sound rude, but I think you may be oversimplifying the situation quite a bit.
Thanks to both of you, and to everyone on these threads, for thinking about and working on these issues. I apologize for not quite briefing Zack enough before asking him to share his thoughts -- I presumed that https://blog.wikimedia.org/2013/08/01/future-https-wikimedia-projects/ , http://www.gossamer-threads.com/lists/wiki/wikitech/378169 and http://www.gossamer-threads.com/lists/wiki/wikitech/378940 , and the "NSA" and "Disinformation regarding perfect forward secrecy for HTTPS" threads in http://lists.wikimedia.org/pipermail/wikimedia-l/2013-August/thread.html would be enough for him to get started with. I probably should have done more research.
We'll keep wikitech -and blog, where appropriate- up to date with our plans as these evolve.
I suggest that we also update either https://meta.wikimedia.org/wiki/HTTPS or a hub page on http://wikitech.wikimedia.org/ or https://www.mediawiki.org/wiki/Security_auditing_and_response with up-to-date plans, to make it easier for experts inside and outside the Wikimedia community to get up to speed and contribute. For topics under internal discussion and investigation, I would love a simple bullet point saying: "we're thinking about that, sorry nothing public or concrete yet, contact $person if you have experience to share."
In the meantime, feel free to dive in our puppet repository and see our setup and make your suggestions :)
You can browse that repository at https://git.wikimedia.org/summary/?r=operations/puppet.git and you can learn how to contribute a patch at https://wikitech.wikimedia.org/wiki/Puppet_coding (using Git and Gerrit the way we do per https://www.mediawiki.org/wiki/Gerrit/Tutorial ).
Best, Faidon (wmf ops)
Thanks again!
On Sat, Aug 17, 2013 at 05:55:36PM -0400, Sumana Harihareswara wrote:
I suggest that we also update either https://meta.wikimedia.org/wiki/HTTPS or a hub page on http://wikitech.wikimedia.org/ or https://www.mediawiki.org/wiki/Security_auditing_and_response with up-to-date plans, to make it easier for experts inside and outside the Wikimedia community to get up to speed and contribute. For topics under internal discussion and investigation, I would love a simple bullet point saying: "we're thinking about that, sorry nothing public or concrete yet, contact $person if you have experience to share."
This is a good suggestion. We had a pad that we've been working on even before this thread; a few of us (Ryan, Mark, Asher, Ken, myself) met the other day and worked a bit on our strategy from the operations perspective and put out our notes at: https://wikitech.wikimedia.org/wiki/HTTPS/Future_work
It's still a very rudimentary bullet-point summary, so it might not be an easy read. Feel free to ask questions here or on-wiki.
There are obviously still a lot of unknowns -- we have a lot of "evaluate this" TODO items. Feel free to provide feedback or audit our choices, though; it'd be very much welcome. If you feel you can help in some of these areas in some other way, feel free to say so and we'll try to find a way to make it happen.
Regards, Faidon
I first wrote a large response while I was still in the midst of reading the various replies; then I saw the last few responses, which touched on many important factors related to very large scale sites and on Wikipedia wanting to keep geo-location and geo-feature based separation, so I removed various paragraphs that were unnecessary.
Most users have no experience in such areas (large scale or geo-location based implementations), and neither do I.
For (comparatively very) small scale sites, I have faced such problems: at my first few small-scale implementations I did not pay attention to rate-limiting techniques, and then realized their importance over time.
So before adding DNSSEC first and then upgrading to DNSSEC+DANE second, admins of sites with a large user base must work early on DNS rate-limiting solutions; various approaches should be tested and applied first. Some type of dynamic and elastic set of limits needs to be implemented, based on previous data about daily or weekly loads. And it has to be "adaptive" as well, for the situation where the same IP address (or set of addresses) is sending multiple DNSSEC queries too frequently.
Once there is more understanding and discussion of rate-limiting, and better techniques are found and/or developed for very large scale sites, implementing large-scale DNSSEC will be easier; otherwise it will remain scary.
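The "adaptive" per-client limiting described above is essentially a token bucket keyed by source address; a toy sketch (the rates are placeholders, not tuning advice):

    # Toy sketch: token-bucket limiting keyed by client IP, roughly the shape
    # of DNS response-rate-limiting. The rates are placeholders.
    import time
    from collections import defaultdict

    RATE = 20.0        # sustained queries/second allowed per IP (placeholder)
    BURST = 40.0       # short-term burst allowance (placeholder)

    buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.time()})

    def allow(client_ip):
        bucket = buckets[client_ip]
        now = time.time()
        bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["ts"]) * RATE)
        bucket["ts"] = now
        if bucket["tokens"] >= 1.0:
            bucket["tokens"] -= 1.0
            return True                     # answer the query
        return False                        # drop, truncate (TC=1), or refuse

    print(allow("192.0.2.1"))               # TEST-NET address, just a demo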
Since my deployment cases are much, much smaller, I was able to do things in a different order: first I added DNSSEC, then DANE, then applied rate-limiting techniques. Initially I set the limits much too low and observed many queries failing on the client side, so I had to increase them. For small-scale sites, that worked out very fine.
In some areas (northern EU countries), DNSSEC has already reached more than 60% usage; Wikipedia servers in those areas should be attempted first, as initial test cases; that should help (I guess). It is very unfortunate that here (USA) we are far behind in many real, human-friendly techniques/products/systems.
-- Bright Star.
Received from Faidon Liambotis, on 2013-08-17 3:47 AM:
<snip>
On Fri, Aug 23, 2013 at 06:53:29AM -0700, Bry8 Star wrote:
At my first few small-scale implementations I did not pay attention to rate-limiting techniques, and then realized their importance over time.
RRL support for gdnsd is being tracked upstream at: https://github.com/blblack/gdnsd/issues/36 (filed by yours truly, 7 months ago; Brandon has left some really good and large responses there)
You're right that it's a prerequisite to DNSSEC support, due to large DNSSEC responses - and, more importantly, the fact that they come in reply to tiny queries - being appealing to DNS amplification attackers.
Thanks, Faidon
(Sometimes I add extra info next to terms for new readers' better understanding; please ignore/skip it if you are already aware of those short descriptions.)
Based on its homepage info, it seems gdnsd does not support full DNSSEC yet!
https://github.com/blblack/gdnsd
What predecessor DNS server is gdnsd based on?
Can gdnsd not use the DNSSEC-capable libraries from the NLnet Labs projects to provide DNSSEC support? Or is such a feature already under development?
I'm not sure whether NSD (a DNSSEC-capable, authoritative-only DNS server from NLnet Labs) has already been considered or not.
Is it possible to use servers running NSD with GeoIP-based features? Or can the firewall (iptables) be pre-configured with various GeoIP-based functions, for geographic-location-based balancing, redirecting and/or other features?
Many pieces of software use libraries from the NLnet Labs projects for full DNSSEC support.
http://www.nlnetlabs.nl/projects/nsd/ http://www.nlnetlabs.nl/projects/ldns/ http://unbound.net/
IPtables and GeoIP: http://xtables-addons.sourceforge.net/geoip.php
GeoIP resources: http://dev.maxmind.com/ http://dev.maxmind.com/geoip/geoip2/geolite2/ http://www.maxmind.com/en/geoip_resources http://opensourcegis.org/
The free GeoIP database can be reduced to around 4 MB using simple scripts, to include only IP ranges, country, etc.
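As a rough illustration of such a reduction script (the column layout is assumed, not MaxMind's exact format, and the filenames are placeholders):

    # Toy sketch: strip a GeoIP CSV down to network ranges and country codes.
    # The column positions are assumed for illustration, not MaxMind's exact
    # layout, and the filenames are placeholders.
    import csv

    def reduce_geoip(src="geoip-blocks.csv", dst="ranges.csv"):
        with open(src, newline="") as inp, open(dst, "w", newline="") as out:
            writer = csv.writer(out)
            for row in csv.reader(inp):
                network, country = row[0], row[1]   # assumed column order
                writer.writerow([network, country])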
BIND (DNS server software with full DNSSEC support, which can be configured as authoritative-only or as another type of DNS server) has GeoIP-based solutions:
http://code.google.com/p/bind-geoip/ http://www.caraytech.com/geodns/ http://phix.me/geodns/ http://www.bind9.net/dns-tools
A DNS server with full DNSSEC support, IP anycast, and RRL would be better.
http://vincent.bernat.im/en/blog/2011-dns-anycast.html http://www.circleid.com/posts/20110531_anycast_unicast_or_both/ http://en.wikipedia.org/wiki/Anycast
If I'm repeating already-discussed topics here, then please pardon me; a link to where they were previously discussed would be very helpful for others.
-- Bright Star. bry 8 st ar a.@t. in ven ta ti d.o.t. or g: GPG-FPR:C70FD3D070EB5CADFC040FCB80F68A461F5923FA. bry 8 st ar a.@t. ya hoo d.o.t. c om: GPG-FPR:12B77F2C92BF25C838C64D9C8836DBA2576C10EC.
Before this gets lost in the other noise on this thread, I wanted to address the MediaWiki-specific pieces.
On Fri, Aug 16, 2013 at 5:04 PM, Zack Weinberg zackw@cmu.edu wrote:
All cookies should be tagged both Secure and HttpOnly (which renders them inaccessible to accidental HTTP loads and to page JavaScript);
They are (if you log in over HTTPS). Except for a generic one which indicates you should be redirected to HTTPS if it's received on an HTTP connection.
now would also be a good time to prune your cookie requirements, ideally to just one which does not reveal via inspection whether or not someone is logged in.
I'm not sure what attack you're preventing here. Can you elaborate? If they are logging in over HTTPS, their cookies shouldn't be visible to a network-based attacker. If their cookies are visible to a network-based attacker, then the attacker can probably get their username from their login. Also, since edits show up in the history, correlating any edit action to the editor's name is trivial, even without the cookies.
You should also do Content-Security-Policy, as strict as possible. I know this can be a huge amount of development effort, but the benefits are equally huge - we don't know exactly how it was done, but there's an excellent chance CSP on the hidden service would have prevented the exploit discussed here: https://blog.torproject.org/blog/hidden-services-current-events-and-freedom-hosting
A strong CSP is #3 on my most-wanted list of security features (after https and better password hashing). However, that would likely limit things like editors adding css into their edits, which is pretty controversial.
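For reference, the kind of policy being discussed looks roughly like the string below; inline style attributes and inline scripts are exactly what would force keeping 'unsafe-inline', which is the controversial part. The source lists are illustrative, not a proposed production policy:

    # Sketch: roughly what a strict policy for a wiki page might look like.
    # The source lists are illustrative, not a proposed production policy.
    CSP = "; ".join([
        "default-src 'self'",
        "script-src 'self' https://bits.wikimedia.org",    # no 'unsafe-inline'
        "style-src 'self' https://bits.wikimedia.org",     # inline style= blocked
        "img-src 'self' https://upload.wikimedia.org data:",
    ])
    # Would be sent as:  Content-Security-Policy: <the string above>
    print(CSP)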
On 17 August 2013 22:08, Chris Steipp csteipp@wikimedia.org wrote:
A strong CSP is #3 on my most-wanted list of security features (after https and better password hashing). However, that would likely limit things like editors adding css into their edits, which is pretty controversial.
Do you mean adding user/site CSS, or do you mean other edits?
- d.
Inline css (<div style="...")
Also inline JavaScript, which MediaWiki has a lot of for the ResourceLoader. On Aug 17, 2013 5:10 PM, "Chris Steipp" csteipp@wikimedia.org wrote:
<snip>
Yeah, but I *think* that one can be solved without affecting editors. Building something that lets them style things, but in a way where inline CSS isn't allowed by the CSP, is something I haven't figured out yet.
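One (untested) shape of a solution is to hoist sanitized inline styles into generated classes at parse time, so the emitted HTML carries no style attributes at all; a toy sketch of the idea (it ignores elements that already have a class attribute, sanitization, and caching):

    # Toy sketch of the idea only: replace style="..." attributes emitted by
    # the parser with generated classes, and collect the rules into a per-page
    # stylesheet served as a separate, CSP-allowed resource.
    import hashlib, re

    STYLE_ATTR = re.compile(r'\sstyle="([^"]*)"')

    def hoist_inline_styles(html):
        rules = {}
        def repl(match):
            css = match.group(1)
            cls = "mw-inline-" + hashlib.sha1(css.encode()).hexdigest()[:8]
            rules[cls] = css
            return ' class="%s"' % cls
        html = STYLE_ATTR.sub(repl, html)
        stylesheet = "\n".join(".%s { %s }" % (c, r) for c, r in rules.items())
        return html, stylesheet

    page, css = hoist_inline_styles('<div style="color:red">x</div>')
    print(page)        # <div class="mw-inline-...">x</div>
    print(css)         # .mw-inline-... { color:red }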
On Sat, Aug 17, 2013 at 2:11 PM, Tyler Romeo tylerromeo@gmail.com wrote:
<snip>