Is there a way we can narrow down this security check so it doesn't keep breaking API requests, action=raw requests, and ResourceLoader requests, etc?
Having the last param in a query string end in say ".png" or ".svg" or ".jpg" or ".ogg" is..... very frequent when dealing with uploaded files and file pages. In addition to the reported breakages with ResourceLoader, I've seen this problem break classic-style action=raw site CSS page loads (action=raw&title=MediaWiki:Filepage.css) and API requests for MwEmbed+TimedMediaHandler's video player, for the OEmbed extension I'm fiddling with, etc.
The impression I get is this is:
1) only exploitable on IE 6 (which is now a small minority and getting smaller)
2) only exploitable if the path portion of the URL does not include an unencoded period (eg 'api' or 'api%2Ephp' instead of 'api.php')
3) only exploitable if raw HTML fragments can be injected into the output, eg a '<body' or other fragment that triggers IE's HTML detection
For 1) I'm honestly a bit willing to sacrifice a few IE 6 users at this point; the vendor's dropped support, shipped three major versions, and is actively campaigning to get the remaining users to upgrade. :) But I get protecting, so if we can find a workaround that's ok.
For 2) ... if we can detect this it would be great as we could avoid breaking *any* api.php, index.php, or load.php requests in most real-world situations.
For 3) ... formatted & XML output from the API should always be safe since, even if it triggers XML/HTML detection, you can't slip in arbitrary <script> bits. JSON output seems to be the problematic vector currently, as you can manage to get arbitrary strings embedded in some places like error messages:
{"warnings":{"siteinfo":{"*":"Unrecognized value for parameter 'siprop': <body onload=alert(1)>.html"}}}
On the other hand if our JSON output escaped '<' and '>' characters you'd get this totally safe document:
{"warnings":{"siteinfo":{"*":"Unrecognized value for parameter 'siprop': \u003Cbody onload=alert(1)\u003E.html"}}}
I tested this by slipping a couple lines into ApiFormatJson:
$this->printText( $prefix . str_replace( '<', '\u003C', str_replace( '>', '\u003E', FormatJson::encode( $this->getResultData(), $this->getIsHtml() ) ) ) . $suffix );
and can confirm that IE 6 doesn't execute the script bit.
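(Side note: PHP 5.3's json_encode() can do the same escaping natively via the JSON_HEX_TAG flag. A minimal sketch, assuming PHP 5.3+ and plain json_encode() rather than FormatJson:

$data = array( 'warnings' => array( 'siteinfo' => array(
    '*' => "Unrecognized value for parameter 'siprop': <body onload=alert(1)>.html"
) ) );
echo json_encode( $data, JSON_HEX_TAG );
// {"warnings":{"siteinfo":{"*":"Unrecognized value for parameter 'siprop': \u003Cbody onload=alert(1)\u003E.html"}}}

As long as we need to run on pre-5.3 PHP, though, the str_replace() approach above is the portable one.)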
Are there any additional exploit vectors for API output other than HTML tags mixed unescaped into JSON?
-- brion
On Thu, Jun 2, 2011 at 10:56 PM, Brion Vibber brion@pobox.com wrote:
Is there a way we can narrow down this security check so it doesn't keep breaking API requests, action=raw requests, and ResourceLoader requests, etc?
Tim had an idea about redirecting bad URLs to fixed ones. He ran it by me last night his time, and my guess is he'll probably implement it this morning his time. But I'll leave it up to him to elaborate on that.
Your ideas to secure api.php output against HTML abuse are interesting, but I don't think the txt and dbg formats can be fixed that way.
Roan Kattouw (Catrope)
On Thu, Jun 2, 2011 at 2:20 PM, Roan Kattouw roan.kattouw@gmail.com wrote:
On Thu, Jun 2, 2011 at 10:56 PM, Brion Vibber brion@pobox.com wrote:
Is there a way we can narrow down this security check so it doesn't keep breaking API requests, action=raw requests, and ResourceLoader requests, etc?
Tim had an idea about redirecting bad URLs to fixed ones. He ran it by me last night his time, and my guess is he'll probably implement it this morning his time. But I'll leave it up to him to elaborate on that.
I know this has already been brought up, but that doesn't work for POST, and may not work for API clients that don't automatically follow redirects. (Which it looks like includes MediaWiki's ForeignAPIRepo since our Http class got redirection turned off by default a couple versions ago.)
Your ideas to secure api.php output against HTML abuse are interesting, but I don't think the txt and dbg formats can be fixed that way.
Why do we actually have these extra unparseable formats? If they're for debug readability then we can probably just make them HTML-formatted, like jsonfm/xmlfm/etc.
-- brion
On Thu, Jun 2, 2011 at 11:39 PM, Brion Vibber brion@pobox.com wrote:
I know this has already been brought up, but that doesn't work for POST, and may not work for API clients that don't automatically follow redirects.
That's exactly what Tim said. However, I don't think POST is that much of a problem. Problematic dots can only result from 'variable' pieces of data being put in your query string, and with POST you'd typically put the variable parts in the POST body. And even if you're not doing that, moving something from the query string to the POST body should be trivial.
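To illustrate with a made-up client sketch (the wiki URL and parameters are hypothetical, not from any particular extension): a GET request like

api.php?action=query&prop=imageinfo&format=json&titles=File:Example.svg

ends in ".svg" and would trip the check, but sending the variable parts in the POST body keeps the query string fixed and dot-free:

$ch = curl_init( 'https://wiki.example.org/w/api.php' );
curl_setopt_array( $ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query( array(
        'action' => 'query',
        'prop'   => 'imageinfo',
        'format' => 'json',
        'titles' => 'File:Example.svg',
    ) ),
    CURLOPT_RETURNTRANSFER => true,
) );
$result = curl_exec( $ch );
curl_close( $ch );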
Why do we actually have these extra unparseable formats? If they're for debug readability then we can probably just make them HTML-formatted, like jsonfm/xmlfm/etc.
To be honest, I don't remember. I think they can die. I'll take a look at the revision history tomorrow to check why they were introduced in the first place.
Roan Kattouw (Catrope)
On Thu, Jun 2, 2011 at 2:44 PM, Roan Kattouw roan.kattouw@gmail.com wrote:
However, I don't think POST is that much of a problem. Problematic dots can only result from 'variable' pieces of data being put in your query string, and with POST you'd typically put the variable parts in the POST body. And even if you're not doing that, moving something from the query string to the POST body should be trivial.
Good point! That helps simplify things. :D
Why do we actually have these extra unparseable formats? If they're for debug readability then we can probably just make them HTML-formatted, like jsonfm/xmlfm/etc.
To be honest, I don't remember. I think they can die. I'll take a look at the revision history tomorrow to check why they were introduced in the first place.
Spiff. I'd be happy to kill 'em off entirely ;) but if there's a better way that still keeps our functional output formats safe & working, that's super.
-- brion
Brion Vibber wrote:
On Thu, Jun 2, 2011 at 2:20 PM, Roan Kattouw roan.kattouw@gmail.com wrote:
On Thu, Jun 2, 2011 at 10:56 PM, Brion Vibber brion@pobox.com wrote:
Is there a way we can narrow down this security check so it doesn't keep breaking API requests, action=raw requests, and ResourceLoader requests, etc?
Tim had an idea about redirecting bad URLs to fixed ones. He ran it by me last night his time, and my guess is he'll probably implement it this morning his time. But I'll leave it up to him to elaborate on that.
I know this has already been brought up, but that doesn't work for POST, and may not work for API clients that don't automatically follow redirects. (Which it looks like includes MediaWiki's ForeignAPIRepo since our Http class got redirection turned off by default a couple versions ago.)
Luckily ForeignAPIRepo doesn't spoof IE6, so we can just redirect IE6.
Brion Vibber wrote:
Your ideas to secure api.php output against HTML abuse are interesting, but I don't think the txt and dbg formats can be fixed that way.
Why do we actually have these extra unparseable formats? If they're for debug readability then we can probably just make them HTML-formatted, like jsonfm/xmlfm/etc.
You're talking about api.php?format=txt and api.php?format=dbg? I'd strongly recommend at least rudimentary log sampling on the Wikimedia cluster to check for use frequency before making any decision. People have all sorts of strange use-cases and it could be a rather nasty breaking change for some people. It'd be nice to have hard(er) data before making a decision, even if it turns out that the idea behind having these extra formats was initially misguided.
MZMcBride
On 03/06/11 07:20, Roan Kattouw wrote:
Tim had an idea about redirecting bad URLs to fixed ones. He ran it by me last night his time, and my guess is he'll probably implement it this morning his time. But I'll leave it up to him to elaborate on that.
I committed it in r89397. It's unconventional to have WebRequest modify the output, but it doesn't seem to break any cardinal rules, and Roan thought WebRequest is a better place for it than OutputPage since the API tries to avoid using OutputPage.
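For anyone following along who hasn't read the diff, the general shape of the idea is roughly this (a hypothetical sketch only, not the actual r89397 code; the appended '&*' is just a placeholder suffix):

// Hypothetical sketch: if a GET request's query string ends in something
// that looks like a file extension, bounce it once to a URL that doesn't,
// so IE's extension sniffing has nothing to latch onto.
function wfHypotheticalSecurityRedirect() {
    if ( $_SERVER['REQUEST_METHOD'] !== 'GET' ) {
        return; // can't redirect a POST without losing its body
    }
    $uri = isset( $_SERVER['REQUEST_URI'] ) ? $_SERVER['REQUEST_URI'] : '';
    $query = parse_url( $uri, PHP_URL_QUERY );
    if ( !$query || !preg_match( '/\.[A-Za-z0-9]{1,5}$/', $query ) ) {
        return; // nothing extension-like at the end of the URL
    }
    header( 'Location: ' . $uri . '&*', true, 301 );
    exit;
}

Well-behaved GET clients get bounced once and come back with a clean URL.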
-- Tim Starling
Remember Rule #1: You can't solve social problems using technical means. The social problem is that people keep using IE6. The technical means are to protect them by pandering to IE6 security lapses. The social solution is to tell people "FFS, STOP USING IE^111".
For all the reasons Brion gave below, I support the idea of checking to see if the browser is IE6, and if it is, then give them a header that says "We can no longer provide you with a secure browsing experience because you are using Internet Explorer 6" followed by a dump of the raw wikitext with any angle brackets replaced by &lt; and &gt;.
From: wikitech-l-bounces@lists.wikimedia.org [wikitech-l-bounces@lists.wikimedia.org] on behalf of Brion Vibber [brion@pobox.com]
For 1) I'm honestly a bit willing to sacrifice a few IE 6 users at this point; the vendor's dropped support, shipped three major versions, and is actively campaigning to get the remaining users to upgrade. :) But I get protecting, so if we can find a workaround that's ok.
On Thu, Jun 2, 2011 at 11:25 PM, Russell N. Nelson - rnnelson rnnelson@clarkson.edu wrote:
Remember Rule #1: You can't solve social problems using technical means. The social problem is that people keep using IE6. The technical means are to protect them by pandering to IE6 security lapses. The social solution is to tell people "FFS, STOP USING IE^111".
For all the reasons Brion gave below, I support the idea of checking to see if the browser is IE6, and if it is, then give them a header that says "We can no longer provide you with a secure browsing experience because you are using Internet Explorer 6" followed by a dump of the raw wikitext with any angle brackets replaced by &lt; and &gt;.
Could be cool, except we have caching proxies (Squid for api.php, Varnish for load.php) between us and the IE users, and I'm not sure we can tell them not to cache (or to cache separately) only the requests coming from IE6.
Roan Kattouw (Catrope)
On Thu, Jun 2, 2011 at 5:25 PM, Russell N. Nelson - rnnelson rnnelson@clarkson.edu wrote:
Remember Rule #1: You can't solve social problems using technical means. The social problem is that people keep using IE6. The technical means are to protect them by pandering to IE6 security lapses. The social solution is to tell people "FFS, STOP USING IE^111".
For all the reasons Brion gave below, I support the idea of checking to see if the browser is IE6, and if it is, then give them a header that says "We can no longer provide you with a secure browsing experience because you are using Internet Explorer 6" followed by a dump of the raw wikitext with any angle brackets replaced by &lt; and &gt;.
I support the proposal in spirit :)
But.....wouldn't work with our cache setup. Squid doesn't filter on user agent.
-Chad
Look, we're moving into a new generation of web browsers. It's time to upgrade - it's easy and free. We shouldn't spend our time/resources trying to support ten-year-old technology. Even Microsoft is trying to get people to stop and it's the responsibility of any popular website to support modern technologies.
Anytime I hear 'developing for IE6' I cringe. If the time developers spent on supporting IE6 went toward modern features, we'd have some really great websites.
On Thu, Jun 2, 2011 at 2:25 PM, Russell N. Nelson - rnnelson < rnnelson@clarkson.edu> wrote:
Remember Rule #1: You can't solve social problems using technical means. The social problem is that people keep using IE6. The technical means are to protect them by pandering to IE6 security lapses. The social solution is to tell people "FFS, STOP USING IE^111".
For all the reasons Brion gave below, I support the idea of checking to see if the browser is IE6, and if it is, then give them a header that says "We can no longer provide you with a secure browsing experience because you are using Internet Explorer 6" followed by a dump of the raw wikitext with any angle brackets replaced by &lt; and &gt;.
From: wikitech-l-bounces@lists.wikimedia.org [wikitech-l-bounces@lists.wikimedia.org] on behalf of Brion Vibber [brion@pobox.com]
For 1) I'm honestly a bit willing to sacrifice a few IE 6 users at this point; the vendor's dropped support, shipped three major versions, and is actively campaigning to get the remaining users to upgrade. :) But I get protecting, so if we can find a workaround that's ok.
On Thu, Jun 2, 2011 at 6:06 PM, Mono mium monomium@gmail.com wrote:
Look, we're moving into a new generation of web browsers. It's time to upgrade - it's easy and free. We shouldn't spend our time/resources trying to support ten-year-old technology. Even Microsoft is trying to get people to stop and it's the responsibility of any popular website to support modern technologies.
Anytime I hear 'developing for IE6' I cringe. If the time developers spent on supporting IE6 went toward modern features, we'd have some really great websites.
I remain convinced that it's worth it-- at least for security issues-- as long as a browser retains at least 1% market share.
-Chad
On 03/06/11 06:56, Brion Vibber wrote:
The impression I get is this is:
- only exploitable on IE 6 (which is now a small minority and getting smaller)
IE 6 and some earlier versions of IE, at least back to 4.
- only exploitable if the path portion of the URL does not include an unencoded period (eg 'api' or 'api%2Ephp' instead of 'api.php')
- only exploitable if raw HTML fragments can be injected into the output, eg a '<body' or other fragment that triggers IE's HTML detection
HTML is a particularly dangerous exploit vector since it leads to XSS with no user interaction, but any content type can be faked. For example, if you use a .bat extension, IE will prompt you to execute the "batch file". So there's a potential for it being used for malware distribution. That's why we're denying all file extensions, not just .html.
For 1) I'm honestly a bit willing to sacrifice a few IE 6 users at this point; the vendor's dropped support, shipped three major versions, and is actively campaigning to get the remaining users to upgrade. :) But I get protecting, so if we can find a workaround that's ok.
We can't really do this without sending "Vary: User-Agent", which would completely destroy our cache hit ratio. For people who use Squid with our X-Vary-Options patch, it would be possible to use a very long X-Vary-Options header to single out IE 6 requests, but not everyone has that patch.
The patch we used for 1.16.5 included a User-Agent check, but I didn't realise the caching implications. It'll be removed for 1.16.6, that's one of the main reasons for doing a 1.16.6 release.
For 2) ... if we can detect this it would be great as we could avoid breaking *any* api.php, index.php, or load.php requests in most real-world situations.
The main issue here is that we don't have a wide variety of web servers set up for testing. We know that Apache lets you detect %2E versus dot via $_SERVER['REQUEST_URI'], but we don't know if any other web servers do that.
Note that checking for %2E alone is not sufficient, a lot of installations (including Wikimedia) have an alias /wiki -> /w/index.php which can be used to exploit action=raw.
Are there any additional exploit vectors for API output other than HTML tags mixed unescaped into JSON?
Yes, all other content types, as I said above.
I think the current solution in trunk, plus the redirect idea that I've been discussing with Roan, is our best bet for now, unless someone wants to investigate $_SERVER['REQUEST_URI'].
In another post:
I know this has already been brought up, but that doesn't work for POST, and may not work for API clients that don't automatically follow redirects. (Which it looks like includes MediaWiki's ForeignAPIRepo since our Http class got redirection turned off by default a couple versions ago.)
If there is an actual problem with ForeignAPIRepo then we can look at server-side special cases for it. But r89248 should allow all API requests that have a dotless value in their last GET parameter, and a quick review of ForeignAPIRepo in 1.16 and trunk indicates that it always sends such requests.
The current solution could theoretically break some API clients. An informative error message and a Location header will help the maintainers of the clients to update them.
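To spell out what that rule means in practice, a hypothetical illustration (not the r89248 code):

// Hypothetical helper: does the final value in the query string contain a dot?
function lastQueryValueHasDot( $queryString ) {
    parse_str( $queryString, $params );
    if ( !$params ) {
        return false;
    }
    $last = end( $params );
    return is_string( $last ) && strpos( $last, '.' ) !== false;
}

lastQueryValueHasDot( 'action=query&meta=siteinfo&format=json' );       // false: allowed through
lastQueryValueHasDot( 'action=query&format=json&titles=File:Foo.png' ); // true: gets the strict treatment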
Since we're talking about discarded solutions for this, maybe it's worth noting that I also investigated using a Content-Disposition header. The vulnerability involves an incorrect cache filename, and it's possible to override the cache filename using a Content-Disposition "filename" parameter. The reason I gave up on it is because we already use Content-Disposition for wfStreamFile():
header( "Content-Disposition: inline;filename*=utf-8'$wgLanguageCode'" . urlencode( basename( $fname ) ) );
IE 6 doesn't understand the charset specification, so it ignores the header and goes back to detecting the extension.
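If someone wants to pick that idea up again, the obvious variant would be to send a plain "filename" parameter, which legacy IE does read, alongside the RFC 2231 "filename*" form that it ignores. An untested sketch, using the same $fname and $wgLanguageCode as the wfStreamFile() line above and glossing over non-ASCII names in the plain parameter:

$basename = basename( $fname );
header( "Content-Disposition: inline"
    . ";filename=\"$basename\""
    . ";filename*=utf-8'$wgLanguageCode'" . urlencode( $basename ) );

Whether IE 6 would actually prefer the explicit filename over its own extension sniffing for these URLs would still need testing.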
-- Tim Starling
On Thu, Jun 2, 2011 at 5:21 PM, Tim Starling tstarling@wikimedia.org wrote:
On 03/06/11 06:56, Brion Vibber wrote:
For 1) I'm honestly a bit willing to sacrifice a few IE 6 users at this point; the vendor's dropped support, shipped three major versions, and is actively campaigning to get the remaining users to upgrade. :) But I get protecting, so if we can find a workaround that's ok.
We can't really do this without sending "Vary: User-Agent", which would completely destroy our cache hit ratio. For people who use Squid with our X-Vary-Options patch, it would be possible to use a very long X-Vary-Options header to single out IE 6 requests, but not everyone has that patch.
I'm really thinking more along the lines of: if someone's an IE 6-or-below user they have hundreds of other exploit vectors staring them in the face too, and we can't protect them against many of them -- or ANY of them if they're visiting other sites than just an up-to-date MediaWiki.
The cost of this fix has been immense; several versions of the fix with varying levels of disruption on production sites, both for IE 6 users and non-IE 6 users, and several weeks of delay on the 1.17.0 release.
I'd be willing to accept a few drive-by downloads for IE 6 users; it's not ideal but it's something that their antivirus tools etc will already be watching out for, that end-users already get trained to beware of, and that will probably *still* be exploitable on other web sites that they visit anyway.
The main issue here is that we don't have a wide variety of web servers set up for testing. We know that Apache lets you detect %2E versus dot via $_SERVER['REQUEST_URI'], but we don't know if any other web servers do that.
Note that checking for %2E alone is not sufficient, a lot of installations (including Wikimedia) have an alias /wiki -> /w/index.php which can be used to exploit action=raw.
Well that should be fine; as long as we can see the "/wiki?/foo.bat" then we can identify that it doesn't contain an unencoded dot in the path.
It sounds like simply checking REQUEST_URI when available would eliminate a huge portion of our false positives that affect real-world situations. Apache is still the default web server in most situations for most folks, and of course runs our own production servers.
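Something like this is what I have in mind -- a rough sketch, assuming only that $_SERVER['REQUEST_URI'] reflects the raw request line the way Apache provides it:

// Rough sketch: per point 2) above, the exploit needs the path portion to
// be free of unencoded dots, so if we can see a real '.' in the path
// ('api.php', 'load.php', ...) the strict extension check can be skipped.
function pathHasUnencodedDot() {
    if ( !isset( $_SERVER['REQUEST_URI'] ) ) {
        return false; // unknown server; fall back to the strict check
    }
    $path = strtok( $_SERVER['REQUEST_URI'], '?' ); // part before the query string
    return strpos( $path, '.' ) !== false;
}

// "/w/api.php?action=query&titles=File:Foo.png" -> true  (real dot in path, skip the strict check)
// "/wiki/Sandbox?action=raw&ctype=foo.bat"      -> false (dotless path, keep the strict check)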
Are there any additional exploit vectors for API output other than HTML tags mixed unescaped into JSON?
Yes, all other content types, as I said above.
Only as drive-by downloads, or as things that execute without interaction?
I think the current solution in trunk, plus the redirect idea that I've been discussing with Roan, is our best bet for now, unless someone wants to investigate $_SERVER['REQUEST_URI'].
*nod* Checking REQUEST_URI is probably the first thing we should do when it's available.
If there is an actual problem with ForeignAPIRepo then we can look at server-side special cases for it. But r89248 should allow all API requests that have a dotless value in their last GET parameter, and a quick review of ForeignAPIRepo in 1.16 and trunk indicates that it always sends such requests.
Yay! That's one less thing to worry about. :D
Since we're talking about discarded solutions for this, maybe it's worth noting that I also investigated using a Content-Disposition header. The vulnerability involves an incorrect cache filename, and it's possible to override the cache filename using a Content-Disposition "filename" parameter. The reason I gave up on it is because we already use Content-Disposition for wfStreamFile():
header( "Content-Disposition:
inline;filename*=utf-8'$wgLanguageCode'" . urlencode( basename( $fname ) ) );
IE 6 doesn't understand the charset specification, so it ignores the header and goes back to detecting the extension.
Good to know.
-- brion
<aside from main conversation>
Would it be a good community gesture to join Microsoft in trying to eradicate IE6?
or to not join them and put up a more general banner
and move on?
</aside from main conversation>
We don't want to use Microsoft's, whatever we do, because it promotes their own borked browser IE9.
Or we can use:
<script>for(x in document.write){document.write(x);}</script> and <input type crash>
to create a more accessible and user-friendly experience.
Mono
That's completely not the point.
2011/6/3, Mono mium monomium@gmail.com:
We don't want to use Microsoft's, whatever we do, because it promotes their own borked browser IE9.
Why not?
On Fri, Jun 3, 2011 at 1:19 PM, Huib Laurens sterkebak@gmail.com wrote:
That's completely not the point.
We shouldn't focus on getting people away from IE. We should stay neutral and advise people to upgrade.
2011/6/3, Mono mium monomium@gmail.com:
Why not?
We shouldn't throw annoying text/graphics at people who probably *can't* upgrade.
-Chad
On Jun 3, 2011 4:27 PM, "Mono mium" monomium@gmail.com wrote:
Why not?
Anyone can upgrade, Chad. It's not hard and any sane IT department should have done it six years ago.
On Fri, Jun 3, 2011 at 1:34 PM, Chad innocentkiller@gmail.com wrote:
We shouldn't throw annoying text/graphics at people who probably *can't* upgrade.
-Chad On Jun 3, 2011 4:27 PM, "Mono mium" monomium@gmail.com wrote:
Why not?
On Fri, Jun 3, 2011 at 1:19 PM, Huib Laurens sterkebak@gmail.com wrote:
Thats completly not the point.
2011/6/3, Mono mium monomium@gmail.com:
We don't want to use Microsoft's, whatever we do, because it promotes
their
own borked browser IE9.
On Fri, Jun 3, 2011 at 11:30 AM, Mark Dilley markwdilley@gmail.com
wrote:
<aside from main conversation>
Would it be a good community gesture to join Microsoft in trying to eradicate IE6?
or to not join them and put up a more general banner
and move on?
</aside from main conversation>
On 03Jun2011, at 10:53 AM, Brion Vibber wrote:
On Thu, Jun 2, 2011 at 5:21 PM, Tim Starling <tstarling@wikimedia.org> wrote:

> On 03/06/11 06:56, Brion Vibber wrote:
>> For 1) I'm honestly a bit willing to sacrifice a few IE 6 users at this
>> point; the vendor's dropped support, shipped three major versions, and is
>> actively campaigning to get the remaining users to upgrade. :) But I get
>> protecting, so if we can find a workaround that's ok.
>
> We can't really do this without sending "Vary: User-Agent", which
> would completely destroy our cache hit ratio. For people who use Squid
> with our X-Vary-Options patch, it would be possible to use a very long
> X-Vary-Options header to single out IE 6 requests, but not everyone
> has that patch.

I'm really thinking more along the lines of: if someone's an IE 6-or-below user they have hundreds of other exploit vectors staring them in the face too, and we can't protect them against many of them -- or ANY of them if they're visiting other sites than just an up-to-date MediaWiki.

The cost of this fix has been immense; several versions of the fix with varying levels of disruption on production sites, both for IE 6 users and non-IE 6 users, and several weeks of delay on the 1.17.0 release.

I'd be willing to accept a few drive-by downloads for IE 6 users; it's not ideal but it's something that their antivirus tools etc will already be watching out for, that end-users already get trained to beware of, and that will probably *still* be exploitable on other web sites that they visit anyway.

> The main issue here is that we don't have a wide variety of web servers set
> up for testing. We know that Apache lets you detect %2E versus dot via
> $_SERVER['REQUEST_URI'], but we don't know if any other web servers do that.
>
> Note that checking for %2E alone is not sufficient, a lot of
> installations (including Wikimedia) have an alias /wiki ->
> /w/index.php which can be used to exploit action=raw.

Well that should be fine; as long as we can see the "/wiki?/foo.bat" then we can identify that it doesn't contain an unencoded dot in the path.

It sounds like simply checking REQUEST_URI when available would eliminate a huge portion of our false positives that affect real-world situations. Apache is still the default web server in most situations for most folks, and of course runs our own production servers.

>> Are there any additional exploit vectors for API output other than HTML tags
>> mixed unescaped into JSON?
>
> Yes, all other content types, as I said above.

Only as drive-by downloads, or as things that execute without interaction?

> I think the current solution in trunk, plus the redirect idea that
> I've been discussing with Roan, is our best bet for now, unless
> someone wants to investigate $_SERVER['REQUEST_URI'].

*nod* Checking REQUEST_URI is probably the first thing we should do when it's available.

> If there is an actual problem with ForeignAPIRepo then we can look at
> server-side special cases for it. But r89248 should allow all API
> requests that have a dotless value in their last GET parameter, and a
> quick review of ForeignAPIRepo in 1.16 and trunk indicates that it
> always sends such requests.

Yay! That's one less thing to worry about. :D

> Since we're talking about discarded solutions for this, maybe it's
> worth noting that I also investigated using a Content-Disposition
> header. The vulnerability involves an incorrect cache filename, and
> it's possible to override the cache filename using a
> Content-Disposition "filename" parameter. The reason I gave up on it
> is because we already use Content-Disposition for wfStreamFile():
>
> header( "Content-Disposition: inline;filename*=utf-8'$wgLanguageCode'" .
>     urlencode( basename( $fname ) ) );
>
> IE 6 doesn't understand the charset specification, so it ignores the
> header and goes back to detecting the extension.

Good to know.

-- brion
On Fri, Jun 3, 2011 at 10:35 PM, Mono mium monomium@gmail.com wrote:
Anyone can upgrade, Chad. It's not hard and any sane IT department should have done it six years ago.
That doesn't mean we should punish people that work for a company with a less sane IT department.
Roan Kattouw (Catrope)
Anyone who has control of the computer they use can upgrade, but a surprising (depressing?) number of people don't have that kind of control. In particular, schools and libraries are notorious for being stuck with failbrowsers of one sort or another, and a significant number of people depend on these sorts of places for their only Internet access. I'm not sure we're yet at the stage where *most* people using IE6 don't have the ability to upgrade, but as time goes on that proportion is only going to get larger. What's the use in annoying people to try and get them to do something they can't do?
You're right that any sane IT department should have upgraded years ago. Unfortunately, many IT departments are not driven by sanity, and even those that are, sometimes their bosses are not.
Good point, Ronald.
The more people who are constantly told this message, the more will put pressure on their IT departments to change.
That is how lots of change happens. Pressure.
I think that Wikipedia's place in the internet community as all things encyclopedic will help this change.
How much time would it save Wikimedia engineers (volunteer & staff) if we did something like this instead of engineering for IE6?
/me done with this line of thought, thanks for the consideration --- Mark
On 03Jun2011, at 1:47 PM, Greenman, Ronald (NIH/CIT) [C] wrote:
Anyone who has control of the computer they use can upgrade, but a surprising (depressing?) number of people don't have that kind of control. In particular, schools and libraries are notorious for being stuck with failbrowsers of one sort or another, and a significant number of people depend on these sorts of places for their only Internet access. I'm not sure we're yet at the stage where *most* people using IE6 don't have the ability to upgrade, but as time goes on that proportion is only going to get larger. What's the use in annoying people to try and get them to do something they can't do?
You're right that any sane IT department should have upgraded years ago. Unfortunately, many IT departments are not driven by sanity, and even those that are, sometimes their bosses are not.
Heh, you think?
Deploying a new browser is not a trivial exercise in some large-scale environments.
And a lot of companies have really useless IT departments (i.e. no budget).
Trust me; we get employed (at vastly greater expense than simply upgrading) to tell them why their IT infrastructure is so insecure.
Tom
On 3 June 2011 21:35, Mono mium monomium@gmail.com wrote:
Anyone can upgrade, Chad. It's not hard and any sane IT department should have done it six years ago.
"This site is best viewed with Netscape Navigator 2.0 or higher. Download Netscape Now!" http://web.archive.org/web/19961226001115/www.cae.wisc.edu/~agnew/sp/luna.ht...
It seems that these messages don't get the point of the web. That is, to let everyone browse the web with whatever is available to them.
On 3 June 2011 20:30, Mark Dilley markwdilley@gmail.com wrote:
<aside from main conversation>
Would it be a good community gesture to join Microsoft in trying to eradicate IE6?
or to not join them and put up a more general banner
and move on?
</aside from main conversation>
Another possibility is for us to 1) announce that IE6 is no longer a supported browser, and 2) just stop worrying about IE6. Not take support out, but not worry about IE6. Don't test using it, don't code for it, don't pay any attention to new IE6 vulns. Just let the IE6 support die through bit-rot. If existing IE6 support code causes a problem, take it out. Over time, people who are using IE6 can decide for themselves if they want to continue using it. This method lowers our engineering cost and increases reliability.
At some point in the future, ops will report that we have had no visitors using IE6 in the last few months. Then we can go on a hunt for IE6 support and actively remove it.
"This site is best viewed with Netscape Navigator 2.0 or higher. Download Netscape Now!" http://web.archive.org/web/19961226001115/www.cae.wisc.edu/~agnew/sp/luna.ht...
It seems that these messages don't get the point of the web. That is to let everyone browse the web with whatever is available to then.
On 3 June 2011 20:30, Mark Dilley markwdilley@gmail.com wrote:
<aside from main conversation>
Would it be a good community gesture to join Microsoft in trying to eradicate IE6?
or to not join them and put up a more general banner
and move on?
</aside from main conversation>
-- -- ℱin del ℳensaje.
_______________________________________________ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
I still support <input type=crash>.
On 04/06/11 02:07, Mono mium wrote:
I still support <input type=crash>.
On a multiple millions visitors per day website, this is unacceptable.
Most users do not even know what a 'browser' is. They just look at their computer desktop and double click the 'e' icon.
Plus, 'internet explorer' sounds much better than:
- 'Firefox': will it light my computer?
- 'Opera': I listen to R'n'B
I do support your crash method... on your personal website.
On 04/06/11 08:51, Ashar Voultoiz wrote:
On 04/06/11 02:07, Mono mium wrote:
I still support <input type=crash>.
Most users do not even know what a 'browser' is. They just look at their computer desktop and double click the 'e' icon.
We're talking about Wikipedia, right? Surely we have an article that can explain this? Even if we have to point to a simplified version or something...
On 6/3/11 5:04 PM, Russell N. Nelson - rnnelson wrote:
At some point in the future, ops will report that we have had no visitors using IE6 in the last few months. Then we can go on a hunt for IE6 support and actively remove it.
This may happen but it might be sometime in 2020. We still seem to get hits from practically every browser ever made in the last 10 years.
http://stats.wikimedia.org/wikimedia/squids/SquidReportClients.htm
Last month we had 3.5M hits from browsers claiming to be IE 5.5, released in mid-2000.
Not until we hit less than 1% of users using it. Not a minute before.
-Chad
On 04/06/11 03:53, Brion Vibber wrote:
On Thu, Jun 2, 2011 at 5:21 PM, Tim Starling tstarling@wikimedia.org wrote:
The main issue here is that we don't have a wide variety of web servers set up for testing. We know that Apache lets you detect %2E versus dot via $_SERVER['REQUEST_URI'], but we don't know if any other web servers do that.
Note that checking for %2E alone is not sufficient, a lot of installations (including Wikimedia) have an alias /wiki -> /w/index.php which can be used to exploit action=raw.
Well that should be fine; as long as we can see the "/wiki?/foo.bat" then we can identify that it doesn't contain an unencoded dot in the path.
It sounds like simply checking REQUEST_URI when available would eliminate a huge portion of our false positives that affect real-world situations. Apache is still the default web server in most situations for most folks, and of course runs our own production servers.
You mean by checking $_SERVER["SERVER_SOFTWARE"] or something to check if it's Apache that we're running under? I suppose that could work.
It's easy enough to find out if REQUEST_URI is available. What we don't know is whether REQUEST_URI is really what was sent to the server, or whether it has %2E converted to "." before PHP gets to see it.
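A minimal sketch of the check being discussed (hypothetical helper name, not the actual r89248 code), assuming the web server passes REQUEST_URI through untouched:

    // Sketch only: treat the request as a potential IE 6 extension-sniffing
    // target when the raw path contains no unencoded dot (e.g. "/wiki" or
    // "api%2Ephp"). If the path contains a real dot (e.g. "/w/api.php"),
    // IE 6 takes its extension from the path, not from the query string.
    function requestPathIsDotless() {
        if ( !isset( $_SERVER['REQUEST_URI'] ) ) {
            return true; // can't tell, so stay strict
        }
        $path = strtok( $_SERVER['REQUEST_URI'], '?' ); // path before the query string
        return strpos( $path, '.' ) === false;
    }
    // Caveat (Tim's point): on servers that decode %2E before PHP sees it,
    // such as IIS with FastCGI, this test would wrongly report "api%2Ephp"
    // as safe, so it can only be used where REQUEST_URI is known to be raw.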
Are there any additional exploit vectors for API output other than HTML tags mixed unescaped into JSON?
Yes, all other content types, as I said above.
Only as drive-by downloads, or as things that execute without interaction?
Presumably that depends on what plugins are registered. I think it's better to avoid taking risks like this unless there is some good reason for doing so. With a REQUEST_URI check in place, in addition to all the other mitigating measures we now have in place, overblocking should be vanishingly rare.
-- Tim Starling
I wrote:
It's easy enough to find out if REQUEST_URI is available. What we don't know is whether REQUEST_URI is really what was sent to the server, or whether it has %2E converted to "." before PHP gets to see it.
I installed IIS on my Windows VM and checked this. I installed PHP from the MSI installer, which uses FastCGI by default.
REQUEST_URI is indeed mangled on IIS. There's no way to tell if %2E was sent by the browser.
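To illustrate the difference with assumed example values (not captured output):

    // Browser sends:       GET /w/api%2Ephp?action=query HTTP/1.1
    // Apache:              $_SERVER['REQUEST_URI'] === '/w/api%2Ephp?action=query'
    // IIS + FastCGI (PHP): $_SERVER['REQUEST_URI'] === '/w/api.php?action=query'
    // The %2E-versus-dot distinction is gone before PHP ever runs.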
-- Tim Starling