Steve Bennett wrote:
> On 8/24/06, Simetrical <Simetrical+wikitech@gmail.com> wrote:
>> This is a problem why, provided the server is careful about what it does with the response? It could potentially be used for, e.g., flooding a third party's server, but it wouldn't be hard to restrict the harm it could do (by throttling; see the sketch after the quoted text), and no one could do much more damage that way than they could without the WMF's help. Many large, reputable sites are willing to execute arbitrary GET requests -- it's necessary for spidering, to begin with.
>> Given that this feature is *not* currently implemented, I see no reason not to discuss its possible implications openly.
> I was trying to think of reasons too, and couldn't come up with much. Maybe, since the GET operation is not exactly the same as the HTTP upload operation (again, I don't know what I'm talking about), there might be a way of forcing MediaWiki to download something harmful to itself, such as an executable or a file that would cause a buffer overrun? What if you set up a dodgy server that said it was going to send you a nice little .gif file, and instead sent you 10 GB of executable?
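
As an aside on the throttling Simetrical mentions: here is a minimal sketch in Python (not MediaWiki's actual PHP code) of a per-host token bucket that would limit how fast upload-by-URL could fetch from any one server. RATE, BURST, and allow_fetch are illustrative assumptions, not real MediaWiki names.

import time
from collections import defaultdict

RATE = 1.0   # tokens refilled per second, per host (assumed policy)
BURST = 5.0  # bucket capacity, i.e. the burst allowed per host

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_fetch(host):
    """Return True if one more fetch to `host` is within the rate limit."""
    b = _buckets[host]
    now = time.monotonic()
    # Refill according to elapsed time, capped at the bucket size.
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0  # spend one token for this request
        return True
    return False  # bucket empty: refuse (or queue) the fetch
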
My implementation uses a hard limit (default: 100 MB) and won't copy any more data than that, even if the size is reported wrong (maliciously or otherwise).
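
For illustration, a rough sketch of that safeguard in Python (the real implementation is PHP; fetch_capped, the chunk size, and the error handling are assumptions): stream the response and count the bytes yourself, so a wrong or malicious Content-Length header can't push the download past the cap.

import urllib.request

MAX_BYTES = 100 * 1024 * 1024  # hard limit; 100 MB, matching the default above
CHUNK = 64 * 1024              # read the response in 64 KB pieces

def fetch_capped(url, dest_path, max_bytes=MAX_BYTES):
    """Copy at most max_bytes from url to dest_path; raise if the cap is hit."""
    written = 0
    with urllib.request.urlopen(url) as resp, open(dest_path, "wb") as out:
        while True:
            chunk = resp.read(CHUNK)
            if not chunk:
                break  # server closed the stream: download complete
            written += len(chunk)
            if written > max_bytes:
                # Count bytes ourselves instead of trusting Content-Length.
                raise ValueError("remote file exceeds %d-byte limit" % max_bytes)
            out.write(chunk)
    return written
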
Magnus