On 07/20/2010 10:24 PM, Tim Starling wrote:
> The problem is just that increasing the limits in our main Squid and Apache pool would create DoS vulnerabilities, including the prospect of "accidental DoS". We could offer this service via another domain name, with a specially-configured webserver, and a higher level of access control compared to ordinary upload to avoid DoS, but there is no support for that in MediaWiki.
> We could theoretically allow uploads of several gigabytes this way, which is about as large as we want files to be anyway. People with flaky internet connections would hit the problem of the lack of resuming, but it would work for some.
Yes, in theory we could do that ... or we could support a simple chunked uploading protocol for which basic support is *already* written, and which will be supported in native js over time.
The firefogg protocol is almost identical to the plupload protocol. The main difference is that firefogg requests a unique upload parameter / url back from the server, so that if you uploaded identically named files they would not mangle each other's chunking. From a quick look at plupload's upload.php, it appears plupload relies on the filename plus an extra "chunk" request parameter (!= 0). The other difference is that firefogg sends an explicit done = 1 request parameter to signify the end of chunks.
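To make the comparison concrete, here is a rough client-side sketch of the two styles in TypeScript. It is only an illustration: the parameter names (chunkUrl, done, chunk, chunks) and the init action are assumptions for the sketch, not the exact wire format of either protocol.

```typescript
// Illustrative only: parameter names and the init action are assumptions,
// not the real firefogg or plupload wire format.
const CHUNK_SIZE = 1024 * 1024; // 1 MiB per chunk (arbitrary)

// firefogg-style: ask the server for a unique upload url first, then POST
// chunks to that url; the final chunk carries done=1.
async function uploadFirefoggStyle(apiUrl: string, file: File): Promise<void> {
  // The unique url means two uploads of identically named files
  // cannot mangle each other's chunks.
  const init = await fetch(
    `${apiUrl}?action=chunk-init&filename=${encodeURIComponent(file.name)}`,
    { method: 'POST' }
  );
  const { chunkUrl } = (await init.json()) as { chunkUrl: string };

  for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
    const form = new FormData();
    form.append('chunk', file.slice(offset, offset + CHUNK_SIZE), file.name);
    if (offset + CHUNK_SIZE >= file.size) {
      form.append('done', '1'); // explicit end-of-chunks marker
    }
    await fetch(chunkUrl, { method: 'POST', body: form });
  }
}

// plupload-style: every chunk goes to the same url, identified by the
// filename plus a chunk index; there is no separate "done" signal, the
// server infers completion from the index reaching the chunk count.
async function uploadPluploadStyle(uploadUrl: string, file: File): Promise<void> {
  const chunks = Math.ceil(file.size / CHUNK_SIZE);
  for (let i = 0; i < chunks; i++) {
    const form = new FormData();
    form.append('name', file.name);
    form.append('chunk', String(i));       // which piece this is
    form.append('chunks', String(chunks)); // total number of pieces
    form.append('file', file.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE), file.name);
    await fetch(uploadUrl, { method: 'POST', body: form });
  }
}
```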
We requested feedback on adding a chunk index to the firefogg chunk protocol, sent with each posted chunk, to guard against cases where the outer caches report an error but the backend got the file anyway. That way the backend can check the chunk index and avoid appending the same chunk twice, even if errors at other layers of the server response cause the client to resend the same chunk.
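For that duplicate-chunk guard, a minimal server-side sketch of checking the chunk index before appending might look like the following. Again this is illustrative: the in-memory session map stands in for whatever per-upload state a real backend would keep.

```typescript
// Minimal sketch of idempotent chunk handling, assuming the client sends an
// explicit chunk index with every POST. In-memory storage is illustrative;
// a real backend would track this per upload key in its file store.
interface UploadSession {
  nextIndex: number;   // index of the next chunk we expect
  parts: Uint8Array[]; // chunk data appended so far, in order
}

const sessions = new Map<string, UploadSession>();

// Returns true if the chunk was appended, false if it was a duplicate resend.
function receiveChunk(uploadKey: string, chunkIndex: number, data: Uint8Array): boolean {
  let session = sessions.get(uploadKey);
  if (!session) {
    session = { nextIndex: 0, parts: [] };
    sessions.set(uploadKey, session);
  }

  if (chunkIndex < session.nextIndex) {
    // The backend already has this chunk: an outer cache reported an error,
    // so the client resent it. Acknowledge without appending a second copy.
    return false;
  }
  if (chunkIndex > session.nextIndex) {
    // A gap means a chunk was lost somewhere; refuse rather than corrupt the file.
    throw new Error(`expected chunk ${session.nextIndex}, got ${chunkIndex}`);
  }

  session.parts.push(data);
  session.nextIndex += 1;
  return true;
}
```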
Either way, if Tim says the plupload chunk protocol is "superior", then why discuss it further? We can easily shift the chunks api to that and *move forward* with supporting larger file uploads. Is that at all agreeable?
peace, --michael