Daniel Kinzler wrote:
Platonides wrote:
Process? Tools?
It would just be making a 'bigupload' right for people to bypass file
size restrictions (or have an extremely high one).
Then give it to sysops or a new group.
Tell me if I'm wrong, but as far as I know, the file size is limited by PHP, not
by MediaWiki. And it has to be: if we allowed huge files to be uploaded
before they are finally rejected by MediaWiki, that would already be an attack
vector - because, afaik, PHP had the dumb idea of buffering uploads in RAM. So,
to kill the server, just upload a 5GB file.
Really? It makes sense for text POSTs but it's not very smart for files...
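For reference, the PHP-side caps being discussed live in php.ini; a minimal example (the directive names are real, the values here are purely illustrative):

```ini
; php.ini - directives that limit upload size
upload_max_filesize = 100M   ; maximum size of a single uploaded file
post_max_size = 110M         ; maximum size of the whole POST body;
                             ; must be larger than upload_max_filesize
```

MediaWiki can only reject a file after PHP has already accepted the request, which is why raising the limit has to happen at this layer first.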
Of course we
would also need an interface that can at least resume
interrupted uploads, to make it really useful.
That would be helpful. Also helpful would be the ability to upload archive files
containing multiple images. If we have a way to deal with uploading big files,
this would become feasible.
I did a proposal years
ago based on an FTP upload interface. Maybe you are referring to
something similar. Please keep me posted.
Upload from URL and Firefogg should alleviate the issue, though.
A relatively simple way would be to allow big files to be uploaded via FTP or
any other protocol, to "dumb storage", and then transfer and import them
server-side. I'd propose a ticket system for this: people with a special right
can generate a ticket good for uploading one file, for instance. But it's just
an idea so far.
-- daniel
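The ticket idea above could be sketched roughly like this (a hypothetical sketch only - the names, in-memory storage, and TTL are all made up for illustration; a real system would persist tickets and check the special right):

```python
import secrets
import time

TICKET_TTL = 24 * 3600  # tickets expire after a day (arbitrary choice)
_tickets = {}           # token -> (issuer, expiry); illustrative in-memory store

def issue_ticket(issuer: str) -> str:
    """Mint a single-use upload ticket (caller would need the special right)."""
    token = secrets.token_urlsafe(16)
    _tickets[token] = (issuer, time.time() + TICKET_TTL)
    return token

def redeem_ticket(token: str) -> bool:
    """Consume the ticket; succeeds exactly once per valid, unexpired token."""
    entry = _tickets.pop(token, None)
    return entry is not None and entry[1] > time.time()
```

Redeeming pops the token, so a ticket authorizes exactly one big upload and cannot be replayed.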
I was thinking of an FTP server where you log in with your wiki
credentials and get a private temporary folder. You can view pending
files, delete, rename, append and create new ones (you can't read them
though, to avoid being used as a sharing service).
You are given a quota so you could upload a few large files or many
small ones. Files get deleted after X time untouched.
When you go to the page name it would have on the wiki, there's a message
reminding you of the pending upload and inviting you to finish it, where
you get the normal upload fields. After transferring, the file becomes
public and the quota is returned to you.
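The quota-and-expiry accounting described above might look something like this (again just a sketch under assumed numbers - the quota size, idle limit, and class names are invented for illustration):

```python
import time

QUOTA_BYTES = 4 * 1024**3   # e.g. 4 GiB of staging space per user (illustrative)
MAX_IDLE = 7 * 24 * 3600    # files untouched this long get purged

class Staging:
    """Per-user private staging area: uploads consume quota,
    finishing (publishing) or expiry gives it back."""

    def __init__(self):
        self.files = {}  # (user, name) -> {"size": int, "mtime": float}

    def used(self, user):
        """Bytes of quota currently consumed by this user's pending files."""
        return sum(f["size"] for (u, _), f in self.files.items() if u == user)

    def put(self, user, name, size):
        """Accept an upload into the private folder if quota allows."""
        if self.used(user) + size > QUOTA_BYTES:
            raise ValueError("quota exceeded")
        self.files[(user, name)] = {"size": size, "mtime": time.time()}

    def finish(self, user, name):
        """Publish the file on the wiki; its quota is returned."""
        return self.files.pop((user, name))

    def purge(self, now=None):
        """Delete files untouched for longer than MAX_IDLE."""
        now = time.time() if now is None else now
        for key in [k for k, f in self.files.items()
                    if now - f["mtime"] > MAX_IDLE]:
            del self.files[key]
```

Appending to a pending file (to resume an interrupted transfer) would just update its mtime, keeping it alive past the purge window.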
Having a specific protocol for uploads would also allow storing them directly
on the storage nodes, instead of writing them via NFS from the Apaches.