On Sat, 14-08-2010, at 23:25 -0700, Jamie Morken wrote:
Hi,
----- Original Message -----
From: emijrp emijrp@gmail.com
Date: Friday, August 13, 2010 4:48 am
Subject: [Xmldatadumps-l] Dumps, dumps, dumps
To: xmldatadumps-l@lists.wikimedia.org
Hi all;
Yesterday, I wrote a post[1] with some links to current dumps, old dumps, and other raw data like Domas' visit logs, as well as some links to the Internet Archive where we can download some historical dumps. Please, can you share your links?
Also, what about making a tarball with thumbnails from Commons? 800x600 would be a nice (re)solution, to avoid a multi-TB dump. Otherwise, an image dump will probably never be published. Commons is growing by ~5,000 images per day. It is scary.
Yes, publicly available tarballs of image dumps would be great. Here's what I think it would take to implement:
1. allocate the server space for the image tarballs and the bandwidth for us to download them
2. decide what tarballs will be made available (i.e. separated by wiki or whole Commons, thumbnails or 800x600 max, etc.)
3. write the script(s) for collecting the image lists, automating the image scaling and creating the tarballs (a rough sketch of this step follows below)
4. done!
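As a very rough illustration of step 3, here is a minimal Python sketch that reads a list of local image files, scales each one to fit within 800x600, and packs the results into a tarball. The file names (image_list.txt, commons-thumbs.tar.gz) and the use of Pillow are my own assumptions for the example; this is not anything Wikimedia actually runs.

#!/usr/bin/env python
# Sketch: scale a list of images to <=800x600 and bundle them in a tarball.
# Input/output names below are illustrative assumptions.

import os
import tarfile

from PIL import Image  # Pillow

IMAGE_LIST = "image_list.txt"      # one local file path per line (assumed input)
SCALED_DIR = "scaled"              # where resized copies are written
TARBALL = "commons-thumbs.tar.gz"  # output archive name (made up)
MAX_SIZE = (800, 600)              # maximum width and height

def scale_image(src_path, dst_dir, max_size=MAX_SIZE):
    """Write a copy of src_path into dst_dir, shrunk to fit max_size."""
    img = Image.open(src_path)
    img.thumbnail(max_size)  # preserves aspect ratio, only ever shrinks
    dst_path = os.path.join(dst_dir, os.path.basename(src_path))
    img.save(dst_path)
    return dst_path

def main():
    os.makedirs(SCALED_DIR, exist_ok=True)
    with open(IMAGE_LIST) as f:
        paths = [line.strip() for line in f if line.strip()]

    with tarfile.open(TARBALL, "w:gz") as tar:
        for path in paths:
            try:
                scaled = scale_image(path, SCALED_DIR)
            except OSError as err:
                # Skip files Pillow cannot read (corrupt, unsupported format, ...)
                print("skipping %s: %s" % (path, err))
                continue
            tar.add(scaled, arcname=os.path.basename(scaled))

if __name__ == "__main__":
    main()

In practice the image list would come from the wiki database or API rather than a flat file, and the scaling would need to run in parallel to keep up with Commons' growth, but the overall pipeline would look like this.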
None of those tasks is really that difficult; the hard part is figuring out why image tarballs used to be available but aren't anymore, especially when there is apparently adequate server space and bandwidth. I guess it is one more thing that could break, and then people would complain about it not working.
Images take up 8 TB or more these days (of course that includes deleted files and earlier versions, but those aren't the bulk of it). Hosting 8 TB tarballs seems out of the question... who would download them anyway?
Having said that, hosting small subsets of images is quite possible and is something that has been discussed in the past. I would love to hear which subsets of images people want and would actually use.
Ariel