> Hi,
>
> ----- Original Message -----
> From: emijrp <emijrp@gmail.com>
> Date: Friday, August 13, 2010 4:48 am
> Subject: [Xmldatadumps-l] Dumps, dumps, dumps
> To: xmldatadumps-l@lists.wikimedia.org
>
> > Hi all;
> >
> > Yesterday, I wrote a post[1] with some links to current dumps, old
> > dumps, and other raw data like Domas's visit logs, plus some links
> > to the Internet Archive where we can download some historical
> > dumps. Please, can you share your links?
> >
> > Also, what about making a tarball with thumbnails from Commons?
> > 800x600 would be a nice (re)solution, to avoid a terabyte-scale
> > dump. Otherwise an image dump will probably never be published.
> > Commons is growing by ~5,000 images per day. It is scary.
>
> Yes, publicly available tarballs of image dumps would be great.
> Here's what I think it would take to implement:
>
> 1. allocate the server space for the image tarballs
> 2. allocate the bandwidth for us to download them
> 3. decide what tarballs will be made available (i.e. separated by
> wiki or the whole of Commons, thumbnails or 800x600 max, etc.)
> 4. write the script(s) for collecting the image lists, automating the
> image scaling and creating the tarballs (rough sketch below)
> 5. done!
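>
> For step 4, here is a very rough sketch of what the scaling-and-tarring
> part could look like (Python with Pillow; every filename below is a
> placeholder, and a real script would also have to handle non-raster
> formats like SVG and resume after failures):
>
>     import tarfile
>     from pathlib import Path
>     from PIL import Image
>
>     IMAGE_LIST = Path("commonswiki-image-list.txt")  # hypothetical: one path per line
>     SCALED_DIR = Path("scaled")                      # working dir for resized copies
>     TARBALL = "commonswiki-images-800x600.tar"       # hypothetical output name
>
>     SCALED_DIR.mkdir(exist_ok=True)
>     with tarfile.open(TARBALL, "w") as tar:
>         for line in IMAGE_LIST.read_text().splitlines():
>             src = Path(line.strip())
>             if not src.is_file():
>                 continue  # file was deleted or moved; skip rather than abort
>             dst = SCALED_DIR / src.name
>             with Image.open(src) as im:
>                 im.thumbnail((800, 600))  # downscale in place, keeps aspect ratio
>                 im.save(dst)
>             tar.add(dst, arcname=src.name)
>
> Per-wiki tarballs would then just be the same loop run once per wiki
> with a different image list and tarball name.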
>
> None of those tasks is really that difficult; the hard part is
> figuring out why image tarballs used to be available but aren't
> anymore, especially when there is apparently adequate server space
> and bandwidth. I guess it is one more thing that could break, and
> then people would complain about it not working.
>