As I said below, providing multiterabyte dumps does not seem reasonable
to me. Monthly incrementals don't provide a workaround, unless you are
suggesting that we put dumps online for every month since the beginning
of the project. I think that a much more workable way to jump-start a
mirror is to copy directly to disks in the datacenter, for an
organization which will provide public access to its copy. This
requires three things: 1) an organization that wants to host such a
mirror, 2) that organization sending us disks, 3) me clearing it with Rob and with
our datacenter tech, though he's agreed to this in principle in the past.
On 17-11-2011 (Thursday), at 14:11 +0100, emijrp wrote:
People can't mirror Commons if there is no public image dump. As there
is no public image dump, people don't care about mirroring. And so on...

You can offer monthly incremental image dumps. Until mid-2008, monthly
uploads were under 100 GB; recently they are in the 200-300 GB
range. People are mirroring Domas' visit logs at the Internet Archive; OK,
the Commons monthly size in this case is about 10x that, but it is not
impossible. Archive Team has mirrored GeoCities (0.9 TB), Yahoo!
Videos (20 TB), Jamendo (2.5 TB) and other huge sites. So, if you put
those image dumps online, they are going to rage-download them all.
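A rough back-of-envelope check of those figures (the monthly averages of
~50 GB and ~250 GB are assumptions consistent with the numbers above; the
2005 start date is also assumed, not a known figure):

    # Back-of-envelope estimate of cumulative monthly image-dump size.
    # Assumptions (not authoritative): uploads average ~50 GB/month from
    # 2005 through mid-2008, then ~250 GB/month through Nov 2011, roughly
    # matching the "lower than 100 GB" and "200-300 GB" figures above.

    months_early = (2008 - 2005) * 12 + 6       # Jan 2005 .. Jun 2008
    months_late = (2011 - 2008) * 12 - 6 + 11   # Jul 2008 .. Nov 2011

    total_gb = months_early * 50 + months_late * 250

    print(f"{months_early + months_late} monthly dumps, "
          f"~{total_gb / 1000:.1f} TB in total")
    # -> roughly 12 TB overall, but each individual increment stays
    #    small enough (<= ~300 GB) for volunteers to mirror piecemeal.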
You could start by offering full-resolution monthly dumps up to 2007 or
so. But, man, we have to restart this sooner or later.
2011/11/17 Ariel T. Glenn <ariel(a)wikimedia.org>
I had a quick look and it turns out that the English language Wikipedia
uses over 2.8 million images today. So, as you point out, an offline
reader that just used thumbnails would still have to be selective about
its image use.

In any case, putting together collections of thumbs doesn't remove the
need for a mirror of the originals, which I would really like to see.
On 17-11-2011 (Thursday), at 01:46 +0100, Erik wrote:
> Providing multiple terabyte sized files for download doesn't make
> any kind of sense to me. However, if we get concrete proposals for
> categories of Commons images people really want and would use, we can
> put those together. I think this has been said before on wikitech-l
> if not here.
There is another way to cut down on download size, which would serve a
whole class of content re-users, e.g. offline readers.

For offline readers it is not so important to have images of 20 MB
each; what matters is to have pictures at all, preferably just KBs in
size. A download of all images, scaled down to, say, a small fixed
resolution, would be quite appropriate for many uses. Maps and diagrams
would not survive this scaling (the text becomes illegible), but they
are very compact already.
In fact, the compression ratio of each image is a very good predictor
of the type of content.
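A minimal sketch of both ideas (downscaling, and compression ratio as a
crude content hint), assuming Pillow is available; the 0.05 threshold,
the 1024 px bound, and the file names are illustrative guesses, not
tuned or tested values:

    # Sketch: downscale an image, and use its compression ratio (bytes
    # on disk vs. uncompressed pixel data) as a crude hint of content
    # type: flat-color maps/diagrams compress far better than photos.
    import os
    from PIL import Image

    def compression_ratio(path):
        with Image.open(path) as im:
            raw = im.width * im.height * 3      # uncompressed RGB bytes
        return os.path.getsize(path) / raw

    def make_thumb(src, dst, max_px=1024):
        with Image.open(src) as im:
            im.thumbnail((max_px, max_px))      # preserves aspect ratio
            im.convert("RGB").save(dst, "JPEG", quality=80)

    path = "example.jpg"                        # hypothetical input file
    if compression_ratio(path) < 0.05:          # illustrative threshold
        print("compresses well: likely a map/diagram, keep as-is")
    else:
        make_thumb(path, "example.thumb.jpg")   # likely a photo: shrink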
In 2005 I distributed a DVD with all unabridged texts of Wikipedia and
all 320,000 images, to be loaded on a 4 GB CF card for a handheld.
Now we have 10 million images on Commons, so even scaled-down images
would need some filtering, but any such collection would still be
100-1000 times smaller in size.
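A quick sanity check of that factor, with assumed round numbers (about
2 MB per average original, i.e. a multi-terabyte collection spread over
~10 million files, against scaled-down images of a few KB each; none of
these figures are measured here):

    # Sanity check of the "100-1000 times smaller" claim, using the
    # assumed figures above: ~2 MB average original vs. 2-20 KB thumbs.
    avg_original_kb = 2000
    for thumb_kb in (2, 20):
        print(f"{thumb_kb:>2} KB thumbs -> "
              f"{avg_original_kb // thumb_kb}x smaller per image")
    # -> 1000x and 100x: consistent with the 100-1000x range above.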
Xmldatadumps-l mailing list