On Sat, Jan 9, 2010 at 5:37 AM, Robert Rohde rarohde@gmail.com wrote:
I know that you didn't want or use a tarball, but requests for an "image dump" are not that uncommon, and the requester is often envisioning something like a tarball. That is arguably what the originator of this thread was asking for. I think you and I are mostly on the same page about the virtue of ensuring that images can be distributed and that monolithic approaches are bad.
Monolithic approaches may be bad, but they're better than nothing, which is what we have now.
Tar everything up into 250 gig tarballs and upload them to the Internet Archive. Then we're not dependent on the WMF to take the next step and convert those tarballs into something useful.
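Something along these lines would do for the chunking step (a rough sketch only: the /srv/images path, the 250 GiB cap, and the output names are illustrative, not anything WMF actually uses, and it assumes GNU tar on the path):

<?php
// Walk an image tree, bucket files until the running total hits
// ~250 GiB, then hand each bucket to GNU tar.
$limit = 250 * 1024 * 1024 * 1024; // bytes per tarball
$chunk = array();
$size  = 0;
$n     = 0;

$files = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator('/srv/images'));
foreach ($files as $f) {
    if (!$f->isFile()) continue;
    $chunk[] = $f->getPathname();
    $size   += $f->getSize();
    if ($size >= $limit) {
        writeTarball($chunk, $n++);
        $chunk = array();
        $size  = 0;
    }
}
if ($chunk) writeTarball($chunk, $n);

function writeTarball(array $files, $n) {
    // Feed tar a file list (-T) so we never exceed the shell's
    // argument-length limit.
    $list = tempnam(sys_get_temp_dir(), 'tarlist');
    file_put_contents($list, implode("\n", $files) . "\n");
    passthru('tar -cf ' . escapeshellarg(sprintf('images-%03d.tar', $n)) .
             ' -T ' . escapeshellarg($list));
    unlink($list);
}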
On Sat, Jan 9, 2010 at 7:44 AM, Platonides Platonides@gmail.com wrote:
Anthony wrote:
The bandwidth-saving way to do things would be to just allow mirrors to use hotlinking. Requiring a middleman to temporarily store images (many, and possibly even most, of which will never even be downloaded by end users) just wastes bandwidth.
There is already a way to instruct a wiki to use images from a foreign wiki as they are needed. With proper caching.
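For concreteness, the mechanism being described is MediaWiki's foreign file repo support; a typical LocalSettings.php entry looks roughly like this (sketched from the MediaWiki manual, and exact field names may vary by version):

$wgForeignFileRepos[] = array(
    'class'            => 'ForeignAPIRepo',
    'name'             => 'commonswiki',
    'apibase'          => 'http://commons.wikimedia.org/w/api.php',
    'fetchDescription' => true, // also fetch file description pages
);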
Umm, the "with proper caching" part is exactly the part I was saying wastes bandwidth. Sending the images to a middleman wastes bandwidth. In a proper caching scenario, the middleman sits somewhere the material would pass through anyway; that saves bandwidth. But that isn't how Instant Commons works.
The original version of Instant Commons had it right: the files were sent straight from the WMF to the client. That version still worked last I checked, but my understanding is that it was deprecated in favor of the bandwidth-wasting "store files on a caching middleman" approach.
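As I understand it, the direct-to-client behaviour can still be approximated by disabling the local thumbnail cache, something like the following (an unverified sketch based on the $wgForeignFileRepos manual page, where an apiThumbCacheExpiry of 0 is documented to hotlink thumbnails from the remote wiki rather than storing local copies):

$wgForeignFileRepos[] = array(
    'class'               => 'ForeignAPIRepo',
    'name'                => 'commonswiki',
    'apibase'             => 'http://commons.wikimedia.org/w/api.php',
    'apiThumbCacheExpiry' => 0, // 0 = hotlink thumbs, no local copy
);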
On 1.16 it will be even easier, as you will only need to set $wgUseInstantCommons = true; to use Wikimedia Commons images.
http://www.mediawiki.org/wiki/Manual:$wgUseInstantCommons
That assumes you're using MediaWiki.