2009/7/18 Alexandre Dulaunoy a@foo.be:
I was wondering if it would be possible to allow web robots to access http://upload.wikimedia.org/wikipedia/commons/ to gather and mirror the media files. Since this is plain HTTP, the mirroring could benefit from HTTP's object caching mechanisms, instead of relying on a large dump containing all the media files, which is more difficult to cache and update.
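For illustration, here is a minimal sketch of that conditional-fetch idea in Python. It only uses the standard If-None-Match / If-Modified-Since validators, not any Wikimedia-specific tooling, and the commons file path at the bottom is hypothetical:

import urllib.error
import urllib.request

def fetch_if_changed(url, etag=None, last_modified=None):
    """Fetch url, or get a 304 back if our cached copy is still current."""
    req = urllib.request.Request(url)
    if etag:
        req.add_header("If-None-Match", etag)          # validator from a prior response
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req) as resp:
            # 200: the object is new or changed; store the body and the new validators
            return (resp.status, resp.read(),
                    resp.headers.get("ETag"), resp.headers.get("Last-Modified"))
    except urllib.error.HTTPError as err:
        if err.code == 304:
            # Not Modified: the cached copy is still good, nothing was transferred
            return 304, None, etag, last_modified
        raise

# Hypothetical example path; a real mirror would loop over its stored
# validators for every file it holds.
status, body, etag, lm = fetch_if_changed(
    "http://upload.wikimedia.org/wikipedia/commons/a/ab/Example.jpg")

A mirror kept up to date this way only re-downloads objects that actually changed, which is the advantage over re-fetching a monolithic dump.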
I see lots of files from upload.wikimedia.org in Google Image Search already. Is that actually forbidden by our robots.txt?
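One quick way to check is the robots.txt parser in Python's standard library; the user agent and the file path below are just examples:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://upload.wikimedia.org/robots.txt")
rp.read()

# True means the rules as published allow this agent to fetch this URL.
print(rp.can_fetch("Googlebot-Image",
                   "http://upload.wikimedia.org/wikipedia/commons/a/ab/Example.jpg"))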
It'd actually be better if Google properly indexed the text pages whose names end in .jpg or whatever ... but they're aware we'd like that, so it's up to them.
- d.