Platonides wrote:
Robert Rohde wrote:
Since people are redesigning the dumps right now, might I suggest that providing better integration / information on Commons-hosted images would actually be useful. As far as I know, the current system has no way to distinguish between Commons-hosted images and missing images except by downloading the Commons dump files. That can be frustrating, since the Commons dumps are larger (and hence more trouble to work with) than all but a handful of other wikis' dumps.
-Robert Rohde
You only need the image.sql dump from Commons to determine whether the image exists there (it also includes other useful and not-so-useful data such as file type, image size, and metadata). http://download.wikimedia.org/commonswiki/20090510/
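As a rough illustration, here is a minimal Python sketch that pulls the file names out of a gzipped image.sql dump without importing it into MySQL. It assumes the usual mysqldump layout (extended INSERT statements for the image table, with img_name as the first quoted column of each row); the file name used below is a placeholder, and the quote handling is deliberately simplistic:

```python
import gzip
import re

def commons_image_names(dump_path):
    """Yield file names (img_name) from a gzipped image.sql dump.

    Assumes the standard mysqldump format: extended INSERT statements
    where img_name is the first, single-quoted column of each row tuple.
    """
    # Matches the start of a row tuple: ('Some_name.jpg',
    row_start = re.compile(r"\('((?:[^'\\]|\\.)*)',")
    with gzip.open(dump_path, mode="rt", encoding="utf-8", errors="replace") as f:
        for line in f:
            if not line.startswith("INSERT INTO"):
                continue
            for match in row_start.finditer(line):
                # Undo the escaping mysqldump applies inside quoted values.
                name = match.group(1).replace("\\'", "'").replace("\\\\", "\\")
                yield name

# Placeholder file name; use the actual file from the dump directory above.
commons_files = set(commons_image_names("commonswiki-20090510-image.sql.gz"))
print(len(commons_files), "files on Commons")
```

Building an in-memory set like this is only practical as long as the list of names fits in RAM; otherwise the dump can be loaded into a real MySQL table and queried instead.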
This is strange: the image dump is larger than the pages-articles dump. I assume this is because the first dump is in SQL format while the second is in XML, which is more efficient. Nevertheless, thanks for the hint; using that file, the import should be faster.
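For what it's worth, a sketch of the check this enables, using the commons_files set built from image.sql above; the local_files set and the example titles are made-up placeholders:

```python
def classify_image(title, local_files, commons_files):
    """Classify an image reference found on a wiki page.

    MediaWiki stores titles with underscores and an uppercase first
    letter, so the name is normalised the same way before lookup
    (the simple .upper() is only an approximation for non-ASCII names).
    """
    name = title.replace(" ", "_")
    name = name[:1].upper() + name[1:]
    if name in local_files:
        return "local"
    if name in commons_files:
        return "commons"
    return "missing"

# Placeholder data: local_files would come from the local wiki's own image.sql.
local_files = {"Local_diagram.png"}
print(classify_image("Local diagram.png", local_files, commons_files))
print(classify_image("Example.jpg", local_files, commons_files))
```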
Christian