When a database dump fails (counted in the "20 items failed" line on
http://download.wikimedia.org/backup-index.html),
the dump script simply continues with the next database in turn.
Apparently they often fail in groups. My guess is that the failure
doesn't depend on the database itself (e.g. svwiki) but on some
other circumstance; once that circumstance is resolved, the dumps
succeed again.
Right now (20080529), frwiki is being dumped. The previous
successful dumps of frwiki were done on 20080514 and 20080420; see
http://download.wikimedia.org/frwiki/
The last successful dump of svwiki was made on 20080406; the one on
20080425 failed, and the one on 20080524 was aborted; see
http://download.wikimedia.org/svwiki/
Hopefully, but only hopefully, the next svwiki dump will succeed in
just a few days or weeks. But who knows; maybe it too will be
aborted while the next dump of frwiki runs instead.
What I want is for the dump script to be rewritten so that it
prioritizes those databases (websites) that haven't been
successfully dumped in a long time. It seems unfair that the French
should get a dump every fortnight while we Swedes wait almost two
months.
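To make the request concrete, here is a minimal sketch of such a
priority rule, assuming the scheduler knows each wiki's last
successful dump date. The Python code and every name in it are my
own invention for illustration (the real dump scripts are surely
structured differently); the dates are the ones cited above.

from datetime import date

# Last successful dump per database (dates from the examples above).
last_success = {
    "frwiki": date(2008, 5, 14),
    "svwiki": date(2008, 4, 6),
}

def next_database_to_dump(last_success):
    """Return the database whose last successful dump is oldest."""
    return min(last_success, key=last_success.get)

print(next_database_to_dump(last_success))  # prints svwiki, not frwiki

Under a rule like this, svwiki, having the oldest successful dump,
would go to the front of the queue instead of frwiki.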
Beyond this, I want the dumps to fail less often. Why do they fail?
Has this been investigated? What can be done about it?
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik -
http://aronsson.se