On 14/03/13 23:39, Ariel T. Glenn wrote:
On 14-03-2013, Thursday, at 23:24 +0000, Neil Harris wrote:
Dear Wikimedia ops team,
The most recent enwiki dump now seems to have finished _almost_ successfully: the dumps of the database metadata tables, such as the pages table and the various links tables, have almost all failed.
I wonder if there is any chance someone could give this a kick and re-run the dumps of those tables to finish the job?
It's rerunning the tables now.
Thank you!
Since this seems to have happened several times now, would it be worth considering automated retries in this sort of situation, to improve dump reliability in the future without needing manual intervention?
Because the reasons for failure vary (ranging from hardware failure to databases being unreachable to broken MediaWiki code being deployed), automating restarts isn't practical; typically someone needs to determine the underlying cause and take appropriate action.
Ariel
I should have thought of that -- sorry for the overly simplistic suggestion.
Thanks again,
Neil