Tomasz Finc wrote:
Russell Blau wrote:
"Tomasz Finc" tfinc@wikimedia.org wrote in message news:49FB3CA6.90602@wikimedia.org... Brion Vibber wrote:
El 5/1/09 5:51 PM, Andreas Meier escribió:
As of today, the dump process is not working correctly. It is running, but without any success.
Tomasz is on it... we've upgraded the machine they run on and it needs some more tweaking. :)
Indeed. The backup job was missing the php normalize library. Putting that into place now. Then I'll see if there is any db weirdness.
But, on the bright side, every database in the system now has a dump that was completed within the last nine hours (roughly). When's the last time you could say *that*? :-)
Mwahaha .. that would be awesome if it were actually useful data. The libs, binaries, and configs have all been fixed. I've run a couple of batch jobs for the small wikis [tokiponawiktionary, emlwiki] and am running [afwiki] right now to try a bigger data set. No issues so far, aside from the main page not noticing when they finish.
After afwiki finishes up, I'll remove the failed runs, as they don't provide us with any useful data. I'll set the worker to begin processing after that. Plus I'll actually document the setup.
afwiki finished just fine and all subsequent wikis have been happy. The only issue left is that the version of 7za on Ubuntu 8.04 ignores the system umask and decides that 600 is good enough for everyone. This is fixed in 4.58, and I've requested a backport from the Ubuntu folks at
https://bugs.edge.launchpad.net/hardy-backports/+bug/370618
In the meantime I've forced a chmod of 644 in the dumps script.
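The workaround amounts to forcing world-readable permissions on each archive after 7za writes it. A minimal sketch of that idea (the filename and path here are illustrative, not the actual dump script's; `touch`/`chmod 600` stand in for 7za's restrictive output):

```shell
#!/bin/sh
# Affected 7za versions (< 4.58) create archives with mode 600
# regardless of the system umask, so downstream web serving breaks.
dumpfile="/tmp/example-pages-articles.xml.7z"

touch "$dumpfile"       # stand-in for: 7za a "$dumpfile" pages-articles.xml
chmod 600 "$dumpfile"   # simulate 7za's owner-only default

# The workaround: explicitly make the artifact world-readable.
chmod 644 "$dumpfile"
stat -c '%a' "$dumpfile"   # prints 644 (GNU stat)
```

Once 7za 4.58 (which honors the umask) is backported, the explicit chmod becomes a harmless no-op and can eventually be dropped.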
--tomasz