Hello!
While working on my improvements to MediaWiki Import&Export, I've discovered a feature that is totally new to me: the two-phase backup dump. That is, the first-pass dumper creates an XML file without page texts, and the second-pass dumper adds the page texts.
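To make sure I understand the mechanism, here is a rough sketch of what I think the two passes do. This is just a toy illustration in Python; the names and the in-memory text store are mine, not MediaWiki's actual code:

    import xml.etree.ElementTree as ET

    def first_pass(pages, stub_path):
        # Pass 1: write a "stub" dump with structure and metadata only.
        root = ET.Element("mediawiki")
        for page in pages:
            p = ET.SubElement(root, "page")
            ET.SubElement(p, "title").text = page["title"]
            for rev in page["revisions"]:
                r = ET.SubElement(p, "revision")
                ET.SubElement(r, "id").text = str(rev["id"])
                # note: no <text> element yet -- that is the expensive part
        ET.ElementTree(root).write(stub_path)

    def second_pass(stub_path, full_path, fetch_text):
        # Pass 2: re-read the stub and fill in <text> for each revision id.
        tree = ET.parse(stub_path)
        for rev in tree.getroot().iter("revision"):
            rev_id = int(rev.find("id").text)
            ET.SubElement(rev, "text").text = fetch_text(rev_id)
        tree.write(full_path)

    # toy usage: a dict stands in for the revision text storage
    texts = {1: "first revision text", 2: "second revision text"}
    pages = [{"title": "Example", "revisions": [{"id": 1}, {"id": 2}]}]
    first_pass(pages, "stub.xml")
    second_pass("stub.xml", "full.xml", lambda rid: texts[rid])

If that picture is right, the metadata pass is cheap on the database, and only the second pass has to pull the heavy text blobs.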
I have several questions about it: what is it intended for? Is it a sort of optimisation for large databases, and if so, why was this particular method of optimisation chosen?
Also, does anyone use it? (Does Wikimedia use it?)