Robert Rohde wrote:
The largest gains are almost certainly going to be in parallelization though. A single monolithic dumper is impractical for enwiki.
-Robert Rohde
Using dumps compressed in independent blocks, like the ones I used for http://lists.wikimedia.org/pipermail/wikitech-l/2009-January/040812.html, would allow several processes/computers to write to the same dump at different offsets, and to read the previous one from different positions as well.
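As a rough illustration of the idea, here is a minimal Python sketch: each block of pages is compressed as a separate gzip member, and a small index of (offset, length) pairs lets a reader decompress any single block without touching the rest of the file. The file layout, block granularity, and names here are my own assumptions for the example, not the format of the actual dump scripts.

import gzip

def write_blocks(path, blocks):
    """Compress each block as an independent gzip member; return (offset, length) per block."""
    index = []
    with open(path, "wb") as f:
        for block in blocks:
            data = gzip.compress(block)
            index.append((f.tell(), len(data)))
            f.write(data)
    return index

def read_block(path, offset, length):
    """Seek straight to one block and decompress it in isolation."""
    with open(path, "rb") as f:
        f.seek(offset)
        return gzip.decompress(f.read(length))

pages = [b"<page>1</page>", b"<page>2</page>", b"<page>3</page>"]
index = write_blocks("dump.xml.gz", pages)
assert read_block("dump.xml.gz", *index[1]) == pages[1]

Since concatenated gzip members still form one valid gzip stream, the whole file also remains readable sequentially with plain zcat; the per-block index is only needed for random access.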
As sharing a database transaction between different servers would be tricky, each process should probably work from the previously dumped page.sql.gz instead, so they all agree on the same set of pages.
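For instance (a hypothetical coordination scheme, not anything in the current scripts), a coordinator could split the page IDs recovered from the old page.sql.gz into contiguous ranges, one per worker, so no shared transaction is needed:

def partition(page_ids, n_workers):
    """Split a list of page IDs into n_workers contiguous, sorted ranges."""
    page_ids = sorted(page_ids)
    size = -(-len(page_ids) // n_workers)  # ceiling division
    return [page_ids[i:i + size] for i in range(0, len(page_ids), size)]

# e.g. IDs taken from the previous page.sql.gz
print(partition([1, 2, 5, 9, 12, 30], 3))  # [[1, 2], [5, 9], [12, 30]]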
<whisper>Patches on bugs 16082 and 16176 to add Export features are awaiting review</whisper>