Platonides wrote:
Yuvi Panda wrote:
Hi, I'm Yuvi, a student looking forward to working with MediaWiki via this year's GSoC.
<snip/>
An idea I have been pondering is to pass the offset of the previous revision to the compressor, so that it needs to do much less work in its compression window. You would need something like 7z/xz so that the window can be big enough to contain at least the latest revision (its compression ratio is quite impressive, too: 1 TB down to 2.31 GB). Note that I haven't checked how feasible such a modification to the compressor would be.
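To make the idea concrete, here is a minimal sketch (not MediaWiki code, and the revision strings are made up) using zlib's preset-dictionary support; priming the compressor with the previous revision plays the same role as pointing a 7z/xz-style window at it, since the new revision then compresses mostly to back-references:

    import zlib

    # Two adjacent revisions of the same article (made-up sample text).
    prev_revision = b"== History ==\nThe town was founded in 1850 by settlers."
    curr_revision = b"== History ==\nThe town was founded in 1850 by Norwegian settlers."

    # Plain compression of the new revision, with no knowledge of the previous one.
    plain = zlib.compress(curr_revision, 9)

    # Prime the compressor with the previous revision as a preset dictionary,
    # so matches against it cost only back-references instead of literal bytes.
    comp = zlib.compressobj(level=9, zdict=prev_revision)
    primed = comp.compress(curr_revision) + comp.flush()

    # Decompression needs the same dictionary to reconstruct the revision.
    decomp = zlib.decompressobj(zdict=prev_revision)
    assert decomp.decompress(primed) == curr_revision

    print(f"plain: {len(plain)} bytes, primed: {len(primed)} bytes")

zlib's 32 KB window is of course far too small for real revisions; the point is only that a compressor which can "see" the previous revision has very little new data left to encode.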
Consider using pigz for the compression step.
+ Much (7x?) faster than gzip
+ Straightforward install
+ Stable
+ One or more threads per CPU (settable)
- Only compresses to .gz or .zz formats
- Linux only
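Driving it from a dump script is simple; a rough sketch (the file name and thread count below are just placeholders), where -p sets the number of compression threads, -9 asks for maximum compression, and -k keeps the original file:

    import subprocess

    dump_file = "pages-meta-history.xml"   # placeholder dump file name
    threads = 8                            # one or more threads per CPU, tune as needed

    # Produces dump_file + ".gz" alongside the original.
    subprocess.run(["pigz", "-p", str(threads), "-9", "-k", dump_file], check=True)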
Alternatively, could GNU make's parallel feature be used? For example, "make -j --load-average=30" will keep adding jobs in parallel until the load average reaches 30. A rough sketch of the same idea in a script follows the list below.
+ It's make
- It's make
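If make itself is unwelcome, the same load-capped scheduling can be approximated in a few lines of Python (the chunk file names and job list are purely illustrative); like make's --load-average, it only throttles on the 1-minute load average, so it starts at most one job per tick to avoid overshooting:

    import os
    import subprocess
    import time

    MAX_LOAD = 30.0  # same role as make's --load-average=30
    jobs = [["gzip", "-9", f"chunk-{i}.xml"] for i in range(100)]  # placeholder jobs

    running = []
    while jobs or running:
        # Drop jobs that have finished.
        running = [p for p in running if p.poll() is None]
        # Launch one more job only while the 1-minute load average is under the cap.
        if jobs and os.getloadavg()[0] < MAX_LOAD:
            running.append(subprocess.Popen(jobs.pop()))
        time.sleep(1)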