> you may want to drop it on a Wikimedia repo
This is done (https://gerrit.wikimedia.org/r/#/c/63139/). Besides the rzip script, there's another one that does a simple dedupe of lines of text repeated between revisions, then gzips the result. It's slower than rzip at any given compression level, but still faster and smaller than straight bzip2 (in a test on 4 GB of enwiki, 50% smaller and 10x faster), and anyone with Python can run it.
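To give a rough idea of the approach, here's a minimal sketch of that kind of line-level dedupe followed by gzip. This is not the actual script from the change above; the names, the back-reference encoding, and the revision separator are all made up for illustration.

    import gzip

    def dedupe_revisions(revisions):
        """Encode each revision so that lines repeated from the previous
        revision become short back-references instead of literal text."""
        prev_index = {}
        for rev in revisions:
            out, new_index = [], {}
            for i, line in enumerate(rev.splitlines()):
                if line in prev_index:
                    out.append("@%d" % prev_index[line])   # repeated line -> reference
                else:
                    out.append("=" + line)                 # new/changed line -> literal
                new_index[line] = i
            prev_index = new_index
            yield "\n".join(out) + "\n\x00"                # \x00 separates revisions

    def compress(revisions, path):
        # gzip the deduped stream; most of the cross-revision redundancy is
        # already gone, so plain gzip stays fast and still shrinks it well
        with gzip.open(path, "wt", encoding="utf-8") as f:
            for encoded in dedupe_revisions(revisions):
                f.write(encoded)

The point is just that a dictionary of the previous revision's lines is enough to strip most of the repetition before the general-purpose compressor ever sees it.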
Again, if there's any lesson, it's that even pretty naive attempts to compress the redundancy between revisions produce real gains. Interested to see what the summer dumps project produces.
I've also expanded the braindump about this stuff on the dbzip2 page you linked to.