I attached a Python script that compresses/decompresses files in 10MB chunks, and stores info about block boundaries so you can read random parts of the file. It's set up to use rzip or xdelta3 (a different package from xdelta) for compression, so you'll want one or both. It's public domain, with no warranty. No docs either; the command-line syntax tries to be like gzip's.
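To give the flavor of the approach, here's a from-scratch sketch of the chunk-plus-index idea (this is not the attached script; the zlib stand-in for rzip/xdelta3 and the index being returned instead of stored in the file are my simplifications):

    import zlib

    CHUNK = 10 * 1024 * 1024  # 10MB of uncompressed data per block

    def compress(infile, outfile):
        # offsets[i] = where compressed block i starts in the output;
        # keeping this index around is what makes random reads possible
        offsets, pos = [], 0
        with open(infile, 'rb') as src, open(outfile, 'wb') as dst:
            while True:
                block = src.read(CHUNK)
                if not block:
                    break
                comp = zlib.compress(block)
                offsets.append(pos)
                dst.write(comp)
                pos += len(comp)
        return offsets

    def read_block(outfile, offsets, i):
        # seek straight to block i and decompress just that one block
        with open(outfile, 'rb') as f:
            f.seek(offsets[i])
            end = offsets[i + 1] if i + 1 < len(offsets) else None
            raw = f.read((end - offsets[i]) if end is not None else -1)
        return zlib.decompress(raw)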
Some sample invocations of the script with timing and output-size info are below my signature.
The list of caveats could be about as long as the script--performance, hackability/readability, ease of installation, and flexibility are all suboptimal. At a minimum, it's not safe for exchanging files without 1) making the way it reads/writes binary numbers CPU-architecture-independent (Python's array.array('l').tostring() is not; see the sketch below), 2) adding a magic number, file format version, and compression-type tag, so the format/algorithm can be upgraded gracefully, 3) better error handling, like for the errors you get when rzip or xdelta3 aren't installed, and 4) testing. The -blksdev filename suffix it uses reflects that it's a development format, not production-ready.
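For caveats 1 and 2, the usual fix is the struct module with an explicit byte order and fixed integer widths, plus a small header up front; the magic bytes and layout here are made up for illustration:

    import struct

    MAGIC = b'BLKS'                  # made-up magic number
    VERSION = 1
    COMP_RZIP, COMP_XDELTA3 = 1, 2   # compression-type tags

    def write_header(f, comp_type):
        # '<' forces little-endian, fixed-width fields, so the file
        # reads back the same on any CPU, unlike array.array('l')
        f.write(MAGIC)
        f.write(struct.pack('<BB', VERSION, comp_type))

    def write_index(f, offsets):
        f.write(struct.pack('<Q', len(offsets)))
        f.write(struct.pack('<%dQ' % len(offsets), *offsets))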
Last post I said xdelta3 and rzip compressed histories pretty well pretty quickly, but didn't expand (ha!) on that at all. Both programs have a first stage that quickly compresses long repetitions like you'll see in history files, at the cost of completely missing short-range redundancy. Then rzip uses bzip2, and xdelta3 can use its own compressor, for the short-range redundancy. Neither adds much value if your file doesn't have large long-range repetitions, which is why you don't often hear about them as general-purpose compressors.
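A quick way to see that last point for yourself (flags from memory, so check the man page; -k is what tells rzip to keep the input file around):

    import os, subprocess

    def rzipped_size(data, name):
        with open(name, 'wb') as f:
            f.write(data)
        subprocess.check_call(['rzip', '-k', name])
        return os.path.getsize(name + '.rz')

    # 20MB that's one 10MB blob repeated twice (pure long-range
    # redundancy) vs. 20MB of fresh random bytes (none at all)
    blob = os.urandom(10 * 1024 * 1024)
    print(rzipped_size(blob * 2, 'repeats'))  # roughly halves
    print(rzipped_size(os.urandom(20 * 1024 * 1024), 'random'))  # no smaller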
Honestly don't know if anything down this path will suit your needs. Certainly this exact script doesn't--just seemed like an interesting thing to mess around with.
Best,
Randall