There are multiple JS LZMA libraries. I haven't looked at any of them, but have you? Trying one of them might be enough to get a sense of the achievable performance.
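A quick way to compare them is a small timing harness like the sketch below. Nothing here is tied to any particular library: `decompress` is a stand-in for whichever library's entry point is under test (the real name and signature will differ).

```javascript
// Minimal timing harness for comparing JS LZMA decompressors.
// `decompress` is a placeholder: swap in the entry point of the
// library under test (its actual API will differ).
function benchmark(decompress, compressedData, iterations) {
  // Warm up once so JIT compilation doesn't skew the first sample.
  decompress(compressedData);
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    decompress(compressedData);
  }
  var elapsed = Date.now() - start;
  return elapsed / iterations; // mean milliseconds per call
}
```

Running this against a representative ZIM cluster (rather than a synthetic buffer) would give the most useful numbers.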


On Tue, Jan 1, 2013 at 1:18 PM, Douglas Crosher <> wrote:
On 01/01/2013 08:09 PM, Emmanuel Engelhart wrote:
> Hi Douglas
> On 01/01/2013 02:22 AM, Douglas Crosher wrote:
>> Has anyone considered a pure Javascript ZIM file reader and Wikipedia
>> reader?
> No, this is complicated to do... although it could be practical. I'm
> also not sure we could achieve acceptable performance.

I'll hack something together to explore the performance question, and
follow up.

>> I have made a small start, writing some hack code to open a ZIM file and
>> it gets to the point of needing to uncompress a cluster.  A start has
>> been made on the needed XZ decompress code but it's not done yet.
> Great. Yes, xz decompression is the most complicated part.
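Opening the file mostly amounts to parsing the fixed 80-byte header. A minimal sketch in JavaScript, assuming the openzim header layout (little-endian; field names are illustrative):

```javascript
// Sketch of parsing the fixed 80-byte ZIM header from an ArrayBuffer.
// Offsets follow the openzim header layout; names are illustrative.
function readUint64(view, offset) {
  // DataView has no 64-bit reader; combine two little-endian 32-bit
  // halves (exact for files smaller than 2^53 bytes).
  var low = view.getUint32(offset, true);
  var high = view.getUint32(offset + 4, true);
  return high * 0x100000000 + low;
}

function parseZimHeader(buffer) {
  var view = new DataView(buffer);
  var magic = view.getUint32(0, true);
  if (magic !== 72173914) {
    throw new Error("Not a ZIM file (bad magic number)");
  }
  return {
    majorVersion:  view.getUint16(4, true),
    minorVersion:  view.getUint16(6, true),
    // bytes 8..23 hold the 16-byte UUID (skipped here)
    articleCount:  view.getUint32(24, true),
    clusterCount:  view.getUint32(28, true),
    urlPtrPos:     readUint64(view, 32),
    titlePtrPos:   readUint64(view, 40),
    clusterPtrPos: readUint64(view, 48),
    mimeListPos:   readUint64(view, 56),
    mainPage:      view.getUint32(64, true),
    layoutPage:    view.getUint32(68, true),
    checksumPos:   readUint64(view, 72)
  };
}
```

From there, the cluster pointer list at `clusterPtrPos` leads to the compressed clusters, which is where the XZ question below comes in.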

Would it be very limiting for ZIM files if the XZ decoder were restricted
to the 'XZ embedded' format, supporting only the 'LZMA2' filter?  See:

Do ZIM files really need the XZ/LZMA2 containers, or could they just use
raw LZMA1 compression?  This could be added as a new cluster compression
type for compatibility.
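The compatibility route could look like the dispatch sketch below. Types 0/1 (uncompressed) and 4 (XZ/LZMA2) are from the ZIM spec; `COMPRESSION_LZMA1 = 5` is a hypothetical value for the proposed raw LZMA1 type, not an assigned number, and the decoder functions are stand-ins.

```javascript
// Sketch of dispatching on a cluster's leading compression byte.
// COMPRESSION_LZMA1 = 5 is a hypothetical new value for the raw
// LZMA1 cluster type proposed above, not an assigned number.
var COMPRESSION_LZMA1 = 5;

function decompressCluster(clusterBytes, decoders) {
  var type = clusterBytes[0];
  var body = clusterBytes.subarray(1);
  switch (type) {
    case 0: // default (no compression)
    case 1: // explicitly uncompressed
      return body;
    case 4: // XZ container with LZMA2 filter
      return decoders.xz(body);
    case COMPRESSION_LZMA1: // proposed raw LZMA1 stream
      return decoders.lzma1(body);
    default:
      throw new Error("Unsupported cluster compression type: " + type);
  }
}
```

Existing readers would reject the new type cleanly, while new readers could support both.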

Two possible uses for XZ/LZMA2 are large entries and entries with
distinct regions, some compressible and some not.  However, perhaps a
significant amount of content does not need this.

I expect that typical HTML entries would be relatively small.  It would
seem pointless for a cluster to use multiple XZ blocks and/or streams
when these could be avoided by placing entries in separate clusters, so
perhaps there is a case for clusters with just one LZMA1 block.  Further,
entries are likely to be either compressible or not, and could be placed
in separate clusters rather than exploiting the LZMA2 support for such
mixed content.

It might even save space not having the XZ container overhead.

Douglas Crosher

Offline-l mailing list