Hi
How about using another algorithm that does this already?
-- chris
[...] The problem is that 7zip was never optimized for extracting one given file out of hundreds of thousands, or millions. Right now, the Indonesian Wikipedia (60 MB 7zipped) takes about 15 seconds per page on my two-year-old iBook, whereas the Chinese one (250 MB 7zipped) takes about 150 seconds per page. I haven't dared try any of the bigger ones, like the German (1.5 GB) or the English (four files of 1.5 GB each)...

My first thought was that it might be possible to modify the open-source 7zip to generate an index of which block the different files are in, which would then make the actual extraction a lot faster. The problem is that I suck at C, and I have been looking for people to help me, even offering a small bounty to the developer. (If anyone here would help me, that would be MUCH appreciated! I personally think it would be quite easy, given the source code that exists, but I don't know for sure.)
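For what it's worth, the file-to-block mapping may not even require touching the C source: 7-Zip's technical listing (`7z l -slt archive.7z`) prints a `Path = ...` field per entry, and for solid archives it usually includes a `Block = N` field as well. Here's a minimal sketch that parses that output into an index, assuming the listing actually carries the `Block` field (worth verifying against your 7zip version; the sample text below is made up for illustration):

```python
def parse_slt_listing(text):
    """Build a {path: block_number} index from `7z l -slt` output.

    Assumes each file entry contains a "Path = ..." line followed
    (for solid archives) by a "Block = N" line.
    """
    index = {}
    path = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Path = "):
            path = line[len("Path = "):]
        elif line.startswith("Block = ") and path is not None:
            index[path] = int(line[len("Block = "):])
    return index

# Hypothetical excerpt of `7z l -slt` output, for demonstration only:
sample = """\
Path = wiki/Article_A.html
Size = 12345
Block = 0

Path = wiki/Article_B.html
Size = 6789
Block = 1
"""

print(parse_slt_listing(sample))
# {'wiki/Article_A.html': 0, 'wiki/Article_B.html': 1}
```

With such an index built once up front, a lookup would at least tell you which solid block holds a given article, so extraction only has to decompress that one block instead of searching through the whole archive.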