Pixz (pronounced 'pixie') is a parallel, indexing version of XZ.
The existing XZ Utils ( http://tukaani.org/xz/ ) provide great compression in the .xz file format, but they have two significant problems:
* They are single-threaded, while most users nowadays have multi-core computers.
* The .xz files they produce are just one big block of compressed data, rather than a collection of smaller, independently compressed blocks. This makes random access to the original data impossible; the sketch just below shows what an index of blocks would make possible.
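As a rough sketch of why independent blocks matter (this is conceptual C, not pixz's actual code or index format; the struct layout and numbers are made up): if the archive carries an index mapping each block's uncompressed range to its compressed location, a reader can find the one block that contains a wanted offset and decompress only that block.

    /* Conceptual sketch only; field names and layout are illustrative,
     * not pixz's real index format. */
    #include <stdio.h>
    #include <stddef.h>

    typedef struct {
        size_t uncomp_off;   /* block's offset in the uncompressed data */
        size_t uncomp_size;  /* block's uncompressed size */
        size_t comp_off;     /* where the block starts in the .xz file */
    } block;

    /* Return the block containing uncompressed offset 'target', or NULL. */
    static const block *find_block(const block *idx, size_t n, size_t target) {
        for (size_t i = 0; i < n; i++)
            if (target >= idx[i].uncomp_off &&
                target <  idx[i].uncomp_off + idx[i].uncomp_size)
                return &idx[i];
        return NULL;
    }

    int main(void) {
        /* Toy index: three 1 MiB blocks at made-up compressed offsets. */
        block idx[] = {
            { 0 * 1048576, 1048576, 60 },
            { 1 * 1048576, 1048576, 400060 },
            { 2 * 1048576, 1048576, 800060 },
        };
        const block *b = find_block(idx, 3, 1500000);
        if (b)
            printf("decompress the block at compressed offset %zu\n",
                   b->comp_off);
        return 0;
    }

A stream-style compressor has no such index, so reaching byte N means decompressing everything before it.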
Pixz will eventually solve both of these problems. Currently these pixz tools are available (example usage follows the list):
* write INPUT.tar OUTPUT.tpxz: Compresses an uncompressed tarball, using two cores. An index of all the files in the tarball is stored within the output, yet the result remains compatible with standard xz and tar.
* read INPUT.tpxz PATH: Efficiently extracts a single file from a tarball compressed by 'write'.
* list [-t] INPUT.xz: Lists the xz blocks present within any .xz file. With the optional -t flag, it also lists the file index stored by 'write'.
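As a quick example of how these tools fit together (the file and path names are placeholders):

    write source.tar source.tpxz        # compress the tarball, storing a file index
    list -t source.tpxz                 # show the xz blocks and the stored file index
    read source.tpxz docs/manual.txt    # extract just docs/manual.txt from the archive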
Comparison with similar tools:
plzip
* Roughly comparable in complexity and efficiency
* The lzip format seems to be less widely used
* Version 1 of the format is theoretically indexable... I think
ChopZip
* Written in Python, and much simpler
* More flexible, supports arbitrary compression programs
* Uses streams instead of blocks, not indexable
* Splits input and then combines output, much higher disk usage
pxz
* Simpler code
* Uses OpenMP instead of pthreads
* Uses streams instead of blocks, not indexable
* Uses temporary files and doesn't combine them until the whole input has been compressed, so disk and memory usage are high
Comparable tools for other compression algorithms:
pbzip2
* Not indexable
* Appears slow
* The bzip2 algorithm generally compresses less well than xz's LZMA
pigz
* Not indexable
dictzip
* Not parallel