
Draft:LZP (compression algorithm)

From Wikipedia, the free encyclopedia
  • Comment: Ok, so I kind of understand why the reviewers haven't got to this yet. But having some level of quantitative knowledge, I'm perplexed.
    1. The article is not comprehensible for anyone who doesn't know information theory. It is not possible, for example, to tell what the differences are between the different variants, without knowing about them beforehand.
    2. The article is also not comprehensible for anyone who does know information theory. All lossless data compression algorithms, to my knowledge, make use of the fact that character chains have a nontrivial kernel; it is in fact mathematically necessary for a compression algorithm to exist. What is different for this algorithm?
    3. Most importantly for Wikipedia - I'm nearly 100% certain that the article subject is notable. The current references, however, are basically entirely primary, which means that the sources as they stand don't meet GNG. There is a massive academic corpus on compression algorithms, review of compression theory and its implementation, etc. Use it. Fermiboson (talk) 21:46, 4 December 2025 (UTC)

LZP (Lempel-Ziv-Prediction)
Class: Lossless data compression
Worst-case performance: O(n)
Best-case performance: O(n)
Average performance: O(n)
Worst-case space complexity: O(n) total, O(TABLE_SIZE) auxiliary

LZP (short for Lempel-Ziv-Prediction) is a lossless data compression algorithm that leverages the insight that the characters immediately prior to a given position tend to be strong predictors of the characters that follow.[1][2] It is closely related to PPM context modeling and Markov chains.

Algorithm overview

For every input byte within the input stream, the previous N bytes are gathered to create a finite context of order N. This context is hashed and used to retrieve the corresponding entry in a hashtable, which returns a reference to an earlier position in the already‑processed data. The upcoming sequence of bytes is then compared with the data at that reference point, and the length of the matching segment is determined.

The result of this comparison is written to the output stream using an encoding scheme such as the following (a minimal sketch in C is given after the list):

  • If the matching length is zero, a flag indicating a negative match is recorded followed by the literal byte itself.
  • If the matching length is one or greater, a flag indicating a positive match is recorded and then the length of the match is written.[2][3][4]
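A minimal illustrative sketch of this scheme in C follows. It is a simplification rather than any published implementation: the output helpers merely print flags, literals, and match lengths instead of packing bits into a stream, the match length is capped at one byte, and the order-3 hash reuses the LZP1 parameters discussed later in this article.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TABLE_BITS 12
#define TABLE_SIZE (1u << TABLE_BITS)
#define MAX_MATCH  255            /* match length is emitted as a single value in this sketch */

/* Illustrative output routines: a real coder would pack the flag bits into the
   output stream instead of printing them. */
static void put_bit(int bit)     { printf("%d ", bit); }
static void put_byte(uint8_t b)  { printf("'%c' ", b); }
static void put_len(size_t len)  { printf("len=%zu ", len); }

/* Order-3 context hash in the style of LZP1: fold the three preceding bytes
   into 24 bits and reduce them to TABLE_BITS bits. */
static uint32_t hash_order3(const uint8_t *p)
{
    uint32_t c = ((uint32_t)p[-3] << 16) | ((uint32_t)p[-2] << 8) | p[-1];
    return ((c >> 11) ^ c) & (TABLE_SIZE - 1);
}

static void lzp_encode(const uint8_t *in, size_t len)
{
    size_t table[TABLE_SIZE] = {0};   /* last position seen for each hashed context;
                                         0 means "none" (positions below 3 are never stored) */
    size_t pos = 3;                   /* the first 3 bytes lack a full context and are
                                         assumed to be transmitted verbatim */
    while (pos < len) {
        uint32_t h = hash_order3(in + pos);
        size_t ref = table[h];        /* earlier position that had the same context */
        table[h] = pos;

        size_t match = 0;
        if (ref != 0)
            while (match < MAX_MATCH && pos + match < len &&
                   in[ref + match] == in[pos + match])
                match++;

        if (match == 0) {             /* miss: flag 0 followed by the literal byte */
            put_bit(0);
            put_byte(in[pos]);
            pos += 1;
        } else {                      /* hit: flag 1 followed by the match length */
            put_bit(1);
            put_len(match);
            pos += match;
        }
    }
}

int main(void)
{
    const char *text = "abcabcabcabc";
    lzp_encode((const uint8_t *)text, strlen(text));
    printf("\n");
    return 0;
}

On the sample input "abcabcabcabc", the sketch emits three literal misses ('a', 'b', 'c') followed by a single six-byte match once the order-3 context "abc" repeats.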

History

The technique was first described by Charles Bloom in 1996,[1][2] and was later popularised in hobbyist publications related to the demoscene due to its low implementation footprint.[5] It is often used as a pre-processor for stronger LZ-type or entropy compressors to improve their compression ratio without a noticeable speed penalty.[6]

Comparison with the PPP Predictor algorithm

In the same year that LZP was published, a similar algorithm called Predictor was independently published as part of the PPP protocol.[7] Multiple online sources have used the two names interchangeably even though the algorithms are distinct.[8][9] Specifically, Predictor differs from LZP in that it stores literals instead of pointers in the hashtable, does not require jumping to previous locations in the stream, and matches single bytes only, without attempting to find multi-byte matches.
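The contrast with LZP is easiest to see in a minimal Predictor-style sketch, shown below with the same illustrative output helpers as the earlier LZP sketch. The table size and the rolling context hash used here are assumptions made for the sketch and are not taken from the PPP Predictor specification.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define GUESS_TABLE_SIZE 65536

/* Illustrative output routines, as in the LZP sketch above. */
static void put_bit(int bit)    { printf("%d ", bit); }
static void put_byte(uint8_t b) { printf("'%c' ", b); }

/* Predictor-style coding: the table stores a guessed byte rather than a position,
   so there is no back-reference into the stream and no multi-byte match search. */
static void predictor_encode(const uint8_t *in, size_t len)
{
    static uint8_t guess[GUESS_TABLE_SIZE];   /* zero-initialised guess table */
    uint16_t hash = 0;

    for (size_t i = 0; i < len; i++) {
        if (guess[hash] == in[i]) {
            put_bit(1);                /* hit: one flag bit, no literal */
        } else {
            put_bit(0);                /* miss: flag bit followed by the literal */
            put_byte(in[i]);
            guess[hash] = in[i];       /* learn the byte for this context */
        }
        hash = (uint16_t)((hash << 4) ^ in[i]);   /* roll the context hash forward
                                                     (an assumed update rule) */
    }
}

int main(void)
{
    const char *text = "abcabcabcabc";
    predictor_encode((const uint8_t *)text, strlen(text));
    printf("\n");
    return 0;
}

Because the table stores the guessed byte itself, a correct prediction costs only a single flag bit, but a long repeated run still requires one lookup and one flag per byte, whereas LZP can cover it with a single multi-byte match.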

LZP variants

Documented variants of LZP involve different choices of parameters, further processing with entropy coders or different hashtable matching systems:

  • LZP1: A fixed order-3 hashed context model was used, with a hashtable of size 2¹²; match lengths were written using variable-length integers.
  • LZP2: Same as LZP1, but literals and match lengths were written with a Huffman coder.
  • LZP3: An order-4 context model was used with a table size of 2¹⁶. Literals and match lengths were written with an order-1 Huffman coder. Notably, the table also stores the contexts themselves, together with a mechanism that finds the highest-order context that has a match in the table.
  • LZP4: An order-5 context model was used with a table of 2¹⁶ entries, order-1 arithmetic encoding of match lengths, and order-4 PPM encoding of literals. In this version, table entries are not single pointers, but instead contain a two-dimensional linked list of pointers.[2][3]

Typical parameters

  • Hashtable size: A value of 2¹⁶ bytes or greater is typical in modern implementations[9]. Early implementations on memory-constrained hardware would choose a smaller size such as 2¹² [1][2] or 2⁸, at the cost of a worse compression ratio.
  • Context size: How many of the most recent input bytes are taken into account when creating the hash. For example, the original implementation of LZP1 uses a context of 3 bytes.
  • Hash function: The hash function should be chosen so that the range of hash values matches the hashtable size. For example, the original implementation of LZP1 uses H = ((C >> 11) ^ C) & 0xFFF, which yields a 12-bit hash that can directly index a hashtable of size 2¹² (a worked example follows this list).
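As a worked example of these parameters (the most-significant-first packing of the three context bytes into C is an assumption of this sketch, not something specified by the cited sources):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* Worked example of the LZP1 hash, assuming the three preceding bytes are
       'a' 'b' 'c' (0x61 0x62 0x63), packed most-significant-first into C. */
    uint32_t c = (0x61u << 16) | (0x62u << 8) | 0x63u;   /* C = 0x616263 */
    uint32_t h = ((c >> 11) ^ c) & 0xFFF;                /* 0x616263 ^ 0x000C2C = 0x616E4F */
    assert(h == 0xE4F);   /* this context maps to slot 0xE4F of the 4096-entry table */
    return 0;
}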

Performance characteristics

  • Compression ratio: In the worst case, LZP's output can be at most 12.5% larger than the input, since an input consisting entirely of misses yields 9 output bits for every 8 input bits. The best-case ratio depends on the maximum value representable by the chosen encoding of the match length. On real-world data, LZP alone provides modest compression, around 50% for written text and worse for general-purpose textual data with markup[2][9], but pairing it with an entropy coder can roughly halve the ratio again.[10]
  • Speed: Due to the lack of tree structures, search buffers, and dynamic allocations, LZP is typically faster than algorithms that rely on such techniques, including entropy coders and most LZ variants.
  • Memory usage: The predictor table has a fixed size, so no dynamic allocation is required during processing (except for LZP4). It is recommended to choose a table size that fits in the CPU cache[10], since a table that spills into RAM causes substantial performance degradation.

See also

References

  1. Bloom, Charles (1996). "LZP: A new data compression algorithm". Proceedings of Data Compression Conference - DCC '96. pp. 425–. doi:10.1109/DCC.1996.488353. ISBN 0-8186-7358-3.
  2. "LZP: a new data compression algorithm" (PDF). cbloom.com. Retrieved 2025-10-12.
  3. Salomon, David; Motta, Giovanni (18 January 2010). "6.19". Handbook of Data Compression, Fifth Edition (PDF). Springer. ISBN 978-1-84882-903-9. Retrieved 2025-10-12.
  4. ""Lzp" by Arturo San Emeterio Campos". Archived from the original on 27 April 2016. Retrieved 2025-10-12.
  5. "LZP Data Compression". Hugi 12. Retrieved 2025-10-12.
  6. "Large Text Compression Benchmark". mattmahoney.net. Retrieved 2025-10-12.
  7. "PPP Predictor Compression Protocol". ietf.org. Retrieved 2025-10-12.
  8. "PREDICTOR algorithm". encode.su. Retrieved 2025-10-12.
  9. "LZP - A streaming LZ Predictor compression tool". GitHub. Retrieved 2025-10-12.
  10. "SQUID - a fast LZP-based file compressor". encode.su. Retrieved 2025-10-12.