 Dynamic Markov compression

Dynamic Markov compression (DMC) is a lossless data compression algorithm developed by Gordon Cormack and Nigel Horspool.[1] It uses predictive arithmetic coding similar to prediction by partial matching (PPM), except that the input is predicted one bit at a time (rather than one byte at a time). DMC has a good compression ratio and moderate speed, similar to PPM, but requires somewhat more memory and is not widely implemented. Some recent implementations include the experimental compression programs hook by Nania Francesco Antonio and ocamyd by Frank Schwellinger, and a submodel in paq8l by Matt Mahoney. These are based on the 1993 implementation in C by Gordon Cormack.
Algorithm
DMC predicts and codes one bit at a time. It differs from PPM in that it codes bits rather than bytes, and from context mixing algorithms such as PAQ in that there is only one context per prediction. The predicted bit is then coded using arithmetic coding.
Arithmetic coding
A bitwise arithmetic coder such as DMC has two components, a predictor and an arithmetic coder. The predictor accepts an n-bit input string x = x_{1}x_{2}...x_{n} and assigns it a probability p(x), expressed as a product of conditional predictions, p(x_{1}) p(x_{2}|x_{1}) p(x_{3}|x_{1}x_{2}) ... p(x_{n}|x_{1}x_{2}...x_{n–1}). The arithmetic coder maintains two high-precision binary numbers, p_{low} and p_{high}, representing the possible range for the total probability that the model would assign to all strings lexicographically less than x, given the bits of x seen so far. The compressed code for x is p_{x}, the shortest bit string representing a number between p_{low} and p_{high}. It is always possible to find a number in this range no more than one bit longer than the Shannon limit, log_{2} 1/p(x). One such number can be obtained from p_{high} by dropping all of the trailing bits after the first bit that differs from p_{low}.
Compression proceeds as follows. The initial range is set to p_{low} = 0, p_{high} = 1. For each bit, the predictor estimates p_{0} = p(x_{i} = 0|x_{1}x_{2}...x_{i–1}) and p_{1} = 1 − p_{0}, the probabilities of a 0 or a 1, respectively. The arithmetic coder then divides the current range (p_{low}, p_{high}) into two parts in proportion to p_{0} and p_{1}. The subrange corresponding to the next bit x_{i} then becomes the new range.
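One step of this process can be sketched in Python. This is a toy illustration with hypothetical names, using floating-point numbers for clarity; a real coder works with fixed-precision integer arithmetic:

```python
def split_range(p_low, p_high, p0, bit):
    """Divide the current range (p_low, p_high) in proportion to p0,
    the predicted probability of a 0 bit, and return the subrange
    corresponding to the bit actually coded (the decoder makes the
    same split and instead selects the subrange containing the code)."""
    mid = p_low + (p_high - p_low) * p0
    return (p_low, mid) if bit == 0 else (mid, p_high)

# Coding the bits 0, then 1, with p0 = 0.75 predicted each time:
low, high = 0.0, 1.0
low, high = split_range(low, high, 0.75, 0)   # -> (0.0, 0.75)
low, high = split_range(low, high, 0.75, 1)   # -> (0.5625, 0.75)
```

Note that the final width, high − low = 0.1875, equals the product of the predicted probabilities of the coded bits (0.75 × 0.25), which is exactly p(x) from the description above.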
For decompression, the predictor makes an identical series of predictions, given the bits decompressed so far. The arithmetic coder makes an identical series of range splits, then selects the range containing p_{x} and outputs the bit x_{i} corresponding to that subrange.
In practice, it is not necessary to keep p_{low} and p_{high} in memory to high precision. As the range narrows, the leading bits of both numbers will be the same, and can be output immediately.
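As a sketch of this renormalization idea, the following fragment (hypothetical names, 16-bit registers as one common choice) emits the leading bits that the two registers share and shifts them out; it ignores the underflow case a production coder must also handle:

```python
def renormalize(low, high, out_bits, bits=16):
    """Emit the leading bits that low and high agree on, then shift
    them out so both registers stay within `bits` bits of precision.
    Illustrative only: the underflow case, where low and high straddle
    the midpoint without sharing a top bit, is not handled here."""
    top = 1 << (bits - 1)
    mask = (1 << bits) - 1
    while (low & top) == (high & top):
        out_bits.append((low >> (bits - 1)) & 1)   # shared top bit
        low = (low << 1) & mask                    # shift in a 0
        high = ((high << 1) & mask) | 1            # shift in a 1
    return low, high

out = []
low, high = renormalize(0xC000, 0xDFFF, out)
# out == [1, 1, 0]: the three shared leading bits are emitted,
# and the range re-widens to (0x0000, 0xFFFF).
```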
DMC model
The DMC predictor is a table which maps (bitwise) contexts to a pair of counts, n_{0} and n_{1}, representing the number of zeros and ones previously observed in this context. Thus, it predicts that the next bit will be a 0 with probability p_{0} = n_{0}/n = n_{0}/(n_{0} + n_{1}) and 1 with probability p_{1} = 1 − p_{0} = n_{1}/n. In addition, each table entry has a pair of pointers to the contexts obtained by appending either a 0 or a 1 to the right of the current context (and possibly dropping bits on the left). Thus, it is never necessary to look up the current context in the table; it is sufficient to maintain a pointer to the current context and follow the links.
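A table entry as described can be sketched as follows. The class and method names are hypothetical; prediction, update, and link traversal match the description above:

```python
class Node:
    """One DMC table entry: counts of previously observed bits and
    links to the two successor contexts. Names are illustrative."""
    __slots__ = ("n", "next")

    def __init__(self, init=0.2):
        self.n = [init, init]      # n0, n1: zeros and ones seen here
        self.next = [None, None]   # successor context for bit 0 / bit 1

    def p0(self):
        # Predicted probability that the next bit is a 0
        return self.n[0] / (self.n[0] + self.n[1])

    def update(self, bit):
        # Count the observed bit, then follow the link; no table
        # lookup of the current context is ever needed.
        self.n[bit] += 1
        return self.next[bit]
```

After coding each bit, the caller simply replaces its current-node pointer with the return value of `update`.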
In the original DMC implementation, the initial table is the set of all contexts of length 8 to 15 bits that begin on a byte boundary. The initial state is any of the 8-bit contexts. The counts are floating-point numbers initialized to a small nonzero constant such as 0.2. The counts are not initialized to zero so that bits can still be coded even if they have not been seen before in the current context.
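One plausible layout for this initial table, sketched below, gives each possible previous byte an 8-level binary tree of nodes in heap order; leaves completing a byte link back to the root of the tree selected by that byte. This is a reading of the description above, not the original C code, and all names are hypothetical:

```python
def build_initial_model(init=0.2):
    """Build contexts of length 8..15 bits beginning on a byte
    boundary.  node (b, i): previous byte b, heap index i in 1..255
    within b's tree (i = 1 is the byte boundary, so the context is
    just the 8 bits of b; depth 7 gives a 15-bit context).  Each
    entry is [n0, n1, link0, link1], a link being a (byte, index)
    pair.  Counts start at a small nonzero value."""
    model = {}
    for b in range(256):
        for i in range(1, 256):
            j0, j1 = 2 * i, 2 * i + 1
            # Children stay in the same tree until 8 bits complete
            # a byte, then jump to the root of the new byte's tree.
            link0 = (b, j0) if j0 < 256 else (j0 - 256, 1)
            link1 = (b, j1) if j1 < 256 else (j1 - 256, 1)
            model[(b, i)] = [init, init, link0, link1]
    return model

model = build_initial_model()
state = (0, 1)   # start in any 8-bit context, e.g. previous byte 0
```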
Modeling is the same for compression and decompression. For each bit, p_{0} and p_{1} are computed, bit x_{i} is coded or decoded, the model is updated by adding 1 to the count corresponding to x_{i}, and the next context is found by traversing the link corresponding to x_{i}.
Adding new contexts
DMC as described above is equivalent to an order-1 context model. However, it is normal to add longer contexts to improve compression. If the current context is A, and the next context B would drop bits on the left, then DMC may add (clone) a new context C from B. C represents the same context as A with one bit appended on the right (as with B), but without dropping any bits on the left. The link from A is thus moved from B to point to C. B and C both make the same prediction, and both point to the same pair of next states. The total count, n = n_{0} + n_{1}, for C is set equal to the count n_{x} for A (for input bit x), and that count is subtracted from B.
For example, suppose that state A represents the context 11111. On input bit 0, it transitions to state B representing context 110, obtained by dropping 3 bits on the left. In context A, there have been 4 zero bits and some number of one bits. In context B, there have been 3 zeros and 7 ones (n = 10), which predicts p_{1} = 0.7.
State       n_{0}  n_{1}  next_{0}  next_{1}
A = 11111   4             B
B = 110     3      7      E         F

C is cloned from B. It represents context 111110. Both B and C predict p_{1} = 0.7, and both go to the same next states, E and F. The count for C is n = 4, equal to n_{0} for A. This leaves n = 6 for B.
State        n_{0}  n_{1}  next_{0}  next_{1}
A = 11111    4             C
B = 110      1.8    4.2    E         F
C = 111110   1.2    2.8    E         F

States are cloned just prior to transitioning to them. In the original DMC, a state is cloned when the count of the transition from A to B is at least 2 and the total count for B exceeds that transition count by at least 2. (When the second threshold is greater than 0, it guarantees that other states will still transition to B after cloning.) Some implementations such as hook allow these thresholds to be set as parameters. In paq8l, these thresholds increase as memory is used up, to slow the growth rate of new states. In most implementations, when memory is exhausted the model is discarded and reinitialized to the original bytewise order-1 model.
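The cloning rule above can be sketched as follows, using plain dictionaries for states. The function name and thresholds' defaults follow the description here; this is an illustrative reading, not the original implementation. Running it on the example's numbers (A has 4 zeros along its 0-link to B; B has counts 3 and 7) reproduces the tables above:

```python
def maybe_clone(a, bit, min_from=2, min_excess=2):
    """Clone the successor of state `a` along `bit` if the transition
    count is large enough and the target B has enough visits from
    elsewhere.  States are dicts {'n': [n0, n1], 'next': [s0, s1]}."""
    b = a['next'][bit]
    n_ab = a['n'][bit]               # count of the A -> B transition
    n_b = b['n'][0] + b['n'][1]      # total count of B
    if n_ab >= min_from and n_b - n_ab >= min_excess:
        frac = n_ab / n_b            # C takes A's share of B's counts
        c = {'n': [b['n'][0] * frac, b['n'][1] * frac],
             'next': list(b['next'])}    # same prediction, same links
        b['n'][0] -= c['n'][0]           # move C's share out of B
        b['n'][1] -= c['n'][1]
        a['next'][bit] = c               # A now transitions to C
    return a['next'][bit]

# The worked example: B = 110 with counts (3, 7), reached from
# A = 11111 on bit 0 with transition count 4.
e = {'n': [0.2, 0.2], 'next': [None, None]}
f = {'n': [0.2, 0.2], 'next': [None, None]}
b = {'n': [3.0, 7.0], 'next': [e, f]}
a = {'n': [4.0, 0.2], 'next': [b, None]}
c = maybe_clone(a, 0)
# c['n'] == [1.2, 2.8] and b['n'] == [1.8, 4.2], as in the tables.
```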
References
[1] Gordon Cormack and Nigel Horspool, "Data Compression using Dynamic Markov Modelling", The Computer Journal 30:6 (December 1987).