 Minimum description length

The minimum description length (MDL) principle is a formalization of Occam's Razor in which the best hypothesis for a given set of data is the one that leads to the best compression of the data. MDL was introduced by Jorma Rissanen in 1978. It is an important concept in information theory and learning theory.^{[1]}^{[2]}^{[3]}
Overview
Any set of data can be represented by a string of symbols from a finite (say, binary) alphabet.
The fundamental idea behind the MDL Principle is that "any regularity in a given set of data can be used to compress the data, i.e. to describe it using fewer symbols than needed to describe the data literally" (Grünwald, 1998).^{[4]}^{[not in citation given]}
To select the hypothesis that captures the most regularity in the data, scientists look for the hypothesis with which the best compression can be achieved. To do this, a code is fixed to compress the data, most generally by means of a (Turing-complete) computer language. A program to output the data is written in that language; thus the program effectively represents the data. The length of the shortest program that outputs the data is called the Kolmogorov complexity of the data. This is the central idea of Ray Solomonoff's idealized theory of inductive inference.
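For illustration (a minimal sketch, not from the MDL literature, using Python's general-purpose zlib compressor as a crude stand-in for "the shortest program that outputs the data"): a highly regular sequence compresses to a small fraction of its literal length, while a random one does not.

import zlib
import random

# A highly regular sequence: the pattern "01" repeated 500 times.
regular = b"01" * 500

# 1,000 random bytes: almost surely no regularity for the compressor to exploit.
random.seed(0)
irregular = bytes(random.getrandbits(8) for _ in range(1000))

# Compressed sizes: only the regular sequence admits a short description.
print(len(zlib.compress(regular)))    # a few dozen bytes
print(len(zlib.compress(irregular)))  # about 1,000 bytes, or slightly more

No general-purpose compressor attains the Kolmogorov complexity, but the contrast shows how regularity translates into a shorter description.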
Inference
However, this mathematical theory does not provide a practical way of reaching an inference. The most important reasons for this are:
 Kolmogorov complexity is uncomputable: no algorithm exists that, given an arbitrary sequence of data as input, outputs the shortest program that produces the data.
 Kolmogorov complexity depends on what computer language is used. This choice is arbitrary, but it influences the complexity only up to a constant additive term. For that reason, constant terms tend to be disregarded in Kolmogorov complexity theory. In practice, however, where often only a small amount of data is available, such constants may have a very large influence on the inference results: good results cannot be guaranteed when one is working with limited data.
MDL attempts to remedy these problems by:
 Restricting the set of allowed codes in such a way that it becomes possible (computable) to find the shortest code length of the data, relative to the allowed codes, and
 Choosing a code that is reasonably efficient, whatever the data at hand. This point is somewhat elusive and much research is still going on in this area.
Rather than "programs", in MDL theory one usually speaks of candidate hypotheses, models or codes. The set of allowed codes is then called the model class. (Some authors refer to the model class as the model.) The code is then selected for which the sum of the description of the code and the description of the data using the code is minimal.
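Writing L(H) for the length, in bits, of the description of a hypothesis H in the model class, and L(D | H) for the length of the description of the data D encoded with the help of H, this selection rule can be stated compactly (in notation common in the MDL literature) as

$$ H_{\text{MDL}} \;=\; \arg\min_{H \in \mathcal{H}} \bigl[\, L(H) + L(D \mid H) \,\bigr]. $$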
One of the important properties of MDL methods is that they provide a natural safeguard against overfitting, because they implement a tradeoff between the complexity of the hypothesis (model class) and the complexity of the data given the hypothesis.^{[citation needed]}
Example of MDL
A coin is flipped 1,000 times and the numbers of heads and tails are recorded. Consider two model classes:
 The first is a code that represents outcomes with a 0 for heads or a 1 for tails. This code represents the hypothesis that the coin is fair. The code length according to this code is always exactly 1,000 bits.
 The second consists of all codes that are efficient for a coin with some specific bias, representing the hypothesis that the coin is not fair. Say that we observe 510 heads and 490 tails. Then the code length according to the best code in the second model class is shorter than 1,000 bits.
For this reason a naive statistical method might choose the second model as a better explanation for the data. However, an MDL approach would construct a single code based on the hypothesis, instead of just using the best one. The simplest way to do this is to use a two-part code, in which the element of the model class with the best performance is specified first, and the data is then specified using that code. Many bits are needed to specify which code to use; thus the total code length based on the second model class could be larger than 1,000 bits. Therefore the conclusion when following an MDL approach is inevitably that there is not enough evidence to support the hypothesis of the biased coin, even though the best element of the second model class provides a better fit to the data.
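The arithmetic of this example can be checked directly. The sketch below is a hypothetical calculation: it charges (1/2) log₂ n bits for stating the estimated bias, which is one standard choice of parameter cost in the MDL literature rather than the only one, and compares the resulting two-part code length with the 1,000 bits of the fair-coin code.

import math

n, heads = 1000, 510
tails = n - heads

# Model class 1: the fair-coin code costs exactly one bit per outcome.
fair_bits = float(n)

# Model class 2: the best code for a coin with bias p = heads/n encodes the
# observed sequence in -(sum of log2 probabilities of the outcomes) bits.
p = heads / n
data_bits = -(heads * math.log2(p) + tails * math.log2(1 - p))

# A two-part code must also say WHICH biased-coin code is used; charging
# (1/2) * log2(n) bits for the parameter is one standard (assumed) choice.
param_bits = 0.5 * math.log2(n)

print(f"fair coin:   {fair_bits:.1f} bits")               # 1000.0
print(f"biased coin: {data_bits + param_bits:.1f} bits")  # about 1004.7

With 510 heads, the best biased-coin code saves only about 0.3 bits on the data, far less than the roughly 5 bits needed to state the bias, so the two-part total exceeds 1,000 bits and the fair-coin hypothesis is retained.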
MDL notation
Central to MDL theory is the one-to-one correspondence between code length functions and probability distributions. (This follows from the Kraft–McMillan inequality.) For any probability distribution $P$, it is possible to construct a code $C$ such that the length (in bits) of $C(x)$ is equal to $-\log_2 P(x)$; this code minimizes the expected code length. Vice versa, given a code $C$, one can construct a probability distribution $P$ such that the same holds. (Rounding issues are ignored here.) In other words, searching for an efficient code reduces to searching for a good probability distribution, and vice versa.
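One direction of this correspondence can be sketched in a few lines: given a distribution P, assigning each outcome the integer code length ⌈−log₂ P(x)⌉ satisfies the Kraft inequality, which guarantees that a prefix code with those lengths exists. (The distribution below is purely illustrative.)

import math

# An illustrative distribution over four symbols.
P = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Shannon code lengths: -log2 P(x), rounded up to whole bits.
lengths = {x: math.ceil(-math.log2(p)) for x, p in P.items()}
print(lengths)  # {'a': 1, 'b': 2, 'c': 3, 'd': 3}

# Kraft inequality: sum(2^-l) <= 1 certifies that a prefix code with these
# lengths exists; for this dyadic distribution the sum is exactly 1.
print(sum(2 ** -l for l in lengths.values()))  # 1.0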
Related concepts
MDL is very strongly connected to probability theory and statistics through the correspondence between codes and probability distributions mentioned above. This has led researchers such as David MacKay to view MDL as equivalent to Bayesian inference: the code length of the model and the code length of the model and data together in MDL correspond to the prior probability and the marginal likelihood, respectively, in the Bayesian framework.^{[5]}
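Concretely, under the correspondence between codes and probability distributions described above,

$$ L(H) + L(D \mid H) \;=\; -\log_2 P(H) - \log_2 P(D \mid H) \;=\; -\log_2 \bigl[\, P(H)\, P(D \mid H) \,\bigr], $$

so minimizing the two-part code length amounts to maximizing P(H) P(D | H), which by Bayes' theorem is proportional to the posterior probability P(H | D).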
While Bayesian machinery is often useful in constructing efficient MDL codes, the MDL framework also accommodates other codes that are not Bayesian. An example is the Shtarkov normalized maximum likelihood code, which plays a central role in current MDL theory, but has no equivalent in Bayesian inference. Furthermore, Rissanen stresses that we should make no assumptions about the true data-generating process: in practice, a model class is typically a simplification of reality and thus does not contain any code or probability distribution that is true in any objective sense.^{[6]}^{[7]} In the last-mentioned reference, Rissanen bases the mathematical underpinning of MDL on the Kolmogorov structure function.
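For a parametric model class, the normalized maximum likelihood distribution is P_nml(x) = P(x | θ̂(x)) / Σ_y P(y | θ̂(y)), where θ̂(x) is the maximum-likelihood estimate for data x and the sum runs over all possible data sequences. The sketch below is an illustrative computation for the Bernoulli model of the coin example; it exploits the fact that all sequences with the same number of heads contribute equally, and evaluates the logarithm of the normalizer, known as the parametric complexity.

import math

def bernoulli_parametric_complexity(n: int) -> float:
    """log2 of the NML normalizer for n binary outcomes."""
    total = 0.0
    for k in range(n + 1):
        p = k / n
        # Maximized likelihood of one sequence with k heads; Python's
        # 0 ** 0 == 1 handles the endpoints k = 0 and k = n correctly.
        ml = p ** k * (1 - p) ** (n - k)
        total += math.comb(n, k) * ml
    return math.log2(total)

# The NML code length for data x is -log2 P(x | ML estimate) plus this
# complexity term, which plays the role of the parameter cost.
print(bernoulli_parametric_complexity(1000))  # about 5.3 bits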
According to the MDL philosophy, Bayesian methods should be dismissed if they are based on unsafe priors that would lead to poor results. The priors that are acceptable from an MDL point of view also tend to be favored in so-called objective Bayesian analysis; there, however, the motivation is usually different.^{[8]}
Other systems
MDL was not the first information-theoretic approach to learning; as early as 1968, Wallace and Boulton pioneered a related concept called Minimum Message Length (MML). The difference between MDL and MML is a source of ongoing confusion. Superficially, the methods appear mostly equivalent, but there are some significant differences, especially in interpretation:
 MML is a fully subjective Bayesian approach: it starts from the idea that one represents one's beliefs about the data generating process in the form of a prior distribution. MDL avoids assumptions about the data generating process.
 Both methods make use of two-part codes: the first part always represents the information that one is trying to learn, such as the index of a model class (model selection), or parameter values (parameter estimation); the second part is an encoding of the data given the information in the first part. The difference between the methods is that, in the MDL literature, it is advocated that unwanted parameters should be moved to the second part of the code, where they can be represented with the data by using a so-called one-part code, which is often more efficient than a two-part code. In the original description of MML, all parameters are encoded in the first part, so all parameters are learned.
References
 ^ "Minimum Description Length". University of Helsinki. http://www.mdlresearch.org/. Retrieved 20100703.
 ^ Grünwald, P. (June 2007). "the Minimum Description Length principle". MIT Press. http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=11155. Retrieved 20100703.
 ^ Grünwald, P (April 2005). "Advances in Minimum Description Length: Theory and Applications". MIT Press. http://mitpress.mit.edu/catalog/item/default.asp?sid=4C100C6F225540FFA2ED02FC49FEBE7C&ttype=2&tid=10478. Retrieved 20100703.
 ^ Grünwald, Peter. "MDL Tutorial". http://www.cwi.nl/~pdg/. Retrieved 20100703.
 ^ MacKay, David (2003). "Information Theory, Inference, and Learning Algorithms". Cambridge University Press. http://www.inference.phy.cam.ac.uk/mackay/itila/. Retrieved 20100703.
 ^ Rissanen, Jorma. "Homepage of Jorma Rissanen". http://www.mdlresearch.org/jorma.rissanen/. Retrieved 20100703.
 ^ Rissanen, J. (2007). "Information and Complexity in Statistical Modeling". Springer. http://www.springer.com/computer/foundations/book/9780387366104. Retrieved 20100703.
 ^ Nannen, Volker. "A short introduction to Model Selection, Kolmogorov Complexity and Minimum Description Length.". http://volker.nannen.com/pdf/short_introduction_to_model_selection.pdf. Retrieved 20100703.
Further reading
 Minimum Description Length on the Web, by the University of Helsinki. Features readings, demonstrations, events and links to MDL researchers.
 Homepage of Jorma Rissanen, containing lecture notes and other recent material on MDL.
 Homepage of Peter Grünwald, containing his tutorial on MDL.
 J. Rissanen, Information and Complexity in Statistical Modeling, Springer, 2007.
 ISBN 0-262-07262-9.
 David MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.