 Memoization

Not to be confused with Memorization.
In computing, memoization is an optimization technique used primarily to speed up computer programs by having function calls avoid repeating the calculation of results for previously processed inputs. Memoization has also been used in other contexts (and for purposes other than speed gains), such as in simple mutually recursive descent parsing^{[1]} and in a general top-down parsing algorithm^{[2]}^{[3]} that accommodates ambiguity and left recursion in polynomial time and space. Although related to caching, memoization refers to a specific case of this optimization, distinguishing it from forms of caching such as buffering or page replacement. In the context of some logic programming languages, memoization is also known as tabling;^{[4]} see also lookup table.
Etymology
The term memoization was coined by Donald Michie in 1968^{[5]} and is derived from the Latin word memorandum (to be remembered), and thus carries the meaning of turning [the results of] a function into something to be remembered. While memoization might be confused with memorization (because of the shared cognate), memoization has a specialized meaning in computing.
Overview
A memoized function "remembers" the results corresponding to some set of specific inputs. Subsequent calls with remembered inputs return the remembered result rather than recalculating it, thus eliminating the primary cost of a call with given parameters from all but the first call made to the function with those parameters. The set of remembered associations may be a fixed-size set controlled by a replacement algorithm or a fixed set, depending on the nature of the function and its use. A function can only be memoized if it is referentially transparent; that is, only if calling the function has exactly the same effect as replacing that function call with its return value. (Special case exceptions to this restriction exist, however.) While related to lookup tables, since memoization often uses such tables in its implementation, memoization populates its cache of results transparently on the fly, as needed, rather than in advance.
Memoization is a means of lowering a function's time cost in exchange for space cost; that is, memoized functions become optimized for speed in exchange for a higher use of computer memory space. The time/space "cost" of algorithms has a specific name in computing: computational complexity. All functions have a computational complexity in time (i.e. they take time to execute) and in space.
Although a trade-off occurs (i.e., space used is speed gained), this differs from some other optimizations that involve time-space trade-offs, such as strength reduction, in that memoization is a run-time rather than compile-time optimization. Moreover, strength reduction potentially replaces a costly operation such as multiplication with a less costly operation such as addition, and the resulting savings can be highly machine-dependent (non-portable across machines), whereas memoization is a more machine-independent, cross-platform strategy.
Consider the following pseudocode function to calculate the factorial of n:
    function factorial (n is a non-negative integer)
        if n is 0 then
            return 1 [by the convention that 0! = 1]
        else
            return factorial(n – 1) times n [recursively invoke factorial with the parameter 1 less than n]
        end if
    end function
For every integer n such that n ≥ 0, the final result of the function factorial is invariant; if invoked as x = factorial(3), the result is such that x will always be assigned the value 6. A non-memoized version of the above, given the nature of the recursive algorithm involved, would require n + 1 invocations of factorial to arrive at a result, and each of these invocations, in turn, has an associated cost in the time it takes the function to return the value computed. Depending on the machine, this cost might be the sum of:
 1. The cost to set up the functional call stack frame.
 2. The cost to compare n to 0.
 3. The cost to subtract 1 from n.
 4. The cost to set up the recursive call stack frame. (As above.)
 5. The cost to multiply the result of the recursive call to factorial by n.
 6. The cost to store the return result so that it may be used by the calling context.
In a non-memoized implementation, every top-level call to factorial includes the cumulative cost of steps 2 through 6 proportional to the initial value of n.

A memoized version of the factorial function follows:

    function factorial (n is a non-negative integer)
        if n is in lookup-table then
            return lookup-table-value-for-n
        else if n is 0 then
            return 1 [by the convention that 0! = 1]
        else
            let x = factorial(n – 1) times n [recursively invoke factorial with the parameter 1 less than n]
            store x in lookup-table in the n^{th} slot [remember the result of n! for later]
            return x
        end if
    end function
In this particular example, if factorial is first invoked with 5, and then invoked later with any value less than or equal to five, those return values will also have been memoized, since factorial will have been called recursively with the values 5, 4, 3, 2, 1, and 0, and the return value for each of those will have been stored. If it is then called with a number greater than 5, such as 7, only 2 recursive calls will be made (7 and 6), and the value for 5! will have been stored from the previous call. In this way, memoization allows a function to become more time-efficient the more often it is called, thus resulting in an eventual overall speed-up.

Some other considerations
Automatic memoization
While memoization may be added to functions internally and explicitly by a computer programmer in much the same way the above memoized version of factorial is implemented, referentially transparent functions may also be automatically memoized externally.^{[1]} The techniques employed by Peter Norvig have application not only in Common Lisp (the language in which his paper demonstrated automatic memoization), but in various other programming languages. Applications of automatic memoization have also been formally explored in the study of term rewriting^{[6]} and artificial intelligence.^{[7]}

In those programming languages where functions are first- or second-class objects (such as Lua, with its first-class functions, or Perl), automatic memoization can be implemented by replacing (at run time) a function with its calculated value once a value has been calculated for a given set of parameters. The function that does this value-for-function-object replacement can generically wrap any referentially transparent function. Consider the following pseudocode (where it is assumed that functions are first-class values):
    function memoized-call (F is a function object parameter)
        if F has no attached array values then
            allocate an associative array called values;
            attach values to F;
        end if;

        if F.values[arguments] is empty then
            F.values[arguments] = F(arguments);
        end if;

        return F.values[arguments];
    end function
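This wrapper can be sketched in Python, where functions are first-class objects and the cache can be attached to the function itself. The attribute name values mirrors the pseudocode, and slow_square is a made-up example function:

```python
# A sketch of memoized-call in Python. The cache dict is attached to the
# function object under the attribute "values"; argument tuples serve as
# the associative-array keys.
def memoized_call(f, *args):
    if not hasattr(f, "values"):
        f.values = {}                  # allocate the associative array
    if args not in f.values:
        f.values[args] = f(*args)      # real call only on a cache miss
    return f.values[args]

def slow_square(n):                    # illustrative stand-in for a costly function
    return n * n
```

Here memoized_call(slow_square, 4) computes and stores 16; repeating the call returns the stored entry without invoking slow_square again.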
In order to call an automatically memoized version of factorial using the above strategy, rather than calling factorial directly, code invokes memoized-call(factorial, n), passing the function object along with its arguments. Each such call first checks whether a holder array has been allocated to store results, and if not, attaches that array. If no entry exists at the position values[arguments] (where arguments are used as the key of the associative array), a real call is made to factorial with the supplied arguments. Finally, the entry in the array at the key position is returned to the caller.

The above strategy requires explicit wrapping at each call to a function that is to be memoized. In those languages that allow closures, memoization can be effected implicitly by a functor factory that returns a wrapped memoized function object. In pseudocode, this can be expressed as follows:
    function construct-memoized-functor (F is a function object parameter)
        allocate a function object called memoized-version;

        let memoized-version(arguments) be
            if self has no attached array values then [self is a reference to this object]
                allocate an associative array called values;
                attach values to self;
            end if;

            if self.values[arguments] is empty then
                self.values[arguments] = F(arguments);
            end if;

            return self.values[arguments];
        end let;

        return memoized-version;
    end function
Rather than call factorial, a new function object memfact is created as follows:

    memfact = construct-memoized-functor(factorial)

The above example assumes that the function factorial has already been defined before the call to construct-memoized-functor is made. From this point forward, memfact(n) is called whenever the factorial of n is desired. In languages such as Lua, more sophisticated techniques exist which allow a function to be replaced by a new function with the same name, which would permit:

    factorial = construct-memoized-functor(factorial)
Essentially, such techniques involve attaching the original function object to the created functor and forwarding calls to the original function being memoized via an alias when a call to the actual function is required (to avoid endless recursion), as illustrated below:
    function construct-memoized-functor (F is a function object parameter)
        allocate a function object called memoized-version;

        let memoized-version(arguments) be
            if self has no attached array values then [self is a reference to this object]
                allocate an associative array called values;
                attach values to self;
                allocate a new function object called alias;
                attach alias to self; [for later ability to invoke F indirectly]
                self.alias = F;
            end if;

            if self.values[arguments] is empty then
                self.values[arguments] = self.alias(arguments); [not a direct call to F]
            end if;

            return self.values[arguments];
        end let;

        return memoized-version;
    end function
(Note: Some of the steps shown above may be implicitly managed by the implementation language and are provided for illustration.)
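In Python, closures make the functor-factory pattern almost direct, and rebinding the function's name achieves the same effect as the alias technique: the recursive calls inside the original factorial resolve through the rebound global name and therefore hit the memoized wrapper. A minimal sketch:

```python
def construct_memoized_functor(f):
    """Return a wrapper around f that caches results by argument tuple."""
    values = {}                         # captured by the closure; one cache per wrapper

    def memoized_version(*args):
        if args not in values:
            values[args] = f(*args)     # call the original on a cache miss
        return values[args]

    return memoized_version

def factorial(n):
    return 1 if n == 0 else factorial(n - 1) * n

# Rebinding the name: recursive calls inside the original factorial look
# up the global name "factorial", which now refers to the wrapper, so
# intermediate results are cached too. No endless recursion arises,
# because the wrapper holds its own reference to the original function.
factorial = construct_memoized_functor(factorial)
```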
Parsers
When a top-down parser tries to parse an ambiguous input with respect to an ambiguous CFG, it may need an exponential number of steps (with respect to the length of the input) to try all alternatives of the CFG in order to produce all possible parse trees, which eventually would require exponential memory space. Memoization was explored as a parsing strategy in 1991 by Norvig, who demonstrated that an algorithm similar to the use of dynamic programming and state sets in Earley's algorithm (1970), and tables in the CYK algorithm of Cocke, Younger and Kasami, could be generated by introducing automatic memoization to a simple backtracking recursive descent parser to solve the problem of exponential time complexity.^{[1]} The basic idea in Norvig's approach is that when a parser is applied to the input, the result is stored in a memotable for subsequent reuse if the same parser is ever reapplied to the same input. Richard Frost also used memoization to reduce the exponential time complexity of parser combinators, which can be viewed as a "purely functional top-down backtracking" parsing technique.^{[8]} He showed that basic memoized parser combinators can be used as building blocks to construct complex parsers as executable specifications of CFGs.^{[9]}^{[10]} Memoization was again explored in the context of parsing in 1995 by Johnson and Dörre.^{[11]}^{[12]} In 2002, it was examined in considerable depth by Ford in the form called packrat parsing.^{[13]}
In 2007, Frost, Hafiz and Callaghan^{[2]} described a top-down parsing algorithm that uses memoization to avoid redundant computations while accommodating any form of ambiguous CFG in polynomial time (Θ(n^{4}) for left-recursive grammars and Θ(n^{3}) for non-left-recursive grammars). Their top-down parsing algorithm also requires only polynomial space for potentially exponentially ambiguous parse trees, by means of 'compact representation' and 'local ambiguity grouping'. Their compact representation is comparable with Tomita's compact representation of bottom-up parsing.^{[14]} Their use of memoization is not limited to retrieving previously computed results when a parser is applied to the same input position repeatedly (which is essential for the polynomial time requirement); it is specialized to perform the following additional tasks:
 The memoization process (which could be viewed as a 'wrapper' around any parser execution) accommodates an ever-growing direct left-recursive parse by imposing depth restrictions with respect to input length and current input position.
 The algorithm's memotable 'lookup' procedure also determines the reusability of a saved result by comparing the saved result's computational context with the parser's current context. This contextual comparison is the key to accommodating indirect (or hidden) left-recursion.
 When performing a successful lookup in a memotable, instead of returning the complete result set, the process returns only references to the actual result, which eventually speeds up the overall computation.
 While updating the memotable, the memoization process groups the (potentially exponential) ambiguous results and ensures the polynomial space requirement.
Frost, Hafiz and Callaghan also described the implementation of the algorithm in PADL'08^{[3]} as a set of higher-order functions (called parser combinators) in Haskell, which enables the construction of directly executable specifications of CFGs as language processors. Their polynomial algorithm's ability to accommodate 'any form of ambiguous CFG' with top-down parsing is vital for syntax and semantic analysis in natural language processing. The XSAIGA site has more about the algorithm and implementation details.
While Norvig increased the power of the parser through memoization, the augmented parser was still as time-complex as Earley's algorithm, which demonstrates a case of the use of memoization for something other than speed optimization. Johnson and Dörre^{[12]} demonstrate another such non-speed-related application of memoization: the use of memoization to delay linguistic constraint resolution to a point in a parse where sufficient information has been accumulated to resolve those constraints. By contrast, in the speed optimization application of memoization, Ford demonstrated that memoization could guarantee that parsing expression grammars could parse in linear time even those languages that resulted in worst-case backtracking behavior.^{[13]}
Consider the following grammar:
    S → (A c) | (B d)
    A → X (a|b)
    B → X b
    X → x [X]

(Notation note: In the above example, the production S → (A c) | (B d) reads: "An S is either an A followed by a c or a B followed by a d." The production X → x [X] reads "An X is an x followed by an optional X.")
This grammar generates one of the following three variations of string: xac, xbc, or xbd (where x here is understood to mean one or more x's). Next, consider how this grammar, used as a parse specification, might effect a top-down, left-right parse of the string xxxxxbd:
 The rule A will recognize xxxxxb (by first descending into X to recognize one x, and again descending into X until all the x's are consumed, and then recognizing the b), and then return to S, and fail to recognize a c. The next clause of S will then descend into B, which in turn again descends into X and recognizes the x's by means of many recursive calls to X, and then a b, and returns to S and finally recognizes a d.
The key concept here is inherent in the phrase again descends into X. The process of looking forward, failing, backing up, and then retrying the next alternative is known in parsing as backtracking, and it is primarily backtracking that presents opportunities for memoization in parsing. Consider a function RuleAcceptsSomeInput(Rule, Position, Input), where the parameters are as follows:
 Rule is the name of the rule under consideration.
 Position is the offset currently under consideration in the input.
 Input is the input under consideration.
Let the return value of the function RuleAcceptsSomeInput be the length of the input accepted by Rule, or 0 if that rule does not accept any input at that offset in the string. In a backtracking scenario with such memoization, the parsing process is as follows: When the rule A descends into X at offset 0, it memoizes the length 5 against that position and the rule X. After having failed at d, B then, rather than descending again into X, queries the position 0 against rule X in the memoization engine, and is returned a length of 5, thus saving having to actually descend again into X, and carries on as if it had descended into X as many times as before.
In the above example, one or many descents into X may occur, allowing for strings such as xxxxxxxxxxxxxxxxbd. In fact, there may be any number of x's before the b. While the call to S must recursively descend into X as many times as there are x's, B will never have to descend into X at all, since the return value of RuleAcceptsSomeInput(X, 0, xxxxxxxxxxxxxxxxbd) will be 16 (in this particular case).

Those parsers that make use of syntactic predicates are also able to memoize the results of predicate parses, thereby reducing such constructions as:
    S → (A)? A
    A → /* some rule */
to one descent into A.
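As an illustration, the earlier grammar and the RuleAcceptsSomeInput convention can be sketched in Python. Each rule function returns the length of input it accepts at a given offset, or 0 on failure; only rule X is memoized, and the function names and the memo dict are illustrative choices:

```python
# Memoized recursive descent for the grammar S -> (A c) | (B d),
# A -> X (a|b), B -> X b, X -> x [X]. Each rule returns the accepted
# length at an offset, or 0 on failure; memo caches results for X.
memo = {}   # (rule name, position) -> accepted length

def X(inp, pos):
    key = ("X", pos)
    if key in memo:
        return memo[key]                # reuse: no repeated descent into X
    if pos < len(inp) and inp[pos] == "x":
        length = 1 + X(inp, pos + 1)    # an x followed by an optional X
    else:
        length = 0
    memo[key] = length
    return length

def A(inp, pos):
    n = X(inp, pos)
    if n and pos + n < len(inp) and inp[pos + n] in "ab":
        return n + 1
    return 0

def B(inp, pos):
    n = X(inp, pos)                     # answered from memo once A has run
    if n and pos + n < len(inp) and inp[pos + n] == "b":
        return n + 1
    return 0

def S(inp, pos=0):
    n = A(inp, pos)                     # first alternative: A c
    if n and pos + n < len(inp) and inp[pos + n] == "c":
        return n + 1
    n = B(inp, pos)                     # backtrack: B d; X comes from memo
    if n and pos + n < len(inp) and inp[pos + n] == "d":
        return n + 1
    return 0
```

Parsing "xxxxxbd" accepts all 7 characters; when S backtracks into B, the length 5 for X at offset 0 is returned from memo rather than recomputed by descending into X again.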
If a parser builds a parse tree during a parse, it must memoize not only the length of the input that matches at some offset against a given rule, but must also store the subtree that is generated by that rule at that offset in the input, since subsequent calls to the rule by the parser will not actually descend and rebuild that tree. For the same reason, memoized parser algorithms that generate calls to external code (sometimes called a semantic action routine) when a rule matches must use some scheme to ensure that such rules are invoked in a predictable order.
Since not every grammar handled by a backtracking or syntactic-predicate-capable parser will actually need backtracking or predicate checks, the overhead of storing each rule's parse results against every offset in the input (and storing the parse tree if the parsing process does that implicitly) may actually slow down a parser. This effect can be mitigated by explicit selection of those rules the parser will memoize.^{[15]}
See also
 Computational complexity theory – more information on algorithm complexity
 Strength reduction – a compiler optimization that replaces a costly operation with an equivalent, less costly one
 Partial evaluation – a related technique for automatic program optimization
 Lazy evaluation – shares some concepts with memoization
 Lookup table – a key data structure used in memoization
 Flyweight pattern – an object-oriented design pattern that also uses a kind of memoization
 Director string
 Dynamic programming – some applications of memoizing techniques
 Hashlife – a memoizing technique to speed up the computation of cellular automata
 Higher-Order Perl – a free book by Mark Jason Dominus that contains an entire chapter on implementing memoization, along with some background
References
 ^ ^{a} ^{b} ^{c} Norvig, Peter, "Techniques for Automatic Memoization with Applications to Context-Free Parsing," Computational Linguistics, Vol. 17 No. 1, pp. 91–98, March 1991.
 ^ ^{a} ^{b} Frost, Richard, Hafiz, Rahmatullah, and Callaghan, Paul, "Modular and Efficient Top-Down Parsing for Ambiguous Left-Recursive Grammars," 10th International Workshop on Parsing Technologies (IWPT), ACL-SIGPARSE, pp. 109–120, June 2007, Prague.
 ^ ^{a} ^{b} Frost, Richard, Hafiz, Rahmatullah, and Callaghan, Paul, "Parser Combinators for Ambiguous Left-Recursive Grammars," 10th International Symposium on Practical Aspects of Declarative Languages (PADL), ACM-SIGPLAN, Volume 4902/2008, pp. 167–181, January 2008, San Francisco.
 ^ Warren, David, "Tabling and Datalog Programming." Accessed 29 May 2009.
 ^ Michie, Donald, "Memo Functions and Machine Learning," Nature, No. 218, pp. 19–22, 1968.
 ^ Hoffman, Berthold, "Term Rewriting with Sharing and Memoïzation," Algebraic and Logic Programming: Third International Conference, Proceedings, H. Kirchner and G. Levi (eds.), pp. 128–142, Volterra, Italy, 2–4 September 1992.
 ^ Mayfield, James, et al., "Using Automatic Memoization as a Software Engineering Tool in Real-World AI Systems," TBD, 1995.
 ^ Frost, Richard, and Szydlowski, Barbara, "Memoizing Purely Functional Top-Down Backtracking Language Processors," Sci. Comput. Program. 27(3): 263–288, 1996.
 ^ Frost, Richard, "Using Memoization to Achieve Polynomial Complexity of Purely Functional Executable Specifications of Non-Deterministic Top-Down Parsers," SIGPLAN Notices 29(4): 23–30.
 ^ Frost, Richard, "Monadic Memoization towards Correctness-Preserving Reduction of Search," Canadian Conference on AI 2003, pp. 66–80.
 ^ Johnson, Mark, "Memoization of Top-Down Parsing," Computational Linguistics, Vol. 21 No. 3, pp. 405–417, September 1995.
 ^ ^{a} ^{b} Johnson, Mark, and Dörre, Jochen, "Memoization of Coroutined Constraints," Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, Cambridge, Massachusetts, 1995.
 ^ ^{a} ^{b} Ford, Bryan, Packrat Parsing: a Practical Linear-Time Algorithm with Backtracking, Master's thesis, Massachusetts Institute of Technology, September 2002.
 ^ Tomita, Masaru, Efficient Parsing for Natural Language, Kluwer, Boston, MA, 1985.
 ^ Acar, Umut A., et al., "Selective Memoization," Proceedings of the 30th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, New Orleans, Louisiana, pp. 14–25, 15–17 January 2003.
External links
 Examples of memoization in various programming languages
 Memoize – Memoize is a small library, written by Tim Bradshaw, for performing memoization in Common Lisp.
 IncPy – A custom Python interpreter that performs automatic memoization (with no required user annotations)
 Dave Herman's Macros for defining memoized procedures in Racket.
 Memoize.pm – a Perl module that implements memoized functions.
 Java memoization – an example in Java using dynamic proxy classes to create a generic memoization pattern.
 Memoization in C++ – although C++ doesn't support firstclass functions, here is a toolkit that supports automated memoization (via preprocessing).
 Tek271 Memoizer – Open source Java memoizer using annotations and pluggable cache implementations.
 memoize – A Ruby module that implements memoized methods.
 Python memoization – A Python example of memoization.
 OCaml memoization – Implemented as a Camlp4 syntax extension.
 Memoization in Lua – Two example implementations of a general memoize function in Lua.
 Memoization in Mathematica – Memoization and limited memoization in Mathematica.
 Memoization in JavaScript – Extending the Function prototype in JavaScript.
 Memoization in JavaScript – Examples of memoization in JavaScript using its own caching mechanism and using the YUI library.
 XSAIGA – eXecutable SpecificAtIons of GrAmmars. Contains publications related to topdown parsing algorithm that supports leftrecursion and ambiguity in polynomial time and space.
 Memoization in Scheme – A Scheme example of memoization on a class webpage.
 Memoization in Combinatory Logic – A web service to reduce Combinatory Logic while memoizing every step in a database.
Categories: Software optimization
 Computer performance
Wikimedia Foundation. 2010.