# Random access machine

In computer science, a random access machine (RAM) is an abstract machine in the general class of register machines. The RAM is very similar to the counter machine but with the added capability of "indirect addressing" of its registers. Like the counter machine, the RAM has its instructions in the finite-state portion of the machine (the so-called Harvard architecture).

The RAM's equivalent of the Universal Turing machine -- with its program in the registers as well as its data -- is called the Random access stored program machine or RASP. It is an example of the so-called von Neumann architecture and is closest to the common notion of computer.

Together with the Turing machine and counter machine models, the RAM and RASP models are used for computational complexity analysis. Van Emde Boas (1990) calls these three plus the pointer machine "sequential machine" models, to distinguish them from "parallel random access machine" models.

Introduction to the Random Access Machine Model (RAM)

The concept of a random access machine (RAM) starts with the simplest model of all, the so-called counter machine model. Two additions move it away from the counter machine, however. The first enhances the machine with the convenience of indirect addressing; the second moves the model toward the more conventional accumulator-based computer with the addition of one or more auxiliary (dedicated) registers, the most common of which is called "the accumulator".

Formal Definition: Random Access Machine

A "Random Access Machine" (RAM) is an abstract computational-machine model identical to a multiple-register counter machine with the addition of indirect addressing. At the discretion of an instruction from its finite state machine's TABLE, the machine derives a "target" register's address either (i) directly from the instruction itself, or (ii) indirectly from the "contents" (e.g. number, label) of the "pointer" register specified in the instruction.

By definition: A "register" is a location with both an "address" (a unique, distinguishable designation/locator equivalent to a natural number) and a "content" -- a single natural number. For precision we will use the quasi-formal symbolism from Boolos-Burgess-Jeffrey (2002) to specify a register, its contents, and an operation on a register:
* [r] means "the contents of register with address r". The label "r" here is a "variable" that can be filled with a natural number or a letter (e.g. "A") or a name.
* → means "copy/deposit into" or "replaces", but without destruction of the source.
:: Example: [3] + 1 → 3 means "the contents of source register with address "3", plus 1, is put into destination register with address "3"" (here source and destination are the same place). If [3] = 37, that is, the contents of register 3 is the number "37", then 37 + 1 = 38 will be put into register 3.
:: Example: [3] → 5 means "the contents of source register with address "3" is put into destination register with address "5"". If [3] = 38, that is, the contents of register 3 is the number 38, then this number will be put into register 5.
::: NOTE: As it stands this second example is ambiguous. Unless otherwise specified, our usage hereafter will be "the contents of source register 3 remains in register 3", i.e. [3] → 3 (the alternative would be that the contents of register 3 is "removed" from register 3 and placed into register 5, thereby leaving register 3 empty).
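The notation above can be sketched directly, modeling the register file as a Python dict mapping addresses to natural-number contents (the dict model is this sketch's assumption, not part of the formalism):

```python
# Registers as a dict: address -> natural-number contents.
reg = {3: 37, 5: 0}

# [3] + 1 -> 3 : increment register 3 in place (source = destination).
reg[3] = reg[3] + 1          # 37 + 1 = 38

# [3] -> 5 : non-destructive copy; register 3 still holds 38 afterward.
reg[5] = reg[3]

print(reg[3], reg[5])        # 38 38
```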

Definition: A "direct" instruction is one that specifies, in the instruction itself, the address of the source or destination register whose contents will be the subject of the instruction.

Definition: An "indirect" instruction is one that specifies a "pointer register", the contents of which is the address of a "target" register. The target register can be either a source or a destination (the various COPY instructions provide examples of this). A register can address itself indirectly.

: For want of a standard/convention, this article will specify "direct/indirect", abbreviated as "d/i", as a parameter (or parameters) in the instruction:
:: Example: COPY ( d, A, i, N ) means: directly (d) get the source register's address (register "A") from the instruction itself, but indirectly (i) get the destination address from pointer-register N. Suppose [N] = 3; then register 3 is the destination and the instruction will do the following: [A] → 3.
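The d/i parameter convention can be sketched as a small helper; the names `copy` and `reg` are this sketch's own, not part of any standard:

```python
reg = {"A": 7, "N": 3, 3: 0}

def copy(reg, src_mode, src, dst_mode, dst):
    """COPY (src_mode, src, dst_mode, dst): 'd' means use the given
    address directly; 'i' means the given register is a pointer whose
    contents is the target address."""
    s = src if src_mode == "d" else reg[src]
    d = dst if dst_mode == "d" else reg[dst]
    reg[d] = reg[s]

# COPY ( d, A, i, N ): source address is "A" itself; the destination
# address is the contents of pointer register N (here [N] = 3), so [A] -> 3.
copy(reg, "d", "A", "i", "N")
print(reg[3])   # 7
```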

Definition: The contents of "source register" is used by the instruction. The source register's address can be specified either (i) directly by the instruction, or (ii) indirectly by the pointer register specified by the instruction.

Definition: The contents of the "pointer register" is the "address" of the "target" register.

Definition: The contents of the "pointer register" points to the "target register" -- the "target" may be either a source or a destination register.

Definition: The "destination register" is where the instruction deposits its result. The destination register's address can be specified either (i) directly by the instruction, or (ii) indirectly by the pointer register specified by the instruction. The source and destination registers can be one and the same.

Refresher: The counter machine model

:"Melzak (1961) provides an easy visualization of a counter machine: its "registers" are holes in the ground, and these holes hold pebbles. Per an instruction, into and out of these holes "the computer" (person or machine) adds (INCrements) or removes (DECrements) a single pebble. As needed, additional pebbles come from, and excess pebbles go back into, an infinite supply; if the hole is too small to accommodate the pebbles the "computer" digs the hole bigger."

:"Minsky (1961) and Hopcroft-Ullman 1979 (p. 171) offer the visualization of a multi-tape Turing machine with as many left-ended tapes as "registers". Each tape's length is unbounded to the right, and every square is blank except for the left end, which is marked. The "distance" of a tape's "head" from its left end, measured in numbers of tape-squares, represents the natural number in "the register". To DECrement the count of squares the tape head moves left; to INCrement, it moves right. There is no need to print or erase marks on the tape; the only conditional instruction is to check whether the head is at the left end, by testing the left-end mark with a "Jump-if-marked" instruction."

:"The following instruction "mnemonics", e.g. "CLR (r)", are arbitrary; no standard exists."
The register machine has, for a memory external to its finite-state machine, an unbounded (cf. the footnote on countable vs unbounded) collection of discrete and uniquely-labelled locations with "unbounded" capacity, called "registers". These registers hold only natural numbers (zero and the positive integers). Per a list of sequential instructions in the finite state machine's TABLE, a few (e.g. 2) types of primitive operations operate on the contents of these "registers". Finally, a "conditional-expression" in the form of an "IF-THEN-ELSE" is available to test the contents of one or two registers and "branch/jump" the finite state machine out of the default instruction-sequence.

Base model 1: The model closest to Minsky's (1961) visualization and to Lambek (1961):
*{ INCrement contents of register r, DECrement contents of register r, "IF" contents of register r is Zero "THEN" Jump to instruction Iz "ELSE" continue to next instruction }:
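The three-instruction base model can be interpreted in a few lines; the program encoding and the helper name `run` are this sketch's own, not any standard:

```python
# A counter-machine interpreter for the base model { INC, DEC, JZ }.
# A program is a list of tuples; JZ (r, Iz) jumps to index Iz if [r] = 0.
def run(program, reg):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "INC":
            reg[args[0]] = reg.get(args[0], 0) + 1
        elif op == "DEC":                       # registers hold naturals
            reg[args[0]] = max(0, reg.get(args[0], 0) - 1)
        elif op == "JZ":                        # IF [r] = 0 THEN jump to Iz
            r, target = args
            if reg.get(r, 0) == 0:
                pc = target
                continue
        pc += 1
    return reg

# Example: move [1] into register 2 by repeated DEC/INC, then halt.
prog = [("JZ", 1, 4), ("DEC", 1), ("INC", 2), ("JZ", 0, 0)]
result = run(prog, {0: 0, 1: 3, 2: 0})
print(result)   # {0: 0, 1: 0, 2: 3}
```

Note that the program falls off the end of the list to halt; an explicit HALT instruction could be added in the same style.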

If we stick with a specific name for the accumulator, e.g. "A", we can imply the accumulator in the instructions, for example:
: INC ( A ) = INCA
However, when we write the CPY instructions without the accumulator called out, the instructions are ambiguous or they must have empty parameters:
: CPY ( d, r2, d, A ) = CPY ( d, r2, , )
: CPY ( d, A, d, r2 ) = CPY ( , , d, r2 )

Historically what has happened is that these two CPY instructions have received distinctive names; however, no convention exists. Tradition (e.g. Knuth's (1973) imaginary MIX computer) uses the two names LOAD and STORE. Here we add the "i/d" parameter:
: LDA ( d/i, rs ) =def CPY ( d/i, rs, d, A )
: STA ( d/i, rd ) =def CPY ( d, A, d/i, rd )

The typical accumulator-based model will have all its two-variable arithmetic and constant operations (e.g. ADD (A, r), SUB (A, r) ) use (i) the accumulator's contents together with (ii) a specified register's contents. The one-variable operations (e.g. INC (A), DEC (A) and CLR (A) ) require only the accumulator. Both instruction-types deposit the result (e.g. sum, difference, product, quotient or remainder) in the accumulator.
: Example: INCA = [A] + 1 → A
: Example: ADDA (rs) = [A] + [rs] → A
: Example: MULA (rs) = [A] * [rs] → A
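The accumulator conventions above can be sketched as helpers over a dict with a distinguished key "A" (the dict model and helper names are this sketch's assumptions):

```python
reg = {"A": 0, 2: 5}

def INCA(r):      r["A"] += 1          # [A] + 1  -> A
def ADDA(r, rs):  r["A"] += r[rs]      # [A] + [rs] -> A
def MULA(r, rs):  r["A"] *= r[rs]      # [A] * [rs] -> A

INCA(reg)          # A = 1
ADDA(reg, 2)       # A = 1 + 5 = 6
MULA(reg, 2)       # A = 6 * 5 = 30
print(reg["A"])    # 30
```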

If we so choose, we can abbreviate the mnemonics, because the destination register and at least one source register are always the accumulator A. Thus we have:
:: { LDA (i/d, rs), STA (i/d, rd), CLRA, INCA, DECA, ADDA (rs), SUBA (rs), MULA (rs), DIVA (rs), etc. }

The notion of indirect address register "N"

If our model has an "unbounded accumulator" can we "bound" all the other registers? Not until we provide for at least one unbounded register from which we derive our indirect addresses.

The minimalist approach is to use the accumulator itself (Schönhage does this).

Another approach (Schönhage does this too) is to declare a specific register the "indirect address register" and confine indirection relative to this register (Schönhage's RAM0 model uses both the A and N registers for indirect as well as direct instructions). Again our new register has no conventional name -- perhaps "N" from "iNdex", "iNdirect", or "address Number".

For maximum flexibility, as we have done for the accumulator A, we will consider N just another register subject to increment, decrement, clear, test, direct copy, etc. Again we can shrink the instruction to a single parameter that provides for direction and indirection, for example:
: LDAN (i/d) = CPY (i/d, N, d, A) ; LoaD Accumulator via iNdirection register
: STAN (i/d) = CPY (d, A, i/d, N) ; STore Accumulator via iNdirection register
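The two N-relative instructions can be sketched per the CPY definitions above (the function names follow this article's mnemonics; the dict model is this sketch's own):

```python
reg = {"A": 0, "N": 4, 4: 9}

def LDAN(reg):  reg["A"] = reg[reg["N"]]   # CPY (i, N, d, A)
def STAN(reg):  reg[reg["N"]] = reg["A"]   # CPY (d, A, i, N)

LDAN(reg)            # [N] = 4, so [4] = 9 is loaded into A
reg["A"] += 1        # arithmetic is confined to the accumulator
STAN(reg)            # write 10 back into register 4, via N
print(reg[4])        # 10
```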

Why is this such an interesting approach? At least two reasons:

(1) An instruction set with no parameters:

Schönhage does this to produce his RAM0 instruction set. See section below.

(2) Reduce a RAM to a Post-Turing machine: Posing as minimalists, we reduce all the registers except the accumulator A and indirection register N, e.g. r = { r0, r1, r2, ... }, to an unbounded string of (very-) bounded-capacity pigeon-holes. These will do nothing but hold (very-) bounded numbers, e.g. a lone bit with value { 0, 1 }. Likewise we shrink the accumulator to a single bit. We restrict any arithmetic to the registers { A, N }, use indirect operations to pull the contents of registers into the accumulator, and write 0 or 1 from the accumulator to a register:
: { LDA (i, N), STA (i, N), CLR (A/N), INC (A/N), DEC (N), JZ (A/N, Iz), JZ (Iz), H }
We push further and eliminate A altogether by the use of two "constant" registers called "ERASE" and "PRINT": [ERASE] = 0, [PRINT] = 1.
: { CPY (d, ERASE, i, N), CPY (d, PRINT, i, N), CLR (N), INC (N), DEC (N), JZ (i, N, Iz), JZ (Iz), H }

Rename the COPY instructions, call INC (N) = RIGHT and DEC (N) = LEFT, and we have the same instructions as the Post-Turing machine, plus an extra CLRN:
: { ERASE, PRINT, CLRN, RIGHT, LEFT, JZ (i, N, Iz), JZ (Iz), H }

Turing equivalence of the RAM with indirection

In the section above we informally showed that a RAM with an unbounded indirection capability produces a Post-Turing machine. The Post-Turing machine is Turing equivalent, so we have shown that the RAM with indirection is Turing equivalent.

We give here a slightly more formal demonstration. Begin by designing our model with three reserved registers "E", "P", and "N", plus an unbounded set of registers 1, 2, ..., n to the right. The registers 1, 2, ..., n will be considered "the squares of the tape". Register "N" points to "the scanned square" that "the head" is currently observing. The "head" can be thought of as being in the conditional jump -- observe that it uses indirect addressing ( cf Elgot-Robinson p. 398). As we decrement or increment "N" the (apparent) head will "move left" or "right" along the squares. We will move the contents of "E"=0 or "P"=1 to the "scanned square" as pointed to by N, using the indirect CPY.

The fact that our tape is left-ended presents us with a minor problem: Whenever LEFT occurs our instructions will have to test to determine whether or not the contents of "N" is zero; if so we should leave its count at "0" (this is our choice as designers -- for example we might have the machine/model "trigger an event" of our choosing).

:Instruction set 1 (augmented): { INC (N), DEC (N), CLR (N), CPY (d, rs, i, N), JZ (i, r, Iz), HALT }

The following table both defines the Post-Turing instructions in terms of their RAM-equivalent instructions and gives an example of their functioning. The (apparent) location of the head along the tape of registers r0-r5 . . . is shown shaded:
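The tape-of-registers construction can be sketched as follows: registers hold single bits, N is the head position, and LEFT respects the left-ended tape (names and encoding are illustrative only):

```python
# Post-Turing instructions over a register "tape": tape[k] is the bit in
# register k (blank = 0); N holds the head position, boxed in a list so
# the helpers can update it.
tape = {}
N = [0]

def PRINT():  tape[N[0]] = 1            # CPY (d, PRINT, i, N)
def ERASE():  tape[N[0]] = 0            # CPY (d, ERASE, i, N)
def RIGHT():  N[0] += 1                 # INC (N)
def LEFT():   N[0] = max(0, N[0] - 1)   # DEC (N); stop at the left end

PRINT(); RIGHT(); PRINT(); LEFT()
print(tape, N[0])    # {0: 1, 1: 1} 0
```

The `max(0, ...)` in LEFT is the designer's choice discussed above: on an attempt to move left of square 0, the count is simply left at 0.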

Example: Bounded indirection yields a machine that is not Turing equivalent

Throughout this demonstration we have to keep in mind that the instructions in the finite state machine's TABLE are "bounded", i.e. "finite":
:"Besides merely being a finite set of rules which gives a sequence of operations for solving a specific type of problem, an algorithm has five important features [Finiteness, Definiteness, Input, Output, Effectiveness]" (italics added, Knuth p. 4-7).

:"The difficulty arises because the registers have explicit "names" (numbers) and our machine must call each out by name in order to "access" it."

We will build the indirect CPY ( i, q, d, φ ) with the CASE operator. The address of the target register will be specified by the contents of register "q"; once the CASE operator has determined what this number is, CPY will directly deposit the contents of the register with that number into register "φ". We will need an additional register that we will call "y" -- it serves as an up-counter.

:"So the following is actually a constructive demonstration or proof that we can indeed simulate the indirect CPY ( i, q, d, φ ) without a "hardware" design change to our counter machine/model. However, note that because this indirect CPY is "bounded" by the size/extent of the finite state machine, a RASP using this indirect CPY can only calculate the primitive recursive functions, not the full suite of mu recursive functions."

The CASE "operator" is described in Kleene (1952) (p. 229) and in Boolos-Burgess-Jeffrey (2002) (p. 74); the latter authors emphasize its utility. The following definition is per Kleene but modified to reflect the familiar "IF-THEN-ELSE" construction.

The CASE operator "returns" a natural number into φ depending on which "case" is satisfied, starting with "case_0" and going successively through "case_last"; if no case is satisfied then the number called "default" (aka "woops") is returned into φ (here x designates some selection of parameters, e.g. register q and the string r0, ..., rlast):

"Definition by cases" φ (x, y):
:* case_0: IF Q0(x, y) is true THEN φ0(x, y) ELSE
:* case_1: IF Q1(x, y) is true THEN φ1(x, y) ELSE
:* case_2 through case_next_to_last: . . . ELSE
:* case_last: IF Qlast(x, y) is true THEN φlast(x, y) ELSE
:* default: do φdefault(x, y)

Kleene requires that the "predicates" Qn that do the testing are all mutually exclusive -- "predicates" are functions that produce only { true, false } as output; Boolos-Burgess-Jeffrey add the requirement that the cases be "exhaustive".

We begin with a number in register q that represents the address of the target register. But what is this number? The "predicates" will test it to find out, one trial after another: JE (q, y, z) followed by INC (y). Once the number is identified explicitly, the CASE operator directly/explicitly copies the contents of this register to φ:

:"Definition by cases" CPY (i, q, d, φ) =def φ (q, r0, ..., rlast, y):
:* case_0: IF CLR (y), [q] - [y] = 0 THEN CPY ( r0, φ ), J (exit) ELSE
:* case_1: IF INC (y), [q] = [y] = 1 THEN CPY ( r1, φ ), J (exit) ELSE
:* case_2 through case_n-1: IF . . . THEN . . . ELSE
:* case_n: IF INC (y), [q] = [y] = n THEN CPY ( rn, φ ), J (exit) ELSE
:* case_n+1 through case_last-1: IF . . . THEN . . . ELSE
:* case_last: IF INC (y), [q] = [y] = "last" THEN CPY ( rlast, φ ), J (exit) ELSE
:* default: woops

Case_0 (the base step of the recursion on y) looks like this:
:* "case_0":
::* CLR ( y ) ; set register y = 0
::* JE ( q, y, "_φ0" )
::* J ( "case_1" )
::* "_φ0:" CPY ( r0, φ )
::* J ( "exit" )
:* "case_1:" etc.

Case_n (the induction step) looks like this; remember, each instance of "n", "n+1", ..., "last" must be an explicit natural number:
:* "case_n":
::* INC ( y )
::* JE ( q, y, "_φn" )
::* J ( "case_n+1" )
::* "_φn:" CPY ( rn, φ )
::* J ( "exit" )
:* "case_n+1:" etc.

Case_last stops the induction and bounds the CASE operator (and thereby bounds the "indirect copy" operator):
:* "case_last":
::* INC ( y )
::* JE ( q, y, "_φlast" )
::* J ( "woops" )
::* "_φlast:" CPY ( rlast, φ )
::* J ( "exit" )
:* "woops:" how do we handle an out-of-bounds attempt?
:* "exit:" etc.
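The whole CASE construction above amounts to the following sketch, where the up-counter y is tried against [q] one explicit value at a time and `last` bounds the search (function and register names are this sketch's own):

```python
# A constructive sketch of the bounded indirect CPY (i, q, d, phi):
# the CASE operator tries y = 0, 1, ..., last in turn against [q],
# then performs a *direct* copy once the address is identified.
def indirect_cpy(reg, q, phi, last):
    y = 0                        # case_0 begins with CLR (y)
    while y <= last:
        if reg[q] == y:          # JE (q, y, "_phi_y")
            reg[phi] = reg[y]    # CPY (r_y, phi), then J (exit)
            return reg
        y += 1                   # INC (y), fall through to the next case
    raise IndexError("woops: out-of-bounds indirect address")

reg = {0: 0, 1: 42, 2: 0, "q": 1, "phi": 0}
indirect_cpy(reg, "q", "phi", last=2)
print(reg["phi"])    # 42
```

Note that `last` is fixed when the "program" is written, exactly as the finite TABLE fixes case_last; this is why the construction cannot reach registers beyond it.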

If the CASE could continue ad infinitum it would be the mu operator. But it cannot -- its finite state machine's "state register" has reached its maximum count (e.g. 65535 = 1111111111111111 in base 2), or its table has run out of instructions; it is a "finite" machine, after all.

Examples of Models

Register-to-register ("read-modify-write") model of Cook and Reckhow (1973)

The commonly-encountered Cook and Reckhow model is a bit like the ternary-register Melzak model (written here with Knuth-style mnemonics -- the original instructions had no mnemonics excepting TRA, Read, Print).

:* LOAD ( C, rd ) ; C → rd, C is any integer
:: Example: LOAD ( 0, 5 ) will clear register 5.
:* ADD ( rs1, rs2, rd ) ; [rs1] + [rs2] → rd, the registers can be the same or different
:: Example: ADD ( A, A, A ) will double the contents of register A.
:* SUB ( rs1, rs2, rd ) ; [rs1] - [rs2] → rd, the registers can be the same or different
:: Example: SUB ( 3, 3, 3 ) will clear register 3.
:* COPY ( i, rp, d, rd ) ; [[rp]] → rd, indirectly copy the contents of the source-register pointed to by pointer-register rp into the destination register.
:* COPY ( d, rs, i, rp ) ; [rs] → [rp], copy the contents of source register rs into the destination-register pointed to by pointer-register rp.
:* JNZ ( r, Iz ) ; conditional jump if [r] is positive, i.e. IF [r] > 0 THEN jump to instruction Iz ELSE continue in sequence (Cook and Reckhow call this "TRAnsfer control to line m if Xj > 0")
:* READ ( rd ) ; copy "the input" into destination register rd
:* PRINT ( rs ) ; copy the contents of source register rs to "the output"
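The three register-to-register arithmetic instructions and their examples above can be sketched as follows; truncated subtraction keeping contents in the naturals is an assumption this sketch makes explicit:

```python
reg = {"A": 3, 3: 7, 5: 1}

def LOAD(C, rd):        reg[rd] = C                       # C -> rd
def ADD(rs1, rs2, rd):  reg[rd] = reg[rs1] + reg[rs2]     # [rs1]+[rs2] -> rd
def SUB(rs1, rs2, rd):  reg[rd] = max(0, reg[rs1] - reg[rs2])

LOAD(0, 5)              # clears register 5
ADD("A", "A", "A")      # doubles [A]: 3 + 3 = 6
SUB(3, 3, 3)            # clears register 3
print(reg["A"], reg[3], reg[5])   # 6 0 0
```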

Schönhage's RAM0 and RAM1 (1980)

Schönhage (1980) describes a very primitive, atomized model chosen for his proof of the equivalence of his SMM pointer machine model:
: "In order to avoid any explicit addressing the RAM0 has the accumulator with contents "z" and an additional address register with current contents "n" (initially 0)" (p. 494)

RAM1 model: Schönhage demonstrates how his construction can be used to form the more common, usable form of "successor"-like RAM (using this article's mnemonics):
:* LDA k ; k → A, k is a constant, an explicit number such as "47"
:* LDA ( d, r ) ; [r] → A ; directly load A
:* LDA ( i, r ) ; [[r]] → A ; indirectly load A
:* STA ( d, r ) ; [A] → r ; directly store A
:* STA ( i, r ) ; [A] → [r] ; indirectly store A
:* JEA ( r, z ) ; IF [A] = [r] THEN Iz ELSE continue
:* INCA ; [A] + 1 → A

RAM0 model: Schönhage's RAM0 machine has 6 instructions indicated by a single letter (the 6th, "C xxx", seems to involve 'skip over next parameter'). Schönhage designated the accumulator with "z", "N" with "n", etc. Rather than Schönhage's mnemonics we will use the mnemonics developed above:
:* (Z), CLRA: 0 → A
:* (A), INCA: [A] + 1 → A
:* (N), CPYAN: [A] → N
:* (A), LDAA: [[A]] → A ; the contents of A points to a register address; put that register's contents into A
:* (S), STAN: [A] → [N] ; the contents of N points to a register address; put the contents of A into the register pointed to by N
:* (C), JAZ ( z ): IF [A] = 0 THEN go to Iz ; ambiguous in his treatment
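Five of the six instructions can be sketched as a tiny interpreter (the program encoding, the letter "L" for the load instruction, and the omission of the ambiguous jump "C" are this sketch's own choices):

```python
# A RAM0-style interpreter: A is the accumulator, N the address
# register, reg the memory; each instruction is a single letter.
def ram0(prog, reg=None):
    reg = reg or {}
    A, N, pc = 0, 0, 0
    while pc < len(prog):
        op = prog[pc]
        if   op == "Z": A = 0                    # CLRA:  0 -> A
        elif op == "A": A += 1                   # INCA:  [A]+1 -> A
        elif op == "N": N = A                    # CPYAN: [A] -> N
        elif op == "L": A = reg.get(A, 0)        # LDAA:  [[A]] -> A
        elif op == "S": reg[N] = A               # STAN:  [A] -> [N]
        pc += 1
    return A, N, reg

# Store 2 into register 2 (via N), then load it back through A:
A, N, reg = ram0(["A", "A", "N", "S", "L"])
print(A, reg)    # 2 {2: 2}
```

The example shows the characteristic RAM0 move: any explicit address must first be counted up in A and copied into N before it can be used.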

Indirection comes (i) from CPYAN (copy/transfer the contents of A to N) working with the store_A_via_N instruction STAN, and (ii) from the peculiar indirection instruction LDAA ( [[A]] → A ).

Footnotes

Countable vs unbounded

The definitional fact that any sort of counter machine without an unbounded register-"address" register must specify a register "r" by name indicates that the model requires "r" to be "finite", although it is "unbounded" in the sense that the model implies no upper limit to the number of registers necessary to do its job(s). For example we do not require r < 83,617,563,821,029,283,746 nor r < 2^1,000,001, etc.

:Thus our model can "expand" the number of registers, if necessary to perform a certain computation. However this "does" mean that whatever number the model expands to must be "countable" -- it must be indexable with a counting-number: "ω is not an option".

We can escape this restriction by providing an unbounded register to provide the address of the register that specifies an indirect address.

See also

*Turing machine
*Register machine -- generic register-based machines as opposed to Turing-like tape-based machines
*Counter machine -- simplest model without indirection
*Random access stored program machine -- RASP: a RAM whose program is stored in its registers along with its data
*Pointer machine -- a restricted type of RAM
*PRAM - Parallel Random Access Machine
*real RAM, the infinite precision floating point arithmetics RAM, an abstract model of computation used in computational geometry

External links

* [http://savannah.nongnu.org/projects/ramemu/ Random Access Machine Emulator]
* [http://www.szkup.com/?pid=msthesis&lang=en Random Access Machine Emulator]

References

With a few exceptions, these references are the same as those at Register machine.

* George Boolos, John P. Burgess, Richard Jeffrey (2002), "Computability and Logic: Fourth Edition", Cambridge University Press, Cambridge, England. The original Boolos-Jeffrey text has been extensively revised by Burgess: more advanced than an introductory textbook. The "abacus machine" model is extensively developed in Chapter 5, "Abacus Computability"; it is one of three models extensively treated and compared, the other two being the Turing machine (still in Boolos' original 4-tuple form) and recursion.
* Arthur Burks, Herman Goldstine, John von Neumann (1946), "Preliminary discussion of the logical design of an electronic computing instrument", reprinted pp. 92ff in Gordon Bell and Allen Newell (1971), "Computer Structures: Readings and Examples", McGraw-Hill Book Company, New York. ISBN 0070043574.
* Stephen A. Cook and Robert A. Reckhow (1972), "Time-bounded random access machines", Journal of Computer Systems Science 7 (1973), 354-375.
* Martin Davis (1958), "Computability & Unsolvability", McGraw-Hill Book Company, Inc. New York.
* Calvin Elgot and Abraham Robinson (1964), "Random-Access Stored-Program Machines, an Approach to Programming Languages", Journal of the Association for Computing Machinery, Vol. 11, No. 4 (October, 1964), pp. 365-399.
* J. Hartmanis (1971), "Computational Complexity of Random Access Stored Program Machines," Mathematical Systems Theory 5, 3 (1971) pp. 232-245.
* John Hopcroft, Jeffrey Ullman (1979). "Introduction to Automata Theory, Languages and Computation", 1st ed., Reading Mass: Addison-Wesley. ISBN 0-201-02988-X. A difficult book centered around the issues of machine-interpretation of "languages", NP-Completeness, etc.
* Stephen Kleene (1952), "Introduction to Metamathematics", North-Holland Publishing Company, Amsterdam, Netherlands. ISBN 0-7204-2103-9.
*Donald Knuth (1968), "The Art of Computer Programming", Second Edition 1973, Addison-Wesley, Reading, Massachusetts. Cf pages 462-463 where he defines "a new kind of abstract machine or 'automaton' which deals with linked structures."
*Joachim Lambek (1961, received 15 June 1961), "How to Program an Infinite Abacus", Canadian Mathematical Bulletin, vol. 4, no. 3, September 1961, pages 295-302. In his Appendix II, Lambek proposes a "formal definition of 'program'". He references Melzak (1961) and Kleene (1952), "Introduction to Metamathematics".
*Z. A. Melzak (1961, received 15 May 1961), "An Informal Arithmetical Approach to Computability and Computation", Canadian Mathematical Bulletin, vol. 4, no. 3, September 1961, pages 279-293. Melzak offers no references but acknowledges "the benefit of conversations with Drs. R. Hamming, D. McIlroy and V. Vyssots of the Bell Telephone Laboratories and with Dr. H. Wang of Oxford University."
*Marvin Minsky (1961, received August 15, 1960), "Recursive Unsolvability of Post's Problem of 'Tag' and Other Topics in Theory of Turing Machines", Annals of Mathematics, vol. 74, pages 437-455. doi:10.2307/1970290.

*Marvin Minsky (1967), "Computation: Finite and Infinite Machines", 1st ed., Prentice-Hall, Inc., Englewood Cliffs, N.J. In particular see chapter 11, "Models Similar to Digital Computers", and chapter 14, "Very Simple Bases for Computability". In the former chapter he defines "Program machines", and in the latter chapter he discusses "Universal Program machines with Two Registers" and "...with one register", etc.
*John C. Shepherdson and H. E. Sturgis (received December 1961), "Computability of Recursive Functions", Journal of the Association for Computing Machinery (JACM) 10:217-255, 1963. An extremely valuable reference paper. In their Appendix A the authors cite 4 others with reference to "Minimality of Instructions Used in 4.1: Comparison with Similar Systems":
:* Kaphengst, Heinz, "Eine Abstrakte programmgesteuerte Rechenmaschine", Zeitschrift für mathematische Logik und Grundlagen der Mathematik 5 (1959), 366-379.
:* Ershov, A. P., "On operator algorithms" (Russian), Dok. Akad. Nauk 122 (1958), 967-970. English translation: Automat. Express 1 (1959), 20-23.
:* Péter, Rózsa, "Graphschemata und rekursive Funktionen", Dialectica 12 (1958), 373.
:* Hermes, Hans, "Die Universalität programmgesteuerter Rechenmaschinen", Math.-Phys. Semesterberichte (Göttingen) 4 (1954), 42-53.
* Arnold Schönhage (1980), "Storage Modification Machines", Society for Industrial and Applied Mathematics, SIAM J. Comput. Vol. 9, No. 3, August 1980. Wherein Schönhage shows the equivalence of his SMM with the "successor RAM" (Random Access Machine), etc. Also "Storage Modification Machines", in Theoretical Computer Science (1979), pp. 36-37.
*Peter van Emde Boas, "Machine Models and Simulations", pp. 3-66, appearing in: Jan van Leeuwen, ed., "Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity", The MIT Press/Elsevier, 1990. ISBN 0-444-88071-2 (volume A). QA 76.H279 1990.
: van Emde Boas' treatment of SMMs appears on pp. 32-35. This treatment clarifies Schönhage 1980 -- it closely follows but expands slightly the Schönhage treatment. Both references may be needed for effective understanding.
*Hao Wang (1957), "A Variant to Turing's Theory of Computing Machines", JACM (Journal of the Association for Computing Machinery) 4; 63-92. Presented at the meeting of the Association, June 23-25, 1954.
