36-bit word length

Many early computers aimed at the scientific market had a 36-bit word length. This word length was just long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character encoding. Prior to the introduction of computers, the state of the art in precision scientific and engineering calculation was the ten-digit, electrically powered mechanical calculator, such as those manufactured by Friden, Marchant and Monroe. These calculators had a column of keys for each digit, and operators were trained to use all their fingers when entering numbers; so while some specialized calculators had more columns, ten was a practical limit. Computers, as the new competitor, had to match that accuracy. Decimal computers sold in that era, such as the IBM 650 and the IBM 7070, had a word length of ten digits, as did ENIAC, one of the earliest computers.
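The arithmetic behind the 35-bit minimum: the largest ten-digit number, 9,999,999,999, needs 34 magnitude bits (2³³ − 1 is too small, 2³⁴ − 1 is enough), plus one bit for the sign. A minimal check in modern C (an illustration, not period code):

```c
#include <stdio.h>
#include <stdint.h>

/* Ten decimal digits need 34 magnitude bits plus a sign bit: 35 in total. */
int main(void) {
    uint64_t ten_digits = 9999999999ULL;        /* largest 10-digit value */
    unsigned long long m33 = (1ULL << 33) - 1;  /* 8,589,934,591  */
    unsigned long long m34 = (1ULL << 34) - 1;  /* 17,179,869,183 */
    printf("2^33 - 1 = %llu -> %s\n", m33,
           m33 >= ten_digits ? "enough" : "too small");
    printf("2^34 - 1 = %llu -> %s\n", m34,
           m34 >= ten_digits ? "enough" : "too small");
    return 0;
}
```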

Computers with 36-bit words included the MIT Lincoln Laboratory TX-2, the IBM 701/704/709/7090/7040, the UNIVAC 1103/1103A/1105/1100/2200, the General Electric 600/Honeywell 6000, the Digital Equipment Corporation PDP-6/10 (as used in the DECsystem-10/DECSYSTEM-20), and the Symbolics 3600 series. Smaller machines like the PDP-1/9/15 used 18-bit words, so a double word was 36 bits. EDSAC used a similar scheme, combining two of its 17-bit short words into a 35-bit long word.

These computers used 18-bit word addressing, not byte addressing, giving an address space of 2¹⁸ 36-bit words, approximately 1 megabyte of storage. Many of them were originally limited to a similar amount of physical memory as well. Architectures that survived evolved over time to support larger virtual address spaces using memory segmentation or other mechanisms.
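That figure is straightforward to verify: 2¹⁸ words × 36 bits = 9,437,184 bits, or 1,179,648 8-bit bytes. A back-of-envelope sketch in C:

```c
#include <stdio.h>

/* Back-of-envelope: an 18-bit address reaches 2^18 words of 36 bits each. */
int main(void) {
    unsigned long words = 1UL << 18;     /* 262,144 addressable words */
    unsigned long bits  = words * 36UL;  /* 9,437,184 bits of storage */
    printf("%lu words = %lu 8-bit bytes\n", words, bits / 8); /* 1,179,648 */
    return 0;
}
```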

The common character packings included the following (the 9-bit packing is sketched in code after the list):
* six 6-bit Fieldata or IBM BCD characters (ubiquitous in early usage)
* five 7-bit characters and 1 unused bit (the usual PDP-6/10 convention)
* four 8-bit characters (7-bit ASCII plus a spare bit, or 8-bit EBCDIC), plus 4 unused bits
* four 9-bit characters (the Multics convention).
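As an illustration of the Multics-style packing, the sketch below stores four 9-bit characters in the low 36 bits of a uint64_t; the pack9/unpack9 helper names are hypothetical, not taken from any historical system.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch: four 9-bit characters packed Multics-style into
   the low 36 bits of a uint64_t "word". Helper names are hypothetical. */
static uint64_t pack9(const unsigned char c[4]) {
    uint64_t w = 0;
    for (int i = 0; i < 4; i++)
        w = (w << 9) | (c[i] & 0x1FFu);   /* 9 bits per character */
    return w;                             /* bits 35..0 now in use */
}

static unsigned unpack9(uint64_t w, int i) {  /* i = 0 is the leftmost char */
    return (unsigned)((w >> (9 * (3 - i))) & 0x1FFu);
}

int main(void) {
    const unsigned char text[4] = { 'W', 'o', 'r', 'd' };
    uint64_t w = pack9(text);
    printf("packed word = %012llo (octal)\n", (unsigned long long)w);
    for (int i = 0; i < 4; i++)
        putchar((int)unpack9(w, i));      /* prints "Word" */
    putchar('\n');
    return 0;
}
```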

Characters were extracted from words either using standard shift and mask operations or with special-purpose hardware supporting 6-bit, 9-bit, or variable-length characters. The UNIVAC 1100/2200 used the "partial word designator" of the instruction or a "J" register to access characters. The GE-600 used special indirect words to access 6- and 9-bit characters; the PDP-6/10 had special instructions to access arbitrary-length byte fields. The C programming language requires that all memory be accessible as bytes, so C implementations on 36-bit machines use 9-bit bytes.
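The shift-and-mask approach is simple to sketch. Below, load_byte is a hypothetical stand-in for PDP-10-style variable-width byte access (bit positions counted from the left end of the word), not the actual instruction semantics; the last line also shows the CHAR_BIT check relevant to C on such machines:

```c
#include <stdio.h>
#include <stdint.h>
#include <limits.h>

/* Hypothetical shift-and-mask stand-in for PDP-10-style byte access:
   extract a 'size'-bit field starting 'pos' bits from the left of a
   36-bit word held in the low bits of a uint64_t. */
static uint64_t load_byte(uint64_t word, int pos, int size) {
    return (word >> (36 - pos - size)) & ((1ULL << size) - 1);
}

int main(void) {
    uint64_t w = 0123456701234ULL;  /* 12 octal digits = one 36-bit word */
    /* Six 6-bit characters sit at bit positions 0, 6, 12, ... */
    for (int pos = 0; pos < 36; pos += 6)
        printf("%02llo ", (unsigned long long)load_byte(w, pos, 6));
    printf("\nCHAR_BIT here = %d (9 on a 36-bit C implementation)\n",
           CHAR_BIT);
    return 0;
}
```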

By the time IBM introduced System/360, scientific calculations had shifted to floating point and mechanical calculators were no longer a competitor. The 360s also included instructions for variable length decimal arithmetic for commercial applications, so the practice of using word lengths that were a power of two quickly became universal.

See also

* UTF-9 and UTF-18

External links

* [http://www.36bit.org/ 36bit.org]

