Commodity computing

Commodity computing (or commodity cluster computing) is the use of large numbers of readily available computing components for parallel computing, in order to obtain the greatest amount of useful computation at the lowest cost.[1] It is computing done on commodity computers, as opposed to high-cost supermicrocomputers or boutique computers. Commodity computers are computer systems manufactured by multiple vendors and incorporating components based on open standards. Such systems are said to be based on commodity components, since the standardization process promotes lower costs and less differentiation among vendors' products. A governing principle of commodity computing is that it is preferable to have more low-performance, low-cost hardware working in parallel (scalar computing, e.g. AMD x86 CISC[2]) than fewer pieces of high-performance, high-cost hardware[3] (e.g. IBM POWER7[4] RISC). At some point a cluster contains so many discrete systems that component failures become routine, no matter how long the mean time between failures (MTBF) of any individual hardware platform, so fault tolerance must be built into the controlling software.[5][6] Purchases should therefore be optimized for cost per unit of performance, not merely for absolute performance per CPU at any cost.
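
These two claims lend themselves to a back-of-the-envelope calculation. The Python sketch below uses entirely hypothetical node counts, MTBF figures, prices, and performance numbers (none are drawn from the sources cited above) to illustrate why failures become routine in a large cluster and why purchases are better ranked by cost per unit of performance than by raw per-node speed.

```python
# A minimal back-of-the-envelope sketch (hypothetical numbers, not vendor data)
# illustrating two points from the paragraph above:
#   1. aggregate failures become routine as a cluster grows, so fault tolerance
#      must live in the controlling software;
#   2. purchases should be ranked by cost per unit of performance, not by raw
#      per-node speed.

def expected_failures_per_day(node_count: int, node_mtbf_hours: float) -> float:
    """Expected hardware failures per day across the whole cluster,
    assuming independent node failures at a constant rate of 1/MTBF."""
    return node_count * 24.0 / node_mtbf_hours

def cost_per_performance(unit_cost: float, unit_performance: float) -> float:
    """Currency units spent per unit of performance; lower is better."""
    return unit_cost / unit_performance

if __name__ == "__main__":
    # Even with a generous 100,000-hour (~11-year) MTBF per node,
    # a 10,000-node cluster sees a few failures every day.
    print(expected_failures_per_day(10_000, 100_000))   # ~2.4 failures/day

    # Hypothetical comparison: many cheap nodes vs. a few expensive ones.
    commodity = cost_per_performance(unit_cost=2_000, unit_performance=50)    # 40 per unit
    high_end = cost_per_performance(unit_cost=50_000, unit_performance=400)   # 125 per unit
    print(commodity, high_end)  # the commodity node wins on cost-efficiency
```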

History

The mid-1960s to early 1980s

The first computers were large, expensive and proprietary. The move toward commodity computing began when DEC introduced the PDP-8 in 1965, a computer small and inexpensive enough that a department could purchase one without convening a meeting of the board of directors. An entire minicomputer industry sprang up to supply the demand for 'small' computers like the PDP-8. Unfortunately, each of the many different brands of minicomputers had to stand on its own, because there was no software and very little hardware compatibility between the brands.

When the first general-purpose microprocessor was introduced in 1974, it immediately began chipping away at the low end of the computer market, replacing embedded minicomputers in many industrial devices.

This process accelerated in 1977 with the introduction of the first commodity-like microcomputer, the Apple II. With the development of the VisiCalc application in 1979, microcomputers broke out of the factory and began entering office suites in large quantities, but still through the back door.

The 1980s to mid-1990s

The IBM PC was introduced in 1981 and immediately began displacing Apple IIs in the corporate world, but commodity computing as we know it today truly began when Compaq developed the first true IBM PC compatible. More and more PC-compatible microcomputers began coming into large companies through the front door, and commodity computing was well established.

During the 1980s microcomputers began displacing larger computers in a serious way. At first, price was the key justification but by the late 1980s and early 1990s, VLSI semiconductor technology had evolved to the point where microprocessor performance began to eclipse the performance of discrete logic designs. These traditional designs were limited by speed-of-light delay issues inherent in any CPU larger than a single chip, and performance alone began driving the success of microprocessor-based systems.

By the mid-1990s, virtually every computer made was based on a microprocessor, and the majority of general-purpose microprocessors were implementations of the x86 instruction set architecture. Although there was a time when every traditional computer manufacturer had its own proprietary microprocessor-based designs, there are only a few manufacturers of non-commodity computer systems today.

Today, there are fewer and fewer general business computing requirements that cannot be met with off-the-shelf commodity computers. It is likely that the low end of the supermicrocomputer genre will continue to be pushed upward by increasingly powerful commodity microcomputers.

Characteristics of commodity computers

A large part of the current commodity computing marketplace is based on IBM PC compatibles. This typically means systems that are capable of running Microsoft Windows, Linux, or PC-DOS/MS-DOS, without requiring special drivers.

Some of the general characteristics of a commodity computer are:

  • Shares a base instruction set common to many different models.
  • Shares an architecture (memory, I/O map and expansion capability) that is common to many different models.
  • High degree of mechanical compatibility; internal components (CPU, RAM, motherboard, peripheral cards, drives) are interchangeable with other models.
  • Software is widely available off-the-shelf.
  • Compatible with most available peripherals; works with many of them right out of the box.

Other characteristics of today's commodity computers include:

  • ATX motherboard form factor.
  • Built-in interfaces for floppy drives, IDE CD-ROMs and hard drives.
  • Industry-standard PCI slots for expansion.

Some characteristics are becoming common to many commodity computers and may become part of the commodity computer definition:

  • Built-in Ethernet interface.
  • Built-in USB ports.
  • Built-in video.
  • Built-in interfaces for SATA drives.

Standards such as SCSI, FireWire, and Fibre Channel help commoditize computer systems more powerful than typical PCs. Standards such as ATCA and Carrier Grade Linux are helping to commoditize telecommunications systems. Blade servers, server farms, and computer clusters are also computer architectures that exploit commodity hardware.

References

  1. ^ John E. Dorband; Josephine Palencia Raytheon; Udaya Ranawake. "Commodity Computing Clusters at Goddard Space Flight Center". Goddard Space Flight Center. http://spacejournal.ohio.edu/pdf/Dorband.pdf. Retrieved 2010-03-07. "The purpose of commodity cluster computing is to utilize large numbers of readily available computing components for parallel computing to obtaining the greatest amount of useful computations for the least cost. The issue of the cost of a computational resource is key to computational science and data processing at GSFC as it is at most other places, the difference being that the need at GSFC far exceeds any expectation of meeting that need."
  2. ^ http://www.computerworld.com/s/article/9154518/IBM_HP_servers_won_t_stop_x86_onslaught_on_Unix
  3. ^ http://research.google.com/pubs/DistributedSystemsandParallelComputing.html
  4. ^ ftp://ftp.software.ibm.com/common/ssi/pm/rg/n/poo03017usen/POO03017USEN.PDF
  5. ^ http://www.morganclaypool.com/doi/abs/10.2200/S00193ED1V01Y200905CAC006
  6. ^ http://insidehpc.com/2008/06/02/google-fellow-sheds-some-light-on-infrastructure-robustness-in-face-of-failure
