Commodity computing
Commodity computing is, quite simply, computing done on commodity computers as opposed to supermicrocomputers or boutique computers.
The idea behind commodity computing began when Digital Equipment Corporation introduced the PDP-8 in 1965. This was a computer that was inexpensive enough that a department could purchase one without convening a meeting of the board of directors. An entire minicomputer industry sprang up around products like it; unfortunately, each of the many different brands of minicomputers had to stand on its own, because there was no software compatibility and little hardware compatibility between them.
When the first general-purpose microprocessor was introduced in 1974, it immediately began chipping away at the low end of the computer market, replacing embedded minicomputers in many industrial devices.
This process accelerated in 1977 with the introduction of the first commodity-like computer, the Apple II. With the development of the VisiCalc application in 1979, microcomputers broke out of the factory and began entering offices in large quantities, but still through the back door.
The IBM PC was introduced in 1981 and immediately began displacing Apple IIs in the corporate world, but commodity computing as we know it today did not begin until Compaq developed the first IBM PC compatible. More and more PC-compatible microcomputers began coming into big companies through the front door, and commodity computing was well established.
During the 1980s, microcomputers began displacing "real" computers in a serious way. At first, price/performance issues were the key justification. By the mid-1980s, semiconductor technology had evolved to the point where microprocessor performance began to eclipse the performance of discrete logic designs. These traditional designs were limited by speed-of-light delay issues inherent in any CPU larger than a single chip, and performance alone began driving the success of microprocessor-based systems.
The old systems began to fall, first minis, then superminis, and finally mainframes. By the mid-1990s, every computer made was a microcomputer, and most microcomputers were IBM PC compatibles. Although there was a time when every traditional computer manufacturer had its own proprietary micro-based designs, only a few manufacturers of non-commodity computer systems remain today, but supermicrocomputers (like those of the IBM p, i, and z series) still own the high end of the market.
As the power of microprocessors continues to increase, there are fewer and fewer business computing needs that cannot be met with off-the-shelf commodity computers. It is likely that the low end of the supermicrocomputer genre will continue to be pushed upward by increasingly powerful commodity microcomputers. Fewer non-commodity systems will be sold each year, leaving fewer and fewer dollars available for non-commodity R&D and continually narrowing the performance gap between commodity microcomputers and proprietary supermicros.
As the speed of Ethernet increases to 10 gigabits per second, the differences between multiprocessor systems based on loosely coupled commodity microcomputers and those based on tightly coupled proprietary supermicro designs (like the IBM p-series) will continue to narrow and will eventually disappear.
When 10-gigabit Ethernet becomes standard equipment in commodity microcomputer servers, multiprocessor cluster or grid systems built from off-the-shelf commodity microcomputers and Ethernet switches will take over more and more computing tasks that can currently be performed only by high-end models of proprietary supermicros like the IBM p-series, further eroding the viability of the supermicro industry.
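The difference between the two approaches is essentially where communication happens: a tightly coupled supermicro moves data between processors over a shared memory bus or proprietary backplane, while a loosely coupled cluster moves it over the network. The following sketch, in Python, is only a minimal illustration of the loosely coupled model under stated assumptions: a master splits a summation across worker processes on other commodity machines and collects partial results over TCP (i.e., over ordinary Ethernet) using the standard multiprocessing.connection module. The hostnames, port number, and authentication key are invented for the example.

    # Minimal sketch of loosely coupled cluster computing over commodity
    # Ethernet: a master farms out chunks of a summation to worker nodes
    # over TCP. Hostnames, port, and authkey below are illustrative
    # assumptions, not part of any standard.
    import sys
    from multiprocessing.connection import Client, Listener

    PORT = 6000                      # arbitrary example port
    AUTHKEY = b"commodity-cluster"   # shared secret for the example

    def worker(bind_host="0.0.0.0"):
        """Run on each commodity node: receive a chunk, send back its partial sum."""
        with Listener((bind_host, PORT), authkey=AUTHKEY) as listener:
            while True:
                with listener.accept() as conn:
                    chunk = conn.recv()    # list of numbers from the master
                    conn.send(sum(chunk))  # partial result travels back over Ethernet

    def master(worker_hosts, data):
        """Split the data round-robin across workers and combine the partial sums."""
        n = len(worker_hosts)
        chunks = [data[i::n] for i in range(n)]
        total = 0
        # For brevity the workers are contacted one after another; a real
        # scheduler would overlap these requests to get actual parallelism.
        for host, chunk in zip(worker_hosts, chunks):
            with Client((host, PORT), authkey=AUTHKEY) as conn:
                conn.send(chunk)
                total += conn.recv()
        return total

    if __name__ == "__main__":
        if sys.argv[1:] == ["worker"]:
            worker()
        else:
            # Two hypothetical commodity nodes reachable over the local network.
            hosts = ["node1.example.com", "node2.example.com"]
            print(master(hosts, list(range(1_000_000))))

Each node would run the script with the worker argument, and the master would run on any machine that can reach them over the network; the same message-passing pattern, with far more sophisticated scheduling, underlies real cluster and grid frameworks whose interconnect is ordinary Ethernet rather than a proprietary backplane.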