
PCMOS


PCMOS (probabilistic complementary metal-oxide semiconductor) is a semiconductor manufacturing technology invented by Professor Krishna Palem of Rice University, who also directs NTU's Institute for Sustainable Nanoelectronics (ISNE). The technology is intended to compete with conventional CMOS; its proponents claim it uses 30 times less electricity while running seven times faster than the fastest current technology.[1][2][3]

Introduction

Embedded computing systems are typically required to achieve a given level of computing performance under simultaneous and severe constraints on characteristics such as power consumption, mobility, and size. Moore’s law, with its associated shrinking of transistor sizes and the resulting gains in mobility and reductions in size and power consumption, has driven the proliferation and ubiquity of embedded systems. Continuing this trend is desirable, as it enables new applications and novel contexts for embedded systems.

The challenges can broadly be classified into two categories:

  1. Changes in the nature of materials and material properties as transistor size decreases
  2. The inability to fabricate identical and reliable nanometer-sized silicon devices with uniform behavioral characteristics

These challenges affect the physical characteristics of transistors, and hence computing platforms, in many ways [3]. For example, devices are no longer expected to behave in a deterministic and reliable manner, and the probabilistic and unreliable behavior of devices is deemed inevitable by the International Technology Roadmap for Semiconductors (ITRS), which forecasts [16]: “Relaxing the requirement of 100% correctness for devices and interconnects may dramatically reduce costs of manufacturing, verification, and test. Such a paradigm shift is likely forced in any case by technology scaling, which leads to more transient and permanent failures of signals, logic values, devices, and interconnects.”

This nonuniform, probabilistic, and unreliable behavior of transistors affects the desirable characteristics of embedded systems. For example, to provide adequate noise immunity, the supply voltage of transistors is not scaled down at a rate commensurate with the reduction in transistor size [13]. The result is an increase in power density as transistors shrink without a corresponding decrease in power consumption. Increasing power density in turn requires bulky cooling components, severely impacting the mobility of embedded computing platforms. A comprehensive survey of such challenges at nanometer scales and beyond may be found in [3, 4, 5, 13].

Several approaches have been adopted to address these challenges to Moore’s law: rigorous test mechanisms; techniques that correct errors incurred by architectural primitives using temporal and spatial redundancy [2, 6, 14]; increased parallelism without an increase in the operating frequency of computing devices; research into novel non-silicon materials for computing, including molecular devices [33], graphene, and optoelectronics; and design automation-based approaches that reduce the impact of undesirable effects such as timing variations and noise susceptibility.

By contrast, the central theme of probabilistic and approximate design is to build computing systems from circuit components that are susceptible to perturbations. "Perturbations" here refers to a broad range of phenomena that cause circuit elements to behave in an "incorrect" or "non-uniform" manner; such behaviors may arise from unreliable devices (caused, for example, by susceptibility to noise) or from unpredictable behavior due to variations in circuit element delay. PCMOS does not attempt to correct circuit element errors, but instead uses them in the context of applications that can benefit from or tolerate those behaviors. PCMOS research on probabilistic and approximate design has three inter-related aspects that draw on the theoretical background of diverse disciplines such as probabilistic algorithms, digital signal processing theory, thermodynamics, computer arithmetic, and mathematical logic.

Applications

In general, embedded applications can be classified into three categories: those that benefit from perturbations, those that tolerate but do not benefit from perturbations, and those that cannot tolerate perturbations. PCMOS strives to implement applications in the first two of these categories.
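As a concrete illustration of an application that benefits from perturbations, consider a randomized algorithm fed directly by noisy hardware bits. The following is a minimal Python sketch, assuming a hypothetical noisy_bit model of a perturbed switch rather than actual PCMOS hardware: a Monte Carlo estimate of π that remains useful even when its randomness comes from a noisy device.

```python
# Minimal sketch: an application that benefits from perturbations.
# noisy_bit is a hypothetical model of a perturbed switch read as a
# random bit; it does not describe actual PCMOS hardware.
import random

def noisy_bit(p_one=0.5):
    """Model of a perturbed switch read as a random bit."""
    return 1 if random.random() < p_one else 0

def uniform01(bits=16):
    """Assemble an approximately uniform value in [0, 1) from noisy bits."""
    return sum(noisy_bit() << i for i in range(bits)) / (1 << bits)

# Monte Carlo estimate of pi: the algorithm consumes the noisy bits
# directly, so device-level randomness is a feature rather than a fault.
trials = 100_000
inside = sum(uniform01() ** 2 + uniform01() ** 2 <= 1.0 for _ in range(trials))
print(f"pi is approximately {4 * inside / trials:.3f}")
```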

Device properties

In the application context outlined above, energy and performance efficiency can be obtained if computing devices provide some mechanism that trades "correctness" for cost. PCMOS work covers semiconductors in general but is focused on CMOS implementations, drawing attention to two phenomena in CMOS: the relationship between the probability of correct switching and energy consumption, and the relationship between switching speed and the supply voltage of CMOS-based logic gates.
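The first of these trade-offs can be sketched with a simplified noise model (an assumption for illustration, not a model taken from the PCMOS literature): a node switched to supply voltage Vdd in the presence of Gaussian noise of RMS amplitude σ is read correctly when the noise does not push it past a Vdd/2 threshold, while the switching energy scales as C·Vdd².

```python
# Minimal sketch of the energy/correctness trade-off in a noisy switch.
# Assumptions (illustrative, not from the article): Gaussian noise with
# RMS amplitude SIGMA volts, a read threshold at Vdd/2, and switching
# energy E = C * Vdd^2 for a fixed load capacitance C.
import math

SIGMA = 0.1   # assumed noise RMS, volts
C = 1e-15     # assumed load capacitance, farads

def p_correct(vdd):
    """Probability that a switch to vdd is read correctly, i.e. that
    Gaussian noise does not cross the vdd/2 threshold."""
    return 1.0 - 0.5 * math.erfc((vdd / 2) / (SIGMA * math.sqrt(2)))

def energy(vdd):
    """Switching energy in joules at supply voltage vdd."""
    return C * vdd ** 2

# Lowering the supply voltage cuts energy quadratically but also lowers
# the probability of correct switching: correctness is traded for cost.
for vdd in (1.0, 0.8, 0.6, 0.4):
    print(f"Vdd={vdd:.1f} V  p={p_correct(vdd):.6f}  E={energy(vdd):.2e} J")
```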

Design practice

Given applications that benefit from or tolerate perturbations, and CMOS devices that exhibit a trade-off between perturbations and cost, PCMOS requires a design methodology for implementing applications on such devices. The PCMOS methodology is rooted in the theory of probabilistic Boolean logic (PBL), probabilistic arithmetic, and approximate arithmetic. PCMOS proponents distinguish between probabilistic design, where the computing substrate's behavior is probabilistic, and approximate design, where the substrate's behavior is deterministic but erroneous.
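To make the probabilistic side of this distinction concrete, the following is a minimal sketch of a gate in probabilistic Boolean logic under the simplest model, assumed here for illustration: each gate produces the correct output with probability p and the complemented output otherwise. The function name p_nand is illustrative, not an identifier from the PCMOS literature.

```python
# Minimal sketch of a probabilistic Boolean logic (PBL) gate: correct
# with probability p, complemented otherwise. Illustrative model only.
import random

def p_nand(a, b, p=0.95):
    """NAND gate that yields the correct output with probability p."""
    correct = not (a and b)
    return correct if random.random() < p else not correct

# Empirically check that the gate is correct about p of the time.
# NAND(True, True) should be False, so count False outputs as correct.
trials = 100_000
hits = sum(not p_nand(True, True) for _ in range(trials))
print(f"empirical correctness: {hits / trials:.3f}")  # close to p = 0.95
```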


References