Cache (computing)

From Wikipedia, the free encyclopedia

In computer science, a cache (IPA: /kæʃ/, like "cash"[1]) is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive to fetch (owing to slow access time) or to compute, relative to the cost of reading the cache. In other words, a cache is a temporary storage area where frequently accessed data can be kept for rapid access. Once the data is stored in the cache, future requests can be served from the cached copy rather than by re-fetching or recomputing the original data, so that the average access time is lower.

Caches have proven to be extremely effective in many areas of computing because access patterns in typical computer applications have locality of reference. There are several kinds of locality, but this article primarily deals with data that are accessed close together in time (temporal locality). The data might or might not be located physically close to each other (spatial locality).

History

Use of the word “cache” in the computer context originated in 1967 during preparation of an article for publication in the IBM Systems Journal. The paper concerned a memory improvement in the Model 85, a latecomer in the IBM System/360 product line. The Journal editor, Lyle R. Johnson, pleaded for a more descriptive term than “high-speed buffer”; when none was forthcoming, he suggested “cache.” The paper was published in early 1968, the authors were honored by IBM, their work was widely welcomed and subsequently improved upon, and “cache” soon became standard usage in computer literature.[2]

Operation

[Diagram of a CPU memory cache]

A cache is a block of memory for temporary storage of data likely to be used again. The CPU and hard drive frequently use a cache, as do web browsers and web servers.

A cache is made up of a pool of entries. Each entry holds a datum (a piece of data) that is a copy of a datum in some backing store. Each entry also has a tag, which identifies the datum in the backing store of which the entry is a copy.

When the cache client (a CPU, web browser, or operating system) wishes to access a datum presumed to be in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired datum, the datum in the entry is used instead. This situation is known as a cache hit. For example, a web browser might check its local cache on disk to see if it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag, and the contents of the web page are the datum. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache.

The alternative situation, when the cache is consulted and found not to contain a datum with the desired tag, is known as a cache miss. The datum fetched from the backing store during miss handling is usually inserted into the cache, ready for the next access.
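This lookup sequence can be made concrete with a short sketch. The following Python fragment is illustrative only: the names backing_store, fetch_from_backing_store, and lookup are hypothetical, with a plain dictionary standing in for a slow backing store.

    # Minimal sketch of cache lookup, assuming a dict-backed cache and a
    # hypothetical slow backing store.
    backing_store = {"/index.html": "<html>...</html>"}  # stand-in for slow storage

    def fetch_from_backing_store(tag):
        # In practice this would be a disk read or a network request.
        return backing_store[tag]

    cache = {}          # entries: tag -> datum
    hits = misses = 0   # counters for computing the hit ratio

    def lookup(tag):
        global hits, misses
        if tag in cache:            # cache hit: serve the cached copy
            hits += 1
            return cache[tag]
        misses += 1                 # cache miss: fetch, then insert for next time
        datum = fetch_from_backing_store(tag)
        cache[tag] = datum
        return datum

    lookup("/index.html")   # miss: fetched from the backing store
    lookup("/index.html")   # hit: served from the cache
    print("hit ratio:", hits / (hits + misses))   # 0.5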

Because cache storage is limited, the cache may have to evict some other entry in order to make room for a new one. The heuristic used to select the entry to evict is known as the replacement policy. One popular replacement policy, least recently used (LRU), replaces the entry that was used least recently (see cache algorithms). More sophisticated replacement policies also weigh the frequency of use of each entry against its size, as well as the latencies and throughputs of both the cache and the backing store. While this works well for large amounts of data, long latencies, and slow throughput, as with hard drives and the Internet, it is not efficient for caching main memory (RAM).[citation needed]
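As a rough illustration of LRU eviction, the following sketch keeps entries in access order and discards the oldest when a (hypothetical) capacity limit is exceeded:

    from collections import OrderedDict

    class LRUCache:
        """Fixed-capacity cache that evicts the least recently used entry."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()   # least recently used entry first

        def get(self, tag):
            if tag not in self.entries:
                return None                          # miss
            self.entries.move_to_end(tag)            # mark as most recently used
            return self.entries[tag]

        def put(self, tag, datum):
            if tag in self.entries:
                self.entries.move_to_end(tag)
            self.entries[tag] = datum
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)     # evict least recently used

    cache = LRUCache(2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.get("a")        # "a" is now most recently used
    cache.put("c", 3)     # evicts "b", the least recently used entry
    print(cache.get("b")) # None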

When a datum is written to the cache, it must at some point be written to the backing store as well. The timing of this write is controlled by what is known as the write policy. In a write-through cache, every write to the cache causes a write to the backing store. Alternatively, in a write-back cache, writes are not immediately mirrored to the store. Instead, the cache tracks which of its locations have been written over (these locations are marked dirty). The data in these locations is written back to the backing store when those data are evicted from the cache. For this reason, a miss in a write-back cache will often require two memory accesses to service: one to retrieve the needed datum, and one to write replaced data from the cache to the store.
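The two write policies can be contrasted with a minimal sketch, again using a dictionary as a stand-in backing store; the class and method names are illustrative, not taken from any real system:

    backing_store = {}   # stand-in for slower storage

    class WriteThroughCache:
        def __init__(self):
            self.entries = {}

        def write(self, tag, datum):
            self.entries[tag] = datum
            backing_store[tag] = datum     # every write goes straight through

    class WriteBackCache:
        def __init__(self):
            self.entries = {}
            self.dirty = set()             # tags written but not yet flushed

        def write(self, tag, datum):
            self.entries[tag] = datum
            self.dirty.add(tag)            # defer the backing-store write

        def evict(self, tag):
            if tag in self.dirty:          # write back dirty data on eviction
                backing_store[tag] = self.entries[tag]
                self.dirty.discard(tag)
            del self.entries[tag]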

Data write-back may be triggered by other policies as well. The client may make many changes to a datum in the cache, and then explicitly notify the cache to write back the datum.

No-write allocation is a cache policy where only processor reads are cached, thus avoiding the need for write-back or write-through when the old value of the datum was absent from the cache prior to the write.
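A no-write-allocate policy might look roughly like the following sketch, in which a write miss bypasses the (hypothetical) cache entirely:

    backing_store = {}
    cache = {}   # populated only by reads, elsewhere in the program

    def write_no_allocate(tag, datum):
        backing_store[tag] = datum    # the write goes straight to the backing store
        if tag in cache:              # refresh an existing entry only;
            cache[tag] = datum        # a write miss does not allocate one

    write_no_allocate("x", 1)
    print("x" in cache)   # False: the write did not populate the cache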

The data in the backing store may be changed by entities other than the cache, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of that data in other caches will become stale. Communication protocols between the cache managers which keep the data consistent are known as coherency protocols.
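A much-simplified sketch of the idea behind an invalidation-based coherency protocol follows; real protocols, such as those used between CPU caches, are considerably more involved, and the names here are invented for illustration:

    class CoherentCache:
        def __init__(self, peers):
            self.entries = {}
            self.peers = peers          # other caches sharing the backing store
            peers.append(self)

        def write(self, tag, datum):
            self.entries[tag] = datum
            for peer in self.peers:     # broadcast an invalidation message
                if peer is not self:
                    peer.entries.pop(tag, None)   # drop the now-stale copy

    peers = []
    a, b = CoherentCache(peers), CoherentCache(peers)
    b.entries["x"] = 1       # b holds a copy of "x"
    a.write("x", 2)          # a's write invalidates b's copy
    print("x" in b.entries)  # False: b must re-fetch the fresh value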

Applications

CPU caches

Small memories on or close to the CPU chip can be made faster than the much larger main memory. Most CPUs since the 1980s have used one or more caches, and modern general-purpose CPUs inside personal computers may have as many as half a dozen, each specialized to a different part of the problem of executing programs.

Disk cache

CPU caches are generally managed entirely by hardware, apart from specialized architectures such as NUMA. Other caches are managed by a variety of software. The cache of disk sectors in main memory is usually managed by the operating system kernel or file system.

In turn, a fast local hard disk can be used to cache information held on even slower data storage devices, such as tape drives or optical jukeboxes. Such a scheme is the main concept of hierarchical storage management.

Other caches

The BIND DNS daemon caches a mapping of domain names to IP addresses, as does a resolver library.

Write-through operation is common when operating over unreliable networks (like an Ethernet LAN), because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. For instance, web page caches and client-side network file system caches (like those in NFS or SMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable.

A cache of recently visited web pages can be managed by a web browser. Some browsers are configured to use an external proxy web cache, a server program through which all web requests are routed so that it can cache frequently accessed pages for everyone in an organization. Many Internet service providers use proxy caches to save bandwidth on frequently accessed web pages.

Search engines also frequently make web pages they have indexed available from their cache. For example, Google provides a "Cached" link next to each search result. This is useful when web pages are temporarily inaccessible on the originating web server.

Another type of caching is storing computed results that are likely to be needed again, known as memoization. An example of this type of caching is ccache, a program that caches compiler output to speed up later compilations of the same code.
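Memoization can be sketched in a few lines; the memoize decorator below is a hypothetical, minimal version of the idea, not how ccache itself works:

    def memoize(func):
        results = {}                          # argument tuple -> computed result
        def wrapper(*args):
            if args not in results:
                results[args] = func(*args)   # compute once, then reuse
            return results[args]
        return wrapper

    @memoize
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(30))   # fast: intermediate results are served from the cache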

The difference between buffer and cache

The terms are not mutually exclusive and the functions are frequently combined; however, there is a difference in intent. A buffer is a temporary storage location where a large block of data is assembled or disassembled[citation needed]. This may be necessary for interacting with a storage device that requires large blocks of data, or when data must be delivered in a different order than that in which it is produced. The benefit is present even if the buffered data are written to the buffer once and read from the buffer once.

A cache is a kind of buffer. However, it operates on the premise that the same datum will be read from it multiple times, that written data will soon be read, or that there is a good chance of multiple reads or writes combining to form a single larger block. Its sole purpose is to reduce accesses to the underlying slower storage. A cache is also usually an abstraction layer designed to be invisible.

References

  1. ^ or, formerly, /kɑːʃ/. Oxford English Dictionary, "cache" (restricted access); also Dictionary.com (unrestricted). Although the pronunciation /ˈkæʃeɪ/ is sometimes heard in English, it properly represents only the French adjective caché(e), meaning "hidden"; /keɪʃ/ is a mispronunciation.
  2. ^ G. C. Stierhoff and A. G. Davis. "A History of the IBM Systems Journal". IEEE Annals of the History of Computing, Vol. 20, No. 1 (Jan. 1998), pp. 29–35.
