
Memory latency

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Akeosnhaoe (talk | contribs) at 11:31, 1 April 2021.
1-megabit DRAMs with 70 ns latency on a 30-pin SIMM module. Modern DDR4 DIMMs have latencies under 15 ns.[1]

In computing, memory latency is the time (the latency) between initiating a request for a byte or word in memory and its retrieval by the processor. If the data are not in the processor's cache, obtaining them takes longer, as the processor must communicate with the external memory cells. Latency is therefore a fundamental measure of the speed of memory: the lower the latency, the faster the read operation.

Latency should not be confused with memory bandwidth, which measures the throughput of memory. Latency can be expressed in clock cycles or in time measured in nanoseconds. Memory latencies expressed in clock cycles have been fairly stable over time, but latencies measured in nanoseconds have improved as clock rates have risen.[1]

Memory latency is also the time between initiating a request for data and the beginning of the actual data transfer. On a hard disk drive, latency is the time it takes for the selected sector to come around and be positioned under the read/write head.


References

  1. ^ a b Crucial Technology, "Speed vs. Latency: Why CAS latency isn't an accurate measure of memory performance".