Locality of reference
In computer science, a reference is an object containing information which refers to data stored elsewhere, as opposed to containing the data itself. Accessing the value referred to by a reference is called dereferencing it.
Locality of reference, also known as the locality principle, is the phenomenon that the collection of data locations referenced by a running computer over a short period of time consists mainly of relatively few clusters. Beyond this static picture, the collection of clusters itself evolves as time progresses on a scale larger than that of the short periods. The dynamic picture of the phenomenon is that, given the current collection of clusters, the prospective clusters can be partially predicted.
Systems that exhibit strong locality of reference are good candidates for performance optimization through techniques such as caching and prefetching in the memory subsystem, or advanced branch prediction in the pipeline of a processor.
Locality of reference
Locality comes in at least two flavors: temporal and spatial.
- Temporal locality: if a particular memory location is referenced at one point in time, it is likely that the same location will be referenced again in the near future. There is temporal proximity between adjacent references to the same memory location. In this case it is common to store a copy of the referenced data in special memory storage that can be accessed faster.
- Spatial locality: if a particular memory location is referenced at a particular time, it is likely that nearby memory locations will be referenced in the near future. There is spatial proximity between memory locations referenced at almost the same time. In this case it is common to estimate how large a neighbourhood around the current reference is worth preparing for faster access. (A small sketch illustrating both flavors follows this list.)
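As a minimal sketch of both flavors (assuming a plain C environment; the function and array names are illustrative only), consider summing an array: the accumulator is referenced on every iteration (temporal locality), while the array elements are touched at consecutive addresses (spatial locality).

    #include <stddef.h>

    /* Sum an array: a minimal illustration of locality, not a tuned kernel. */
    double sum_array(const double *a, size_t n)
    {
        double sum = 0.0;                  /* 'sum' is reused every iteration: temporal locality */
        for (size_t i = 0; i < n; i++)     /* 'i' likewise exhibits temporal locality */
            sum += a[i];                   /* a[0], a[1], ... at consecutive addresses: spatial locality */
        return sum;
    }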
In order to benefit from the various kinds of locality, most information storage systems are hierarchical; see below.
Globality of reference
If the phenomenon of locality is weak or missing, we can speak of globality of reference. In this case the pattern of storage accesses is fairly random, so the faster levels of the storage hierarchy have no capacity to accommodate the data for faster handling. This randomness therefore usually yields a performance penalty in terms of speed. There is little to be done about it beyond knowing about it and being prepared for it.
Reasons for locality
There are several reasons for locality. These reasons are either goals to achieve or circumstances to accept, depending on the aspect. The reasons below are not disjoint; in fact, the list goes from the most general case to special cases.
- Predictability: In fact, locality is merely one type of predictable behavior in computer systems. Fortunately, many practical problems are decidable, and hence the corresponding program can behave predictably if it is well written.
- Structure of the program: Locality often occurs because of the way computer programs are written to handle decidable problems. Generally, related data is stored in nearby locations in storage. One common pattern in computing involves processing several items one at a time. If substantial processing is done, a single item will be accessed more than once, leading to temporal locality of reference. Moreover, moving to the next item implies that the following item will be read, hence spatial locality of reference, since memory locations are typically read in batches.
- Linear data structures: Locality often occurs because code contains loops that tend to reference arrays or other data structures by indices. Sequential locality, a special case of spatial locality, occurs when relevant data elements are arranged and accessed linearly. For example, the simple traversal of the elements of a one-dimensional array, from the base address to the highest element, exploits the sequential locality of the array in memory;[1] a small sketch follows this list.
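As a sketch of the difference (the array a and the constants are hypothetical), the first loop below walks a one-dimensional array in address order, exploiting sequential locality; the second visits the same elements with a large stride, doing the same work but with far weaker spatial locality.

    #define N 1024
    #define STRIDE 64

    double a[N];

    /* Sequential traversal: addresses a[0], a[1], ... are visited in order. */
    double sequential_sum(void)
    {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            s += a[i];
        return s;
    }

    /* Strided traversal: the same N elements, but visited STRIDE apart,
       so consecutive references are far from each other in memory. */
    double strided_sum(void)
    {
        double s = 0.0;
        for (int start = 0; start < STRIDE; start++)
            for (int i = start; i < N; i += STRIDE)
                s += a[i];
        return s;
    }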
Reasons for globality
What is called predictability in practical life corresponds approximately to decidability in mathematics, universal algebra, and mathematical logic. There is a vast body of negative results in universal algebra showing that most of the interesting mathematical questions are undecidable; hence the corresponding search program must behave unpredictably, and therefore any kind of locality is missing.
Finding new proofs, new ideas, and new concepts is more or less equivalent to doing something similar to the famous Knuth-Bendix completion algorithm, introduced at a conference in 1969 and published as a research paper in 1970. Contrary to its name, it is definitely not an algorithm: owing to Gödel's famous incompleteness theorems, (1) the time of termination cannot be predicted, and (2) in most cases (nearly 100% of them) the program will never terminate. As a consequence, the handling of storage resources, including the path through the instruction space, cannot be predicted. Hence, in this kind of situation, the locality phenomenon is almost completely missing.
Hence, if you do serious mathematics in artificial intelligence, it is better not to be satisfied with processors having big caches, because the slower levels of the memory hierarchy really do matter, and it is therefore worthwhile to have genuinely fast memory modules installed on the motherboard.
All of the above is not far from the reality of designing and manufacturing computers, where the mathematical verification of the plans of digital electronics and integrated circuits can run into decidability problems. This is the reason why the industrial Larch prover, developed jointly by Digital Equipment Corporation and the Massachusetts Institute of Technology, is limited to those kinds of reasoning methods that terminate in log-linear time.
Use of locality
If, most of the time, a substantial portion of the references aggregate into clusters, and if the shape of this system of clusters can be predicted well, then this can be used for speed optimization.
Increasing and exploiting locality of reference are common optimization techniques, and this can happen at several levels of the memory hierarchy. Paging obviously benefits from spatial locality. A cache is a simple example of exploiting temporal locality: it is a specially designed faster but smaller memory area, generally used to keep recently referenced data and data near recently referenced data, which can lead to performance increases. Data in cache does not necessarily correspond to data that is spatially close in main memory; however, data elements are brought into cache one cache line at a time. This means that spatial locality is again important: if one element is referenced, a few neighboring elements will also be brought into cache. Finally, temporal locality plays a role at the lowest level, since results that are referenced very close together in time can be kept in the machine registers. Programming languages such as C allow the programmer to suggest that certain variables be kept in registers.
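For example (a minimal sketch; the register keyword is only a hint, and modern compilers generally make their own allocation decisions regardless), a heavily reused accumulator can be suggested for register storage in C:

    /* Dot product: 'acc' is referenced on every iteration, so the
       'register' storage class suggests keeping it in a CPU register. */
    double dot(const double *x, const double *y, int n)
    {
        register double acc = 0.0;   /* hint only; the compiler may ignore it */
        for (register int i = 0; i < n; i++)
            acc += x[i] * y[i];
        return acc;
    }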
Benefiting from memory locality: hierarchical memory
Data locality is a typical memory-reference feature of regular programs (though many irregular memory access patterns exist), and it is what makes a hierarchical memory layout profitable. In computers, memory is divided into a hierarchy in order to speed up data accesses. The lower levels of the memory hierarchy tend to be slower but larger. Thus, a program achieves greater performance if it uses memory while it is cached in the upper levels of the hierarchy, and avoids bringing other data into the upper levels that would displace data to be used again shortly. This is an ideal, and it sometimes cannot be achieved.
Typical memory hierarchy (access times and cache sizes are approximations of typical values used as of 2006 for the purpose of discussion; actual values and actual numbers of levels in the hierarchy vary):
- CPU registers (8-32 registers) – immediate access (0-1 clock cycles)
- L1 CPU caches (32 KiB to 128 KiB) – fast access (3 clock cycles)
- L2 CPU caches (128 KiB to 12 MiB) – slightly slower access (10 clock cycles)
- Main physical memory (RAM) (256 MiB to 4 GiB) – slow access (100 clock cycles)
- Disk (file system) (1 GiB to 1 TiB) – very slow (10,000,000 clock cycles)
- Remote Memory (such as other computers or the Internet) (Practically unlimited) – speed varies
Modern machines tend to read blocks of lower memory into the next level of the memory hierarchy. If this displaces memory that is in use, the operating system tries to predict which data will be accessed least (or latest) and move it down the memory hierarchy. Prediction algorithms tend to be simple in order to reduce hardware complexity, though they are becoming somewhat more sophisticated.
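As a back-of-the-envelope illustration of why the hierarchy pays off under locality, the average memory access time can be estimated level by level. The latencies below follow the approximate figures in the list above; the miss rates are purely assumed for the sake of the example.

    #include <stdio.h>

    /* Average memory access time (AMAT), computed top-down:
       AMAT = t_L1 + miss_L1 * (t_L2 + miss_L2 * t_RAM).
       Latencies follow the approximate figures above; miss rates are
       illustrative assumptions, not measurements. */
    int main(void)
    {
        double t_l1 = 3.0, t_l2 = 10.0, t_ram = 100.0;  /* clock cycles */
        double miss_l1 = 0.05, miss_l2 = 0.20;          /* assumed miss rates */

        double amat = t_l1 + miss_l1 * (t_l2 + miss_l2 * t_ram);
        printf("average access time: %.2f cycles\n", amat);  /* prints ~4.50 */
        return 0;
    }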
Spatial and temporal locality example: matrix multiplication
A common example is matrix multiplication:
    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++)
            for (k = 0; k < p; k++)
                C[i][j] = C[i][j] + A[i][k] * B[k][j];
When dealing with large matrices, this algorithm tends to shuffle data around too much. Since memory is pulled up the hierarchy in consecutive address blocks, in the C programming language it is advantageous to refer to several memory addresses that share the same row (spatial locality). By keeping the row number fixed, the second index changes more rapidly, and in C and C++ this means the memory addresses are used more consecutively. One can see that since j affects the column reference of both matrices C and B, it should be iterated in the innermost loop (this fixes the row iterators i and k, while j moves across each column in the row). This will not change the mathematical result, but it improves efficiency. By switching the looping order for j and k, as shown below, the speedup in large matrix multiplications becomes dramatic. (In this case, 'large' means, approximately, more than 100,000 elements in each matrix, or enough addressable memory that the matrices will not fit in the L1 and L2 caches.)
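A sketch of the reordered loop nest (same variables as above; this is the loop interchange the text describes):

    /* i-k-j order: with j innermost, C[i][j] and B[k][j] are scanned along
       consecutive addresses, while A[i][k] stays fixed across the inner loop. */
    for (i = 0; i < n; i++)
        for (k = 0; k < p; k++)
            for (j = 0; j < m; j++)
                C[i][j] = C[i][j] + A[i][k] * B[k][j];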
Temporal locality can also be improved in the above example by using a technique called blocking. The larger matrix can be divided into evenly sized sub-matrices (blocks), so that the smaller blocks can be referenced (multiplied) several times while they are in memory.
    for (ii = 0; ii < SIZE; ii += BLOCK_SIZE)
        for (kk = 0; kk < SIZE; kk += BLOCK_SIZE)
            for (jj = 0; jj < SIZE; jj += BLOCK_SIZE)
                for (i = ii; i < ii + BLOCK_SIZE && i < SIZE; i++)
                    for (k = kk; k < kk + BLOCK_SIZE && k < SIZE; k++)
                        for (j = jj; j < jj + BLOCK_SIZE && j < SIZE; j++)
                            C[i][j] = C[i][j] + A[i][k] * B[k][j];
The temporal locality of the above solution is provided because a block can be used several times before moving on, so that it is moved in and out of memory less often. Spatial locality is improved because elements with consecutive memory addresses tend to be pulled up the memory hierarchy together.
References
- ^ Aho, Lam, Sethi, and Ullman. Compilers: Principles, Techniques, and Tools, 2nd ed. Pearson Education, 2007.
- P. J. Denning and S. C. Schwartz. "Properties of the Working-Set Model". Communications of the ACM, Volume 15, Issue 3 (March 1972), pp. 191-198.
Bibliography
- Peter J. Denning. "The Locality Principle". Communications of the ACM, 2005. The story of locality of reference as a fundamental principle of computing.