Cache placement policies

Cache placement policies determine where a block of data from main memory may be placed in the cache of a central processing unit (CPU).
Performance
CPU cache plays a significant role in the overall performance of a computer system and the speed at which a central processing unit (CPU) can process data. CPU cache is very fast computer memory, usually implemented with static random-access memory (SRAM). SRAM is volatile memory: it can only store data while supplied with electricity. The CPU cache sits between the CPU cores and the main random-access memory (RAM). It isolates the much faster CPU cores from the slower main memory, which is usually dynamic random-access memory (DRAM).[1]
Cache placement policies can enhance the performance of data processing by the CPU cores because computer programs tend to access data in RAM sequentially, a behavior known as spatial locality, or to execute the same part of the code or access the same data repeatedly for some time, a behavior known as temporal locality, as in a loop structure.[2]
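The benefit of spatial locality can be illustrated with a minimal sketch (hypothetical parameters, not from the cited source): a cache that holds a single block of contiguous addresses will satisfy most sequential accesses from the block it has already loaded, missing only once per block.

```python
# Minimal sketch: one-block cache serving a sequential access pattern.
# block_size and the address range are illustrative values only.
block_size = 8          # addresses per cache block
loaded_block = None     # which block is currently in the cache
hits = misses = 0

for addr in range(64):              # sequential accesses: spatial locality
    block = addr // block_size      # block that contains this address
    if block == loaded_block:
        hits += 1                   # data already cached
    else:
        misses += 1                 # cold miss; fetch the block
        loaded_block = block

print(hits, misses)  # 56 hits, 8 misses: one miss per block of 8 addresses
```

With 64 sequential accesses and 8-address blocks, only 8 of the accesses (the first touch of each block) miss, which is why caches exploit spatial locality so effectively.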
Approaches to cache placement


There are three different policies available for placement of data in the CPU cache: direct-mapped, fully associative, and set-associative. Originally this space of cache organizations was described using the term "congruence mapping".[3]
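All three policies can be viewed as points on one spectrum parameterized by associativity. The following sketch (illustrative parameters, not from the cited sources) computes, for a given memory address, the set index and tag that a cache with the given geometry would use; direct-mapped corresponds to an associativity of 1, and fully associative to an associativity equal to the number of blocks, so that there is only one set.

```python
def placement(address, block_size, num_blocks, associativity):
    """Return (set_index, tag) for an address under a given cache geometry.

    block_size, num_blocks, and associativity are hypothetical parameters
    chosen for illustration; real caches fix these in hardware.
    """
    num_sets = num_blocks // associativity   # direct-mapped: num_sets == num_blocks
    block_number = address // block_size     # which memory block holds the address
    set_index = block_number % num_sets      # set the block must be placed in
    tag = block_number // num_sets           # remaining bits identify the block
    return set_index, tag

# Example: 64-byte blocks, 128 cache blocks, address 0x12345 (block 1165).
print(placement(0x12345, 64, 128, 1))    # direct-mapped:    (13, 9)
print(placement(0x12345, 64, 128, 4))    # 4-way set-assoc.: (13, 36)
print(placement(0x12345, 64, 128, 128))  # fully associative: (0, 1165)
```

In the fully associative case the set index is always 0, meaning the block may go anywhere and every tag must be compared on a lookup, while the direct-mapped case allows exactly one location per block.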
References
- ^ Sivarama P. Dandamudi (2006). Fundamentals of Computer Organization and Design. Springer New York. p. 731. ISBN 9780387215662.
- ^ Sivarama P. Dandamudi (2006). Fundamentals of Computer Organization and Design. Springer New York. p. 731. ISBN 9780387215662.
- ^ Mattson, R. L.; Gecsei, J.; Slutz, D. R.; Traiger, I. L. (1970). "Evaluation Techniques for Storage Hierarchies". IBM Systems Journal. 9 (2): 78–117. doi:10.1147/sj.92.0078.