
Memory virtualization

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Expatrick (talk | contribs) at 20:13, 9 April 2009 (Created page with 'In computer science, Memory virtualization decouples volatile random access memory (RAM) resources from individual systems in the data center, and then aggr…'). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In computer science, memory virtualization decouples volatile random access memory (RAM) resources from individual systems in the data center, and then aggregates those resources into a virtualized memory pool available to any computer in the cluster. The memory pool is accessed by the operating system or by applications running on top of the operating system. The distributed memory pool can then be utilized as a high-speed cache, a messaging layer, or a large, shared memory resource for CPU or GPU applications.

Description

Memory virtualization allows networked, and therefore distributed, servers to share a pool of memory to overcome physical memory limitations, a common bottleneck in software performance. With this capability integrated into the network, applications can take advantage of a very large amount of memory to improve overall performance and system utilization, increase memory usage efficiency, and enable new use cases. Software on the memory pool nodes (servers) allows nodes to connect to the memory pool to contribute memory, and to store and retrieve data. Management software controls the shared memory: it enforces data insertion, eviction, and provisioning policies, assigns data to contributing nodes, and handles requests from client nodes. The memory pool may be accessed at the application level or at the operating system level. At the application level, the pool is accessed through an API or as a networked file system to create a high-speed shared memory cache. At the operating system level, a page cache can utilize the pool as a very large memory resource that is much faster than local or networked storage.
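The contribute/store/retrieve roles described above can be modeled in a few lines. The following is an illustrative in-process sketch, not any real product's API; names such as MemoryPool, contribute(), put(), and get() are hypothetical.

```python
# Hypothetical in-process model of a memory pool aggregated from several
# contributing nodes; a real system would do this over a network.

class MemoryPool:
    def __init__(self):
        self.capacity = 0   # total bytes contributed by all nodes
        self.used = 0
        self.store = {}     # key -> bytes, the shared cache

    def contribute(self, node_id, num_bytes):
        """A contributor node adds its spare RAM to the pool."""
        self.capacity += num_bytes

    def put(self, key, value):
        """A client stores data; simple reject-when-full policy."""
        if self.used + len(value) > self.capacity:
            raise MemoryError("pool exhausted")
        self.used += len(value)
        self.store[key] = value

    def get(self, key):
        """Any client node in the cluster can retrieve shared data."""
        return self.store.get(key)

pool = MemoryPool()
pool.contribute("node-1", 1024)   # two nodes each contribute 1 KiB
pool.contribute("node-2", 1024)
pool.put("result", b"shared between servers")
assert pool.get("result") == b"shared between servers"
```

A real implementation would add the eviction and provisioning policies mentioned above; this sketch only rejects writes when the contributed capacity is exhausted.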

Memory virtualization implementations are distinguished from shared memory systems and from memory-based storage. Shared memory systems do not permit abstraction of memory resources, and thus require implementation within a single operating system instance (i.e., not within a clustered application environment). Solid-state drives (SSDs) are direct-attached storage implementations that use memory (i.e., RAM or flash memory) as the storage medium.


Benefits

  • Improves cluster and server utilization via the sharing of scarce resources
  • Increases efficiency and lowers run time for data intensive and I/O bound applications
  • Allows applications on multiple servers to share data without replication, decreasing total memory needs
  • Lowers latency and provides faster access than storage such as SSDs, SANs, or NAS
  • Scales linearly as memory resources are added to the cluster and made available to the memory pool.

Products

  • RNA networks Memory Virtualization Platform – A low-latency memory pool, implemented as a shared cache and a low-latency messaging solution.
  • Gigaspaces – A Java-based shared memory platform for grid computing.
  • ScaleMP – A platform that combines resources from multiple computers to create a single computing instance.
  • Wombat Data Fabric – A memory-based messaging fabric for delivery of market data in financial services.


Implementations

Application level integration

In this case, applications running on connected computers access the memory pool directly through an API or through the file system.
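The file-system route can be sketched as follows. In a real deployment the pool would be mounted at some network path; here a local temporary directory stands in for that hypothetical mount point, so this only models the access pattern, not the distribution.

```python
# Sketch of application-level access through a file-system interface.
# A temporary directory stands in for a (hypothetical) memory-pool mount.
import os
import tempfile

mount_point = tempfile.mkdtemp()   # stand-in for a pool mount path

# Writing through the file system places data in the pooled memory ...
with open(os.path.join(mount_point, "quote.bin"), "wb") as f:
    f.write(b"\x00\x01")

# ... and any application that sees the mount can read it back by path,
# without an application-specific API.
with open(os.path.join(mount_point, "quote.bin"), "rb") as f:
    data = f.read()

assert data == b"\x00\x01"
```

The appeal of this interface is that unmodified applications already know how to read and write files, so they gain access to the pool without code changes.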


Cluster implementing memory virtualization at the application level. Contributors 1...n contribute memory to the pool. Applications read and write data to the pool using Java or C APIs, or a file system API.



Operating system level integration

In this case, the operating system connects to the memory pool, and makes pooled memory available to applications.
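The page-cache integration described above can be sketched as a cache that, on a miss, fetches the page from the memory pool rather than from disk, and writes evicted pages back to the pool. All names here (PageCache, pool, PAGE_SIZE) are illustrative, not taken from any real kernel.

```python
# Sketch of OS-level integration: a page cache backed by a memory pool.

PAGE_SIZE = 4096
# The pooled memory, modeled as page number -> page contents.
pool = {n: bytes([n % 256]) * PAGE_SIZE for n in range(8)}

class PageCache:
    def __init__(self, max_pages):
        self.max_pages = max_pages
        self.pages = {}    # resident pages: page number -> bytes
        self.misses = 0

    def read_page(self, n):
        if n not in self.pages:
            self.misses += 1                      # page fault
            if len(self.pages) >= self.max_pages:
                evicted = next(iter(self.pages))  # FIFO eviction
                pool[evicted] = self.pages.pop(evicted)  # write back
            self.pages[n] = pool[n]   # fetch from the pool (network)
        return self.pages[n]

cache = PageCache(max_pages=2)
cache.read_page(0)   # miss: fetched from the pool
cache.read_page(1)   # miss
cache.read_page(0)   # hit: already resident locally
cache.read_page(2)   # miss: evicts page 0 back to the pool
assert cache.misses == 3
```

Because a pool fetch crosses the network rather than going to disk, misses are far cheaper than in a disk-backed page cache, which is the performance argument made in the section above.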


Cluster implementing memory virtualization at the operating system level. Contributors 1...n contribute memory to the pool. The operating system connects to the memory pool through the page cache system. Applications consume pooled memory via the operating system.


Comparison to other technologies

  • Virtual memory – combines server RAM with disk-based paging on a single server
  • Virtual memory management in hypervisors – hypervisors manage the physical memory of one server, dynamically apportioning memory among operating system instances, typically using the translation lookaside buffer (TLB) to translate between virtual and physical memory addresses (VMware ESX, Xen Hypervisor)
  • Data grid – enables Java application clustering (Terracotta Network Attached Memory)
  • In-memory database – provides faster and more predictable performance than disk-based databases (Gigaspaces, Gemstone GemFire)
  • I/O virtualization – creates virtual network and storage endpoints which allow network and storage data to travel over the same fabrics (Xsigo I/O Director)
  • Storage virtualization – abstracts logical storage from physical storage (NAS, SAN, file systems (NFS, cluster FS), volume management, RAID)
  • Virtualization management hardware – hardware solutions that accelerate hypervisors (3Leaf Management Solution)
  • RAM disks – virtual storage devices within a single computer, limited to the capacity of unused local RAM


Background

Memory virtualization follows from memory management architectures and virtualization techniques. In both fields, the path of innovation has moved from tightly coupled relationships between logical and physical resources to more flexible, abstracted relationships in which physical resources are allocated as needed.

Virtual memory systems abstract between physical RAM and virtual addresses, assigning virtual memory addresses both to physical RAM and to disk-based storage, expanding addressable memory, but at the cost of speed. NUMA and SMP architectures optimize memory allocation within multi-processor systems. While these technologies dynamically manage memory within individual computers, memory virtualization manages the aggregated memory of multiple networked computers as a single memory pool.
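The single-machine abstraction described above can be illustrated with a toy page table in which virtual page numbers map either to RAM frames or to disk slots, trading speed for a larger address space. All structures here are illustrative simplifications.

```python
# Toy model of virtual memory: virtual pages resolve to fast RAM frames
# or, via a simulated page fault, to slow disk-based swap slots.

ram = {0: b"hot data"}     # frame number -> contents (fast)
disk = {7: b"cold data"}   # swap slot    -> contents (slow)

# Page table: virtual page -> (location, index)
page_table = {
    0: ("ram", 0),
    1: ("disk", 7),
}

def access(vpage):
    location, index = page_table[vpage]
    if location == "ram":
        return ram[index]          # direct, fast access
    # Page fault: bring the page in from disk before use.
    contents = disk[index]
    ram[len(ram)] = contents       # place in a free frame (simplified)
    page_table[vpage] = ("ram", len(ram) - 1)
    return contents

assert access(0) == b"hot data"
assert access(1) == b"cold data"   # triggers the simulated page fault
assert page_table[1][0] == "ram"   # the page is now resident in RAM
```

Memory virtualization extends this idea across machines: instead of falling back to local disk, the "slow" tier becomes RAM contributed by other nodes in the cluster, which is still much faster than disk.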

The effort to virtualize physical resources has progressed in tandem with memory management innovations, and a number of virtualization techniques have arisen to make the best use of available hardware resources. Application virtualization was demonstrated first in mainframe systems. The next wave was storage virtualization, as servers connected to storage systems such as NAS or SAN in addition to, or instead of, on-board hard disk drives. Server virtualization, or full virtualization, partitions a single physical server into multiple virtual machines, consolidating multiple instances of operating systems onto the same machine for efficiency and flexibility. In both storage and server virtualization, the applications are unaware that the resources they are using are virtual rather than physical, so efficiency and flexibility are achieved without application changes. In the same way, memory virtualization pools the memory of an entire networked cluster of servers and allocates it among the computers in that cluster.
