Container (Solaris)


Solaris Containers (including Solaris Zones) is an implementation of operating-system-level virtualization technology, first made available in 2005 as part of Solaris 10.

A Solaris Container is the combination of system resource controls and the boundary separation provided by zones. Zones act as completely isolated virtual servers within a single operating system instance. By consolidating multiple sets of application services onto one system and by placing each into isolated virtual server containers, system administrators can reduce cost and provide all the same protections of separate machines on a single machine.

Terminology

There is always one zone defined, named the "global" zone. Zones hosted by a global zone are known as "non-global zones" but are sometimes just called "zones". The term "local zone" is specifically discouraged, since in this usage "local" is not an antonym of "global". The global zone encompasses all processes running on the system, whether or not these processes are running within a non-global zone. Unless otherwise noted, "zone" will refer to non-global zones in this article.

Description

Each zone has its own node name, virtual network interfaces, and storage assigned to it; there is no requirement for a zone to have any minimum amount of dedicated hardware other than the disk storage necessary for its unique configuration. Specifically, it does not require a dedicated CPU, memory, physical network interface or HBA, although any of these can be allocated specifically to one zone.
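A zone's node name, storage path, and optional dedicated network interface are all declared in its configuration. As a hedged sketch (the zone name, zonepath, NIC name, and address below are illustrative, not from the source), configuration with the standard zonecfg utility might look like:

```shell
# Sketch: configuring a new zone with zonecfg (Solaris 10).
# "webzone", /zones/webzone, e1000g0, and the address are illustrative.
zonecfg -z webzone <<'EOF'
create
set zonepath=/zones/webzone
set autoboot=true
add net
set physical=e1000g0
set address=192.168.1.50/24
end
verify
commit
EOF
```

Note that nothing here reserves hardware: the zone shares the global zone's CPUs, memory, and the physical NIC unless resources are explicitly dedicated to it.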

Each zone has a security boundary surrounding it which prevents a process associated with one zone from interacting with or observing processes in other zones. Each zone can be configured with its own separate user list. The system automatically manages user ID conflicts; that is, two zones on a system could have a user ID 10000 defined, and each would be mapped to its own unique global identifier.

A zone can be assigned to a resource pool (processor set plus scheduling class) to guarantee certain usage, or can be given shares via fair-share scheduling. A zone can be in one of the following states:

  • Configured: the zone's configuration has been completed and committed
  • Incomplete: a transitional state during an install or uninstall operation
  • Installed: the zone's packages have been successfully installed
  • Ready: the virtual platform has been established
  • Running: the zone booted successfully and is now running
  • Shutting down: the zone is in the process of shutting down; a transitional state leading to "Down"
  • Down: the zone has completed the shutdown process; a transitional state leading to "Installed"
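The states above are driven by the standard zoneadm utility. A sketch of walking a zone through its lifecycle (the zone name "webzone" is illustrative):

```shell
# Illustrative lifecycle of a configured zone named "webzone":
zoneadm -z webzone install   # Configured -> Installed
zoneadm -z webzone ready     # Installed  -> Ready (virtual platform established)
zoneadm -z webzone boot      # Ready      -> Running
zlogin webzone uname -a      # run a command inside the running zone
zoneadm -z webzone halt      # Running    -> Installed
zoneadm list -cv             # list all zones with their current states
```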

Some programs cannot be executed from within a non-global zone; typically this is because the application requires privileges that cannot be granted within a container. As a zone does not have its own separate kernel (in contrast to a hardware virtual machine), applications that require direct manipulation of kernel features, such as the ability to directly read or alter kernel memory space, may not work inside of a container.

Resources needed

Zones impose very little CPU and memory overhead. Currently a maximum of 8191 non-global zones can be created within a single operating system instance. "Sparse Zones", in which most filesystem content is shared with the global zone, can take as little as 50 MB of disk space. "Whole Root Zones", in which each zone has its own copy of its operating system files, may occupy anywhere from several hundred megabytes to several gigabytes, depending on the installed software.
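The choice between the two models is made at configuration time. As a hedged sketch (zone names and zonepaths are illustrative): in Solaris 10's zonecfg, a plain "create" produces a sparse zone whose system directories are inherited read-only from the global zone, while "create -b" starts from a blank configuration that yields a whole root zone.

```shell
# Sparse root (the default): /lib, /platform, /sbin, and /usr are
# inherited read-only from the global zone via inherit-pkg-dir.
zonecfg -z sparsezone "create; set zonepath=/zones/sparsezone"

# Whole root: "create -b" starts blank, so the zone gets its own
# writable copy of the operating system files.
zonecfg -z wholezone "create -b; set zonepath=/zones/wholezone"
```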

Even with Whole Root Zones, disk space requirements can be negligible if the zone's OS file system is a ZFS clone of the global zone image, since only the blocks different from a snapshot image need to be stored on disk; this method also makes it possible to create new zones in a few seconds.
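At the filesystem level this relies on ZFS snapshots and clones. A minimal sketch, assuming the zone images live on ZFS (the dataset and snapshot names are illustrative):

```shell
# Snapshot a master zone image, then clone it copy-on-write; the
# clone initially shares all blocks with the snapshot.
zfs snapshot rpool/zones/master@golden
zfs clone rpool/zones/master@golden rpool/zones/newzone
# The clone can back a new zone's zonepath; only blocks that diverge
# from the snapshot consume additional disk space.
```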

Branded zones

Although all zones on the system share a common kernel, a feature set called branded zones (BrandZ for short), or non-native zones, allows individual zones to emulate an OS environment other than the native one of the global OS.[1] The non-native environment is dubbed a "brand", which plugs into the BrandZ framework.

The brand for a zone is set at the time the zone is created, and is implemented with interposition points within the OS kernel that can be used to change the behavior of syscalls, process loading, thread creation, and other elements.

Three brands that have been implemented are Solaris Containers for Linux Applications, Solaris 8 Containers, and Solaris 9 Containers. The first is available when Solaris is run on x86 systems, and provides an environment that emulates Red Hat Enterprise Linux 3. Libraries from Red Hat 3 or an equivalent distribution such as CentOS are required to complete the emulated environment. The latter two brands allow an existing Solaris 8 or Solaris 9 environment to be copied and relocated in a Solaris 10 zone.

The OpenSolaris s10brand project is using the BrandZ framework to create Solaris 10 Containers for OpenSolaris, which will allow an existing Solaris 10 environment to be hosted on a system running an OpenSolaris release.[2]

Documentation

The Solaris operating system provides man pages for Solaris Containers by default; more detailed documentation can be found at various on-line technical resources.

The first published document and hands-on reference for Solaris Zones was written in February 2004 by Dennis Clarke at Blastwave.org, providing the essentials for getting started.[3] Brendan Gregg greatly expanded on this material in July 2005.[4] Solaris 8 and Solaris 9 Containers were documented in detail by Dennis Clarke at Blastwave in April 2008; this became a simple how-to guide for getting started with Solaris Containers in a production setting.[5] The Blastwave Solaris 8 and Solaris 9 Containers document came very early in the release cycle of the technology, and the implementation at Blastwave prompted a follow-up by Sun Microsystems marketing.[6] More extensive documentation may be found at the Sun Microsystems documentation site,[7] the Sun BluePrints Archive,[8] and the Sun Solaris Containers Learning Center.[9]

Implementation issues

The standard Solaris NFS server is implemented in the kernel and cannot be used for exports within non-global zones.[10][11] Third-party NFS server software that is not implemented in the Solaris kernel may work.

Branded zones are not supported on the sun4us architecture. The packages will install on Fujitsu PRIMEPOWER servers that are running Solaris 10, but attempting to use branded zones will result in errors.[12]

Similar technologies

Other implementations of operating system-level virtualization technology include OpenVZ/Virtuozzo, Linux-VServer, FreeBSD Jails, FreeVPS, iCore Virtual Accounts and AIX Workload Partitions.

References


  1. BrandZ/SCLA FAQ. OpenSolaris Project, retrieved October 19, 2007.
  2. OpenSolaris Project: s10brand. OpenSolaris Project, retrieved May 10, 2009.
  3. Dennis Clarke: Get in the Zone. In: Blastwave.org Inc. February 2004, retrieved April 21, 2008.
  4. Zones. In: Solaris Internals wiki. November 6, 2007, retrieved April 21, 2008.
  5. The Solaris 8 Container & Solaris 9 Container: A Brief Introduction. In: Blastwave.org Inc. April 27, 2008, retrieved April 27, 2008.
  6. World's Largest Provider of Standardized Solaris Software Packages Saves Time and Money With Solaris Containers. In: Sun Microsystems Inc. August 1, 2008, retrieved May 1, 2008.
  7. Sun Microsystems Documentation. Sun Microsystems, Inc., retrieved April 21, 2008.
  8. Sun BluePrints. Sun Microsystems, Inc., retrieved November 9, 2008.
  9. Solaris Containers Learning Center. Sun Microsystems, Inc., retrieved April 21, 2008.
  10. RFE: Zones should be able to be NFS servers. In: OpenSolaris BugTracker. December 7, 2003, retrieved February 20, 2007.
  11. NFS server in zones. In: zones-discuss. February 14, 2007, retrieved February 20, 2007.
  12. SUNWs[89]brandr missing kernel modules. In: brandz-discuss list. October 16, 2008, retrieved May 5, 2009.