PowerVM virtualization technologies
Over the past few years, IBM has introduced a number of virtualization technologies, capabilities and offerings. Many of the virtualization features were originally available through an offering known as Advanced POWER Virtualization (APV). As of early 2008, these and many subsequent technologies and offerings have been collectively grouped under the new brand of PowerVM.
The PowerVM technologies are packaged into three PowerVM editions:
PowerVM Express Edition
PowerVM Standard Edition
PowerVM Enterprise Edition
These packages give you the flexibility to choose a cost-effective solution based on your virtualization requirements.
PowerVM is additionally extended by virtualization features and capabilities of the processor and operating system.
Individual technologies are enabled by hardware, software, firmware, or some combination thereof. Table 2-1 presents a list of key technology offerings with their requirements.
Table 2-1  PowerVM technologies and their technology enablers

PowerVM Technology                             Hardware (1)  AIX (2)   Linux (2)                IBM i (2)
Hypervisor                                     POWER4
Hardware Management Console                    POWER4
Integrated Virtualization Manager              POWER5
Systems Director Management Console            POWER6
Systems Director VMControl                     POWER5        AIX 5.3                            (3)
Dedicated Logical Partitions (LPARs)           POWER4
Live Partition Mobility                        POWER6        AIX 5.3   SLES 10 SP2, RHEL 5 QU2  (4)
Dynamic LPAR                                   POWER4
Micro-Partitioning and Shared Processor LPARs  POWER5        AIX 5.3
Shared Dedicated Capacity                      POWER6
Multiple Shared-Processor Pools                POWER6
Virtual I/O Server (VIOS)                      POWER5                                           IBM i 6.1
Partition Suspend / Resume                     POWER7        AIX 6.1                            IBM i 7.1 TR2
N Port ID Virtualization                       POWER6        AIX 5.3   SLES 11, RHEL 6          IBM i 6.1.1
Virtual Tape                                   POWER6        AIX 5.3   SLES 11, RHEL 6
Virtual SCSI                                   POWER5        AIX 5.3
Virtual Ethernet                               POWER5        AIX 5.3
Shared Ethernet Adapter                        POWER5        AIX 5.3
Integrated Virtual Ethernet                    POWER6
Active Memory Sharing (AMS)                    POWER6        AIX 6.1   SLES 11, RHEL 6          IBM i 6.1.1 + PTFs
Active Memory Expansion (AME) (5)              POWER7        AIX 6.1
Workload Partitions                            POWER4        AIX 6.1
Workload Partition Manager                     POWER4        AIX 6.1
Live Application Mobility                      POWER4        AIX 6.1
Simultaneous Multithreading                    POWER5
IBM i Subsystems                               any

(1) Minimum hardware technology level.
(2) Minimum operating system release level.
(3) IBM i supports only the features of the Systems Director VMControl Express Edition.
(4) Statement of Direction: Live Partition Mobility is planned to be made available on IBM i with a Technology Refresh for IBM i 7.1. This support will require POWER7.
(5) Not related to the PowerVM editions.
Table 2-1 on page 12 shows the minimum requirements with respect to the hardware. Individual models might not support all features; check the IBM website for details.
2.1 Hypervisor
The POWER Hypervisor™ is a foundation technology for PowerVM virtualization. It exists as a firmware layer between the hosted operating systems and the hardware and provides the functions that enable many of the PowerVM technologies, such as dedicated logical partitions (LPARs), micro-partitions, shared-processor pools, dynamic LPAR reconfiguration, virtual I/O, and virtual LAN. As Figure 2-1 on page 14 shows, for managing partitions the Hypervisor dispatches partition workloads among the processors, ensures partition isolation, and supports dynamic resource movement.
Figure 2-1 Power Hypervisor
The POWER Hypervisor is always installed and activated, regardless of the system configuration. Although the POWER Hypervisor has no specific or dedicated processor resources assigned to it, it does consume a small overhead in terms of memory and processor capacity from both system and LPAR resources.
The management interface to the Hypervisor is the Hardware Management Console (HMC), the Integrated Virtualization Manager (IVM), the Systems Director Management Console (SDMC), or Systems Director VMControl.
2.2 Hardware Management Console
The Hardware Management Console (HMC) is a dedicated Linux-based appliance that you use to configure and manage IBM Power Systems servers. The HMC provides access to logical partitioning functions, service functions, and various system management functions through both a browser-based interface and a command line interface (CLI). Because it is a separate stand-alone system, the HMC does not use any managed system resources, and you can maintain it without affecting system activity.
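For example, from the HMC CLI you can inventory a managed system and its partitions with the lssyscfg command. A minimal sketch, assuming a managed system named MySystem (a placeholder name):

   # List all managed systems and their states
   lssyscfg -r sys -F name,state
   # List the partitions of one managed system
   lssyscfg -r lpar -m MySystem -F name,state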
2.3 Integrated Virtualization Manager
For smaller or segmented and distributed environments, not all functions of an HMC are required, and the deployment of an additional management server might not be suitable. For these cases, IBM developed a management solution called the Integrated Virtualization Manager (IVM), which provides a convenient browser-based interface and performs a subset of the HMC functions. IVM is integrated into the Virtual I/O Server product and runs in a partition of the managed server itself, which avoids the need for a dedicated HMC server. Because IVM is provided as a no-cost option, it lowers the cost of entry into PowerVM virtualization. However, IVM can manage only a single Power Systems server. If IVM is the chosen management method, a VIOS partition is defined and all resources belong to the VIOS; no other partition can have dedicated adapters, because they are all shared.
2.4 Systems Director Management Console
As of April 2011, IBM has released the Systems Director Management Console (SDMC) as the next-generation HMC. SDMC extends the support scope of the HMC to range from POWER blade servers to the high-end Power servers, thereby unifying and consolidating administration. It also leverages the same consistent Systems Director user interface that is used for other IBM hardware, ranging from mainframes to System x® servers. To ease the transition, SDMC can operate simultaneously with existing HMC or IVM consoles. It is currently offered on the same x86-based hardware as the HMC, and it is also available as a Red Hat Enterprise Virtualization KVM or VMware-based virtual appliance.
2.5 Systems Director VMControl
Another systems management offering available for Power Systems servers is Systems Director VMControl. VMControl is a cross-platform virtualization management solution that provides an enterprise-wide management platform for servers, storage, networks, and software. It is offered as a virtualization plug-in for IBM Systems Director.
With the same dashboard interface, VMControl not only enables you to create, edit, manage, and relocate LPARs, but also to capture ready-to-run virtual LPAR images and store them in a shared repository. These images can be quickly deployed to meet business needs. You can also create and manage server and storage system pools to consolidate resources and increase utilization.
The available features and capabilities are packaged into three editions as depicted in Table 2-2: Express, Standard, and Enterprise.
Table 2-2  Express, Standard, and Enterprise editions of VMControl

Feature                                          Express  Standard  Enterprise
Create and manage virtual machines                  X        X          X
Virtual machine relocation                          X        X          X
Import, edit, create, and delete virtual images              X          X
Deploy virtual images                                        X          X
Maintain virtual images in a repository                      X          X
Manage virtual workloads in system pools                                X
2.6 Dedicated LPARs
Logical partitioning was introduced to the Power Systems environment on POWER4-based servers. It provided the ability to make a server run as though it were two or more independent servers. When a physical system is logically partitioned, the resources on the server are divided into subsets called logical partitions (LPARs). Processors, memory, and input/output devices can be individually assigned to logical partitions. Dedicated LPARs hold these resources for exclusive use. You can separately install and operate each dedicated LPAR because LPARs run as independent logical servers with the resources allocated to them.
2.7 Live Partition Mobility
Live Partition Mobility (LPM) provides the ability to move a running AIX or Linux partition from one POWER6 (or later) technology-based server to another compatible server without application downtime. This feature allows applications to continue running during activities that previously required scheduled downtime, for example, hardware and firmware maintenance and upgrades, workload rebalancing, or server consolidation. For more details, see Chapter 6, “IBM PowerVM Live Partition Mobility” on page 63.
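On HMC-managed systems, a mobility operation is first validated and then performed with the migrlpar command. A minimal sketch, assuming source system SrcSys, target system TgtSys, and partition lpar1 (placeholder names):

   # Validate that the partition can be moved to the target system
   migrlpar -o v -m SrcSys -t TgtSys -p lpar1
   # Perform the live migration
   migrlpar -o m -m SrcSys -t TgtSys -p lpar1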
2.8 Dynamic LPAR
Dynamic logical partitioning (DLPAR) gives you the ability to manually move resources (such as processors, I/O, and memory) to, from, and between running logical partitions without shutting down or restarting the logical partitions.
When you apply this dynamic resource allocation, known as dynamic logical partitioning or dynamic LPAR, you can redefine all available system resources to reach optimum capacity for each partition, which allows you to share devices that logical partitions use only occasionally. The following examples describe situations in which you might want to employ dynamic LPAR (a command-line sketch follows the list):
Moving processors from a test partition to a production partition in periods of peak demand, then moving them back again as demand decreases.
Moving memory to a partition that is doing excessive paging.
Moving an infrequently used I/O device between partitions, such as a CD-ROM for installations or a tape drive for backups.
Releasing a set of processor, memory, and I/O resources into the free pool so that a new partition can be created from those resources.
Configuring a set of minimal logical partitions to act as backup to primary logical partitions, while also keeping some set of resources available. If one of the primary logical partitions fails, you can assign available resources to that backup logical partition so that it can assume the workload.
Temporarily assigning more capacity to an LPAR during an upgrade or migration to reduce SAP system downtime.
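On HMC-managed systems, such resource moves are performed with the chhwres command. A minimal sketch, assuming a managed system MySystem and partitions test1 and prod1 (placeholder names):

   # Move one dedicated processor from the test partition to the production partition
   chhwres -r proc -m MySystem -o m -p test1 -t prod1 --procs 1
   # Dynamically add 2 GB of memory (specified in MB) to the production partition
   chhwres -r mem -m MySystem -o a -p prod1 -q 2048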
2.9 Micro-Partitioning and Shared Processor LPARs
Shared Processor LPARs (SPLPARs) are logical partitions that share a common pool of processors, called the shared-processor pool, as illustrated in Figure 2-2 on page 18. Micro-Partitioning technology allows these partitions to be sized in increments of 1/100th of a processor, and an SPLPAR can start as small as 1/10th of a processor. This level of processor granularity provides excellent flexibility (in comparison to dedicated-processor LPARs) when allocating processor resources to partitions. Within the shared-processor pool, unused processor cycles can be automatically distributed to busy partitions on an as-needed basis, which allows you to right-size partitions so that more efficient utilization rates can be achieved. Implementing the shared-processor pool using Micro-Partitioning technology allows you to create more partitions on a server, which reduces costs.
Figure 2-2 Micro-Partitioning and Shared Processor LPARs
2.10 Shared Dedicated Capacity
A new feature in POWER6 technology-based systems, Shared Dedicated Capacity, allows partitions that are running with dedicated processors to donate unused processor cycles to the shared-processor pool. When enabled in a partition, the size of the shared processor pool is increased by the number of physical processors that are normally dedicated to that partition, which increases the simultaneous processing capacity of the associated SPLPARs. Due to licensing concerns, however, the number of processors an individual SPLPAR can acquire is never more than the initial processor pool size. This feature provides a further opportunity to increase the workload capacity of uncapped micro-partitions.
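The donation behavior is controlled through the sharing mode in the partition profile. A sketch using the HMC CLI, assuming a managed system MySystem, a partition lpar1, and a profile named default (placeholder names):

   # Let a dedicated-processor partition donate its idle cycles to the shared pool
   chsyscfg -r prof -m MySystem -i "name=default,lpar_name=lpar1,sharing_mode=share_idle_procs"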
2.11 Multiple Shared-Processor Pools
Multiple Shared-Processor Pools (MSPP) is a feature that is available starting with POWER6 technology-based systems. It allows you to create additional shared-processor pools. The MSPP pools are subsets of and contained within the global shared-processor pool. You can then assign SPLPARs processing capacity from either the global shared processor pool or a newly created MSPP pool. The main motivation here is to limit the processor capacity for SPLPARs, thereby reducing software license fees that are based on processor capacity.
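On the HMC, the attributes of an additional shared-processor pool can be set with the chhwres command. A sketch, assuming a managed system MySystem and a pool named DevPool (placeholder names; the exact flag and attribute spellings can vary by HMC level):

   # Cap the pool at 4 processing units to bound license-relevant capacity
   chhwres -r procpool -m MySystem -o s --poolname DevPool -a "max_pool_proc_units=4"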
2.12 Virtual I/O Server
The Virtual I/O Server (VIOS), as illustrated in Figure 2-3, is an AIX-based appliance partition that provides virtual I/O client partitions with access to its physical storage and network resources. Client partitions can be AIX, IBM i, or Linux based. By eliminating the need for dedicated resources, such as network adapters, disk adapters, and disk drives, for each partition, the VIOS facilitates both on demand computing and server consolidation. The VIOS is a foundation technology that many other PowerVM technologies require.
Figure 2-3 Virtual I/O Server
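From the VIOS restricted shell (the padmin user), a few commands show the installed level and the mappings between physical and virtual resources. A minimal sketch (output depends on the configuration):

   ioslevel         # show the installed VIOS software level
   lsdev -virtual   # list the virtual devices hosted by this VIOS
   lsmap -all       # show the virtual SCSI mappings of all vhost adapters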
2.13 Partition Suspend and Resume
Partition Suspend and Resume provides the ability for partitions to be put into a standby or hibernated state and later to be restored and resumed back to their active state. During suspension, partition state information is stored on a VIOS-managed paging device.
One benefit is that partitions that are temporarily not required online can be suspended, freeing up their resources for other partitions.
Additionally, since some applications can have long shutdown and startup times, suspending and resuming a partition may be quicker than shutting down the LPAR and all its running applications and later restarting. In such situations, planned downtime for certain maintenance activities may also be reduced by avoiding a long shutdown and startup process.
The feature can also work in conjunction with LPM to resume the partition on a different system.
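On HMC-managed systems, suspend and resume operations are typically driven with the chlparstate command. A sketch, assuming a managed system MySystem and a partition lpar1 (placeholder names):

   # Suspend the partition; its state is saved to a VIOS-managed paging device
   chlparstate -o suspend -m MySystem -p lpar1
   # Resume the partition from its saved state
   chlparstate -o resume -m MySystem -p lpar1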
2.14 N Port ID Virtualization
N Port ID Virtualization (NPIV) is an industry-standard technology for virtualizing a physical Fibre Channel port. It provides the capability to assign a physical Fibre Channel adapter multiple unique N Port IDs. Together with Virtual I/O Server (VIOS) adapter sharing, this enables direct access to a Fibre Channel adapter from multiple client partitions. NPIV offers many benefits (a configuration sketch follows this list):
Ease-of-use allowing storage administrators to use existing tools for storage management (including SAN managers, copy services, and backup and restore)
Simplified storage provisioning using standard zoning and LUN masking techniques
Physical and virtual device compatibility
Access to SAN devices including tape libraries
Support for distributed solutions that depend on SCSI reservation mechanisms (SCSI-2 Reserve/Release and SCSI-3 Persistent Reserve)
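On the VIOS, a client's virtual Fibre Channel adapter is bound to a physical port with the vfcmap command. A minimal sketch (adapter names are examples):

   # Map virtual Fibre Channel host adapter vfchost0 to physical port fcs0
   vfcmap -vadapter vfchost0 -fcp fcs0
   # Verify the NPIV mapping, including the client's virtual WWPNs
   lsmap -npiv -vadapter vfchost0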
2.15 Virtual Tape
Virtual Tape virtualizes physical Serial Attached SCSI (SAS) tape devices. Together with the VIOS, this allows the sharing of these devices among multiple client partitions.
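On the VIOS, a physical tape drive is exported to a client partition in the same way as a virtual SCSI disk. A minimal sketch (device names are examples):

   # Map the physical SAS tape drive rmt0 to the virtual SCSI server adapter vhost0
   mkvdev -vdev rmt0 -vadapter vhost0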
2.16 Virtual SCSI
A virtualized implementation of the SCSI protocol is provided through the Virtual I/O Server. In large environments, the cost of adapters, switches, cables, patch panels, and so on can add up to a significant amount.
Virtual SCSI reduces the costs of provisioning storage to servers by sharing storage attachment costs among multiple partitions.
Additionally, Virtual SCSI and Virtual I/O can provide attachment to otherwise unsupported storage solutions: if the Virtual I/O Server supports the attachment of a storage resource, any client partition can access this storage by using virtual SCSI adapters.
Virtual SCSI is based on a client/server relationship, as Figure 2-4 on page 21 illustrates. The Virtual I/O Server owns the physical resources and acts as server or, in SCSI terms, target device. The client logical partitions access the virtual SCSI backing storage devices that the Virtual I/O Server provides.
Figure 2-4 Virtual SCSI
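On the VIOS, the server side of this relationship is created by mapping a backing device to a virtual SCSI server adapter. A minimal sketch (device names are examples):

   # Export the physical disk hdisk2 through the virtual adapter vhost0
   mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0
   # Confirm the mapping that the client partition will see
   lsmap -vadapter vhost0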
2.17 Virtual Ethernet
Virtual Ethernet enables inter-partition communication without the need for physical network adapters assigned to each partition. In-memory connections between partitions are established and handled at the system level (Hypervisor and operating system interaction). Figure 2-5 on page 22 shows an example of two virtual LANs that are connected using virtual Ethernet adapters. These connections exhibit characteristics that are similar to physical high-bandwidth Ethernet connections and support standard industry protocols (such as IPv4, IPv6, ICMP, and ARP).
Virtual Ethernet requires at least a POWER5 technology-based system and appropriate levels of the AIX (5.3 onwards), IBM i, or Linux operating systems.
Figure 2-5 Virtual Ethernet
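Virtual Ethernet adapters are defined through the management console; with the HMC CLI, they can also be added to a running partition. A sketch, assuming a managed system MySystem, a partition lpar1, virtual slot 10, and VLAN ID 1 (placeholder values; attribute names as used by recent HMC levels):

   # Dynamically add a virtual Ethernet adapter on VLAN 1 in slot 10
   chhwres -r virtualio -m MySystem -o a -p lpar1 --rsubtype eth -s 10 \
       -a "ieee_virtual_eth=0,port_vlan_id=1"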
2.18 Shared Ethernet Adapter
Shared Ethernet Adapters enable multiple partitions to share a physical adapter for access to external networks, which allows a partition to communicate outside of the system without dedicating a physical I/O slot and a physical network adapter to it. A Shared Ethernet Adapter (SEA) is created on the VIOS and acts as a bridge between a virtual Ethernet network and a physical network. For additional bandwidth or redundancy purposes, a SEA can utilize and aggregate multiple physical ports.
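On the VIOS, a SEA is created by bridging a physical adapter (or an aggregate of ports) to a virtual trunk adapter. A minimal sketch (adapter names are examples):

   # Bridge the physical adapter ent0 to the virtual trunk adapter ent2 on default VLAN 1
   mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1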
2.19 Integrated Virtual Ethernet
The Integrated Virtual Ethernet adapter (IVE) is a feature that is provided with POWER6 technology-based systems. It provides external network connectivity for LPARs using dedicated ports without the need for a Virtual I/O Server. IVE comprises the Host Ethernet Adapter (HEA) and its supporting software. The HEA is a physical Ethernet adapter that is connected directly to the GX+ bus of a POWER6 (or later) processor-based server to provide high-throughput, low-latency connectivity. Virtualized logical ports called Logical Host Ethernet Adapters (LHEAs) are configured and directly associated with LPARs, as Figure 2-6 shows.
Figure 2-6 Integrated Virtual Ethernet adapter
IVE can provide most of the functions of both Virtual Ethernet and Shared Ethernet Adapters at improved performance and without the resource overhead of a Virtual I/O Server. However, some limitations do exist with IVE, such as the inability to perform LPM operations.
2.20 Active Memory Sharing
Active Memory Sharing (AMS) is a feature of PowerVM that allows a set of LPARs to share system memory from a single physical memory pool (Figure 2-7 on page 24). It is analogous to the sharing of processors from a processor pool by SPLPARs; the intent is to allow memory-hungry partitions in a system to use portions of the physical memory that other partitions are not currently using.
When a partition is started, the configured memory defines the amount of logical memory assigned to it. The hypervisor then maps a range of physical memory to the partition’s logical memory. In a dedicated memory partition, this assignment remains fixed. In an AMS environment, on the other hand, the physical memory is part of a shared pool, and portions of it are mapped in turn to the logical memory of different AMS-managed LPARs. Memory savings and optimization are achieved when the overall amount of memory resources is oversubscribed, that is, when the sum of the AMS-managed LPARs' logical memory is greater than the physical memory in the pool. To accommodate this, a VIOS-configured paging device is used to page out logical memory.
Figure 2-7 Shared and dedicated memory logical partitions
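The shared memory pool itself is defined on the management console. With the HMC CLI, its size and paging devices can be inspected with the lshwres command. A sketch, assuming a managed system MySystem (a placeholder name):

   # Show the shared memory pool configuration
   lshwres -r mempool -m MySystem
   # Show the paging devices that back the pool
   lshwres -r mempool -m MySystem --rsubtype pgdev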
2.21 Active Memory Expansion
Beginning with IBM POWER7 technology-based systems, LPARs running at least AIX 6.1 TL4 SP3 can employ a new technology for expanding a system’s effective memory capacity, called Active Memory Expansion (AME). AME employs memory compression technology to transparently compress in-memory data, allowing more data to be placed into memory and thus expanding the memory capacity of the server. By utilizing Active Memory Expansion, clients can reduce their physical memory requirements, or improve system utilization and increase a system’s throughput. For details, see 8.5, “Active Memory Expansion for SAP systems” on page 84.
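On AIX, the amepat planning tool reports how compressible a running workload's memory is and recommends an expansion factor; once AME is enabled, lparstat shows the compression activity. A minimal sketch (invocation details can vary by AIX level):

   # Monitor the current workload for 10 minutes and report AME sizing recommendations
   amepat 10
   # Show memory compression statistics (2-second interval, 5 samples)
   lparstat -c 2 5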
2.22 Workload Partitions
Workload Partitions (WPARs) introduce virtualization of the operating system by providing isolated partitions of software services, applications, and administration within a single instance of the operating system. WPARs are a feature of the AIX operating system and are available as of AIX 6.1. Key benefits include rapid deployment and a reduction in the number of AIX images to maintain. For details, see Chapter 7, “Workload partitions” on page 69.
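WPARs are created and controlled with a small set of AIX commands. A minimal sketch, assuming a system WPAR named wpar1 (a placeholder name):

   mkwpar -n wpar1   # create a system WPAR
   startwpar wpar1   # start it
   lswpar            # list all WPARs and their states
   clogin wpar1      # log in to the WPAR as root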
2.23 Workload Partition Manager
The Workload Partition (WPAR) Manager is a central systems management solution that provides a set of Web-based tools and tasks that simplify the management of a server and WPAR infrastructure, including Live Application Mobility. For details, see Chapter 7, “Workload partitions” on page 69.
2.24 Live Application Mobility
Live Application Mobility (LAM) is a WPAR feature that allows the WPAR to be relocated from one LPAR to another LPAR, running on the same or on separate physical servers. This feature helps to avoid planned downtime for supported applications that is caused by scheduled hardware maintenance. LAM might also help to improve performance by moving running instances to a more powerful server. Finally, it supports higher efficiency of servers in terms of a higher overall utilization and energy efficiency by concentrating applications on the best matching server capacity.
Live Application Mobility is not a replacement for high-availability software, such as PowerHA™ or similar products.
2.25 Simultaneous Multithreading
Simultaneous multi-threading (SMT) is a hardware feature that provides the ability for a single physical processor to simultaneously dispatch instructions from more than one thread context. For POWER5 and POWER6 technology-based processors, SMT enables two parallel threads (SMT2) while for POWER7 technology-based systems, up to four parallel threads per core are available (SMT4). For these processors, when SMT mode is activated, a single physical processor appears to the operating system as two logical processors in the case of SMT2 or four logical processors in the case of SMT4, independent of the partition type. For example, in SMT2 mode, a partition with one dedicated physical processor would operate with two logical processors. Similarly, in SMT4 mode, a shared processor partition with two virtual processors would appear as a logical 8-way. SMT is active by default.
The SMT performance effect is application dependent; however, most commercial applications see a significant performance increase. For SAP environments, keeping SMT turned on is recommended because the mix of many parallel online users, RFC, and batch tasks benefits significantly from this feature.
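On AIX, the smtctl command displays and changes the SMT mode. A minimal sketch:

   # Show the current SMT mode and the logical processors
   smtctl
   # Switch to SMT4 immediately (POWER7 or later)
   smtctl -t 4 -w now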
2.26 IBM i subsystems
In the IBM i operating system, subsystems are used to group processes and assign hardware resources in a flexible, yet controlled manner. In general, subsystems share all hardware resources of the logical partition and assign them to their processes based on the run priority of the processes and activity levels in the main storage pools. Subsystem properties are configured by subsystem description objects (object type: *SBSD) and their related attributes and objects.
When running SAP on IBM i, each SAP instance runs in its own subsystem. By default, all SAP processes run with the same run priority, each instance can use all available processor resources in the logical partition, and all SAP instances share the main storage pool *BASE. However, you can configure your SAP instances to run with different run priorities based on the work process types, to limit the total processor utilization of each instance, or to use different main storage pools for one or more instances. These configuration options are discussed in section 8.4, “Main storage pools, work process priorities, and workload capping on IBM i” on page 82.
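From an IBM i command line, the subsystem descriptions and the activity within them can be inspected with standard CL commands. A minimal sketch, assuming an SAP instance subsystem named R3_00 (a placeholder; actual names depend on the installation):

   WRKSBSD SBSD(*ALL)     /* Work with all subsystem descriptions  */
   WRKACTJOB SBS(R3_00)   /* Show the active jobs of one subsystem */
   DSPSBSD SBSD(R3_00)    /* Display the subsystem description     */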
 