Chapter 7
Deploying Hosts and Clusters in VMM 2012
Chapter 4, “Setting Up and Deploying VMM 2012,” introduced Hyper-V hosts. This chapter provides more details about adding and managing Hyper-V hosts, and it explains how to use VMM to create a cluster from Hyper-V servers deployed on bare-metal machines. It shows how to add servers to trusted domains, untrusted domains, and workgroups. It explains how to work with VMware and Citrix virtualization servers in VMM and how to maintain and update Hyper-V clusters.
The chapter includes the following topics:
A new VMM installation has a single, empty host group called All Hosts. Chapter 4 explained how to create host groups and add Hyper-V hosts to them.
You can add an existing Hyper-V server to the VMM-managed fabric using the following methods, all of which install a VMM agent on the server:
The remainder of this section describes these options. A subsequent section describes another option:
The large majority of servers that you add to VMM are Hyper-V servers that are members of the same domain as the VMM server or a domain that is trusted by the domain containing the VMM server. Before you can add Hyper-V servers that are located in a trusted domain, the following steps must be taken:
Do not add a Windows server without Hyper-V if the server is not available for a reboot (see the “Adding New Hyper-V Servers” section). To add one or more Hyper-V servers or clusters to the fabric of your VMM environment, follow these steps:
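The same operation can be scripted. The following is a minimal sketch using the Add-SCVMHost cmdlet; the host group name "Production", the Run As account name "Run_As_Domain_Admin", and the host name are hypothetical examples, not values from this environment:

```powershell
# Hypothetical names: adjust the host group, Run As account, and host FQDN to your environment
PS C:\> $hostGroup = Get-SCVMHostGroup -Name "Production"
PS C:\> $runAsAccount = Get-SCRunAsAccount -Name "Run_As_Domain_Admin"
PS C:\> Add-SCVMHost -ComputerName "hv1.private.cloud" -VMHostGroup $hostGroup -Credential $runAsAccount
```

Add-SCVMHost installs the VMM agent on the target server and brings it under management, just as the wizard does.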
Some of your Hyper-V servers and clusters may be part of an untrusted AD domain. An example of this type of setup would be a hosting provider managing a customer's Hyper-V hosts from their own management domain without trusting the customer's domains.
To set up this type of Hyper-V host, take the following steps to integrate the hosts and clusters:
You can add a Hyper-V server that is configured to be in a workgroup and is unrelated to any AD domain. Often such Hyper-V hosts are in a perimeter network or DMZ. For this category, an encryption key or a CA-signed certificate is required to authenticate the Hyper-V server from the perimeter network before VMM will accept it.
A non-enterprise scenario for this type of setup is a demo laptop with Windows Server 2008 R2 SP1 plus Hyper-V in a workgroup and several Hyper-V guests, including a domain controller and VMM running on the same laptop. Before you can add the Hyper-V host, you must install the VMM agent locally from VMM media and provide an encryption key. VMM requests this key when you add a Hyper-V Server in a perimeter network. An alternative is to provide a CA-signed certificate.
To add a Hyper-V server in a perimeter network, follow these steps:
On the VMM management server, you can directly add a Windows server. A reboot of the perimeter host is not necessary. These are the steps:
In the Hosts view of one of your host groups, you can add additional fields by right-clicking one of the headers. Check one or more fields and they show up in the current view. The Group By This Column option is at the bottom; this option gives you a great deal of flexibility in the way hosts can be viewed. Figure 7.7 shows an example of the Managed Computers view.
If you want to use a Windows Server 2008 or 2008 R2 server that doesn't have the Hyper-V role enabled, you can still use all of the previous choices. The Add Resource Wizard enables the Hyper-V role for you, but the server is restarted at least once, so be careful if this server carries production workloads. Of course, according to best practices, a Hyper-V server should be fully dedicated to this role, and you should add only servers without other duties.
If you have only a few Hyper-V hosts, you don't really need to learn bare-metal deployment. You will probably be able to provision five Hyper-V hosts much faster than you will be able to set up the prerequisites and successfully test a bare-metal deployment with VMM. Nevertheless, if you are just curious about the technology or if your organization expects you to rapidly expand the number of Hyper-V hosts, then bare-metal deployment is for you.
So how does bare-metal deployment work in VMM? First, a number of prerequisites need to be met before you can actually start deploying your first Hyper-V host without ever touching the server. It is always nice to do some work up front and then let the system do all the work for you. You may finally get an opportunity to read your RSS feeds.
You can find the prerequisites for bare-metal deployment in Chapter 4, but a shortened version of them is given here. Before you can launch a bare-metal deployment, you'll need the following:
You'll need to complete quite a few steps for a successful bare-metal deployment. As with most tasks, the process can be broken down into a few basic procedures. Here is an overview of that process.
In VMM you can remotely control a host via out-of-band (OOB) management if that host uses one of the supported BMCs. Think of the concept as a computer within a computer. Even when the host is switched off, you can still independently control the host and perform operations such as power off, power on, and reset.
Microsoft supports several standards-based OOB power-management-configuration-provider options:
After a Hyper-V host has been added to VMM, you'll need to configure its BMC in one of the hardware properties, as described in the following steps:
If you can use a PXE server for bare-metal deployment, your deployment of Hyper-V hosts can be fully automated. Of course, PXE servers are out of the question in some environments. If that is the case, you can bypass a PXE server by preparing a specially configured ISO file from which your bare-metal servers can boot. You can do this manually from USB or a virtual DVD using your BMC, or you can burn that ISO file to create a bootable DVD if you have no other options. On a positive note, the entire bare-metal deployment workflow can still be triggered from VMM with all the other steps and reporting except the PXE-boot part of the workflow.
The next steps explain how to prepare a PXE server based on Windows Deployment Services in Windows Server 2008 R2 SP1. First, place the bare-metal computers you want to convert to Hyper-V servers in the same subnet as your WDS/PXE and DHCP server. You need to do this because PXE boot messages are nonroutable. Note that no customizations are necessary in WDS. You don't have to set any parameters within WDS, and you don't have to add any images or drivers. All you have to think about is where to place the WDS server, because it makes a difference whether you combine the PXE with a DHCP server or keep them separate. As a general best practice, you should keep the two separate if you want to avoid having to perform a custom configuration of your DHCP-server options.
Here is how a WDS/PXE server is configured:
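After the WDS role is installed, the server still has to be added to the VMM fabric before VMM can use it for bare-metal deployment. A sketch, assuming a WDS server named wds1.private.cloud and a Run As account named "Run_As_Domain_Admin" (both hypothetical):

```powershell
# Add the WDS/PXE server to the VMM fabric; VMM takes over all PXE handling from here
PS C:\> $runAsAccount = Get-SCRunAsAccount -Name "Run_As_Domain_Admin"
PS C:\> Add-SCPXEServer -ComputerName "wds1.private.cloud" -Credential $runAsAccount
```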
As mentioned in the overview of the bare-metal deployment steps, you need to do a number of things before the actual deployment. Your server's BIOS should be configured for Hyper-V (the usual hardware-virtualization settings), and you might want to do some configuration (enable/disable power management/C-state processor power modes) based on your own preferences.
In addition to racking and stacking your servers and setting BIOS, this is a good time to configure your BMCs:
Now you are ready to leave your servers in the dark and close the data-center door behind you. You can return to your desk and open your VMM console to remotely control the bare-metal deployment from your management computer. If you have forgotten any of the previous steps, the BMC will save you another trip to the data center. So yes, BMCs are worth investing in, even if you and the servers are in the same building.
The first task in VMM is to create one or more host profiles. A host profile is like a template that describes how to configure the bare-metal host. Take the following steps to prepare a host profile:
To better understand the exact steps of a bare-metal deployment, study the process flow. The numbers in the following steps refer to Figure 7.11.
If you click on the Jobs pane and select the related VMM job, you will be able to see which steps ran successfully and which failed.
If you have made all the preparations described in the previous paragraphs, you are ready to test your bare-metal deployment. Do not make any special configurations yet. Just see if you can successfully discover the hosts, boot from the PXE server into WinPE, and deploy the VHD with all its configuration steps. If that succeeds, you are ready to make some additional configurations to perfect your bare-metal deployment.
Before you begin the bare-metal deployment, connect to your BMC screen so you can see what happens. If the PXE boot does not kick in automatically, press F12 to manually force the server to boot from the network. In some blade-server enclosure configurations, it is possible to configure a one-time boot for PXE, which will force the server to boot from the network at the next reboot.
Let's kick off a bare-metal deployment by following these steps:
At this point, the following steps occur without your intervention.
This completes a full cycle of a bare-metal deployment of a Hyper-V server. You can deploy one server at a time; however, if the process works well, it is just as easy to deploy an entire blade enclosure with 16 blades or even more. This is a big time-saver and, most important, you'll end up with identical servers. Uniformly installed servers help you lay a solid foundation for your private cloud.
As with any generation of Windows, the operating system includes support for device drivers of the current generation of servers. As soon as a vendor introduces a new server generation with a new class of network or storage devices, you are out of luck and have to do your own plug before you play. Fortunately, VMM offers the opportunity to inject drivers into the VHD from which a Hyper-V host boots. So, if PnP does not recognize any of your server's device drivers, you'll need to do some extra preparation and testing. This is what you have to do:
Even if you can't use a WDS/PXE server, you can still fully benefit from using bare-metal deployment for rapidly deploying new Hyper-V hosts. All of the steps remain the same except you don't have to install WDS and add it to the VMM fabric. The requirement for a DHCP server still stands.
Instead of booting from a PXE server, you can boot from an ISO file that is configured to use the VMM bare-metal deployment script. This ISO file is created by a PowerShell cmdlet. Start a PowerShell session from within VMM and issue the following cmdlet, providing a path in which to create the ISO file; the directory does not have to exist yet. The command gives no output if it succeeds, so either check the directory for the existence of the ISO file or check under Jobs.
PS C:\> Publish-SCWindowsPE -ISOPath E:\ISO -UseWindowsAIK
PS C:\> dir c:\iso

    Directory: C:\iso

Mode           LastWriteTime    Length Name
----           -------------    ------ ----
-a---   1/30/2012   1:05 PM 182130688 iso
You can burn the ISO file to a disk, place it on a bootable USB drive, or connect to the ISO file through a BMC.
The process then continues as follows:
You can use custom commands in the host profile to prepare certain activities. The next PowerShell code snippet provides an example of a general command execution during the bare-metal process. In this example, a script calls an HP disk utility to delete a RAID configuration and then create a new mirror RAID configuration before kicking off the rest of the deployment.
#1 Get the resource folder location (HPArrayUtility.cr) in the VMM library
PS C:\> $resource = Get-SCCustomResource | where { $_.SharePath -eq "\\vmmserver\ProgramData\Virtual Machine Manager Library Files\HPArrayUtility.cr" }
#2 Get the host profile to be used
PS C:\> $HostProfile = Get-SCVMHostProfile -Name "host gce profile"
#3 Create and configure the script command settings
PS C:\> $scriptSetting = New-SCScriptCommandSetting
PS C:\> Set-SCScriptCommandSetting -ScriptCommandSetting $scriptSetting -WorkingDirectory "" -PersistStandardOutputPath "" -PersistStandardErrorPath "" -MatchStandardOutput "" -MatchStandardError ".+" -MatchExitCode "[1-9][0-9]*" -FailOnMatch -AlwaysReboot $false -MatchRebootExitCode "{1641}|{3010}|{3011}" -RestartScriptOnExitCodeReboot $false
#4 Run hpacucli.exe with a command to delete the RAID configuration
PS C:\> Add-SCScriptCommand -VMHostProfile $HostProfile -Executable "hpacucli.exe" -ScriptCommandSetting $scriptSetting -CommandParameters "ctrl slot=1 delete forced" -TimeoutSeconds 120 -LibraryResource $resource
#5 Run hpacucli.exe with a command to create the new mirror RAID configuration
PS C:\> Add-SCScriptCommand -VMHostProfile $HostProfile -Executable "hpacucli.exe" -ScriptCommandSetting $scriptSetting -CommandParameters "ctrl slot=1 create type=ld drives=1:1,1:2 raid=1" -TimeoutSeconds 120 -LibraryResource $resource
After the new host job finishes, the admin can decide to run one or more post-deployment general command executions (GCEs), for example, to configure NIC teaming. These GCEs can be initiated from the Run Command action available in the user interface, but they can also be scripted and started with the Invoke-SCScriptCommand PowerShell cmdlet.
If you can perform a bare-metal deployment without any errors, hats off to you! In practice, you will stumble into one or more gotchas. The following sections provide some hints and guidance for common situations.
When you're configuring a WDS server for PXE, don't configure it the way you would for other deployment purposes. VMM takes full control of the whole process, so all you need to do is a basic WDS role installation. Don't make any configuration changes or add any images.
A PXE boot can fail for many reasons, including the following:
Make sure you have installed the WDS server according to the detailed requirements described in this chapter.
When the server boots, either press F12 or use your BMC to start it from the network. If everything fails, resort to the procedure explained under “Creating an ISO File.”
You might encounter a situation in which you can boot from PXE and an IP address is provisioned to the server, but the process halts when WinPE kicks in. When the provisioning process halts, you'll probably get a message like "Synchronizing Time with Server." After this, error 803d0010 is displayed, prompting you to check X:\VMM\vmmAgentPE.exe.etl. If you are unlucky, this file will be full of blank entries.
A likely reason for the process stall is that WinPE does not have a suitable network driver to continue the installation. Press Shift+F10 to open a command prompt and enter ipconfig /all to check for a network configuration.
The ISO-boot method described in "Creating an ISO File" will not help you here, because that method also requires a network connection after WinPE boots.
You need to add drivers to the WinPE image that is taken from the Windows Automated Installation Kit location by VMM and deployed to the WDS/PXE server. This involves the following process:
C:\Program Files\Windows AIK\Tools\PETools\amd64
First, check whether the custom drivers can be found in the library with the tags you have given them:
PS C:\> $tags = "BL460G6"
PS C:\> Get-SCDriverPackage | where { $_.Tags -match $tags } | Select-Object Class, INFFile, Type, Tags, Provider, Version, Date | ft -auto

Class       INFFile    Type Tags      Provider                Version
-----       -------    ---- ----      --------                -------
system      evbd.inf   INF  {BL460G6} Hewlett-Packard Company 6.2.16.0
net         bxnd.inf   INF  {BL460G6} Hewlett-Packard Company 6.2.9.0
SCSIAdapter bxois.inf  INF  {BL460G6} Hewlett-Packard Company 6.2.7.0
system      bxdiag.inf INF  {BL460G6} Hewlett-Packard Company 6.2.3.0
You can see some more detail by issuing the following command:
PS C:\> $tags = "BL460G6"
PS C:\> Get-SCDriverPackage | where { $_.Tags -match $tags }

[output shows only one driver]
PlugAndPlayIDs    : {B06BDRV\L2ND&PCI_164A14E4&SUBSYS_3101103C, B06BDRV\L2ND&PCI_16AA14E4&SUBSYS_3102103C, B06BDRV\L2ND&PCI_164A14E4&SUBSYS_3106103C, B06BDRV\L2ND&PCI_16AA14E4&SUBSYS_310C103C…}
Tags              : {BL460G6}
TagsString        : BL460G6
Type              : INF
INFFile           : bxnd.inf
Date              : 2/4/2011 12:00:00 AM
Version           : 6.2.9.0
Class             : net
Provider          : Hewlett-Packard Company
Signed            : True
Signer            : Microsoft Windows Hardware Compatibility Publisher
BootCritical      : False
Release           :
State             : Normal
LibraryShareId    : 2dda2b24-bf52-4308-a4df-8c192a097e52
SharePath         : \\vmmlib1.private.cloud\SCVMMLibrary1\Drivers\BL460G6\bxnd.inf
Directory         : \\vmmlib1.private.cloud\SCVMMLibrary1\Drivers\BL460G6
Size              : 6153482
IsOrphaned        : False
FamilyName        :
Namespace         : Global
ReleaseTime       :
HostVolumeId      : 9eecdae3-c395-4fd1-a17b-0cd07179eac7
HostVolume        :
Classification    :
HostId            : 64f2e51f-469f-4d6a-9d28-30c06a241fc9
HostType          : LibraryServer
HostName          : vmmlib1.private.cloud
VMHost            :
LibraryServer     : vmmlib1.private.cloud
Cloud             :
LibraryGroup      :
GrantedToList     : {}
UserRoleID        : 00000000-0000-0000-0000-000000000000
UserRole          :
Owner             :
ObjectType        : DriverPackage
Accessibility     : Public
Name              : bxnd.inf
IsViewOnly        : False
Description       :
AddedTime         : 1/25/2012 9:06:33 PM
ModifiedTime      : 1/30/2012 12:23:34 PM
Enabled           : True
MostRecentTask    :
ServerConnection  : Microsoft.SystemCenter.VirtualMachineManager.Remoting.ServerConnection
ID                : ddf2560a-c718-41c4-bebb-f9eb260f00ff
MarkedForDeletion : False
IsFullyCached     : True
After you have verified the tags of the custom drivers in the library, you are ready to run a script to prepare a winpe.wim image with the custom drivers injected.
#1 Set the source WIM file and the tag for matching drivers in the VMM library
#  Master WIM  = C:\Program Files\Windows AIK\Tools\PETools\amd64\winpe.wim
#  Driver tag = winpe
PS C:\> $wim = "C:\Program Files\Windows AIK\Tools\PETools\amd64\winpe.wim"
PS C:\> $tags = "winpe"
#2 Prepare directories
PS C:\> $winpesrcdir = $wim
PS C:\> $workingdir = $env:temp + "\" + [System.Guid]::NewGuid().ToString()
PS C:\> $mountdir = $workingdir + "\mount"
PS C:\> $wimfile = $workingdir + "\winpe.wim"
PS C:\> mkdir $workingdir
PS C:\> mkdir $mountdir
#3 Copy the default WIM file and mount it using DISM
PS C:\> copy $winpesrcdir $workingdir
PS C:\> dism /Mount-Wim /WimFile:$wimfile /Index:1 /MountDir:$mountdir
#4 Find the path of each driver that matches the tag and inject it into the mounted WIM using DISM
PS C:\> $drivers = Get-SCDriverPackage | where { $_.Tags -match $tags }
PS C:\> foreach ($driver in $drivers) {
            $path = $driver.SharePath
            dism /Image:$mountdir /Add-Driver /Driver:$path
        }
#5 Commit the changes
PS C:\> dism /Unmount-Wim /MountDir:$mountdir /Commit
#6 Republish the WIM file to every PXE server managed by VMM
PS C:\> Publish-SCWindowsPE -Path $wimfile
#7 Clean up
PS C:\> del $wimfile
PS C:\> rmdir $mountdir
PS C:\> rmdir $workingdir
Step 4 is the part of the script where the drivers are actually injected into the mounted WIM file using DISM:
[output shows only one driver]

Deployment Image Servicing and Management tool
Version: 6.1.7600.16385

Image Version: 6.1.7600.16385

Found 1 driver package(s) to install.
Installing 1 of 1 - \\vmmlib1.private.cloud\SCVMMLibrary1\Drivers\DL460G6\Broadcom10G\bxnd.inf: The driver package was successfully installed.
The operation completed successfully.

Deployment Image Servicing and Management tool
Version: 6.1.7600.16385
If you are using some sort of hardware-virtualization technology, such as HP Virtual Connect, chances are that you not only configured virtual MAC and WWN addresses but also virtualized your servers' unique identifiers. If so, you will have a logical serial number and a logical UUID, as shown in Figure 7.17.
Because the bare-metal deployment process looks at the physical UUID and not the virtual one, the only way to successfully run the bare-metal deployment job is to use PowerShell. You can start the process in the GUI; but before you kick it off, click the button for viewing the PowerShell command. Save the file, replace the SMBiosGuid with the logical UUID, and copy and paste the cmdlet into a VMM PowerShell command session.
PS C:\> $HostGroup = Get-SCVMHostGroup -ID "0e3ba228-a059-46be-aa41-2f5cf0f4b96e" -Name "All Hosts"
PS C:\> $RunAsAccount = Get-SCRunAsAccount -Name "Run_As_HP_iLO"
PS C:\> $HostProfile = Get-SCVMHostProfile -ID "d3982328-2a4b-48d9-8eaa-ad5129e8cc5e"
PS C:\> New-SCVMHost -ComputerName "hv1" -VMHostProfile $HostProfile -VMHostGroup $HostGroup -BMCAddress "172.16.3.22" -SMBiosGuid "41AF9DB5-1AF3-4369-9805-60F8EDE56C51" -BMCRunAsAccount $RunAsAccount -RunAsynchronously -BMCProtocol "IPMI" -BMCPort "623" -ManagementAdapterMACAddress "00-17-A4-77-00-60" -LogicalNetwork "VM1" -Subnet "192.168.1.0/24" -IPAddress "192.168.1.51"
A bare-metal deployment will fail if the name of the new server already exists in Active Directory. To sidestep the issue, you can skip the AD check. In the GUI, in the Deployment Customization section of the host profile, check the Skip Active Directory check box for the computer name.
If you are using the PowerShell script, just add the following to the cmdlet:
-BypassADMachineAccountCheck
A bare-metal deployment will fail when the name of the server already exists in VMM. In that case, double-check the name and remove it from VMM if appropriate. Removing a server from VMM requires elevated privileges. At any rate, be careful when you have to do this.
If you add a Hyper-V host that is part of a cluster, VMM automatically adds all nodes of that cluster and installs a VMM agent on each. Once the cluster has been inventoried, you can view its properties. Under the General tab, you will not only find the cluster name and host-group location, but also its cluster reserve state. The default is 1, which means that the resources of one cluster node are reserved for high availability. If you start with a one-node cluster and the cluster reserve is 1, the cluster reserve state will not show OK. Of course, for testing purposes, you can change this value to 0. In general, though, you should have one cluster reserve per eight cluster nodes. This means that for a 16-node cluster, you follow best practices by changing this value to 2.
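The cluster reserve can also be changed from PowerShell. A sketch, assuming the cluster name from the examples later in this chapter and that the Set-SCVMHostCluster cmdlet's -ClusterReserve parameter is available in your VMM build:

```powershell
# Raise the cluster reserve to 2, the best-practice value for a 16-node cluster
PS C:\> $cluster = Get-SCVMHostCluster -Name "HVCluster1.private.cloud"
PS C:\> Set-SCVMHostCluster -VMHostCluster $cluster -ClusterReserve 2
```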
You can obtain more information about the cluster from the tabs of the Properties screen, including the following:
Status
The status displays information such as whether a cluster validation test was run, whether the test was successful or not, and the state of cluster core resources and cluster services.
Available Storage
If the cluster has access to disks that are still in the available storage section, you'll see those disks here. As a best practice, you should have at least one small, shared disk available for testing/validation purposes.
Shared Volumes
A similar view (Figure 7.18) is available for disks that are shared volumes of the cluster. You can also add available disks to the list of Cluster Shared Volumes, remove disks, or convert shared volumes to available storage. Naturally, these tasks are performed only when these disks hold no active virtual machines.
Virtual Networks
This tab shows which virtual networks are available to the entire cluster. If you make a spelling error in the name of a virtual network on one of the nodes in the cluster, the virtual network will not show up here.
Custom Properties
You can add additional custom properties to the cluster for further identification or custom sorting and grouping.
Once the cluster is managed under VMM, you can perform a number of actions against the cluster:
Create Service
You can create a new service on this cluster, based on a service template. A service can be a single- or multitier combination of virtual machines with applications (see Chapter 8).
Create Virtual Machine
You can create a new VM based on an existing VM, VM template, or VHD. You can also create a new VM with a blank VHD.
Refresh
You can reread all the properties of the cluster and cluster nodes to capture any changes since the last update. The refresh interval is 10 minutes.
Optimize Hosts
You can begin a dynamic resource-optimization task for balancing guests across the cluster.
Move to Host Group
You can move a cluster to another host group.
Uncluster
You can perform what Failover Cluster Manager calls Destroy Cluster. You cannot uncluster if there are still active resources on one of the cluster nodes. If there are, the job will fail with Error 25330 (Figure 7.19).
Add Cluster Node
Expand the cluster to the current maximum of 16 nodes. The candidate node must have access to the same shared storage as the other cluster nodes (so that it can be a possible owner of the cluster disks) and must be validated before joining the cluster.
This option does not support clusters with asymmetric storage (where not all cluster disks are presented to all cluster nodes, which can be useful in multi-site clusters). Asymmetric storage is a feature that was introduced in Windows Server 2008 R2 SP1.
Validate Cluster
Revalidate the cluster. The validation status will appear under Status on the cluster's Properties screen.
Remove
Remove the cluster from VMM management. The cluster will remain unaltered, but VMM will uninstall its agents from all cluster nodes.
Properties
View the cluster's properties, as discussed earlier.
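Several of these actions can be scripted as well. A minimal sketch of the Refresh and Remove actions; the Run As account name is a hypothetical example:

```powershell
PS C:\> $cluster = Get-SCVMHostCluster -Name "HVCluster1.private.cloud"
# Refresh: reread the cluster and node properties without waiting for the next interval
PS C:\> Read-SCVMHostCluster -VMHostCluster $cluster
# Remove: take the cluster out of VMM management; the cluster itself is left intact
PS C:\> $runAsAccount = Get-SCRunAsAccount -Name "Run_As_Domain_Admin"
PS C:\> Remove-SCVMHostCluster -VMHostCluster $cluster -Credential $runAsAccount
```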
If you've followed along with the discussion, you have added one or more hosts and clusters and brought existing hosts and clusters under VMM management. As soon as you have one or more Hyper-V machines available, you can build clusters from them. Before you create a cluster, you need to prepare all the network and storage connections because VMM validates the potential cluster nodes before they can join the cluster. It is recommended that all cluster nodes be as similar as possible, including but not limited to service packs, Windows updates, and hotfixes.
This is how a cluster can be created from VMM:
You can also add to existing Hyper-V clusters. Take these steps to add a node to an existing cluster:
An alternative way to add a node to an existing cluster is to drag the node to the cluster. The Add Node To Cluster Wizard kicks in and you will only have to provide credentials to start the job.
This section deals with dynamic optimization and power optimization, two cluster-optimization techniques that keep the cluster load-balanced and can reduce power consumption.
Dynamic optimization refers to the built-in support for load-balancing a cluster. It no longer depends on performance and resource optimization (PRO) and integration with Operations Manager, as was the case with VMM 2008 R2. Power optimization can help conserve power by shutting down hosts that are not running any workloads; hosts are turned back on when workload activity increases. Power optimization requires a BMC in the virtualization host.
Both dynamic optimization and power optimization work with Hyper-V, VMware ESX, and XenServer clusters. The respective automated VM migration functionality of the different hypervisors is used to either balance workloads or evacuate a host so that a host can be powered off.
Dynamic optimization and power optimization are configurable on a per-host-group basis. To view dynamic optimization and power optimization in action, you must deploy and run virtual machines on the host cluster.
Dynamic optimization in VMM builds on the host reserve settings from VMM 2008 R2. As in that version, a host receives its host reserve settings when it is added to a host group.
A host reserve is set by default on the All Hosts host group and can be changed for all underlying host groups. It is also possible to create an underlying host group with different host reserve settings. In such cases, you must deselect “Use the host reserves settings from the parent host group” on the Host Reserves tab of the subgroup's Properties screen. For instance, by default 10 percent of the CPU is reserved, meaning that a host is available for placement until 90 percent of the CPU is utilized.
A different host reserve can also be set at the host level to effectively override the settings of the host group. If no override is set, the host receives the settings of the host group when the host is placed under management by VMM. Also, when a host is moved from one host group to another, the host will inherit the host reserve settings of that group, unless you have set specific host reserves on the host itself.
You can check the current host reserve settings for any specific host or host group by issuing the following PowerShell commands:
PS C:\> Get-SCVMHost -ComputerName "hvserver1"

RunAsAccount              : Run_As_Domain_Admin
OverallStateString        : OK
OverallState              : OK
CommunicationStateString  : Responding
CommunicationState        : Responding
Name                      : hvserver1.private.cloud
FullyQualifiedDomainName  : hvserver1.private.cloud
ComputerName              : hvserver1
DomainName                : private.cloud
Description               :
RemoteUserName            :
OverrideHostGroupReserves : True
CPUPercentageReserve      : 30
NetworkPercentageReserve  : 0
DiskSpaceReserveMB        : 10240
MaxDiskIOReservation      : 10000
MemoryReserveMB           : 256
In the previous example, the host reserve has an override for CPUPercentageReserve. To check for a specific host group, use this command:
PS C:\> Get-SCHostReserve -VMHostGroup Production

CPUReserveOff                   : False
CPUPlacementLevel               : 40
CPUStartOptimizationLevel       : 30
CPUVMHostReserveLevel           : 15
MemoryReserveOff                : False
MemoryReserveMode               : Megabyte
MemoryPlacementLevel            : 1024
MemoryStartOptimizationLevel    : 1024
MemoryVMHostReserveLevel        : 1024
DiskSpaceReserveOff             : False
DiskSpaceReserveMode            : Percentage
DiskSpacePlacementLevel         : 10
DiskSpaceVMHostReserveLevel     : 10
DiskIOReserveOff                : False
DiskIOReserveMode               : IOPS
DiskIOPlacementLevel            : 1000
DiskIOStartOptimizationLevel    : 1000
DiskIOVMHostReserveLevel        : 1000
NetworkIOReserveOff             : False
NetworkIOReserveMode            : Percentage
NetworkIOPlacementLevel         : 0
NetworkIOStartOptimizationLevel : 0
NetworkIOVMHostReserveLevel     : 0
Name                            : Production
ReadOnly                        : False
ConnectedHostGroup              : All Hosts\Production
OwnerHostGroup                  : All Hosts\Production
ServerConnection                : Microsoft.SystemCenter.VirtualMachineManager.Remoting.ServerConnection
ID                              : aab05537-0406-49d5-9670-5f0a938196b7
IsViewOnly                      : False
ObjectType                      : HostReserveSettings
MarkedForDeletion               : False
IsFullyCached                   : True
As you can see from the output, there are several settings for optimization levels. The value for a reserve is also the starting point for dynamic optimization. In other words, if the MemoryPlacementLevel is at 1,024 MB (the default), then MemoryStartOptimizationLevel is also set at 1,024 MB, unless it is manually set to another value. The Set-SCHostReserve command is used to set host reserves such as placement levels and start optimization levels.
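As a sketch of how such a change might look, the following raises the memory placement and start-optimization levels for a host group. The host group name is hypothetical, and the parameter names are inferred from the property names in the Get-SCHostReserve output above; verify them with Get-Help Set-SCHostReserve before use:

```powershell
# Hypothetical values: raise memory placement and optimization thresholds to 2,048 MB
PS C:\> Set-SCHostReserve -VMHostGroup (Get-SCVMHostGroup -Name "Production") -MemoryPlacementLevel 2048 -MemoryStartOptimizationLevel 2048
```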
Dynamic optimization is also a property of the host group that can be configured to migrate virtual machines within host clusters with a specified frequency and aggressiveness. If no overrides are set, the host group receives the Dynamic Optimization settings from the parent host group.
By default, the dynamic optimization settings are configured with medium aggressiveness and a frequency of 10 minutes. Aggressiveness can be set in five steps from Low to High (with an intermediate step between Low and Medium and another between Medium and High). Aggressiveness defines how eagerly VMM looks for optimization opportunities:
In Figure 7.22, the Dynamic Optimization settings have been changed to a more aggressive setting but with an hourly interval.
Frequency determines how often VMM automatically migrates virtual machines to balance the load; the default is every 10 minutes, and it can be set to a maximum of 2,440 minutes (just over 40 hours). You should test these settings carefully before you automate dynamic optimization. The current version of Hyper-V does not support simultaneous live migrations; migrations are queued, and each waits until the previous one has finished. If you set the interval too short, migrations from the previous optimization run may not have finished yet.
For dynamic optimization to work, you need clusters with two or more nodes. Any hosts or clusters that do not support migration are ignored. An additional requirement is that VMs must be configured to be highly available and placed on shared storage.
If you want to test dynamic optimization, you can start the process on demand by right-clicking one of your clusters and selecting Optimize Hosts from the context menu.
VMM calculates resource optimizations and proposes a migration for one or more VMs. In the example in Figure 7.23, host hv2 is running all the VMs and VMM proposes to move five VMs to host hv1.
The resulting PowerShell script looks like this:
PS C:\> $hostCluster = Get-SCVMHostCluster -Name "HVCluster1.private.cloud"
PS C:\> Start-SCDynamicOptimization -VMHostCluster $hostCluster
Under Jobs, you can easily track how VMM optimizes the cluster and which VMs have moved (Figure 7.24).
If no resource optimizations are available when you run this command, you will be notified that the host cluster is either within desired resource-usage limits or no further optimization is possible.
Power optimization is functionally part of dynamic optimization and can only be enabled for a host group that has been configured for dynamic optimization. If you enable power optimization, you effectively allow VMM to power VM hosts off and on based on their actual usage. Before a host is powered off, all VMs running on it are migrated to the remaining nodes in the cluster.
By default, power optimization is switched off. If you switch it on, it operates 24 hours a day unless you change the schedule.
There is a difference for power optimization, depending on whether the clusters are created by VMM or outside VMM. For clusters created by VMM, you can set up power optimization for clusters of four or more nodes. For clusters created outside VMM, you need five or more nodes, as shown in Table 7.1.
Nodes Powered Off | Cluster Created in VMM | Cluster Created Outside VMM |
0 | Up to 3 nodes | Up to 4 nodes |
1 | 4 or 5 nodes | 5 or 6 nodes |
2 | 6 or 7 nodes | 7 or 8 nodes |
3 | 8 or 9 nodes | 9 or 10 nodes |
If the number of cluster nodes is sufficient, you can enable power optimization using the following steps:
Traditionally, updating clusters has been a laborious task. Windows Server Update Services (WSUS) and System Center Configuration Manager (SCCM or ConfigMgr) are still not cluster-aware, so you cannot risk updating cluster nodes automatically. If you update a node in a Hyper-V cluster, it reboots without migrating its VMs to another node. Simultaneous reboots of more nodes than the cluster can tolerate (node-majority or other quorum requirements) can take the entire cluster down.
Fortunately, VMM introduces a workflow for updating a Hyper-V cluster based on the assumption that the cluster and its VMs should stay highly available. As when remediating a single Hyper-V host, described in Chapter 5, “Understanding the VMM Library,” VMM uses a supported WSUS server as the source for the update catalog and update baselines. The following steps assume that the update server is already part of the VMM fabric.
These are the steps for setting up cluster remediation:
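Cluster remediation can also be started from PowerShell. The following is a hedged sketch: Start-SCComplianceScan and Start-SCUpdateRemediation are VMM 2012 cmdlets, but the exact parameter usage shown here is an assumption, so check Get-Help for both cmdlets before relying on it:

```powershell
# Scan the cluster nodes for compliance against their assigned baselines,
# then remediate the whole cluster (VMM live-migrates VMs node by node)
PS C:\> $cluster = Get-SCVMHostCluster -Name "HVCluster1.private.cloud"
PS C:\> Get-SCVMHost -VMHostCluster $cluster |
            ForEach-Object { Start-SCComplianceScan -VMMManagedComputer $_.ManagedComputer }
PS C:\> Start-SCUpdateRemediation -VMHostCluster $cluster -RunAsynchronously
```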
VMM allows you to deploy and manage virtual machines on VMware ESX hosts, as you can with any supported hypervisor. The only difference is that you cannot bare-metal-deploy an ESX host, because the chosen native VHD deployment model is not applicable to VMware hosts. Cluster remediation also cannot be applied, because WSUS does not support updating non-Windows hosts. This should not keep you from adding your VMware ESX hosts to VMM, because all the other deployment and management capabilities fully apply to ESX hosts, including creating clouds, creating VMs, and using PowerShell scripting. You can even deploy a multitiered service across multiple hypervisors. Service modeling is described in subsequent chapters.
Because you can't create new VMware ESX hosts, you must first add existing ESX hosts and clusters. Check the requirements in Chapter 4 before you begin, because VMM no longer supports some older versions of ESX and vCenter.
VMM integrates directly with VMware vCenter Server. As soon as vCenter is added to the VMM fabric, you can start adding ESX hosts and clusters.
Let's look at some of the differences between managing VMware ESX hosts in VMM 2012 and VMM 2008 R2.
When managing VMware ESX hosts, VMM 2012 supports the following features:
Here are some limitations and unsupported features:
A VM on an ESX host managed by VMM can have the capabilities listed in Table 7.2.
Category | Minimum | Maximum |
Processor Range | 1 | 8 |
Memory Range | 4 MB | 255 GB |
DVD Range | 1 | 4 |
Hard-Disk Count | 0 | 255 |
Disk Size Range | 0 MB | 256 GB |
Network Adapters | 0 | 64 |
Before you add a VMware vCenter Server to VMM, check the requirements in Chapter 4. Not all versions are supported.
Secure Sockets Layer (SSL) is used for communication between the VMM management server and the vCenter Server. Either a third-party or a self-signed certificate can be used; self-signed certificates must be stored in the Trusted People certificate store.
Before adding the vCenter Server, create a Run As account with the correct Active Directory credentials. This account must have administrative privileges on vCenter Server. A local administrator account on the vCenter Server is also supported.
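If the Run As account does not exist yet, you can create it from PowerShell first. A minimal sketch; the account name matches the one used in the Add-SCVirtualizationManager example, and Get-Credential prompts interactively for the vCenter administrator credentials:

```powershell
# Create a Run As account holding the vCenter administrator credentials
PS C:\> $cred = Get-Credential
PS C:\> New-SCRunAsAccount -Name "Run_As_Domain_Admin" -Credential $cred
```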
Adding a vCenter Server does not automatically add VMware ESX hosts. That requires an additional step.
These are the steps to add a VMware vCenter server:
The PowerShell command to add a VMware vCenter server is
PS C:\> $certificate = Get-SCCertificate -ComputerName "vcenter1.private.cloud" -TCPPort 443
PS C:\> $runAsAccount = Get-SCRunAsAccount -Name "Run_As_Domain_Admin"
PS C:\> Add-SCVirtualizationManager -ComputerName "vcenter1.private.cloud" -TCPPort "443" -Credential $runAsAccount -Certificate $certificate -EnableSecureMode $true -RunAsynchronously
To obtain some extra information about the imported virtualization manager, execute the following code:
PS C:\> Get-SCVirtualizationManager -ComputerName vcenter1.private.cloud

Name                           : vcenter1.private.cloud
SecureMode                     : True
SslTcpPort                     : 443
SslCertificateHash             : 01F5675198A1E88C41228A3F484AC76118D91B0E
NumberOfManagedHosts           : 0
ManagedHosts                   : {}
UnmanagedHosts                 : {}
UnmanagedHostClusters          : {}
NumberOfManagedVirtualMachines : 0
Status                         : Responding
StatusString                   : Responding
Version                        : 4.1.0
UserName                       : administrator
Domain                         : private
RunAsAccount                   : Run_As_Domain_Admin
ServerConnection               : Microsoft.SystemCenter.VirtualMachineManager.Remoting.ServerConnection
ID                             : 09447c1c-c63e-4d18-97fc-90abf9c9981f
IsViewOnly                     : False
ObjectType                     : VirtualizationManager
MarkedForDeletion              : False
IsFullyCached                  : True
Now that the VMware vCenter Server has been successfully added to VMM, you can add VMware ESX/ESXi hosts and clusters. Follow these steps:
To update the ESX-host status, follow these steps:
You have seen how to integrate both Microsoft and VMware hypervisors and how to bring these hosts under management in VMM. In VMM 2012, Microsoft introduces a third supported hypervisor, Citrix XenServer. XenServer host management is made possible by a XenServer integration pack written by Citrix. Without depending on Citrix XenCenter, this hypervisor integrates directly with the VMM management and library servers (Figure 7.30).
VMM 2012 offers you the following functionality in combination with Citrix XenServer:
You need to be aware of these limitations when integrating XenServer in your private cloud with VMM 2012:
Table 7.3 shows supported configurations for VMs on XenServer hosts.
Category | Minimum | Maximum |
Processor Range | 1 | 8 |
Memory Range | 16 MB | 32 GB |
DVD Range | 1 | 4 |
Hard-Disk Count | 0 | 7 |
Disk Size Range | 0 MB | 2,040 GB |
Network Adapters | 0 | 7 |
Integrating Citrix XenServer into your VMM environment is supported for specific versions of XenServer. The requirements for Citrix XenServer integration can be found in Chapter 4.
Unlike the case with Hyper-V, you cannot create a new XenServer host by means of bare-metal deployment. The XenServer hosts or pools (clusters) must already exist before you can add them to the VMM fabric.
Before you can add a XenServer host or cluster (called a pool in XenServer terminology), you need to prepare XenServer by installing the Microsoft System Center integration pack in Domain 0 (Dom0). Like the first or parent partition in Hyper-V, Dom0 is a privileged partition with full access to the hardware. Dom0 starts automatically when XenServer boots, and you use its console to configure XenServer for VMM integration.
The Microsoft System Center integration pack can be downloaded from MyCitrix. If you don't have an account for MyCitrix, register for one at
www.citrix.com/english/mycitrix/
Follow these steps to install the integration pack:
# mkdir -p /mnt/tmp
# mount <path_to_Integration_Pack_iso>/XenServer-6.0.0.2-integration-suite.iso /mnt/tmp -o loop
OR
# mount /dev/dvd /mnt/tmp
mount: block device /dev/dvd is write-protected, mounting read-only
# cd /mnt/tmp
# ls
install.sh
ms-scx-1.0.0-32074.i386.rpm
openpegasus-2.10.0-xs1.i386.rpm
xenserver-vnc-control-6.0.0-52391p.noarch.rpm
xs-cim-cmpi-6.0.0.52271p.i386.rpm
XS-PACKAGES
XS-REPOSITORY
# ./install.sh
You can check the installed version of the integration pack using a XenServer command:
xe host-param-get uuid=<host-uuid> param-name=software-version
After installing the XenServer integration pack, you can verify that the installation was correct by using winrm. On the VMM management server, open a PowerShell console from within VMM (so the virtualmachinemanager module is already loaded).
PS C:\> winrm enum http://schemas.citrix.com/wbem/wscim/1/cim-schema/2/Xen_HostComputerSystem -r:https://<hostname>:5989 -encoding:utf-8 -a:basic -u:<XenUser> -p:<Password> -skipCACheck

Xen_HostComputerSystem
    AvailableRequestedStates = 3, 10, 4
    CN = xenserver1.private.cloud
    Caption = XenServer Host
    CommunicationStatus = null
    CreationClassName = Xen_HostComputerSystem
    Dedicated = 2
    Description = Default install of XenServer
    DetailedStatus = null
    ElementName = xenserver1
    EnabledDefault = 2
    EnabledState = 2
    Generation = null
    HealthState = 5
    IdentifyingDescriptions = IPv4Address, ProductBrand, ProductVersion, BuildNumber
    InstallDate = null
    InstanceID = null
    Name = 420b6087-a97e-403d-b626-0c3f68dc97a9
    NameFormat = Other
    OperatingStatus = null
    OperationalStatus = 2
    OtherConfig = agent_start_time=1327743281., boot_time=1327743234., iscsi_iqn=iqn.2011-12.com.example:5e105dfa
    OtherDedicatedDescriptions = null
    OtherEnabledState = null
    OtherIdentifyingInfo = 192.168.1.71, XenServer, 6.0.0, 50762p
    PowerManagementCapabilities = null
    PrimaryOwnerContact = null
    PrimaryOwnerName = null
    PrimaryStatus = null
    RequestedState = 2
    ResetCapability = null
    Roles = null
    StartTime = 2012-01-28T09:33:54Z
    Status = OK
    StatusDescriptions = null
    TimeOfLastStateChange = 2011-12-11T09:43:27Z
    TimeOffset = 3600
    TransitioningToState = 12
If the XenServer host does not have an FQDN, VMM imports the XenServer based on its IP address. If VMM displays a XenServer host with only its IP address, perform the following steps to correct this:
# hostname -f
xenserver1
# hostname -f
xenserver1.private.cloud
# cd /etc/xensource
# rm xapi-ssl.pem
rm: remove regular file ‘xapi-ssl.pem’? y
Before you add your first XenServer host to VMM, prepare a Run As account with credentials for root access to the XenServer hosts. When you are ready, perform these steps to add a XenServer host or cluster:
The alternative route is to use PowerShell for adding a Citrix XenServer host:
PS C:\> $RunAsAccount = Get-SCRunAsAccount -Name "Run_As_XenServer"
PS C:\> $HostGroup = Get-SCVMHostGroup -ID "35fc20ae-fa7c-4395-a55a-0591fe1cf68b" -Name "Development"
PS C:\> $Certificate = Get-SCCertificate -ComputerName "xenserver1.private.cloud" -TCPPort 5989
PS C:\> Add-SCVMHost -ComputerName "xenserver1.private.cloud" -TCPPort "5989" -EnableSecureMode $true -Credential $RunAsAccount -VMHostGroup $HostGroup -XenServerHost -RunAsynchronously -Certificate $Certificate
You are now ready to manage XenServer hosts and clusters as you would any other virtualization host in VMM. Your private cloud has become a truly heterogeneous cloud, and you are ready to deploy virtual machines and services to any of the hypervisors.
This chapter discussed the deployment of hosts and clusters. Hyper-V hosts that are joined to an untrusted domain or located in a perimeter network need to be treated differently, and the VMM Add Host Wizard makes this task easy. A more ambitious task is the bare-metal deployment method, which transforms servers without an OS into fully operational Hyper-V servers. Building on the previous chapter about integrating storage and networking, you can automate the creation of Hyper-V clusters and configure dynamic optimization and power optimization.
The chapter also showed you how to add a VMware vCenter Server, including VMware ESX hosts and clusters. The sections on vSphere explained which features are supported, vSphere's capabilities, and its limitations.
Finally, the newly supported Citrix XenServer host was introduced. You learned how to add XenServer hosts and pools without using XenCenter, and you saw the supported features, capabilities, and limitations of this hypervisor platform under the wings of VMM 2012.