Now that you've taken a closer look at VMware vSphere and its suite of applications in Chapter 1, “Introducing VMware vSphere 5.5,” it's easy to see that VMware ESXi is the foundation of vSphere.
Although the act of installation can be relatively simple, understanding the deployment and configuration options requires planning to ensure a successful, VMware-supported implementation.
In this chapter, you will learn to
Deploying VMware vSphere is more than just virtualizing servers. The effects of storage, networking, and security in a vSphere deployment are equally as significant as they are with the physical servers themselves. As a result of this broad impact on numerous facets of your organization's IT, the process of planning the vSphere deployment becomes even more important. Without the appropriate planning for your vSphere implementation, you run the risk of configuration problems, instability, incompatibilities, and a diminished return on investment.
Your planning process for a vSphere deployment involves answering a number of questions (please note that this list is far from comprehensive):
In some cases, the answers to these questions will determine the answers to other questions. After you have answered these questions, you can then move on to more difficult issues. These center on how the vSphere deployment will impact your staff, your business processes, and your operational procedures. Although still important, we're not going to help you answer those sorts of questions here; instead, let's just focus on the technical issues.
VSPHERE DESIGN IS A TOPIC ON ITS OWN
The first section of this chapter barely scratches the surface of what is involved in planning and designing a vSphere deployment. vSphere design is significant enough a topic that it warranted its own book: VMware vSphere Design, Second Edition (Sybex, 2013). If you are interested in a more detailed discussion of design decisions and design impacts, that's the book for you.
In the next few sections, we'll discuss the three major questions that we outlined previously that are a key part of planning your vSphere deployment: compute platform, storage, and network.
The first major decision to make when planning to deploy vSphere is choosing a hardware, or “compute,” platform. Compared to traditional operating systems like Windows or Linux, ESXi has more stringent hardware restrictions. ESXi won't necessarily support every storage controller or every network adapter chipset available on the market. Although these hardware restrictions do limit the options for deploying a supported virtual infrastructure, they also ensure that the hardware has been tested and will work as expected when used with ESXi. Not every vendor or white-box configuration can play host to ESXi, but the list of supported hardware platforms continues to grow as VMware and hardware vendors test newer models.
You can check for hardware compatibility using the searchable Hardware Compatibility Guide (HCG) available on VMware's website at www.vmware.com/resources/compatibility/. A quick search returns dozens of systems from major vendors such as Hewlett-Packard, Cisco, IBM, and Dell. For example, at the time of this writing, searching the HCG for HP returned 202 results, including blades and traditional rack-mount servers supported across several versions of vSphere, from 4.1 U3 to 5.1. Within the major vendors like HP, Dell, Cisco, and IBM, it is generally not too difficult to find a tested and supported platform on which to run ESXi, especially their newer models of hardware. When you expand the list to include other vendors, it's clear that there is a substantial base of compatible servers that are supported by vSphere from which to choose.
THE RIGHT SERVER FOR THE JOB
Selecting the appropriate server is undoubtedly the first step in ensuring a successful vSphere deployment. In addition, it is the only way to ensure that VMware will provide the necessary support. Remember the discussion from Chapter 1, though—a bigger server isn't necessarily a better server!
Finding a supported server is only the first step. It's also important to find the right server—the server that strikes the correct balance of capacity and affordability. Do you use larger servers, such as servers that support up to four or more physical CPUs and 512 GB of RAM? Or would smaller servers, such as servers that support dual physical CPUs and 64 GB of RAM, be a better choice? There is a point of diminishing returns when it comes to adding more physical CPUs and more RAM to a server. Once you pass that point, the servers get more expensive to acquire and support, but the number of VMs the servers can host doesn't increase enough to offset the increase in cost. The challenge, therefore, is finding server models that provide enough expansion for growth and then fitting them with the right amount of resources to meet your needs.
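To make the diminishing-returns point concrete, here is a rough back-of-the-envelope sketch in Python. The server prices, RAM capacities, average VM size, and consolidation cap are purely hypothetical assumptions for illustration, not vendor figures:

```python
# Illustrative comparison of cost per VM for two server sizes.
# All prices and capacities below are hypothetical assumptions.

def cost_per_vm(server_cost, ram_gb, avg_vm_ram_gb=8, consolidation_limit=None):
    """Estimate cost per VM, assuming RAM is the limiting resource.

    consolidation_limit models practical caps (CPU scheduling,
    failure-domain size) that keep you from filling all the RAM.
    """
    vms = ram_gb // avg_vm_ram_gb
    if consolidation_limit is not None:
        vms = min(vms, consolidation_limit)
    return server_cost / vms

# A dual-socket host with 64 GB of RAM at a hypothetical $8,000:
small = cost_per_vm(server_cost=8000, ram_gb=64)
# A quad-socket host with 512 GB of RAM at a hypothetical $50,000,
# where operational concerns cap consolidation at, say, 40 VMs:
large = cost_per_vm(server_cost=50000, ram_gb=512, consolidation_limit=40)
print(small, large)  # 1000.0 1250.0
```

Under these assumed numbers the bigger box actually costs more per VM; the point is not the specific dollar figures but that capacity and cost do not scale linearly, so this kind of simple model is worth running against real quotes.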
Fortunately, a deeper look into the server models available from a specific vendor, such as HP, reveals server models of all types and sizes (see Figure 2.1), including the following:
You'll note that Figure 2.1 doesn't show vSphere 5.5 in the list; at the time of this writing, VMware's HCG hadn't yet been updated to include information on vSphere 5.5. However, once VMware updates its HCG to include vSphere 5.5 and vendors complete their testing, you'll be able to easily view compatibility with vSphere 5.5 using VMware's online HCG. Servers are added to the HCG as they are certified, not just at major vSphere releases.
Which server is the right server? The answer to that question depends on many factors. The number of CPU cores is often used as a determining factor, but you should also consider the total number of RAM slots. A higher number of RAM slots means that you can use lower-cost, lower-density RAM modules and still reach high memory configurations. You should also consider server expansion options, such as the number of available Peripheral Component Interconnect Express (PCIe) buses, expansion slots, and the types of expansion cards supported in the server. Finally, be sure to consider the server form factor; blade servers have advantages and disadvantages when compared to rack-mount servers.
Selecting the right storage solution is the second major decision that you must make before you proceed with your vSphere deployment. The lion's share of advanced features within vSphere—features like vSphere DRS, vSphere HA, and vSphere FT—depend on the presence of a shared storage architecture. While we won't talk in depth about any particular brand of storage hardware, VMware itself has released a feature called Virtual SAN (VSAN) with vSphere 5.5, which we'll discuss more in Chapter 6, “Creating and Configuring Storage Devices.” As stated, because of the dependency on shared storage, deciding on the correct storage architecture for your vSphere deployment is equally as critical as the choice of the server hardware on which to run ESXi.
THE HCG ISN'T JUST FOR SERVERS
VMware's HCG isn't just for servers. The searchable HCG also provides compatibility information on storage arrays and other storage components. Be sure to use the searchable HCG to verify the compatibility of your host bus adapters (HBAs) and storage arrays to ensure the appropriate level of support from VMware.
VMware also has a Product Interoperability Matrix to assist with software compatibility information; it can be found at the following location:
http://partnerweb.vmware.com/comp_guide2/sim/interop_matrix.php
Fortunately, vSphere supports a number of storage architectures out of the box and has implemented a modular, plug-in architecture that will make supporting future storage technologies easier. vSphere supports storage based on Fibre Channel and Fibre Channel over Ethernet (FCoE), iSCSI-based storage, and storage accessed via Network File System (NFS). In addition, vSphere supports the use of multiple storage protocols within a single solution so that one portion of the vSphere implementation might run over Fibre Channel while another portion runs over NFS. This provides a great deal of flexibility in choosing your storage solution. Finally, vSphere provides support for software-based initiators as well as hardware initiators (also referred to as host bus adapters or converged network adapters), so this is another option you must consider when selecting your storage solution.
WHAT IS REQUIRED FOR FIBRE CHANNEL OVER ETHERNET SUPPORT?
Fibre Channel over Ethernet (FCoE) is a relatively new storage protocol. However, because FCoE was designed to be compatible with Fibre Channel, it looks, acts, and behaves like Fibre Channel to ESXi. As long as drivers for the FCoE Converged Network Adapter (CNA) are available—and this is where you would go back to the VMware HCG again—support for FCoE should not be an issue.
When determining the correct storage solution, you must consider the following questions:
The procedures involved in creating and managing storage devices are discussed in detail in Chapter 6.
The third and final major decision that you need to make during the planning process is how your vSphere deployment will integrate with the existing network infrastructure. In part, this decision is driven by the choice of server hardware and the storage protocol.
For example, an organization selecting a blade form factor may run into limitations on the number of network interface cards (NICs) that can be supported in a given blade model. This affects how the vSphere implementation will integrate with the network. Similarly, organizations choosing to use iSCSI or NFS instead of Fibre Channel will typically have to deploy more NICs in their ESXi hosts to accommodate the additional network traffic or use 10 Gigabit Ethernet. Organizations also need to account for network interfaces for vMotion and vSphere FT.
Until 10 Gigabit Ethernet (10GbE) became common, ESXi hosts in many vSphere deployments had a minimum of 6 NICs and often 8, 10, or even 12 NICs. So, how do you decide how many NICs to use? We'll discuss some of this in greater detail in Chapter 5, “Creating and Configuring Virtual Networks,” but here are some general guidelines:
This adds up to eight NICs per server (again, assuming management and vMotion share a pair of NICs). For this sort of deployment, you'll want to ensure that you have enough network ports available, at the appropriate speeds, to accommodate the needs of the vSphere deployment. This is, of course, only a rudimentary discussion of networking design for vSphere and doesn't incorporate any discussion on the use of 10 Gigabit Ethernet, FCoE (which, while a storage protocol, impacts the network design), or what type of virtual switching infrastructure you will use. All of these other factors would affect your networking setup.
HOW ABOUT 10GBE NICS?
Lots of factors go into designing how a vSphere deployment will integrate with the existing network infrastructure. For example, it has been only in the last few years that 10GbE networking has become pervasive in the datacenter. This bandwidth change fundamentally changes how virtual networks are designed.
In one particular case, a company wished to upgrade its existing rack-mount server clusters from six NICs and two Fibre Channel HBAs to two dual-port 10GbE CNAs. Not only was there a stark physical difference from a switch and cabling perspective, but the logical configuration was significantly different too. This change obviously allowed for greater bandwidth to each host, but it also allowed for more design flexibility.
The final design used VMware Network I/O Control (NIOC) and Load-Based Teaming (LBT) to share available bandwidth among the necessary types of traffic, restricting bandwidth only when the network was congested. This resulted in efficient use of the new bandwidth capability without adding too much configuration complexity. Networking is discussed in more detail in Chapter 5.
With these questions answered, you at least have the basics of a vSphere deployment established. As we mentioned previously, this has been far from a comprehensive or complete discussion on designing a vSphere solution. We do recommend that you find a good resource on vSphere design and consider going through a comprehensive design exercise before actually deploying vSphere.
Once you've established the basics of your vSphere design, you have to decide exactly how you are going to deploy ESXi.
There are three ways to deploy ESXi:

- Interactive installation
- Unattended (scripted) installation
- Automated provisioning with vSphere Auto Deploy
Of these, the simplest is an interactive installation of ESXi. The most complex—but perhaps the most powerful, depending on your needs and your environment—is automated provisioning of ESXi. In the following sections, we'll describe all three of these methods for deploying ESXi in your environment.
Let's start with the simplest method first: interactively installing ESXi.
VMware has done a great job of making the interactive installation of ESXi as simple and straightforward as possible. It takes just minutes to install, so let's walk through the process.
Perform the following steps to interactively install ESXi:
This will vary from manufacturer to manufacturer and will also depend on whether you are installing locally or remotely via an IP-based Keyboard, Video, Mouse (KVM) or other remote management facility.
Again, this will vary based on a local installation (which involves simply inserting the VMware ESXi installation CD into the optical drive) or a remote installation (which typically involves mapping an image of the installation media, known as an ISO image, to a virtual optical drive).
OBTAINING VMWARE ESXI INSTALLATION MEDIA
You can download the installation files from VMware's website at www.vmware.com/download/.
Once it boots from the installation media, the initial boot menu screen appears, as shown in Figure 2.2.
The installer will boot the vSphere hypervisor and eventually stop at a welcome message. Press Enter to continue.
Potential devices are identified as either local devices or remote devices. Figure 2.3 and Figure 2.4 show two different views of this screen: one with a local device and one with remote devices.
RUNNING ESXI AS A VM
You might be able to deduce from the screenshot in Figure 2.3 that we're actually running ESXi 5.5 as a VM. Yes, that's right—you can virtualize ESXi! In this particular case, we're using VMware's desktop virtualization solution for Mac OS X, VMware Fusion, to run an instance of ESXi as a VM. As of this writing, the latest version of VMware Fusion is 5, and it includes ESXi 5 as an officially supported guest OS.
Storage area network logical unit numbers, or SAN LUNs, are listed as remote, as you can see in Figure 2.4. Local serial attached SCSI (SAS) devices are also listed as remote. Figure 2.4 shows a SAS drive connected to an LSI Logic controller; although this device is physically local to the server on which we are installing ESXi, the installation routine marks it as remote.
If you want to create a boot-from-SAN environment, where each ESXi host boots from a SAN LUN, then you'd select the appropriate SAN LUN here. You can also install directly to your own USB or Secure Digital (SD) device—simply select the appropriate device from the list.
WHICH DESTINATION IS BEST?
Local device, SAN LUN, or USB? Which destination is the best when you're installing ESXi? Those questions truly depend on the overall vSphere design you are implementing, and there is no simple answer. Many variables affect this decision. Are you using an iSCSI SAN and you don't have iSCSI hardware initiators in your servers? That would prevent you from using a boot-from-SAN setup. Are you installing into an environment like Cisco UCS, where booting from SAN is highly recommended? Be sure to consider all the factors when deciding where to install ESXi.
The information about the device includes whether it detected an installation of ESXi and what Virtual Machine File System (VMFS) datastores, if any, are present on it, as shown in Figure 2.5. Press Enter to return to the device-selection screen when you have finished reviewing the information for the selected device.
These are the available actions:
After the installation process begins, it takes only a few minutes to install ESXi onto the selected storage device.
After the host reboots, ESXi is installed. ESXi is configured by default to obtain an IP address via Dynamic Host Configuration Protocol (DHCP). Depending on the network configuration, you might find that ESXi will not be able to obtain an IP address via DHCP. Later in this chapter, in the section “Reconfiguring the Management Network,” we'll discuss how to correct networking problems after installing ESXi by using the Direct Console User Interface (DCUI).
VMware also provides support for scripted installations of ESXi. As you've already seen, there isn't a lot of interaction required to install ESXi, but support for scripting the installation of ESXi reduces the time to deploy even further.
INTERACTIVELY INSTALLING ESXI FROM USB OR ACROSS THE NETWORK
As an alternative to launching the ESXi installer from the installation CD/DVD, you can install ESXi from a USB flash drive or across the network via Preboot Execution Environment (PXE). More details on how to use a USB flash drive or to PXE boot the ESXi installer are found in the vSphere Installation and Setup Guide, available from www.vmware.com/go/support-pubs-vsphere. Note that PXE booting the installer is not the same as PXE booting ESXi itself, something that we'll discuss later in the section “Deploying VMware ESXi with vSphere Auto Deploy.”
ESXi supports the use of an installation script (often referred to as a kickstart, or KS, script) that automates the installation routine. By using an installation script, users can create unattended installation routines that make it easy to quickly deploy multiple instances of ESXi.
ESXi comes with a default installation script on the installation media. Listing 2.1 shows the default installation script.
# Sample scripted installation file
#
# Accept the VMware End User License Agreement
vmaccepteula

# Set the root password for the DCUI and Tech Support Mode
rootpw mypassword

# Install on the first local disk available on machine
install --firstdisk --overwritevmfs

# Set the network to DHCP on the first network adapter
network --bootproto=dhcp --device=vmnic0

# A sample post-install script
%post --interpreter=python --ignorefailure=true
import time
stampFile = open('/finished.stamp', mode='w')
stampFile.write( time.asctime() )
If you want to use this default install script to install ESXi, you can specify it when booting the VMware ESXi installer by adding the ks=file://etc/vmware/weasel/ks.cfg boot option. We'll show you how to specify that boot option shortly.
Of course, the default installation script is useful only if the settings work for your environment. Otherwise, you'll need to create a custom installation script. The installation script commands are much the same as those supported in previous versions of vSphere. Here's a breakdown of some of the commands supported in the ESXi installation script:
accepteula or vmaccepteula These commands accept the ESXi license agreement.
install The install command specifies that this is a fresh installation of ESXi, not an upgrade. You must also specify the following parameters:
keyboard This command specifies the keyboard type. It's an optional component in the installation script.
network This command provides the network configuration for the ESXi host being installed. It is optional but generally recommended. Depending on your configuration, some of the additional parameters are required:
This is by no means a comprehensive list of all the commands available in the ESXi installation script, but it does cover the majority of the commands you'll see in use.
Looking back at Listing 2.1, you'll see that the default installation script incorporates a %post section, where additional scripting can be added using either the Python interpreter or the BusyBox interpreter. What you don't see in Listing 2.1 is the %firstboot section, which also allows you to add Python or BusyBox commands for customizing the ESXi installation. This section comes after the installation script commands but before the %post section. Any command supported in the ESXi shell can be executed in the %firstboot section, so commands such as vim-cmd, esxcfg-vswitch, esxcfg-vmknic, and others can be combined in the %firstboot section of the installation script.
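Putting these commands together, a minimal custom installation script might look like the following sketch. The IP addresses, hostname, root password, and the %firstboot command are illustrative assumptions only; substitute values appropriate for your environment:

```
# Custom ESXi 5.5 installation script (all values illustrative)
vmaccepteula
rootpw MySecurePass1!
install --firstdisk --overwritevmfs

# Static addressing instead of DHCP
network --bootproto=static --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.254 --nameserver=192.168.1.10 --hostname=esxi01.example.local
reboot

%firstboot --interpreter=busybox
# Commands here run in the ESXi shell on first boot; for example,
# suppress the warning displayed when the ESXi shell is enabled
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1
```

A script along these lines would be saved as ks.cfg and referenced via one of the ks= boot options described in the next section.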
A number of commands that were supported in previous versions of vSphere (by ESX or ESXi) are no longer supported in installation scripts for ESXi 5.5, such as these:
Once you have created the installation script you will use, you need to specify that script as part of the installation routine.
Specifying the location of the installation script as a boot option is not only how you would tell the installer to use the default script but also how you tell the installer to use a custom installation script that you've created. This installation script can be located on a USB flash drive or in a network location accessible via NFS, HTTP, HTTPS, or FTP. Table 2.1 summarizes some of the supported boot options for use with an unattended installation of ESXi.
BOOT OPTION | BRIEF DESCRIPTION |
ks=cdrom:/path | Uses the installation script found at path on the CD-ROM. The installer checks all CD-ROM drives until the file matching the specified path is found. |
ks=usb | Uses the installation script named ks.cfg found in the root directory of an attached USB device. All USB devices are searched as long as they have a FAT16 or FAT32 file system. |
ks=usb:/path | Uses the installation script at the specified path on an attached USB device. This allows you to use a different filename or location for the installation script. |
ks=protocol:/serverpath | Uses the installation script found at the specified network location. The protocol can be NFS, HTTP, HTTPS, or FTP. |
ip=XX.XX.XX.XX | Specifies a static IP address for downloading the installation script and the installation media. |
nameserver=XX.XX.XX.XX | Provides the IP address of a Domain Name System (DNS) server to use for name resolution when downloading the installation script or the installation media. |
gateway=XX.XX.XX.XX | Provides the network gateway to be used as the default gateway for downloading the installation script and the installation media. |
netmask=XX.XX.XX.XX | Specifies the network mask for the network interface used to download the installation script or the installation media. |
vlanid=XX | Configures the network interface to be on the specified VLAN when downloading the installation script or the installation media. |
NOT A COMPREHENSIVE LIST OF BOOT OPTIONS
The list found in Table 2.1 includes only some of the more commonly used boot options for performing a scripted installation of ESXi. For the complete list of supported boot options, refer to the vSphere Installation and Setup Guide, available from www.vmware.com/go/support-pubs-vsphere.
To use one or more of these boot options during the installation, you'll need to specify them at the boot screen for the ESXi installer. The bottom of the installer boot screen states that you can press Shift+O to edit the boot options.
The following code line is an example that could be used to retrieve the installation script from an HTTP URL; this would be entered at the prompt at the bottom of the installer boot screen:
> runweasel ks=http://192.168.1.1/scripts/ks.cfg ip=192.168.1.200 netmask=255.255.255.0 gateway=192.168.1.254
Using an installation script to install ESXi not only speeds up the installation process but also helps to ensure the consistent configuration of all your ESXi hosts.
The final method for deploying ESXi—using vSphere Auto Deploy—is the most complex, but it also offers administrators a great deal of flexibility.
vSphere Auto Deploy can be configured with one of three different modes:

- Stateless
- Stateless Caching
- Stateful Install
In Stateless mode, you deploy ESXi using Auto Deploy, but you aren't actually installing it. Instead of placing ESXi onto a local disk or a SAN boot LUN, you are building an environment where ESXi is loaded directly into memory on a host as it boots.
In the next mode, Stateless Caching, you deploy ESXi using Auto Deploy just as with Stateless, but the image is cached on the server's local disk or SAN boot LUN. In the event that the Auto Deploy infrastructure is not available, the host boots from a local cache of the image.
The third mode, Stateful Install, is very similar to Stateless Caching except the server's boot order is reversed: local disk first and network second. Unless the server is specifically told to network boot again, the Auto Deploy service is no longer needed. This mode is effectively just a mechanism for network installation.
Auto Deploy uses a set of rules (called deployment rules) to control which hosts are assigned a particular ESXi image (called an image profile). Deploying a new ESXi image is as simple as modifying the deployment rule to point that physical host to a new image profile and then rebooting with the PXE/network boot option. When the host boots up, it will receive a new image profile.
Sounds easy, right? Maybe not. In theory, it is—but there are several steps you have to accomplish before you're ready to actually deploy ESXi in this fashion:

1. Set up the vSphere Auto Deploy server.
2. Configure your TFTP server to serve the Auto Deploy boot files.
3. Configure DHCP to point booting hosts to the TFTP server.
4. Create an image profile.
5. Establish deployment rules that assign the image profile to hosts.
AUTO DEPLOY DEPENDENCIES
This chapter deals with ESXi host installation methods; however, vSphere Auto Deploy is dependent on Host Profiles, a feature of VMware vCenter. More information about installing vCenter and configuring Host Profiles can be found in Chapter 3, “Installing and Configuring vCenter Server.”
Once you've completed these five steps, you're ready to start provisioning hosts with ESXi. When everything is configured and in place, the process looks something like this:
When this boot process has finished, you have a system running ESXi. The Auto Deploy server also has the ability to automatically join the ESXi host to vCenter Server and assign a host profile (which we'll discuss in a bit more detail in Chapter 3) for further configuration. As you can see, this system potentially offers administrators tremendous flexibility and power.
Ready to get started with provisioning ESXi hosts using Auto Deploy? Let's start with setting up the vSphere Auto Deploy server.
The vSphere Auto Deploy server is where the various ESXi image profiles are stored. The image profile is transferred from this server via HTTP to a physical host when it boots. The image profile is the actual ESXi image, and it comprises multiple VIB files. VIBs are ESXi software packages; these could be drivers, Common Information Model (CIM) providers, or other applications that extend or enhance the ESXi platform. Both VMware and VMware's partners could distribute software as VIBs.
You can install vSphere Auto Deploy on the same system as vCenter Server or on a separate Windows Server–based system (this could certainly be a VM). In addition, the vCenter virtual appliance comes preloaded with the Auto Deploy server installed. If you want to use the vCenter virtual appliance, you need only deploy the appliance and configure the service from the web-based administrative interface. We'll describe the process for deploying the vCenter virtual appliance in more detail in Chapter 3. In this section, we'll walk you through installing the Auto Deploy server on a separate Windows-based system.
Perform the following steps to install the vSphere Auto Deploy server:
If this is a VM, you can map the vCenter Server installation ISO to the VM's CD/DVD drive.
This will launch the vSphere Auto Deploy installation wizard.
If you need to change locations, use either of the Change buttons; if you need to change the repository size, specify a new value in gigabytes (GB).
You'll also need to provide a username and password. Click Next when you have finished entering this information.
If you now go back to the vSphere Client (if you haven't installed it yet, skip ahead to the section “Installing the vSphere C# Client” and then come back) and connect to vCenter Server, you'll see a new Auto Deploy icon on the vSphere Client's home page. Click it to see information about the registered Auto Deploy server. Figure 2.8 shows the Auto Deploy screen after we installed and registered an Auto Deploy server with vCenter Server.
That's it for the Auto Deploy server itself; once it's been installed and is up and running, there's very little additional work or configuration required, except configuring TFTP and DHCP on your network to support vSphere Auto Deploy. The next section provides an overview of the required configurations for TFTP and DHCP.
The exact procedures for configuring TFTP and DHCP are going to vary based on the specific TFTP and DHCP servers you are using on your network. For example, configuring the ISC DHCP server to support vSphere Auto Deploy is dramatically different from configuring the DHCP Server service provided with Windows Server. As a matter of necessity, then, we can provide only high-level information in the following section. Refer to your specific vendor's documentation for details on how the configuration is carried out.
For TFTP, you only need to upload the appropriate TFTP boot files to the TFTP directory. The Download TFTP Boot Zip hyperlink shown in Figure 2.8 provides the necessary files. Simply download the Zip file using that link, unzip the file, and place the contents of the unzipped file in the TFTP directory on the TFTP server.
For DHCP, you need to specify two additional DHCP options:

- Option 66, often referred to as next-server or as Boot Server Host Name, must specify the address of the TFTP server.
- Option 67, often called the boot filename, should contain the value undionly.kpxe.vmw-hardwired.
If you want to identify hosts by IP address in the deployment rules, then you'll need a way to ensure that the host gets the IP address you expect. You can certainly use DHCP reservations to accomplish this, if you like; just be sure that options 66 and 67 apply to the reservation as well.
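As an illustration, on an ISC DHCP server options 66 and 67 map to the next-server and filename directives. The addresses below are hypothetical; the boot filename comes from the TFTP Boot Zip mentioned earlier:

```
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.199;
  option routers 192.168.1.254;
  next-server 192.168.1.20;                  # option 66: address of the TFTP server
  filename "undionly.kpxe.vmw-hardwired";    # option 67: gPXE boot file from the TFTP Boot Zip
}
```

The equivalent settings in the Windows Server DHCP console are scope options 066 (Boot Server Host Name) and 067 (Bootfile Name).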
Once you've configured TFTP and DHCP, you're ready to PXE boot your server, but you still need to create the image profile to deploy ESXi.
The process for creating an image profile may seem counterintuitive at first; it did for us. Creating an image profile involves first adding at least one software depot. A software depot could be a directory structure of files and folders on an HTTP server, or (more commonly) it could be an offline depot in the form of a Zip file. You can add multiple software depots.
Some software depots will already have one or more image profiles defined, and you can define additional image profiles (usually by cloning an existing image profile). You'll then have the ability to add software packages (in the form of VIBs) to the image profile you've created. Once you've finished adding or removing software packages or drivers from the image profile, you can export the image profile (either to an ISO or as a Zip file for use as an offline depot).
All image profile tasks are accomplished using PowerCLI, so you'll need to ensure that you have a system with PowerCLI installed in order to perform these tasks. We'll describe PowerCLI, along with other automation tools, in more detail in Chapter 14, “Automating VMware vSphere.” In the next part of this section, we'll walk you through creating an image profile based on the ESXi 5.5.0 offline depot Zip file available for downloading by registered customers.
Perform the following steps to create an image profile:
Add-EsxSoftwareDepot C:\vmware-ESXi-5.5.0-XXXXXX-depot.zip
Add-EsxSoftwareDepot https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
New-EsxImageProfile -CloneProfile "ESXi-5.5.0-XXXXXX-standard" -Name "My_Custom_Profile"
Once you have an image profile established, you can customize it by adding VIBs, or you can export it. You might want to export the image profile because image profiles created in a PowerCLI session are not available once you exit that session. By exporting the image profile as a Zip file offline depot, you can easily add it back in when you start a new session.
To export an image profile as a Zip file offline depot, run this command:
Export-EsxImageProfile -ImageProfile "My_Custom_Profile" -ExportToBundle -FilePath "C:\path\to\IP-file-offline-depot.zip"
When you start a new PowerCLI session to work with an image profile, simply add this offline depot with the Add-EsxSoftwareDepot command.
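The export-and-reimport round trip can be sketched as follows (the C:\depots path is illustrative):

```powershell
# Session 1: export the customized profile as an offline depot
Export-EsxImageProfile -ImageProfile "My_Custom_Profile" `
    -ExportToBundle -FilePath "C:\depots\My_Custom_Profile-depot.zip"

# Session 2 (later): re-add the offline depot and confirm the
# profile is available again
Add-EsxSoftwareDepot "C:\depots\My_Custom_Profile-depot.zip"
Get-EsxImageProfile -Name "My_Custom_Profile"
```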
The final step is establishing deployment rules that link image profiles to servers in order to provision ESXi to them at boot time. We'll describe how to do this in the next section.
The deployment rules are where the “rubber meets the road” for vSphere Auto Deploy. When you define a deployment rule, you are linking an image profile to one or more hosts. It's at this point that vSphere Auto Deploy will copy all the VIBs defined in the specified image profile up to the Auto Deploy server so that they are accessible from the hosts. Once a deployment rule is in place, you can actually begin provisioning hosts via Auto Deploy (assuming all the other pieces are in place and functioning correctly, of course).
As with image profiles, deployment rules are managed via PowerCLI. You'll use the New-DeployRule and Add-DeployRule commands to define new deployment rules and add them to the working rule set, respectively.
Perform the following steps to define a new deployment rule:
New-DeployRule -Name "Img_Rule" -Item "My_Custom_Profile" -Pattern "vendor=Cisco", "ipv4=10.1.1.225,10.1.1.250"
This rule assigns the image profile named My_Custom_Profile to all hosts with Cisco in the vendor string and having either the IP address 10.1.1.225 or 10.1.1.250. You could also specify an IP range like 10.1.1.225-10.1.1.250 (using a hyphen to separate the start and end of the IP address range).
New-DeployRule -Name "Default_Cluster" -Item "Cluster-1" -AllHosts
This rule puts all hosts into the cluster named Cluster-1 in the vCenter Server with which the Auto Deploy server is registered. (Recall that an Auto Deploy server must be registered with a vCenter Server instance.)
Add-DeployRule Img_Rule
Add-DeployRule Default_Cluster
As soon as you add the deployment rules to the working rule set, vSphere Auto Deploy will, if necessary, start uploading VIBs to the Auto Deploy server in order to satisfy the rules you've defined.
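After adding rules to the working rule set, it's worth confirming what is actually in effect. A short sketch, using the rule names from the examples above:

```powershell
# Show the ordered rules currently in the working rule set
Get-DeployRuleSet

# If a rule needs to be retired later, remove it from the working
# set (-Delete also deletes the rule definition itself)
Remove-DeployRule -DeployRule "Img_Rule" -Delete
```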
Now that a deployment rule is in place, you're ready to provision via Auto Deploy. Boot the physical host that matches the patterns you defined in the deployment rule, and it should follow the boot sequence described at the start of this section. Figure 2.9 shows what it looks like when a host is booting ESXi via vSphere Auto Deploy.
By now, you should be starting to see the flexibility that Auto Deploy offers. If you need to deploy a new ESXi image, you need only define a new image profile (using a new software depot, if necessary), assign that image profile with a deployment rule, and reboot the physical servers. When the servers come up, they will boot the newly assigned ESXi image via PXE.
Of course, there are some additional concerns that you'll need to address should you decide to go this route:
In the Auto Deploy Stateless mode, the ESXi image doesn't contain configuration state and doesn't maintain dynamic state information, so hosts provisioned this way are considered stateless ESXi hosts. All the state information is stored elsewhere rather than on the host itself.
Real World Scenario
ENSURING AUTO DEPLOY IS AVAILABLE
Author Nick Marshall says, “When working with a customer with vSphere 5.0 Auto Deploy, we had to ensure that all Auto Deploy components were highly available. This meant designing the infrastructure that was responsible for booting and deploying ESXi hosts was more complicated than normal. Services such as PXE and Auto Deploy and the vCenter VMs were all deployed on hosts that were not provisioned using Auto Deploy in a separate management cluster.
As per the Highly Available Auto Deploy best practices in the vSphere documentation, building a separate cluster with a local installation or boot from SAN will ensure there is no chicken-and-egg situation. You need to ensure that in a completely virtualized environment your VMs that provision ESXi hosts with Auto Deploy are not running on the ESXi hosts they need to build.”
Unless your ESXi host hardware truly has no local disks or bootable SAN storage, we recommend considering one of the two other Auto Deploy modes. These modes keep your hosts resilient if the Auto Deploy services become unavailable at any time.
To configure Stateless Caching, follow the previous procedure for Stateless with these additions:
This configuration tells the ESXi host to take the Auto Deploy image loaded in memory and save it to the local disk after a successful boot. If for some reason the network or Auto Deploy server is unavailable when your host reboots, it will fall back and boot the cached copy on its local disk.
Just like Stateless Caching mode, the Auto Deploy Stateful mode is configured by editing host profiles within vCenter and the boot order settings in the host BIOS.
vSphere Auto Deploy offers some great advantages, especially for environments with lots of ESXi hosts to manage, but it can also add complexity. As mentioned earlier, it all comes down to the design and requirements of your vSphere deployment.
Whether you are installing from a CD/DVD or performing an unattended installation of ESXi, once the installation is complete, there are several post-installation steps that are necessary or might be necessary, depending on your specific configuration. We'll discuss these tasks in the following sections.
This might come as a bit of shock for IT professionals who have grown accustomed to managing Microsoft Windows–based servers from the server's console (even via Remote Desktop), but ESXi wasn't designed for you to manage it from the server's console. Instead, you should use the vSphere Client.
In earlier versions, ESXi and vCenter were administered with the C# (pronounced “see sharp”) Client. vSphere 5.0 introduced the Web Client. Although the first iteration of the Web Client was not as feature rich as the C# Client, with vSphere 5.1 and 5.5 the tables have turned. To ensure that you can follow which client the instructions are for, we will use the terms vSphere Client and Web Client.
The vSphere Client is a Windows-only application that allows for connecting directly to an ESXi host or to a vCenter Server installation. The only difference is that connecting directly to an ESXi host requires authenticating with a user account that exists on that specific host, whereas connecting to a vCenter Server installation relies on Windows accounts for authentication. Additionally, some features of the vSphere Client—such as initiating vMotion, for example—are available only when connecting to a vCenter Server installation.
LEARNING A NEW USER INTERFACE
For those who are already used to the vSphere Client, things can feel a little awkward, but learning the new web-based client for vCenter is certainly necessary. While you will be able to perform more traditional tasks in the vSphere Client, the Web Client helps you unlock the full potential when using vSphere 5.5. We'll focus primarily on the vSphere Web Client in this book unless we are directly administering the hosts (as is the case in this chapter) or when using vSphere Client plug-ins that are not currently available in the vSphere Web Client.
You can install either of the vSphere Clients with the vCenter Server installation media. Figure 2.11 shows the VMware vCenter Installer with the vSphere Client option selected.
In previous versions of VMware vSphere, one of the easiest installation methods was to simply connect to an ESX/ESXi host or a vCenter Server instance using your web browser. From there, you clicked a link to download the vSphere Client right from the web page. From vSphere 5.0 onward, the vSphere Client download link for ESXi hosts doesn't point to a local copy of the installation files; it redirects you to a VMware-hosted website to download the files. The vSphere Client download link for vCenter Server 5.5, though, still points to a local copy of the vSphere Client installer.
Because you might not have installed vCenter Server yet—that is the focus of the next chapter, Chapter 3—we'll walk you through installing the vSphere Client from the vCenter Server installation media. Regardless of how you obtain the installer, once the installation wizard starts, the process is the same. It is also worth noting that ESXi cannot be directly managed with the Web Client, so you will probably want to install both clients at some point. Refer to Chapter 3 for details on the Web Client installation.
Perform the following steps to install the vSphere Client from the vCenter Server installation media:
If you are installing the vSphere Client on a Windows VM, you can mount the vCenter Server installation ISO image as a virtual CD/DVD image. Refer to Chapter 7, “Ensuring High Availability and Business Continuity,” for more details if you are unsure how to attach a virtual CD/DVD image.
Although the vSphere Client can be installed and is supported on 64-bit Windows operating systems, the vSphere Client itself remains a 32-bit application and runs in 32-bit compatibility mode.
During the installation of ESXi, the installer creates a virtual switch—also known as a vSwitch—bound to a physical NIC. The tricky part, depending on your server hardware, is that the installer might select a different physical NIC than the one you need for correct network connectivity. Consider the scenario depicted in Figure 2.12. If, for whatever reason, the ESXi installer doesn't link the correct physical NIC to the vSwitch it creates, then you won't have network connectivity to that host. We'll talk more about why ESXi's network connectivity must be configured with the correct NIC in Chapter 5, but for now just understand that this is a requirement for connectivity. Since you need network connectivity to manage the host from the vSphere Client, how do you fix this?
The simplest fix for this problem is to unplug the network cable from the current Ethernet port in the back of the server and keep trying the remaining ports until the host is accessible, but that's not always possible or desirable. The better way is to use the DCUI to reconfigure the management network so that it is configured the way you need it to be.
Perform the following steps to fix the management NIC in ESXi using the DCUI:
After the correct NIC has been assigned to the ESXi management network, the System Customization menu provides a Test Management Network option to verify network connectivity.
The other options within the DCUI for troubleshooting management network issues are covered in detail within Chapter 5.
At this point, you should have management network connectivity to the ESXi host, and from here forward you can use the vSphere Client to perform other configuration tasks, such as configuring time synchronization and name resolution.
Time synchronization in ESXi is an important configuration because the ramifications of incorrect time run deep. While ensuring that ESXi has the correct time seems trivial, time-synchronization issues can affect features such as performance charting, SSH key expirations, NFS access, backup jobs, authentication, and more. After the installation of ESXi or during an unattended installation of ESXi using an installation script, the host should be configured to perform time synchronization with a reliable time source. This source could be another server on your network or a time source located on the Internet. For the sake of managing time synchronization, it is easiest to synchronize all your servers against one reliable internal time server and then synchronize the internal time server with a reliable Internet time server. ESXi provides a Network Time Protocol (NTP) implementation to provide this functionality.
The simplest way to configure time synchronization for ESXi involves the vSphere Client.
Perform the following steps to enable NTP using the vSphere Client:
You'll note that using the vSphere Client to enable NTP this way also automatically enables NTP traffic through the firewall. You can verify this by noting an Open Firewall Ports entry in the Tasks pane or by clicking Security Profile under the Software menu and seeing an entry for NTP Client listed under Outgoing Connections.
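If you'd rather script the NTP configuration than click through the vSphere Client, the same result can be sketched in PowerCLI (the host name esxi01.example.com and NTP server ntp.example.com are illustrative):

```powershell
# Get the target ESXi host (name is illustrative)
$vmhost = Get-VMHost -Name "esxi01.example.com"

# Point the host at an internal NTP server and open the firewall port
Add-VMHostNtpServer -VMHost $vmhost -NtpServer "ntp.example.com"
Get-VMHostFirewallException -VMHost $vmhost -Name "NTP client" |
    Set-VMHostFirewallException -Enabled $true

# Start the NTP daemon and configure it to start with the host
Get-VMHostService -VMHost $vmhost |
    Where-Object { $_.Key -eq "ntpd" } |
    ForEach-Object {
        Set-VMHostService -HostService $_ -Policy On
        Start-VMHostService -HostService $_
    }
```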
WINDOWS AS A RELIABLE TIME SERVER
You can configure an existing Windows server as a reliable time server by performing these steps:
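The usual approach relies on the built-in w32tm utility; a sketch, run from an elevated prompt on the Windows server (the time source shown is illustrative):

```powershell
# Point the Windows Time service at an external NTP pool and mark
# this server as a reliable time source
w32tm /config /manualpeerlist:"0.pool.ntp.org" /syncfromflags:manual /reliable:yes /update

# Restart the Windows Time service so the change takes effect
Restart-Service w32time

# Verify synchronization status
w32tm /query /status
```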
Just as we mentioned that time synchronization is important for your vSphere environment, so is name resolution. Although the vSphere dependency on name resolution is less than it was, there is still some functionality that may not work as expected without proper name resolution.
Configuring name resolution is a simple process in the vSphere Client:
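Name resolution can also be scripted with PowerCLI, which is handy when you need to configure many hosts consistently; a sketch (host name, domain, and DNS addresses are illustrative):

```powershell
# Set the DNS servers and search domain on an ESXi host
Get-VMHost -Name "esxi01.example.com" |
    Get-VMHostNetwork |
    Set-VMHostNetwork -DomainName "example.com" `
        -SearchDomain "example.com" `
        -DnsAddress "10.1.1.10", "10.1.1.11"
```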
In this chapter we've discussed some of the decisions that you'll have to make as you deploy ESXi in your datacenter, and we've shown you how to deploy these products using both interactive and unattended methods. In the next chapter, we'll show you how to deploy VMware vCenter Server, a key component in your virtualization environment.
Understand ESXi compatibility requirements. Unlike traditional operating systems like Windows or Linux, ESXi has much stricter hardware compatibility requirements. This helps ensure a stable, well-tested product line that is able to support even the most mission-critical applications.
Master It You have some older servers onto which you'd like to deploy ESXi. They aren't on the Hardware Compatibility Guide. Will they work with ESXi?
Plan an ESXi deployment. Deploying ESXi will affect many different areas of your organization—not only the server team but also the networking team, the storage team, and the security team. There are many issues to consider, including server hardware, storage hardware, storage protocols or connection types, network topology, and network connections. Failing to plan properly could result in an unstable and unsupported implementation.
Master It Name three areas of networking that must be considered in a vSphere design.
Master It What are some of the different types of storage that ESXi can be installed on?
Deploy ESXi. ESXi can be installed onto any supported and compatible hardware platform. You have three different ways to deploy ESXi: You can install it interactively, you can perform an unattended installation, or you can use vSphere Auto Deploy to provision ESXi as it boots up.
Master It Your manager asks you to provide him with a copy of the unattended installation script that you will be using when you roll out ESXi using vSphere Auto Deploy. Is this something you can give him?
Master It Name two advantages and two disadvantages of using vSphere Auto Deploy to provision ESXi hosts.
Perform post-installation configuration of ESXi. Following the installation of ESXi, some additional configuration steps may be required. For example, if the wrong NIC is assigned to the management network, then the server won't be accessible across the network. You'll also need to configure time synchronization.
Master It You've installed ESXi on your server, but the welcome web page is inaccessible, and the server doesn't respond to a ping. What could be the problem?
Install the vSphere C# Client. ESXi is managed using the vSphere C# Client, a Windows-only application that provides the functionality to manage the virtualization platform. There are a couple different ways to obtain the vSphere Client installer, including running it directly from the VMware vCenter Installer or by downloading it using a web browser connected to the IP address of a vCenter Server instance.
Master It List two ways by which you can install the vSphere Client.