
Chapter 2

Planning and Installing VMware ESXi

Now that you've taken a closer look at VMware vSphere and its suite of applications in Chapter 1, “Introducing VMware vSphere 5.5,” it's easy to see that VMware ESXi is the foundation of vSphere.

Although the act of installation can be relatively simple, understanding the deployment and configuration options requires planning to ensure a successful, VMware-supported implementation.

In this chapter, you will learn to

  • Understand ESXi compatibility requirements
  • Plan an ESXi deployment
  • Deploy ESXi
  • Perform post-installation configuration of ESXi
  • Install the vSphere C# Client

Planning a VMware vSphere Deployment

Deploying VMware vSphere is more than just virtualizing servers. Storage, networking, and security are equally significant in a vSphere deployment as they are for the physical servers themselves. As a result of this broad impact on numerous facets of your organization's IT, the process of planning the vSphere deployment becomes even more important. Without the appropriate planning for your vSphere implementation, you run the risk of configuration problems, instability, incompatibilities, and a diminished return on your investment.

Your planning process for a vSphere deployment involves answering a number of questions (please note that this list is far from comprehensive):

  • What types of servers will I use for the underlying physical hardware?
  • What kinds of storage will I use, and how will I connect that storage to my servers?
  • How will the networking be configured?

In some cases, the answers to these questions will determine the answers to other questions. After you have answered these questions, you can then move on to more difficult issues. These center on how the vSphere deployment will impact your staff, your business processes, and your operational procedures. Although still important, we're not going to help you answer those sorts of questions here; instead, let's just focus on the technical issues.

VSPHERE DESIGN IS A TOPIC ON ITS OWN

The first section of this chapter barely scratches the surface of what is involved in planning and designing a vSphere deployment. vSphere design is significant enough a topic that it warranted its own book: VMware vSphere Design, Second Edition (Sybex, 2013). If you are interested in a more detailed discussion of design decisions and design impacts, that's the book for you.

In the next few sections, we'll discuss the three major questions that we outlined previously that are a key part of planning your vSphere deployment: compute platform, storage, and network.

Choosing a Server Platform

The first major decision to make when planning to deploy vSphere is choosing a hardware, or “compute,” platform. Compared to traditional operating systems like Windows or Linux, ESXi has more stringent hardware restrictions. ESXi won't necessarily support every storage controller or every network adapter chipset available on the market. Although these hardware restrictions do limit the options for deploying a supported virtual infrastructure, they also ensure that the hardware has been tested and will work as expected when used with ESXi. Not every vendor or white-box configuration can play host to ESXi, but the list of supported hardware platforms continues to grow as VMware and hardware vendors test newer models.

You can check for hardware compatibility using the searchable Compatibility Guide (HCG) available on VMware's website at www.vmware.com/resources/compatibility/. A quick search returns dozens of systems from major vendors such as Hewlett-Packard, Cisco, IBM, and Dell. For example, at the time of this writing, searching the HCG for HP returned 202 results, including blades and traditional rack-mount servers supported across several versions of vSphere, from 4.1 U3 to 5.1. Within the major vendors like HP, Dell, Cisco, and IBM, it is generally not too difficult to find a tested and supported platform on which to run ESXi, especially among their newer models of hardware. When you expand the list to include other vendors, it's clear that there is a substantial base of compatible servers supported by vSphere from which to choose.

THE RIGHT SERVER FOR THE JOB

Selecting the appropriate server is undoubtedly the first step in ensuring a successful vSphere deployment. In addition, it is the only way to ensure that VMware will provide the necessary support. Remember the discussion from Chapter 1, though—a bigger server isn't necessarily a better server!

Finding a supported server is only the first step. It's also important to find the right server—the server that strikes the correct balance of capacity and affordability. Do you use larger servers, such as servers that support up to four or more physical CPUs and 512 GB of RAM? Or would smaller servers, such as servers that support dual physical CPUs and 64 GB of RAM, be a better choice? There is a point of diminishing returns when it comes to adding more physical CPUs and more RAM to a server. Once you pass that point, the servers get more expensive to acquire and support, but the number of VMs the servers can host doesn't increase enough to offset the increase in cost. The challenge, therefore, is finding server models that provide enough expansion for growth and then fitting them with the right amount of resources to meet your needs.

Fortunately, a deeper look into the server models available from a specific vendor, such as HP, reveals server models of all types and sizes (see Figure 2.1), including the following:

FIGURE 2.1 Servers on the HCG come in various sizes and models.


  • Half-height C-class blades, such as the BL460c and BL465c
  • Full-height C-class blades, such as the BL685c
  • Dual-socket 1U servers, such as the DL360
  • Dual-socket 2U servers, such as the DL380 and the DL385
  • Quad-socket 4U servers, such as the DL580 and DL585

You'll note that Figure 2.1 doesn't show vSphere 5.5 in the list; at the time of this writing, VMware's HCG hadn't yet been updated to include information on vSphere 5.5. However, once VMware updates its HCG to include vSphere 5.5 and vendors complete their testing, you'll be able to easily view compatibility with vSphere 5.5 using VMware's online HCG. Servers are added to the HCG as they are certified, not just at major vSphere releases.

Which server is the right server? The answer to that question depends on many factors. The number of CPU cores is often used as a determining factor, but you should also consider the total number of RAM slots. A higher number of RAM slots means that you can use lower-cost, lower-density RAM modules and still reach high memory configurations. You should also consider server expansion options, such as the number of available Peripheral Component Interconnect Express (PCIe) buses, expansion slots, and the types of expansion cards supported in the server. Finally, be sure to consider the server form factor; blade servers have advantages and disadvantages when compared to rack-mount servers.

Determining a Storage Architecture

Selecting the right storage solution is the second major decision that you must make before you proceed with your vSphere deployment. The lion's share of advanced features within vSphere—features like vSphere DRS, vSphere HA, and vSphere FT—depend on the presence of a shared storage architecture. While we won't talk in depth about any particular brand of storage hardware, VMware itself has released a feature called Virtual SAN (VSAN) with vSphere 5.5, which we'll discuss more in Chapter 6, “Creating and Configuring Storage Devices.” Because of this dependency on shared storage, deciding on the correct storage architecture for your vSphere deployment is as critical as the choice of the server hardware on which to run ESXi.

THE HCG ISN'T JUST FOR SERVERS

VMware's HCG isn't just for servers. The searchable HCG also provides compatibility information on storage arrays and other storage components. Be sure to use the searchable HCG to verify the compatibility of your host bus adapters (HBAs) and storage arrays to ensure the appropriate level of support from VMware.

VMware also has a Product Interoperability Matrix to assist with software compatibility information; it can be found at the following location:

http://partnerweb.vmware.com/comp_guide2/sim/interop_matrix.php

Fortunately, vSphere supports a number of storage architectures out of the box and has implemented a modular, plug-in architecture that will make supporting future storage technologies easier. vSphere supports storage based on Fibre Channel and Fibre Channel over Ethernet (FCoE), iSCSI-based storage, and storage accessed via Network File System (NFS). In addition, vSphere supports the use of multiple storage protocols within a single solution so that one portion of the vSphere implementation might run over Fibre Channel while another portion runs over NFS. This provides a great deal of flexibility in choosing your storage solution. Finally, vSphere provides support for software-based initiators as well as hardware initiators (also referred to as host bus adapters or converged network adapters), so this is another option you must consider when selecting your storage solution.

WHAT IS REQUIRED FOR FIBRE CHANNEL OVER ETHERNET SUPPORT?

Fibre Channel over Ethernet (FCoE) is a relatively new storage protocol. However, because FCoE was designed to be compatible with Fibre Channel, it looks, acts, and behaves like Fibre Channel to ESXi. As long as drivers for the FCoE Converged Network Adapter (CNA) are available—and this is where you would go back to the VMware HCG again—support for FCoE should not be an issue.

When determining the correct storage solution, you must consider the following questions:

  • What type of storage will best integrate with your existing storage or network infrastructure?
  • Do you have experience or expertise with some types of storage?
  • Can the storage solution provide the necessary performance to support your environment?
  • Does the storage solution offer any form of advanced integration with vSphere?

The procedures involved in creating and managing storage devices are discussed in detail in Chapter 6.

Integrating with the Network Infrastructure

The third and final major decision that you need to make during the planning process is how your vSphere deployment will integrate with the existing network infrastructure. In part, this decision is driven by the choice of server hardware and the storage protocol.

For example, an organization selecting a blade form factor may run into limitations on the number of network interface cards (NICs) that can be supported in a given blade model. This affects how the vSphere implementation will integrate with the network. Similarly, organizations choosing to use iSCSI or NFS instead of Fibre Channel will typically have to deploy more NICs in their ESXi hosts to accommodate the additional network traffic or use 10 Gigabit Ethernet. Organizations also need to account for network interfaces for vMotion and vSphere FT.

Until 10 Gigabit Ethernet (10GbE) became common, ESXi hosts in many vSphere deployments had a minimum of 6 NICs and often 8, 10, or even 12 NICs. So, how do you decide how many NICs to use? We'll discuss some of this in greater detail in Chapter 5, “Creating and Configuring Virtual Networks,” but here are some general guidelines:

  • The ESXi management network needs at least one NIC. We strongly recommend adding a second NIC for redundancy. In fact, some features of vSphere, such as vSphere HA, will note warnings if the hosts do not have redundant network connections for the management network.
  • vMotion needs a NIC. Again, we heartily recommend a second NIC for redundancy. These NICs should be at least Gigabit Ethernet. In some cases, this traffic can be safely combined with ESXi management traffic, so we'll assume that two NICs will handle both ESXi management and vMotion.
  • vSphere FT, if you will be utilizing that feature, needs a NIC. A second NIC would provide redundancy and is recommended. This should be at least a Gigabit Ethernet NIC, preferably a 10 Gigabit Ethernet NIC.
  • For deployments using iSCSI or NFS, at least one more NIC, preferably two, is needed. Gigabit Ethernet or 10 Gigabit Ethernet is necessary here. Although you can get by with a single NIC, we strongly recommend at least two.
  • Finally, at least two NICs are needed for traffic originating from the VMs themselves. Gigabit Ethernet or faster is strongly recommended for VM traffic.

This adds up to eight NICs per server (again, assuming management and vMotion share a pair of NICs). For this sort of deployment, you'll want to ensure that you have enough network ports available, at the appropriate speeds, to accommodate the needs of the vSphere deployment. This is, of course, only a rudimentary discussion of networking design for vSphere and doesn't incorporate any discussion on the use of 10 Gigabit Ethernet, FCoE (which, while a storage protocol, impacts the network design), or what type of virtual switching infrastructure you will use. All of these other factors would affect your networking setup.

HOW ABOUT 10GBE NICS?

Lots of factors go into designing how a vSphere deployment will integrate with the existing network infrastructure. For example, it has been only in the last few years that 10GbE networking has become pervasive in the datacenter. This bandwidth change fundamentally changes how virtual networks are designed.

In one particular case, a company wished to upgrade its existing rack-mount server clusters from six NICs and two Fibre Channel HBAs to two dual-port 10GbE CNAs. Not only was there a stark physical difference from a switch and cabling perspective, but the logical configuration was significantly different too. Obviously this allowed for greater bandwidth to each host, but it also allowed for more design flexibility.

The final design used VMware Network I/O Control (NIOC) and Load-Based Teaming (LBT) to share available bandwidth between the necessary types of traffic, restricting bandwidth only when the network was congested. This resulted in an efficient use of the new bandwidth capability without adding too much configuration complexity. Networking is discussed in more detail in Chapter 5.

With these questions answered, you at least have the basics of a vSphere deployment established. As we mentioned previously, this has been far from a comprehensive or complete discussion on designing a vSphere solution. We do recommend that you find a good resource on vSphere design and consider going through a comprehensive design exercise before actually deploying vSphere.

Deploying VMware ESXi

Once you've established the basics of your vSphere design, you have to decide exactly how you are going to deploy ESXi.

There are three ways to deploy ESXi:

  • Interactive installation of ESXi
  • Unattended (scripted) installation of ESXi
  • Automated provisioning of ESXi

Of these, the simplest is an interactive installation of ESXi. The most complex—but perhaps the most powerful, depending on your needs and your environment—is automated provisioning of ESXi. In the following sections, we'll describe all three of these methods for deploying ESXi in your environment.

Let's start with the simplest method first: interactively installing ESXi.

Installing VMware ESXi Interactively

VMware has done a great job of making the interactive installation of ESXi as simple and straightforward as possible. It takes just minutes to install, so let's walk through the process.

Perform the following steps to interactively install ESXi:

  1. Ensure that your server hardware is configured to boot from the CD-ROM drive.

    This will vary from manufacturer to manufacturer and will also depend on whether you are installing locally or remotely via an IP-based Keyboard, Video, Mouse (KVM) or other remote management facility.

  2. Ensure that VMware ESXi installation media are available to the server.

    Again, this will vary based on a local installation (which involves simply inserting the VMware ESXi installation CD into the optical drive) or a remote installation (which typically involves mapping an image of the installation media, known as an ISO image, to a virtual optical drive).

    OBTAINING VMWARE ESXI INSTALLATION MEDIA

    You can download the installation files from VMware's website at www.vmware.com/download/.

  3. Power on the server.

    Once it boots from the installation media, the initial boot menu screen appears, as shown in Figure 2.2.

    FIGURE 2.2 The initial ESXi installation routine has options for booting the installer or booting from the local disk.


  4. Press Enter to boot the ESXi installer.

    The installer will boot the vSphere hypervisor and eventually stop at a welcome message. Press Enter to continue.

  5. At the End User License Agreement (EULA) screen, press F11 to accept the EULA and continue with the installation.
  6. Next, the installer will display a list of available disks on which you can install or upgrade ESXi.

    Potential devices are identified as either local devices or remote devices. Figure 2.3 and Figure 2.4 show two different views of this screen: one with a local device and one with remote devices.

    FIGURE 2.3 The installer offers options for both local and remote devices; in this case, only a local device was detected.


    FIGURE 2.4 Although local SAS devices are supported, they are listed as remote devices.


    RUNNING ESXI AS A VM

You might be able to deduce from the screen shot in Figure 2.3 that we're actually running ESXi 5.5 as a VM. Yes, that's right—you can virtualize ESXi! In this particular case, we're using VMware's desktop virtualization solution for Mac OS X, VMware Fusion, to run an instance of ESXi as a VM. As of this writing, the latest version of VMware Fusion is 5, and it includes ESXi 5 as an officially supported guest OS.

    Storage area network logical unit numbers, or SAN LUNs, are listed as remote, as you can see in Figure 2.4. Local serial attached SCSI (SAS) devices are also listed as remote. Figure 2.4 shows a SAS drive connected to an LSI Logic controller; although this device is physically local to the server on which we are installing ESXi, the installation routine marks it as remote.

    If you want to create a boot-from-SAN environment, where each ESXi host boots from a SAN LUN, then you'd select the appropriate SAN LUN here. You can also install directly to your own USB or Secure Digital (SD) device—simply select the appropriate device from the list.

    WHICH DESTINATION IS BEST?

    Local device, SAN LUN, or USB? Which destination is the best when you're installing ESXi? Those questions truly depend on the overall vSphere design you are implementing, and there is no simple answer. Many variables affect this decision. Are you using an iSCSI SAN and you don't have iSCSI hardware initiators in your servers? That would prevent you from using a boot-from-SAN setup. Are you installing into an environment like Cisco UCS, where booting from SAN is highly recommended? Be sure to consider all the factors when deciding where to install ESXi.

  7. To get more information about a device, highlight the device and press F1.

    The information about the device includes whether it detected an installation of ESXi and what Virtual Machine File System (VMFS) datastores, if any, are present on it, as shown in Figure 2.5. Press Enter to return to the device-selection screen when you have finished reviewing the information for the selected device.

    FIGURE 2.5 Checking to see if there are any VMFS datastores on a device can help prevent accidentally overwriting data.


  8. Use the arrow keys to select the device on which you are going to install ESXi, and press Enter.
  9. If the selected device includes a VMFS datastore or an installation of ESXi, you'll be prompted to choose what action you want to take, as illustrated in Figure 2.6. Select the desired action and press Enter.

    FIGURE 2.6 You can upgrade or install ESXi as well as choose to preserve or overwrite an existing VMFS datastore.


    These are the available actions:

    • Upgrade ESXi, Preserve VMFS Datastore: This option upgrades to ESXi 5.5 and preserves the existing VMFS datastore.
    • Install ESXi, Preserve VMFS Datastore: This option installs a fresh copy of ESXi 5.5 and preserves the existing VMFS datastore.
    • Install ESXi, Overwrite VMFS Datastore: This option overwrites the existing VMFS datastore with a new one and installs a fresh installation of ESXi 5.5.
  10. Select the desired keyboard layout and press Enter.
  11. Enter (and confirm) a password for the root account. Press Enter when you are ready to continue with the installation. Be sure to make note of this password—you'll need it later.
  12. At the final confirmation screen, press F11 to proceed with the installation of ESXi.

    After the installation process begins, it takes only a few minutes to install ESXi onto the selected storage device.

  13. Press Enter to reboot the host at the Installation Complete screen.

After the host reboots, ESXi is installed. ESXi is configured by default to obtain an IP address via Dynamic Host Configuration Protocol (DHCP). Depending on the network configuration, you might find that ESXi will not be able to obtain an IP address via DHCP. Later in this chapter, in the section “Reconfiguring the Management Network,” we'll discuss how to correct networking problems after installing ESXi by using the Direct Console User Interface (DCUI).

VMware also provides support for scripted installations of ESXi. As you've already seen, there isn't a lot of interaction required to install ESXi, but support for scripting the installation of ESXi reduces the time to deploy even further.

INTERACTIVELY INSTALLING ESXI FROM USB OR ACROSS THE NETWORK

As an alternative to launching the ESXi installer from the installation CD/DVD, you can install ESXi from a USB flash drive or across the network via Preboot Execution Environment (PXE). More details on how to use a USB flash drive or to PXE boot the ESXi installer are found in the vSphere Installation and Setup Guide, available from www.vmware.com/go/support-pubs-vsphere. Note that PXE booting the installer is not the same as PXE booting ESXi itself, something that we'll discuss later in the section “Deploying VMware ESXi with vSphere Auto Deploy.”

Performing an Unattended Installation of VMware ESXi

ESXi supports the use of an installation script (often referred to as a kickstart, or KS, script) that automates the installation routine. By using an installation script, users can create unattended installation routines that make it easy to quickly deploy multiple instances of ESXi.

ESXi comes with a default installation script on the installation media. Listing 2.1 shows the default installation script.

LISTING 2.1: ESXi provides a default installation script

#
# Sample scripted installation file
#
# Accept the VMware End User License Agreement
vmaccepteula
# Set the root password for the DCUI and Tech Support Mode
rootpw mypassword
# Install on the first local disk available on machine
install --firstdisk --overwritevmfs
# Set the network to DHCP on the first network adapter
network --bootproto=dhcp --device=vmnic0
# A sample post-install script
%post --interpreter=python --ignorefailure=true
import time
stampFile = open('/finished.stamp', mode='w')
stampFile.write( time.asctime() )

If you want to use this default install script to install ESXi, you can specify it when booting the VMware ESXi installer by adding the ks=file://etc/vmware/weasel/ks.cfg boot option. We'll show you how to specify that boot option shortly.

Of course, the default installation script is useful only if the settings work for your environment. Otherwise, you'll need to create a custom installation script. The installation script commands are much the same as those supported in previous versions of vSphere. Here's a breakdown of some of the commands supported in the ESXi installation script:

accepteula or vmaccepteula These commands accept the ESXi license agreement.

install The install command specifies that this is a fresh installation of ESXi, not an upgrade. You must also specify the following parameters:

  • --firstdisk Specifies the disk on which ESXi should be installed. By default, the ESXi installer chooses local disks first, then remote disks, and then USB disks. You can change the order by appending a comma-separated list to the --firstdisk command, like this: --firstdisk=remote,local. This would install to the first available remote disk and then, if none is found, to the first available local disk. Be careful here—you don't want to inadvertently overwrite something (see the next set of commands).
  • --overwritevmfs or --preservevmfs These commands specify how the installer will handle existing VMFS datastores. The commands are pretty self-explanatory.

keyboard This command specifies the keyboard type. It's an optional component in the installation script.

network This command provides the network configuration for the ESXi host being installed. It is optional but generally recommended. Depending on your configuration, some of the additional parameters are required:

  • --bootproto This parameter is set to dhcp for assigning a network address via DHCP or to static for manual assignment of an IP address.
  • --ip This sets the IP address and is required with --bootproto=static. The IP address should be specified in standard dotted-decimal format.
  • --gateway This command specifies the IP address of the default gateway in standard dotted-decimal format. It's required if you specified --bootproto=static.
  • --netmask The network mask, in standard dotted-decimal format, is specified with this command. If you specify --bootproto=static, you must include this value.
  • --hostname Specifies the hostname for the installed system.
  • --vlanid If you need the system to use a VLAN ID, specify it with this command. Without a VLAN ID specified, the system will respond only to untagged traffic.
  • --addvmportgroup This parameter is set to either 0 or 1 and controls whether a default VM Network port group is created. 0 does not create the port group; 1 does create the port group.
reboot This command is optional and, if specified, automatically reboots the system at the end of the installation. If you add the --noeject parameter, the CD is not ejected.

rootpw This required command sets the root password for the system. If you don't want the root password displayed in the clear, generate an encrypted password and use the --iscrypted parameter.

upgrade This command specifies an upgrade to ESXi 5.5. The upgrade command uses many of the same parameters as install and also supports the --deletecosvmdk parameter for deleting the ESX Service Console VMDK when upgrading from ESX to ESXi.

This is by no means a comprehensive list of all the commands available in the ESXi installation script, but it does cover the majority of the commands you'll see in use.
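To show how these commands fit together, here is a hypothetical custom installation script that performs a fresh install to the first local disk and assigns a static IP address. All addresses, the hostname, and the password are placeholders, and the exact values would of course need to match your environment:

```
# Accept the VMware End User License Agreement
vmaccepteula
# Set the root password (placeholder; consider --iscrypted in production)
rootpw MyS3cretPassword
# Fresh install to the first local disk, overwriting any existing VMFS datastore
install --firstdisk --overwritevmfs
# Static network configuration (example addresses only)
network --bootproto=static --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.254 --hostname=esxi01.example.local --addvmportgroup=1
# Reboot automatically when the installation completes
reboot --noeject
```

Because --bootproto=static is used, the --ip, --netmask, and --gateway parameters are all required, as described in the preceding list.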

Looking back at Listing 2.1, you'll see that the default installation script incorporates a %post section, where additional scripting can be added using either the Python interpreter or the BusyBox interpreter. What you don't see in Listing 2.1 is the %firstboot section, which also allows you to add Python or BusyBox commands for customizing the ESXi installation. This section comes after the installation script commands but before the %post section. Any command supported in the ESXi shell can be executed in the %firstboot section, so commands such as vim-cmd, esxcfg-vswitch, esxcfg-vmknic, and others can be combined in the %firstboot section of the installation script.
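As a minimal sketch of what a %firstboot section might look like, the following fragment uses the BusyBox interpreter to adjust the default virtual networking after the first boot. The vSwitch name is the default, but the uplink and port group names are illustrative and assume a second physical NIC is present:

```
# Run these commands on the first boot after installation
%firstboot --interpreter=busybox
# Add a second uplink to the default vSwitch (vmnic1 is illustrative)
esxcfg-vswitch --link=vmnic1 vSwitch0
# Create a port group for vMotion traffic on the default vSwitch
esxcfg-vswitch --add-pg=vMotion vSwitch0
```

This section would be placed after the installation script commands and before the %post section shown in Listing 2.1.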

A number of commands that were supported in previous versions of vSphere (by ESX or ESXi) are no longer supported in installation scripts for ESXi 5.5, such as these:

  • autopart (replaced by install, upgrade, or installorupgrade)
  • auth or authconfig
  • bootloader
  • esxlocation
  • firewall
  • firewallport
  • serialnum or vmserialnum
  • timezone
  • virtualdisk
  • zerombr
  • The --level option of %firstboot

Once you have created the installation script you will use, you need to specify that script as part of the installation routine.
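Before pointing the installer at a custom script, it can be worth sanity-checking the script for the commands the installer requires. The following is a hypothetical helper, not part of any VMware tooling, that flags a script missing the EULA acceptance, a root password, or an install/upgrade directive:

```python
def check_ks_script(text):
    """Return a list of problems found in an ESXi kickstart script (sketch)."""
    problems = []
    # Collect the first word of each non-blank, non-comment line
    commands = [line.split()[0] for line in text.splitlines()
                if line.strip() and not line.strip().startswith('#')]
    if not {'accepteula', 'vmaccepteula'} & set(commands):
        problems.append('missing EULA acceptance (accepteula/vmaccepteula)')
    if 'rootpw' not in commands:
        problems.append('missing rootpw')
    if not {'install', 'upgrade', 'installorupgrade'} & set(commands):
        problems.append('missing install/upgrade directive')
    return problems

sample = """vmaccepteula
rootpw mypassword
install --firstdisk --overwritevmfs
network --bootproto=dhcp --device=vmnic0"""
print(check_ks_script(sample))  # an empty list means no problems were found
```

A check like this catches only missing commands, not invalid parameters; the installer itself remains the final arbiter of whether a script is valid.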

Specifying the location of the installation script as a boot option is not only how you would tell the installer to use the default script but also how you tell the installer to use a custom installation script that you've created. This installation script can be located on a USB flash drive or in a network location accessible via NFS, HTTP, HTTPS, or FTP. Table 2.1 summarizes some of the supported boot options for use with an unattended installation of ESXi.

TABLE 2.1: Boot options for an unattended ESXi installation

BOOT OPTION BRIEF DESCRIPTION
ks=cdrom:/path Uses the installation script found at path on the CD-ROM. The installer checks all CD-ROM drives until the file matching the specified path is found.
ks=usb Uses the installation script named ks.cfg found in the root directory of an attached USB device. All USB devices are searched as long as they have a FAT16 or FAT32 file system.
ks=usb:/path Uses the installation script at the specified path on an attached USB device. This allows you to use a different filename or location for the installation script.
ks=protocol:/serverpath Uses the installation script found at the specified network location. The protocol can be NFS, HTTP, HTTPS, or FTP.
ip=XX.XX.XX.XX Specifies a static IP address for downloading the installation script and the installation media.
nameserver=XX.XX.XX.XX Provides the IP address of a Domain Name System (DNS) server to use for name resolution when downloading the installation script or the installation media.
gateway=XX.XX.XX.XX Provides the network gateway to be used as the default gateway for downloading the installation script and the installation media.
netmask=XX.XX.XX.XX Specifies the network mask for the network interface used to download the installation script or the installation media.
vlanid=XX Configures the network interface to be on the specified VLAN when downloading the installation script or the installation media.

NOT A COMPREHENSIVE LIST OF BOOT OPTIONS

The list found in Table 2.1 includes only some of the more commonly used boot options for performing a scripted installation of ESXi. For the complete list of supported boot options, refer to the vSphere Installation and Setup Guide, available from www.vmware.com/go/support-pubs-vsphere.

To use one or more of these boot options during the installation, you'll need to specify them at the boot screen for the ESXi installer. The bottom of the installer boot screen states that you can press Shift+O to edit the boot options.

The following code line is an example that could be used to retrieve the installation script from an HTTP URL; this would be entered at the prompt at the bottom of the installer boot screen:

<ENTER: Apply options and boot> <ESC: Cancel>
> runweasel ks=http://192.168.1.1/scripts/ks.cfg ip=192.168.1.200
 netmask=255.255.255.0 gateway=192.168.1.254

Using an installation script to install ESXi not only speeds up the installation process but also helps to ensure the consistent configuration of all your ESXi hosts.

The final method for deploying ESXi—using vSphere Auto Deploy—is the most complex, but it also offers administrators a great deal of flexibility.

Deploying VMware ESXi with vSphere Auto Deploy

vSphere Auto Deploy can be configured with one of three different modes:

  • Stateless
  • Stateless Caching
  • Stateful Install

In the Stateless mode, you deploy ESXi using Auto Deploy, but you aren't actually installing ESXi. Instead of actually installing ESXi onto a local disk or a SAN boot LUN, you are building an environment where ESXi is directly loaded into memory on a host as it boots.

In the next mode, Stateless Caching, you deploy ESXi using Auto Deploy just as with Stateless, but the image is cached on the server's local disk or SAN boot LUN. In the event that the Auto Deploy infrastructure is not available, the host boots from a local cache of the image.

The third mode, Stateful Install, is very similar to Stateless Caching except the server's boot order is reversed: local disk first and network second. Unless the server is specifically told to network boot again, the Auto Deploy service is no longer needed. This mode is effectively just a mechanism for network installation.

Auto Deploy uses a set of rules (called deployment rules) to control which hosts are assigned a particular ESXi image (called an image profile). Deploying a new ESXi image is as simple as modifying the deployment rule to point that physical host to a new image profile and then rebooting with the PXE/network boot option. When the host boots up, it will receive a new image profile.

Sounds easy, right? Maybe not. In theory, it is—but there are several steps you have to accomplish before you're ready to actually deploy ESXi in this fashion:

  1. You must set up a vSphere Auto Deploy server. This is the server that stores the image profiles.
  2. You must set up and configure a Trivial File Transfer Protocol (TFTP) server on your network.
  3. You must configure a DHCP server on your network to pass the correct information to hosts booting up.
  4. You must create an image profile using PowerCLI.
  5. Still using PowerCLI, you must create a deployment rule that assigns the image profile to a particular subset of hosts.

AUTO DEPLOY DEPENDENCIES

This chapter deals with ESXi host installation methods; however, vSphere Auto Deploy is dependent on Host Profiles, a feature of VMware vCenter. More information about installing vCenter and configuring Host Profiles can be found in Chapter 3, “Installing and Configuring vCenter Server.”

Once you've completed these five steps, you're ready to start provisioning hosts with ESXi. When everything is configured and in place, the process looks something like this:

  1. When the physical server boots, the server starts a PXE boot sequence. The DHCP server assigns an IP address to the host and provides the IP address of the TFTP server as well as a boot filename to download.
  2. The host contacts the TFTP server and downloads the specified filename, which contains the gPXE boot file and a gPXE configuration file.
  3. gPXE executes; this causes the host to make an HTTP boot request to the Auto Deploy server. This request includes information about the host, the host hardware, and host network information. This information is written to the server console when gPXE is executing, as you can see in Figure 2.7.

    FIGURE 2.7 Host information is echoed to the server console when it performs a network boot.

    images

  4. Based on the information passed to it from gPXE (the host information shown in Figure 2.7), the Auto Deploy server matches the server against a deployment rule and assigns the correct image profile. The Auto Deploy server then streams the assigned ESXi image across the network to the physical host.

When the host has finished booting, you have a system running ESXi. The Auto Deploy server also has the ability to automatically join the ESXi host to vCenter Server and assign a host profile (which we'll discuss in a bit more detail in Chapter 3) for further configuration. As you can see, this system potentially offers administrators tremendous flexibility and power.

Ready to get started with provisioning ESXi hosts using Auto Deploy? Let's start with setting up the vSphere Auto Deploy server.

INSTALLING THE VSPHERE AUTO DEPLOY SERVER

The vSphere Auto Deploy server is where the various ESXi image profiles are stored. The image profile is transferred from this server via HTTP to a physical host when it boots. The image profile is the actual ESXi image, and it comprises multiple VIB files. VIBs are ESXi software packages; these could be drivers, Common Information Model (CIM) providers, or other applications that extend or enhance the ESXi platform. Both VMware and its partners can distribute software as VIBs.

You can install vSphere Auto Deploy on the same system as vCenter Server or on a separate Windows Server–based system (this could certainly be a VM). In addition, the vCenter virtual appliance comes preloaded with the Auto Deploy server installed. If you want to use the vCenter virtual appliance, you need only deploy the appliance and configure the service from the web-based administrative interface. We'll describe the process for deploying the vCenter virtual appliance in more detail in Chapter 3. In this section, we'll walk you through installing the Auto Deploy server on a separate Windows-based system.

Perform the following steps to install the vSphere Auto Deploy server:

  1. Make the vCenter Server installation media available to the Windows Server–based system where you will be installing Auto Deploy.

    If this is a VM, you can map the vCenter Server installation ISO to the VM's CD/DVD drive.

  2. From the VMware vCenter Installer screen, select VMware Auto Deploy and click Install.
  3. Choose the language for the installer and click OK.

    This will launch the vSphere Auto Deploy installation wizard.

  4. Click Next at the first screen of the installation wizard.
  5. Click Next to acknowledge the VMware patents.
  6. Select I Accept The Terms In The License Agreement, and click Next to continue.
  7. Click Next to accept the default installation location, the default repository location, and the default maximum repository size.

    If you need to change locations, use either of the Change buttons; if you need to change the repository size, specify a new value in gigabytes (GB).

  8. If you are installing on a system separate from vCenter Server, specify the IP address or name of the vCenter Server with which this Auto Deploy server should register.

    You'll also need to provide a username and password. Click Next when you have finished entering this information.

  9. Click Next to accept the default Auto Deploy server port.
  10. Click Next to accept the Auto Deploy server identifying itself on the network via its IP address (be sure to select the correct address if your server has multiple NICs).
  11. Click Install to install the Auto Deploy server.
  12. Click Finish to complete the installation.

If you now go back to the vSphere Client (if you haven't installed it yet, skip ahead to the section “Installing the vSphere C# Client” and then come back) and connect to vCenter Server, you'll see a new Auto Deploy icon on the vSphere Client's home page. Click it to see information about the registered Auto Deploy server. Figure 2.8 shows the Auto Deploy screen after we installed and registered an Auto Deploy server with vCenter Server.

FIGURE 2.8 This screen provides information about the Auto Deploy server that is registered with vCenter Server.

images

That's it for the Auto Deploy server itself; once it's been installed and is up and running, there's very little additional work or configuration required, except configuring TFTP and DHCP on your network to support vSphere Auto Deploy. The next section provides an overview of the required configurations for TFTP and DHCP.

CONFIGURING TFTP AND DHCP FOR AUTO DEPLOY

The exact procedures for configuring TFTP and DHCP are going to vary based on the specific TFTP and DHCP servers you are using on your network. For example, configuring the ISC DHCP server to support vSphere Auto Deploy is dramatically different from configuring the DHCP Server service provided with Windows Server. As a matter of necessity, then, we can provide only high-level information in the following section. Refer to your specific vendor's documentation for details on how the configuration is carried out.

Configuring TFTP

For TFTP, you only need to upload the appropriate TFTP boot files to the TFTP directory. The Download TFTP Boot Zip hyperlink shown in Figure 2.8 provides the necessary files. Simply download the Zip file using that link, unzip the file, and place the contents of the unzipped file in the TFTP directory on the TFTP server.

Configuring DHCP

For DHCP, you need to specify two additional DHCP options:

  • Option 66, referred to as next-server or as Boot Server Host Name, must specify the IP address of the TFTP server.
  • Option 67, called boot-filename or Bootfile Name, should contain the value undionly.kpxe.vmw-hardwired.

If you want to identify hosts by IP address in the deployment rules, then you'll need a way to ensure that the host gets the IP address you expect. You can certainly use DHCP reservations to accomplish this, if you like; just be sure that options 66 and 67 apply to the reservation as well.
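On an ISC DHCP server, for example, these two options map to the next-server and filename directives. The following sketch assumes an illustrative subnet, TFTP server address, and MAC address; only the filename value comes from this chapter:

```text
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.199;
  option routers 192.168.1.254;

  # Option 66: IP address of the TFTP server holding the gPXE boot files
  next-server 192.168.1.50;

  # Option 67: boot filename provided in the Auto Deploy TFTP Boot Zip
  filename "undionly.kpxe.vmw-hardwired";

  # Example reservation so a host always receives a predictable address
  host esxi-01 {
    hardware ethernet 00:50:56:01:02:03;
    fixed-address 192.168.1.225;
  }
}
```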

Once you've configured TFTP and DHCP, you're ready to PXE boot your server, but you still need to create the image profile to deploy ESXi.

CREATING AN IMAGE PROFILE

The process for creating an image profile may seem counterintuitive at first; it did for us. Creating an image profile involves first adding at least one software depot. A software depot could be a directory structure of files and folders on an HTTP server, or (more commonly) it could be an offline depot in the form of a Zip file. You can add multiple software depots.

Some software depots will already have one or more image profiles defined, and you can define additional image profiles (usually by cloning an existing image profile). You'll then have the ability to add software packages (in the form of VIBs) to the image profile you've created. Once you've finished adding or removing software packages or drivers from the image profile, you can export the image profile (either to an ISO or as a Zip file for use as an offline depot).

All image profile tasks are accomplished using PowerCLI, so you'll need to ensure that you have a system with PowerCLI installed in order to perform these tasks. We'll describe PowerCLI, along with other automation tools, in more detail in Chapter 14, “Automating VMware vSphere.” In the next part of this section, we'll walk you through creating an image profile based on the ESXi 5.5.0 offline depot Zip file available for downloading by registered customers.

Perform the following steps to create an image profile:

  1. At a PowerCLI prompt, use the Connect-VIServer cmdlet to connect to vCenter Server.
  2. Use the Add-EsxSoftwareDepot command to add the ESXi 5.5.0 offline depot file:
    Add-EsxSoftwareDepot C:\vmware-ESXi-5.5.0-XXXXXX-depot.zip
  3. Repeat the Add-EsxSoftwareDepot command to add other software depots as necessary. The code listed below adds the online depot file:
    Add-EsxSoftwareDepot
    https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
  4. Use the Get-EsxImageProfile command to list all image profiles in all currently visible depots.
  5. To create a new image profile, clone an existing profile (existing profiles are typically read-only) using the New-EsxImageProfile command:
    New-EsxImageProfile -CloneProfile "ESXi-5.5.0-XXXXXX-standard"
    -Name "My_Custom_Profile"

Once you have an image profile established, you can customize it by adding VIBs, or you can export it. You might want to export the image profile because once you exit a PowerCLI session in which you've created image profiles, those image profiles will not be available when you start a new session. By exporting the image profile as a Zip file offline depot, you can easily add it back in when you start a new session.
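Customizing the profile uses the Image Builder cmdlets. The following sketch assumes the depot containing the desired VIB has already been added with Add-EsxSoftwareDepot; the package name shown is a hypothetical placeholder, not a real VIB from this chapter:

```powershell
# List the software packages (VIBs) visible in the currently added depots
Get-EsxSoftwarePackage

# Add a package to the custom image profile (package name is hypothetical)
Add-EsxSoftwarePackage -ImageProfile "My_Custom_Profile" `
    -SoftwarePackage "net-driver-example"
```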

To export an image profile as a Zip file offline depot, run this command:

Export-EsxImageProfile -ImageProfile "My_Custom_Profile" -ExportToBundle
-FilePath "C:\path\to\IP-file-offline-depot.zip"

When you start a new PowerCLI session to work with an image profile, simply add this offline depot with the Add-EsxSoftwareDepot command.

The final step is establishing deployment rules that link image profiles to servers in order to provision ESXi to them at boot time. We'll describe how to do this in the next section.

ESTABLISHING DEPLOYMENT RULES

The deployment rules are where the “rubber meets the road” for vSphere Auto Deploy. When you define a deployment rule, you are linking an image profile to one or more hosts. It's at this point that vSphere Auto Deploy will copy all the VIBs defined in the specified image profile up to the Auto Deploy server so that they are accessible from the hosts. Once a deployment rule is in place, you can actually begin provisioning hosts via Auto Deploy (assuming all the other pieces are in place and functioning correctly, of course).

As with image profiles, deployment rules are managed via PowerCLI. You'll use the New-DeployRule and Add-DeployRule commands to define new deployment rules and add them to the working rule set, respectively.

Perform the following steps to define a new deployment rule:

  1. In a PowerCLI session where you've previously connected to vCenter Server and defined an image profile, use the New-DeployRule command to define a new deployment rule that matches an image profile to a physical host:
    New-DeployRule -Name "Img_Rule" -Item "My_Custom_Profile"
    -Pattern "vendor=Cisco", "ipv4=10.1.1.225,10.1.1.250"

    This rule assigns the image profile named My_Custom_Profile to all hosts with Cisco in the vendor string and having either the IP address 10.1.1.225 or 10.1.1.250. You could also specify an IP range like 10.1.1.225-10.1.1.250 (using a hyphen to separate the start and end of the IP address range).

  2. Next, create a deployment rule that assigns the ESXi host to a cluster within vCenter Server:
    New-DeployRule -Name "Default_Cluster" -Item "Cluster-1" -AllHosts

    This rule puts all hosts into the cluster named Cluster-1 in the vCenter Server with which the Auto Deploy server is registered. (Recall that an Auto Deploy server must be registered with a vCenter Server instance.)

  3. Add these rules to the working rule set:
    Add-DeployRule Img_Rule
    Add-DeployRule Default_Cluster

    As soon as you add the deployment rules to the working rule set, vSphere Auto Deploy will, if necessary, start uploading VIBs to the Auto Deploy server in order to satisfy the rules you've defined.

  4. Verify that these rules have been added to the working rule set with the Get-DeployRuleSet command.
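For hosts that have already been provisioned, the Auto Deploy cmdlets can also check whether they still comply with the working rule set after you change it. The following is a sketch; the hostname is an example, and it assumes the host is already registered with vCenter Server:

```powershell
# Show the rules currently in the working rule set
Get-DeployRuleSet

# Check a provisioned host against the working rule set (hostname is an example)
$result = Test-DeployRuleSetCompliance -VMHost (Get-VMHost "esx-01")
$result.ItemList

# Bring the host back into compliance with the current rules
Repair-DeployRuleSetCompliance $result
```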

Now that a deployment rule is in place, you're ready to provision via Auto Deploy. Boot the physical host that matches the patterns you defined in the deployment rule, and it should follow the boot sequence described at the start of this section. Figure 2.9 shows what it looks like when a host is booting ESXi via vSphere Auto Deploy.

By now, you should be starting to see the flexibility that Auto Deploy offers. If you need to deploy a new ESXi image, you need only define a new image profile (using a new software depot, if necessary), assign that image profile with a deployment rule, and reboot the physical servers. When the servers come up, they will boot the newly assigned ESXi image via PXE.

FIGURE 2.9 Note the differences in the ESXi boot process when using Auto Deploy versus a traditional installation of ESXi.

images

Of course, there are some additional concerns that you'll need to address should you decide to go this route:

  • The image profile doesn't contain any ESXi configuration state information, such as virtual switches, security settings, advanced parameters, and so forth. Host profiles are used to store this configuration state information in vCenter Server and pass that configuration information down to a host automatically. You can use a deployment rule to assign a host profile, or you can assign a host profile to a cluster and then use a deployment rule to join hosts to a cluster. We'll describe host profiles in greater detail in Chapter 3.
  • State information such as log files, generated private keys, and so forth is stored in host memory and is lost during a reboot. Therefore, you must configure additional settings such as setting up syslog for capturing the ESXi logs. Otherwise, this vital operational information is lost every time the host is rebooted. The configuration for capturing this state information can be included in a host profile that is assigned to a host or cluster.

In the Auto Deploy Stateless mode, the ESXi image doesn't contain configuration state and doesn't maintain dynamic state information; such hosts are therefore considered stateless ESXi hosts. All the state information is stored elsewhere rather than on the host itself.

Real World Scenario

ENSURING AUTO DEPLOY IS AVAILABLE

Author Nick Marshall says, “When working with a customer with vSphere 5.0 Auto Deploy, we had to ensure that all Auto Deploy components were highly available. This meant that designing the infrastructure responsible for booting and deploying ESXi hosts was more complicated than normal. Services such as PXE and Auto Deploy and the vCenter VMs were all deployed on hosts that were not provisioned using Auto Deploy in a separate management cluster.

As per the Highly Available Auto Deploy best practices in the vSphere documentation, building a separate cluster with a local installation or boot from SAN will ensure there is no chicken-and-egg situation. You need to ensure that in a completely virtualized environment your VMs that provision ESXi hosts with Auto Deploy are not running on the ESXi hosts they need to build.”

STATELESS CACHING MODE

Unless your ESXi host hardware has no local disks or bootable SAN storage, we recommend considering one of the other two Auto Deploy modes. These modes offer resiliency for your hosts if the Auto Deploy services ever become unavailable.

To configure Stateless Caching, follow the previous procedure for Stateless with these additions:

  1. Within vCenter, navigate to the Host Profiles section: vCenter → Home → Host Profiles.
  2. Create a new host profile or edit the existing one attached to your host.
  3. Navigate to System Image Cache Configuration under Advanced Configuration Settings.
  4. Select Enable Stateless Caching On The Host.
  5. Input the disk configuration details, using the same disk syntax as listed earlier in the section “Performing an Unattended Installation of VMware ESXi.” By default it will populate the first available disk, as you can see in Figure 2.10.
  6. Click Finish to end the Host Profile Wizard.
  7. Next you need to configure the boot order in the host BIOS to boot from the network first, and the local disk second. This procedure will differ depending on your server type.
  8. Reboot the host so that it loads a fresh Auto Deploy image and the new host profile is attached.

This configuration tells the ESXi host to take the Auto Deploy image loaded in memory and save it to the local disk after a successful boot. If for some reason the network or Auto Deploy server is unavailable when your host reboots, it will fall back and boot the cached copy on its local disk.

FIGURE 2.10 Editing the host profile to allow Stateless Caching on a local disk

images

STATEFUL MODE

Just like Stateless Caching mode, the Auto Deploy Stateful mode is configured by editing host profiles within vCenter and the boot order settings in the host BIOS.

  1. Within vCenter, navigate to the Host Profiles section: vCenter → Home → Host Profiles.
  2. Create a new host profile or edit the existing one attached to your host.
  3. Navigate to System Image Cache Configuration under Advanced Configuration Settings.
  4. Select Enable Stateful Installs On The Host.
  5. Input the disk configuration details, using the same disk syntax as listed earlier in the section “Performing an Unattended Installation of VMware ESXi.” By default it will populate the first available disk (see Figure 2.10).
  6. Click Finish to end the Host Profile Wizard.
  7. Next you need to configure the boot order in the host BIOS to boot from the local disk first, and the network second. This procedure will differ depending on your server type.
  8. The host will boot into Maintenance mode, and you need to apply the host profile by clicking Remediate Host on the host Summary tab.
  9. You will need to provide IP addresses for the host and then reboot the host.
  10. Upon this reboot, the host is now running off the local disk like a “normally provisioned” ESXi host.

vSphere Auto Deploy offers some great advantages, especially for environments with lots of ESXi hosts to manage, but it can also add complexity. As mentioned earlier, it all comes down to the design and requirements of your vSphere deployment.

Performing Post-installation Configuration

Whether you are installing from a CD/DVD or performing an unattended installation of ESXi, once the installation is complete, there are several post-installation steps that are necessary or might be necessary, depending on your specific configuration. We'll discuss these tasks in the following sections.

Installing the vSphere C# Client

This might come as a bit of shock for IT professionals who have grown accustomed to managing Microsoft Windows–based servers from the server's console (even via Remote Desktop), but ESXi wasn't designed for you to manage it from the server's console. Instead, you should use the vSphere Client.

In earlier versions, ESXi and vCenter were administered with the C# (pronounced “see sharp”) Client. vSphere 5.0 introduced the Web Client. Although the first iteration of the Web Client was not as feature rich as the C# Client, with vSphere 5.1 and 5.5 the tables have turned. To ensure that you can follow which client the instructions are for, we will use the terms vSphere Client and Web Client.

The vSphere Client is a Windows-only application that allows for connecting directly to an ESXi host or to a vCenter Server installation. The only difference in the tools used is that connecting directly to an ESXi host requires authentication with a user account that exists on that specific host, while connecting to a vCenter Server installation relies on Windows users for authentication. Additionally, some features of the vSphere Client—such as initiating vMotion, for example—are available only when connecting to a vCenter Server installation.

LEARNING A NEW USER INTERFACE

For those who are already used to the vSphere Client, things can feel a little awkward, but learning the new web-based client for vCenter is certainly necessary. While you will be able to perform more traditional tasks in the vSphere Client, the Web Client helps you unlock the full potential when using vSphere 5.5. We'll focus primarily on the vSphere Web Client in this book unless we are directly administering the hosts (as is the case in this chapter) or when using vSphere Client plug-ins that are not currently available in the vSphere Web Client.

You can install either of the vSphere Clients with the vCenter Server installation media. Figure 2.11 shows the VMware vCenter Installer with the vSphere Client option selected.

FIGURE 2.11 You can install the vSphere Client directly from the vCenter Server installation media.

images

In previous versions of VMware vSphere, one of the easiest installation methods was to simply connect to an ESX/ESXi host or a vCenter Server instance using your web browser. From there, you clicked a link to download the vSphere Client right from the web page. From vSphere 5.0 onward, the vSphere Client download link for ESXi hosts doesn't point to a local copy of the installation files; it redirects you to a VMware-hosted website to download the files. The vSphere Client download link for vCenter Server 5.5, though, still points to a local copy of the vSphere Client installer.

Because you might not have installed vCenter Server yet—that is the focus of the next chapter, Chapter 3—we'll walk you through installing the vSphere Client from the vCenter Server installation media. Regardless of how you obtain the installer, once the installation wizard starts, the process is the same. It is also worth noting that ESXi cannot be directly managed with the Web Client, so you will probably want to install both clients at some point. Refer to Chapter 3 for details on the Web Client installation.

Perform the following steps to install the vSphere Client from the vCenter Server installation media:

  1. Make the vCenter Server installation media available via CD/DVD to the system where you want to install the vSphere Client.

    If you are installing the vSphere Client on a Windows VM, you can mount the vCenter Server installation ISO image as a virtual CD/DVD image. Refer to Chapter 7, “Ensuring High Availability and Business Continuity,” for more details if you are unsure how to attach a virtual CD/DVD image.

  2. If Autorun doesn't automatically launch the VMware vCenter Installer (shown previously in Figure 2.11), navigate to the CD/DVD and double-click Autorun.exe.
  3. From the VMware vCenter Installer main screen, click vSphere Client under VMware Product Installers, and then click Install.
  4. Select the language for the installer and click OK.
  5. Click the Next button on the welcome page of the Virtual Infrastructure Client Wizard.
  6. Click Next at the End User Patent Agreement screen.
  7. Click the radio button labeled I Accept The Terms In The License Agreement, and then click the Next button.
  8. Specify a username and organization name, and then click the Next button.
  9. Configure the destination folder, and then click the Next button.
  10. Click the Install button to begin the installation.
  11. If prompted, select I Have Read And Accept The Terms Of The License Agreement, and then click Install to install the Microsoft .NET Framework, which is a prerequisite for the vSphere Client.
  12. When the .NET Framework installation completes (if applicable), click Exit to continue with the rest of the vSphere Client installation.
  13. Click the Finish button to complete the installation. Restart the computer if prompted.

64-BIT VS. 32-BIT

Although the vSphere Client can be installed and is supported on 64-bit Windows operating systems, the vSphere Client itself remains a 32-bit application and runs in 32-bit compatibility mode.

Reconfiguring the Management Network

During the installation of ESXi, the installer creates a virtual switch—also known as a vSwitch—bound to a physical NIC. The tricky part, depending on your server hardware, is that the installer might select a different physical NIC than the one you need for correct network connectivity. Consider the scenario depicted in Figure 2.12. If, for whatever reason, the ESXi installer doesn't link the correct physical NIC to the vSwitch it creates, then you won't have network connectivity to that host. We'll talk more about why ESXi's network connectivity must be configured with the correct NIC in Chapter 5, but for now just understand that this is a requirement for connectivity. Since you need network connectivity to manage the host from the vSphere Client, how do you fix this?

FIGURE 2.12 Network connectivity won't be established if the ESXi installer links the wrong NIC to the management network.

images

The simplest fix for this problem is to unplug the network cable from the current Ethernet port in the back of the server and continue trying the remaining ports until the host is accessible, but that's not always possible or desirable. The better way is to use the DCUI to reconfigure the management network so that it is configured the way you need it to be.

Perform the following steps to fix the management NIC in ESXi using the DCUI:

  1. Access the console of the ESXi host, either physically or via a remote console solution such as an IP-based KVM.
  2. On the ESXi home screen, shown in Figure 2.13, press F2 for Customize System/View Logs. If a root password has been set, enter that root password.
  3. From the System Customization menu, select Configure Management Network, and press Enter.
  4. From the Configure Management Network menu, select Network Adapters, and press Enter.
  5. Use the spacebar to toggle which network adapter or adapters will be used for the system's management network, as shown in Figure 2.14. Press Enter when finished.
  6. Press Esc to exit the Configure Management Network menu. When prompted to apply changes and restart the management network, press Y.

    After the correct NIC has been assigned to the ESXi management network, the System Customization menu provides a Test Management Network option to verify network connectivity.

  7. Press Esc to log out of the System Customization menu and return to the ESXi home screen.

FIGURE 2.13 The ESXi home screen provides options for customizing the system and restarting or shutting down the server.

images

FIGURE 2.14 In the event the incorrect NIC is assigned to ESXi's management network, you can select a different NIC.

images

The other options within the DCUI for troubleshooting management network issues are covered in detail within Chapter 5.

At this point, you should have management network connectivity to the ESXi host, and from here forward you can use the vSphere Client to perform other configuration tasks, such as configuring time synchronization and name resolution.

Configuring Time Synchronization

Time synchronization in ESXi is an important configuration because the ramifications of incorrect time run deep. While ensuring that ESXi has the correct time seems trivial, time-synchronization issues can affect features such as performance charting, SSH key expirations, NFS access, backup jobs, authentication, and more. After the installation of ESXi or during an unattended installation of ESXi using an installation script, the host should be configured to perform time synchronization with a reliable time source. This source could be another server on your network or a time source located on the Internet. For the sake of managing time synchronization, it is easiest to synchronize all your servers against one reliable internal time server and then synchronize the internal time server with a reliable Internet time server. ESXi provides a Network Time Protocol (NTP) implementation to provide this functionality.

The simplest way to configure time synchronization for ESXi involves the vSphere Client.

Perform the following steps to enable NTP using the vSphere Client:

  1. Use the vSphere Client to connect directly to the ESXi host (or to a vCenter Server installation, if you have vCenter Server running at this point).
  2. Select the hostname from the inventory tree on the left, and then click the Configuration tab in the details pane on the right.
  3. Select Time Configuration from the Software menu.
  4. Click the Properties link.
  5. In the Time Configuration dialog box, select NTP Client Enabled.
  6. Still in the Time Configuration dialog box, click the Options button.
  7. Select the NTP Settings option in the left side of the NTP Daemon (ntpd) Options dialog box, and add one or more NTP servers to the list, as shown in Figure 2.15.

    FIGURE 2.15 Specifying NTP servers allows ESXi to automatically keep time synchronized.

    images

  8. Check the box marked Restart NTP Service To Apply Changes; then click OK.
  9. Click OK to return to the vSphere Client. The Time Configuration area will update to show the new NTP servers.

You'll note that using the vSphere Client to enable NTP this way also automatically enables NTP traffic through the firewall. You can verify this by noting an Open Firewall Ports entry in the Tasks pane or by clicking Security Profile under the Software menu and seeing an entry for NTP Client listed under Outgoing Connections.
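If you ever need to inspect or adjust this firewall behavior from the host's command line instead, the esxcli network firewall namespace covers it. A brief sketch (run in an ESXi shell or via a remote esxcli session):

```shell
# List firewall rulesets and their enabled state
esxcli network firewall ruleset list

# Explicitly enable the NTP client ruleset
# (the vSphere Client normally does this for you)
esxcli network firewall ruleset set --ruleset-id=ntpClient --enabled=true
```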

WINDOWS AS A RELIABLE TIME SERVER

You can configure an existing Windows server as a reliable time server by performing these steps:

  1. Use the Group Policy Object editor to navigate to Administrative Templates → System → Windows Time Service → Time Providers.
  2. Select the Enable Windows NTP Server Group Policy option.
  3. Navigate to Administrative Templates → System → Windows Time Service.
  4. Double-click the Global Configuration Settings option, and select the Enabled radio button.
  5. Set the AnnounceFlags option to 4.
  6. Click the OK button.
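If Group Policy isn't convenient, a similar result can be sketched with the built-in w32tm utility from an elevated command prompt on the Windows server. The peer list shown here is an example; substitute your preferred upstream time source:

```shell
:: Configure the Windows Time service to sync from a manual peer list
:: and advertise itself as a reliable time source (AnnounceFlags behavior).
w32tm /config /syncfromflags:manual /manualpeerlist:"0.pool.ntp.org" /reliable:yes /update

:: Restart the service so the new settings take effect
net stop w32time
net start w32time

:: Verify the configuration and current sync status
w32tm /query /configuration
w32tm /query /status
```

Either way, your ESXi hosts can then point at this Windows server as their NTP source.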

Configuring Name Resolution

Just as time synchronization is important for your vSphere environment, so is name resolution. Although vSphere's dependency on name resolution is less pronounced than it was in earlier releases, some functionality still may not work as expected without proper name resolution in place.

Configuring name resolution is a simple process in the vSphere Client:

  1. Use the vSphere Client to connect directly to the ESXi host (or to a vCenter Server installation, if you have vCenter Server running at this point).
  2. Select the hostname from the inventory tree on the left, and then click the Configuration tab in the details pane on the right.
  3. Select DNS And Routing from the Software menu.
  4. Click the Properties link.
  5. In the DNS And Routing dialog box, add the IP address(es) of your DNS server(s), and then click OK.
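For hosts you manage at the command line, the same settings can be sketched with esxcli from the ESXi Shell or an SSH session. All addresses and names below are example values; substitute your own:

```shell
# Add one or more DNS servers for the host to query
esxcli network ip dns server add --server=192.168.1.53

# Add a DNS search domain used for unqualified lookups
esxcli network ip dns search add --domain=example.com

# Set the host's fully qualified hostname
esxcli system hostname set --fqdn=esxi01.example.com

# Review the resulting DNS configuration
esxcli network ip dns server list
```

Remember that forward and reverse DNS records for the host itself should also exist on your DNS servers.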

In this chapter we've discussed some of the decisions that you'll have to make as you deploy ESXi in your datacenter, and we've shown you how to deploy ESXi using both interactive and unattended methods. In the next chapter, we'll show you how to deploy VMware vCenter Server, a key component in your virtualization environment.

The Bottom Line

Understand ESXi compatibility requirements. Unlike traditional operating systems like Windows or Linux, ESXi has much stricter hardware compatibility requirements. This helps ensure a stable, well-tested product line that is able to support even the most mission-critical applications.

Master It You have some older servers onto which you'd like to deploy ESXi. They aren't on the Hardware Compatibility Guide. Will they work with ESXi?

Plan an ESXi deployment. Deploying ESXi will affect many different areas of your organization—not only the server team but also the networking team, the storage team, and the security team. There are many issues to consider, including server hardware, storage hardware, storage protocols or connection types, network topology, and network connections. Failing to plan properly could result in an unstable and unsupported implementation.

Master It Name three areas of networking that must be considered in a vSphere design.

Master It What are some of the different types of storage that ESXi can be installed on?

Deploy ESXi. ESXi can be installed onto any supported and compatible hardware platform. You have three different ways to deploy ESXi: You can install it interactively, you can perform an unattended installation, or you can use vSphere Auto Deploy to provision ESXi as it boots up.

Master It Your manager asks you to provide him with a copy of the unattended installation script that you will be using when you roll out ESXi using vSphere Auto Deploy. Is this something you can give him?

Master It Name two advantages and two disadvantages of using vSphere Auto Deploy to provision ESXi hosts.

Perform post-installation configuration of ESXi. Following the installation of ESXi, some additional configuration steps may be required. For example, if the wrong NIC is assigned to the management network, then the server won't be accessible across the network. You'll also need to configure time synchronization.

Master It You've installed ESXi on your server, but the welcome web page is inaccessible, and the server doesn't respond to a ping. What could be the problem?

Install the vSphere C# Client. ESXi is managed using the vSphere C# Client, a Windows-only application that provides the functionality to manage the virtualization platform. There are a couple of different ways to obtain the vSphere Client installer: you can run it directly from the VMware vCenter Installer, or you can download it using a web browser connected to the IP address of a vCenter Server instance.

Master It List two ways by which you can install the vSphere Client.
