
Chapter 5

Creating and Configuring Virtual Networks

Eventually, it all comes back to the network. Having servers running VMware ESXi with VMs stored on a highly redundant Fibre Channel SAN is great, but those servers are ultimately useless if the VMs cannot communicate across the network. What good is the ability to run 10 production systems on a single host at less cost if those production systems aren't available? Clearly, virtual networking within ESXi is a key area for every vSphere administrator to understand fully.

In this chapter, you will learn to

  • Identify the components of virtual networking
  • Create virtual switches and distributed virtual switches
  • Create and manage NIC teaming, VLANs, and private VLANs
  • Examine the options for third-party virtual switches in your environment
  • Configure virtual switch security policies

Putting Together a Virtual Network

Designing and building virtual networks with ESXi and vCenter Server bears some similarities to designing and building physical networks, but there are enough significant differences that an overview of components and terminology is warranted. So, we'll take a moment here to define the various components involved in a virtual network, and then we'll discuss some of the factors that affect the design of a virtual network:

vSphere Standard Switch A software-based switch that resides in the VMkernel and provides traffic management for VMs. Users must manage vSphere Standard Switches independently on each ESXi host. You'll see us use the term vSwitch to refer to both a vSphere Standard Switch as well as a virtual switch in general.

vSphere Distributed Switch A software-based switch that resides in the VMkernel and provides traffic management for VMs and the VMkernel. Distributed vSwitches are shared by and managed across entire clusters of ESXi hosts. You might see vSphere Distributed Switch abbreviated as VDS; we'll use VDS, vSphere Distributed Switch, or just distributed switch in this book.

Port/Port Group A logical object on a vSwitch that provides specialized services for the VMkernel or VMs. A virtual switch can contain a VMkernel port or a VM port group. On a vSphere Distributed Switch, these are called distributed port groups.

VMkernel Port A specialized virtual switch port type that is configured with an IP address to allow hypervisor management traffic, vMotion, iSCSI storage access, network attached storage (NAS) or Network File System (NFS) access, and vSphere Fault Tolerance (FT) logging. A VMkernel port is also referred to as a vmknic.

NO MORE SERVICE CONSOLE PORTS

Because vSphere 5.5, like vSphere 5.0 and 5.1 before it, does not include VMware ESX with a traditional Linux-based Service Console, pure vSphere 5.x environments will not use a Service Console port (or vswif). Instead, the functionality of a Service Console port in ESX 4.x and earlier is handled by a VMkernel port in vSphere 5.x.

VM Port Group A group of virtual switch ports that share a common configuration and allow VMs to access other VMs or the physical network.

Virtual LAN A logical LAN configured on a virtual or physical switch that provides efficient traffic segmentation, broadcast control, security, and efficient bandwidth utilization by providing traffic only to the ports configured for that particular virtual LAN (VLAN).

Trunk Port (Trunking) A port on a physical switch that listens for and knows how to pass traffic for multiple VLANs. It does this by maintaining the 802.1q VLAN tags for traffic moving through the trunk port to the connected device(s). Trunk ports are typically used for switch-to-switch connections to allow VLANs to pass freely between switches. Virtual switches support VLANs, and using VLAN trunks allows the VLANs to pass freely into the virtual switches.

TRUNKING VS. LINK AGGREGATION?

You might, depending on your networking vendor, also see use of the term trunk to describe an aggregation of multiple individual links into a single logical link. In this book, we use trunk only to describe a connection that passes multiple VLAN tags, and we'll use the term NIC teaming or link aggregation to refer to the practice of bonding multiple individual links together.

Access Port A port on a physical switch that passes traffic for only a single VLAN. Unlike a trunk port, which maintains the VLAN identification for traffic moving through the port, an access port strips away the VLAN information for traffic moving through the port.

Network Interface Card Team The aggregation of physical network interface cards (NICs) to form a single logical communication channel. Different types of NIC teams provide varying levels of traffic load balancing and fault tolerance.

vmxnet Adapter A virtualized network adapter operating inside a guest operating system (guest OS). The vmxnet adapter is a high-performance, 1 Gbps virtual network adapter that operates only if VMware Tools have been installed. The vmxnet adapter is sometimes referred to as a paravirtualized driver. The vmxnet adapter is identified as Flexible in the VM properties.

vlance Adapter A virtualized network adapter operating inside a guest OS. The vlance adapter is a 10/100 Mbps network adapter that is widely compatible with a range of operating systems and is the default adapter used until the VMware Tools installation is completed.

e1000 Adapter A virtualized network adapter that emulates the Intel e1000 network adapter. The Intel e1000 is a 1 Gbps network adapter. The e1000 network adapter is the most common in 64-bit VMs.

Now that you have a better understanding of the components involved and the terminology that you'll see in this chapter, we'll discuss how these components work together to form a virtual network in support of VMs and ESXi hosts.

Your answers to the following questions will, in large part, determine the design of your virtual networking:

  • Do you have or need a dedicated network for management traffic, such as for the management of physical switches?
  • Do you have or need a dedicated network for vMotion traffic?
  • Do you have an IP storage network? Is this IP storage network a dedicated network? Are you running iSCSI or NAS/NFS?
  • How many NICs are standard in your ESXi host design?
  • Do the NICs in your hosts run 1 Gb Ethernet or 10 Gb Ethernet?
  • Do you need extremely high levels of fault tolerance for VMs?
  • Is the existing physical network composed of VLANs?
  • Do you want to extend the use of VLANs into the virtual switches?

As a precursor to setting up a virtual networking architecture, you need to identify and document the physical network components and the security needs of the network. It's also important to understand the architecture of the existing physical network, because that also greatly influences the design of the virtual network. If the physical network can't support the use of VLANs, for example, then the virtual network's design has to account for that limitation.

Throughout this chapter, as we discuss the various components of a virtual network in more detail, we'll also provide guidance on how the various components fit into an overall virtual network design. A successful virtual network combines the physical network, NICs, and vSwitches, as shown in Figure 5.1.

FIGURE 5.1 Successful virtual networking is a blend of virtual and physical network adapters and switches.

images

Because the virtual network implementation makes VMs accessible, it is essential that the virtual network be configured in a manner that supports reliable and efficient communication around the different network infrastructure components.

Working with vSphere Standard Switches

The networking architecture of ESXi revolves around creating and configuring virtual switches. These virtual switches are either vSphere Standard Switches or vSphere Distributed Switches. First we'll discuss vSphere Standard Switches, hereafter called vSwitches; we'll discuss vSphere Distributed Switches next.

You create and manage vSwitches through the vSphere Web Client or through the vSphere CLI using the esxcli command, but they operate within the VMkernel. Virtual switches provide connectivity for communication as follows:

  • Between VMs within an ESXi host
  • Between VMs on different ESXi hosts
  • Between VMs and physical machines on the network
  • For VMkernel access to networks for vMotion, iSCSI, NFS, or Fault Tolerance logging (and management on ESXi)

Take a look at Figure 5.2, which shows the vSphere Web Client depicting a vSwitch on an ESXi host.

FIGURE 5.2 Virtual switches alone can't provide connectivity; they need ports or port groups and uplinks.

images

In this figure, the vSwitch isn't depicted alone; it also requires ports or port groups and uplinks. Without uplinks, a virtual switch can't communicate with the upstream network; without ports or port groups, a vSwitch can't provide connectivity for the VMkernel or the VMs. It is for this reason that most of our discussion about virtual switches centers on ports, port groups, and uplinks.

First, though, let's take a closer look at vSwitches and how they are similar to yet different from physical switches in the network.

Comparing Virtual Switches and Physical Switches

Virtual switches in ESXi are constructed by and operate in the VMkernel. Virtual switches (referred to in the general sense as vSwitches) are not managed switches and do not provide all the advanced features that many new physical switches provide. You cannot, for example, telnet into a vSwitch to modify settings. There is no command-line interface (CLI) for a vSwitch, apart from the vSphere CLI commands such as esxcli. Even so, a vSwitch operates like a physical switch in some ways. Like its physical counterpart, a vSwitch functions at layer 2, maintains MAC address tables, forwards frames to other switch ports based on the MAC address, supports VLAN configurations, can trunk VLANs using IEEE 802.1q VLAN tags, and can establish port channels. Similar to physical switches, vSwitches are configured with a specific number of ports.
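If you'd like to see these characteristics for a particular host, the esxcli command mentioned earlier can list each standard vSwitch along with its configured number of ports, MTU, uplinks, and port groups. This is shown only as a quick, read-only example, so it is safe to run at any time:

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network vswitch standard list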

Despite these similarities, vSwitches do have some differences from physical switches. A vSwitch does not support the use of dynamic negotiation protocols for establishing 802.1q trunks or port channels, such as Dynamic Trunking Protocol (DTP) or Link Aggregation Control Protocol (LACP). A vSwitch cannot be connected to another vSwitch, thereby eliminating a potential loop configuration. Because there is no possibility of looping, the vSwitches do not run Spanning Tree Protocol (STP). Looping can be a common network problem, so this is a real benefit of vSwitches.

SPANNING TREE PROTOCOL

In physical switches, Spanning Tree Protocol (STP) offers redundancy for paths and prevents loops in the network topology by locking redundant paths in a standby state. Only when a path is no longer available will STP activate the standby path.

It is possible to link vSwitches together using a VM with layer 2 bridging software and multiple virtual NICs, but this is not an accidental configuration and would require some effort to establish.

vSwitches and physical switches have some other differences:

  • A vSwitch authoritatively knows the MAC addresses of the VMs connected to it, so there is no need to learn MAC addresses from the network.
  • Traffic received by a vSwitch on one uplink is never forwarded out another uplink. This is yet another reason why vSwitches do not run STP.
  • A vSwitch does not need to perform Internet Group Management Protocol (IGMP) snooping because it knows the multicast interests of the VMs attached to it.

As you can see from this list of differences, you simply can't use virtual switches in the same way you can use physical switches. You can't use a virtual switch as a transit path between two physical switches, for example, because traffic received on one uplink won't be forwarded out another uplink.

With this basic understanding of how vSwitches work, let's now take a closer look at ports and port groups.

Understanding Ports and Port Groups

As described previously in this chapter, a vSwitch allows several different types of communication, including communication to and from the VMkernel and between VMs. To help distinguish between these different types of communication, ESXi uses ports and port groups. A vSwitch without any ports or port groups is like a physical switch that has no physical ports; there is no way to connect anything to the switch, and it is, therefore, useless.

Port groups differentiate between the types of traffic passing through a vSwitch, and they also operate as a boundary for communication and/or security policy configuration. Figure 5.3 and Figure 5.4 show the two different types of ports and port groups that you can configure on a vSwitch:

  • VMkernel port
  • VM port group

FIGURE 5.3 Virtual switches can contain two connection types: VMkernel port and VM port group.

images

FIGURE 5.4 You can create virtual switches with both connection types on the same switch.

images

Because a vSwitch cannot be used in any way without at least one port or port group, you'll see that the vSphere Web Client combines the creation of new vSwitches with the creation of new ports or port groups.

As shown in Figure 5.2, though, ports and port groups are only part of the overall solution. The uplinks are the other part of the solution that you need to consider because they provide external network connectivity to the vSwitches.

Understanding Uplinks

Although a vSwitch allows communication between VMs connected to the vSwitch, it cannot communicate with the physical network without uplinks. Just as a physical switch must be connected to other switches to communicate across the network, vSwitches must be connected to the ESXi host's physical NICs as uplinks to communicate with the rest of the network.

Unlike ports and port groups, uplinks aren't required for a vSwitch to function. Physical systems connected to an isolated physical switch with no uplinks to other physical switches in the network can still communicate with each other—just not with any other systems that are not connected to the same isolated switch. Similarly, VMs connected to a vSwitch without any uplinks can communicate with each other but not with VMs on other vSwitches or physical systems.

This sort of configuration is known as an internal-only vSwitch. It can be useful to allow VMs to communicate only with each other. VMs that communicate through an internal-only vSwitch do not pass any traffic through a physical adapter on the ESXi host. As shown in Figure 5.5, communication between VMs connected to an internal-only vSwitch takes place entirely in the software and happens at the speed at which the VMkernel can perform the task, whatever that may be.
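If you want to experiment with an internal-only vSwitch, one simple way to build one from the command line is to create a standard vSwitch and never bind an uplink to it. This is just a sketch; the vSwitch name here is an arbitrary example:

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network vswitch standard add
--vswitch-name=vSwitchInternal

Any VM port groups you then create on this vSwitch will carry traffic only between VMs on that host, because there is no uplink to the physical network.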

FIGURE 5.5 VMs communicating through an internal-only vSwitch do not pass any traffic through a physical adapter.

images

NO UPLINK, NO VMOTION

VMs connected to an internal-only vSwitch are not vMotion capable. However, if the VM is disconnected from the internal-only vSwitch, a warning will be provided, but vMotion will succeed if all other requirements have been met. The requirements for vMotion are covered in Chapter 12, “Balancing Resource Utilization.”

For VMs to communicate with resources beyond the VMs hosted on the local ESXi host, a vSwitch must be configured to use at least one physical network adapter, or uplink. A vSwitch can be bound to a single network adapter or bound to two or more network adapters.

A vSwitch bound to at least one physical network adapter allows VMs to establish communication with physical servers on the network or with VMs on other ESXi hosts. That's assuming, of course, that the VMs on the other ESXi hosts are connected to a vSwitch that is bound to at least one physical network adapter. Just like a physical network, a virtual network requires connectivity from end to end. Figure 5.6 shows the communication path for VMs connected to a vSwitch bound to a physical network adapter. In the diagram, when vm1 on pod-1-blade-5 needs to communicate with vm2 on pod-1-blade-8, the traffic from the VM passes through vSwitch0 (via a VM port group) to the physical network adapter to which the vSwitch is bound. From the physical network adapter, the traffic will reach the physical switch (PhySw1). The physical switch (PhySw1) passes the traffic to the second physical switch (PhySw2), which will pass the traffic through the physical network adapter associated with the vSwitch on pod-1-blade-8. In the last stage of the communication, the vSwitch will pass the traffic to the destination virtual machine vm2.

FIGURE 5.6 A vSwitch with a single network adapter allows VMs to communicate with physical servers and other VMs on the network.

images

The vSwitch associated with a physical network adapter provides VMs with the amount of bandwidth the physical adapter is configured to support. All the VMs will share this bandwidth when communicating with physical machines or VMs on other ESXi hosts. In this way, a vSwitch is once again similar to a physical switch. For example, a vSwitch bound to a network adapter with a 1 Gbps maximum speed will provide up to 1 Gbps of bandwidth for the VMs connected to it; similarly, a physical switch with a 1 Gbps uplink to another physical switch provides up to 1 Gbps of bandwidth between the two switches for systems attached to the physical switches.

A vSwitch can also be bound to multiple physical network adapters. In this configuration, the vSwitch is sometimes referred to as a NIC team, but in this book we'll use the term NIC team or NIC teaming to refer specifically to the grouping of network connections, not to refer to a vSwitch with multiple uplinks.

UPLINK LIMITS

Although a single vSwitch can be associated with multiple physical adapters as in a NIC team, a single physical adapter cannot be associated with multiple vSwitches. ESXi hosts can have up to 32 e1000 network adapters, 32 Broadcom TG3 Gigabit Ethernet network ports, or 16 Broadcom BNX2 Gigabit Ethernet network ports. ESXi hosts support up to eight 10 Gigabit Ethernet adapters.

Figure 5.7 and Figure 5.8 show a vSwitch bound to multiple physical network adapters. A vSwitch can have a maximum of 32 uplinks. In other words, a single vSwitch can use up to 32 physical network adapters to send and receive traffic from the physical switches. Binding multiple physical NICs to a vSwitch offers the advantage of redundancy and load distribution. In the section “Configuring NIC Teaming,” you'll dig deeper into the configuration and workings of this sort of vSwitch configuration.
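Before binding physical NICs to a vSwitch, it can be helpful to confirm which vmnics the host actually has and whether they currently have link. A quick way to check (purely informational, not a required step) is:

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network nic list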

FIGURE 5.7 A vSwitch using NIC teaming has multiple available adapters for data transfer. NIC teaming offers redundancy and load distribution.

images

So, we've examined vSwitches, ports and port groups, and uplinks, and you should have a basic understanding of how these pieces begin to fit together to build a virtual network. The next step is to delve deeper into the configuration of the various types of ports and port groups, because they are so essential to virtual networking. We'll start with a discussion on management networking.

Configuring Management Networking

Management traffic is a special type of network traffic that runs across a VMkernel port. VMkernel ports provide network access for the VMkernel's TCP/IP stack, which is separate and independent from the network traffic generated by VMs. The ESXi management network, however, is treated a bit differently than “regular” VMkernel traffic in two ways:

  • First, the ESXi management network is automatically created when you install ESXi. In order for the ESXi host to be reachable across the network, it must have a management network configured and working. So, the ESXi installer automatically sets up an ESXi management network.
  • Second, the Direct Console User Interface (DCUI)—the user interface that exists when you're working at the physical console of a server running ESXi—provides a mechanism for configuring or reconfiguring the management network but not any other forms of networking on that host.

FIGURE 5.8 Virtual switches using NIC teaming are identified by the multiple physical network adapters assigned to the vSwitch.

images

Although the vSphere Web Client offers an option to enable management traffic when configuring networking, as you can see in Figure 5.9, it's unlikely that you'll use this option very often. After all, for you to configure management networking from within the vSphere Web Client, the ESXi host must already have functional management networking in place (vCenter Server communicates with ESXi over the management network). You might use this option if you were creating additional management interfaces. To do this, you would use the procedure described later (in the section “Configuring VMkernel Networking”) to create VMkernel ports with the vSphere Web Client, simply enabling Management Traffic in the Enable Services section while creating the VMkernel port.

FIGURE 5.9 The vSphere Web Client offers a way to enable management networking when configuring networking.

images

In the event that the ESXi host is unreachable—and therefore cannot be configured using the vSphere Web Client—you'll need to use the DCUI to configure the management network.

Perform the following steps to configure the ESXi management network using the DCUI:

  1. At the server's physical console or using a remote console utility such as the HP iLO, press F2 to enter the System Customization menu.

    If prompted to log in, enter the appropriate credentials.

  2. Use the arrow keys to highlight the Configure Management Network option, as shown in Figure 5.10, and press Enter.

    FIGURE 5.10 To configure ESXi's equivalent of the Service Console port, use the Configure Management Network option in the System Customization menu.

    images

  3. From the Configure Management Network menu, select the appropriate option for configuring ESXi management networking, as shown in Figure 5.11.

    You cannot create additional management network interfaces from here; you can only modify the existing management network interface.

  4. When finished, follow the screen prompts to exit the management networking configuration.

    If prompted to restart the management networking, select Yes; otherwise, restart the management networking from the System Customization menu, as shown in Figure 5.12.

In looking at Figure 5.10 and Figure 5.12, you'll also see options for testing the management network, which lets you verify that the management network is configured correctly. This is invaluable if you are unsure of the VLAN ID or network adapters that you should use.

We also want to point out the Network Restore Options screen, shown in Figure 5.13. This screen lets you restore the network configuration to defaults, restore a vSphere Standard Switch, or even restore a vSphere Distributed Switch—all very handy options if you are troubleshooting management network connectivity to your ESXi host.

FIGURE 5.11 From the Configure Management Network menu, users can modify assigned network adapters, change the VLAN ID, or alter the IP configuration.

images

FIGURE 5.12 The Restart Management Network option restarts ESXi's management networking and applies any changes that were made.

images

FIGURE 5.13 Use the Network Restore Options screen to manage network connectivity to an ESXi host.

images

Let's move our discussion of VMkernel networking away from just management traffic and take a closer look at the other types of VMkernel traffic, as well as how to create and configure VMkernel ports.

Configuring VMkernel Networking

VMkernel networking carries management traffic, but it also carries all other forms of traffic that originate with the ESXi host itself (i.e., any traffic that isn't generated by VMs running on that ESXi host). As shown in Figure 5.14 and Figure 5.15, VMkernel ports are used for management, vMotion, iSCSI, NAS/NFS access, and vSphere FT—basically, all types of traffic that are generated by the hypervisor itself. In Chapter 6, “Creating and Configuring Storage Devices,” we detail the iSCSI and NAS/NFS configurations; in Chapter 12, we provide details of the vMotion process and how vSphere FT works. These discussions provide insight into the traffic flow between VMkernel and storage devices (iSCSI/NFS) or other ESXi hosts (for vMotion or vSphere FT). At this point, you should be concerned only with configuring VMkernel networking.

A VMkernel port actually comprises two different components: a port group on a vSwitch and a VMkernel network interface, also known as a vmknic. Creating a VMkernel port using the vSphere Web Client combines the task of creating the port group and the VMkernel NIC.

Perform the following steps to add a VMkernel port to an existing vSwitch using the vSphere Web Client:

  1. If not already connected, open a supported web browser and log in to a vCenter Server instance. For example, if your vCenter Server instance is called “vcenter,” then you'll connect to https://vcenter.domain.name:9443/vsphere-client and then log in with appropriate credentials.
  2. From the vSphere Web Client home page, select vCenter from the navigation list on the left.
  3. From the Inventory Lists area, select Hosts, then click the ESXi host on which you'd like to add the new VMkernel port.

    FIGURE 5.14 A VMkernel port is associated with an interface and assigned an IP address for accessing iSCSI or NFS storage devices or for performing vMotion with other ESXi hosts.

    images

    FIGURE 5.15 The network labels for VMkernel ports should be as descriptive as possible.

    images

  4. Select the Manage tab, and click the Networking button.
  5. Click Virtual Adapters.
  6. Click the Add Host Networking icon. This starts the Add Networking wizard.
  7. Select VMkernel Network Adapter, and then click Next.
  8. Because you're adding a VMkernel port to an existing vSwitch, make sure Select An Existing Standard Switch is selected, then click Browse to select the virtual switch to which the new VMkernel port should be added. Click OK in the Select Switch dialog box, and click Next to continue.
  9. Type the name of the port in the Network Label text box.
  10. If necessary, specify the VLAN ID for the VMkernel port.
  11. Select whether this VMkernel port will be enabled for IPv4, IPv6, or both.
  12. Select the TCP/IP stack that this VMkernel port should use. Unless you have already created a custom TCP/IP stack, Default will be the only option listed here. We discuss TCP/IP stacks later in this chapter in the section titled “Configuring TCP/IP Stacks.”
  13. Select the various functions that will be enabled on this VMkernel port, and then click Next. For a VMkernel port that will be used only for iSCSI or NAS/NFS traffic, all the Enable Services check boxes should be deselected, as shown in Figure 5.16. For a VMkernel port that will act as an additional management interface, only Management Traffic should be selected.

    FIGURE 5.16 VMkernel ports can carry IP-based storage traffic, vMotion traffic, Fault Tolerance logging traffic, management traffic, or Virtual SAN traffic.

    images

  14. For IPv4 (applicable if you selected IPv4 or IPv4 And IPv6 for IP Settings in the previous step), you may elect to either obtain the configuration automatically (via DHCP) or supply a static configuration. If you opt to use a static configuration, ensure that the IP address is a valid IP address for the network to which the physical NIC is connected.

    DEFAULT GATEWAY AND DNS SERVERS AREN'T EDITABLE

    Note that the default gateway and DNS server addresses are controlled by the TCP/IP stack configuration and can't be changed here. To change these settings, you'll need to edit the TCP/IP stack settings, as described in the section titled “Configuring TCP/IP Stacks.”

  15. For IPv6 (applicable if you selected IPv6 or IPv4 And IPv6 for IP Settings earlier), you can choose to obtain configuration automatically via DHCPv6, obtain your configuration automatically via Router Advertisement, and/or assign one or more IPv6 addresses. Use the green plus symbol to add an IPv6 address that is appropriate for the network to which this VMkernel interface will be connected.
  16. Click Next to review the configuration summary, and then click Finish.

After you complete these steps, you can use the esxcli command—either from an instance of the vSphere Management Assistant or from a system with the vSphere CLI installed—to show the new VMkernel port and the new VMkernel NIC that was created:

esxcli --server=<vCenter hostname or IP> --vihost=<ESXi hostname or IP>
--username=<vCenter admin user> network ip interface list

DIFFERENT COMMAND-LINE OPTIONS

vSphere 5.5 still provides the vicfg-* tools, such as vicfg-vswitch and vicfg-vmknic. However, most command-line functionality is being collapsed into esxcli moving forward, so it's a good idea to try to stick with esxcli wherever possible.
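For instance, the older vicfg-vmknic tool can list the same VMkernel interfaces that the esxcli command shown earlier displays. This example assumes you're connecting directly to the ESXi host from the vSphere CLI or the vSphere Management Assistant:

vicfg-vmknic --server=<ESXi host name> --username=root --list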

To help illustrate the different parts—the VMkernel port and the VMkernel NIC, or vmknic—that are created during this process, let's again walk through the steps for creating a VMkernel port using the vSphere Management Assistant.

Perform the following steps to create a VMkernel port on an existing vSwitch using the command line:

  1. Using PuTTY.exe (Windows) or a terminal window (Linux or Mac OS X), establish an SSH session to the vSphere Management Assistant.
  2. Enter the following command to add a port group named VMkernel to vSwitch0:
    esxcli --server=<vCenter host name> --vihost=<ESXi host name>
    --username=<vCenter administrative user> network vswitch standard
    portgroup add --portgroup-name=VMkernel --vswitch-name=vSwitch0
  3. Use the esxcli command to list the port groups on vSwitch0. Note that the port group exists, but nothing has been connected to it (the Active Clients column shows 0).
    esxcli --server=<vCenter host name> --vihost=<ESXi host name>
    --username=<vCenter administrative user> network vswitch standard
    portgroup list
  4. Enter the following command to create the VMkernel port and attach it to the port group created in step 2:
    esxcli --server=<vCenter host name> --vihost=<ESXi host name>
    --username=<vCenter administrative user> network ip interface add
    --portgroup-name=VMkernel --interface-name=vmk4
  5. Use this command to assign an IP address and subnet mask to the VMkernel port created in the previous step:
    esxcli --server=<vCenter host name> --vihost=<ESXi host name>
    --username=<vCenter administrative user> network ip interface ipv4 set
    --interface-name=vmk4 --type=static --ipv4=192.168.1.100
    --netmask=255.255.255.0
  6. Repeat the command from step 3 again, noting now how the Active Clients column has incremented to 1.

    This indicates that a vmknic has been connected to a virtual port on the port group. Figure 5.17 shows the output of the esxcli command after completing step 5.

FIGURE 5.17 Using the CLI helps drive home the fact that the port group and the VMkernel port are separate objects.

images

Aside from the default ports required for the management network, no VMkernel ports are created during the installation of ESXi, so all the nonmanagement VMkernel ports that may be required in your environment will need to be created, either using the vSphere Web Client or via CLI using the vSphere CLI or the vSphere Management Assistant.

In addition to adding VMkernel ports, you might need to edit a VMkernel port, or even remove a VMkernel port. Both of these tasks can be done in the same place you added a VMkernel port: the Networking section of the Manage tab for an ESXi host.

To edit a VMkernel port, select the desired VMkernel port from the list and click the Edit Settings icon (it looks like a pencil). This will bring up the Edit Settings dialog box, where you can change the services for which this port is enabled, change the MTU, and modify the IPv4 and/or IPv6 settings. Of particular interest here is the Analyze Impact section, shown in Figure 5.18, which helps point out dependencies on the VMkernel port in order to prevent unwanted side effects that might result from modifying the VMkernel port's configuration.

To delete a VMkernel port, select the desired VMkernel port from the list and click the Remove Selected Virtual Network Adapter icon (it looks like a red X). In the resulting confirmation dialog box, you'll see the option to analyze the impact (the same as when modifying a VMkernel port). Click OK to remove the VMkernel port.
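If you prefer the command line, the same removal can be done with esxcli. The interface name vmk4 here is just the example created earlier; substitute the vmknic you actually want to remove:

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network ip interface remove
--interface-name=vmk4

If the port group that backed the vmknic is no longer needed, you can remove it afterward with esxcli network vswitch standard portgroup remove.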

Before we move on to discussing how to configure VM networking, let's look at one more area related to host networking. Next, we'll introduce a feature new to vSphere 5.5: multiple TCP/IP stacks.

FIGURE 5.18 The Analyze Impact section shows administrators dependencies on VMkernel ports.

images

Configuring TCP/IP Stacks

Prior to the release of vSphere 5.5, all VMkernel interfaces shared a single instance of a TCP/IP stack. As a result, they all shared the same routing table and the same DNS configuration. This created some interesting challenges in certain environments; for example, what if you needed a default gateway for your management network but you also needed a default gateway for your NFS traffic? The only workaround was to use a single default gateway and then populate the routing table with static routes. Clearly, this is not a very scalable solution.
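For reference, populating the routing table with a static route on an ESXi 5.x host typically looks something like the following; the gateway and network values are placeholders, and you should verify the exact options with esxcli network ip route ipv4 add --help on your build:

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network ip route ipv4 add
--gateway=192.168.1.1 --network=192.168.100.0/24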

With the release of vSphere 5.5, you can now create multiple TCP/IP stacks. Each stack has its own routing table and own DNS configuration.

Let's take a look at how to create TCP/IP stacks. Once we have at least one additional TCP/IP stack created, we will show you how to assign a VMkernel interface to a specific TCP/IP stack.

CREATING A TCP/IP STACK

In this release, creating new TCP/IP stack instances can only be done from the command line using the esxcli command.

To create a new TCP/IP stack, use this command:

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network ip netstack add
--netstack=<Name of new TCP/IP stack>

For example, if you wanted to create a separate TCP/IP stack for your NFS traffic, the command might look something like this:

esxcli --server=vcenter.v12nlab.net --vihost=esxi-01.v12nlab.net
--username=root network ip netstack add --netstack=nfsStack

You can get a list of all the configured TCP/IP stacks with a very similar esxcli command:

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network ip netstack list

Once the new TCP/IP stack is created, you can, if you wish, continue to configure the stack using the esxcli command. However, you will probably find it easier to use the vSphere Web Client to do the actual configuration of the new TCP/IP stack, as we describe in the next section.
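As an example of command-line configuration, you could give the new stack its own default gateway with a command along these lines. This is a sketch only; the gateway address is a placeholder, and the available options can vary, so check esxcli network ip route ipv4 add --help on your host:

esxcli --server=vcenter.v12nlab.net --vihost=esxi-01.v12nlab.net
--username=root network ip route ipv4 add --netstack=nfsStack
--network=default --gateway=192.168.50.1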

CONFIGURING TCP/IP STACK SETTINGS

You've actually seen references to the TCP/IP stacks already at least once (when creating a VMkernel interface), but the actual settings for the TCP/IP stacks are found in the same place where you create and configure other host networking settings: in the Networking section of the Manage tab for an ESXi host object, as shown in Figure 5.19.

FIGURE 5.19 TCP/IP stack settings are located with other host networking configuration options.

images

In Figure 5.19 you can see the new TCP/IP stack, named nfsStack, that we created in the previous section. To edit the settings for that stack, you'll simply select it from the list and click the Edit TCP/IP Stack Configuration icon (it looks like a pencil above the list of TCP/IP stacks). That brings up the Edit TCP/IP Stack Configuration dialog box, shown in Figure 5.20.

FIGURE 5.20 Each TCP/IP stack can have its own DNS configuration, routing information, and other advanced settings.

images

In the Edit TCP/IP Stack Configuration dialog box, make the changes you need to make to the name, DNS configuration, routing, or other advanced settings. Once you're finished, click OK.

One final task regarding TCP/IP stacks remains: assigning interfaces to a TCP/IP stack. Until you actually assign an interface—specifically referring to VMkernel interfaces here—to a TCP/IP stack you've created, the VMkernel interface will use the default system stack and won't be able to use any of the custom settings you've configured.

ASSIGNING PORTS TO A TCP/IP STACK

Unfortunately, you can assign VMkernel ports to a TCP/IP stack only at the time of creation. In other words, once a VMkernel port has been created, you can't change the TCP/IP stack to which it has been assigned. You must delete the VMkernel port and then re-create it, assigning it to the desired TCP/IP stack. We described how to create and delete VMkernel ports earlier, so we won't go through those tasks again here.

You'll note that it's in step 12 of creating a VMkernel port that you have the option of selecting a specific TCP/IP stack to which to bind this VMkernel port. This is illustrated in Figure 5.21, where you can see the system default stack as well as the custom nfsStack we created earlier listed.

FIGURE 5.21 VMkernel ports can be assigned to a TCP/IP stack only at the time of creation.

images

One very important thing to note: In this release, using custom TCP/IP stacks isn't supported for use with vMotion, Fault Tolerance logging, management traffic, or Virtual SAN traffic. When you select a custom TCP/IP stack, you'll see that the check boxes to enable these services automatically disable themselves. At this time, you'll only be able to use custom TCP/IP stacks for IP-based storage, like iSCSI and NFS.
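If you create your VMkernel ports from the command line, the same restriction applies: the stack is chosen at creation time. A sketch of creating a vmknic bound to the custom nfsStack might look like the following, where the port group name NFS and the interface name vmk5 are placeholders for values appropriate to your environment:

esxcli --server=vcenter.v12nlab.net --vihost=esxi-01.v12nlab.net
--username=root network ip interface add --interface-name=vmk5
--portgroup-name=NFS --netstack=nfsStack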

It's now time to shift our focus from host networking to VM networking.

Configuring VM Networking

The second type of port group to discuss is the VM port group, which is responsible for all VM networking. The VM port group is quite different from a VMkernel port. With VMkernel networking, there is a one-to-one relationship with an interface: Each VMkernel NIC, or vmknic, requires a matching VMkernel port group on a vSwitch. In addition, these interfaces require IP addresses that are used for management or VMkernel network access.

A VM port group, on the other hand, does not have a one-to-one relationship, and it does not require an IP address. For a moment, forget about vSwitches and consider standard physical switches. When you install or add an unmanaged physical switch into your network environment, that physical switch does not require an IP address: You simply install the switch and plug in the appropriate uplinks that will connect it to the rest of the network.

A vSwitch created with a VM port group is really no different. A vSwitch with a VM port group acts just like an additional unmanaged physical switch. You need only plug in the appropriate uplinks—physical network adapters, in this case—that will connect that vSwitch to the rest of the network. As with an unmanaged physical switch, an IP address does not need to be configured for a VM port group to combine the ports of a vSwitch with those of a physical switch. Figure 5.22 shows the switch-to-switch connection between a vSwitch and a physical switch.

FIGURE 5.22 A vSwitch with a VM port group uses an associated physical network adapter to establish a switch-to-switch connection with a physical switch.

images

Perform the following steps to create a vSwitch with a VM port group using the vSphere Web Client:

  1. Use the vSphere Web Client to establish a connection to a vCenter Server instance.
  2. From the vSphere Web Client home page, click vCenter from the Inventories section, then select Hosts from the inventory lists on the left.
  3. Select the ESXi host on which you'd like to add a vSwitch, then click Manage, and finally, select the Networking section.
  4. Click the Add Host Networking icon (a small globe with a plus sign) to start the Add Networking wizard.
  5. Select the Virtual Machine Port Group For A Standard Switch radio button, and click Next.
  6. Because you are creating a new vSwitch, select the New Standard Switch radio button. Click Next.
  7. Click the green plus icon to add physical network adapters to the new vSwitch you are creating. From the Add Physical Adapters To The Switch dialog box, select the NIC or NICs connected to the switch that can carry the appropriate traffic for your VMs.
  8. Click OK when you're done selecting physical network adapters. This returns you to the Create A Standard Switch screen, where you can click Next to continue.
  9. Type the name of the VM port group in the Network Label text box.
  10. Specify a VLAN ID, if necessary, and click Next.
  11. Click Next to review the virtual switch configuration, and then click Finish.

If you are a command-line junkie, you can create a VM port group from the vSphere CLI as well. You can probably guess the commands that are involved from the previous examples, but we'll walk you through the process anyway.

Perform the following steps to create a vSwitch with a VM port group using the command line:

  1. Using PuTTY.exe (Windows) or a terminal window (Linux or Mac OS X), establish an SSH session to a running instance of the vSphere Management Assistant.
  2. Enter the following command to add a virtual switch named vSwitch1:
    esxcli --server=<vCenter host name> --vihost=<ESXi host name>
    --username=<vCenter administrative user> network vswitch standard add
    --vswitch-name=vSwitch1
  3. Enter the following command to bind the physical NIC vmnic1 to vSwitch1:
    esxcli --server=<vCenter host name> --vihost=<ESXi host name>
    --username=<vCenter administrative user> network vswitch standard
    uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

    By binding a physical NIC to the vSwitch, you provide network connectivity to the rest of the network for VMs connected to this vSwitch. Again, remember that you can assign any given physical NIC to only one vSwitch at a time (but a vSwitch may have multiple physical NICs bound at the same time).

  4. Enter the following command to create a VM port group named ProductionLAN on vSwitch1:
    esxcli --server=<vCenter host name> --vihost=<ESXi host name>
    --username=<vCenter administrative user> network vswitch standard
    portgroup add --vswitch-name=vSwitch1 --portgroup-name=ProductionLAN

Of the different connection types—VMkernel ports and VM port groups—vSphere administrators will spend most of their time creating, modifying, managing, and removing VM port groups.

PORTS AND PORT GROUPS ON A VIRTUAL SWITCH

A vSwitch can consist of multiple connection types, or each connection type can be created in its own vSwitch.

Configuring VLANs

Several times so far we've referenced the use of the VLAN ID when configuring a VMkernel port and a VM port group. As defined previously in this chapter, a virtual LAN (VLAN) is a logical LAN that provides efficient segmentation, security, and broadcast control while allowing traffic to share the same physical LAN segments or same physical switches. Figure 5.23 shows a typical VLAN configuration across physical switches.

FIGURE 5.23 Virtual LANs provide secure traffic segmentation without the cost of additional hardware.

images

VLANs utilize the IEEE 802.1q standard for tagging, or marking, traffic as belonging to a particular VLAN. The VLAN tag, also known as the VLAN ID, is a numeric value between 1 and 4094, and it uniquely identifies that VLAN across the network. Physical switches such as the ones depicted in Figure 5.23 must be configured with ports to trunk the VLANs across the switches. These ports are known as trunk (or trunking) ports. Ports not configured to trunk VLANs are known as access ports and can carry traffic only for a single VLAN at a time.

USING VLAN ID 4095

Normally the VLAN ID will range from 1 to 4094. In a vSphere environment, however, a VLAN ID of 4095 is also valid. Using this VLAN ID with ESXi causes the VLAN tagging information to be passed through the vSwitch all the way up to the guest OS. This is called virtual guest tagging (VGT) and is useful only for guest OSes that support and understand VLAN tags.

VLANs are an important part of ESXi networking because of the impact they have on the number of vSwitches and uplinks that are required. Consider this configuration:

  • The management network needs access to the network segment carrying management traffic.
  • Other VMkernel ports, depending upon their purpose, may need access to an isolated vMotion segment or the network segment carrying iSCSI and NAS/NFS traffic.
  • VM port groups need access to whatever network segments are applicable for the VMs running on the ESXi hosts.

Without VLANs, this configuration would require three or more separate vSwitches, each bound to a different physical adapter, and each physical adapter would need to be physically connected to the correct network segment, as illustrated in Figure 5.24.

FIGURE 5.24 Supporting multiple networks without VLANs can increase the number of vSwitches, uplinks, and cabling that is required.

images

Add in an IP-based storage network and a few more VM networks that need to be supported and the number of required vSwitches and uplinks quickly grows. And this doesn't even take uplink redundancy, for example NIC teaming, into account!

VLANs are the answer to this dilemma. Figure 5.25 shows the same network as in Figure 5.24, but with VLANs this time.

While the reduction from Figure 5.24 to Figure 5.25 is only a single vSwitch and a single uplink, you can easily add more VM networks to the configuration in Figure 5.25 by simply adding another port group with another VLAN ID. Blade servers provide an excellent example of when VLANs offer tremendous benefit. Because of the small form factor of the blade casing, blade servers have historically offered limited expansion slots for physical network adapters. VLANs allow these blade servers to support more networks than they would be able to otherwise.

NO VLAN NEEDED

Virtual switches in the VMkernel do not need VLANs if an ESXi host has enough physical network adapters to connect to each of the different network segments. However, VLANs provide added flexibility in adapting to future network changes, so the use of VLANs where possible is recommended.

As shown in Figure 5.25, VLANs are handled by configuring different port groups within a vSwitch. The relationship between VLANs and port groups is not a one-to-one relationship; a port group can be associated with only one VLAN at a time, but multiple port groups can be associated with a single VLAN. Later in this chapter when we discuss security settings (in the section “Configuring Virtual Switch Security”), you'll see some examples of when you might have multiple port groups associated with a single VLAN.

FIGURE 5.25 VLANs can reduce the number of vSwitches, uplinks, and cabling required.

images

To make VLANs work properly with a port group, the uplinks for that vSwitch must be connected to a physical switch port configured as a trunk port. A trunk port understands how to pass traffic from multiple VLANs simultaneously while also preserving the VLAN IDs on the traffic. Figure 5.26 shows a snippet of configuration from a Cisco Catalyst 3560G switch for a couple of ports configured as trunk ports.

FIGURE 5.26 The physical switch ports must be configured as trunk ports in order to pass the VLAN information to the ESXi hosts for the port groups to use.

images

The configuration for switches from other manufacturers will vary, of course, so be sure to check with your particular switch manufacturer for specific details on how to configure a trunk port.

THE NATIVE VLAN

In Figure 5.26, you might notice the switchport trunk native vlan 999 command. The default native VLAN (also known as the untagged VLAN) on most switches is VLAN ID 1. If you need to pass traffic on VLAN 1 to the ESXi hosts, you should designate another VLAN as the native VLAN using this command (or its equivalent). We recommend creating a dummy VLAN, like 999, and setting that as the native VLAN. This ensures that all VLANs will be tagged with the VLAN ID as they pass into the ESXi hosts. Keep in mind this might affect behaviors like PXE booting, which generally requires untagged traffic.

When the physical switch ports are correctly configured as trunk ports, the physical switch passes the VLAN tags up to the ESXi server, where the vSwitch tries to direct the traffic to a port group with that VLAN ID assigned. If there is no port group configured with that VLAN ID, the traffic is discarded.

Perform the following steps to configure a VM port group using VLAN ID 30:

  1. Use the vSphere Web Client to establish a connection to a vCenter Server instance.
  2. Navigate to the ESXi host to which you want to add the VM port group, click the Manage tab, and then select Networking.
  3. Make sure Virtual Switches is selected on the left side, then select the vSwitch where the new port group should be created.
  4. Click the Add Host Networking icon (looks like a globe with a plus sign in the corner) to start the Add Networking wizard.
  5. Select the Virtual Machine Port Group For A Standard Switch radio button, then click Next.
  6. Make sure the Select An Existing Standard Switch radio button is selected and, if necessary, use the Browse button to choose which virtual switch will host the new VM port group. Click Next.
  7. Type the name of the VM port group in the Network Label text box.

    Embedding the VLAN ID and a brief description into the name of the port group is strongly recommended, so typing something like VLANXXX-NetworkDescription would be appropriate, where XXX represents the VLAN ID.

  8. Type 30 in the VLAN ID (Optional) text box, as shown in Figure 5.27.

    FIGURE 5.27 You must specify the correct VLAN ID in order for a port group to receive traffic intended for a particular VLAN.

    images

    You will want to substitute a value that is correct for your network here.

  9. Click Next to review the vSwitch configuration, and then click Finish.

As you've probably gathered by now, you can also use the esxcli command from the vSphere CLI to create or modify the VLAN settings for port groups. We won't walk through the full procedure here because the commands are extremely similar to what we've shown you already, but a single example follows.
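For instance, assigning VLAN ID 30 to an existing port group named ProductionLAN (the port group name is just an example) would look something like this:

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network vswitch standard
portgroup set --portgroup-name=ProductionLAN --vlan-id=30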

Although VLANs reduce the costs of constructing multiple logical subnets, keep in mind that they do not address traffic constraints. Although VLANs logically separate network segments, all the traffic still runs on the same physical network underneath. For bandwidth-intensive network operations, the disadvantage of the shared physical network might outweigh the scalability and cost savings of a VLAN.

CONTROLLING THE VLANS PASSED ACROSS A VLAN TRUNK

You might see the switchport trunk allowed vlan command in some Cisco switch configurations as well. This command allows you to control which VLANs are passed across the VLAN trunk to the device at the other end of the link—in this case, an ESXi host. You will need to ensure that all the VLANs defined on the vSwitches are also included in the switchport trunk allowed vlan statement; any VLAN omitted from that statement won't be passed to the ESXi host and therefore won't work.

Configuring NIC Teaming

We know that in order for a vSwitch and its associated ports or port groups to communicate with other ESXi hosts or with physical systems, the vSwitch must have at least one uplink. An uplink is a physical network adapter that is bound to the vSwitch and connected to a physical network switch. With the uplink connected to the physical network, there is connectivity for the VMkernel and the VMs connected to that vSwitch. But what happens when that physical network adapter fails, when the cable connecting that uplink to the physical network fails, or the upstream physical switch to which that uplink is connected fails? With a single uplink, network connectivity to the entire vSwitch and all of its ports or port groups is lost. This is where NIC teaming comes in.

NIC teaming involves connecting multiple physical network adapters to a single vSwitch. NIC teaming provides redundancy and load balancing of network communications to the VMkernel and VMs.

Figure 5.28 illustrates NIC teaming conceptually. Both of the vSwitches have two uplinks, and each of the uplinks connects to a different physical switch. Note that NIC teaming supports all the different connection types, so it can be used with ESXi management networking, VMkernel networking, and networking for VMs.

FIGURE 5.28 Virtual switches with multiple uplinks offer redundancy and load balancing.

images

Figure 5.29 shows what NIC teaming looks like from within the vSphere Web Client. In this example, the vSwitch is configured with an association to multiple physical network adapters (uplinks). As mentioned previously, the ESXi host can have a maximum of 32 uplinks; these uplinks can be spread across multiple vSwitches or all tossed into a NIC team on one vSwitch. Remember that you can connect a physical NIC to only one vSwitch at a time.

FIGURE 5.29 The vSphere Web Client shows when multiple physical network adapters are associated to a vSwitch using NIC teaming.

images

Building a functional NIC team requires that all uplinks be connected to physical switches in the same broadcast domain. If VLANs are used, then all the switches should be configured for VLAN trunking, and the appropriate subset of VLANs must be allowed across the VLAN trunk. In a Cisco switch, this is typically controlled with the switchport trunk allowed vlan statement.

In Figure 5.30, the NIC team for vSwitch0 will work, because both of the physical switches share VLAN 100 and are therefore in the same broadcast domain. The NIC team for vSwitch1, however, will not work because the physical network adapters do not share a common broadcast domain.

FIGURE 5.30 All the physical network adapters in a NIC team must belong to the same layer 2 broadcast domain.

images

CONSTRUCTING NIC TEAMS

NIC teams should be built on physical network adapters located on separate bus architectures. For example, if an ESXi host contains two onboard network adapters and a PCI Express–based quad-port network adapter, a NIC team should be constructed using one onboard network adapter and one network adapter on the PCI bus. This design eliminates a single point of failure.

Perform the following steps to create a NIC team with an existing vSwitch using the vSphere Web Client:

  1. Use the vSphere Web Client to establish a connection to a vCenter Server instance.
  2. Navigate to the Networking section of the Manage tab for the ESXi host where you want to create the NIC team. We prefer to use the inventory lists rather than the hierarchy tree, but either method is fine.
  3. Make sure Virtual Switches is selected on the left, then select the virtual switch that will be assigned a NIC team and click the Manage The Physical Adapters Connected To The Selected Virtual Switch icon (it looks like a NIC with a wrench).
  4. In the Manage Physical Network Adapters dialog box, click the green Add Adapters icon.
  5. From the Add Physical Adapters To the Switch dialog box, select the appropriate adapter (or adapters) from the list, as shown in Figure 5.31.

    FIGURE 5.31 Create a NIC team by adding network adapters that belong to the same layer 2 broadcast domain as the original adapter.

    images

    PUTTING NEW ADAPTERS INTO A DIFFERENT FAILOVER GROUP

    The Add Physical Adapters To The Switch dialog box shown in Figure 5.31 allows you to add adapters not only to the list of active adapters but also to the list of standby or unused adapters. Simply change the desired group using the Failover Order Group drop-down list.

  6. Click OK to return to the Manage Physical Network Adapters dialog box.
  7. Click OK to complete the process and return to the Networking section of the Manage tab for the selected ESXi host. Note that it might take a moment or two for the display to update with the new physical adapter.

After a NIC team is established for a vSwitch, ESXi can then perform load balancing for that vSwitch. The load-balancing feature of NIC teaming does not function like the load-balancing feature of advanced routing protocols. Load balancing across a NIC team is not a product of identifying the amount of traffic transmitted through a network adapter and shifting traffic to equalize data flow through all available adapters. The load-balancing algorithm for NIC teams in a vSwitch is a balance of the number of connections—not the amount of traffic. NIC teams on a vSwitch can be configured with one of the following four load-balancing policies:

  • vSwitch port-based load balancing (default)
  • Source MAC-based load balancing
  • IP hash-based load balancing
  • Explicit failover order

The last option, explicit failover order, isn't really a “load-balancing” policy; instead, it uses the administrator-assigned failover order whereby the highest order uplink from the list of active adapters that passes failover detection criteria is used. More information on the failover order is provided in the section “Configuring Failover Detection and Failover Policy.” Also, note that the list we've supplied here applies only to vSphere Standard Switches; vSphere Distributed Switches, which we cover later in this chapter in the section “Working with vSphere Distributed Switches,” have additional options for load balancing and failover.

OUTBOUND LOAD BALANCING

The load-balancing feature of NIC teams on a vSwitch applies only to the outbound traffic.

REVIEWING VIRTUAL SWITCH PORT-BASED LOAD BALANCING

The vSwitch port-based load-balancing policy that is used by default uses an algorithm that ties (or pins) each virtual switch port to a specific uplink associated with the vSwitch. The algorithm attempts to maintain an equal number of port-to-uplink assignments across all uplinks to achieve load balancing. As shown in Figure 5.32, this policy setting ensures that traffic from a specific virtual network adapter connected to a virtual switch port will consistently use the same physical network adapter. In the event that one of the uplinks fails, the traffic from the failed uplink will fail over to another physical network adapter.

FIGURE 5.32 The vSwitch port-based load-balancing policy assigns each virtual switch port to a specific uplink. Failover to another uplink occurs when one of the physical network adapters experiences failure.

images

You can see how this policy does not provide dynamic load balancing but does provide redundancy. Because the port for a VM does not change, each VM is tied to a physical network adapter until failover occurs, regardless of the amount of network traffic. Looking at Figure 5.32, imagine that the Linux VM and the Windows VM on the far left are the two most network-intensive VMs. If the vSwitch port-based policy has assigned the ports for both of these VMs to the same physical network adapter, that adapter could be much more heavily utilized than the other network adapters in the NIC team.

The physical switch passing the traffic learns the port association and therefore sends replies back through the same physical network adapter from which the request originated. The vSwitch port-based policy is best used when you have more virtual network adapters than physical network adapters. When there are fewer virtual network adapters than physical adapters, some physical adapters will not be used. For example, if five VMs are connected to a vSwitch with six uplinks, only five of the six uplinks will be assigned ports, leaving one uplink with no traffic to process.

REVIEWING SOURCE MAC-BASED LOAD BALANCING

The second load-balancing policy available for a NIC team is the source MAC-based policy, shown in Figure 5.33. This policy is susceptible to the same pitfalls as the vSwitch port-based policy simply because the static nature of the source MAC address is the same as the static nature of a vSwitch port assignment. The source MAC-based policy is also best used when you have more virtual network adapters than physical network adapters. In addition, VMs still cannot use multiple physical adapters unless configured with multiple virtual network adapters. Configuring multiple virtual network adapters inside a VM's guest OS provides multiple source MAC addresses and therefore allows that VM's traffic to be spread across multiple physical network adapters.

FIGURE 5.33 The source MAC-based load-balancing policy, as the name suggests, ties a virtual network adapter to a physical network adapter based on the MAC address.

images

VIRTUAL SWITCH TO PHYSICAL SWITCH

To eliminate a single point of failure, you can connect the physical network adapters in NIC teams set to use the vSwitch port-based or source MAC-based load-balancing policies to different physical switches; however, the physical switches must belong to the same layer 2 broadcast domain. Link aggregation using 802.3ad teaming is not supported with either of these load-balancing policies.

REVIEWING IP HASH-BASED LOAD BALANCING

The third load-balancing policy available for NIC teams is the IP hash-based policy, also called the out-IP policy. This policy, shown in Figure 5.34, addresses the limitation of the other two policies. The IP hash-based policy uses the source and destination IP addresses to calculate a hash. The hash determines the physical network adapter to use for communication. Different combinations of source and destination IP addresses will, quite naturally, produce different hashes. Based on the hash, then, this algorithm could allow a single VM to communicate over different physical network adapters when communicating with different destinations, assuming that the calculated hashes select a different physical NIC.

FIGURE 5.34 The IP hash-based policy is a more scalable load-balancing policy that allows VMs to use more than one physical network adapter when communicating with multiple destination hosts.

images

BALANCING FOR LARGE DATA TRANSFERS

Although the IP hash-based load-balancing policy can more evenly spread the traffic for a single VM, it does not provide a benefit for large data transfers occurring between the same source and destination systems. Because the source-destination hash will be the same for the duration of the transfer, the traffic will flow through only a single physical network adapter.

Unless the physical switches support link aggregation that spans multiple switches, a vSwitch with the NIC teaming load-balancing policy set to IP hash must have all of its physical network adapters connected to the same physical switch. In addition, the switch must be configured for link aggregation. ESXi configured to use a vSphere Standard Switch supports standard 802.3ad teaming in static (manual) mode—sometimes referred to as EtherChannel in Cisco networking environments—but does not support the Link Aggregation Control Protocol (LACP) or Port Aggregation Protocol (PAgP) commonly found on switch devices. Link aggregation can increase the aggregate throughput available to a single virtual network adapter by spreading connections to different destinations across multiple physical network adapters, although any single connection is still limited to the bandwidth of one physical adapter.

Another consideration to point out when using the IP hash-based load-balancing policy is that all physical NICs must be set to active instead of some configured as active and some as passive. This is because of the way IP hash-based load balancing works between the virtual switch and the physical switch.

Figure 5.35 shows a snippet of the configuration of a Cisco switch configured for link aggregation. Keep in mind that other switch manufacturers will have their own ways of configuring link aggregation, so refer to your specific vendor's documentation.

FIGURE 5.35 The physical switches must be configured to support the IP hash-based load-balancing policy.

images
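Because we can't show every vendor's syntax, here is a representative sketch of a static (non-LACP, non-PAgP) link aggregation configuration on a Cisco IOS switch; the port-channel number, interface names, and VLAN range are illustrative only:

interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 100-110
!
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 100-110
 channel-group 1 mode on
!
interface GigabitEthernet1/0/2
 switchport mode trunk
 switchport trunk allowed vlan 100-110
 channel-group 1 mode on

The key detail is channel-group 1 mode on, which builds a static EtherChannel without LACP or PAgP negotiation—the only aggregation mode a vSphere Standard Switch supports.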

Perform the following steps to alter the NIC teaming load-balancing policy of a vSwitch:

  1. Use the vSphere Web Client to establish a connection to a vCenter Server instance.
  2. Using your method of choice, navigate to the specific ESXi host that has the vSwitch whose NIC teaming configuration you wish to modify.
  3. With an ESXi host selected, go to the Manage tab, select Networking, and then make sure that Virtual Switches is highlighted.
  4. Select the name of the virtual switch from the list of virtual switches, and then click the Edit icon (it looks like a pencil).
  5. In the Edit Settings dialog box, select Teaming And Failover, and then select the desired load-balancing strategy from the Load Balancing drop-down list, as shown in Figure 5.36.
  6. Click OK to save the changes.
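If you prefer the command line, the same change can be made with the vSphere CLI or vSphere Management Assistant. The following is a sketch only—it assumes a vSwitch named vSwitch0, and the valid values for --load-balancing are portid, mac, iphash, and explicit:

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network vswitch standard policy
failover set --load-balancing=iphash --vswitch-name=vSwitch0

The corresponding get command (network vswitch standard policy failover get) shows the policy currently in effect for a given vSwitch.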

Now that we've explained the load-balancing policies—and before we explain explicit failover order—let's take a deeper look at the failover and failback of uplinks in a NIC team. There are two parts to consider: failover detection and failover policy. We'll cover both of these in the next section.

FIGURE 5.36 Select the load-balancing policy for a vSwitch in the Teaming And Failover section.

images

CONFIGURING FAILOVER DETECTION AND FAILOVER POLICY

Failover detection with NIC teaming can be configured to use either a link status method or a beacon-probing method.

The link status failover-detection method works just as the name suggests: the link status of the physical network adapter identifies the failure of an uplink. Failure is identified for events like a removed cable or a power failure on a physical switch. The downside of link status failover detection is its inability to identify misconfigurations or failures farther upstream, such as a pulled cable between the switch and another networking device (for example, a cable connecting one switch to an upstream switch).

OTHER WAYS OF DETECTING UPSTREAM FAILURES

Some network switch manufacturers have also added features into their network switches that assist in detecting upstream network failures. In the Cisco product line, for example, there is a feature known as link state tracking that enables the switch to detect when an upstream port has gone down and react accordingly. This feature can reduce or even eliminate the need for beacon probing.
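As a rough illustration on a Cisco Catalyst switch that supports this feature (the interface names are hypothetical), link state tracking is enabled globally and then applied to the upstream and downstream ports:

link state track 1
!
interface GigabitEthernet1/0/24
 description Uplink toward the core
 link state group 1 upstream
!
interface GigabitEthernet1/0/1
 description Downlink to ESXi host vmnic0
 link state group 1 downstream

If the upstream port loses its link, the switch takes the downstream ports in the group down as well, so the ESXi host sees a simple link failure and can fail over using link status detection alone.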

The beacon-probing failover-detection setting, which includes link status as well, sends Ethernet broadcast frames across all physical network adapters in the NIC team. These broadcast frames allow the vSwitch to detect upstream network connection failures and will force failover when Spanning Tree Protocol blocks ports, when ports are configured with the wrong VLAN, or when a switch-to-switch connection has failed. When a beacon is not returned on a physical network adapter, the vSwitch triggers the failover notice and reroutes the traffic from the failed network adapter through another available network adapter based on the failover policy.

Consider a vSwitch with a NIC team consisting of three physical network adapters, where each adapter is connected to a different physical switch and each of those switches is connected to a single common switch, which is in turn connected to an upstream switch, as shown in Figure 5.37. When the NIC team is set to the beacon-probing failover-detection method, a beacon will be sent out over all three uplinks.

FIGURE 5.37 The beacon-probing failover-detection policy sends beacons out across the physical network adapters of a NIC team to identify upstream network failures or switch misconfigurations.

images

After a failure is detected, either via link status or beacon probing, a failover will occur. Traffic from any VMs or VMkernel ports is rerouted to another member of the NIC team. Exactly which member that might be, though, depends primarily on the configured failover order.

Figure 5.38 shows the failover order configuration for a vSwitch with two adapters in a NIC team. In this configuration, both adapters are configured as active adapters, and either adapter or both adapters may be used at any given time to handle traffic for this vSwitch and all its associated ports or port groups.

Now look at Figure 5.39. This figure shows a vSwitch with three physical network adapters in a NIC team. In this configuration, one of the adapters is configured as a standby adapter. Any adapters listed as standby adapters will not be used until a failure occurs on one of the active adapters, at which time the standby adapters activate in the order listed.

It should go without saying, but adapters that are listed in the Unused Adapters section will not be used in the event of a failure.

Now take a quick look back at Figure 5.36. You'll see an option there labeled Use Explicit Failover Order. This is the explicit failover order policy that we mentioned toward the beginning of the section “Configuring NIC Teaming.” If you select that option instead of one of the other load-balancing options, traffic will move to the next available uplink in the list of active adapters. If no active adapters are available, traffic will move down the list to the standby adapters. Just as the name of the option implies, ESXi uses the order of the adapters in the failover order to determine how traffic will be placed on the physical network adapters. Because this option does not perform any load balancing whatsoever, it's generally not recommended; one of the other options is typically used instead.

FIGURE 5.38 The failover order helps determine how adapters in a NIC team are used when a failover occurs.

images

FIGURE 5.39 Standby adapters automatically activate when an active adapter fails.

images

The Failback option controls how ESXi will handle a failed network adapter when it recovers from failure. The default setting, Yes, as shown in Figure 5.38 and Figure 5.39, indicates that the adapter will be returned to active duty immediately upon recovery, and it will replace any standby adapter that may have taken its place during the failure. Setting Failback to No means that the recovered adapter remains inactive until another adapter fails, at which point the recovered adapter is activated to replace the newly failed adapter.

USING FAILBACK WITH VMKERNEL PORTS AND IP-BASED STORAGE

We recommend setting Failback to No for VMkernel ports you've configured for IP-based storage. Otherwise, in the event of a “port-flapping” issue—a situation in which a link may repeatedly go up and down quickly—performance is negatively impacted. Setting Failback to No in this case protects performance in the event of port flapping.
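This setting can also be applied from the command line at the port group level, which keeps the behavior scoped to just the VMkernel port group used for IP-based storage. The following is a sketch that assumes a port group named IPStorage (the name is illustrative):

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network vswitch standard portgroup
policy failover set --portgroup-name=IPStorage --failback=false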

Perform the following steps to configure the Failover Order policy for a NIC team:

  1. Use the vSphere Web Client to establish a connection to a vCenter Server instance.
  2. Navigate to the ESXi host that has the vSwitch for which you'd like to change the failover order. With an ESXi host selected, select the Manage tab, then click Networking.
  3. With Virtual Switches highlighted on the left, select the virtual switch you want to edit, then click the Edit Settings icon.
  4. Select Teaming And Failover.
  5. Use the Move Up and Move Down buttons to adjust the order of the network adapters and their location within the Active Adapters, Standby Adapters, and Unused Adapters lists, as shown in Figure 5.40.

    FIGURE 5.40 Failover order for a NIC team is determined by the order of network adapters as listed in the Active Adapters, Standby Adapters, and Unused Adapters lists.

    images

  6. Click OK to save the changes.
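The failover order can also be configured from the command line. The following sketch sets vmnic0 and vmnic1 as active adapters and vmnic2 as a standby adapter on a vSwitch named vSwitch0 (the adapter and vSwitch names are illustrative):

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network vswitch standard policy
failover set --active-uplinks=vmnic0,vmnic1 --standby-uplinks=vmnic2
--vswitch-name=vSwitch0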

When a failover event occurs on a vSwitch with a NIC team, the vSwitch is obviously aware of the event. The physical switch that the vSwitch is connected to, however, will not know immediately. As you can see in Figure 5.40, a vSwitch includes a Notify Switches configuration setting, which, when set to Yes, will allow the physical switch to immediately learn of any of the following changes:

  • A VM is powered on (or any other time a client registers itself with the vSwitch).
  • A vMotion occurs.
  • A MAC address is changed.
  • A NIC team failover or failback has occurred.

TURNING OFF NOTIFY SWITCHES

The Notify Switches option should be set to No when the port group has VMs using Microsoft Network Load Balancing (NLB) in Unicast mode.

In any of these events, the physical switch is notified of the change using the Reverse Address Resolution Protocol (RARP). RARP updates the lookup tables on the physical switches and offers the shortest latency when a failover event occurs.

Although the VMkernel works proactively to keep traffic flowing from the virtual networking components to the physical networking components, VMware recommends taking the following actions to minimize networking delays:

  • Disable Port Aggregation Protocol (PAgP) and Link Aggregation Control Protocol (LACP) on the physical switches.
  • Disable Dynamic Trunking Protocol (DTP) or trunk negotiation.
  • Disable Spanning Tree Protocol (STP).

VIRTUAL SWITCHES WITH CISCO SWITCHES

VMware recommends configuring Cisco devices to use PortFast mode for access ports or PortFast trunk mode for trunk ports.
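On a Cisco IOS switch, this is typically a one-line setting on each ESXi-facing interface. For example (the interface name is illustrative), a trunk port would use:

interface GigabitEthernet1/0/1
 switchport mode trunk
 spanning-tree portfast trunk

An access port would use spanning-tree portfast instead. PortFast allows the port to move straight to the forwarding state, so a NIC team failover isn't delayed by spanning tree convergence.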

Using and Configuring Traffic Shaping

By default, all virtual network adapters connected to a vSwitch have access to the full amount of bandwidth on the physical network adapter with which the vSwitch is associated. In other words, if a vSwitch is assigned a 1 Gbps network adapter, then each VM configured to use the vSwitch has access to 1 Gbps of bandwidth. Naturally, if contention becomes a bottleneck hindering VM performance, NIC teaming will help. However, as a complement to NIC teaming, you can also enable and configure traffic shaping. Traffic shaping establishes hard-coded limits for peak bandwidth, average bandwidth, and burst size to reduce a VM's outbound bandwidth capability.

As shown in Figure 5.41, the Peak Bandwidth value and the Average Bandwidth value are specified in kilobits per second, and the Burst Size value is configured in units of kilobytes. The value entered for Average Bandwidth dictates the data transfer per second across the virtual switch. The Peak Bandwidth value identifies the maximum amount of bandwidth a vSwitch can pass without dropping packets. Finally, the Burst Size value defines the maximum amount of data included in a burst. The burst size is a calculation of bandwidth multiplied by time. During periods of high utilization, if a burst exceeds the configured value, packets are dropped in favor of other traffic; however, if the queue for network traffic processing is not full, the packets are retained for transmission at a later time.

FIGURE 5.41 Traffic shaping reduces the outbound bandwidth available to a port group.

images
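To put some illustrative numbers on this: if Average Bandwidth is set to 50,000 Kbps, Peak Bandwidth to 100,000 Kbps, and Burst Size to 102,400 KB, then a port that has been running below its average can temporarily transmit at up to 100 Mbps. Because 102,400 KB is roughly 819,200 Kbits, a burst sustained at the full peak rate would last on the order of 819,200 ÷ 100,000 ≈ 8 seconds before the port is throttled back toward its average. These numbers are hypothetical and intended only to show how bandwidth multiplied by time bounds a burst.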

TRAFFIC SHAPING AS A LAST RESORT

Use the traffic-shaping feature sparingly. Traffic shaping should be reserved for situations where VMs are competing for bandwidth and adding physical network adapters isn't an option because there aren't enough expansion slots in the physical chassis. Given the low cost of network adapters, it is usually more worthwhile to build vSwitches with NIC teams than to cut the bandwidth available to a set of VMs.

Perform the following steps to configure traffic shaping:

  1. Use the vSphere Web Client to establish a connection to a vCenter Server instance.
  2. Navigate to the ESXi host on which you'd like to configure traffic shaping. With an ESXi host selected, go to the Networking section of the Manage tab.
  3. Make sure Virtual Switches is selected, click the virtual switch on which traffic shaping should be enabled, and then click the Edit Settings icon.
  4. Select Traffic Shaping.
  5. Select the Enabled option from the Status drop-down list.
  6. Adjust the Average Bandwidth value to the desired number of kilobits per second.
  7. Adjust the Peak Bandwidth value to the desired number of kilobits per second.
  8. Adjust the Burst Size value to the desired number of kilobytes.

Keep in mind that traffic shaping on a vSphere Standard Switch applies only to outbound traffic.
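Traffic shaping can also be enabled from the command line. The following sketch enables shaping on a vSwitch named vSwitch0 with an average of 50,000 Kbps, a peak of 100,000 Kbps, and a burst size of 102,400 KB; the values and names are illustrative, and you should verify the expected units with the command's --help output:

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network vswitch standard policy
shaping set --enabled=true --avg-bandwidth=50000 --peak-bandwidth=100000
--burst-size=102400 --vswitch-name=vSwitch0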

Bringing It All Together

By now you've seen how all the various components of ESXi virtual networking interact with each other—vSwitches, ports and port groups, uplinks and NIC teams, and VLANs. But how do you assemble all these pieces into a usable whole?

The number and the configuration of the vSwitches and port groups depend on several factors, including the number of network adapters in the ESXi host, the number of IP subnets, the existence of VLANs, and the number of physical networks. With respect to the configuration of the vSwitches and VM port groups, no single correct configuration will satisfy every scenario. However, the greater the number of physical network adapters in an ESXi host, the more flexibility you will have in your virtual networking architecture.

Later in the chapter we'll discuss some advanced design factors, but for now let's stick with some basic design considerations. If the vSwitches created in the VMkernel will not be configured with multiple port groups or VLANs, you will be required to create a separate vSwitch for every IP subnet or physical network to which you need to connect. This was illustrated previously in Figure 5.24 in our discussion about VLANs. To really understand this concept, let's look at two more examples.

Figure 5.42 shows a scenario in which there are five IP subnets that your virtual infrastructure components need to reach. The VMs in the production environment must reach the production LAN, the VMs in the test environment must reach the test LAN, the VMkernel needs to access the IP storage and vMotion LANs, and finally, the ESXi host must have access to the management LAN. In this scenario, without the use of VLANs and port groups, the ESXi host must have five different vSwitches and five different physical network adapters. (Of course, this doesn't account for redundancy or NIC teaming for the vSwitches.)

FIGURE 5.42 Without the use of port groups and VLANs in the vSwitches, each IP subnet will require a separate vSwitch with the appropriate connection type.

images

Real World Scenario

WHY DESIGN IT THAT WAY?

During the virtual network design process we are often asked questions such as why virtual switches should not be created with the largest number of ports to leave room to grow or why multiple vSwitches should be used instead of a single vSwitch (or vice versa). Some of these questions are easy to answer; the answers to others are a matter of experience and, to be honest, personal preference.

Consider the question about why vSwitches should not be created with the largest number of ports. As you'll see in Table 5.1, the maximum number of virtual network switch ports per host is 4096. Keep in mind that a virtual switch has 8 reserved ports, so a switch created with 1016 ports actually consumes 1024 ports. This means that if virtual switches are created with 1016 ports each, only 4 virtual switches can be created on the host; calculate 1024 x 4, and you arrive at the per-host maximum of 4096 ports.

Other questions aren't necessarily so clear cut. We have found that using multiple vSwitches can make it easier to shift certain networks to dedicated physical networks; for example, if a customer wants to move their management network to a dedicated physical network for greater security, this is more easily accomplished when using multiple vSwitches instead of a single vSwitch. The same can be said for using VLANs.

In the end, though, many areas of virtual networking design are simply areas of personal preference and not technical necessity. Learning to determine which areas are which will go a long way to helping you understand your virtualized networking environment.

Figure 5.43 shows the same configuration, but this time using VLANs for the Management, vMotion, Production, and Test/Dev networks. The IP storage network is still a physically separate network (a common configuration in many environments).

The configuration in Figure 5.43 still uses five network adapters, but this time you're able to provide NIC teaming for all the networks except for the IP storage network.

If the IP storage network had been configured as a VLAN, the number of vSwitches and uplinks could have been even further reduced. Figure 5.44 shows a possible configuration that would support this sort of scenario.

FIGURE 5.43 The use of the physically separate IP storage network limits the reduction in the number of vSwitches and uplinks.

images

FIGURE 5.44 With the use of port groups and VLANs in the vSwitches, even fewer vSwitches and uplinks are required.

images

This time, you're able to provide NIC teaming to all the traffic types involved—Management, vMotion, IP storage, and VM traffic—using only a single vSwitch with multiple uplinks.

Clearly, there is a tremendous amount of flexibility in how vSwitches, uplinks, and port groups are assembled to create a virtual network capable of supporting your infrastructure. Even given all this flexibility, though, there are limits. Table 5.1 lists some of the limits of ESXi networking.

VIRTUAL SWITCH CONFIGURATIONS: DON'T GO TOO BIG OR TOO SMALL

Although you can create a vSwitch with a maximum of 4088 ports (really 4096), it is not recommended if you anticipate growth. Because ESXi hosts cannot have more than 4096 ports, if you create a vSwitch with 4088 ports, then you are limited to a single vSwitch on that host. With only a single vSwitch, you may not be able to connect to all the networks that you need. In the event you do run out of ports on an ESXi host and need to create a new vSwitch, you can reduce the number of ports on an existing vSwitch. That change requires a reboot to take effect, but vMotion allows you to move the VMs to a different host to prevent VM downtime.

You also want to account for scenarios such as a host failure, when VMs will be restarted on other hosts using vSphere HA (described in more detail in Chapter 7, “Ensuring High Availability and Business Continuity”). If you make your vSwitch too small (that is, with too few ports), VMs restarted on that host by vSphere HA might not be able to connect to the network because no free ports are available.

Our key takeaway: Virtual switch sizing is a function of multiple variables that you need to consider, so plan carefully! We recommend creating virtual switches with enough ports to cover existing needs, projected growth, and failover capacity.

TABLE 5.1: Configuration maximums for ESXi networking components (vSphere Standard Switches)

CONFIGURATION ITEM MAXIMUM
Ports per vSwitch 4088
Maximum ports per host (vSS/vDS) 4096
Port groups per vSwitch 512
Uplinks per vSwitch 32
Maximum active ports per host (vSS/vDS) 1016

With all the flexibility provided by the different virtual networking components, you can be assured that whatever the physical network configuration might hold in store, there are several ways to integrate the virtual networking. What you configure today may change as the infrastructure changes or as the hardware changes. ESXi provides enough tools and options to ensure a successful communication scheme between the virtual and physical networks.

Working with vSphere Distributed Switches

So far our discussion has focused solely on vSphere Standard Switches (just vSwitches). Starting with vSphere 4.0 and continuing with vSphere 5.0 and the current release, there is another option: vSphere Distributed Switches.

Whereas vSwitches are managed per host, a vSphere Distributed Switch functions as a single virtual switch across all the associated ESXi hosts. There are a number of similarities between a vSphere Distributed Switch and a Standard vSwitch:

  • A vSphere Distributed Switch provides connectivity for VMs and VMkernel interfaces.
  • A vSphere Distributed Switch leverages physical network adapters as uplinks to provide connectivity to the external physical network.
  • A vSphere Distributed Switch can leverage VLANs for logical network segmentation.

Of course, there are differences as well, but the biggest of these is that a vSphere Distributed Switch spans multiple hosts in a cluster instead of each host having its own set of independent vSwitches and port groups. This greatly reduces complexity in clustered ESXi environments and simplifies the addition of new servers to an ESXi cluster.

VMware's official abbreviation for a vSphere Distributed Switch is VDS. In this chapter, we'll use the full name (vSphere Distributed Switch), VDS, or sometimes just distributed switch to refer to this feature.

Creating a vSphere Distributed Switch

The process of creating and configuring a distributed switch is twofold. First, you create the distributed switch and then you add ESXi hosts to it. To help simplify the process, vSphere automatically includes the option to add an ESXi host to the distributed switch during the process of creating it.

Perform the following steps to create a new vSphere Distributed Switch:

  1. Launch the vSphere Web Client and connect to a vCenter Server instance.
  2. On the vSphere Web Client home screen, select the vCenter object from the list on the left, then select Distributed Switches from the Inventory Lists area.
  3. On the right side of the vSphere Web Client, click the Create A New Distributed Switch icon (it looks like a switch with a green plus mark in the corner).

    This launches the New Distributed Switch wizard.

  4. Supply a name for the new distributed switch, and select a location within the vCenter inventory (a datacenter object or a folder) where you'd like to store the new distributed switch. Click Next.
  5. Next, select the version of the VDS you'd like to create. Figure 5.45 shows the options for distributed switch versions.

    FIGURE 5.45 If you want to support all the features included in vSphere 5.5, you must use a Version 5.5.0 distributed switch.

    images

    Five options are available:

    • Distributed Switch: 4.0: This type of distributed switch is compatible back to vSphere 4.0 and limits the distributed switch to features supported only by vSphere 4.0.
    • Distributed Switch: 4.1.0: This version adds support for Load-Based Teaming and Network I/O Control. This version is supported by vSphere 4.1 and later.
    • Distributed Switch: 5.0.0: This version is compatible only with vSphere 5.0 and later and adds support for features such as user-defined network resource pools in Network I/O Control, NetFlow, and port mirroring.
    • Distributed Switch: 5.1.0: Compatible with vSphere 5.1 or later, this version of the distributed switch adds support for Network Rollback and Recovery, Health Check, Enhanced Port Mirroring, and LACP.
    • Distributed Switch: 5.5.0: This is the latest version, and it's supported on only vSphere 5.5 or later. This distributed switch adds Traffic Filtering and Marking and enhanced support for LACP.

    In this case, select vSphere Distributed Switch Version 5.5.0 and click Next.

  6. Specify the number of uplink ports, as illustrated in Figure 5.46.

    FIGURE 5.46 The number of uplinks controls how many physical adapters from each host can serve as uplinks for the distributed switch.

    images

  7. On the same screen shown in Figure 5.46, select whether you want Network I/O Control enabled or disabled. Also select whether you want to create a default port group and, if so, what the name of that default port group should be. For this example, leave Network I/O Control enabled, and create a default port group with the name of your choosing. Click Next.
  8. Review the settings for your new distributed switch. If everything looks correct, click Finish; otherwise, use the Back button to go back and change settings as needed.

After you complete the New Distributed Switch wizard, a new distributed switch will appear in the list of distributed switches in the vSphere Web Client. You can click the new distributed switch to see the ESXi hosts connected to it (none yet), the VMs hosted on it (none yet), the distributed port groups on it (only one—the one you created during the wizard), and the uplink port groups (of which there is also only one).

All this information is also available using the vSphere CLI or vSphere Management Assistant, but due to the nature of how the esxcli command is structured, you'll need to have an ESXi host added to the distributed switch first. Let's look at how that's done.

VSPHERE DISTRIBUTED SWITCHES REQUIRE VCENTER SERVER

This may seem obvious, but it's important to point out that because of the shared nature of a vSphere Distributed Switch, vCenter Server is required. That is, you cannot have a vSphere Distributed Switch in an environment that is not being managed by vCenter Server.

Once you've created a distributed switch, it is relatively easy to add an ESXi host. When the ESXi host is added, all of the distributed port groups will automatically be propagated to the new host with the correct configuration. This is the distributed nature of the distributed switch—as configuration changes are made via the vSphere Web Client, vCenter Server pushes those changes out to all participating ESXi hosts. VMware administrators who are used to managing large ESXi clusters and having to repeatedly create vSwitches and port groups across all the servers individually will be very pleased with the reduction in administrative overhead that distributed switches offer.

Perform the following steps to add an ESXi host to an existing distributed switch:

  1. Launch the vSphere Web Client, and connect to a vCenter Server instance.
  2. Navigate to the list of distributed switches. One way of getting there is to start at the vCenter home screen, then click Distributed Switches in the Inventory Lists area.
  3. Select an existing distributed switch in the list of objects on the right, and select Add And Manage Hosts from the Actions menu.

    This launches the Add And Manage Hosts wizard, shown in Figure 5.47.

    FIGURE 5.47 When you're working with distributed switches, the vSphere Web Client offers a single wizard to add hosts, remove hosts, or manage host networking.

    images

  4. Select the Add Hosts radio button and click Next.
  5. Click the green plus icon to add an ESXi host. This opens the Select New Host dialog box.
  6. From the list of new hosts to add, place a check mark next to the name of each ESXi host you'd like to add to the distributed switch. Click OK when you're done, and then click Next to continue.
  7. The next screen offers four different adapter-related tasks to perform, as shown in Figure 5.48. In this case, make sure only Manage Physical Adapters is selected. Click Next to continue.

    FIGURE 5.48 All adapter-related changes to distributed switches are consolidated into a single wizard.

    images

    The Manage Virtual Adapters option allows you to add, migrate, edit, or remove virtual adapters (VMkernel ports) from this distributed switch.

    The Migrate Virtual Machine Networking option enables you to migrate VM network adapters to this distributed switch.

    The Manage Advanced Host Settings option lets you set the number of ports per legacy host proxy switch.

  8. The next screen lets you choose the physical adapters on the new host that should be connected to the uplinks port group for the distributed switch. For each physical adapter you'd like to add, click the adapter and then click Assign Uplink. You'll be prompted to confirm the uplink to which this physical adapter should be connected. Repeat this process to add as many physical adapters as you have uplinks configured for the distributed switch.
  9. Repeat step 8 for each ESXi host you're adding to the distributed switch. Click Next when you're finished adding uplinks for all ESXi hosts.
  10. The Analyze Impact screen displays the potential effects of the changes proposed by the wizard. If everything looks OK, click Next; otherwise, click Back to go back and change the settings.
  11. Click Finish to complete the wizard.

You'll have an opportunity to see this wizard again in later sections. For example, we'll discuss the options for managing physical and virtual adapters in more detail in the section “Managing Adapters” later in this chapter.

We mentioned earlier in this section that you could use the vSphere CLI or vSphere Management Assistant to see distributed switch information once you'd added a host to the distributed switch. The following command will show you a list of the distributed switches to which a particular ESXi host has been joined:

esxcli --server=<vCenter host name> --vihost=<ESXi host name>
--username=<vCenter administrative user> network vswitch dvs vmware list

The output will look similar to the output shown in Figure 5.49.

FIGURE 5.49 The esxcli command shows full details on the configuration of a distributed switch.

images

Use the --help parameter with the network vswitch dvs vmware namespace command to see some of the other tasks that you can perform with the vSphere CLI or vSphere Management Assistant related to vSphere Distributed Switches.

Now, let's take a look at a few other tasks related to distributed switches. We'll start with removing an ESXi host from a distributed switch.

Removing an ESXi Host from a Distributed Switch

Naturally, you can also remove ESXi hosts from a distributed switch. You can't remove a host from a distributed switch if it still has VMs connected to a distributed port group on that switch. This is analogous to trying to delete a standard vSwitch or a port group while a VM is still connected; this, too, is prevented. To allow the host to be removed from the distributed switch, you must move all VMs to a standard vSwitch or a different distributed switch.

Perform the following steps to remove an individual ESXi host from a distributed switch:

  1. Launch the vSphere Web Client, and connect to a vCenter Server instance.
  2. Navigate to the list of distributed switches and select the specific distributed switch from which you'd like to remove an individual ESXi host.
  3. From the Actions menu, select Add And Manage Hosts. This will bring up the Add And Manage Hosts dialog box, shown earlier in Figure 5.47.
  4. Select the Remove Hosts radio button. Click Next.
  5. Click the green plus icon to select hosts to be removed from the distributed switch.

    ADDING HOSTS TO BE REMOVED

    It might seem a bit counterintuitive to use the green plus icon when selecting the hosts to be removed from the distributed switch. The easiest way to think about it is to remember that you're adding hosts to the list of hosts that will be removed.

  6. In the Select Member Hosts dialog box, place a check mark next to each ESXi host you'd like to remove from the distributed switch. Click OK when you're done selecting hosts.
  7. Click Finish to remove the selected ESXi hosts.
  8. If any VMs are still connected to the distributed switch, the vSphere Web Client will display an error similar to the one shown in Figure 5.50.

    FIGURE 5.50 The vSphere Web Client won't allow a host to be removed from a distributed switch if a VM is still attached.

    images

    To correct this error, reconfigure the VM(s) to use a different distributed switch or vSwitch, or migrate the VMs to a different host using vMotion. Then proceed with removing the host from the distributed switch.

  9. If there were no VMs attached to the distributed switch, or after all VMs are reconfigured to use a different vSwitch or distributed switch, the host is removed.

In addition to removing individual ESXi hosts from a distributed switch, you can remove the entire distributed switch.

Removing a Distributed Switch

Removing the last ESXi host from a distributed switch does not remove the distributed switch itself. Even if all the VMs and/or ESXi hosts have been removed from the distributed switch, the distributed switch still exists in the vCenter inventory. You must still remove the distributed switch object itself.

Removing a distributed switch is possible only if no VMs have been assigned to a distributed port group on the distributed switch. Otherwise, the removal of the distributed switch is blocked with an error message similar to the one displayed previously in Figure 5.50. Again, you'll need to reconfigure the VM(s) to use a different vSwitch or distributed switch before the operation can proceed. Refer to Chapter 9, “Creating and Managing Virtual Machines,” for more information on modifying a VM's network settings.

Perform the following steps to remove the distributed switch if no VMs are using it or any of the distributed port groups on it:

  1. Launch the vSphere Web Client, and connect to a vCenter Server instance.
  2. From the vSphere Web Client home screen, navigate to the Distributed Switches inventory list.
  3. Select an existing vSphere Distributed Switch in the inventory pane on the left.
  4. From the Actions menu, select All vCenter Actions → Remove From Inventory.
  5. The distributed switch and all associated distributed port groups are removed from the inventory and from any connected hosts.

The bulk of the configuration for a distributed switch isn't performed for the distributed switch itself but rather for the distributed port groups on that distributed switch. Nevertheless, let's first take a look at managing distributed switches themselves.

Managing Distributed Switches

As we stated earlier, the vast majority of the things that a VMware administrator will need to do with a distributed switch involve working with distributed port groups. We'll discuss distributed port groups later, but now we want to point out a few things involved with managing the distributed switch. We'll focus primarily on the functionality found on the Monitor, Manage, and Related Objects tabs of a distributed switch in the vSphere Web Client.

We'll start with the Related Objects tab, where you can see ESXi hosts, VMs, templates, distributed port groups, and uplink groups that are connected to the selected distributed switch. This is a great way to explore the relationships between the distributed switch and other components in the environment.

The Manage tab is an area you've already seen and will see again throughout this chapter; in particular, you've been working in the Settings section of the Manage tab quite a bit, and you'll continue to do so as you start creating distributed port groups. The Manage tab also includes the following sections:

  • In the Alarm Definitions section, you'll be able to create custom alarms for monitoring. This topic is covered in more depth in Chapter 13, “Monitoring VMware vSphere Performance.”
  • The Tags section allows VMware administrators to assign tags to objects within the vSphere Web Client and then use the search functionality to quickly and easily find all the objects with a certain tag.
  • The Permissions section shows you the roles that have been assigned to various users or groups for the selected distributed switch. Note that in order to change these permissions, though, you'll have to work with the datacenter object or folder in which the distributed switch is stored.
  • The Network Protocol Profiles section allows you to create profiles that are associated with a distributed port group. These profiles help shape how IPv4 and/or IPv6 are configured for VMs attached to a distributed port group with an associated profile.
  • The Ports section provides a list of all the ports on the distributed switch and their current status.
  • Finally, the Resource Allocation section is where you'll create network resource pools for use with Network I/O Control, a topic we'll discuss later in Chapter 11, “Managing Resource Allocation.”

On the Monitor tab, there are four sections:

  • The Issues section shows issues and/or alarms pertaining to a distributed switch.
  • The Tasks and Events sections provide insight into recently performed tasks and a list of events. You could use these sections to see which user performed a certain task or to review various events pertaining to the selected distributed switch.
  • The Health section centralizes health information for the distributed switch, such as VLAN checks, MTU checks, and other health checks.

The Health section contains some rather important functionality, so let's dig a little deeper into that section in particular.

USING HEALTH CHECKS AND NETWORK ROLLBACK

The vSphere Distributed Switch Health Check feature was added in vSphere 5.1 and is available only when you're using a version 5.1.0 or version 5.5.0 distributed switch. The idea behind the health check feature is to help VMware administrators identify mismatched VLAN configurations, mismatched MTU configurations, and mismatched NIC teaming policies—all of which are common sources of connectivity issues.

There are a few requirements to using the health check feature that you should know:

  • As we mentioned earlier, you must be using a version 5.1.0 or version 5.5.0 distributed switch.
  • VLAN and MTU checks require at least two NICs with active links.
  • The teaming policy check requires at least two NICs with active links and at least two hosts.

By default, vSphere Distributed Switch Health Check is turned off; you must enable it in order to perform checks.

To enable vSphere Distributed Switch Health Check, perform these steps:

  1. Connect to a vCenter Server instance using the vSphere Web Client.
  2. Navigate to a distributed switch object in the vSphere Web Client, and select the distributed switch for which you want to enable health checks.
  3. Click the Manage tab, choose Settings, and then select Health Check.
  4. Click the Edit button.
  5. In the Edit Health Check Settings dialog box, you can independently enable checks for VLAN and MTU, teaming and failover, or both. Click OK when finished.

Once the health checks are enabled, you can view the health check information on the Monitor tab of the distributed switch. Figure 5.51 shows the health check information for a distributed switch once health checks have been enabled.

Closely related to the health check functionality is a feature added in vSphere 5.1 called vSphere Network Rollback. The idea behind network rollback is to automatically protect environments against changes that would disconnect ESXi hosts from vCenter Server by rolling back changes if they are invalid. For example, changes to the speed or duplex of a physical NIC, updating teaming and failover policies for a switch that contains the ESXi host's management interface, or changing the IP settings of a host's management interface are all examples of changes that are validated when they occur. If the change would result in a loss of management connectivity to the host, the change is reverted—or rolled back—automatically.

Rollbacks can occur at two levels: at the host networking level or distributed switch level. Rollback is enabled by default, but you can enable or disable the feature at the vCenter level (doing so requires editing the vCenter Server configuration file; there is no GUI setting).

In addition to automatic rollbacks, VMware administrators have the option of performing manual rollbacks. We showed you how to do a manual rollback at the host level earlier in the section titled “Configuring Management Networking,” when we discussed the Network Restore Options area of an ESXi host's DCUI. To perform a manual rollback of a distributed switch, you use the same process as restoring from a saved configuration, which is what we're going to discuss in the next section.

FIGURE 5.51 The vSphere Distributed Switch Health Check helps identify potential problems in configuration.

images

IMPORTING AND EXPORTING DISTRIBUTED SWITCH CONFIGURATION

vSphere 5.1 added the ability to export (save) and import (load) the configuration of a distributed switch. This functionality can serve a number of purposes; one purpose that we just mentioned is to manually “roll back” to a previously saved configuration.

To export (save) the configuration of a distributed switch to a file, perform these steps:

  1. Log into a vCenter Server instance using the vSphere Web Client.
  2. Navigate to the distributed switch whose configuration you'd like to save.
  3. From the Actions menu, select All vCenter Actions → Export Configuration. This opens the Export Configuration dialog box.
  4. Select the appropriate radio button to export either the configuration of the distributed switch and all the distributed port groups or just the configuration of the distributed switch.
  5. Optionally, supply a description of the exported (saved) configuration, then click OK.
  6. When prompted if you want to save the exported configuration file, click Yes.
  7. Use your operating system's File Save dialog box to select the location where the exported configuration file (named backup.zip) should be saved.

Once you have the configuration exported to a file, you can then import this configuration back into your vSphere environment at a later date to restore the saved configuration. You can also import the configuration into a different vSphere environment, such as an environment being managed by a separate vCenter Server instance.

To import a saved configuration, perform these steps:

  1. Log into a vCenter Server instance using the vSphere Web Client.
  2. Navigate to the distributed switch whose configuration you'd like to restore.
  3. From the Actions menu, select All vCenter Actions → Restore Configuration. This opens the Restore Configuration wizard.
  4. Use the Browse button to select the saved configuration file created earlier by exporting the configuration.
  5. Select the appropriate radio button to restore either the distributed switch and all distributed port groups or just the distributed switch configuration.
  6. Note that if vSphere automatically saved a previous version of the configuration (to protect against loss of management connectivity), this dialog box will also have the option of restoring the previous configuration. In this case, you do not need to select the saved backup file.
  7. Click Next.
  8. Review the settings that the wizard will import. If everything is correct, click Finish; otherwise, use Back to go back and make changes.

Both vSphere Network Rollback and the ability to manually export or import the configuration of a distributed switch are major steps forward in managing distributed switches in a vSphere environment.

Most of the work that a VMware administrator needs to perform will revolve around distributed port groups, so let's turn our attention to working with them.

Working with Distributed Port Groups

With vSphere Standard Switches, port groups are the key to connectivity for the VMkernel and for VMs. Without ports and port groups on a vSwitch, nothing can be connected to that vSwitch. The same is true for vSphere Distributed Switches. Without a distributed port group, nothing can be connected to a distributed switch, and the distributed switch is, therefore, unusable. In the following sections, you'll take a closer look at creating, configuring, and removing distributed port groups.

CREATING A DISTRIBUTED PORT GROUP

Perform the following steps to create a new distributed port group:

  1. Launch the vSphere Web Client, and connect to a vCenter Server instance.
  2. On the vSphere Web Client home screen, navigate to the Distributed Switches inventory list.
  3. Select an existing vSphere Distributed Switch in the inventory pane on the left, and then click the Create A New Distributed Port Group icon on the right. This launches the New Distributed Port Group wizard.
  4. Supply a name for the new distributed port group. Click Next to continue.
  5. The Configure Settings screen, shown in Figure 5.52, allows you to specify a number of settings for the new distributed port group.

    FIGURE 5.52 The New Distributed Port Group wizard gives you extensive access to customize the new distributed port group's settings.

    images

    The Port Binding and Port Allocation options allow you more fine-grained control over how ports in the distributed port group are allocated to VMs.

    • With Port Binding set to Static Binding, ports are statically assigned to a VM when a VM is connected to the distributed switch. You may also set the Port Allocation to be either Elastic (in which case the distributed port group starts with 8 ports and adds more in 8-port increments as needed) or Fixed (in which case it defaults to 128 ports).
    • With Port Binding set to Dynamic Binding, you specify how many ports the distributed port group should have (the default is 128). Note that this option is deprecated; the vSphere Web Client will post a warning to that effect if you select this option.
    • With Port Binding set to Ephemeral Binding, you can't specify the number of ports or the Port Allocation method.

    The Network Resource Pool option allows you to connect this distributed port group to a Network I/O Control custom resource pool. Network I/O Control and network resource pools are described in more detail in Chapter 11.

    Finally, the options for VLAN Type might also need a bit more explanation:

    • With VLAN Type set to None, the distributed port group will receive only untagged traffic. In this case, the uplinks must connect to physical switch ports configured as access ports or they will receive only untagged/native VLAN traffic.
    • With VLAN Type set to VLAN, you'll need to specify a VLAN ID. The distributed port group will receive traffic tagged with that VLAN ID. The uplinks must connect to physical switch ports configured as VLAN trunks.
    • With VLAN Type set to VLAN Trunking, you'll need to specify the range of allowed VLANs. The distributed port group will pass the VLAN tags up to the guest OSes on any connected VMs.
    • With VLAN Type set to Private VLAN, you'll need to specify a Private VLAN entry. Private VLANs are described in detail later in the section “Setting Up Private VLANs.”

    Select the desired port binding settings (and port allocation, if necessary), the desired network resource pool, and the desired VLAN type, and then click Next.

  6. On the summary screen, review the settings, and click Finish if everything is correct. If you need to make changes, use the Back button to go back and make the necessary edits.

After a distributed port group has been created, you can select that distributed port group in the VM configuration as a possible network connection, as shown in Figure 5.53.

After you create a distributed port group, it will appear in the Topology View for the distributed switch that hosts it. In the vSphere Web Client, this view is accessible from the Settings area of the Manage tab for the distributed switch. From there, clicking the Info icon (the small i in the blue circle) will provide more information about the distributed port group and its current state. Figure 5.54 shows some of the information shown by the vSphere Web Client about a distributed port group.

EDITING A DISTRIBUTED PORT GROUP

To edit the configuration of a distributed port group, use the Edit Distributed Port Group Settings link in the Topology View for the distributed switch. In the vSphere Web Client, you can locate this area by selecting a distributed switch in the inventory list and then going to the Settings area of the Manage tab. Finally, select Topology to produce the Topology view shown in Figure 5.55.

FIGURE 5.53 A distributed port group is selected as a network connection for VMs, just like port groups on a vSphere Standard vSwitch.

images

FIGURE 5.54 The vSphere Web Client provides a summary of the distributed port group's configuration.

images

FIGURE 5.55 The Topology view for a distributed switch provides easy access to view and edit distributed port groups.

images

For now, let's focus on modifying VLAN settings, traffic shaping, and NIC teaming for the distributed port group. Policy settings for security and monitoring follow later in this chapter.

DIFFERENT OPTIONS ARE AVAILABLE DEPENDING ON THE VSPHERE DISTRIBUTED SWITCH VERSION

Recall that you can create different versions of distributed switches in the vSphere Web Client. Certain configuration options are available only with a version 5.1.0 or version 5.5.0 vSphere Distributed Switch.

Perform the following steps to modify the VLAN settings for a distributed port group:

  1. Connect to a vCenter Server instance using the vSphere Web Client.
  2. Navigate to the Topology view for the distributed switch containing the distributed port group you want to edit.
  3. Select a distributed port group by clicking its name, which acts like a hyperlink in the vSphere Web Client, and then click the Edit Distributed Port Group Settings icon in the row of icons just above the switch topology.
  4. In the Edit Settings dialog box, select the VLAN option from the list of options on the left.
  5. Modify the VLAN settings by changing the VLAN ID or by changing the VLAN Type setting to VLAN Trunking or Private VLAN.

    Refer to Figure 5.52 for the different VLAN configuration options.

  6. Click OK when you have finished making changes.

Perform the following steps to modify the traffic-shaping policy for a distributed port group:

  1. Using a supported web browser, connect to a vCenter Server instance to launch the vSphere Web Client.
  2. Navigate to the Topology view for the distributed switch containing the distributed port group you want to edit.
  3. Select a distributed port group by clicking its name, which acts like a hyperlink in the vSphere Web Client, and then click the Edit Distributed Port Group Settings icon in the row of icons just above the switch topology.
  4. Select the Traffic Shaping option from the list of options on the left of the distributed port group settings dialog box, as illustrated in Figure 5.56.

    Traffic shaping was described in detail earlier in this chapter in the section “Using and Configuring Traffic Shaping.” The big difference here is that with a distributed switch, you can apply traffic-shaping policies to both ingress and egress traffic. With vSphere Standard Switches, you could apply traffic-shaping policies only to egress (outbound) traffic. Otherwise, the settings here for a distributed port group function as described earlier.

    FIGURE 5.56 You can apply both ingress and egress traffic-shaping policies to a distributed port group on a distributed switch.

    images

  5. Click OK when you have finished making changes.
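
As with the other distributed port group policies described in this chapter, you can also script the shaping change. The pyVmomi sketch below reads the port group's current default port configuration, modifies just the ingress and egress shaping policies, and writes the result back; dpg is a distributed port group you've already located as in the earlier VLAN sketch. We believe the API expects bandwidth values in bits per second and burst size in bytes, whereas the Web Client displays kilobits and kilobytes, so double-check the units before reusing numbers like these.

from pyVmomi import vim

# dpg: the distributed port group to modify (located as in the earlier sketch)
port_setting = dpg.config.defaultPortConfig    # the current VMwareDVSPortSetting

# Apply the same shaping values to both ingress and egress.
for shaping in (port_setting.inShapingPolicy, port_setting.outShapingPolicy):
    shaping.inherited = False
    shaping.enabled = vim.BoolPolicy(value=True, inherited=False)
    shaping.averageBandwidth = vim.LongPolicy(value=100 * 1000 * 1000, inherited=False)  # ~100 Mbps
    shaping.peakBandwidth = vim.LongPolicy(value=100 * 1000 * 1000, inherited=False)
    shaping.burstSize = vim.LongPolicy(value=100 * 1024 * 1024, inherited=False)         # ~100 MB

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = dpg.config.configVersion
spec.defaultPortConfig = port_setting
dpg.ReconfigureDVPortgroup_Task(spec)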

Perform the following steps to modify the NIC teaming and failover policies for a distributed port group:

  1. Launch the vSphere Web Client by connecting to a vCenter Server instance with a supported web browser.
  2. Navigate to the Topology view for the distributed switch containing the distributed port group you want to edit.
  3. Select a distributed port group by clicking its name, which acts like a hyperlink in the vSphere Web Client, and then click the Edit Distributed Port Group Settings icon in the row of icons just above the switch topology.
  4. Select the Teaming And Failover option from the list of options on the left of the Edit Settings dialog box, as illustrated in Figure 5.57.

    FIGURE 5.57 The Teaming And Failover item in the distributed port group Edit Settings dialog box provides options for modifying how a distributed port group uses uplinks.

    images

    These settings were described in detail in the section “Configuring NIC Teaming,” with one notable exception—version 4.1 and higher distributed switches support a new load-balancing type, Route Based On Physical NIC Load. When this load-balancing policy is selected, ESXi checks the utilization of the uplinks every 30 seconds for congestion. In this case, congestion is defined as either transmit or receive traffic greater than 75 percent mean utilization over a 30-second period. If congestion is detected on an uplink, ESXi will dynamically reassign the VM to a different uplink.

    REQUIREMENTS FOR LOAD-BASED TEAMING

    Load-Based Teaming (LBT) requires that all upstream physical switches be part of the same layer 2 (broadcast) domain. In addition, VMware recommends that you enable the PortFast or PortFast Trunk option on all physical switch ports connected to a distributed switch that is using Load-Based Teaming.

  5. Click OK when you have finished making changes.
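
The same teaming change can be scripted. The short pyVmomi fragment below assumes you've already connected to vCenter Server and located the distributed port group object (dpg) as in the earlier VLAN sketch; the policy string 'loadbalance_loadbased' is the API name that corresponds to Route Based On Physical NIC Load.

from pyVmomi import vim

# dpg: a distributed port group located as in the earlier VLAN sketch
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.inherited = False
teaming.policy = vim.StringPolicy(value='loadbalance_loadbased', inherited=False)

port_setting = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_setting.uplinkTeamingPolicy = teaming

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = dpg.config.configVersion
spec.defaultPortConfig = port_setting
dpg.ReconfigureDVPortgroup_Task(spec)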

Later in this chapter in the section titled “Configuring LACP,” we'll provide more detail on vSphere's support for Link Aggregation Control Protocol (LACP), including how you would configure a distributed switch for use with LACP. In that section, we'll also refer back to some of this information on modifying NIC teaming and failover.

If you browse through the available settings, you might notice a Blocked policy option. This is the equivalent of disabling a group of ports in the distributed port group. Figure 5.58 shows that the Block All Ports setting is set to either Yes or No. If you set the Block policy to Yes, then all traffic to and from that distributed port group is dropped. Don't set the Block policy to Yes unless you are prepared for network downtime for all VMs attached to that distributed port group!

FIGURE 5.58 The Block policy is set to either Yes or No. Setting the Block policy to Yes disables all the ports in that distributed port group.

images

IS THERE A FEATURE THAT COULD HELP HERE?

Suppose you accidentally set Block to Yes on a distributed port group that contains the management interface. Is there a feature that we've discussed that might help? That's right: the vSphere network rollback functionality discussed earlier in this chapter would help in this situation.

REMOVING A DISTRIBUTED PORT GROUP

The easiest way to delete a distributed port group is to use the Topology view of the distributed switch itself. This view is found in the Settings area of the Manage tab for the distributed switch.

To delete a distributed port group, first select the distributed port group by clicking its name in the Topology view. Then, click the Remove The Distributed Port Group icon, which looks like a red X. Finally, click Yes to confirm that you do want to remove the distributed port group.

If any VMs are still attached to that distributed port group, the vSphere Web Client prevents you from deleting it and logs an error notification.

To delete the distributed port group to which a VM is attached, you first have to reconfigure the VM to use a different distributed port group on the same distributed switch, a distributed port group on a different distributed switch, or a vSwitch. You can either use the Migrate VM To Another Network command on the Actions menu, or you can just reconfigure the VM's network settings directly.

Once all VMs have been moved off of a distributed port group, you can remove the distributed port group using the process described in the previous paragraphs.

In the next section, we'll turn our attention to managing adapters, both physical and virtual, when working with a vSphere Distributed Switch.

Managing Adapters

With a distributed switch, managing virtual and physical adapters is handled quite differently than with a standard vSwitch. Virtual adapters are VMkernel interfaces, so by managing virtual adapters, we're really talking about managing VMkernel traffic—management, vMotion, IP-based storage, and Fault Tolerance logging—on a distributed switch. Physical adapters are, of course, the physical network adapters that serve as uplinks for the distributed switch. Managing physical adapters means adding or removing physical adapters connected to ports in the uplinks distributed port group on the distributed switch.

Perform the following steps to add a virtual adapter to a distributed switch:

  1. Launch a supported web browser and connect to a vCenter Server instance to start the vSphere Web Client. Log in as a user with administrative permissions.
  2. From the vSphere Web Client home screen, navigate to the distributed switch you'd like to edit. One way of doing this is to select vCenter, then choose Distributed Switches from the inventory lists (not the inventory tree).
  3. Select a distributed switch from the inventory list on the left, click the Manage tab in the details pane on the right, select Settings, and make sure Topology is selected.
  4. Click the second icon in the row across the top; the pop-up tooltip reads “Add hosts to this distributed switch and add or migrate physical or virtual network adapters.” This launches the Add And Manage Hosts wizard.
  5. Select the Manage Host Networking radio button, and then click Next.
  6. At the Select Hosts screen, use the green plus icon to add hosts to the list of hosts that will be modified during this process. It might seem like the wizard is asking you to add hosts to the distributed switch, but what you're really doing here is adding hosts to the list of hosts that will be modified. Click Next when you're ready to move to the next step.
  7. In this case, we're modifying virtual adapters, so make sure only the Manage Virtual Adapters check box is selected. Click Next.
  8. With an ESXi host selected, click the New Adapter link near the top of the Manage Virtual Network Adapters screen, shown in Figure 5.59. This opens the Add Networking wizard.

    FIGURE 5.59 The Manage Virtual Network Adapters screen of the wizard allows you to add new adapters as well as migrate existing adapters.

    images

    CREATE THE DISTRIBUTED PORT GROUP FIRST

    When you are adding new virtual adapters to a distributed switch, make sure you've created the distributed port group you'd like this new virtual adapter to use first. The wizard for adding a new virtual adapter does not provide a way to create a distributed port group as part of the process.

  9. In the Add Networking wizard, click the Browse button to select the existing distributed port group to which this new virtual adapter should be added. (Refer to the sidebar “Create the Distributed Port Group First” for an important note.) Click OK once you've selected an existing distributed port group, and then click Next.
  10. On the Port Properties screen, select whether you want to enable IPv4 only, IPv6 only, or both protocols.
  11. Enable the desired services—like vMotion or Fault Tolerance logging—that should be enabled on this new virtual adapter. Click Next.
  12. Depending on whether you selected IPv4, IPv6, or IPv4 and IPv6, the next couple of screens ask you to configure the appropriate network settings.

    If you selected only IPv4, then supply the desired IPv4 settings.

    If you selected only IPv6, then supply the correct IPv6 settings for your network.

    If you selected both IPv4 and IPv6, then there will be two configuration screens in the wizard, one for IPv4 and a separate screen for IPv6.

  13. Once you've entered the correct network protocol settings, the final screen of the wizard presents the settings that will be applied. If everything is correct, click Finish; otherwise, use the Back button to go back and change settings as necessary.
  14. This returns you to the Add And Manage Hosts wizard, where you'll now see the new virtual adapter that will be added. Repeat steps 8 through 13 if you need to add a virtual adapter for another ESXi host at the same time; otherwise, click Next.
  15. The Analyze Impact screen will show you the potential impact of the changes you're making. If necessary, use the Back button to go back and make changes to mitigate any negative impacts. When you're ready to proceed, click Next.
  16. Click Finish to commit the changes to the selected distributed switch and ESXi hosts.
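
If you find yourself repeating this wizard for many hosts, the change can also be scripted against each host directly. The fragment below is a rough pyVmomi sketch of adding a VMkernel adapter that connects to an existing distributed port group and then tagging it for vMotion; it assumes host (a vim.HostSystem), dvs, and dpg objects have already been located, and the IP addressing shown is purely a placeholder.

from pyVmomi import vim

# host: vim.HostSystem; dvs: the distributed switch; dpg: an existing distributed port group
port = vim.dvs.PortConnection(switchUuid=dvs.uuid, portgroupKey=dpg.key)

vnic_spec = vim.host.VirtualNic.Specification()
vnic_spec.distributedVirtualPort = port
vnic_spec.ip = vim.host.IpConfig(dhcp=False,
                                 ipAddress='192.168.50.11',
                                 subnetMask='255.255.255.0')

# Create the VMkernel interface; the call returns the new device name (vmkN).
net_sys = host.configManager.networkSystem
vmk = net_sys.AddVirtualNic('', vnic_spec)  # empty string because no standard vSwitch port group is used

# Tag the new interface for vMotion traffic.
host.configManager.virtualNicManager.SelectVnicForNicType('vmotion', vmk)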

Migrating an existing virtual adapter—such as a VMkernel port on an existing vSwitch—is done in exactly the same way. The only real difference is that in step 8, you'll select an existing virtual adapter, then click the Assign Port Group link across the top. Select an existing port group and click OK to return to the wizard, where the screen will look similar to what's shown in Figure 5.60.

FIGURE 5.60 Migrating a virtual adapter involves assigning it to an existing distributed port group.

images

After a virtual adapter has been created or migrated, the same wizard allows for changes to the virtual port, such as modifying the IP address, changing the distributed port group to which the adapter is assigned, or enabling features such as vMotion or Fault Tolerance logging. To edit an existing virtual adapter, you'd select the Edit Adapter link seen in Figure 5.60. You would remove virtual adapters using this same wizard as well, using the Remove link on the Manage Virtual Network Adapters screen of the Add And Manage Hosts wizard.

Not surprisingly, the vSphere Web Client also allows you to add or remove physical adapters connected to ports in the uplinks port group on the distributed switch. Although you can specify physical adapters during the process of adding a host to a distributed switch, as shown earlier, it might be necessary at times to connect a physical NIC to the distributed switch after the host is already participating in it.

Perform the following steps to add a physical network adapter in an ESXi host to a distributed switch:

  1. Start the vSphere Web Client by launching a supported web browser and connecting to a vCenter Server instance.
  2. From the vSphere Web Client home screen, navigate to the distributed switch you'd like to modify.
  3. Make sure the distributed switch is selected in the inventory list on the left, then go to the Manage tab, select Settings, and click Topology.
  4. From the Actions menu, select Add And Manage Hosts. This opens the Add And Manage Hosts wizard.
  5. Select the Manage Host Networking radio button, and then click Next.
  6. Use the green plus icon to add ESXi hosts to the list of hosts that will be affected by the changes in the wizard. Click Next when you're finished adding ESXi hosts to the list.
  7. Make sure only the Manage Physical Adapters option is selected, as shown in Figure 5.61, and click Next.

    FIGURE 5.61 To manage uplinks on a distributed switch, make sure only the Manage Physical Adapters option is selected.

    images

  8. At the Manage Physical Network Adapters screen, you can add physical network adapters to, or remove them from, the selected distributed switch.

    To add a physical adapter as an uplink, select an unassigned adapter from the list and click the Assign Uplink link. You can also use the Assign Uplink link to change the uplink to which a given physical adapter is assigned (for example, to move it from uplink 2 to uplink 3).

    To remove a physical adapter as an uplink, select an assigned adapter from the list and click the Unassign Adapter link.

    To migrate a physical adapter from another switch to this distributed switch, select the already-assigned adapter and use the Assign Uplink link. This will automatically remove it from the other switch and assign it to the selected switch.

    Repeat this process for each host in the list. Click Next when you're ready to proceed.

  9. At the Analyze Impact screen, the vSphere Web Client will provide feedback on the anticipated impact of the changes. If the impact of the changes is undesirable, use the Back button to go back and make any necessary changes. Otherwise, click Next.
  10. Click Finish to complete the wizard and commit the changes.
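
Scripting the equivalent change is also possible, although the API is a little less obvious: you edit the host's member entry on the distributed switch and supply the complete list of physical NICs that should back its uplinks. The sketch below is a rough pyVmomi illustration under those assumptions (dvs and host already located, vmnic names as placeholders). Note that the backing list you supply replaces the existing one, so include every physical adapter the host should contribute to the switch.

from pyVmomi import vim

# dvs: the distributed switch; host: a vim.HostSystem already joined to it
backing = vim.dvs.HostMember.PnicBacking(pnicSpec=[
    vim.dvs.HostMember.PnicSpec(pnicDevice='vmnic2'),
    vim.dvs.HostMember.PnicSpec(pnicDevice='vmnic3'),
])

host_member = vim.dvs.HostMember.ConfigSpec(operation='edit',
                                            host=host,
                                            backing=backing)

spec = vim.DistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.host = [host_member]
dvs.ReconfigureDvs_Task(spec)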

In addition to migrating virtual adapters and modifying the physical adapters, you can use vCenter Server to assist in migrating VM adapters—that is, migrating a VM's networking between vSphere Standard Switches and vSphere Distributed Switches, as shown in Figure 5.62.

FIGURE 5.62 The Migrate Virtual Machine Networking wizard automates the process of migrating VMs between a source and destination network.

images

This tool, accessed using the Actions menu when a distributed switch is selected in the inventory lists, will reconfigure all selected VMs to use the selected destination network. This is a lot easier than individually reconfiguring a bunch of VMs! In addition, this tool allows you to easily migrate VMs both to a distributed switch and from a distributed switch. Let's walk through the process so that you can see how it works.

Perform the following steps to migrate VMs from a vSphere Standard Switch to a vSphere Distributed Switch:

  1. From within a supported web browser, connect to a vCenter Server instance to launch the vSphere Web Client.
  2. Navigate to a distributed switch in the inventory lists.
  3. Select a distributed switch from the inventory tree on the left, and then select Migrate VM To Another Network from the Actions menu. This launches the Migrate Virtual Machine Networking wizard.
  4. Use the Browse button to select the source network that contains the VMs you'd like to migrate. You can use the Filter and Find search boxes to limit the results if you need to. Click OK once you've selected the source network.
  5. Use the Browse button to select the destination network to which you'd like the VMs to be migrated. Again, use the Filter and Find search boxes, where needed, to make it easier to locate the desired destination network. Click OK to return to the wizard once you've selected the destination network.
  6. Click Next after you've finished selecting the source and destination networks.
  7. A list of matching VMs is generated, and each VM is analyzed to determine if the destination network is accessible or inaccessible to the VM.

    Figure 5.63 shows a list with both accessible and inaccessible destination networks. A destination network might show up as inaccessible if the ESXi host on which that VM is running isn't part of the distributed switch (as is the case in this instance). Select the VMs you want to migrate; then click Next.

    FIGURE 5.63 You cannot migrate VMs matching your source network selection if the destination network is listed as inaccessible.

    images

  8. Click Finish to start the migration of the selected VMs from the specified source network to the selected destination network.

    You'll see a Reconfigure Virtual Machine task spawn in the Tasks pane for each VM that needs to be migrated.

Keep in mind that this tool can migrate VMs from a vSwitch to a distributed switch or from a distributed switch to a vSwitch—you only need to specify the source and destination networks accordingly.
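
Behind the scenes, all this wizard really does is reconfigure each VM's virtual NIC to use a distributed port backing. The pyVmomi sketch below shows what that reconfiguration looks like for a single VM; vm and dpg are objects you've already located, and the code simply edits the first Ethernet adapter it finds on the VM.

from pyVmomi import vim

# vm: a vim.VirtualMachine; dpg: the destination distributed port group
nic = next(dev for dev in vm.config.hardware.device
           if isinstance(dev, vim.vm.device.VirtualEthernetCard))

# Point the NIC at the distributed port group.
nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
    port=vim.dvs.PortConnection(
        portgroupKey=dpg.key,
        switchUuid=dpg.config.distributedVirtualSwitch.uuid))

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)
vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))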

Now that we've covered the basics of distributed switches, we'd like to delve into a few advanced topics. First up is network monitoring using NetFlow.

Using NetFlow on vSphere Distributed Switches

NetFlow is a mechanism for efficiently reporting IP-based traffic information as a series of traffic flows. Traffic flows are defined as the combination of source and destination IP address, source and destination TCP or UDP ports, IP protocol, and IP Type of Service (ToS). Network devices that support NetFlow will track and report information on the traffic flows, typically sending this information to a NetFlow collector. Using the data collected, network administrators gain detailed insight into the types and amount of traffic flows across the network.

In vSphere 5.0, VMware introduced support for NetFlow with vSphere Distributed Switches (only on distributed switches that are version 5.0.0 or higher). This allows ESXi hosts to gather detailed per-flow information and report that information to a NetFlow collector.

Configuring NetFlow is a two-step process:

  1. Configure the NetFlow properties on the distributed switch.
  2. Enable or disable NetFlow (the default is disabled) on a per–distributed port group basis.

Let's take a closer look at these steps.

To configure the NetFlow properties for a distributed switch, perform these steps:

  1. Connect to a vCenter Server instance using a supported web browser; this starts the vSphere Web Client.
  2. Navigate to the list of distributed switches from the vSphere Web Client's inventory lists, and select the distributed switch where you want to enable NetFlow.
  3. With the desired distributed switch selected, from the Actions menu, select All vCenter Actions images Edit NetFlow.

    This opens the Edit NetFlow Settings dialog box.

  4. As shown in Figure 5.64, specify the IP address of the NetFlow collector, the port on the NetFlow collector, and an IP address to identify the distributed switch.
  5. You can modify the Advanced Settings if advised to do so by your networking team.
  6. If you want the distributed switch to process only internal traffic flows—that is, traffic flows from VM to VM on that host—set Process Internal Flows Only to Enabled.
  7. Click OK to commit the changes and return to the vSphere Web Client.

FIGURE 5.64 You'll need the IP address and port number for the NetFlow collector in order to send flow information from a distributed switch.

images

After you configure the NetFlow properties for the distributed switch, you then enable NetFlow on a per–distributed port group basis. The default setting is Disabled.

Perform these steps to enable NetFlow on a specific distributed port group:

  1. In the vSphere Web Client, navigate to the distributed switch hosting the distributed port group where you want to enable NetFlow. You must have already performed the previous procedure to configure NetFlow on that distributed switch.
  2. From the Actions menu, select Manage Distributed Port Groups. This opens the Manage Distributed Port Groups wizard.
  3. Place a check mark next to Monitoring, and then click Next.
  4. From the list of distributed port groups on that distributed switch, select the distributed port group(s) that you want to edit. You can select multiple distributed port groups, if you desire. For Windows users, this usually means pressing the Ctrl key while selecting the second and subsequent distributed port groups; on OS X, you would use the Command key.

    Click Next once you've selected the desired distributed port groups.

  5. At the Monitoring screen, shown in Figure 5.65, set NetFlow to enabled; then click Next.
  6. Click Finish to save the changes to the distributed port group.

This distributed port group will start capturing NetFlow statistics and reporting that information to the specified NetFlow collector.
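
For completeness, here is a rough pyVmomi sketch of both halves of the NetFlow configuration: the collector settings on the distributed switch and the per–distributed port group enablement. The collector address and port are placeholders, the timeout and sampling values are simply the defaults we've seen in the Web Client, and as always this is a starting point to validate in a lab rather than finished tooling.

from pyVmomi import vim

# dvs: the distributed switch; dpg: the distributed port group to enable

# Step 1: point the distributed switch at the NetFlow collector.
dvs_spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
dvs_spec.configVersion = dvs.config.configVersion
dvs_spec.ipfixConfig = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig(
    collectorIpAddress='192.168.100.50',
    collectorPort=2055,
    activeFlowTimeout=60,
    idleFlowTimeout=15,
    samplingRate=0,
    internalFlowsOnly=False)
dvs.ReconfigureDvs_Task(dvs_spec)

# Step 2: enable NetFlow on the distributed port group.
port_setting = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_setting.ipfixEnabled = vim.BoolPolicy(value=True, inherited=False)

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.configVersion = dpg.config.configVersion
pg_spec.defaultPortConfig = port_setting
dpg.ReconfigureDVPortgroup_Task(pg_spec)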

Another feature that is quite useful is vSphere's support for switch discovery protocols, like Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP). The next section shows you how to enable these protocols in vSphere.

FIGURE 5.65 NetFlow is disabled by default. You enable NetFlow on a per–distributed port group basis.

images

Enabling Switch Discovery Protocols

Previous versions of vSphere supported Cisco Discovery Protocol (CDP), a protocol for exchanging information between network devices. However, it required using the command line to enable and configure CDP.

In vSphere 5.0, VMware added support for Link Layer Discovery Protocol (LLDP), a vendor-neutral, industry-standard counterpart to CDP, and provided a location within the vSphere Client where CDP/LLDP support can be configured.

Perform the following steps to configure switch discovery support:

  1. In the vSphere Web Client, navigate to a specific distributed switch in the vSphere Web Client's inventory lists.
  2. With the distributed switch selected on the left, select Edit Settings from the Actions menu.
  3. In the Edit Settings dialog box, select Advanced.
  4. Configure the distributed switch for CDP or LLDP support, as shown in Figure 5.66.

    FIGURE 5.66 LLDP support enables distributed switches to exchange discovery information with other LLDP-enabled devices over the network.

    images

    This figure shows the distributed switch configured for LLDP support, both listening (receiving LLDP information from other connected devices) and advertising (sending LLDP information to other connected devices).

  5. Click OK to save your changes.
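
The same setting can be applied programmatically. The following pyVmomi fragment is a minimal sketch that configures a distributed switch (dvs, already located as in the earlier sketches) to both listen for and advertise LLDP.

from pyVmomi import vim

# dvs: the distributed switch to modify
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.linkDiscoveryProtocolConfig = vim.host.LinkDiscoveryProtocolConfig(
    protocol='lldp',   # or 'cdp'
    operation='both')  # 'listen', 'advertise', or 'both'
dvs.ReconfigureDvs_Task(spec)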

Once the ESXi hosts participating in this distributed switch start exchanging discovery information, you can view that information from the physical switch(es). For example, on most Cisco switches, the show cdp neighbor command will display information about CDP-enabled network devices, including ESXi hosts. Entries for ESXi hosts will include information on the physical NIC used and the vSwitch/distributed switch involved.

vSphere Standard Switches also support CDP (not LLDP), but there is no GUI for configuring this support; you must use esxcli. This command will set CDP to Both (listen and advertise) on vSwitch0:

esxcli --server=<vCenter IP address> --vihost=<ESXi host IP address>
--username=<vCenter administrative user> network vswitch standard set
--cdp-status=both --vswitch-name=vSwitch0

The next advanced networking topic we'll review is private VLANs.

Setting Up Private VLANs

Private VLANs (PVLANs) are an advanced networking feature of vSphere that builds on the functionality of vSphere Distributed Switches. Within the vSphere environment, PVLANs are possible only when using distributed switches and are not available with vSphere Standard Switches. Further, you must ensure that the upstream physical switches to which your vSphere environment is connected also support PVLANs.

We'll provide a quick overview of private VLANs. PVLANs are a way to further isolate ports within a given VLAN (some refer to this as micro-segmentation). For example, consider the scenario of hosts within a demilitarized zone (DMZ). Hosts within a DMZ rarely need to communicate with each other, but using a VLAN for each host quickly becomes unwieldy for a number of reasons. By using PVLANs, you can isolate hosts from each other while keeping them on the same IP subnet. Figure 5.67 provides a graphical overview of how PVLANs work.

PVLANs are configured in pairs: the primary VLAN and any secondary VLANs. The primary VLAN is considered the downstream VLAN; that is, traffic to the host travels along the primary VLAN. The secondary VLAN is considered the upstream VLAN; that is, traffic from the host travels along the secondary VLAN.

To use PVLANs, first configure the PVLANs on the physical switches connecting to the ESXi hosts, and then add the PVLAN entries to the distributed switch in vCenter Server.

Perform the following steps to define PVLAN entries on a distributed switch:

  1. Launch the vSphere Web Client by connecting to a vCenter Server instance.
  2. On the vSphere Web Client home screen, select vCenter, then select Distributed Switches from the inventory lists on the left.
  3. Select an existing distributed switch in the inventory pane on the left, select the Manage tab in the details pane on the right, and select Settings.
  4. Select Private VLAN, then click the Edit button.

    FIGURE 5.67 Private VLANs can help isolate ports on the same IP subnet.

    images

  5. In the Edit Private VLAN Settings dialog box, click Add to add a primary VLAN ID to the list on the left.
  6. For each primary VLAN ID in the list on the left, add one or more secondary VLANs to the list on the right, as shown in Figure 5.68.

    FIGURE 5.68 Private VLAN entries consist of a primary VLAN and one or more secondary VLAN entries.

    images

    Secondary VLANs are classified as one of the two following types:

    • Isolated: Ports placed in secondary PVLANs configured as isolated are allowed to communicate only with promiscuous ports in the same secondary VLAN. We'll explain promiscuous ports shortly.
    • Community: Ports in a secondary PVLAN are allowed to communicate with other ports in the same secondary PVLAN as well as with promiscuous ports.

    Only one isolated secondary VLAN is permitted for each primary VLAN. Multiple secondary VLANs configured as community VLANs are allowed.

  7. When you finish adding all the PVLAN pairs, click OK to save the changes and return to the vSphere Web Client.
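
Scripting PVLAN entries follows the same reconfigure pattern used elsewhere in this chapter. The pyVmomi sketch below adds a primary VLAN (which the API represents as a promiscuous mapping of the primary VLAN to itself) along with one isolated and one community secondary VLAN; the VLAN IDs are placeholders, and it's worth double-checking the class names against the API reference for your pyVmomi version before relying on this.

from pyVmomi import vim

# dvs: the distributed switch that will receive the PVLAN map entries
def pvlan_entry(primary, secondary, pvlan_type):
    entry = vim.dvs.VmwareDistributedVirtualSwitch.PvlanMapEntry(
        primaryVlanId=primary, secondaryVlanId=secondary, pvlanType=pvlan_type)
    return vim.dvs.VmwareDistributedVirtualSwitch.PvlanConfigSpec(
        pvlanEntry=entry, operation='add')

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.pvlanConfigSpec = [
    pvlan_entry(100, 100, 'promiscuous'),  # the primary VLAN itself
    pvlan_entry(100, 101, 'isolated'),     # the single isolated secondary VLAN
    pvlan_entry(100, 102, 'community'),    # a community secondary VLAN
]
dvs.ReconfigureDvs_Task(spec)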

After you enter the PVLAN IDs for a distributed switch, you must create a distributed port group that takes advantage of the PVLAN configuration. The process for creating a distributed port group was described previously. Figure 5.69 shows the New Distributed Port Group wizard for a distributed port group that uses PVLANs.

In Figure 5.69 you can see the term promiscuous again. In PVLAN parlance, a promiscuous port is allowed to send and receive layer 2 frames to any other port in the VLAN. This type of port is typically reserved for the default gateway for an IP subnet—for example, a layer 3 router.

PVLANs are a powerful configuration tool but also a complex configuration topic and one that can be difficult to understand. For additional information on PVLANs, we recommend visiting Cisco's website at www.cisco.com and searching for private VLANs.

As with vSphere Standard Switches, vSphere Distributed Switches provide a tremendous amount of flexibility in designing and configuring a virtual network. But, as with all things, there are limits to the flexibility. Table 5.2 lists some of the configuration maximums for vSphere Distributed Switches.

FIGURE 5.69 When a distributed port group is created with PVLANs, the distributed port group is associated with both the primary VLAN ID and a secondary VLAN ID.

images

TABLE 5.2: Configuration maximums for ESXi networking components (vSphere Distributed Switches)

CONFIGURATION ITEM MAXIMUM
Switches per vCenter Server 128
Maximum ports per host (vSS/vDS) 4096
vDS ports per vCenter instance 60000
ESXi hosts per vDS 1000
Static port groups per vCenter instance 10000
Ephemeral port groups per vCenter instance 1016

VMware vSphere also lets you use compatible third-party distributed switches in your vSphere environment. Before we move into some options available for third-party distributed virtual switches in your environment, we'd like to first discuss one final advanced networking feature in vSphere: support for Link Aggregation Control Protocol (LACP).

Configuring LACP

Link Aggregation Control Protocol (LACP) is a standardized protocol for supporting the aggregation, or joining, of multiple individual network links into a single, logical network link. LACP support was first added in vSphere 5.1, and the LACP support in vSphere 5.5 has been enhanced. Note that LACP support is available only when you are using a vSphere Distributed Switch; vSphere Standard Switches do not support LACP.

IS LACP THE ONLY WAY?

Note that it's possible to use link aggregation without LACP. With either a vSphere Standard Switch or a vSphere Distributed Switch, setting the NIC teaming policy to Route Based On IP Hash enables link aggregation, but without LACP; the physical switch must instead be configured with a static link aggregation group (a static EtherChannel, in Cisco terms). This is the only way to use link aggregation with a vSphere Standard Switch.

We'll start with a review of how to configure basic LACP support on a version 5.1.0 vSphere Distributed Switch; then we'll show you how the LACP support has been enhanced in vSphere 5.5.

Using a version 5.1.0 vSphere Distributed Switch, you must configure the following four areas:

  • Enable LACP in the properties for the distributed switch's uplink group.
  • Set the NIC teaming policy for all distributed port groups to Route Based On IP Hash.
  • Set the network failover detection policy for all distributed port groups to Link Status Only.
  • Configure all distributed port groups so that all uplinks are active, not standby or unused.

Figure 5.70 shows the Edit Settings dialog box for the uplink group on a version 5.1.0 vSphere Distributed Switch. You can see here the setting for enabling LACP as well as the reminder of the other settings that are required.

FIGURE 5.70 Basic LACP support in a version 5.1.0 vSphere Distributed Switch is enabled in the uplink group but requires other settings as well.

images

GETTING TO THE EDIT SETTINGS DIALOG BOX FOR THE UPLINKS GROUP

Getting to the Edit Settings dialog box for a distributed switch's uplink group, like the one shown in Figure 5.70, might seem a bit unintuitive at first. The trick is to select (or highlight) the uplink group in Topology view and then click the Edit Distributed Port Group Settings icon. As far as we know, this is the only way in the vSphere Web Client to get to this dialog box—it's not accessible from the Actions menu, nor is it available through any right-click menu.

You must configure LACP on the physical switch to which the ESXi host is connected; the exact way you enable LACP will vary from vendor to vendor. The Mode setting shown in Figure 5.70—which is set to either Active or Passive—helps dictate how the ESXi host will communicate with the physical switch to establish the link aggregate:

  • When LACP Mode is set to Passive, the ESXi host won't initiate any communications to the physical switch; the switch must initiate the negotiation.
  • When LACP Mode is set to Active, the ESXi host will actively initiate the negotiation of the link aggregation with the physical switch.

You can probably gather from this discussion of using LACP with a version 5.1.0 vSphere Distributed Switch that only a single link aggregate (a single bundle of LACP-negotiated links) is supported and LACP is enabled or disabled for the entire vSphere Distributed Switch.

When you upgrade to a version 5.5.0 vSphere Distributed Switch, though, the LACP support is enhanced to eliminate these limitations. Version 5.5.0 distributed switches support multiple LACP bundles, and how those LACP bundles are used (or not used) can be configured on a per–distributed port group basis. Let's take a look at how you'd configure LACP support with a version 5.5.0 distributed switch.

With a version 5.5.0 distributed switch, a new LACP section appears in the Settings area of the Manage tab, as you can see in Figure 5.71. From this area, you'll define one or more link aggregation groups (LAGs), each of which will appear as a logical uplink to the distributed port groups on that distributed switch. vSphere 5.5 supports multiple LAGs on a single distributed switch, which allows administrators to dual-home distributed switches (connect distributed switches to multiple upstream physical switches) while still using LACP. (There are a few limitations, which we'll describe near the end of this section.)

FIGURE 5.71 vSphere 5.5's enhanced LACP support eliminates many of the limitations of the support found in vSphere 5.1.

images

To use LACP with a version 5.5.0 distributed switch, three basic steps are required:

  1. Define one or more LAGs in the LACP section of the Settings area of the Manage tab.
  2. Add physical adapters into the LAG(s) you've created.
  3. Modify the distributed port groups to use those LAGs as uplinks in the distributed port groups' teaming and failover configuration.

Let's take a look at each of these steps in a bit more detail.

To create a LAG, perform these steps:

  1. Connect to a vCenter Server instance using a supported web browser and log in with administrative credentials.
  2. Navigate to the specific distributed switch for which you want to configure a LACP link aggregation group.
  3. With the distributed switch selected in the inventory list on the left, click the Manage tab, then click Settings, and then click LACP. This displays the screen shown earlier in Figure 5.71.
  4. Click the green plus symbol to add a LAG. This displays the New Link Aggregation Group dialog box, shown in Figure 5.72.

    FIGURE 5.72 With a version 5.5.0 distributed switch, the LACP properties are configured on a per-LAG basis instead of for the entire distributed switch.

    images

  5. In the New Link Aggregation Group dialog box, specify a name for the new LAG.
  6. Specify the number of physical ports that will be included in the LAG.
  7. Specify the LACP mode—either Active or Passive, as we described earlier—that this LAG should use.
  8. Select a load-balancing mode. Note that this load-balancing mode affects only outbound traffic; inbound traffic will be load balanced according to the load-balancing mode configured on the physical switch. (For best results and ease of troubleshooting, the configuration here should match the configuration on the physical switch where possible.)
  9. If you need to override port policies for this LAG, you can do so at the bottom of this dialog box.
  10. Click OK to create the new LAG and return to the LACP area of the vSphere Web Client.
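
If you're curious what this looks like in the API, LAGs on a version 5.5.0 distributed switch are managed with a dedicated call on the switch rather than through the normal reconfigure spec. The following is only a rough sketch based on our reading of the vSphere 5.5 API; verify the property and method names (particularly the load-balancing algorithm string) against the API reference before using anything like this.

from pyVmomi import vim

# dvs: a version 5.5.0 distributed switch with enhanced (multiple-LAG) LACP support
lag = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupConfig(
    name='lag1',
    mode='active',        # or 'passive', as described earlier
    uplinkNum=2,          # number of physical ports in the LAG
    loadbalanceAlgorithm='srcDestIpTcpUdpPortVlan')

lag_spec = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec(
    lacpGroupConfig=lag, operation='add')

dvs.UpdateDVSLacpGroupConfig_Task(lacpGroupSpec=[lag_spec])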

Now that at least one LAG has been created, you need to assign physical adapters to it. To do this, you'll follow the process we outlined earlier for managing physical adapters (see the section titled “Managing Adapters” for the specific details). The one change you'll note is that when you click the Assign Uplink link for a selected physical adapter, you'll now see an option to assign that adapter to one of the available uplink ports in the LAG(s) that you created. Figure 5.73 shows the dialog box for assigning an uplink for a distributed switch with two LAGs.

FIGURE 5.73 Once a LAG has been created, physical adapters can be added to it.

images

Once you've added physical adapters to the LAG(s), you can proceed with the final step: configuring the LAG(s) as uplinks for the distributed port groups on that distributed switch. Specific instructions for this process were given earlier in the section titled “Editing a Distributed Port Group.” Note that the LAG(s) will appear as physical uplinks in the teaming and failover configuration, as you can see in Figure 5.74. You can assign the LAG as an active uplink, a standby uplink, or an unused uplink.

FIGURE 5.74 LAGs appear as physical uplinks to the distributed port groups.

images

When using LAGs, you should be aware of the following limitations:

  • You can't mix LAGs and physical uplinks for a given distributed port group. Any physical uplinks must be listed as unused adapters.
  • You can't use multiple active LAGs on a single distributed port group. Place one LAG in the active uplinks list; place any other LAGs in the list of unused uplinks.

Note that these limitations are per distributed port group; you can use different active LAGs or standalone uplinks with other distributed port groups because the teaming and failover configuration is set for each individual distributed port group.

IGNORE THE LOAD BALANCING SETTING WITH LAGS

When using LACP and LAGs with a version 5.5.0 distributed switch, you can ignore the Load Balancing setting seen earlier in Figure 5.74. It is overridden by the load-balancing policy set on the LAG(s).

As you can see, the enhanced LACP support present in vSphere 5.5 and version 5.5.0 distributed switches offers VMware administrators and their counterparts in the networking team a great deal of functionality and flexibility.

We'd like to now turn our attention to some of the options available for using third-party distributed switches in your vSphere environment.

Examining Third-Party Distributed Virtual Switches

When VMware first introduced distributed switches with vSphere 4.0 in 2009, it also enabled third-party developers to produce their own distributed switches that would “plug in” to vSphere's distributed switch APIs. This would allow VMware partners to extend the functionality available within vSphere environments through third-party distributed switches. At the time this functionality was introduced, only a single VMware partner had a product ready: Cisco with its Nexus 1000V. Since then, at least two other VMware partners have created their own distributed switches, and now VMware customers have a few different options.

At the time of this writing, three third-party distributed switches were available for use with vSphere 5.5:

  • Cisco Nexus 1000V
  • IBM Distributed Virtual Switch 5000V
  • HP FlexFabric Virtual Switch 5900v

In the following sections, we'll take a quick look at each of these options.

Cisco Nexus 1000V

The first third-party distributed switch, the Cisco Nexus 1000V, brings Cisco NX-OS to the virtual environment, allowing network administrators to manage vSphere networking with the same familiar, CLI-based tools they use to manage the physical network.

The Cisco Nexus 1000V has the following two major components:

  • The Virtual Ethernet Module (VEM), which executes inside the ESXi hypervisor and replaces the standard vSwitch functionality. The VEM leverages the vSphere Distributed Switch APIs to enable features like quality of service (QoS), private VLANs, access control lists, NetFlow, and SPAN.
  • The Virtual Supervisor Module (VSM), which is a Cisco NX-OS instance running as a VM (note that Cisco also sells a hardware appliance, called the Nexus 1010, that can provide a Nexus 1000V VSM). The VSM controls multiple VEMs as one logical modular switch. All configuration is performed through the VSM and propagated to the VEMs automatically through a management link with vCenter Server. The Nexus 1000V supports redundant VSMs, a configuration with both a primary VSM and a secondary VSM.

Although the Nexus 1000V uses the Cisco “Nexus” brand name, it is interoperable with any upstream physical switch from any vendor; it does not require physical Cisco Nexus switches. Of course, the features that are supported will vary based on the upstream physical switches, so keep in mind that some Nexus 1000V features may not work with all physical switches.

For more detailed information on the Cisco Nexus 1000V, please refer to Cisco's website at www.cisco.com/en/US/products/ps9902/index.html.

IBM Distributed Virtual Switch 5000V

The IBM Distributed Virtual Switch (DVS) 5000V was the second third-party distributed switch to become available for vSphere environments.

Like the Cisco Nexus 1000V, the IBM DVS 5000V employs a two-part architecture:

  • The DVS 5000V Data Path Module (DPM) is embedded in the ESXi hypervisor and replaces the standard virtual switching functionality found there. The DPM supports features like QoS, sFlow v5, RADIUS, TACACS+, private VLANs, local VM-to-VM traffic control using access control lists (ACLs), local port mirroring (SPAN), remote port mirroring (ERSPAN), and advanced VM troubleshooting and visibility.
  • The DVS 5000V Controller performs the central management and configuration of the DPMs that exist on a number of ESXi hosts, communicating with vCenter Server so that the 5000V looks like a distributed switch to the VMware environment.

One point of difference between the Cisco 1000V and the IBM 5000V is that the IBM 5000V supports newer Ethernet technologies such as Edge Virtual Bridging (EVB), Virtual Ethernet Port Aggregation (VEPA), and Virtual Station Interface (VSI) Discovery and Configuration Protocol (VDP). These technologies are intended to enable greater integration between the virtual switches in a vSphere environment and the physical switches upstream.

For more details about the IBM DVS 5000V, please refer to IBM's website at

www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/index.html

HP FlexFabric Virtual Switch 5900v

In May 2013, HP unveiled its third-party distributed virtual switch, the HP FlexFabric Virtual Switch 5900v, initially expected to be available in the fourth quarter of 2013. Because of the dates of announcement and availability, information about the HP FlexFabric 5900v was fairly limited at the time of writing.

HP took a slightly different approach with the 5900v than IBM and Cisco did with their virtual switches. Whereas both IBM and Cisco support multiple types and brands of upstream physical switches, the HP 5900v is designed to work only with HP's FlexFabric 5900AF top-of-rack (ToR) switches through the use of EVB, VEPA, and VDP. In this arrangement, all traffic—even VM-to-VM traffic on the same ESXi host—flows through the HP 5900AF ToR switch, giving the networking teams full visibility and full control over the traffic. This enables HP to support a full range of networking features like QoS, ACLs, and hardware-based sFlow. The HP 5900v is also designed to integrate with HP Intelligent Management Center (IMC) to simplify creating and applying policies that control features like ACLs and QoS to traffic flowing through the HP 5900v and HP 5900AF ToR switches.

For more details about the HP 5900v, please contact HP. (There was no public URL for the HP FlexFabric Virtual Switch 5900v available at the time of writing.)

Before we wrap up this chapter on networking with a quick look toward the future, we'd like to discuss some security-related settings and features available in vSphere environments.

Configuring Virtual Switch Security

Even though vSwitches and distributed switches are considered to be “dumb switches,” you can configure them with security policies to enhance or ensure layer 2 security. For vSphere Standard Switches, you can apply security policies at the vSwitch or at the port group level. For vSphere Distributed Switches, you apply security policies only at the distributed port group level. The security settings include the following three options:

  • Promiscuous mode
  • MAC address changes
  • Forged transmits

Applying a security policy to a vSwitch is effective, by default, for all connection types within the switch. However, if a port group on that vSwitch is configured with a competing security policy, it will override the policy set at the vSwitch. For example, if a vSwitch is configured with a security policy that rejects MAC address changes but a port group on the switch is configured to accept MAC address changes, then any VMs connected to that port group will be allowed to communicate even though they are using MAC addresses that differ from those configured in their VMX files.

The default security profile for a vSwitch, shown in Figure 5.75, is set to reject Promiscuous mode and to accept MAC address changes and forged transmits. Distributed port groups are stricter by default: Figure 5.76 shows the default security profile for a distributed port group on a distributed switch, which rejects all three options.

FIGURE 5.75 The default security profile for a vSwitch prevents Promiscuous mode but allows MAC address changes and forged transmits.

images

FIGURE 5.76 The default security profile for a distributed port group on a distributed switch rejects MAC address changes and forged transmits in addition to Promiscuous mode.

images

Each of these security options is explored in more detail in the following sections.

Understanding and Using Promiscuous Mode

The Promiscuous Mode option is set to Reject by default to prevent virtual network adapters from observing any of the traffic submitted through a vSwitch or distributed switch. For enhanced security, allowing Promiscuous mode is not recommended because it is an insecure mode of operation that allows a virtual adapter to access traffic other than its own. Despite the security concerns, there are valid reasons for permitting a switch to operate in Promiscuous mode. An intrusion-detection system (IDS) must be able to identify all traffic to scan for anomalies and malicious patterns of traffic, for example.

Previously in this chapter, we talked about how port groups and VLANs did not have a one-to-one relationship and that occasions may arise when you have multiple port groups on a vSwitch/distributed switch configured with the same VLAN ID. This is exactly one of those situations—you need a system, the IDS, to see traffic intended for other virtual network adapters. Rather than granting that ability to all the systems on a port group, you can create a dedicated port group for just the IDS system. It will have the same VLAN ID and other settings but will allow Promiscuous mode instead of rejecting it. This allows you, the administrator, to carefully control which systems are allowed to use this powerful and potentially security-threatening feature.

As shown in Figure 5.77, the virtual switch security policy will remain at the default setting of Reject for the Promiscuous Mode option, while the VM port group for the IDS will be set to Accept. This setting will override the virtual switch, allowing the IDS to monitor all traffic for that VLAN.

Allowing MAC Address Changes and Forged Transmits

When a VM is created with one or more virtual network adapters, a MAC address is generated for each virtual adapter. Just as Intel, Broadcom, and others manufacture network adapters that include unique MAC address strings, VMware is a network adapter manufacturer that has its own MAC prefix to ensure uniqueness. Of course, VMware doesn't actually manufacture anything because the product exists as a virtual NIC in a VM. You can see the 6-byte, randomly generated MAC addresses for a VM in the configuration file (.vmx) of the VM as well as in the Settings area for a VM within the vSphere Web Client, shown in Figure 5.78. A VMware-assigned MAC address begins with the prefix 00:50:56 or 00:0C:29; think of the full address in the format 00:50:56:XX:YY:ZZ. In previous versions of ESXi, the value of the fourth set (XX) would not exceed 3F to prevent conflicts with other VMware products, but this appears to have changed in vSphere 5.0. The fifth and sixth sets (YY:ZZ) are generated randomly based on the universally unique identifier (UUID) of the VM, which is tied to the location of the VM. For this reason, when a VM's location is changed, a prompt appears prior to successful boot. The prompt inquires about keeping the UUID or generating a new UUID, which helps prevent MAC address conflicts.

FIGURE 5.77 Promiscuous mode, though it reduces security, is required when using an intrusion-detection system.

images

MANUALLY SETTING THE MAC ADDRESS

Manually configuring a MAC address in the configuration file of a VM does not work unless the first three bytes are VMware-provided prefixes and the last three bytes are unique. If a non-VMware MAC prefix is entered in the configuration file, the VM will not power on.
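
As an illustration, the relevant lines in a VMX file for a statically assigned address generally look like the following. The address shown is only an example; a manually assigned address must use the VMware 00:50:56 prefix and, we believe, fall within the range VMware reserves for static assignments (00:50:56:00:00:00 through 00:50:56:3F:FF:FF) while remaining unique on your network.

ethernet0.addressType = "static"
ethernet0.address = "00:50:56:11:22:33"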

FIGURE 5.78 A VM's initial MAC address is automatically generated and listed in the configuration file for the VM and displayed within the vSphere Web Client.

images

All VMs have two MAC addresses: the initial MAC and the effective MAC. The initial MAC address is the MAC address discussed in the previous paragraph that is generated automatically and that resides in the configuration file. The guest OS has no control over the initial MAC address. The effective MAC address is the MAC address configured by the guest OS that is used during communication with other systems. The effective MAC address is included in network communication as the source MAC of the VM. By default, these two addresses are identical. To force a non-VMware-assigned MAC address to a guest operating system, change the effective MAC address from within the guest OS, as shown in Figure 5.79.

FIGURE 5.79 A VM's source MAC address is the effective MAC address, which by default matches the initial MAC address configured in the VMX file. The guest OS, however, may change the effective MAC address.

images

The ability to alter the effective MAC address cannot be removed from the guest OS. However, the ability to let the system function with this altered MAC address is easily addressable through the security policy of a vSwitch or distributed switch. The remaining two settings of a virtual switch security policy are MAC Address Changes and Forged Transmits. These security policies allow or deny differences between the initial MAC address in the configuration file and the effective MAC address in the guest OS. As noted earlier, the default security policy is to accept the differences and process traffic as needed.

The difference between the MAC Address Changes and Forged Transmits security settings involves the direction of the traffic. MAC Address Changes is concerned with the integrity of incoming traffic, while Forged Transmits oversees the integrity of outgoing traffic. If the MAC Address Changes option is set to Reject, traffic will not be passed through the vSwitch to the VM (incoming) if the initial and the effective MAC addresses do not match. If the Forged Transmits option is set to Reject, traffic will not be passed from the VM to the vSwitch (outgoing) if the initial and the effective MAC addresses do not match. Figure 5.80 highlights the security restrictions implemented when MAC Address Changes and Forged Transmits are set to Reject.

FIGURE 5.80 The MAC Address Changes and Forged Transmits security options deal with incoming and outgoing traffic, respectively.

images

For the highest level of security, VMware recommends setting MAC Address Changes, Forged Transmits, and Promiscuous Mode on each vSwitch or distributed switch/distributed port group to Reject. When warranted or necessary, use port groups to loosen the security for a subset of VMs to connect to the port group.

VIRTUAL SWITCH POLICIES FOR MICROSOFT NETWORK LOAD BALANCING

As with anything, there are, of course, exceptions to the general recommendations for how a virtual switch should be configured. The need to allow MAC address changes and forged transmits for certain workloads is one great example. For VMs that will be configured as part of a Microsoft Network Load Balancing (NLB) cluster set in Unicast mode, the VM port group must allow MAC address changes and forged transmits. Systems that are part of an NLB cluster will share a common IP address and virtual MAC address.

The shared virtual MAC address is generated by using an algorithm that includes a static component based on the NLB cluster's configuration of Unicast or Multicast mode plus a hexadecimal representation of the four octets that make up the IP address. This shared MAC address will certainly differ from the MAC address defined in the VMX file of the VM. If the VM port group does not allow for differences between the MAC addresses in the VMX and guest OS, NLB will not function as expected. VMware recommends running NLB clusters in Multicast mode because of these issues with NLB clusters in Unicast mode.

Perform the following steps to edit the security profile of a vSwitch:

  1. Use a supported web browser to establish a connection to a vCenter Server instance; this launches the vSphere Web Client.
  2. Navigate to the specific ESXi host that has the vSwitch you'd like to edit. One way to get there from the vSphere Web Client home screen is to click vCenter, then select Hosts from the Inventory Lists, then select the specific ESXi host from the list of objects on the right.
  3. With an ESXi host selected in the inventory list on the left, click the Manage tab, select Settings, and then click Virtual Switches.
  4. From the list of virtual switches, select the vSphere Standard Switch you'd like to edit, then click the Edit link (looks like a pencil). This brings up the Edit Settings dialog box for the selected vSwitch.
  5. Click Security on the list on the left side of the dialog box, and make the necessary adjustments.
  6. Click OK.
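
The same hardening can be scripted across many hosts. Here is a minimal pyVmomi sketch that sets all three options to Reject on vSwitch0 of a host you've already located (host is a vim.HostSystem); as with the other sketches in this chapter, treat it as a starting point to test in a lab.

from pyVmomi import vim

# host: a vim.HostSystem object located as in the earlier sketches
net_sys = host.configManager.networkSystem
vswitch = next(v for v in host.config.network.vswitch if v.name == 'vSwitch0')

spec = vswitch.spec  # start from the switch's current specification
if spec.policy is None:
    spec.policy = vim.host.NetworkPolicy()
spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy(
    allowPromiscuous=False, macChanges=False, forgedTransmits=False)

net_sys.UpdateVirtualSwitch(vswitchName='vSwitch0', spec=spec)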

Perform the following steps to edit the security profile of a port group on a vSwitch:

  1. Connect to a vCenter Server instance using the vSphere Web Client.
  2. Navigate to the specific ESXi host and vSphere Standard Switch that contains the port group you wish to edit. You'll find vSwitches in the Virtual Switches section of the Settings area under the Manage tab for a selected ESXi host.
  3. Click the name of the port group under the graphical representation of the virtual switch, and then click the Edit link.
  4. Click Security, and make the necessary adjustments. You'll need to place a check mark in the Override box to allow the port group to use a different setting than its parent virtual switch.
  5. Click OK to save your changes.

Perform the following steps to edit the security profile of a distributed port group on a vSphere Distributed Switch:

  1. Use the vSphere Web Client to connect to an instance of vCenter Server.
  2. Using the vSphere Web Client, navigate to the Distributed Switches inventory list; you can get there from the vSphere Web Client home page by clicking vCenter, then selecting Distributed Switches from the Inventory Lists area on the left.
  3. With a distributed switch selected on the left, click the Manage tab, select Settings, and then click Topology. This will display a graphical representation of the distributed switch.
  4. Select an existing distributed port group by clicking its name in the Topology view, and then click the Edit Distributed Port Group Settings icon.
  5. Select Security from the list of policy options on the left side of the dialog box.
  6. Make the necessary adjustments to the security policy.
  7. Click OK to save the changes.
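
For a distributed port group, the security policy lives in the same defaultPortConfig structure used for the VLAN and teaming sketches earlier in this chapter. A minimal pyVmomi sketch (with dpg already located) follows.

from pyVmomi import vim

# dpg: the distributed port group to harden
def reject():
    return vim.BoolPolicy(value=False, inherited=False)

port_setting = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_setting.securityPolicy = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy(
    allowPromiscuous=reject(), macChanges=reject(), forgedTransmits=reject(),
    inherited=False)

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = dpg.config.configVersion
spec.defaultPortConfig = port_setting
dpg.ReconfigureDVPortgroup_Task(spec)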

If you need to make the same security-related change to multiple distributed port groups, you can use the Manage Distributed Port Groups command on the Actions menu to perform the same configuration task to multiple distributed port groups at the same time.

Managing the security of a virtual network architecture is much the same as managing the security for any other portion of your information systems. Security policy should dictate that settings be configured as securely as possible to err on the side of caution. Only with proper authorization, documentation, and change-management processes should security be reduced. In addition, any reduction in security should be as controlled as possible, affecting the fewest systems possible, ideally only the systems that require the adjustment.

We'll close out this chapter on networking with a quick look ahead at the future of networking in a VMware vSphere environment.

Looking Ahead

The past few years have been fairly tumultuous for the networking industry, which is undergoing a revolution comparable to the revolution of some years ago when server virtualization started seeing broader adoption. A number of forces are driving this revolution: increased use of open-source software in various industries; increased competition among hardware manufacturers, including very low-cost overseas manufacturers; expanded use of x86-based systems and compute virtualization for providing network services (often referred to as network functions virtualization, or NFV); and the rise of control plane protocols like OpenFlow. This latter force has given rise to an entirely new term within networking: software-defined networking (SDN).

In March 2013, VMware described its vision for network virtualization, which harnesses a number of these macro trends to enable organizations to provision network services more quickly and in a more automated fashion than before. VMware intends to bring network virtualization to market in the form of VMware NSX, a product that integrates technologies from Nicira's Network Virtualization Platform and VMware's own vCloud Networking and Security product suite.

VMware NSX will leverage a number of technologies to enable organizations to create virtual networks—networks that exist entirely in software but that faithfully re-create physical networks. The following technologies are among those that will be found in VMware NSX:

  • Network overlay protocols like VXLAN, STT, and GRE, to enable isolation of network traffic
  • Separation of the control plane and data plane using protocols like OpenFlow
  • Virtualization of network services like load balancing, firewalling, NAT, and dynamic routing (aka NFV)
  • Centralized controllers that automatically compute and program the virtual network topologies across ESXi hosts

Network virtualization will dramatically change the networking landscape moving forward, but many of the basic principles outlined in this chapter are still going to be applicable as this vision evolves. Getting started with virtual networking in vSphere 5.5 environments is a great first step to moving toward full network virtualization in VMware NSX.

In the next chapter, we'll dive deep into storage in VMware vSphere, a critical component of your vSphere environment.

The Bottom Line

Identify the components of virtual networking. Virtual networking is a blend of virtual switches, physical switches, VLANs, physical network adapters, virtual adapters, uplinks, NIC teaming, VMs, and port groups.

Master It What factors contribute to the design of a virtual network and the components involved?

Create virtual switches and distributed virtual switches. vSphere supports both vSphere Standard Switches and vSphere Distributed Switches. vSphere Distributed Switches bring new functionality to the vSphere networking environment, including private VLANs and a centralized point of management for ESXi clusters.

Master It You've asked a fellow vSphere administrator to create a vSphere Distributed Switch for you, but the administrator is having problems completing the task because he can't figure out how to do this with an ESXi host selected in the vSphere Web Client. What should you tell this administrator?

Master It As a joint project between the networking and server teams, you are going to implement LACP in your VMware vSphere 5.5 environment. What are some limitations you need to know about?

Create and manage NIC teaming, VLANs, and private VLANs. NIC teaming allows virtual switches to have redundant network connections to the rest of the network. Virtual switches also provide support for VLANs, which provide logical segmentation of the network, and private VLANs, which provide added security to existing VLANs while allowing systems to share the same IP subnet.

Master It You'd like to use NIC teaming to bond multiple physical uplinks together for greater redundancy and improved throughput. When selecting the NIC teaming policy, you select Route Based On IP Hash, but then the vSwitch seems to lose connectivity. What could be wrong?

Master It How do you configure both a vSphere Standard Switch and a vSphere Distributed Switch to pass VLAN tags all the way up to a guest OS?

Examine the options for third-party virtual switches in your environment. In addition to the vSphere Standard Switch and the vSphere Distributed Switch, vSphere also supports a number of third-party virtual switches. These third-party virtual switches support a range of features.

Master It What three third-party virtual switches are, at the time of this book's writing, available for vSphere environments?

Configure virtual switch security policies. Virtual switches support security policies for allowing or rejecting Promiscuous mode, allowing or rejecting MAC address changes, and allowing or rejecting forged transmits. All of the security options can help increase layer 2 security.

Master It You have a networking application that needs to see traffic on the virtual network that is intended for other production systems on the same VLAN. The networking application accomplishes this by using Promiscuous mode. How can you accommodate the needs of this networking application without sacrificing the security of the entire virtual switch?

Master It Another vSphere administrator on your team is trying to configure the security policies on a distributed switch but is having some difficulty. What could be the problem?
