Chapter 5. Optimizing NetScaler Traffic

The purpose of NetScaler is to act as a logistics department. It has to serve content to many different endpoints, using many different protocols, across different types of media, and it can run either as a physical appliance or on top of a hypervisor within a private cloud infrastructure. Since many factors come into play here, there is room for tuning and improvement. Some of the topics we will cover in this chapter are:

  • Tuning for virtual environments
  • Tuning TCP traffic
  • Tuning SSL traffic
  • HTTP/2 and SPDY
  • Other network capabilities

Tuning for virtual environments

When setting up NetScaler in a virtual environment, there are many factors that affect how it performs: for instance, the underlying CPUs of the virtualization host, NIC throughput and capabilities, vCPU overallocation, NIC teaming, MTU size, and so on. It is therefore always important to keep the hardware requirements in mind when setting up NetScaler VPX on a virtualization host.

Another important factor when setting up NetScaler VPX is the concept of packet engines. By default, when we set up or import NetScaler, it is configured with two vCPUs. The first of these is dedicated to management purposes, and the second is dedicated to all the packet processing, such as content switching, SSL offloading, ICA proxy, and so on.

It is important to note that the second vCPU might appear 100% utilized in the hypervisor's performance monitoring tools, since the packet engine continuously polls the NICs; the correct way to check its actual utilization is with the CLI command stat system.
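For example, the following commands, run from the NetScaler CLI, give a quick view of how busy the packet engines really are. This is a minimal sketch; the exact counters displayed can vary between firmware versions:

  > stat system
  > stat cpu

stat system shows the utilization as NetScaler itself measures it, and stat cpu breaks the numbers down per CPU, which makes it easy to see whether the packet engines are actually saturated.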

Now, by default, VPX 10 and VPX 200 only have support for one packet engine. Because of their bandwidth limitations, they do not need more packet engine CPUs to process the packets. VPX 1000 and VPX 3000, on the other hand, have support for up to three packet engines, which is in most cases needed to process all the packets going through the system if the bandwidth is to be utilized to its fullest.

In order to add a new packet engine, we need to assign more vCPUs and memory to the VPX. Packet engines also have the benefit of load balancing the processing between them, so instead of having a single vCPU that is 100% utilized, we can spread the load across multiple vCPUs and get better performance and bandwidth. The following table shows the different editions and their support for multiple packet engines:

License/Memory   2 GB   4 GB   6 GB   8 GB   10 GB   12 GB
VPX 10            1      1      1      1      1       1
VPX 200           1      1      1      1      1       1
VPX 1000          1      2      3      3      3       3
VPX 3000          1      2      3      3      3       3
It is important to remember that multiple packet engines are only available on VMware, XenServer, and Hyper-V, and not on KVM.

If we plan on using NIC-teaming on the underlying virtualization host, there are some important aspects to consider.

Most vendors have guidelines that describe the load balancing techniques available in their hypervisors.

For instance, Microsoft has a guide that describes its NIC teaming features at http://www.microsoft.com/en-us/download/details.aspx?id=30160.

One of the NIC teaming options, called Switch Independent Dynamic Mode, has an interesting side effect: it replaces the source MAC address of the virtual machine with that of one of the primary NICs on the host, which means we might experience packet loss on a VPX. It is therefore recommended in most cases to use LACP/LAG or, in the case of Hyper-V, the Hyper-V Port distribution mode instead.

Features such as SR-IOV or PCI pass-through are not supported for NetScaler VPX.

NetScaler 11 also introduced support for jumbo frames on the VPX. This allows for a much higher payload in an Ethernet frame: instead of the traditional 1,500 bytes, we can scale up to 9,000 bytes of payload. This results in much lower overhead, since each frame carries more data.

This requires that the underlying NIC on the hypervisor supports the feature and has it enabled as well. In most cases, this only works for communication with backend resources and not with users accessing public resources, because most routers and ISPs do not allow such a high MTU.

This feature can be configured at the interface level in NetScaler, under System | Network | Interfaces. Select an interface and click on Edit. Here, we have the option called Maximum Transmission Unit, which can be adjusted up to 9,216 bytes.
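The same setting can also be changed from the CLI. The following is a minimal sketch, assuming that interface 10/1 is the one facing the jumbo-frame-enabled network; replace the interface ID and MTU value to match your environment:

  > set interface 10/1 -mtu 9216
  > show interface 10/1
  > save ns config

The show interface command lets us verify that the new MTU has been applied before we save the running configuration.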

It is important to note that NetScaler can communicate with backend resources using jumbo frames and then adjust the MTU when communicating back with the clients. It can also use jumbo frames on both paths if NetScaler is set up as a backend load balancer.
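Besides the interface MTU, the server-side TCP connections also need a maximum segment size (MSS) that matches the larger frames, which is controlled through a TCP profile. The following is a minimal sketch of that idea; the profile name jumbo_tcp_profile and the service name svc_backend are assumptions used for illustration and should be replaced with names from your own configuration:

  > add ns tcpProfile jumbo_tcp_profile -mss 8960
  > set service svc_backend -tcpProfileName jumbo_tcp_profile
  > save ns config

An MSS of 8,960 bytes corresponds to a 9,000-byte MTU minus the 40 bytes of IP and TCP headers; the client-facing virtual servers can keep using the default TCP profile so that traffic towards the users stays at the standard 1,500-byte MTU.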

It is also important to note that NetScaler only supports load balancing with jumbo frames for the following protocols:

  • TCP
  • TCP-based protocols such as HTTP
  • SIP
  • RADIUS