TCP tuning

Much of the traffic that passes through NetScaler is based on the TCP protocol, whether it is ICA proxy, HTTP, or something similar.

TCP is a protocol that provides reliable, error-checked delivery of packets in both directions, ensuring that data has arrived intact before it is processed further. TCP has many features for adjusting bandwidth during transfer, checking for congestion, adjusting segment size, and so on, which we will delve into in this section.

As mentioned in an earlier chapter, we can adjust the way NetScaler uses TCP by means of TCP profiles. By default, all services and vServers created on NetScaler use the default TCP profile, nstcp_default_profile.

These profiles can be found under System | Profiles | TCP Profiles. Make sure not to alter the default TCP profile without properly consulting the network team, as this affects the way TCP works for all default services on NetScaler.
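If you prefer the command line, the same profiles can be listed and inspected from the NetScaler CLI. The following is a sketch using the show ns tcpProfile command; the second line displays the settings of the default profile:

    show ns tcpProfile
    show ns tcpProfile nstcp_default_profile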

This default profile has most of the TCP features turned off to ensure compatibility with most infrastructures, and it has not changed much since it was first added to NetScaler. Citrix also ships a number of other profiles for different use cases, so we will look a bit closer at the options we have here.

For instance, the profile nstcp_default_XA_XD_profile, which is intended for ICA proxy traffic, differs from the default profile by enabling the following features:

  • Window Scaling
  • Selective Acknowledgement
  • Forward Acknowledgement
  • Nagle's algorithm

Window Scaling is a TCP option that allows the receiving endpoint to accept more data than the original TCP specification allows in the window size field before an acknowledgment is required. Without scaling, the maximum window size is 65,535 bytes. With Window Scaling enabled, the advertised window size is shifted bitwise, multiplying it by a power of two. This is an option that needs to be supported by both endpoints in order to be used, and the scale factor is only exchanged during the initial three-way handshake.
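To make the bit shifting concrete: with a scale factor of 8, a receiver advertising the maximum unscaled window of 65,535 bytes is actually offering 65,535 × 2^8 bytes, roughly 16 MB. On NetScaler, both the option and the factor are set per profile; a minimal CLI sketch with a hypothetical profile name follows:

    add ns tcpProfile ws_example -WS ENABLED -WSVal 8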

Selective Acknowledgement (SACK) is a TCP option that allows for better handling of TCP retransmissions. Consider two hosts communicating without SACK: if one host briefly drops off the network and loses some packets, it can only acknowledge the last packet it received in sequence, so the other host has to resend everything from that point onward. With SACK enabled, the receiver can report exactly which blocks of data arrived, so the sender only retransmits the segments that were actually lost. This allows for faster recovery of the communication.

Forward Acknowledgement (FACK) is a TCP option that works in conjunction with SACK to help avoid congestion by keeping track of the total number of data bytes outstanding in the network. Using the information from SACK, it can more precisely calculate how much data it can retransmit.

Nagle's algorithm is a TCP feature that addresses the small-packet problem. Applications such as Telnet often send each keystroke in its own packet, creating many packets that carry only 1 byte of data; with 40 bytes of TCP/IP headers, that is a 41-byte packet for a single keystroke. The algorithm works by combining a number of small outgoing messages into a single segment, thus reducing the overhead.

Since ICA is a protocol that generates many small packets, which might create congestion, Nagle is enabled in this TCP profile. Also, since many users connect over 3G or Wi-Fi, which can be unreliable when switching channels, we need options that allow clients to recover a connection quickly, hence the use of SACK and FACK.
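As a rough sketch of how such a profile could be assembled by hand (the profile name is hypothetical; in practice you would start from nstcp_default_XA_XD_profile rather than build your own):

    add ns tcpProfile ica_example -WS ENABLED -SACK ENABLED -nagle ENABLED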

Note that Nagle might hurt the performance of applications that have their own buffering mechanism and operate inside a LAN.

If we take a look at another profile, such as nstcp_default_lan, we can see that FACK is disabled. This is because the overhead of tracking the amount of outstanding data in a high-speed network might be too high.

Another important aspect of these profiles is the TCP congestion algorithm. For instance, nstcp_default_mobile uses the Westwood congestion algorithm, because it is much better at handling large bandwidth-delay paths, such as wireless links.

The following congestion algorithms are available in NetScaler:

  • Default (based on TCP Reno)
  • Westwood (based on TCP Westwood+)
  • BIC
  • CUBIC
  • Nile (based on TCP Illinois)

What is worth noting here is that Westwood is aimed at 3G/4G connections, or other slow wireless connections. BIC is aimed at high-bandwidth connections with high latency, such as WAN connections. CUBIC is similar to BIC but less aggressive in its ramp-up and retransmission behavior. It is also worth noting that CUBIC is the default TCP algorithm in Linux kernels since 2.6.19.

Nile, a newer algorithm developed by Citrix, is based on TCP Illinois, which targets high-speed, long-distance networks. It achieves higher throughput than standard TCP while remaining compatible with it.

So, here we can choose the algorithm that is best suited to a given service. For instance, if we have a vServer that serves content to mobile devices, we could use the nstcp_default_mobile TCP profile.
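The congestion algorithm is selected per profile through the flavor setting. A minimal CLI sketch, using a hypothetical profile for WAN-facing services:

    add ns tcpProfile wan_example -flavor BIC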

There are also some other parameters in the TCP profile that are important to consider.

One of these parameters is Multipath TCP (MPTCP). This feature allows an endpoint to use multiple paths to reach a service. A typical example is a mobile device with both Wi-Fi and 3G capabilities, which can then communicate with a service on NetScaler over both channels at the same time. This requires that the device can communicate over both paths and that the service, as well as the application on the device, supports Multipath TCP.

So, let's take an example of what a TCP profile might look like if we have a vServer on NetScaler that serves an application to mobile devices. The most common way for users to access this service is over 3G or Wi-Fi. The web service has its own buffering mechanism, which means it avoids sending small packets over the link, and the application is Multipath TCP-aware.

In this scenario, we could start from the nstcp_default_mobile profile, since it has sensible defaults for a mobile scenario, enable Multipath TCP, save the result as a new profile, and bind it to the vServer, as sketched below.
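A sketch of what that could look like on the CLI follows; the profile name is hypothetical, and since the web service buffers its own output, Nagle is left at its disabled default:

    add ns tcpProfile mobile_mptcp_example -WS ENABLED -SACK ENABLED -mptcp ENABLED -flavor Westwood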

In order to bind a TCP profile to a vServer, navigate to the particular vServer, then Edit | Profiles | TCP Profiles, as shown in the following screenshot:

[Screenshot: TCP tuning]
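The same binding can also be done from the CLI; a sketch for a hypothetical load-balancing vServer:

    set lb vserver mobile_vs_example -tcpProfileName mobile_mptcp_example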

Note

AOL gave a presentation on their own TCP customizations on NetScaler. You can take a look at it at http://www.slideshare.net/masonke/net-scaler-tcpperformancetuningintheaolnetwork. It is important to note that TCP tuning should always be done in cooperation with the network team.
