MobileStream is a set of features on NetScaler that, in essence, optimizes the traffic delivered back to endpoints. This chapter focuses on three of these features: compression, caching, and frontend optimization. The following topics will be covered in this chapter:
NetScaler MobileStream, as I mentioned, is a set of key features in NetScaler that enhance service delivery to mobile devices. In essence, however, it is a marketing term that bundles features such as TCP optimization and application firewall (both of which we will cover in a later chapter) together with frontend optimization, compression, and caching, which are more HTTP-based application features. Let's explore what these features are and how they work:
All three of these features allow the client to get hold of the content faster, as they save bandwidth between the service and the client. They can also reduce traffic to the backend servers and protect them from traffic storms. An important point to note is that these features are not included in the Standard edition of NetScaler. To use them, we either need to buy a feature license or upgrade to the Enterprise or Platinum edition. Caching in particular requires the Platinum edition, or the Enterprise edition combined with an additional feature license.
So, let's start by taking a look at the compression feature of NetScaler.
The compression feature enables a NetScaler vServer to compress HTTP data flowing between the server and the client. Note that compression only encodes the data; it does not encrypt it, and it should not be relied upon as a security measure.
The compression feature requires that the client requesting the content uses a browser that supports compression. Most modern browsers, such as Firefox 4 and above, Google Chrome 20 and above, and Internet Explorer 7 and above, support HTTP compression. When a client connects to a vServer, it announces the encodings it supports in the Accept-Encoding request header, which allows NetScaler to choose the best compression algorithm.
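The negotiation step can be sketched in a few lines. The following is a minimal illustration, not NetScaler's actual internal logic: a hypothetical `pick_encoding` helper that parses an Accept-Encoding header, honors the optional q-values defined in the HTTP specification, and returns the best coding the proxy supports.

```python
def pick_encoding(accept_encoding, supported=("gzip", "deflate")):
    """Choose the best content coding from an Accept-Encoding header.

    Parses optional q-values (e.g. 'deflate;q=0.5') and returns the
    highest-weighted coding we support, or None for no compression.
    """
    best, best_q = None, 0.0
    for part in accept_encoding.split(","):
        fields = part.strip().split(";")
        coding = fields[0].strip().lower()
        q = 1.0  # default weight when no q-value is given
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name.strip() == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        if coding in supported and q > best_q:
            best, best_q = coding, q
    return best

# A browser advertising both codings, preferring gzip:
print(pick_encoding("gzip, deflate;q=0.5"))  # gzip
```

A client that only advertises `identity` (no compression) would get `None` back, and the response would be passed through uncompressed.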
HTTP compression is based on the GZIP and DEFLATE algorithms, whose formats are defined in RFCs 1950, 1951, and 1952. Those interested in the technical details can read more at http://www.ietf.org/rfc/rfc1952.txt.
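Both formats wrap the same DEFLATE stream, differing only in their headers and checksums. A quick sketch with Python's standard library shows the difference in practice (the sample payload is invented for illustration):

```python
import gzip
import zlib

# A repetitive text payload, typical of HTML, compresses very well.
body = b"<html><body>" + b"<p>Hello, NetScaler!</p>" * 200 + b"</body></html>"

gzipped = gzip.compress(body)    # RFC 1952: gzip wrapper around DEFLATE
deflated = zlib.compress(body)   # RFC 1950: zlib wrapper around DEFLATE

print(len(body), len(gzipped), len(deflated))
assert gzip.decompress(gzipped) == body
assert zlib.decompress(deflated) == body
```

Both compressed forms are a small fraction of the original size here; text-heavy content is exactly where HTTP compression pays off.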
Now, the HTTP compression feature of NetScaler will compress data within HTML, XML, CSS, text, and Microsoft Office documents. It does not compress any picture format files, JavaScript files, or other web files that are not text related.
In order to configure compression in NetScaler, first enable the feature globally in the appliance. This can be done using the following CLI command:
enable ns feature cmp
Here, cmp stands for compression. After enabling this feature in NetScaler, activate it for a service; a service in this context can be a load-balanced service. This can be done using the following CLI command:
set service nameofservice -CMP YES
This can also be done through the GUI under Traffic Management. Then, click on Service and go to the Advanced pane. Navigate to the Settings section of the window and enable Compression, as shown in the following screenshot:
Using Wireshark to analyze network traffic using filters and the different types of HTTP headers will be covered as part of Chapter 7, Security and Troubleshooting.
Now, after compression has been enabled, NetScaler uses the default policies that are set at the global level, so it automatically starts compressing data for the service. If we go to the HTTP Compression Policy Manager window under HTTP Compression, choose Override Global, and click Continue, we can see the policies that are applied at the global level.
We need to use the classic policy syntax here because the global settings of the compression feature define which type of policies is processed; by default, this is set to the Classic policy type. Policy types are covered later in this chapter.
By default, there are five global policies, each of which has an action attached to it. The policies are explained as follows:
- ns_nocmp_xml_ie: This policy does not compress responses to requests sent from Internet Explorer when the content type is text or XML.
- ns_nocmp_mozilla_47: This policy does not compress responses to requests sent from browsers identifying themselves as Mozilla/4.7 when the content type is text or XML.
- ns_cmp_mscss: This policy compresses CSS files when the request is sent from Internet Explorer.
- ns_cmp_msapp: This policy compresses files that are generated by Microsoft Word, Excel, or PowerPoint.
- ns_cmp_content_type: This policy compresses data when the response contains text.

These policies can be seen in the following screenshot:
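The combined effect of these defaults can be modeled as a small decision function. This is a rough, hypothetical simplification for illustration; the real policies are expressions evaluated on the appliance, and the exact matching rules may differ:

```python
def should_compress(user_agent, content_type):
    """Rough model of NetScaler's five default global compression policies.

    The 'nocmp' exclusions are checked first, then the 'cmp' policies.
    """
    ua, ct = user_agent.lower(), content_type.lower()
    # ns_nocmp_xml_ie: skip text/xml responses for Internet Explorer
    if "msie" in ua and ct == "text/xml":
        return False
    # ns_nocmp_mozilla_47: skip Mozilla/4.7-era browsers entirely
    if "mozilla/4.7" in ua:
        return False
    # ns_cmp_mscss: compress CSS sent to Internet Explorer
    if "msie" in ua and ct == "text/css":
        return True
    # ns_cmp_msapp: compress Word, Excel, and PowerPoint documents
    if ct in ("application/msword", "application/vnd.ms-excel",
              "application/vnd.ms-powerpoint"):
        return True
    # ns_cmp_content_type: compress any text response
    return ct.startswith("text/")

print(should_compress("Mozilla/5.0 Chrome/20", "text/html"))  # True
print(should_compress("Mozilla/5.0 Chrome/20", "image/png"))  # False
```

The point to take away is the ordering: exclusion policies win over compression policies for the same request.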
These policies do not compress data going from the client to the services; they compress data generated by the servers that contains CSS files, Microsoft Office documents, or text.
After we have enabled compression for a service, we can test it by running a few HTTP requests against it, for example, by opening a web browser to a service we defined in NetScaler. In my example, I have a simple IIS server set up, and I query the index page.
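If you want to see the whole request/response dance without an appliance or an IIS backend, it can be simulated locally. The following sketch is purely illustrative: a throwaway Python HTTP server stands in for the compressing vServer, serving an invented sample page, and compressing it only when the client advertises gzip support.

```python
import gzip
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body>" + b"index " * 500 + b"</body></html>"

class GzipHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Compress only if the client advertised gzip support.
        if "gzip" in self.headers.get("Accept-Encoding", ""):
            body = gzip.compress(body)
            self.send_header("Content-Encoding", "gzip")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), GzipHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_port
req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(req) as resp:
    assert resp.headers["Content-Encoding"] == "gzip"
    data = gzip.decompress(resp.read())
assert data == PAGE
server.shutdown()
print("compressed transfer OK")
```

The client sees a Content-Encoding: gzip response header and must decompress the body, which is exactly what a browser does transparently.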
To view compression statistics, use the following CLI command:

stat cmp
We can also go through the GUI under HTTP Compression | Statistics, as shown in the following screenshot:
We can see that it has already compressed the data by about 50 percent. Note that compression uses the CPU of the appliance, so be careful about enabling it on a large number of services, as NetScaler can consume a large amount of CPU performing compression.
We can define some global settings to make sure that the compression feature does not run if NetScaler exceeds a particular amount of CPU usage. To do this, go to Optimization | HTTP Compression | Settings | Change Compression Settings. Here, define the following parameters:
Most of these settings can be left at their default values, but in some scenarios, for example with many large services or backend web servers that already have compression enabled, you will need to adjust them to make sure everything works properly. Another useful adjustment is the setting that bypasses compression at high CPU usage, since you never want NetScaler to sit at 100 percent CPU.
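The CPU-bypass idea is simple enough to sketch. The function name and the threshold below are hypothetical; on the appliance the threshold is a configurable setting, not a hard-coded value:

```python
import gzip

def maybe_compress(body, cpu_percent, bypass_threshold=85.0):
    """Compress the response unless appliance CPU exceeds the bypass threshold.

    This mirrors the idea behind NetScaler's bypass-on-CPU setting: above
    the threshold, responses pass through uncompressed so that compression
    never pushes the box to 100 percent CPU.
    Returns (body, content_encoding_or_None).
    """
    if cpu_percent >= bypass_threshold:
        return body, None          # pass-through, no Content-Encoding header
    return gzip.compress(body), "gzip"

body = b"x" * 10_000
out, enc = maybe_compress(body, cpu_percent=40.0)
assert enc == "gzip" and gzip.decompress(out) == body

out, enc = maybe_compress(body, cpu_percent=95.0)
assert enc is None and out == body
```

Trading some bandwidth savings for headroom under load is usually the right call, since a saturated appliance hurts every service at once.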
Now that we have gone through the different settings, it is time to create our own compression policies. A policy consists of a rule and an action. The rule is an expression that matches traffic, for example, requests coming from a particular browser; the action defines what to do with matching traffic, such as compressing the data.
Follow these steps to create a compression policy:
To test a policy against a service, bind it to the service and give it a low priority number, which ensures that it is evaluated before other policies. In this example, we bind the newly created policy at the global level with priority 100, so that it applies to all connections made from Internet Explorer. We can also unbind the other policies to make sure none of them interfere with the Internet Explorer policy.
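Priority-based evaluation can be illustrated with a small sketch. The structure below is a hypothetical simplification of how bound policies are walked: lowest priority number first, first matching rule wins.

```python
def first_match(bound_policies, request):
    """Evaluate bound compression policies in ascending priority order.

    bound_policies: list of (priority, rule, action) tuples. As on
    NetScaler, the lowest priority number is evaluated first, and the
    first rule that matches decides the action.
    """
    for priority, rule, action in sorted(bound_policies, key=lambda p: p[0]):
        if rule(request):
            return action
    return "NOCOMPRESS"  # no policy matched

policies = [
    # Our custom Internet Explorer policy, bound at priority 100.
    (100, lambda r: "MSIE" in r["User-Agent"], "COMPRESS"),
    # A broader text policy at a higher (later) priority.
    (200, lambda r: r["Content-Type"].startswith("text/"), "COMPRESS"),
]

ie_request = {"User-Agent": "Mozilla/4.0 (compatible; MSIE 8.0)",
              "Content-Type": "image/png"}
print(first_match(policies, ie_request))  # COMPRESS
```

Because 100 sorts before 200, the Internet Explorer rule is consulted first, which is exactly why a low priority number makes a test policy apply ahead of the defaults.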
So, when we open a connection from Internet Explorer, we can see from the captured packets that the traffic is compressed, based on the HTTP response header. In the following screenshot, the Content-Encoding field shows that the response is compressed with gzip:
If we do the same with Google Chrome and analyze the traffic in Wireshark, we can see in the following screenshot that the traffic is not compressed and the data is sent uncompressed, as no policy expression matches Google Chrome:
We have now created a custom policy for Internet Explorer users and explored the different options for compression and how it works. We can verify if the compression policies are working by going into HTTP Compression | Policies. This will list out all our policies and show the current hits of the policy and the bandwidth savings of the different policies, as seen in the following screenshot: