Chapter 8. Content Networking Design

This chapter describes how content networking (CN) can be implemented to provide content to users as quickly and efficiently as possible.

We first introduce the advantages that CN can provide to your network. The services provided under CN and the components that provide those services are introduced. The use of these components—the content engine, content router, content distribution and management device, and content switch—is then described in more detail. We conclude by examining two content network design scenarios.

Note

Appendix B, “Network Fundamentals,” includes material that we assume you understand before reading the rest of the book. Thus, we encourage you to review any of the material in Appendix B that you are not familiar with before reading the rest of this chapter.

Making the Business Case

We have all come to expect that the data we are trying to access will be available almost instantaneously, whether it be from the hard drive on our PC or from a website on the other side of the world. When we try to access a website and we have to wait for more than a few seconds, for example, many of us tend to give up or try another site, because we assume that the first site is either no longer available or is temporarily down.

This behavior has implications for enterprises that provide e-commerce, e-learning, or any other information that is needed by customers, employees, investors, or any other stakeholder: The data and services requested must be available as fast as possible, regardless of the location of the user relative to the data. CN aids in this quest.

CN is an intelligent network solution. In other words, it adds intelligence to the network so that the network devices are aware of the content that users are accessing. CN implements a content delivery network (CDN) that provides efficient distribution of content across the network, selection of the best site for a user to obtain the content, and load balancing of content stored on multiple devices.

Some example applications that could benefit from CN services are e-learning, video on demand (VoD), and IP television (IP/TV). IP/TV delivers one-way live streaming video, while VoD transfers prerecorded video files to users upon request.

CN provides many benefits, including the following:

  • Increased loyalty—If data is available when customers request it, they might not be as enticed to go elsewhere to get it.

  • Increased internal productivity—Employees can access data when they need it, allowing them to be more productive and to service customers more quickly.

  • Reduced bandwidth use—CN places data closer to users, thus reducing the WAN bandwidth they require.

  • Support for new applications—As the data bottleneck is reduced, new applications become possible, including e-learning, video communication, e-commerce, customer self-help, and so forth.

  • Scalability—As demand grows in specific areas, data can be distributed where required, without impacting existing users.

Content Networking

The following services fall under the CN umbrella:

  • Efficient distribution of content across the network

  • Selection of the best site for a user to obtain the content

  • Load balancing of content stored on multiple devices

The components of a CDN can include the following:

  • Content cache or content engine—A content engine is a device that caches, or stores, selected content from origin servers (servers from which the content originates) and sends it upon request to users. Content engines can be located, for example, at each branch office to reduce the WAN bandwidth required by the branch-office users.

  • Content router—Content routers direct users’ requests for content to the closest content engine.

  • Content distribution and management device—This is a device, such as the Cisco Content Distribution Manager, that is responsible for distributing content to the content engines and ensuring that the material is kept up to date.

  • Content switch—Content switches load-balance requests to servers or content engines. For example, a content switch can be deployed in front of a group of web servers; when a user requests data from the server, the content switch can forward the request to the least-loaded server.

A CDN does not have to include all of these components. For example, content engines can be deployed as stand-alone devices. Alternatively, a Cisco Content Distribution Manager can be deployed to manage the content engines, and content routers can be added to redirect content requests. Content switches can also be deployed with or without any of the other components.

Because CN is considered a network solution, it requires a robust network infrastructure and appropriate network services to be in place. The network services required by CN include quality of service (QoS), security, and IP multicast.

Note

IP multicast reduces the bandwidth used on a network by delivering a single stream of traffic to multiple recipients (defined in a multicast group), rather than sending the same traffic to each recipient individually. IP multicast is explained further in Chapter 10, “Other Enabling Technologies.”

The CDN components are further described in the following sections.

Content Caches and Content Engines

A content cache transparently caches, or stores, content that is frequently accessed so that it can be retrieved from the cache rather than from a distant server. A content engine can extend this caching functionality by interacting with a content distribution and management device, and optionally content routers, to store selected content and retrieve it on request, as part of a CDN.

Note

The type of software running on Cisco content engines can determine the features supported by the device. For example, the Cisco 7320 content engine is available with a choice of cache software (providing only transparent caching), CDN software, or Application and Content Networking System (ACNS) software.[1] The ACNS software combines the caching and CDN functionality.

Content engine functionality is also available on modules that fit into the Cisco modular routers.

Some content-engine hardware that runs ACNS software can be configured with a choice of personalities: as a content engine, a content router, or a Content Distribution Manager.[2] In fact, Cisco stand-alone Content Distribution Managers have been phased out in favor of the ACNS-enabled content engine.

(Note that a device can have only one personality at a time; it cannot perform multiple functions simultaneously.)

Caching is best suited for data that doesn’t change often, such as static application data and web objects, rather than entire web pages, which might include frequently changing objects.

Key Point

When not used with a content router, a content engine can be deployed in a network in three ways: transparent caching, nontransparent caching (also called proxy caching), and reverse proxy caching.

Transparent caching, nontransparent caching, and reverse proxy caching are described in the following sections. The use of a content engine with a content router is described in the “Content Routing” section, later in this chapter.

Content engines can also be configured to preload specific content from an origin web server that stores the primary content, and to periodically verify that the content is still current, or update any content that has changed. This is described in the “Content Distribution and Management” section, later in this chapter.

Transparent Caching

Key Point

A network that uses transparent caching includes a content engine and a Web Cache Communication Protocol (WCCP) enabled router. WCCP is part of the Cisco Internetwork Operating System (IOS) router software (available in some IOS feature sets) and is the communication mechanism between the router and the stand-alone content engine.

Transparent caching is illustrated in Figure 8-1.


Figure 8-1. With Transparent Caching, a WCCP-Enabled Router Passes Users’ Requests to the Content Engine

Note

The WCCP-enabled router in this scenario is not a content router; it is simply an IOS router with WCCP functionality. Refer to the Feature Navigator tool at http://www.cisco.com/go/fn to determine the feature set required to support WCCP for various IOS platforms.

Transparent caching operates as follows:

  1. In Figure 8-1, the user at workstation A requests a web page that resides on the web server. This request is received first by the WCCP-enabled router.

  2. The router analyzes the request and, if it meets configured criteria, forwards it to the content engine. For example, the router can be configured to send specific Transmission Control Protocol (TCP) port requests to the content engine, while not redirecting other requests.

  3. If the content engine does not have the requested page, it sends the request to the server.

  4. The server responds to the content engine with the requested data.

  5. The content engine forwards the web page to the user and then caches it for future use.

At Step 3, if the content engine did have the requested web page cached, it would send the page to the user. After the content engine has the content, any subsequent requests for the same web page are satisfied by the content engine, and the web server itself is not involved.
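The five-step flow above can be sketched in a few lines of Python. This is a simplified illustration only: the port criterion, URLs, and function names are hypothetical, and real WCCP redirection happens in the router's forwarding path, not in application code.

```python
# Simplified sketch of transparent caching (hypothetical names/values).
# The WCCP-enabled router redirects only traffic matching configured
# criteria (e.g., TCP port 80); everything else is routed normally.

REDIRECT_PORTS = {80}          # redirection criteria configured on the router
cache = {}                     # the content engine's local store

def origin_fetch(url):
    """Stand-in for a request that reaches the origin web server."""
    return f"<page for {url}>"

def wccp_router(dst_port, url):
    if dst_port not in REDIRECT_PORTS:
        return origin_fetch(url)       # not redirected: routed normally
    return content_engine(url)         # redirected to the content engine

def content_engine(url):
    if url not in cache:               # cache miss: fetch from the origin
        cache[url] = origin_fetch(url)
    return cache[url]                  # cache hit: origin not involved
```

The first matching request populates the cache; subsequent requests for the same page are answered by the content engine without involving the origin server.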

Transparent caching can also be deployed using a Layer 4 switch instead of a WCCP-enabled router. In this case, a content switch transparently intercepts and redirects content requests to the content engine.

The benefits of transparent caching include a faster response time for user requests and reduced bandwidth requirements and usage. User workstations are not aware of the caching and therefore do not have to be configured with information about the content engine. Content engines in transparent mode are typically positioned on the user side of an Internet or WAN connection.

Nontransparent Caching

Key Point

Nontransparent caching, as its name implies, is visible to end users. As such, workstations must be configured to know the address of the content engine; the content engine acts as a proxy.

Note

A proxy is an action performed on behalf of something else (for example, a proxy vote is one that you give to someone else so that she can vote on your behalf). In networking, a proxy server (also sometimes referred to as simply a proxy) is a server that accepts clients’ requests on behalf of other servers. If the proxy has the desired content, it sends it to the client; otherwise, the proxy forwards the request to the appropriate server. Thus, a proxy server acts as both a client (to the servers to which it connects) and a server (to the client that is requesting the content).

Nontransparent caching is illustrated in Figure 8-2.


Figure 8-2. With Nontransparent Caching, the Content Engine Acts as a Proxy Server

This scenario operates as follows:

  1. In Figure 8-2, the browser on workstation A is configured with the content engine as its proxy. The user at this workstation requests a web page that resides on the web server. This request is therefore sent to the content engine.

  2. Assuming that the content engine has been configured to handle the protocol and port number in the received request, the content engine checks to see whether it has the requested page. If the content engine does not have the requested page, it sends the request to the server.

  3. The server responds to the content engine with the requested data.

  4. The content engine forwards the web page to the user and then caches it for future use.

At Step 2, if the content engine had the requested web page cached, it would send the page directly to the user. Similar to transparent caching, after the content engine has the content, any subsequent requests for the same web page are satisfied by the content engine; the web server is not involved.

Nontransparent caching shares transparent caching’s benefits of faster response times and reduced bandwidth usage. An additional benefit of nontransparent caching is that it does not require WCCP-enabled routers; however, a drawback is the requirement to configure workstation browsers with the address of the content engine.

Content engines in nontransparent mode are also typically positioned on the user side of an Internet or WAN connection.
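To illustrate the client-side configuration that this mode requires, the following sketch uses Python's standard urllib to direct requests through a proxy; the content-engine host name and port shown are assumptions, not defaults.

```python
import urllib.request

# Hypothetical content-engine address: in nontransparent (proxy) mode,
# every workstation or browser must be configured with it explicitly.
PROXY = "http://content-engine.branch.example:8080"

proxy_handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(proxy_handler)

# Requests made through this opener now go via the content engine:
# opener.open("http://www.example.com/")
```

A browser achieves the same effect through its proxy settings; either way, the client knowingly addresses the content engine rather than the origin server.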

Reverse Proxy Caching

Reverse proxy caches are positioned on the server side of an Internet or WAN connection to help alleviate the load on the server, as illustrated in Figure 8-3.


Figure 8-3. Reverse Proxy Caches Help Alleviate Server Load

Key Point

Reverse proxy mode is different from the previous two modes discussed because its goal is not to reduce bandwidth requirements but rather to reduce load on the server.

The steps involved in reverse proxy caching are as follows:

  1. In Figure 8-3, the user at workstation A requests a web page that resides on the web server. This request is received by the WCCP-enabled router on the server side of the Internet.

  2. The router analyzes the request and, if it meets configured criteria, forwards it to the content engine. For example, the router can be configured to send specific TCP port requests to the content engine while not redirecting other requests.

  3. If the content engine does not have the requested page, it sends the request to the server.

  4. The server responds to the content engine with the requested data.

  5. The content engine forwards the web page to the user and then caches it for future use.

At Step 3, if the content engine had the requested web page cached, it would send the page to the user. After the content engine has the content, any subsequent requests for the same web page are satisfied by the content engine, and the web server itself is not involved, thus reducing the load on the server.

Key Point

A variety of content caches can be deployed throughout a network, in any combination of these three modes.

Clusters of caches can also be deployed to provide redundancy and increased caching capacity.

Content Routing

A content router can be added to a CDN to redirect users’ requests to the closest content engine that contains the desired content.

The closest content engine is the one that has the shortest delay to the user. To determine this, the list of candidate content engines is configured on the content router, and the boomerang protocol is used between the content router and each content engine to determine the delay between the two devices.

This delay is then used in a Domain Name System (DNS) race process. When a content router receives a request for content that is serviced by multiple content engines, the content router forwards that request to a selection of the appropriate content engines, delaying each request by the delay determined by the boomerang protocol. Thus, each content engine should receive the request at the same time. The content engines then all respond to the request; the first response that is received by the client or the client’s local DNS server is the winner of the race and is therefore the best content engine from which that particular client should receive the desired content. The client then requests the desired content from the winning content engine.
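The staggering at the heart of the DNS race reduces to simple arithmetic: if the content router holds each copy of the request for the slowest measured delay minus that engine's own delay, every copy arrives at its engine at the same instant. The delay values below are hypothetical.

```python
# Hypothetical router->engine delays (ms) measured by the boomerang
# protocol between the content router and each candidate engine.
delays = {"san-jose": 5, "new-york": 35}

def send_offsets(delays):
    """Hold each copy of the request so that all copies arrive at their
    engines simultaneously; the race back to the client then measures
    only the engine-to-client path."""
    slowest = max(delays.values())
    return {engine: slowest - d for engine, d in delays.items()}
```

With these figures, the copy for San Jose is held 30 ms while the copy for New York is sent immediately, so both engines fire their responses from an even start.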

The content router can be used in either of two modes[3]—direct mode or WCCP mode—as described in the following sections.

Direct Mode

Key Point

When used in direct mode, the content router acts as the authoritative DNS server for all domains for which it is configured. DNS address requests are sent directly from a DNS server that is local to the client to the content router.

As an example of a content router operating in direct mode, assume that the content router is to handle requests to http://www.cisco.com. The DNS server is thus configured to point to the content router as the name server for http://www.cisco.com, and all requests for content from this site are sent to the content router.

Figure 8-4 illustrates how a direct-mode content router interacts with other devices in the network.


Figure 8-4. A Content Router in Direct Mode Acts as a DNS Server

The steps involved when a content router is operating in direct mode are as follows:

  1. In Figure 8-4, the user at the workstation in Toronto requests a web page from a server. The user’s workstation (the client) sends a DNS query for the IP address of the content that it is looking for. This request goes to the client’s local DNS server.

  2. The local DNS server sends the query to the content router (which is in San Jose, in this example).

  3. The content router forwards the request to a selection of the appropriate content engines (assuming that multiple content engines service the requested content). In this example, the request is forwarded to the content engines in San Jose and New York.

  4. The content engines receive the request and then reply to the local DNS server. The first response is from the best content engine for this client, and this response is passed to the client.

  5. The client communicates with the best content engine (which is in New York, in this example) and retrieves the requested web page for the user.

WCCP Mode

Key Point

When a content router is used in WCCP mode, users’ requests are intercepted by a WCCP-enabled router and forwarded to the content router. (This is different from when the content router is used in direct mode, in which the user’s local DNS server is configured to point directly to the content router.) If the content router cannot handle the user’s request, it forwards the request on to the DNS server specified in the request. Otherwise, the content router handles the request in the same way as it does in direct mode, as described in the previous section.

The use of WCCP mode requires that WCCP be enabled both on the content router and on another router in the path between the user and the primary DNS server. This second router must be configured to send DNS address requests to the content router.

Figure 8-5 illustrates how a WCCP-mode content router interacts with other devices in the network.


Figure 8-5. A Content Router in WCCP Mode Receives Requests Intercepted by a WCCP-Enabled Router

The steps involved when a content router is operating in WCCP mode are as follows:

  1. In Figure 8-5, the user at the workstation in Toronto requests a web page from a server. The user’s workstation (the client) sends a DNS query for the IP address of the content that it is looking for.

  2. This request is destined for a DNS server but is intercepted by the WCCP router.

  3. The WCCP router forwards the request to the content router.

  4. The content router forwards the request to a selection of the appropriate content engines (assuming that multiple content engines service the requested content). In this example, the request is forwarded to the content engines in San Jose and New York.

  5. The content engines receive the request and then reply to the client. The first response is from the best content engine for this client.

  6. The client communicates with the best content engine (which is in New York, in this example) and retrieves the requested web page for the user.

Content Distribution and Management

The Cisco Content Distribution Manager can be used to manage how content is distributed to content engines, and to control other content engine settings.

Key Point

Cisco defines three types of content: on-demand, pre-positioned, and live.

On-demand content is what the content engines store as a result of users’ requests, as described in the “Content Caches and Content Engines” section, earlier in this chapter. Content engines can check with the origin server to see whether on-demand content is up to date. This occurs, for example, when the content expires (as specified by the server), when a user explicitly requests it (such as when the user clicks the Reload button in his browser), or when configurable timers set on the content engine expire. If the content has changed, the content engine caches the updated content from the server.
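The three revalidation triggers just described can be sketched as a single check. The field names and values here are hypothetical illustrations, not the content engine's actual configuration parameters.

```python
# Sketch of on-demand revalidation triggers (hypothetical fields).
# An engine rechecks cached content with the origin server when the
# server-specified expiry passes, when the user forces a reload, or
# when the engine's own configurable timer fires.

def needs_revalidation(entry, now, forced_reload=False):
    return (forced_reload
            or now >= entry["expires"]
            or now - entry["fetched_at"] >= entry["recheck_interval"])

entry = {"fetched_at": 0, "expires": 100, "recheck_interval": 50}
```

If any trigger fires and the origin copy has changed, the engine caches the updated content; otherwise it continues serving its stored copy.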

Pre-positioned content is that which has been retrieved and distributed through a network of content engines; the network administrator configures the Content Distribution Manager to pre-position this bandwidth-intensive content (typically during off-peak hours) so that it will be available upon users’ requests. Some terminology related to pre-positioned content is as follows:[4]

  • Channel—A set of content from a single website and the configuration that defines how the content is to be acquired, distributed, and stored. Content engines are assigned to a channel so that they can handle requests for this content.

  • Root content engine—The content engine that is designated to download a channel’s content from the origin server and forward it to the other content engines that are assigned to the channel.

  • Manifest file—Specifies the location from which the root content engine should fetch the pre-positioned content objects and the frequency with which the content engine should check for updates.

Note

Manifest files define content accessed through Hypertext Transfer Protocol (HTTP), Secure HTTP (HTTPS), and File Transfer Protocol (FTP). Thus, only content retrieved over these protocols can be pre-positioned.
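The relationships among channel, root content engine, and manifest file can be sketched as a data structure. The field and device names here are hypothetical, and a real manifest is a file fetched by the root content engine, not a Python structure.

```python
# Data-model sketch of the pre-positioning terminology (hypothetical
# field and device names).

channel = {
    "site": "http://origin.example.com",           # one website's content
    "root_engine": "ce-headoffice",                # fetches from the origin
    "assigned_engines": ["ce-branch1", "ce-branch2"],
    "manifest": {
        "fetch": ["/courses/intro.mpg", "/courses/notes.pdf"],
        "recheck_hours": 24,                       # update-check frequency
    },
}

def distribution_plan(channel):
    """The root engine pulls from the origin server; every other engine
    assigned to the channel then pulls from the root engine."""
    root = channel["root_engine"]
    plan = [(channel["site"], root)]
    plan += [(root, ce) for ce in channel["assigned_engines"]]
    return plan
```

The plan shows why the root content engine matters: the bandwidth-intensive transfer from the origin site happens once, and the remaining engines are filled from within the CDN.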

Live content is a stream of content, such as a CEO’s annual message to employees, that is being broadcast and that is to be relayed by the content engines to the users according to specific policies (such as maximum bit rate and bandwidth). Live content is not associated with a manifest file but rather with a program file. The program file describes attributes of the program, such as the start and end time and the server to be used.

Content Switching

Key Point

A content switch load-balances requests to servers, content engines, or firewalls.

A content switch can be used in a data center environment, for example. Here the content switch can be used in front of a group of application servers to balance the connection requests sent to each, as illustrated in Figure 8-6.


Figure 8-6. A Content Switch Can Load-Balance Connections to a Server Farm

Content switches can be configured with various policies that define how messages are shared among devices. For example, when load-balancing across a set of servers, a policy might specify some of the following:[5]

  • That all connections from a single user will go to the same server

  • That all connections from a specific type of device (for example, from a cell phone) will go to a subset of the servers that can handle that device type

  • That all requests for specific file types (for example, video files) will be directed to a specific server

The load balancing can be based on a variety of algorithms, including distributing requests on a round-robin basis or distributing to the least-loaded device.

A content switch can also monitor the status of the devices and fail over to another if one should become unavailable.
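Two of these ideas, round-robin rotation and a "sticky" least-loaded policy, can be sketched as follows. The server names and load figures are hypothetical, and a real content switch applies such policies in its forwarding hardware with many more options.

```python
from itertools import cycle

# Sketch of two balancing policies (hypothetical servers and loads).

servers = ["web1", "web2", "web3"]
rr = cycle(servers)                       # simple round-robin rotation
load = {"web1": 4, "web2": 3, "web3": 7}  # current connection counts
sticky = {}                               # user -> server already chosen

def round_robin():
    return next(rr)

def pick_server(user):
    """Least-loaded choice, made 'sticky' so that all connections from
    one user land on the same server."""
    if user not in sticky:
        sticky[user] = min(load, key=load.get)
        load[sticky[user]] += 1
    return sticky[user]
```

The sticky table is what implements the first policy in the list above: a returning user is sent to the server already holding that user's session, while new users go wherever the load is lightest.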

Designing Content Networking

Content networking encompasses a selection of device types that can be deployed in a variety of ways. This section examines the following two scenarios and example designs using CN devices:

  • School curriculum

  • Live video and video on demand for a corporation

School Curriculum

In this first scenario, a school board wants to provide curriculum and other course information to all its students, who are distributed across a wide geographical area. The content is relatively static and therefore lends itself well to a CN solution, as shown in Figure 8-7.


Figure 8-7. Performance Can Be Improved Significantly with Content Networking

In the network in Figure 8-7, the course content resides on the curriculum server located at the school board office. A Content Distribution Manager is deployed in the same office to handle the distribution of the content to the content engines deployed in each of the schools. An optional content router (in direct mode) can also be deployed at the main office if not all schools are to be equipped with the content engines; in this case, the content router directs users’ requests to the closest content engine.

The content engines in this scenario are deployed in nontransparent caching mode. Therefore, the workstations in the schools must be configured with the address of the school’s content engine as their proxy address.

Live Video and Video on Demand for a Corporation

In this scenario, a corporation wants to be able to deliver live video, such as company meetings, and VoD, such as training videos, to its employees over the network. The organization uses an IP/TV broadcast server to create and send the live video and stores the VoD files on servers in its head office, as illustrated in Figure 8-8. A content switch is used in the head office to load-share among the servers.


Figure 8-8. Content Networking Ensures That Video Is Available Throughout the Enterprise

A Content Distribution Manager is also deployed in the head office to handle the distribution of the content to the content engines that are deployed in each of the branch offices. An optional content router can again be deployed at the head office if not all branch offices are to be equipped with the content engines; in this case, the content router directs users’ requests to the closest content engine.

The content engines in this scenario are deployed in nontransparent caching mode. Therefore, the workstations in the branch offices must be configured with the address of the office’s content engine as their proxy address.

IP multicast must be enabled on this network to ensure efficient distribution of the live video.

Note

As an alternative to implementing CN, an enterprise can contract with a provider of CDN services. A CDN service provider implements a CDN so that its clients can access CN services and features from different locations, anywhere in the world. For example, consider a company that provides e-learning—the company has course files (including, for example, videos) on its servers for customers to access worldwide. Those users closer to the servers would tend to experience faster response times than those farther away, who might experience unacceptable response times. The company can therefore contract with a CDN service provider to replicate the e-learning content on the service provider’s many worldwide servers. Distant users accessing the courses are then directed to the server closest to them, drastically improving the response times they experience.

Summary

In this chapter, you learned about integrating content networking devices into your network; the following topics were presented:

  • The benefits of employing the CN intelligent network solution.

  • The following components that are used in CN:

    • Content engine—Caches, or stores, selected content from origin servers and sends it upon request to users. When not used with a content router, a content engine can be deployed in a network in three ways: transparent caching, nontransparent caching, and reverse proxy caching.

    • Content router—Directs users’ requests for content to the closest content engine. A content router can operate in either direct mode or WCCP mode.

    • Content distribution and management device—Responsible for distributing content to the content engines and for ensuring that the material is kept up to date.

    • Content switch—Load-balances requests to servers, content engines, or firewalls.

  • The three types of content: on-demand, pre-positioned, and live.

  • Example CN design scenarios.
