14.5. Resilient Architecture: Virtualization and Routing

Having discussed our original premise, the routing protocol framework, and the attack models in detail, we are now ready to present the resilient network architecture and the role routing plays in enabling a virtualized environment.

14.5.1. An Enabling Framework for Adaptive and Secure Virtualized Networking

Network virtualization to provide prioritized critical/emergency services is a critical need for cybertrust in next-generation networks. In this section, we present a new secure, extended node/link update (ENLU) framework by extending a link-state routing framework through a secure group communication approach for enabling network virtualization. This scheme allows dissemination of ENLU messages to be encoded in such a way that only nodes with the proper key can take advantage of the encoded information for prioritized services. We invoke a many-to-many group communication keying scheme to virtualize network resources to support multiple service domains; the scheme is described in detail in Huang et al. [17] and is summarized in Appendix 14.A at the end of the chapter.

Cryptographic Approaches for Network Resource Prioritization

In general, confidentiality ensures that no unauthorized entities can decipher the routing information on its way to a destination; integrity refers to the trustworthiness of data or resources and it is usually phrased in terms of preventing improper or unauthorized change. Integrity includes data integrity (the content of the information) and origin integrity (the source of the data, often called authentication).

As discussed earlier, our approach considers two preventive cryptographic countermeasures: confidentiality and authentication. These two countermeasures can provide protection at either the PL or the IL, as shown in Figure 14.4(a). Recall that if we think of a routing packet as a bus filled with a group of passengers, PL and IL represent the cryptographic countermeasures provided for the bus and for each individual passenger, respectively.

Figure 14.4. (a) Granularity of attributes at the packet level and information level and (b) VTRouP: conceptual framework.


To date, no information-level confidentiality (ILC) schemes have been proposed for routing protocols that can be used for network virtualization. Our approach is to use ILC and information-level authentication (ILA) as the foundation on which to build a new secure routing framework. To deploy ILC, routing information (i.e., metrics) is categorized into multiple groups. By carefully assigning group keys to nodes, we can partition network resources into multiple routing domains. For example, consider a node with several outgoing links; it can encrypt the routing metrics (RMs) for some links using one key and the RMs for other links using another key. Thus, only nodes that hold the correct key can decrypt the routing information. This strategy can also be applied to a single link; that is, a node can partition the bandwidth of a link into multiple portions and create/encrypt an RM for each portion. This approach has several benefits:

  • It prevents outsiders’ sniffing attacks. We assume that the crypto key length is long enough to withstand a brute-force attack within a maintenance cycle (i.e., the period over which the window of crypto keys is updated).

  • It mitigates outsiders’ traffic-analysis attacks. Since extended node/link attributes are encrypted and a node may or may not possess the decrypting key, nodes can maintain different network topology information and shortest-path tree or other provisioned paths. Thus, the data flow may not follow the same shortest path, which can prevent attackers from deriving the correct network topology or traffic allocation pattern.

  • An insider has limited information of a network, which can mitigate routing analysis and deliberate exposure attacks.
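To make the per-link encoding concrete, the following Python sketch encrypts the routing metrics of two outgoing links under two hypothetical group keys. This is a toy illustration only: the SHA-256-based XOR keystream stands in for a real cipher such as AES, and the key names and metric strings are invented for this example.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed via SHA-256 (illustration only; a real
    deployment would use an authenticated cipher such as AES-GCM)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# Hypothetical group keys partitioning a node's outgoing links into two domains
key_sc = b"group-key-sc-services"   # held only by SC-capable nodes
key_normal = b"group-key-normal"    # held by all nodes

# Routing metrics (RMs) advertised for two links, encoded per service domain
rm_link1 = keystream_xor(key_sc, b"link1:bw=600Mbps")
rm_link2 = keystream_xor(key_normal, b"link2:bw=100Mbps")

# A node holding only key_normal recovers the normal-domain RM...
assert keystream_xor(key_normal, rm_link2) == b"link2:bw=100Mbps"
# ...but decrypting link1's RM with the wrong key yields only garbage
assert keystream_xor(key_normal, rm_link1) != b"link1:bw=600Mbps"
```

A node lacking key_sc simply cannot use link1's advertised metric, which is exactly how the resource partition is enforced.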

To implement ILC and ILA, an efficient secure group key management scheme that supports many-to-many secure group communication is needed. Many-to-many secure group communication requires that each member of a group of size n can communicate with any subgroup of members securely and in real time, without requiring a new set of keys for each subgroup communication. In general, this means a group member would need to possess 2^(n–1) – 1 keys; however, our secure many-to-many group communication keying scheme [18], summarized in Appendix 14.A, has been designed for the purpose of virtualization and has much lower complexity. Briefly, our many-to-many secure group communication keying scheme has the following advantages:

  1. During the communication phase, group members can self-derive desired subgroup keys.

  2. There is no group/subgroup setup delay, although some processing overhead is incurred by the key agreement protocol.

  3. It incurs less communication overhead than comparable methods.

  4. It is suitable where subgroup formation is frequent.

  5. A node cannot partner with another node to move to a different (unauthorized) subgroup.
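The self-derivation property in item 1 can be illustrated with a deliberately simplified stand-in for the scheme of Appendix 14.A: here a single hypothetical master secret is hashed with the sorted member set, so any member derives the same subgroup key with no setup round. The real scheme instead predistributes a distinct secret set to each node (which is what defeats collusion); this sketch conveys only the on-the-fly derivation idea.

```python
import hashlib

def subgroup_key(master_secret: bytes, members: set) -> bytes:
    """Derive a subgroup key on the fly from the member set (no setup round).
    Simplified stand-in; not the Appendix 14.A construction."""
    label = ",".join(str(m) for m in sorted(members)).encode()
    return hashlib.sha256(master_secret + b"|" + label).digest()

secret = b"predistributed-secret"   # hypothetical predistributed material
k_23 = subgroup_key(secret, {2, 3})
assert k_23 == subgroup_key(secret, {3, 2})          # any member derives the same key
assert k_23 != subgroup_key(secret, {2, 3, 4})       # distinct subgroup, distinct key
```

Because each subgroup key is a deterministic function of the member set, subgroups can be formed as frequently as needed with no key-exchange messages.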

The above advantages make this keying scheme ideal for the encrypted ENLU framework, since it allows us to disseminate ENLU messages in a way that is meant only for a subgroup of nodes. Recall that we initially considered two services, normal services and SC services, with the important requirement that SC services encompass normal services as well. This can be accomplished by defining two subgroups for extended node/link-state dissemination using the many-to-many secure group communication scheme we have developed. Note that although there are only two subgroups in this case, mapped to two categories of services, the formation of subgroups can be changed frequently through the use of different subgroup keys.

Note that our keying scheme addresses the problem of an undesirable node partnering with other nodes to read all attributes of an advertised LSA, thus preventing a node from moving from a normal service state to a prioritized service state without having the credentials (i.e., the set of secrets to derive the desired group/subgroup keys).

Our keying scheme has an additional advantage: it allows dynamic subgroup formation. This means that if a network wants to dynamically define multiple prioritized service levels, our approach supports this, with the added benefit that a node can belong to different prioritized groups and yet cannot become an undesirable node (i.e., move to a higher prioritized service class).

It may be noted that if the number of nodes in a network is likely to grow, our scheme can be deployed with overprovisioned keys. For example, if the overall group size is currently around 50 and is expected to grow to about 100, the initial deployment can assume a total membership of, say, 101, since our scheme requires the overall population size to be odd. This means there would be some fictitious group members to start with. This approach avoids frequent redistribution of keys to all nodes in the network, at the expense of overprovisioning. Our current approach has the following limitations: (1) the storage complexity of the keying scheme is O(n^2), so the overall group size is restricted to a few hundred nodes, and (2) a centralized key server is required for initial key predistribution. An important research goal is to overcome these limitations.
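The overprovisioning rule above (deploy for the expected population, rounded up to an odd size) can be captured in a small helper. The function names are ours, and the quadratic count is only indicative of the O(n^2) storage growth, not the scheme's exact key count.

```python
def provisioned_group_size(current: int, expected: int) -> int:
    """Smallest odd group size covering the expected growth
    (the keying scheme requires an odd population size)."""
    n = max(current, expected)
    return n if n % 2 == 1 else n + 1

def key_storage_indicative(n: int) -> int:
    """Indicative O(n^2) storage figure; the real constant depends on
    the scheme's details (see Appendix 14.A)."""
    return n * n

# The chapter's example: ~50 nodes today, ~100 expected -> deploy for 101
assert provisioned_group_size(50, 100) == 101
```

The gap between the provisioned size and the current population is filled with fictitious members, so joining nodes only consume pre-allocated slots rather than triggering a network-wide rekeying.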

Virtual Trust Routing and Provisioning Domain

Using our many-to-many secure group keying scheme together with the information-level encryption and authentication approach, the entire routing domain can be divided into multiple routing/provisioning subdomains. We refer to such a subdomain as a virtual trust routing and provisioning domain (VTRouP) (Figure 14.4b). Note that the framework does not require or imply dividing the administrative domain itself into VTRouPs.

Every node that belongs to a particular VTRouP will have complete routing information for its own VTRouP, but not for others. We use the cryptographic techniques ILA and ILC to build the VTRouP framework. Each node can be provided by a different infrastructure provider; however, each node must support our framework, including secure many-to-many communication as well as the capability for link bandwidth control and virtualization. For example, the bandwidth of each node's communication links would be divided by using different encryption/decryption/authentication keys. While bandwidth partitioning is not directly available in most of today's routers, it can be accomplished through multiple virtual links, a concept already available in the current generation of routers. Thus, a subset of network resources, composed of multiple network links using the same encryption/decryption/authentication key, forms a VTRouP.

We now briefly discuss how the overall system framework is affected (see Figure 14.5). Typically, from a systems perspective, traffic management and network resource management components are necessary for monitoring and managing a network. For a resilient environment, three additional components are involved: IDS, key management, and VTRouP. Note that intrusion detection is outside the scope of the present work; however, its role is important in the overall system framework.

Figure 14.5. System framework.


14.5.2. Routing Protocol Extension: OSPF-E

In this section we present an extension to OSPF, referred to as OSPF-E, for use in a VTRouP environment. Our approach builds on the traffic engineering extension of OSPF, known as OSPF-TE; here, we go beyond that to address the encryption of information. To achieve this, we also use the opaque LSA option (RFC 2370 [19]). The opaque LSA consists of a standard LSA header followed by application-specific information, and it provides a generalized mechanism for the future extensibility of OSPF. We include the details of our proposed LSA packet format. We then introduce a key numbering scheme in order to identify the trust level of the routers and the key that has been used to encrypt the routing information. We also discuss the processing overhead of the LSU packet, which is critical in an operational environment.

OSPF Opaque Link-State Advertisement: Extension

Opaque LSAs [19] come in three types: type 9, 10, and 11, which are advertised within a network, an area, and an autonomous system, respectively. We define the opaque LSA format to provide confidentiality for these three types of advertisements. In addition to these three types of opaque LSAs, other types can be defined for particular uses, such as a key distribution LSA or a routing control LSA.

In our scheme, authentication is provided at the packet level of the LSA, since the LSA header is not encrypted. We do not intend to provide a way to prevent insider attacks; rather, we show how OSPF-E works by introducing confidentiality. Murphy et al. [14] use digital signatures for each LSA to prevent impersonation attacks, which can be added to our scheme when needed. Here, we assume the network has the capability to detect insider attacks.

We now summarize the changes we propose in opaque LSA. The modified opaque LSA header is shown in Table 14.3.

  1. Options. In RFC 2370, the O bit of the Options field is set in the database description packet to indicate that the router is opaque capable. In the LSA header, we use the S bit in the same position to indicate that confidentiality is provided.

  2. LSA type (8 bits). Three types of opaque LSAs exist (type 9, 10, and 11), each of which has a different flooding scope. We provide confidentiality for these three types.

  3. OType (8 bits). This field specifies the type of LSA that is encrypted. The various OTypes are defined as follows: 1—Router-LSAs, 2—Network-LSAs, 3—Summary-LSAs (destination to network), 4—Summary-LSAs (destination to AS boundary routers), and 5—AS-external-LSAs.

  4. EType (8 bits). The encryption type specifies the encryption/decryption method used, such as DES, 3DES, AES, and so on.

  5. Key ID (8 bits). This field specifies the type of cryptographic scheme used to encrypt a propagated LSA, such as shared key, public key, and so on.

  6. LSA subheader. The LSA subheader immediately follows the opaque LSA header and identifies which encryption/decryption key is used.

    Table 14.3. Opaque LSA header (each row is one 32-bit word).
    LS age | Options | LSA type
    OType | Pri | EType | Key ID
    Advertising router
    LS sequence number
    LS checksum | Length
    LSA subheader
    ...
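The header layout of Table 14.3 can be packed and unpacked with Python's struct module. This sketch assumes the Pri field is 8 bits wide, so that OType/Pri/EType/Key ID fill one 32-bit word; the sample field values are arbitrary.

```python
import struct

# Modified opaque LSA header (Table 14.3), big-endian network byte order:
# LS age (16) | Options (8) | LSA type (8)
# OType (8) | Pri (8, assumed width) | EType (8) | Key ID (8)
# Advertising router (32) | LS sequence number (32)
# LS checksum (16) | Length (16)
HDR = struct.Struct(">HBBBBBBIIHH")

def pack_lsa_header(ls_age, options, lsa_type, otype, pri, etype, key_id,
                    adv_router, seq, checksum, length):
    return HDR.pack(ls_age, options, lsa_type, otype, pri, etype, key_id,
                    adv_router, seq, checksum, length)

# Example: an area-scoped (type 10) opaque LSA carrying an encrypted Router-LSA
hdr = pack_lsa_header(1, 0x42, 10, 1, 0, 1, 1,
                      0xC0A80101, 0x80000001, 0, 60)
assert len(hdr) == 20          # five 32-bit words
assert HDR.unpack(hdr)[2] == 10
```

The LSA subheader and encrypted body would follow these 20 bytes; the Length field covers the whole LSA as in standard OSPF.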

We also propose changes to the LSA subheader (Table 14.4):

  1. Format (16 bits). This field specifies what type of key is used. It can be a global/level/subgroup key, an individual key, and so on.

  2. Levels (n). This field represents the number of levels between the key used and the top-level key.

  3. Num of bits (16 bits). Based on our proposed master key scheme, this field specifies the number of concatenated hash values.

  4. Var 1~16*(n–1). This field identifies the location of the key in the hierarchical key structure. It contains each level’s information from the top down to the level in which the key is located; here, n is specified in the “Levels” field.

  5. SKey len (8 bits). This field specifies the length of a session key when it is used.

  6. Var/Encrypted session key. The session key is encrypted by the individual or global/level/subgroup key. The length is variable and depends on the “SKey len” field.

Table 14.4. Opaque LSA subheader (each row is one 32-bit word).
Format | Levels (n) | Num of Bits
Var 1 ~ 16*(n–1)
...
SKey len | ...
Var/Encrypted Session Key
...
Encrypted Data
...

Key Numbering Scheme

We use dot notation to present our key numbering scheme. The dot separates the levels of the hierarchical key structure.

An OSPF-E key numbering example is shown in Figure 14.6 in which both the trusted group (TG) and the keys are labeled with a dot notation. The leading number stands for the top level of the group. The succeeding numbers following the “.” represent subgroup, sub-subgroup members, and so on. So, it can be conveniently represented as: root.group.subgroup.sub-subgroup. ...

Figure 14.6. Key allocations of OSPF-E hierarchical trust structure.


When a router receives an encrypted routing information packet with a key label a.b.c.d, it compares its own key label with the one received to find the common parts. In the example, in group TG(0.1), there are four routers with key labels 0.1.1, 0.1.2, 0.1.3, and 0.1.4. The router labeled K0.1 is the trusted group leader (TGL) of TG(0.1). Using its predistributed key set, the TGL can derive all the keys distributed to its group members (using the group key management scheme discussed in Appendix 14.A). We use the suffix 0 in a key label to represent a group key; thus, the group key for this TG is 0.1.0. If 0.1.2 and 0.1.3 want to set up subgroup communication, they can use the subgroup key labeled 0.1.2–3, where “–” denotes communication within a subgroup.
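Comparing dot-notation key labels amounts to counting the common leading components, which tells a router how far down the hierarchy it shares trust with the sender. A minimal sketch (function name ours):

```python
def common_level(label_a: str, label_b: str) -> int:
    """Number of leading dot-separated components shared by two key labels."""
    a, b = label_a.split("."), label_b.split(".")
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

# Router 0.1.2 receives an LSA keyed for its own trusted group, 0.1.0
assert common_level("0.1.2", "0.1.0") == 2   # shares the TG(0.1) prefix
assert common_level("0.1.2", "0.2.7") == 1   # only the root level in common
```

A deeper shared prefix means the receiver sits in (or under) the sender's trusted group and can attempt key derivation; no shared prefix beyond the root means the LSA is opaque to it.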

OSPF-E and Link-State Advertisement Packet Processing

In order to incorporate the encryption/decryption process into LSU packet processing, the flow chart presented in Shaikh and Greenberg [20] for OSPF needs to be modified. Figure 14.7 shows the new flow chart for the OSPF-E processes initiated once an LSU packet is received by the router. On receiving an LSU packet, the router processes the LSAs contained in that packet one by one. It looks up the LSA packet header to find the level at which the key resides. If the key (the one used to encrypt the LSA) resides at a higher level than the router, the router simply bundles the LSA into the outgoing LSU packet. If the key belongs to the same level or a lower one, the router either uses its own key or generates the key using the group keying scheme presented in Appendix 14.A, and decrypts the LSA. Once the LSA is decrypted, it is checked for duplicates. OSPF-E updates its link-state database for every new LSA it receives. The flooding of LSAs is similar to OSPF. The router also needs to schedule the best-route calculation module.

Figure 14.7. Flow chart depicting the OSPF-E processes initiated on receipt of LSU packets.


Before the router creates the LSU packet to be sent on its interfaces, it encrypts the information using various keys. The choice of encryption key for a particular LSA depends on the level of the router and on the scope of the information as defined by the network policy. Once all the LSAs are encrypted, they are bundled into the LSU packet and flooded through the interface.
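The receive-side decision flow described above can be sketched as follows. Here derive_key and decrypt are hypothetical stand-ins for the Appendix 14.A key derivation and the chosen cipher, and the LSA dictionaries abstract the packet fields; smaller key_level numbers denote levels higher in the trust hierarchy.

```python
def process_lsu(router_level, lsas, derive_key, decrypt, db):
    """Sketch of the OSPF-E receive path (cf. Figure 14.7); helpers are
    hypothetical stand-ins, not part of the protocol specification."""
    forward = []  # LSAs to bundle into the outgoing LSU
    for lsa in lsas:
        if lsa["key_level"] < router_level:
            # Key resides at a higher level than this router:
            # re-bundle the LSA untouched.
            forward.append(lsa)
            continue
        # Same level or lower: use own key or derive the subgroup key.
        key = derive_key(lsa["key_label"])
        info = decrypt(key, lsa["payload"])
        # Duplicate check via sequence number; update LSDB on new LSAs.
        if lsa["id"] not in db or db[lsa["id"]]["seq"] < lsa["seq"]:
            db[lsa["id"]] = {"seq": lsa["seq"], "info": info}
            forward.append(lsa)  # flood onward, as in plain OSPF
    return forward

db = {}
lsas = [
    {"id": "A", "seq": 1, "key_level": 0, "key_label": "0",     "payload": "x"},
    {"id": "B", "seq": 1, "key_level": 2, "key_label": "0.1.0", "payload": "y"},
]
out = process_lsu(1, lsas, lambda label: label,
                  lambda key, payload: payload.upper(), db)
assert "A" not in db and db["B"]["info"] == "Y"
assert len(out) == 2
```

Scheduling of the best-route calculation after database updates is omitted from the sketch.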

14.5.3. Network Analysis: Preliminary Results

An important question is: can we quantify the benefit that protected services receive from prioritized provisioning in a network virtualization framework based on secure encryption? To be able to quantify such a benefit, we have recently started the development of a virtual network analysis simulator (VNAS), which extends MuSDyR [8].

In VNAS, we have incorporated protected and dynamic network virtualization by allowing for different service classes, such as an SC service class over the normal service class. In our preliminary prototype, we have implemented a rudimentary version of the ENLU message passing framework for different service classes. In our current implementation, virtualization is performed on a per-link basis; that is, to simulate the effect, a user can decide which links are to be considered for virtualization. If a link is not considered for virtualization, then all services share it equally. To activate network virtualization, the attribute values of a link are encoded differently for the prioritized service than for the normal services. This is done so that ENLU messages are recorded by nodes as appropriate for the different services when computing routes and provisioning services.

For our preliminary study, we have considered an eight-node network (Figure 14.8a). The traffic load was chosen so that the network is in a stressed situation in which the prioritized service requires better performance than the normal service while both have the same amount of offered load. In Figure 14.8(b), we plot service blocking performance for the prioritized and normal services. At the left end is the case in which all links in the network are fully shared by both service classes; thus, naturally, both service classes have the same performance. We then increase the number of links virtualized (one at a time) and plot service blocking for both classes. Using VNAS, we can observe that service blocking is significantly lower for the prioritized class than for the normal service class as more and more links are virtualized. More important, our tool allows us to quantify the magnitude of the benefit. Additional feature development on VNAS is planned in order to study a series of scenarios.

Figure 14.8. Network topology and service performance: (a) eight-node network topology, and (b) performance: service blocking.
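A much-reduced, single-link version of the experiment conveys the qualitative effect: both classes offer the same load to one link, and virtualization reserves part of the capacity for the prioritized (SC) class. The hold-forever arrival model and all parameters here are invented for illustration; VNAS itself models full topologies and dynamic provisioning.

```python
import random

def simulate_blocking(capacity, reserved_for_sc, arrivals, seed=1):
    """Toy single-link stress model: each admitted arrival holds one
    bandwidth unit for the rest of the run. With virtualization,
    `reserved_for_sc` units are usable only by SC arrivals."""
    rng = random.Random(seed)
    used_shared, used_sc = 0, 0
    blocked = {"sc": 0, "normal": 0}
    offered = {"sc": 0, "normal": 0}
    for _ in range(arrivals):
        cls = "sc" if rng.random() < 0.5 else "normal"  # equal offered load
        offered[cls] += 1
        if used_shared < capacity - reserved_for_sc:
            used_shared += 1                      # shared portion of the link
        elif cls == "sc" and used_sc < reserved_for_sc:
            used_sc += 1                          # SC-only reserved portion
        else:
            blocked[cls] += 1
    return {c: blocked[c] / offered[c] for c in blocked}

shared = simulate_blocking(capacity=20, reserved_for_sc=0, arrivals=200)
virt = simulate_blocking(capacity=20, reserved_for_sc=10, arrivals=200)
assert virt["sc"] <= virt["normal"]   # prioritized class blocks no more often
```

Even in this crude model, reserving link capacity lowers blocking for the prioritized class relative to the normal class, mirroring the trend in Figure 14.8(b).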

