We discussed the communication between neighboring peers in Chapter 3; specifically, Section 3.2 introduced the basic concepts of a Diameter session spanning multiple Diameter nodes. This chapter looks at the details of communication between two Diameter nodes that are not adjacent.
The routing table is a data structure internal to the Diameter node that contains information on how to handle Diameter request messages: either consuming the request message locally or processing the request further before routing it to the appropriate adjacent peer. There is typically one routing table per Diameter node; however, multiple routing tables are possible when routing based on policy configuration.
Each routing table entry points to one or more peer table entries. Typically there is a single peer table entry for each routing table entry. However, as discussed in Chapter 3, a single destination may have multiple peer control blocks (PCBs), for example in the case of multiple connection instances. How the Diameter application and the routing table lookup select the peer table entry in this case is implementation specific. Aspects such as peer load information or other priorities may affect the selection.
Different types of Diameter nodes use the routing table differently. Diameter clients use the routing table only to find an adjacent peer to which to forward an originated request message. Diameter servers consume received request messages locally and do not forward them further. Diameter agents carry out additional processing on received requests before routing them to the appropriate peer.
Although some of the routing table contents are implementation specific, the following elements must be found in each routing table entry:
Realm Name: The realm name is the primary key for the adjacent peer lookups. The realm may be matched against the lookup key realm using an exact match or the “longest match from the right”. Depending on the implementation, the lookup key can be more complex, carrying additional context‐specific information. There may be a default, wildcarded entry that matches everything. The use of wildcards and default entries is implementation specific.
Application Identifier: The Application‐Id is the secondary lookup key for the adjacent peer lookups. If the next hop is a relay agent, the value of the Application‐Id can be considered a wildcard matching all applications. The implementation of wildcards and default entries is implementation specific.
Server Identifier: The server identifier is the link/index to one or more peer table entries, where it is present in the peer table as the Host Identity field. When the Local Action (see below) is set to RELAY or PROXY, this field contains the DiameterIdentity of the server(s) to which the request message must be routed. When the Local Action field is set to REDIRECT, this field contains the identity of one or more servers to which the request message must be redirected.
Static or Dynamic: This indicates whether the entry was statically configured or created as a result of dynamic peer discovery.
Expiration Time: This specifies when the entry expires. For example, for DNS‐based dynamic peer discovery, the discovered peer information has an associated lifetime from the DNS response. If the transport security utilizes public key certificates, then the Expiration Time must not be greater than the lifetime of the associated certificates.
Local Action: This indicates that the node should take one of the following actions:
LOCAL
The node should consume and process the message locally; the request message has reached its final destination. That is, the realm in the routing table entry matches the destination realm and, more specifically, the destination host in the request message identifies the local node.
PROXY
The node should route the request message to the appropriate adjacent peer node as indicated by the Server Identifier. A node acting as a proxy may also apply local processing to the request message based on the local policy and configuration. This processing may involve modifying the request message by adding, removing or modifying its AVPs. However, the proxy must not reorder AVPs.
RELAY
The node should route or forward the request message to the appropriate peer node as indicated by the Server Identifier. A node acting as a relay must not modify or reorder the request message AVPs other than to update routing AVPs. See Section 4.3.1 for more information.
REDIRECT
The node should give the request message the redirect treatment by sending an “error” answer message to the message originator that contains one or more destination hosts to which the request can be resent, along with guidance on how to treat subsequent request messages to the same original destination.
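The entry structure and the (realm, Application‐Id) lookup described above can be sketched as follows. This is a minimal illustration: the field names, the "*" default entry, and the first‐match tie‐breaking are illustrative assumptions rather than anything mandated by RFC 6733 (only the relay Application‐Id wildcard value comes from the specification).

```python
from dataclasses import dataclass, field
from typing import List, Optional

RELAY_APPLICATION_ID = 0xFFFFFFFF  # the relay Application-Id acts as a wildcard


@dataclass
class RouteEntry:
    realm: str                       # primary lookup key
    app_id: int                      # secondary lookup key
    local_action: str                # LOCAL | RELAY | PROXY | REDIRECT
    server_ids: List[str] = field(default_factory=list)  # links to peer table entries
    dynamic: bool = False            # statically configured or dynamically discovered
    expires: Optional[float] = None  # expiration time, e.g., from a DNS TTL


def lookup(table: List[RouteEntry], realm: str, app_id: int) -> Optional[RouteEntry]:
    """Exact realm match first; fall back to a wildcard/default entry if present."""
    for entry in table:
        if entry.realm == realm and entry.app_id in (app_id, RELAY_APPLICATION_ID):
            return entry
    for entry in table:
        if entry.realm == "*":       # implementation-specific default route
            return entry
    return None
```

A relay entry carries the wildcard Application‐Id and therefore matches a lookup for any application, while a more specific entry matches only its own.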
Although the routing table has actions that could also apply to answer messages (such as PROXY), RFC 6733 does not give guidance on how to process and, specifically, “proxy process” answer messages. The transaction state maintained for the answer messages is separate from the routing table, and possible “proxy process” modifications to the answer messages are left for implementations to solve.
Routing decisions are made based on the Diameter Application‐Id and the destination realm (and possibly by a number of vendor‐ and implementation‐specific methods – there will be examples later). The lookup for the next‐hop peer goes through the routing table. The procedure by which a node sends the request message to a specific peer using the destination host information found in the peer table, rather than the destination realm, is called request forwarding. The small distinction between the routing and forwarding procedures implies that the Diameter node has to consult both the peer table and the routing table to determine the correct treatment for the request message.
When a Diameter node routes (or forwards) a request message, it has to maintain the transaction state and buffer the message until the corresponding answer message has arrived or the transaction times out entirely.
The AVPs used for routing (and forwarding) Diameter request messages are discussed in detail in Section 4.3.1 . Although Diameter implementations may use more intelligent approaches to select their next‐hop peers when routing or forwarding request messages, the following simple rules regarding the AVPs in the context of routing, proxying, and forwarding should be kept in mind:
- Request messages that are to be routed or forwarded contain the Destination‐Realm and/or the Destination‐Host AVPs. Both AVPs contain realm information, which can enable the routing of the request message. However, RFC 6733 is not clear on whether request messages may contain only the Destination‐Host AVP. Our interpretation is that such request messages cannot be proxied but are meant for direct adjacent peer communication for non‐base Diameter applications. The description of the DIAMETER_UNABLE_TO_DELIVER error status code also supports this interpretation (see Section 4.4).
- Request messages that are forwarded contain both the Destination‐Realm and the Destination‐Host AVPs.
- Request messages that are routed contain at least the Destination‐Realm AVP.
- Request messages meant only for an adjacent peer contain neither the Destination‐Realm nor the Destination‐Host AVPs.
- The User‐Name AVP is not directly used for the next‐hop peer selection, but it can be used to populate or manipulate the Destination‐Realm AVP. A good example can be found in mobile networks, where an International Mobile Subscriber Identity (IMSI) may both identify the subscriber and affect the selection of the next‐hop peer [1]. See Section 4.3.1 for more information.

It is also possible for intermediate agents to manipulate or add AVPs to the request messages so that a desired destination gets reached. This may be done for load balancing purposes. For example, a proxy agent that load balances in a multi‐level agent hierarchy could add the Destination‐Host AVP to request messages that only contain the Destination‐Realm AVP to ensure that the request messages reach a specific host in that realm.
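The rules above suggest a simple classification of an incoming request based on the destination AVPs alone. The sketch below follows our interpretation; the node identity and the return labels are illustrative assumptions, and a real implementation would consult both tables and local policy.

```python
MY_IDENTITY = "server.foo.example"   # this node's DiameterIdentity (assumed)


def classify(dest_host, dest_realm):
    """Rough treatment of a request based on the destination AVPs alone."""
    if dest_host is None and dest_realm is None:
        return "CONSUME"        # adjacent-peer message, e.g., CER or DWR
    if dest_host == MY_IDENTITY:
        return "CONSUME"        # the request has reached its final destination
    if dest_host is not None and dest_realm is not None:
        return "FORWARD"        # forwarding: peer table lookup on the host
    if dest_realm is not None:
        return "ROUTE"          # routing: routing table lookup on (realm, app-id)
    return "UNDELIVERABLE"      # Destination-Host only: cannot be proxied
```

The last branch reflects the DIAMETER_UNABLE_TO_DELIVER case discussed in Section 4.4: a Destination‐Host AVP without a Destination‐Realm AVP cannot be routed onwards.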
Diameter request message routing relies on both the destination realm and the Application‐Id. As the message approaches its final destination, the destination host, if present, is used to forward the message to the correct peer node.
Destination‐Realm AVP

The Destination‐Realm AVP is of type DiameterIdentity and contains the domain portion (i.e., the realm) of the intended request destination. When querying the routing table, the Destination‐Realm AVP is used as one of the lookup keys along with the Application‐Id. The Destination‐Realm AVP is included only in request messages, never in answer messages. The Destination‐Realm AVP has to be in the request message if proxying or relaying the message is desired. This implies that request messages that are only meant to be used between two adjacent peers (such as CER/CEA, etc.) must not have the Destination‐Realm AVP.
The Destination‐Realm AVP in a request message usually should not be modified as it travels to the final receiver. However, this rule has been relaxed by RFC 5729 [2] in order to accommodate realm‐granularity source routing of request messages using NAI decoration. See Section 4.3.1 for further details.
Destination‐Host AVP

The Destination‐Host AVP is of type DiameterIdentity and contains the fully qualified domain name (FQDN) of the Diameter node. Thus the Destination‐Host AVP contains both the specific Diameter node identity within a realm and the realm itself. If both the Destination‐Realm AVP and the Destination‐Host AVP are present in a request message, they must not contain conflicting information. Intuitively, routing and proxying the request message should be possible using the Destination‐Host AVP alone. However, this is not the case. The Destination‐Host AVP is used solely to name a specific Diameter node within the realm identified by the Destination‐Realm AVP.
Similar to the Destination‐Realm AVP, the Destination‐Host AVP is meant to be present only in request messages that can be proxied.
Auth‐Application‐Id and Acct‐Application‐Id AVPs

Both the Auth‐Application‐Id and the Acct‐Application‐Id AVPs are of type Unsigned32 and contain the numerical identifier of the Diameter application. If either the Auth‐Application‐Id or the Acct‐Application‐Id AVP is present in a Diameter message other than CER and CEA, its value must match the Application‐Id in the Diameter message header, which makes these AVPs redundant. They are listed in RFC 6733 for backward compatibility purposes but serve no real purpose, since the Diameter message header already contains the Application‐Id information. One could argue that having the Application‐Id at the message level provides cleaner layering between the application and peer connection logic.

Application‐Ids are used as the other lookup key, along with the Destination‐Realm AVP, into the routing table.
User‐Name AVP

Unlike RADIUS [3], Diameter does not rely on the User‐Name AVP for request routing purposes. However, a Diameter node may use the User‐Name value to determine the destination realm. The User‐Name AVP is of type UTF8String and contains the subscriber username in the format of a Network Access Identifier (NAI) [4,5], which is constructed like an email address. The NAI allows decoration; that is, one can embed a source route, in the form of realms, into the NAI user‐name portion. See Figure 4.1 for examples of NAI decoration. RFC 5729 [2] updates the Diameter base protocol to explicitly support source routing based on NAI decoration. However, deployment experience has shown that NAI decoration is not a scalable and maintainable solution in larger multi‐vendor and multi‐operator deployments. Populating and maintaining client devices with exact AAA routing information is burdensome, and repairing breakage due to stale source routes is slow, since client devices, rather than the routing infrastructure, need to be updated with new routes. Realm‐level redirections [6] or dynamic DNS‐based discovery [7,8] may be used to circumvent stale realms in the routing network, but there is not much deployment experience with these techniques in the case of source routing of messages.
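The per‐hop rewriting behind RFC 5729 style source routing can be sketched as below. This is a simplified illustration of how an agent serving the current destination realm might peel off one decoration layer; the function name and message representation are our own.

```python
def peel_decoration(user_name):
    """Return (rewritten User-Name, next Destination-Realm) for one decoration step.

    For 'realmA!realmB!user@current', an agent serving 'current' rewrites the
    NAI to 'realmB!user@realmA' and routes the request towards realmA.
    """
    localpart, sep, realm = user_name.rpartition("@")
    if not sep:
        return user_name, None         # not a realm-qualified NAI
    if "!" not in localpart:
        return user_name, realm        # undecorated: route on the NAI realm
    next_realm, _, rest = localpart.partition("!")
    return f"{rest}@{next_realm}", next_realm
```

Each hop thus consumes one embedded realm, which is exactly why stale source routes require updating the client devices that created the decorated NAIs.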
Diameter end‐to‐end communication relies on a number of routing AVPs. Unlike what the category name suggests, these AVPs are not used for request routing or forwarding, but to record the route taken and the state information of the traversed agents.
Note that even relay agents process the routing AVPs. Both relay and proxy agents append the Route‐Record AVP, which contains the agent's DiameterIdentity, to the request. Another point to stress here is that the Route‐Record AVPs are intended to be included only in request messages, whereas received Proxy‐Info AVPs are echoed in the answer message in the same order as in the corresponding request message.
Routing AVPs serve three main purposes:
- Diameter nodes inspect the Route‐Record AVPs in the request message in order to detect routing loops.
- The Route‐Record AVPs record the route the request message has traversed. Note that there is no integrity protection for the Route‐Record AVPs – it is trivial to spoof or modify the routing AVPs if some intermediary wishes to do so. This “feature” is commonly used, for example, to realize topology hiding.
- The Proxy‐Info AVPs serve the purpose of recording and remembering the transaction state of the traversed stateless agents.

Route‐Record AVP

The Route‐Record AVP is of type DiameterIdentity and contains the identity of the Diameter node (i.e., the agent) that inserted the AVP. The Route‐Record AVP is used only in request messages, and its content must be the same as the Diameter node's Origin‐Host AVP that is used in the CER message during the capability exchange.
Proxy‐Info AVP

The Proxy‐Info AVP is of type Grouped and contains two sub‐AVPs: the Proxy‐Host and the Proxy‐State AVPs. The Proxy‐Info AVP is shown in Figure 4.2. Despite what the AVP name suggests, the AVP is meant not only for proxy agents but also for relay agents.
The Proxy‐Host AVP is of type DiameterIdentity and contains the identity of the Diameter node that inserted the Proxy‐Info AVP. The Proxy‐State AVP is of type OctetString and contains an opaque octet blob that is only “meaningful” to the node that inserted it. RFC 6733 also recommends using cryptography to protect its content.
The Diameter node that inserted the Proxy‐Info AVP into the request message is also responsible for removing it from the answer message before forwarding the answer.
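The insert‐and‐remove discipline for Proxy‐Info can be sketched as follows, with messages represented as plain dictionaries. The helper names are our own; the opaque state is whatever blob the stateless agent needs to restore its transaction context.

```python
def add_proxy_info(request, my_identity, state_blob):
    """Append this node's Proxy-Info grouped AVP before forwarding the request."""
    request.setdefault("Proxy-Info", []).append(
        {"Proxy-Host": my_identity, "Proxy-State": state_blob})


def pop_own_proxy_info(answer, my_identity):
    """Remove this node's Proxy-Info from the answer; return the opaque state."""
    infos = answer.get("Proxy-Info", [])
    if infos and infos[-1]["Proxy-Host"] == my_identity:
        return infos.pop()["Proxy-State"]
    return None
```

Because Proxy‐Info AVPs are echoed back in order, the last entry in the answer is the one this node inserted, assuming every upstream agent removed its own.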
The End‐to‐End Identifier is used for detecting duplicated messages. The End‐to‐End Identifier is a 32‐bit integer placed in the Diameter message header by the request message originator and is locally unique to the Diameter node that created it. RFC 6733 recommends creating the End‐to‐End Identifier from two parts: placing the low‐order bits of the current time into the upper 12 bits of the 32‐bit value and using a random value for the lower 20 bits. RFC 6733 does not define “current time”. If “current time” is based on the Network Time Protocol (NTP), then those low‐order 12 bits of the timestamp fall within the “fraction of second” field in any of the NTP on‐wire formats [9]. The End‐to‐End Identifier must remain unique to its creator for at least 4 minutes, even across reboots.
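The recommended construction fits in a few lines. The sketch below uses the wall clock in whole seconds as the “current time”, which, as noted, RFC 6733 leaves unspecified.

```python
import random
import time


def new_end_to_end_id() -> int:
    """High 12 bits: low-order bits of the current time; low 20 bits: random."""
    return ((int(time.time()) & 0xFFF) << 20) | random.getrandbits(20)
```

With a one-second clock granularity, the 12 time bits wrap only after 4096 seconds, comfortably covering the required 4-minute uniqueness window, and the 20 random bits make collisions within one second unlikely.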
Intermediate Diameter agents (relays, redirects, proxies) are not allowed to modify the value. In the answer direction, the value of the End‐to‐End Identifier is copied to the answer message.
In addition to using a combination of the End‐to‐End Identifier and the Origin‐Host AVP to detect duplicated messages, the message receiver can look for the T flag in the request command flags field. The message originator may set the T flag if it is retrying messages after a transport failure or after a reboot. The 4‐minute minimum for End‐to‐End Identifier uniqueness hints to the Diameter server that it should be prepared to “remember” received requests for that period of time. The same also applies to the request originator regarding answer messages.
If a server receives a duplicate Diameter request message, it should reply with the same answer. This requirement does not concern transport‐level identifiers and parameters such as the Hop‐by‐Hop Identifier and routing AVPs (see Section 4.3.1 ). Furthermore, the reception of the duplicate message should not cause any state transition changes in the peer state machine.
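A server-side duplicate cache keyed on (Origin‐Host, End‐to‐End Identifier), as the text suggests, might look like the sketch below. The 4‐minute retention and the lazy expiry check are simplifications of what a production server would do.

```python
import time

RETENTION_SECS = 240  # the 4-minute uniqueness window from RFC 6733


class DuplicateCache:
    def __init__(self):
        self._answers = {}   # (origin_host, e2e_id) -> (expiry, answer)

    def check(self, origin_host, e2e_id):
        """Return the previously sent answer for a duplicate request, else None."""
        entry = self._answers.get((origin_host, e2e_id))
        if entry and entry[0] > time.time():
            return entry[1]
        return None

    def remember(self, origin_host, e2e_id, answer):
        self._answers[(origin_host, e2e_id)] = (time.time() + RETENTION_SECS, answer)
```

On a hit, the server replays the remembered answer without touching session state, which matches the requirement that duplicates cause no peer state machine transitions.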
The Diameter base protocol has a number of result codes that can be returned as a result of errors during request message routing/forwarding. All routing and forwarding errors are categorized as protocol errors and fall into the 3xxx class of status codes. As a reminder, protocol errors use the specific “answer‐message” Command Code Format (CCF) instead of the normal answer message for the request. The “E” error bit is also set in the “answer‐message” command header. Protocol errors are handled on a hop‐by‐hop basis, which means that intermediate Diameter nodes may react to the received answer message. The intermediate node reacting to the error may try to resolve the issue that caused the error before forwarding the “answer‐message” back to the downstream Diameter node.
DIAMETER_REALM_NOT_SERVED
(Status code 3003) Used when the realm of the request message is not recognized. This could be due to a lack of the desired realm in the routing table and the lack of a default route, or due to the requested realm being malformed.
DIAMETER_UNABLE_TO_DELIVER
(Status code 3002) Used in two situations:

- No host within the requested realm that supports the required application is available to process the request.
- The request message lacks the Destination‐Realm AVP but has the Destination‐Host AVP, and the request should be routed. This case is covered in Section 4.3.

DIAMETER_LOOP_DETECTED
(Status code 3005) Used in situations where a Diameter node (typically an intermediate agent) notices that it has received a request message it had already forwarded. This implies that the Diameter network has a routing loop somewhere or that the DNS infrastructure has misconfigured zone files.
The Diameter node finds a loop by inspecting the routing AVPs in the received request message (discussed further in Section 4.3.2). In the case of routing loops, it makes little sense to attempt to re‐route the request message; rather, the Diameter node that detected the loop should raise an event to the network administration for further inspection and simply return the answer message to the downstream node.
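The loop check reduces to a membership test of the node's own DiameterIdentity against the accumulated Route‐Record AVPs. A minimal sketch, with the message as a plain dictionary and the error answer heavily abbreviated:

```python
DIAMETER_LOOP_DETECTED = 3005


def check_and_record_route(msg, my_identity):
    """Return a protocol-error answer for a looping request; otherwise append
    our Route-Record and return None so the caller proceeds with routing."""
    if my_identity in msg.get("Route-Record", []):
        return {"Result-Code": DIAMETER_LOOP_DETECTED}  # sent with the E bit set
    msg.setdefault("Route-Record", []).append(my_identity)
    return None
```

Appending the Route‐Record only after the check ensures the node never trips over the entry it is about to insert itself.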
DIAMETER_REDIRECT_INDICATION
(Status code 3006) This status code is not the result of a routing error. It is used only in conjunction with redirect agents to redirect the request message, and possibly subsequent request messages, to a different Diameter node. The response is meant for the adjacent node, which should not forward it on. Recent updates to redirecting behavior (RFC 7075 [6]) added the ability to redirect a whole realm; a new status code, DIAMETER_REALM_REDIRECT_INDICATION (status code 3011), was added for this purpose. However, this functionality works only for newly defined applications. Note that although this is not an error, the “E” bit is still set in the answer message.
DIAMETER_APPLICATION_UNSUPPORTED
(Status code 3007) Used in a situation where a Diameter request message reaches a Diameter agent that has no entry for the desired application in its routing table and thus the node cannot route/forward the request message.
Answer messages always follow the reverse path determined by the Hop‐by‐Hop Identifier. When a Diameter node receives an answer message, it matches the Hop‐by‐Hop Identifier in that message against its list of pending requests. If an answer does not match a known Hop‐by‐Hop Identifier, the node should ignore it. If the node finds a match, it removes the corresponding message from the list of pending requests and does the needed processing for the message (e.g., local processing, proxying).
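The pending-request bookkeeping described here is essentially a map keyed on the Hop‐by‐Hop Identifier; a sketch, with the class and method names being our own:

```python
class PendingRequests:
    """Transaction state for requests this node has routed or forwarded."""

    def __init__(self):
        self._pending = {}   # Hop-by-Hop Identifier -> buffered request context

    def add(self, hop_by_hop_id, context):
        self._pending[hop_by_hop_id] = context

    def match_answer(self, hop_by_hop_id):
        """Pop and return the matching context, or None (answer is ignored)."""
        return self._pending.pop(hop_by_hop_id, None)
```

The buffered context would typically hold the original Hop‐by‐Hop Identifier to restore, the downstream peer, and any timer handle for the transaction timeout mentioned earlier.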
If the answer message arrives at an agent node, the node restores the original value of the Diameter header's Hop‐by‐Hop Identifier field and proxies or relays the answer message. If the last Proxy‐Info AVP in the answer message is targeted to the local Diameter server, the node removes the Proxy‐Info AVP before it forwards the answer message.
If the answer message contains a Result‐Code AVP that indicates failure, the agent or proxy must not modify the Result‐Code AVP, even if the node detected additional, local errors. If the Result‐Code AVP indicates success, but the node wants to indicate an error, it can provide the appropriate error in the Result‐Code AVP in the message destined towards the request message originator, but must also include the Error‐Reporting‐Host AVP. The node must also send an STR on behalf of the request message originator towards the Diameter server.
RFC 6733 has an interesting statement in the request routing overview:
For routing of Diameter messages to work within an administrative
domain, all Diameter nodes within the realm MUST be peers.
The above implies a full mesh between all Diameter nodes within one realm. One can argue that a realm could be split into multiple administrative domains; however, since the realm concept also piggybacks on the administration of the DNS, it is hard to claim that a “flat realm” could be more than one administrative domain.
For big operators, this full‐mesh requirement is challenging to meet. Consider a multi‐million subscriber operator with continent‐wide geographical coverage, whose network has to be partitioned for operational and reliability reasons. Furthermore, the concept of “all peers” would mean that only the peer table is consulted. However, the peer table has no application knowledge; therefore, even for pure peer connections, the routing table has to be consulted to determine the right peer connection for the desired application.
The Destination‐Host AVP also contains the realm, since the value of the Destination‐Host AVP is an FQDN. Therefore, it is possible to determine the destination realm even if the request message lacks the Destination‐Realm AVP.
For standards compliance, and to avoid overly large “flat realms”, dividing a realm into multiple sub‐realms is a valid solution. There, for example, the realm example.com would mean and point to the “edge agents” of the realm. Anything inside the realm, including parts of the host names (as seen in the Origin‐Host AVP), would then contain more detailed realms such as east.example.com and north.example.com. The realm‐internal agents and routing/forwarding would then be based on these more detailed sub‐realms, making them appear as multiple realms instead of a single flat realm. This approach is more or less analogous to DNS zone hierarchies.
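With sub-realms, the “longest match from the right” lookup mentioned earlier picks the most specific entry. A sketch, comparing realms label by label from the right (the function name and table shape are illustrative):

```python
def longest_suffix_match(configured_realms, dest_realm):
    """Pick the most specific configured realm matching dest_realm from the right."""
    dest_labels = dest_realm.split(".")
    best, best_len = None, 0
    for realm in configured_realms:
        labels = realm.split(".")
        if dest_labels[-len(labels):] == labels and len(labels) > best_len:
            best, best_len = realm, len(labels)
    return best
```

A request towards a host in east.example.com thus matches the east.example.com entry rather than the coarser example.com entry, which realizes the sub-realm routing described above.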
This section discusses the topic of large, multi‐realm, inter‐connecting Diameter networks. The discussion is not exhaustive, since the examples are limited to a few publicly known large deployments.
One growing Diameter inter‐connection network, expected to become huge, is the IPX network [10] serving cellular operators, and not just 3rd Generation Partnership Project (3GPP)‐based cellular operators. 3GPP made a far‐reaching decision to replace all Signaling System 7 (SS7) based signaling interfaces with Diameter for the Evolved Packet System (EPS) in 3GPP Release 8. Eventually every Long‐Term Evolution (LTE) enabled cellular operator has to use Diameter, not only to connect with their roaming partners, but also in their internal networks.
GSM Association (GSMA) has been defining how inter‐operator roaming works in practice. For EPS and LTE, GSMA produced the LTE Roaming Guidelines [11], which also detail the envisioned and recommended Diameter inter‐operator network architecture. Figure 4.3 illustrates the “reference” inter‐operator Diameter‐based inter‐connection network architecture.
The basic architectural approach is straightforward. Operators have relay agents, i.e., Diameter edge agents (DEAs), at their network edges. These relay agents are connected to internal agents, e.g., 3GPP‐specific Diameter routing agent (DRA) proxies or “vanilla” Diameter proxies with application‐specific treatment of Diameter messages. Finally, the proxy agents are connected to the Diameter clients and servers. The connectivity within the operator's realm does not need to be a full mesh; however, for failover purposes, the peer connections within an operator realm come close to forming one.
The connectivity between operators (and different realms) is realized using one or a maximum of two intermediate IPX roaming network providers. These IPX providers offer the transit of Diameter signaling traffic. The IPX provider may also deploy a number of intermediate agents, IPX proxies, for instance for value‐added services. The IPX proxies can be relay agents only providing request routing services, or they can also be application‐aware proxies doing application‐level handling and/or manipulation of the transit traffic. Obviously, just deploying relay agents makes it easier to roll out new Diameter applications, since there is no need to upgrade intermediate agents in the IPX network for the new application support.
One of the obvious value‐added services that IPX providers could offer is taking on the burden of managing roaming partner connectivity relations and routing information on behalf of the customer operator. In this case, the customer operator only needs to do the following with its preferred IPX provider:
The above model greatly simplifies the operator's own Diameter routing bookkeeping and policy‐based Diameter message processing at the network edges. The GSMA LTE Roaming Guidelines [11] also recommend using dynamic node discovery [7,2] at the network edges or within the IPX. Dynamic discovery eases the management of next‐hop peer discovery. Section 4.7.2 discusses the pros and cons of dynamic node discovery in detail.
The Diameter routing infrastructure may form a complex topology due to the large number of roaming partners involved. The number of partners often exceeds hundreds of companies. The number of peers is therefore even larger, at least double the size, due to the recommendation of maintaining redundant peer connections for improved reliability. As a result, the routing tables for Diameter nodes in such a topology contain a large number of entries.
It is also common for each “foreign realm” to have a dedicated policy for the handling and processing of request messages. In large Diameter networks even the internal realm topology can be complex. This results in a vast number of entries in the peer table and, depending on the internal realm structure, also in multiple sub‐realm entries in the routing table. As an example, a realm example.com may have a sub‐realm sub.example.com. All of these contribute to the complexity and administrative overhead of Diameter node operations and management. For scalability and manageability reasons, operators avoid storing comprehensive connectivity information for all internal and external nodes in each Diameter node. Default route entries and dynamic Diameter node discovery are useful tools to ease the deployment of large Diameter networks.
Similarly to the management of the DNS infrastructure in many enterprise networks, the internal and external views of the network are kept separate. There is no need for realm‐external nodes to learn the realm‐internal topology or even the DiameterIdentities of internal Diameter nodes. It is also typically in the network administrators' interest to operate specific ingress and egress points for the network for security purposes. It is likewise not useful for a realm‐internal node to dynamically discover realm‐external nodes, since direct peer connections outside the realm are not allowed. Obviously, the realm‐internal DNS view could be configured so that all realm‐internal DNS‐based dynamic discovery attempts always resolve to specific agents within the realm.
Therefore, in a typical large Diameter network deployment, the realm edge agents, which can be proxies or relays, are likely the ones initiating the discovery and also being populated into the public DNS to be discovered by realm‐external nodes.
Example deployment architectures are illustrated below. In each architecture it is assumed that the Diameter client inside the originating realm has only a static route to the realm's edge agent. All traffic that exits the client's home realm is directed to the edge agents. Similarly, all traffic coming into the realm always goes through the edge agents. Direct connectivity between realm internal Diameter nodes is rarely if ever allowed in production networks. The reasons are the same as with the IP networking in general: better network control, manageability, and security.
In Figure 4.4, the realm foo.example.com edge agent discovers the realm inter‐connection network's edge agent when it tries to discover the “server B” Diameter node in the realm bar.example.com. Here, the realm bar.example.com DNS administration delegates the publishing of the edge agent DNS information to the inter‐connection network provider.
In Figure 4.5, the realm foo.example.com edge agent discovers the realm bar.example.com edge agent when it tries to discover the “server B” Diameter node in the realm bar.example.com. Here, the realm bar.example.com DNS administration publishes the edge agent DNS information in its own public DNS.
In Figure 4.6, the realm foo.example.com edge agent only has a static “default” route to the inter‐connection network's edge agent. The inter‐connection network agent dynamically discovers the realm bar.example.com edge agent on behalf of the realm foo.example.com edge agent. Here, the realm foo.example.com has an agreement that the inter‐connection network handles its realm‐routing management, and that the bar.example.com DNS administration delegates the publishing of the edge agent DNS information in its own public DNS.
Diameter Overload Control [12] is a recent, larger solution concept developed in the IETF and also adopted by 3GPP in their Diameter‐based interfaces. For example, the 3GPP S6a interface [1] adopted Diameter overload control in Release 12. The basic architecture and the default functionality are described in the Diameter Overload Information Conveyance (DOIC) [13] specification. Figure 4.7 illustrates the high‐level architecture of Diameter overload control. The main idea behind DOIC is to allow a message‐receiving Diameter host or realm to inform message originator(s) that it is under a load condition. The message originators then apply specific algorithms to back off and, hopefully, resolve the load condition.
A key design idea behind Diameter overload control is that it defines no new commands: the overload information is piggybacked on existing application messages (this is possible for any Diameter application that allows extension AVPs, i.e., *[AVP], in its command's CCF).

A good example of DOIC extensibility is the load control amendment [14], which was widely accepted in the 3GPP Release 14 specifications. Load control adds a mechanism to convey load information (and make use of it) in addition to the mandatory‐to‐implement default loss algorithm specified in the DOIC specifications.
The DOIC specifies two roles for Diameter nodes: a reporting node (a message receiver) and a reacting node (a message originator). The reporting node sends periodic updates of its overload condition (or the lack of one). The reacting node receives these reports and is supposed to react to the reported overload condition by applying the mutually agreed overload abatement algorithm. There are no predefined client‐server roles in DOIC, just as there are no such roles in Diameter; the “roles” are implicitly determined by the direction of the communication. The reporting and reacting nodes determine the identity (i.e., the DiameterIdentity) of their “DOIC partner” from the Origin‐Host AVP, or similarly the entire realm from the Origin‐Realm AVP.
A reacting node, which is the originator of the messages that may contribute to an overload condition on the receiving end, indicates its support for DOIC by including the OC‐Supported‐Features grouped AVP in every request message it originates (see Figure 4.8). The AVP includes the OC‐Feature‐Vector AVP, which in turn indicates one or more capabilities, e.g., the set of overload abatement algorithms the reacting node supports. The mandatory‐to‐support capability is the loss algorithm OLR_DEFAULT_ALGO. The reacting node also determines whether its communication counterpart supports DOIC from the OC‐Supported‐Features AVP it may receive in return.
If a reporting node determines that the reacting node supports DOIC, it in turn indicates its own support by including the OC‐Supported‐Features AVP in response messages towards the reacting node. Its OC‐Feature‐Vector AVP contains the set of mutually supported features. However, in the case of overload abatement algorithms, only a single mutually supported algorithm is returned out of possibly several candidates. The response messages towards the reacting node may also include the OC‐OLR grouped AVP (see Figure 4.9). The OC‐OLR AVP contains the actual overload report (OLR) information.
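The narrowing of the advertised capabilities down to one algorithm can be sketched as follows, again treating the feature vectors as plain bitmasks. Picking the lowest common bit is only one possible local policy, not something DOIC mandates.

```python
def select_abatement_algorithm(reacting_vector: int,
                               reporting_vector: int) -> int:
    """Return exactly one mutually supported abatement algorithm bit.

    The reporting node intersects the received OC-Feature-Vector with
    its own capabilities and echoes back a single algorithm."""
    common = reacting_vector & reporting_vector
    if common == 0:
        # Should not happen with compliant nodes: the loss algorithm
        # (OLR_DEFAULT_ALGO) is mandatory to support on both sides.
        raise ValueError("no mutually supported abatement algorithm")
    return common & -common  # isolate the lowest set bit

# The reacting node supports the loss algorithm (bit 0) plus a
# hypothetical extension algorithm (bit 1); the reporting node
# supports only the loss algorithm:
assert select_abatement_algorithm(0b11, 0b01) == 0b01
```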
The DOIC specification does not detail Diameter agent behavior or possible agent functions; only the basic rules are laid out. Diameter agents that support DOIC should relay all messages that contain the OC‐Supported‐Features AVP. An interesting function for a Diameter agent is to take the role of a reacting or reporting node on behalf of Diameter endpoints that do not support DOIC. Alternatively, a Diameter agent may add features to, or remove features from, those advertised by DOIC‐supporting nodes in their OC‐Supported‐Features AVP. In that case the agent also has to ensure that its behavior remains consistent with both upstream and downstream DOIC partners. The Diameter agent overload and peer overload report amendment to DOIC [15] is a good example of a Diameter agent actively participating in overload condition handling.
The OC‐OLR AVP contains a set of fixed, mandatory sub‐AVPs that are the same for all current and future abatement algorithms; the optional sub‐AVPs change depending on the supported and used abatement algorithm. Figure 4.9 illustrates the OC‐OLR grouped AVP with the sub‐AVPs that are present with the default (mandatory‐to‐implement) loss abatement algorithm. The fixed AVPs are the OC‐Sequence‐Number and the OC‐Report‐Type AVPs. The OC‐Sequence‐Number AVP carries a monotonically increasing sequence number that DOIC partners (namely the reacting node) use to detect whether the contents of the OLR actually update the maintained overload control state (OCS) (covered in Section 4.8.2). The OC‐Report‐Type AVP informs the reacting node whether the contents of the OLR concern a specific node (the value of OC‐Report‐Type is HOST_REPORT) or an entire realm (the value of OC‐Report‐Type is REALM_REPORT).
The loss abatement algorithm is parameterized by the OC‐Reduction‐Percentage and the OC‐Validity‐Duration AVPs. The former indicates the percentage of traffic that the reacting node is requested to reduce, compared to what it would otherwise send. Valid values are between 0 and 100. The value 100 means that the reporting node will not process any received messages, and the value 0 means the overload condition is over (somewhat similar in meaning to setting the OC‐Validity‐Duration to zero, although the explicit ending of the overload condition should be signaled using the OC‐Validity‐Duration AVP). The OC‐Validity‐Duration AVP indicates how long the most recently received OLR information is valid. The default value is 30 seconds and the maximum is 86,400 seconds. The value of zero (0) indicates that the overload condition concerning this OCS state is over.
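Under the loss algorithm, the reacting node simply refrains from sending the requested percentage of messages. A minimal sketch of that decision, with function names of our own choosing (a real implementation would hook this into request routing and respect the validity duration):

```python
import random

def should_throttle(reduction_percentage: int, rng: random.Random) -> bool:
    """Decide whether to drop one candidate request under the loss
    algorithm: drop roughly reduction_percentage percent of traffic."""
    if reduction_percentage <= 0:
        return False  # value 0: the overload condition is over
    if reduction_percentage >= 100:
        return True   # value 100: the reporting node processes nothing
    return rng.random() * 100 < reduction_percentage

rng = random.Random(4)  # seeded for reproducibility
sent = sum(not should_throttle(30, rng) for _ in range(10_000))
# with a 30% reduction, roughly 7000 of 10,000 requests are still sent
```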
Both reacting and reporting nodes maintain overload control state (OCS) for their active overload conditions. At the reacting node an active overload condition is determined from a received OC‐OLR whose OC‐Validity‐Duration sub‐AVP has a non‐zero value. At the reporting node the OCS state is created and maintained for DOIC partners while the overload condition is active.
The OCS states are indexed somewhat differently on the reacting and reporting nodes. A reacting node maintains an OCS entry for each Diameter Application‐Id + DiameterIdentity tuple (Table 4.1). The DiameterIdentity is either a host from the Origin‐Host AVP of the OLR (when the OC‐Report‐Type AVP value is HOST_REPORT) or a realm from the Origin‐Realm AVP of the OLR (when the OC‐Report‐Type AVP value is REALM_REPORT).
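For illustration, the reacting node's OCS lookup key can be sketched as below. The enumeration values HOST_REPORT = 0 and REALM_REPORT = 1 follow the DOIC specification; the function name and the example Application‐Id are ours.

```python
HOST_REPORT, REALM_REPORT = 0, 1  # OC-Report-Type enumeration values

def reacting_ocs_key(app_id: int, report_type: int,
                     origin_host: str, origin_realm: str) -> tuple:
    """Build the Application-Id + DiameterIdentity tuple of Table 4.1.

    A host report is indexed by the Origin-Host of the OLR, a realm
    report by its Origin-Realm."""
    identity = origin_host if report_type == HOST_REPORT else origin_realm
    return (app_id, identity)

# A host report and a realm report end up in different OCS entries:
assert reacting_ocs_key(16777251, HOST_REPORT,
                        "hss1.example.com", "example.com") \
       == (16777251, "hss1.example.com")
assert reacting_ocs_key(16777251, REALM_REPORT,
                        "hss1.example.com", "example.com") \
       == (16777251, "example.com")
```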
Table 4.1 Overload control state for reacting nodes.
State information | Description |
Sequence number | Detects whether the received OC‐OLR updates the OCS state: if the received OC‐Sequence‐Number is greater than the stored sequence number, the OCS state is updated; otherwise the received OC‐OLR is silently ignored. |
Time of expiry | The validity time derived from the OC‐Validity‐Duration received in the OC‐OLR. When it expires, the OCS entry for the overload condition is removed. Receiving an OC‐Validity‐Duration of zero (0) signals immediate expiration of the overload condition. |
Selected algorithm | The overload abatement algorithm selected by the reporting node and received in the OC‐Supported‐Features AVP for the ongoing overload condition. |
Per algorithm input data | Data specific to the implementation and the abatement algorithm. For example, in the case of OLR_DEFAULT_ALGO this includes the OC‐Reduction‐Percentage from the OC‐OLR. |
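The sequence‐number rule from Table 4.1 can be sketched as follows (the dictionary layout is illustrative, not from the specification):

```python
def maybe_update_ocs(entry: dict, olr: dict) -> bool:
    """Update the reacting node's OCS entry only when the received
    OC-Sequence-Number is greater than the stored one; otherwise the
    received OC-OLR is silently ignored (Table 4.1)."""
    if olr["seq"] > entry["seq"]:
        entry.update(olr)
        return True
    return False

entry = {"seq": 5, "reduction": 20}
assert maybe_update_ocs(entry, {"seq": 6, "reduction": 40})      # newer
assert entry["reduction"] == 40
assert not maybe_update_ocs(entry, {"seq": 6, "reduction": 10})  # stale
assert entry["reduction"] == 40                                  # unchanged
```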
A reporting node maintains an OCS entry for each Diameter Application‐Id + DOIC partner DiameterIdentity + supported abatement algorithm + report type tuple (Table 4.2). The DiameterIdentity is always a host (from the Origin‐Host AVP) of the request message that contained the OC‐Supported‐Features AVP.
Table 4.2 Overload control state for reporting nodes.
State information | Description |
Sequence number | The last used sequence number, i.e., the one sent with the latest OC‐OLR to the DOIC partner. The sequence number is increased only when the information sent in the OC‐OLR changes. For a new OCS state the sequence number is set to zero (0). |
Validity duration | The validity time for the sent OLRs. When the overload condition ends the validity time is set to zero (0). |
Expiration time | All sent OLRs have an expiration time in the reporting node's OCS state. The expiration time is equal to the current time when the report was sent plus the validity duration. |
Per algorithm input data | Implementation and abatement algorithm specific data. For example, in the case of the OLR_DEFAULT_ALGO this would include the OC‐Reduction‐Percentage from the OC‐OLR. |
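On the reporting side, the Table 4.2 rule that the sequence number starts at zero and is bumped only when the conveyed information changes might be sketched like this (the class and field names are ours):

```python
class ReportingOcs:
    """Per-partner OCS state on the reporting node (sketch)."""

    def __init__(self) -> None:
        self.seq = 0               # new OCS state starts at zero
        self.last_payload = None   # what the previous OC-OLR conveyed

    def next_olr(self, reduction_pct: int, validity_secs: int) -> dict:
        payload = (reduction_pct, validity_secs)
        # Increase the sequence number only when the information
        # sent in the OC-OLR actually changes (Table 4.2).
        if self.last_payload is not None and payload != self.last_payload:
            self.seq += 1
        self.last_payload = payload
        return {"seq": self.seq,
                "reduction": reduction_pct,
                "validity": validity_secs}

ocs = ReportingOcs()
assert ocs.next_olr(30, 30)["seq"] == 0  # first report: sequence 0
assert ocs.next_olr(30, 30)["seq"] == 0  # unchanged report: same sequence
assert ocs.next_olr(50, 30)["seq"] == 1  # changed report: bumped
```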
The complete set of information stored in an OCS state entry is an implementation decision. The original goal of the DOIC design was to maintain state only for overload conditions and the respective "DOIC partners". However, it quickly turned out that this was not a realistic goal, since the OCS ended up populated with sequence numbers and timers as well: the sequence numbers are used to check whether newly received OC‐OLR AVPs actually update the state, and the timers are used to expire stale reports. Still, the maintained state is rather trivial.
Diameter did not have a mechanism to prioritize arbitrary messages over each other in a standardized and generic manner until Diameter Routing Message Priority (DRMP) [16] was specified. Among the several use cases enumerated in RFC 7944, the prioritization of messages when DOIC‐originated overload abatement takes place is a prominent one. DRMP allows Diameter nodes, for example, to make an educated throttling decision between different Diameter messages or to prioritize resource allocation.
DRMP uses a single DRMP AVP in a Diameter message to indicate the relative priority of the message compared to other messages seen by a Diameter node. There are 16 priorities. It is important that priorities are assigned and managed in a coordinated manner within an administrative domain, and even between domains/realms. Otherwise the priorities can be mishandled or misinterpreted, or there could be too many messages with the same priority to realize any benefit from prioritization.
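To see how DRMP and DOIC can interact, consider a reacting node that must shed a given percentage of its pending requests: it would naturally throttle the lowest‐priority messages first. In the sketch below requests are (name, priority) pairs; following RFC 7944, a numerically lower DRMP value means a higher priority. The function itself is illustrative, not taken from the specification.

```python
def pick_throttled(requests: list, reduction_percentage: int) -> list:
    """Choose which pending requests to throttle for a DOIC-requested
    reduction, shedding the lowest-priority messages first (in DRMP
    the numerically largest of the 16 values is the lowest priority)."""
    n = round(len(requests) * reduction_percentage / 100)
    # Sort so the numerically largest (lowest-priority) come first.
    return sorted(requests, key=lambda r: r[1], reverse=True)[:n]

pending = [("update-loc", 10), ("purge", 15), ("auth", 0), ("notify", 12)]
# a 50% reduction sheds the two lowest-priority requests:
assert pick_throttled(pending, 50) == [("purge", 15), ("notify", 12)]
```

A production implementation would of course combine this with the probabilistic loss decision per message rather than batch sorting, but the ordering principle is the same.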