Routing east-west traffic between instances

In the network world, east-west traffic is traditionally defined as server-to-server traffic. In Neutron, as it relates to distributed virtual routers, east-west traffic is the traffic between instances in different networks owned by the same tenant. In the legacy model, all traffic between different networks traverses a virtual router located on a centralized network node. With DVR, the same traffic bypasses the network node and goes directly between the compute nodes hosting the virtual machine instances.

Reviewing the topology

Logically speaking, a distributed virtual router is a single router object connecting two or more tenant networks, as shown in the following diagram:

Figure 9.3

In the following example, a distributed virtual router named MyRouter-DVR is created and connected to two tenant networks: TENANT_BLUE and TENANT_RED. Virtual machine instances in each network use their respective default gateways to route traffic to the other network through the same router. The virtual machine instances are unaware of where the router is located.

A look under the hood, however, tells a different story. In the following example, the blue VM pings the red VM:

Figure 9.4

As far as the user is concerned, the router connecting the two networks is a single entity known as MyRouter-DVR:

Figure 9.5

In reality, each compute node hosts a copy of the router:

Figure 9.6

Tip

Until a virtual machine instance is scheduled to a particular compute node, the router will not be scheduled to that node. If a compute node is missing from the list of nodes hosting the router, verify that an instance in the tenant network has been scheduled to it.
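As a quick check, and assuming the neutron command-line client is available, the L3 agents currently hosting the router can be listed with the following command:

    neutron l3-agent-list-hosting-router MyRouter-DVR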

Using the ip netns exec command, we can see that the qr interfaces within the namespaces on each compute node and controller node share the same interface names, IP addresses, and MAC addresses:

Figure 9.7

Figure 9.8

Figure 9.9

In the preceding screenshots, the qrouter namespaces on the controller and compute nodes that correspond to the MyRouter-DVR router contain the same qr-29db7422-76 and qr-57d9db1e-44 interfaces and addresses for the TENANT_BLUE and TENANT_RED networks, respectively. A creative use of routing tables and Open vSwitch flow rules allows traffic between instances behind the same distributed router to be routed directly between compute nodes. The tricks behind this functionality will be discussed in the following sections and throughout the chapter.
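As a rough sketch, the following commands, run on any node hosting the router, list the network namespaces and display the addresses configured on the blue qr interface. The router ID placeholder must be replaced with the actual UUID of MyRouter-DVR:

    ip netns
    ip netns exec qrouter-<router-id> ip addr show qr-29db7422-76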

Plumbing it up

When a distributed virtual router is connected to a subnet through the router-interface-add command, the router is scheduled to all nodes hosting ports on the subnet, including any controller or network node hosting DHCP or load balancer namespaces and any compute node hosting virtual machine instances in the subnet. L3 agents are responsible for creating the respective qrouter network namespace on each node, and the Open vSwitch agent connects the router interfaces to the bridges and configures the appropriate flows.
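For illustration, attaching the router to a subnet and confirming that the qrouter namespace has been created on a node might look like the following. The subnet name TENANT_BLUE_SUBNET is hypothetical:

    neutron router-interface-add MyRouter-DVR TENANT_BLUE_SUBNET
    ip netns | grep qrouter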

Distributing router ports

Without precautions, distributing ports with the same IP and MAC addresses across multiple compute nodes presents major issues in the network. Imagine a physical topology that resembles the following diagram:

Figure 9.10

In most networks, an environment consisting of multiple routers with the same IP and MAC addresses connected to a switch would result in the switches learning and relearning the location of the MAC addresses across different switch ports. This behavior is often referred to as MAC flapping and results in network instability and unreliability.

Virtual switches can exhibit the same behavior regardless of segmentation type, since a virtual switch may learn that a MAC address exists both locally on the compute node and remotely, resulting in behavior similar to that observed on the physical switch.

Making it work

To work around this expected network behavior, Neutron allocates a unique MAC address to each compute node, which is used whenever traffic from a distributed virtual router leaves the node. The following screenshot shows the unique MAC addresses that have been allocated to the nodes in this demonstration:

Figure 9.11

Open vSwitch flow rules are used to rewrite the source MAC address of a packet to the unique MAC address allocated to the source node as the packet leaves a router interface:

Figure 9.12
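The flow rules responsible for this rewrite can be inspected on the tunnel bridge of the source compute node. The following command is only a starting point; the exact tables, priorities, and MAC addresses in the output vary by environment and release:

    ovs-ofctl dump-flows br-tun | grep mod_dl_src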

When traffic arrives at a compute node and matches a local virtual machine instance's MAC address and segmentation ID, the source MAC address is rewritten from the unique source node MAC address to the local instance's gateway MAC address:

Figure 9.13
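On the destination compute node, the corresponding rules can be found on the integration bridge with a similar command; again, the exact flow entries vary by environment and release:

    ovs-ofctl dump-flows br-int | grep mod_dl_src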

Because the layer 2 header rewrites occur after traffic leaves a virtual machine instance and before it is delivered to another, the instances are unaware of the changes made to the frames and operate normally. The following section demonstrates this process in further detail.

Demonstrating traffic between instances

Imagine a scenario where virtual machines in different networks exist on two different compute nodes, as demonstrated in the following diagram:

Figure 9.14

Traffic from the blue virtual machine instance on Compute A to the red virtual machine instance on Compute B will first be forwarded from the instance through the integration bridge to its local gateway in the router namespace:

Figure 9.15

Traffic leaves the blue VM and is forwarded to the blue router interface. The original MAC and IP addresses are intact:

Source MAC                 Destination MAC           Source IP     Destination IP
Blue VM                    Blue router interface     Blue VM       Red VM
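This first hop can be observed by running tcpdump on the blue qr interface inside the router namespace on Compute A; the router ID is a placeholder for the actual UUID of MyRouter-DVR:

    ip netns exec qrouter-<router-id> tcpdump -i qr-29db7422-76 -ne icmp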

The router on Compute A routes the traffic from the blue VM to the red VM, replacing the source MAC address with that of its red interface and the destination MAC address with that of the red VM in the process:

Source MAC                 Destination MAC           Source IP     Destination IP
Red router interface       Red VM                    Blue VM       Red VM
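The routing decision can be verified by inspecting the routing table inside the router namespace on Compute A, which should contain directly connected routes for both tenant subnets; the router ID is again a placeholder:

    ip netns exec qrouter-<router-id> ip route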

The router then sends the packet back to the integration bridge, which forwards it to the tunnel bridge:

Figure 9.16

As traffic arrives at the tunnel bridge on Compute A, a series of flow rules is processed, changing the source MAC address from that of the router's red interface to the unique MAC address of the host:

Source MAC                 Destination MAC           Source IP     Destination IP
Source host (Compute A)    Red VM                    Blue VM       Red VM

The traffic is then matched to a flow rule that results in its encapsulation and forwarding out of the appropriate tunnel to Compute B:

Figure 9.17
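One rough way to confirm this on the wire is to capture the encapsulated traffic on the physical interface carrying the tunnels. The following example assumes VXLAN on its default UDP port and a hypothetical interface name of eth1; a GRE-based environment would require a different capture filter:

    tcpdump -i eth1 -ne udp port 4789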

When traffic arrives at Compute B, it is forwarded through the tunnel bridge and decapsulated. A flow rule adds a local VLAN tag that allows the traffic to be matched when it is forwarded to the integration bridge:

Figure 9.18

Source MAC                 Destination MAC           Source IP     Destination IP
Source host (Compute A)    Red VM                    Blue VM       Red VM

In the integration bridge, a flow rule strips the local VLAN tag and changes the source MAC address back to that of the router's red interface. The packet is then forwarded to the red VM:

Figure 9.19

Source MAC                 Destination MAC           Source IP     Destination IP
Red router interface       Red VM                    Blue VM       Red VM

Return traffic from the red VM to the blue VM follows a similar path through the respective router namespaces and bridges on each compute node.
