Visualizing the traffic flow when using Open vSwitch

When using the Open vSwitch driver, an Ethernet frame traveling from a virtual machine instance out through the physical server interface may pass through as many as nine devices inside the host:

  • The tap interface: tapXXXX
  • The Linux bridge: qbrXXXX
  • The veth pair: qvbXXXX, qvoXXXX
  • The OVS integration bridge: br-int
  • OVS patch ports: int-br-ethX and phy-br-ethX
  • The OVS provider bridge: br-ethX
  • The physical interface: ethX
  • The OVS tunnel bridge: br-tun

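Many of these devices can be inspected directly on the host with standard Linux tools. The following hypothetical example uses brctl to examine the Linux bridge associated with a single instance; the device names and bridge ID shown are illustrative:

    # brctl show qbr017db302-dc
    bridge name        bridge id            STP enabled    interfaces
    qbr017db302-dc     8000.7a9f3e2b1c4d    no             qvb017db302-dc
                                                           tap017db302-dc
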
The Open vSwitch bridge br-int is known as the integration bridge. The integration bridge is the central virtual switch that most virtual devices are connected to, including instances, DHCP servers, routers, and more. When Neutron security groups are enabled, however, instances are not directly connected to the integration bridge. Instead, instances are connected to individual Linux bridges that are cross connected to the integration bridge using a veth cable.

Note

The reliance on Linux bridges in an Open vSwitch-based network implementation stems from the current inability to place iptables rules on tap interfaces connected to Open vSwitch bridge ports, a core function of Neutron security groups. To work around this limitation, tap interfaces are placed into Linux bridges, which in turn are connected to the integration bridge. More information on security group rules and how they are applied to interfaces can be found in Chapter 6, Managing Security Groups.

The br-ethX Open vSwitch bridge is known as a provider bridge. The provider bridge provides connectivity to the physical network via a connected physical interface. The provider bridge is also connected to the integration bridge by a virtual patch cable provided by the int-br-ethX and phy-br-ethX patch ports.
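
Depending on the Neutron release and agent configuration, this cross connect may be implemented as a pair of OVS patch ports or as a Linux veth pair. Assuming patch ports, a trimmed, hypothetical ovs-vsctl show listing of the connection might resemble the following:

    # ovs-vsctl show
    ...
        Bridge br-int
            Port "int-br-eth2"
                Interface "int-br-eth2"
                    type: patch
                    options: {peer="phy-br-eth2"}
        Bridge "br-eth2"
            Port "phy-br-eth2"
                Interface "phy-br-eth2"
                    type: patch
                    options: {peer="int-br-eth2"}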

A visual representation of the architecture described can be seen in Figure 4.15:

Figure 4.15

In Figure 4.15, instances are connected to an individual Linux bridge via their respective tap interface. Linux bridges are connected to the OVS integration bridge using a veth cable. OpenFlow rules on the integration bridge dictate how traffic is forwarded through the virtual switch. The integration bridge is connected to the provider bridge using an OVS patch cable. Lastly, the provider bridge is connected to the physical network interface, which allows traffic to enter and exit the host onto the physical network infrastructure.

When using the Open vSwitch driver, every network and compute node in the environment has its own integration, provider, and tunnel bridge. The virtual switches across nodes are effectively cross connected to one another through the physical network. More than one provider bridge can be configured on a host, but doing so often requires a dedicated physical interface per provider bridge.
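
Provider bridges are mapped to physical network labels in the Open vSwitch agent's configuration file. The following hypothetical snippet maps two physnet labels to two provider bridges, each backed by its own physical interface; the label and bridge names are examples only:

    [ovs]
    # physnet label : provider bridge (one physical interface per bridge)
    bridge_mappings = physnet1:br-eth1,physnet2:br-eth2

Each bridge referenced in bridge_mappings must exist ahead of time, with its physical interface added as a port, for example with ovs-vsctl add-br br-eth2 followed by ovs-vsctl add-port br-eth2 eth2.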

Identifying ports on the virtual switch

Using the ovs-ofctl show <bridge> command, we can see a logical representation of the specified virtual switch. The following screenshot demonstrates the use of this command to show the switch ports of the integration bridge on compute01:

Figure 4.16

The following are the components demonstrated in the preceding screenshot:

  • Port number 6 is named int-br-eth2, and it is one end of a Linux veth cable. The other end connects to the provider bridge, br-eth2 (not pictured).
  • Port number 7 is named patch-tun, and it is one end of an OVS patch cable. The other end connects to the tunnel bridge, br-tun (not pictured).
  • Port number 8 is named qvo017db302-dc, and it corresponds to a Neutron port UUID starting with 017db302-dc.
  • Port number 9 is named qvo7140bc00-75, and it corresponds to a Neutron port UUID starting with 7140bc00-75.
  • The LOCAL port is named br-int, and it is used for the management of traffic to and from the virtual switch.
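
Taken together, these ports might appear in trimmed ovs-ofctl output similar to the following sketch; the dpid and MAC addresses are invented for illustration:

    # ovs-ofctl show br-int
    OFPT_FEATURES_REPLY (xid=0x2): dpid:0000f20dd0a5d448
    ...
     6(int-br-eth2): addr:2e:7c:9a:10:41:b5
     7(patch-tun): addr:aa:52:33:c0:11:8e
     8(qvo017db302-dc): addr:5a:41:e6:24:90:cf
     9(qvo7140bc00-75): addr:ce:b3:12:78:04:aa
     LOCAL(br-int): addr:f2:0d:d0:a5:d4:48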

The following screenshot demonstrates the switch configuration in a graphical manner:

Figure 4.17

Identifying the VLANs associated with ports

Every port on the integration bridge connected to an instance or other network resource is placed in a VLAN, which is local to this virtual switch. The Open vSwitch database on each host is independent of all other hosts, and the VLAN database is not related to the physical network infrastructure. Instances in the same Neutron network on a particular host are placed in the same VLAN on the local integration bridge.

Using the ovs-vsctl show command, you can identify the internal VLAN tag of all ports on all virtual switches on the host. The following screenshot demonstrates this command in action on compute01:

Figure 4.18

Inside the integration bridge reside two ports named qvo7140bc00-75 and qvo017db302-dc, and each is assigned its own VLAN tag. These ports correspond to two instances in two different Neutron networks, as evidenced by the difference in their VLAN IDs.
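
A trimmed ovs-vsctl show listing for this host might resemble the following sketch; which port carries which tag is illustrative:

    Bridge br-int
        Port "qvo017db302-dc"
            tag: 1
            Interface "qvo017db302-dc"
        Port "qvo7140bc00-75"
            tag: 2
            Interface "qvo7140bc00-75"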

Note

The VLAN IDs are arbitrarily assigned by the local Open vSwitch process and may change upon the restart of the openvswitch-switch service or after a reboot.

Programming flow rules

Unlike the LinuxBridge driver architecture, the Open vSwitch driver does not use VLAN interfaces on the host to tag traffic. Instead, the Open vSwitch agent programs flow rules on the virtual switches that dictate how traffic traversing the switch should be manipulated before forwarding. As traffic traverses a virtual switch, flow rules can transform, add, or strip VLAN tags before the traffic is forwarded. In addition, flow rules can be added that drop traffic matching certain characteristics. Open vSwitch is capable of performing other types of actions on traffic, but those actions are outside the scope of this book.

Using the ovs-ofctl dump-flows <bridge> command, we can observe the flows currently programmed on the specified bridge. The Open vSwitch plugin agent is responsible for converting information about the network in the Neutron database to Open vSwitch flows and constantly maintains the flows as changes are made to the network.
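
As a point of reference, a single flow entry returned by this command takes roughly the following form; the cookie, counters, and timers shown here are placeholders:

    # ovs-ofctl dump-flows br-int
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=416.617s, table=0, n_packets=128, n_bytes=12032, idle_age=13, priority=1 actions=NORMAL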

Flow rules for VLANs

In the following example, the VLANs, 30 and 33, represent two networks in the data center. Both VLANs are trunked down to the controller and compute nodes, and Neutron networks that utilize these VLAN IDs are configured. Traffic that enters the eth2 physical interface is processed by the flow rules on the br-eth2 bridge that it is connected to:

Figure 4.19

Flow rules are processed in order of priority, from highest to lowest. By default, ovs-ofctl returns flow entries in whatever order the virtual switch sends them, which may not reflect their priority. Using the --rsort option, it is possible to return the results sorted by priority, from highest to lowest:
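
A command such as the following could have produced the output shown in Figure 4.20:

    # ovs-ofctl --rsort dump-flows br-eth2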

Figure 4.20

The first three rules specify a particular inbound port:

in_port=4

According to the diagram in Figure 4.17, traffic entering the br-eth2 bridge from the eth2 physical interface does so through port 1, not port 4, so the first three rules do not apply. Traffic is instead forwarded to the integration bridge via the fourth rule, where no particular port is specified:

Figure 4.21

Flows with a NORMAL action instruct Open vSwitch to act as a learning switch, which means that traffic will be forwarded out of all ports until the switch learns and updates its forwarding database, also known as the FDB table. Traffic is forwarded out the port connected to the integration bridge.

Note

The FDB table is the equivalent of a CAM or MAC address table. This learning behavior is similar to that of a hardware switch that floods traffic out of all ports until it learns the proper path.

As traffic exits port 4 of the br-eth2 provider bridge and enters port 6 of the br-int integration bridge, it is evaluated by the flow rules on br-int, as follows:

Figure 4.22

The first rule performs the action of modifying the VLAN ID of a packet from its original VLAN to a VLAN that is local to the integration bridge on the compute node when the original VLAN ID is 30:

Figure 4.23

When the traffic tagged as VLAN 30 is sent to an instance and forwarded through the provider bridge to the integration bridge, the VLAN tag is modified from 30 to local VLAN 1. It is then forwarded to a port on br-int that is connected to the instance that matches the destination MAC address.

The second rule performs a similar action when the original VLAN ID is 33, replacing it with local VLAN 2. If the third rule is matched, it means that no other rule of a higher priority matching port 6 was found, and the traffic will be dropped:

Figure 4.24
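
Putting the three port 6 rules together with a final NORMAL rule, a simplified reconstruction of the integration bridge flow table looks like the following; the priority values are illustrative and the counters have been omitted:

     priority=3,in_port=6,dl_vlan=30 actions=mod_vlan_vid:1,NORMAL
     priority=3,in_port=6,dl_vlan=33 actions=mod_vlan_vid:2,NORMAL
     priority=2,in_port=6 actions=drop
     priority=1 actions=NORMAL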

The return traffic from the instances through the br-int integration bridge is forwarded to the provider bridge by the fourth rule:

Figure 4.25

Once the traffic hits the br-eth2 provider bridge, it is processed by the flow rules, as follows:

Figure 4.26

These rules should look familiar, as they are the same flow rules on the provider bridge shown earlier. This time, however, traffic enters from the integration bridge through port 4 and is processed by the first three rules.

The first flow rule on the provider bridge checks the VLAN ID in the Ethernet header, and if it is 1, modifies it to 30 before forwarding the traffic to the physical interface. The second rule modifies the VLAN tag of the packet from 2 to 33 before it exits the bridge. All other traffic from the integration bridge on port 4 not tagged as VLAN 1 or 2 will be dropped.
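
In simplified form, and with illustrative priorities, the provider bridge flow table described above can be summarized as follows:

     priority=4,in_port=4,dl_vlan=1 actions=mod_vlan_vid:30,NORMAL
     priority=4,in_port=4,dl_vlan=2 actions=mod_vlan_vid:33,NORMAL
     priority=2,in_port=4 actions=drop
     priority=1 actions=NORMAL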

Note

Flow rules for a particular network do not exist on a bridge if there are no instances or resources in this network scheduled to this node. The Neutron Open vSwitch agent on each node is responsible for creating the appropriate flow rules for virtual switches on this node.

Flow rules for flat networks

Flat networks in Neutron are untagged networks, meaning there is no 802.1q VLAN tag associated with the network when it is created. Internally, however, Open vSwitch treats flat networks much like VLAN networks when programming the virtual switches. Flat networks are assigned a local VLAN ID in the Open vSwitch database, just as VLAN networks are, and instances in the same flat network connected to the same integration bridge are placed in the same local VLAN. The difference between VLAN and flat networks can be observed in the flow rules created on the integration and provider bridges: instead of mapping the local VLAN ID to a physical VLAN ID, and vice versa, as traffic traverses the bridges, flow rules add the local VLAN ID to, or strip it from, the Ethernet header.

In another example, a flat network that has no VLAN tag is added in Neutron:

Figure 4.27

On the physical switch, this network may be configured as the native VLAN (untagged) on the switch port connected to eth2 of compute01. An instance spun up on the MyFlatNetwork network results in the following virtual switch configuration:

Figure 4.28

Note that the port associated with the instance is assigned a VLAN ID of 3 even though it is a flat network. On the integration bridge, there exists a flow rule that modifies the VLAN header of an incoming Ethernet frame when it has no real VLAN ID set:

Figure 4.29

Note

TCI stands for Tag Control Information, a 2-byte field of the 802.1q header. For packets with an 802.1q header, this field contains VLAN information, including the VLAN ID. For packets without an 802.1q header, also known as untagged packets, vlan_tci is set to 0 (0x0000).

The result is that the incoming traffic is tagged as VLAN 3 and forwarded to the instances connected to the integration bridge that reside in VLAN 3.

As the return traffic from the instance is processed by the flow rules on the provider bridge, the local VLAN ID is stripped and the traffic becomes untagged:

Figure 4.30

The untagged traffic is then forwarded out of the eth2 physical interface and processed by the physical switch.
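
In simplified form, the pair of flow rules implementing this behavior can be summarized as follows, assuming the local VLAN ID of 3 shown earlier and illustrative priorities:

    On the br-int integration bridge:
     priority=3,in_port=6,vlan_tci=0x0000 actions=mod_vlan_vid:3,NORMAL

    On the br-eth2 provider bridge:
     priority=4,in_port=4,dl_vlan=3 actions=strip_vlan,NORMAL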

Flow rules for local networks

Local networks in an Open vSwitch implementation behave in a similar way to that of a LinuxBridge implementation. Instances in local networks are connected to the integration bridge and can communicate with other instances in the same network and local VLAN. There are no flow rules created for local networks. Traffic between instances in the same network remains local to the virtual switch, and by definition, local to the compute node on which they reside. This means that DHCP and metadata services will be unavailable to any instances that are not on the same host as these services.
