Using Distributed Virtual Routers

When we create Neutron routers in DVR mode, the routers are created on our Compute nodes instead of the network node. This allows for a much more distributed routing layout and avoids a bottleneck at the network node. In normal operation, creating and deleting routers behaves the same way as in Legacy mode, but understanding and troubleshooting them is a little different.

Getting ready

Ensure that you have a suitable client available for using Neutron. If you are using the accompanying Vagrant environment, you can use the controller node. This has the python-neutronclient package that provides the neutron command-line client.

If you created this node with Vagrant, you can execute the following command:

vagrant ssh controller

Ensure that you have the following credentials set (adjust the path to your certificates and key file to match your environment if not using the Vagrant environment):

export OS_TENANT_NAME=cookbook
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=https://192.168.100.200:5000/v2.0/
export OS_NO_CACHE=1
export OS_KEY=/vagrant/cakey.pem
export OS_CACERT=/vagrant/ca.pem

How to do it...

In this section, we will create and view the details of a DVR mode router and see how these present themselves to our Compute hosts. The steps are as follows:

  1. First, create a router using the following command:
    neutron router-create cookbook_router_1
    

    You will get tabular output describing the new router. As you can see in the output, a new distributed field is shown that is set to True.

  2. We can attach any of our networks to this router as we did before:
    neutron router-interface-add \
      cookbook_router_1 \
      cookbook_subnet_1
    
  3. So far there is no visible difference between this router and a Legacy router. To locate where this router is running, use the following command:
    neutron l3-agent-list-hosting-router cookbook_router_1
    

    You will get output listing the L3 agents hosting the router. In the output, you can see that the router is available on our Compute host and not our network node.

  4. In Legacy L3 routing mode, when troubleshooting a router, we had namespaces of the form qrouter-{router-uuid} on our network node. In DVR mode, the Compute host has this as well as a new fip-{ext-net-uuid} namespace (named after the external network's UUID) that we can use to troubleshoot Floating IP assignments. On the Compute host, issue the following command:
    ip netns list
    

    The output lists the namespaces on the host, including both the qrouter and fip namespaces.
  5. We can then use this namespace to test connectivity to any instance that has a Floating IP assigned. Assume that 192.168.100.11 is assigned to an instance running on our Compute host:
    ip netns exec fip-ca2fc700-b5e2-4c8b-9fa4-6a80f1174360 ping 192.168.100.11
    

    You will get the following output:

    How to do it...
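The namespace naming convention used in the steps above can be sketched programmatically. The following Python snippet is a minimal illustration, not part of the recipe: the qrouter UUID is a made-up placeholder (the fip UUID is the one from step 5), and it simply shows how namespace names from ip netns list can be told apart by prefix and how the troubleshooting command is assembled:

```python
# Classify Neutron namespace names as reported by `ip netns list`.
# The qrouter UUID below is an illustrative placeholder.
namespaces = [
    "qrouter-9a6a8a5c-63f1-4b3a-8a3e-0d1b2c3d4e5f",
    "fip-ca2fc700-b5e2-4c8b-9fa4-6a80f1174360",
]

def classify(name):
    """Return the namespace role based on its name prefix."""
    prefix = name.split("-", 1)[0]
    return {
        "qrouter": "router namespace",
        "fip": "floating-IP namespace",
    }.get(prefix, "other")

# Build the troubleshooting command from step 5 using the fip namespace.
fip_ns = next(n for n in namespaces if n.startswith("fip-"))
cmd = ["ip", "netns", "exec", fip_ns, "ping", "192.168.100.11"]

print(classify(namespaces[0]))  # router namespace
print(" ".join(cmd))
```

Running the assembled command on the Compute host (as root) performs the same connectivity test as step 5.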

How it works...

We discussed a few steps to highlight the differences when running routers in distributed mode. Because we set router_distributed = True in /etc/neutron/neutron.conf, any routers we create are, by default, created in distributed mode on our Compute hosts. To troubleshoot them, we connect to our Compute hosts and view the namespaces created there.
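For reference, the relevant setting is a single option in the [DEFAULT] section of /etc/neutron/neutron.conf on the node running neutron-server (a sketch assuming a stock Neutron configuration file):

```ini
[DEFAULT]
# When True, routers created without an explicit distributed flag
# are created as DVR routers, scheduled onto the Compute hosts.
router_distributed = True
```

Restart the neutron-server service after changing this option so that newly created routers pick up the default.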
