Preparing the physical infrastructure

Most OpenStack clouds are made up of physical infrastructure nodes that fit into one of the following four categories:

  • Controller node: Controller nodes traditionally run the API services for all of the OpenStack components, including Glance, Nova, Keystone, Neutron, and more. In addition, controller nodes run the database and messaging servers, and are often the point of management of the cloud via the Horizon dashboard. Most OpenStack API services can be installed on multiple controller nodes and load balanced to scale the OpenStack control plane (see the sketch after this list).
  • Network node: Network nodes traditionally run DHCP and metadata services and can also host virtual routers when the Neutron L3 agent is installed. In smaller environments, it is not uncommon to see controller and network node services collapsed onto the same server or set of servers. As the cloud grows, most network services can be broken out onto dedicated servers for optimal performance.
  • Compute node: Compute nodes traditionally run a hypervisor such as KVM, Hyper-V, or Xen, or container software such as LXC or Docker. In some cases, a compute node may also host virtual routers, especially when Distributed Virtual Routing (DVR) is configured. In proof-of-concept or test environments, it is not uncommon to see controller, network, and compute node services collapsed onto the same machine. This is especially common when using DevStack, a software package designed for developing and testing OpenStack code. All-in-one installations are not recommended for production use.
  • Storage node: Storage nodes are traditionally limited to running software related to storage such as Cinder, Ceph, or Swift. Storage nodes do not usually host any type of Neutron networking service or agent and will not be discussed in this book.
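
When the OpenStack APIs are load balanced as mentioned in the controller node description above, a load balancer typically fronts the controller nodes. The following HAProxy snippet is a minimal sketch, not a complete configuration: the virtual IP and controller addresses are hypothetical, though 9696 is the default listen port for the Neutron API service:

    frontend neutron-api
        # Hypothetical virtual IP that clients use to reach the Neutron API
        bind 203.0.113.10:9696
        default_backend neutron-api-servers

    backend neutron-api-servers
        balance roundrobin
        # Hypothetical controller addresses; 'check' enables health checks
        server controller01 192.0.2.11:9696 check
        server controller02 192.0.2.12:9696 check
        server controller03 192.0.2.13:9696 check

Because Neutron keeps its state in a shared database rather than in the API process itself, any controller can serve a given request, which is what makes simple round-robin balancing viable here.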

When Neutron services are broken out across multiple hosts, the layout of services will often resemble the following:

Figure 1.3: Neutron services distributed across controller, network, and compute nodes

In figure 1.3, the Neutron API service, neutron-server, is installed on the controller node, while the Neutron agents responsible for implementing certain virtual networking resources are installed on a dedicated network node. Each compute node hosts a network plugin agent responsible for implementing the network plumbing on that host. Neutron supports a highly available API service with a shared database backend, and it is recommended that cloud operators load balance traffic to the Neutron API service when possible. Multiple DHCP, metadata, L3, and LBaaS agents should be deployed on separate network nodes whenever possible. Virtual networks, routers, and load balancers can be scheduled to one or more agents to provide a basic level of redundancy when an agent fails. Neutron also includes built-in schedulers that can reschedule certain resources to surviving agents when a failure is detected.
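
The redundancy described above is driven largely by scheduler settings in the Neutron server configuration. The following is a sketch of /etc/neutron/neutron.conf on a controller node; the option names are real Neutron settings, but the values shown are illustrative and should be tuned to the environment:

    [DEFAULT]
    # Schedule each network to two DHCP agents so DHCP service
    # survives the loss of a single network node
    dhcp_agents_per_network = 2

    # Automatically reschedule routers away from L3 agents
    # that are detected as down
    allow_automatic_l3agent_failover = True

The openstack network agent list command can then be used to confirm which agents are running on which hosts and whether they are alive.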
