Designing your Nova environment

In a production environment, we could be running anywhere from tens to hundreds of Nova compute nodes, and Nova will possibly be used more than any other component.

Designed properly, Nova can support multiple hypervisors in a cloud and provision instances in different regions. While we are not going to supply a step-by-step guide to the advanced configuration of Nova, we will take a look at it from a theoretical standpoint.

Logical constructs

The following diagram shows the different logical constructs of Nova. When architecting a production environment, we will need to think about which of these we would use depending on requirements and the scale of the cloud that we are deploying:

Logical constructs

In this diagram, we have not shown a Nova cell, as cells were still an experimental feature at the time of writing this book. A cell is fairly similar to a region, with the exception that the nova-api service is also shared.

Region

This is the top-level construct and has all the components of Nova installed. If there are two regions, we will have two full-blown Nova installations (of all the Nova services); the only thing shared between the two regions is Keystone, which both use for authentication.

Availability zone

An availability zone (AZ) doesn't need any new services; it uses the same set of Nova services that are installed for the region. It is merely a configuration change and can be made quickly.

The reason you may want to use availability zones is to expose fault domains in the hardware to your users, for example, servers fed by different power supplies. A user can request that a Nova instance be booted in a particular AZ. An AZ cannot exist without a host aggregate; an AZ is, in some ways, simply a way to expose a host aggregate to end users.

The host aggregates

To simplify greatly, you can consider host aggregates as tags that are used to group compute servers. These could be servers with certain common traits, such as the hypervisor they run, or performance characteristics, such as SSD drives or flash storage. In terms of configuration order, the host aggregate is created first, and it can then be placed in an AZ, if needed.

It should also be noted that a single compute node can be placed in multiple aggregates, just like adding multiple tags. The Nova API doesn't allow users to choose a host aggregate directly, so we normally expose one as an AZ for users to choose from.

Host aggregates can, however, be targeted with metadata: matching metadata set on both the aggregate and, say, a Nova flavor will steer instances to a particular host aggregate.
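To make the metadata matching concrete, here is a minimal Python sketch of the idea, not Nova's actual filter code: a host passes when its aggregate's metadata satisfies all of the flavor's extra specs. The aggregate names, hosts, and the `ssd` key are illustrative assumptions.

```python
# Sketch of metadata-based aggregate selection (illustrative, not Nova's API).

def hosts_matching_flavor(aggregates, flavor_extra_specs):
    """Return the set of hosts in aggregates whose metadata satisfies the flavor."""
    matched = set()
    for agg in aggregates:
        # Every key/value pair on the flavor must match the aggregate's metadata.
        if all(agg["metadata"].get(k) == v
               for k, v in flavor_extra_specs.items()):
            matched.update(agg["hosts"])
    return matched

aggregates = [
    {"name": "KVM-SSD", "metadata": {"ssd": "true"},
     "hosts": ["node1", "node2"]},
    {"name": "KVM-normal", "metadata": {"ssd": "false"},
     "hosts": ["node3"]},
]

# A flavor tagged ssd=true lands only on the SSD aggregate's hosts.
print(sorted(hosts_matching_flavor(aggregates, {"ssd": "true"})))  # ['node1', 'node2']
```

In Nova itself, this role is played by the scheduler's aggregate metadata filtering; the point here is only that the flavor carries the request and the aggregate carries the matching tag.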

Both AZs and host aggregates can be used together, and they can even be used separately. A good use case for using them together is a multihypervisor cloud.

Virtual machine placement logic

The following logic is used for virtual machine placement:

  1. The client chooses the region.
  2. The client queries Keystone for the Nova endpoint of that region.
  3. The client submits the request to the Nova API of that region.
  4. The Nova system lists the compute nodes in the region.
  5. The Nova system then filters them, first by the AZ metadata and then by the host aggregate metadata (from the request).
  6. The Nova system finds the compute nodes that are suitable for handling the VM, depending upon the size of the VM.
  7. One of the compute nodes is then chosen in a round robin fashion, and the VM is spun up on this node.
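The filtering and round-robin steps above can be sketched in a few lines of Python. This is a toy model under stated assumptions, not Nova's scheduler: the node inventory, the single `free_ram_mb` capacity field, and the `place` function are all hypothetical.

```python
# Toy placement: filter by AZ and capacity, then pick round-robin (steps 4-7).

nodes = [
    {"name": "node1", "az": "az1", "free_ram_mb": 8192},
    {"name": "node2", "az": "az1", "free_ram_mb": 2048},
    {"name": "node3", "az": "az2", "free_ram_mb": 8192},
]

def place(nodes, az, ram_mb, rr_state):
    # Steps 4-6: list the nodes, filter by AZ, then by whether the VM fits.
    candidates = [n for n in nodes
                  if n["az"] == az and n["free_ram_mb"] >= ram_mb]
    if not candidates:
        raise RuntimeError("no valid host found")
    # Step 7: choose among the survivors in round-robin fashion.
    rr_state["i"] = (rr_state.get("i", -1) + 1) % len(candidates)
    return candidates[rr_state["i"]]["name"]

state = {}
print(place(nodes, "az1", 4096, state))  # node1 (only az1 node with enough RAM)
```

Repeated calls with the same `state` cycle through the surviving candidates, which is the round-robin behavior described in step 7.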

In our case, we have not created any availability zones or host aggregates for the purpose of simplicity, but the logic remains the same regardless.

Sample cloud design

So, in order to tie together what we have seen so far, let's take a fictitious example where your company needs a private cloud with the following qualities:

  • Two major datacenters: London and Boston
  • Support required for two hypervisors: VMware (for production workloads) and KVM (for development workloads)
  • Each DC will be fed by two separate power grids
  • There will be servers with SSDs as internal storage, to be used for caching static content

Now with this in mind, let's take a look at how we should proceed from here on.

Since we have two datacenters, we can create them as two different regions. This way, each region will be independent of the other. We could also look at creating Nova cells if we do not want the clients to first query Keystone for the endpoint URL, or if we want just a single endpoint URL. Since cells are an experimental feature, we will stick with two regions.

In each region, we will create two AZs. The servers belonging to the AZs will be connected to different power grids. This way, we can tell our users to create application servers on the different AZs for high availability.

Now, it comes down to the host aggregates. For these, we need to think a little more, and we have this question: will the SSD servers be available to both the dev/test and production environments?

Note

Please remember, this is not for the OpenStack environment, but for the applications that will be running on the virtual machines that will be spun up and managed by OpenStack.

We can have up to four host aggregates in the preceding scenario:

  • VMware-normal
  • VMware-SSD
  • KVM-normal
  • KVM-SSD

Now depending on the actual use case that is supplied, we can have all four host aggregates or just three of them. We may even have more host aggregates created based on other classifications as well.
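The whole sample design can be written down as plain data, which makes it easy to check that every combination of hypervisor and storage maps to exactly one aggregate. The structure below is just an illustration of the design in this chapter, not any OpenStack API; the metadata keys `hypervisor` and `ssd` are assumptions for the example.

```python
# The sample cloud design as data: two regions, two AZs each (one per
# power grid), and the four host aggregates from the list above.
design = {
    "regions": {
        region: {
            "availability_zones": ["AZ1", "AZ2"],
            "host_aggregates": {
                "VMware-normal": {"hypervisor": "vmware", "ssd": "false"},
                "VMware-SSD":    {"hypervisor": "vmware", "ssd": "true"},
                "KVM-normal":    {"hypervisor": "kvm",    "ssd": "false"},
                "KVM-SSD":       {"hypervisor": "kvm",    "ssd": "true"},
            },
        }
        for region in ("London", "Boston")
    }
}

def aggregate_for(region, hypervisor, ssd):
    """Find the aggregate whose metadata matches the request exactly."""
    aggs = design["regions"][region]["host_aggregates"]
    for name, meta in aggs.items():
        if meta == {"hypervisor": hypervisor, "ssd": ssd}:
            return name
    return None

print(aggregate_for("Boston", "kvm", "true"))  # KVM-SSD
```

If the use case rules out one combination (say, no SSDs for production VMware workloads), that aggregate is simply dropped from the dictionary and the lookup returns `None` for it.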

So, this is how we can design our Nova deployment. Do note that in this case we have also created a multihypervisor cloud: the compute nodes that work with VMware talk to vSphere to provision the virtual machines, so they can be a little undersized, as all they do is make API calls. The compute nodes that run the KVM hypervisor will need to be bigger, as they host the virtual machines themselves.

The following figure shows our sample cloud design:

Sample cloud design

As you can see, we have only installed Keystone in the London region, and the Boston region uses the same server.

The diagram also shows the Nova compute node sizes and functions for the different hypervisors.
