Chapter 10. User-facing Operations

This guide is for OpenStack operators and does not seek to be an exhaustive reference for users, but as an operator it is important that you have a basic understanding of how to use the cloud facilities. This chapter looks at OpenStack from a basic user perspective, which helps you understand your users’ needs and determine, when you get a trouble ticket, whether it is a user issue or a service issue. The main concepts covered are images, flavors, security groups, block storage, and instances.

Images

OpenStack images can often be thought of as “virtual machine templates”. Images can also be standard installation media, such as ISO images. Essentially, they contain bootable file systems that are used to launch instances.

Adding Images

Several pre-made images exist and can easily be imported into the Image Service. A common image to add is the CirrOS image which is very small and used for testing purposes. To add this image, simply do:

# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
# glance image-create --name='cirros image' --is-public=true --container-format=bare --disk-format=qcow2 < cirros-0.3.0-x86_64-disk.img

The glance image-create command provides a large set of options for describing your image. For example, the min-disk option is useful for images that require root disks of a certain size (for example, large Windows images). To view these options, do:

$ glance help image-create
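
For example, a command along these lines might be used to require a larger root disk; the image name, source file, and 40 GB minimum are purely illustrative, not taken from a real deployment:

$ glance image-create --name='windows server image' --is-public=false --container-format=bare --disk-format=qcow2 --min-disk=40 < windows-server.qcow2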

The location option is important to note. It does not copy the entire image into Glance, but references the original location where the image can be found. Upon launching an instance of that image, Glance accesses the image from the location specified.

The copy-from option copies the image from the location specified into the /var/lib/glance/images directory. The same thing happens when you use STDIN redirection, as shown in the example above.
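
As a sketch of the difference, the following examples assume an image published at a hypothetical URL. The first command only records the reference; the second pulls a copy into Glance:

$ glance image-create --name='referenced image' --is-public=true --container-format=bare --disk-format=qcow2 --location=http://example.com/images/myimage.qcow2
$ glance image-create --name='copied image' --is-public=true --container-format=bare --disk-format=qcow2 --copy-from=http://example.com/images/myimage.qcow2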

Run the following command to view the properties of existing images:

$ glance details

Deleting Images

To delete an image, just execute:

$ glance image-delete <image uuid>

Deleting an image does not affect instances or snapshots that were based off the image.

Other CLI Options

A full set of options can be found using:

$ glance help

or the OpenStack Image Service CLI Guide. (http://docs.openstack.org/cli/quick-start/content/glance-cli-reference.html)

The Image Service and the Database

The only thing that Glance does not store in a database is the image itself. The Glance database has two main tables:

  • images

  • image_properties

Working directly with the database and SQL queries can provide you with custom lists and reports of Glance images. Technically, you can update properties about images through the database, although this is not generally recommended.

Example Image Service Database Queries

One interesting example is listing images together with the owner of each image. You could simply display the unique ID of the owner; this example goes one step further and displays the readable name of the owner:

mysql> select glance.images.id, glance.images.name, keystone.tenant.name, is_public
    -> from glance.images
    -> inner join keystone.tenant on glance.images.owner=keystone.tenant.id;

Another example is displaying all properties for a certain image:

mysql> select name, value from image_properties where image_id = <image_id>;
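
Along the same lines, here is a quick sketch of a report counting images per project, assuming the same glance and keystone schemas used in the join above:

mysql> select keystone.tenant.name, count(*) from glance.images
    -> inner join keystone.tenant on glance.images.owner=keystone.tenant.id
    -> group by keystone.tenant.name;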

Flavors

Virtual hardware templates are called “flavors” in OpenStack, defining sizes for RAM, disk, number of cores, and so on. The default install provides five flavors. These are editable by admin users (this, too, is configurable and may be delegated by redefining the access controls for compute_extension:flavormanage in /etc/nova/policy.json on the nova-api server). To get a list of available flavors on your system, run:

$ nova flavor-list
+----+-----------+-----------+------+-----------+-------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | VCPUs | extra_specs |
+----+-----------+-----------+------+-----------+-------+-------------+
| 1  | m1.tiny   | 512       | 1    | 0         | 1     | {}          |
| 2  | m1.small  | 2048      | 10   | 20        | 1     | {}          |
| 3  | m1.medium | 4096      | 10   | 40        | 2     | {}          |
| 4  | m1.large  | 8192      | 10   | 80        | 4     | {}          |
| 5  | m1.xlarge | 16384     | 10   | 160       | 8     | {}          |
+----+-----------+-----------+------+-----------+-------+-------------+

The nova flavor-create command allows authorized users to create new flavors. Additional flavor manipulation commands can be shown with the command:

$ nova help | grep flavor
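
As a hedged example of what creating a new flavor might look like, the positional arguments are the flavor name, ID, memory in MB, root disk in GB, and number of VCPUs; the name and values below are made up for illustration:

$ nova flavor-create m1.custom 6 4096 40 2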

Flavors define a number of elements:

Column

Description

ID

A unique numeric id.

Name

A descriptive name. The xx.size_name convention is customary but not required, though some third-party tools may rely on it.

Memory_MB

Virtual machine memory in megabytes.

Disk

Virtual root disk size in gigabytes. This is an ephemeral disk into which the base image is copied. It is not used when booting from a persistent volume. The “0” size is a special case which uses the native base image size as the size of the ephemeral root volume.

Ephemeral

Specifies the size of a secondary ephemeral data disk. This is an empty, unformatted disk and exists only for the life of the instance.

Swap

Optional swap space allocation for the instance.

VCPUs

Number of virtual CPUs presented to the instance.

RXTX_Factor

Optional property that allows created servers to have a different bandwidth cap than that defined in the network they are attached to. This factor is multiplied by the rxtx_base property of the network. The default value is 1.0 (that is, the same as the attached network).

Is_Public

Boolean value, whether flavor is available to all users or private to the tenant it was created in. Defaults to True.

extra_specs

Additional optional restrictions on which compute nodes the flavor can run on. This is implemented as key/value pairs that must match against the corresponding key/value pairs on compute nodes. Can be used to implement things like special resources (such as flavors that can only run on compute nodes with GPU hardware).
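
For example, extra_specs might be set on a flavor from the command line with nova flavor-key; the key name below is purely illustrative and, as described above, must match a corresponding key/value pair on your compute nodes. The m1.custom flavor is the made-up example from earlier:

$ nova flavor-key m1.custom set special_hardware=gpu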

How do I modify an existing flavor?

Unfortunately, OpenStack does not provide an interface for modifying flavors, only for creating and deleting them. The OpenStack Dashboard simulates the ability to modify a flavor by deleting an existing flavor and creating a new one with the same name.
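
From the command line, the equivalent is a delete followed by a re-create. A sketch, reusing the made-up m1.custom flavor from above (ID 6) and changing only its memory size:

$ nova flavor-delete 6
$ nova flavor-create m1.custom 6 8192 40 2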

Security groups

One of the most common new-user issues with OpenStack is failing to set an appropriate security group when launching an instance and then being unable to contact the instance on the network.

Security groups are sets of IP filter rules that are applied to an instance’s networking. They are project specific, and project members can edit the default rules for their group and add new rule sets. All projects have a “default” security group, which is applied to instances that have no other security group defined. Unless changed, this security group denies all incoming traffic.

The nova.conf option allow_same_net_traffic (which defaults to true) globally controls whether the rules apply to hosts which share a network. When set to true, hosts on the same subnet are not filtered and are allowed to pass all types of traffic between them. On a flat network, this allows all instances from all projects unfiltered communication. With VLAN networking, this allows access between instances within the same project. If allow_same_net_traffic is set to false, security groups are enforced for all connections. In this case, it is possible for projects to simulate allow_same_net_traffic by configuring their default security group to allow all traffic from their subnet.
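
For reference, enforcing security groups between same-subnet hosts might look like the following sketch of /etc/nova/nova.conf; placement in the [DEFAULT] section and a restart of the affected nova services are assumed:

[DEFAULT]
allow_same_net_traffic = false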

Security groups for the current project can be found on the Horizon dashboard under “Access & Security”. To see details of an existing group, select the “edit” action for that security group. Modifying existing groups can also be done from this “edit” interface. There is a “Create Security Group” button on the main “Access & Security” page for creating new groups. We discuss the terms used in these fields when we explain the command-line equivalents.

From the command line, you can get a list of security groups for the project you’re acting in by using the nova command:

$ nova secgroup-list
+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
| open    | all ports   |
+---------+-------------+

To view the details of the “open” security group:

$ nova secgroup-list-rules open
 +-------------+-----------+---------+-----------+--------------+ 
 | IP Protocol | From Port | To Port | IP Range  | Source Group | 
 +-------------+-----------+---------+-----------+--------------+ 
 | icmp        | -1        | 255     | 0.0.0.0/0 |              | 
 | tcp         | 1         | 65535   | 0.0.0.0/0 |              | 
 | udp         | 1         | 65535   | 0.0.0.0/0 |              | 
 +-------------+-----------+---------+-----------+--------------+ 

These rules are all “allow” type rules, as the default is deny. The first column is the IP protocol (one of icmp, tcp, or udp), the second and third columns specify the affected port range, and the fourth column specifies the IP range in CIDR format. This example shows the full port range for all protocols allowed from all IPs.

As noted in the previous chapter, the number of rules per security group is controlled by the quota_security_group_rules quota, and the number of allowed security groups per project is controlled by the quota_security_groups quota.

When adding a new security group you should pick a descriptive but brief name. This name shows up in brief descriptions of the instances that use it where the longer description field often does not. Seeing that an instance is using security group “http” is much easier to understand than “bobs_group” or “secgrp1”.

As an example, let’s create a security group that allows web traffic anywhere on the internet. We’ll call this “global_http” which is clear and reasonably concise, encapsulating what is allowed and from where. From the command line:

$ nova secgroup-create global_http "allow web traffic from the internet"
+-------------+-------------------------------------+
| Name        | Description                         |
+-------------+-------------------------------------+
| global_http | allow web traffic from the internet |
+-------------+-------------------------------------+

This creates the empty security group. To make it do what we want, we need to add some rules.

$ nova secgroup-add-rule <secgroup> <ip-proto> <from-port> <to-port> <cidr>
$ nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Note that the arguments are positional, and the “from-port” and “to-port” arguments specify the range of local ports to which connections are allowed, not the source and destination ports of the connection. More complex rule sets can be built up through multiple invocations of nova secgroup-add-rule. For example, if you want to pass both http and https traffic:

$ nova secgroup-add-rule global_http tcp 443 443 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Despite only outputting the newly added rule, this operation is additive:

$ nova secgroup-list-rules global_http
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

The inverse operation is called secgroup-delete-rule, using the same format. Whole security groups can be removed with secgroup-delete.

To create security group rules for a cluster of instances:

SourceGroups are a special, dynamic way of defining the CIDR of allowed sources. The user specifies a SourceGroup (security group name), and all of the user’s other instances using the specified SourceGroup are selected dynamically. This alleviates the need for individual rules to allow each new member of the cluster.

usage: nova secgroup-add-group-rule <secgroup> <source-group> <ip-proto> <from-port> <to-port>

$ nova secgroup-add-group-rule cluster global_http tcp 22 22

The “cluster” rule allows ssh access from any other instance that uses the “global_http” group.

Block Storage

OpenStack volumes are persistent block storage devices that may be attached to and detached from instances, but they can be attached to only one instance at a time. Similar to an external hard drive, they do not provide shared storage in the way a network file system or object store does. It is left to the operating system in the instance to put a file system on the block device and mount it, or not.

As with other removable disk technology, it is important that the operating system is not trying to make use of the disk before removing it. On Linux instances, this typically involves unmounting any file systems mounted from the volume. The OpenStack volume service cannot tell whether it is safe to remove a volume from an instance, so it does what it is told. If a user tells the volume service to detach a volume from an instance while it is being written to, you can expect some level of file system corruption as well as faults from whatever process within the instance was using the device.

There is nothing OpenStack-specific about the steps needed within the instance operating system to access block devices, potentially format them for first use, and be cautious when removing them. What is specific is how to create new volumes and attach and detach them from instances. These operations can all be done from the “Volumes” page of the dashboard or by using the cinder command-line client.

To add new volumes, you only need a name and a volume size in gigabytes. Either put these into the “create volume” web form or use the command line:

$ cinder create --display-name test-volume 10

This creates a 10 GB volume named “test-volume.” To list existing volumes and the instances they are connected to, if any:

$ cinder list
+------------+-----------+--------------------+------+-------------+-------------+
|     ID     |   Status  |    Display Name    | Size | Volume Type | Attached to |
+------------+-----------+--------------------+------+-------------+-------------+
| 0821...19f | available |    test-volume     |  10  |     None    |             |
+------------+-----------+--------------------+------+-------------+-------------+

The Block Storage service also allows for creating snapshots of volumes. Remember that this is a block-level snapshot that is crash consistent, so it is best if the volume is not connected to an instance when the snapshot is taken, and second best if the volume is not in use on the instance it is attached to. If the volume is under heavy use, the snapshot may have an inconsistent file system. In fact, by default, the volume service does not take a snapshot of a volume that is attached to an instance, though it can be forced to. To take a volume snapshot, either select “Create Snapshot” from the actions column next to the volume name on the dashboard volume page, or from the command line:

usage: cinder snapshot-create [--force <True|False>]
                              [--display-name <display-name>]
                              [--display-description <display-description>]
                              <volume-id>

Add a new snapshot.

Positional arguments:
  <volume-id>           ID of the volume to snapshot

Optional arguments:
  --force <True|False>  Optional flag to indicate whether to snapshot a volume
                        even if its attached to an instance. (Default=False)
  --display-name <display-name>
                        Optional snapshot name. (Default=None)
  --display-description <display-description>
                        Optional snapshot description. (Default=None)
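
Putting this to use, a minimal sketch of snapshotting the volume created earlier; the snapshot name is illustrative, and the placeholder should be replaced with an ID from cinder list:

$ cinder snapshot-create --display-name test-volume-snapshot <volume-id>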

Block Storage Creation Failures

If a user tries to create a volume and it immediately goes into an error state, the best way to troubleshoot is to grep the Cinder log files for the volume’s UUID. First try the log files on the cloud controller, and then try the storage node where the volume was attempted to be created:

# grep 903b85d0-bacc-4855-a261-10843fc2d65b /var/log/cinder/*.log 

Instances

Instances are the running virtual machines within an OpenStack cloud. This section deals with how to work with them and their underlying images, their network properties and how they are represented in the database.

Starting Instances

To launch an instance, you need to select an image, a flavor, and a name. The name need not be unique, but your life is simpler if it is, because many tools will accept the name in place of the UUID as long as the name is unique. This can be done from the dashboard either from the “Launch Instance” button on the “Instances” page or by selecting the “Launch” action next to an image or snapshot on the “Images & Snapshots” page.

On the command line:

$ nova boot --flavor <flavor> --image <image> <name>

There are a number of optional items that can be specified. You should read the rest of this instances section before trying to start one, but this is the base command that later details are layered upon.

To delete instances from the dashboard, select the “Terminate instance” action next to the instance on the “Instances” page. From the command line:

$ nova delete <instance-uuid>

It is important to note that powering off an instance does not terminate it in the OpenStack sense.

Instance Boot Failures

If an instance fails to start and immediately moves to “Error” state there are a few different ways to track down what has gone wrong. Some of these can be done with normal user access while others require access to your log server or compute nodes.

The simplest reasons for instances failing to launch are quota violations or the scheduler being unable to find a suitable compute node on which to run the instance. In these cases, the error is apparent when you run nova show on the faulted instance:

$ nova show test-instance

+------------------------+------------------------------------------------------
| Property               | Value
+------------------------+------------------------------------------------------
| OS-DCF:diskConfig      | MANUAL
| OS-EXT-STS:power_state | 0
| OS-EXT-STS:task_state  | None
| OS-EXT-STS:vm_state    | error
| accessIPv4             |
| accessIPv6             |
| config_drive           |
| created                | 2013-03-01T19:28:24Z
| fault                  | {u'message': u'NoValidHost', u'code': 500, u'created
| flavor                 | xxl.super (11)
| hostId                 |
| id                     | 940f3b2f-bd74-45ad-bee7-eb0a7318aa84
| image                  | quantal-test (65b4f432-7375-42b6-a9b8-7f654a1e676e)
| key_name               | None
| metadata               | {}
| name                   | test-instance
| security_groups        | [{u'name': u'default'}]
| status                 | ERROR
| tenant_id              | 98333a1a28e746fa8c629c83a818ad57
| updated                | 2013-03-01T19:28:26Z
| user_id                | a1ef823458d24a68955fec6f3d390019
+------------------------+------------------------------------------------------

In this case, looking at the fault message shows NoValidHost, indicating that the scheduler was unable to satisfy the instance requirements.

If nova show does not sufficiently explain the failure, searching for the instance UUID in the nova-compute.log on the compute node it was scheduled to, or in the nova-scheduler.log on your scheduler hosts, is a good place to start looking for lower-level problems.

Using nova show as an admin user will show the compute node the instance was scheduled on as hostId. If the instance failed during scheduling, this field is blank.

Instance-specific Data

There are a variety of ways to inject custom data, including authorized_keys key injection, user-data, the metadata service, and file injection.

To clarify user-data versus metadata: “user-data” is a chunk of data set when an instance is not running. This user-data is accessible from within the instance when it is running. People use user-data to store configuration, a script, or anything the tenant wants.
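
From inside a running instance, the user-data can typically be retrieved through the EC2-compatible metadata service at its well-known address. A quick sketch:

$ curl http://169.254.169.254/latest/user-data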

For Compute, instance metadata is a collection of key/value pairs associated with an instance. Compute reads and writes to these key/value pairs at any time during the instance lifetime, from inside and outside the instance, when the end user uses the Compute API to do so. However, you cannot query the instance-associated key/value pairs via the metadata service that is compatible with the Amazon EC2 metadata service.

Users can generate and register ssh keys using the nova command:

$ nova keypair-add mykey > mykey.pem

This creates a key named mykey, which you can associate with instances. The file mykey.pem is the private key, which should be saved to a secure location because it allows root access to instances the mykey key is associated with.

You can register an existing public key with OpenStack using this command:

$ nova keypair-add --pub-key mykey.pub mykey

You must have the matching private key to access instances associated with this key.

To associate a key with an instance on boot, add --key_name mykey to your command line. For example:

$ nova boot --image ubuntu-cloudimage --flavor 1 --key_name mykey

When booting a server, you can also add metadata, so that you can more easily identify it amongst other running instances. Use the --meta option with a key=value pair, where you can make up the string for both the key and the value. For example, you could add a description and also the creator of the server.

$ nova boot --image=test-image --flavor=1 smallimage --meta description='Small test image'

When viewing the server information, you can see the metadata included on the metadata line:

$ nova show smallimage
+------------------------+-----------------------------------------+
|     Property           |                   Value                 |
+------------------------+-----------------------------------------+
|   OS-DCF:diskConfig    |               MANUAL                    |
| OS-EXT-STS:power_state |                 1                       |
| OS-EXT-STS:task_state  |                None                     |
|  OS-EXT-STS:vm_state   |               active                    |
|    accessIPv4          |                                         |
|    accessIPv6          |                                         |
|      config_drive      |                                         |
|     created            |            2012-05-16T20:48:23Z         |
|      flavor            |              m1.small                   |
|      hostId            |             de0...487                   |
|        id              |             8ec...f915                  |
|      image             |             natty-image                 |
|     key_name           |                                         |
|     metadata           | {u'description': u'Small test image'}   |
|       name             |             smallimage2                 |
|    private network     |            172.16.101.11                |
|     progress           |                 0                       |
|     public network     |             10.4.113.11                 |
|      status            |               ACTIVE                    |
|    tenant_id           |             e83...482                   |
|     updated            |            2012-05-16T20:48:35Z         |
|     user_id            |          de3...0a9                      |
+------------------------+-----------------------------------------+

User data is a special key in the metadata service that holds a file that cloud-aware applications within the guest instance can access. For example, cloud-init (https://help.ubuntu.com/community/CloudInit) is an open source package from Ubuntu that handles early initialization of a cloud instance and makes use of this user data.

This user-data can be put in a file on your local system and then passed in at instance creation with the flag --user-data <user-data-file>. For example:

$ nova boot --image ubuntu-cloudimage --flavor 1 --user-data mydata.file 
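
As a minimal sketch, mydata.file might contain a shell script for cloud-init to run on first boot; the contents below are purely illustrative:

#!/bin/sh
# Illustrative first-boot script delivered via user-data
echo "configured by user-data" > /root/user-data-was-here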

Arbitrary local files can also be placed into the instance file system at creation time using the --file <dst-path=src-path> option. You may store up to 5 files. For example, if you have a special authorized_keys file named special_authorized_keysfile that, for some reason, you want to put on the instance rather than using the regular ssh key injection, you can use the following command:

$ nova boot --image ubuntu-cloudimage --flavor 1 --file /root/.ssh/authorized_keys=special_authorized_keysfile

Associating Security Groups

Security groups, as discussed earlier, are typically required to allow network traffic to an instance, unless the default security group for a project has been modified to be more permissive.

Adding security groups is typically done on instance boot. When launching from the dashboard, this is done on the “Access & Security” tab of the “Launch Instance” dialog. When launching from the command line, append --security-groups with a comma-separated list of security groups.

It is also possible to add and remove security groups when an instance is running. Currently this is only available through the command-line tools:

$ nova add-secgroup <server> <securitygroup>
$ nova remove-secgroup <server> <securitygroup>
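
For example, adding the global_http group created earlier to a running server might look like this; the server name test-instance is the illustrative name used elsewhere in this chapter:

$ nova add-secgroup test-instance global_http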

Floating IPs

Projects have a quota-controlled number of floating IPs; however, these need to be allocated by a user before they are available for use. To allocate a floating IP to a project, there is an “Allocate IP to Project” button on the “Access & Security” page of the dashboard, or on the command line use:

$ nova floating-ip-create

Once allocated, a floating IP can be assigned to running instances from the dashboard, either by selecting “Associate Floating IP” from the actions drop-down next to the IP on the “Access & Security” page, or by the same action next to the instance you wish to associate it with on the “Instances” page. The inverse action, “Dissociate Floating IP”, is only available from the “Access & Security” page and not from the “Instances” page.

From the command line, enter the following command to complete these tasks:

$ nova add-floating-ip <server> <address>
$ nova remove-floating-ip <server> <address>
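
Tying these together, a sketch of the allocate-then-associate flow from the command line; the server name is illustrative, and the address comes from the output of the first command:

$ nova floating-ip-create
$ nova add-floating-ip test-instance <address from the previous command>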

Attaching Block Storage

You can attach block storage to instances from the dashboard on the Volumes page. Click the Edit Attachments action next to the volume you wish to attach.

To perform this action from command line, run the following command:

$ nova volume-attach <server> <volume> 

You can also specify block device mapping at instance boot time through the nova command-line client, as follows:

--block-device-mapping <dev-name=mapping> 

The block device mapping format is <dev-name>=<id>:<type>:<size(GB)>:<delete-on-terminate>, where:

dev-name

A device name, where the volume is attached in the system at /dev/dev_name.

id

The ID of the volume to boot from, as shown in the output of nova volume-list.

type

Either snap, which means that the volume was created from a snapshot, or anything other than snap (a blank string is valid). In the example below, the volume was not created from a snapshot, so we leave this field blank.

size (GB)

The size of the volume, in GB. It is safe to leave this blank and have the Compute service infer the size.

delete-on-terminate

A boolean to indicate whether the volume should be deleted when the instance is terminated. True can be specified as True or 1. False can be specified as False or 0.

If you have previously prepared the block storage with a bootable file system image, it is even possible to boot from persistent block storage. The following example attempts to boot from the volume with ID 13 and does not delete it on terminate. Replace --key-name with a valid keypair name:

$ nova boot --flavor 2 --key-name mykey --block-device-mapping vda=13:::0 boot-from-vol-test

Because of bug 1163566 (https://bugs.launchpad.net/nova/+bug/1163566) you must specify an image when booting from a volume in Horizon, even though this image is not used.

To boot normally from an image and attach block storage, map to a device other than vda.

Taking Snapshots

OpenStack’s snapshot mechanism allows you to create new images from running instances. This is very convenient for upgrading base images or for taking a published image and customizing it for local use. To snapshot a running instance to an image using the CLI:

$ nova image-create <instance name or uuid> <name of new image>

The Dashboard interface for snapshots can be confusing because the Images & Snapshots page splits content up into:

  • Images

  • Instance snapshots

  • Volume snapshots

However, an instance snapshot is an image. The only difference between an image that you upload directly to glance and an image you create by snapshot is that an image created by snapshot has additional properties in the glance database. These properties are found in the image_properties table, and include:

name              value

image_type        snapshot
instance_uuid     <uuid of instance that was snapshotted>
base_image_ref    <uuid of original image of instance that was snapshotted>
image_location    snapshot

Ensuring snapshots are consistent

Content from Sébastien Han’s OpenStack: Perform Consistent Snapshots blog entry (http://www.sebastien-han.fr/blog/2012/12/10/openstack-perform-consistent-snapshots/)

A snapshot captures the state of the file system, but not the state of the memory. Therefore, to ensure your snapshot contains the data that you want, before your snapshot you need to ensure that:

  • Running programs have written their contents to disk

  • The file system does not have any “dirty” buffers: where programs have issued the command to write to disk, but the operating system has not yet done the write

To ensure that important services have written their contents to disk (such as databases), we recommend that you read the documentation for those applications to determine what commands to issue to have them sync their contents to disk. If you are unsure how to do this, the safest approach is to simply stop these running services normally.

To deal with the “dirty” buffer issue, we recommend using the sync command before snapshotting:

# sync

Running sync writes dirty buffers (buffered blocks that have been modified but not yet written to disk) to disk.

Just running sync is not enough to ensure that the file system is consistent. We recommend that you use the fsfreeze tool, which halts new access to the file system and creates a stable image on disk that is suitable for snapshotting. fsfreeze supports several file systems, including ext3, ext4, and XFS. If your virtual machine instance is running on Ubuntu, install the util-linux package to get fsfreeze:

# apt-get install util-linux

If your operating system doesn’t have a version of fsfreeze available, you can use xfs_freeze instead, which is available on Ubuntu in the xfsprogs package. Despite the “xfs” in the name, xfs_freeze also works on ext3 and ext4 if you are using a Linux kernel version 2.6.29 or greater, since it works at the virtual file system (VFS) level starting at 2.6.29. xfs_freeze supports the same command-line arguments as fsfreeze.

Consider the example where you want to take a snapshot of a persistent block storage volume, detected by the guest operating system as /dev/vdb and mounted on /mnt. The fsfreeze command accepts two arguments:

  • -f: freeze the system

  • -u: thaw (un-freeze) the system

To freeze the volume in preparation for snapshotting, you would do, as root, inside of the instance:

# fsfreeze -f /mnt

You must mount the file system before you run the fsfreeze command.

When the “fsfreeze -f” command is issued, all ongoing transactions in the file system are allowed to complete, new write system calls are halted, and other calls which modify the file system are halted. Most importantly, all dirty data, metadata, and log information are written to disk.

Once the volume has been frozen, do not attempt to read from or write to the volume, as these operations hang. The operating system stops every I/O operation, and any I/O attempt is delayed until the file system has been unfrozen.

Once you have issued the fsfreeze command, it is safe to perform the snapshot. For example, if your instance was named mon-instance, and you wanted to snapshot it to an image, named mon-snapshot, you could now run the following:

$ nova image-create mon-instance mon-snapshot

When the snapshot is done, you can thaw the file system with the following command, as root, inside of the instance:

# fsfreeze -u /mnt

If you want to back up the root file system, you can’t simply run the command above because it will freeze the prompt. Instead, run the following one-liner, as root, inside the instance:

# fsfreeze -f / && sleep 30 && fsfreeze -u /

Instances in the Database

While instance information is stored in a number of database tables, the table that operators are most likely to need to look at in relation to user instances is the “instances” table.

The instances table carries most of the information related to both running and deleted instances. It has a bewildering array of fields; for an exhaustive list, look at the database. These are the most useful fields for operators looking to form queries.

The “deleted” field is set to “1” if the instance has been deleted and NULL if it has not been deleted. This field is important for excluding deleted instances from your queries.

The “uuid” field is the UUID of the instance and is used throughout other tables in the database as a foreign key. This ID is also reported in logs, the dashboard, and command-line tools to uniquely identify an instance.

A collection of foreign keys are available to find relations to the instance. The most useful of these are “user_id” and “project_id”, the UUIDs of the user who launched the instance and the project it was launched in.

The “host” field tells which compute node is hosting the instance.

The “hostname” field holds the name of the instance when it is launched. The “display_name” is initially the same as hostname but can be reset using the nova rename command.
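
Putting these fields together, a quick sketch of a query listing non-deleted instances for one project and the compute node hosting each, assuming the database is named nova and following the “deleted” convention described above:

mysql> select uuid, hostname, host from nova.instances
    -> where deleted is null and project_id = '<project uuid>';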

A number of time related fields are useful for tracking when state changes happened on an instance:

  • created_at

  • updated_at

  • deleted_at

  • scheduled_at

  • launched_at

  • terminated_at
