From this point forward in the guide, we assume that you have an OpenStack cloud up and running. This section helps you set up your working environment and use it to take a look around your cloud.
We recommend using a combination of the OpenStack command line interface (CLI) client tools and the OpenStack Dashboard. Some users with a background in other cloud technologies may be using the EC2 Compatibility API, which uses somewhat different naming conventions from the native API. We highlight those differences.
We strongly suggest that you install the command-line clients from the Python Package Index (PyPI) (https://pypi.python.org/) instead of from the Ubuntu or Fedora packages. The clients are under heavy development, and it is very likely that, at any given time, the versions of the packages distributed by your operating system vendor are out of date.
The “pip” utility is used to manage package installation from the PyPI archive and is available in the “python-pip” package in most Linux distributions. Each OpenStack project has its own client, so depending on which services your site runs, install some or all of the following packages:
python-novaclient (nova CLI)
python-glanceclient (glance CLI)
python-keystoneclient (keystone CLI)
python-cinderclient (cinder CLI)
python-swiftclient (swift CLI)
python-quantumclient (quantum CLI)
To install (or upgrade) a package from the PyPI archive with pip, as root:
# pip install [--upgrade] <package-name>
To remove the package:
# pip uninstall <package-name>
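To check which client versions are currently installed, pip can list them. The following is just a quick sketch; pip freeze lists every installed Python package, and the grep pattern is only one way to narrow the output to the OpenStack clients:
$ pip freeze | grep -i client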
If you need even newer versions of the clients, pip can install directly from the upstream git repository using the -e flag. You must specify a name for the Python egg that is installed. For example:
# pip install -e git+https://github.com/openstack/python-novaclient.git#egg=python-novaclient
If you support the EC2 API on your cloud, you should also install the “euca2ools” package or some other EC2 API tool so that you can get the same view your users have. Using EC2 API-based tools is mostly outside the scope of this guide, though we discuss getting credentials for use with them.
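For example, on Ubuntu the tools are available as a distribution package (a sketch; the package name and preferred installation method can differ between distributions and releases):
# apt-get install euca2ools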
There are also several *-manage command line tools:
nova-manage
glance-manage
keystone-manage
cinder-manage
Unlike the tools mentioned above, the *-manage tools must be run from the cloud controller, as root, because they need read access to config files such as /etc/nova/nova.conf and make queries directly against the database rather than against the OpenStack API endpoints.
The existence of the *-manage tools is a legacy issue. It is a goal of the OpenStack project to eventually migrate all of the remaining functionality in the *-manage tools into the regular client tools. Until that day, you need to SSH into the cloud controller node to perform some maintenance operations that require one of the *-manage tools.
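For example, a maintenance session might look like the following sketch, with cloud.example.com standing in for your cloud controller's hostname (the same placeholder used in the output examples later in this section):
$ ssh root@cloud.example.com
# nova-manage service list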
You must have the appropriate credentials if you wish to use the command line tools to make queries against your OpenStack cloud. By far the easiest way to obtain authentication credentials for the command line clients is to use the horizon dashboard. From the top right navigation row, select the Settings link to access the user settings page, where you can set your language and timezone preferences for the dashboard view. More importantly, this action changes the left hand navigation column to include OpenStack API and EC2 Credentials links. These links let you generate files that you can source in your shell to populate the environment variables the command line tools require: where your service endpoints are, as well as your authentication information.
To use the OpenStack native tools, follow the OpenStack API link. The top section lists the URLs of your service endpoints and, below that, is a section titled Download OpenStack RC File. To look at the cloud as an administrator, you can choose admin from the drop-down menu. In this section, select the project you wish to get credentials for and click Download RC. This generates a file called openrc.sh, which looks something like this:
#!/bin/bash

# With the addition of Keystone, to use an openstack cloud you should
# authenticate against keystone, which returns a **Token** and **Service
# Catalog**. The catalog contains the endpoint for all services the
# user/tenant has access to - including nova, glance, keystone, swift.
#
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0.
# We use the 1.1 *compute api*
export OS_AUTH_URL=http://203.0.113.10:5000/v2.0

# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=98333aba48e756fa8f629c83a818ad57
export OS_TENANT_NAME="test-project"

# In addition to the owning entity (tenant), openstack stores the entity
# performing the action as the **user**.
export OS_USERNAME=test-user

# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -s OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
This does not save your password in plain text, which is a good thing. But when you source or run the script, it prompts for your password and then stores your response in the environment variable OS_PASSWORD. It is important to note that this requires interactivity. It is possible to store a value directly in the script if you require non-interactive operation, but you then need to be extremely cautious with the security and permissions of this file.
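A minimal sketch of that non-interactive variant (assuming you accept the risk of a password stored on disk) replaces the echo/read/export lines at the end of openrc.sh with a single hard-coded export:
export OS_PASSWORD=test-password
Then lock down the file's permissions before sourcing it:
$ chmod 600 openrc.sh
$ source openrc.sh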
EC2 compatibility credentials can be downloaded by selecting the “EC2 Credentials” link in the left hand navigation bar, then selecting the project you want credentials for and clicking “Download EC2 Credentials”. This generates a zip file with server x509 certificates and a shell script fragment. Create a new directory in a secure location, because, unlike the default openrc, these are live credentials containing all the authentication information required to access your cloud identity. Extract the zip file here. You should have cacert.pem, cert.pem, ec2rc.sh, and pk.pem. The ec2rc.sh is similar to this:
#!/bin/bash

NOVARC=$(readlink -f "${BASH_SOURCE:-${0}}" 2>/dev/null) ||
    NOVARC=$(python -c 'import os,sys; print os.path.abspath(os.path.realpath(sys.argv[1]))' "${BASH_SOURCE:-${0}}")
NOVA_KEY_DIR=${NOVARC%/*}
export EC2_ACCESS_KEY=df7f93ec47e84ef8a347bbb3d598449a
export EC2_SECRET_KEY=ead2fff9f8a344e489956deacd47e818
export EC2_URL=http://203.0.113.10:8773/services/Cloud
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this

alias ec2-bundle-image="ec2-bundle-image --cert $EC2_CERT --privatekey $EC2_PRIVATE_KEY --user 42 --ec2cert $NOVA_CERT"
alias ec2-upload-bundle="ec2-upload-bundle -a $EC2_ACCESS_KEY -s $EC2_SECRET_KEY --url $S3_URL --ec2cert $NOVA_CERT"
To put the EC2 credentials into your environment, source the ec2rc.sh file.
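With the euca2ools package installed, a quick check that the credentials work might look like this (a sketch; euca-describe-instances is roughly the EC2 equivalent of nova list):
$ source ec2rc.sh
$ euca-describe-instances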
The command line tools can be made to show the OpenStack API calls they make by passing the --debug flag, for example:
# nova --debug list
This example shows the HTTP requests from the client and the responses from the endpoints, which can be helpful in creating custom tools written to the OpenStack API.
Keyring Support (https://wiki.openstack.org/wiki/KeyringSupport) can be a source of confusion to the point that, as of the time of this writing, there is a bug report (https://bugs.launchpad.net/python-novaclient/+bug/1020238) which has been open, closed as invalid, and reopened through a few cycles.
The issue is that under some conditions, the command line tools try to use a Python keyring as a credential cache and, under a subset of those conditions, another condition can arise where the tools prompt for a keyring password on each use. If you find yourself in this unfortunate subset, adding the --no-cache flag or setting the environment variable OS_NO_CACHE=1 avoids the credentials cache. This causes the command line tool to authenticate on each and every interaction with the cloud.
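For example, to disable the cache for an entire shell session (a sketch; the same export line could equally be added to the openrc.sh file discussed above):
$ export OS_NO_CACHE=1
$ nova list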
Underlying the use of the command line tools is the OpenStack API, which is a RESTful API that runs over HTTP. There may be cases where you want to interact with the API directly or need to use it because of a suspected bug in one of the CLI tools. The best way to do this is to use a combination of cURL (http://curl.haxx.se/) to make the requests and another tool, such as jq (http://stedolan.github.com/jq/), to parse the JSON from the responses.
The first thing you must do is authenticate with the cloud using your credentials to get an authentication token.
Your credentials are a combination of username, password, and tenant (project). You can extract these values from the openrc.sh discussed above. The token allows you to interact with your other service endpoints without needing to re-authenticate for every request. Tokens are typically good for 24 hours; when the token expires, you are alerted with a 401 (Unauthorized) response, and you can request another token.
Look at your OpenStack service catalog:
$ curl -s -X POST http://203.0.113.10:35357/v2.0/tokens \
  -d '{"auth": {"passwordCredentials": {"username":"test-user", "password":"test-password"}, "tenantName":"test-project"}}' \
  -H "Content-type: application/json" | jq .
Read through the JSON response to get a feel for how the catalog is laid out.
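As a sketch of the kind of filtering jq makes possible, the following pulls just the service names and their public endpoints out of the catalog, assuming the v2.0 response layout in which the catalog appears under .access.serviceCatalog:
$ curl -s -X POST http://203.0.113.10:35357/v2.0/tokens \
  -d '{"auth": {"passwordCredentials": {"username":"test-user", "password":"test-password"}, "tenantName":"test-project"}}' \
  -H "Content-type: application/json" \
  | jq -r '.access.serviceCatalog[] | .name + " " + .endpoints[0].publicURL'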
To make working with subsequent requests easier, store the token in an environment variable.
$ TOKEN=`curl -s -X POST http://203.0.113.10:35357/v2.0/tokens \
  -d '{"auth": {"passwordCredentials": {"username":"test-user", "password":"test-password"}, "tenantName":"test-project"}}' \
  -H "Content-type: application/json" | jq -r .access.token.id`
Now you can refer to your token on the command line as $TOKEN.
Pick a service endpoint from your service catalog, such as compute, and try out a request like listing instances (servers).
$ curl -s \
  -H "X-Auth-Token: $TOKEN" \
  http://203.0.113.10:8774/v2/98333aba48e756fa8f629c83a818ad57/servers | jq .
To discover how API requests should be structured, read the OpenStack API Reference (http://api.openstack.org/api-ref.html). To chew through the responses using jq, see the jq Manual (http://stedolan.github.com/jq/manual/).
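For instance, a first jq filter against the servers response above might extract only the instance names, assuming the standard layout with a top-level .servers array:
$ curl -s -H "X-Auth-Token: $TOKEN" \
  http://203.0.113.10:8774/v2/98333aba48e756fa8f629c83a818ad57/servers \
  | jq -r '.servers[].name'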
The -s flag used in the cURL commands above is used to prevent the progress meter from being shown. If you are having trouble running cURL commands, you’ll want to remove it. Likewise, to help you troubleshoot cURL commands, you can include the -v flag to show you the verbose output. There are many more extremely useful features in cURL; refer to the man page for all of the options.
As an administrator, there are a few ways to discover what your OpenStack cloud looks like simply by using the OpenStack tools available. This section gives you an idea of how to get an overview of your cloud, its shape, size, and current state.
First, you can discover what servers belong to your OpenStack cloud by running:
$ nova-manage service list | sort
The output looks like the following:
Binary           Host              Zone Status  State Updated_At
nova-cert        cloud.example.com nova enabled :-)   2013-02-25 19:32:38
nova-compute     c01.example.com   nova enabled :-)   2013-02-25 19:32:35
nova-compute     c02.example.com   nova enabled :-)   2013-02-25 19:32:32
nova-compute     c03.example.com   nova enabled :-)   2013-02-25 19:32:36
nova-compute     c04.example.com   nova enabled :-)   2013-02-25 19:32:32
nova-compute     c05.example.com   nova enabled :-)   2013-02-25 19:32:41
nova-consoleauth cloud.example.com nova enabled :-)   2013-02-25 19:32:36
nova-network     cloud.example.com nova enabled :-)   2013-02-25 19:32:32
nova-scheduler   cloud.example.com nova enabled :-)   2013-02-25 19:32:33
The output shows that there are five compute nodes and one cloud controller. A smiley face, such as :-), indicates that the service is up, running, and functional. If a service is no longer available, the :-) changes to XXX. This is an indication that you should troubleshoot why the service is down.
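When a service shows XXX, a reasonable first step is to look at its log on the affected node and restart it. The following sketch assumes Ubuntu packages, with logs under /var/log/nova and services managed through the service command; adjust paths and commands to your distribution:
$ ssh c01.example.com
# tail /var/log/nova/nova-compute.log
# service nova-compute restart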
If you are using Cinder, run the following command to see a similar listing:
$ cinder-manage host list | sort
host               zone
c01.example.com    nova
c02.example.com    nova
c03.example.com    nova
c04.example.com    nova
c05.example.com    nova
cloud.example.com  nova
With these two tables, you now have a good overview of what servers and services make up your cloud.
You can also use the Identity Service (Keystone) to see what services are available in your cloud, as well as what endpoints have been configured for the services.
The following commands require you to have your shell environment configured with the proper administrative variables.
$ keystone service-list
+-----+----------+----------+----------------------------+
| id  | name     | type     | description                |
+-----+----------+----------+----------------------------+
| ... | cinder   | volume   | Cinder Service             |
| ... | glance   | image    | OpenStack Image Service    |
| ... | nova_ec2 | ec2      | EC2 Service                |
| ... | keystone | identity | OpenStack Identity Service |
| ... | nova     | compute  | OpenStack Compute Service  |
+-----+----------+----------+----------------------------+
The output above shows that there are five services configured.
To see the endpoint of each service, run:
$ keystone endpoint-list
---+------------------------------------------+--
   | publicurl                                |
---+------------------------------------------+--
   | http://example.com:8774/v2/%(tenant_id)s |
   | http://example.com:9292                  |
   | http://example.com:8000/v1               |
   | http://example.com:5000/v2.0             |
---+------------------------------------------+--

---+------------------------------------------+--
   | adminurl                                 |
---+------------------------------------------+--
   | http://example.com:8774/v2/%(tenant_id)s |
   | http://example.com:9292                  |
   | http://example.com:8000/v1               |
   | http://example.com:5000/v2.0             |
---+------------------------------------------+--
This example shows two columns pulled from the larger listing. There should be a one-to-one mapping between a service and endpoint. Note the different URLs and ports between the public URL and the admin URL for some services.
You can find the version of the Compute installation by using the nova-manage command:
$ nova-manage version list
You can obtain extra information about running virtual machines, such as CPU usage, memory, disk I/O, and network I/O per instance, by running the nova diagnostics command with a server ID:
$ nova diagnostics <serverID>
The output of this command will vary depending on the hypervisor. Example output when the hypervisor is Xen:
+----------------+-----------------+
| Property | Value |
+----------------+-----------------+
| cpu0 | 4.3627 |
| memory | 1171088064.0000 |
| memory_target | 1171088064.0000 |
| vbd_xvda_read | 0.0 |
| vbd_xvda_write | 0.0 |
| vif_0_rx | 3223.6870 |
| vif_0_tx | 0.0 |
| vif_1_rx | 104.4955 |
| vif_1_tx | 0.0 |
+----------------+-----------------+
While the command should work with any hypervisor that is controlled through libvirt (e.g., KVM, QEMU, LXC), it has only been tested with KVM. Example output when the hypervisor is KVM:
+------------------+------------+
| Property | Value |
+------------------+------------+
| cpu0_time | 2870000000 |
| memory | 524288 |
| vda_errors | -1 |
| vda_read | 262144 |
| vda_read_req | 112 |
| vda_write | 5606400 |
| vda_write_req | 376 |
| vnet0_rx | 63343 |
| vnet0_rx_drop | 0 |
| vnet0_rx_errors | 0 |
| vnet0_rx_packets | 431 |
| vnet0_tx | 4905 |
| vnet0_tx_drop | 0 |
| vnet0_tx_errors | 0 |
| vnet0_tx_packets | 45 |
+------------------+------------+
Next, take a look at what Fixed IP networks are configured in your cloud. You can use the nova command-line client to get the IP ranges.
$ nova network-list
+--------------------------------------+--------+--------------+
| ID                                   | Label  | Cidr         |
+--------------------------------------+--------+--------------+
| 3df67919-9600-4ea8-952e-2a7be6f70774 | test01 | 10.1.0.0/24  |
| 8283efb2-e53d-46e1-a6bd-bb2bdef9cb9a | test02 | 10.1.1.0/24  |
+--------------------------------------+--------+--------------+
The nova-manage tool can provide some additional details.
$ nova-manage network list
id  IPv4         IPv6  start address  DNS1  DNS2  VlanID  project  uuid
1   10.1.0.0/24  None  10.1.0.3       None  None  300     2725bbd  beacb3f2
2   10.1.1.0/24  None  10.1.1.3       None  None  301     none     d0b1a796
This output shows that two networks are configured, each a /24 subnet containing 256 addresses. The first network has been assigned to a certain project, while the second network is still open for assignment. You can assign this network manually, or it is automatically assigned when a project launches its first instance.
To find out if any floating IPs are available in your cloud, run:
$ nova-manage floating list
2725bbd458e2459a8c1bd36be859f43f  1.2.3.4  None                                  nova  vlan20
None                              1.2.3.5  48a415e7-6f07-4d33-ad00-814e60b010ff  nova  vlan20
Here, two floating IPs are available. The first has been allocated to a project while the other is unallocated.
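If you need to add a range of floating IPs, nova-manage can create one. This is only a sketch; the exact arguments have changed between releases, so check nova-manage floating --help on your installation:
# nova-manage floating create --ip_range=1.2.3.0/29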
To see a list of projects that have been added to the cloud, run:
$ keystone tenant-list
+-----+----------+---------+
| id  | name     | enabled |
+-----+----------+---------+
| ... | jtopjian | True    |
| ... | alvaro   | True    |
| ... | everett  | True    |
| ... | admin    | True    |
| ... | services | True    |
| ... | jonathan | True    |
| ... | lorin    | True    |
| ... | anne     | True    |
| ... | rhulsker | True    |
| ... | tom      | True    |
| ... | adam     | True    |
+-----+----------+---------+
To see a list of users, run:
$ keystone user-list
+-----+----------+---------+--------------------+
| id  | name     | enabled | email              |
+-----+----------+---------+--------------------+
| ... | everett  | True    | [email protected]  |
| ... | jonathan | True    | [email protected]  |
| ... | nova     | True    | nova@localhost     |
| ... | rhulsker | True    | [email protected]  |
| ... | lorin    | True    | [email protected]  |
| ... | alvaro   | True    | [email protected]  |
| ... | anne     | True    | [email protected]  |
| ... | admin    | True    | root@localhost     |
| ... | cinder   | True    | cinder@localhost   |
| ... | glance   | True    | glance@localhost   |
| ... | jtopjian | True    | [email protected]  |
| ... | adam     | True    | [email protected]  |
| ... | tom      | True    | [email protected]  |
+-----+----------+---------+--------------------+
Sometimes a user and a group have a one-to-one mapping. This happens for standard system accounts, such as cinder, glance, nova, and swift, or when only one user is ever part of a group.
To see a list of running instances, run:
$ nova list --all-tenants
+-----+------------------+--------+-------------------------------------------+
| ID  | Name             | Status | Networks                                  |
+-----+------------------+--------+-------------------------------------------+
| ... | Windows          | ACTIVE | novanetwork_1=10.1.1.3, 199.116.232.39    |
| ... | cloud controller | ACTIVE | novanetwork_0=10.1.0.6; jtopjian=10.1.2.3 |
| ... | compute node 1   | ACTIVE | novanetwork_0=10.1.0.4; jtopjian=10.1.2.4 |
| ... | devbox           | ACTIVE | novanetwork_0=10.1.0.3                    |
| ... | devstack         | ACTIVE | novanetwork_0=10.1.0.5                    |
| ... | initial          | ACTIVE | nova_network=10.1.7.4, 10.1.8.4           |
| ... | lorin-head       | ACTIVE | nova_network=10.1.7.3, 10.1.8.3           |
+-----+------------------+--------+-------------------------------------------+
Unfortunately, this command does not tell you various details about the running instances, such as what compute node the instance is running on, what flavor the instance is, and so on. You can use the following command to view details about individual instances:
$ nova show <uuid>
For example:
# nova show 81db556b-8aa5-427d-a95c-2a9a6972f630
+-------------------------------------+-----------------------------------+
| Property                            | Value                             |
+-------------------------------------+-----------------------------------+
| OS-DCF:diskConfig                   | MANUAL                            |
| OS-EXT-SRV-ATTR:host                | c02.example.com                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname | c02.example.com                   |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000029                 |
| OS-EXT-STS:power_state              | 1                                 |
| OS-EXT-STS:task_state               | None                              |
| OS-EXT-STS:vm_state                 | active                            |
| accessIPv4                          |                                   |
| accessIPv6                          |                                   |
| config_drive                        |                                   |
| created                             | 2013-02-13T20:08:36Z              |
| flavor                              | m1.small (6)                      |
| hostId                              | ...                               |
| id                                  | ...                               |
| image                               | Ubuntu 12.04 cloudimg amd64 (...) |
| key_name                            | jtopjian-sandbox                  |
| metadata                            | {}                                |
| name                                | devstack                          |
| novanetwork_0 network               | 10.1.0.5                          |
| progress                            | 0                                 |
| security_groups                     | [{u'name': u'default'}]           |
| status                              | ACTIVE                            |
| tenant_id                           | ...                               |
| updated                             | 2013-02-13T20:08:59Z              |
| user_id                             | ...                               |
+-------------------------------------+-----------------------------------+