Configuring OpenStack Compute for Cinder-volume

We now need to tell our OpenStack compute service about our new Cinder-volume service.

Getting ready

As we are performing this setup in a multi-node environment, you will need to be logged into your controller, compute, and cinder nodes.

If you are using the Vagrant environment that accompanies this book, you can log in to these nodes as follows:

vagrant ssh controller
vagrant ssh compute-01
vagrant ssh cinder

This recipe assumes that an openrc file with admin credentials is available on each node where you will run commands. To create one, open a text file named openrc and add the following contents:

export OS_TENANT_NAME=cookbook
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=https://192.168.100.200:5000/v2.0/
export OS_KEY=/path/to/cakey.pem
export OS_CACERT=/path/to/ca.pem
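
Assuming the addresses and credentials above match your environment, a quick way to confirm the file works is to source it and issue a Keystone call:

source openrc
# A table of registered services indicates authentication succeeded
keystone service-list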

How to do it...

In our multi-node installation, we will need to configure the controller, compute, and cinder nodes, and the instructions below are broken down in that order.

To configure your OpenStack controller node for cinder-volume, perform the following steps:

  1. In our multi-node configuration, the OpenStack controller node is responsible for authentication (Keystone) as well as hosting the Cinder database. First, log in to the controller to configure authentication by running the following code:
    source openrc
    keystone service-create \
        --name volume \
        --type volume \
        --description 'Volume Service'
    
    CINDER_SERVICE_ID=$(keystone service-list | awk '/ volume / {print $2}')
    
    PUB_CINDER_ENDPOINT="192.168.0.211"
    INT_CINDER_ENDPOINT="172.16.0.211"
    
    PUBLIC="http://$PUB_CINDER_ENDPOINT:8776/v1/%(tenant_id)s"
    
    ADMIN="http://$INT_CINDER_ENDPOINT:8776/v1/%(tenant_id)s"
    
    INTERNAL=$PUBLIC
    
    keystone endpoint-create \
        --region RegionOne \
        --service_id $CINDER_SERVICE_ID \
        --publicurl $PUBLIC \
        --adminurl $ADMIN \
        --internalurl $INTERNAL
    
    # Look up the service tenant and admin role IDs used below
    SERVICE_TENANT_ID=$(keystone tenant-list | awk '/ service / {print $2}')
    ADMIN_ROLE_ID=$(keystone role-list | awk '/ admin / {print $2}')
    
    keystone user-create \
        --name cinder \
        --pass cinder \
        --tenant_id $SERVICE_TENANT_ID \
        --email cinder@localhost --enabled true
    
    CINDER_USER_ID=$(keystone user-list \
        | awk '/ cinder / {print $2}')
    
    keystone user-role-add \
        --user $CINDER_USER_ID \
        --role $ADMIN_ROLE_ID \
        --tenant_id $SERVICE_TENANT_ID
  2. Next, we create the MariaDB/MySQL database for use with Cinder (a verification sketch for these controller steps follows the list):
    MYSQL_ROOT_PASS=openstack
    
    MYSQL_CINDER_PASS=openstack
    
    mysql -uroot -p$MYSQL_ROOT_PASS \
        -e 'CREATE DATABASE cinder;'
    
    mysql -uroot -p$MYSQL_ROOT_PASS \
        -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%';"
    
    mysql -uroot -p$MYSQL_ROOT_PASS \
        -e "SET PASSWORD FOR 'cinder'@'%' = PASSWORD('$MYSQL_CINDER_PASS');"
  3. Add the following lines to the /etc/nova/nova.conf file under the [DEFAULT] section:
    volume_driver=nova.volume.driver.ISCSIDriver
    
    enabled_apis=ec2,osapi_compute,metadata
    volume_api_class=nova.volume.cinder.API
    iscsi_helper=tgtadm
  4. Now restart the nova services:
    for P in $(ls /etc/init/nova* | cut -d'/' -f4 | cut -d'.' -f1)
    do
      sudo stop ${P}
      sudo start ${P}
    done 
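
Before moving on to the compute nodes, you can verify the controller-side changes. This is a minimal sketch; it assumes the service name, endpoint port, and MySQL root password used in the steps above:

source openrc
# The volume service and its endpoints on port 8776 should be listed
keystone service-list | grep volume
keystone endpoint-list | grep 8776
# The cinder database should now exist (assumes the MySQL root password from step 2)
mysql -uroot -popenstack -e 'SHOW DATABASES;' | grep cinder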
    

To configure the OpenStack compute nodes for Cinder, perform the following steps:

  1. Next on our list for configuration are the OpenStack compute nodes. We will show you how to configure the first node. You will need to replicate this configuration against all of your compute nodes. Start by logging in to a compute node:
    vagrant ssh compute-01
    
  2. Add the following lines to the /etc/nova/nova.conf file under the [DEFAULT] section (a quick check of these entries is sketched after this list):
    volume_driver=nova.volume.driver.ISCSIDriver
    enabled_apis=ec2,osapi_compute,metadata
    volume_api_class=nova.volume.cinder.API
    iscsi_helper=tgtadm
  3. Now restart the nova services:
    for P in $(ls /etc/init/nova* | cut -d'/' -f4 | cut -d'.' -f1)
    do
      sudo stop ${P}
      sudo start ${P}
    done
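
As a quick check after the restart, you can confirm that the Cinder-related options were picked up and that the nova services came back. This sketch only assumes the file path and option names shown above:

# The Cinder-related entries should be present in nova.conf
grep -E 'volume_api_class|volume_driver|iscsi_helper' /etc/nova/nova.conf
# The nova services should be running again
ps -ef | grep '[n]ova'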
    

To configure the Cinder node with the cinder-volume service, log into the Cinder node and perform the following steps:

  1. Add the following lines to /etc/cinder/cinder.conf to enable communication with Keystone:
    [keystone_authtoken]
    auth_uri = https://192.168.100.200:5000/v2.0/
    identity_uri = https://192.168.100.200:35357/
    admin_tenant_name = service
    admin_user = cinder
    admin_password = cinder
    insecure = True
  2. Next, we modify /etc/cinder/cinder.conf to configure the database, iSCSI, and RabbitMQ. Ensure cinder.conf has the following lines:
    [DEFAULT]
    rootwrap_config=/etc/cinder/rootwrap.conf
    iscsi_helper=tgtadm
    volume_name_template = volume-%s
    volume_group = cinder-volumes
    verbose = True
    auth_strategy = keystone
    state_path = /var/lib/cinder/
    
    # Add these when not using the defaults.
    rabbit_host = 172.16.0.200
    rabbit_port = 5672
    
    [database]
    backend=sqlalchemy
    connection = mysql://cinder:openstack@172.16.0.200/cinder
  3. To wrap up, we populate the Cinder database and restart the Cinder services (a short verification sketch follows this list):
    cinder-manage db sync
    cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done
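
Assuming the openrc file from the Getting ready section is also present on the Cinder node, you can confirm that the cinder-volume service is answering requests. The volume name below is just an example:

source openrc
# An empty table indicates the API and database are reachable
cinder list
# Create a small test volume and confirm it reaches the available state
cinder create --display-name test 1
cinder list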
    

How it works...

In our multi-node OpenStack environment, enabling the cinder-volume service requires changes on several nodes. On the OpenStack controller node, we created a Keystone service, endpoint, and user, and assigned the cinder user the admin role within the service tenant. Also on the controller, we created a cinder MySQL database and modified nova.conf to allow the use of Cinder.

On our compute nodes, the modifications were much simpler as we only needed to modify nova.conf to enable Cinder.

Finally, we configured the Cinder node itself. We did this by configuring Keystone authentication, pointing the Cinder service at its MySQL database, and populating that database. After this, we wrapped up by restarting the Cinder services.
