We will launch our first Trove instance with one simple command. We will discuss the command format in detail in the coming section. However, the simplest command will need the following information:

- The flavor to use: m1.small, for which we will use the ID 2
- The name of the instance: mytest

Now that we have all the information we need, let us take a look at the command:
trove create mytest 2 --size 1 --datastore mysql --datastore_version 5.6
This will return the ID of the instance being created.
This instance will be ready in a while; we can track the progress using the trove list command. The system will first request nova to create an instance using the glance image, and then wait for the guest agent to boot up, connect to the message queue, pick up the prepare message left by trove-taskmanager, and work its magic.
There are several other options you can use and pass in the command-line options (type trove help create
to see the options and the descriptions).
Let's take a look at what is happening under the covers while the system is being built. We would like to bring to your attention a diagram that we saw in the previous chapter. The following is a snippet of the diagram that we saw previously:
The broad steps in the creation phase are the following:

- The Trove task manager asks Nova to boot an instance from the Trove glance image and Cinder to create the data volume.
- The task manager leaves a prepare message in the RabbitMQ for the guest agent to pick up.

In the preceding diagram, the trove create command has put the trove instance in the build state. We will now execute the commands nova list and cinder list to see what is happening.
The Trove task manager makes the API calls to create the Nova instance and the Cinder volume.
The following output from nova list
shows the machine is getting provisioned:
We need to notice a few points here: alongside the Nova instance, a Cinder volume of the requested size has been created for the database data. A simple cinder list will confirm this.

Looking at the output, we should take notice of a few things, in particular that the volume size matches what we passed to the trove create command (1 GB). If we continue to execute the nova list command, the status will change from BUILDING to ACTIVE and the power state will change to Running.
Meanwhile, the Trove task manager would have created a queue and left the Prepare message there. In order to view the message left in the AMQP queue, we will need the RabbitMQ management plugin.
The RabbitMQ management plugin can be installed on the system by using the command:
sudo rabbitmq-plugins enable rabbitmq_management
This enables the RabbitMQ management plugin, which can provide the GUI and CLI access to the message queue system. We will now need to restart the rabbitmq-server
process.
sudo service rabbitmq-server restart
After this is completed, we should be able to log in to the GUI using the following URL: http://172.22.6.246:15672/
(replace the IP address with the IP of the server running the DevStack instance). The username and password will be guest and guest respectively (unless you have changed them in the RabbitMQ configuration).
Go to the Queues tab and filter it by the ID of the Trove instance (in our case, 879dcf19-8fd6-4044-9a4c-30577b5b52dd).
You will see that the guest queue is created and has a message waiting to be read.
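When filtering the queues from a script, the guest queue name can be derived from the instance ID. The following is a minimal sketch; the guestagent.&lt;instance-id&gt; naming is the Trove default, but verify it against your deployment:

```shell
# Derive the guest agent queue name from the Trove instance ID.
# The guestagent.<instance-id> pattern is the Trove default naming;
# confirm it in your deployment before relying on it.
INSTANCE_ID="879dcf19-8fd6-4044-9a4c-30577b5b52dd"
QUEUE_NAME="guestagent.${INSTANCE_ID}"
echo "${QUEUE_NAME}"
```

This name can then be pasted into the management GUI's queue filter.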
The message will be cleared from the queue once the guest instance picks it up. The message itself is a fairly long string of JSON carrying the information the guest agent needs; with it, the guest agent is able to configure the instance (as per the prepare call steps that were mentioned in the previous chapter) to the requirements of the user.
The following table provides a glimpse of what is happening depending on the state of the Trove instance and the underpinning Nova instance:
Trove state | Nova state – current task | Remarks
---|---|---
BUILD | BUILD – attaching block storage | The cinder volume is being created and mounted on the system.
BUILD | BUILD – spawning | The glance image is being copied to the hypervisor and booting up.
BUILD | ACTIVE | The nova instance and cinder volumes are ready. The OS is booting up on the instance and the Trove system is waiting for the guest agent to come up and process the Prepare message.
ERROR | ACTIVE | Something went wrong in the guest agent phase; check the guest agent logs on the instance.
ERROR | ERROR | The Nova/Cinder/Glance system is having some issues. Please troubleshoot the underpinning systems.
ACTIVE | ACTIVE | Everything was fine. The guest database is ready for use.
If all goes well, we would end up in the ACTIVE state of Trove, which means we can now hand off the system to the actual requestor.
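The wait for a terminal state can be scripted rather than re-running trove list by hand. The following is a minimal polling sketch; the trove show parsing in the comment is an assumption about the client's tabular output, so adjust it to your release:

```shell
# Poll a status command until it reports ACTIVE or ERROR.
# In real use the status command would be something like:
#   trove show mytest | awk '/ status / {print $4}'
# (the awk pattern assumes the trove client's tabular output).
wait_for_status() {
    while true; do
        status=$($1)                    # run the supplied status command
        echo "Current status: $status"
        case "$status" in
            ACTIVE|ERROR) return ;;     # terminal states
        esac
        sleep 15                        # poll interval
    done
}

# Demo with a stub in place of the real trove CLI call:
wait_for_status "echo ACTIVE"
```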
However, if the DB creation errors out, we might want to take a look at the appropriate logs (dependent on where it errored out). At the present moment, we can only look at the guest-agent logs by logging into the instance using SSH (or using the VNC console, if we have set up user credentials). However, there is a blueprint that is completed and under review (expected to be published by the Mitaka-3 release) that will allow users to download the guest-agent logs without access to the instance. The link to this feature is https://blueprints.launchpad.net/trove/+spec/datastore-log-operations.
We finally take a look at the networking aspect of the Trove system. The networking is controlled by either the nova network (as it is in our case) or Neutron (if we were using neutron networking); however, Trove also dictates the creation and association of a security group by default.
Since this is the MySQL datastore, the Trove system creates a security group and allows port 3306 through the group.
Let's take a look at the output of nova show <instance name> (pay attention to the security_groups values).
We can see that there are two security groups associated. Let's now take a look at the rules allowed or denied by those security groups.
The commands to view the security groups will change if we are using Neutron. The corresponding neutron commands are neutron security-group-list
and neutron security-group-show <security group name>
.
The default security group will not come into play as it is empty. However, as per the default configuration in nova (allow_same_net_traffic), traffic from within the same subnet is not restricted by the security group. From outside the subnet, only access to port 3306 is allowed unless other ports are explicitly opened in the security group.
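We can verify the effect of the security group from a machine outside the subnet with a simple TCP reachability check. This is a sketch using bash's /dev/tcp redirection (the instance IP shown is the one from our setup):

```shell
# Check whether a TCP port on the guest is reachable from this machine.
# Uses bash's /dev/tcp pseudo-device and coreutils' timeout.
check_port() {
    host="$1"; port="$2"
    if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} open"
    else
        echo "${host}:${port} closed"
    fi
}

# From outside the subnet, only the MySQL port should report open:
# check_port 10.1.10.2 3306
# check_port 10.1.10.2 22
```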
If you are wondering where the values for the TCP/UDP ports are to be opened for each datastore picked up, the answer is trove-taskmanager.conf
(if the configuration options are not set, the default values are used – the defaults are stored in the file trove/common/cfg.py
).
The following shows the configuration options that we can put in the trove-taskmanager.conf file to modify the security groups. For example, changing the tcp_ports setting for MySQL from just 3306 to 3306, 22 will also open the SSH port when the security group is created.
[mysql]
# Format (single port or port range): A, B-C
# where C greater than B
tcp_ports = 3306

[cassandra]
tcp_ports = 7000, 7001, 9042, 9160

[default]
trove_security_groups_support = True
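To make the "A, B-C" format concrete, here is a small helper that expands such a value into individual ports. It is purely illustrative (Trove parses the option internally); the function name is our own:

```shell
# Expand a tcp_ports-style value (e.g. "3306" or "7000-7003, 9042")
# into one port per line, mirroring the A, B-C format described above.
expand_ports() {
    echo "$1" | tr ',' '\n' | while read -r entry; do
        entry=$(echo "$entry" | tr -d ' ')
        case "$entry" in
            *-*) seq "${entry%-*}" "${entry#*-}" ;;   # a B-C range
            *)   echo "$entry" ;;                     # a single port
        esac
    done
}

expand_ports "7000, 7001, 9042, 9160"
```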
In order to disable the creation of security groups completely, set the configuration option trove_security_groups_support
to False
in trove-taskmanager.conf
.
This is not recommended in a production environment, as it may expose the database instances to security risks.
We can access the guest instance, either by using SSH or by using the VNC console; this is not necessary for the functioning of Trove.
Please note that the security group will prevent access to SSH unless either a rule is added or the source server is on the same subnet.
In our case, we are executing SSH from the Trove controller system with the IP 10.1.10.1
, which is in the same subnet, and nova.conf has an option called allow_same_net_traffic (which defaults to True) that we have not changed, so we should be able to SSH in.
One reason for logging in can be to troubleshoot a guest agent failure. We can log in to the instance (if you have created the image as shown in the previous chapter) using the private key that you created for your own user. (The Trove integration element also copies the authorized_keys
file to the image.)
We can find details about the instance by using the command trove show <instance name/id>
.
We can see that the instance IP address is 10.1.10.2. We should be able to ssh
to it by using the command:
ssh ubuntu@10.1.10.2 -i ~/.ssh/id_rsa
ubuntu
is the username that we set in the export variable, and the identity file is the private key whose path we exported while creating the image.
The guest agent log is located at /var/log/trove/trove-guestagent.log
.
As we can see, we are able to log in. You may want to log in to troubleshoot if the guest agent somehow doesn't work. You will see the cinder drive mounted, with the MySQL data directory pointing to it.
You will also notice that the database engine is installed and started (you can check it with the ps -ef
command). If we also take a look at the configuration file, it will be configured based on the configuration template for the particular datastore (more on this in the next chapter).
The GUI is another method that we can use to request a database instance; more often than not, users will prefer it. So, once we log in to the horizon dashboard, we can go to the Database | Instances dashboard.
Click on Create Instance and we will fill in the details:
We can also initialize the database and create a database user (we could pass the parameters in the CLI command as well):
We could choose to launch it from a backup image or create a replica, but more on this will be covered in the upcoming chapters:
So, we select None and then click on Launch. The GUI will show the state of the instance. We will wait until the instance is marked ACTIVE.
We should be able to access the MySQL database using the username/password over the network.
Now that the instance is created, we can log in to the database by using the standard MySQL client; we will use the command:
mysql -udbuser -pdbpass -h 10.1.10.3
As we can see, the user is allowed access and access is also granted to testdb1 that we created.
There are other instance operations that can be performed by Trove.
We can resize the instance and also the data volume. We can change the volume using the command:
trove resize-volume <instance name> <new size>
trove resize-instance <instance name> <new flavor-id>
Example:
trove resize-volume mytest 2
This will increase the size of the volume to 2 GB. We can execute the instance resize command as well.
We can also perform these operations from the context menu in the GUI.
If we want to delete the instances that we have created in Trove, the command is:
trove delete <instance id>
Let us delete the instance that we created. We will get the instance ID by using the trove list
command and then execute the trove delete
command.
The command will delete the instance. This activity can also be done from the context menu in the GUI.