Advanced topics
This chapter describes advanced topics that were not covered in the previous chapters. It also presents options to add your host to cloud environments.
After reading this chapter, you will have a deeper understanding of these PowerKVM-related topics:
Install PowerKVM on a hardware Redundant Array of Independent Disks (RAID)
Migrate guests to another host
Add the host to a cloud environment
Security
PowerVC
Docker usage
7.1 Install PowerKVM on a hardware RAID
Installing IBM PowerKVM V3.1.0 on a hardware RAID is a straightforward process. This section guides you through creating a RAID 10 array using the IBM Power RAID Configuration Utility, iprconfig. For more information about the iprconfig tool, see the following page:
To proceed, you need to enter the Petitboot shell, as shown in Figure 2-1 on page 33.
In the Petitboot shell, launch the iprconfig tool to display the main window, as shown in Figure 7-1.
Figure 7-1 iprconfig main window
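The main window shown above appears after you run the tool's name at the Petitboot shell prompt:
# iprconfig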
Select option 2. Create a disk array, as shown in Figure 7-2.
Figure 7-2 Create a disk array option
You are prompted to select the disk adapter, as shown in Figure 7-3. Select your disk adapter by pressing 1 and then Enter.
Figure 7-3 Select disk adapter
The next step is to select the disk units that will be part of the RAID. Select them by pressing 1 and then Enter, as shown in Figure 7-4.
Figure 7-4 Select disk units
You are prompted to select the wanted RAID type, as shown in Figure 7-5. Press c to change the RAID type and then Enter to select. After that, press Enter to proceed.
Figure 7-5 Select RAID type
A confirmation window is displayed, as shown in Figure 7-6.
Figure 7-6 Confirmation window for creating disk array
The message Disk array successfully created is displayed at the bottom of the window.
You can verify the status of your disk array by selecting option 1. Display disk array status in the main window. You then see the status of the disk array, as shown in Figure 7-7.
Figure 7-7 Disk array status
At this point, the iprconfig tool has created a RAID 10 disk array. It can still take several hours until the disk array is fully built and ready to use. After the disk array is ready, you can follow the installation instructions from section 2.1, “Host installation” on page 32.
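As an alternative to the interactive menus, iprconfig also accepts subcommands on the command line. For example, the following call prints the adapter and disk array configuration non-interactively (a quick check, assuming the show-config subcommand is available in your iprutils build):
# iprconfig -c show-config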
7.2 Guest migration
IBM PowerKVM allows you to migrate guests between servers, reducing downtime when moving workloads around the data center. Basically, there are three types of migration:
Offline: The migrating guest is shut down until migration is complete.
Online: The migrating guest is paused in the source host and resumed in the destination host.
Live: The migrating guest is copied without being shut down or paused. This type of migration takes longer and will sometimes not complete, depending on the workload.
To migrate a guest from one PowerKVM host to another, the following requirements must be satisfied:
The Images volume is mounted at the same location in both hosts, usually /var/lib/libvirt/images.
Source and destination hosts run the same PowerKVM version.
Hosts have equal libvirt network configuration.
Network traffic on TCP/IP ports 49152-49215 is allowed in both hosts. For migration over Secure Shell (SSH) protocol, make sure traffic on port 22 is also allowed to the destination host.
The --persistent option, used in the following examples, saves guest configuration on the destination host permanently. Otherwise, when this option is not specified, the guest configuration is erased from libvirt after the guest is shut down.
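Before starting a migration, you can also verify that the source host can reach the libvirt service on the destination over SSH. A quick check, assuming key-based SSH access to the destination is already configured:
root@source# virsh -c qemu+ssh://destination-host/system list --all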
7.2.1 Offline migration
During an offline migration, the migrating guest needs to be shut down on the source host. Otherwise, it is not possible to migrate the libvirt settings to the destination host. After the migration is complete, the guest remains shut down on the destination host.
To perform an offline migration, you need to make sure both source and destination hosts share the same storage pool for guest disks, for example:
Fibre Channel
iSCSI
NFS
See section 4.2, “Managing storage pools” on page 107 for how to configure and use shared storage pools.
The switch --offline is specified in the virsh migrate command line options to indicate an offline migration.
The option --undefinesource is used to undefine the guest configuration on the source host. Otherwise, the guest will be configured on both servers after the migration is complete.
 
Note: Having a guest running on two different hosts can damage the guest disk image on the shared storage.
Example 7-1 on page 199 demonstrates how to perform an offline migration.
Example 7-1 Offline migration
root@source# virsh migrate --persistent --offline --undefinesource \
guest-name qemu+ssh://destination-host/system
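After the command completes, you can confirm on the destination host that the guest definition was transferred and that the guest remains shut off. A quick check (output abbreviated):
root@destination# virsh list --all
 Id    Name          State
----------------------------------
 -     guest-name    shut off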
7.2.2 Online migration
Online migration can be used when source and destination hosts do not share the same storage pool. In this case, the option --copy-storage-all is specified to copy the guest disk image to the destination host.
The online migration takes longer to complete than the offline one because the entire guest disk is copied over the network. The transfer time depends on the guest disk size, the guest memory usage, and the network throughput between the source and destination hosts.
To perform an online migration, the guest needs to be running on the source host. During the transfer, the guest appears as paused on the destination. After migration is complete, the guest is shut down on the source and resumed on the destination host.
 
Note: An attempt to perform an online migration with the guest shut down results in the following error message:
Error: Requested operation is not valid: domain is not running.
Example 7-2 shows how to perform an online migration.
Example 7-2 Online migration
root@source# virsh migrate --persistent --copy-storage-all \
guest-name qemu+ssh://destination-host/system
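While the copy is in progress, you can monitor it from another terminal on the source host. The virsh domjobinfo command reports the amount of data processed and remaining, for example:
root@source# virsh domjobinfo guest-name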
7.2.3 Live migration
In a live migration, the guest memory is transferred while it is still running on the source host. Remember that if the guest memory pages are changing faster than they are migrated, the migration can fail, time out, or not finish.
 
Note: Do not use Kimchi for guest XML disk setup if you plan to do live migration with SAN Fibre Channel.
Before proceeding, you need to make sure that the guest disk image is already available on the destination host. If you are not using a shared storage, make sure you perform an online migration first, as described in section 7.2.2, “Online migration” on page 199.
To perform a live migration, the guest must be running on the source host and must not be running on the destination host. During the migration, the guest is paused on the destination. After the transfer is complete, the guest is shut down on the source and then resumed on the destination host.
Example 7-3 shows how to perform a live migration.
Example 7-3 Live migration
root@source# virsh migrate --persistent --live \
guest-name qemu+ssh://destination-host/system
The --timeout option forces the guest to suspend when the live migration exceeds the specified number of seconds; the migration then completes offline.
Example 7-4 shows how to perform a live migration specifying a timeout of 120 seconds.
Example 7-4 Live migration with timeout
root@source# virsh migrate --persistent --live --timeout 120 \
guest-name qemu+ssh://destination-host/system
The migration can be interrupted due to an intense workload in the guest and can be started again with no damage to the guest disk image.
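If a live migration repeatedly fails to converge because of guest activity, you can also raise the migration bandwidth limit before retrying. A sketch; virsh migrate-setspeed sets the maximum bandwidth, in MiB per second, that the migration is allowed to use:
root@source# virsh migrate-setspeed guest-name --bandwidth 1000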
 
Note: Migration can fail if there is not enough contiguous memory space available in the target system.
7.3 Booting PowerKVM from Petitboot shell
Booting PowerKVM from the Petitboot shell is useful when you do not have a DHCP server to automatically provide the system with the boot configuration file, pxe.conf, as described in section 2.2.3, “Automated boot over DHCP” on page 46.
The only requirement is to have an HTTP server configured to serve vmlinuz, initrd.img, squashfs.img, and packages repository. Refer to section “Configuration of an HTTP server” on page 46 for more details.
To boot PowerKVM from Petitboot shell, perform the following steps:
1. Place the boot.sh script, shown in Example 7-5 on page 201, in the wwwroot directory of your HTTP server so it can be downloaded from Petitboot shell.
2. Select Exit to shell in the Petitboot main menu to enter Petitboot shell prompt.
3. Download boot.sh script, for example:
# wget http://server-address/boot.sh
4. Run boot.sh:
# sh boot.sh
The script downloads the kernel and rootfs image, and hands execution over to the downloaded kernel image by calling the kexec command.
Example 7-5 is the boot.sh script that can be used to boot PowerKVM from Petitboot shell.
Example 7-5 boot.sh
#!/bin/sh
#
# Boot IBM PowerKVM V3.1.0 from Petitboot shell.
 
SERVER="http://server-address
 
NIC="net0"
MAC="mac-address"
IP="ip-address"
GW="gateway"
NETMASK="netmask"
NS="dns-server"
HOSTNAME="your-system-hostname"
 
VMLINUZ="${SERVER}/ppc/ppc64le/vmlinuz"
INITRD="${SERVER}/ppc/ppc64le/initrd.img"
SQUASHFS="${SERVER}/LiveOS/squashfs.img"
REPO="${SERVER}/packages"
NET_PARAMS="ifname=${NIC}:${MAC}
ip=${IP}::${GW}:${NETMASK}:${HOSTNAME}:${NIC}:none nameserver=${NS}"
BOOT_PARAMS="rd.dm=0 rd.md=0 console=hvc0 console=tty0"
 
cd /tmp
 
wget $VMLINUZ
wget $INITRD
 
kexec -l vmlinuz --initrd initrd.img \
  --append="root=live:${SQUASHFS} repo=${REPO} ${NET_PARAMS} ${BOOT_PARAMS}"
kill -QUIT 1
Remember to update the following variables in the boot.sh sample script to meet your environment configuration:
SERVER: The IP address or domain name of your HTTP server.
NIC: The name of the network interface.
MAC: The hardware address of the network interface.
IP: The IP address of your host.
GW: The gateway address.
NETMASK: The network mask.
NS: The name server address.
HOSTNAME: The host name of your host system.
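For illustration, the variables might look as follows for a hypothetical environment (all addresses and names below are placeholders; substitute the values for your own network):
SERVER="http://192.0.2.10"
NIC="net0"
MAC="98:be:94:59:fa:24"
IP="192.0.2.20"
GW="192.0.2.1"
NETMASK="255.255.255.0"
NS="192.0.2.1"
HOSTNAME="powerkvm1.example.com"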
7.4 Security
This section gives you an overview of some security aspects present in IBM PowerKVM V3.1.0.
7.4.1 SELinux
SELinux is a Linux kernel security module that provides mandatory access control (MAC). One of its features is that it does not allow programs to access information that belongs to other programs running in different contexts. This isolates guests from the other applications on the host. When SELinux runs in Enforcing mode, no other program on the host can affect guest functioning unless it is explicitly allowed by the SELinux rules.
By default, the PowerKVM Live DVD and the target system run in Enforcing mode. You can verify the SELinux policy by running the following command:
# getenforce
Enforcing
You can change the runtime policy to Permissive by running the following command:
# setenforce Permissive
Then, the getenforce command shows the new policy:
# getenforce
Permissive
The policy can be updated permanently by changing the content of the /etc/selinux/config file. Example 7-6 is a sample SELinux configuration file with Enforcing policy.
Example 7-6 Sample SELinux configuration file
# /etc/selinux/config
#
# SELinux configuration file.
 
SELINUX=enforcing
SELINUXTYPE=targeted
Changes in the /etc/selinux/config file take effect only after a reboot.
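You can confirm both the runtime mode and the mode that is configured in /etc/selinux/config with the sestatus command. A sketch of typical output (trimmed; the exact fields vary by release):
# sestatus
SELinux status:                 enabled
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing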
The SELinux context of a file can be updated by using the chcon command. The following example updates the context of guest69-disk.img file to samba_share_t type:
# chcon -t samba_share_t guest69-disk.img
The default SELinux context for the guest disk files under /var/lib/libvirt/images is virt_image_t. Run the restorecon command to restore the original SELinux context of a file. For example:
# restorecon -v /var/lib/libvirt/images/guest69-disk.img
restorecon reset /var/lib/libvirt/images/guest69-disk.img context
system_u:object_r:samba_share_t:s0->system_u:object_r:virt_image_t:s0
The restorecon command reads the files in the /etc/selinux/targeted/contexts/files/ directory to determine which SELinux context each file should have.
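To inspect the current context of a file directly, you can use the ls command with the -Z option, which prints the SELinux context next to the file name:
# ls -Z /var/lib/libvirt/images/guest69-disk.img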
For more details about SELinux, visit the SELinux Project page at:
7.4.2 System updates
On PowerKVM, you can keep your system up to date by using yum update or ibm-update-system commands. Both of them can download and install RPM packages from the following external repository:
Example 7-7 shows the content of the default repository configuration file.
Example 7-7 Default repository configuration file
# /etc/yum.repos.d/base.repo
#
# PowerKVM repository configuration file
 
[powerkvm-updates]
name=IBM PowerKVM $ibmver - $basearch
baseurl=http://public.dhe.ibm.com/software/server/POWER/Linux/powerkvm/$ibmmilestone/$ibmver/updates
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ibm_powerkvm-$ibmver-$ibmmilestone
skip_if_unavailable=1
 
[powerkvm-debuginfo]
name=IBM PowerKVM Debuginfo - $ibmver - $basearch
baseurl=http://public.dhe.ibm.com/software/server/POWER/Linux/powerkvm/$ibmmilestone/$ibmver/debuginfo
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ibm_powerkvm-$ibmver-$ibmmilestone
skip_if_unavailable=1
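For example, to download and apply all available updates from the repository above with yum:
# yum update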
One preferred practice is to use the ibm-update-system command to apply system updates because it installs the recommended packages that were not installed by default at installation time.
For example, when a package is added to the PowerKVM image in a later release, it is installed on your system the next time that you run the ibm-update-system tool.
Another preferred practice is not to use external package repositories. Otherwise, you can damage your system by installing software from untrusted sources.
With the ibm-update-system command, you can also update your system using a local PowerKVM ISO image. The following example shows how to apply updates using a local image:
# ibm-update-system -i ibm-powerkvm.iso
When the command is entered, you are prompted to answer if you want to proceed with the update. The -y option can be used to assume yes and skip this question.
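For example, to update from the network repository and skip the confirmation prompt:
# ibm-update-system -y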
Run ibm-update-system --help to obtain more information about the supported options.
7.5 Cloud management
PowerKVM hosts can be managed just like any other KVM host. PowerKVM includes Kimchi in its software stack to provide a friendly user interface for platform management of a single server. For larger-scale management, such as cloud environments, you can use IBM PowerVC or IBM Cloud Manager with OpenStack. Table 7-1 shows the possible management options for PowerKVM.
Table 7-1 Virtualization management systems
Management software                   Capabilities       Installation type
Kimchi                                Single server      Installed on the host
PowerVC                               Multiple servers   Requires a dedicated server
IBM Cloud Manager with OpenStack      Multiple servers   Requires a dedicated server
OpenStack Nova Controller Services    Multiple servers   Installed on the host or a dedicated server
Kimchi is an open source project for virtualization management within one server.
IBM PowerVC and IBM Cloud Manager are advanced management solutions created and maintained by IBM, built on OpenStack.
OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a data center. OpenStack Compute (the Nova component) has an abstraction layer for compute drivers to support different hypervisors, including QEMU or KVM, through the libvirt virtualization API.
The following sections introduce the virtualization management systems that can be used to manage the PowerKVM servers in cloud environments.
7.5.1 IBM PowerVC
IBM Power Virtualization Center (IBM PowerVC) is an advanced virtualization manager for creating and managing virtual machines on IBM Power Systems servers by using the PowerKVM hypervisor. PowerVC simplifies the management of virtual resources in your Power Systems environment. It is built on OpenStack technology to deliver an infrastructure as a service (IaaS) within the cloud. With PowerVC, you can deploy VMs and perform other tasks, as shown in the following list:
Create virtual machines and then resize and attach volumes to them.
Import existing virtual machines and volumes so they can be managed by IBM PowerVC.
Monitor the use of the resources that are in your environment.
Migrate virtual machines while they are running (hot migration).
Deploy images quickly to create new virtual machines that meet the demands of your ever-changing business needs.
This section gives an overview of how you can add PowerKVM hosts as a compute node and deploy cloud images by using PowerVC. For more detailed information about how to install and configure PowerVC, see IBM PowerVC Version 1.2.3: Introduction and Configuration, SG24-8199.
You can install PowerVC on a separate host and control your compute nodes through the interface by using your web browser. Figure 7-8 shows the PowerVC management interface.
Figure 7-8 PowerVC interface for advanced virtualization management
To connect to a PowerKVM host, simply enter the credentials for the host as shown in Figure 7-9. PowerVC automatically installs the necessary OpenStack modules on the PowerKVM host and adds the host as a compute node in PowerVC.
Figure 7-9 Adding a new host to PowerVC
The newly added host is listed on the Hosts panel, as shown in Figure 7-10.
Figure 7-10 Hosts list on PowerVC
Before importing images and creating instances on PowerVC, configure the network and storage settings.
When using PowerVC together with PowerKVM, the networking is done using an Open vSwitch environment, not the standard bridging that is commonly used with PowerKVM. For a simple environment, PowerVC prepares the Open vSwitch environment when connecting to a new PowerKVM host that does not have an Open vSwitch environment configured. Example 7-8 shows a simple Open vSwitch environment configured by PowerVC.
Example 7-8 Simple open vSwitch environment configured by PowerVC
# ovs-vsctl show
0b140048-c1e2-428e-a529-516a347f283c
    Bridge default
        Port "enP3p9s0f0"
            Interface "enP3p9s0f0"
        Port default
            Interface default
                type: internal
        Port phy-default
            Interface phy-default
                type: patch
                options: {peer=int-default}
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port int-default
            Interface int-default
                type: patch
                options: {peer=phy-default}
    ovs_version: "2.0.0"
PowerVC supports deploy and capture features with local or NFS storage. Before deploying a virtual machine on PowerKVM hosts, you must upload an image first, as shown in Figure 7-11. You can upload ISO or QCOW2 image types. The supported operating systems are Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Ubuntu. The distribution can be Little Endian or Big Endian because both architectures are supported by PowerKVM running on a POWER8 system. You can use the boot.iso image provided in the installation DVD as a minimal image suitable for installation over a network.
Figure 7-11 Image upload on PowerVC
It is also possible to use a virtual machine as a golden image that contains the operating system, perhaps some applications, and also other customization. This virtual machine can be equipped with an activation mechanism, such as cloud-init or the IBM Virtual Solutions Activation Engine (VSAE) that changes settings like an IP address, host name, or the SSH keys when deploying the image to a new virtual machine. A virtual machine that contains the golden image can be captured and then used for deployments.
Figure 7-12 shows images that are ready to be deployed. The figure contains ISO boot images as well as prepared so-called golden images (snapshots).
Figure 7-12 Displaying images
To deploy an image, simply use the Deploy button as shown in Figure 7-12 and enter the required data for the new guest as shown in Figure 7-13.
In the deploy dialog, all necessary data is gathered to install and configure the new guest on the PowerKVM host. For the size of the new guest, so-called compute templates are used. In native OpenStack, compute templates are also referred to as flavors. A compute template defines the number of virtual processors, sockets, cores and threads, the memory size, and the size for the boot disk.
Figure 7-13 Deploy an image on PowerVC
For boot.iso deployments, you can connect to your deployed images by using the virsh console command. For images based on preinstalled virtual machines with the network configured, you can connect using SSH.
Additional data disks can also be attached as iSCSI volumes. The creation of a disk is usually done in PowerVC, but existing volumes can also be imported. Figure 7-14 shows the attachment of a new data disk using iSCSI, connected to an IBM Storwize V7000 system.
Figure 7-14 Attachment of an iSCSI disk in PowerVC
PowerVC provides additional advanced functions, such as:
Grouping of hosts
Placement policies to select the best destination for a guest, depending on memory or CPU allocation, CPU utilization, or simple rules such as striping (round robin) or packing (a host is filled with guests to a certain amount before the next one is used).
Collocation rules
 – Affinity rules define that several guests must reside on the same host, for instance for performance reasons.
 – Anti-affinity rules define that several guests must reside on different hosts, for instance for availability reasons.
Dynamic Resource Optimizer (DRO). DRO monitors the resource usage of physical hosts and virtual machines in a cloud environment. When a host becomes overused, DRO can migrate virtual machines to hosts that have less utilization.
Customization scripts for custom configuration of a guest.
IP address pool: IP addresses can be either defined in the deploy dialog or can be autoselected from a pool.
Live migration of a guest to another host.
Remote restart: Restarting a guest on another host when the source host is down or in a failed state.
Maintenance mode: Live migration of all guests in order to shut down a host for maintenance.
For information about implementing PowerVC with PowerKVM servers, an IBM Systems Lab Services Techbook can be found here:
See also IBM PowerVC Version 1.2.3: Introduction and Configuration, SG24-8199 for more information about how to manage PowerKVM hosts using PowerVC.
7.5.2 IBM Cloud Manager with OpenStack
IBM Cloud Manager with OpenStack, formerly offered as IBM SmartCloud® Entry, can be used to get started with private clouds that can scale users and workloads. IBM Cloud Manager can be also used to attach to a public cloud, such as IBM SoftLayer®. It is based on the OpenStack project and provides advanced resource management with a simplified cloud administration and full access to OpenStack APIs.
These are among the benefits of using IBM Cloud Manager with OpenStack for Power:
Full access to OpenStack APIs
Simplified cloud management interface
All IBM server architectures and major hypervisors are supported. This includes x86 KVM, KVM for IBM z™ Systems, PowerKVM, PowerVM, Hyper-V, IBM z/VM®, and VMware.
Chef installation enables flexibility to choose which OpenStack capabilities to use
AutoScale using the OpenStack Heat service
Manage Docker container services
IBM Cloud Manager comes with two graphical user interfaces: the IBM Cloud Manager Dashboard, which in OpenStack is also referred to as Horizon, and the IBM Cloud Manager Self Service portal, which is only available with the IBM Cloud Manager with OpenStack product. Figure 7-15 shows a screen capture of the new IBM Self Service portal that is shipped with IBM Cloud Manager Version 4.3.
Figure 7-15 IBM Cloud Manager with OpenStack Self Service portal
For more information about the IBM Cloud Manager with OpenStack, refer to the documentation that can be found here:
7.5.3 OpenStack controller services
The PowerKVM host can be managed by the open source controller services that are maintained by the OpenStack community. The compute and controller services on OpenStack enable you to launch virtual machine instances.
This section gives an overview of how to install and configure compute controller services to add your PowerKVM server to OpenStack. You can configure these services on a separate node or the same node. A dedicated compute node requires only openstack-nova-compute, the service that launches the virtual machines on the PowerKVM host.
RPM is the package management system used by PowerKVM. To install the open source version of OpenStack compute services on PowerKVM, get the RPM packages from your preferred Linux distribution or build your own packages.
IBM PowerKVM does not bundle OpenStack packages. The installation instructions in this section are based on Fedora repositories:
The link has several subdirectories for the OpenStack releases, especially the Liberty release, which was the latest at the time of writing.
See the online documentation for how to install and configure compute controller services on OpenStack. You can find detailed information about how to install and configure OpenStack services on Red Hat Enterprise Linux, CentOS, SUSE Linux Enterprise Server, and Ubuntu on the OpenStack.org website:
 
Note: IBM PowerKVM version 3.1.0 does not include OpenStack community packages. You can choose to install IBM Cloud Manager or IBM PowerVC to have full integration and support for cloud services.
Compute node
Follow these steps to add a PowerKVM host as a compute node to an existing cloud controller:
1. Install the openstack-nova-compute service. These dependencies are required:
 – openstack-nova-api
 – openstack-nova-cert
 – openstack-nova-conductor
 – openstack-nova-console
 – openstack-nova-novncproxy
 – openstack-nova-scheduler
 – python-novaclient
2. Edit the /etc/nova/nova.conf configuration file:
a. Set the authentication and database settings.
b. Configure the compute service to use the RabbitMQ message broker.
c. Configure Compute to provide remote console access to instances.
3. Start the Compute service and configure it to start when the system boots.
4. Confirm that the compute node is listed as a host on nova, as shown in Example 7-9.
 
Tip: Export Nova credentials and access the API from any system that can reach the controller machine.
Example 7-9 Listing hosts on Nova
[user@laptop ~(keystone_admin)]$ nova host-list
+-------------------------------------+-------------+-------------+
| host_name | service | zone |
+-------------------------------------+-------------+-------------+
| controller | consoleauth | internal |
| controller | scheduler | internal |
| controller | conductor | internal |
| controller | network | internal |
| controller | cert | internal |
| powerkvm | network | internal |
| powerkvm | compute | nova |
+-------------------------------------+-------------+-------------+
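As mentioned in step 3, the Compute service is typically started and enabled at boot with systemd. A minimal sketch, assuming the service name used by the Fedora/RDO packages (openstack-nova-compute):
# systemctl enable openstack-nova-compute.service
# systemctl start openstack-nova-compute.service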
Controller node
As a preferred practice, choose a separate server for the controller node when planning to scale horizontally. The instructions to install the controller are in the OpenStack online documentation. There are no special requirements. A minimum OpenStack setup requires the authentication, image, and network services in addition to the controller services. These services include keystone, glance, and neutron (or nova-network).
To configure the compute services in the controller node, install these packages:
openstack-nova-api
openstack-nova-cert
openstack-nova-conductor
openstack-nova-console
openstack-nova-novncproxy
openstack-nova-scheduler
python-novaclient
 
Note: It is also possible to add a PowerKVM compute node to an existing cloud controller running on an IBM x86 server. You might use host aggregates or an availability zone to partition mixed architectures into logical groups that share specific types or images.
After the services are running in the controller, you can deploy your images on the PowerKVM host. To deploy an image and specify the host that you want to run, use the --availability-zone option, as shown in Example 7-10.
Example 7-10 Deploying an image using a nova command line
[user@laptop ~(keystone_admin)]$ nova boot --image my-image --flavor 3 --availability-zone nova:powerkvm vm054
+--------------------------------------+-------------------------------------------------+
| Property | Value |
+--------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00002de6 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | PcMpmee22G7M |
| config_drive | |
| created | 2014-05-28T19:25:18Z |
| flavor | m1.medium (3) |
| hostId | |
| id | 01271354-59dc-480f-a2db-53682fc3d37e |
| image | my-image (36b70fda-497d-45ff-899a-6de2a3616b32) |
| key_name | - |
| metadata | {} |
| name | vm054 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | ad13d65a473f4cafa1db9479c3f7c645 |
| updated | 2014-05-28T19:25:18Z |
| user_id | 9b2ee46e7bbc405bb1816603445de08c |
+--------------------------------------+-------------------------------------------------+
Besides PowerKVM, there are several hypervisors that are supported. For details, see the HypervisorSupportMatrix page on the OpenStack website:
For more detailed options for creating virtual machines on OpenStack, see the online documentation.
7.6 Docker usage
This section covers the basic usage of Docker based on the existing development packages. No differences are expected from the released Docker packages for PowerKVM.
As explained in 1.4, “Docker” on page 21, Docker provides an infrastructure for containers, aiming to build and run distributed applications. Even though Docker is focused on application containers, this section uses it as a system container.
7.6.1 Docker installation
To install Docker, you need to add the Development Kit repository on the PowerKVM host, as described in 8.2, “Installation” on page 229. When the Development Kit repository is added on the host, you can install Docker by using the yum package manager, as shown in Example 7-11.
Example 7-11 Installing Docker package on PowerKVM
# yum install docker
...
--> Running transaction check
---> Package docker.ppc64le 1:1.7.0-22.gitdcff4e1.5.el7_1.2 will be installed
--> Processing Dependency: docker-selinux >= 1:1.7.0-22.gitdcff4e1.5.el7_1.2 for package: 1:docker-1.7.0-22.gitdcff4e1.5.el7_1.2.ppc64le
--> Processing Dependency: uberchain-ppc64le for package: 1:docker-1.7.0-22.gitdcff4e1.5.el7_1.2.ppc64le
--> Running transaction check
---> Package docker-selinux.ppc64le 1:1.7.0-22.gitdcff4e1.5.el7_1.2 will be installed
---> Package uberchain-ppc64le.ppc64le 0:8.0-4.pkvm3_1_0 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
================================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================================
Installing:
docker ppc64le 1:1.7.0-22.gitdcff4e1.5.el7_1.2 powerkvm-iso 4.1 M
Installing for dependencies:
docker-selinux ppc64le 1:1.7.0-22.gitdcff4e1.5.el7_1.2 powerkvm-iso 48 k
uberchain-ppc64le ppc64le 8.0-4.pkvm3_1_0 powerkvm-iso 72 M
 
Transaction Summary
================================================================================================================================================
Install 1 Package (+2 Dependent packages)
 
Total download size: 76 M
Installed size: 400 M
Is this ok [y/d/N]: y
Downloading packages:
------------------------------------------------------------------------------------------------------------------------------------------------
Total 293 MB/s | 76 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:docker-selinux-1.7.0-22.gitdcff4e1.5.el7_1.2.ppc64le 1/3
 
Installing : uberchain-ppc64le-8.0-4.pkvm3_1_0.ppc64le 2/3
Installing : 1:docker-1.7.0-22.gitdcff4e1.5.el7_1.2.ppc64le 3/3
Verifying : uberchain-ppc64le-8.0-4.pkvm3_1_0.ppc64le 1/3
Verifying : 1:docker-1.7.0-22.gitdcff4e1.5.el7_1.2.ppc64le 2/3
Verifying : 1:docker-selinux-1.7.0-22.gitdcff4e1.5.el7_1.2.ppc64le 3/3
 
Installed:
docker.ppc64le 1:1.7.0-22.gitdcff4e1.5.el7_1.2
 
Dependency Installed:
docker-selinux.ppc64le 1:1.7.0-22.gitdcff4e1.5.el7_1.2 uberchain-ppc64le.ppc64le 0:8.0-4.pkvm3_1_0
 
Complete!
To verify that your Docker installation is working, run the docker info command, as shown in Example 7-12.
Example 7-12 docker info output
# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
Pool Name: docker-253:1-134273-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: extfs
Data file: /dev/loop1
Metadata file: /dev/loop2
Data Space Used: 307.2 MB
Data Space Total: 107.4 GB
Data Space Available: 10.01 GB
Metadata Space Used: 733.2 kB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.147 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.18.22-359.el7_1.pkvm3_1_0.4000.1.ppc64le
Operating System: IBM_PowerKVM 3.1.0
CPUs: 8
Total Memory: 126.975 GiB
Name: localhost
ID: ES2X:KFCA:YLIE:3TC4:H7Y6:YHF4:WFSD:XYFH:RCXN:K35K:G4KM:I2WL

Docker errors
If the command shown in Example 7-12 returns an error similar to Example 7-13, you do not have access to the running Docker service. Either your user does not have privileges to access the Docker service, or the Docker service is down. In the latter case, you can start the service with:
# systemctl start docker
Example 7-13 Docker service access error
Get http:///var/run/docker.sock/v1.20/info: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
On the other hand, if running Docker produces output like Example 7-14, the golang libraries were not found. Make sure that the golang libraries are installed and visible. They are usually located at /opt/ibm/lib64, and creating the following two file system links should solve the problem:
# ln -s /opt/ibm/lib64/libgo.so /lib64/libgo.so
# ln -s /opt/ibm/lib64/libgo.so.7 /lib64/libgo.so.7
Example 7-14 Docker failing due to shared library not found
$ docker
docker: error while loading shared libraries: libgo.so.7: cannot open shared object file: No such file or directory
7.6.2 Image management
As described in section 1.4.2, “Docker hub” on page 25, Docker is strictly connected to the Docker hub image database service. The easiest way to start playing with Docker on PowerKVM is by using images available at Docker hub.
You can search for ready-to-deploy container images by using the docker search command, which lists all the images available at Docker hub that contain a specified word in their names.
Example 7-15 shows part of the output of the docker search command when searching for ppc64 images. By convention, POWER images have ppc64 or ppc64le in their names. However, it is important to notice that the image uploader is responsible for naming the image, so an image can have ppc64 in its name and not necessarily be designed to run on the POWER architecture.
Example 7-15 Searching for images
$ docker search ppc64
 
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
ppc64le/busybox-ppc64le 0
ppc64le/debian 0
ppc64le/ubuntu 0
ppc64le/buildpack-deps 0
ppc64le/busybox 0
ppc64le/gcc 0
ppc64le/hello-world 0
ibmcom/gccgo-ppc64le 0
ibmcom/ubuntu-ppc64le 0
ibmcom/busybox-ppc64le 0
ibmcom/unshare-ppc64le 0
ibmcom/hello-world-ppc64le 0
After selecting the image to download, use the pull command to grab the image from the remote archive and put it inside your Docker directory (Example 7-16). The container images are saved in an AUFS (Another Union File System) format at /var/lib/docker/aufs.
Example 7-16 Downloading a remote Docker image
$ docker pull ppc64le/debian
latest: Pulling from ppc64le/debian
8a78fb91edd3: Pull complete
Digest: sha256:17c58857e4321300c3ac516fdd9eeb4de017a15a9c69b9b3fbdd7115287343a4
Status: Downloaded newer image for ppc64le/debian:latest
Docker images can be listed by using the docker images command, as shown in Example 7-17.
Example 7-17 Listing container images
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ppc64le/busybox latest a1c0a636f835 46 hours ago 4.871 MB
ppc64le/debian latest ba92052e52c6 47 hours ago 127.6 MB
 
Note: You can also see most of the Docker images for POWER at the Docker hub web at the following address:
7.6.3 Container management
Based on a given container image, you can create a container in two different ways: by using either the run or the create Docker command. Both of these options are covered in this section.
Starting a container by using the run command is more straightforward because only one command is enough to create the container, start it, and have a shell access to it. Using the create command requires three different commands that are shown in “Creating a container” on page 220.
 
Note: Your user/login needs to have proper access to Docker in order to run Docker commands. Otherwise, Docker complains with the following message:
time="2015-11-16T09:29:45-05:00" level=fatal msg="Get http:///var/run/docker.sock/v1.18/containers/json: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?"
Running a command on an image
After you have an image in your Docker environment, you can easily run a command inside it. In this case, Docker creates a container from the image you specified, starts the container, and runs the command inside the container.
For example, if you want to have a bash shell command inside the container, you can use the Docker command run, using the following arguments:
$ docker run -t -i <image> <command>
Where -i means that you want to use the container interactively and -t asks the Docker engine to allocate a terminal for you.
Example 7-18 shows a Debian container being started using the ppc64le/debian image downloaded previously. In this example, the bash shell is started and returned to the user in an interactive terminal. After that, you will be inside the container, and any file you see or execute will come from the container file system and will be executed within the container context.
Example 7-18 Starting a container
$ docker run --name itso -t -i ppc64le/debian /bin/bash
 
root@e022e2a9cb66:/# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support/"
BUG_REPORT_URL="https://bugs.debian.org/"
 
After you leave the shell, the container stops because Docker was created to be an application container engine. With no arguments, the docker ps command lists only the active containers, as shown in Example 7-19. You can also keep the container running even after the shell exits; to do that, use the --restart argument (see the sketch after the note that follows Example 7-20).
Example 7-19 Listing active containers
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c1da35da075 ppc64le/debian:latest "/bin/bash" 14 seconds ago Up 13 seconds itso
On the other hand, if you want to see all the containers, either in running or exited states, you can use the -a flag, as shown in Example 7-20.
Example 7-20 Listing all containers
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c1da35da075 ppc64le/debian:latest "/bin/bash" 40 seconds ago Exited (130) 4 seconds ago itso
e022e2a9cb66 ppc64le/debian:latest "/bin/bash" 20 minutes ago Exited (130) About a minute ago angry_swartz
926963dbc80d ppc64le/debian:latest "/bin/bash" 27 minutes ago Exited (130) 20 minutes ago mad_lumiere
7118a9b8a56d ppc64le/debian:latest "/bin/bash" 28 minutes ago Exited (0) 28 minutes ago trusting_shockley
1428429888b1 ppc64le/debian:latest "/bin/bash" 29 minutes ago Exited (0) 29 minutes ago loving_albattani
 
 
Note: If you do not specify a name for the container, Docker creates one automatically, such as those shown above: “loving_albattani”, “trusting_shockley”, “mad_lumiere”, and “angry_swartz”.
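As mentioned earlier, a container can be kept alive after the interactive shell exits by running it detached with a restart policy. A minimal sketch (the image name comes from the earlier examples; --restart=always is one of Docker's standard restart policies, and the sleep command simply keeps the container busy):
$ docker run -d --name itso-bg --restart=always ppc64le/debian sleep infinity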
Creating a container
If you decide to create the container manually, rather than using the run command, you can do so by using the docker create command, as shown in Example 7-21.
Example 7-21 Creating a Docker container based on an image
$ docker create --name itso2 ppc64le/debian
4960fbf70982c9e690e56b2c4789c8af76610f09b0add7a1a225a228140bb3c7
 
 
The advantage of using the create method over the run method is the ability to define detailed options for the container, such as memory usage, CPU binding, and so on.
These are the most used options:
--cpuset-cpus The host CPUs that are allowed to execute the container
--hostname The container host name
--ipc The IPC namespace to use
--memory The amount of memory allocated to the container
--expose The TCP/IP ports that are exposed
--mac-address The MAC address for the container (depends on the network option you are using)
 
 
Note: The arguments for any Docker command should be passed before the Docker image. Otherwise, they are treated as the command to run inside the container. The argument positions are not interchangeable.
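For example, a create call that pins the container to specific host CPUs, limits its memory, and sets its host name (a sketch; the values are illustrative, and the options are passed before the image name, as the note above requires):
$ docker create --name itso3 --hostname itso3 --cpuset-cpus 0-3 --memory 2g ppc64le/debian /bin/bash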
Starting the container and attaching to the console
Once you have the container created and want to start it, you can call the start command. To get the console terminal, you can also use the attach command, as shown in Example 7-22.
Example 7-22 Getting the Docker console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
 
$ docker start itso
itso
 
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c1da35da075 ppc64le/debian:latest "/bin/bash" 2 days ago Up 2 seconds itso
 
$ docker attach itso
root@5c1da35da075:/#
 
Note: If you try to attach to a container that is not started, Docker complains with the following error message:
time="2015-11-16T09:29:10-05:00" level=fatal msg="You cannot attach to a stopped container, start it first"
Changes to the original container image
After a container is running, all modifications to the image are saved in a separate layer on top of the original image, and you can manage these changes by committing them, reverting them, and so on.
Example 7-23 shows the layers of the original image and the commands that created them.
Example 7-23 Docker image changes
$ docker history ppc64le/debian
IMAGE CREATED CREATED BY SIZE
ba92052e52c6 4 days ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0 B
8a78fb91edd3 4 days ago /bin/sh -c #(nop) ADD file:b20760af6fc16448d0 127.6 MB
To change an image, you do it through a container: instantiate the image in a container, change the container file system, and commit the change to a new image. You can later replace the old image with the newer one.
Suppose that you want to change the container called itso by adding a new subdirectory, itso, under the /tmp directory. To do it, you start the container, attach to its console, and run the mkdir /tmp/itso command.
When you do that, you can see the file system change by using the docker diff command. If you agree with the changes, commit them to a new image by using the docker commit command, which creates a new image for you, based on the previous one, using the layer support from AUFS. The whole process is shown in Example 7-24. After the commit, a new image is generated with the ID a98c7e146562.
Example 7-24 Committing a file system change to a new image
$ docker diff itso
C /tmp
A /tmp/itso
 
$ docker commit itso
a98c7e1465623a6c8cf30b34b4038c14d3d04264fe818b89a4c3bcde33e84a36
 
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> a98c7e146562 33 seconds ago 127.6 MB
ppc64le/debian latest ba92052e52c6 4 days ago 127.6 MB
You can give a name to any image by using the docker tag command (Example 7-25).
Example 7-25 Renaming a Docker image
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> a98c7e146562 14 minutes ago 127.6 MB
 
$ docker tag a98c7e146562 itso_version2
 
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
itso_version2 latest a98c7e146562 15 minutes ago 127.6 MB
7.6.4 Uploading your image to Docker hub
When you have a container image ready, you can upload it to Docker hub to be remotely available to others. Having an image at Docker hub allows you to download it from anywhere you want. It also helps you to deploy the same workload on any new container host.
Docker login
In order to upload your image to Docker hub, you need to have a login at Docker hub service. You can create one easily and for no charge at https://hub.docker.com.
When you have a Docker hub login, you can associate your system to a Docker hub account by using the docker login command, as shown in Example 7-26.
Example 7-26 Login information about Docker hub
$ docker login
WARNING: The Auth config file is empty
Username: itso
Password:
WARNING: login credentials saved in /home/itso/.dockercfg.
Login Succeeded
 
Note: After you enter your login information for the first time, it is saved at ~/.dockercfg file. It saves your user ID and the encrypted password.
Docker push
To be able to upload the image to Docker hub, you need to rename it appropriately. The container image name should have your Docker hub login as part of the image name. For example, if your login is username and the image name is itso, the image name should be username/itso. Otherwise, you get the following error:
time="2015-11-16T11:17:08-05:00" level=fatal msg="You cannot push a "root" repository. Please rename your repository to <user>/<repo> (ex: <user>/itso2)"
Example 7-27 shows a successful image upload to Docker hub. The image was originally called itso, but it was renamed to username/itso and then pushed to Docker hub.
Example 7-27 Uploading an image to Docker hub
$ docker push username/itso
The push refers to a repository [username/itso] (len: 1)
a98c7e146562: Image already exists
ba92052e52c6: Image successfully pushed
8a78fb91edd3: Pushing [==> ] 2.621 MB/50.58 MB
8a78fb91edd3: Image successfully pushed
Digest: sha256:476590f21a84f5fcc105beab5a1f29ec0970fd3474c2b2a5c0f8a6a0c74b3421
After the image is uploaded to the Docker hub service, you are able to see it by using the docker search command, as shown in Example 7-28.
Example 7-28 Finding the image previously uploaded
$ docker search itso
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
username/itso 0
7.6.5 Creating image from scratch
If you want to create your image from scratch, you can do it by using different methods depending on the type of image you need, and depending on the operating system.
The easiest way to do so is by creating a .tar.gz file with the root file system and importing it in Docker by using the docker import command. This is the method that is explained in this section. Two operating systems are used to describe how to create an image from scratch, Ubuntu and Debian.
Ubuntu Core
Ubuntu has a distribution called Ubuntu Core, which is an image already stripped down for small environments such as a Docker container. It is released together with the Ubuntu releases, and is around 50 MB compressed (172 MB decompressed) for each release. It contains the 100 most important basic packages in Ubuntu. For more information about Ubuntu Core, see the following site:
Because Ubuntu Core is available in a .tgz format on the web, you can point Docker to create an image from it by using a one-line command, as shown in Example 7-29.
Example 7-29 Importing Ubuntu Core from the web
$ docker import http://cdimage.ubuntu.com/ubuntu-core/releases/15.10/release/ubuntu-core-15.10-core-ppc64el.tar.gz
Downloading from http://cdimage.ubuntu.com/ubuntu-core/releases/15.10/release/ubuntu-core-15.10-core-ppc64el.tar.gz
d13659b20bcebc5a89a8a90f4c811f09df0f013fb60896ed50eacaa8ec59d82c 52.86 MB/52.86 MB
Importing [===============================================>] 52.86 MB/52.86 MB
When the import finishes, you have an Ubuntu Core image. You can rename it to ubuntucore and start it, as Example 7-30 shows.
Example 7-30 Renaming an image and starting it
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 24215bea40ea 11 minutes ago 172.1 MB
 
$ docker tag 24215bea40ea ubuntucore
 
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntucore latest 24215bea40ea 12 minutes ago 172.1 MB
 
$ docker run -t -i ubuntucore /bin/bash
root@6969df1e3eb3:/# cat /etc/os-release
NAME="Ubuntu"
VERSION="15.10 (Wily Werewolf)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 15.10"
VERSION_ID="15.10"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/”
Debootstrap
Debian and Ubuntu provide a command called debootstrap to create a small operating system rootfs. It is an application that needs to run on a Debian or Ubuntu system. It downloads a set of packages from an archive and decompresses the packages in a directory. The set of packages is defined on the command line; this example uses minbase, which includes only the minimal set of the most important packages. Ubuntu and Debian use the same packaging format, so to differentiate between Debian and Ubuntu you basically point to the Debian or Ubuntu archive. Whether you are on Ubuntu or Debian, you can create a root file system for either operating system.
To use debootstrap, you need to specify the architecture you want (ppc64el for POWER8), the distribution, and the local directory where the files are decompressed. For example, the following command creates a directory named localdirectory and installs the minimal Debian (minbase) variant by using the unstable packages. Example 7-31 shows part of the expected output:
# debootstrap --arch=ppc64el --variant=minbase unstable localdirectory http://ftp.debian.org/debian
Example 7-31 Debootstrapping Debian in a local directory
# debootstrap --arch=ppc64el --variant=minbase unstable localdirectory http://ftp.debian.org/debian
[sudo] password for ubuntu:
W: Cannot check Release signature; keyring file not available /usr/share/keyrings/debian-archive-keyring.gpg
I: Retrieving Release
I: Retrieving Packages
I: Validating Packages
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Found additional required dependencies: adduser dmsetup insserv libapparmor1 libaudit-common libaudit1 libbz2-1.0 libcap2 libcap2-bin libcryptsetup4 libdb5.3 libdebconfclient0 libdevmapper1.02.1 libgcrypt20 libgpg-error0 libkmod2 libncursesw5 libsemanage-common libsemanage1 libsystemd0 libudev1 libustr-1.0-1 systemd systemd-sysv
I: Found additional base dependencies: binutils bzip2 cpp cpp-5 debian-archive-keyring dpkg-dev g++ g++-5 gcc gcc-5 gnupg gpgv libapt-pkg4.16 libasan2 libatomic1 libc-dev-bin libc6-dev libcc1-0 libdpkg-perl libgcc-5-dev libgdbm3 libgmp10 libgomp1 libisl13 libitm1 libmpc3 libmpfr4 libreadline6 libstdc++-5-dev libstdc++6 libubsan0 libusb-0.1-4 linux-libc-dev make patch perl perl-modules readline-common xz-utils
I: Checking component main on http://ftp.debian.org/debian...
I: Validating libacl1 2.2.52-2
I: Validating adduser 3.113+nmu3
I: Validating libapparmor1 2.10-2+b1
I: Validating apt 1.0.10.2
I: Validating libapt-pkg4.16 1.0.10.2
I: Validating libattr1 1:2.4.47-2
I: Validating libaudit-common 1:2.4.4-4
I: Validating libaudit1 1:2.4.4-4
I: Validating base-files 9.5
I: Validating base-passwd 3.5.38
I: Validating bash 4.3-14
I: Validating binutils 2.25.51.20151113-1
I: Validating build-essential 12.1
I: Validating bzip2 1.0.6-8
I: Validating libbz2-1.0 1.0.6-8
I: Retrieving libdebconfclient0 0.196
I: Validating libdebconfclient0 0.196
I: Validating coreutils 8.23-4
I: Validating libcryptsetup4 2:1.6.6-5
I: Validating dash 0.5.7-4+b2
I: Validating libdb5.3 5.3.28-11
I: Validating debconf 1.5.58
I: Validating debconf-i18n 1.5.58
I: Validating debian-archive-keyring 2014.3
I: Validating debianutils 4.5.1
I: Validating diffutils 1:3.3-2
I: Validating dpkg 1.18.3
I: Validating dpkg-dev 1.18.3
I: Validating libdpkg-perl 1.18.3
I: Validating e2fslibs 1.42.13-1
I: Validating e2fsprogs 1.42.13-1
I: Validating libcomerr2 1.42.13-1
I: Validating libss2 1.42.13-1
I: Validating findutils 4.4.2-10
I: Validating gcc-4.8-base 4.8.5-1
I: Validating gcc-4.9-base 4.9.3-5
I: Validating cpp-5 5.2.1-23
I: Validating g++-5 5.2.1-23
...
After the debootstrap command finishes installing the packages, you see a minimal Debian installation in the localdirectory directory. You can then compress that directory into a file and import it into Docker, as shown in Example 7-32.
Example 7-32 Importing from a local tgz file
$ cd localdirectory
$ sudo tar -zcf ../debian-unstable.tgz *
$ cat ../debian-unstable.tgz | docker import -
ab90127bcd9308574948ef3920939ab9999bbc78f1ff3fe0
For more information about debootstrap and how to add extra packages to the image, check: