Managing guests from the command-line interface
This chapter describes how to manage IBM PowerKVM virtual machines from the command-line interface (CLI) by using the virsh command. It covers the most common commands and options that are used to manage guests. See the virsh man page and the libvirt online documentation for detailed information about how to manage guests from the command line.
4.1 virsh console
virsh is a command-line interface to libvirt (a library to manage the KVM hypervisor). You can manage libvirt from the local host or remotely. If you manage PowerKVM from a remote Linux-based operating system, you must have virsh installed on the remote system. You can connect to PowerKVM by using the virsh connect command. As an argument, it accepts a URI such as qemu+ssh://root@hostname/system, as shown in Example 4-1.
Example 4-1 Connect remotely
% virsh
Welcome to virsh, the virtualization interactive terminal.
 
Type: 'help' for help with commands
'quit' to quit
 
virsh # connect qemu+ssh://[email protected]/system
[email protected]'s password:
 
virsh # list
Id Name State
----------------------------------------------------
8 MyGuest running
It is also possible to log in to PowerKVM through a Secure Shell (SSH) and run virsh directly on the target host.
You can run virsh in either of two modes:
Interactive terminal mode, as shown in Example 4-2.
Example 4-2 Working within virsh interactive shell
% virsh
Welcome to virsh, the virtualization interactive terminal.
 
Type: 'help' for help with commands
'quit' to quit
 
virsh # list
Id Name State
----------------------------------------------------
8 MyGuest running
 
Non-interactive virsh subcommand from a shell prompt, as shown in Example 4-3.
Example 4-3 Running virsh commands from system shell
# virsh list --all
Id Name State
----------------------------------------------------
8 MyGuest running
4.1.1 virsh vncdisplay
If your running guest has a virtual graphical adapter attached, you can connect to a graphical console by using Virtual Network Computing (VNC). You can discover which VNC display a guest is using with the virsh vncdisplay command, as shown in Example 4-4.
Example 4-4 vncdisplay command
# virsh vncdisplay MyGuest
127.0.0.1:1
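The returned value is a VNC display number, which corresponds to TCP port 5900 plus that number (display :1 is port 5901). A minimal sketch of connecting to it, assuming a VNC client such as TigerVNC's vncviewer is available; because the server in this example listens on 127.0.0.1, a remote client typically reaches it through an SSH tunnel:
# vncviewer 127.0.0.1:1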
4.2 Managing storage pools
A pool, or storage pool, is a collection of disks that contains all of the data for a specified set of volumes. Storage pools are created by administrators for use by guests; in other words, they consist of volumes that guests use as block devices. This section describes how to manage storage pools by using the command line. See 6.3, “Storage pools” on page 167 for more information about the storage pool concept.
4.2.1 Create new storage pools
Storage pools can be created from a pre-created XML file with the virsh pool-create command or directly, without an XML file, by using the following command:
virsh pool-create-as <arguments>
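As a minimal sketch of the XML-file approach, assuming a pool definition has already been saved to a hypothetical file named mypool.xml (for example, by adapting the output of virsh pool-dumpxml from an existing pool):
# virsh pool-create mypool.xml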
Create file-based pools
Example 4-5 shows how to create file-based pools for DIR and NFS.
Example 4-5 Create a file-based pool
DIR:
# mkdir /MyPool
# virsh pool-create-as MyPool dir --target=/MyPool
Pool MyPool created
 
# virsh pool-list
Name State Autostart
-------------------------------------------
default active no
ISO active no
MyPool active no
 
# virsh pool-dumpxml MyPool
<pool type='dir'>
<name>MyPool</name>
<uuid>8e2b0942-7831-4f4e-8ace-c5539e3f22d2</uuid>
<capacity unit='bytes'>21003583488</capacity>
<allocation unit='bytes'>2164170752</allocation>
<available unit='bytes'>18839412736</available>
<source>
</source>
<target>
<path>/MyPool</path>
<permissions>
<mode>0755</mode>
<owner>0</owner>
<group>0</group>
<label>unconfined_u:object_r:default_t:s0</label>
</permissions>
</target>
</pool>
 
NFS:
# mkdir /MyPoolNFS
# virsh pool-create-as MyPoolNFS netfs \
--source-host=9.57.139.73 \
--source-path=/var/www/powerkvm \
--target=/MyPoolNFS
Pool MyPoolNFS created
 
# virsh pool-list
Name State Autostart
-------------------------------------------
default active no
ISO active no
MyPool active no
MyPoolNFS active no
 
# virsh pool-dumpxml MyPoolNFS
<pool type='netfs'>
<name>MyPoolNFS</name>
<uuid>a566ca99-a64c-4793-8c7f-d394aee058ef</uuid>
<capacity unit='bytes'>197774016512</capacity>
<allocation unit='bytes'>72522661888</allocation>
<available unit='bytes'>125251354624</available>
<source>
<host name='9.57.139.73'/>
<dir path='/var/www/powerkvm'/>
<format type='auto'/>
</source>
<target>
<path>/MyPoolNFS</path>
<permissions>
<mode>0755</mode>
<owner>0</owner>
<group>0</group>
<label>system_u:object_r:nfs_t:s0</label>
</permissions>
</target>
</pool>
Create block-based pools
Block-based pool types are logical (LVM2), iSCSI, and SCSI. Example 4-6 on page 109 shows how to create them.
 
Note: The LVM2-based pool must be created with the pool-define-as command and later built and activated.
Example 4-6 Create block-based pools
LVM2:
# virsh pool-define-as MyPoolLVM logical \
--source-dev /dev/sdb1 \
--target /dev/MyPoolLVM
Pool MyPoolLVM defined
 
# virsh pool-build MyPoolLVM
Pool MyPoolLVM built
 
# virsh pool-start MyPoolLVM
Pool MyPoolLVM started
 
# virsh pool-list
Name State Autostart
-------------------------------------------
default active no
ISO active no
MyPoolLVM active no
 
# virsh pool-dumpxml MyPoolLVM
<pool type='logical'>
<name>MyPoolLVM</name>
<uuid>37ac662e-28ab-4f80-a981-c62b20966e1e</uuid>
<capacity unit='bytes'>0</capacity>
<allocation unit='bytes'>0</allocation>
<available unit='bytes'>0</available>
<source>
<device path='/dev/sdb1'/>
<name>MyPoolLVM</name>
<format type='lvm2'/>
</source>
<target>
<path>/dev/MyPoolLVM</path>
</target>
</pool>
 
# vgs
VG #PV #LV #SN Attr VSize VFree
MyPoolLVM 1 0 0 wz--n- 931.51g 931.51g
 
iSCSI:
# virsh pool-create-as MyPoolISCSI iscsi \
--source-host 9.40.193.34 \
--source-dev iqn.1986-03.com.ibm.2145:mypooliscsi \
--target /dev/disk/by-id
Pool MyPoolISCSI created
 
# virsh vol-list MyPoolISCSI
Name Path
------------------------------------------------------------------------------
unit:0:0:1 /dev/disk/by-id/wwn-0x60000000000000000e00000000010001
 
# virsh pool-list
Name State Autostart
-------------------------------------------
default active no
ISO active no
MyPoolISCSI active no
 
# virsh pool-dumpxml MyPoolISCSI
<pool type='iscsi'>
<name>MyPoolISCSI</name>
<uuid>f4f5daf2-13a1-4781-b3e6-851d9435416c</uuid>
<capacity unit='bytes'>107374182400</capacity>
<allocation unit='bytes'>107374182400</allocation>
<available unit='bytes'>0</available>
<source>
<host name='9.40.193.34' port='3260'/>
<device path='iqn.1986-03.com.ibm.2145:mypooliscsi'/>
</source>
<target>
<path>/dev/disk/by-id</path>
</target>
</pool>
4.2.2 Query available storage pools
To list available storage pools, run the pool-list command, as shown in Example 4-7.
Example 4-7 Query available storage pools
# virsh pool-list
Name State Autostart
-------------------------------------------
default active no
ISO active no
MyPoolISCSI active no
To list details of a specific storage pool, run pool-info, as in Example 4-8 on page 111.
Example 4-8 Display pool information
# virsh pool-info default
Name: default
UUID: 078e187f-6838-421e-b3e7-7f5c515867b8
State: running
Persistent: yes
Autostart: no
Capacity: 878.86 GiB
Allocation: 3.21 GiB
Available: 875.65 GiB
 
Tip: To avoid using two commands, you can use the --details flag shown in Example 4-9.
Example 4-9 Display a verbose pool list
# virsh pool-list --details
Name State Autostart Persistent Capacity Allocation Available
---------------------------------------------------------------------------------
default running no yes 878.86 GiB 3.21 GiB 875.65 GiB
ISO running no yes 9.72 GiB 161.52 MiB 9.56 GiB
MyPoolISCSI running no no 100.00 GiB 100.00 GiB 0.00 B
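Pools created with pool-create-as are transient, which is why Autostart is no in these listings. A persistent pool (one created with pool-define-as) can be marked to start automatically when the host boots; a minimal sketch, assuming the persistent MyPoolLVM pool from Example 4-6:
# virsh pool-autostart MyPoolLVM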
4.2.3 List available volumes
To list created volumes in a particular storage pool, use the vol-list command. The storage pool must be specified as an argument. Example 4-10 shows how you can display volumes.
Example 4-10 Display volume list
# virsh vol-list MyPoolISCSI
Name Path
------------------------------------------------------------------------------
unit:0:0:1 /dev/disk/by-id/wwn-0x60000000000000000e00000000010001
4.2.4 Create a new volume
To create a new volume in a particular storage pool, use the vol-create-as command, as shown in Example 4-11.
Example 4-11 Create volume in storage pool
# virsh vol-create-as MyPool mypool.qcow2 --format qcow2 30G --allocation 4G
Vol mypool.qcow2 created
In this example, a 30 GB volume named mypool.qcow2 is created in the pool named MyPool. The --allocation argument instructs libvirt to allocate only 4 GB initially; the rest is allocated on demand. This is sometimes referred to as a thin-provisioned volume.
Example 4-12 shows how to check whether the volume has been created.
Example 4-12 Display the volume list
# virsh vol-list MyPool
Name Path
------------------------------------------------------------------------------
mypool.qcow2 /MyPool/mypool.qcow2
If you need a more verbose output, use the --details flag as shown in Example 4-13.
Example 4-13 Display verbose volume list
# virsh vol-list MyPool --details
Name Path Type Capacity Allocation
-----------------------------------------------------------------
mypool.qcow2 /MyPool/mypool.qcow2 file 30.00 GiB 196.00 KiB
4.2.5 Delete or wipe a volume
To wipe a volume from a pool, you can use the vol-wipe command shown in Example 4-14.
Example 4-14 Wipe volume
# virsh vol-wipe mypool.qcow2 MyPool
Vol mypool.qcow2 wiped
 
# virsh vol-list MyPool --details
Name Path Type Capacity Allocation
------------------------------------------------------------------
mypool.qcow2 /MyPool/mypool.qcow2 file 196.00 KiB 196.00 KiB
The wiped volumes are empty and can be reused for another guest.
To remove a volume completely, use the vol-delete command, as shown in Example 4-15.
Example 4-15 Volume delete
# virsh vol-delete mypool.qcow2 MyPool
Vol mypool.qcow2 deleted
 
 
 
 
 
Note: This command deletes the volume and therefore the data on the volume.
4.2.6 Snapshots
Snapshots save the current machine state (disk, memory, and device states) so that it can be restored later. They are especially useful when a user is about to perform actions that might destroy data. In that case, the guest can be reverted to a snapshot taken before the destructive operation and continue working from that point.
 
Note: virsh snapshot-revert loses all changes made in the guest since the snapshot was taken.
Example 4-16 shows how to create, list, revert, and delete snapshots.
Example 4-16 Working with snapshots
# virsh snapshot-create-as PowerKVM_VirtualMachine
Domain snapshot 1447164760 created
 
# virsh snapshot-create-as PowerKVM_VirtualMachine MyNewSnapshot
Domain snapshot MyNewSnapshot created
 
# virsh snapshot-revert PowerKVM_VirtualMachine MyNewSnapshot
 
# virsh snapshot-current PowerKVM_VirtualMachine --name
MyNewSnapshot
 
# virsh snapshot-create-as PowerKVM_VirtualMachine NewChild
Domain snapshot NewChild created
 
# virsh snapshot-list PowerKVM_VirtualMachine --parent
Name Creation Time State Parent
------------------------------------------------------------
1447164760 2015-11-10 09:12:40 -0500 shutoff (null)
MyNewSnapshot 2015-11-10 09:14:20 -0500 shutoff 1447164760
NewChild 2015-11-10 09:22:23 -0500 shutoff MyNewSnapshot
 
# virsh snapshot-delete PowerKVM_VirtualMachine MyNewSnapshot
Domain snapshot MyNewSnapshot deleted
When MyNewSnapshot was deleted, its content was merged into NewChild. Figure 4-1 illustrates what happens to a child when its parent snapshot is deleted.
Figure 4-1 Deleting a snapshot
4.3 Manage guest networks
This section describes how to manage guest networks from the command line.
The hypervisor network configuration is also described in XML files. The files that describe the host network configuration are stored in the /var/lib/libvirt/network directory.
The default configuration, which is based on Network Address Translation (NAT), is called the libvirt default network. After the PowerKVM installation, the default network is automatically created and available for use.
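You can inspect its definition with the virsh net-dumpxml command; a quick check, assuming the default network has not been removed:
# virsh net-dumpxml default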
When libvirt adds a network definition based on a bridge, the bridge is created automatically. You can see it by using the brctl show command, as shown in Example 4-17.
Example 4-17 Command to show the bridge interfaces on the system
# brctl show
bridge name bridge id STP enabled interfaces
virbr0 8000.5254000ec7c6 yes virbr0-nic
vnet1
virbr1 8000.525400dd81a0 yes virbr1-nic
4.3.1 Query guest networks
To list defined guest networks, use the net-list command shown in Example 4-18.
Example 4-18 Display networks
# virsh net-list --all
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
kop active yes yes
To list detailed information about a specific network entry, use the net-info command, as shown in Example 4-19.
Example 4-19 Display specific network
# virsh net-info default
Name: default
UUID: b7adb719-2cd9-4c7a-b908-682cdd4e0f8a
Active: yes
Persistent: yes
Autostart: yes
Bridge: virbr0
4.3.2 Create a guest network
To create a guest network, use the virsh net-create command to create a transitory network, or use the virsh net-define command to create a persistent network that remains defined after the host restarts. Both commands require a pre-created XML file with a network definition. See Example 4-20, Example 4-21 on page 116, and Example 4-22 on page 116 for NAT, bridged, and Open vSwitch networks, respectively. For more information about host networking, see 6.2, “Network virtualization” on page 164.
In the next examples, we use the uuidgen command to create unique identifiers for these networks. However, libvirt generates one automatically if the uuid element is omitted.
Example 4-20 NAT definition
# uuidgen
a2c0da29-4e9c-452a-9b06-2bbe9d8f8f65
 
# cat nat.xml
<network>
<name>MyNat</name>
<uuid>a2c0da29-4e9c-452a-9b06-2bbe9d8f8f65</uuid>
<forward mode='nat'/>
<bridge name='myvirbr0' stp='on' delay='0'/>
<ip address='192.168.133.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.133.2' end='192.168.133.50'/>
</dhcp>
</ip>
</network>
 
# virsh net-define nat.xml
Network MyNat defined from nat.xml
 
# virsh net-start MyNat
Network MyNat started
# virsh net-list
Name State Autostart Persistent
----------------------------------------------------------
bridge active yes yes
default active yes yes
kop active yes yes
MyNat active no yes
Example 4-21 Bridged network
# uuidgen
8ee4536b-c4d3-4e3e-a139-6108f3c2d5f5
 
# cat brdg.xml
<network>
<name>MyBridge</name>
<uuid>8ee4536b-c4d3-4e3e-a139-6108f3c2d5f5</uuid>
<forward dev='enP1p12s0f0' mode='bridge'>
<interface dev='enP1p12s0f0'/>
</forward>
</network>
 
# virsh net-define brdg.xml
Network MyBridge defined from brdg.xml
 
# virsh net-start MyBridge
Network MyBridge started
 
# virsh net-list
Name State Autostart Persistent
----------------------------------------------------------
bridge active yes yes
default active yes yes
kop active yes yes
MyBridge active no yes
MyNat active no yes
Example 4-22 Open vSwitch configuration
# systemctl start openvswitch
# ovs-vsctl add-br myOVS
# ovs-vsctl add-port myOVS enP1p12s0f1
# ovs-vsctl show
ed0a4b3d-0738-468e-9642-0282c4342960
Bridge myOVS
Port myOVS
Interface myOVS
type: internal
Port "enP1p12s0f1"
Interface "enP1p12s0f1"
 
# cat ovs.xml
<network>
<name>MyOVSBr</name>
<forward mode='bridge'/>
<bridge name='myOVS'/>
<virtualport type='openvswitch'/>
<portgroup name='default' default='yes'>
</portgroup>
</network>
 
# virsh net-define ovs.xml
Network MyOVSBr defined from ovs.xml
 
# virsh net-start MyOVSBr
Network MyOVSBr started
 
# virsh net-list
Name State Autostart Persistent
----------------------------------------------------------
bridge active yes yes
default active yes yes
kop active yes yes
MyBridge active no yes
MyNat active no yes
MyOVSBr active no yes
Example 4-23 shows what the guest network interface looks like when using the Open vSwitch bridge.
Example 4-23 Open vSwitch bridge interface
# virsh dumpxml PowerKVM_VirtualMachine
...
<interface type='bridge'>
<mac address='52:54:00:c9:e4:99'/>
<source network='MyOVSBr' bridge='myOVS'/>
<virtualport type='openvswitch'>
<parameters interfaceid='41c249bb-c59d-4632-9601-25d0a9285755'/>
</virtualport>
<target dev='vnet0'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</interface>
...
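The networks defined in these examples show Autostart no in the net-list output. To have a persistent network start automatically when the host boots, you can use the virsh net-autostart command; a minimal sketch, assuming the MyNat network from Example 4-20:
# virsh net-autostart MyNat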
4.4 Managing guests
This section describes how to manage guests from the command line. The guest configuration is represented as an XML file by libvirt.
4.4.1 Create a new guest
To create new guests from the command line, use the virt-install command. These are the core arguments:
--name Defines a guest name
--vcpus Defines the number of CPUs used by the guest
--memory Defines the amount of RAM, in megabytes, for a guest
--cdrom Defines the ISO that is used for guest installation
--disk Defines a block device for a guest
--network Specifies a network to plug in to a guest
Each argument has its own suboptions. Only the most important ones are covered in this chapter.
The vcpus argument has the following options (also see Example 4-24):
maxvcpus Sets the maximum number of CPUs that you can hot plug into a guest
threads Sets the number of threads used by a guest
Example 4-24 vcpus options
--vcpus=2,threads=8,maxvcpus=5
The disk argument has the following suboptions, which are also shown in Example 4-25.
pool Specifies the pool name where you are provisioning the volume
size Specifies the volume size in GB
format Format of the resulting image (raw or qcow2)
bus Bus used by the block device
 
Example 4-25 Disk options
--disk pool=default,size=10,format=qcow2,bus=spapr-scsi
Example 4-26 shows an example of attaching a volume from a Fibre Channel or iSCSI pool.
Example 4-26 LUN mapping
--disk vol=poolname/unit:0:0:1
The network argument also has suboptions; these are the most important:
bridge Host bridge device
network Network name created in “Create a guest network” on page 115
Example 4-27 shows examples of networks.
Example 4-27 Network arguments
--network bridge=myvirbr0
--network default,mac=52:54:00:bd:7f:d5
--network network=MyOVSBr
Example 4-28 shows a full command with its arguments.
Example 4-28 virt-install example
# virt-install --name PowerKVM_VirtualMachine \
--memory 4096 \
--vcpus=2,threads=2,maxvcpus=8 \
--cdrom /var/lib/libvirt/images/distro.iso \
--disk pool=default,size=100,sparse=true,cache=none,format=qcow2 \
--network default
The virt-install command automatically starts the installation and attaches the console.
4.4.2 List guests
By default, the list command displays only running guests. To also see powered-off guests, use the --all flag, as shown in Example 4-29 on page 119.
Example 4-29 List all guests
# virsh list --all
Id Name State
----------------------------------------------------
8 MyGuest running
- PowerKVM_VirtualMachine shut off
4.4.3 Start or stop a guest
To start an existing guest, use the start command as in Example 4-30.
Example 4-30 Start VM
# virsh start PowerKVM_VirtualMachine
Domain PowerKVM_VirtualMachine started
To power off the guest, use the destroy or the shutdown command, as in Example 4-31. The shutdown command interacts with the guest operating system to shut down the system gracefully. This operation can take some time because all services must be stopped. The destroy command stops the guest immediately, which can damage the guest operating system.
Example 4-31 Stop/halt VM
# virsh list --all
Id Name State
----------------------------------------------------
8 MyGuest running
25 PowerKVM_VirtualMachine running
 
# virsh destroy PowerKVM_VirtualMachine
Domain PowerKVM_VirtualMachine destroyed
 
# virsh list --all
Id Name State
----------------------------------------------------
8 MyGuest running
- PowerKVM_VirtualMachine shut off
 
# virsh list --all
Id Name State
----------------------------------------------------
8 MyGuest running
28 PowerKVM_VirtualMachine running
 
# virsh shutdown PowerKVM_VirtualMachine
Domain PowerKVM_VirtualMachine is being shutdown
 
# virsh list --all
Id Name State
----------------------------------------------------
8 MyGuest running
- PowerKVM_VirtualMachine shut off
 
4.4.4 Suspending and resuming
A virtual machine can be suspended. This pauses the guest so that it does not use CPU resources until it is resumed; however, it still uses host memory. A guest can be paused by using the suspend command. After a guest is suspended, it changes to the paused state and can be resumed by using the resume command.
Example 4-32 shows the state of the guest before it is suspended, after it is suspended, and after it is resumed.
Example 4-32 Suspending and resuming a guest
# virsh list
Id Name State
----------------------------------------------------
60 PowerKVM_VirtualMachine running
 
# virsh suspend PowerKVM_VirtualMachine
Domain PowerKVM_VirtualMachine suspended
 
# virsh list
Id Name State
----------------------------------------------------
60 PowerKVM_VirtualMachine paused
 
# virsh resume PowerKVM_VirtualMachine
Domain PowerKVM_VirtualMachine resumed
 
# virsh list
Id Name State
----------------------------------------------------
60 PowerKVM_VirtualMachine running
4.4.5 Delete a guest
To delete a guest, run the virsh undefine command as shown in Example 4-33.
Example 4-33 Deleting a guest
# virsh undefine PowerKVM_VirtualMachine
Domain PowerKVM_VirtualMachine has been undefined
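The undefine command removes the guest configuration but leaves its storage volumes in place. If the volumes should be removed at the same time, the --remove-all-storage flag can be added; a sketch, assuming the volumes are managed by libvirt:
# virsh undefine PowerKVM_VirtualMachine --remove-all-storage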
4.4.6 Connect to a guest
To connect to an already running guest, use the console command as shown in Example 4-34.
Example 4-34 Console access
# virsh start PowerKVM_VirtualMachine
Domain PowerKVM_VirtualMachine started
 
# virsh console PowerKVM_VirtualMachine
Connected to domain PowerKVM_VirtualMachine
Escape character is ^]
 
Note: To detach an open console, hold down the Ctrl key and press the ] key.
It is also possible to start a guest and attach to the console by using one command:
# virsh start PowerKVM_VirtualMachine --console
4.4.7 Edit a guest
The virsh edit command is used to edit any guest parameter. It does not provide a friendly user interface as Kimchi does. Instead, virsh opens the guest XML in a text editor, where changes must be made manually.
Example 4-35 shows how to edit a guest. Although you work in an ordinary text editor, virsh validates the edited XML to help avoid damaging the guest configuration.
Example 4-35 Editing guest configuration
# virsh edit PowerKVM_VirtualMachine
<domain type='kvm'>
<name>PowerKVM_VirtualMachine</name>
<uuid>6009664b-27c5-4717-97c5-370917c9594f</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static' current='2'>8</vcpu>
<os>
<type arch='ppc64le' machine='pseries-2.4'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu>
<topology sockets='4' cores='1' threads='2'/>
</cpu>
...
After quitting the editor, virsh checks the file for errors and returns one of the following messages:
Domain guest XML configuration not changed
Error: operation failed... Try again? [y,n,i,f,?]
Domain guest XML configuration edited
The default text editor used by virsh is vi, but it is possible to use any other editor by setting the EDITOR shell variable. Example 4-36 shows how to configure a different text editor for virsh.
Example 4-36 Using another text editor
EDITOR=nano virsh edit PowerKVM_VirtualMachine
4.4.8 Add new storage to an existing guest
Example 4-37 shows how to create a new virtual disk, how to plug that disk into the guest, and how to unplug the disk.
Example 4-37 Adding new virtual storage
# qemu-img create -f qcow2 mynewdisk.qcow2 80G
Formatting 'mynewdisk.qcow2', fmt=qcow2 size=85899345920 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
 
# cat disk.xml
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/mynewdisk.qcow2'/>
<target dev='vdb' bus='virtio'/>
</disk>
 
# virsh attach-device PowerKVM_VirtualMachine disk.xml --config
Device attached successfully
The --config parameter persists the attached device in the guest XML, so the disk definition survives reboots. However, if the device is attached with only --config while the guest is running, it becomes available to the guest only after the next reboot. To unplug the disk, run the following command:
# virsh detach-device PowerKVM_VirtualMachine disk.xml --config
Example 4-38 shows how to hot plug a disk. Using the same disk created in Example 4-37, start the guest and run the following commands.
Example 4-38 Storage live pass-through
# virsh attach-device PowerKVM_VirtualMachine disk.xml --live
Device attached successfully
 
# virsh detach-device PowerKVM_VirtualMachine disk.xml --live
Device detached successfully
The --live parameter attaches the device to a running guest. The device is automatically detached when the guest is turned off.
Note: If the guest is running, both the --live and --config parameters can be used together.
4.4.9 Add a new network to an existing guest
To attach a network interface, create the XML with the new virtual card and attach it by using the same virsh attach-device command. Example 4-39 shows how to plug and unplug such a device.
Example 4-39 Network interface pass-through
# cat network.xml
<interface type='network'>
<mac address='52:54:00:ba:00:00'/>
<source network='default'/>
<model type='virtio'/>
</interface>
 
# virsh attach-device PowerKVM_VirtualMachine network.xml --config
Device attached successfully
 
# virsh detach-device PowerKVM_VirtualMachine network.xml --config
Device detached successfully
Example 4-40 shows how to hotplug a network interface and how to unplug it.
Example 4-40 Network interface hotplug
# virsh attach-device PowerKVM_VirtualMachine network.xml --live
Device attached successfully
 
(the command ifconfig -a was run in the guest)
# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.122.61 netmask 255.255.255.0 broadcast 192.168.122.255
inet6 fe80::5054:ff:fec1:c718 prefixlen 64 scopeid 0x20<link>
ether 52:54:00:c1:c7:18 txqueuelen 1000 (Ethernet)
RX packets 67 bytes 7095 (6.9 KiB)
RX errors 0 dropped 9 overruns 0 frame 0
TX packets 57 bytes 6732 (6.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
 
eth1: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 52:54:00:ba:00:00 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# virsh detach-device PowerKVM_VirtualMachine network.xml --live
Device detached successfully
4.4.10 PCI I/O pass-through
The lspci command lists all PCI devices attached to the host. Example 4-41 shows how to list them so that you can choose the device to attach to a guest. Refer to 6.4, “I/O pass-through” on page 170 for more details about PCI pass-through.
Example 4-41 Listing host PCI devices
# lspci
0000:00:00.0 PCI bridge: IBM Device 03dc
0001:00:00.0 PCI bridge: IBM Device 03dc
0001:01:00.0 PCI bridge: PLX Technology, Inc. Device 8748 (rev ca)
0001:02:01.0 PCI bridge: PLX Technology, Inc. Device 8748 (rev ca)
0001:02:08.0 PCI bridge: PLX Technology, Inc. Device 8748 (rev ca)
0001:02:09.0 PCI bridge: PLX Technology, Inc. Device 8748 (rev ca)
0001:02:0a.0 PCI bridge: PLX Technology, Inc. Device 8748 (rev ca)
0001:02:10.0 PCI bridge: PLX Technology, Inc. Device 8748 (rev ca)
0001:02:11.0 PCI bridge: PLX Technology, Inc. Device 8748 (rev ca)
0001:08:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 x2 4-port SATA 6 Gb/s Controller (rev 11)
0001:09:00.0 USB controller: Texas Instruments TUSB73x0 SuperSpeed USB 3.0 xHCI Host Controller (rev 02)
0001:0a:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 03)
0001:0b:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 30)
0001:0c:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
0001:0c:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
0001:0c:00.2 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
0001:0c:00.3 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
0002:00:00.0 PCI bridge: IBM Device 03dc
In Example 4-42, we create an XML file with the host PCI address information, which is required to attach the device to the guest. In this example, we choose device 0001:0b:00.0.
After creating the XML file, detach the device from the host by running virsh nodedev-detach. Then, use the virsh attach-device command.
Example 4-42 Getting PCI device information
# cat pci.xml
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0001' bus='0x0b' slot='0x00' function='0x0'/>
</source>
</hostdev>
 
# virsh nodedev-detach pci_0001_0b_00_0
Device pci_0001_0b_00_0 detached
 
# virsh attach-device PowerKVM_VirtualMachine pci.xml --config
Device attached successfully
 
# virsh detach-device PowerKVM_VirtualMachine pci.xml --config
Device detached successfully
As mentioned, the --config parameter means that the change takes effect after the guest is rebooted. For a live action, the --live parameter must be used instead. Figure 4-2 shows how to hot plug a PCI device into a guest.
Figure 4-2 Interaction between host and guest during PCI hotplug
When the device is not in use by any guest, it can be reattached to the host by calling the following command:
# virsh nodedev-reattach pci_0001_0b_00_0
Device pci_0001_0b_00_0 re-attached
When using multifunction PCI pass-through, some rules must be observed:
All functions must be detached from the host
All functions must be attached to the same guest
 
Note: Multi-function PCI device live pass-through is not currently supported.
Example 4-43 shows how to attach a multifunction device step-by-step. Each function is defined in its own XML, where the guest PCI address is also defined.
 
Note: The first function definition requires the multifunction='on' parameter in the guest PCI address.
Example 4-43 Attaching a multi-function PCI device
# cat multif0.xml
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0001' bus='0x0c' slot='0x00' function='0x0'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</hostdev>
 
# cat multif1.xml
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0001' bus='0x0c' slot='0x00' function='0x1'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</hostdev>
 
# cat multif2.xml
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0001' bus='0x0c' slot='0x00' function='0x2'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
</hostdev>
 
# cat multif3.xml
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0001' bus='0x0c' slot='0x00' function='0x3'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x3'/>
</hostdev>
 
# virsh nodedev-detach pci_0001_0c_00_0
Device pci_0001_0c_00_0 detached
 
# virsh nodedev-detach pci_0001_0c_00_1
Device pci_0001_0c_00_1 detached
 
# virsh nodedev-detach pci_0001_0c_00_2
Device pci_0001_0c_00_2 detached
 
# virsh nodedev-detach pci_0001_0c_00_3
Device pci_0001_0c_00_3 detached
 
# virsh attach-device PowerKVM_VirtualMachine multif0.xml --config
Device attached successfully
 
# virsh attach-device PowerKVM_VirtualMachine multif1.xml --config
Device attached successfully
 
# virsh attach-device PowerKVM_VirtualMachine multif2.xml --config
Device attached successfully
 
# virsh attach-device PowerKVM_VirtualMachine multif3.xml --config
Device attached successfully
Example 4-44 shows how the multifunction device is displayed in the guest.
Example 4-44 Guest displaying multifunction device
# virsh start PowerKVM_VirtualMachine --console
 
# lspci
00:01.0 Ethernet controller: Red Hat, Inc Virtio network device
00:02.0 USB controller: Apple Inc. KeyLargo/Intrepid USB
00:03.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon
00:04.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:05.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
00:05.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
00:05.2 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
00:05.3 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
Example 4-45 shows how to reattach the multifunction device to the host.
Example 4-45 Reattaching multifunction device to the host
# virsh nodedev-reattach pci_0001_0c_00_0
Device pci_0001_0c_00_0 re-attached
 
# virsh nodedev-reattach pci_0001_0c_00_1
Device pci_0001_0c_00_1 re-attached
 
# virsh nodedev-reattach pci_0001_0c_00_2
Device pci_0001_0c_00_2 re-attached
 
# virsh nodedev-reattach pci_0001_0c_00_3
Device pci_0001_0c_00_3 re-attached
4.4.11 CPU Hotplug
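If a guest was defined with a maximum vCPU count that is higher than its current count (see the maxvcpus option in 4.4.1), additional vCPUs can typically be added to the running guest with the virsh setvcpus command. A minimal sketch, assuming the PowerKVM_VirtualMachine guest was defined with maxvcpus=8:
# virsh setvcpus PowerKVM_VirtualMachine 4 --live
# virsh vcpucount PowerKVM_VirtualMachine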
4.4.12 Memory Hotplug
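Memory can typically be added to a running guest by attaching a DIMM memory device, provided that the guest definition includes a maxMemory element and a NUMA topology. A minimal sketch, assuming a hypothetical dimm.xml device description and a suitably configured guest:
# cat dimm.xml
<memory model='dimm'>
<target>
<size unit='MiB'>1024</size>
<node>0</node>
</target>
</memory>
 
# virsh attach-device PowerKVM_VirtualMachine dimm.xml --live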
4.4.13 Clone a guest
Example 4-46 shows how to clone a guest by using the virt-clone command.
Example 4-46 Creating a clone guest
# virt-clone --original PowerKVM_VirtualMachine \
--name PowerKVM_Clone \
--file /var/lib/libvirt/images/clone.qcow2
Clone 'PowerKVM_Clone' created successfully.
 
[root@ltc-hab1 ~]# virsh list --all
Id Name State
----------------------------------------------------
- MyGuest shut off
- PowerKVM_Clone shut off
- PowerKVM_VirtualMachine shut off
4.4.14 Migration
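Guests can typically be moved to another PowerKVM host with the virsh migrate command. A minimal sketch of a live migration, assuming a hypothetical destination host named desthost that can access the same storage and accepts SSH connections from the source host:
# virsh migrate --live PowerKVM_VirtualMachine qemu+ssh://root@desthost/system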