VMware command-line tools for configuring vSphere ESXi storage
In this chapter, we outline the process to configure SAN storage by using the iSCSI software initiator and Fibre Channel protocol. We also describe the necessary settings to connect to DS5000 storage subsystems.
6.1 Introduction to command-line tools
vSphere supports several command-line interfaces for managing your virtual infrastructure:
vSphere command-line interface (vCLI)
ESXi Shell commands (esxcli, vicfg)
PowerCLI
We use ESXi Shell commands to configure iSCSI and FC SAN storage.
6.1.1 Enabling ESXi Shell from DCUI
The ESXi Shell commands feature is natively included in the local support consoles, but this feature is not enabled by default.
Follow these steps from the Direct Console User Interface (DCUI):
1. At the direct console of the ESXi host, press F2 and provide credentials when prompted.
2. Scroll to Troubleshooting Mode Options and press Enter.
3. Choose Enable ESXi Shell and press Enter.
4. The “ESXi Shell is Enabled” message is displayed on the right side of the window, as shown in Figure 6-1.
Figure 6-1 Enabling ESXi Shell from the DCUI
5. Press Esc until you return to the main direct console panel, and confirm when you are prompted to save the configuration changes.
6.1.2 Enabling ESXi Shell with the vSphere Client
Follow these steps to enable ESXi Shell with the vSphere Client:
1. Log in to a vCenter Server system by using the vSphere Client.
2. Select the host in the inventory panel.
3. Click the Configuration tab and click Security Profile.
4. In the Services section, click Properties.
5. Select ESXi Shell from this list as shown in Figure 6-2.
Figure 6-2 Checking Services from vSphere Client
6. Click Options to open the ESXi Shell Options window.
7. From the ESXi Shell Options window, select the required Startup Policy. Click Start to enable the service as shown in Figure 6-3 on page 146.
Figure 6-3 Starting services
8. Repeat steps 5 - 7 to enable the Secure Shell (SSH) service.
6.1.3 Running ESXi Shell commands
vSphere ESXi supports the execution of ESXi Shell commands in different ways:
Locally executed from the DCUI console
Remotely executed by using SSH through the local support console
Remotely using the vMA appliance
Remotely using the vSphere CLI
For this example, we run ESXi Shell commands remotely by using vSphere CLI. We can install the vSphere CLI command set on a supported Linux or Microsoft Windows system. The installation package and deployment procedure are available at the following link:
The vSphere CLI command is, by default, available by clicking Start → Programs → VMware → VMware vSphere CLI.
The basic usage is formatted this way:
esxcli --server <vc_server> --username <privileged_user> --password <pw> --vihost <esx_host> <namespace> [<namespace> ...] <command> [--<option_name>=<option_value>]
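As a concrete illustration of this syntax, the following sketch expands the placeholders into a full call. The server name, credentials, and host name here are our own placeholder values, not from a real environment; the `ESXCLI` variable includes `echo` so the command is only printed (a dry run) and can be inspected before it is pointed at a live vCenter Server:

```shell
# Dry-run sketch of the general esxcli syntax; all connection values are
# placeholders. Remove "echo" from ESXCLI to execute for real.
ESXCLI="echo esxcli"
$ESXCLI --server vc01.example.com \
    --username administrator \
    --password 'secret' \
    --vihost esx01.example.com \
    storage core adapter list
```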
Later, we describe the basic command-line syntax. For more details about available ESXi Shell commands, see the main reference support document:
6.1.4 Saving time running ESXi Shell commands
To avoid retyping the connection information on the command line, you can create a configuration file to reference every time that you run a command.
The following example illustrates the contents of the configuration file that we have saved as esxcli.config:
VI_SERVER = XX.XXX.XXX.XX
VI_USERNAME = root
VI_PASSWORD = my_password
VI_PROTOCOL = https
VI_PORTNUMBER = 443
Replace these values with the data for your environment; the file then saves you from retyping the connection details for every ESXi Shell command.
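The five lines above can also be written from a script with a here-document. This sketch creates the same sample file; the server address and credentials are placeholders to replace with your own environment data:

```shell
# Write the esxcli.config session file. Replace the values with your own
# vCenter/ESXi address and credentials before use.
cat > esxcli.config <<'EOF'
VI_SERVER = 192.0.2.10
VI_USERNAME = root
VI_PASSWORD = my_password
VI_PROTOCOL = https
VI_PORTNUMBER = 443
EOF

# Every later command then needs only the --config switch, for example:
#   esxcli --config esxcli.config storage core adapter list
```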
 
Note: Save the configuration file in the same directory as the ESXi Shell command to avoid syntax errors. The following locations are the default locations for the ESXi Shell command on Windows:
32-Bit OS:
C:\Program Files\VMware\VMware vSphere CLI
64-Bit OS:
C:\Program Files (x86)\VMware\VMware vSphere CLI
VMware provides many resources and an active user community forum at the ESXi Shell main page:
6.2 Connecting to SAN storage by using iSCSI
You can attach the DS Storage Systems to your hosts by using iSCSI interfaces. We show how to configure your vSphere ESXi hosts to use a regular Ethernet network interface card (NIC) and the native software iSCSI Initiator to connect to a DS5300 system with iSCSI host interface cards (HICs).
Our implementation example uses vSphere ESXi 5.0 with two Ethernet network cards connected to different Ethernet switches. The traffic is isolated on a dedicated private network where the DS5300 iSCSI controllers reside.
The DS Storage System iSCSI ports are defined in the following way:
192.168.130.101 - iSCSI Controller A
192.168.130.102 - iSCSI Controller B
The following procedure explains how to connect to your storage by using iSCSI. The software iSCSI adapter is built into the VMware ESXi code.
Configuring the iSCSI software initiator takes several steps:
1. Activate the software iSCSI adapter.
2. Configure networking for iSCSI.
3. Configure iSCSI discovery addresses.
4. Enable security (CHAP).
6.2.1 Activating the software iSCSI adapter
To activate the software iSCSI adapter, click Start → Programs → VMware → VMware vSphere CLI → Command Prompt and enter the following commands:
1. Enable the iSCSI software initiator:
esxcli --config esxcli.config iscsi software set --enabled=true
2. Check the iSCSI software initiator status:
esxcli --config esxcli.config iscsi software get
 
Note: The command prints true if software iSCSI is enabled, or false if it is not enabled.
Now that the iSCSI software initiator is enabled on your system, you can obtain the iSCSI host bus adapter (HBA) name and its iSCSI qualified name (IQN).
To discover the available adapters and get the iSCSI IQN name, run the command that is shown in Example 6-1.
Example 6-1 Discovering available adapters
C:\Program Files\VMware\VMware vSphere CLI>esxcli --config esxcli.config storage core adapter list
HBA Name Driver Link State UID Description
-------- --------- ---------- ------------------------------------------ ----------------------------------------------------------------------------
vmhba0 ata_piix link-n/a sata.vmhba0 (0:0:31.2) Intel Corporation 82801H (ICH8 Family) 4 port SATA IDE Controller
vmhba1 ata_piix link-n/a sata.vmhba1 (0:0:31.5) Intel Corporation 82801H (ICH8 Family) 2 port SATA IDE Controller
vmhba32 ata_piix link-n/a sata.vmhba32 (0:0:31.2) Intel Corporation 82801H (ICH8 Family) 4 port SATA IDE Controller
vmhba33 ata_piix link-n/a sata.vmhba33 (0:0:31.5) Intel Corporation 82801H (ICH8 Family) 2 port SATA IDE Controller
vmhba34 iscsi_vmk online iqn.1998-01.com.vmware:redbooks03-5147ed14 iSCSI Software Adapter
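If you script this step, the HBA name and IQN can be filtered out of the adapter list with a one-line awk filter. The here-document below replays an abbreviated copy of the Example 6-1 output so the sketch runs standalone; on a live host, you would pipe the real `esxcli --config esxcli.config storage core adapter list` output into the same filter:

```shell
# Print the HBA name and IQN of the software iSCSI adapter (driver
# iscsi_vmk). Sample data stands in for the real esxcli output.
awk '$2 == "iscsi_vmk" { print $1, $4 }' <<'EOF'
vmhba0   ata_piix   link-n/a  sata.vmhba0   (0:0:31.2) Intel Corporation
vmhba34  iscsi_vmk  online    iqn.1998-01.com.vmware:redbooks03-5147ed14 iSCSI Software Adapter
EOF
# prints: vmhba34 iqn.1998-01.com.vmware:redbooks03-5147ed14
```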
6.2.2 Configuring networking for iSCSI
We use two network adapters for iSCSI connection to the storage subsystem. They need to be added to a separate virtual switch, and they both need to be assigned a separate IP address. This procedure shows the necessary steps:
From the menu, click Start → Programs → VMware → VMware vSphere CLI → Command Prompt, and enter the following commands:
1. Create a Virtual Standard Switch (VSS) named vSwitch_iSCSI:
esxcli --config esxcli.config network vswitch standard add --vswitch-name=vSwitch_iSCSI
2. Add a portgroup to the standard vSwitch vSwitch_iSCSI:
esxcli --config esxcli.config network vswitch standard portgroup add -p iSCSI-1 -v vSwitch_iSCSI
3. Add a secondary portgroup to vSwitch_iSCSI:
esxcli --config esxcli.config network vswitch standard portgroup add -p iSCSI-2 -v vSwitch_iSCSI
After we create the virtual switch and add the portgroups, the next step is to configure the portgroups by adding VMkernel interfaces.
 
Note: In this example, we assume that one VMkernel interface (vmk0) already exists for the vSwitch0 Management Network. We add two more VMkernel ports with the default names vmk1 and vmk2.
4. Add a VMkernel interface (vmk1) to the iSCSI-1 portgroup:
esxcli --config esxcli.config network ip interface add -i vmk1 -p iSCSI-1
5. Repeat the process to add a VMkernel interface (vmk2) to the iSCSI-2 portgroup:
esxcli --config esxcli.config network ip interface add -i vmk2 -p iSCSI-2
Next, configure the network settings of the newly created VMkernel ports vmk1 and vmk2. The IP addresses must be in the same network/VLAN as the iSCSI adapters of your DS Storage Subsystem. Follow these steps:
1. Set the static IP addresses on both VMkernel NICs as part of the iSCSI network:
esxcli --config esxcli.config network ip interface ipv4 set -i vmk1 -I 192.168.130.50 -N 255.255.255.0 -t static
2. Repeat the process to configure the secondary VMkernel interface vmk2:
esxcli --config esxcli.config network ip interface ipv4 set -i vmk2 -I 192.168.130.51 -N 255.255.255.0 -t static
Now, add uplinks to our vSwitch_iSCSI virtual switch:
1. Add a primary uplink adapter:
esxcli --config esxcli.config network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch_iSCSI
2. Repeat the process to add a secondary uplink adapter:
esxcli --config esxcli.config network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch_iSCSI
 
Note: To check the available vmnics, use the following command line:
esxcli --config esxcli.config network nic list
Set the manual override failover policy so that each iSCSI VMkernel portgroup has one active physical vmnic and one vmnic that is configured as “unused”:
1. Change the default failover policy for the iSCSI-1 portgroup:
esxcli --config esxcli.config network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic1 -u vmnic2
2. Repeat the process for changing the default failover policy for the iSCSI-2 portgroup:
esxcli --config esxcli.config network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic2 -u vmnic1
3. Configure the policy failover at the virtual switch level:
esxcli --config esxcli.config network vswitch standard policy failover set -v vSwitch_iSCSI -a vmnic1,vmnic2
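The networking steps in this section can be collected into a single batch. The following is a sketch, not an official VMware script: the `ESXCLI` variable includes `echo` so every command is only printed (a dry run). Remove the `echo`, and adjust the IP addresses and vmnic names to your environment, before running it against a live host.

```shell
#!/bin/sh
# Dry-run batch of the iSCSI networking setup from this section.
# Remove "echo" from ESXCLI to execute for real; the IPs and vmnic names
# below are the values used in this chapter's example environment.
ESXCLI="echo esxcli --config esxcli.config"

$ESXCLI network vswitch standard add --vswitch-name=vSwitch_iSCSI
$ESXCLI network vswitch standard portgroup add -p iSCSI-1 -v vSwitch_iSCSI
$ESXCLI network vswitch standard portgroup add -p iSCSI-2 -v vSwitch_iSCSI
$ESXCLI network ip interface add -i vmk1 -p iSCSI-1
$ESXCLI network ip interface add -i vmk2 -p iSCSI-2
$ESXCLI network ip interface ipv4 set -i vmk1 -I 192.168.130.50 -N 255.255.255.0 -t static
$ESXCLI network ip interface ipv4 set -i vmk2 -I 192.168.130.51 -N 255.255.255.0 -t static
$ESXCLI network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch_iSCSI
$ESXCLI network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch_iSCSI
$ESXCLI network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic1 -u vmnic2
$ESXCLI network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic2 -u vmnic1
$ESXCLI network vswitch standard policy failover set -v vSwitch_iSCSI -a vmnic1,vmnic2
```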
At this point, we have created a virtual switch (vSwitch). To check the vSwitch configuration parameters, execute the following command as shown in Example 6-2 on page 150.
Example 6-2 Checking virtual switch configuration parameters
C:\Program Files\VMware\VMware vSphere CLI>esxcli --config esxcli.config network vswitch standard list -v vSwitch_iSCSI
vSwitch_iSCSI
Name: vSwitch_iSCSI
Class: etherswitch
Num Ports: 128
Used Ports: 5
Configured Ports: 128
MTU: 1500
CDP Status: listen
Beacon Enabled: false
Beacon Interval: 1
Beacon Threshold: 3
Beacon Required By:
Uplinks: vmnic2, vmnic1
Portgroups: iSCSI-2, iSCSI-1
6.2.3 Configuring iSCSI discovery addresses
Before we proceed with the discovery process, we configure the iSCSI initiator by adding the vmk1 and vmk2 ports as binding ports:
1. Bind each of the VMkernel NICs to the software iSCSI HBA:
esxcli --config esxcli.config iscsi networkportal add -A vmhba34 -n vmk1
esxcli --config esxcli.config iscsi networkportal add -A vmhba34 -n vmk2
2. Now, we discover the targets by using the IP addresses of the IBM DS Storage Subsystems. We have two iSCSI interfaces on the DS5300 that use the 192.168.130.101 and 192.168.130.102 IP addresses.
Add the IP address of your iSCSI array or SAN as a dynamic discovery:
esxcli --config esxcli.config iscsi adapter discovery sendtarget add -A vmhba34 -a 192.168.130.101
3. Repeat the process for the secondary iSCSI array IP address:
esxcli --config esxcli.config iscsi adapter discovery sendtarget add -A vmhba34 -a 192.168.130.102
4. Rescan your software iSCSI HBA to discover volumes and Volume Manager File Systems (VMFS) datastores:
esxcli --config esxcli.config storage core adapter rescan --adapter vmhba34
5. To list the available file systems, run the command that is shown in Example 6-3.
Example 6-3 Listing the available storage file system from the command line
C:\Program Files\VMware\VMware vSphere CLI>esxcli --config esxcli.config storage filesystem list
Mount Point Volume Name UUID Mounted Type Size Free
------------------------------------------------- ----------- ----------------------------------- ------- ------ ------------ -----------
/vmfs/volumes/4e9ddd95-696fcc42-fa76-0014d126e786 datastore1 4e9ddd95-696fcc42-fa76-0014d126e786 true VMFS-5 74625056768 73606889472
/vmfs/volumes/4e9f531f-78b18f6e-7583-001641edb4dd Datastore_2 4e9f531f-78b18f6e-7583-001641edb4dd true VMFS-5 107105746944 63313018880
/vmfs/volumes/4ea20b1e-6cf76340-4250-001641edb4dd Datastore_1 4ea20b1e-6cf76340-4250-001641edb4dd true VMFS-5 107105746944 99139715072
/vmfs/volumes/4e9ddd95-f1327d50-b7fc-0014d126e786 4e9ddd95-f1327d50-b7fc-0014d126e786 true vfat 4293591040 4280156160
/vmfs/volumes/b0b41f71-1bc96828-21df-6548ab457c03 b0b41f71-1bc96828-21df-6548ab457c03 true vfat 261853184 128225280
/vmfs/volumes/1f1e5f79-ce9138bf-c62c-3893b933397e 1f1e5f79-ce9138bf-c62c-3893b933397e true vfat 261853184 261844992
/vmfs/volumes/4e9ddd8d-b69852dc-3d8b-0014d126e786 4e9ddd8d-b69852dc-3d8b-0014d126e786 true vfat 299712512 114974720
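The binding, discovery, and rescan steps above can likewise be batched. As before, this is a dry-run sketch (`echo` only prints the commands); remove the `echo`, and make sure the adapter name and target IP addresses match your environment, before running it for real:

```shell
#!/bin/sh
# Dry-run batch of the port binding, target discovery, and rescan steps.
# Remove "echo" from ESXCLI to execute; vmhba34 and the target IPs are
# this chapter's example values.
ESXCLI="echo esxcli --config esxcli.config"

$ESXCLI iscsi networkportal add -A vmhba34 -n vmk1
$ESXCLI iscsi networkportal add -A vmhba34 -n vmk2
$ESXCLI iscsi adapter discovery sendtarget add -A vmhba34 -a 192.168.130.101
$ESXCLI iscsi adapter discovery sendtarget add -A vmhba34 -a 192.168.130.102
$ESXCLI storage core adapter rescan --adapter vmhba34
$ESXCLI storage filesystem list
```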
6.2.4 Enabling security (CHAP)
Configuring CHAP security is a best practice. To enable basic CHAP authentication, run the following command:
esxcli --config esxcli.config iscsi adapter auth chap set --adapter vmhba34 --authname iqn.1998-01.com.vmware:redbooks03-5147ed14 --direction uni --level preferred --secret ITSO2011_Secured
 
Security recommendations: Use strong passwords for all accounts. Use CHAP authentication because it ensures that each host has its own password. Mutual CHAP authentication is also recommended.
For more information: We assume that your DS Storage System is already configured for using CHAP authentication. For more information about iSCSI configuration at the DS Storage System level, see IBM System Storage DS5000 Series Implementation and Best Practices Guide, SG24-8024.
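The security note above also recommends mutual CHAP. Assuming that your DS Storage System target is configured with its own secret, a mutual (bidirectional) variant of the same command would look like the following sketch; the secret is a placeholder, and the command is again prefixed with `echo` as a dry run:

```shell
# Dry-run sketch: mutual CHAP on the software iSCSI adapter. The secret is
# a placeholder and must match what is configured on the DS Storage System.
ESXCLI="echo esxcli --config esxcli.config"
$ESXCLI iscsi adapter auth chap set --adapter vmhba34 \
    --authname iqn.1998-01.com.vmware:redbooks03-5147ed14 \
    --direction mutual --level required --secret 'Target_Secret_2011'
```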
6.3 Connecting to SAN storage by using Fibre Channel (FC)
Unlike iSCSI, FC configuration is relatively simple. In the next example, we use two HBAs that are connected to different SAN fabric switches. We define a zone on both fabric switches to separate the traffic, which improves stability and manageability. The IBM DS5300 has two controllers, Controller A and Controller B, which are also physically connected to different SAN fabric switches. With the VMware Native Multipathing Plug-in (NMP) failover driver that is natively provided by the hypervisor at the ESXi level, and the proposed cabling connections, the vSphere ESXi host can access the SAN-attached storage over alternative paths for redundancy. The Most Recently Used (MRU) policy is the recommended path policy.
As shown in Example 6-4, two HBAs cards are physically installed in our vSphere ESXi hosts.
Example 6-4 Discovering available adapters
C:\Program Files\VMware\VMware vSphere CLI>esxcli --config esxcli.config storage core adapter list
HBA Name Driver Link State UID Description
-------- -------- ---------- ------------------------------------ -----------------------------------------------------------------------------
vmhba0 ata_piix link-n/a sata.vmhba0 (0:0:31.2) Intel Corporation 82801H (ICH8 Family) 4 port SATA IDE Controller
vmhba1 ata_piix link-n/a sata.vmhba1 (0:0:31.5) Intel Corporation 82801H (ICH8 Family) 2 port SATA IDE Controller
vmhba2 qla2xxx link-n/a fc.200000e08b892cc0:210000e08b892cc0 (0:10:9.0) QLogic Corp QLA2340-Single Channel 2Gb Fibre Channel to PCI-X HBA
vmhba3 qla2xxx link-n/a fc.200000e08b18208b:210000e08b18208b (0:10:10.0) QLogic Corp QLA2340-Single Channel 2Gb Fibre Channel to PCI-X HBA
vmhba32 ata_piix link-n/a sata.vmhba32 (0:0:31.2) Intel Corporation 82801H (ICH8 Family) 4 port SATA IDE Controller
vmhba33 ata_piix link-n/a sata.vmhba33 (0:0:31.5) Intel Corporation 82801H (ICH8 Family) 2 port SATA IDE Controller
The following steps show the basic SAN storage tasks by using Fibre Channel (FC). In Example 6-5, we show the SAN-attached disks and their configuration.
From the menu, click Start → Programs → VMware → VMware vSphere CLI. At the command prompt, enter the following commands:
List all devices with their corresponding paths, state of the path, adapter type, and other information:
esxcli --config esxcli.config storage core path list
Limit the display to only a specified path or device:
esxcli --config esxcli.config storage core path list --device vmhba2
List detailed information for the paths for the device that is specified with --device:
esxcli --config esxcli.config storage core path list -d <naa.xxxxxx>
Rescan all adapters:
esxcli --config esxcli.config storage core adapter rescan
Example 6-5 Showing discovered FC SAN attach through the command line
C:\Program Files\VMware\VMware vSphere CLI>esxcli --config esxcli.config storage core device list
naa.600a0b80006e32a000001e764e9d9e1d
Display Name: IBM Fibre Channel Disk (naa.600a0b80006e32a000001e764e9d9e1d)
Has Settable Display Name: true
Size: 102400
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.600a0b80006e32a000001e764e9d9e1d
Vendor: IBM
Model: 1818 FAStT
Revision: 0730
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: true
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unknown
Other UIDs: vml.0200000000600a0b80006e32a000001e764e9d9e1d313831382020
 
naa.600a0b80006e32020000fe594ea59de0
Display Name: IBM iSCSI Disk (naa.600a0b80006e32020000fe594ea59de0)
Has Settable Display Name: true
Size: 20480
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.600a0b80006e32020000fe594ea59de0
Vendor: IBM
Model: 1818 FAStT
Revision: 0730
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unknown
Other UIDs: vml.0200020000600a0b80006e32020000fe594ea59de0313831382020
 
t10.ATA_____WDC_WD800JD2D08MSA1___________________________WD2DWMAM9ZY50888
Display Name: Local ATA Disk (t10.ATA_____WDC_WD800JD2D08MSA1___________________________WD2DWMAM9ZY50888)
Has Settable Display Name: true
Size: 76324
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/t10.ATA_____WDC_WD800JD2D08MSA1___________________________WD2DWMAM9ZY50888
Vendor: ATA
Model: WDC WD800JD-08MS
Revision: 10.0
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: false
Is Local: true
Is Removable: false
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unknown
Other UIDs: vml.0100000000202020202057442d574d414d395a593530383838574443205744
 
mpx.vmhba32:C0:T0:L0
Display Name: Local HL-DT-ST CD-ROM (mpx.vmhba32:C0:T0:L0)
Has Settable Display Name: false
Size: 0
Device Type: CD-ROM
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/cdrom/mpx.vmhba32:C0:T0:L0
Vendor: HL-DT-ST
Model: CDRW/DVD GCCH10N
Revision: C103
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: false
Is Local: true
Is Removable: true
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unsupported
Other UIDs: vml.0005000000766d68626133323a303a30
 
naa.600a0b80006e32a000001e794e9d9e32
Display Name: IBM Fibre Channel Disk (naa.600a0b80006e32a000001e794e9d9e32)
Has Settable Display Name: true
Size: 102400
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.600a0b80006e32a000001e794e9d9e32
Vendor: IBM
Model: 1818 FAStT
Revision: 0730
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: true
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unknown
Other UIDs: vml.0200010000600a0b80006e32a000001e794e9d9e32313831382020
6.4 Managing multipath policies
We describe how to show and modify the multipath policies that are natively supported by the Hypervisor through the Path Selection Plug-Ins (PSPs).
Example 6-6 on page 155 shows two LUNs that are presented to the ESXi host with four paths. Two paths act as Active paths and two paths are in Standby mode. The multipath policy is set to the Most recently used (MRU) policy.
From the menu, click Start → Programs → VMware → VMware vSphere CLI. At the command prompt, enter the following commands:
List all devices with their corresponding paths, state of the path, adapter type, and other information:
esxcli --config esxcli.config storage core path list
Limit the display to only a specified path or device:
esxcli --config esxcli.config storage core path list --device vmhba2
List detailed information for the paths for the device that is specified with --device:
esxcli --config esxcli.config storage core path list -d <naa.xxxxxx>
 
 
Device ID: To limit the display output, we use the naa.600a0b80006e16a80000e71950111711 device ID.
Example 6-6 List storage devices
C:\Program Files (x86)\VMware\VMware vSphere CLI>esxcli --config esxcli.config storage core path list -d naa.600a0b80006e16a80000e71950111711
fc.200000e08b892cc0:210000e08b892cc0-fc.200600a0b86e16a8:202700a0b86e16a8-naa.600a0b80006e16a80000e71950111711
UID: fc.200000e08b892cc0:210000e08b892cc0-fc.200600a0b86e16a8:202700a0b86e16a8-naa.600a0b80006e16a80000e71950111711
Runtime Name: vmhba2:C0:T3:L0
Device: naa.600a0b80006e16a80000e71950111711
Device Display Name: IBM Fibre Channel Disk (naa.600a0b80006e16a80000e71950111711)
Adapter: vmhba2
Channel: 0
Target: 3
LUN: 0
Plugin: NMP
State: standby
Transport: fc
Adapter Identifier: fc.200000e08b892cc0:210000e08b892cc0
Target Identifier: fc.200600a0b86e16a8:202700a0b86e16a8
Adapter Transport Details: WWNN: 20:00:00:e0:8b:89:2c:c0 WWPN: 21:00:00:e0:8b:89:2c:c0
Target Transport Details: WWNN: 20:06:00:a0:b8:6e:16:a8 WWPN: 20:27:00:a0:b8:6e:16:a8
 
fc.200000e08b892cc0:210000e08b892cc0-fc.200600a0b86e16a8:201700a0b86e16a8-naa.600a0b80006e16a80000e71950111711
UID: fc.200000e08b892cc0:210000e08b892cc0-fc.200600a0b86e16a8:201700a0b86e16a8-naa.600a0b80006e16a80000e71950111711
Runtime Name: vmhba2:C0:T2:L0
Device: naa.600a0b80006e16a80000e71950111711
Device Display Name: IBM Fibre Channel Disk (naa.600a0b80006e16a80000e71950111711)
Adapter: vmhba2
Channel: 0
Target: 2
LUN: 0
Plugin: NMP
State: standby
Transport: fc
Adapter Identifier: fc.200000e08b892cc0:210000e08b892cc0
Target Identifier: fc.200600a0b86e16a8:201700a0b86e16a8
Adapter Transport Details: WWNN: 20:00:00:e0:8b:89:2c:c0 WWPN: 21:00:00:e0:8b:89:2c:c0
Target Transport Details: WWNN: 20:06:00:a0:b8:6e:16:a8 WWPN: 20:17:00:a0:b8:6e:16:a8
 
fc.200000e08b892cc0:210000e08b892cc0-fc.200600a0b86e16a8:202600a0b86e16a8-naa.600a0b80006e16a80000e71950111711
UID: fc.200000e08b892cc0:210000e08b892cc0-fc.200600a0b86e16a8:202600a0b86e16a8-naa.600a0b80006e16a80000e71950111711
Runtime Name: vmhba2:C0:T1:L0
Device: naa.600a0b80006e16a80000e71950111711
Device Display Name: IBM Fibre Channel Disk (naa.600a0b80006e16a80000e71950111711)
Adapter: vmhba2
Channel: 0
Target: 1
LUN: 0
Plugin: NMP
State: active
Transport: fc
Adapter Identifier: fc.200000e08b892cc0:210000e08b892cc0
Target Identifier: fc.200600a0b86e16a8:202600a0b86e16a8
Adapter Transport Details: WWNN: 20:00:00:e0:8b:89:2c:c0 WWPN: 21:00:00:e0:8b:89:2c:c0
Target Transport Details: WWNN: 20:06:00:a0:b8:6e:16:a8 WWPN: 20:26:00:a0:b8:6e:16:a8
 
fc.200000e08b892cc0:210000e08b892cc0-fc.200600a0b86e16a8:201600a0b86e16a8-naa.600a0b80006e16a80000e71950111711
UID: fc.200000e08b892cc0:210000e08b892cc0-fc.200600a0b86e16a8:201600a0b86e16a8-naa.600a0b80006e16a80000e71950111711
Runtime Name: vmhba2:C0:T0:L0
Device: naa.600a0b80006e16a80000e71950111711
Device Display Name: IBM Fibre Channel Disk (naa.600a0b80006e16a80000e71950111711)
Adapter: vmhba2
Channel: 0
Target: 0
LUN: 0
Plugin: NMP
State: active
Transport: fc
Adapter Identifier: fc.200000e08b892cc0:210000e08b892cc0
Target Identifier: fc.200600a0b86e16a8:201600a0b86e16a8
Adapter Transport Details: WWNN: 20:00:00:e0:8b:89:2c:c0 WWPN: 21:00:00:e0:8b:89:2c:c0
Target Transport Details: WWNN: 20:06:00:a0:b8:6e:16:a8 WWPN: 20:16:00:a0:b8:6e:16:a8
List the detailed information for the paths for the device that is specified with --device as shown in Example 6-7.
Example 6-7 List the detailed information for the paths
C:\Program Files (x86)\VMware\VMware vSphere CLI>esxcli --config esxcli.config storage nmp device list -d naa.600a0b80006e16a80000e71950111711
naa.600a0b80006e16a80000e71950111711
Device Display Name: IBM Fibre Channel Disk (naa.600a0b80006e16a80000e71950111711)
Storage Array Type: VMW_SATP_LSI
Storage Array Type Device Config: SATP VMW_SATP_LSI does not support device configuration.
Path Selection Policy: VMW_PSP_MRU
Path Selection Policy Device Config: Current Path=vmhba2:C0:T1:L0
Path Selection Policy Device Custom Config:
Working Paths: vmhba2:C0:T1:L0
List the path selection policies that are available on the system. Check the values that are valid for the --psp option as shown in Example 6-8.
Example 6-8 List the path selection policies available
C:\Program Files (x86)\VMware\VMware vSphere CLI>esxcli --config esxcli.config storage core plugin registration list --plugin-class="PSP"
Module Name Plugin Name Plugin Class Dependencies Full Path
------------- ------------- ------------ ------------ ---------
vmw_psp_lib None PSP
vmw_psp_mru VMW_PSP_MRU PSP vmw_psp_lib
vmw_psp_rr VMW_PSP_RR PSP vmw_psp_lib
vmw_psp_fixed VMW_PSP_FIXED PSP vmw_psp_lib
Set the Round Robin path policy and list detailed information for the paths as shown in Example 6-9.
Example 6-9 Set path policy
C:\Program Files (x86)\VMware\VMware vSphere CLI>esxcli --config esxcli.config storage nmp device set --device naa.600a0b80006e16a80000e71950111711 --psp VMW_PSP_RR
 
C:\Program Files (x86)\VMware\VMware vSphere CLI>esxcli --config esxcli.config storage nmp device list -d naa.600a0b80006e16a80000e71950111711
naa.600a0b80006e16a80000e71950111711
Device Display Name: IBM Fibre Channel Disk (naa.600a0b80006e16a80000e71950111711)
Storage Array Type: VMW_SATP_LSI
Storage Array Type Device Config: SATP VMW_SATP_LSI does not support device configuration.
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=1: NumIOsPending=0,numBytesPending=0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba2:C0:T0:L0, vmhba2:C0:T1:L0
 
Note: After you complete these modifications, remember to set the path policy back to the recommended MRU policy.
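Reverting is the same nmp device set command with the MRU plug-in name, sketched here as an echo-based dry run (remove the `echo` to execute against a live host):

```shell
# Dry run: restore the recommended MRU policy on the example device.
ESXCLI="echo esxcli --config esxcli.config"
$ESXCLI storage nmp device set --device naa.600a0b80006e16a80000e71950111711 --psp VMW_PSP_MRU
```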
6.5 Matching DS logical drives with VMware vSphere ESXi devices
After the host is installed and configured, we can identify the SAN-attached space that is assigned to it. We assume that you have already assigned some space to your host on the DS Storage System side by using DS Storage Manager. Also, before you try to recognize these volumes on your vSphere ESXi host in an FC environment, ensure that the SAN zoning is set up correctly according to your planned configuration. For the specific steps to configure SAN FC zoning, see Implementing an IBM/Brocade SAN with 8 Gbps Directors and Switches, SG24-6116, and IBM System Storage DS5000 Series Hardware Guide, SG24-8023.
For iSCSI attachment, ensure that the network that is used is configured correctly (IP, VLANs, frame size, and so on) and has enough bandwidth to provide storage attachment. Analyze and understand the impact of the network into which an iSCSI target is to be deployed before the actual installation and configuration of an IBM DS5000 storage system. See the “iSCSI” sections of the IBM System Storage DS5000 Series Hardware Guide, SG24-8023.
First, we must discover the SAN space that is attached to our ESXi host. To obtain this information, run the command that is shown in Example 6-10.
As an example, we use the first discovered device, which is a 100 GB LUN that is attached (LUN ID 60:0a:0b:80:00:6e:32:a0:00:00:1e:76:4e:9d:9e:1d).
Example 6-10 Matching LUNs on DS Storage Manager
C:\Program Files\VMware\VMware vSphere CLI>esxcli --config esxcli.config storage core device list
naa.600a0b80006e32a000001e764e9d9e1d
Display Name: IBM Fibre Channel Disk (naa.600a0b80006e32a000001e764e9d9e1d)
Has Settable Display Name: true
Size: 102400
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.600a0b80006e32a000001e764e9d9e1d
Vendor: IBM
Model: 1818 FAStT
Revision: 0730
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: true
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unknown
Other UIDs: vml.0200000000600a0b80006e32a000001e764e9d9e1d313831382020
 
naa.600a0b80006e32020000fe594ea59de0
Display Name: IBM iSCSI Disk (naa.600a0b80006e32020000fe594ea59de0)
Has Settable Display Name: true
Size: 20480
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.600a0b80006e32020000fe594ea59de0
Vendor: IBM
Model: 1818 FAStT
Revision: 0730
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unknown
Other UIDs: vml.0200020000600a0b80006e32020000fe594ea59de0313831382020
 
t10.ATA_____WDC_WD800JD2D08MSA1___________________________WD2DWMAM9ZY50888
Display Name: Local ATA Disk (t10.ATA_____WDC_WD800JD2D08MSA1___________________________WD2DWMAM9ZY50888)
Has Settable Display Name: true
Size: 76324
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/t10.ATA_____WDC_WD800JD2D08MSA1___________________________WD2DWMAM9ZY50888
Vendor: ATA
Model: WDC WD800JD-08MS
Revision: 10.0
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: false
Is Local: true
Is Removable: false
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unknown
Other UIDs: vml.0100000000202020202057442d574d414d395a593530383838574443205744
 
mpx.vmhba32:C0:T0:L0
Display Name: Local HL-DT-ST CD-ROM (mpx.vmhba32:C0:T0:L0)
Has Settable Display Name: false
Size: 3020
Device Type: CD-ROM
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/cdrom/mpx.vmhba32:C0:T0:L0
Vendor: HL-DT-ST
Model: CDRW/DVD GCCH10N
Revision: C103
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: false
Is Local: true
Is Removable: true
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unsupported
Other UIDs: vml.0005000000766d68626133323a303a30
Then, we show how to match the path to the specific DS Storage System controller. Open the DS Storage Manager and select the storage subsystem to be managed. Then, go to the Mappings tab to identify the LUNs that we have assigned to the Host Group. For this example, we use Host VMware_5. As shown in Figure 6-4, we have three logical drives.
Figure 6-4 Identifying logical drives
Now, we need to obtain the LUN ID. Go to the Logical tab and select VMware_LUN0 as shown in Figure 6-5.
Figure 6-5 Getting the LUN ID from DS Manager
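The LUN ID that DS Storage Manager displays is the same 16-byte value that is embedded in the VMware naa.* device name, only formatted with colons. A small helper function (our own convenience sketch, not part of the vSphere CLI) makes the match easy to verify by eye:

```shell
# Helper: reformat a VMware naa.* device identifier into the
# colon-separated LUN ID format shown by DS Storage Manager.
naa_to_ds_id() {
  # Strip the "naa." prefix, insert a colon after every hex byte pair,
  # then drop the trailing colon.
  echo "$1" | sed -e 's/^naa\.//' -e 's/../&:/g' -e 's/:$//'
}

naa_to_ds_id naa.600a0b80006e32a000001e764e9d9e1d
# prints: 60:0a:0b:80:00:6e:32:a0:00:00:1e:76:4e:9d:9e:1d
```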
 