How to do it…

If the performance of your host bus adapters (HBAs) is unsatisfactory or your SAN storage processors are overutilized, you can adjust your ESXi hosts' maximum queue depth value. This value is the queue depth reported for the various paths to the LUN. Lowering it throttles the ESXi host's throughput and alleviates SAN contention when multiple hosts overutilize the storage and fill its command queue.

In a way this solves the problem, but it really just pushes the problem closer to the demand: instead of the storage processor failing to deliver all of the I/O that is required, it is now the hosts that fail to deliver I/O as fast as the VMs want it. Tweaking queue depths is an easy change, but it rarely delivers better performance overall. Consider rearchitecting the storage infrastructure to meet the higher demand (for example, by using faster drives, more spindles, or a higher-performing RAID level); alternatively, investigate whether you can lower the demand by tuning the applications or moving VMs to other storage arrays.

To adjust the queue depth for an HBA, perform the following steps:

  1. Verify which HBA module is currently loaded by entering one of these commands:
  • For QLogic:
# esxcli system module list | grep qla
  • For Emulex:
# esxcli system module list | grep lpfc
  • For Brocade:
# esxcli system module list | grep bfa
  2. Run one of these commands:
The examples show the QLogic qla2xxx and Emulex lpfc820 modules. Use the appropriate module based on the outcome of the previous step.
  • For QLogic:
# esxcli system module parameters set -p ql2xmaxqdepth=64 -m qla2xxx
  • For Emulex:
# esxcli system module parameters set -p lpfc0_lun_queue_depth=64 -m lpfc820
  • For Brocade:
# esxcli system module parameters set -p bfa_lun_queue_depth=64 -m bfa
In this case, the HBAs represented by ql2x and lpfc0 have their LUN queue depths set to 64. If all the Emulex cards on the host need to be updated, apply the global parameter lpfc_lun_queue_depth instead.
  3. Reboot your host.
  4. Run this command to confirm that your changes have been applied:
# esxcli system module parameters list -m driver

Here, the driver is your QLogic, Emulex, or Brocade adapter driver module, such as lpfc820, qla2xxx, or bfa.

The output appears similar to this:

Name           Type  Value  Description
-------------  ----  -----  ------------------------------------------------
...
ql2xmaxqdepth  int   64     Maximum queue depth to report for target devices.
...
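The preceding steps can be collected into one helper script. This is a minimal sketch for a QLogic HBA only: the module name (qla2xxx), parameter name (ql2xmaxqdepth), and depth of 64 are the example values from this recipe, and the script falls back to printing the commands when run outside an ESXi shell, where esxcli does not exist:

```shell
#!/bin/sh
# Example values from this recipe; substitute lpfc820/lpfc_lun_queue_depth
# for Emulex or bfa/bfa_lun_queue_depth for Brocade.
MODULE="qla2xxx"
PARAM="ql2xmaxqdepth"
DEPTH=64

# Outside an ESXi shell there is no esxcli binary, so fall back to
# printing the commands that would run; on the host they execute for real.
if command -v esxcli >/dev/null 2>&1; then
  RUN=""
else
  RUN="echo"
fi

# Set the LUN queue depth, then list the parameters to confirm the change.
$RUN esxcli system module parameters set -p "${PARAM}=${DEPTH}" -m "${MODULE}"
$RUN esxcli system module parameters list -m "${MODULE}"
```

A reboot is still required before the new module parameter takes effect.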

When only one VM is active on a LUN, you need to set only the maximum queue depth. When multiple VMs are active on a LUN, the Disk.SchedNumReqOutstanding value also becomes relevant. The effective queue depth, in this case, is the lower of the two settings: the adapter queue depth or Disk.SchedNumReqOutstanding.

For example, with the default Disk.SchedNumReqOutstanding value of 32, the LUN shows at most 32 active commands, which is the sum total of all the VMs' commands. Raising Disk.SchedNumReqOutstanding alone does not help: you will still see only 32 active commands if you do not raise the LUN queue depth as well, and vice versa.
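This interaction can be demonstrated with a short shell calculation; the 64 and 32 below are the example values from this recipe (64 for the adapter queue depth, 32 for the default Disk.SchedNumReqOutstanding):

```shell
#!/bin/sh
# When several VMs share a LUN, the effective per-LUN limit is the lower of
# the HBA LUN queue depth and Disk.SchedNumReqOutstanding.
HBA_QUEUE_DEPTH=64
DSNRO=32    # Disk.SchedNumReqOutstanding (default is 32)

if [ "$HBA_QUEUE_DEPTH" -lt "$DSNRO" ]; then
  EFFECTIVE=$HBA_QUEUE_DEPTH
else
  EFFECTIVE=$DSNRO
fi
echo "Effective active commands per LUN: $EFFECTIVE"
```

With these values the host schedules at most 32 active commands against the LUN, which is why both settings must be raised together.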

For more information on Disk.SchedNumReqOutstanding, refer to http://www.yellow-bricks.com/2011/06/23/disk-schednumreqoutstanding-the-story/.

The preceding procedure applies only to the ESXi/ESX host on which the parameters are changed. You must make the same changes on all the other ESXi/ESX hosts that have the datastore/LUN presented to them. In vSphere 5.5 and above, the VMkernel limit is set per device.

To set the VMkernel limit per device, perform the following steps:

  1. SSH to the ESXi host.
  2. Run the following command to list all the devices:
# esxcli storage core device list
  3. Find the device you want to change. Its identifier will start with naa.
  4. Run the following command with the correct naa value:
# esxcli storage core device set -d naa.xxx -O value
Note that -O is a capital letter O, not a zero. The value parameter is the desired number of outstanding disk requests.
  5. Run the following command to verify the new value:
# esxcli storage core device list -d naa.xxx
The number of outstanding disk requests is shown as 'No of outstanding IOs with competing worlds:'.
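The per-device steps can be looped over every naa.* device on a host. This is a hedged sketch: the awk filter assumes the device identifiers appear at the start of a line in the esxcli output, and a hypothetical placeholder identifier is substituted so the loop can be read and exercised off-host:

```shell
#!/bin/sh
# Apply the same outstanding-I/O limit (64 here, as in this recipe's
# examples) to every naa.* device presented to the host.
VALUE=64

if command -v esxcli >/dev/null 2>&1; then
  # Assumption: device identifiers start each device's block of output.
  DEVICES=$(esxcli storage core device list | awk '/^naa\./ {print $1}')
  RUN=""
else
  # Hypothetical placeholder identifier for off-host testing only.
  DEVICES="naa.0000000000000000"
  RUN="echo"
fi

for DEV in $DEVICES; do
  # -O (capital O, not zero) sets the per-device outstanding-I/O limit.
  $RUN esxcli storage core device set -d "$DEV" -O "$VALUE"
done
```

Remember that this still has to be repeated on every host that sees the LUNs, since the setting is per host as well as per device.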