Host Utilities Kits
This chapter provides an overview of the purpose, contents, and functions of Host Utilities Kits (HUKs) for IBM N series storage systems. It addresses why HUKs are an important part of any successful N series implementation, and the connection protocols supported. It also provides a detailed example of a Windows HUK installation.
This chapter includes the following sections:
17.1, “What Host Utilities Kits are”
17.2, “The components of a Host Utilities Kit”
17.3, “Functions provided by Host Utilities”
17.4, “Windows installation example”
17.5, “Setting up LUNs”
17.1 What Host Utilities Kits are
Host Utilities Kits are a set of software programs and documentation that enable you to connect host servers to IBM N series storage systems.
The N series Host Utilities enable connection and support from host computers to IBM N series storage systems that run Data ONTAP. Data ONTAP can be licensed for Fibre Channel, iSCSI, or Fibre Channel over Ethernet (FCoE).
The Host Utilities consist of program files that retrieve important information about the storage systems and servers connected to the SAN. The storage systems include both N series and other storage devices. The Host Utilities also contain scripts that configure important settings on your host computer during installation. The scripts can be run manually on the host computer at a later time to restore these configuration settings.
The HUK is provided as a software package that corresponds to the operating system on your host computer. Each software package for a supported operating system contains a single compressed file for each supported release of the Host Utilities. Select the appropriate release of the Host Utilities for your host computer. You can then use the compressed file to install the Host Utilities software on your host computer as explained in the Host Utilities release's installation and setup guide.
Installation of the N series Host Utilities is required for hosts that are attached to N series and other storage arrays to ensure that IBM configuration requirements are met.
17.2 The components of a Host Utilities Kit
This section provides a high-level, functional discussion of Host Utility components.
17.2.1 What is included in the Host Utilities Kit
The following items are included in a HUK:
An installation program that sets required parameters on the host computer and on certain host bus adapters (HBAs)
A fileset for providing Multipath I/O (MPIO) on the host operating environment
Scripts and utilities for gathering specifications about your configuration
Scripts for optimizing disk timeouts to achieve maximum read/write performance
These functions can be expected from all Host Utilities packages. Additional components and utilities can be included, depending on the host operating environment and connectivity.
17.2.2 Current supported operating environments
IBM N series provides a SAN Host Utilities Kit for every supported operating system. Each kit is a set of data collection applications and configuration scripts that set values such as SCSI and path timeouts and path retry counts. The kit also includes tools that improve the supportability of the host in an IBM N series SAN environment, such as gathering host configuration and logs, and viewing the details of all LUNs presented by IBM N series storage.
HUKs are available for the following operating environments:
AIX with Fibre Channel Protocol (FCP) and iSCSI
Linux with FCP/iSCSI
HP-UX with FCP/iSCSI
Solaris (SPARC and x86 Platform Editions) with FCP/iSCSI
VMware ESX with FCP/iSCSI
Windows with FCP/iSCSI
17.3 Functions provided by Host Utilities
This section addresses the main functions of the Host Utilities.
17.3.1 Host configuration
On some operating systems, such as Microsoft Windows and VMware ESX, the Host Utilities alter the SCSI and path timeout values and HBA parameters. These timeouts are modified to ensure the best performance and to handle storage system events correctly, so that hosts respond properly to the behavior of the IBM N series storage system. On other operating systems, such as those based on Linux and UNIX, the timeout parameters must be modified manually. For more information, see the Host Utilities Setup Guide.
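On Linux hosts, where timeouts are not set automatically, the per-device SCSI command timeout is exposed through sysfs. The following Python sketch shows one way to read and set it. The helper names are illustrative and the 60-second value is only an example; the correct value for your configuration comes from the Host Utilities Setup Guide.

```python
from pathlib import Path

def get_scsi_timeout(device: str, sysfs_root: str = "/sys/block") -> int:
    """Read the current SCSI command timeout (in seconds) for a block device."""
    timeout_file = Path(sysfs_root) / device / "device" / "timeout"
    return int(timeout_file.read_text().strip())

def set_scsi_timeout(device: str, seconds: int, sysfs_root: str = "/sys/block") -> None:
    """Write a new SCSI command timeout for a block device (requires root)."""
    timeout_file = Path(sysfs_root) / device / "device" / "timeout"
    timeout_file.write_text(str(seconds))

# Example (on a live Linux host): set_scsi_timeout("sda", 60)
```

The sysfs path /sys/block/&lt;device&gt;/device/timeout is the standard Linux knob for this setting; changes made this way do not persist across reboots unless also applied by a udev rule or boot script.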
17.3.2 IBM N series controller and LUN configuration
Host Utilities also include a tool called sanlun, which is a host-based utility that helps you configure IBM N series controllers and LUNs. The sanlun tool bridges the namespace between host and storage controller, collecting and reporting storage controller LUN information. It then correlates this information with the host device filename or equivalent entity. This process assists with debugging SAN configuration issues. The sanlun utility is available on all supported operating systems except Windows.
17.4 Windows installation example
The following section provides an example of what is involved in installing the HUK onto Windows and configuring your system to work with that software.
17.4.1 Installing and configuring Host Utilities
You must perform the following high-level steps to install and configure your HUK:
1. Verify your host and storage system configuration.
2. Confirm that your storage system is set up.
3. Configure the Fibre Channel HBAs and switches.
4. Check the media type setting of the Fibre Channel target ports.
5. Install an iSCSI software initiator or HBA.
6. Configure iSCSI options and security.
7. Configure a multipathing solution.
8. Install Veritas Storage Foundation.
9. Install the Host Utilities.
10. Install SnapDrive for Windows.
 
Remember: If you add a Windows Server 2008 R2 host to a failover cluster after installing the Host Utilities, run the Repair option of the Host Utilities installation program. This process sets the required ClusSvcHangTimeout parameter.
17.4.2 Preparation
Before you install the Host Utilities, verify that the Host Utilities version supports your host and storage system configuration.
Verifying your host and storage system configuration
The Interoperability Matrix lists all supported configurations. Individual computer models are not listed. Windows hosts are qualified based on their processor chips.
The following configuration items must be verified:
1. Windows host processor architecture
2. Windows operating system version, service pack level, and required hotfixes
3. HBA model and firmware version
4. Fibre Channel switch model and firmware version
5. iSCSI initiator
6. Multipathing software
7. Veritas Storage Foundation for Windows software
8. Data ONTAP version and cfmode setting
9. Option software such as SnapDrive for Windows
Installing Windows hotfixes
Obtain and install the required Windows hotfixes for your version of Windows. Required hotfixes are listed in the Interoperability Matrix.
Some of the hotfixes require a reboot of your Windows host. You can wait to reboot the host until after you install or upgrade the Host Utilities. When you run the installer for the Windows Host Utilities, it lists any missing hotfixes. You must add the required hotfixes before the installer can complete the installation process.
Use the Interoperability Matrix to determine which hotfixes are required for your version of Windows, then download hotfixes from the Microsoft download site at:
Enter the hotfix number in the search box and click the Search icon.
Confirming your storage system configuration
Make sure that your storage system is properly cabled and the Fibre Channel and iSCSI services are licensed and started.
Add the iSCSI or FCP license, and start the target service. The Fibre Channel and iSCSI protocols are licensed features of Data ONTAP software. If you need to purchase a license, contact your IBM or sales partner representative.
Next, verify your cabling. See the FC and iSCSI Configuration Guide for detailed cabling and configuration information at:
Configuring Fibre Channel HBAs and switches
Install and configure one or more supported Fibre Channel HBAs for Fibre Channel connections to the storage system.
 
Attention: The Windows Host Utilities installer sets the required Fibre Channel HBA settings. Do not change HBA settings manually.
1. Install one or more supported Fibre Channel HBAs according to the instructions provided by the HBA vendor.
2. Obtain the supported HBA drivers and management utilities, and install them according to the instructions provided by the HBA vendor.
3. Connect the HBAs to your Fibre Channel switches or directly to the storage system.
4. Create zones on the Fibre Channel switch according to your Fibre Channel switch documentation.
Checking the media type of Fibre Channel ports
The media type of the storage system FC target ports must be configured for the type of connection between the host and storage system.
The default media type setting of “auto” is for fabric (switched) connections. If you are connecting the host’s HBA ports directly to the storage system, change the media setting of the target ports to “loop”. This task applies to Data ONTAP operating in 7-Mode.
To display the current setting of the storage system’s target ports, enter the following command at a storage system command prompt:
fcp show adapter -v
The current media type setting is displayed.
To change the setting of a target port to “loop” for direct connections, enter the following commands at a storage system command prompt:
fcp config adapter down
fcp config adapter mediatype loop
fcp config adapter up
In these commands, adapter is the name of the storage system adapter that is directly connected to the host.
Configuring iSCSI initiators and HBAs
For configurations that use iSCSI, you must download and install an iSCSI software initiator, install an iSCSI HBA, or both.
An iSCSI software initiator uses the Windows host processor for most processing and Ethernet network interface cards (NICs) or TCP/IP offload engine (TOE) cards for network connectivity. An iSCSI HBA offloads most iSCSI processing to the HBA card, which also provides network connectivity.
The iSCSI software initiator typically provides excellent performance. In fact, an iSCSI software initiator provides better performance than an iSCSI HBA in most configurations. The iSCSI initiator software for Windows is available from Microsoft for no additional charge. In some cases, you can even SAN boot a host with an iSCSI software initiator and a supported NIC.
iSCSI HBAs are best used for SAN booting. An iSCSI HBA implements SAN booting just like a Fibre Channel HBA. When booting from an iSCSI HBA, use an iSCSI software initiator to access your data LUNs.
Select the appropriate iSCSI software initiator for your host configuration. Table 17-1 lists operating systems and their iSCSI software initiator options.
Table 17-1 iSCSI initiator instructions
Windows Server 2003: Download and install the iSCSI software initiator.
Windows Server 2008: The iSCSI initiator is built into the operating system. The iSCSI Initiator Properties dialog is available from Administrative Tools.
Windows Server 2008 R2: The iSCSI initiator is built into the operating system. The iSCSI Initiator Properties dialog is available from Administrative Tools.
Windows XP guest systems on Hyper-V: For guest systems on Hyper-V virtual machines that access storage directly (not as a virtual hard disk mapped to the parent system), download and install the iSCSI software initiator. You cannot select the Microsoft MPIO Multipathing Support for iSCSI option because Microsoft does not support MPIO with Windows XP. A Windows XP iSCSI connection to IBM N series storage is supported only on Hyper-V virtual machines.
Windows Vista guest systems on Hyper-V: For guest systems on Hyper-V virtual machines that access storage directly (not as a virtual hard disk mapped to the parent system), the iSCSI initiator is built into the operating system. The iSCSI Initiator Properties dialog is available from Administrative Tools. A Windows Vista iSCSI connection to IBM N series storage is supported only on Hyper-V virtual machines.
SUSE Linux Enterprise Server guest systems on Hyper-V: For guest systems on Hyper-V virtual machines that access storage directly (not as a virtual hard disk mapped to the parent system), use an iSCSI initiator solution that is supported on the Hyper-V guest as stand-alone hardware. A supported version of the Linux Host Utilities is required.
Linux guest systems on Virtual Server 2005: For guest systems on Virtual Server 2005 virtual machines that access storage directly (not as a virtual hard disk mapped to the parent system), use an iSCSI initiator solution that is supported on the Virtual Server 2005 guest as stand-alone hardware. A supported version of the Linux Host Utilities is required.
Installing multipath I/O software
You must have multipathing set up if your Windows host has more than one path to the storage system.
The MPIO software presents a single disk to the operating system for all paths, and a device-specific module (DSM) manages path failover. Without MPIO software, the operating system might see each path as a separate disk, which can lead to data corruption.
On a Windows system, there are two main components to any MPIO solution: A DSM and the Windows MPIO components.
Install a supported DSM before you install the Windows Host Utilities. Select from the following choices:
The Data ONTAP DSM for Windows MPIO
The Veritas DMP DSM
The Microsoft iSCSI DSM (part of the iSCSI initiator package)
The Microsoft msdsm (included with Windows Server 2008 and Windows Server 2008 R2)
MPIO is supported for Windows Server 2003, Windows Server 2008, and Windows Server 2008 R2 systems. MPIO is not supported for Windows XP and Windows Vista running in a Hyper-V virtual machine.
When you select MPIO support, the Windows Host Utilities installs the Microsoft MPIO components on Windows Server 2003, or enables the included MPIO feature of Windows Server 2008 and Windows Server 2008 R2.
17.4.3 Running the Host Utilities installation program
The installation program installs the Host Utilities package, and sets the Windows registry and HBA settings.
You must specify whether to include multipathing support when you install the Windows Host Utilities software package. You can also run a quiet (unattended) installation from a Windows command prompt.
Select MPIO if you have more than one path from the Windows host or virtual machine to the storage system. MPIO is required with Veritas Storage Foundation for Windows. Select no MPIO only if you are using a single path to the storage system.
 
Attention: The MPIO selection is not available for Windows XP and Windows Vista systems. Multipath I/O is not supported on these guest operating systems. For Hyper-V guests, raw (passthru) disks are not displayed in the guest OS if you choose multipathing support. You can either use raw disks, or you can use MPIO, but not both in the guest OS.
Installing the Host Utilities interactively
To install the Host Utilities software package interactively, run the Host Utilities installation program and follow the prompts. Perform these steps:
1. Check the publication matrix page for important alerts, news, interoperability details, and other information about the product before beginning the installation.
2. Obtain the product software by inserting the Host Utilities CD-ROM into your host system or by downloading the software as follows:
a. Go to the IBM NAS support website.
b. Sign in with your IBM ID and password. If you do not have an IBM ID or password, click the Register link, follow the online instructions, and then sign in. Use the same process if you are adding new N series systems and serial numbers to an existing registration.
c. Select the N series software you want to download, and then select the Download view.
d. Use the Software Packages link on the website presented, and follow the online instructions to download the software.
3. Run the executable file, and follow the instructions in the window.
 
Tip: The Windows Host Utilities installer checks for required Windows hotfixes. If it detects a missing hotfix, it displays an error. Download and install the requested hotfixes, then restart the installer.
4. Reboot the Windows host when prompted.
Installing the Host Utilities from the command line
You can perform a quiet (unattended) installation of the Host Utilities from a Windows command prompt. Enter the following command:
msiexec /i installer.msi /quiet
MULTIPATHING={0 | 1}
[INSTALLDIR=inst_path]
where:
installer.msi is the name of the .msi file for your processor architecture.
MULTIPATHING specifies whether MPIO support is installed. Allowed values are 0 for no, and 1 for yes.
inst_path is the path where the Host Utilities files are installed. The default path is C:\Program Files\IBM\Windows Host Utilities.
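When scripting deployments across many hosts, the quiet-install command line above can be composed programmatically. The following sketch is hypothetical (the helper name is ours, not part of the Host Utilities); it only builds the command string in the form shown above.

```python
from typing import Optional

def build_quiet_install_command(installer: str, multipathing: bool,
                                install_dir: Optional[str] = None) -> str:
    """Compose the msiexec command line for a quiet Host Utilities install."""
    parts = ["msiexec", "/i", installer, "/quiet",
             "MULTIPATHING={}".format(1 if multipathing else 0)]
    if install_dir is not None:
        # Quote the path in case it contains spaces.
        parts.append('INSTALLDIR="{}"'.format(install_dir))
    return " ".join(parts)
```

For example, build_quiet_install_command("installer.msi", True) produces msiexec /i installer.msi /quiet MULTIPATHING=1, which could then be run by your deployment tooling.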
17.4.4 Host configuration settings
You need to collect some host configuration settings as part of the installation process. The Host Utilities installer modifies other host settings based on your installation choices.
Fibre Channel and iSCSI identifiers
The storage system identifies hosts that are allowed to access LUNs. The hosts are identified based on the Fibre Channel worldwide port names (WWPNs) or iSCSI initiator node name on the host.
Each Fibre Channel port has its own WWPN. A host has a single iSCSI node name for all iSCSI ports. You need these identifiers when manually creating initiator groups (igroups) on the storage system.
The storage system also has WWPNs and an iSCSI node name, but you do not need them to configure the host.
Recording the WWPN
Record the worldwide port names of all Fibre Channel ports that connect to the storage system. Each HBA port has its own WWPN. For a dual-port HBA, you need to record two values; for a quad-port HBA, record four values.
The WWPN looks like the following example:
WWPN: 10:00:00:00:c9:73:5b:90
For Windows Server 2008 or Windows Server 2008 R2, use the Windows Storage Explorer application to display the WWPNs. For Windows Server 2003, use the Microsoft fcinfo.exe program.
You can instead use the HBA manufacturer's management software if it is installed on the Windows host. Examples include HBAnyware for Emulex HBAs and SANsurfer for QLogic HBAs.
If the system is SAN booted and not yet running an operating system, or the HBA management software is not available, obtain the WWPNs by using the boot BIOS.
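Different tools display WWPNs with different separators (colons, dashes, or no separator at all). A small sketch like the following, with an illustrative helper name, normalizes any of those forms to the colon-separated style shown above before you use the value in an igroup:

```python
import re

def normalize_wwpn(raw: str) -> str:
    """Reduce a WWPN to its 16 hex digits, then re-join as colon-separated pairs."""
    digits = re.sub(r"[^0-9a-fA-F]", "", raw).lower()
    if len(digits) != 16:
        raise ValueError("expected 16 hex digits in WWPN: %r" % raw)
    return ":".join(digits[i:i + 2] for i in range(0, 16, 2))
```

Both normalize_wwpn("10000000C9735B90") and normalize_wwpn("10:00:00:00:c9:73:5b:90") return 10:00:00:00:c9:73:5b:90.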
Recording the iSCSI initiator node name
Record the iSCSI initiator node name from the iSCSI Initiator program on the Windows host.
For Windows Server 2008, Windows Server 2008 R2, and Windows Vista, click Start > Administrative Tools > iSCSI Initiator. For Windows Server 2003 and Windows XP, click Start > All Programs > Microsoft iSCSI Initiator > Microsoft iSCSI Initiator.
The iSCSI Initiator Properties window is displayed. Copy the Initiator Name or Initiator Node Name value to a text file or write it down.
The exact label in the dialog box differs depending on the Windows version. The iSCSI node name looks like this example:
iqn.1991-05.com.microsoft:server3
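The iqn format is defined by RFC 3720: the literal prefix iqn., a year-month date, a reversed domain name, and an optional colon-delimited suffix. A minimal sanity check when recording node names might look like the following sketch (the regular expression is a simplification of the full grammar):

```python
import re

# Simplified IQN shape: "iqn.YYYY-MM.reversed.domain[:optional-suffix]"
_IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")

def looks_like_iqn(name: str) -> bool:
    """Return True if the string matches the simplified IQN shape above."""
    return bool(_IQN_RE.match(name))
```

For the example above, looks_like_iqn("iqn.1991-05.com.microsoft:server3") returns True; a bare hostname such as "server3" does not match.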
17.4.5 Overview of settings used by the Host Utilities
The Host Utilities require certain registry and parameter settings to ensure that the Windows host correctly handles the storage system behavior.
The parameters set by Windows Host Utilities affect how the Windows host responds to a delay or loss of data. The particular values are selected to ensure that the Windows host correctly handles events. An example event is the failover of one controller in the storage system to its partner controller.
Fibre Channel and iSCSI HBAs also have parameters that must be set to ensure the best performance and handle storage system events.
The installation program supplied with Windows Host Utilities sets the Windows and Fibre Channel HBA parameters to the supported values. You must manually set iSCSI HBA parameters.
The installer sets different values depending on these factors:
Whether you specify MPIO support when running the installation program
Whether you enable the Microsoft DSM on Windows Server 2008 or Windows Server 2008 R2
Which protocols you select (iSCSI, Fibre Channel, both, or none)
Do not change these values unless directed to do so by technical support.
Host Utilities sets registry values to optimize performance based on your selections during installation, including Windows MPIO, Data ONTAP DSM, or the use of Fibre Channel HBAs.
On systems that use Fibre Channel, the Host Utilities installer sets the required timeout values for Emulex and QLogic Fibre Channel HBAs. If Data ONTAP DSM for Windows MPIO is detected on the host, the Host Utilities installer does not set any HBA values.
17.5 Setting up LUNs
LUNs are the basic unit of storage in a SAN configuration. The host system uses LUNs as virtual disks.
17.5.1 LUN overview
You can use a LUN the same way you use local disks on the host.
After you create the LUN, you must make it visible to the host. The LUN is then displayed on the Windows host as a disk. You can:
Format the disk with NTFS. To do so, you must initialize the disk and create a partition. Only basic disks are supported with the native OS stack.
Use the disk as a raw device. To do so, you must leave the disk offline. Do not initialize or format the disk.
Configure automatic start services or applications that access the LUNs. You must configure these start services so that they are dependent on the Microsoft iSCSI Initiator service.
You can create LUNs manually, or by running the SnapDrive or System Manager software.
You can access the LUN by using either the Fibre Channel or the iSCSI protocol. The procedure for creating LUNs is the same regardless of which protocol you use. You must create an initiator group (igroup), create the LUN, and then map the LUN to the igroup.
 
Tip: If you are using the optional SnapDrive software, use SnapDrive to create LUNs and igroups. For more information, see the documentation for your version of SnapDrive. If you are using the optional System Manager software, see the Online Help for specific steps.
The igroup must be the correct type for the protocol. You cannot use an iSCSI igroup when you are using the Fibre Channel protocol to access the LUN. If you want to access a LUN with both Fibre Channel and iSCSI protocols, you must create two igroups: One Fibre Channel and one iSCSI.
17.5.2 Initiator group overview
Initiator groups specify which hosts can access specified LUNs on the storage system. You can create igroups manually, or use the optional SnapDrive for Windows software, which automatically creates igroups. Initiator groups have these features:
Initiator groups (igroups) are protocol-specific.
For Fibre Channel connections, create a Fibre Channel igroup using all WWPNs for the host.
For iSCSI connections, create an iSCSI igroup using the iSCSI node name of the host.
For systems that use both FC and iSCSI connections to the same LUN, create two igroups: One for FC and one for iSCSI. Then map the LUN to both igroups.
There are many ways to create and manage initiator groups and LUNs on your storage system. These processes vary depending on your configuration. These topics are covered in detail in the Data ONTAP Block Access Management Guide for iSCSI and Fibre Channel for your version of the Data ONTAP software.
Mapping LUNs to igroups
When you map a LUN to an igroup, assign the LUN identifier. You must assign the LUN ID of 0 to any LUN that is used as a boot device. LUNs with IDs other than 0 are not supported as boot devices.
If you map a LUN to both a Fibre Channel igroup and an iSCSI igroup, the LUN has two different LUN identifiers.
 
Restriction: The Windows operating system recognizes only LUNs with identifiers 0 through 254, regardless of the number of LUNs mapped. Be sure to map your LUNs to numbers in this range.
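The two constraints above (LUN ID 0 for boot devices, and IDs limited to 0 through 254 on Windows) can be checked before a LUN is mapped. This is an illustrative sketch, not part of any Host Utilities tool:

```python
def check_windows_lun_id(lun_id: int, boot_device: bool = False) -> None:
    """Raise ValueError if a LUN ID violates the Windows mapping rules."""
    if not 0 <= lun_id <= 254:
        raise ValueError("Windows recognizes only LUN IDs 0-254; got %d" % lun_id)
    if boot_device and lun_id != 0:
        raise ValueError("a boot LUN must be mapped with LUN ID 0")
```

A validation step like this is most useful in automation that maps many LUNs, where an out-of-range ID would otherwise surface only as a missing disk on the host.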
17.5.3 About mapping LUNs for Windows clusters
When you use clustered Windows systems, all members of the cluster must be able to access LUNs for shared disks. Map shared LUNs to an igroup for each node in the cluster.
 
Requirement: If more than one host is mapped to a LUN, you must run clustering software on the hosts to prevent data corruption.
17.5.4 Adding iSCSI targets
To access LUNs when you are using iSCSI, you must add an entry for the storage system by using the Microsoft iSCSI Initiator GUI. To add a target, perform the following steps:
1. Run the Microsoft iSCSI Initiator GUI.
2. On the Discovery tab, create an entry for the storage system.
3. On the Targets tab, log on to the storage system.
4. If you want the LUNs to be persistent across host reboots, select Automatically restore this connection when the system boots when logging on to the target.
5. If you are using MPIO or multiple connections per session, create additional connections to the target as needed.
Enabling the optional MPIO support or multiple-connections-per-session support does not automatically create multiple connections between the host and storage system. You must explicitly create the additional connections.
17.5.5 Accessing LUNs on hosts
This section addresses how to make LUNs on N series storage subsystems accessible to hosts.
Accessing LUNs on hosts that use Veritas Storage Foundation
To enable the host that runs Veritas Storage Foundation to access a LUN, you must make the LUN visible to the host. Perform these steps:
1. Click Start > All Programs > Symantec > Veritas Storage Foundation > Veritas Enterprise Administrator.
2. The Select Profile window is displayed. Select a profile and click OK to continue.
3. The Veritas Enterprise Administrator window is displayed. Click Connect to a Host or Domain.
4. The Connect window is displayed. Select a Host from the menu and click Browse to find a host, or enter the host name of the computer and click Connect.
5. The Veritas Enterprise Administrator window with storage objects is displayed. Click Action > Rescan.
6. All the disks on the host are rescanned. Select Action > Rescan.
7. The latest data is displayed. In the Veritas Enterprise Administrator, with the Disks expanded, verify that the newly created LUNs are visible as disks on the host.
The LUNs are displayed on the Windows host as basic disks under Veritas Enterprise Administrator.
Accessing LUNs on hosts that use the native OS stack
To access a LUN when you are using the native OS stack, you must make the LUN visible to the Windows host. Perform these steps:
1. Right-click My Computer on your desktop and select Manage.
2. Expand Storage and double-click the Disk Management folder.
3. Click Action > Rescan Disks.
4. Click Action > Refresh.
5. In the Computer Management window, with Storage expanded and the Disk Management folder open, check the lower right pane. Verify that the newly created LUN is visible as a disk on the host.
Overview of initializing and partitioning the disk
You can create one or more basic partitions on the LUN. After you rescan the disks, the LUN is displayed in Disk Management as an Unallocated disk.
If you format the disk as NTFS, be sure to select the Perform a quick format option.
The procedures for initializing disks vary depending on which version of Windows you are running on the host. For more information, see the Windows Disk Management online Help.