Chapter 23. Advanced Data Center Storage

This chapter covers the following exam topics:

5.0. Advanced Data Center Storage

5.1. Describe FCoE concepts and operations

5.1.a. Encapsulation

5.1.b. DCB

5.1.c. vFC

5.1.d. Topologies

5.1.d.1. Single hop

5.1.d.2. Multihop

5.1.d.3. Dynamic

5.2. Describe node port virtualization

5.3. Describe zone types and their uses

5.4. Verify the communication between the initiator and target

5.4.a. FLOGI

5.4.b. FCNS

5.4.c. Active zone set

In computing, virtualization means to create a virtual version of a device or resource, such as a server, storage device, or network, where the framework divides the resource into one or more execution environments. Devices, applications, and human users are able to interact with the virtual resource as if it were a real single logical resource. A number of computing technologies benefit from virtualization, including the following:

Image Storage virtualization: Combining multiple network storage devices into what appears to be a single storage unit.

Image Server virtualization: Partitioning a physical server into smaller virtual servers.

Image Network virtualization: Using network resources through a logical segmentation of a single physical network.

Image Application virtualization: Decoupling the application and its data from the operating system.

The first few pages of this chapter guide you through storage virtualization concepts, answering the basic what, why, and how questions. Understanding storage virtualization will help you grasp the advanced data center storage concepts that follow.

The foundation of the data center network is the network software that runs the network’s switches. Cisco NX-OS software is the network software for the Cisco MDS 9000 family and Cisco Nexus family data center switching products. It is based on a secure, stable, and standard Linux core, providing a modular and sustainable base for the long term. Formerly known as Cisco SAN-OS, it was rebranded as Cisco MDS 9000 NX-OS Software starting with release 4.1. In Chapter 22, “Introduction to Storage and Storage Networking,” we discussed basic storage area network concepts and key Cisco MDS 9000 Software features. In this chapter, we build our storage area network, starting from the initial setup configuration on Cisco MDS 9000 switches.

A classic data center design features a dedicated Ethernet LAN and a separate, dedicated Fibre Channel (FC) SAN. With Fibre Channel over Ethernet (FCoE), it is possible to run a single, converged network. As a standards-based protocol that allows Fibre Channel frames to be carried over Ethernet links, FCoE obviates the need to run separate LAN and SAN networks. FCoE allows an evolutionary approach to I/O consolidation by preserving all Fibre Channel constructs, maintaining the latency, security, and traffic management attributes of Fibre Channel while preserving investments in Fibre Channel tools, training, and SANs. Based on lossless, reliable Ethernet, FCoE networks combine LAN and multiple storage protocols on a converged network.

Now, multihop FCoE technology can be used to extend convergence beyond the access layer. The higher, more efficient speeds, such as 40 Gigabit Ethernet FCoE today, or the 100 Gigabit Ethernet FCoE in the future, help enable fewer and higher-speed Inter-Switch Links (ISLs) in the network core. The converged architecture means you can wire once and deploy anywhere to support any storage protocol, including iSCSI or NFS. This consolidated infrastructure also helps to simplify management and significantly reduce total cost of ownership (TCO).

FCoE is one of the core components of the Cisco Converged Data Center, which helps enable multiprotocol networking through Cisco Unified Computing System (UCS), Cisco Nexus platforms, and Cisco MDS platforms. In this chapter, we discuss all the standards required to support FCoE protocol and the FCoE topologies.

This chapter discusses how to configure Cisco MDS 9000 Series multilayer switches as well as storage virtualization and FCoE concepts. It also describes how to verify virtual storage area networks (VSANs), zoning, the fabric login, and the fabric domain using command-line interface. This chapter goes directly into the practical configuration steps of the Cisco MDS product family and discusses topics relevant to Introducing Cisco Data Center Networking (DCICN) certification.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 23-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Image

Table 23-1 “Do I Know This Already?” Section-to-Question Mapping


Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.


1. Which of the following switches support Console port, COM1, and MGMT interface? (Choose all the correct answers.)

a. MDS 9710

b. MDS 9508

c. MDS 9513

d. MDS 9148S

e. MDS 9706

2. During the initial setup script of MDS 9500, which interfaces can be configured with the IPv6 address?

a. MGMT 0 interface

b. All FC interfaces

c. VSAN interface

d. Out-of-band management interface

e. None

3. During the initial setup script of MDS 9706, how can you configure in-band management?

a. Via answering yes to Configure Advanced IP Options.

b. Via enabling SSH service.

c. There is no configuration option for in-band management on MDS family switches.

d. MDS 9706 family director switch does not support in-band management.

4. Which MDS switch models support Power On Auto Provisioning with NX-OS 6.2(9)?

a. MDS 9250i

b. MDS 9148

c. MDS 9148S

d. MDS 9706

5. How does the On-Demand Port Activation license work on the Cisco MDS 9148S 16G FC switch?

a. The base configuration includes 20 ports of 16Gbps Fibre Channel, two ports of 10 Gigabit Ethernet for FCIP and iSCSI storage services, and eight ports of 10 Gigabit Ethernet for FCoE connectivity.

b. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port activation license to support 24, 36, or 48 ports.

c. The base switch model comes with eight ports and can be upgraded to models of 16, 32, or 48 ports.

d. The base switch model comes with 24 ports enabled and can be upgraded as needed with a 12-port activation license to support 36 or 48 ports.

6. Which is the correct option for the boot sequence?

a. System—Kickstart—BIOS—Loader

b. BIOS—Loader—Kickstart—System

c. System—BIOS—Loader—Kickstart

d. BIOS—Loader—System—Kickstart

7. Which of the following options is the correct configuration for in-band management of MDS 9250i?

a.

switch(config)# interface mgmt0
switch(config-if)# ip address 10.22.2.2 255.255.255.0
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# ip default-gateway 10.22.2.1

b.

switch(config)# interface mgmt0
switch(config-if)# ipv6 enable
switch(config-if)# ipv6 address 2001:0db8:800:200c::417a/64
switch(config-if)# no shutdown

c.

switch(config)# interface vsan 1
switch(config-if)# ip address 10.22.2.3 255.255.255.0
switch(config-if)# no shutdown

d.

switch(config-if)# (no) switchport mode F
switch(config-if)# (no) switchport mode auto

e. None

8. Which of the following options are valid member types for a zone configuration on an MDS 9222i switch? (Choose all the correct answers.)

a. pWWN

b. IPv6 address

c. Mac address of the MGMT 0 interface

d. FCID

9. In which of the following trunk mode configurations between two MDS switches do the ports come up in E port mode with a trunking state of ISL?

a. Switch 1: Switchport mode E, trunk mode off; Switch 2: Switchport mode E, trunk mode off.

b. Switch 1: Switchport mode E, trunk mode on; Switch 2: Switchport mode E, trunk mode on.

c. Switch 1: Switchport mode E, trunk mode off; Switch 2: Switchport mode F, trunk mode auto.

d. Switch 1: Switchport mode E, trunk mode auto; Switch 2: Switchport mode E, trunk mode auto.

e. No correct answer exists.

10. Which of the following organizations defines the storage virtualization standards?

a. INCITS (International Committee for Information Technology Standards).

b. IETF (Internet Engineering Task Force).

c. SNIA (Storage Networking Industry Association).

d. Storage virtualization has no standard measure defined by a reputable organization.

11. Which of the following options can be virtualized according to SNIA storage virtualization taxonomy? (Choose all the correct answers.)

a. Disks

b. Blocks

c. Tape systems

d. File systems

e. Switches

12. Which of the following options explains RAID 1+0?

a. It is an exact copy (or mirror) of a set of data on two disks.

b. It comprises block-level striping with distributed parity.

c. It comprises block-level striping with double distributed parity.

d. It creates a striped set from a series of mirrored drives.

13. Which of the following options explains RAID 6?

a. It is an exact copy (or mirror) of a set of data on two disks.

b. It comprises block-level striping with distributed parity.

c. It creates a striped set from a series of mirrored drives.

d. It comprises block-level striping with double distributed parity.

14. Which of the following options explain LUN masking? (Choose all the correct answers.)

a. It is a feature of Fibre Channel HBA.

b. It is a feature of storage arrays.

c. It provides basic LUN-level security by allowing LUNs to be seen by selected servers.

d. It is a proprietary technique that Cisco MDS switches offer.

e. It should be configured on HBAs.

15. Which of the following options explain the Logical Volume Manager (LVM)? (Choose all the correct answers.)

a. The LVM manipulates LUN representation to create virtualized storage to the file system.

b. The LVM is a collection of ports from a set of connected Fibre Channel switches.

c. LVM combines multiple hard disk drive components into a logical unit to provide data redundancy.

d. LVMs can be used to divide large physical disk arrays into more manageable virtual disks.

16. Select the correct order of operations for completing asynchronous array-based replication.

I. The write operation is received by the primary storage array from the host.

II. An acknowledgement is sent to the primary storage array by a secondary storage array after the data is stored on the secondary storage array.

III. The write operation to the primary array is acknowledged locally; the primary storage array does not require a confirmation from the secondary storage array to acknowledge the write operation to the server.

IV. The primary storage array maintains new data queued to be copied to the secondary storage array at a later time. It initiates a write to the secondary storage array.

a. I, II, III, IV.

b. I, III, IV, II.

c. III, I, II, IV.

d. I, III, II, IV.

e. Asynchronous mirroring operation was not explained correctly.

17. Which of the following options are the advantages of host-based storage virtualization? (Choose all the correct answers.)

a. It is close to the file system.

b. It uses the operating system’s built-in tools.

c. It is licensed and managed per host.

d. It uses the array controller’s CPU cycles.

e. It is independent of SAN transport.

18. Which of the following specifications of IEEE 802.1 are related to lossless Ethernet? (Choose all the correct answers.)

a. PFC

b. ETS

c. FCF

d. BBC

e. QCN

19. Which option best defines the need for data center bridging (DCB)?

a. A set of standards designed to replace the existing Ethernet and IP protocol stack with a goal of enhancing existing transmissions for delay-sensitive applications.

b. A set of standards designed to transparently enhance Ethernet and IP traffic and provide special treatment and features for certain traffic types such as FCoE and HPC.

c. An emerging LAN standard for future delay-sensitive device communication.

d. A single protocol that is designed to transparently enhance Ethernet and IP traffic and provide special treatment and features for certain traffic types such as FCoE and HPC.

20. Which of the following options are correct for FIP and FCoE Ethertypes?

a. FCoE 0x8906, FIP 0x8914

b. FCoE 0x8907, FIP 0x8941

c. FCoE 0x8908, FIP 0x8918

d. FCoE 0x8902, FIP 0x8914

e. FCoE 0x8909, FIP 0x8916

Foundation Topics

What Is Storage Virtualization?

Storage virtualization is a way to logically combine storage capacity and resources from various heterogeneous, external storage systems into one virtual pool of storage. This virtual pool can then be more easily managed and provisioned as needed. A single set of tools and processes performs everything from online any-to-any data migrations to heterogeneous replication. Unlike previous new protocols or architectures, however, storage virtualization has no standard measure defined by a reputable organization such as the INCITS (International Committee for Information Technology Standards) or the IETF (Internet Engineering Task Force). The closest vendor-neutral attempt to make storage virtualization concepts comprehensible has been the work of the Storage Networking Industry Association (SNIA), which has produced useful tutorial content on the various flavors of virtualization technology. According to the SNIA dictionary, storage virtualization is the act of abstracting, hiding, or isolating the internal function of a storage (sub) system or service from applications, computer servers, or general network resources for the purpose of enabling application- and network-independent management of storage or data.

The beginning of virtualization goes a long way back. Virtual memory operating systems evolved during the 1960s. One decade later, virtualization began to move into storage and some disk subsystems. The original mass storage device used helical-scan tape cartridges that were presented to the system as disks. The late 1970s witnessed the introduction of the first solid-state disk, which was a box full of DRAM chips that appeared to be rotating magnetic disks. Virtualization achieved a new level when the first Redundant Array of Inexpensive Disks (RAID) virtual disk array was announced in 1992 for mainframe systems. The first virtual tape systems appeared in 1997 for mainframe systems. Much of the pioneering work in virtualization began on mainframes and has since moved into the non-mainframe, open-system world. Virtualization gained momentum in the late 1990s as a result of virtualizing SANs and storage networks in an effort to tie together all the computing platforms from the 1980s. In 2003, virtual tape architectures moved away from the mainframe and entered new markets, increasing the demand for virtual tape. In 2005, virtual tape was the most popular storage initiative in many storage management polls.

Software-defined storage (SDS) was proposed in 2013 as a new category of storage software products. The term software-defined storage follows the technology trend of software-defined networking, which was first used to describe an approach in network technology that abstracts various elements of networking and creates an abstraction or virtualized layer in software. In networking, the control plane and the data plane have been intertwined within the traditional switches that are deployed today, making abstraction and virtualization more difficult to manage in complex virtual environments. SDS refers to the abstraction of the physical elements, similar to server virtualization. SDS delivers automated, policy-driven, application-aware storage services through orchestration of the underlying storage infrastructure in support of an overall software-defined environment. SDS represents a new evolution for the storage industry for how storage will be managed and deployed in the future.

How Storage Virtualization Works

Storage virtualization works through mapping. The storage virtualization layer creates a mapping from the logical storage address space (used by the hosts) to the physical address of the storage device. Such mapping information, also referred to as metadata, is stored in huge mapping tables. When a host requests I/O, the virtualization layer looks at the logical address in that I/O request and, using the mapping table, translates the logical address into the address of a physical storage device. The virtualization layer then performs I/O with the underlying storage device using the physical address; when it receives the data back from the physical device, it returns that data to the application as if it had come from the logical address. The application is unaware of the mapping that happens beneath the covers. Figure 23-1 illustrates this virtual storage layer that sits between the applications and the physical storage devices.

Image
Image

Figure 23-1 Storage Virtualization Mapping Heterogeneous Physical Storage to Virtual Storage

Why Storage Virtualization?

The business drivers for storage virtualization are much the same as those for server virtualization. CIOs and IT managers must cope with shrinking IT budgets and growing client demands. They must simultaneously improve asset utilization, use IT resources more efficiently, ensure business continuity, and become more agile. In addition, they are faced with ever-increasing constraints on power, cooling, and space. The major economic driver is to reduce costs without sacrificing data integrity or performance. Organizations can use the technology to resolve these issues and create a more adaptive, flexible, service-based infrastructure that reflects ongoing changes in business requirements. The top seven reasons to use storage virtualization are

Image Exponential data growth and disruptive storage upgrades

Image Low utilization of existing assets

Image Growing management complexity with flat or decreasing budgets for IT staff head count

Image Increasing hard costs and environmental costs to acquire, run, and manage storage

Image Ensuring rapid, high-quality storage service delivery to application and business owners

Image Achieving cost-effective business continuity and data protection

Image Power and cooling consumption in the data center

What Is Being Virtualized?

The Storage Networking Industry Association (SNIA) taxonomy for storage virtualization is divided into three basic categories: what is being virtualized, where the virtualization occurs, and how it is implemented. This is illustrated in Figure 23-2.

Image
Image

Figure 23-2 SNIA Storage Virtualization Taxonomy Separates Objects of Virtualization from Location and Means of Execution

What is being virtualized may include blocks, disks, tape systems, file systems, and file or record virtualization. Where virtualization occurs may be on the host, in storage arrays, or in the network via intelligent fabric switches or SAN-attached appliances. How the virtualization occurs may be via in-band or out-of-band separation of control and data paths. Storage virtualization provides the means to build higher-level storage services that mask the complexity of all underlying components and enable automation of data storage operations.

The most important reasons for storage virtualization were covered in the “Why Storage Virtualization?” section. The ultimate goal of storage virtualization should be to simplify storage administration. This can be achieved via a layered approach, binding multiple levels of technologies on a foundation of logical abstraction.

The abstraction layer that masks physical from logical storage may reside on host systems such as servers, within the storage network in the form of a virtualization appliance, within SAN switches, or on storage array or tape subsystem targets. In common usage, these alternatives are referred to as host-based, network-based, or array-based virtualization. Figure 23-3 illustrates all components of an intelligent SAN.

Image
Image

Figure 23-3 Possible Alternatives Where the Storage Virtualization May Occur

In addition to differences between where storage virtualization is located, vendors have different methods for implementing virtualized storage transport. The in-band method places the virtualization engine directly in the data path so that both block data and the control information that govern its virtual appearance transit the same link. The out-of-band method provides separate paths for data and control, presenting an image of virtual storage to the host by one link and allowing the host to directly retrieve data blocks from physical storage on another. In-band and out-of-band virtualization techniques are sometimes referred to as symmetrical and asymmetrical, respectively. In the in-band method, the appliance acts as an I/O target to the hosts and acts as an I/O initiator to the storage. It has the opportunity to work as a bridge. For example, it can present an iSCSI target to the host while using standard FC-attached arrays for the storage. This capability enables end users to preserve investment in storage resources while benefiting from the latest host connection mechanisms such as Fibre Channel over Ethernet (FCoE).

Block Virtualization—RAID

RAID is another data storage virtualization technology that combines multiple hard disk drive components into a logical unit to provide data redundancy and improve performance or capacity. Originally, in 1988, it was defined as Redundant Arrays of Inexpensive Disks (RAID) in a paper written by David A. Patterson, Garth A. Gibson, and Randy Katz from the University of California, Berkeley. Today it is commonly known as Redundant Array of Independent Disks (RAID).

Data is distributed across the hard disk drives in one of several ways, depending on the specific level of redundancy and performance required, referred to as RAID levels. The different schemes or architectures are named by the word RAID followed by a number (for example, RAID 0, RAID 1). Each scheme provides a different balance between the key goals: reliability and availability, performance, and capacity. Originally, there were five RAID levels, but many variations have evolved—notably, several nested levels and many nonstandard levels (mostly proprietary). The Storage Networking Industry Association (SNIA) standardizes RAID levels and their associated data formats in the common RAID Disk Drive Format (DDF) standard.

Today, RAID is a staple in most storage products. It can be found in disk subsystems, host bus adapters, system motherboards, volume management software, device driver software, and virtually any processor along the I/O path. Because RAID works with storage address spaces and not necessarily storage devices, it can be used on multiple levels recursively. Host volume management software typically has several RAID variations and, in some cases, offers many sophisticated configuration options. Running RAID in a volume manager opens the potential for integrating file system and volume management products for management and performance-tuning purposes. RAID has been implemented in adapters/controllers for many years, starting with SCSI and including Advanced Technology Attachment (ATA) and serial ATA. Disk subsystems have been the most popular product for RAID implementations and probably will continue to be for many years to come. An enormous range of capabilities is offered in disk subsystems across a wide range of prices.

RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors, as well as whole disk failure. Many RAID levels employ an error protection scheme called “parity,” a widely used method in information technology to provide fault tolerance in a given set of data. Most of the RAID levels use simple XOR, but RAID 6 uses two separate parities based, respectively, on addition and multiplication in a particular Galois Field or Reed-Solomon error correction that we will not be covering in this chapter. The most common RAID levels used today are RAID 0 (striping), RAID 1 and variants (mirroring), RAID 5 (distributed parity), and RAID 6 (dual parity).

A RAID 0 array (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped), without parity information. RAID 0 is normally used to increase performance, although it can also be used as a way to create a large logical disk out of two or more physical ones. It provides no data redundancy. A RAID 0 array can be created with disks of different sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if an 800GB disk is striped together with a 500GB disk, the size of the array will be 1TB (500GB × 2).

A RAID 1 array is an exact copy (or mirror) of a set of data on two disks. This is useful when read performance or reliability is more important than data storage capacity. Such an array can only be as big as the smallest member disk. A classic RAID 1 mirrored pair contains two disks. For example, if a 500GB disk is mirrored together with a 400GB disk, the size of the array will be 400GB.

A RAID 5 array comprises block-level striping with distributed parity. It requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity so that no data is lost. RAID 5 requires a minimum of three disks.

A RAID 6 array comprises block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, because large-capacity drives take longer to restore. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced. With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5.
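
As a rough rule of thumb, and assuming n identical disks of capacity c each (a simplification that ignores metadata and spare overhead), the usable capacity C of these common levels can be summarized as follows:

RAID 0: C = n × c
RAID 1 (two-disk mirror): C = c
RAID 5: C = (n - 1) × c, with n ≥ 3
RAID 6: C = (n - 2) × c, with n ≥ 4

For example, four 1TB drives yield 4TB of usable capacity in RAID 0, 3TB in RAID 5, and 2TB in RAID 6.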

Many storage controllers allow RAID levels to be nested (hybrid RAID), although storage arrays are rarely nested more than one level deep. The final array is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0), most vendors omit the “+”, which yields RAID 10 and RAID 50, respectively. Here are common examples of nested RAID levels:

Image RAID 0+1: Creates a second striped set to mirror a primary striped set. The array continues to operate with one or more drives failed in the same mirror set, but if drives fail on both sides of the mirror, the data on the RAID system is lost.

Image RAID 1+0: Creates a striped set from a series of mirrored drives. The array can sustain multiple drive losses as long as no mirror loses all its drives.

Table 23-2 provides an overview of the standard RAID levels.

Image
Image
Image
Image
Image
Image

Table 23-2 Overview of Standard RAID Levels

Virtualizing Logical Unit Numbers (LUNs)

Virtualization of storage helps to provide location independence by abstracting the physical location of the data. The user is presented with a logical space for data storage by the virtualization system, which manages the process of mapping that logical space to the physical location.

You can have multiple layers of virtualization or mapping by using the output of one layer as the input for a higher layer of virtualization. A logical unit number (LUN) is a unique identifier that designates individual hard disk devices or grouped devices for addressing by a protocol associated with SCSI, iSCSI, Fibre Channel (FC), or a similar interface. LUNs are central to the management of block storage arrays shared over a storage area network (SAN).

Virtualization maps the relationships between the back-end resources and front-end resources. A back-end resource refers to a LUN that is not presented to the computer or host system for direct use. A front-end LUN or volume is presented to the computer or host system for use. How the LUN mapping is performed depends on the implementation. Typically, one physical disk is broken down into smaller subsets of multiple megabytes or gigabytes of disk space. In a block-based storage environment, one block of information is addressed by using a LUN and an offset within that LUN, known as a logical block address (LBA).

Image

In most SAN environments, each individual LUN must be discovered by only one server host bus adapter (HBA). Otherwise, the same volume can be accessed by more than one file system, leading to potential data loss or security breaches. There are three ways to prevent this multiple access:

Image LUN masking: LUN masking, a feature of enterprise storage arrays, provides basic LUN-level security by allowing LUNs to be seen only by selected servers that are identified by their port World Wide Name (pWWN). Each storage array vendor has its own management and proprietary techniques for LUN masking in the array. In a heterogeneous environment with arrays from different vendors, LUN management becomes more difficult.

Image LUN mapping: LUN mapping, a feature of Fibre Channel HBAs, allows the administrator to selectively map some LUNs that have been discovered by the HBA. LUN mapping must be configured on every HBA. In a large SAN, this mapping is a large management task. Most administrators configure the HBA to automatically map all LUNs that the HBA discovers. They then perform LUN management in the array (LUN masking) or in the network (LUN zoning).

Image LUN zoning: LUN zoning, a proprietary technique that Cisco MDS switches offer, allows LUNs to be selectively zoned to their appropriate host port. LUN zoning can be used instead of, or in combination with, LUN masking in heterogeneous environments or where Just a Bunch of Disks (JBODs) are installed.

Figure 23-13 illustrates LUN masking, LUN mapping, and LUN zoning.


Note

JBODs do not have a management function or a storage controller and therefore do not support LUN masking.


Image
Image

Figure 23-13 LUN Masking, LUN Mapping, and LUN Zoning on a Storage Area Network (SAN)
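
To make LUN zoning and the active zone set concrete, the following NX-OS sketch defines a pWWN-based zone in which the storage member is restricted to a specific LUN, then places the zone in a zone set and activates it. The VSAN number, pWWNs, and LUN value are hypothetical, and the exact LUN zoning syntax can vary by NX-OS release:

switch(config)# zone name Zone_HostA_ArrayB vsan 10
switch(config-zone)# member pwwn 21:00:00:e0:8b:05:05:01
switch(config-zone)# member pwwn 50:06:01:60:3c:e0:1a:f2 lun 0x0001
switch(config-zone)# exit
switch(config)# zoneset name ZS_Fabric_A vsan 10
switch(config-zoneset)# member Zone_HostA_ArrayB
switch(config-zoneset)# exit
switch(config)# zoneset activate name ZS_Fabric_A vsan 10

Only the zones in the active zone set enforce connectivity; show zoneset active vsan 10 verifies what is currently being enforced.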

In most LUN virtualization deployments, a virtualizer element is positioned between a host and its associated target disk array. This virtualizer generates a virtual LUN (vLUN) that proxies server I/O operations while hiding specialized data block processes that are occurring in the pool of disk arrays. The physical storage resources are aggregated into storage pools from which the logical storage is created. More storage systems, which may be heterogeneous in nature, can be added when needed, and the virtual storage space will scale up by the same amount. This process is fully transparent to the applications using the storage infrastructure. Thin provisioning is a LUN virtualization feature that helps reduce the waste of block storage resources. When this technique is used, although a server detects a vLUN with its “full” size, only the blocks that actually contain data are saved on the storage pool. This feature brings the concept of oversubscription to data storage, consequently demanding special attention to the ratio between “declared” and actually used resources. Using vLUNs, disk expansion and shrinking can be managed easily. More physical storage can be allocated by adding to the mapping table (assuming the using system can cope with online expansion). Similarly, disks can be reduced in size by removing some physical storage from the mapping (uses for this are limited because there is no guarantee of what resides on the areas removed). Each storage vendor has different LUN virtualization solutions and techniques. In this chapter, we briefly cover a subset of these features.

Tape Storage Virtualization

Tape storage virtualization can be divided into two categories: tape media virtualization and tape drive and library virtualization (VTL). Tape media virtualization resolves the problem of underutilized tape media; it saves tapes, tape libraries, and floor space, and it reduces the number of tape mounts.

A virtual tape library (VTL) is used typically for backup and recovery purposes. A VTL presents a storage component (usually hard disk storage) as tape libraries or tape drives for use with existing backup software. Virtualizing the disk storage as tape allows integration of VTLs with existing backup software and existing backup and recovery processes and policies. The benefits of such virtualization include storage consolidation and faster data restore processes.

Most current VTL solutions use SAS or SATA disk arrays as the primary storage component because of their relatively low cost. The use of array enclosures increases the scalability of the solution by allowing the addition of more disk drives and enclosures to increase the storage capacity. The shift to VTL also eliminates streaming problems that often impair efficiency in tape drives, because disk technology does not rely on streaming and can write effectively regardless of data transfer speeds. By backing up data to disks instead of tapes, VTL often increases the performance of both backup and recovery operations. Restore processes are generally faster than backups, regardless of the implementation. In some cases, the data stored on the VTL’s disk array is exported to other media, such as physical tapes, for disaster recovery purposes. (This scheme is called disk-to-disk-to-tape, or D2D2T.)

Alternatively, most contemporary backup software products have introduced direct use of file system storage (especially network-attached storage, accessed through the NFS and CIFS protocols over IP networks). They also often offer a disk staging feature: moving the data from disk to a physical tape for long-term storage.

Image
Virtualizing Storage Area Networks

A virtual storage area network (VSAN) is a collection of ports from a set of connected Fibre Channel switches that form a virtual fabric. Ports within a single switch can be partitioned into multiple VSANs, despite sharing hardware resources. Conversely, multiple switches can join a number of ports to form a single VSAN. VSANs were designed by Cisco, modeled after the virtual local area network (VLAN) concept in Ethernet networking, and applied to a SAN. In October 2004, the Technical Committee T11 of the International Committee for Information Technology Standards approved VSAN technology to become a standard of the American National Standards Institute (ANSI).

A VSAN, like each FC fabric, can offer different high-level protocols such as FCP, FCIP, FICON, and iSCSI. Each VSAN is a separate, self-contained fabric with its own security policies, zones, events, memberships, and name services. The use of VSANs allows traffic to be isolated within specific portions of the network. If a problem occurs in one VSAN, that problem can be handled with a minimum of disruption to the rest of the network. VSANs can also be configured separately and independently. Figure 23-14 portrays three different SAN islands that are being virtualized onto a common SAN infrastructure on Cisco MDS switches. The geographic location of the switches and the attached devices is independent of their segmentation into logical VSANs.

Image
Image

Figure 23-14 Independent Physical SAN Islands Virtualized onto a Common SAN Infrastructure
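
As a minimal sketch of how such SAN islands are carved out of one physical switch, the following NX-OS commands create two VSANs and assign interfaces to them; the VSAN numbers, names, and interfaces are hypothetical:

switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name Production
switch(config-vsan-db)# vsan 20 name Development
switch(config-vsan-db)# vsan 10 interface fc1/1
switch(config-vsan-db)# vsan 20 interface fc1/2
switch(config-vsan-db)# exit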

VSANs were not designed to be confined within a single switch. They can be extended to other switches, spreading a virtual Fibre Channel fabric over multiple devices. Although E_Ports can also be configured to extend a single VSAN, a trunk is usually the recommended extension between VSAN-enabled switches. By definition, a Trunk Expansion Port (TE_Port) can carry the traffic of several VSANs over a single Enhanced Inter-Switch Link (EISL).
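
A trunking E port (TE port) that carries several VSANs over an EISL could be configured roughly as follows; the interface number and the allowed VSAN list are hypothetical:

switch(config)# interface fc1/10
switch(config-if)# switchport mode E
switch(config-if)# switchport trunk mode on
switch(config-if)# switchport trunk allowed vsan 10
switch(config-if)# switchport trunk allowed vsan add 20
switch(config-if)# no shutdown

The show interface brief command then indicates whether the port has come up as a trunking (TE) port.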

In an EISL, an 8-byte VSAN header is included between the Start of Frame (SOF) and the frame header. Figure 23-15 illustrates the VSAN EISL header. In the EISL header, the VSAN ID field occupies 12 bits and is used to mark the frame as part of a particular VSAN.

Image
Image

Figure 23-15 VSAN Enhanced Inter-Switch Link (EISL) Header

Since December 2002, Cisco has enabled multiple virtualization features within intelligent SANs. Several examples are Inter-VSAN routing (IVR), N-Port ID Virtualization (NPIV), and N-Port Virtualizer.

Image
N-Port ID Virtualization (NPIV)

NPIV allows a Fibre Channel host connection or N-Port to be assigned multiple N-Port IDs or Fibre Channel IDs (FCIDs) over a single link. All FCIDs assigned can now be managed on a Fibre Channel fabric as unique entities on the same physical host. Different applications can be used in conjunction with NPIV. In a virtual machine environment where many host operating systems or applications are running on a physical host, each virtual machine can now be managed independently from zoning, aliasing, and security perspectives. Figure 23-16 illustrates an N-Port ID virtualization topology. NPIV must be globally enabled for all VSANs on the switch to allow the NPIV-enabled applications to use multiple N-Port identifiers.

Image
Image

Figure 23-16 NPIV—Control and Monitor VMs in the SAN

Fibre Channel standards define that an FC HBA N-Port must be connected to one and only one F-Port on a Fibre Channel switch. When the device is connected to the switch, the link comes up and the FC HBA sends a FLOGI command containing its pWWN to the FC switch requesting a Fibre Channel ID. The switch responds with a unique FCID based on the domain ID of the switch, an area ID, and a port ID. This is fine for servers with a single operating environment but is restrictive for virtual servers, which may have several operating environments sharing the same FC HBA. Each virtual server requires its own FCID. NPIV provides the ability to assign a separate FCID to each virtual server that requests one through its own FLOGI command.
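
A minimal sketch of enabling NPIV and confirming that multiple FCIDs have been granted over one physical link follows; the interface and VSAN numbers are hypothetical:

switch(config)# feature npiv
switch(config)# exit
switch# show flogi database interface fc1/5
switch# show fcns database vsan 10

Each virtual machine that performs its own fabric login appears as an additional entry, with its own FCID and pWWN, in both the FLOGI database and the FC name server (FCNS) output.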

Image
N-Port Virtualizer (NPV)

An extension to NPIV is the N-Port Virtualizer feature. The N-Port Virtualizer feature allows the blade switch or top-of-rack fabric device to behave as an NPIV-based host bus adapter (HBA) to the core Fibre Channel director. The device aggregates the locally connected host ports or N-Ports into one or more uplinks (pseudo-interswitch links) to the core switches. Whereas NPIV is primarily a host-based solution, NPV is primarily a switch-based technology. It is designed to reduce switch management and overhead in larger SAN deployments. Consider that every Fibre Channel switch in a fabric needs a different domain ID, and that the total number of domain IDs in a fabric is limited. In some cases, this limit can be fairly low depending on the devices attached to the fabric. The problem, though, is that you often need to add Fibre Channel switches to scale the size of your fabric. There is, therefore, an inherent conflict between trying to reduce the overall number of switches to keep the domain ID count low while also needing to add switches to have a sufficiently high port count. NPV is intended to address this problem. NPV makes use of NPIV to get multiple FCIDs allocated from the core switch on the NP port. Figure 23-17 illustrates an NPV SAN topology.

Image
Image

Figure 23-17 What Is NPV?

A switch is in NPV mode after a user has enabled NPV and the switch has successfully rebooted. NPV mode applies to an entire switch. All end devices connected to a switch that is in NPV mode must log in as an N-Port to use this feature (loop-attached devices are not supported). All links from the edge switches (in NPV mode) to the NPV core switches are established as NP ports (not E ports), which are used for typical interswitch links. NPIV is used by the switches in NPV mode to log in to multiple end devices that share a link to the NPV core switch. An NP port (proxy N port) is a port on a device that is in NPV mode and connected to the NPV core switch using an F port. NP ports behave like N ports except that in addition to providing N port behavior, they also function as proxies for multiple, physical N ports. An NP link is an NPIV uplink to a specific end device. NP links are established when the uplink to the NPV core switch comes up; the links are terminated when the uplink goes down. Once the uplink is established, the NPV switch performs an internal fabric login (FLOGI) to the NPV core switch, and then (if the FLOGI is successful) registers itself with the NPV core switch’s name server.

Once the uplink is established, the NPV switch performs an internal fabric login (FLOGI) to the NPV core switch; the FLOGI request includes the following parameters:

Image The fWWN (fabric port WWN) of the NP port used as the pWWN in the internal login

Image The VSAN-based sWWN (switch WWN) of the NPV device used as nWWN (node WWN) in the internal FLOGI

After completing its FLOGI request (if the FLOGI is successful), the NPV device registers itself with the fabric name server using the following additional parameters:

Image The switch name and interface name (for example, fc1/4) of the NP port are embedded in the symbolic port name in the name server registration of the NPV device itself.

Image The IP address of the NPV device is registered as the IP address in the name server registration of the NPV device.

Subsequent FLOGIs from end devices in this NP link are converted to fabric discoveries (FDISCs).
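
A rough configuration sketch of this relationship follows: the core switch needs NPIV enabled and an F port facing the edge, while the edge switch is placed in NPV mode and its uplink becomes an NP port. The hostnames and interface numbers are hypothetical, and enabling NPV mode erases the configuration and reboots the edge switch:

npv-core(config)# feature npiv
npv-core(config)# interface fc1/1
npv-core(config-if)# switchport mode F
npv-core(config-if)# no shutdown

npv-edge(config)# feature npv
npv-edge(config)# interface fc1/12
npv-edge(config-if)# switchport mode NP
npv-edge(config-if)# no shutdown

On the edge switch, show npv status and show npv flogi-table display the NP uplinks and the end-device logins being proxied to the core.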

NPV devices use only IP as the transport medium. CFS uses multicast forwarding for CFS distribution. NPV devices do not have ISL connectivity and FC domain. To use CFS over IP, multicast forwarding has to be enabled on the Ethernet IP switches all along the network that physically connects the NPV switch. You can also manually configure the static IP peers for CFS distribution over IP on NPV-enabled switches.

In-order data delivery is not required in NPV mode because the exchange between two end devices always takes the same uplink to the core from the NPV device. For traffic beyond the NPV device, core switches will enforce in-order delivery if needed and/or configured. Three different types of traffic management exist for NPV:

Image Auto: When a server interface is brought up, an external interface with the minimum load is selected from the available links. There is no manual selection of which external links the server interfaces use. Also, when a new external interface is brought up, the existing load is not automatically redistributed to the newly available external interface; the new interface is used only by the server interfaces that come up after it.

Image Traffic map: This feature facilitates traffic engineering by providing dedicated external interfaces for the servers connected to NPV. It uses the shortest path by selecting external interfaces per server interface. It utilizes the persistent FCID feature by providing the same traffic path after a link break, or reboot of the NPV or core switch. It also balances the load by allowing the user to evenly distribute the load across external interfaces.

Image Disruptive: Disruptive load balance works independent of automatic selection of interfaces and a configured traffic map of external interfaces. This feature forces reinitialization of the server interfaces to achieve load balance when this feature is enabled and whenever a new external interface comes up. To avoid flapping the server interfaces too often, enable this feature once and then disable it whenever the needed load balance is achieved. If disruptive load balance is not enabled, you need to manually flap the server interface to move some of the load to a new external interface.

Figure 23-18 illustrates the NPV auto load balancing.

Image
Image

Figure 23-18 NPV Auto Load Balancing
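
The traffic-map and disruptive load-balancing behaviors described above correspond to commands similar to the following on the NPV edge switch; the interface numbers are hypothetical:

switch(config)# npv traffic-map server-interface fc1/1 external-interface fc1/10
switch(config)# npv auto-load-balance disruptive

Because disruptive load balancing forces server interfaces to log in again, it is typically enabled only while rebalancing and then removed with the no form of the command.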

An F port channel is a logical interface that combines a set of F ports connected to the same Fibre Channel node and operates as one link between the F ports and the NP ports. F port channels provide the same bandwidth utilization and availability benefits as E port channels. F port channels are mainly used to connect MDS core and NPV switches to provide optimal bandwidth utilization and transparent failover between the uplinks of a VSAN. An F port channel trunk combines the functionality and advantages of a TF port and an F port channel. F port channel trunks allow the fabric logins from the NPV switch to be virtualized over the port channel. This provides nondisruptive redundancy should individual member links fail. The individual links by default are in rate-mode shared, but can be rate-mode dedicated as well. Figure 23-19 illustrates F port channel and F port channel trunking features.

Image
Image

Figure 23-19 F Port Channel and F Port Channel Trunking
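
On the MDS core switch, an F port channel trunk toward an NPV edge switch could be sketched as follows; the port channel number and member interfaces are hypothetical, and a matching port channel must also be configured on the NPV device:

switch(config)# feature fport-channel-trunk
switch(config)# interface port-channel 1
switch(config-if)# switchport mode F
switch(config-if)# switchport trunk mode on
switch(config-if)# channel mode active
switch(config-if)# exit
switch(config)# interface fc1/1-2
switch(config-if)# switchport mode F
switch(config-if)# switchport trunk mode on
switch(config-if)# channel-group 1 force
switch(config-if)# no shutdown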

Virtualizing File Systems

A virtual file system is an abstraction layer that allows client applications using heterogeneous file-sharing network protocols to access a unified pool of storage resources. These systems are based on a global file directory structure that is contained on dedicated file metadata servers. Whenever a file system client needs to access a file, it must retrieve information about the file from the metadata servers first (see Figure 23-20).

The purpose of a VFS is to allow client applications to access different types of concrete file systems in a uniform way. A VFS can, for example, be used to access local and network storage devices transparently without the client application noticing the difference. It can be used to bridge the differences in Windows, Mac OS, and Unix file systems so that applications can access files on local file systems of those types without having to know what type of file system they are accessing.

Image
Image

Figure 23-20 File System Virtualization

A VFS specifies an interface (or a “contract”) between the kernel and a concrete file system. Therefore, it is easy to add support for new file system types to the kernel simply by fulfilling the contract. The terms of the contract might bring incompatibility from release to release, which would require that the concrete file system support be recompiled, and possibly modified before recompilation, to allow it to work with a new release of the operating system. Or the supplier of the operating system might make only backward-compatible changes to the contract so that concrete file system support built for a given release of the operating system would work with future versions of the operating system.

Sun Microsystems introduced one of the first VFSes on Unix-like systems. The VMware Virtual Machine File System (VMFS), NTFS, Linux’s Global File System (GFS), and the Oracle Clustered File System (OCFS) are all examples of virtual file systems.

File/Record Virtualization

File virtualization unites multiple storage devices into a single logical pool of files. It is a vital part of both file area network (FAN) and network file management (NFM) concepts.

File virtualization is similar to the Domain Name System (DNS), which removes the requirement to know the IP address of a website; you simply type in the domain name. In a similar fashion, file virtualization eliminates the need to know exactly where a file is physically located. Users look to a single mount point that shows all their files. Data can be moved from a tier-one NAS to a tier-two or tier-three NAS or network file server automatically. In Chapter 22, we discussed different types of tiered storage, as you will remember. In a common scenario, the primary NAS could be a high-speed, high-performance system storing production data. As files become less frequently accessed, they can be moved to lower-cost, more power-efficient systems, bringing down capital and operational costs.

Two types of file virtualization methods are available today. The first is one that’s built in to the storage system or the NAS itself, often called a “global” file system. There may be some advantage here because the file system itself is managing the metadata that contains the file locations, which could solve the file stub and manual look-up problems. The challenge, however, is that this metadata is unique to the users of that specific file system, meaning the same hardware must be used to expand these systems and is often available from only one manufacturer. This eliminates one of the key capabilities of a file virtualization system: the flexibility to mix hardware in a system. With this solution, you could lose the ability to store older data on a less expensive storage system, a situation equivalent to not being able to mix brands of servers in a server virtualization project. Often, global file systems are not granular to the file level.

The second file virtualization method uses a standalone virtualization engine—typically an appliance. These systems can either sit in the data path (in-band) or outside the data path (out-of-band) and can move files to alternate locations based on a variety of attribute-related policies. In-band solutions typically offer better performance, whereas out-of-band solutions are simpler to implement. Both offer file-level granularity, but most important, standalone file virtualization systems do not have to use all the same hardware.

Where Does the Storage Virtualization Occur?

Storage virtualization functionality, including aggregation (pooling), transparent data migration, heterogeneous replication, and device emulation, can be implemented at the host server, in the data path on a network switch blade or on an appliance, or in a storage system. The best location to implement storage virtualization functionality depends in part on the preferences, existing technologies, and objectives for deploying storage virtualization.

Host-Based Virtualization

Host-based virtualization requires additional software running on the host as a privileged task or process. In some cases, volume management is built in to the operating system, and in other instances it is offered as a separate product. A physical device driver handles the volumes (LUNs) presented to the host system. However, a software layer, the logical volume manager (LVM), residing above the disk device driver intercepts the I/O requests (see Figure 23-21) and provides the metadata lookup and I/O mapping.

Image
Image

Figure 23-21 Logical Volume Manager on a Server System

The LVM manipulates LUN representation to create virtualized storage to the file system or database manager. So with host-based virtualization, heterogeneous storage systems can be used on a per-server basis. For smaller data centers, this may not be a significant issue, but manual administration of hundreds or thousands of servers in a large data center can be very costly. LVMs can be used to divide large physical disk arrays into more manageable virtual disks or to create large virtual disks from multiple physical disks. The logical volume manager treats any LUN as a vanilla resource for storage pooling; it will not differentiate the RAID levels. So the storage administrator must ensure that the appropriate classes of storage under LVM control are paired with individual application requirements.

Most modern operating systems have some form of logical volume management built in that performs virtualization tasks (in Linux, it’s called Logical Volume Manager, or LVM; in Solaris and FreeBSD, it’s ZFS’s zpool layer; in Windows, it’s Logical Disk Manager, or LDM). Host-based storage virtualization may require more CPU cycles. Use of host-resident virtualization must therefore be balanced against the processing requirements of the upper-layer applications so that overall performance expectations can be met. For higher performance requirements, logical volume management may be complemented by hardware-based virtualization. RAID controllers for internal DAS and JBOD external chassis are good examples of hardware-based virtualization. RAID can be implemented without a special controller, but the software that performs the striping calculations often places a noticeable burden on the host CPU.

Network-Based (Fabric-Based) Virtualization

Network-based virtualization resides in a fabric-based switch infrastructure. The SAN fabric provides connectivity to heterogeneous server platforms and heterogeneous storage arrays and tape subsystems. Fabric may be composed of multiple switches from different vendors; the virtualization capabilities require standardized protocols to ensure interoperability between fabric-hosted virtualization engines. Fabric-based storage virtualization dynamically interacts with virtualizers on arrays, servers, or appliances.

Host-based (server-based) virtualization provides independence from vendor-specific storage, and storage-based virtualization provides independence from vendor-specific servers and operating systems. Network-based virtualization provides independence from both.

Fabric-based virtualization requires processing power and memory on top of a hardware architecture that is providing processing power for fabric services, switching, and other tasks. The fabric-based virtualization engine can be hosted on an appliance, an application-specific integrated circuit (ASIC) blade, or auxiliary module. It can be either in-band or out-of-band, referring to whether the virtualization engine is in the data path. Out-of-band solutions typically run on the server or on an appliance in the network and essentially process the control traffic, routing I/O requests to the proper physical locations, but don’t handle data traffic. This can result in less latency than in-band storage virtualization because the virtualization engine doesn’t handle the data; it’s also less disruptive if the virtualization engine goes down.

In-band solutions intercept I/O requests from hosts, map them to the physical storage locations, and regenerate those I/O requests to the storage systems on the back end. They do require that the virtualization engine handle both control traffic and data, which requires the processing power and internal bandwidth to ensure that they don’t add too much latency to the I/O process for host servers.

The goal of supporting multivendor fabric-based virtualization with high availability initiated the fabric application interface standard (FAIS). FAIS is an open-systems project of the ANSI/INCITS T11.5 task group and defines a set of common application programming interfaces (APIs) to be implemented within fabrics. The APIs are a means to more easily integrate storage applications that were originally developed as host-, array-, or appliance-based utilities so that they can now be supported within fabric switches and directors. FAIS development is thus being driven by the switch manufacturers and by companies that have developed storage virtualization software and virtualization hardware-assist components.

The FAIS initiative separates control information from the data path. There are two types of processors: the CPP and the DPC. The control path processor (CPP) includes some form of operating system, the FAIS application interface, and the storage virtualization application. The CPP is therefore a high-performance CPU with auxiliary memory, centralized within the switch architecture. Allocation of virtualized storage to individual servers and management of the storage metadata are the responsibility of the storage application running on the CPP.

The data path controller (DPC) may be implemented at the port level in the form of an application-specific integrated circuit (ASIC) or dedicated CPU. The DPC is optimized for low latency and high bandwidth to execute basic SCSI read/write transactions under the management of one or more CPPs. The FAIS specifications do not define a particular hardware design for CPP and DPC entities, so vendor implementation may be different. The FAIS initiative aims to provide fabric-based services, such as snapshots, mirroring, and data replication, while maintaining high-performance switching of storage data.

Network-based virtualization can be done with one of the following approaches:

Image

Image Symmetric or in-band: With an in-band approach, the virtualization device sits in the path of the I/O data flow. Hosts send I/O requests to the virtualization device, which performs I/O with the actual storage device on behalf of the host. Caching to improve performance, as well as other storage management features such as replication and migration, can be supported.

Image Asymmetric or out-of-band: The virtualization device in this approach sits outside the data path between the host and the storage device. This means that special software is needed on the host; it first requests the location of the data from the virtualization device and then uses the returned physical address to perform the I/O.

Image Hybrid split-path: This method uses a combination of in-band and out-of-band approaches, taking advantage of intelligent SAN switches to perform I/O redirection and other virtualization tasks at wire speed. Specialized software running on a dedicated highly available appliance interacts with the intelligent switch ports to manage I/O traffic and map logical-to-physical storage resources at wire speed. Whereas in typical in-band solutions, the CPU is susceptible to being overwhelmed by I/O traffic, in the split-path approach the I/O-intensive work is offloaded to dedicated port-level ASICs on the SAN switch. These ASICs can look inside Fibre Channel frames and perform I/O mapping and reroute the frames without introducing any significant amount of latency. Figure 23-22 portrays the three architecture options for implementing network-based storage virtualization.

Image
Image

Figure 23-22 Possible Architecture Options for How the Storage Virtualization Is Implemented

There are many debates regarding the architecture advantages of in-band (in the data path) vs. out-of-band (out-of-data path with agent and metadata controller) or split path (hybrid of in-band and out-of-band). Some applications and their storage requirements are best suited for a combination of technologies to address specific needs and requirements.

Cisco SANTap is one of the Intelligent Storage Services features supported on the Storage Services Module (SSM), MDS 9222i Multiservice Modular Switch, and MDS 9000 18/4-Port Multiservice Module (MSM-18/4). Cisco SANTap is a good example of network-based virtualization. The Cisco MDS 9000 SANTap service enables customers to deploy third-party appliance-based storage applications without compromising the integrity, availability, or performance of a data path between the server and disk. The protocol-based interface that is offered by SANTap allows easy and rapid integration of the data storage service application because it delivers a loose connection between the application and an SSM, which reduces the effort needed to integrate applications with the core services being offered by the SSM. SANTap has a control path and a data path. The control path handles requests that create and manipulate replication sessions sent by an appliance. The control path is implemented using an SCSI-based protocol. An appliance sends requests to a Control Virtual Target (CVT), which the SANTap process creates and monitors. Responses are sent to the control logical unit number (LUN) on the appliance. SANTap also allows LUN mapping to Appliance Virtual Targets (AVTs).

Array-Based Virtualization

In the array-based approach, the virtualization layer resides inside a storage array controller, and multiple other storage devices, from the same vendor or from a different vendor, can be added behind it. That controller, called the primary storage array controller, is responsible for providing the mapping functionality. The primary storage array controller also provides replication, migration, and other storage management services across the storage devices that it is virtualizing. The communication between separate storage array controllers enables the disk resources of each system to be managed collectively, either through a distributed management scheme or through a hierarchical master/slave relationship. The communication protocol between storage subsystems may be proprietary or based on the SNIA SMI-S standard.

As one of the first forms of array-to-array virtualization, data replication requires that a storage system function as a target to its attached servers and as an initiator to a secondary array. This is commonly referred to as disk-to-disk (D2D) data replication.

Array-based data replication may be synchronous or asynchronous. In a synchronous replication operation, each write to a primary array must be completed on the secondary array before the SCSI transaction acknowledges completion. The SCSI I/O is therefore dependent on the speed at which both writes can be performed, and any latency between the two systems affects overall performance. For this reason, synchronous data replication is generally limited to metropolitan distances (~150 kilometers) to avoid performance degradation due to speed of light propagation delay. Figure 23-23 illustrates synchronous mirroring.

Image
Image

Figure 23-23 Synchronous Mirroring

In asynchronous array-based replication, individual write operations to the primary array are acknowledged locally, while one or more write transactions may be queued to the secondary array. This solves the problem of local performance but may result in loss of the most current writes to the secondary array if the link between the two systems is lost. To minimize this risk, some implementations require the primary array to temporarily buffer each pending transaction to the secondary so that transient disruptions are recoverable. Figure 23-24 illustrates asynchronous mirroring.

Image
Image

Figure 23-24 Asynchronous Mirroring

Array-based virtualization services often include automated point-in-time copies or snapshots that may be implemented as a complement to data replication. Point-in-time copy is a technology that makes copying large sets of data a routine activity. Just as photographic snapshots capture images of physical action, point-in-time copies (or snapshots) are virtual or physical copies of data that capture the state of the data set contents at a single instant. Both virtual (copy-on-write) and physical (full-copy) snapshots protect against corruption of data content. Additionally, full-copy snapshots can protect against physical destruction. These copies can be used for backup, as a checkpoint to restore the state of an application, for data mining, as test data, and for other kinds of off-host processing.

Solutions need to scale in terms of performance, connectivity, ease of management, functionality, and resiliency without introducing instability. There are many approaches to implementing and deploying storage virtualization functionality to meet various requirements. The best solution is the one that meets a customer's specific requirements, and it may vary across a customer's different tiers of storage and applications. Table 23-3 provides a basic comparison between various approaches based on SNIA's virtualization tutorial.

Image

Table 23-3 Comparison of Storage Virtualization Levels

Fibre Channel Zoning and LUN Masking

VSANs are used to segment the physical fabric into multiple logical fabrics. Zoning provides security within a single fabric, whether physical or logical, to restrict access between initiators and targets (see Figure 23-25). LUN masking is used to provide additional security to LUNs after an initiator has reached the target device.

The zoning service within a Fibre Channel fabric is designed to provide security between devices that share the same fabric. The primary goal is to prevent certain devices from accessing other devices within the fabric. With many types of servers and storage devices on the network, the need for security is crucial. For example, if a host were to access a disk that another host is using, potentially with a different operating system, the data on the disk could become corrupted. To avoid any compromise of crucial data within the SAN, zoning allows the user to overlay a security map. This process dictates which devices—namely hosts—can see which targets, thus reducing the risk of data loss.

Image

Figure 23-25 VSANs Versus Zoning
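
To make the distinction concrete, the following is a minimal CLI sketch (with a hypothetical VSAN number, VSAN name, and interface) of how a VSAN can be created on an MDS switch and a Fibre Channel interface moved into it; zoning, discussed next, is then configured within that VSAN:

switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name Production
switch(config-vsan-db)# vsan 10 interface fc1/1
switch(config-vsan-db)# end
switch# show vsan membership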

Advanced zoning capabilities specified in the FC-GS-4 and FC-SW-3 standards are provided. You can use either the existing basic zoning capabilities or the advanced (enhanced), standards-compliant zoning capabilities.

There are two main methods of zoning: hard and soft. More recently, the differences between the two have blurred, because all modern SAN switches enforce soft zoning in hardware. The fabric name service allows each device to query the addresses of all other devices. Soft zoning restricts only the fabric name service so that it shows only the allowed subset of devices. Therefore, when a server looks at the content of the fabric, it sees only the devices it is allowed to see. However, any server can still attempt to contact any device on the network by address. In this way, soft zoning is similar to the computing concept of security through obscurity.

In contrast, hard zoning restricts actual communication across a fabric. This requires efficient hardware implementation (frame filtering) in the fabric switches, but it is much more secure. That said, modern switches enforce zoning in hardware even when you configure soft zoning. Cisco MDS 9000 and Cisco Nexus family switches implement hard zoning.

Zoning does have its limitations. Zoning was designed to do nothing more than prevent devices from communicating with other unauthorized devices. It is a distributed service that is common throughout the fabric. Any installed changes to a zoning configuration are therefore disruptive to the entire connected fabric. Zoning was also not designed to address availability or scalability of a Fibre Channel infrastructure.

Zoning has the following features:

Image

Image A zone consists of multiple zone members. Members in a zone can access each other; members in different zones cannot access each other. If zoning is not activated, all devices are members of the default zone. If zoning is activated, any device that is not in an active zone (a zone that is part of an active zone set) is a member of the default zone. Zones can vary in size. Devices can belong to more than one zone.

Image A zone set consists of one or more zones. A zone set can be activated or deactivated as a single entity across all switches in the fabric. Only one zone set can be activated at any time. A zone can be a member of more than one zone set. A Cisco MDS switch can have a maximum of 500 zone sets.

Image Zoning can be administered from any switch in the fabric. When you activate a zone (from any switch), all switches in the fabric receive the active zone set. Additionally, full zone sets are distributed to all switches in the fabric, if this feature is enabled in the source switch. If a new switch is added to an existing fabric, zone sets are acquired by the new switch.

Image Zone changes can be configured nondisruptively. New zones and zone sets can be activated without interrupting traffic on unaffected ports or devices.

Image Zone membership criteria are based mainly on WWNs or FCIDs. The following membership types are supported:

Image Port World Wide Name (pWWN): Specifies the pWWN of an N port attached to the switch as a member of the zone.

Image Fabric pWWN: Specifies the WWN of the fabric port (switch port’s WWN). This membership is also referred to as port-based zoning.

Image FCID: Specifies the FCID of an N port attached to the switch as a member of the zone.

Image Interface and switch WWN (sWWN): Specifies the interface of a switch identified by the sWWN. This membership is also referred to as interface-based zoning.

Image Interface and domain ID: Specifies the interface of a switch identified by the domain ID.

Image Domain ID and port number: Specifies the domain ID of an MDS domain and additionally specifies a port belonging to a non-Cisco switch.

Image IPv4 address: Specifies the IPv4 address (and optionally the subnet mask) of an attached device.

Image IPv6 address: The IPv6 address of an attached device in 128 bits in colon-separated hexadecimal format.

Image Symbolic node name: Specifies the member symbolic node name. The maximum length is 240 characters.

Image Default zone membership includes all ports or WWNs that do not have a specific membership association. Access between default zone members is controlled by the default zone policy. Figure 23-26 illustrates zoning examples.

Image

Figure 23-26 Zoning Examples
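
To tie these concepts together, the following is a minimal configuration sketch of a single-initiator, single-target zone using pWWN members, assuming basic zoning mode; the VSAN number, zone and zone set names, and WWNs are hypothetical. The show flogi database, show fcns database, and show zoneset active commands verify that the initiator and target have logged in to the fabric, are registered with the name server (FCNS), and are permitted to communicate by the active zone set.

switch# configure terminal
switch(config)# zone name Z_host1_array1 vsan 10
switch(config-zone)# member pwwn 21:00:00:e0:8b:05:05:05
switch(config-zone)# member pwwn 50:06:01:60:10:60:14:f5
switch(config-zone)# exit
switch(config)# zoneset name ZS_fabricA vsan 10
switch(config-zoneset)# member Z_host1_array1
switch(config-zoneset)# exit
switch(config)# zoneset activate name ZS_fabricA vsan 10
switch(config)# end
switch# show flogi database vsan 10
switch# show fcns database vsan 10
switch# show zoneset active vsan 10

If full zone set distribution is enabled, the full zone database is also propagated to every switch in the fabric when the zone set is activated.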

SAN administrators allow servers (initiators) and storage devices (targets) in a Fibre Channel SAN to talk to each other by adding them to the same zone. In the fabric, permissions defined in this way are converted to access control entries (ACEs), which are programmed into ternary content-addressable memory (TCAM) hardware in the switches. Traditionally, zones have members, and all members of a zone can talk to each other. Each pair of members consumes two ACEs in the TCAM: One ACE permits the first member to receive traffic from the second member, and the other ACE permits the second member to receive traffic from the first member. Mathematically, the number of ACEs consumed by a zone with n members would be n× (n – 1). Since hardware resources are finite, a moderate number of large zones can exceed the TCAM capacity of a switch (see Figure 23-27). The solution to this problem has been to use 1-1 zoning, where each zone consists of a single initiator and a single target. This solution solves the problem of excessive TCAM consumption, but it imposes a burden on the SAN administrator by requiring the creation and management of a large number of zones. More zones generate more work—and more possibilities for errors. In very large fabrics, this solution may even run up against system software limits on the size of the total zone database.

Image

Figure 23-27 The Trouble with Sizable Zoning

Cisco Smart Zoning takes advantage of the fact that storage traffic is not symmetrical or egalitarian like LAN traffic, where any Ethernet or TCP/IP host may need to talk to any other host. Storage is asymmetrical: zone members are either initiators or targets, and in most cases, initiators do not talk to other initiators, and targets do not talk to other targets. There are exceptions to this generalization, such as array-to-array replication, and any solution must take those into account. Consider an example in which an application has eight servers, each with dual host bus adapters (HBAs) or converged network adapters (CNAs), talking to eight storage ports. These devices are split among two redundant, disjointed SAN fabrics, so each fabric has eight HBAs and four storage ports for this application. A total of 132 ACEs is created because each of the 12 members of the zone is provisioned with access to all 11 other members. With Smart Zoning, zones can now be defined as one-to-many, many-to-one, or many-to-many without incurring a penalty in switch resource consumption. Thus, administrators can now define zones to correspond to entities that actually are meaningful in their data center operations. For example, they can define a zone for an application, for an application cluster, or for a hypervisor cluster without compromising internal resource utilization. Consider a maximum VMware vSphere cluster of 32 servers that uses a total of eight storage ports. A single zone for this cluster has 40 members. With traditional zoning, it would consume n× (n – 1) ACL entries, which translates to 40 × 39 = 1560 ACL entries. With Smart Zoning, ACL consumption drops to 32 × 8 × 2 = 512 ACL entries.
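
As a brief, hedged illustration of Smart Zoning (the VSAN number, zone name, and device aliases are hypothetical, and the feature must be supported by the NX-OS release in use), the initiator or target role of each member is specified when the zone is built, so the switch programs ACEs only for initiator-target pairs:

switch# configure terminal
switch(config)# zone smart-zoning enable vsan 10
switch(config)# zone name Z_vsphere_cluster1 vsan 10
switch(config-zone)# member device-alias esx-host-1 initiator
switch(config-zone)# member device-alias esx-host-2 initiator
switch(config-zone)# member device-alias array-port-1 target
switch(config-zone)# exit
switch(config)# zoneset name ZS_fabricA vsan 10
switch(config-zoneset)# member Z_vsphere_cluster1
switch(config-zoneset)# exit
switch(config)# zoneset activate name ZS_fabricA vsan 10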

The following guidelines must be considered when creating zone members:

Image

Image Configuring only one initiator and one target for a zone provides the most efficient use of the switch resources.

Image Configuring the same initiator to multiple targets is accepted.

Image Configuring multiple initiators to multiple targets is not recommended.

The zoning feature complies with the FC-GS-4 and FC-SW-3 standards. Both standards support the basic zoning functionalities and the enhanced zoning functionalities.

LUN masking is the most common method of ensuring LUN security. Each SCSI host uses a number of primary SCSI commands to identify its target, discover LUNs, and obtain their size. The following are the commands:

Image Identify: Which device are you?

Image Report LUNs: How many LUNs are behind this storage array port?

Image Report capacity: What is the capacity of each LUN?

Image Request sense: Is a LUN online and available?

It is important to ensure that only one host accesses each LUN on the storage array at a time, unless the hosts are configured in a cluster. As the host mounts each LUN volume, it writes a signature at the start of the LUN to claim exclusive access. If a second host should discover and try to mount the same LUN volume, it overwrites the previous signature.

LUN masking ensures that only one host can access a LUN; all other hosts are masked out. LUN masking is essentially a mapping table inside the front-end array controllers. LUN masking determines which LUNs are to be advertised through which storage array ports, and which host is allowed to own which LUNs.

An alternative method of LUN security is LUN mapping. If LUN masking is unavailable in the storage array, LUN mapping can be used, although both methods can be used concurrently.

LUN mapping is configured in the HBA of each host to ensure that only one host at a time can access each LUN. LUNs can be advertised on many storage ports and discovered by several hosts at the same time. Many LUNs may therefore be visible to many hosts, and it is the responsibility of the administrator to configure each HBA so that each host has exclusive access to its LUNs. When there are many hosts, a mistake is more likely to be made, and more than one host might access the same LUN by accident. Table 23-4 compares the characteristic differences between VSAN and Zone.

Image
Image

Table 23-4 VSAN and Zone Comparison

Device Aliases Versus Zone-Based (FC) Aliases

When the port WWN (pWWN) of a device must be specified to configure different features (zoning, QoS, port security) in a Cisco MDS 9000 family switch, you must assign the correct device name each time you configure these features. An incorrect device name may cause unexpected results. You can avoid this problem if you define a user-friendly name for a port WWN and use this name in all the configuration commands as required. These user-friendly names are referred to as device aliases.

Device aliases support two modes—basic and enhanced—as detailed here:

Image When device aliases run in basic mode, the applications immediately expand the device alias to the corresponding pWWN in the configuration. This operation continues until the mode is changed to enhanced.

Image When a device alias runs in the enhanced mode, all applications accept the device alias configuration in the native format. The applications store the device alias name in the configuration and distribute it in the device alias format instead of expanding to pWWN. The applications track the device alias database changes and take actions to enforce it.

When a device alias mode is changed from basic mode to enhanced mode, the corresponding applications are informed about the change. The applications then start accepting the device alias–based configuration in the native format.
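
The following is a short configuration sketch (with a hypothetical alias name and pWWN) showing how a device alias is defined, committed so that CFS distributes it fabric-wide, and then verified; the mode is switched to enhanced first, as described above:

switch# configure terminal
switch(config)# device-alias mode enhanced
switch(config)# device-alias database
switch(config-device-alias-db)# device-alias name oradb-hba1 pwwn 21:00:00:e0:8b:05:05:05
switch(config-device-alias-db)# exit
switch(config)# device-alias commit
switch(config)# end
switch# show device-alias database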

Device aliases have the following features:

Image Device alias information is independent of your VSAN configuration.

Image Device alias configuration and distribution is independent of the zone server and the zone server database.

Image You can import legacy zone alias configurations without losing data.

Image The device alias application uses the Cisco Fabric Services (CFS) infrastructure to enable efficient database management and distribution. Device aliases use the coordinated distribution mode and the fabric-wide distribution scope.

Image When you configure zones, IVR zones, or QoS features using device aliases and then display those configurations, the device aliases are automatically displayed along with their respective pWWNs.

Device aliases have the following requirements:

Image You can only assign device aliases to pWWNs.

Image The mapping between the pWWN and the device alias to which it is mapped must have a one-to-one relationship. A pWWN can be mapped to only one device alias, and vice versa.

Image A device alias name is restricted to 64 alphanumeric characters and may include one or more of the following characters:

Image a to z and A to Z

Image 1 to 9

Image - (hyphen) and _ (underscore)

Image $ (dollar sign) and ^ (up caret)

Table 23-5 compares the differences between zone and device aliases.

Image
Image

Table 23-5 Comparison Between Zone Aliases and Device Aliases

Where Do I Start Configuration?

The Cisco MDS 9000 family of storage networking products supports 1, 2, 4, 8, 10, and 16 Gbps Fibre Channel; 10 and 40 Gbps Fibre Channel over Ethernet (FCoE); 1 and 10 Gbps Fibre Channel over IP (FCIP); and Internet Small Computer System Interface (iSCSI) on Gigabit Ethernet ports at the time of writing this book. The MDS multilayer directors have dual supervisor modules, and each supervisor module has one management port and one console port. The MDS multiservice and multilayer switches have one management port and one console port as well.

Before any configuration is started, the equipment should be installed and mounted onto the racks. For each MDS product family switch there is a specific hardware installation guide that explains in detail how and where to install specific components of the MDS switches. This section starts with explaining what to do after the switch is installed and powered on following the hardware installation guide.

The Cisco MDS family switches provide the following types of ports:

Image

Image Console port (supervisor modules): The console port is available for all the MDS product family. The console port, labeled “Console,” is an RS-232 port with an RJ-45 interface. It is an asynchronous (async) serial port; any device connected to this port must be capable of asynchronous transmission. This port is used to create a local management connection to set the IP address and other initial configuration settings before connecting the switch to the network for the first time.

To connect the console port to a computer terminal, follow these steps:

Step 1. Configure the terminal emulator program to match the following default port characteristics: 9600 baud, 8 data bits, 1 stop bit, no parity.

Step 2. Connect the supplied RJ-45 to DB-9 female adapter or RJ-45 to DB-25 female adapter (depending on your computer) to the computer serial port. We recommend using the adapter and cable provided with the switch.

Step 3. Connect the console cable (a rollover RJ-45 to RJ-45 cable) to the console port and to the RJ-45-to-DB-9 adapter or the RJ-45-to-DB-25 adapter (depending on your computer) at the computer serial port.

Image COM1 port: This is an RS-232 port that you can use to connect to an external serial communication device such as a modem. The COM1 port is available for all the MDS product family except MDS 9700 director switches. It is available on each supervisor module of MDS 9500 Series switches.

The COM1 port (labeled “COM1”) is an RS-232 port with a DB-9 interface. You can use it to connect to an external serial communication device such as a modem. To connect the COM1 port to a modem, follow these steps:

Step 1. Connect the modem to the COM1 port using the adapters and cables.

a. Connect the DB-9 serial adapter to the COM1 port.

b. Connect the RJ-45-to-DB-25 modem adapter to the modem.

c. Connect the adapters using the RJ-45-to-RJ-45 rollover cable (or equivalent crossover cable).

Step 2. The default COM1 settings are as follows:

line Aux:
  Speed: 9600 bauds
  Databits: 8 bits per byte
  Stopbits: 1 bit(s)
  Parity: none
  Modem In: Enable
  Modem Init-String - default: ATE0Q1&D2&C1S0=115
  Statistics: tx:17 rx:0
  Register Bits: RTS|DTR

Image MGMT 10/100/1000 Ethernet port (supervisor module): This is an Ethernet port that you can use to access and manage the switch by IP address, such as through Cisco Data Center Network Manager (DCNM). The MGMT Ethernet port is available for all the MDS product family. The supervisor modules support an autosensing MGMT 10/100/1000 Ethernet port (labeled “MGMT 10/100/1000”) with an RJ-45 interface. You can connect the MGMT 10/100/1000 Ethernet port to an external hub, switch, or router. Use a modular RJ-45 straight-through UTP cable to connect the MGMT 10/100/1000 Ethernet port to an Ethernet switch port or hub, or use a crossover cable to connect to a router interface.


Note

For high availability, connect the MGMT 10/100/1000 Ethernet port on the active supervisor module and on the standby supervisor module to the same network or VLAN. The active supervisor module owns the IP address used by both of these Ethernet connections. On a switchover, the newly activated supervisor module takes over this IP address. This process requires an Ethernet connection to the newly activated supervisor module.


Image Fibre Channel ports (switching modules): These are Fibre Channel (FC) ports that you can use to connect to the SAN or for in-band management. The Cisco MDS 9000 family supports both Fibre Channel and FCoE protocols for SFP+ transceivers. Each transceiver must match the transceiver on the other end of the cable, and the cable must not exceed the stipulated cable length for reliable communication.

Image Fibre Channel over Ethernet ports (switching modules): These are Fibre Channel over Ethernet (FCoE) ports that you can use to connect to the SAN or for in-band management.

Image Gigabit Ethernet ports (IP storage ports): These are IP Storage Services ports that are used for the FCIP or iSCSI connectivity.

Image USB drive/port: This is a simple interface that allows you to connect to different devices supported by NX-OS. The USB drive is not available for MDS 9100 Series switches. The Cisco MDS 9700 Series switch has two USB drives (in each Supervisor-1 module). In the supervisor module, there are two USB drives: Slot 0 and LOG FLASH. The LOG FLASH and Slot 0 USB ports use different formats for their data.

Let’s also review the transceivers before we continue with the configuration. You can use any combination of SFP or SFP+ transceivers that are supported by the switch. The only restrictions are that short wavelength (SWL) transceivers must be paired with SWL transceivers, and long wavelength (LWL) transceivers with LWL transceivers, and the cable must not exceed the stipulated cable length for reliable communications. The Cisco SFP, SFP+, and X2 devices are hot-swappable transceivers that plug in to Cisco MDS 9000 family director switching modules and fabric switch ports. They allow you to choose different cabling types and distances on a port-by-port basis. The most up-to-date information can be found for pluggable transceivers here: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9000-series-multilayer-switches/product_data_sheet09186a00801bc698.html.

The SFP hardware transmitters are identified by their acronyms when displayed in the show interface brief command. If the related SFP has a Cisco-assigned extended ID, the show interface and show interface brief commands display the ID instead of the transmitter type. The show interface transceiver command and the show interface fc slot/port transceiver command display both values for Cisco-supported SFPs.

What are the differences between SFP+, SFP, and XFP? SFP, SFP+, and XFP are all terms for a type of transceiver that plugs in to a special port on a switch or other network device to convert the port to a copper or fiber interface. SFP has the lowest cost among all three. SFP+ replaced the XFP 10G modules and became the mainstream. An advantage of the SFP+ modules is that SFP+ has a more compact form factor package than X2 and XFP. It can interoperate directly with XFP, X2, and XENPAK modules of the same link type. The cost of SFP+ is lower than that of XFP, X2, and XENPAK.

SFP and SFP+ specifications are developed by MSA (Multi-Source Agreement) groups. They have the same size and appearance, but they are based on different standards: SFP is based on the INF-8074i specification, and SFP+ is based on the SFF-8431 specification.

XFP and SFP+ are both 10G fiber optic modules and can interoperate with other types of 10G modules. SFP+ is smaller than XFP because it moves some functions to the host board, including signal modulation, the MAC, clock and data recovery (CDR), and electronic dispersion compensation (EDC). XFP is based on the XFP MSA standard, whereas SFP+ complies with IEEE 802.3ae and the SFF-8431 and SFF-8432 specifications. SFP+ is the mainstream design.

To get the most up-to-date technical specifications for fiber optics per the current standards and specs, maximum supportable distances, and attenuation for optical fiber applications by fiber type, check the Fiber Optic Association page (FOA) at http://www.thefoa.org/tech/Linkspec.htm.

The Cisco MDS product family runs the same NX-OS operating system as the Nexus product line, and the two switch families share most management features. The Cisco MDS 9000 family switches can be accessed and configured in many ways, and they support standard management protocols. Figure 23-28 portrays the tools for configuring the Cisco NX-OS software.

Image
Image

Figure 23-28 Tools for Configuring Cisco NX-OS Software

The different protocols that are supported in order to access, monitor, and configure the Cisco MDS 9000 family of switches are described in Table 23-6.

Image
Image

Table 23-6 Protocols to Access, Monitor, and Configure the Cisco MDS Family

The Cisco MDS NX-OS Setup Utility—Back to Basics

The Cisco MDS NX-OS setup utility is an interactive command-line interface (CLI) mode that guides you through a basic (also called a startup) configuration of the system. The setup utility configures only enough connectivity for system management and allows you to build an initial configuration file using the System Configuration dialog. The setup starts automatically when a device has no configuration file in NVRAM, and the dialog guides you through the initial configuration. After the file is created, you can use the CLI to perform additional configuration.

You can press Ctrl+C at any prompt to skip the remaining configuration options and proceed with what you have configured up to that point, except for the administrator password. If you want to skip answers to any questions, press Enter. If a default answer is not available (for example, the device hostname), the device uses what was previously configured and skips to the next question. Figure 23-29 portrays the setup script flow.

Image
Image

Figure 23-29 Setup Script Flow

You use the setup utility mainly for configuring the system initially, when no configuration is present. However, you can use the setup utility at any time for basic device configuration. The setup script only supports IPv4. The setup utility keeps the configured values when you skip steps in the script. For example, if you have already configured the mgmt0 interface, the setup utility does not change that configuration if you skip that step. However, if there is a default value for the step, the setup utility changes to the configuration using that default, not the configured value. Be sure to carefully check the configuration changes before you save the configuration.
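
For example, if the mgmt0 interface had already been configured manually from the CLI, along the lines of the following minimal sketch (the addresses are hypothetical), the setup utility leaves that addressing in place as long as you skip the corresponding step rather than accept a default:

switch# configure terminal
switch(config)# interface mgmt 0
switch(config-if)# ip address 192.168.10.21 255.255.255.0
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# ip default-gateway 192.168.10.1
switch(config)# end
switch# copy running-config startup-config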


Note

Be sure to configure the IPv4 route, the default network IPv4 address, and the default gateway IPv4 address to enable SNMP access. If you enable IPv4 routing, the device uses the IPv4 route and the default network IPv4 address. If IPv4 routing is disabled, the device uses the default gateway IPv4 address.


Before starting the setup utility, you need to execute the following steps:

Step 1. Have a password strategy for your network environment.

Step 2. Connect the console port on the supervisor module to the network. If you have dual supervisor modules, connect the console ports on both supervisor modules to the network.

Step 3. Connect the Ethernet management port on the supervisor module to the network. If you have dual supervisor modules, connect the Ethernet management ports on both supervisor modules to the network.

The first time you access a switch in the Cisco MDS 9000 family, it runs a setup program that prompts you for the IP address and other configuration information necessary for the switch to communicate over the supervisor module Ethernet interface. This information is required to configure and manage the switch. The IP address can only be configured from the CLI. When you power up the switch for the first time, assign the IP address. After you perform this step, Cisco Prime DCNM can reach the switch through its management interface. There are two types of management through the MDS NX-OS CLI. You can configure out-of-band management on the mgmt0 interface. The in-band management logical interface is VSAN 1. This management interface uses the Fibre Channel infrastructure to transport IP traffic. An interface for VSAN 1 is created on every switch in the fabric. Each switch should have its VSAN 1 interface configured with either an IPv4 address or an IPv6 address in the same subnetwork. A default route that points to the switch providing access to the IP network should be configured on every switch in the Fibre Channel fabric. The following are the initial setup procedure steps for configuring out-of-band management on the mgmt0 interface:

Step 1. Power on the switch. Switches in the Cisco MDS 9000 family boot automatically.

Step 2. Enter yes (yes is the default) to enable a secure password standard.

Do you want to enforce secure password standard (yes/no): yes

You can also enable a secure password standard using the password strength-check command. A secure password should contain characters from at least three of these classes: lowercase letters, uppercase letters, digits, and special characters.

Step 3. Enter the password for the administrator.

Enter the password for admin: admin-password
Confirm the password for admin: admin-password

Step 4. Enter yes to enter the setup mode.

This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system. Note that setup is mainly used for configuring the system initially, when no configuration is present. Therefore, setup always assumes system defaults and not the current system configuration values. Press Enter at any time to skip a dialog. Use Ctrl+C at any time to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): yes

The setup utility guides you through the basic configuration process. Press Ctrl+C at any prompt to end the configuration process.

Step 5. Enter yes (no is the default) if you want to create an additional account.

Create another login account (yes/no) [no]: yes

While configuring your initial setup, you can create an additional user account (in the network-admin role) besides the administrator’s account.

Enter the user login ID: user_name
Enter the password for user_name: user-password
Confirm the password for user_name: user-password
Enter the user role [network-operator]: network-admin

By default, two roles exist in all switches:

Image Network operator (network-operator): Has permission to view the configuration only. The operator cannot make any configuration changes.

Image Network administrator (network-admin): Has permission to execute all commands and make configuration changes. The administrator can also create and customize up to 64 additional roles. One of these 64 additional roles can be configured during the initial setup process.

Step 6. Configure the read-only or read-write SNMP community string.

a. Enter yes (no is the default) to configure the read-only SNMP community string.

Configure read-only SNMP community string (yes/no) [n]: yes

b. Enter the SNMP community string.

SNMP community string: snmp_community

Step 7. Enter a name for the switch. The switch name is limited to 32 alphanumeric characters.

The default switch name is “switch.”

Enter the switch name: switch_name

Step 8. Enter yes (yes is the default) at the configuration prompt to configure out-of-band management.

IP version 6 (IPv6) is supported in Cisco MDS NX-OS Release 4.1(x) and later. However, the setup script supports only IP version 4 (IPv4) for the management interface.

Continue with out-of-band (mgmt0) management configuration? [yes/no]: yes

Enter the mgmt0 IPv4 address.

Mgmt0 IPv4 address: ip_address

Enter the mgmt0 IPv4 subnet mask.

Mgmt0 IPv4 netmask: subnet_mask

Step 9. Enter yes (yes is the default) to configure the default gateway.

Configure the default-gateway: (yes/no) [y]: yes
Enter the default gateway IP address.
IP address of the default gateway: default_gateway

Step 10. Enter yes (no is the default) to configure advanced IP options such as in-band management, static routes, default network, DNS, and domain name.

Configure Advanced IP options (yes/no)? [n]: yes

a. Enter no (no is the default) at the in-band management configuration prompt.

Continue with in-band (VSAN1) management configuration? (yes/no) [no]: no

b. Enter yes (yes is the default) to enable IPv4 routing capabilities.

Enable ip routing capabilities? (yes/no) [y]: yes

c. Enter yes (yes is the default) to configure a static route.

Configure static route: (yes/no) [y]: yes

Enter the destination prefix.

Destination prefix: dest_prefix

Enter the destination prefix mask.

Destination prefix mask: dest_mask

Enter the next hop IP address.

Next hop ip address: next_hop_address

d. Enter yes (yes is the default) to configure the default network.

Configure the default-network: (yes/no) [y]: yes

Enter the default network IPv4 address.

Default network IP address [dest_prefix]: dest_prefix

e. Enter yes (yes is the default) to configure the DNS IPv4 address.

Configure the DNS IP address? (yes/no) [y]: yes

Enter the DNS IP address.

DNS IP address: name_server

f. Enter yes (no is the default) to configure the default domain name.

Configure the default domain name? (yes/no) [n]: yes

Enter the default domain name.

Default domain name: domain_name

Step 11. Enter yes (no is the default) to enable the SSH service.

Enabled SSH service? (yes/no) [n]: yes

Enter the SSH key type.

Type the SSH key you would like to generate (dsa/rsa)? rsa

Enter the number of key bits within the specified range.

Enter the number of key bits? (768-2048) [1024]: 2048

Step 12. Enter no (no is the default) to keep the Telnet service disabled.

Enable the telnet service? (yes/no) [n]: no

Step 13. Enter yes (yes is the default) to configure congestion or no_credit drop for FC interfaces.

Configure congestion or no_credit drop for fc interfaces? (yes/no) [q/
quit] to quit [y]:yes

Step 14. Enter con (con is the default) to configure congestion or no_credit drop.

Enter the type of drop to configure congestion/no_credit drop? (con/no)
[c]:con

Step 15. Enter a value from 100 to 1000 (d is the default) to calculate the number of milliseconds for congestion or no_credit drop.

Enter number of milliseconds for congestion/no_credit drop[100 - 1000] or
[d/default] for default: 100

Step 16. Enter a mode for congestion or no_credit drop.

Enter mode for congestion/no_credit drop[E/F]:

Step 17. Enter yes (no is the default) to configure the NTP server.

Configure NTP server? (yes/no) [n]: yes
Enter the NTP server IPv4 address.
NTP server IP address: ntp_server_IP_address

Step 18. Enter shut (shut is the default) to configure the default switch port interface to the shut (disabled) state.

Configure default switchport interface state (shut/noshut) [shut]: shut

The management Ethernet interface is not shut down at this point. Only the Fibre Channel, iSCSI, FCIP, and Gigabit Ethernet interfaces are shut down.

Step 19. Enter on (off is the default) to configure the switch port trunk mode.

Configure default switchport trunk mode (on/off/auto) [off]: on

Step 20. Enter yes (no is the default) to configure the switchport mode F.

Configure default switchport mode F (yes/no) [n]: y

Step 21. Enter on (off is the default) to configure the port-channel auto-create state.

Configure default port-channel auto-create state (on/off) [off]: on

Step 22. Enter permit (deny is the default) to set the default zone policy to permit.

Configure default zone policy (permit/deny) [deny]: permit

This permits traffic flow to all members of the default zone.

Step 23. Enter yes (no is the default) to enable full zone set distribution.

Enable full zoneset distribution (yes/no) [n]: yes

This overrides the switch-wide default for the full zone set distribution feature.

You see the new configuration. Review and edit the configuration that you have just entered.

Step 24. Enter enhanced (basic is the default) to configure default-zone mode as enhanced.

Configure default zone mode (basic/enhanced) [basic]: enhanced

This overrides the switch-wide default zone mode as enhanced.

Step 25. Enter no (no is the default) if you are satisfied with the configuration.

The following configuration will be applied:

  username admin password admin_pass role network-admin
  username user_name password user_pass role network-admin
  snmp-server community snmp_community ro
  switchname switch
  interface mgmt0
    ip address ip_address subnet_mask
    no shutdown
  ip routing
  ip route dest_prefix dest_mask dest_address
  ip default-network dest_prefix
  ip default-gateway default_gateway
  ip name-server name_server
  ip domain-name domain_name
  telnet server disable
  ssh key rsa 2048 force
  ssh server enable
  ntp server ipaddr ntp_server
  system default switchport shutdown
  system default switchport trunk mode on
  system default switchport mode F
  system default port-channel auto-create
  zone default-zone permit vsan 1-4093
  zoneset distribute full vsan 1-4093
  system default zone mode enhanced
Would you like to edit the configuration? (yes/no) [n]: n

Step 26. Enter yes (yes is default) to use and save this configuration.

Use this configuration and save it? (yes/no) [y]: yes

If you do not save the configuration at this point, none of your changes are updated the next time the switch is rebooted. Type yes to save the new configuration. This ensures that the kickstart and system images are also automatically configured.

Step 27. Log in to the switch using the new username and password.

Step 28. Verify that the required licenses are installed in the switch using the show license command.

Step 29. Verify that the switch is running the Cisco NX-OS software release that you installed (Cisco NX-OS 6.2(x) in this case) by issuing the show version command. Example 23-1 portrays the show version display output.

Example 23-1 Verifying NX-OS Version on an MDS 9710 Switch


MDS-9710-1# show version
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Documents: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.html
Copyright (c) 2002-2013, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
Software
  BIOS:      version 3.1.0
  kickstart: version 6.2(3)
  system:    version 6.2(3)
  BIOS compile time:       02/27/2013
  kickstart image file is: bootflash:///kickstart
  kickstart compile time:  7/10/2013 2:00:00 [07/31/2013 10:10:16]
  system image file is:    bootflash:///system
  system compile time:     7/10/2013 2:00:00 [07/31/2013 11:59:38]
Hardware
  cisco MDS 9710 (10 Slot) Chassis ("Supervisor Module-3")
  Intel(R) Xeon(R) CPU         with 8120784 kB of memory.
  Processor Board ID JAE171504KY
  Device name: MDS-9710-1
  bootflash:    3915776 kB
  slot0:              0 kB (expansion flash)
Kernel uptime is 0 day(s), 0 hour(s), 39 minute(s), 20 second(s)
Last reset
  Reason: Unknown
System version: 6.2(3)
  Service:
Plugin
  Core Plugin, Ethernet Plugin


Step 30. Verify the status of the modules on the switch using the show module command (see Example 23-2).

Example 23-2 Verifying Modules on an MDS 9710 Switch


MDS-9710-1# show module
Mod Ports Module-Type                         Model              Status
--- ----- ----------------------------------- ------------------ ----------
1   48    2/4/8/10/16 Gbps Advanced FC Module DS-X9448-768K9     ok
2   48    2/4/8/10/16 Gbps Advanced FC Module DS-X9448-768K9     ok
5   0     Supervisor Module-3                 DS-X97-SF1-K9      active *
6   0     Supervisor Module-3                 DS-X97-SF1-K9      ha-standby
Mod Sw             Hw
--- -------------- ------
1   6.2(3)         1.1
2   6.2(3)         1.1
5   6.2(3)         1.0
6   6.2(3)         1.0
Mod MAC-Address(es)                        Serial-Num
--- -------------------------------------- ----------
1   1c-df-0f-78-cf-d8 to 1c-df-0f-78-cf-db JAE171308VQ
2   1c-df-0f-78-d8-50 to 1c-df-0f-78-d8-53 JAE1714019Y
5   1c-df-0f-78-df-05 to 1c-df-0f-78-df-17 JAE171504KY
6   1c-df-0f-78-df-2b to 1c-df-0f-78-df-3d JAE171504E6
Mod Online Diag Status
--- ------------------
1   Pass
2   Pass
5   Pass
6   Pass
Xbar Ports Module-Type                         Model              Status
---  ----- ----------------------------------- ------------------ ----------
1    0     Fabric Module 1                     DS-X9710-FAB1      ok
2    0     Fabric Module 1                     DS-X9710-FAB1      ok
3    0     Fabric Module 1                     DS-X9710-FAB1      ok
Xbar Sw             Hw
---  -------------- ------
1    NA             1.0
2    NA             1.0
3    NA             1.0
Xbar MAC-Address(es)                        Serial-Num
---  -------------------------------------- ----------
1    NA                                     JAE1710085T
2    NA                                     JAE1710087T
3    NA                                     JAE1709042X


The following are the initial setup procedure steps for configuring the in-band management logical interface on VSAN 1:

You can configure both in-band and out-of-band configuration together by entering yes in both step 10c and step 10d in the following procedure.

Step 10. Enter yes (no is the default) to configure advanced IP options such as in-band management, static routes, default network, DNS, and domain name.

Configure Advanced IP options (yes/no)? [n]: yes

a. Enter yes (no is the default) at the in-band management configuration prompt.

Continue with in-band (VSAN1) management configuration? (yes/no) [no]:
yes

Enter the VSAN 1 IPv4 address.

VSAN1 IPv4 address: ip_address

Enter the IPv4 subnet mask.

VSAN1 IPv4 net mask: subnet_mask

b. Enter no (yes is the default) to enable IPv4 routing capabilities.

Enable ip routing capabilities? (yes/no) [y]: no

c. Enter no (yes is the default) to configure a static route.

Configure static route: (yes/no) [y]: no

d. Enter no (yes is the default) to configure the default network.

Configure the default-network: (yes/no) [y]: no

e. Enter no (yes is the default) to configure the DNS IPv4 address.

Configure the DNS IP address? (yes/no) [y]: no

f. Enter no (no is the default) to skip the default domain name configuration.

Configure the default domain name? (yes/no) [n]: no

In the final configuration, the following CLI commands will be added to the configuration:

interface vsan1
     ip address ip_address subnet_mask
     no shutdown

The Power On Auto Provisioning

POAP (Power On Auto Provisioning) automates the process of upgrading software images and installing configuration files on Cisco MDS and Nexus switches that are being deployed in the network. When a Cisco MDS switch with the POAP feature boots and does not find the startup configuration, the switch enters POAP mode, locates the DCNM DHCP server, and bootstraps itself with its interface IP address, gateway, and DCNM DNS server IP addresses. It also obtains the IP address of the DCNM server to download the configuration script that is run on the switch to download and install the appropriate software image and device configuration file.

Starting with NX-OS 6.2(9), the POAP capability is available on Cisco MDS 9148 and MDS 9148S multilayer fabric switches (at the time of writing this book).

When you power up a switch for the first time, it loads the software image that is installed at manufacturing and tries to find a configuration file from which to boot. When a configuration file is not found, POAP mode starts. During startup, a prompt appears, asking if you want to abort POAP and continue with a normal setup. You can choose to exit or continue with POAP. If you exit POAP mode, you enter the normal interactive setup script. If you continue in POAP mode, all the front-panel interfaces are set up in the default configuration. The USB device on MDS 9148 or on MDS 9148S does not contain the required installation files. The POAP setup requires the following network infrastructure: a DHCP server to bootstrap the interface IP address, gateway address, and DNS (Domain Name System) server.

A TFTP server that contains the configuration script is used to automate the software image installation and configuration process.

One or more servers can contain the desired software images and configuration files. Figure 23-30 portrays the network infrastructure for POAP.

Image
Image

Figure 23-30 POAP Network Infrastructure
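
As a hedged example only, assuming an ISC DHCP server is used to supply the bootstrap information described above (any DHCP server that can hand out these options will do), and with hypothetical addresses and script filename, the scope that serves booting switches could look like the following:

subnet 10.10.10.0 netmask 255.255.255.0 {
  range 10.10.10.100 10.10.10.150;         # temporary addresses handed to booting switches
  option routers 10.10.10.1;               # gateway
  option domain-name-servers 10.10.10.5;   # DNS server
  next-server 10.10.10.20;                 # TFTP server that hosts the script
  filename "poap_script.py";               # boot file: the POAP configuration script
}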

Here are the steps to set up the network environment using POAP:

Step 1. (Optional) Put the POAP configuration script and any other desired software image and switch configuration files on a USB device that is accessible to the switch.

Step 2. Deploy a DHCP server and configure it with the interface, gateway, and TFTP server IP addresses and a boot file with the path and name of the configuration script file. This information is provided to the switch when it first boots.

Step 3. Deploy a TFTP server to host the configuration script.

Step 4. Deploy one or more servers to host the software images and configuration files.

After you configure the network environment for POAP setup, follow these steps to configure the switch using POAP:

Step 1. Install the switch in the network.

Step 2. Power on the switch.

Step 3. (Optional) If you want to exit POAP mode and enter the normal interactive setup script, enter y (yes).

To verify the configuration after bootstrapping the device using POAP, use one of the following commands:

Image show running-config: Displays the running configuration

Image show startup-config: Displays the startup configuration

Licensing Cisco MDS 9000 Family NX-OS Software Features

Licenses are available for all switches in the Cisco MDS 9000 family. Licensing allows you to access specified premium features on the switch after you install the appropriate license for that feature. You can also obtain licenses to activate ports on the Cisco MDS 9124 Fabric Switch, the Cisco MDS 9134 Fabric Switch, the Cisco Fabric Switch for HP c-Class Blade System, and the Cisco Fabric Switch for IBM BladeCenter. Any feature not included in a license package is bundled with the Cisco MDS 9000 family switches.

The licensing model defined for the Cisco MDS product line has two options:

Image Feature-based licenses: Allow features that are applicable to the entire switch. The cost varies based on a per-switch usage.

Image Module-based licenses: Allow features that require additional hardware modules. The cost varies based on a per-module usage. An example is the IPS-8 or IPS-4 module using the FCIP feature.

Some features are logically grouped into add-on packages that must be licensed separately, such as the Cisco MDS 9000 Enterprise Package, SAN Extension over IP Package, Mainframe Package, DCNM Packages, DMM Package, IOA Package, and XRC Acceleration Package. On-demand port activation licenses are also available for the Cisco MDS 9000 family blade switches and 4Gbps Cisco MDS 9100 Series multilayer fabric switches. Cisco license packages require a simple installation of an electronic license: No software installation or upgrade is required. Licenses can also be installed on the switch in the factory. Cisco MDS stores license keys on the chassis serial PROM (SPROM), so license keys are never lost even during a software reinstallation.

Cisco Data Center Network Manager for SAN includes a centralized license management console that provides a single interface for managing licenses across all Cisco MDS switches in the fabric. This single console reduces management overhead and prevents problems because of improperly maintained licensing. If an administrative error does occur with licensing, the switch provides a grace period before the unlicensed features are disabled. The grace period allows plenty of time to correct the licensing issue. All licensed features may be evaluated for up to 120 days before a license expires. Devices with dual supervisors have the following additional high-availability features:

Image The license software runs on both supervisor modules and provides failover protection.

Image The license key file is mirrored on both supervisor modules. Even if both supervisor modules fail, the license file continues to function from the version that is available on the chassis.

License Installation

You can either obtain a factory-installed license (only applies to new device orders) or perform a manual installation of the license (applies to existing devices in your network).

Image Obtaining a factory-installed license: You can obtain factory-installed licenses for a new Cisco NX-OS device. The first step is to contact the reseller or Cisco representative and request this service. The second step is to use the device and the licensed features.

Image Performing a manual installation: If you have existing devices or if you want to install the licenses on your own, you must first obtain the license key file and then install that file in the device.

Obtaining the license key file consists of the following steps:

Step 1. Obtain the serial number for your device by entering the show license host-id command. The host ID is also referred to as the device serial number.

mds9710-1# show license host-id
License hostid: VDH=JAF1710BBQH

Step 2. Obtain your software license claim certificate document. If you cannot locate your software license claim certificate, contact Cisco Technical Support at this URL: http://www.cisco.com/c/en/us/support/web/tsd-cisco-worldwide-contacts.html.

Step 3. Locate the product authorization key (PAK) from the software license claim certificate document.

Step 4. Locate the website URL from the software license claim certificate document. You can access the Product License Registration website from the Software Download website at this URL: https://software.cisco.com/download/navigator.html.

Step 5. Follow the instructions on the Product License Registration website to register the license for your device.

Step 6. Use the copy licenses command to save your license file to either the bootflash: directory or a slot0: device.

Step 7. You can use a file transfer service (tftp, ftp, sftp, or scp) or use the Cisco Fabric Manager License Wizard under Fabric Manager Tools to copy a license to the switch.
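
As a hedged example of Step 7 (the server address, username, directory, and filename are hypothetical), the license file can be copied to bootflash with a standard NX-OS copy command such as the following; the switch prompts for the password:

switch# copy scp://admin@192.168.1.100/licenses/MDS9710_enterprise.lic bootflash:MDS9710_enterprise.lic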

Installing the license key file consists of the following steps:

Step 1. Log in to the device through the console port of the active supervisor.

Step 2. You can use a file transfer service (tftp, ftp, sftp, or scp) or use the Cisco Fabric Manager License Wizard under Fabric Manager Tools to copy a license to the switch.

Step 3. Perform the installation by using the install license command on the active supervisor module from the device console.

switch# install license bootflash:license_file.lic
Installing license ..done

Step 4. (Optional) Back up the license key file.

Step 5. Exit the device console and open a new terminal session to view all license files installed on the device using the show license command, as shown in Example 23-3.

Example 23-3 Show License Output on MDS 9222i Switch


MDS-9222i# show license
MDS20090713084124110.lic:
SERVER this_host ANY
VENDOR cisco
INCREMENT ENTERPRISE_PKG cisco 1.0 permanent uncounted
        VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALL-LICENSES-
INTRL</SKU>
        HOSTID=VDH=FOX1244H0U4
        NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>1</LicLineID>
        <PAK></PAK>" SIGN=D2ABC826DB70
INCREMENT STORAGE_SERVICES_ENABLER_PKG cisco 1.0 permanent 1
        VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALL-
LICENSES-INTRL</SKU>
        HOSTID=VDH=FOX1244H0U4
        NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>2</LicLineID>
        <PAK></PAK>" SIGN=9287E36C708C
INCREMENT SAN_EXTN_OVER_IP cisco 1.0 permanent 1
        VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALL-LICENSES-
INTRL</SKU>
        HOSTID=VDH=FOX1244H0U4
        NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>3</LicLineID>
        <PAK></PAK>" SIGN=C03E97505672
<snip> ...


You can use the show license brief command to display a list of license files installed on the device (see Example 23-4).

Example 23-4 Show License Brief Output on MDS 9222i Switch


MDS-9222i# show license brief
MDS20090713084124110.lic
MDS201308120931018680.lic
MDS-9222i# show license file MDS201308120931018680.lic
MDS201308120931018680.lic:
SERVER this_host ANY
VENDOR cisco
INCREMENT SAN_EXTN_OVER_IP_18_4 cisco 1.0 permanent 2
        VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>L-M92EXT1AK9=</SKU>
        HOSTID=VDH=FOX1244H0U4
        NOTICE="<LicFileID>20130812093101868</LicFileID><LicLineID>1</LicLineID>
        <PAK></PAK>" SIGN=DD075A320298


When you enable a Cisco NX-OS software feature, it can activate a license grace period:

switch# show license usage ENTERPRISE_PKG
Application
-----------
ivr
qos_manager
-----------

You can use the show license usage command to identify all the active features.

To uninstall an installed license file, use the clear license filename command, where filename is the name of the installed license key file:

switch# clear license Enterprise.lic
Clearing license Enterprise.lic:
SERVER this_host ANY
VENDOR cisco

If the license is time bound, you must obtain and install an updated license. You need to contact technical support to request an updated license. After obtaining the license file, you can update the license file by using the update license url command, where url specifies the bootflash:, slot0:, usb1:, or usb2: location of the updated license file. You can enable the grace period feature by using the license grace-period command:

switch# configure terminal
switch(config)# license grace-period
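A time-bound license could then be refreshed with the update license command described above; the filenames in this sketch are hypothetical, with Enterprise.lic standing in for the currently installed file and Enterprise_updated.lic for its replacement:

switch# update license bootflash:Enterprise_updated.lic Enterprise.lic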

Image
Verifying the License Configuration

To display the license configuration, use one of the following commands:

Image show license: Displays information for all installed license files

Image show license brief: Displays summary information for all installed license files

Image show license file: Displays information for a specific license file

Image show license host-id: Displays the host ID for the physical device

Image show license usage: Displays the usage information for installed licenses

On-Demand Port Activation Licensing

The Cisco MDS 9250i is available in a base configuration of 20 ports of 16Gbps Fibre Channel (upgradable to 40 ports through the On-Demand Port Activation license), two ports of 10 Gigabit Ethernet for FCIP and iSCSI storage services, and eight ports of 10 Gigabit Ethernet for FCoE connectivity.

On the Cisco MDS 9148, the On-Demand Port Activation license allows the addition of eight 8Gbps ports. Customers have the option of purchasing preconfigured models of 16, 32, or 48 ports and upgrading the 16- and 32-port models onsite all the way to 48 ports by adding these licenses.

The Cisco MDS 9148S 16G Multilayer Fabric Switch is a compact one rack-unit (1RU) switch that scales from 12 to 48 line-rate 16Gbps Fibre Channel ports. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port Cisco MDS 9148S On-Demand Port Activation license to support configurations of 24, 36, or 48 enabled ports.

You can use the show license usage command (see Example 23-5) to view any licenses assigned to a switch. If a license is in use, the status displayed is In use. If a license is installed but no ports have acquired a license, the status displayed is Unused. The PORT_ACTIVATION_PKG does not appear as installed if you have only the default license installed.

Example 23-5 Verifying License Usage on an MDS 9148 Switch


MDS-9148# show license usage
Feature                                             Ins  Lic   Status Expiry Date Comments
                                                         Count
----------------------------------------------------------------------------
FM_SERVER_PKG                                        No    -  Unused              -
ENTERPRISE_PKG                                       No    -  Unused              Grace expired
PORT_ACTIVATION_PKG                                  No   16  In use never         -


Example 23-6 shows the default port license configuration for the Cisco MDS 9148 switch.

Example 23-6 Verifying Port-License Usage on an MDS 9148 Switch


bdc-mds9148-2# show port-license
Available port activation licenses are 0
-----------------------------------------------
  Interface   Cookie    Port Activation License
-----------------------------------------------
  fc1/1       16777216        acquired
  fc1/2       16781312        acquired
  fc1/3       16785408        acquired
  fc1/4       16789504        acquired
  fc1/5       16793600        acquired
  fc1/6       16797696        acquired
  fc1/7       16801792        acquired
  fc1/8       16805888        acquired
  fc1/9       16809984        acquired
  fc1/10      16814080        acquired
  fc1/11      16818176        acquired
  fc1/12      16822272        acquired
  fc1/13      16826368        acquired
  fc1/14      16830464        acquired
  fc1/15      16834560        acquired
  fc1/16      16838656        acquired
  fc1/17      16842752        eligible
  <snip>
  fc1/46      16961536        eligible
  fc1/47      16965632        eligible
  fc1/48      16969728        eligible


The three different statuses for port licenses are described in Table 23-7.

Image

Table 23-7 Port Activation License Status Definitions

Making a Port Eligible for a License

By default, all ports are eligible to receive a license. However, if a port has already been made ineligible and you prefer to activate it, you must make that port eligible by using the port-license command. Here are the steps to follow:

Step 1. configure terminal

Step 2. interface fc slot/port

Step 3. [no] port-license

Step 4. exit

Step 5. (Optional) show port-license

Step 6. (Optional) copy running-config startup-config
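A minimal session that makes a single port eligible might look like the following; the interface number fc1/17 is only an example:

switch# configure terminal
switch(config)# interface fc1/17
switch(config-if)# port-license
switch(config-if)# end
switch# show port-license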

Acquiring a License for a Port

If you prefer not to accept the default on-demand port license assignments, you must first acquire licenses for the ports to which you want to move a license. Here are the steps to follow:

Step 1. configure terminal

Step 2. interface fc slot/port

Step 3. [no] port-license acquire

Step 4. exit

Step 5. (Optional) show port-license

Step 6. (Optional) copy running-config startup-config
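For example, assuming port fc1/17 is eligible (the interface number is illustrative), a license could be acquired for it as follows:

switch# configure terminal
switch(config)# interface fc1/17
switch(config-if)# port-license acquire
switch(config-if)# end
switch# copy running-config startup-config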

Moving Licenses Among Ports

You can move a license from a port (or range of ports) at any time. If you attempt to move a license to a port and no license is available, the switch returns the message “port activation license not available.”

Step 1. configure terminal

Step 2. interface fc slot/port

Step 3. shutdown

Step 4. no port-license

Step 5. exit

Step 6. interface fc slot/port

Step 7. shutdown

Step 8. port-license acquire

Step 9. no shutdown

Step 10. exit

Step 11. (Optional) show port-license

Step 12. (Optional) copy running-config startup-config
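Putting the steps together, a license could be moved from fc1/16 to fc1/17 as in this sketch; the interface numbers are illustrative, and both ports are shut down while the license moves:

switch# configure terminal
switch(config)# interface fc1/16
switch(config-if)# shutdown
switch(config-if)# no port-license
switch(config-if)# exit
switch(config)# interface fc1/17
switch(config-if)# shutdown
switch(config-if)# port-license acquire
switch(config-if)# no shutdown
switch(config-if)# end
switch# show port-license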

Cisco MDS 9000 NX-OS Software Upgrade and Downgrade

Each switch is shipped with the Cisco MDS NX-OS operating system for Cisco MDS 9000 family switches. The Cisco MDS NX-OS software consists of two images: the kickstart image and the system image. The images and variables are important factors in any install procedure. You must specify the variable and the respective image to upgrade or downgrade your switch. Not every install requires both images.

Image To select the kickstart image, use the KICKSTART variable.

Image To select the system image, use the SYSTEM variable.

The software image install procedure is dependent on the following factors:

Image Software images: The kickstart and system image files reside in directories or folders that can be accessed from the Cisco MDS 9000 family switch prompt.

Image Image version: Each image file has a version.

Image Flash disks on the switch: The bootflash: directory resides on the supervisor module and the Compact Flash disk is inserted into the slot0: device.

Image Supervisor modules: There are single and dual supervisor modules. On switches with dual supervisor modules, both supervisor modules must have Ethernet connections on the management interfaces (mgmt 0) to maintain connectivity when switchovers occur during upgrades and downgrades.

Before starting any software upgrade or downgrade, the user should review NX-OS Release Notes. In the Release Notes, there are specific sections that explain compatibility between different software versions and hardware modules. In some cases, the user may need to take a specific path to perform nondisruptive software upgrades or downgrades.


Note

What is a nondisruptive NX-OS upgrade or downgrade? Nondisruptive upgrades on the Cisco MDS fabric switches take down the control plane for not more than 80 seconds. In some cases, when the upgrade has progressed past the point at which it cannot be stopped gracefully, or if a failure occurs, the software upgrade may be disruptive. During the upgrade, the control plane is down, but the data plane remains up. So new devices will be unable to log in to the fabric via the control plane, but existing devices will not experience any disruption of traffic via the data plane.


If the running image and the image you want to install are incompatible, the software reports the incompatibility. In some cases, you may decide to proceed with this installation. If the active and the standby supervisor modules run different versions of the image, both images may be high availability (HA) compatible in some cases and incompatible in others.

Compatibility is established based on the image and configuration:

Image Image incompatibility: The running image and the image to be installed are not compatible.

Image Configuration incompatibility: There is a possible incompatibility if certain features in the running image are turned off, because they are not supported in the image to be installed. The image to be installed is considered incompatible with the running image if one of the following statements is true:

Image An incompatible feature is enabled in the image to be installed, is not available in the running image, and may cause the switch to move into an inconsistent state. In this case, the incompatibility is strict.

Image An incompatible feature is enabled in the image to be installed, is not available in the running image, and does not cause the switch to move into an inconsistent state. In this case, the incompatibility is loose.

To view the results of a dynamic compatibility check, issue the show incompatibility system bootflash:filename command. Use this command to obtain further information when the install all command returns the following message:

Warning: The startup config contains commands not supported by
the standby supervisor; as a result, some resources might become
unavailable after a switchover.
Do you wish to continue? (y/n) [y]: n

You can upgrade any switch in the Cisco MDS 9000 family using one of the following methods:

Image Automated, one-step upgrades using the “install all” command: This upgrade is nondisruptive for director switches. The install all command compares and presents the results of the compatibility before proceeding with the installation. You can exit if you do not want to proceed with these changes.

Image Quick, one-step upgrade using the “reload” command: This upgrade is disruptive. Before running the reload command, copy the correct kickstart and system images to the correct location and change the boot commands in your configuration.
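For the reload method, the boot variables would be pointed at the new images before reloading. This sketch reuses the 6.2.9 image filenames from the upgrade example later in this section purely for illustration:

switch# configure terminal
switch(config)# boot kickstart bootflash:m9500-sf2ek9-kickstart-mz.6.2.9.bin
switch(config)# boot system bootflash:m9500-sf2ek9-mz.6.2.9.bin
switch(config)# exit
switch# copy running-config startup-config
switch# reload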

When the Cisco MDS 9000 Series multilayer switch is first switched on or during reboot, the system BIOS on the supervisor module first runs power-on self-test (POST) diagnostics. The BIOS then runs the loader bootstrap function.

The boot parameters are held in NVRAM and point to the location and name of both the kickstart and system images. The loader obtains the location of the kickstart file, usually on bootflash, and verifies the kickstart image before loading it.

The kickstart loads the Linux kernel and device drivers and then needs to load the system image. Again, the boot parameters in NVRAM should point to the location and name of the system image, usually on bootflash. The kickstart then verifies and loads the system image.

Finally, the system image loads the Cisco NX-OS Software, checks the file systems, and proceeds to load the startup configuration that contains the switch configuration from NVRAM.

If the boot parameters are missing or have an incorrect name or location, the boot process fails at the last stage. If this error happens, the administrator must recover from the error and reload the switch. The install all command launches a script that greatly simplifies the boot procedure and checks for errors and the upgrade impact before proceeding. Figure 23-31 portrays the boot sequence.

Image
Image

Figure 23-31 Boot Sequence

For the MDS Director switches with dual supervisor modules, the nondisruptive upgrade sequence is as follows:

1. Upgrade the BIOS on the active and standby supervisor modules and the data modules.

2. Bring up the standby supervisor module with the new kickstart and system images.

3. Switch over from the active supervisor module to the upgraded supervisor module.

4. Bring up the old active supervisor module with the new kickstart and system images.

5. Perform a nondisruptive image upgrade for each data module (one at a time).

6. Upgrade complete.

Upgrading to Cisco NX-OS on an MDS 9000 Series Switch

During any firmware upgrade, use the console connection. Be aware that if you are upgrading through the management interface, you must have a working connection to both supervisors, because this process causes a switchover and the current standby supervisor will be active after the upgrade. In this section, we discuss firmware upgrade steps for a director switch with dual supervisor modules. For MDS switches with only one supervisor module, steps 6 and 7 will not be executed.

To upgrade the switch, use the latest Cisco MDS NX-OS Software on the Cisco MDS 9000 Director Series switch and then follow these steps:

Step 1. Verify the following physical connections for the new Cisco MDS 9500 family switch:

Image The console port is physically connected to a computer terminal (or terminal server).

Image The management 10/100 Ethernet port (mgmt0) is connected to an external hub, switch, or router.

Step 2. Issue the copy running-config startup-config command to store your current running configuration. You can also create a backup of your existing configuration to a file by issuing the copy running-config bootflash:backup_config.txt command.
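The two commands from this step would be entered as follows; the backup filename is arbitrary:

switch# copy running-config startup-config
switch# copy running-config bootflash:backup_config.txt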

Step 3. Install licenses (if necessary) to ensure that the required features are available on the switch.

Step 4. Ensure that the required space is available in the bootflash: directory for the image file(s) to be copied using the dir bootflash: command. Use the delete bootflash: filename command to remove unnecessary files.

Step 5. If you need more space on the active supervisor module bootflash, delete unnecessary files to make space available.

switch# del m9500-sf2ek9-kickstart-mz.6.2.6.27.bin
switch# del m9500-sf2ek9-mz-npe.6.2.5.bin

Step 6. Verify that space is available on the standby supervisor module bootflash on a Cisco MDS 9500 Series switch.

switch# attach mod x (where x is the module number of the standby
supervisor)
switch(standby)# dir bootflash:
12288 Aug 26 19:06:14 2011 lost+found/
16206848 Jul 01 10:54:49 2011 m9500-sf2ek9-kickstart-mz.6.2.5.bin
16604160 Jul 01 10:20:07 2011 m9500-sf2ek9-kickstart-mz.6.2.5c.bin
Usage for bootflash://sup-local
122811392 bytes used
61748224 bytes free
184559616 bytes total
switch(standby)# exit (to return to the active supervisor)

Step 7. If you need more space on the standby supervisor module bootflash on a Cisco MDS 9500 Series switch, delete unnecessary files to make space available.

switch(standby)# del bootflash:m9500-sf2ek9-kickstart-mz.6.2.5.bin
switch(standby)# del bootflash:m9500-sf2ek9-mz.6.2.5.bin

Step 8. Access the Software Download Center and select the required Cisco MDS NX-OS Release 6.2(x) image file, depending on which one you are installing.

Step 9. Download the files to an FTP or TFTP server.

Step 10. Copy the Cisco MDS NX-OS kickstart and system images to the active supervisor module bootflash using FTP or TFTP.

switch# copy tftp://tftpserver.cisco.com/MDS/m9500-sf2ek9-kickstart-
mz.6.2.x.bin bootflash:m9500-sf2ek9-kickstart-mz.6.2.x.bin
switch# copy tftp://tftpserver.cisco.com/MDS/m9500-sf2ek9-mz.6.2.x.bin
bootflash:m9500-sf2ek9-mz.6.2.x.bin

Step 11. Verify that the switch is running the required software version by issuing the show version command.

Step 12. Verify that your switch is running compatible hardware by checking the Release Notes.

Step 13. Perform the upgrade by issuing the install all command.

switch# install all kickstart m9500-sf2ek9-kickstart-mz.6.2.9.bin
system m9500-sf2ek9-mz.6.2.9.bin ssi m9000-ek9-ssi-mz.6.2.9.bin

The install all process verifies all the images before installation and detects incompatibilities. The process also checks configuration compatibility. After reporting the impact of the upgrade, the script asks whether you want to continue:

Do you want to continue with the installation (y/n)? [n] y

If the input is entered as y, the install will continue.

You can display the status of a nondisruptive upgrade by using the show install all status command. The output displays the status only after the switch has rebooted with the new image. All actions preceding the reboot are not captured in this output because when you enter the install all command using a Telnet session, the session is disconnected when the switch reboots. When you can reconnect to the switch through a Telnet session, the upgrade might already be complete, in which case the output will display the status of the upgrade.

Downgrading Cisco NX-OS Release on an MDS 9500 Series Switch

During any firmware downgrade, use the console connection. Be aware that if you are downgrading through the management interface, you must have a working connection to both supervisors, because this process causes a switchover and the current standby supervisor will be active after the downgrade. In this section, we discuss firmware downgrade steps for an MDS director switch with dual supervisor modules. For MDS switches with only one supervisor module, the steps that involve the standby supervisor module do not apply.

You should first read the Release Notes and follow the steps for specific NX-OS downgrade instructions. Use the install all command to downgrade the switch and handle configuration conversions. When downgrading any switch in the Cisco MDS 9000 family, avoid using the reload command. Here we outline the major steps for the downgrade:

Step 1. Verify that the system image files for the downgrade are present on the active supervisor module bootflash with the dir bootflash: command.

Step 2. If the software image file is not present, download it from an FTP or TFTP server to the active supervisor module bootflash. If you need more space on the active supervisor module bootflash: directory, use the delete command to remove unnecessary files and ensure that the required space is available on the active supervisor. (Refer to step 5 in the upgrade section.) If you need more space on the standby supervisor module bootflash: directory, delete unnecessary files to make space available. (Refer to steps 6 and 7 in the upgrade section.)

switch# copy tftp://tftpserver.cisco.com/MDS/m9700-sf3ek9-kickstart-mz.6.2.5.bin
bootflash:m9700-sf3ek9-kickstart-mz.6.2.5.bin
switch# copy tftp://tftpserver.cisco.com/MDS/m9700-sf3ek9-mz.6.2.5.bin
bootflash:m9700-sf3ek9-mz.6.2.5.bin

Step 3. Ensure that the required space is available on the active supervisor.

switch# dir bootflash:

Step 4. Issue the show incompatibility system command (see Example 23-7) to determine if you need to disable any features not supported by the earlier release.

Example 23-7 Show Incompatibility System Output on MDS 9500 Switch


switch# show incompatibility system bootflash:m9500-sf2ek9-mz.5.2.1.bin
The following configurations on active are incompatible with the system image
1) Service : port-channel , Capability : CAP_FEATURE_AUTO_CREATED_41_PORT_CHANNEL
Description : auto create enabled ports or auto created port-channels are present
Capability requirement : STRICT
Disable command :
1.Disable autocreate on interfaces (no channel-group auto).
2.Convert autocreated port channels to be persistent (port-channel 1 persistent)
...


Step 5. Disable any features that are incompatible with the downgrade system image.

switch# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
switch(config)# interface fcip 31
switch(config-if)# no channel-group auto
switch(config-if)# end
switch# port-channel 127 persistent
switch#

Step 6. Save the configuration using the copy running-config startup-config command.

switch# copy running-config startup-config

Step 7. Issue the install all command to downgrade the software.

switch# install all kickstart bootflash:m9500-sf2ek9-kickstart-
mz.5.2.1.bin.S74 system bootflash:m9500-sf2ek9-mz.5.2.1.bin.S74

Step 8. Verify the status of the modules on the switch using the show module command.

Configuring Interfaces

The main function of a switch is to relay frames from one data link to another. To relay the frames, the characteristics of the interfaces through which the frames are received and sent must be defined. The configured interfaces can be Fibre Channel interfaces, Gigabit Ethernet interfaces, the management interface (mgmt0), or VSAN interfaces. Each physical Fibre Channel interface in a switch may operate in one of several port modes: E port, F port, FL port, TL port, TE port, SD port, ST port, and B port (see Figure 23-32). Besides these modes, each interface may be configured in auto or Fx port modes. These two modes determine the port type during interface initialization. Interfaces are created in VSAN 1 by default. When a module is removed and replaced with the same type of module, the configuration is retained. If a different type of module is inserted, the original configuration is no longer retained. In Chapter 22, we discussed the different Fibre Channel port types.

Image
Image

Figure 23-32 Cisco MDS 9000 Family Switch Port Modes
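A port's operating mode can be set explicitly or left to be negotiated. This hedged snippet shows the general form of the command, using fc1/5 as an example interface and auto mode as the example setting:

switch# configure terminal
switch(config)# interface fc1/5
switch(config-if)# switchport mode auto
switch(config-if)# no shutdown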

The interface state depends on the administrative configuration of the interface and the dynamic state of the physical link. Table 23-8 summarizes the interface states.

Image
Image

Table 23-8 Interface States

Graceful Shutdown

Interfaces are shut down by default (unless you modified the initial configuration). The Cisco NX-OS software implicitly performs a graceful shutdown in response to either of the following actions for interfaces operating in E port mode:

Image You shut down an interface.

Image A Cisco NX-OS software application executes a port shutdown as part of its function.

A graceful shutdown ensures that no frames are lost when the interface is shutting down. When a shutdown is triggered either by you or the Cisco NX-OS software, the switches connected to the shutdown link coordinate with each other to ensure that all frames in the ports are safely sent through the link before shutting down. This enhancement reduces the chance of frame loss.

Port Administrative Speeds

By default, the port administrative speed for an interface is automatically calculated by the switch.

Autosensing speed is enabled on all 4Gbps and 8Gbps switching module interfaces by default. This configuration enables the interfaces to operate at speeds of 1 Gbps, 2 Gbps, or 4 Gbps on the 4Gbps switching modules, and 8 Gbps on the 8Gbps switching modules. When autosensing is enabled for an interface operating in dedicated rate mode, 4 Gbps of bandwidth is reserved, even if the port negotiates at an operating speed of 1 Gbps or 2 Gbps.
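If a fixed speed is preferred over autosensing, it can be set per interface with the switchport speed command; the interface number and the 4-Gbps value in this sketch are only examples:

switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport speed 4000
switch(config-if)# no shutdown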

Frame Encapsulation

The switchport encap eisl command applies only to SD (SPAN destination) port interfaces. This command determines the frame format for all frames transmitted by the interface in SD port mode. If the encapsulation is set to EISL, all outgoing frames are transmitted in the EISL frame format, regardless of the SPAN sources. In SD port mode, an interface functions as a SPAN destination port and monitors network traffic that passes through a Fibre Channel interface. Monitoring is performed using a standard Fibre Channel analyzer (or a similar switch probe) that is attached to the SD port.
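A brief sketch of an SD port configured to transmit EISL-encapsulated SPAN traffic follows; fc1/16 is a hypothetical interface chosen as the SPAN destination:

switch# configure terminal
switch(config)# interface fc1/16
switch(config-if)# switchport mode SD
switch(config-if)# switchport encap eisl
switch(config-if)# no shutdown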

Bit Error Thresholds

The bit error rate threshold is used by the switch to detect an increased error rate before performance degradation seriously affects traffic. The bit errors can occur for the following reasons:

Image Faulty or bad cable.

Image Faulty or bad GBIC or SFP.

Image GBIC or SFP is specified to operate at 1 Gbps but is used at 2 Gbps.

Image GBIC or SFP is specified to operate at 2 Gbps but is used at 4 Gbps.

Image Short-haul cable is used for a long haul or long-haul cable is used for a short haul.

Image Momentary sync loss.

Image Loose cable connection at one or both ends.

Image Improper GBIC or SFP connection at one or both ends.

A bit error rate threshold is detected when 15 error bursts occur in a 5-minute period. By default, the switch disables the interface when the threshold is reached. You can enter a shutdown or no shutdown command sequence to reenable the interface.
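When an interface has been disabled by the bit error rate threshold, it could be brought back with the shutdown and no shutdown sequence mentioned above; fc1/7 is just an example port:

switch# configure terminal
switch(config)# interface fc1/7
switch(config-if)# shutdown
switch(config-if)# no shutdown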

Local Switching

Local switching can be enabled in Generation 4 modules, which allows traffic to be switched directly with a local crossbar when the traffic is directed from one port to another on the same line card. Because local switching is used, an extra switching step is avoided, which decreases the latency. When using local switching, note the following guidelines:

Image All ports need to be in shared mode, which usually is the default state. To place a port in shared mode, enter the switchport rate-mode shared command.

Image E ports are not allowed in the module because they must be in dedicated mode.


Note

Local switching is not supported on the Cisco MDS 9710 switch.


Dedicated and Shared Rate Modes

Ports on Cisco MDS 9000 family line cards are placed into port groups that have a fixed amount of bandwidth per port group (see Table 23-9). The Cisco MDS 9000 family allows for the bandwidth of ports in a port group to be allocated based on the requirements of individual ports. When you’re planning port bandwidth requirements, allocation of the bandwidth within the port group is important. Ports in the port group can have bandwidth dedicated to them, or ports can share a pool of bandwidth. For ports that require high-sustained bandwidth, such as ISL ports, storage and tape array ports, and ports on high-bandwidth servers, you can have bandwidth dedicated to them in a port group by using the switchport rate-mode dedicated command. For other ports, typically servers that access shared storage-array ports (that is, storage ports that have higher fan-out ratios), you can share the bandwidth in a port group by using the switchport rate-mode shared command. When configuring the ports, be sure not to exceed the available bandwidth in a port group.

Image

Table 23-9 Bandwidth and Port Group Configurations for Fibre Channel Modules

For example, a Cisco MDS 9513 Multilayer Director with a Fabric 3 module installed, using a 48-port 8Gbps Advanced Fibre Channel module, has eight port groups of six ports each. Each port group has 32.4 Gbps of bandwidth available. You cannot configure all six ports of a port group at the 8Gbps dedicated rates because that would require 48 Gbps of bandwidth, and the port group has only 32.4 Gbps of bandwidth. You can, however, configure all six ports in shared rate mode, so that the ports run at 8 Gbps and are oversubscribed at a rate of 1.48:1 (6 ports × 8 Gbps = 48 Gbps/32.4 Gbps). This oversubscription rate is well below the oversubscription rate of the typical storage array port (fan-out ratio) and does not affect performance. Most major disk subsystem vendors provide guidelines as to the recommended fan-out ratio of subsystem client-side ports to server connections. These recommendations are often in the range of 7:1 to 15:1. In Chapter 22, we discussed the fan-out ratio. You can also mix dedicated and shared rate ports in a port group.
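As a sketch of how the two rate modes might be assigned within one port group (the interface numbers are illustrative), an ISL-facing port could be given dedicated bandwidth while a server-facing port shares the pool:

switch# configure terminal
switch(config)# interface fc2/1
switch(config-if)# switchport rate-mode dedicated
switch(config-if)# exit
switch(config)# interface fc2/2
switch(config-if)# switchport rate-mode shared
switch(config-if)# end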
