DS8910F hardware configuration
This chapter describes the hardware components and modules of the IBM DS8910F. It also provides insights into the architecture and individual components, and a comparison between the DS8910F Model 993 and the previous DS8882F Model 983.
This chapter includes the following topics:
2.1 DS8910F machine types
Several machine type options are available for the DS8910F. Table 2-1 lists the available hardware machine types and their corresponding function authorization machine types.
Table 2-1 Available hardware and function-authorization machine types
| Hardware machine type | Available hardware models | Corresponding function authorization machine type | Available function authorization models |
|---|---|---|---|
| 5331 (1-year warranty period) | 993 | 9046 (1-year warranty period) | LF8 |
| 5332 (2-year warranty period) | 993 | 9047 (2-year warranty period) | LF8 |
| 5333 (3-year warranty period) | 993 | 9048 (3-year warranty period) | LF8 |
| 5334 (4-year warranty period) | 993 | 9049 (4-year warranty period) | LF8 |
The machine types for the DS8910F specify the service warranty period. The warranty is used for service entitlement checking when notifications for service are called home. The DS8910F Model 993 reports 2107 as the machine type to attached host systems.
2.2 DS8910F hardware components
The DS8910F is an entry-level, high-performance storage system that includes only High-Performance Flash Enclosures Gen2. The DS8910F hardware components are consistent with the rest of the DS8900 all-flash family.
The modular system contains processor nodes, an I/O Enclosure, High-Performance Flash Enclosures Gen2, and a Management Enclosure (which includes the HMCs, Ethernet Switches, and RPCs).
The DS8910F storage system features 8-core processors and supports one High-Performance Flash Enclosure Gen2 pair with the model ZR1 or LR1 installation, or up to two High-Performance Flash Enclosure Gen2 pairs with the standard 19-inch wide rack installation, with up to 96 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives. It supports up to 512 GB of system memory, four zHyperLink adapters, and up to 32 host adapter ports.
The DS8910F consists of six modules with the model ZR1 or LR1 installation and six or eight modules with a standard conforming 19-inch rack. For more information about installing into a conforming 19-inch rack, see Chapter 3, “DS8910F installation and integration” on page 27.
The DS8910F includes the following components:
Two 2U IBM POWER® processor nodes (CECs)
Up to two High-Performance Flash Enclosure (HPFE) Gen2 pairs
One 5U I/O enclosure pair
One 2U management enclosure
Optional 1U Display (feature code 1765)
Figure 2-1 shows the eight DS8910F modules that make up the 19U of contiguous space, and the order in which they must be installed. An optional 1U display can be added, which brings the total to 20U of contiguous space.
Figure 2-1 DS8910F Eight 2U Modules
2.2.1 High-Performance Flash Enclosure Gen2 pair
The top four modules that make up the DS8910F are the HPFE Gen2 pairs. The HPFE Gen2 is a 2U flash enclosure that is installed in pairs. The DS8910F supports one HPFE Gen2 pair when it is installed into an IBM Z model ZR1 or IBM LinuxONE Rockhopper II model LR1. In that installation, the DS8910F occupies 15U of contiguous reserved space.
The HPFE Gen2 pair contains the following hardware components:
Two 2U 24-slot serial-attached SCSI (SAS) flash drive enclosures. Each of the two enclosures contains the following components:
 – Two power supplies with integrated cooling fans
 – Two SAS expander modules with two x4 SAS ports each
 – One midplane or backplane for plugging components that accommodate the flash drives, SAS expander modules, and power supplies
 – A total of 24 2.5-inch flash drives (or drive fillers)
The two 2U HPFE Gen2 modules are positioned as the top two modules in the DS8910F.
For more information about the High-Performance Flash Enclosures Gen2, see DS8000 High-Performance Flash Enclosure Gen2, REDP-5422.
Figure 2-2 shows views of HPFE Gen2 front (top) and rear (bottom).
Figure 2-2 HPFE Gen2 front (top) and rear (bottom)
DS8910F flash drives
The DS8910F provides a choice of the following drives with the HPFE Gen2:
2.5-inch High-Performance Flash Tier 0 drives:
 – 800 GB
 – 1.6 TB
 – 3.2 TB
2.5-inch High Capacity Flash Tier 1 drives: 3.84 TB
2.5-inch High Capacity Flash Tier 2 drives:
 – 1.92 TB
 – 7.68 TB
 – 15.36 TB
 
Note: Intermix of High-Performance Flash Tier 0 drives with High Capacity Flash Tier 1 and Flash Tier 2 drives is not supported in an HPFE Gen2 pair. All flash drives in DS8910F are Full Drive Encryption (FDE) capable.
Flash drives are ordered in drive sets of 16. The HPFE Gen2 pair can contain 16, 32, or 48 flash drives (1, 2, or 3 drive sets). All flash drives in an HPFE Gen2 pair must be the same type. Half of each drive set is installed in each enclosure of the pair.
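The drive-set rules above can be sketched as a small check. This is an illustrative sketch only; the function name and structure are not from the product documentation.

```python
# Sketch of the HPFE Gen2 drive-set rules: drive sets of 16, one to three
# sets per enclosure pair, and half of each set in each enclosure of the
# pair. Names here are illustrative, not product terminology.

DRIVES_PER_SET = 16

def drives_per_enclosure(drive_sets: int) -> int:
    """Return the number of flash drives in each enclosure of an HPFE Gen2 pair."""
    if drive_sets not in (1, 2, 3):
        raise ValueError("An HPFE Gen2 pair holds 1, 2, or 3 drive sets")
    # Half of every 16-drive set goes into each enclosure of the pair.
    return drive_sets * DRIVES_PER_SET // 2

print(drives_per_enclosure(3))  # 24 drives fill each 24-slot enclosure
```

With three drive sets, each 24-slot enclosure in the pair is fully populated, which matches the 48-drive maximum per HPFE Gen2 pair.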
Figure 2-3 shows the HPFE Gen2 flash drive installation order.
Figure 2-3 Flash drive set installation order
Table 2-2 lists the DS8910F feature codes for flash drive sets for HPFE Gen2.
Table 2-2 DS8910F feature codes for flash-drive sets for HPFE Gen2
| Feature code | Drive size | Drive type | RAID support |
|---|---|---|---|
| 1611 | 800 GB | Flash Tier 0 | 5, 6, and 10 |
| 1612 | 1.6 TB | Flash Tier 0 | 6 and 10 (notes 1, 2) |
| 1613 | 3.2 TB | Flash Tier 0 | 6 and 10 (notes 1, 2) |
| 1622 | 1.92 TB | Flash Tier 2 | 6 and 10 (note 2) |
| 1623 | 3.84 TB | Flash Tier 1 | 6 and 10 (notes 1, 2) |
| 1624 | 7.68 TB | Flash Tier 2 | 6 (notes 1, 2) |
| 1625 | 15.36 TB | Flash Tier 2 | 6 (notes 1, 2) |

Notes:
1. RAID 5 is not supported for drives larger than 1 TB, and requires a request for price quote (RPQ).
2. RAID 6 is the default and preferred RAID type for all drives larger than 1 TB, and it is the only supported RAID type for the 7.68 TB and 15.36 TB drives.
3. Within a High-Performance Flash Enclosure Gen2 pair, no intermix of High-Performance Flash (Tier 0) with High Capacity Flash (Tier 1 and Tier 2) drives is supported.
Storage enclosure fillers
Storage enclosure fillers fill empty drive slots in the storage enclosures.
The fillers ensure sufficient airflow across populated storage. For HPFE Gen2, one filler feature provides a set of 16 fillers (feature code 1699).
RAID capacities for DS8910F
Use the following information to calculate the physical and effective capacity for the HPFE Gen2.
The default and preferred RAID type for all drives larger than 1 TB is RAID 6, and it is the only RAID type that is supported for 7.68 TB and 15.36 TB drives. RAID 5 is not supported for drives larger than 1 TB, and requires a request for price quote (RPQ).
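The RAID support rules above can be summarized in a short sketch. The function name and return values are assumptions for this example only, not part of the product interface.

```python
# Illustrative sketch of the DS8910F RAID support rules described above:
# RAID 6 only for 7.68 TB and 15.36 TB drives; no RAID 5 above 1 TB
# (without an RPQ); all three RAID types for 800 GB drives.

def supported_raid_types(drive_tb: float) -> list[str]:
    """Return the RAID types orderable for a given flash drive size (in TB)."""
    if drive_tb >= 7.68:
        return ["RAID 6"]                   # only RAID 6 for the largest drives
    if drive_tb > 1.0:
        return ["RAID 6", "RAID 10"]        # RAID 5 requires an RPQ above 1 TB
    return ["RAID 5", "RAID 6", "RAID 10"]  # 800 GB drives support all three

print(supported_raid_types(3.84))
```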
Table 2-3 lists the DS8910F effective RAID capacities.
Table 2-3 RAID capacities for HPFE Gen2
Effective capacity of one rank is shown in number of extents.

| Drive size | Physical capacity per drive set | Rank type | RAID 10 (3+3) | RAID 10 (4+4) | RAID 5 (6+P) | RAID 5 (7+P) | RAID 6 (5+P+Q) | RAID 6 (6+P+Q) |
|---|---|---|---|---|---|---|---|---|
| 800 GB | 12.8 TB | FB Lg Ext | 2133 | 2855 | 4300 | 5023 | 3578 | 2132 |
| | | FB Sm Ext | 136542 | 182781 | 275254 | 321475 | 229015 | 136495 |
| | | CKD Lg Ext | 2392 | 3203 | 4823 | 5633 | 4013 | 2392 |
| | | CKD Sm Ext | 126821 | 169768 | 255651 | 298601 | 212705 | 126787 |
| 1.6 TB | 25.6 TB | FB Lg Ext | 4301 | 5746 | n/a | n/a | 7197 | 8636 |
| | | FB Sm Ext | 275284 | 367771 | n/a | n/a | 460243 | 552727 |
| | | CKD Lg Ext | 4824 | 6445 | n/a | n/a | 8065 | 9686 |
| | | CKD Sm Ext | 255684 | 341586 | n/a | n/a | 427475 | 513372 |
| 1.92 TB | 15.4 TB | FB Lg Ext | 5168 | 6902 | n/a | n/a | 8636 | 10370 |
| | | FB Sm Ext | 330783 | 441769 | n/a | n/a | 552748 | 663727 |
| | | CKD Lg Ext | 5796 | 7741 | n/a | n/a | 9686 | 11631 |
| | | CKD Sm Ext | 307231 | 410315 | n/a | n/a | 513392 | 616474 |
| 3.2 TB | 51.2 TB | FB Lg Ext | 8637 | 11527 | n/a | n/a | 14417 | 17307 |
| | | FB Sm Ext | 552771 | 737753 | n/a | n/a | 922733 | 1107703 |
| | | CKD Lg Ext | 9687 | 12928 | n/a | n/a | 16170 | 19412 |
| | | CKD Sm Ext | 513414 | 685225 | n/a | n/a | 857029 | 1028843 |
| 3.84 TB | 61.4 TB | FB Lg Ext | 10371 | 13839 | n/a | n/a | 17308 | 20776 |
| | | FB Sm Ext | 663766 | 885747 | n/a | n/a | 1107725 | 1329703 |
| | | CKD Lg Ext | 11632 | 15522 | n/a | n/a | 19412 | 23302 |
| | | CKD Sm Ext | 616506 | 822682 | n/a | n/a | 1028848 | 1235028 |
| 7.68 TB | 123 TB | FB Lg Ext | n/a | n/a | n/a | n/a | 34650 | 41587 |
| | | FB Sm Ext | n/a | n/a | n/a | n/a | 2217663 | 2661631 |
| | | CKD Lg Ext | n/a | n/a | n/a | n/a | 38863 | 46643 |
| | | CKD Sm Ext | n/a | n/a | n/a | n/a | 2059760 | 2472118 |
| 15.36 TB | 246 TB | FB Lg Ext | n/a | n/a | n/a | n/a | 68980 | 82782 |
| | | FB Sm Ext | n/a | n/a | n/a | n/a | 4414735 | 5298103 |
| | | CKD Lg Ext | n/a | n/a | n/a | n/a | 77365 | 92846 |
| | | CKD Sm Ext | n/a | n/a | n/a | n/a | 4100392 | 4920882 |
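An extent count from Table 2-3 can be converted into usable capacity. The sketch below assumes the usual DS8000 fixed-block extent sizes of 1 GiB (large) and 16 MiB (small); treat it as a sanity check against the table rather than an official calculation.

```python
# Convert a fixed-block (FB) extent count from Table 2-3 into decimal TB,
# assuming 1 GiB large extents and 16 MiB small extents (standard DS8000
# FB extent sizes; this is a sketch, not an official sizing tool).

FB_EXTENT_BYTES = {"large": 2**30, "small": 16 * 2**20}

def fb_capacity_tb(extents: int, extent_size: str) -> float:
    """Effective rank capacity in decimal TB for an FB extent count."""
    return extents * FB_EXTENT_BYTES[extent_size] / 1e12

# 800 GB drives, RAID 10 (3+3): 2133 large extents or 136542 small extents
print(round(fb_capacity_tb(2133, "large"), 2))    # about 2.29 TB
print(round(fb_capacity_tb(136542, "small"), 2))  # about the same capacity
```

Both extent counts describe the same rank, so the two conversions agree to within a fraction of a percent, which is a useful cross-check when reading the table.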
2.2.2 DS8910F 5U I/O enclosure
The DS8910F I/O enclosure holds the I/O adapters and provides connectivity between the I/O adapters and the processor nodes. The DS8910F has one 5U I/O enclosure pair, with logical names I/O bay 02 and I/O bay 03. I/O adapters are installed in pairs in the two I/O bays for redundancy.
The I/O adapters in the I/O enclosures can be flash RAID adapters or host adapters. Each I/O enclosure supports two flash RAID adapter pairs and four host adapter pairs (32 ports). Host adapters are 4-port 16 Gbps or 4-port 32 Gbps Fibre Channel (GFC) adapters.
The DS8910F I/O enclosure supports one flash device adapter pair at the minimum configuration and two flash device adapter pairs at the maximum configuration. The I/O enclosure configuration contains the following components:
Up to four pairs of 4-port 16 Gbps or 4-port 32 GFC host adapters
Power control network (PCN) adapter
Redundant power supplies (PSUs)
Redundant fans for enclosure cooling
Up to two pairs of flash device adapter
Four zHyperLink adapters
Figure 2-4 on page 12 shows the DS8910F I/O enclosure maximum configuration with a flash RAID adapter pair and four host adapter pairs.
Figure 2-4 DS8910F I/O enclosure configuration
PCIe connectivity is from the 4-port PCIe adapters in the processor nodes to the base PCIe I/O expander. A separate PCIe connection to each base is used, which provides redundant access to each I/O bay and shared access to the flash device and host adapters. Failover occurs during code load, or during node failure and service actions.
The minimum configuration supports one flash device adapter pair (one adapter in each I/O bay) and up to four host adapter pairs (four adapters in each I/O bay).
 
Note: For continued availability during a logical I/O enclosure or a host adapter failure, ensure that host connectivity has a redundant path to a different host adapter in the other logical I/O enclosure.
Figure 2-5 shows the PCIe connectivity from the processor nodes to the DS8910F 5U I/O enclosures. Two connections are available: one from each processor node to each I/O enclosure.
Figure 2-5 DS8910F PCIe connectivity to 5U I/O Enclosure
zHyperLink connections
Up to four zHyperLink connections with IBM Z hosts can be used to provide low latency for random reads and writes. Each zHyperLink connection requires a zHyperLink I/O adapter to connect the zHyperLink cable to the storage system. Each zHyperLink I/O adapter (Feature Code 3500) has one port, but you must order them in sets of two. Table 2-4 lists the feature codes for the available zHyperLink cables.
Table 2-4 Feature codes for zHyperLink cables
| Feature code | Cable type | Cable length | zHyperLink I/O adapter feature |
|---|---|---|---|
| 1450 | OM4 50/125 micrometer, multimode, MTP connectors | 40 m (131 ft) | 3500 |
| 1451 | OM4 50/125 micrometer, multimode, MTP connectors | 150 m (492 ft) | 3500 |
| 1452 | OM4 50/125 micrometer, multimode, MTP connectors; for Model 993 installed in IBM Z model ZR1/LR1 | 3 m (9.8 ft) | 3500 |
Figure 2-6 shows the zHyperLink adapter locations for connecting to an IBM Z host system. When configuring host connections, T3 ports are connected first, then T4 ports.
Figure 2-6 DS8910F zHyperLink connections
Fibre Channel (SCSI-FCP and FICON) host adapters and cables
The DS8910F Fibre Channel host adapters enable attachment to Fibre Channel (SCSI-FCP) and FICON servers, and SAN fabric components. They can also be used for remote mirror and copy control paths between DS8000 series storage systems.
The DS8910F host adapters are 4-port 16 Gbps or 4-port 32 GFC, similar to the adapters in other DS8900 models. The DS8910F host adapters can be longwave or shortwave.
Supported protocols include the following types:
SCSI-FCP upper layer protocol (ULP) on point-to-point and fabric
FICON ULP on point-to-point and fabric topologies
 
Note: The 16 Gbps or 32 GFC (EDiF) Encryption capable host adapters do not support arbitrated loop topology at any speed.
Fibre Channel port identification
The DS8910F host adapters are installed as pairs in the two I/O enclosures. Up to four 4-port 16 Gbps or 4-port 32 GFC pairs can be installed in the DS8910F. The host adapter plug order is shown in Figure 2-7.
Figure 2-7 DS8910F Host adapter plug order
The following installation order is used:
1. All 32 GFC host adapters.
2. The 16 Gbps host adapters.
3. The Long Reach host adapters.
4. The Short Reach host adapters.
 
The DS8910F Fibre Channel ports can be identified by the physical host adapter port location code and the Fibre Channel port ID. Figure 2-8 shows the fibre port IDs for host adapters that are installed in I/O enclosure 1B3.
Figure 2-8 Fibre port IDs for the host adapters in I/O enclosure 1B3
Figure 2-9 shows the Fibre Channel port IDs that are assigned to the host adapters that are installed in I/O enclosure 1B4. Slot numbers that are shown are logical; therefore, slot 0 is physical slot 1 in 1B4.
Figure 2-9 Fibre Channel port IDs for the host adapters in I/O enclosure 1B4
Figure 2-10 on page 16 shows the Fibre Channel ports that are displayed from the Storage Manager GUI. The FC port logical ID, frame number, I/O enclosure, and host adapter slot are shown with other FC port properties. For example, looking at port ID I0232 and referencing Figure 2-8, the port is in I/O enclosure 1B3 slot 4 port 2.
Figure 2-10 Fibre Channel port IDs from Storage Manager GUI
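The worked example above (port ID I0232 resolving to I/O enclosure 1B3, slot 4, port 2) suggests a simple decoding of the port ID digits. The sketch below assumes an I-enclosure-slot-port layout with logical (0-based) slot numbers; the function name and field names are illustrative only.

```python
# Sketch of decoding a DS8000-style Fibre Channel port ID, assuming the
# I<enclosure><slot><port> layout implied by the example above:
# I0232 -> logical I/O enclosure 02, logical slot 3 (physical slot 4), port 2.
# This layout is an assumption inferred from the text, not an official spec.

def decode_fc_port_id(port_id: str) -> dict:
    """Split a port ID such as 'I0232' into its components."""
    assert port_id.startswith("I") and len(port_id) == 5
    logical_slot = int(port_id[3])
    return {
        "enclosure": int(port_id[1:3]),       # logical I/O enclosure number
        "logical_slot": logical_slot,         # logical slot (0-based)
        "physical_slot": logical_slot + 1,    # slot 0 is physical slot 1
        "port": int(port_id[4]),
    }

print(decode_fc_port_id("I0232"))
```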
Fibre Channel cables
A Fibre Channel cable is required to attach each Fibre Channel adapter port to a server or fabric component port. The Fibre Channel cables can be 50 μm multimode (OM3 or higher grade) or 9 μm single-mode cables.
For more information about DS8910F host adapter and Fibre Channel cable features, see “Planning for host connectivity” on page 32.
Flash RAID adapters
Up to two flash RAID adapter pairs can be installed in the DS8910F to connect to two HPFE Gen2 pairs. The main processor of the adapter is a RAID engine that provides RAID and sparing management for the flash drives in the HPFE Gen2 flash enclosures.
The flash RAID adapter has four SAS ports, which provide connectivity from the RAID adapter to the HPFE Gen2 enclosures.
The flash RAID adapters are installed as a pair, one in each of the logical I/O enclosures. This installation is known as a device adapter pair (DA pair). Logical configuration should be balanced across the DA pair for load balancing and the highest throughput.
The redundant DA pair ensures continued availability if a flash RAID adapter or logical I/O enclosure fails.
2.2.3 IBM POWER9-based servers
A pair of POWER9-based servers, also known as processor nodes, are at the heart of all of the IBM DS8900F models. The DS8910F Model 993 shares the processor nodes that are used in the DS8910F model 994.
The two POWER9 servers share the load of receiving and moving data between the attached hosts and the storage arrays. However, they are also redundant, so that if either server fails, the system operations fail over to the remaining server and continue to run without any host interruption.
The DS8910F configuration uses two POWER9 servers (9009-22A servers), each with two 4-core Single Chip Modules (3.4 - 3.9 GHz) and 96 GB or 256 GB of processor memory.
The DS8910F processor node is a 2U high enclosure, and features the following configuration:
DDR4 Registered DIMM (RDIMM) slots
One storage cage with two hard disk drives
Two PCIe x16 Gen3 slots
Four PCIe x8 Gen3 slots
Two power supplies with integrated cooling
The two 2U processor nodes are positioned below the I/O enclosure and the management enclosure. Figure 2-11 shows the front view of the DS8910F Processor node.
Figure 2-11 DS8910F Processor node front view
For more information about the server hardware that is used in the DS8910F models 993 and 994, see IBM Power Systems S922, S914, and S924 Technical Overview and Introduction, REDP-5497.
Processor memory
The DS8910F configuration offers up to 512 GB of total system memory. Each processor node contains half of the total system memory. All memory that is installed in each processor node is accessible to all processors in that node. The absolute addresses that are assigned to the memory are common across all processors in the node. The set of processors is referred to as a symmetric multiprocessor (SMP) system.
The IBM POWER9 processor that is used in the DS8910F operates in simultaneous multithreading (SMT) mode, which runs multiple instruction streams in parallel. The number of simultaneous instruction streams varies according to processor and Licensed Internal Code (LIC) level. SMT mode enables the POWER9 processor to maximize the throughput of the processor cores by offering an increase in core efficiency.
DS8910F memory upgrades can be performed nondisruptively, from 96 GB to 256 GB per node.
Caching is a fundamental technique for reducing I/O latency. As with other modern caches, the DS8910F processor nodes contain volatile memory that is used as a read and write cache, and nonvolatile memory (NVDIMM) that is used to maintain and back up a second copy of the write cache.
If power is lost, the NVDIMMs are supplied “hold up” power from a Backup Power Module (BPM). BPMs retain NVDIMM data when electrical power is removed, either from an unexpected power loss or from a normal system shutdown. The 2.5-inch Smart BPM is installed in the vacant drive location D7 (which is inside the CEC cage).
Figure 2-12 DS8910F 2.5” Smart BPM location in each processor node
The NVS scales with the processor memory that is installed, which also helps to optimize performance. DS8910F NVS is 4 GB per node for 96 GB processor nodes and 16 GB per node for 256 GB processor nodes. Figure 2-13 shows the top view of a processor node with 256 GB memory.
Figure 2-13 DS8910F processor node with 2 x 4 Core CEC and 256 GB Memory
Flexible service processor
Each IBM POWER9 processor complex is managed by a service processor that is called a flexible service processor (FSP). The FSP is an embedded controller that is based on an IBM PowerPC® processor.
The FSP controls power and cooling for the processor nodes. The FSP performs predictive failure analysis for installed processor hardware, and performs recovery actions for processor or memory errors. The FSP monitors the operation of the firmware during the boot process and can monitor the operating system for loss of control and take corrective actions.
Figure 2-14 shows the rear view of the DS8910F processor node and slot C1 is the FSP.
Figure 2-14 DS8910F Server rear view
The following adapters are installed in the processor nodes, as shown in Figure 2-14:
Peripheral Component Interconnect® Express adapter
Each DS8910F processor node contains two single port PCIe3 adapters. These adapters allow point-to-point connectivity between the processor nodes and the I/O enclosure and I/O adapters. Adapters are installed in slots C6 and C12.
Ethernet connections
Each IBM POWER9 processor complex has a single 4-port 1 Gb Ethernet adapter that is installed in slot C11. The top two ports connect to the internal network switches, as described in “Ethernet switches” on page 22. The bottom two ports are available for Transparent Cloud Tiering (TCT); they are the original low-speed TCT connections.
TCT connections
An optional high-speed Ethernet adapter feature can be ordered for TCT, which provides two 10 Gbps LC connections and two RJ-45 1 Gbps connections. For the DS8910F model 993, the 10 Gbps adapter (Feature Code 3602) is installed in slot C4 of each processor node.
These ports provide two high-speed and two low-speed TCT connections as an optional, chargeable hardware feature. In summary, up to six TCT connections are available: two low-speed connections that are included, and optionally, two high-speed and two low-speed connections.
2.2.4 Management enclosure
The DS8910F management enclosure is a 2U chassis that contains the following components:
Two Hardware Management Consoles (HMCs)
Two Ethernet switches
Two power control cards (RPCs)
Two power supply units (PSUs) to power the management enclosure components
One Local/Remote switch assembly
Internal cabling for communications and power for each of the components
The DS8910F management enclosure is unique and does not exist in other DS8900 models.
Because the DS8910F system modules can be mounted in any conforming rack, the management enclosure is designed to create a compact container for all essential system management components.
Figure 2-15 shows the layout of the components of the management enclosure.
Figure 2-15 DS8910F management enclosure component layout
The management enclosure provides internal communications to all of the modules of the DS8910F system. The management enclosure also provides external connectivity by using two Ethernet cables from each HMC for remote management. It also provides keyboard/mouse and video connectivity from each HMC for local management. Cables are routed from the management consoles to the rear of the management enclosure through a cable management arm (CMA).
Figure 2-16 shows the front view of the 2U management enclosure.
Figure 2-16 DS8910F management enclosure (front view)
Figure 2-17 shows the rear view of the DS8910F management enclosure, the rear tailgate connectors, and the installed components.
Figure 2-17 DS8910F management enclosure (rear view)
Hardware Management Consoles
The management console is also referred to as the Hardware Management Console (HMC). It supports the DS8910F hardware and firmware installation and maintenance activities.
The HMC connects to the customer network and provides access to functions that can be used to manage the DS8910F. Management functions include logical configuration, problem notification, Call Home for service, remote service, and Copy Services management.
Management functions can be performed from the DS8000 Storage Management GUI, DS command-line interface (DS CLI), or other storage management software that supports the DS8910F.
Clients who use the DS8900 advanced functions, such as Metro Mirror or FlashCopy, can communicate to the storage system with Copy Services Manager (CSM).
The Management Console provides connectivity between the DS8910F and Encryption Key Manager servers, if used.
The Management Console also provides the functions for remote call-home and remote support connectivity.
To provide continuous availability of access to the management console functions, the DS8910F order must include a second management console.
Ethernet switches
The DS8910F management enclosure has two 8-port Ethernet switches. The two switches provide two redundant private management networks. Each processor node includes connections to each switch to allow each server to access both private networks. These networks cannot be accessed externally, and no external connections are allowed. External client network connection to the DS8910F system is through dedicated connections to each of the management consoles.
Figure 2-18 shows the connections at the rear of the DS8910F management enclosure.
Figure 2-18 Management enclosure connections
2.2.5 Power subsystem
Intelligent rack Power Distribution Units (iPDUs) supply power to the storage system, and Backup Power Modules (BPMs) provide power to the nonvolatile dual inline memory modules (NVDIMMs) when electrical power is removed. The rack-mounted Model 993 standard 19-inch wide rack installation (Feature Code 0939) supports an optional pair of iPDUs, as shown in Figure 2-19. iPDU E23 (green) must connect all the green enclosure power cords and connect to the gray private network. iPDU E24 (yellow) must connect all the yellow enclosure power cords and connect to the black private network. Each enclosure connects to the specified output port on the iPDUs.
 
Note: Only DS8910F Model 993 enclosures can be connected to the optional iPDU pair.
Rack power control is provided through the Ethernet-managed iPDUs; the HMC manages the system power state and monitoring.
Figure 2-19 DS8910F Model 993 customer rack integration
iPDUs provide the following benefits:
IBM Active Energy Manager (AEM) support
IBM Power Line Disturbance (PLD) compliance up to 20 milliseconds
Individual outlet monitoring and control
Firmware updates
Circuit breaker protection
The NVDIMM Backup Power Module (BPM) is a nickel-based hybrid energy storage device with high power discharge and fast charge time, as shown in Figure 2-20. BPMs retain NVDIMM data when electrical power is removed because of an unexpected power loss or normal system shutdown. This capability improves data security, reliability, and recovery time.
Figure 2-20 DS8910F NVDIMM Backup Power Module (BPM)
Power input and distribution
The DS8910F Model 993 supports single-phase and three-phase power. For ZR1/LR1 integration, the DS8910F Model 993 shares the iPDUs and monitor with the IBM Z system.
The standard 19-inch wide rack installation (Feature Code 0939) requires the following power connections:
The 15U configuration requires seven IEC C13 outlets on each of two PDUs (14 total); an optional monitor adds one C13 outlet on either PDU.
The 19U configuration requires nine IEC C13 outlets on each of two PDUs (18 total); the monitor adds one C13 outlet on either PDU.
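The outlet counts above can be expressed as a small calculation. The function name and parameters are illustrative only; the figures come from the Feature Code 0939 requirements stated above.

```python
# Sketch of the IEC C13 outlet counts for the Feature Code 0939 standard
# 19-inch rack installation, as described above: 7 outlets per PDU for the
# 15U configuration, 9 per PDU for the 19U configuration, two PDUs total,
# plus one outlet for the optional monitor. Names are illustrative.

def c13_outlets(config_u: int, with_monitor: bool = False) -> int:
    """Total IEC C13 outlets needed across the two rack PDUs."""
    per_pdu = {15: 7, 19: 9}[config_u]  # outlets required on each of two PDUs
    total = per_pdu * 2
    if with_monitor:
        total += 1                       # the monitor adds one C13 on either PDU
    return total

print(c13_outlets(19, with_monitor=True))  # 19
```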
For more information about power input requirements, see “Planning for power requirements” on page 30.
2.2.6 DS8882F to DS8910F Rack Mounted comparison
The DS8882F is the rack-mounted model 983 that was introduced with the DS8880 family. The new rack-mounted member of the DS8900 family (DS8910F Model 993) is redesigned with new IBM POWER9 processor nodes, power subsystem, capacity options, I/O enclosure, host adapters, and zHyperLink adapters. The DS8882F and DS8910F are compared in Table 2-5.
Table 2-5 DS8882F to DS8910F Rack Mounted comparison
| Features | DS8882F | DS8910F |
|---|---|---|
| Rack size | No rack | No rack |
| Min size | 17U (see note 1) | 16U, including the optional display |
| Max size | 17U (see note 1) | 20U, including the optional display |
| Processor complex (CEC) | 2 x IBM POWER8® | 2 x IBM POWER9 |
| I/O bay pairs | 1 x 2U | 1 x 5U |
| Max HA ports | 16 ports (16 GFC 4-port HA) | 32 ports (16 GFC 4-port HA, 32 GFC 4-port EDiF HA) |
| Max zHyperLink ports | 0 | 4 |
| Max flash drives | 48 | 96 |
| Max HPFE Gen2 pairs | 1 | 2 |
| 4U UPS (single phase) per rack | 2 | 0 |
| Optional display and keyboard | 1 x 19-inch wide rack-mounted | 1 x 19-inch wide rack-mounted |
| HMCs in Management Enclosure (ME) | 2 | 2 |
| Ethernet switches in ME | 2 | 2 |

Note 1: The DS8882F, when integrated into ZR1/LR1, uses 16U of contiguous space and shares the keyboard display unit that is provided by the Z system.
Figure 2-21 shows a comparison of the DS8882F and DS8910F rack-mounted models with maximum supported configuration.
Figure 2-21 DS8882F to DS8910F Rack Mounted comparison
 
 