Introduction to Power E1080
The Power E1080 is the newest addition to the IBM Power family, the industry’s best-in-class server platform for security and reliability. The Power E1080 introduces the essential enterprise hybrid cloud platform, which is uniquely designed to help you securely and efficiently scale core operational and artificial intelligence (AI) applications anywhere in a hybrid cloud.
The Power E1080 simplifies end-to-end encryption and brings AI where your data is stored for faster insights. This configuration helps enable greater workload deployment flexibility and agility while accomplishing more work.
The Power E1080 can help you to realize the following benefits:
Protect trust from core to cloud
Protect data that is in-transit and at-rest with greatly simplified end-to-end encryption across hybrid cloud without affecting performance.
Enjoy enterprise quality of service
The Power E1080 can detect, isolate, and recover from soft errors automatically in the hardware without taking an outage or relying on an operating system to manage the faults.
Drive greater efficiency with sustainable and scalable compute
The processor performance, massive system throughput, and memory capacity qualify the Power E1080 server to be the perfect workload consolidation platform. This performance entails significant savings in floor space, energy consumption, and operational expenditure costs.
This chapter includes the following topics:
1.1, “System overview”
1.2, “System nodes”
1.3, “System control unit”
1.4, “Server specifications”
1.5, “System features”
1.6, “I/O drawers”
1.1 System overview
The Power E1080, also referred to by its 9080-HEX machine type-model designation, represents the most powerful and scalable server in the IBM Power portfolio. It is composed of a combination of CEC enclosures that are called nodes (or system nodes) and additional units and drawers.
1.1.1 System nodes, processors, and memory
In this section, we provide a general overview of the system nodes, processors, and memory. For more information about the system nodes, see 1.2, “System nodes” on page 6.
A system node is an enclosure that provides the connections and supporting electronics to connect the processor with the memory, internal disk, adapters, and the interconnects that are required for expansion.
A combination of one, two, three, or four system nodes per server is supported. At announcement, a maximum of two nodes can be ordered. After December 10, 2021, the maximum configuration of four nodes is to be made available.
Each system node provides four sockets for Power10 processor chips and 64 differential DIMM (DDIMM) slots for Double Data Rate 4 (DDR4) technology DIMMs.
Each socket holds one Power10 single chip module (SCM). An SCM can contain 10, 12, or 15 Power10 processor cores. It also holds the extra infrastructure logic to provide electric power and data connectivity to the Power10 processor chip.
The processor configuration of a system node is defined by the selected processor feature. Each feature defines a set of four Power10 processor chips with the same core density (10, 12, or 15).
A 4-node Power E1080 server scales up to 16 processor sockets and 160, 192, or 240 cores, depending on the number of cores provided by the configured SCM type.
All system nodes within a Power E1080 server must be configured with the same processor feature.
Starting December 10, 2021, each system node can support up to a maximum of 16 TB of system memory by using the largest available memory DIMM density. A fully configured 4-node Power E1080 can support up to 64 TB of memory.
To support internal boot capability, each system node enables the use of up to four non-volatile memory express (NVMe) drive bays. More drive bays are provided through expansion drawers.
Each system node provides eight Peripheral Component Interconnect Express (PCIe) Gen 5 capable slots, with a maximum of 32 per Power E1080 server.
Any one-, two-, three-, or four-system node configuration requires the system control unit (SCU) to operate. The SCU provides system hardware, firmware, and virtualization control through redundant Flexible Service Processors (FSPs). Only one SCU is required and supported for every Power E1080 server. For more information about the system control unit, see 1.3, “System control unit” on page 10.
For more information about the environmental and physical aspects of the server, see 1.4, “Server specifications” on page 12.
1.1.2 Expansion drawers and storage enclosures
Capacity can be added to your system by using expansion drawers and storage enclosures.
An optional 19-inch PCIe Gen 3 4U I/O expansion drawer provides 12 PCIe Gen 3 slots. The I/O expansion drawer connects to the system node with a pair of PCIe x16 to CXP converter cards that are housed in the system node. Each system node can support up to four I/O expansion drawers, for a total of 48 PCIe Gen 3 slots. A fully configured Power E1080 can support a maximum of 16 I/O expansion drawers, which provides a total of 192 PCIe Gen 3 slots.
An optional EXP24SX SAS storage enclosure provides 24 2.5-inch small form factor (SFF) serial-attached SCSI (SAS) bays. It supports up to 24 hot-swap hard disk drives (HDDs) or solid-state drives (SSDs) in only 2U rack units of space in a 19-inch rack. The EXP24SX is connected to the Power E1080 server by using SAS adapters that are plugged into system node PCIe slots or I/O expansion drawer slots.
For more information about enclosures and drawers, see 1.6, “I/O drawers” on page 25.
For more information about IBM storage products, see this web page.
1.1.3 Hardware at-a-glance
When 4-node configurations are available, the Power E1080 server provides the following hardware components and characteristics:
10-, 12-, or 15-core Power10 processor chips that are packaged in a single chip module per socket
One, two, three, or four system nodes with four Power10 processor sockets each
Redundant clocking in each system node
Up to 60 Power10 processor cores per system node and up to 240 per system
Up to 16 TB of DDR4 memory per system node and up to 64 TB per system
8 PCIe Gen 5 slots per system node and a maximum of 32 PCIe Gen 5 slots per system
PCIe Gen 1, Gen 2, Gen 3, Gen 4, and Gen 5 adapter cards supported in system nodes
Up to 4 PCIe Gen 3 4U I/O expansion drawers per system node providing a maximum of 48 additional PCIe Gen 3 slots
Up to 192 PCIe Gen 3 slots using 16 PCIe Gen 3 I/O expansion drawers per system
Up to 4,000+ directly attached SAS HDDs or SSDs through EXP24SX SFF drawers
System control unit, which provides redundant Flexible Service Processors and support for the operations panel, the system VPD, and an externally attached DVD
The massive computational power, exceptional system capacity, and unprecedented scalability of the Power E1080 server hardware are complemented by unique enterprise-class firmware and system software capabilities and features. The following important characteristics and features are offered by the IBM Power enterprise platform:
Support for IBM AIX, IBM i, and Linux operating system environments
Innovative dense math engine that is integrated in each Power10 processor-core to accelerate AI inferencing workloads
Optimized encryption units that are implemented in each Power10 processor-core
Dedicated data compression engines that are provided by the Power10 processor technology
Hardware- and firmware-assisted and enforced security, which provides trusted boot and pervasive memory encryption support
Up to 1,000 virtual machines (VMs) or logical partitions (LPARs) per system
Dynamic LPAR support to modify available processor and memory resources according to workload, without interruption of the business
Capacity on demand (CoD) processor and memory options to help respond more rapidly and seamlessly to changing business requirements and growth
IBM Power System Private Cloud Solution with Dynamic Capacity featuring Power Enterprise Pools 2.0 that support unsurpassed enterprise flexibility for real-time workload balancing, system maintenance and operational expenditure cost management.
Table 1-1 compares important technical characteristics of the Power E1080 server (based on its December 10, 2021 availability) with those of the Power System E980 server, which is based on IBM POWER9™ processor technology.
Table 1-1 Comparison between the Power E980 and the Power E1080 server
Features | Power E980 server | Power E1080 server
Processor | POWER9 | Power10
Processor package | Single Chip Module | Single Chip Module
Cores per single chip module | 6, 8, 10, 11, 12 | 10, 12, 15
Number of cores per system | Up to 192 cores | Up to 240 cores
Sockets per node | 4 | 4
System configuration options | 1-, 2-, 3-, and 4-node systems | 1-, 2-, 3-, and 4-node systems
Maximum memory per node | 16 TB | 16 TB
Maximum memory per system | 64 TB | 64 TB
Maximum memory bandwidth per node | 920 GBps | 1636 GBps
Aggregated maximum memory bandwidth per system | 3680 GBps | 6544 GBps
Pervasive memory encryption | No | Yes
PCIe slots per node | Eight PCIe Gen 4 slots | Eight PCIe Gen 5 slots
I/O drawer expansion option | Yes | Yes
Acceleration ports | Yes: CAPI 2.0 and OpenCAPI 3.0 (1) | Yes: OpenCAPI 3.0 and 3.1
PCIe hot-plug support | Yes | Yes
I/O bandwidth per node | 545 GBps | 576 GBps
Integrated USB | USB 3.0 | Not available
Internal storage bays per node | Four NVMe PCIe Gen 3 bays (2) | Four NVMe PCIe Gen 4 bays
Per lane bit rate between sockets | 25 Gbps | 32 Gbps
Reliability, availability, and serviceability (RAS) | SMP (3) cable concurrent repair | Non-active SMP cables with concurrent maintenance capability and TDR (4) fault isolation
Secure and trusted boot | Yes | Yes

(1) CAPI designates the coherent accelerator processor interface technology and OpenCAPI designates the open coherent accelerator processor interface technology. For more information about architectural specifications and the surrounding system, see this web page.
(2) NVMe designates the Non-Volatile Memory Express interface specification under supervision of the NVM Express consortium: https://nvmexpress.org.
(3) SMP designates the symmetric multiprocessing architecture, which is used to build monolithic servers out of multiple processor entities.
(4) Time domain reflectometry (TDR) allows the server to actively detect faults in cables and locate discontinuities in a connector.
Figure 1-1 shows a 4-node Power E1080 server that is mounted in an IBM rack. Each system node is cooled by a set of five fans, which are arranged side-by-side in one row. The cooling assemblies are visible through the front door of the rack.
Figure 1-1 Power E1080 4-node server
1.1.4 Planned availability dates of system capacities and features
For initial orders, the Power E1080 supports up to two system nodes. The memory features #EMC1 (128 GB) and #EMC2 (256 GB) are planned to be available. The maximum memory capacity that is supported in each node is 4 TB.
The maximum number of supported PCIe Gen 3 I/O expansion drawers is four per system node. Each I/O expansion drawer can be populated with two Fanout Modules. Each Fanout Module in turn is connected to a system node through one PCIe x16 to CXP Converter Card.
The following characteristics apply to the September planned availability date:
Maximum of 2 #EDN1 5U system node drawers
Maximum of 4 TB of system memory per node drawer
Maximum of 8 #EMX0 PCIe Gen 3 I/O expansion drawers
Maximum of 16 #EMXH PCIe Gen 3 6-slot Fanout Module for PCIe Gen 3 expansion drawers
Maximum of 16 #EJ24 PCIe x16 to CXP Converter Cards
Starting from December 10, 2021, the Power E1080 supports up to four system nodes and all memory features (including #EMC3 512 GB and #EMC4 1024 GB) are planned to be available.
The following characteristics apply to the December planned availability date:
Maximum of 4 #EDN1 5U system node drawers
Maximum of 16 TB of system memory per node drawer
Maximum of 16 #EMX0 PCIe Gen 3 I/O expansion drawers
Maximum of 32 #EMXH PCIe Gen 3 6-slot Fanout Module for PCIe Gen 3 expansion drawers
Maximum of 32 #EJ24 PCIe x16 to CXP Converter Cards
1.2 System nodes
A fully operational Power E1080 includes one SCU and one, two, three, or four system nodes. A system node is also referred to as a central electronic complex (CEC), or CEC drawer.
Each system node is 5U rack units high and holds four air-cooled Power10 single-chip modules (SCMs) that are optimized for performance, scalability, and AI workloads. An SCM is constructed of one Power10 processor chip and additional logic, pins, and connectors that enable plugging the SCM into the related socket on the system node planar.
The Power E1080 Power10 SCMs are available in 10-core, 12-core, or 15-core capacity. Each core can run in eight-way simultaneous multithreading (SMT) mode, which delivers eight independent hardware threads of parallel execution power.
The 10-core SCMs are ordered in a set of four per system node through processor feature #EDP2. In this way, feature #EDP2 provides 40 cores of processing power to one system node and 160 cores of total system capacity in a 4-node Power E1080 server. The maximum frequency of the 10-core SCM is specified as 3.9 GHz, which makes this SCM suitable as a building block for entry class Power E1080 servers.
The 12-core SCMs are ordered in a set of four per system node through processor feature #EDP3. In this way, feature #EDP3 provides 48 cores capacity per system node and a maximum of 192 cores per fully configured 4-node Power E1080 server. This SCM type offers the highest processor frequency at a maximum of 4.15 GHz, which makes it a perfect choice if highest thread performance is one of the most important sizing goals.
The 15-core SCMs are ordered in a set of four per system node through processor feature #EDP4. In this way, feature #EDP4 provides 60 cores per system node and an impressive 240 cores total system capacity for a 4-node Power E1080. The 15-core SCMs run with a maximum of 4.0 GHz and meet the needs of environments with demanding thread performance and high compute capacity density requirements.
 
Note: All Power10 SCMs within a system node must be of the same type: 10-core, 12-core, or 15-core. Also, all system nodes within a specific Power E1080 server must be configured consistently with identical processor features.
Three PowerAXON 18-bit wide buses per Power10 processor chip are used to span a fully connected fabric within a CEC drawer. In this way, each SCM within a system node is directly connected to every other SCM of the same drawer at 32 Gbps speed. This on-planar interconnect provides 128 GBps chip-to-chip data bandwidth, which marks an increase of 33% relative to the previous POWER9 processor-based on-planar interconnect implementation in Power E980 systems. The throughput can be calculated as 16-bit lane width x 32 Gbps = 64 GBps per direction, which yields an aggregated rate of 128 GBps across both directions.
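The following minimal Python sketch reproduces this bandwidth arithmetic. The 16-bit effective lane width and the 32 Gbps signaling rate come from the text; the function name and structure are illustrative only.

```python
# Hedged sketch of the PowerAXON chip-to-chip bandwidth arithmetic from the text.
def axon_bus_bandwidth(lane_width_bits: int = 16, signaling_gbps: int = 32) -> dict:
    """Per-direction and aggregate bandwidth of one PowerAXON bus, in GBps."""
    per_direction = lane_width_bits * signaling_gbps / 8   # 512 Gbps -> 64 GBps
    return {
        "per_direction_GBps": per_direction,                # 64 GBps
        "aggregate_GBps": per_direction * 2,                # both directions: 128 GBps
    }

print(axon_bus_bandwidth())
# {'per_direction_GBps': 64.0, 'aggregate_GBps': 128.0}
```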
Each of the four Power10 processor chips in a Power E1080 CEC drawer is connected directly to a Power10 processor chip at the same position in every other CEC drawer in a multi-node system. This connection is made by using a symmetric multiprocessing (SMP) PowerAXON 18-bit wide bus per connection running at 32 Gbps speed.
The Power10 SCM provides eight PowerAXON connectors directly on the module, of which six are used to route the SMP bus to the rear tailstock of the CEC chassis. This innovative implementation allows the use of passive SMP cables, which in turn reduces the data transfer latency and enhances the robustness of the drawer-to-drawer SMP interconnect. As discussed in 1.3, “System control unit” on page 10, cable features #EFCH, #EFCE, #EFCF, and #EFCG are required to connect system node drawers to the system control unit. They also are required to facilitate the SMP interconnect among each drawer in a multi-node Power E1080 configuration.
To access main memory, the Power10 processor technology introduces the new open memory interface (OMI). The 16 available high-speed OMI links are driven by 8 on-chip memory controller units (MCUs) that provide a total aggregated bandwidth of up to 409 GBps per SCM. This design represents a memory bandwidth increase of 78% compared to the POWER9 processor-based technology capability.
Every Power10 OMI link is directly connected to one memory buffer-based differential DIMM (DDIMM) slot. Therefore, the four sockets of one system node offer a total of 64 DDIMM slots with an aggregated maximum memory bandwidth of 1636 GBps. The DDIMM densities supported in Power E1080 servers are 32 GB, 64 GB, 128 GB, and 256 GB, all of which use Double Data Rate 4 (DDR4) technology.
The Power E1080 memory options are available as 128 GB (#EMC1), 256 GB (#EMC2), 512 GB (#EMC3), and 1024 GB (#EMC4) memory features. Each memory feature provides four DDIMMs.
Each system node supports a maximum of 16 memory features that cover the 64 DDIMM slots. The use of 1024 GB DDIMM features yields a maximum of 16 TB per node. A 2-node system has a maximum of 32 TB capacity. A 4-node system has a maximum of 64 TB capacity. Minimum memory activations of 50% of the installed capacity are required.
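The following is a minimal Python sketch of the memory capacity arithmetic that is described above. The slot count, the quad packaging of memory features, and the DDIMM densities come from the text; the constant names and the loop are illustrative only.

```python
# Hedged sketch of the Power E1080 memory capacity arithmetic described above.
DDIMM_SLOTS_PER_NODE = 64        # 16 OMI links per SCM x 4 SCMs per node
DDIMMS_PER_FEATURE = 4           # each memory feature ships four DDIMMs
DDIMM_DENSITIES_GB = [32, 64, 128, 256]

for density in DDIMM_DENSITIES_GB:
    features_per_node = DDIMM_SLOTS_PER_NODE // DDIMMS_PER_FEATURE   # always 16
    per_node_tb = DDIMM_SLOTS_PER_NODE * density / 1024
    per_system_tb = per_node_tb * 4                                  # 4-node system
    print(f"{density:>3} GB DDIMMs: {features_per_node} features per node, "
          f"{per_node_tb:.0f} TB per node, {per_system_tb:.0f} TB per 4-node system")
# 256 GB DDIMMs yield 16 TB per node and 64 TB in a 4-node system, matching the text.
```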
 
Note: The Power10 processor memory logic and the memory I/O subsystem of the Power E1080 server are designed to support DDR4 DIMMs, but also can use the next generation DDR5 dynamic random-access memory technology.
The Power10 processor I/O subsystem is driven by 32 GHz differential Peripheral Component Interconnect Express 5.0 (PCIe Gen 5) buses that provide 32 lanes that are grouped in two sets of 16 lanes. The 32 PCIe lanes deliver an aggregate bandwidth of 576 GBps per system node and are used to support 8 half-length, low-profile (half-height) adapter slots for external connectivity and 4 Non-Volatile Memory Express (NVMe) mainstream Solid State Drives (SSDs) of the U.2 form factor for internal storage.
Six of the eight external PCIe slots can be used for PCIe Gen 4 x16 or PCIe Gen 5 x8 adapters and the remaining two offer PCIe Gen 5 x8 capability. All PCIe slots support earlier generations of the PCIe standard, such as PCIe Gen 1 (PCIe 1.0), PCIe Gen 2 (PCIe 2.0), PCIe Gen 3 (PCIe 3.0), and PCIe Gen 4 (PCIe 4.0).
For extra connectivity, up to four 19-inch PCIe Gen 3 4U high I/O expansion units (#EMX0) optionally can be attached to one system node. Each expansion drawer contains one or two PCIe Fanout Modules (#EMXH) with six PCIe Gen 3 full-length, full-height slots each.
A fully configured 4-node Power E1080 server offers a total of 32 internal PCIe slots and up to 192 PCIe slots through I/O expansion units.
Figure 1-2 shows the front view of a system node. The fans and power supply units (PSUs) are redundant and concurrently maintainable. Fans are n+1 redundant; therefore, the system continues to function when any one fan fails. Because the power supplies are n+2 redundant, the system continues to function, even if any two power supplies fail.
Figure 1-2 Front view of a Power E1080 server node
Figure 1-3 on page 9 shows the rear view of a system node with the locations of the external ports and features.
Figure 1-3 Rear view of a Power E1080 server node
Figure 1-4 shows the internal view of a system node and some of the major components like heat sinks, processor voltage regulator modules (VRMs), VRMs of other miscellaneous components, differential DIMM (DDIMM) slots, DDIMMs, system clocks, trusted platform modules (TPMs), and internal SMP cables.
Figure 1-4 Top view of a Power E1080 server node with the top cover assembly removed
1.3 System control unit
The system control unit (SCU) is implemented in a 2U high chassis and provides system hardware, firmware, and virtualization control functions through a pair of redundant Flexible Service Processor (FSP) devices. It also contains the operator panel and the electronics module that stores the system vital product data (VPD). The SCU is also prepared to facilitate USB connectivity that can be used by the Power E1080 server.
One SCU is required and supported for each Power E1080 server (any number of system nodes) and, depending on the number of system nodes, the SCU is powered according to the following rules:
Two universal power interconnect (UPIC) cables are used to provide redundant power to the SCU.
In a Power E1080 single system node configuration, both UPIC cables are provided from the single system node to be connected to the SCU.
For a two-, three-, or four-system node configuration, one UPIC cable is provided from the first system node and the second UPIC cable is provided from the second system node to be connected to the SCU.
The set of two cables facilitates a 1+1 redundant electric power supply. If one cable fails, the remaining UPIC cable is sufficient to feed the needed power to the SCU.
Two service processor cards in the SCU are ordered by using two mandatory #EDFP features. Each card provides two 1 Gb Ethernet ports for the Hardware Management Console (HMC) system management connection. One port is used as the primary connection and the second port can be used for redundancy. To enhance resiliency, it is recommended to implement a dual HMC configuration by attaching separate HMCs to each of the cards in the SCU.
Four FSP ports per FSP card provide redundant connection from the SCU to each system node. System nodes connect to the SCU by using the cable features #EFCH, #EFCE, #EFCF, and #EFCG. Feature #EFCH connects the first system node to the SCU and is included by default in every system node configuration. It provides FSP, UPIC, and USB cables, but no symmetric multiprocessing (SMP) cables. All the other cable features are added depending on the number of extra system nodes that are configured and include FSP and SMP cables.
The SCU implementation also includes the following highlights:
Elimination of clock cabling since the introduction of POWER9 processor-based servers
Front-accessible system node USB port
Optimized UPIC power cabling
Optional external DVD
Concurrently maintainable time of day clock battery
Figure 1-5 shows the front and rear view of a SCU with the locations of the external ports and features.
Figure 1-5 Front and rear view of the system control unit
1.4 Server specifications
The Power E1080 server specifications are essential to plan for your server. For a first assessment in the context of your planning effort, this section provides you with an overview related to the following topics:
Physical dimensions
Electrical characteristics
Environment requirements and noise emission
For the comprehensive Model 9080-HEX server specifications, see the product documentation at IBM Documentation.
1.4.1 Physical dimensions
The Power E1080 is a modular system that is built of a single SCU and one, two, three, or four system nodes.
Each system component must be mounted in a 19-inch industry standard rack. The SCU requires 2U rack units and each system node requires 5U rack units. Thus, a single-node system requires 7U, a two-node system requires 12U, a three-node system requires 17U, and a four-node system requires 22U rack units. More rack space must be allotted for, for example, PCIe I/O expansion drawers, a Hardware Management Console, a flat panel console kit, network switches, power distribution units, and cable egress space.
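A minimal Python sketch of this rack-space arithmetic follows. It covers only the SCU and the configured system nodes; the 2U and 5U heights come from the text, and extra components (I/O expansion drawers, an HMC, switches, and so on) are intentionally excluded.

```python
# Hedged sketch: rack units for the base system (SCU + system nodes) only.
SCU_HEIGHT_U = 2                 # system control unit height
NODE_HEIGHT_U = 5                # system node height

def base_rack_units(system_nodes: int) -> int:
    """Rack units for the SCU plus the configured system nodes only."""
    if not 1 <= system_nodes <= 4:
        raise ValueError("a Power E1080 supports one to four system nodes")
    return SCU_HEIGHT_U + system_nodes * NODE_HEIGHT_U

for nodes in range(1, 5):
    print(f"{nodes}-node system: {base_rack_units(nodes)}U")
# Prints 7U, 12U, 17U, and 22U, matching the values above.
```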
Table 1-2 lists the physical dimensions of the Power E1080 server control unit and a Power E1080 server node. The component height is also given in Electronic Industries Alliance (EIA) rack units. (One EIA unit corresponds to one rack unit (U) and is defined as 1 3/4 inch or 44.45 mm respectively).
Table 1-2 Physical dimensions of the Power E1080 server components
Dimension | Power E1080 server control unit | Power E1080 server node
Width | 445.6 mm (17.54 in) | 445 mm (17.51 in)
Depth | 779.7 mm (30.7 in) | 866.95 mm (34.13 in)
Height | 86 mm (3.39 in) / 2 EIA units | 217.25 mm (8.55 in) / 5 EIA units
Weight | 22.7 kg (50 lb) | 81.6 kg (180 lb)
Lift tools
It is recommended to have a lift tool available at each site where one or more Power E1080 servers are located to avoid any delays when servicing systems. An optional lift tool #EB2Z is available for order with a Power E1080 server. One #EB2Z lift tool can be shared among many servers and I/O drawers. The #EB2Z lift tool provides a hand crank to lift and position up to 159 kg (350 lb). The #EB2Z lift tool is 1.12 meters x 0.62 meters (44 in x 24.5 in).
 
Note: A single system node can weigh up to 86.2 kg (190 lb). Also available are a lighter, lower-cost lift tool #EB3Z and wedge shelf tool kit for #EB3Z with feature code #EB4Z.
 
1.4.2 Electrical characteristics
Each Power E1080 server node has four 1950 W bulk power supplies. The hardware design provides N+2 redundancy for the system power supply, and any node can continue to operate at full function in nominal mode with any two of the power supplies functioning.
Depending on the specific Power E1080 configuration, the power for the SCU is provided through two UPIC cables connected to one or two system nodes, as described in 1.2, “System nodes” on page 6.
Table 1-3 lists the electrical characteristics per Power E1080 server node. For planning purposes, use the maximum values that are provided. However, the power draw and heat load depends on the specific processor, memory, adapter, and expansion drawer configuration and the workload characteristics.
Table 1-3 Electrical characteristics per Power E1080 server node
Electrical characteristics | Properties
Operating voltage | 200 - 208 / 220 - 240 V AC
Operating frequency | 50 or 60 Hz +/- 3 Hz AC
Maximum power consumption | 4500 W
Maximum power source loading | 4.6 kVA
Maximum thermal output | 15,355 BTU/h
Phase | Single
 
 
Note: The Power E1080 must be installed in a rack with a rear door and side panels for electromagnetic compatibility (EMC) compliance.
1.4.3 Environment requirements and noise emission
Environmental assessment: The IBM Systems Energy Estimator tool can provide more accurate information about power consumption and thermal output of systems based on a specific configuration.
The environment requirements for the Power E1080 servers are classified in operating and non-operating environments. The operating environments are further segmented regarding the recommended and allowable conditions.
The recommended operating environment designates the long-term operating environment that can result in the greatest reliability and energy efficiency. The allowable operating environment represents the envelope in which the equipment is tested to verify functionality. Because of the stresses that operating in the allowable envelope can place on the equipment, this envelope must be used for short-term operation only, not for continuous operation.
The condition of a non-operating environment pertains to the situation when equipment is removed from the original shipping container and is installed, but is powered down. The allowable non-operating environment is provided to define the environmental range that an unpowered system can experience short term without being damaged.
Table 1-4 lists the environment requirements for the Power E1080 server regarding temperature, humidity, dew point, and altitude. It also lists the maximum noise emission level for a fully configured Power E1080 server.
Table 1-4 Power E1080 environment requirements and noise emission
Property | Operating (recommended) | Operating (allowable) | Non-operating
Temperature | 18.0°C - 27.0°C (64.4°F - 80.6°F) | 5.0°C - 40.0°C (41.0°F - 104.0°F) | 5°C - 45°C (41°F - 113°F)
Low-end moisture | -9.0°C (15.8°F) dew point | -12.0°C (10.4°F) dew point and 8% relative humidity | N/A
High-end moisture | 60% relative humidity and 15°C (59°F) dew point | 85% relative humidity and 24.0°C (75.2°F) dew point | N/A
Relative humidity | N/A | N/A | 8% to 85%
Maximum dew point | N/A | N/A | 27.0°C (80.6°F)
Maximum altitude | N/A | 3,050 m (10,000 ft) | N/A
Maximum noise level | 10.0 B LWA,m (1) (heavy workload on one fully configured 16-socket four-node system, 35°C (95°F) at 500 m (1640 ft)) | N/A | N/A

(1) Declared level LWA,m is the upper-limit A-weighted sound power level, measured in bels (B).
A comprehensive list of noise emission values for various Power E1080 server configurations is provided in the Power E1080 product documentation. For more information about noise emissions, search for “Model 9080-HEX server specifications” at IBM Documentation.
 
Note: Government regulations, such as those regulations that are issued by the Occupational Safety and Health Administration (OSHA) or European Community Directives, can govern noise level exposure in the workplace and might apply to you and your server installation. The Power E1080 server is available with an optional acoustical door feature that can help reduce the noise emitted from this system.
The sound pressure levels in your installation depend on various factors, including the number of racks in the installation; the size, materials, and configuration of the room where you designate the racks to be installed; the noise levels from other equipment; the room ambient temperature, and employees' location in relation to the equipment.
Also, compliance with such government regulations depends on various other factors, including the duration of employees’ exposure and whether employees wear hearing protection. IBM recommends that you consult with qualified experts in this field to determine whether you are in compliance with the applicable regulations.
1.5 System features
This section lists and explains the available system features on a Power E1080 server. These features describe the resources that are available on the system by default or by virtue of procurement of configurable feature codes.
This section also presents an overview of the various feature codes and the essential information that can help users design a system configuration with suitable features to fulfill the application compute requirements. This information also helps with building a highly available, scalable, reliable, and flexible system around the application.
1.5.1 Minimum configuration
A minimum configuration enables a user to order a fully qualified and tested hardware configuration of a Power system with a minimum set of offered technical features. The modular design of a Power E1080 server enables the user to start small with a minimum configuration and scale up vertically as and when needed.
Table 1-5 lists the Power E1080 server configuration with minimal features.
Table 1-5 Minimum configuration
Feature | FC | Feature code description | Minimum quantity
Primary operating system feature code | 2145 / 2146 / 2147 | Primary OS - IBM i / Primary OS - AIX / Primary OS - Linux | 1
System enclosure | EDN1 | 5U System node Indicator drawer | 1
Flexible service processor | EDFP | Flexible service processor | 2
Bezel | EBAB | IBM Rack-mount Drawer Bezel and Hardware | 1
Processor | EDP2 / EDP3 / EDP4 | 40-core (4x10) Typical 3.65 to 3.90 GHz (max), 48-core (4x12) Typical 3.60 to 4.15 GHz (max), or 60-core (4x15) Typical 3.55 to 4.00 GHz (max) Power10 Processor with 5U system node drawer | 1 of either FC
Processor activation | EDPB / EDPC / EDPD | 1 core Processor Activation for #EDBB/#EDP2, #EDBC/#EDP3, or #EDBD/#EDP4 | 16 (1 if in PEP 2.0)
Memory | EMC1 / EMC2 / EMC3 / EMC4 | 128 GB (4x32 GB) or 256 GB (4x64 GB) DDIMMs at 3200 MHz, or 512 GB (4x128 GB) or 1 TB (4x256 GB) DDIMMs at 2933 MHz, 16 GBIT DDR4 | 8 features of either FC
Memory activation | EMAZ | 1 GB Memory activation for HEX | 50%, or 256 GB if in PEP 2.0
DASD backplanes | EJBC | 4-NVMe U.2 (7mm) Flash drive bays | 1
Data protection | 0040 | Mirrored System Disk Level, Specify Code | 1 if IBM i; 0 for other OS
UPIC cables | EFCH | System Node to System Control Unit Cable Set for Drawer 1 | 1
 
Note: The minimum configuration that is generated by the IBM configurator includes additional administrative and indicator features.
1.5.2 Processor features
Each system node in a Power E1080 server provides four sockets to accommodate Power10 single chip modules (SCMs). Each processor feature code represents four of these sockets, which are offered in 10-core, 12-core, and 15-core density.
Table 1-6 lists the available processor feature codes for a Power E1080 server. The system configuration requires a quantity of one, two, three, or four of the same processor feature, according to the number of system nodes.
Table 1-6 Processor features
Feature code | CCIN | Description | OS support
EDP2 | 5C6C | 40-core (4x10) Typical 3.65 to 3.90 GHz (max) Power10 Processor with 5U system node drawer | AIX, IBM i, and Linux
EDP3 | 5C6D | 48-core (4x12) Typical 3.60 to 4.15 GHz (max) Power10 Processor with 5U system node drawer | AIX, IBM i, and Linux
EDP4 | 5C6E | 60-core (4x15) Typical 3.55 to 4.00 GHz (max) Power10 Processor with 5U system node drawer | AIX, IBM i, and Linux
The system nodes connect to other system nodes and to the system control unit through cable connect features. Table 1-7 lists the set of cable features that are required for one-, two-, three-, and four-node configurations.
Table 1-7 Cable set features quantity
System configuration | Qty EFCH | Qty EFCG | Qty EFCF | Qty EFCE
1-node | 1 | 0 | 0 | 0
2-node | 1 | 0 | 0 | 1
3-node | 1 | 0 | 1 | 1
4-node | 1 | 1 | 1 | 1
Every feature code that is listed in Table 1-6 on page 16 provides the processor cores, not their activation. A processor core must be activated to be assigned as a resource to a logical partition. The activations are offered through multiple permanent and temporary activation features. For more information about these options, see 2.4, “Capacity on-demand” on page 76.
Table 1-8 lists the processor feature codes and the associated permanent activation features. Any of these activation feature codes can permanently activate one core.
Table 1-8 Processor and activation features
Processor feature | Static activation feature | Static Linux-only activation feature
EDP2 | EDPB | ELCL
EDP3 | EDPC | ELCQ
EDP4 | EDPD | ELCM
The following types of permanent activations are available:
Static: These features permanently activate cores or memory resources in a system. These activations cannot be shared among multiple systems and remain associated with the system for which they were ordered.
Regular: A regular static activation can run any supported operating system workload.
Linux-only: A Linux-only static activation can run only Linux workloads and is priced less than a regular static activation.
Mobile: These features permanently activate cores or memory resources. They are priced higher than static activation features because they can be shared among multiple eligible systems that are participating in a Power Enterprise Pool. They also can be moved dynamically among the systems without IBM involvement, which brings more value to the customer than static activations.
Base: In a Power Enterprise Pool 2.0 (PEP 2.0) environment, systems are ordered with some initial compute capacity. This initial capacity is procured by using base activation features. These activations do not move like mobile activations in a PEP 1.0 environment, but they can be shared among multiple eligible systems in a PEP 2.0 pool.
Any OS base: These base activations are supported on any operating system (AIX, IBM i, or Linux) in a PEP 2.0 environment.
Linux-only base: These base activations are priced less than Any OS base activations, but support only Linux workloads in a PEP 2.0 environment.
A minimum of 16 processor cores must always be activated with static activation features, regardless of the Power E1080 configuration. Also, if the server is associated with a PEP 2.0 pool, a minimum of one base activation is required.
For more information about other temporary activation offerings that are available for the Power E1080 server, see 2.4, “Capacity on-demand” on page 76.
Regular and PEP 2.0 associated activations for Power E1080 are listed in Table 1-9. The Order type table column includes the following designations:
Initial: Denotes a feature that is orderable only with a new purchase of the system.
MES: Denotes a feature that is orderable only with MES upgrade purchases on the system.
Both: Denotes a feature that is orderable with new and MES upgrade purchases.
Supported: Denotes a feature that is not orderable, but is supported. That is, the feature can be migrated only from existing systems.
Table 1-9 Processor activation features
Feature code | Description | Order type
EPS0 | 1 core Base Proc Act (Pools 2.0) for #EDP2 any OS (from Static) | MES
EPS1 | 1 core Base Proc Act (Pools 2.0) for #EDP3 any OS (from Static) | MES
EPS2 | 1 core Base Proc Act (Pools 2.0) for #EDP4 any OS (from Static) | MES
EPS5 | 1 core Base Proc Act (Pools 2.0) for #EDP2 Linux (from Static) | MES
EPS6 | 1 core Base Proc Act (Pools 2.0) for #EDP3 Linux (from Static) | MES
EPS7 | 1 core Base Proc Act (Pools 2.0) for #EDP4 Linux (from Static) | MES
EPSK | 1 core Base Proc Act (Pools 2.0) for #EDP2 any OS (from Prev) | MES
EPSL | 1 core Base Proc Act (Pools 2.0) for #EDP3 any OS (from Prev) | MES
EPSM | 1 core Base Proc Act (Pools 2.0) for #EDP4 any OS (from Prev) | MES
EDP2 | 40-core (4x10) Typical 3.65 to 3.90 GHz (max) Power10 Processor with 5U system node drawer | Both
EDP3 | 48-core (4x12) Typical 3.60 to 4.15 GHz (max) Power10 Processor with 5U system node drawer | Both
EDP4 | 60-core (4x15) Typical 3.55 to 4.00 GHz (max) Power10 Processor with 5U system node drawer | Both
ED2Z | Single 5250 Enterprise Enablement | Both
ED30 | Full 5250 Enterprise Enablement | Both
EDPB | 1 core Processor Activation for #EDBB/#EDP2 | Both
EDPC | 1 core Processor Activation for #EDBC/#EDP3 | Both
EDPD | 1 core Processor Activation for #EDBD/#EDP4 | Both
EDPZ | Mobile processor activation for HEX/80H | Both
EPDC | 1 core Base Processor Activation (Pools 2.0) for EDP2 any OS | Both
EPDD | 1 core Base Processor Activation (Pools 2.0) for EDP3 any OS | Both
EPDS | 1 core Base Processor Activation (Pools 2.0) for EDP4 any OS | Both
EPDU | 1 core Base Processor Activation (Pools 2.0) for EDP2 Linux only | Both
EPDW | 1 core Base Processor Activation (Pools 2.0) for EDP3 Linux only | Both
EPDX | 1 core Base Processor Activation (Pools 2.0) for EDP4 Linux only | Both
ELCL | Power Linux processor activation for #EDP2 | Both
ELCM | Power Linux processor activation for #EDP4 | Both
ELCQ | Power Linux processor activation for #EDP3 | Both
1.5.3 Memory features
This section describes the memory features that are available on a Power E1080 server. Careful selection of these features helps the user to configure their system with the correct amount of memory to meet the demands of memory-intensive workloads. On a Power E1080 server, the memory features can be classified into the following categories:
Physical memory
Memory activation
These features are described next.
Physical memory features
Physical memory features that are supported on the Power E1080 are the next generation differential dual inline memory modules, called DDIMMs (see 2.3, “Memory subsystem” on page 72). The DDIMMs that are used in the E1080 are Enterprise Class 4U DDIMMs.
The memory DDIMM features are available in 32-, 64-, 128-, and 256-GB capacity. Among these DDIMM features, 32 GB and 64 GB DDIMMs run at 3200 MHz frequency and 128 GB and 256 GB DDIMMs run at 2933 MHz frequency.
Each system node provides 64 DDIMM slots that support a maximum of 16 TB memory and a four system node E1080 can support a maximum of 64 TB memory. DDIMMs are ordered by using memory feature codes, which include a bundle of four DDIMMs with the same capacity.
Consider the following points regarding improved performance:
Plugging DDIMMs of the same density provides the highest performance.
Filling all the memory slots provides maximum memory performance.
System performance improves when more quads of memory DDIMMs match.
System performance also improves as the amount of memory is spread across more DDIMM slots. For example, if 1 TB of memory is required, 64 x 32 GB DDIMMs can provide better performance than 32 x 64 GB DDIMMs, as illustrated in the sketch after this list.
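The following is a minimal Python sketch of this plugging guideline. The 64 DDIMM slots per node, the quad-based memory features, and the supported densities come from the text; the function and the simple slot-count preference are illustrative only, not an IBM sizing rule.

```python
# Hedged sketch: compare homogeneous DDIMM fill options for one system node.
DDIMM_SLOTS_PER_NODE = 64
DDIMM_DENSITIES_GB = [32, 64, 128, 256]

def fill_options(target_gb: int):
    """Return homogeneous (density, DDIMM count) options for one node, best first."""
    options = []
    for density in DDIMM_DENSITIES_GB:
        ddimms, remainder = divmod(target_gb, density)
        if remainder == 0 and ddimms % 4 == 0 and ddimms <= DDIMM_SLOTS_PER_NODE:
            options.append((density, ddimms))           # memory features are quads
    return sorted(options, key=lambda o: o[1], reverse=True)  # more slots first

for density, count in fill_options(1024):               # the 1 TB example from the text
    print(f"{count} x {density} GB DDIMMs ({count // 4} memory features)")
# 64 x 32 GB ranks ahead of 32 x 64 GB, 8 x 128 GB, and 4 x 256 GB.
```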
Figure 1-6 on page 20 shows a DDIMM memory feature.
Figure 1-6 New DDIMM feature
Table 1-10 lists the available memory DDIMM feature codes for the Power E1080.
Table 1-10 E1080 memory feature codes
Feature code | CCIN | Description | OS support
EMC1 | 32AB | 128 GB (4x32 GB) DDIMMs, 3200 MHz, 16 GBIT DDR4 Memory | AIX, IBM i, and Linux
EMC2 | 32AC | 256 GB (4x64 GB) DDIMMs, 3200 MHz, 16 GBIT DDR4 Memory | AIX, IBM i, and Linux
EMC3 | 32AD | 512 GB (4x128 GB) DDIMMs, 2933 MHz, 16 GBIT DDR4 | AIX, IBM i, and Linux
EMC4 | 32AE | 1 TB (4x256 GB) DDIMMs, 2933 MHz, 16 GBIT DDR4 | AIX, IBM i, and Linux
Memory activation features
Software keys are required to activate part or all of the physical memory that is installed in the Power E1080 so that it can be assigned to logical partitions (LPARs). The software keys are provided when a memory activation feature is ordered, and activation features can be ordered at any time during the life cycle of the server to help the user scale up memory capacity without an outage, unless an additional physical memory upgrade and activation are required.
A server administrator or user cannot control which physical memory DDIMM features are activated when memory activations are used.
The amount of memory to activate depends on the feature code ordered; for example, if an order contains two of feature code EDAB (100 GB DDR4 Mobile Memory Activation for HEX), these feature codes activate 200 GB of the installed physical memory.
The different types of memory activation features that are available for the Power E1080 server are known to the PowerVM hypervisor as a total quantity for each type. The PowerVM hypervisor determines the physical DDIMM memory to be activated and assigned to the LPARs.
Similar to processor core activation features, different types of permanent memory activation features are offered on the Power E1080 server. For more information about the available types of activations, see 1.5.2, “Processor features” on page 16.
Orders for memory activation features must consider the following rules:
The system must have a minimum of 50% of its physical memory activated. This memory can be activated by using static activation features alone, or by a combination of static and mobile memory activation features.
The system must have a minimum of 25% of physical memory activated by using static memory activation features.
When a Power E1080 is part of a PEP 2.0 environment, the server must have a minimum of 256 GB of base memory activations.
Consider the following examples (see also the sketch after this list):
For a system with 4 TB of physical memory, at least 2 TB (50% of 4 TB) must be activated.
When a Power E1080 is part of a PEP 1.0 environment, a server with 4 TB of physical memory and 3.5 TB of activated memory requires a minimum of 896 GB (25% of 3.5 TB) of physical memory activated by using static activation features.
When a Power E1080 is part of a PEP 2.0 environment, a server with 4 TB of physical memory requires a minimum of 256 GB of memory activated with base activation features.
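The following is a minimal Python sketch of these activation minimums. The 50%, 25%, and 256 GB thresholds come from the rules and examples above (the 25% rule is applied to the activated memory, as in the PEP 1.0 worked example); the function, its arguments, and the way the rules are combined for a PEP 2.0 pool are illustrative simplifications, not IBM configurator logic.

```python
# Hedged sketch of the Power E1080 memory activation minimums described above.
def check_memory_activations(installed_gb, static_gb=0, mobile_gb=0, base_gb=0, pep2=False):
    """Return a list of rule violations for a proposed activation mix."""
    activated_gb = static_gb + mobile_gb + base_gb
    problems = []
    if activated_gb < 0.5 * installed_gb:
        problems.append("at least 50% of installed memory must be activated")
    if not pep2 and static_gb < 0.25 * activated_gb:
        problems.append("at least 25% of activated memory must use static activations")
    if pep2 and base_gb < 256:
        problems.append("PEP 2.0 systems need at least 256 GB of base activations")
    return problems

# 4 TB installed, 3.5 TB activated: 896 GB of static activations satisfies the 25% rule.
print(check_memory_activations(4096, static_gb=896, mobile_gb=2688))    # -> []
# 4 TB installed in a PEP 2.0 pool with 2 TB of base activations passes all checks.
print(check_memory_activations(4096, base_gb=2048, pep2=True))          # -> []
```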
Table 1-11 lists the available memory activation feature codes for the Power E1080. The Order type column indicates whether the feature code is available for an initial order only, with an MES upgrade on an existing server only, or both.
Table 1-11 Memory activation features
Feature code | Description | Order type
EDAB | 100 GB DDR4 Mobile Memory Activation for HEX | Both
EDAG | 256 GB Base Memory Activation (Pools 2.0) | Both
EDAH | 512 GB Base Memory Activation (Pools 2.0) | Both
EDAL | 256 GB Base Memory Activation Linux only | Both
EDAM | 512 GB Base Memory Activation Linux only | Both
EDAP | 1 GB Base Memory activation (Pools 2.0) from Static | Both
EDAQ | 100 GB Base Memory activation (Pools 2.0) from Static | Both
EDAR | 512 GB Base Memory activation (Pools 2.0) from Static | Both
EDAS | 500 GB Base Memory activation (Pools 2.0) from Static | Both
EDAT | 1 GB Base Memory activation (Pools 2.0) MES only | Both
EDAU | 100 GB Base Memory activation (Pools 2.0) MES only | Both
EDAV | 100 GB Base Memory Activation (Pools 2.0) from Mobile | Both
EDAW | 500 GB Base Memory Activation (Pools 2.0) from Mobile | Both
EDAX | 512 GB Base Memory Activation Linux only - Conversion | Both
ELME | 512 GB Power Linux Memory Activations for HEX | Both
EMAZ | 1 GB Memory activation for HEX | Both
EMBK | 500 GB DDR4 Mobile Memory Activation for HEX/80H | Both
EMBZ | 512 GB Memory Activations for HEX | Both
EMQZ | 100 GB of #EMAZ Memory activation for HEX | Both
1.5.4 System node PCIe features
Each system node provides eight PCIe Gen 5 hot-plug enabled slots; therefore, a two-system-node server provides 16 slots, a three-system-node server provides 24 slots, and a four-system-node server provides 32 slots.
Table 1-12 lists all the supported PCIe adapter feature codes inside the Power E1080 server node drawer.
Table 1-12 PCIe adapters supported on Power E1080 server node
Feature code | CCIN | Description | OS support
EN1A | 578F | PCIe Gen 3 32 Gb 2-port Fibre Channel Adapter | AIX, IBM i, and Linux
EN1B | 578F | PCIe Gen 3 LP 32 Gb 2-port Fibre Channel Adapter | AIX, IBM i, and Linux
EN1C | 578E | PCIe Gen 3 16 Gb 4-port Fibre Channel Adapter | AIX, IBM i, and Linux
EN1D | 578E | PCIe Gen 3 LP 16 Gb 4-port Fibre Channel Adapter | AIX, IBM i, and Linux
EN1F | 579A | PCIe Gen 3 LP 16 Gb 4-port Fibre Channel Adapter | AIX, IBM i, and Linux
EN1H | 579B | PCIe Gen 3 LP 2-Port 16 Gb Fibre Channel Adapter | AIX, IBM i, and Linux
EN1K | 579C | PCIe Gen 4 LP 32 Gb 2-port Optical Fibre Channel Adapter | AIX, IBM i, and Linux
EN2A | 579D | PCIe Gen 3 16 Gb 2-port Fibre Channel Adapter | AIX, IBM i, and Linux
EN2B | 579D | PCIe Gen 3 LP 16 Gb 2-port Fibre Channel Adapter | AIX, IBM i, and Linux
5260 | 576F | PCIe2 LP 4-port 1 GbE Adapter | AIX, IBM i, and Linux
5899 | 576F | PCIe2 4-port 1 GbE Adapter | AIX, IBM i, and Linux
EC2T | 58FB | PCIe Gen 3 LP 2-Port 25/10 Gb NIC&ROCE SR/Cu Adapter (1) | AIX, IBM i, and Linux
EC2U | 58FB | PCIe Gen 3 2-Port 25/10 Gb NIC&ROCE SR/Cu Adapter (1) | AIX, IBM i, and Linux
EC67 | 2CF3 | PCIe Gen 4 LP 2-port 100 Gb ROCE EN LP adapter | AIX, IBM i, and Linux
EN0S | 2CC3 | PCIe2 4-Port (10 Gb+1 GbE) SR+RJ45 Adapter | AIX, IBM i, and Linux
EN0T | 2CC3 | PCIe2 LP 4-Port (10 Gb+1 GbE) SR+RJ45 Adapter | AIX, IBM i, and Linux
EN0X | 2CC4 | PCIe2 LP 2-port 10/1 GbE BaseT RJ45 Adapter | AIX, IBM i, and Linux

(1) Requires an SFP to provide 10 Gb, 25 Gb, or 1 Gb BaseT connectivity
1.5.5 System node disk and media features
At the time of this writing, the Power E1080 server node supports up to four 7 mm NVMe U.2 drives that are plugged into the 4-bay NVMe carrier backplane (feature code EJBC). Each system node requires one backplane, even if no NVMe U.2 drives are selected.
Each NVMe U.2 drive can be independently assigned to different LPARs for hosting the operating system and to start from them. They also can be used for non-data intensive workloads. NVMe U.2 drives are concurrently replaceable.
Table 1-13 lists the available NVMe drive feature codes for the Power E1080 and the operating system support.
Table 1-13 NVMe features
Feature code | CCIN | Description | OS support
EC5J | 59B4 | Mainstream 800 GB SSD NVMe U.2 module | AIX and Linux
EC5K | 59B5 | Mainstream 1.6 TB SSD NVMe U.2 module | AIX and Linux
EC5L | 59B6 | Mainstream 3.2 TB SSD NVMe U.2 module | AIX and Linux
EC7Q | 59B4 | 800 GB Mainstream NVMe U.2 SSD 4k for AIX/Linux | AIX and Linux
For systems that are running IBM i, an expansion or storage drawer can meet the NVMe requirements.
1.5.6 System node USB features
The Power E1080 supports one stand-alone external USB drive that is associated with feature code EUA5. The feature code includes the cable that is used to connect the USB drive to the preferred front-accessible USB port on the SCU.
The Power E1080 server node does not offer an integrated USB port. The USB 3.0 adapter feature code EC6J is required to provide connectivity to an optional external USB DVD drive and requires one system node or I/O expansion drawer PCIe slot. The adapter connects to the USB port in the rear of the SCU with the cable that is associated with feature code EC6N.
Because this cable is 1.5 m long, in a Power E1080 with more than one system node, the USB 3.0 adapter can be used in the first or the second system node only.
The USB 3.0 adapter feature code EC6J supports assignment to an LPAR and can be migrated from one operating LPAR to another, including the connected DVD drive. This design allows the DVD drive to be assigned to any LPAR according to need.
Dynamic allocation of system resources such as processor, memory, and I/O is also referred to as dynamic LPAR or DLPAR.
For more information about the USB subsystem, see 1.5.6, “System node USB features” on page 23.
1.5.7 Power supply features
Each Power E1080 server node has four 1950 W bulk power supply units that are operating at 240 V. These power supply unit features are a default configuration on every Power E1080 system node. The four units per system node do not have an associated feature code and are always auto-selected by the IBM configurator when a new configuration task is started.
Four power cords from the power distribution units (PDUs) drive these power supplies, which connect to four C13/C14 type receptacles on the linecord conduit in the rear of the system. The power linecord conduit sources power from the rear and connects to the power supply units in the front of the system.
The system design provides N+2 redundancy for system bulk power, which allows the system to continue operation with any two of the power supply units functioning. The failed units must remain in the system until new power supply units are available for replacement.
The power supply units are hot-swappable, which allows replacement of a failed unit without system interruption. The power supply units are placed in front of the system, which makes any necessary service that much easier.
Figure 1-7 shows the power supply units and their physical locations marked as E1, E2, E3, and E4 in the system.
Figure 1-7 Power supply units
1.5.8 System node PCIe interconnect features
Each system node provides 8 PCIe Gen5 hot-plug enabled slots; therefore, a 2-node system provides 16 slots, a 3-node system provides 24 slots, and a 4-node system provides 32 slots.
Up to four I/O expansion drawer features #EMX0 can be connected per node to achieve the slot capacity that is listed in Table 1-14.
Table 1-14 PCIe slots availability for different system node configurations
System nodes | I/O expansion drawers | Low-profile slots | Full-height slots
1 | 4 | 8 | 48
2 | 8 | 16 | 96
3 | 12 | 24 | 144
4 | 16 | 32 | 192
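A minimal Python sketch that reproduces the slot counts in Table 1-14 follows. The per-node figures (eight internal PCIe Gen 5 slots, up to four #EMX0 drawers, and 12 full-height PCIe Gen 3 slots per drawer) come from the text; the names are illustrative only.

```python
# Hedged sketch of the slot arithmetic behind Table 1-14.
INTERNAL_SLOTS_PER_NODE = 8      # PCIe Gen 5 low-profile slots per system node
DRAWERS_PER_NODE = 4             # maximum #EMX0 drawers per system node
SLOTS_PER_DRAWER = 12            # two 6-slot fanout modules per drawer

for nodes in range(1, 5):
    drawers = nodes * DRAWERS_PER_NODE
    low_profile = nodes * INTERNAL_SLOTS_PER_NODE
    full_height = drawers * SLOTS_PER_DRAWER
    print(f"{nodes} node(s): {drawers} drawers, {low_profile} low-profile slots, "
          f"{full_height} full-height slots")
# Four nodes -> 16 drawers, 32 low-profile slots, and 192 full-height slots.
```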
Each I/O expansion drawer consists of two Fanout Modules (feature #EMXH), each providing six PCIe slots. Each Fanout Module connects to the system by using a pair of CXP cable features. The CXP cable features are listed in Table 1-15.
Table 1-15 Optical CXP cable features
Feature code | Description | Order type
ECCR | 2 M Active Optical Cable Pair for PCIe Gen 3 Expansion Drawer | Both
ECCY | 10 M Active Optical Cable Pair for PCIe Gen 3 Expansion Drawer | Both
ECCZ | 20 M Active Optical Cable Pair for PCIe Gen 3 Expansion Drawer | Both
RPO-only cable features are not available for new orders or MES upgrades; they are used only when migrating from a source system. Select a longer-length feature code for an inter-rack connection between the system node and the expansion drawer.
Each pair of CXP optical cables connects to a system node by using one 2-port PCIe optical cable adapter (feature #EJ24), which is placed in the CEC.
Both the CXP optical cable pair and the optical cable adapter features are concurrently maintainable. Therefore, careful balancing of I/O by assigning adapters through redundant #EMX0 expansion drawers and different system nodes can ensure high availability for the I/O resources that are assigned to partitions.
 
Note: At the time of this writing, feature EMXH is the only supported or orderable fanout module. Previous fanout module features EMXF and EMXG are not supported on the Power E1080 system.
For more information about internal buses and the architecture of internal and external I/O subsystems, see 2.5, “Internal I/O subsystem” on page 83.
1.6 I/O drawers
If more PCIe slots beyond the system node slots are required, the Power E1080 server supports adding I/O expansion drawers.
At initial availability, zero, one, two, three, or four PCIe Gen 3 I/O Expansion Drawers per system node are supported. To connect an I/O expansion drawer, a PCIe slot is used to attach a 6-slot expansion module in the I/O drawer. A PCIe Gen 3 I/O Expansion Drawer (#EMX0) holds two expansion modules that are attached to any two PCIe slots in the same system node or in different system nodes.
For the connection of SAS disks, a disk-only I/O drawer is available. The EXP24SX is the only disk drawer that is supported.
1.6.1 PCIe Gen 3 I/O Expansion Drawer
The 19-inch 4 EIA (4U) PCIe Gen 3 I/O Expansion Drawer (#EMX0) and two PCIe Fanout Modules (#EMXH) provide 12 PCIe I/O full-length, full-height slots. One Fanout Module provides six PCIe slots that are labeled C1 - C6. C1 and C4 are x16 slots, and C2, C3, C5, and C6 are x8 slots. PCIe Gen 1, Gen 2, and Gen 3 full-height adapters are supported.
A blind-swap cassette (BSC) is used to house the full-high adapters that are installed in these slots. The BSC is the same BSC that is used with the previous generation server’s 12X attached I/O drawers (#5802, #5803, #5877, and #5873). The drawer is shipped with a full set of BSCs, even if the BSCs are empty.
Concurrent repair and adding or removing PCIe adapters is done through HMC-guided menus or by operating system support utilities.
A PCIe CXP converter adapter and Active Optical Cables (AOCs) connect the system node to a PCIe Fanout Module in the I/O expansion drawer. Each PCIe Gen 3 I/O Expansion Drawer has two power supplies.
Drawers can be added to the server later, but system downtime must be scheduled for adding a PCIe Gen 3 Optical Cable Adapter or a PCIe Gen 3 I/O drawer (#EMX0) or Fanout Module.
Figure 1-8 shows a PCIe Gen 3 I/O Expansion Drawer.
Figure 1-8 PCIe Gen 3 I/O Expansion Drawer
The AOC cable feature codes are listed in Table 1-16 on page 27. Also listed is the supported order type. The feature codes that are associated to cables that support RPO only are not available for new orders or MES upgrades. Instead, they are used to manage the migration of supported I/O expansion drawers from previous IBM POWER® technology-based servers to the Power E1080. Feature codes that are associated to cables with longer length are required to support inter-rack connection between the system node and I/O expansion drawer.
Table 1-16 Active Optical Cables feature codes
Feature code | Description | Order type
ECCR | 2M Active Optical Cable Pair for PCIe Gen 3 Expansion Drawer | Both
ECCY | 10M Active Optical Cable Pair for PCIe Gen 3 Expansion Drawer | Both
ECCZ | 20M Active Optical Cable Pair for PCIe Gen 3 Expansion Drawer | Both
Careful balancing of I/O, assigning adapters through redundant EMX0 expansion drawers, and connectivity to different system nodes can ensure high-availability for I/O resources assigned to LPARs.
Note: Consider the following points:
Older PCIe Gen 3 Fanout Modules (#EMXG, #ELMF, and #ELMG) cannot be mixed with the PCIe Gen 3 Fanout Module (feature #EMXH) in the same I/O Expansion Drawer.
Older I/O Expansion Drawers with PCIe Gen 3 Fanout Modules (#ELMF, #ELMG) can be mixed with I/O Expansion Drawers with PCIe Gen 3 Fanout Modules (#EMXH) in the same configuration.
Older PCIe Gen 3 Fanout Modules (#ELMF, #ELMG) can be connected to older PCIe Gen 3 Optical Cable Adapters (#EJ05 or #EJ08).
PCIe Gen 3 Optical Cable Adapters (#EJ20 or #EJ1R) require the use of Optical Cables (#ECCR, #ECCX, #ECCY, or #ECCZ) or copper cable #ECCS.
 
1.6.2 I/O drawers and usable PCIe slots
Figure 1-9 shows the rear view of the PCIe Gen 3 I/O Expansion Drawer with the location codes for the PCIe adapter slots in the PCIe Gen 3 6-slot Fanout Module.
Figure 1-9 Rear view of a PCIe Gen 3 I/O Expansion Drawer with PCIe slots location codes
Table 1-17 lists the PCIe slots in the PCIe Gen 3 I/O Expansion Drawer.
Table 1-17 PCIe slot locations and descriptions for the PCIe Gen 3 I/O Expansion Drawer
Slot | Location code | Description
Slot 1 | P1-C1 | PCIe Gen 3, x16
Slot 2 | P1-C2 | PCIe Gen 3, x8
Slot 3 | P1-C3 | PCIe Gen 3, x8
Slot 4 | P1-C4 | PCIe Gen 3, x16
Slot 5 | P1-C5 | PCIe Gen 3, x8
Slot 6 | P1-C6 | PCIe Gen 3, x8
Slot 7 | P2-C1 | PCIe Gen 3, x16
Slot 8 | P2-C2 | PCIe Gen 3, x8
Slot 9 | P2-C3 | PCIe Gen 3, x8
Slot 10 | P2-C4 | PCIe Gen 3, x16
Slot 11 | P2-C5 | PCIe Gen 3, x8
Slot 12 | P2-C6 | PCIe Gen 3, x8
Consider the following points regarding the information in Table 1-17 on page 28:
All slots support full-length, regular-height adapters or short (low-profile) adapters with a regular-height tailstock in single-wide, Gen 3 BSCs.
Slots C1 and C4 in each PCIe Gen 3 6-slot Fanout Module are x16 PCIe Gen 3 buses; slots C2, C3, C5, and C6 are x8 PCIe buses.
All slots support enhanced error handling (EEH).
All PCIe slots are hot-swappable and support concurrent maintenance.
Table 1-18 lists the maximum number of I/O drawers that are supported and the total number of PCIe slots that are available when the expansion drawer consists of a single drawer type.
Table 1-18 Maximum number of supported I/O drawers and the total number of PCIe slots
System nodes | Maximum #EMX0 drawers | PCIe Gen 3 x16 slots | PCIe Gen 3 x8 slots | Total PCIe Gen 3 slots
One system node | 4 | 16 | 32 | 48
Two system nodes | 8 | 32 | 64 | 96
Three system nodes | 12 | 48 | 96 | 144
Four system nodes | 16 | 64 | 128 | 192
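The totals in Table 1-18 follow directly from the drawer layout: each #EMX0 drawer contains two 6-slot fanout modules (four x16 and eight x8 slots per drawer, per Table 1-17), and up to four drawers are supported per system node. A minimal Python sketch that reproduces the table from those two constants:

# Reproduce the slot totals in Table 1-18 from the per-drawer slot layout.
# Constants are taken from Table 1-17 (slot mix per drawer) and Table 1-18
# (maximum of four #EMX0 drawers per system node).
X16_PER_DRAWER = 4
X8_PER_DRAWER = 8
DRAWERS_PER_NODE = 4

for nodes in range(1, 5):
    drawers = nodes * DRAWERS_PER_NODE
    x16 = drawers * X16_PER_DRAWER
    x8 = drawers * X8_PER_DRAWER
    print(f"{nodes} system node(s): {drawers} drawers, "
          f"{x16} x16 + {x8} x8 = {x16 + x8} PCIe Gen 3 slots")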
1.6.3 EXP24SX SAS Storage Enclosures
If you need more disks than are available with the internal disk bays, you can attach more external disk subsystems, such as an EXP24SX SAS Storage Enclosure (#ESLS).
The EXP24SX drawer is a storage expansion enclosure with 24 2.5-inch SFF SAS bays. It supports up to 24 hot-plug HDDs or SSDs in only 2 EIA of space in a 19-inch rack. The EXP24SX SFF bays use SFF Gen2 (SFF-2) carriers or trays.
With AIX, Linux, and VIOS, the EXP24SX can be ordered with four sets of 6 bays (mode 4), two sets of 12 bays (mode 2), or one set of 24 bays (mode 1). With IBM i, one set of 24 bays (mode 1) is supported. The mode setting can be changed in the field by using software commands along with a specifically documented procedure.
 
Important: When changing modes, a skilled, technically qualified person must follow the special documented procedures. Improperly changing modes can destroy RAID sets, which prevents access to data, or can allow other partitions to access another partition’s data.
The attachment between the EXP24SX drawer and the PCIe Gen 3 SAS adapter is through SAS YO12 or X12 cables. The PCIe Gen 3 SAS adapters support 6 Gb throughput. The EXP24SX drawer can support up to 12 Gb throughput if future SAS adapters support that capability.
The EXP24SX drawer includes redundant AC power supplies and two power cords.
Figure 1-10 shows the EXP24SX drawer.
Figure 1-10 EXP24SX drawer
 
Note: For the EXP24SX drawer, a maximum of 24 2.5-inch SSDs or 2.5-inch HDDs are supported in the #ESLS 24 SAS bays. HDDs and SSDs cannot be mixed in the same mode-1 drawer. HDDs and SSDs can be mixed in a mode-2 or mode-4 drawer, but they cannot be mixed within a logical split of the drawer. For example, in a mode-2 drawer with two sets of 12 bays, one set can hold SSDs and one set can hold HDDs, but you cannot mix SSDs and HDDs in the same set of 12 bays.
For more information about SAS cabling and cabling configurations, see IBM Documentation.
Table 1-19 lists the SFF-2 SSD and HDD feature codes that the Power E1080 supports in the expansion drawer at the time of this writing.
Table 1-19 Supported SFF-2 SSD and HDD feature codes in the expansion drawer
Feature code | CCIN | Description | OS support
ES94 | 5B10 | 387 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ES95 | 5B10 | 387 GB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
ESB2 | 5B16 | 387 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | AIX and Linux
ESB6 | 5B17 | 775 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | AIX and Linux
ESBA | 5B10 | 387 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESBB | 5B10 | 387 GB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
ESBG | 5B11 | 775 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESBH | 5B11 | 775 GB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
ESBL | 5B12 | 1.55 TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESBM | 5B12 | 1.55 TB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
ESGV | 5B16 | 387 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | AIX and Linux
ESGZ | 5B17 | 775 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | AIX and Linux
ESJ0 | 5B29 | 931 GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESJ1 | 5B29 | 931 GB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESJ2 | 5B21 | 1.86 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESJ3 | 5B21 | 1.86 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESJ4 | 5B2D | 3.72 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESJ5 | 5B2D | 3.72 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESJ6 | 5B2F | 7.45 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESJ7 | 5B2F | 7.45 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESJJ | 5B29 | 931 GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESJK | 5B29 | 931 GB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESJL | 5B21 | 1.86 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESJM | 5B21 | 1.86 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESJN | 5B2D | 3.72 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESJP | 5B2D | 3.72 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESJQ | 5B2F | 7.44 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESJR | 5B2F | 7.44 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESK1 | 5B16 | 387 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | AIX and Linux
ESK3 | 5B17 | 775 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | AIX and Linux
ESK8 | 5B10 | 387 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESK9 | 5B10 | 387 GB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
ESKC | 5B11 | 775 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESKD | 5B11 | 775 GB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
ESKG | 5B12 | 1.55 TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESKH | 5B12 | 1.55 TB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
ESKK | 5B29 | 931 GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESKM | 5B29 | 931 GB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESKP | 5B21 | 1.86 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESKR | 5B21 | 1.86 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESKT | 5B2D | 3.72 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESKV | 5B2D | 3.72 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESKX | 5B2F | 7.44 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESKZ | 5B2F | 7.44 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESMB | 5B29 | 931 GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESMD | 5B29 | 931 GB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESMF | 5B21 | 1.86 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESMH | 5B21 | 1.86 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESMK | 5B2D | 3.72 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESMS | 5B2D | 3.72 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESMV | 5B2F | 7.44 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESMX | 5B2F | 7.44 TB Mainstream SAS 4k SFF-2 SSD for IBM i | IBM i
ESNA | 5B11 | 775 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESNB | 5B11 | 775 GB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
ESNE | 5B12 | 1.55 TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ESNF | 5B12 | 1.55 TB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
ETK1 | - | 387 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | AIX and Linux
ETK3 | - | 775 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | AIX and Linux
ETK8 | - | 387 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ETK9 | - | 387 GB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
ETKC | - | 775 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ETKD | - | 775 GB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
ETKG | - | 1.55 TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | AIX and Linux
ETKH | - | 1.55 TB Enterprise SAS 4k SFF-2 SSD for IBM i | IBM i
1953 | 19B1 | 300 GB 15k RPM SAS SFF-2 Disk Drive (AIX/Linux) | AIX and Linux
1964 | 19B3 | 600 GB 10k RPM SAS SFF-2 Disk Drive (AIX/Linux) | AIX and Linux
ESEU | 59D2 | 571 GB 10K RPM SAS SFF-2 Disk Drive 4K Block - 4224 | IBM i
ESEV | 59D2 | 600 GB 10K RPM SAS SFF-2 Disk Drive 4K Block - 4096 | AIX and Linux
ESF2 | 59DA | 1.1 TB 10K RPM SAS SFF-2 Disk Drive 4K Block - 4224 | IBM i
ESF3 | 59DA | 1.2 TB 10K RPM SAS SFF-2 Disk Drive 4K Block - 4096 | AIX and Linux
ESFS | 59DD | 1.7 TB 10K RPM SAS SFF-2 Disk Drive 4K Block - 4224 | IBM i
ESFT | 59DD | 1.8 TB 10K RPM SAS SFF-2 Disk Drive 4K Block - 4096 | AIX and Linux
ESNL | 5B43 | 283 GB 15K RPM SAS SFF-2 4k Block Cached Disk Drive (IBM i) | IBM i
ESNM | 5B43 | 300 GB 15K RPM SAS SFF-2 4k Block Cached Disk Drive (AIX/Linux) | AIX and Linux
ESNQ | 5B47 | 571 GB 15K RPM SAS SFF-2 4k Block Cached Disk Drive (IBM i) | IBM i
ESNR | 5B47 | 600 GB 15K RPM SAS SFF-2 4k Block Cached Disk Drive (AIX/Linux) | AIX and Linux
1.6.4 IBM System Storage
The IBM System Storage Disk Systems products and offerings provide compelling storage solutions with superior value for all levels of business, from entry-level to high-end storage systems.
IBM Storage simplifies data infrastructure by using an underlying software foundation to strengthen and streamline the storage in the hybrid cloud environment, which uses a simplified approach to containerization, management, and data protection. For more information about the various offerings, see this web page.
The following section highlights a few of the offerings.
IBM FlashSystem Family
The IBM FlashSystem® family is a portfolio of cloud-enabled storage systems designed to be easily deployed and quickly scaled to help optimize storage configurations, streamline issue resolution, and lower storage costs.
IBM FlashSystem is built with IBM Spectrum® Virtualize software to help deploy sophisticated hybrid cloud storage solutions, accelerate infrastructure modernization, address security needs, and maximize value by using the power of AI. The products are designed to provide enterprise-grade functions without compromising affordability or performance. They also offer the advantages of end-to-end NVMe, the innovation of IBM FlashCore® technology, and SCM for ultra-low latency. For more information, see this web page.
IBM System Storage DS8000
IBM DS8900F is the next generation of enterprise data systems that are built with the most advanced Power processor technology and feature ultra-low application response times.
Designed for data-intensive and mission-critical workloads, the DS8900F adds next-level performance, data protection, resiliency, and availability across hybrid cloud solutions through ultra-low latency, better than seven nines (99.99999%) availability, transparent cloud tiering, and advanced data protection against malware and ransomware. This enterprise-class storage solution provides superior performance and higher capacity, which enables the consolidation of all mission-critical workloads in one place.
IBM DS8900F can provide 100% data encryption at-rest, in-flight and in the cloud. This flexible storage supports IBM Power, IBM Z®, and IBM LinuxONE. For more information, see this web page.
IBM SAN Volume Controller
IBM SAN Volume Controller is an enterprise-class system that consolidates storage from over 500 IBM and third-party storage systems to improve efficiency, simplify management and operations, modernize storage with new capabilities, and enable a common approach to hybrid cloud regardless of storage system type.
IBM SAN Volume Controller provides a complete set of data resilience capabilities with high availability, business continuance, and data security features. Storage supports automated tiering with AI-based IBM Easy Tier® that can help improve performance at a lower cost. For more information, see this web page.
1.7 System racks
The Power E1080 server fits a standard 19-inch rack. The server is certified and tested in the IBM Enterprise racks (7965-S42, 7014-T42, 7014-T00, or 7965-94Y). Customers can choose to place the server in other racks if they are confident that those racks have the strength, rigidity, depth, and hole pattern characteristics that are needed. Contact IBM Support to determine whether other racks are suitable.
 
Order information: It is highly recommended that you order the Power E1080 server with an IBM 42U enterprise rack #ECR0 (7965-S42). This rack provides a more complete and higher-quality environment for IBM Manufacturing system assembly and testing, and is delivered as a complete package.
If a system is installed in a rack or cabinet that is not from IBM, ensure that the rack meets the requirements that are described in 1.7.7, “Original equipment manufacturer racks” on page 41.
 
Responsibility: The customer is responsible for ensuring the installation of the drawer in the preferred rack or cabinet results in a configuration that is stable, serviceable, safe, and compatible with the drawer requirements for power, cooling, cable management, weight, and rail security.
1.7.1 New rack considerations
Consider the following points when racks are ordered:
The new IBM Enterprise 42U Slim Rack 7965-S42 offers 42 EIA units (U) of space in a slim footprint.
The 7014-T42, 7014-T00, and 7965-94Y racks are no longer available to purchase with a Power E1080 server. Installing a Power E1080 server in these racks is still supported.
 
Vertical PDUs: All PDUs that are installed in a rack that contains a Power E1080 server must be installed horizontally to allow for cable routing in the sides of the rack.
1.7.2 IBM Enterprise 42U Slim Rack 7965-S42
The 2.0-meter (79-inch) Model 7965-S42 is compatible with past and present IBM Power servers and provides an excellent 19-inch rack enclosure for your data center. Its 600 mm (23.6 in.) width combined with its 1100 mm (43.3 in.) depth plus its 42 EIA enclosure capacity provides great footprint efficiency for your systems. It can be placed easily on standard 24-inch floor tiles.
Compared to the 7965-94Y Slim Rack, the Enterprise Slim Rack provides extra strength and shipping and installation flexibility.
The 7965-S42 rack includes space for up to four PDUs in side pockets. Extra PDUs beyond four are mounted horizontally and each uses 1U of rack space.
The Enterprise Slim Rack front door, which can be Basic Black/Flat (#ECRM) or High-End appearance (#ECRF), is made of perforated steel, which provides ventilation, physical security, and visibility of the indicator lights of the installed equipment.
A lock that is identical to the locks in the rear doors is standard. The door (#ECRG) can be hinged on the left or right side.
 
Orientation: #ECRF must not be flipped because the IBM logo would be upside down.
1.7.3 AC power distribution unit and rack content
The Power E1080 servers that are integrated into a rack at the factory feature PDUs that are mounted horizontally in the rack. Each PDU takes 1U of space in the rack. Mounting the PDUs vertically in the side of the rack can cause cable routing issues and interfere with optimal service access.
Two possible PDU ratings are supported: 60A/63A (orderable in most countries) and 30A/32A. Consider the following points:
The 60A/63A PDU supports four system node power supplies and one I/O expansion drawer, or eight I/O expansion drawers.
The 30A/32A PDU supports two system node power supplies and one I/O expansion drawer, or four I/O expansion drawers.
Rack-integrated system orders require at least two of #7109, #7188, or #7196:
The Intelligent PDU (iPDU) with Universal UTG0247 Connector (#7109) is an intelligent AC PDU that enables users to monitor the amount of power that is used by the devices that are plugged into it. This PDU provides 12 C13 power outlets and receives power through a UTG0247 connector. It can be used in many countries and for many applications by varying the PDU to Wall Power Cord, which must be ordered separately.
Each iPDU requires one PDU to Wall Power Cord. Supported power cords include #6489, #6491, #6492, #6653, #6654, #6655, #6656, #6657, and #6658.
Power Distribution Unit (#7188) mounts in a 19-inch rack and provides 12 C13 power outlets. The PDU has six 16 A circuit breakers, with two power outlets per circuit breaker.
System units and expansion units must use a power cord with a C14 plug to connect to #7188. One of the following power cords must be used to distribute power from a wall outlet to #7188: #6489, #6491, #6492, #6653, #6654, #6655, #6656, #6657, or #6658.
The Three-phase Power Distribution Unit (#7196) provides six C19 power outlets and is rated up to 48 A. It has a 4.3 m (14 ft) fixed power cord to attach to the power source (IEC309 60A plug [3P+G]). A separate “to-the-wall” power cord is not required or orderable.
Use the Power Cord 2.8 m (9.2 ft), Drawer to Wall/IBM PDU (250V/10A) (#6665) to connect devices to this PDU. These power cords are different than the ones that are used for the #7188 and #7109 PDUs.
Supported countries for the #7196 PDU are Antigua and Barbuda, Aruba, Bahamas, Barbados, Belize, Bermuda, Bolivia, Brazil, Canada, Cayman Islands, Colombia, Costa Rica, Dominican Republic, Ecuador, El Salvador, Guam, Guatemala, Haiti, Honduras, Indonesia, Jamaica, Japan, Mexico, Netherlands Antilles, Nicaragua, Panama, Peru, Puerto Rico, Surinam, Taiwan, Trinidad and Tobago, US, and Venezuela.
High-function PDUs provide more electrical power per PDU and offer better “PDU footprint” efficiency. In addition, they are intelligent PDUs that provide insight to power usage by receptacle and remote power on and off capability for easier support by individual receptacle. The new PDUs are orderable as #EPTJ, #EPTL, #EPTN, and #EPTQ.
High-function PDU FCs are listed in Table 1-20.
Table 1-20 Available high-function PDUs
PDUs | 1-phase or 3-phase depending on country wiring standards | 3-phase 208 V depending on country wiring standards
Nine C19 receptacles | EPTJ | EPTL
Twelve C13 receptacles | EPTN | EPTQ
In addition, the following high-function PDUs are available:
High Function 9xC19 PDU plus (#ECJJ)
This intelligent, switched 200-240 volt AC PDU includes nine C19 receptacles on the front of the PDU. The PDU is mounted on the rear of the rack, which makes the nine C19 receptacles easily accessible. For comparison, this PDU is most similar to the earlier generation #EPTJ PDU.
High Function 9xC19 PDU plus 3-Phase (#ECJL)
This intelligent, switched 208 volt 3-phase AC PDU includes nine C19 receptacles on the front of the PDU. The PDU is mounted on the rear of the rack, which makes the nine C19 receptacles easily accessible. For comparison, this PDU is most similar to the earlier generation #EPTL PDU.
High Function 12xC13 PDU plus (#ECJN)
This intelligent, switched 200-240 volt AC PDU includes 12 C13 receptacles on the front of the PDU. The PDU is mounted on the rear of the rack, which makes the 12 C13 receptacles easily accessible. For comparison, this PDU is most similar to the earlier generation #EPTN PDU.
High Function 12xC13 PDU plus 3-Phase (#ECJQ)
This intelligent, switched 208 volt 3-phase AC PDU includes 12 C13 receptacles on the front of the PDU. The PDU is mounted on the rear of the rack, which makes the 12 C13 receptacles easily accessible. For comparison, this PDU is most similar to the earlier generation #EPTQ PDU.
Table 1-21 lists the feature codes for the high-function PDUs announced in October 2019.
Table 1-21 High-function PDUs available after October 2019
PDUs | 1-phase or 3-phase depending on country wiring standards | 3-phase 208 V depending on country wiring standards
Nine C19 receptacles | ECJJ | ECJL
Twelve C13 receptacles | ECJN | ECJQ
Two more PDUs can be installed horizontally in the rear of the rack. Mounting PDUs horizontally uses 1U per PDU and reduces the space that is available for other racked components. When mounting PDUs horizontally, the preferred approach is to use fillers in the EIA units that are occupied by these PDUs to facilitate proper air-flow and ventilation in the rack.
The PDU receives power through a UTG0247 power-line connector. Each PDU requires one PDU-to-wall power cord. Various power cord features are available for various countries and applications by varying the PDU-to-wall power cord, which must be ordered separately.
Each power cord provides the unique design characteristics for the specific power requirements. To match new power requirements and save previous investments, these power cords can be requested with an initial order of the rack or with a later upgrade of the rack features.
Table 1-22 lists the available wall power cord options for the PDU and iPDU features, which must be ordered separately.
Table 1-22 Wall power cord options for the PDU and iPDU features
Feature code | Wall plug | Rated voltage (V AC) | Phase | Rated amperage | Geography
6653 | IEC 309, 3P+N+G, 16 A | 230 | 3 | 16 amps/phase | Internationally available
6489 | IEC 309, 3P+N+G, 32 A | 230 | 3 | 32 amps/phase | EMEA
6654 | NEMA L6-30 | 200 - 208, 240 | 1 | 24 amps | US, Canada, LA, and Japan
6655 | RS 3750DP (watertight) | 200 - 208, 240 | 1 | 24 amps | US, Canada, LA, and Japan
6656 | IEC 309, P+N+G, 32 A | 230 | 1 | 24 amps | EMEA
6657 | PDL | 230-240 | 1 | 32 amps | Australia, New Zealand
6667 | PDL | 380-415 | 3 | 32 amps | Australia, New Zealand
6658 | Korean plug | 220 | 1 | 30 amps | North and South Korea
6492 | IEC 309, 2P+G, 60 A | 200 - 208, 240 | 1 | 48 amps | US, Canada, LA, and Japan
6491 | IEC 309, P+N+G, 63 A | 230 | 1 | 63 amps | EMEA
 
Notes: Ensure that a suitable power cord feature is configured to support the power that is being supplied. Based on the power cord that is used, the PDU can supply 4.8 - 19.2 kVA. The power of all the drawers that are plugged into the PDU must not exceed the power cord limitation.
The Universal PDUs are compatible with previous models.
To better enable electrical redundancy, each CEC has four power supplies that must be connected to separate PDUs, which are not included in the base order.
For maximum availability, a preferred approach is to connect power cords from the same system to two separate PDUs in the rack, and to connect each PDU to independent power sources.
For more information about power requirements of and the power cord for the 7965-94Y rack, see IBM Documentation.
1.7.4 PDU connection limits
Two possible PDU ratings are supported: 60/63 amps and 30/32 amps. The PDU rating is determined by the power cord that is used to connect the PDU to the electrical supply. The number of system nodes and I/O expansion drawers that are supported by each power cord are listed in Table 1-23.
Table 1-23 Maximum supported enclosures by power cord
Feature code | Wall plug | PDU rating | Maximum supported system nodes per PDU pair | Maximum supported I/O drawers with no system nodes
6653 | IEC 309, 3P+N+G, 16 A | 60 amps | Two system nodes and 1 I/O expansion drawer | 8
6489 | IEC 309, 3P+N+G, 32 A | 60 amps | Two system nodes and 1 I/O expansion drawer | 8
6654 | NEMA L6-30 | 30 amps | One system node and 1 I/O expansion drawer | 4
6655 | RS 3750DP (watertight) | 30 amps | One system node and 1 I/O expansion drawer | 4
6656 | IEC 309, P+N+G, 32 A | 30 amps | One system node and 1 I/O expansion drawer | 4
6657 | PDL | 30 amps | One system node and 1 I/O expansion drawer | 4
6658 | Korean plug | 30 amps | One system node and 1 I/O expansion drawer | 4
6492 | IEC 309, 2P+G, 60 A | 60 amps | Two system nodes and 1 I/O expansion drawer | 4
6491 | IEC 309, P+N+G, 63 A | 60 amps | Two system nodes and 1 I/O expansion drawer | 8
1.7.5 Rack-mounting rules
Consider the following primary rules when you mount the system into a rack:
The system can be placed at any location in the rack. For rack stability, start filling the rack from the bottom.
Any remaining space in the rack can be used to install other systems or peripheral devices. Ensure that the maximum permissible weight of the rack is not exceeded and the installation rules for these devices are followed.
Before placing the system into the service position, follow the rack manufacturer’s safety instructions regarding rack stability.
 
Order information: The racking approach for the initial order must be 7965-S42 or #ECR0. If an extra rack is required for I/O expansion drawers, an MES to a system or an #0553 must be ordered.
If you install the Power E1080 server into a 7965-S42 rack, no extra 2U of space needs to be reserved; all 42U can be populated with equipment.
1.7.6 Useful rack additions
This section highlights several rack addition solutions for IBM Power rack-based systems.
IBM System Storage 7226 Model 1U3 Multi-Media Enclosure
The IBM System Storage 7226 Model 1U3 Multi-Media Enclosure can accommodate up to two tape drives, two RDX removable disk drive docking stations, or up to four DVD-RAM drives.
The IBM System Storage 7226 Multi-Media Enclosure supports LTO Ultrium and DAT160 Tape technology, DVD-RAM, and RDX removable storage requirements on the following IBM systems:
IBM POWER6 processor-based systems
IBM POWER7 processor-based systems
IBM POWER8® processor-based systems
IBM POWER9 processor-based systems
IBM Power10 processor-based systems
The IBM System Storage 7226 Multi-Media Enclosure offers an expansive list of drive feature options, as listed in Table 1-24.
Table 1-24 Supported drive features for the 7226-1U3
Feature code | Description | Status
1420 | DVD-RAM SAS Optical Drive | Available
1422 | DVD-RAM Slim SAS Optical Drive | Available
5762 | DVD-RAM USB Optical Drive | Available
5763 | DVD Front USB Port Sled with DVD-RAM USB Drive | Available
5757 | DVD RAM Slim USB Optical Drive | Available
8348 | LTO Ultrium 6 Half High Fibre Tape Drive | Available
8341 | LTO Ultrium 6 Half High SAS Tape Drive | Available
8441 | LTO Ultrium 7 Half High SAS Tape Drive | Available
8546 | LTO Ultrium 8 Half High Fibre Tape Drive | Available
EU03 | RDX 3.0 Removable Disk Docking Station | Available
The following options are available:
LTO Ultrium 6 Half-High 2.5 TB SAS and FC Tape Drive: With a data transfer rate of up to 320 MBps (assuming a 2.5:1 compression), the LTO Ultrium 6 drive is read/write compatible with LTO Ultrium 6 and 5 media and read-only compatible with LTO Ultrium 4 media. By using data compression, an LTO-6 cartridge can store up to 6.25 TB of data.
The LTO Ultrium 7 drive offers a data rate of up to 300 MBps with compression. It also provides read/write compatibility with Ultrium 7 and Ultrium 6 media formats, and read-only compatibility with Ultrium 5 media formats. By using data compression, an LTO-7 cartridge can store up to 15 TB of data.
The LTO Ultrium 8 drive offers a data rate of up to 300 MBps with compression. It also provides read/write compatibility with Ultrium 8 and Ultrium 7 media formats. It is not read or write compatible with other Ultrium media formats. By using data compression, an LTO-8 cartridge can store up to 30 TB of data.
DVD-RAM: The 9.4 GB SAS Slim Optical Drive with an SAS and USB interface option is compatible with most standard DVD disks.
RDX removable disk drives: The RDX USB docking station is compatible with most RDX removable disk drive cartridges when it is used in the same OS. The 7226 offers the following RDX removable drive capacity options:
 – 500 GB (#1107)
 – 1.0 TB (#EU01)
 – 2.0 TB (#EU2T)
Removable RDX drives are in a rugged cartridge that inserts into an RDX removable (USB) disk docking station (#1103 or #EU03). RDX drives are compatible with docking stations that are installed internally in POWER8, POWER9, and Power10 processor-based servers, where applicable.
Figure 1-11 shows the IBM System Storage 7226 Multi-Media Enclosure.
Figure 1-11 IBM System Storage 7226 Multi-Media Enclosure
The IBM System Storage 7226 Multi-Media Enclosure offers a customer-replaceable unit (CRU) maintenance service to help make the installation or replacement of new drives efficient. Other 7226 components also are designed for CRU maintenance.
The IBM System Storage 7226 Multi-Media Enclosure is compatible with most POWER8, POWER9, and Power10 processor-based systems that offer current level AIX, IBM i, and Linux operating systems.
 
Unsupported: IBM i does not support 7226 USB devices.
For a complete list of host software versions and release levels that support the IBM System Storage 7226 Multi-Media Enclosure, see System Storage Interoperation Center (SSIC).
 
Note: Any of the existing 7216-1U2, 7216-1U3, and 7214-1U2 multimedia drawers are also supported.
Flat panel display options
The IBM 7316 Model TF5 is a rack-mountable flat panel console kit that can also be configured with the tray pulled forward and the monitor folded up, which provides full viewing and keying capability for the HMC operator.
The Model TF5 is a follow-on product to the Model TF4 and offers the following features:
A slim, sleek, and lightweight monitor design that occupies only 1U (1.75 in.) in a 19-inch standard rack
An 18.5-inch (409.8 mm x 230.4 mm) flat panel TFT monitor with truly accurate images and virtually no distortion
The ability to mount the IBM Travel Keyboard in the 7316-TF5 rack keyboard tray
Support for the IBM 1x8 Rack Console Switch (#4283) and IBM Keyboard/Video/Mouse (KVM) switches
The #4283 is a 1x8 Console Switch that fits in the 1U space behind the TF5. It is a CAT5-based switch that contains eight analog rack interface (ARI) ports for connecting PS/2 or USB console switch cables. It supports chaining of servers that use an IBM Conversion Options switch cable (#4269). This feature provides four cables that connect a KVM switch to a system, or can be used in a daisy-chain scenario to connect up to 128 systems to a single KVM switch. It also supports server-side USB attachments.
1.7.7 Original equipment manufacturer racks
The system can be installed in a suitable OEM rack if the rack conforms to the EIA-310-D standard for 19-inch racks, which is published by the Electronic Industries Alliance. For more information, see IBM Documentation.
The IBM Documentation provides the general rack specifications, including the following information:
The rack or cabinet must meet the EIA Standard EIA-310-D for 19-inch racks that was published August 24, 1992. The EIA-310-D standard specifies internal dimensions, for example, the width of the rack opening (width of the chassis), the width of the module mounting flanges, and the mounting hole spacing.
The front rack opening must be a minimum of 450 mm (17.72 in.) wide, and the rail-mounting holes must be 465 mm plus or minus 1.6 mm (18.3 in. plus or minus 0.06 in.) apart on center (horizontal width between vertical columns of holes on the two front-mounting flanges and on the two rear-mounting flanges).
Figure 1-12 is a top view showing the rack specification dimensions.
Figure 1-12 Rack specifications (top-down view)
The vertical distance between mounting holes must consist of sets of three holes that are spaced (from bottom to top) 15.9 mm (0.625 in.), 15.9 mm (0.625 in.), and 12.7 mm (0.5 in.) on center, which makes each three-hole set of vertical hole spacing 44.45 mm (1.75 in.) apart on center.

Figure 1-13 shows the vertical distances between the mounting holes.
Figure 1-13 Vertical distances between mounting holes
The following rack hole sizes are supported for racks where IBM hardware is mounted:
 – 7.1 mm (0.28 in.) plus or minus 0.1 mm (round)
 – 9.5 mm (0.37 in.) plus or minus 0.1 mm (square)
The rack or cabinet must be capable of supporting an average load of 20 kg (44 lb.) of product weight per EIA unit. For example, a four EIA drawer has a maximum drawer weight of 80 kg (176 lb.).
1.8 Hardware management console overview
The Hardware Management Console (HMC) can be a hardware appliance or virtual appliance that can be used to configure and manage your systems. The HMC connects to one or more managed systems and provides capabilities for the following primary functions:
Provides systems management functions, such as power off, power on, system settings, Capacity on Demand, Enterprise Pools, Shared Processor Pools, Performance and Capacity Monitoring, and starting the Advanced System Management Interface (ASMI) for managed systems.
Delivers virtualization management through support for creating, managing, and deleting logical partitions, Live Partition Mobility, Remote Restart, configuring SR-IOV, managing Virtual I/O Servers, dynamic resource allocation, and operating system terminals.
Acts as the service focal point for systems and supports service functions, including call home, dump management, guided repair and verify, concurrent firmware updates for managed systems, and around-the-clock error reporting with Electronic Service Agent for faster support.
Provides appliance management capabilities for configuring the network and users on the HMC, and for updating and upgrading the HMC.
We discuss the available HMC offerings next.
1.8.1 HMC 7063-CR2
The 7063-CR2 IBM Power Systems HMC (see Figure 1-14) is a second-generation Power processor-based HMC.
The CR2 model includes the following features:
6-core POWER9 130W processor chip
64 GB (4x16 GB) or 128 GB (4x32 GB) of RAM
1.8 TB of internal disk capacity with RAID1 protection
Four 1 Gb Ethernet ports (RJ-45), two 10 Gb Ethernet ports (RJ-45), two USB 3.0 ports (front side) and two USB 3.0 ports (rear side), and one 1 Gb IPMI Ethernet port (RJ-45)
Two 900W power supply units
Remote Management Service: IPMI port (OpenBMC) and Redfish application programming interface (API); see the sketch after this list
Base Warranty is 1 year 9x5 with available optional upgrades
A USB Smart Drive is not included.
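Because the 7063-CR2 BMC exposes a standard Redfish service, basic appliance information can be retrieved over HTTPS. The following Python sketch is illustrative only: the BMC host name and credentials are placeholders, and the exact resources that are returned depend on the installed OpenBMC firmware level.

import requests

# Minimal sketch: query the Redfish service root of the 7063-CR2 BMC (OpenBMC).
# Host name and credentials are placeholders; verify=False is used here only
# because BMCs commonly ship with self-signed certificates.
BMC = "https://bmc.example.com"
session = requests.Session()
session.auth = ("admin", "changeme")
session.verify = False

root = session.get(f"{BMC}/redfish/v1/").json()   # Redfish service root
print(root.get("RedfishVersion"))

# List the manager resources (the BMC itself) that the service advertises.
managers = session.get(f"{BMC}{root['Managers']['@odata.id']}").json()
for member in managers.get("Members", []):
    print(member["@odata.id"])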
 
Note: The recovery media for V10R1 is the same for 7063-CR2 and 7063-CR1.
Figure 1-14 HMC 7063-CR2
The 7063-CR2 is compatible with flat panel console kits 7316-TF3, TF4, and TF5.
 
Note: The 7316-TF3 and TF4 were withdrawn from marketing.
1.8.2 Virtual HMC
Initially, the HMC was sold only as a hardware appliance with the HMC firmware preinstalled. However, IBM extended this offering to include a virtual appliance that can be deployed on ppc64le architectures or on x86 platforms.
Any customer with a valid contract can download the virtual appliance from the IBM Entitled Systems Support (ESS) website, or it can be included within an initial Power E1080 order.
The virtual HMC supports the following hypervisors:
On x86 processor-based servers:
 – KVM
 – Xen
 – VMware
On Power processor-based servers: PowerVM
The following minimum requirements must be met to install the virtual HMC (a deployment sketch that uses these values follows this list):
16 GB of memory
4 virtual processors
2 network interfaces (a maximum of 4 is allowed)
1 disk drive (500 GB of available disk space)
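As an illustration only, the following Python sketch shows how these minimums might translate into a KVM deployment on an x86 host by using virt-install. The VM name, image paths, and bridge names are hypothetical placeholders; the supported installation procedure is described in IBM Documentation.

import subprocess

# Hypothetical sketch: create a KVM guest for the HMC virtual appliance with the
# documented minimums (16 GB memory, 4 vCPUs, 500 GB disk, 2 network interfaces).
# All names and paths below are placeholders.
cmd = [
    "virt-install",
    "--name", "vhmc01",
    "--memory", "16384",                                              # 16 GB of memory
    "--vcpus", "4",                                                   # 4 virtual processors
    "--disk", "path=/var/lib/libvirt/images/vhmc01.qcow2,size=500",   # 500 GB disk
    "--network", "bridge=br0",                                        # first network interface
    "--network", "bridge=br1",                                        # second network interface
    "--cdrom", "/tmp/vHMC_x86_install.iso",                           # placeholder installation image
    "--os-variant", "generic",
]
subprocess.run(cmd, check=True)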
For an initial Power E1080 order with the IBM configurator (e-config), the HMC virtual appliance can be found by selecting add software → Other System Offerings (as product selections) and then:
5765-VHP for IBM HMC Virtual Appliance for Power V10
5765-VHX for IBM HMC Virtual Appliance x86 V10
For more information about an overview of the Virtual HMC, see this web page.
For more information about how to install the virtual HMC appliance and all requirements, see IBM Documentation.
1.8.3 Baseboard management controller network connectivity rules for 7063-CR2
The 7063-CR2 HMC features a baseboard management controller (BMC), which is a specialized service processor that monitors the physical state of the system by using sensors. OpenBMC that is used on 7063-CR2 provides a graphical user interface (GUI) that can be accessed from a workstation that has network connectivity to the BMC. This connection requires an Ethernet port to be configured for use by the BMC.
The 7063-CR2 provides two network interfaces (eth0 and eth1) for configuring network connectivity for BMC on the appliance.
Each interface maps to a different physical port on the system. Different management tools name the interfaces differently. The HMC task Console Management → Console Settings → Change BMC/IPMI Network Settings modifies only the Dedicated interface.
The BMC ports are listed in Table 1-25.
Table 1-25 BMC ports
Management tool | Logical port | Shared/Dedicated | CR2 physical port
OpenBMC UI | eth0 | Shared | eth0
OpenBMC UI | eth1 | Dedicated | Management port only
ipmitool | lan1 | Shared | eth0
ipmitool | lan2 | Dedicated | Management port only
HMC task (Change BMC/IPMI Network Settings) | lan2 | Dedicated | Management port only
Figure 1-15 shows the BMC interfaces of the HMC.
Figure 1-15 BMC interfaces
The main difference is that the shared and dedicated interfaces to the BMC can coexist. Each has its own LAN number and physical port. Ideally, the customer configures one port, but both can be configured. The rules for connecting Power systems to the HMC remain the same as in previous versions.
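If command-line access is preferred, the LAN settings of both BMC channels can be inspected remotely with ipmitool. The following Python sketch is illustrative only: the BMC address and credentials are placeholders, and the assumption that IPMI channels 1 and 2 correspond to the lan1 (shared) and lan2 (dedicated) logical ports in Table 1-25 should be verified against the firmware documentation.

import subprocess

# Minimal sketch: print the LAN configuration of the shared and dedicated BMC
# channels over the network. Address and credentials are placeholders; the
# channel-to-port mapping is an assumption based on Table 1-25.
BMC_HOST = "bmc.example.com"
BMC_USER = "admin"
BMC_PASS = "changeme"

for channel in ("1", "2"):
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
         "-U", BMC_USER, "-P", BMC_PASS, "lan", "print", channel],
        check=True,
    )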
1.8.4 High availability HMC configuration
For the best manageability and redundancy, a dual-HMC configuration is suggested. This configuration can consist of two hardware appliances, one hardware appliance and one virtual appliance, or two virtual appliances.
The following requirements must be met:
Both HMCs are at the same version.
The HMCs use different subnets to connect to the FSPs.
The HMCs can communicate with the servers’ partitions over a public network to allow for full synchronization and function.
1.8.5 HMC code level requirements for the Power E1080
The minimum required HMC version for the Power E1080 is V10R1. V10R1 is supported only on 7063-CR1, 7063-CR2, and Virtual HMC appliances. It is not supported on 7042 Machine types. HMC with V10R1 cannot manage POWER7 processor-based systems.
An HMC that is running V10R1 M1010 includes the following features:
HMC OS Secure Boot support for the 7063-CR2 appliance
Ability to configure login retries and suspended time and support for inactivity expiration in password policy
Ability to specify HMC location and data replication for groups
VIOS Management Enhancements:
 – Prepare for VIOS Maintenance:
 • Validation for redundancy for the storage and network that is provided by VIOS to customer partitions
 • Switch path of redundant storage and network to start failover
 • Rollback to original configuration on failure of prepare
 • Audit various validation and prepare steps performed
 • Report any failure seen during prepare
 – Command-line and scheduled operations support for backing up or restoring the VIOS configuration and SSP configuration
 – Option to backup or restore Shared Storage Pool configuration in HMC
 – Options to import or export the backup files to external storage
 – Option to fail over all virtual NICs from one VIOS to another
Support for 128 MB and 256 MB LMB sizes
Automatically choose fastest network for LPM memory transfer
HMC user experience enhancements:
 – Usability and performance improvements
 – Enhancements to help connect global search
 – Quick view of serviceable events
 – More progress information for UI Wizards
Allow LPM/Remote Restart when a virtual optical device is assigned to a partition
Update Access Key Support
Scheduled operation function: In the Electronic Service Agent, a new feature that allows customers to receive message alerts only if scheduled operations fail (see Figure 1-16).
Figure 1-16 HMC alert feature
Log retention of the HMC audit trail also is increased.
1.8.6 HMC currency
In recent years, cybersecurity emerged as a national security issue and an increasingly critical concern for CIOs and enterprise IT managers.
The IBM Power processor-based architecture has always ranked highly in terms of end-to-end security, which is why it remains a platform of choice for mission-critical enterprise workloads.
A key aspect of maintaining a secure Power environment is ensuring that the HMC (or virtual HMC) is current and fully supported (including hardware, software, and Power firmware updates).
Outdated or unsupported HMCs represent a technology risk that can quickly and easily be mitigated by upgrading to a current release.

1 PowerAXON stands for A-bus/X-bus/OpenCAPI/Networking interfaces of the Power10 processor.