2.1 Peak performance of EPCC systems during the last 35 years
2.2 Map of user distribution across the UK
2.3 Typical MPI profiling output from Allinea MAP
2.4 Typical debugging session using Allinea DDT
2.5 The pipework for the ARCHER cooling system
2.6 The high-voltage switchroom supplying ARCHER with power
2.7 ARCHER’s Aries interconnect
2.9 ARCHER usage by subject area for the period 16th December 2013 to 16th December 2014
2.10 ARCHER usage by code for the period January–November 2014
2.11 ARCHER usage by job size for the period January–November 2014
2.12 Drain time as a function of utilization
3.2 NERSC resource utilization by science area
3.4 Intra-chassis connections (Rank-1)
3.5 Compute blade block diagram
3.6 Service blade block diagram
3.8 Cabinet controller diagram
3.9 Logical Cray Cascade LNET routing
3.10 Artist’s rendering of the new Computational Research and Theory facility at Berkeley Lab
3.12 The Cray XC30 side-to-side airflow design
4.1 Evolution of high performance computing at ZIB
4.2 Financial contribution of the states to the HLRN-III investment costs
4.3 Cray XC30 “Konrad” at ZIB in the first installation phase
4.4 CPU time usage breakdown on HLRN-II in 2013
4.5 Glycerol molecule adsorbed at a zirconia crystal
4.6 Dust devil formation simulated with PALM
4.7 Large eddy simulation with OpenFOAM of passive scalar mixing
4.8 Continuum 3D radiation transfer for a stellar convection model with PHOENIX
4.9 Overview of the HLRN-III system architecture
4.11 Cray XC30 blade with four compute nodes and the Aries router chip
4.12 Cray XC30 Aries network cabling between two cabinets
4.13 Managing the Berlin and Hanover installations as a single HPC system
4.14 Cray XC30 “Konrad” at ZIB in its final installation
5.1 Logo image of the K computer
5.3 Development schedule of the K computer
5.4 Configuration of the K computer
5.5 A node and a logical 3-dimensional torus network
5.8 Software stack of the K computer
5.9 Batch job flow on the K computer
5.10 Configuration: From chip to system
5.12 Ordinary mode and large-scale job mode operation
5.15 Categorization of the six applications
5.18 Seismically isolated structures
5.19 Pillar-free computer room
5.20 Cross-section of the building
6.4 Nek5000 scaling on a medium-sized problem
6.5 Nek5000 scaling on a large problem
6.8 Power consumption and capacity at PDC over time
6.10 Custom-made ducts on top of Lindgren
6.11 Ring-shaped cooling system at the KTH campus
7.2 Peregrine architectural system overview
7.3 Peregrine hydronics sketch
7.5 The Energy System Integration Facility at NREL
7.6 The NREL HPC data center mechanical space
7.8 Peregrine utilization by node count
7.9 Peregrine utilization by node count
8.1 The Yellowstone Supercomputer
8.2 High-level architecture of NCAR’s data-centric environment
8.3 Yellowstone switch hierarchy
8.4 GLADE storage configuration
8.5 Yellowstone usage by discipline
8.6 Simulation rate for high-resolution CESM on Yellowstone
8.7 The simulation speed of WRF on Yellowstone
8.8 Yellowstone’s workload distribution
8.11 Climate change and air quality
8.12 Resolving ocean-atmosphere coupling features
8.13 Improved detection of seismic vulnerability
8.14 The effects of resolution and physics parameterization choices