List of Figures

2.1    Peak performance of EPCC systems during the last 35 years

2.2    Map of user distribution across the UK

2.3    Typical MPI profiling output from Allinea MAP

2.4    Typical debugging session using Allinea DDT

2.5    The pipework for the ARCHER cooling system

2.6    The high-voltage switchroom supplying ARCHER with power

2.7    ARCHER’s Aries interconnect

2.8    ARCHER usage over time for the period 16th December 2013 to 16th December 2014, in allocation units. The large increase at the end is due to the increase in size with the Phase 2 installation

2.9    ARCHER usage by subject area for the period 16th December 2013 to the 16th December 2014

2.10  ARCHER usage by code for the period January–November 2014

2.11  ARCHER usage by job size for the period January–November 2014

2.12  Drain time as a function of utilization

3.1    Edison, NERSC’s newest supercomputer, is a Cray XC30 with a peak performance of 2.57 petaflops, 133,824 compute cores, 357 terabytes of memory, and 7.4 petabytes of disk

3.2    NERSC resource utilization by science area

3.3    The Dragonfly Network

3.4    Intra-chassis connections (Rank-1)

3.5    Compute blade block diagram

3.6    Service blade block diagram

3.7    HSS cabinet level detail

3.8    Cabinet controller diagram

3.9    Logical Cray Cascade LNET routing

3.10  Artist’s rendering of the new Computational Research and Theory facility at Berkeley Lab

3.11  A cross-section of NERSC’s CRT data center, opening in 2015. The bottom mechanical floor houses air handlers, pumps and heat exchangers. The second floor is the data center; the top two floors are offices

3.12  The Cray XC30 side-to-side airflow design

3.13  Simulation showing computed pH on calcite grains at 1 micron resolution. (Image: David Trebotich.)

3.14  Using the Nyx code on Edison, scientists were able to run the largest simulation of its kind (370 million light years on each side) showing neutral hydrogen in the large-scale structure of the universe. (Image: Casey Stark.)

3.15  This volume rendering shows the reactive hydrogen/air jet in crossflow in a combustion simulation. (Image: Hongfeng Yu, University of Nebraska.)

4.1    Evolution of high performance computing at ZIB

4.2    Financial contribution of the states to the HLRN-III investment costs

4.3    Cray XC30 “Konrad” at ZIB in the first installation phase

4.4    CPU time usage breakdown on HLRN-II in 2013

4.5    Glycerol molecule adsorbed at a zirconia crystal

4.6    Dust devil formation simulated with PALM

4.7    Large eddy simulation with OpenFOAM of passive scalar mixing

4.8    Continuum 3D radiation transfer for a stellar convection model with PHOENIX

4.9    Overview of the HLRN-III system architecture

4.10  Architecture of the Lustre filesystem in the first installation phase of the HLRN-III system for one site

4.11  Cray XC30 blade with four compute nodes and the Aries router chip

4.12  Cray XC30 Aries network cabling between two cabinets

4.13  Managing the Berlin and Hanover installations as a single HPC system

4.14  Cray XC30 “Konrad” at ZIB in its final installation

5.1    Logo image of the K computer

5.2    A view of the K computer

5.3    Development schedule of the K computer

5.4    Configuration of the K computer

5.5    A node and a logical 3-dimensional torus network

5.6    System board

5.7    K computer rack

5.8    Software stack of the K computer

5.9    Batch job flow on the K computer

5.10  Configuration: From chip to system

5.11  Growth in the number of applications and in the number of concurrent applications. The solid line denotes the number of applications executed; the percentage denotes the ratio of concurrent applications

5.12  Ordinary mode and large-scale job mode operation

5.13  Back-fill job scheduling

5.14  Job-filling rate

5.15  Categorization of the six applications

5.16  A view of the facilities

5.17  A view of the buildings

5.18  Seismically isolated structures

5.19  Pillar-free computer room

5.20  Cross-section of the building

6.1    Bellman

6.2    Lindgren phase 1

6.3    Lindgren phase 2

6.4    Nek5000 scaling on a medium-sized problem

6.5    Nek5000 scaling on a large problem

6.6    GROMACS scaling

6.7    Lindgren usage

6.8    Power consumption and capacity at PDC over time

6.9    Layout of cooling at PDC

6.10  Custom-made ducts on top of Lindgren

6.11  Ring-shaped cooling system at the KTH campus

7.1    Picture of Peregrine

7.2    Peregrine architectural system overview

7.3    Peregrine hydronics sketch

7.4    Photograph of NREL scientists analyzing a Peregrine molecular dynamics simulation in the ESIF Insight Center

7.5    The Energy System Integration Facility at NREL

7.6    The NREL HPC data center mechanical space

7.7    Peregrine use by area

7.8    Peregrine utilization by node count

7.9    Peregrine utilization by node count

8.1    The Yellowstone Supercomputer

8.2    High-level architecture of NCAR’s data-centric environment

8.3    Yellowstone switch hierarchy

8.4    GLADE storage configuration

8.5    Yellowstone usage by discipline

8.6    Simulation rate for high-resolution CESM on Yellowstone

8.7    The simulation speed of WRF on Yellowstone

8.8    Yellowstone’s workload distribution

8.9    HPSS growth

8.10  GLADE growth

8.11  Climate change and air quality

8.12  Resolving ocean-atmosphere coupling features

8.13  Improved detection of seismic vulnerability

8.14  The effects of resolution and physics parameterization choices

8.15  Numerical simulation of the solar photosphere

8.16  NWSC facility
