High Performance Parallel I/O
Editors: Prabhat, Quincey Koziol
Gain Critical Insight into the Parallel I/O Ecosystem
Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem.
The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware, middleware, and applications. The book then traverses up the I/O software stack. The second part covers the file system layer, and the third part discusses middleware (such as MPI-IO and PLFS) and user-facing libraries (such as Parallel-NetCDF, HDF5, ADIOS, and GLEAN).
Delving into real-world scientific applications that use the parallel I/O infrastructure, the fourth part presents case studies from particle-in-cell, stochastic, finite volume, and direct numerical simulations. The fifth part gives an overview of various profiling and benchmarking tools used by practitioners. The final part of the book addresses the implications of current trends in HPC on parallel I/O in the exascale world.
Contents: Parallel I/O in Practice. File Systems. I/O Libraries. I/O Case Studies. I/O Profiling Tools. Future Trends. Index.
Prabhat leads the NERSC Data and Analytics Services Group at Lawrence Berkeley National Laboratory. His main research interests include Big Data analytics, scientific data management, parallel I/O, HPC, and scientific visualization. He is also interested in atmospheric science and climate change.
Quincey Koziol is the director of core software development and HPC at The HDF Group, where he leads the HDF5 software project. His research interests include HPC, scientific data storage, and software engineering and management.
Publication date: 11-2014
15.6x23.4 cm
Publication date: 10-2019
15.6x23.4 cm
Topics of High Performance Parallel I/O:
Keywords:
IBM Blue Gene; National Energy Research Scientific Computing Center; Cray XE6; NERSC; MPI Process; Lawrence Berkeley National Laboratory; Parallel File System; Los Alamos National Laboratory; Lustre File Systems; HDF5 Files; RDMA; Burst Buffer; RAID; MPI Task; File System; Compute Nodes; Blue Gene; Lustre Client; MPI Rank; Texas Advanced Computing Center; ALCF; Distributed Lock Manager; HPC Cluster; Power Consumption