Parallel HDF5 was designed to meet the following requirements:

- Parallel HDF5 files had to be compatible with serial HDF5 files and sharable between different serial and parallel platforms.
- Parallel HDF5 had to present a single file image to all processes, rather than one file per process. Writing one file per process requires expensive post-processing, and the resulting files cannot easily be used by a different number of processes.
- A standard parallel I/O interface had to be used that is portable to different platforms.
With these requirements in mind, our initial target was to support MPI programming rather than shared-memory programming. We had done some experimentation with thread-safe support for Pthreads and for OpenMP, but Parallel HDF5 itself was not built on threads.
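As a minimal sketch of what the MPI-based, single-file-image design looks like in an application, the following C fragment has every MPI process collectively create the same shared file through the MPI-IO file driver. It assumes an HDF5 library built with MPI-IO support; the file name is arbitrary and error checking is omitted.

    /*
     * Minimal sketch: every MPI process opens one shared HDF5 file
     * through the MPI-IO file driver, instead of writing one file
     * per process.  Illustrative only; no error checking.
     */
    #include <mpi.h>
    #include <hdf5.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* File-access property list: use MPI-IO on the shared communicator. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

        /* All processes call H5Fcreate collectively with the same name,
         * so they see a single file image. */
        hid_t file = H5Fcreate("shared.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }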
Implementation requirements were to:
- Not use threads, since they were not commonly supported in 1998 when we were looking at this.
- Not have a reserved process, as this might interfere with parallel algorithms (see the sketch after this list, in which every process does identical work).
- Not spawn any processes, as this is not commonly supported even now.
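To make the last two constraints concrete, here is a similar minimal sketch in which every MPI rank runs the same program and writes its own row of a single shared dataset with a collective transfer; no rank is reserved for I/O and none is spawned. The dataset name, sizes, and values are illustrative, and error checking is again omitted.

    /*
     * Sketch only: each of the nprocs ranks writes one row of a shared
     * nprocs x 4 dataset.  No rank is reserved and none is spawned;
     * all ranks execute the same collective calls.
     */
    #include <mpi.h>
    #include <hdf5.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Single shared file via the MPI-IO driver, as in the earlier sketch. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("shared.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* One dataset row per rank; sizes are illustrative. */
        hsize_t dims[2] = {(hsize_t)nprocs, 4};
        hid_t filespace = H5Screate_simple(2, dims, NULL);
        hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_INT, filespace,
                                H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* Every rank selects the row matching its rank number. */
        hsize_t start[2] = {(hsize_t)rank, 0};
        hsize_t count[2] = {1, 4};
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
        hid_t memspace = H5Screate_simple(2, count, NULL);

        /* Collective data transfer: all ranks participate in the same write. */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        int row[4] = {rank, rank, rank, rank};
        H5Dwrite(dset, H5T_NATIVE_INT, memspace, filespace, dxpl, row);

        H5Pclose(dxpl);
        H5Sclose(memspace);
        H5Sclose(filespace);
        H5Dclose(dset);
        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }

Such a program would typically be compiled with a parallel HDF5 compiler wrapper (for example h5pcc) and launched with mpiexec; the point here is only that all ranks follow an identical code path against one shared file.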
The Parallel HDF5 implementation is layered: a parallel application calls the HDF5 library, the HDF5 library performs its parallel I/O through MPI-IO, and MPI-IO in turn runs on top of a parallel file system.