HDF5-related tools are available to assist the user in a variety of activities, including examining or managing HDF5 files, converting raw data between HDF5 and other special-purpose formats, moving data and files between the HDF4 and HDF5 formats, measuring HDF5 library performance, and managing HDF5 library and application compilation, installation and configuration. Unless otherwise specified below, these tools are distributed and installed with HDF5.
The following tools are distributed separately from HDF5:
h5check, an HDF5 file validation tool (see http://www.hdfgroup.org/products/hdf5_tools/h5check.html).
h5edit, an HDF5 file editing tool (see http://www.hdfgroup.org/projects/jpss/h5edit_index.html).
The HDF-Java products (see http://www.hdfgroup.org/products/java/hdf-java-html/), including HDFview, a browser that works with both HDF4 and HDF5 files and can be used to transfer data between the two formats.
The H4toH5 conversion tools (see http://www.hdfgroup.org/h4toh5/).
h5dump [OPTIONS] files
h5dump enables the user to examine the contents of an HDF5 file and dump those contents, in human-readable form, to standard output or to an ASCII file.
It can display the contents of the entire HDF5 file or
selected objects, which can be groups, datasets, a subset of a
dataset, links, attributes, or datatypes.
The --header
option displays object header
information only.
Names are the absolute names of the objects. h5dump displays objects in the order in which they are specified on the command line. If a name does not start with a slash, h5dump begins searching for the specified object at the root group.
If an object is hard linked with multiple names, h5dump displays the content of the object at its first occurrence; only the link information is displayed at later occurrences.
h5dump assigns a name of the form #oid1:oid2 to any unnamed datatype, where oid1 and oid2 are the object identifiers assigned by the library. The unnamed types are displayed within the root group.
Datatypes are displayed with standard type names. For example,
if a dataset is created with H5T_NATIVE_INT
type
and the standard type name for integer on that machine is
H5T_STD_I32BE
, h5dump
displays
H5T_STD_I32BE
as the type of the dataset.
h5dump
can also dump a subset of a dataset.
This feature operates in much the same way as hyperslabs in HDF5;
the parameters specified on the command line are passed to the
function
H5Sselect_hyperslab
and the resulting selection
is displayed.
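For example, the following command (hypothetical file and dataset names) would display a 2x5 region starting at the origin of a two-dimensional dataset, using the subsetting options described under “Options and Parameters” below:
h5dump -d /mydataset --start="0,0" --count="2,5" mydata.h5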
The h5dump
output is described in detail in the
DDL for HDF5, the
Data Description Language document.
Note: It is not permissible to specify multiple
attributes, datasets, datatypes, groups, or soft links with one
flag. For example, one may not issue the command
WRONG:
h5dump -a /attr1 /attr2 foo.h5
to display both /attr1
and /attr2
.
One must issue the following command:
CORRECT:
h5dump -a /attr1 -a /attr2 foo.h5
One-byte integer type data is displayed in decimal by default. When displayed in ASCII, a non-printable code is displayed as 3 octal digits preceded by a backslash unless there is a C language escape sequence for it. For example, CR and LF are printed as \r and \n. Though the NUL code is represented as \0 in C, it is printed as \000 to avoid ambiguity, as illustrated in the following 1-byte char data (since this is not a string, an embedded NUL is possible):
141 142 143 000 060 061 062 012
a   b   c   \0  0   1   2   \n
h5dump prints these values as "abc\000012\n". If h5dump instead printed NUL as \0, the output would be "abc\0012\n", which is ambiguous.
Using file drivers:
It is possible to select the file driver with which to open an
HDF5 file by using the --filedriver
(or -f
)
command-line option.
Valid values for the --filedriver
option are
sec2
, family
,
split
, and multi
.
If the file driver flag is not specified,
then the file will be opened with each driver in turn,
and in the order specified above,
until one driver succeeds in opening the file.
Special file naming restrictions apply when using h5dump
with either the split
or the multi
driver.
To dump a split file, h5dump
requires that
the metadata and raw data filenames end with
-m.h5
and -r.h5
, respectively,
and that the entire virtual HDF5 file,
or the logical HDF5 file, be referred to
on the command line by the common portion of the filename preceding
the -m
and -r
.
For example, assume that a split HDF5 file has its
metadata in a file named splitfile-m.h5
and its
raw data in a file named splitfile-r.h5
.
The following command would dump the contents of this
logical HDF5 file:
h5dump --filedriver="split" splitfile
Note that the above split filename restrictions are specific to
h5dump
;
HDF5 applications do not necessarily have the same limitations.
To dump a multi file, h5dump
requires that
the metadata and raw data filenames end with a subset of the following:
-s.h5
for userblock, superblock,
and driver information block data
-b.h5
for B-tree node information
-r.h5
for dataset raw data
-g.h5
for global heap data
-l.h5
for local heap data (object names)
-o.h5
for object headers
The entire virtual HDF5 file must also be referred to
on the command line by the common portion of the filename preceding
those special tags.
For example, assume that a multi HDF5 file has its
userblock, superblock, and driver information block data
in a file named multifile-s.h5
, its
B-tree node information in a file named multifile-b.h5
,
its raw data in a file named multifile-r.h5
, its
global heap data in a file named multifile-g.h5
,
et cetera.
The following command would dump the contents of this
logical HDF5 file:
h5dump --filedriver="multi" multifile
Note that the above multi filename restrictions are specific to
h5dump
;
HDF5 applications do not necessarily have the same limitations.
To dump a family file, h5dump
requires that
the logical file’s name on the command line include the
printf(3c)
-style integer format specifier that
specifies the format of the family file member numbers.
For example, if an HDF5 family of files consists of the files
family_000.h5
,
family_001.h5
,
family_002.h5
, and
family_003.h5
,
the logical HDF5 file would be specified on the command line as
family_%3d.h5
.
The following command would dump the contents of this
logical HDF5 file:
h5dump --filedriver="family" family_%3d.h5
With the --xml option, h5dump generates
XML output. This output contains a complete description of the file,
marked up in XML. The XML conforms to the HDF5 Document Type
Definition (DTD) available at
“HDF5
XML Software.”
The XML output is suitable for use with other tools, including the HDF5 Java Products.
-h or
--help |
Print a usage message and exit. |
-V or
--version |
Print version number and exit. |
Formatting options: | |
-e or
--escape |
Escape non-printing characters. |
-r or
--string |
Print 1-byte integer datasets as ASCII. |
-y or
--noindex |
Do not print array indices with data. |
-m T
or
--format=T |
Set the floating point output format.
T is a string defining the floating point format, e.g., '%.3f' .
|
-q Q
or
--sort_by=Q |
Sort groups and attributes by the specified
index type, Q.
Valid values of Q are as follows:
name
Alpha-numeric index by name (Default)
creation_order
Index by creation order
|
-z Z
or
--sort_order=Z |
Sort groups and attributes in the specified
order, Z.
Valid values of Z are as follows:
ascending
Sort in ascending order (Default)
descending
Sort in descending order
|
--enable-error-stack |
Prints messages from the HDF5 error stack as they occur.
Injects error stack information, which is normally suppressed, directly into the output stream; this will disrupt normal output. When a dump fails with a message such as 'h5dump error: unable to print data', h5dump can be called again with '--enable-error-stack' plus the original options to reveal error stack messages.
|
--no-compact-subset |
Enables recognition of the left square bracket ( [ ) as a character in a dataset name.
This option disables compact subsetting, which is described at the end of this “Options and Parameters” section. |
-w N or
--width=N |
Set the number of columns of output.
A value of 0 (zero) sets the number of columns to the maximum (65535). Default width is 80 columns. |
File options: | |
-n or
--contents |
Print a list of the file contents and exit. |
-n 1 or
--contents=1 |
The optional value 1 (one)
on the -n, --contents option adds attributes
to the output. |
-B or
--superblock |
Print the content of the superblock. |
-H or
--header |
Print the header only; no data is displayed. |
-f D
or
--filedriver=D |
Specify which driver to open the file with. |
-o F
or
--output=F |
Output raw data into file F.
To suppress the raw data display, use this option with no filename (for example, --output= ). This has the effect of sending the output to a null file.
|
-b B
or
--binary=B |
Output dataset to a binary file
using the datatype specified by B .
B must have one of the following values:
LE
Little-endian
BE
Big-endian
MEMORY
Memory datatype
FILE
File datatype
Recommended usage is with the -d and -o
options.
|
-O F
or
--ddl=F |
Output DDL text into file F.
To suppress the DDL display, use this option with no filename (for example, --ddl= ). This has the effect of sending the output to a null file.
|
Object options: | |
-a P or --attribute=P |
Print the specified attribute. |
-d P
or
--dataset=P |
Print the specified dataset. |
-g P
or
--group=P |
Print the specified group and all members. |
-l P
or
--soft-link=P |
Print the value(s) of the specified soft link. |
-t P
or
--datatype=P |
Print the specified named datatype. |
-A or
--onlyattr |
Print the header and value of attributes; data of datasets is not displayed. |
-A 0 or
--onlyattr=0 |
The optional value 0 (zero)
on the -A, --onlyattr option prints everything
except attributes. |
-N P
or
--any-path=P |
Print any attribute, dataset, datatype,
group, or link whose path matches P.
P may match either the absolute path or any portion of the path. |
Object property options: | |
-i or
--object-ids |
Print the object ids. |
-p or
--properties
|
Print information regarding dataset properties,
including filters, storage layout, fill value,
and allocation time.
The filter output lists any filters used with a dataset, including the type of filter, its name, and any filter parameters. The storage layout output specifies the dataset layout (chunked, compact, or contiguous), the size in bytes of the dataset on disk, and, if a compression filter is associated with the dataset, the compression ratio. The compression ratio is computed as (uncompressed size)/(compressed size). The fill value output includes the fill value datatype and value. The allocation time output displays the allocation time as specified with H5Pset_alloc_time .
|
-M L
or
--packedbits=L |
Print packed bits as unsigned integers,
using the mask format L for an integer dataset
specified with option -d .
L is a list of offset,length values, separated by commas. offset is the beginning bit in the data value. length is the number of bits in the mask. |
-R
or
--region |
Print dataset pointed by region references. |
XML options: | |
-x
or
--xml |
Output XML using XML schema (default) instead of DDL. |
-u
or
--use-dtd |
Output XML using XML DTD instead of DDL. |
-D U
or
--xml-dtd=U |
In XML output, refer to the DTD or schema at U instead of the default schema/DTD. |
-X S
or
--xml-ns=S |
In XML output (XML Schema), use qualified names in
the XML. S specifies the namespace: ":" for no namespace; default: "hdf5:" |
Subsetting options and compact subsetting: | |
Subsetting is available by using the following options with
the dataset option, -d or --dataset .
Subsetting is accomplished by selecting a hyperslab from the
data, so the options mirror those for performing a hyperslab
selection.
At least one of the START, STRIDE, COUNT, or BLOCK parameters must be specified for a subset to be selected; defaults are used for the others. |
|
-s START or
--start=START |
Offset of start of subsetting selection. Default: 0 in all dimensions, specifying the beginning of the dataset. Each of START, STRIDE, COUNT, and BLOCK must be a comma-separated list of integers with one integer for each dimension of the dataset. |
-S STRIDE or
--stride=STRIDE |
Hyperslab stride. Default: 1 in all dimensions. |
-c COUNT or
--count=COUNT |
Number of blocks to include in the selection. Default: 1 in all dimensions. |
-k BLOCK or
--block=BLOCK |
Size of block in hyperslab. Default: 1 in all dimensions.
Compact subsetting:
Subsetting parameters can also be expressed in a compact form appended to the dataset name:
-d "dataset_name[START;STRIDE;COUNT;BLOCK]"
It is not required to use all parameters, but up to the last parameter value used, all of the semicolons ( ; ) are required, even when a parameter value is not specified. Each of START, STRIDE, COUNT, and BLOCK must be a comma-separated list of integers with one integer for each dimension of the dataset.
When not specified, default parameter values are used: zeros ( 0 ) for START and ones ( 1 ) for STRIDE, COUNT, and BLOCK. |
Option Argument Conventions: | |
-- |
Two dashes followed by whitespace.
Indicates that the following argument is not an option.
For example, this structure can be used to dump a file named -f :
h5dump -- -f
This option is necessary only when the name of the file to be examined starts with a dash ( - ), which could confuse the tool’s command-line parser. |
Option parameters appearing above are defined as follows: | |
D | File driver to use in opening the file
Valid values are sec2 , family ,
split , and multi .
Without the file driver option, the file will be opened with each driver in turn, and in the order specified immediately above, until one driver succeeds in opening the file. |
P | Path to the object
For most options, this must be the absolute path from the root group to the object. With the -N , --any-path option,
this may be either the absolute path or a partial path.
|
F | A filename |
N | An integer greater than 1 |
START, STRIDE,
COUNT, and BLOCK |
Comma-separated lists of integers
Each of these option parameters must be a list of integers with one integer for each dimension of the dataspace being queried. |
U | A URI (as defined in [IETF RFC 2396], updated by [IETF RFC 2732]) that refers to the DTD to be used to validate the XML |
B | The form of binary output:
MEMORY for a memory type
FILE for the file type
LE or BE for pre-existing little- or big-endian types
|
Files parameter: | |
files |
File or files to be examined;
one or more files may be listed.
The file name may include a printf(3C)-style integer format, such as %05d , to open a file family.
On Unix, Linux, and Mac OS X systems, multiple files can be examined through the use of Unix-style wildcards.
The wildcard capability is not currently available on Windows systems. |
0 | Succeeded. |
> 0 | An error occurred. |
Dump the group /GroupFoo/GroupBar in the file quux.h5 :
h5dump -g /GroupFoo/GroupBar quux.h5
Dump the dataset Fnord , which is in the group /GroupFoo/GroupBar in the file quux.h5 :
h5dump -d /GroupFoo/GroupBar/Fnord quux.h5
Dump the attribute metadata of the dataset Fnord , which is in the group /GroupFoo/GroupBar in the file quux.h5 :
h5dump -a /GroupFoo/GroupBar/Fnord/metadata quux.h5
Dump the attribute metadata , which is an attribute of the root group in the file quux.h5 :
h5dump -a /metadata quux.h5
Produce an XML listing of the file bobo.h5 , saving the listing in the file bobo.h5.xml :
h5dump --xml bobo.h5 > bobo.h5.xml
Dump a subset of the dataset /GroupFoo/databar/ in the file quux.h5 :
h5dump -d /GroupFoo/databar --start="1,1" --stride="2,3"
--count="3,19" --block="1,1" quux.h5
The same example using the compact form of the subsetting parameters:
h5dump -d "/GroupFoo/databar[1,1;2,3;3,19;1,1]" quux.h5
Dump the dataset /GroupD/FreshData/ in the file quux.h5 , with data written in little-endian form, to the output file FreshDataD.bin :
h5dump -d "/GroupD/FreshData" -b LE
-o "FreshDataD.bin" quux.h5
Display two sets of packed bits (bit 0 and bits 4-6) in the dataset /dset of the file quux.h5 :
h5dump -d /dset -M 0,1,4,3 quux.h5
Dump the dataset /GroupFoo/GroupBar/Fnord in the file quux.h5 , outputting the DDL into the file ddl.txt and the raw data into the file data.txt :
h5dump -d /GroupFoo/GroupBar/Fnord --ddl=ddl.txt -y
-o data.txt quux.h5
Dump the dataset /GroupFoo/GroupBar/Fnord in the file quux.h5 , suppressing the DDL output and writing the raw data into the file data.txt :
h5dump -d /GroupFoo/GroupBar/Fnord --ddl= -y
-o data.txt quux.h5
Release | Change |
1.10.1 |
File space information (strategy, persist, threshold, page size) was added when printing the contents of the
superblock with the -B option.
|
1.8.12 |
Optional value of 0 (zero) for the
-A, --onlyattr option added in this release.
Option added in this release:
-N P or --any-path=P
|
1.8.11 |
Option added in this release:
-O F
or
--ddl=F
This option can be used to suppress the DDL output. This option, combined with the '--output=F'
(or '-o F' ) option,
will generate files that can be used as input to
h5import .
|
1.8.9 |
Option added in this release:
--no-compact-subset
|
1.8.7 |
Option added in this release:
--enable-error-stack
Tool updated in this release to correctly display reference types.
|
1.8.5 |
Bitfield display fixed in this release.
Option added in this release for packed bits data display:
-M L or --packedbits=L
|
1.8.4 |
Option added in this release for region reference display:
-R
or
--region option
|
1.8.1 |
Compression ratio added to output of
-p or --properties option
in this release.
|
1.8.0 |
Options added in this release:
-q or --sort_by
-z or --sort_order
|
1.6.5 |
Options added in this release:
-n or --contents
-e or --escape
-y or --noindex
-p or --properties
-b or --binary
|
h5ls [OPTIONS] file[/OBJECT] [file[/OBJECT]...]
h5ls
prints selected information about
specified HDF5 file(s) and/or object(s)
in the specified format. In some cases,
information regarding symbolic links is also provided.
-h or -? or --help |
Print a usage message and exit. |
-a or
--address |
Print addresses for raw data.
If a dataset is contiguous, the returned address is the offset in the file of the beginning of the raw data. If the dataset is chunked, the returned list of addresses indicates the offset of the beginning of each chunk.
This option works only in combination with the verbose option, -v or --verbose .
|
-d or
--data |
Print the values of datasets. |
--enable-error-stack |
Prints messages from the HDF5 error stack as they occur.
Injects error stack information, which is normally suppressed, directly into the output stream; this will disrupt normal output. |
--follow-symlinks |
Follow symbolic links (soft links and external links) to display target object information.
Without this option, h5ls reports the symbolic link itself and does not follow it to the target object. |
--no-dangling-links |
Check for any symbolic links (soft links or external links)
that do not resolve to an existing object (dataset, group, or
named datatype).
If any dangling link is found, this situation is treated
as an error and h5ls returns an exit code of
1 .
Must be used with the --follow-symlinks option; otherwise, an error is generated. |
-f or
--full |
Print full path names instead of base names. |
-g or
--group |
Show information about a group, not its contents. |
-l or
--label |
Label members of compound datasets. |
-r or
--recursive |
List all groups recursively, avoiding cycles. |
-s or
--string |
Print 1-byte integer datasets as ASCII. |
-S or
--simple |
Use a machine-readable output format. |
-w N or
--width= N |
Set the number of columns of output. |
-v or
--verbose |
Generate more verbose output. |
-V or
--version |
Print version number and exit. |
--vfd=DRIVER |
Use the specified virtual file driver.
Valid values for DRIVER are:
sec2 , family , multi , split , mpio , and mpiposix |
-x or
--hexdump |
Show raw data in hexadecimal format. |
file | The file to be examined.
The file name may include a printf(3C)-style integer format, such as %05d , to open a file family.
On Unix, Linux, and Mac OS X systems, multiple files can be specified through the use of Unix-style wildcards.
The wildcard capability is not currently available on Windows systems. |
objects | Each object consists of an HDF5 file name optionally
followed by a slash and an object name within the file
(if no object is specified within the file then the
contents of the root group are displayed). The file name
may include a printf(3C) integer format such
as %05d to open a file family. |
Deprecated options:
The following options have been deprecated in HDF5. While they remain available, they have been superseded as indicated and may be removed from HDF5 in the future. Use the indicated replacement option in all new work; where possible, existing scripts, et cetera, should also be updated to use the replacement option. | |
-e or
--errors |
Show all HDF5 error reporting.
Replaced by --enable-error-stack .
This is an option name change only.
|
-E or
--external |
Follow external links.
Replaced by --follow-symlinks .
|
0 | Succeeded. |
>0 | An error occurred. |
List the contents of the file quux.h5 :
h5ls quux.h5
Display the values of the dataset dset1 in the file quux.h5 :
h5ls -d quux.h5/dset1
List the contents of the file quux.h5 , following symbolic links:
h5ls --follow-symlinks quux.h5
List all groups under the group g1 recursively, avoiding cycles, in file quux.h5 :
h5ls -r -g quux.h5/g1
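As a further example (hypothetical member names family_00000.h5 , family_00001.h5 , etc.), open a file family with the family driver and list its contents:
h5ls --vfd=family "family_%05d.h5"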
Release | Change |
1.8.5 |
Options added in this release:
--follow-symlinks
--no-dangling-links
|
1.8.7 |
Option name --enable-error-stack replaces
deprecated option name --errors in this release.
|
h5diff [OPTIONS] file1 file2 [object1 [object2]]
ph5diff [OPTIONS] file1 file2 [object1 [object2]]
h5diff
and ph5diff
are command line tools
that compare two HDF5 files, file1 and file2,
and report the differences between them.
h5diff
is for serial use while
ph5diff
is for use in parallel environments.
Optionally, h5diff
and ph5diff
will compare two objects within these files.
If only one object, object1, is specified,
h5diff
will compare
object1 in file1
with object1 in file2.
If two objects, object1 and object2,
are specified, h5diff
will compare
object1 in file1
with object2 in file2.
object1 and object2 can be groups, datasets, named datatypes, or symbolic links (soft links or external links) and must be expressed as absolute paths from the respective file’s root group.
h5diff
first compares the names of member objects (the relative path
from the specified group) and generates a report of objects
that appear in only one group or in both groups.
Common objects are then compared recursively.
Datatypes are compared with H5Tequal .
(The --follow-symlinks option overrides the default
behavior when symbolic links are compared.)
Output modes:
h5diff
and ph5diff
have the following output modes:
Default | |
Prints the number of differences found
and where they occurred.
If no differences are found, h5diff and
ph5diff produce no output.
This normal behavior is achieved by using none of the following output mode options. |
Report mode | -r |
Prints the above plus the differences. |
Verbose mode | -v |
Prints all of the above plus a list of objects and warnings. |
Verbose mode
with levels |
-vn |
Prints a selectable level of detail.
For details, see “Options and Parameters” below. |
Quiet mode | -q |
Prints no output.
The h5diff exit code will be
the only feedback provided.
|
Difference controls:
h5diff
offers several mutually-exclusive criteria for
analyzing differences in raw data:
With the '-d delta' or '--delta=delta' option, h5diff considers two data values to be equal if the absolute value of the difference is less than the specified delta .
With the '-p relative' or '--relative=relative' option, h5diff considers two data values to be equal if the absolute value of the relative difference is less than the value specified in relative .
With the '--use-system-epsilon' option, h5diff considers two data values to be equal if the absolute value of the difference is less than the computing platform’s system epsilon (or a pre-determined value if no system epsilon is defined).
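For example, the following command (hypothetical file and dataset names) reports, in report mode, only those differences in /dset whose absolute value exceeds 0.001:
h5diff -r -d 0.001 file1.h5 file2.h5 /dset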
h5diff
and NaNs:
h5diff
detects when a value in a dataset is a NaN
(a "not a number" value), but does not differentiate among various
types of NaNs.
Thus, when one NaN is compared with another NaN, h5diff
treats them as equal; when a NaN is compared with a valid number,
h5diff
treats them as not equal.
Note that NaN detection is computationally expensive and slows
h5diff
performance dramatically.
If you do not have NaNs in your files, or do not care about NaNs,
use the -N
option to turn off NaN detection.
Similarly, if h5diff -N
produces unexpected differences,
running h5diff
without -N
should reveal
whether any of the differences are associated with NaN values.
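For example, if a comparison with NaN detection disabled,
h5diff -N file1 file2
reports unexpected differences, rerunning the comparison as
h5diff file1 file2
will show whether those differences involve NaN values. (The file names here are placeholders.)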
Difference between h5diff
and ph5diff
:
With the following exception,
h5diff
and ph5diff
behave identically.
With ph5diff
, the comparison of objects is shared across
multiple processors, with the comparison of each pair of objects
assigned to a single processor. This work assignment means that
ph5diff
will not speed up the comparison of any
given pair of datasets,
as the comparison of the pair will still occur on a single processor.
-h
or
--help |
Print help message. | ||||||
-V
or
--version |
Print version number and exit. | ||||||
-r
or
--report |
Report mode — Print the differences. | ||||||
-v
or
--verbose |
Verbose mode — Print difference information, list of objects, warnings, etc. | ||||||
-vn
or
--verbose= n |
Verbose mode with levels —
Print difference information, list of objects, warnings, etc., with the level of detail determined by the value of n :
0 : identical to -v or --verbose
1 : all level 0 output plus a one-line attribute status summary
2 : all level 1 output plus attribute values
| ||||||
-q
or
--quiet |
Quiet mode —
Do not print output.
| ||||||
--follow-symlinks |
Follow symbolic links
(soft links and external links) and compare the links’
target objects.
If symbolic link(s) with the same name exist in the files being compared, then determine whether the target of each link is an existing object (dataset, group, or named datatype) or the link is a dangling link (a soft or external link pointing to a target object that does not exist).
If any symbolic link specified in the call to h5diff is a dangling link, it is treated as an error.
| ||||||
--no-dangling-links |
Must be used with the
--follow-symlinks option;
otherwise, h5diff shows error message and
returns an exit code of 2 .
Check for symbolic links (soft links or external links)
that do not resolve to an existing object (dataset, group,
or named datatype). If a dangling link is found, this
situation is treated as an error and h5diff returns an exit code of 2 . | ||||||
-N
or
--nan |
Disables NaN detection;
see “h5diff and NaNs” above.
| ||||||
-n count
or
--count= count
|
Print up to count
differences, then stop.
count must be a positive integer.
| ||||||
-d delta
or
--delta= delta |
Print only differences that are greater than the
limit delta. delta must be a positive number.
The comparison criterion is whether the absolute value of the
difference of two corresponding values is greater than
delta
(i.e., |a–b| > delta ,
where a is a value in file1 and
b is a value in file2).
Do not use -d with the -p or --use-system-epsilon options. | ||||||
-p relative
or
--relative= relative |
Print only differences that are greater than a
relative error. relative must be a positive number.
The comparison criterion is whether the absolute value of the
ratio of the difference between two values and one of those
values is greater than relative (that is,
|(a–b)/b| > relative ,
where a is a value in file1 and
b is the corresponding value in file2).
Do not use -p with the -d or --use-system-epsilon options. | ||||||
--use-system-epsilon
|
Return a difference if and only if the difference
between two data values exceeds the system value for epsilon.
That is, if a is a data value in one dataset,
b is the corresponding data value in the
dataset with which the first dataset is being compared, and
epsilon is the system epsilon,
return a difference if and only if
|a-b| > epsilon .
If no system epsilon is defined, the following values are used:
FLT_EPSILON=1.19209E-07 for floating-point datatypes and
DBL_EPSILON=2.22045E-16 for double-precision datatypes.
Do not use --use-system-epsilon with the -d or -p options.
| ||||||
--exclude-path "path"
|
Exclude the specified path to an object
when comparing files or groups. If a group is excluded,
all member objects will also be excluded.
The specified path is excluded wherever it occurs. This flexibility enables the same option to exclude either objects that exist only in one file or common objects that are known to differ.
When comparing files, path is the
absolute path to the excluded object; when comparing
groups, path is similar to the relative path
from the group to the excluded object.
This path can be taken from the first section of
the output of an h5diff call with the -v option.
If there are multiple paths to an object, only the specified path(s) will be excluded; the comparison will include any path not explicitly excluded. This option can be used repeatedly to exclude multiple paths.
| ||||||
file1 file2 | The HDF5 files to be compared. | ||||||
object1 object2 | Specific object(s) within the files to be compared, expressed as absolute paths from the respective file’s root group. |
0 | No differences were found. |
1 | Some differences were found. |
>1 | An error occurred. |
Compare the object /a/b in file1 with the object /a/c in file2 :
h5diff file1 file2 /a/b /a/c
Compare the object /a/b
in file1
with the same object in file2
:
h5diff file1 file2 /a/b
Compare all objects in both files:
h5diff file1 file2
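As a further example, the following command (with a hypothetical path name) compares the two files while excluding the object /group1/dset3 wherever it occurs:
h5diff --exclude-path "/group1/dset3" file1 file2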
Comparisons executed with the verbose options can produce
object and attribute status reports as illustrated below:
h5diff -v file1 file2
...
        file1   file2
---------------------------------------
  x       x     /
          x     /dset
  x             /g2
  x       x     /g3
...
The sample output above shows that the dataset dset exists only in file2 , the group /g2 exists only in file1 , and the group /g3 and the root group exist in both files.
Only objects that exist in both files will be compared.
More verbose levels can produce more information:
h5diff -v2 file1 file2
...
group : </g2> and </g2>
0 differences found
        obj1    obj2
--------------------------------------
  x       x     float2
  x             float3
  x       x     integer1
Attributes status: 2 common, 1 only in obj1, 0 only in obj2
...
In this illustration, both objects, obj1 and obj2 , have attributes named float2 and integer1 , while only obj1 has an attribute named float3 .
Only attributes that exist on both objects will be compared.
The “Attributes status:” line reports that there are two attributes common to both objects, one attribute attached only to obj1 , and zero attributes attached only to obj2 .
To see the “Attributes status:” line independently of the immediately-preceding table, use the -v1 option.
h5diff -v1 file1 file2
...
group : </g2> and </g2>
0 differences found
Attributes status: 2 common, 1 only in obj1, 0 only in obj2
...
Release | Change |
1.6.0 |
h5diff introduced in this release. |
1.8.0 |
ph5diff introduced in this release.
h5diff command line syntax changed in this
release. |
1.8.2 and 1.6.8 | Return value on failure changed in this release. |
1.8.4 and 1.6.10 |
--use-system-epsilon option added in this release.
|
1.8.5 |
--follow-symlinks option added in this release.
--no-dangling-links option added in this release.
|
1.8.6 |
--exclude-path option added in this release.
|
1.8.7 |
-vn, --verbose=n option,
specifying levels of verbose output, added in this release.
|
h5repack [OPTIONS] in_file out_file
h5repack -i in_file -o out_file [OPTIONS]
h5repack
is a command line tool that
applies HDF5 filters to an input file in_file,
saving the output in a new output file, out_file.
-i in_file
Input HDF5 file.
-o out_file
Output HDF5 file.
-h or --help
Print a usage message and exit.
-v or --verbose
Print verbose output.
-V or --version
Print version number and exit.
-n or --native
Use native datatypes; h5repack generates files only with native datatypes.
-L or --latest
Use the latest version of the HDF5 file format.
-c max_compact_links or --compact=max_compact_links
Set the maximum number of links, max_compact_links, to store in a group in compact format.
-d min_indexed_links or --indexed=min_indexed_links
Set the minimum number of links, min_indexed_links, to store in a group in indexed format.
max_compact_links and min_indexed_links are closely related and the first must be equal to or greater than the second. In the general case, however, performance will suffer, possibly dramatically, if they are equal; performance can be improved by tuning the gap between the two values to minimize unnecessary thrashing between the compact storage and indexed storage modes as group size waxes and wanes. The relationship between max_compact_links and min_indexed_links is most important when group sizes are highly dynamic; that relationship is much less important in files with a stable structure. Compact mode is space and performance-efficient when groups have small numbers of members; indexed mode requires slightly more storage space, but provides increasingly better performance as the number of members in each group increases.
-m size or --minimum=size
Apply filter(s) only to objects whose size in bytes is equal to or greater than size. size must be an integer greater than one ( 1 ).
Default: If no size is specified, a threshold of 1024 bytes is assumed.
-u file or --ublock=file
Name of the file containing the user block data to be added to the output file.
-b user_block_size or --block=user_block_size
Size, in bytes, of the user block to be added. user_block_size must be 512 or greater and a power of 2 .
Default: 1024
-M size or --metadata_block_size=size
Metadata block size, size, to be used when h5repack calls H5Pset_meta_block_size .
-t alignment_threshold or --threshold=alignment_threshold
Threshold value, alignment_threshold, to be used in the H5Pset_alignment call.
-a alignment or --alignment=alignment
Alignment value, alignment, to be used in the H5Pset_alignment call.
-s min_size[:header_type] or --ssize=min_size[:header_type]
min_size is the minimum size, in bytes, of a shared object header message. Header messages smaller than the specified size will not be shared.
header_type specifies the type(s) of header message that
this minimum size is to be applied to.
Valid values of header_type are any of the following:
dspace
for dataspace header messages
dtype
for datatype header messages
fill
for fill values
pline
for property list header messages
attr
for attribute header messages
If header_type is not specified,
min_size will be applied to all header messages.
-f filter or --filter=filter
Apply the specified filter(s). filter is a string of the following format:
list_of_objects:name_of_filter[=filter_parameters]
list_of_objects is a comma separated list of object names meaning apply the filter(s) only to those objects. If no object names are specified, the filter is applied to all objects.
name_of_filter can be one of the following:
GZIP
, to apply the HDF5 GZIP filter
(GZIP compression)
SZIP
, to apply the HDF5 SZIP filter
(SZIP compression)
SHUF
, to apply the HDF5 shuffle filter
FLET
, to apply the HDF5 checksum filter
NBIT
, to apply the HDF5 N-bit filter
SOFF
, to apply the HDF5 scale/offset filter
UD
, to apply a user-defined filter
NONE
, to remove any filter(s)
filter_parameters conveys optional compression
information:
GZIP=
deflation_level from 1-9
SZIP=
pixels_per_block,coding_method
pixels_per_block is an even number
in the range 2-32.
coding_method is
EC
or NN
.
SHUF
(no parameter)
FLET
(no parameter)
NBIT
(no parameter)
SOFF=
scale_factor,scale_type
scale_factor is an integer.
scale_type is either IN
or
DS
.
UD=
filter_id,nfilter_params,value_1[,value_2,....,value_n]
filter_id is the filter identifier.
nfilter_params is the number of filter parameters.
value_1 through value_n are the values
of each filter parameter.
Number of values must match the value of
nfilter_params.
NONE
(no parameter)
-l layout or --layout=layout
Apply the specified layout. layout is a string of the following format:
list_of_objects:layout_type[=layout_parameters]
list_of_objects is a comma separated list of object names, meaning that layout information is supplied for those objects. If no object names are specified, the layout is applied to all objects.
layout_type can be one of the following:
CHUNK
, to apply chunking layout
COMPA
, to apply compact layout
CONTI
, to apply contiguous layout
layout_parameters is present only in the CHUNK case and specifies the chunk size of each dimension in the following format, with no intervening spaces:
dim_1 x dim_2 x ... x dim_n
-e file or --file=file
Name of a file from which to read the -f (or --filter ) and -l (or --layout ) options.
-G fs_pagesize or --fs_pagesize=fs_page_size
File space page size (see H5Pset_file_space_page_size ). fs_pagesize is a size, in bytes, of 512 or greater that is used by the library when the file space strategy PAGE is used.
-P fs_persist or --fs_persist=fs_persist
Whether to persist free space (see H5Pset_file_space_strategy ). fs_persist is 1 for persisting free space and 0 for not persisting free space.
-S fs_strategy or --fs_strategy=fs_strategy
File space management strategy (see H5Pset_file_space_strategy ).
fs_strategy is a string indicating the file space strategy:
fs_strategy is a string indicating the file space strategy:
FSM_AGGR
: Use free-space
managers, aggregators and virtual file driver for file space allocation
PAGE
: Use free-space managers with embedded
paged aggregation and virtual file driver for file space allocation
AGGR
: Use aggregators and virtual file
driver for file space allocation
NONE
: Use virtual file driver for
file space allocation
-T fs_threshold or --fs_threshold=fs_threshold
Free-space section threshold, fs_threshold, to be used (see H5Pset_file_space_strategy ).
0 | Succeeded. |
>0 | An error occurred. |
h5repack -f GZIP=1 -v file1 file2
Applies GZIP compression with level 1 to all objects in file1 and saves the output in file2 . Prints verbose output.
h5repack -f dset1:SZIP=8,NN file1 file2
Applies SZIP compression, with 8 pixels per block and NN coding, only to the object dset1 .
h5repack -l dset1,dset2:CHUNK=20x10 file1 file2
Applies a chunked layout, with chunks of 20x10 elements, to the objects dset1 and dset2 .
h5repack -f UD=307,1,9 file1 file2
Applies the user-defined bzip2 filter (filter identifier 307 ), with one parameter of value 9 , to all datasets.
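Further examples, with hypothetical file names and illustrative parameter values: the following command shares datatype header messages of 200 bytes or larger:
h5repack -s 200:dtype file1 file2
And the following command sets the maximum number of links stored in compact format to 8 and the minimum number of links stored in indexed format to 6:
h5repack -c 8 -d 6 file1 file2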
Release | Change |
1.10.1 |
Options added or modified in this release for file space management and page buffering:
-G, --fs_pagesize
-P, --fs_persist
-S, --fs_strategy (modified)
|
1.10.0 |
Options added in this release for file space management:
-S, --fs_strategy
-T, --fs_threshold
|
1.8.12 |
Added user-defined filter parameter (UD ) to
-f filter , --filter=filter
option for use in read and write operations. |
1.8.9 |
-M number, --metadata_block_size=number
option introduced in this release. |
1.8.1 | Original syntax restored; both the new and the original syntax are now supported. |
1.8.0 |
h5repack command line syntax changed in this
release. |
1.6.2 |
h5repack introduced in this release. |
h5repart [-v] [-V] [-[b|m]N[g|m|k]] [-family_to_sec2] source_file dest_file
h5repart
joins a family of files into a single file,
or copies one family of files to another while changing the size
of the family members. h5repart can also be used to copy a single file to a single file with holes. At this stage, h5repart cannot split a single non-family file into a family of files.
To convert a family of file(s) to a single non-family file
(sec2
file), the option -family_to_sec2
has to be used.
Sizes associated with the -b
and -m
options may be suffixed with g
for gigabytes,
m
for megabytes, or k
for kilobytes.
File family names include an integer printf
format such as %d
.
-v |
Produce verbose output. |
-V |
Print a version number and exit. |
-b N |
The I/O block size. Defaults to 1KB. |
-m N |
The destination member size. Defaults to 1GB. |
-family_to_sec2 |
Convert file driver from family to sec2 |
source_file | The name of the source file |
dest_file | The name of the destination file |
0 | Succeeded. |
>0 | An error occurred. |
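For example, the following command (hypothetical file names) joins the members of a file family, family00000.h5 , family00001.h5 , etc., into the single sec2 file single.h5 :
h5repart -family_to_sec2 family%05d.h5 single.h5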
h5jam -u user_block -i in_file.h5 [-o out_file.h5] [--clobber]
h5jam -h
h5unjam -i in_file.h5 [-u user_block | --delete] [-o out_file.h5]
h5unjam -h
h5jam
:
Adds user block to front of an HDF5 file,
to create a new concatenated file.
h5unjam
:
Splits user block and HDF5 file into two files:
user block data and HDF5 data.
h5jam
:
h5jam
concatenates a user_block
file and an HDF5 file to create an HDF5 file with a user block.
If out_file.h5
is given, a new file is created
with the user_block
followed by the contents of
in_file.h5
. In this case, in_file.h5
is unchanged.
If out_file.h5
is not specified, the
user_block
is added to in_file.h5
.
If in_file.h5
already has a user block, the contents of
user_block
will be added to the end of the existing
user block, and the file shifted to the next boundary.
If --clobber
is set, any existing user block will be
overwritten.
A user block can contain either binary or text data.
The minimum size of a user block is 512 bytes. As needed, the user block can be any power of 2 greater than that: 1024 bytes, 2048 bytes, etc. The user block in the output file is padded so that the HDF5 header begins on the first appropriate boundary. For example, if only 8 bytes of data are inserted for the user block, the HDF5 header will be found at byte 512; if 1100 bytes of data are inserted for the user block, the HDF5 header will be found at byte 2048.
h5unjam
:
h5unjam
splits an HDF5 file, writing the user block
to a file or to stdout and the HDF5 file to an HDF5 file with a
header at byte zero (0
, i.e., with no user block).
If out_file.h5
is given, a new file is created with
the contents of in_file.h5
without the user block.
In this case, in_file.h5 is unchanged.
If out_file.h5
is not specified, the
user_block
is removed and in_file.h5
is rewritten, starting at byte 0.
If user_block
is set, the user block will be written
to user_block
.
If user_block
is not set, the user block, if any,
will be written to stdout.
If --delete
is selected,
the user block will not be written.
The last portion of a returned user block may contain padding
or undefined data
(see discussion below: “h5jam
and
h5unjam
not necessarily transitive”).
It is the user’s or the user application’s responsibility
to handle this correctly.
Create a new file, newfile.h5 , with the text in file mytext.txt as the user block for the HDF5 file file.h5 :
h5jam -u mytext.txt -i file.h5 -o newfile.h5
Add the text in file mytext.txt to the front of the HDF5 file file.h5 :
h5jam -u mytext.txt -i file.h5
Overwrite the user block, if any, in file.h5 with the contents of mytext.txt :
h5jam -u mytext.txt -i file.h5 --clobber
For an HDF5 file, with_ub.h5 , with a user block, extract the user block to user_block.txt and the HDF5 portion of the file to wo_ub.h5 :
h5unjam -i with_ub.h5 -u user_block.txt -o wo_ub.h5
0 | Succeeded. |
>0 | An error occurred. |
The most efficient way to create a user block is to create the file
with a user block (see
H5Pset_userblock
),
and write the user block data into that space from a program.
The user block is completely opaque to the HDF5 library and to
the h5jam
and h5unjam
tools.
The user block is read or written in a single block
as a string of bytes;
it can contain text or any kind of binary data;
and it is up to the user to know what the user block content means
and how to process it.
When the user block is extracted, its entire contents are written as a single block of output, including any padding or uninitialized data.
This tool moves the HDF5 portion of the file through byte copies; i.e., it does not read or interpret the HDF5 objects.
h5jam
and h5unjam
not necessarily transitive:
Note that h5jam
and h5unjam
are not necessarily transitive operations.
Any amount of data can be inserted into a user block,
but an HDF5 user block itself has specific size requirements.
The minimum size is 512 bytes; beyond that, the user block can
be 512 bytes times any positive power of 2.
That is, a user block’s size will be one of the following:
512 bytes, 1024 bytes, 2048 bytes, 4096 bytes, et cetera.
If h5jam is used to insert a 700-byte file into the user block, h5jam will create a user block of 1024 bytes and insert the user’s file as the first 700 bytes of that block. The remaining 324 bytes will be undefined.
If the remaining bytes must have a particular fill value,
for instance, the user must modify the input file by padding
it to exactly 1024 bytes with the required fill value
before inserting it with h5jam
.
When h5unjam
is asked to return the above user block,
it will be returned
with the padding in the last 324 bytes if the user defined it
or with undefined data in the last 324 bytes if the user took
no action to insert the padding.
If the file must be cleaned up for use, it is the user’s or the user application’s responsibility.
If a community of users employs user block data that must be
cleaned up after the use of h5unjam
, the community
should establish a protocol for that process so that every community
member knows what is required.
The community may prefer to create and provide a tool
to perform standard cleanup.
A simple protocol might be for a user community to declare that the
first N bytes of the user block will always contain
the length or size of the valid user block content,
much as a Pascal string starts with the length of the string data.
Also see the HDF5 source code for examples of examining or reading
the user block without modifying the file in any way.
The relevant source files are the test programs
tools/h5jam/tellub.c
and
tools/h5jam/getub.c
.
h5copy [OPTIONS] [OBJECTS]
h5copy
copies an HDF5 object (a dataset, named datatype, or group)
from an input HDF5 file to an output HDF5 file.
If a group is specified as the input object, any objects in that group will be recursively copied.
The output file may or may not already exist.
h5copy
will fail if the destination object name
already exists.
-h or --help
Print a usage message and exit.
-v or --verbose
Produce verbose output.
-V or --Version
Print version number and exit.
-p or --parents
Create intermediate groups in the destination path as needed; issue no error if they already exist.
-f flag_type or --flag=flag_type
flag_type
may be one of the following strings
or a logical AND of two or more:
shallow |
Copy only immediate members of a group.
(Default: Recursively copy all objects below the group.) |
soft |
Expand soft links to copy target objects.
(Default: Keep soft links as they are.) |
ext |
Expand external links to copy external objects.
(Default: Keep external links as they are.) |
ref |
Copy references and any referenced objects,
i.e., objects that the references point to.
Referenced objects are copied in addition to the objects specified on the command line and reference datasets are populated with correct reference values. Copies of referenced datasets outside the copy range specified on the command line will normally have a different name from the original. (Default: Without this option, reference value(s) in any reference datasets are set to NULL and referenced objects are not copied unless they are otherwise within the copy range specified on the command line.) |
attr |
Copy objects without copying attributes.
(Default: Copy objects and all attributes.) |
allflags |
Switch each setting above from the default
to the setting described in this table.
Equivalent to logical AND of all flags above. |
-i input_file or --input=input_file
Name of the input HDF5 file.
-o output_file or --output=output_file
Name of the output HDF5 file.
-s source_object or --source=source_object
Full path of the source object within the input file.
-d destination_object or --destination=destination_object
Full path of the destination object within the output file.
0 | Succeeded. |
>0 | An error occurred. |
In verbose mode, create a new file, test1.out.h5 , containing the object array in the root group, copied from the existing file test1.h5 and its object array :
h5copy -v -i "test1.h5" -o "test1.out.h5" -s "/array" -d "/array"
In verbose mode and using the flag shallow to prevent recursion in the file hierarchy, create a new file, test1.out.h5 , containing the object array in the root group, copied from the existing file test1.h5 and its object array :
h5copy -v -f shallow -i "test1.h5" -s "/array" -o "test1.out.h5" -d "/array"
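A further example, with hypothetical file and path names: using the -p option, copy the object /array into a destination path whose intermediate groups do not yet exist:
h5copy -p -i "test1.h5" -o "test1.out.h5" -s "/array" -d "/a/b/array"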
Release | Command Line Tool |
1.8.0 | Tool introduced in this release. |
1.8.7 | Tool updated to accept same file as input file and as output file. |
h5mkgrp [OPTIONS] file_name group_name...
h5mkgrp
creates one or more new groups
in an HDF5 file.
file_name
Name of the HDF5 file in which the new group(s) will be created.
group_name
Name(s) of the group(s) to be created, expressed as full path(s) from the root group.
-h, --help
Print a usage message and exit.
-l, --latest
Use the latest HDF5 file format when creating the new group(s).
-p, --parents
Create intermediate groups as needed; issue no error if the group(s) already exist.
-v, --verbose
Print information about each group as it is created.
-V, --version
Print version number and exit.
0 | Succeeded. |
>0 | An error occurred. |
Create a new group, new_group , within the existing group /a/b in the file HDF5_file :
h5mkgrp "HDF5_file" "/a/b/new_group"
Create a new group, new_group , within the group /a/b in the file HDF5_file . Create the groups a and b if they do not already exist. Issue no error if the intervening groups or the new group already exist:
h5mkgrp -p "HDF5_file" "/a/b/new_group"
Create the new groups /a/b/new_c and /a/x/new_4 in the file HDF5_file .
The groups /a/b and /a/x must already exist:
h5mkgrp "HDF5_file" "/a/b/new_c" "/a/x/new_4"
Release | Command Line Tool |
1.8.0 | Tool introduced in this release. |
h5import infile in_options [infile in_options ...] -o outfile
h5import infile in_options [infile in_options ...] -outfile outfile
h5import -h
h5import -help
h5import
converts data
from one or more ASCII or binary files, infile
,
into the same number of HDF5 datasets
in the existing or new HDF5 file, outfile
.
Data conversion is performed in accordance with the
user-specified type and storage properties
specified in in_options
.
The primary objective of h5import
is to
import floating point or integer data.
The utility's design allows for future versions that
accept ASCII text files and store the contents as a
compact array of one-dimensional strings,
but that capability is not implemented in HDF5 Release 1.6.
Input data and options:
Input data can be provided in ASCII text files or in binary files and may be signed or unsigned integer data or floating point data. Each input file, infile , contains a single n-dimensional array of values of one of these types, expressed in the order of fastest-changing dimensions first.
Floating point data in an ASCII input file may be expressed either in fixed-point form (e.g., 323.56) or in scientific notation (e.g., 3.23E+02).
Each input file can be associated with options specifying the datatype and storage properties. These options can be specified either as command line arguments or in a configuration file. Note that exactly one of these approaches must be used with a single input file.
Command line arguments, best used with simple input files, can be used to specify the class, size, dimensions of the input data and a path identifying the output dataset.
The recommended means of specifying input data options is in a configuration file; this is also the only means of specifying advanced storage features. See further discussion in "The configuration file" below.
The only required option for input data is dimension sizes; defaults are available for all others.
h5import
will accept up to 30 input files in a single call.
Other considerations, such as the maximum length of a command line,
may impose a more stringent limitation.
Output data and options:
The name of the output file is specified following
the -o
or -output
option
in outfile
.
The data from each input file is stored as a separate dataset
in this output file.
outfile
may be an existing file.
If it does not yet exist, h5import
will create it.
Output dataset information and storage properties can be specified only by means of a configuration file.
Dataset path | If the groups in the path leading to the dataset do not exist, h5import will create them.
If no group is specified, the dataset will be created as a member of the root group.
If no dataset name is specified, the default name is dataset0 for the first input dataset, dataset1 for the second input dataset, dataset2 for the third input dataset, etc.
h5import does not overwrite a pre-existing dataset of the specified or default name. When an existing dataset of a conflicting name is encountered, h5import quits with an error; the current input file and any subsequent input files are not processed.
| |
Output type | Datatype parameters for output data | |
Output data class | Signed or unsigned integer or floating point | |
Output data size | 8-, 16-, 32-, or 64-bit integer 32- or 64-bit floating point | |
Output architecture | NATIVE (Default), IEEE , or STD
Other architectures are included in the h5import design but are not implemented in this release.
| |
Output byte order | Little- or big-endian. Relevant only if output architecture is IEEE , UNIX , or STD ;
fixed for other architectures.
| |
Dataset layout and storage properties | Denote how raw data is to be organized on the disk. If none of the following are specified, the default configuration is contiguous layout and with no compression. | |
Layout | Contiguous (Default) Chunked | |
External storage | Allows raw data to be stored in a non-HDF5 file or in an
external HDF5 file. Requires contiguous layout. | |
Compressed | Sets the type of compression and the
level to which the dataset must be compressed. Requires chunked layout. | |
Extendable | Allows the dimensions of the dataset to increase over time and/or to be unlimited. Requires chunked layout. | |
Compressed and extendable | Requires chunked layout. | |
Command-line arguments:
The h5import syntax for the command-line arguments, in_options , is as follows:
h5import infile -d dim_list [-p pathname] [-t input_class] [-s input_size] [infile ...] -o outfile
or
h5import infile -dims dim_list [-path pathname] [-type input_class] [-size input_size] [infile ...] -outfile outfile
or
h5import infile -c config_file [infile ...] -outfile outfile
Note the following:
If the -c config_file option is used with an input file, no other argument can be used with that input file.
If the -c config_file option is not used with an input data file, the -d dim_list argument (or -dims dim_list ) must be used and any combination of the remaining options may be used.
Any arguments used must appear in exactly the order used in the syntax declarations immediately above.
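For example, the following command (hypothetical file and path names) imports a binary file of 32-bit unsigned integers, laid out as a 50x100 array, into the dataset /grp/dset of out.h5 :
h5import data.bin -dims 50,100 -path /grp/dset -type UIN -size 32 -outfile out.h5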
Using h5dump
to create input for
h5import
:
h5import
can use the output of h5dump
as input to create a dataset or file.
As in all uses of h5import
, an import action is
limited to a single dataset with an atomic numeric or text datatype.
h5dump
must first create two files:
•
A DDL file, which will be used as an
h5import
configuration file
•
A raw data file containing the data to be imported
The DDL file must be generated with the h5dump -p
option, to generate properties.
The raw data file may contain either numeric or string data.
Numeric data can be imported by this method only if h5dump
writes it to a binary file.
String data must be written with the h5dump -y
and
--width=1
options, generating a single column of
strings without indices.
Two examples follow:
The first imports a dataset with a numeric datatype.
Note that numeric data requires use of the
h5dump -b
option to produce a binary data file.
h5dump -p -d "/int/buin/16-bit" --ddl=binuin16.h5.dmp -o binuin16.h5.bin \ -b binuin16.h5 h5import binuin16.h5.bin -c binuin16.h5.dmp -o new_binuin16.h5
The second example imports a dataset containing text data.
Note that string data requires use of the
h5dump -y
option to exclude indexes and the
h5dump --width=1
option to generate
a single column of strings.
h5dump -p -d "/mytext/data" -O txtstr.h5.dmp -o txtstr.h5.bin \ -y --width=1 xtstr.h5 h5import txtstr.h5.bin -c txtstr.h5.dmp -o new_txtstr.h5
The configuration file:
A configuration file is specified with the
-c config_file
option:
h5import infile -c config_file
[infile -c config_file2 ...]
-outfile outfile
|
The configuration file is an ASCII file and must be
organized as "Configuration_Keyword Value" pairs,
with one pair on each line.
For example, the line indicating that
the input data class (configuration keyword INPUT-CLASS
)
is floating point in a text file (value TEXTFP
)
would appear as follows:
INPUT-CLASS TEXTFP
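For example, a complete configuration file for a hypothetical 4x6 array of floating point text data, to be stored as 64-bit big-endian IEEE data in the dataset /grp1/dset1 , might read as follows (keywords and values as defined in the table below):
RANK 2
DIMENSION-SIZES 6 4
PATH /grp1/dset1
INPUT-CLASS TEXTFP
INPUT-SIZE 32
OUTPUT-CLASS FP
OUTPUT-SIZE 64
OUTPUT-ARCHITECTURE IEEE
OUTPUT-BYTE-ORDER BE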
A configuration file may have the following keywords each
followed by one of the following defined values.
One entry for each of the first two keywords,
RANK
and DIMENSION-SIZES
,
is required; all other keywords are optional.
Keyword / Value | Description | ||
RANK
| The number of dimensions in the dataset. (Required) | ||
rank
| An integer specifying the number of dimensions
in the dataset. Example: 4 for a
4-dimensional dataset.
| ||
DIMENSION-SIZES
| Sizes of the dataset dimensions. (Required) | ||
dim_sizes
| A string of space-separated integers
specifying the sizes of the dimensions in the dataset.
The number of sizes in this entry must match the value in
the RANK entry.
The fastest-changing dimension must be listed first.
Example: 4 3 4 38 for a
38x4x3x4 dataset.
| ||
PATH
| Path of the output dataset. | ||
path
| The full HDF5 pathname identifying the output dataset
relative to the root group within the output file. I.e., path is a string consisting of
optional group names, each followed by a slash,
and ending with a dataset name.
If the groups in the path do not exist, they will be created.
If PATH is not specified, the output dataset
is stored as a member of the root group and the
default dataset name is
dataset0 for the first input dataset,
dataset1 for the second input dataset,
dataset2 for the third input dataset, etc.
Note that h5import does not overwrite a
pre-existing dataset of the specified or default name.
When an existing dataset of a conflicting name is
encountered, h5import quits with an error;
the current input file and any subsequent input files
are not processed.
Example: The configuration file entry PATH grp1/grp2/dataset1 indicates that the output dataset dataset1 will be written in the group grp2/ , which is in the group grp1/ , a member of the root group in the output file.
| ||
INPUT-CLASS
| A string denoting the type of input data. | ||
TEXTIN
| Input is signed integer data in an ASCII file. | ||
TEXTUIN
| Input is unsigned integer data in an ASCII file. | ||
TEXTFP
| Input is floating point data in either fixed-point notation (e.g., 325.34) or scientific notation (e.g., 3.2534E+02) in an ASCII file. | ||
IN
| Input is signed integer data in a binary file. | ||
UIN
| Input is unsigned integer data in a binary file. | ||
FP
| Input is floating point data in a binary file. (Default) | ||
STR
| Input is character data in an ASCII file.
With this value, the configuration keywords
RANK , DIMENSION-SIZES ,
OUTPUT-CLASS , OUTPUT-SIZE ,
OUTPUT-ARCHITECTURE , and
OUTPUT-BYTE-ORDER
will be ignored.(Not implemented in this release.) | ||
INPUT-SIZE
| An integer denoting the size of the input data, in bits. | ||
8 16 32 64
| For signed and unsigned integer data:
TEXTIN , TEXTUIN ,
IN , or UIN .
(Default: 32 )
| ||
32 64
| For floating point data:
TEXTFP
or FP .
(Default: 32 )
| ||
OUTPUT-CLASS
| A string denoting the type of output data. | ||
IN
| Output is signed integer data. (Default if INPUT-CLASS is
IN or TEXTIN )
| ||
UIN
| Output is unsigned integer data. (Default if INPUT-CLASS is
UIN or TEXTUIN )
| ||
FP
| Output is floating point data. (Default if INPUT-CLASS is not specified or is
FP or TEXTFP )
| ||
STR
| Output is character data,
to be written as a 1-dimensional array of strings. (Default if INPUT-CLASS is STR )(Not implemented in this release.) | ||
OUTPUT-SIZE
| An integer denoting the size of the output data, in bits. | ||
8 16 32 64
| For signed and unsigned integer data:
IN or UIN .
(Default: Same as INPUT-SIZE , else 32 )
| ||
32 64
| For floating point data:
FP .
(Default: Same as INPUT-SIZE , else 32 )
| ||
OUTPUT-ARCHITECTURE
| A string denoting the type of output architecture. | ||
NATIVE STD IEEE INTEL * CRAY * MIPS * ALPHA * UNIX *
| See the "Predefined Atomic Types" section
in the "HDF5 Datatypes" chapter
of the HDF5 User’s Guide
for a discussion of these architectures. Values marked with an asterisk (*) are not implemented in this release. (Default: NATIVE )
| ||
OUTPUT-BYTE-ORDER
| A string denoting the output byte order. This entry is ignored if the OUTPUT-ARCHITECTURE
is not specified or if it is not specified as IEEE ,
UNIX , or STD .
| ||
BE
| Big-endian. (Default) | ||
LE
| Little-endian. | ||
The following options are disabled by default, making the default storage properties no chunking, no compression, no external storage, and no extensible dimensions. | |||
CHUNKED-DIMENSION-SIZES | Dimension sizes of the chunk for chunked output data. | ||
chunk_dims
| A string of space-separated integers specifying the
dimension sizes of the chunk for chunked output data.
The number of dimensions must correspond to the value
of RANK .The presence of this field indicates that the output dataset is to be stored in chunked layout; if this configuration field is absent, the dataset will be stored in contiguous layout. | ||
COMPRESSION-TYPE
| Type of compression to be used with chunked storage. Requires that CHUNKED-DIMENSION-SIZES
be specified.
| ||
GZIP
| Gzip compression. Other compression algorithms are not implemented in this release of h5import .
| ||
COMPRESSION-PARAM
| Compression level. Required if COMPRESSION-TYPE is specified.
| ||
1 through 9
| Gzip compression levels:
1 will result in the fastest compression
while 9 will result in the
best compression ratio. (Default: 6. Note that not all compression methods have a default level.) | ||
EXTERNAL-STORAGE
| Name of an external file in which to create the output dataset. Cannot be used with CHUNKED-DIMENSION-SIZES ,
COMPRESSION-TYPE , or MAXIMUM-DIMENSIONS .
| ||
external_file
| A string specifying the name of an external file. | ||
MAXIMUM-DIMENSIONS
| Maximum sizes of all dimensions. Requires that CHUNKED-DIMENSION-SIZES be specified.
| ||
max_dims
| A string of space-separated integers specifying the
maximum size of each dimension of the output dataset.
A value of -1 for any dimension implies
unlimited size for that particular dimension. The number of dimensions must correspond to the value of RANK . | ||
infile(s) |
Name of the Input file(s). |
in_options |
Input options.
Note that while only the -dims argument
is required, arguments must be used in the order in which
they are listed below. |
-d dim_list |
|
-dims dim_list |
Input data dimensions.
dim_list is a string of
comma-separated numbers with no spaces
describing the dimensions of the input data.
For example, a 50 x 100 2-dimensional array would be
specified as -dims 50,100 . Required argument: if no configuration file is used, this command-line argument is mandatory. |
-p pathname |
|
-pathname pathname
|
pathname
is a string consisting of
one or more strings separated by slashes (/ )
specifying the path of the dataset in the output file.
If the groups in the path do not exist, they will be
created. Optional argument: if not specified, the default path is dataset1 for the first input dataset,
dataset2 for the second input dataset,
dataset3 for the third input dataset,
etc. h5import does not overwrite a pre-existing
dataset of the specified or default name.
When an existing dataset of a conflicting name is
encountered, h5import quits with an error;
the current input file and any subsequent input files
are not processed. |
-t input_class |
|
-type input_class |
input_class
specifies the class of the
input data and determines the class of the output data. Valid values are as defined in the Keyword/Values table in the section "The configuration file" above. Optional argument: if not specified, the default value is FP . |
-s input_size |
|
-size input_size |
input_size
specifies the size in bits of the
input data and determines the size of the output data.
Valid values for signed or unsigned integers are 8 , 16 , 32 ,
and 64 . Valid values for floating point data are 32 and 64 . Optional argument: if not specified, the default value is 32 . |
-c config_file |
config_file specifies a
configuration file. This argument replaces all other arguments except infile and
-o outfile |
-h |
|
-help |
Prints the h5import usage summary:
h5import -h[elp], OR
h5import <infile> <options> [<infile> <options>...] -o[utfile] <outfile>
Then exits. |
outfile |
Name of the HDF5 output file. |
0 | Succeeded. |
>0 | An error occurred. |
h5import infile -dims 2,3,4 -type TEXTIN -size 32
-o out1
| |
This command creates a file out1 containing
a single 2x3x4 32-bit integer dataset.
Since no pathname is specified, the dataset is stored
in out1 as /dataset1 .
| |
h5import infile -dims 20,50 -path bin1/dset1 -type FP
-size 64 -o out2
| |
This command creates a file out2 containing
a single 20x50 64-bit floating point dataset.
The dataset is stored in out2 as
/bin1/dset1 .
|
The following configuration file specifies that a dataset is to be written to
outfile at /work/h5/pkamat/First-set:

PATH work/h5/pkamat/First-set
INPUT-CLASS TEXTFP
RANK 3
DIMENSION-SIZES 5 2 4
OUTPUT-CLASS FP
OUTPUT-SIZE 64
OUTPUT-ARCHITECTURE IEEE
OUTPUT-BYTE-ORDER LE
CHUNKED-DIMENSION-SIZES 2 2 2
MAXIMUM-DIMENSIONS 8 8 -1

The next configuration file specifies that a dataset is to be written in
NATIVE format (as the output architecture is not specified) to
outfile at /Second-set:

PATH Second-set
INPUT-CLASS IN
RANK 5
DIMENSION-SIZES 6 3 5 2 4
OUTPUT-CLASS IN
OUTPUT-SIZE 32
CHUNKED-DIMENSION-SIZES 2 2 2 2 2
COMPRESSION-TYPE GZIP
COMPRESSION-PARAM 7
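A configuration file is supplied on the command line with the -c option. As a usage sketch (the file names here are placeholders):

h5import datafile -c config_file -o outfile

The settings in config_file then replace all other command-line arguments except the input and output file names.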
Release | Change |
1.6.0 | Tool introduced in this release. |
1.8.10 |
Tool updated to accept h5dump output. |
1.8.11 |
Process simplified for using h5dump output.
See “Using
h5dump to create input for
h5import .”
|
gif2h5
gif_file h5_file
gif2h5
accepts as input the GIF file gif_file
and produces the HDF5 file h5_file as output.
gif_file | The name of the input GIF file |
h5_file | The name of the output HDF5 file |
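A minimal usage sketch, assuming a hypothetical GIF file image.gif:

gif2h5 image.gif image.h5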
0 | Succeeded. |
> 0 | An error occurred. |
Release | Change |
1.8.5 | Tool exit status codes updated. |
h52gif h5_file gif_file
-i
h5_image
h52gif
accepts as input the HDF5 file
h5_file
and the name of an image within
that file, and produces the GIF file
gif_file
,
containing the image, as output.
h5_file |
The name of the input HDF5 file |
gif_file |
The name of the output GIF file |
-i h5_image
|
Image option, specifying the name of an HDF5 image or dataset containing an image to be converted |
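A minimal usage sketch of the reverse conversion, assuming a hypothetical file image.h5 containing an image named Image0:

h52gif image.h5 image.gif -i Image0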
0 | Succeeded. |
> 0 | An error occurred. |
Release | Change |
1.8.5 | Tool exit status codes updated. |
1.8.15 | The -p option was removed from the “Options and Parameters” section: the feature has not been implemented. |
The h5toh4
command-line utility is distributed with the
H4toH5 Conversion Library
and documented in the
“H4toH5
Conversion Library Reference Manual.”
The h4toh5
command-line utility is distributed with the
H4toH5 Conversion Library
and documented in the
“H4toH5
Conversion Library Reference Manual.”
h5stat
[OPTIONS] file
h5stat
reports selected statistics regarding an HDF5 file
and the objects in that file.
-h
or
--help |
Print a usage message and exit. |
-V
or
--version |
Print version of HDF5 and exit. |
-f
or
--file |
Print file information. |
-F
or
--filemetadata |
Print file space information for file metadata. |
-g
or
--group |
Print group information. |
-l N
or
--links=N |
Set the threshold for the number of links
when printing information for small groups.
N is an integer greater than 0. The default threshold is 10. |
-G or --groupmetadata |
Print file space information for group metadata. |
-d
or
--dset |
Print dataset information. |
-m N
or
--dims=N |
Set the threshold for the dimension sizes
when printing information for small datasets.
N is an integer greater than 0. The default threshold is 10. |
-D
or
--dsetmetadata |
Print file space information for dataset metadata. |
-T
or
--dtypemetadata |
Print dataset datatype information. |
-A
or
--attribute |
Print attribute information. |
-a N
or
--numattrs=N |
Set the threshold for the number of attributes
when printing information for small numbers of attributes.
N is an integer greater than 0. The default threshold is 10. |
-s
or
--freespace |
Print free space information. |
-S
or
--summary |
Print summary of file space information. |
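For example, the following hypothetical invocation prints only the summary and dataset information for a placeholder file named file.h5:

h5stat -S -d file.h5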
0 | Succeeded. |
> 0 | An error occurred. |
Release | Change |
1.10.1 |
When printing file space information via the -S option, the file space page size
is included.
|
1.10.0 |
Option added in this release:
-s, --freespace
|
1.8.12 |
Options added in this release:
-l N ,
--links=N
-m N ,
--dims=N
-a N ,
--numattrs=N
|
1.8.9 |
Option added in this release:
-S ,
--summary
|
1.8.0 | Tool introduced in this release. |
h5check
[OPTIONS]
file
h5check
is a validation tool designed to verify that an HDF5 file is encoded
according to the HDF5 File Format
Specification. The purpose is to ensure data model integrity and
long-term compatibility between evolving versions of the HDF5 Library.
Independent Verification Tool:
Note that h5check
is designed to operate independently of the
HDF5 Library:
h5check
is distributed separately; see
“HDF5
Tools and Software.”
h5check
scans through the encoded content,
verifying it against the defined library format.
If it finds any non-compliance, h5check
prints the error and
the reason behind the non-compliance; if possible, it continues the scanning.
If h5check
does not find any non-compliance, it prints an
approval statement upon completion.
By default, the file is verified against the latest version of the
file format; as of this writing, that is the format recognized by the
HDF5 Release 1.8.x series. A format version can be explicitly specified
with the -fn
(or --format=n
) option.
For example, -f16
(or --format=16
) would specify
verification against the format recognized by the HDF5 Release 1.6.x series.
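For instance, a hypothetical file myfile.h5 could be validated against the 1.6.x format with verbose output as follows (the file name is a placeholder):

h5check -v2 --format=16 myfile.h5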
-h, --help
-V, --version
-vn, --verbose n
n=0
| Terse | Indicate only whether file is compliant. |
n=1
| Normal | Print progress and all errors found. (Default) |
n=2
| Verbose | Print all known information; usually used for debugging. |
-e, --external
-fn, --format n
n=16
| Validate according to HDF5 Release 1.6.x series. |
n=18
| Validate according to HDF5 Release 1.8.x series. (Default) |
-oa, --object a
a
is the address of the object header to be validated.
0
| Succeeded. |
1
| Command failures, such as argument errors. |
2
| Format compliance errors found. |
Release | Change |
1.8.5 | Tool first distributed shortly before this release. |
h5perf
[-h
| --help
]
h5perf
[options]
h5perf
is a tool for testing the performance
of the Parallel HDF5 Library.
The tool can perform testing with 1-dimensional and 2-dimensional
buffers and datasets.
For details regarding data organization and access, see the
“h5perf
User Guide.”
The following environment variables have the following
effects on h5perf
behavior:
HDF5_NOCLEANUP |
If set, h5perf does not remove data files.
(Default: Data files are removed.) | |
HDF5_MPI_INFO |
Must be set to a string containing a list of semicolon-separated
key=value pairs for the MPI INFO object.
Example: | |
HDF5_PARAPREFIX | Sets the prefix for parallel output data files. |
These terms are used as follows in this section: | |
file | A filename |
size | A size specifier, expressed as an integer
greater than or equal to 0 (zero) followed by a size indicator:
K for kilobytes (1024 bytes)
M for megabytes (1048576 bytes)
G for gigabytes (1073741824 bytes)
Example: 37M specifies 37 megabytes or
38797312 bytes. |
N | An integer greater than or equal to 0 (zero) |
-h , --help |
||||||||||||||||||||||
Prints a usage message and exits. | ||||||||||||||||||||||
-a size,
--align= size |
||||||||||||||||||||||
Specifies the alignment of objects in the HDF5 file.
(Default: 1 ) |
||||||||||||||||||||||
-A api_list,
--api= api_list |
||||||||||||||||||||||
Specifies which APIs to test. api_list
is a comma-separated list with the following valid values:
For example, --api=mpiio,phdf5 specifies that the MPI I/O
and Parallel HDF5 APIs are to be monitored. |
||||||||||||||||||||||
-B size,
--block-size= size |
||||||||||||||||||||||
Controls the block size within the transfer buffer.
(Default: Half the number of bytes per process per dataset)
Block size versus transfer buffer size:
Transfer buffer size is discussed below with the
-x (--min-xfer-size) and -X (--max-xfer-size) options.
The pattern in which the blocks are written to the file is
described in the discussion of the -I (--interleaved) option. |
||||||||||||||||||||||
-c ,
--chunk |
||||||||||||||||||||||
Creates HDF5 datasets in chunked layout.
(Default: Off) |
||||||||||||||||||||||
-C ,
--collective |
||||||||||||||||||||||
Use collective I/O for the MPI I/O and
Parallel HDF5 APIs.
(Default: Off, i.e., independent I/O)
If this option is set and the MPI-I/O and PHDF5 APIs are in use,
all the blocks of every process will be written at once
with an MPI derived type. |
||||||||||||||||||||||
-d N,
--num-dsets N |
||||||||||||||||||||||
Sets the number of datasets per file.
(Default: 1 ) |
||||||||||||||||||||||
-D debug_flags,
--debug= debug_flags |
||||||||||||||||||||||
Sets the debugging level. debug_flags
is a comma-separated list of debugging flags with the following
valid values:
Example:
Throughput values are computed by dividing the total amount of
transferred data (excluding metadata) by the time spent
by the slowest process.
Several time counters are defined to measure the data transfer
time and the total elapsed time; the latter includes the time
spent during file open and close operations.
A number of iterations can be specified with the
-i (--num-iterations) option. The timing scheme is as follows:

for each iteration
    initialize elapsed time counter
    initialize data transfer time counter
    for each file
        start and accumulate elapsed time counter
            file open
            start and accumulate data transfer time counter
                access entire file
            stop data transfer time counter
            file close
        stop elapsed time counter
    end file
    save elapsed time counter
    save data transfer time counter
end iteration

The reported write throughput is based on the accumulated data transfer time, while the write open-close throughput uses the accumulated elapsed time. |
||||||||||||||||||||||
-e size,
--num-bytes= size |
||||||||||||||||||||||
Specifies the number of bytes per process per
dataset.
(Default: 256K for 1D,
8K for 2D)
Depending on the selected geometry, each test dataset
can be a linear array of size
bytes-per-process * num-processes
or a square array of size
(bytes-per-process * num-processes) ×
(bytes-per-process * num-processes).
The number of processes is set by the -p and -P options.
|
||||||||||||||||||||||
-F N,
--num-files= N |
||||||||||||||||||||||
Specifies the number of files.
(Default: 1 ) |
||||||||||||||||||||||
-g ,
--geometry |
||||||||||||||||||||||
Selects 2D geometry for testing.
(Default: Off, i.e., 1D geometry) |
||||||||||||||||||||||
-i N,
--num-iterations= N |
||||||||||||||||||||||
Sets the number of iterations to perform.
(Default: 1 ) |
-I ,
--interleaved |
|
Sets interleaved block I/O.
(Default: Contiguous block I/O)
Interleaved and contiguous patterns in 1D geometry:
For example, with a three-process run, 512KB bytes-per-process, 256KB transfer buffer size, and 64KB block size, each process must issue two transfer requests to complete access to the dataset. (A sample invocation of this scenario appears after this option table.)
Contiguous blocks of the first transfer request are written
as follows:
1111----2222----3333----
Interleaved blocks of the first transfer request are written
as follows:
123123123123------------
Each digit denotes a 64KB block written by the correspondingly numbered process; dashes mark the file regions covered by the second transfer request.
The actual number of I/O operations involved in a transfer request depends on the access pattern and communication mode. When using independent I/O with an interleaved access pattern, each process performs four small non-contiguous I/O operations per transfer request. If collective I/O is turned on, the combined content of the buffers of the three processes will be written using one collective I/O operation per transfer request. For details regarding the impact of performance and access patterns in 2D, see the “h5perf User Guide.” |
-m ,
--mpi-posix |
This option is no longer available. |
-n , --no-fill |
Specifies that fill values are not to be written to HDF5 datasets.
This option is supported only in HDF5 Release 1.6 or later.
(Default: Off, i.e., write fill values) |
-o file,
--output= file |
Sets the output file for raw data to file.
(Default: None) |
-p N,
--min-num-processes= N |
Sets the minimum number of processes to be used.
(Default: 1 ) |
-P N,
--max-num-processes= N
|
Sets the maximum number of processes to be used.
(Default: All MPI_COMM_WORLD processes) |
-T size,
--threshold= size |
Sets the threshold for alignment of objects in the
HDF5 file.
(Default: 1 ) |
-w , --write-only |
Performs only write tests, not read tests.
(Default: Read and write tests) |
-x size,
--min-xfer-size= size |
Sets the minimum transfer buffer size.
(Default: Half the number of bytes per process per dataset)
This option and the -X (--max-xfer-size) option together set the range of transfer buffer sizes to be tested. |
-X size,
--max-xfer-size= size |
Sets the maximum transfer buffer size.
(Default: The number of bytes per process per dataset) |
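As an illustrative sketch (the launcher name and all values are assumptions, not prescriptions), the three-process interleaved scenario described under the -I option could be run as:

mpiexec -n 3 h5perf -e 512K -x 256K -X 256K -B 64K -p 3 -P 3 -I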
0 | Succeeded. |
>0 | An error occurred. |
Release | Change |
1.6.0 | Tool introduced in this release. |
1.6.8 and 1.8.0 |
Option -g , --geometry introduced
in this release. |
h5perf_serial
[-h
| --help
]
h5perf_serial
[options]
h5perf_serial
provides tools for testing the performance
of the HDF5 Library in serial mode.
See “h5perf_serial, a Serial File System Benchmarking Tool” for a complete description of this tool.
The following environment variables can be set to control the
specified aspects of h5perf_serial
behavior:
HDF5_NOCLEANUP
|
If set, h5perf_serial does not remove data files.
(Default: Data files are removed.) | ||
HDF5_PREFIX
| Sets the prefix for output data files. |
A size specifier is an integer greater than or equal to 0 (zero) followed by a size indicator:
K
for kilobytes (1024 bytes)
M
for megabytes (1048576 bytes)
G
for gigabytes (1073741824 bytes)
Example:
37M
specifies 37 megabytes or
38797312 bytes.
-A api_list
|
Specifies which APIs to test. api_list
is a comma-separated list with the following valid values:
Example: |
||||||
-c chunk_size_list
| Specifies chunked storage
and defines chunk dimensions and sizes.
(Default: Chunking is off.)
chunk_size_list is a comma-separated list of
size specifiers.
For example, a chunk_size_list value of
|
||||||
-e dataset_size_list
| Specifies dataset dimensionality
and dataset dimension sizes.
(Default dataset size is 100x200, or 100,200 .)
dataset_size_list is a comma-separated list of size specifiers, which are defined above.
For example, a dataset_size_list value of
|
||||||
-i iterations
| Specifies the number of iterations to perform.
(Default: A single iteration, 1 , is performed.)
iterations is an integer specifying the number of iterations. |
||||||
-r access_order
| Specifies dimension access order.
(Default: 1,2 )
access_order is a comma-separated list of
integers specifying the order of access.
For example,
|
||||||
-t
| Selects extendable HDF5 dataset dimensions.
(Default: Datasets are fixed size.)
|
||||||
-v file_driver
| Selects the HDF5 driver to be used for HDF5 file access.
(Default: sec2 )
Valid values are as follows:
|
||||||
-w
| Specifies that only write tests be performed;
read performance will not be tested.
(Default: Both write and read tests are performed.) |
||||||
-x buffer_size_list
| Specifies transfer buffer dimensions and sizes.
(Default: 10,20 )
|
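A minimal usage sketch (all values illustrative): test write performance only on a 500x500 dataset with 100x100 chunks over three iterations:

h5perf_serial -e 500,500 -c 100,100 -i 3 -w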
0 | Succeeded. |
>0 | An error occurred. |
Release | Command Line Tool |
1.8.1 | Tool introduced in this release. |
h5clear [OPTIONS] file
The h5clear
tool can either clear the
status_flags
field in the superblock of the file specified on the
command line or remove a metadata cache image from that file.
With the implementation of file locking, the library uses the
status_flags
field in the superblock to mark a
file as being in writing or SWMR-writing mode when the file is opened.
The library will clear this field when the file closes.
However, a situation may occur where an open file is closed
without going through the normal library file closing procedure,
and this field will not be cleared as a result. An example would
be if an application program crashed. This situation will prevent
a user from opening the file. The h5clear
tool will
clear the status_flags
field, and the user can
then open the file.
When used to remove a metadata cache image, h5clear
will
open the supplied HDF5 file in Read-Write (R/W) mode, check to see
if it contains a cache image, and then close it. If the file does
not contain a cache image, h5clear
will generate a warning message
to that effect.
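For example, assuming a hypothetical file swmr.h5 left marked as open by a crashed writer, the status_flags field could be cleared so that the file can be reopened:

h5clear -s swmr.h5

Similarly, h5clear -m swmr.h5 would remove a metadata cache image from the file.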
-h, --help |
Print a usage message and exit |
-V, --version |
Print version number and exit |
-s, --status |
Clear the status_flags field in the file's superblock |
-m, --image |
Remove the metadata cache image from the file |
0 | Succeeded. |
> 0 | An error occurred. |
Release | Change |
1.10.1 |
-m , --image option added to remove the metadata cache image. |
1.10.0 | Tool introduced in this release. |
h5watch
h5watch [OPTIONS] OBJECT
h5watch
can be used to watch data that is added
to a dataset. The functionality is similar to the Unix command
tail
with its follow option (tail -f), which outputs appended
data as the file grows. h5watch
can only be used on chunked
datasets with unlimited dimension(s) or with fixed dimension(s) that have
an associated maximum dimension setting. The options for h5watch are as follows:
Long Format | Comments |
--help |
Print a usage message and exit. |
--version |
Print the version of HDF5 and exit. |
--label |
Label members of a compound datatyped dataset. |
--simple |
Use a machine-readable output format. |
--dim |
Monitor changes in dimension size of the dataset only. |
--width=N |
Set the number of columns to N for output. A value of 0 sets the number of columns to the maximum (65535). The default width is 80 columns. |
--polling=N |
Set the polling interval to N seconds; the dataset will be checked for appended data at this interval. The default polling interval is 1 second. |
--fields=list_of_fields |
Display data for the fields specified in
list_of_fields for a compound datatype.
list_of_fields can be specified as follows:
|
OBJECT
is the dataset to be monitored and
is specified with the following format:
filename/path_to_dataset/dsetname
|
|
Each element in this specification of
OBJECT is defined as follows:
| |
filename |
The name of the HDF5 file.
It may be preceded by a slash-separated path to the file. |
path_to_dataset |
The slash-separated path within the HDF5 file to the specified dataset |
dsetname |
The name of the dataset to be monitored. |
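For example (all names hypothetical), a dataset dsetA in group grp1 of the file /tmp/example.h5 would be monitored with:

h5watch /tmp/example.h5/grp1/dsetA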
0 | Succeeded. |
> 0 | An error occurred. |
In the default mode of use, the command line includes only the tool name and the name of the dataset to be watched. The output describes any change to the dataset's dimension sizes and lists the appended data.
Suppose there is a dataset called dsetA in an HDF5 file called
example.h5, and this dataset is a one-dimensional dataset with
three records. After h5watch
is run with a command line
of h5watch example.h5/dsetA
, the dimension size in the
file is changed from three to five, and data is written to the
dataset. The output from h5watch
would be the
following:
dims[0]: 3->5
Data:
(3): record
(4): record
For more examples, see the h5watch Examples document.
Release | Change |
1.10.0 | Tool introduced in this release. |
h5format_convert [OPTIONS] file_name
h5format_convert converts the layout format version and chunked
indexing types of datasets in the HDF5 file
file_name
so that the file can be read by earlier versions of the HDF5 Library. It will convert
datasets as follows:
If errors are encountered, no further conversion will be performed, and the tool will exit with failure.
Short and Long Forms | Comments |
-h
--help |
The tool will print a usage message and exit with success. |
-V
--version |
The tool will print the version number and exit with success. |
-v
--verbose |
This will enable the verbose mode. The tool will print the steps being done while converting a dataset. |
-d dataset_name
--dname=dataset_name
|
This is the name, including its path, of the
dataset to be converted (the path of links from the root group to the
dataset). If no conversion is needed,
the tool will exit with success.
Only one dataset can be specified with this option each time the tool is run.
If this option is not used, the tool will attempt to convert
every dataset in file_name .
|
-n
--noop |
Noop is short for no operation. The file will not be modified. The tool will perform all the steps except the actual conversion and exit with success. When errors are encountered along the way, the tool will exit with failure. |
file_name
|
The name of the file that the tool will operate on. |
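A cautious workflow sketch (file and dataset names hypothetical): first perform a dry run with -n, then convert a single dataset verbosely:

h5format_convert -n -d /grp1/dset1 file.h5
h5format_convert -v -d /grp1/dset1 file.h5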
Release | Change |
1.10.0 | Tool introduced in this release. |
h5redeploy
[help
| -help
]
h5redeploy
[-echo
]
[-force
]
[-prefix=
dir]
[-exec-prefix=
dir]
[-libdir=
dir]
[-includedir=
dir]
[-tool=
tool]
[-show
]
h5redeploy
updates the HDF5 compiler tools after
the HDF5 software has been installed in a new location.
help , -help |
Prints a help message. |
-echo |
Shows all the shell commands executed. |
-force |
Performs the requested action without offering any prompt requesting confirmation. |
-prefix= dir |
Specifies a new directory in which to find the
HDF5 subdirectories lib/ and include/ .
(Default: current working directory) |
-exec-prefix= dir
|
Specifies a new directory in which to find the
HDF5 lib/ subdirectory.
(Default: prefix ) |
-libdir= dir |
Specifies a new directory for the
HDF5 lib/ directory.
(Default: exec-prefix/lib ) |
-includedir= dir |
Specifies a new directory for the
HDF5 include/ directory.
(Default: prefix/include ) |
-tool= tool |
Specifies the tool to update.
tool must be in the current directory and must be writable. (Default: h5cc h5pcc h5fc h5pfc h5c++ ) |
-show |
Shows all of the shell commands to be executed without actually executing them. |
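For example, after relocating an installation to a hypothetical directory /opt/hdf5, the compiler scripts could be updated without a confirmation prompt:

h5redeploy -force -prefix=/opt/hdf5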
0 | Succeeded. |
> 0 | An error occurred. |
Release | Command Line Tool |
1.8.11 |
-exec-prefix , -libdir , and
-includedir options added. |
1.8.5 | Tool exit status codes updated. |
1.6.0 | Tool introduced in this release. |
h5cc
[
OPTIONS]
<compile line>
h5pcc
[
OPTIONS]
<compile_line>
h5cc
and h5pcc
can be used in much
the same way as mpicc
by MPICH is used to compile
an HDF5 program.
These tools take care of specifying on the command line
the locations of the HDF5 header files and libraries.
h5cc
is for use in serial computing environments;
h5pcc
is for parallel environments.
h5cc
and h5pcc
subsume all other
compiler scripts in that if you have used a set of scripts to compile
the HDF5 library, then h5cc
and h5pcc
also use those scripts.
For example, when compiling an MPICH program, you use the
mpicc
script.
If you have built HDF5 using MPICH, then h5cc
uses the MPICH program for compilation.
Some programs use HDF5 in only a few modules.
It is not necessary to use h5cc
or h5pcc
to compile those modules which do not use HDF5.
In fact, since h5cc
and h5pcc
are only
convenience scripts, you can still compile HDF5 modules in the
normal manner, though you will have to specify the HDF5 libraries
and include paths yourself.
Use the -show
option to see the details.
For example, running h5cc
for an HDF5 library built
using gcc
with --disable-shared
,
zlib
and szlib
,
all installed in /usr/local/lib
would provide this compile command:
gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE \
    -D_BSD_SOURCE -L/usr/local/lib /usr/local/lib/libhdf5_hl.a \
    /usr/local/lib/libhdf5.a /usr/local/lib/libsz.a /usr/local/lib/libz.a \
    -lm -Wl,-rpath -Wl,/usr/local/lib
An example of how to use h5cc
to compile the program
hdf_prog
, which consists of the modules
prog1.c
and prog2.c
and uses the HDF5
shared library, would be as follows.
h5pcc
is used in an identical manner.
# h5cc -c prog1.c
# h5cc -c prog2.c
# h5cc -shlib -o hdf_prog prog1.o prog2.o
-help |
Prints a help message. |
-echo |
Show all the shell commands executed. |
-prefix=DIR |
Use the directory DIR to
find the HDF5 lib/ and include/
subdirectories.
Default: prefix specified when configuring HDF5. |
-show |
Show the commands without executing them. |
-shlib |
Compile using shared HDF5 libraries.
Default for HDF5 built without static libraries. |
-noshlib |
Compile using static HDF5 libraries.
Default for HDF5 built with static libraries. |
<compile line> | The normal compile line options for your compiler.
h5cc and h5pcc use
the same compiler you used to compile HDF5.
Check your compiler's manual for more information on which
options are needed. |
h5cc
and h5pcc
defaults.
HDF5_CC |
Use a different C compiler. |
HDF5_CLINKER |
Use a different linker. |
HDF5_USE_SHLIB=[yes|no] |
Use shared version of the HDF5 library.
Default: no, except when HDF5 built with only shared libraries. |
HDF5_CPPFLAGS |
Use additional preprocessor flags. |
HDF5_CFLAGS |
Use additional C compiler flags. |
HDF5_LDFLAGS |
Use additional library paths. |
HDF5_LIBS |
Use additional libraries. |
The last four of these environment variables have corresponding
variables with names ending in BASE
that can also be
set by editing their values in the "Things You Can Modify to Override
HDF5 Library Build Components" section of the h5cc
and
h5pcc
scripts.
Note that adding library paths to HDF5_LDFLAGS
where another HDF5 version is located may link your program
with that other HDF5 Library version.
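As an illustrative sketch of these overrides (the path and flags are placeholders), additional preprocessor and compiler flags can be supplied for a single invocation:

HDF5_CPPFLAGS="-I/opt/extra/include" HDF5_CFLAGS="-Wall" h5cc -c prog1.c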
0 | Succeeded. |
> 0 | An error occurred. |
Release | Change |
1.8.12 |
Tool modified to switch default to link to shared libraries
when HDF5 configured with --disable-static .
|
1.8.6 | Four compiler flags and environment variables added. |
1.8.5 | Tool exit status codes updated. |
h5fc
[
OPTIONS]
<compile line>
h5pfc
[
OPTIONS]
<compile_line>
h5fc
and h5pfc
can be used in much the
same way as mpif90
by MPICH is used to compile
an HDF5 program.
These tools take care of specifying on the command line
the locations of the HDF5 header files and libraries.
h5fc
is for use in serial computing environments;
h5pfc
is for parallel environments.
h5fc
and h5pfc
subsume all other
compiler scripts in that if you have used a set of scripts to compile
the HDF5 Fortran library, then h5fc
and h5pfc
also use those scripts. For example, when
compiling an MPICH program, you use the mpif90
script. If you have built HDF5 using MPICH, then h5fc
uses the MPICH program for compilation.
Some programs use HDF5 in only a few modules. It is not necessary
to use h5fc
and h5pfc
to compile those
modules which do not use HDF5.
In fact, since h5fc
and h5pfc
are only
convenience scripts, you can still compile HDF5 Fortran modules in
the normal manner, though you will have to specify the
HDF5 libraries and include paths yourself.
Use the -show
option to see the details.
An example of how to use h5fc
to compile the program
hdf_prog
, which consists of the modules
prog1.f90
and prog2.f90
and uses the HDF5 Fortran library, would be as follows.
h5pfc
is used in an identical manner.
# h5fc -c prog1.f90
# h5fc -c prog2.f90
# h5fc -o hdf_prog prog1.o prog2.o
-help |
Prints a help message. |
-echo |
Show all the shell commands executed. |
-prefix=DIR |
Use the directory DIR to find HDF5
lib/ and include/ subdirectories.
Default: prefix specified when configuring HDF5. |
-show |
Show the commands without executing them. |
-shlib |
Compile using shared HDF5 libraries.
Default for HDF5 built without static libraries. |
-noshlib |
Compile using static HDF5 libraries.
Default for HDF5 built with static libraries. |
<compile line> | The normal compile line options for your compiler.
h5fc and h5pfc use
the same compiler you used to compile HDF5.
Check your compiler's manual for
more information on which options are needed. |
h5fc
and h5pfc
defaults.
HDF5_FC |
Use a different Fortran compiler. |
HDF5_FLINKER |
Use a different linker. |
HDF5_USE_SHLIB=[yes|no]
|
Use shared version of the HDF5 library.
Default: no, except when HDF5 built with only shared libraries. |
HDF5_FFLAGS |
Use additional Fortran compiler flags. |
HDF5_LDFLAGS |
Use additional library paths. |
HDF5_LIBS |
Use additional libraries. |
The last three of these environment variables have corresponding
variables with names ending in BASE
that can also be
set by editing their values in the "Things You Can Modify to Override
HDF5 Library Build Components" section of the h5fc
and
h5pfc
scripts.
Note that adding library paths to HDF5_LDFLAGS
where another HDF5 version is located may link your program
with that other HDF5 Library version.
0 | Succeeded. |
> 0 | An error occurred. |
Release | Change |
1.8.12 |
Tool modified to switch default to link to shared libraries
when HDF5 configured with --disable-static .
|
1.8.11 |
Tool updated to recognize
.f95 , .f03 , and .f08
file extensions.
|
1.8.6 | Three compiler flags and environment variables added. |
1.8.5 | Tool exit status codes updated. |
1.6.0 | Tool introduced in this release. |
h5c++
[
OPTIONS]
<compile line>
h5c++
can be used in much the same way as mpiCC by MPICH is used
to compile an HDF5 program. It takes care of specifying where the
HDF5 header files and libraries are on the command line.
h5c++
subsumes all other compiler scripts in that
if you've used one set of compiler scripts to compile the
HDF5 C++ library, then h5c++
uses those same scripts.
For example, when compiling an MPICH program,
you use the mpiCC
script.
Some programs use HDF5 in only a few modules. It isn't necessary
to use h5c++
to compile those modules which don't use
HDF5. In fact, since h5c++
is only a convenience
script, you are still able to compile HDF5 C++ modules in the
normal way. In that case, you will have to specify the HDF5 libraries
and include paths yourself.
Use the -show
option to see the details.
An example of how to use h5c++
to compile the program
hdf_prog
, which consists of modules
prog1.cpp
and prog2.cpp
and uses the HDF5 C++ library, would be as follows:
# h5c++ -c prog1.cpp
# h5c++ -c prog2.cpp
# h5c++ -o hdf_prog prog1.o prog2.o
-help |
Prints a help message. |
-echo |
Show all the shell commands executed. |
-prefix=DIR |
Use the directory DIR to find HDF5
lib/ and include/ subdirectories.
Default: prefix specified when configuring HDF5. |
-show |
Show the commands without executing them. |
-shlib |
Compile using shared HDF5 libraries.
Default for HDF5 built without static libraries. |
-noshlib |
Compile using static HDF5 libraries.
Default for HDF5 built with static libraries. |
<compile line> |
The normal compile line options for your compiler.
h5c++ uses the same compiler you used
to compile HDF5. Check your compiler's manual for
more information on which options are needed. |
h5c++
.
HDF5_CXX |
Use a different C++ compiler. |
HDF5_CXXLINKER |
Use a different linker. |
HDF5_USE_SHLIB=[yes|no] |
Use shared version of the HDF5 library.
Default: no, except when HDF5 built with only shared libraries. |
HDF5_CPPFLAGS |
Use additional preprocessor flags. |
HDF5_CXXFLAGS |
Use additional C++ compiler flags. |
HDF5_LDFLAGS |
Use additional library paths. |
HDF5_LIBS |
Use additional libraries. |
The last four of these environment variables have corresponding
variables with names ending in BASE
that can also be set
by editing their values in the "Things You Can Modify to Override HDF5
Library Build Components" section of the h5c++
script.
Note that adding library paths to HDF5_LDFLAGS
where another HDF5 version is located may link your program
with that other HDF5 Library version.
0 | Succeeded. |
> 0 | An error occurred. |
Release | Command Line Tool |
1.8.12 |
Tool modified to switch default to link to shared libraries
when HDF5 configured with --disable-static .
|
1.8.6 | Four compiler flags and environment variables added. |
1.8.5 | Tool exit status codes updated. |
1.6.0 | Tool introduced in this release. |
The HDF Group Help Desk:
Describes HDF5 Release 1.10. |
Copyright by
The HDF Group
and the Board of Trustees of the University of Illinois |