All Rights Reserved
By obtaining, using, and/or copying this software and/or its associated documentation, you agree that you have read, understood, and will comply with the following terms and conditions: Permission to use, copy, modify, and distribute this software and its documentation, without fee, is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of Swedish Meteorological and Hydrological Institute or SMHI not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. SMHI DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL SMHI BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

Copyright Notice and Statement for NCSA Hierarchical Data Format (HDF) Software Library and Utilities

NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities. Copyright 1998, 1999, 2000 by the Board of Trustees of the University of Illinois. All rights reserved.

Contributors: National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC), Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), Jean-loup Gailly and Mark Adler (gzip library).

Redistribution and use in source and binary forms, with or without modification, are permitted for any purpose (including commercial purposes) provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions, and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the following disclaimer in the documentation and/or materials provided with the distribution.
3. In addition, redistributions of modified forms of the source or binary code must carry prominent notices stating that the original code was changed and the date of the change.
4. All publications or advertising materials mentioning features or use of this software are asked, but not required, to acknowledge that it was developed by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and to credit the contributors.
5. Neither the name of the University nor the names of the Contributors may be used to endorse or promote products derived from this software without specific prior written permission from the University or the Contributors.
6. THIS SOFTWARE IS PROVIDED BY THE UNIVERSITY AND THE CONTRIBUTORS "AS IS" WITH NO WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED. In no event shall the University or the Contributors be liable for any damages suffered by the users arising out of the use of this software, even if advised of the possibility of such damage.
Øystein Godøy | The Norwegian Meteorological Institute |
Harri Hohti | Finnish Meteorological Institute |
Otto Hyvärinen | Finnish Meteorological Institute |
Pirkko Pylkkö | Finnish Meteorological Institute |
Per Kållberg | European Centre for Medium-Range Weather Forecasts (ECMWF) and SMHI |
Hans Alexandersson | SMHI |
Bengt Carlsson | SMHI |
Adam Dybbroe | SMHI |
Jörgen Sahlberg | SMHI |
/usr/local/src % tar xvzf hlhdf_r0.34.tgz
This will create a directory called hlhdf and the distribution will be placed in it. If the above arguments fail, then you have not used GNU tar. HL-HDF has a configure script to determine paths to compilers, headers and libraries; in short, it tries to find everything HL-HDF needs to be built. Execute the configure script. The most relevant arguments are:
--prefix=PATH | Set the root of the installation path. Defaults to '/usr/local/hlhdf' |
--with-zlib=INC,LIB | Use the zlib compression headers located at INC and the library located at LIB |
--with-hdf5=INC,LIB | Use the HDF5 headers located at INC and libraries located at LIB |
--with-python=yes|no | Configures Python support. Default is yes. Enables building the Python interface. |
--with-fortran=yes|no | Configures Fortran support. Default is no. Useful when integrating with F77 code. |
/usr/local/src/hlhdf % ./configure -help
If configure fails, which is unlikely, then you may be in trouble. See Section 3.5 for platform-specific notes. The bottom line is that you may have to make some manual adjustments to your configuration files. If configuration has been carried out without any problems, then you're ready to build HL-HDF with:
/usr/local/src/hlhdf % make
This will generate the library libhlhdf.a located in the /usr/local/src/hlhdf/hlhdf directory.
/usr/local/src/hlhdf/test % make
which should build the test program testRaveObject. This program can be used to test reading and writing of an artificial image along with a number of different kinds of header parameters. To test reading, execute
/usr/local/src/hlhdf/test % testRaveObject read
and an ASCII representation of the contents of rave_image_file.hdf will be written to stdout. To test writing, execute
/usr/local/src/hlhdf/test % testRaveObject write
and an ASCII representation of the contents of rave_image_file.hdf will be written to stdout and the file itself will be re-written. Alternatively, if rave_image_file.hdf doesn't exist, execute the test program with the write argument first to create the file, and then read it to examine its contents. If this test program works, then you can be confident that the HL-HDF library works! (The above use of "rave" in the test program and file refers to the Radar Analysis and Visualization Environment software, which is freely available and maintained by SMHI.)
/usr/local/src/hlhdf % make install
and the header files, libraries, binaries, scripts and an MK-file will be installed to the include, lib, bin and mkf directories located under the path specified by the prefix variable which was used when HL-HDF was built. The installation of HL-HDF is complete when this has been carried out. For information on how to compile and install the Python interface, see the chapter on the Python interface.
CFLAGS= -woff 1174,1429,1209,1196,1685 -woff 799,803,835 -Wl,-woff,47,-woff,84,-woff,85,-woff,134
Add these arguments to the end of your CFLAGS variable. This list may not be complete. Fortran compiler: MIPSpro Compilers, Version 7.2.1.3m. Notes: FMI is gratefully acknowledged for letting us use one of their machines.
.
/NOAA 14/
/NOAA 14/info
/NOAA 14/info/xsize
/NOAA 14/info/ysize
/NOAA 14/Channel 1/
/NOAA 14/Channel 2/
/NOAA 14/Channel 3/
where info is an object containing header information. The same strategy could be used to store several polar scans of weather radar data, for example. Alternatively, a numerical weather prediction model state could be represented in part using GRIB descriptors like this:
.
/Level 0/
/Level 0/Type 105/
/Level 0/Type 105/Parameter 11/
/Level 0/Type 102/
/Level 31/
/Level 31/Type 109/
/Level 31/Type 109/Parameter 11/
Or, why not a point from a weather station containing wind speed and direction values:
.
/WMO 02064/
/WMO 02064/dd/
/WMO 02064/ff/
/WMO 02036/
UNDEFINED_ID=-1 ATTRIBUTE_ID=0 GROUP_ID=1 DATASET_ID=2 TYPE_ID=3 REFERENCE_ID=4
DTYPE_UNDEFINED_ID=-1 HL_SIMPLE=0 HL_ARRAY=1
When new nodes are initiated, they contain HL_DataType=DTYPE_UNDEFINED_ID.
NMARK_ORIGINAL=0 NMARK_CHANGED=1 NMARK_SELECT=2
HL_NodeMark=NMARK_CHANGED marks a node as having been modified. HL_NodeMark=NMARK_SELECT indicates that the node should be read when performing a fetch.
typedef struct HL_Node {
   HL_Type type;            /* the type of node */
   char name[256];          /* the node's name */
   int ndims;               /* the number of dimensions in the array */
   hsize_t dims[4];         /* the dimensions of each of ndims */
   unsigned char* data;     /* actual data (fixed type) */
   unsigned char* rawdata;  /* actual raw data, machine dependent */
   char format[64];         /* the string representation of the data type */
   hid_t typeId;            /* reference to HDF's internal type management */
   size_t dSize;            /* size of one value in data (fixed type) */
   size_t rdSize;           /* size of one value in raw data, machine dependent */
   HL_DataType dataType;    /* identifies whether data is single or an array */
   hid_t hdfId;             /* like typeId: for internal use */
   HL_NodeMark mark;        /* is this node marked? */
   HL_CompoundTypeDescription* compoundDescription; /* a list of compound type descriptions */
} HL_Node;

typedef struct {
   char filename[256];      /* a file string */
   char tmp_name[512];      /* temporary names for internal use */
   int nNodes;              /* the number of nodes in the list */
   int nAllocNodes;         /* the number of allocated nodes in the list; internal */
   HL_Node** nodes;         /* the nodes themselves */
} HL_NodeList;

typedef struct {
   char attrname[256];      /* the attribute's name */
   size_t offset;           /* the offset to where the data begins */
   char format[256];        /* the string representation of the atomic data type */
   int ndims;               /* the number of dimensions in the array */
   size_t dims[4];          /* the dimensions of each of ndims */
} HL_CompoundTypeAttribute;

typedef struct {
   char typename[256];      /* the list's name */
   unsigned long objno[2];  /* markers used to tag nodes in the list */
   size_t size;             /* size of this data type */
   int nAttrs;              /* the number of attributes in the list */
   int nAllocAttrs;         /* the number of allocated attributes */
   HL_CompoundTypeAttribute** attrs; /* the attributes themselves */
} HL_CompoundTypeDescription;
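To see how these structures fit together, here is a minimal sketch (not part of the original distribution) that walks an already populated HL_NodeList and prints the name of every node; it relies only on the fields declared above and assumes the list was filled elsewhere, for instance by readHL_NodeList as in the reading example later in this manual.

#include <stdio.h>
#include <read_vhlhdf.h>

/* Sketch: print the name of each node in a populated node list. */
static void printNodeNames(HL_NodeList* nodelist)
{
   int i;
   for (i = 0; i < nodelist->nNodes; i++) {
      HL_Node* node = nodelist->nodes[i];
      printf("%s\n", node->name);
   }
}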
Initiates the HL-HDF functions. This call must be made before anything else is done. Returns nothing.
Deactivates HDF5 debugging. Returns nothing.
Activates HDF5 debugging. Returns nothing.
Sets the debug mode. flag can be 0 (no debugging), 1 (debug only HL-HDF), or 2 (debug HL-HDF and HDF5). Returns nothing.
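In practice these calls appear at the top of every HL-HDF program; a minimal sketch using the initHlHdf and debugHlHdf calls from the examples later in this manual:

#include <read_vhlhdf.h>
#include <write_vhlhdf.h>

int main(int argc, char** argv)
{
   initHlHdf();   /* must be called before any other HL-HDF function */
   debugHlHdf(2); /* 0: no debugging, 1: HL-HDF only, 2: HL-HDF and HDF5 */
   /* ... use the library ... */
   return 0;
}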
Checks whether filename is an HDF5 file. Returns 1 if it is and 0 otherwise.
Opens an HDF5 file. Arguments: filename: A string containing the file's name. how: The mode in which to open the file; can be 'r' (read only), 'w' (write only) or 'rw' (read and write). Returns the hid_t reference upon success, otherwise -1.
Creates an HDF5 file named filename; if the file already exists it will be truncated. Returns the hid_t reference upon success, otherwise -1.
Closes the HDF5 file with the hid_t reference file_id. Returns a value greater than or equal to 0 upon success, otherwise a negative value.
Translates from the datatype specified by type to a native datatype. Returns the native datatype hid_t upon success, or a negative value on failure.
Creates an HDF5 datatype hid_t from the string representation dataType. dataType can be one of: char, schar, uchar, short, ushort, int, uint, long, ulong, llong, ullong, float, double, hsize, hssize, herr or hbool. Returns a value < 0 upon failure, otherwise a hid_t reference to the new type.
Translates the HDF5 type type to an HDF5 string representation of the datatype. The returned string can be one of: H5T_STD_I8BE, H5T_STD_I8LE, H5T_STD_I16BE, H5T_STD_I16LE, H5T_STD_I32BE, H5T_STD_I32LE, H5T_STD_I64BE, H5T_STD_I64LE, H5T_STD_U8BE, H5T_STD_U8LE, H5T_STD_U16BE, H5T_STD_U16LE, H5T_STD_U32BE, H5T_STD_U32LE, H5T_STD_U64BE, H5T_STD_U64LE, H5T_NATIVE_SCHAR, H5T_NATIVE_UCHAR, H5T_NATIVE_SHORT, H5T_NATIVE_USHORT, H5T_NATIVE_INT, H5T_NATIVE_UINT, H5T_NATIVE_LONG, H5T_NATIVE_ULONG, H5T_NATIVE_LLONG, H5T_NATIVE_ULLONG, H5T_IEEE_F32BE, H5T_IEEE_F32LE, H5T_IEEE_F64BE, H5T_IEEE_F64LE, H5T_NATIVE_FLOAT, H5T_NATIVE_DOUBLE, H5T_NATIVE_LDOUBLE, H5T_STRING or H5T_COMPOUND. Returns the string representation upon success, otherwise NULL.
Translates the HDF5 type type to an HL-HDF string representation of the datatype. The returned string can be one of: char, schar, uchar, short, ushort, int, uint, long, ulong, llong, ullong, float, double, hsize, hssize, herr, hbool, string or compound. Returns the string representation upon success, otherwise NULL.
Returns a string representation of the type type's padding. The returned string can be one of H5T_STR_NULLTERM, H5T_STR_NULLPAD, H5T_STR_SPACEPAD or ILLEGAL STRPAD. Returns the string representation upon success, otherwise NULL.
Returns a string representation of the type type's character set. The returned string can be one of H5T_CSET_ASCII or UNKNOWN CHARACTER SET. Returns the string representation upon success, otherwise NULL.
Returns a string representation of the type type's character type. The returned string can be one of H5T_C_S1, H5T_FORTRAN_S1 or UNKNOWN CHARACTER TYPE. Returns the string representation upon success, otherwise NULL.
Calculates the size in bytes that the specified type takes. The attribute format can be one of char, schar, uchar, short, ushort, int, uint, long, ulong, llong, ullong, float, double, hsize, hssize, herr or hbool. Returns the size in bytes if successful or -1 in case of failure.
Checks whether the string type format is recognized. format can be one of char, schar, uchar, short, ushort, int, uint, long, ulong, llong, ullong, float, double, hsize, hssize, herr or hbool. Returns 1 if the format is supported, otherwise 0.
Defines a new, empty node of undefined type. name is a string used to identify the node. Returns the node if successful or NULL upon failure.
Creates an empty HL node list which can be filled with an arbitrary number of nodes. Returns the node list if successful or NULL upon failure.
Frees a node from memory. The node is given as the only argument. Returns nothing.
Frees a complete node list from memory, along with all the nodes contained in it. The node list is given as the only argument. Returns nothing.
Creates an empty HL node of Group type. name is a string used to identify the node. Returns the node if successful or NULL upon failure.
Creates an empty HL node of Attribute type. name is a string used to identify the node. Returns the node if successful or NULL upon failure.
Creates an empty HL node of Reference type. name is a string used to identify the node. Returns the node if successful or NULL upon failure.
Creates an empty HL node of Dataset type. name is a string used to identify the node. Returns the node if successful or NULL upon failure.
Creates an empty HL node of Datatype type. name is a string used to identify the node. Returns the node if successful or NULL upon failure.
Creates a list containing HL_CompoundTypeAttributes. Returns the compound type list if successful or NULL upon failure.
Frees a given compound type attribute from memory. The only argument is the HL_CompoundTypeAttribute to be freed. Returns nothing.
Frees the compound type list, along with all its members, from memory. The only argument is the HL_CompoundTypeDescription to be freed. Returns nothing.
Appends a node to the end of a node list. Note: If this operation is successful, the node list takes over responsibility for releasing the memory of node, so do not release the node afterwards. Arguments: nodelist: The node list. node: The node to append to nodelist. Returns 1 if successful and 0 otherwise.
Provides a reference to a node in a node list. Note: A reference to the node is returned, so do not release the node when you are finished with it. Arguments: nodelist: The node list. nodeName: A string identifying the node to extract. Returns (a reference to) the node if it is found, and NULL if not.
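The ownership rules above can be condensed into a short sketch, using the constructors and the free routine described in this chapter; the node list frees the nodes appended to it, and nodes obtained from the list are borrowed references:

HL_NodeList* aList = newHL_NodeList();
HL_Node* aNode = newHL_Group("/group1");
addNode(aList, aNode);             /* aList now owns aNode: do not free it */
aNode = getNode(aList, "/group1"); /* borrowed reference: do not free it */
freeHL_NodeList(aList);            /* frees the list and every node in it */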
Writes a scalar value to a node. Scalar values are individual atomic words. Arguments: node: The node in which to write the value. sz: Size of the data type. value: The value to write. fmt: String representation of the data format, for example "short", "signed int" or "double". typid: Reference to the data type in use. Must be set manually if using a compound data type; otherwise set it to -1. Returns 1 if successful and 0 otherwise.
Writes an array to a node. Arguments: node: The node in which to write the array. sz: Size of the data type. ndims: The number of dimensions of the array, which may range from 0 to 4. dims: The dimensions of each of ndims. value: The array to write. fmt: String representation of the data format. typid: Reference to the data type in use. Must be set manually if using a compound data type; otherwise set it to -1. Returns 1 if successful and 0 otherwise.
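For example, mirroring the complete writing program later in this manual, a scalar int attribute and a 10x10 int array are written like this; the typid of -1 indicates that no compound type is involved:

HL_Node* aNode = newHL_Attribute("/group1/attribute1");
HL_Node* dNode = newHL_Dataset("/dataset1");
int anIntValue = 10;
static int anArray[100]; /* assumed to be filled elsewhere */
hsize_t dims[] = {10, 10};
setScalarValue(aNode, sizeof(anIntValue), (unsigned char*)&anIntValue, "int", -1);
setArrayValue(dNode, sizeof(int), 2, dims, (unsigned char*)anArray, "int", -1);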
Separates the last node (the child) from a node name consisting of several nodes (the parent). For example, for a node name given as /group1/group2/group3, this function will set /group1/group2 as the parent and group3 as the child. Arguments: node: The node name under scrutiny. parent: A string to hold the parent's node name. child: A string to hold the child's node name. Returns 1 if successful and 0 otherwise.
If a compound type has been created and this node should be "named", use this function to mark the node for committing. See the HDF5 documentation for a more detailed description of what "committed" means. Arguments: node: A Datatype node to mark. testStruct_hid: The HDF5 hid_t reference to the datatype. Returns 1 if successful and 0 otherwise.
Prints the names in a node list to the terminal. The only argument is the node list. Returns nothing.
Prints to the terminal the names of all nodes in the compound-type list typelist. Returns nothing.
Recursively reads the HDF5 file filename from the group fromPath and builds a list of nodes with corresponding names. That is, no data is read at this step; only the node types and names are determined. Returns an HL_NodeList pointer upon success, otherwise NULL.
Recursively reads the HDF5 file filename from the root group and builds a list of nodes with corresponding names. As above, only the node types and names are determined at this step. Returns an HL_NodeList pointer upon success, otherwise NULL.
Marks the node with name name in the nodelist for retrieval. Returns 1 upon success, otherwise 0.
Marks all nodes in the nodelist for retrieval. Returns 1 upon success, otherwise 0.
Reads all nodes in the nodelist that have been marked for retrieval. Returns 1 upon success, otherwise 0.
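Together these functions form the read path used throughout this manual: read the node names, mark the nodes of interest, then fetch. A minimal sketch using the function names from the reading example later in this manual:

HL_NodeList* aList = readHL_NodeList("written_hdffile.hdf");
if (aList != NULL) {
   selectAllNodes(aList);   /* mark every node for retrieval */
   fetchMarkedNodes(aList); /* read the data of all marked nodes */
   /* ... inspect individual nodes with getNode() ... */
   freeHL_NodeList(aList);
}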
Fills the attribute node node with data and dimensions from the file referenced by file_id. Returns 1 upon success, otherwise 0.
Fills the reference node node with data from the file referenced by file_id. The data field in the node will be filled with a string that specifies the name of the referenced node. Returns 1 upon success, otherwise 0.
Fills the dataset node node with data and dimensions from the file referenced by file_id. Returns 1 upon success, otherwise 0.
Fills the group node node with data from the file referenced by file_id. Returns 1 upon success, otherwise 0.
Fills the type node node with data from the file referenced by file_id. Returns 1 upon success, otherwise 0.
Fills the node node with data from the file referenced by file_id. Returns 1 upon success, otherwise 0.
Builds a compound type description from the type referenced by type_id. Returns an HL_CompoundTypeDescription pointer upon success, otherwise NULL.
This is a helper function for locating the name of an object that is referenced by ref in the file file_id. Returns a pointer to a string upon success, otherwise NULL.
Commits a datatype. See the HDF5 documentation for a more detailed description of what "committed" means. Arguments: loc_id: Where the datatype should be placed. name: What the datatype should be called. type_id: The hid_t reference to the datatype. Returns a negative value upon failure, otherwise the operation was successful.
Creates an HDF5 string type of length length. Returns a negative value upon failure, otherwise a hid_t reference to the datatype.
Changes the size of the datatype referenced by type_id to the size theSize. Returns a negative value upon failure, otherwise the operation was successful.
Closes the datatype referenced by type_id. Returns a negative value upon failure, otherwise the operation was successful.
Writes a scalar value to an HDF5 file. Arguments: loc_id: The group or dataset the attribute should be written to. type_id: The datatype of the attribute. name: The name that should be used for the attribute. buf: The data that should be written. Returns 0 upon success, otherwise -1.
Writes a scalar value to an HDF5 file. Arguments: loc_id: The group or dataset the attribute should be written to. fmt: A string describing the format of the datatype, e.g. char, short, ... name: The name that should be used for the attribute. buf: The data that should be written. Returns 0 upon success, otherwise -1.
Closes the dataset referenced by loc_id. Returns a negative value upon failure, otherwise the operation was successful.
Creates a compound type of size size. Returns a negative value upon failure, otherwise a hid_t reference.
Adds a scalar attribute to a compound type. Arguments: loc_id: The type the attribute should be appended to. name: The name of the attribute. offset: The offset in the data at which this attribute begins. type_id: The datatype of the attribute. Returns a negative value upon failure, otherwise the operation was successful.
Adds a scalar attribute to a compound type. Arguments: loc_id: The type the attribute should be appended to. name: The name of the attribute. offset: The offset in the data at which this attribute begins. fmt: A string describing the format of the datatype, e.g. char, short, ... Returns a negative value upon failure, otherwise the operation was successful.
Adds an array attribute to a compound type. Arguments: loc_id: The type the attribute should be appended to. name: The name of the attribute. offset: The offset in the data at which this attribute begins. ndims: The rank of the data to be written, between 0 and 4. dims: The dimensions of the data, a pointer to ndims hsize_t values. type_id: The datatype of the attribute. Returns a negative value upon failure, otherwise the operation was successful.
Creates a group in an HDF5 file. Arguments: loc_id: The group or file reference the group should be written to. groupname: The name of the group to be written. comment: A comment for the group; if NULL, no comment will be added. Returns a negative value on failure, otherwise a hid_t reference.
Closes a group referenced by loc_id. Returns a negative value upon failure, otherwise the operation was successful.
Creates a reference named name in the object loc_id to the object with the name targetname, which must be a complete path in the file file_id. For example, to create a reference from PALETTE to the dataset /GRP/PALETTE one should write createReference(loc_id,file_id,"PALETTE","/GRP/PALETTE"). Arguments: loc_id: The object where the reference name should be created. file_id: The file the reference should be created in. name: The name of the reference to be created. targetname: The referenced object; must be a complete path. Note that the referenced object must always be created before a reference to it is created. Returns 0 upon success, otherwise -1.
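A minimal sketch of the example just given; loc_id and file_id are assumed to be valid, open hid_t handles, and the dataset /GRP/PALETTE is assumed to already exist:

/* loc_id: the /GRP group; file_id: the open file */
if (createReference(loc_id, file_id, "PALETTE", "/GRP/PALETTE") != 0) {
   fprintf(stderr, "Failed to create the reference PALETTE\n");
}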
Writes a nodelist in HDF5 format. Arguments: nodelist: The nodelist to be written. doCompress: The compression level that should be used on the datasets, between 0 and 9, where 0 is no compression and 9 is the highest compression. Returns 1 upon success, otherwise 0.
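Writing therefore reduces to setting the target filename in the node list and calling this function, as in the writing example later in this manual (aList is assumed to be a populated node list):

strcpy(aList->filename, "written_hdffile.hdf");
if (!writeNodeList(aList, 6)) { /* compression level 6 */
   fprintf(stderr, "Failed to write the node list\n");
}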
Creates a new group named name and attaches this group to the parentGroup. If the parentGroup is NULL, then the created group will be the root group. Returns the new group upon success or NULL upon failure.
Creates a new dataset named name and attaches this dataset to the parentGroup. Returns the new dataset upon success or NULL upon failure.
Creates a new type object. Returns the allocated type upon success or NULL upon failure.
Creates a new attribute with the name name; if name is NULL, the attribute will be nameless. Returns the allocated attribute upon success or NULL upon failure.
Creates a new compound attribute definition with the name name; if name is NULL, the attribute will be nameless. Returns the allocated compound attribute definition upon success or NULL upon failure.
Translates a NameListType_t instance to a compound attribute definition instance and gives the compound attribute definition the name name. Returns the allocated compound attribute definition upon success or NULL upon failure.
Adds the compound attribute definition compoundAttr to the name list type newType. Returns a value >= 0 upon success, otherwise -1.
Adds the attribute attr to the group group. Returns 0 upon success, otherwise -1.
Adds the attribute attr to the dataset dset. Returns 0 upon success, otherwise -1.
Deallocates the compound attribute definition attr. Returns nothing.
Deallocates the attribute attr. Returns nothing.
Deallocates the name list type type. Returns nothing.
Deallocates the internals for the dataset dset. Returns nothing.
Deallocates the dataset dset. Returns nothing.
Adds the type type to the local list of types in the group group. Returns a value >= 0 upon success, otherwise -1.
Adds the type type to the global list of types in the group group. Returns a value >= 0 upon success, otherwise -1.
Searches the global list of types in the group grp for any occurrence of objno. Arguments: grp: The group to search in. objno: A list of two unsigned longs. Returns the index in the global list if an occurrence was found, otherwise -1.
Searches the local list of types in the group grp for any occurrence of objno. Arguments: grp: The group to search in. objno: A list of two unsigned longs. Returns the index in the local list if an occurrence was found, otherwise -1.
Removes the type with a matching objno from the group group's list of local types and returns the type. Arguments: group: The group to search. objno: A list of two unsigned longs. Returns the type with a matching object number if it was found, otherwise NULL.
Removes the type with a matching objno from the group group's list of global types and returns the type. Arguments: group: The group to search. objno: A list of two unsigned longs. Returns the type with a matching object number if it was found, otherwise NULL.
Displays the data in a format similar to that produced by the h5dump tool distributed with HDF5. Arguments: data: A pointer to the data. fmt: The HL-HDF string representation of the data format. ndims: The rank of the data. dims: The dimensions of the data. typeSize: The size of each value. offs: The number of blanks to pad before the data. addNewline: Whether a line break should be added; 1 means add a line break. Returns nothing.
Displays a compound dataset in a format similar to that produced by h5dump. Arguments: data: The data pointer. type: The compound type definition. ndims: The rank of the data. dims: The dimensions of the data. offs: The number of blanks to pad before the data. Returns nothing.
Displays one attribute in a compound attribute in a format similar to that produced by h5dump. Arguments: def: The compound attribute definition. offs: The number of blanks to pad before the data. Returns nothing.
Displays one datatype in a format similar to that produced by h5dump. Arguments: type: The datatype to display. offs: The number of blanks to pad before the data. Returns nothing.
Displays one attribute in a format similar to that produced by h5dump. Arguments: attr: The attribute to display. offs: The number of blanks to pad before the data. Returns nothing.
Displays one dataset in a format similar to that produced by h5dump. Arguments: dset: The dataset to display. offs: The number of blanks to pad before the data. Returns nothing.
Displays one group in a format similar to that produced by h5dump. This function recursively goes through all sub-groups belonging to this group. Arguments: grp: The group to display. offs: The number of blanks to pad before the data. Returns nothing.
Recursively reads a complete HDF5 file with name filename and builds a complete tree structure. Returns a pointer to a NameListGroup_t instance upon success, otherwise NULL.
Recursively reads an HDF5 file with name filename from the group from and builds a complete tree structure. Returns a pointer to a NameListGroup_t instance upon success, otherwise NULL.
Frees an HDF5 tree structure that has been read using either readHlHdfFile or readHlHdfFileFrom. Be aware that this function must be used if one of the two functions above was used, since it knows how the tree structure was built. Returns nothing.
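A minimal sketch of this tree-based read path; note that the listing above does not give the name of the deallocation routine, so freeHlHdfTree below is a hypothetical placeholder for whichever free function the header actually declares:

NameListGroup_t* root = readHlHdfFile("written_hdffile.hdf");
if (root != NULL) {
   /* ... traverse or display the tree ... */
   freeHlHdfTree(root); /* hypothetical name: use the free routine described above */
}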
include /usr/local/hlhdf/mkf/hldef.mk

HLHDF_INCDIR = -I/usr/local/hlhdf/include
HLHDF_LIBDIR = -L/usr/local/hlhdf/lib

CFLAGS = $(OPTS) $(DEFS) -I. $(ZLIB_INCDIR) $(HDF5_INCDIR) $(HLHDF_INCDIR)
LDFLAGS = -L. $(ZLIB_LIBDIR) $(HDF5_LIBDIR) $(HLHDF_LIBDIR)
LIBS = -lhlhdf -lhdf5 -lz -lm

TARGET=myTestProgram
SOURCES=test_program.c
OBJECTS=$(SOURCES:.c=.o)

all: $(TARGET)

$(TARGET): $(OBJECTS)
	$(CC) -o $@ $(LDFLAGS) $(OBJECTS) $(LIBS)

clean:
	@\rm -f *.o *~ so_locations core

distclean: clean
	@\rm -f $(TARGET)

distribution:
	@echo "Would bring the latest revision upto date"

install:
	@$(HL_INSTALL) -f -o -C $(TARGET) ${MY_BIN_PATH}/$(TARGET)

Now, when the Makefile has been created, it might be a good idea to write your own HDF5 product. The following example will create a dataset with a two-dimensional array of integers, and two attributes connected to this dataset. It will also create a group containing one attribute.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <read_vhlhdf.h>
#include <write_vhlhdf.h>

int main(int argc, char** argv)
{
   HL_NodeList* aList=NULL;
   HL_Node* aNode=NULL;
   int* anArray=NULL;
   int anIntValue;
   float aFloatValue;
   hsize_t dims[]={10,10};
   int npts=100;
   int i;

   initHlHdf();   /* Initialize the HL-HDF library */
   debugHlHdf(2); /* Activate debugging */

   if(!(aList = newHL_NodeList())) {
      fprintf(stderr,"Failed to allocate nodelist");
      goto fail;
   }
   if(!(anArray = malloc(sizeof(int)*npts))) {
      fprintf(stderr,"Failed to allocate memory for array.");
      goto fail;
   }
   for(i=0;i<npts;i++)
      anArray[i]=i;

   /* Create a group with one integer attribute */
   addNode(aList,(aNode = newHL_Group("/group1")));
   addNode(aList,(aNode = newHL_Attribute("/group1/attribute1")));
   anIntValue=10;
   setScalarValue(aNode,sizeof(anIntValue),(unsigned char*)&anIntValue,"int",-1);

   /* Create a dataset with a 10x10 integer array */
   addNode(aList,(aNode = newHL_Dataset("/dataset1")));
   setArrayValue(aNode,sizeof(int),2,dims,(unsigned char*)anArray,"int",-1);

   /* Attach two attributes to the dataset */
   addNode(aList,(aNode = newHL_Attribute("/dataset1/attribute2")));
   anIntValue=20;
   setScalarValue(aNode,sizeof(anIntValue),(unsigned char*)&anIntValue,"int",-1);
   addNode(aList,(aNode = newHL_Attribute("/dataset1/attribute3")));
   aFloatValue=99.99;
   setScalarValue(aNode,sizeof(aFloatValue),(unsigned char*)&aFloatValue,"float",-1);

   strcpy(aList->filename,"written_hdffile.hdf");
   writeNodeList(aList,6);
   freeHL_NodeList(aList);
   exit(0);
   return 0; /* Won't come here */
fail:
   freeHL_NodeList(aList);
   exit(1);
   return 1; /* Won't come here */
}

When you have created your own HDF5 product, it might be a good idea to create some code for reading the file and checking its contents.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <read_vhlhdf.h>
#include <write_vhlhdf.h>

int main(int argc, char** argv)
{
   HL_NodeList* aList=NULL;
   HL_Node* aNode=NULL;
   int* anArray=NULL;
   int anIntValue;
   float aFloatValue;
   int npts;
   int i;

   initHlHdf();   /* Initialize the HL-HDF library */
   debugHlHdf(2); /* Activate debugging */

   if(!(aList = readHL_NodeList("written_hdffile.hdf"))) {
      fprintf(stderr,"Failed to read nodelist\n");
      goto fail;
   }
   selectAllNodes(aList); /* Select everything for retrieval */
   fetchMarkedNodes(aList);

   if((aNode = getNode(aList,"/group1")))
      printf("%s exists\n",aNode->name);

   if((aNode = getNode(aList,"/group1/attribute1"))) {
      memcpy(&anIntValue,aNode->data,aNode->dSize);
      printf("%s exists and has the value %d\n",aNode->name,anIntValue);
   }

   if((aNode = getNode(aList,"/dataset1"))) {
      anArray = (int*)aNode->data;
      npts = 1;
      for(i=0;i<aNode->ndims;i++)
         npts*=aNode->dims[i];
      printf("%s exists and has the values:\n",aNode->name);
      for(i=0;i<npts;i++) {
         printf("%d ", anArray[i]);
         if(((i+1)%aNode->dims[0])==0) { /* break the line after each row */
            printf("\n");
         }
      }
   }

   if((aNode = getNode(aList,"/dataset1/attribute2"))) {
      memcpy(&anIntValue,aNode->data,aNode->dSize);
      printf("%s exists and has the value %d\n",aNode->name,anIntValue);
   }
   if((aNode = getNode(aList,"/dataset1/attribute3"))) {
      memcpy(&aFloatValue,aNode->data,aNode->dSize);
      printf("%s exists and has the value %f\n",aNode->name,aFloatValue);
   }

   freeHL_NodeList(aList);
   exit(0);
   return 0; /* Never reached */
fail:
   freeHL_NodeList(aList);
   exit(1);
   return 1; /* Never reached */
}
When creating a node and using this value, the node will become an attribute node.
When creating a node and using this value, the node will become a dataset node.
When creating a node and using this value, the node will become a group node.
When creating a node and using this value, the node will become a datatype node.
When creating a node and using this value, the node will become a reference node.
Sets a scalar value in the node instance. itemSize specifies the size of the value in bytes; it is not necessary to specify unless a compound type is set. data is the data to be set in the node. typename is the string representation of the datatype, for example int, string or compound. lhid is the hid_t reference to the datatype; it is not necessary to specify unless a compound type is set.
NOTE: If the data to be set is of compound type, then the data should be of string type.
NOTE: If the node is a Reference node, the data should be set as a string whose value is the name of the referenced node.
Sets an array value in the node instance. itemSize specifies the size of each value in bytes; it is not necessary to specify unless a compound type is set. dims is a list of the dimensions of the data. data is the data to be set in the node. typename is the string representation of the datatype, for example int, string or compound. lhid is the hid_t reference to the datatype; it is not necessary to specify unless a compound type is set.
NOTE: If the data to be set is of compound type, the data should be of string type.
Marks a TYPE_ID node to be committed. datatype is the hid_t reference to the datatype.
Returns the name of the node instance.
Returns the type of the node instance.
Returns a list of the dimensions of the node instance.
Returns the string representation of the node's datatype.
Returns the fixed data of the node instance.
NOTE: If the data is of compound type, the data will be returned as a string.
Returns the raw data of the node instance.
NOTE: If the data is of compound type, the data will be returned as a string.
Returns a dictionary with all attributes in the compound attribute. It only works if the node instance is a compound attribute.
import _pyhl
from Numeric import *

# Create an empty node list instance
aList = _pyhl.nodelist()

# Create a group called info
aNode = _pyhl.node(_pyhl.GROUP_ID,"/info")
# Add the node to the nodelist.
# Remember that the nodelist takes responsibility for the node.
aList.addNode(aNode)

# Insert the attribute xscale in the group "/info"
aNode = _pyhl.node(_pyhl.ATTRIBUTE_ID,"/info/xscale")
# Set the value to a double with value 10.0.
# Note the -1's: they are used since the data is not compound.
aNode.setScalarValue(-1,10.0,"double",-1)
aList.addNode(aNode)

# Similar for yscale, xsize and ysize
aNode = _pyhl.node(_pyhl.ATTRIBUTE_ID,"/info/yscale")
aNode.setScalarValue(-1,20.0,"double",-1)
aList.addNode(aNode)
aNode = _pyhl.node(_pyhl.ATTRIBUTE_ID,"/info/xsize")
aNode.setScalarValue(-1,10,"int",-1)
aList.addNode(aNode)
aNode = _pyhl.node(_pyhl.ATTRIBUTE_ID,"/info/ysize")
aNode.setScalarValue(-1,10,"int",-1)
aList.addNode(aNode)

# Add a description
aNode = _pyhl.node(_pyhl.ATTRIBUTE_ID,"/info/description")
aNode.setScalarValue(-1,"This is a simple example","string",-1)
aList.addNode(aNode)

# Add an array of data
myArray = arange(100)
myArray = array(myArray.astype('i'),'i')
myArray = reshape(myArray,(10,10))
aNode = _pyhl.node(_pyhl.DATASET_ID,"/data")
# Set the data as an array; the list [10,10]
# indicates that it is an array of 10x10 items.
aNode.setArrayValue(-1,[10,10],myArray,"int",-1)
aList.addNode(aNode)

# And now just write the file as "simple_test.hdf" with
# compression level 9 (highest compression)
aList.write("simple_test.hdf",9)

When checking this file with h5dump, the command syntax would be:
prompt% h5dump simple_test.hdf
And the result would be:
HDF5 "simple_test.hdf" { GROUP "/" { DATASET "data" { DATATYPE { H5T_STD_I32LE } DATASPACE { SIMPLE ( 10, 10 ) / ( 10, 10 ) } DATA { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99 } } GROUP "info" { ATTRIBUTE "xscale" { DATATYPE { H5T_IEEE_F64LE } DATASPACE { SCALAR } DATA { 10 } } ATTRIBUTE "yscale" { DATATYPE { H5T_IEEE_F64LE } DATASPACE { SCALAR } DATA { 20 } } ATTRIBUTE "xsize" { DATATYPE { H5T_STD_I32LE } DATASPACE { SCALAR } DATA { 10 } } ATTRIBUTE "ysize" { DATATYPE { H5T_STD_I32LE } DATASPACE { SCALAR } DATA { 10 } } ATTRIBUTE "description" { DATATYPE { { STRSIZE 25; STRPAD H5T_STR_NULLTERM; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } DATASPACE { SCALAR } DATA { "This is a simple example" } } } } }
import _pyhl
import _rave_info_type

# Create the rave info HDF5 type
typedef = _rave_info_type.type()
# Create the rave info HDF5 object
obj = _rave_info_type.object()

# Set the values
obj.xsize=10
obj.ysize=10
obj.xscale=150.0
obj.yscale=150.0

aList = _pyhl.nodelist()

# Create a datatype node
aNode = _pyhl.node(_pyhl.TYPE_ID,"/MyDatatype")
# Make the datatype named (committed)
aNode.commit(typedef.hid())
aList.addNode(aNode)

# Create an attribute containing the compound type.
# Note that both itemSize and lhid are used, and that the
# compound object is translated to a string.
aNode = _pyhl.node(_pyhl.ATTRIBUTE_ID,"/myCompoundAttribute")
aNode.setScalarValue(typedef.size(),obj.tostring(),"compound",typedef.hid())
aList.addNode(aNode)

# Also create a dataset with the compound type
obj.xsize=1
obj.ysize=1
aNode = _pyhl.node(_pyhl.DATASET_ID,"/myCompoundDataset")
# setArrayValue is used instead
aNode.setArrayValue(typedef.size(),[1],obj.tostring(),"compound",typedef.hid())
aList.addNode(aNode)

# And finally write the HDF5 file
aList.write("compound_test.hdf")

When checking this file with h5dump, the command syntax would be:
prompt% h5dump compound_test.hdf
And the result would be:
HDF5 "compound_test.hdf" { GROUP "/" { ATTRIBUTE "myCompoundAttribute" { DATATYPE { H5T_STD_I32LE "xsize"; H5T_STD_I32LE "ysize"; H5T_IEEE_F64LE "xscale"; H5T_IEEE_F64LE "yscale"; } DATASPACE { SCALAR } DATA { { [ 10 ], [ 10 ], [ 150 ], [ 150 ] } } } DATATYPE "MyDatatype" { H5T_STD_I32LE "xsize"; H5T_STD_I32LE "ysize"; H5T_IEEE_F64LE "xscale"; H5T_IEEE_F64LE "yscale"; } DATASET "myCompoundDataset" { DATATYPE { "/MyDatatype" } DATASPACE { SIMPLE ( 1 ) / ( 1 ) } DATA { { [ 1 ], [ 1 ], [ 150 ], [ 150 ] } } } } }
import _pyhl

# Read the file
aList = _pyhl.read_nodelist("simple_test.hdf")

# Select individual nodes, instead of all of them
aList.selectNode("/info/xscale")
aList.selectNode("/info/yscale")
aList.selectNode("/data")

# Fetch the data for the selected nodes
aList.fetch()

# Print the data
aNode = aList.getNode("/info/xscale")
print "XSCALE=" + `aNode.data()`
aNode = aList.getNode("/info/yscale")
print "YSCALE=" + `aNode.data()`
aNode = aList.getNode("/data")
print "DATA=" + `aNode.data()`
import _pyhl
import _rave_info_type

# There is no need to create the type
obj = _rave_info_type.object()

aList = _pyhl.read_nodelist("compound_test.hdf")
# Select everything for retrieval
aList.selectAll()
aList.fetch()

aNode = aList.getNode("/myCompoundAttribute")
# Translate from the string representation to the object
obj.fromstring(aNode.rawdata())

# Display the values
print "XSIZE="+`obj.xsize`
print "YSIZE="+`obj.ysize`
print "XSCALE="+`obj.xscale`
print "YSCALE="+`obj.yscale`
import _pyhl
import _rave_info_type

# There is no need to create the type
obj = _rave_info_type.object()

aList = _pyhl.read_nodelist("compound_test.hdf")
# Select everything for retrieval
aList.selectAll()
aList.fetch()

aNode = aList.getNode("/myCompoundAttribute")
# Read the values through the compound data dictionary
# (compound_data() is a node method, see above)
cdescr = aNode.compound_data()
obj.xsize = cdescr["xsize"]
obj.ysize = cdescr["ysize"]
obj.xscale = cdescr["xscale"]
obj.yscale = cdescr["yscale"]

# Display the values
print "XSIZE="+`obj.xsize`
print "YSIZE="+`obj.ysize`
print "XSCALE="+`obj.xscale`
print "YSCALE="+`obj.yscale`
import _pyhl
from Numeric import *

# Function for creating a dummy palette
def createPalette():
    a=zeros((256,3),'b')
    for i in range(0,256):
        a[i][0]=i
    return a

# Function for creating a dummy image
def createImage():
    a=zeros((256,256),'b')
    for i in range(0,256):
        for j in range(0,256):
            a[i][j] = i
    return a

# Function for creating the HDF5 file
def create_test_image():
    a=_pyhl.nodelist()
    # First create the palette
    b=_pyhl.node(_pyhl.DATASET_ID,"/PALETTE")
    c=createPalette()
    b.setArrayValue(1,[256,3],c,"uchar",-1)
    a.addNode(b)
    b=_pyhl.node(_pyhl.ATTRIBUTE_ID,"/PALETTE/CLASS")
    b.setScalarValue(-1,"PALETTE","string",-1)
    a.addNode(b)
    b=_pyhl.node(_pyhl.ATTRIBUTE_ID,"/PALETTE/PAL_VERSION")
    b.setScalarValue(-1,"1.2","string",-1)
    a.addNode(b)
    b=_pyhl.node(_pyhl.ATTRIBUTE_ID,"/PALETTE/PAL_COLORMODEL")
    b.setScalarValue(-1,"RGB","string",-1)
    a.addNode(b)
    b=_pyhl.node(_pyhl.ATTRIBUTE_ID,"/PALETTE/PAL_TYPE")
    b.setScalarValue(-1,"STANDARD8","string",-1)
    a.addNode(b)
    # Now create the image to display
    b=_pyhl.node(_pyhl.DATASET_ID,"/IMAGE1")
    c=createImage()
    b.setArrayValue(1,[256,256],c,"uchar",-1)
    a.addNode(b)
    b=_pyhl.node(_pyhl.ATTRIBUTE_ID,"/IMAGE1/CLASS")
    b.setScalarValue(-1,"IMAGE","string",-1)
    a.addNode(b)
    b=_pyhl.node(_pyhl.ATTRIBUTE_ID,"/IMAGE1/IMAGE_VERSION")
    b.setScalarValue(-1,"1.2","string",-1)
    a.addNode(b)
    # Finally insert the reference
    b=_pyhl.node(_pyhl.REFERENCE_ID,"/IMAGE1/PALETTE")
    b.setScalarValue(-1,"/PALETTE","string",-1)
    a.addNode(b)
    a.write("ahewrittenimage.hdf")

# The main function
if __name__=="__main__":
    create_test_image()