Please see The HDF Group's new Support Portal for the latest information.
Contents
- General Information
- Obtaining the HDF Software and Documentation
- Installing and Compiling
- HDF Tools for Conversion and Visualization
- Backward/Forward Compatibility
- Integration of HDF with netCDF
- Data Compression Support
- Other: Mailing List, Contributed Software, Bug Reports
- What is HDF?
- Copyright Information
- Should new products be developed in HDF (HDF4)?
- What is in the HDF library?
- What are the HDF command line utilities?
- What is the latest official release of HDF and what platforms does it support?
- When will the next release of HDF4 be?
- What are the new features included in the latest release?
- Is there a Java Interface?
- Are there limitations to HDF4 files?
- Is there a limit on the name of a dataset?
- Where can I get the HDF source code and information relevant to HDF?
- What documentation for HDF is available?
- Who do I contact for information on SZIP licensing issues?
Installing and Compiling
- How do I install HDF (4)?
- Building and Using HDF on Windows
- How do I compile application programs that call HDF functions?
- HDF4.2r2 Patch for Linux Fedora on Sparc architecture
- What header file are you supposed to use in your application?
- How to use open (unbuffered I/O) instead of fopen?
- How can you compile HDF with shared libraries?
- Windows: Using MFC, getting unresolved external symbol errors for __argv, __argc, __mbctype
- Windows: Instructions for building with VS .NET (with and without MFC)
- Configure fails, "cannot compute sizeof (int*), 77"
HDF Tools for Conversion and Visualization
- Are there any conversion programs available to convert non-HDF image files into HDF files or vice versa?
- Which HDF Group tools can I use to view HDF objects?
- Is there any commercial or public domain visualization software that accepts HDF files?
- How would you convert a netCDF file to/from HDF?
Backward/Forward Compatibility
- Can new versions of HDF read HDF files written using older versions of the HDF library?
- Can application programs which work with old versions of the HDF library always be compiled with new versions of HDF?
Integration of HDF with netCDF
- How does the 'integration of netCDF with HDF' affect application programmers?
Data Compression Support
- Does HDF support data compression?
Other: Mailing List, Contributed Software, Bug Reports
- Is there a mailing list for HDF discussions and questions?
- How do I contribute my software to the HDF user community?
- How do I make a bug report?
General Information
Obtaining the HDF Software and Documentation
What is HDF?
HDF stands for Hierarchical Data Format. It is a library and multi-object file format for the transfer of graphical and numerical data between machines. It is freely available. The distribution consists of the HDF library, the HDF command line utilities, and a test suite (source code only).
Features of the HDF File Format:
- It is versatile. HDF supports several different data models. Each
data model defines a specific aggregate data type and provides an
API for reading, writing, and organizing data and metadata of the
corresponding type. Data models supported include multidimensional
arrays, raster images, and tables.
- It is self-describing, allowing an application to interpret
the structure and contents of a file without any outside information.
- It is flexible. With HDF, you can mix and match related objects
together in one file and then access them as a group or as
individual objects. Users can also create their own grouping
structures using an HDF feature called vgroups.
- It is extensible. It can easily accommodate new data models,
regardless of whether they are added by the HDF development team or
by HDF users.
- It is portable. HDF files can be shared across most common platforms,
including many workstations and high performance computers.
An HDF file created on one computer can be read on a different
system without modification.
Copyright Information
The COPYING file at the top of the HDF source code tree provides the copyright information regarding HDF.

Should new products be developed in HDF (HDF4)?
No, we do not recommend storing new products in HDF4. You should use HDF5 instead. We also want to encourage and help users move from HDF4 to HDF5. Please let us know if you need help with this.
HDF4 still works well for many users, and we are pleased that it has served users so well. However, there are good reasons to use or move to HDF5.
In general, HDF4 is based on technology from the 1980s and 1990s. Some specific reasons to use HDF5 instead of HDF4 are listed below:
- HDF5 is very flexible in determining how to build on unknown platforms. In HDF4, code changes must be made to support new platforms or compilers (especially for Fortran).
- HDF5 does not have a file size limitation. In theory, HDF4 has a 2 GB limit on file sizes. In reality, the size of the files you can store in HDF4 is less than 2 GB, particularly if you have a lot of objects.
- HDF5 has one flexible data model and supports a rich collection of datatypes. HDF4 has several rigid data models and supports a small number of datatypes.
- HDF5 provides an easy mechanism for applications to invoke data filters such as an application-specific compression method. This feature is not available in HDF4.
- HDF5 is designed to support fast I/O on massively parallel systems, as well as fast serial data acquisition. HDF4 does not support parallel I/O and is much slower for serial data acquisition.
- HDF5 is in active development, while HDF4 is in maintenance mode only.
- Lastly, HDF5 is more powerful than HDF4, and it continues to become even more powerful as new features are added.
See the HDF vs. HDF5 table which compares HDF4 and HDF5 features.
What is in the HDF library?
HDF currently supports several data structure types: Scientific Datasets (multi-dimensional arrays), vdatas (binary tables), "general" raster images, text entries (annotations), 8-bit raster images, 24-bit raster images, and color palettes.

HDF contains: the base library, the multi-file (SDS) library, the jpeg library, and the gzip library. HDF library functions can be called from C or FORTRAN user application programs.
The base library contains a general purpose interface and application level interfaces, one for each data structure type. Each application level interface is specifically designed to read, write, and manipulate one type. The general purpose interface contains functions for file I/O, error handling, memory management, and physical storage.
The multi-file (SDS) library integrates the netCDF model with HDF Scientific data sets, and supports simultaneous access to multiple files and multiple objects. This part is referred to as the mfhdf library in the rest of this FAQ.
The jpeg and gzip libraries allow you to use jpeg and gzip compression for those application programming interfaces that support them.
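As an illustration of the multi-file SD interface described above, here is a minimal sketch of a C program that creates a file, writes one small two-dimensional dataset, and closes everything. The file name and dataset name are made up for the example, and error checking is omitted for brevity; a real program should test every return value against FAIL. Such a program is compiled with h4cc (see the compiling question below).

    #include "mfhdf.h"

    int main(void)
    {
        int32   dims[2]    = {2, 3};                  /* a 2 x 3 dataset       */
        float32 data[2][3] = {{1, 2, 3}, {4, 5, 6}};
        int32   start[2]   = {0, 0};                  /* write from the origin */

        int32 sd_id  = SDstart("example.hdf", DFACC_CREATE);      /* create file */
        int32 sds_id = SDcreate(sd_id, "Temperature", DFNT_FLOAT32, 2, dims);

        SDwritedata(sds_id, start, NULL, dims, (VOIDP)data);      /* whole array */

        SDendaccess(sds_id);   /* close the dataset */
        SDend(sd_id);          /* close the file    */
        return 0;
    }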
What are the HDF command line utilities?
The HDF command line utilities are application programs that can be executed by entering them at the command level, just like UNIX commands.

There are HDF utilities to:
- analyze and view HDF files ('hdp' being one of the more useful tools)
- convert from one format to another
- manipulate HDF files
They provide capabilities for doing things with HDF files for which you would normally have to write your own program.
The 'hdp' utility is one of the more useful HDF utilities. It provides quick information about the contents and data objects in an HDF file. It can list the contents of HDF files at various levels of detail, and it can also dump the data of one or more objects in the file in binary or ASCII format.
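For example (a sketch; foo.hdf is a placeholder file name), typical hdp invocations look like this:

    hdp list foo.hdf         # list the objects in foo.hdf
    hdp dumpsds foo.hdf      # dump all SDSs, including their data
    hdp dumpsds -h foo.hdf   # header information only, no data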
The other utilities are:
- gif2hdf -- converts a GIF image into an HDF GR image
- hdf24to8 -- converts a 24-bit image into an 8-bit image
- h4cc -- compiles an HDF4 C program
- h4fc -- compiles an HDF4 Fortran 90 program
- h4redeploy -- updates paths in h4cc/h4fc after the HDF4 pre-compiled binaries have been installed in a new location
- hdf2gif -- converts an HDF GR image into a GIF image
- hdf2jpeg -- converts HDF raster images to JPEG images
- hdf8to24 -- converts an 8-bit image into a 24-bit image
- hdfcomp -- re-compresses an 8-bit raster HDF file
- hdfed -- HDF file editor (requires advanced knowledge of HDF)
- hdfimport -- imports ASCII or binary data into HDF
- hdfls -- lists basic information about an HDF file
- hdfpack -- compacts an HDF file by eliminating unused space left behind by file modifications
- hdftopal -- extracts a palette from an HDF file
- hdftor8 -- extracts 8-bit raster images and palettes from an HDF file
- hdfunpac -- unpacks an HDF file by exporting the scientific data elements (DFTAG_SD) to external object elements
- hdiff -- compares two HDF files and reports the differences
- hrepack -- copies an HDF file to a new file with/without compression and/or chunking
- jpeg2hdf -- converts JPEG images to HDF raster images
- paltohdf -- converts a raw palette to HDF
- r8tohdf -- converts 8-bit raster images to HDF
- ristosds -- converts a raster image into an SDS
- vmake -- creates vsets
- vshow -- dumps out vsets from an HDF file
In addition, the netCDF utilities ncdump and ncgen, compiled with the HDF library, are included.
What is the latest official release of HDF, and what platforms does it support?
See the download page for the latest official release of HDF and the platforms supported.
When will the next release of HDF4 be?
In general, a maintenance release of HDF4 occurs once a year, in February. HDF5 releases occur every six months, around May and November.
What are the new features included in the current release?
Details are listed in the RELEASE.txt file of the release.

Is there a Java Interface?
Yes, there is a Java Interface for HDF. For information, please refer to the HDF Java Products web page.

Are there limitations to HDF4 files?
Yes. Here are some of the limitations of HDF4 files:
- HDF4 files cannot be larger than 2 GB.
- There is a limit of 20,000 objects in an HDF4 file. The effective limit is usually lower, depending on the types of objects and features used.
- Minimal parallel I/O support.
- Supports only 1-D arrays of compound datatypes.
- Objects in a file cannot easily be deleted or moved.
- Supports only one unlimited dimension (SD interface).
- Datasets with an unlimited dimension cannot be chunked or compressed (SD interface); a sketch illustrating the unlimited dimension follows this list.
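Here is the promised sketch. It shows how the single unlimited dimension is requested through the SD interface with the SD_UNLIMITED constant; the file and dataset names are invented for the example and error checking is omitted.

    #include "mfhdf.h"

    int main(void)
    {
        int32 dims[2] = {SD_UNLIMITED, 10};   /* only dims[0] may be unlimited */

        int32 sd_id  = SDstart("grow.hdf", DFACC_CREATE);
        int32 sds_id = SDcreate(sd_id, "records", DFNT_INT32, 2, dims);

        /* Per the limitation above, this dataset cannot also be chunked
           or compressed.  Writing past the current end of dims[0] with
           SDwritedata grows the dataset. */

        SDendaccess(sds_id);
        SDend(sd_id);
        return 0;
    }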
Is there a limit on the name of a dataset?
No. HDF has always allowed a name of any length to be specified when creating an SDS, Vdata, or Vgroup. However, prior to HDF 4.2r2, the behavior was undefined for names longer than 64 characters, which caused problems for tools reading the data: either stray characters were appended to the name or, in the case of the hdp tool, a crash occurred due to accessing undefined memory.

As of HDF 4.2r2, released in 2007, this issue was fixed and a name of any length can be retrieved. The SDgetnamelen, Vgetnamelen*, and Vgetclassnamelen* APIs were introduced for retrieving the length of a name.
* Vgetnamelen and Vgetclassnamelen were added to the source code in HDF 4.2r2, but were inadvertently left out of the documentation until HDF 4.2.5, and are therefore listed in the Release Notes for HDF 4.2.5.
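A minimal sketch of retrieving a long SDS name with SDgetnamelen (the file name and the choice of the first dataset are illustrative; error checking is omitted):

    #include <stdio.h>
    #include <stdlib.h>
    #include "mfhdf.h"

    int main(void)
    {
        int32 sd_id  = SDstart("example.hdf", DFACC_READ);
        int32 sds_id = SDselect(sd_id, 0);        /* first dataset in the file */

        uint16 name_len = 0;
        SDgetnamelen(sds_id, &name_len);          /* length, without the NUL   */

        char *name = (char *)malloc(name_len + 1);
        int32 rank, dims[MAX_VAR_DIMS], ntype, nattrs;
        SDgetinfo(sds_id, name, &rank, dims, &ntype, &nattrs);
        printf("dataset name: %s\n", name);

        free(name);
        SDendaccess(sds_id);
        SDend(sd_id);
        return 0;
    }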
Where can I get the HDF source code and information relevant to HDF?
For information, take a look at the HDF home page.
What documentation for HDF is available?
See the HDF4 Documentation page.
Who do I contact for information on SZIP licensing issues?
See the SZIP Licensing page.

How do I install HDF (4)?
Optional: Set environment variables to point to the correct compilers. For example:

    setenv CC  "/opt/SUNWspro/bin/cc -xarch=v9"
    setenv F77 "/opt/SUNWspro/bin/f90 -xarch=v9"

Also, remember that HDF4 builds with Fortran by default. Particularly on systems that support both 32-bit and 64-bit modes, HDF may pick up the wrong compiler.
Solaris ONLY: Make sure that your PATH variable points to the correct ar and tr tools:

    tr: /usr/ucb/tr
    ar: /usr/ccs/bin/ar

Following are the commands to build HDF. The JPEG and ZLIB libraries are required; you must include them when configuring. The SZIP library is optional.
    ./configure --with-zlib=/path_to_ZLIB_install_directory \
                --with-jpeg=/path_to_JPEG_install_directory \
                [--with-szlib=/path_to_SZIP_install_directory] \
                --prefix=/path_to_HDF4_install_directory
    gmake >& gmake.out
    gmake check >& check.out
    gmake install

To disable Fortran:

    ./configure --disable-fortran ...

If you encounter problems where configure can't find the JPEG or ZLIB library, but you know that you have specified the location properly, it may have found more than one JPEG/ZLIB installation and become confused.
How do I compile application programs that call HDF functions?
To use HDF routines in your C program, you must have the line #include "hdf.h" (if you do not use the mfhdf library) or #include "mfhdf.h" (if you do) near the beginning of your code.

The HDF Group provides the h4cc and h4fc scripts for compiling applications. These tools come with the pre-compiled binaries and source code.
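For example (myprog is a placeholder name), compiling with the wrapper scripts looks like this:

    h4cc -o myprog myprog.c    # C program that includes hdf.h or mfhdf.h
    h4fc -o myprog myprog.f    # Fortran program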
What header file are you supposed to use in your application?
The header file hdf.h must be included in every HDF application written in C, except for programs that call routines in the SD interface. The header file mfhdf.h must be included in all programs that call the SD interface routines.

Fortran programmers whose compilers allow file inclusion can include the files hdf.inc and dffunc.inc. If the Fortran compiler does not support file inclusion, the HDF library definitions must be explicitly defined in the Fortran program, as they appear in the header files of the HDF library.
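A minimal C sketch of the include rule (the file name is illustrative): a program that calls only base-library routines such as Hopen and Hclose needs just hdf.h, while a program that calls SD routines must include mfhdf.h, which itself pulls in hdf.h.

    #include "hdf.h"   /* base library only; use "mfhdf.h" for the SD interface */

    int main(void)
    {
        int32 file_id = Hopen("example.hdf", DFACC_READ, 0);  /* low-level open */
        if (file_id != FAIL)
            Hclose(file_id);
        return 0;
    }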
How to use open (unbuffered I/O) instead of fopen?
To use open instead of fopen in HDF4, you must:
- Download and uncompress the HDF source code.
- Edit the ./hdf/src/hdfi.h file.
- Find the section of code for the platform you are on. For example, on Solaris, find this line:

    #define DF_MT DFMT_SUN

  Then go down further until you find the section below.
- Change the second define in that section, from:

    #ifdef HAVE_FMPOOL
    #define FILELIB PAGEBUFIO   /* enable page buffering */
    #else
    #define FILELIB UNIXBUFIO
    #endif

  to:

    #ifdef HAVE_FMPOOL
    #define FILELIB PAGEBUFIO   /* enable page buffering */
    #else
    #define FILELIB UNIXUNBUFIO
    #endif
There is an unfixable bug in fopen on Solaris that limits the maximum number of open files to 256. open does not have this limitation.
Configure fails, "cannot compute sizeof (int*), 77"
The problem is that you have to add the path to the SZIP library to LD_LIBRARY_PATH, since it is a shared library.

To see this, you can edit config.log, go to the end, and search backwards for "compute". You will find the compute error. Then go back further, before the example program, and you will see that it cannot open the shared SZIP library.
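For example (using the same placeholder install path as the build instructions above, and csh-style syntax to match them):

    setenv LD_LIBRARY_PATH /path_to_SZIP_install_directory/lib:${LD_LIBRARY_PATH}
    ./configure --with-szlib=/path_to_SZIP_install_directory ...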
How can you compile HDF with shared libraries?
Prior to HDF 4.2r4, shared libraries were not supported in HDF4. With HDF 4.2r4, shared libraries for C were added. To build the C shared library, configure with these flags:
--enable-shared --disable-fortran
The --disable-fortran flag is required because Fortran shared libraries are not supported. Shared libraries are disabled by default.
Are there any conversion programs available to convert non-HDF image files into HDF files or vice versa?
Many of the HDF command line utilities are conversion programs. See Question #3 for more information regarding them.

Take a look at the What Software uses HDF? page on the HDF home page for information on other tools.
Which HDF Group tools can I use to view HDF objects?
The HDF Group has several tools for scientific visualization that are based on HDF. The latest tool available is:
- The HDF Group's Java-based HDF Browser
Is there any commercial or public domain visualization software that accepts HDF files?
Yes, there are numerous tools that accept HDF files. Please refer to the What Software uses HDF? section on our home page for more detailed, though not complete, information.

Commercial tools that accept HDF include IDL and MATLAB.
Comprehensive MATLAB, IDL, and NCL example scripts and plots showing how to access and visualize NASA HDF-EOS and HDF products such as MODIS, MISR, AIRS, TRMM, AMSR-E, CERES, QuikSCAT, Aquarius, SeaWIFS, ICESAT-2, and MOPITT can be found at the HDF-EOS Tools and Information Center, hdfeos.org/zoo.
How would you convert a netCDF file to/from HDF?
The netCDF library from Unidata contains the utilities ncdump and ncgen to convert between the netCDF binary format and a text format called CDL. The HDF4 library from The HDF Group also contains utilities named ncdump and ncgen that do the same thing, but with HDF as the underlying binary file format. It is a little tricky to install both, because by default the HDF commands will overwrite the netCDF commands or vice versa, so you may want to rename the HDF commands. (In the paragraphs below, the HDF versions of these tools are referred to as hdf_ncdump and hdf_ncgen.)

To do the "conversion" you would follow these steps:
    ncdump -l 80 foo.nc > foo.cdl
    # Edit the CDL file with a text editor to work around various
    # problems encountered by hdf_ncgen
    hdf_ncgen -b -o foo.hdf foo.cdl

HDF uses netCDF 2.3.2, a very old version of netCDF, when building the HDF versions of these tools. The need to edit the CDL file arises because hdf_ncgen cannot read everything that ncdump can write.
The "-l 80" option above is intended to work around one of hdf_ncgen's limitations: a maximum line length of 80 characters. When hdf_ncgen encounters a problem, it halts and prints out the relevant line number in the CDL file. The error message will not necessarily make much sense, but when you go to that line you will usually find that the problem is reasonably obvious and easily corrected.
You may also have to do things like strip out attributes containing zero-length strings, remove netCDF fill values (represented in CDL as an underscore) and replace NaNs with finite floating-point values. (NaNs are illegal in netCDF anyway, though some platforms allow them.)
There is a Windows executable called ncdf2hdf from Fortner Software (now defunct) that does the conversion. It is no longer distributed, but may be found by searching the web. However, the HDF files it generates have been reported to lack some of the structure found in files produced by the route above.
Can new versions of HDF read HDF files written using older versions of the HDF library?
Our goal is to make HDF backward compatible in the sense that HDF files can always be read by newer versions of HDF. We have succeeded in doing this so far, and will continue to follow this principle as much as possible. In many instances, HDF is also forward compatible, at least with regard to the data. Metadata, such as attributes, may not be readable by previous releases, but the data should be. Please see the notes following the table below for information on when the data is not forward compatible.

The table below lists the backward and forward compatibility of HDF with regard to the data (not metadata). The Vdata and Vgroup interfaces have been merged into HDF since HDF 3.2; before then, they were in a separate library named Vset.
                      CAN READ DATA FILES CREATED WITH
     Interface    | HDF3.1  | HDF3.2  | HDF3.3  | HDF4.0    | HDF4.1
    -------------------------------------------------------------------
    HDF3.1        |         |         |         |           |
     -RIS8        | YES     | YES     | YES(1)  | YES(1)    | YES(1)
     -RIS24       | YES     | YES     | YES(1)  | YES(1)    | YES(1)
     -PALETTE     | YES     | YES     | YES     | YES       | YES
     -ANNOTATION  | YES     | YES     | YES     | YES       | YES
     -SDS DFSD    | Float32 | Float32 | Float32 | Float32(2)| Float32(2,3)
    Vset 2.1      |         |         |         |           |
     -VData       | YES     | YES     | YES     | YES       | YES
     -Vgroup      | YES     | YES     | YES     | YES       | YES
    -------------------------------------------------------------------
    HDF3.2        |         |         |         |           |
     -RIS8        | YES     | YES     | YES(1)  | YES(1)    | YES(1)
     -RIS24       | YES     | YES     | YES(1)  | YES(1)    | YES(1)
     -PALETTE     | YES     | YES     | YES     | YES       | YES
     -ANNOTATION  | YES     | YES     | YES     | YES       | YES
     -SDS DFSD    | YES     | YES     | YES     | YES(2)    | YES(2,3)
     -VData       | YES     | YES     | YES     | YES       | YES
     -Vgroup      | YES     | YES     | YES     | YES       | YES
    -------------------------------------------------------------------
    HDF3.3        |         |         |         |           |
     -RIS8        | YES     | YES     | YES     | YES       | YES
     -RIS24       | YES     | YES     | YES     | YES       | YES
     -PALETTE     | YES     | YES     | YES     | YES       | YES
     -ANNOTATION  | YES     | YES     | YES     | YES       | YES
     -SDS SD      | YES     | YES     | YES     | YES(2)    | YES(2,3)
     -SDS DFSD    | YES     | YES     | YES     | YES(2)    | YES(2,3)
     -VData       | YES     | YES     | YES     | YES       | YES
     -Vgroup      | YES     | YES     | YES     | YES       | YES
    -------------------------------------------------------------------
    HDF4.0        |         |         |         |           |
     -GR          | YES     | YES     | YES     | YES       | YES
     -RIS8        | YES     | YES     | YES     | YES       | YES
     -RIS24       | YES     | YES     | YES     | YES       | YES
     -PALETTE     | YES     | YES     | YES     | YES       | YES
     -MFAN        | YES     | YES     | YES     | YES       | YES
     -ANNOTATION  | YES     | YES     | YES     | YES       | YES
     -SDS SD      | YES     | YES     | YES     | YES       | YES(3)
     -SDS DFSD    | YES     | YES     | YES     | YES(2)    | YES(2,3)
     -VData       | YES     | YES     | YES     | YES       | YES
     -Vgroup      | YES     | YES     | YES     | YES       | YES
    -------------------------------------------------------------------
    HDF4.1        |         |         |         |           |
     -GR          | YES     | YES     | YES     | YES       | YES
     -RIS8        | YES     | YES     | YES     | YES       | YES
     -RIS24       | YES     | YES     | YES     | YES       | YES
     -PALETTE     | YES     | YES     | YES     | YES       | YES
     -MFAN        | YES     | YES     | YES     | YES       | YES
     -ANNOTATION  | YES     | YES     | YES     | YES       | YES
     -SDS SD      | YES     | YES     | YES     | YES       | YES
     -SDS DFSD    | YES     | YES     | YES     | YES       | YES
     -VData       | YES     | YES     | YES     | YES       | YES
     -Vgroup      | YES     | YES     | YES     | YES       | YES

    (1) except for JPEG compression
    (2) except for gzip and nbit compression
    (3) except for chunking and chunking with compression

NOTES:
- The table above does not include the low-level compression interface, which was introduced in HDF 4.0.
- The SD interface should always be able to read an HDF file that was created with the DFSD interface.
- As of HDF 4.1r2, the SD dimension representation introduced in 4.0r1 is the ONLY one used by default. For a file to be read by earlier versions of the software, SDsetdimval_comp must be called to store both the old and new dimension representations in the HDF file.
- Old HDF libraries will NOT always be able to read HDF data written by newer HDF libraries. For example, HDF 3.1 cannot read 16-bit integer SDSs because HDF 3.1 did not support that data type.
- In HDF 4.1r1, chunking and Vdata/Vgroup attributes were added. Previous releases will not be able to read data created using these features.
Can application programs which work with old versions of the HDF library always be compiled with new versions of HDF?
As HDF evolves, some functions have to be changed or removed. For example, in HDF 3.2 some formal parameters that were passed by value in HDF 3.1 had to be passed by reference in order to support new number types. When this happens, old application programs need to be modified to work with the new library.

Our policy is as follows: keep existing functions unchanged as much as possible; create new functions when necessary to accommodate new features; if a new function covers the feature of an existing function, the old function should still be callable by old application programs; should an old function be phased out, users will be forewarned and encouraged to switch to the new function; an old function will be removed from the library only if it conflicts with the implementation of new features.
How does the 'integration of netCDF with HDF' affect application programmers?
The mfhdf library was designed to be completely transparent to the programmer. HDF supports a "multi-file" SDS interface and the complete netCDF interface as defined by Unidata netCDF Release 2.3.2. (Please note that HDF4 cannot read netCDF 64-bit files.)

Using either interface, you are able to read XDR-based netCDF files, HDF-based netCDF files, and pre-HDF4.x HDF files. The library determines what type of file is being accessed and handles it appropriately. Any of the above types of files may be modified. However, the library will only create new files based on HDF (you can't create new XDR-based netCDF files).
Summary of HDF and XDR file interoperability for the HDF and netCDF application interfaces:
                      | Files created | Files created   | Files written by        |
                      | by DFSD       | by SD interface | NC interface            |
                      | interface     |                 |                         |
                      | HDF           | HDF             | HDF    | Unidata netCDF |
    -------------------------------------------------------------------------------
    Accessed by DFSD  | Yes           | Yes             | Yes    | No             |
    Accessed by SD    | Yes           | Yes             | Yes    | Yes            |
    Accessed by NC    | Yes           | Yes             | Yes    | Yes            |

For more information, refer to the section entitled HDF Interface vs. netCDF Interface in the SD chapter of the User's Guide.
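To make the transparency concrete, here is a sketch of a C program that reads through the SD interface; the same calls work whether foo.nc is an XDR-based netCDF file or an HDF file (the file name and dataset index are illustrative, and error checking is omitted):

    #include <stdio.h>
    #include "mfhdf.h"

    int main(void)
    {
        int32 sd_id  = SDstart("foo.nc", DFACC_READ);   /* HDF or XDR netCDF */
        int32 sds_id = SDselect(sd_id, 0);              /* first dataset     */

        char  name[MAX_NC_NAME];
        int32 rank, dims[MAX_VAR_DIMS], ntype, nattrs;
        SDgetinfo(sds_id, name, &rank, dims, &ntype, &nattrs);
        printf("%s: rank %d\n", name, (int)rank);

        SDendaccess(sds_id);
        SDend(sd_id);
        return 0;
    }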
Does HDF support data compression?
HDF 4.0 (and later releases) supports a low-level compression interface, which allows any data object to be compressed using a variety of algorithms.

Currently only three compression algorithms are supported: Run-Length Encoding (RLE), adaptive Huffman, and an LZ-77 dictionary coder (the gzip 'deflate' algorithm). Plans for future algorithms include Lempel-Ziv-78 dictionary coding, an arithmetic coder, and a faster Huffman algorithm.
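For SDSs, compression is requested through SDsetcompress before any data is written. Here is a sketch using the gzip 'deflate' coder (file and dataset names are invented; error checking is omitted):

    #include "mfhdf.h"

    int main(void)
    {
        int32 dims[2] = {100, 100};
        int32 sd_id  = SDstart("zipped.hdf", DFACC_CREATE);
        int32 sds_id = SDcreate(sd_id, "data", DFNT_FLOAT32, 2, dims);

        comp_info cinfo;
        cinfo.deflate.level = 6;                      /* 1 (fast) .. 9 (best) */
        SDsetcompress(sds_id, COMP_CODE_DEFLATE, &cinfo);

        /* ... SDwritedata() calls go here; data is compressed on the way out */

        SDendaccess(sds_id);
        SDend(sd_id);
        return 0;
    }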
HDF 4.0 (and later releases) supports n-bit compression for SDSs.
HDF 4.0 (and later releases) supports RLE (Run Length Encoding), IMCOMP, and JPEG compression for raster images.
New with HDF 4.1 is support for "chunking" and "chunking with compression". Data chunking allows an n-dimensional SDS or GR image to be stored as a series of n-dimensional chunks. See the HDF User's Guide for more information.
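A sketch of chunking an SDS, optionally combined with deflate compression, using SDsetchunk (the file name, dataset name, and chunk sizes are illustrative; error checking is omitted):

    #include "mfhdf.h"

    int main(void)
    {
        int32 dims[2] = {100, 100};
        int32 sd_id  = SDstart("chunked.hdf", DFACC_CREATE);
        int32 sds_id = SDcreate(sd_id, "data", DFNT_INT32, 2, dims);

        HDF_CHUNK_DEF cdef;
        cdef.comp.chunk_lengths[0] = 10;      /* each chunk is 10 x 10 */
        cdef.comp.chunk_lengths[1] = 10;
        cdef.comp.comp_type = COMP_CODE_DEFLATE;
        cdef.comp.cinfo.deflate.level = 6;

        SDsetchunk(sds_id, cdef, HDF_CHUNK | HDF_COMP);  /* chunked + compressed */

        SDendaccess(sds_id);
        SDend(sd_id);
        return 0;
    }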
With HDF 4.2r0, HDF supports SZIP compression. For further information, see SZIP Compression in HDF Products.
NOTE: Compression and chunking are limited to fixed sized datasets. You cannot compress or chunk a dataset that has unlimited dimensions.
Is there a mailing list for HDF discussions and questions?
If you want to broadcast HDF technical questions to other HDF users in order to solicit their assistance, you can subscribe to the HDF-Forum mailing list. See the Community Support page for information on subscribing to the HDF-Forum and accessing prior postings to this mailing list.
How do I contribute my software to the HDF user community?
There are two ways that you can do this:
- You can give us the software, and we will place it on The HDF Group ftp server in the directory: https://support.hdfgroup.org/ftp/HDF/contrib/
- You can tell us where the software is located, and we will add a pointer to it from our HDF home page, in the section What Software uses HDF?.
If you have developed or ported something you think would be helpful to other users, please contact:
help [at] hdfgroup [dot] org
Indicate that you would like to contribute your software to the HDF user community.

If you wish to have it placed on The HDF Group ftp server, then for other users' convenience, your contribution package should include the software itself, a Makefile if possible, a man page, test programs, and input data files for testing. A README file is required. It should briefly describe the purpose, function, and limitations of the software; the platforms and operating systems it runs on; how to compile, install, and test it; and whom to contact with comments, suggestions, or bug reports.
How do I make a bug report?
All bug reports, comments, suggestions, and questions should go to: help [at] hdfgroup [dot] org
Attached below is a bug report template. It is not required but gives an idea of the kind of information we need in order to resolve a problem.
    ------------------ Template for bug report ------------------------
    VERSION:
        HDF 4.N.N
    USER:
        [Name and Email Address]
    SYNOPSIS:
        [Brief description of the problem and where it is located]
    MACHINE / OPERATING SYSTEM:
        [Platform and platform version. On Unix platforms, please
         include the output from "uname -a".]
    COMPILER:
        [Compiler and compiler version]
    DESCRIPTION:
        [Detailed description of problem.]
    REPEAT BUG BY:
        [What you did to get the error; include test program or session
         transcript if at all possible. If you include a program, make
         sure it depends only on libraries in the HDF distribution, not
         on any vendor or third-party libraries. Please be specific; if
         we can't reproduce it, we can't fix it. Tell us exactly what we
         should see when the program is run. NOTE: It helps us a lot if
         the example program can be written in C and can be run easily
         on Unix.]
    SAMPLE FIX:
        [Fix or patch if you have one]
    ------------------ End of Bug Report Template ----------------------
Last modified: 28 June 2017