Hierarchical Data Format 5 (HDF5) is an open-source software suite for managing data collections of all sizes and complexities. This header-only library provides templated [CREATE|READ|WRITE|APPEND] operations for popular linear algebra packages such as Armadillo C++.
 template <T> ds_t create( fd, path, space [,lcpl] [,dcpl] [,dapl] );
 fd - an open file descriptor or a path to an HDF5 file, path - how you reach the dataset within the file, space - describes the current and maximum dimensions of the dataset; the optional h5::lcpl | h5::dcpl | h5::dapl property lists fine-tune link and dataset properties.
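A minimal sketch of the call above, assuming the h5cpp/all umbrella header and a working HDF5 installation; the file and dataset names are illustrative:

```cpp
#include <h5cpp/all>

int main() {
    // create (or truncate) an HDF5 container
    h5::fd_t fd = h5::create("example.h5", H5F_ACC_TRUNC);
    // create a 2D dataset of doubles: current size 10x20,
    // rows may grow without bound, hence the chunked layout
    h5::ds_t ds = h5::create<double>(fd, "group/dataset",
        h5::current_dims{10, 20}, h5::max_dims{H5S_UNLIMITED, 20},
        h5::chunk{10, 20});
}
```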
 h5::read<T>( ds | path [,offset] [,stride] [,count] [,dxpl] );
 Templated full or partial READ operations that give access to [dataset]s either by returning supported linear algebra and STL containers, or by updating the content of already existing objects passed by reference or pointer. The provided implementations rely on compile-time constexpr evaluation, SFINAE pattern matching, and static_assert compile-time error handling wherever permitted; otherwise an optional runtime error mechanism with HDF5 error stack unwinding is added. Starting from the most convenient form, where you only point at a dataset and an object of the right size is returned, you find calls which operate on pre-allocated objects; for unsupported objects there is an efficient implementation for raw pointers. When an object is passed, the number of elements is computed from the size of the object, therefore specifying h5::count is a compile-time error. On the other hand, when working with raw pointers, h5::count is the only way to tell how much data you are to retrieve, hence it is required. The first group of function arguments is mandatory, whereas the optional arguments may be specified in any order or omitted entirely.
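The two READ forms described above can be sketched as follows; a hedged example assuming the h5cpp/all header and a pre-existing dataset at "group/dataset":

```cpp
#include <h5cpp/all>
#include <vector>

int main() {
    h5::fd_t fd = h5::open("example.h5", H5F_ACC_RDONLY);
    // most convenient form: an object of the right size is returned,
    // h5::count is inferred, specifying it would be a compile-time error
    auto v = h5::read<std::vector<double>>(fd, "group/dataset");
    // raw pointer form: h5::count is mandatory, here one 20-element row
    double buf[20];
    h5::read(fd, "group/dataset", buf, h5::offset{2, 0}, h5::count{1, 20});
}
```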
 herr_t h5::write<T>( ds | path, object<T> [,offset] [,stride] [,count] [,dxpl] );
 Templated WRITE operations, where object := std::vector<S> | arma::Row<T> | arma::Col<T> | arma::Mat<T> | arma::Cube<T> | raw pointer.
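The object and raw-pointer WRITE variants might look like this; a sketch under the same assumptions as above (existing file and dataset, h5cpp/all header):

```cpp
#include <h5cpp/all>
#include <vector>

int main() {
    h5::fd_t fd = h5::open("example.h5", H5F_ACC_RDWR);
    std::vector<double> v(20, 1.0);
    // object variant: the element count is taken from v.size()
    h5::write(fd, "group/dataset", v, h5::offset{0, 0});
    // raw pointer variant: h5::count is required
    h5::write(fd, "group/dataset", v.data(), h5::offset{1, 0}, h5::count{1, 20});
}
```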
 h5::append<T>( pt, T object );
 Dataset APPEND operations for streamed data access, with examples.
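A short sketch of streamed appends, assuming a packet-table handle h5::pt_t obtained from a chunked, extendable dataset (names are illustrative):

```cpp
#include <h5cpp/all>

int main() {
    h5::fd_t fd = h5::create("stream.h5", H5F_ACC_TRUNC);
    // an unlimited, chunked 1D dataset suitable for streaming
    h5::pt_t pt = h5::create<double>(fd, "measurements",
        h5::max_dims{H5S_UNLIMITED}, h5::chunk{1024});
    for (double x : {1.0, 2.0, 3.0})
        h5::append(pt, x);  // data is buffered and written chunk by chunk
}
```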
 [ handle | type_id ]-s with RAII
 Thin, std::unique_ptr-like, type-safe wrappers for the CAPI hid_t and herr_t types which do the right thing upon destruction or when passed to HDF5 CAPI functions.
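To illustrate the point above: the managed handle converts to the CAPI hid_t, so it can be handed directly to C calls, and the resource is released when the wrapper leaves scope. A sketch assuming an existing example.h5:

```cpp
#include <h5cpp/all>

int main() {
    h5::fd_t fd = h5::open("example.h5", H5F_ACC_RDONLY);
    // implicit conversion to hid_t lets the handle flow into CAPI calls
    char name[256];
    H5Fget_name(fd, name, sizeof(name));
    // no explicit H5Fclose: the descriptor is closed when fd goes out of scope
}
```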
 h5::open | h5::create | h5::mute | h5::unmute
 The h5::open | h5::close | h5::create operations create and manipulate an HDF5 container. In the POSIX sense an HDF5 container is an entire image of a file system, and a dataset is a document within it. Datasets may be manipulated with the h5::create | h5::read | h5::write | h5::append operations. While the file IO operations are straight maps from already existing HDF5 CAPI calls, they are furnished with RAII and type safety to aid productivity. How the returned managed handles may be passed to CAPI calls is governed by the H5CPP conversion policy. h5::mute | h5::unmute are miscellaneous thread-safe calls for those rare occasions when you need to turn the HDF5 CAPI error handler output off and on. They are typically used when failure is information: checking the existence of a [dataset|path] by the call-fail pattern, etc.
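The call-fail pattern mentioned above could be sketched like this; a hedged example which assumes h5::error::any as the exception base and an existing example.h5:

```cpp
#include <h5cpp/all>

int main() {
    h5::fd_t fd = h5::open("example.h5", H5F_ACC_RDONLY);
    h5::mute();   // silence the CAPI error handler: failure is information here
    try {
        h5::ds_t ds = h5::open(fd, "maybe/missing");
        // dataset exists, proceed with it
    } catch (const h5::error::any& e) {
        // dataset does not exist; this outcome is expected, not an error
    }
    h5::unmute(); // restore normal error reporting
}
```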