The term numerical linear algebra is often used almost
synonymously with matrix computations. However, what is actually of
interest for most applications are points in some vector space
and linear mappings between them, not matrices (which merely
represent those points or mappings, and inherently depend on a
particular choice of basis / coordinate system).
This library implements the crucial linear-algebra operations, such
as solving linear equations and eigenvalue problems, without
requiring that the vectors be represented in any particular basis.
This approach offers:
1. conceptual elegance (only operations that are actually
geometrically sensible will typecheck – this is far stronger than
merely confirming that the dimensions match, as some other
libraries do)
2. the opportunity to type tensors more expressively. For example,
instead of a single tensor with many dimensions that are easily
confused with one another, one can have a space of images and take
its tensor product with a linear batch space, etc.
3. optimisation opportunities: the vectors can be unboxed, use
dedicated sparse compression, or have their computations carried
out on accelerated hardware (GPUs etc.). The spaces can in
principle even be infinite-dimensional (e.g. function spaces).
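To make point 1 concrete, here is a minimal sketch in plain Haskell (not this library's actual API; the names `V`, `ImageSpace` and `BatchSpace` are hypothetical) of how a phantom type parameter can restrict vector addition to a single space, so that mixing an image space with a batch space is a compile-time error even when the dimensions happen to agree:

```haskell
-- Hypothetical sketch, not this library's API: a vector type tagged
-- with a phantom `space` parameter.
newtype V space = V [Double]
  deriving (Eq, Show)

-- Empty tag types standing in for distinct vector spaces.
data ImageSpace
data BatchSpace

-- Addition is only defined between vectors of the *same* space.
add :: V s -> V s -> V s
add (V xs) (V ys) = V (zipWith (+) xs ys)

img1, img2 :: V ImageSpace
img1 = V [1, 2, 3]
img2 = V [4, 5, 6]

batch :: V BatchSpace
batch = V [1, 2, 3]

-- `add img1 batch` is rejected by the type checker, even though the
-- runtime dimensions match.
```

A mere dimension check would accept `add img1 batch`; the phantom tag rules it out statically.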
The linear algebra algorithms in this package only require the
vectors to support fundamental operations like addition, scalar
products, double-dual-space coercion and tensor products. These
are expressed by a hierarchy of type classes, none of which requires
a basis representation.
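As an illustration of what such a basis-free class hierarchy can look like, here is a sketch following the conventions of the `vector-space` package (`AdditiveGroup`, `VectorSpace`, `InnerSpace`); the classes actually used by this package may differ. Note that none of the operations mention a basis:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Sketch of a basis-free class hierarchy, modelled on the
-- vector-space package's conventions; not necessarily this
-- library's exact API.
class AdditiveGroup v where
  zeroV   :: v
  (^+^)   :: v -> v -> v
  negateV :: v -> v

class AdditiveGroup v => VectorSpace v where
  type Scalar v
  (*^) :: Scalar v -> v -> v

class VectorSpace v => InnerSpace v where
  -- Scalar product: stated geometrically, no coordinates involved.
  (<.>) :: v -> v -> Scalar v

-- Example instance: the plane, represented as pairs of Doubles.
instance AdditiveGroup (Double, Double) where
  zeroV             = (0, 0)
  (a, b) ^+^ (c, d) = (a + c, b + d)
  negateV (a, b)    = (negate a, negate b)

instance VectorSpace (Double, Double) where
  type Scalar (Double, Double) = Double
  c *^ (a, b) = (c * a, c * b)

instance InnerSpace (Double, Double) where
  (a, b) <.> (c, d) = a * c + b * d
```

The `(Double, Double)` instance is just one model; a function space or an unboxed array could instantiate the same classes without exposing any coordinates.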
Basis representations are optional, so that vectors can be stored
in matrix-based backends; even this is done in a way that allows
e.g. taking the tensor product of a lazy function space with a
static-dimensional matrix space and a low-dimensional channel
space, whereupon only the inner dimensions are stored in a packed
format.
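To illustrate why a lazy function space needs no stored basis at all (the infinite-dimensional case mentioned in point 3), here is a self-contained sketch with hypothetical names: functions into `Double` form a vector space under pointwise operations.

```haskell
-- Hypothetical sketch: a function space as a vector space. No basis,
-- no matrix storage; the vectors simply are functions.
newtype FnSpace a = FnSpace (a -> Double)

-- Pointwise addition of two "vectors".
addF :: FnSpace a -> FnSpace a -> FnSpace a
addF (FnSpace f) (FnSpace g) = FnSpace (\x -> f x + g x)

-- Pointwise scalar multiplication.
scaleF :: Double -> FnSpace a -> FnSpace a
scaleF c (FnSpace f) = FnSpace (\x -> c * f x)

-- Evaluate a function-space vector at a point.
evalF :: FnSpace a -> a -> Double
evalF (FnSpace f) = f
```

A tensor product of such a space with a finite-dimensional one can then keep the function part lazy while packing only the finite inner dimensions, as described above.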