The structure of libalgebra, as it currently stands on master, is built around the `sparse_vector` class. This assumption is present at all levels of the class hierarchy, such as in the `algebra` class, which means that the data model is currently integral to the design of the `algebra`, `tensor`, and `lie` types. In practice, the performance of operations such as tensor multiplication is therefore limited by the performance of the underlying map type used by the `sparse_vector` class.
The purpose of this project is to separate the data model from the class hierarchy in order to unlock the performance of alternative data models, such as a densely stored vector based on `std::vector`. This will require a number of changes to the existing structure and class hierarchy in libalgebra. The work should be done in stages, so that performance improvements become available to end users early in the project while purely structural changes happen behind the scenes.
## Stage 1: Separating data model and improving performance
The first step is to split the class hierarchy from the underlying data structure. The first part of this is to introduce an interface class between the vector implementation (such as `sparse_vector` or some other dense vector implementation) and the rest of the hierarchy, and to move all storage-specific implementation details into the underlying vector class. The multiplication and other operations can then be modified to call vector-level methods that actually carry out the multiplication.
The steps for implementing this are as follows:
1. Create a new interface class template for vectors (see the sketch after this list). All higher classes will derive publicly from this interface class instead of the underlying vector class, creating a strict separation of data model from mathematical structure. The interface will derive privately from the vector implementation and perform the correct dispatching for degree optimisation and similar features.
2. Implement a temporary `multiplication_operator` type that implements multiplication between key-value pairs based on the basis `prod` function, as it is currently implemented.
3. Move the implementation of the multiplication functionality from the `algebra` class into the `sparse_vector` class, since it is currently designed around the use of the `sparse_vector` type. This means implementing the `triangularbufferedmultiplyandcombine` and `squarebufferedmultiplyandcombine` functions from the `algebra` class in the `sparse_vector` class, along with the relevant interface methods in the `vector` interface. Since these no longer necessarily relate to multiplication, they should be renamed.
4. Shift the multiplication implementation from the `algebra` class versions to those provided on the `vector` interface class.
5. In the interim, the multiplication operator in `free_tensor` will be overridden to call optimised versions that are available for tensors stored in dense vectors.
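The sketch below gives one possible shape for this interface layer. The names used here (`vector`, `buffered_apply_binary_transform`, the `Coeffs` typedefs) are illustrative assumptions rather than the final API:

```cpp
// Sketch only: illustrative names, not the final libalgebra interface.
// The interface class template derives privately from the chosen storage
// type and re-exposes only the operations the mathematical layer needs,
// so algebra/tensor/lie never touch the storage directly.
template <typename Basis, typename Coeffs,
          typename Storage /* e.g. sparse_vector<Basis, Coeffs> */>
class vector : Storage {
public:
    using key_type    = typename Basis::KEY;
    using scalar_type = typename Coeffs::S;

    using Storage::Storage;      // construction stays storage-specific
    using Storage::operator[];   // element access likewise

    // Replacement for triangularbufferedmultiplyandcombine and friends:
    // the interface forwards to the storage implementation, which knows
    // how to iterate its own data and apply the key-level transform.
    template <typename KeyTransform>
    void buffered_apply_binary_transform(const vector& lhs, const vector& rhs,
                                         KeyTransform key_fn)
    {
        Storage::buffered_apply_binary_transform(
            static_cast<const Storage&>(lhs),
            static_cast<const Storage&>(rhs),
            key_fn);
    }
};
```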
## Stage 2: Implement dense vectors and hybrid vectors
Once the data model has been separated from the mathematical concepts in the class hierarchy, new underlying vector types can be added to implement different data models. This means we can introduce a dense vector based on `std::vector`, and a hybrid vector that combines a dynamically sized dense part with sparse storage for the remaining elements. This will unlock the performance improvements for dense tensors (low degree, low depth).
The steps for implementing this stage are fairly simple; a sketch of the hybrid idea follows the list.
1. Implement a dense vector with contiguous storage.
2. Implement a hybrid vector deriving from both dense and sparse and resizing dynamically according to the number of elements.
3. In the interim, the multiplication operator in `free_tensor` will be overridden to call optimised versions that are available for tensors stored in dense vectors.
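A minimal sketch of the hybrid idea, using standard containers and composition rather than the actual derivation from the dense and sparse vector classes, and with a plain integer key for simplicity:

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Sketch only: keys below dense_part.size() live in contiguous storage,
// everything above falls back to a sparse map. The real class would be
// templated on the basis and coefficient types.
class hybrid_vector {
    std::vector<double>           dense_part;   // keys [0, dense_part.size())
    std::map<std::size_t, double> sparse_part;  // keys beyond the dense block

public:
    double& operator[](std::size_t key)
    {
        if (key < dense_part.size()) {
            return dense_part[key];
        }
        return sparse_part[key];
    }

    // Grow the dense block once enough elements accumulate that contiguous
    // storage is cheaper, moving the affected sparse entries across.
    void resize_dense(std::size_t new_bound)
    {
        if (new_bound <= dense_part.size()) {
            return;
        }
        dense_part.resize(new_bound, 0.0);
        for (auto it = sparse_part.begin(); it != sparse_part.end();) {
            if (it->first < new_bound) {
                dense_part[it->first] = it->second;
                it = sparse_part.erase(it);
            } else {
                ++it;
            }
        }
    }
};
```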
## Stage 3: Redesigning the interface for multiplication
The third stage of the project is to rethink the way that multiplication is defined in the library. Currently, multiplication is defined within the basis class as the `prod` function, which takes a pair of keys and returns either a reference to the corresponding algebra type or the key that represents the product of the arguments.
There are two main problems with this approach.
First, the basis class must be aware of the output vector type in some cases. In particular, the basis must have knowledge of the scalar field and rational type of the underlying vector. Currently, these are provided to the basis classes as template arguments.
Second, a vector space - with a fixed basis - can be given any number of different multiplications without changing the underlying vector space structure. Currently, each different product requires a separate basis class to provide the `prod` function that defines the multiplication.
Instead, the multiplication can be moved out into a separate operator class that is provided to the `algebra` class as a template argument and takes care of dispatching the multiplication to the vector class via the methods introduced in stage 1. A simple implementation for the free tensor multiplication might look as follows.
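The sketch below is illustrative only; the operator, the `key_operator` it passes down, and helpers such as `concatenate` and `add_scal_prod` are assumed names rather than the final interface:

```cpp
// Sketch only: illustrative names and signatures.
class free_tensor_multiplication {

    // Key-level product: for the free tensor algebra, the product of two
    // basis words is their concatenation, so each pair of keys contributes
    // a single output key with the product of the two scalars.
    struct key_operator {
        template <typename Vector, typename Key, typename Scalar>
        void operator()(Vector& result,
                        const Key& lhs_key, const Scalar& lhs_val,
                        const Key& rhs_key, const Scalar& rhs_val) const
        {
            // concatenate is a hypothetical helper joining two tensor words
            result.add_scal_prod(concatenate(lhs_key, rhs_key), lhs_val * rhs_val);
        }
    };

public:
    // Dispatch the whole multiplication to the vector layer, which knows how
    // to iterate its own storage; the key_operator only knows about keys.
    template <typename Algebra>
    Algebra operator()(const Algebra& lhs, const Algebra& rhs) const
    {
        Algebra result;
        result.buffered_apply_binary_transform(lhs, rhs, key_operator());
        return result;
    }
};
```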
In this implementation (which may not be the final one), the multiplication operator has no dependence on either the basis type of the underlying vector or the underlying algebra type. This means that the multiplication operator can potentially hold state, such as a cache of previous results.
This multiplication operator calls `buffered_apply_binary_transform` from the vector interface class and passes the `key_operator` instance that actually performs the multiplication at the basis level. However, in some cases the product can be performed on densely stored data using the index instead of the key, which will be essential for implementing the performance improvements to the library. In that case, one would call the same function on the `vector` class but instead pass both the `key_fn` and the `index_fn`. Note that `buffered_apply_binary_transform` performs the relevant dispatching based on whether or not the basis has a degree (and on some additional properties that admit obvious optimisations).
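Building on the sketch above, a dense-aware variant might look as follows; the names and signatures here, including the extra overload of `buffered_apply_binary_transform`, are assumptions for illustration:

```cpp
#include <cstddef>

// Sketch only: the index operator works on positions in contiguous storage
// rather than keys, so no key lookup is needed on the dense path.
struct free_tensor_index_operator {
    template <typename Scalar>
    void operator()(Scalar* result_ptr,
                    const Scalar* lhs_ptr, const Scalar* rhs_ptr,
                    std::size_t lhs_index, std::size_t rhs_index,
                    std::size_t result_index) const
    {
        result_ptr[result_index] += lhs_ptr[lhs_index] * rhs_ptr[rhs_index];
    }
};

// Hypothetical overload of the same vector-interface entry point: given both
// a key-level and an index-level transform, buffered_apply_binary_transform
// can route densely stored operands to the index path and sparse operands to
// the key path.
template <typename Algebra, typename KeyFn, typename IndexFn>
Algebra multiply(const Algebra& lhs, const Algebra& rhs,
                 KeyFn key_fn, IndexFn index_fn)
{
    Algebra result;
    result.buffered_apply_binary_transform(lhs, rhs, key_fn, index_fn);
    return result;
}
```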