
Version 1.1

Released by @Tom94 on 30 Oct 08:50

Changes Since Last Release

Major Changes

  • tiny-cuda-nn now supports saving and loading snapshots via Trainer::serialize and Trainer::deserialize. Trainer::serialize produces an nlohmann::json object containing the trained parameters of the model and, optionally, the state of the optimizer (to support continued training); Trainer::deserialize restores the trainer from such an object.

The intended way to store the resulting JSON blob to disk efficiently is:

// Write the serialized trainer state as binary MessagePack.
std::ofstream f("checkpoint.msgpack", std::ios::out | std::ios::binary);
json::to_msgpack(trainer->serialize(), f);

and to load it again:

// Read the MessagePack file back and restore the parameters (and, if saved, the optimizer state).
std::ifstream f("checkpoint.msgpack", std::ios::in | std::ios::binary);
trainer->deserialize(json::from_msgpack(f));
  • tiny-cuda-nn now supports L1-type losses. Four new losses were added: L1, Relative L1, MAPE (Mean Absolute Percentage Error), and SMAPE (Symmetric Mean Absolute Percentage Error). A configuration sketch follows this list.
  • GPUMatrix is now much less verbose to use: column-major matrices have the type GPUMatrix<T> and row-major matrices GPUMatrix<T, RM>. We also introduced a dynamically laid-out matrix type, GPUMatrixDynamic<T>, which simplifies the API for dynamically laid-out network outputs. See the matrix-type sketch after this list.
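
As an illustration of opting into one of the new losses: tiny-cuda-nn constructs losses from a JSON config whose "otype" field names the loss. A minimal sketch, assuming the create_loss factory and the network_precision_t typedef from the library's headers:

#include <tiny-cuda-nn/loss.h>
#include <memory>

using namespace tcnn;

// "L1", "RelativeL1", "MAPE", and "SMAPE" are the newly added loss names.
json loss_config = {{"otype", "RelativeL1"}};
std::unique_ptr<Loss<network_precision_t>> loss{create_loss<network_precision_t>(loss_config)};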

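And a sketch of the slimmed-down matrix types. The column-major default and the RM tag come from the description above; the constructor arguments and the runtime layout parameter of GPUMatrixDynamic are assumptions for illustration:

GPUMatrix<float> col_major(64, 128);           // column-major, the default layout
GPUMatrix<float, RM> row_major(64, 128);       // row-major via the RM tag
GPUMatrixDynamic<float> dynamic(64, 128, RM);  // layout chosen at runtime (assumed constructor)
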
Minor Changes

  • Extended the functionality of Network/NetworkWithInputEncoding to support features such as extracting neuron activations or computing gradients of the output w.r.t. the input.
  • Added Squareplus and Softplus activations to FullyFusedMLP.
  • CMake now automatically detects the GPU architecture of the system, simplifying compilation on Turing and A100 GPUs (see the updated README.md).
  • Removed data_factor from all losses. To achieve the same behavior, please wrap existing losses in a helper class, as sketched below.
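
As an illustration of such a wrapper, the sketch below reproduces a constant data_factor by scaling the values and gradients of an inner loss. The Loss interface here is a deliberately simplified, hypothetical stand-in; the real tiny-cuda-nn interface operates on GPU matrices and CUDA streams.

#include <memory>
#include <utility>

// Hypothetical, minimal loss interface used only for illustration.
class Loss {
public:
    virtual ~Loss() = default;
    virtual void evaluate(float prediction, float target, float& value, float& gradient) const = 0;
};

// Wrapper reproducing the removed data_factor: it scales the values and
// gradients produced by any wrapped loss by a constant factor.
class ScaledLoss : public Loss {
public:
    ScaledLoss(std::unique_ptr<Loss> inner, float factor)
    : m_inner{std::move(inner)}, m_factor{factor} {}

    void evaluate(float prediction, float target, float& value, float& gradient) const override {
        m_inner->evaluate(prediction, target, value, gradient);
        value *= m_factor;
        gradient *= m_factor;
    }

private:
    std::unique_ptr<Loss> m_inner;
    float m_factor;
};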