Minor improvements to documentation and readme
Tom94 committed Dec 15, 2021
1 parent a1b93c3 commit 27de412
Showing 2 changed files with 16 additions and 18 deletions.
8 changes: 4 additions & 4 deletions DOCUMENTATION.md
@@ -1,8 +1,8 @@
# JSON Configuration Documentation

-This documentation so far only contains the JSON parameters for configuring each component of __tiny-cuda-nn__.
+This document lists the JSON parameters of all components of __tiny-cuda-nn__.

-For each component, we provide a sample configuration with each parameter's default value.
+For each component, we provide a sample configuration that lists each parameter's default value.

## Networks

@@ -27,7 +27,7 @@ The following activation functions are supported:

### Fully Fused MLP

-Lightning fast implementation of small multi-layer perceptrons (MLPs). Restricted to hidden layers of size 32, 64, or 128 and outputs of 16 or fewer dimensions.
+Lightning fast implementation of small multi-layer perceptrons (MLPs). Restricted to hidden layers of size 32, 64, 128, or 256.

```json5
{
@@ -236,7 +236,7 @@ Relative L2 loss normalized by the network prediction [[Lehtinen et al. 2018]](h

### Relative L2 Luminance

-Same as above, but normalized by the luminance of the network prediction. Only applicable when network prediction is RGB. Used in Neural Radiance Caching [Müller et al. 2021] (to appear).
+Same as above, but normalized by the luminance of the network prediction. Only applicable when network prediction is RGB. Used in Neural Radiance Caching [[Müller et al. 2021]](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.pdf).

```json5
{
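To make the normalization concrete, here is a sketch of the two losses touched above; the luminance weights (Rec. 601) and the stabilizing constant ε are illustrative assumptions, not values quoted from the repository:

```latex
% Relative L2 loss [Lehtinen et al. 2018]: squared error normalized by
% the network prediction (the denominator is excluded from the gradient).
\mathcal{L}_{\text{rel}}(\hat{y}, y) = \frac{(\hat{y} - y)^2}{\hat{y}^2 + \epsilon}

% Relative L2 luminance loss: identical numerator, but the denominator is
% the luminance of the RGB prediction. Rec. 601 luminance weights are
% assumed here; \epsilon is a small constant preventing division by zero.
\operatorname{lum}(\hat{c}) = 0.299\,\hat{c}_R + 0.587\,\hat{c}_G + 0.114\,\hat{c}_B
\qquad
\mathcal{L}_{\text{lum}}(\hat{c}, c) = \frac{(\hat{c} - c)^2}{\operatorname{lum}(\hat{c})^2 + \epsilon}
```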
26 changes: 12 additions & 14 deletions README.md
@@ -37,7 +37,7 @@ This framework powers the following publications:
> [ [Paper](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.pdf) ] [ [GTC talk](https://gtc21.event.nvidia.com/media/Fully%20Fused%20Neural%20Network%20for%20Radiance%20Caching%20in%20Real%20Time%20Rendering%20%5BE31307%5D/1_liqy6k1c) ] [ [Video](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.mp4) ] [ [Interactive Results Viewer](https://tom94.net/data/publications/mueller21realtime/interactive-viewer/) ] [ [BibTeX](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.bib) ]
> __Extracting Triangular 3D Models, Materials, and Lighting From Images__
-> [Jakob Munkberg](https://research.nvidia.com/person/jacob-munkberg), [Jon Hasselgren](https://research.nvidia.com/person/jon-hasselgren), [Tianchang Shen](http://www.cs.toronto.edu/~shenti11/), [Jun Gao](http://www.cs.toronto.edu/~jungao/), [Wenzheng Chen](http://www.cs.toronto.edu/~wenzheng/), [Alex Evans](https://research.nvidia.com/person/alex-evans), [Thomas Müller](https://tom94.net), [Sanja Fidler](https://www.cs.toronto.edu/~fidler/)
+> [Jacob Munkberg](https://research.nvidia.com/person/jacob-munkberg), [Jon Hasselgren](https://research.nvidia.com/person/jon-hasselgren), [Tianchang Shen](http://www.cs.toronto.edu/~shenti11/), [Jun Gao](http://www.cs.toronto.edu/~jungao/), [Wenzheng Chen](http://www.cs.toronto.edu/~wenzheng/), [Alex Evans](https://research.nvidia.com/person/alex-evans), [Thomas Müller](https://tom94.net), [Sanja Fidler](https://www.cs.toronto.edu/~fidler/)
> _[arXiv:2111.12503 [cs.CV]](https://arxiv.org/abs/2111.12503)_, Nov 2021
>
> [ [Website](https://nvlabs.github.io/nvdiffrec/) ] [ [Paper](https://nvlabs.github.io/nvdiffrec/assets/paper.pdf) ] [ [Video](https://nvlabs.github.io/nvdiffrec/assets/video.mp4) ] [ [BibTeX](https://nvlabs.github.io/nvdiffrec/assets/bib.txt) ]
@@ -125,26 +125,24 @@ producing an image every 1000 training steps. Each 1000 steps should take roughl

Begin by cloning this repository and all its submodules using the following command:
```sh
-> git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
-> cd tiny-cuda-nn
-tiny-cuda-nn>
+$ git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
+$ cd tiny-cuda-nn
```

Then, use CMake to generate build files:

```sh
-tiny-cuda-nn> mkdir build
-tiny-cuda-nn> cd build
-tiny-cuda-nn/build> cmake ..
+tiny-cuda-nn$ mkdir build
+tiny-cuda-nn$ cd build
+tiny-cuda-nn/build$ cmake ..
```

-Then, depending on your operating system
-
-On Windows, open `tiny-cuda-nn/build/tiny-cuda-nn.sln` in Visual Studio and click the "Build" button.
-On Linux you can compile with
-```sh
-tiny-cuda-nn/build> make -j
-```
+The last step differs by operating system.
+- Windows: open `tiny-cuda-nn/build/tiny-cuda-nn.sln` in Visual Studio and click the "Build" button.
+- Linux: run the command
+```sh
+tiny-cuda-nn/build$ make -j
+```

## Components

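Two practical notes on the build commands above; both are standard git and make behavior rather than anything specific to this commit:

```sh
# If the repository was cloned without --recursive, the submodules can
# still be fetched after the fact:
$ git submodule update --init --recursive

# `make -j` without an argument spawns an unlimited number of parallel
# jobs; cap the job count (here: 8) if compilation exhausts memory:
tiny-cuda-nn/build$ make -j8
```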
