Hi, the reason some of the dimensionalities are commented out is that they increase the compile time and memory cost by quite a lot -- for a feature that most people are not going to use.
But I hear your preference for upstreaming this. How about making the enabled hash encoding dimensions a preprocessor define? I am thinking TCNN_HASH_MIN_DIM (with default 2) and TCNN_HASH_MAX_DIM (with default 4). Those variables could then be overridden via CMake options -- see the top of CMakeLists.txt for how existing options are defined and then turned into preprocessor defines via target_compile_definitions.
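To make the idea concrete, here is a minimal sketch of what such CMake options could look like. The option names and defaults are taken from the thread; the target name `tiny-cuda-nn` and the exact cache-variable style are assumptions, not the actual CMakeLists.txt contents.

```cmake
# Hypothetical sketch: expose the compiled hash encoding dimensionalities
# as cache variables, then forward them as preprocessor defines.
set(TCNN_HASH_MIN_DIM 2 CACHE STRING "Minimum hash encoding dimensionality to compile")
set(TCNN_HASH_MAX_DIM 4 CACHE STRING "Maximum hash encoding dimensionality to compile")

# "tiny-cuda-nn" is an assumed target name for illustration.
target_compile_definitions(tiny-cuda-nn PUBLIC
	TCNN_HASH_MIN_DIM=${TCNN_HASH_MIN_DIM}
	TCNN_HASH_MAX_DIM=${TCNN_HASH_MAX_DIM}
)
```

A user wanting 5D grids could then configure with `-DTCNN_HASH_MAX_DIM=5` instead of patching grid.h.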
For the Python bindings, we could have an additional setup.py argument, similar to how --no-networks is currently handled, that similarly gets turned into TCNN_HASH_MIN_DIM and TCNN_HASH_MAX_DIM.
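As a rough illustration of the setup.py side, the argument handling could look like the sketch below. The flag names `--hash-min-dim` / `--hash-max-dim` are hypothetical (chosen to mirror the style of --no-networks); only the resulting define names come from the thread.

```python
import sys

def parse_hash_dim_args(argv):
    """Consume hypothetical --hash-min-dim=N / --hash-max-dim=N flags and
    turn them into compiler define flags, returning (defines, remaining_args).
    Defaults of 2 and 4 follow the values suggested in the thread."""
    min_dim, max_dim = 2, 4
    remaining = []
    for arg in argv:
        if arg.startswith("--hash-min-dim="):
            min_dim = int(arg.split("=", 1)[1])
        elif arg.startswith("--hash-max-dim="):
            max_dim = int(arg.split("=", 1)[1])
        else:
            remaining.append(arg)
    defines = [
        f"-DTCNN_HASH_MIN_DIM={min_dim}",
        f"-DTCNN_HASH_MAX_DIM={max_dim}",
    ]
    return defines, remaining

if __name__ == "__main__":
    # The defines would then be appended to the extension's compile flags.
    defines, sys.argv[1:] = parse_hash_dim_args(sys.argv[1:])
    print(defines)
```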
Thank you very much for your reply. I agree that preprocessor defines would be the cleanest approach to balancing feature scope and compile time for all users of tcnn.
In principle, I would be interested in making such a contribution. However, I cannot promise at the moment when I will find time to look into this in more detail. Until then, I will stick to the fork solution and revisit this issue once I start refactoring my own code.
Recently, I have successfully worked with 5D hash grids in tiny-cuda-nn by uncommenting the corresponding case in
tiny-cuda-nn/include/tiny-cuda-nn/encodings/grid.h
Line 1178 in 2ec562e
To streamline my setup process, I would like to get rid of my private fork and have 5D grids enabled directly in the official tiny-cuda-nn codebase.
@Tom94, do you have any objections to this? If not, I will prepare a PR that uncomments the line and updates the documentation accordingly.