
Possibility to integrate tiny-cuda-nn with my own custom CUDA kernel? #469

Open
bchao1 opened this issue Sep 13, 2024 · 0 comments
bchao1 commented Sep 13, 2024

Hi,

First of all, thanks for this amazing library! I was wondering if the following is doable (or how complicated it would be) with the tiny-cuda-nn framework.

I have a PyTorch model that uses a custom CUDA kernel implementing its own forward and backward passes. The kernel's gradients are connected back to PyTorch's autograd, so it composes with the other modules defined in PyTorch.
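To make the setup concrete, the bridging pattern described above usually looks like a `torch.autograd.Function` whose `forward`/`backward` call into the extension. The names below are hypothetical, and the "kernel" is replaced by plain tensor math so the sketch runs without a GPU:

```python
import torch

class MyKernelFunction(torch.autograd.Function):
    """Stand-in for a custom CUDA extension. In the real model, forward()
    and backward() would call into the compiled CUDA kernels; here they
    are plain PyTorch math so the pattern is runnable anywhere."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x  # pretend this ran inside the CUDA kernel

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * 2 * x  # analytic gradient handed back to autograd

x = torch.tensor(3.0, requires_grad=True)
y = MyKernelFunction.apply(x)
y.backward()
print(x.grad)  # tensor(6.)
```

The question below is essentially where, inside `forward`/`backward`, a tiny-cuda-nn network could be invoked on intermediates that never surface as PyTorch tensors.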

Now, I would like to integrate tiny-cuda-nn with my model. The caveat is that the tiny-cuda-nn input is actually calculated inside my custom CUDA kernel (for design and efficiency reasons, it is not practical to expose this intermediate to PyTorch), so I cannot use the PyTorch bindings you already provide. This means I would have to initialize a tiny-cuda-nn instance on the C++/CUDA side myself; is that correct?

From my understanding, what I'll have to do is:

  1. Define the tiny-cuda-nn weights in PyTorch
  2. Pass the weights from the Python process to my custom CUDA extension
  3. Initialize a tiny-cuda-nn instance on the C++/CUDA side
  4. Set the tiny-cuda-nn parameters manually using the weights passed in from the Python process
  5. Connect the forward/backward passes of my CUDA kernel with those of tiny-cuda-nn
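Steps 3–5 might be sketched on the C++ side roughly as follows. This is only a sketch under assumptions: the names (`create_network`, `set_params`, `forward`, `backward`, `GPUMatrixDynamic`) are taken from tiny-cuda-nn's C++ headers, but their exact signatures have changed across versions, so this should be checked against the version actually in use rather than treated as drop-in code:

```cpp
// Sketch only: verify signatures against your tiny-cuda-nn version.
#include <tiny-cuda-nn/config.h>

using namespace tcnn;

void run_step(cudaStream_t stream,
              GPUMatrixDynamic<float>& input,               // produced by the custom kernel
              GPUMatrixDynamic<network_precision_t>& output,
              GPUMatrixDynamic<network_precision_t>& dL_doutput, // from the custom backward pass
              GPUMatrixDynamic<float>& dL_dinput,
              network_precision_t* params_from_torch) {     // weights passed in (step 2)
    nlohmann::json config = {
        {"otype", "FullyFusedMLP"},
        {"activation", "ReLU"},
        {"output_activation", "None"},
        {"n_neurons", 64},
        {"n_hidden_layers", 2},
    };

    // Step 3: create the network once (in practice, cache it rather than
    // rebuilding per step).
    std::shared_ptr<Network<network_precision_t>> network{
        create_network<network_precision_t>(input.m(), output.m(), config)};

    // Step 4: point the network at the externally owned weight buffer
    // instead of letting tiny-cuda-nn allocate its own. The argument list
    // of set_params differs between versions (training vs. inference
    // params, gradient buffer), so check the header you build against.
    network->set_params(params_from_torch, params_from_torch, nullptr);

    // Step 5: forward, then backward with the incoming output gradient;
    // dL_dinput can then be chained into the rest of the custom backward.
    auto ctx = network->forward(stream, input, &output);
    network->backward(stream, *ctx, input, output, dL_doutput, &dL_dinput);
}
```

The weight gradients that tiny-cuda-nn accumulates would still need to be copied or exposed back to the PyTorch parameter's `.grad` for the optimizer to see them.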

Thank you so much!
