The general framework for training is: x --> hash encoding --> network --> y <-- loss
If I impose auxiliary losses directly on the params, like this:
x --> hash encoding --> network --> y <-- loss
            ^               ^
            |               |
      auxiliary loss1   auxiliary loss2
is this possible in the current tiny-cuda-nn? I tried regularizing the params in the way indicated above, but it resulted in a terrible model.
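For reference, here is a minimal sketch of the kind of setup being described, using the PyTorch bindings (`tinycudann`). The encoding/network configs, the loss weights, and the choice of plain L2 regularization on `.params` are illustrative assumptions, not the exact code from this issue:

```python
import torch
import torch.nn.functional as F
import tinycudann as tcnn

# Hypothetical configs; any HashGrid encoding + MLP network would do.
encoding = tcnn.Encoding(
    n_input_dims=3,
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 2.0,
    },
)
network = tcnn.Network(
    n_input_dims=encoding.n_output_dims,
    n_output_dims=1,
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 2,
    },
)
optimizer = torch.optim.Adam(
    list(encoding.parameters()) + list(network.parameters()), lr=1e-2
)

x = torch.rand(4096, 3, device="cuda")       # dummy inputs
target = torch.rand(4096, 1, device="cuda")  # dummy targets

# x --> hash encoding --> network --> y <-- main loss
y = network(encoding(x))
main_loss = F.mse_loss(y.float(), target)

# Auxiliary losses imposed directly on the raw parameter vectors
# (here: simple L2 regularization with assumed weights).
aux_loss1 = 1e-6 * encoding.params.float().pow(2).sum()  # hash encoding params
aux_loss2 = 1e-6 * network.params.float().pow(2).sum()   # network params

# Single combined backward pass (the approach that led to the bad model).
loss = main_loss + aux_loss1 + aux_loss2
optimizer.zero_grad()
loss.backward()
optimizer.step()
```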
It seems that the gradient calculation for the network is correct, but the one for the hash encoding is not. Doing a separate backward pass for each loss gives better results, but it slows down training. Is there any solution for this?
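Continuing the hypothetical sketch above, the two variants being compared would look roughly like this (`retain_graph=True` only matters when a later loss reuses the same computation graph):

```python
# Variant A: one backward over the summed loss (fast; the report above
# suggests the hash-encoding gradients may come out wrong in this case).
loss = main_loss + aux_loss1 + aux_loss2
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Variant B: a separate backward per loss, letting gradients accumulate
# in .grad (reported to work better here, at the cost of extra passes).
optimizer.zero_grad()
main_loss.backward(retain_graph=True)  # retain only if the aux losses share this graph
aux_loss1.backward()
aux_loss2.backward()
optimizer.step()
```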