[BUG] #502
Labels: bug, priority:low
Describe the bug
Training of LSTM models currently appears to have issues. Using the default parameters in the GUI to train an LSTM model on the same re-amp as a model trained with the default WaveNet parameters yielded wildly different results. I also tried the same in the CLI version with a different re-amp, and while some runs seemed to train correctly, there was still some odd behavior.
To Reproduce
I have run replicates and gotten the same results multiple times. I used the standard NAM v3 input file as well as some custom inputs (such as the AIDA-X input file, since they use an LSTM for their RTNeural implementation). I tried `ny` values of both 8192 and 32768, as suggested in the config files. I also tried larger LSTM models, e.g. 32 hidden units and 2 layers. The increase in size did not help; in fact it made the ESR worse, as seen in the attached picture.
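For reference, the ESR numbers I'm comparing are (as I understand it) the standard error-to-signal ratio: residual energy divided by target energy, lower is better. A minimal sketch of how I'm interpreting the metric (the function name and toy signal are my own illustration, not NAM code):

```python
import numpy as np

def esr(target: np.ndarray, prediction: np.ndarray) -> float:
    """Error-to-signal ratio: energy of the residual divided by
    the energy of the target signal. Lower is better."""
    return float(np.sum((target - prediction) ** 2) / np.sum(target ** 2))

# Toy check: a prediction that is the target scaled by 0.5 leaves
# half the amplitude as residual, so the ESR is 0.5**2 = 0.25.
t = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
print(esr(t, 0.5 * t))  # → 0.25
```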
Desktop (please complete the following information):
- OS: Windows 11 24H2 (Anaconda PowerShell)
- Trainer: both GUI and CLI NAM trainers
- Version: v0.10.0