Hi! Thank you for the excellent work on OpenCood! I ran into an issue while training with OpenCood: once the number of samples per epoch exceeds a certain threshold (I haven't pinned it down exactly, but I suspect it is somewhere between 2,500 and 3,000), the entire training process becomes significantly slower. To analyze the issue, I took a smaller dataset and concatenated it with itself to create a larger one, so that the training data in the later part of the epoch was identical to the data at the beginning. The issue still occurred.
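For reference, this is a minimal sketch of the concatenation test I describe above (the dataset class here is a placeholder, not the actual OpenCood/OPV2V+ loader, and the batch size and worker count are just example values):

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset


class DummyScenario(Dataset):
    """Stand-in for the real dataset; returns fixed-size tensors."""

    def __init__(self, num_samples: int):
        self.num_samples = num_samples

    def __len__(self) -> int:
        return self.num_samples

    def __getitem__(self, idx: int) -> torch.Tensor:
        return torch.zeros(64, 64)


small = DummyScenario(1500)            # stays under the suspected threshold
large = ConcatDataset([small, small])  # ~3,000 samples, identical content

loader = DataLoader(large, batch_size=4, shuffle=True, num_workers=4)

# In my runs, an epoch over the concatenated dataset slows down noticeably
# once its length crosses roughly 2,500-3,000 samples, even though every
# sample also appears in the small dataset.
```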
Below are some screenshots showing the training time and the status of the devices. The dataset used for training is OPV2V+.
Have you encountered a similar issue during training, or do you have any suggestions on how to resolve this?
Thanks again for your great work!