When training on 8 GPUs with a per-GPU batch size of 2 and 500 steps per epoch, each epoch covers only 8 × 2 × 500 = 8,000 images.
Over 10 epochs that is 80,000 images, which is still far smaller than the full downloaded dataset. Is this configuration correct?
Also, each step takes about 2 minutes, so training would take roughly 10 days instead of the expected 3. Does that make sense?
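To make the arithmetic behind the question explicit, here is a quick sanity check. All figures come from the question itself; the ~2 minutes/step value is the questioner's rough estimate, so the resulting wall-clock time is only approximate:

```python
# Sanity check of the epoch/image/time arithmetic quoted in the question.
num_gpus = 8            # GPUs used for training
batch_size_per_gpu = 2  # per-GPU batch size
steps_per_epoch = 500   # optimizer steps per epoch
epochs = 10

# Effective (global) batch size is per-GPU batch size times GPU count.
images_per_epoch = num_gpus * batch_size_per_gpu * steps_per_epoch
total_images = images_per_epoch * epochs

# Rough wall-clock estimate at ~2 minutes (120 s) per step.
seconds_per_step = 120
total_days = steps_per_epoch * epochs * seconds_per_step / 86400

print(images_per_epoch, total_images, round(total_days, 2))
```

Note that at exactly 120 s per step, 5,000 steps work out to roughly 7 days, so the 10-day estimate in the question likely reflects slower steps or additional overhead (data loading, validation, checkpointing).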