In train_i3d.py, you call loss.backward() for both the train and val phases. Doesn't that accumulate gradients for the validation loss as well, even though you put the model in eval mode (since eval mode only affects the behaviour of certain layers such as dropout and batch norm)? Is there something specific to PyTorch 0.3.0 that blocks validation gradient accumulation?
For efficiency, the loss.backward() could be removed from the validation step, but since the gradients accumulated there are never applied by optimizer.step(), it does not affect model accuracy.
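A minimal sketch of what a validation pass without gradient accumulation could look like, assuming a modern PyTorch where torch.no_grad() is available (in 0.3.x the equivalent was constructing Variables with volatile=True); the names model, dataloader, criterion and device are illustrative and not taken from train_i3d.py:

```python
import torch

def validate(model, dataloader, criterion, device):
    model.eval()  # only switches layer behaviour (dropout, batch norm); it does not disable autograd
    total_loss = 0.0
    with torch.no_grad():  # no graph is built, so no gradients can accumulate and no backward() is needed
        for inputs, labels in dataloader:
            inputs, labels = inputs.to(device), labels.to(device)
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            total_loss += loss.item()
    return total_loss / max(len(dataloader), 1)
```

Besides skipping the unnecessary backward pass, dropping graph construction during validation also saves memory.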
I see. Then, as I said in #44 (comment), when num_steps_per_update is not a multiple of len(dataloader), the leftover accumulated training gradients are zeroed without optimizer.step() ever being called on them when the phase changes from training to validation. As a result, the losses from those leftover training forward passes are never used. A sketch of one way to flush that partial window is shown below.
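A rough sketch of flushing the leftover partial accumulation window at the end of the training phase, so those gradients are applied rather than discarded; num_steps_per_update and optimizer follow the naming used in train_i3d.py, while train_loader, model, compute_loss and steps_since_update are hypothetical names added here for illustration:

```python
steps_since_update = 0
optimizer.zero_grad()

for inputs, labels in train_loader:
    loss = compute_loss(model, inputs, labels)      # assumed helper returning a scalar loss
    (loss / num_steps_per_update).backward()        # accumulate gradients over the window
    steps_since_update += 1
    if steps_since_update == num_steps_per_update:
        optimizer.step()
        optimizer.zero_grad()
        steps_since_update = 0

# Flush the leftover partial window before switching to the validation phase,
# instead of zeroing the accumulated gradients unused.
if steps_since_update > 0:
    optimizer.step()
    optimizer.zero_grad()
```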