Hi there, nice project! I've actually been using it as a base for some of my own work on scaling CDN on HPC systems. In doing that I noticed a bug, though:
CoDeepNEAT/src/phenotype/neural_network/evaluator/evaluator.py
Line 117 in 3476078
In test_nn, where you calculate accuracy, the counter used as the denominator is set to batch_idx, which is zero-based and therefore undercounts the number of batches by one. In the limiting case where the entire dataset fits in a single batch, "count" would be 0 and the division would fail.
I was using a fairly large batch size, so each undercounted batch mattered more, and I was getting really inflated accuracy numbers.
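Here is a minimal sketch of the off-by-one (the function and variable names are illustrative, not the project's exact code, and I'm assuming the loop uses enumerate over the test loader as is typical):

```python
def average_accuracy_buggy(batch_accuracies):
    """Averages per-batch accuracy, but sets count = batch_idx (zero-based)."""
    total = 0.0
    count = 0
    for batch_idx, acc in enumerate(batch_accuracies):
        total += acc
        count = batch_idx  # bug: enumerate starts at 0, so this is
                           # one less than the number of batches seen
    return total / count   # single-batch dataset -> count == 0 -> ZeroDivisionError


def average_accuracy_fixed(batch_accuracies):
    """Same loop, but counts the batch that was just processed."""
    total = 0.0
    count = 0
    for batch_idx, acc in enumerate(batch_accuracies):
        total += acc
        count = batch_idx + 1  # fix: batches processed so far
    return total / count
```

With two batches at 0.5 and 0.7 accuracy, the buggy version divides 1.2 by 1 and reports 1.2, while the fixed version correctly reports 0.6; with a single batch the buggy version divides by zero.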