Can you please ensure that your model is fixed and does not change while your code runs (e.g., because of batch norm layers being in training mode)? I suspect this is what causes your observation, as the internal computation of the success variable (see here) is fairly simple and most likely correct.
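In PyTorch, a quick way to check this is a minimal sketch like the one below, assuming model is your torch.nn.Module and images is any fixed batch of inputs:

import torch

model.eval()  # freeze batch norm / dropout so the forward pass is deterministic
with torch.no_grad():
    out1 = model(images)
    out2 = model(images)
# identical outputs for identical inputs => the model is fixed
assert torch.allclose(out1, out2)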
You have to redo the normalization. The adversarial images will lie between 0 and 1 (or within your bounds), but your model expects normalized inputs. I had the same problem. This is easy to verify by checking the min and max values of the images. By the way, the part where you calculate your model's accuracy yourself then becomes unnecessary, but it is a nice sanity check.
images, labels, _ = utils.get_out_dataloader(out_dataloader, device)
min_max_images(images)  # debug helper: print min/max pixel values of the batch
# denormalize: the attack expects inputs within the model bounds (here 0-1)
images = images * std[:, None, None] + mean[:, None, None]
_, advs_list, is_adv = attack(fmodel, images, labels, epsilons=epsilons)
count_adv += is_adv.sum(dim=1)
total += images.shape[0]
for i, advs in enumerate(advs_list):
    min_max_images(advs)  # min/max before normalization (should be within 0-1)
    # renormalize so the raw model sees the input distribution it was trained on
    advs_images = (advs - mean[:, None, None]) / std[:, None, None]
    min_max_images(advs_images)  # min/max after normalization
    preds = model(advs_images).argmax(dim=1)
    correct[i] += (preds == labels).sum().item()  # count correct predictions per epsilon
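Alternatively, the manual denormalize/renormalize round trip can be avoided by letting the Foolbox wrapper handle normalization, so the attack and the raw model see consistent inputs. A sketch, assuming fmodel is a foolbox.PyTorchModel and mean/std are per-channel tensors for a model trained on normalized images:

import foolbox as fb

# fmodel normalizes internally; feed it (and the attack) raw 0-1 images
preprocessing = dict(mean=mean, std=std, axis=-3)  # axis=-3 is the channel axis
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

With this setup the dataloader should yield unnormalized 0-1 images, and the manual accuracy check can call fmodel on the adversarial images directly instead of renormalizing for model.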
It seems like the 'success' value returned by the 'attack' function is overconfident.
And the output of this code is:
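Incidentally, to compare the two numbers side by side, the loop above could be followed by something like this sketch, which reuses count_adv, correct, total, and epsilons from the snippet (the exact print formatting is hypothetical):

# per-epsilon success rate as reported by the attack
reported = count_adv.float() / total
for i, eps in enumerate(epsilons):
    manual_acc = correct[i] / total  # accuracy from the model's own predictions
    print(f"eps={eps}: reported success={reported[i].item():.3f}, "
          f"manual accuracy={manual_acc:.3f}")  # expect success ~ 1 - accuracy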