
MobileNetV2 baseline FLOPs #5

Open
andreysher opened this issue Oct 10, 2022 · 2 comments

Comments


andreysher commented Oct 10, 2022

Hello, I suspect there is a bug in your table. It lists 74.29% accuracy, 179.46 MFLOPs, and 2.33 MParameters for the MobileNetV2 baseline, but according to the pytorch-cifar-models repo, the accuracy and parameter count correspond to MobileNetV2_x1_0 while the FLOPs correspond to MobileNetV2_x1_4. Also, your compressed model has 111.96 MFLOPs, which is larger than the original mobilenetv2_x1_0 baseline. Can you comment on this, or fix your table?

@andreysher
Author

According to the thop profiler, your compressed model has 178.64 MFLOPs and 0.816 MParams.

@yjlee0607
Contributor

#MAdds in pytorch-cifar-models appears to mean multiply-adds (MAdds, the same as MACs). We used the thop profiler and got 94.6M MACs (equivalent to 189 MFLOPs). You can see the details in the following code.

import torch
from thop import profile

# Load the CIFAR-100 MobileNetV2 x1.0 baseline from pytorch-cifar-models
model = torch.hub.load("chenyaofo/pytorch-cifar-models", "cifar100_mobilenetv2_x1_0", pretrained=True)

device = 'cpu'
model = model.to(device)
dummy_input = torch.ones((1, 3, 32, 32)).to(device)

# thop reports MACs (multiply-accumulate operations) and parameter count
original_macs, original_params = profile(model, inputs=(dummy_input,))
print(original_macs, original_params)

94651392.0 2351972.0
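For reference, the MAC-to-FLOP conversion this reply relies on (one MAC is one multiply plus one add, i.e. 2 FLOPs) can be sketched as follows; the numbers are taken from the thop output above:

```python
macs = 94_651_392          # MACs reported by thop for mobilenetv2_x1_0
flops = 2 * macs           # 1 MAC = 1 multiply + 1 add = 2 FLOPs
print(f"{flops / 1e6:.1f} MFLOPs")  # -> 189.3 MFLOPs
```

This is why the table's ~179 MFLOPs figure and thop's 94.6M MACs describe roughly the same cost once the 2x convention is accounted for.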

We are now re-checking each model's performance.
Thank you very much for your report.
