Why can't I get the mAPs reported in the paper? #28

Open
SquirrelAtHW opened this issue Mar 10, 2023 · 1 comment

Comments

@SquirrelAtHW

I trained the model with the default settings (default data split, config: configs/PseCo/PseCo_faster_rcnn_r50_caffe_fpn_coco_180k.py) and got mAP 19.6 with 1% labeled data and mAP 31.4 with 10% labeled data, versus the 22.43 (1%) and 36.06 (10%) reported in the paper.

Also, at every iteration the log shows "unsup_precision: 0.0000, unsup_recall: 0.0000". Does this look suspicious?
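
In case it helps with debugging, here is a minimal, repo-independent sketch of how one could recompute precision/recall of the pseudo boxes against the ground-truth boxes for a single image, to check whether the logged zeros mean the pseudo labels are actually empty/wrong or the metric itself is miscomputed. The (x1, y1, x2, y2) box format and the 0.5 IoU threshold are my assumptions, not PseCo's actual logging code:

```python
import numpy as np

def box_iou(boxes1, boxes2):
    """Pairwise IoU between two sets of boxes in (x1, y1, x2, y2) format."""
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    lt = np.maximum(boxes1[:, None, :2], boxes2[None, :, :2])  # intersection top-left
    rb = np.minimum(boxes1[:, None, 2:], boxes2[None, :, 2:])  # intersection bottom-right
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area1[:, None] + area2[None, :] - inter + 1e-6)

def pseudo_label_pr(pseudo_boxes, gt_boxes, iou_thr=0.5):
    """Precision/recall of pseudo boxes vs. ground truth at a given IoU threshold."""
    if len(pseudo_boxes) == 0 or len(gt_boxes) == 0:
        return 0.0, 0.0
    ious = box_iou(pseudo_boxes, gt_boxes)
    tp = (ious.max(axis=1) >= iou_thr).sum()            # pseudo boxes matched to some GT
    covered = (ious.max(axis=0) >= iou_thr).sum()       # GT boxes covered by some pseudo box
    return tp / len(pseudo_boxes), covered / len(gt_boxes)

# Toy example: two pseudo boxes, one GT box
pseudo = np.array([[10, 10, 50, 50], [100, 100, 140, 140]], dtype=float)
gt = np.array([[12, 12, 52, 52]], dtype=float)
print(pseudo_label_pr(pseudo, gt))  # (0.5, 1.0)
```

If this gives non-zero numbers on the boxes the teacher actually produces, the problem is more likely in the logging/metric hook than in the pseudo labels themselves.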

What could the problem be? Here are my guesses:

  1. An incompatible environment, which might break some computations.
  2. Non-optimal configs. If so, could someone share a working configuration?
@miaowumonsyer

Me too: at every iteration I keep getting "unsup_precision: 0.0000, unsup_recall: 0.0000".
