About detection performance #23

Open
JunqiaoLi opened this issue Aug 7, 2023 · 4 comments

@JunqiaoLi

Hi, sorry to bother you again since I have met another problem.
I use the 'forward_track' function to get the results and then evaluate the detection performance; however, the mAP is poor (around 0.0232).
When I use the same model but call 'forward_test' to get the results and evaluate the detection performance, the mAP is about 0.3191. (However, forward_test does not output anything related to track_ids.)

Have you come across this before? Is this expected?

@ziqipang
Contributor

ziqipang commented Aug 7, 2023

@JunqiaoLi Hi, I haven't come across this before. Perhaps you could run a sanity check to see where these two functions differ? From what I remember, they should perform the same on a short video clip (e.g., 3 frames) under the multi-frame detection setup (multi-frame detection as in here).
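
To illustrate the kind of sanity check I mean, here is a minimal sketch (not code from this repo). It assumes you have dumped per-frame boxes and scores from both forward paths into two pickle files; the file names and dict layout below are hypothetical.

```python
# Minimal sanity-check sketch, assuming per-frame results from forward_test and
# forward_track were dumped into pickle files (names and layout are hypothetical).
import pickle
import numpy as np

with open("boxes_forward_test.pkl", "rb") as f:
    det_frames = pickle.load(f)   # list over frames: {"boxes": (N, 7) array, "scores": (N,) array}
with open("boxes_forward_track.pkl", "rb") as f:
    trk_frames = pickle.load(f)

for t, (det, trk) in enumerate(zip(det_frames, trk_frames)):
    # Sort by score so the comparison does not depend on output ordering.
    d = det["boxes"][np.argsort(-det["scores"])]
    k = trk["boxes"][np.argsort(-trk["scores"])]
    n = min(len(d), len(k))
    diff = float(np.abs(d[:n] - k[:n]).max()) if n else 0.0
    print(f"frame {t}: {len(d)} vs {len(k)} boxes, max box-parameter diff {diff:.4f}")
```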

@JunqiaoLi
Author

JunqiaoLi commented Aug 9, 2023

Hi, let me describe my operation in detail:

  1. The model that I use is your f3_fullres_all model.
  2. I use test.py with forward_test to evaluate the model on 8 GPUs. The mAP is 0.42181965127297494 (the same performance as in your log).
  3. I comment out the original forward_test, rename forward_track to forward_test, and still use test.py to evaluate the model, but with only 1 GPU. The mAP is 0.055639788009346346.

From my understanding, forward_test treats the model as a pure detection model, which means it calls generate_empty_instance at every frame.
As for forward_track, it only calls generate_empty_instance at the first frame, then keeps some track instances and passes them to the next frame.
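
In other words, my understanding of the two code paths is roughly the following (just pseudocode to summarize my reading, with stand-in method names, not the actual repo code):

```python
# Pseudocode paraphrase of the two inference paths as I understand them;
# generate_empty_instance and the detect/track calls are stand-ins, not real signatures.

def forward_test_like(model, frames):
    """Pure detection: every frame starts from freshly generated empty instances."""
    results = []
    for frame in frames:
        instances = model.generate_empty_instance()   # new queries for every frame
        results.append(model.detect(frame, instances))
    return results

def forward_track_like(model, frames):
    """Tracking: empty instances are generated once, then surviving tracks are carried over."""
    instances = model.generate_empty_instance()       # only at the first frame
    results = []
    for frame in frames:
        frame_result, instances = model.track(frame, instances)  # keep active tracks
        results.append(frame_result)
    return results
```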

Is there any error in the steps I described above? And could you please tell me how you checked that 'they should perform the same on a short video clip'?

@ziqipang
Contributor

ziqipang commented Aug 9, 2023

@JunqiaoLi I see. If you are feeding the tracking inference path into the detection evaluation, the phenomenon described above makes more sense now. (Actually, I don't recommend this. Please check out the reasons here.)

OK, now for potential solutions. There are a handful of differences between forward_test and forward_track. I cannot remember everything after several months, but here are two examples: (1) link — test_tracking filters out the categories that are not part of the tracking evaluation (see the sketch below); (2) link — a more complicated update of the active tracks is used for tracking. Aligning these behaviors is probably the key to getting good detection results.
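
To make difference (1) concrete, here is a rough sketch. The class lists are the standard nuScenes ones; the function and variable names are just illustrative, not the repo's code:

```python
# Sketch of difference (1): the tracking path only keeps the classes evaluated by the
# nuScenes tracking benchmark, so the three detection-only classes never show up.
TRACKING_CLASSES = {"bicycle", "bus", "car", "motorcycle", "pedestrian", "trailer", "truck"}
DETECTION_ONLY_CLASSES = {"barrier", "construction_vehicle", "traffic_cone"}

def filter_for_tracking(detections):
    """Drop boxes whose class is not part of the tracking evaluation."""
    return [d for d in detections if d["detection_name"] in TRACKING_CLASSES]
```

Since the detection mAP averages over all ten classes, the three missing classes alone contribute an AP of 0 each, which already drags the mean down noticeably.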

Furthermore, how about getting the bounding boxes from the tracking results via tools/test_track.py and then evaluating the mAP? You might need to align the format with the detection one, though.
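
If you go this way, the conversion is mostly renaming fields, assuming nuScenes-style submission JSONs (a rough sketch; the actual output format of tools/test_track.py may differ):

```python
# Rough sketch of converting a nuScenes tracking submission into a detection submission
# so the standard mAP evaluation can run on it; adjust if test_track.py writes a
# different format.
import json

def tracking_json_to_detection_json(src_path, dst_path):
    with open(src_path) as f:
        sub = json.load(f)
    det_results = {}
    for sample_token, boxes in sub["results"].items():
        det_results[sample_token] = [
            {
                "sample_token": sample_token,
                "translation": b["translation"],
                "size": b["size"],
                "rotation": b["rotation"],
                "velocity": b["velocity"],
                "detection_name": b["tracking_name"],   # rename tracking fields
                "detection_score": b["tracking_score"],
                "attribute_name": "",                   # tracking results carry no attributes
            }
            for b in boxes
        ]
    with open(dst_path, "w") as f:
        json.dump({"meta": sub["meta"], "results": det_results}, f)
```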

@ziqipang
Contributor

@JunqiaoLi It seems this issue is no longer active. Thanks for the discussion! Would you mind closing it?
