Replies: 7 comments
-
Yes
It depends on your task. If the object moves very fast, then increasing the search region is a good idea.
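A back-of-the-envelope way to see why: the search crop must be large enough to contain the target's per-frame displacement. This is just illustrative arithmetic, not code from the project; the function name is made up.

```python
# Sketch: roughly how much per-frame motion (in pixels of the crop) a
# search region can absorb before the target leaves it entirely.
# Purely illustrative; not part of the tracker's API.

def max_trackable_shift(search_size, target_size):
    """Largest center displacement that keeps the target inside the crop."""
    return (search_size - target_size) / 2

# A 255 px search region around a 64 px-wide target tolerates ~95 px
# of motion per frame:
print(max_trackable_shift(255, 64))  # -> 95.5
```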
-
Thank you for your reply. I find that if I increase the search region, the anchor count also increases. I tried 255, 300, 400, and 500 as the search region size, and the performance dropped accordingly. I drew several best bboxes and also superimposed the Hanning window for a better view; it does confuse me. As we know, when a target moves along the same horizontal line, or more generally undergoes in-plane movement, it may move fast while its size barely changes. What can I do in such cases? I am a beginner in tracking, so please correct me if I am wrong. Looking forward to your reply. Thank you again for your kindness. @zyc-ai
-
This is expected; I highly recommend you check out how the anchors are generated.
If it were me, I would increase the size penalty while decreasing the movement penalty. And in this particular case of basketball, I would also change the search region to a rectangle so that it won't include the audience. No worries, glad to be helpful.
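The size/movement trade-off above is usually exposed through two knobs in SiamRPN-style trackers: a scale/ratio penalty on the score and a cosine-window blend that penalizes movement away from the previous center. A hedged sketch of that step, with parameter names (`penalty_k`-style size penalty, `window_influence`) mirroring the usual config options rather than any exact source file:

```python
import numpy as np

# Hedged sketch of the window/penalty step used by SiamRPN-style
# trackers. Lowering window_influence relaxes the movement (center)
# penalty; size_penalty down-weights implausible scale/ratio changes.
# Shapes assume 5 anchors on a 25x25 score map.

def apply_window(score, size_penalty, window, window_influence=0.40):
    """Blend penalized classification scores with a Hanning window."""
    pscore = score * size_penalty
    return pscore * (1 - window_influence) + window * window_influence

score = np.random.rand(5 * 25 * 25)
window = np.tile(np.outer(np.hanning(25), np.hanning(25)).flatten(), 5)
pscore = apply_window(score, size_penalty=1.0, window=window)
best = int(np.argmax(pscore))  # index of the winning anchor
```

Decreasing `window_influence` lets a fast in-plane mover win the argmax even when it lands far from the window's center.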
-
Thank you for your reply! So the anchor count increases as the output size is enlarged with a larger instance size, am I right? Please correct me if I am wrong. I also wonder whether the change in anchor count during inference is itself a factor that harms performance? (That is, the closer the instance size is to the one used in training, the less the anchor count changes and the higher the proposal quality.)
Thank you again for your advice on these cases. I still have a question: for an unseen video, how can we detect such cases and apply the corresponding measures? I tried to retrain SiamRPN++ and was sorry to find that the confidence scores are not always correlated with performance (i.e., the IoU between the predicted bbox and the ground-truth bbox), even though I already set cfg.DATASET.NEG = 0.2. Looking forward to your reply. Thank you again for your kindness! @zyc-ai
-
Anchors are generated on the feature map. Hence, no matter what you do, they are fixed (that's why they are called anchors). I strongly encourage you to check out how anchors are generated.
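To make "generated on the feature map" concrete, here is a minimal sketch of SiamRPN-style anchor generation: one fixed set of (w, h) shapes per aspect ratio, replicated over every cell of the score map. The stride/scale constants are assumptions in the spirit of the usual configs, not copied from the repository:

```python
import numpy as np

# Minimal sketch of anchor generation for a SiamRPN-style head.
# 5 aspect ratios x one scale = 5 anchors per feature-map cell; the
# shapes are fixed, only the grid they are tiled over changes with
# the score-map size. Constants are illustrative assumptions.

def generate_anchors(score_size, stride=8, ratios=(0.33, 0.5, 1, 2, 3), scale=8):
    base = stride * scale                    # base anchor side in pixels
    shapes = []
    for r in ratios:
        w = base / np.sqrt(r)                # wider anchor for small ratio
        h = base * np.sqrt(r)
        shapes.append((w, h))
    total = len(ratios) * score_size * score_size
    return np.array(shapes), total

shapes, total = generate_anchors(score_size=25)
print(total)  # 5 anchors x 25 x 25 cells = 3125
```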
I highly doubt it, since there is an NMS step that should eliminate all redundant anchors, and if I recall correctly, only the top ~300 anchors are considered as proposals during testing.
If you have absolutely no idea what scenario you are facing, I'd recommend doing nothing. Of course, if you have time, you may try adding momentum and/or other tricks, but setting a hard constraint is usually not ideal for unseen cases.
This is one thing you shouldn't do: the weakness of SiamRPN is its ability to distinguish between background and foreground, and more negative samples are generally helpful.
I'm a bit confused here.
-
Thank you so much for your patience! After your reply I thought it over again; sorry for my late reply. ^-^
Here, by "anchors" I mean the total number of anchors in the response map. As the search region is enlarged, the output response map is enlarged given a fixed template size, and since the number of anchors at each location of the response map is fixed at 5, the total number of anchors increases by 5 × (new response-map area − original response-map area). Please correct me if I am still missing something.
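This arithmetic can be checked directly. In pysot-style configs the score-map side is roughly `(instance_size - exemplar_size) // stride + 1` plus a base offset (8 for SiamRPN++); treat these exact constants as assumptions about that codebase:

```python
# Rough check of the anchor-count claim above. The score-size formula
# mirrors a pysot-style computation (BASE_SIZE = 8 for SiamRPN++);
# the constants are assumptions, not verified against the repo.

def total_anchors(instance_size, exemplar_size=127, stride=8, base=8, k=5):
    score_size = (instance_size - exemplar_size) // stride + 1 + base
    return k * score_size * score_size

for inst in (255, 303, 399, 511):
    print(inst, total_anchors(inst))
# 255 gives a 25x25 map, i.e. 5 * 25 * 25 = 3125 anchors; each extra
# score-map cell adds 5 more, exactly as described in the comment.
```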
Hmm, I double-checked the code and couldn't find the NMS operation in the latest code. Maybe some improvements haven't been released yet? If so, looking forward to your updates. ^^
Here, by anchor numbers I still mean the total number in a response map, which is 5 × (response-map size)² in this code. Since SiamRPN++ mitigates the effect of padding by adding a random shift during training, it isn't actually a fully convolutional network, so I wonder whether enlarging the search region would bring some side effects.
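For readers unfamiliar with the random-shift trick mentioned here: during training the target is placed at a random offset inside the search crop so the network cannot learn a center bias induced by zero-padding. A hedged sketch (the shift range and names are illustrative, not the project's actual values):

```python
import random

# Illustrative sketch of the spatially-aware sampling (random shift)
# SiamRPN++ uses in training: the target center is jittered up to
# max_shift pixels from the crop center. Constants are assumptions.

def random_center(search_size=255, max_shift=64):
    cx = search_size // 2 + random.randint(-max_shift, max_shift)
    cy = search_size // 2 + random.randint(-max_shift, max_shift)
    return cx, cy

random.seed(0)
print(random_center())  # a target center somewhere off-center in the crop
```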
As for the "momentum" here, do you mean momentum in the learning rate during inference? If so, hmm, we can't get ground-truth information during inference, so we get no feedback signal. I searched around and still couldn't find a thorough interpretation of this trick at inference time. Or maybe you are referring to some online updating tricks?
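One common reading of "momentum" at inference time needs no ground truth at all: exponentially smooth the predicted box across frames so a single bad response map cannot teleport the estimate. A sketch under that assumption (the blend factor and names are illustrative):

```python
# One interpretation of inference-time "momentum": an exponential
# moving average over predicted boxes. No ground truth is needed;
# the tracker just trusts its own history. Blend factor is illustrative.

def smooth_box(prev_box, new_box, momentum=0.7):
    """prev_box/new_box: (cx, cy, w, h). Higher momentum = more inertia."""
    return tuple(momentum * p + (1 - momentum) * n
                 for p, n in zip(prev_box, new_box))

box = (100.0, 100.0, 40.0, 40.0)
box = smooth_box(box, (130.0, 100.0, 40.0, 40.0))  # sudden 30 px jump
print(box)  # the jump is damped to roughly a 9 px move
```

Note the flip side the reply warns about: this is exactly the kind of hard prior that hurts when the motion model is wrong for an unseen video.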
I set cfg.DATASET.NEG = 0.2 according to the config.yaml files in SiamRPN++. I compared several config.yaml files and found their cfg.DATASET.NEG values are all 0.2. I tried 0.3 in the same experiment and saw only trivial improvement. Would you please share your latest configuration for it?
Yes, I have met some cases of this kind. Could you please give any advice? Maybe improving the discriminative ability of SiamRPN++ is also a solution? [1] New discovery 1: long-term lost search size. [2] New discovery 2. I am a beginner in tracking, so please correct me if I am wrong. Looking forward to your reply. Thank you again for your kindness. @zyc-ai
-
-
Thank you for sharing. I wonder whether there is a search-region update strategy in this project. I tried changing the instance size and output size to enlarge the search region, but the result became even worse. It seems that a large search region implies the target becomes smaller? What is the correct implementation of the strategy? Please correct me if I am wrong. Looking forward to your reply.
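For context on why the target can appear "smaller": in pysot-style trackers the template crop size comes from the target plus context padding, and the search crop scales it by `instance_size / exemplar_size`; both crops are then resized to fixed network inputs. Enlarging the crop while keeping the resize target fixed does shrink the target in network-input pixels. A sketch of that arithmetic (formula in the spirit of the usual implementation, not verified line-by-line):

```python
import math

# Sketch of the crop-size arithmetic in SiamRPN-style trackers:
# s_z is the template crop side (target + context padding) and s_x the
# search crop side, scaled by instance_size / exemplar_size. Because
# both are resized to fixed inputs, growing instance_size grows the
# crop, and the target shrinks relative to the (fixed) input resolution.

def crop_sizes(w, h, exemplar_size=127, instance_size=255):
    context = (w + h) / 2                       # context padding amount
    s_z = math.sqrt((w + context) * (h + context))  # template crop, px
    s_x = s_z * instance_size / exemplar_size       # search crop, px
    return s_z, s_x

s_z, s_x = crop_sizes(60, 40)
print(round(s_z, 1), round(s_x, 1))
```

So simply bumping `instance_size` changes both the crop and the anchor grid at once, which is consistent with the degradation reported above.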