The results in your paper #71

Open
PuJanhan opened this issue Feb 13, 2021 · 9 comments

@PuJanhan commented Feb 13, 2021

Hi, I modified the model and retrained it with python3 train.py configs/car_auto_T3_train_train_config configs/car_auto_T3_train_config.
I want to compare against all the results listed in the table in the paper to evaluate the revised model, so I would like to know what I should do to reproduce the numbers in your paper:
Car:
Easy: 88.33  Moderate: 79.47  Hard: 72.29
Pedestrian:
Easy: 51.92  Moderate: 43.77  Hard: 40.14
Cyclist:
Easy: 78.60  Moderate: 63.48  Hard: 57.08

I ask because I used your pretrained model with run.py checkpoints/car_auto_T3_train/ --dataset_root_dir DATASET_ROOT_DIR --output_dir DIR_TO_SAVE_RESULTS, then ran the kitti_native_evaluation offline evaluation and got the following:
pujianhan@pujianhan-virtual-machine:~/Point-GNN$ ./evaluate_object_3d_offline /home/pujianhan/Point-GNN/DATASET_ROOT_DIR/labels/training/label_2/ /home/pujianhan/Point-GNN/DIR_TO_SAVE_RESULTS
bash: ./evaluate_object_3d_offline: No such file or directory
pujianhan@pujianhan-virtual-machine:~/Point-GNN$ cd kitti_native_evaluation/
pujianhan@pujianhan-virtual-machine:~/Point-GNN/kitti_native_evaluation$ ./evaluate_object_3d_offline /home/pujianhan/Point-GNN/DATASET_ROOT_DIR/labels/training/label_2/ /home/pujianhan/Point-GNN/DIR_TO_SAVE_RESULTS
done.
car_detection_AP : 96.740417 93.441170 90.850136
car_orientation_AOS : 44.610443 41.549076 41.066380
car_detection_BEV_AP : 93.198029 89.592430 86.979401
car_orientation_BEV_AHS : 43.201939 40.407784 38.238274
car_detection_3D_AP : 90.850861 82.294922 77.717575
car_orientation_3D_AHS : 42.087048 37.282787 34.374344

The results are not the same as in the paper; they are better. What do car_detection_AP and car_detection_3D_AP represent, and which of them corresponds to the results in your paper?

@WeijingShi (Owner) commented

Hi @PuJanhan, could you confirm that you are using the @f2f7005 commit of kitti_native_evaluation? KITTI changed the number of recall points from 11 to 40. All the ablation studies use the 11-recall-point version. If you used the same kitti_native_evaluation commit and still get better results, it means your model/training is more accurate. It would be very helpful if you could share your methods.

car_detection_AP: 2D detection
car_detection_3D_AP: 3D detection
car_detection_BEV_AP: 3D detection in the bird's eye view.
Please check Kitti for the details.
We focus on car_detection_3D_AP and car_detection_BEV_AP.
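
As a rough illustration of the 11-point vs. 40-point difference mentioned above, here is a minimal Python sketch of interpolated AP (not the official C++ devkit code; it assumes you already have matched precision/recall arrays):

import numpy as np

def interpolated_ap(recall, precision, num_points=11):
    # KITTI's older metric samples 11 recall thresholds (0.0, 0.1, ..., 1.0);
    # the updated metric samples 40 thresholds (1/40, 2/40, ..., 1.0).
    if num_points == 11:
        thresholds = np.linspace(0.0, 1.0, 11)
    else:
        thresholds = np.linspace(1.0 / num_points, 1.0, num_points)
    ap = 0.0
    for t in thresholds:
        mask = recall >= t
        # interpolated precision: best precision achievable at recall >= t
        p = precision[mask].max() if mask.any() else 0.0
        ap += p / len(thresholds)
    return ap

# e.g. compare interpolated_ap(recall, precision, 11) with interpolated_ap(recall, precision, 40)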

Hope it helps.
Thanks,

@PuJanhan
Copy link
Author

Yes, thanks for the reminder. I found that the kitti_native_evaluation checkout I used is the updated one with 40 recall points, so the validation results are better.

But I am confused about how to get results on the test set (not the validation set), since I do not have the test labels. When I use python run.py checkpoints/car_auto_T3_trainval/ --test --dataset_root_dir DATASET_ROOT_DIR --output_dir DIR_TO_SAVE_RESULTS,
kitti_native_evaluation cannot evaluate the test results and reports errors like the following:
ERROR: Couldn't read: 007482.txt of ground truth. Please write me an email!

Should I submit the folder named data to KITTI directly? If so, what should I do?

@WeijingShi (Owner) commented

Hi @PuJanhan, KITTI does not release the test labels. To get test-set evaluation, you need to submit the results to the KITTI website.

Some tips for the submission:

  1. The test results are expected to be your final results, so KITTI limits the frequency of submissions. Make sure you get satisfying results on the val set first.
  2. When it's time to submit, zip all the files inside the data folder (no need to zip the folder itself, just the files). Make sure every file is included (7518 files for the test set). An incorrect submission may still count as a submission, which can hit the submission limit.
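
To illustrate point 2, here is a minimal Python sketch of packaging a submission; the results path is an assumption based on the commands in this thread, so adjust it to wherever run.py wrote your files:

import os
import zipfile

results_dir = "DIR_TO_SAVE_RESULTS/data"   # assumed location of the per-frame result .txt files

files = sorted(f for f in os.listdir(results_dir) if f.endswith(".txt"))
# a complete KITTI test-set submission should contain 7518 files
assert len(files) == 7518, "expected 7518 result files, found %d" % len(files)

with zipfile.ZipFile("kitti_test_submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for name in files:
        # arcname=name keeps the files at the root of the archive (no data/ folder inside)
        zf.write(os.path.join(results_dir, name), arcname=name)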

Sorry for the late reply. Good luck!

@typhoonlee commented

May I ask why there are only car results (no pedestrian or cyclist results) in the KITTI evaluation output, while the paper still reports 3D mAP for those two classes? How do you get them?

@WeijingShi (Owner) commented

@typhoonlee, there is a ped_cyl_auto_T3_trainval checkpoint trained for pedestrian and cyclist detection using the train+val KITTI data.
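For example, assuming the same run.py interface shown earlier in this thread, it can be evaluated with python3 run.py checkpoints/ped_cyl_auto_T3_trainval/ --dataset_root_dir DATASET_ROOT_DIR --output_dir DIR_TO_SAVE_RESULTS (add --test for the test set).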

@PuJanhan (Author) commented Mar 16, 2021

I noticed that you provide another set of split files, containing train_car.txt, trainval_car.txt, trian_pedestrain_cyclist.txt, and trainval_pedestrain_cyclist.txt. May I ask what the difference is between them and the files in https://xiaozhichen.github.io/files/mv3d/imagesets.tar.gz? The number of samples in files with the same name is not equal.

If trainval_pedestrain_cyclist = train + val and the train part is trian_pedestrain_cyclist.txt, then where is val.txt?
I modified the model and want to train it on the pedestrian/cyclist data and evaluate it with kitti_native_evaluation. Which two split files should I use for training and for evaluation?

@WeijingShi (Owner) commented

Hi @PuJanhan,

The 3DOP split contains train, val, trainval, and test, where trainval = train + val.

What we provide:
train_car: a subset of the train split, where samples without cars are filtered out.
trian_pedestrain_cyclist: a subset of the train split, where samples without pedestrians or cyclists are filtered out.

trainval_car: a subset of the trainval split, where samples without cars are filtered out.
trainval_pedestrain_cyclist: a subset of the trainval split, where samples without pedestrians or cyclists are filtered out.

Those split files are for the training script. Empty samples sometimes cause training stability problems, and removing them helps.
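
As a rough sketch of how such a filtered split could be produced (hypothetical code, not the script we used; the label path follows the DATASET_ROOT_DIR layout shown earlier in this thread):

import os

dataset_root_dir = "DATASET_ROOT_DIR"  # assumed dataset root
label_dir = os.path.join(dataset_root_dir, "labels/training/label_2")

with open("train.txt") as f:           # 3DOP train split
    frame_ids = [line.strip() for line in f if line.strip()]

with open("train_car.txt", "w") as out:
    for fid in frame_ids:
        with open(os.path.join(label_dir, fid + ".txt")) as lf:
            # the first token of each KITTI label line is the object class
            if any(line.split()[0] == "Car" for line in lf if line.strip()):
                out.write(fid + "\n")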

For evaluation, you can always use the val split, no matter what type of object you are working on.
Hope it helps,

@curiousboy20 commented Sep 30, 2021

Hi @WeijingShi, thank you for sharing this. For the results in your paper, which one did you use? I ran the checkpoint with the ped_cyl_auto_T3_trainval config and model but got a different result. The recall version of kitti_native_evaluation is 11, just as you mentioned before. I got 85.66 83.29 80.89 for pedestrian_detection_3D_AP instead of the 51.92, 43.77, and 40.14 in your paper.

Can you give me the details on how to obtain results similar to those in your paper?

@WeijingShi (Owner) commented

Hi @curiousboy20,

Sorry for the late reply. The numbers in the paper that you referred to are from the KITTI test set. Did you use the val set for your evaluation? The ped_cyl_auto_T3_trainval checkpoint is trained on the train+val set, which may be the reason for your much better scores.

BTW, to use the test set for evaluation, just enable the --test flag:

python3 run.py checkpoints/car_auto_T3_trainval/ --test --dataset_root_dir DATASET_ROOT_DIR --output_dir DIR_TO_SAVE_RESULTS

The results have to be zipped and uploaded to the KITTI website for scoring, though.

Hope it helps,
Weijing
