Reproduction of metrics for the ForInstance dataset in the paper results #10
Comments
Hi, did you manage to download the FOR-instance data from the following link? Regarding the parameters, they are detailed in our paper. I will check again to see whether the default parameters provided in the GitHub repository are unchanged; if you have time, it would be great if you could also verify this. For the third issue, I have updated the file "torch_points3d/metrics/panoptic_tracker_pointgroup_treeins.py"; it should now generate the files needed for evaluation_stats_FOR.py. Please give it a try and see if it helps. Additionally, you might find some updates in this new Git link (related to our new paper) for your reference: https://github.com/bxiang233/ForAINet. Hope this helps!
The above issue has been resolved. Although the mIoU and F1 in train.log and eval.log did not reach the values reported in the paper, after successfully running evaluation_stats_FOR.py the overall metrics did reach the level reported in the paper. Thank you for your timely response. Additionally, regarding your recent modification to torch_points3d/metrics/panoptic_tracker_pointgroup_treeins.py, I believe there is a bug on line 665, which should be:
Hi, great! Thank you for pointing out this bug. I will fix it now.
After I successfully ran evaluation_stats_FOR.py and got proper evaluation results, I did not look into the 'NoneType' problem any further. I'm sorry there is nothing I can do to help you with it.
Hi, someone ran into the same problem, and we solved it by setting fold to:
['/path/to/project/PanopticSegForLargeScalePointCloud/data_outpointsrm/treeinsfused/raw/CULS/CULS_plot_2_annotated_test.ply', '/path/........
and changing the check to:
if self.forest_regions == [] or self.forest_regions == None:  # @treeins: get all data file names in folder self.raw_dir
Hope it helps!
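In case a concrete illustration helps, below is a minimal, hypothetical sketch of the kind of fallback described above. The function name list_raw_files, its arguments, and the assumed folder layout (one sub-folder per region under raw_dir containing .ply files) are my own illustration, not the repository's exact code; only the guard itself mirrors the condition quoted above.

```python
import os
import os.path as osp

def list_raw_files(raw_dir, forest_regions=None):
    """Return region-relative .ply file names, falling back to every region
    folder under raw_dir when forest_regions is None or an empty list."""
    if forest_regions == [] or forest_regions is None:
        # Same idea as the quoted guard: scan all region folders inside raw_dir
        forest_regions = [
            d for d in os.listdir(raw_dir) if osp.isdir(osp.join(raw_dir, d))
        ]
    return [
        osp.join(region, f)
        for region in forest_regions
        for f in os.listdir(osp.join(raw_dir, region))
        if f.endswith(".ply")
    ]
```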
Thanks for your great work! But I have some questions that confuse me.
First, I directly used the command you mentioned in the README: python train.py task=panoptic data=panoptic/treeins_rad8 models=panoptic/area4_ablation_3heads_5 model_name=PointGroup-PAPER training=treeins job_name=treeins_my_first_run. However, in train.log the test mIoU at epoch 149 is only 90.44 and the test F1 score is 0.72, while in eval.log the test mIoU is only 90.41 and the test F1 score is 0.74.
I used the default parameters provided in the GitHub repository for the run, and the data is also the same, so I am curious how the parameters were set in your paper to achieve an mIoU of 97.2% on the ForInstance dataset?
Second, when I tried to run eval.py with the PointGroup-PAPER.pt checkpoint you provided, the following error occurred:
File "eval.py", line 13, in main
trainer = Trainer(cfg)
File "/XXX/torch_points3d/trainer.py", line 48, in init
self._initialize_trainer()
File "/XXX/torch_points3d/trainer.py", line 90, in _initialize_trainer
self._dataset: BaseDataset = instantiate_dataset(self._checkpoint.data_config)
File "/XXX/torch_points3d/datasets/dataset_factory.py", line 46, in instantiate_dataset
dataset = dataset_cls(dataset_config)
File "/XXX/torch_points3d/datasets/panoptic/treeins.py", line 629, in init
self.test_dataset = dataset_cls(
File "/XXX/torch_points3d/datasets/segmentation/treeins.py", line 495, in init
super().init(root, grid_size, *args, **kwargs)
File "XXX/torch_points3d/datasets/segmentation/treeins.py", line 189, in init
super(TreeinsOriginalFused, self).init(root, transform, pre_transform, pre_filter)
File "XXX/lib/python3.8/site-packages/torch_geometric/data/in_memory_dataset.py", line 60, in init
super().init(root, transform, pre_transform, pre_filter)
File "XXX/lib/python3.8/site-packages/torch_geometric/data/dataset.py", line 83, in init
self._download()
File "XXX/lib/python3.8/site-packages/torch_geometric/data/dataset.py", line 136, in _download
if files_exist(self.raw_paths): # pragma: no cover
File "XXX/lib/python3.8/site-packages/torch_geometric/data/dataset.py", line 125, in raw_paths
files = to_list(self.raw_file_names)
File "XXX/torch_points3d/datasets/segmentation/treeins.py", line 233, in raw_file_names
for region in self.forest_regions:
TypeError: 'NoneType' object is not iterable
But as I mentioned above, my data is the data you preprocessed previously, and the settings are the same as those mentioned in your GitHub repository. So could this issue be due to some difference in the dataset?
Third, after running eval.py, the result files only include Instance_results_withColor_X.ply and vote1regular.ply_X.ply. However, these files cannot be used as inputs for evaluation_stats_FOR.py because they lack the 'pre' and 'gt' attributes. So I am wondering whether some processing steps are needed between the eval.py results and evaluation_stats_FOR.py. If so, could you please let me know what they are?
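As a side note for anyone hitting the same mismatch, here is a small sketch for checking which per-point fields a result PLY actually stores. It assumes the plyfile package is installed and that evaluation_stats_FOR.py looks for fields named 'gt' and 'pre' as described above; the example file name is hypothetical.

```python
from plyfile import PlyData

def list_point_attributes(path):
    """Print the per-vertex fields stored in a PLY file, e.g. to check
    whether the 'gt' and 'pre' fields mentioned above are present."""
    ply = PlyData.read(path)
    names = ply["vertex"].data.dtype.names
    print(f"{path}: {names}")
    return names

# Hypothetical usage with one of the eval.py output files:
# list_point_attributes("Instance_results_withColor_0.ply")
```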
Looking forward to your reply.