Thanks very much for your inspiring work! We have encountered some problems reproducing the performance described in your paper. We used the code in 'eval_path.py' to obtain the metric values, but the results are not good. We would be very grateful if you could answer our questions and resolve our confusion.
We first describe how we obtained the performance. First, we ran the code in 'eval_path.py' but got the following error:
File "utils/eval_path.py", line 112, in main
diff_completion = DiffCompletion(ckpt_path) if diff else None
TypeError: __init__() missing 3 required positional arguments: 'refine_path', 'denoising_steps', and 'cond_weight'
It seems __init__() takes four arguments but only one is given. We tried to fix this by supplying the three missing arguments to __init__(). Secondly, the model inference has two outputs, one refined and one not refined, but the evaluation code uses only one variable to receive them. We therefore modified the code to unpack the outputs into two separate variables.
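For concreteness, the two changes we made look roughly like the sketch below. The constructor signature comes from the traceback above, but the argument values, the stub class body, and the method name complete_scan are our own assumptions, not the repository's actual code:

```python
class DiffCompletion:
    """Stub standing in for the repository's DiffCompletion class so this
    sketch is self-contained; the real class loads the diffusion and
    refinement networks from the given checkpoint paths."""
    def __init__(self, ckpt_path, refine_path, denoising_steps, cond_weight):
        self.ckpt_path = ckpt_path
        self.refine_path = refine_path
        self.denoising_steps = denoising_steps
        self.cond_weight = cond_weight

    def complete_scan(self, scan):
        # the real model returns two point clouds: (refined, unrefined)
        return scan, scan

# modification 1: pass all four required positional arguments
diff_completion = DiffCompletion(
    "diff.ckpt",     # ckpt_path (assumed value)
    "refine.ckpt",   # refine_path (assumed value)
    50,              # denoising_steps (assumed value)
    6.0,             # cond_weight (assumed value)
)

# modification 2: unpack both outputs instead of assigning to one variable
refined, unrefined = diff_completion.complete_scan([[0.0, 0.0, 0.0]])
```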
Our first question is: are these two modifications correct? Do they affect the final performance?
We can't reproduce the metric values from the paper using the above process. Specifically, we used the get_ground_truth function in eval_path.py (L125) to obtain the ground truth, but the generated result is as follows, which differs from the ground-truth point cloud shown in Fig. 5 of the paper. This seems to be why we fail to reproduce the performance. On SemanticKITTI, the Chamfer distance we reproduce is 0.8, while the original paper reports a CD of 0.434. Our second question is: could you please give some advice to help us reproduce the results?
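To rule out a metric-implementation mismatch, here is a minimal numpy sketch of the symmetric Chamfer distance we used for comparison. Note that the paper's exact formulation may differ (e.g. squared vs. unsquared distances, or averaging the two directional terms instead of summing them), so this is only a reference, not the repository's evaluation code:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3):
    mean nearest-neighbor distance from a to b plus mean from b to a."""
    # pairwise Euclidean distances, shape (N, M), via broadcasting
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# sanity checks: identical clouds give 0, a single offset point gives
# twice the point-to-point distance
pts = np.random.rand(100, 3)
print(chamfer_distance(pts, pts))  # 0.0
```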
Thanks in advance.
Hi, I guess these are due to the changes you made to the script. But you are right, we did not give enough information about running the eval_path.py script. I have updated the code so that the script is cleaner. Please git pull the latest version; then you can run the script with the following command: python3 utils/eval_path.py --path SCAN_SEQUENCE_PATH --diff DIFF_WEIGHTS --refine REFINE_NET_WEIGHTS
Below you can see the ground truth point cloud by running the command above: