
We have encountered some problems when reproducing the performance values #32

Closed · happyw1nd opened this issue Oct 28, 2024 · 3 comments

@happyw1nd

Thanks very much for your inspiring work! We have encountered some problems when reproducing the performance reported in your paper. We used the code in 'eval_path.py' to compute the performance values, but the results are not good. We would be very grateful if you could answer our questions and resolve our confusion.

First, we describe how we computed the performance. We ran the code in 'eval_path.py' but got the following error:

File "utils/eval_path.py", line 112, in main
    diff_completion = DiffCompletion(ckpt_path) if diff else None
TypeError: __init__() missing 3 required positional arguments: 'refine_path', 'denoising_steps', and 'cond_weight'

It seems __init__() takes four positional arguments but only one is given. We tried to fix this by supplying the three missing arguments to __init__(). Secondly, the model inference returns two outputs, one refined and one not refined, while the existing code uses only a single variable to receive them. We therefore modified the code to unpack the outputs into two variables (sketched below).
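For concreteness, a minimal sketch of our two modifications; the argument values we pass and the inference call `complete_scan` are illustrative assumptions, not necessarily what the code actually expects:

```python
# Hypothetical sketch of our local changes to utils/eval_path.py.
# The extra constructor arguments and the inference call are guesses
# based on the error message above, not the repository's actual API.
diff_completion = DiffCompletion(
    ckpt_path,        # diffusion checkpoint (as in the original call)
    refine_path,      # refinement network checkpoint (assumed)
    denoising_steps,  # number of denoising steps (assumed)
    cond_weight,      # conditioning weight (assumed)
) if diff else None

# The model returns two point clouds; we unpack both instead of
# assigning them to a single variable.
refined_scan, unrefined_scan = diff_completion.complete_scan(scan)
```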

Our first question is: are these two modifications correct? Do they affect the final performance?

We could not reproduce the metric values in the paper using the above process. Specifically, we used the get_ground_truth function in eval_path.py to obtain the ground truth at L125, but the result (shown below) differs from the ground-truth point cloud in Fig. 5 of the paper. This seems to be why we fail to reproduce the performance. On SemanticKITTI, the Chamfer distance (CD) we obtain is 0.8, whereas the paper reports 0.434. Our second question is: could you please give some advice to help us reproduce the results?

[Image: ground truth of seq08_000000 generated with the get_ground_truth function]
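As a sanity check on the metric itself, here is one common convention for the symmetric Chamfer distance between two point clouds. This is our assumption; the paper may normalize differently (for instance, averaging rather than summing the two directions changes the value by a factor of two):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer distance between (N, 3) and (M, 3) point clouds.

    Sums the mean nearest-neighbor distance in both directions; other
    conventions average the two terms or use squared distances.
    """
    d_pred_to_gt, _ = cKDTree(gt).query(pred)   # nearest gt point per pred point
    d_gt_to_pred, _ = cKDTree(pred).query(gt)   # nearest pred point per gt point
    return d_pred_to_gt.mean() + d_gt_to_pred.mean()
```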

Thanks in advance.

@nuneslu (Collaborator) commented Oct 28, 2024

Hi, I guess these issues are due to the changes you made to the script. But you are right, we did not give enough information about running the eval_path.py script. I have now updated the code so that the script is cleaner. Please git pull the latest version; then you can run the script with the following command:

```
python3 utils/eval_path.py --path SCAN_SEQUENCE_PATH --diff DIFF_WEIGHTS --refine REFINE_NET_WEIGHTS
```

Below you can see the ground truth point cloud obtained by running the command above:

[Image: ground truth point cloud]
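For readers adapting the script, a minimal sketch of how such a command line could be parsed; the flag names come from the command above, but the wiring and help strings are our assumptions, not the repository's actual implementation:

```python
# Hypothetical argparse wiring matching the command above; the actual
# eval_path.py may differ in structure and defaults.
import argparse

parser = argparse.ArgumentParser(description="Evaluate scene completion")
parser.add_argument("--path", required=True,
                    help="path to the scan sequence (SCAN_SEQUENCE_PATH)")
parser.add_argument("--diff", required=True,
                    help="diffusion model weights (DIFF_WEIGHTS)")
parser.add_argument("--refine", required=True,
                    help="refinement network weights (REFINE_NET_WEIGHTS)")
args = parser.parse_args()
```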

@nuneslu (Collaborator) commented Oct 28, 2024

Let me know whether it works. Also, the script will run the diffusion pipeline and compute the metrics after generating the complete scene.

@happyw1nd (Author)

Now everything works fine! Thanks again!
