
Unable to reproduce the results on ModelNet40 #4

Open
khoshsirat-udel opened this issue Aug 30, 2022 · 13 comments
@khoshsirat-udel

Hello,

I'm trying to reproduce the reported results on ModelNet40.
I have written a data loader for ModelNet40 and I'm training with all the implementation details from the paper (Appendix G).
The maximum accuracy I get is 92.8, which is well below the reported numbers.

Can you upload the code for ModelNet40?

@dhliubj

dhliubj commented Sep 1, 2022

I would also like to run this code on the ModelNet40 dataset, to make it easier to apply the model to my own dataset. @hancyran

@hancyran
Owner

Sorry for the late reply.

We have uploaded the implementation for the segmentation tasks. If you are interested in the RepSurf model on ModelNet, we plan to upload it soon.

Thank you for your interest in our work!

@khoshsirat-udel
Author

Any updates on the ModelNet implementation?

@Margaretya

Hello,

We are also trying to reproduce ModelNet40 now, but the data are all NaN after UmbrellaSurfaceConstructor; specifically, after the MLP in UmbrellaSurfaceConstructor. Would you mind sharing your data loader for ModelNet40? It would be very helpful. @khoshsirat-udel

@khoshsirat-udel
Author

@Margaretya, I have made a few modifications to my copy of the code and have not run it recently, but I hope it works. Make sure you are using the modelnet40_normal_resampled version of the dataset.

import os
import numpy as np
from torch.utils.data import Dataset


class ModelNet40DataLoader(Dataset):
    def __init__(self, root, split):
        assert split in ('train', 'test')
        # Map each shape name (e.g. 'airplane') to an integer class id.
        with open(os.path.join(root, 'modelnet40_shape_names.txt')) as f:
            cat = [line.rstrip() for line in f]
        classes = dict(zip(cat, range(len(cat))))

        # Each split file lists shape ids such as 'airplane_0001', one per line.
        self.data = []
        with open(os.path.join(root, f'modelnet40_{split}.txt')) as file:
            for line in file:
                shape_id = line.rstrip()
                class_name = '_'.join(shape_id.split('_')[0:-1])
                class_id = classes[class_name]
                file_path = os.path.join(root, class_name, shape_id) + '.txt'
                self.data.append([file_path, class_id])

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        file_path, class_id = self.data[index]
        # Each row is x,y,z,nx,ny,nz; transpose to channels-first (6, N).
        points = np.loadtxt(file_path, delimiter=',')
        return points.astype(np.float32).T, class_id
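One detail worth noting in the loader: the class name is recovered by dropping the final underscore-separated token of the shape id, which is what makes multi-word categories work. A quick check of that parsing:

```python
# The shape id ends in a numeric index; the class name is everything
# before it, so multi-word categories like 'flower_pot' parse correctly.
for shape_id in ['airplane_0001', 'flower_pot_0042', 'night_stand_0003']:
    class_name = '_'.join(shape_id.split('_')[0:-1])
    print(shape_id, '->', class_name)
# airplane_0001 -> airplane
# flower_pot_0042 -> flower_pot
# night_stand_0003 -> night_stand
```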

@Margaretya

@khoshsirat-udel Thank you so much!

@kabouzeid

Can someone confirm the score of "94.7" reported on ModelNet? Also, were the ModelNet scores reported with or without voting?

@jamekuma

jamekuma commented Jan 1, 2023

> But the data are all NaN after UmbrellaSurfaceConstructor; specifically, after the MLP in UmbrellaSurfaceConstructor. [...]

I also noticed this phenomenon. It happens because the drop-out augmentation for ModelNet40 duplicates points: when the network queries and groups the k-NN points, some groups contain many identical points. After computing the relative positions, the torch.cross operation returns a zero vector, and normalizing that zero vector produces NaN.
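To make the failure mode concrete, here is a minimal NumPy sketch of the same arithmetic (NumPy standing in for the torch ops; the actual model uses torch.cross):

```python
import numpy as np

# Two duplicated points: their relative position is the zero vector.
p = np.array([0.1, 0.2, 0.3])
rel = p - p                           # [0. 0. 0.]

# The cross product of the zero vector with any edge is again zero...
edge = np.array([0.4, 0.5, 0.6])
normal = np.cross(rel, edge)          # [0. 0. 0.]

# ...so normalizing divides 0 by 0 and every component becomes NaN.
with np.errstate(invalid='ignore'):
    unit = normal / np.linalg.norm(normal)
print(unit)                           # [nan nan nan]
```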

@jamekuma

jamekuma commented Jan 1, 2023

Thanks for your excellent work!
As I said above, the default drop-out augmentation setting leads to NaN results. When I remove the drop-out augmentation, training returns to normal. However, this leaves me unable to reproduce the ModelNet40 results.
Could the authors provide the correct augmentation code for ModelNet40? Or is there a better solution? Thanks a lot.
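For reference, the drop-out augmentation being discussed is commonly the PointNet++-style random_point_dropout, which overwrites dropped points with the first point and therefore creates exact duplicates. A sketch of the usual implementation (from memory of the common PointNet++ training code, not copied from this repo):

```python
import numpy as np

def random_point_dropout(pc, max_dropout_ratio=0.875):
    """pc: (N, C) point cloud. Replaces a random subset of points with
    the first point, which is what introduces duplicated coordinates."""
    dropout_ratio = np.random.random() * max_dropout_ratio
    drop_idx = np.where(np.random.random(pc.shape[0]) <= dropout_ratio)[0]
    if len(drop_idx) > 0:
        pc[drop_idx, :] = pc[0, :]  # duplicate the first point
    return pc

pc = np.arange(30, dtype=np.float32).reshape(10, 3)
out = random_point_dropout(pc.copy())
print(out.shape)  # (10, 3); some rows may now equal row 0
```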

@Margaretya
> I also noticed this phenomenon. This is because the drop-out augmentation of ModelNet40 leads to the repetition of points. [...]

Yes, we found it too. Basically, it cannot process any set that contains points with identical coordinates. Removing dropout in the data loader helps. Thank you for the information.
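If removing dropout hurts accuracy, an alternative worth trying might be to guard the normalization itself, clamping the norm before dividing so duplicated points yield a zero normal instead of NaN. A sketch only; safe_unit is a hypothetical helper, not part of the RepSurf code (a torch version would use torch.clamp_min analogously):

```python
import numpy as np

def safe_unit(v, eps=1e-8):
    """Normalize v along the last axis; return zeros instead of NaN
    when the norm is (near) zero, as happens for duplicated points."""
    n = np.linalg.norm(v, axis=-1, keepdims=True)
    return np.where(n > eps, v / np.maximum(n, eps), 0.0)

print(safe_unit(np.zeros(3)))                # [0. 0. 0.] rather than NaN
print(safe_unit(np.array([3.0, 0.0, 0.0])))  # [1. 0. 0.]
```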

@jamekuma

jamekuma commented Jan 5, 2023

> Yes, we found it too. Basically, it cannot process any set that contains points with identical coordinates. Removing dropout in the data loader helps. [...]

Thanks for the information. But when I remove the dropout augmentation, I cannot reproduce the ModelNet40 results, maybe because of overfitting. How did you solve this problem? Looking forward to your reply.

@Margaretya

> Thanks for the information. But when I remove the dropout augmentation, I cannot reproduce the ModelNet40 results, maybe because of overfitting. How did you solve this problem? [...]

Unfortunately, we cannot reproduce the results either. We suspect there are some tricks in the data loader settings, and we look forward to the authors' official code.

@said-ohamouddou

> Unfortunately, we cannot reproduce the results either. [...]

Can you provide the specific shape of the input tensor to use with the ModelNet40 dataset? Thanks!

`RuntimeError: Given groups=1, weight of size [10, 10, 1, 1], expected input[16, 9, 8, 1024] to have 10 channels, but got 9 channels instead`
