
Issue in data preprocessing #5

Open
InhwanBae opened this issue Jul 3, 2022 · 3 comments

Comments

@InhwanBae

Hi @Gutianpei

Thank you for your great work and for releasing the code! It was an exciting paper, so I wanted to talk with you offline in New Orleans.

While looking at your code, I noticed that your codebase uses an old version of the Trajectron++ code, which contained a major bug in data preprocessing.

```python
import numpy as np

def derivative_of(x, dt=1, radian=False):
    if radian:
        x = make_continuous_copy(x)
    # Not enough valid samples to take a derivative
    if x[~np.isnan(x)].shape[-1] < 2:
        return np.zeros_like(x)
    dx = np.full_like(x, np.nan)
    # Central differences over the valid entries, scaled by dt
    dx[~np.isnan(x)] = np.gradient(x[~np.isnan(x)], dt)
    return dx
```

This issue, first reported in October 2020, has still not been fixed in many Trajectron++-based variant models (e.g., SGNet, DisDis, BiTraP, Social-NCE). I think the authors should fix this issue and update the numbers in the arXiv paper for a fair comparison.
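For reference, here is a minimal sketch of what the corrected `derivative_of` computes: `np.gradient(x, dt)` takes central differences in the interior (one-sided at the boundaries) and divides by the sampling interval `dt`. The sampling interval and position values below are illustrative assumptions, not taken from the codebase.

```python
import numpy as np

dt = 0.4  # assumed frame interval in seconds, for illustration only
x = np.array([0.0, 0.4, 0.8, 1.2])  # hypothetical positions sampled every dt

# Central differences in the interior, one-sided at the ends, scaled by dt
v = np.gradient(x, dt)
print(v)  # constant velocity of 1.0 for this linear trajectory
```

If `dt` were dropped or wrong, every derived velocity and acceleration feature would be off by a constant factor, which is why a preprocessing bug here propagates into all downstream numbers.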

Thank you.

@InhwanBae InhwanBae changed the title Issues in data preprocessing Issue in data preprocessing Jul 3, 2022
@Gutianpei
Owner

Hello @InhwanBae

Thanks for your interest! I was in New Orleans presenting my paper; it's a pity we couldn't meet. Thanks for bringing this issue to our attention. I actually did not know Trajectron++ had such a problem, otherwise we would have used PECNet as our baseline, since MID is an encoder-agnostic method. I apologize for not looking into the code more carefully, and I agree with you. I'll run some experiments using the encoder with the fix applied and show the comparison. Results and code will be updated after I've finished all the experiments. Thanks again for your comment.

@InhwanBae
Author

Hi @Gutianpei, thank you for your prompt response.

I strongly agree that MID is an encoder-agnostic method. There is no doubt that, applied to a good baseline, it could achieve even greater synergy. However, I would like to mention that PECNet also has a problem: it uses a train-test split rather than the official train-val-test split for the ETH/UCY datasets (issue 25, raised by the STGCNN author). When this is corrected, performance degrades somewhat, as seen in the Table 1 PECNet-MC row of the NPSN paper.

@WangHonghui123

Hi @Gutianpei. I would like to know if you have updated the code and results. I tried using the same baseline and the modified code provided by @InhwanBae, but the results degraded. Could you explain why this happens? Thank you very much.

