ETH original dataset #50
Hello, I have the same question. I can't reproduce the *.pickle data from the original data (provided by SGAN) using the image2world function. Have you made any progress?
Hello, I'm sorry to say that I haven't made any progress on this issue, so I moved on to exploring other models. If you have any questions, please contact me by email.
Hello, I now know how to do the world-to-pixel transformation using the homography matrix, but I still don't know how Y-net filters the data, so I also moved on to other models.

```python
import numpy as np

def world2image(traj_w, H_inv):
    # Convert points from Euclidean to homogeneous coordinates: (x, y) -> (x, y, 1)
    traj_homog = np.hstack((traj_w, np.ones((traj_w.shape[0], 1)))).T
    # Map world coordinates into the camera frame
    traj_cam = np.matmul(H_inv, traj_homog)
    # Normalize by the third component to get pixel coordinates
    traj_uvz = np.transpose(traj_cam / traj_cam[2])
    return traj_uvz[:, :2]
```
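For reference, the opposite direction (pixel → world), which is what image2world would do, is just the same projection with the forward homography H instead of its inverse. This is a minimal sketch under the assumption that H maps homogeneous pixel coordinates to world coordinates, as in the homography files shipped with the ETH dataset:

```python
import numpy as np

def image2world(traj_px, H):
    # Convert pixel points to homogeneous coordinates: (u, v) -> (u, v, 1)
    traj_homog = np.hstack((traj_px, np.ones((traj_px.shape[0], 1)))).T
    # Apply the homography to map into world coordinates
    traj_world = np.matmul(H, traj_homog)
    # Normalize by the third component and drop it
    traj_world = np.transpose(traj_world / traj_world[2])
    return traj_world[:, :2]
```

Note that `image2world(world2image(traj, np.linalg.inv(H)), H)` should recover the original trajectory, which is a useful sanity check on the matrix you load.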
I would like to ask whether the UCY dataset also has a homography matrix for converting between world coordinates and pixel coordinates. Related work such as Y-net and NSP-SFM uses map information in pixel space, yet the final metrics (ADE/FDE) are reported in world coordinates. How do they achieve this conversion on UCY? I also noticed that the original UCY dataset seems to be in pixel coordinates, while most existing work uses world coordinates. How is this converted? Thank you very much.
Thank you very much for your work. I have a request I hope you will consider: could you share with me the raw ETH data that your processing script takes as input? Thank you very much!