
ETH original dataset #50

Open
12num opened this issue Jul 20, 2022 · 4 comments

Comments


12num commented Jul 20, 2022

Thank you very much for your work. I have a request that I hope you can agree to:
could you share with me the ETH data before it is processed by your processing script? Thank you very much!


mulplue commented May 8, 2023

Hello, I have the same question. I can't get the same *.pickle data from the original data (provided by sgan) using the image2world function. Have you made any progress?


12num commented May 8, 2023

> Hello, I have the same question. I can't get the same *.pickle data from the original data (provided by sgan) using the image2world function. Have you made any progress?

Hello, I'm sorry to say I haven't made any progress on this issue, so I turned to exploring other models. If you have any questions, please contact me by email.


mulplue commented May 18, 2023

> Hello, I'm sorry to say I haven't made any progress on this issue, so I turned to exploring other models. If you have any questions, please contact me by email.

Hello, I now know how to do the world-pixel transformation using the homography matrix, but I still don't know how Y-net filters the data, so I also turned to exploring other models.
Here's the world2image transformation (from the ETH official guidance); I hope it helps anyone concerned about this issue:

import numpy as np

def world2image(traj_w, H_inv):
    # Convert points from Euclidean to homogeneous coordinates: (x, y) -> (x, y, 1)
    traj_homog = np.hstack((traj_w, np.ones((traj_w.shape[0], 1)))).T
    # Apply the inverse homography to map into the camera frame
    traj_cam = np.matmul(H_inv, traj_homog)
    # Normalize by the last coordinate to get pixel coordinates
    traj_uvz = np.transpose(traj_cam / traj_cam[2])
    return traj_uvz[:, :2]
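The ETH guidance snippet above covers only the world-to-image direction. A minimal sketch of the inverse (pixel-to-world) mapping, assuming the same homography convention, simply applies H itself instead of H_inv (the function name image2world here mirrors the one discussed in this thread, but this is a generic reconstruction, not the repository's actual code):

```python
import numpy as np

def image2world(traj_px, H):
    # Convert pixel points (u, v) to homogeneous coordinates: (u, v, 1)
    traj_homog = np.hstack((traj_px, np.ones((traj_px.shape[0], 1)))).T
    # Apply the homography to map into the world frame
    traj_world = np.matmul(H, traj_homog)
    # Normalize by the last coordinate to get metric world coordinates
    traj_xyz = np.transpose(traj_world / traj_world[2])
    return traj_xyz[:, :2]
```

Applying world2image with H_inv = np.linalg.inv(H) and then image2world with H should round-trip back to the original world coordinates, which is a quick sanity check for whichever H you are using.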

Chenzhou727 commented

> Hello, I'm sorry to say I haven't made any progress on this issue, so I turned to exploring other models. If you have any questions, please contact me by email.

> Hello, I now know how to do the world-pixel transformation using the homography matrix, but I still don't know how Y-net filters the data, so I also turned to exploring other models. Here's the world2image transformation (from the ETH official guidance), with the same code snippet as above.

I would like to ask whether the UCY dataset has the same kind of homography matrix for converting between world coordinates and pixel coordinates. Similar works (Y-net, NSP-SFM, etc.) all use map information in pixel space, yet the final metrics (ADE/FDE) are reported in world coordinates. How do they achieve this conversion on UCY? I also noticed that the original UCY dataset seems to be in pixel coordinates, while most existing work uses world coordinates. How is this converted? Thank you very much.
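On the UCY question: as far as I know there is no single official homography shipped with UCY, and different repositories use slightly different matrices. If you can collect a few pixel-world point correspondences for a scene (e.g. measured landmarks), a homography can be estimated with the standard DLT algorithm. This is a generic sketch of that estimation, not the procedure Y-net or NSP-SFM actually used:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst from N >= 4
    point correspondences, using the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H's entries
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A, dtype=float)
    # The solution is the right singular vector for the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1
```

With H estimated this way, the world2image/image2world functions from this thread apply unchanged; cv2.findHomography offers a robust (RANSAC) alternative when the correspondences are noisy.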
