Visualize trajectory in picture #63

Open
ot4f opened this issue Jul 14, 2022 · 6 comments

@ot4f commented Jul 14, 2022

Hi, great work!
I am having trouble visualizing trajectories on the video frames, as in Figure 4 of your paper. I took the original video (e.g. seq_eth.avi) and extracted frames at 0.4 s intervals. Now I want to map the trajectory (x, y) back to pixel coordinates in the corresponding frame. For example, in frame 840 of the eth dataset, the (x, y, z) of pedestrian 2 is (9.57, 6.24, 0). Using the inverse of the homography matrix stored in H.txt, I get the frame coordinates (285, 188). But in frame 840 there is no person near (285, 188) (I am assuming the top left of the frame is (0, 0)). Hope to get your help, thank you!
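
For reference, this is the exact computation I am doing (a minimal numpy sketch; I am assuming H.txt maps pixel coordinates to world coordinates, so this direction uses its inverse):

import numpy as np

H = np.loadtxt("H.txt")                  # 3x3 homography shipped with seq_eth
H_inv = np.linalg.inv(H)

p = H_inv @ np.array([9.57, 6.24, 1.0])  # world (x, y) of pedestrian 2, homogeneous
u, v = p[0] / p[2], p[1] / p[2]          # divide out the homogeneous scale
print(u, v)                              # this is where I get roughly (285, 188)

One thing I am not sure about is the axis order: if H actually expects (row, col) instead of (u, v), that alone would put the point in the wrong place.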

@abduallahmohamed (Owner) commented

Hi,
Please see our work "Social-Implicit" for better visualization.
Also, some frames without pedestrians were omitted, which may be why the frame numbers do not line up with the video.

@Jayoprell commented

Hi, great work!
I have some questions:

  1. In the data format [frame_id, pedestrian_id, x, y], how are x and y obtained? I ask because I want to build training data from a new video, but I don't know how to get these coordinates.
  2. For the same data [frame_id, pedestrian_id, x, y], how can I map x, y back to the frame coordinates? Is this formula right:
    pixel_coordinate = (x, y) * inverse(H)?
    Hope to get your help. Thanks!

@abduallahmohamed (Owner) commented

Thanks!
1- They are manually annotated. See the source papers of the ETH-UCY datasets for details.
2- As you said, using the homography matrix. You can find it in the dataset sources mentioned in (1).
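
To spell out (2) in code (a sketch; it assumes H maps pixel to world, hence the inverse for this direction):

import numpy as np

H = np.loadtxt("H.txt")              # the per-scene homography from the dataset
x, y = 9.57, 6.24                    # a world-coordinate point, e.g. from the comment above
p = np.linalg.inv(H) @ [x, y, 1.0]   # note the homogeneous 1
u, v = p[0] / p[2], p[1] / p[2]      # and the final division by p[2]

so it is not quite a plain (x, y) * inverse(H).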

@Jayoprell commented

Thanks for the help.

@Pradur241 commented

import numpy as np
pt_world = np.dot(H_zara02_inv, pt_pixel_resized)  # homogeneous pixel coords -> world coords

I recently succeeded with this code. Hope it helps.

@RedOne88 commented

Hi,
I find your work very interesting. I have a question that has been asked before: do you have code for testing the model on a video, for instance seq_eth.avi, as shown in your Figure 4? I have managed to run your code, and I am eager to see the predicted trajectories drawn on real images. I searched through your work "Social-Implicit" but could not find what I am looking for. A sketch of what I have in mind is below.
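
Something like this (a rough sketch, not the authors' code; preds_by_frame is a hypothetical mapping from frame index to predicted world (x, y) points, and I am assuming H.txt maps pixel to world coordinates):

import cv2
import numpy as np

H_inv = np.linalg.inv(np.loadtxt("H.txt"))  # world -> pixel (assumed direction)
preds_by_frame = {840: [(9.57, 6.24)]}      # hypothetical predictions, frame -> world points

cap = cv2.VideoCapture("seq_eth.avi")
frame_id = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for x, y in preds_by_frame.get(frame_id, []):
        p = H_inv @ np.array([x, y, 1.0])   # project the world point into the image
        u, v = int(p[0] / p[2]), int(p[1] / p[2])
        cv2.circle(frame, (u, v), 4, (0, 0, 255), -1)
    cv2.imwrite("vis_%05d.png" % frame_id, frame)
    frame_id += 1
cap.release()

Writing every frame out is wasteful, but it would at least show whether the projected points land on the pedestrians.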
