Hi, I am wondering how you get the translation vector to work as the camera location. In my own monocular video, the translation vectors do not shift much.
Since NeRF requires a static object in a canonical space with cameras revolving around it, we use the transformation matrix from the rigid head tracking as the camera rigid transformation.
(We then normalize the translation so that the average z is at 0.5, because for NeRF you want the scene to be bounded within the unit cube.)
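The normalization step above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual code: the function name and the pose layout (a stack of 4x4 camera-to-world matrices with the translation in the last column) are assumptions.

```python
import numpy as np

def normalize_translations(poses):
    """Hypothetical sketch: rescale camera translations so the average
    z-depth of the camera centers is 0.5, keeping the scene roughly
    bounded within the unit cube as NeRF expects."""
    poses = poses.copy()
    avg_z = np.abs(poses[:, 2, 3]).mean()  # mean |z| of the camera centers
    scale = 0.5 / avg_z                    # uniform scale so mean |z| -> 0.5
    poses[:, :3, 3] *= scale               # scale all translations together
    return poses

# Toy example: two identity poses with z-translations 1.0 and 3.0
# (average 2.0, so every translation is scaled by 0.25).
poses = np.stack([np.eye(4), np.eye(4)])
poses[0, 2, 3] = 1.0
poses[1, 2, 3] = 3.0
out = normalize_translations(poses)
print(out[:, 2, 3])  # → [0.25 0.75]
```

Note that scaling must be applied uniformly to all three translation components, not just z; otherwise the relative camera geometry (and hence the reconstruction) is distorted.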
Thanks for your reply! I am actually using the rigid head tracking as the camera rigid transformation as well. However, when rendering with the render function you provided, the mesh cannot be matched to the face. Is fx negative when you render? And what coordinate system do you use? Many thanks!
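A mismatch like this is often a camera-convention issue rather than a tracking issue. As a hedged sketch (the conventions and function name here are assumptions, not the repository's API): OpenCV/COLMAP-style cameras use x right, y down, z forward, while OpenGL/NeRF-style cameras use x right, y up, z backward, and an apparently negative fx usually means the renderer assumes the other convention. Converting a camera-to-world pose between the two amounts to flipping the y and z camera axes:

```python
import numpy as np

# Flip the y and z camera axes (OpenCV <-> OpenGL convention).
# Applying it twice is the identity, so the same matrix converts both ways.
FLIP_YZ = np.diag([1.0, -1.0, -1.0, 1.0])

def opencv_to_opengl(c2w):
    """Convert a 4x4 camera-to-world matrix between conventions
    by flipping the y and z camera axes (illustrative sketch)."""
    return c2w @ FLIP_YZ

c2w = np.eye(4)
out = opencv_to_opengl(c2w)
print(out[:3, :3].diagonal())  # → [ 1. -1. -1.]
```

If the rendered mesh appears mirrored or behind the camera, trying this flip on the head-tracking poses (or on the intrinsics' sign) is a common first debugging step.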