Currently, the reconstruction is done on the image patch of the detected human bounding box. I tried directly rendering the reconstructed poses of a video with fixed camera parameters, but the result kept jittering. Is there an easy way to transfer from the predicted local 3D pose (with respect to the image patch) to a global 3D pose based on the bounding box parameters?
Actually, our SMPLer-X model is trained on camera-space data, so it does not guarantee world-coordinate consistency.
As for your concern, please check the SMPLer-X output: the estimated intrinsics can vary from frame to frame. So I suggest using "SMPL-X translation / focal length" to get a "real" 3D translation, and then smoothing it. Please check whether that helps.
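A minimal sketch of that suggestion, assuming you have collected the per-frame SMPL-X camera-space translations and the per-frame estimated focal lengths from the model output (the function name and array layout below are illustrative, not part of the SMPLer-X API):

```python
import numpy as np

def smooth_global_translation(transl, focal, window=9):
    """Normalize the depth component by the per-frame focal length,
    then smooth the trajectory with a moving average.

    transl: (N, 3) per-frame SMPL-X camera-space translations.
    focal:  (N,)  per-frame estimated focal lengths (may vary per frame).
    """
    transl = np.asarray(transl, dtype=float)
    focal = np.asarray(focal, dtype=float)
    # Dividing depth by the focal length removes the jitter introduced
    # by the varying estimated intrinsics.
    norm = transl.copy()
    norm[:, 2] = transl[:, 2] / focal
    # Simple moving-average smoothing, per coordinate, with edge padding
    # so the output keeps the same length as the input.
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(norm, ((pad, pad), (0, 0)), mode="edge")
    return np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(3)],
        axis=1,
    )
```

Any low-pass filter (Gaussian, Savitzky-Golay, one-euro) would work in place of the moving average; the key step is the focal-length normalization before smoothing.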
Thank you for your excellent work. For my current work, I need the camera extrinsics. Since the intrinsics can vary in this way, is there any way to get the camera extrinsics?
Our SMPLer-X only focuses on camera-space pose estimation. For global information (extrinsics), please see our newest work, WHAC: World-grounded Humans and Cameras.