Sorry for opening an issue similar to #24, but my question is a little different.
My purpose is to get the updated camera position and all the point clouds, in order to map them onto a 2-D map, and to publish the result as a ROS topic (maybe via ROSOutput3DWrapper::publishTrajectory in ROSOutput3DWrapper, but that is still not implemented).
That way, other nodes on ROS could easily get the result of LSD-SLAM
(just like the way the slam-viewer gets it).
So I started working on the slam-viewer code.
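What I have in mind on the ROS side is roughly the following. This is only a minimal sketch of publishing the current camera pose so other nodes can subscribe to it; the topic name "lsd_slam/pose" and the PosePublisher class are my own assumptions, not part of the existing ROSOutput3DWrapper:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

// Minimal sketch: publish the latest camera pose so other ROS nodes can use it.
// Topic name and class are assumptions, not part of ROSOutput3DWrapper.
class PosePublisher
{
public:
    explicit PosePublisher(ros::NodeHandle& nh)
        : pub_(nh.advertise<geometry_msgs::PoseStamped>("lsd_slam/pose", 10)) {}

    // x, y, z: camera position in the (scale-ambiguous) LSD-SLAM world frame.
    // qx..qw: camera orientation as a quaternion.
    void publishPose(double x, double y, double z,
                     double qx, double qy, double qz, double qw)
    {
        geometry_msgs::PoseStamped msg;
        msg.header.stamp = ros::Time::now();
        msg.header.frame_id = "world";
        msg.pose.position.x = x;
        msg.pose.position.y = y;
        msg.pose.position.z = z;
        msg.pose.orientation.x = qx;
        msg.pose.orientation.y = qy;
        msg.pose.orientation.z = qz;
        msg.pose.orientation.w = qw;
        pub_.publish(msg);
    }

private:
    ros::Publisher pub_;
};
```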
Below is what I understand from the code, but I am not sure whether I understand it correctly:
I think I can get the (x, y, z, intensity) of the point cloud from KeyFrameGraphDisplay::draw() in KeyFrameGraphDisplay.cpp, and the function flushpc() in KeyFrameDisplay deals with the positions of the point clouds.
However, I had difficulty understanding how the camera position is processed. I think "camToWorld" in KeyFrameDisplay.cpp may be what I need, but the data structure has stopped me, especially Sim3f, which is the type of "camToWorld".
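To check my understanding of camToWorld: if it really is the keyframe pose, then applying it to a point in the keyframe's camera frame should give that point in world coordinates. A sketch of what I mean, assuming the Sophus Sim3 point operator (the function name and include path are my guesses and may differ between Sophus versions):

```cpp
#include <sophus/sim3.hpp>  // include path may differ with the bundled Sophus
#include <Eigen/Core>

// Sketch: transform a point from the keyframe's camera frame into the world
// frame using camToWorld (a Sophus::Sim3f): p_world = scale * R * p_cam + t.
Eigen::Vector3f toWorld(const Sophus::Sim3f& camToWorld,
                        const Eigen::Vector3f& pointInCam)
{
    return camToWorld * pointInCam;
}
```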
My questions are:
1. Is the variable "camToWorld" in KeyFrameDisplay.cpp storing the camera position?
2. Should I learn how the Sophus group classes work? (They seem to involve a lot of linear algebra that I have no clue about.) Is there any suggested document I can read before I go further with this code?
3. Or am I just digging in the wrong direction?
Thanks a lot.
Regards,
Yes, you should probably have a look at how 3D pose representations with Lie algebras work. A Sim3 pose contains the translation and rotation of the camera, as well as its scale.
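In case it helps, here is roughly how the individual parts can be read out of a Sophus::Sim3f. This is only a sketch; accessor names can differ slightly between Sophus versions, so treat the method names as assumptions:

```cpp
#include <sophus/sim3.hpp>  // include path depends on the Sophus version
#include <Eigen/Core>
#include <cstdio>

void printPose(const Sophus::Sim3f& camToWorld)
{
    // 4x4 homogeneous matrix: [ s*R  t ; 0 0 0 1 ]
    Eigen::Matrix4f m = camToWorld.matrix();

    Eigen::Vector3f t = m.topRightCorner<3, 1>();      // camera position (world frame)
    float s = camToWorld.scale();                      // keyframe scale
    Eigen::Matrix3f R = m.topLeftCorner<3, 3>() / s;   // pure rotation, scale removed

    std::printf("t = (%f, %f, %f), scale = %f\n", t.x(), t.y(), t.z(), s);
    (void)R;
}
```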
I have got the (x, y, z, intensity) of the point cloud from KeyFrameGraphDisplay::draw() in KeyFrameGraphDisplay.cpp. The result looks like:
0.486514 -0.521726 1.2571 171
0.495575 -0.524695 1.26373 151
0.501138 -0.526741 1.26836 141
...
But I found that the point-cloud coordinates (x, y, z) are proportionally scaled compared to the values in the real world. For example, a 10 x 20 x 20 (meter) room is represented by coordinates of only about (0.1, 0.2, 0.2).
My questions are:
1. What should I do if I want to measure the length or width of an object in the real world?
2. If the point-cloud coordinates (x, y, z) are proportionally scaled, where can I find the scale coefficient?
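For context, the only workaround I can think of so far is to measure one real-world distance by hand and rescale everything with it; since LSD-SLAM is monocular, I don't think an absolute scale is stored anywhere. A sketch of what I mean (the function names and the idea of a hand-measured distance are my own assumptions):

```cpp
#include <Eigen/Core>
#include <vector>

// Sketch: recover an approximate metric scale factor from one distance that
// was measured by hand in the real scene (in meters), then rescale all points.
float metricScale(const Eigen::Vector3f& p1, const Eigen::Vector3f& p2,
                  float realDistanceMeters)
{
    return realDistanceMeters / (p1 - p2).norm();
}

void rescalePoints(std::vector<Eigen::Vector3f>& points, float factor)
{
    for (Eigen::Vector3f& p : points)
        p *= factor;  // coordinates are now roughly in meters
}
```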