Get the position of camera and pointcloud in the slam-viewer #51

Open
EvanceChen opened this issue Jan 5, 2015 · 3 comments

@EvanceChen

Sorry for opening an issue similar to #24, but my question is a little different.
My goal is to get the updated position of the camera and all of the point clouds, map them onto a 2D map, and publish the result as a ROS topic (maybe via ROSOutput3DWrapper::publishTrajectory in ROSOutput3DWrapper, but that is still not implemented).
That way, other ROS nodes could easily consume the result of LSD-SLAM, just like the slam-viewer does.
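Roughly what I have in mind is the sketch below (just a sketch: the topic name lsd_slam/cam_pose and the helper poseFromSim3 are placeholders I made up, and I am assuming the pose is available as a Sophus::Sim3f like camToWorld in the viewer):

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <sophus/sim3.hpp>
#include <Eigen/Geometry>

// Sketch: convert a Sim3 camera-to-world pose into a ROS PoseStamped.
geometry_msgs::PoseStamped poseFromSim3(const Sophus::Sim3f& camToWorld)
{
    geometry_msgs::PoseStamped msg;
    msg.header.stamp = ros::Time::now();
    msg.header.frame_id = "world";

    // The translation part is the camera position in world coordinates.
    msg.pose.position.x = camToWorld.translation()[0];
    msg.pose.position.y = camToWorld.translation()[1];
    msg.pose.position.z = camToWorld.translation()[2];

    // The top-left 3x3 block of the Sim3 matrix is scale * rotation,
    // so dividing by the scale recovers the pure rotation.
    Eigen::Matrix3f R = camToWorld.matrix().topLeftCorner<3, 3>() / camToWorld.scale();
    Eigen::Quaternionf q(R);
    msg.pose.orientation.x = q.x();
    msg.pose.orientation.y = q.y();
    msg.pose.orientation.z = q.z();
    msg.pose.orientation.w = q.w();
    return msg;
}

// Usage, inside a node that already has a ros::NodeHandle nh:
//   ros::Publisher posePub = nh.advertise<geometry_msgs::PoseStamped>("lsd_slam/cam_pose", 1);
//   posePub.publish(poseFromSim3(camToWorld));
```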

So I started working on the code of the slam-viewer.
Below is what I understand of the code, but I am not sure whether I understand it correctly: I think I can get the (x, y, z, intensity) of the point cloud from KeyFrameGraphDisplay::draw() in KeyFrameGraphDisplay.cpp, and the function flushPC() in KeyFrameDisplay deals with the positions of the point clouds.

However, I have difficulty understanding how the camera position is processed. I think "camToWorld" in KeyFrameDisplay.cpp may be what I need, but its data structure has stopped me, especially Sim3f, which is the type of "camToWorld".
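If I read the code correctly, the pose is used roughly like the sketch below (only a sketch to check my understanding, assuming camToWorld is the Sophus::Sim3f member of KeyFrameDisplay):

```cpp
#include <iostream>
#include <Eigen/Core>
#include <sophus/sim3.hpp>

// Sketch to check my understanding of camToWorld (Sophus::Sim3f).
void checkPose(const Sophus::Sim3f& camToWorld)
{
    // A point in the keyframe's own camera frame, e.g. reconstructed
    // from (u, v, inverse depth) before drawing.
    Eigen::Vector3f pointInCam(0.1f, -0.2f, 1.5f);

    // A Sim3 acts on a point as: world = scale * R * point + t
    Eigen::Vector3f pointInWorld = camToWorld * pointInCam;

    // The translation part is the keyframe's camera position in world coordinates.
    Eigen::Vector3f camPosition = camToWorld.translation();

    std::cout << "point in world:  " << pointInWorld.transpose() << std::endl;
    std::cout << "camera position: " << camPosition.transpose() << std::endl;
}
```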

My questions are:

  1. Does the variable "camToWorld" in KeyFrameDisplay.cpp store the camera position?
  2. Should I learn how the Sophus group classes work? (They seem to involve a lot of linear algebra that I have no clue about.) Is there any document you would suggest I read before going further with this code?
  3. Or am I just digging in the wrong direction?

Thanks a lot.
Regards,

@JakobEngel
Member

Yes, you should probably have a look at how 3D pose representations with Lie algebras work. A Sim3 pose contains the translation and rotation of the camera, as well as its scale.
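A rough sketch of that decomposition with Sophus (the accessors below are from the Sophus Sim3 class; if your Sophus version differs, treat the exact names as assumptions):

```cpp
#include <iostream>
#include <sophus/sim3.hpp>

// Sketch: the pieces of a Sim3 pose (7 degrees of freedom:
// 3 for translation, 3 for rotation, 1 for scale).
void printSim3(const Sophus::Sim3f& camToWorld)
{
    std::cout << "translation: " << camToWorld.translation().transpose() << std::endl;
    std::cout << "scale:       " << camToWorld.scale() << std::endl;

    // 4x4 homogeneous matrix; its top-left 3x3 block is scale * rotation.
    std::cout << "matrix:\n" << camToWorld.matrix() << std::endl;

    // Lie-algebra (tangent space) view: a 7-vector with translation-like,
    // rotation-like and scale components.
    std::cout << "log: " << camToWorld.log().transpose() << std::endl;
}
```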

@QichaoXu

QichaoXu commented Jul 9, 2015

Hi,

I have got the (x, y, z, intensity) of the point cloud from KeyFrameGraphDisplay::draw() in KeyFrameGraphDisplay.cpp. The result looks like:
0.486514 -0.521726 1.2571 171
0.495575 -0.524695 1.26373 151
0.501138 -0.526741 1.26836 141
...
But I found that the point cloud coordinates (x, y, z) are proportionally scaled compared to their real-world values. For example, the coordinates of a 10 x 20 x 20 (meter) room are represented only by (0.1, 0.2, 0.2).

My questions are:

  1. What should I do if I want to measure the length or width of an object in the real world? (See the sketch below.)
  2. If the point cloud coordinates (x, y, z) are proportionally scaled, where can I find the scale coefficient?
  3. Or are there other possibilities?
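Here is a sketch of what I mean by measuring a length (the two points are taken from the output above; the conversion factor metricPerInternal is a placeholder I still do not know how to determine):

```cpp
#include <iostream>
#include <Eigen/Core>

int main()
{
    // Two points taken from the (x, y, z, intensity) output above.
    Eigen::Vector3f a(0.486514f, -0.521726f, 1.2571f);
    Eigen::Vector3f b(0.501138f, -0.526741f, 1.26836f);

    // Distance in LSD-SLAM's internal (monocular, non-metric) scale.
    float internalLength = (a - b).norm();

    // Placeholder: meters per internal unit; this is the factor I am missing.
    float metricPerInternal = 1.0f;

    std::cout << "length: " << internalLength * metricPerInternal << " m (?)" << std::endl;
    return 0;
}
```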

Thanks a lot.

@ghost

ghost commented Apr 14, 2016

I think the keyframe scale coefficient can be found by adding std::cout << camToWorld_estimate.scale() << std::endl; in KeyFrameGraph::addKeyFrame.
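In context, that would look roughly like the following (camToWorld_estimate is just the name used above; the actual variable in KeyFrameGraph::addKeyFrame may differ between lsd_slam versions):

```cpp
#include <iostream>
#include <sophus/sim3.hpp>

// Sketch: reading the scale factor of a keyframe's Sim3 pose.
void printKeyFrameScale(const Sophus::Sim3f& camToWorld_estimate)
{
    // scale() is the similarity-transform scale factor. For a monocular
    // system it is a relative scale, not a conversion to meters.
    std::cout << camToWorld_estimate.scale() << std::endl;
}
```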
:-)
