Documentation and library on ICoordinateMapper #80
If you can't understand how to use these functions, ask me and I will show you an example.
In the main of your example I get [-inf, -inf] when trying to get the depth point of a color point.
Kinect returns -inf for values that have too much noise and cannot be mapped from one frame to another. First, try to tilt or move the camera slightly and clean the lens to reduce noise. Also, if you are close to a window, light from the sun might interfere with the sensor; avoid direct sunlight and use artificial light to get a clear view. Also change the if statement so that it makes sure the Kinect has retrieved at least one depth frame and one color frame. Try all this and let me know if anything changed.
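The frame check described above can be sketched as follows. `has_new_depth_frame` and `has_new_color_frame` are pykinect2's `PyKinectRuntime` methods; the helper name `frames_ready` is made up for illustration:

```python
# Hedged sketch: require BOTH a new depth frame and a new color frame
# before attempting any mapping. The two has_new_*_frame methods are
# part of pykinect2's PyKinectRuntime API; `frames_ready` is only an
# illustrative helper name, not part of the library.
def frames_ready(kinect):
    return kinect.has_new_depth_frame() and kinect.has_new_color_frame()
```

In the main loop this replaces a check on only one of the two frames, which is what lets `-inf` values slip through before both streams have delivered data.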
It works, thank you very much.
Thank you very much, feel free to ask me any time. I am coding a big project with Kinect v2 in Python and I have learned a lot of the functions. I cannot share the code yet, but I can help with any question related to pykinect2, not only the mapper functions.
Hey, I tried your repo to map depth images to color space with already-captured images, but the depth_2_color_space function returns arrays of zeros. Can you help me out with it?
It returns arrays of zeros because the Kinect device is not connected and running. To map a depth image to the color space you need the Kinect's depth values, which represent the distance of each object in meters. A saved depth or color image only has pixel values from 0 to 255, so it cannot be used to produce a mapped image. You can try it with the code below, but I don't think it will produce accurate results without the Kinect running:

```python
import numpy as np
from pykinect2 import PyKinectV2, PyKinectRuntime
from pykinect2.PyKinectV2 import _DepthSpacePoint

kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)
""" import your images here """
color2depth_points_type = _DepthSpacePoint * int(1920 * 1080)
```

Without access to the Kinect device you cannot map the color pixels to the depth pixels. Also, if you want to map depth frames to color space, you should use the color_2_depth function, or the code below:

```python
import numpy as np
from pykinect2 import PyKinectV2, PyKinectRuntime
from pykinect2.PyKinectV2 import _ColorSpacePoint

kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)
""" import your images here """
depth2color_points_type = _ColorSpacePoint * int(512 * 424)
```

But again, I don't think it would produce anything useful.
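For reference, the `_ColorSpacePoint * N` pattern in the snippets above just allocates a ctypes array with one output point per depth pixel, which the coordinate mapper then fills in place. The sketch below reproduces that allocation with a locally defined stand-in struct so it runs without the Kinect SDK installed; the real type lives in `pykinect2.PyKinectV2`:

```python
import ctypes

# Stand-in for pykinect2's _ColorSpacePoint: two 32-bit floats (x, y).
# This local definition is only for illustration; with the SDK installed
# you would import the real struct from pykinect2.PyKinectV2 instead.
class ColorSpacePoint(ctypes.Structure):
    _fields_ = [("x", ctypes.c_float), ("y", ctypes.c_float)]

DEPTH_WIDTH, DEPTH_HEIGHT = 512, 424

# One color-space point per depth pixel; ctypes zero-initializes the
# buffer, and the mapper call would overwrite it with mapped coordinates.
color_points = (ColorSpacePoint * (DEPTH_WIDTH * DEPTH_HEIGHT))()
```

This also shows why a disconnected device yields all zeros: the buffer starts zeroed, and without depth data the mapper never writes anything meaningful into it.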
How do I extract the real-time X, Y, Z coordinates of a detected blob?

```python
depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_img, alpha=255 / clipping_distance), cv2.COLORMAP_JET)
```
Using my mapper repo you can use the following code to get the world points of the detected blobs:
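The code snippet from that reply was not captured in this thread. As a rough, hypothetical sketch of the first half of the task: the blob's centroid pixel can be computed from a binary mask, and with the Kinect running, that pixel plus its depth value would then be handed to the coordinate mapper to obtain X, Y, Z in meters. The helper name `blob_centroid` is made up for illustration:

```python
import numpy as np

def blob_centroid(mask):
    """Return the (col, row) pixel centroid of a binary blob mask.

    Hypothetical helper for illustration; `mask` is a 2-D boolean array
    marking the detected blob's pixels.
    """
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())

# With the device running, the mapper (e.g. the world-point helpers in
# the Mapper-Functions repo) would convert the centroid pixel and the
# depth value at that pixel into X, Y, Z in meters; that step needs a
# live Kinect, so it is not shown here.
```

The depth lookup itself is just `depth_img[row, col]` on the raw (not colormapped) depth frame; the colormapped image from the question is for display only and loses the metric depth values.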
With color_point_2_depth_point I always get [0, 0].
I have found how to use most of the ICoordinateMapper functions with PyKinectV2. For anyone struggling with this, I wrote a file in my repository that is free to use:
https://github.com/KonstantinosAng/PyKinect2-Mapper-Functions