This repository has been archived by the owner on May 27, 2020. It is now read-only.

Amplitude image scaling #34

Open
akgoins opened this issue Mar 7, 2017 · 2 comments

@akgoins

akgoins commented Mar 7, 2017

I have an issue similar to #23. I am trying to use the /ifm/amplitude image to perform extrinsic calibration of the sensor location. However, the default amplitude image is very dark, and the calibration target cannot be seen in it.
[Image: frame0000, the default amplitude image]
I looked at the data and noticed that there is information there, but it appears to be scaled as if it were mono8 even though it is being published as mono16. Is the data supposed to be mono16? I tried publishing it as mono8, but then it is too bright and still wouldn't work for extrinsic calibration.
[Image: frame0003, the same data published as mono8]
I then scaled the image by its mean and published it as the original mono16. With this I got a much better result.
[Image: frame0001, the result after scaling by the mean]
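For reference, here is a minimal sketch of the mean-based rescaling described above, assuming the amplitude arrives as a 16-bit single-channel numpy array; the target mean is an illustrative value, not something taken from the driver:

```python
import numpy as np

def rescale_by_mean(amp, target_mean=32768.0):
    """Stretch a uint16 amplitude image so its mean lands near target_mean."""
    amp_f = amp.astype(np.float64)
    mean = amp_f.mean()
    if mean <= 0.0:
        return amp  # nothing to scale
    scaled = amp_f * (target_mean / mean)
    return np.clip(scaled, 0, 65535).astype(np.uint16)
```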
What is the scaling for the amplitude image, and why is the image so dark by default? Or is the image supposed to be dark? We have auto-exposure turned on and are using the less-than-5 m setting, so I would expect the image to be brighter. Would it be possible to change the scaling of the image prior to publishing so that extrinsic calibration can be done with the amplitude image?

@tpanzarella

These data are published into the ROS computation graph exactly as the sensor transmits them to us. Specifically, for the O3D the hardware produces amplitude data as uint16. To be clear, what we publish on /amplitude is the normalized amplitude from the camera (normalized on the camera side, before we get the pixels). We also publish /raw_amplitude, which is named in an obvious way.
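If it helps to confirm what the driver is actually handing you, a small probe along these lines will print the encoding and value range of each frame (a sketch assuming rospy and cv_bridge; the /ifm namespace matches the topic mentioned above but may differ in your launch configuration):

```python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def report(msg, name):
    # "passthrough" keeps the native dtype (uint16 for mono16 images)
    img = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    rospy.loginfo("%s: encoding=%s dtype=%s min=%d max=%d mean=%.1f",
                  name, msg.encoding, img.dtype, img.min(), img.max(), img.mean())

rospy.init_node("amplitude_probe")
rospy.Subscriber("/ifm/amplitude", Image, report, callback_args="amplitude")
rospy.Subscriber("/ifm/raw_amplitude", Image, report, callback_args="raw_amplitude")
rospy.spin()
```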

FWIW, we do our extrinsic calibrations using the point cloud data and targets with geometric properties we can exploit rather than treating the sensor as a traditional machine vision camera. YMMV.

Also, a word of caution: do not store your extrinsic calibration on the camera via the XMLRPC interface. We compute our "ROS coord frame" inline as we parse the pixel byte buffer and assume the extrinsics stored on the camera are 0,0,0,0,0,0; that is, we transform directly from the optical frame to our right-handed coord frame (consistent with ROS). This happens at the driver level.
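For intuition, the usual mapping from a camera optical frame (z forward, x right, y down) to a ROS right-handed body frame (x forward, y left, z up) looks like the sketch below; this is the common convention only, not necessarily the exact transform this driver applies internally:

```python
import numpy as np

def optical_to_ros(points_opt):
    """Map an Nx3 array of points from the optical frame to a ROS body frame.

    Common convention only: x_ros = z_opt, y_ros = -x_opt, z_ros = -y_opt.
    """
    x_o, y_o, z_o = points_opt[:, 0], points_opt[:, 1], points_opt[:, 2]
    return np.stack([z_o, -x_o, -y_o], axis=1)
```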

I hope the above info helps.

@akgoins (Author)

akgoins commented Mar 7, 2017

We want to use the image and a calibration target because we are calibrating multiple cameras, and having one calibration target is preferable. If you don't see a need to change the ROS driver, we will probably make a separate node that rescales the image for us so we can perform our extrinsic calibration.
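For what it's worth, a rescaling node along these lines is fairly small; the sketch below assumes rospy and cv_bridge, stretches the mono16 image to the full 16-bit range with a min-max normalization, and republishes it. Topic names and the normalization choice are placeholders to adapt:

```python
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()
pub = None

def rescale_cb(msg):
    amp = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")  # uint16
    # Stretch to the full uint16 range so the calibration target becomes visible.
    stretched = cv2.normalize(amp, None, 0, 65535, cv2.NORM_MINMAX)
    out = bridge.cv2_to_imgmsg(stretched.astype(np.uint16), encoding="mono16")
    out.header = msg.header  # keep the original timestamp and frame_id
    pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("amplitude_rescaler")
    pub = rospy.Publisher("/ifm/amplitude_rescaled", Image, queue_size=1)
    rospy.Subscriber("/ifm/amplitude", Image, rescale_cb, queue_size=1)
    rospy.spin()
```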

Thanks for the XMLRPC warning. We are using the industrial calibration library and storing our results externally, so we shouldn't have that problem.
