Hello,
I used the pre-trained AdaBins model with the KITTI weights as follows.
I cropped the rectified KITTI-360 images to the input resolution that the AdaBins network expects (horizontal cropping only) and ran depth prediction; a rough sketch of this step is shown below.
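This is only a minimal sketch, assuming the InferenceHelper from the official AdaBins repository; the target width of 1242 px is an assumption, not necessarily the exact crop I used:

```python
# Minimal sketch: horizontally center-crop a rectified KITTI-360 image to a
# KITTI-like width, then run the pre-trained AdaBins model on it.
# Assumes the official AdaBins repo is on the Python path and its KITTI
# checkpoint is available.
from PIL import Image
from infer import InferenceHelper  # from the AdaBins repository

TARGET_W = 1242  # assumed KITTI training width; KITTI-360 rectified frames are wider

img = Image.open("kitti360_rect.png")
w, h = img.size

# Crop horizontally only, keeping the full image height.
left = (w - TARGET_W) // 2
img_cropped = img.crop((left, 0, left + TARGET_W, h))

# Predict depth with the KITTI-pretrained weights.
infer_helper = InferenceHelper(dataset='kitti')
bin_centers, predicted_depth = infer_helper.predict_pil(img_cropped)
```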
My question now is whether the output is a reasonable result. Is it normal that the upper part of the depth image looks like a ceiling? Could this be related to the LiDAR only covering a vertical angle of up to 10 degrees above the horizontal, so that the training data only had depth labels for the lower part of the image?
The visualization was made with code along the lines of this minimal sketch (the matplotlib colormap and the 80 m clipping range, which is the KITTI maximum depth used by AdaBins, are assumptions):
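```python
# Minimal sketch: visualize the predicted depth map with matplotlib.
# `predicted_depth` is the output of the inference sketch above.
import matplotlib.pyplot as plt
import numpy as np

depth = np.squeeze(predicted_depth)  # (H, W) depth in meters

plt.figure(figsize=(12, 4))
plt.imshow(depth, cmap='magma_r', vmin=1e-3, vmax=80)  # 80 m = KITTI max depth
plt.colorbar(label='depth [m]')
plt.axis('off')
plt.show()
```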
Here is the intrinsic camera matrix:

```
[[552.554261   0.         682.049453]
 [  0.         552.554261  238.769549]
 [  0.           0.           1.      ]]
```
And the extrinsic camera matrix:

```
[[ 0.04307104 -0.08829286  0.99516293  0.80439144]
 [-0.99900437  0.00778461  0.04392797  0.29934896]
 [-0.01162549 -0.99606414 -0.08786967 -0.17702258]
 [ 0.          0.          0.          1.        ]]
```
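As a sanity check on the label-coverage hypothesis, the intrinsics above can be used to compare the camera's vertical field of view with the LiDAR's upward coverage. A small sketch (fy and cy come from the intrinsic matrix; the ~10 degree upward LiDAR angle is the figure assumed in the question):

```python
import numpy as np

fy, cy = 552.554261, 238.769549  # from the intrinsic matrix above

# Vertical angle covered between image row 0 and the principal point.
angle_up = np.degrees(np.arctan2(cy, fy))
print(f"camera FOV above the optical axis: {angle_up:.1f} deg")  # ~23.4 deg

# If the LiDAR only scans up to ~10 deg above the horizontal, every image row
# above the corresponding pixel row can never receive a depth label.
lidar_up_deg = 10.0
row_cutoff = cy - fy * np.tan(np.radians(lidar_up_deg))
print(f"image rows 0..{row_cutoff:.0f} lie above the LiDAR's upward limit")  # ~141
```

Under that assumption, roughly the top 140 rows of the image would never have seen LiDAR supervision during training, which would be consistent with the flat "ceiling" in the prediction.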
I would be very thankful for a response.