
Possible errors in the transform_lidar_to_camera function? #11

Open
RaviBeagle opened this issue Sep 4, 2022 · 2 comments

Comments

@RaviBeagle

Hello @jedeschaud,

The code for the function:

def transform_lidar_to_camera(lidar_transform, camera_transform):
    R_camera_vehicle = rotation_carla(camera_transform.rotation)
    R_lidar_vehicle = np.identity(3) # rotation_carla(lidar_transform.rotation) # we want the lidar frame to have x forward
    R_lidar_camera = R_camera_vehicle.T.dot(R_lidar_vehicle)
    T_lidar_camera = R_camera_vehicle.T.dot(translation_carla(
        np.array([[lidar_transform.location.x], [lidar_transform.location.y], [lidar_transform.location.z]])
        - np.array([[camera_transform.location.x], [camera_transform.location.y], [camera_transform.location.z]])))
    return np.vstack((np.hstack((R_lidar_camera, T_lidar_camera)), [0, 0, 0, 1]))


According to this post: https://robotics.stackexchange.com/questions/21401/how-to-make-two-frames-relative-to-each-other

R_lidar_camera = R_camera_vehicle.T.dot(R_lidar_vehicle)

should be

R_lidar_camera = R_lidar_vehicle.T.dot(R_camera_vehicle)

Also, the translation difference is computed as lidar - camera:

translation_carla(np.array([[lidar_transform.location.x], [lidar_transform.location.y], [lidar_transform.location.z]]) - np.array([[camera_transform.location.x], [camera_transform.location.y], [camera_transform.location.z]]))

Should it not be camera - lidar?
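
To make the question concrete, here is a minimal numpy sketch of the composition described in the linked post (the poses and identity rotations are hypothetical stand-ins, not the repo's actual values): build each sensor's 4x4 pose in the vehicle frame, then chain one pose with the inverse of the other. Which pose gets inverted determines both the rotation order and the sign of the translation difference.

import numpy as np

def make_pose(R, t):
    # Assemble a 4x4 homogeneous pose from a 3x3 rotation and a 3x1 translation.
    return np.vstack((np.hstack((R, t)), [0, 0, 0, 1]))

# Hypothetical sensor poses expressed in the vehicle frame.
Tr_vehicle_camera = make_pose(np.identity(3), np.array([[0.27], [0.0], [1.65]]))
Tr_vehicle_lidar = make_pose(np.identity(3), np.array([[0.0], [0.0], [1.73]]))

# inv(camera pose) @ lidar pose maps lidar-frame points into the camera frame.
Tr_camera_lidar = np.linalg.inv(Tr_vehicle_camera).dot(Tr_vehicle_lidar)
# Swapping which pose is inverted gives the opposite direction:
# the camera pose expressed in the lidar frame.
Tr_lidar_camera = np.linalg.inv(Tr_vehicle_lidar).dot(Tr_vehicle_camera)

print(Tr_camera_lidar[:3, 3])  # lidar - camera: [-0.27  0.    0.08]
print(Tr_lidar_camera[:3, 3])  # camera - lidar: [ 0.27  0.   -0.08]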

@RaviBeagle (Author)

The following code gives me the correct results:

def get_transform_lidar_to_camera(lidar_transform, camera_transform):
    # Rotations of each sensor expressed in the vehicle frame.
    R_camera_vehicle = rotation_carla(camera_transform.rotation)
    R_lidar_vehicle = rotation_carla(lidar_transform.rotation)
    # Translations of each sensor in the vehicle frame
    # (y negated to convert CARLA's left-handed coordinates to a right-handed frame).
    T_camera_vehicle = np.array([[camera_transform.location.x], [-camera_transform.location.y], [camera_transform.location.z]])
    T_lidar_vehicle = np.array([[lidar_transform.location.x], [-lidar_transform.location.y], [lidar_transform.location.z]])
    # 4x4 homogeneous poses of each sensor in the vehicle frame.
    Tr_camera_vehicle = np.vstack((np.hstack((R_camera_vehicle, T_camera_vehicle)), [0, 0, 0, 1]))
    Tr_lidar_vehicle = np.vstack((np.hstack((R_lidar_vehicle, T_lidar_vehicle)), [0, 0, 0, 1]))
    # Chain the camera pose with the inverse of the lidar pose.
    return np.linalg.inv(Tr_lidar_vehicle).dot(Tr_camera_vehicle)


[[ 1.          0.          0.          0.27000001]
 [ 0.          1.          0.          0.        ]
 [ 0.          0.          1.         -0.08000004]
 [ 0.          0.          0.          1.        ]]

which is the correct transformation when the LIDAR is not at a 180° yaw to the camera.
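
For anyone who wants to reproduce the matrix above without a running simulator, here is a quick check using lightweight stand-ins for the CARLA transforms. The sensor locations are hypothetical, chosen only to match the printed translation, and rotation_carla is stubbed as identity, which only holds for zero roll/pitch/yaw; it assumes get_transform_lidar_to_camera from above is in scope.

import numpy as np
from types import SimpleNamespace

def rotation_carla(rotation):
    # Stub: identity, valid only when the sensor has zero roll/pitch/yaw.
    return np.identity(3)

lidar_transform = SimpleNamespace(location=SimpleNamespace(x=0.0, y=0.0, z=1.73), rotation=None)
camera_transform = SimpleNamespace(location=SimpleNamespace(x=0.27, y=0.0, z=1.65), rotation=None)

print(get_transform_lidar_to_camera(lidar_transform, camera_transform))
# translation column: (0.27, 0, -0.08), matching the matrix above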

@zorzpapaduby

Hello! Running this simulation is extremely slow in my case. I have noticed that if I change fps_simu to 10 instead of 1000, the simulation runs faster, but I am not sure how this influences data generation and whether it will affect the final output, which is supposed to be KITTI-like. Do you have any suggestions on how I should approach this? Thanks
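
For context, if fps_simu feeds CARLA's synchronous-mode time step (an assumption; it's worth checking where fps_simu is used in the generator script), then dropping it from 1000 to 10 coarsens the physics step from 1 ms to 100 ms. The standard CARLA pattern looks like this:

import carla

client = carla.Client("localhost", 2000)
world = client.get_world()
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 1.0 / 10  # 10 simulated FPS -> 100 ms physics step
world.apply_settings(settings)

A coarser step changes how the ray-cast lidar accumulates points: CARLA returns the points generated during the last tick, so with a 100 ms step a 10 Hz rotation completes within a single tick, while with 1 ms steps each tick returns only a thin slice of the sweep. Poses and timestamps of the KITTI-like output can therefore differ even if each full scan still looks plausible.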
