How to train on kitti dataset? #1
We provide converted KITTI data that you can download directly. The 5 channels are x, y, z, intensity, and mask, where the mask indicates whether the pixel is missing or not. The 2D projection method is described in the SqueezeSegV1 paper: https://arxiv.org/abs/1710.07368
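The projection described above can be sketched as follows. This is a minimal illustration, not the repo's actual preprocessing: the function name, the front-facing 90° crop, and the assumed HDL-64E vertical field of view (+2° to -24.9°, i.e. 26.9° total) are assumptions taken from the SqueezeSegV1 setup.

```python
import numpy as np

def project_to_range_image(points, H=64, W=512, fov_up=2.0, fov_down=-24.9):
    """Project an N x 4 point cloud (x, y, z, intensity) onto an
    H x W x 5 image with channels (x, y, z, intensity, mask)."""
    x, y, z, intensity = points.T
    r = np.sqrt(x**2 + y**2 + z**2) + 1e-8

    azimuth = np.arctan2(y, x)        # horizontal angle, radians
    elevation = np.arcsin(z / r)      # vertical angle, radians

    # SqueezeSeg uses only the front-facing 90-degree slice
    # (0.17578125 deg/col * 512 cols = 90 deg).
    keep = np.abs(azimuth) <= np.pi / 4
    x, y, z, intensity = x[keep], y[keep], z[keep], intensity[keep]
    azimuth, elevation = azimuth[keep], elevation[keep]

    fov_up_r = np.radians(fov_up)
    fov_total = np.radians(fov_up - fov_down)  # 26.9 deg for the HDL-64E

    cols = np.clip(((azimuth + np.pi / 4) / (np.pi / 2) * W).astype(int), 0, W - 1)
    rows = np.clip(((fov_up_r - elevation) / fov_total * H).astype(int), 0, H - 1)

    img = np.zeros((H, W, 5), dtype=np.float32)
    # Last channel is the mask: 1 where a lidar return landed, 0 = missing pixel.
    img[rows, cols] = np.stack([x, y, z, intensity, np.ones_like(x)], axis=1)
    return img
```

Pixels that receive no lidar return stay all-zero, which is exactly what the mask channel flags.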
@BichenWuUCB Thanks, one more question: the output is [64, 512] with class ids inside. Does that mean the network can only predict a class per area, much like a bounding box in a BEV image? How can I visualize the result directly on the point cloud with colored points?
I mean, the network cannot predict the height of an object, can it?
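On both questions above, one hedged observation: because every valid pixel of the [64, 512, 5] input keeps its original x, y, z, the per-pixel labels can be scattered straight back into 3-D, so the height of each labeled point is preserved even though the prediction is a 2-D image. A minimal sketch (the names `frame`, `pred`, and the example palette are illustrative, not from the repo):

```python
import numpy as np

def labels_to_colored_points(frame, pred, palette):
    """Map a [64, 512] label image back onto colored 3-D points.
    frame: [64, 512, 5] input (x, y, z, intensity, mask);
    pred:  [64, 512] integer class ids; palette: RGB per class id."""
    mask = frame[:, :, 4] > 0            # valid (non-missing) pixels
    xyz = frame[mask][:, :3]             # recover original x, y, z per pixel
    colors = palette[pred[mask]]         # one RGB triple per predicted class
    return xyz, colors

# Example class ids/colors -- purely illustrative.
palette = np.array([[128, 128, 128],    # 0: background
                    [255, 0, 0],        # 1: car
                    [0, 255, 0],        # 2: pedestrian
                    [0, 0, 255]])       # 3: cyclist

frame = np.zeros((64, 512, 5), dtype=np.float32)
frame[10, 100] = [5.0, 1.0, -1.2, 0.3, 1.0]   # one valid pixel
pred = np.zeros((64, 512), dtype=np.int64)
pred[10, 100] = 1                              # predicted "car"
xyz, colors = labels_to_colored_points(frame, pred, palette)
print(xyz.shape, colors.shape)  # -> (1, 3) (1, 3)
```

The resulting xyz/colors pair can then be fed to any point-cloud viewer (e.g. Open3D or a matplotlib 3-D scatter) for visualization.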
@BichenWuUCB I wrote code for the 2D projection method described in the SqueezeSegV1 paper, but there are always lines in the middle of the image. Why is that? Are you using the parameters v_res=26.9/64 and h_res=0.17578125?
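One plausible cause of those horizontal lines, sketched with synthetic angles (this is an assumption about the asker's setup, not a confirmed diagnosis): the HDL-64E's 64 beams are not uniformly spaced (the upper and lower laser blocks use different angular spacing), so binning elevation with a constant v_res = 26.9/64 leaves some rows with no points at all. Binning by laser-ring id, or in-painting empty rows from neighbors, avoids the gaps.

```python
import numpy as np

v_res = np.radians(26.9 / 64)   # uniform row resolution, radians
fov_up = np.radians(2.0)

# Synthetic non-uniform beam elevations: a denser upper block and a
# sparser lower block, mimicking the sensor's two-block layout.
upper = np.linspace(2.0, -8.0, 32)
lower = np.linspace(-8.5, -24.9, 32)
elev = np.radians(np.concatenate([upper, lower]))

rows = np.clip(((fov_up - elev) / v_res).astype(int), 0, 63)
empty_rows = sorted(set(range(64)) - set(rows.tolist()))
print(f"{len(empty_rows)} of 64 rows receive no beam with uniform binning")
```

Every row in `empty_rows` would render as a black horizontal line in the projected image.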
May I ask how the mask data is obtained?
How to train on KITTI?
To be specific, there are 2 questions:
What is the network input inside the samples directory? It is an array of [64, 512, 5], but normally a point cloud has 4 dimensions, so where does the 5th dimension come from?
How to generate the lidar-2d data?
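On the first question, a small sketch of why the raw data has only 4 dimensions: a KITTI velodyne scan is stored as a flat float32 binary of (x, y, z, intensity) records, and the 5th channel (the mask) only appears after projection, marking which pixels received a lidar return. The path below is a placeholder, so the scan is simulated here.

```python
import numpy as np

def load_kitti_scan(path):
    """Read a raw KITTI velodyne .bin file into an N x 4 array
    of (x, y, z, intensity) -- note: only 4 dimensions."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

# Simulate a scan instead of reading from disk for this sketch.
scan = np.random.rand(1000, 4).astype(np.float32)
print(scan.shape)  # -> (1000, 4): the mask is not in the raw data
```

The 5th dimension is added during the 2-D projection step: 1 where a point landed in a pixel, 0 for missing pixels.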