
Issues about frame index 'dataset_index' in the file 'labels.csv' #29

Open
liululu1 opened this issue Aug 16, 2022 · 5 comments

Comments

@liululu1

Hi, thank you very much for the good research about RADIal.
I have a few questions about how to generate a "ready to use" RADIal dataset from raw data.

  1. When I read the box information in 'labels.csv' and draw the boxes onto the corresponding image frame, the image selected by 'dataset_index' does not match. I don't know how to map the 'dataset_index' in 'labels.csv' to the original data frames in 'RECORD@*_events_log.rec'.
  2. I noticed in https://github.com/valeoai/RADIal/issues/25 that 'generate_database_2_updated.py' references some missing files. Could you provide the following: /HD_Radar_Processing_Library and 'label_candidates_0deg.npy'?

Thank you again!

@JeroenBax

Hi @liululu1,

I'm also trying to get this file.
Did you manage to obtain it, or did you find a workaround?

@liululu1
Author

Hi @JeroenBax,

We can recreate the "ready to use" dataset from the raw data as follows:

  1. Read the label information from the 'labels.csv' file:
     filename = labels[:, -3],
     frame_index = labels[:, -2].
  2. Get the data from the different sensors at the same timestamp:
     db = SyncReader(filename_path, tolerance=20000),
     data = db.GetSensorData(frame_index)
     Pay attention to the parameter 'tolerance=20000': it ensures that the label information corresponds to the correct raw data frames.
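The two steps above can be sketched roughly as below. Only `SyncReader` and `GetSensorData` come from the RADIal code referenced in this thread; the column names and sample rows are invented here for illustration, and the actual 'labels.csv' layout may differ (only the positions -3 and -2 are taken from the comment above).

```python
# Sketch: map each label row to (raw record name, frame index in that record).
# The sample CSV content below is made up; only the -3/-2 column positions
# follow the recipe in this thread.
import csv
import io

SAMPLE_LABELS = """\
numSample,other_columns,record,index_in_record,dataset_index
0,...,RECORD@2020-11-22_12.45.05,1342,18
1,...,RECORD@2020-11-22_12.45.05,1346,19
"""

def label_lookup(csv_text):
    """Return a list of (filename, frame_index) pairs, one per label row."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    pairs = []
    for row in rows[1:]:                # skip the header row
        filename = row[-3]              # raw .rec sequence the frame came from
        frame_index = int(row[-2])      # index to pass to GetSensorData
        pairs.append((filename, frame_index))
    return pairs

pairs = label_lookup(SAMPLE_LABELS)
# With the raw recordings on disk, each pair would then be resolved via:
#   db = SyncReader(filename_path, tolerance=20000)
#   data = db.GetSensorData(frame_index)
```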

@JeroenBax

Hi,
Thanks for the reply.

According to the scripts, several folders are created based on the config file when generating the data. It seems that radar_Freespace is not an option. How do you generate those?

@liululu1
Author

Hi, I haven't done much research on free space, but it's covered in the RADIal paper. The paper describes the free-space annotation as follows:

"The free-space annotation was done fully automatically on the camera images. A DeepLabV3+ [5], pre-trained on Cityscape, has been fine-tuned with 2 classes (free space and occupied) on a small manually-annotated part of our dataset. This model segmented each video frame and the obtained segmentation mask was projected from the camera’s coordinate system to the radar’s one thanks to known calibration. Finally, already available vehicle bounding boxes were subtracted from the free-space mask. The quality of the segmentation mask is limited due to the automatic method we employed and to the projection inaccuracy from camera to real world." (from "Raw High-Definition Radar for Multi-Task Learning")
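The last step of the quoted pipeline (subtracting the known vehicle boxes from the free-space mask) can be sketched as below. This is a hypothetical illustration, not the authors' code: the function name, the grid layout, and the (r0, c0, r1, c1) box format are all invented here.

```python
# Hypothetical sketch: remove vehicle boxes from a binary free-space mask.
# mask[r][c] == 1 means "free", 0 means "occupied"; boxes are half-open
# (r0, c0, r1, c1) index ranges in the mask's own grid.

def subtract_boxes(mask, boxes):
    """Mark every mask cell inside any box as occupied (0)."""
    for r0, c0, r1, c1 in boxes:
        for r in range(r0, r1):
            for c in range(c0, c1):
                mask[r][c] = 0
    return mask

free = [[1] * 6 for _ in range(4)]            # start with everything free
free = subtract_boxes(free, [(1, 2, 3, 5)])   # one known vehicle box
```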

@BowenXu2020

> Hi @JeroenBax,
>
> We can recreate the "ready to use" dataset from raw data as follows:
>
> 1. Read labels information from 'labels.csv' file,
>    filename=labels[:, -3],
>    frame_index=labels[:, -2].
> 2. Get data from different sensors with the same timestamp,
>    db=SyncReader(filename_path,tolerance=20000),
>    data = db.GetSensorData(frame_index)
>    It is necessary to pay attention to the parameter 'tolerance=20000' to ensure that the labels information corresponds to the original data.

Hi @liululu1,

Thank you for this reply! It really worked.

However, when I reproduce the "ready to use" data, I find that the radar point cloud is sparser. For example, for the first frame (000018), the ready-to-use radar data has 1000+ points, but the frame I generated with the provided code has only 700+ radar points. Did you notice this problem? I suspect the released radar signal processing code differs from what the authors actually used to generate the ready-to-use data.

BTW, the LiDAR point cloud and the image are exactly the same. Thank you again for the clarification about the tolerance parameter!
