
Inconsistent size of training set between the paper and the code #54

Open
needylove opened this issue Dec 29, 2021 · 2 comments

Comments

@needylove

Hi,

I noticed that the size of the training set for NYU v2 is 50k in your paper, but it is 24k+ in the code. Could you tell me whether this is a typo, or have I misunderstood something?

Thanks.

@leoshine

leoshine commented Jan 7, 2022

Dear Authors,

@needylove
I have the same question.
I managed to use the 50k training data from your previous work DenseDepth here, and regenerated train_test_inputs/nyudepthv2_train_files_with_gt.txt and train_test_inputs/nyudepthv2_test_files_with_gt.txt.

I then trained the model with python train.py args_train_nyu.txt
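For what it's worth, a split-file regeneration like the one above can be sketched roughly as follows. The directory layout, filename patterns (rgb_*.jpg / sync_depth_*.png), and the write_split_file helper are assumptions for illustration, not this repo's actual structure; only the BTS-style "image depth focal" line format follows the existing split files.

```python
import os

# Hypothetical layout (an assumption): each scene directory contains
# rgb_*.jpg images and matching sync_depth_*.png ground-truth maps.
def write_split_file(root, scenes, out_path, focal=518.8579):
    lines = []
    for scene in scenes:
        scene_dir = os.path.join(root, scene)
        for name in sorted(os.listdir(scene_dir)):
            if not (name.startswith("rgb_") and name.endswith(".jpg")):
                continue
            depth_name = name.replace("rgb_", "sync_depth_").replace(".jpg", ".png")
            # Only keep pairs where the depth map actually exists.
            if os.path.exists(os.path.join(scene_dir, depth_name)):
                # BTS-style line: <image path> <depth path> <focal length>
                lines.append(f"{scene}/{name} {scene}/{depth_name} {focal}\n")
    with open(out_path, "w") as f:
        f.writelines(lines)
```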

However, the results are abnormal:


wandb: Run summary:
wandb:               Epoch 24
wandb:          Metrics/a1 0.0
wandb:          Metrics/a2 0.0
wandb:          Metrics/a3 0.0
wandb:     Metrics/abs_rel 0.97262
wandb:      Metrics/log_10 1.57101
wandb:        Metrics/rmse 2.85693
wandb:    Metrics/rmse_log 3.62115
wandb:       Metrics/silog 14.78748
wandb:      Metrics/sq_rel 2.60408
wandb:          Test/SILog 14.18139
wandb:   Train/ChamferLoss 0.11677
wandb:         Train/SILog 0.5751

Could you give any tips on this?
Thanks.

@shariqfarooq123
Owner

DenseDepth uses inpainted ground-truth depth maps (required particularly for the SSIM loss), which may result in noisy supervision, whereas AdaBins is trained on the raw ground-truth depth maps (which may have missing values). In either case, evaluation is always done on the raw depth maps, as is the norm. The train/test split follows the officially provided split.
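To illustrate the raw-vs-inpainted distinction, training on raw depth maps typically means masking out missing values before computing the loss. A minimal sketch of a masked scale-invariant log (SILog) loss is below; the masked_silog name, the lam default, and the omission of any extra scaling are assumptions, not the repo's actual loss implementation.

```python
import torch

def masked_silog(pred, gt, eps=1e-6, lam=0.85):
    # Raw NYU depth maps contain missing values (zeros);
    # mask them out instead of relying on inpainted ground truth.
    mask = gt > eps
    d = torch.log(pred[mask]) - torch.log(gt[mask])
    # Scale-invariant log loss; lam < 1 adds a variance-focus term.
    return torch.sqrt((d ** 2).mean() - lam * d.mean() ** 2)
```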

In this repo, we indeed use a subset (24k) of the training set, following the previous work BTS, in order to allow a direct comparison. All dataset-related aspects (including the dataloaders) are taken directly from BTS; you may check out the linked repo for detailed instructions.
