
Validation set ratio #4

Open
Yashay opened this issue May 23, 2021 · 1 comment

Comments

Yashay commented May 23, 2021

Hi,

Hope you are well.

In your DOAM article on page 7 you said:

"We further select the best performance model to calculate the AP of
each category to observe the performance improvement in different
categories. "

Does this mean that after every epoch you evaluate on the test set and select the best-performing model?
Or do you complete training and then evaluate on the test set once?

I ask because I want to find a stopping criterion for training; usually this is done by monitoring validation-set loss.
If we use the test set as a validation set, we risk overfitting to the test set.
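For context, the stopping criterion I have in mind is standard early stopping on a held-out validation split. A minimal sketch (hypothetical, not code from this repository; the class name and parameters are my own):

```python
# Early-stopping sketch: stop training when validation loss has not
# improved by at least `min_delta` for `patience` consecutive epochs.

class EarlyStopping:
    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best_loss = float("inf")
        self.epochs_without_improvement = 0

    def step(self, val_loss):
        """Record this epoch's validation loss; return True if training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            # Improvement: remember the best loss (this is also the point
            # at which you would checkpoint the model) and reset the counter.
            self.best_loss = val_loss
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.patience


# Example: validation loss plateaus after epoch 2, so training halts
# once `patience` epochs pass without improvement.
stopper = EarlyStopping(patience=2)
losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.63]
stopped_at = None
for epoch, loss in enumerate(losses):
    if stopper.step(loss):
        stopped_at = epoch  # training would stop here
        break
```

This way the test set is touched only once, after the stopping point is chosen on the validation split.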

Kind Regards
Yashay

@LoveIsAGame commented:

Could you share the download link for the OPIXray dataset? Thanks!
