
The code that you submitted to the Fishyscapes submission? #15

Open
pswena opened this issue Dec 14, 2021 · 12 comments

Comments

@pswena

pswena commented Dec 14, 2021

I saw issue #13, but could you please upload the code that you submitted to the Fishyscapes benchmark to get the results of Table 1?

@hermannsblum
Collaborator

Sure, you can find that code here: hermannsblum/fishyscapes#4

@pswena
Author

pswena commented Dec 14, 2021

> Sure, you can find that code here: hermannsblum/fishyscapes#4

Thank you very much! Since I am new to deep learning, I ask a lot of questions. Thank you for your reply.

@pswena
Author

pswena commented Dec 14, 2021

Firstly, I can't find test_fishy_torch in driving_uncertainty. Is this file missing?
Secondly, if I want to replicate the results of Table 1, should I submit the code you mentioned above to the Fishyscapes benchmark?

@hermannsblum
Collaborator

I may be missing the point, but you cannot replicate the results of Table 1 yourself. The results for Synboost are available at https://fishyscapes.com/results. Table 1 is literally copy-pasted from there.

@pswena
Author

pswena commented Dec 14, 2021

I just want to replicate your Table 1 results on the FS Lost & Found, FS Static, and FS Web datasets. So what should I do? Which code should I submit to the Fishyscapes benchmark? Or can I not replicate them by myself?

@hermannsblum
Collaborator

No, you cannot replicate this by yourself. The idea of public benchmarks and challenges is that their test sets are hidden and only the organizers can evaluate your model, so that you cannot overfit to the test sets.

@pswena
Author

pswena commented Dec 14, 2021

> No, you cannot replicate this by yourself. The idea of public benchmarks and challenges is that their test sets are hidden and only the organizers can evaluate your model, so that you cannot overfit to the test sets.

So you mean there is no way for me to replicate the results reported in your paper, right?

@CesarCadena
Collaborator

@pswena you always have the validation sets available to replicate results yourself (for example, Table 2 in the paper).
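
For reference, a minimal sketch of how such a validation-set number can be computed, assuming you already have per-pixel anomaly scores and binary ground-truth masks; the pixel-wise average precision below matches the kind of metric Fishyscapes reports, but the 1 = anomaly / 255 = ignore label convention is an assumption, not the official evaluation code:

```python
import numpy as np
from sklearn.metrics import average_precision_score

def validation_average_precision(score_maps, label_maps, ignore_label=255):
    """Pixel-wise average precision (AP) over a validation split.

    score_maps: list of (H, W) arrays with per-pixel anomaly scores.
    label_maps: list of (H, W) arrays with 1 = anomaly, 0 = in-distribution,
                and ignore_label for pixels excluded from evaluation.
    """
    scores, labels = [], []
    for score_map, label_map in zip(score_maps, label_maps):
        valid = label_map != ignore_label          # drop ignored pixels
        scores.append(score_map[valid].ravel())
        labels.append(label_map[valid].ravel())
    return average_precision_score(np.concatenate(labels),
                                   np.concatenate(scores))
```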

@pswena
Author

pswena commented Dec 14, 2021

Yeah, I have done that. Thanks for your reply. But I have another question: can I run your code on the FS Lost & Found, FS Static, and FS Web datasets? Not to replicate the results, I am just curious how to run the code on the Fishyscapes benchmark.

@hermannsblum
Collaborator

If you want to try it out on some additional images, I would recommend this benchmark: https://segmentmeifyoucan.com/. The test images there are available for download, so you can run Synboost over them.
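
For reference, a minimal sketch of running an anomaly-segmentation model such as Synboost over a folder of downloaded test images and saving per-pixel anomaly scores; the load_model placeholder and the directory names are hypothetical, not the actual Synboost entry point:

```python
import os
import numpy as np
import torch
from PIL import Image

def load_model():
    # Placeholder: plug in however you construct and load your trained
    # anomaly-segmentation model (e.g. Synboost) in your own setup.
    raise NotImplementedError("load your model here")

@torch.no_grad()
def run_on_folder(image_dir, out_dir, model, device="cuda"):
    """Run the model on every image in image_dir and save the per-pixel
    anomaly scores as .npy files."""
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(image_dir)):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        image = Image.open(os.path.join(image_dir, name)).convert("RGB")
        x = torch.from_numpy(np.asarray(image)).permute(2, 0, 1).float() / 255.0
        x = x.unsqueeze(0).to(device)        # shape (1, 3, H, W)
        scores = model(x)                    # expected per-pixel scores (1, H, W)
        out_path = os.path.join(out_dir, os.path.splitext(name)[0] + ".npy")
        np.save(out_path, scores.squeeze(0).cpu().numpy())

if __name__ == "__main__":
    model = load_model()
    run_on_folder("downloaded_test_images", "anomaly_scores", model)
```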

@pswena
Author

pswena commented Dec 14, 2021

> If you want to try it out on some additional images, I would recommend this benchmark: https://segmentmeifyoucan.com/. The test images there are available for download, so you can run Synboost over them.

Thanks for your quick responses and congratulations on the paper! Thank you very much.

@pswena
Author

pswena commented Dec 15, 2021

> I may be missing the point, but you cannot replicate the results of Table 1 yourself. The results for Synboost are available at https://fishyscapes.com/results. Table 1 is literally copy-pasted from there.

May I ask you again about the test_fishy_torch file in driving_uncertainty?
