
TorchMetrics for higher reproducibility! #19

Open
tchaton opened this issue May 5, 2021 · 2 comments


tchaton commented May 5, 2021

Dear @RJT1990,

Awesome project there!!!

I looked at the internals and saw that the metrics are implemented manually, without any testing.

This worries me in terms of reproducibility and accurate reporting.

I think you should consider using https://github.com/PytorchLightning/metrics for benchmarking the runs.

Its metrics are extremely well tested and work automatically both in distributed settings and in plain PyTorch.
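A minimal sketch (not taken from this repo) of what replacing a hand-rolled metric with a TorchMetrics one could look like; the `task`/`num_classes` arguments follow a recent torchmetrics release, and the batch sizes and class count are made up for illustration:

```python
import torch
import torchmetrics

# Hypothetical 10-class classification benchmark; the numbers are made up.
metric = torchmetrics.Accuracy(task="multiclass", num_classes=10)

for _ in range(5):  # stand-in for the evaluation batches of a benchmark run
    preds = torch.randn(32, 10).softmax(dim=-1)  # model probabilities
    target = torch.randint(0, 10, (32,))         # ground-truth labels
    metric.update(preds, target)  # accumulates state; synced across processes in DDP

accuracy = metric.compute()  # aggregate over everything seen by update()
print(accuracy)
```

The update/compute split is what makes this safe in distributed evaluation: each process accumulates its own state, and `compute()` reduces across processes before returning the final value.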

Best,
T.C


Borda commented May 5, 2021

I smell @tchaton is volunteering to make it for you guys 🐰


RJT1990 commented May 5, 2021

Heya,

As discussed yesterday, we are not maintaining sotabench (and associated tools) at this stage, and our focus is elsewhere - particularly on lighter forms of capturing results for the main Papers with Code website.

On testing: this was an experimental product. As such, the emphasis was on extracting user signal rather than committing wholly to a particular implementation. I.e. "manual implementation" was sufficient for our objectives at the time :).

Thanks!

Ross
