warn if misjudging mixed integer/floating point output #152

Open
godmar opened this issue Nov 22, 2019 · 4 comments

Comments

godmar commented Nov 22, 2019

I noticed that MAPS 2017 Figurine Figures is broken on Kattis, presumably because the problem package contains a floating-point tolerance setting that is then applied to every token that parses as a floating-point number (the problem asks for three integers, to be computed exactly, and one floating-point output). The problem requires a custom validator to ensure that the integers are output exactly. See submission 5034294 for a wrong answer that was accepted.

Perhaps problemtools could warn if (a) the default validator is active with a floating-point tolerance and (b) there exists a k such that the k-th token in all judge output files is an integer.
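
A rough sketch of what such a check could look like (the names here are hypothetical, not an existing problemtools API; it just scans the judge answer files for token positions that are always integers):

```python
import re

INT_RE = re.compile(r'^[+-]?\d+$')

def always_integer_positions(answer_files):
    """Token positions that hold an integer in every judge answer file.

    Hypothetical helper, not an actual problemtools function.
    """
    common = None
    for path in answer_files:
        with open(path) as f:
            tokens = f.read().split()
        positions = {i for i, tok in enumerate(tokens) if INT_RE.match(tok)}
        common = positions if common is None else common & positions
    return common or set()

# If the default validator is used with a float tolerance and this set is
# non-empty, problemtools could warn that those tokens will still be
# compared with the tolerance even though they look like exact integers.
```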

austrin commented Dec 4, 2019

I think I would prefer solving this by simply making the documentation of the default validator's behavior clearer.

If the problem you refer to had been mine, I would not even have considered this a bug -- if a submission gets accepted it is clearly computing the right answer (up to possibly rounding it to the nearest integer). It might not be formatting the answer as dictated but that's not really the main purpose of the problem.

But different problem-setters can of course view this differently, and I think the most important thing is that it is clear from the documentation what a problem-setter should expect.

godmar commented Dec 4, 2019

> If the problem you refer to had been mine, I would not even have considered this a bug -- if a submission gets accepted it is clearly computing the right answer (up to possibly rounding it to the nearest integer).

Perhaps I'm confused here. With a relative tolerance of 1e-4, if the correct answer is, for instance, the integer 200,000, then the validator accepts any integer or floating-point number from 199,980 to 200,020. For a counting problem, why would you accept answers with an absolute error of up to 20?
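
Concretely, a relative-tolerance comparison amounts to roughly the following (a minimal sketch, not the actual default validator code):

```python
def accepted(judge, team, rel_tol=1e-4):
    # Accept if the relative error is within the tolerance.
    return abs(team - judge) <= rel_tol * abs(judge)

print(accepted(200000, 199980))  # True: absolute error 20 == 1e-4 * 200000
print(accepted(200000, 200021))  # False: error 21 exceeds the tolerance
```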

austrin commented Dec 4, 2019

Ah sorry, you are right, I was reading sloppily and missed the relative part. Then I agree that this problem is buggy on Kattis (though I wouldn't go so far as to call it broken). I opened an internal issue in Kattis for this.

I am still inclined to resolve this on the problemtools side by making the documentation better (and warning clearly about this), but I'll think about it.

godmar commented Dec 4, 2019

PS: the problem here is not merely theoretical. This particular problem involves a convolution followed by counting the number of non-zero coefficients. If this is implemented with a number-theoretic transform, the exact count is obtained easily. If it is implemented with a fast Fourier transform over real-valued coefficients, then the coefficients carry rounding error: some come out non-zero even though the exact value is zero. The count of non-zero coefficients then varies with the threshold used, easily yielding different answers.
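
To make that concrete, here is a small self-contained illustration (random sparse data, not the actual problem's input) of how an FFT-based convolution makes the non-zero count threshold-dependent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse 0/1 sequences, so many convolution coefficients are exactly zero.
n = 1 << 12
a = (rng.random(n) < 0.01).astype(np.int64)
b = (rng.random(n) < 0.01).astype(np.int64)

exact = np.convolve(a, b)                      # exact integer convolution
print("exact non-zero count:", np.count_nonzero(exact))

m = 1 << 13                                    # FFT length >= 2*n - 1
approx = np.fft.irfft(np.fft.rfft(a, m) * np.fft.rfft(b, m), m)[:2 * n - 1]

# With a real-valued FFT the "zero" coefficients come back as tiny non-zero
# values, so the count depends on the chosen threshold.
for eps in (0.5, 1e-9, 1e-15):
    print("threshold", eps, "->", np.count_nonzero(np.abs(approx) > eps))
```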
