During the validation of the FROG spec with FBCMT, I have encountered some cases where models were reproducible, with the exception of a small number of reactions. Instead of having to read through the whole report, unsure what exactly caused the reported errors, and simply retrying with relaxed parameters (-a 0.01 -r 0.01), it would be easier for me as a curator if this (informal) summary were printed at the end. And if it's too much for the current version of the specification, it could be implemented as a --verbose flag by individual tools for now.
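To make the role of the two relaxed parameters concrete, here is a minimal sketch of how a comparison with an absolute tolerance (-a) and a relative tolerance (-r) might decide whether a reproduced flux matches the reference. The function name, the combined-tolerance rule, and the thresholds are illustrative assumptions (the rule mirrors Python's math.isclose), not behavior mandated by the FROG spec or FBCMT.

```python
# Hypothetical sketch: a value passes if it is within EITHER the
# absolute tolerance or the relative tolerance of the reference.
# Thresholds default to the relaxed values mentioned above (0.01).

def fluxes_match(reference: float, reproduced: float,
                 abs_tol: float = 0.01, rel_tol: float = 0.01) -> bool:
    """Return True if the reproduced flux is within tolerance."""
    diff = abs(reference - reproduced)
    # Combined rule in the spirit of math.isclose: scale the relative
    # tolerance by the larger magnitude, then take the looser bound.
    return diff <= max(abs_tol, rel_tol * max(abs(reference), abs(reproduced)))

print(fluxes_match(1.0, 1.005))  # small deviation: within tolerance
print(fluxes_match(1.0, 1.5))    # large deviation: out of tolerance
```

A per-reaction summary could then count how many reactions fail this check at the default tolerances versus the relaxed ones, which is exactly the kind of digest a curator would want at the end of the report.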
Hi,
I'm all in. Continuing here from the email thread we had earlier:
I'll need to think about how much of this should be mandated for all implementations via the spec requirements, and how much should just be recommended (or simply implemented in FBCMT and other tools).
Clear pros for me:
tolerances are a pretty contentious subject, and their interpretation cannot be captured by just two thresholds
reporting of numerical imprecision would be much more useful if users could at least print a histogram, or see which categories of tests failed most often
Possible problems:
we have already committed to a quite binding methodology, and this might lock FROG irreversibly into an even more specific one, complicating the implementation of alternative approaches
I suggest this:
the spec recommends that implementations be capable of outputting the tolerance statistics and test results in an interoperable format, with a simple TSV as the recommended choice