I'm filing this as a bug because the behavior doesn't match what I've come to expect from another tool that performs a semantically similar function on a different input format.
If `verify.py` identifies semantic issues in the data, such as a term found in an input file that does not exist in the glossary, a note is made in the output, but the program exits with status `0`.
In contrast, another tool that performs a validation function, `xmllint` with the `--schema` argument, exits `0` only if the input XML document adheres to the specified schema. If there is any deviation from the schema, `xmllint` exits `1`. (Without the `--schema` argument, `xmllint` exits `1` if the input is malformed XML.)
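This contract makes `xmllint` easy to script against. A minimal sketch in Python of branching on its exit status; the schema and document names are hypothetical placeholders:

```python
import subprocess

# Hypothetical file names; substitute your own schema and document.
result = subprocess.run(
    ["xmllint", "--noout", "--schema", "glossary.xsd", "input.xml"],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print("document adheres to the schema")
else:
    # xmllint reports each deviation on stderr and exits non-0,
    # so callers can branch on the exit status alone.
    print("validation failed:")
    print(result.stderr)
```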
I personally expect a tool that validates content against a specification to exit non-`0` if the input document does not adhere to the specified format/schema/vocabulary. Exiting `0` for non-adhering content can give a false sense of content validation. I've had to resort to running `grep` on the output log for patterns I know indicate incorrect content, as a post-processing step.
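For context, that workaround amounts to something like the sketch below; the log name and patterns are hypothetical stand-ins, since they depend on the exact messages `verify.py` prints:

```python
import re
import sys

# Hypothetical log name and patterns standing in for whatever
# messages verify.py actually emits for incorrect content.
ISSUE_PATTERNS = [
    re.compile(r"not in glossary"),
    re.compile(r"unknown term"),
]

found_issue = False
with open("verify.log") as log:
    for line in log:
        if any(p.search(line) for p in ISSUE_PATTERNS):
            found_issue = True
            sys.stderr.write(line)

# Recover the exit status the tool itself doesn't provide.
sys.exit(1 if found_issue else 0)
```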
What should the default behavior of `verify.py` be?

1. Exit non-`0` on non-adhering content, by default?
2. Exit non-`0` on non-adhering content, if a `--strict` flag is passed?
3. Continue current behavior (only exiting non-`0` if the command line is malformed), perhaps including documentation on how to identify incorrect content flagged in the output stream?
4. Continue current behavior, but add a mechanically recognizable/parseable summary statement at the end saying "Content passes" or "Content fails"?
I personally vote for the first, though I think the fourth is what other validating programs follow. (I don't have a Schematron instance handy to check, but I think that is its behavior.)
I am also in favor of exiting non-zero by default to mean an issue occurred while parsing the data. With the verbose flag discussed in issue 3, we can add the "Content passes" or "Content fails" statement.
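A minimal sketch of how the two could combine in `verify.py`, assuming (hypothetically) that its checks funnel issue messages through a single helper:

```python
import sys

issue_count = 0

def report(message):
    """Note a semantic issue in the output and count it (hypothetical helper)."""
    global issue_count
    issue_count += 1
    print(message)

# ... existing verification logic would call report() once per problem ...

# Machine-parseable summary plus a meaningful exit status:
# option 1 (non-0 exit) and option 4 (summary line) together.
if issue_count:
    print("Content fails")
    sys.exit(1)
print("Content passes")
sys.exit(0)
```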