Description
I'm filing this as a bug because of a mismatch with the behavior I expect, based on experience with another tool that performs a semantically similar function on a different input format.
If verify.py identifies semantic issues in the data, such as a term found in an input file that does not exist in the glossary, a note is made in the output, but the program exits with status 0.
In contrast, another tool that performs a validation function, xmllint with the --schema argument, exits 0 only if the input XML document adheres to the specified schema. If there is any deviation from the schema, xmllint exits 1. (Without the --schema argument, xmllint exits 1 if the input is malformed XML.)
I personally expect a tool that validates content against a specification to exit non-0 if the input document does not adhere to the specified format/schema/vocabulary. Exiting 0 for non-adhering content can give a false sense of content validation. I've had to resort to running grep on the output log for patterns I know indicate incorrect content, as a post-processing step.
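The post-processing workaround above can be sketched in Python. The log contents and the error pattern here are hypothetical placeholders, not verify.py's actual output format:

```python
import re

# Hypothetical pattern; verify.py's actual log wording may differ.
ERROR_PATTERN = re.compile(r'not found in glossary')

def log_indicates_failure(log_text: str) -> bool:
    """Return True if any log line matches a known incorrect-content pattern."""
    return any(ERROR_PATTERN.search(line) for line in log_text.splitlines())

# Simulated log from a run that flagged a problem but still exited 0.
sample_log = 'processing input.txt\nterm "frobnicate" not found in glossary\n'
print(log_indicates_failure(sample_log))  # True for this sample log
```

This is exactly the kind of fragile pattern-matching a non-zero exit status would make unnecessary.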
What should the default behavior of verify.py be?
- Exit non-0 on non-adhering content, by default?
- Exit non-0 on non-adhering content, if a --strict flag is passed?
- Continue current behavior (only exiting non-0 if the command line is malformed), perhaps including documentation on how to identify incorrect content flagged in the output stream?
- Continue current behavior, but add a mechanically recognizable/parseable summary statement at the end saying "Content passes" or "Content fails"?
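For comparison, here is a minimal sketch of how the strict-flag and summary-line options could coexist. The flag name, function names, and sample data are all assumptions for illustration, not the tool's actual code:

```python
import argparse

def find_problems(terms, glossary):
    """Hypothetical check: report input terms missing from the glossary."""
    return [t for t in terms if t not in glossary]

def run(argv):
    parser = argparse.ArgumentParser(prog="verify.py")
    parser.add_argument("--strict", action="store_true",
                        help="exit non-zero when semantic issues are found")
    args = parser.parse_args(argv)

    problems = find_problems(["widget", "frobnicate"], {"widget"})
    for term in problems:
        print(f'term "{term}" not found in glossary')

    # Summary-line option: a machine-parseable verdict, printed either way.
    print("Content fails" if problems else "Content passes")

    # Strict option: only --strict turns flagged content into a failing status.
    return 1 if (problems and args.strict) else 0
```

Under this sketch, `run([])` keeps today's exit-0 behavior while still emitting a grep-friendly summary, and `run(["--strict"])` returns 1 for the same input.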
I personally vote for the first, though I think the fourth is followed in other validating programs. (I don't have a Schematron instance handy to check, but I think the fourth option is its behavior.)