verify.py exit status should reflect validation status #2

Open
ajnelson-nist opened this issue Apr 9, 2018 · 1 comment

Comments

@ajnelson-nist
Member

I'm filing this as a bug because of a mismatch in behavior I expect from experience with another tool that performs a semantically similar function on a different input format.

If verify.py identifies semantic issues in data, such as a term found in an input file that does not exist in the glossary, a note is made in the output, but the program exits with status 0.

In contrast, another tool that performs a validation function, xmllint with the --schema argument, exits 0 only if the input XML document adheres to the specified schema. If there is any deviation from the schema, xmllint exits 1. (Without the --schema argument, xmllint exits 1 if the input is malformed XML.)

I personally expect a tool that validates content against a specification to exit non-0 if the input document does not adhere to the specified format/schema/vocabulary. Exiting 0 for non-adhering content can give a false sense of content validation. I've had to resort to running grep on the output log for patterns I know indicate incorrect content, as a post-processing step.

What should the default behavior of verify.py be?

  • Exit non-0 on non-adhering content, by default?
  • Exit non-0 on non-adhering content, if a --strict flag is passed?
  • Continue current behavior (only exiting non-0 if the command line is malformed), perhaps including documentation on how to identify incorrect content flagged in the output stream?
  • Continue current behavior, but add a mechanically recognizable/parseable summary statement at the end saying "Content passes" or "Content fails"?

I personally vote for the first, though I think the fourth is what other validating programs follow. (I don't have a Schematron instance handy to check, but I believe the fourth option matches its behavior.)
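To make the first option concrete, here is a minimal sketch of the exit-status pattern. The validate() stub, its error list, and the argument names are hypothetical stand-ins rather than verify.py's actual internals; only the final sys.exit call is the point.

```python
#!/usr/bin/env python3
"""Sketch of exit-status handling for a validator (option one)."""

import argparse
import sys


def validate(path):
    """Return a list of human-readable validation errors (empty if clean)."""
    errors = []
    # ... real checks would go here, e.g. flagging terms absent from the glossary ...
    return errors


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("input_file")
    args = parser.parse_args()

    errors = validate(args.input_file)
    for message in errors:
        print(message, file=sys.stderr)

    # Non-zero by default on any non-adhering content, mirroring
    # the behavior of xmllint --schema.
    sys.exit(1 if errors else 0)


if __name__ == "__main__":
    main()
```

The second option would be the same pattern with the non-zero exit gated behind an argparse --strict flag.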

@jstroud-mitre
Contributor

I am also in favor of "non-zero" by default, meaning an issue has occurred with the parsing of the data. With the "verbose" flag discussed in issue 3, we can add the "content passes" or "content fails" statement.
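A small sketch of how the two could pair, assuming an argparse --verbose flag (the one discussed in issue 3) and an errors list like the one in the sketch above; both names are placeholders, not verify.py's actual interface:

```python
# Hypothetical tail of main(): the summary string is a stable,
# machine-parseable marker (option four), while the exit status
# stays non-zero by default (option one).
if args.verbose:
    print("Content fails" if errors else "Content passes")
sys.exit(1 if errors else 0)
```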

Thoughts?
