Add power to automated testing - e.g., by storing baselines for comparison #27

@billsacks

Description

The automated testing of cprnc relies on baseline comparisons for much of its testing power. However, as far as I can tell, the GitHub workflow that triggers these tests doesn't perform these baseline comparisons.

We should make the automated testing more powerful by doing one of these or something similar:

  1. Store baseline outputs in the repository and have the automated tests compare against these. (Then, if baselines change, they would need to be verified and updated in the repository before merging a PR.)
  2. Rather than relying on baseline comparisons, have each test encode the key things to look for in the test output (such as the line stating that the files are identical, or the line stating that the files differ along with the specific line(s) noting those differences). Then have the test runner look for these lines and fail if they aren't found. We could also list certain lines that should not be present in the output (e.g., a test where we expect diffs should look for the line noting that the files differ, and for the absence of the line noting that the files are identical). I like that this approach is more robust to incidental changes, but it seems like it could be prone to letting some unintended changes slip through the cracks.
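As a rough sketch of option 2, the test runner could take, for each test, a list of lines that must appear in the cprnc output and a list of lines that must not appear. The helper and the marker strings below are hypothetical, not cprnc's actual output format:

```python
def check_output(output, must_contain, must_not_contain):
    """Return a list of failure messages for a single test's output.

    output: the captured stdout of a cprnc run
    must_contain: substrings that must appear in the output
    must_not_contain: substrings that must be absent from the output
    """
    failures = []
    for line in must_contain:
        if line not in output:
            failures.append(f"missing expected line: {line!r}")
    for line in must_not_contain:
        if line in output:
            failures.append(f"found forbidden line: {line!r}")
    return failures


# Example: a test where we expect diffs should require the "DIFFERENT"
# marker and forbid the "IDENTICAL" marker (marker text is hypothetical).
output = "diff_test: the files seem to be DIFFERENT\n"
failures = check_output(
    output,
    must_contain=["the files seem to be DIFFERENT"],
    must_not_contain=["the files seem to be IDENTICAL"],
)
assert failures == []
```

A test that expected identical files would simply swap the two lists; the runner would report any accumulated failure messages and exit nonzero.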
