
Improve tracking of skipped tests #13623

@roaks3


Background

We currently have a few different ways to skip tests. For VCR tests, we have the skip_vcr field in mmv1 and the t.skipIfVcr() function for handwritten tests, both of which prevent the test from running in VCR (i.e., for PRs). More broadly, we have the skip_test field in mmv1, which prevents the test file from being generated at all, and the t.Skip() function for handwritten tests, which prevents the test from running in any environment.
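
For reference, the two handwritten-test mechanisms look roughly like the sketch below. This is a minimal illustration only: the skipIfVcr helper and the VCR_MODE environment check are stand-ins for the provider's actual VCR plumbing, not its real API.

```go
package google_test

import (
	"os"
	"testing"
)

// skipIfVcr is a stand-in for the provider's VCR helper (the name and the
// environment flag are assumptions, not the real implementation): it skips
// the test only when the suite runs in VCR mode, which is how PR presubmits
// execute acceptance tests.
func skipIfVcr(t *testing.T) {
	if os.Getenv("VCR_MODE") != "" {
		t.Skip("skipping test in VCR mode")
	}
}

func TestAccExampleResource_vcrOnlySkip(t *testing.T) {
	// Skips only under VCR (PR presubmits); still runs in non-VCR environments.
	skipIfVcr(t)

	// ... acceptance test body ...
}

func TestAccExampleResource_alwaysSkipped(t *testing.T) {
	// Skips everywhere, but the skip (and its reason) is reported by `go test`
	// and therefore surfaced in TeamCity.
	t.Skip("consistently failing; skipped pending a fix")

	// ... acceptance test body ...
}
```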

While these mechanisms all effectively accomplish the same thing (not running the test), they make it hard to track how many tests are currently being skipped. This is particularly important for tests that are skipped because they are consistently failing and that we intend to come back and fix. Today we need to sort through each instance of skip_test in the code individually to determine why the test was skipped, whereas handwritten tests are a little easier to audit because t.Skip() calls are surfaced in TeamCity.

Details

The ideal solution would be to have skipped mmv1 tests still generate the test and use t.Skip(), so that we can track them in TeamCity. In some cases, skip_test is used for examples that are not intended to have a corresponding test at all, so an ideal solution would also let us mark these differently from tests that need to be fixed.
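
As a rough sketch of what that could look like, a generated test for a skip_test example might be emitted with a leading t.Skip() instead of being omitted entirely. The test name and skip reason below are hypothetical, not actual generator output.

```go
package google_test

import "testing"

// Hypothetical output if mmv1 generated skipped tests rather than omitting
// them: the generated body stays intact, but a leading t.Skip() records the
// reason, so the skip shows up in TeamCity alongside handwritten skips.
func TestAccComputeExampleResource_exampleBasic(t *testing.T) {
	t.Skip("skipped via skip_test in mmv1; consistently failing, to be fixed")

	// ... generated acceptance test body ...
}
```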

Possible solutions:

  • Add a new field for tests that are meant to be fixed later, e.g. skip_test_with_generation
  • Use skip_test for tests that are meant to be fixed later, and create a new field for tests that should not be generated, e.g. no_test
  • Come up with another form of tracking, such as relying on comments next to all instances of skip_test in the code
