[MAINTENANCE] Remove test_expectations_v3_api.py #11098
Conversation
✅ Deploy Preview for niobium-lead-7998 canceled.
Codecov Report: All modified and coverable lines are covered by tests ✅
✅ All tests successful. No failed tests found.

@@ Coverage Diff @@
##           develop   #11098      +/-   ##
===========================================
- Coverage    81.20%   80.31%   -0.89%
===========================================
  Files          502      502
  Lines        41454    41443      -11
===========================================
- Hits         33661    33284     -377
- Misses        7793     8159     +366

Flags with carried forward coverage won't be shown.
Force-pushed from 7f572a0 to b5e7f62.
@@ -79,6 +79,10 @@ def test_event_identifiers(analytics_config):
    assert "organization_id" not in properties


@pytest.mark.xfail(
This test was only passing because other tests ran first and set up the context; it always fails when run in isolation. I will write a ticket to follow up on fixing this.
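For context, a minimal sketch of this kind of marker, assuming pytest's xfail with a reason string; the test name and exact arguments are illustrative, not the code from this PR:

```python
import pytest


# Hypothetical example: the test name and reason text are assumptions.
@pytest.mark.xfail(
    reason="Passes only when earlier tests have set up the context; fails in isolation.",
    strict=False,  # tolerate an unexpected pass when the suite runs in order
)
def test_depends_on_prior_context():
    ...
```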
@@ -0,0 +1,92 @@
{
I deleted all the JSON files but this one. It turns out we have some PostgreSQL tests that depend on this table being created, and it was previously created in the setup of the deleted tests. I've updated this file to be PostgreSQL-only, deleted all the test info, leaving just the data, and moved the file next to the test module that uses it.
@@ -30,6 +42,30 @@
VALUES_ON_MY_FAVORITE_DAY = [8]


@pytest.fixture(scope="module", autouse=True)
def setup_module():
    # setup table
These tests rely on a table that used to be created and populated in test_expectations_v3_api.py, so they would only pass when run after those tests and would fail in isolation. I've moved the table-creation logic into this module, along with an edited version of the one JSON file that defined the table.
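A rough sketch of what a module-scoped, autouse setup fixture like this can look like, assuming SQLAlchemy against a local PostgreSQL instance; the connection URL, table name, and values are placeholders, not the code in this PR:

```python
import pytest
import sqlalchemy as sa

# Assumed connection URL and table/column names, for illustration only.
POSTGRES_URL = "postgresql://postgres@localhost:5432/test_ci"


@pytest.fixture(scope="module", autouse=True)
def setup_module():
    """Create and populate the table these tests previously inherited from
    test_expectations_v3_api.py, and drop it once the module finishes."""
    engine = sa.create_engine(POSTGRES_URL)
    with engine.begin() as conn:
        conn.execute(sa.text("CREATE TABLE IF NOT EXISTS my_table (col_1 INTEGER)"))
        conn.execute(
            sa.text("INSERT INTO my_table (col_1) VALUES (:v)"),
            [{"v": v} for v in [8]],
        )
    yield
    with engine.begin() as conn:
        conn.execute(sa.text("DROP TABLE IF EXISTS my_table"))
    engine.dispose()
```

Because the fixture is autouse and module-scoped, every test in the module gets the table without requesting the fixture explicitly, which removes the hidden ordering dependency.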
"w", | ||
) as f: | ||
json.dump(test_results, f, indent=2) | ||
def test_expectations_using_expectation_definitions(): |
I believe the purpose of this test was to verify a specific rendering for all expectations that were defined using test_expectations_v3_api.py. Since that testing strategy was hard to maintain and didn't cover all the expectations, it isn't ideal. For the purposes of this PR, I extracted all the expectations that were under test and moved them into BulletListContentBlock.json. This let me remove all the filtering, and I can not iterate through them and run the asserts in this test. This should be equivalent to what was being run before.
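A hedged sketch of that iteration, assuming the JSON maps expectation types to lists of test cases; the file location, the schema, and the render_bullet_list_content_block helper are all hypothetical stand-ins, not the code in this PR:

```python
import json
from pathlib import Path

# Assumed location and schema of the definitions file.
DEFINITIONS_PATH = Path(__file__).parent / "BulletListContentBlock.json"


def test_expectations_using_expectation_definitions():
    definitions = json.loads(DEFINITIONS_PATH.read_text())
    for expectation_type, cases in definitions.items():
        for case in cases:
            # render_bullet_list_content_block is a hypothetical helper standing
            # in for whatever renderer the real test exercises.
            result = render_bullet_list_content_block(expectation_type, case)
            assert result is not None
```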
How do we ensure tests/render/BulletListContentBlock.json is always up to date?

Also, "and I can not iterate through them": s/not/now ?
It's already out of date, since we've already moved away from this JSON testing framework. I think we should eventually delete this method and incorporate this test into our new expectation testing framework. I will file a follow-up ticket.

I think keeping it current is outside the scope of this work: it's already falling out of date, and this change doesn't make that worse. It's a good callout to fix it, though.
The ticket is: https://greatexpectations.atlassian.net/browse/GX-850
    )


# TODO: accommodate case where multiple datasets exist within one expectation test definition
with open(
I've removed this since it doesn't seem to be testing anything. Maybe it tests that the results are JSON-serializable? Every result should be JSON-serializable because we've already called to_json_dict above, and unit tests for that behavior should live there. Also, we don't need to actually write a file to verify this.
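If serializability ever does need an explicit check, here is a minimal sketch without file I/O, assuming results expose the to_json_dict method mentioned above; the shape of `result` is assumed:

```python
import json


def assert_result_is_json_serializable(result):
    # json.dumps raises TypeError if anything in the payload is not
    # serializable, so no temporary file is needed.
    json.dumps(result.to_json_dict())
```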
I like the refactor to test_expectations_using_expectation_definitions, but I'm curious whether we have a way to generate tests/render/BulletListContentBlock.json and ensure it is always current.
I've added https://greatexpectations.atlassian.net/browse/GX-850 to track the block-render test work. Adding this to the merge queue.
invoke lint (uses ruff format + ruff check)

For more information about contributing, visit our community resources.
After you submit your PR, keep the page open and monitor the statuses of the various checks made by our continuous integration process at the bottom of the page. Please fix any issues that come up and reach out on Slack if you need help. Thanks for contributing!