[Storage] Storage Tier2: use DataSource instead of artifactory in test_wffc.py#4368
Conversation
/wip
⚠️ Warning: Rate limit exceeded
Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 28 minutes and 31 seconds. CodeRabbit enforces hourly rate limits for each developer per organization; paid plans have higher rate limits than the trial, open-source, and free plans, and in all cases further reviews are re-allowed after a brief timeout. We recommend spacing out commits to avoid hitting the rate limit. See the FAQ for further information.
📝 Walkthrough
Replaced Cirros/Artifactory DV template construction with blank-source DataVolumes using sourceRef (RHEL data source). Added fixtures to select a single WFFC storage class and create blank DVs/templates. Updated VM tests to use RHEL, new fixtures, adjusted DV-add/upload flows, and removed two tests (registry DV and clone DV).
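The blank-source DataVolumes the walkthrough mentions can be pictured as a CDI manifest like the following. This is only an illustrative sketch: the function name, DV name, size, and storage class are placeholders, not values taken from the PR; the storage class is assumed to use WaitForFirstConsumer (WFFC) volume binding.

```python
# Sketch of a blank-source DataVolume manifest (CDI v1beta1). The name,
# size, and storage class below are placeholders, not values from the PR.
def blank_dv_manifest(name: str, storage_class: str, size: str = "1Gi") -> dict:
    return {
        "apiVersion": "cdi.kubevirt.io/v1beta1",
        "kind": "DataVolume",
        "metadata": {"name": name},
        "spec": {
            # A "blank" source tells CDI to create an empty disk image
            # instead of importing from HTTP or a registry.
            "source": {"blank": {}},
            "storage": {
                "storageClassName": storage_class,  # assumed WFFC-binding class
                "resources": {"requests": {"storage": size}},
            },
        },
    }

manifest = blank_dv_manifest("blank-dv", "hostpath-csi")
```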
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/storage/test_wffc.py`:
- Around line 121-126: The call to create_rhel_dv_from_data_source uses the
wrong kwarg and a hard-coded param key: the helper expects rhel10_data_source
(not rhel_data_source) and the test parametrization supplies
first_dv_name/second_dv_name (not request.param["dv_name"]). Update the call to
pass rhel10_data_source=<fixture> and replace request.param["dv_name"] with the
appropriate param key (e.g., request.param["first_dv_name"] or make the test use
a single normalized param name like "dv_name") so the kwarg names and parameter
schema match the create_rhel_dv_from_data_source signature.
- Around line 113-120: The fixture rhel_dv_for_wffc_tests is declared with
scope="module" but depends on the function-scoped fixture
data_volume_multi_wffc_storage_scope_function, causing a scope mismatch; either
change rhel_dv_for_wffc_tests to be function-scoped or swap the dependency to
the module-scoped fixture data_volume_multi_wffc_storage_scope_module so the
dependent fixture scopes are compatible—update the fixture signature accordingly
to reference the chosen dependency and adjust the `@pytest.fixture(scope=...)`
declaration on rhel_dv_for_wffc_tests.
- Around line 95-110: Remove the `@pytest.fixture(scope="module")` decorator from
the create_rhel_dv_from_data_source function so it remains a plain function that
uses the create_dv contextmanager and can be invoked with with statements; also
rename the parameter from rhel10_data_source to rhel_data_source (or update
callers consistently) to match the argument passed elsewhere and avoid the
TypeError, keeping the body calling create_dv, dv.wait_for_dv_success(), and
yielding dv unchanged.
- Around line 337-373: The parametrize currently places pytest.mark.polarion
into argvalues and the test never requests the rhel_dv_for_wffc_tests fixture
while still using request.param; fix by changing the `@pytest.mark.parametrize` to
supply two explicit parameters (first_dv_name, second_dv_name) and apply
pytest.mark.polarion via the marks= argument to the single test case, ensure the
test function signature includes first_dv_name and second_dv_name (and keep
request only if other uses require it), and replace all
request.param["first_dv_name"]/["second_dv_name"] usages with the direct
parameters first_dv_name and second_dv_name; leave the rhel_dv_for_wffc_tests
fixture alone if still needed, but do not expect request.param from it.
- Around line 302-323: The parametrization block for rhel_dv_for_wffc_tests is
ineffective because the fixture isn't included in the test function signature;
either add rhel_dv_for_wffc_tests to the parameters of
test_wffc_add_dv_to_vm_with_data_volume_template (and similarly to
test_wffc_vm_with_two_data_volume_templates) so pytest will run the indirect
fixture and request.param will reflect {"dv_name":"template"}, or remove the
entire pytest.mark.parametrize(...) for rhel_dv_for_wffc_tests (and its
indirect=True) if that fixture/side-effect is not needed; update references to
request.param accordingly so the code uses the intended source of dv_name.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 2dfbafa3-6823-4a2d-a4f5-1e60267ba2bb
📒 Files selected for processing (1)
tests/storage/test_wffc.py
/build-and-push-container
New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4368 published
/build-and-push-container |
Signed-off-by: Adam Cinko <acinko@redhat.com>
Signed-off-by: Adam Cinko <acinko@redhat.com>
module Signed-off-by: Adam Cinko <acinko@redhat.com>
Signed-off-by: Adam Cinko <acinko@redhat.com>
as well as removing some tests that have coverage elsewhere Signed-off-by: Adam Cinko <acinko@redhat.com>
fix the test with two templates failing Signed-off-by: Adam Cinko <acinko@redhat.com>
and fix the VM status check in `vm_from_uploaded_dv`, since `VirtualMachineInstance.Status` doesn't contain a `SCHEDULING` constant Signed-off-by: Adam Cinko <acinko@redhat.com>
Signed-off-by: Adam Cinko <acinko@redhat.com>
Signed-off-by: Adam Cinko <acinko@redhat.com>
Clean rebase detected — no code changes compared to previous head
New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4368 published
/verified
Test execution plan request details
CRITICAL: You MUST post an inline review comment on the first changed line of the first file. As an expert software testing engineer, analyze all modified files in this PR and create a targeted test execution plan.
Analysis Requirements:
Your deliverable: Test Execution Plan
Guidelines:
Hardware-Related Checks (SR-IOV, GPU, DPDK): When the PR modifies fixtures for hardware-specific resources:
CRITICAL WORKFLOW COMPLETION RULES: When responding to this test execution plan request, you MUST follow these rules EXACTLY:
ABSOLUTE PROHIBITIONS (violating these creates empty/meaningless reviews):
Remember: The pull request review is visible to users. Additional comments are redundant noise.
CRITICAL: Before setting "Run smoke tests: True", you MUST use shell scripts to:
/lgtm
1 similar comment
/lgtm
/approve |
Successfully removed PR tag: quay.io/openshift-cnv/openshift-virtualization-tests:pr-4368. |
New container for quay.io/openshift-cnv/openshift-virtualization-tests:latest published
/cherry-pick cnv-4.20
Manual cherry-pick is needed |
Short description:
Use DataSource instead of artifactory in test_wffc.py
More details:
https://redhat.atlassian.net/browse/CNV-83360
Currently, most storage tests use import from HTTP as the default method to deploy DataVolumes/VMs.
Switching to golden images (except for tests that explicitly validate the import workflow) will improve test stability and reduce flakiness.
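The switch described above can be pictured as the difference between the two DataVolume specs below. This is a hedged contrast sketch: the URL, DataSource name, namespace, and sizes are illustrative placeholders, not values from the PR.

```python
# Before: an HTTP-import DV -- the test fails if the image request fails.
# (URL and size are placeholders.)
http_dv_spec = {
    "source": {"http": {"url": "https://example.com/images/rhel.qcow2"}},
    "storage": {"resources": {"requests": {"storage": "20Gi"}}},
}

# After: a DV referencing a cluster-managed golden image via sourceRef,
# so the test no longer depends on an external HTTP server being reachable.
# (DataSource name and namespace are assumed, not taken from the PR.)
datasource_dv_spec = {
    "sourceRef": {
        "kind": "DataSource",
        "name": "rhel10",
        "namespace": "openshift-virtualization-os-images",
    },
    "storage": {"resources": {"requests": {"storage": "20Gi"}}},
}
```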
Acceptance criteria:
What this PR does / why we need it:
In `test_wffc.py` the tests use HTTP to create DVs, which can make them fail when the request for the image is unsuccessful.
Which issue(s) this PR fixes:
Special notes for reviewer:
Don't hold back on criticism, I'm new and I want to get better.
Assisted with Claude Code
jira-ticket: