openshift-virtualization-tests Test Plan

[Feature Title/Name] - Quality Engineering Plan

Metadata & Tracking

  • Enhancement(s): [Links to enhancement(s); KubeVirt, OpenShift, etc.]
  • Feature Tracking: [Link to the relevant feature in Jira]
  • Epic Tracking: [Link to the tracking Jira Epic]
  • QE Owner(s): [Name(s)]
  • Owning SIG: [sig-xyz]
  • Participating SIGs: [List of participating SIGs]

Document Conventions (if applicable): [Define acronyms or terms specific to this document]

Feature Overview

[Brief description of the feature and its purpose]


I. Motivation and Requirements Review (QE Review Guidelines)

This section documents the mandatory QE review process. The goal is to understand the feature's value, technology, and testability before formal test planning.

1. Requirement & User Story Review Checklist

  • Review Requirements

    • List the key downstream (D/S) requirements reviewed: [Summarize requirements here]
  • Understand Value and Customer Use Cases

    • Describe the feature's value to customers: [Describe the customer value here]
    • List the customer use cases identified: [List use cases here]
  • Testability

    • Note any requirements that are unclear or untestable: [List unclear or untestable requirements, or "None"]
  • Acceptance Criteria

    • List the acceptance criteria: [Add acceptance criteria here]
    • Note any gaps or missing criteria: [Describe gaps, or "None"]
  • Non-Functional Requirements (NFRs)

    • List applicable NFRs and their targets: [e.g., Resource Efficiency: < 5% CPU overhead on host during feature operation, Security: RBAC enforced, Scalability: supports 500 VMs]
    • Note any NFRs not covered and why: [e.g., "Scalability — no test environment with 500+ VMs available", or "None"]
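NFR targets like the resource-efficiency example above are easiest to verify when expressed as a simple computation. A minimal sketch of such a check (the 5% threshold and the interpretation of "overhead" as relative CPU increase over a baseline are illustrative assumptions, not part of this template):

```python
def cpu_overhead_pct(baseline_cpu: float, feature_cpu: float) -> float:
    """Relative CPU overhead (%) of the feature compared to a baseline run.

    baseline_cpu / feature_cpu are host CPU utilization samples (e.g., cores
    or percent) taken without and with the feature active, respectively.
    """
    if baseline_cpu <= 0:
        raise ValueError("baseline_cpu must be positive")
    return (feature_cpu - baseline_cpu) / baseline_cpu * 100.0


def meets_cpu_target(baseline_cpu: float, feature_cpu: float,
                     max_overhead_pct: float = 5.0) -> bool:
    """True if measured overhead stays below the NFR target (default 5%)."""
    return cpu_overhead_pct(baseline_cpu, feature_cpu) < max_overhead_pct
```

How the measurements are collected (Prometheus queries, node-level sampling, etc.) is feature-specific and out of scope for this sketch.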

2. Known Limitations

These limitations are documented to ensure alignment among the development, QE, and product teams. The following are confirmed product constraints accepted before testing begins.

  • [Feature Limitation 1]

    • Sign-off: [Name/Date — confirms awareness and acceptance of this limitation]
  • [Feature Limitation 2]

    • Sign-off: [Name/Date — confirms awareness and acceptance of this limitation]

3. Technology and Design Review

  • Developer Handoff/QE Kickoff

    • Key takeaways and concerns: [Summarize key points and concerns]
  • Technology Challenges

    • List identified challenges: [Describe challenges here]
    • Impact on testing approach: [Describe impact on testing]
  • API Extensions

    • List new or modified APIs: [Add APIs here]
    • Testing impact: [Describe testing impact]
  • Test Environment Needs

    • See environment requirements in Section II.3 and testing tools in Section II.3.1
  • Topology Considerations

    • Describe topology requirements: [Add topology requirements here]
    • Impact on test design: [Describe impact on test design]

II. Software Test Plan (STP)

This STP serves as the overall roadmap for testing, detailing the scope, approach, resources, and schedule.

1. Scope of Testing

Testing Goals

  • [P0] [List key functional areas to be tested with priority]
  • [P1] [List non-functional requirements to be tested with priority]
  • [P2] [Reference specific user stories from Section I with priority]

Out of Scope (Testing Scope Exclusions)

The following items are explicitly Out of Scope for this test cycle and represent intentional exclusions. No verification activities will be performed for these items, and any related issues found will not be classified as defects for this release.

  • [e.g., Core OCP network functionality]

    • Rationale: The core functionality is already covered by the OCP Network team; no duplication of their test effort
    • PM/Lead Agreement: [Name/Date]
  • [e.g., Special guest OS coverage (e.g., Windows)]

    • Rationale: Feature is expected to work with Windows guests but no explicit tests are planned; validation uses Fedora-based guests
    • PM/Lead Agreement: [Name/Date]

Test Limitations

  • [Test Limitation 1]

    • Sign-off: [Name/Date — confirms awareness and acceptance of this limitation]
  • [Test Limitation 2]

    • Sign-off: [Name/Date — confirms awareness and acceptance of this limitation]

2. Test Strategy

Functional

  • Functional Testing — Validates that the feature works according to the specified requirements and user stories

    • Details: [Add details]
  • Automation Testing — Confirms a test automation plan is in place for CI and regression coverage (all tests are expected to be automated)

    • Details: [Add details]
  • Regression Testing — Verifies that new changes do not break existing functionality

    • Details: [Add details]

Non-Functional

  • Performance Testing — Validates that feature performance meets requirements (latency, throughput, resource usage)

    • Details: [Add details]
  • Scale Testing — Validates feature behavior under increased load and at production-like scale (e.g., large numbers of VMs, nodes, or concurrent operations)

    • Details: [Add details]
  • Security Testing — Verifies security requirements: RBAC, authentication, authorization, and vulnerability scanning

    • Details: [Add details]
  • Usability Testing — Validates user experience and accessibility requirements

    • Does the feature require a UI? If so, ensure the UI aligns with the requirements (UI/UX consistency, accessibility)
    • Does the feature expose CLI commands? If so, validate usability and that the needed information is available (e.g., status conditions, clear output)
    • Does the feature trigger backend operations that should be reported to the admin? If so, validate that the user receives clear feedback about the operation and its outcome (e.g., status conditions, events, or notifications indicating success or failure)
    • Details: [Add details]
  • Monitoring — Does the feature require metrics and/or alerts?

    • Details: [Add details]

Integration & Compatibility

  • Compatibility Testing — Ensures the feature works across supported platforms, versions, and configurations

    • Does the feature maintain backward compatibility with previous API versions and configurations?
    • Details: [Add details]
  • Upgrade Testing — Validates upgrade paths from previous versions, data migration, and configuration preservation

    • Details: [Add details]
  • Dependencies — Deliverables from other components/products that block testing. Identify what is needed from other teams before testing can begin.

    • Details: [Add details]
  • Cross Integrations — Does the feature affect other features or require testing by other teams? Identify the impact this feature has on other components.

    • Details: [Add details]

Infrastructure

  • Cloud Testing — Does the feature require multi-cloud platform testing? Consider cloud-specific features.
    • Details: [Add details]

3. Test Environment

  • Cluster Topology: 3-master/3-worker bare-metal

  • OCP & OpenShift Virtualization Version(s): [e.g., OCP 4.20 with OpenShift Virtualization 4.20]

  • CPU Virtualization: VT-x (Intel) or AMD-V enabled

  • Compute Resources: Minimum per worker node: 8 vCPUs, 32GB RAM

  • Special Hardware: N/A

  • Storage: ocs-storagecluster-ceph-rbd-virtualization

  • Network: OVN-Kubernetes, IPv4

  • Required Operators: N/A

  • Platform: Bare metal

  • Special Configurations: N/A

3.1. Testing Tools & Frameworks

  • Test Framework: Standard

  • CI/CD: N/A

  • Other Tools: N/A

4. Entry Criteria

The following conditions must be met before testing can begin:

  • Requirements and design documents are approved and merged
  • Test environment can be set up and configured (see Section II.3 - Test Environment)
  • [Add feature-specific entry criteria as needed]

5. Risks

Timeline/Schedule

  • Risk: [Describe the specific scheduling or deadline risk that could delay testing]
    • Mitigation: [Propose how to adjust scope, priorities, or resources to meet the timeline]
    • Estimated impact on schedule: [Add estimated delay or schedule impact]
    • Sign-off: [Name/Date — confirms awareness and acceptance of this risk]

Test Coverage

  • Risk: [Describe gaps in test coverage and which areas remain unverified]
    • Mitigation: [Propose alternative testing strategies or acceptance of reduced coverage]
    • Areas with reduced coverage: [List affected areas]
    • Sign-off: [Name/Date — confirms awareness and acceptance of this risk]

Test Environment

  • Risk: [Describe hardware, software, or infrastructure constraints that limit testing]
    • Mitigation: [Propose how to secure required resources or adapt the test plan]
    • Missing resources or infrastructure: [List what is unavailable]
    • Sign-off: [Name/Date — confirms awareness and acceptance of this risk]

Untestable Aspects

  • Risk: [Describe scenarios or conditions that cannot be reproduced in a test environment]
    • Mitigation: [Propose alternative validation methods such as smaller-scale tests or production monitoring]
    • Alternative validation approach: [Describe fallback validation method]
    • Sign-off: [Name/Date — confirms awareness and acceptance of this risk]

Resource Constraints

  • Risk: [Describe staffing, skill, or capacity limitations affecting test execution]
    • Mitigation: [Propose how to prioritize work, cross-train, or coordinate with other teams]
    • Current capacity gaps: [Describe staffing or skill gaps]
    • Sign-off: [Name/Date — confirms awareness and acceptance of this risk]

Dependencies

  • Risk: [Describe external team or component dependencies that could block testing]
    • Mitigation: [Propose coordination plans, fallback strategies, or stub implementations]
    • Dependent teams or components: [List external dependencies]
    • Sign-off: [Name/Date — confirms awareness and acceptance of this risk]

Other

  • Risk: [Describe any additional risks not covered by the categories above]
    • Mitigation: [Propose a specific plan to reduce or eliminate this risk]
    • Sign-off: [Name/Date — confirms awareness and acceptance of this risk]

III. Test Scenarios & Traceability

  • [Jira-123] — As a user...

    • Test Scenario: [Tier 1] Verify VM can be created with new feature X
    • Priority: P0
  • [Jira-124] — As an admin...

    • Test Scenario: [Tier 2] Verify API for feature X is backward-compatible
    • Priority: P0
  • [Jira-125] — As an admin user, I want to block non-admin users from deleting VMs

    • Test Scenario: [Tier 2] Verify non-admin user cannot delete a VM
    • Priority: P1
  • [Jira-126] — As a cluster admin...

    • Test Scenario: [Tier 2] Verify upgrade from version X to Y preserves feature state
    • Priority: P2
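The story-to-scenario mapping above forms a traceability matrix, and keeping it machine-checkable helps catch stories that silently lose coverage. A lightweight sketch of such a check (the Jira IDs, tiers, and priorities mirror the placeholder examples above and are hypothetical):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Scenario:
    story: str      # Jira user-story ID this scenario traces back to
    tier: int       # test tier (1 = critical path, 2 = extended coverage)
    priority: str   # P0 (must run) through P2 (nice to have)
    title: str


SCENARIOS = [
    Scenario("Jira-123", 1, "P0", "VM can be created with new feature X"),
    Scenario("Jira-124", 2, "P0", "API for feature X is backward-compatible"),
    Scenario("Jira-125", 2, "P1", "Non-admin user cannot delete a VM"),
    Scenario("Jira-126", 2, "P2", "Upgrade from X to Y preserves feature state"),
]


def untraced_stories(stories: list[str], scenarios: list[Scenario]) -> list[str]:
    """Return the stories with no test scenario covering them."""
    covered = {s.story for s in scenarios}
    return sorted(set(stories) - covered)
```

A check like this can run in CI against the Epic's story list, failing the build when a new story lands without a corresponding scenario.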

IV. Sign-off and Approval

This Software Test Plan requires approval from the following stakeholders:

  • Reviewers:
    • [Name / @github-username]
    • [Name / @github-username]
  • Approvers:
    • [Name / @github-username]
    • [Name / @github-username]