
improve automated test coverage #1118

Open
0 of 4 issues completed
@smira

Description


Missing Automated Tests for New Code

  • the new workload proxy code had bugs that were not caught by either automated or manual testing

Missing Omni "Upgrade" Tests

  • the workload service prefix annotation didn't properly handle existing services
  • a machine config generation bug (reproducing it needs "old" Talos clusters)

Solutions

Missing Automated Tests

  1. Workload Proxy: more services, more clusters, more transitions (enabling/disabling a service).
  2. Frontend "Clicky" Tests:
  • use talemu to speed it up
  • stable identifiers for elements to "click"/"assert" on: how do we mark them (e.g. a CSS class or ID), and how do we keep them stable (e.g. a test-* prefix)?
  3. Frontend unit tests.
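One way to keep click/assert targets stable is to mark them with a dedicated attribute rather than styling classes. The sketch below assumes a `data-testid` attribute and the `test-*` prefix convention mentioned above; the attribute name and helper functions are illustrative, not an existing Omni convention.

```go
package main

import (
	"fmt"
	"regexp"
)

// testIDPattern enforces the hypothetical convention: identifiers are
// prefixed with "test-" and use kebab-case, so they survive refactors
// of styling classes and element structure.
var testIDPattern = regexp.MustCompile(`^test-[a-z0-9]+(-[a-z0-9]+)*$`)

// ValidTestID reports whether id follows the test-* convention.
func ValidTestID(id string) bool {
	return testIDPattern.MatchString(id)
}

// Selector builds a CSS selector for the data-testid attribute, so
// clicky tests never depend on presentation-only classes or IDs.
func Selector(id string) (string, error) {
	if !ValidTestID(id) {
		return "", fmt.Errorf("unstable test identifier: %q", id)
	}
	return fmt.Sprintf(`[data-testid=%q]`, id), nil
}

func main() {
	sel, _ := Selector("test-cluster-create-button")
	fmt.Println(sel) // [data-testid="test-cluster-create-button"]
}
```

A CI lint step could then reject any `data-testid` value that fails `ValidTestID`, keeping the identifier set stable over time.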

Omni Upgrade Test

  1. Run the "previous" Omni and create a Talos cluster (we need to decide which Talos version to use).
  2. Prepare a cluster template.
  3. Run the omnictl matching that Omni, apply the template, and assert that the cluster is ready.
  4. Run any additional checks: the workload service proxy works, kubectl access, ...
  5. Stop the "previous" Omni and run the HEAD Omni.
  6. Assert that:
  • the cluster is healthy, everything still works
  • machines haven't been rebooted
  • machine configuration hasn't been changed
  • Omni controllers don't fail repeatedly

Blackbox approach: don't use the current Omni client code; use the matching omnictl, so we can run the integration tests of the matching version.
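The blackbox flow above could be driven from a declarative plan of steps, each pinned to a specific binary version. The sketch below is only an illustration of that structure: the binary names, arguments, and "check" steps are hypothetical placeholders, not real omnictl syntax.

```go
package main

import "fmt"

// Step is one action in the blackbox upgrade test: which versioned
// binary to invoke, and with what (illustrative) arguments.
type Step struct {
	Binary string
	Args   string
}

// UpgradePlan sketches the upgrade-test flow for a prev -> head pair.
// Every command here is a placeholder, not real CLI syntax.
func UpgradePlan(prev, head string) []Step {
	return []Step{
		{"omni-" + prev, "serve"},                    // 1. run the "previous" Omni
		{"omnictl-" + prev, "cluster template sync"}, // 2-3. apply the template with the matching omnictl
		{"omnictl-" + prev, "cluster status --wait"}, // 3. assert the cluster is ready
		{"check", "workload-proxy kubectl-access"},   // 4. additional checks
		{"omni-" + prev, "stop"},                     // 5. stop the "previous" Omni
		{"omni-" + head, "serve"},                    // 5. run the HEAD Omni
		// 6. final assertions: healthy cluster, no reboots, unchanged
		// machine config, no crash-looping controllers.
		{"check", "cluster-healthy no-reboots no-config-changes no-controller-crashloops"},
	}
}

func main() {
	for _, s := range UpgradePlan("v0.48.3", "HEAD") {
		fmt.Println(s.Binary, s.Args)
	}
}
```

Keeping the plan as data makes it easy to run the same flow for every pair in the test matrix below.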

Test matrix:

  • what counts as "previous": one or two stable releases back?
  • what counts as "HEAD": the current development version, or the last stable release?

Example:

  • v0.48.3 -> HEAD
  • v0.47.1 -> v0.48.3
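The matrix above could be generated from an ordered release list instead of being maintained by hand; a minimal sketch, assuming releases are listed oldest first with HEAD last:

```go
package main

import "fmt"

// UpgradePairs turns an ordered release list (oldest first, HEAD last)
// into the prev -> next pairs the upgrade test should exercise.
func UpgradePairs(releases []string) [][2]string {
	var pairs [][2]string
	for i := 0; i+1 < len(releases); i++ {
		pairs = append(pairs, [2]string{releases[i], releases[i+1]})
	}
	return pairs
}

func main() {
	// Produces the two pairs from the example above.
	for _, p := range UpgradePairs([]string{"v0.47.1", "v0.48.3", "HEAD"}) {
		fmt.Printf("%s -> %s\n", p[0], p[1])
	}
}
```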
