Add better unit tests for watcher #1528

Open
@clux

Description

What problem are you trying to solve?

Avoid accidentally releasing breaking watcher changes in the future.

0.92.0 had a change that passed 3 different types of tests (see #1524 (comment)) and still failed.

Describe the solution you'd like

Create a `#[cfg(test)]` struct in `watcher.rs` and provide an injectable test implementation of `ApiMode` for it:

```rust
/// Used to control whether the watcher receives the full object, or only the
/// metadata
#[async_trait]
trait ApiMode {
    type Value: Clone;
    async fn list(&self, lp: &ListParams) -> kube_client::Result<ObjectList<Self::Value>>;
    async fn watch(
        &self,
        wp: &WatchParams,
        version: &str,
    ) -> kube_client::Result<BoxStream<'static, kube_client::Result<WatchEvent<Self::Value>>>>;
}
```

This would allow us to use this test struct to inject synthetic events (via e.g. TestStruct::new(list, stream)) that model actual apiserver responses, but without actually calling watch (a sketch of such a test double follows below). This should allow us to verify a bunch of things from unit tests, such as (but not limited to):

  • watcher does indeed call list N times (depending on page sizes), and then watch
  • watcher does not call list in streaming lists
  • watcher actually presents the union of the initial list pages and the subsequent watch events (literally what 0.92.1 was made for; see "Fix watcher not fully paginating on Init" #1525)
  • watcher maintains chosen selectors for both list and watch calls
  • watcher maintains correct flags on desyncs

It also lets us verify that we are handling edge cases correctly (currently we have very little error testing here).
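As a rough illustration, here is a minimal sketch of what such a test double could look like. The name `TestApiMode`, its fields, and the panic messages are all hypothetical; the `ApiMode` trait is the one quoted above, and the sketch assumes it lives in the same module as the trait so the private items are in scope:

```rust
use std::{collections::VecDeque, sync::Mutex};

use async_trait::async_trait;
use futures::stream::{self, BoxStream, StreamExt};
use kube_client::{
    api::{ListParams, WatchParams},
    core::{ObjectList, WatchEvent},
};

/// Hypothetical test double: replays pre-baked list pages and watch events
/// instead of talking to a real apiserver.
#[cfg(test)]
struct TestApiMode<K> {
    /// One entry per expected `list` call (i.e. one per page)
    list_pages: Mutex<VecDeque<ObjectList<K>>>,
    /// Synthetic events handed out by the single expected `watch` call
    watch_events: Mutex<Option<Vec<kube_client::Result<WatchEvent<K>>>>>,
}

#[cfg(test)]
#[async_trait]
impl<K: Clone + Send + Sync + 'static> ApiMode for TestApiMode<K> {
    type Value = K;

    async fn list(&self, _lp: &ListParams) -> kube_client::Result<ObjectList<K>> {
        // Popping pages in order also lets a test assert that `list` was
        // called exactly N times: an unexpected extra call panics loudly
        Ok(self
            .list_pages
            .lock()
            .unwrap()
            .pop_front()
            .expect("watcher called list more times than expected"))
    }

    async fn watch(
        &self,
        _wp: &WatchParams,
        _version: &str,
    ) -> kube_client::Result<BoxStream<'static, kube_client::Result<WatchEvent<K>>>> {
        // Hand out the synthetic event stream exactly once
        let events = self
            .watch_events
            .lock()
            .unwrap()
            .take()
            .expect("watcher called watch more times than expected");
        Ok(stream::iter(events).boxed())
    }
}
```

A test could then seed `list_pages` with two pages and `watch_events` with a couple of `WatchEvent::Added` events, drive the watcher with this `ApiMode`, and assert that the emitted stream is exactly the union of both, which is the kind of check that would have caught the #1525 regression.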

Describe alternatives you've considered

Extend mock tests in https://github.com/kube-rs/kube/blob/main/kube/src/mock_tests.rs#L29-L159

This has already been somewhat done to avoid the previous regression triggering again, but ideally I'd like something a little more robust. There is currently little distinction between list and watch in these mock tests, and I have not been able to test the watch side with them. Making this test harness more robust would also help, but ideally we should have unit tests in the area where the code lives (rather than relying too much on top-level tests).
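For concreteness, a rough sketch of the style used in that file, pairing a `tower_test` mock service with a handle that serves canned HTTP responses. The test body itself (the query-string assertion and the empty PodList) is illustrative, not copied from the linked file:

```rust
use http::{Request, Response};
use k8s_openapi::api::core::v1::Pod;
use kube::{api::ListParams, client::Body, Api, Client};
use tower_test::mock;

#[tokio::test]
async fn mock_list_call() {
    // Pair a mock Service with a handle we can serve canned responses from
    let (mock_service, mut handle) = mock::pair::<Request<Body>, Response<Body>>();
    let client = Client::new(mock_service, "default");
    let pods: Api<Pod> = Api::default_namespaced(client);

    // Serve a single empty PodList; note that inspecting the query string is
    // essentially the only way to tell a list request from a watch request
    let serve = tokio::spawn(async move {
        let (request, send) = handle.next_request().await.expect("service not called");
        assert!(!request.uri().query().unwrap_or("").contains("watch=true"));
        let body = serde_json::json!({
            "apiVersion": "v1",
            "kind": "PodList",
            "metadata": { "resourceVersion": "1" },
            "items": []
        });
        send.send_response(
            Response::builder()
                .body(Body::from(serde_json::to_vec(&body).unwrap()))
                .unwrap(),
        );
    });

    let list = pods.list(&ListParams::default()).await.unwrap();
    assert!(list.items.is_empty());
    serve.await.unwrap();
}
```

This works well for asserting on individual request/response pairs, but as noted above it operates at the HTTP layer, which makes long-running watch semantics awkward to exercise compared to an injected `ApiMode`.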

Target crate for feature

kube-runtime

Metadata

    Labels

    automation (ci and testing related)
    help wanted (Not immediately prioritised, please help!)
    runtime (controller runtime related)
