
AKS System MachinePool nodes are reported as Worker nodes #1298

Open
@anmazzotti

Description


What steps did you take and what happened?

When importing an AKS cluster into Rancher, the "System" nodes are reported as Worker nodes.
In the screenshot below, pool0 was defined as:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedMachinePool
metadata:
  annotations:
    "helm.sh/resource-policy": keep
  name: "${CLUSTER_NAME}-pool0"
  namespace: default
spec:
  mode: System
  name: pool0
  sku: Standard_Ds2_v2

(Screenshot: Rancher node list showing the nodes from pool0 with the Worker role.)

What did you expect to happen?

This reflects how AKS works internally: it has System and User node pools.
It is probably technically correct that Rancher reports them all as Worker; however, this can be confusing for the user.

It would be great if there were a way to correct the Role in this case.
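
For reference, the System/User distinction is still visible on the workload cluster nodes themselves. A minimal check, assuming the standard AKS node labels (agentpool and kubernetes.azure.com/mode) are present:

# List nodes together with their AKS agent pool name and pool mode (system / user)
kubectl get nodes -L agentpool -L kubernetes.azure.com/mode

This only shows which nodes belong to a System pool; it does not change the Role that Rancher displays.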

How to reproduce it?

No response

Rancher Turtles version

No response

Anything else you would like to add?

No response

Label(s) to be applied

/kind bug

Metadata

Assignees

No one assigned

    Labels

    area/capz
    area/ux
    kind/bug (Something isn't working)
    status/needs-investigation (Requires more information from opener or investigation before work can begin)
