14 changes: 14 additions & 0 deletions charts/scalar-manager/README.md
@@ -23,6 +23,20 @@ Current chart version is `3.0.0-SNAPSHOT`
| scalarManager.api.image.tag | string | `""` | |
| scalarManager.api.resources | object | `{}` | |
| scalarManager.api.secretName | string | `""` | Secret name that includes sensitive data such as credentials. Each secret key is passed to Pod as environment variables using envFrom. |
| scalarManager.headlamp | object | `{"enabled":false,"kubernetes":{"serviceLabelName":"app.kubernetes.io/name","serviceLabelValue":"headlamp","servicePortName":"http"},"serviceAccount":{"create":true,"name":"","namespace":""},"web":{"basePath":"/headlamp","namespace":"","serviceInfoCacheTTL":"180000"}}` | Headlamp integration configuration |
| scalarManager.headlamp.enabled | bool | `false` | Enable Headlamp integration |
| scalarManager.headlamp.kubernetes | object | `{"serviceLabelName":"app.kubernetes.io/name","serviceLabelValue":"headlamp","servicePortName":"http"}` | Kubernetes service discovery configuration (used by API) |
| scalarManager.headlamp.kubernetes.serviceLabelName | string | `"app.kubernetes.io/name"` | Label name to identify Headlamp service |
| scalarManager.headlamp.kubernetes.serviceLabelValue | string | `"headlamp"` | Label value to identify Headlamp service |
| scalarManager.headlamp.kubernetes.servicePortName | string | `"http"` | Port name of the Headlamp service |
| scalarManager.headlamp.serviceAccount | object | `{"create":true,"name":"","namespace":""}` | Service account configuration for Headlamp token generation. This SA gets cluster-admin privileges and is used to generate tokens that are used for login into Headlamp. |
| scalarManager.headlamp.serviceAccount.create | bool | `true` | Create a dedicated ServiceAccount for Headlamp |
| scalarManager.headlamp.serviceAccount.name | string | `""` | Name of the ServiceAccount. If not set and create is true, defaults to "<release>-headlamp-sa" |
| scalarManager.headlamp.serviceAccount.namespace | string | `""` | Namespace for the ServiceAccount. Defaults to release namespace |
| scalarManager.headlamp.web | object | `{"basePath":"/headlamp","namespace":"","serviceInfoCacheTTL":"180000"}` | Web container proxy configuration |
| scalarManager.headlamp.web.basePath | string | `"/headlamp"` | Headlamp proxy base path (must match Headlamp's -base-url setting) |
| scalarManager.headlamp.web.namespace | string | `""` | Headlamp namespace filter (optional). If empty, auto-selects if exactly one service found |
| scalarManager.headlamp.web.serviceInfoCacheTTL | string | `"180000"` | Cache TTL for Headlamp service info (milliseconds) |
| scalarManager.imagePullSecrets | list | `[]` | |
| scalarManager.nodeSelector | object | `{}` | |
| scalarManager.podAnnotations | object | `{}` | Pod annotations for the scalar-manager deployment |
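For reference, a minimal values override that enables the integration, based only on the defaults documented in the table above (a sketch, not part of this PR):

scalarManager:
  headlamp:
    enabled: true
    web:
      # Must match the -base-url that the Headlamp deployment itself is started with
      basePath: "/headlamp"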
22 changes: 22 additions & 0 deletions charts/scalar-manager/templates/_helpers.tpl
@@ -61,3 +61,25 @@ Create the name of the service account to use
{{- print (include "scalar-manager.fullname" .) "-sa" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}

{{/*
Create the name of the Headlamp service account
*/}}
{{- define "scalar-manager.headlampServiceAccountName" -}}
{{- if .Values.scalarManager.headlamp.serviceAccount.name }}
{{- .Values.scalarManager.headlamp.serviceAccount.name }}
{{- else }}
{{- print (include "scalar-manager.fullname" .) "-headlamp-sa" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}

{{/*
Get the namespace for Headlamp service account
*/}}
{{- define "scalar-manager.headlampServiceAccountNamespace" -}}
{{- if .Values.scalarManager.headlamp.serviceAccount.namespace }}
{{- .Values.scalarManager.headlamp.serviceAccount.namespace }}
{{- else }}
{{- .Release.Namespace }}
{{- end }}
{{- end }}
@@ -11,3 +11,10 @@ rules:
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list"]
{{- if .Values.scalarManager.headlamp.enabled }}
# Permission to create tokens for Headlamp ServiceAccount
- apiGroups: [""]
resources: ["serviceaccounts/token"]
verbs: ["create"]
resourceNames: ["{{ include "scalar-manager.headlampServiceAccountName" . }}"]
{{- end }}
Comment on lines +14 to +20
Copilot AI Feb 3, 2026

The ClusterRole adds token creation permission for the Headlamp ServiceAccount whenever headlamp is enabled, but doesn't verify that the ServiceAccount is actually being created. If headlamp.enabled is true but serviceAccount.create is false and no custom serviceAccount.name is provided, this could result in referencing a non-existent ServiceAccount in the resourceNames field.

Consider updating the condition to check both enabled and create flags, or add validation to ensure a ServiceAccount name is provided when create is false.
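A minimal sketch of that suggestion against the current values layout (the combined condition is the reviewer's proposal, not code from this PR):

{{- /* Grant token creation only when the chart creates the SA or an existing SA name is provided */}}
{{- if and .Values.scalarManager.headlamp.enabled (or .Values.scalarManager.headlamp.serviceAccount.create .Values.scalarManager.headlamp.serviceAccount.name) }}
- apiGroups: [""]
  resources: ["serviceaccounts/token"]
  verbs: ["create"]
  resourceNames: ["{{ include "scalar-manager.headlampServiceAccountName" . }}"]
{{- end }}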

Contributor Author

We can refer to an existing service account as well. This is intentional.

Collaborator

It seems that we can set the following configuration and deploy Scalar Manager without any errors (i.e., we can run the helm install command without errors).

scalarManager:
  headlamp:
    enabled: true
    serviceAccount:
      create: false
      name: ""

However, in the above deployment, it seems that the Generate Token button shows an error when I click it on the UI.

Could you please confirm whether this behavior is expected or not?

(I will share more details in internal Slack later.)

@@ -13,3 +13,22 @@ roleRef:
  kind: ClusterRole
  name: {{ include "scalar-manager.fullname" . }}
  apiGroup: rbac.authorization.k8s.io

{{- if and .Values.scalarManager.headlamp.enabled .Values.scalarManager.headlamp.serviceAccount.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ include "scalar-manager.fullname" . }}-headlamp-cluster-admin
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
    app.kubernetes.io/component: headlamp
Collaborator

Just a question: what is the purpose of this label? When and where do we use it?
Please let me confirm, since I couldn't find a clear reason in this PR for setting this label here.

subjects:
- kind: ServiceAccount
  name: {{ include "scalar-manager.headlampServiceAccountName" . }}
  namespace: {{ include "scalar-manager.headlampServiceAccountNamespace" . }}
Copilot AI Feb 3, 2026

The ServiceAccount subject is missing the apiGroup field. For consistency with the existing ClusterRoleBinding in this file (line 11), add 'apiGroup: ""' to the subject specification. While this field is optional for ServiceAccounts and defaults to an empty string, it's better to be explicit for consistency and clarity.

Suggested change
- namespace: {{ include "scalar-manager.headlampServiceAccountNamespace" . }}
+ namespace: {{ include "scalar-manager.headlampServiceAccountNamespace" . }}
+ apiGroup: ""

Contributor Author

We have already specified it in the cluster roles.

Collaborator

To keep consistency in this chart, could you please add apiGroup: "" for now?
If we omit apiGroup: "" here, I think we should apply the same change in the other places in this chart.

roleRef:
  kind: ClusterRole
  name: cluster-admin


security-critical critical

Granting cluster-admin privileges to the Headlamp service account introduces a significant security risk. This provides unrestricted access to the entire cluster, and a compromise of this service account's token could lead to a full cluster compromise. It is highly recommended to follow the principle of least privilege and define a more restrictive ClusterRole with only the permissions that Headlamp requires.
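As one hedged illustration of that direction (not something this PR contains), the binding could reference the built-in read-only view ClusterRole, or a custom ClusterRole scoped to what Headlamp actually needs:

# Sketch: bind the Headlamp ServiceAccount to the built-in "view" ClusterRole instead of cluster-admin
roleRef:
  kind: ClusterRole
  name: view  # read-only access to most namespaced resources
  apiGroup: rbac.authorization.k8s.io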

Contributor Author

To be able to fully utilize the Headlamp features, the service account that is used for Headlamp should be bound to the cluster-admin role, so we did that on purpose.

Collaborator

Regarding this point, I have three questions:

1. The number of Service Accounts used for access to Headlamp

Please let me confirm the design of the access pattern for Headlamp through Scalar Manager from the user role perspective.

Which of the following is the designed behavior in Scalar Manager?

  • Pattern 1 (Share one service account among all Scalar Manager users)

                                    +---[Scalar Manager]------------------+
                                    |                                     |
    [User A (ADMINISTRATOR)]---+    |                                     |
                               |    |                                     |
    [User B (WRITER)]----------+--->+---[Headlamp Integration Feature]--->+---(Share one SA with `cluster-admin`)--->[Headlamp]---(Monitor Kubernetes Resources)--->[Kubernetes]
                               |    |                                     |
    [User C (READER)]----------+    |                                     |
                                    |                                     |
                                    +-------------------------------------+
    
  • Pattern 2 (Generate dedicated service account for each Scalar Manager user)

                                    +---[Scalar Manager]------------------+
                                    |                                     |
    [User A (ADMINISTRATOR)]------->+---[Headlamp Integration Feature]--->+---(Dedicated SA `user-a` with `cluster-admin`)---+
                                    |                                     |                                                  |
    [User B (WRITER)]-------------->+---[Headlamp Integration Feature]--->+---(Dedicated SA `user-b` with `edit`)------------+--->[Headlamp]---(Monitor Kubernetes Resources)--->[Kubernetes]
                                    |                                     |                                                  |
    [User C (READER)]-------------->+---[Headlamp Integration Feature]--->+---(Dedicated SA `user-c` with `view`)------------+
                                    |                                     |
                                    +-------------------------------------+
    
  • Pattern 3 (Generate shared service account for each Scalar Manager role)

                                    +---[Scalar Manager]------------------+
                                    |                                     |
    [User A (ADMINISTRATOR)]------->+---[Headlamp Integration Feature]--->+---(Shared SA `scalar-manager-admin` with `cluster-admin`)----+
                                    |                                     |                                                              |
    [User B (WRITER)]-------------->+---[Headlamp Integration Feature]--->+---(Shared SA `scalar-manager-writer` with `edit`)------------+
                                    |                                     |                                                              |
    [User C (WRITER)]-------------->+---[Headlamp Integration Feature]--->+---(Shared SA `scalar-manager-writer` with `edit`)------------+--->[Headlamp]---(Monitor Kubernetes Resources)--->[Kubernetes]
                                    |                                     |                                                              |
    [User D (READER)]-------------->+---[Headlamp Integration Feature]--->+---(Shared SA `scalar-manager-reader` with `view`)------------+
                                    |                                     |                                                              |
    [User E (READER)]-------------->+---[Headlamp Integration Feature]--->+---(Shared SA `scalar-manager-reader` with `view`)------------+
                                    |                                     |
                                    +-------------------------------------+
    

2. The default role for access to Headlamp

It seems that the service account for Headlamp has the cluster-admin role. However, I think it's too strong as a default value.

In particular, if the designed access pattern in Q1 is Pattern 1 (share one service account among all Scalar Manager users), normal users, even those with the READER role, would have full access, including any update operations on the Kubernetes cluster through Headlamp.

I think it's not good from the perspective of the principle of least privilege. It seems that similar discussions exist on the Headlamp side as follows:

Therefore, I think it would be better to:

  1. Set the view role as a default, and make it configurable in the values.yaml file (see the sketch after this list).
    • I think we should also discuss whether a configurable role is really necessary or not.
  2. Or, define fine-grained custom roles for Scalar Manager users based on Scalar Manager roles, ADMINISTRATOR, WRITER, and READER.
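A hedged sketch of option 1, using a hypothetical scalarManager.headlamp.serviceAccount.clusterRole key that does not exist in the chart today:

scalarManager:
  headlamp:
    serviceAccount:
      create: true
      # Hypothetical: ClusterRole to bind to the Headlamp SA; the built-in read-only "view" role as the default
      clusterRole: "view"

The ClusterRoleBinding's roleRef.name could then consume this value instead of the hard-coded cluster-admin.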

Also, users can use the token generated on the Scalar Manager page to access the Kubernetes cluster through other tools such as the kubectl command, so we need to treat the generated token very carefully. Sorry, I overlooked this point when I talked about the rough design earlier, but I think we should reconsider it.

3. The purpose of Headlamp integration in Scalar Manager

This is related to question 2 (the default role for access to Headlamp), but I'd like to confirm the purpose of the Headlamp integration.

I think Scalar Manager is mostly a monitoring tool for now, and we decided to integrate Headlamp as an alternative to the CPU/Memory panel on the top page of Scalar Manager that existed in the prior implementation. In that case, I think it's enough to provide Headlamp integration with read operations only. In other words, I don't think we need to provide a way to create, update, or delete Kubernetes resources through Scalar Manager.

I'd like to clarify the goal of the Headlamp integration and carefully consider what role we need. What is the purpose of this Headlamp integration?

Collaborator

@feeblefakie
As I shared in the meeting, I think we need additional discussion about the design based on this thread.

Please let me discuss these points with you and the middleware team later. 🙏

  apiGroup: rbac.authorization.k8s.io
{{- end }}
24 changes: 24 additions & 0 deletions charts/scalar-manager/templates/scalar-manager/deployment.yaml
@@ -107,6 +107,19 @@ spec:
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.scalarManager.headlamp.enabled }}
# Headlamp discovery configuration
- name: HEADLAMP_KUBERNETES_SERVICE_LABEL_NAME
  value: "{{ .Values.scalarManager.headlamp.kubernetes.serviceLabelName }}"
- name: HEADLAMP_KUBERNETES_SERVICE_LABEL_VALUE
  value: "{{ .Values.scalarManager.headlamp.kubernetes.serviceLabelValue }}"
- name: HEADLAMP_KUBERNETES_SERVICE_PORT_NAME
  value: "{{ .Values.scalarManager.headlamp.kubernetes.servicePortName }}"
- name: HEADLAMP_SERVICE_ACCOUNT_NAME
  value: "{{ include "scalar-manager.headlampServiceAccountName" . }}"
- name: HEADLAMP_SERVICE_ACCOUNT_NAMESPACE
  value: "{{ include "scalar-manager.headlampServiceAccountNamespace" . }}"
{{- end }}
{{- if .Values.scalarManager.api.secretName }}
envFrom:
- secretRef:
@@ -196,6 +209,17 @@ spec:
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.scalarManager.headlamp.enabled }}
# Headlamp proxy configuration
- name: HEADLAMP_BASE_PATH
  value: "{{ .Values.scalarManager.headlamp.web.basePath }}"
{{- if .Values.scalarManager.headlamp.web.namespace }}
- name: HEADLAMP_NAMESPACE
  value: "{{ .Values.scalarManager.headlamp.web.namespace }}"
{{- end }}
- name: HEADLAMP_SERVICE_INFO_CACHE_TTL
  value: "{{ .Values.scalarManager.headlamp.web.serviceInfoCacheTTL }}"
{{- end }}
ports:
- containerPort: {{ .Values.scalarManager.web.service.ports.web.targetPort }}
imagePullPolicy: {{ .Values.scalarManager.web.image.pullPolicy }}
11 changes: 11 additions & 0 deletions charts/scalar-manager/templates/scalar-manager/serviceaccount.yaml
@@ -7,3 +7,14 @@ metadata:
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
{{- end }}
{{- if and .Values.scalarManager.headlamp.enabled .Values.scalarManager.headlamp.serviceAccount.create }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "scalar-manager.headlampServiceAccountName" . }}
  namespace: {{ include "scalar-manager.headlampServiceAccountNamespace" . }}
  labels:
    {{- include "scalar-manager.labels" . | nindent 4 }}
    app.kubernetes.io/component: headlamp
{{- end }}
50 changes: 50 additions & 0 deletions charts/scalar-manager/values.schema.json
@@ -112,6 +112,56 @@
            }
          }
        },
        "headlamp": {
          "type": "object",
          "properties": {
            "enabled": {
              "type": "boolean"
            },
            "kubernetes": {
              "type": "object",
              "properties": {
                "serviceLabelName": {
                  "type": "string"
                },
                "serviceLabelValue": {
                  "type": "string"
                },
                "servicePortName": {
                  "type": "string"
                }
              }
            },
            "serviceAccount": {
              "type": "object",
              "properties": {
                "create": {
                  "type": "boolean"
                },
                "name": {
                  "type": "string"
                },
                "namespace": {
                  "type": "string"
                }
              }
            },
            "web": {
              "type": "object",
              "properties": {
                "basePath": {
                  "type": "string"
                },
                "namespace": {
                  "type": "string"
                },
                "serviceInfoCacheTTL": {
                  "type": "string"
                }
              }
            }
          }
        },
        "imagePullSecrets": {
          "type": "array"
        },
34 changes: 34 additions & 0 deletions charts/scalar-manager/values.yaml
@@ -192,6 +192,40 @@ scalarManager:
# cpu: 100m
# memory: 128Mi

  # -- Headlamp integration configuration
  headlamp:
    # -- Enable Headlamp integration
    enabled: false

    # -- Kubernetes service discovery configuration (used by API)
    kubernetes:
      # -- Label name to identify Headlamp service
      serviceLabelName: "app.kubernetes.io/name"
      # -- Label value to identify Headlamp service
      serviceLabelValue: "headlamp"
      # -- Port name of the Headlamp service
      servicePortName: "http"
Comment on lines +200 to +207
Collaborator

I have two questions about the Headlamp discovery:

1. In case of multiple Headlamp deployments

I think we assume that there is only one Headlamp deployment per Kubernetes cluster, but multiple Headlamp deployments could exist in one Kubernetes cluster.

What will happen in Scalar Manager's Headlamp discovery feature if there are two or more Headlamp deployments in one Kubernetes cluster?

2. Method of Headlamp detection

This is somewhat related to Q1, but I think it might be better to set the Headlamp information explicitly instead of using dynamic discovery based on labels, because we assume that only one Headlamp deployment is used.

If we could detect the Headlamp deployment without any configuration, such as a label name and label value, that feature would be worth providing because customers wouldn't need to care about Headlamp-related configuration.

However, in the current implementation, users need to set the label name and label value. In that case, I think it might be better to use a more explicit configuration, for example a namespace and service name, instead of dynamic detection.

With such an implementation, I think we could also avoid unexpected issues when multiple Headlamp deployments exist in one Kubernetes cluster.

What do you think?
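For illustration only (the namespace and serviceName keys below are hypothetical and not part of this PR), an explicit alternative to label-based discovery might look like:

scalarManager:
  headlamp:
    enabled: true
    kubernetes:
      # Hypothetical explicit reference instead of label-based discovery
      namespace: "headlamp"
      serviceName: "headlamp"
      servicePortName: "http"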


    # -- Service account configuration for Headlamp token generation.
    # This SA gets cluster-admin privileges and is used to generate tokens
    # that are used for login into Headlamp.
    serviceAccount:
      # -- Create a dedicated ServiceAccount for Headlamp
      create: true
      # -- Name of the ServiceAccount. If not set and create is true, defaults to "<release>-headlamp-sa"
      name: ""
      # -- Namespace for the ServiceAccount. Defaults to release namespace
      namespace: ""

    # -- Web container proxy configuration
    web:
      # -- Headlamp proxy base path (must match Headlamp's -base-url setting)
      basePath: "/headlamp"
      # -- Headlamp namespace filter (optional). If empty, auto-selects if exactly one service found
      namespace: ""
      # -- Cache TTL for Headlamp service info (milliseconds)
      serviceInfoCacheTTL: "180000"

  imagePullSecrets: []

  # -- Unified TLS configuration for both API and Web components.