
S3CSI-224: Port log sanitization and -o flag rejection #323

Draft
anurag4DSB wants to merge 1 commit into main from improvement/S3CSI-224-log-sanitize-flag-reject

Conversation


anurag4DSB (Collaborator) commented Feb 16, 2026

Summary

  • Replace protosanitizer.StripSecrets() with dedicated logSafeNodePublishVolumeRequest() for explicit credential stripping
  • Add -o mount flag rejection in NodePublishVolume to prevent bypass of mount option validation
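The explicit-stripping approach can be sketched as below. This is a minimal stand-in, not the driver's actual code: the struct mirrors only a few fields of `csi.NodePublishVolumeRequest`, and the `csi.storage.k8s.io/serviceAccount.tokens` volume-context key is the standard CSI token key assumed to be what the real `logSafeNodePublishVolumeRequest()` in `pkg/driver/node/node.go` redacts.

```go
package main

import "fmt"

// NodePublishVolumeRequest is a hypothetical stand-in for the CSI request,
// reduced to the fields relevant to log sanitization.
type NodePublishVolumeRequest struct {
	VolumeId      string
	TargetPath    string
	Secrets       map[string]string
	VolumeContext map[string]string
}

// logSafeNodePublishVolumeRequest returns a copy safe for logging:
// credential-bearing fields are replaced with redaction markers while
// non-sensitive fields (VolumeId, TargetPath, ...) are preserved.
func logSafeNodePublishVolumeRequest(req *NodePublishVolumeRequest) *NodePublishVolumeRequest {
	safe := *req
	if len(req.Secrets) > 0 {
		safe.Secrets = map[string]string{"<redacted>": "<redacted>"}
	}
	if req.VolumeContext != nil {
		ctx := make(map[string]string, len(req.VolumeContext))
		for k, v := range req.VolumeContext {
			if k == "csi.storage.k8s.io/serviceAccount.tokens" {
				ctx[k] = "<redacted>"
				continue
			}
			ctx[k] = v
		}
		safe.VolumeContext = ctx
	}
	return &safe
}

func main() {
	req := &NodePublishVolumeRequest{
		VolumeId:   "vol-1",
		TargetPath: "/var/lib/kubelet/pods/x/mount",
		Secrets:    map[string]string{"access_key_id": "AKIA-example"},
		VolumeContext: map[string]string{
			"csi.storage.k8s.io/serviceAccount.tokens": "token-example",
			"bucketName": "my-bucket",
		},
	}
	safe := logSafeNodePublishVolumeRequest(req)
	fmt.Println(safe.VolumeId, safe.TargetPath)
	fmt.Println(safe.Secrets)
	fmt.Println(safe.VolumeContext["bucketName"], safe.VolumeContext["csi.storage.k8s.io/serviceAccount.tokens"])
}
```

Unlike `protosanitizer.StripSecrets()`, which relies on proto field annotations, an explicit allow/deny copy makes the redaction behavior visible and unit-testable in the driver's own code.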

Changes

  • pkg/driver/node/node.go: Add logSafeNodePublishVolumeRequest(), replace protosanitizer calls, add -o flag check
  • pkg/driver/node/log_sanitize_test.go: New unit tests for log sanitization
  • pkg/driver/node/node_test.go: New unit test for -o flag rejection
  • pkg/driver/node/protosanitizer_test.go: Removed (replaced by log_sanitize_test.go)

Test Plan

  • Unit test: logSafeNodePublishVolumeRequest strips CSIServiceAccountTokens and Secrets
  • Unit test: non-sensitive fields (VolumeId, TargetPath, etc.) preserved in safe copy
  • Unit test: -o flag returns InvalidArgument
  • 16/16 unit tests pass in pkg/driver/node/
  • CI E2E: Mount options suite explicitly tests -o flag stripping. Credentials suite validates mount operations with secrets.

Issue: S3CSI-224

Replace protosanitizer.StripSecrets() with explicit field stripping to
prevent credential leakage in logs. Add -o mount flag rejection to
prevent users from passing raw mount options that bypass validation.
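The `-o` rejection described above can be sketched as follows. A plain error stands in for the gRPC `codes.InvalidArgument` status the driver returns, and the exact matching rules (bare `-o`, `-o value`, `-o=value`) are an assumption for illustration, not a copy of the real check in `NodePublishVolume`.

```go
package main

import (
	"fmt"
	"strings"
)

// rejectRawMountOptions scans mount options for a raw "-o" flag, which
// would let a caller smuggle arbitrary options past validation. In the
// driver this returns a gRPC InvalidArgument status; here a plain error
// stands in.
func rejectRawMountOptions(opts []string) error {
	for _, opt := range opts {
		trimmed := strings.TrimSpace(opt)
		if trimmed == "-o" || strings.HasPrefix(trimmed, "-o ") || strings.HasPrefix(trimmed, "-o=") {
			return fmt.Errorf("InvalidArgument: raw -o mount option is not allowed: %q", opt)
		}
	}
	return nil
}

func main() {
	// A clean option list passes; any form of "-o" is rejected.
	fmt.Println(rejectRawMountOptions([]string{"allow-delete", "region us-east-1"}))
	fmt.Println(rejectRawMountOptions([]string{"-o allow-other"}))
}
```

Rejecting the flag at the CSI boundary (rather than stripping it later) fails fast with a clear error instead of silently mounting with altered options.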

Issue: S3CSI-224

codecov bot commented Feb 16, 2026

❌ 2 Tests Failed:

| Tests completed | Failed | Passed | Skipped |
|---|---|---|---|
| 128 | 2 | 126 | 58 |
View the top 2 failed test(s) by shortest run time
Scality CSI Driver for S3 E2E Suite::[It] [sig-storage] CSI Volumes [Driver: s3.csi.scality.com] [Testpattern: Dynamic PV (default fs)] dynamic-provisioning-mount-options Mount args policy enforcement in dynamic provisioning should strip -o from mount options [sig-storage]
Stack Traces | 308s run time
[FAILED] pod "pvc-tester-p8gc5" not Running: Timed out after 300.000s.
Expected Pod to be in <v1.PodPhase>: "Running"
Got instead:
    <*v1.Pod | 0xc000138008>: 
        metadata:
          creationTimestamp: "2026-02-16T07:13:13Z"
          generateName: pvc-tester-
          managedFields:
          - apiVersion: v1
            fieldsType: FieldsV1
            fieldsV1:
              f:metadata:
                f:generateName: {}
              f:spec:
                f:containers:
                  k:{"name":"write-pod"}:
                    .: {}
                    f:command: {}
                    f:image: {}
                    f:imagePullPolicy: {}
                    f:name: {}
                    f:resources: {}
                    f:securityContext:
                      .: {}
                      f:allowPrivilegeEscalation: {}
                      f:capabilities:
                        .: {}
                        f:drop: {}
                      f:runAsGroup: {}
                      f:runAsNonRoot: {}
                      f:runAsUser: {}
                    f:terminationMessagePath: {}
                    f:terminationMessagePolicy: {}
                    f:volumeMounts:
                      .: {}
                      k:{"mountPath":"/mnt/volume1"}:
                        .: {}
                        f:mountPath: {}
                        f:name: {}
                f:dnsPolicy: {}
                f:enableServiceLinks: {}
                f:restartPolicy: {}
                f:schedulerName: {}
                f:securityContext:
                  .: {}
                  f:runAsGroup: {}
                  f:runAsNonRoot: {}
                  f:runAsUser: {}
                  f:seccompProfile:
                    .: {}
                    f:type: {}
                f:terminationGracePeriodSeconds: {}
                f:volumes:
                  .: {}
                  k:{"name":"volume1"}:
                    .: {}
                    f:name: {}
                    f:persistentVolumeClaim:
                      .: {}
                      f:claimName: {}
            manager: e2e.test
            operation: Update
            time: "2026-02-16T07:13:13Z"
          - apiVersion: v1
            fieldsType: FieldsV1
            fieldsV1:
              f:status:
                f:conditions:
                  k:{"type":"ContainersReady"}:
                    .: {}
                    f:lastProbeTime: {}
                    f:lastTransitionTime: {}
                    f:message: {}
                    f:reason: {}
                    f:status: {}
                    f:type: {}
                  k:{"type":"Initialized"}:
                    .: {}
                    f:lastProbeTime: {}
                    f:lastTransitionTime: {}
                    f:status: {}
                    f:type: {}
                  k:{"type":"PodReadyToStartContainers"}:
                    .: {}
                    f:lastProbeTime: {}
                    f:lastTransitionTime: {}
                    f:status: {}
                    f:type: {}
                  k:{"type":"Ready"}:
                    .: {}
                    f:lastProbeTime: {}
                    f:lastTransitionTime: {}
                    f:message: {}
                    f:reason: {}
                    f:status: {}
                    f:type: {}
                f:containerStatuses: {}
                f:hostIP: {}
                f:hostIPs: {}
                f:startTime: {}
            manager: kubelet
            operation: Update
            subresource: status
            time: "2026-02-16T07:13:13Z"
          name: pvc-tester-p8gc5
          namespace: dynamic-provisioning-mount-options-441
          resourceVersion: "3865"
          uid: 296bbe91-4914-4dad-a17d-a2bbe619c04a
        spec:
          containers:
          - command:
            - /bin/sh
            - -c
            - trap exit TERM; while true; do sleep 1; done
            image: registry.k8s.io/e2e-test-images/busybox:1.36.1-1
            imagePullPolicy: IfNotPresent
            name: write-pod
            resources: {}
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              runAsGroup: 2000
              runAsNonRoot: true
              runAsUser: 1001
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /mnt/volume1
              name: volume1
            - mountPath: .../run/secrets/kubernetes.io/serviceaccount
              name: kube-api-access-wkmfz
              readOnly: true
          dnsPolicy: ClusterFirst
          enableServiceLinks: true
          nodeName: helm-test-cluster-control-plane
          preemptionPolicy: PreemptLowerPriority
          priority: 0
          restartPolicy: OnFailure
          schedulerName: default-scheduler
          securityContext:
            runAsGroup: 2000
            runAsNonRoot: true
            runAsUser: 1001
            seccompProfile:
              type: RuntimeDefault
          serviceAccount: default
          serviceAccountName: default
          terminationGracePeriodSeconds: 30
          tolerations:
          - effect: NoExecute
            key: node.kubernetes.io/not-ready
            operator: Exists
            tolerationSeconds: 300
          - effect: NoExecute
            key: node.kubernetes.io/unreachable
            operator: Exists
            tolerationSeconds: 300
          volumes:
          - name: volume1
            persistentVolumeClaim:
              claimName: fs-tab-pvc
          - name: kube-api-access-wkmfz
            projected:
              defaultMode: 420
              sources:
              - serviceAccountToken:
                  expirationSeconds: 3607
                  path: token
              - configMap:
                  items:
                  - key: ca.crt
                    path: ca.crt
                  name: kube-root-ca.crt
              - downwardAPI:
                  items:
                  - fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.namespace
                    path: namespace
        status:
          conditions:
          - lastProbeTime: null
            lastTransitionTime: "2026-02-16T07:13:13Z"
            status: "False"
            type: PodReadyToStartContainers
          - lastProbeTime: null
            lastTransitionTime: "2026-02-16T07:13:13Z"
            status: "True"
            type: Initialized
          - lastProbeTime: null
            lastTransitionTime: "2026-02-16T07:13:13Z"
            message: 'containers with unready status: [write-pod]'
            reason: ContainersNotReady
            status: "False"
            type: Ready
          - lastProbeTime: null
            lastTransitionTime: "2026-02-16T07:13:13Z"
            message: 'containers with unready status: [write-pod]'
            reason: ContainersNotReady
            status: "False"
            type: ContainersReady
          - lastProbeTime: null
            lastTransitionTime: "2026-02-16T07:13:13Z"
            status: "True"
            type: PodScheduled
          containerStatuses:
          - image: registry.k8s.io/e2e-test-images/busybox:1.36.1-1
            imageID: ""
            lastState: {}
            name: write-pod
            ready: false
            restartCount: 0
            started: false
            state:
              waiting:
                reason: ContainerCreating
          hostIP: 172.18.0.2
          hostIPs:
          - ip: 172.18.0.2
          phase: Pending
          qosClass: BestEffort
          startTime: "2026-02-16T07:13:13Z"
In [It] at: .../e2e/customsuites/dynamic_provisioning_mount_options.go:465 @ 02/16/26 07:18:13.716
Scality CSI Driver for S3 E2E Suite::[It] [sig-storage] CSI Volumes [Driver: s3.csi.scality.com] [Testpattern: Pre-provisioned PV (default fs)] mountoptions Mount arg policy enforcement strips -o flag [sig-storage]
Stack Traces | 310s run time
[FAILED] pod "pvc-tester-k8grz" not Running: Timed out after 300.000s.
Expected Pod to be in <v1.PodPhase>: "Running"
Got instead:
    <*v1.Pod | 0xc00174e908>: 
        metadata:
          creationTimestamp: "2026-02-16T07:14:33Z"
          generateName: pvc-tester-
          managedFields:
          - apiVersion: v1
            fieldsType: FieldsV1
            fieldsV1:
              f:metadata:
                f:generateName: {}
              f:spec:
                f:containers:
                  k:{"name":"policy-test-fstab"}:
                    .: {}
                    f:command: {}
                    f:image: {}
                    f:imagePullPolicy: {}
                    f:name: {}
                    f:resources: {}
                    f:securityContext:
                      .: {}
                      f:allowPrivilegeEscalation: {}
                      f:capabilities:
                        .: {}
                        f:drop: {}
                    f:terminationMessagePath: {}
                    f:terminationMessagePolicy: {}
                    f:volumeMounts:
                      .: {}
                      k:{"mountPath":"/mnt/volume1"}:
                        .: {}
                        f:mountPath: {}
                        f:name: {}
                f:dnsPolicy: {}
                f:enableServiceLinks: {}
                f:restartPolicy: {}
                f:schedulerName: {}
                f:securityContext:
                  .: {}
                  f:runAsGroup: {}
                  f:runAsNonRoot: {}
                  f:runAsUser: {}
                  f:seccompProfile:
                    .: {}
                    f:type: {}
                f:terminationGracePeriodSeconds: {}
                f:volumes:
                  .: {}
                  k:{"name":"volume1"}:
                    .: {}
                    f:name: {}
                    f:persistentVolumeClaim:
                      .: {}
                      f:claimName: {}
            manager: e2e.test
            operation: Update
            time: "2026-02-16T07:14:33Z"
          - apiVersion: v1
            fieldsType: FieldsV1
            fieldsV1:
              f:status:
                f:conditions:
                  k:{"type":"ContainersReady"}:
                    .: {}
                    f:lastProbeTime: {}
                    f:lastTransitionTime: {}
                    f:message: {}
                    f:reason: {}
                    f:status: {}
                    f:type: {}
                  k:{"type":"Initialized"}:
                    .: {}
                    f:lastProbeTime: {}
                    f:lastTransitionTime: {}
                    f:status: {}
                    f:type: {}
                  k:{"type":"PodReadyToStartContainers"}:
                    .: {}
                    f:lastProbeTime: {}
                    f:lastTransitionTime: {}
                    f:status: {}
                    f:type: {}
                  k:{"type":"Ready"}:
                    .: {}
                    f:lastProbeTime: {}
                    f:lastTransitionTime: {}
                    f:message: {}
                    f:reason: {}
                    f:status: {}
                    f:type: {}
                f:containerStatuses: {}
                f:hostIP: {}
                f:hostIPs: {}
                f:startTime: {}
            manager: kubelet
            operation: Update
            subresource: status
            time: "2026-02-16T07:14:33Z"
          name: pvc-tester-k8grz
          namespace: mountoptions-9619
          resourceVersion: "6321"
          uid: ef626675-1d75-4ef1-b953-4cb7a4ad754c
        spec:
          containers:
          - command:
            - /bin/sh
            - -c
            - trap exit TERM; while true; do sleep 1; done
            image: registry.k8s.io/e2e-test-images/busybox:1.36.1-1
            imagePullPolicy: IfNotPresent
            name: policy-test-fstab
            resources: {}
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /mnt/volume1
              name: volume1
            - mountPath: .../run/secrets/kubernetes.io/serviceaccount
              name: kube-api-access-w92wm
              readOnly: true
          dnsPolicy: ClusterFirst
          enableServiceLinks: true
          nodeName: helm-test-cluster-control-plane
          preemptionPolicy: PreemptLowerPriority
          priority: 0
          restartPolicy: OnFailure
          schedulerName: default-scheduler
          securityContext:
            runAsGroup: 2000
            runAsNonRoot: true
            runAsUser: 1001
            seccompProfile:
              type: RuntimeDefault
          serviceAccount: default
          serviceAccountName: default
          terminationGracePeriodSeconds: 30
          tolerations:
          - effect: NoExecute
            key: node.kubernetes.io/not-ready
            operator: Exists
            tolerationSeconds: 300
          - effect: NoExecute
            key: node.kubernetes.io/unreachable
            operator: Exists
            tolerationSeconds: 300
          volumes:
          - name: volume1
            persistentVolumeClaim:
              claimName: s3-e2e-pvc-97a17d9a-c8ab-46b3-a515-bf47610609a5
          - name: kube-api-access-w92wm
            projected:
              defaultMode: 420
              sources:
              - serviceAccountToken:
                  expirationSeconds: 3607
                  path: token
              - configMap:
                  items:
                  - key: ca.crt
                    path: ca.crt
                  name: kube-root-ca.crt
              - downwardAPI:
                  items:
                  - fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.namespace
                    path: namespace
        status:
          conditions:
          - lastProbeTime: null
            lastTransitionTime: "2026-02-16T07:14:33Z"
            status: "False"
            type: PodReadyToStartContainers
          - lastProbeTime: null
            lastTransitionTime: "2026-02-16T07:14:33Z"
            status: "True"
            type: Initialized
          - lastProbeTime: null
            lastTransitionTime: "2026-02-16T07:14:33Z"
            message: 'containers with unready status: [policy-test-fstab]'
            reason: ContainersNotReady
            status: "False"
            type: Ready
          - lastProbeTime: null
            lastTransitionTime: "2026-02-16T07:14:33Z"
            message: 'containers with unready status: [policy-test-fstab]'
            reason: ContainersNotReady
            status: "False"
            type: ContainersReady
          - lastProbeTime: null
            lastTransitionTime: "2026-02-16T07:14:33Z"
            status: "True"
            type: PodScheduled
          containerStatuses:
          - image: registry.k8s.io/e2e-test-images/busybox:1.36.1-1
            imageID: ""
            lastState: {}
            name: policy-test-fstab
            ready: false
            restartCount: 0
            started: false
            state:
              waiting:
                reason: ContainerCreating
          hostIP: 172.18.0.2
          hostIPs:
          - ip: 172.18.0.2
          phase: Pending
          qosClass: BestEffort
          startTime: "2026-02-16T07:14:33Z"
In [It] at: .../e2e/customsuites/mountoptions.go:195 @ 02/16/26 07:19:33.253


@anurag4DSB anurag4DSB added the help wanted Extra attention is needed label Feb 16, 2026
@anurag4DSB anurag4DSB marked this pull request as draft February 16, 2026 08:05
