Implement embedded NFS server for local development #3

@aktech

Description

Summary

Deploy an in-cluster NFS server to provide ReadWriteMany (RWX) storage for local development. This allows the Helm chart to be self-contained and work on any Kubernetes cluster without external storage dependencies.

Background

JupyterHub requires shared storage (RWX) for user home directories. Cloud providers offer managed solutions (EFS, Filestore, Azure Files), but for local development on k3d/kind, we need an embedded solution.

Nebari uses an in-cluster NFS server pod that:

  1. Uses a backing PVC (ReadWriteOnce) from the default storage class
  2. Exposes the NFS protocol on ports 2049 (nfs), 20048 (mountd), and 111 (rpcbind)
  3. Provides RWX access to consumers

Reference: https://github.com/nebari-dev/nebari/tree/main/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/nfs-server
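The three points above can be sketched as a Helm deployment template. This is an illustrative sketch, not Nebari's exact manifest: the resource name `{{ .Release.Name }}-nfs-server`, the `app: nfs-server` label, and the `/exports` mount path are assumptions, and the image reference uses the values proposed in this issue.

```yaml
# templates/storage/nfs-server-deployment.yaml -- illustrative sketch
{{- if .Values.storage.nfs.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: "{{ .Values.storage.nfs.image.name }}:{{ .Values.storage.nfs.image.tag }}"
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true  # in-cluster NFS servers typically need elevated privileges
          resources: {{- toYaml .Values.storage.nfs.resources | nindent 12 }}
          volumeMounts:
            - name: nfs-data
              mountPath: /exports  # assumed export path; depends on the image
      volumes:
        - name: nfs-data
          persistentVolumeClaim:
            claimName: {{ .Release.Name }}-nfs-server  # the RWO backing PVC
{{- end }}
```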

Tasks

  • Create templates/storage/nfs-server-pvc.yaml - backing storage for NFS server
  • Create templates/storage/nfs-server-deployment.yaml - NFS server pod
  • Create templates/storage/nfs-server-service.yaml - expose NFS ports
  • Add storage configuration to values.yaml
  • Make NFS server optional via storage.nfs.enabled
  • Test that NFS server pod starts and is healthy
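For the service task above, a minimal sketch of what the template could look like (names and labels mirror the hypothetical deployment sketch, and would need to match whatever the chart actually uses):

```yaml
# templates/storage/nfs-server-service.yaml -- illustrative sketch
{{- if .Values.storage.nfs.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-nfs-server
spec:
  selector:
    app: nfs-server
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
{{- end }}
```

Consumers would then mount NFS from this service's cluster DNS name (e.g. `<release>-nfs-server.<namespace>.svc`).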

values.yaml

```yaml
storage:
  nfs:
    enabled: true
    image:
      name: quay.io/nebari/volume-nfs
      tag: "0.8-repack"
    capacity: 10Gi
    storageClass: ""  # empty = default
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
```
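A sketch of how the backing-PVC template could consume these values (the resource name is an assumption; the `with` block leaves `storageClassName` unset when `storageClass` is empty, so the cluster default applies):

```yaml
# templates/storage/nfs-server-pvc.yaml -- illustrative sketch
{{- if .Values.storage.nfs.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-nfs-server
spec:
  accessModes:
    - ReadWriteOnce  # backing volume; RWX is provided over NFS on top of it
  resources:
    requests:
      storage: {{ .Values.storage.nfs.capacity }}
  {{- with .Values.storage.nfs.storageClass }}
  storageClassName: {{ . }}
  {{- end }}
{{- end }}
```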

Acceptance Criteria

  • helm install creates NFS server deployment, service, and backing PVC
  • NFS server pod reaches Running state
  • NFS service exposes ports 2049 (nfs), 20048 (mountd), and 111 (rpcbind)
  • Setting storage.nfs.enabled=false skips NFS server creation

Depends on
