Summary
Deploy an in-cluster NFS server to provide ReadWriteMany (RWX) storage for local development. This allows the Helm chart to be self-contained and work on any Kubernetes cluster without external storage dependencies.
Background
JupyterHub requires shared storage (RWX) for user home directories. Cloud providers offer managed solutions (EFS, Filestore, Azure Files), but for local development on k3d/kind, we need an embedded solution.
Nebari uses an in-cluster NFS server pod that:
- Uses a backing PVC (ReadWriteOnce) from the default storage class
- Exposes the NFS protocol on ports 2049 (nfs), 20048 (mountd), and 111 (rpcbind)
- Provides RWX access to consumers (see the consumer-side sketch after this list)
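For context, a consumer typically gets RWX access by binding to a PersistentVolume that points at the NFS server's Service. A minimal sketch, assuming the Service's ClusterIP and an export path of `/` (both depend on the actual templates and image, and are not fixed by this issue):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-shared
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.43.0.100   # placeholder: the NFS Service ClusterIP
    path: /               # assumed export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-shared
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""    # empty string disables dynamic provisioning for this claim
  volumeName: nfs-shared  # bind to the static PV above
  resources:
    requests:
      storage: 10Gi
```

The kubelet performs the NFS mount on the node using the node's own resolver, so a ClusterIP is usually a safer value for `server` than a cluster DNS name on k3d/kind.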
Tasks
- Create templates/storage/nfs-server-pvc.yaml - backing storage for NFS server
- Create templates/storage/nfs-server-deployment.yaml - NFS server pod (a sketch follows this list)
- Create templates/storage/nfs-server-service.yaml - expose NFS ports
- Add storage configuration to values.yaml
- Make NFS server optional via storage.nfs.enabled
- Test that NFS server pod starts and is healthy
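A rough sketch of what templates/storage/nfs-server-deployment.yaml could look like, wired to the values below. The resource names, the app=nfs-server label, the /exports mount path, and the privileged security context are assumptions based on common NFS-server images, not details fixed by this issue:

```yaml
{{- if .Values.storage.nfs.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: "{{ .Values.storage.nfs.image.name }}:{{ .Values.storage.nfs.image.tag }}"
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true        # assumption: most NFS-server images need privileged mode
          resources:
            {{- toYaml .Values.storage.nfs.resources | nindent 12 }}
          volumeMounts:
            - name: export
              mountPath: /exports   # assumed export path for the volume-nfs image
      volumes:
        - name: export
          persistentVolumeClaim:
            claimName: {{ .Release.Name }}-nfs-server   # assumed name of the backing PVC
{{- end }}
```

The same {{- if .Values.storage.nfs.enabled }} guard would wrap the PVC and Service templates so that disabling the flag skips all three resources.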
values.yaml
```yaml
storage:
  nfs:
    enabled: true
    image:
      name: quay.io/nebari/volume-nfs
      tag: "0.8-repack"
    capacity: 10Gi
    storageClass: ""  # empty = default
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
```
Acceptance Criteria
- helm install creates NFS server deployment, service, and backing PVC
- NFS server pod reaches Running state
- NFS service exposes ports 2049, 20048, 111
- Setting storage.nfs.enabled=false skips NFS server creation (see the verification sketch below)
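One way to exercise these criteria on a local cluster; the release name demo, chart path ., and app=nfs-server label are placeholders rather than anything defined by this issue:

```shell
# With NFS disabled, no nfs-server resources should be rendered
helm template demo . --set storage.nfs.enabled=false | grep -i nfs-server

# With defaults, install and confirm the server comes up and the ports are exposed
helm install demo .
kubectl get pods -l app=nfs-server   # expect STATUS Running
kubectl get svc                      # expect 2049, 20048, 111 on the NFS service
```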
Depends on
- Set up local development environment #2 (Local development environment)