
Extend persistent volume configuration options#865

Merged
loujar merged 3 commits into main from lsj/sub-path-config on May 11, 2026

Conversation

@loujar
Contributor

@loujar loujar commented May 11, 2026

This PR adds two PVC configuration options to the sourcegraph/sourcegraph Helm chart for all stateful services:

  • storageSubPath: allows mounting a subdirectory of a volume by setting the subPath on the primary data volume mount for each stateful service
  • storageAnnotations: allows setting arbitrary annotations on PVC resources, for both standalone PersistentVolumeClaim resources and StatefulSet volumeClaimTemplates sections
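As a sketch, these options might be set in a values.yaml override like the following; the per-service nesting and annotation key shown here are illustrative assumptions, not confirmed chart paths:

```yaml
# Hypothetical values.yaml override -- check the chart's values reference
# for the exact nesting under each stateful service.
gitserver:
  storageSubPath: "sourcegraph/gitserver"   # mount only this subdirectory of the volume
  storageAnnotations:
    backup.example.com/snapshot-policy: "daily"  # copied onto the PVC / volumeClaimTemplate
```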

Checklist

Test plan

CI

Member

@michaellzc michaellzc left a comment


sure, but what is the use case? are they looking to share the same PVC instance across multiple services?

@marcleblanc2
Contributor

marcleblanc2 commented May 11, 2026

LGTM.

what is the use case? are they looking to share the same PVC instance across multiple services?

I'll let Louis speak to the specifics of this case, but I can imagine that adding annotations to PVCs would be helpful for things like selecting which PVCs to include in snapshot policies (e.g. gitserver yes, indexServer no), and subPath would help with mounting additional files from a configMap / secret into the same destination directory.
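As a sketch of the configMap point above (all names here are hypothetical): mounting the data volume under a subPath leaves room to project a single file from a configMap into the same destination directory, since each mount targets a distinct path.

```yaml
# Illustrative pod spec fragment -- not from this chart.
volumeMounts:
  - name: data
    mountPath: /var/opt/app
    subPath: app-data            # only this subdirectory of the PVC is mounted
  - name: extra-config
    mountPath: /var/opt/app/config.yaml
    subPath: config.yaml         # single file projected from the configMap
volumes:
  - name: extra-config
    configMap:
      name: app-extra-config
```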

As an aside, the subPath for Prometheus may be the workaround we need to support read-only root volumes for Prometheus, something I've got half sorted out.

@loujar
Contributor Author

loujar commented May 11, 2026

sure, but what is the use case? are they looking to share the same PVC instance across multiple services?

that's correct. This was a specific requirement that came up during a trial implementation. We worked around the constraints by manually templating & editing the helm chart resources to validate that we could support these requirements, but we want to manage this in standard helm chart configuration long term.

@michaellzc
Member

sure, but what is the use case? are they looking to share the same PVC instance across multiple services?

that's correct. This was a specific requirement that came up during a trial implementation. We worked around the constraints by manually templating & editing the helm chart resources to validate that we could support these requirements, but we want to manage this in standard helm chart configuration long term.

Not blocking, just curious as to why. Are they facing a constraint on the number of PVCs they're allowed to provision, or looking to simplify PVC backup (backing up a single disk vs. many)?

@loujar
Contributor Author

loujar commented May 11, 2026

sure, but what is the use case? are they looking to share the same PVC instance across multiple services?

that's correct. This was a specific requirement that came up during a trial implementation. We worked around the constraints by manually templating & editing the helm chart resources to validate that we could support these requirements, but we want to manage this in standard helm chart configuration long term.

Not blocking, just curious as to why. Are they facing a constraint on the number of PVCs they're allowed to provision, or looking to simplify PVC backup (backing up a single disk vs. many)?

As I understand it, this is just how this organization prefers to manage the volumes used by services in their shared cluster: a single volume shared across all of the application's PVCs. It's most likely related to their standardized DR procedures.
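For context, the pattern described (one shared volume, with each service isolated under its own subPath) looks roughly like the fragment below. Names are hypothetical, and note that sharing one PVC across multiple pods generally requires a ReadWriteMany-capable storage class:

```yaml
# Hypothetical pod spec fragment for one of the stateful services.
volumeMounts:
  - name: shared-data
    mountPath: /data
    subPath: gitserver           # each service writes only under its own subdirectory
volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: sourcegraph-shared-data   # hypothetical PVC shared by all services
```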

@loujar loujar enabled auto-merge (squash) May 11, 2026 21:31
@loujar loujar merged commit 0eb4889 into main May 11, 2026
5 checks passed
@loujar loujar deleted the lsj/sub-path-config branch May 11, 2026 21:32