
Add support for custom Persistent Volumes to enable GCS storage#381

Closed

seanson wants to merge 1 commit into OctopusDeploy:main from seanson:feat_gcs_persistence

Conversation

seanson commented Feb 24, 2025

In some environments, such as our Google Kubernetes Engine (GKE) cluster with Autopilot, privileged containers are not supported, so the NFS provisioner cannot run, and the standard ReadWriteMany storage classes are too heavyweight. This PR adds the ability to declare a custom Persistent Volume configuration, which enables backends such as Google Cloud Storage via its FUSE CSI provider. The feature does not impact existing installations, and I have included additional Helm unit test cases for it. I have tested this locally in our GKE cluster and it is working.
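For context, a custom Persistent Volume for the GKE Cloud Storage FUSE CSI driver might look roughly like the sketch below. The PV name, capacity, and bucket name are illustrative, not taken from the PR:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: octopus-agent-gcs-pv          # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  storageClassName: ""                # statically provisioned; no dynamic provisioner
  mountOptions:
    - implicit-dirs
  csi:
    driver: gcsfuse.csi.storage.gke.io
    volumeHandle: my-octopus-bucket   # GCS bucket name (illustrative)
```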

seanson requested a review from a team as a code owner on February 24, 2025 08:58
changeset-bot commented Feb 24, 2025

⚠️ No Changeset found

Latest commit: ae2c8d1

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


CLAassistant commented Feb 24, 2025

CLA assistant check
All committers have signed the CLA.

seanson force-pushed the feat_gcs_persistence branch from 55c3eb0 to 053a054 on February 24, 2025 09:13
APErebus (Contributor) commented:

Hi @seanson! Thanks for the PR.

Just letting you know our team is mostly unavailable this week so this might not get a review until next week.

seanson (Author) commented Mar 17, 2025

> Hi @seanson! Thanks for the PR.
>
> Just letting you know our team is mostly unavailable this week so this might not get a review until next week.

Hi @APErebus - Any update on this?

seanson force-pushed the feat_gcs_persistence branch from 053a054 to ae2c8d1 on March 17, 2025 05:20
APErebus (Contributor) commented:

> > Hi @seanson! Thanks for the PR.
> > Just letting you know our team is mostly unavailable this week so this might not get a review until next week.
>
> Hi @APErebus - Any update on this?

Hey @seanson, let me double check in the team. We did discuss this but I can't remember exactly what the outcome was 😅

seanson (Author) commented Mar 18, 2025

> > > Hi @seanson! Thanks for the PR.
> > > Just letting you know our team is mostly unavailable this week so this might not get a review until next week.
> >
> > Hi @APErebus - Any update on this?
>
> Hey @seanson, let me double check in the team. We did discuss this but I can't remember exactly what the outcome was 😅

Awesome @APErebus! Let me know if I can do anything else to help with this. It is also related to OctopusDeploy/OctopusTentacle#1071 as this enables the behaviour for the tentacle but additional functionality would be required to make this work for the script pods.

APErebus (Contributor) commented:

Hi @seanson,

We had a chat internally and feel this isn't the direction we want to take the Agent (and this Helm chart).

Our design preference is that customers configure their PVs outside of the Helm chart and then reference them via the persistence.storageClassName value. That said, we don't currently support binding to a PV directly by name, so we could look at adding that functionality (something like persistence.persistentVolumeName).

Is there a specific reason why you need the agent Helm chart to manage the PV? If we added support for binding the PVC directly to a named PV, would that work for you?
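For reference, a chart value along the lines of the proposed persistence.persistentVolumeName would presumably render into the claim's spec.volumeName, which is how Kubernetes binds a PVC to a specific pre-created PV. A minimal sketch (PVC name and sizes are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: octopus-agent-data            # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                # empty string disables dynamic provisioning
  volumeName: octopus-deploy-tentacle-pv  # bind directly to this named PV
  resources:
    requests:
      storage: 10Gi
```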

seanson (Author) commented Mar 19, 2025

I would generally prefer managing as many resources as possible through the Helm chart of whatever I'm deploying, as it keeps things simple. Would the team be amenable to another common Helm chart pattern, providing an extraObjects definition, in combination with your suggestion to allow binding directly to a named PV? The extra-objects pattern is fairly common and would help solve issues for other folks who may need specific CRDs in their environment.

An example of this is in the Unleash chart:
https://github.com/Unleash/helm-charts/blob/8c50fd658fa3449399e31206f2924d51326705c2/charts/unleash/values.yaml#L242
https://github.com/Unleash/helm-charts/blob/8c50fd658fa3449399e31206f2924d51326705c2/charts/unleash/templates/extra-manifests.yaml

With an example usage in this chart looking like:

```yaml
persistence:
  pvName: octopus-deploy-tentacle-pv
extraObjects:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: octopus-deploy-tentacle-pv
  spec:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 10Gi # other YAML config, etc.
```
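The template side of the extra-objects pattern is small: iterate over the list and render each entry through tpl so values can be templated. A rough sketch of the pattern (not the Unleash chart's exact file):

```yaml
{{- range .Values.extraObjects }}
---
{{ tpl (toYaml .) $ }}
{{- end }}
```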

APErebus (Contributor) commented:

> I would generally prefer managing as many resources as possible through the Helm chart of whatever I'm deploying as it keeps things simplified. Would the team be amenable to another common Helm chart pattern of providing an extra objects definition in combination with your suggestion to allow binding directly to a named PV? The extra objects pattern is fairly common and would help solve issues for other folks who may need some specific CRDs in their environment.
>
> An example of this is in the Unleash chart: Unleash/helm-charts@8c50fd6/charts/unleash/values.yaml#L242 Unleash/helm-charts@8c50fd6/charts/unleash/templates/extra-manifests.yaml
>
> With an example usage in this chart looking like:
>
> ```yaml
> persistence:
>   pvName: octopus-deploy-tentacle-pv
> extraObjects:
> - apiVersion: v1
>   kind: PersistentVolume
>   metadata:
>     name: octopus-deploy-tentacle-pv
>   spec:
>     accessModes:
>     - ReadWriteMany
>     capacity:
>       storage: 10Gi # other YAML config, etc.
> ```

@seanson Hey, sorry I missed your response.

I'll chat to the team again :)

APErebus (Contributor) commented:

@seanson I had a chat with @liam-mackie on our team and we are comfortable with the extraObjects approach.

We are happy to accept a PR for this change. We will need to add some extra documentation to make it clear that automatic upgrades of the Agent by Octopus Server may cause unexpected side effects (such as PVs being reprovisioned).

seanson closed this on May 12, 2025

seanson (Author) commented May 12, 2025

Closing this request; I will re-open with a new PR soon.
