Description
Recently, we have been experimenting with the caching Verdaccio setup introduced in #153, since our backing storage has had uptime problems in recent weeks.
However, after enabling the cache via cachingNginx.enabled: true, nothing is cached, as confirmed by connecting to the nginx container and inspecting the cache directory.
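For completeness, the values override we apply boils down to the following (shown here as a minimal values.yaml; the cachingNginx.enabled key is the one mentioned above):

# values.yaml override for the verdaccio chart;
# enables the nginx caching sidecar introduced in #153
cachingNginx:
  enabled: true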
Additional Context
We use Verdaccio mainly as a registry proxy for the npmjs registry, though we also host a few internal packages, which are stored in S3-backed storage through verdaccio-minio. The infrastructure of the S3 provider has had quite a few outages in recent months, causing Verdaccio to stop responding and refuse requests until the provider restored the storage.
The main traffic comes from a very active CI pipeline that runs in the same k8s cluster, with a few requests made externally by engineers through an ingress configured with GitLab authentication. The Verdaccio instance therefore needs to be reachable both from the internal service and externally (with authentication, due to the use of private packages). To handle this traffic, we run multiple instances for high availability.
We cannot expose the caching instance externally, since the nginx sidecar does not enforce the authentication we require. We therefore opted for a separate, internal-only caching Verdaccio instance, configured to uplink to the main Verdaccio instances via their k8s-internal DNS name.
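As a sketch, the uplink configuration of the caching instance looks roughly like this (service and namespace names are placeholders, not our actual setup):

uplinks:
  main:
    # k8s-internal DNS name of the main verdaccio service (placeholder)
    url: http://verdaccio.registry.svc.cluster.local:4873
packages:
  '**':
    access: $all
    # proxy all requests through the main instance
    proxy: main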
The Verdaccio charts are deployed via plain Helm with some value overrides. However, since we want this configuration to be managed through ArgoCD, we cannot rely on manual patches applied to the Helm output. Furthermore, we prefer not to maintain our own Helm chart when a community chart is available.
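For illustration, a minimal ArgoCD Application for this setup could look as follows (repository URL, chart version, and namespaces are assumptions, not our exact manifest):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: verdaccio-cache
  namespace: argocd
spec:
  project: default
  source:
    # assumed chart repository; pin targetRevision to the chart version in use
    repoURL: https://charts.verdaccio.org
    chart: verdaccio
    targetRevision: x.y.z
    helm:
      values: |
        cachingNginx:
          enabled: true
  destination:
    server: https://kubernetes.default.svc
    namespace: registry
  syncPolicy:
    automated:
      prune: true
      selfHeal: true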
Solution
We found that modifying the default.conf provided in the configmap (see below) solves the problem: the chart declares a cache zone via proxy_cache_path, but default.conf never activates it with a proxy_cache directive, so nginx does not cache anything. The relevant directives are described in the nginx blog: https://blog.nginx.org/blog/nginx-caching-guide.
server {
  listen 80;
  location / {
    proxy_pass http://127.0.0.1:4873;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
+   # use the cache zone declared via proxy_cache_path (named STATIC here)
+   proxy_cache STATIC;
+   # cache successful responses for 1h, 404s for 1m
+   proxy_cache_valid 200 1h;
+   proxy_cache_valid 404 1m;
+   # expose cache HIT/MISS status for debugging
+   add_header X-Proxy-Cache $upstream_cache_status;
  }
}
If you like, I can prepare a pull request that fixes this issue, but I am not sure whether omitting these directives was an intentional decision by the author, given the complexity of the possible cache configurations.
However, for a final solution, I think the directives above need to be made configurable, as they depend on the actual setup. For a bare-minimum setup, the proxy_cache_valid directives could even be dropped, since expiry should already be handled by the inactive timeout of proxy_cache_path.
This probably requires some discussion on how these configuration dependencies should be handled without making the Helm chart values too complicated, while still allowing dynamic configuration that does not break existing Helm deployments.
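As a starting point for that discussion, a possible values schema could look like this (purely hypothetical, not part of the current chart):

# hypothetical extension of the chart values, for discussion only
cachingNginx:
  enabled: true
  cache:
    # name of the zone declared via proxy_cache_path
    zone: STATIC
    # optional per-status validity; omit to rely on the inactive timeout
    valid:
      "200": 1h
      "404": 1m
    # emit the X-Proxy-Cache header for debugging
    statusHeader: true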
If you need additional information or an example configuration, please let me know and I will take the time to create a proper minimal example.