Releases: newrelic-forks/node-local-dns-cache-chart
node-local-dns-2.1.6
A chart to install node-local-dns. NodeLocal DNSCache improves Cluster DNS performance by running a DNS caching agent on cluster nodes as a DaemonSet. In today's architecture, Pods in 'ClusterFirst' DNS mode reach out to a kube-dns serviceIP for DNS queries, which is translated to a kube-dns/CoreDNS endpoint via iptables rules added by kube-proxy. With this new architecture, Pods reach out to the DNS caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking. The local caching agent queries the kube-dns service for cache misses of cluster hostnames ("cluster.local" suffix by default). Further documentation is available in the upstream NodeLocal DNSCache documentation. This Helm chart works for both kube-proxy setups (iptables or ipvs).
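Installation follows the usual Helm flow. A minimal sketch, assuming the chart is installed from a checkout of this repository (the chart location within the repo and the release name are assumptions; no custom values are shown):

```shell
# Clone the chart repository and install the caching DaemonSet into kube-system.
# The chart path is an assumption; adjust it to the actual repo layout.
git clone https://github.com/newrelic-forks/node-local-dns-cache-chart
helm install node-local-dns ./node-local-dns-cache-chart \
  --namespace kube-system
```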
node-local-dns-2.1.5
A chart to install node-local-dns. NodeLocal DNSCache improves Cluster DNS performance by running a DNS caching agent on cluster nodes as a DaemonSet. In today's architecture, Pods in 'ClusterFirst' DNS mode reach out to a kube-dns serviceIP for DNS queries, which is translated to a kube-dns/CoreDNS endpoint via iptables rules added by kube-proxy. With this new architecture, Pods reach out to the DNS caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking. The local caching agent queries the kube-dns service for cache misses of cluster hostnames ("cluster.local" suffix by default). Further documentation is available in the upstream NodeLocal DNSCache documentation. This Helm chart works for both kube-proxy setups (iptables or ipvs).
node-local-dns-2.1.4
A chart to install node-local-dns. NodeLocal DNSCache improves Cluster DNS performance by running a DNS caching agent on cluster nodes as a DaemonSet. In today's architecture, Pods in 'ClusterFirst' DNS mode reach out to a kube-dns serviceIP for DNS queries, which is translated to a kube-dns/CoreDNS endpoint via iptables rules added by kube-proxy. With this new architecture, Pods reach out to the DNS caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking. The local caching agent queries the kube-dns service for cache misses of cluster hostnames ("cluster.local" suffix by default). Further documentation is available in the upstream NodeLocal DNSCache documentation. This Helm chart works for both kube-proxy setups (iptables or ipvs).
wiremock-1.4.6
A service virtualization tool (some call it a mock server) for testing purposes. This is a templated deployment of WireMock for mocking services during test scenario execution, for load tests as well as for manual and automated QA purposes.

By default the chart will install WireMock with only a `/status` mapping for readiness probes. One can utilize its HTTP API as well as the file configuration documented in the Running as a Standalone Process guide, in the "Configuring via JSON over HTTP" and "JSON file configuration" chapters. The JSON file configuration is the recommended setup, and the stub mappings should be provided in ConfigMaps, one per folder.

> `mappings` and `__files` are optional, but each folder requires its own ConfigMap. The `-mappings` and `-files` suffixes are mandatory.

```console
kubectl create configmap my-service1-stubs-mappings --from-file=path/to/your/service1/mappings
kubectl create configmap my-service1-stubs-files --from-file=path/to/your/service1/__files
kubectl create configmap my-service2-stubs-mappings --from-file=path/to/your/service2/mappings
kubectl create configmap my-service2-stubs-files --from-file=path/to/your/service2/__files
```

Install the chart passing the stubs as a value, omitting the suffixes, as both `mappings` and `__files` folders are handled transparently during initialization depending on their existence.

```console
helm install my-wiremock deliveryhero/wiremock \
  --set consumer=my-consumer \
  --set "consumer.stubs.my-service1-stubs=/mnt/my-service1-stubs" \
  --set "consumer.stubs.my-service2-stubs=/mnt/my-service2-stubs"
```

WireMock's admin API is not publicly exposed, but it can be accessed using port forwarding:

```console
kubectl port-forward my-wiremock-123456789a-bcdef 8080
```

The HTTP API can then be accessed at http://localhost:8080/__admin/docs/, where a Swagger UI is available.

> For stub files that exceed the size limit of ConfigMaps, one can define a binary ConfigMap with a compressed archive that contains the file in question.

```console
gzip large.json
kubectl create configmap my-binary-stub --from-file=large.json.gz
```

The resulting archive is best installed into WireMock using a `values.yaml` file:

```yaml
consumer:
  initContainer:
    - name: unzip-large-file
      image: busybox:latest
      command: ["sh", "-c", "cp /archive/large.json.gz /working/mappings; gunzip /working/mappings/large.json.gz"]
      volumeMounts:
        - mountPath: /working
          name: working
        - mountPath: /archive
          name: my-binary-stub
  initVolume:
    - name: my-binary-stub
      configMap:
        name: my-binary-stub
```
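Stubs can also be registered at runtime over the admin API ("Configuring via JSON over HTTP"). A minimal sketch, assuming the port-forward to the WireMock pod is in place; the `/hello` stub below is hypothetical:

```shell
# Register a stub mapping through WireMock's admin API
# (requires an active port-forward to the pod on port 8080).
curl -X POST http://localhost:8080/__admin/mappings \
  -H 'Content-Type: application/json' \
  -d '{
        "request":  { "method": "GET", "urlPath": "/hello" },
        "response": { "status": 200, "body": "Hello from WireMock" }
      }'

# The stub now answers on the service port:
curl http://localhost:8080/hello
```

Stubs registered this way live in memory only; ConfigMap-provided mappings survive pod restarts, which is why the chart recommends the JSON file configuration.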
weblate-0.3.2
Free web-based translation management system.
toxiproxy-1.3.9
A TCP proxy to simulate network and system conditions for chaos and resiliency testing. By default the chart will install toxiproxy with a blank configuration. You can add toxics to the running configuration using the API. For large configurations it is easier to store your toxics in a JSON file inside a ConfigMap and pass it to the chart to be used by toxiproxy:

```console
kubectl create configmap my-toxiproxy-config --from-file path/to/your/toxiproxy.json
```

Then install the chart, passing the name of the ConfigMap as a value:

```console
helm install toxiproxy deliveryhero/toxiproxy --set toxiproxyConfig=my-toxiproxy-config
```
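The `toxiproxy.json` referenced above holds toxiproxy's proxy definitions as a JSON array. A minimal sketch, assuming a Redis service to proxy (the proxy name, service address, and ports are hypothetical):

```shell
# Create a minimal toxiproxy.json: a JSON array of proxy definitions.
# The Redis service name and ports below are hypothetical.
cat > toxiproxy.json <<'EOF'
[
  {
    "name": "redis",
    "listen": "0.0.0.0:26379",
    "upstream": "redis.default.svc.cluster.local:6379",
    "enabled": true
  }
]
EOF
```

Clients then connect to toxiproxy on port 26379 instead of Redis directly; toxics (latency, bandwidth limits, timeouts) can be attached to the proxy at runtime through toxiproxy's HTTP API, which listens on port 8474 by default.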
superset-1.1.3
A Helm chart for Apache Superset
service-account-1.1.1
Creates a ServiceAccount, a ClusterRoleBinding, and a ClusterRole with the provided rules. This is useful when combined with IAM roles for service accounts (IRSA).
rds-downscaler-1.0.5
A small Python script that runs on a cron schedule and periodically downscales AWS RDS instances. It filters RDS instances/clusters by tag key and value, or targets a particular cluster specified by its cluster identifier.
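The operations the script automates map onto plain AWS CLI calls; a sketch of the equivalent manual commands (the instance and cluster identifiers below are hypothetical):

```shell
# Stop a standalone RDS instance, as the downscaler does for each match.
aws rds stop-db-instance --db-instance-identifier my-staging-db

# Aurora members are stopped at the cluster level.
aws rds stop-db-cluster --db-cluster-identifier my-staging-cluster
```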
prometheus-statsd-exporter-0.1.5
StatsD to Prometheus metrics exporter