docs: App metrics doc updated #43
# Application Metrics

Application Metrics are the indicators used to evaluate the performance and efficiency of your application. They can be enabled in the Devtron platform to view your application's metrics.

## Types of Metrics Available in the Devtron Platform

Devtron provides CPU and memory metrics for each application by default, i.e. without enabling Application Metrics. Prometheus only needs to be present in the cluster, with its endpoint configured in the Global Configurations → Clusters & Environments section (see the setup steps below).

1. **CPU Usage:** Overall CPU utilization of the application, available both per pod and aggregated.
2. **Memory Usage:** Overall memory utilization of the application, available both per pod and aggregated.
3. **Throughput:** Number of requests processed by the application per minute.
4. **Latency:** Delay between a request and its response, measured in percentiles. The 99.9th, 99th, and 95th percentile latencies are shown by default (for example, the 99th percentile latency is the maximum latency, in seconds, for the fastest 99% of requests), and a custom percentile can also be entered in the dropdown. A latency measurement based on a single request is not meaningful.
5. **Status Codes:** The application's responses to client requests, grouped by status code: 1xx (protocol-level information), 2xx (request accepted successfully), 3xx (further client action required), 4xx (client-side error), and 5xx (server-side error).

Throughput, latency, and status code metrics are only available when Application Metrics is enabled from the Deployment Template. When enabled, Devtron attaches an Envoy sidecar container to your main container; it runs as a transparent proxy, and each request is passed through it to measure these metrics.

**Note:** Since all requests are passed through Envoy, any misconfiguration in the Envoy config can bring your application down, so please test the configuration extensively in a non-production environment.

The Envoy sidecar can be tuned through the `envoyproxy` block of the Deployment Template; the defaults are:

```yaml
envoyproxy:
  image: envoyproxy/envoy:v1.14.1
  configMapName: ""
  resources:
    limits:
      cpu: "50m"
      memory: "50Mi"
    requests:
      cpu: "50m"
      memory: "50Mi"
```

## Setup Application Metrics

1. **Install Grafana Dashboard:**

To use the Grafana dashboard, you first need to install the Grafana integration from the Devtron Stack Manager.

[Read Grafana Dashboard](https://docs.devtron.ai/devtron/v0.7/usage/integrations/grafana)

2. **Install Required CRDs:**

Before installing Prometheus from the chart store, manually apply the necessary CRDs in your cluster; the Helm chart does not apply them for you.

Navigate to `Resource Browser → Select your cluster → Click View Terminal` to open the cluster terminal.

![cluster-terminal](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/app-metrics/cluster-terminal.jpg)

Now, apply these manifests with server-side validation:

```bash
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/setup/0alertmanagerConfigCustomResourceDefinition.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/setup/0alertmanagerCustomResourceDefinition.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/setup/0podmonitorCustomResourceDefinition.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/setup/0probeCustomResourceDefinition.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/setup/0prometheusCustomResourceDefinition.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/setup/0prometheusagentCustomResourceDefinition.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/setup/0prometheusruleCustomResourceDefinition.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/setup/0scrapeconfigCustomResourceDefinition.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/setup/0servicemonitorCustomResourceDefinition.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/setup/0thanosrulerCustomResourceDefinition.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/setup/namespace.yaml
```
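Optionally, you can confirm that the CRDs were registered before moving on. This is only a sanity check, and the exact list may vary with the kube-prometheus version:

```bash
# List the Prometheus Operator CRDs that should now exist in the cluster
kubectl get crd | grep monitoring.coreos.com
```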

3. **Install Prometheus:**

Go to the Chart Store and search for `prometheus`. Use the Prometheus community's `kube-prometheus-stack` chart to deploy Prometheus.

![prometheus-chart](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/app-metrics/prometheus-chart.jpg)

After selecting the chart, configure these values as needed before deployment:
```yaml
kube-state-metrics:
  metricLabelsAllowlist:
    - pods=[*]
```

Search for the above parameters, update them as shown (or customize as needed), and then click `Deploy Chart`.

![prometheus-config](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/app-metrics/prometheus-config.jpg)
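If you prefer installing from the command line instead of the Devtron chart store, a roughly equivalent Helm invocation would look like the sketch below; the release name, namespace, and values file name are illustrative assumptions:

```bash
# Add the chart repository and install kube-prometheus-stack with the
# kube-state-metrics override shown above saved in custom-values.yaml
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  -f custom-values.yaml
```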

4. **Setup Prometheus Endpoint:**

After installing Prometheus, find its endpoint under `Networking → Service` in the K8s resources. Expand the Prometheus server service to see its endpoints.

Copy the URL of the `kube-prometheus` service as shown in the image below.

![prometheus-endpoint](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/app-metrics/prometheus-endpoint.jpg)

To set Prometheus as the data source for Grafana, navigate to `Global Configurations → Clusters & Environments`, select your cluster, and edit its settings.

![prometheus-cluster-setting](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/app-metrics/prometheus-cluster-setting.jpg)

Now, to set up the Prometheus endpoint:
- Enable the **See metrics for applications in this cluster** option, as shown in the image below.
- Paste the copied URL into the **Prometheus endpoint** field, ensuring it includes `http://`.
- Click **Update Cluster** to save the changes.
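For reference, the copied endpoint usually looks something like the example below. The exact service name and namespace depend on your Helm release; this example assumes a release named `prometheus` installed in the `monitoring` namespace:

```
http://prometheus-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090
```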

![prometheus-endpoint](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/app-metrics/prometheus-endpoint2.jpg)

After adding the endpoint, application metrics (CPU usage and memory usage) will be visible in the Devtron dashboard for all the Devtron apps in the cluster. It may take a few minutes (roughly 3 to 5) after adding the endpoint for the metrics to start appearing.

![devtron-app-metrics](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/app-metrics/devtron-app-metrics.jpg)

5. **Enable Application Metrics:**

To enable Throughput and Latency metrics in Devtron, follow these steps:
- Open your Devtron app.
- Go to `Configurations → Base Configurations → Deployment Template`.
- Enable `Application Metrics` in the Deployment Template as shown below and save the changes. Enabling this attaches the Envoy sidecar described above.

![enable-application-metrics-deployment-template](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/app-metrics/enable-app-metrics-dt.jpg)

Now you can track all your application metrics by navigating to `Devtron Apps → Your App → App Details`, where the Application Metrics are displayed as shown below.

![app-metrics-dashboard](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/app-metrics/app-metrics-dashboard.jpg)

## Checklist for Enabling Application Metrics in Production

* [ ] Adjust the resources of the Envoy sidecar container. By default, Devtron allocates 50m CPU and 50Mi memory as both limits and requests, which should be enough to handle traffic of up to about 3000 rpm per pod; if each replica of your pod is expected to handle more than 3000 rpm, adjust the resources accordingly.
* [ ] If you are not using HTTP/2 or streaming protocols, set `supportStreaming` and `useHTTP2` in `ContainerPort` to `false`.
* [ ] Use the Envoy image `quay.io/devtron/envoy:v1.14.1` instead of the default `envoyproxy/envoy:v1.14.1` if your cluster occasionally hits the Docker Hub pull rate limit or if you are running many replicas/microservices in the cluster.
* [ ] Enable and test extensively in a non-production environment, including load testing up to the highest expected rpm per pod.