docs: App metrics doc updated #43

Open · wants to merge 12 commits into `main`
96 changes: 58 additions & 38 deletions docs/user-guide/creating-application/app-metrics.md
# Application Metrics

Application Metrics are the indicators used to evaluate the performance and efficiency of your application. They can be enabled in the Devtron platform to view your application's metrics.

## Types of Metrics Available in the Devtron Platform

1. **CPU Usage:** Overall CPU utilization, shown per pod and aggregated.
2. **Memory Usage:** Overall memory utilization, shown per pod and aggregated.
3. **Throughput:** Number of requests processed per minute.
4. **Latency:** Delay between request and response, measured in percentiles.

## Set Up Application Metrics

1. **Install the Grafana Dashboard:**

To use the Grafana dashboard, first install the Grafana integration from the [Devtron Stack Manager](../integrations/README.md).

[Read Grafana Dashboard](../integrations/grafana.md)

2. **Install Prometheus:**

Go to the Chart Store and search for `prometheus`. Use the Prometheus community's `kube-prometheus-stack` chart to deploy Prometheus.

![Figure 1: Chart Store](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/app-metrics/app2.jpg)

After selecting the chart, configure these values as needed before deployment.

```yaml
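# Allowlist pod labels so kube-state-metrics exposes them on its metrics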
kube-state-metrics:
metricLabelsAllowlist:
- pods=[*]
```

Search for the above parameter and update it as shown (or customize it as needed); allowlisting pod labels lets kube-state-metrics expose them on its `kube_pod_labels` metric.

![Figure 2: Prometheus Chart](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/app-metrics/app3.jpg)

3. **Enable the `upgradeJob` parameter to install CRDs:**

Since Helm does not automatically apply CRDs, you need to enable the `upgradeJob` parameter in the Helm chart to ensure CRDs are applied before deploying Prometheus.

- In the Prometheus Helm chart settings, locate the `upgradeJob` parameter and set it to `true` if it is currently `false` (see the illustrative values sketch below).

![Figure 3: upgradeJob Parameter](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/app-metrics/app-new2.jpg)

After enabling the parameter, click `Deploy Chart`.
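
A minimal sketch of the corresponding values override is shown below, assuming the chart exposes the CRD upgrade job directly under an `upgradeJob` key. The exact key path varies between `kube-prometheus-stack` versions (in some releases it sits under a `crds` section), so confirm the parameter name in the chart's `values.yaml`.

```yaml
# Illustrative values override for the kube-prometheus-stack chart.
# Key placement varies by chart version; verify it against the chart's values.yaml.
upgradeJob:
  enabled: true   # run the job that applies/updates the Prometheus Operator CRDs before deployment
```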

4. **Set Up the Prometheus Endpoint:**

Once Prometheus is installed, go to its **App Details** and navigate to **Networking → Service** in the K8s resources. Expand the Prometheus server service to see the endpoints.

Copy the URL of the `kube-prometheus` service as shown in the image below.

![Figure 4: Prometheus Service](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/app-metrics/app4.jpg)

To set Prometheus as a data source in Grafana, navigate to **Global Configurations → Clusters & Environments**, select your cluster, and edit its settings.

![Figure 5: Clusters and Environments](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/app-metrics/app5.jpg)

Now, to set up the Prometheus endpoint:
- Enable the `See metrics for applications in this cluster` option, as shown in the image below.
- Paste the copied URL into the Prometheus endpoint field, making sure it includes the `http://` prefix.
- Click **Update Cluster** to save the changes.

![Figure 6: Prometheus Endpoint](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/app-metrics/app6.jpg)
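
For reference, an in-cluster Prometheus endpoint usually follows the Kubernetes service DNS pattern shown below. The release name, namespace, and port here (`prometheus`, `monitoring`, `9090`) and the `prometheusEndpoint` key are purely illustrative; always paste the exact URL copied from the Service view.

```yaml
# Illustrative only - the actual service name, namespace, and port depend on your installation.
# Copy the real URL from App Details -> Networking -> Service.
prometheusEndpoint: http://prometheus-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090
```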

After the endpoint is added, application metrics become visible in the Devtron dashboard for all Devtron apps in the cluster, starting with CPU usage and memory usage. It may take a few minutes (roughly 3-5) for the metrics to appear.

![Figure 7: CPU Usage & Memory Usage](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/app-metrics/app7.jpg)

5. **Enable Application Metrics:**

To enable Throughput and Latency metrics in Devtron, follow these steps:
- Open your Devtron app.
- Go to **Configurations → Base Configurations → Deployment Template**.
- Enable **Application Metrics** in the Deployment Template as shown below and save the changes. If the deployment template is overridden for a specific environment, enable Application Metrics in that environment's override as well.

![Figure 8: Enable Application Metrics](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/app-metrics/app8.jpg)
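
Under the hood, enabling Application Metrics typically has Devtron attach an Envoy sidecar to your main container; it runs as a transparent proxy and measures throughput, latency, and status-code metrics for the traffic that passes through it. The deployment template exposes an `envoyproxy` block for tuning the sidecar, sketched below with default values (treat the image tag and resource figures as illustrative and verify them against your own deployment template).

```yaml
# Envoy sidecar settings in the Devtron deployment template (defaults shown; illustrative).
envoyproxy:
  image: envoyproxy/envoy:v1.14.1   # quay.io/devtron/envoy:v1.14.1 avoids Docker Hub pull-rate limits
  configMapName: ""
  resources:
    limits:
      cpu: "50m"
      memory: "50Mi"
    requests:
      cpu: "50m"
      memory: "50Mi"
```

The default 50m CPU / 50Mi memory allocation is generally enough for up to roughly 3000 requests per minute per pod; increase it if a replica is expected to handle more, and test the configuration extensively in a non-production environment first, since all traffic flows through the sidecar.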

Now you can track all your application metrics by navigating to **Applications** and opening the **App Details** page of your Devtron app, as shown below.

![Figure 9: Application Metrics](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/app-metrics/app-new3.jpg)

{% hint style="warning" %}
### Note
The Enable metrics option is available only for [Devtron charts](../deploy-chart/README.md), not for [Custom Deployment Charts](../global-configurations/deployment-charts.md).
{% endhint %}