diff --git a/install_config/cluster_metrics.adoc b/install_config/cluster_metrics.adoc
index 8110598518fb..1b9ba4ffef93 100644
--- a/install_config/cluster_metrics.adoc
+++ b/install_config/cluster_metrics.adoc
@@ -165,10 +165,10 @@ metrics data in this scenario is:
 .Data Accumulated by 120 Nodes and 10000 Pods
 ====
 In a test scenario including 120 nodes and 10000 pods, a 24 hour period
-accumulated 25 GB of metrics data. Therefore, the capacity planning formula for
+accumulated 11.4 GB of metrics data. Therefore, the capacity planning formula for
 metrics data in this scenario is:

-(((11.410 × 10^9^) ÷ 1000) ÷ 24) ÷ 10^6^ = 0.475 MB/hour
+(((11.4 × 10^9^) ÷ 1000) ÷ 24) ÷ 10^6^ = 0.475 MB/hour
 ====

 |===
@@ -185,8 +185,8 @@ These two test cases are presented on the following graph:
 image::https://raw.githubusercontent.com/ekuric/openshift/master/metrics/1_10kpods.png[1000 pods vs 10000 pods monitored during 24 hours]
 endif::openshift-origin[]

-If the default value of 7 days for `openshift_metrics_duration` and 10 seconds for
-`openshift_metrics_resolution` are preserved, then weekly storage requirements for the Cassandra pod would be:
+If the default value of 7 days for `openshift_metrics_duration` is preserved and
+`openshift_metrics_resolution` is set to 10 seconds, then the weekly storage requirements for the Cassandra pod would be:

 |===
 | |1000 pods | 10000 pods
@@ -248,8 +248,8 @@ Cluster Metrics] topic.
 In the above calculation, approximately 20 percent of the expected size was
 added as overhead to ensure that the storage requirements do not exceed
 calculated value.

-If the `METRICS_DURATION` and `METRICS_RESOLUTION` values are kept at the
-default (`7` days and `15` seconds respectively), it is safe to plan Cassandra
-storage size requrements for week, as in the values above.
+If the `METRICS_DURATION` value is kept at the default (`7` days) and the
+`METRICS_RESOLUTION` value is set to `15` seconds, it is safe to plan Cassandra
+storage size requirements for a week, as in the values above.
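For reviewers checking the corrected numbers, here is a minimal sketch of the capacity-planning arithmetic in Python, purely for illustration. The helper names are hypothetical; the 11.4 GB figure, the ÷1000 divisor, the 24-hour window, the 7-day default duration, and the roughly 20 percent overhead are all taken from the hunks above.

# A minimal sketch (not part of the patch) of the capacity-planning
# arithmetic described in the corrected hunks. Helper names are
# illustrative, not from the docs.

def hourly_rate_mb(accumulated_bytes, divisor=1000, hours=24):
    """The doc's formula: (((bytes ÷ divisor) ÷ hours) ÷ 10^6) MB/hour."""
    return accumulated_bytes / divisor / hours / 1e6

def weekly_storage_mb(rate_mb_per_hour, days=7, overhead=0.20):
    """Project Cassandra storage for the default 7-day duration,
    padded by roughly 20 percent as the patch describes."""
    return rate_mb_per_hour * 24 * days * (1 + overhead)

# 120-node / 10000-pod test case: 11.4 GB accumulated over 24 hours.
rate = hourly_rate_mb(11.4e9)
print(f"{rate:.3f} MB/hour")                     # 0.475 MB/hour
print(f"{weekly_storage_mb(rate):.1f} MB/week")  # ~95.8 MB with overhead

Running this confirms that 11.4 GB over 24 hours yields the 0.475 MB/hour rate in the corrected formula, which is why the hunk replaces both the stale 25 GB prose figure and the 11.410 operand.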