Note: This repository contains the guide documentation source. To view the guide in published form, view it on the Open Liberty website.
You’ll explore how to provide system and application metrics from a microservice with MicroProfile Metrics.
You will learn how to use MicroProfile Metrics to provide metrics from a microservice. You can monitor metrics to determine the performance and health of a service. You can also use them to pinpoint issues, collect data for capacity planning, or decide when to scale a service to run with more or fewer resources.
The application that you will work with is an inventory service that stores information about various systems. The inventory service communicates with the system service on a particular host to retrieve its system properties when necessary.
You will use annotations provided by MicroProfile Metrics to instrument the inventory service to provide application-level metrics data. You will add counter, gauge, and timer metrics to the service.
You will also check well-known REST endpoints that are defined by MicroProfile Metrics to review the metrics data collected. Monitoring agents can access these endpoints to collect metrics.
Point your browser to the http://localhost:9080/inventory/systems URL to access the inventory service. Because you just started the application, the inventory is currently empty. Access the http://localhost:9080/inventory/systems/localhost URL to add localhost to the inventory.
Access the inventory service at the http://localhost:9080/inventory/systems URL at least once for application metrics to be collected. Otherwise, the metrics do not appear.
Next, point your browser to the https://localhost:9443/metrics MicroProfile Metrics endpoint. Log in as the admin user with adminpwd as the password. You can see both the system and application metrics in a text format.
To see only the application metrics, point your browser to https://localhost:9443/metrics/application.
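If you prefer the command line, you can reach the same endpoints with curl. This is an illustrative sketch, not part of the guide's steps; it assumes the server is running, the -k flag skips certificate validation for the development-time self-signed certificate, and -u supplies the credentials defined in the server configuration:

```shell
# Retrieve all metrics (system, application, and vendor).
curl -k -u admin:adminpwd https://localhost:9443/metrics

# Retrieve only the application metrics.
curl -k -u admin:adminpwd https://localhost:9443/metrics/application
```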
See the following sample outputs for the @Timed, @Gauge, and @Counted metrics:
# TYPE application_inventoryProcessingTime_rate_per_second gauge
application_inventoryProcessingTime_rate_per_second{method="get"} 0.0019189661542898407
...
# TYPE application_inventoryProcessingTime_seconds summary
# HELP application_inventoryProcessingTime_seconds Time needed to process the inventory
application_inventoryProcessingTime_seconds_count{method="get"} 1
application_inventoryProcessingTime_seconds{method="get",quantile="0.5"} 0.127965469
...
# TYPE application_inventoryProcessingTime_rate_per_second gauge
application_inventoryProcessingTime_rate_per_second{method="list"} 0.0038379320982686884
...
# TYPE application_inventoryProcessingTime_seconds summary
# HELP application_inventoryProcessingTime_seconds Time needed to process the inventory
application_inventoryProcessingTime_seconds_count{method="list"} 2
application_inventoryProcessingTime_seconds{method="list",quantile="0.5"} 2.2185000000000002E-5
...

# TYPE application_inventorySizeGauge gauge
# HELP application_inventorySizeGauge Number of systems in the inventory
application_inventorySizeGauge 1

# TYPE application_inventoryAccessCount_total counter
# HELP application_inventoryAccessCount_total Number of times the list of systems method is requested
application_inventoryAccessCount_total 1
To see only the system metrics, point your browser to https://localhost:9443/metrics/base.
See the following sample output:
# TYPE base_jvm_uptime_seconds gauge
# HELP base_jvm_uptime_seconds Displays the start time of the Java virtual machine in milliseconds. This attribute displays the approximate time when the Java virtual machine started.
base_jvm_uptime_seconds 30.342000000000002

# TYPE base_classloader_loadedClasses_count gauge
# HELP base_classloader_loadedClasses_count Displays the number of classes that are currently loaded in the Java virtual machine.
base_classloader_loadedClasses_count 11231
To see only the vendor metrics, point your browser to https://localhost:9443/metrics/vendor.
See the following sample output:
# TYPE vendor_threadpool_size gauge
# HELP vendor_threadpool_size The size of the thread pool.
vendor_threadpool_size{pool="Default_Executor"} 32

# TYPE vendor_servlet_request_total counter
# HELP vendor_servlet_request_total The number of visits to this servlet since the start of the server.
vendor_servlet_request_total{servlet="microprofile_metrics_io_openliberty_guides_inventory_InventoryApplication"} 1
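The endpoints return metrics in the Prometheus text format: one sample per line, with optional `# HELP` and `# TYPE` comment lines and optional {key="value"} labels. As a rough illustration of how a monitoring agent might consume this output, the following sketch (not part of the guide's code) pulls a single sample value out of a scrape:

```java
import java.util.Optional;

public class MetricsScrape {

    // Returns the value of the first sample whose name matches the given
    // metric name, with or without a {label} block. Comment lines (#) are
    // skipped. Assumes label values contain no spaces, which holds for the
    // sample output in this guide.
    static Optional<Double> value(String scrape, String metric) {
        for (String line : scrape.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue;
            int space = line.lastIndexOf(' ');
            if (space < 0) continue;
            String key = line.substring(0, space);
            if (key.equals(metric) || key.startsWith(metric + "{")) {
                return Optional.of(Double.parseDouble(line.substring(space + 1)));
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        String scrape = String.join("\n",
            "# TYPE application_inventoryAccessCount_total counter",
            "# HELP application_inventoryAccessCount_total Number of times the list of systems method is requested",
            "application_inventoryAccessCount_total 1",
            "application_inventoryProcessingTime_seconds_count{method=\"get\"} 1");
        System.out.println(value(scrape, "application_inventoryAccessCount_total").get()); // prints 1.0
    }
}
```

Real agents such as Prometheus implement a full parser for this format; this sketch only shows the line-oriented shape of the data.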
pom.xml
link:finish/pom.xml[role=include]
server.xml
link:finish/src/main/liberty/config/server.xml[role=include]
Navigate to the start directory to begin.
The MicroProfile Metrics API is included in the MicroProfile dependency that is specified in your pom.xml file. Look for the dependency with the microprofile artifact ID. This dependency provides a library that allows you to use the MicroProfile Metrics API in your code to provide metrics from your microservices.
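For reference, that dependency typically looks like the following sketch. The version number here is illustrative; check your own pom.xml for the exact version that the guide uses:

```xml
<!-- Provides the MicroProfile APIs, including MicroProfile Metrics, at
     compile time; the Open Liberty runtime supplies the implementation. -->
<dependency>
    <groupId>org.eclipse.microprofile</groupId>
    <artifactId>microprofile</artifactId>
    <version>4.1</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>
```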
Replace the server configuration file.
src/main/liberty/config/server.xml
The mpMetrics feature enables MicroProfile Metrics support in Open Liberty. Note that this feature requires SSL, and that configuration is provided for you.
The quickStartSecurity and keyStore configuration elements provide basic security to secure the server. When you visit the /metrics endpoint, use the credentials defined in the server configuration to log in and view the data.
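The relevant elements look something like the following sketch. The feature version and keystore password here are illustrative; the guide's actual server.xml is the source of truth:

```xml
<server description="Inventory service">
    <featureManager>
        <!-- Enables the /metrics endpoints; this feature requires SSL. -->
        <feature>mpMetrics-3.0</feature>
    </featureManager>

    <!-- Basic security: the credentials used to log in to /metrics. -->
    <quickStartSecurity userName="admin" userPassword="adminpwd" />

    <!-- Keystore that backs the SSL requirement of the mpMetrics feature. -->
    <keyStore id="defaultKeyStore" password="mpKeystore" />

    <httpEndpoint host="*" httpPort="9080" httpsPort="9443"
                  id="defaultHttpEndpoint" />
</server>
```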
Replace the InventoryManager class.
src/main/java/io/openliberty/guides/inventory/InventoryManager.java
InventoryManager.java
link:finish/src/main/java/io/openliberty/guides/inventory/InventoryManager.java[role=include]
Apply the @Timed annotation to both the get() and list() methods.
This annotation has these metadata fields:
name: Optional. Use this field to name the metric.
tags: Optional. Use this field to add tags to metrics with the same name.
absolute: Optional. Use this field to determine whether the metric name is the exact name that is specified in the name field or that is specified with the package prefix.
description: Optional. Use this field to describe the purpose of the metric.
The @Timed annotation tracks how frequently the method is invoked and how long each invocation takes to complete.
Both the get() and list() methods are annotated with the @Timed metric and share the same inventoryProcessingTime name. The method=get and method=list tags add a dimension that uniquely identifies the metric data that each method collects.
- The method=get tag identifies the inventoryProcessingTime metric that measures the elapsed time to get the system properties by calling the system service.
- The method=list tag identifies the inventoryProcessingTime metric that measures the elapsed time for the inventory service to list all of the system properties in the inventory.
The tags allow you to query the metrics together or separately, based on the functionality of the monitoring tool of your choice. For example, the inventoryProcessingTime metrics could be queried to display an aggregate time of both tagged metrics or their individual times.
Apply the @SimplyTimed annotation to the add() method to track how frequently the method is invoked and how long each invocation takes to complete. @SimplyTimed supports the same fields as @Timed in the previous table.
Apply the @Counted annotation to the list() method to monotonically count how many times the http://localhost:9080/inventory/systems URL is accessed. A monotonic counter only counts up.
Apply the @Gauge annotation to the getTotal() method to track the number of systems that are stored in the inventory. When the value of the gauge is retrieved, the underlying getTotal() method is called to return the size of the inventory. Note the additional metadata field:
unit: Set the unit of the metric. If it is MetricUnits.NONE, the metric name is used without an appended unit.
Additional information about these annotations, relevant metadata fields, and more are available at the MicroProfile Metrics Annotation Javadoc.
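Putting the annotations together, the instrumented methods in InventoryManager look roughly like the following sketch. The metric names and descriptions match the sample output earlier in the guide; the method bodies are elided, the return types are simplified, and the @SimplyTimed metric name shown here is illustrative:

```java
import java.util.Properties;

import org.eclipse.microprofile.metrics.MetricUnits;
import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Gauge;
import org.eclipse.microprofile.metrics.annotation.SimplyTimed;
import org.eclipse.microprofile.metrics.annotation.Timed;

public class InventoryManager {

    // Timer shared by name with list(); the tag distinguishes the methods.
    @Timed(name = "inventoryProcessingTime", tags = {"method=get"},
           absolute = true, description = "Time needed to process the inventory")
    public Properties get(String hostname) {
        // ... calls the system service for the host's properties ...
        return new Properties();
    }

    @Timed(name = "inventoryProcessingTime", tags = {"method=list"},
           absolute = true, description = "Time needed to process the inventory")
    @Counted(name = "inventoryAccessCount", absolute = true,
             description = "Number of times the list of systems method is requested")
    public Object list() {
        // ... returns the current inventory contents ...
        return null;
    }

    // Metric name is illustrative; see the guide's InventoryManager.java.
    @SimplyTimed(name = "inventoryAddingTime", absolute = true,
                 description = "Time needed to add a system to the inventory")
    public void add(String hostname, Properties systemProps) {
        // ... stores the host and its properties ...
    }

    // Called by the runtime whenever the gauge value is scraped.
    @Gauge(unit = MetricUnits.NONE, name = "inventorySizeGauge", absolute = true,
           description = "Number of systems in the inventory")
    public int getTotal() {
        // ... returns the number of systems currently stored ...
        return 0;
    }
}
```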
server.xml
link:finish/src/main/liberty/config/server.xml[role=include]
MicroProfile Metrics API implementers can provide vendor metrics in the same form as the base and application metrics. Open Liberty, as a vendor, supplies server component metrics when the mpMetrics feature is enabled in the server.xml configuration file.
You can see the vendor-only metrics at the metrics/vendor endpoint. You see metrics from the runtime components, such as Web Application, ThreadPool, and Session Management. Note that these metrics are specific to the Liberty application server; different vendors may provide other metrics. Visit the Metrics reference list for more information.
The Open Liberty server was started in development mode at the beginning of the guide, and all the changes were automatically picked up.

Point your browser to the https://localhost:9443/metrics URL to review all the available metrics that are enabled through MicroProfile Metrics. Log in with admin as your username and adminpwd as your password. You see only the system and vendor metrics because the server just started and the inventory service has not yet been accessed.
Next, point your browser to the http://localhost:9080/inventory/systems URL. Reload the https://localhost:9443/metrics URL, or access only the application metrics at the https://localhost:9443/metrics/application URL.

You can see the system metrics at the https://localhost:9443/metrics/base URL and the vendor metrics at the https://localhost:9443/metrics/vendor URL.
MetricsIT.java
link:finish/src/test/java/it/io/openliberty/guides/metrics/MetricsIT.java[role=include]
InventoryManager.java
link:finish/src/main/java/io/openliberty/guides/inventory/InventoryManager.java[role=include]
You can test your application manually, but automated tests ensure code quality because they trigger a failure whenever a code change introduces a defect. JUnit and the JAX-RS Client API provide a simple environment for you to write tests.
Create the MetricsIT class.
src/test/java/it/io/openliberty/guides/metrics/MetricsIT.java
- The testPropertiesRequestTimeMetric() test case validates the @Timed metric. The test case sends a request to the http://localhost:9080/inventory/systems/localhost URL to access the inventory service, which adds the localhost host to the inventory. Next, the test case makes a connection to the https://localhost:9443/metrics/application URL to retrieve application metrics as plain text. Then, it asserts whether the time that is needed to retrieve the system properties for localhost is less than 4 seconds.
- The testInventoryAccessCountMetric() test case validates the @Counted metric. The test case obtains metric data before and after a request to the http://localhost:9080/inventory/systems URL. It then asserts that the metric increased after the URL was accessed.
- The testInventorySizeGaugeMetric() test case validates the @Gauge metric. The test case first ensures that localhost is in the inventory, then looks for the @Gauge metric and asserts that the inventory size is greater than or equal to 1.
- The testPropertiesAddSimplyTimeMetric() test case validates the @SimplyTimed metric. The test case sends a request to the http://localhost:9080/inventory/systems/localhost URL to access the inventory service, which adds the localhost host to the inventory. Next, the test case makes a connection to the https://localhost:9443/metrics/application URL to retrieve application metrics as plain text. Then, it looks for the @SimplyTimed metric and asserts that the metric exists.
The oneTimeSetup() method retrieves the port number for the server and builds a base URL string to set up the tests. Apply the @BeforeAll annotation to this method to run it before any of the test cases.
The setup() method creates a JAX-RS client that makes HTTP requests to the inventory service. Register this client with a JsrJsonpProvider JSON-P provider to process JSON resources. The teardown() method destroys this client instance. Apply the @BeforeEach annotation so that a method runs before a test case, and apply the @AfterEach annotation so that a method runs after a test case. Apply these annotations to methods that perform any setup and teardown tasks before and after a test.
To force these test cases to run in a particular order, annotate your MetricsIT test class with the @TestMethodOrder(OrderAnnotation.class) annotation. OrderAnnotation.class runs test methods in numerical order, according to the values specified in the @Order annotation. You can also create a custom MethodOrderer class or use built-in MethodOrderer implementations, such as OrderAnnotation.class, Alphanumeric.class, or Random.class. Label your test cases with the @Test annotation so that they automatically run when your test class runs.
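As a sketch of the JUnit 5 scaffolding that this description implies (annotation placement only; method bodies are elided and the ordering shown is illustrative):

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

// Run the @Test methods in the numerical order given by @Order.
@TestMethodOrder(OrderAnnotation.class)
public class MetricsIT {

    @BeforeAll
    public static void oneTimeSetup() {
        // Retrieve the server port and build the base URL once for all tests.
    }

    @BeforeEach
    public void setup() {
        // Create the JAX-RS client, registered with JsrJsonpProvider.
    }

    @Test
    @Order(1)
    public void testPropertiesRequestTimeMetric() {
        // Validate the @Timed metric.
    }

    @Test
    @Order(2)
    public void testInventoryAccessCountMetric() {
        // Validate the @Counted metric.
    }

    @AfterEach
    public void teardown() {
        // Destroy the client instance.
    }
}
```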
In addition, the endpoint tests src/test/java/it/io/openliberty/guides/inventory/InventoryEndpointIT.java and src/test/java/it/io/openliberty/guides/system/SystemEndpointIT.java are provided for you to test the basic functionality of the inventory and system services. If a test failure occurs, then you might have introduced a bug into the code.
Because you started Open Liberty in development mode at the start of the guide, press the enter/return key to run the tests and see the following output:
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.4 sec - in it.io.openliberty.guides.system.SystemEndpointIT
Running it.io.openliberty.guides.metrics.MetricsIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.476 sec - in it.io.openliberty.guides.metrics.MetricsIT
Running it.io.openliberty.guides.inventory.InventoryEndpointIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.264 sec - in it.io.openliberty.guides.inventory.InventoryEndpointIT
Results :
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0
To determine whether the tests detect a failure, go to the MetricsIT.java file and change any of the assertions in the test methods. Then re-run the tests to see a test failure occur.
You learned how to enable system, application, and vendor metrics for microservices by using MicroProfile Metrics, and you wrote tests to validate them in Open Liberty.