Prometheus Assets

Explore and monitor your assets metrics using Prometheus and Grafana.

(Grafana dashboard screenshot)

Getting started

First, you need to set up the exporter server. You can use the official Docker image andersonba/prometheus-assets.

Example

You can test the whole workspace on your machine using docker-compose. Running the following command sets up the Server, Prometheus, and Grafana services.

docker-compose up

Server http://localhost:3000 • Grafana http://localhost:8080 • Prometheus http://localhost:9090
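The repository ships its own docker-compose.yml; as a rough sketch of what such a setup looks like (the service names and images below are assumptions based on the ports listed above, not the actual file):

```yaml
version: "3"
services:
  server:
    # The exporter server, reachable at http://localhost:3000
    image: andersonba/prometheus-assets
    ports:
      - "3000:3000"
  prometheus:
    # Prometheus, reachable at http://localhost:9090
    image: prom/prometheus
    ports:
      - "9090:9090"
  grafana:
    # Grafana, reachable at http://localhost:8080
    image: grafana/grafana
    ports:
      - "8080:3000"
```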

Now, import the dashboard template into Grafana. See how: Importing a dashboard from Grafana.com

Configure

There are two ways to configure the server:

Using multiple targets in Prometheus (on-demand)

The server has a route that scrapes a page and extracts its metrics on demand.

$ curl http://localhost:3000/metrics?url=www.andersonba.com

Nice! Now you have to configure the targets in your prometheus.yml. See example
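The linked example shows the real file; a minimal sketch of what an on-demand target could look like in prometheus.yml (the job name and host are assumptions):

```yaml
scrape_configs:
  - job_name: assets
    metrics_path: /metrics
    # Query params forwarded to the exporter's on-demand route
    params:
      url: [www.andersonba.com]
      labels[]: [page:anderson]
    static_configs:
      - targets: ['localhost:3000']
```

One job per page lets Prometheus drive the scraping, at the cost of a longer prometheus.yml as the number of pages grows.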

The metrics of each URL are cached for 300 seconds; see the onDemandQueryCacheTTL configuration to change it.

URL params

| URL Param | Description | Type | Example |
| --- | --- | --- | --- |
| `url` | **Required.** Page URL | `string` | `?url=google.com` |
| `labels[]` | List of `key:value` labels | `Array<string>` | `?labels[]=page:anderson&labels[]=section:home` |
| `mimeTypes[]` | List of MIME types to filter by | `Array<string>` | `?mimeTypes[]=javascript&mimeTypes[]=css` |
| `cookies[]` | List of cookies to set | `Array<string>` | `?cookies[]='name=user;value=123'` |
| `nocache` | Force scraping without cache | `boolean` | `?nocache=1` |

Using a configuration file (Job scheduler)

If you prefer not to put many targets in your Prometheus configuration, you can set up a time-based job scheduler in the server. You just need to create a configuration file called config.yml. See example

Now you have to configure only one target in your prometheus.yml, using the path /metrics in the server URL, without params. See example
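With the scheduler doing the scraping, the Prometheus side shrinks to a single job; a minimal sketch (job name and host are assumptions):

```yaml
scrape_configs:
  - job_name: assets
    # Single target: the exporter serves pre-collected metrics here
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:3000']
```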

The server will scrape the pages every 1h by default, but you can change it. See configuration file.

| Attribute | Description | Type |
| --- | --- | --- |
| `interval` | Interval used by the job scheduler | `number` |
| `configurations` | List of pages to extract metrics from | `Array<Configuration>` (see below) |
| `labels` | Common label names used in all page configurations | `Array<string>` |
| `defaults` | Default values for the configuration objects | `{[configurationKey: string]: configurationValue}` |
| `metricName` | Metric name used by the Gauge collector | `string` |
| `enableOnDemandQuery` | Also enable on-demand scraping (using query params, see above) | `boolean` |
| `onDemandQueryCacheTTL` | Time until cached metrics expire for on-demand scraping | `number` |
| `path` | Change the metrics path of the server | `string` |

Configuration spec

In addition to the scraper options, the following configurations are available:

  • `url` - **Required.** Page URL
  • `metrics.file` - Boolean. Enable file metrics
  • `metrics.count` - Boolean. Enable count metrics
  • `metrics.size` - Boolean. Enable size metrics
  • `metrics.gzip` - Boolean. Enable gzip metrics
  • `metrics.countByMimeType` - Boolean. Enable countByMimeType metrics
  • `metrics.sizeByMimeType` - Boolean. Enable sizeByMimeType metrics
  • `metrics.gzipByMimeType` - Boolean. Enable gzipByMimeType metrics
  • `labels` - Object. Key-value pairs used to tag the Prometheus metrics.

See more details in example.
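Putting the attributes and the configuration spec together, a config.yml could look roughly like this sketch (field values and the exact nesting are assumptions inferred from the tables above; see the linked example for the real schema):

```yaml
# Job scheduler: scrape every hour (assumed to be in seconds)
interval: 3600
metricName: assets
# Label names shared by all page configurations
labels:
  - page
configurations:
  - url: www.andersonba.com
    labels:
      page: anderson
    metrics:
      count: true
      size: true
      gzip: true
```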

Docker environment variables

  • CONFIG_PATH
  • METRICS_PATH
  • METRIC_NAME
  • METRICS_ON_DEMAND_QUERY
  • METRICS_ON_DEMAND_QUERY_CACHE_TTL
  • METRICS_INTERVAL
  • METRICS_FILE_ENABLED
  • METRICS_COUNT_ENABLED
  • METRICS_SIZE_ENABLED
  • METRICS_GZIP_ENABLED
  • METRICS_COUNT_BY_MIMETYPE_ENABLED
  • METRICS_SIZE_BY_MIMETYPE_ENABLED
  • METRICS_GZIP_BY_MIMETYPE_ENABLED

See more details in settings.js.
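These variables map onto the configuration attributes above. A sketch of wiring a few of them through Docker Compose (the container path and values are assumptions):

```yaml
services:
  server:
    image: andersonba/prometheus-assets
    environment:
      # Point the server at a mounted config file (path is an assumption)
      CONFIG_PATH: /app/config.yml
      METRICS_INTERVAL: "3600"
      METRICS_ON_DEMAND_QUERY: "true"
    volumes:
      - ./config.yml:/app/config.yml
```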

Priority order of configurations:

  1. Default values
  2. Environment
  3. Config file