
Prometheus-specific know-how

Amnon Heiman edited this page Feb 28, 2019 · 2 revisions

Updating Prometheus targets while running

Prometheus regularly checks its target files for changes. If Prometheus runs as a stand-alone server, changes to the target files propagate to the server automatically.

When using Docker, it is a bit trickier. Some utilities, vi for example, do not edit a file in place; instead, they work on a copy and, when you are done, replace the original file with it. The container, however, keeps a reference to the old file (its old inode), so in practice the changes will not take effect.
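The inode behaviour is easy to observe from a shell. A minimal sketch (the file name and addresses are made up for illustration):

```shell
cd "$(mktemp -d)"
echo "- 172.17.0.2:9180" > scylla_targets.yml
before=$(ls -i scylla_targets.yml | awk '{print $1}')

# Appending writes to the existing file, so the inode is unchanged
# and a container bind-mounting the file still sees the update.
echo "- 172.17.0.3:9180" >> scylla_targets.yml
after_append=$(ls -i scylla_targets.yml | awk '{print $1}')

# A "write a copy, then rename it over the original" editor (vi's
# strategy) gives the name a new inode; the container keeps the old one.
cp scylla_targets.yml scylla_targets.yml.tmp
mv scylla_targets.yml.tmp scylla_targets.yml
after_rename=$(ls -i scylla_targets.yml | awk '{print $1}')

echo "append kept inode: $([ "$before" = "$after_append" ] && echo yes || echo no)"
echo "rename kept inode: $([ "$before" = "$after_rename" ] && echo yes || echo no)"
```

Running this prints that the append kept the inode while the rename did not, which is exactly why only the two approaches below work.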

Two things do work:

  1. Appending to the file (e.g. echo " - 172.0.0.1:9180" >> scylla_targets.yml)
  2. Editing the file in place. For example, the following Python script takes two parameters and overwrites the file named by the first parameter with the value of the second.
#!/usr/bin/python

# Overwrite the file named by the first argument with the value of the
# second argument. open(..., "w") truncates and rewrites the existing
# file, so its inode (and the container's view of it) is preserved.
import sys

with open(sys.argv[1], "w") as data_file:
    data_file.write(sys.argv[2])
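A hypothetical round trip with the script (saved here as replace_file.py; the python3 interpreter and the target file name are assumptions):

```shell
cd "$(mktemp -d)"

# Save the script above as replace_file.py.
cat > replace_file.py <<'EOF'
#!/usr/bin/python
import sys

with open(sys.argv[1], "w") as data_file:
    data_file.write(sys.argv[2])
EOF

# Overwrite the targets file in place with a new target list.
python3 replace_file.py scylla_targets.yml "- 172.17.0.2:9180"
cat scylla_targets.yml
```

Because the script truncates and rewrites the same file rather than renaming a copy over it, a running Prometheus container will pick up the change.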

Uploading a partial database (applicable for Prometheus version 2.0 and higher)

When monitoring many cores, the Prometheus database can grow significantly. That can be an issue if you need to send the metrics somewhere (e.g. when you need external support with solving an issue).

Luckily, the Prometheus database is optimized for time series.

You can read more in the official Prometheus storage documentation.

When you look at your data directory, ignore the lock file and the wal directory. The other directories, with hash-like names (e.g. 01BKGTZQ1SYQJTR4PB43C8PD98), are slices of up to 1GB of data each. Check the timestamps of the directories around the time you are interested in.
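Each block directory also carries a meta.json whose minTime and maxTime fields (milliseconds since the Unix epoch) state exactly which time range it covers. A sketch for listing the ranges, with a fabricated block directory standing in for a real one:

```shell
cd "$(mktemp -d)"

# Fabricate one block directory so the loop below has something to read;
# in a real data directory these are the hash-named slices.
mkdir -p data/01BKGTZQ1SYQJTR4PB43C8PD98
printf '{"minTime": 1550000000000, "maxTime": 1550007200000}' \
    > data/01BKGTZQ1SYQJTR4PB43C8PD98/meta.json

# Print each block's covered time range in human-readable form.
for dir in data/*/; do
    python3 -c '
import datetime, json, sys

meta = json.load(open(sys.argv[1] + "meta.json"))

def fmt(ms):
    return datetime.datetime.utcfromtimestamp(ms / 1000).isoformat()

print(sys.argv[1], fmt(meta["minTime"]), "->", fmt(meta["maxTime"]))
' "$dir"
done
```

This avoids relying on directory modification times, which can change when blocks are compacted.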

You should transfer each block directory in its entirety (including its chunks subdirectory).
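Packaging for transfer might look like this (a sketch; the block name is the example hash from above and the directory layout here is fabricated):

```shell
cd "$(mktemp -d)"

# Stand-in data directory with one block and its chunks subdirectory.
mkdir -p data/01BKGTZQ1SYQJTR4PB43C8PD98/chunks
touch data/01BKGTZQ1SYQJTR4PB43C8PD98/meta.json

# Archive the whole block directory, chunks included, for transfer.
tar czf partial-db.tar.gz -C data 01BKGTZQ1SYQJTR4PB43C8PD98
tar tzf partial-db.tar.gz
```

Using -C data keeps the archive paths relative to the data directory, so the block unpacks cleanly into a fresh data directory on the other side.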

Opening a partial Prometheus database

  1. Create a data directory

  2. Copy the block directories (those with the hash names) into it.
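With the blocks in place, one way to open them is to point a throwaway Prometheus at that data directory. A sketch using the official Docker image (the port and mount path are assumptions; /prometheus is the image's default storage path):

```shell
docker run -d \
    -p 9090:9090 \
    -v "$PWD/data:/prometheus" \
    prom/prometheus
```

On startup the server loads whatever blocks it finds there, so the copied time range becomes queryable at localhost:9090.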
