A small service that publishes assisted-service cluster events to Elasticsearch. It pulls all the cluster events and merges them with the existing data.
Run `make install` to install the tool. Documentation about testing can be found here.
This tool can be run via the `events_scrape` CLI command (if installed), or without installing by running it as a Python module: `python3 -m assisted-events-scrape.events_scrape`
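Both invocations can be sketched as follows (the scraper is assumed to read its configuration from the environment variables listed below):

```shell
# If the package is installed, use the CLI entry point:
events_scrape

# Without installing, run the module directly:
python3 -m assisted-events-scrape.events_scrape
```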
| Variable | Description | Example |
|---|---|---|
| ES_INDEX_PREFIX | Elasticsearch index prefix; will be suffixed with YYYY-MM | assisted-service-events-v3- |
| ES_SERVER | Elasticsearch server address | |
| ES_USER (optional) | Elasticsearch user name | elastic |
| ES_PASS (optional) | Elasticsearch password | |
| ASSISTED_SERVICE_URL | Assisted service URL | https://api.openshift.com |
| OFFLINE_TOKEN | Assisted service offline token | |
| BACKUP_DESTINATION | Path to save backups; if empty, no backups are saved | |
| SSO_URL | SSO server URL | |
| SENTRY_DSN | Sentry DSN | |
| ERRORS_BEFORE_RESTART | Maximum number of errors allowed before restarting the application | |
| MAX_IDLE_MINUTES | Minutes the application is allowed to be idle. Idle time is time during which the application is not being updated, either successfully or unsuccessfully | |
| N_WORKERS | Number of workers in the thread pool. Defaults to 5; minimum 1 | |
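To illustrate how `ES_INDEX_PREFIX` combines with the `YYYY-MM` suffix described above, here is a minimal sketch (the function name and the exact formatting logic are assumptions for illustration, not the service's verbatim code):

```python
from datetime import datetime, timezone
from typing import Optional

def monthly_index(prefix: str, now: Optional[datetime] = None) -> str:
    """Append the YYYY-MM suffix to the configured index prefix."""
    now = now or datetime.now(timezone.utc)
    return f"{prefix}{now:%Y-%m}"

# Using the example prefix from the table above:
monthly_index("assisted-service-events-v3-", datetime(2023, 5, 1))
# → "assisted-service-events-v3-2023-05"
```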
To proxy Elasticsearch/Kibana, we use OAuth Proxy as an authentication layer.
The options we use are described below:
- `-http-address`: the binding address
- `-provider`: OAuth provider. We use `openshift`
- `-openshift-service-account`: service account from which `client-id` and `client-secret` will be read
- `-openshift-sar`: JSON Subject Access Review
- `-pass-basic-auth`: we turn this option off: we just want to proxy provided Basic Auth headers, and not pass the authorized user as user
- `-htpasswd-file`: htpasswd file path used for authorizing system users
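Putting the options together, an invocation might look like the sketch below. All concrete values (address, service account name, SAR payload, paths, upstream) are placeholders, not the actual deployment configuration:

```shell
oauth-proxy \
  -http-address=:8443 \
  -provider=openshift \
  -openshift-service-account=assisted-events-scrape \
  -openshift-sar='{"namespace": "assisted-events", "resource": "services", "verb": "get"}' \
  -pass-basic-auth=false \
  -htpasswd-file=/etc/proxy/htpasswd \
  -upstream=http://localhost:9200
```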