
Downtime test #132

@pznamensky

Description


Is your feature request related to a problem? Please describe.
Hey everyone,
We have several internal websites. They are reachable only from specific IP addresses, since we don't want just anyone to be able to access them.
To be sure we didn't forget to set up the firewall rules correctly, it would be cool to check that those websites are not reachable from the rest of the internet.
The same could be useful not only for HTTP checks but also for TCP checks (i.e. checking that SSH is closed).

Describe the solution you'd like
On the one hand, we have a special HTTP code "0" which means that the crawl timed out,
and we could just exclude it from status_codes.
On the other hand, this code is implicitly added to the list of codes that will fire the alert, and indeed that is what everyone expects from HTTP checks.
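
Roughly, that workaround would look like the sketch below (same http_check syntax as the example further down; the codes listed are only for illustration, assuming status_codes is the list of codes that fires the alert):

  http_check {
    timeout      = 5
    validate_ssl = false

    # "200" here would make a normal response fire the alert, while "0"
    # (crawl timed out) is deliberately left out so an unreachable site
    # would pass -- but "0" gets added back implicitly, so the check
    # still alerts on timeouts.
    status_codes = [
      "200"
    ]
  }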

My suggestion is to add an additional flag to HTTP and TCP checks that would invert the check and fire an alert only if the target is reachable,
e.g. invert_alert = true|false or fire_when_reachable = true|false.
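
For example, something along these lines (purely hypothetical syntax: the tcp_check block, the port attribute and the fire_when_reachable flag are all made-up names, nothing like this exists today):

  # Alert only when SSH unexpectedly answers from the internet.
  tcp_check {
    port                = 22
    timeout             = 5
    fire_when_reachable = true   # proposed flag, does not exist today
  }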

For HTTP checks it might also be useful to add a good_status_codes flag, which must not be used together with status_codes (and status_codes might be renamed to bad_status_codes to be more self-descriptive). With that flag we could add a check like this:

  http_check {
    timeout      = 5
    validate_ssl = false

    good_status_codes = [
      "0",
      "403"
    ]
  }

Describe alternatives you've considered
It looks like the only alternative is to use other services.

Additional context
I understand that this might not be the easiest feature to implement, but I'm sure others would find it useful too.
