Table of contents
- Input alerts
- Development and debugging
Everything started as a way of forwarding Prometheus Alertmanager alerts to Telegram, because the solutions I found were too complex; I just wanted to forward alerts to channels without trouble. Alertgram is just that: a simple app that forwards alerts to Telegram groups and channels, plus some small helpful features like metrics and a dead man's switch.
- Alertmanager alerts webhook receiver compatibility.
- Telegram notifications.
- Metrics in Prometheus format.
- Optional dead man's switch endpoint.
- Optional customizable templates.
- Configurable notification chat ID targets (with fallback to default chat ID).
- Easy to set up and flexible.
- Perfect for any environment, from a company cluster to cheap home clusters (e.g. K3s).
Alertgram is developed in a decoupled way, so in the future it may be extended with more inputs apart from Alertmanager's webhook API (ask for a new input if you want one).
Use the `--help` flag to show the options.
The configuration of the app is based on flags that can also be set as environment variables by prepending `ALERTGRAM` to the flag name, e.g. the flag `--telegram.api-token` becomes `ALERTGRAM_TELEGRAM_API_TOKEN`. You can combine both; flags take precedence.
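As an illustration of that naming rule (not part of Alertgram — `flag_to_env` is a hypothetical helper), the flag-to-env-var transformation can be sketched in shell:

```bash
# Hypothetical helper illustrating how a flag name maps to its env var form:
# strip the leading "--", uppercase, replace "." and "-" with "_",
# and prefix with "ALERTGRAM_".
flag_to_env() {
  echo "ALERTGRAM_$(printf '%s' "${1#--}" | tr 'a-z.-' 'A-Z__')"
}

flag_to_env --telegram.api-token   # ALERTGRAM_TELEGRAM_API_TOKEN
```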
To forward alerts to Telegram, the minimum options that need to be set are `--telegram.api-token` and `--telegram.chat-id`:

```bash
docker run -p8080:8080 -p8081:8081 slok/alertgram:latest --telegram.api-token=XXXXX --telegram.chat-id=YYYYY
```
The app comes with Prometheus metrics; it measures forwarded alerts, HTTP requests, errors... with rates and latencies. By default they are served on port 8081 (as mapped in the Docker example above).
Development and debugging
You can use the `--notify.dry-run` flag to show the alerts on the terminal instead of forwarding them to Telegram. Also remember that you can use the `--debug` flag.
Are only Alertmanager alerts supported?
At this moment, yes, but more input alert systems can be added if you want; create an issue so we can discuss and implement it.
Where does Alertgram listen for Alertmanager alerts?
By default on `0.0.0.0:8080/alerts`, but you can use `--alertmanager.webhook-path` to customize it.
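On the Alertmanager side, this means pointing a webhook receiver at that endpoint. A minimal configuration sketch (the `alertgram` host name and receiver name are assumptions about your setup, not mandated by Alertgram):

```yaml
# Alertmanager configuration sketch; "alertgram" is a placeholder host.
receivers:
  - name: alertgram
    webhook_configs:
      - url: "http://alertgram:8080/alerts"
```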
Can I send notifications to different chats?
There are 3 levels at which you can customize the notification chat:
- By default: using the required `--telegram.chat-id` flag.
- At URL level: using a query string parameter, e.g. `0.0.0.0:8080/alerts?chat-id=-1009876543210`. The name of this query parameter can be customized with a flag.
- At alert level: if an alert has a label with the chat ID, the alert notification will be forwarded to the chat in that label's content. Use the `--alert.label-chat-id` flag to customize the label name.
The precedence, from highest to lowest, is: alert, URL, default.
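The resolution order above can be sketched as a tiny shell function (illustrative only — `resolve_chat_id` is hypothetical, not Alertgram's actual code):

```bash
# Hypothetical sketch of the chat-ID precedence: alert label first,
# then the URL query parameter, then the default chat ID.
resolve_chat_id() {
  alert_label="$1"; url_param="$2"; default_chat="$3"
  if [ -n "$alert_label" ]; then
    echo "$alert_label"      # highest precedence: alert label
  elif [ -n "$url_param" ]; then
    echo "$url_param"        # then the URL query parameter
  else
    echo "$default_chat"     # fallback: the default chat ID
  fi
}

resolve_chat_id "" "-1009876543210" "1234567890"   # -1009876543210
```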
Can I use custom templates?
Yes! Use the `--notify.template-path` flag. You can check `testdata/templates` for examples.
You can also use the notification dry-run mode to check your templates without notifying on Telegram:

```bash
export ALERTGRAM_TELEGRAM_API_TOKEN=fake
export ALERTGRAM_TELEGRAM_CHAT_ID=1234567890
go run ./cmd/alertgram/ --notify.template-path=./testdata/templates/simple.tmpl --debug --notify.dry-run
```
To send an alert easily and check the template rendering without an Alertmanager, Prometheus, alerts... you can use the test alerts in `testdata/alerts`:

```bash
curl -i http://127.0.0.1:8080/alerts -d @./testdata/alerts/base.json
```
Dead man's switch?
A dead man's switch (from now on, DMS) is a technique or process where a signal must be received at regular intervals to keep the DMS disabled; if the signal is not received, the switch is activated.
In monitoring this means: if an alert is not received at regular intervals, the switch activates and notifies us that we are not receiving alerts. This is mostly used to verify that our alerting system is working.
For example, we would configure Prometheus to trigger an alert continuously, Alertmanager to send this specific alert every `7m` to the DMS endpoint in Alertgram, and Alertgram with a `10m` DMS interval.
With this setup, if Prometheus fails to create the alert, Alertmanager fails to send the alert to Alertgram, or Alertgram does not receive the alert (e.g. network problems), Alertgram will send an alert to Telegram to notify us that our monitoring system is broken.
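The example setup above can be sketched in configuration. This is only an illustration under assumptions: the rule name `DeadMansSwitch`, the receiver name `alertgram-dms`, and the `alertgram` host are all hypothetical, and the two snippets belong in separate files (Prometheus rules and Alertmanager config):

```yaml
# Prometheus rule sketch: an alert that always fires.
groups:
  - name: meta
    rules:
      - alert: DeadMansSwitch
        expr: vector(1)
```

```yaml
# Alertmanager sketch: resend that alert to Alertgram's DMS endpoint every 7m.
route:
  routes:
    - match:
        alertname: DeadMansSwitch
      receiver: alertgram-dms
      repeat_interval: 7m
receivers:
  - name: alertgram-dms
    webhook_configs:
      - url: "http://alertgram:8080/alert/dms"
```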
You could use the same Alertgram instance or another one, usually on another machine or cluster, so that if the cluster/machine fails, your DMS is isolated and can still notify you.
To enable Alertgram's DMS, use the `--dead-mans-switch.enable` flag. By default it will listen on `/alert/dms`, with a `15m` interval, and will use the Telegram notifier's default chat ID. To customize these settings, use:
- `--dead-mans-switch.interval`: configures the interval.
- `--dead-mans-switch.chat-id`: configures the notification chat. It is independent of the notifier (although at this moment the only notifier is Telegram); if not set, the notifier's default chat target is used.
- `--alertmanager.dead-mans-switch-path`: configures the path where Alertmanager sends the DMS alerts.
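Putting it together, a DMS-enabled invocation could look like this (a sketch building on the Docker example above; the token, chat IDs, and `10m` interval are placeholders):

```bash
docker run -p8080:8080 -p8081:8081 slok/alertgram:latest \
  --telegram.api-token=XXXXX \
  --telegram.chat-id=YYYYY \
  --dead-mans-switch.enable \
  --dead-mans-switch.interval=10m
```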