Learn Once and for All How to Configure Alertmanager in OpenShift

Learn how to configure Alertmanager in OpenShift to send alerts to a webhook, ensuring efficient and customizable monitoring.


📖 Estimated reading time: 4 min


Alertmanager is an essential component in the OpenShift and Kubernetes monitoring ecosystem. It is responsible for managing and dispatching alerts generated by Prometheus, allowing grouping, inhibition, and routing to different destinations, such as emails, messaging systems, and custom webhooks.

In this tutorial, we will configure Alertmanager in OpenShift to send alerts to a webhook, which will serve as a receiver for validation and testing.


Configuring Alertmanager in OpenShift

Alertmanager in OpenShift is already integrated with Prometheus. To configure alert sending, we need to edit the YAML file of Alertmanager or use the OpenShift dashboard.
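For reference, here is a simplified skeleton of that YAML file (alertmanager.yaml, stored in the alertmanager-main secret of the openshift-monitoring namespace). The dashboard screens we will use below edit these same sections; exact defaults may vary between OpenShift versions:

global:
  resolve_timeout: 5m
route:                 # routing tree: grouping, timing, and receiver selection
  receiver: Default    # fallback receiver when no child route matches
  routes: []           # child routes, for example one per severity
receivers:             # destinations such as webhooks, email, or PagerDuty
  - name: Default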

In our test environment, we are using OpenShift 4.16.27.

OpenShift Cluster

If your installation is recent, there is probably already an alert indicating that Alertmanager is not configured.

OpenShift Alertmanager Not Configured

Webhook Receiver

We need a webhook to receive the alerts sent by Alertmanager. In a real production environment, this component is usually an enterprise monitoring solution. However, to validate our configuration, we can use a webhook written in Python, which simply displays the alerts sent by OpenShift.

The source code and configuration details are available on our GitHub.

https://github.com/linuxelitebr/alertmanager


⚠️ IMPORTANT: The webhook destination address must be registered in DNS!


[root@bastion alertmanager]# host python.homelab.rhbrlabs.com
python.homelab.rhbrlabs.com has address 10.1.224.2
 

Editing the Alertmanager Configuration

In the OpenShift dashboard, follow this path to find the Alertmanager configuration screen:

⚙️ Administration > Cluster Settings > Configuration > Alertmanager

Observe the highlighted items in the image. We will need to configure all of them.

OpenShift Alertmanager Default Unconfigured

Alert Routing

This screen configures routing settings for OpenShift’s built-in Alertmanager, which is responsible for managing and sending alerts generated by Prometheus.

To speed things up in our lab, reduce the message grouping times before each dispatch.

OpenShift Alertmanager Default Unconfigured

Here’s what each field does:

Group by
• Alerts will be grouped based on the value specified here.
• In this case, they are grouped by namespace, meaning all alerts from the same namespace will be processed together.

Group wait
• Defines how long Alertmanager waits before sending the first alert in a group.
• Here, it waits 3 seconds before sending an alert after it is first received.
• Useful for grouping multiple alerts together to avoid unnecessary notifications.

Group interval
• The time between sending grouped alerts.
• If new alerts arrive for an already existing group, they will be sent together every 6 seconds.
• Prevents excessive alerts from being sent immediately.

Repeat interval
• The time after which an alert is re-sent if it is still active.
• Here, it is set to 12 hours, meaning unresolved alerts will be resent every 12 hours.
• Helps to remind teams about ongoing critical issues.
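For reference, here is roughly how those dashboard fields map onto the route section of alertmanager.yaml, using the lab values mentioned above (field names follow the upstream Alertmanager configuration format):

route:
  receiver: Default
  group_by:
    - namespace        # alerts from the same namespace are batched together
  group_wait: 3s       # wait before the first notification for a new group
  group_interval: 6s   # minimum time between notifications for the same group
  repeat_interval: 12h # re-send alerts that are still firing after this interval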

⚠️ IMPORTANT: Make sure to use appropriate time values for your environment and needs.


Receivers

Click the Configure link in the following items and complete the setup as shown in the image.

Critical

As the name implies, use this component to send critical severity alerts.

OpenShift Alertmanager Receiver Critical

Default

This is the default destination to which alerts are sent.

OpenShift Alertmanager Receiver Default
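Behind these two forms, the resulting YAML looks roughly like the snippet below. The webhook URL and port are illustrative, so adjust them to wherever your webhook.py is actually listening:

route:
  receiver: Default
  routes:
    - receiver: Critical
      matchers:
        - severity = critical   # critical alerts go to the Critical receiver
receivers:
  - name: Default
    webhook_configs:
      - url: http://python.homelab.rhbrlabs.com:5000/   # illustrative endpoint
  - name: Critical
    webhook_configs:
      - url: http://python.homelab.rhbrlabs.com:5000/   # illustrative endpoint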

At this point, alerts should already be appearing in our Python webhook receiver, validating Alertmanager’s operation. 🚀

OpenShift Alertmanager Received Alerts

✅ Seriously. That was really easy! 😎


Custom Webhook Receiver

Use this option for customization, for example to send info-severity events or to filter alerts with regular expressions.

Let’s assume you need to configure a new receiver with your own filters, such as:

severity =~ "warning|critical"

Simply click the Create Receiver button and fill in the details as in the example below.

OpenShift Alertmanager Custom Webhook Receiver
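In YAML terms, the new receiver and its route would look roughly like this. The receiver name and URL are illustrative; only the matcher expression comes from the example above:

route:
  routes:
    - receiver: custom-webhook              # illustrative name
      matchers:
        - severity =~ "warning|critical"    # regex matcher from the example above
receivers:
  - name: custom-webhook
    webhook_configs:
      - url: http://python.homelab.rhbrlabs.com:5000/   # illustrative endpoint
        send_resolved: true                 # also notify when the alert resolves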

⚠️ Check the severities and destinations configured in each alert you define, to avoid sending duplicate alerts.


✅ To learn how to adjust your own matchers to capture only the alerts of interest, check the Alerting documentation from Prometheus.


Testing Alert Sending

We can generate test alerts directly in Alertmanager to verify that the webhook is receiving the data:


oc exec alertmanager-main-0 -n openshift-monitoring -- \
    amtool alert add --alertmanager.url http://localhost:9093 \
    alertname="TestAlert" \
    severity=critical \
    --annotation summary="This is a test alert"
     

If the webhook is working correctly, after the defined grouping time, the alert will be displayed in the terminal where webhook.py is running.

OpenShift Alertmanager Test Alert

Mass Alert Testing

Check our GitHub to learn how to perform this type of test. It is very useful for fine-tuning Alertmanager.



Conclusion

With this configuration, we ensure that Alertmanager alerts in OpenShift are sent to a Python webhook, enabling advanced monitoring and real-time event validation.

If you want to integrate this webhook with other tools like Grafana, Slack, or Discord, just adapt the code to send notifications as needed.

Do you have any questions or suggestions? Comment below!




Share this post and keep following our site for more updates on Kubernetes and open-source technologies!
