I am super new to Grafana, and the data source I am using is AWS CloudWatch on Grafana 5.0 BETA.
Goal: Trying to get email notifications every 24 hours on the health of my AWS resources via Grafana alerts.
Set Up: The alert is set to execute every 24 hours for a particular resource as long as its max() is above -1. I did this on purpose because the max of the resource will always be above -1.
Issue: The problem is that my graph is always in a state of “alerting”. Thus, the next day upon execution it does not send me an email notification. I believe the graph only sends notifications when the state switches from “alerting” to “ok” (or vice-versa).
You are right: Grafana only sends notifications when there is a change in state (for instance “OK” → “Alerting”).
Your problem is that the query raises an alert every day, so the first time it raises an alert (the state changes from “Unknown” to “Alerting”), but the second time the state is already “Alerting”. You would need to clear the state, but Grafana does not support this use case right now: Send alert notification for every alert eval regardless of state · Issue #12356 · grafana/grafana · GitHub
However, one trick you can use to clear the state between two executions is:
Add a second query to the panel, let’s say query “B”:
SELECT CURRENT_DATE - INTERVAL '1 day' AS "time", date_part('hour', CURRENT_TIMESTAMP) AS "hour"
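For reference, that query returns a single row, something like this (the date is simply “yesterday”, and the hour column is the hour at which the scheduler ran the query; here assuming it ran between 09:00 and 10:00):

 time               | hour
--------------------+------
 <yesterday's date> |    9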
Now, in the alert, alongside your initial condition, add a second condition based on query B (which gives the hour of the day at which the query is run by the scheduler):
Finally, schedule the alert to be evaluated every hour (or at least every 12 hours) and choose in the condition the range of hours during which the alert should be triggered.
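Put together, the two conditions would read roughly like this in the classic alerting UI (the 5m windows, the last() reducer and the 8 TO 9 hour range are only placeholders, adjust them to your own setup):

WHEN max()  OF query(A, 5m, now) IS ABOVE -1
AND  last() OF query(B, 5m, now) IS WITHIN RANGE 8 TO 9

Outside the chosen hour range the rule evaluates to “OK”, and inside it the rule switches back to “Alerting”, so the state transition (and therefore the email notification) happens once per day even though the CloudWatch metric itself never changes state.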