Grafana Alert fails while fetching query results

Hello all

Could somebody help me with the following? We have a Grafana alert based on an AWS CloudWatch Logs Insights query, but the alert very often fails with this error:

logger=context userId=1 orgId=1 uname=grafana t=2022-10-21T09:32:57.331863615Z level=error msg="Failed to evaluate queries and expressions: failed to execute conditions: [plugin.downstreamError] failed to query data: fetching of query results exceeded max number of attempts" error="Failed to evaluate queries and expressions: failed to execute conditions: [plugin.downstreamError] failed to query data: fetching of query results exceeded max number of attempts" remote_addr=172.28.15.91 traceID=

logger=context userId=1 orgId=1 uname=grafana t=2022-10-21T09:32:57.331953268Z level=info msg="Request Completed" method=POST path=/api/v1/eval status=400 remote_addr=172.28.15.91 time_ms=8340 duration=8.340299489s size=205 referer="https://grafana.xxx.com/alerting/grafana/dwZtj1U7k/view?returnTo=%2Falerting%2Flist" handler=/api/v1/eval

logger=context userId=1 orgId=1 uname=grafana t=2022-10-21T09:32:57.628097966Z level=error msg="Failed to evaluate queries and expressions: failed to execute conditions: [plugin.downstreamError] failed to query data: fetching of query results exceeded max number of attempts" error="Failed to evaluate queries and expressions: failed to execute conditions: [plugin.downstreamError] failed to query data: fetching of query results exceeded max number of attempts" remote_addr=172.28.15.91 traceID=

logger=context userId=1 orgId=1 uname=grafana t=2022-10-21T09:32:57.628187452Z level=info msg="Request Completed" method=POST path=/api/v1/eval status=400 remote_addr=172.28.15.91 time_ms=8143 duration=8.143793138s size=205 referer="https://grafana.xxx.com/alerting/grafana/dwZtj1U7k/view?returnTo=%2Falerting%2Flist" handler=/api/v1/eval

Sometimes everything is fine: data is returned and the alert works as expected.

When I try to view the alert data, the error appears after ~8 seconds.
Apparently some timeout is being hit here, but I cannot figure out which one.
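
The consistent ~8 seconds suggests the data source is giving up after a fixed number of polling attempts rather than hitting a wall-clock timeout: CloudWatch Logs Insights queries are asynchronous, so the client has to poll GetQueryResults until the query completes, and a slow query can outlast the poll budget. A minimal sketch of that pattern in Python with boto3 (the MAX_ATTEMPTS and POLL_PERIOD_SECONDS values here are assumptions for illustration, not Grafana's exact internals):

```python
import time

import boto3

logs = boto3.client("logs")

# Assumed values for illustration -- Grafana's CloudWatch plugin uses
# its own (small) attempt limit and poll interval for alert queries.
MAX_ATTEMPTS = 8
POLL_PERIOD_SECONDS = 1


def fetch_query_results(query_id: str) -> list:
    """Poll GetQueryResults until the query completes or attempts run out."""
    for attempt in range(MAX_ATTEMPTS):
        response = logs.get_query_results(queryId=query_id)
        if response["status"] == "Complete":
            return response["results"]
        time.sleep(POLL_PERIOD_SECONDS)
    # After MAX_ATTEMPTS * POLL_PERIOD_SECONDS (~8 s here) the loop gives up,
    # analogous to "fetching of query results exceeded max number of attempts".
    raise TimeoutError("fetching of query results exceeded max number of attempts")
```

If this is what is happening, any Insights query that takes longer than the poll budget to complete would fail intermittently, which would match the behavior above.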

Does anybody have an idea?

Regards
Vytas

Same behavior here when searching logs for errors. I configured an alert on CloudWatch Logs with this query:
filter @message like /Exception/ | stats count(*) by @log
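
One way to narrow this down is to run the same query outside Grafana and measure how long CloudWatch actually takes to complete it. A sketch with boto3 (the log group name and the 15-minute time range are placeholders; substitute whatever the alert queries):

```python
import time
from datetime import datetime, timedelta

import boto3

logs = boto3.client("logs")

# Placeholder log group -- substitute the one the alert queries.
LOG_GROUP = "/aws/lambda/my-function"

end = datetime.utcnow()
start = end - timedelta(minutes=15)

query = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(start.timestamp()),
    endTime=int(end.timestamp()),
    queryString="filter @message like /Exception/ | stats count(*) by @log",
)

# Poll until completion, timing how long the query actually needs.
t0 = time.monotonic()
while True:
    response = logs.get_query_results(queryId=query["queryId"])
    if response["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

print(f"status={response['status']} after {time.monotonic() - t0:.1f}s")
```

If the query regularly takes longer than ~8 seconds to reach "Complete", that would explain why the Grafana alert only fails some of the time: shortening the query's time range or narrowing the log groups it scans should make it finish inside the poll budget.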