"ALERTS" metric only displays in one job, rather than all of the jobs

Hi,

I'm facing an issue with the ALERTS metric.

The ALERTS metric only appears for integrations/kubernetes/kube-state-metrics, not for all of the scrape jobs that I have configured.

Any suggestions on how to get the ALERTS metric for all of the scrape jobs?

Thanks
Satya

Hi @satyarameshkali!

The ALERTS metric is generated by Prometheus for active alerts so you can quickly review pending and/or firing alerts. If the ALERTS metric only appears for certain scrape jobs, like kube-state-metrics, it is because there has been an active alert for that scrape job within the time range selected in Explore.
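For example, with a Prometheus-style alerting rule roughly like the sketch below (the group name, alert name, and expression are purely illustrative), an ALERTS series such as ALERTS{alertname="ArgoCDTargetDown", alertstate="firing", job="integrations/argocd"} is written for as long as the alert is pending or firing (with alertstate="pending" before the `for` duration has elapsed). The job label on ALERTS comes from the labels of the series that the alert expression matched:

groups:
- name: example-rules
  rules:
  - alert: ArgoCDTargetDown                    # illustrative alert name
    expr: up{job="integrations/argocd"} == 0   # fires when a scrape target is down
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: An integrations/argocd target has been down for 5 minutes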

To read about this in the Prometheus docs, see Inspecting alerts during runtime.

Regarding adding this metric to all scrape jobs, what is the end goal that you are working towards? The ALERTS metric should be applied for all scrape jobs, but only when they are actively alerting.

Hi @Melody,

I appreciate the response. We are using the Grafana Agent to send the metrics to Grafana Cloud (version 9.2.0).
I have set up several scrape jobs whose alerts are currently in the firing state, but not all of the scrape jobs appear in the ALERTS metric.

Here is a picture for your reference.

But the ALERTS metric only shows the kube-state-metrics job.

This is the ConfigMap for the Grafana Agent that we are using.

Thanks,
Satya

kind: ConfigMap
metadata:
  name: grafana
  namespace: grafana
apiVersion: v1
data:
  agent.yaml: |
    metrics:
      wal_directory: /var/lib/agent/wal
      global:
        scrape_interval: 1m
        external_labels:
          cluster: cloud
      configs:
      - name: integrations
        remote_write:
        - url: https://*****/api/prom/push
          basic_auth:
            username: ****
            password: ****
          write_relabel_configs:
          - source_labels: [__name__]
            regex: ALERTS|
            action: keep

        scrape_configs:
        - job_name: integrations/argocd
          metrics_path: /metrics
          scheme: http
          static_configs:
          - targets:
            - metrics.svc.cluster.local:8082
            - server-metrics.svc.cluster.local:8083
            - repo-server.svc.cluster.local:8084
          metric_relabel_configs:
          - source_labels: [__name__]
            regex: grpc_server_handled_total
            action: drop

        - job_name: integrations/istio
          metrics_path: /metrics
          scheme: http
          static_configs:
          - targets:
            - istiod.svc.cluster.local:15014

        - job_name: integrations/job1
          scrape_interval: 1m
          metrics_path: /
          static_configs:
          - targets:
            - app1.svc.cluster.local:8080

        - job_name: integrations/job2
          scrape_interval: 1m
          metrics_path: /
          static_configs:
          - targets:
            - app2.svc.cluster.local:8080

        - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          job_name: integrations/kubernetes/cadvisor
          kubernetes_sd_configs:
              - role: node
          metric_relabel_configs:
              - source_labels: [__name__]
                regex: <metrics>
                action: drop
          relabel_configs:
              - replacement: kubernetes.default.svc.cluster.local:443
                target_label: __address__
              - regex: (.+)
                replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
                source_labels:
                  - __meta_kubernetes_node_name
                target_label: __metrics_path__
          scheme: https
          tls_config:
              ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
              insecure_skip_verify: false
              server_name: kubernetes
        - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          job_name: integrations/kubernetes/kubelet
          kubernetes_sd_configs:
              - role: node
          metric_relabel_configs:
              - source_labels: [__name__]
                regex: <metrics>
                action: drop
          relabel_configs:
              - replacement: kubernetes.default.svc.cluster.local:443
                target_label: __address__
              - regex: (.+)
                replacement: /api/v1/nodes/${1}/proxy/metrics
                source_labels:
                  - __meta_kubernetes_node_name
                target_label: __metrics_path__
          scheme: https
          tls_config:
              ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
              insecure_skip_verify: false
              server_name: kubernetes
        - job_name: integrations/kubernetes/kube-state-metrics
          kubernetes_sd_configs:
              - role: service
          metric_relabel_configs:
              - source_labels: [__name__]
                regex: <metrics>
                action: drop
          relabel_configs:
              - action: keep
                regex: kube-state-metrics
                source_labels:
                  - __meta_kubernetes_pod_label_app_kubernetes_io_name

    integrations:
      eventhandler:
        cache_path: /var/lib/agent/eventhandler.cache
        logs_instance: integrations

Hi @Melody,

I have identified the issue.

The ALERTS metric is only produced for “Mimir or Loki” alert rules; “Grafana Managed” alert rules are not supported.
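In other words, ALERTS is written by the Mimir (or Loki) ruler while it evaluates rules against the data already stored in Grafana Cloud, whereas a Grafana Managed alert is evaluated by Grafana itself and its state is not written back as a metric. As a rough sketch (the namespace, group, and alert names are illustrative, and the exact file format and flags are in the mimirtool docs), a rule like the one below, loaded with mimirtool or created as a “Mimir or Loki” alert in the alerting UI, does show up in ALERTS:

namespace: integrations-alerts            # illustrative namespace
groups:
- name: istio
  rules:
  - alert: IstiodDown                     # illustrative alert name
    expr: up{job="integrations/istio"} == 0
    for: 5m
    labels:
      severity: critical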

Is it possible to extend Grafana Managed alerts so that they also produce the ALERTS metric?

Thanks,
Satya