Kubernetes cluster with existing Prometheus setup: remote_write vs Grafana Agent

Hello,
I am wondering what the recommended way is to install the Kubernetes integration for a cluster that already has Prometheus set up (via the Prometheus Operator). I know I can configure it to remote_write to Grafana Cloud, but how would cAdvisor, kube-state-metrics, and kubelet-related metrics make it into Grafana Cloud? Currently we are not collecting any of these with Prometheus, since we use a third-party tool to monitor the cluster.

Would I just update my existing Prometheus setup to include the scrape job configs as described in the Grafana Agent ConfigMap? Is that the recommended approach, or is there a different way to do this?

Thanks

Hey there @fmm, thanks for your question!

The Kubernetes integration is a prebuilt set of dashboards, alerting rules, and recording rules that we provision for you when you install the integration. For these to work correctly, you need to ship cAdvisor, kubelet, and kube-state-metrics metrics to Grafana Cloud (as you’ve identified).

Yes, including the scrape job configs is the way to go. When you install the integration, you're given a ConfigMap for the Grafana Agent, and it contains the Agent configuration (including the scrape jobs for cAdvisor, kubelet, and kube-state-metrics). Since Grafana Agent config and Prometheus config are almost identical, you can add these scrape jobs to your existing Prometheus configuration, which will have Prometheus scrape the relevant endpoints. From there, configure remote_write to ship the metrics to Grafana Cloud. As long as the job and cluster labels are set correctly, the Kubernetes integration's dashboards and rules should then work properly. Also be sure to deploy kube-state-metrics into your cluster; the steps for that are in the integration's install instructions.
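For illustration only, here's a rough sketch of what that could look like in a plain Prometheus config. The remote_write URL, the credentials, and the cluster name are placeholders you'd swap for your own stack details, and the cAdvisor job shown is just one example; copy the actual job names and scrape jobs from the ConfigMap the integration generates for you, since those are what the dashboards expect.

```yaml
# Sketch of the relevant pieces of prometheus.yml (all values are placeholders).
global:
  external_labels:
    cluster: my-cluster        # the integration's dashboards filter on this label

remote_write:
  - url: https://<your-grafana-cloud-prometheus-endpoint>/api/prom/push
    basic_auth:
      username: <metrics-instance-id>
      password: <grafana-cloud-api-key>

scrape_configs:
  # Example job copied from the Agent ConfigMap: cAdvisor via the API server proxy.
  - job_name: integrations/kubernetes/cadvisor
    scheme: https
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      # Point every node target at the API server proxy endpoint for its kubelet's cAdvisor.
      - replacement: kubernetes.default.svc.cluster.local:443
        target_label: __address__
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
        target_label: __metrics_path__
```

Since you're running the Prometheus Operator, you'd typically wire these in through the Prometheus custom resource rather than editing prometheus.yml directly, i.e. the remote_write block via `spec.remoteWrite` and the extra scrape jobs via an `additionalScrapeConfigs` Secret (or equivalent ServiceMonitors).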

The easier solution is to deploy the Grafana Agent as per the integration install instructions — everything is preconfigured and should work out of the box!
