Hi! I have a setup with Promtail installed as a DaemonSet in Kubernetes, scraping the stdout/stderr of pods through /var/log/pods/...
The scrape config looks like this:
- job_name: 'kubernetes-pods'
  pipeline_stages:
    - docker:
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels:
        - __meta_kubernetes_pod_controller_name
      target_label: __service__
    - source_labels:
        - __meta_kubernetes_pod_node_name
      target_label: __host__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - action: replace
      replacement: $1
      source_labels:
        - name
      target_label: job
    - action: replace
      source_labels:
        - __meta_kubernetes_namespace
      target_label: namespace
    - action: replace
      source_labels:
        - __meta_kubernetes_pod_name
      target_label: pod
    - action: replace
      source_labels:
        - __meta_kubernetes_pod_container_name
      target_label: container
    - replacement: /var/log/pods/*$1/*.log
      separator: /
      source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
      target_label: __path__
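For clarity, the last relabel rule builds the glob from the pod UID and container name: with separator: / and the default regex (.*), the two source labels are joined and substituted for $1, so for a hypothetical pod it resolves to something like:

  # __meta_kubernetes_pod_uid            = 0b4f9c1e-aaaa-bbbb-cccc-000000000000   (made-up UID)
  # __meta_kubernetes_pod_container_name = app                                     (made-up name)
  __path__: /var/log/pods/*0b4f9c1e-aaaa-bbbb-cccc-000000000000/app/*.log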
Now I also need to scrape some other log files that the applications write to a specific path on the host, which is also mounted as a volume in the Promtail pod (a simplified excerpt of that mount follows the config below). To get the same set of labels, I duplicated the config under a different job, like so:
- job_name: 'file-logs'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels:
        - __meta_kubernetes_pod_controller_name
      target_label: __service__
    - source_labels:
        - __meta_kubernetes_pod_node_name
      target_label: __host__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - action: replace
      replacement: $1
      source_labels:
        - name
      target_label: job
    - action: replace
      source_labels:
        - __meta_kubernetes_namespace
      target_label: namespace
    - action: replace
      source_labels:
        - __meta_kubernetes_pod_name
      target_label: pod
    - replacement: /var/log/pod-files/$1/*.log
      source_labels:
        - __meta_kubernetes_pod_uid
      target_label: __path__
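For completeness, the host directory the applications write to is mounted into the Promtail DaemonSet roughly like this (simplified excerpt; the volume name is illustrative, not the real one):

  containers:
    - name: promtail
      volumeMounts:
        - name: pod-files
          mountPath: /var/log/pod-files
          readOnly: true
  volumes:
    - name: pod-files
      hostPath:
        path: /var/log/pod-files   # host directory the apps write to (illustrative path)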
The new logs don’t appear, so I checked Promtail’s /service-discovery endpoint: for every target of the second job it says “Dropped: ignoring target, already exists”. It says the same for about half of the targets of the first job, which has more targets than the second.
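For reference, the dropped-target count can also be checked from the command line by port-forwarding Promtail’s HTTP port (9080 is the common example value; use whatever http_listen_port your config sets) and grepping the page for the drop reason:

  kubectl port-forward pod/<promtail-pod> 9080:9080 &
  curl -s http://localhost:9080/service-discovery | grep -o 'ignoring target, already exists' | wc -l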
I tried adding a label to differentiate the two jobs, like so:
- action: replace
  source_labels: [__meta_kubernetes_pod_name]
  target_label: log_type
  replacement: <unique value per job>
The new label does get added to the targets that are activated, but it still doesn’t convince Promtail that the targets in the two jobs are distinct.
Questions:
- am I doing something obviously wrong?
- why is the total number of targets different for the two jobs?
- why are there duplicate targets within the first job?
- how do I convince Promtail to scrape the targets in the second job?